Sample records for image display system

  1. Features and limitations of mobile tablet devices for viewing radiological images.

    PubMed

    Grunert, J H

    2015-03-01

    Mobile radiological image display systems are becoming increasingly common, necessitating a comparison of the features of these systems, specifically the operating system employed, connection to stationary PACS, data security and range of image display and image analysis functions. In the fall of 2013, a total of 17 PACS suppliers were surveyed regarding the technical features of 18 mobile radiological image display systems using a standardized questionnaire. The study also examined to what extent the technical specifications of the mobile image display systems satisfy the provisions of the German Medical Devices Act as well as the provisions of the German X-ray ordinance (RöV). There are clear differences in terms of how the mobile systems connect to the stationary PACS. Web-based solutions allow the mobile image display systems to function independently of their operating systems. The examined systems differed very little in terms of image display and image analysis functions. Mobile image display systems complement stationary PACS and can be used to view images. The impacts of the new quality assurance guidelines (QS-RL) as well as the upcoming new standard DIN 6868-157 on the acceptance testing of mobile image display units for the purpose of image evaluation are discussed. © Georg Thieme Verlag KG Stuttgart · New York.

  2. Monocular display unit for 3D display with correct depth perception

    NASA Astrophysics Data System (ADS)

    Sakamoto, Kunio; Hosomi, Takashi

    2009-11-01

    Virtual-reality systems have become popular, and the technology has been applied to medical engineering, educational engineering, CAD/CAM systems and so on. 3D imaging displays fall into two types by presentation method: systems that require special glasses and monitor systems that require none. Liquid crystal displays (LCDs) have recently come into common use, and such a display unit can provide a display area the same size as the image screen on the panel. A display system requiring no special glasses is useful as a 3D TV monitor, but it has the drawback that the size of the monitor restricts the visual field for displaying images: a conventional display can show only one screen, and its area cannot be enlarged, for example doubled. To enlarge the display area, the authors have developed an enlarging method that uses a mirror. This extension method lets observers see a virtual image plane and doubles the screen area. In the developed display unit, an image-separating technique using polarized glasses, a parallax barrier or a lenticular lens screen is used for 3D imaging. The mirror generates the virtual image plane and doubles the screen area, while the 3D display system using special glasses can also display virtual images over a wide area. In this paper, we present a monocular 3D vision system with an accommodation mechanism, which is a useful function for perceiving depth.

  3. XVD Image Display Program

    NASA Technical Reports Server (NTRS)

    Deen, Robert G.; Andres, Paul M.; Mortensen, Helen B.; Parizher, Vadim; McAuley, Myche; Bartholomew, Paul

    2009-01-01

    The XVD [X-Windows VICAR (video image communication and retrieval) Display] computer program offers an interactive display of VICAR and PDS (planetary data systems) images. It is designed to efficiently display multiple-GB images and runs on Solaris, Linux, or Mac OS X systems using X-Windows.

  4. Digital Image Display Control System, DIDCS [for astronomical analysis]

    NASA Technical Reports Server (NTRS)

    Fischel, D.; Klinglesmith, D. A., III

    1979-01-01

    DIDCS is an interactive image display and manipulation system used for a variety of astronomical image reduction and analysis operations. The hardware system consists of a PDP-11/40 mainframe with 32K of 16-bit core memory, 96K of 16-bit MOS memory, two 9-track 800 BPI tape drives, eight 2.5-million-byte RK05-type disk packs, three user terminals, and a COMTAL 8000-S display system with sufficient memory to store and display three 512 x 512 x 8-bit images along with an overlay plane and function table for each image, a pseudocolor table and the capability for displaying true color. The software system is based on the language FORTH, which permits an open-ended dictionary of user-level words for image analysis and display. A description of the hardware and software systems is presented along with examples of the types of astronomical research being performed, together with a short discussion of the commonality and exchange of this type of image analysis system.

  5. Mobile viewer system for virtual 3D space using infrared LED point markers and camera

    NASA Astrophysics Data System (ADS)

    Sakamoto, Kunio; Taneji, Shoto

    2006-09-01

    The authors have developed a 3D workspace system using collaborative imaging devices. A stereoscopic display enables this system to project 3D information. In this paper, we describe the position detecting system for a see-through 3D viewer. A 3D display system is a useful technology for virtual reality, mixed reality and augmented reality. We have researched spatial imaging and interaction systems and have previously proposed 3D displays using a slit as a parallax barrier, a lenticular screen and holographic optical elements (HOEs) for displaying active images 1)2)3)4). The purpose of this paper is to propose an interactive system using these 3D imaging technologies. The observer can view virtual images in the real world when watching the screen of a see-through 3D viewer. The goal of our research is a display system in which, when users see the real world through the mobile viewer, they are presented with virtual 3D images floating in the air and can touch and interact with these floating images, for example so that children can model virtual clay. The key technologies of this system are the position recognition system and the spatial imaging display. The 3D images are presented by the improved parallax-barrier 3D display. Here the authors discuss the measuring method for the mobile viewer using infrared LED point markers and a camera in the 3D workspace (augmented reality world). The authors present the geometric analysis of the proposed measuring method, which is the simplest approach in that it uses a single camera rather than a stereo camera, and the results obtained with our viewer system.
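
    As a rough illustration of this kind of single-camera marker measurement, the hedged Python sketch below recovers a viewer pose from known infrared LED marker positions and their detected image centroids with a perspective-n-point solver. The marker layout, camera intrinsics, example pixel coordinates and the use of OpenCV are all assumptions for illustration, not the authors' method.

```python
# Hypothetical single-camera pose estimate from IR LED point markers.
import numpy as np
import cv2

# Known 3D marker coordinates in the workspace frame (millimeters, assumed layout).
marker_points = np.array([[0, 0, 0], [100, 0, 0], [100, 100, 0], [0, 100, 0]], dtype=np.float64)
# Detected 2D centroids of the IR LEDs in the camera image (pixels, example values).
image_points = np.array([[320, 240], [420, 238], [422, 338], [318, 340]], dtype=np.float64)
# Assumed pinhole camera intrinsics (focal length in pixels, principal point).
camera_matrix = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])

ok, rvec, tvec = cv2.solvePnP(marker_points, image_points, camera_matrix, None)
if ok:
    rotation, _ = cv2.Rodrigues(rvec)          # camera (viewer) orientation
    viewer_position = -rotation.T @ tvec       # camera center in the marker/workspace frame
    print(viewer_position.ravel())
```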

  6. Display And Analysis Of Tomographic Volumetric Images Utilizing A Vari-Focal Mirror

    NASA Astrophysics Data System (ADS)

    Harris, L. D.; Camp, J. J.

    1984-10-01

    A system for the three-dimensional (3-D) display and analysis of stacks of tomographic images is described. The device utilizes the principle of a variable focal (vari-focal) length optical element in the form of an aluminized membrane stretched over a loudspeaker to generate a virtual 3-D image which is a visible representation of a 3-D array of image elements (voxels). The system displays 500,000 voxels per mirror cycle in a 3-D raster which appears continuous and demonstrates no distracting artifacts. The display is bright enough so that portions of the image can be dimmed without compromising the number of shades of gray. For x-ray CT, a displayed volume image looks like a 3-D radiograph which appears to be in the space directly behind the mirror. The viewer sees new views by moving his/her head from side to side or up and down. The system facilitates a variety of operator interactive functions which allow the user to point at objects within the image, control the orientation and location of brightened oblique planes within the volume, numerically dissect away selected image regions, and control intensity window levels. Photographs of example volume images displayed on the system illustrate, to the degree possible in a flat picture, the nature of displayed images and the capabilities of the system. Preliminary application of the display device to the analysis of volume reconstructions obtained from the Dynamic Spatial Reconstructor indicates significant utility of the system in selecting oblique sections and gaining an appreciation of the shape and dimensions of complex organ systems.

  7. Tile-Image Merging and Delivering for Virtual Camera Services on Tiled-Display for Real-Time Remote Collaboration

    NASA Astrophysics Data System (ADS)

    Choe, Giseok; Nang, Jongho

    The tiled-display system has been used as a Computer Supported Cooperative Work (CSCW) environment, in which multiple local (and/or remote) participants cooperate using shared applications whose outputs are displayed on a large-scale, high-resolution tiled display controlled by a cluster of PCs, one PC per display. In order to make the collaboration effective, each remote participant should be aware of all CSCW activities on the tiled-display system in real time. This paper presents a mechanism for capturing all activities on the tiled-display system and delivering them to remote participants in real time. In the proposed mechanism, the screen images of all PCs are periodically captured and delivered to the Merging Server, which maintains separate buffers to store the captured images from the PCs. The mechanism selects one tile image from each buffer, merges the images to make a screen shot of the whole tiled display, clips a Region of Interest (ROI), compresses it and streams it to remote participants in real time. A technical challenge in the proposed mechanism is how to select a set of tile images, one from each buffer, for merging so that the tile images displayed at the same time on the tiled display are properly merged together. This paper presents three selection algorithms: a sequential selection algorithm, a capturing-time-based algorithm, and a capturing-time and visual-consistency-based algorithm. It also proposes a mechanism for providing several virtual cameras on the tiled-display system to remote participants by concurrently clipping several different ROIs from the same merged tiled-display images and delivering them after compression with the video encoders requested by the remote participants. By interactively changing and resizing his/her own ROI, a remote participant can check the activities on the tiled display effectively. Experiments on a 3 × 2 tiled-display system show that the proposed merging algorithm can build a tiled-display image stream synchronously, and that the ROI-based clipping and delivering mechanism can provide individual views on the tiled-display system to multiple remote participants in real time.
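
    To make the capture-time-based selection concrete, here is a hedged Python sketch, assuming each tile PC timestamps its captured frames and that the display is a 3 x 2 grid; the paper's actual buffering, encoding and consistency checks are not reproduced.

```python
# Hypothetical capture-time-based tile selection, merge and ROI clipping.
import numpy as np

def select_by_capture_time(buffers):
    """buffers: one list per tile of (timestamp, frame) pairs, oldest first.
    Picks, for each tile, the frame closest in time to a common reference
    (here: the latest 'oldest' timestamp, i.e. the slowest tile sets the pace)."""
    reference = max(buf[0][0] for buf in buffers)
    return [min(buf, key=lambda tf: abs(tf[0] - reference))[1] for buf in buffers]

def merge_tiles(frames, cols=3, rows=2):
    """Stitch per-tile frames (H x W x 3 uint8 arrays) into one tiled-display image."""
    grid = [np.hstack(frames[r * cols:(r + 1) * cols]) for r in range(rows)]
    return np.vstack(grid)

def clip_roi(merged, x, y, w, h):
    """Clip a remote participant's region of interest from the merged image."""
    return merged[y:y + h, x:x + w]
```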

  8. Future Directions for Astronomical Image Display

    NASA Technical Reports Server (NTRS)

    Mandel, Eric

    2000-01-01

    In the "Future Directions for Astronomical Image Displav" project, the Smithsonian Astrophysical Observatory (SAO) and the National Optical Astronomy Observatories (NOAO) evolved our existing image display program into fully extensible. cross-platform image display software. We also devised messaging software to support integration of image display into astronomical analysis systems. Finally, we migrated our software from reliance on Unix and the X Window System to a platform-independent architecture that utilizes the cross-platform Tcl/Tk technology.

  9. The research on a novel type of the solar-blind UV head-mounted displays

    NASA Astrophysics Data System (ADS)

    Zhao, Shun-long

    2011-08-01

    Ultraviolet detection technology is playing an increasingly important role in civil applications, especially in corona discharge detection. The UV imaging detector is now one of the most important instruments for detecting flaws in power equipment. Modern head-mounted displays (HMDs) have found applications in military, industrial production, medical treatment, entertainment, 3D visualization, education and training. We applied a head-mounted display to UV image detection, and a novel type of head-mounted display is presented: the solar-blind UV head-mounted display. Its structure is given. With the solar-blind UV head-mounted display, a real-time, isometric and visible image of the corona discharge is correctly displayed upon the background scene where it exists. The user sees the visible image of the corona discharge on the real scene rather than on a small screen, and can therefore easily find the power equipment flaws and repair them. Compared with the traditional UV imaging detector, introducing the HMD simplifies the structure of the whole system. The original visible-spectrum optical system is replaced by the eye in the solar-blind UV head-mounted display, and optical image fusion is used rather than the digital image fusion system required in a traditional UV imaging detector. This means the visible-spectrum optical system and digital image fusion system are not necessary, which makes the whole system cheaper than the traditional UV imaging detector. Another advantage of the solar-blind UV head-mounted display is that the user's hands remain free, so the user can work on the equipment while observing the corona discharge. The solar-blind UV head-mounted display can therefore reveal corona discharge to the user more effectively, and it will play an important role in corona detection in the future.

  10. Image change detection systems, methods, and articles of manufacture

    DOEpatents

    Jones, James L.; Lassahn, Gordon D.; Lancaster, Gregory D.

    2010-01-05

    Aspects of the invention relate to image change detection systems, methods, and articles of manufacture. According to one aspect, a method of identifying differences between a plurality of images is described. The method includes loading a source image and a target image into memory of a computer, constructing source and target edge images from the source and target images to enable processing of multiband images, displaying the source and target images on a display device of the computer, aligning the source and target edge images, and alternately displaying the source image and the target image on the display device to enable identification of differences between the source image and the target image.
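
    A hedged Python sketch of the workflow the abstract outlines (reduce each image to an edge image, then estimate the alignment offset) follows; the patent's actual operators and parameters are not specified here, so the Sobel gradients and FFT phase correlation below are illustrative assumptions.

```python
# Hypothetical edge-image construction and alignment for change detection.
import numpy as np
from scipy import ndimage

def edge_image(img):
    """Gradient-magnitude 'edge image' so multiband inputs reduce to a single band."""
    gray = img.mean(axis=2) if img.ndim == 3 else img.astype(float)
    gx, gy = ndimage.sobel(gray, axis=1), ndimage.sobel(gray, axis=0)
    return np.hypot(gx, gy)

def align_offset(src_edges, tgt_edges):
    """Estimate the integer (row, col) shift between the edge images by phase correlation."""
    f = np.fft.fft2(src_edges) * np.conj(np.fft.fft2(tgt_edges))
    corr = np.fft.ifft2(f / (np.abs(f) + 1e-9)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap peaks in the upper half of each axis to negative shifts.
    return [p if p < s // 2 else p - s for p, s in zip(peak, corr.shape)]
```

    The final step described in the abstract, alternately showing the aligned source and target on the display so differences "flicker", would then simply toggle which of the two shifted images is drawn.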

  11. Real-time image reconstruction and display system for MRI using a high-speed personal computer.

    PubMed

    Haishi, T; Kose, K

    1998-09-01

    A real-time NMR image reconstruction and display system was developed using a high-speed personal computer and optimized for the 32-bit multitasking Microsoft Windows 95 operating system. The system was operated at various CPU clock frequencies by changing the motherboard clock frequency and the processor/bus frequency ratio. When the Pentium CPU was used at a 200 MHz clock frequency, the reconstruction time for one 128 x 128 pixel image was 48 ms and that for the image display on the enlarged 256 x 256 pixel window was about 8 ms. NMR imaging experiments were performed with three fast imaging sequences (FLASH, multishot EPI, and one-shot EPI) to demonstrate the capability of the real-time system. It was concluded that in most cases a high-speed PC would be the best choice for the image reconstruction and display system for real-time MRI. Copyright 1998 Academic Press.
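
    For orientation, the core reconstruction step for Cartesian k-space data of this size is a 2D inverse FFT followed by magnitude scaling for display. The Python sketch below uses synthetic k-space data and is only a minimal illustration; it does not reproduce the original system's sequence handling or its reported timings.

```python
# Minimal 128 x 128 Fourier reconstruction sketch with synthetic k-space data.
import time
import numpy as np

def reconstruct(kspace):
    """kspace: 128 x 128 complex array. Returns an 8-bit magnitude image for display."""
    img = np.abs(np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(kspace))))
    img = 255 * (img - img.min()) / (np.ptp(img) + 1e-12)   # scale to display range
    return img.astype(np.uint8)

kspace = np.random.randn(128, 128) + 1j * np.random.randn(128, 128)
t0 = time.perf_counter()
image = reconstruct(kspace)
print(f"reconstruction took {(time.perf_counter() - t0) * 1e3:.1f} ms")
```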

  12. Three-dimensional imaging and remote sensing imaging; Proceedings of the Meeting, Los Angeles, CA, Jan. 14, 15, 1988

    NASA Astrophysics Data System (ADS)

    Robbins, Woodrow E.

    1988-01-01

    The present conference discusses topics in novel technologies and techniques of three-dimensional imaging, human factors-related issues in three-dimensional display system design, three-dimensional imaging applications, and image processing for remote sensing. Attention is given to a 19-inch parallactiscope, a chromostereoscopic CRT-based display, the 'SpaceGraph' true three-dimensional peripheral, advantages of three-dimensional displays, holographic stereograms generated with a liquid crystal spatial light modulator, algorithms and display techniques for four-dimensional Cartesian graphics, an image processing system for automatic retina diagnosis, the automatic frequency control of a pulsed CO2 laser, and a three-dimensional display of magnetic resonance imaging of the spine.

  13. Three-dimensional hologram display system

    NASA Technical Reports Server (NTRS)

    Mintz, Frederick (Inventor); Chao, Tien-Hsin (Inventor); Bryant, Nevin (Inventor); Tsou, Peter (Inventor)

    2009-01-01

    The present invention relates to a three-dimensional (3D) hologram display system. The 3D hologram display system includes a projector device for projecting an image upon a display medium to form a 3D hologram. The 3D hologram is formed such that a viewer can view the holographic image from multiple angles up to 360 degrees. Multiple display media are described, namely a spinning diffusive screen, a circular diffuser screen, and an aerogel. The spinning diffusive screen utilizes spatial light modulators to control the image such that the 3D image is displayed on the rotating screen in a time-multiplexing manner. The circular diffuser screen includes multiple, simultaneously-operated projectors to project the image onto the circular diffuser screen from a plurality of locations, thereby forming the 3D image. The aerogel can use the projection device described as applicable to either the spinning diffusive screen or the circular diffuser screen.

  14. Display system for imaging scientific telemetric information

    NASA Technical Reports Server (NTRS)

    Zabiyakin, G. I.; Rykovanov, S. N.

    1979-01-01

    A system for imaging scientific telemetric information, based on the M-6000 minicomputer and the SIGD graphic display, is described. Two-dimensional graphic display of telemetric information and interaction with the computer in the analysis and processing of telemetric parameters displayed on the screen are provided. The running-parameter information output method is presented. User capabilities in the analysis and processing of telemetric information imaged on the display screen, as well as the user language, are discussed and illustrated.

  15. Quantitative evaluation of three advanced laparoscopic viewing technologies: a stereo endoscope, an image projection display, and a TFT display.

    PubMed

    Wentink, M; Jakimowicz, J J; Vos, L M; Meijer, D W; Wieringa, P A

    2002-08-01

    Compared to open surgery, minimally invasive surgery (MIS) relies heavily on advanced technology, such as endoscopic viewing systems and innovative instruments. The aim of the study was to objectively compare three technologically advanced laparoscopic viewing systems with the standard viewing system currently used in most Dutch hospitals. We evaluated the following advanced laparoscopic viewing systems: a Thin Film Transistor (TFT) display, a stereo endoscope, and an image projection display. The standard viewing system consisted of a monocular endoscope and a high-resolution monitor. Task completion time served as the measure of performance. Eight surgeons with laparoscopic experience participated in the experiment. The average task time was significantly greater (p < 0.05) with the stereo viewing system than with the standard viewing system. The average task times with the TFT display and the image projection display did not differ significantly from the standard viewing system. Although the stereo viewing system promises improved depth perception and the TFT and image projection displays are supposed to improve hand-eye coordination, none of these systems provided better task performance than the standard viewing system in this pelvi-trainer experiment.

  16. Interactive display system having a digital micromirror imaging device

    DOEpatents

    Veligdan, James T.; DeSanto, Leonard; Kaull, Lisa; Brewster, Calvin

    2006-04-11

    A display system includes a waveguide optical panel having an inlet face and an opposite outlet face. A projector cooperates with a digital imaging device, e.g. a digital micromirror imaging device, for projecting an image through the panel for display on the outlet face. The imaging device includes an array of mirrors tiltable between opposite display and divert positions. The display positions reflect an image light beam from the projector through the panel for display on the outlet face. The divert positions divert the image light beam away from the panel, and are additionally used for reflecting a probe light beam through the panel toward the outlet face. Covering a spot on the panel, e.g. with a finger, reflects the probe light beam back through the panel toward the inlet face for detection thereat and providing interactive capability.

  17. Development of Land Analysis System display modules

    NASA Technical Reports Server (NTRS)

    Gordon, Douglas; Hollaren, Douglas; Huewe, Laurie

    1986-01-01

    The Land Analysis System (LAS) display modules were developed to allow a user to interactively display, manipulate, and store image and image related data. To help accomplish this task, these modules utilize the Transportable Applications Executive and the Display Management System software to interact with the user and the display device. The basic characteristics of a display are outlined and some of the major modifications and additions made to the display management software are discussed. Finally, all available LAS display modules are listed along with a short description of each.

  18. Preliminary display comparison for dental diagnostic applications

    NASA Astrophysics Data System (ADS)

    Odlum, Nicholas; Spalla, Guillaume; van Assche, Nele; Vandenberghe, Bart; Jacobs, Reinhilde; Quirynen, Marc; Marchessoux, Cédric

    2012-02-01

    The aim of this study is to predict the clinical performance and image quality of a display system for viewing dental images. At present, the use of dedicated medical displays is not uniform among dentists - many still view images on ordinary consumer displays. This work investigated whether the use of a medical display improved a clinician's perception of dental images compared to a consumer display. Display systems were simulated using the MEdical Virtual Imaging Chain (MEVIC). Images derived from two carefully performed studies on periodontal bone lesion detection and endodontic file length determination were used. Three displays were selected: a medical-grade one and two consumer displays (Barco MDRC-2120, Dell 1907FP and Dell 2007FPb). Typical characteristics of the displays were evaluated by measurements and simulations, such as the Modulation Transfer Function (MTF), the Noise Power Spectrum (NPS), backlight stability and calibration. For the MTF, the display with the largest pixel pitch has, as expected, the worst MTF; the medical-grade display has a slightly better MTF, and the displays have similar NPS. The study shows that the emitted intensity of the consumer displays is less stable than that of the medical-grade display. Finally, the study of the display calibration methodology shows that the signal in the dental images will always be more perceivable on a DICOM GSDF-calibrated display than on a gamma 2.2 display.
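
    As a hedged sketch of one of the display characteristics mentioned above, the following Python code estimates a noise power spectrum from a uniform ("flat-field") capture by averaging periodograms over non-overlapping regions. It assumes a square ROI and a given pixel pitch; MEVIC's actual measurement and simulation pipeline is not reproduced here.

```python
# Illustrative NPS estimate from a flat-field image (assumed inputs).
import numpy as np

def noise_power_spectrum(flat_field, pixel_pitch_mm, roi=128):
    """Average the 2D periodogram over non-overlapping roi x roi patches.
    Returns the NPS in units of (signal^2 * mm^2)."""
    rows, cols = flat_field.shape
    spectra = []
    for r in range(0, rows - roi + 1, roi):
        for c in range(0, cols - roi + 1, roi):
            patch = flat_field[r:r + roi, c:c + roi].astype(float)
            patch -= patch.mean()                                  # remove the DC term
            spectra.append(np.abs(np.fft.fftshift(np.fft.fft2(patch))) ** 2)
    return np.mean(spectra, axis=0) * pixel_pitch_mm ** 2 / (roi * roi)
```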

  19. Integrated clinical workstations for image and text data capture, display, and teleconsultation.

    PubMed

    Dayhoff, R; Kuzmak, P M; Kirin, G

    1994-01-01

    The Department of Veterans Affairs (VA) DHCP Imaging System digitally records clinically significant diagnostic images selected by medical specialists in a variety of hospital departments, including radiology, cardiology, gastroenterology, pathology, dermatology, hematology, surgery, podiatry, dental clinic, and emergency room. These images, which include true color and gray scale images, scanned documents, and electrocardiogram waveforms, are stored on network file servers and displayed on workstations located throughout a medical center. All images are managed by the VA's hospital information system (HIS), allowing integrated displays of text and image data from all medical specialties. Two VA medical centers currently have DHCP Imaging Systems installed, and other installations are underway.

  20. High-resolution, continuous field-of-view (FOV), non-rotating imaging system

    NASA Technical Reports Server (NTRS)

    Huntsberger, Terrance L. (Inventor); Stirbl, Robert C. (Inventor); Aghazarian, Hrand (Inventor); Padgett, Curtis W. (Inventor)

    2010-01-01

    A high-resolution CMOS imaging system especially suitable for use in a periscope head. The imaging system includes a sensor head for scene acquisition, and a control apparatus inclusive of distributed processors and software for device control, data handling, and display. The sensor head encloses a combination of wide field-of-view CMOS imagers and narrow field-of-view CMOS imagers. Each bank of imagers is controlled by a dedicated processing module in order to handle information flow and image analysis of the outputs of the camera system. The imaging system also includes an automated or manually controlled display system and software providing an interactive graphical user interface (GUI) that displays a full 360-degree field of view and allows the user or automated ATR system to select regions for higher-resolution inspection.

  1. A system for the real-time display of radar and video images of targets

    NASA Technical Reports Server (NTRS)

    Allen, W. W.; Burnside, W. D.

    1990-01-01

    Described here is a software and hardware system for the real-time display of radar and video images for use in a measurement range. The main purpose is to give the reader a clear idea of the software and hardware design and its functions. This system is designed around a Tektronix XD88-30 graphics workstation, used to display radar images superimposed on video images of the actual target. The system's purpose is to provide a platform for the analysis and documentation of radar images and their associated targets in a menu-driven, user-oriented environment.

  2. High-Resolution Large-Field-of-View Three-Dimensional Hologram Display System and Method Thereof

    NASA Technical Reports Server (NTRS)

    Chao, Tien-Hsin (Inventor); Mintz, Frederick W. (Inventor); Tsou, Peter (Inventor); Bryant, Nevin A. (Inventor)

    2001-01-01

    A real-time, dynamic, free-space virtual reality 3-D image display system is enabled by using a unique form of Aerogel as the primary display medium. A preferred embodiment of this system comprises a 3-D mosaic topographic map which is displayed by fusing four projected hologram images. In this embodiment, four holographic images are projected from four separate holograms. Each holographic image subtends a quadrant of the 4(pi) solid angle. By fusing these four holographic images, a static 3-D image such as a featured terrain map would be visible for 360 deg in the horizontal plane and 180 deg in the vertical plane. An input, either acquired by a 3-D image sensor or generated by computer animation, is first converted into a 2-D computer-generated hologram (CGH). This CGH is then downloaded into a large liquid crystal (LC) panel. A laser projector illuminates the CGH-filled LC panel and generates and displays a real 3-D image in the Aerogel matrix.

  3. Imaging Systems: What, When, How.

    ERIC Educational Resources Information Center

    Lunin, Lois F.; And Others

    1992-01-01

    The three articles in this special section on document image files discuss intelligent character recognition, including comparison with optical character recognition; selection of displays for document image processing, focusing on paperlike displays; and imaging hardware, software, and vendors, including guidelines for system selection. (MES)

  4. A head-mounted compressive three-dimensional display system with polarization-dependent focus switching

    NASA Astrophysics Data System (ADS)

    Lee, Chang-Kun; Moon, Seokil; Lee, Byounghyo; Jeong, Youngmo; Lee, Byoungho

    2016-10-01

    A head-mounted compressive three-dimensional (3D) display system is proposed by combining a polarization beam splitter (PBS), a fast-switching polarization rotator and a microdisplay with high pixel density. According to the polarization state of the image controlled by the polarization rotator, the optical path of the image in the PBS can be divided into transmitted and reflected components. Since the optical paths of the images are spatially separated, it is possible to independently focus both images at different depth positions. The transmitted p-polarized and reflected s-polarized images can be focused by a convex lens and a mirror, respectively. When the focal lengths of the convex lens and mirror are properly determined, the two image planes can be located at the intended positions, and the geometrical relationship is easily modulated by replacing the components. The fast switching of polarization realizes real-time operation of multi-focal image planes with a single display panel. Since the characteristics of a single panel are preserved, high image quality, reliability and uniformity are retained. For generating 3D images, layer images for compressive light field display between the two image planes are calculated. Since a display panel with high pixel density is adopted, high-quality 3D images are reconstructed. In addition, image degradation caused by diffraction between physically stacked display panels can be mitigated. A simple optical configuration of the proposed system is implemented and the feasibility of the proposed method is verified through experiments.

  5. Floating aerial 3D display based on the freeform-mirror and the improved integral imaging system

    NASA Astrophysics Data System (ADS)

    Yu, Xunbo; Sang, Xinzhu; Gao, Xin; Yang, Shenwu; Liu, Boyang; Chen, Duo; Yan, Binbin; Yu, Chongxiu

    2018-09-01

    A floating aerial three-dimensional (3D) display based on a freeform mirror and an improved integral imaging system is demonstrated. In traditional integral imaging (II), the distortion originating from lens aberration warps the elemental images and severely degrades the visual effect. To correct the distortion of the observed pixels and to improve the image quality, a directional diffuser screen (DDS) is introduced. However, the improved integral imaging system can hardly present realistic images with large off-screen depth, which limits the floating aerial visual experience. To display the 3D image in free space, an off-axis reflection system with a freeform mirror is designed. By combining the improved II and the designed freeform optical element, the floating aerial 3D image is presented.

  6. Aberration improvement of the floating 3D display system based on Tessar array and directional diffuser screen

    NASA Astrophysics Data System (ADS)

    Gao, Xin; Sang, Xinzhu; Yu, Xunbo; Zhang, Wanlu; Yan, Binbin; Yu, Chongxiu

    2018-06-01

    A floating 3D display system based on a Tessar array and a directional diffuser screen is proposed. The directional diffuser screen smooths the gaps of the lens array and makes the 3D image's brightness continuous. The optical structure and aberration characteristics of the floating three-dimensional (3D) display system are analyzed. Simulations and experiments show that the 3D image quality deteriorates as the image plane moves farther away and as the viewing angle increases. To suppress the aberrations, the Tessar array is proposed according to the aberration characteristics of the floating 3D display system. A 3840 × 2160 liquid crystal display panel (LCD) with a size of 23.6 inches, a directional diffuser screen and a Tessar array are used to display the final 3D images. The aberrations are reduced and the definition is improved compared with a display using a single-lens array. A display depth of more than 20 cm and a viewing angle of more than 45° can be achieved.

  7. Seamless tiled display system

    NASA Technical Reports Server (NTRS)

    Dubin, Matthew B. (Inventor); Larson, Brent D. (Inventor); Kolosowsky, Aleksandra (Inventor)

    2006-01-01

    A modular and scalable seamless tiled display apparatus includes multiple display devices, a screen, and multiple lens assemblies. Each display device is subdivided into multiple sections, and each section is configured to display a sectional image. One of the lens assemblies is optically coupled to each of the sections of each of the display devices to project the sectional image displayed on that section onto the screen. The multiple lens assemblies are configured to merge the projected sectional images to form a single tiled image. The projected sectional images may be merged on the screen by magnifying and shifting the images in an appropriate manner. The magnification and shifting of these images eliminates any visual effect on the tiled display that may result from dead-band regions defined between each pair of adjacent sections on each display device, and due to gaps between multiple display devices.

  8. Integrated clinical workstations for image and text data capture, display, and teleconsultation.

    PubMed Central

    Dayhoff, R.; Kuzmak, P. M.; Kirin, G.

    1994-01-01

    The Department of Veterans Affairs (VA) DHCP Imaging System digitally records clinically significant diagnostic images selected by medical specialists in a variety of hospital departments, including radiology, cardiology, gastroenterology, pathology, dermatology, hematology, surgery, podiatry, dental clinic, and emergency room. These images, which include true color and gray scale images, scanned documents, and electrocardiogram waveforms, are stored on network file servers and displayed on workstations located throughout a medical center. All images are managed by the VA's hospital information system (HIS), allowing integrated displays of text and image data from all medical specialties. Two VA medical centers currently have DHCP Imaging Systems installed, and other installations are underway. PMID:7949899

  9. Does the choice of display system influence perception and visibility of clinically relevant features in digital pathology images?

    NASA Astrophysics Data System (ADS)

    Kimpe, Tom; Rostang, Johan; Avanaki, Ali; Espig, Kathryn; Xthona, Albert; Cocuranu, Ioan; Parwani, Anil V.; Pantanowitz, Liron

    2014-03-01

    Digital pathology systems typically consist of a slide scanner, processing software, visualization software, and finally a workstation with a display for visualization of the digital slide images. This paper studies whether digital pathology images can look different when presented on different display systems, and whether these visual differences can result in different perceived contrast of clinically relevant features. By analyzing a set of four digital pathology images of different subspecialties on three different display systems, it was concluded that pathology images do look different when visualized on different display systems. These visual differences matter most when they are located in areas of the digital slide that contain clinically relevant features. Based on a calculation of dE2000 differences between background and clinically relevant features, it was clear that the perceived contrast of clinically relevant features is influenced by the choice of display system. Furthermore, the specific calibration target chosen for the display system has an important effect on the perceived contrast of clinically relevant features. Preliminary results suggest that calibrating to the DICOM GSDF performed slightly worse than sRGB, while a new experimental calibration target, CSDF, performed better than both DICOM GSDF and sRGB. This result is promising, as it suggests that further research could lead to a better-defined, optimized calibration target for digital pathology images with a positive effect on clinical performance.
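
    A hedged Python sketch of the kind of dE2000 contrast measure described above follows, using scikit-image's CIEDE2000 implementation. The ROI selection (a binary mask marking the clinically relevant feature) and the assumption that the input RGB image has already been rendered through a given display calibration are illustrative simplifications, not the paper's exact pipeline.

```python
# Illustrative CIEDE2000 contrast between a feature region and its background.
import numpy as np
from skimage.color import rgb2lab, deltaE_ciede2000

def feature_contrast(rgb_image, feature_mask):
    """rgb_image: H x W x 3 (uint8 or float RGB); feature_mask: H x W boolean.
    Returns the dE2000 difference between the mean feature color and the
    mean background color, in CIELAB space."""
    lab = rgb2lab(rgb_image)                   # convert display-rendered RGB to CIELAB
    feature = lab[feature_mask].mean(axis=0)
    background = lab[~feature_mask].mean(axis=0)
    return deltaE_ciede2000(feature, background)
```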

  10. Flatbed-type 3D display systems using integral imaging method

    NASA Astrophysics Data System (ADS)

    Hirayama, Yuzo; Nagatani, Hiroyuki; Saishu, Tatsuo; Fukushima, Rieko; Taira, Kazuki

    2006-10-01

    We have developed prototypes of flatbed-type autostereoscopic display systems using a one-dimensional integral imaging method. The integral imaging system reproduces light beams similar to those produced by a real object. Our display architecture is suitable for flatbed configurations because it has a large margin for viewing distance and angle and has continuous motion parallax. We have applied our technology to 15.4-inch displays and realized a horizontal resolution of 480 with 12 parallaxes through the adoption of a mosaic pixel arrangement on the display panel. This allows viewers to see high-quality autostereoscopic images. Viewing the display from an angle allows the viewer to experience 3-D images that stand out several centimeters from the surface of the display. Mixed reality of virtual 3-D objects and real objects is also realized on a flatbed display. In seeking reproduction of natural 3-D images on the flatbed display, we developed proprietary software. Fast playback of CG movie content and real-time interaction are realized with the aid of a graphics card. Ensuring that 3-D images are safe for viewers is very important, so we have measured the effects on visual function and evaluated the biological effects. For example, accommodation and convergence were measured at the same time, and various biological effects were measured before and after the task of watching 3-D images. We have found that our displays show better results than a conventional stereoscopic display. The new technology opens up new areas of application for 3-D displays, including arcade games, e-learning, simulations of buildings and landscapes, and even 3-D menus in restaurants.

  11. Large depth of focus dynamic micro integral imaging for optical see-through augmented reality display using a focus-tunable lens.

    PubMed

    Shen, Xin; Javidi, Bahram

    2018-03-01

    We have developed a three-dimensional (3D) dynamic integral-imaging (InIm)-system-based optical see-through augmented reality display with enhanced depth range of a 3D augmented image. A focus-tunable lens is adopted in the 3D display unit to relay the elemental images with various positions to the micro lens array. Based on resolution priority integral imaging, multiple lenslet image planes are generated to enhance the depth range of the 3D image. The depth range is further increased by utilizing both the real and virtual 3D imaging fields. The 3D reconstructed image and the real-world scene are overlaid using an optical see-through display for augmented reality. The proposed system can significantly enhance the depth range of a 3D reconstructed image with high image quality in the micro InIm unit. This approach provides enhanced functionality for augmented information and adjusts the vergence-accommodation conflict of a traditional augmented reality display.

  12. Composite video and graphics display for multiple camera viewing system in robotics and teleoperation

    NASA Technical Reports Server (NTRS)

    Diner, Daniel B. (Inventor); Venema, Steven C. (Inventor)

    1991-01-01

    A system for real-time video image display for robotics or remote-vehicle teleoperation is described that has at least one robot arm or remotely operated vehicle controlled by an operator through hand-controllers, one or more television cameras, and an optional lighting element. The system has at least one television monitor for display of a television image from a selected camera and the ability to select one of the cameras for image display. Graphics are generated with icons of cameras and lighting elements for display surrounding the television image to provide the operator information on: the location and orientation of each camera and lighting element; the region of illumination of each lighting element; the viewed region and range of focus of each camera; which camera is currently selected for image display for each monitor; and when the controller coordinates for said robot arms or remotely operated vehicles have been transformed to correspond to the coordinates of a selected or nonselected camera.

  13. Composite video and graphics display for camera viewing systems in robotics and teleoperation

    NASA Technical Reports Server (NTRS)

    Diner, Daniel B. (Inventor); Venema, Steven C. (Inventor)

    1993-01-01

    A system for real-time video image display for robotics or remote-vehicle teleoperation is described that has at least one robot arm or remotely operated vehicle controlled by an operator through hand-controllers, one or more television cameras, and an optional lighting element. The system has at least one television monitor for display of a television image from a selected camera and the ability to select one of the cameras for image display. Graphics are generated with icons of cameras and lighting elements for display surrounding the television image to provide the operator information on: the location and orientation of each camera and lighting element; the region of illumination of each lighting element; the viewed region and range of focus of each camera; which camera is currently selected for image display for each monitor; and when the controller coordinates for said robot arms or remotely operated vehicles have been transformed to correspond to the coordinates of a selected or nonselected camera.

  14. Numerical image manipulation and display in solar astronomy

    NASA Technical Reports Server (NTRS)

    Levine, R. H.; Flagg, J. C.

    1977-01-01

    The paper describes the system configuration and data manipulation capabilities of a solar image display system which allows interactive analysis of visual images and on-line manipulation of digital data. Image processing features include smoothing or filtering of images stored in the display, contrast enhancement, and blinking or flickering images. A computer with a core memory of 28,672 words provides the capacity to perform complex calculations based on stored images, including computing histograms, selecting subsets of images for further analysis, combining portions of images to produce images with physical meaning, and constructing mathematical models of features in an image. Some of the processing modes are illustrated by some image sequences from solar observations.

  15. Improved Interactive Medical-Imaging System

    NASA Technical Reports Server (NTRS)

    Ross, Muriel D.; Twombly, Ian A.; Senger, Steven

    2003-01-01

    An improved computational-simulation system for interactive medical imaging has been invented. The system displays high-resolution, three-dimensional-appearing images of anatomical objects based on data acquired by such techniques as computed tomography (CT) and magnetic-resonance imaging (MRI). The system enables users to manipulate the data to obtain a variety of views, for example, to display cross sections in specified planes or to rotate images about specified axes. Relative to prior such systems, this system offers enhanced capabilities for synthesizing images of surgical cuts and for collaboration by users at multiple, remote computing sites.

  16. Hard copies for digital medical images: an overview

    NASA Astrophysics Data System (ADS)

    Blume, Hartwig R.; Muka, Edward

    1995-04-01

    This paper is a condensed version of an invited overview of the technology of film hard copies used in radiology. Because the overview was given to an essentially nonmedical audience, the reliance on film hard copies in radiology is outlined in greater detail. The overview is concerned with laser image recorders generating monochrome prints on silver-halide films. The basic components of laser image recorders are sketched. The paper concentrates on the physical parameters - characteristic function, dynamic range, digitization resolution, modulation transfer function, and noise power spectrum - which define the image quality and information transfer capability of the printed image. A preliminary approach is presented to compare the printed image quality with noise in the acquired image as well as with the noise of state-of-the-art cathode-ray-tube display systems. High-performance laser-image-recorder/silver-halide-film/light-box systems are well capable of reproducing acquired radiologic information. Most recently, development was begun toward a display function standard for soft-copy display systems to facilitate similarity of image presentation between different soft-copy displays as well as between soft- and hard-copy displays. The standard display function is based on perceptual linearization. The standard is briefly reviewed to encourage the printer industry to adopt it as well.

  17. Development of an image operation system with a motion sensor in dental radiology.

    PubMed

    Sato, Mitsuru; Ogura, Toshihiro; Yasumoto, Yoshiaki; Kadowaki, Yuta; Hayashi, Norio; Doi, Kunio

    2015-07-01

    During examinations and/or treatment, a dentist in the examination room needs to view images on a proper display system. However, dentists cannot operate the image display system by hand, because they always wear gloves to keep their hands away from unsanitized materials. Therefore, we developed a new image operating system that uses a motion sensor. We used the Leap Motion sensor to read the hand movements of a dentist. We programmed the system in C++ to enable various operations on the display system, i.e., click, double-click, drag, and drop. Thus, dentists wearing gloves in the examination room can control dental and panoramic images on the image display system intuitively and quickly with hand movements alone. We compared the time required with the conventional method using a mouse and with the new method using finger operation. The average operation time with the finger method was significantly shorter than that with the mouse method. This motion sensor method, with appropriate training for finger movements, can provide better operating performance than the conventional mouse method.

  18. Network, system, and status software enhancements for the autonomously managed electrical power system breadboard. Volume 4: Graphical status display

    NASA Technical Reports Server (NTRS)

    Mckee, James W.

    1990-01-01

    This volume (4 of 4) contains the description, structured flow charts, prints of the graphical displays, and source code to generate the displays for the AMPS graphical status system. The function of these displays is to present to the manager of the AMPS system a graphical status display with hot boxes that allow the manager to get more detailed status on selected portions of the AMPS system. The development of the graphical displays is divided into two processes: the creation of the screen images and their storage in files on the computer, and the running of the status program which uses the screen images.

  19. IMDISP - INTERACTIVE IMAGE DISPLAY PROGRAM

    NASA Technical Reports Server (NTRS)

    Martin, M. D.

    1994-01-01

    The Interactive Image Display Program (IMDISP) is an interactive image display utility for the IBM Personal Computer (PC, XT and AT) and compatibles. Until recently, efforts to utilize small computer systems for display and analysis of scientific data have been hampered by the lack of sufficient data storage capacity to accommodate large image arrays. Most planetary images, for example, require nearly a megabyte of storage. The recent development of CD-ROM (Compact Disc Read-Only Memory) storage technology makes possible the storage of up to 680 megabytes of data on a single 4.72-inch disk. IMDISP was developed for use with the CD-ROM storage system which is currently being evaluated by the Planetary Data System. The latest disks to be produced by the Planetary Data System are a set of three disks containing all of the images of Uranus acquired by the Voyager spacecraft. The images are in both compressed and uncompressed format. IMDISP can read the uncompressed images directly, but special software is provided to decompress the compressed images, which cannot be processed directly. IMDISP can also display images stored on floppy or hard disks. A digital image is a picture converted to numerical form so that it can be stored and used in a computer. The image is divided into a matrix of small regions called picture elements, or pixels. The rows and columns of pixels are called "lines" and "samples", respectively. Each pixel has a numerical value, or DN (data number) value, quantifying the darkness or brightness of the image at that spot. In total, each pixel has an address (line number, sample number) and a DN value, which is all that the computer needs for processing. DISPLAY commands allow the IMDISP user to display all or part of an image at various positions on the display screen. The user may also zoom in and out from a point on the image defined by the cursor, and may pan around the image. To enable more or all of the original image to be displayed on the screen at once, the image can be "subsampled." For example, if the image were subsampled by a factor of 2, every other pixel from every other line would be displayed, starting from the upper left corner of the image. Any positive integer may be used for subsampling. The user may produce a histogram of an image file, which is a graph showing the number of pixels per DN value, or per range of DN values, for the entire image. IMDISP can also plot the DN value versus pixels along a line between two points on the image. The user can "stretch" or increase the contrast of an image by specifying low and high DN values; all pixels with values lower than the specified "low" will then become black, and all pixels higher than the specified "high" value will become white. Pixels between the low and high values will be evenly shaded between black and white. IMDISP is written in a modular form to make it easy to change it to work with different display devices or on other computers. The code can also be adapted for use in other application programs. There are device-dependent image display modules, general image display subroutines, image I/O routines, and image label and command line parsing routines. The IMDISP system is written in C (94%) and Assembler (6%). It was implemented on an IBM PC with the MS-DOS 3.21 operating system. IMDISP has a memory requirement of about 142k bytes. IMDISP was developed in 1989 and is a copyrighted work with all copyright vested in NASA. Additional planetary images can be obtained from the National Space Science Data Center at (301) 286-6695.
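
    As a hedged sketch of the display operations the abstract describes (subsampling, the low/high DN contrast stretch, and the per-DN histogram), the Python snippet below reproduces the same behavior on an 8-bit image array; it is an illustration of the operations, not the IMDISP code itself.

```python
# Illustrative subsample / stretch / histogram operations for an 8-bit image.
import numpy as np

def subsample(image, factor=2):
    """Keep every Nth line and sample, starting from the upper-left pixel."""
    return image[::factor, ::factor]

def stretch(image, low, high):
    """Map DN <= low to black, DN >= high to white, and shade linearly in between."""
    dn = image.astype(float)
    out = (dn - low) / float(high - low)
    return (np.clip(out, 0.0, 1.0) * 255).astype(np.uint8)

def histogram(image):
    """Number of pixels at each DN value (0-255) for the whole image."""
    return np.bincount(image.ravel(), minlength=256)
```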

  20. A head-mounted display-based personal integrated-image monitoring system for transurethral resection of the prostate.

    PubMed

    Yoshida, Soichiro; Kihara, Kazunori; Takeshita, Hideki; Fujii, Yasuhisa

    2014-12-01

    The head-mounted display (HMD) is a new image monitoring system. We developed the Personal Integrated-image Monitoring System (PIM System) using the HMD (HMZ-T2, Sony Corporation, Tokyo, Japan) in combination with video splitters and multiplexers as a surgical guide system for transurethral resection of the prostate (TURP). The imaging information obtained from the cystoscope, the transurethral ultrasonography (TRUS), the video camera attached to the HMD, and the patient's vital-signs monitor was split and integrated by the PIM System, and a composite image was displayed by the HMD using a four-split screen technique. Wearing the HMD, the lead surgeon and the assistant could simultaneously and continuously monitor the same information displayed by the HMD in an ergonomically efficient posture. Each participant could independently rearrange the images comprising the composite image depending on the step being performed. Two benign prostatic hyperplasia (BPH) patients underwent TURP performed by surgeons guided with this system. In both cases, the TURP procedure was performed successfully, and the postoperative clinical courses showed no remarkable adverse events. During the procedure, none of the participants experienced any HMD-wear-related adverse effects or reported any discomfort.

  1. Augmented reality 3D display based on integral imaging

    NASA Astrophysics Data System (ADS)

    Deng, Huan; Zhang, Han-Le; He, Min-Yang; Wang, Qiong-Hua

    2017-02-01

    Integral imaging (II) is a good candidate for augmented reality (AR) display, since it provides various physiological depth cues so that viewers can freely change the accommodation and convergence between the virtual three-dimensional (3D) images and the real-world scene without feeling any visual discomfort. We propose two AR 3D display systems based on the theory of II. In the first AR system, a micro II display unit reconstructs a micro 3D image, and the micro 3D image is magnified by a convex lens. The lateral and depth distortions of the magnified 3D image are analyzed and resolved by pitch scaling and depth scaling. The magnified 3D image and the real 3D scene are overlapped by using a half-mirror to realize AR 3D display. The second AR system uses a micro-lens-array holographic optical element (HOE) as an image combiner. The HOE is a volume holographic grating which functions as a micro-lens array for Bragg-matched light and as transparent glass for Bragg-mismatched light. A reference beam can reproduce a virtual 3D image from one side, and a reference beam with conjugated phase can reproduce a second 3D image from the other side of the micro-lens-array HOE, which provides a double-sided 3D display capability.

  2. Viewpoint Dependent Imaging: An Interactive Stereoscopic Display

    NASA Astrophysics Data System (ADS)

    Fisher, Scott

    1983-04-01

    The design and implementation of a viewpoint-dependent imaging system is described. The resultant display is an interactive, life-size, stereoscopic image that becomes a window into a three-dimensional visual environment. As the user physically changes his viewpoint of the represented data in relation to the display surface, the image is continuously updated. The changing viewpoints are retrieved from a comprehensive stereoscopic image array stored on a computer-controlled optical videodisc and fluidly presented in coordination with the viewer's movements as detected by a body-tracking device. This imaging system is an attempt to more closely represent an observer's interactive perceptual experience of the visual world by presenting sensory information cues not offered by traditional media technologies: binocular parallax, motion parallax, and motion perspective. Unlike holographic imaging, this display requires relatively low bandwidth.

  3. Design of area array CCD image acquisition and display system based on FPGA

    NASA Astrophysics Data System (ADS)

    Li, Lei; Zhang, Ning; Li, Tianting; Pan, Yue; Dai, Yuming

    2014-09-01

    With the development of science and technology, the CCD (charge-coupled device) has been widely applied in various fields and plays an important role in modern sensing systems, so research on a real-time image acquisition and display scheme based on a CCD device is of great significance. This paper introduces an image data acquisition and display system for an area-array CCD based on an FPGA. Several key technical challenges and problems of the system are also analyzed and the corresponding solutions put forward. The FPGA works as the core processing unit of the system and controls the overall timing sequence. The ICX285AL area-array CCD image sensor produced by SONY Corporation is used in the system. The FPGA drives the area-array CCD, and an analog front end (AFE) then processes the CCD image signal, including amplification, filtering, noise elimination and correlated double sampling (CDS). An AD9945 produced by ADI Corporation converts the analog signal to a digital signal. A Camera Link high-speed data transmission circuit was developed, the PC-side image acquisition software was completed, and real-time display of the images was realized. Practical testing indicates that the system is stable and reliable in image acquisition and control, and its performance meets the project requirements.
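
    For readers unfamiliar with the CDS step mentioned above, the short Python sketch below gives a numerical illustration: each pixel's reset (reference) level is subtracted from its signal level, so the reset noise common to both samples cancels. The synthetic noise values are assumptions for illustration; in the real system the AD9945 performs this in hardware.

```python
# Numerical illustration of correlated double sampling (synthetic data).
import numpy as np

rng = np.random.default_rng(0)
n_pixels = 1000
reset_noise = rng.normal(0.0, 5.0, n_pixels)                 # reset (kTC) noise, shared by both samples
reset_level = 500.0 + reset_noise                             # sample 1: reset/reference level
signal_level = reset_level + 200.0 + rng.normal(0.0, 1.0, n_pixels)  # sample 2: reset + photo-signal

cds_output = signal_level - reset_level                       # CDS: difference of the two samples
print(f"std without CDS: {signal_level.std():.2f}, with CDS: {cds_output.std():.2f}")
```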

  4. Demonstration of a real-time implementation of the ICVision holographic stereogram display

    NASA Astrophysics Data System (ADS)

    Kulick, Jeffrey H.; Jones, Michael W.; Nordin, Gregory P.; Lindquist, Robert G.; Kowel, Stephen T.; Thomsen, Axel

    1995-07-01

    There is increasing interest in real-time autostereoscopic 3D displays. Such systems allow 3D objects or scenes to be viewed by one or more observers with correct motion parallax without the need for glasses or other viewing aids. Potential applications of such systems include mechanical design, training and simulation, medical imaging, virtual reality, and architectural design. One approach to the development of real-time autostereoscopic display systems has been to develop real-time holographic display systems. The approach taken by most of these systems is to compute and display a number of holographic lines at one time, and then use a scanning system to replicate the images throughout the display region. The approach taken in the ICVision system being developed at the University of Alabama in Huntsville is very different. In the ICVision display, a set of discrete viewing regions called virtual viewing slits is created by the display. Each pixel is required to fill every viewing slit with different image data. When the images presented in two virtual viewing slits separated by an interocular distance are filled with stereoscopic pair images, the observer sees a 3D image. The images are computed so that a different stereo pair is presented each time the viewer moves one eye pupil diameter, thus providing a series of stereo views. Each pixel is subdivided into smaller regions, called partial pixels. Each partial pixel is filled with the diffraction grating required to fill an individual virtual viewing slit. The sum of all the partial pixels in a pixel then fills all the virtual viewing slits. The final version of the ICVision system will form diffraction gratings in a liquid crystal layer on the surface of VLSI chips in real time. Processors embedded in the VLSI chips will compute the display in real time. In the current version of the system, a commercial AMLCD is sandwiched with a diffraction grating array. This paper will discuss the design details of a portable 3D display based on the integration of a diffractive optical element with a commercial off-the-shelf AMLCD. The diffractive optic contains several hundred thousand partial-pixel gratings, and the AMLCD modulates the light diffracted by the gratings.

  5. Stereoscopic Integrated Imaging Goggles for Multimodal Intraoperative Image Guidance

    PubMed Central

    Mela, Christopher A.; Patterson, Carrie; Thompson, William K.; Papay, Francis; Liu, Yang

    2015-01-01

    We have developed novel stereoscopic wearable multimodal intraoperative imaging and display systems entitled Integrated Imaging Goggles for guiding surgeries. The prototype systems offer real-time stereoscopic fluorescence imaging and color reflectance imaging capabilities, along with in vivo handheld microscopy and ultrasound imaging. With the Integrated Imaging Goggle, both wide-field fluorescence imaging and in vivo microscopy are provided. Real-time ultrasound images can also be presented in the goggle display. Furthermore, real-time goggle-to-goggle stereoscopic video sharing is demonstrated, which can greatly facilitate telemedicine. In this paper, the prototype systems are described, characterized and tested in surgeries in biological tissues ex vivo. We have found that the system can detect fluorescent targets with as low as 60 nM indocyanine green and can resolve structures down to 0.25 mm with large-FOV stereoscopic imaging. The system has successfully guided simulated cancer surgeries in chicken tissue. The Integrated Imaging Goggle is novel in 4 aspects: it is (a) the first wearable stereoscopic wide-field intraoperative fluorescence imaging and display system, (b) the first wearable system offering both large-FOV and microscopic imaging simultaneously, (c) the first wearable system that offers both ultrasound imaging and fluorescence imaging capabilities, and (d) the first demonstration of goggle-to-goggle communication to share stereoscopic views for medical guidance. PMID:26529249

  6. Apparatus for monitoring crystal growth

    DOEpatents

    Sachs, Emanual M.

    1981-01-01

    A system and method are disclosed for monitoring the growth of a crystalline body from a liquid meniscus in a furnace. The system provides an improved human/machine interface so as to reduce operator stress, strain and fatigue while improving the conditions for observation and control of the growing process. The system comprises suitable optics for forming an image of the meniscus and body wherein the image is anamorphic so that the entire meniscus can be viewed with good resolution in both the width and height dimensions. The system also comprises a video display for displaying the anamorphic image. The video display includes means for enhancing the contrast between any two contrasting points in the image. The video display also comprises a signal averager for averaging the intensity of at least one preselected portion of the image. The value of the average intensity can in turn be utilized to control the growth of the body. The system and method are also capable of observing and monitoring multiple processes.
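
    A minimal sketch of the signal-averaging idea (our illustration under assumed names, not the patent's circuitry): the mean intensity over a preselected portion of the meniscus image is computed and compared with a setpoint to derive a correction for the growth process.

```python
import numpy as np

def roi_mean_intensity(image: np.ndarray, roi: tuple) -> float:
    """Average intensity over a rectangular region of interest (r0, r1, c0, c1)."""
    r0, r1, c0, c1 = roi
    return float(image[r0:r1, c0:c1].mean())

def growth_correction(mean_intensity: float, setpoint: float, gain: float = 0.01) -> float:
    """Simple proportional correction derived from the averaged intensity."""
    return gain * (setpoint - mean_intensity)

frame = np.random.default_rng(0).integers(0, 256, size=(480, 640))
print(growth_correction(roi_mean_intensity(frame, (200, 220, 300, 340)), setpoint=128.0))
```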

  7. Method of monitoring crystal growth

    DOEpatents

    Sachs, Emanual M.

    1982-01-01

    A system and method are disclosed for monitoring the growth of a crystalline body from a liquid meniscus in a furnace. The system provides an improved human/machine interface so as to reduce operator stress, strain and fatigue while improving the conditions for observation and control of the growing process. The system comprises suitable optics for forming an image of the meniscus and body wherein the image is anamorphic so that the entire meniscus can be viewed with good resolution in both the width and height dimensions. The system also comprises a video display for displaying the anamorphic image. The video display includes means for enhancing the contrast between any two contrasting points in the image. The video display also comprises a signal averager for averaging the intensity of at least one preselected portion of the image. The value of the average intensity can in turn be utilized to control the growth of the body. The system and method are also capable of observing and monitoring multiple processes.

  8. Parallax barrier engineering for image quality improvement in an autostereoscopic 3D display.

    PubMed

    Kim, Sung-Kyu; Yoon, Ki-Hyuk; Yoon, Seon Kyu; Ju, Heongkyu

    2015-05-18

    We present an image quality improvement in a parallax barrier (PB)-based multiview autostereoscopic 3D display system under real-time tracking of the positions of a viewer's eyes. The system presented exploits a parallax barrier engineered to offer significantly improved quality of three-dimensional images for a moving viewer without eyewear under dynamic eye tracking. The improved image quality includes enhanced uniformity of image brightness, reduced point crosstalk, and no pseudoscopic effects. We control the relative ratio between two parameters, i.e., the pixel size and the aperture of the parallax barrier slit, to improve the uniformity of image brightness at a viewing zone. The eye tracking that monitors the positions of a viewer's eyes enables the pixel data control software to turn on only the pixels for view images near the viewer's eyes (the other pixels are turned off), thus reducing point crosstalk. The eye-tracking-combined software provides the right images for the respective eyes, therefore producing no pseudoscopic effects at its zone boundaries. The viewing zone can be spanned over an area larger than the central viewing zone offered by a conventional PB-based multiview autostereoscopic 3D display (no eye tracking). Our 3D display system also provides multiviews for motion parallax under eye tracking. More importantly, we demonstrate substantial reduction of point crosstalk of images at the viewing zone, its level being comparable to that of a commercialized eyewear-assisted 3D display system. The multiview autostereoscopic 3D display presented can greatly resolve the point crosstalk problem, which is one of the critical factors that make it difficult for previous multiview autostereoscopic 3D display technologies to replace their eyewear-assisted counterparts.
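
    The eye-tracked pixel gating can be illustrated with a small sketch (the view indexing and margin are assumptions, not the paper's implementation): only the view images nearest the tracked eye positions are kept on, and all other views are switched off to reduce point crosstalk.

```python
def active_views(left_eye_view: int, right_eye_view: int, n_views: int, margin: int = 1):
    """Return the sorted view indices to keep lit, given the view nearest each eye."""
    keep = set()
    for v in (left_eye_view, right_eye_view):
        for d in range(-margin, margin + 1):
            idx = v + d
            if 0 <= idx < n_views:
                keep.add(idx)
    return sorted(keep)

print(active_views(left_eye_view=3, right_eye_view=5, n_views=9))  # -> [2, 3, 4, 5, 6]
```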

  9. Conceptual design study for an advanced cab and visual system, volume 1

    NASA Technical Reports Server (NTRS)

    Rue, R. J.; Cyrus, M. L.; Garnett, T. A.; Nachbor, J. W.; Seery, J. A.; Starr, R. L.

    1980-01-01

    A conceptual design study was conducted to define requirements for an advanced cab and visual system. The rotorcraft system integration simulator is intended for engineering studies in the area of mission-associated vehicle handling qualities. Principally, a technology survey and assessment of existing and proposed simulator visual display systems, image generation systems, modular cab designs, and simulator control station designs were performed and are discussed. State-of-the-art survey data were used to synthesize a set of preliminary visual display system concepts, of which five candidate display configurations were selected for further evaluation. Basic display concepts incorporated in these configurations included: real image projection, using either periscopes, fiber optic bundles, or scanned laser optics; and virtual imaging with helmet-mounted displays. These display concepts were integrated in the study with a simulator cab concept employing a modular base for aircraft controls, crew seating, and instrumentation (or other) displays. A simple concept to induce vibration in the various modules was developed and is described. Results of evaluations and trade-offs related to the candidate system concepts are given, along with a suggested weighting scheme for numerically comparing visual system performance characteristics.

  10. Implementation of a Landscape Lighting System to Display Images

    NASA Astrophysics Data System (ADS)

    Sun, Gi-Ju; Cho, Sung-Jae; Kim, Chang-Beom; Moon, Cheol-Hong

    The system implemented in this study consists of a PC, a MASTER, SLAVEs and MODULEs. The PC sets up the various landscape lighting displays, and the image files can be sent to the MASTER through a virtual serial port connected over USB (Universal Serial Bus). The MASTER sends a sync signal to the SLAVE. The SLAVE uses the signal received from the MASTER together with the landscape lighting display pattern. The video file is saved in NAND flash memory, and the R, G, B signals are separated using the self-made display signal and sent to the MODULE so that it can display the image.
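
    A hedged sketch of the PC-to-MASTER transfer path (the port name, baud rate, framing and libraries are all assumptions, not the paper's protocol): the image is pushed over the USB virtual serial port, and its R, G, B planes are separated before being handed onward.

```python
import serial          # pyserial
from PIL import Image  # Pillow

def send_image(path: str, port: str = "COM3", baud: int = 115200) -> None:
    """Split an image into R, G, B planes and push them over a virtual serial port."""
    img = Image.open(path).convert("RGB")
    r, g, b = img.split()                  # separate the R, G, B planes
    with serial.Serial(port, baud) as link:
        for plane in (r, g, b):
            link.write(plane.tobytes())    # one plane at a time; framing omitted

# send_image("landscape_pattern.bmp")
```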

  11. User's manual for flight Simulator Display System (FSDS)

    NASA Technical Reports Server (NTRS)

    Egerdahl, C. C.

    1979-01-01

    The capabilities of the flight simulator display system (FSDS) are described. FSDS is a color raster scan display generator designed to meet the special needs of Flight Simulation Laboratories. The FSDS can update (revise) the images it generates every 16.6 ms, with limited support from a host processor. This corresponds to the standard TV vertical rate of 60 Hertz, and allows the system to carry out display functions in a time-critical environment. Rotation of a complex image in the television raster with minimal hardware is possible with the system.

  12. Spatial noise in microdisplays for near-to-eye applications

    NASA Astrophysics Data System (ADS)

    Hastings, Arthur R., Jr.; Draper, Russell S.; Wood, Michael V.; Fellowes, David A.

    2011-06-01

    Spatial noise in imaging systems has been characterized and its impact on image quality metrics has been addressed primarily with respect to the introduction of this noise at the sensor component. However, sensor fixed pattern noise is not the only source of fixed pattern noise in an imaging system. Display fixed pattern noise cannot be easily mitigated in processing and, therefore, must be addressed. In this paper, a thorough examination of the amount and the effect of display fixed pattern noise is presented. The specific manifestation of display fixed pattern noise is dependent upon the display technology. Utilizing a calibrated camera, US Army RDECOM CERDEC NVESD has developed a microdisplay (μdisplay) spatial noise data collection capability. Noise and signal power spectra were used to characterize the display signal to noise ratio (SNR) as a function of spatial frequency analogous to the minimum resolvable temperature difference (MRTD) of a thermal sensor. The goal of this study is to establish a measurement technique to characterize μdisplay limiting performance to assist in proper imaging system specification.
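
    The power-spectrum-based characterization can be sketched as follows (our own illustration of the general approach, not NVESD's measurement code): row-averaged power spectra of a signal frame and a fixed-pattern-noise frame, both captured with the calibrated camera, give a display SNR as a function of spatial frequency.

```python
import numpy as np

def snr_vs_frequency(signal_image: np.ndarray, noise_image: np.ndarray):
    """Return spatial frequencies (cycles/pixel) and SNR(f) from two captured frames."""
    sig_ps = np.mean(np.abs(np.fft.rfft(signal_image, axis=1)) ** 2, axis=0)
    noise_ps = np.mean(np.abs(np.fft.rfft(noise_image, axis=1)) ** 2, axis=0)
    freqs = np.fft.rfftfreq(signal_image.shape[1])
    return freqs, sig_ps / np.maximum(noise_ps, 1e-12)
```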

  13. Multiview three-dimensional display with continuous motion parallax through planar aligned OLED microdisplays.

    PubMed

    Teng, Dongdong; Xiong, Yi; Liu, Lilin; Wang, Biao

    2015-03-09

    Existing multiview three-dimensional (3D) display technologies encounter the problem of discontinuous motion parallax, due to the limited number of stereo-images that are presented to the corresponding sub-viewing zones (SVZs). This paper proposes a novel multiview 3D display system that obtains continuous motion parallax by using a group of planar aligned OLED microdisplays. By blocking partial light rays with baffles inserted between adjacent OLED microdisplays, a transitional stereo-image assembled from two spatially complementary segments of adjacent stereo-images is presented to a complementary fusing zone (CFZ) located between two adjacent SVZs. For a moving observation point, the spatial ratio of the two complementary segments evolves gradually, resulting in continuously changing transitional stereo-images and thus overcoming the problem of discontinuous motion parallax. The proposed display system employs a projection-type architecture, retaining full display resolution while keeping a thin optical structure, and thus offers great potential for portable or mobile 3D display applications. Experimentally, a prototype display system is demonstrated with 9 OLED microdisplays.
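
    A minimal sketch of the transitional stereo-image idea (the array names and split rule are assumptions): the composed view takes its left fraction from one adjacent view image and the rest from the other, with the split position following the observation point so that the image changes continuously between sub-viewing zones.

```python
import numpy as np

def transitional_view(view_a: np.ndarray, view_b: np.ndarray, ratio: float) -> np.ndarray:
    """Compose a view whose left fraction `ratio` comes from view_a and the rest from view_b."""
    h, w = view_a.shape[:2]
    split = int(round(np.clip(ratio, 0.0, 1.0) * w))
    out = view_b.copy()
    out[:, :split] = view_a[:, :split]
    return out
```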

  14. Transparent 3D display for augmented reality

    NASA Astrophysics Data System (ADS)

    Lee, Byoungho; Hong, Jisoo

    2012-11-01

    Two types of transparent three-dimensional display systems applicable to augmented reality are demonstrated. One of them is a head-mounted-display-type implementation which utilizes the principle of the system adopting a concave floating lens for virtual-mode integral imaging. Such a configuration has the advantage that the three-dimensional image can be displayed at a sufficiently far distance, resolving the accommodation conflict with the real-world scene. Incorporating a convex half mirror, which shows partial transparency, instead of the concave floating lens makes it possible to implement the transparent three-dimensional display system. The other type is the projection-type implementation, which is more appropriate for general use than the head-mounted-display-type implementation. Its imaging principle is based on the well-known reflection-type integral imaging. We realize the transparent display feature by imposing partial transparency on the array of concave mirrors which is used as the screen of reflection-type integral imaging. Two types of configurations, relying on incoherent and coherent light sources, are both possible. For the incoherent configuration, we introduce a concave half-mirror array, whereas the coherent one adopts a holographic optical element which replicates the functionality of the lenslet array. Though the projection-type implementation is in principle preferable to the head-mounted display, the present state of the art of the spatial light modulator still does not provide satisfactory visual quality of the displayed three-dimensional image. Hence we expect that the head-mounted-display-type and projection-type implementations will come to the market in sequence.

  15. Viewing region maximization of an integral floating display through location adjustment of viewing window.

    PubMed

    Kim, Joowhan; Min, Sung-Wook; Lee, Byoungho

    2007-10-01

    Integral floating display is a recently proposed three-dimensional (3D) display method which provides a dynamic 3D image in the vicinity of an observer. It has a viewing window, and correct 3D images can be observed only through this window. However, the positional difference between the viewing window and the floating image causes a limited viewing zone in the integral floating system. In this paper, we provide the principle and experimental results of the location adjustment of the viewing window of the integral floating display system by modifying the elemental image region for integral imaging. We explain the characteristics of the viewing window and propose how to move the viewing window to maximize the viewing zone.

  16. On-demand server-side image processing for web-based DICOM image display

    NASA Astrophysics Data System (ADS)

    Sakusabe, Takaya; Kimura, Michio; Onogi, Yuzo

    2000-04-01

    Low-cost image delivery is needed in modern networked hospitals. If a hospital has hundreds of clients, the cost of the client systems is a big problem. Naturally, a Web-based system is the most effective solution. However, a Web browser by itself cannot display medical images with certain image processing, such as a lookup-table transformation. We developed a Web-based medical image display system using a Web browser and on-demand server-side image processing. All images displayed on a Web page are generated from DICOM files on a server and delivered on demand. User interaction on the Web page is handled by a client-side scripting technology such as JavaScript. This combination provides the look and feel of an imaging workstation, in terms of both functionality and speed. Real-time update of images while tracing mouse motion is achieved in the Web browser without any client-side image processing, which would otherwise require client-side plug-in technology such as Java Applets or ActiveX. We tested the performance of the system in three cases: a single client, a small number of clients on a fast network, and a large number of clients on a normal-speed network. The results show that the communication overhead is very slight and that the system is very scalable in the number of clients.
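
    A hedged sketch of the server-side step (the function and parameter names are illustrative, not the paper's API): a window/level lookup-table transform is applied to the stored DICOM pixel data and the result is returned as an 8-bit image for the browser.

```python
import numpy as np

def window_level(pixels: np.ndarray, center: float, width: float) -> np.ndarray:
    """Map raw pixel values to 8-bit display values with a linear window/level LUT."""
    lo, hi = center - width / 2.0, center + width / 2.0
    out = (pixels.astype(np.float64) - lo) / max(hi - lo, 1e-9)
    return (np.clip(out, 0.0, 1.0) * 255).astype(np.uint8)

# e.g. a soft-tissue window applied to CT-like data
ct = np.array([[-200, 40, 80], [300, 1200, 2000]], dtype=np.int16)
print(window_level(ct, center=40, width=400))
```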

  17. A virtual image chain for perceived image quality of medical display

    NASA Astrophysics Data System (ADS)

    Marchessoux, Cédric; Jung, Jürgen

    2006-03-01

    This paper describes a virtual image chain for medical display (project VICTOR: granted in the 5th framework program by the European Commission). The chain starts from the raw data of an image digitizer (CR, DR) or from synthetic patterns and covers image enhancement (MUSICA by Agfa) and both display possibilities, hardcopy (film on a viewing box) and softcopy (monitor). A key feature of the chain is a complete image-wise approach. A first prototype is implemented in an object-oriented software platform. The display chain consists of several modules. Raw images are either taken from scanners (CR-DR) or from a pattern generator, in which the characteristics of DR-CR systems are introduced by their MTF and their dose-dependent Poisson noise. The image undergoes image enhancement and is then displayed. For softcopy display, color and monochrome monitors are used in the simulation. The image is down-sampled. The non-linear response of a color monitor is taken into account by the GOG or S-curve model, whereas the Standard Gray-Scale Display Function (DICOM) is used for monochrome display. The MTF of the monitor is applied to the image in intensity levels. For hardcopy display, the combination of film, printer, lightbox and viewing condition is modeled. The image is up-sampled and the DICOM-GSDF or a Kanamori Look-Up-Table is applied. An anisotropic model for the MTF of the printer is applied to the image in intensity levels. The density-dependent color (XYZ) of the hardcopy film is introduced by Look-Up-Tables. Finally, a Human Visual System Model is applied to the intensity images (XYZ in terms of cd/m2) in order to eliminate non-visible differences. Comparison leads to visible differences, which are quantified by higher-order image quality metrics. A specific image viewer is used for the visualization of the intensity image and the visual difference maps.
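
    Two stages of such a chain can be sketched as follows (our own illustration under assumed models, not the VICTOR code): dose-dependent Poisson noise added to a raw image, and an isotropic MTF applied in the frequency domain; the Gaussian MTF shape is purely illustrative.

```python
import numpy as np

def add_poisson_noise(image: np.ndarray, photons_per_unit: float) -> np.ndarray:
    """Dose-dependent Poisson noise: more photons per unit signal means less relative noise."""
    expected = np.clip(image, 0, None) * photons_per_unit
    return np.random.poisson(expected) / photons_per_unit

def apply_mtf(image: np.ndarray, sigma_cyc_per_px: float = 0.2) -> np.ndarray:
    """Blur an image with an isotropic Gaussian MTF applied in the frequency domain."""
    fy = np.fft.fftfreq(image.shape[0])[:, None]
    fx = np.fft.fftfreq(image.shape[1])[None, :]
    mtf = np.exp(-(fx ** 2 + fy ** 2) / (2 * sigma_cyc_per_px ** 2))
    return np.real(np.fft.ifft2(np.fft.fft2(image) * mtf))
```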

  18. Migration of the digital interactive breast-imaging teaching file

    NASA Astrophysics Data System (ADS)

    Cao, Fei; Sickles, Edward A.; Huang, H. K.; Zhou, Xiaoqiang

    1998-06-01

    The digital breast imaging teaching file developed during the last two years in our laboratory has been used successfully at UCSF (University of California, San Francisco) as a routine teaching tool for training radiology residents and fellows in mammography. Building on this success, we have ported the teaching file from an old Pixar imaging/Sun SPARC 470 display system to our newly designed telemammography display workstation (Ultra SPARC 2 platform with two DOME Md5/SBX display boards). The old Pixar/Sun 470 system, although adequate for fast and high-resolution image display, is 4-year-old technology, expensive to maintain and difficult to upgrade. The new display workstation is more cost-effective and is also compatible with the digital image format from a full-field direct digital mammography system. The digital teaching file is built on a sophisticated computer-aided instruction (CAI) model, which simulates the management sequences used in imaging interpretation and work-up. Each user can be prompted to respond by making his/her own observations, assessments, and work-up decisions as well as the marking of image abnormalities. This effectively replaces the traditional 'show-and-tell' teaching file experience with an interactive, response-driven type of instruction.

  19. Comparative Study of the MTFA, ICS, and SQRI Image Quality Metrics for Visual Display Systems

    DTIC Science & Technology

    1991-09-01

    ...reasonable image quality predictions across select display and viewing condition parameters.

  20. IDIMS/GEOPAK: Users manual for a geophysical data display and analysis system

    NASA Technical Reports Server (NTRS)

    Libert, J. M.

    1982-01-01

    The application of an existing image analysis system to the display and analysis of geophysical data is described, and the potential for expanding the capabilities of such a system toward more advanced computer analytic and modeling functions is investigated. The major features of IDIMS (Interactive Display and Image Manipulation System) and its applicability to image-type analysis of geophysical data are described. The development of a basic geophysical data processing system to permit the image representation, coloring, interdisplay and comparison of geophysical data sets using existing IDIMS functions, and to provide for the production of hard copies of processed images, is described. An instruction manual and documentation for the GEOPAK subsystem were produced. A training course for personnel in the use of IDIMS/GEOPAK was conducted. The effectiveness of the current IDIMS/GEOPAK system for geophysical data analysis was evaluated.

  1. Viewing zone duplication of multi-projection 3D display system using uniaxial crystal.

    PubMed

    Lee, Chang-Kun; Park, Soon-Gi; Moon, Seokil; Lee, Byoungho

    2016-04-18

    We propose a novel multiplexing technique for increasing the viewing zone of a multi-view based multi-projection 3D display system by employing double refraction in a uniaxial crystal. When a linearly polarized image from the projector passes through the uniaxial crystal, two optical paths are possible according to the polarization state of the image. Therefore, the optical path of the image can be changed, and the viewing zone is shifted in a lateral direction. The polarization modulation of the image from a single projection unit enables us to generate two viewing zones at different positions. For realizing full-color images at each viewing zone, a polarization-based temporal multiplexing technique is adopted with a conventional polarization switching device of a liquid crystal (LC) display. Through experiments, a prototype of a ten-view multi-projection 3D display system presenting full-colored view images is implemented by combining five laser scanning projectors, an optically clear calcite (CaCO3) crystal, and an LC polarization rotator. For each time sequence of the temporal multiplexing, the luminance distribution of the proposed system is measured and analyzed.

  2. Three-dimensional image display system using stereogram and holographic optical memory techniques

    NASA Astrophysics Data System (ADS)

    Kim, Cheol S.; Kim, Jung G.; Shin, Chang-Mok; Kim, Soo-Joong

    2001-09-01

    In this paper, we implemented a three-dimensional image display system using stereogram and holographic optical memory techniques which can store many images and reconstruct them automatically. In this system, to store and reconstruct stereo images, the incident angle of the reference beam must be controlled in real time, so we used a BPH (binary phase hologram) and an LCD (liquid crystal display) to control the reference beam. The input images are presented on the LCD without a polarizer/analyzer to maintain uniform beam intensities regardless of the brightness of the input images. The input images and BPHs are edited using application software with the same scheduled recording time interval for storage. The reconstructed stereo images are acquired by capturing the output images with a CCD camera behind the analyzer, which transforms phase information into brightness information. The reference beams are obtained by Fourier transform of the BPHs, which are designed with an SA (simulated annealing) algorithm, and are presented on the LCD at 0.05-second intervals using application software to reconstruct the stereo images. In the output plane, we used an LCD shutter that is synchronized to a monitor that displays alternate left- and right-eye images for depth perception. We demonstrated an optical experiment that repeatedly stores and reconstructs four stereo images in BaTiO3 using holographic optical memory techniques.

  3. WE-E-12A-01: Medical Physics 1.0 to 2.0: MRI, Displays, Informatics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pickens, D; Flynn, M; Peck, D

    Medical Physics 2.0 is a bold vision for an existential transition of clinical imaging physics in the face of the new realities of value-based and evidence-based medicine, comparative effectiveness, and meaningful use. It speaks to how clinical imaging physics can expand beyond traditional insular models of inspection and acceptance testing, oriented toward compliance, towards team-based models of operational engagement, prospective definition and assurance of effective use, and retrospective evaluation of clinical performance. Organized into four sessions of the AAPM, this particular session focuses on three specific modalities as outlined below. MRI 2.0: This presentation will look into the future of clinical MR imaging and what the clinical medical physicist will need to be doing as the technology of MR imaging evolves. Many of the measurement techniques used today will need to be expanded to address the advent of higher field imaging systems and dedicated imagers for specialty applications. Included will be the need to address quality assurance and testing metrics for multi-channel MR imagers and hybrid devices such as MR/PET systems. New pulse sequences and acquisition methods, increasing use of MR spectroscopy, and real-time guidance procedures will place the burden on the medical physicist to define and use new tools to properly evaluate these systems, but the clinical applications must be understood so that these tools are used correctly. Finally, new rules, clinical requirements, and regulations will mean that the medical physicist must actively work to keep her/his sites compliant and must work closely with physicians to ensure best performance of these systems. Informatics Display 1.0 to 2.0: Medical displays are an integral part of medical imaging operation. The DICOM and AAPM (TG18) efforts have led to clear definitions of performance requirements of monochrome medical displays that can be followed by medical physicists to ensure proper performance. However, effective implementation of that oversight has been challenging due to the number and extent of medical displays in use at a facility. The advent of color displays and mobile displays has added additional challenges to the task of the medical physicist. This informatics display lecture first addresses the current display guidelines (the 1.0 paradigm) and further outlines the initiatives and prospects for color and mobile displays (the 2.0 paradigm). Informatics Management 1.0 to 2.0: Imaging informatics is part of every radiology practice today. Imaging informatics covers everything from the ordering of a study, through the data acquisition and processing, display and archiving, reporting of findings and the billing for the services performed. The standardization of the processes used to manage the information and methodologies to integrate these standards is being developed and advanced continuously. These developments are done in an open forum and imaging organizations and professionals all have a part in the process. In the Informatics Management presentation, the flow of information and the integration of the standards used in the processes will be reviewed. The role of radiologists and physicists in the process will be discussed. Current methods (the 1.0 paradigm) and evolving methods (the 2.0 paradigm) for validation of informatics systems function will also be discussed. Learning Objectives: Identify requirements for improving quality assurance and compliance tools for advanced and hybrid MRI systems.
Identify the need for new quality assurance metrics and testing procedures for advanced systems. Identify new hardware systems and new procedures needed to evaluate MRI systems. Understand the components of current medical physics expectations for medical displays. Understand the role and prospect of medical physics for color and mobile display devices. Understand different areas of imaging informatics and the methodology for developing informatics standards. Understand the current status of informatics standards and the role of physicists and radiologists in the process, and the current technology for validating the function of these systems.

  4. APPLEPIPS /Apple Personal Image Processing System/ - An interactive digital image processing system for the Apple II microcomputer

    NASA Technical Reports Server (NTRS)

    Masuoka, E.; Rose, J.; Quattromani, M.

    1981-01-01

    Recent developments related to microprocessor-based personal computers have made low-cost digital image processing systems a reality. Image analysis systems built around these microcomputers provide color image displays for images as large as 256 by 240 pixels in sixteen colors. Descriptive statistics can be computed for portions of an image, and supervised image classification can be obtained. The systems support Basic, Fortran, Pascal, and assembler language. A description is provided of a system which is representative of the new microprocessor-based image processing systems currently on the market. While small systems may never be truly independent of larger mainframes, because they lack 9-track tape drives, the independent processing power of the microcomputers will help alleviate some of the turn-around time problems associated with image analysis and display on the larger multiuser systems.

  5. The development of a multifunction lens test instrument by using computer aided variable test patterns

    NASA Astrophysics Data System (ADS)

    Chen, Chun-Jen; Wu, Wen-Hong; Huang, Kuo-Cheng

    2009-08-01

    A multi-function lens test instrument is reported in this paper. This system can evaluate the image resolution, image quality, depth of field, image distortion and light intensity distribution of the tested lens by changing the test patterns. The system consists of a tested lens, a CCD camera, a linear motorized stage, a system fixture, an observer LCD monitor, and a notebook computer for providing the patterns. The LCD monitor displays a series of specified test patterns sent by the notebook. Each displayed pattern then passes through the tested lens and is imaged on the CCD camera sensor. Consequently, the system can evaluate the performance of the tested lens by analyzing the CCD camera image with specially designed software. The major advantage of this system is that it can complete the whole test quickly without interruption due to part replacement, because the test patterns are statically displayed on the monitor and controlled by the notebook.

  6. All-around viewing display system for group activity on life review therapy

    NASA Astrophysics Data System (ADS)

    Sakamoto, Kunio; Okumura, Mitsuru

    2009-10-01

    This paper describes a 360-degree viewing display system that can be viewed from any direction. A conventional monitor display is viewed from one direction, i.e., the display has a narrow viewing angle and observers cannot view the screen from the opposite side. To solve this problem, we developed a 360-degree viewing display for collaborative tasks on a round table. This 360-degree viewing system has a liquid crystal display screen and a 360-degree rotating table driven by a motor. The principle is very simple. The screen of a monitor simply rotates at a uniform speed, but optical techniques are also utilized. Moreover, we have developed a floating 360-degree viewing display that can be viewed from any direction. This new viewing system has a display screen, a rotating table and dual parabolic mirrors. In order to float only the image screen above the table, the rotating mechanism works inside the parabolic mirrors. Because the dual parabolic mirrors generate a "mirage" image over the upper mirror, observers can view a floating 2D image on the virtual screen in front of them. The observer can then view a monitor screen at any position around the round table.

  7. Display nonlinearity in digital image processing for visual communications

    NASA Astrophysics Data System (ADS)

    Peli, Eli

    1992-11-01

    The luminance emitted from a cathode ray tube (CRT) display is a nonlinear function (the gamma function) of the input video signal voltage. In most analog video systems, compensation for this nonlinear transfer function is implemented in the camera amplifiers. When CRT displays are used to present psychophysical stimuli in vision research, the specific display nonlinearity usually is measured and accounted for to ensure that the luminance of each pixel in the synthetic image properly represents the intended value. However, when using digital image processing, the linear analog-to-digital converters store a digital image that is nonlinearly related to the displayed or recorded image. The effect of this nonlinear transformation on a variety of image-processing applications used in visual communications is described.

  8. Display nonlinearity in digital image processing for visual communications

    NASA Astrophysics Data System (ADS)

    Peli, Eli

    1991-11-01

    The luminance emitted from a cathode ray tube (CRT) display is a nonlinear function (the gamma function) of the input video signal voltage. In most analog video systems, compensation for this nonlinear transfer function is implemented in the camera amplifiers. When CRT displays are used to present psychophysical stimuli in vision research, the specific display nonlinearity usually is measured and accounted for to ensure that the luminance of each pixel in the synthetic image properly represents the intended value. However, when using digital image processing, the linear analog-to-digital converters store a digital image that is nonlinearly related to the displayed or recorded image. This paper describes the effect of this nonlinear transformation on a variety of image-processing applications used in visual communications.
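
    A minimal sketch of the nonlinearity discussed in these two abstracts and of its digital compensation (the exponent 2.2 is a typical assumed value, not a measured one): the emitted luminance follows roughly L = L_max * (V / V_max)^gamma, so an image intended to be luminance-linear must be passed through the inverse curve before display.

```python
import numpy as np

def crt_luminance(v: np.ndarray, gamma: float = 2.2) -> np.ndarray:
    """Normalized luminance produced by a normalized video signal v in [0, 1]."""
    return np.power(np.clip(v, 0.0, 1.0), gamma)

def gamma_precompensate(linear: np.ndarray, gamma: float = 2.2) -> np.ndarray:
    """Pixel values to store or display so the CRT output is proportional to `linear`."""
    return np.power(np.clip(linear, 0.0, 1.0), 1.0 / gamma)

v = np.linspace(0.0, 1.0, 5)
print(crt_luminance(gamma_precompensate(v)))  # ~ v, i.e. linear luminance after compensation
```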

  9. Multiple-Flat-Panel System Displays Multidimensional Data

    NASA Technical Reports Server (NTRS)

    Gundo, Daniel; Levit, Creon; Henze, Christopher; Sandstrom, Timothy; Ellsworth, David; Green, Bryan; Joly, Arthur

    2006-01-01

    The NASA Ames hyperwall is a display system designed to facilitate the visualization of sets of multivariate and multidimensional data like those generated in complex engineering and scientific computations. The hyperwall includes a 7×7 matrix of computer-driven flat-panel video display units, each presenting an image of 1,280 × 1,024 pixels. The term hyperwall reflects the fact that this system is a more capable successor to prior computer-driven multiple-flat-panel display systems known by names that include the generic term powerwall and the trade names PowerWall and Powerwall. Each of the 49 flat-panel displays is driven by a rack-mounted, dual-central-processing-unit, workstation-class personal computer equipped with a high-performance graphical-display circuit card and with a hard-disk drive having a storage capacity of 100 GB. Each such computer is a slave node in a master/slave computing/data-communication system (see Figure 1). The computer that acts as the master node is similar to the slave-node computers, except that it runs the master portion of the system software and is equipped with a keyboard and mouse for control by a human operator. The system utilizes commercially available master/slave software along with custom software that enables the human controller to interact simultaneously with any number of selected slave nodes. In a powerwall, a single rendering task is spread across multiple processors and then the multiple outputs are tiled into one seamless super-display. It must be noted that the hyperwall concept subsumes the powerwall concept in that a single scene could be rendered as a mosaic image on the hyperwall. However, the hyperwall offers a wider set of capabilities to serve a different purpose: The hyperwall concept is one of (1) simultaneously displaying multiple different but related images, and (2) providing means for composing and controlling such sets of images. In place of elaborate software or hardware crossbar switches, the hyperwall concept substitutes reliance on the human visual system for integration, synthesis, and discrimination of patterns in complex and high-dimensional data spaces represented by the multiple displayed images. The variety of multidimensional data sets that can be displayed on the hyperwall is practically unlimited. For example, Figure 2 shows a hyperwall display of surface pressures and streamlines from a computational simulation of airflow about an aerospacecraft at various Mach numbers and angles of attack. In this display, Mach numbers increase from left to right and angles of attack increase from bottom to top. That is, all images in the same column represent simulations at the same Mach number, while all images in the same row represent simulations at the same angle of attack. The same viewing transformations and the same mapping from surface pressure to colors were used in generating all the images.

  10. The eyes prefer real images

    NASA Technical Reports Server (NTRS)

    Roscoe, Stanley N.

    1989-01-01

    For better or worse, virtual imaging displays are with us in the form of narrow-angle combining-glass presentations, head-up displays (HUD), and head-mounted projections of wide-angle sensor-generated or computer-animated imagery (HMD). All military and civil aviation services and a large number of aerospace companies are involved in one way or another in a frantic competition to develop the best virtual imaging display system. The success or failure of major weapon systems hangs in the balance, and billions of dollars in potential business are at stake. Because of the degree to which national defense is committed to the perfection of virtual imaging displays, a brief consideration of their status, an investigation and analysis of their problems, and a search for realistic alternatives are long overdue.

  11. A database system to support image algorithm evaluation

    NASA Technical Reports Server (NTRS)

    Lien, Y. E.

    1977-01-01

    The design is given of an interactive image database system IMDB, which allows the user to create, retrieve, store, display, and manipulate images through the facility of a high-level, interactive image query (IQ) language. The query language IQ permits the user to define false color functions, pixel value transformations, overlay functions, zoom functions, and windows. The user manipulates the images through generic functions. The user can direct images to display devices for visual and qualitative analysis. Image histograms and pixel value distributions can also be computed to obtain a quantitative analysis of images.

  12. A 2D/3D hybrid integral imaging display by using fast switchable hexagonal liquid crystal lens array

    NASA Astrophysics Data System (ADS)

    Lee, Hsin-Hsueh; Huang, Ping-Ju; Wu, Jui-Yi; Hsieh, Po-Yuan; Huang, Yi-Pai

    2017-05-01

    The paper proposes a new display which can switch between 2D and 3D images on a monitor, which we call the Hybrid Display. In 3D display technologies, the reduction of image resolution is still an important issue. The more angular information is offered to the observer, the less spatial resolution remains for the image because of the fixed panel resolution. For example, in an integral photography system, the part of the image without depth, such as the background, loses resolution in the transformation from a 2D to a 3D image. Therefore, we propose a method that uses a liquid crystal component to quickly switch between the 2D image and the 3D image. Meanwhile, the 2D image is set as a background to compensate for the resolution. In the experiment, a hexagonal liquid crystal lens array is used in place of a fixed lens array. Moreover, in order to increase the lens power of the hexagonal LC lens array, we applied a high-resistance (Hi-R) layer structure on the electrode. The Hi-R layer creates a gradient electric field and shapes the lens profile. We also use a panel with 801 PPI to display the integral image in our system. Hence, the combination of a full-resolution 2D background with a 3D depth object forms the Hybrid Display.

  13. Toshiba TDF-500 High Resolution Viewing And Analysis System

    NASA Astrophysics Data System (ADS)

    Roberts, Barry; Kakegawa, M.; Nishikawa, M.; Oikawa, D.

    1988-06-01

    A high resolution, operator interactive, medical viewing and analysis system has been developed by Toshiba and Bio-Imaging Research. This system provides many advanced features including high resolution displays, a very large image memory and advanced image processing capability. In particular, the system provides CRT frame buffers capable of update in one frame period, an array processor capable of image processing at operator interactive speeds, and a memory system capable of updating multiple frame buffers at frame rates whilst supporting multiple array processors. The display system provides 1024 x 1536 display resolution at 40Hz frame and 80Hz field rates. In particular, the ability to provide whole or partial update of the screen at the scanning rate is a key feature. This allows multiple viewports or windows in the display buffer with both fixed and cine capability. To support image processing features such as windowing, pan, zoom, minification, filtering, ROI analysis, multiplanar and 3D reconstruction, a high performance CPU is integrated into the system. This CPU is an array processor capable of up to 400 million instructions per second. To support the multiple viewer and array processors' instantaneous high memory bandwidth requirement, an ultra fast memory system is used. This memory system has a bandwidth capability of 400MB/sec and a total capacity of 256MB. This bandwidth is more than adequate to support several high resolution CRT's and also the fast processing unit. This fully integrated approach allows effective real time image processing. The integrated design of viewing system, memory system and array processor are key to the imaging system. It is the intention to describe the architecture of the image system in this paper.

  14. Unified Digital Image Display And Processing System

    NASA Astrophysics Data System (ADS)

    Horii, Steven C.; Maguire, Gerald Q.; Noz, Marilyn E.; Schimpf, James H.

    1981-11-01

    Our institution like many others, is faced with a proliferation of medical imaging techniques. Many of these methods give rise to digital images (e.g. digital radiography, computerized tomography (CT) , nuclear medicine and ultrasound). We feel that a unified, digital system approach to image management (storage, transmission and retrieval), image processing and image display will help in integrating these new modalities into the present diagnostic radiology operations. Future techniques are likely to employ digital images, so such a system could readily be expanded to include other image sources. We presently have the core of such a system. We can both view and process digital nuclear medicine (conventional gamma camera) images, positron emission tomography (PET) and CT images on a single system. Images from our recently installed digital radiographic unit can be added. Our paper describes our present system, explains the rationale for its configuration, and describes the directions in which it will expand.

  15. Displays and simulators

    NASA Astrophysics Data System (ADS)

    Mohon, N.

    A 'simulator' is defined as a machine which imitates the behavior of a real system in a very precise manner. The major components of a simulator and their interaction are outlined in brief form, taking into account the major components of an aircraft flight simulator. Particular attention is given to the visual display portion of the simulator, the basic components of the display, their interactions, and their characteristics. Real image displays are considered along with virtual image displays, and image generators. Attention is given to an advanced simulator for pilot training, a holographic pancake window, a scan laser image generator, the construction of an infrared target simulator, and the Apollo Command Module Simulator.

  16. Volumetric 3D display using a DLP projection engine

    NASA Astrophysics Data System (ADS)

    Geng, Jason

    2012-03-01

    In this article, we describe a volumetric 3D display system based on a high-speed DLP™ (Digital Light Processing) projection engine. Existing two-dimensional (2D) flat screen displays often lead to ambiguity and confusion in high-dimensional data/graphics presentation due to the lack of true depth cues. Even with the help of powerful 3D rendering software, three-dimensional (3D) objects displayed on a 2D flat screen may still fail to provide spatial relationship or depth information correctly and effectively. Essentially, 2D displays have to rely upon the capability of the human brain to piece together a 3D representation from 2D images. Despite the impressive mental capability of the human visual system, its visual perception is not reliable if certain depth cues are missing. In contrast, the volumetric 3D display technologies discussed in this article are capable of displaying 3D volumetric images in true 3D space. Each "voxel" of a 3D image (analogous to a pixel in a 2D image) is physically located at the spatial position where it is supposed to be, and emits light from that position in all directions to form a real 3D image in 3D space. Such a volumetric 3D display provides both physiological and psychological depth cues to the human visual system to truthfully perceive 3D objects. It yields a realistic spatial representation of 3D objects and simplifies our understanding of the complexity of 3D objects and the spatial relationships among them.

  17. Depth-enhanced integral imaging display system with electrically variable image planes using polymer-dispersed liquid-crystal layers.

    PubMed

    Kim, Yunhee; Choi, Heejin; Kim, Joohwan; Cho, Seong-Woo; Kim, Youngmin; Park, Gilbae; Lee, Byoungho

    2007-06-20

    A depth-enhanced three-dimensional integral imaging system with electrically variable image planes is proposed. For implementing the variable image planes, polymer-dispersed liquid-crystal (PDLC) films and a projector are adopted as a new display system in the integral imaging. Since the transparencies of PDLC films are electrically controllable, we can make each film diffuse the projected light successively with a different depth from the lens array. As a result, the proposed method enables control of the location of image planes electrically and enhances the depth. The principle of the proposed method is described, and experimental results are also presented.

  18. Analysis and design of stereoscopic display in stereo television endoscope system

    NASA Astrophysics Data System (ADS)

    Feng, Dawei

    2008-12-01

    Many 3D displays have been proposed for medical use. When we design and evaluate a new system, there are three demands from surgeons. The first priority is precision. Second, displayed images should be easy to understand. In addition, because surgery lasts for hours, surgeons do not want a fatiguing display. The stereo television endoscope researched in this paper images the celiac viscera on the photosurfaces of the left and right CCDs by imitating the human binocular stereo vision effect with a dual optical path system. The left and right video signals are processed by frequency multiplication and displayed on the monitor; the viewer can observe a stereo image with depth impression by using a polarized LCD screen and a pair of polarized glasses. Clinical experiments show that the stereo TV endoscope makes minimally invasive surgery safer and more reliable, shortens the operation time, and improves operation accuracy.

  19. Psychophysical Comparison Of A Video Display System To Film By Using Bone Fracture Images

    NASA Astrophysics Data System (ADS)

    Seeley, George W.; Stempski, Mark; Roehrig, Hans; Nudelman, Sol; Capp, M. P.

    1982-11-01

    This study investigated the possibility of using a video display system instead of film for radiological diagnosis. Also investigated were the relationships between characteristics of the system and the observer's accuracy level. Radiologists were used as observers. Thirty-six clinical bone fractures were separated into two matched sets of equal difficulty. The difficulty parameters and ratings were defined by a panel of expert bone radiologists at the Arizona Health Sciences Center, Radiology Department. These two sets of fracture images were then matched with verifiably normal images using parameters such as film type, angle of view, size, portion of anatomy, the film's density range, and the patient's age and sex. The two sets of images were then displayed, using a counterbalanced design, to each of the participating radiologists for diagnosis. Whenever a response was given to a video image, the radiologist used enhancement controls to "window in" on the grey levels of interest. During the TV phase, the radiologist was required to record the settings of the calibrated controls of the image enhancer during interpretation. At no time did any single radiologist see the same film in both modes. The study was designed so that a standard analysis of variance would show the effects of viewing mode (film vs TV), the effects due to stimulus set, and any interactions with observers. A signal detection analysis of observer performance was also performed. Results indicate that the TV display system is almost as good as the view box display; an average of only two more errors were made on the TV display. The difference between the systems has been traced to four observers who had poor accuracy on a small number of films viewed on the TV display. This information is now being correlated with the video system's signal-to-noise ratio (SNR), signal transfer function (STF), and resolution measurements, to obtain information on the basic display and enhancement requirements for a video-based radiologic system. Due to time constraints the results are not included here. The complete results of this study will be reported at the conference.

  20. Dual-view-zone tabletop 3D display system based on integral imaging.

    PubMed

    He, Min-Yang; Zhang, Han-Le; Deng, Huan; Li, Xiao-Wei; Li, Da-Hai; Wang, Qiong-Hua

    2018-02-01

    In this paper, we propose a dual-view-zone tabletop 3D display system based on integral imaging by using a multiplexed holographic optical element (MHOE) that has the optical properties of two sets of microlens arrays. The MHOE is recorded by a reference beam using the single-exposure method. The reference beam records the wavefronts of a microlens array from two different directions. Thus, when the display beam is projected on the MHOE, two wavefronts with the different directions will be rebuilt and the 3D virtual images can be reconstructed in two viewing zones. The MHOE has angle and wavelength selectivity. Under the conditions of the matched wavelength and the angle of the display beam, the diffraction efficiency of the MHOE is greatest. Because the unmatched light just passes through the MHOE, the MHOE has the advantage of a see-through display. The experimental results confirm the feasibility of the dual-view-zone tabletop 3D display system.

  1. Design of a single projector multiview 3D display system

    NASA Astrophysics Data System (ADS)

    Geng, Jason

    2014-03-01

    Multiview three-dimensional (3D) display is able to provide horizontal parallax to viewers, with high-resolution and full-color images being presented to each view. Most multiview 3D display systems are designed and implemented using multiple projectors, each generating the images for one view. Although this multi-projector design strategy is conceptually straightforward, its implementation often leads to a very expensive system and complicated calibration procedures. Even for a multiview system with a moderate number of projectors (e.g., 32 or 64), the cost of a multi-projector 3D display system may become prohibitive due to the cost and complexity of integrating multiple projectors. In this article, we describe an optical design technique for a class of multiview 3D display systems that use only a single projector. In this single projector multiview (SPM) system design, the multiple views of the 3D display are generated in a time-multiplexed fashion by a single high-speed projector with specially designed optical components, a scanning mirror, and a reflective mirror array. The images of all views are generated sequentially and projected via the specially designed optical system from different viewing directions towards the 3D display screen. The single projector is therefore able to generate the equivalent number of multiview images from multiple viewing directions, fulfilling the tasks of multiple projectors. An obvious advantage of the proposed SPM technique is the significant reduction of cost, size, and complexity, especially when the number of views is high. The SPM strategy also alleviates the time-consuming procedures for multi-projector calibration. The design method is flexible and scalable and can accommodate systems with different numbers of views.

  2. Interactive display system having a scaled virtual target zone

    DOEpatents

    Veligdan, James T.; DeSanto, Leonard

    2006-06-13

    A display system includes a waveguide optical panel having an inlet face and an opposite outlet face. A projector and imaging device cooperate with the panel for projecting a video image thereon. An optical detector bridges at least a portion of the waveguides for detecting a location on the outlet face within a target zone of an inbound light spot. A controller is operatively coupled to the imaging device and detector for displaying a cursor on the outlet face corresponding with the detected location of the spot within the target zone.

  3. A Low-Cost PC-Based Image Workstation for Dynamic Interactive Display of Three-Dimensional Anatomy

    NASA Astrophysics Data System (ADS)

    Barrett, William A.; Raya, Sai P.; Udupa, Jayaram K.

    1989-05-01

    A system for interactive definition, automated extraction, and dynamic interactive display of three-dimensional anatomy has been developed and implemented on a low-cost PC-based image workstation. An iconic display is used for staging predefined image sequences through specified increments of tilt and rotation over a solid viewing angle. Use of a fast processor facilitates rapid extraction and rendering of the anatomy into predefined image views. These views are formatted into a display matrix in a large image memory for rapid interactive selection and display of arbitrary spatially adjacent images within the viewing angle, thereby providing motion parallax depth cueing for efficient and accurate perception of true three-dimensional shape, size, structure, and spatial interrelationships of the imaged anatomy. The visual effect is that of holding and rotating the anatomy in the hand.

  4. Enhanced interfaces for web-based enterprise-wide image distribution.

    PubMed

    Jost, R Gilbert; Blaine, G James; Fritz, Kevin; Blume, Hartwig; Sadhra, Sarbjit

    2002-01-01

    Modern Web browsers support image distribution with two shortcomings: (1) image grayscale presentation at client workstations is often sub-optimal and generally inconsistent with the presentation state on diagnostic workstations and (2) an Electronic Patient Record (EPR) application usually cannot directly access images with an integrated viewer. We have modified our EPR and our Web-based image-distribution system to allow access to images from within the EPR. In addition, at the client workstation, a grayscale transformation is performed that consists of two components: a client-display-specific component based on the characteristic display function of the class of display system, and a modality-specific transformation that is downloaded with every image. The described techniques have been implemented in our institution and currently support enterprise-wide clinical image distribution. The effectiveness of the techniques is reviewed.
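
    A minimal sketch of the two-stage transform described above is given below: a modality-specific stage modeled here as a linear intensity window (imagined as the parameters downloaded with each image) followed by a client-display-specific stage modeled as a simple gamma curve. The function names, the window/gamma model, and the sample values are assumptions for illustration; the abstract does not specify the actual transformations.

```python
import numpy as np

def modality_transform(pixels, window_center, window_width):
    """Modality-specific stage: map stored pixel values into a normalized
    presentation range with a linear window (parameters assumed to be
    downloaded with each image)."""
    lo = window_center - window_width / 2.0
    hi = window_center + window_width / 2.0
    return np.clip((pixels - lo) / (hi - lo), 0.0, 1.0)

def display_transform(presentation, gamma=2.2):
    """Client-display-specific stage: compensate for the characteristic
    curve of the class of display (a plain gamma model is assumed here)."""
    return (presentation ** (1.0 / gamma) * 255.0).astype(np.uint8)

# Hypothetical 12-bit image and window values, for illustration only.
raw = np.random.randint(0, 4096, size=(512, 512)).astype(np.float64)
display_ready = display_transform(modality_transform(raw, window_center=1024, window_width=2048))
```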

  5. A programmable display layer for virtual reality system architectures.

    PubMed

    Smit, Ferdi Alexander; van Liere, Robert; Froehlich, Bernd

    2010-01-01

    Display systems typically operate at a minimum rate of 60 Hz. However, existing VR architectures generally produce application updates at a lower rate. Consequently, the display is not updated by the application every display frame, which causes a number of undesirable perceptual artifacts. We describe an architecture that provides a programmable display layer (PDL) in order to generate updated display frames. This replaces the default display behavior of repeating application frames until an update is available. We show three benefits of the architecture typical of VR. First, smooth motion is provided by generating intermediate display frames by per-pixel depth-image warping using 3D motion fields. Smooth motion eliminates various perceptual artifacts due to judder. Second, we implement fine-grained latency reduction at the display frame level using a synchronized prediction of simulation objects and the viewpoint. This improves the average quality and consistency of latency reduction. Third, a crosstalk reduction algorithm for consecutive display frames is implemented, which improves the quality of stereoscopic images. To evaluate the architecture, we compare image quality and latency to those of a classic level-of-detail approach.
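
    The intermediate-frame generation can be pictured with a much-simplified warping sketch. The version below uses a 2D screen-space motion field and does no depth testing or hole filling, so it only illustrates the idea of re-projecting the last application frame part-way along its motion; it is not the paper's per-pixel 3D motion-field algorithm.

```python
import numpy as np

def warp_frame(frame, motion_x, motion_y, t):
    """Forward-warp an application frame to an intermediate display frame.
    motion_x / motion_y give per-pixel screen-space motion in pixels per
    application frame, and t in [0, 1] is the fraction of the inter-frame
    interval at which the display frame is needed."""
    h, w = frame.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    xd = np.clip(np.round(xs + t * motion_x).astype(int), 0, w - 1)
    yd = np.clip(np.round(ys + t * motion_y).astype(int), 0, h - 1)
    warped = np.zeros_like(frame)
    warped[yd, xd] = frame[ys, xs]   # last write wins; no depth test or hole filling
    return warped

# Hypothetical frame and a uniform 4 px/frame horizontal motion field.
frame = np.random.randint(0, 255, (120, 160, 3), dtype=np.uint8)
mx = np.full((120, 160), 4.0)
my = np.zeros((120, 160))
intermediate = warp_frame(frame, mx, my, t=0.5)
```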

  6. Design and implementation of a PC-based image-guided surgical system.

    PubMed

    Stefansic, James D; Bass, W Andrew; Hartmann, Steven L; Beasley, Ryan A; Sinha, Tuhin K; Cash, David M; Herline, Alan J; Galloway, Robert L

    2002-11-01

    In interactive, image-guided surgery, current physical space position in the operating room is displayed on various sets of medical images used for surgical navigation. We have developed a PC-based surgical guidance system (ORION) which synchronously displays surgical position on up to four image sets and updates them in real time. There are three essential components which must be developed for this system: (1) accurately tracked instruments; (2) accurate registration techniques to map physical space to image space; and (3) methods to display and update the image sets on a computer monitor. For each of these components, we have developed a set of dynamic link libraries in MS Visual C++ 6.0 supporting various hardware tools and software techniques. Surgical instruments are tracked in physical space using an active optical tracking system. Several of the different registration algorithms were developed with a library of robust math kernel functions, and the accuracy of all registration techniques was thoroughly investigated. Our display was developed using the Win32 API for windows management and tomographic visualization, a frame grabber for live video capture, and OpenGL for visualization of surface renderings. We have begun to use this current implementation of our system for several surgical procedures, including open and minimally invasive liver surgery.
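
    Point-based rigid registration is one standard way of mapping tracked physical space into image space: fiducials are localized in both spaces and a least-squares rotation and translation are fitted. The sketch below uses the SVD-based solution; it illustrates the general technique only and is not necessarily one of the registration algorithms implemented in ORION.

```python
import numpy as np

def rigid_registration(physical_pts, image_pts):
    """Least-squares rigid (rotation + translation) fit that maps N
    physical-space fiducial points onto the corresponding image-space
    points; both inputs are (N, 3) arrays."""
    p_mean = physical_pts.mean(axis=0)
    q_mean = image_pts.mean(axis=0)
    H = (physical_pts - p_mean).T @ (image_pts - q_mean)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:      # guard against a reflection solution
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = q_mean - R @ p_mean
    return R, t

# Usage: map a tracked instrument tip from physical space to image space.
# R, t = rigid_registration(fiducials_physical, fiducials_image)
# tip_image = R @ tip_physical + t
```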

  7. Backscatter absorption gas imaging system

    DOEpatents

    McRae, Jr., Thomas G.

    1985-01-01

    A video imaging system for detecting hazardous gas leaks. Visual displays of invisible gas clouds are produced by radiation augmentation of the field of view of an imaging device by radiation corresponding to an absorption line of the gas to be detected. The field of view of an imager is irradiated by a laser. The imager receives both backscattered laser light and background radiation. When a detectable gas is present, the backscattered laser light is highly attenuated, producing a region of contrast or shadow on the image. A flying spot imaging system is utilized to synchronously irradiate and scan the area to lower laser power requirements. The imager signal is processed to produce a video display.

  8. Backscatter absorption gas imaging system

    DOEpatents

    McRae, T.G. Jr.

    A video imaging system for detecting hazardous gas leaks. Visual displays of invisible gas clouds are produced by radiation augmentation of the field of view of an imaging device by radiation corresponding to an absorption line of the gas to be detected. The field of view of an imager is irradiated by a laser. The imager receives both backscattered laser light and background radiation. When a detectable gas is present, the backscattered laser light is highly attenuated, producing a region of contrast or shadow on the image. A flying spot imaging system is utilized to synchronously irradiate and scan the area to lower laser power requirements. The imager signal is processed to produce a video display.

  9. Processing Infrared Images For Fire Management Applications

    NASA Astrophysics Data System (ADS)

    Warren, John R.; Pratt, William K.

    1981-12-01

    The USDA Forest Service has used airborne infrared systems for forest fire detection and mapping for many years. The transfer of the images from plane to ground and the transposition of fire spots and perimeters to maps has been performed manually. A new system has been developed which uses digital image processing, transmission, and storage. Interactive graphics, high-resolution color display, calculations, and computer model compatibility are featured in the system. Images are acquired by an IR line scanner and converted to 1024 x 1024 x 8 bit frames for transmission to the ground at a 1.544 Mbit/s rate over a 14.7 GHz carrier. Individual frames are received and stored, then transferred to a solid state memory to refresh the display at a conventional 30 frames per second rate. Line length and area calculations, false color assignment, X-Y scaling, and image enhancement are available. Fire spread can be calculated for display and fire perimeters plotted on maps. The performance requirements, basic system, and image processing will be described.

  10. Design and evaluation of web-based image transmission and display with different protocols

    NASA Astrophysics Data System (ADS)

    Tan, Bin; Chen, Kuangyi; Zheng, Xichuan; Zhang, Jianguo

    2011-03-01

    There are many Web-based image accessing technologies used in the medical imaging area, such as component-based (ActiveX control) thick-client Web display, zero-footprint thin-client Web viewers (also called server-side processing Web viewers), Flash Rich Internet Application (RIA), or HTML5-based Web display. Different Web display methods perform differently in different network environments. In this presentation, we give an evaluation of two developed Web-based image display systems. The first one is used for thin-client Web display. It works between a PACS Web server with a WADO interface and a thin client; the PACS Web server provides JPEG format images to HTML pages. The second one is for thick-client Web display. It works between a PACS Web server with a WADO interface and a thick client running in browsers containing an ActiveX control, Flash RIA program, or HTML5 scripts. The PACS Web server provides native DICOM format images or a JPIP stream for these clients.
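
    For the thin-client case, retrieving a server-rendered JPEG through the WADO interface is a single HTTP GET. The sketch below uses the standard WADO-URI query parameters; the server address and the UIDs are placeholders, not values from the presentation.

```python
import requests  # third-party HTTP client

# Hypothetical PACS Web server address and placeholder UIDs.
WADO_URL = "http://pacs.example.org/wado"
params = {
    "requestType": "WADO",
    "studyUID": "1.2.840.113619.2.55.3",
    "seriesUID": "1.2.840.113619.2.55.3.1",
    "objectUID": "1.2.840.113619.2.55.3.1.1",
    "contentType": "image/jpeg",   # thin client: ask the server for a rendered JPEG
}

response = requests.get(WADO_URL, params=params, timeout=10)
response.raise_for_status()
with open("slice.jpg", "wb") as f:
    f.write(response.content)
```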

  11. Wrap-Around Out-the-Window Sensor Fusion System

    NASA Technical Reports Server (NTRS)

    Fox, Jeffrey; Boe, Eric A.; Delgado, Francisco; Secor, James B.; Clark, Michael R.; Ehlinger, Kevin D.; Abernathy, Michael F.

    2009-01-01

    The Advanced Cockpit Evaluation System (ACES) includes communication, computing, and display subsystems, mounted in a van, that synthesize out-the-window views approximating the outside world as it would be seen from the cockpit of a crewed spacecraft or aircraft, or from the operator station of a remotely controlled ground vehicle or UAV (unmanned aerial vehicle). The system includes five flat-panel display units arranged approximately in a semicircle around an operator, like cockpit windows. The scene displayed on each panel represents the view through the corresponding cockpit window. Each display unit is driven by a personal computer equipped with a video-capture card that accepts live input from any of a variety of sensors (typically, visible and/or infrared video cameras). Software running in the computers blends the live video images with synthetic images that could be generated, for example, from heads-up-display outputs, waypoints, corridors, or from satellite photographs of the same geographic region. Data from a Global Positioning System receiver and an inertial navigation system aboard the remote vehicle are used by the ACES software to keep the synthetic and live views in registration. If the live image were to fail, the synthetic scenes could still be displayed to maintain situational awareness.
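
    The per-panel composition of live and synthetic imagery can be sketched as a weighted blend with a fall-back to the synthetic scene when the live feed drops out. The blending weight and the function below are assumptions for illustration; the abstract does not describe how ACES actually combines the two sources.

```python
import numpy as np

def compose_window(live_frame, synthetic_frame, live_ok, alpha=0.6):
    """Blend a registered live sensor frame with the synthetic scene for one
    display panel; if the live feed is unavailable, show the synthetic
    scene alone so situational awareness is maintained."""
    if not live_ok or live_frame is None:
        return synthetic_frame
    blended = (alpha * live_frame.astype(np.float32)
               + (1.0 - alpha) * synthetic_frame.astype(np.float32))
    return blended.astype(np.uint8)

# Hypothetical frames for one of the five panels.
live = np.random.randint(0, 255, (480, 640, 3), dtype=np.uint8)
synthetic = np.zeros_like(live)
panel_image = compose_window(live, synthetic, live_ok=True)
```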

  12. High resolution image processing on low-cost microcomputers

    NASA Technical Reports Server (NTRS)

    Miller, R. L.

    1993-01-01

    Recent advances in microcomputer technology have resulted in systems that rival the speed, storage, and display capabilities of traditionally larger machines. Low-cost microcomputers can provide a powerful environment for image processing. A new software program which offers sophisticated image display and analysis on IBM-based systems is presented. Designed specifically for a microcomputer, this program provides a wide-range of functions normally found only on dedicated graphics systems, and therefore can provide most students, universities and research groups with an affordable computer platform for processing digital images. The processing of AVHRR images within this environment is presented as an example.

  13. Advanced Three-Dimensional Display System

    NASA Technical Reports Server (NTRS)

    Geng, Jason

    2005-01-01

    A desktop-scale, computer-controlled display system, initially developed for NASA and now known as the VolumeViewer(TradeMark), generates three-dimensional (3D) images of 3D objects in a display volume. This system differs fundamentally from stereoscopic and holographic display systems: The images generated by this system are truly 3D in that they can be viewed from almost any angle, without the aid of special eyeglasses. It is possible to walk around the system while gazing at its display volume to see a displayed object from a changing perspective, and multiple observers standing at different positions around the display can view the object simultaneously from their individual perspectives, as though the displayed object were a real 3D object. At the time of writing this article, only partial information on the design and principle of operation of the system was available. It is known that the system includes a high-speed, silicon-backplane, ferroelectric-liquid-crystal spatial light modulator (SLM), multiple high-power lasers for projecting images in multiple colors, a rotating helix that serves as a moving screen for displaying voxels [volume cells or volume elements, in analogy to pixels (picture cells or picture elements) in two-dimensional (2D) images], and a host computer. The rotating helix and its motor drive are the only moving parts. Under control by the host computer, a stream of 2D image patterns is generated on the SLM and projected through optics onto the surface of the rotating helix. The system utilizes a parallel pixel/voxel-addressing scheme: All the pixels of the 2D pattern on the SLM are addressed simultaneously by laser beams. This parallel addressing scheme overcomes the difficulty of achieving both high resolution and a high frame rate in a raster scanning or serial addressing scheme. It has been reported that the structure of the system is simple and easy to build, that the optical design and alignment are not difficult, and that the system can be built by use of commercial off-the-shelf products. A prototype of the system displays an image of 1,024 by 768 by 170 (=133,693,440) voxels. In future designs, the resolution could be increased. The maximum number of voxels that can be generated depends upon the spatial resolution of the SLM and the speed of rotation of the helix. For example, one could use an available SLM that has 1,024 by 1,024 pixels. Incidentally, this SLM is capable of operation at a switching speed of 300,000 frames per second. Implementation of full-color displays in future versions of the system would be straightforward: One could use three SLMs for red, green, and blue, respectively, and the colors of the voxels could be automatically controlled. An optically simpler alternative would be to use a single red/green/blue light projector and synchronize the projection of each color with the generation of patterns for that color on a single SLM.

  14. Operating scheme for the light-emitting diode array of a volumetric display that exhibits multiple full-color dynamic images

    NASA Astrophysics Data System (ADS)

    Hirayama, Ryuji; Shiraki, Atsushi; Nakayama, Hirotaka; Kakue, Takashi; Shimobaba, Tomoyoshi; Ito, Tomoyoshi

    2017-07-01

    We designed and developed a control circuit for a three-dimensional (3-D) light-emitting diode (LED) array to be used in volumetric displays exhibiting full-color dynamic 3-D images. The circuit was implemented on a field-programmable gate array; therefore, pulse-width modulation, which requires high-speed processing, could be operated in real time. We experimentally evaluated the developed system by measuring the luminance of an LED with varying input and confirmed that the system works appropriately. In addition, we demonstrated that the volumetric display exhibits different full-color dynamic two-dimensional images in two orthogonal directions. Each of the exhibited images could be obtained only from the prescribed viewpoint. Such directional characteristics of the system are beneficial for applications, including digital signage, security systems, art, and amusement.
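
    The pulse-width modulation itself is done in hardware on the FPGA, but the principle is easy to illustrate in software: each 8-bit channel value of a voxel becomes a duty cycle over a fixed number of time slots. The slot count and the array sizes below are arbitrary illustrative values.

```python
import numpy as np

def pwm_schedule(intensity, slots=255):
    """Return an on/off schedule of length `slots` whose duty cycle matches
    an 8-bit intensity value (a software illustration of the PWM that the
    paper implements on an FPGA)."""
    return np.arange(slots) < intensity

# The same comparison, vectorized over a small RGB voxel volume:
volume = np.random.randint(0, 256, size=(4, 4, 4, 3), dtype=np.uint8)
schedules = np.arange(255)[None, None, None, None, :] < volume[..., None]
# schedules[x, y, z, c, t] is True when LED (x, y, z), channel c, is on in slot t.
```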

  15. Mesoscale and severe storms (Mass) data management and analysis system

    NASA Technical Reports Server (NTRS)

    Hickey, J. S.; Karitani, S.; Dickerson, M.

    1984-01-01

    Progress on the Mesoscale and Severe Storms (MASS) data management and analysis system is described. An interactive atmospheric database management software package to convert four types of data (Sounding, Single Level, Grid, Image) into standard random access formats is implemented and integrated with the MASS AVE80 Series general purpose plotting and graphics display data analysis software package. An interactive analysis and display graphics software package (AVE80) to analyze large volumes of conventional and satellite-derived meteorological data is enhanced to provide imaging/color graphics display utilizing color video hardware integrated into the MASS computer system. Local and remote smart-terminal capability is provided by installing APPLE III computer systems within individual scientists' offices and integrating them with the MASS system, thus providing color video display, graphics, and character display of the four data types.

  16. Use of the stereoscopic virtual reality display system for the detection and characterization of intracranial aneurysms: A comparison with conventional computed tomography workstation and 3D rotational angiography.

    PubMed

    Liu, Xiujuan; Tao, Haiquan; Xiao, Xigang; Guo, Binbin; Xu, Shangcai; Sun, Na; Li, Maotong; Xie, Li; Wu, Changjun

    2018-07-01

    This study aimed to compare the diagnostic performance of the stereoscopic virtual reality display system with the conventional computed tomography (CT) workstation and three-dimensional rotational angiography (3DRA) for intracranial aneurysm detection and characterization, with a focus on small aneurysms and those near the bone. First, 42 patients with suspected intracranial aneurysms underwent both 256-row CT angiography (CTA) and 3DRA. Volume rendering (VR) images were captured using the conventional CT workstation. Next, VR images were transferred to the stereoscopic virtual reality display system. Two radiologists independently assessed the results that were obtained using the conventional CT workstation and stereoscopic virtual reality display system. The 3DRA results were considered as the ultimate reference standard. Based on 3DRA images, 38 aneurysms were confirmed in 42 patients. Two cases were misdiagnosed and 1 was missed when the traditional CT workstation was used. The sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and accuracy of the conventional CT workstation were 94.7%, 85.7%, 97.3%, 75%, and 99.3%, respectively, on a per-aneurysm basis. The stereoscopic virtual reality display system missed a case. The sensitivity, specificity, PPV, NPV, and accuracy of the stereoscopic virtual reality display system were 100%, 85.7%, 97.4%, 100%, and 97.8%, respectively. No difference was observed in the accuracy of the traditional CT workstation, stereoscopic virtual reality display system, and 3DRA in detecting aneurysms. The stereoscopic virtual reality display system has some advantages in detecting small aneurysms and those near the bone. The virtual reality stereoscopic vision obtained through the system was found to be a useful tool in intracranial aneurysm diagnosis and pre-operative 3D imaging. Copyright © 2018 Elsevier B.V. All rights reserved.

  17. A low-cost multimodal head-mounted display system for neuroendoscopic surgery.

    PubMed

    Xu, Xinghua; Zheng, Yi; Yao, Shujing; Sun, Guochen; Xu, Bainan; Chen, Xiaolei

    2018-01-01

    With rapid advances in technology, wearable devices such as head-mounted displays (HMDs) have been adopted for various uses in medical science, ranging from simply aiding in fitness to assisting surgery. We aimed to investigate the feasibility and practicability of a low-cost multimodal HMD system in neuroendoscopic surgery. A multimodal HMD system, mainly consisting of an HMD with two built-in displays, an action camera, and a laptop computer displaying reconstructed medical images, was developed to assist neuroendoscopic surgery. With this tightly integrated system, the neurosurgeon could freely switch between endoscopic images, three-dimensional (3D) reconstructed virtual endoscopy images, and images of the surrounding environment. Using a leap motion controller, the neurosurgeon could adjust or rotate the 3D virtual endoscopic images at a distance, at will, to better understand the positional relation between lesions and normal tissues. A total of 21 consecutive patients with ventricular system diseases underwent neuroendoscopic surgery with the aid of this system. All operations were accomplished successfully, and no system-related complications occurred. The HMD was comfortable to wear and easy to operate. The screen resolution of the HMD was high enough for the neurosurgeon to operate carefully. With the system, the neurosurgeon could gain a better understanding of the lesions by freely switching among images of different modalities. The system had a steep learning curve, meaning that skill with it increased quickly. Compared with commercially available surgical assistant instruments, this system was relatively low-cost. The multimodal HMD system is feasible, practical, helpful, and relatively cost-efficient in neuroendoscopic surgery.

  18. Near-infrared fluorescence goggle system with complementary metal–oxide–semiconductor imaging sensor and see-through display

    PubMed Central

    Liu, Yang; Njuguna, Raphael; Matthews, Thomas; Akers, Walter J.; Sudlow, Gail P.; Mondal, Suman; Tang, Rui

    2013-01-01

    We have developed a near-infrared (NIR) fluorescence goggle system based on the complementary metal–oxide–semiconductor active pixel sensor imaging and see-through display technologies. The fluorescence goggle system is a compact wearable intraoperative fluorescence imaging and display system that can guide surgery in real time. The goggle is capable of detecting fluorescence of indocyanine green solution in the picomolar range. Aided by NIR quantum dots, we successfully used the fluorescence goggle to guide sentinel lymph node mapping in a rat model. We further demonstrated the feasibility of using the fluorescence goggle in guiding surgical resection of breast cancer metastases in the liver in conjunction with NIR fluorescent probes. These results illustrate the diverse potential use of the goggle system in surgical procedures. PMID:23728180

  19. Combining volumetric edge display and multiview display for expression of natural 3D images

    NASA Astrophysics Data System (ADS)

    Yasui, Ryota; Matsuda, Isamu; Kakeya, Hideki

    2006-02-01

    In the present paper the authors present a novel stereoscopic display method combining volumetric edge display technology and multiview display technology to realize presentation of natural 3D images in which the viewers do not suffer from the contradiction between binocular convergence and focal accommodation of the eyes, which causes eyestrain and sickness. We adopt a volumetric display method only for edge drawing, while a stereoscopic approach is used for the flat areas of the image. Since focal accommodation of our eyes is affected only by the edge part of the image, natural focal accommodation can be induced if the edges of the 3D image are drawn at the proper depth. The conventional stereo-matching technique can give us robust depth values for the pixels that constitute noticeable edges. Occlusion and gloss of the objects can also be roughly expressed with the proposed method, since the stereoscopic approach is used for the flat areas. With this system, many users can view natural 3D objects at a consistent position and posture at the same time. A simple optometric experiment using a refractometer suggests that the proposed method can give us 3D images without contradiction between binocular convergence and focal accommodation.
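
    A rough sketch of how per-edge depth could be obtained from conventional stereo matching is shown below, using standard block matching and an edge detector on a synthetic stereo pair. The matcher parameters and the synthetic data are placeholders for illustration and are not taken from the paper.

```python
import cv2
import numpy as np

# Synthetic stereo pair: the right view is the left view shifted by 8 px,
# so block matching should recover a roughly constant disparity.
left = np.random.randint(0, 256, (240, 320), dtype=np.uint8)
right = np.roll(left, -8, axis=1)

# Conventional block-matching stereo gives a per-pixel disparity map
# (OpenCV returns fixed-point values scaled by 16).
matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0

# Keep depth only at noticeable edges, since only edges are drawn
# volumetrically; flat regions are left to the stereoscopic part.
edges = cv2.Canny(left, 50, 150)
edge_disparity = np.where(edges > 0, disparity, np.nan)
```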

  20. Stereoscopic medical imaging collaboration system

    NASA Astrophysics Data System (ADS)

    Okuyama, Fumio; Hirano, Takenori; Nakabayasi, Yuusuke; Minoura, Hirohito; Tsuruoka, Shinji

    2007-02-01

    The computerization of the clinical record and the realization of multimedia have brought improvement of the medical service in medical facilities. It is very important for patients to obtain comprehensible informed consent. Therefore, the doctor should plainly explain the purpose and the content of the diagnoses and treatments to the patient. We propose and design a Telemedicine Imaging Collaboration System which presents a three-dimensional medical image, such as X-ray CT or MRI, as a stereoscopic image by using a virtual common information space and operating the image from a remote location. This system is composed of two personal computers, two 15-inch stereoscopic parallax-barrier-type LCD displays (LL-151D, Sharp), one 1 Gbps router, and 1000base LAN cables. The software is composed of a DICOM format data transfer program, an image operation program, a communication program between the two personal computers, and a real-time rendering program. Two identical images of 512×768 pixels are displayed on the two stereoscopic LCD displays, and both images can be expanded or reduced by mouse operation. This system can offer a comprehensible three-dimensional image of the diseased part that the doctor and the patient can easily understand, depending on their needs.

  1. Image Quality Analysis and Optical Performance Requirement for Micromirror-Based Lissajous Scanning Displays

    PubMed Central

    Du, Weiqi; Zhang, Gaofei; Ye, Liangchen

    2016-01-01

    Micromirror-based scanning displays have been the focus of a variety of applications. Lissajous scanning displays have advantages in terms of power consumption; however, the image quality is not good enough. The main reason for this is the varying size and the contrast ratio of pixels at different positions of the image. In this paper, the Lissajous scanning trajectory is analyzed and a new method based on the diamond pixel is introduced to Lissajous displays. The optical performance of micromirrors is discussed. A display system demonstrator is built, and tests of resolution and contrast ratio are conducted. The test results show that the new Lissajous scanning method can be used in displays by using diamond pixels and image quality remains stable at different positions. PMID:27187390

  2. Image Quality Analysis and Optical Performance Requirement for Micromirror-Based Lissajous Scanning Displays.

    PubMed

    Du, Weiqi; Zhang, Gaofei; Ye, Liangchen

    2016-05-11

    Micromirror-based scanning displays have been the focus of a variety of applications. Lissajous scanning displays have advantages in terms of power consumption; however, the image quality is not good enough. The main reason for this is the varying size and the contrast ratio of pixels at different positions of the image. In this paper, the Lissajous scanning trajectory is analyzed and a new method based on the diamond pixel is introduced to Lissajous displays. The optical performance of micromirrors is discussed. A display system demonstrator is built, and tests of resolution and contrast ratio are conducted. The test results show that the new Lissajous scanning method can be used in displays by using diamond pixels and image quality remains stable at different positions.
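
    The scan trajectory underlying both records is simply two sinusoidal mirror deflections at different frequencies. The frequency ratio, phase, and sample count in the sketch below are illustrative values only; the resolution and contrast analysis in the paper depends on the actual micromirror drive parameters.

```python
import numpy as np

def lissajous_trajectory(fx, fy, samples, phase=np.pi / 2):
    """Sample one repeat period of the Lissajous pattern traced by a biaxial
    micromirror driven at (normalized) frequencies fx and fy; returns
    normalized deflections in [-1, 1]."""
    t = np.linspace(0.0, 1.0, samples, endpoint=False)
    x = np.sin(2 * np.pi * fx * t + phase)
    y = np.sin(2 * np.pi * fy * t)
    return x, y

x, y = lissajous_trajectory(fx=19, fy=20, samples=10000)
```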

  3. Network of fully integrated multispecialty hospital imaging systems

    NASA Astrophysics Data System (ADS)

    Dayhoff, Ruth E.; Kuzmak, Peter M.

    1994-05-01

    The Department of Veterans Affairs (VA) DHCP Imaging System records clinically significant diagnostic images selected by medical specialists in a variety of departments, including radiology, cardiology, gastroenterology, pathology, dermatology, hematology, surgery, podiatry, dental clinic, and emergency room. These images are displayed on workstations located throughout a medical center. All images are managed by the VA's hospital information system, allowing integrated displays of text and image data across medical specialties. Clinicians can view screens of `thumbnail' images for all studies or procedures performed on a selected patient. Two VA medical centers currently have DHCP Imaging Systems installed, and others are planned. All VA medical centers and other VA facilities are connected by a wide area packet-switched network. The VA's electronic mail software has been modified to allow inclusion of binary data such as images in addition to the traditional text data. Testing of this multimedia electronic mail system is underway for medical teleconsultation.

  4. Digital Image Processing Overview For Helmet Mounted Displays

    NASA Astrophysics Data System (ADS)

    Parise, Michael J.

    1989-09-01

    Digital image processing provides a means to manipulate an image and presents a user with a variety of display formats that are not available in the analog image processing environment. When performed in real time and presented on a Helmet Mounted Display, system capability and flexibility are greatly enhanced. The information content of a display can be increased by the addition of real time insets and static windows from secondary sensor sources, near real time 3-D imaging from a single sensor can be achieved, graphical information can be added, and enhancement techniques can be employed. Such increased functionality is generating a considerable amount of interest in the military and commercial markets. This paper discusses some of these image processing techniques and their applications.

  5. STEREOMATRIX 3-D display system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Whiteside, Stephen Earl

    1973-08-01

    STEREOMATRIX is a large-screen interactive 3-D laser display system which presents computer-generated wire figures stereoscopically. The presented image can be rotated, translated, and scaled by the system user and the perspective of the image is changed according to the position of the user. A cursor may be positioned in three dimensions to identify points and allows communication with the computer.

  6. Volumetric 3D display with multi-layered active screens for enhanced depth perception (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Kim, Hak-Rin; Park, Min-Kyu; Choi, Jun-Chan; Park, Ji-Sub; Min, Sung-Wook

    2016-09-01

    Three-dimensional (3D) display technology has been studied actively because it can offer more realistic images than conventional 2D displays. Various perceptual factors such as accommodation, binocular parallax, convergence, and motion parallax are used to recognize a 3D image. Glasses-type 3D displays use only binocular disparity among the 3D depth cues. However, this method causes visual fatigue and headaches due to accommodation conflict and distorted depth perception. Thus, holographic and volumetric displays are expected to be ideal 3D displays. Holographic displays can represent realistic images satisfying all of the depth-perception factors, but they require a tremendous amount of data and fast signal processing. Volumetric 3D displays represent images using voxels, which occupy physical volume; however, a large amount of data is required to represent the depth information with voxels. In order to encode 3D information simply, a compact type of depth-fused 3D (DFD) display is introduced, which creates a polarization-distributed depth map (PDDM) image containing both a 2D color image and a depth image. In this paper, a new volumetric 3D display system is shown using PDDM images controlled by a polarization controller. In order to introduce the PDDM image, the polarization states of the light passing through the spatial light modulator (SLM) were analyzed by Stokes parameters as a function of gray level. Based on the analysis, a polarization controller is designed to convert the PDDM image into sectioned depth images. After synchronizing the PDDM images with the active screens, the reconstructed 3D image can be realized. Acknowledgment: This work was supported by 'The Cross-Ministry Giga KOREA Project' grant from the Ministry of Science, ICT and Future Planning, Korea.

  7. Imaging System

    NASA Technical Reports Server (NTRS)

    1995-01-01

    The 1100C Virtual Window is based on technology developed under NASA Small Business Innovation Research (SBIR) contracts to Ames Research Center. For example, under one contract Dimension Technologies, Inc. developed a large autostereoscopic display for scientific visualization applications. The Virtual Window employs an innovative illumination system to deliver the depth and color of true 3D imaging. Its applications include surgery and Magnetic Resonance Imaging scans, viewing for teleoperated robots, training, and aviation cockpit displays.

  8. Method and apparatus for the simultaneous display and correlation of independently generated images

    DOEpatents

    Vaitekunas, Jeffrey J.; Roberts, Ronald A.

    1991-01-01

    An apparatus and method for location-by-location correlation of multiple images from Non-Destructive Evaluation (NDE) and other sources. Multiple images of a material specimen are displayed on one or more monitors of an interactive graphics system. Specimen landmarks are located in each image and mapping functions from a reference image to each other image are calculated using the landmark locations. A location selected by positioning a cursor in the reference image is mapped to the other images and location identifiers are simultaneously displayed in those images. Movement of the cursor in the reference image causes simultaneous movement of the location identifiers in the other images to positions corresponding to the location of the reference image cursor.
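
    The patent does not state the form of the mapping functions, but a least-squares affine fit to the landmark correspondences is one plausible realization. The sketch below fits such a mapping and uses it to transfer a cursor position from the reference image into another image; the landmark coordinates are made-up values.

```python
import numpy as np

def fit_affine(ref_landmarks, other_landmarks):
    """Least-squares 2D affine mapping from reference-image landmarks to the
    corresponding landmarks in another image; inputs are (N, 2) arrays."""
    n = len(ref_landmarks)
    A = np.hstack([ref_landmarks, np.ones((n, 1))])       # (N, 3) design matrix
    coeffs, _, _, _ = np.linalg.lstsq(A, other_landmarks, rcond=None)
    return coeffs                                          # (3, 2) affine coefficients

def map_cursor(point, coeffs):
    """Map a cursor position in the reference image into the other image."""
    return np.append(point, 1.0) @ coeffs

ref = np.array([[10, 12], [200, 30], [50, 180], [220, 210]], dtype=float)
oth = np.array([[14, 20], [210, 44], [58, 196], [232, 228]], dtype=float)
coeffs = fit_affine(ref, oth)
print(map_cursor(np.array([100.0, 100.0]), coeffs))
```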

  9. Real time imaging of infrared scene data generated by the Naval Postgraduate School Infrared Search and Target Designation (NPS-IRSTD) system

    NASA Astrophysics Data System (ADS)

    Baca, Michael J.

    1990-09-01

    A system to display images generated by the Naval Postgraduate School Infrared Search and Target Designation (a modified AN/SAR-8 Advanced Development Model) in near real time was developed using a 33 MHz NIC computer as the central controller. This computer was enhanced with a Data Translation DT2861 Frame Grabber for image processing and an interface board designed and constructed at NPS to provide synchronization between the IRSTD and Frame Grabber. Images are displayed in false color in a video raster format on a 512 by 480 pixel resolution monitor. Using FORTRAN, programs have been written to acquire, unscramble, expand and display a 3 deg sector of data. The time line for acquisition, processing and display has been analyzed and repetition periods of less than four seconds for successive screen displays have been achieved. This represents a marked improvement over previous methods necessitating slower Direct Memory Access transfers of data into the Frame Grabber. Recommendations are made for further improvements to enhance the speed and utility of images produced.

  10. Optical design and testing: introduction.

    PubMed

    Liang, Chao-Wen; Koshel, John; Sasian, Jose; Breault, Robert; Wang, Yongtian; Fang, Yi Chin

    2014-10-10

    Optical design and testing has numerous applications in industrial, military, consumer, and medical settings. Assembling a complete imaging or nonimage optical system may require the integration of optics, mechatronics, lighting technology, optimization, ray tracing, aberration analysis, image processing, tolerance compensation, and display rendering. This issue features original research ranging from the optical design of image and nonimage optical stimuli for human perception, optics applications, bio-optics applications, 3D display, solar energy system, opto-mechatronics to novel imaging or nonimage modalities in visible and infrared spectral imaging, modulation transfer function measurement, and innovative interferometry.

  11. Graphics Processing Unit (GPU) implementation of image processing algorithms to improve system performance of the Control, Acquisition, Processing, and Image Display System (CAPIDS) of the Micro-Angiographic Fluoroscope (MAF).

    PubMed

    Vasan, S N Swetadri; Ionita, Ciprian N; Titus, A H; Cartwright, A N; Bednarek, D R; Rudin, S

    2012-02-23

    We present the image processing upgrades implemented on a Graphics Processing Unit (GPU) in the Control, Acquisition, Processing, and Image Display System (CAPIDS) for the custom Micro-Angiographic Fluoroscope (MAF) detector. Most of the image processing currently implemented in the CAPIDS system is pixel independent; that is, the operation on each pixel is the same and the operation on one does not depend upon the result from the operation on another, allowing the entire image to be processed in parallel. GPU hardware was developed for exactly this kind of massively parallel processing. Thus, for an algorithm with a high degree of parallelism, a GPU implementation is much faster than a CPU implementation. The image processing algorithm upgrades implemented on the CAPIDS system include flat-field correction, temporal filtering, image subtraction, roadmap mask generation, and display windowing and leveling. A comparison between the previous and the upgraded version of CAPIDS is presented to demonstrate how the improvement is achieved. By performing the image processing on a GPU, significant improvements in timing and frame rate have been achieved, including stable operation of the system at 30 fps during a fluoroscopy run, a DSA run, and a roadmap procedure, with automatic image windowing and leveling during each frame.
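
    The pixel-independent character of these operations is exactly what maps well onto a GPU: every output pixel depends only on the corresponding pixels of the input frames. The NumPy sketch below shows flat-field correction and window/leveling in that element-wise form (the same expressions port directly to GPU array libraries); the formulas are generic textbook versions, not the CAPIDS implementation itself.

```python
import numpy as np

def flat_field_correct(raw, dark, flat, eps=1e-6):
    """Pixel-independent flat-field correction: the gain is derived from the
    dark- and flat-field frames and applied element-wise."""
    raw, dark, flat = (a.astype(np.float64) for a in (raw, dark, flat))
    gain = np.mean(flat - dark) / np.maximum(flat - dark, eps)
    return (raw - dark) * gain

def window_level(image, window, level):
    """Map the chosen intensity window to the 8-bit display range."""
    lo, hi = level - window / 2.0, level + window / 2.0
    return (np.clip((image - lo) / (hi - lo), 0.0, 1.0) * 255).astype(np.uint8)

# Illustrative frames (values are placeholders, not detector data).
raw = np.random.randint(0, 4096, (1024, 1024))
dark = np.full_like(raw, 100)
flat = np.full_like(raw, 3000)
display = window_level(flat_field_correct(raw, dark, flat), window=2000, level=1500)
```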

  12. Image degradation by glare in radiologic display devices

    NASA Astrophysics Data System (ADS)

    Badano, Aldo; Flynn, Michael J.

    1997-05-01

    No electronic devices are currently available that can display digital radiographs without loss of visual information compared to traditional transilluminated film. Light scattering within the glass faceplate of cathode-ray tube (CRT) devices causes excessive glare that reduces image contrast. This glare, along with ambient light reflection, has been recognized as a significant limitation for radiologic applications. Efforts to control the effect of glare and ambient light reflection in CRTs include the use of absorptive glass and thin film coatings. In the near future, flat panel displays (FPDs) with thin emissive structures should provide very low-glare, high-performance devices. We have used an optical Monte Carlo simulation to evaluate the effect of glare on image quality for typical CRT and flat panel display devices. The trade-off between display brightness and image contrast is described. For CRT systems, achieving a good glare ratio requires a reduction of brightness to 30-40 percent of the maximum potential brightness. For FPD systems, similar glare performance can be achieved while maintaining 80 percent of the maximum potential brightness.

  13. Restoration of moving binary images degraded owing to phosphor persistence.

    PubMed

    Cherri, A K; Awwal, A A; Karim, M A; Moon, D L

    1991-09-10

    The degraded images of dynamic objects obtained by using a phosphor-based electro-optical display are analyzed in terms of dynamic modulation transfer function (DMTF) and temporal characteristics of the display system. The direct correspondence between the DMTF and image smear is used in developing real-time techniques for the restoration of degraded images.

  14. Automatic optical inspection of regular grid patterns with an inspection camera used below the Shannon-Nyquist criterion for optical resolution

    NASA Astrophysics Data System (ADS)

    Ferreira, Flávio P.; Forte, Paulo M. F.; Felgueiras, Paulo E. R.; Bret, Boris P. J.; Belsley, Michael S.; Nunes-Pereira, Eduardo J.

    2017-02-01

    An Automatic Optical Inspection (AOI) system for the optical inspection of imaging devices used in the automotive industry, using inspection optics with lower spatial resolution than the device under inspection, is described. The system is robust, has no moving parts, and has a short cycle time. Its main advantage is that it is capable of detecting and quantifying defects in regular patterns while working below the Shannon-Nyquist criterion for optical resolution, using a single low-resolution image sensor. It is easily scalable, which is an important advantage in industrial applications, since the same inspecting sensor can be reused for increasingly higher spatial resolutions of the devices to be inspected. The optical inspection is implemented with a notch multi-band Fourier filter, making the procedure especially fitted for regular patterns, like the ones that can be produced in image displays and Head Up Displays (HUDs). The regular patterns are used in the production line only, for inspection purposes. For image displays, functional defects are detected at the level of a sub-image display grid element unit. Functional defects are the ones impairing the function of the display, and are preferred in AOI to direct geometric imaging, since they are the ones directly related to the end-user experience. The shift in emphasis from geometric imaging to functional imaging is critical, since it is this that allows quantitative inspection below Shannon-Nyquist. For HUDs, functional defect detection addresses defects resulting from the combined effect of the image display and the image-forming optics.
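
    The essence of the notch multi-band Fourier filter can be sketched as follows: find the strong harmonics of the regular pattern in the 2D spectrum, suppress small discs around them, and inspect the residual, which highlights deviations from the pattern. The peak threshold and notch radius below are guesses for illustration, not parameters from the paper.

```python
import numpy as np

def notch_residual(image, dc_keep=4, peak_factor=50.0, notch_radius=3):
    """Suppress the periodic harmonics of a regular pattern with notches in
    the Fourier domain and return the residual image."""
    F = np.fft.fftshift(np.fft.fft2(image.astype(np.float64)))
    mag = np.abs(F)
    h, w = image.shape
    yy, xx = np.ogrid[:h, :w]
    cy, cx = h // 2, w // 2
    dc_zone = (np.abs(yy - cy) <= dc_keep) & (np.abs(xx - cx) <= dc_keep)
    peaks = (mag > peak_factor * np.median(mag)) & ~dc_zone
    keep = np.ones((h, w), dtype=bool)
    for y, x in zip(*np.nonzero(peaks)):     # notch a small disc around each harmonic
        keep &= (yy - y) ** 2 + (xx - x) ** 2 > notch_radius ** 2
    return np.fft.ifft2(np.fft.ifftshift(F * keep)).real

# Illustrative regular pattern plus noise; the residual de-emphasizes the grid.
grid = np.sin(np.linspace(0, 40 * np.pi, 256))[None, :] * np.sin(np.linspace(0, 40 * np.pi, 256))[:, None]
residual = notch_residual(grid + 0.2 * (np.random.rand(256, 256) - 0.5))
```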

  15. Holographic display system for restoration of sight to the blind

    PubMed Central

    Goetz, G A; Mandel, Y; Manivanh, R; Palanker, D V; Čižmár, T

    2013-01-01

    Objective: We present a holographic near-the-eye display system enabling optical approaches for sight restoration to the blind, such as photovoltaic retinal prosthesis, optogenetic and other photoactivation techniques. We compare it with conventional LCD or DLP-based displays in terms of image quality, field of view, optical efficiency and safety. Approach: We detail the optical configuration of the holographic display system and its characterization using a phase-only spatial light modulator. Main results: We describe approaches to controlling the zero diffraction order and speckle related issues in holographic display systems and assess the image quality of such systems. We show that holographic techniques offer significant advantages in terms of peak irradiance and power efficiency, and enable designs that are inherently safer than LCD or DLP-based systems. We demonstrate the performance of our holographic display system in the assessment of cortical response to alternating gratings projected onto the retinas of rats. Significance: We address the issues associated with the design of high brightness, near-the-eye display systems and propose solutions to the efficiency and safety challenges with an optical design which could be miniaturized and mounted onto goggles. PMID:24045579

  16. Tangible display systems: bringing virtual surfaces into the real world

    NASA Astrophysics Data System (ADS)

    Ferwerda, James A.

    2012-03-01

    We are developing tangible display systems that enable natural interaction with virtual surfaces. Tangible display systems are based on modern mobile devices that incorporate electronic image displays, graphics hardware, tracking systems, and digital cameras. Custom software allows the orientation of a device and the position of the observer to be tracked in real time. Using this information, realistic images of surfaces with complex textures and material properties, illuminated by environment-mapped lighting, can be rendered to the screen at interactive rates. Tilting or moving in front of the device produces realistic changes in surface lighting and material appearance. In this way, tangible displays allow virtual surfaces to be observed and manipulated as naturally as real ones, with the added benefit that surface geometry and material properties can be modified in real time. We demonstrate the utility of tangible display systems in four application areas: material appearance research; computer-aided appearance design; enhanced access to digital library and museum collections; and new tools for digital artists.

  17. [Bone drilling simulation by three-dimensional imaging].

    PubMed

    Suto, Y; Furuhata, K; Kojima, T; Kurokawa, T; Kobayashi, M

    1989-06-01

    The three-dimensional display technique has a wide range of medical applications. Pre-operative planning is one typical application: in orthopedic surgery, three-dimensional image processing has been used very successfully. We have employed this technique in pre-operative planning for orthopedic surgery, and have developed a simulation system for bone-drilling. Positive results were obtained by pre-operative rehearsal; when a region of interest is indicated by means of a mouse on the three-dimensional image displayed on the CRT, the corresponding region appears on the slice image which is displayed simultaneously. Consequently, the status of the bone-drilling is constantly monitored. In developing this system, we have placed emphasis on the quality of the reconstructed three-dimensional images, on fast processing, and on the easy operation of the surgical planning simulation.

  18. Display management subsystem, version 1: A user's eye view

    NASA Technical Reports Server (NTRS)

    Parker, Dolores

    1986-01-01

    The structure and application functions of the Display Management Subsystem (DMS) are described. The DMS, a subsystem of the Transportable Applications Executive (TAE), was designed to provide a device-independent interface for an image processing and display environment. The system is callable by C and FORTRAN applications, portable to accommodate different image analysis terminals, and easily expandable to meet local needs. Generic applications are also available for performing many image processing tasks.

  19. Reduce volume of head-up display by image stitching

    NASA Astrophysics Data System (ADS)

    Chiu, Yi-Feng; Su, Guo-Dung J.

    2016-09-01

    Head-up Display (HUD) is a safety feature for automobile drivers. Although some HUD systems are already available as commercial products, their images are too small to show assistance information, and the volume of the HUD unit is too large. We propose a HUD consisting of micro-projectors, a rear-projection screen, and a microlens array (MLA); the light source measures 28 mm x 14 mm and produces a 200 mm x 100 mm image 3 meters from the driver. We use the MLA to reduce the volume by virtual image stitching. The designed HUD package dimensions are 12 cm x 12 cm x 9 cm. It is able to show speed, map-navigation, and night vision information. We used a Liquid Crystal Display (LCD) as our image source because of the brighter image output required and its minimal volume occupancy. The MLA is a multi-aperture system; the proposed MLA consists of many optical channels, each transmitting a segment of the whole field of view. The design of the system provides the stitching of the partial images, so that the whole virtual image can be seen.

  20. System and method for disrupting suspect objects

    DOEpatents

    Gladwell, T. Scott; Garretson, Justin R; Hobart, Clinton G; Monda, Mark J

    2013-07-09

    A system and method for disrupting at least one component of a suspect object is provided. The system includes a source for passing radiation through the suspect object, a screen for receiving the radiation passing through the suspect object and generating at least one image therefrom, a weapon having a discharge deployable therefrom, and a targeting unit. The targeting unit displays the image(s) of the suspect object and aims the weapon at a disruption point on the displayed image such that the weapon may be positioned to deploy the discharge at the disruption point whereby the suspect object is disabled.

  1. Flexible high-resolution display systems for the next generation of radiology reading rooms

    NASA Astrophysics Data System (ADS)

    Caban, Jesus J.; Wood, Bradford J.; Park, Adrian

    2007-03-01

    A flexible, scalable, high-resolution display system is presented to support the next generation of radiology reading rooms or interventional radiology suites. The project aims to create an environment for radiologists that will simultaneously facilitate image interpretation, analysis, and understanding while lowering visual and cognitive stress. Displays currently in use present radiologists with technical challenges in exploring complex datasets, which we seek to address. These include resolution and brightness, differences between display and ambient lighting, and varying degrees of complexity, in addition to side-by-side comparison of time-variant and 2D/3D images. We address these issues through a scalable projector-based system that uses our custom-designed geometrical and photometrical calibration process to create a seamless, bright, high-resolution display environment that can reduce the visual fatigue commonly experienced by radiologists. The system we have designed uses an array of casually aligned projectors to cooperatively increase overall resolution and brightness. Images from a set of projectors in their narrowest zoom are combined at a shared projection surface, thus increasing the global "pixels per inch" (PPI) of the display environment. Two primary challenges - geometric calibration and photometric calibration - remained to be resolved before our high-resolution display system could be used in a radiology reading room or procedure suite. In this paper we present a method that accomplishes those calibrations and creates a flexible high-resolution display environment that appears seamless, sharp, and uniform across different devices.
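
    Geometric calibration of casually aligned projectors is often expressed as a homography between each projector's pixel space and a common screen coordinate frame. The sketch below shows that single step with made-up point correspondences; it is a generic illustration, not the authors' custom calibration process, and it does not cover the photometric side.

```python
import cv2
import numpy as np

# Projected calibration dots in projector pixel coordinates, and where a
# camera observes them on the shared screen (placeholder coordinates).
projector_pts = np.array([[100, 100], [900, 120], [880, 700], [120, 680]], dtype=np.float32)
screen_pts = np.array([[212, 187], [1530, 204], [1495, 1010], [240, 975]], dtype=np.float32)

# Homography from this projector's pixel space to the screen frame.
H, _ = cv2.findHomography(projector_pts, screen_pts)

# Warping projector-space content with H predicts where it appears in the
# screen frame; rendering screen-frame content for this projector would use
# the inverse homography instead, and overlaps would need blending masks.
frame = np.zeros((768, 1024, 3), dtype=np.uint8)
in_screen_frame = cv2.warpPerspective(frame, H, (1920, 1080))
```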

  2. Formulation of coarse integral imaging and its applications

    NASA Astrophysics Data System (ADS)

    Kakeya, Hideki

    2008-02-01

    This paper formulates the notion of coarse integral imaging and applies it to practical designs of 3D displays for robot teleoperation and automobile HUDs. 3D display technologies are demanded in applications where real-time and precise depth perception is required, such as teleoperation of robot manipulators and HUDs for automobiles. 3D displays for these applications, however, have not been realized so far. In conventional 3D display technologies, the eyes are usually induced to focus on the screen, which is not suitable for the above purposes. To overcome this problem the author adopts a coarse integral imaging system, where each component lens is large enough to cover dozens of times more pixels than the number of views. The merit of this system is that it can induce the viewer's focus onto planes of various depths by generating a real image or a virtual image off the screen. This system, however, has major disadvantages in image quality, caused by lens aberration and discontinuity at the joints of component lenses. In this paper the author proposes practical optical designs for 3D monitors for robot teleoperation and 3D HUDs for automobiles that overcome the problems of aberration and discontinuity of images.

  3. High color fidelity thin film multilayer systems for head-up display use

    NASA Astrophysics Data System (ADS)

    Tsou, Yi-Jen D.; Ho, Fang C.

    1996-09-01

    Head-up displays are being adopted increasingly in automotive vehicles for indication and position/navigation purposes. An optical combiner, which allows the driver to receive image information from outside and inside the automobile, is the essential part of this display device. Two multilayer thin film combiner coating systems with distinctive polarization selectivity and broad-band spectral neutrality are discussed. One of the coating systems was designed to be located at the lower portion of the windshield. The coating reduced the exterior glare by approximately 45% and provided about 70% average see-through transmittance in addition to the interior information display. The other coating system was designed to be integrated with the sunshield located at the upper portion of the windshield. The coating reflected the interior information display while reducing direct sunlight penetration to 25%. Color fidelity for both interior and exterior images was maintained in both systems. This facilitated the display of full-color maps. Both coating systems were absorptionless and environmentally durable. Designs, fabrication, and performance of these coating systems are addressed.

  4. Three-Dimensional Tactical Display and Method for Visualizing Data with a Probability of Uncertainty

    DTIC Science & Technology

    2009-08-03

    replacing the more complex and less intuitive displays presently provided in such contexts as commercial aircraft, marine vehicles, and air traffic...free space-virtual reality, 3-D image display system which is enabled by using a unique form of Aerogel as the primary display media. A preferred...generates and displays a real 3-D image in the Aerogel matrix. [0014] U.S. Patent No. 6,285,317, issued September 4, 2001, to Ong, discloses a

  5. Three-Dimensional Tactical Display and Method for Visualizing Data with a Probability of Uncertainty

    DTIC Science & Technology

    2009-08-03

    replacing the more complex and less intuitive displays presently provided in such contexts as commercial aircraft, marine vehicles, and air traffic...space-virtual reality, 3-D image display system which is enabled by using a unique form of Aerogel as the primary display media. A preferred...and displays a real 3-D image in the Aerogel matrix. [0014] U.S. Patent No. 6,285,317, issued September 4, 2001, to Ong, discloses a navigation

  6. In-line positioning of ultrasound images using wireless remote display system with tablet computer facilitates ultrasound-guided radial artery catheterization.

    PubMed

    Tsuchiya, Masahiko; Mizutani, Koh; Funai, Yusuke; Nakamoto, Tatsuo

    2016-02-01

    Ultrasound-guided procedures may be easier to perform when the operator's eye axis, needle puncture site, and ultrasound image display form a straight line in the puncture direction. However, such methods have not been well tested in clinical settings because that arrangement is often impossible due to limited space in the operating room. We developed a wireless remote display system for ultrasound devices using a tablet computer (iPad Mini), which allows easy display of images at nearly any location chosen by the operator. We hypothesized that the in-line layout of ultrasound images provided by this system would allow for secure and quick catheterization of the radial artery. We enrolled first-year medical interns (n = 20) who had no prior experience with ultrasound-guided radial artery catheterization to perform it using a short-axis out-of-plane approach with two different methods. With the conventional method, only the ultrasound machine, placed at the side of the head of the patient across the targeted forearm, was utilized. With the tablet method, the ultrasound images were displayed on an iPad Mini positioned on the arm in alignment with the operator's eye axis and needle puncture direction. The success rate and time required for catheterization were compared between the two methods. The success rate was significantly higher (100 vs. 70%, P = 0.02) and catheterization time significantly shorter (28.5 ± 7.5 vs. 68.2 ± 14.3 s, P < 0.001) with the tablet method as compared to the conventional method. An ergonomic straight arrangement of the image display is crucial for successful and quick completion of ultrasound-guided arterial catheterization. The present remote display system is a practical method for providing such an arrangement.

  7. Three-dimensional image acquisition and reconstruction system on a mobile device based on computer-generated integral imaging.

    PubMed

    Erdenebat, Munkh-Uchral; Kim, Byeong-Jun; Piao, Yan-Ling; Park, Seo-Yeon; Kwon, Ki-Chul; Piao, Mei-Lan; Yoo, Kwan-Hee; Kim, Nam

    2017-10-01

    A mobile three-dimensional image acquisition and reconstruction system using a computer-generated integral imaging technique is proposed. A depth camera connected to the mobile device acquires the color and depth data of a real object simultaneously, and an elemental image array is generated based on the original three-dimensional information for the object, with lens array specifications input into the mobile device. The three-dimensional visualization of the real object is reconstructed on the mobile display through optical or digital reconstruction methods. The proposed system is implemented successfully and the experimental results certify that the system is an effective and interesting method of displaying real three-dimensional content on a mobile device.
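
    A toy version of the computer-generated integral imaging step is sketched below: each color-and-depth sample is projected through the center of every elemental lens onto the display plane behind it. The pinhole model, the unit handling, and all parameter values are simplifications assumed for illustration; a real implementation would use the calibrated lens-array specifications mentioned above.

```python
import numpy as np

def generate_elemental_images(color, depth, lens_pitch_px, gap, num_lenses):
    """Build an elemental image array from a color image and a (strictly
    positive) depth map using a pinhole model of each elemental lens."""
    h, w = depth.shape
    eia = np.zeros((num_lenses * lens_pitch_px, num_lenses * lens_pitch_px, 3), dtype=color.dtype)
    ys, xs = np.mgrid[0:h, 0:w]
    for ly in range(num_lenses):
        for lx in range(num_lenses):
            # Lens center expressed in object-pixel coordinates (scales assumed matched).
            cx = (lx + 0.5) * w / num_lenses
            cy = (ly + 0.5) * h / num_lenses
            # Project each object point through the lens center onto the plane at distance `gap`.
            u = lens_pitch_px / 2 + (cx - xs) * gap / depth
            v = lens_pitch_px / 2 + (cy - ys) * gap / depth
            inside = (u >= 0) & (u < lens_pitch_px) & (v >= 0) & (v < lens_pitch_px)
            eu = (lx * lens_pitch_px + u[inside]).astype(int)
            ev = (ly * lens_pitch_px + v[inside]).astype(int)
            eia[ev, eu] = color[ys[inside], xs[inside]]
    return eia

# Illustrative color + depth input (a real system would use the depth camera data).
color = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
depth = np.full((64, 64), 40.0)
eia = generate_elemental_images(color, depth, lens_pitch_px=16, gap=8.0, num_lenses=8)
```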

  8. VICAR - VIDEO IMAGE COMMUNICATION AND RETRIEVAL

    NASA Technical Reports Server (NTRS)

    Wall, R. J.

    1994-01-01

    VICAR (Video Image Communication and Retrieval) is a general purpose image processing software system that has been under continuous development since the late 1960's. Originally intended for data from the NASA Jet Propulsion Laboratory's unmanned planetary spacecraft, VICAR is now used for a variety of other applications including biomedical image processing, cartography, earth resources, and geological exploration. The development of this newest version of VICAR emphasized a standardized, easily-understood user interface, a shield between the user and the host operating system, and a comprehensive array of image processing capabilities. Structurally, VICAR can be divided roughly into two parts: a suite of applications programs and an executive which serves as the interface between the applications, the operating system, and the user. There are several hundred applications programs ranging in function from interactive image editing, data compression/decompression, and map projection, to blemish, noise, and artifact removal, mosaic generation, and pattern recognition and location. An information management system designed specifically for handling image related data can merge image data with other types of data files. The user accesses these programs through the VICAR executive, which consists of a supervisor and a run-time library. From the viewpoint of the user and the applications programs, the executive is an environment that is independent of the operating system. VICAR does not replace the host computer's operating system; instead, it overlays the host resources. The core of the executive is the VICAR Supervisor, which is based on NASA Goddard Space Flight Center's Transportable Applications Executive (TAE). Various modifications and extensions have been made to optimize TAE for image processing applications, resulting in a user friendly environment. The rest of the executive consists of the VICAR Run-Time Library, which provides a set of subroutines (image I/O, label I/O, parameter I/O, etc.) to facilitate image processing and provide the fastest I/O possible while maintaining a wide variety of capabilities. The run-time library also includes the Virtual Raster Display Interface (VRDI) which allows display oriented applications programs to be written for a variety of display devices using a set of common routines. (A display device can be any frame-buffer type device which is attached to the host computer and has memory planes for the display and manipulation of images. A display device may have any number of separate 8-bit image memory planes (IMPs), a graphics overlay plane, pseudo-color capabilities, hardware zoom and pan, and other features). The VRDI supports the following display devices: VICOM (Gould/Deanza) IP8500, RAMTEK RM-9465, ADAGE (Ikonas) IK3000 and the International Imaging Systems IVAS. VRDI's purpose is to provide a uniform operating environment not only for an application programmer, but for the user as well. The programmer is able to write programs without being concerned with the specifics of the device for which the application is intended. The VICAR Interactive Display Subsystem (VIDS) is a collection of utilities for easy interactive display and manipulation of images on a display device. VIDS has characteristics of both the executive and an application program, and offers a wide menu of image manipulation options. VIDS uses the VRDI to communicate with display devices.
The first step in using VIDS to analyze and enhance an image (one simple example of VICAR's numerous capabilities) is to examine the histogram of the image. The histogram is a plot of frequency of occurrence for each pixel value (0 - 255) loaded in the image plane. If, for example, the histogram shows that there are no pixel values below 64 or above 192, the histogram can be "stretched" so that the value of 64 is mapped to zero and 192 is mapped to 255. Now the user can use the full dynamic range of the display device to display the data and better see its contents. Another example of a VIDS procedure is the JMOVIE command, which allows the user to run animations interactively on the display device. JMOVIE uses the concept of "frames," the individual images that comprise the animation to be viewed. The user loads images into the frames after the size and number of frames have been selected. VICAR's source languages are primarily FORTRAN and C, with some VAX Assembler and array processor code. The VICAR run-time library is designed to work equally easily from either FORTRAN or C. The program was implemented on a DEC VAX series computer operating under VMS 4.7. The virtual memory required is 1.5MB. Approximately 180,000 blocks of storage are needed for the saveset. VICAR (version 2.3A/3G/13H) is a copyrighted work with all copyright vested in NASA and is available by license for a period of ten (10) years to approved licensees. This program was developed in 1989.
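
    The histogram stretch described above amounts to a simple linear remapping of pixel values. The following sketch is illustrative only (it is not VIDS code; NumPy is assumed) and maps the occupied range 64-192 onto the full 8-bit range 0-255:

        import numpy as np

        def stretch(image, low=64, high=192):
            """Linearly remap [low, high] onto the full 8-bit range [0, 255]."""
            scaled = (image.astype(np.float32) - low) * 255.0 / (high - low)
            return np.clip(scaled, 0, 255).astype(np.uint8)

        # Example: an image whose histogram occupies only values 64..192
        image = np.random.randint(64, 193, size=(512, 512), dtype=np.uint8)
        enhanced = stretch(image)   # pixel values now span the full 0..255 range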

  9. Sensor fusion display evaluation using information integration models in enhanced/synthetic vision applications

    NASA Technical Reports Server (NTRS)

    Foyle, David C.

    1993-01-01

    Based on existing integration models in the psychological literature, an evaluation framework is developed to assess sensor fusion displays as might be implemented in an enhanced/synthetic vision system. The proposed evaluation framework for evaluating the operator's ability to use such systems is a normative approach: The pilot's performance with the sensor fusion image is compared to models' predictions based on the pilot's performance when viewing the original component sensor images prior to fusion. This allows for the determination as to when a sensor fusion system leads to: poorer performance than one of the original sensor displays, clearly an undesirable system in which the fused sensor system causes some distortion or interference; better performance than with either single sensor system alone, but at a sub-optimal level compared to model predictions; optimal performance compared to model predictions; or, super-optimal performance, which may occur if the operator were able to use some highly diagnostic 'emergent features' in the sensor fusion display, which were unavailable in the original sensor displays.

  10. Real object-based 360-degree integral-floating display using multiple depth camera

    NASA Astrophysics Data System (ADS)

    Erdenebat, Munkh-Uchral; Dashdavaa, Erkhembaatar; Kwon, Ki-Chul; Wu, Hui-Ying; Yoo, Kwan-Hee; Kim, Young-Seok; Kim, Nam

    2015-03-01

    A novel 360-degree integral-floating display based on the real object is proposed. The general procedure of the display system is similar with conventional 360-degree integral-floating displays. Unlike previously presented 360-degree displays, the proposed system displays the 3D image generated from the real object in 360-degree viewing zone. In order to display real object in 360-degree viewing zone, multiple depth camera have been utilized to acquire the depth information around the object. Then, the 3D point cloud representations of the real object are reconstructed according to the acquired depth information. By using a special point cloud registration method, the multiple virtual 3D point cloud representations captured by each depth camera are combined as single synthetic 3D point cloud model, and the elemental image arrays are generated for the newly synthesized 3D point cloud model from the given anamorphic optic system's angular step. The theory has been verified experimentally, and it shows that the proposed 360-degree integral-floating display can be an excellent way to display real object in the 360-degree viewing zone.
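
    As a rough illustration of the point cloud combination step described above (the paper's actual registration method is more elaborate), the sketch below assumes each depth camera's pose relative to a common world frame is already known and simply transforms and concatenates the per-camera clouds; NumPy and the example poses are assumptions:

        import numpy as np

        def merge_point_clouds(clouds, poses):
            """clouds: list of (N_i, 3) point arrays in each camera's frame.
            poses: list of (R, t) pairs mapping camera frame -> world frame."""
            merged = [points @ R.T + t for points, (R, t) in zip(clouds, poses)]
            return np.vstack(merged)          # single synthetic point cloud model

        # Two cameras facing each other (poses are purely illustrative)
        R0, t0 = np.eye(3), np.zeros(3)
        R1, t1 = np.diag([-1.0, 1.0, -1.0]), np.array([0.0, 0.0, 2.0])
        cloud = merge_point_clouds(
            [np.random.rand(1000, 3), np.random.rand(1000, 3)], [(R0, t0), (R1, t1)])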

  11. Ten inch Planar Optic Display

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beiser, L.; Veligdan, J.

    A Planar Optic Display (POD) is being built and tested for suitability as a high brightness replacement for the cathode ray tube (CRT). The POD display technology utilizes a laminated optical waveguide structure which allows a projection type of display to be constructed in a thin (1 to 2 inch) housing. Inherent in the optical waveguide is a black cladding matrix which gives the display a black appearance leading to very high contrast. A Digital Micromirror Device (DMD) from Texas Instruments is used to create video images in conjunction with a 100 milliwatt green solid state laser. An anamorphic optical system is used to inject light into the POD to form a stigmatic image. In addition to the design of the POD screen, we discuss: image formation, image projection, and optical design constraints.

  12. Hologlyphics: volumetric image synthesis performance system

    NASA Astrophysics Data System (ADS)

    Funk, Walter

    2008-02-01

    This paper describes a novel volumetric image synthesis system and artistic technique, which generate moving volumetric images in real-time, integrated with music. The system, called the Hologlyphic Funkalizer, is performance based, wherein the images and sound are controlled by a live performer, for the purposes of entertaining a live audience and creating a performance art form unique to volumetric and autostereoscopic images. While currently configured for a specific parallax barrier display, the Hologlyphic Funkalizer's architecture is completely adaptable to various volumetric and autostereoscopic display technologies. Sound is distributed through a multi-channel audio system; currently a quadraphonic speaker setup is implemented. The system controls volumetric image synthesis, production of music and spatial sound via acoustic analysis and human gestural control, using a dedicated control panel, motion sensors, and multiple musical keyboards. Music can be produced by external acoustic instruments, pre-recorded sounds or custom audio synthesis integrated with the volumetric image synthesis. Aspects of the sound can control the evolution of images and vice versa. Sounds can be associated and interact with images; for example, voice synthesis can be combined with an animated volumetric mouth, where nuances of generated speech modulate the mouth's expressiveness. Different images can be sent to up to 4 separate displays. The system applies many novel volumetric special effects, and extends several film and video special effects into the volumetric realm. Extensive and varied content has been developed and shown to live audiences by a live performer. Real-world applications will be explored, with feedback on the human factors.

  13. A PC-based multispectral scanner data evaluation workstation: Application to Daedalus scanners

    NASA Technical Reports Server (NTRS)

    Jedlovec, Gary J.; James, Mark W.; Smith, Matthew R.; Atkinson, Robert J.

    1991-01-01

    In late 1989, a personal computer (PC)-based data evaluation workstation was developed to support post flight processing of Multispectral Atmospheric Mapping Sensor (MAMS) data. The MAMS Quick View System (QVS) is an image analysis and display system designed to provide the capability to evaluate Daedalus scanner data immediately after an aircraft flight. Even in its original form, the QVS offered the portability of a personal computer with the advanced analysis and display features of a mainframe image analysis system. It was recognized, however, that the original QVS had its limitations, both in speed and processing of MAMS data. Recent efforts are presented that focus on overcoming earlier limitations and adapting the system to a new data tape structure. In doing so, the enhanced Quick View System (QVS2) will accommodate data from any of the four spectrometers used with the Daedalus scanner on the NASA ER2 platform. The QVS2 is designed around the AST 486/33 MHz CPU personal computer and comes with 10 EISA expansion slots, keyboard, and 4.0 mbytes of memory. Specialized PC-McIDAS software provides the main image analysis and display capability for the system. Image analysis and display of the digital scanner data is accomplished with PC-McIDAS software.

  14. High-speed reconstruction of compressed images

    NASA Astrophysics Data System (ADS)

    Cox, Jerome R., Jr.; Moore, Stephen M.

    1990-07-01

    A compression scheme is described that allows high-definition radiological images with greater than 8-bit intensity resolution to be represented by 8-bit pixels. Reconstruction of the images with their original intensity resolution can be carried out by means of a pipeline architecture suitable for compact, high-speed implementation. A reconstruction system is described that can be fabricated according to this approach and placed between an 8-bit display buffer and the display's video system thereby allowing contrast control of images at video rates. Results for 50 CR chest images are described showing that error-free reconstruction of the original 10-bit CR images can be achieved.
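
    The abstract does not spell out the actual compression mapping, so the sketch below is only a generic illustration of the idea: 10-bit pixels are represented by 8-bit codes through a forward lookup table and expanded again through an inverse table placed between the display buffer and the video system. A square-root companding curve is assumed here, and unlike the published scheme it is not guaranteed to be error-free:

        import numpy as np

        # Assumed square-root companding: 10-bit input -> 8-bit display code
        forward_lut = np.round(np.sqrt(np.arange(1024) / 1023.0) * 255).astype(np.uint8)
        # Inverse table applied in the reconstruction pipeline: 8-bit -> 10-bit
        inverse_lut = np.round((np.arange(256) / 255.0) ** 2 * 1023).astype(np.uint16)

        def compress_10_to_8(img10):
            return forward_lut[img10]     # stored in the 8-bit display buffer

        def reconstruct_8_to_10(img8):
            return inverse_lut[img8]      # expanded on the fly at video rates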

  15. 3D interactive surgical visualization system using mobile spatial information acquisition and autostereoscopic display.

    PubMed

    Fan, Zhencheng; Weng, Yitong; Chen, Guowen; Liao, Hongen

    2017-07-01

    Three-dimensional (3D) visualization of preoperative and intraoperative medical information becomes more and more important in minimally invasive surgery. We develop a 3D interactive surgical visualization system using mobile spatial information acquisition and autostereoscopic display for surgeons to observe surgical target intuitively. The spatial information of regions of interest (ROIs) is captured by the mobile device and transferred to a server for further image processing. Triangular patches of intraoperative data with texture are calculated with a dimension-reduced triangulation algorithm and a projection-weighted mapping algorithm. A point cloud selection-based warm-start iterative closest point (ICP) algorithm is also developed for fusion of the reconstructed 3D intraoperative image and the preoperative image. The fusion images are rendered for 3D autostereoscopic display using integral videography (IV) technology. Moreover, 3D visualization of medical image corresponding to observer's viewing direction is updated automatically using mutual information registration method. Experimental results show that the spatial position error between the IV-based 3D autostereoscopic fusion image and the actual object was 0.38±0.92mm (n=5). The system can be utilized in telemedicine, operating education, surgical planning, navigation, etc. to acquire spatial information conveniently and display surgical information intuitively. Copyright © 2017 Elsevier Inc. All rights reserved.
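
    The fusion step above relies on a warm-start iterative closest point (ICP) registration. As a rough, generic illustration (not the authors' point cloud selection or warm-start strategy), a basic point-to-point ICP with an initial pose guess can be sketched as follows; NumPy is assumed:

        import numpy as np

        def best_rigid_transform(src, dst):
            """Least-squares rotation R and translation t aligning src to dst (Kabsch)."""
            src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
            H = (src - src_c).T @ (dst - dst_c)
            U, _, Vt = np.linalg.svd(H)
            R = Vt.T @ U.T
            if np.linalg.det(R) < 0:      # avoid reflections
                Vt[-1] *= -1
                R = Vt.T @ U.T
            return R, dst_c - R @ src_c

        def icp(src, dst, R0=np.eye(3), t0=np.zeros(3), iters=20):
            """Basic point-to-point ICP; (R0, t0) is the warm-start initial guess."""
            R, t = R0, t0
            for _ in range(iters):
                moved = src @ R.T + t
                # brute-force nearest neighbour in dst for every moved point
                idx = np.argmin(((moved[:, None, :] - dst[None, :, :]) ** 2).sum(-1), axis=1)
                R, t = best_rigid_transform(src, dst[idx])
            return R, t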

  16. Volumetric 3D Display System with Static Screen

    NASA Technical Reports Server (NTRS)

    Geng, Jason

    2011-01-01

    Current display technology has relied on flat, 2D screens that cannot truly convey the third dimension of visual information: depth. In contrast to conventional visualization that is primarily based on 2D flat screens, the volumetric 3D display possesses a true 3D display volume and physically places each 3D voxel in displayed 3D images at the true 3D (x,y,z) spatial position. Each voxel, analogous to a pixel in a 2D image, emits light from that position to form a real 3D image in the eyes of the viewers. Such true volumetric 3D display technology provides both physiological (accommodation, convergence, binocular disparity, and motion parallax) and psychological (image size, linear perspective, shading, brightness, etc.) depth cues to human visual systems to help in the perception of 3D objects. In a volumetric 3D display, viewers can watch the displayed 3D images from a complete 360° view without using any special eyewear. The volumetric 3D display techniques may lead to a quantum leap in information display technology and can dramatically change the ways humans interact with computers, which can lead to significant improvements in the efficiency of learning and knowledge management processes. Within a block of glass, a large number of tiny voxel dots is created by using a recently available machining technique called laser subsurface engraving (LSE). The LSE is able to produce tiny physical crack points (as small as 0.05 mm in diameter) at any (x,y,z) location within the cube of transparent material. The crack dots, when illuminated by a light source, scatter the light around and form visible voxels within the 3D volume. The locations of these tiny voxels are strategically determined such that each can be illuminated by a light ray from a high-resolution digital mirror device (DMD) light engine. The distribution of these voxels occupies the full display volume within the static 3D glass screen. This design eliminates the moving screen seen in previous approaches, so there is no image jitter, and it has an inherent parallel mechanism for 3D voxel addressing. High spatial resolution is possible, and a full-color display is easy to implement. The system is low-cost and low-maintenance.

  17. Future directions in 3-dimensional imaging and neurosurgery: stereoscopy and autostereoscopy.

    PubMed

    Christopher, Lauren A; William, Albert; Cohen-Gadol, Aaron A

    2013-01-01

    Recent advances in 3-dimensional (3-D) stereoscopic imaging have enabled 3-D display technologies in the operating room. We find 2 beneficial applications for the inclusion of 3-D imaging in clinical practice. The first is the real-time 3-D display in the surgical theater, which is useful for the neurosurgeon and observers. In surgery, a 3-D display can include a cutting-edge mixed-mode graphic overlay for image-guided surgery. The second application is to improve the training of residents and observers in neurosurgical techniques. This article documents the requirements of both applications for a 3-D system in the operating room and for clinical neurosurgical training, followed by a discussion of the strengths and weaknesses of the current and emerging 3-D display technologies. An important comparison between a new autostereoscopic display without glasses and current stereo display with glasses improves our understanding of the best applications for 3-D in neurosurgery. Today's multiview autostereoscopic display has 3 major benefits: It does not require glasses for viewing; it allows multiple views; and it improves the workflow for image-guided surgery registration and overlay tasks because of its depth-rendering format and tools. Two current limitations of the autostereoscopic display are that resolution is reduced and depth can be perceived as too shallow in some cases. Higher-resolution displays will be available soon, and the algorithms for depth inference from stereo can be improved. The stereoscopic and autostereoscopic systems from microscope cameras to displays were compared by the use of recorded and live content from surgery. To the best of our knowledge, this is the first report of application of autostereoscopy in neurosurgery.

  18. Design of video processing and testing system based on DSP and FPGA

    NASA Astrophysics Data System (ADS)

    Xu, Hong; Lv, Jun; Chen, Xi'ai; Gong, Xuexia; Yang, Chen'na

    2007-12-01

    Based on a high-speed Digital Signal Processor (DSP) and a Field Programmable Gate Array (FPGA), a compact, low-power video capture, processing and display system is presented. In this system, a triple buffering scheme is used for capture and display so that the application can always get a new buffer without waiting. The DSP provides image processing capability and is used to detect the boundary of the workpiece's image. A video graduation technique is used to aim at the position to be tested, which also enhances the system's flexibility. Character superposition, implemented on the DSP, is used to display the test result on the screen in character format. This system can process image information in real time, ensure test precision, and help to enhance product quality and quality management.
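
    The triple buffering idea mentioned above can be illustrated with the generic sketch below (not the authors' DSP/FPGA firmware): the capture side always has a free buffer to fill, the display side always reads the newest completed frame, and neither blocks on the other:

        import threading

        class TripleBuffer:
            def __init__(self, make_buffer):
                self._buffers = [make_buffer() for _ in range(3)]
                self._write, self._ready, self._read = 0, 1, 2
                self._lock = threading.Lock()
                self._fresh = False

            def write_slot(self):
                return self._buffers[self._write]        # capture fills this buffer

            def publish(self):
                with self._lock:                         # swap write <-> ready
                    self._write, self._ready = self._ready, self._write
                    self._fresh = True

            def latest(self):
                with self._lock:                         # swap ready <-> read if new
                    if self._fresh:
                        self._read, self._ready = self._ready, self._read
                        self._fresh = False
                return self._buffers[self._read]         # display reads this buffer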

  19. Evaluation of Alternate Concepts for Synthetic Vision Flight Displays With Weather-Penetrating Sensor Image Inserts During Simulated Landing Approaches

    NASA Technical Reports Server (NTRS)

    Parrish, Russell V.; Busquets, Anthony M.; Williams, Steven P.; Nold, Dean E.

    2003-01-01

    A simulation study was conducted in 1994 at Langley Research Center that used 12 commercial airline pilots repeatedly flying complex Microwave Landing System (MLS)-type approaches to parallel runways under Category IIIc weather conditions. Two sensor insert concepts of 'Synthetic Vision Systems' (SVS) were used in the simulated flights, with a more conventional electro-optical display (similar to a Head-Up Display with raster capability for sensor imagery), flown under less restrictive visibility conditions, used as a control condition. The SVS concepts combined the sensor imagery with a computer-generated image (CGI) of an out-the-window scene based on an onboard airport database. Various scenarios involving runway traffic incursions (taxiing aircraft and parked fuel trucks) and navigational system position errors (both static and dynamic) were used to assess the pilots' ability to manage the approach task with the display concepts. The two SVS sensor insert concepts contrasted the simple overlay of sensor imagery on the CGI scene without additional image processing (the SV display) to the complex integration (the AV display) of the CGI scene with pilot-decision aiding using both object and edge detection techniques for detection of obstacle conflicts and runway alignment errors.

  20. Image Display And Manipulation System (IDAMS), user's guide

    NASA Technical Reports Server (NTRS)

    Cecil, R. W.

    1972-01-01

    A combination operator's guide and user's handbook for the Image Display and Manipulation System (IDAMS) is reported. Information is presented to define how to operate the computer equipment, how to structure a run deck, and how to select parameters necessary for executing a sequence of IDAMS task routines. If more detailed information is needed on any IDAMS program, see the IDAMS program documentation.

  1. Design of integrated eye tracker-display device for head mounted systems

    NASA Astrophysics Data System (ADS)

    David, Y.; Apter, B.; Thirer, N.; Baal-Zedaka, I.; Efron, U.

    2009-08-01

    We propose an Eye Tracker/Display system based on a novel dual-function device, termed ETD, which allows the eye tracker and the display to share optical paths and supports on-chip processing. The proposed ETD design is based on a CMOS chip combining Liquid-Crystal-on-Silicon (LCoS) micro-display technology with a near infrared (NIR) Active Pixel Sensor imager. In eye-tracking operation the device captures NIR light back-reflected from the eye's retina. The retinal image is then used to detect the current direction of the eye's gaze. The design of the eye tracking imager is based on "deep p-well" pixel technology, providing low crosstalk while shielding the active pixel circuitry, which serves the imaging and display drivers, from the photo charges generated in the substrate. The use of the ETD in the HMD design enables a very compact layout suitable for smart goggle applications. A preliminary optical, electronic and digital design of the goggle and its associated ETD chip and digital control is presented.

  2. Diffraction effects incorporated design of a parallax barrier for a high-density multi-view autostereoscopic 3D display.

    PubMed

    Yoon, Ki-Hyuk; Ju, Heongkyu; Kwon, Hyunkyung; Park, Inkyu; Kim, Sung-Kyu

    2016-02-22

    We present the optical characteristics of view images provided by a high-density multi-view autostereoscopic 3D display (HD-MVA3D) with a parallax barrier (PB). Diffraction effects, which become very important in a display system that uses a PB, are considered in a one-dimensional model of the 3D display, in which light from the display panel pixels is numerically simulated through the PB slits to the viewing zone. The simulation results are then compared to the corresponding experimental measurements and discussed. We demonstrate that the Fresnel number, as a main parameter for view image quality evaluation, can be used to determine the PB slit aperture for the best performance of the display system. A set of display parameters giving a Fresnel number of ∼0.7 offers maximized brightness of the view images, while a set corresponding to a Fresnel number of 0.4∼0.5 offers minimized image crosstalk. The compromise between brightness and crosstalk enables optimization of their relative magnitude and leads to the choice of a display parameter set for the HD-MVA3D with a PB that satisfies the condition where the Fresnel number lies between 0.4 and 0.7.
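
    For orientation, the Fresnel number of a slit is conventionally N = a^2 / (lambda * d), with a the slit half-aperture, lambda the wavelength, and d the propagation distance; the paper's exact parameterization is not given in the abstract, so the gap value below is only an assumption used to show how the 0.4-0.7 range translates into a slit size:

        def fresnel_number(half_aperture_m, wavelength_m, distance_m):
            return half_aperture_m ** 2 / (wavelength_m * distance_m)

        def half_aperture_for(target_n, wavelength_m, distance_m):
            return (target_n * wavelength_m * distance_m) ** 0.5

        wavelength = 550e-9   # green light
        gap = 5e-3            # assumed panel-to-barrier gap of 5 mm
        print(half_aperture_for(0.7, wavelength, gap))    # slit size maximizing brightness
        print(half_aperture_for(0.45, wavelength, gap))   # slit size minimizing crosstalk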

  3. Color standardization and optimization in whole slide imaging.

    PubMed

    Yagi, Yukako

    2011-03-30

    Standardization and validation of the color displayed by digital slides is an important aspect of digital pathology implementation. While the most common reason for color variation is the variance in the protocols and practices in the histology lab, the color displayed can also be affected by variation in capture parameters (for example, illumination and filters), image processing and display factors in the digital systems themselves. We have been developing techniques for color validation and optimization along two paths. The first was based on two standard slides that are scanned and displayed by the imaging system in question. In this approach, one slide is embedded with nine filters with colors selected especially for H&E stained slides (resembling a tiny Macbeth color chart); the specific colors of the nine filters were determined in our previous study and modified for whole slide imaging (WSI). The other slide is an H&E stained mouse embryo. Both of these slides were scanned and the displayed images were compared to a standard. The second approach was based on our previous multispectral imaging research. As a first step, the two slide method (above) was used to identify inaccurate display of color and its cause, and to understand the importance of accurate color in digital pathology. We have also improved the multispectral-based algorithm for more consistent results in stain standardization. In the near future, the results of the two slide and multispectral techniques can be combined and will be widely available. We have been conducting a series of research and development projects to improve image quality and establish Image Quality Standardization. This paper discusses one of the most important aspects of image quality - color.

  4. Dynamic three-dimensional display of common congenital cardiac defects from reconstruction of two-dimensional echocardiographic images.

    PubMed

    Hsieh, K S; Lin, C C; Liu, W S; Chen, F L

    1996-01-01

    Two-dimensional echocardiography has long been a standard diagnostic modality for congenital heart disease. Attempts at three-dimensional reconstruction using two-dimensional echocardiographic images to visualize the stereotypic structure of cardiac lesions have been successful only recently. So far only very few studies have displayed the three-dimensional anatomy of the heart through two-dimensional image acquisition because such complex procedures were involved. This study introduced a recently developed image acquisition and processing system for dynamic three-dimensional visualization of various congenital cardiac lesions. From December 1994 to April 1995, 35 cases were selected in our Echo Laboratory from about 3000 completed Echo examinations. Each image was acquired on-line with a specially designed high resolution image grabber using EKG and respiratory gating techniques. Off-line image processing using a window-based interactive software package includes conversion of 2-D echocardiographic pixels to 3-D "voxels" with transformation from an orthogonal to a rotatory axial system, interpolation, extraction of the region of interest, segmentation, shading and, finally, 3D rendering. The three-dimensional anatomy of various congenital cardiac defects was shown, including four cases with ventricular septal defects, two cases with atrial septal defects, and two cases with aortic stenosis. Dynamic reconstruction of a "beating heart" is recorded on video tape via a video interface. The potential application of 3D display reconstructed from 2D echocardiographic images for the diagnosis of various congenital heart defects has been shown. The 3D display was able to improve the diagnostic ability of echocardiography, and clear-cut display of the various congenital cardiac defects and valvular stenosis could be demonstrated. Refinement of current techniques will expand future applications of 3D display of conventional 2D images.

  5. New portable FELIX 3D display

    NASA Astrophysics Data System (ADS)

    Langhans, Knut; Bezecny, Daniel; Homann, Dennis; Bahr, Detlef; Vogt, Carsten; Blohm, Christian; Scharschmidt, Karl-Heinz

    1998-04-01

    An improved generation of our 'FELIX 3D Display' is presented. This system is compact, light, modular and easy to transport. The created volumetric images consist of many voxels, which are generated in a half-sphere display volume. In that way a spatial object can be displayed occupying a physical space with height, width and depth. The new FELIX generation uses a screen rotating at 20 revolutions per second. This target screen is mounted by an easy-to-change mechanism, making it possible to use appropriate screens for the specific purpose of the display. An acousto-optic deflection unit with an integrated small diode-pumped laser draws the images on the spinning screen. Images can consist of up to 10,000 voxels at a refresh rate of 20 Hz. Currently two different hardware systems are investigated. The first one is based on a standard PCMCIA digital/analog converter card as an interface and is controlled by a notebook. The developed software is provided with a graphical user interface enabling several animation features. The second, new prototype is designed to display images created by standard CAD applications. It includes the development of a new high-speed hardware interface suitable for state-of-the-art fast and high resolution scanning devices, which require high data rates. A true 3D volume display as described will complement the broad range of 3D visualization tools, such as volume rendering packages, stereoscopic and virtual reality techniques, which have become widely available in recent years. Potential applications for the FELIX 3D display include imaging in the fields of air traffic control, medical imaging, computer aided design, and science, as well as entertainment.

  6. Raster Scan Computer Image Generation (CIG) System Based On Refresh Memory

    NASA Astrophysics Data System (ADS)

    Dichter, W.; Doris, K.; Conkling, C.

    1982-06-01

    A full color, Computer Image Generation (CIG) raster visual system has been developed which provides a high level of training sophistication by utilizing advanced semiconductor technology and innovative hardware and firmware techniques. Double buffered refresh memory and efficient algorithms eliminate the problem of conventional raster line ordering by allowing the generated image to be stored in a random fashion. Modular design techniques and simplified architecture provide significant advantages in reduced system cost, standardization of parts, and high reliability. The major system components are a general purpose computer to perform interfacing and data base functions; a geometric processor to define the instantaneous scene image; a display generator to convert the image to a video signal; an illumination control unit which provides final image processing; and a CRT monitor for display of the completed image. Additional optional enhancements include texture generators, increased edge and occultation capability, curved surface shading, and data base extensions.

  7. System and method for controlling depth of imaging in tissues using fluorescence microscopy under ultraviolet excitation following staining with fluorescing agents

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Demos, Stavros; Levenson, Richard

    The present disclosure relates to a method for analyzing tissue specimens. In one implementation the method involves obtaining a tissue sample and exposing the sample to one or more fluorophores as contrast agents to enhance contrast of subcellular compartments of the tissue sample. The tissue sample is illuminated by an ultraviolet (UV) light having a wavelength between about 200 nm to about 400 nm, with the wavelength being selected to result in penetration to only a specified depth below a surface of the tissue sample. Inter-image operations between images acquired under different imaging parameters allow for improvement of the image quality via removal of unwanted image components. A microscope may be used to image the tissue sample and provide the image to an image acquisition system that makes use of a camera. The image acquisition system may create a corresponding image that is transmitted to a display system for processing and display.

  8. 72-directional display having VGA resolution for high-appearance image generation

    NASA Astrophysics Data System (ADS)

    Takaki, Yasuhiro; Dairiki, Takeshi

    2006-02-01

    The high-density directional display, which was originally developed in order to realize a natural 3D display, is not only a 3D display but also a high-appearance display. The appearances of objects, such as glare and transparency, are the results of the reflection and the refraction of rays. The faithful reproduction of such appearances of objects is impossible using conventional 2D displays because rays diffuse on the display screen. The high-density directional display precisely controls the horizontal ray directions so that it can reproduce the appearances of objects. The fidelity of the reproduction of object appearances depends on the ray angle sampling pitch. The angle sampling pitch is determined by considering the human eye imaging system. In the present study the high-appearance display which has the resolution of 640×400 and emits rays in 72 different horizontal directions with the angle pitch of 0.38° was constructed. Two 72-directional displays were combined, each of which consisted of a high-resolution LCD panel (3,840×2,400) and a slanted lenticular sheet. Two images produced by two displays were superimposed by a half mirror. A slit array was placed at the focal plane of the lenticular sheet for each display to reduce the horizontal image crosstalk in the combined image. The impression analysis shows that the high-appearance display provides higher appearances and presence than the conventional 2D displays do.
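
    A quick consistency check of the figures quoted above (the exact pixel mapping of the slanted lenticular sheet is not detailed in the abstract): each 3,840 x 2,400 panel supplies 36 directional samples at 640 x 400 resolution, two panels combined through the half mirror give 72 directions, and 72 directions at a 0.38 degree pitch span roughly 27 degrees:

        panel_px = 3840 * 2400                  # one high-resolution LCD panel
        image_px = 640 * 400                    # displayed image resolution
        views_per_panel = panel_px // image_px  # 36 directional samples per panel
        total_views = 2 * views_per_panel       # two panels via half mirror -> 72
        viewing_range_deg = total_views * 0.38  # about 27.4 degrees of horizontal field
        print(views_per_panel, total_views, viewing_range_deg)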

  9. [Spatial domain display for interference image dataset].

    PubMed

    Wang, Cai-Ling; Li, Yu-Shan; Liu, Xue-Bin; Hu, Bing-Liang; Jing, Juan-Juan; Wen, Jia

    2011-11-01

    Visualization of imaging interferometer data is an urgent requirement for users performing image interpretation and information extraction. However, conventional research on visualization focuses only on spectral image datasets in the spectral domain, so quick display of interference spectral image datasets remains a bottleneck in interference image processing. The conventional approach visualizes an interference dataset with classical spectral image display methods after Fourier transformation. In the present paper, the problem of quickly viewing interferometer imagery in the image domain is addressed and an algorithm that simplifies the task is proposed. The Fourier transformation is an obstacle because its computation time is large, and the situation deteriorates further as the dataset grows. The proposed algorithm, named interference weighted envelopes, decouples the dataset from the transformation. The authors derive three interference weighted envelopes based, respectively, on the Fourier transformation, the features of the interference data, and the human visual system. A comparison of the proposed and conventional methods shows a large difference in display time.

  10. Improved Second-Generation 3-D Volumetric Display System. Revision 2

    DTIC Science & Technology

    1998-10-01

    computer control, uses infrared lasers to address points within a rare-earth-infused solid glass cube. Already, simple animated computer-generated images...Volumetric Display System permits images to be displayed in a three-dimensional format that can be observed without the use of special glasses. Its...[Figure 1-4 residue, garbled in the source: NEOS four-channel layout with polarizing beamsplitter, 40-MHz / 50-MHz-bandwidth TeO2 modulators, and TeO2 deflectors.]

  11. Fiber Optic Communication System For Medical Images

    NASA Astrophysics Data System (ADS)

    Arenson, Ronald L.; Morton, Dan E.; London, Jack W.

    1982-01-01

    This paper discusses a fiber optic communication system linking ultrasound devices, computerized tomography scanners, a nuclear medicine computer system, and a digital fluorographic system to a central radiology research computer. These centrally archived images are available for near-instantaneous recall at various display consoles. When a suitable laser optical disk is available for mass storage, more extensive image archiving will be added to the network, including digitized images of standard radiographs for comparison purposes and for remote display in such areas as the intensive care units, the operating room, and selected outpatient departments. This fiber optic system allows for the transfer of high resolution images in less than a second over distances exceeding 2,000 feet. The advantages of using fiber optic cables instead of typical parallel or serial communication techniques will be described. The switching methodology and communication protocols will also be discussed.

  12. Touchscreen everywhere: on transferring a normal planar surface to a touch-sensitive display.

    PubMed

    Dai, Jingwen; Chung, Chi-Kit Ronald

    2014-08-01

    We address how a human-computer interface with small device size, large display, and touch-input facility can be made possible by a mere projector and camera. The realization is through the use of a properly embedded structured light sensing scheme that enables a regular light-colored table surface to serve the dual roles of both a projection screen and a touch-sensitive display surface. A random binary pattern is employed to code structured light in pixel accuracy, which is embedded into the regular projection display in a way that the user perceives only regular display but not the structured pattern hidden in the display. With the projection display on the table surface being imaged by a camera, the observed image data, plus the known projection content, can work together to probe the 3-D workspace immediately above the table surface, like deciding if there is a finger present and if the finger touches the table surface, and if so, at what position on the table surface the contact is made. All the decisions hinge upon a careful calibration of the projector-camera-table surface system, intelligent segmentation of the hand in the image data, and exploitation of the homography mapping existing between the projector's display panel and the camera's image plane. Extensive experimentation including evaluation of the display quality, hand segmentation accuracy, touch detection accuracy, trajectory tracking accuracy, multitouch capability and system efficiency are shown to illustrate the feasibility of the proposed realization.
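
    The homography between the projector's display panel and the camera's image plane that the abstract relies on can be estimated from a few point correspondences, for example with OpenCV; the sketch below is not the authors' calibration procedure, and the four corner correspondences are made-up values for illustration:

        import numpy as np
        import cv2

        # Assumed correspondences: corners of the projected pattern (panel pixels)
        # and where they are observed in the camera image.
        projector_pts = np.float32([[0, 0], [1280, 0], [1280, 800], [0, 800]])
        camera_pts = np.float32([[102, 87], [1190, 64], [1215, 930], [80, 955]])

        H, _ = cv2.findHomography(projector_pts, camera_pts)

        # Map a touch location detected in the camera image back to panel coordinates.
        touch_cam = np.float32([[[640.0, 500.0]]])
        touch_panel = cv2.perspectiveTransform(touch_cam, np.linalg.inv(H))
        print(touch_panel)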

  13. Head-motion-controlled video goggles: preliminary concept for an interactive laparoscopic image display (i-LID).

    PubMed

    Aidlen, Jeremy T; Glick, Sara; Silverman, Kenneth; Silverman, Harvey F; Luks, Francois I

    2009-08-01

    Light-weight, low-profile, and high-resolution head-mounted displays (HMDs) now allow personalized viewing of a laparoscopic image. The advantages include unobstructed viewing, regardless of position at the operating table, and the possibility to customize the image (i.e., enhanced reality, picture-in-picture, etc.). The bright image display allows use in daylight surroundings and the low profile of the HMD provides adequate peripheral vision. Theoretic disadvantages include reliance for all on the same image capture and anticues (i.e., reality disconnect) when the projected image remains static despite changes in head position. This can lead to discomfort and even nausea. We have developed a prototype of an interactive laparoscopic image display that allows hands-free control of the displayed image by changes in spatial orientation of the operator's head. The prototype consists of an HMD, a spatial orientation device, and computer software to enable hands-free panning and zooming of a video-endoscopic image display. The spatial orientation device uses magnetic fields created by a transmitter and receiver, each containing three orthogonal coils. The transmitter coils are efficiently driven, using USB power only, by a newly developed circuit, each at a unique frequency. The HMD-mounted receiver system links to a commercially available PC-interface PCI-bus sound card (M-Audiocard Delta 44; Avid Technology, Tewksbury, MA). Analog signals at the receiver are filtered, amplified, and converted to digital signals, which are processed to control the image display. The prototype uses a proprietary static fish-eye lens and software for the distortion-free reconstitution of any portion of the captured image. Left-right and up-down motions of the head (and HMD) produce real-time panning of the displayed image. Motion of the head toward, or away from, the transmitter causes real-time zooming in or out, respectively, of the displayed image. This prototype of the interactive HMD allows hands-free, intuitive control of the laparoscopic field, independent of the captured image.
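
    The head-motion-to-view mapping described above (yaw and pitch pan the displayed window, distance to the transmitter zooms it) can be sketched as below; the gains, limits, and nominal range are assumptions, not the authors' implementation:

        def update_view(view, yaw_deg, pitch_deg, range_m,
                        pan_gain=10.0, nominal_range=0.6, zoom_gain=2.0):
            """view: dict with cx, cy (window center, px) and scale (zoom factor)."""
            view["cx"] += pan_gain * yaw_deg        # left/right head motion pans
            view["cy"] += pan_gain * pitch_deg      # up/down head motion pans
            view["scale"] *= 1.0 + zoom_gain * (nominal_range - range_m)  # closer -> zoom in
            view["scale"] = max(1.0, min(view["scale"], 8.0))
            return view

        view = {"cx": 960.0, "cy": 540.0, "scale": 1.0}
        view = update_view(view, yaw_deg=2.0, pitch_deg=-1.0, range_m=0.55)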

  14. Towards a robust HDR imaging system

    NASA Astrophysics Data System (ADS)

    Long, Xin; Zeng, Xiangrong; Huangpeng, Qizi; Zhou, Jinglun; Feng, Jing

    2016-07-01

    High dynamic range (HDR) images can show more detail and luminance information on a general display device than low dynamic range (LDR) images can. We present a robust HDR imaging system which can deal with blurry LDR images, overcoming the limitations of most existing HDR methods. Experiments on real images show the effectiveness and competitiveness of the proposed method.

  15. Interactive Image Analysis System Design,

    DTIC Science & Technology

    1982-12-01

    This report describes a design for an interactive image analysis system (IIAS), which implements terrain data extraction techniques. The design... analysis system. Additionally, the system is fully capable of supporting many generic types of image analysis and data processing, and is modularly...employs commercially available, state of the art minicomputers and image display devices with proven software to achieve a cost effective, reliable image

  16. Advanced autostereoscopic display for G-7 pilot project

    NASA Astrophysics Data System (ADS)

    Hattori, Tomohiko; Ishigaki, Takeo; Shimamoto, Kazuhiro; Sawaki, Akiko; Ishiguchi, Tsuneo; Kobayashi, Hiromi

    1999-05-01

    An advanced auto-stereoscopic display is described that permits the observation of a stereo pair by several persons simultaneously without the use of special glasses or any kind of head tracking device worn by the viewers. The system is composed of a right eye system, a left eye system and a sophisticated head tracking system. In each eye system, a transparent-type color liquid crystal imaging plate is used with a special back light unit. The back light unit consists of a monochrome 2D display and a large format convex lens, and it directs the light to the correct eye of each viewer only. The right eye perspective system is combined with the left eye perspective system by a half mirror in order to function as a time-parallel stereoscopic system. The viewer's IR image is taken through and focused by the large format convex lens and fed back to the back light as a modulated binary half-face image. The auto-stereoscopic display employs this TTL method for accurate head tracking. The system was operated as a stereoscopic TV phone between the Duke University telemedicine department and the Nagoya University School of Medicine Department of Radiology using a high-speed digital line of GIBN. The applications are also described in this paper.

  17. Real-Time Detection and Reading of LED/LCD Displays for Visually Impaired Persons

    PubMed Central

    Tekin, Ender; Coughlan, James M.; Shen, Huiying

    2011-01-01

    Modern household appliances, such as microwave ovens and DVD players, increasingly require users to read an LED or LCD display to operate them, posing a severe obstacle for persons with blindness or visual impairment. While OCR-enabled devices are emerging to address the related problem of reading text in printed documents, they are not designed to tackle the challenge of finding and reading characters in appliance displays. Any system for reading these characters must address the challenge of first locating the characters among substantial amounts of background clutter; moreover, poor contrast and the abundance of specular highlights on the display surface – which degrade the image in an unpredictable way as the camera is moved – motivate the need for a system that processes images at a few frames per second, rather than forcing the user to take several photos, each of which can take seconds to acquire and process, until one is readable. We describe a novel system that acquires video, detects and reads LED/LCD characters in real time, reading them aloud to the user with synthesized speech. The system has been implemented on both a desktop and a cell phone. Experimental results are reported on videos of display images, demonstrating the feasibility of the system. PMID:21804957

  18. Secure Video Surveillance System Acquisition Software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2009-12-04

    The SVSS Acquisition Software collects and displays video images from two cameras through a VPN and stores the images on a collection controller. The software is configured to allow a user to enter a time window to display up to 2 1/2 hours of video for review. The software collects images from the cameras at a rate of 1 image per second and automatically deletes images older than 3 hours. The software operates in a Linux environment and can be run in a virtual machine on Windows XP. The Sandia software integrates the different COTS software packages to build the video review system.

  19. Intrahospital teleradiology from the emergency room

    NASA Astrophysics Data System (ADS)

    Fuhrman, Carl R.; Slasky, B. S.; Gur, David; Lattner, Stefanie; Herron, John M.; Plunkett, Michael B.; Towers, Jeffrey D.; Thaete, F. Leland

    1993-09-01

    Off-hour operation of the modern emergency room presents a challenge to conventional image management systems. To assess the utility of intrahospital teleradiology systems from the emergency room (ER), we installed a high-resolution film digitizer which was interfaced to a central archive and to a workstation at the main reading room. The system was designed to allow for digitization of images as soon as the films were processed. Digitized images were autorouted to both destinations, and digitized images could be laser printed (if desired). Near real-time interpretations of nonselected cases were performed at both locations (conventional film in the ER and a workstation in the main reading room), and an analysis of disagreements was performed. Our results demonstrate that in spite of a `significant' difference in reporting, `clinically significant differences' were found in less than 5% of cases. Folder management issues, preprocessing, image orientation, and setting reasonable lookup tables for display were identified as the main limitations to the systems' routine use in a busy environment. The main limitation of the conventional film was the identification of subtle abnormalities in the bright regions of the film. Once identified on either system (conventional film or soft display), all abnormalities were visible and detectable on both display modalities.

  20. Small Interactive Image Processing System (SMIPS) system description

    NASA Technical Reports Server (NTRS)

    Moik, J. G.

    1973-01-01

    The Small Interactive Image Processing System (SMIPS) operates under control of the IBM-OS/MVT operating system and uses an IBM-2250 model 1 display unit as an interactive graphic device. The input language, in the form of character strings or attentions from keys and the light pen, is interpreted and causes processing of built-in image processing functions as well as execution of a variable number of application programs kept on a private disk file. A description of design considerations is given, and the characteristics, structure and logic flow of SMIPS are summarized. Data management and graphic programming techniques used for the interactive manipulation and display of digital pictures are also discussed.

  1. Integral imaging based light field display with enhanced viewing resolution using holographic diffuser

    NASA Astrophysics Data System (ADS)

    Yan, Zhiqiang; Yan, Xingpeng; Jiang, Xiaoyu; Gao, Hui; Wen, Jun

    2017-11-01

    An integral imaging based light field display method is proposed by use of holographic diffuser, and enhanced viewing resolution is gained over conventional integral imaging systems. The holographic diffuser is fabricated with controlled diffusion characteristics, which interpolates the discrete light field of the reconstructed points to approximate the original light field. The viewing resolution can thus be improved and independent of the limitation imposed by Nyquist sampling frequency. An integral imaging system with low Nyquist sampling frequency is constructed, and reconstructed scenes of high viewing resolution using holographic diffuser are demonstrated, verifying the feasibility of the method.

  2. Design of teleoperation system with a force-reflecting real-time simulator

    NASA Technical Reports Server (NTRS)

    Hirata, Mitsunori; Sato, Yuichi; Nagashima, Fumio; Maruyama, Tsugito

    1994-01-01

    We developed a force-reflecting teleoperation system that uses a real-time graphic simulator. This system eliminates the effects of communication time delays in remote robot manipulation. The simulator provides the operator with predictive display and feedback of computed contact forces through a six-degree of freedom (6-DOF) master arm on a real-time basis. With this system, peg-in-hole tasks involving round-trip communication time delays of up to a few seconds were performed at three support levels: a real image alone, a predictive display with a real image, and a real-time graphic simulator with computed-contact-force reflection and a predictive display. The experimental results indicate the best teleoperation efficiency was achieved by using the force-reflecting simulator with two images. The shortest work time, lowest sensor maximum, and a 100 percent success rate were obtained. These results demonstrate the effectiveness of simulated-force-reflecting teleoperation efficiency.

  3. Autostereoscopic three-dimensional display by combining a single spatial light modulator and a zero-order nulled grating

    NASA Astrophysics Data System (ADS)

    Su, Yanfeng; Cai, Zhijian; Liu, Quan; Lu, Yifan; Guo, Peiliang; Shi, Lingyan; Wu, Jianhong

    2018-04-01

    In this paper, an autostereoscopic three-dimensional (3D) display system based on synthetic hologram reconstruction is proposed and implemented. The system uses a single phase-only spatial light modulator to load the synthetic hologram of the left and right stereo images, and the parallax angle between two reconstructed stereo images is enlarged by a grating to meet the split angle requirement of normal stereoscopic vision. To realize the crosstalk-free autostereoscopic 3D display with high light utilization efficiency, the groove parameters of the grating are specifically designed by the rigorous coupled-wave theory for suppressing the zero-order diffraction, and then the zero-order nulled grating is fabricated by the holographic lithography and the ion beam etching. Furthermore, the diffraction efficiency of the fabricated grating is measured under the illumination of a laser beam with a wavelength of 532 nm. Finally, the experimental verification system for the proposed autostereoscopic 3D display is presented. The experimental results prove that the proposed system is able to generate stereoscopic 3D images with good performances.

  4. Rapid turn-around mapping of wildfires and disasters with airborne infrared imagery from the new FireMapper® 2.0 and OilMapper systems

    Treesearch

    James W. Hoffman; Lloyd L. Coulter; Philip J Riggan

    2005-01-01

    The new FireMapper® 2.0 and OilMapper airborne, infrared imaging systems operate in a "snapshot" mode. Both systems feature the real time display of single image frames, in any selected spectral band, on a daylight readable tablet PC. These single frames are displayed to the operator with full temperature calibration in color or grayscale renditions. A rapid...

  5. High-resolution laser-projection display system using a grating electromechanical system (GEMS)

    NASA Astrophysics Data System (ADS)

    Brazas, John C.; Kowarz, Marek W.

    2004-01-01

    Eastman Kodak Company has developed a diffractive-MEMS spatial-light modulator for use in printing and display applications, the grating electromechanical system (GEMS). This modulator contains a linear array of pixels capable of high-speed digital operation, high optical contrast, and good efficiency. The device operation is based on deflection of electromechanical ribbons suspended above a silicon substrate by a series of intermediate supports. When electrostatically actuated, the ribbons conform to the supporting substructure to produce a surface-relief phase grating over a wide active region. The device is designed to be binary, switching between a reflective mirror state having suspended ribbons and a diffractive grating state having ribbons in contact with substrate features. Switching times of less than 50 nanoseconds with sub-nanosecond jitter are made possible by reliable contact-mode operation. The GEMS device can be used as a high-speed digital-optical modulator for a laser-projection display system by collecting the diffracted orders and taking advantage of the low jitter. A color channel is created using a linear array of individually addressable GEMS pixels. A two-dimensional image is produced by sweeping the line image of the array, created by the projection optics, across the display screen. Gray levels in the image are formed using pulse-width modulation (PWM). A high-resolution projection display was developed using three 1080-pixel devices illuminated by red, green, and blue laser-color primaries. The result is an HDTV-format display capable of producing stunning still and motion images with very wide color gamut.
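
    Since the GEMS pixels are binary (mirror or grating state), gray levels via pulse-width modulation can be pictured as binary-weighted bit planes held for times proportional to 1, 2, 4, ..., 128 within a pixel's dwell period; the sketch below is a generic illustration of that decomposition, not the actual GEMS drive scheme:

        import numpy as np

        def pwm_bitplanes(gray_line):
            """gray_line: (N,) uint8 gray levels for one scanned line.
            Returns (8, N) on/off states; row k is held for 2**k time units."""
            bits = np.unpackbits(gray_line[:, None], axis=1, bitorder="little")
            return bits.T.astype(bool)

        line = np.array([0, 64, 128, 255], dtype=np.uint8)
        planes = pwm_bitplanes(line)
        # The binary-weighted on-times reproduce each gray level exactly.
        assert (planes.T.astype(int) @ (1 << np.arange(8)) == line).all()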

  6. The holographic display of three-dimensional medical objects through the usage of a shiftable cylindrical lens

    NASA Astrophysics Data System (ADS)

    Teng, Dongdong; Liu, Lilin; Zhang, Yueli; Pang, Zhiyong; Wang, Biao

    2014-09-01

    Through the creative usage of a shiftable cylindrical lens, a wide-view-angle holographic display system is developed for medical object display in real three-dimensional (3D) space based on a time-multiplexing method. The two-dimensional (2D) source images for all computer generated holograms (CGHs) needed by the display system are only one group of computerized tomography (CT) or magnetic resonance imaging (MRI) slices from the scanning device. Complicated 3D message reconstruction on the computer is not necessary. A pelvis is taken as the target medical object to demonstrate this method and the obtained horizontal viewing angle reaches 28°.

  7. Designing and researching of the virtual display system based on the prism elements

    NASA Astrophysics Data System (ADS)

    Vasilev, V. N.; Grimm, V. A.; Romanova, G. E.; Smirnov, S. A.; Bakholdin, A. V.; Grishina, N. Y.

    2014-05-01

    Design problems of virtual display systems for augmented reality placed near the observer's eye (so-called head-worn displays) with light-guide prismatic elements are considered. An augmented reality system is a complex consisting of an image generator (most often a microdisplay with an illumination system, if the display is not self-luminous), an objective that forms the display image practically at infinity, and a combiner that splits the light so that the observer can see the information on the microdisplay and the surrounding environment as the background at the same time. This work deals with a system whose combiner is based on a composite structure of prism elements. Three cases of the prism combiner design are considered, and modeling results obtained with optical design software are presented. The model analyzes the question of a large pupil zone and discusses the discontinuous (mosaic) character of the angular field when information is transmitted from the microdisplay to the observer's eye through the prismatic structure.

  8. Scanning laser beam displays based on a 2D MEMS

    NASA Astrophysics Data System (ADS)

    Niesten, Maarten; Masood, Taha; Miller, Josh; Tauscher, Jason

    2010-05-01

    The combination of laser light sources and MEMS technology enables a range of display systems such as ultra small projectors for mobile devices, head-up displays for vehicles, wearable near-eye displays and projection systems for 3D imaging. Images are created by scanning red, green and blue lasers horizontally and vertically with a single two-dimensional MEMS. Due to the excellent beam quality of laser beams, the optical designs are efficient and compact. In addition, the laser illumination enables saturated display colors that are desirable for augmented reality applications where a virtual image is used. With this technology, the smallest projector engine for high volume manufacturing to date has been developed. This projector module has a height of 7 mm and a volume of 5 cc. The resolution of this projector is WVGA. No additional projection optics are required, resulting in an infinite focus depth. Unlike micro-display-based projection displays, an increase in resolution does not lead to an increase in size or a decrease in efficiency. Therefore, future projectors can be developed that combine higher resolution with an even smaller and thinner form factor and increased efficiency, leading to lower power consumption.

  9. Light-field and holographic three-dimensional displays [Invited].

    PubMed

    Yamaguchi, Masahiro

    2016-12-01

    A perfect three-dimensional (3D) display that satisfies all depth cues in human vision is possible if a light field can be reproduced exactly as it appeared when it emerged from a real object. The light field can be generated based on either light ray or wavefront reconstruction, with the latter known as holography. This paper first provides an overview of the advances of ray-based and wavefront-based 3D display technologies, including integral photography and holography, and the integration of those technologies with digital information systems. Hardcopy displays have already been used in some applications, whereas the electronic display of a light field is under active investigation. Next, a fundamental question in this technology field is addressed: what is the difference between ray-based and wavefront-based methods for light-field 3D displays? In considering this question, it is of particular interest to look at the technology of holographic stereograms. The phase information in holography contributes to the resolution of a reconstructed image, especially for deep 3D images. Moreover, issues facing the electronic display system of light fields are discussed, including the resolution of the spatial light modulator, the computational techniques of holography, and the speckle in holographic images.

  10. A Smart Spoofing Face Detector by Display Features Analysis.

    PubMed

    Lai, ChinLun; Tai, ChiuYuan

    2016-07-21

    In this paper, a smart face liveness detector is proposed to prevent the biometric system from being "deceived" by a video or picture of a valid user that a counterfeiter captured with a high-definition handheld device (e.g., an iPad with Retina display). By analyzing the characteristics of the display platform and using an expert decision-making core, we can effectively detect whether a spoofing action comes from a fake face shown on a high-definition display by verifying the chromaticity regions in the captured face. That is, a live or spoofed face can be distinguished precisely by the designed optical image sensor. In short, the proposed method/system upgrades a normal optical image sensor into a powerful version that can detect spoofing actions. The experimental results show that the proposed detection system achieves a very high detection rate compared to existing methods and is thus practical to implement directly in authentication systems.

  11. Micromirror-based real image laser automotive head-up display

    NASA Astrophysics Data System (ADS)

    Fan, Chao; He, Siyuan

    2017-01-01

    This paper reports a micromirror-based real image laser automotive head-up display (HUD), which overcomes the limitations of previous designs by (1) implementing an advanced display approach that can display sharp corners, whereas previous designs could only display curved lines, thus improving display fidelity, and (2) optimizing the optical configuration to significantly reduce the HUD module size. The optical design of the HUD is simulated to choose the off-the-shelf concave lens. A vibration test is conducted to verify that the micromirror can survive 5 g. The prototype of the HUD system is fabricated and tested.

  12. Enhanced Images for Checked and Carry-on Baggage and Cargo Screening

    NASA Technical Reports Server (NTRS)

    Woodell, Glenn; Rahman, Zia-ur; Jobson, Daniel J.; Hines, Glenn

    2004-01-01

    The current X-ray systems used by airport security personnel for the detection of contraband, and objects such as knives and guns that can impact the security of a flight, have limited effect because of the limited display quality of the X-ray images. Since the displayed images do not possess optimal contrast and sharpness, it is possible for the security personnel to miss potentially hazardous objects. This problem is also common to other disciplines such as medical X-rays, and can be mitigated, to a large extent, by the use of state-of-the-art image processing techniques to enhance the contrast and sharpness of the displayed image. The NASA Langley Research Center's Visual Information Processing Group has developed an image enhancement technology that has direct applications to this problem of inadequate display quality. Airport security X-ray imaging systems would benefit considerably by using this novel technology, making the task of the personnel who have to interpret the X-ray images considerably easier, faster, and more reliable. This improvement would translate into more accurate screening as well as minimizing the screening time delays to airline passengers. This technology, Retinex, has been optimized for consumer applications but has been applied to medical X-rays on a very preliminary basis. The resultant technology could be incorporated into a new breed of commercial X-ray imaging systems which would be transparent to the screener yet allow them to see subtle detail much more easily, reducing the amount of time needed for screening while greatly increasing the effectiveness of contraband detection and thus public safety.
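    For readers unfamiliar with Retinex-style enhancement, the sketch below shows a single-scale Retinex pass (log of the image minus log of its Gaussian-blurred surround); it is only a generic illustration of the idea, not necessarily the multi-scale variant developed at NASA Langley, and the sigma value and synthetic test image are assumptions.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def single_scale_retinex(image: np.ndarray, sigma: float = 30.0) -> np.ndarray:
        """Single-scale Retinex: log of the image minus log of its Gaussian-blurred
        surround, rescaled to the 8-bit range for display."""
        img = image.astype(np.float64) + 1.0              # avoid log(0)
        surround = gaussian_filter(img, sigma=sigma)
        r = np.log(img) - np.log(surround)
        r = (r - r.min()) / (r.max() - r.min() + 1e-12)   # normalize to [0, 1]
        return (255.0 * r).astype(np.uint8)

    # Synthetic low-contrast test image standing in for an X-ray capture:
    rng = np.random.default_rng(0)
    flat = gaussian_filter(rng.random((256, 256)), 8) * 60.0 + 100.0
    enhanced = single_scale_retinex(flat)
    print(flat.min(), flat.max(), enhanced.min(), enhanced.max())
    ```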

  14. Volumetric three-dimensional display system with rasterization hardware

    NASA Astrophysics Data System (ADS)

    Favalora, Gregg E.; Dorval, Rick K.; Hall, Deirdre M.; Giovinco, Michael; Napoli, Joshua

    2001-06-01

    An 8-color multiplanar volumetric display is being developed by Actuality Systems, Inc. It will be capable of utilizing an image volume greater than 90 million voxels, which we believe is the greatest utilizable voxel set of any volumetric display constructed to date. The display is designed to be used for molecular visualization, mechanical CAD, e-commerce, entertainment, and medical imaging. As such, it contains a new graphics processing architecture, novel high-performance line- drawing algorithms, and an API similar to a current standard. Three-dimensional imagery is created by projecting a series of 2-D bitmaps ('image slices') onto a diffuse screen that rotates at 600 rpm. Persistence of vision fuses the slices into a volume-filling 3-D image. A modified three-panel Texas Instruments projector provides slices at approximately 4 kHz, resulting in 8-color 3-D imagery comprised of roughly 200 radially-disposed slices which are updated at 20 Hz. Each slice has a resolution of 768 by 768 pixels, subtending 10 inches. An unusual off-axis projection scheme incorporating tilted rotating optics is used to maintain good focus across the projection screen. The display electronics includes a custom rasterization architecture which converts the user's 3- D geometry data into image slices, as well as 6 Gbits of DDR SDRAM graphics memory.
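    A quick back-of-the-envelope check of the figures quoted above (roughly 200 slices of 768 by 768 pixels refreshed at 20 Hz) reproduces both the voxel count and the approximately 4 kHz slice rate:

    ```python
    slices_per_volume = 200          # radially disposed image slices
    pixels_per_slice = 768 * 768     # slice resolution
    volume_refresh_hz = 20           # volume updates per second

    voxels = slices_per_volume * pixels_per_slice
    slice_rate_hz = slices_per_volume * volume_refresh_hz
    print(f"{voxels / 1e6:.0f} million voxels per volume")   # ~118 million
    print(f"{slice_rate_hz} slices per second")              # 4000, i.e. ~4 kHz
    ```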

  15. A Virtual Environment System for the Comparison of Dome and HMD Systems

    NASA Technical Reports Server (NTRS)

    Chen, Jian; Harm, Deboran L.; Loftin, R. Bowen; Lin, Ching-yao; Leiss, Ernst L.

    2002-01-01

    For effective astronaut training applications, choosing the right display devices to present images is crucial. In order to assess which devices are appropriate, it is important to design a successful virtual environment for a comparison study of the display devices. We present a comprehensive system for the comparison of Dome and head-mounted display (HMD) systems. In particular, we address interaction techniques and playback environments.

  16. Development of 40-in hybrid hologram screen for auto-stereoscopic video display

    NASA Astrophysics Data System (ADS)

    Song, Hyun Ho; Nakashima, Y.; Momonoi, Y.; Honda, Toshio

    2004-06-01

    Autostereoscopic displays usually face two problems: displaying large images is difficult, and the view zone (the zone in which both eyes must be placed for stereoscopic or 3-D image observation) is very narrow. We have been developing an autostereoscopic large video display system (over 100 inches diagonal) that a few people can view simultaneously [1, 2]. For displays over 100 inches diagonal, an optical video projection system is usually used, and the hologram screen has been proposed as one type of autostereoscopic display system [3-6]. However, if the hologram screen becomes too large, the view zone (corresponding to the reconstructed diffused object) suffers from color dispersion and color aberration [7]. We therefore proposed attaching an additional Fresnel lens to the hologram screen; we call this a "hybrid hologram screen" (HHS for short). We made an HHS of 866 mm (H) × 433 mm (V) (about 40 inches diagonal) [8-11]. By using the lens in the reconstruction step, the angle between the object light and the reference light can be kept small compared to the case without the lens, so the spread of the view zone caused by color dispersion and color aberration becomes small. In addition, the virtual image reconstructed from the hologram screen can be transformed into a real image (view zone), so a large lens or concave mirror is not necessary when making a large hologram screen.

  17. Analysis of the diffraction effects for a multi-view autostereoscopic three-dimensional display system based on shutter parallax barriers with full resolution

    NASA Astrophysics Data System (ADS)

    Meng, Yang; Yu, Zhongyuan; Jia, Fangda; Zhang, Chunyu; Wang, Ye; Liu, Yumin; Ye, Han; Chen, Laurence Lujun

    2017-10-01

    A multi-view autostereoscopic three-dimensional (3D) system is built using a 2D display screen and a customized parallax-barrier shutter (PBS) screen. The shutter screen is controlled dynamically by an address-driving matrix circuit and is placed at a specific location in front of the display screen. The system achieves the densest viewpoints through a special optical and geometric design based on the concept of "eye space". By using limited time-division multiplexing, the resolution in 3D mode is not reduced compared to 2D mode. Diffraction effects can play an important role in 3D imaging quality, especially for small screens such as an iPhone screen, where they may contribute to crosstalk between binocular views and degrade image brightness uniformity. Diffraction effects are therefore analyzed in a one-dimensional shutter-screen model of the 3D display, in which light from pixels on the display screen is numerically propagated through the parallax-barrier slits to each viewing zone in eye space. The simulation results provide guidance on a critical screen size above which diffraction effects are negligible and below which they must be taken into account. Finally, the simulation results are compared with the corresponding experimental measurements and observations, and the differences are discussed.
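    To make the role of diffraction concrete, the sketch below evaluates the far-field (Fraunhofer) intensity pattern of a single barrier slit at a viewing plane; the slit width, wavelength, and viewing distance are illustrative assumptions rather than parameters of the reported system, whose simulation is presumably more detailed (near-field propagation, finite pixel apertures).

    ```python
    import numpy as np

    def slit_intensity(x_mm, slit_width_um=60.0, wavelength_nm=550.0, distance_mm=400.0):
        """Far-field (Fraunhofer) intensity of a single barrier slit, evaluated at
        lateral positions x_mm on a viewing plane distance_mm away.
        I(theta) ~ sinc^2(a*sin(theta)/lambda), where np.sinc(u) = sin(pi*u)/(pi*u)."""
        a = slit_width_um * 1e-6
        lam = wavelength_nm * 1e-9
        theta = np.arctan(np.asarray(x_mm, dtype=float) * 1e-3 / (distance_mm * 1e-3))
        return np.sinc(a * np.sin(theta) / lam) ** 2

    # The width of the central diffraction lobe governs how much light from one
    # slit spills toward a neighbouring viewing position (crosstalk).
    a, lam, z = 60e-6, 550e-9, 0.4
    first_zero_mm = z * np.tan(np.arcsin(lam / a)) * 1e3
    print(f"first diffraction zero {first_zero_mm:.2f} mm off-axis")
    print(f"relative intensity 32.5 mm off-axis: {float(slit_intensity(32.5)):.4f}")
    ```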

  18. Flexible daylight memory displays EASL DMD: a new approach toward displays for cockpit and soldier systems

    NASA Astrophysics Data System (ADS)

    Holter, Borre; Kamfjord, Thor G.; Fossum, Richard; Fagerberg, Ragnar

    2000-08-01

    The Norwegian based company PolyDisplayR ASA, in collaboration with the Norwegian Army Material Command and SINTEF, has refined, developed and shown with color and black/white technology demonstrators an electrically addressed Smectic A reflective LCD technology featuring: (1) Good contrast, all-round viewing angle and readability under all light conditions (no wash-out in direct sunlight). (2) Infinite memory -- image remains without power -- very low power consumption, no or very low radiation ('silent display') and narrow band updating. (3) Clear, sharp and flicker-free images. (4) Large number of gray tones and colors possible. (5) Simple construction and production -- reduced cost, higher yield, more robust and environmentally friendly. (6) Possibility for lighter, more robust and flexible displays based on plastic substrates. The results and future implementation possibilities for cockpit and soldier-system displays are discussed.

  19. On-demand stereoscopic 3D displays for avionic and military applications

    NASA Astrophysics Data System (ADS)

    Sarma, Kalluri; Lu, Kanghua; Larson, Brent; Schmidt, John; Cupero, Frank

    2010-04-01

    High speed AM LCD flat panels are evaluated for use in Field Sequential Stereoscopic (FSS) 3D displays for military and avionic applications. A 120 Hz AM LCD is used in field-sequential mode for constructing eyewear-based as well as autostereoscopic 3D display demonstrators for test and evaluation. The COTS eyewear-based system uses shutter glasses to control left-eye/right-eye images. The autostereoscopic system uses a custom backlight to generate illuminating pupils for left and right eyes. It is driven in synchronization with the images on the LCD. Both displays provide 3D effect in full-color and full-resolution in the AM LCD flat panel. We have realized luminance greater than 200 fL in 3D mode with the autostereoscopic system for sunlight readability. The characterization results and performance attributes of both systems are described.

  20. A compact eyetracked optical see-through head-mounted display

    NASA Astrophysics Data System (ADS)

    Hua, Hong; Gao, Chunyu

    2012-03-01

    An eye-tracked head-mounted display (ET-HMD) system is able to display virtual images as a classical HMD does, while additionally tracking the gaze direction of the user. There is ample evidence that a fully integrated ET-HMD system offers multi-fold benefits, not only to fundamental scientific research but also to emerging applications of such technology. For instance, eye-tracking capability in HMDs adds a very valuable tool and objective metric for scientists to quantitatively assess user interaction with 3D environments and investigate the effectiveness of various 3D visualization technologies for specific tasks, including training, education, and augmented cognition. In this paper, we present an innovative optical approach to the design of an optical see-through ET-HMD system based on freeform optical technology and an optical scheme that uniquely combines the display optics with the eye imaging optics. A preliminary design of the described ET-HMD system is presented.

  1. Digital map databases in support of avionic display systems

    NASA Astrophysics Data System (ADS)

    Trenchard, Michael E.; Lohrenz, Maura C.; Rosche, Henry, III; Wischow, Perry B.

    1991-08-01

    The emergence of computerized mission planning systems (MPS) and airborne digital moving map systems (DMS) has necessitated the development of a global database of raster aeronautical chart data specifically designed for input to these systems. The Naval Oceanographic and Atmospheric Research Laboratory's (NOARL) Map Data Formatting Facility (MDFF) is presently dedicated to supporting these avionic display systems with the development of the Compressed Aeronautical Chart (CAC) database on Compact Disk Read Only Memory (CD-ROM) optical discs. The MDFF is also developing a series of aircraft-specific Write-Once Read Many (WORM) optical discs. NOARL has initiated a comprehensive research program aimed at improving the pilots' moving map displays; current research efforts include the development of an alternate image compression technique and the generation of a standard set of color palettes. The CAC database will provide digital aeronautical chart data in six different scales. CAC is derived from the Defense Mapping Agency's (DMA) Equal Arc-second (ARC) Digitized Raster Graphics (ADRG), a series of scanned aeronautical charts. NOARL processes ADRG to tailor the chart image resolution to that of the DMS display while reducing storage requirements through image compression techniques. CAC is being distributed by DMA as a library of CD-ROMs.

  2. Multimedia Database at National Museum of Ethnology

    NASA Astrophysics Data System (ADS)

    Sugita, Shigeharu

    This paper describes the information management system at the National Museum of Ethnology, Osaka, Japan. The museum is a research center for cultural anthropology and operates many computer systems, such as an IBM 3090, a VAX 11/780, and a Fujitsu M340R. With these computers, distributed multimedia databases have been constructed that store not only bibliographic data but also artifact images, slide images, book page images, and so on. The databases now contain about 1.3 million items. These data can be retrieved and displayed on a multimedia workstation that has several displays.

  3. Real-time intravascular photoacoustic-ultrasound imaging of lipid-laden plaque at speed of video-rate level

    NASA Astrophysics Data System (ADS)

    Hui, Jie; Cao, Yingchun; Zhang, Yi; Kole, Ayeeshik; Wang, Pu; Yu, Guangli; Eakins, Gregory; Sturek, Michael; Chen, Weibiao; Cheng, Ji-Xin

    2017-03-01

    Intravascular photoacoustic-ultrasound (IVPA-US) imaging is an emerging hybrid modality for the detection of lipid-laden plaques, providing simultaneous morphological and lipid-specific chemical information of an artery wall. The clinical utility of IVPA-US technology requires real-time imaging and display at video rate. Here, we demonstrate a compact and portable IVPA-US system capable of imaging at up to 25 frames per second in real-time display mode. This unprecedented imaging speed was achieved by concurrent innovations in the excitation laser source, rotary joint assembly, 1 mm IVPA-US catheter, differentiated A-line strategy, and real-time image processing and display algorithms. By imaging pulsatile motion at different imaging speeds, 16 frames per second was deemed adequate to suppress motion artifacts from cardiac pulsation for in vivo applications. Our lateral resolution results further verified the number of A-lines used for cross-sectional IVPA image reconstruction. The translational capability of this system for the detection of lipid-laden plaques was validated by ex vivo imaging of an atherosclerotic human coronary artery at 16 frames per second, which showed strong correlation to gold-standard histopathology.

  4. Research of an optimization design method of integral imaging three-dimensional display system

    NASA Astrophysics Data System (ADS)

    Gao, Hui; Yan, Zhiqiang; Wen, Jun; Jiang, Guanwu

    2016-03-01

    Information warfare requires a highly transparent battlefield environment, so true three-dimensional display technology has obvious advantages over traditional display technology in current military science and technology. This paper summarizes the principle, characteristics, and development history of integral imaging, reviews the research progress of lens-array imaging technology, and addresses the factors that restrict the development of integral imaging, mainly low spatial resolution, narrow depth range, and small viewing angle. A variety of methods for improving resolution, extending depth of field, increasing the viewing angle, and eliminating artifacts are compared and analyzed, and the experimental results are discussed by comparing the display performance of the different methods.

  5. Image management research

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.

    1988-01-01

    Two types of research issues are involved in image management systems with space station applications: image processing research and image perception research. The image processing issues are the traditional ones of digitizing, coding, compressing, storing, analyzing, and displaying, but with a new emphasis on the constraints imposed by the human perceiver. Two image coding algorithms have been developed that may increase the efficiency of image management systems (IMS). Image perception research involves a study of the theoretical and practical aspects of visual perception of electronically displayed images. Issues include how rapidly a user can search through a library of images, how to make this search more efficient, and how to present images in terms of resolution and split screens. Other issues include optimal interface to an IMS and how to code images in a way that is optimal for the human perceiver. A test-bed within which such issues can be addressed has been designed.

  6. X-window-based 2K display workstation

    NASA Astrophysics Data System (ADS)

    Weinberg, Wolfram S.; Hayrapetian, Alek S.; Cho, Paul S.; Valentino, Daniel J.; Taira, Ricky K.; Huang, H. K.

    1991-07-01

    A high-definition, high-performance display station for reading and review of digital radiological images is introduced. The station is based on a Sun SPARC Station 4 and employs X window system for display and manipulation of images. A mouse-operated graphic user interface is implemented utilizing Motif-style tools. The system supports up to four MegaScan gray-scale 2560 X 2048 monitors. A special configuration of frame and video buffer yields a data transfer of 50 M pixels/s. A magnetic disk array supplies a storage capacity of 2 GB with a data transfer rate of 4-6 MB/s. The system has access to the central archive through an ultrahigh-speed fiber-optic network and patient studies are automatically transferred to the local disk. The available image processing functions include change of lookup table, zoom and pan, and cine. Future enhancements will provide for manual contour tracing, length, area, and density measurements, text and graphic overlay, as well as composition of selected images. Additional preprocessing procedures under development will optimize the initial lookup table and adjust the images to a standard orientation.

  7. DLP™-based dichoptic vision test system

    NASA Astrophysics Data System (ADS)

    Woods, Russell L.; Apfelbaum, Henry L.; Peli, Eli

    2010-01-01

    It can be useful to present a different image to each of the two eyes while they cooperatively view the world. Such dichoptic presentation can occur in investigations of stereoscopic and binocular vision (e.g., strabismus, amblyopia) and vision rehabilitation in clinical and research settings. Various techniques have been used to construct dichoptic displays. The most common and most flexible modern technique uses liquid-crystal (LC) shutters. When used in combination with cathode ray tube (CRT) displays, there is often leakage of light from the image intended for one eye into the view of the other eye. Such interocular crosstalk is 14% even in our state of the art CRT-based dichoptic system. While such crosstalk may have minimal impact on stereo movie or video game experiences, it can defeat clinical and research investigations. We use micromirror digital light processing (DLP™) technology to create a novel dichoptic visual display system with substantially lower interocular crosstalk (0.3% remaining crosstalk comes from the LC shutters). The DLP system normally uses a color wheel to display color images. Our approach is to disable the color wheel, synchronize the display directly to the computer's sync signal, allocate each of the three (former) color presentations to one or both eyes, and open and close the LC shutters in synchrony with those color events.

  8. Rapid Damage Assessment. Volume II. Development and Testing of Rapid Damage Assessment System.

    DTIC Science & Technology

    1981-02-01

    Excerpt from the scanned report (image processor system description): camera line rate 732.4 lines/s; 2048 total pixels per line (1728 video, 314 blank, 4 line-number in binary, 2 run-number in BCD); 8-bit pixel resolution. The image processor system consists of an LSI-11 microprocessor, a VDI-200 video display processor, an FD-2 dual floppy diskette subsystem, and an FT-1 function key-trackball module.

  9. Digital video system for on-line portal verification

    NASA Astrophysics Data System (ADS)

    Leszczynski, Konrad W.; Shalev, Shlomo; Cosby, N. Scott

    1990-07-01

    A digital system has been developed for on-line acquisition, processing and display of portal images during radiation therapy treatment. A metal/phosphor screen combination is the primary detector, where the conversion from high-energy photons to visible light takes place. A mirror angled at 45 degrees reflects the primary image to a low-light-level camera, which is removed from the direct radiation beam. The image registered by the camera is digitized, processed and displayed on a CRT monitor. Advanced digital techniques for processing of on-line images have been developed and implemented to enhance image contrast and suppress the noise. Some elements of automated radiotherapy treatment verification have been introduced.

  10. Two dimensional recursive digital filters for near real time image processing

    NASA Technical Reports Server (NTRS)

    Olson, D.; Sherrod, E.

    1980-01-01

    A program was designed to demonstrate the feasibility of using two-dimensional recursive digital filters for subjective image processing applications that require rapid turnaround, using a dedicated minicomputer as the processor. The minicomputer used was the HP 1000 series E with an RTE 2 disc operating system and 32K words of memory. A Grinnel 256 x 512 x 8 bit display system was used to display the images. Sample images were provided by NASA Goddard on an 800 BPI, 9-track tape. Four 512 x 512 images representing 4 spectral regions of the same scene were provided. These images were filtered with enhancement filters developed during this effort.
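    As a minimal illustration of what a two-dimensional recursive (IIR) filter does, the sketch below applies a first-order recursion along rows and then columns of an image array; it is a generic example, not the enhancement filters developed in the cited effort.

    ```python
    import numpy as np

    def recursive_lowpass_2d(image: np.ndarray, alpha: float = 0.25) -> np.ndarray:
        """Minimal 2-D recursive (IIR) low-pass filter: the first-order recursion
        y[n] = alpha*x[n] + (1 - alpha)*y[n-1] applied along rows, then columns.
        One multiply-add per pixel per pass is what made recursive filters
        attractive on slow minicomputer hardware."""
        out = image.astype(np.float64).copy()
        for axis in (0, 1):
            out = np.moveaxis(out, axis, 0)
            for n in range(1, out.shape[0]):
                out[n] = alpha * out[n] + (1.0 - alpha) * out[n - 1]
            out = np.moveaxis(out, 0, axis)
        return out

    rng = np.random.default_rng(1)
    noisy = rng.normal(128.0, 20.0, size=(64, 64))
    smoothed = recursive_lowpass_2d(noisy)
    print(f"pixel std before {noisy.std():.1f}, after {smoothed.std():.1f}")
    ```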

  11. Augmented reality based real-time subcutaneous vein imaging system

    PubMed Central

    Ai, Danni; Yang, Jian; Fan, Jingfan; Zhao, Yitian; Song, Xianzheng; Shen, Jianbing; Shao, Ling; Wang, Yongtian

    2016-01-01

    A novel 3D reconstruction and fast imaging system for subcutaneous veins by augmented reality is presented. The study was performed to reduce the failure rate and time required in intravenous injection by providing augmented vein structures that back-project superimposed veins on the skin surface of the hand. Images of the subcutaneous vein are captured by two industrial cameras with extra reflective near-infrared lights. The veins are then segmented by a multiple-feature clustering method. Vein structures captured by the two cameras are matched and reconstructed based on the epipolar constraint and homographic property. The skin surface is reconstructed by active structured light with spatial encoding values and fusion displayed with the reconstructed vein. The vein and skin surface are both reconstructed in the 3D space. Results show that the structures can be precisely back-projected to the back of the hand for further augmented display and visualization. The overall system performance is evaluated in terms of vein segmentation, accuracy of vein matching, feature points distance error, duration times, accuracy of skin reconstruction, and augmented display. All experiments are validated with sets of real vein data. The imaging and augmented system produces good imaging and augmented reality results with high speed. PMID:27446690

  13. 32 CFR 813.2 - Sources of VIDOC.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ...) Air Digital Recorder (ADR) images from airborne imagery systems, such as heads up displays, radar scopes, and images from electro-optical sensors carried aboard aircraft and weapons systems. (e...

  14. 32 CFR 813.2 - Sources of VIDOC.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ...) Air Digital Recorder (ADR) images from airborne imagery systems, such as heads up displays, radar scopes, and images from electro-optical sensors carried aboard aircraft and weapons systems. (e...

  15. 32 CFR 813.2 - Sources of VIDOC.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ...) Air Digital Recorder (ADR) images from airborne imagery systems, such as heads up displays, radar scopes, and images from electro-optical sensors carried aboard aircraft and weapons systems. (e...

  16. 32 CFR 813.2 - Sources of VIDOC.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ...) Air Digital Recorder (ADR) images from airborne imagery systems, such as heads up displays, radar scopes, and images from electro-optical sensors carried aboard aircraft and weapons systems. (e...

  17. 32 CFR 813.2 - Sources of VIDOC.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ...) Air Digital Recorder (ADR) images from airborne imagery systems, such as heads up displays, radar scopes, and images from electro-optical sensors carried aboard aircraft and weapons systems. (e...

  18. Dual cameras acquisition and display system of retina-like sensor camera and rectangular sensor camera

    NASA Astrophysics Data System (ADS)

    Cao, Nan; Cao, Fengmei; Lin, Yabin; Bai, Tingzhu; Song, Shengyu

    2015-04-01

    For a new kind of retina-like sensor camera and a traditional rectangular sensor camera, a dual-camera acquisition and display system needs to be built. We introduce the principle and development of the retina-like sensor. Image coordinate transformation with sub-pixel interpolation has to be implemented to handle the retina-like sensor's special pixel distribution. The hardware platform is composed of the retina-like sensor camera, the rectangular sensor camera, an image grabber, and a PC. Combining the MIL and OpenCV libraries, the software was written in VC++ on Visual Studio 2010. Experimental results show that the system realizes acquisition and display for both cameras.
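    The coordinate transformation and sub-pixel interpolation mentioned above can be illustrated with a generic log-polar-to-Cartesian resampler; the ring/sector layout, geometric radius growth, and bilinear weighting below are assumptions for illustration, not the actual pixel distribution of the retina-like sensor.

    ```python
    import numpy as np

    def logpolar_to_cartesian(rp_img: np.ndarray, out_size: int = 256, r_min: float = 4.0) -> np.ndarray:
        """Resample a hypothetical retina-like (log-polar) image, indexed as
        [ring, sector], onto a Cartesian grid with bilinear (sub-pixel)
        interpolation.  Ring radii are assumed to grow geometrically from r_min
        to half the output size."""
        n_rings, n_sectors = rp_img.shape
        r_max = out_size / 2.0
        out = np.zeros((out_size, out_size))
        yy, xx = np.mgrid[0:out_size, 0:out_size]
        dx, dy = xx - out_size / 2.0, yy - out_size / 2.0
        radius = np.hypot(dx, dy)
        angle = np.mod(np.arctan2(dy, dx), 2.0 * np.pi)

        valid = (radius >= r_min) & (radius < r_max)
        # Fractional (ring, sector) coordinates of every valid Cartesian pixel.
        ring = (n_rings - 1) * np.log(radius[valid] / r_min) / np.log(r_max / r_min)
        sect = angle[valid] / (2.0 * np.pi) * n_sectors

        r0 = np.clip(np.floor(ring).astype(int), 0, n_rings - 2)
        s0 = np.floor(sect).astype(int) % n_sectors
        s1 = (s0 + 1) % n_sectors
        fr, fs = ring - r0, sect - np.floor(sect)

        out[valid] = ((1 - fr) * (1 - fs) * rp_img[r0, s0] +
                      (1 - fr) * fs * rp_img[r0, s1] +
                      fr * (1 - fs) * rp_img[r0 + 1, s0] +
                      fr * fs * rp_img[r0 + 1, s1])
        return out

    demo = np.linspace(0.0, 255.0, 64 * 128).reshape(64, 128)   # 64 rings x 128 sectors
    print(logpolar_to_cartesian(demo).shape)                     # (256, 256)
    ```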

  19. Development of a high-performance image server using ATM technology

    NASA Astrophysics Data System (ADS)

    Do Van, Minh; Humphrey, Louis M.; Ravin, Carl E.

    1996-05-01

    The ability to display digital radiographs to a radiologist in a reasonable time has long been the goal of many PACS. Intelligent routing, or pre-fetching images, has become a solution whereby a system uses a set of rules to route the images to a pre-determined destination. Images would then be stored locally on a workstation for faster display times. Some PACS use a large, centralized storage approach and workstations retrieve images over high bandwidth connections. Another approach to image management is to provide a high performance, clustered storage system. This has the advantage of eliminating the complexity of pre-fetching and allows for rapid image display from anywhere within the hospital. We discuss the development of such a storage device, which provides extremely fast access to images across a local area network. Among the requirements for development of the image server were high performance, DICOM 3.0 compliance, and the use of industry standard components. The completed image server provides performance more than sufficient for use in clinical practice. Setting up modalities to send images to the image server is simple due to the adherence to the DICOM 3.0 specification. Using only off-the-shelf components allows us to keep the cost of the server relatively inexpensive and allows for easy upgrades as technology becomes more advanced. These factors make the image server ideal for use as a clustered storage system in a radiology department.

  20. Research on self-calibration biaxial autocollimator based on ZYNQ

    NASA Astrophysics Data System (ADS)

    Guo, Pan; Liu, Bingguo; Liu, Guodong; Zhong, Yao; Lu, Binghui

    2018-01-01

    Existing autocollimators mainly rely on computers or internet-connected electronic devices; their precision, measurement range, and resolution are limited, and external displays are needed to show images in real time. Moreover, no autocollimator on the market provides real-time calibration. In this paper, we propose a biaxial autocollimator based on the ZYNQ embedded platform to solve these problems. First, the traditional optical system is improved and an additional light path is added for real-time calibration. Then, to improve measurement speed, an embedded platform based on ZYNQ is designed that combines a Linux operating system with the autocollimator; it performs image acquisition, image processing, image display, and a Qt-based man-machine interface. Finally, the system realizes two-dimensional small-angle measurement. Experimental results show that the proposed method improves angle measurement accuracy: at close distance (1.5 m) the standard deviation is 0.15" in the horizontal direction of the image and 0.24" in the vertical direction, and at long distance (10 m) the measurement repeatability is improved by 0.12 in the horizontal direction and 0.3 in the vertical direction.
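    For context, the sketch below shows the basic autocollimator relation between a spot-centroid shift on the image sensor and the measured mirror tilt; the 300 mm focal length and the 0.5 µm shift are hypothetical values, not parameters of the reported instrument.

    ```python
    import math

    def tilt_from_spot_shift(shift_um: float, focal_length_mm: float = 300.0) -> float:
        """Mirror tilt (arcseconds) from a spot-centroid shift on the image sensor.
        A tilt alpha deflects the reflected beam by 2*alpha, so the spot moves by
        d = f * tan(2*alpha); invert to get alpha."""
        alpha_rad = 0.5 * math.atan((shift_um * 1e-3) / focal_length_mm)
        return math.degrees(alpha_rad) * 3600.0

    # A 0.5 um sub-pixel centroid shift with an assumed 300 mm objective:
    print(f"{tilt_from_spot_shift(0.5):.2f} arcsec")   # ~0.17 arcsec
    ```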

  1. 3D brain MR angiography displayed by a multi-autostereoscopic screen

    NASA Astrophysics Data System (ADS)

    Magalhães, Daniel S. F.; Ribeiro, Fádua H.; Lima, Fabrício O.; Serra, Rolando L.; Moreno, Alfredo B.; Li, Li M.

    2012-02-01

    Magnetic resonance angiography (MRA) can be used to examine blood vessels in key areas of the body, including the brain. In MRA, a powerful magnetic field, radio waves, and a computer produce the detailed images. Physicians use the procedure on brain images mainly to detect atherosclerotic disease in the carotid artery of the neck, which may limit blood flow to the brain and cause a stroke, and to identify small aneurysms or arteriovenous malformations inside the brain. Multi-autostereoscopic displays provide multiple views of the same scene, rather than just two as in autostereoscopic systems. Each view is visible from a different range of positions in front of the display. This allows the viewer to move left and right in front of the display and see the correct view from any position. The use of 3D imaging in the medical field has proven to be a benefit to doctors when diagnosing patients. For different medical domains a stereoscopic display could be advantageous in terms of a better spatial understanding of anatomical structures, better perception of ambiguous anatomical structures, better performance of tasks that require a high level of dexterity, increased learning performance, and improved communication with patients or between doctors. In this work we describe a multi-autostereoscopic system and how to produce 3D MRA images to be displayed with it. We show results for brain MR angiography images, discussing how 3D visualization can help physicians reach a better diagnosis.

  2. Liquid crystal light valve technologies for display applications

    NASA Astrophysics Data System (ADS)

    Kikuchi, Hiroshi; Takizawa, Kuniharu

    2001-11-01

    The liquid crystal (LC) light valve, which is a spatial light modulator that uses LC material, is a very important device in the area of display development, image processing, optical computing, holograms, etc. In particular, there have been dramatic developments in the past few years in the application of the LC light valve to projectors and other display technologies. Various LC operating modes have been developed, including thin film transistors, MOS-FETs and other active matrix drive techniques to meet the requirements for higher resolution, and substantial improvements have been achieved in the performance of optical systems, resulting in brighter display images. Given this background, the number of applications for the LC light valve has greatly increased. The resolution has increased from QVGA (320 x 240) to QXGA (2048 x 1536) or even super- high resolution of eight million pixels. In the area of optical output, projectors of 600 to 13,000 lm are now available, and they are used for presentations, home theatres, electronic cinema and other diverse applications. Projectors using the LC light valve can display high- resolution images on large screens. They are now expected to be developed further as part of hyper-reality visual systems. This paper provides an overview of the needs for large-screen displays, human factors related to visual effects, the way in which LC light valves are applied to projectors, improvements in moving picture quality, and the results of the latest studies that have been made to increase the quality of images and moving images or pictures.

  3. Imaging and applied optics: introduction to the feature issue.

    PubMed

    Zalevsky, Zeev; Arnison, Matthew R; Javidi, Bahram; Testorf, Markus

    2018-03-01

    This special issue of Applied Optics contains selected papers from OSA's Imaging Congress with particular emphasis on work from mathematics in imaging, computational optical sensing and imaging, imaging systems and applications, and 3D image acquisition and display.

  4. Viewing-zone control of integral imaging display using a directional projection and elemental image resizing method.

    PubMed

    Alam, Md Ashraful; Piao, Mei-Lan; Bang, Le Thanh; Kim, Nam

    2013-10-01

    Viewing-zone control of integral imaging (II) displays using a directional projection and elemental image (EI) resizing method is proposed. Directional projection of EIs with the same size of microlens pitch causes an EI mismatch at the EI plane. In this method, EIs are generated computationally using a newly introduced algorithm: the directional elemental image generation and resizing algorithm considering the directional projection geometry of each pixel as well as an EI resizing method to prevent the EI mismatch. Generated EIs are projected as a collimated projection beam with a predefined directional angle, either horizontally or vertically. The proposed II display system allows reconstruction of a 3D image within a predefined viewing zone that is determined by the directional projection angle.

  5. Solar active region display system

    NASA Astrophysics Data System (ADS)

    Golightly, M.; Raben, V.; Weyland, M.

    2003-04-01

    The Solar Active Region Display System (SARDS) is a client-server application that automatically collects a wide range of solar data and displays it in a format easy for users to assimilate and interpret. Users can rapidly identify active regions of interest or concern from color-coded indicators that visually summarize each region's size, magnetic configuration, recent growth history, and recent flare and CME production. The active region information can be overlaid onto solar maps, multiple solar images, and solar difference images in orthographic, Mercator or cylindrical equidistant projections. Near real-time graphs display the GOES soft and hard x-ray flux, flare events, and daily F10.7 value as a function of time; color-coded indicators show current trends in soft x-ray flux, flare temperature, daily F10.7 flux, and x-ray flare occurrence. Through a separate window up to 4 real-time or static graphs can simultaneously display values of KP, AP, daily F10.7 flux, GOES soft and hard x-ray flux, GOES >10 and >100 MeV proton flux, and Thule neutron monitor count rate. Climatologic displays use color-valued cells to show F10.7 and AP values as a function of Carrington/Bartel's rotation sequences - this format allows users to detect recurrent patterns in solar and geomagnetic activity as well as variations in activity levels over multiple solar cycles. Users can customize many of the display and graph features; all displays can be printed or copied to the system's clipboard for "pasting" into other applications. The system obtains and stores space weather data and images from sources such as the NOAA Space Environment Center, NOAA National Geophysical Data Center, the joint ESA/NASA SOHO spacecraft, and the Kitt Peak National Solar Observatory, and can be extended to include other data series and image sources. Data and images retrieved from the system's database are converted to XML and transported from a central server using HTTP and SOAP protocols, allowing operation through network firewalls; data is compressed to enhance performance over limited bandwidth connections. All applications and services are written in the JAVA program language for platform independence. Several versions of SARDS have been in operational use by the NASA Space Radiation Analysis Group, NOAA Space Weather Operations, and U.S. Air Force Weather Agency since 1999.

  6. Performance considerations for high-definition head-mounted displays

    NASA Technical Reports Server (NTRS)

    Edwards, Oliver J.; Larimer, James; Gille, Jennifer

    1992-01-01

    Design image-optimization for helmet-mounted displays (HMDs) for military systems is presently discussed within the framework of a systems-engineering approach that encompasses (1) a description of natural targets in the field; (2) the characteristics of human visual perception; and (3) device specifications that directly relate to these ecological and human-factors parameters. Attention is given to target size and contrast and the relationship of the modulation transfer function to image resolution.

  7. System description and analysis. Part 1: Feasibility study for helicopter/VTOL wide-angle simulation image generation display system

    NASA Technical Reports Server (NTRS)

    1977-01-01

    A preliminary design for a helicopter/VSTOL wide angle simulator image generation display system is studied. The visual system is to become part of a simulator capability to support Army aviation systems research and development within the near term. As required for the Army to simulate a wide range of aircraft characteristics, versatility and ease of changing cockpit configurations were primary considerations of the study. Due to the Army's interest in low altitude flight and descents into and landing in constrained areas, particular emphasis is given to wide field of view, resolution, brightness, contrast, and color. The visual display study includes a preliminary design, demonstrated feasibility of advanced concepts, and a plan for subsequent detail design and development. Analysis and tradeoff considerations for various visual system elements are outlined and discussed.

  8. Initial experience with a nuclear medicine viewing workstation

    NASA Astrophysics Data System (ADS)

    Witt, Robert M.; Burt, Robert W.

    1992-07-01

    Graphical User Interface (GUI) workstations are now available from commercial vendors. We recently installed a GUI workstation in our nuclear medicine reading room for exclusive use of staff and resident physicians. The system is built upon a Macintosh platform and has been available as a DELTAmanager from MedImage and more recently as an ICON V from Siemens Medical Systems. The workstation provides only display functions and connects to our existing nuclear medicine imaging system via ethernet. The system has some processing capabilities to create oblique, sagittal and coronal views from transverse tomographic views. Hard copy output is via a screen save device and a thermal color printer. The DELTAmanager replaced a MicroDELTA workstation which had both process and view functions. The mouse-activated GUI has made remarkable changes to physicians' use of the nuclear medicine viewing system. Training time to view and review studies has been reduced from hours to about 30 minutes. Generation of oblique views and display of brain and heart tomographic studies has been reduced from about 30 minutes of technician's time to about 5 minutes of physician's time. Overall operator functionality has been increased so that resident physicians with little prior computer experience can access all images on the image server and display pertinent patient images when consulting with other staff.

  9. High resolution biomedical imaging system with direct detection of x-rays via a charge coupled device

    DOEpatents

    Atac, M.; McKay, T.A.

    1998-04-21

    An imaging system is provided for direct detection of x-rays from an irradiated biological tissue. The imaging system includes an energy source for emitting x-rays toward the biological tissue and a charge coupled device (CCD) located immediately adjacent the biological tissue and arranged transverse to the direction of irradiation along which the x-rays travel. The CCD directly receives and detects the x-rays after passing through the biological tissue. The CCD is divided into a matrix of cells, each of which individually stores a count of x-rays directly detected by the cell. The imaging system further includes a pattern generator electrically coupled to the CCD for reading a count from each cell. A display device is provided for displaying an image representative of the count read by the pattern generator from the cells of the CCD. 13 figs.

  11. Design of polarization imaging system based on CIS and FPGA

    NASA Astrophysics Data System (ADS)

    Zeng, Yan-an; Liu, Li-gang; Yang, Kun-tao; Chang, Da-ding

    2008-02-01

    Polarization is an important characteristic of light, and polarization image detection is a new image detection technology that combines polarimetry with image processing. In contrast to traditional image detection based on ray radiation intensity, polarization image detection can acquire important information that traditional image detection cannot, and it is expected to be widely used in both civilian and military fields. Because it can solve problems that traditional image detection cannot, it has been researched widely around the world. This paper first introduces the physical theory of polarization image detection, then focuses on image collection and polarization image processing based on a CIS (CMOS image sensor) and an FPGA. The polarization imaging system consists of hardware and software parts. The hardware includes the CMOS image sensor drive module, a VGA display module, an SRAM access module, and a real-time image data collection system based on the FPGA; the circuit diagram and PCB were designed. In the software part, the computation of the Stokes vector and polarization angle is analyzed, and the floating-point multiplications of the Stokes vector computation are optimized into shift and addition operations. Experimental results show that the real-time image collection system can collect and display image data from the CMOS image sensor in real time.
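    The Stokes-vector computation mentioned above can be sketched as follows for intensity measurements behind polarizers at 0°, 45°, 90°, and 135°; the four-angle measurement scheme is an assumption for illustration, and the note about bit shifts simply echoes the shift-and-add optimization described in the abstract.

    ```python
    import numpy as np

    def linear_stokes(i0, i45, i90, i135):
        """Linear Stokes parameters from four intensity images captured behind
        polarizers at 0, 45, 90 and 135 degrees, plus the degree (DoLP) and angle
        (AoLP) of linear polarization.  The factor 0.5 is what a fixed-point FPGA
        implementation can realize with a single bit shift."""
        i0, i45, i90, i135 = (np.asarray(a, dtype=np.float64) for a in (i0, i45, i90, i135))
        s0 = 0.5 * (i0 + i45 + i90 + i135)
        s1 = i0 - i90
        s2 = i45 - i135
        dolp = np.sqrt(s1 ** 2 + s2 ** 2) / np.maximum(s0, 1e-9)
        aolp = 0.5 * np.arctan2(s2, s1)          # radians
        return s0, s1, s2, dolp, aolp

    # Toy check: a fully polarized pixel oriented at 30 degrees (Malus's law).
    theta = np.deg2rad(30.0)
    meas = [np.cos(theta - a) ** 2 for a in np.deg2rad([0.0, 45.0, 90.0, 135.0])]
    s0, s1, s2, dolp, aolp = linear_stokes(*meas)
    print(f"DoLP = {float(dolp):.2f}, AoLP = {float(np.degrees(aolp)):.1f} deg")
    ```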

  12. Image quality evaluation of medical color and monochrome displays using an imaging colorimeter

    NASA Astrophysics Data System (ADS)

    Roehrig, Hans; Gu, Xiliang; Fan, Jiahua

    2012-10-01

    The purpose of this presentation is to demonstrate means of examining the image quality of color and monochrome medical displays in terms of MTF (Modulation Transfer Function) and NPS (Noise Power Spectrum). Past indications suggested that color displays could negatively affect clinical performance compared to monochrome displays, but reference (1) was not based on colorimeter measurements. Colorimeters such as the PM-1423 are now available with higher sensitivity and color accuracy than traditional CCD cameras. This paper therefore focuses on measurements of the spatial resolution and noise performance of color and monochrome medical displays made with an imaging colorimeter; specifically, the MTF and NPS were evaluated and compared at different digital driving levels (DDL) for the two medical displays, and the data will subsequently be submitted to an ROC study for presentation at a future SPIE conference. Imaging colorimetry is well suited to flat-panel display measurement because imaging systems capture spatial data, generating millions of data points in a single measurement. The imaging colorimeter used was the PM-1423 from Radiant Imaging. It uses full-frame, 14-bit, thermoelectrically cooled and temperature-stabilized scientific-grade CCDs with 100% fill factor, which makes it well suited to measuring the luminance and chrominance of individual pixels and sub-pixels on an LCD display.
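    As an illustration of how an NPS estimate is typically formed from a uniform capture, the sketch below averages FFTs of detrended, non-overlapping ROIs; the ROI size and pixel pitch are assumed values, and this generic recipe is not necessarily the exact procedure used with the PM-1423 data.

    ```python
    import numpy as np

    def noise_power_spectrum(flat_image, roi=64, pixel_pitch_mm=0.2):
        """Estimate a 2-D noise power spectrum from a uniform (flat-field) capture
        by averaging FFTs of detrended, non-overlapping ROIs:
        NPS(u, v) = <|FFT(roi - mean)|^2> * (dx * dy) / (Nx * Ny)."""
        img = np.asarray(flat_image, dtype=np.float64)
        n_y, n_x = (s // roi for s in img.shape)
        acc = np.zeros((roi, roi))
        for j in range(n_y):
            for i in range(n_x):
                block = img[j * roi:(j + 1) * roi, i * roi:(i + 1) * roi]
                acc += np.abs(np.fft.fft2(block - block.mean())) ** 2
        nps = acc / (n_x * n_y) * (pixel_pitch_mm ** 2) / (roi * roi)
        freqs = np.fft.fftfreq(roi, d=pixel_pitch_mm)        # cycles per mm
        return np.fft.fftshift(nps), np.fft.fftshift(freqs)

    rng = np.random.default_rng(0)
    capture = 500.0 + rng.normal(0.0, 2.0, size=(512, 512))  # synthetic uniform patch
    nps, f = noise_power_spectrum(capture)
    print(nps.shape, f"mean NPS {nps.mean():.3f}")            # flat spectrum ~ sigma^2*dx*dy
    ```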

  13. Real Image Visual Display System

    DTIC Science & Technology

    1992-12-01

    Excerpt from the scanned report (figure list and text fragment): DTI-100M autostereoscopic display; lenticular screen; lenticular screen parameters and pixel position; general viewing of the stereoscopic couple; viewing zones for lenticular screens. The described approach involves using a lenticular screen for imaging; lenticular screens are probably most familiar in the form of "3-D postcards", which consist of an...

  14. A Method for Determining the Timing of Displaying the Speaker's Face and Captions for a Real-Time Speech-to-Caption System

    NASA Astrophysics Data System (ADS)

    Kuroki, Hayato; Ino, Shuichi; Nakano, Satoko; Hori, Kotaro; Ifukube, Tohru

    The authors of this paper have been studying a real-time speech-to-caption system using speech recognition technology with a “repeat-speaking” method. In this system, a “repeat-speaker” listens to a lecturer's voice and then speaks the lecturer's utterances back into a speech recognition computer. With this system, the accuracy of the captions was about 97% for Japanese-to-Japanese conversion, and the conversion time from voice to captions was about 4 seconds for English-to-English conversion at some international conferences, although achieving this performance was costly. In human communication, speech understanding depends not only on verbal information but also on non-verbal information such as the speaker's gestures and face and mouth movements. The authors therefore proposed briefly storing the information in a computer and then displaying the captions and the speaker's face-movement images with a timing chosen to achieve higher comprehension. In this paper, we investigate the relationship between the display sequence and display timing of captions that contain speech recognition errors and the speaker's face-movement images. The results show that the sequence “display the caption before the speaker's face image” improves comprehension of the captions. The sequence “display both simultaneously” shows an improvement of only a few percent over the question sentence, and the sequence “display the speaker's face image before the caption” shows almost no change. In addition, the sequence “display the caption 1 second before the speaker's face” shows the most significant improvement of all the conditions.

  15. Controllable 3D Display System Based on Frontal Projection Lenticular Screen

    NASA Astrophysics Data System (ADS)

    Feng, Q.; Sang, X.; Yu, X.; Gao, X.; Wang, P.; Li, C.; Zhao, T.

    2014-08-01

    A novel auto-stereoscopic three-dimensional (3D) projection display system based on a frontal projection lenticular screen is demonstrated. It can provide a highly realistic 3D experience and freedom of interaction. In the demonstrated system, the content can be changed and the density of viewing points can be freely adjusted according to the viewers' demands. Densely spaced viewing points provide smooth motion parallax and larger image depth without blurring. The basic principle of the stereoscopic display is described first. Then, the design architecture, including hardware and software, is presented. The system consists of a frontal projection lenticular screen, an optimally designed projector array, and a set of multi-channel image processors. The parameters of the frontal projection lenticular screen are determined by viewing requirements such as the viewing distance and the width of the view zones. Each projector is mounted on an adjustable platform. The set of multi-channel image processors is made up of six PCs: one is used as the main controller, and the other five client PCs process 30 channels of signals and transmit them to the projector array. A natural 3D scene is then perceived on the frontal projection lenticular screen with more than 1.5 m of image depth in real time. The control section is presented in detail, including parallax adjustment, system synchronization, distortion correction, etc. Experimental results demonstrate the effectiveness of this novel controllable 3D display system.
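
    The parallax adjustment mentioned in the control section can be pictured as a per-view horizontal shift that moves the zero-parallax plane; the sketch below is a deliberately simple software version of that idea (the names and the wrap-around shift are illustrative assumptions, not the authors' implementation).

```python
import numpy as np

def adjust_parallax(views, shift_per_view_px):
    """Horizontally shift each view of a multi-view set by a per-view offset
    so that a chosen scene plane falls at zero parallax on the screen.

    views: list of HxWx3 arrays ordered left to right.
    shift_per_view_px: signed pixel shift applied per view index step."""
    adjusted = []
    center = (len(views) - 1) / 2.0
    for k, view in enumerate(views):
        shift = int(round((k - center) * shift_per_view_px))
        adjusted.append(np.roll(view, shift, axis=1))   # simple wrap-around shift
    return adjusted
```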

  16. Evaluation of image quality

    NASA Technical Reports Server (NTRS)

    Pavel, M.

    1993-01-01

    This presentation outlines in viewgraph format a general approach to the evaluation of display system quality for aviation applications. This approach is based on the assumption that it is possible to develop a model of the display which captures most of the significant properties of the display. The display characteristics should include spatial and temporal resolution, intensity quantizing effects, spatial sampling, delays, etc. The model must be sufficiently well specified to permit generation of stimuli that simulate the output of the display system. The first step in the evaluation of display quality is an analysis of the tasks to be performed using the display. Thus, for example, if a display is used by a pilot during a final approach, the aesthetic aspects of the display may be less relevant than its dynamic characteristics. The opposite task requirements may apply to imaging systems used for displaying navigation charts. Thus, display quality is defined with regard to one or more tasks. Given a set of relevant tasks, there are many ways to approach display evaluation. The range of evaluation approaches includes visual inspection, rapid evaluation, part-task simulation, and full mission simulation. The work described is focused on two complementary approaches to rapid evaluation. The first approach is based on a model of the human visual system. A model of the human visual system is used to predict the performance of the selected tasks. The model-based evaluation approach permits very rapid and inexpensive evaluation of various design decisions. The second rapid evaluation approach employs specifically designed critical tests that embody many important characteristics of actual tasks. These are used in situations where a validated model is not available. These rapid evaluation tests are being implemented in a workstation environment.

  17. Objective analysis of image quality of video image capture systems

    NASA Astrophysics Data System (ADS)

    Rowberg, Alan H.

    1990-07-01

    As Picture Archiving and Communication System (PACS) technology has matured, video image capture has become a common way of capturing digital images from many modalities. While digital interfaces, such as those which use the ACR/NEMA standard, will become more common in the future, and are preferred because of the accuracy of image transfer, video image capture will be the dominant method in the short term, and may continue to be used for some time because of the low cost and high speed often associated with such devices. Currently, virtually all installed systems use methods of digitizing the video signal that is produced for display on the scanner viewing console itself. A series of digital test images has been developed for display on either a GE CT9800 or a GE Signa MRI scanner. These images have been captured with each of five commercially available image capture systems, and the resultant images digitally transferred on floppy disk to a PC1286 computer containing Optimast image analysis software. Here the images can be displayed in a comparative manner for visual evaluation, in addition to being analyzed statistically. Each of the images has been designed to support certain tests, including noise, accuracy, linearity, gray scale range, stability, slew rate, and pixel alignment. These image capture systems vary widely in these characteristics, in addition to the presence or absence of other artifacts, such as shading and moire patterns. Other accessories such as video distribution amplifiers and noise filters can also add or modify artifacts seen in the captured images, often giving unusual results. Each image is described, together with the tests which were performed using it. One image contains alternating black and white lines, each one pixel wide, after equilibration strips ten pixels wide. While some systems have a slew rate fast enough to track this correctly, others blur it to an average shade of gray, and do not resolve the lines, or give horizontal or vertical streaking. While many of these results are significant from an engineering standpoint alone, there are clinical implications and some anatomy or pathology may not be visualized if an image capture system is used improperly.
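
    As an example of the kind of test image described above, the sketch below generates a pattern of ten-pixel equilibration strips followed by alternating one-pixel black and white vertical lines; the exact strip ordering and image size are assumptions, since the paper only summarizes the pattern.

```python
import numpy as np

def slew_rate_pattern(width=512, height=512, strip=10):
    """Vertical-line slew-rate test pattern: a black equilibration strip,
    a white equilibration strip, then `strip` columns of alternating
    one-pixel black/white lines, repeated across the image (8-bit)."""
    img = np.zeros((height, width), dtype=np.uint8)
    x = 0
    while x < width:
        img[:, x:x + strip] = 0                        # black equilibration strip
        img[:, x + strip:x + 2 * strip] = 255          # white equilibration strip
        for i in range(strip):                         # one-pixel alternating lines
            col = x + 2 * strip + i
            if col < width:
                img[:, col] = 255 if i % 2 == 0 else 0
        x += 3 * strip
    return img
```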

  18. Study of a direct visualization display tool for space applications

    NASA Astrophysics Data System (ADS)

    Pereira do Carmo, J.; Gordo, P. R.; Martins, M.; Rodrigues, F.; Teodoro, P.

    2017-11-01

    The study of a Direct Visualization Display Tool (DVDT) for space applications is reported. The review of novel technologies for a compact display tool is described. Several applications for this tool have been identified with the support of ESA astronauts and are presented. A baseline design is proposed. It consists mainly of OLEDs as the image source; a specially designed optical prism as relay optics; a Personal Digital Assistant (PDA), with a data acquisition card, as the control unit; and voice control and a simplified keyboard as interfaces. The optical analysis and the final estimated performance are reported. The system is able to display information (text, pictures and/or video) with SVGA resolution directly to the astronaut over a Field of View (FOV) of 20 x 14.5 degrees. The image delivery system is a monocular Head Mounted Display (HMD) that weighs less than 100 g. The HMD optical system has an eye pupil of 7 mm and an eye relief distance of 30 mm.

  19. Architecture for a PACS primary diagnosis workstation

    NASA Astrophysics Data System (ADS)

    Shastri, Kaushal; Moran, Byron

    1990-08-01

    A major factor in determining the overall utility of a medical Picture Archiving and Communications (PACS) system is the functionality of the diagnostic workstation. Meyer-Ebrecht and Wendler [1] have proposed a modular picture computer architecture with high throughput, and Perry et al. [2] have defined performance requirements for radiology workstations. In order to be clinically useful, a primary diagnosis workstation must not only provide the functions of current viewing systems (e.g., mechanical alternators [3,4]), such as acceptable image quality, simultaneous viewing of multiple images, and rapid switching of image banks, but must also provide a diagnostic advantage over the current systems. This includes window-level functions on any image, simultaneous display of multi-modality images, rapid image manipulation, image processing, dynamic image display (cine), electronic image archival, hardcopy generation, image acquisition, network support, and an easy user interface. Implementation of such a workstation requires an underlying hardware architecture which provides high-speed image transfer channels, local storage facilities, and image processing functions. This paper describes the hardware architecture of the Siemens Diagnostic Reporting Console (DRC), which meets these requirements.

  20. Real time display Fourier-domain OCT using multi-thread parallel computing with data vectorization

    NASA Astrophysics Data System (ADS)

    Eom, Tae Joong; Kim, Hoon Seop; Kim, Chul Min; Lee, Yeung Lak; Choi, Eun-Seo

    2011-03-01

    We demonstrate a real-time display of processed OCT images using multi-thread parallel computing on the quad-core CPU of a personal computer. The data of each A-line are treated as one vector to maximize the data transfer rate between the CPU cores and the image data stored in RAM. A display rate of 29.9 frames/sec for processed OCT data (4096 FFT size x 500 A-scans) is achieved in our system using a wavelength-swept source with a 52-kHz sweep frequency. The data processing times for an OCT image and for a Doppler OCT image with a four-fold average are 23.8 msec and 91.4 msec, respectively.
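
    A CPU-side flavor of that A-line-as-vector processing can be sketched in Python: split the frame's A-lines into chunks, FFT each chunk along the sample axis, and convert to log magnitude. This only illustrates the chunked, vectorized idea; the function names, the DC-removal step, and the use of a thread pool are assumptions, and a process pool may be needed if the interpreter lock dominates.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def process_oct_frame(spectra, fft_size=4096, n_workers=4):
    """Swept-source spectra (a_lines x samples) -> log-scaled OCT image.

    Each A-line is treated as one vector; chunks of A-lines are handed to
    worker threads so the per-chunk FFTs can overlap."""
    spectra = np.asarray(spectra, dtype=np.float64)
    chunks = np.array_split(spectra, n_workers, axis=0)

    def fft_chunk(chunk):
        chunk = chunk - chunk.mean(axis=1, keepdims=True)      # crude DC removal
        depth = np.fft.fft(chunk, n=fft_size, axis=1)
        return 20.0 * np.log10(np.abs(depth[:, :fft_size // 2]) + 1e-12)

    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        return np.vstack(list(pool.map(fft_chunk, chunks)))
```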

  1. Single DMD time-multiplexed 64-views autostereoscopic 3D display

    NASA Astrophysics Data System (ADS)

    Loreti, Luigi

    2013-03-01

    Based on the prototype real-time 3D holographic display developed last year, we developed a new concept for an auto-stereoscopic, multiview (64 views), wide-angle (90°), full-color 3D display. The display is based on an RGB laser light source illuminating a DMD (Discovery 4100, 0.7") at 24,000 fps and an image deflection system made with an AOD (acousto-optic deflector) driven by a piezo-electric transducer that generates a variable standing acoustic wave in the crystal, which acts as a phase grating. The DMD projects 64 points of view of the image onto the crystal cube in fast sequence. Depending on the frequency of the standing wave, the input picture sent by the DMD is deflected into a different angle of view. A holographic screen at the proper distance diffuses the rays vertically (60°) and horizontally selects (1°) only the rays directed toward the observer. A telescope optical system enlarges the image to the right dimension. VHDL firmware to render 64 views (16-bit 4:2:2) of a CAD model (obj, dxf or 3Ds) and depth-map encoded video images in real time (16 ms) was developed in the resident Virtex-5 FPGA of the Discovery 4100 SDK, thus eliminating the need for image transfer and high-speed links.

  2. High-performance floating-point image computing workstation for medical applications

    NASA Astrophysics Data System (ADS)

    Mills, Karl S.; Wong, Gilman K.; Kim, Yongmin

    1990-07-01

    The medical imaging field relies increasingly on imaging and graphics techniques in diverse applications with needs similar to (or more stringent than) those of the military, industrial and scientific communities. However, most image processing and graphics systems available for use in medical imaging today are either expensive, specialized, or in most cases both. High performance imaging and graphics workstations which can provide real-time results for a number of applications, while maintaining affordability and flexibility, can facilitate the application of digital image computing techniques in many different areas. This paper describes the hardware and software architecture of a medium-cost floating-point image processing and display subsystem for the NeXT computer, and its applications as a medical imaging workstation. Medical imaging applications of the workstation include use in a Picture Archiving and Communications System (PACS), as a multimodal image processing and 3-D graphics workstation for a broad range of imaging modalities, and as an electronic alternator utilizing its multiple monitor display capability and large and fast frame buffer. The subsystem provides a 2048 x 2048 x 32-bit frame buffer (16 Mbytes of image storage) and supports both 8-bit gray scale and 32-bit true color images. When used to display 8-bit gray scale images, up to four different 256-color palettes may be used for each of four 2K x 2K x 8-bit image frames. Three of these image frames can be used simultaneously to provide pixel selectable region of interest display. A 1280 x 1024 pixel screen with 1:1 aspect ratio can be windowed into the frame buffer for display of any portion of the processed image or images. In addition, the system provides hardware support for integer zoom and an 82-color cursor. This subsystem is implemented on an add-in board occupying a single slot in the NeXT computer. Up to three boards may be added to the NeXT for multiple display capability (e.g., three 1280 x 1024 monitors, each with a 16-Mbyte frame buffer). Each add-in board provides an expansion connector to which an optional image computing coprocessor board may be added. Each coprocessor board supports up to four processors for a peak performance of 160 MFLOPS. The coprocessors can execute programs from external high-speed microcode memory as well as built-in internal microcode routines. The internal microcode routines provide support for 2-D and 3-D graphics operations, matrix and vector arithmetic, and image processing in integer, IEEE single-precision floating point, or IEEE double-precision floating point. In addition to providing a library of C functions which links the NeXT computer to the add-in board and supports its various operational modes, algorithms and medical imaging application programs are being developed and implemented for image display and enhancement. As an extension to the built-in algorithms of the coprocessors, 2-D Fast Fourier Transform (FFT), 2-D inverse FFT, convolution, warping and other algorithms (e.g., Discrete Cosine Transform) which exploit the parallel architecture of the coprocessor board are being implemented.

  3. 78 FR 32078 - Special Conditions: Gulfstream Model G280 Airplane, Enhanced Flight Vision System (EFVS) With...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-05-29

    ... document refers to a system comprised of a head-up display, imaging sensor(s), and avionics interfaces that display the sensor imagery on the HUD, and which overlay that imagery with alpha-numeric and symbolic... the sensor imagery, with or without other flight information, on a head-down display. For clarity, the...

  4. Split-screen display system and standardized methods for ultrasound image acquisition and multi-frame data processing

    NASA Technical Reports Server (NTRS)

    Selzer, Robert H. (Inventor); Hodis, Howard N. (Inventor)

    2011-01-01

    A standardized acquisition methodology assists operators to accurately replicate high resolution B-mode ultrasound images obtained over several spaced-apart examinations utilizing a split-screen display in which the arterial ultrasound image from an earlier examination is displayed on one side of the screen while a real-time "live" ultrasound image from a current examination is displayed next to the earlier image on the opposite side of the screen. By viewing both images, whether simultaneously or alternately, while manually adjusting the ultrasound transducer, an operator is able to bring into view the real-time image that best matches a selected image from the earlier ultrasound examination. Utilizing this methodology, dynamic material properties of arterial structures, such as IMT and diameter, are measured in a standard region over successive image frames. Each frame of the sequence has its echo edge boundaries automatically determined by using the immediately prior frame's true echo edge coordinates as initial boundary conditions. Computerized echo edge recognition and tracking over multiple successive image frames enhances measurement of arterial diameter and IMT and allows for improved vascular dimension measurements, including vascular stiffness and IMT determinations.
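
    A minimal sketch of the frame-to-frame idea (each frame's boundary seeded by the previous frame's) might look like the following; the gradient-snapping rule, the search window, and the function names are assumptions for illustration, since the patented recognition method is more elaborate.

```python
import numpy as np

def track_echo_edge(frames, initial_edge, search=5):
    """Track an echo boundary over successive frames, using the previous
    frame's edge as the initial condition for the next. Each column of the
    edge is snapped to the strongest vertical intensity gradient within
    +/- `search` pixels of its prior position.

    frames: iterable of 2-D arrays; initial_edge: per-column row indices."""
    edge = np.asarray(initial_edge, dtype=int).copy()
    tracked = []
    for frame in frames:
        grad = np.abs(np.diff(frame.astype(np.float64), axis=0))
        for col, row in enumerate(edge):
            lo = max(row - search, 0)
            hi = min(row + search, grad.shape[0] - 1)
            edge[col] = lo + int(np.argmax(grad[lo:hi + 1, col]))
        tracked.append(edge.copy())
    return tracked
```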

  5. Omniview motionless camera orientation system

    NASA Technical Reports Server (NTRS)

    Martin, H. Lee (Inventor); Kuban, Daniel P. (Inventor); Zimmermann, Steven D. (Inventor); Busko, Nicholas (Inventor)

    2010-01-01

    An apparatus and method is provided for converting digital images for use in an imaging system. The apparatus includes a data memory which stores digital data representing an image having a circular or spherical field of view such as an image captured by a fish-eye lens, a control input for receiving a signal for selecting a portion of the image, and a converter responsive to the control input for converting digital data corresponding to the selected portion into digital data representing a planar image for subsequent display. Various methods include the steps of storing digital data representing an image having a circular or spherical field of view, selecting a portion of the image, and converting the stored digital data corresponding to the selected portion into digital data representing a planar image for subsequent display. In various embodiments, the data converter and data conversion step may use an orthogonal set of transformation algorithms.
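
    The core of such a conversion can be sketched as building a ray for every output pixel of a virtual planar camera, rotating the rays by the selected pan and tilt, and looking the rays up in the fisheye image under some lens model. The sketch below assumes an equidistant 180-degree fisheye centered in the frame and nearest-neighbor sampling; it is an illustrative stand-in, not the patented transformation algorithms.

```python
import numpy as np

def fisheye_to_planar(src, pan, tilt, fov_deg, out_size=512):
    """Render a planar (perspective) view from a circular fisheye image.

    src: HxW(x3) fisheye image whose image circle fills min(H, W).
    pan, tilt: viewing direction in radians; fov_deg: output field of view."""
    h, w = src.shape[:2]
    radius = min(h, w) / 2.0
    cx, cy = w / 2.0, h / 2.0
    f = (out_size / 2.0) / np.tan(np.radians(fov_deg) / 2.0)

    # Per-pixel viewing rays for the virtual planar camera.
    u, v = np.meshgrid(np.arange(out_size) - out_size / 2.0,
                       np.arange(out_size) - out_size / 2.0)
    rays = np.stack([u, v, np.full_like(u, f)], axis=-1)
    rays /= np.linalg.norm(rays, axis=-1, keepdims=True)

    # Rotate rays by pan (about the vertical axis) then tilt (horizontal axis).
    cp, sp, ct, st = np.cos(pan), np.sin(pan), np.cos(tilt), np.sin(tilt)
    x, y, z = rays[..., 0], rays[..., 1], rays[..., 2]
    x, z = x * cp + z * sp, -x * sp + z * cp
    y, z = y * ct + z * st, -y * st + z * ct

    # Equidistant fisheye: image radius is proportional to the off-axis angle.
    theta = np.arccos(np.clip(z, -1.0, 1.0))
    phi = np.arctan2(y, x)
    r = radius * theta / (np.pi / 2.0)     # 180-degree fisheye assumed
    # Rays outside the hemisphere are simply clamped; a real system would mask them.
    sx = np.clip((cx + r * np.cos(phi)).astype(int), 0, w - 1)
    sy = np.clip((cy + r * np.sin(phi)).astype(int), 0, h - 1)
    return src[sy, sx]
```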

  6. Multi-channel automotive night vision system

    NASA Astrophysics Data System (ADS)

    Lu, Gang; Wang, Li-jun; Zhang, Yi

    2013-09-01

    A four-channel automotive night vision system is designed and developed. It consists of four active near-infrared cameras and a multi-channel image processing and display unit; the cameras are placed at the front, left, right, and rear of the automobile. The system uses a near-infrared laser light source whose beam is collimated; the light source contains a thermoelectric cooler (TEC), can be synchronized with camera focusing, and has automatic light intensity adjustment, which ensures image quality. The composition principle of the system is described in detail; on this basis, beam collimation, LD driving and LD temperature control of the near-infrared laser light source, and the four-channel image processing and display are discussed. The system can be used for driver assistance, blind-spot information (BLIS), parking assistance, and alarm systems in both day and night.

  7. A large flat panel multifunction display for military and space applications

    NASA Astrophysics Data System (ADS)

    Pruitt, James S.

    1992-09-01

    A flat panel multifunction display (MFD) that offers the size and reliability benefits of liquid crystal display technology while achieving near-CRT display quality is presented. Display generation algorithms that provide exceptional display quality are being implemented in custom VLSI components to minimize MFD size. A high-performance processor converts user-specified display lists to graphics commands used by these components, resulting in high-speed updates of two-dimensional and three-dimensional images. The MFD uses the MIL-STD-1553B data bus for compatibility with virtually all avionics systems. The MFD can generate displays directly from display lists received from the MIL-STD-1553B bus. Complex formats can be stored in the MFD and displayed using parameters from the data bus. The MFD also accepts direct video input and performs special processing on this input to enhance image quality.

  8. Real-time self-calibration of a tracked augmented reality display

    NASA Astrophysics Data System (ADS)

    Baum, Zachary; Lasso, Andras; Ungi, Tamas; Fichtinger, Gabor

    2016-03-01

    PURPOSE: Augmented reality systems have been proposed for image-guided needle interventions but they have not become widely used in clinical practice due to restrictions such as limited portability, low display refresh rates, and tedious calibration procedures. We propose a handheld tablet-based self-calibrating image overlay system. METHODS: A modular handheld augmented reality viewbox was constructed from a tablet computer and a semi-transparent mirror. A consistent and precise self-calibration method, without the use of any temporary markers, was designed to achieve an accurate calibration of the system. Markers attached to the viewbox and patient are simultaneously tracked using an optical pose tracker to report the position of the patient with respect to a displayed image plane that is visualized in real-time. The software was built using the open-source 3D Slicer application platform's SlicerIGT extension and the PLUS toolkit. RESULTS: The accuracy of the image overlay with image-guided needle interventions yielded a mean absolute position error of 0.99 mm (95th percentile 1.93 mm) in-plane of the overlay and a mean absolute position error of 0.61 mm (95th percentile 1.19 mm) out-of-plane. This accuracy is clinically acceptable for tool guidance during various procedures, such as musculoskeletal injections. CONCLUSION: A self-calibration method was developed and evaluated for a tracked augmented reality display. The results show potential for the use of handheld image overlays in clinical studies with image-guided needle interventions.

  9. Image Format Conversion to DICOM and Lookup Table Conversion to Presentation Value of the Japanese Society of Radiological Technology (JSRT) Standard Digital Image Database.

    PubMed

    Yanagita, Satoshi; Imahana, Masato; Suwa, Kazuaki; Sugimura, Hitomi; Nishiki, Masayuki

    2016-01-01

    The Japanese Society of Radiological Technology (JSRT) standard digital image database contains many useful cases of chest X-ray images and has been used in much state-of-the-art research. However, the pixel values of all the images were simply digitized as relative density values using a scanned-film digitizer. As a result, the pixel values are completely different from the standardized display system input value of digital imaging and communications in medicine (DICOM), called the presentation value (P-value), which maintains visual consistency when images are observed on displays of different luminance. Therefore, we converted all the images in the JSRT standard digital image database to DICOM format, followed by conversion of the pixel values to P-values, using an original program developed by ourselves. Consequently, the JSRT standard digital image database has been modified so that the visual consistency of images is maintained across displays of different luminance.
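
    One way to picture the conversion is: turn each relative-density value into a transmitted luminance, look that luminance up in the Grayscale Standard Display Function to get a JND index, and scale the JND indices to the output bit depth. The sketch below follows that outline; the illumination level, the bit depth, and the `gsdf_luminance` table (which would come from DICOM PS3.14) are all assumed parameters, and the authors' actual program may well differ.

```python
import numpy as np

def density_to_pvalue(density_img, gsdf_luminance, l_illum=2000.0, bits=12):
    """Convert scanned-film relative optical densities to presentation values.

    density_img: array of relative optical densities (higher = darker).
    gsdf_luminance: monotone table of luminances (cd/m^2) for JND indices,
    assumed to be supplied from the DICOM PS3.14 GSDF rather than reproduced here.
    l_illum: assumed viewbox illuminance behind the film."""
    lum = l_illum * np.power(10.0, -np.asarray(density_img, dtype=np.float64))
    gsdf = np.asarray(gsdf_luminance, dtype=np.float64)
    jnd = np.searchsorted(gsdf, lum).clip(1, gsdf.size) - 1   # nearest-below JND index
    pvals = jnd / (gsdf.size - 1.0)                           # fraction of the JND range
    return np.round(pvals * (2 ** bits - 1)).astype(np.uint16)
```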

  10. Full-parallax 3D display from stereo-hybrid 3D camera system

    NASA Astrophysics Data System (ADS)

    Hong, Seokmin; Ansari, Amir; Saavedra, Genaro; Martinez-Corral, Manuel

    2018-04-01

    In this paper, we propose an innovative approach for the production of the microimages ready to display onto an integral-imaging monitor. Our main contribution is the use of a stereo-hybrid 3D camera system to pick up a 3D data pair and compose a denser point cloud. An intrinsic difficulty, however, is that the hybrid sensors have dissimilarities and therefore must be equalized. The processed data facilitate the generation of an integral image by computationally projecting the information through a virtual pinhole array. We illustrate this procedure with imaging experiments that provide microimages with enhanced quality. After projection of these microimages onto the integral-imaging monitor, 3D images are produced with full parallax and a wide viewing angle.
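
    The projection step, generating microimages by computationally projecting a point cloud through a virtual pinhole array, can be sketched in a direct (and deliberately unoptimized) way as below; the cell geometry, the sensor gap, and the brute-force loop over every lens are illustrative assumptions rather than the authors' pipeline.

```python
import numpy as np

def microimages_from_points(points, colors, n_lens=16, ppl=32, pitch=1.0, gap=3.0):
    """Project a colored point cloud through an n_lens x n_lens virtual
    pinhole array onto a sensor plane `gap` units behind it, writing one
    pixel per (point, pinhole) pair that stays inside its elemental cell.

    points: (N, 3) array of x, y, z with z > 0 in front of the array;
    ppl: pixels per lens; units are arbitrary but must be consistent."""
    frame = np.zeros((n_lens * ppl, n_lens * ppl, 3))
    half = (n_lens - 1) / 2.0
    for (x, y, z), rgb in zip(points, colors):
        for i in range(n_lens):                      # pinhole rows
            for j in range(n_lens):                  # pinhole columns
                px, py = (j - half) * pitch, (i - half) * pitch
                # Ray from the point through the pinhole, hit at the sensor plane.
                u = px + (px - x) * gap / z
                v = py + (py - y) * gap / z
                if abs(u - px) < pitch / 2 and abs(v - py) < pitch / 2:
                    col = int(j * ppl + (u - px + pitch / 2) / pitch * ppl)
                    row = int(i * ppl + (v - py + pitch / 2) / pitch * ppl)
                    frame[row, col] = rgb
    return frame
```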

  11. Panoramic, large-screen, 3-D flight display system design

    NASA Technical Reports Server (NTRS)

    Franklin, Henry; Larson, Brent; Johnson, Michael; Droessler, Justin; Reinhart, William F.

    1995-01-01

    The report documents and summarizes the results of the required evaluations specified in the SOW and the design specifications for the selected display system hardware. Also included are the proposed development plan and schedule as well as the estimated rough order of magnitude (ROM) cost to design, fabricate, and demonstrate a flyable prototype research flight display system. The thrust of the effort was development of a complete understanding of the user/system requirements for a panoramic, collimated, 3-D flyable avionic display system and the translation of the requirements into an acceptable system design for fabrication and demonstration of a prototype display in the early 1997 time frame. Eleven display system design concepts were presented to NASA LaRC during the program, one of which was down-selected to a preferred display system concept. A set of preliminary display requirements was formulated. The state of the art in image source technology, 3-D methods, collimation methods, and interaction methods for a panoramic, 3-D flight display system were reviewed in depth and evaluated. Display technology improvements and risk reductions associated with maturity of the technologies for the preferred display system design concept were identified.

  12. Augmented Reality Imaging System: 3D Viewing of a Breast Cancer.

    PubMed

    Douglas, David B; Boone, John M; Petricoin, Emanuel; Liotta, Lance; Wilson, Eugene

    2016-01-01

    To display images of breast cancer from a dedicated breast CT using Depth 3-Dimensional (D3D) augmented reality. A case of breast cancer imaged using contrast-enhanced breast CT (Computed Tomography) was viewed with the augmented reality imaging, which uses a head display unit (HDU) and joystick control interface. The augmented reality system demonstrated 3D viewing of the breast mass with head position tracking, stereoscopic depth perception, focal point convergence and the use of a 3D cursor and joy-stick enabled fly through with visualization of the spiculations extending from the breast cancer. The augmented reality system provided 3D visualization of the breast cancer with depth perception and visualization of the mass's spiculations. The augmented reality system should be further researched to determine the utility in clinical practice.

  13. Table screen 360-degree holographic display using circular viewing-zone scanning.

    PubMed

    Inoue, Tatsuaki; Takaki, Yasuhiro

    2015-03-09

    A table screen 360-degree holographic display is proposed, with an increased screen size and a viewing zone expanded over all horizontal directions around the table screen. It consists of a microelectromechanical systems spatial light modulator (MEMS SLM), a magnifying imaging system, and a rotating screen. The MEMS SLM generates hologram patterns at a high frame rate, the magnifying imaging system enlarges the screen of the MEMS SLM, and the reduced viewing zones are scanned circularly by the rotating screen. The viewing zones are localized to practically realize wavefront reconstruction. An experimental system has been constructed. The generation of 360-degree three-dimensional (3D) images was achieved by scanning 800 reduced and localized viewing zones circularly. The table screen had a diameter of 100 mm, and the frame rate of 3D image generation was 28.4 Hz.

  14. Stimulated penetrating keratoplasty using real-time virtual intraoperative surgical optical coherence tomography

    PubMed Central

    Lee, Changho; Kim, Kyungun; Han, Seunghoon; Kim, Sehui; Lee, Jun Hoon; Kim, Hong kyun; Kim, Chulhong; Jung, Woonggyu; Kim, Jeehyun

    2014-01-01

    Abstract. An intraoperative surgical microscope is an essential tool in neuro- and ophthalmological surgical environments. Yet it has an inherent limitation: it cannot classify subsurface information, because it only provides surface images. To compensate for this problem, combining the surgical microscope with optical coherence tomography (OCT) has been adopted. We developed a real-time virtual intraoperative surgical OCT (VISOCT) system by adapting a spectral-domain OCT scanner to a commercial surgical microscope. Thanks to our custom-made beam-splitting and image display subsystems, the OCT images and microscopic images are visualized simultaneously through the ocular lens, or eyepiece, of the microscope. This improvement helps surgeons focus on the operation without the distraction of viewing OCT images on a separate display. Moreover, displaying the live OCT images in the eyepiece aids the surgeon's depth perception during surgery. Finally, we successfully performed stimulated penetrating keratoplasty in live rabbits. We believe these technical achievements are crucial to enhancing the usability of the VISOCT system under real surgical operating conditions. PMID:24604471

  15. Computer system for scanning tunneling microscope automation

    NASA Astrophysics Data System (ADS)

    Aguilar, M.; García, A.; Pascual, P. J.; Presa, J.; Santisteban, A.

    1987-03-01

    A computerized system for the automation of a scanning tunneling microscope is presented. It is based on an IBM personal computer (PC) either an XT or an AT, which performs the control, data acquisition and storage operations, displays the STM "images" in real time, and provides image processing tools for the restoration and analysis of data. It supports different data acquisition and control cards and image display cards. The software has been designed in a modular way to allow the replacement of these cards and other equipment improvements as well as the inclusion of user routines for data analysis.

  16. Image design and replication for image-plane disk-type multiplex holograms

    NASA Astrophysics Data System (ADS)

    Chen, Chih-Hung; Cheng, Yih-Shyang

    2017-09-01

    The fabrication methods and parameter design for both real-image generation and virtual-image display in image-plane disk-type multiplex holography are introduced in this paper. A theoretical model of a disk-type hologram is also presented and is then used in our two-step holographic processes, including the production of a non-image-plane master hologram and optical replication using a single-beam copying system for the production of duplicated holograms. Experimental results are also presented to verify the possibility of mass production using the one-shot holographic display technology described in this study.

  17. Portable real-time color night vision

    NASA Astrophysics Data System (ADS)

    Toet, Alexander; Hogervorst, Maarten A.

    2008-03-01

    We developed a simple and fast lookup-table based method to derive and apply natural daylight colors to multi-band night-time images. The method deploys an optimal color transformation derived from a set of samples taken from a daytime color reference image. The colors in the resulting colorized multiband night-time images closely resemble the colors in the daytime color reference image. Also, object colors remain invariant under panning operations and are independent of the scene content. Here we describe the implementation of this method in two prototype portable dual band realtime night vision systems. One system provides co-aligned visual and near-infrared bands of two image intensifiers, the other provides co-aligned images from a digital image intensifier and an uncooled longwave infrared microbolometer. The co-aligned images from both systems are further processed by a notebook computer. The color mapping is implemented as a realtime lookup table transform. The resulting colorised video streams can be displayed in realtime on head mounted displays and stored on the hard disk of the notebook computer. Preliminary field trials demonstrate the potential of these systems for applications like surveillance, navigation and target detection.
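
    The lookup-table idea can be illustrated as a two-band histogram whose bins remember the average daytime color of the co-registered reference pixels that fell into them; applying the table to new two-band frames is then a single indexed lookup per pixel, which is what keeps the transform suitable for realtime use. The binning, the averaging rule, and the function names below are simplifications assumed for illustration; the published method derives its mapping differently.

```python
import numpy as np

def build_color_lut(ref_band1, ref_band2, ref_rgb, bins=32):
    """Derive a 2-D lookup table assigning a daylight color to every
    (band1, band2) intensity pair, from a co-registered daytime RGB reference.

    All inputs are 8-bit images of identical size."""
    idx1 = (ref_band1.astype(int) * bins) // 256
    idx2 = (ref_band2.astype(int) * bins) // 256
    lut = np.zeros((bins, bins, 3))
    counts = np.zeros((bins, bins, 1))
    np.add.at(lut, (idx1, idx2), ref_rgb)      # accumulate colors per bin
    np.add.at(counts, (idx1, idx2), 1)
    return lut / np.maximum(counts, 1)         # mean reference color per bin

def apply_color_lut(band1, band2, lut):
    """Colorize a two-band night image with one table lookup per pixel."""
    bins = lut.shape[0]
    idx1 = (band1.astype(int) * bins) // 256
    idx2 = (band2.astype(int) * bins) // 256
    return lut[idx1, idx2].astype(np.uint8)
```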

  18. Autostereoscopic 3D display system with dynamic fusion of the viewing zone under eye tracking: principles, setup, and evaluation [Invited].

    PubMed

    Yoon, Ki-Hyuk; Kang, Min-Koo; Lee, Hwasun; Kim, Sung-Kyu

    2018-01-01

    We study optical technologies for viewer-tracked autostereoscopic 3D display (VTA3D), which provides improved 3D image quality and an extended viewing range. In particular, we utilize a technique, the so-called dynamic fusion of the viewing zone (DFVZ), for each 3D optical line to realize image quality equivalent to that achievable at the optimal viewing distance, even when a viewer is moving in the depth direction. In addition, we examine quantitative properties of the viewing zones provided by the VTA3D system that adopted DFVZ, revealing that the optimal viewing zone can be formed at the viewer's position. Last, we show that the comfort zone is extended due to DFVZ. This is demonstrated by viewers' subjective evaluation of the 3D display system that employs both multiview autostereoscopic 3D display and DFVZ.

  19. Wavelength Coded Image Transmission and Holographic Optical Elements.

    DTIC Science & Technology

    1984-08-20

    ... system has been designed and built for transmitting images of diffusely reflecting objects through optical fibers and displaying those images at a ... passive components at the end of a fiber-optic ... designed to transmit high-resolution images of diffusely ... imaging system as described in this paper ... designing a system for viewing diffusely reflecting objects, one must consider that a ... (The authors are with the University of Minnesota, Electrical Engineering.)

  20. View generation for 3D-TV using image reconstruction from irregularly spaced samples

    NASA Astrophysics Data System (ADS)

    Vázquez, Carlos

    2007-02-01

    Three-dimensional television (3D-TV) will become the next big step in the development of advanced TV systems. One of the major challenges for the deployment of 3D-TV systems is the diversity of display technologies and the high cost of capturing multi-view content. Depth image-based rendering (DIBR) has been identified as a key technology for the generation of new views for stereoscopic and multi-view displays from a small number of views captured and transmitted. We propose a disparity compensation method for DIBR that does not require spatial interpolation of the disparity map. We use a forward-mapping disparity compensation with real precision. The proposed method deals with the irregularly sampled image resulting from this disparity compensation process by applying a re-sampling algorithm based on a bi-cubic spline function space that produces smooth images. The fact that no approximation is made on the position of the samples implies that geometrical distortions in the final images due to approximations in sample positions are minimized. We also paid attention to the occlusion problem. Our algorithm detects the occluded regions in the newly generated images and uses simple depth-aware inpainting techniques to fill the gaps created by newly exposed areas. We tested the proposed method in the context of generation of views needed for viewing on SynthaGram TM auto-stereoscopic displays. We used as input either a 2D image plus a depth map or a stereoscopic pair with the associated disparity map. Our results show that this technique provides high quality images to be viewed on different display technologies such as stereoscopic viewing with shutter glasses (two views) and lenticular auto-stereoscopic displays (nine views).
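
    To make the forward-mapping step concrete, here is a rudimentary sketch that shifts each reference pixel by its scaled disparity, resolves collisions by keeping the nearest sample, and fills the exposed gaps from the nearest valid neighbor on the same row. The paper's method keeps the real-valued sample positions and re-samples them in a bicubic spline space with depth-aware inpainting, so this rounded, row-wise version only illustrates the overall flow; all names are illustrative.

```python
import numpy as np

def forward_warp(image, disparity, view_offset):
    """Forward-map a reference view to a new viewpoint using a disparity map.

    disparity: pixels of horizontal shift at view_offset = 1.0."""
    h, w = disparity.shape
    out = np.zeros_like(image)
    depth = np.full((h, w), -np.inf)            # z-buffer keyed on disparity
    for y in range(h):
        for x in range(w):
            xn = int(round(x + view_offset * disparity[y, x]))
            # Keep the nearest (largest-disparity) sample when pixels collide.
            if 0 <= xn < w and disparity[y, x] > depth[y, xn]:
                depth[y, xn] = disparity[y, x]
                out[y, xn] = image[y, x]
        holes = np.where(np.isinf(depth[y]))[0]
        valid = np.where(~np.isinf(depth[y]))[0]
        if valid.size:
            for xh in holes:                    # crude nearest-neighbor hole fill
                out[y, xh] = out[y, valid[np.abs(valid - xh).argmin()]]
    return out
```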

  1. Omnidirectional-view three-dimensional display system based on cylindrical selective-diffusing screen.

    PubMed

    Xia, Xinxing; Zheng, Zhenrong; Liu, Xu; Li, Haifeng; Yan, Caijie

    2010-09-10

    We utilized a high-frame-rate projector, a rotating mirror, and a cylindrical selective-diffusing screen to present a novel three-dimensional (3D) omnidirectional-view display system without the need for any special viewing aids. The display principle and image size are analyzed, and the common display zone is proposed. The viewing zone for one observation place is also studied. The experimental results verify this method, and a vivid color 3D scene with occlusion and smooth parallax is also demonstrated with the system.

  2. Characterizing stroke lesions using digital templates and lesion quantification tools in a web-based imaging informatics system for a large-scale stroke rehabilitation clinical trial

    NASA Astrophysics Data System (ADS)

    Wang, Ximing; Edwardson, Matthew; Dromerick, Alexander; Winstein, Carolee; Wang, Jing; Liu, Brent

    2015-03-01

    Previously, we presented an Interdisciplinary Comprehensive Arm Rehabilitation Evaluation (ICARE) imaging informatics system that supports a large-scale phase III stroke rehabilitation trial. The ePR system is capable of displaying anonymized patient imaging studies and reports, and the system is accessible to multiple clinical trial sites and users across the United States via the web. However, the prior multicenter stroke rehabilitation trials lack any significant neuroimaging analysis infrastructure. In stroke related clinical trials, identification of the stroke lesion characteristics can be meaningful as recent research shows that lesion characteristics are related to stroke scale and functional recovery after stroke. To facilitate the stroke clinical trials, we hope to gain insight into specific lesion characteristics, such as vascular territory, for patients enrolled into large stroke rehabilitation trials. To enhance the system's capability for data analysis and data reporting, we have integrated new features with the system: a digital brain template display, a lesion quantification tool and a digital case report form. The digital brain templates are compiled from published vascular territory templates at each of 5 angles of incidence. These templates were updated to include territories in the brainstem using a vascular territory atlas and the Medical Image Processing, Analysis and Visualization (MIPAV) tool. The digital templates are displayed for side-by-side comparisons and transparent template overlay onto patients' images in the image viewer. The lesion quantification tool quantifies planimetric lesion area from user-defined contour. The digital case report form stores user input into a database, then displays contents in the interface to allow for reviewing, editing, and new inputs. In sum, the newly integrated system features provide the user with readily-accessible web-based tools to identify the vascular territory involved, estimate lesion area, and store these results in a web-based digital format.
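
    Planimetric area from a user-drawn contour is typically just the shoelace formula scaled by the pixel spacing; a plausible core routine is sketched below (the names and the closed-polygon assumption are ours, not necessarily the system's).

```python
def contour_area_mm2(points_px, pixel_spacing_mm):
    """Planimetric area enclosed by a lesion contour, via the shoelace formula.

    points_px: list of (x, y) vertices in pixel coordinates, treated as a
    closed polygon; pixel_spacing_mm: (row, col) spacing from the image header."""
    area_px = 0.0
    n = len(points_px)
    for i in range(n):
        x1, y1 = points_px[i]
        x2, y2 = points_px[(i + 1) % n]
        area_px += x1 * y2 - x2 * y1
    return abs(area_px) / 2.0 * pixel_spacing_mm[0] * pixel_spacing_mm[1]
```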

  3. Comparison study of five different display modalities for whole slide images in surgical pathology and cytopathology in Europe

    NASA Astrophysics Data System (ADS)

    D'Haene, Nicky; Maris, Calliope; Rorive, Sandrine; Moles Lopez, Xavier; Rostang, Johan; Marchessoux, Cédric; Pantanowitz, Liron; Parwani, Anil V.; Salmon, Isabelle

    2013-03-01

    User experience with viewing images in pathology is crucial for accurate interpretation and diagnosis. With digital pathology, images are being read on a display system, and this poses new types of questions: such as what is the difference in terms of pixelation, refresh lag or obscured features compared to an optical microscope. Is there a resultant change in user performance in terms of speed of slide review, perception of adequacy and quality or in diagnostic confidence? A prior psychophysical study was carried out comparing various display modalities on whole slide imaging (WSI) in pathology at the University of Pittsburgh Medical Center (UPMC) in the USA. This prior study compared professional and non-professional grade display modalities and highlighted the importance of using a medical grade display to view pathological digital images. This study was duplicated in Europe at the Department of Pathology in Erasme Hospital (Université Libre de Bruxelles (ULB)) in an attempt to corroborate these findings. Digital WSI with corresponding glass slides of 58 cases including surgical pathology and cytopathology slides of varying difficulty were employed. Similar non-professional and professional grade display modalities were compared to an optical microscope (Olympus BX51). Displays ranged from a laptop (DELL Latitude D620), to a consumer grade display (DELL E248WFPb), to two professional grade monitors (Eizo CG245W and Barco MDCC-6130). Three pathologists were selected from the Department of Pathology in Erasme Hospital (ULB) in Belgium to view and interpret the pathological images on these different displays. The results show that non-professional grade displays (laptop and consumer) have inferior user experience compared to professional grade monitors and the optical microscope.

  4. An update of commercial infrared sensing and imaging instruments

    NASA Technical Reports Server (NTRS)

    Kaplan, Herbert

    1989-01-01

    A classification of infrared sensing instruments by type and application, listing commercially available instruments, from single point thermal probes to on-line control sensors, to high speed, high resolution imaging systems is given. A review of performance specifications follows, along with a discussion of typical thermographic display approaches utilized by various imager manufacturers. An update report on new instruments, new display techniques and newly introduced features of existing instruments is given.

  5. High-speed switchable lens enables the development of a volumetric stereoscopic display

    PubMed Central

    Love, Gordon D.; Hoffman, David M.; Hands, Philip J.W.; Gao, James; Kirby, Andrew K.; Banks, Martin S.

    2011-01-01

    Stereoscopic displays present different images to the two eyes and thereby create a compelling three-dimensional (3D) sensation. They are being developed for numerous applications including cinema, television, virtual prototyping, and medical imaging. However, stereoscopic displays cause perceptual distortions, performance decrements, and visual fatigue. These problems occur because some of the presented depth cues (i.e., perspective and binocular disparity) specify the intended 3D scene while focus cues (blur and accommodation) specify the fixed distance of the display itself. We have developed a stereoscopic display that circumvents these problems. It consists of a fast switchable lens synchronized to the display such that focus cues are nearly correct. The system has great potential for both basic vision research and display applications. PMID:19724571

  6. System and method for magnetic current density imaging at ultra low magnetic fields

    DOEpatents

    Espy, Michelle A.; George, John Stevens; Kraus, Robert Henry; Magnelind, Per; Matlashov, Andrei Nikolaevich; Tucker, Don; Turovets, Sergei; Volegov, Petr Lvovich

    2016-02-09

    Preferred systems can include an electrical impedance tomography apparatus electrically connectable to an object; an ultra low field magnetic resonance imaging apparatus including a plurality of field directions and disposable about the object; a controller connected to the ultra low field magnetic resonance imaging apparatus and configured to implement a sequencing of one or more ultra low magnetic fields substantially along one or more of the plurality of field directions; and a display connected to the controller, and wherein the controller is further configured to reconstruct a displayable image of an electrical current density in the object. Preferred methods, apparatuses, and computer program products are also disclosed.

  7. Grid-based precision aim system and method for disrupting suspect objects

    DOEpatents

    Gladwell, Thomas Scott; Garretson, Justin; Hobart, Clinton G.; Monda, Mark J.

    2014-06-10

    A system and method for disrupting at least one component of a suspect object is provided. The system has a source for passing radiation through the suspect object, a grid board positionable adjacent the suspect object (the grid board having a plurality of grid areas, the radiation from the source passing through the grid board), a screen for receiving the radiation passing through the suspect object and generating at least one image, a weapon for deploying a discharge, and a targeting unit for displaying the image of the suspect object and aiming the weapon according to a disruption point on the displayed image and deploying the discharge into the suspect object to disable the suspect object.

  8. JAVA Stereo Display Toolkit

    NASA Technical Reports Server (NTRS)

    Edmonds, Karina

    2008-01-01

    This toolkit provides a common interface for displaying graphical user interface (GUI) components in stereo using either specialized stereo display hardware (e.g., liquid crystal shutter or polarized glasses) or anaglyph display (red/blue glasses) on standard workstation displays. An application using this toolkit will work without modification in either environment, allowing stereo software to reach a wider audience without sacrificing high-quality display on dedicated hardware. The toolkit is written in Java for use with the Swing GUI Toolkit and has cross-platform compatibility. It hooks into the graphics system, allowing any standard Swing component to be displayed in stereo. It uses the OpenGL graphics library to control the stereo hardware and to perform the rendering. It also supports anaglyph and special stereo hardware using the same API (application-program interface), and it can simulate color stereo in anaglyph mode by combining the red band of the left image with the green/blue bands of the right image. This is a low-level toolkit that accomplishes simply the display of components (including the JadeDisplay image display component). It does not include higher-level functions such as disparity adjustment, a 3D cursor, or overlays, all of which can be built using this toolkit.
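
    The color-anaglyph simulation the toolkit describes, the red band from the left image combined with the green and blue bands of the right image, amounts to the band arithmetic sketched below (shown in NumPy purely for illustration; the toolkit itself does this in Java on top of OpenGL).

```python
import numpy as np

def color_anaglyph(left_rgb, right_rgb):
    """Combine two co-registered HxWx3 views into a red/blue anaglyph frame:
    red from the left-eye image, green and blue from the right-eye image."""
    out = np.empty_like(left_rgb)
    out[..., 0] = left_rgb[..., 0]          # red channel from the left eye
    out[..., 1:] = right_rgb[..., 1:]       # green and blue from the right eye
    return out
```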

  9. An advanced programmable/reconfigurable color graphics display system for crew station technology research

    NASA Technical Reports Server (NTRS)

    Montoya, R. J.; England, J. N.; Hatfield, J. J.; Rajala, S. A.

    1981-01-01

    The hardware configuration, software organization, and applications software for the NASA IKONAS color graphics display system are described. The systems were created at the Langley Research Center Display Device Laboratory to develop, evaluate, and demonstrate advanced generic concepts, technology, and systems integration techniques for electronic crew station systems of future civil aircraft. A minicomputer with 64K core memory acts as a host for a raster scan graphics display generator. The architectures of the hardware system and the graphics display system are provided. The applications software features a FORTRAN-based model of an aircraft, a display system, and the utility program for real-time communications. The model accepts inputs from a two-dimensional joystick and outputs a set of aircraft states. Ongoing and planned work for image segmentation/generation, specialized graphics procedures, and higher level language user interface are discussed.

  10. Integration of OLEDs in biomedical sensor systems: design and feasibility analysis

    NASA Astrophysics Data System (ADS)

    Rai, Pratyush; Kumar, Prashanth S.; Varadan, Vijay K.

    2010-04-01

    Organic (electronic) Light Emitting Diodes (OLEDs) have been shown to have applications in the fields of lighting and flexible displays. These devices can also be incorporated in sensors, as a light source for imaging/fluorescence sensing in miniaturized biomedical systems and as low-cost displays for sensor output. Current device capability aligns well with these applications as low-power diffuse lighting and momentary, push-button dynamic display. A top-emission OLED design is proposed that can be integrated with the sensor and peripheral electrical circuitry, also based on organic electronics. A feasibility analysis is carried out for an integrated optical imaging/sensor system, based on luminosity and spectral bandwidth. A similar study is also carried out for a sensor output display system that functions as a pseudo-active OLED matrix. A power model is presented for device power requirements and constraints. The feasibility analysis is supplemented with a discussion of implementing ink-jet printing and stamping techniques to enable roll-to-roll manufacturing.

  11. [Development of a system for ultrasonic three-dimensional reconstruction of fetus].

    PubMed

    Baba, K

    1989-04-01

    We have developed a system for ultrasonic three-dimensional (3-D) fetus reconstruction using computers. Either a real-time linear array probe or a convex array probe of an ultrasonic scanner was mounted on the position sensor arm of a manual compound scanner in order to detect the position of the probe. A microcomputer was used to convert the position information into an image that could be recorded on video tape. This image was superimposed on the ultrasonic tomographic image with a superimposer and recorded on video tape. Fetuses in utero were scanned in seven cases. More than forty ultrasonic section images on the video tape were fed into a minicomputer. The shape of the fetus was displayed three-dimensionally by means of computer graphics. The computer-generated display produced a 3-D image of the fetus and showed the usefulness and accuracy of this system. Since it took only a few seconds for data collection by ultrasonic inspection, fetal movement did not adversely affect the results. Data input took about ten minutes for 40 slices, and 3-D reconstruction and display took about two minutes. The system made it possible to observe and record the 3-D image of the fetus in utero non-invasively and therefore is expected to make it much easier to obtain a 3-D picture of the fetus in utero.

  12. Integration of real-time 3D capture, reconstruction, and light-field display

    NASA Astrophysics Data System (ADS)

    Zhang, Zhaoxing; Geng, Zheng; Li, Tuotuo; Pei, Renjing; Liu, Yongchun; Zhang, Xiao

    2015-03-01

    Effective integration of 3D acquisition, reconstruction (modeling) and display technologies into a seamless system provides an augmented experience of visualizing and analyzing real objects and scenes with realistic 3D sensation. Applications can be found in medical imaging, gaming, virtual or augmented reality and hybrid simulations. Although 3D acquisition, reconstruction, and display technologies have gained significant momentum in recent years, there seems to be a lack of attention to synergistically combining these components into an "end-to-end" 3D visualization system. We designed, built and tested an integrated 3D visualization system that is able to capture 3D light-field images in real time, perform 3D reconstruction to build a 3D model of the objects, and display the 3D model on a large autostereoscopic screen. In this article, we present our system architecture and component designs, hardware/software implementations, and experimental results. We elaborate on our recent progress on sparse camera array light-field 3D acquisition, real-time dense 3D reconstruction, and autostereoscopic multi-view 3D display. A prototype is finally presented with test results to illustrate the effectiveness of our proposed integrated 3D visualization system.

  13. Procurement specification color graphic camera system

    NASA Technical Reports Server (NTRS)

    Prow, G. E.

    1980-01-01

    The performance and design requirements for a Color Graphic Camera System are presented. The system is a functional part of the Earth Observation Department Laboratory System (EODLS) and will be interfaced with Image Analysis Stations. It will convert the output of a raster scan computer color terminal into permanent, high resolution photographic prints and transparencies. Images usually displayed will be remotely sensed LANDSAT imager scenes.

  14. On the Uncertain Future of the Volumetric 3D Display Paradigm

    NASA Astrophysics Data System (ADS)

    Blundell, Barry G.

    2017-06-01

    Volumetric displays permit electronically processed images to be depicted within a transparent physical volume and enable a range of cues to depth to be inherently associated with image content. Further, images can be viewed directly by multiple simultaneous observers who are able to change vantage positions in a natural way. On the basis of research to date, we assume that the technologies needed to implement useful volumetric displays able to support translucent image formation are available. Consequently, in this paper we review aspects of the volumetric paradigm and identify important issues which have, to date, precluded their successful commercialization. Potentially advantageous characteristics are outlined and demonstrate that significant research is still needed in order to overcome barriers which continue to hamper the effective exploitation of this display modality. Given the recent resurgence of interest in developing commercially viable general purpose volumetric systems, this discussion is of particular relevance.

  15. Method and apparatus for calibrating a tiled display

    NASA Technical Reports Server (NTRS)

    Chen, Chung-Jen (Inventor); Johnson, Michael J. (Inventor); Chandrasekhar, Rajesh (Inventor)

    2001-01-01

    A display system that can be calibrated and re-calibrated with a minimal amount of manual intervention. To accomplish this, one or more cameras are provided to capture an image of the display screen. The resulting captured image is processed to identify any non-desirable characteristics, including visible artifacts such as seams, bands, rings, etc. Once the non-desirable characteristics are identified, an appropriate transformation function is determined. The transformation function is used to pre-warp the input video signal that is provided to the display such that the non-desirable characteristics are reduced or eliminated from the display. The transformation function preferably compensates for spatial non-uniformity, color non-uniformity, luminance non-uniformity, and other visible artifacts.
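
    One very small instance of such a transformation function is a per-pixel luminance gain map measured from a camera capture of a uniform gray field and then divided out of the input video. The sketch below assumes the captured image has already been registered to the display's pixel grid, and it ignores the geometric and color corrections the patent also covers; the names and clamp limits are illustrative.

```python
import numpy as np

def luminance_gain_map(captured_flat_field, target=None):
    """Per-pixel gain of the display, estimated from a camera capture of a
    uniform gray field (already resampled to the display's pixel grid)."""
    cap = np.asarray(captured_flat_field, dtype=np.float64)
    target = cap.mean() if target is None else target
    return np.clip(cap / target, 0.25, 4.0)             # limit extreme corrections

def prewarp_frame(frame, gain_map):
    """Pre-compensate a frame so bright regions are attenuated and dim
    regions are boosted before the tiled display re-imposes its own gain."""
    frame = np.asarray(frame, dtype=np.float64)
    if frame.ndim == 3:                                  # color frame, gray gain map
        gain_map = gain_map[..., None]
    return np.clip(frame / gain_map, 0, 255).astype(np.uint8)
```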

  16. Java-based remote viewing and processing of nuclear medicine images: toward "the imaging department without walls".

    PubMed

    Slomka, P J; Elliott, E; Driedger, A A

    2000-01-01

    In nuclear medicine practice, images often need to be reviewed and reports prepared from locations outside the department, usually in the form of hard copy. Although hard-copy images are simple and portable, they do not offer electronic data search and image manipulation capabilities. On the other hand, picture archiving and communication systems or dedicated workstations cannot be easily deployed at numerous locations. To solve this problem, we propose a Java-based remote viewing station (JaRViS) for the reading and reporting of nuclear medicine images using Internet browser technology. JaRViS interfaces to the clinical patient database of a nuclear medicine workstation. All JaRViS software resides on a nuclear medicine department server. The contents of the clinical database can be searched by a browser interface after providing a password. Compressed images with the Java applet and color lookup tables are downloaded on the client side. This paradigm does not require nuclear medicine software to reside on remote computers, which simplifies support and deployment of such a system. To enable versatile reporting of the images, color tables and thresholds can be interactively manipulated and images can be displayed in a variety of layouts. Image filtering, frame grouping (adding frames), and movie display are available. Tomographic mode displays are supported, including gated SPECT. The time to display 14 lung perfusion images in 128 x 128 matrix together with the Java applet and color lookup tables over a V.90 modem is <1 min. SPECT and PET slice reorientation is interactive (<1 s). JaRViS could run on a Windows 95/98/NT or a Macintosh platform with Netscape Communicator or Microsoft Internet Explorer. The performance of Java code for bilinear interpolation, cine display, and filtering approaches that of a standard imaging workstation. It is feasible to set up a remote nuclear medicine viewing station using Java and an Internet or intranet browser. Images can be made easily and cost-effectively available to referring physicians and ambulatory clinics within and outside of the hospital, providing a convenient alternative to film media. We also find this system useful in home reporting of emergency procedures such as lung ventilation-perfusion scans or dynamic studies.

  17. Model-based quantification of image quality

    NASA Technical Reports Server (NTRS)

    Hazra, Rajeeb; Miller, Keith W.; Park, Stephen K.

    1989-01-01

    In 1982, Park and Schowengerdt published an end-to-end analysis of a digital imaging system quantifying three principal degradation components: (1) image blur - blurring caused by the acquisition system, (2) aliasing - caused by insufficient sampling, and (3) reconstruction blur - blurring caused by the imperfect interpolative reconstruction. This analysis, which measures degradation as the square of the radiometric error, includes the sample-scene phase as an explicit random parameter and characterizes the image degradation caused by imperfect acquisition and reconstruction together with the effects of undersampling and random sample-scene phases. In a recent paper Mitchell and Netravali displayed the visual effects of the above-mentioned degradations and presented a subjective analysis of their relative importance in determining image quality. The primary aim of the research is to use the analysis of Park and Schowengerdt to correlate their mathematical criteria for measuring image degradations with subjective visual criteria. Insight gained from this research can be exploited in the end-to-end design of optical systems, so that system parameters (transfer functions of the acquisition and display systems) can be designed relative to each other, to obtain the best possible results using quantitative measurements.
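
    The degradation measure referred to above is a squared radiometric error averaged over sample-scene phases. The sketch below illustrates the idea numerically for a 1-D scene with linear-interpolation reconstruction; the scene, sampling rate, and interpolator are illustrative assumptions, not the exact model of Park and Schowengerdt.

        import numpy as np

        def mean_squared_radiometric_error(scene, sample_period, phases=32):
            # Average squared error between a 1-D "scene" and its sampled-and-
            # reconstructed version, over random sample-scene phases.
            x = np.linspace(0.0, 1.0, scene.size, endpoint=False)
            errors = []
            for phase in np.random.uniform(0, sample_period, size=phases):
                xs = np.arange(phase, 1.0, sample_period)   # sample locations
                samples = np.interp(xs, x, scene)           # acquisition (no blur modeled)
                recon = np.interp(x, xs, samples)           # linear reconstruction
                errors.append(np.mean((recon - scene) ** 2))
            return float(np.mean(errors))

        scene = np.sin(2 * np.pi * 6 * np.linspace(0, 1, 2048))  # 6 cycles across the field
        print(mean_squared_radiometric_error(scene, sample_period=1 / 32))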

  18. Computer-generated imagery for 4-D meteorological data

    NASA Technical Reports Server (NTRS)

    Hibbard, William L.

    1986-01-01

    The University of Wisconsin-Madison Space Science and Engineering Center is developing animated stereo display terminals for use with McIDAS (Man-computer Interactive Data Access System). This paper describes image-generation techniques which have been developed to take maximum advantage of these terminals, integrating large quantities of four-dimensional meteorological data from balloon and satellite soundings, satellite images, Doppler and volumetric radar, and conventional surface observations. The images have been designed to use perspective, shading, hidden-surface removal, and transparency to augment the animation and stereo-display geometry. They create an illusion of a moving three-dimensional model of the atmosphere. This paper describes the design of these images and a number of rules of thumb for generating four-dimensional meteorological displays.

  19. Full resolution hologram-like autostereoscopic display

    NASA Technical Reports Server (NTRS)

    Eichenlaub, Jesse B.; Hutchins, Jamie

    1995-01-01

    Under this program, Dimension Technologies Inc. (DTI) developed a prototype display that uses a proprietary illumination technique to create autostereoscopic hologram-like full resolution images on an LCD operating at 180 fps. The resulting 3D image possesses a resolution equal to that of the LCD along with properties normally associated with holograms, including change of perspective with observer position and lack of viewing position restrictions. Furthermore, this autostereoscopic technique eliminates the need to wear special glasses to achieve the parallax effect. Under the program a prototype display was developed which demonstrates the hologram-like full resolution concept. To implement such a system, DTI explored various concept designs and enabling technologies required to support those designs. Specifically required were: a parallax illumination system with sufficient brightness and control; an LCD with rapid address and pixel response; and an interface to an image generation system for creation of computer graphics. Of the possible parallax illumination system designs, we chose a design which utilizes an array of fluorescent lamps. This system creates six sets of illumination areas to be imaged behind an LCD. This controlled illumination array is interfaced to a lenticular lens assembly which images the light segments into thin vertical light lines to achieve the parallax effect. This light line formation is the foundation of DTI's autostereoscopic technique. The David Sarnoff Research Center (Sarnoff) was subcontracted to develop an LCD that would operate with a fast scan rate and pixel response. Sarnoff chose a surface mode cell technique and produced the world's first large area pi-cell active matrix TFT LCD. The device provided adequate performance to evaluate five different perspective stereo viewing zones. A Silicon Graphics' Iris Indigo system was used for image generation which allowed for static and dynamic multiple perspective image rendering. During the development of the prototype display, we identified many critical issues associated with implementing such a technology. Testing and evaluation enabled us to prove that this illumination technique provides autostereoscopic 3D multi perspective images with a wide range of view, smooth transition, and flickerless operation given suitable enabling technologies.

  20. A portable near-infrared fluorescence image overlay device for surgical navigation (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    McWade, Melanie A.

    2016-03-01

    A rise in the use of near-infrared (NIR) fluorescent dyes or intrinsic fluorescent markers for surgical guidance and tissue diagnosis has triggered the development of NIR fluorescence imaging systems. Because NIR wavelengths are invisible to the naked eye, instrumentation must allow surgeons to visualize areas of high fluorescence. Current NIR fluorescence imaging systems have limited ease-of-use because they display fluorescent information on remote display monitors that require surgeons to divert attention away from the patient to identify the location of tissue fluorescence. Furthermore, some systems lack simultaneous visible light imaging which provides valuable spatial context to fluorescence images. We have developed a novel, portable NIR fluorescence imaging approach for intraoperative surgical guidance that provides information for surgical navigation within the clinician's line of sight. The system utilizes a NIR CMOS detector to collect excited NIR fluorescence from the surgical field. Tissues with NIR fluorescence are overlaid with visible light to provide information on tissue margins directly on the surgical field. In vitro studies have shown this versatile imaging system can be applied to applications with both extrinsic NIR contrast agents such as indocyanine green and weaker sources of biological fluorescence such as parathyroid gland tissue. This non-invasive, portable NIR fluorescence imaging system overlays an image directly on tissue, potentially allowing surgical decisions to be made quicker and with greater ease-of-use than current NIR fluorescence imaging systems.
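
    A minimal numpy sketch of the overlay idea described above: a normalized NIR fluorescence channel is blended onto the visible-light image wherever it exceeds a threshold. The color choice, threshold, and blending weight are illustrative assumptions, not parameters of the presented device.

        import numpy as np

        def overlay_fluorescence(visible_rgb, nir, threshold=0.3, alpha=0.6):
            # Blend a pseudocolored NIR fluorescence channel onto a visible-light
            # RGB image wherever the normalized NIR signal exceeds the threshold.
            nir = nir.astype(np.float32)
            nir = (nir - nir.min()) / max(nir.ptp(), 1e-6)   # normalize to [0, 1]
            mask = nir > threshold
            out = visible_rgb.astype(np.float32)
            # Paint fluorescent regions green, weighted by signal strength.
            out[mask, 1] = (1 - alpha) * out[mask, 1] + alpha * 255 * nir[mask]
            return np.clip(out, 0, 255).astype(np.uint8)

        visible = np.full((256, 256, 3), 90, dtype=np.uint8)
        nir = np.zeros((256, 256))
        nir[100:140, 100:140] = 1.0                          # simulated fluorescent focus
        print(overlay_fluorescence(visible, nir)[120, 120])  # green channel boosted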

  1. Color Imaging management in film processing

    NASA Astrophysics Data System (ADS)

    Tremeau, Alain; Konik, Hubert; Colantoni, Philippe

    2003-12-01

    The latest research projects in the LIGIV laboratory concern the capture, processing, archiving and display of color images, considering the trichromatic nature of the human visual system (HVS). Among these projects, one addresses digital cinematographic film sequences of high resolution and dynamic range. This project aims to optimize the use of content for post-production operators and for the end user. The studies presented in this paper address the use of metadata to optimise the consumption of video content on a device of the user's choice, independent of the nature of the equipment that captured the content. Optimising consumption includes enhancing the quality of image reconstruction on a display. Another part of this project addresses the content-based adaptation of image display. The main focus is on Region of Interest (ROI) operations, based on the ROI concepts of MPEG-7. The aim of this second part is to characterize and ensure the conditions of display even if the display device or display medium changes. This requires firstly the definition of a reference color space and the definition of bi-directional color transformations for each peripheral device (camera, display, film recorder, etc.). The complicating factor is that different devices have different color gamuts, depending on the chromaticity of their primaries and the ambient illumination under which they are viewed. To match the displayed image to the intended appearance, all kinds of production metadata (camera specification, camera colour primaries, lighting conditions) should be associated with the film material. Metadata and content together build rich content. The author is assumed to specify conditions as known from the digital graphic arts. To control image pre-processing and image post-processing, these specifications should be contained in the film's metadata. The specifications are related to ICC profiles but additionally need to consider mesopic viewing conditions.
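
    As an illustration of the bi-directional color transformations mentioned above, the sketch below maps linear RGB with sRGB/Rec.709 primaries to CIE XYZ and back; the matrix is the standard sRGB one and stands in for a per-device profile derived from measured primaries and viewing conditions.

        import numpy as np

        # Linear sRGB (Rec.709 primaries, D65 white point) to CIE XYZ.
        RGB_TO_XYZ = np.array([[0.4124, 0.3576, 0.1805],
                               [0.2126, 0.7152, 0.0722],
                               [0.0193, 0.1192, 0.9505]])
        XYZ_TO_RGB = np.linalg.inv(RGB_TO_XYZ)

        def to_reference(rgb_linear):
            # Device (linear) RGB -> reference XYZ; one pixel per row.
            return rgb_linear @ RGB_TO_XYZ.T

        def to_device(xyz):
            # Reference XYZ -> (linear) RGB of the target display.
            return xyz @ XYZ_TO_RGB.T

        pixel = np.array([[0.2, 0.5, 0.1]])       # a single linear-RGB pixel
        xyz = to_reference(pixel)
        print(xyz, to_device(xyz))                # round-trips to the original RGB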

  2. Integrated test system of infrared and laser data based on USB 3.0

    NASA Astrophysics Data System (ADS)

    Fu, Hui Quan; Tang, Lin Bo; Zhang, Chao; Zhao, Bao Jun; Li, Mao Wen

    2017-07-01

    Based on USB 3.0, this paper presents the design of an integrated test system for an infrared image and laser signal data processing module. The core of the design is FPGA logic control; the design uses dual-chip DDR3 SDRAM to achieve a high-speed laser data cache, receives parallel LVDS image data through a serial-to-parallel conversion chip, and achieves high-speed data communication between the system and the host computer through the USB 3.0 bus. The experimental results show that the developed PC software realizes real-time display of the 14-bit LVDS original image after 14-to-8 bit conversion, of the JPEG2000-compressed image after decompression in software, and of the acquired laser signal data. The correctness of the test system design is verified, indicating that the interface link works properly.
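
    A minimal sketch of the 14-to-8 bit conversion used for real-time display. The mapping used by the actual PC software is not specified, so both a straight bit truncation and a min-max rescale are shown as illustrative variants.

        import numpy as np

        def truncate_14_to_8(frame14):
            # Drop the 6 least-significant bits: a fast, fixed mapping.
            return (frame14 >> 6).astype(np.uint8)

        def rescale_14_to_8(frame14):
            # Stretch the occupied range of the 14-bit frame onto 0..255.
            f = frame14.astype(np.float32)
            lo, hi = f.min(), f.max()
            return np.clip((f - lo) / max(hi - lo, 1.0) * 255.0, 0, 255).astype(np.uint8)

        frame = np.random.randint(0, 2 ** 14, size=(512, 640), dtype=np.uint16)
        print(truncate_14_to_8(frame).max(), rescale_14_to_8(frame).max())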

  3. Stroboscopic Image Modulation to Reduce the Visual Blur of an Object Being Viewed by an Observer Experiencing Vibration

    NASA Technical Reports Server (NTRS)

    Kaiser, Mary K. (Inventor); Adelstein, Bernard D. (Inventor); Anderson, Mark R. (Inventor); Beutter, Brent R. (Inventor); Ahumada, Albert J., Jr. (Inventor); McCann, Robert S. (Inventor)

    2014-01-01

    A method and apparatus for reducing the visual blur of an object being viewed by an observer experiencing vibration. In various embodiments of the present invention, the visual blur is reduced through stroboscopic image modulation (SIM). A SIM device is operated in an alternating "on/off" temporal pattern according to a SIM drive signal (SDS) derived from the vibration being experienced by the observer. A SIM device (controlled by a SIM control system) operating according to the SDS serves to reduce visual blur by "freezing" (or reducing an image's motion to a slow drift) the visual image of the viewed object. In various embodiments, the SIM device is selected from the group consisting of illuminator(s), shutter(s), display control system(s), and combinations of the foregoing (including the use of multiple illuminators, shutters, and display control systems).
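
    A minimal sketch of one way an on/off drive pattern could be derived from a measured vibration signal, gating the strobe to a short window after each rising zero crossing; the gate width and the zero-crossing detection are illustrative assumptions, not the patented SDS derivation.

        import numpy as np

        def sim_drive_signal(vibration, gate_fraction=0.1):
            # Return an on/off pattern that is "on" only for a short window after
            # each rising zero crossing of the measured vibration signal.
            rising = np.flatnonzero((vibration[:-1] < 0) & (vibration[1:] >= 0)) + 1
            gate = np.zeros(vibration.size, dtype=bool)
            if rising.size > 1:
                period = int(np.median(np.diff(rising)))  # samples per vibration cycle
                width = max(1, int(gate_fraction * period))
                for start in rising:
                    gate[start:start + width] = True
            return gate

        fs = 2000.0                                        # sample rate in Hz (illustrative)
        t = np.arange(0, 1.0, 1 / fs)
        vibration = np.sin(2 * np.pi * 17.0 * t)           # 17 Hz vibration component
        sds = sim_drive_signal(vibration)
        print(round(sds.mean(), 3))                        # duty cycle close to gate_fraction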

  4. AOIPS water resources data management system

    NASA Technical Reports Server (NTRS)

    Vanwie, P.

    1977-01-01

    The text and computer-generated displays used to demonstrate the AOIPS (Atmospheric and Oceanographic Information Processing System) water resources data management system are investigated. The system was developed to assist hydrologists in analyzing the physical processes occurring in watersheds. It was designed to alleviate some of the problems encountered while investigating the complex interrelationships of variables such as land-cover type, topography, precipitation, snow melt, surface runoff, evapotranspiration, and streamflow rates. The system has an interactive image processing capability and a color video display to display results as they are obtained.

  5. Implementation of a high-resolution workstation for primary diagnosis of projection radiography images

    NASA Astrophysics Data System (ADS)

    Good, Walter F.; Herron, John M.; Maitz, Glenn S.; Gur, David; Miller, Stephen L.; Straub, William H.; Fuhrman, Carl R.

    1990-08-01

    We designed and implemented a high-resolution video workstation as the central hardware component in a comprehensive multi-project program comparing the use of digital and film modalities. The workstation utilizes a 1.8 GByte real-time disk (RCI) capable of storing 400 full-resolution images and two Tektronix (GMA251) display controllers with 19" monitors (GMA202). The display is configured in a portrait format with a resolution of 1536 x 2048 x 8 bit, and operates at 75 Hz in a noninterlaced mode. Transmission of data through a 12 to 8 bit lookup table into the display controllers occurs at 20 MBytes/second (.35 seconds per image). The workstation allows brightness (level) and contrast (window) to be easily manipulated with a trackball, and various processing options can be selected using push buttons. Display of any of the 400 images is also performed at 20 MBytes/sec (.35 sec/image). A separate text display provides for the automatic display of patient history data and for a scoring form through which readers can interact with the system by means of a computer mouse. In addition, the workstation provides for the randomization of cases and for the immediate entry of diagnostic responses into a master database. Over the past year this workstation has been used for over 10,000 readings in diagnostic studies related to 1) image resolution; 2) film vs. soft display; 3) incorporation of patient history data into the reading process; and 4) usefulness of image processing.
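
    A minimal sketch of the kind of 12-to-8 bit window/level lookup table that the trackball would adjust; the window and level values shown are illustrative.

        import numpy as np

        def window_level_lut(window, level, in_bits=12):
            # Build a 2**in_bits-entry table mapping raw pixel values to 8-bit
            # display values for a given contrast window and brightness level.
            values = np.arange(2 ** in_bits, dtype=np.float32)
            lo = level - window / 2.0
            out = (values - lo) / max(float(window), 1.0) * 255.0
            return np.clip(out, 0, 255).astype(np.uint8)

        lut = window_level_lut(window=1500, level=2000)
        raw = np.random.randint(0, 4096, size=(2048, 1536), dtype=np.uint16)
        display = lut[raw]                                 # one table lookup per pixel
        print(display.dtype, display.shape)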

  6. Distributed file management for remote clinical image-viewing stations

    NASA Astrophysics Data System (ADS)

    Ligier, Yves; Ratib, Osman M.; Girard, Christian; Logean, Marianne; Trayser, Gerhard

    1996-05-01

    The Geneva PACS is based on a distributed architecture, with different archive servers used to store all the image files produced by digital imaging modalities. Images can then be visualized on different display stations with the Osiris software. Image visualization requires the image file to be physically present on the local station. Thus, images must be transferred from archive servers to local display stations in an acceptable way, meaning fast and user-friendly, with the notion of a file hidden from users. The transfer of image files is done according to different schemes, including prefetching and direct image selection. Prefetching allows the retrieval of previous studies of a patient in advance. Direct image selection is also provided in order to retrieve images on request. When images are transferred locally to the display station, they are stored in Papyrus files, each file containing a set of images. File names are used by the Osiris viewing software to open image sequences, but file names alone are not explicit enough to properly describe the content of a file. A specific utility has been developed to present a list of patients and, for each patient, a list of exams which can be selected and automatically displayed. The system has been successfully tested in different clinical environments. It will soon be extended on a hospital-wide basis.
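
    A minimal sketch of the prefetching idea, assuming a hypothetical archive layout with one directory per exam under each patient; the function name and file layout are illustrative, not the Geneva PACS implementation.

        import shutil
        from pathlib import Path

        def prefetch_prior_exams(patient_id, archive_root, local_cache, max_exams=3):
            # Copy the most recent prior exams of a patient from the archive server
            # to the local display station before they are requested (hypothetical
            # layout: one directory per exam under <archive_root>/<patient_id>/).
            archive = Path(archive_root) / patient_id
            cache = Path(local_cache) / patient_id
            cache.mkdir(parents=True, exist_ok=True)
            exams = sorted((d for d in archive.iterdir() if d.is_dir()), reverse=True)
            fetched = []
            for exam in exams[:max_exams]:
                target = cache / exam.name
                if not target.exists():
                    shutil.copytree(exam, target)
                fetched.append(target)
            return fetched

        # Example call (paths are hypothetical):
        # prefetch_prior_exams("PAT0042", "/archive", "/local/cache")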

  7. Cardio-PACs: a new opportunity

    NASA Astrophysics Data System (ADS)

    Heupler, Frederick A., Jr.; Thomas, James D.; Blume, Hartwig R.; Cecil, Robert A.; Heisler, Mary

    2000-05-01

    It is now possible to replace film-based image management in the cardiac catheterization laboratory with a Cardiology Picture Archiving and Communication System (Cardio-PACS) based on digital imaging technology. The first step in the conversion process is installation of a digital image acquisition system that is capable of generating high-quality DICOM-compatible images. The next three steps, which are the subject of this presentation, involve image display, distribution, and storage. Clinical requirements and associated cost considerations for these three steps are listed below: Image display: (1) Image quality equal to film, with DICOM format, lossless compression, image processing, desktop PC-based with color monitor, and physician-friendly imaging software; (2) Performance specifications include: acquire 30 frames/sec; replay 15 frames/sec; access to file server 5 seconds, and to archive 5 minutes; (3) Compatibility of image file, transmission, and processing formats; (4) Image manipulation: brightness, contrast, gray scale, zoom, biplane display, and quantification; (5) User-friendly control of image review. Image distribution: (1) Standard IP-based network between cardiac catheterization laboratories, file server, long-term archive, review stations, and remote sites; (2) Non-proprietary formats; (3) Bidirectional distribution. Image storage: (1) CD-ROM vs disk vs tape; (2) Verification of data integrity; (3) User-designated storage capacity for catheterization laboratory, file server, long-term archive. Costs: (1) Image acquisition equipment, file server, long-term archive; (2) Network infrastructure; (3) Review stations and software; (4) Maintenance and administration; (5) Future upgrades and expansion; (6) Personnel.

  8. Polyplanar optical display

    NASA Astrophysics Data System (ADS)

    Veligdan, James T.; Beiser, Leo; Biscardi, Cyrus; Brewster, Calvin; DeSanto, Leonard

    1997-07-01

    The polyplanar optical display (POD) is a unique display screen which can be used with any projection source. This display screen is 2 inches thick and has a matte black face which allows for high contrast images. The prototype being developed is a form, fit and functional replacement display for the B-52 aircraft which uses a monochrome ten-inch display. The new display uses a 100 milliwatt green solid state laser as its optical source. In order to produce real-time video, the laser light is being modulated by a digital light processing (DLP) chip manufactured by Texas Instruments, Inc. A variable astigmatic focusing system is used to produce a stigmatic image on the viewing face of the POD. In addition to the optical design, we discuss the electronic interfacing to the DLP chip, the opto-mechanical design and viewing angle characteristics.

  9. Laser-driven polyplanar optic display

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Veligdan, J.T.; Biscardi, C.; Brewster, C.

    1998-01-01

    The Polyplanar Optical Display (POD) is a unique display screen which can be used with any projection source. This display screen is 2 inches thick and has a matte-black face which allows for high contrast images. The prototype being developed is a form, fit and functional replacement display for the B-52 aircraft which uses a monochrome ten-inch display. The new display uses a 200 milliwatt green solid-state laser (532 nm) as its optical source. In order to produce real-time video, the laser light is being modulated by a Digital Light Processing (DLP) chip manufactured by Texas Instruments, Inc. A variable astigmatic focusing system is used to produce a stigmatic image on the viewing face of the POD. In addition to the optical design, the authors discuss the DLP chip, the optomechanical design and viewing angle characteristics.

  10. Laser-driven polyplanar optic display

    NASA Astrophysics Data System (ADS)

    Veligdan, James T.; Beiser, Leo; Biscardi, Cyrus; Brewster, Calvin; DeSanto, Leonard

    1998-05-01

    The Polyplanar Optical Display (POD) is a unique display screen which can be used with any projection source. This display screen is 2 inches thick and has a matte-black face which allows for high contrast images. The prototype being developed is a form, fit and functional replacement display for the B-52 aircraft which uses a monochrome ten-inch display. The new display uses a 200 milliwatt green solid-state laser (532 nm) as its optical source. In order to produce real-time video, the laser light is being modulated by a Digital Light Processing (DLP) chip manufactured by Texas Instruments, Inc. A variable astigmatic focusing system is used to produce a stigmatic image on the viewing face of the POD. In addition to the optical design, we discuss the DLP chip, the opto-mechanical design and viewing angle characteristics.

  11. Application of AI techniques to a voice-actuated computer system for reconstructing and displaying magnetic resonance imaging data

    NASA Astrophysics Data System (ADS)

    Sherley, Patrick L.; Pujol, Alfonso, Jr.; Meadow, John S.

    1990-07-01

    To provide a means of rendering complex computer architectures, languages, and input/output modalities transparent to experienced and inexperienced users, research is being conducted to develop a voice-driven/voice-response computer graphics imaging system. The system will be used for reconstructing and displaying computed tomography and magnetic resonance imaging scan data. In conjunction with this study, an artificial intelligence (AI) control strategy was developed to interface the voice components and support software to the computer graphics functions implemented on the Sun Microsystems 4/280 color graphics workstation. Based on generated text and converted renditions of verbal utterances by the user, the AI control strategy determines the user's intent and develops and validates a plan. The program type and parameters within the plan are used as input to the graphics system for reconstructing and displaying medical image data corresponding to that perceived intent. If the plan is not valid, the control strategy queries the user for additional information. The control strategy operates in a conversation mode and vocally provides system status reports. A detailed examination of the various AI techniques is presented, with major emphasis placed on their specific roles within the total control strategy structure.

  12. Image Capture and Display Based on Embedded Linux

    NASA Astrophysics Data System (ADS)

    Weigong, Zhang; Suran, Di; Yongxiang, Zhang; Liming, Li

    To meet the requirement of building a highly reliable communication system, SpaceWire was selected for the integrated electronic system, and its performance needed to be tested. As part of this testing work, the goal of this paper is to transmit image data from a CMOS camera through SpaceWire and display real-time images on a graphical user interface built with Qt on an embedded Linux and ARM development platform. A point-to-point mode of transmission was chosen; the running results showed that the two communication ends received essentially consistent pictures in succession, suggesting that SpaceWire can transmit the data reliably.

  13. Small Interactive Image Processing System (SMIPS) users manual

    NASA Technical Reports Server (NTRS)

    Moik, J. G.

    1973-01-01

    The Small Interactive Image Processing System (SMIPS) is designed to facilitate the acquisition, digital processing and recording of image data as well as pattern recognition in an interactive mode. Objectives of the system are ease of communication with the computer by personnel who are not expert programmers, fast response to requests for information on pictures, complete error recovery as well as simplification of future programming efforts for extension of the system. The SMIPS system is intended for operation under OS/MVT on an IBM 360/75 or 91 computer equipped with the IBM-2250 Model 1 display unit. This terminal is used as an interface between user and main computer. It has an alphanumeric keyboard, a programmed function keyboard and a light pen which are used for specification of input to the system. Output from the system is displayed on the screen as messages and pictures.

  14. A new approach of building 3D visualization framework for multimodal medical images display and computed assisted diagnosis

    NASA Astrophysics Data System (ADS)

    Li, Zhenwei; Sun, Jianyong; Zhang, Jianguo

    2012-02-01

    As more and more CT/MR studies are scanned with larger volumes of data, more and more radiologists and clinicians would like to use PACS workstations to display and manipulate these larger image data sets with 3D rendering features. In this paper, we propose a design method and implementation strategy for developing a 3D image display component that provides not only standard 3D display functions but also multi-modal medical image fusion as well as computer-assisted diagnosis of coronary heart disease. The 3D component has been integrated into the PACS display workstation of Shanghai Huadong Hospital, and clinical practice showed that it is easy for radiologists and physicians to use 3D functions such as multi-modality (e.g. CT, MRI, PET, SPECT) visualization, registration and fusion, and lesion quantitative measurements. The users were satisfied with the rendering speed and quality of the 3D reconstruction. The advantages of the component include low requirements for computer hardware, easy integration, reliable performance and a comfortable application experience. With this system, radiologists and clinicians can manipulate 3D images easily and use the advanced visualization tools to facilitate their work on a PACS display workstation at any time.

  15. On the development of an interactive resource information management system for analysis and display of spatiotemporal data

    NASA Technical Reports Server (NTRS)

    Schell, J. A.

    1974-01-01

    The recent availability of timely synoptic earth imagery from the Earth Resources Technology Satellites (ERTS) provides a wealth of information for the monitoring and management of vital natural resources. Formal language definitions and syntax interpretation algorithms were adapted to provide a flexible, computer information system for the maintenance of resource interpretation of imagery. These techniques are incorporated, together with image analysis functions, into an Interactive Resource Information Management and Analysis System, IRIMAS, which is implemented on a Texas Instruments 980A minicomputer system augmented with a dynamic color display for image presentation. A demonstration of system usage and recommendations for further system development are also included.

  16. Autonomous spacecraft rendezvous and docking

    NASA Technical Reports Server (NTRS)

    Tietz, J. C.; Almand, B. J.

    1985-01-01

    A storyboard display is presented which summarizes work done recently in design and simulation of autonomous video rendezvous and docking systems for spacecraft. This display includes: photographs of the simulation hardware, plots of chase vehicle trajectories from simulations, pictures of the docking aid including image processing interpretations, and drawings of the control system strategy. Viewgraph-style sheets on the display bulletin board summarize the simulation objectives, benefits, special considerations, approach, and results.

  17. Autonomous spacecraft rendezvous and docking

    NASA Astrophysics Data System (ADS)

    Tietz, J. C.; Almand, B. J.

    A storyboard display is presented which summarizes work done recently in design and simulation of autonomous video rendezvous and docking systems for spacecraft. This display includes: photographs of the simulation hardware, plots of chase vehicle trajectories from simulations, pictures of the docking aid including image processing interpretations, and drawings of the control system strategy. Viewgraph-style sheets on the display bulletin board summarize the simulation objectives, benefits, special considerations, approach, and results.

  18. Interactive display system having a matrix optical detector

    DOEpatents

    Veligdan, James T.; DeSanto, Leonard

    2007-01-23

    A display system includes a waveguide optical panel having an inlet face and an opposite outlet face. An image beam is projected across the inlet face laterally and transversely for display on the outlet face. An optical detector including a matrix of detector elements is optically aligned with the inlet face for detecting a corresponding lateral and transverse position of an inbound light spot on the outlet face.

  19. A Survey of Display Hardware and Software.

    ERIC Educational Resources Information Center

    Poore, Jesse H., Jr.; And Others

    Reported are two papers which deal with the fundamentals of display hardware and software in computer systems. The first report presents the basic principles of display hardware in terms of image generation from buffers presumed to be loaded and controlled by a digital computer. The concepts surrounding the electrostatic tube, the electromagnetic…

  20. Exploring interaction with 3D volumetric displays

    NASA Astrophysics Data System (ADS)

    Grossman, Tovi; Wigdor, Daniel; Balakrishnan, Ravin

    2005-03-01

    Volumetric displays generate true volumetric 3D images by actually illuminating points in 3D space. As a result, viewing their contents is similar to viewing physical objects in the real world. These displays provide a 360 degree field of view, and do not require the user to wear hardware such as shutter glasses or head-trackers. These properties make them a promising alternative to traditional display systems for viewing imagery in 3D. Because these displays have only recently been made available commercially (e.g., www.actuality-systems.com), their current use tends to be limited to non-interactive output-only display devices. To take full advantage of the unique features of these displays, however, it would be desirable if the 3D data being displayed could be directly interacted with and manipulated. We investigate interaction techniques for volumetric display interfaces, through the development of an interactive 3D geometric model building application. While this application area itself presents many interesting challenges, our focus is on the interaction techniques that are likely generalizable to interactive applications for other domains. We explore a very direct style of interaction where the user interacts with the virtual data using direct finger manipulations on and around the enclosure surrounding the displayed 3D volumetric image.

  1. Design of embedded endoscopic ultrasonic imaging system

    NASA Astrophysics Data System (ADS)

    Li, Ming; Zhou, Hao; Wen, Shijie; Chen, Xiodong; Yu, Daoyin

    2008-12-01

    The endoscopic ultrasonic imaging system is an important component of the endoscopic ultrasonography system (EUS). Through the ultrasonic probe, the EUS detects the tissue characteristics of the digestive organs, and the resulting ultrasonic echo is received by a reception circuit made up of amplification, gain compensation, filtering and A/D converter stages. The endoscopic ultrasonic imaging system is the back-end processing system of the EUS; it receives the digitized ultrasonic echo modulated by the digestive tract wall from the reception circuit and, after digital signal processing such as demodulation, acquires and displays the tissue features in the form of images and characteristic data. Traditional endoscopic ultrasonic imaging systems are mainly based on image acquisition and processing chips connected to a personal computer over USB 2.0, and suffer from high cost, complicated structure, poor portability, and difficulty of deployment. To address these shortcomings, this paper presents a digital signal acquisition and processing design based on embedded technology, with ARM and FPGA as the core hardware, substituting for the traditional design built around USB 2.0 and a personal computer. With a built-in FIFO and dual buffers, the FPGA implements ping-pong data storage while transferring the image data to the ARM through the EBI bus by DMA, under ARM control, to achieve high-speed transmission. The ARM system is responsible for image display each time a DMA transfer completes and for overall system control, with the drivers and applications running on the embedded operating system Windows CE, which provides a stable, safe and reliable platform for the embedded device software. Profiting from the excellent graphical user interface (GUI) and good performance of Windows CE, the application program can clearly show 511×511 pixel ultrasonic echo images and provides a simple and friendly operating interface with mouse and touch screen, which is more convenient than the traditional endoscopic ultrasonic imaging system. The whole embedded system, including the core and peripheral circuits of the FPGA and ARM, the power network circuit and the LCD display circuit, was designed and experimentally verified to display ultrasonic images properly, overcoming the bulk and complexity of the traditional endoscopic ultrasonic imaging system.

  2. Air traffic control : good progress on interim replacement for outage-plagued system, but risks can be further reduced

    DOT National Transportation Integrated Search

    1996-10-01

    Certain air traffic control (ATC) centers experienced a series of major outages, some of which were caused by the Display Channel Complex or DCC, a mainframe computer system that processes radar and other data into displayable images on controlle...

  3. Video stereo-laparoscopy system

    NASA Astrophysics Data System (ADS)

    Xiang, Yang; Hu, Jiasheng; Jiang, Huilin

    2006-01-01

    Minimally invasive surgery (MIS) has contributed significantly to patient care by reducing the morbidity associated with more invasive procedures. MIS procedures have become standard treatment for gallbladder disease and some abdominal malignancies. The imaging system has played a major role in the evolving field of MIS. The image needs to have good resolution and large magnification and, especially, needs to provide a depth cue, while remaining free of flicker and of suitable brightness. A video stereo-laparoscopy system can meet these demands of the doctors. This paper introduces a 3D video laparoscope with the following characteristics: field frequency 100 Hz, depth range 150 mm, resolution 10 lp/mm. The working principle of the system is introduced in detail, and the optical system and the time-division stereo-display system are described briefly in this paper. The system has an imaging lens that forms the image on the CCD chip; the optical signal is converted into a video signal, digitized by the A/D converter of the image processing system, and the polarized images are then displayed on the monitor screen through liquid crystal shutters. Wearing polarized glasses, the doctors can watch a flicker-free 3D image of the tissue or organ. The 3D video laparoscope system has been applied in the MIS field and praised by doctors. Compared with the traditional 2D video laparoscopy system, it offers advantages such as reduced surgery time, fewer surgical difficulties, and shorter training time.

  4. X-Windows Widget for Image Display

    NASA Technical Reports Server (NTRS)

    Deen, Robert G.

    2011-01-01

    XvicImage is a high-performance XWindows (Motif-compliant) user interface widget for displaying images. It handles all aspects of low-level image display. The fully Motif-compliant image display widget handles the following tasks: (1) Image display, including dithering as needed (2) Zoom (3) Pan (4) Stretch (contrast enhancement, via lookup table) (5) Display of single-band or color data (6) Display of non-byte data (ints, floats) (7) Pseudocolor display (8) Full overlay support (drawing graphics on image) (9) Mouse-based panning (10) Cursor handling, shaping, and planting (disconnecting cursor from mouse) (11) Support for all user interaction events (passed to application) (12) Background loading and display of images (doesn't freeze the GUI) (13) Tiling of images.

  5. A near-real-time full-parallax holographic display for remote operations

    NASA Technical Reports Server (NTRS)

    Iavecchia, Helene P.; Huff, Lloyd; Marzwell, Neville I.

    1991-01-01

    A near real-time, full parallax holographic display system was developed that has the potential to provide a 3-D display for remote handling operations in hazardous environments. The major components of the system consist of a stack of three spatial light modulators which serves as the object source of the hologram; a near real-time holographic recording material (such as thermoplastic and photopolymer); and an optical system for relaying SLM images to the holographic recording material and to the observer for viewing.

  6. Display Sharing: An Alternative Paradigm

    NASA Technical Reports Server (NTRS)

    Brown, Michael A.

    2010-01-01

    The current Johnson Space Center (JSC) Mission Control Center (MCC) Video Transport System (VTS) provides flight controllers and management the ability to meld raw video from various sources with telemetry to improve situational awareness. However, maintaining a separate infrastructure for video delivery and integration of video content with data adds significant complexity and cost to the system. When considering alternative architectures for a VTS, the current system's ability to share specific computer displays in their entirety to other locations, such as large projector systems, flight control rooms, and back supporting rooms throughout the facilities and centers must be incorporated into any new architecture. Internet Protocol (IP)-based systems also support video delivery and integration. IP-based systems generally have an advantage in terms of cost and maintainability. Although IP-based systems are versatile, the task of sharing a computer display from one workstation to another can be time consuming for an end-user and inconvenient to administer at a system level. The objective of this paper is to present a prototype display sharing enterprise solution. Display sharing is a system which delivers image sharing across the LAN while simultaneously managing bandwidth, supporting encryption, enabling recovery and resynchronization following a loss of signal, and minimizing latency. Additional critical elements will include image scaling support, multi-sharing, ease of initial integration and configuration, integration with desktop window managers, collaboration tools, host and recipient controls. The goal of this paper is to summarize the various elements of an IP-based display sharing system that can be used in today's control center environment.

  7. Development of scanning holographic display using MEMS SLM

    NASA Astrophysics Data System (ADS)

    Takaki, Yasuhiro

    2016-10-01

    Holography is an ideal three-dimensional (3D) display technique, because it produces 3D images that naturally satisfy human 3D perception including physiological and psychological factors. However, its electronic implementation is quite challenging because ultra-high resolution is required for display devices to provide sufficient screen size and viewing zone. We have developed holographic display techniques to enlarge the screen size and the viewing zone by use of microelectromechanical systems spatial light modulators (MEMS-SLMs). Because MEMS-SLMs can generate hologram patterns at a high frame rate, the time-multiplexing technique is utilized to virtually increase the resolution. Three kinds of scanning systems have been combined with MEMS-SLMs; the screen scanning system, the viewing-zone scanning system, and the 360-degree scanning system. The screen scanning system reduces the hologram size to enlarge the viewing zone and the reduced hologram patterns are scanned on the screen to increase the screen size: the color display system with a screen size of 6.2 in. and a viewing zone angle of 11° was demonstrated. The viewing-zone scanning system increases the screen size and the reduced viewing zone is scanned to enlarge the viewing zone: a screen size of 2.0 in. and a viewing zone angle of 40° were achieved. The two-channel system increased the screen size to 7.4 in. The 360-degree scanning increases the screen size and the reduced viewing zone is scanned circularly: the display system having a flat screen with a diameter of 100 mm was demonstrated, which generates 3D images viewed from any direction around the flat screen.

  8. Teleradiology system using a magneto-optical disk and N-ISDN

    NASA Astrophysics Data System (ADS)

    Ban, Hideyuki; Osaki, Takanobu; Matsuo, Hitoshi; Okabe, Akifumi; Nakajima, Kotaro; Ohyama, Nagaaki

    1997-05-01

    We have developed a new teleradiology system that provides a fast response and secure data transmission while using N- ISDN communication and an ISC magneto-optical disk that is specialized for medical use. The system consists of PC-based terminals connected to a N-ISDN line and the ISC disk. The system uses two types of data: the control data needed for various operational functions and the image data. For quick response, only the much smaller quantity of control data is sent through the N-ISDN during the actual conference. The bulk of the image data is sent to each site on duplicate ISC disks before the conference. The displaying and processing of images are executed using the local data on the ISC disk. We used this system for a trial teleconsultation between two hospitals. The response time needed to display a 2-Mbyte image was 4 seconds. The telepointer could be controlled with no noticeable delay by sending only the pointer's coordinates. Also, since the patient images were exchanged via the ISC disks only, unauthorized access to the patient images through the N-ISDN was prevented. Thus, this trial provides a preliminary demonstration of the usefulness of this system for clinical use.

  9. Computer Human Interaction for Image Information Systems.

    ERIC Educational Resources Information Center

    Beard, David Volk

    1991-01-01

    Presents an approach to developing viable image computer-human interactions (CHI) involving user metaphors for comprehending image data and methods for locating, accessing, and displaying computer images. A medical-image radiology workstation application is used as an example, and feedback and evaluation methods are discussed. (41 references) (LRW)

  10. Active confocal imaging for visual prostheses

    PubMed Central

    Jung, Jae-Hyun; Aloni, Doron; Yitzhaky, Yitzhak; Peli, Eli

    2014-01-01

    There are encouraging advances in prosthetic vision for the blind, including retinal and cortical implants, and other “sensory substitution devices” that use tactile or electrical stimulation. However, they all have low resolution, limited visual field, and can display only a few gray levels (limited dynamic range), severely restricting their utility. To overcome these limitations, image processing or the imaging system could emphasize objects of interest and suppress the background clutter. We propose an active confocal imaging system based on light-field technology that will enable a blind user of any visual prosthesis to efficiently scan, focus on, and “see” only an object of interest while suppressing interference from background clutter. The system captures three-dimensional scene information using a light-field sensor and displays only an in-focus plane with objects in it. After capturing a confocal image, a de-cluttering process removes the clutter based on blur difference. In preliminary experiments we verified the positive impact of confocal-based background clutter removal on recognition of objects in low resolution and limited dynamic range simulated phosphene images. Using a custom-made multiple-camera system, we confirmed that the concept of a confocal de-cluttered image can be realized effectively using light field imaging. PMID:25448710
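
    A minimal numpy sketch of one way to realize the blur-difference de-cluttering step, scoring local sharpness with a second-difference (Laplacian-like) measure and suppressing low-sharpness blocks; the block size, score, and threshold are illustrative assumptions rather than the authors' algorithm.

        import numpy as np

        def declutter_by_blur(image, block=16, keep_fraction=0.3):
            # Keep only the sharpest blocks of a refocused (confocal) image and
            # zero out the rest, using second-difference energy as a sharpness score.
            img = image.astype(np.float32)
            sharp = np.zeros_like(img)
            sharp[1:-1, 1:-1] = (np.abs(np.diff(img, 2, axis=0))[:, 1:-1]
                                 + np.abs(np.diff(img, 2, axis=1))[1:-1, :])
            h = (img.shape[0] // block) * block
            w = (img.shape[1] // block) * block
            scores = sharp[:h, :w].reshape(h // block, block, w // block, block).mean(axis=(1, 3))
            cutoff = np.quantile(scores, 1.0 - keep_fraction)
            out = np.zeros_like(image)
            for (by, bx), score in np.ndenumerate(scores):
                if score >= cutoff:
                    ys, xs = by * block, bx * block
                    out[ys:ys + block, xs:xs + block] = image[ys:ys + block, xs:xs + block]
            return out

        scene = np.random.rand(128, 128) * 10.0              # low-contrast background clutter
        scene[40:80, 40:80] += np.random.rand(40, 40) * 200  # sharp, high-contrast object
        print(np.count_nonzero(declutter_by_blur(scene, keep_fraction=0.15)))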

  11. A novel apparatus for testing binocular function using the 'CyberDome' three-dimensional hemispherical visual display system.

    PubMed

    Handa, T; Ishikawa, H; Shimizu, K; Kawamura, R; Nakayama, H; Sawada, K

    2009-11-01

    Virtual reality has recently been highlighted as a promising medium for visual presentation and entertainment. A novel apparatus for testing binocular visual function using a hemispherical visual display system, 'CyberDome', has been developed and tested. Subjects comprised 40 volunteers (mean age, 21.63 years) with corrected visual acuity of -0.08 (LogMAR) or better, and stereoacuity better than 100 s of arc on the Titmus stereo test. Subjects were able to experience visual perception like being surrounded by visual images, a feature of the 'CyberDome' hemispherical visual display system. Visual images to the right and left eyes were projected and superimposed on the dome screen, allowing test images to be seen independently by each eye using polarizing glasses. The hemispherical visual display was 1.4 m in diameter. Three test parameters were evaluated: simultaneous perception (subjective angle of strabismus), motor fusion amplitude (convergence and divergence), and stereopsis (binocular disparity at 1260, 840, and 420 s of arc). Testing was performed in volunteer subjects with normal binocular vision, and results were compared with those using a major amblyoscope. Subjective angle of strabismus and motor fusion amplitude showed a significant correlation between our test and the major amblyoscope. All subjects could perceive the stereoscopic target with a binocular disparity of 480 s of arc. Our novel apparatus using the CyberDome, a hemispherical visual display system, was able to quantitatively evaluate binocular function. This apparatus offers clinical promise in the evaluation of binocular function.

  12. Digital retrospective motion-mode display and processing of electron beam cine-computed tomography and other cross-sectional cardiac imaging techniques

    NASA Astrophysics Data System (ADS)

    Reed, Judd E.; Rumberger, John A.; Buithieu, Jean; Behrenbeck, Thomas; Breen, Jerome F.; Sheedy, Patrick F., II

    1995-05-01

    Electron beam computed tomography is unparalleled in its ability to consistently produce high quality dynamic images of the human heart. Its use in quantification of left ventricular dynamics is well established in both clinical and research applications. However, the image analysis tools supplied with the scanners offer a limited number of analysis options. They are based on embedded computer systems which have not been significantly upgraded since the scanner was introduced over a decade ago in spite of the explosive improvements in available computer power which have occurred during this period. To address these shortcomings, a workstation-based ventricular analysis system has been developed at our institution. This system, which has been in use for over five years, is based on current workstation technology and therefore has benefited from the periodic upgrades in processor performance available to these systems. The dynamic image segmentation component of this system is an interactively supervised, semi-automatic surface identification and tracking system. It characterizes the endocardial and epicardial surfaces of the left ventricle as two concentric 4D hyper-space polyhedrons. Each of these polyhedrons has nearly ten thousand vertices which are deposited into a relational database. The right ventricle is also processed in a similar manner. This database is queried by other custom components which extract ventricular function parameters such as regional ejection fraction and wall stress. The interactive tool which supervises dynamic image segmentation has been enhanced with a temporal domain display. The operator interactively chooses the spatial location of the endpoints of a line segment while the corresponding space/time image is displayed. These images, with content resembling M-Mode echocardiography, benefit from electron beam computed tomography's high spatial and contrast resolution. The segmented surfaces are displayed along with the imagery. These displays give the operator valuable feedback pertaining to the contiguity of the extracted surfaces. As with M-Mode echocardiography, the velocity of moving structures can be easily visualized and measured. However, many views inaccessible to standard transthoracic echocardiography are easily generated. These features have augmented the interpretability of cine electron beam computed tomography and have prompted the recent cloning of this system into an 'omni-directional M-Mode display' system for use in digital post-processing of echocardiographic parasternal short axis tomograms. This enhances the functional assessment in orthogonal views of the left ventricle, accounting for shape changes particularly in the asymmetric post-infarction ventricle. Conclusions: A new tool has been developed for analysis and visualization of cine electron beam computed tomography. It has been found to be very useful in verifying the consistency of myocardial surface definition with a semi-automated segmentation tool. By drawing on M-Mode echocardiography experience, electron beam tomography's interpretability has been enhanced. Use of this feature, in conjunction with the existing image processing tools, will enhance the presentations of data on regional systolic and diastolic functions to clinicians in a format that is familiar to most cardiologists. Additionally, this tool reinforces the advantages of electron beam tomography as a single imaging modality for the assessment of left and right ventricular size, shape, and regional functions.
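
    A minimal numpy sketch of how a retrospective M-mode (space/time) image can be assembled by sampling every frame of a cine sequence along a user-chosen line segment; the nearest-neighbour sampling and the synthetic cine loop are illustrative.

        import numpy as np

        def mmode_along_line(frames, p0, p1, n_samples=256):
            # Build an M-mode (space x time) image by sampling every frame of a
            # cine sequence along the segment p0 -> p1 (row, column coordinates).
            rows = np.linspace(p0[0], p1[0], n_samples).astype(int)   # nearest-neighbour
            cols = np.linspace(p0[1], p1[1], n_samples).astype(int)
            return np.stack([frame[rows, cols] for frame in frames], axis=1)

        # Synthetic cine loop: a bright ring whose radius oscillates over the cycle.
        y, x = np.mgrid[0:128, 0:128]
        frames = []
        for t in range(32):
            radius = 30 + 8 * np.sin(2 * np.pi * t / 32)
            frames.append((np.abs(np.hypot(y - 64, x - 64) - radius) < 2).astype(np.float32))
        mmode = mmode_along_line(frames, p0=(64, 0), p1=(64, 127))
        print(mmode.shape)                                   # (n_samples, n_frames)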

  13. 21 CFR 892.1630 - Electrostatic x-ray imaging system.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    21 CFR 892.1630 (Food and Drugs; Food and Drug Administration, Department of Health and Human Services), 2010 edition: Electrostatic x-ray imaging system. ... visible image. This generic type of device may include signal analysis and display equipment, patient and...

  14. 21 CFR 892.1630 - Electrostatic x-ray imaging system.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    21 CFR 892.1630 (Food and Drugs; Food and Drug Administration, Department of Health and Human Services), 2011 edition: Electrostatic x-ray imaging system. ... visible image. This generic type of device may include signal analysis and display equipment, patient and...

  15. 21 CFR 892.1630 - Electrostatic x-ray imaging system.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    21 CFR 892.1630 (Food and Drugs; Food and Drug Administration, Department of Health and Human Services), 2014 edition: Electrostatic x-ray imaging system. ... visible image. This generic type of device may include signal analysis and display equipment, patient and...

  16. A novel shape-changing haptic table-top display

    NASA Astrophysics Data System (ADS)

    Wang, Jiabin; Zhao, Lu; Liu, Yue; Wang, Yongtian; Cai, Yi

    2018-01-01

    A shape-changing table-top display with haptic feedback allows its users to perceive 3D visual and texture displays interactively. Since few existing devices are developed as accurate displays with regulated haptic feedback, a novel attentive and immersive shape-changing mechanical interface (SCMI) consisting of an image processing unit and a transformation unit was proposed in this paper. In order to support a precise 3D table-top display with an offset of less than 2 mm, a custom-made mechanism was developed to form the precise surface and regulate the feedback force. The proposed image processing unit is capable of extracting texture data from a 2D picture for rendering the shape-changing surface and realizing 3D modeling. The preliminary evaluation result proved the feasibility of the proposed system.
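
    A minimal sketch of the image-processing step that turns a 2-D picture into per-pin heights for a shape-changing surface, using block-averaged brightness; the pin-grid size and height range are illustrative assumptions, and the sub-2-mm accuracy quoted above is a property of the mechanism, not of this sketch.

        import numpy as np

        def image_to_pin_heights(gray, pins=(32, 32), max_height_mm=40.0):
            # Convert a grayscale image into a coarse grid of pin heights by
            # averaging brightness over each pin's footprint.
            g = gray.astype(np.float32)
            py, px = pins
            h = (g.shape[0] // py) * py
            w = (g.shape[1] // px) * px
            blocks = g[:h, :w].reshape(py, h // py, px, w // px).mean(axis=(1, 3))
            blocks = (blocks - blocks.min()) / max(blocks.ptp(), 1e-6)
            return blocks * max_height_mm                    # height per pin, in millimetres

        gray = np.random.randint(0, 256, size=(240, 320)).astype(np.uint8)
        heights = image_to_pin_heights(gray)
        print(heights.shape, round(float(heights.max()), 1))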

  17. Image based performance analysis of thermal imagers

    NASA Astrophysics Data System (ADS)

    Wegner, D.; Repasi, E.

    2016-05-01

    Due to advances in technology, modern thermal imagers resemble sophisticated image processing systems in functionality. Advanced signal and image processing tools enclosed in the camera body extend the basic image capturing capability of thermal cameras, in order to enhance the display presentation of the captured scene or of specific scene details. Usually, the implemented methods are proprietary company expertise, distributed without extensive documentation. This makes the comparison of thermal imagers, especially from different companies, a difficult task (or at least a very time-consuming and expensive one, e.g. requiring the execution of a field trial and/or an observer trial). For example, a thermal camera equipped with turbulence mitigation capability is such a closed system. The Fraunhofer IOSB has started to build up a system for testing thermal imagers by image-based methods in the lab environment. This will extend our capability of measuring the classical IR-system parameters (e.g. MTF, MTDP, etc.) in the lab. The system is set up around the IR-scene projector, which is necessary for the thermal display (projection) of an image sequence for the IR camera under test. The same set of thermal test sequences can be presented to every unit under test; for turbulence mitigation tests, this could be, e.g., the same turbulence sequence. During system tests, gradual variation of input parameters (e.g. thermal contrast) can be applied. First ideas on test-scene selection and on how to assemble an imaging suite (a set of image sequences) for the analysis of imaging thermal systems containing such black boxes in the image-forming path are discussed.

  18. Naked-eye 3D imaging employing a modified MIMO micro-ring conjugate mirrors

    NASA Astrophysics Data System (ADS)

    Youplao, P.; Pornsuwancharoen, N.; Amiri, I. S.; Thieu, V. N.; Yupapin, P.

    2018-03-01

    In this work, the use of a micro-conjugate mirror that can produce a 3D image incident probe and display is proposed. By using the proposed system together with the concept of naked-eye 3D imaging, a pixel and a large-volume pixel of a 3D image can be created and displayed for naked-eye perception, which is valuable for large-volume naked-eye 3D imaging applications. In operation, a naked-eye 3D image with a large pixel volume is constructed by the MIMO micro-ring conjugate mirror system. These 3D images, formed by the first micro-ring conjugate mirror system, can then be transmitted through an optical link over a short distance and reconstructed via the recovery conjugate mirror at the other end of the transmission. The image transmission is performed with the Fourier integral in MATLAB and compared with the Opti-wave program results. The Fourier convolution is also included for large-volume image transmission. Simulation is used throughout, where an array of micro-conjugate mirrors is designed and simulated for the MIMO system. The naked-eye 3D imaging is confirmed by the conjugate-mirror behaviour of both the input and output images, in terms of four-wave mixing (FWM), which is discussed and interpreted.

  19. Designing a wearable navigation system for image-guided cancer resection surgery

    PubMed Central

    Shao, Pengfei; Ding, Houzhu; Wang, Jinkun; Liu, Peng; Ling, Qiang; Chen, Jiayu; Xu, Junbin; Zhang, Shiwu; Xu, Ronald

    2015-01-01

    A wearable surgical navigation system is developed for intraoperative imaging of surgical margin in cancer resection surgery. The system consists of an excitation light source, a monochromatic CCD camera, a host computer, and a wearable headset unit in either of the following two modes: head-mounted display (HMD) and Google glass. In the HMD mode, a CMOS camera is installed on a personal cinema system to capture the surgical scene in real-time and transmit the image to the host computer through a USB port. In the Google glass mode, a wireless connection is established between the glass and the host computer for image acquisition and data transport tasks. A software program is written in Python to call OpenCV functions for image calibration, co-registration, fusion, and display with augmented reality. The imaging performance of the surgical navigation system is characterized in a tumor simulating phantom. Image-guided surgical resection is demonstrated in an ex vivo tissue model. Surgical margins identified by the wearable navigation system are co-incident with those acquired by a standard small animal imaging system, indicating the technical feasibility for intraoperative surgical margin detection. The proposed surgical navigation system combines the sensitivity and specificity of a fluorescence imaging system and the mobility of a wearable goggle. It can be potentially used by a surgeon to identify the residual tumor foci and reduce the risk of recurrent diseases without interfering with the regular resection procedure. PMID:24980159

  20. Designing a wearable navigation system for image-guided cancer resection surgery.

    PubMed

    Shao, Pengfei; Ding, Houzhu; Wang, Jinkun; Liu, Peng; Ling, Qiang; Chen, Jiayu; Xu, Junbin; Zhang, Shiwu; Xu, Ronald

    2014-11-01

    A wearable surgical navigation system is developed for intraoperative imaging of surgical margin in cancer resection surgery. The system consists of an excitation light source, a monochromatic CCD camera, a host computer, and a wearable headset unit in either of the following two modes: head-mounted display (HMD) and Google glass. In the HMD mode, a CMOS camera is installed on a personal cinema system to capture the surgical scene in real-time and transmit the image to the host computer through a USB port. In the Google glass mode, a wireless connection is established between the glass and the host computer for image acquisition and data transport tasks. A software program is written in Python to call OpenCV functions for image calibration, co-registration, fusion, and display with augmented reality. The imaging performance of the surgical navigation system is characterized in a tumor simulating phantom. Image-guided surgical resection is demonstrated in an ex vivo tissue model. Surgical margins identified by the wearable navigation system are co-incident with those acquired by a standard small animal imaging system, indicating the technical feasibility for intraoperative surgical margin detection. The proposed surgical navigation system combines the sensitivity and specificity of a fluorescence imaging system and the mobility of a wearable goggle. It can be potentially used by a surgeon to identify the residual tumor foci and reduce the risk of recurrent diseases without interfering with the regular resection procedure.

  1. Architecture for biomedical multimedia information delivery on the World Wide Web

    NASA Astrophysics Data System (ADS)

    Long, L. Rodney; Goh, Gin-Hua; Neve, Leif; Thoma, George R.

    1997-10-01

    Research engineers at the National Library of Medicine are building a prototype system for the delivery of multimedia biomedical information on the World Wide Web. This paper discusses the architecture and design considerations for the system, which will be used initially to make images and text from the third National Health and Nutrition Examination Survey (NHANES) publicly available. We categorized our analysis as follows: (1) fundamental software tools: we analyzed trade-offs among use of conventional HTML/CGI, X Window Broadway, and Java; (2) image delivery: we examined the use of unconventional TCP transmission methods; (3) database manager and database design: we discuss the capabilities and planned use of the Informix object-relational database manager and the planned schema for the NHANES database; (4) storage requirements for our Sun server; (5) user interface considerations; (6) the compatibility of the system with other standard research and analysis tools; (7) image display: we discuss considerations for consistent image display for end users. Finally, we discuss the scalability of the system in terms of incorporating larger or more databases of similar data, and the extendibility of the system for supporting content-based retrieval of biomedical images. The system prototype is called the Web-based Medical Information Retrieval System. An early version was built as a Java applet and tested on Unix, PC, and Macintosh platforms. This prototype used the MiniSQL database manager to do text queries on a small database of records of participants in the second NHANES survey. The full records and associated x-ray images were retrievable and displayable on a standard Web browser. A second version has now been built, also a Java applet, using the MySQL database manager.

  2. Low-cost telepresence for collaborative virtual environments.

    PubMed

    Rhee, Seon-Min; Ziegler, Remo; Park, Jiyoung; Naef, Martin; Gross, Markus; Kim, Myoung-Hee

    2007-01-01

    We present a novel low-cost method for visual communication and telepresence in a CAVE-like environment, relying on 2D stereo-based video avatars. The system combines a selection of proven efficient algorithms and approximations in a unique way, resulting in a convincing stereoscopic real-time representation of a remote user acquired in a spatially immersive display. The system was designed to extend existing projection systems with acquisition capabilities requiring minimal hardware modifications and cost. The system uses infrared-based image segmentation to enable concurrent acquisition and projection in an immersive environment without a static background. The system consists of two color cameras and two additional b/w cameras used for segmentation in the near-IR spectrum. There is no need for special optics as the mask and color image are merged using image-warping based on a depth estimation. The resulting stereo image stream is compressed, streamed across a network, and displayed as a frame-sequential stereo texture on a billboard in the remote virtual environment.
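
    As a hedged illustration of the acquisition path described above, the sketch below thresholds a near-IR frame to obtain a foreground mask and merges it with the color image; it simplifies the paper's depth-based warping to a single homography, and all sizes, thresholds, and the registration matrix are assumptions.

        import cv2
        import numpy as np

        def segment_and_merge(color_bgr, nir_gray, homography, threshold=80):
            """Foreground mask from the near-IR camera, warped into the color
            camera frame and used to cut the user out of the background."""
            _, mask = cv2.threshold(nir_gray, threshold, 255, cv2.THRESH_BINARY)
            h, w = color_bgr.shape[:2]
            mask_warped = cv2.warpPerspective(mask, homography, (w, h))
            return cv2.bitwise_and(color_bgr, color_bgr, mask=mask_warped)

        # Hypothetical frames and an identity registration between the two cameras.
        color = np.full((480, 640, 3), 255, np.uint8)
        nir = np.zeros((480, 640), np.uint8)
        cv2.rectangle(nir, (200, 100), (440, 400), 200, -1)   # IR-lit user silhouette
        avatar = segment_and_merge(color, nir, np.eye(3))
        print(avatar.shape)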

  3. The Tactile Vision Substitution System: Applications in Education and Employment

    ERIC Educational Resources Information Center

    Scadden, Lawrence A.

    1974-01-01

    The Tactile Vision Substitution System converts the visual image from a narrow-angle television camera to a tactual image on a 5-inch square, 100-point display of vibrators placed against the abdomen of the blind person. (Author)

  4. Fluorescent Microscopy Enhancement Using Imaging

    NASA Astrophysics Data System (ADS)

    Conrad, Morgan P.; Recktenwald, Diether J.; Woodhouse, Bryan S.

    1986-06-01

    To enhance our capabilities for observing fluorescent stains in biological systems, we are developing a low cost imaging system based around an IBM AT microcomputer and a commercial image capture board compatible with a standard RS-170 format video camera. The image is digitized in real time with 256 grey levels, while being displayed and also stored in memory. The software allows for interactive processing of the data, such as histogram equalization or pseudocolor enhancement of the display. The entire image, or a quadrant thereof, can be averaged over time to improve the signal to noise ratio. Images may be stored to disk for later use or comparison. The camera may be selected for better response in the UV or near IR. Combined with signal averaging, this increases the sensitivity relative to that of the human eye, while still allowing for the fluorescence distribution on either the surface or internal cytoskeletal structure to be observed.
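
    Two of the processing steps named above, frame averaging to improve the signal to noise ratio and histogram equalization for display, can be sketched in a few lines. The example below is a generic NumPy illustration, not the original IBM AT software; the synthetic noisy frames stand in for digitized RS-170 video.

        import numpy as np

        def average_frames(frames):
            """Temporal average of repeated exposures; SNR improves roughly
            with the square root of the number of frames."""
            return np.mean(np.stack(frames, axis=0), axis=0)

        def equalize_histogram(img8):
            """Simple global histogram equalization for an 8-bit image."""
            hist = np.bincount(img8.ravel(), minlength=256)
            cdf = np.cumsum(hist).astype(float)
            cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())   # normalize to 0..1
            lut = np.round(255 * cdf).astype(np.uint8)
            return lut[img8]

        # Hypothetical stack of noisy frames from the video camera.
        rng = np.random.default_rng(0)
        frames = [rng.normal(100, 20, (480, 640)) for _ in range(16)]
        avg = np.clip(average_frames(frames), 0, 255).astype(np.uint8)
        display = equalize_histogram(avg)
        print(display.dtype, display.min(), display.max())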

  5. Stabilized display of coronary x-ray image sequences

    NASA Astrophysics Data System (ADS)

    Close, Robert A.; Whiting, James S.; Da, Xiaolin; Eigler, Neal L.

    2004-05-01

    Display stabilization is a technique by which a feature of interest in a cine image sequence is tracked and then shifted to remain approximately stationary on the display device. Prior simulations indicate that display stabilization with high playback rates (30 f/s) can significantly improve detectability of low-contrast features in coronary angiograms. Display stabilization may also help to improve the accuracy of intra-coronary device placement. We validated our automated tracking algorithm by comparing the inter-frame difference (jitter) between manual and automated tracking of 150 coronary x-ray image sequences acquired on a digital cardiovascular X-ray imaging system with CsI/a-Si flat panel detector. We find that the median (50%) inter-frame jitter between manual and automatic tracking is 1.41 pixels or less, indicating a jump no further than an adjacent pixel. This small jitter implies that automated tracking and manual tracking should yield similar improvements in the performance of most visual tasks. We hypothesize that cardiologists would perceive a benefit in viewing the stabilized display as an addition to the standard playback of cine recordings. A benefit of display stabilization was identified in 87 of 101 sequences (86%). The most common tasks cited were evaluation of stenosis and determination of stent and balloon positions. We conclude that display stabilization offers perceptible improvements in the performance of visual tasks by cardiologists.
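
    The tracking-and-shifting idea is easy to prototype. The sketch below is a hedged stand-in for the validated clinical algorithm: it tracks a template by normalized cross-correlation and translates each frame so the tracked feature stays at its first-frame position; the template position, frame sizes, and synthetic test sequence are illustrative.

        import cv2
        import numpy as np

        def stabilize_sequence(frames, template_rect):
            """Track a feature of interest with normalized cross-correlation and
            shift every frame so the feature stays at its first-frame position."""
            x, y, w, h = template_rect
            template = frames[0][y:y + h, x:x + w]
            stabilized = []
            for frame in frames:
                res = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
                _, _, _, (tx, ty) = cv2.minMaxLoc(res)
                dx, dy = x - tx, y - ty                      # shift back to reference
                m = np.float32([[1, 0, dx], [0, 1, dy]])
                stabilized.append(cv2.warpAffine(frame, m, frame.shape[1::-1]))
            return stabilized

        # Hypothetical 8-bit cine frames with a bright marker drifting one pixel per frame.
        frames = []
        for i in range(5):
            f = np.zeros((256, 256), np.uint8)
            cv2.circle(f, (100 + i, 120), 8, 255, -1)
            frames.append(f)
        out = stabilize_sequence(frames, (88, 108, 24, 24))
        print(len(out), out[0].shape)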

  6. Quality metrics for sensor images

    NASA Technical Reports Server (NTRS)

    Ahumada, AL

    1993-01-01

    Methods are needed for evaluating the quality of augmented visual displays (AVID). Computational quality metrics will help summarize, interpolate, and extrapolate the results of human performance tests with displays. The FLM Vision group at NASA Ames has been developing computational models of visual processing and using them to develop computational metrics for similar problems. For example, display modeling systems use metrics for comparing proposed displays, halftoning optimizing methods use metrics to evaluate the difference between the halftone and the original, and image compression methods minimize the predicted visibility of compression artifacts. The visual discrimination models take as input two arbitrary images A and B and compute an estimate of the probability that a human observer will report that A is different from B. If A is an image that one desires to display and B is the actual displayed image, such an estimate can be regarded as an image quality metric reflecting how well B approximates A. There are additional complexities associated with the problem of evaluating the quality of radar and IR enhanced displays for AVID tasks. One important problem is the question of whether intruding obstacles are detectable in such displays. Although the discrimination model can handle detection situations by making B the original image A plus the intrusion, this detection model makes the inappropriate assumption that the observer knows where the intrusion will be. Effects of signal uncertainty need to be added to our models. A pilot needs to make decisions rapidly. The models need to predict not just the probability of a correct decision, but the probability of a correct decision by the time the decision needs to be made. That is, the models need to predict latency as well as accuracy. Luce and Green have generated models for auditory detection latencies. Similar models are needed for visual detection. Most image quality models are designed for static imagery. Watson has been developing a general spatial-temporal vision model to optimize video compression techniques. These models need to be adapted and calibrated for AVID applications.
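
    The discrimination-model idea, taking two images A and B and returning the probability that an observer reports them as different, can be caricatured with a toy metric. The sketch below is not the NASA Ames vision model; it simply pools per-pixel differences with a Minkowski norm and maps the pooled value through an assumed psychometric function, with the sensitivity and exponent chosen arbitrarily.

        import numpy as np

        def discrimination_probability(img_a, img_b, sensitivity=0.05, beta=4.0):
            """Toy visual-discrimination metric: pool the per-pixel luminance
            differences with a Minkowski norm and map the pooled signal to a
            probability of detection with a Weibull-style psychometric function."""
            diff = np.abs(img_a.astype(float) - img_b.astype(float))
            pooled = np.power(np.mean(diff ** beta), 1.0 / beta)
            d = sensitivity * pooled
            return 1.0 - np.exp(-(d ** 2))   # 0 for identical images, toward 1 for large differences

        # Example: a displayed image B that slightly corrupts the desired image A.
        rng = np.random.default_rng(1)
        a = rng.uniform(0, 255, (128, 128))
        b = a + rng.normal(0, 3, a.shape)    # displayed image with small rendering error
        print(round(discrimination_probability(a, a), 3), round(discrimination_probability(a, b), 3))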

  7. Advantages and difficulties of implementation of flat-panel multimedia monitoring system in a surgical MRI suite

    NASA Astrophysics Data System (ADS)

    Deckard, Michael; Ratib, Osman M.; Rubino, Gregory

    2002-05-01

    Our project was to design and implement a ceiling-mounted multi monitor display unit for use in a high-field MRI surgical suite. The system is designed to simultaneously display images/data from four different digital and/or analog sources with: minimal interference from the adjacent high magnetic field, minimal signal-to-noise/artifact contribution to the MRI images and compliance with codes and regulations for the sterile neuro-surgical environment. Provisions were also made to accommodate the importing and exporting of video information via PACS and remote processing/display for clinical and education uses. Commercial fiber optic receivers/transmitters were implemented along with supporting video processing and distribution equipment to solve the video communication problem. A new generation of high-resolution color flat panel displays was selected for the project. A custom-made monitor mount and in-suite electronics enclosure was designed and constructed at UCLA. Difficulties with implementing an isolated AC power system are discussed and a work-around solution presented.

  8. Integral imaging with multiple image planes using a uniaxial crystal plate.

    PubMed

    Park, Jae-Hyeung; Jung, Sungyong; Choi, Heejin; Lee, Byoungho

    2003-08-11

    Integral imaging has been attracting much attention recently for its several advantages such as full parallax, continuous view-points, and real-time full-color operation. However, the thickness of the displayed three-dimensional image is limited to a relatively small value due to the degradation of the image resolution. In this paper, we propose a method to provide observers with enhanced perception of depth without severe resolution degradation by the use of the birefringence of a uniaxial crystal plate. The proposed integral imaging system can display images integrated around three central depth planes by dynamically altering the polarization and controlling both the elemental images and the dynamic slit array mask accordingly. We explain the principle of the proposed method and verify it experimentally.

  9. CHARGE Image Generator: Theory of Operation and Author Language Support. Technical Report 75-3.

    ERIC Educational Resources Information Center

    Gunwaldsen, Roger L.

    The image generator function and author language software support for the CHARGE (Color Halftone Area Graphics Environment) Interactive Graphics System are described. Designed initially for use in computer-assisted instruction (CAI) systems, the CHARGE Interactive Graphics System can provide graphic displays for various applications including…

  10. Polyplanar optic display for cockpit application

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Veligdan, J.; Biscardi, C.; Brewster, C.

    1998-04-01

    The Polyplanar Optical Display (POD) is a high contrast display screen being developed for cockpit applications. This display screen is 2 inches thick and has a matte black face which allows for high contrast images. The prototype being developed is a form, fit and functional replacement display for the B-52 aircraft which uses a monochrome ten-inch display. The new display uses a long lifetime, (10,000 hour), 200 mW green solid-state laser (532 nm) as its optical source. In order to produce real-time video, the laser light is being modulated by a Digital Light Processing (DLP{trademark}) chip manufactured by Texas Instruments, Inc. A variable astigmatic focusing system is used to produce a stigmatic image on the viewing face of the POD. In addition to the optical design and speckle reduction, the authors discuss the electronic interfacing to the DLP{trademark} chip, the opto-mechanical design and viewing angle characteristics.

  11. Polyplanar optic display

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Veligdan, J.; Biscardi, C.; Brewster, C.

    1997-07-01

    The Polyplanar Optical Display (POD) is a unique display screen which can be used with any projection source. This display screen is 2 inches thick and has a matte black face which allows for high contrast images. The prototype being developed is a form, fit and functional replacement display for the B-52 aircraft which uses a monochrome ten-inch display. The new display uses a 100 milliwatt green solid state laser (532 nm) as its optical source. In order to produce real-time video, the laser light is being modulated by a Digital Light Processing (DLP{trademark}) chip manufactured by Texas Instruments, Inc. A variable astigmatic focusing system is used to produce a stigmatic image on the viewing face of the POD. In addition to the optical design, the authors discuss the electronic interfacing to the DLP{trademark} chip, the opto-mechanical design and viewing angle characteristics.

  12. Polyplanar optic display for cockpit application

    NASA Astrophysics Data System (ADS)

    Veligdan, James T.; Biscardi, Cyrus; Brewster, Calvin; DeSanto, Leonard; Freibott, William C.

    1998-09-01

    The Polyplanar Optical Display (POD) is a high contrast display screen being developed for cockpit applications. This display screen is 2 inches thick and has a matte black face which allows for high contrast images. The prototype being developed is a form, fit and functional replacement display for the B-52 aircraft which uses a monochrome ten-inch display. The new display uses a long lifetime, (10,000 hour), 200 mW green solid-state laser (532 nm) as its optical source. In order to produce real-time video, the laser light is being modulated by a Digital Light Processing (DLP™) chip manufactured by Texas Instruments, Inc. A variable astigmatic focusing system is used to produce a stigmatic image on the viewing face of the POD. In addition to the optical design and speckle reduction, we discuss the electronic interfacing to the DLP™ chip, the opto-mechanical design and viewing angle characteristics.

  13. Development and implementation of ultrasound picture archiving and communication system

    NASA Astrophysics Data System (ADS)

    Weinberg, Wolfram S.; Tessler, Franklin N.; Grant, Edward G.; Kangarloo, Hooshang; Huang, H. K.

    1990-08-01

    The Department of Radiological Sciences at the UCLA School of Medicine is developing an archiving and communication system (PACS) for digitized ultrasound images. In its final stage the system will involve the acquisition and archiving of ultrasound studies from four different locations including the Center for Health Sciences, the Department for Mental Health and the Outpatient Radiology and Endoscopy Departments with a total of 200-250 patient studies per week. The concept comprises two stages of image manipulation for each ultrasound work area. The first station is located close to the examination site and accommodates the acquisition of digital images from up to five ultrasound devices and provides for instantaneous display and primary viewing and image selection. Completed patient studies are transferred to a main workstation for secondary review, further analysis and comparison studies. The review station has an on-line storage capacity of 10,000 images with a resolution of 512x512 8 bit data to allow for immediate retrieval of active patient studies of up to two weeks. The main work stations are connected through the general network and use one central archive for long term storage and a film printer for hardcopy output. First phase development efforts concentrate on the implementation and testing of a system at one location consisting of a number of ultrasound units with video digitizer and network interfaces and a microcomputer workstation as host for the display station with two color monitors, each allowing simultaneous display of four 512x512 images. The discussion emphasizes functionality, performance and acceptance of the system in the clinical environment.

  14. MACMULTIVIEW 5.1

    NASA Technical Reports Server (NTRS)

    Norikane, L.

    1994-01-01

    MacMultiview is an interactive tool for the Macintosh II family which allows one to display and make computations utilizing polarimetric radar data collected by the Jet Propulsion Laboratory's imaging SAR (synthetic aperture radar) polarimeter system. The system includes the single-frequency L-band sensor mounted on the NASA CV990 aircraft and its replacement, the multi-frequency P-, L-, and C-band sensors mounted on the NASA DC-8. MacMultiview provides two basic functions: computation of synthesized polarimetric images and computation of polarization signatures. The radar data can be used to compute a variety of images. The total power image displays the sum of the polarized and unpolarized components of the backscatter for each pixel. The magnitude/phase difference image displays the HH (horizontal transmit and horizontal receive polarization) to VV (vertical transmit and vertical receive polarization) phase difference using color. Magnitude is displayed using intensity. The user may also select any transmit and receive polarization combination from which an image is synthesized. This image displays the backscatter which would have been observed had the sensor been configured using the selected transmit and receive polarizations. MacMultiview can also be used to compute polarization signatures, three dimensional plots of backscatter versus transmit and receive polarizations. The standard co-polarization signatures (transmit and receive polarizations are the same) and cross-polarization signatures (transmit and receive polarizations are orthogonal) can be plotted for any rectangular subset of pixels within a radar data set. In addition, the ratio of co- and cross-polarization signatures computed from different subsets within the same data set can also be computed. Computed images can be saved in a variety of formats: byte format (headerless format which saves the image as a string of byte values), MacMultiview (a byte image preceded by an ASCII header), and PICT2 format (standard format readable by MacMultiview and other image processing programs for the Macintosh). Images can also be printed on PostScript output devices. Polarization signatures can be saved in either a PICT format or as a text file containing PostScript commands and printed on any QuickDraw output device. The associated Stokes matrices can be stored in a text file. MacMultiview is written in C-language for Macintosh II series computers. MacMultiview will only run on Macintosh II series computers with 8-bit video displays (gray shades or color). The program also requires a minimum configuration of System 6.0, Finder 6.1, and 1Mb of RAM. MacMultiview is NOT compatible with System 7.0. It requires 32-Bit QuickDraw. Note: MacMultiview may not be fully compatible with preliminary versions of 32-Bit QuickDraw. Macintosh Programmer's Workshop and Macintosh Programmer's Workshop C (version 3.0) are required for recompiling and relinking. The standard distribution medium for this package is a set of three 800K 3.5 inch diskettes in Macintosh format. This program was developed in 1989 and updated in 1991. MacMultiview is a copyrighted work with all copyright vested in NASA. QuickDraw, Finder, Macintosh, and System 7 are trademarks of Apple Computer, Inc.
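
    The polarization-synthesis capability described above follows the standard relation in which the backscattered power for any transmit/receive pair is obtained from the per-pixel Stokes (Kennaugh) matrix as P = 0.5 * g_r^T M g_t. The sketch below illustrates that general technique in NumPy; it is not MacMultiview's code, and the 2 x 2 image of identity matrices is a made-up placeholder for real polarimeter data.

        import numpy as np

        def stokes_vector(orientation_deg, ellipticity_deg):
            """Unit Stokes vector for a fully polarized wave with the given
            orientation (psi) and ellipticity (chi) angles."""
            psi = np.deg2rad(orientation_deg)
            chi = np.deg2rad(ellipticity_deg)
            return np.array([1.0,
                             np.cos(2 * chi) * np.cos(2 * psi),
                             np.cos(2 * chi) * np.sin(2 * psi),
                             np.sin(2 * chi)])

        def synthesize_power(stokes_matrices, tx, rx):
            """Synthesized backscatter power per pixel: P = 0.5 * g_rx^T M g_tx,
            evaluated for every 4x4 Stokes (Kennaugh) matrix in the image."""
            return 0.5 * np.einsum('i,hwij,j->hw', rx, stokes_matrices, tx)

        # Hypothetical 2x2-pixel image of Stokes matrices (identity matrices as placeholders).
        m = np.tile(np.eye(4), (2, 2, 1, 1))
        hh = synthesize_power(m, stokes_vector(0, 0), stokes_vector(0, 0))   # HH-like combination
        print(hh)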

  15. UWGSP4: an imaging and graphics superworkstation and its medical applications

    NASA Astrophysics Data System (ADS)

    Jong, Jing-Ming; Park, Hyun Wook; Eo, Kilsu; Kim, Min-Hwan; Zhang, Peng; Kim, Yongmin

    1992-05-01

    UWGSP4 is configured with a parallel architecture for image processing and a pipelined architecture for computer graphics. The system's peak performance is 1,280 MFLOPS for image processing and over 200,000 Gouraud shaded 3-D polygons per second for graphics. The simulated sustained performance is about 50% of the peak performance in general image processing. Most of the 2-D image processing functions are efficiently vectorized and parallelized in UWGSP4. A performance of 770 MFLOPS in convolution and 440 MFLOPS in FFT is achieved. The real-time cine display, up to 32 frames of 1280 X 1024 pixels per second, is supported. In 3-D imaging, the update rate for the surface rendering is 10 frames of 20,000 polygons per second; the update rate for the volume rendering is 6 frames of 128 X 128 X 128 voxels per second. The system provides 1280 X 1024 X 32-bit double frame buffers and one 1280 X 1024 X 8-bit overlay buffer for supporting realistic animation, 24-bit true color, and text annotation. A 1280 X 1024- pixel, 66-Hz noninterlaced display screen with 1:1 aspect ratio can be windowed into the frame buffer for the display of any portion of the processed image or graphics.

  16. Volumetric, dashboard-mounted augmented display

    NASA Astrophysics Data System (ADS)

    Kessler, David; Grabowski, Christopher

    2017-11-01

    The optical design of a compact volumetric display for drivers is presented. The system displays a true volume image with realistic physical depth cues, such as focal accommodation, parallax and convergence. A large eyebox is achieved with a pupil expander. The windshield is used as the augmented reality combiner. A freeform windshield corrector is placed at the dashboard.

  17. Display Developer for Firing Room Applications

    NASA Technical Reports Server (NTRS)

    Bowman, Elizabeth A.

    2013-01-01

    The firing room at Kennedy Space Center (KSC) is responsible for all NASA human spaceflight launch operations; it is therefore vital that all displays within the firing room be properly tested, up-to-date, and user-friendly during a launch. The Ground Main Propulsion System (GMPS) requires a number of remote displays for Vehicle Integration and Launch (VIL) Operations at KSC. My project is to develop remote displays for the GMPS using the Display Services and Framework (DSF) editor. These remote displays will be based on model images provided by GMPS through PowerPoint. Using the DSF editor, the PowerPoint images can be recreated with active buttons associated with the correct Compact Unique Identifiers (CUIs). These displays will be documented in the Software Requirements and Design Specifications (SRDS) at the 90% GMPS Design Review. In the future, these remote displays will be available for other developers to improve, edit, or add on to, so that the displays may be incorporated into the firing room and used for launches.

  18. Design, Implementation and Characterization of a Quantum-Dot-Based Volumetric Display

    NASA Astrophysics Data System (ADS)

    Hirayama, Ryuji; Naruse, Makoto; Nakayama, Hirotaka; Tate, Naoya; Shiraki, Atsushi; Kakue, Takashi; Shimobaba, Tomoyoshi; Ohtsu, Motoichi; Ito, Tomoyoshi

    2015-02-01

    In this study, we propose and experimentally demonstrate a volumetric display system based on quantum dots (QDs) embedded in a polymer substrate. Unlike conventional volumetric displays, our system does not require electrical wiring; thus, the heretofore unavoidable issue of occlusion is resolved because irradiation by external light supplies the energy to the light-emitting voxels formed by the QDs. By exploiting the intrinsic attributes of the QDs, the system offers ultrahigh definition and a wide range of colours for volumetric displays. In this paper, we discuss the design, implementation and characterization of the proposed volumetric display's first prototype. We developed an 8 × 8 × 8 display comprising two types of QDs. This display provides multicolour three-type two-dimensional patterns when viewed from different angles. The QD-based volumetric display provides a new way to represent images and could be applied in leisure and advertising industries, among others.

  19. Design, implementation and characterization of a quantum-dot-based volumetric display.

    PubMed

    Hirayama, Ryuji; Naruse, Makoto; Nakayama, Hirotaka; Tate, Naoya; Shiraki, Atsushi; Kakue, Takashi; Shimobaba, Tomoyoshi; Ohtsu, Motoichi; Ito, Tomoyoshi

    2015-02-16

    In this study, we propose and experimentally demonstrate a volumetric display system based on quantum dots (QDs) embedded in a polymer substrate. Unlike conventional volumetric displays, our system does not require electrical wiring; thus, the heretofore unavoidable issue of occlusion is resolved because irradiation by external light supplies the energy to the light-emitting voxels formed by the QDs. By exploiting the intrinsic attributes of the QDs, the system offers ultrahigh definition and a wide range of colours for volumetric displays. In this paper, we discuss the design, implementation and characterization of the proposed volumetric display's first prototype. We developed an 8 × 8 × 8 display comprising two types of QDs. This display provides multicolour three-type two-dimensional patterns when viewed from different angles. The QD-based volumetric display provides a new way to represent images and could be applied in leisure and advertising industries, among others.

  20. A method of rapidly evaluating image quality of NED optical system

    NASA Astrophysics Data System (ADS)

    Sun, Qi; Qiu, Chuankai; Yang, Huan

    2014-11-01

    In recent years, with the development of micro-display technology, advanced optics, and the supporting software and hardware, near-to-eye display (NED) optical systems will have a wide range of potential applications in the fields of amusement and virtual reality. However, research on evaluating the image quality of this kind of optical system is comparatively lagging behind. Although some methods and equipment for evaluation now exist, they cannot be applied in commercial production because of their complex operation and inaccuracy. In this paper, an academic method is proposed and a Rapid Evaluation System (RES) is designed to evaluate the image quality of the optical system rapidly and accurately. Firstly, a set of parameters to which the eye is sensitive and which also express the quality of the system is extracted and quantified as criteria, so that the evaluation standards can be established. Then these parameters can be measured by the RES, which consists of a micro-display, a CCD camera, a computer, and so on. Through a calibration process, the measurement results of the RES are made accurate and credible, and the relationship between objective measurement, subjective evaluation, and the RES is established. After that, the image quality of an optical system can be evaluated simply by measuring its parameters. The RES is simple, and the evaluation results are accurate and in keeping with human vision. Thus the method can be used not only for optimizing the design of optical systems, but also for evaluation in commercial production.

  1. Prism-based single-camera system for stereo display

    NASA Astrophysics Data System (ADS)

    Zhao, Yue; Cui, Xiaoyu; Wang, Zhiguo; Chen, Hongsheng; Fan, Heyu; Wu, Teresa

    2016-06-01

    This paper combines a prism with a single camera and puts forward a low-cost method of stereo imaging. First of all, according to the principles of geometrical optics, we can deduce the relationship between the prism single-camera system and a dual-camera system, and according to the principles of binocular vision we can deduce the relationship between binocular viewing and the dual-camera system. Thus we can establish the relationship between the prism single-camera system and binocular vision, and obtain the positional relation of prism, camera, and object that gives the best stereo display. Finally, using the active shutter stereo glasses of NVIDIA Company, we can realize the three-dimensional (3-D) display of the object. The experimental results show that the proposed approach can make use of the prism single-camera system to simulate the various viewing behaviors of the eyes. The stereo imaging system designed by the method proposed in this paper can faithfully restore the 3-D shape of the photographed object.

  2. Diagnostic report acquisition unit for the Mayo/IBM PACS project

    NASA Astrophysics Data System (ADS)

    Brooks, Everett G.; Rothman, Melvyn L.

    1991-07-01

    The Mayo Clinic and IBM Rochester have jointly developed a picture archive and control system (PACS) for use with Mayo's MRI and Neuro-CT imaging modalities. One of the challenges of developing a useful PACS involves integrating the diagnostic reports with the electronic images so they can be displayed simultaneously. By the time a diagnostic report is generated for a particular case, its images have already been captured and archived by the PACS. To integrate the report with the images, the authors have developed an IBM Personal System/2 computer (PS/2) based diagnostic report acquisition unit (RAU). A typed copy of the report is transmitted via facsimile to the RAU where it is stacked electronically with other reports that have been sent previously but not yet processed. By processing these reports at the RAU, the information they contain is integrated with the image database and a copy of the report is archived electronically on an IBM Application System/400 computer (AS/400). When a user requests a set of images for viewing, the report is automatically integrated with the image data. By using a hot key, the user can toggle on/off the report on the display screen. This report describes process, hardware, and software employed to integrate the diagnostic report information into the PACS, including how the report images are captured, transmitted, and entered into the AS/400 database. Also described is how the archived reports and their associated medical images are located and merged for retrieval and display. The methods used to detect and process error conditions are also discussed.

  3. X-Ray Imaging System

    NASA Technical Reports Server (NTRS)

    1986-01-01

    The FluoroScan Imaging System is a high resolution, low radiation device for viewing stationary or moving objects. It resulted from NASA technology developed for x-ray astronomy and Goddard's application of it to a low intensity x-ray imaging scope. FluoroScan Imaging Systems, Inc. (formerly HealthMate, Inc.), a NASA licensee, further refined the FluoroScan System. It is used for examining fractures, placement of catheters, and in veterinary medicine. Its major components include an x-ray generator, scintillator, visible light image intensifier and video display. It is small, light and maneuverable.

  4. Image Processing System

    NASA Technical Reports Server (NTRS)

    1986-01-01

    Mallinckrodt Institute of Radiology (MIR) is using a digital image processing system which employs NASA-developed technology. MIR's computer system is the largest radiology system in the world. It is used in diagnostic imaging. Blood vessels are injected with x-ray dye, and the images which are produced indicate whether arteries are hardened or blocked. A computer program developed by Jet Propulsion Laboratory known as Mini-VICAR/IBIS was supplied to MIR by COSMIC. The program provides the basis for developing the computer imaging routines for data processing, contrast enhancement and picture display.

  5. Distributed data collection for a database of radiological image interpretations

    NASA Astrophysics Data System (ADS)

    Long, L. Rodney; Ostchega, Yechiam; Goh, Gin-Hua; Thoma, George R.

    1997-01-01

    The National Library of Medicine, in collaboration with the National Center for Health Statistics and the National Institute for Arthritis and Musculoskeletal and Skin Diseases, has built a system for collecting radiological interpretations for a large set of x-ray images acquired as part of the data gathered in the second National Health and Nutrition Examination Survey. This system is capable of delivering across the Internet 5- and 10-megabyte x-ray images to Sun workstations equipped with X Window based 2048 X 2560 image displays, for the purpose of having these images interpreted for the degree of presence of particular osteoarthritic conditions in the cervical and lumbar spines. The collected interpretations can then be stored in a database at the National Library of Medicine, under control of the Illustra DBMS. This system is a client/server database application which integrates (1) distributed server processing of client requests, (2) a customized image transmission method for faster Internet data delivery, (3) distributed client workstations with high resolution displays, image processing functions and an on-line digital atlas, and (4) relational database management of the collected data.

  6. Optimization of illumination schemes in a head-mounted display integrated with eye tracking capabilities

    NASA Astrophysics Data System (ADS)

    Pansing, Craig W.; Hua, Hong; Rolland, Jannick P.

    2005-08-01

    Head-mounted display (HMD) technologies find a variety of applications in the field of 3D virtual and augmented environments, 3D scientific visualization, as well as wearable displays. While most of the current HMDs use head pose to approximate line of sight, we propose to investigate approaches and designs for integrating eye tracking capability into HMDs from a low-level system design perspective and to explore schemes for optimizing system performance. In this paper, we particularly propose to optimize the illumination scheme, which is a critical component in designing an eye tracking-HMD (ET-HMD) integrated system. An optimal design can improve not only eye tracking accuracy, but also robustness. Using LightTools, we present the simulation of a complete eye illumination and imaging system using an eye model along with multiple near infrared LED (IRLED) illuminators and imaging optics, showing the irradiance variation of the different eye structures. The simulation of dark pupil effects along with multiple 1st-order Purkinje images will be presented. A parametric analysis is performed to investigate the relationships between the IRLED configurations and the irradiance distribution at the eye, and a set of optimal configuration parameters is recommended. The analysis will be further refined by actual eye image acquisition and processing.

  7. Advanced Traveler Information Systems and Commercial Vehicle Operations Components of the Intelligent Transportation Systems: Head-up Displays and Driver Attention for Navigation Information

    DOT National Transportation Integrated Search

    1998-03-01

    Since the initial development of prototype automotive head-up displays (HUDs), there has been a concern that the presence of the HUD image may interfere with the driving task and negatively impact driving performance. The overall goal of this experim...

  8. Algorithmic support for graphic images rotation in avionics

    NASA Astrophysics Data System (ADS)

    Kniga, E. V.; Gurjanov, A. V.; Shukalov, A. V.; Zharinov, I. O.

    2018-05-01

    Avionics device design faces the practical problem of developing and studying algorithms for rotating the images shown on the on-board display. The image rotation algorithms are part of the application software of avionics devices, which belong to the on-board computers of airplanes and helicopters. The images to be rotated contain fragments of the flight location map. Image rotation in the display system can be performed either in software or in hardware; the software option is slower than the hardware one. A comparison of several rotation algorithms on test images is presented, with the algorithms realized in hardware using the Altera Quartus II design environment.
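
    As a hedged software-side illustration of such an algorithm (the hardware realization mentioned above is not reproduced), the sketch below rotates a map tile about its centre by inverse mapping with nearest-neighbour sampling; the tile contents and angle are made up.

        import numpy as np

        def rotate_image(img, angle_deg):
            """Rotate an image about its centre by inverse mapping: for every
            output pixel, sample the source pixel found by the inverse rotation."""
            h, w = img.shape[:2]
            cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
            a = np.deg2rad(angle_deg)
            cos_a, sin_a = np.cos(a), np.sin(a)
            yy, xx = np.mgrid[0:h, 0:w]
            # inverse rotation of destination coordinates back into the source image
            xs = np.rint(cos_a * (xx - cx) + sin_a * (yy - cy) + cx).astype(int)
            ys = np.rint(-sin_a * (xx - cx) + cos_a * (yy - cy) + cy).astype(int)
            inside = (xs >= 0) & (xs < w) & (ys >= 0) & (ys < h)
            out = np.zeros_like(img)
            out[yy[inside], xx[inside]] = img[ys[inside], xs[inside]]
            return out

        # Hypothetical map fragment: a vertical stripe rotated by 30 degrees.
        tile = np.zeros((64, 64), np.uint8)
        tile[:, 28:36] = 255
        print(rotate_image(tile, 30).sum() > 0)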

  9. Initial experience with a radiology imaging network to newborn and intensive care units.

    PubMed

    Witt, R M; Cohen, M D; Appledorn, C R

    1991-02-01

    A digital image network has been installed in the James Whitcomb Riley Hospital for Children at the Indiana University Medical Center to create a limited all-digital imaging system. The system is composed of commercial components, the Philips/AT&T CommView system (Philips Medical Systems, Shelton, CT; AT&T Bell Laboratories, West Long Beach, NJ), and connects an existing Philips Computed Radiology (PCR) system to two remote workstations that reside in the intensive care unit and the newborn nursery. The purpose of the system is to display images obtained from the PCR system on the remote workstations for direct viewing by referring clinicians, and to reduce many of their visits to the radiology reading room three floors away. The design criteria include the ability to centrally control all image management functions on the remote workstations to relieve the clinicians from any image management tasks except for recalling patient images. The principal components of the system are the Philips PCR system, the acquisition module (AM), and the PCR interface to the Data Management Module (DMM). Connected to the DMM are an Enhanced Graphics Display Workstation (EGDW), an optical disk drive, and a network gateway to an ethernet link. The ethernet network is the connection to the two Results Viewing Stations (RVS) and both RVSs are approximately 100 m from the gateway. The DMM acts as an image file server and an image archive device. The DMM manages the image data base and can load images to the EGDW and the two RVSs. The system has met the initial design specifications and can successfully capture images from the PCR and direct them to the RVSs. (ABSTRACT TRUNCATED AT 250 WORDS)

  10. Method and Apparatus for Virtual Interactive Medical Imaging by Multiple Remotely-Located Users

    NASA Technical Reports Server (NTRS)

    Ross, Muriel D. (Inventor); Twombly, Ian Alexander (Inventor); Senger, Steven O. (Inventor)

    2003-01-01

    A virtual interactive imaging system allows the displaying of high-resolution, three-dimensional images of medical data to a user and allows the user to manipulate the images, including rotation of images in any of various axes. The system includes a mesh component that generates a mesh to represent a surface of an anatomical object, based on a set of data of the object, such as from a CT or MRI scan or the like. The mesh is generated so as to avoid tears, or holes, in the mesh, providing very high-quality representations of topographical features of the object, particularly at high resolution. The system further includes a virtual surgical cutting tool that enables the user to simulate the removal of a piece or layer of a displayed object, such as a piece of skin or bone, view the interior of the object, manipulate the removed piece, and reattach the removed piece if desired. The system further includes a virtual collaborative clinic component, which allows the users of multiple, remotely-located computer systems to collaboratively and simultaneously view and manipulate the high-resolution, three-dimensional images of the object in real-time.

  11. The OCT penlight: in-situ image guidance for microsurgery

    NASA Astrophysics Data System (ADS)

    Galeotti, John; Sajjad, Areej; Wang, Bo; Kagemann, Larry; Shukla, Gaurav; Siegel, Mel; Wu, Bing; Klatzky, Roberta; Wollstein, Gadi; Schuman, Joel S.; Stetten, George

    2010-02-01

    We have developed a new image-based guidance system for microsurgery using optical coherence tomography (OCT), which presents a virtual image in its correct location inside the scanned tissue. Applications include surgery of the cornea, skin, and other surfaces below which shallow targets may advantageously be displayed for the naked eye or low-power magnification by a surgical microscope or loupes (magnifying eyewear). OCT provides real-time high-resolution (3 micron) images at video rates within a two or more millimeter axial range in soft tissue, and is therefore suitable for guidance to various shallow targets such as Schlemm's canal in the eye (for treating Glaucoma) or skin tumors. A series of prototypes of the "OCT penlight" have produced virtual images with sufficient resolution and intensity to be useful under magnification, while the geometrical arrangement between the OCT scanner and display optics (including a half-silvered mirror) permits sufficient surgical access. The two prototypes constructed thus far have used, respectively, a miniature organic light emitting diode (OLED) display and a reflective liquid crystal on silicon (LCoS) display. The OLED has the advantage of relative simplicity, satisfactory resolution (15 micron), and color capability, whereas the LCoS can produce an image with much higher intensity and superior resolution (12 micron), although it is monochromatic and more complicated optically. Intensity is a crucial limiting factor, since light flux is greatly diminished with increasing magnification, thus favoring the LCoS as the more practical system.

  12. Video System for Viewing From a Remote or Windowless Cockpit

    NASA Technical Reports Server (NTRS)

    Banerjee, Amarnath

    2009-01-01

    A system of electronic hardware and software synthesizes, in nearly real time, an image of a portion of a scene surveyed by as many as eight video cameras aimed, in different directions, at portions of the scene. This is a prototype of systems that would enable a pilot to view the scene outside a remote or windowless cockpit. The outputs of the cameras are digitized. Direct memory addressing is used to store the data of a few captured images in sequence, and the sequence is repeated in cycles. Cylindrical warping is used in merging adjacent images at their borders to construct a mosaic image of the scene. The mosaic-image data are written to a memory block from which they can be rendered on a head-mounted display (HMD) device. A subsystem in the HMD device tracks the direction of gaze of the wearer, providing data that are used to select, for display, the portion of the mosaic image corresponding to the direction of gaze. The basic functionality of the system has been demonstrated by mounting the cameras on the roof of a van and steering the van by use of the images presented on the HMD device.
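
    The cylindrical-warping step used to merge adjacent camera views can be sketched as follows. This is a generic, hedged example rather than the system's code: each planar frame is re-projected onto a cylinder (with an assumed focal length) so that neighbouring views can later be aligned by simple horizontal translation before blending.

        import numpy as np

        def cylindrical_warp(img, focal_px):
            """Project a planar camera image onto a cylinder of radius `focal_px`
            so that adjacent views can be merged by horizontal translation."""
            h, w = img.shape[:2]
            cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
            yy, xx = np.mgrid[0:h, 0:w]
            theta = (xx - cx) / focal_px            # cylindrical angle per output column
            hh = (yy - cy) / focal_px               # cylindrical height per output row
            xs = np.rint(focal_px * np.tan(theta) + cx).astype(int)
            ys = np.rint(focal_px * hh / np.cos(theta) + cy).astype(int)
            inside = (xs >= 0) & (xs < w) & (ys >= 0) & (ys < h)
            out = np.zeros_like(img)
            out[yy[inside], xx[inside]] = img[ys[inside], xs[inside]]
            return out

        # Hypothetical frame from one of the eight cameras, warped with an assumed focal length.
        frame = np.zeros((240, 320), np.uint8)
        frame[:, 150:170] = 255
        print(cylindrical_warp(frame, 300.0).shape)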

  13. Real-time clinically oriented array-based in vivo combined photoacoustic and power Doppler imaging

    NASA Astrophysics Data System (ADS)

    Harrison, Tyler; Jeffery, Dean; Wiebe, Edward; Zemp, Roger J.

    2014-03-01

    Photoacoustic imaging has great potential for identifying vascular regions for clinical imaging. In addition to assessing angiogenesis in cancers, there are many other disease processes that result in increased vascularity that present novel targets for photoacoustic imaging. Doppler imaging can provide good localization of large vessels, but poor imaging of small or low flow speed vessels and is susceptible to motion artifacts. Photoacoustic imaging can provide visualization of small vessels, but due to the filtering effects of ultrasound transducers, only shows the edges of large vessels. Thus, we have combined photoacoustic imaging with ultrasound power Doppler to provide contrast-agent-free vascular imaging. We use a research-oriented ultrasound array system to provide interlaced ultrasound, Doppler, and photoacoustic imaging. This system features real-time display of all three modalities with adjustable persistence, rejection, and compression. For ease of use in a clinical setting, display of each mode can be disabled. We verify the ability of this system to identify vessels with varying flow speeds using receiver operating characteristic curves, and find that as flow speed falls, photoacoustic imaging becomes a much better method for identifying blood vessels. We also present several in vivo images of the thyroid and several synovial joints to assess the practicality of this imaging for clinical applications.

  14. MIDAS - A microcomputer-based image display and analysis system with full Landsat frame processing capabilities

    NASA Technical Reports Server (NTRS)

    Hofman, L. B.; Erickson, W. K.; Donovan, W. E.

    1984-01-01

    The Microcomputer-based Image Display and Analysis System (MIDAS) developed at NASA/Ames for the analysis of Landsat MSS images is described. The MIDAS computer power and memory, graphics, resource-sharing, expansion and upgrade, environment and maintenance, and software/user-interface requirements are outlined; the implementation hardware (including 32-bit microprocessor, 512K error-correcting RAM, 70 or 140-Mbyte formatted disk drive, 512 x 512 x 24 color frame buffer, and local-area-network transceiver) and applications software (ELAS, CIE, and P-EDITOR) are characterized; and implementation problems, performance data, and costs are examined. Planned improvements in MIDAS hardware and design goals and areas of exploration for MIDAS software are discussed.

  15. High dynamic range coding imaging system

    NASA Astrophysics Data System (ADS)

    Wu, Renfan; Huang, Yifan; Hou, Guangqi

    2014-10-01

    We present a high dynamic range (HDR) imaging system design scheme based on a coded aperture technique. This scheme can help us obtain HDR images with extended depth of field. We adopt a sparse coding algorithm to design the coded patterns. Then we use the sensor unit to acquire coded images under different exposure settings. Guided by the multiple exposure parameters, a series of low dynamic range (LDR) coded images are reconstructed. We use existing algorithms to fuse those LDR images into an HDR image and display it. We build an optical simulation model and obtain simulation images to verify the novel system.
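
    The multi-exposure fusion step can be illustrated independently of the coded-aperture reconstruction, which is omitted here. The hedged sketch below merges a bracketed stack of LDR frames into a radiance map with a hat-shaped weighting function, assuming a linear camera response; the scene, exposure times, and weighting are illustrative, not the paper's algorithm.

        import numpy as np

        def fuse_exposures(ldr_images, exposure_times):
            """Merge a bracketed stack of LDR images (values in 0..255) into a
            single HDR radiance map using a hat-shaped weighting function."""
            numerator = np.zeros(ldr_images[0].shape, float)
            denominator = np.zeros_like(numerator)
            for img, t in zip(ldr_images, exposure_times):
                z = img.astype(float)
                weight = 1.0 - np.abs(z - 127.5) / 127.5      # trust mid-range pixels most
                numerator += weight * (z / t)
                denominator += weight
            return numerator / np.maximum(denominator, 1e-6)

        # Hypothetical bracketed exposures of the same (linear) scene.
        rng = np.random.default_rng(2)
        scene = rng.uniform(0.1, 50.0, (64, 64))              # "true" radiance
        times = [0.5, 2.0, 8.0]
        stack = [np.clip(scene * t, 0, 255).astype(np.uint8) for t in times]
        hdr = fuse_exposures(stack, times)
        print(float(hdr.min()), float(hdr.max()))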

  16. Real-time free-viewpoint DIBR for large-size 3DLED

    NASA Astrophysics Data System (ADS)

    Wang, NengWen; Sang, Xinzhu; Guo, Nan; Wang, Kuiru

    2017-10-01

    Three-dimensional (3D) display technologies have made great progress in recent years, and lenticular-array-based 3D display is a relatively mature technology that is most likely to be commercialized. In naked-eye 3D display, the screen size is one of the most important factors that affect the viewing experience. In order to construct a large-size naked-eye 3D display system, an LED display is used. However, pixel misalignment is an inherent defect of the LED screen, which influences the rendering quality. To address this issue, an efficient image synthesis algorithm is proposed. The Texture-Plus-Depth (T+D) format is chosen for the display content, and a modified Depth Image Based Rendering (DIBR) method is proposed to synthesize new views. In order to achieve real-time performance, the whole algorithm is implemented on the GPU. With state-of-the-art hardware and the efficient algorithm, a naked-eye 3D display system with an LED screen size of 6 m × 1.8 m is achieved. Experiments show that the algorithm can process 43-view 3D video with 4K × 2K resolution in real time on the GPU, and a vivid 3D experience is perceived.
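
    The core of any DIBR renderer is warping texture pixels by a disparity derived from the depth map. The sketch below is a deliberately minimal CPU illustration (the paper's GPU implementation and its handling of LED pixel misalignment are not reproduced): it shifts pixels horizontally in proportion to depth and resolves occlusions by writing far pixels before near ones; the disparity range and test frame are assumptions.

        import numpy as np

        def dibr_shift(texture, depth, max_disparity=16):
            """Minimal depth-image-based rendering: shift every pixel horizontally
            by a disparity proportional to its depth value (255 = nearest).
            Pixels are written far-to-near so that near objects win occlusions."""
            h, w = texture.shape[:2]
            disparity = np.rint(depth.astype(float) / 255.0 * max_disparity).astype(int)
            out = np.zeros_like(texture)
            order = np.argsort(depth, axis=None)          # far (small depth value) first
            ys, xs = np.unravel_index(order, depth.shape)
            for y, x in zip(ys, xs):
                tx = x + disparity[y, x]
                if 0 <= tx < w:
                    out[y, tx] = texture[y, x]
            return out

        # Hypothetical Texture-Plus-Depth frame: a near square over a far background.
        tex = np.zeros((64, 64), np.uint8); tex[20:40, 20:40] = 200
        dep = np.zeros((64, 64), np.uint8); dep[20:40, 20:40] = 255
        virtual_view = dibr_shift(tex, dep)
        print(virtual_view.max(), (virtual_view > 0).sum())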

  17. Color image generation for screen-scanning holographic display.

    PubMed

    Takaki, Yasuhiro; Matsumoto, Yuji; Nakajima, Tatsumi

    2015-10-19

    Horizontally scanning holography using a microelectromechanical system spatial light modulator (MEMS-SLM) can provide reconstructed images with an enlarged screen size and an increased viewing zone angle. Herein, we propose techniques to enable color image generation for a screen-scanning display system employing a single MEMS-SLM. Higher-order diffraction components generated by the MEMS-SLM for R, G, and B laser lights were coupled by providing proper illumination angles on the MEMS-SLM for each color. An error diffusion technique to binarize the hologram patterns was developed, in which the error diffusion directions were determined for each color. Color reconstructed images with a screen size of 6.2 in. and a viewing zone angle of 10.2° were generated at a frame rate of 30 Hz.
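
    The error-diffusion binarization mentioned above can be illustrated with the classic Floyd-Steinberg kernel; the paper's per-color choice of diffusion directions is not specified here, so this hedged sketch uses the standard left-to-right scan and a synthetic fringe pattern as input.

        import numpy as np

        def error_diffusion_binarize(pattern):
            """Floyd-Steinberg error diffusion: binarize a grey-scale hologram
            pattern (values in 0..1) while pushing the quantization error onto
            neighbouring, not-yet-processed pixels."""
            p = pattern.astype(float).copy()
            h, w = p.shape
            out = np.zeros((h, w), np.uint8)
            for y in range(h):
                for x in range(w):
                    new = 1.0 if p[y, x] >= 0.5 else 0.0
                    out[y, x] = int(new)
                    err = p[y, x] - new
                    if x + 1 < w:
                        p[y, x + 1] += err * 7 / 16
                    if y + 1 < h and x > 0:
                        p[y + 1, x - 1] += err * 3 / 16
                    if y + 1 < h:
                        p[y + 1, x] += err * 5 / 16
                    if y + 1 < h and x + 1 < w:
                        p[y + 1, x + 1] += err * 1 / 16
            return out

        # Hypothetical grey-scale fringe pattern for one color channel.
        xx = np.linspace(0, 8 * np.pi, 128)
        fringes = 0.5 + 0.5 * np.cos(xx)[None, :] * np.ones((128, 1))
        binary = error_diffusion_binarize(fringes)
        print(float(binary.mean()))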

  18. PMC's Florida Bay & Adjacent Marine Systems Science Program

    Science.gov Websites


  19. A liquid-crystal-on-silicon color sequential display using frame buffer pixel circuits

    NASA Astrophysics Data System (ADS)

    Lee, Sangrok

    Next generation liquid-crystal-on-silicon (LCOS) high definition (HD) televisions and image projection displays will need to be low-cost and high quality to compete with existing systems based on digital micromirror devices (DMDs), plasma displays, and direct view liquid crystal displays. In this thesis, a novel frame buffer pixel architecture, which buffers data for the next image frame while displaying the current frame, is presented as such a competitive solution. The primary goal of the thesis is to demonstrate the LCOS microdisplay architecture for high quality image projection displays at potentially low cost. The thesis covers four main research areas: new frame buffer pixel circuits to improve the LCOS performance, backplane architecture design and testing, liquid crystal modes for the LCOS microdisplay, and system integration and demonstration. The design requirements for the LCOS backplane with a 64 x 32 pixel array are addressed, and the measured electrical characteristics match the computer simulation results. Various liquid crystal (LC) modes applicable to LCOS microdisplays and their physical properties are discussed. One- and two-dimensional director simulations are performed for the selected LC modes. Test liquid crystal cells with the selected LC modes are made and their electro-optic effects are characterized. The 64 x 32 LCOS microdisplays fabricated with the best LC mode are optically tested with interface circuitry. The characteristics of the LCOS microdisplays are summarized together with the successful demonstration.

  20. Design and application of a small size SAFT imaging system for concrete structure

    NASA Astrophysics Data System (ADS)

    Shao, Zhixue; Shi, Lihua; Shao, Zhe; Cai, Jian

    2011-07-01

    A method of ultrasonic imaging detection is presented for quick non-destructive testing (NDT) of concrete structures using the synthetic aperture focusing technique (SAFT). A low-cost ultrasonic sensor array consisting of 12 commercially available low-frequency ultrasonic transducers is designed and manufactured. A channel compensation method is proposed to improve the consistency of the different transducers. The controlling devices for the array scan as well as the virtual instrument for SAFT imaging are designed. In the coarse scan mode, with a scan step of 50 mm, the system can quickly give an image display of a cross section of 600 mm (L) × 300 mm (D) in one measurement. In the refined scan mode, the system can reduce the scan step and give an image display of the same cross section by moving the sensor array several times. Experiments on a staircase specimen, a concrete slab with an embedded target, and a building floor with an underground pipe line all verify the efficiency of the proposed method.
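
    SAFT reconstruction is essentially delay-and-sum: every image pixel accumulates, over all transducer positions, the echo sample at the corresponding round-trip delay. The sketch below is a hedged, simplified illustration with an assumed sound speed, sampling interval, and synthetic point echo; it is not the instrument's virtual-instrument code.

        import numpy as np

        def saft_reconstruct(a_scans, element_x, dt, sound_speed, pixels_x, pixels_z):
            """Delay-and-sum SAFT: for every image pixel, sum the pulse-echo
            samples of all transducer positions at the round-trip delay from
            that position to the pixel and back."""
            image = np.zeros((len(pixels_z), len(pixels_x)))
            n_samples = a_scans.shape[1]
            for k, xe in enumerate(element_x):
                for j, xp in enumerate(pixels_x):
                    for i, zp in enumerate(pixels_z):
                        r = np.hypot(xp - xe, zp)
                        idx = int(round(2.0 * r / sound_speed / dt))
                        if idx < n_samples:
                            image[i, j] += a_scans[k, idx]
            return image

        # Hypothetical data: 12 elements 50 mm apart (coarse scan) and one point echo.
        c, dt = 4000.0, 1e-6                               # m/s in concrete, 1 us sampling
        elems = np.arange(12) * 0.05
        scans = np.zeros((12, 512))
        target = np.array([0.3, 0.15])                     # x and depth in metres
        for k, xe in enumerate(elems):
            scans[k, int(round(2 * np.hypot(target[0] - xe, target[1]) / c / dt))] = 1.0
        img = saft_reconstruct(scans, elems, dt, c, np.linspace(0, 0.6, 61), np.linspace(0.01, 0.3, 30))
        print(np.unravel_index(np.argmax(img), img.shape))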

  1. Comparison of imaging characteristics of multiple-beam equalization and storage phosphor direct digitizer radiographic systems

    NASA Astrophysics Data System (ADS)

    Sankaran, A.; Chuang, Keh-Shih; Yonekawa, Hisashi; Huang, H. K.

    1992-06-01

    The imaging characteristics of two chest radiographic systems, the Advanced Multiple Beam Equalization Radiography (AMBER) and the Konica Direct Digitizer [using a storage phosphor (SP) plate], have been compared. The variables affecting image quality and the computer display/reading systems used are detailed. Utilizing specially designed wedge, geometric, and anthropomorphic phantoms, studies were conducted on: exposure and energy response of detectors; nodule detectability; different exposure techniques; various look-up tables (LUTs), gray scale displays and laser printers. Methods for scatter estimation and reduction were investigated. It is concluded that AMBER with screen-film and equalization techniques provides better nodule detectability than SP plates. However, SP plates have other advantages such as flexibility in the selection of exposure techniques, image processing features, and excellent sensitivity when combined with optimum reader operating modes. The equalization feature of AMBER provides better nodule detectability under the denser regions of the chest. Results of diagnostic accuracy are demonstrated with nodule detectability plots and analysis of images obtained with phantoms.

  2. Investigation of imaging and flight guidance concepts for rotorcraft zero visibility approach and landing

    NASA Technical Reports Server (NTRS)

    Mckeown, W. L.

    1984-01-01

    A simulation experiment to explore the use of an augmented pictorial display to approach and land a helicopter in zero-visibility conditions was conducted in a fixed-base simulator. A literature search was also conducted to identify related work. A display was developed and pilot-in-the-loop evaluations were conducted. The pictorial display was a simulated, high-resolution radar image, augmented with various parameters to improve distance and motion cues. Approaches and landings were accomplished, but with higher workloads and less accuracy than necessary for a practical system. Recommendations are provided for display improvements and a follow-on simulation study in a moving-base simulator.

  3. A volumetric three-dimensional digital light photoactivatable dye display

    NASA Astrophysics Data System (ADS)

    Patel, Shreya K.; Cao, Jian; Lippert, Alexander R.

    2017-07-01

    Volumetric three-dimensional displays offer spatially accurate representations of images with a 360° view, but have been difficult to implement due to complex fabrication requirements. Herein, a chemically enabled volumetric 3D digital light photoactivatable dye display (3D Light PAD) is reported. The operating principle relies on photoactivatable dyes that become reversibly fluorescent upon illumination with ultraviolet light. Proper tuning of kinetics and emission wavelengths enables the generation of a spatial pattern of fluorescent emission at the intersection of two structured light beams. A first-generation 3D Light PAD was fabricated using the photoactivatable dye N-phenyl spirolactam rhodamine B, a commercial picoprojector, an ultraviolet projector and a custom quartz imaging chamber. The system displays a minimum voxel size of 0.68 mm3, 200 μm resolution and good stability over repeated `on-off' cycles. A range of high-resolution 3D images and animations can be projected, setting the foundation for widely accessible volumetric 3D displays.

  4. A volumetric three-dimensional digital light photoactivatable dye display

    PubMed Central

    Patel, Shreya K.; Cao, Jian; Lippert, Alexander R.

    2017-01-01

    Volumetric three-dimensional displays offer spatially accurate representations of images with a 360° view, but have been difficult to implement due to complex fabrication requirements. Herein, a chemically enabled volumetric 3D digital light photoactivatable dye display (3D Light PAD) is reported. The operating principle relies on photoactivatable dyes that become reversibly fluorescent upon illumination with ultraviolet light. Proper tuning of kinetics and emission wavelengths enables the generation of a spatial pattern of fluorescent emission at the intersection of two structured light beams. A first-generation 3D Light PAD was fabricated using the photoactivatable dye N-phenyl spirolactam rhodamine B, a commercial picoprojector, an ultraviolet projector and a custom quartz imaging chamber. The system displays a minimum voxel size of 0.68 mm3, 200 μm resolution and good stability over repeated ‘on-off’ cycles. A range of high-resolution 3D images and animations can be projected, setting the foundation for widely accessible volumetric 3D displays. PMID:28695887

  5. High-performance web viewer for cardiac images

    NASA Astrophysics Data System (ADS)

    dos Santos, Marcelo; Furuie, Sergio S.

    2004-04-01

    With the advent of digital devices for medical diagnosis, the use of conventional film in radiology has decreased. Thus, the management and handling of medical images in digital format has become an important and critical task. In cardiology, for example, the main difficulty is displaying dynamic images with the color palette and frame rate used during acquisition by cath, angio, and echo systems. Another difficulty is handling large images in memory on any existing personal computer, including thin clients. In this work we present a web-based application that carries out these tasks with robustness and excellent performance, without burdening the server or network. This application provides near-diagnostic-quality display of cardiac images stored as DICOM 3.0 files via a web browser and provides a set of resources for viewing still and dynamic images. It can access image files from local disks or over a network connection. Its features include real-time playback, dynamic thumbnail viewing during loading, access to patient database information, image processing tools, linear and angular measurements, on-screen annotations, image printing, and export of DICOM images to other image formats, among many others, all presented through a pleasant, user-friendly interface inside a web browser by means of a Java application. This approach offers several advantages over most medical image viewers, such as ease of installation, integration with other systems through public and standardized interfaces, platform independence, and efficient manipulation and display of medical images, all with high performance.

  6. VA's Integrated Imaging System on three platforms.

    PubMed

    Dayhoff, R E; Maloney, D L; Majurski, W J

    1992-01-01

    The DHCP Integrated Imaging System provides users with integrated patient data including text, image and graphics data. This system has been transferred from its original two screen DOS-based MUMPS platform to an X window workstation and a Microsoft Windows-based workstation. There are differences between these various platforms that impact on software design and on software development strategy. Data structures and conventions were used to isolate hardware, operating system, imaging software, and user-interface differences between platforms in the implementation of functionality for text and image display and interaction. The use of an object-oriented approach greatly increased system portability.

  7. VA's Integrated Imaging System on three platforms.

    PubMed Central

    Dayhoff, R. E.; Maloney, D. L.; Majurski, W. J.

    1992-01-01

    The DHCP Integrated Imaging System provides users with integrated patient data including text, image and graphics data. This system has been transferred from its original two screen DOS-based MUMPS platform to an X window workstation and a Microsoft Windows-based workstation. There are differences between these various platforms that impact on software design and on software development strategy. Data structures and conventions were used to isolate hardware, operating system, imaging software, and user-interface differences between platforms in the implementation of functionality for text and image display and interaction. The use of an object-oriented approach greatly increased system portability. PMID:1482983

  8. Development of an immersive virtual reality head-mounted display with high performance.

    PubMed

    Wang, Yunqi; Liu, Weiqi; Meng, Xiangxiang; Fu, Hanyi; Zhang, Daliang; Kang, Yusi; Feng, Rui; Wei, Zhonglun; Zhu, Xiuqing; Jiang, Guohua

    2016-09-01

    To resolve the contradiction between large field of view and high resolution in immersive virtual reality (VR) head-mounted displays (HMDs), an HMD monocular optical system with a large field of view and high resolution was designed. The system was fabricated by adopting aspheric technology with CNC grinding and a high-resolution LCD as the image source. With this monocular optical system, an HMD binocular optical system with a wide-range continuously adjustable interpupillary distance was achieved in the form of partially overlapping fields of view (FOV) combined with a screw adjustment mechanism. A fast image processor-centered LCD driver circuit and an image preprocessing system were also built to address binocular vision inconsistency in the partially overlapping FOV binocular optical system. The distortions of the HMD optical system with a large field of view were measured. Meanwhile, the optical distortions in the display and the trapezoidal distortions introduced during image processing were corrected by a calibration model for reverse rotations and translations. A high-performance not-fully-transparent VR HMD device with high resolution (1920×1080) and large FOV [141.6°(H)×73.08°(V)] was developed. The full field-of-view average value of angular resolution is 18.6  pixels/degree. With the device, high-quality VR simulations can be completed under various scenarios, and the device can be utilized for simulated trainings in aeronautics, astronautics, and other fields with corresponding platforms. The developed device has positive practical significance.

  9. Performance and cost improvements in the display control module for DVE

    NASA Astrophysics Data System (ADS)

    Thomas, J.; Lorimer, S.

    2009-05-01

    The display control module (DCM) for the Driver's Vision Enhancer (DVE) system is the display part of a relatively low-cost IR imaging system for land vehicles. Recent changes to operational needs, associated with asymmetric warfare, have added daytime operations to the uses for this mature system. This paper discusses cost/performance tradeoffs and provides thoughts for the "DVE of the future" in light of these new operational needs for the system.

  10. A 360-degree floating 3D display based on light field regeneration.

    PubMed

    Xia, Xinxing; Liu, Xu; Li, Haifeng; Zheng, Zhenrong; Wang, Han; Peng, Yifan; Shen, Weidong

    2013-05-06

    Using a light field reconstruction technique, we can display a floating 3D scene in the air that is viewable from 360 degrees with correct occlusion effects. A high-frame-rate color projector and a flat light field scanning screen are used in the system to create the light field of a real 3D scene in the air above the spinning screen. The principle and display performance of this approach are investigated in this paper. The image synthesis method for all the surrounding viewpoints is analyzed, and the 3D spatial resolution and angular resolution of the common display zone are used to evaluate display performance. A prototype was built and real 3D color animated images have been presented vividly. The experimental results verified the feasibility of this method.

  11. Training Performance of Laparoscopic Surgery in Two- and Three-Dimensional Displays.

    PubMed

    Lin, Chiuhsiang Joe; Cheng, Chih-Feng; Chen, Hung-Jen; Wu, Kuan-Ying

    2017-04-01

    This research investigated differences in the effects of a state-of-the-art stereoscopic 3-dimensional (3D) display and a traditional 2-dimensional (2D) display in simulated laparoscopic surgery over a longer duration than in previous publications and studied the learning effects of the 2 display systems on novices. A randomized experiment with 2 factors, image dimensions and image sequence, was conducted to investigate differences in the mean movement time, the mean error frequency, NASA-TLX cognitive workload, and visual fatigue in pegboard and circle-tracing tasks. The stereoscopic 3D display had advantages in mean movement time (P < .001 and P = .002) and mean error frequency (P = .010 and P = .008) in both tasks. There were no significant differences in objective visual fatigue (P = .729 and P = .422) or in NASA-TLX cognitive workload (P = .605 and P = .937) between the 3D and 2D displays on either task. For the learning effect, participants who used the stereoscopic 3D display first had shorter mean movement times in the 2D display environment on both the pegboard (P = .011) and circle-tracing (P = .017) tasks. The results of this research suggest that a stereoscopic system would not result in higher objective visual fatigue or cognitive workload than a 2D system, and it might reduce the performance time and increase the precision of surgical operations. In addition, the learning efficiency of the stereoscopic system for the novices in this study demonstrated its value for training and education in laparoscopic surgery.

  12. The application of autostereoscopic display in smart home system based on mobile devices

    NASA Astrophysics Data System (ADS)

    Zhang, Yongjun; Ling, Zhi

    2015-03-01

    Smart home systems control home devices and are increasingly popular in daily life. Mobile intelligent terminals for smart homes have been developed, making remote control and monitoring possible with smartphones or tablets. Meanwhile, 3D stereoscopic display technology has developed rapidly in recent years. Therefore, an iPad-based smart home system that adopts an autostereoscopic display as its control interface is proposed to improve the user experience. In consideration of the iPad's limited hardware capabilities, we introduce a 3D image synthesis method based on parallel processing with the Graphics Processing Unit (GPU), implemented with the OpenGL ES Application Programming Interface (API) on the iOS platform for real-time autostereoscopic display. Compared to a traditional smart home system, the proposed system applies an autostereoscopic display to the control interface, enhancing the realism, user-friendliness, and visual comfort of the interface.
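
    The abstract describes synthesizing the autostereoscopic frame on the GPU with an OpenGL ES fragment shader. As a rough, hedged illustration of the underlying per-subpixel view interleaving (not the paper's actual shader), the sketch below assumes a slanted-lenticular-style mapping; the slope and offset parameters are placeholders for a real panel's calibrated lens geometry.

    ```python
    import numpy as np

    def interleave_views(views: np.ndarray, slope: float = 1 / 3, offset: float = 0.0) -> np.ndarray:
        """Synthesize one autostereoscopic frame from N rendered parallax views.

        views: (N, H, W, 3) array.  Each subpixel is copied from the view chosen
        by a slanted-lenticular style mapping; slope and offset stand in for the
        panel's real lens geometry, which would have to be calibrated.
        """
        n, h, w, _ = views.shape
        y, x, c = np.meshgrid(np.arange(h), np.arange(w), np.arange(3), indexing="ij")
        subpixel = 3 * x + c                                   # treat R, G, B as separate columns
        view_idx = np.floor((subpixel + y * slope) * slope + offset).astype(int) % n
        return views[view_idx, y, x, c]

    if __name__ == "__main__":
        views = np.random.rand(8, 120, 160, 3)                 # 8 synthetic views for a small panel
        frame = interleave_views(views)
        print(frame.shape)                                     # (120, 160, 3)
    ```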

  13. Integrated large view angle hologram system with multi-slm

    NASA Astrophysics Data System (ADS)

    Yang, ChengWei; Liu, Juan

    2017-10-01

    Holographic display has recently attracted much attention for its ability to generate real-time 3D reconstructed images. Computer-generated holography (CGH) provides an effective way to produce holograms, and a spatial light modulator (SLM) is used to reconstruct the image. However, the reconstruction system is usually heavy and complex, and the view angle is limited by the pixel size and space-bandwidth product (SBP) of the SLM. In this paper a light, portable holographic display system is proposed by integrating the optical elements and host computer units, which significantly reduces the space taken in the horizontal direction. The CGH is produced based on Fresnel diffraction and the point-source method. To reduce memory usage and image distortion, we use an optimized accurate compressed look-up table (AC-LUT) method to compute the hologram. In the system, six SLMs are concatenated into a curved plane, each loading the phase-only hologram of the object at a different angle, so the horizontal view angle of the reconstructed image can be expanded to about 21.8°.
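
    The hologram computation named above (Fresnel diffraction with the point-source method, accelerated by the AC-LUT) can be illustrated by the unaccelerated baseline: a direct point-source summation for a single SLM. The sketch below is only that baseline; the pixel pitch, wavelength, and SLM resolution are assumed values, not the paper's.

    ```python
    import numpy as np

    def point_source_hologram(points, slm_shape=(1080, 1920), pitch=8e-6, wavelength=532e-9):
        """Phase-only Fresnel hologram of a 3-D point cloud for a single SLM.

        points: iterable of (x, y, z, amplitude) in metres, z measured from the SLM plane.
        Direct point-source summation, which AC-LUT-style methods accelerate with
        precomputed look-up tables; all parameters here are illustrative.
        """
        rows, cols = slm_shape
        ys = (np.arange(rows) - rows / 2) * pitch
        xs = (np.arange(cols) - cols / 2) * pitch
        X, Y = np.meshgrid(xs, ys)
        k = 2 * np.pi / wavelength
        field = np.zeros(slm_shape, dtype=np.complex128)
        for x0, y0, z0, amp in points:
            # Fresnel approximation of the spherical wave emitted by each object point
            r = ((X - x0) ** 2 + (Y - y0) ** 2) / (2 * z0)
            field += amp * np.exp(1j * k * (z0 + r))
        return np.mod(np.angle(field), 2 * np.pi)              # phase-only hologram

    if __name__ == "__main__":
        pts = [(0.0, 0.0, 0.2, 1.0), (1e-3, -1e-3, 0.25, 0.8)]  # two object points
        print(point_source_hologram(pts).shape)                 # (1080, 1920)
    ```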

  14. Review of ultraresolution (10-100 megapixel) visualization systems built by tiling commercial display components

    NASA Astrophysics Data System (ADS)

    Hopper, Darrel G.; Haralson, David G.; Simpson, Matthew A.; Longo, Sam J.

    2002-08-01

    Ultra-resolution visualization systems are achieved by tiling many direct-view or projection displays. During the past few years, several such systems have been built from commercial electronics components (displays, computers, image generators, networks, communication links, and software). Civil applications driving this development have independently determined that they require images at 10-100 megapixel (Mpx) resolution to enable state-of-the-art research, engineering, design, stock exchanges, flight simulators, business information and enterprise control centers, education, art, and entertainment. Military applications also press the art of the possible to improve the productivity of warfighters and lower the cost of providing for the national defense. The environment in some 80% of defense applications can be addressed by ruggedization of commercial components. This paper reviews the status of ultra-resolution systems based on commercial components and describes a vision for their integration into advanced yet affordable military command centers, simulator/trainers, and, eventually, crew stations in air, land, sea, and space systems.

  15. 3D head mount display with single panel

    NASA Astrophysics Data System (ADS)

    Wang, Yuchang; Huang, Junejei

    2014-09-01

    A head-mounted display for entertainment mainly needs to be lightweight, but professional applications have more requirements: image quality, field of view (FOV), color gamut, response time, and lifetime must also be considered. A head-mounted display based on a single-chip TI DMD spatial light modulator is proposed. The multiple light sources and the image-splitting relay system are the major design tasks. The relay system images the object (the DMD) onto two image planes to create binocular vision. A 0.65-inch 1080p DMD is adopted. The relay performs well and includes a doublet to reduce chromatic aberration. Space is reserved for a mirror and an adjustment mechanism. The mirror splits the rays to the left and right image planes; these planes serve as the objects for the eyepieces and are imaged to the eyes. An adjustable mechanism provides a variable interpupillary distance (IPD). The folded optical path keeps the HMD's center of gravity close to the head and prevents an uncomfortable downward force on the head or orbit. Two RGB LED assemblies illuminate the DMD at different angles. The light is highly collimated, and the divergence angle is small enough that each LED's rays enter only the correct eyepiece. The switching is electronically controlled; there are no moving parts to produce vibration, so fast switching is possible. The two LEDs are synchronized with the 3D video sync by a driver board that also controls the DMD: when the left-eye image is displayed on the DMD, the LED for the left optical path turns on, and vice versa for the right image, accomplishing the 3D scene.

  16. Development of a web-based DICOM-SR viewer for CAD data of multiple sclerosis lesions in an imaging informatics-based efolder

    NASA Astrophysics Data System (ADS)

    Ma, Kevin; Wong, Jonathan; Zhong, Mark; Zhang, Jeff; Liu, Brent

    2014-03-01

    In the past, we have presented an imaging-informatics based eFolder system for managing and analyzing imaging and lesion data of multiple sclerosis (MS) patients, which allows for data storage, data analysis, and data mining in clinical and research settings. The system integrates the patient's clinical data with imaging studies and a computer-aided detection (CAD) algorithm for quantifying MS lesion volume, lesion contour, locations, and sizes in brain MRI studies. For compliance with IHE integration protocols, long-term storage in PACS, and data query and display in a DICOM compliant clinical setting, CAD results need to be converted into DICOM-Structured Report (SR) format. Open-source dcmtk and customized XML templates are used to convert quantitative MS CAD results from MATLAB to DICOM-SR format. A web-based GUI based on our existing web-accessible DICOM object (WADO) image viewer has been designed to display the CAD results from generated SR files. The GUI is able to parse DICOM-SR files and extract SR document data, then display lesion volume, location, and brain matter volume along with the referenced DICOM imaging study. In addition, the GUI supports lesion contour overlay, which matches a detected MS lesion with its corresponding DICOM-SR data when a user selects either the lesion or the data. The methodology of converting CAD data in native MATLAB format to DICOM-SR and displaying the tabulated DICOM-SR along with the patient's clinical information, and relevant study images in the GUI will be demonstrated. The developed SR conversion model and GUI support aim to further demonstrate how to incorporate CAD post-processing components in a PACS and imaging informatics-based environment.
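
    As a rough sketch of the template-driven conversion described above, the snippet below writes CAD lesion measurements into a simple intermediate XML file of the kind that could then be converted to a DICOM-SR object (for example with dcmtk's xml2dsr tool). The element names and lesion fields are illustrative assumptions, not the schema actually used by the eFolder system.

    ```python
    import xml.etree.ElementTree as ET

    def lesions_to_xml(lesions, out_path="ms_cad_report.xml"):
        """Write MS lesion CAD results (volume in mm^3, location, slice) to XML.

        Illustrative only: the element names here do NOT follow the dcmtk
        dsr2xml/xml2dsr schema that a real DICOM-SR conversion would require;
        they simply show the intermediate-template idea described above.
        """
        root = ET.Element("report", modality="MR", description="MS lesion CAD")
        for i, lesion in enumerate(lesions, start=1):
            node = ET.SubElement(root, "lesion", id=str(i))
            ET.SubElement(node, "volume_mm3").text = f"{lesion['volume']:.1f}"
            ET.SubElement(node, "location").text = lesion["location"]
            ET.SubElement(node, "slice").text = str(lesion["slice"])
        ET.ElementTree(root).write(out_path, encoding="utf-8", xml_declaration=True)
        return out_path

    if __name__ == "__main__":
        print(lesions_to_xml([
            {"volume": 142.5, "location": "periventricular", "slice": 17},
            {"volume": 38.2, "location": "juxtacortical", "slice": 22},
        ]))
    ```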

  17. High-quality remote interactive imaging in the operating theatre

    NASA Astrophysics Data System (ADS)

    Grimstead, Ian J.; Avis, Nick J.; Evans, Peter L.; Bocca, Alan

    2009-02-01

    We present a high-quality display system that enables the remote access within an operating theatre of high-end medical imaging and surgical planning software. Currently, surgeons often use printouts from such software for reference during surgery; our system enables surgeons to access and review patient data in a sterile environment, viewing real-time renderings of MRI & CT data as required. Once calibrated, our system displays shades of grey in Operating Room lighting conditions (removing any gamma correction artefacts). Our system does not require any expensive display hardware, is unobtrusive to the remote workstation and works with any application without requiring additional software licenses. To extend the native 256 levels of grey supported by a standard LCD monitor, we have used the concept of "PseudoGrey" where slightly off-white shades of grey are used to extend the intensity range from 256 to 1,785 shades of grey. Remote access is facilitated by a customized version of UltraVNC, which corrects remote shades of grey for display in the Operating Room. The system is successfully deployed at Morriston Hospital, Swansea, UK, and is in daily use during Maxillofacial surgery. More formal user trials and quantitative assessments are being planned for the future.
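
    The PseudoGrey scheme mentioned above extends an 8-bit panel's 256 gray levels to 1,785 by nudging individual R, G, B subpixels so that each nominal gray level gains several slightly off-white sub-steps. A minimal sketch of that idea follows; it assumes Rec. 709 luminance weights for ordering the sub-steps, which may differ from the calibrated weights used in the deployed system.

    ```python
    import numpy as np

    # Approximate relative luminance contributions of the RGB subpixels
    # (Rec. 709 weights; the actual display calibration may differ).
    W = {"R": 0.2126, "G": 0.7152, "B": 0.0722}

    # The 7 non-empty channel subsets, ordered by how much extra luminance a
    # +1 bump of that subset adds.  This gives 7 sub-steps between consecutive
    # gray levels, i.e. roughly 255 * 7 = 1785 distinguishable shades.
    SUBSETS = sorted(
        [("B",), ("R",), ("R", "B"), ("G",), ("G", "B"), ("G", "R"), ("G", "R", "B")],
        key=lambda s: sum(W[c] for c in s),
    )

    def pseudogray(level: int) -> tuple:
        """Map an extended gray level in [0, 1784] to a slightly off-white RGB triplet."""
        base, sub = divmod(level, 7)
        r = g = b = base
        if sub > 0:
            bump = SUBSETS[sub - 1]
            r += "R" in bump
            g += "G" in bump
            b += "B" in bump
        return min(r, 255), min(g, 255), min(b, 255)

    if __name__ == "__main__":
        for lvl in (0, 1, 6, 7, 1784):
            print(lvl, pseudogray(lvl))
    ```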

  18. A real-time electronic imaging system for solar X-ray observations from sounding rockets

    NASA Technical Reports Server (NTRS)

    Davis, J. M.; Ting, J. W.; Gerassimenko, M.

    1979-01-01

    A real-time imaging system for displaying the solar coronal soft X-ray emission, focussed by a grazing incidence telescope, is described. The design parameters of the system, which is to be used primarily as part of a real-time control system for a sounding rocket experiment, are identified. Their achievement with a system consisting of a microchannel plate, for the conversion of X-rays into visible light, and a slow-scan vidicon, for recording and transmission of the integrated images, is described in detail. The system has a quantum efficiency better than 8% above 8 Å, a dynamic range of 1000 coupled with sensitivity to single photoelectrons, and provides a spatial resolution of 15 arc seconds over a field of view of 40 x 40 arc minutes. The incident radiation is filtered to eliminate wavelengths longer than 100 Å. Each image contains 3.93 x 10^5 bits of information and is transmitted to the ground, where it is processed by a minicomputer and displayed in real time on a standard TV monitor.

  19. Imaging System for Vaginal Surgery.

    PubMed

    Taylor, G Bernard; Myers, Erinn M

    2015-12-01

    The vaginal surgeon is challenged with performing complex procedures within a surgical field of limited light and exposure. The video telescopic operating microscope is an illumination and imaging system that provides visualization during open surgical procedures with a limited field of view. The imaging system is positioned within the surgical field and then secured to the operating room table with a maneuverable holding arm. A high-definition camera and xenon light source allow transmission of the magnified image to a high-definition monitor in the operating room. The monitor screen is positioned above the patient for the surgeon and assistants to view in real time throughout the operation. The video telescopic operating microscope system was used to provide surgical illumination and magnification during total vaginal hysterectomy and salpingectomy, midurethral sling, and release of vaginal scar procedures. All procedures were completed without complications. The video telescopic operating microscope provided illumination of the vaginal operative field and display of the magnified image on high-definition monitors in the operating room for the surgeon and staff to view the procedures simultaneously. The video telescopic operating microscope provides high-definition display, magnification, and illumination during vaginal surgery.

  20. An investigation into the utilization of HCMM thermal data for the discrimination of volcanic and Eolian geological units. [Newberry Volcano, Oregon]

    NASA Technical Reports Server (NTRS)

    Head, J. W., III (Principal Investigator)

    1982-01-01

    The nature of the HCMM data set and the general geologic application of thermal inertia imaging using HCMM data were studied. The CCTs of five sites of interest were obtained and displayed. The upgrading of the image display system was investigated. Fragment/block size distributions in various terrain types were characterized, with emphasis on volcanic terrain. The SEASAT L-band radar images of volcanic landforms at Newberry Volcano, Oregon, were analyzed and compared to imagery from LANDSAT band 7.

  1. Methods for Dichoptic Stimulus Presentation in Functional Magnetic Resonance Imaging - A Review

    PubMed Central

    Choubey, Bhaskar; Jurcoane, Alina; Muckli, Lars; Sireteanu, Ruxandra

    2009-01-01

    Dichoptic stimuli (different stimuli displayed to each eye) are increasingly being used in functional brain imaging experiments using visual stimulation. These studies include investigations into binocular rivalry, interocular information transfer, and three-dimensional depth perception, as well as impairments of the visual system such as amblyopia and stereodeficiency. In this paper, we review various approaches to displaying dichoptic stimuli used in functional magnetic resonance imaging experiments. These include traditional approaches using filters (red-green, red-blue, polarizing) with optical assemblies as well as newer approaches using bi-screen goggles. PMID:19526076

  2. Dynamic, diagnostic, and pharmacological radionuclide studies of the esophagus in achalasia

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rozen, P.; Gelfond, M.; Zaltzman, S.

    1982-08-01

    The esophagus was evaluated in 15 patients with achalasia by continuous gamma camera imaging following ingestion of a semi-solid meal labeled with 99mTc. The images were displayed and recorded on a simple computerized data processing/display system. Subsequent cine-mode images of esophageal emptying demonstrated abnormalities of the body of the esophagus not reflected by the manometric examination. Computer-generated time-activity curves representing specific regions of interest were better than manometry in evaluating the results of myotomy, dilatation, and drug therapy. Isosorbide dinitrate significantly improved esophageal emptying.
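
    The computer-generated time-activity curves mentioned above can be illustrated with a short sketch that sums counts inside a region of interest for each frame of the dynamic gamma-camera series. The ROI placement, frame duration, and array sizes below are assumptions used only for illustration, not details from the study.

    ```python
    import numpy as np

    def time_activity_curve(frames: np.ndarray, roi_mask: np.ndarray,
                            frame_duration_s: float = 1.0):
        """Total counts inside a region of interest for each frame of a dynamic series.

        frames:   (n_frames, rows, cols) gamma-camera images
        roi_mask: (rows, cols) boolean mask, e.g. the distal esophagus
        Returns (times_s, counts), suitable for plotting an emptying curve.
        """
        counts = frames[:, roi_mask].sum(axis=1)
        times = np.arange(frames.shape[0]) * frame_duration_s
        return times, counts

    if __name__ == "__main__":
        frames = np.random.poisson(5.0, size=(30, 64, 64))   # synthetic dynamic study
        mask = np.zeros((64, 64), dtype=bool)
        mask[40:55, 28:36] = True                            # hypothetical esophageal ROI
        t, c = time_activity_curve(frames, mask, frame_duration_s=2.0)
        print(t[:5], c[:5])
    ```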

  3. A versatile stereoscopic visual display system for vestibular and oculomotor research.

    PubMed

    Kramer, P D; Roberts, D C; Shelhamer, M; Zee, D S

    1998-01-01

    Testing of the vestibular system requires a vestibular stimulus (motion) and/or a visual stimulus. We have developed a versatile, low cost, stereoscopic visual display system, using "virtual reality" (VR) technology. The display system can produce images for each eye that correspond to targets at any virtual distance relative to the subject, and so require the appropriate ocular vergence. We elicited smooth pursuit, "stare" optokinetic nystagmus (OKN) and after-nystagmus (OKAN), vergence for targets at various distances, and short-term adaptation of the vestibulo-ocular reflex (VOR), using both conventional methods and the stereoscopic display. Pursuit, OKN, and OKAN were comparable with both methods. When used with a vestibular stimulus, VR induced appropriate adaptive changes of the phase and gain of the angular VOR. In addition, using the VR display system and a human linear acceleration sled, we adapted the phase of the linear VOR. The VR-based stimulus system not only offers an alternative to more cumbersome means of stimulating the visual system in vestibular experiments, it also can produce visual stimuli that would otherwise be impractical or impossible. Our techniques provide images without the latencies encountered in most VR systems. Its inherent versatility allows it to be useful in several different types of experiments, and because it is software driven it can be quickly adapted to provide a new stimulus. These two factors allow VR to provide considerable savings in time and money, as well as flexibility in developing experimental paradigms.

  4. Illuminant-adaptive color reproduction for mobile display

    NASA Astrophysics Data System (ADS)

    Kim, Jong-Man; Park, Kee-Hyon; Kwon, Oh-Seol; Cho, Yang-Ho; Ha, Yeong-Ho

    2006-01-01

    This paper proposes an illuminant-adaptive reproduction method using light adaptation and flare conditions for a mobile display. Mobile displays, such as PDAs and cellular phones, are viewed under various lighting conditions. In particular, images displayed in daylight are perceived as quite dark due to the light adaptation of the human visual system, as the luminance of a mobile display is considerably lower than that of an outdoor environment. In addition, flare phenomena decrease the color gamut of a mobile display by increasing the luminance of dark areas and desaturating the chroma. Therefore, this paper presents an enhancement method composed of lightness enhancement and chroma compensation. First, the ambient light intensity is measured using a lux sensor, then the flare is calculated based on the reflection ratio of the display device and the ambient light intensity. The relative cone response is nonlinear with respect to the input luminance and also changes with the ambient light intensity. Thus, to improve the perceived image, the displayed luminance is enhanced by lightness linearization: the image's luminance is transformed by linearizing the response to the input luminance according to the ambient light intensity. Next, the displayed image is compensated for the physically reduced chroma resulting from the flare phenomena. The reduced chroma value is calculated according to the flare for each intensity. The chroma compensation that maintains the original image's chroma is applied differently for each hue plane, as the flare affects each hue plane differently, and the enhanced chroma also respects the gamut boundary. Based on experimental observations, outdoor illuminance generally ranges from 1,000 lux to 30,000 lux. Thus, for an outdoor environment, i.e., greater than 1,000 lux, this study presents a color reproduction method based on an inverse cone response curve and the flare condition. Consequently, the proposed algorithm improves the quality of the perceived image adaptively for an outdoor environment.
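
    As a rough illustration of the flare term described above, the sketch below estimates the flare luminance added by ambient light (assuming a roughly Lambertian front-surface reflection) and applies a simple global chroma boost. The function names, the Lambertian assumption, and the uniform gain are illustrative; the paper's actual method varies the correction per hue plane and clips to the display gamut.

    ```python
    import numpy as np

    def flare_luminance(ambient_lux: float, reflection_ratio: float) -> float:
        """Estimate flare luminance (cd/m^2) reflected off the display face.

        Assumes a roughly Lambertian front surface, so reflected luminance is
        illuminance * reflectance / pi.  The paper's exact flare model is not
        given in the abstract; this is only an illustration.
        """
        return ambient_lux * reflection_ratio / np.pi

    def compensate_chroma(chroma: np.ndarray, display_white: float,
                          ambient_lux: float, reflection_ratio: float) -> np.ndarray:
        """Boost chroma to counter the desaturation caused by added flare.

        A simple scaling by the ratio of (display + flare) to display luminance;
        the real method differs per hue plane and respects the gamut boundary.
        """
        flare = flare_luminance(ambient_lux, reflection_ratio)
        gain = (display_white + flare) / display_white
        return np.clip(chroma * gain, 0.0, None)

    if __name__ == "__main__":
        # e.g. a 300 cd/m^2 mobile panel with 4 % reflectance in 20,000 lux daylight
        print(flare_luminance(20_000, 0.04))                     # roughly 255 cd/m^2 of flare
        print(compensate_chroma(np.array([20.0, 35.0]), 300.0, 20_000, 0.04))
    ```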

  5. Ultra-realistic imaging: a new beginning for display holography

    NASA Astrophysics Data System (ADS)

    Bjelkhagen, Hans I.; Brotherton-Ratcliffe, David

    2014-02-01

    Recent improvements in key foundation technologies are set to potentially transform the field of display holography. In particular, new recording systems, based on recent DPSS and semiconductor lasers combined with novel recording materials and processing, have now demonstrated full-color analogue holograms of both lower noise and higher spectral accuracy. Progress in illumination technology is leading to a further major reduction in display noise and to a significant increase in the clear image depth and brightness of such holograms. So too, recent progress in 1-step Direct-Write Digital Holography (DWDH) now opens the way to the creation of High Virtual Volume (HVV) displays - large-format full-parallax DWDH reflection holograms having fundamentally larger clear image depths. In a certain fashion, HVV displays can be thought of as providing a high-quality full-color digital equivalent to the large-format laser-illuminated transmission holograms of the sixties and seventies. Back then, the advent of such holograms led to much optimism for display holography in the market. However, problems with laser illumination, their monochromatic analogue nature, and image noise are well cited as being responsible for their failure in practice. Is there reason for believing that the latest technology improvements will make their mark this time around? This paper argues that indeed there is.

  6. Finding-specific display presets for computed radiography soft-copy reading.

    PubMed

    Andriole, K P; Gould, R G; Webb, W R

    1999-05-01

    Much work has been done to optimize the display of cross-sectional modality imaging examinations for soft-copy reading (i.e., window/level tissue presets, and format presentations such as tile and stack modes, four-on-one, nine-on-one, etc). Less attention has been paid to the display of digital forms of the conventional projection x-ray. The purpose of this study is to assess the utility of providing presets for computed radiography (CR) soft-copy display, based not on the window/level settings, but on processing applied to the image optimized for visualization of specific findings, pathologies, etc (i.e., pneumothorax, tumor, tube location). It is felt that digital display of CR images based on finding-specific processing presets has the potential to: speed reading of digital projection x-ray examinations on soft copy; improve diagnostic efficacy; standardize display across examination type, clinical scenario, important key findings, and significant negatives; facilitate image comparison; and improve confidence in and acceptance of soft-copy reading. Clinical chest images are acquired using an Agfa-Gevaert (Mortsel, Belgium) ADC 70 CR scanner and Fuji (Stamford, CT) 9000 and AC2 CR scanners. Those demonstrating pertinent findings are transferred over the clinical picture archiving and communications system (PACS) network to a research image processing station (Agfa PS5000), where the optimal image-processing settings per finding, pathologic category, etc, are developed in conjunction with a thoracic radiologist, by manipulating the multiscale image contrast amplification (Agfa MUSICA) algorithm parameters. Soft-copy display of images processed with finding-specific settings are compared with the standard default image presentation for 50 cases of each category. Comparison is scored using a 5-point scale with the positive scale denoting the standard presentation is preferred over the finding-specific processing, the negative scale denoting the finding-specific processing is preferred over the standard presentation, and zero denoting no difference. Processing settings have been developed for several findings including pneumothorax and lung nodules, and clinical cases are currently being collected in preparation for formal clinical trials. Preliminary results indicate a preference for the optimized-processing presentation of images over the standard default, particularly by inexperienced radiology residents and referring clinicians.

  7. PSIDD3: Post-Scan Ultrasonic Data Display System for the Windows-Based PC Including Fuzzy Logic Analysis

    NASA Technical Reports Server (NTRS)

    Lovelace, Jeffrey J.; Cios, Krzysztof J.; Roth, Don J.; Cao, Wei

    2000-01-01

    Post-Scan Interactive Data Display (PSIDD) III is a user-oriented Windows-based system that facilitates the display and comparison of ultrasonic contact data. The system is optimized to compare ultrasonic measurements made at different locations within a material or at different stages of material degradation. PSIDD III provides complete analysis of the primary wave forms in the time and frequency domains along with the calculation of several frequency dependent properties including Phase Velocity and Attenuation Coefficient and several frequency independent properties, like the Cross Correlation Velocity. The system allows image generation on all of the frequency dependent properties at any available frequency (limited by the bandwidth used in the scans) and on any of the frequency independent properties. From ultrasonic contact scans, areas of interest on an image can be studied with regard to underlying raw waveforms and derived ultrasonic properties by simply selecting the point on the image. The system offers various modes of in-depth comparison between scan points. Up to five scan points can be selected for comparative analysis at once. The system was developed with Borland Delphi software (Visual Pascal) and is based on a SQL database. It is ideal for classification of material properties, or location of microstructure variations in materials.
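
    The frequency-dependent properties named above (phase velocity and attenuation coefficient) are conventionally derived from the spectra of two waveforms. The sketch below shows that textbook spectral-difference estimate; it is not PSIDD III's actual code (which is written in Borland Delphi), and the sampling rate, path length, and synthetic echo are assumptions.

    ```python
    import numpy as np

    def phase_velocity_and_attenuation(w1: np.ndarray, w2: np.ndarray,
                                       fs: float, thickness_m: float):
        """Frequency-dependent phase velocity and attenuation from two echoes.

        w1, w2:      successive back-wall echoes through a plate of the given thickness
        fs:          sampling rate in Hz
        thickness_m: plate thickness, so the extra path between echoes is 2 * thickness
        A textbook spectral-difference estimate, offered only as an illustration.
        """
        n = len(w1)
        f = np.fft.rfftfreq(n, d=1.0 / fs)
        s1, s2 = np.fft.rfft(w1), np.fft.rfft(w2)
        dphi = np.unwrap(np.angle(s2) - np.angle(s1))
        with np.errstate(divide="ignore", invalid="ignore"):
            velocity = 2 * np.pi * f * (2 * thickness_m) / np.abs(dphi)        # m/s
            attenuation = np.log(np.abs(s1) / np.abs(s2)) / (2 * thickness_m)  # Np/m
        return f, velocity, attenuation

    if __name__ == "__main__":
        fs, n = 50e6, 2048
        t = np.arange(n) / fs
        pulse = np.exp(-((t - 5e-6) / 0.3e-6) ** 2) * np.sin(2 * np.pi * 5e6 * t)
        echo = 0.5 * np.roll(pulse, 100)                  # delayed, attenuated copy
        f, v, a = phase_velocity_and_attenuation(pulse, echo, fs, thickness_m=0.01)
        band = (f > 3e6) & (f < 7e6)
        print(v[band].mean(), a[band].mean())             # ~10000 m/s, ~35 Np/m
    ```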

  8. TU-FG-BRB-05: A 3 Dimensional Prompt Gamma Imaging System for Range Verification in Proton Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Draeger, E; Chen, H; Polf, J

    2016-06-15

    Purpose: To report on the initial development of a clinical 3-dimensional (3D) prompt gamma (PG) imaging system for proton radiotherapy range verification. Methods: The new imaging system under development consists of a prototype Compton camera (CC) to measure PG emission during proton beam irradiation and software to reconstruct, display, and analyze 3D images of the PG emission. For initial tests of the system, PGs were measured with a prototype CC during a 200 cGy dose delivery with clinical proton pencil beams (ranging from 100 MeV to 200 MeV) to a water phantom. Measurements were also carried out with the CC placed 15 cm from the phantom for a full-range 150 MeV pencil beam and with its range shifted by 2 mm. Reconstructed images of the PG emission were displayed by the clinical PG imaging software and compared to the dose distributions of the proton beams calculated by a commercial treatment planning system. Results: Measurements made with the new PG imaging system showed that a 3D image could be reconstructed from PGs measured during the delivery of 200 cGy of dose, and that shifts in the Bragg peak range of as little as 2 mm could be detected. Conclusion: Initial tests of a new PG imaging system show its potential to provide 3D imaging and range verification for proton radiotherapy. Based on these results, we have begun work to improve the system with the goal that images can be produced from delivery of as little as 20 cGy, so that the system could be used for in-vivo proton beam range verification on a daily basis.

  9. Apple Image Processing Educator

    NASA Technical Reports Server (NTRS)

    Gunther, F. J.

    1981-01-01

    A software system design is proposed and demonstrated with pilot-project software. The system permits the Apple II microcomputer to be used for personalized computer-assisted instruction in the digital image processing of LANDSAT images. The programs provide data input, menu selection, graphic and hard-copy displays, and both general and detailed instructions. The pilot-project results are considered to be successful indicators of the capabilities and limits of microcomputers for digital image processing education.

  10. Video enhancement of X-ray and neutron radiographs

    NASA Technical Reports Server (NTRS)

    Vary, A.

    1973-01-01

    System was devised for displaying radiographs on television screen and enhancing fine detail in picture. System uses analog-computer circuits to process television signal from low-noise television camera. Enhanced images are displayed in black and white and can be controlled to vary degree of enhancement and magnification of details in either radiographic transparencies or opaque photographs.

  11. A portable high-definition electronic endoscope based on embedded system

    NASA Astrophysics Data System (ADS)

    Xu, Guang; Wang, Liqiang; Xu, Jin

    2012-11-01

    This paper presents a low-power, portable high-definition (HD) electronic endoscope based on a Cortex-A8 embedded system. A 1/6-inch CMOS image sensor is used to acquire HD images with 1280×800 pixels. The camera interface of the A8 is designed to support images of various sizes and multiple video input formats such as the ITU-R BT.601/656 standard. Image rotation (90 degrees clockwise) and image processing functions are achieved by the CAMIF. The decode engine of the processor plays back or records HD video at 30 frames per second, and the built-in HDMI interface transmits high-definition images to an external display. Image processing procedures such as demosaicking, color correction, and auto white balance are realized on the A8 platform. Other functions are selected through OSD settings. An LCD panel displays the real-time images. Snapshot pictures or compressed videos are saved to an SD card or transmitted to a computer through a USB interface. The size of the camera head is 4×4.8×15 mm with a working distance of more than 3 meters. The whole endoscope system can be powered by a lithium battery, with the advantages of small size, low cost, and portability.
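
    Among the image-processing steps listed above, auto white balance is easy to illustrate. The sketch below uses the common gray-world assumption; it illustrates only that one step, not the endoscope's actual firmware pipeline, and the test frame is synthetic.

    ```python
    import numpy as np

    def gray_world_awb(rgb: np.ndarray) -> np.ndarray:
        """Auto white balance under the gray-world assumption.

        Scales each channel so the image's mean colour becomes a neutral gray.
        The endoscope's real pipeline (demosaicking, colour correction, AWB)
        is not published; this only illustrates the AWB step.
        """
        img = rgb.astype(np.float64)
        means = img.reshape(-1, 3).mean(axis=0)
        gains = means.mean() / means
        return np.clip(img * gains, 0, 255).astype(np.uint8)

    if __name__ == "__main__":
        frame = np.random.randint(0, 256, size=(800, 1280, 3), dtype=np.uint8)
        frame[..., 0] = frame[..., 0] // 2            # simulate a colour cast by dimming red
        print(gray_world_awb(frame).reshape(-1, 3).mean(axis=0).round(1))
    ```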

  12. Real-Time Intravascular Ultrasound and Photoacoustic Imaging

    PubMed Central

    VanderLaan, Donald; Karpiouk, Andrei; Yeager, Doug; Emelianov, Stanislav

    2018-01-01

    Combined intravascular ultrasound and photoacoustic imaging (IVUS/IVPA) is an emerging hybrid modality being explored as a means of improving the characterization of atherosclerotic plaque anatomical and compositional features. While initial demonstrations of the technique have been encouraging, they have been limited by catheter rotation and data acquisition, displaying and processing rates on the order of several seconds per frame as well as the use of off-line image processing. Herein, we present a complete IVUS/IVPA imaging system and method capable of real-time IVUS/IVPA imaging, with online data acquisition, image processing and display of both IVUS and IVPA images. The integrated IVUS/IVPA catheter is fully contained within a 1 mm outer diameter torque cable coupled on the proximal end to a custom-designed spindle enabling optical and electrical coupling to system hardware, including a nanosecond-pulsed laser with a controllable pulse repetition frequency capable of greater than 10kHz, motor and servo drive, an ultrasound pulser/receiver, and a 200 MHz digitizer. The system performance is characterized and demonstrated on a vessel-mimicking phantom with an embedded coronary stent intended to provide IVPA contrast within content of an IVUS image. PMID:28092507

  13. Retinal projection type super multi-view head-mounted display

    NASA Astrophysics Data System (ADS)

    Takahashi, Hideya; Ito, Yutaka; Nakata, Seigo; Yamada, Kenji

    2014-02-01

    We propose a retinal projection type super multi-view head-mounted display (HMD). The smooth motion parallax provided by the super multi-view technique enables a precise superposition of virtual 3D images on the real scene. Moreover, if a viewer focuses one's eyes on the displayed 3D image, the stimulus for the accommodation of the human eye is produced naturally. Therefore, although proposed HMD is a monocular HMD, it provides observers with natural 3D images. The proposed HMD consists of an image projection optical system and a holographic optical element (HOE). The HOE is used as a combiner, and also works as a condenser lens to implement the Maxwellian view. Some parallax images are projected onto the HOE, and converged on the pupil, and then projected onto the retina. In order to verify the effectiveness of the proposed HMD, we constructed the prototype HMD. In the prototype HMD, the number of parallax images and the number of convergent points on the pupil is three. The distance between adjacent convergent points is 2 mm. We displayed virtual images at the distance from 20 cm to 200 cm in front of the pupil, and confirmed the accommodation. This paper describes the principle of proposed HMD, and also describes the experimental result.

  14. Satellite Imagery Via Personal Computer

    NASA Technical Reports Server (NTRS)

    1989-01-01

    Automatic Picture Transmission (APT) was incorporated by NASA in the Tiros 8 weather satellite. APT included an advanced satellite camera that immediately transmitted a picture as well as low cost receiving equipment. When an advanced scanning radiometer was later introduced, ground station display equipment would not readily adjust to the new format until GSFC developed an APT Digital Scan Converter that made them compatible. A NASA Technical Note by Goddard's Vermillion and Kamoski described how to build a converter. In 1979, Electro-Services, using this technology, built the first microcomputer weather imaging system in the U.S. The company changed its name to Satellite Data Systems, Inc. and now manufactures the WeatherFax facsimile display graphics system which converts a personal computer into a weather satellite image acquisition and display workstation. Hardware, antennas, receivers, etc. are also offered. Customers include U.S. Weather Service, schools, military, etc.

  15. Thin client performance for remote 3-D image display.

    PubMed

    Lai, Albert; Nieh, Jason; Laine, Andrew; Starren, Justin

    2003-01-01

    Several trends in biomedical computing are converging in a way that will require new approaches to telehealth image display. Image viewing is becoming an "anytime, anywhere" activity. In addition, organizations are beginning to recognize that healthcare providers are highly mobile and that optimal care requires providing information wherever the provider and patient are. Thin-client computing is one way to support image viewing in this complex environment. However, little is known about the behavior of thin-client systems in supporting image transfer over modern heterogeneous networks. Our results show that thin clients can deliver acceptable performance under conditions commonly seen in wireless networks if newer protocols optimized for these conditions are used.

  16. General view of the flight deck of the Orbiter Discovery ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    General view of the flight deck of the Orbiter Discovery looking forward along the approximate center line of the orbiter at the center console. The Multifunction Electronic Display System (MEDS) is evident in the mid-ground center of this image; this system was a major upgrade from the previous analog display system. The commander's station is on the port side, or left, in this view and the pilot's station is on the starboard side, or right, in this view. Note the grab bar in the upper center of the image, which was primarily used for commander and pilot ingress with the orbiter in a vertical position on the launch pad. Also note that the forward observation windows have protective covers over them. This image was taken at Kennedy Space Center. - Space Transportation System, Orbiter Discovery (OV-103), Lyndon B. Johnson Space Center, 2101 NASA Parkway, Houston, Harris County, TX

  17. Proceedings of the Open Sessions of the Workshop on Imaging Trackers and Autonomous Acquisition Applications for Missile Guidance Held at Redstone Arsenal, Alabama on 19-20 November 1979

    DTIC Science & Technology

    1979-11-01

    ...a generalized cooccurrence matrix. Describing image texture is an important problem in the design of image understanding systems. Applications as...display system design optimization and video signal processing. Based on a study by Southern Research Institute, a number of options were identified...Specification for Target Acquisition Designation System (U), RFP # AMC-DP-AAH-H4020, 12 Apr 77. 4. Terminal Homing Applications of Solid State Image...

  18. Impact and utilization studies of a PACS display station in an ICU setting

    NASA Astrophysics Data System (ADS)

    Andriole, Katherine P.; Storto, Maria L.; Gamsu, Gordon; Huang, H. K.

    1996-05-01

    An assessment of changes in health-care professional behavior as a result of the introduction of a PACS (picture archiving and communication system) display station to an adult medical-surgical intensive care unit (ICU) is investigated via pre- and post-PACS evaluations. ICU display station utilization and the impact on clinical operations are also examined. Parameters measured both pre- and post-PACS ICU display station placement include the number of films per patient day, the number of clinician reviews of a patient's images per day, and the percentage of images on which the unit interacts with a radiologist. The elapsed times from the time of exposure to the time of review by the referring physician, radiologist-unit interaction, and clinical action based on image information are also measured. The results of this investigation suggest that the introduction of a PACS display station in the ICU may reduce the number of exams per patient day, decrease the elapsed time from exposure to review by the unit clinician, and improve the time to clinical action. Note, however, that it does not appear to change the percentage of total images on which the unit interacts with a radiologist.

  19. Completion of a Hospital-Wide Comprehensive Image Management and Communication System

    NASA Astrophysics Data System (ADS)

    Mun, Seong K.; Benson, Harold R.; Horii, Steven C.; Elliott, Larry P.; Lo, Shih-Chung B.; Levine, Betty A.; Braudes, Robert E.; Plumlee, Gabriel S.; Garra, Brian S.; Schellinger, Dieter; Majors, Bruce; Goeringer, Fred; Kerlin, Barbara D.; Cerva, John R.; Ingeholm, Mary-Lou; Gore, Tim

    1989-05-01

    A comprehensive image management and communication (IMAC) network has been installed at Georgetown University Hospital for an extensive clinical evaluation. The network is based on the AT&T CommView system and it includes interfaces to 12 imaging devices, 15 workstations (inside and outside of the radiology department), a teleradiology link to an imaging center, an optical jukebox and a number of advanced image display and processing systems such as Sun workstations, PIXAR, and PIXEL. Details of network configuration and its role in the evaluation project are discussed.

  20. Literature concerning control and display technology applicable to the Orbital Maneuvering Vehicle (OMV)

    NASA Technical Reports Server (NTRS)

    1990-01-01

    A review is presented of the literature concerning control and display technology that is applicable to the Orbital Maneuvering Vehicle (OMV), a system being developed by NASA that will enable the user to remotely pilot it during a mission in space. In addition to the general review, special consideration is given to virtual image displays and their potential for use in the system, and a preliminary partial task analysis of the user's functions is also presented.

  1. Grayscale/resolution trade-off for text: Model predictions and psychophysical results for letter confusion and letter discrimination

    NASA Technical Reports Server (NTRS)

    Gille, Jennifer; Martin, Russel; Lubin, Jeffrey; Larimer, James

    1995-01-01

    In a series of papers presented in 1994, we examined the grayscale/resolution trade-off for natural images displayed on devices with discrete pixellation, such as AMLCDs. In the present paper we extend this study by examining the grayscale/resolution trade-off for text images on discrete-pixel displays. Halftoning in printing is an example of the grayscale/resolution trade-off. In printing, spatial resolution is sacrificed to produce grayscale. Another example of this trade-off is the inherent low-pass spatial filter of a CRT, caused by the point-spread function of the electron beam in the phosphor layer. On a CRT, sharp image edges are blurred by this inherent low-pass filtering, and the block noise created by spatial quantization is greatly reduced. A third example of this trade-off is text anti-aliasing, where grayscale is used to improve letter shape, size and location when rendered at a low spatial resolution. There are additional implications for display system design from the grayscale/resolution trade-off. For example, reduced grayscale can reduce system costs by requiring less complexity in the framestore, allowing the use of lower cost drivers, potentially increasing data transfer rates in the image subsystem, and simplifying the manufacturing processes that are used to construct the active matrix for AMLCD (active-matrix liquid-crystal display) or AMTFEL (active-matrix thin-film electroluminescent) devices. Therefore, the study of these trade-offs is important for display designers and manufacturing and systems engineers who wish to create the highest performance, lowest cost device possible. Our strategy for investigating this trade-off is to generate a set of simple test images, manipulate grayscale and resolution, predict discrimination performance using the ViDEOS (Sarnoff) Human Vision Model, conduct an empirical study of discrimination using psychophysical procedures, and verify the computational results using the psychophysical results.
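
    Halftoning, cited above as the classic grayscale/resolution trade-off, is easy to make concrete. The sketch below is a minimal ordered-dither example using a 4x4 Bayer matrix; it illustrates the general trade-off only and is not part of the study's stimuli or the ViDEOS model.

    ```python
    import numpy as np

    # 4x4 Bayer threshold matrix: 16 spatial positions stand in for 16 gray levels.
    BAYER4 = np.array([[ 0,  8,  2, 10],
                       [12,  4, 14,  6],
                       [ 3, 11,  1,  9],
                       [15,  7, 13,  5]]) / 16.0

    def ordered_dither(gray: np.ndarray) -> np.ndarray:
        """Reduce an 8-bit grayscale image to 1 bit/pixel by ordered dithering.

        Spatial resolution is traded for apparent grayscale: the local density
        of 'on' pixels approximates the original intensity.
        """
        h, w = gray.shape
        thresh = np.tile(BAYER4, (h // 4 + 1, w // 4 + 1))[:h, :w]
        return (gray / 255.0 > thresh).astype(np.uint8)

    if __name__ == "__main__":
        ramp = np.tile(np.linspace(0, 255, 64, dtype=np.uint8), (16, 1))
        print(ordered_dither(ramp).mean(axis=0)[::8].round(2))   # density rises left to right
    ```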

  2. A Workstation for Interactive Display and Quantitative Analysis of 3-D and 4-D Biomedical Images

    PubMed Central

    Robb, R.A.; Heffeman, P.B.; Camp, J.J.; Hanson, D.P.

    1986-01-01

    The capability to extract objective and quantitatively accurate information from 3-D radiographic biomedical images has not kept pace with the capabilities to produce the images themselves. This is rather an ironic paradox, since on the one hand the new 3-D and 4-D imaging capabilities promise significant potential for providing greater specificity and sensitivity (i.e., precise objective discrimination and accurate quantitative measurement of body tissue characteristics and function) in clinical diagnostic and basic investigative imaging procedures than ever possible before, but on the other hand, the momentous advances in computer and associated electronic imaging technology which have made these 3-D imaging capabilities possible have not been concomitantly developed for full exploitation of these capabilities. Therefore, we have developed a powerful new microcomputer-based system which permits detailed investigations and evaluation of 3-D and 4-D (dynamic 3-D) biomedical images. The system comprises a special workstation to which all the information in a large 3-D image data base is accessible for rapid display, manipulation, and measurement. The system provides important capabilities for simultaneously representing and analyzing both structural and functional data and their relationships in various organs of the body. This paper provides a detailed description of this system, as well as some of the rationale, background, theoretical concepts, and practical considerations related to system implementation.

  3. Validation of a new digital breast tomosynthesis medical display

    NASA Astrophysics Data System (ADS)

    Marchessoux, Cédric; Vivien, Nicolas; Kumcu, Asli; Kimpe, Tom

    2011-03-01

    The main objective of this study is to evaluate and validate the new Barco medical display MDMG-5221, which has been optimized for the Digital Breast Tomosynthesis (DBT) imaging modality, and to prove the benefit of the new DBT display in terms of image quality and clinical performance. The clinical performance is evaluated by the detection of micro-calcifications inserted in reconstructed Digital Breast Tomosynthesis slices. The slices are shown in dynamic cine loops at two frame rates. The statistical analysis chosen for this study is the multiple-reader, multiple-case Receiver Operating Characteristic methodology, in order to measure the clinical performance of the two displays. Four experienced radiologists were involved in this study. For this clinical study, 50 normal and 50 abnormal independent datasets were used. The result is that the new display outperforms the mammography display for a signal detection task using real DBT images viewed at 25 and 50 slices per second. In the case of 50 slices per second, the p-value = 0.0664; for a cut-off of alpha = 0.05, the conclusion is that the null hypothesis cannot be rejected, but the trend is that the new display performs 6% better than the old display in terms of AUC. At 25 slices per second, the difference between the two displays is very apparent: the new display outperforms the mammography display by 10% in terms of AUC, with good statistical significance (p = 0.0415).

  4. Software components for medical image visualization and surgical planning

    NASA Astrophysics Data System (ADS)

    Starreveld, Yves P.; Gobbi, David G.; Finnis, Kirk; Peters, Terence M.

    2001-05-01

    Purpose: The development of new applications in medical image visualization and surgical planning requires the completion of many common tasks such as image reading and re-sampling, segmentation, volume rendering, and surface display. Intra-operative use requires an interface to a tracking system and image registration, and the application requires basic, easy-to-understand user interface components. Rapid changes in computer and end-application hardware, as well as in operating systems and network environments, make it desirable to have a hardware- and operating-system-independent collection of reusable software components that can be assembled rapidly to prototype new applications. Methods: Using the OpenGL-based Visualization Toolkit as a base, we have developed a set of components that implement the above-mentioned tasks. The components are written in both C++ and Python, but all are accessible from Python, a byte-compiled scripting language. The components have been used on the Red Hat Linux, Silicon Graphics Iris, Microsoft Windows, and Apple OS X platforms. Rigorous object-oriented software design methods have been applied to ensure hardware independence and a standard application programming interface (API). There are components to acquire, display, and register images from MRI, MRA, CT, Computed Rotational Angiography (CRA), Digital Subtraction Angiography (DSA), 2D and 3D ultrasound, video, and physiological recordings. Interfaces to various tracking systems for intra-operative use have also been implemented. Results: The described components have been implemented and tested. To date they have been used to create image manipulation and viewing tools, a deep brain functional atlas, a 3D ultrasound acquisition and display platform, a prototype minimally invasive robotic coronary artery bypass graft planning system, a tracked neuro-endoscope guidance system, and a frame-based stereotaxy neurosurgery planning tool. The frame-based stereotaxy module has been licensed and certified for use in a commercial image guidance system. Conclusions: It is feasible to encapsulate image manipulation and surgical guidance tasks in individual, reusable software modules. These modules allow for faster development of new applications. The strict application of object-oriented software design methods allows individual components of such a system to make the transition from the research environment to a commercial one.
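
    In the spirit of the VTK-based components described above, the minimal Python sketch below reads a volume and displays one slice. vtkMetaImageReader and vtkImageViewer2 are standard VTK classes, but the file name is a placeholder, and a real component would sit behind the hardware-independent API the abstract describes rather than being called directly like this.

    ```python
    import vtk

    def view_volume_slice(filename: str) -> None:
        """Read a MetaImage (.mhd) volume and browse it slice-by-slice with VTK."""
        reader = vtk.vtkMetaImageReader()
        reader.SetFileName(filename)
        reader.Update()                               # load the volume so slice limits are known

        viewer = vtk.vtkImageViewer2()
        viewer.SetInputConnection(reader.GetOutputPort())

        interactor = vtk.vtkRenderWindowInteractor()
        viewer.SetupInteractor(interactor)
        viewer.SetSlice(viewer.GetSliceMax() // 2)    # start at the middle slice
        viewer.Render()
        interactor.Start()

    if __name__ == "__main__":
        view_volume_slice("patient_ct.mhd")           # placeholder path
    ```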

  5. Visualization of hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Hogervorst, Maarten A.; Bijl, Piet; Toet, Alexander

    2007-04-01

    We developed four new techniques to visualize hyperspectral image data for man-in-the-loop target detection. The methods, respectively, (1) display the subsequent bands as a movie ("movie"), (2) map the data onto three channels and display these as a colour image ("colour"), (3) display the correlation between the pixel signatures and a known target signature ("match"), and (4) display the output of a standard anomaly detector ("anomaly"). The movie technique requires no assumptions about the target signature and involves no information loss. The colour technique produces a single image that can be displayed in real time. A disadvantage of this technique is loss of information. A display of the match between a target signature and the pixel signatures can be interpreted easily and quickly, but this technique relies on precise knowledge of the target signature. The anomaly detector signifies pixels with signatures that deviate from the (local) background. We performed a target detection experiment with human observers to determine their relative performance with the four techniques. The results show that the "match" presentation yields the best performance, followed by "movie" and "anomaly", while performance with the "colour" presentation was the poorest. Each scheme has its advantages and disadvantages and is more or less suited for real-time and post-hoc processing. The rationale is that the final interpretation is best done by a human observer. In contrast to automatic target recognition systems, the interpretation of hyperspectral imagery by the human visual system is robust to noise and image transformations and requires a minimal number of assumptions (about the signatures of target and background, target shape, etc.). When more knowledge about target and background is available, this may be used to help the observer interpret the data (aided target detection).
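
    Of the four techniques, the "match" display is the most directly algorithmic. The NumPy sketch below shows one plausible implementation, the normalized correlation between each pixel spectrum and a known target signature; the data-cube layout (rows, cols, bands) is an assumption, and the paper's exact correlation measure may differ.

        import numpy as np

        def match_image(cube, target_signature):
            """Correlation between each pixel spectrum and a target signature.

            cube:  hyperspectral data, shape (rows, cols, bands)
            target_signature: reference spectrum, shape (bands,)
            Returns a single-band image in [-1, 1] suitable for greyscale display.
            """
            pixels = cube.reshape(-1, cube.shape[-1]).astype(float)
            pixels -= pixels.mean(axis=1, keepdims=True)          # centre each spectrum
            target = target_signature.astype(float) - target_signature.mean()
            denom = np.linalg.norm(pixels, axis=1) * np.linalg.norm(target) + 1e-12
            corr = pixels @ target / denom
            return corr.reshape(cube.shape[:2])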

  6. Clinical application of a modern high-definition head-mounted display in sonography.

    PubMed

    Takeshita, Hideki; Kihara, Kazunori; Yoshida, Soichiro; Higuchi, Saori; Ito, Masaya; Nakanishi, Yasukazu; Kijima, Toshiki; Ishioka, Junichiro; Matsuoka, Yoh; Numao, Noboru; Saito, Kazutaka; Fujii, Yasuhisa

    2014-08-01

    Because of the remarkably improved image quality and wearability of modern head-mounted displays, a monitoring system using a head-mounted display rather than a fixed-site monitor for sonographic scanning has the potential to improve the diagnostic performance and lessen the examiner's physical burden during a sonographic examination. In a preclinical setting, 2 head-mounted displays, the HMZ-T2 (Sony Corporation, Tokyo, Japan) and the Wrap1200 (Vuzix Corporation, Rochester, NY), were found to be applicable to sonography. In a clinical setting, the feasibility of the HMZ-T2 was shown by its good image quality and acceptable wearability. This modern device is appropriate for clinical use in sonography. © 2014 by the American Institute of Ultrasound in Medicine.

  7. Blinded evaluation of the effects of high definition and magnification on perceived image quality in laryngeal imaging.

    PubMed

    Otto, Kristen J; Hapner, Edie R; Baker, Michael; Johns, Michael M

    2006-02-01

    Advances in commercial video technology have improved office-based laryngeal imaging. This study investigates the perceived image quality of a true high-definition (HD) video camera and the effect of magnification on laryngeal videostroboscopy. We performed a prospective, dual-armed, single-blinded analysis of a standard laryngeal videostroboscopic examination comparing 3 separate add-on camera systems: a 1-chip charge-coupled device (CCD) camera, a 3-chip CCD camera, and a true 720p (progressive scan) HD camera. Displayed images were controlled for magnification and image size (20-inch [50-cm] display, red-green-blue, and S-video cable for 1-chip and 3-chip cameras; digital visual interface cable and HD monitor for HD camera). Ten blinded observers were then asked to rate the following 5 items on a 0-to-100 visual analog scale: resolution, color, ability to see vocal fold vibration, sense of depth perception, and clarity of blood vessels. Eight unblinded observers were then asked to rate the difference in perceived resolution and clarity of laryngeal examination images when displayed on a 10-inch (25-cm) monitor versus a 42-inch (105-cm) monitor. A visual analog scale was used. These monitors were controlled for actual resolution capacity. For each item evaluated, randomized block design analysis demonstrated that the 3-chip camera scored significantly better than the 1-chip camera (p < .05). For the categories of color and blood vessel discrimination, the 3-chip camera scored significantly better than the HD camera (p < .05). For magnification alone, observers rated the 42-inch monitor statistically better than the 10-inch monitor. The expense of new medical technology must be judged against its added value. This study suggests that HD laryngeal imaging may not add significant value over currently available video systems, in perceived image quality, when a small monitor is used. Although differences in clarity between standard and HD cameras may not be readily apparent on small displays, a large display size coupled with HD technology may impart improved diagnosis of subtle vocal fold lesions and vibratory anomalies.

  8. 21 CFR 892.1560 - Ultrasonic pulsed echo imaging system.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 21 Food and Drugs 8 2014-04-01 2014-04-01 false Ultrasonic pulsed echo imaging system. 892.1560 Section 892.1560 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES... receiver. This generic type of device may include signal analysis and display equipment, patient and...

  9. 21 CFR 892.1560 - Ultrasonic pulsed echo imaging system.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 21 Food and Drugs 8 2011-04-01 2011-04-01 false Ultrasonic pulsed echo imaging system. 892.1560 Section 892.1560 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES... receiver. This generic type of device may include signal analysis and display equipment, patient and...

  10. 21 CFR 892.1650 - Image-intensified fluoroscopic x-ray system.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 21 Food and Drugs 8 2011-04-01 2011-04-01 false Image-intensified fluoroscopic x-ray system. 892.1650 Section 892.1650 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN... through electronic amplification. This generic type of device may include signal analysis and display...

  11. 21 CFR 892.1650 - Image-intensified fluoroscopic x-ray system.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 21 Food and Drugs 8 2014-04-01 2014-04-01 false Image-intensified fluoroscopic x-ray system. 892.1650 Section 892.1650 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN... through electronic amplification. This generic type of device may include signal analysis and display...

  12. 21 CFR 892.1560 - Ultrasonic pulsed echo imaging system.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Ultrasonic pulsed echo imaging system. 892.1560 Section 892.1560 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES... receiver. This generic type of device may include signal analysis and display equipment, patient and...

  13. 21 CFR 892.1550 - Ultrasonic pulsed doppler imaging system.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 21 Food and Drugs 8 2014-04-01 2014-04-01 false Ultrasonic pulsed doppler imaging system. 892.1550 Section 892.1550 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES... include signal analysis and display equipment, patient and equipment supports, component parts, and...

  14. 21 CFR 892.1550 - Ultrasonic pulsed doppler imaging system.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 21 Food and Drugs 8 2011-04-01 2011-04-01 false Ultrasonic pulsed doppler imaging system. 892.1550 Section 892.1550 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES... include signal analysis and display equipment, patient and equipment supports, component parts, and...

  15. 21 CFR 892.1650 - Image-intensified fluoroscopic x-ray system.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Image-intensified fluoroscopic x-ray system. 892.1650 Section 892.1650 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN... through electronic amplification. This generic type of device may include signal analysis and display...

  16. A Work Station For Control Of Changing Systems

    NASA Technical Reports Server (NTRS)

    Mandl, Daniel J.

    1988-01-01

    Touch screen and microcomputer enable flexible control of complicated systems. Computer work station equipped to produce graphical displays used as command panel and status indicator for command-and-control system. Operator uses images of control buttons displayed on touch screen to send prestored commands. Use of prestored library of commands reduces incidence of errors. If necessary, operator uses conventional keyboard to enter commands in real time to handle unforeseeable situations.

  17. The development of machine technology processing for earth resource survey

    NASA Technical Reports Server (NTRS)

    Landgrebe, D. A.

    1970-01-01

    The following technologies are considered for automatic processing of earth resources data: (1) registration of multispectral and multitemporal images, (2) digital image display systems, (3) data system parameter effects on satellite remote sensing systems, and (4) data compression techniques based on spectral redundancy. The importance of proper spectral band and compression algorithm selections is pointed out.

  18. Space shuttle visual simulation system design study

    NASA Technical Reports Server (NTRS)

    1973-01-01

    The current and near-future state-of-the-art in visual simulation equipment technology is related to the requirements of the space shuttle visual system. Image source, image sensing, and displays are analyzed on a subsystem basis, and the principal conclusions are used in the formulation of a recommended baseline visual system. Perceptibility and visibility are also analyzed.

  19. 75 FR 47176 - Special Conditions: Dassault Aviation Model Falcon 7X; Enhanced Flight Visibility System (EFVS)

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-08-05

    ...), imaging sensor(s), and avionics interfaces that display the sensor imagery on the HUD and overlay it with... that display the sensor imagery, with or without other flight information, on a head-down display. To... infrared sensors can be much different from that detected by natural pilot vision. On a dark night, thermal...

  20. Development of a data mining and imaging informatics display tool for a multiple sclerosis e-folder system

    NASA Astrophysics Data System (ADS)

    Liu, Margaret; Loo, Jerry; Ma, Kevin; Liu, Brent

    2011-03-01

    Multiple sclerosis (MS) is a debilitating autoimmune disease of the central nervous system that damages axonal pathways through inflammation and demyelination. In order to address the need for a centralized application to manage and study MS patients, the MS e-Folder - a web-based, disease-specific electronic medical record system - was developed. The e-Folder has a PHP- and MySQL-based graphical user interface (GUI) that can serve as both a tool for clinician decision support and a data mining tool for researchers. This web-based GUI gives the e-Folder a user-friendly interface that can be securely accessed through the internet and requires minimal software installation on the client side. The e-Folder GUI displays and queries patient medical records--including demographic data, social history, past medical history, and past MS history. In addition, DICOM-format imaging data and computer-aided detection (CAD) results from a lesion load algorithm are also displayed. The GUI is dynamic and allows manipulation of the DICOM images, such as zoom, pan, and scrolling, and the ability to rotate 3D images. Given the complexity of clinical management and the need to bolster research in MS, the MS e-Folder system will improve patient care and provide MS researchers with a function-rich patient data hub.

  1. The Electronic View Box: a software tool for radiation therapy treatment verification.

    PubMed

    Bosch, W R; Low, D A; Gerber, R L; Michalski, J M; Graham, M V; Perez, C A; Harms, W B; Purdy, J A

    1995-01-01

    We have developed a software tool for interactively verifying treatment plan implementation. The Electronic View Box (EVB) tool copies the paradigm of current practice but does so electronically. A portal image (online portal image or digitized port film) is displayed side by side with a prescription image (digitized simulator film or digitally reconstructed radiograph). The user can measure distances between features in prescription and portal images and "write" on the display, either to approve the image or to indicate required corrective actions. The EVB tool also provides several features not available in conventional verification practice using a light box. The EVB tool has been written in ANSI C using the X window system. The tool makes use of the Virtual Machine Platform and Foundation Library specifications of the NCI-sponsored Radiation Therapy Planning Tools Collaborative Working Group for portability into an arbitrary treatment planning system that conforms to these specifications. The present EVB tool is based on an earlier Verification Image Review tool, but with a substantial redesign of the user interface. A graphical user interface prototyping system was used in iteratively refining the tool layout to allow rapid modifications of the interface in response to user comments. Features of the EVB tool include 1) hierarchical selection of digital portal images based on physician name, patient name, and field identifier; 2) side-by-side presentation of prescription and portal images at equal magnification and orientation, and with independent grayscale controls; 3) "trace" facility for outlining anatomical structures; 4) "ruler" facility for measuring distances; 5) zoomed display of corresponding regions in both images; 6) image contrast enhancement; and 7) communication of portal image evaluation results (approval, block modification, repeat image acquisition, etc.). The EVB tool facilitates the rapid comparison of prescription and portal images and permits electronic communication of corrections in port shape and positioning.

  2. AOIPS - An interactive image processing system. [Atmospheric and Oceanic Information Processing System

    NASA Technical Reports Server (NTRS)

    Bracken, P. A.; Dalton, J. T.; Quann, J. J.; Billingsley, J. B.

    1978-01-01

    The Atmospheric and Oceanographic Information Processing System (AOIPS) was developed to help applications investigators perform required interactive image data analysis rapidly and to eliminate the inefficiencies and problems associated with batch operation. This paper describes the configuration and processing capabilities of AOIPS and presents unique subsystems for displaying, analyzing, storing, and manipulating digital image data. Applications of AOIPS to research investigations in meteorology and earth resources are featured.

  3. Micromachined mirrors for raster-scanning displays and optical fiber switches

    NASA Astrophysics Data System (ADS)

    Hagelin, Paul Merritt

    Micromachines and micro-optics have the potential to shrink the size and cost of free-space optical systems, enabling a new generation of high-performance, compact projection displays and telecommunications equipment. In raster-scanning displays and optical fiber switches, a free-space optical beam can interact with multiple tilt-up micromirrors fabricated on a single substrate. The size, rotation angle, and flatness of the mirror surfaces determine the number of pixels in a raster display or ports in an optical switch. Single-chip and two-chip optical raster display systems demonstrate static mirror curvature correction, an integrated electronic driver board, and dynamic micromirror performance. Correction for curvature caused by a stress gradient in the micromirror leads to resolution of 102 by 119 pixels in the single-chip display. The optical design of the two-chip display features in-situ mirror curvature measurement and adjustable image magnification with a single output lens. An electronic driver board synchronizes modulation of the optical source with micromirror actuation for the display of images. Dynamic off-axis mirror motion is shown to have minimal influence on resolution. The confocal switch, a free-space optical fiber cross-connect, incorporates micromirrors having a design similar to the image-refresh scanner. Two micromirror arrays redirect optical beams from an input fiber array to the output fibers. The switch architecture supports simultaneous switching of multiple wavelength channels. A 2x2 switch configuration, using single-mode optical fiber at 1550 nm, is demonstrated with insertion loss of -4.2 dB and cross-talk of -50.5 dB. The micromirrors have sufficient size and angular range for scaling to a 32x32 cross-connect switch that has low insertion-loss and low cross-talk.

  4. Dual-dimensional microscopy: real-time in vivo three-dimensional observation method using high-resolution light-field microscopy and light-field display.

    PubMed

    Kim, Jonghyun; Moon, Seokil; Jeong, Youngmo; Jang, Changwon; Kim, Youngmin; Lee, Byoungho

    2018-06-01

    Here, we present dual-dimensional microscopy that captures both two-dimensional (2-D) and light-field images of an in-vivo sample simultaneously, synthesizes an upsampled light-field image in real time, and visualizes it with a computational light-field display system in real time. Compared with conventional light-field microscopy, the additional 2-D image greatly enhances the lateral resolution at the native object plane up to the diffraction limit and compensates for the image degradation at the native object plane. The whole process from capturing to displaying is done in real time with the parallel computation algorithm, which enables the observation of the sample's three-dimensional (3-D) movement and direct interaction with the in-vivo sample. We demonstrate a real-time 3-D interactive experiment with Caenorhabditis elegans. (2018) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).

  5. Overview of the land analysis system (LAS)

    USGS Publications Warehouse

    Quirk, Bruce K.; Olseson, Lyndon R.

    1987-01-01

    The Land Analysis System (LAS) is a fully integrated digital analysis system designed to support remote sensing, image processing, and geographic information systems research. LAS is being developed through a cooperative effort between the National Aeronautics and Space Administration Goddard Space Flight Center and the U.S. Geological Survey Earth Resources Observation Systems (EROS) Data Center. LAS has over 275 analysis modules capable of performing input and output, radiometric correction, geometric registration, signal processing, logical operations, data transformation, classification, spatial analysis, nominal filtering, conversion between raster and vector data types, and display manipulation of image and ancillary data. LAS is currently implemented using the Transportable Applications Executive (TAE). While TAE was designed primarily to be transportable, it still provides the necessary components for a standard user interface, terminal handling, input and output services, display management, and intersystem communications. With TAE the analyst uses the same interface to the processing modules regardless of the host computer or operating system. LAS was originally implemented at EROS on a Digital Equipment Corporation computer system under the Virtual Memory System (VMS) operating system with DeAnza displays and is presently being converted to run on a Gould Power Node and Sun workstation under the Berkeley Software Distribution UNIX operating system.

  6. Real-time MRI guidance of cardiac interventions.

    PubMed

    Campbell-Washburn, Adrienne E; Tavallaei, Mohammad A; Pop, Mihaela; Grant, Elena K; Chubb, Henry; Rhode, Kawal; Wright, Graham A

    2017-10-01

    Cardiac magnetic resonance imaging (MRI) is appealing to guide complex cardiac procedures because it is ionizing radiation-free and offers flexible soft-tissue contrast. Interventional cardiac MR promises to improve existing procedures and enable new ones for complex arrhythmias, as well as congenital and structural heart disease. Guiding invasive procedures demands faster image acquisition, reconstruction and analysis, as well as intuitive intraprocedural display of imaging data. Standard cardiac MR techniques such as 3D anatomical imaging, cardiac function and flow, parameter mapping, and late-gadolinium enhancement can be used to gather valuable clinical data at various procedural stages. Rapid intraprocedural image analysis can extract and highlight critical information about interventional targets and outcomes. In some cases, real-time interactive imaging is used to provide a continuous stream of images displayed to interventionalists for dynamic device navigation. Alternatively, devices are navigated relative to a roadmap of major cardiac structures generated through fast segmentation and registration. Interventional devices can be visualized and tracked throughout a procedure with specialized imaging methods. In a clinical setting, advanced imaging must be integrated with other clinical tools and patient data. In order to perform these complex procedures, interventional cardiac MR relies on customized equipment, such as interactive imaging environments, in-room image display, audio communication, hemodynamic monitoring and recording systems, and electroanatomical mapping and ablation systems. Operating in this sophisticated environment requires coordination and planning. This review provides an overview of the imaging technology used in MRI-guided cardiac interventions. Specifically, this review outlines clinical targets, standard image acquisition and analysis tools, and the integration of these tools into clinical workflow. 1 Technical Efficacy: Stage 5 J. Magn. Reson. Imaging 2017;46:935-950. © 2017 International Society for Magnetic Resonance in Medicine.

  7. Three-Dimensional Large Screen Display Using Polymer-Dispersed Liquid-Crystal Light Valves and a Schlieren Optical System: Proposal and Basic Experiments

    NASA Astrophysics Data System (ADS)

    Takizawa, Kuniharu

    A novel three-dimensional (3-D) projection display used with polarized eyeglasses is proposed. It consists of polymer-dispersed liquid-crystal light valves that modulate the illuminated light based on light scattering, a polarization beam splitter, and a Schlieren projection system. The features of the proposed display include 3-D image display with a single projector, and half the size and half the power consumption compared with a conventional 3-D projector with polarized glasses. Measured electro-optic characteristics of a polymer-dispersed liquid-crystal cell inserted between crossed polarizers suggest that the proposed display achieves small cross talk and a high extinction ratio.

  8. [Newly developed monitor for IVR: liquid crystal display (LCD) replaced with cathode ray tube (CRT)].

    PubMed

    Ichida, Takao; Hosogai, Minoru; Yokoyama, Kouji; Ogawa, Takayoshi; Okusako, Kenji; Shougaki, Masachika; Masai, Hironao; Yamada, Eiji; Okuyama, Kazuo; Hatagawa, Masakatsu

    2004-09-01

    For physicians who monitor images during interventional radiology (IVR), we have built and have been using a system that employs a liquid crystal display (LCD) instead of the conventional cathode ray tube (CRT). The system incorporates a ceiling-suspension-type monitor (three-display monitor) with an LCD on each of the three displays for the head and abdominal regions and another ceiling-suspension-type monitor (five-display monitor) with an LCD on each display for the cardiac region. As these monitors are made to be thin and light in weight, they can be placed in a high position in the room, thereby saving space and allowing for more effective use of space in the X-ray room. The system has also improved the efficiency of operators in the IVR room. The three-display folding mechanism allows the displays to be viewed from multiple directions, thereby improving the environment so that the performance of IVR can be observed.

  9. Space Images for NASA JPL Android Version

    NASA Technical Reports Server (NTRS)

    Nelson, Jon D.; Gutheinz, Sandy C.; Strom, Joshua R.; Arca, Jeremy M.; Perez, Martin; Boggs, Karen; Stanboli, Alice

    2013-01-01

    This software addresses the demand for easily accessible NASA JPL images and videos by providing a user-friendly and simple graphical user interface that can be run via the Android platform from any location where an Internet connection is available. This app is complementary to the iPhone version of the application. A backend infrastructure stores, tracks, and retrieves space images from the JPL Photojournal and Institutional Communications Web server, and catalogs the information into a streamlined rating infrastructure. This system consists of four distinguishing components: image repository, database, server-side logic, and Android mobile application. The image repository contains images from various JPL flight projects. The database stores the image information as well as the user rating. The server-side logic retrieves the image information from the database and categorizes each image for display. The Android mobile application is an interfacing delivery system that retrieves the image information from the server for each Android mobile device user. Also created is a reporting and tracking system for charting and monitoring usage. Unlike other Android mobile image applications, this system uses the latest emerging technologies to produce image listings based directly on user input. This allows for countless combinations of images returned. The backend infrastructure uses industry-standard coding and database methods, enabling future software improvement and technology updates. The flexibility of the system design framework permits multiple levels of display possibilities and provides integration capabilities. Unique features of the software include image/video retrieval from a selected set of categories, image Web links that can be shared among e-mail users, sharing to Facebook/Twitter, marking as user's favorites, and image metadata searchable for instant results.
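
    A minimal sketch of the server-side step described above, grouping stored image records and ratings by category for the mobile client, is given below. The SQLite schema, table, and column names are hypothetical illustrations, not the actual JPL backend.

        import sqlite3
        from collections import defaultdict

        def image_listing(db_path):
            """Return image records grouped by category, ordered by average user rating."""
            conn = sqlite3.connect(db_path)
            rows = conn.execute(
                "SELECT id, title, category, url, avg_rating "
                "FROM images ORDER BY avg_rating DESC"          # hypothetical schema
            )
            listing = defaultdict(list)
            for image_id, title, category, url, rating in rows:
                listing[category].append(
                    {"id": image_id, "title": title, "url": url, "rating": rating}
                )
            conn.close()
            return dict(listing)   # e.g. serialized to JSON for the Android app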

  10. Large size three-dimensional video by electronic holography using multiple spatial light modulators

    PubMed Central

    Sasaki, Hisayuki; Yamamoto, Kenji; Wakunami, Koki; Ichihashi, Yasuyuki; Oi, Ryutaro; Senoh, Takanori

    2014-01-01

    In this paper, we propose a new method of using multiple spatial light modulators (SLMs) to increase the size of three-dimensional (3D) images that are displayed using electronic holography. The scalability of images produced by the previous method had an upper limit that was derived from the path length of the image-readout part. We were able to produce larger colour electronic holographic images with a newly devised space-saving image-readout optical system for multiple reflection-type SLMs. This optical system is designed so that the path length of the image-readout part is half that of the previous method. It consists of polarization beam splitters (PBSs), half-wave plates (HWPs), and polarizers. We used 16 (4 × 4) 4K×2K-pixel SLMs for displaying holograms. The experimental device we constructed was able to perform 20 fps video reproduction in colour of full-parallax holographic 3D images with a diagonal image size of 85 mm and a horizontal viewing-zone angle of 5.6 degrees. PMID:25146685

  11. Large size three-dimensional video by electronic holography using multiple spatial light modulators.

    PubMed

    Sasaki, Hisayuki; Yamamoto, Kenji; Wakunami, Koki; Ichihashi, Yasuyuki; Oi, Ryutaro; Senoh, Takanori

    2014-08-22

    In this paper, we propose a new method of using multiple spatial light modulators (SLMs) to increase the size of three-dimensional (3D) images that are displayed using electronic holography. The scalability of images produced by the previous method had an upper limit that was derived from the path length of the image-readout part. We were able to produce larger colour electronic holographic images with a newly devised space-saving image-readout optical system for multiple reflection-type SLMs. This optical system is designed so that the path length of the image-readout part is half that of the previous method. It consists of polarization beam splitters (PBSs), half-wave plates (HWPs), and polarizers. We used 16 (4 × 4) 4K×2K-pixel SLMs for displaying holograms. The experimental device we constructed was able to perform 20 fps video reproduction in colour of full-parallax holographic 3D images with a diagonal image size of 85 mm and a horizontal viewing-zone angle of 5.6 degrees.

  12. Real-Time Imaging System for the OpenPET

    NASA Astrophysics Data System (ADS)

    Tashima, Hideaki; Yoshida, Eiji; Kinouchi, Shoko; Nishikido, Fumihiko; Inadama, Naoko; Murayama, Hideo; Suga, Mikio; Haneishi, Hideaki; Yamaya, Taiga

    2012-02-01

    The OpenPET and its real-time imaging capability have great potential for real-time tumor tracking in medical procedures such as biopsy and radiation therapy. For the real-time imaging system, we intend to use the one-pass list-mode dynamic row-action maximum likelihood algorithm (DRAMA) and implement it using general-purpose computing on graphics processing units (GPGPU) techniques. However, it is difficult to make consistent reconstructions in real time because the amount of list-mode data acquired in PET scans may be large depending on the level of radioactivity, and the reconstruction speed depends on the amount of the list-mode data. In this study, we developed a system to control the data used in the reconstruction step while retaining quantitative performance. In the proposed system, the data transfer control system limits the event counts to be used in the reconstruction step according to the reconstruction speed, and the reconstructed images are properly intensified by using the ratio of the used counts to the total counts. We implemented the system on a small OpenPET prototype system and evaluated the performance in terms of the real-time tracking ability by displaying reconstructed images in which the intensity was compensated. The intensity of the displayed images correlated properly with the original count rate, and a frame rate of 2 frames per second was achieved with an average delay time of 2.1 s.
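
    The count-control and intensity-compensation idea can be summarized in a few lines. The sketch below is illustrative only, assuming a caller-supplied reconstruction routine (for example one-pass list-mode DRAMA) and a per-frame event budget; it is not the OpenPET implementation.

        def reconstruct_frame(events, max_events, reconstruct):
            """events: list-mode data for one display frame.
            reconstruct: caller-supplied routine (e.g. one-pass list-mode DRAMA).

            Only `max_events` events are fed to the reconstruction so the frame
            finishes within the display period; the image is then rescaled by
            total/used so its intensity still tracks the true count rate."""
            total = len(events)
            used = min(total, max_events)
            image = reconstruct(events[:used])
            return image * (total / used) if used else image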

  13. Vertical viewing angle enhancement for the 360  degree integral-floating display using an anamorphic optic system.

    PubMed

    Erdenebat, Munkh-Uchral; Kwon, Ki-Chul; Yoo, Kwan-Hee; Baasantseren, Ganbat; Park, Jae-Hyeung; Kim, Eun-Soo; Kim, Nam

    2014-04-15

    We propose a 360 degree integral-floating display with an enhanced vertical viewing angle. The system projects two-dimensional elemental image arrays via a high-speed digital micromirror device projector and reconstructs them into 3D perspectives with a lens array. Double floating lenses relay the initial 3D perspectives to the center of a vertically curved convex mirror. The anamorphic optic system tailors the initial 3D perspectives horizontally and vertically so that the light rays are dispersed more widely. By the proposed method, the entire 3D image provides both monocular and binocular depth cues, a full-parallax demonstration with high angular ray density, and an enhanced vertical viewing angle.

  14. Detailed description of the Mayo/IBM PACS

    NASA Astrophysics Data System (ADS)

    Gehring, Dale G.; Persons, Kenneth R.; Rothman, Melvyn L.; Salutz, James R.; Morin, Richard L.

    1991-07-01

    The Mayo Clinic and IBM/Rochester have jointly developed a picture archiving system (PACS) for use with Mayo's MRI and Neuro-CT imaging modalities. The system was developed to replace the imaging system's vendor-supplied magnetic tape archiving capability. The system consists of seven MR imagers and nine CT scanners, each interfaced to the PACS via IBM Personal System/2(tm) (PS/2) computers, which act as gateways from the imaging modality to the PACS network. The PAC system operates on the token-ring component of Mayo's city-wide local area network. Also on the PACS network are four optical storage subsystems used for image archival, three optical subsystems used for image retrieval, an IBM Application System/400(tm) (AS/400) computer used for database management and multiple PS/2-based image display systems and their image servers.

  15. Architectures and algorithms for digital image processing; Proceedings of the Meeting, Cannes, France, December 5, 6, 1985

    NASA Technical Reports Server (NTRS)

    Duff, Michael J. B. (Editor); Siegel, Howard J. (Editor); Corbett, Francis J. (Editor)

    1986-01-01

    The conference presents papers on the architectures, algorithms, and applications of image processing. Particular attention is given to a very large scale integration system for image reconstruction from projections, a prebuffer algorithm for instant display of volume data, and an adaptive image sequence filtering scheme based on motion detection. Papers are also presented on a simple, direct practical method of sensing local motion and analyzing local optical flow, image matching techniques, and an automated biological dosimetry system.

  16. A near-infrared fluorescence-based surgical navigation system imaging software for sentinel lymph node detection

    NASA Astrophysics Data System (ADS)

    Ye, Jinzuo; Chi, Chongwei; Zhang, Shuang; Ma, Xibo; Tian, Jie

    2014-02-01

    Sentinel lymph node (SLN) in vivo detection is vital in breast cancer surgery. A new near-infrared fluorescence-based surgical navigation system (SNS) imaging software, which has been developed by our research group, is presented for SLN detection surgery in this paper. The software is based on the fluorescence-based surgical navigation hardware system (SNHS) which has been developed in our lab, and is designed specifically for intraoperative imaging and postoperative data analysis. The surgical navigation imaging software consists of the following software modules: the control module, the image grabbing module, the real-time display module, the data saving module, and the image processing module. Several algorithms have been designed to support the performance of the software, for example an image registration algorithm based on correlation matching. Some of the key features of the software include: setting the control parameters of the SNS; acquiring, displaying, and storing the intraoperative imaging data automatically in real time; and analysis and processing of the saved image data. The developed software has been used to successfully detect the SLNs in 21 cases of breast cancer patients. In the near future, we plan to improve the software performance and it will be extensively used for clinical purposes.
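
    One common way to realize "image registration based on correlation matching" is to estimate the translation between two frames from the peak of their FFT-based cross-correlation. The sketch below shows this generic approach under the assumption of equally sized, translation-only frames; it is not necessarily the algorithm used in the SNS software.

        import numpy as np

        def correlation_shift(fixed, moving):
            """Estimate the (row, col) translation that best aligns `moving` to `fixed`
            by locating the peak of their FFT-based circular cross-correlation."""
            F = np.fft.fft2(fixed - fixed.mean())
            M = np.fft.fft2(moving - moving.mean())
            corr = np.fft.ifft2(F * np.conj(M)).real
            peak = np.unravel_index(np.argmax(corr), corr.shape)
            # Wrap peak coordinates to signed shifts.
            shift = [p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape)]
            return tuple(shift)   # apply with e.g. scipy.ndimage.shift to register the frames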

  17. Low Cost Desktop Image Analysis Workstation With Enhanced Interactive User Interface

    NASA Astrophysics Data System (ADS)

    Ratib, Osman M.; Huang, H. K.

    1989-05-01

    A multimodality picture archiving and communication system (PACS) is in routine clinical use in the UCLA Radiology Department. Several types of workstations are currently implemented for this PACS. Among them, the Apple Macintosh II personal computer was recently chosen to serve as a desktop workstation for display and analysis of radiological images. This personal computer was selected mainly because of its extremely friendly user interface, its popularity among the academic and medical community, and its low cost. In comparison to other microcomputer-based systems, the Macintosh II offers the following advantages: the extreme standardization of its user interface, file system, and networking, and the availability of a very large variety of commercial software packages. In the current configuration the Macintosh II operates as a stand-alone workstation where images are imported from a centralized PACS server through an Ethernet network using a standard TCP-IP protocol, and stored locally on magnetic disk. The use of high-resolution screens (1024x768 pixels x 8 bits) offers sufficient performance for image display and analysis. We focused our project on the design and implementation of a variety of image analysis algorithms ranging from automated structure and edge detection to sophisticated dynamic analysis of sequential images. Specific analysis programs were developed for ultrasound images, digitized angiograms, MRI and CT tomographic images, and scintigraphic images.

  18. Image standards in tissue-based diagnosis (diagnostic surgical pathology).

    PubMed

    Kayser, Klaus; Görtler, Jürgen; Goldmann, Torsten; Vollmer, Ekkehard; Hufnagl, Peter; Kayser, Gian

    2008-04-18

    Progress in automated image analysis, virtual microscopy, hospital information systems, and interdisciplinary data exchange require image standards to be applied in tissue-based diagnosis. To describe the theoretical background, practical experiences and comparable solutions in other medical fields to promote image standards applicable for diagnostic pathology. THEORY AND EXPERIENCES: Images used in tissue-based diagnosis present with pathology-specific characteristics. It seems appropriate to discuss their characteristics and potential standardization in relation to the levels of hierarchy in which they appear. All levels can be divided into legal, medical, and technological properties. Standards applied to the first level include regulations or aims to be fulfilled. In legal properties, they have to regulate features of privacy, image documentation, transmission, and presentation; in medical properties, features of disease-image combination, human-diagnostics, automated information extraction, archive retrieval and access; and in technological properties features of image acquisition, display, formats, transfer speed, safety, and system dynamics. The next lower second level has to implement the prescriptions of the upper one, i.e. describe how they are implemented. Legal aspects should demand secure encryption for privacy of all patient related data, image archives that include all images used for diagnostics for a period of 10 years at minimum, accurate annotations of dates and viewing, and precise hardware and software information. Medical aspects should demand standardized patients' files such as DICOM 3 or HL 7 including history and previous examinations, information of image display hardware and software, of image resolution and fields of view, of relation between sizes of biological objects and image sizes, and of access to archives and retrieval. Technological aspects should deal with image acquisition systems (resolution, colour temperature, focus, brightness, and quality evaluation procedures), display resolution data, implemented image formats, storage, cycle frequency, backup procedures, operation system, and external system accessibility. The lowest third level describes the permitted limits and threshold in detail. At present, an applicable standard including all mentioned features does not exist to our knowledge; some aspects can be taken from radiological standards (PACS, DICOM 3); others require specific solutions or are not covered yet. The progress in virtual microscopy and application of artificial intelligence (AI) in tissue-based diagnosis demands fast preparation and implementation of an internationally acceptable standard. The described hierarchic order as well as analytic investigation in all potentially necessary aspects and details offers an appropriate tool to specifically determine standardized requirements.

  19. Rotational symmetric HMD with eye-tracking capability

    NASA Astrophysics Data System (ADS)

    Liu, Fangfang; Cheng, Dewen; Wang, Qiwei; Wang, Yongtian

    2016-10-01

    As an important auxiliary function of head-mounted displays (HMDs), eye tracking has an important role in the field of intelligent human-machine interaction. In this paper, an eye-tracking HMD system (ET-HMD) is designed based on a rotationally symmetric system. The tracking principle in this paper is based on pupil-corneal reflection. The ET-HMD system comprises three optical paths for virtual display, infrared illumination, and eye tracking. The display optics is shared by the three optical paths and consists of four spherical lenses. For the eye-tracking path, an extra imaging lens is added to match the image sensor and achieve eye tracking. The display optics provides users with a 40° diagonal FOV using a 0.61-inch OLED, a 19 mm eye clearance, and a 10 mm exit pupil diameter. The eye-tracking path can capture a 15 mm × 15 mm area of the user's eye. The average MTF is above 0.1 at 26 lp/mm for the display path, and exceeds 0.2 at 46 lp/mm for the eye-tracking path. Eye illumination is simulated using LightTools with an eye model and an 850 nm near-infrared LED (NIR-LED). The results of the simulation show that the illumination of the NIR-LED can cover the area of the eye model with the display optics, which is sufficient for eye tracking. An HMD with an integrated eye-tracking capability can help improve the user experience.

  20. Search systems and computer-implemented search methods

    DOEpatents

    Payne, Deborah A.; Burtner, Edwin R.; Hampton, Shawn D.; Gillen, David S.; Henry, Michael J.

    2017-03-07

    Search systems and computer-implemented search methods are described. In one aspect, a search system includes a communications interface configured to access a plurality of data items of a collection, wherein the data items include a plurality of image objects individually comprising image data utilized to generate an image of the respective data item. The search system may include processing circuitry coupled with the communications interface and configured to process the image data of the data items of the collection to identify a plurality of image content facets which are indicative of image content contained within the images and to associate the image objects with the image content facets and a display coupled with the processing circuitry and configured to depict the image objects associated with the image content facets.

  1. Search systems and computer-implemented search methods

    DOEpatents

    Payne, Deborah A.; Burtner, Edwin R.; Bohn, Shawn J.; Hampton, Shawn D.; Gillen, David S.; Henry, Michael J.

    2015-12-22

    Search systems and computer-implemented search methods are described. In one aspect, a search system includes a communications interface configured to access a plurality of data items of a collection, wherein the data items include a plurality of image objects individually comprising image data utilized to generate an image of the respective data item. The search system may include processing circuitry coupled with the communications interface and configured to process the image data of the data items of the collection to identify a plurality of image content facets which are indicative of image content contained within the images and to associate the image objects with the image content facets and a display coupled with the processing circuitry and configured to depict the image objects associated with the image content facets.

  2. Satellite Data Processing System (SDPS) users manual V1.0

    NASA Technical Reports Server (NTRS)

    Caruso, Michael; Dunn, Chris

    1989-01-01

    SDPS is a menu driven interactive program designed to facilitate the display and output of image and line-based data sets common to telemetry, modeling and remote sensing. This program can be used to display up to four separate raster images and overlay line-based data such as coastlines, ship tracks and velocity vectors. The program uses multiple windows to communicate information with the user. At any given time, the program may have up to four image display windows as well as auxiliary windows containing information about each image displayed. SDPS is not a commercial program. It does not contain complete type checking or error diagnostics which may allow the program to crash. Known anomalies will be mentioned in the appropriate section as notes or cautions. SDPS was designed to be used on Sun Microsystems Workstations running SunView1 (Sun Visual/Integrated Environment for Workstations). It was primarily designed to be used on workstations equipped with color monitors, but most of the line-based functions and several of the raster-based functions can be used with monochrome monitors. The program currently runs on Sun 3 series workstations running Sun OS 4.0 and should port easily to Sun 4 and Sun 386 series workstations with SunView1. Users should also be familiar with UNIX, Sun workstations and the SunView window system.

  3. 42 CFR 37.44 - Approval of radiographic facilities that use digital radiography systems.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... image acquisition, digitization, processing, compression, transmission, display, archiving, and... quality digital chest radiographs by submitting to NIOSH digital radiographic image files of a test object... digital radiographic image files from six or more sample chest radiographs that are of acceptable quality...

  4. 42 CFR 37.44 - Approval of radiographic facilities that use digital radiography systems.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... image acquisition, digitization, processing, compression, transmission, display, archiving, and... quality digital chest radiographs by submitting to NIOSH digital radiographic image files of a test object... digital radiographic image files from six or more sample chest radiographs that are of acceptable quality...

  5. Acquisition of stereo panoramas for display in VR environments

    NASA Astrophysics Data System (ADS)

    Ainsworth, Richard A.; Sandin, Daniel J.; Schulze, Jurgen P.; Prudhomme, Andrew; DeFanti, Thomas A.; Srinivasan, Madhusudhanan

    2011-03-01

    Virtual reality systems are an excellent environment for stereo panorama displays. The acquisition and display methods described here combine high-resolution photography with surround vision and full stereo view in an immersive environment. This combination provides photographic stereo-panoramas for a variety of VR displays, including the StarCAVE, NexCAVE, and CORNEA. The zero parallax point used in conventional panorama photography is also the center of horizontal and vertical rotation when creating photographs for stereo panoramas. The two photographically created images are displayed on a cylinder or a sphere. The radius from the viewer to the image is set at approximately 20 feet, or at the object of major interest. A full stereo view is presented in all directions. The interocular distance, as seen from the viewer's perspective, displaces the two spherical images horizontally. This presents correct stereo separation in whatever direction the viewer is looking, even up and down. Objects at infinity will move with the viewer, contributing to an immersive experience. Stereo panoramas created with this acquisition and display technique can be applied without modification to a large array of VR devices having different screen arrangements and different VR libraries.

  6. Storage and retrieval of large digital images

    DOEpatents

    Bradley, J.N.

    1998-01-20

    Image compression and viewing are implemented with (1) a method for performing DWT-based compression on a large digital image with a computer system possessing a two-level system of memory and (2) a method for selectively viewing areas of the image from its compressed representation at multiple resolutions and, if desired, in a client-server environment. The compression of a large digital image I(x,y) is accomplished by first defining a plurality of discrete tile image data subsets T_ij(x,y) that, upon superposition, form the complete set of image data I(x,y). A seamless wavelet-based compression process is effected on I(x,y) that is comprised of successively inputting the tiles T_ij(x,y) in a selected sequence to a DWT routine, and storing the resulting DWT coefficients in a first primary memory. These coefficients are periodically compressed and transferred to a secondary memory to maintain sufficient memory in the primary memory for data processing. The sequence of DWT operations on the tiles T_ij(x,y) effectively calculates a seamless DWT of I(x,y). Data retrieval consists of specifying a resolution and a region of I(x,y) for display. The subset of stored DWT coefficients corresponding to each requested scene is determined and then decompressed for input to an inverse DWT, the output of which forms the image display. The repeated process whereby image views are specified may take the form of an interaction with a computer pointing device on an image display from a previous retrieval. 6 figs.

  7. Storage and retrieval of large digital images

    DOEpatents

    Bradley, Jonathan N.

    1998-01-01

    Image compression and viewing are implemented with (1) a method for performing DWT-based compression on a large digital image with a computer system possessing a two-level system of memory and (2) a method for selectively viewing areas of the image from its compressed representation at multiple resolutions and, if desired, in a client-server environment. The compression of a large digital image I(x,y) is accomplished by first defining a plurality of discrete tile image data subsets T_ij(x,y) that, upon superposition, form the complete set of image data I(x,y). A seamless wavelet-based compression process is effected on I(x,y) that is comprised of successively inputting the tiles T_ij(x,y) in a selected sequence to a DWT routine, and storing the resulting DWT coefficients in a first primary memory. These coefficients are periodically compressed and transferred to a secondary memory to maintain sufficient memory in the primary memory for data processing. The sequence of DWT operations on the tiles T_ij(x,y) effectively calculates a seamless DWT of I(x,y). Data retrieval consists of specifying a resolution and a region of I(x,y) for display. The subset of stored DWT coefficients corresponding to each requested scene is determined and then decompressed for input to an inverse DWT, the output of which forms the image display. The repeated process whereby image views are specified may take the form of an interaction with a computer pointing device on an image display from a previous retrieval.
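
    The tile-then-transform idea in the two records above can be sketched with PyWavelets. The fragment below transforms each tile independently and therefore omits the boundary handling that makes the patented sequence equivalent to a seamless whole-image DWT; the tile size, wavelet, and decomposition level are illustrative assumptions.

        import numpy as np
        import pywt

        def tiled_dwt(image, tile=256, wavelet="bior4.4", level=3):
            """Per-tile multilevel 2-D DWT of a large greyscale image (rows, cols)."""
            coeffs = {}
            rows, cols = image.shape
            for i in range(0, rows, tile):
                for j in range(0, cols, tile):
                    block = image[i:i + tile, j:j + tile].astype(float)
                    coeffs[(i, j)] = pywt.wavedec2(block, wavelet, level=level)
            return coeffs

        def reconstruct_tile(coeffs, i, j, wavelet="bior4.4"):
            # Retrieval: decompress only the tile(s) covering the requested region.
            return pywt.waverec2(coeffs[(i, j)], wavelet)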

  8. Analysis and design of wedge projection display system based on ray retracing method.

    PubMed

    Lee, Chang-Kun; Lee, Taewon; Sung, Hyunsik; Min, Sung-Wook

    2013-06-10

    A design method for the wedge projection display system based on the ray retracing method is proposed. To analyze the principle of image formation on the inclined surface of the wedge-shaped waveguide, the bundle of rays is retraced from an imaging point on the inclined surface to the aperture of the waveguide. In consequence of ray retracing, we obtain the incident conditions of the ray, such as the position and the angle at the aperture, which provide clues for image formation. To illuminate the image formation, the concept of the equivalent imaging point is proposed, which is the intersection where the incident rays are extended over the space regardless of the refraction and reflection in the waveguide. Since the initial value of the rays arriving at the equivalent imaging point corresponds to that of the rays converging into the imaging point on the inclined surface, the image formation can be visualized by calculating the equivalent imaging point over the entire inclined surface. Then, we can find image characteristics, such as their size and position, and their degree of blur--by analyzing the distribution of the equivalent imaging point--and design the optimized wedge projection system by attaching the prism structure at the aperture. The simulation results show the feasibility of the ray retracing analysis and characterize the numerical relation between the waveguide parameters and the aperture structure for on-axis configuration. The experimental results verify the designed system based on the proposed method.

  9. Dynamic, diagnostic, and pharmacological radionuclide studies of the esophagus in achalasia: correlation with manometric measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rozen, P.; Gelfond, M.; Zaltzman, S.

    1982-08-01

    The esophagus was evaluated in 15 patients with achalasia by continuous gamma camera imaging following ingestion of a semi-solid meal labeled with 99mTc. The images were displayed and recorded on a simple computerized data processing/display system. Subsequent cine mode images of esophageal emptying demonstrated abnormalities of the body of the esophagus not reflected by the manometric examination. Computer-generated time-activity curves representing specific regions of interest were better than manometry in evaluating the results of myotomy, dilatation, and drug therapy. Isosorbide dinitrate significantly improved esophageal emptying.

  10. Computers in imaging and health care: now and in the future.

    PubMed

    Arenson, R L; Andriole, K P; Avrin, D E; Gould, R G

    2000-11-01

    Early picture archiving and communication systems (PACS) were characterized by the use of very expensive hardware devices, cumbersome display stations, duplication of database content, lack of interfaces to other clinical information systems, and immaturity in their understanding of the folder manager concepts and workflow reengineering. They were implemented historically at large academic medical centers by biomedical engineers and imaging informaticists. PACS were nonstandard, home-grown projects with mixed clinical acceptance. However, they clearly showed the great potential for PACS and filmless medical imaging. Filmless radiology is a reality today. The advent of efficient softcopy display of images provides a means for dealing with the ever-increasing number of studies and number of images per study. Computer power has increased, and archival storage cost has decreased to the extent that the economics of PACS is justifiable with respect to film. Network bandwidths have increased to allow large studies of many megabytes to arrive at display stations within seconds of examination completion. PACS vendors have recognized the need for efficient workflow and have built systems with intelligence in the management of patient data. Close integration with the hospital information system (HIS)-radiology information system (RIS) is critical for system functionality. Successful implementation of PACS requires integration or interoperation with hospital and radiology information systems. Besides the economic advantages, secure rapid access to all clinical information on patients, including imaging studies, anytime and anywhere, enhances the quality of patient care, although it is difficult to quantify. Medical image management systems are maturing, providing access outside of the radiology department to images and clinical information throughout the hospital or the enterprise via the Internet. Small and medium-sized community hospitals, private practices, and outpatient centers in rural areas will begin realizing the benefits of PACS already realized by the large tertiary care academic medical centers and research institutions. Hand-held devices and the Worldwide Web are going to change the way people communicate and do business. The impact on health care will be huge, including radiology. Computer-aided diagnosis, decision support tools, virtual imaging, and guidance systems will transform our practice as value-added applications utilizing the technologies pushed by PACS development efforts. Outcomes data and the electronic medical record (EMR) will drive our interactions with referring physicians and we expect the radiologist to become the informaticist, a new version of the medical management consultant.

  11. Microvax-based data management and reduction system for the regional planetary image facilities

    NASA Technical Reports Server (NTRS)

    Arvidson, R.; Guinness, E.; Slavney, S.; Weiss, B.

    1987-01-01

    Presented is a progress report for the Regional Planetary Image Facilities (RPIF) prototype image data management and reduction system being jointly implemented by Washington University and the USGS, Flagstaff. The system will consist of a MicroVAX with a high capacity (approx 300 megabyte) disk drive, a compact disk player, an image display buffer, a videodisk player, USGS image processing software, and SYSTEM 1032 - a commercial relational database management package. The USGS, Flagstaff, will transfer their image processing software including radiometric and geometric calibration routines, to the MicroVAX environment. Washington University will have primary responsibility for developing the database management aspects of the system and for integrating the various aspects into a working system.

  12. Direct-Solve Image-Based Wavefront Sensing

    NASA Technical Reports Server (NTRS)

    Lyon, Richard G.

    2009-01-01

    A method of wavefront sensing (more precisely characterized as a method of determining the deviation of a wavefront from a nominal figure) has been invented as an improved means of assessing the performance of an optical system as affected by such imperfections as misalignments, design errors, and fabrication errors. The method is implemented by software running on a single-processor computer that is connected, via a suitable interface, to the image sensor (typically, a charge-coupled device) in the system under test. The software collects a single digitized image from the image sensor. The image is displayed on a computer monitor. The software directly solves for the wavefront in a fraction of a second. A picture of the wavefront is displayed. The solution process involves, among other things, fast Fourier transforms. It has been reported that some measure of the wavefront is decomposed into modes of the optical system under test, but it has not been reported whether this decomposition is postprocessing of the solution or part of the solution process.

  13. Multispectral image-fused head-tracked vision system (HTVS) for driving applications

    NASA Astrophysics Data System (ADS)

    Reese, Colin E.; Bender, Edward J.

    2001-08-01

    Current military thermal driver vision systems consist of a single Long Wave Infrared (LWIR) sensor mounted on a manually operated gimbal, which is normally locked forward during driving. The sensor video imagery is presented on a large area flat panel display for direct view. The Night Vision and Electronics Sensors Directorate and Kaiser Electronics are cooperatively working to develop a driver's Head Tracked Vision System (HTVS) which directs dual waveband sensors in a more natural head-slewed imaging mode. The HTVS consists of LWIR and image intensified sensors, a high-speed gimbal, a head mounted display, and a head tracker. The first prototype systems have been delivered and have undergone preliminary field trials to characterize the operational benefits of a head tracked sensor system for tactical military ground applications. This investigation will address the advantages of head tracked vs. fixed sensor systems regarding peripheral sightings of threats, road hazards, and nearby vehicles. An additional thrust will investigate the degree to which additive (A+B) fusion of LWIR and image intensified sensors enhances overall driving performance. Typically, LWIR sensors are better for detecting threats, while image intensified sensors provide more natural scene cues, such as shadows and texture. This investigation will examine the degree to which the fusion of these two sensors enhances the driver's overall situational awareness.

  14. Contrast sensitivity function in stereoscopic viewing of Gabor patches on a medical polarized three-dimensional stereoscopic display

    NASA Astrophysics Data System (ADS)

    Rousson, Johanna; Haar, Jérémy; Santal, Sarah; Kumcu, Asli; Platiša, Ljiljana; Piepers, Bastian; Kimpe, Tom; Philips, Wilfried

    2016-03-01

    While three-dimensional (3-D) imaging systems are entering hospitals, no study to date has explored the luminance calibration needs of 3-D stereoscopic diagnostic displays and if they differ from two-dimensional (2-D) displays. Since medical display calibration incorporates the human contrast sensitivity function (CSF), we first assessed the 2-D CSF for benchmarking and then examined the impact of two image parameters on the 3-D stereoscopic CSF: (1) five depth plane (DP) positions (between DP: -171 and DP: 2853 mm), and (2) three 3-D inclinations (0 deg, 45 deg, and 60 deg around the horizontal axis of a DP). Stimuli were stereoscopic images of a vertically oriented 2-D Gabor patch at one of seven frequencies ranging from 0.4 to 10 cycles/deg. CSFs were measured for seven to nine human observers with a staircase procedure. The results indicate that the 2-D CSF model remains valid for a 3-D stereoscopic display regardless of the amount of disparity between the stereo images. We also found that the 3-D CSF at DP≠0 does not differ from the 3-D CSF at DP=0 for DPs and disparities which allow effortless binocular fusion. Therefore, the existing 2-D medical luminance calibration algorithm remains an appropriate tool for calibrating polarized stereoscopic medical displays.
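
    For context, the sketch below (Python/NumPy, an assumption; the size, frequency, and contrast values are illustrative and not taken from the study) builds a vertically oriented Gabor patch of the kind used as a stimulus, with spatial frequency and contrast as the free parameters a staircase procedure would adjust.

        import numpy as np

        def gabor_patch(size=256, cycles_per_image=8.0, contrast=0.5, sigma_frac=0.15):
            # Vertically oriented sinusoidal carrier under a Gaussian envelope,
            # expressed as luminance in [0, 1] around a mid-grey background.
            x = np.linspace(-0.5, 0.5, size)
            xx, yy = np.meshgrid(x, x)
            carrier = np.cos(2.0 * np.pi * cycles_per_image * xx)
            envelope = np.exp(-(xx**2 + yy**2) / (2.0 * sigma_frac**2))
            return 0.5 + 0.5 * contrast * carrier * envelope

        # cycles_per_image maps to cycles/deg once the display's pixels-per-degree
        # is known; e.g. a low-contrast patch near threshold:
        patch = gabor_patch(cycles_per_image=4.0, contrast=0.02)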

  15. Anatomical education and surgical simulation based on the Chinese Visible Human: a three-dimensional virtual model of the larynx region.

    PubMed

    Liu, Kaijun; Fang, Binji; Wu, Yi; Li, Ying; Jin, Jun; Tan, Liwen; Zhang, Shaoxiang

    2013-09-01

    Anatomical knowledge of the larynx region is critical for understanding laryngeal disease and performing required interventions. Virtual reality is a useful method for surgical education and simulation. Here, we assembled segmented cross-section slices of the larynx region from the Chinese Visible Human dataset. The laryngeal structures were precisely segmented manually as 2D images, then reconstructed and displayed as 3D images in the virtual reality Dextrobeam system. Using visualization and interaction with the virtual reality modeling language model, a digital laryngeal anatomy instruction system was constructed using HTML and JavaScript. The volumetric larynx models can thus display an arbitrary section of the model and provide a virtual dissection function. This networked teaching system of the digital laryngeal anatomy can be read remotely, displayed locally, and manipulated interactively.

  16. Quantification of differences between nailfold capillaroscopy images with a scleroderma pattern and normal pattern using measures of geometric and algorithmic complexity.

    PubMed

    Urwin, Samuel George; Griffiths, Bridget; Allen, John

    2017-02-01

    This study aimed to quantify and investigate differences in the geometric and algorithmic complexity of the microvasculature in nailfold capillaroscopy (NFC) images displaying a scleroderma pattern and those displaying a 'normal' pattern. Eleven NFC images were qualitatively classified by a capillary specialist as indicative of 'clear microangiopathy' (CM), i.e. a scleroderma pattern, and 11 as 'not clear microangiopathy' (NCM), i.e. a 'normal' pattern. Pre-processing was performed, and fractal dimension (FD) and Kolmogorov complexity (KC) were calculated following image binarisation. FD and KC were compared between groups, and a k-means cluster analysis (n = 2) on all images was performed, without prior knowledge of the group assigned to them (i.e. CM or NCM), using FD and KC as inputs. CM images had significantly reduced FD and KC compared to NCM images, and the cluster analysis displayed promising results, suggesting that quantitative classification of images into CM and NCM groups is possible using the mathematical measures of FD and KC. The analysis techniques used show promise for quantitative microvascular investigation in patients with systemic sclerosis.
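
    A minimal sketch of this kind of pipeline, assuming Python with NumPy, zlib, and scikit-learn (none of which are named in the paper): box-counting fractal dimension, a compression-based stand-in for Kolmogorov complexity (which is not exactly computable), and a two-cluster k-means on the resulting (FD, KC) features. The synthetic images are placeholders for the binarised capillary maps.

        import zlib
        import numpy as np
        from sklearn.cluster import KMeans

        def box_counting_fd(binary_img, sizes=(2, 4, 8, 16, 32)):
            # Fractal dimension: slope of log(box count) vs. log(1 / box size).
            counts = []
            for s in sizes:
                h, w = binary_img.shape
                n = sum(binary_img[i:i + s, j:j + s].any()
                        for i in range(0, h, s) for j in range(0, w, s))
                counts.append(max(n, 1))
            slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
            return slope

        def kc_proxy(binary_img):
            # Kolmogorov complexity proxy: size of the losslessly compressed bitmap.
            return len(zlib.compress(np.packbits(binary_img).tobytes()))

        # Synthetic stand-ins for 22 binarised capillary images (11 CM + 11 NCM).
        rng = np.random.default_rng(0)
        images = [rng.random((256, 256)) < p for p in ([0.02] * 11 + [0.10] * 11)]

        # In practice the two features would normally be standardised before clustering.
        features = np.array([[box_counting_fd(img), kc_proxy(img)] for img in images])
        labels = KMeans(n_clusters=2, n_init=10).fit_predict(features)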

  17. Using high-resolution displays for high-resolution cardiac data.

    PubMed

    Goodyer, Christopher; Hodrien, John; Wood, Jason; Kohl, Peter; Brodlie, Ken

    2009-07-13

    The ability to perform fast, accurate, high-resolution visualization is fundamental to improving our understanding of anatomical data. As the volumes of data increase from improvements in scanning technology, the methods applied to visualization must evolve. In this paper, we address the interactive display of data from high-resolution magnetic resonance imaging scanning of a rabbit heart and subsequent histological imaging. We describe a visualization environment involving a tiled liquid crystal display (LCD) panel wall and associated software, which provides an interactive and intuitive user interface. The oView software is an OpenGL application that is written for the VR Juggler environment. This environment abstracts displays and devices away from the application itself, aiding portability between different systems, from desktop PCs to multi-tiled display walls. Portability between display walls has been demonstrated through its use on walls at the universities of both Leeds and Oxford. We discuss important factors to be considered for interactive two-dimensional display of large three-dimensional datasets, including the use of intuitive input devices and level-of-detail aspects.

  18. Earth Observation Services (Image Processing Software)

    NASA Technical Reports Server (NTRS)

    1992-01-01

    San Diego State University and Environmental Systems Research Institute, with other agencies, have applied satellite imaging and image processing techniques to geographic information systems (GIS) updating. The resulting images display land use and are used by a regional planning agency for applications like mapping vegetation distribution and preserving wildlife habitats. The EOCAP program provides government co-funding to encourage private investment in, and to broaden the use of NASA-developed technology for analyzing information about Earth and ocean resources.

  19. 1981 Image II Conference Proceedings.

    DTIC Science & Technology

    1981-11-01

    rapid motion of terrain detail across the display requires fast display processors. Other difficulties are perceptual: the visual displays must convey...has been a continuing effort by Vought in the last decade. Early systems were restricted by the unavailability of video bulk storage with fast random...each photograph. The calculations aided in the proper sequencing of the scanned scenes on the tape recorder and eventually facilitated fast random

  20. EzMol: A Web Server Wizard for the Rapid Visualization and Image Production of Protein and Nucleic Acid Structures.

    PubMed

    Reynolds, Christopher R; Islam, Suhail A; Sternberg, Michael J E

    2018-01-31

    EzMol is a molecular visualization Web server in the form of a software wizard, located at http://www.sbg.bio.ic.ac.uk/ezmol/. It is designed for easy and rapid image manipulation and display of protein molecules, and is intended for users who need to quickly produce high-resolution images of protein molecules but do not have the time or inclination to use a software molecular visualization system. EzMol allows the upload of molecular structure files in PDB format to generate a Web page including a representation of the structure that the user can manipulate. EzMol provides intuitive options for chain display, adjusting the color/transparency of residues, side chains and protein surfaces, and for adding labels to residues. The final adjusted protein image can then be downloaded as a high-resolution image. There are a range of applications for rapid protein display, including the illustration of specific areas of a protein structure and the rapid prototyping of images. Copyright © 2018. Published by Elsevier Ltd.

  1. Bio-inspired display of polarization information using selected visual cues

    NASA Astrophysics Data System (ADS)

    Yemelyanov, Konstantin M.; Lin, Shih-Schon; Luis, William Q.; Pugh, Edward N., Jr.; Engheta, Nader

    2003-12-01

    For imaging systems the polarization of electromagnetic waves carries much potentially useful information about such features of the world as the surface shape, material contents, local curvature of objects, as well as about the relative locations of the source, object and imaging system. The imaging system of the human eye, however, is "polarization-blind", and cannot utilize the polarization of light without the aid of an artificial, polarization-sensitive instrument. Therefore, polarization information captured by a man-made polarimetric imaging system must be displayed to a human observer in the form of visual cues that are naturally processed by the human visual system, while essentially preserving the other important non-polarization information (such as spectral and intensity information) in an image. In other words, some forms of sensory substitution are needed for representing polarization "signals" without affecting other visual information such as color and brightness. We are investigating several bio-inspired representational methodologies for mapping polarization information into visual cues readily perceived by the human visual system, and determining which mappings are most suitable for specific applications such as object detection, navigation, sensing, scene classification, and surface deformation. The visual cues and strategies we are exploring are the use of coherently moving dots superimposed on an image to represent various ranges of polarization signals, overlaying textures with spatial and/or temporal signatures to segregate regions of an image with differing polarization, modulating luminance and/or color contrast of scenes in terms of certain aspects of polarization values, and fusing polarization images into intensity-only images. In this talk, we will present samples of our findings in this area.
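
    One of the mappings mentioned above, modulating colour while leaving brightness to the ordinary intensity image, can be sketched roughly as follows (Python with NumPy and the standard-library colorsys module, assumed here; the angle-to-hue and degree-to-saturation assignment is illustrative, not the authors' specific mapping).

        import colorsys
        import numpy as np

        def polarization_overlay(intensity, dolp, aop):
            # intensity in [0, 1]; dolp = degree of linear polarization in [0, 1];
            # aop = angle of polarization in radians.  Hue encodes the angle,
            # saturation encodes the degree, value carries the original intensity.
            h, w = intensity.shape
            rgb = np.empty((h, w, 3))
            for i in range(h):
                for j in range(w):
                    hue = (aop[i, j] % np.pi) / np.pi
                    rgb[i, j] = colorsys.hsv_to_rgb(hue, dolp[i, j], intensity[i, j])
            return rgb

        # Synthetic example frame.
        rng = np.random.default_rng(1)
        frame = polarization_overlay(rng.uniform(0.2, 1.0, (64, 64)),
                                     rng.uniform(0.0, 1.0, (64, 64)),
                                     rng.uniform(0.0, np.pi, (64, 64)))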

  2. Video System Highlights Hydrogen Fires

    NASA Technical Reports Server (NTRS)

    Youngquist, Robert C.; Gleman, Stuart M.; Moerk, John S.

    1992-01-01

    Video system combines images from visible spectrum and from three bands in infrared spectrum to produce color-coded display in which hydrogen fires distinguished from other sources of heat. Includes linear array of 64 discrete lead selenide mid-infrared detectors operating at room temperature. Images overlaid on black and white image of same scene from standard commercial video camera. In final image, hydrogen fires appear red; carbon-based fires, blue; and other hot objects, mainly green and combinations of green and red. Where no thermal source present, image remains in black and white. System enables high degree of discrimination between hydrogen flames and other thermal emitters.

  3. Telemedicine optoelectronic biomedical data processing system

    NASA Astrophysics Data System (ADS)

    Prosolovska, Vita V.

    2010-08-01

    The telemedicine optoelectronic biomedical data processing system was created to share medical information in support of health protection and a timely, rapid response to crises. The system includes the following main blocks: a bioprocessor, an analog-to-digital converter for biomedical images, an optoelectronic module for image processing, an optoelectronic module for parallel recording and storage of biomedical images, and a matrix-screen display of biomedical images. The temporal characteristics of the blocks are determined by the triggering of the optoelectronic couple in the analog-to-digital converters and by the imaging time of the matrix screen. The element base for the hardware implementation of the developed matrix screen is integrated optoelectronic couples produced by selective epitaxy.

  4. Real-time speckle variance swept-source optical coherence tomography using a graphics processing unit.

    PubMed

    Lee, Kenneth K C; Mariampillai, Adrian; Yu, Joe X Z; Cadotte, David W; Wilson, Brian C; Standish, Beau A; Yang, Victor X D

    2012-07-01

    Advances in swept-source laser technology continue to increase the imaging speed of swept-source optical coherence tomography (SS-OCT) systems. These fast imaging speeds are ideal for microvascular detection schemes, such as speckle variance (SV), where interframe motion can cause severe imaging artifacts and loss of vascular contrast. However, full utilization of the laser scan speed has been hindered by the computationally intensive signal processing required by SS-OCT and SV calculations. Using a commercial graphics processing unit that has been optimized for parallel data processing, we report a complete high-speed SS-OCT platform capable of real-time data acquisition, processing, display, and saving at 108,000 lines per second. Subpixel image registration of structural images was performed in real-time prior to SV calculations in order to reduce decorrelation from stationary structures induced by the bulk tissue motion. The viability of the system was successfully demonstrated in a high bulk tissue motion scenario of human fingernail root imaging where SV images (512 × 512 pixels, n = 4) were displayed at 54 frames per second.
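
    The speckle-variance statistic itself is simple once the frames are registered; a rough NumPy sketch on synthetic data is shown below (the gate size of four frames comes from the abstract, everything else is a placeholder). The GPU implementation described in the paper parallelises the same per-pixel computation.

        import numpy as np

        def speckle_variance(gate):
            # gate: (N, rows, cols) stack of registered structural OCT frames.
            # Moving scatterers (blood) decorrelate between frames and show high
            # interframe variance; static tissue stays low.
            return np.var(gate, axis=0)

        # Synthetic gate of N = 4 frames, 512 x 512 pixels, with a faster-decorrelating
        # horizontal band standing in for a vessel.
        rng = np.random.default_rng(0)
        gate = rng.rayleigh(scale=1.0, size=(4, 512, 512))
        gate[:, 200:240, :] = rng.rayleigh(scale=3.0, size=(4, 40, 512))
        sv_map = speckle_variance(gate)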

  5. The Design and Evaluation of the Lightning Imaging Sensor Data Applications Display (LISDAD)

    NASA Technical Reports Server (NTRS)

    Boldi, B.; Hodanish, S.; Sharp, D.; Williams, E.; Goodman, Steven; Raghavan, R.; Matlin, A.; Weber, M.

    1998-01-01

    The ultimate goal of the LISDAD system is to quantify the utility of total lightning information in short-term, severe-weather forecasting operations. To this end, scientists from NASA, NWS, and MIT organized an effort to study the relationship of lightning and severe weather on a storm-by-storm, and even cell-by-cell, basis for as many storms as possible near Melbourne, Florida. Melbourne was chosen as it offers a unique combination of high probability of severe weather and proximity to major relevant sensors - specifically: NASA's total lightning mapping system at Kennedy Space Center (the LDAR system at KSC); a NWS/NEXRAD radar (at Melbourne); and a prototype Integrated Terminal Weather System (ITWS, at Orlando), which obtains cloud-to-ground lightning information from the National Lightning Detection Network (NLDN), and also uses NSSL's Severe Storm Algorithm (NSSL/SSAP) to obtain information about various storm-cell parameters. To assist in realizing this project's goal, an interactive, real-time data processing system (the LISDAD system) has been developed that supports both operational short-term weather forecasting and post facto severe-storm research. Suggestions have been drawn from the operational users (NWS/Melbourne) in the design of the data display and its salient behavior. The initial concept for the user's Graphical Situation Display (GSD) was simply to overlay radar data with lightning data, but as the association between rapid upward trends in the total lightning rate and severe weather became evident, the display was significantly redesigned. The focus changed to support the display of time series of storm-parameter data and the automatic recognition of cells that display rapid changes in the total-lightning flash rate. The latter is calculated by grouping discrete LDAR radiation sources into lightning flashes using a time-space association algorithm. Specifically, the GSD presents the user with the Composite Maximum Reflectivity obtained from the NWS/NEXRAD. Superimposed upon this background image are placed small black circles indicating the locations of storm cells identified by the NSSL/SSA. The circles become cyan if lightning is detected within the storm cell; if the cell has lightning rates indicative of a severe storm, the circle turns red. This paper will: (1) review the design of the LISDAD system; (2) present some examples of its data display; and (3) show results of the lightning-based severe-weather prediction algorithm.
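
    The time-space association step can be sketched as a greedy grouping rule (Python; the thresholds are illustrative, since the abstract does not give LISDAD's actual values or algorithm details): a radiation source joins a recent flash if it is close enough in time and space, otherwise it starts a new flash, and the resulting per-cell flash rate is what drives the black/cyan/red display colour.

        import math

        def group_sources(sources, max_dt=0.5, max_dr_km=5.0):
            # sources: time-ordered list of (t_seconds, x_km, y_km) LDAR points.
            flashes = []
            for t, x, y in sources:
                assigned = False
                for flash in reversed(flashes):          # prefer the newest flash
                    t0, x0, y0 = flash[-1]
                    if t - t0 <= max_dt and math.hypot(x - x0, y - y0) <= max_dr_km:
                        flash.append((t, x, y))
                        assigned = True
                        break
                if not assigned:
                    flashes.append([(t, x, y)])
            return flashes

        # Two bursts separated by 3 s group into two flashes.
        pts = [(0.00, 1.0, 1.0), (0.05, 1.2, 0.9), (0.12, 1.1, 1.3),
               (3.00, 1.0, 1.1), (3.04, 0.9, 1.2)]
        print(len(group_sources(pts)))   # 2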

  6. Pseudo-shading technique in the two-dimensional domain: a post-processing algorithm for enhancing the Z-buffer of a three-dimensional binary image.

    PubMed

    Tan, A C; Richards, R

    1989-01-01

    Three-dimensional (3D) medical graphics is becoming popular in clinical use on tomographic scanners. Research work in 3D reconstructive display of computerized tomography (CT) and magnetic resonance imaging (MRI) scans on conventional computers has produced many so-called pseudo-3D images. The quality of these images depends on the rendering algorithm, the coarseness of the digitized object, the number of grey levels and the image screen resolution. CT and MRI data are fundamentally voxel based and they produce images that are coarse because of the resolution of the data acquisition system. 3D images produced by the Z-buffer depth shading technique suffer loss of detail when complex objects with fine textural detail need to be displayed. Attempts have been made to improve the display of voxel objects, and existing techniques have shown the improvement possible using these post-processing algorithms. The improved rendering technique works on the Z-buffer image to generate a shaded image using a single light source in any direction. The effectiveness of the technique in generating a shaded image has been shown to be a useful means of presenting 3D information for clinical use.
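
    A generic version of this kind of post-processing, not the authors' exact algorithm, can be sketched in a few lines (Python/NumPy assumed): estimate surface normals from the Z-buffer gradients and apply Lambertian shading for a single directional light.

        import numpy as np

        def shade_zbuffer(z, light=(0.5, 0.5, 1.0)):
            # Estimate per-pixel normals from depth gradients, then apply
            # Lambert's cosine law for one directional light source.
            gy, gx = np.gradient(z.astype(float))
            normals = np.dstack((-gx, -gy, np.ones_like(z, dtype=float)))
            normals /= np.linalg.norm(normals, axis=2, keepdims=True)
            l = np.asarray(light, dtype=float)
            l /= np.linalg.norm(l)
            shade = np.clip(normals @ l, 0.0, 1.0)
            shade[z == 0] = 0.0          # keep the empty background dark
            return shade

        # Demo: a hemispherical depth bump on a flat background.
        yy, xx = np.mgrid[-64:64, -64:64]
        r2 = 40.0**2 - xx**2 - yy**2
        z = np.where(r2 > 0, np.sqrt(np.maximum(r2, 0.0)), 0.0)
        img = shade_zbuffer(z)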

  7. Perception of 3D spatial relations for 3D displays

    NASA Astrophysics Data System (ADS)

    Rosen, Paul; Pizlo, Zygmunt; Hoffmann, Christoph; Popescu, Voicu S.

    2004-05-01

    We test perception of 3D spatial relations in 3D images rendered by a 3D display (Perspecta from Actuality Systems) and compare it to that of a high-resolution flat panel display. 3D images provide the observer with such depth cues as motion parallax and binocular disparity. Our 3D display is a device that renders a 3D image by displaying, in rapid succession, radial slices through the scene on a rotating screen. The image is contained in a glass globe and can be viewed from virtually any direction. In the psychophysical experiment several families of 3D objects are used as stimuli: primitive shapes (cylinders and cuboids), and complex objects (multi-story buildings, cars, and pieces of furniture). Each object has at least one plane of symmetry. On each trial an object or its "distorted" version is shown at an arbitrary orientation. The distortion is produced by stretching an object in a random direction by 40%. This distortion must eliminate the symmetry of an object. The subject's task is to decide whether or not the presented object is distorted under several viewing conditions (monocular/binocular, with/without motion parallax, and near/far). The subject's performance is measured by the discriminability d', which is a conventional dependent variable in signal detection experiments.
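
    For reference, the discriminability index is computed from hit and false-alarm rates as d' = z(H) - z(F); a small sketch using only the Python standard library is given below (the counts are made-up numbers, not data from the experiment).

        from statistics import NormalDist

        def d_prime(hits, misses, false_alarms, correct_rejections):
            # Yes/no "distorted vs. symmetric" task; a small correction keeps the
            # rates away from 0 and 1 so the z-transform stays finite.
            n_signal = hits + misses
            n_noise = false_alarms + correct_rejections
            hit_rate = (hits + 0.5) / (n_signal + 1)
            fa_rate = (false_alarms + 0.5) / (n_noise + 1)
            z = NormalDist().inv_cdf
            return z(hit_rate) - z(fa_rate)

        print(round(d_prime(42, 8, 12, 38), 2))   # ~1.66 for this made-up session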

  8. Split image optical display

    DOEpatents

    Veligdan, James T.

    2005-05-31

    A video image is displayed from an optical panel by splitting the image into a plurality of image components, and then projecting the image components through corresponding portions of the panel to collectively form the image. Depth of the display is correspondingly reduced.

  9. Split image optical display

    DOEpatents

    Veligdan, James T [Manorville, NY

    2007-05-29

    A video image is displayed from an optical panel by splitting the image into a plurality of image components, and then projecting the image components through corresponding portions of the panel to collectively form the image. Depth of the display is correspondingly reduced.

  10. Low-cost data analysis systems for processing multispectral scanner data

    NASA Technical Reports Server (NTRS)

    Whitely, S. L.

    1976-01-01

    The basic hardware and software requirements are described for four low cost analysis systems for computer generated land use maps. The data analysis systems consist of an image display system, a small digital computer, and an output recording device. Software is described together with some of the display and recording devices, and typical costs are cited. Computer requirements are given, and two approaches are described for converting black-white film and electrostatic printer output to inexpensive color output products. Examples of output products are shown.

  11. Study on high power ultraviolet laser oil detection system

    NASA Astrophysics Data System (ADS)

    Jin, Qi; Cui, Zihao; Bi, Zongjie; Zhang, Yanchao; Tian, Zhaoshuo; Fu, Shiyou

    2018-03-01

    Laser-Induced Fluorescence (LIF) is a widely used telemetry (remote measurement) technology. It obtains information about oil spills and oil film thickness by analyzing the characteristics of stimulated fluorescence and has important applications in the rapid analysis of water composition. An LIF detection system for marine oil pollution is designed in this paper, which uses a 355 nm high-energy pulsed laser as the excitation light source. A high-sensitivity image intensifier is used in the detector. The host computer sends a digital signal through a serial port to achieve nanosecond-scale range-gated width control of the image intensifier. The target fluorescence spectrum image is displayed on the image intensifier by adjusting the delay time and the width of the pulse signal. The spectral image is coupled to a CCD by lens imaging, and spectral display and data analysis are performed by computer. The system was used to detect floating oil films at a distance of 25 m and to obtain the fluorescence spectra of different oil products. The fluorescence spectra of the different oil products are clearly distinguishable. The experimental results show that the system can realize high-precision long-range fluorescence detection and accurately reflect the fluorescence characteristics of the target, with broad application prospects in marine oil pollution identification and oil film thickness detection.

  12. Radiology on handheld devices: image display, manipulation, and PACS integration issues.

    PubMed

    Raman, Bhargav; Raman, Raghav; Raman, Lalithakala; Beaulieu, Christopher F

    2004-01-01

    Handheld personal digital assistants (PDAs) have undergone continuous and substantial improvements in hardware and graphics capabilities, making them a compelling platform for novel developments in teleradiology. The latest PDAs have processor speeds of up to 400 MHz and storage capacities of up to 80 Gbytes with memory expansion methods. A Digital Imaging and Communications in Medicine (DICOM)-compliant, vendor-independent handheld image access system was developed in which a PDA server acts as the gateway between a picture archiving and communication system (PACS) and PDAs. The system is compatible with most currently available PDA models. It is capable of both wired and wireless transfer of images and includes custom PDA software and World Wide Web interfaces that implement a variety of basic image manipulation functions. Implementation of this system, which is currently undergoing debugging and beta testing, required optimization of the user interface to efficiently display images on smaller PDA screens. The PDA server manages user work lists and implements compression and security features to accelerate transfer speeds, protect patient information, and regulate access. Although some limitations remain, PDA-based teleradiology has the potential to increase the efficiency of the radiologic work flow, increasing productivity and improving communication with referring physicians and patients. Copyright RSNA, 2004

  13. Edge smoothing for real-time simulation of a polygon face object system as viewed by a moving observer

    NASA Technical Reports Server (NTRS)

    Lotz, Robert W. (Inventor); Westerman, David J. (Inventor)

    1980-01-01

    The visual system within an aircraft flight simulation system receives flight data and terrain data which is formatted into a buffer memory. The image data is forwarded to an image processor which translates the image data into face vertex vectors Vf, defining the position relationship between the vertices of each terrain object and the aircraft. The image processor then rotates, clips, and projects the image data into two-dimensional display vectors (Vd). A display generator receives the Vd faces, and other image data to provide analog inputs to CRT devices which provide the window displays for the simulated aircraft. The video signal to the CRT devices passes through an edge smoothing device which prolongs the rise time (and fall time) of the video data inversely as the slope of the edge being smoothed. An operational amplifier within the edge smoothing device has a plurality of independently selectable feedback capacitors, each having a different value. The values of the capacitors form a series in which each value is double the previous one (powers of two). Each feedback capacitor has a fast switch responsive to the corresponding bit of a digital binary control word for selecting (1) or not selecting (0) that capacitor. The control word is determined by the slope of each edge. The resulting actual feedback capacitance for each edge is the sum of all the selected capacitors and is directly proportional to the value of the binary control word. The output rise time (or fall time) is a function of the feedback capacitance, and is controlled by the slope through the binary control word.
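
    The capacitor-selection arithmetic described above reduces to reading the control word as a binary number; a small illustrative sketch is given below (Python; the unit capacitance and word width are placeholders, since the patent does not specify them).

        def feedback_capacitance(control_word, c_unit=10e-12, n_bits=4):
            # Capacitor k has value c_unit * 2**k, so the selected total is simply
            # c_unit * control_word; the loop makes the bit-by-bit selection explicit.
            total = 0.0
            for k in range(n_bits):
                if control_word & (1 << k):      # bit k selects capacitor k
                    total += c_unit * (2 ** k)
            return total

        # e.g. control word 0b1011 selects C0 + C1 + C3 = 11 * c_unit
        assert abs(feedback_capacitance(0b1011) - 11 * 10e-12) < 1e-18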

  14. Practical low-cost stereo head-mounted display

    NASA Astrophysics Data System (ADS)

    Pausch, Randy; Dwivedi, Pramod; Long, Allan C., Jr.

    1991-08-01

    A high-resolution head-mounted display has been developed from substantially cheaper components than previous systems. Monochrome displays provide 720 by 280 monochrome pixels to each eye in a one-inch-square region positioned approximately one inch from each eye. The display hardware is the Private Eye, manufactured by Reflection Technologies, Inc. The tracking system uses the Polhemus Isotrak, providing (x, y, z, azimuth, elevation, and roll) information on the user's head position and orientation 60 times per second. In combination with a modified Nintendo Power Glove, this system provides a full-functionality virtual reality/simulation system. Using two host 80386 computers, real-time wire frame images can be produced. Other virtual reality systems require roughly $250,000 in hardware, while this one requires only $5,000. Stereo is particularly useful for this system because shading or occlusion cannot be used as depth cues.

  15. Head Mounted Displays for Virtual Reality

    DTIC Science & Technology

    1993-02-01

    Produce an Image of Infinity ... The Naval Ocean Systems Center HMD with Front-Mounted CRTs ... The VR Group HMD with Side-Mounted CRTs. The Image is...Convergence Angles ... INTRODUCTION: One of the goals in the development of Virtual Reality (VR) is to achieve "total immersion" where one...become transported out of the real world and into the virtual world. The developers of VR have utilized the head mounted display (HMD) as a means of

  16. Cloud-based image sharing network for collaborative imaging diagnosis and consultation

    NASA Astrophysics Data System (ADS)

    Yang, Yuanyuan; Gu, Yiping; Wang, Mingqing; Sun, Jianyong; Li, Ming; Zhang, Weiqiang; Zhang, Jianguo

    2018-03-01

    In this presentation, we present a new approach to designing a cloud-based image sharing network for collaborative imaging diagnosis and consultation over the Internet, which enables radiologists, specialists, and physicians located at different sites to collaboratively and interactively perform imaging diagnosis or consultation for difficult or emergency cases. The designed network combines a regional RIS, grid-based image distribution management, an integrated video conferencing system, and multi-platform interactive image display devices, together with secured messaging and data communication. There are three kinds of components in the network: edge servers, a grid-based imaging document registry and repository, and multi-platform display devices. This network has been deployed on a public Alibaba cloud platform since March 2017 and used for small lung nodule and early-stage lung cancer diagnosis services between the radiology departments of Huadong Hospital in Shanghai and the First Hospital of Jiaxing in Zhejiang Province.

  17. Progress in 3D imaging and display by integral imaging

    NASA Astrophysics Data System (ADS)

    Martinez-Cuenca, R.; Saavedra, G.; Martinez-Corral, M.; Pons, A.; Javidi, B.

    2009-05-01

    Three-dimensionality is currently considered an important added value in imaging devices, and therefore the search for an optimum 3D imaging and display technique is a hot topic that is attracting important research efforts. As their main added value, 3D monitors should provide the observers with different perspectives of a 3D scene by simply varying the head position. Three-dimensional imaging techniques have the potential to establish a future mass-market in the fields of entertainment and communications. Integral imaging (InI), which can capture true 3D color images, has been seen as the right technology for 3D viewing by audiences of more than one person. Due to its advanced degree of development, InI technology could be ready for commercialization in the coming years. This development is the result of a strong research effort performed over the past few years by many groups. Since integral imaging is still an emerging technology, the first aim of the "3D Imaging and Display Laboratory" at the University of Valencia has been the realization of a thorough study of the principles that govern its operation. It is remarkable that some of these principles have been recognized and characterized by our group. Other contributions of our research have aimed at overcoming some of the classical limitations of InI systems, such as the limited depth of field (in pickup and in display), the poor axial and lateral resolution, the pseudoscopic-to-orthoscopic conversion, the production of 3D images with continuous relief, and the limited range of viewing angles of InI monitors.

  18. Role of HIS/RIS DICOM interfaces in the integration of imaging into the Department of Veterans Affairs healthcare enterprise

    NASA Astrophysics Data System (ADS)

    Kuzmak, Peter M.; Dayhoff, Ruth E.

    1998-07-01

    The U.S. Department of Veterans Affairs is integrating imaging into the healthcare enterprise using the Digital Imaging and Communication in Medicine (DICOM) standard protocols. Image management is directly integrated into the VistA Hospital Information System (HIS) software and clinical database. Radiology images are acquired via DICOM, and are stored directly in the HIS database. Images can be displayed on low-cost clinician's workstations throughout the medical center. High-resolution diagnostic quality multi-monitor VistA workstations with specialized viewing software can be used for reading radiology images. DICOM has played critical roles in the ability to integrate imaging functionality into the Healthcare Enterprise. Because of its openness, it allows the integration of system components from commercial and non-commercial sources to work together to provide functional cost-effective solutions. Two approaches are used to acquire and handle images within the radiology department. At some VA Medical Centers, DICOM is used to interface a commercial Picture Archiving and Communications System (PACS) to the VistA HIS. At other medical centers, DICOM is used to interface the image producing modalities directly to the image acquisition and display capabilities of VistA itself. Both of these approaches use a small set of DICOM services that has been implemented by VistA to allow patient and study text data to be transmitted to image producing modalities and the commercial PACS, and to enable images and study data to be transferred back.

  19. Optical rotation compensation for a holographic 3D display with a 360 degree horizontal viewing zone.

    PubMed

    Sando, Yusuke; Barada, Daisuke; Yatagai, Toyohiko

    2016-10-20

    A method for a continuous optical rotation compensation in a time-division-based holographic three-dimensional (3D) display with a rotating mirror is presented. Since the coordinate system of wavefronts after the mirror reflection rotates about the optical axis along with the rotation angle, compensation or cancellation is absolutely necessary to fix the reconstructed 3D object. In this study, we address this problem by introducing an optical image rotator based on a right-angle prism that rotates synchronously with the rotating mirror. The optical and continuous compensation reduces the occurrence of duplicate images, which leads to the improvement of the quality of reconstructed images. The effect of the optical rotation compensation is experimentally verified and a demonstration of holographic 3D display with the optical rotation compensation is presented.

  20. Display challenges resulting from the use of wide field of view imaging devices

    NASA Astrophysics Data System (ADS)

    Petty, Gregory J.; Fulton, Jack; Nicholson, Gail; Seals, Ean

    2012-06-01

    As focal plane array technologies advance and imagers increase in resolution, display technology must outpace the imaging improvements in order to adequately represent the complete data collection. Typical display devices tend to have an aspect ratio similar to 4:3 or 16:9; however, a breed of Wide Field of View (WFOV) imaging devices exists that skews from the norm with aspect ratios as high as 5:1. This particular quality, when coupled with a high spatial resolution, presents a unique challenge for display devices. Standard display devices must choose between resizing the image data to fit the display and displaying the image data at native resolution while truncating potentially important information. The problem compounds when considering the applications; WFOV high-situational-awareness imagers are sought for space-limited military vehicles. Tradeoffs between these issues are assessed with respect to the image quality of the WFOV sensor.

  1. Helmet-mounted displays in long-range-target visual acquisition

    NASA Astrophysics Data System (ADS)

    Wilkins, Donald F.

    1999-07-01

    Aircrews have always sought a tactical advantage within the visual range (WVR) arena -- usually defined as 'see the opponent first.' Even with radar and identification friend-or-foe (IFF) systems, the pilot who visually acquires his opponent first has a significant advantage. The Helmet Mounted Cueing System (HMCS) equipped with a camera offers an opportunity to correct the problems with the previous approaches. By utilizing real-time image enhancement techniques and feeding the image to the pilot on the HMD, the target can be visually acquired well beyond the range provided by the unaided eye. This paper will explore the camera and display requirements for such a system and place those requirements within the context of other requirements, such as weight.

  2. Clinical challenges associated with incorporation of nonradiology images into the electronic medical record

    NASA Astrophysics Data System (ADS)

    Siegel, Eliot L.; Reiner, Bruce I.

    2001-08-01

    To date, the majority of Picture Archival and Communication Systems (PACS) have been utilized only for capture, storage, and display of radiology and in some cases, nuclear medicine images. Medical images for other subspecialty areas are currently stored in local, independent systems, which typically are not accessible throughout the healthcare enterprise and do not communicate with other hospital information or image management systems. It is likely that during the next few years, healthcare centers will expand PAC system capability to incorporate these multimedia data or alternatively, hospital-wide electronic patient record systems will be able to provide this function.

  3. Pilot Task Profiles, Human Factors, And Image Realism

    NASA Astrophysics Data System (ADS)

    McCormick, Dennis

    1982-06-01

    Computer Image Generation (CIG) visual systems provide real-time scenes for state-of-the-art flight training simulators. The visual system requires a greater understanding of training tasks, human factors, and the concept of image realism to produce an effective and efficient training scene than is required by other types of visual systems. Image realism must be defined in terms of pilot visual information requirements. Human factors analysis of training and perception is necessary to determine the pilot's information requirements. System analysis then determines how the CIG and display device can best provide essential information to the pilot. This analysis procedure ensures optimum training effectiveness and system performance.

  4. Optimization of electro-optical parameters of LCD for advertising systems

    NASA Astrophysics Data System (ADS)

    Olifierczuk, Marek; Zielinski, Jerzy; Klosowicz, Stanislaw J.

    1998-02-01

    An analysis of the optimization of a negative-image twisted nematic LCD is presented. Theoretical considerations are confirmed by experimental results. The effect of material parameters and technology on the contrast ratio and display dynamics is given, and the effect in a TN display with black dye is presented.

  5. Simplified Night Sky Display System

    NASA Technical Reports Server (NTRS)

    Castellano, Timothy P.

    2010-01-01

    A document describes a simple night sky display system that is portable, lightweight, and includes, at most, four components in its simplest configuration. The total volume of this system is no more than 10^6 cm^3 in a disassembled state, and it weighs no more than 20 kilograms. The four basic components are a computer, a projector, a spherical light-reflecting first surface and mount, and a spherical second surface for display. The computer has temporary or permanent memory that contains at least one signal representing one or more images of a portion of the sky when viewed from an arbitrary position, and at a selected time. The first surface reflector is spherical and receives and reflects the image from the projector onto the second surface, which is shaped like a hemisphere. This system may be used to simulate selected portions of the night sky, preserving the appearance and kinesthetic sense of the celestial sphere surrounding the Earth or any other point in space. These points will then show motions of planets, stars, galaxies, nebulae, and comets that are visible from that position. The images may be motionless, or move with the passage of time. The array of images presented, and vantage points in space, are limited only by the computer software that is available, or can be developed. An optional approach is to have the screen (second surface) self-inflate by means of gas within the enclosed volume, and then self-regulate that gas in order to support itself without any other mechanical support.

  6. Psychophysical Evaluation of Three-Dimensional Auditory Displays

    NASA Technical Reports Server (NTRS)

    Wightman, Frederic L. (Principal Investigator)

    1995-01-01

    This report describes the progress made during the first year of a three-year Cooperative Research Agreement (CRA NCC2-542). The CRA proposed a program of applied psychophysical research designed to determine the requirements and limitations of three-dimensional (3-D) auditory display systems. These displays present synthesized stimuli to a pilot or virtual workstation operator that evoke auditory images at predetermined positions in space. The images can be either stationary or moving. In previous years, we completed a number of studies that provided data on listeners' abilities to localize stationary sound sources with 3-D displays. The current focus is on the use of 3-D displays in 'natural' listening conditions, which include listeners' head movements, moving sources, multiple sources and 'echoic' sources. The results of our research on two of these topics, the role of head movements and the role of echoes and reflections, were reported in the most recent Semi-Annual Progress Report (Appendix A). In the period since the last Progress Report we have been studying a third topic, the localizability of moving sources. The results of this research are described. The fidelity of a virtual auditory display is critically dependent on precise measurement of the listener's Head-Related Transfer Functions (HRTFs), which are used to produce the virtual auditory images. We continue to explore methods for improving our HRTF measurement technique. During this reporting period we compared HRTFs measured using our standard open-canal probe tube technique and HRTFs measured with the closed-canal insert microphones from the Crystal River Engineering Snapshot system.

  7. Planetary Education and Outreach Using the NOAA Science on a Sphere

    NASA Technical Reports Server (NTRS)

    Simon-Miller, A. A.; Williams, D. R.; Smith, S. M.; Friedlander, J. S.; Mayo, L. A.; Clark, P. E.; Henderson, M. A.

    2011-01-01

    Science On a Sphere (SOS) is a large visualization system, developed by the National Oceanic and Atmospheric Administration (NOAA), that uses computers running Red Hat Linux and four video projectors to display animated data onto the outside of a sphere. Said another way, SOS is a stationary globe that can show dynamic, animated images in spherical form. Visualizations of cylindrical data maps show planets, their atmospheres, oceans, and land in very realistic form. The SOS system uses 4 video projectors to display images onto the sphere. Each projector is driven by a separate computer, and a fifth computer is used to control the operation of the display computers. Each computer is a relatively powerful PC with a high-end graphics card. The video projectors have native XGA resolution. The projectors are placed at the corners of a 30' x 30' square with a 68" carbon fiber sphere suspended in the center of the square. The equator of the sphere is typically located 86" off the floor. SOS uses common image formats such as JPEG or TIFF in a very specific, but simple form; the images are plotted on an equatorial cylindrical equidistant projection, or as it is commonly known, a latitude/longitude grid, where the image is twice as wide as it is high (rectangular). 2048x1024 is the minimum usable spatial resolution without some noticeable pixelation. Labels and text can be applied within the image, or using a timestamp-like feature within the SOS system software. There are two basic modes of operation for SOS: displaying a single image or an animated sequence of frames. The frame or frames can be set up to rotate or tilt, as in a planetary rotation. Sequences of images that animate through time produce a movie visualization, with or without an overlain soundtrack. After the images are processed, SOS will display the images in sequence and play them like a movie across the entire sphere surface. Movies can be of any arbitrary length, limited mainly by disk space, and can be animated at frame rates up to 30 frames per second. Transitions, special effects, and other computer graphics techniques can be added to a sequence through the use of off-the-shelf software, like Final Cut Pro. However, one drawback is that the Sphere cannot be used in the same manner as a flat movie screen; images cannot be pushed to a "side", a highlighted area must be viewable to all sides of the room simultaneously, and some transitions do not work as well as others. We discuss these issues and workarounds in our poster.
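
    As a practical illustration of the format requirement above, the sketch below (Python with the Pillow imaging library, which is an assumption, and placeholder file names) resamples an arbitrary source map to the 2:1 latitude/longitude frame SOS expects.

        from PIL import Image   # Pillow, assumed; any resampling tool would do

        def make_sos_frame(src_path, dst_path, width=2048):
            # SOS wants plain JPEG/TIFF frames on an equatorial cylindrical
            # equidistant (lat/long) grid, twice as wide as high; 2048 x 1024 is
            # about the minimum before pixelation becomes noticeable.
            img = Image.open(src_path).convert("RGB")
            frame = img.resize((width, width // 2), Image.LANCZOS)
            frame.save(dst_path)

        # make_sos_frame("mars_mola_map.tif", "mars_sos_2048.jpg")   # placeholder names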

  8. The Comparison Of Dome And HMD Delivery Systems: A Case Study

    NASA Technical Reports Server (NTRS)

    Chen, Jian; Harm, Deborah L.; Loftin, R. Bowen; Tyalor, Laura C.; Leiss, Ernst L.

    2002-01-01

    For effective astronaut training applications, choosing the right display devices to present images is crucial. In order to assess what devices are appropriate, it is important to design a successful virtual environment for a comparison study of the display devices. We present a comprehensive system, a Virtual environment testbed (VET), for the comparison of Dome and Head Mounted Display (HMD) systems on an SGI Onyx workstation. By writing codelets, we allow a variety of virtual scenarios and subjects' information to be loaded without programming or changing the code. This is part of an ongoing research project conducted by the NASA / JSC.

  9. Military display performance parameters

    NASA Astrophysics Data System (ADS)

    Desjardins, Daniel D.; Meyer, Frederick

    2012-06-01

    The military display market is analyzed in terms of four of its segments: avionics, vetronics, dismounted soldier, and command and control. Requirements are summarized for a number of technology-driving parameters, to include luminance, night vision imaging system compatibility, gray levels, resolution, dimming range, viewing angle, video capability, altitude, temperature, shock and vibration, etc., for direct-view and virtual-view displays in cockpits and crew stations. Technical specifications are discussed for selected programs.

  10. Combat Vehicle Command and Control System Architecture Overview

    DTIC Science & Technology

    1994-10-01

    inserted in the software. • Interactive interface displays and controls were prepared using rapidly prototyped software and were retained at the MWTB for...being simulated • controls, sensor displays, and out-the-window displays for the crew • computer image generators (CIGs) for out-the-window and...black hot viewing modes. The commander may access a number of capabilities of the CITV simulation, described below, from controls located around the

  11. CALIPSO: an interactive image analysis software package for desktop PACS workstations

    NASA Astrophysics Data System (ADS)

    Ratib, Osman M.; Huang, H. K.

    1990-07-01

    The purpose of this project is to develop a low-cost workstation for quantitative analysis of multimodality images using a Macintosh II personal computer. In the current configuration the Macintosh operates as a stand-alone workstation where images are imported either from a central PACS server through a standard Ethernet network or recorded through a video digitizer board. The CALIPSO software developed contains a large variety of basic image display and manipulation tools. We focused our effort, however, on the design and implementation of quantitative analysis methods that can be applied to images from different imaging modalities. Analysis modules currently implemented include geometric and densitometric volumes and ejection fraction calculation from radionuclide and cine-angiograms, Fourier analysis of cardiac wall motion, vascular stenosis measurement, color-coded parametric display of regional flow distribution from dynamic coronary angiograms, and automatic analysis of myocardial distribution of radiolabelled tracers from tomoscintigraphic images. Several of these analysis tools were selected because they use similar color-coded and parametric display methods to communicate quantitative data extracted from the images. Developments of Picture Archiving and Communication Systems (PACS) in the clinical environment allow physicians and radiologists to assess radiographic images directly through imaging workstations. This convenient access to the images is often limited by the number of workstations available, due in part to their high cost. There is also an increasing need for quantitative analysis of the images.

  12. Thermal Protection System Imagery Inspection Management System -TIIMS

    NASA Technical Reports Server (NTRS)

    Goza, Sharon; Melendrez, David L.; Henningan, Marsha; LaBasse, Daniel; Smith, Daniel J.

    2011-01-01

    TIIMS is used during the inspection phases of every mission to provide quick visual feedback, detailed inspection data, and determination to the mission management team. This system consists of a visual Web page interface, an SQL database, and a graphical image generator. These combine to allow a user to quickly ascertain the status of the inspection process and the current determination of any problem zones. The TIIMS system allows inspection engineers to enter their determinations into a database and to link pertinent images and video to those database entries. The database then assigns criteria to each zone and tile, and via query, sends the information to a graphical image generation program. Using the official TIPS database tile positions and sizes, the graphical image generation program creates images of the current status of the orbiter, coloring zones and tiles based on a predefined key code. These images are then displayed on a Web page using customized JAVA scripts to display the appropriate zone of the orbiter based on the location of the user's cursor. The close-up graphic and database entry for that particular zone can then be seen by selecting the zone. This page contains links into the database to access the images used by the inspection engineer when they make the determination entered into the database. Status for the inspection zones changes as determinations are refined and is shown by the appropriate color code.

  13. Immersive cyberspace system

    NASA Technical Reports Server (NTRS)

    Park, Brian V. (Inventor)

    1997-01-01

    An immersive cyberspace system is presented which provides visual, audible, and vibrational inputs to a subject remaining in neutral immersion, and also provides for subject control input. The immersive cyberspace system includes a relaxation chair and a neutral immersion display hood. The relaxation chair supports a subject positioned thereupon, and places the subject in position which merges a neutral body position, the position a body naturally assumes in zero gravity, with a savasana yoga position. The display hood, which covers the subject's head, is configured to produce light images and sounds. An image projection subsystem provides either external or internal image projection. The display hood includes a projection screen moveably attached to an opaque shroud. A motion base supports the relaxation chair and produces vibrational inputs over a range of about 0-30 Hz. The motion base also produces limited translation and rotational movements of the relaxation chair. These limited translational and rotational movements, when properly coordinated with visual stimuli, constitute motion cues which create sensations of pitch, yaw, and roll movements. Vibration transducers produce vibrational inputs from about 20 Hz to about 150 Hz. An external computer, coupled to various components of the immersive cyberspace system, executes a software program and creates the cyberspace environment. One or more neutral hand posture controllers may be coupled to the external computer system and used to control various aspects of the cyberspace environment, or to enter data during the cyberspace experience.

  14. Advances in display technology III; Proceedings of the Meeting, Los Angeles, CA, January 18, 19, 1983

    NASA Astrophysics Data System (ADS)

    Schlam, E.

    1983-01-01

    Human factors in visible displays are discussed, taking into account an introduction to color vision, a laser optometric assessment of visual display viewability, the quantification of color contrast, human performance evaluations of digital image quality, visual problems of office video display terminals, and contemporary problems in airborne displays. Other topics considered are related to electroluminescent technology, liquid crystal and related technologies, plasma technology, and display terminal and systems. Attention is given to the application of electroluminescent technology to personal computers, electroluminescent driving techniques, thin film electroluminescent devices with memory, the fabrication of very large electroluminescent displays, the operating properties of thermally addressed dye switching liquid crystal display, light field dichroic liquid crystal displays for very large area displays, and hardening military plasma displays for a nuclear environment.

  15. Development and evaluation of an airplane electronic display format aligned with the inertial velocity vector

    NASA Technical Reports Server (NTRS)

    Steinmetz, G. G.

    1986-01-01

    The development of an electronic primary flight display format aligned with the aircraft velocity vector, a simulation evaluation comparing this format with an electronic attitude-aligned primary flight display format, and a flight evaluation of the velocity-vector-aligned display format are described. Earlier tests in turbulent conditions with the electronic attitude-aligned display format had exhibited unsteadiness. A primary objective of aligning the display format with the velocity vector was to take advantage of a velocity-vector control-wheel steering system to provide steadiness of display during turbulent conditions. Better situational awareness under crosswind conditions was also achieved. The evaluation task was a curved, descending approach with turbulent and crosswind conditions. Primary flight display formats contained computer-drawn perspective runway images and flight-path angle information. The flight tests were conducted aboard the NASA Transport Systems Research Vehicle (TSRV). Comparative results of the simulation and flight tests were principally obtained from subjective commentary. Overall, the pilots preferred the display format aligned with the velocity vector.

  16. Back-to-back optical coherence tomography-ultrasound probe for co-registered three-dimensional intravascular imaging with real-time display

    NASA Astrophysics Data System (ADS)

    Li, Jiawen; Ma, Teng; Jing, Joseph; Zhang, Jun; Patel, Pranav M.; Shung, K. Kirk; Zhou, Qifa; Chen, Zhongping

    2014-03-01

    We have developed a novel integrated optical coherence tomography (OCT)-intravascular ultrasound (IVUS) probe, with a 1.5-mm-long rigid part and a 0.9-mm outer diameter, for real-time intracoronary imaging of atherosclerotic plaques and guiding interventional procedures. By placing the OCT ball lens and the 45-MHz single-element IVUS transducer back-to-back at the same axial position, this probe can provide automatically co-registered, co-axial OCT-IVUS imaging. To demonstrate its capability, 3D OCT-IVUS imaging of a pig's coronary artery, displayed in polar coordinates in real time, as well as images of two major types of advanced plaques in human cadaver coronary segments, was obtained using this probe and our upgraded system. Histology validation is also presented.
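
    A common companion step to the polar-coordinate display mentioned above is scan conversion of the (depth x angle) A-line data into a Cartesian cross-section. The following is a generic, illustrative Python/NumPy sketch of that step, not code from the authors' system; the array layout, output size, and interpolation choices are assumptions.

        import numpy as np
        from scipy.ndimage import map_coordinates

        def scan_convert(polar_frame, out_size=512):
            """polar_frame: 2-D array, rows = depth samples, columns = A-line angles."""
            n_depth, n_angles = polar_frame.shape
            # Duplicate the first A-line at the end so interpolation wraps at 360 degrees.
            wrapped = np.concatenate([polar_frame, polar_frame[:, :1]], axis=1)
            half = out_size // 2
            y, x = np.mgrid[-half:half, -half:half].astype(np.float64)
            radius = np.hypot(x, y) * (n_depth - 1) / (half - 1)   # pixel radius -> depth index
            theta = np.mod(np.arctan2(y, x), 2.0 * np.pi)          # 0 .. 2*pi
            angle_idx = theta * n_angles / (2.0 * np.pi)           # angle -> A-line index
            cart = map_coordinates(wrapped, [radius, angle_idx], order=1, mode="nearest")
            cart[radius > n_depth - 1] = 0                         # blank pixels beyond imaged depth
            return cart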

  17. Ultrasonic Data Display and Analysis System Developed (Including Fuzzy Logic Analysis) for the Windows-Based PC

    NASA Technical Reports Server (NTRS)

    Lovelace, Jeffrey J.; Cios, Krzysztof J.; Roth, Don J.; Cao, Wei N.

    2001-01-01

    Post-Scan Interactive Data Display (PSIDD) III is a user-oriented Windows-based system that facilitates the display and comparison of ultrasonic contact measurement data obtained at NASA Glenn Research Center's Ultrasonic Nondestructive Evaluation measurement facility. The system is optimized to compare ultrasonic measurements made at different locations within a material or at different stages of material degradation. PSIDD III provides complete analysis of the primary waveforms in the time and frequency domains along with the calculation of several frequency-dependent properties, including phase velocity and attenuation coefficient, and several frequency-independent properties, such as the cross-correlation velocity. The system allows image generation for all the frequency-dependent properties at any available frequency (limited by the bandwidth used in the scans) and for any of the frequency-independent properties. From ultrasonic contact scans, areas of interest on an image can be studied with regard to underlying raw waveforms and derived ultrasonic properties by simply selecting the point on the image. The system offers various modes of in-depth comparison between scan points. Up to five scan points can be selected for comparative analysis at once. The system was developed with Borland Delphi software (Visual Pascal) and is based on an SQL database. It is ideal for the classification of material properties or the location of microstructure variations in materials. Along with the ultrasonic contact measurement software that it is partnered with, this system is technology ready and can be transferred to users worldwide.
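
    As an illustration of one of the frequency-independent properties mentioned above, the sketch below estimates a cross-correlation velocity from two successive back-surface echoes. It is a minimal Python sketch, not PSIDD code (which is written in Borland Delphi); the variable names, the assumption of equal-length echo windows, and the known sample thickness and sampling rate are all placeholders.

        import numpy as np

        def cross_correlation_velocity(b1, b2, thickness_m, fs_hz):
            """Velocity from the delay between two successive back-surface echoes.

            b1, b2      : equal-length windows around the first and second echoes
            thickness_m : sample thickness in metres
            fs_hz       : sampling rate in Hz
            """
            b1 = b1 - np.mean(b1)
            b2 = b2 - np.mean(b2)
            xcorr = np.correlate(b2, b1, mode="full")
            lag = np.argmax(xcorr) - (len(b1) - 1)   # samples; positive if b2 lags b1
            delay_s = lag / fs_hz
            return 2.0 * thickness_m / delay_s       # one extra round trip through the sample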

  18. Interpretation of medical imaging data with a mobile application: a mobile digital imaging processing environment.

    PubMed

    Lin, Meng Kuan; Nicolini, Oliver; Waxenegger, Harald; Galloway, Graham J; Ullmann, Jeremy F P; Janke, Andrew L

    2013-01-01

    Digital Imaging Processing (DIP) requires data extraction and output from a visualization tool to be consistent. Data handling and transmission between the server and a user is a systematic process in service interpretation. The use of integrated medical services for management and viewing of imaging data in combination with a mobile visualization tool can be greatly facilitated by data analysis and interpretation. This paper presents an integrated mobile application and DIP service, called M-DIP. The objective of the system is to (1) automate the direct data tiling, conversion, and pre-tiling of brain images from Medical Imaging NetCDF (MINC) and Neuroimaging Informatics Technology Initiative (NIFTI) to RAW formats; (2) speed up querying of imaging measurements; and (3) display high-level, three-dimensional images in real-world coordinates. In addition, M-DIP provides the ability to work on a mobile or tablet device without any software installation using web-based protocols. M-DIP implements a three-level architecture: a relational middle-layer database, a stand-alone DIP server, and a mobile application logic layer realizing user interpretation for direct querying and communication. This imaging software has the ability to display biological imaging data at multiple zoom levels and to increase its quality to meet users' expectations. Interpretation of bioimaging data is facilitated by an interface analogous to online mapping services using real-world coordinate browsing. This allows mobile devices to display multiple datasets simultaneously from a remote site. M-DIP can be used as a measurement repository that can be accessed by any network environment, such as a portable mobile or tablet device. In addition, this system, in combination with mobile applications, establishes a virtualization tool in the neuroinformatics field to speed interpretation services.
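
    The pre-tiling step described above can be illustrated with a short sketch: each 2-D slice is cut into fixed-size tiles at successive zoom levels, map-viewer style. This Python/NumPy sketch is not M-DIP's implementation; the tile size, level count, and in-memory representation are assumptions, and the MINC/NIFTI-to-RAW conversion is omitted.

        import numpy as np

        def build_tile_pyramid(slice_2d, tile=256, levels=4):
            """Return {level: {(row, col): tile_array}}; level 0 is full resolution,
            each further level halves the resolution (as in map-style viewers)."""
            pyramid = {}
            img = np.asarray(slice_2d, dtype=np.float32)
            for level in range(levels):
                tiles = {}
                rows = int(np.ceil(img.shape[0] / tile))
                cols = int(np.ceil(img.shape[1] / tile))
                for r in range(rows):
                    for c in range(cols):
                        tiles[(r, c)] = img[r * tile:(r + 1) * tile, c * tile:(c + 1) * tile]
                pyramid[level] = tiles
                img = img[::2, ::2]   # crude 2x downsample feeding the next zoom level
            return pyramid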

  19. Interpretation of Medical Imaging Data with a Mobile Application: A Mobile Digital Imaging Processing Environment

    PubMed Central

    Lin, Meng Kuan; Nicolini, Oliver; Waxenegger, Harald; Galloway, Graham J.; Ullmann, Jeremy F. P.; Janke, Andrew L.

    2013-01-01

    Digital Imaging Processing (DIP) requires data extraction and output from a visualization tool to be consistent. Data handling and transmission between the server and a user is a systematic process in service interpretation. The use of integrated medical services for management and viewing of imaging data in combination with a mobile visualization tool can be greatly facilitated by data analysis and interpretation. This paper presents an integrated mobile application and DIP service, called M-DIP. The objective of the system is to (1) automate the direct data tiling, conversion, and pre-tiling of brain images from Medical Imaging NetCDF (MINC) and Neuroimaging Informatics Technology Initiative (NIFTI) to RAW formats; (2) speed up querying of imaging measurements; and (3) display high-level, three-dimensional images in real-world coordinates. In addition, M-DIP provides the ability to work on a mobile or tablet device without any software installation using web-based protocols. M-DIP implements a three-level architecture: a relational middle-layer database, a stand-alone DIP server, and a mobile application logic layer realizing user interpretation for direct querying and communication. This imaging software has the ability to display biological imaging data at multiple zoom levels and to increase its quality to meet users' expectations. Interpretation of bioimaging data is facilitated by an interface analogous to online mapping services using real-world coordinate browsing. This allows mobile devices to display multiple datasets simultaneously from a remote site. M-DIP can be used as a measurement repository that can be accessed by any network environment, such as a portable mobile or tablet device. In addition, this system, in combination with mobile applications, establishes a virtualization tool in the neuroinformatics field to speed interpretation services. PMID:23847587

  20. Environmental fog/rain visual display system for aircraft simulators

    NASA Technical Reports Server (NTRS)

    Chase, W. D. (Inventor)

    1982-01-01

    An environmental fog/rain visual display system for aircraft simulators is described. The electronic elements of the system include a real-time digital computer, a calligraphic color display which simulates landing lights of selectable intensity, and a color television camera for producing a moving color display of the airport runway as depicted on a model terrain board. The mechanical simulation elements of the system include an environmental chamber which can produce natural fog, nonhomogeneous fog, rain and fog combined, or rain only. A pilot looking through the aircraft windscreen views, through the fog and/or rain generated in the environmental chamber, a viewing screen carrying the simulated color image of the airport runway, and observes a very real simulation of an actual runway as it would appear through fog and/or rain.

  1. Web-based segmentation and display of three-dimensional radiologic image data.

    PubMed

    Silverstein, J; Rubenstein, J; Millman, A; Panko, W

    1998-01-01

    In many clinical circumstances, viewing sequential radiological image data as three-dimensional models is proving beneficial. However, designing customized computer-generated radiological models is beyond the scope of most physicians, due to specialized hardware and software requirements. We have created a simple method for Internet users to remotely construct and locally display three-dimensional radiological models using only a standard web browser. Rapid model construction is achieved by distributing the hardware intensive steps to a remote server. Once created, the model is automatically displayed on the requesting browser and is accessible to multiple geographically distributed users. Implementation of our server software on large scale systems could be of great service to the worldwide medical community.
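
    The server-side model construction described above can be sketched as a two-step routine: segment the volume (here by a simple threshold) and extract a triangle mesh that the browser can render. The sketch below uses scikit-image's marching cubes as a stand-in; the bone threshold, voxel spacing, and in-memory volume are assumptions, not details of the published system.

        import numpy as np
        from skimage.measure import marching_cubes

        def volume_to_mesh(volume_hu, threshold_hu=300.0, spacing=(1.0, 1.0, 1.0)):
            """volume_hu: 3-D array of Hounsfield units (slices, rows, cols).
            Returns vertices, faces, and normals of an iso-surface mesh that a
            browser-side viewer could render after serialization."""
            verts, faces, normals, _ = marching_cubes(volume_hu, level=threshold_hu,
                                                      spacing=spacing)
            return verts, faces, normals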

  2. Advanced Interactive Display Formats for Terminal Area Traffic Control

    NASA Technical Reports Server (NTRS)

    Grunwald, Arthur J.; Shaviv, G. E.

    1999-01-01

    This research project deals with an on-line dynamic method for automated viewing-parameter management in perspective displays. Perspective images are optimized such that a human observer will perceive relevant spatial geometrical features with minimal errors. In order to compute the errors with which observers reconstruct spatial features from perspective images, a visual spatial-perception model was formulated. The model was employed as the basis of an optimization scheme aimed at seeking the optimal projection parameter setting. These ideas are implemented in the context of an air traffic control (ATC) application. A concept, referred to as an active display system, was developed. This system uses heuristic rules to identify relevant geometrical features of the three-dimensional air traffic situation. Agile, on-line optimization was achieved by a specially developed, custom-tailored genetic algorithm (GA) designed to deal with the multi-modal characteristics of the objective function and to exploit its time-evolving nature.
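
    A minimal sketch of the genetic-algorithm search is given below. The perception_error() objective stands in for the paper's visual spatial-perception model, and the population size, crossover, and mutation settings are illustrative assumptions rather than the custom-tailored GA described above.

        import numpy as np

        rng = np.random.default_rng(0)

        def genetic_search(perception_error, bounds, pop_size=40, generations=60,
                           mutation_sigma=0.05):
            """Minimize perception_error(params) over box-bounded projection parameters."""
            low, high = np.array(bounds, dtype=float).T              # bounds: [(lo, hi), ...]
            pop = rng.uniform(low, high, size=(pop_size, len(bounds)))
            for _ in range(generations):
                fitness = np.array([perception_error(p) for p in pop])
                parents = pop[np.argsort(fitness)[:pop_size // 2]]   # keep the better half
                # Uniform crossover between randomly paired parents.
                idx = rng.integers(0, len(parents), size=(pop_size, 2))
                mask = rng.random((pop_size, len(bounds))) < 0.5
                children = np.where(mask, parents[idx[:, 0]], parents[idx[:, 1]])
                # Gaussian mutation, clipped back into the parameter box.
                children = children + rng.normal(0.0, mutation_sigma * (high - low),
                                                 children.shape)
                pop = np.clip(children, low, high)
            fitness = np.array([perception_error(p) for p in pop])
            return pop[np.argmin(fitness)]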

  3. Display Device Color Management and Visual Surveillance of Vehicles

    ERIC Educational Resources Information Center

    Srivastava, Satyam

    2011-01-01

    Digital imaging has seen an enormous growth in the last decade. Today users have numerous choices in creating, accessing, and viewing digital image/video content. Color management is important to ensure consistent visual experience across imaging systems. This is typically achieved using color profiles. In this thesis we identify the limitations…

  4. Software Graphical User Interface For Analysis Of Images

    NASA Technical Reports Server (NTRS)

    Leonard, Desiree M.; Nolf, Scott R.; Avis, Elizabeth L.; Stacy, Kathryn

    1992-01-01

    CAMTOOL software provides graphical interface between Sun Microsystems workstation and Eikonix Model 1412 digitizing camera system. Camera scans and digitizes images, halftones, reflectives, transmissives, rigid or flexible flat material, or three-dimensional objects. Users digitize images and select from three destinations: work-station display screen, magnetic-tape drive, or hard disk. Written in C.

  5. 3D image display of fetal ultrasonic images by thin shell

    NASA Astrophysics Data System (ADS)

    Wang, Shyh-Roei; Sun, Yung-Nien; Chang, Fong-Ming; Jiang, Ching-Fen

    1999-05-01

    Owing to its convenience and non-invasive nature, ultrasound has become an essential tool in obstetrics for diagnosing fetal abnormalities during pregnancy. However, the 'noisy and blurry' nature of ultrasound data makes rendering the data a challenge in comparison with MRI and CT images. Besides the speckle noise, unwanted objects usually occlude the target to be observed. In this paper, we propose a new system that can effectively suppress speckle noise, extract the target object, and clearly render the 3D fetal image in almost real time from 3D ultrasound image data. The system is based on a deformable model that detects contours of the object according to the local image features of the ultrasound data. In addition, to accelerate rendering speed, a thin shell is defined from the detected contours to separate the observed organ from unrelated structures. In this way, we can support quick 3D display of ultrasound, and efficient visualization of 3D fetal ultrasound thus becomes possible.
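
    The two pre-rendering steps described above, speckle suppression and thin-shell extraction, can be roughly sketched as follows. The simple intensity threshold stands in for the paper's deformable-model contour detection, and the filter and shell sizes are assumptions.

        import numpy as np
        from scipy.ndimage import median_filter, binary_dilation, binary_erosion

        def thin_shell(volume, threshold=0.3, shell_voxels=3):
            smooth = median_filter(volume, size=3)            # speckle suppression
            obj = smooth > threshold * smooth.max()           # crude stand-in segmentation
            outer = binary_dilation(obj, iterations=shell_voxels)
            inner = binary_erosion(obj, iterations=shell_voxels)
            shell = outer & ~inner                            # voxels near the detected surface
            return np.where(shell, smooth, 0.0)               # keep only the thin shell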

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kimpe, T; Marchessoux, C; Rostang, J

    Purpose: Use of color images in medical imaging has increased significantly over the last few years. As of today there is no agreed standard on how color information should be visualized on medical color displays, resulting in large variability of color appearance and making consistency and quality assurance a challenge. This paper presents a proposal for an extension of the DICOM GSDF towards color. Methods: Visualization needs for several color modalities (multimodality imaging, nuclear medicine, digital pathology, quantitative imaging applications…) have been studied. On this basis a proposal was made for the desired color behavior of medical color display systems, and its behavior and effect on color medical images were analyzed. Results: Several medical color modalities could benefit from perceptually linear color visualization, for similar reasons to those that motivated GSDF for greyscale medical images. An extension of the GSDF (Greyscale Standard Display Function) to color is proposed: CSDF (Color Standard Display Function). CSDF is based on deltaE2000 and offers perceptually linear color behavior. CSDF uses GSDF as its neutral grey behavior. A comparison between sRGB/GSDF and CSDF confirms that CSDF significantly improves perceptual color linearity. Furthermore, results also indicate that, because of the improved perceptual linearity, CSDF has the potential to increase the perceived contrast of clinically relevant color features. Conclusion: There is a need for an extension of GSDF towards color visualization in order to guarantee consistency and quality. A first proposal (CSDF) for such an extension has been made. The behavior of a CSDF-calibrated display has been characterized and compared with sRGB/GSDF behavior. First results indicate that CSDF could have a positive influence on the perceived contrast of clinically relevant color features and could offer benefits for quantitative imaging applications. Authors are employees of Barco Healthcare.
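
    The underlying idea of a perceptually linear color display function can be illustrated with a short sketch: measure the CIEDE2000 step between neighbouring driving levels of a display ramp and build a lookup table that equalizes the cumulative deltaE2000. This is only an illustration of the principle using scikit-image, not the CSDF definition from the proposal; the input is assumed to be a measured, monotone CIELAB ramp.

        import numpy as np
        from skimage.color import deltaE_ciede2000

        def equalize_deltaE(ramp_lab):
            """ramp_lab: (N, 3) measured CIELAB values for driving levels 0..N-1,
            assumed monotone so cumulative deltaE2000 is non-decreasing."""
            steps = deltaE_ciede2000(ramp_lab[:-1], ramp_lab[1:])     # deltaE between neighbours
            cumulative = np.concatenate([[0.0], np.cumsum(steps)])
            target = np.linspace(0.0, cumulative[-1], len(ramp_lab))  # equal perceptual steps
            # LUT: for each requested level, the driving level whose cumulative
            # deltaE2000 best matches the perceptually uniform target.
            lut = np.interp(target, cumulative, np.arange(len(ramp_lab)))
            return np.round(lut).astype(int)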

  7. A photophoretic-trap volumetric display

    NASA Astrophysics Data System (ADS)

    Smalley, D. E.; Nygaard, E.; Squire, K.; van Wagoner, J.; Rasmussen, J.; Gneiting, S.; Qaderi, K.; Goodsell, J.; Rogers, W.; Lindsey, M.; Costner, K.; Monk, A.; Pearson, M.; Haymore, B.; Peatross, J.

    2018-01-01

    Free-space volumetric displays, or displays that create luminous image points in space, are the technology that most closely resembles the three-dimensional displays of popular fiction. Such displays are capable of producing images in ‘thin air’ that are visible from almost any direction and are not subject to clipping. Clipping restricts the utility of all three-dimensional displays that modulate light at a two-dimensional surface with an edge boundary; these include holographic displays, nanophotonic arrays, plasmonic displays, lenticular or lenslet displays and all technologies in which the light scattering surface and the image point are physically separate. Here we present a free-space volumetric display based on photophoretic optical trapping that produces full-colour graphics in free space with ten-micrometre image points using persistence of vision. This display works by first isolating a cellulose particle in a photophoretic trap created by spherical and astigmatic aberrations. The trap and particle are then scanned through a display volume while being illuminated with red, green and blue light. The result is a three-dimensional image in free space with a large colour gamut, fine detail and low apparent speckle. This platform, named the Optical Trap Display, is capable of producing image geometries that are currently unobtainable with holographic and light-field technologies, such as long-throw projections, tall sandtables and ‘wrap-around’ displays.

  8. Advanced Millimeter-Wave Imaging Enhances Security Screening

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sheen, David M.; Bernacki, Bruce E.; McMakin, Douglas L.

    2012-01-12

    Millimeter-wave imaging is rapidly gaining acceptance for passenger screening at airports and other secured facilities. This paper details a number of techniques developed over the last several years including novel image reconstruction and display techniques, polarimetric imaging techniques, array switching schemes, as well as high frequency high bandwidth techniques. Implementation of some of these methods will increase the cost and complexity of the mm-wave security portal imaging systems. RF photonic methods may provide new solutions to the design and development of the sequentially switched linear mm-wave arrays that are the key element in the mm-wave portal imaging systems.

  9. Advanced Millimeter-Wave Security Portal Imaging Techniques

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sheen, David M.; Bernacki, Bruce E.; McMakin, Douglas L.

    2012-04-01

    Millimeter-wave imaging is rapidly gaining acceptance for passenger screening at airports and other secured facilities. This paper details a number of techniques developed over the last several years including novel image reconstruction and display techniques, polarimetric imaging techniques, array switching schemes, as well as high frequency high bandwidth techniques. Implementation of some of these methods will increase the cost and complexity of the mm-wave security portal imaging systems. RF photonic methods may provide new solutions to the design and development of the sequentially switched linear mm-wave arrays that are the key element in the mm-wave portal imaging systems.

  10. Super long viewing distance light homogeneous emitting three-dimensional display

    NASA Astrophysics Data System (ADS)

    Liao, Hongen

    2015-04-01

    Three-dimensional (3D) display technology has continuously been attracting public attention with the progress in today's 3D television and mature display technologies. The primary characteristics of conventional glasses-free autostereoscopic displays, such as spatial resolution, image depths, and viewing angle, are often limited due to the use of optical lenses or optical gratings. We present a 3D display using MEMS-scanning-mechanism-based light homogeneous emitting (LHE) approach and demonstrate that the display can directly generate an autostereoscopic 3D image without the need for optical lenses or gratings. The generated 3D image has the advantages of non-aberration and a high-definition spatial resolution, making it the first to exhibit animated 3D images with image depth of six meters. Our LHE 3D display approach can be used to generate a natural flat-panel 3D display with super long viewing distance and alternative real-time image update.

  11. Emergency CT brain: preliminary interpretation with a tablet device: image quality and diagnostic performance of the Apple iPad.

    PubMed

    Mc Laughlin, Patrick; O'Neill, Siobhan; Fanning, Noel; Mc Garrigle, Anne Marie; O'Connor, Owen J; Wyse, Gerry; Maher, Michael M

    2012-04-01

    Tablet devices have recently been used in radiological image interpretation because they have a display resolution comparable to desktop LCD monitors. We identified a need to examine tablet display performance prior to their use in the preliminary interpretation of radiological images. We compared the spatial and contrast resolution of a commercially available tablet display with a diagnostic-grade 2-megapixel monochrome LCD using a contrast-detail phantom. We also recorded reporting discrepancies, using the ACR RADPEER system, between preliminary interpretation of 100 emergency CT brain examinations on the tablet display and formal review on a diagnostic LCD. Without the ability to zoom, the iPad display performed worse than the diagnostic monochrome display. When the software zoom function was enabled on the tablet device, comparable contrast-detail phantom scores of 163 vs 165 points were achieved. No reporting discrepancies were encountered during the interpretation of 43 normal examinations and five cases of acute intracranial hemorrhage. There were seven RADPEER2 (understandable) misses when using the iPad display and 12 with the diagnostic LCD. Use of software zoom on the tablet device improved its contrast-detail phantom score. The tablet allowed satisfactory identification of acute CT brain findings, but additional research will be required to examine the cause of the "understandable" reporting discrepancies that occur when using tablet devices.

  12. High-resolution imaging optomechatronics for precise liquid crystal display module bonding automated optical inspection

    NASA Astrophysics Data System (ADS)

    Ni, Guangming; Liu, Lin; Zhang, Jing; Liu, Juanxiu; Liu, Yong

    2018-01-01

    With the development of the liquid crystal display (LCD) module industry, LCD modules are becoming larger and more precise, which places demanding imaging requirements on automated optical inspection (AOI). Here, we report a high-resolution, clearly focused imaging optomechatronic system for precise LCD module bonding AOI. It achieves high-resolution imaging for LCD module bonding AOI using a line scan camera (LSC) triggered by a linear optical encoder, and self-adaptive focusing over the whole large imaging region using the LSC and a laser displacement sensor, which reduces the requirements on machining, assembly, and motion control of AOI devices. Results show that this system can directly achieve clearly focused imaging for AOI of large LCD module bonding with 0.8-μm image resolution, a 2.65-mm scan imaging width, and a theoretically unlimited imaging width. All of this is significant for AOI in the LCD module industry and in other fields that require imaging of large regions at high resolution.
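
    The self-adaptive focusing logic can be sketched at a very high level: displacement readings taken along the scan axis are converted into per-line focus offsets before each encoder-triggered line is grabbed. The functions below are hypothetical placeholders, not a real camera, sensor, or stage API.

        import numpy as np

        def scan_with_autofocus(n_lines, read_displacement, move_focus_axis, grab_line,
                                reference_um=0.0):
            """read_displacement(i): laser reading (um) at line i    [placeholder]
            move_focus_axis(offset): command a focus correction (um) [placeholder]
            grab_line(i): encoder-triggered line from the LSC        [placeholder]"""
            image_lines = []
            for i in range(n_lines):
                offset = reference_um - read_displacement(i)   # keep the bonding plane in focus
                move_focus_axis(offset)
                image_lines.append(grab_line(i))
            return np.vstack(image_lines)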

  13. NPS assessment of color medical image displays using a monochromatic CCD camera

    NASA Astrophysics Data System (ADS)

    Roehrig, Hans; Gu, Xiliang; Fan, Jiahua

    2012-10-01

    This paper presents an approach to Noise Power Spectrum (NPS) assessment of color medical displays that does not require an expensive imaging colorimeter. Uniform R, G, and B color patterns were shown on the display under study, and images were captured with a high-resolution monochromatic camera. A colorimeter was used to calibrate the camera images. Synthetic intensity images were formed as the weighted sum of the R, G, B and dark-screen images. Finally, the NPS analysis was conducted on the synthetic images. The proposed method replaces an expensive imaging colorimeter for NPS evaluation and also suggests a potential solution for routine color medical display QA/QC in the clinical area, especially when imaging of display devices is desired.
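
    The analysis chain described above can be sketched in a few lines: form the synthetic intensity image as a weighted sum of the R, G, B, and dark-screen captures, then estimate a 2-D NPS from detrended regions of interest. The weights, ROI size, pixel pitch, and normalization below are illustrative assumptions, not the values used in the study.

        import numpy as np

        def synthetic_intensity(r_img, g_img, b_img, dark_img, weights=(0.2, 0.7, 0.1)):
            wr, wg, wb = weights                       # illustrative luminance-like weights
            return wr * r_img + wg * g_img + wb * b_img - dark_img

        def nps_2d(image, roi=128, pixel_pitch_mm=0.1):
            rows, cols = image.shape[0] // roi, image.shape[1] // roi
            spectra = []
            for r in range(rows):
                for c in range(cols):
                    patch = image[r * roi:(r + 1) * roi, c * roi:(c + 1) * roi].astype(float)
                    patch -= patch.mean()              # remove the mean (zero-order detrend)
                    spectra.append(np.abs(np.fft.fftshift(np.fft.fft2(patch))) ** 2)
            # NPS = <|FFT|^2> * (dx * dy) / (Nx * Ny)
            return np.mean(spectra, axis=0) * (pixel_pitch_mm ** 2) / (roi * roi)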

  14. Three-dimensional holographic display of ultrasound computed tomograms

    NASA Astrophysics Data System (ADS)

    Andre, Michael P.; Janee, Helmar S.; Ysrael, Mariana Z.; Hodler, Jeurg; Olson, Linda K.; Leopold, George R.; Schulz, Raymond

    1997-05-01

    Breast ultrasound is a valuable adjunct to mammography but is limited by a very small field of view, particularly with high-resolution transducers necessary for breast diagnosis. We have been developing an ultrasound system based on a diffraction tomography method that provides slices through the breast on a large 20-cm diameter circular field of view. Eight to fifteen images are typically produced in sequential coronal planes from the nipple to the chest wall with either 0.25 or 0.5 mm pixels. As a means to simplify the interpretation of this large set of images, we report experience with 3D life-sized displays of the entire breast of human volunteers using a digital holographic technique. The compound 3D holographic images are produced from the digital image matrix, recorded on 14 X 17 inch transparency and projected on a special white-light viewbox. Holographic visualization of the entire breast has proved to be the preferred method for 3D display of ultrasound computed tomography images. It provides a unique perspective on breast anatomy and may prove useful for biopsy guidance and surgical planning.

  15. Correlations between time required for radiological diagnoses, readers' performance, display environments, and difficulty of cases

    NASA Astrophysics Data System (ADS)

    Gur, David; Rockette, Howard E.; Sumkin, Jules H.; Hoy, Ronald J.; Feist, John H.; Thaete, F. Leland; King, Jill L.; Slasky, B. S.; Miketic, Linda M.; Straub, William H.

    1991-07-01

    In a series of large ROC studies, the authors analyzed the time radiologists took to diagnose PA chest images as a function of observer performance indices (Az), display environments, and difficulty of cases. Board-certified radiologists each interpreted at least 600 images for the presence or absence of one or more of the following abnormalities: interstitial disease, nodule, and pneumothorax. Results indicated that there is large inter-reader variability in the time required to diagnose PA chest images. There is no correlation between a reader's median reading time and his/her performance. Time generally increases as the number of abnormalities on a single image increases and for cases with subtle abnormalities. Results also indicated that, in general, the longer the interpretation time for a specific case (within reader), the further the observer's confidence ratings were from the truth. These findings held regardless of the display mode. These results may have implications with regard to the appropriate methodology for imaging system evaluations and for measurements of radiologist productivity.

  16. Analysis of Interactive Graphics Display Equipment for an Automated Photo Interpretation System.

    DTIC Science & Technology

    1982-06-01

    System provides the hardware and software for a range of graphics processor tasks. The IMAGE System employs the RSX-11M real-time operating system in... One hard copy unit serves up to four work stations. The executive program of the IMAGE System is the DEC RSX-11M real-time operating system. In... picture controller. The PDP 11/34 executes programs concurrently under the RSX-11M real-time operating system. Each graphics program consists of a

  17. General-purpose interface bus for multiuser, multitasking computer system

    NASA Technical Reports Server (NTRS)

    Generazio, Edward R.; Roth, Don J.; Stang, David B.

    1990-01-01

    The architecture of a multiuser, multitasking, virtual-memory computer system intended for the use by a medium-size research group is described. There are three central processing units (CPU) in the configuration, each with 16 MB memory, and two 474 MB hard disks attached. CPU 1 is designed for data analysis and contains an array processor for fast-Fourier transformations. In addition, CPU 1 shares display images viewed with the image processor. CPU 2 is designed for image analysis and display. CPU 3 is designed for data acquisition and contains 8 GPIB channels and an analog-to-digital conversion input/output interface with 16 channels. Up to 9 users can access the third CPU simultaneously for data acquisition. Focus is placed on the optimization of hardware interfaces and software, facilitating instrument control, data acquisition, and processing.

  18. 76 FR 29006 - In the Matter of Certain Motion-Sensitive Sound Effects Devices and Image Display Devices and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-05-19

    ... Effects Devices and Image Display Devices and Components and Products Containing Same; Notice of... United States after importation of certain motion-sensitive sound effects devices and image display... devices and image display devices and components and products containing same that infringe one or more of...

  19. 76 FR 59737 - In the Matter of Certain Digital Photo Frames and Image Display Devices and Components Thereof...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-09-27

    ... Frames and Image Display Devices and Components Thereof; Notice of Institution of Investigation... United States after importation of certain digital photo frames and image display devices and components... certain digital photo frames and image display devices and components thereof that infringe one or more of...

  20. Color matrix display simulation based upon luminance and chromatic contrast sensitivity of early vision

    NASA Technical Reports Server (NTRS)

    Martin, Russel A.; Ahumada, Albert J., Jr.; Larimer, James O.

    1992-01-01

    This paper describes the design and operation of a new simulation model for color matrix display development. It models the physical structure, the signal processing, and the visual perception of static displays, to allow optimization of display design parameters through image quality measures. The model is simple, implemented in the Mathematica computer language, and highly modular. Signal processing modules operate on the original image. The hardware modules describe backlights and filters, the pixel shape, and the tiling of the pixels over the display. Small regions of the displayed image can be visualized on a CRT. Visual perception modules assume static foveal images. The image is converted into cone catches and then into luminance, red-green, and blue-yellow images. A Haar transform pyramid separates the three images into spatial frequency and direction-specific channels. The channels are scaled by weights taken from human contrast sensitivity measurements of chromatic and luminance mechanisms at similar frequencies and orientations. Each channel provides a detectability measure. These measures allow the comparison of images displayed on prospective devices and, by that, the optimization of display designs.
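
    The Haar transform pyramid at the heart of the channel decomposition can be illustrated with a short sketch: each level splits an opponent-color image into an approximation and three orientation-selective detail bands, which would then be weighted by contrast-sensitivity values. This is an illustrative Python/NumPy sketch, not the authors' Mathematica implementation; the level count and the CSF weighting are left as assumptions.

        import numpy as np

        def haar_level(img):
            """One 2-D Haar step: returns (approximation, horizontal, vertical, diagonal)."""
            a = img[0::2, 0::2]; b = img[0::2, 1::2]
            c = img[1::2, 0::2]; d = img[1::2, 1::2]
            approx     = (a + b + c + d) / 4.0   # low-pass band, fed to the next level
            horizontal = (a + b - c - d) / 4.0   # responds to horizontal edges
            vertical   = (a - b + c - d) / 4.0   # responds to vertical edges
            diagonal   = (a - b - c + d) / 4.0
            return approx, horizontal, vertical, diagonal

        def haar_pyramid(img, levels=3):
            """Assumes image dimensions are divisible by 2**levels."""
            approx = np.asarray(img, dtype=float)
            bands = []
            for _ in range(levels):
                approx, h, v, d = haar_level(approx)
                bands.append((h, v, d))          # each band would be scaled by a CSF weight
            return approx, bands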
