Sample records for image display device

  1. 76 FR 29006 - In the Matter of Certain Motion-Sensitive Sound Effects Devices and Image Display Devices and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-05-19

    ... Effects Devices and Image Display Devices and Components and Products Containing Same; Notice of... United States after importation of certain motion-sensitive sound effects devices and image display... devices and image display devices and components and products containing same that infringe one or more of...

  2. Seamless tiled display system

    NASA Technical Reports Server (NTRS)

    Dubin, Matthew B. (Inventor); Larson, Brent D. (Inventor); Kolosowsky, Aleksandra (Inventor)

    2006-01-01

A modular and scalable seamless tiled display apparatus includes multiple display devices, a screen, and multiple lens assemblies. Each display device is subdivided into multiple sections, and each section is configured to display a sectional image. One of the lens assemblies is optically coupled to each of the sections of each of the display devices to project the sectional image displayed on that section onto the screen. The multiple lens assemblies are configured to merge the projected sectional images to form a single tiled image. The projected sectional images may be merged on the screen by magnifying and shifting the images in an appropriate manner. The magnification and shifting of these images eliminates any visual effect on the tiled display that may result from dead-band regions defined between each pair of adjacent sections on each display device and from gaps between multiple display devices.

  3. 76 FR 59737 - In the Matter of Certain Digital Photo Frames and Image Display Devices and Components Thereof...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-09-27

    ... Frames and Image Display Devices and Components Thereof; Notice of Institution of Investigation... United States after importation of certain digital photo frames and image display devices and components... certain digital photo frames and image display devices and components thereof that infringe one or more of...

  4. 78 FR 16707 - Certain Digital Photo Frames and Image Display Devices and Components Thereof; Issuance of a...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-18

    ... Image Display Devices and Components Thereof; Issuance of a Limited Exclusion Order and Cease and Desist... within the United States after importation of certain digital photo frames and image display devices and...: (1) The unlicensed entry of digital photo frames and image display devices and components thereof...

  5. Display challenges resulting from the use of wide field of view imaging devices

    NASA Astrophysics Data System (ADS)

    Petty, Gregory J.; Fulton, Jack; Nicholson, Gail; Seals, Ean

    2012-06-01

As focal plane array technologies advance and imagers increase in resolution, display technology must outpace the imaging improvements in order to adequately represent the complete data collection. Typical display devices tend to have an aspect ratio near 4:3 or 16:9; however, a class of Wide Field of View (WFOV) imaging devices exists that deviates from the norm, with aspect ratios as high as 5:1. This quality, when coupled with high spatial resolution, presents a unique challenge for display devices. Standard display devices must choose between resizing the image data to fit the display and displaying the image data at native resolution, truncating potentially important information. The problem compounds when considering the applications; WFOV high-situational-awareness imagers are sought for space-limited military vehicles. Tradeoffs between these issues are assessed with respect to the image quality of the WFOV sensor.
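The fit-versus-truncate tradeoff described above can be quantified with some simple arithmetic. The following sketch is illustrative only; the 10000×2000 WFOV frame and 1920×1080 display sizes are invented numbers, not figures from the paper.

```python
# Illustrative sketch of the fit-vs-truncate tradeoff: letterboxing a
# 5:1 WFOV frame onto a 16:9 display shrinks the data, while native
# display crops it. All dimensions below are assumptions.

def letterbox_scale(src_w, src_h, dst_w, dst_h):
    """Scale factor that fits the whole source image on the display."""
    return min(dst_w / src_w, dst_h / src_h)

def fit(src_w, src_h, dst_w, dst_h):
    """Displayed size, and the fraction of the display left unused, when fitting."""
    s = letterbox_scale(src_w, src_h, dst_w, dst_h)
    out_w, out_h = round(src_w * s), round(src_h * s)
    wasted = 1 - (out_w * out_h) / (dst_w * dst_h)
    return out_w, out_h, wasted

def truncate(src_w, src_h, dst_w, dst_h):
    """Fraction of source pixels lost when displaying at native resolution."""
    kept = min(src_w, dst_w) * min(src_h, dst_h)
    return 1 - kept / (src_w * src_h)

if __name__ == "__main__":
    # Hypothetical 5:1 WFOV imager (10000 x 2000) on a 1920 x 1080 display
    w, h, wasted = fit(10000, 2000, 1920, 1080)
    print(f"fit: {w}x{h}, {wasted:.0%} of display unused")
    print(f"truncate: {truncate(10000, 2000, 1920, 1080):.0%} of data lost")
```

For these assumed sizes, fitting shrinks each source pixel to about a fifth of its size while leaving most of the display dark, and native display discards roughly nine-tenths of the data, which is the dilemma the abstract describes.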

  6. Interactive display system having a digital micromirror imaging device

    DOEpatents

    Veligdan, James T.; DeSanto, Leonard; Kaull, Lisa; Brewster, Calvin

    2006-04-11

    A display system includes a waveguide optical panel having an inlet face and an opposite outlet face. A projector cooperates with a digital imaging device, e.g. a digital micromirror imaging device, for projecting an image through the panel for display on the outlet face. The imaging device includes an array of mirrors tiltable between opposite display and divert positions. The display positions reflect an image light beam from the projector through the panel for display on the outlet face. The divert positions divert the image light beam away from the panel, and are additionally used for reflecting a probe light beam through the panel toward the outlet face. Covering a spot on the panel, e.g. with a finger, reflects the probe light beam back through the panel toward the inlet face for detection thereat and providing interactive capability.

  7. Image Quality Characteristics of Handheld Display Devices for Medical Imaging

    PubMed Central

    Yamazaki, Asumi; Liu, Peter; Cheng, Wei-Chung; Badano, Aldo

    2013-01-01

Handheld devices such as mobile phones and tablet computers have become widespread, with thousands of available software applications. Recently, handhelds have been proposed as part of medical imaging solutions, especially in emergency medicine, where immediate consultation is required. However, handheld devices differ significantly from medical workstation displays in terms of display characteristics, and the characteristics vary significantly among device types. We investigate the image quality characteristics of various handheld devices with respect to luminance response, spatial resolution, spatial noise, and reflectance. We show that the luminance characteristics of the handheld displays differ from those of workstation displays complying with the grayscale standard target response, suggesting that luminance calibration might be needed. Our results also demonstrate that the spatial characteristics of handhelds can surpass those of medical workstation displays, particularly for recent-generation devices. While a 5-megapixel monochrome workstation display has horizontal and vertical modulation transfer factors of 0.52 and 0.47 at the Nyquist frequency, handheld displays released after 2011 can have values higher than 0.63 at the respective Nyquist frequencies. The noise power spectra for workstation displays are higher than 1.2×10⁻⁵ mm² at 1 mm⁻¹, while handheld displays have values lower than 3.7×10⁻⁶ mm². Reflectance measurements on some of the handheld displays are consistent with measurements for workstation displays, with, in some cases, low specular and diffuse reflectance coefficients. The variability of the characterization results among devices due to the different technological features indicates that image quality varies greatly among handheld display devices. PMID:24236113

  8. Use of mobile devices for medical imaging.

    PubMed

    Hirschorn, David S; Choudhri, Asim F; Shih, George; Kim, Woojin

    2014-12-01

    Mobile devices have fundamentally changed personal computing, with many people forgoing the desktop and even laptop computer altogether in favor of a smaller, lighter, and cheaper device with a touch screen. Doctors and patients are beginning to expect medical images to be available on these devices for consultative viewing, if not actual diagnosis. However, this raises serious concerns with regard to the ability of existing mobile devices and networks to quickly and securely move these images. Medical images often come in large sets, which can bog down a network if not conveyed in an intelligent manner, and downloaded data on a mobile device are highly vulnerable to a breach of patient confidentiality should that device become lost or stolen. Some degree of regulation is needed to ensure that the software used to view these images allows all relevant medical information to be visible and manipulated in a clinically acceptable manner. There also needs to be a quality control mechanism to ensure that a device's display accurately conveys the image content without loss of contrast detail. Furthermore, not all mobile displays are appropriate for all types of images. The smaller displays of smart phones, for example, are not well suited for viewing entire chest radiographs, no matter how small and numerous the pixels of the display may be. All of these factors should be taken into account when deciding where, when, and how to use mobile devices for the display of medical images. Copyright © 2014 American College of Radiology. Published by Elsevier Inc. All rights reserved.

  9. A device-dependent interface for interactive image display

    NASA Technical Reports Server (NTRS)

    Perkins, D. C.; Szczur, M. R.; Owings, J.; Jamros, R. K.

    1984-01-01

    The structure of the device independent Display Management Subsystem (DMS) and the interface routines that are available to the applications programmer for use in developing a set of portable image display utility programs are described.

  10. Image display device in digital TV

    DOEpatents

Choi, Seung Jong [Seoul, KR]

    2006-07-18

Disclosed is an image display device in a digital TV that is capable of carrying out conversion into various kinds of resolution by using single bit map data in the digital TV. The image display device includes: a data processing part for executing bit map conversion, compression, restoration and format-conversion for text data; a memory for storing the bit map data obtained according to the bit map conversion and compression in the data processing part and image data inputted from an arbitrary receiving part, the receiving part receiving one of digital image data and analog image data; an image outputting part for reading the image data from the memory; and a display processing part for mixing the image data read from the image outputting part and the bit map data converted in format by the data processing part. Therefore, the image display device according to the present invention can convert text data in such a manner as to correspond with various resolutions, carry out compression for bit map data, thereby reducing the memory space, and support text data of an HTML format, thereby providing the image with text data of various shapes.

  11. Image change detection systems, methods, and articles of manufacture

    DOEpatents

    Jones, James L.; Lassahn, Gordon D.; Lancaster, Gregory D.

    2010-01-05

Aspects of the invention relate to image change detection systems, methods, and articles of manufacture. According to one aspect, a method of identifying differences between a plurality of images is described. The method includes loading a source image and a target image into memory of a computer, constructing source and target edge images from the source and target images to enable processing of multiband images, displaying the source and target images on a display device of the computer, aligning the source and target edge images, and switching the display between the source image and the target image on the display device to enable identification of differences between the source image and the target image.
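The edge-image comparison step above can be sketched in a few lines. This is not the patented implementation: the patent does not specify its edge construction, so the gradient-magnitude edge map and the threshold below are assumptions, and the toy arrays are invented.

```python
# Rough sketch (not the patented method) of comparing two aligned
# images via their edge maps so that changes stand out.
import numpy as np

def edge_image(img):
    """Gradient-magnitude edge map (an assumed stand-in for the
    patent's unspecified edge construction)."""
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy)

def changed_pixels(source, target, threshold=10.0):
    """Mask of pixels whose edge response differs beyond a threshold."""
    return np.abs(edge_image(source) - edge_image(target)) > threshold

if __name__ == "__main__":
    src = np.zeros((8, 8), dtype=np.uint8)
    tgt = src.copy()
    tgt[3:5, 3:5] = 200          # a new feature appears in the target
    mask = changed_pixels(src, tgt)
    print(mask.sum(), "pixels flagged as changed")
```

In the patented system the operator would then flicker between the two displayed images; a mask like this merely illustrates why aligned edge images make such differences conspicuous.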

  12. Image degradation by glare in radiologic display devices

    NASA Astrophysics Data System (ADS)

    Badano, Aldo; Flynn, Michael J.

    1997-05-01

    No electronic devices are currently available that can display digital radiographs without loss of visual information compared to traditional transilluminated film. Light scattering within the glass faceplate of cathode-ray tube (CRT) devices causes excessive glare that reduces image contrast. This glare, along with ambient light reflection, has been recognized as a significant limitation for radiologic applications. Efforts to control the effect of glare and ambient light reflection in CRTs include the use of absorptive glass and thin film coatings. In the near future, flat panel displays (FPD) with thin emissive structures should provide very low glare, high performance devices. We have used an optical Monte Carlo simulation to evaluate the effect of glare on image quality for typical CRT and flat panel display devices. The trade-off between display brightness and image contrast is described. For CRT systems, achieving good glare ratio requires a reduction of brightness to 30-40 percent of the maximum potential brightness. For FPD systems, similar glare performance can be achieved while maintaining 80 percent of the maximum potential brightness.

  13. Contrast Transmission In Medical Image Display

    NASA Astrophysics Data System (ADS)

    Pizer, Stephen M.; Zimmerman, John B.; Johnston, R. Eugene

    1982-11-01

The display of medical images involves transforming recorded intensities such as CT numbers into perceivable intensities such as combinations of color and luminance. For the viewer to extract the most information about patterns of decreasing and increasing recorded intensity, the display designer must pay attention to three issues: 1) choice of display scale, including its discretization; 2) correction for variations in contrast sensitivity across the display scale due to the observer and the display device (producing an honest display); and 3) contrast enhancement based on the information in the recorded image and its importance, determined by viewing objectives. This paper will present concepts and approaches in all three of these areas. In choosing display scales three properties are important: sensitivity, associability, and naturalness of order. The unit of just noticeable difference (jnd) will be carefully defined. An observer experiment to measure the jnd values across a display scale will be specified. The overall sensitivity provided by a scale as measured in jnd's gives a measure of sensitivity called the perceived dynamic range (PDR). Methods for determining the PDR from the aforementioned jnd values, and PDRs for various grey and pseudocolor scales, will be presented. Methods of achieving sensitivity while retaining associability and naturalness of order with pseudocolor scales will be suggested. For any display device and scale it is useful to compensate for the device and observer by preceding the device with an intensity mapping (lookup table) chosen so that perceived intensity is linear with display-driving intensity. This mapping can be determined from the aforementioned jnd values. With a linearized display it is possible to standardize display devices so that the same image displayed on different devices or scales (e.g. video and hard copy) will be in some sense perceptually equivalent. Furthermore, with a linearized display, it is possible to design contrast enhancement mappings that optimize the transmission of information from the recorded image to the display-driving signal with the assurance that this information will not then be lost by a further nonlinear relation between display-driving and perceived intensity. It is suggested that optimal contrast enhancement mappings are adaptive to the local distribution of recorded intensities.
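The linearization idea above, preceding the device with a lookup table so that equal input steps cover equal numbers of jnd's, can be sketched as follows. This is not the authors' code; the toy response curve standing in for measured jnd data is invented.

```python
# Minimal sketch (not the paper's method) of building a perceptual
# linearization LUT from cumulative jnd measurements. The measured
# response curve below is an invented placeholder.
import numpy as np

def linearizing_lut(cum_jnd):
    """cum_jnd[i] = jnd's accumulated from driving level 0 up to level i
    (monotonically nondecreasing). Returns a LUT such that cum_jnd[lut[k]]
    is approximately linear in the input level k."""
    levels = len(cum_jnd)
    # Target: equal perceptual (jnd) spacing across the input range.
    targets = np.linspace(cum_jnd[0], cum_jnd[-1], levels)
    # For each target jnd count, pick the driving level that reaches it.
    lut = np.searchsorted(cum_jnd, targets).clip(0, levels - 1)
    return lut.astype(np.uint8)

if __name__ == "__main__":
    # Toy display: perceived response grows sublinearly with drive level.
    drive = np.linspace(0, 1, 256)
    cum_jnd = 100 * (drive ** 2.2) ** (1 / 3)   # invented response curve
    lut = linearizing_lut(cum_jnd)
    corrected = cum_jnd[lut]                     # jnd count vs. input level
    print("jnd steps now span:", float(corrected[0]), "to", float(corrected[-1]))
```

With such a LUT in front of the device, the standardization the authors describe follows: two linearized devices given the same driving values traverse the same number of jnd's.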

  14. 77 FR 74220 - Certain Digital Photo Frames and Image Display Devices and Components Thereof; Commission...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-12-13

    ... INTERNATIONAL TRADE COMMISSION [Investigation No. 337-TA-807] Certain Digital Photo Frames and Image Display Devices and Components Thereof; Commission Determination Not To Review an Initial... importation, and the sale within the United States after importation of certain digital photo frames and image...

  15. 76 FR 54251 - Notice of Receipt of Complaint; Solicitation of Comments Relating to the Public Interest

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-08-31

    ... Certain Digital Photo Frames and Image Display Devices and Components Thereof, DN 2842; the Commission is... importation of certain digital photo frames and image display devices and components thereof. The complaint...

  16. Personal graphical communicator.

    PubMed

    Stephens, Michael; Barrett, Steven

    2008-01-01

A device to help a child communicate was requested by an educator. The child cannot read, write, or speak but can recognize symbols and use those symbols to communicate. While this communication works, it doesn't work well in situations where the other person does not know how to use the symbols. For this reason, a device was requested that could display images to the child and play a phrase when an image was chosen. To meet this need, an MP3-player-like device was constructed. The device stores images and MPEG-Layer III (MP3) sound clips on a replaceable Secure Digital (SD) card. The images are displayed on a color Liquid Crystal Display (LCD), where the user is able to skip through images to find the phrase that needs to be said. Once found, pressing the play button plays the sound clip associated with the image. The device is portable and compact for easy use. It uses Universal Serial Bus (USB) to recharge its batteries, communicate with a PC, and update the firmware.

  17. Three-dimensional hologram display system

    NASA Technical Reports Server (NTRS)

    Mintz, Frederick (Inventor); Chao, Tien-Hsin (Inventor); Bryant, Nevin (Inventor); Tsou, Peter (Inventor)

    2009-01-01

    The present invention relates to a three-dimensional (3D) hologram display system. The 3D hologram display system includes a projector device for projecting an image upon a display medium to form a 3D hologram. The 3D hologram is formed such that a viewer can view the holographic image from multiple angles up to 360 degrees. Multiple display media are described, namely a spinning diffusive screen, a circular diffuser screen, and an aerogel. The spinning diffusive screen utilizes spatial light modulators to control the image such that the 3D image is displayed on the rotating screen in a time-multiplexing manner. The circular diffuser screen includes multiple, simultaneously-operated projectors to project the image onto the circular diffuser screen from a plurality of locations, thereby forming the 3D image. The aerogel can use the projection device described as applicable to either the spinning diffusive screen or the circular diffuser screen.

  18. Effect of color visualization and display hardware on the visual assessment of pseudocolor medical images

    PubMed Central

    Zabala-Travers, Silvina; Choi, Mina; Cheng, Wei-Chung

    2015-01-01

    Purpose: Even though the use of color in the interpretation of medical images has increased significantly in recent years, the ad hoc manner in which color is handled and the lack of standard approaches have been associated with suboptimal and inconsistent diagnostic decisions with a negative impact on patient treatment and prognosis. The purpose of this study is to determine if the choice of color scale and display device hardware affects the visual assessment of patterns that have the characteristics of functional medical images. Methods: Perfusion magnetic resonance imaging (MRI) was the basis for designing and performing experiments. Synthetic images resembling brain dynamic-contrast enhanced MRI consisting of scaled mixtures of white, lumpy, and clustered backgrounds were used to assess the performance of a rainbow (“jet”), a heated black-body (“hot”), and a gray (“gray”) color scale with display devices of different quality on the detection of small changes in color intensity. The authors used a two-alternative, forced-choice design where readers were presented with 600 pairs of images. Each pair consisted of two images of the same pattern flipped along the vertical axis with a small difference in intensity. Readers were asked to select the image with the highest intensity. Three differences in intensity were tested on four display devices: a medical-grade three-million-pixel display, a consumer-grade monitor, a tablet device, and a phone. Results: The estimates of percent correct show that jet outperformed hot and gray in the high and low range of the color scales for all devices with a maximum difference in performance of 18% (confidence intervals: 6%, 30%). Performance with hot was different for high and low intensity, comparable to jet for the high range, and worse than gray for lower intensity values. Similar performance was seen between devices using jet and hot, while gray performance was better for handheld devices. 
Time of performance was shorter with jet. Conclusions: Our findings demonstrate that the choice of color scale and display hardware affects the visual comparative analysis of pseudocolor images. Follow-up studies in clinical settings are being considered to confirm the results with patient images. PMID:26127048

  19. Orthoscopic real-image display of digital holograms.

    PubMed

    Makowski, P L; Kozacki, T; Zaperty, W

    2017-10-01

    We present a practical solution for the long-standing problem of depth inversion in real-image holographic display of digital holograms. It relies on a field lens inserted in front of the spatial light modulator device addressed by a properly processed hologram. The processing algorithm accounts for pixel size and wavelength mismatch between capture and display devices in a way that prevents image deformation. Complete images of large dimensions are observable from one position with a naked eye. We demonstrate the method experimentally on a 10-cm-long 3D object using a single full-HD spatial light modulator, but it can supplement most holographic displays designed to form a real image, including circular wide angle configurations.

  20. 77 FR 21994 - Certain Digital Photo Frames and Image Display Devices and Components Thereof; Notice of Request...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-04-12

    ... Image Display Devices and Components Thereof; Notice of Request for Written Submissions on Remedy, the... importation, and the sale within the United States after importation of certain digital photo frames and image... the President, has 60 days to approve or disapprove the Commission's action. See section 337(j), 19 U...

  1. Features and limitations of mobile tablet devices for viewing radiological images.

    PubMed

    Grunert, J H

    2015-03-01

Mobile radiological image display systems are becoming increasingly common, necessitating a comparison of the features of these systems, specifically the operating system employed, connection to stationary PACS, data security, and range of image display and image analysis functions. In the fall of 2013, a total of 17 PACS suppliers were surveyed regarding the technical features of 18 mobile radiological image display systems using a standardized questionnaire. The study also examined to what extent the technical specifications of the mobile image display systems satisfy the provisions of the German Medical Devices Act as well as the provisions of the German X-ray ordinance (RöV). There are clear differences in terms of how the mobile systems connect to the stationary PACS. Web-based solutions allow the mobile image display systems to function independently of their operating systems. The examined systems differed very little in terms of image display and image analysis functions. Mobile image display systems complement stationary PACS and can be used to view images. The impacts of the new quality assurance guidelines (QS-RL) as well as the upcoming new standard DIN 6868-157 on the acceptance testing of mobile image display units for the purpose of image evaluation are discussed. © Georg Thieme Verlag KG Stuttgart · New York.

  2. Color Imaging management in film processing

    NASA Astrophysics Data System (ADS)

    Tremeau, Alain; Konik, Hubert; Colantoni, Philippe

    2003-12-01

The latest research projects in the laboratory LIGIV concern capture, processing, archiving and display of color images considering the trichromatic nature of the Human Visual System (HVS). Among these projects, one addresses digital cinematographic film sequences of high resolution and dynamic range. This project aims to optimize the use of content for the post-production operators and for the end user. The studies presented in this paper address the use of metadata to optimise the consumption of video content on a device of the user's choice, independent of the nature of the equipment that captured the content. Optimising consumption includes enhancing the quality of image reconstruction on a display. Another part of this project addresses the content-based adaptation of image display. The main focus is on Regions of Interest (ROI) operations, based on the ROI concepts of MPEG-7. The aim of this second part is to characterize and ensure the conditions of display even if the display device or display media changes. This requires firstly the definition of a reference color space and the definition of bi-directional color transformations for each peripheral device (camera, display, film recorder, etc.). The complicating factor is that different devices have different color gamuts, depending on the chromaticity of their primaries and the ambient illumination under which they are viewed. To match the displayed image to the intended appearance, all kinds of production metadata (camera specification, camera colour primaries, lighting conditions) should be associated with the film material. Metadata and content together build rich content. The author is assumed to specify conditions as known from digital graphics arts. To control image pre-processing and image post-processing, these specifications should be contained in the film's metadata. The specifications are related to ICC profiles but additionally need to consider mesopic viewing conditions.
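The "reference color space plus bi-directional per-device transform" idea above can be illustrated with standard colorimetry. The sketch below is generic, not LIGIV code: it round-trips linear sRGB through CIE XYZ (D65) using the standard 3×3 matrices, which is the simplest instance of mapping device values into a device-independent reference space and back.

```python
# Illustrative round-trip between a device space (linear sRGB) and a
# reference space (CIE XYZ, D65) using the standard sRGB matrices.
# Generic colorimetry, not code from the project described above.
import numpy as np

RGB_TO_XYZ = np.array([[0.4124, 0.3576, 0.1805],
                       [0.2126, 0.7152, 0.0722],
                       [0.0193, 0.1192, 0.9505]])
XYZ_TO_RGB = np.linalg.inv(RGB_TO_XYZ)

def to_xyz(rgb_linear):
    """Device (linear sRGB) -> reference (XYZ)."""
    return RGB_TO_XYZ @ rgb_linear

def to_rgb(xyz):
    """Reference (XYZ) -> device (linear sRGB)."""
    return XYZ_TO_RGB @ xyz

if __name__ == "__main__":
    c = np.array([0.2, 0.5, 0.8])        # linear-light RGB triple
    roundtrip = to_rgb(to_xyz(c))
    print(np.allclose(c, roundtrip))     # True
```

A second device would contribute its own matrix pair; colors travel device A → XYZ → device B, and gamut mismatches surface as out-of-range values in the destination space.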

  3. Strain Multiplexed Metasurface Holograms on a Stretchable Substrate.

    PubMed

    Malek, Stephanie C; Ee, Ho-Seok; Agarwal, Ritesh

    2017-06-14

We demonstrate reconfigurable phase-only computer-generated metasurface holograms with up to three image planes operating in the visible regime, fabricated with gold nanorods on a stretchable polydimethylsiloxane substrate. Stretching the substrate enlarges the hologram image and changes the location of the image plane. Upon stretching, these devices can switch the displayed holographic image between multiple distinct images. This work opens up possibilities for stretchable metasurface holograms as flat devices for dynamically reconfigurable optical communication and display. It also confirms that metasurfaces on stretchable substrates can serve as a platform for a variety of reconfigurable optical devices.

  4. Three-dimensional image acquisition and reconstruction system on a mobile device based on computer-generated integral imaging.

    PubMed

    Erdenebat, Munkh-Uchral; Kim, Byeong-Jun; Piao, Yan-Ling; Park, Seo-Yeon; Kwon, Ki-Chul; Piao, Mei-Lan; Yoo, Kwan-Hee; Kim, Nam

    2017-10-01

    A mobile three-dimensional image acquisition and reconstruction system using a computer-generated integral imaging technique is proposed. A depth camera connected to the mobile device acquires the color and depth data of a real object simultaneously, and an elemental image array is generated based on the original three-dimensional information for the object, with lens array specifications input into the mobile device. The three-dimensional visualization of the real object is reconstructed on the mobile display through optical or digital reconstruction methods. The proposed system is implemented successfully and the experimental results certify that the system is an effective and interesting method of displaying real three-dimensional content on a mobile device.

  5. Interactive display system having a scaled virtual target zone

    DOEpatents

    Veligdan, James T.; DeSanto, Leonard

    2006-06-13

    A display system includes a waveguide optical panel having an inlet face and an opposite outlet face. A projector and imaging device cooperate with the panel for projecting a video image thereon. An optical detector bridges at least a portion of the waveguides for detecting a location on the outlet face within a target zone of an inbound light spot. A controller is operatively coupled to the imaging device and detector for displaying a cursor on the outlet face corresponding with the detected location of the spot within the target zone.

  6. Emergency CT brain: preliminary interpretation with a tablet device: image quality and diagnostic performance of the Apple iPad.

    PubMed

    Mc Laughlin, Patrick; Neill, Siobhan O; Fanning, Noel; Mc Garrigle, Anne Marie; Connor, Owen J O; Wyse, Gerry; Maher, Michael M

    2012-04-01

Tablet devices have recently been used in radiological image interpretation because they have a display resolution comparable to desktop LCD monitors. We identified a need to examine tablet display performance prior to their use in preliminary interpretation of radiological images. We compared the spatial and contrast resolution of a commercially available tablet display with a diagnostic-grade 2-megapixel monochrome LCD using a contrast-detail phantom. We also recorded reporting discrepancies, using the ACR RADPEER system, between preliminary interpretation of 100 emergency CT brain examinations on the tablet display and formal review on a diagnostic LCD. Without the ability to zoom, the iPad display performed worse than the diagnostic monochrome display. When the software zoom function was enabled on the tablet device, comparable contrast-detail phantom scores of 163 vs 165 points were achieved. No reporting discrepancies were encountered during the interpretation of 43 normal examinations and five cases of acute intracranial hemorrhage. There were seven RADPEER2 (understandable) misses when using the iPad display and 12 with the diagnostic LCD. Use of software zoom on the tablet device improved its contrast-detail phantom score. The tablet allowed satisfactory identification of acute CT brain findings, but additional research will be required to examine the cause of "understandable" reporting discrepancies that occur when using tablet devices.

  7. VICAR - VIDEO IMAGE COMMUNICATION AND RETRIEVAL

    NASA Technical Reports Server (NTRS)

    Wall, R. J.

    1994-01-01

VICAR (Video Image Communication and Retrieval) is a general purpose image processing software system that has been under continuous development since the late 1960's. Originally intended for data from the NASA Jet Propulsion Laboratory's unmanned planetary spacecraft, VICAR is now used for a variety of other applications including biomedical image processing, cartography, earth resources, and geological exploration. The development of this newest version of VICAR emphasized a standardized, easily-understood user interface, a shield between the user and the host operating system, and a comprehensive array of image processing capabilities. Structurally, VICAR can be divided into roughly two parts: a suite of applications programs and an executive that serves as the interface between the applications, the operating system, and the user. There are several hundred applications programs ranging in function from interactive image editing, data compression/decompression, and map projection, to blemish, noise, and artifact removal, mosaic generation, and pattern recognition and location. An information management system designed specifically for handling image related data can merge image data with other types of data files. The user accesses these programs through the VICAR executive, which consists of a supervisor and a run-time library. From the viewpoint of the user and the applications programs, the executive is an environment that is independent of the operating system. VICAR does not replace the host computer's operating system; instead, it overlays the host resources. The core of the executive is the VICAR Supervisor, which is based on NASA Goddard Space Flight Center's Transportable Applications Executive (TAE). Various modifications and extensions have been made to optimize TAE for image processing applications, resulting in a user friendly environment. 
The rest of the executive consists of the VICAR Run-Time Library, which provides a set of subroutines (image I/O, label I/O, parameter I/O, etc.) to facilitate image processing and provide the fastest I/O possible while maintaining a wide variety of capabilities. The run-time library also includes the Virtual Raster Display Interface (VRDI), which allows display-oriented applications programs to be written for a variety of display devices using a set of common routines. (A display device can be any frame-buffer-type device which is attached to the host computer and has memory planes for the display and manipulation of images. A display device may have any number of separate 8-bit image memory planes (IMPs), a graphics overlay plane, pseudo-color capabilities, hardware zoom and pan, and other features.) The VRDI supports the following display devices: VICOM (Gould/Deanza) IP8500, RAMTEK RM-9465, ADAGE (Ikonas) IK3000 and the International Imaging Systems IVAS. VRDI's purpose is to provide a uniform operating environment not only for the application programmer but for the user as well. The programmer is able to write programs without being concerned with the specifics of the device for which the application is intended. The VICAR Interactive Display Subsystem (VIDS) is a collection of utilities for easy interactive display and manipulation of images on a display device. VIDS has characteristics of both the executive and an application program, and offers a wide menu of image manipulation options. VIDS uses the VRDI to communicate with display devices. The first step in using VIDS to analyze and enhance an image (one simple example of VICAR's numerous capabilities) is to examine the histogram of the image. The histogram is a plot of the frequency of occurrence of each pixel value (0-255) loaded in the image plane.
If, for example, the histogram shows that there are no pixel values below 64 or above 192, the histogram can be "stretched" so that the value of 64 is mapped to zero and 192 is mapped to 255. The user can then use the full dynamic range of the display device to display the data and better see its contents. Another example of a VIDS procedure is the JMOVIE command, which allows the user to run animations interactively on the display device. JMOVIE operates on "frames," the individual images that make up the animation to be viewed. The user loads images into the frames after the size and number of frames have been selected. VICAR's source languages are primarily FORTRAN and C, with some VAX Assembler and array processor code. The VICAR run-time library is designed to work equally easily from either FORTRAN or C. The program was implemented on a DEC VAX series computer operating under VMS 4.7. The virtual memory required is 1.5MB. Approximately 180,000 blocks of storage are needed for the saveset. VICAR (version 2.3A/3G/13H) is a copyrighted work with all copyright vested in NASA and is available by license for a period of ten (10) years to approved licensees. This program was developed in 1989.
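The stretch described here is a simple linear remapping of pixel values. A minimal sketch in Python/NumPy, assuming the low and high cut points come from inspecting the histogram; this illustrates the idea only and is not VICAR's actual implementation:

```python
import numpy as np

def linear_stretch(image, low, high):
    """Map pixel values in [low, high] linearly onto the full 0-255 range,
    clipping values outside the interval. A sketch of the idea, not VICAR code."""
    scaled = (image.astype(np.float64) - low) * 255.0 / (high - low)
    return np.clip(scaled, 0.0, 255.0).astype(np.uint8)

# The example from the text: 64 maps to 0 and 192 maps to 255.
frame = np.array([[64, 128, 192]], dtype=np.uint8)
stretched = linear_stretch(frame, 64, 192)
```

After the stretch, the previously unused ends of the display's dynamic range carry image information.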

  8. Automatic optical inspection of regular grid patterns with an inspection camera used below the Shannon-Nyquist criterion for optical resolution

    NASA Astrophysics Data System (ADS)

    Ferreira, Flávio P.; Forte, Paulo M. F.; Felgueiras, Paulo E. R.; Bret, Boris P. J.; Belsley, Michael S.; Nunes-Pereira, Eduardo J.

    2017-02-01

    An Automatic Optical Inspection (AOI) system for the optical inspection of imaging devices used in the automotive industry, using inspecting optics of lower spatial resolution than the device under inspection, is described. The system is robust, has no moving parts, and has a short cycle time. Its main advantage is that it can detect and quantify defects in regular patterns while working below the Shannon-Nyquist criterion for optical resolution, using a single low-resolution image sensor. It is easily scalable, which is an important advantage in industrial applications, since the same inspecting sensor can be reused for increasingly higher spatial resolutions of the devices to be inspected. The optical inspection is implemented with a notch multi-band Fourier filter, making the procedure especially fitted for regular patterns, like the ones that can be produced in image displays and Head-Up Displays (HUDs). The regular patterns are used in the production line only, for inspection purposes. For image displays, functional defects are detected at the level of a sub-image display grid element unit. Functional defects are the ones impairing the function of the display, and are preferred in AOI to direct geometric imaging, since they are the ones directly related to the end-user experience. The shift in emphasis from geometric imaging to functional imaging is critical, since this is what allows quantitative inspection below Shannon-Nyquist. For HUDs, functional defect detection addresses defects resulting from the combined effect of the image display and the image-forming optics.
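The notch-filter idea can be illustrated in a few lines: suppress the Fourier harmonics of the regular grid and examine what is left. A conceptual Python/NumPy sketch; the function, the synthetic grid, and the single-pixel "defect" are illustrative assumptions, not the paper's AOI pipeline:

```python
import numpy as np

def grid_defect_residual(image, period):
    """Zero out the Fourier harmonics of a regular grid with the given pixel
    period (which must divide both image dimensions here), then invert the
    transform: grid energy is removed and defects remain as residual energy."""
    h, w = image.shape
    F = np.fft.fft2(image)
    u = np.arange(h)[:, None]
    v = np.arange(w)[None, :]
    # Grid harmonics sit at frequency-index multiples of h/period and w/period.
    notch = (u % (h // period) == 0) & (v % (w // period) == 0)
    F[notch] = 0.0
    return np.abs(np.fft.ifft2(F))

# A perfect grid leaves essentially no residual; a grid with one missing
# element leaves residual energy localized at the defect.
period = 8
y, x = np.mgrid[0:64, 0:64]
grid = ((y % period == 0) & (x % period == 0)).astype(float)
defective = grid.copy()
defective[32, 32] = 0.0          # one missing grid element
clean = grid_defect_residual(grid, period)
broken = grid_defect_residual(defective, period)
```

Because only the grid's own frequencies are notched out, the inspection sensor never needs to resolve individual grid elements, which is the essence of working below the Shannon-Nyquist criterion.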

  9. Augmented reality for the surgeon: Systematic review.

    PubMed

    Yoon, Jang W; Chen, Robert E; Kim, Esther J; Akinduro, Oluwaseun O; Kerezoudis, Panagiotis; Han, Phillip K; Si, Phong; Freeman, William D; Diaz, Roberto J; Komotar, Ricardo J; Pirris, Stephen M; Brown, Benjamin L; Bydon, Mohamad; Wang, Michael Y; Wharen, Robert E; Quinones-Hinojosa, Alfredo

    2018-04-30

    Since the introduction of wearable head-up displays, there has been much interest in the surgical community in adapting this technology into routine surgical practice. We searched the PubMed, EBSCO, IEEE and SCOPUS databases using the keywords "augmented reality" OR "wearable device" OR "head-up display" AND "surgery". After exclusions, 74 published articles that evaluated the utility of wearable head-up displays in surgical settings were included in our review. Across all studies, the most common uses of head-up displays were live streaming from surgical microscopes, navigation, monitoring of vital signs, and display of preoperative images. The most commonly used head-up display was Google Glass. Head-up displays enhanced surgeons' operating experience; common disadvantages included limited battery life, display size, and discomfort. Due to ergonomic issues with dual-screen devices, augmented reality devices with the capacity to overlay images onto the surgical field will be key features of next-generation surgical head-up displays. Copyright © 2018 John Wiley & Sons, Ltd.

  10. 76 FR 35910 - Notice of Receipt of Complaint; Solicitation of Comments Relating to the Public Interest

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-06-20

    ... Motion-Sensitive Sound Effects Devices and Image Display Devices and Components and Products Containing... sale within the United States after importation of certain motion-sensitive sound devices and image...

  11. The use of lower resolution viewing devices for mammographic interpretation: implications for education and training.

    PubMed

    Chen, Yan; James, Jonathan J; Turnbull, Anne E; Gale, Alastair G

    2015-10-01

    To establish whether lower resolution, lower cost viewing devices have the potential to deliver mammographic interpretation training. On three occasions over eight months, fourteen consultant radiologists and reporting radiographers read forty challenging digital mammography screening cases on three different displays: a digital mammography workstation, a standard LCD monitor, and a smartphone. Standard image manipulation software was available for use on all three devices. Receiver operating characteristic (ROC) analysis and ANOVA (Analysis of Variance) were used to determine the significance of differences in performance between the viewing devices with/without the application of image manipulation software. The effect of reader's experience was also assessed. Performance was significantly higher (p < .05) on the mammography workstation compared to the other two viewing devices. When image manipulation software was applied to images viewed on the standard LCD monitor, performance improved to mirror levels seen on the mammography workstation with no significant difference between the two. Image interpretation on the smartphone was uniformly poor. Film reader experience had no significant effect on performance across all three viewing devices. Lower resolution standard LCD monitors combined with appropriate image manipulation software are capable of displaying mammographic pathology, and are potentially suitable for delivering mammographic interpretation training. • This study investigates potential devices for training in mammography interpretation. • Lower resolution standard LCD monitors are potentially suitable for mammographic interpretation training. • The effect of image manipulation tools on mammography workstation viewing is insignificant. • Reader experience had no significant effect on performance across all viewing devices. • Smartphones are not suitable for displaying mammograms.

  12. Characterization of crosstalk in stereoscopic display devices.

    PubMed

    Zafar, Fahad; Badano, Aldo

    2014-12-01

    Many different types of stereoscopic display devices are used for commercial and research applications. Stereoscopic displays offer the potential to improve performance in detection tasks for medical imaging diagnostic systems. Due to the variety of stereoscopic display technologies, it remains unclear how these compare with each other for detection and estimation tasks. Different stereo devices have different performance trade-offs due to their display characteristics. Among them, crosstalk is known to affect observer perception of 3D content and might affect detection performance. We measured and report detailed luminance output and crosstalk characteristics for three different types of stereoscopic display devices. We also recorded the effects of viewing angle, different eyewear, and screen location on the measured luminance profiles. Our results show that the crosstalk signature for viewing 3D content can vary considerably when different types of 3D glasses are used with active stereo displays. We also show significant differences in crosstalk signatures when the viewing angle varies from 0 to 20 degrees for a stereo-mirror 3D display device. Our detailed characterization can help emulate the effect of crosstalk in computational-observer image quality assessments that minimize costly and time-consuming human reader studies.

  13. A Virtual Environment System for the Comparison of Dome and HMD Systems

    NASA Technical Reports Server (NTRS)

    Chen, Jian; Harm, Deborah L.; Loftin, R. Bowen; Lin, Ching-yao; Leiss, Ernst L.

    2002-01-01

    For effective astronaut training applications, choosing the right display devices to present images is crucial. In order to assess which devices are appropriate, it is important to design a successful virtual environment for a comparison study of the display devices. We present a comprehensive system for the comparison of Dome and head-mounted display (HMD) systems. In particular, we address interaction techniques and playback environments.

  14. LCD displays performance comparison by MTF measurement using the white noise stimulus method

    NASA Astrophysics Data System (ADS)

    Mitjà, Carles; Escofet, Jaume

    2011-01-01

    The number of images produced to be viewed as soft copies on output displays is increasing significantly, at the expense of images targeted to hard-copy versions on paper or other physical supports. Even in the case of high-quality hard-copy production, people working in professional imaging use different displays when selecting, editing, processing and showing images, from laptop screens to specialized high-end displays. The quality performance of these devices is therefore crucial in the chain of decisions to be taken in image production, and metrics of this performance can help in equipment acquisition. Different metrics and methods have been described to determine the quality performance of CRT and LCD computer displays in the clinical area. One of the most important metrics in this field is the device's spatial frequency response, obtained by measuring the modulation transfer function (MTF). This work presents a comparison between the MTFs of three different LCD displays, Apple MacBook Pro 15", Apple LED Cinema Display 24" and Apple iPhone 4, measured by the white noise stimulus method over the vertical and horizontal directions. Additionally, different displays show particular pixel structure patterns. In order to identify this pixel structure, a set of high-magnification images is taken from each display and related to the respective vertical and horizontal MTFs.
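The white-noise stimulus method rests on the fact that, for a linear system, the ratio of output to input power spectra is the squared MTF. A minimal 1-D simulation in Python/NumPy; the 3-tap blur standing in for the display is a made-up example, not the measurement setup of this work:

```python
import numpy as np

def estimate_mtf(inputs, outputs):
    """Estimate the MTF as the square root of the ratio of the averaged
    output power spectrum to the averaged input power spectrum."""
    p_in = np.mean([np.abs(np.fft.rfft(s)) ** 2 for s in inputs], axis=0)
    p_out = np.mean([np.abs(np.fft.rfft(s)) ** 2 for s in outputs], axis=0)
    return np.sqrt(p_out / p_in)

rng = np.random.default_rng(0)
n = 256
# Stand-in "display": circular convolution with a known 3-tap blur kernel,
# applied in the frequency domain so the ground truth is exact.
kernel = np.array([0.25, 0.5, 0.25])
H = np.fft.fft(kernel, n)
inputs = [rng.standard_normal(n) for _ in range(50)]
outputs = [np.real(np.fft.ifft(np.fft.fft(s) * H)) for s in inputs]

mtf = estimate_mtf(inputs, outputs)
expected = np.abs(H[: n // 2 + 1])   # ground truth for the synthetic blur
```

In an actual measurement the averaging over many noise realizations suppresses estimation noise; in this synthetic example the estimate matches the known blur response.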

  15. NPS assessment of color medical image displays using a monochromatic CCD camera

    NASA Astrophysics Data System (ADS)

    Roehrig, Hans; Gu, Xiliang; Fan, Jiahua

    2012-10-01

    This paper presents an approach to Noise Power Spectrum (NPS) assessment of color medical displays without using an expensive imaging colorimeter. The R, G and B color uniform patterns were shown on the display under study and the images were taken using a high resolution monochromatic camera. A colorimeter was used to calibrate the camera images. Synthetic intensity images were formed by the weighted sum of the R, G, B and the dark screen images. Finally, the NPS analysis was conducted on the synthetic images. The proposed method replaces an expensive imaging colorimeter for NPS evaluation, which also suggests a potential solution for routine color medical display QA/QC in the clinical area, especially when imaging of display devices is desired.
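A sketch of the combination and NPS steps in Python/NumPy; the weights are made-up stand-ins for the colorimeter-derived calibration, and this illustrates the idea rather than the authors' code:

```python
import numpy as np

def synthetic_luminance(r, g, b, dark, weights):
    """Weighted sum of the R, G and B camera images, each with the dark-screen
    image subtracted. The weights would come from the colorimeter calibration;
    here they are illustrative numbers."""
    wr, wg, wb = weights
    return wr * (r - dark) + wg * (g - dark) + wb * (b - dark)

def nps_2d(image, pixel_pitch=1.0):
    """2-D noise power spectrum of a uniform patch: squared magnitude of the
    DFT of the mean-subtracted image, normalized by the number of pixels."""
    n_y, n_x = image.shape
    fluct = image - image.mean()
    return (np.abs(np.fft.fft2(fluct)) ** 2) * (pixel_pitch ** 2) / (n_y * n_x)

rng = np.random.default_rng(1)
shape = (128, 128)
dark = 0.05 * np.ones(shape)
r, g, b = (dark + lvl + 0.1 * rng.standard_normal(shape)
           for lvl in (0.8, 0.9, 0.7))
lum = synthetic_luminance(r, g, b, dark, weights=(0.2126, 0.7152, 0.0722))
nps = nps_2d(lum)
```

With this normalization, Parseval's theorem ties the mean of the NPS to the pixel variance of the uniform patch, a convenient sanity check.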

  16. Display And Analysis Of Tomographic Volumetric Images Utilizing A Vari-Focal Mirror

    NASA Astrophysics Data System (ADS)

    Harris, L. D.; Camp, J. J.

    1984-10-01

    A system for the three-dimensional (3-D) display and analysis of stacks of tomographic images is described. The device utilizes the principle of a variable focal (vari-focal) length optical element in the form of an aluminized membrane stretched over a loudspeaker to generate a virtual 3-D image which is a visible representation of a 3-D array of image elements (voxels). The system displays 500,000 voxels per mirror cycle in a 3-D raster which appears continuous and demonstrates no distracting artifacts. The display is bright enough so that portions of the image can be dimmed without compromising the number of shades of gray. For x-ray CT, a displayed volume image looks like a 3-D radiograph which appears to be in the space directly behind the mirror. The viewer sees new views by moving his/her head from side to side or up and down. The system facilitates a variety of operator interactive functions which allow the user to point at objects within the image, control the orientation and location of brightened oblique planes within the volume, numerically dissect away selected image regions, and control intensity window levels. Photographs of example volume images displayed on the system illustrate, to the degree possible in a flat picture, the nature of displayed images and the capabilities of the system. Preliminary application of the display device to the analysis of volume reconstructions obtained from the Dynamic Spatial Reconstructor indicates significant utility of the system in selecting oblique sections and gaining an appreciation of the shape and dimensions of complex organ systems.

  17. NPS assessment of color medical displays using a monochromatic CCD camera

    NASA Astrophysics Data System (ADS)

    Roehrig, Hans; Gu, Xiliang; Fan, Jiahua

    2012-02-01

    This paper presents an approach to Noise Power Spectrum (NPS) assessment of color medical displays without using an expensive imaging colorimeter. The R, G and B color uniform patterns were shown on the display under study and the images were taken using a high resolution monochromatic camera. A colorimeter was used to calibrate the camera images. Synthetic intensity images were formed by the weighted sum of the R, G, B and the dark screen images. Finally the NPS analysis was conducted on the synthetic images. The proposed method replaces an expensive imaging colorimeter for NPS evaluation, which also suggests a potential solution for routine color medical display QA/QC in the clinical area, especially when imaging of display devices is desired.

  18. Tangible display systems: bringing virtual surfaces into the real world

    NASA Astrophysics Data System (ADS)

    Ferwerda, James A.

    2012-03-01

    We are developing tangible display systems that enable natural interaction with virtual surfaces. Tangible display systems are based on modern mobile devices that incorporate electronic image displays, graphics hardware, tracking systems, and digital cameras. Custom software allows the orientation of a device and the position of the observer to be tracked in real time. Using this information, realistic images of surfaces with complex textures and material properties, illuminated by environment-mapped lighting, can be rendered to the screen at interactive rates. Tilting or moving in front of the device produces realistic changes in surface lighting and material appearance. In this way, tangible displays allow virtual surfaces to be observed and manipulated as naturally as real ones, with the added benefit that surface geometry and material properties can be modified in real time. We demonstrate the utility of tangible display systems in four application areas: material appearance research; computer-aided appearance design; enhanced access to digital library and museum collections; and new tools for digital artists.

  19. New DICOM extensions for softcopy and hardcopy display consistency.

    PubMed

    Eichelberg, M; Riesmeier, J; Kleber, K; Grönemeyer, D H; Oosterwijk, H; Jensch, P

    2000-01-01

    The DICOM standard defines in detail how medical images can be communicated. However, the rules on how to interpret the parameters contained in a DICOM image which deal with the image presentation were either lacking or not well defined. As a result, the same image frequently looks different when displayed on different workstations or printed on a film from various printers. Three new DICOM extensions attempt to close this gap by defining a comprehensive model for the display of images on softcopy and hardcopy devices: Grayscale Standard Display Function, Grayscale Softcopy Presentation State and Presentation Look Up Table.

  20. High-resolution laser-projection display system using a grating electromechanical system (GEMS)

    NASA Astrophysics Data System (ADS)

    Brazas, John C.; Kowarz, Marek W.

    2004-01-01

    Eastman Kodak Company has developed a diffractive-MEMS spatial-light modulator for use in printing and display applications, the grating electromechanical system (GEMS). This modulator contains a linear array of pixels capable of high-speed digital operation, high optical contrast, and good efficiency. The device operation is based on deflection of electromechanical ribbons suspended above a silicon substrate by a series of intermediate supports. When electrostatically actuated, the ribbons conform to the supporting substructure to produce a surface-relief phase grating over a wide active region. The device is designed to be binary, switching between a reflective mirror state having suspended ribbons and a diffractive grating state having ribbons in contact with substrate features. Switching times of less than 50 nanoseconds with sub-nanosecond jitter are made possible by reliable contact-mode operation. The GEMS device can be used as a high-speed digital-optical modulator for a laser-projection display system by collecting the diffracted orders and taking advantage of the low jitter. A color channel is created using a linear array of individually addressable GEMS pixels. A two-dimensional image is produced by sweeping the line image of the array, created by the projection optics, across the display screen. Gray levels in the image are formed using pulse-width modulation (PWM). A high-resolution projection display was developed using three 1080-pixel devices illuminated by red, green, and blue laser-color primaries. The result is an HDTV-format display capable of producing stunning still and motion images with very wide color gamut.
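Binary-weighted pulse-width modulation is one common way to realize PWM gray scale with a fast binary modulator: bit k of the gray value keeps the pixel "on" for 2^k time slices per line period. A small Python sketch of that scheme; the abstract does not specify Kodak's exact timing, so this is an assumed, illustrative scheme:

```python
def pwm_schedule(gray, bits=8):
    """Return (slice_count, on) pairs: bit k of the gray value drives the
    pixel for 2**k time slices. Binary-weighted PWM, an assumed scheme."""
    return [(1 << k, bool((gray >> k) & 1)) for k in range(bits)]

def on_slices(gray, bits=8):
    """Total 'on' time in slices; proportional to the gray level."""
    return sum(weight for weight, on in pwm_schedule(gray, bits) if on)
```

The fast switching and low jitter described above are what make such slice counts feasible within the dwell time of one scanned line.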

  1. Head-motion-controlled video goggles: preliminary concept for an interactive laparoscopic image display (i-LID).

    PubMed

    Aidlen, Jeremy T; Glick, Sara; Silverman, Kenneth; Silverman, Harvey F; Luks, Francois I

    2009-08-01

    Light-weight, low-profile, and high-resolution head-mounted displays (HMDs) now allow personalized viewing of a laparoscopic image. The advantages include unobstructed viewing, regardless of position at the operating table, and the possibility to customize the image (i.e., enhanced reality, picture-in-picture, etc.). The bright image display allows use in daylight surroundings, and the low profile of the HMD provides adequate peripheral vision. Theoretical disadvantages include reliance by all on the same image capture and anticues (i.e., reality disconnect) when the projected image remains static despite changes in head position, which can lead to discomfort and even nausea. We have developed a prototype of an interactive laparoscopic image display that allows hands-free control of the displayed image through changes in the spatial orientation of the operator's head. The prototype consists of an HMD, a spatial orientation device, and computer software to enable hands-free panning and zooming of a video-endoscopic image display. The spatial orientation device uses magnetic fields created by a transmitter and receiver, each containing three orthogonal coils. The transmitter coils are efficiently driven, using USB power only, by a newly developed circuit, each at a unique frequency. The HMD-mounted receiver system links to a commercially available PC-interface PCI-bus sound card (M-Audiocard Delta 44; Avid Technology, Tewksbury, MA). Analog signals at the receiver are filtered, amplified, and converted to digital signals, which are processed to control the image display. The prototype uses a proprietary static fish-eye lens and software for the distortion-free reconstitution of any portion of the captured image. Left-right and up-down motions of the head (and HMD) produce real-time panning of the displayed image. Motion of the head toward, or away from, the transmitter causes real-time zooming in or out, respectively, of the displayed image.
This prototype of the interactive HMD allows hands-free, intuitive control of the laparoscopic field, independent of the captured image.
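The control mapping can be sketched in a few lines: head yaw and pitch pan the displayed window, and head-to-transmitter distance sets the zoom factor. The gains, the reference distance, and the function itself below are illustrative assumptions, not the prototype's actual parameters:

```python
def update_view(cx, cy, yaw_deg, pitch_deg, distance_mm,
                pan_gain=4.0, ref_distance_mm=500.0):
    """Map head orientation to pan offsets and transmitter distance to zoom.
    All gains and the reference distance are made-up illustrative values."""
    new_cx = cx + pan_gain * yaw_deg          # left/right head motion pans x
    new_cy = cy - pan_gain * pitch_deg        # up/down head motion pans y
    zoom = ref_distance_mm / distance_mm      # moving closer zooms in
    return new_cx, new_cy, zoom
```

In the prototype the fish-eye capture makes this cheap: panning and zooming select and undistort a sub-region of the already-captured image, with no camera motion.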

  2. Device for wavelength-selective imaging

    DOEpatents

    Frangioni, John V.

    2010-09-14

    An imaging device captures both a visible light image and a diagnostic image, the diagnostic image corresponding to emissions from an imaging medium within the object. The visible light image (which may be color or grayscale) and the diagnostic image may be superimposed to display regions of diagnostic significance within a visible light image. A number of imaging media may be used according to an intended application for the imaging device, and an imaging medium may have wavelengths above, below, or within the visible light spectrum. The devices described herein may be advantageously packaged within a single integrated device or other solid state device, and/or employed in an integrated, single-camera medical imaging system, as well as many non-medical imaging systems that would benefit from simultaneous capture of visible-light wavelength images along with images at other wavelengths.

  3. Algorithmic support for graphic images rotation in avionics

    NASA Astrophysics Data System (ADS)

    Kniga, E. V.; Gurjanov, A. V.; Shukalov, A. V.; Zharinov, I. O.

    2018-05-01

    The design of avionics devices poses the problem of developing and evaluating algorithms for rotating the images shown on the on-board display. Image rotation algorithms are part of the software of avionics devices in the on-board computers of airplanes and helicopters. The images to be rotated contain fragments of the flight-location map. Image rotation in the display system can be implemented in software or in hardware; the software option is slower than the hardware one. A comparison of several rotation algorithms on test images is shown, with the hardware implementations realized in the Altera Quartus II design environment.
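A baseline software rotation, for reference, is inverse mapping with nearest-neighbour sampling: each output pixel looks up its source location under the inverse rotation about the image centre. A plain-Python sketch, not one of the hardware implementations compared in the paper:

```python
import math

def rotate_nearest(img, angle_deg):
    """Rotate a 2-D image (list of rows) about its centre by inverse mapping
    with nearest-neighbour sampling. With the usual y-down image convention,
    positive angles appear clockwise on screen."""
    h, w = len(img), len(img[0])
    a = math.radians(angle_deg)
    c, s = math.cos(a), math.sin(a)
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # Inverse rotation: find the source pixel for each output pixel.
            xs = c * (x - cx) + s * (y - cy) + cx
            ys = -s * (x - cx) + c * (y - cy) + cy
            xi, yi = round(xs), round(ys)
            if 0 <= xi < w and 0 <= yi < h:
                out[y][x] = img[yi][xi]
    return out
```

Inverse mapping guarantees every output pixel is written exactly once, which is why it is the usual starting point for both software and hardware rotation engines.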

  4. Artificial Structural Color Pixels: A Review

    PubMed Central

    Zhao, Yuqian; Zhao, Yong; Hu, Sheng; Lv, Jiangtao; Ying, Yu; Gervinskas, Gediminas; Si, Guangyuan

    2017-01-01

    Inspired by natural photonic structures (the Morpho butterfly, for instance), researchers have demonstrated various artificial color display devices using different designs. Photonic-crystal/plasmonic color filters have drawn increasing attention most recently. In this review article, we show the developing trend of artificial structural color pixels from photonic crystals to plasmonic nanostructures. Such devices normally utilize the distinctive optical features of photonic/plasmon resonance, resulting in high compatibility with current display and imaging technologies. Moreover, dynamic color-filtering devices are highly desirable because tunable optical components are critical for developing new optical platforms which can be integrated or combined with other existing imaging and display techniques. Thus, extensive promising potential applications have been triggered and enabled, including more abundant functionalities in integrated optics and nanophotonics. PMID:28805736

  5. Development of Land Analysis System display modules

    NASA Technical Reports Server (NTRS)

    Gordon, Douglas; Hollaren, Douglas; Huewe, Laurie

    1986-01-01

    The Land Analysis System (LAS) display modules were developed to allow a user to interactively display, manipulate, and store image and image related data. To help accomplish this task, these modules utilize the Transportable Applications Executive and the Display Management System software to interact with the user and the display device. The basic characteristics of a display are outlined and some of the major modifications and additions made to the display management software are discussed. Finally, all available LAS display modules are listed along with a short description of each.

  6. Electronic recording of holograms with applications to holographic displays

    NASA Technical Reports Server (NTRS)

    Claspy, P. C.; Merat, F. L.

    1979-01-01

    The paper describes an electronic heterodyne recording technique which uses electrooptic modulation to introduce a sinusoidal phase shift between the object and reference waves. The resulting temporally modulated holographic interference pattern is scanned by a commercial image dissector camera, and the rejection of the self-interference terms is accomplished by heterodyne detection at the camera output. The electrical signal representing this processed hologram can then be used to modify the properties of a liquid crystal light valve or a similar device. Such display devices transform the displayed interference pattern into a phase-modulated wave front, rendering a three-dimensional image.

  7. Visual communication - Information and fidelity. [of images

    NASA Technical Reports Server (NTRS)

    Huck, Friedrich O.; Fales, Carl L.; Alter-Gartenberg, Rachel; Rahman, Zia-Ur; Reichenbach, Stephen E.

    1993-01-01

    This assessment of visual communication deals with image gathering, coding, and restoration as a whole rather than as separate and independent tasks. The approach focuses on two mathematical criteria, information and fidelity, and on their relationships to the entropy of the encoded data and to the visual quality of the restored image. Past applications of these criteria to the assessment of image coding and restoration have been limited to the link that connects the output of the image-gathering device to the input of the image-display device. By contrast, the approach presented in this paper explicitly includes the critical limiting factors that constrain image gathering and display. This extension leads to an end-to-end assessment theory of visual communication that combines optical design with digital processing.

  8. Dynamic integral imaging technology for 3D applications (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Huang, Yi-Pai; Javidi, Bahram; Martínez-Corral, Manuel; Shieh, Han-Ping D.; Jen, Tai-Hsiang; Hsieh, Po-Yuan; Hassanfiroozi, Amir

    2017-05-01

    Depth and resolution are always a trade-off in integral imaging technology. With dynamically adjustable devices, the two factors of integral imaging can be fully compensated with time-multiplexed addressing. These dynamic devices can be mechanically or electrically driven. In this presentation, we mainly focus on various liquid crystal devices which can change the focal length, scan and shift the image position, or switch between 2D and 3D modes. Using liquid crystal devices, dynamic integral imaging has been successfully applied to 3D display, capture, and bio-imaging applications.

  9. Information theoretical assessment of visual communication with subband coding

    NASA Astrophysics Data System (ADS)

    Rahman, Zia-ur; Fales, Carl L.; Huck, Friedrich O.

    1994-09-01

    A well-designed visual communication channel is one which transmits the most information about a radiance field with the fewest artifacts. The role of image processing, encoding and restoration is to improve the quality of visual communication channels by minimizing the error in the transmitted data. Conventionally, this role has been analyzed strictly in the digital domain, neglecting the effects of image-gathering and image-display devices on the quality of the image. This results in the design of a visual communication channel which is "suboptimal." We propose an end-to-end assessment of the imaging process which incorporates the influences of these devices in the design of the encoder and the restoration process. This assessment combines Shannon's communication theory with Wiener's restoration filter and with the critical design factors of the image-gathering and display devices, thus providing the metrics needed to quantify and optimize the end-to-end performance of the visual communication channel. Results show that the design of the image-gathering device plays a significant role in determining the quality of the visual communication channel and in designing the analysis filters for subband encoding.

  10. Wide-Field-of-View, High-Resolution, Stereoscopic Imager

    NASA Technical Reports Server (NTRS)

    Prechtl, Eric F.; Sedwick, Raymond J.

    2010-01-01

    A device combines video feeds from multiple cameras to provide wide-field-of-view, high-resolution, stereoscopic video to the user. The prototype under development consists of two camera assemblies, one for each eye. One of these assemblies incorporates a mounting structure with multiple cameras attached at offset angles. The video signals from the cameras are fed to a central processing platform where each frame is color processed and mapped into a single contiguous wide-field-of-view image. Because the resolution of most display devices is typically smaller than the processed map, a cropped portion of the video feed is output to the display device. The positioning of the cropped window will likely be controlled through the use of a head tracking device, allowing the user to turn his or her head side-to-side or up and down to view different portions of the captured image. There are multiple options for the display of the stereoscopic image. The use of head-mounted displays is one likely implementation; the use of 3D projection technologies is another potential approach under consideration. The technology can be adapted in a multitude of ways. The computing platform is scalable, such that the number, resolution, and sensitivity of the cameras can be leveraged to improve image resolution and field of view. Miniaturization efforts can be pursued to shrink the package down for better mobility. Power savings studies can be performed to enable unattended, remote sensing packages. Image compression and transmission technologies can be incorporated to enable an improved telepresence experience.

  11. Display management subsystem, version 1: A user's eye view

    NASA Technical Reports Server (NTRS)

    Parker, Dolores

    1986-01-01

    The structure and application functions of the Display Management Subsystem (DMS) are described. The DMS, a subsystem of the Transportable Applications Executive (TAE), was designed to provide a device-independent interface for an image processing and display environment. The system is callable by C and FORTRAN applications, portable to accommodate different image analysis terminals, and easily expandable to meet local needs. Generic applications are also available for performing many image processing tasks.

  12. Segmented cold cathode display panel

    NASA Technical Reports Server (NTRS)

    Payne, Leslie (Inventor)

    1998-01-01

    The present invention is a video display device that utilizes the novel concept of generating an electronically controlled pattern of electron emission at the output of a segmented photocathode. This pattern of electron emission is amplified via a channel plate, and the resulting intense electron image is accelerated toward a phosphor, creating a bright video image. This novel arrangement makes it possible to produce a full-color flat video display that can be implemented in large formats. In an alternate arrangement, the present invention omits the channel plate and provides a porous conducting surface instead; the brightness of the image is reduced, but the cost of the overall device is significantly lowered because fabrication complexity is greatly decreased.

  13. Design of integrated eye tracker-display device for head mounted systems

    NASA Astrophysics Data System (ADS)

    David, Y.; Apter, B.; Thirer, N.; Baal-Zedaka, I.; Efron, U.

    2009-08-01

    We propose an eye tracker/display system based on a novel dual-function device, termed the ETD, which allows the eye tracker and the display to share optical paths and provides on-chip processing. The proposed ETD design is based on a CMOS chip combining liquid-crystal-on-silicon (LCoS) micro-display technology with a near-infrared (NIR) active pixel sensor imager. In eye-tracking operation, the device captures NIR light back-reflected from the eye's retina; the retinal image is then used to detect the current direction of the eye's gaze. The design of the eye-tracking imager is based on "deep p-well" pixel technology, which provides low crosstalk while shielding the active pixel circuitry, serving both the imaging and the display drivers, from photocharges generated in the substrate. Using the ETD in a head-mounted display enables a very compact design suitable for smart-goggle applications. A preliminary optical, electronic, and digital design of the goggle and its associated ETD chip and digital control is presented.

  14. Emi-Flective Display Device with Attribute of High Glare-Free-Ambient-Contrast-Ratio

    NASA Astrophysics Data System (ADS)

    Yang, Bo-Ru; Hsu, Chuan-Wei; Shieh, Han-Ping D.

    2007-11-01

    We have demonstrated the integration of an organic light emitting device (OLED) and a reflective liquid crystal display (R-LCD), which we term an emi-flective display. The glare-free ambient contrast ratio (GFA-CR) was used to evaluate the image quality of display devices under ambient light. By integrating the OLED with the R-LCD, the GFA-CR of the device improved by a factor of 8 compared with that of the OLED alone. Moreover, the integrated R-LCD showed a GFA-CR of 100:1 within a viewing cone of 20°, which suppresses OLED wash-out and reduces power consumption in sunlight. The emi-flective display is therefore a promising technique for mobile applications.

  15. High resolution biomedical imaging system with direct detection of x-rays via a charge coupled device

    DOEpatents

    Atac, M.; McKay, T.A.

    1998-04-21

    An imaging system is provided for direct detection of x-rays from an irradiated biological tissue. The imaging system includes an energy source for emitting x-rays toward the biological tissue and a charge coupled device (CCD) located immediately adjacent the biological tissue and arranged transverse to the direction of irradiation along which the x-rays travel. The CCD directly receives and detects the x-rays after passing through the biological tissue. The CCD is divided into a matrix of cells, each of which individually stores a count of x-rays directly detected by the cell. The imaging system further includes a pattern generator electrically coupled to the CCD for reading a count from each cell. A display device is provided for displaying an image representative of the count read by the pattern generator from the cells of the CCD. 13 figs.

  16. High resolution biomedical imaging system with direct detection of x-rays via a charge coupled device

    DOEpatents

    Atac, Muzaffer; McKay, Timothy A.

    1998-01-01

    An imaging system is provided for direct detection of x-rays from an irradiated biological tissue. The imaging system includes an energy source for emitting x-rays toward the biological tissue and a charge coupled device (CCD) located immediately adjacent the biological tissue and arranged transverse to the direction of irradiation along which the x-rays travel. The CCD directly receives and detects the x-rays after passing through the biological tissue. The CCD is divided into a matrix of cells, each of which individually stores a count of x-rays directly detected by the cell. The imaging system further includes a pattern generator electrically coupled to the CCD for reading a count from each cell. A display device is provided for displaying an image representative of the count read by the pattern generator from the cells of the CCD.

  17. Experimental evaluation of the optical quality of DMD SLM for its application as Fourier holograms displaying device

    NASA Astrophysics Data System (ADS)

    Molodtsov, D. Y.; Cheremkhin, P. A.; Krasnov, V. V.; Rodin, V. G.

    2016-04-01

    In this paper, the optical quality of a micromirror DMD spatial light modulator (SLM) is evaluated and its applicability as an output device for holographic filters in dispersive correlators is analyzed. The possibility of using a DMD SLM extracted from a consumer DLP projector was experimentally evaluated by displaying Fourier holograms. Software for displaying the holograms was developed, and hologram-reconstruction experiments were conducted with different numbers of hologram pixels and different placements on the SLM. Reducing the number of pixels in the output hologram (i.e., the size of the minimum resolvable element) improved the quality of the reconstructed image. The evaluation shows that not every DMD chip has acceptable optical quality for use as a display device for Fourier holograms. The major factor degrading reconstructed image quality was found to be curvature of the SLM surface or of its protective glass. Varying the hologram size allowed the approximate extent of a sufficiently flat area of the SLM matrix to be estimated; for the tested SLM it was about 1.5 mm, and further increases in hologram size led to significant degradation of the reconstructed image. The technique developed and applied here makes it possible to quickly estimate the maximum hologram size that a specific SLM can display without significant degradation of the reconstructed image, and also to identify areas of the SLM with increased surface curvature.

  18. Three-dimensional display technologies

    PubMed Central

    Geng, Jason

    2014-01-01

    The physical world around us is three-dimensional (3D), yet traditional display devices can show only two-dimensional (2D) flat images that lack depth (i.e., the third dimension) information. This fundamental restriction greatly limits our ability to perceive and to understand the complexity of real-world objects. Nearly 50% of the capability of the human brain is devoted to processing visual information [Human Anatomy & Physiology (Pearson, 2012)]. Flat images and 2D displays do not harness the brain’s power effectively. With rapid advances in the electronics, optics, laser, and photonics fields, true 3D display technologies are making their way into the marketplace. 3D movies, 3D TV, 3D mobile devices, and 3D games have created growing demand for true 3D displays that require no eyeglasses (autostereoscopic displays). Therefore, it would be very beneficial to readers of this journal to have a systematic review of state-of-the-art 3D display technologies. PMID:25530827

  19. Simple measurement of lenticular lens quality for autostereoscopic displays

    NASA Astrophysics Data System (ADS)

    Gray, Stuart; Boudreau, Robert A.

    2013-03-01

    Lenticular lens based autostereoscopic 3D displays are finding many applications in digital signage and consumer electronics devices. A high quality 3D viewing experience requires the lenticular lens be properly aligned with the pixels on the display device so that each eye views the correct image. This work presents a simple and novel method for rapidly assessing the quality of a lenticular lens to be used in autostereoscopic displays. Errors in lenticular alignment across the entire display are easily observed with a simple test pattern where adjacent views are programmed to display different colors.

  20. Color matrix display simulation based upon luminance and chromatic contrast sensitivity of early vision

    NASA Technical Reports Server (NTRS)

    Martin, Russel A.; Ahumada, Albert J., Jr.; Larimer, James O.

    1992-01-01

    This paper describes the design and operation of a new simulation model for color matrix display development. It models the physical structure, the signal processing, and the visual perception of static displays, to allow optimization of display design parameters through image quality measures. The model is simple, implemented in the Mathematica computer language, and highly modular. Signal processing modules operate on the original image. The hardware modules describe backlights and filters, the pixel shape, and the tiling of the pixels over the display. Small regions of the displayed image can be visualized on a CRT. Visual perception modules assume static foveal images. The image is converted into cone catches and then into luminance, red-green, and blue-yellow images. A Haar transform pyramid separates the three images into spatial frequency and direction-specific channels. The channels are scaled by weights taken from human contrast sensitivity measurements of chromatic and luminance mechanisms at similar frequencies and orientations. Each channel provides a detectability measure. These measures allow the comparison of images displayed on prospective devices and, by that, the optimization of display designs.
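As a rough sketch of the pyramid-plus-weighting idea (a toy one-level Haar decomposition with made-up channel weights, not the paper's calibrated Mathematica model):

```python
import numpy as np

def haar_level(img):
    """One 2-D Haar level: approximation plus 3 oriented detail bands."""
    a = (img[0::2, 0::2] + img[0::2, 1::2] + img[1::2, 0::2] + img[1::2, 1::2]) / 4
    h = (img[0::2, 0::2] - img[0::2, 1::2] + img[1::2, 0::2] - img[1::2, 1::2]) / 4
    v = (img[0::2, 0::2] + img[0::2, 1::2] - img[1::2, 0::2] - img[1::2, 1::2]) / 4
    d = (img[0::2, 0::2] - img[0::2, 1::2] - img[1::2, 0::2] + img[1::2, 1::2]) / 4
    return a, (h, v, d)

def detectability(reference, test, weights=(1.0, 1.0, 0.7)):
    """Weighted energy of Haar detail-band differences; the weights stand in
    for contrast-sensitivity scaling and are purely illustrative."""
    _, ref_bands = haar_level(reference)
    _, tst_bands = haar_level(test)
    return sum(w * float(np.sum((r - t) ** 2))
               for w, r, t in zip(weights, ref_bands, tst_bands))
```

Two candidate display renderings of the same scene can then be compared by their detectability against a reference image, which is the sense in which the model supports optimization of design parameters.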

  1. Reconfigurable and responsive droplet-based compound micro-lenses.

    PubMed

    Nagelberg, Sara; Zarzar, Lauren D; Nicolas, Natalie; Subramanian, Kaushikaram; Kalow, Julia A; Sresht, Vishnu; Blankschtein, Daniel; Barbastathis, George; Kreysing, Moritz; Swager, Timothy M; Kolle, Mathias

    2017-03-07

    Micro-scale optical components play a crucial role in imaging and display technology, biosensing, beam shaping, optical switching, wavefront-analysis, and device miniaturization. Herein, we demonstrate liquid compound micro-lenses with dynamically tunable focal lengths. We employ bi-phase emulsion droplets fabricated from immiscible hydrocarbon and fluorocarbon liquids to form responsive micro-lenses that can be reconfigured to focus or scatter light, form real or virtual images, and display variable focal lengths. Experimental demonstrations of dynamic refractive control are complemented by theoretical analysis and wave-optical modelling. Additionally, we provide evidence of the micro-lenses' functionality for two potential applications-integral micro-scale imaging devices and light field display technology-thereby demonstrating both the fundamental characteristics and the promising opportunities for fluid-based dynamic refractive micro-scale compound lenses.

  2. Reconfigurable and responsive droplet-based compound micro-lenses

    PubMed Central

    Nagelberg, Sara; Zarzar, Lauren D.; Nicolas, Natalie; Subramanian, Kaushikaram; Kalow, Julia A.; Sresht, Vishnu; Blankschtein, Daniel; Barbastathis, George; Kreysing, Moritz; Swager, Timothy M.; Kolle, Mathias

    2017-01-01

    Micro-scale optical components play a crucial role in imaging and display technology, biosensing, beam shaping, optical switching, wavefront-analysis, and device miniaturization. Herein, we demonstrate liquid compound micro-lenses with dynamically tunable focal lengths. We employ bi-phase emulsion droplets fabricated from immiscible hydrocarbon and fluorocarbon liquids to form responsive micro-lenses that can be reconfigured to focus or scatter light, form real or virtual images, and display variable focal lengths. Experimental demonstrations of dynamic refractive control are complemented by theoretical analysis and wave-optical modelling. Additionally, we provide evidence of the micro-lenses' functionality for two potential applications—integral micro-scale imaging devices and light field display technology—thereby demonstrating both the fundamental characteristics and the promising opportunities for fluid-based dynamic refractive micro-scale compound lenses. PMID:28266505

  3. Design on the x-ray oral digital image display card

    NASA Astrophysics Data System (ADS)

    Wang, Liping; Gu, Guohua; Chen, Qian

    2009-10-01

    Based on the main characteristics of X-ray imaging, an X-ray oral digital image display card was designed and debugged using the principle of correlated double sampling (CDS) combined with embedded computer technology. The CCD sensor drive circuit and the corresponding firmware were designed, along with filtering and sample-and-hold circuits, and data exchange over the PC104 bus was implemented. A complex programmable logic device provides the gating and timing logic, implementing counting, reading of CPU control instructions, exposure control, and sample-and-hold control. The circuit components were adjusted based on analysis of the resulting images and noise, and high-quality images were obtained.
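Correlated double sampling itself is simple to illustrate: each pixel is read once at its reset level and once after exposure, and the difference cancels the offset common to both samples. A minimal sketch (illustrative only, not the card's actual firmware):

```python
def cds(reset_samples, signal_samples):
    """Correlated double sampling: subtract each pixel's reset-level sample
    from its post-exposure sample, cancelling the shared reset offset."""
    return [s - r for r, s in zip(reset_samples, signal_samples)]

# Each pixel's reset offset appears in both samples, so it drops out
# and only the photo-generated signal remains.
offsets = [10, 12, 11]                              # per-pixel reset levels
photo = [5, 7, 3]                                   # true photo signal
signal = [o + p for o, p in zip(offsets, photo)]    # what the ADC sees
recovered = cds(offsets, signal)
```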

  4. Stabilized display of coronary x-ray image sequences

    NASA Astrophysics Data System (ADS)

    Close, Robert A.; Whiting, James S.; Da, Xiaolin; Eigler, Neal L.

    2004-05-01

    Display stabilization is a technique by which a feature of interest in a cine image sequence is tracked and then shifted to remain approximately stationary on the display device. Prior simulations indicate that display stabilization with high playback rates (30 f/s) can significantly improve detectability of low-contrast features in coronary angiograms. Display stabilization may also help to improve the accuracy of intra-coronary device placement. We validated our automated tracking algorithm by comparing the inter-frame difference (jitter) between manual and automated tracking of 150 coronary x-ray image sequences acquired on a digital cardiovascular X-ray imaging system with CsI/a-Si flat panel detector. We find that the median (50%) inter-frame jitter between manual and automatic tracking is 1.41 pixels or less, indicating a jump no further than an adjacent pixel. This small jitter implies that automated tracking and manual tracking should yield similar improvements in the performance of most visual tasks. We hypothesize that cardiologists would perceive a benefit in viewing the stabilized display as an addition to the standard playback of cine recordings. A benefit of display stabilization was identified in 87 of 101 sequences (86%). The most common tasks cited were evaluation of stenosis and determination of stent and balloon positions. We conclude that display stabilization offers perceptible improvements in the performance of visual tasks by cardiologists.
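The inter-frame jitter statistic can be sketched as follows, with assumed function names (this is a plain illustration of the metric as described, not the authors' code):

```python
import math

def interframe_jitter(track_a, track_b):
    """Euclidean distance, frame by frame, between the frame-to-frame shifts
    implied by two tracks of the same feature (e.g. manual vs. automatic)."""
    jitter = []
    for i in range(1, len(track_a)):
        dax = track_a[i][0] - track_a[i - 1][0]
        day = track_a[i][1] - track_a[i - 1][1]
        dbx = track_b[i][0] - track_b[i - 1][0]
        dby = track_b[i][1] - track_b[i - 1][1]
        jitter.append(math.hypot(dax - dbx, day - dby))
    return jitter

def median(xs):
    s = sorted(xs)
    n = len(s)
    return s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2
```

Identical tracks give zero jitter everywhere; the paper's finding is that the median of this quantity over real sequences stays at about one pixel.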

  5. Design of area array CCD image acquisition and display system based on FPGA

    NASA Astrophysics Data System (ADS)

    Li, Lei; Zhang, Ning; Li, Tianting; Pan, Yue; Dai, Yuming

    2014-09-01

    With the development of science and technology, the CCD (charge-coupled device) has been widely applied in many fields and plays an important role in modern sensing systems, so a real-time image acquisition and display design based on a CCD device is of practical significance. This paper introduces an image data acquisition and display system for an area-array CCD based on an FPGA, and analyzes several key technical challenges of the system together with their solutions. The FPGA is the core processing unit of the system and controls the overall timing sequence. The system uses the ICX285AL area-array CCD image sensor produced by the SONY Corporation. The FPGA drives the area-array CCD; an analog front end (AFE) then processes the CCD image signal, performing amplification, filtering, noise elimination, and correlated double sampling (CDS); and an AD9945 from ADI Corporation converts the analog signal to a digital signal. A Camera Link high-speed data transmission circuit was developed, the PC-side image acquisition software was written, and real-time display of the images was realized. Practical testing indicates that the system's image acquisition and control are stable and reliable, and that its performance meets the project requirements.

  6. 3D Display Using Conjugated Multiband Bandpass Filters

    NASA Technical Reports Server (NTRS)

    Bae, Youngsam; White, Victor E.; Shcheglov, Kirill

    2012-01-01

    Stereoscopic display techniques are based on the principle of displaying two views, with slightly different perspectives, in such a way that the left-eye view is seen only by the left eye and the right-eye view only by the right eye. One of the major challenges, however, is crosstalk between the two channels: the optical devices do not completely block the wrong-side image, so the left eye sees a little of the right image and the right eye a little of the left, resulting in eyestrain and headaches. A pair of interference filters worn as eyewear can solve the problem. The device consists of a pair of multiband bandpass filters that are conjugated. The term "conjugated" describes passband regions of one filter that do not overlap with those of the other but are interdigitated with them. Along with the glasses, a 3D display produces colors composed of primary colors (the basis for producing colors) whose spectral bands match the passbands of the filters. More specifically, the primary colors producing one viewpoint are made up of the passbands of one filter, and those of the other viewpoint are made up of the passbands of the conjugated filter. Thus, the primary colors of each viewpoint are seen only by the eye wearing the matching multiband filter, and the inherent characteristics of interference filters allow little or no transmission of the wrong side of the stereoscopic image pair.
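The "conjugated" condition, interdigitated passbands with no spectral overlap, can be checked programmatically. A toy sketch with hypothetical wavelength intervals (the function name and the example band edges are assumptions):

```python
def conjugated(bands_left, bands_right):
    """True if the two filters' passbands interleave with no spectral overlap.

    Each passband is an (lo_nm, hi_nm) interval.
    """
    labeled = [(lo, hi, side)
               for side, bands in (('L', bands_left), ('R', bands_right))
               for lo, hi in bands]
    labeled.sort()
    for (lo1, hi1, s1), (lo2, hi2, s2) in zip(labeled, labeled[1:]):
        if lo2 < hi1:      # overlapping passbands -> crosstalk between eyes
            return False
        if s1 == s2:       # two consecutive bands from the same filter:
            return False   # not interdigitated
    return True
```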

  7. Viewpoint Dependent Imaging: An Interactive Stereoscopic Display

    NASA Astrophysics Data System (ADS)

    Fisher, Scott

    1983-04-01

    The design and implementation of a viewpoint-dependent imaging system is described. The resultant display is an interactive, life-size, stereoscopic image that becomes a window into a three-dimensional visual environment. As the user physically changes his viewpoint of the represented data in relation to the display surface, the image is continuously updated. The changing viewpoints are retrieved from a comprehensive stereoscopic image array stored on a computer-controlled optical videodisc and fluidly presented in coordination with the viewer's movements as detected by a body-tracking device. This imaging system is an attempt to more closely represent an observer's interactive perceptual experience of the visual world by presenting sensory information cues not offered by traditional media technologies: binocular parallax, motion parallax, and motion perspective. Unlike holographic imaging, this display requires relatively low bandwidth.

  8. Collimated autostereoscopic displays for cockpit applications

    NASA Astrophysics Data System (ADS)

    Eichenlaub, Jesse B.

    1995-06-01

    The use of an autostereoscopic display (a display that produces stereoscopic images that the user can see without wearing special glasses) for cockpit applications is now under investigation at Wright Patterson Air Force Base. DTI reported on this display, built for testing in a simulator, at last year's conference. It is believed, based on testing performed at NASA's Langley Research Center, that collimating this type of display will provide benefits to the user, including a greater useful imaging volume and more accurate stereo perception. DTI has therefore investigated the feasibility of collimating an autostereoscopic display, and has experimentally demonstrated a proof-of-concept model of such a display. As in the case of conventional displays, a collimated autostereoscopic display utilizes an optical element located one focal length from the surface of the image-forming device. The presence of this element must be taken into account when designing the optics used to create the autostereoscopic images. The major design issues associated with collimated 2D displays are also associated with collimated autostereoscopic displays.

  9. Stroboscopic Image Modulation to Reduce the Visual Blur of an Object Being Viewed by an Observer Experiencing Vibration

    NASA Technical Reports Server (NTRS)

    Kaiser, Mary K. (Inventor); Adelstein, Bernard D. (Inventor); Anderson, Mark R. (Inventor); Beutter, Brent R. (Inventor); Ahumada, Albert J., Jr. (Inventor); McCann, Robert S. (Inventor)

    2014-01-01

    A method and apparatus for reducing the visual blur of an object being viewed by an observer experiencing vibration. In various embodiments of the present invention, the visual blur is reduced through stroboscopic image modulation (SIM). A SIM device is operated in an alternating "on/off" temporal pattern according to a SIM drive signal (SDS) derived from the vibration being experienced by the observer. The SIM device (controlled by a SIM control system), operating according to the SDS, reduces visual blur by "freezing" the visual image of the viewed object, or reducing its motion to a slow drift. In various embodiments, the SIM device is selected from the group consisting of illuminator(s), shutter(s), display control system(s), and combinations of the foregoing (including the use of multiple illuminators, shutters, and display control systems).
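One simple way to derive an on/off drive signal from a periodic vibration is to open the strobe during a short window at a fixed phase of each vibration cycle, so the scene is always sampled at the same point in the motion and the image appears frozen. This is a hypothetical sketch of that idea (the patent's actual SDS derivation may differ):

```python
def sim_drive_signal(times, vib_freq_hz, duty=0.05, phase=0.0):
    """'On' when the vibration cycle is within a short window at a fixed phase.

    times       -- sample instants in seconds
    vib_freq_hz -- dominant vibration frequency from the motion sensor
    duty        -- fraction of each cycle the strobe stays on
    """
    on = []
    for t in times:
        cycle_pos = (vib_freq_hz * t + phase) % 1.0
        on.append(cycle_pos < duty)
    return on

# 10 Hz vibration: the strobe fires at the start of each 0.1 s cycle.
pattern = sim_drive_signal([0.0, 0.05, 0.1], vib_freq_hz=10.0)
```

A shorter duty cycle freezes the image more completely at the cost of brightness, which is why the patent pairs the strobe with illuminators or shutters rather than dimming the whole display.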

  10. Integration of OLEDs in biomedical sensor systems: design and feasibility analysis

    NASA Astrophysics Data System (ADS)

    Rai, Pratyush; Kumar, Prashanth S.; Varadan, Vijay K.

    2010-04-01

    Organic light emitting diodes (OLEDs) have been shown to have applications in lighting and flexible displays. These devices can also be incorporated into sensors, both as a light source for imaging/fluorescence sensing in miniaturized biomedical systems and as low-cost displays for sensor output. Current device capability aligns well with these applications as low-power diffuse lighting and momentary/push-button dynamic display. A top-emission OLED design is proposed that can be integrated with the sensor and peripheral electrical circuitry, also based on organic electronics. A feasibility analysis is carried out for an integrated optical imaging/sensor system, based on luminosity and spectral bandwidth; a similar study is carried out for a sensor-output display system that functions as a pseudo-active OLED matrix. A power model is presented for device power requirements and constraints. The feasibility analysis is supplemented with a discussion of ink-jet printing and stamping techniques for possible roll-to-roll manufacturing.

  11. A database system to support image algorithm evaluation

    NASA Technical Reports Server (NTRS)

    Lien, Y. E.

    1977-01-01

    The design is given of an interactive image database system IMDB, which allows the user to create, retrieve, store, display, and manipulate images through the facility of a high-level, interactive image query (IQ) language. The query language IQ permits the user to define false color functions, pixel value transformations, overlay functions, zoom functions, and windows. The user manipulates the images through generic functions. The user can direct images to display devices for visual and qualitative analysis. Image histograms and pixel value distributions can also be computed to obtain a quantitative analysis of images.
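Generic functions of the kind IQ exposes, such as windows, pixel-value transformations, and histograms, can be sketched in a few lines. The function names here are illustrative stand-ins, not IMDB's actual API:

```python
def window(image, x0, y0, w, h):
    """Extract a rectangular window from a row-major image."""
    return [row[x0:x0 + w] for row in image[y0:y0 + h]]

def transform(image, fn):
    """Apply a pixel-value transformation (e.g. a false-colour lookup)."""
    return [[fn(p) for p in row] for row in image]

def histogram(image, n_levels):
    """Pixel-value distribution, for the quantitative analysis the abstract
    mentions."""
    counts = [0] * n_levels
    for row in image:
        for p in row:
            counts[p] += 1
    return counts
```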

  12. Recent advances in photorefractivity of poly(4-diphenylaminostyrene) composites: Wavelength dependence and dynamic holographic images

    NASA Astrophysics Data System (ADS)

    Tsujimura, Sho; Kinashi, Kenji; Sakai, Wataru; Tsutsumi, Naoto

    2014-08-01

    To expand upon our previous report [Appl. Phys. Express 5, 064101 (2012)], we describe here the wavelength dependence of a modified poly(4-diphenylaminostyrene) (PDAS)-based photorefractive (PR) device, and demonstrate dynamic holographic images using the PDAS-based PR device under the appropriate conditions thus obtained. PR devices containing the triphenylamine unit have potential application to dynamic holographic images, which will be useful for real-time holographic displays.

  13. Cloud-based image sharing network for collaborative imaging diagnosis and consultation

    NASA Astrophysics Data System (ADS)

    Yang, Yuanyuan; Gu, Yiping; Wang, Mingqing; Sun, Jianyong; Li, Ming; Zhang, Weiqiang; Zhang, Jianguo

    2018-03-01

    We present a new approach to designing a cloud-based image-sharing network for collaborative imaging diagnosis and consultation over the Internet, enabling radiologists, specialists, and physicians at different sites to perform imaging diagnosis or consultation collaboratively and interactively for difficult or emergency cases. The designed network combines a regional RIS, grid-based image distribution management, an integrated video-conferencing system, and multi-platform interactive image display devices, together with secured messaging and data communication. The network has three kinds of components: edge servers, a grid-based imaging-document registry and repository, and multi-platform display devices. It has been deployed on Alibaba's public cloud platform since March 2017 and used for small-lung-nodule and early-stage lung cancer diagnosis services between the radiology departments of Huadong Hospital in Shanghai and the First Hospital of Jiaxing in Zhejiang Province.

  14. Flat panel ferroelectric electron emission display system

    DOEpatents

    Sampayan, Stephen E.; Orvis, William J.; Caporaso, George J.; Wieskamp, Ted F.

    1996-01-01

    A device which can produce a bright, raster scanned or non-raster scanned image from a flat panel. Unlike many flat panel technologies, this device does not require ambient light or auxiliary illumination for viewing the image. Rather, this device relies on electrons emitted from a ferroelectric emitter impinging on a phosphor. This device takes advantage of a new electron emitter technology which emits electrons with significant kinetic energy and beam current density.

  15. Using high-resolution displays for high-resolution cardiac data.

    PubMed

    Goodyer, Christopher; Hodrien, John; Wood, Jason; Kohl, Peter; Brodlie, Ken

    2009-07-13

    The ability to perform fast, accurate, high-resolution visualization is fundamental to improving our understanding of anatomical data. As data volumes increase with improvements in scanning technology, the methods applied to visualization must evolve. In this paper, we address the interactive display of data from high-resolution magnetic resonance imaging of a rabbit heart and subsequent histological imaging. We describe a visualization environment involving a tiled liquid-crystal-display wall and associated software, which provides an interactive and intuitive user interface. The oView software is an OpenGL application written for the VR Juggler environment. This environment abstracts displays and devices away from the application itself, aiding portability between different systems, from desktop PCs to multi-tiled display walls. Portability between display walls has been demonstrated through its use on walls at the universities of both Leeds and Oxford. We discuss important factors to be considered for interactive two-dimensional display of large three-dimensional datasets, including the use of intuitive input devices and level-of-detail aspects.

  16. Ten inch Planar Optic Display

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beiser, L.; Veligdan, J.

    A Planar Optic Display (POD) is being built and tested for suitability as a high-brightness replacement for the cathode ray tube (CRT). The POD display technology utilizes a laminated optical waveguide structure which allows a projection type of display to be constructed in a thin (1 to 2 inch) housing. Inherent in the optical waveguide is a black cladding matrix which gives the display a black appearance, leading to very high contrast. A Digital Micromirror Device (DMD) from Texas Instruments is used to create video images in conjunction with a 100 milliwatt green solid-state laser. An anamorphic optical system is used to inject light into the POD to form a stigmatic image. In addition to the design of the POD screen, we discuss image formation, image projection, and optical design constraints.

  17. Display Considerations For Intravascular Ultrasonic Imaging

    NASA Astrophysics Data System (ADS)

    Gessert, James M.; Krinke, Charlie; Mallery, John A.; Zalesky, Paul J.

    1989-08-01

    A display has been developed for intravascular ultrasonic imaging. The primary design goal of this display is to provide guidance information for therapeutic interventions such as balloons, lasers, and atherectomy devices. Design considerations include catheter configuration, anatomy, acoustic properties of normal and diseased tissue, the catheterization laboratory and operating room environment, acoustic and electrical safety, acoustic data sampling issues, and logistical support such as image measurement, storage, and retrieval. Intravascular imaging is in an early stage of development, so design flexibility and expandability are very important. The display which has been developed is capable of acquisition and display of grey-scale images at rates varying from static B-scans to 30 frames per second. It stores images in a 640 x 480 x 8-bit format and is capable of black-and-white as well as color display in multiple video formats. The design is based on the industry-standard PC-AT architecture and consists of two AT-style circuit cards, one for high-speed sampling and the other for scan conversion, graphics, and video generation.

  18. EMU helmet mounted display

    NASA Technical Reports Server (NTRS)

    Marmolejo, Jose (Inventor); Smith, Stephen (Inventor); Plough, Alan (Inventor); Clarke, Robert (Inventor); Mclean, William (Inventor); Fournier, Joseph (Inventor)

    1990-01-01

    A helmet mounted display device is disclosed for projecting a display on a flat combiner surface located above the line of sight where the display is produced by two independent optical channels with independent LCD image generators. The display has a fully overlapped field of view on the combiner surface and the focus can be adjusted from a near field of four feet to infinity.

  19. Virtual reality 3D headset based on DMD light modulators

    NASA Astrophysics Data System (ADS)

    Bernacki, Bruce E.; Evans, Allan; Tang, Edward

    2014-06-01

    We present the design of an immersion-type 3D headset suitable for virtual reality applications based upon digital micromirror devices (DMD). Current methods for presenting information for virtual reality are focused on either polarization-based modulators such as liquid crystal on silicon (LCoS) devices, or miniature LCD or LED displays often using lenses to place the image at infinity. LCoS modulators are an area of active research and development, and reduce the amount of viewing light by 50% due to the use of polarization. Viewable LCD or LED screens may suffer low resolution, cause eye fatigue, and exhibit a "screen door" or pixelation effect due to the low pixel fill factor. Our approach leverages a mature technology based on silicon micromirrors delivering 720p resolution displays in a small form-factor with high fill factor. Supporting chip sets allow rapid integration of these devices into wearable displays with high-definition resolution and low power consumption, and many of the design methods developed for DMD projector applications can be adapted to display use. Potential applications include night driving with natural depth perception, piloting of UAVs, fusion of multiple sensors for pilots, training, vision diagnostics and consumer gaming. Our design concept is described in which light from the DMD is imaged to infinity and the user's own eye lens forms a real image on the user's retina, resulting in a virtual retinal display.

  20. Flat panel ferroelectric electron emission display system

    DOEpatents

    Sampayan, S.E.; Orvis, W.J.; Caporaso, G.J.; Wieskamp, T.F.

    1996-04-16

    A device is disclosed which can produce a bright, raster scanned or non-raster scanned image from a flat panel. Unlike many flat panel technologies, this device does not require ambient light or auxiliary illumination for viewing the image. Rather, this device relies on electrons emitted from a ferroelectric emitter impinging on a phosphor. This device takes advantage of a new electron emitter technology which emits electrons with significant kinetic energy and beam current density. 6 figs.

  1. Real-time MRI guidance of cardiac interventions.

    PubMed

    Campbell-Washburn, Adrienne E; Tavallaei, Mohammad A; Pop, Mihaela; Grant, Elena K; Chubb, Henry; Rhode, Kawal; Wright, Graham A

    2017-10-01

    Cardiac magnetic resonance imaging (MRI) is appealing to guide complex cardiac procedures because it is ionizing radiation-free and offers flexible soft-tissue contrast. Interventional cardiac MR promises to improve existing procedures and enable new ones for complex arrhythmias, as well as congenital and structural heart disease. Guiding invasive procedures demands faster image acquisition, reconstruction and analysis, as well as intuitive intraprocedural display of imaging data. Standard cardiac MR techniques such as 3D anatomical imaging, cardiac function and flow, parameter mapping, and late-gadolinium enhancement can be used to gather valuable clinical data at various procedural stages. Rapid intraprocedural image analysis can extract and highlight critical information about interventional targets and outcomes. In some cases, real-time interactive imaging is used to provide a continuous stream of images displayed to interventionalists for dynamic device navigation. Alternatively, devices are navigated relative to a roadmap of major cardiac structures generated through fast segmentation and registration. Interventional devices can be visualized and tracked throughout a procedure with specialized imaging methods. In a clinical setting, advanced imaging must be integrated with other clinical tools and patient data. In order to perform these complex procedures, interventional cardiac MR relies on customized equipment, such as interactive imaging environments, in-room image display, audio communication, hemodynamic monitoring and recording systems, and electroanatomical mapping and ablation systems. Operating in this sophisticated environment requires coordination and planning. This review provides an overview of the imaging technology used in MRI-guided cardiac interventions. Specifically, this review outlines clinical targets, standard image acquisition and analysis tools, and the integration of these tools into clinical workflow. 
Level of Evidence: 1. Technical Efficacy: Stage 5. J. Magn. Reson. Imaging 2017;46:935-950. © 2017 International Society for Magnetic Resonance in Medicine.

  2. Edge smoothing for real-time simulation of a polygon face object system as viewed by a moving observer

    NASA Technical Reports Server (NTRS)

    Lotz, Robert W. (Inventor); Westerman, David J. (Inventor)

    1980-01-01

    The visual system within an aircraft flight simulation system receives flight data and terrain data which is formatted into a buffer memory. The image data is forwarded to an image processor which translates the image data into face vertex vectors (Vf), defining the position relationship between the vertices of each terrain object and the aircraft. The image processor then rotates, clips, and projects the image data into two-dimensional display vectors (Vd). A display generator receives the Vd faces, and other image data to provide analog inputs to CRT devices which provide the window displays for the simulated aircraft. The video signal to the CRT devices passes through an edge smoothing device which prolongs the rise time (and fall time) of the video data inversely as the slope of the edge being smoothed. An operational amplifier within the edge smoothing device has a plurality of independently selectable feedback capacitors, each having a different value. The capacitor values form a binary series in which each value is double the preceding one. Each feedback capacitor has a fast switch responsive to the corresponding bit of a digital binary control word for selecting (1) or not selecting (0) that capacitor. The control word is determined by the slope of each edge. The resulting actual feedback capacitance for each edge is the sum of all the selected capacitors and is directly proportional to the value of the binary control word. The output rise time (or fall time) is a function of the feedback capacitance, and is controlled by the slope through the binary control word.
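
    The binary-weighted capacitor selection described above can be sketched numerically: the control word's bits switch in capacitors whose values form the series C, 2C, 4C, ..., so the total feedback capacitance is directly proportional to the word's integer value. The unit capacitance and the 4-bit word width below are illustrative assumptions, not values from the patent.

```python
C_UNIT_PF = 10.0  # smallest capacitor value in picofarads (assumed for illustration)

def feedback_capacitance(control_word: int, bits: int = 4) -> float:
    """Sum the binary-weighted feedback capacitors selected by the control word."""
    total = 0.0
    for bit in range(bits):
        if control_word & (1 << bit):        # bit set -> that capacitor is switched in
            total += C_UNIT_PF * (2 ** bit)  # values double as powers of two: C, 2C, 4C, ...
    return total
```

    For example, the word 0b0101 selects the first and third capacitors (C + 4C), and the result always equals C_UNIT_PF times the word's value, which is the proportionality the patent relies on.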

  3. Double-layered liquid crystal light shutter for control of absorption and scattering of the light incident to a transparent display device

    NASA Astrophysics Data System (ADS)

    Huh, Jae-Won; Yu, Byeong-Hun; Shin, Dong-Myung; Yoon, Tae-Hoon

    2015-03-01

    Recently, the transparent display has attracted much attention as one of the next generation display devices. In particular, active studies on transparent displays using organic light-emitting diodes (OLEDs) are in progress. However, since it is not possible to obtain black color using a transparent OLED, it suffers from poor visibility. This inevitable problem can be solved by using a light shutter. Light shutter technology can be divided into two types: light absorption and light scattering. However, a light shutter based on light absorption cannot block the background image perfectly, and a light shutter based on light scattering cannot provide black color. In this work we demonstrate a light shutter using two liquid crystal (LC) layers, a light absorption layer and a light scattering layer. To realize the light absorption layer and the light scattering layer, we use the planar state of a dye-doped chiral nematic LC (CNLC) cell and the focal-conic state of a long-pitch CNLC cell, respectively. The proposed light shutter device can block the background image perfectly and show black color. We expect that the proposed light shutter can increase the visibility of a transparent display.

  4. The Comparison Of Dome And HMD Delivery Systems: A Case Study

    NASA Technical Reports Server (NTRS)

    Chen, Jian; Harm, Deborah L.; Loftin, R. Bowen; Tyalor, Laura C.; Leiss, Ernst L.

    2002-01-01

    For effective astronaut training applications, choosing the right display devices to present images is crucial. In order to assess which devices are appropriate, it is important to design a successful virtual environment for a comparison study of the display devices. We present a comprehensive system, a Virtual Environment Testbed (VET), for the comparison of Dome and Head Mounted Display (HMD) systems on an SGI Onyx workstation. By writing codelets, we allow a variety of virtual scenarios and subjects' information to be loaded without programming or changing the code. This is part of an ongoing research project conducted by NASA/JSC.

  5. Neuroradiology Using Secure Mobile Device Review.

    PubMed

    Randhawa, Privia A; Morrish, William; Lysack, John T; Hu, William; Goyal, Mayank; Hill, Michael D

    2016-04-05

    Image review on computer-based workstations has made film-based review outdated. Despite advances in technology, the lack of portability of digital workstations creates an inherent disadvantage. As such, we sought to determine if the quality of image review on a handheld device is adequate for routine clinical use. Six CT/CTA cases and six MR/MRA cases were independently reviewed by three neuroradiologists in varying environments: high and low ambient light using a handheld device and on a traditional imaging workstation in ideal conditions. On first review (using a handheld device in high ambient light), a preliminary diagnosis for each case was made. Upon changes in review conditions, neuroradiologists were asked if any additional features were seen that changed their initial diagnoses. Reviewers were also asked to comment on overall clinical quality and if the handheld display was of acceptable quality for image review. After the initial CT review in high ambient light, additional findings were reported in 2 of 18 instances on subsequent reviews. Similarly, additional findings were identified in 4 of 18 instances after the initial MR review in high ambient lighting. Only one of these six additional findings contributed to the diagnosis made on the initial preliminary review. Use of a handheld device for image review is of adequate diagnostic quality based on image contrast, sharpness of structures, visible artefacts and overall display quality. Although reviewers were comfortable with using this technology, a handheld device with a larger screen may be diagnostically superior.

  6. Optimal front light design for reflective displays under different ambient illumination

    NASA Astrophysics Data System (ADS)

    Wang, Sheng-Po; Chang, Ting-Ting; Li, Chien-Ju; Bai, Yi-Ho; Hu, Kuo-Jui

    2011-01-01

    The goal of this study is to find the optimal luminance and color temperature of front light for reflective displays under different ambient illumination by conducting a series of psychophysical experiments. A color- and brightness-tunable front light device with ten LED units was built and calibrated to present 256 luminance levels and 13 different color temperatures at a fixed luminance of 200 cd/m2. The experiment results revealed the best luminance and color temperature settings for human observers under different ambient illumination, which could also assist e-paper manufacturers in designing front light devices and presenting the best image quality on reflective displays. Furthermore, a similar experiment procedure was conducted utilizing a new flexible e-signage display developed by ITRI, and an optimal front light device for the new display panel has been designed and utilized.

  7. Current progress and technical challenges of flexible liquid crystal displays

    NASA Astrophysics Data System (ADS)

    Fujikake, Hideo; Sato, Hiroto

    2009-02-01

    We focus on several technical approaches to flexible liquid crystal (LC) displays in this report. We have been developing flexible displays using plastic film substrates based on polymer-dispersed LC technology with molecular alignment control. In our representative devices, molecular-aligned polymer walls keep the plastic-substrate gap constant without LC alignment disorder, and aligned polymer networks create monostable switching of fast-response ferroelectric LC (FLC) for grayscale capability. In the fabrication process, a high-viscosity FLC/monomer solution was printed, sandwiched, and pressed between plastic substrates. Then the polymer walls and networks were sequentially formed based on photo-polymerization-induced phase separation in the nematic phase by two exposure processes of patterned and uniform ultraviolet light. Two flexible backlight films, using direct illumination and light-guide methods with small three-primary-color light-emitting diodes, were fabricated to obtain high-visibility display images. The fabricated flexible FLC panels were driven by external transistor arrays, internal organic thin film transistor (TFT) arrays, and poly-Si TFT arrays. We achieved full-color moving-image displays using the flexible FLC panel and the flexible backlight film based on a field-sequential-color driving technique. In addition, for backlight-free flexible LC displays, flexible reflective devices of twisted guest-host nematic LC and cholesteric LC were discussed with molecular-aligned polymer walls. A single-substrate device structure and fabrication method using a self-standing polymer-stabilized nematic LC film and polymer ceiling layer were also proposed for obtaining LC devices with excellent flexibility.

  8. Towards a robust HDR imaging system

    NASA Astrophysics Data System (ADS)

    Long, Xin; Zeng, Xiangrong; Huangpeng, Qizi; Zhou, Jinglun; Feng, Jing

    2016-07-01

    High dynamic range (HDR) images can show more details and luminance information on a general display device than low dynamic range (LDR) images. We present a robust HDR imaging system which can deal with blurry LDR images, overcoming the limitations of most existing HDR methods. Experiments on real images show the effectiveness and competitiveness of the proposed method.

  9. Video System for Viewing From a Remote or Windowless Cockpit

    NASA Technical Reports Server (NTRS)

    Banerjee, Amamath

    2009-01-01

    A system of electronic hardware and software synthesizes, in nearly real time, an image of a portion of a scene surveyed by as many as eight video cameras aimed, in different directions, at portions of the scene. This is a prototype of systems that would enable a pilot to view the scene outside a remote or windowless cockpit. The outputs of the cameras are digitized. Direct memory addressing is used to store the data of a few captured images in sequence, and the sequence is repeated in cycles. Cylindrical warping is used in merging adjacent images at their borders to construct a mosaic image of the scene. The mosaic-image data are written to a memory block from which they can be rendered on a head-mounted display (HMD) device. A subsystem in the HMD device tracks the direction of gaze of the wearer, providing data that are used to select, for display, the portion of the mosaic image corresponding to the direction of gaze. The basic functionality of the system has been demonstrated by mounting the cameras on the roof of a van and steering the van by use of the images presented on the HMD device.
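
    The cylindrical warping used above to merge adjacent camera views can be sketched as a per-pixel mapping: each image-plane coordinate, measured from the image center, is projected onto a cylinder whose radius equals the focal length in pixels, so that neighboring views line up at their borders. The abstract does not give the formulas; this is the standard cylindrical projection, and the focal length used in the test is an assumed value.

```python
import math

def cylindrical_warp(x: float, y: float, f: float) -> tuple:
    """Map an image-plane coordinate (x, y), relative to the image center,
    to cylindrical coordinates (theta, h) for mosaic compositing.
    f is the camera focal length expressed in pixels."""
    theta = f * math.atan2(x, f)           # horizontal angle, rescaled to pixels
    h = f * y / math.sqrt(x * x + f * f)   # height on the cylinder surface
    return (theta, h)
```

    Points near the image center are nearly unchanged, while points far from the center are compressed horizontally, which is what removes the perspective mismatch at the seams between cameras.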

  10. Multiresolution image gathering and restoration

    NASA Technical Reports Server (NTRS)

    Fales, Carl L.; Huck, Friedrich O.; Alter-Gartenberg, Rachel; Rahman, Zia-Ur

    1992-01-01

    In this paper we integrate multiresolution decomposition with image gathering and restoration. This integration leads to a Wiener-matrix filter that accounts for the aliasing, blurring, and noise in image gathering, together with the digital filtering and decimation in signal decomposition. Moreover, as implemented here, the Wiener-matrix filter completely suppresses the blurring and raster effects of the image-display device. We demonstrate that this filter can significantly improve the fidelity and visual quality produced by conventional image reconstruction. The extent of this improvement, in turn, depends on the design of the image-gathering device.
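
    The paper's Wiener-matrix filter additionally couples the multiresolution subbands of the decomposition; as a simplified illustration only, the scalar per-frequency Wiener restoration gain underlying it can be written as follows. This single-channel form is an assumption for clarity, not the authors' matrix formulation.

```python
def wiener_gain(H: complex, signal_psd: float, noise_psd: float) -> complex:
    """Per-frequency Wiener restoration gain W = H* S / (|H|^2 S + N),
    where H is the combined gathering/display transfer function at that
    frequency, S the signal power spectrum, and N the noise power spectrum."""
    return (H.conjugate() * signal_psd) / (abs(H) ** 2 * signal_psd + noise_psd)
```

    With no noise the gain reduces to the inverse filter 1/H (deblurring), and as noise grows the gain shrinks toward zero, which is the aliasing/blur/noise trade-off the filter balances.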

  11. Design of a handheld infrared imaging device based on uncooled infrared detector

    NASA Astrophysics Data System (ADS)

    Sun, Xianzhong; Li, Junwei; Zhang, Yazhou

    2017-02-01

    In this paper, we introduce the system structure and operating principle of the device, and discuss our solutions for image data acquisition and storage, operating state and mode control, and power management in detail. Besides, we propose an algorithm of pseudo-color for thermal images and apply it to the image processing module of the device. The thermal images can be displayed in real time on a 1.8-inch TFT-LCD. The device has a compact structure and can be held easily in one hand. It also has good imaging performance with low power consumption, with a thermal sensitivity of less than 150 mK. Finally, we introduce one of its applications, fault diagnosis in electronic circuits; the test shows that it is a good solution for fast fault detection.
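
    The abstract does not specify its pseudo-color algorithm, so the following is a generic iron-style palette sketch of the technique: an 8-bit thermal intensity is mapped to RGB by piecewise-linear ramps (black to red to yellow to white). The breakpoints are illustrative assumptions.

```python
def pseudo_color(gray: int) -> tuple:
    """Map an 8-bit grayscale thermal value (0-255) to an (R, G, B) triple
    using a piecewise-linear black -> red -> yellow -> white palette."""
    if gray < 85:                            # ramp red up from black
        return (gray * 3, 0, 0)
    if gray < 170:                           # red -> yellow: ramp green up
        return (255, (gray - 85) * 3, 0)
    return (255, 255, (gray - 170) * 3)      # yellow -> white: ramp blue up
```

    In practice such a mapping is precomputed into a 256-entry lookup table so each pixel costs a single table read, which suits a low-power handheld device.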

  12. Assessment of display performance for medical imaging systems: Executive summary of AAPM TG18 report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Samei, Ehsan; Badano, Aldo; Chakraborty, Dev

    Digital imaging provides an effective means to electronically acquire, archive, distribute, and view medical images. Medical imaging display stations are an integral part of these operations. Therefore, it is vitally important to assure that electronic display devices do not compromise image quality and ultimately patient care. The AAPM Task Group 18 (TG18) recently published guidelines and acceptance criteria for acceptance testing and quality control of medical display devices. This paper is an executive summary of the TG18 report. TG18 guidelines include visual, quantitative, and advanced testing methodologies for primary and secondary class display devices. The characteristics, tested in conjunction with specially designed test patterns (i.e., TG18 patterns), include reflection, geometric distortion, luminance, the spatial and angular dependencies of luminance, resolution, noise, glare, chromaticity, and display artifacts. Geometric distortions are evaluated by linear measurements of the TG18-QC test pattern, which should render distortion coefficients less than 2%/5% for primary/secondary displays, respectively. Reflection measurements include specular and diffuse reflection coefficients from which the maximum allowable ambient lighting is determined such that contrast degradation due to display reflection remains below a 20% limit and the level of ambient luminance (L_amb) does not unduly compromise luminance ratio (LR) and contrast at low luminance levels. Luminance evaluation relies on visual assessment of low contrast features in the TG18-CT and TG18-MP test patterns, or quantitative measurements at 18 distinct luminance levels of the TG18-LN test patterns. The major acceptance criteria for primary/secondary displays are maximum luminance of greater than 170/100 cd/m², LR of greater than 250/100, and contrast conformance to that of the grayscale standard display function (GSDF) of better than 10%/20%, respectively.
The angular response is tested to ascertain the viewing cone within which contrast conformance to the GSDF is better than 30%/60% and LR is greater than 175/70 for primary/secondary displays, or alternatively, within which the on-axis contrast thresholds of the TG18-CT test pattern remain discernible. The evaluation of luminance spatial uniformity at two distinct luminance levels across the display faceplate using TG18-UNL test patterns should yield nonuniformity coefficients smaller than 30%. The resolution evaluation includes the visual scoring of the CX test target in the TG18-QC or TG18-CX test patterns, which should yield scores greater than 4/6 for primary/secondary displays. Noise evaluation includes visual evaluation of the contrast threshold in the TG18-AFC test pattern, which should yield a minimum of 3/2 targets visible for primary/secondary displays. The guidelines also include methodologies for more quantitative resolution and noise measurements based on MTF and NPS analyses. The display glare test, based on the visibility of the low-contrast targets of the TG18-GV test pattern or the measurement of the glare ratio (GR), is expected to yield scores greater than 3/1 and GRs greater than 400/150 for primary/secondary displays. Chromaticity, measured across a display faceplate or between two display devices, is expected to render a u′,v′ color separation of less than 0.01 for primary displays. The report offers further descriptions of prior standardization efforts, current display technologies, testing prerequisites, streamlined procedures and timelines, and TG18 test patterns.
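
    The luminance acceptance criteria summarized above can be collected into a small check, shown here as a sketch using only the thresholds stated in this summary (primary: Lmax > 170 cd/m², LR > 250, GSDF contrast conformance within 10%; secondary: 100, 100, 20%); the function name and structure are illustrative, not part of the TG18 report.

```python
# Acceptance thresholds as stated in the TG18 executive summary above.
TG18_LIMITS = {
    "primary":   {"l_max": 170.0, "lum_ratio": 250.0, "gsdf_dev": 0.10},
    "secondary": {"l_max": 100.0, "lum_ratio": 100.0, "gsdf_dev": 0.20},
}

def passes_luminance_tests(display_class: str, l_max: float,
                           l_min: float, gsdf_deviation: float) -> bool:
    """Check measured luminance values against the TG18 luminance criteria.

    l_max, l_min: maximum and minimum luminance in cd/m^2.
    gsdf_deviation: worst-case fractional departure from GSDF contrast.
    """
    lim = TG18_LIMITS[display_class]
    return (l_max > lim["l_max"]
            and l_max / l_min > lim["lum_ratio"]       # luminance ratio (LR)
            and gsdf_deviation < lim["gsdf_dev"])      # contrast conformance
```

    A full TG18 evaluation also covers reflection, uniformity, resolution, noise, glare, and chromaticity; this sketch covers only the luminance checks.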

  13. Use of a color CMOS camera as a colorimeter

    NASA Astrophysics Data System (ADS)

    Dallas, William J.; Roehrig, Hans; Redford, Gary R.

    2006-08-01

    In radiology diagnosis, film is being quickly replaced by computer monitors as the display medium for all imaging modalities. Increasingly, these monitors are color instead of monochrome. It is important to have instruments available to characterize the display devices in order to guarantee reproducible presentation of image material. We are developing an imaging colorimeter based on a commercially available color digital camera. The camera uses a sensor that has co-located pixels in all three primary colors.

  14. fVisiOn: glasses-free tabletop 3D display to provide virtual 3D media naturally alongside real media

    NASA Astrophysics Data System (ADS)

    Yoshida, Shunsuke

    2012-06-01

    A novel glasses-free tabletop 3D display, named fVisiOn, floats virtual 3D objects on an empty, flat tabletop surface and enables multiple viewers to observe raised 3D images from any angle around 360°. Our glasses-free 3D image reproduction method employs a combination of an optical device and an array of projectors and produces continuous horizontal parallax in the direction of a circular path located above the table. The optical device is shaped as a hollow cone and works as an anisotropic diffuser. The circularly arranged projectors cast numerous rays into the optical device. Each ray reproduces a light ray that passes through a corresponding point on a virtual object's surface and travels toward a viewing area around the table. At any viewpoint on the ring-shaped viewing area, both eyes collect fractional images from different projectors, and all the viewers around the table can perceive the scene as 3D from their own perspectives because the images include binocular disparity. The entire mechanism is installed beneath the table, so the tabletop area remains clear and no ordinary tabletop activities are disturbed. Many people can naturally share the 3D images displayed together with real objects on the table. In our latest prototype, we employed a handmade optical device and an array of over 100 tiny projectors. This configuration reproduces static and animated 3D scenes for a 130° viewing area and allows 5-cm-tall virtual characters to play soccer and dance on the table.

  15. High-resolution, continuous field-of-view (FOV), non-rotating imaging system

    NASA Technical Reports Server (NTRS)

    Huntsberger, Terrance L. (Inventor); Stirbl, Robert C. (Inventor); Aghazarian, Hrand (Inventor); Padgett, Curtis W. (Inventor)

    2010-01-01

    A high resolution CMOS imaging system especially suitable for use in a periscope head. The imaging system includes a sensor head for scene acquisition, and a control apparatus inclusive of distributed processors and software for device-control, data handling, and display. The sensor head encloses a combination of wide field-of-view CMOS imagers and narrow field-of-view CMOS imagers. Each bank of imagers is controlled by a dedicated processing module in order to handle information flow and image analysis of the outputs of the camera system. The imaging system also includes an automated or manually controlled display system and software for providing an interactive graphical user interface (GUI) that displays a full 360-degree field of view and allows the user or automated ATR system to select regions for higher resolution inspection.

  16. Interpretation of medical imaging data with a mobile application: a mobile digital imaging processing environment.

    PubMed

    Lin, Meng Kuan; Nicolini, Oliver; Waxenegger, Harald; Galloway, Graham J; Ullmann, Jeremy F P; Janke, Andrew L

    2013-01-01

    Digital Imaging Processing (DIP) requires data extraction and output from a visualization tool to be consistent. Data handling and transmission between the server and a user is a systematic process in service interpretation. The use of integrated medical services for management and viewing of imaging data in combination with a mobile visualization tool can be greatly facilitated by data analysis and interpretation. This paper presents an integrated mobile application and DIP service, called M-DIP. The objectives of the system are to (1) automate the direct data tiling, conversion, and pre-tiling of brain images from Medical Imaging NetCDF (MINC) and Neuroimaging Informatics Technology Initiative (NIFTI) to RAW formats; (2) speed up querying of imaging measurements; and (3) display images in three dimensions in real-world coordinates at multiple levels of detail. In addition, M-DIP provides the ability to work on a mobile or tablet device without any software installation using web-based protocols. M-DIP implements three levels of architecture with a relational middle-layer database, a stand-alone DIP server, and a mobile application logic middle level realizing user interpretation for direct querying and communication. This imaging software has the ability to display biological imaging data at multiple zoom levels and to increase its quality to meet users' expectations. Interpretation of bioimaging data is facilitated by an interface analogous to online mapping services using real-world coordinate browsing. This allows mobile devices to display multiple datasets simultaneously from a remote site. M-DIP can be used as a measurement repository that can be accessed from any network environment, such as a portable mobile or tablet device. In addition, this system in combination with mobile applications establishes a virtualization tool in the neuroinformatics field to speed interpretation services.

  18. White constancy method for mobile displays

    NASA Astrophysics Data System (ADS)

    Yum, Ji Young; Park, Hyun Hee; Jang, Seul Ki; Lee, Jae Hyang; Kim, Jong Ho; Yi, Ji Young; Lee, Min Woo

    2014-03-01

    Nowadays, consumers' demands on the image quality of mobile devices are increasing as smartphones become widely used. For example, colors may be perceived differently when contents are displayed under different illuminants: white displayed under an incandescent lamp is perceived as bluish, while the same content under LED light is perceived as yellowish. When the perceived white shifts with the illuminant environment, image quality is degraded. The objective of the proposed white constancy method is to maintain consistent output colors regardless of the illuminant. Human visual experiments were performed to analyze viewers' perceptual constancy: participants were asked to choose the displayed white under a variety of illuminants. The relationship between the illuminants and the colors selected as white is modeled by a mapping function based on the results of the human visual experiments. White constancy values for image control are then determined from the predesigned functions. Experimental results indicate that the proposed method yields better image quality by keeping the displayed white consistent.
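
    The paper's mapping function is derived from its visual experiments and is not given in the abstract; purely as an illustrative sketch, a target display white point (CIE xy) can be interpolated between anchor illuminants by correlated color temperature (CCT). The anchor chromaticities below are hypothetical placeholders, not the authors' experimental results.

```python
# Hypothetical anchors: (ambient CCT in kelvin, chosen display white as CIE (x, y)).
ANCHORS = [
    (2700.0, (0.44, 0.40)),   # incandescent-like ambient -> warmer display white
    (6500.0, (0.31, 0.33)),   # daylight-like ambient -> cooler display white
]

def display_white(cct: float) -> tuple:
    """Linearly interpolate the target display white point for an ambient CCT,
    clamping to the measured anchor range at both ends."""
    (c0, (x0, y0)), (c1, (x1, y1)) = ANCHORS
    t = max(0.0, min(1.0, (cct - c0) / (c1 - c0)))   # interpolation parameter in [0, 1]
    return (x0 + t * (x1 - x0), y0 + t * (y1 - y0))
```

    A production implementation would fit the function to the experimental data (e.g. over several anchors) and drive the display's white-point adjustment from an ambient light sensor.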

  19. Digital 3D holographic display using scattering layers for enhanced viewing angle and image size

    NASA Astrophysics Data System (ADS)

    Yu, Hyeonseung; Lee, KyeoReh; Park, Jongchan; Park, YongKeun

    2017-05-01

    In digital 3D holographic displays, the generation of realistic 3D images has been hindered by limited viewing angle and image size. Here we demonstrate a digital 3D holographic display using volume speckle fields produced by scattering layers in which both the viewing angle and the image size are greatly enhanced. Although volume speckle fields exhibit random distributions, the transmitted speckle fields have a linear and deterministic relationship with the input field. By modulating the incident wavefront with a digital micro-mirror device, volume speckle patterns are controlled to generate 3D images of micrometer-size optical foci with 35° viewing angle in a volume of 2 cm × 2 cm × 2 cm.

  20. Development of an immersive virtual reality head-mounted display with high performance.

    PubMed

    Wang, Yunqi; Liu, Weiqi; Meng, Xiangxiang; Fu, Hanyi; Zhang, Daliang; Kang, Yusi; Feng, Rui; Wei, Zhonglun; Zhu, Xiuqing; Jiang, Guohua

    2016-09-01

    To resolve the contradiction between large field of view and high resolution in immersive virtual reality (VR) head-mounted displays (HMDs), an HMD monocular optical system with a large field of view and high resolution was designed. The system was fabricated by adopting aspheric technology with CNC grinding and a high-resolution LCD as the image source. With this monocular optical system, an HMD binocular optical system with a wide-range, continuously adjustable interpupillary distance was achieved in the form of partially overlapping fields of view (FOV) combined with a screw adjustment mechanism. A fast image processor-centered LCD driver circuit and an image preprocessing system were also built to address binocular vision inconsistency in the partially overlapping FOV binocular optical system. The distortions of the HMD optical system with a large field of view were measured. Meanwhile, the optical distortions in the display and the trapezoidal distortions introduced during image processing were corrected by a calibration model for reverse rotations and translations. A high-performance not-fully-transparent VR HMD device with high resolution (1920×1080) and large FOV [141.6°(H)×73.08°(V)] was developed. The average angular resolution over the full field of view is 18.6 pixels/degree. With the device, high-quality VR simulations can be completed under various scenarios, and the device can be utilized for simulated training in aeronautics, astronautics, and other fields with corresponding platforms. The developed device has positive practical significance.

  1. Virtual reality 3D headset based on DMD light modulators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bernacki, Bruce E.; Evans, Allan; Tang, Edward

    We present the design of an immersion-type 3D headset suitable for virtual reality applications based upon digital micro-mirror devices (DMDs). Our approach leverages silicon micro-mirrors offering 720p-resolution displays in a small form factor. Supporting chip sets allow rapid integration of these devices into wearable displays with high resolution and low power consumption. Applications include night driving, piloting of UAVs, fusion of multiple sensors for pilots, training, vision diagnostics, and consumer gaming. In our design, light from the DMD is imaged to infinity and the user's own eye lens forms a real image on the user's retina.

  2. Haptic Stylus and Empirical Studies on Braille, Button, and Texture Display

    PubMed Central

    Kyung, Ki-Uk; Lee, Jun-Young; Park, Junseok

    2008-01-01

    This paper presents a haptic stylus interface with a built-in compact tactile display module and an impact module, as well as empirical studies on Braille, button, and texture display. We describe preliminary evaluations verifying the tactile display's performance, indicating that it can satisfactorily represent Braille numbers for both sighted and blind users. To demonstrate the haptic feedback capability of the stylus, an experiment providing impact feedback mimicking the click of a button was conducted. Since the developed device is small enough to be attached to a force feedback device, its applicability to combined force and tactile feedback display in a pen-held haptic device is also investigated. The handle of a pen-held haptic interface was replaced by the pen-like interface to add tactile feedback capability to the device. Since the system provides a combination of force, tactile, and impact feedback, three haptic representation methods for texture display were compared on surfaces with three texture groups differing in direction, groove width, and shape. In addition, we evaluate its capacity to support touch screen operations by providing tactile sensations when a user rubs against an image displayed on a monitor. PMID:18317520

  3. Haptic stylus and empirical studies on braille, button, and texture display.

    PubMed

    Kyung, Ki-Uk; Lee, Jun-Young; Park, Junseok

    2008-01-01

    This paper presents a haptic stylus interface with a built-in compact tactile display module and an impact module, as well as empirical studies on Braille, button, and texture display. We describe preliminary evaluations verifying the tactile display's performance, indicating that it can satisfactorily represent Braille numbers for both sighted and blind users. To demonstrate the haptic feedback capability of the stylus, an experiment providing impact feedback mimicking the click of a button was conducted. Since the developed device is small enough to be attached to a force feedback device, its applicability to combined force and tactile feedback display in a pen-held haptic device is also investigated. The handle of a pen-held haptic interface was replaced by the pen-like interface to add tactile feedback capability to the device. Since the system provides a combination of force, tactile, and impact feedback, three haptic representation methods for texture display were compared on surfaces with three texture groups differing in direction, groove width, and shape. In addition, we evaluate its capacity to support touch screen operations by providing tactile sensations when a user rubs against an image displayed on a monitor.

  4. Development of fluorescence based handheld imaging devices for food safety inspection

    NASA Astrophysics Data System (ADS)

    Lee, Hoyoung; Kim, Moon S.; Chao, Kuanglin; Lefcourt, Alan M.; Chan, Diane E.

    2013-05-01

    For sanitation inspection in food processing environments, fluorescence imaging can be a very useful method because many organic materials reveal unique fluorescence emissions when excited by UV or violet radiation. Although some fluorescence-based automated inspection instrumentation has been developed for food products, there remains a need for devices that can assist on-site inspectors performing visual sanitation inspection of the surfaces of food processing/handling equipment. This paper reports the development of an inexpensive handheld imaging device designed to visualize fluorescence emissions and intended to help detect the presence of fecal contaminants, organic residues, and bacterial biofilms at multispectral fluorescence emission bands. The device consists of a miniature camera, multispectral (interference) filters, and high-power LED illumination. With WiFi communication, live inspection images from the device can be displayed on smartphones or tablets. This imaging device could be a useful tool for assessing the effectiveness of sanitation procedures and for helping processors minimize food safety risks or determine potential problem areas. This paper presents the design and development of the imaging devices, including evaluation and optimization of the hardware components.

  5. A head-mounted compressive three-dimensional display system with polarization-dependent focus switching

    NASA Astrophysics Data System (ADS)

    Lee, Chang-Kun; Moon, Seokil; Lee, Byounghyo; Jeong, Youngmo; Lee, Byoungho

    2016-10-01

    A head-mounted compressive three-dimensional (3D) display system is proposed by combining a polarization beam splitter (PBS), a fast-switching polarization rotator, and a microdisplay with high pixel density. According to the polarization state of the image, controlled by the polarization rotator, the optical path of the image in the PBS can be divided into transmitted and reflected components. Since the optical paths of the images are spatially separated, it is possible to focus each image independently at a different depth position. Transmitted p-polarized and reflected s-polarized images can be focused by a convex lens and a mirror, respectively. When the focal lengths of the convex lens and mirror are properly determined, the two image planes can be located at the intended positions. The geometrical relationship is easily modulated by replacing the components. The fast switching of polarization enables real-time operation of multi-focal image planes with a single display panel. Since the device characteristics of a single panel are conserved, high image quality, reliability, and uniformity can be retained. For generating 3D images, layer images for a compressive light field display between the two image planes are calculated. Since a display panel with high pixel density is adopted, high-quality 3D images are reconstructed. In addition, image degradation by diffraction between physically stacked display panels can be mitigated. A simple optical configuration of the proposed system is implemented, and the feasibility of the proposed method is verified through experiments.
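
    The placement of the two image planes described above follows from ordinary thin-lens geometry: a panel sitting inside the focal length of the lens (or concave mirror) forms a virtual image whose distance depends on the focal length and the panel distance. A sketch with hypothetical numbers (the actual focal lengths and panel distances are not given in the abstract):

```python
def virtual_image_distance(f_mm, d_mm):
    """Thin-lens magnifier configuration: a panel at distance d inside
    the focal length f (d < f) forms a virtual image at distance v,
    where 1/v = 1/d - 1/f."""
    assert d_mm < f_mm, "panel must sit inside the focal length"
    return 1.0 / (1.0 / d_mm - 1.0 / f_mm)

# Hypothetical values for the lens path (transmitted, p-polarized)
# and the mirror path (reflected, s-polarized):
near_plane = virtual_image_distance(50.0, 40.0)   # 200.0 mm
far_plane = virtual_image_distance(50.0, 45.0)    # 450.0 mm
```

Moving the panel slightly closer to the focal point pushes the virtual image much farther out, which is why two optical paths with properly chosen geometry yield two well-separated focal planes from one panel.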

  6. Development of a mini-mobile digital radiography system by using wireless smart devices.

    PubMed

    Jeong, Chang-Won; Joo, Su-Chong; Ryu, Jong-Hyun; Lee, Jinseok; Kim, Kyong-Woo; Yoon, Kwon-Ha

    2014-08-01

    Current trends in digital radiography (DR) are toward portable systems using smart mobile devices for patient-centered care. We aimed to develop a mini-mobile DR system using smart devices for wireless connection to medical information systems. We developed a mini-mobile DR system consisting of an X-ray source and a Complementary Metal-Oxide Semiconductor (CMOS) sensor based on a flat panel detector for small-field diagnostics in patients. It can be used for examinations that are difficult to perform with a fixed traditional device. We also designed a method for embedded systems in the development of portable DR systems. The external interface used the fast and stable IEEE 802.11n wireless protocol, and we adapted the device for connections with the Picture Archiving and Communication System (PACS) and smart devices. The smart device could display images on an external monitor other than the monitor in the DR system. The communication modules, main control board, and external interface supporting smart devices were implemented. Further, a smart viewer based on the external interface was developed to display image files on various smart devices. In addition, operators benefit from a reduced radiation dose when using remote smart devices. The system is integrated with smart devices that can provide X-ray imaging services anywhere. With this technology, images can be observed on a smart device from a remote location by connecting to the external interface. We evaluated the response time of the mini-mobile DR system in comparison to mobile PACS. The experimental results show that our system outperforms conventional mobile PACS in this regard.

  7. Flexoelectric effect in an in-plane switching (IPS) liquid crystal cell for low-power consumption display devices

    NASA Astrophysics Data System (ADS)

    Kim, Min Su; Bos, Philip J.; Kim, Dong-Woo; Yang, Deng-Ke; Lee, Joong Hee; Lee, Seung Hee

    2016-10-01

    The technology of displaying static images in portable displays, advertising panels, and price tags pursues significant reductions in power consumption and product cost. Driving at a low-frequency electric field in fringe-field switching (FFS) mode can be an efficient way to save power in recent portable devices, but a serious drop in image quality, so-called image flickering, has been found, arising from the coupling of elastic deformation to not only the quadratic dielectric effect but also the linear flexoelectric effect. Despite the urgent need to solve this issue, understanding of the phenomenon remains vague. Here, we thoroughly analyze and report for the first time the flexoelectric effect in an in-plane switching (IPS) liquid crystal cell. The effect takes place in the area above the electrodes due to splay and bend deformations of the nematic liquid crystal along oblique electric fields, so that an obvious spatial shift of the optical transmittance is experimentally observed and clearly demonstrated based on the relation between the direction of the flexoelectric polarization and the electric field polarity. In addition, we report that the IPS mode has inherent characteristics that solve the image-flickering issue in low-power consumption displays in terms of the physical properties of the liquid crystal material and the electrode structure.

  8. Flexoelectric effect in an in-plane switching (IPS) liquid crystal cell for low-power consumption display devices.

    PubMed

    Kim, Min Su; Bos, Philip J; Kim, Dong-Woo; Yang, Deng-Ke; Lee, Joong Hee; Lee, Seung Hee

    2016-10-12

    The technology of displaying static images in portable displays, advertising panels, and price tags pursues significant reductions in power consumption and product cost. Driving at a low-frequency electric field in fringe-field switching (FFS) mode can be an efficient way to save power in recent portable devices, but a serious drop in image quality, so-called image flickering, has been found, arising from the coupling of elastic deformation to not only the quadratic dielectric effect but also the linear flexoelectric effect. Despite the urgent need to solve this issue, understanding of the phenomenon remains vague. Here, we thoroughly analyze and report for the first time the flexoelectric effect in an in-plane switching (IPS) liquid crystal cell. The effect takes place in the area above the electrodes due to splay and bend deformations of the nematic liquid crystal along oblique electric fields, so that an obvious spatial shift of the optical transmittance is experimentally observed and clearly demonstrated based on the relation between the direction of the flexoelectric polarization and the electric field polarity. In addition, we report that the IPS mode has inherent characteristics that solve the image-flickering issue in low-power consumption displays in terms of the physical properties of the liquid crystal material and the electrode structure.

  9. Image quality degradation by light-scattering processes in high-performance display devices for medical imaging

    NASA Astrophysics Data System (ADS)

    Badano, Aldo

    1999-11-01

    This thesis addresses the characterization of light scattering processes that degrade image quality in high performance electronic display devices for digital radiography. Using novel experimental and computational tools, we study the lateral diffusion of light in emissive display devices that causes extensive veiling glare and significant reduction of the physical contrast. In addition, we examine the deleterious effects of ambient light reflections that affect the contrast of low luminance regions and superimpose an unwanted structured signal. The analysis begins by introducing the performance limitations of the human visual system to define high fidelity requirements. It is noted that current devices severely suffer from image quality degradation due to optical transport processes. To model the veiling glare and reflectance characteristics of display devices, we introduce a Monte Carlo light transport simulation code, DETECT-II, that tracks individual photons through multiple scattering events. The simulation accounts for the photon polarization state at each scattering event, and provides descriptions for rough surfaces and thin film coatings. A new experimental method to measure veiling glare is described next, based on a conic collimated probe that minimizes contamination from bright areas. The measured veiling glare ratio is taken to be the luminance in the surrounding bright field divided by the luminance in the dark circle. We show that veiling glare ratios on the order of a few hundred can be measured with an uncertainty of a few percent. The veiling glare response function is obtained by measuring the small spot contrast ratio of test patterns with varying dark spot radii. Using DETECT-II, we then estimate the ring response functions for a high performance medical imaging monitor of current design, and compare the predictions of the model with the experimentally measured response function.
The data presented in this thesis demonstrate that although absorption in the faceplate of high-performance monochrome cathode-ray tube monitors has reduced glare, a black matrix design is needed for high-fidelity applications. For a high performance medical imaging monitor with anti-reflective coating, the glare ratio for a 1 cm diameter dark spot was measured to be 240. Finally, we introduce experimental techniques for measurements of specular and diffuse display reflectance, and we compare measured reflection coefficients with Monte Carlo estimates. A specular reflection coefficient of 0.0012 and a diffuse coefficient of 0.005 nits/lux are required to minimize degradation from ambient light in rooms with 100 lux illumination. In spite of having comparable reflection coefficients, the low maximum luminance of current devices worsens the effect of ambient light reflections when compared to radiographic film. Flat panel technologies with optimized designs can perform even better than film due to a thin faceplate, increased light absorption, and high brightness.
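
    The veiling glare ratio defined in this record is just the surrounding bright-field luminance divided by the luminance measured inside the dark spot. A trivial sketch, with hypothetical luminance readings chosen only to reproduce the order of magnitude reported for the anti-reflective-coated monitor:

```python
def veiling_glare_ratio(bright_luminance, dark_spot_luminance):
    """Veiling glare ratio: luminance of the surrounding bright field
    divided by the luminance measured in the dark circle."""
    return bright_luminance / dark_spot_luminance

# Hypothetical readings in cd/m^2 for a 1 cm diameter dark spot:
ratio = veiling_glare_ratio(300.0, 1.25)   # -> 240.0
```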

  10. Color image quality in projection displays: a case study

    NASA Astrophysics Data System (ADS)

    Strand, Monica; Hardeberg, Jon Y.; Nussbaum, Peter

    2005-01-01

    Recently the use of projection displays has increased dramatically in different applications such as digital cinema, home theatre, and business and educational presentations. Even though the color image quality of these devices has improved significantly over the years, it is still a common situation for users of projection displays that the projected colors differ significantly from the intended ones. The study presented in this paper attempts to analyze the color image quality of a large set of projection display devices, particularly investigating the variations in color reproduction. As a case study, a set of 14 projectors (LCD and DLP technology) at Gjøvik University College was tested under four different conditions: dark and light room, with and without using an ICC profile. To find out more about the importance of the illumination conditions in a room, and the degree of improvement when using an ICC profile, the results from the measurements were processed and analyzed. Eye-One Beamer from GretagMacbeth was used to make the profiles. The color image quality was evaluated both visually and by color difference calculations. The results from the analysis indicated large visual and colorimetric differences between the projectors. The DLP projectors generally have smaller color gamuts than the LCD projectors, and the color gamuts of older projectors are significantly smaller than those of newer ones. The amount of ambient light reaching the screen is of great importance for the visual impression: if too much reflection and other ambient light reaches the screen, the projected image becomes pale and has low contrast. When using a profile, the differences in colors between the projectors become smaller and the colors appear more correct. For one device, the average ΔE*ab color difference compared to a relative white reference was reduced from 22 to 11; for another, from 13 to 6. Blue colors show the largest variations among the projection displays and are therefore harder to predict.

  11. Color image quality in projection displays: a case study

    NASA Astrophysics Data System (ADS)

    Strand, Monica; Hardeberg, Jon Y.; Nussbaum, Peter

    2004-10-01

    Recently the use of projection displays has increased dramatically in different applications such as digital cinema, home theatre, and business and educational presentations. Even though the color image quality of these devices has improved significantly over the years, it is still a common situation for users of projection displays that the projected colors differ significantly from the intended ones. The study presented in this paper attempts to analyze the color image quality of a large set of projection display devices, particularly investigating the variations in color reproduction. As a case study, a set of 14 projectors (LCD and DLP technology) at Gjøvik University College was tested under four different conditions: dark and light room, with and without using an ICC profile. To find out more about the importance of the illumination conditions in a room, and the degree of improvement when using an ICC profile, the results from the measurements were processed and analyzed. Eye-One Beamer from GretagMacbeth was used to make the profiles. The color image quality was evaluated both visually and by color difference calculations. The results from the analysis indicated large visual and colorimetric differences between the projectors. The DLP projectors generally have smaller color gamuts than the LCD projectors, and the color gamuts of older projectors are significantly smaller than those of newer ones. The amount of ambient light reaching the screen is of great importance for the visual impression: if too much reflection and other ambient light reaches the screen, the projected image becomes pale and has low contrast. When using a profile, the differences in colors between the projectors become smaller and the colors appear more correct. For one device, the average ΔE*ab color difference compared to a relative white reference was reduced from 22 to 11; for another, from 13 to 6. Blue colors show the largest variations among the projection displays and are therefore harder to predict.
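
    The ΔE*ab figures quoted in this record are CIE76 color differences, i.e. Euclidean distances in CIELAB space. A minimal sketch:

```python
import math

def delta_e_ab(lab1, lab2):
    """CIE76 color difference: Euclidean distance between two CIELAB
    colors given as (L*, a*, b*) tuples."""
    return math.sqrt(sum((c1 - c2) ** 2 for c1, c2 in zip(lab1, lab2)))

# A 3-4-5 example: two colors of equal lightness, offset in a* and b*.
delta_e_ab((50.0, 0.0, 0.0), (50.0, 3.0, 4.0))   # -> 5.0
```

Halving the average ΔE*ab, as profiling did here (22 to 11, 13 to 6), thus means halving the average distance between measured and reference colors in CIELAB.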

  12. Grayscale/resolution trade-off for text: Model predictions and psychophysical results for letter confusion and letter discrimination

    NASA Technical Reports Server (NTRS)

    Gille, Jennifer; Martin, Russel; Lubin, Jeffrey; Larimer, James

    1995-01-01

    In a series of papers presented in 1994, we examined the grayscale/resolution trade-off for natural images displayed on devices with discrete pixellation, such as AMLCDs. In the present paper we extend this study by examining the grayscale/resolution trade-off for text images on discrete-pixel displays. Halftoning in printing is an example of the grayscale/resolution trade-off: in printing, spatial resolution is sacrificed to produce grayscale. Another example of this trade-off is the inherent low-pass spatial filter of a CRT, caused by the point-spread function of the electron beam in the phosphor layer. On a CRT, sharp image edges are blurred by this inherent low-pass filtering, and the block noise created by spatial quantization is greatly reduced. A third example of this trade-off is text anti-aliasing, where grayscale is used to improve letter shape, size, and location when rendered at a low spatial resolution. There are additional implications for display system design from the grayscale/resolution trade-off. For example, reduced grayscale can reduce system costs by requiring less complexity in the framestore, allowing the use of lower-cost drivers, potentially increasing data transfer rates in the image subsystem, and simplifying the manufacturing processes that are used to construct the active matrix for AMLCD (active-matrix liquid-crystal display) or AMTFEL (active-matrix thin-film electroluminescent) devices. Therefore, the study of these trade-offs is important for display designers and manufacturing and systems engineers who wish to create the highest-performance, lowest-cost device possible. Our strategy for investigating this trade-off is to generate a set of simple test images, manipulate grayscale and resolution, predict discrimination performance using the ViDEOS (Sarnoff) Human Vision Model, conduct an empirical study of discrimination using psychophysical procedures, and verify the computational results using the psychophysical results.
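
    The framestore-cost argument above is simple arithmetic: the buffer size scales linearly with bits per pixel, so halving the grayscale depth halves the memory. An illustrative sketch (the panel resolution here is arbitrary, not taken from the paper):

```python
def framestore_bytes(width, height, bits_per_pixel):
    """Framestore size in bytes for a panel of the given resolution
    and grayscale bit depth."""
    return width * height * bits_per_pixel // 8

full = framestore_bytes(1024, 768, 8)   # 786432 bytes (256 gray levels)
half = framestore_bytes(1024, 768, 4)   # 393216 bytes (16 gray levels)
```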

  13. Backscatter absorption gas imaging system

    DOEpatents

    McRae, Jr., Thomas G.

    1985-01-01

    A video imaging system for detecting hazardous gas leaks. Visual displays of invisible gas clouds are produced by radiation augmentation of the field of view of an imaging device by radiation corresponding to an absorption line of the gas to be detected. The field of view of an imager is irradiated by a laser. The imager receives both backscattered laser light and background radiation. When a detectable gas is present, the backscattered laser light is highly attenuated, producing a region of contrast or shadow on the image. A flying spot imaging system is utilized to synchronously irradiate and scan the area to lower laser power requirements. The imager signal is processed to produce a video display.

  14. Backscatter absorption gas imaging system

    DOEpatents

    McRae, T.G. Jr.

    A video imaging system for detecting hazardous gas leaks. Visual displays of invisible gas clouds are produced by radiation augmentation of the field of view of an imaging device by radiation corresponding to an absorption line of the gas to be detected. The field of view of an imager is irradiated by a laser. The imager receives both backscattered laser light and background radiation. When a detectable gas is present, the backscattered laser light is highly attenuated, producing a region of contrast or shadow on the image. A flying spot imaging system is utilized to synchronously irradiate and scan the area to lower laser power requirements. The imager signal is processed to produce a video display.

  15. Color quality management in advanced flat panel display engines

    NASA Astrophysics Data System (ADS)

    Lebowsky, Fritz; Neugebauer, Charles F.; Marnatti, David M.

    2003-01-01

    In recent years, color reproduction systems for consumer needs have experienced various difficulties. In particular, flat panels and printers could not achieve a satisfactory color match: the RGB image stored on a retailer's Internet server did not show the desired colors on a consumer display or printer device. STMicroelectronics addresses this important color reproduction issue inside their advanced display engines using novel algorithms targeted at low-cost consumer flat panels. Using a new RGB color space transformation, which combines a gamma-correction look-up table, tetrahedrization, and linear interpolation, we satisfy market demands.
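
    The pipeline named above (gamma-correction LUT, tetrahedrization, linear interpolation) is the standard way to evaluate a 3D color LUT. A sketch of the tetrahedral interpolation step, assuming an (N, N, N, 3) lattice mapping device RGB to target RGB; the identity lattice used below is a hypothetical stand-in, included only to check the interpolator:

```python
import numpy as np

def tetra_interp(lut, rgb):
    """Sample an (N, N, N, 3) lattice at rgb in [0, 1]^3 using
    tetrahedral interpolation: the unit cube around the sample point is
    split into six tetrahedra by sorting the fractional coordinates."""
    n = lut.shape[0]
    pos = np.clip(np.asarray(rgb, dtype=float), 0.0, 1.0) * (n - 1)
    base = np.minimum(pos.astype(int), n - 2)   # lower lattice corner
    f = pos - base                              # fractional position in cube
    a1, a2, a3 = np.argsort(-f)                 # axes by descending fraction
    corners, c = [base.copy()], base.copy()
    for ax in (a1, a2, a3):                     # advance one axis at a time
        c[ax] += 1
        corners.append(c.copy())
    weights = (1 - f[a1], f[a1] - f[a2], f[a2] - f[a3], f[a3])
    return sum(w * lut[tuple(ci)].astype(float)
               for w, ci in zip(weights, corners))

# Identity lattice: interpolation should return the input color.
g = np.linspace(0.0, 1.0, 5)
ident = np.stack(np.meshgrid(g, g, g, indexing="ij"), axis=-1)
out = tetra_interp(ident, [0.3, 0.7, 0.1])   # -> approx. [0.3, 0.7, 0.1]
```

Tetrahedral interpolation needs only four lattice reads per sample (versus eight for trilinear), which is one reason it suits low-cost display engines.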

  16. Projection-type see-through holographic three-dimensional display

    NASA Astrophysics Data System (ADS)

    Wakunami, Koki; Hsieh, Po-Yuan; Oi, Ryutaro; Senoh, Takanori; Sasaki, Hisayuki; Ichihashi, Yasuyuki; Okui, Makoto; Huang, Yi-Pai; Yamamoto, Kenji

    2016-10-01

    Owing to the limited spatio-temporal resolution of display devices, dynamic holographic three-dimensional displays suffer from a critical trade-off between the display size and the visual angle. Here we show a projection-type holographic three-dimensional display, in which a digitally designed holographic optical element and a digital holographic projection technique are combined to increase both factors at the same time. In the experiment, the enlarged holographic image, which is twice as large as the original display device, projected on the screen of the digitally designed holographic optical element was concentrated at the target observation area so as to increase the visual angle, which is six times as large as that for a general holographic display. Because the display size and the visual angle can be designed independently, the proposed system will accelerate the adoption of holographic three-dimensional displays in industrial applications, such as digital signage, in-car head-up displays, smart-glasses and head-mounted displays.

  17. Highly Reflective Multi-stable Electrofluidic Display Pixels

    NASA Astrophysics Data System (ADS)

    Yang, Shu

    Electronic papers (E-papers) refer to displays that mimic the appearance of printed paper while retaining the features of conventional electronic displays, such as the ability to browse websites and play videos. The motivation for creating paper-like displays is inspired by the fact that reading on paper causes the least eye fatigue, owing to paper's reflective and light-diffusive nature, and that, unlike existing commercial displays, no energy of any form is spent sustaining the displayed image. To achieve the visual effect equivalent to a paper print, an ideal E-paper has to be highly reflective with a good contrast ratio and full-color capability. To sustain the image with zero power consumption, the display pixels need to be bistable, meaning the "on" and "off" states are both lowest-energy states; a pixel can change its state only when sufficient external energy is supplied. Many emerging technologies are competing to demonstrate the first ideal E-paper device, but none has achieved satisfactory visual effect, bistability, and video speed at the same time. Challenges come from either inherent physical/chemical properties or the fabrication process. Electrofluidic display is one of the most promising E-paper technologies. It has successfully demonstrated high reflectivity, brilliant color, and video-speed operation by moving a colored pigment dispersion between visible and invisible positions with electrowetting force. However, the pixel design did not allow image bistability. Presented in this dissertation are multi-stable electrofluidic display pixels that are able to sustain grayscale levels without any power consumption, while keeping the favorable features of the previous-generation electrofluidic display. The pixel design, a fabrication method using multiple-layer dry-film photoresist lamination, and physical/optical characterizations are discussed in detail. Based on the pixel structure, preliminary results of a simplified design and fabrication method are demonstrated. As advanced research topics regarding the device's optical performance, an optical model for evaluating reflective displays' light out-coupling efficiency is first established to guide the pixel design; furthermore, aluminum surface diffusers are analytically modeled and then fabricated onto multi-stable electrofluidic display pixels to demonstrate truly "white" multi-stable electrofluidic display modules. The achieved results promote the multi-stable electrofluidic display as an excellent candidate for the ultimate E-paper device, especially for larger-scale signage applications.

  18. 77 FR 3002 - Certain Motion-Sensitive Sound Effects Devices and Image Display Devices and Components and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-01-20

    ... U.S. Patent Nos. 5,825,427 (``the '427 patent'') and 6,150,947 (``the '947 patent''). The Commission... investigation, namely the '427 patent and the '947 patent. The complaint in the 787 [[Page 3003

  19. High-chroma visual cryptography using interference color of high-order retarder films

    NASA Astrophysics Data System (ADS)

    Sugawara, Shiori; Harada, Kenji; Sakai, Daisuke

    2015-08-01

    Visual cryptography can be used as a method of sharing a secret image through several encrypted images. Conventional visual cryptography can display only monochrome images. We have developed a high-chroma color visual encryption technique using the interference color of high-order retarder films. The encrypted films are composed of a polarizing film and retarder films. The retarder films exhibit interference color when they are sandwiched between two polarizing films. We propose a stacking technique for displaying high-chroma interference color images. A prototype visual cryptography device using high-chroma interference color is developed.
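
    Conventional visual cryptography, cited above as the monochrome baseline, is easiest to see in the classic (2, 2) scheme: each secret pixel is expanded into subpixel patterns on two shares, and physically stacking the transparencies (a logical OR of black subpixels) reveals the secret. A minimal sketch:

```python
import random

def share_pixel(secret_bit):
    """(2, 2) visual cryptography for one pixel (0 = white, 1 = black).
    Each share gets a 2-subpixel pattern; each share alone looks random,
    but the stacked shares reveal the secret as a contrast difference."""
    pattern = random.choice([(0, 1), (1, 0)])
    if secret_bit == 0:
        return pattern, pattern                      # identical -> half black
    return pattern, tuple(1 - p for p in pattern)    # complementary -> all black

def stack(share1, share2):
    """Overlaying transparencies: a subpixel is black if it is black
    on either share."""
    return tuple(a | b for a, b in zip(share1, share2))
```

A stacked white pixel shows one black subpixel out of two, while a stacked black pixel shows two, so the secret image appears at reduced contrast; the interference-color scheme of this record replaces the black/white subpixels with polarization-dependent colors.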

  20. Nanocrystalline ZnON; High mobility and low band gap semiconductor material for high performance switch transistor and image sensor application

    PubMed Central

    Lee, Eunha; Benayad, Anass; Shin, Taeho; Lee, HyungIk; Ko, Dong-Su; Kim, Tae Sang; Son, Kyoung Seok; Ryu, Myungkwan; Jeon, Sanghun; Park, Gyeong-Su

    2014-01-01

    Interest in oxide semiconductors stems from their benefits, primarily ease of processing, relatively high mobility (0.3–10 cm²/V·s), and wide band gap. However, for practical future electronic devices, the channel mobility should be increased beyond 50 cm²/V·s, and a wide band gap is not suitable for photo/image sensor applications. The incorporation of nitrogen into the ZnO semiconductor can be tailored to increase channel mobility, enhance optical absorption across the whole visible range, and form a uniform microstructure, satisfying the attributes essential for high-performance transistors and visible-light photosensors on large-area platforms. Here, we present the electronic, optical, and microstructural properties of ZnON, a composite of Zn3N2 and ZnO. Well-optimized ZnON material presents high mobility exceeding 100 cm²/V·s, a band gap of 1.3 eV, and a nanocrystalline structure with multiple phases. We found that the mobility, microstructure, electronic structure, band gap, and trap properties of ZnON vary with the nitrogen concentration in ZnO. Accordingly, the performance of ZnON-based devices can be adjusted to meet the requirements of both switching devices and image sensors. These results demonstrate how the device and material attributes of ZnON can be optimized for new device strategies in display technology, and we expect ZnON to be applicable to a wide range of imaging/display devices. PMID:24824778

  1. Performance considerations for high-definition head-mounted displays

    NASA Technical Reports Server (NTRS)

    Edwards, Oliver J.; Larimer, James; Gille, Jennifer

    1992-01-01

    Image-optimization design for helmet-mounted displays (HMDs) in military systems is discussed within a systems-engineering framework that encompasses (1) a description of natural targets in the field; (2) the characteristics of human visual perception; and (3) device specifications that directly relate to these ecological and human-factors parameters. Attention is given to target size and contrast and to the relationship of the modulation transfer function to image resolution.

  2. Analysis of the viewing zone of the Cambridge autostereoscopic display.

    PubMed

    Dodgson, N A

    1996-04-01

    The Cambridge autostereoscopic three-dimensional display is a time-multiplexed device that gives both stereo and movement parallax to the viewer without the need for any special glasses. This analysis derives the size and position of the fully illuminated, and hence useful, viewing zone for a Cambridge display. The viewing zone of such a display is shown to be completely determined by four parameters: the width of the screen, the optimal distance of the viewer from the screen, the width over which an image can be seen across the whole screen at this optimal distance, and the number of views. A display's viewing zone can thus be completely described without reference to the internal implementation of the device. An equation that describes what the eye sees from any position in front of the display is derived. The equations derived can be used in both the analysis and design of this type of time-multiplexed autostereoscopic display.

  3. WE-G-204-08: Optimized Digital Radiographic Technique for Lost Surgical Devices/Needle Identification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gorman, A; Seabrook, G; Brakken, A

    Purpose: Small surgical devices and needles are used in many surgical procedures. Conventionally, an x-ray film is taken to identify missing devices/needles if the post-procedure count is incorrect. There are no data to indicate the smallest surgical devices/needles that can be identified with digital radiography (DR), or its optimized acquisition technique. Methods: In this study, the DR equipment used is a Canon RadPro mobile with a CXDI-70c wireless DR plate, and the same DR plate on a fixed Siemens Multix unit. Small surgical devices and needles tested include Rubber Shod, Bulldog, Fogarty Hydrogrip, and needles with sizes 3-0 C-T1 through 8-0 BV175-6. They are imaged with PMMA block phantoms with thicknesses of 2–8 inches, and with an abdomen phantom. Various DR techniques are used. Images are reviewed on the portable x-ray acquisition display, a clinical workstation, and a diagnostic workstation. Results: All small surgical devices and needles are visible in portable DR images with 2–8 inches of PMMA. However, when they are imaged with the abdomen phantom plus 2 inches of PMMA, needles smaller than 9.3 mm in length cannot be visualized at the optimized technique of 81 kV and 16 mAs. There is no significant difference in visualization with various techniques, or between the mobile and fixed radiography units. However, there is a noticeable difference in visualizing the smallest needle on a diagnostic reading workstation compared to the acquisition display on a portable x-ray unit. Conclusion: DR images should be reviewed on a diagnostic reading workstation. Using optimized DR techniques, the smallest needle that can be identified in all phantom studies is 9.3 mm. Sample DR images of various small surgical devices/needles made available on the diagnostic workstation for comparison may improve their identification. Further in vivo study is needed to confirm the optimized digital radiography technique for identification of lost small surgical devices and needles.

  4. Accommodation response measurements for integral 3D image

    NASA Astrophysics Data System (ADS)

    Hiura, H.; Mishina, T.; Arai, J.; Iwadate, Y.

    2014-03-01

    We measured accommodation responses under integral photography (IP), binocular stereoscopic, and real-object display conditions, each under binocular and monocular viewing. The equipment we used was an optometric device and a 3D display. We developed the 3D display for IP and binocular stereoscopic images; it comprises a high-resolution liquid crystal display (LCD) and a high-density lens array. The LCD has a resolution of 468 dpi and a diagonal size of 4.8 inches. The high-density lens array comprises 106 x 69 micro lenses that have a focal length of 3 mm and a diameter of 1 mm; the lenses are arranged in a honeycomb pattern. The 3D display was positioned 60 cm from the observer under the IP and binocular stereoscopic display conditions. The target was presented at eight depth positions relative to the 3D display: 15, 10, and 5 cm in front of the 3D display, on the 3D display panel, and 5, 10, 15 and 30 cm behind the 3D display under the IP and binocular stereoscopic display conditions. Under the real-object display condition, the target was displayed on the 3D display panel, and the 3D display was placed at the eight positions. The results suggest that the IP image induced more natural accommodation responses than the binocular stereoscopic image. The accommodation responses to the IP image were weaker than those to a real object; however, they showed a similar tendency to those of the real object under the two viewing conditions. Therefore, IP can induce accommodation to the depth positions of 3D images.

  5. Stereo Imaging Miniature Endoscope with Single Imaging Chip and Conjugated Multi-Bandpass Filters

    NASA Technical Reports Server (NTRS)

    Shahinian, Hrayr Karnig (Inventor); Bae, Youngsam (Inventor); White, Victor E. (Inventor); Shcheglov, Kirill V. (Inventor); Manohara, Harish M. (Inventor); Kowalczyk, Robert S. (Inventor)

    2018-01-01

    A dual-objective endoscope for insertion into a body cavity to provide a stereoscopic image of a region of interest (ROI) inside the body, including an imaging device at the distal end for obtaining optical images of the ROI and processing the optical images to form video signals for wired and/or wireless transmission and display of 3D images on a rendering device. The imaging device includes a focal plane detector array (FPA) for obtaining the optical images of the ROI, and processing circuits behind the FPA that convert the optical images into the video signals. The imaging device includes right and left pupils for receiving right and left images through right and left conjugated multi-bandpass filters. Illuminators illuminate the ROI through a multi-bandpass filter having three right and three left pass bands that are matched to the right and left conjugated multi-bandpass filters. A full-color image is collected after three or six sequential illuminations with the red, green and blue lights.

  6. A novel shape-changing haptic table-top display

    NASA Astrophysics Data System (ADS)

    Wang, Jiabin; Zhao, Lu; Liu, Yue; Wang, Yongtian; Cai, Yi

    2018-01-01

    A shape-changing table-top display with haptic feedback allows its users to perceive 3D visual and texture displays interactively. Since few existing devices are designed as accurate displays with regulated haptic feedback, a novel attentive and immersive shape-changing mechanical interface (SCMI) consisting of an image processing unit and a transformation unit is proposed in this paper. To support a precise 3D table-top display with an offset of less than 2 mm, a custom-made mechanism was developed to form a precise surface and regulate the feedback force. The proposed image processing unit is capable of extracting texture data from a 2D picture for rendering a shape-changing surface and realizing 3D modeling. The preliminary evaluation results proved the feasibility of the proposed system.

  7. Team Electronic Gameplay Combining Different Means of Control

    NASA Technical Reports Server (NTRS)

    Palsson, Olafur S. (Inventor); Pope, Alan T. (Inventor)

    2014-01-01

    Disclosed are methods and apparatuses for modifying the effect of an operator-controlled input device on an interactive device, to encourage the self-regulation of at least one physiological activity by a person other than the operator. The interactive device comprises a display area that depicts images and apparatus for receiving at least one input from the operator-controlled input device, thus permitting the operator to control and interact with at least some of the depicted images. One effect modification comprises measuring the physiological activity of a person other than the operator, while modifying the ability of the operator to control and interact with at least some of the depicted images by modifying the input from the operator-controlled input device in response to changes in the measured physiological signal.

  8. Display of high dynamic range images under varying viewing conditions

    NASA Astrophysics Data System (ADS)

    Borer, Tim

    2017-09-01

    Recent demonstrations of high dynamic range (HDR) television have shown that superb images are possible. With the emergence of an HDR television production standard (ITU-R Recommendation BT.2100) last year, HDR television production is poised to take off. However, research to date has focused principally on HDR image display under "dark" viewing conditions only. HDR television will need to be displayed at varying brightness and under varying illumination (for example, to view sport in daytime or on mobile devices). We know from common practice with conventional TV that the rendering intent (gamma) should change under brighter conditions, although this is poorly quantified. For HDR the need to render images under varying conditions is all the more acute. This paper seeks to explore the issues surrounding image display under varying conditions. It also describes how visual adaptation is affected by display brightness, surround illumination, screen size and viewing distance. Existing experimental results are presented and extended to try to quantify these effects. Using these results, the paper describes how HDR images may be displayed so that they are perceptually equivalent under different viewing conditions. A new interpretation of the experimental results is reported, yielding a new, luminance-invariant model for the appropriate display "gamma". In this way the consistency of HDR image reproduction should be improved, thereby better maintaining "creative intent" in television.

  9. Interactive Image Analysis System Design,

    DTIC Science & Technology

    1982-12-01

    This report describes a design for an interactive image analysis system (IIAS), which implements terrain data extraction techniques. The design employs commercially available, state-of-the-art minicomputers and image display devices with proven software to achieve a cost-effective, reliable image analysis system. Additionally, the system is fully capable of supporting many generic types of image analysis and data processing, and is modularly...

  10. IMDISP - INTERACTIVE IMAGE DISPLAY PROGRAM

    NASA Technical Reports Server (NTRS)

    Martin, M. D.

    1994-01-01

    The Interactive Image Display Program (IMDISP) is an interactive image display utility for the IBM Personal Computer (PC, XT and AT) and compatibles. Until recently, efforts to utilize small computer systems for display and analysis of scientific data have been hampered by the lack of sufficient data storage capacity to accommodate large image arrays. Most planetary images, for example, require nearly a megabyte of storage. The recent development of the "CDROM" (Compact Disk Read-Only Memory) storage technology makes possible the storage of up to 680 megabytes of data on a single 4.72-inch disk. IMDISP was developed for use with the CDROM storage system which is currently being evaluated by the Planetary Data System. The latest disks to be produced by the Planetary Data System are a set of three disks containing all of the images of Uranus acquired by the Voyager spacecraft. The images are in both compressed and uncompressed format. IMDISP can read the uncompressed images directly, but special software is provided to decompress the compressed images, which cannot be processed directly. IMDISP can also display images stored on floppy or hard disks. A digital image is a picture converted to numerical form so that it can be stored and used in a computer. The image is divided into a matrix of small regions called picture elements, or pixels. The rows and columns of pixels are called "lines" and "samples", respectively. Each pixel has a numerical value, or DN (data number) value, quantifying the darkness or brightness of the image at that spot. In total, each pixel has an address (line number, sample number) and a DN value, which is all that the computer needs for processing. DISPLAY commands allow the IMDISP user to display all or part of an image at various positions on the display screen. The user may also zoom in and out from a point on the image defined by the cursor, and may pan around the image.
To enable more or all of the original image to be displayed on the screen at once, the image can be "subsampled." For example, if the image were subsampled by a factor of 2, every other pixel from every other line would be displayed, starting from the upper left corner of the image. Any positive integer may be used for subsampling. The user may produce a histogram of an image file, which is a graph showing the number of pixels per DN value, or per range of DN values, for the entire image. IMDISP can also plot the DN value versus pixels along a line between two points on the image. The user can "stretch" or increase the contrast of an image by specifying low and high DN values; all pixels with values lower than the specified "low" will then become black, and all pixels higher than the specified "high" value will become white. Pixels between the low and high values will be evenly shaded between black and white. IMDISP is written in a modular form to make it easy to change it to work with different display devices or on other computers. The code can also be adapted for use in other application programs. There are device dependent image display modules, general image display subroutines, image I/O routines, and image label and command line parsing routines. The IMDISP system is written in C-language (94%) and Assembler (6%). It was implemented on an IBM PC with the MS DOS 3.21 operating system. IMDISP has a memory requirement of about 142k bytes. IMDISP was developed in 1989 and is a copyrighted work with all copyright vested in NASA. Additional planetary images can be obtained from the National Space Science Data Center at (301) 286-6695.
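The subsampling and contrast-stretch operations described above are simple enough to sketch. The following is a minimal Python illustration (not IMDISP's actual C/Assembler implementation), assuming 8-bit DN values and an image stored as a list of pixel rows:

```python
def stretch(dn_values, low, high):
    """Linear contrast stretch: DN <= low maps to 0 (black),
    DN >= high maps to 255 (white), values in between are
    scaled linearly between black and white."""
    out = []
    for dn in dn_values:
        if dn <= low:
            out.append(0)
        elif dn >= high:
            out.append(255)
        else:
            out.append(round(255 * (dn - low) / (high - low)))
    return out

def subsample(image, factor):
    """Keep every `factor`-th pixel of every `factor`-th line,
    starting from the upper-left corner of the image."""
    return [row[::factor] for row in image[::factor]]
```

For example, stretching with low=10 and high=90 maps DN 10 to black, DN 90 to white, and shades midtones evenly in between; subsampling by a factor of 2 keeps every other pixel from every other line.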

  11. Low-cost data analysis systems for processing multispectral scanner data

    NASA Technical Reports Server (NTRS)

    Whitely, S. L.

    1976-01-01

    The basic hardware and software requirements are described for four low-cost analysis systems for computer-generated land use maps. The data analysis systems consist of an image display system, a small digital computer, and an output recording device. The software is described together with some of the display and recording devices, and typical costs are cited. Computer requirements are given, and two approaches are described for converting black-and-white film and electrostatic printer output to inexpensive color output products. Examples of output products are shown.

  12. Protective laser beam viewing device

    DOEpatents

    Neil, George R.; Jordan, Kevin Carl

    2012-12-18

    A protective laser beam viewing system or device including a camera selectively sensitive to laser light wavelengths and a viewing screen receiving images from the laser sensitive camera. According to a preferred embodiment of the invention, the camera is worn on the head of the user or incorporated into a goggle-type viewing display so that it is always aimed at the area of viewing interest to the user and the viewing screen is incorporated into a video display worn as goggles over the eyes of the user.

  13. Adaptive controller for volumetric display of neuroimaging studies

    NASA Astrophysics Data System (ADS)

    Bleiberg, Ben; Senseney, Justin; Caban, Jesus

    2014-03-01

    Volumetric display of medical images is an increasingly relevant method for examining an imaging acquisition as the prevalence of thin-slice imaging increases in clinical studies. Current mouse and keyboard implementations for volumetric control provide neither the sensitivity nor specificity required to manipulate a volumetric display for efficient reading in a clinical setting. Solutions to efficient volumetric manipulation provide more sensitivity by removing the binary nature of actions controlled by keyboard clicks, but specificity is lost because a single action may change display in several directions. When specificity is then further addressed by re-implementing hardware binary functions through the introduction of mode control, the result is a cumbersome interface that fails to achieve the revolutionary benefit required for adoption of a new technology. We address the specificity versus sensitivity problem of volumetric interfaces by providing adaptive positional awareness to the volumetric control device by manipulating communication between hardware driver and existing software methods for volumetric display of medical images. This creates a tethered effect for volumetric display, providing a smooth interface that improves on existing hardware approaches to volumetric scene manipulation.

  14. Navigating the fifth dimension: new concepts in interactive multimodality and multidimensional image navigation

    NASA Astrophysics Data System (ADS)

    Ratib, Osman; Rosset, Antoine; Dahlbom, Magnus; Czernin, Johannes

    2005-04-01

    Display and interpretation of multidimensional data obtained from the combination of 3D data acquired from different modalities (such as PET-CT) require complex software tools that allow the user to navigate the data and modify the different image parameters. With faster scanners it is now possible to acquire dynamic images of a beating heart or of the transit of a contrast agent, adding a fifth dimension to the data. We developed DICOM-compliant software for real-time navigation in very large sets of five-dimensional data, based on an intuitive multidimensional jog-wheel widely used in the video-editing industry. The software, provided under open-source licensing, allows interactive, single-handed navigation through 3D images while adjusting the blending of image modalities, image contrast and intensity, and the rate of cine display of dynamic images. In this study we focused our effort on the user interface and on means for interactively navigating these large data sets while easily and rapidly changing multiple parameters such as image position, contrast, intensity, blending of colors, and magnification. Conventional mouse-driven user interfaces, which require the user to manipulate cursors and sliders on the screen, are too cumbersome and slow. We evaluated several hardware devices and identified a category of multipurpose jog-wheel device used in the video-editing industry that is particularly suitable for rapidly navigating in five dimensions while adjusting several display parameters interactively. The application of this tool will be demonstrated in cardiac PET-CT imaging and functional cardiac MRI studies.

  15. Designing Websites for Displaying Large Data Sets and Images on Multiple Platforms

    NASA Astrophysics Data System (ADS)

    Anderson, A.; Wolf, V. G.; Garron, J.; Kirschner, M.

    2012-12-01

    The desire to build websites that analyze and display ever-increasing amounts of scientific data and images pushes for website designs that utilize large displays and use the display area as efficiently as possible. Yet scientists and users of their data increasingly wish to access these websites in the field and on mobile devices. This results in the need to develop websites that can support a wide range of devices and screen sizes, and that optimally use whatever display area is available. Historically, designers have addressed this issue by building two websites, one for mobile devices and one for desktop environments, resulting in increased cost, duplication of work, and longer development times. Recent advancements in web design technology and techniques allow for the development of a single website that dynamically adjusts to the type of device being used to browse the website (smartphone, tablet, desktop). In addition, they provide the opportunity to truly optimize whatever display area is available. HTML5 and CSS3 give web designers media query statements that allow design style sheets to be aware of the size of the display being used, and to format web content differently based upon the queried response. Web elements can be rendered in a different size or position, or even removed from the display entirely, based upon the size of the display area. Using HTML5/CSS3 media queries in this manner is referred to as "Responsive Web Design" (RWD). RWD in combination with technologies such as LESS and Twitter Bootstrap allows the web designer to build websites that not only dynamically respond to the browser display size being used, but do so in very controlled and intelligent ways, ensuring that good layout and graphic design principles are followed.
At the University of Alaska Fairbanks, the Alaska Satellite Facility SAR Data Center (ASF) recently redesigned their popular Vertex application and converted it from a traditional, fixed-layout website into a RWD site built on HTML5, LESS and Twitter Bootstrap. Vertex is a data portal for remotely sensed imagery of the earth, offering Synthetic Aperture Radar (SAR) data products from the global ASF archive. By using Responsive Web Design, ASF is able to provide access to a massive collection of SAR imagery and allow the user to use mobile devices and desktops to maximum advantage. ASF's Vertex web site demonstrates that with increased interface flexibility, scientists, managers and users can increase their personal effectiveness by accessing data portals from their preferred device as their science dictates.

  16. The holographic display of three-dimensional medical objects through the usage of a shiftable cylindrical lens

    NASA Astrophysics Data System (ADS)

    Teng, Dongdong; Liu, Lilin; Zhang, Yueli; Pang, Zhiyong; Wang, Biao

    2014-09-01

    Through the creative use of a shiftable cylindrical lens, a wide-view-angle holographic display system is developed for displaying medical objects in real three-dimensional (3D) space based on a time-multiplexing method. The two-dimensional (2D) source images for all the computer-generated holograms (CGHs) needed by the display system are only one group of computerized tomography (CT) or magnetic resonance imaging (MRI) slices from the scanning device; complicated 3D reconstruction on the computer is not necessary. A pelvis is taken as the target medical object to demonstrate this method, and the obtained horizontal viewing angle reaches 28°.

  17. Thermal-Wave Microscope

    NASA Technical Reports Server (NTRS)

    Jones, Robert E.; Kramarchuk, Ihor; Williams, Wallace D.; Pouch, John J.; Gilbert, Percy

    1989-01-01

    Computer-controlled thermal-wave microscope developed to investigate III-V compound semiconductor devices and materials. Is nondestructive technique providing information on subsurface thermal features of solid samples. Furthermore, because this is subsurface technique, three-dimensional imaging also possible. Microscope uses intensity-modulated electron beam of modified scanning electron microscope to generate thermal waves in sample. Acoustic waves generated by thermal waves received by transducer and processed in computer to form images displayed on video display of microscope or recorded on magnetic disk.

  18. Display Device Color Management and Visual Surveillance of Vehicles

    ERIC Educational Resources Information Center

    Srivastava, Satyam

    2011-01-01

    Digital imaging has seen an enormous growth in the last decade. Today users have numerous choices in creating, accessing, and viewing digital image/video content. Color management is important to ensure consistent visual experience across imaging systems. This is typically achieved using color profiles. In this thesis we identify the limitations…

  19. 21 CFR 892.1630 - Electrostatic x-ray imaging system.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Electrostatic x-ray imaging system. 892.1630 Section 892.1630 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES... visible image. This generic type of device may include signal analysis and display equipment, patient and...

  20. 21 CFR 892.1630 - Electrostatic x-ray imaging system.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 21 Food and Drugs 8 2011-04-01 2011-04-01 false Electrostatic x-ray imaging system. 892.1630 Section 892.1630 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES... visible image. This generic type of device may include signal analysis and display equipment, patient and...

  1. 21 CFR 892.1630 - Electrostatic x-ray imaging system.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 21 Food and Drugs 8 2014-04-01 2014-04-01 false Electrostatic x-ray imaging system. 892.1630 Section 892.1630 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES... visible image. This generic type of device may include signal analysis and display equipment, patient and...

  2. Content dependent selection of image enhancement parameters for mobile displays

    NASA Astrophysics Data System (ADS)

    Lee, Yoon-Gyoo; Kang, Yoo-Jin; Kim, Han-Eol; Kim, Ka-Hee; Kim, Choon-Woo

    2011-01-01

    Mobile devices such as cellular phones and portable multimedia players capable of playing terrestrial digital multimedia broadcasting (T-DMB) content have been introduced into the consumer market. In this paper, a content-dependent image quality enhancement method for sharpness, colorfulness, and noise reduction is presented to improve perceived image quality on mobile displays. Human visual experiments were performed to analyze viewers' preferences. The relationship between the objective measures and the optimal values of the image control parameters is modeled by simple lookup tables based on the results of the human visual experiments. Content-dependent values of the image control parameters are then determined from the calculated measures and the predetermined lookup tables. Experimental results indicate that dynamic selection of the image control parameters yields better image quality.
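The lookup-table selection step described above can be sketched as follows. The breakpoints and gain values here are purely illustrative placeholders, not the paper's experimentally derived tables:

```python
import bisect

# Hypothetical lookup table: breakpoints of an objective content
# measure (e.g., average colorfulness of a frame, normalized to
# [0, 1]) mapped to a preferred enhancement gain. Real values would
# come from human visual experiments, as in the paper.
MEASURE_BREAKPOINTS = [0.0, 0.2, 0.4, 0.6, 0.8]
PREFERRED_GAIN = [1.4, 1.3, 1.2, 1.1, 1.0]

def select_gain(measure):
    """Return the enhancement gain for the table bin that the
    computed content measure falls into (clamped to valid bins)."""
    i = bisect.bisect_right(MEASURE_BREAKPOINTS, measure) - 1
    i = max(0, min(i, len(PREFERRED_GAIN) - 1))
    return PREFERRED_GAIN[i]
```

The design intent is that already-colorful content receives a smaller boost than dull content, so the table values decrease as the measure increases; the same pattern would apply to sharpness and noise-reduction parameters.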

  3. Future opportunities for advancing glucose test device electronics.

    PubMed

    Young, Brian R; Young, Teresa L; Joyce, Margaret K; Kennedy, Spencer I; Atashbar, Massood Z

    2011-09-01

    Advancements in the field of printed electronics can be applied to the field of diabetes testing. A brief history and some new developments in printed electronics components applicable to personal test devices, including circuitry, batteries, transmission devices, displays, and sensors, are presented. Low-cost, thin, and lightweight materials containing printed circuits with energy storage or harvest capability and reactive/display centers, made using new printing/imaging technologies, are ideal for incorporation into personal-use medical devices such as glucose test meters. Semicontinuous rotogravure printing, which utilizes flexible substrates and polymeric, metallic, and/or nano "ink" composite materials to effect rapidly produced, lower-cost printed electronics, is showing promise. Continuing research advancing substrate, "ink," and continuous processing development presents the opportunity for research collaboration with medical device designers. © 2011 Diabetes Technology Society.

  4. Viewing zone duplication of multi-projection 3D display system using uniaxial crystal.

    PubMed

    Lee, Chang-Kun; Park, Soon-Gi; Moon, Seokil; Lee, Byoungho

    2016-04-18

    We propose a novel multiplexing technique for increasing the viewing zone of a multi-view, multi-projection 3D display system by employing double refraction in a uniaxial crystal. When linearly polarized images from a projector pass through the uniaxial crystal, two possible optical paths exist according to the polarization state of the image. The optical path of the image can therefore be switched, shifting the viewing zone in the lateral direction. Polarization modulation of the image from a single projection unit enables us to generate two viewing zones at different positions. For realizing full-color images in each viewing zone, a polarization-based temporal multiplexing technique is adopted with a conventional polarization switching device from a liquid crystal (LC) display. Through experiments, a prototype ten-view multi-projection 3D display system presenting full-color view images is implemented by combining five laser scanning projectors, an optically clear calcite (CaCO3) crystal, and an LC polarization rotator. For each time sequence of the temporal multiplexing, the luminance distribution of the proposed system is measured and analyzed.

  5. Characterizing the reflectivity of handheld display devices.

    PubMed

    Liu, Peter; Badano, Aldo

    2014-08-01

    With increased use of handheld and tablet display devices for viewing medical images, methods for consistently measuring the reflectivity of these devices are needed. In this note, the authors report on the characterization of diffuse reflections for handheld display devices, including mobile phones and tablets, using methods recommended by the American Association of Physicists in Medicine Task Group 18 (TG18). The authors modified the diffuse reflectance coefficient measurement method outlined in the TG18 report, and measured seven handheld display devices (two phones and five tablets) and three workstation displays. Each device was attached to a black panel with Velcro. To study the effect of the back surface on the diffuse reflectance coefficient, the authors created Styrofoam masks with square openings of different sizes and placed them in front of the device. Overall, for each display device, measurements of illuminance and reflected luminance on the display screen were taken with no mask, with masks of varying size, and with display-size masks, and the corresponding diffuse reflectance coefficient was calculated. For all handhelds, the diffuse reflectance coefficients measured with no back panel were lower than measurements performed with a mask, and the authors found an overall increase in reflectivity as the size of the mask decreases. For workstation displays, diffuse reflectance coefficients were higher when no back panel was used, and higher than with masks. In all cases, as luminance increased, illuminance increased, but not at the same rate. Since the size of handheld displays is smaller than that of workstation devices, the TG18 method suffers from a dependency on illumination conditions. The authors show that the diffuse reflection coefficients can vary depending on the nature of the back surface of the illuminating box; the variability in the diffuse coefficient can be as large as 20% depending on the size of the mask. 
For all measurements, both luminance and illuminance increased as the size of the display window decreased. The TG18 method does not account for this variability. The authors conclude that the method requires a definitive description of the back panel used in the light source setup. The methods described in the TG18 document may need to be improved to provide consistent comparisons of desktop monitors, phones, and tablets.
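As context for the quantity being measured, the TG18 diffuse reflection coefficient relates reflected luminance to the illuminance falling on the screen. A minimal sketch of the calculation (the helper name is ours; the ratio follows the TG18 convention cited in the abstract):

```python
def diffuse_reflection_coefficient(luminance_cd_m2, illuminance_lux):
    """Diffuse reflection coefficient Rd = L / E (units: sr^-1):
    reflected luminance (cd/m^2) per unit of illuminance (lux)
    incident on the display screen."""
    if illuminance_lux <= 0:
        raise ValueError("illuminance must be positive")
    return luminance_cd_m2 / illuminance_lux

# The ~20% variability reported for different mask sizes corresponds
# to a ~20% change in measured reflected luminance at fixed illuminance.
```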

  6. Mobile viewer system for virtual 3D space using infrared LED point markers and camera

    NASA Astrophysics Data System (ADS)

    Sakamoto, Kunio; Taneji, Shoto

    2006-09-01

    The authors have developed a 3D workspace system using collaborative imaging devices; a stereoscopic display enables this system to project 3D information. In this paper, we describe the position-detecting system for a see-through 3D viewer. A 3D display system is a useful technology for virtual reality, mixed reality and augmented reality. We have researched spatial imaging and interaction systems, and have previously proposed 3D displays using a slit as a parallax barrier, a lenticular screen, and holographic optical elements (HOEs) for displaying active images 1)2)3)4). The purpose of this paper is to propose an interactive system using these 3D imaging technologies. The observer can view virtual images in the real world when the user watches the screen of a see-through 3D viewer. The goal of our research is to build a display system as follows: when users see the real world through the mobile viewer, the display system gives them virtual 3D images floating in the air, and the observers can touch these floating images and interact with them, for example as if children were modeling clay. The key technologies of this system are the position recognition system and the spatial imaging display. The 3D images are presented by an improved parallax-barrier 3D display. Here the authors discuss the measuring method for the mobile viewer using infrared LED point markers and a camera in the 3D workspace (augmented reality world). The authors show the geometric analysis of the proposed measuring method, which is the simplest such method in that it uses a single camera rather than a stereo camera, and present the results of our viewer system.

  7. Testing Instrument for Flight-Simulator Displays

    NASA Technical Reports Server (NTRS)

    Haines, Richard F.

    1987-01-01

    Displays for flight-training simulators rapidly aligned with aid of integrated optical instrument. Calibrations and tests such as aligning boresight of display with respect to user's eyes, checking and adjusting display horizon, checking image sharpness, measuring illuminance of displayed scenes, and measuring distance of optical focus of scene performed with single unit. New instrument combines all measurement devices in single, compact, integrated unit. Requires just one initial setup. Employs laser and produces narrow, collimated beam for greater measurement accuracy. Uses only one moving part, double right prism, to position laser beam.

  8. Tchebichef moment transform on image dithering for mobile applications

    NASA Astrophysics Data System (ADS)

    Ernawan, Ferda; Abu, Nur Azman; Rahmalan, Hidayah

    2012-04-01

    Currently, mobile image applications spend considerable computation displaying images. A true-color raw image can contain millions of colors and consumes substantial computational power in most mobile image applications, while mobile devices are expected to offer only limited processing power and storage space. Image dithering is a popular technique for reducing the number of bits per pixel at the expense of lower-quality image display. This paper proposes a novel approach to image dithering using a 2x2 Tchebichef moment transform (TMT). TMT rests on a simple mathematical framework expressed with matrices, and its coefficients are real rational numbers. Image dithering based on TMT therefore has the potential to provide better efficiency and simplicity. A preliminary experiment shows promising results in terms of reconstruction error and image visual texture.
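    As a hedged illustration of the transform underlying the paper: for a 2x2 block the orthonormal discrete Tchebichef kernel reduces to a simple matrix, and block moments are computed as M = T F Tᵀ. The dithering/quantization step itself is not reproduced here; this is a sketch of the transform only.

    ```python
    import numpy as np

    # Orthonormal Tchebichef kernel for N = 2: row 0 is the constant
    # polynomial, row 1 the linear polynomial (2x + 1 - N), both normalized.
    T = np.array([[1.0, 1.0],
                  [-1.0, 1.0]]) / np.sqrt(2.0)

    def tmt_forward(block: np.ndarray) -> np.ndarray:
        """Moments M = T F T^T of a 2x2 image block F."""
        return T @ block @ T.T

    def tmt_inverse(moments: np.ndarray) -> np.ndarray:
        """Reconstruction F = T^T M T (T is orthogonal)."""
        return T.T @ moments @ T

    block = np.array([[120.0, 124.0],
                      [118.0, 130.0]])
    m = tmt_forward(block)
    # m[0, 0] is twice the block mean (246.0 for this block).
    assert np.allclose(tmt_inverse(m), block)   # lossless round trip
    ```

    A dithering scheme would quantize the moments (keeping the low-order ones) before inverting, trading bits per pixel for visual quality as the abstract describes.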

  9. Portable low-cost devices for videotaping, editing, and displaying field-sequential stereoscopic motion pictures and video

    NASA Astrophysics Data System (ADS)

    Starks, Michael R.

    1990-09-01

    A variety of low-cost devices for capturing, editing, and displaying field-sequential 60 Hz stereoscopic video have recently been marketed by 3D TV Corp. and others. When properly used, they give very high-quality images with most consumer and professional equipment. Our stereoscopic multiplexers for creating and editing field-sequential video in NTSC or component formats (S-VHS, Betacam, RGB), together with our Home 3D Theater system employing LCD eyeglasses, have made 3D movies and television available to a large audience.

  10. Clinical application of a modern high-definition head-mounted display in sonography.

    PubMed

    Takeshita, Hideki; Kihara, Kazunori; Yoshida, Soichiro; Higuchi, Saori; Ito, Masaya; Nakanishi, Yasukazu; Kijima, Toshiki; Ishioka, Junichiro; Matsuoka, Yoh; Numao, Noboru; Saito, Kazutaka; Fujii, Yasuhisa

    2014-08-01

    Because of the remarkably improved image quality and wearability of modern head-mounted displays, a monitoring system using a head-mounted display rather than a fixed-site monitor for sonographic scanning has the potential to improve the diagnostic performance and lessen the examiner's physical burden during a sonographic examination. In a preclinical setting, 2 head-mounted displays, the HMZ-T2 (Sony Corporation, Tokyo, Japan) and the Wrap1200 (Vuzix Corporation, Rochester, NY), were found to be applicable to sonography. In a clinical setting, the feasibility of the HMZ-T2 was shown by its good image quality and acceptable wearability. This modern device is appropriate for clinical use in sonography. © 2014 by the American Institute of Ultrasound in Medicine.

  11. The benefits of the Atlas of Human Cardiac Anatomy website for the design of cardiac devices.

    PubMed

    Spencer, Julianne H; Quill, Jason L; Bateman, Michael G; Eggen, Michael D; Howard, Stephen A; Goff, Ryan P; Howard, Brian T; Quallich, Stephen G; Iaizzo, Paul A

    2013-11-01

    This paper describes how the Atlas of Human Cardiac Anatomy website can be used to improve cardiac device design throughout the process of development. The Atlas is a free-access website featuring novel images of both functional and fixed human cardiac anatomy from over 250 human heart specimens. This website provides numerous educational tutorials on anatomy, physiology and various imaging modalities. For instance, the 'device tutorial' provides examples of devices that were either present at the time of in vitro reanimation or were subsequently delivered, including leads, catheters, valves, annuloplasty rings and stents. Another section of the website displays 3D models of the vasculature, blood volumes and/or tissue volumes reconstructed from computed tomography and magnetic resonance images of various heart specimens. The website shares library images, video clips and computed tomography and MRI DICOM files in honor of the generous gifts received from donors and their families.

  12. Aerial LED signage by use of crossed-mirror array

    NASA Astrophysics Data System (ADS)

    Yamamoto, Hirotsugu; Kujime, Ryousuke; Bando, Hiroki; Suyama, Shiro

    2013-03-01

    3D representation improves the impact of digital signage and speeds the viewer's grasp of important points. Real 3D display techniques such as volumetric 3D displays are effective for public signs because they provide not only binocular disparity but also motion parallax and other depth cues, giving a 3D impression even to people with abnormal binocular vision. Our goal is to realize aerial 3D LED signs. We have specially designed and fabricated a reflective optical device that forms an aerial image of LEDs over a wide field angle. The developed device is composed of a crossed-mirror array (CMA), which contains dihedral corner reflectors at each aperture. After double reflection, light rays emitted from an LED converge onto the corresponding image point. The depth separation between LED lamps is reproduced as the same depth separation in the floating 3D image. The floating image of the LEDs was formed over a wide range of incident angles, with a peak reflectance at 35 deg. The size of the focused beam (the point spread function) agreed with the apparent aperture size.

  13. Reading performance with large fonts on high-resolution displays

    NASA Astrophysics Data System (ADS)

    Powers, Maureen K.; Larimer, James O.; Gille, Jennifer; Liu, Hsien-Chang

    2004-06-01

    Reading is a fundamental task and skill in many environments including business, education, and the home. Today, reading often occurs on electronic displays in addition to traditional hard copy media such as books and magazines, presenting issues of legibility and other factors that can affect human performance [1]. In fact, the transition to soft copy media for text images is often met with worker complaints about their vision and comfort while reading [2-6]. Careful comparative evaluations of reading performance across hard and soft copy device types are rare, even though they are clearly important given the rapid and substantial improvements in soft copy devices available in the marketplace over the last 5 years. To begin to fill this evaluation gap, we compared reading performance on three different soft copy devices and traditional paper. This study does not investigate comfort factors such as display location, seating comfort, or more general issues of lighting; instead, we focus on a narrow examination of reading performance differences across display types when font sizes are large.

  14. Design and characterization of a handheld multimodal imaging device for the assessment of oral epithelial lesions

    NASA Astrophysics Data System (ADS)

    Higgins, Laura M.; Pierce, Mark C.

    2014-08-01

    A compact handpiece combining high resolution fluorescence (HRF) imaging with optical coherence tomography (OCT) was developed to provide real-time assessment of oral lesions. This multimodal imaging device simultaneously captures coregistered en face images with subcellular detail alongside cross-sectional images of tissue microstructure. The HRF imaging acquires a 712×594 μm2 field-of-view at the sample with a spatial resolution of 3.5 μm. The OCT images were acquired to a depth of 1.5 mm with axial and lateral resolutions of 9.3 and 8.0 μm, respectively. HRF and OCT images are simultaneously displayed at 25 fps. The handheld device was used to image a healthy volunteer, demonstrating the potential for in vivo assessment of the epithelial surface for dysplastic and neoplastic changes at the cellular level, while simultaneously evaluating submucosal involvement. We anticipate potential applications in real-time assessment of oral lesions for improved surveillance and surgical guidance.

  15. Recent patents on electrophoretic displays and materials.

    PubMed

    Christophersen, Marc; Phlips, Bernard F

    2010-11-01

    Electrophoretic displays (EPDs) have made their way into consumer products. EPDs enable displays that offer the look and form of a printed page, often called "electronic paper". We review recent apparatus and method patents for EPD devices and their fabrication. A brief introduction to the basic display operation and history of EPDs is given, while pointing out the technological challenges and difficulties for inventors. Recently, the majority of scientific publications and patenting activity has been directed to micro-segmented EPDs. These devices exhibit high optical reflectance and contrast, wide viewing angle, and high image resolution. Micro-segmented EPDs can also be integrated with flexible transistor technologies into flexible displays. Typical particle sizes range from 200 nm to 2 μm. Currently, one very active area of patenting is the development of full-color EPDs. We summarize the recent patenting activity for EPDs and provide comments on the perceived factors driving intellectual property protection for EPD technologies.

  16. The application of autostereoscopic display in smart home system based on mobile devices

    NASA Astrophysics Data System (ADS)

    Zhang, Yongjun; Ling, Zhi

    2015-03-01

    Smart home systems, which control household devices, are becoming increasingly common in daily life. Mobile intelligent terminals for smart homes have been developed, making remote control and monitoring possible with smartphones or tablets. Meanwhile, 3D stereo display technology has developed rapidly in recent years. We therefore propose an iPad-based smart home system that adopts an autostereoscopic display as its control interface to improve the user experience. In consideration of the iPad's limited hardware capabilities, we introduce a 3D image synthesis method based on parallel processing on the Graphics Processing Unit (GPU), implemented with the OpenGL ES Application Programming Interface (API) on iOS for real-time autostereoscopic display. Compared with a traditional smart home system, the proposed system's autostereoscopic control interface enhances the realism, user-friendliness, and visual comfort of the interface.
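    The per-pixel view synthesis for a simple two-view autostereoscopic panel can be sketched as column interleaving. The paper performs the equivalent operation per fragment on the GPU with OpenGL ES; NumPy stands in here, and the two-view parallax-barrier layout is an illustrative assumption.

    ```python
    import numpy as np

    def interleave_views(left: np.ndarray, right: np.ndarray) -> np.ndarray:
        """Even pixel columns take the left view, odd columns the right view,
        matching a vertical two-view parallax-barrier subpixel layout."""
        assert left.shape == right.shape
        out = np.empty_like(left)
        out[:, 0::2] = left[:, 0::2]
        out[:, 1::2] = right[:, 1::2]
        return out

    left = np.zeros((4, 6), dtype=np.uint8)        # left view: all black
    right = np.full((4, 6), 255, dtype=np.uint8)   # right view: all white
    composite = interleave_views(left, right)      # columns alternate 0, 255, ...
    ```

    On the GPU, the same test (is this fragment's column even or odd?) selects which view's texture to sample, which is what makes the synthesis parallel and real-time on mobile hardware.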

  17. A portable detection instrument based on DSP for beef marbling

    NASA Astrophysics Data System (ADS)

    Zhou, Tong; Peng, Yankun

    2014-05-01

    Beef marbling is one of the most important indices for assessing beef quality. It is graded by measuring the density of fat distributed in the rib-eye region. In most beef slaughterhouses and businesses, however, grading depends on trainees judging by eye or comparing a beef slice to the Chinese standard sample cards. Manual grading is not only labor-intensive but also lacks objectivity and accuracy. To meet the needs of slaughterhouses and businesses, a beef marbling detection instrument was designed. The instrument employs charge-coupled device (CCD) imaging, digital image processing, digital signal processor (DSP) control and processing, and liquid crystal display (LCD) output. The TMS320DM642 digital signal processor from Texas Instruments (TI) is the core, combining high-speed data processing capability with real-time processing features. All processes, including image acquisition, data transmission, image processing, and display, are implemented on this instrument for quick, efficient, and non-invasive detection of beef marbling. The structure of the system, its working principle, and its hardware and software are introduced in detail. The device is compact and easy to transport, and it determines the grade of beef marbling reliably and correctly.
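    The grading idea, scoring marbling from the density of fat pixels inside the rib-eye region, can be sketched as follows. The threshold, grade boundaries, and segmentation here are illustrative assumptions, not the instrument's actual DSP algorithm.

    ```python
    import numpy as np

    def fat_ratio(gray: np.ndarray, roi: np.ndarray, thresh: int = 200) -> float:
        """Fraction of rib-eye pixels brighter than `thresh` (fat is near-white)."""
        region = gray[roi]
        return float(np.count_nonzero(region > thresh)) / region.size

    def marbling_grade(ratio: float) -> int:
        """Map fat density to an illustrative 1-5 grade (assumed boundaries)."""
        bounds = [0.02, 0.05, 0.10, 0.18]
        return 1 + sum(ratio > b for b in bounds)

    # Toy example: a 2x2 crop where half the pixels are bright (fat-like).
    gray = np.array([[210, 90], [220, 80]], dtype=np.uint8)
    roi = np.ones_like(gray, dtype=bool)     # rib-eye mask (here: everything)
    grade = marbling_grade(fat_ratio(gray, roi))
    ```

    In practice the rib-eye mask would come from a segmentation step, and the grade boundaries would be calibrated against the standard sample cards the abstract mentions.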

  18. Initial experience with a handheld device digital imaging and communications in medicine viewer: OsiriX mobile on the iPhone.

    PubMed

    Choudhri, Asim F; Radvany, Martin G

    2011-04-01

    Medical imaging is commonly used to diagnose many emergent conditions, as well as plan treatment. Digital images can be reviewed on almost any computing platform. Modern mobile phones and handheld devices are portable computing platforms with robust software programming interfaces, powerful processors, and high-resolution displays. OsiriX mobile, a new Digital Imaging and Communications in Medicine viewing program, is available for the iPhone/iPod touch platform. This raises the possibility of mobile review of diagnostic medical images to expedite diagnosis and treatment planning using a commercial off the shelf solution, facilitating communication among radiologists and referring clinicians.

  19. A full-parallax 3D display with restricted viewing zone tracking viewer's eye

    NASA Astrophysics Data System (ADS)

    Beppu, Naoto; Yendo, Tomohiro

    2015-03-01

    Three-dimensional (3D) vision is now a widely familiar imaging technique. 3D displays have been put into practical use in various fields, such as entertainment and medicine, and the development of 3D display technology will play an important role in a wide range of fields. Among the various methods of displaying 3D images, we focus on ray reproduction. This method requires many viewpoint images to achieve full parallax, because it displays a different image depending on the viewpoint. We proposed reducing wasted rays by limiting the projector's rays to the area around the viewer using a spinning mirror, thereby increasing the effectiveness of the display device and achieving a full-parallax 3D display. Our method combines viewer eye tracking, a high-speed projector, a rotating mirror that tracks the viewer (a spinning mirror), a concave mirror array whose elements have different vertical slopes arranged circumferentially, and a cylindrical mirror. In simulation, we confirmed the scanning range and the locus of horizontal ray movement, as well as the switching of viewpoints and the convergence performance of rays in the vertical direction. We therefore confirmed that full parallax can be realized.

  20. 21 CFR 884.2225 - Obstetric-gynecologic ultrasonic imager.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 21 Food and Drugs 8 2011-04-01 2011-04-01 false Obstetric-gynecologic ultrasonic imager. 884.2225 Section 884.2225 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES... generic type of device may include the following: signal analysis and display equipment, electronic...

  1. 21 CFR 892.1560 - Ultrasonic pulsed echo imaging system.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 21 Food and Drugs 8 2014-04-01 2014-04-01 false Ultrasonic pulsed echo imaging system. 892.1560 Section 892.1560 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES... receiver. This generic type of device may include signal analysis and display equipment, patient and...

  2. 21 CFR 892.1560 - Ultrasonic pulsed echo imaging system.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 21 Food and Drugs 8 2011-04-01 2011-04-01 false Ultrasonic pulsed echo imaging system. 892.1560 Section 892.1560 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES... receiver. This generic type of device may include signal analysis and display equipment, patient and...

  3. 21 CFR 892.1650 - Image-intensified fluoroscopic x-ray system.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 21 Food and Drugs 8 2011-04-01 2011-04-01 false Image-intensified fluoroscopic x-ray system. 892.1650 Section 892.1650 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN... through electronic amplification. This generic type of device may include signal analysis and display...

  4. 21 CFR 892.1650 - Image-intensified fluoroscopic x-ray system.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 21 Food and Drugs 8 2014-04-01 2014-04-01 false Image-intensified fluoroscopic x-ray system. 892.1650 Section 892.1650 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN... through electronic amplification. This generic type of device may include signal analysis and display...

  5. 21 CFR 884.2225 - Obstetric-gynecologic ultrasonic imager.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Obstetric-gynecologic ultrasonic imager. 884.2225 Section 884.2225 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES... generic type of device may include the following: signal analysis and display equipment, electronic...

  6. 21 CFR 892.1560 - Ultrasonic pulsed echo imaging system.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Ultrasonic pulsed echo imaging system. 892.1560 Section 892.1560 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES... receiver. This generic type of device may include signal analysis and display equipment, patient and...

  7. 21 CFR 884.2225 - Obstetric-gynecologic ultrasonic imager.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 21 Food and Drugs 8 2014-04-01 2014-04-01 false Obstetric-gynecologic ultrasonic imager. 884.2225 Section 884.2225 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES... generic type of device may include the following: signal analysis and display equipment, electronic...

  8. 21 CFR 892.1650 - Image-intensified fluoroscopic x-ray system.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Image-intensified fluoroscopic x-ray system. 892.1650 Section 892.1650 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN... through electronic amplification. This generic type of device may include signal analysis and display...

  9. Hierarchical tone mapping for high dynamic range image visualization

    NASA Astrophysics Data System (ADS)

    Qiu, Guoping; Duan, Jiang

    2005-07-01

    In this paper, we present a computationally efficient, easy-to-use tone mapping technique for visualizing high dynamic range (HDR) images on low dynamic range (LDR) reproduction devices. The new method, termed the hierarchical nonlinear linear (HNL) tone-mapping operator, maps pixels in two hierarchical steps. The first step allocates an appropriate number of LDR display levels to each HDR intensity interval according to the interval's pixel density. The second step linearly maps each HDR intensity interval to its allocated LDR display levels. In the HNL scheme, the assignment of LDR display levels to HDR intensity intervals is controlled by a very simple and flexible formula with a single adjustable parameter. We also show that the new operator can be used for effective enhancement of ordinary images.
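    The two hierarchical steps can be sketched as follows. The number of intervals and the density-to-levels blending rule below are assumed stand-ins for the paper's single-parameter allocation formula, not its exact form.

    ```python
    import numpy as np

    def hnl_tone_map(hdr, n_intervals=64, out_levels=256, alpha=0.5):
        """Hierarchical nonlinear-linear mapping sketch (assumed parameters)."""
        hdr = np.asarray(hdr, dtype=float)
        # Step 1 (nonlinear allocation): split the HDR range into equal-width
        # intervals and give each a share of the output levels that grows with
        # its pixel density; alpha blends density-driven and uniform shares.
        hist, edges = np.histogram(hdr, bins=n_intervals)
        density = hist / hist.sum()
        share = alpha * density + (1.0 - alpha) / n_intervals
        levels = np.maximum(1, np.round(share * out_levels)).astype(int)
        starts = np.concatenate(([0], np.cumsum(levels)[:-1]))
        # Step 2 (linear mapping): map each interval linearly onto its levels.
        idx = np.digitize(hdr, edges[1:-1])
        lo, hi = edges[idx], edges[idx + 1]
        frac = np.where(hi > lo, (hdr - lo) / (hi - lo), 0.0)
        return np.clip(starts[idx] + frac * levels[idx], 0, out_levels - 1)
    ```

    Because each interval's output range starts where the previous one ends, the mapping is monotonic: dense intensity ranges receive more display levels (better contrast) without reordering pixel brightness.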

  10. Confocal retinal imaging using a digital light projector with a near infrared VCSEL source

    NASA Astrophysics Data System (ADS)

    Muller, Matthew S.; Elsner, Ann E.

    2018-02-01

    A custom near infrared VCSEL source has been implemented in a confocal non-mydriatic retinal camera, the Digital Light Ophthalmoscope (DLO). The use of near infrared light improves patient comfort, avoids pupil constriction, penetrates the deeper retina, and does not mask visual stimuli. The DLO performs confocal imaging by synchronizing a sequence of lines displayed with a digital micromirror device to the rolling shutter exposure of a 2D CMOS camera. Real-time software adjustments enable multiply scattered light imaging, which rapidly and cost-effectively emphasizes drusen and other scattering disruptions in the deeper retina. A separate 5.1" LCD display provides customizable visible stimuli for vision experiments with simultaneous near infrared imaging.

  11. Adapting smartphones for low-cost optical medical imaging

    NASA Astrophysics Data System (ADS)

    Pratavieira, Sebastião.; Vollet-Filho, José D.; Carbinatto, Fernanda M.; Blanco, Kate; Inada, Natalia M.; Bagnato, Vanderlei S.; Kurachi, Cristina

    2015-06-01

    Optical images have been used in several medical situations to improve the diagnosis of lesions or to monitor treatments. However, most systems employ expensive scientific (CCD or CMOS) cameras and need computers to display and save the images, usually resulting in a high final cost for the system. Operating such an apparatus also tends to be complex, requiring increasingly specialized technical knowledge from the operator. Meanwhile, the number of people using smartphone-like devices with built-in high-quality cameras is increasing, which might allow such devices to serve as efficient, lower-cost, portable imaging systems for medical applications. We therefore aim to develop methods for adapting these devices to optical medical imaging techniques such as fluorescence imaging. In particular, smartphone covers were adapted to connect a smartphone-like device to widefield fluorescence imaging systems. These systems were used to detect lesions in tissues such as the cervix and the mouth/throat mucosa, and to monitor ALA-induced protoporphyrin IX formation during photodynamic treatment of cervical intraepithelial neoplasia. This approach may contribute significantly to low-cost, portable, and simple clinical optical image collection.

  12. Tangible imaging systems

    NASA Astrophysics Data System (ADS)

    Ferwerda, James A.

    2013-03-01

    We are developing tangible imaging systems [1-4] that enable natural interaction with virtual objects. Tangible imaging systems are based on consumer mobile devices that incorporate electronic displays, graphics hardware, accelerometers, gyroscopes, and digital cameras, in laptop or tablet-shaped form factors. Custom software allows the orientation of a device and the position of the observer to be tracked in real time. Using this information, realistic images of three-dimensional objects with complex textures and material properties are rendered to the screen, and tilting or moving in front of the device produces realistic changes in surface lighting and material appearance. Tangible imaging systems thus allow virtual objects to be observed and manipulated as naturally as real ones, with the added benefit that object properties can be modified under user control. In this paper we describe four tangible imaging systems we have developed: the tangiBook - our first implementation on a laptop computer; tangiView - a more refined implementation on a tablet device; tangiPaint - a tangible digital painting application; and phantoView - an application that takes the tangible imaging concept into stereoscopic 3D.

  13. The Eye Catching Property of Digital-Signage with Scent and a Scent-Emitting Video Display System

    NASA Astrophysics Data System (ADS)

    Tomono, Akira; Otake, Syunya

    In this paper, an effective method of drawing a passer-by's glance to digital signage by emitting a scent is described. Because an experiment in an actual passageway would face many restrictions, a simulation experiment was conducted in an immersive VR system. To investigate the eye-catching property of the digital signage, the passers-by's eye movements were analyzed. The experiment clarified that digital signage accompanied by scent attracted attention and left a strong impression in memory. Next, a scent-emitting video display system for digital signage is described. To this end, a scent-emitting device must be developed that can quickly change the scents it releases and present them from a distance (by a non-contact method), maintaining the relationship between the scent and the image. We propose a new method in which a device that releases pressurized gases is placed behind a display screen perforated with tiny pores; scents ejected from this device travel through the pores to the front side of the screen. Excellent scent delivery was obtained because the distance to the user is short and the scent is presented from the front. We also present a method for inducing viewer reactions using on-screen images, enabling scent release to coincide precisely with viewer inhalations. We anticipate that the simultaneous presentation of scents and video images will deepen viewers' comprehension of those images.

  14. Graphical user interface for image acquisition and processing

    DOEpatents

    Goldberg, Kenneth A.

    2002-01-01

    An event-driven, GUI-based image acquisition interface for the IDL programming environment, designed for CCD camera control and image acquisition directly into the IDL environment, where image manipulation and data analysis can be performed with a toolbox of real-time analysis applications. Running the image acquisition hardware directly from IDL removes the necessity of first saving images in one program and then importing the data into IDL for analysis in a second step. Bringing the data directly into IDL creates an opportunity to apply IDL image processing and display functions in real time. The program allows control over the available charge-coupled device (CCD) detector parameters, data acquisition, file saving and loading, and image manipulation and processing, all from within IDL. The program is built using IDL's widget libraries to control the on-screen display and user interface.

  15. Signal Normalization Reduces Image Appearance Disparity Among Multiple Optical Coherence Tomography Devices.

    PubMed

    Chen, Chieh-Li; Ishikawa, Hiroshi; Wollstein, Gadi; Bilonick, Richard A; Kagemann, Larry; Schuman, Joel S

    2017-02-01

    To assess the effect of the previously reported optical coherence tomography (OCT) signal normalization method on reducing the discrepancies in image appearance among spectral-domain OCT (SD-OCT) devices. Healthy eyes and eyes with various retinal pathologies were scanned at the macular region using similar volumetric scan patterns with at least two out of three SD-OCT devices at the same visit (Cirrus HD-OCT, Zeiss, Dublin, CA; RTVue, Optovue, Fremont, CA; and Spectralis, Heidelberg Engineering, Heidelberg, Germany). All the images were processed with the signal normalization. A set of images formed a questionnaire with 24 pairs of cross-sectional images from each eye with any combination of the three SD-OCT devices, either both pre- or both post-signal normalization. Observers were asked to evaluate the similarity of the two displayed images based on the image appearance. The effects on reducing the differences in image appearance before and after processing were analyzed. Twenty-nine researchers familiar with OCT images participated in the survey. Image similarity improved significantly after signal normalization for all three combinations (P ≤ 0.009); the Cirrus-RTVue combination became the most similar pair, followed by Cirrus-Spectralis and RTVue-Spectralis. Signal normalization successfully minimized the disparities in image appearance among multiple SD-OCT devices, allowing clinical interpretation and comparison of OCT images regardless of device differences. It would enable direct comparison of OCT images without concern about device differences, and broaden OCT usage by enabling long-term follow-up and data sharing.

  16. Switching of liquid crystal devices between reflective and transmissive modes

    NASA Astrophysics Data System (ADS)

    Lin, Hui-Chi; Wang, Chih-Hung

    In transflective liquid crystal displays (LCDs), each pixel is conventionally divided into reflective (R) and transmissive (T) subpixels: the R mode uses ambient light, while the T mode uses a backlight to display images. However, dividing the pixel decreases both light efficiency and resolution. This study demonstrates a gelator-doped liquid crystal (LC) device that is switchable between R and T modes without sub-pixel division. The R and T modes are designed as bend configurations with phase retardations of π/2 and π, respectively. The phase retardation of the LC device can be varied and fixed through the thermoreversible association and dissociation of the gelator molecules. The proposed device is believed to be a promising candidate for portable information systems.

  17. Efficient green lasers for high-resolution scanning micro-projector displays

    NASA Astrophysics Data System (ADS)

    Bhatia, Vikram; Bauco, Anthony S.; Oubei, Hassan M.; Loeber, David A. S.

    2010-02-01

    Laser-based projectors are gaining increased acceptance in the mobile device market due to their low power consumption, superior image quality, and small size. The basic configuration of such micro-projectors is a miniature mirror that creates an image by raster-scanning collinear red, blue, and green laser beams that are individually modulated on a pixel-by-pixel basis. The image resolution of these displays can be limited by the modulation bandwidth of the laser sources, and the modulation speed of the green laser has been one of the key limitations in the development of these displays. We discuss why this limitation is fundamental to the architecture of many laser designs and then present a green laser configuration that overcomes these difficulties. In this architecture, infrared light from a distributed Bragg-reflector (DBR) laser diode is converted to green light in a waveguided second-harmonic-generation (SHG) crystal. Direct doubling in a single pass through the SHG crystal allows the device to operate at the large modulation bandwidth of the DBR laser. We demonstrate that the resulting product has a small footprint (<0.7 cc envelope volume), high efficiency (>9% electrical-to-optical conversion), and large modulation bandwidth (>100 MHz).

  18. Using a Motion Sensor-Equipped Smartphone to Facilitate CT-Guided Puncture.

    PubMed

    Hirata, Masaaki; Watanabe, Ryouhei; Koyano, Yasuhiro; Sugata, Shigenori; Takeda, Yukie; Nakamura, Seiji; Akamune, Akihisa; Tsuda, Takaharu; Mochizuki, Teruhito

    2017-04-01

    To demonstrate the use of "Smart Puncture," a smartphone application that assists conventional CT-guided puncture without CT fluoroscopy, and to describe its advantages. A puncture guideline is displayed by entering the planned angle into the application. Regardless of the angle at which the device is held, the motion sensor ensures that the guideline is displayed at the correct angle with respect to gravity. The tilt of the smartphone's liquid crystal display (LCD) is also detected, preventing needle deflection out of the plane of the CT slice image. Physicians perform the puncture by advancing the needle along the guideline while the smartphone is placed adjacent to the patient. In an experimental puncture test using a sponge as a target, the target was punctured at angles of 30°, 50°, and 70° while the device was tilted to 0°, 15°, 30°, and 45°. The punctured target was then imaged with a CT scan, and the puncture error was measured. The mean puncture error in the plane parallel to the LCD was less than 2°, irrespective of device tilt. The mean puncture error in the sagittal plane was less than 3° with no device tilt; however, it tended to increase as the tilt increased. This application can transform a smartphone into a valuable tool capable of objectively and accurately assisting CT-guided puncture procedures.
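    The gravity compensation described above amounts to correcting the on-screen guideline by the device's own roll, measured from the accelerometer's in-plane gravity components. This is a minimal sketch with assumed axis conventions, not the application's actual code.

    ```python
    import math

    def device_roll_deg(ax: float, ay: float) -> float:
        """Roll of the screen about its normal, from the in-plane gravity
        components reported by the accelerometer (axis convention assumed)."""
        return math.degrees(math.atan2(ax, ay))

    def guideline_angle_deg(target_deg: float, ax: float, ay: float) -> float:
        """Screen-space angle that keeps the drawn guideline at `target_deg`
        with respect to gravity, whatever the device's roll."""
        return target_deg - device_roll_deg(ax, ay)

    # Device held upright (gravity along the screen's +y axis): no correction.
    assert guideline_angle_deg(30.0, 0.0, 9.81) == 30.0
    # Device rolled 15 deg: the line is drawn rotated back by 15 deg.
    roll = device_roll_deg(math.sin(math.radians(15)) * 9.81,
                           math.cos(math.radians(15)) * 9.81)
    assert abs(roll - 15.0) < 1e-9
    ```

    Keeping the guideline gravity-referenced is what lets the physician hold the phone at any convenient angle beside the patient while still reading off the planned puncture angle.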

  19. Small Interactive Image Processing System (SMIPS) system description

    NASA Technical Reports Server (NTRS)

    Moik, J. G.

    1973-01-01

    The Small Interactive Image Processing System (SMIPS) operates under control of the IBM-OS/MVT operating system and uses an IBM-2250 model 1 display unit as interactive graphic device. The input language in the form of character strings or attentions from keys and light pen is interpreted and causes processing of built-in image processing functions as well as execution of a variable number of application programs kept on a private disk file. A description of design considerations is given and characteristics, structure and logic flow of SMIPS are summarized. Data management and graphic programming techniques used for the interactive manipulation and display of digital pictures are also discussed.

  20. Computational see-through near-eye displays

    NASA Astrophysics Data System (ADS)

    Maimone, Andrew S.

    See-through near-eye displays with the form factor and field of view of eyeglasses are a natural choice for augmented reality systems: the non-encumbering size enables casual and extended use, and the large field of view enables general-purpose, spatially registered applications. However, designing displays with these attributes is currently an open problem. Support for enhanced realism through mutual occlusion and the focal depth cues is also not found in eyeglasses-like displays. This dissertation provides a new strategy for eyeglasses-like displays that follows the principles of computational displays, devices that rely on software as a fundamental part of image formation. Such devices allow more hardware simplicity and flexibility, showing greater promise of meeting form factor and field of view goals while enhancing realism. This computational approach is realized in two novel and complementary see-through near-eye display designs. The first, subtractive approach filters omnidirectional light through a set of optimized patterns displayed on a stack of spatial light modulators, reproducing a light field corresponding to in-focus imagery. The design is thin and scales to wide fields of view; see-through is achieved with transparent components placed directly in front of the eye. Preliminary support for focal cues and environment occlusion is also demonstrated. The second, additive approach uses structured point light illumination to form an image with a minimal set of rays. Each of an array of defocused point light sources is modulated by a region of a spatial light modulator, essentially encoding an image in the focal blur. See-through is again achieved with transparent components, and thin form factors and wide fields of view (>= 100 degrees) are demonstrated. The designs are examined in theoretical terms, in simulation, and through prototype hardware with public demonstrations.
This analysis shows that the proposed computational near-eye display designs offer a significantly different set of trade-offs than conventional optical designs. Several challenges remain to make the designs practical, most notably addressing diffraction limits.

  1. The Formal Specification of a Visual display Device: Design and Implementation.

    DTIC Science & Technology

    1985-06-01

    The use of these data structures with their defined operations gives the programmer a very powerful instruction set. Like the DPU code generator in...which any AM hosted machine could faithfully display. In general, most applications have no need to create images from a data structure representing...formation of standard functional interfaces to these resources. Operating systems generally do not provide a functional interface to either the processor or the display.

  2. Scanning laser beam displays based on a 2D MEMS

    NASA Astrophysics Data System (ADS)

    Niesten, Maarten; Masood, Taha; Miller, Josh; Tauscher, Jason

    2010-05-01

    The combination of laser light sources and MEMS technology enables a range of display systems such as ultra-small projectors for mobile devices, head-up displays for vehicles, wearable near-eye displays and projection systems for 3D imaging. Images are created by scanning red, green and blue lasers horizontally and vertically with a single two-dimensional MEMS. Due to the excellent beam quality of laser beams, the optical designs are efficient and compact. In addition, the laser illumination enables saturated display colors that are desirable for augmented reality applications where a virtual image is used. With this technology, the smallest projector engine for high-volume manufacturing to date has been developed. This projector module has a height of 7 mm and a volume of 5 cc. The resolution of this projector is WVGA. No additional projection optics are required, resulting in an infinite focus depth. Unlike micro-display-based projection displays, an increase in resolution will not lead to an increase in size or a decrease in efficiency. Therefore future projectors can be developed that combine higher resolution in an even smaller and thinner form factor with increased efficiencies that will lead to lower power consumption.

  3. A Guide to Microforms and Microform Retrieval Equipment.

    ERIC Educational Resources Information Center

    McKay, Mark, Ed.

    As used in this handbook, microform retrieval equipment is defined as any device that is used to locate, enlarge, and display microform images or that produces enlarged hard copy from the images. Only equipment widely available in the United States has been included. The first chapter provides information about the most widely and generally used…

  4. The Role and Design of Screen Images in Software Documentation.

    ERIC Educational Resources Information Center

    van der Meij, Hans

    2000-01-01

    Discussion of learning a new computer software program focuses on how to support the joint handling of a manual, input devices, and screen display. Describes a study that examined three design styles for manuals that included screen images to reduce split-attention problems and discusses theory versus practice and cognitive load theory.…

  5. Projectors get personal

    NASA Astrophysics Data System (ADS)

    Graham-Rowe, Duncan

    2007-12-01

    As the size of handheld gadgets decreases, their displays become harder to view. The solution could lie with integrated projectors that can project crisp, large images from mobile devices onto any chosen surface. Duncan Graham-Rowe reports.

  6. Teleoperated robotic sorting system

    DOEpatents

    Roos, Charles E.; Sommer, Jr., Edward J.; Parrish, Robert H.; Russell, James R.

    2008-06-24

    A method and apparatus are disclosed for classifying materials utilizing a computerized touch sensitive screen or other computerized pointing device for operator identification and electronic marking of spatial coordinates of materials to be extracted. An operator positioned at a computerized touch sensitive screen views electronic images of the mixture of materials to be sorted as they are conveyed past a sensor array which transmits sequences of images of the mixture either directly or through a computer to the touch sensitive display screen. The operator manually "touches" objects displayed on the screen to be extracted from the mixture thereby registering the spatial coordinates of the objects within the computer. The computer then tracks the registered objects as they are conveyed and directs automated devices including mechanical means such as air jets, robotic arms, or other mechanical diverters to extract the registered objects.

  7. Teleoperated robotic sorting system

    DOEpatents

    Roos, Charles E.; Sommer, Edward J.; Parrish, Robert H.; Russell, James R.

    2000-01-01

    A method and apparatus are disclosed for classifying materials utilizing a computerized touch sensitive screen or other computerized pointing device for operator identification and electronic marking of spatial coordinates of materials to be extracted. An operator positioned at a computerized touch sensitive screen views electronic images of the mixture of materials to be sorted as they are conveyed past a sensor array which transmits sequences of images of the mixture either directly or through a computer to the touch sensitive display screen. The operator manually "touches" objects displayed on the screen to be extracted from the mixture thereby registering the spatial coordinates of the objects within the computer. The computer then tracks the registered objects as they are conveyed and directs automated devices including mechanical means such as air jets, robotic arms, or other mechanical diverters to extract the registered objects.

  8. Amorphous Silicon: Flexible Backplane and Display Application

    NASA Astrophysics Data System (ADS)

    Sarma, Kalluri R.

    Advances in the science and technology of hydrogenated amorphous silicon (a-Si:H, also referred to as a-Si) and the associated devices, including thin-film transistors (TFTs), during the past three decades have had a profound impact on the development and commercialization of major applications such as thin-film solar cells, digital image scanners, X-ray imagers and active matrix liquid crystal displays (AMLCDs). Particularly during approximately the past 15 years, a-Si TFT-based flat panel AMLCDs have been a huge commercial success. The a-Si TFT-LCD enabled notebook PCs and is now rapidly replacing the venerable CRT in desktop monitor and home TV applications. a-Si TFT-LCD is now the dominant technology in use for applications ranging from small displays, such as in mobile phones, to large displays, such as in home TVs, as well as specialized applications such as industrial and avionics displays.

  9. Touchscreen everywhere: on transferring a normal planar surface to a touch-sensitive display.

    PubMed

    Dai, Jingwen; Chung, Chi-Kit Ronald

    2014-08-01

    We address how a human-computer interface with small device size, large display, and touch-input facility can be made possible by a mere projector and camera. The realization is through the use of a properly embedded structured light sensing scheme that enables a regular light-colored table surface to serve the dual roles of both a projection screen and a touch-sensitive display surface. A random binary pattern is employed to code structured light in pixel accuracy, which is embedded into the regular projection display in such a way that the user perceives only the regular display and not the structured pattern hidden within it. With the projection display on the table surface being imaged by a camera, the observed image data, plus the known projection content, can work together to probe the 3-D workspace immediately above the table surface: deciding whether a finger is present, whether the finger touches the table surface, and, if so, at what position on the table surface the contact is made. All the decisions hinge upon a careful calibration of the projector-camera-table surface system, intelligent segmentation of the hand in the image data, and exploitation of the homography mapping that exists between the projector's display panel and the camera's image plane. Extensive experiments, including evaluations of display quality, hand segmentation accuracy, touch detection accuracy, trajectory tracking accuracy, multitouch capability, and system efficiency, are presented to illustrate the feasibility of the proposed realization.
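    The homography mapping at the heart of such projector-camera calibration can be sketched generically. The following is a standard 4-point direct-linear estimate assuming NumPy; it is not the authors' calibration code, and the corner-correspondence setup is illustrative.

```python
import numpy as np

def homography_from_4pts(src, dst):
    """Solve for the 3x3 homography H (with H[2,2] = 1) mapping four src
    points exactly onto four dst points, e.g. projector-panel corners onto
    their observed camera-image positions."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.asarray(A, float), np.asarray(b, float))
    return np.append(h, 1.0).reshape(3, 3)

def map_point(H, pt):
    """Apply H to a 2-D point via homogeneous coordinates."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return (x / w, y / w)
```

Once H (or its inverse) is known, a fingertip located in the camera image can be mapped back to panel coordinates to decide where on the displayed content the touch landed.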

  10. Vergence-accommodation conflicts hinder visual performance and cause visual fatigue.

    PubMed

    Hoffman, David M; Girshick, Ahna R; Akeley, Kurt; Banks, Martin S

    2008-03-28

    Three-dimensional (3D) displays have become important for many applications including vision research, operation of remote devices, medical imaging, surgical training, scientific visualization, virtual prototyping, and more. In many of these applications, it is important for the graphic image to create a faithful impression of the 3D structure of the portrayed object or scene. Unfortunately, 3D displays often yield distortions in perceived 3D structure compared with the percepts of the real scenes the displays depict. A likely cause of such distortions is the fact that computer displays present images on one surface. Thus, focus cues-accommodation and blur in the retinal image-specify the depth of the display rather than the depths in the depicted scene. Additionally, the uncoupling of vergence and accommodation required by 3D displays frequently reduces one's ability to fuse the binocular stimulus and causes discomfort and fatigue for the viewer. We have developed a novel 3D display that presents focus cues that are correct or nearly correct for the depicted scene. We used this display to evaluate the influence of focus cues on perceptual distortions, fusion failures, and fatigue. We show that when focus cues are correct or nearly correct, (1) the time required to identify a stereoscopic stimulus is reduced, (2) stereoacuity in a time-limited task is increased, (3) distortions in perceived depth are reduced, and (4) viewer fatigue and discomfort are reduced. We discuss the implications of this work for vision research and the design and use of displays.

  11. Ocular Tolerance of Contemporary Electronic Display Devices.

    PubMed

    Clark, Andrew J; Yang, Paul; Khaderi, Khizer R; Moshfeghi, Andrew A

    2018-05-01

    Electronic displays have become an integral part of life in the developed world since the revolution of mobile computing a decade ago. With the release of multiple consumer-grade virtual reality (VR) and augmented reality (AR) products in the past 2 years utilizing head-mounted displays (HMDs), as well as the development of low-cost, smartphone-based HMDs, the ability to intimately interact with electronic screens is greater than ever. VR/AR HMDs also place the display at much closer ocular proximity than traditional electronic devices while also isolating the user from the ambient environment to create a "closed" system between the user's eyes and the display. Whether the increased interaction with these devices places the user's retina at higher risk of damage is currently unclear. Herein, the authors review the discovery of photochemical damage of the retina from visible light as well as summarize relevant clinical and preclinical data regarding the influence of modern display devices on retinal health. Multiple preclinical studies have been performed with modern light-emitting diode technology demonstrating damage to the retina at modest exposure levels, particularly from blue-light wavelengths. Unfortunately, high-quality in-human studies are lacking, and the small clinical investigations performed to date have failed to keep pace with the rapid evolutions in display technology. Clinical investigations assessing the effect of HMDs on human retinal function are also yet to be performed. From the available data, modern consumer electronic displays do not appear to pose any acute risk to vision with average use; however, future studies with well-defined clinical outcomes and illuminance metrics are needed to better understand the long-term risks of cumulative exposure to electronic displays in general and with "closed" VR/AR HMDs in particular. [Ophthalmic Surg Lasers Imaging Retina. 2018;49:346-354.]. Copyright 2018, SLACK Incorporated.

  12. Advances in display technology III; Proceedings of the Meeting, Los Angeles, CA, January 18, 19, 1983

    NASA Astrophysics Data System (ADS)

    Schlam, E.

    1983-01-01

    Human factors in visible displays are discussed, taking into account an introduction to color vision, a laser optometric assessment of visual display viewability, the quantification of color contrast, human performance evaluations of digital image quality, visual problems of office video display terminals, and contemporary problems in airborne displays. Other topics considered relate to electroluminescent technology, liquid crystal and related technologies, plasma technology, and display terminals and systems. Attention is given to the application of electroluminescent technology to personal computers, electroluminescent driving techniques, thin-film electroluminescent devices with memory, the fabrication of very large electroluminescent displays, the operating properties of thermally addressed dye-switching liquid crystal displays, light-field dichroic liquid crystal displays for very large area displays, and the hardening of military plasma displays for a nuclear environment.

  13. Redundancy of stereoscopic images: Experimental evaluation

    NASA Astrophysics Data System (ADS)

    Yaroslavsky, L. P.; Campos, J.; Espínola, M.; Ideses, I.

    2005-12-01

    With the recent advances in visualization devices over the last years, we are seeing a growing market for stereoscopic content. In order to convey 3D content by means of stereoscopic displays, one needs to transmit and display at least two points of view of the video content. This has profound implications for the resources required to transmit the content, as well as demands on the complexity of the visualization system. It is known that stereoscopic images are redundant, which may prove useful for compression and may have a positive effect on the construction of the visualization device. In this paper we describe an experimental evaluation of data redundancy in color stereoscopic images. In experiments with computer-generated and real-life test stereo images, several observers visually tested the stereopsis threshold and the accuracy of parallax measurement in anaglyphs and stereograms as functions of the degree of blur of one of the two stereo images. In addition, we tested the color saturation threshold in one of the two stereo images for which full-color 3D perception with no visible color degradation was maintained. The experiments support a theoretical estimate that, to the data required to reproduce one of the two stereoscopic images, one need add only a few percent more data in order to achieve stereoscopic perception.

  14. Confocal Retinal Imaging Using a Digital Light Projector with a Near Infrared VCSEL Source

    PubMed Central

    Muller, Matthew S.; Elsner, Ann E.

    2018-01-01

    A custom near infrared VCSEL source has been implemented in a confocal non-mydriatic retinal camera, the Digital Light Ophthalmoscope (DLO). The use of near infrared light improves patient comfort, avoids pupil constriction, penetrates the deeper retina, and does not mask visual stimuli. The DLO performs confocal imaging by synchronizing a sequence of lines displayed with a digital micromirror device to the rolling shutter exposure of a 2D CMOS camera. Real-time software adjustments enable multiply scattered light imaging, which rapidly and cost-effectively emphasizes drusen and other scattering disruptions in the deeper retina. A separate 5.1″ LCD display provides customizable visible stimuli for vision experiments with simultaneous near infrared imaging. PMID:29899586
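    The synchronization described above, matching DMD illumination lines to the camera's rolling-shutter rows, can be sketched as a simple row-to-line mapping. This function and its `offset` parameter for multiply scattered light imaging are illustrative assumptions about the scheme, not the DLO's actual control software.

```python
def dmd_line_for_row(cam_row: int, cam_rows: int, dmd_lines: int,
                     offset: int = 0) -> int:
    """Which DMD line to illuminate while the rolling shutter exposes cam_row.

    offset = 0 keeps illumination and detection aligned (confocal imaging);
    a nonzero offset displaces the detection aperture from the illuminated
    line, emphasizing multiply scattered light from deeper layers.
    """
    line = round(cam_row * dmd_lines / cam_rows) + offset
    return line % dmd_lines
```

Sweeping `cam_row` over the sensor while calling this per row traces the illumination line in lockstep with the shutter; changing `offset` in software is what makes the scattered-light mode a real-time adjustment rather than a hardware change.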

  15. A novel fully integrated handheld gamma camera

    NASA Astrophysics Data System (ADS)

    Massari, R.; Ucci, A.; Campisi, C.; Scopinaro, F.; Soluri, A.

    2016-10-01

    In this paper, we present an innovative, fully integrated handheld gamma camera, designed to gather in the same device the gamma-ray detector, the display and the embedded computing system. The low power consumption allows the prototype to be battery operated. To be useful in radioguided surgery, an intraoperative gamma camera must be very easy to handle, since it must be moved to find a suitable view. Consequently, we have developed the first prototype of a fully integrated, compact and lightweight gamma camera for fast imaging of radiopharmaceuticals. The device can operate without cables across the sterile field, so it may be easily used in the operating theater for radioguided surgery. The prototype consists of a Silicon Photomultiplier (SiPM) array coupled with a proprietary scintillation structure based on CsI(Tl) crystals. To read the SiPM output signals, we have developed very low power readout electronics and a dedicated analog-to-digital conversion system. One of the most critical aspects we faced in designing the prototype was the low power consumption, which is mandatory for a battery operated device. We have applied this detection device to the lymphoscintigraphy technique (sentinel lymph node mapping), comparing the results obtained with those of a commercial gamma camera (Philips SKYLight). The results confirm a rapid response of the device and an adequate spatial resolution for use in scintigraphic imaging. This work confirms the feasibility of a small gamma camera with an integrated display. This device is designed for radioguided surgery and small organ imaging, but it could easily be combined into surgical navigation systems.

  16. Digital Light Processing update: status and future applications

    NASA Astrophysics Data System (ADS)

    Hornbeck, Larry J.

    1999-05-01

    Digital Light Processing (DLP) projection displays based on the Digital Micromirror Device (DMD) were introduced to the market in 1996. Less than 3 years later, DLP-based projectors are found in such diverse applications as mobile, conference room, video wall, home theater, and large-venue. They provide high-quality, seamless, all-digital images that have exceptional stability as well as freedom from both flicker and image lag. Marked improvements have been made in the image quality of DLP-based projection displays, including brightness, resolution, contrast ratio, and border image. DLP-based mobile projectors that weighed about 27 pounds in 1996 now weigh only about 7 pounds. This weight reduction has been responsible for the definition of an entirely new projector class, the ultraportable. New applications are being developed for this important new projection display technology; these include digital photofinishing for high-process-speed minilab and maxilab applications and DLP Cinema for the digital delivery of films to audiences around the world. This paper describes the status of DLP-based projection display technology, including its manufacturing, performance improvements, and new applications, with emphasis on DLP Cinema.

  17. LabVIEW Graphical User Interface for a New High Sensitivity, High Resolution Micro-Angio-Fluoroscopic and ROI-CBCT System

    PubMed Central

    Keleshis, C; Ionita, CN; Yadava, G; Patel, V; Bednarek, DR; Hoffmann, KR; Verevkin, A; Rudin, S

    2008-01-01

    A graphical user interface based on LabVIEW software was developed to enable clinical evaluation of a new High-Sensitivity Micro-Angio-Fluoroscopic (HSMAF) system for real-time acquisition, display and rapid frame transfer of high-resolution region-of-interest images. The HSMAF detector consists of a CsI(Tl) phosphor, a light image intensifier (LII), and a fiber-optic taper coupled to a progressive-scan, frame-transfer, charge-coupled device (CCD) camera which provides real-time 12 bit, 1k × 1k images capable of greater than 10 lp/mm resolution. Images can be captured in continuous or triggered mode, and the camera can be programmed by a computer using Camera Link serial communication. A graphical user interface was developed to control the camera modes such as gain and pixel binning as well as to acquire, store, display, and process the images. The program, written in LabVIEW, has the following capabilities: camera initialization, synchronized image acquisition with the x-ray pulses, roadmap and digital subtraction angiography (DSA) acquisition, flat-field correction, brightness and contrast control, last-frame hold in fluoroscopy, looped playback of the acquired images in angiography, recursive temporal filtering and LII gain control. Frame rates can be up to 30 fps in full-resolution mode. The user-friendly implementation of the interface, along with the high frame-rate acquisition and display for this unique high-resolution detector, should provide angiographers and interventionalists with a new capability for visualizing details of small vessels and endovascular devices such as stents, and hence enable more accurate diagnoses and image-guided interventions. (Support: NIH Grants R01NS43924, R01EB002873) PMID:18836570

  19. New portable FELIX 3D display

    NASA Astrophysics Data System (ADS)

    Langhans, Knut; Bezecny, Daniel; Homann, Dennis; Bahr, Detlef; Vogt, Carsten; Blohm, Christian; Scharschmidt, Karl-Heinz

    1998-04-01

    An improved generation of our 'FELIX 3D Display' is presented. This system is compact, light, modular and easy to transport. The created volumetric images consist of many voxels, which are generated in a half-sphere display volume. In that way a spatial object can be displayed occupying a physical space with height, width and depth. The new FELIX generation uses a screen rotating at 20 revolutions per second. This target screen is mounted by an easy-to-change mechanism, making it possible to use appropriate screens for the specific purpose of the display. An acousto-optic deflection unit with an integrated small diode-pumped laser draws the images on the spinning screen. Images can consist of up to 10,000 voxels at a refresh rate of 20 Hz. Currently two different hardware systems are investigated. The first one is based on a standard PCMCIA digital/analog converter card as an interface and is controlled by a notebook. The developed software is provided with a graphical user interface enabling several animation features. The second, new prototype is designed to display images created by standard CAD applications. It includes the development of a new high-speed hardware interface suitable for state-of-the-art fast and high-resolution scanning devices, which require high data rates. A true 3D volume display as described will complement the broad range of 3D visualization tools, such as volume rendering packages, stereoscopic and virtual reality techniques, which have become widely available in recent years. Potential applications for the FELIX 3D display include air traffic control, medical imaging, computer-aided design, and science, as well as entertainment.

  20. Using Mobile Devices to Display, Overlay, and Animate Geophysical Data and Imagery

    NASA Astrophysics Data System (ADS)

    Batzli, S.; Parker, D.

    2011-12-01

    A major challenge in mobile-device map application development is to offer rich content and features with simple and intuitive controls and fast performance. Our goal is to bring visualization, animation, and notifications of near real-time weather and earth observation information derived from satellite and sensor data to mobile devices. Our robust back-end processing infrastructure can deliver content in the form of images, shapes, standard descriptive formats (e.g., KML, JSON) or raw data to a variety of desktop software, browsers, and mobile devices on demand. We have developed custom interfaces for low-bandwidth browsers (including mobile phones) and high-feature browsers (including smartphones), as well as native applications for Android and iOS devices. Mobile devices offer time- and location-awareness and persistent data connections, allowing us to tailor timely notifications and displays to the user's geographic and time context. This presentation includes a live demo of how our mobile apps deliver animation of standard and custom data products in an interactive map interface.

  1. Exploratory visualization of astronomical data on ultra-high-resolution wall displays

    NASA Astrophysics Data System (ADS)

    Pietriga, Emmanuel; del Campo, Fernando; Ibsen, Amanda; Primet, Romain; Appert, Caroline; Chapuis, Olivier; Hempel, Maren; Muñoz, Roberto; Eyheramendy, Susana; Jordan, Andres; Dole, Hervé

    2016-07-01

    Ultra-high-resolution wall displays feature a very high pixel density over a large physical surface, which makes them well suited to the collaborative, exploratory visualization of large datasets. We introduce FITS-OW, an application designed for such wall displays that enables astronomers to navigate large collections of FITS images, query astronomical databases, and display detailed, complementary data and documents about multiple sources simultaneously. We describe how astronomers interact with their data using both the wall's touch-sensitive surface and handheld devices. We also report on the technical challenges we addressed in terms of distributed graphics rendering and data sharing over the computer clusters that drive wall displays.

  2. Live HDR video streaming on commodity hardware

    NASA Astrophysics Data System (ADS)

    McNamee, Joshua; Hatchett, Jonathan; Debattista, Kurt; Chalmers, Alan

    2015-09-01

    High Dynamic Range (HDR) video provides a step change in viewing experience, for example the ability to clearly see the soccer ball when it is kicked from the shadow of the stadium into sunshine. To achieve the full potential of HDR video, so-called true HDR, it is crucial that all the dynamic range that was captured is delivered to the display device and tone mapping is confined only to the display. Furthermore, to ensure widespread uptake of HDR imaging, it should be low cost and available on commodity hardware. This paper describes an end-to-end HDR pipeline for capturing, encoding and streaming high-definition HDR video in real-time using off-the-shelf components. All the lighting that is captured by HDR-enabled consumer cameras is delivered via the pipeline to any display, including HDR displays and even mobile devices with minimum latency. The system thus provides an integrated HDR video pipeline that includes everything from capture to post-production, archival and storage, compression, transmission, and display.

  3. Acquisition of stereo panoramas for display in VR environments

    NASA Astrophysics Data System (ADS)

    Ainsworth, Richard A.; Sandin, Daniel J.; Schulze, Jurgen P.; Prudhomme, Andrew; DeFanti, Thomas A.; Srinivasan, Madhusudhanan

    2011-03-01

    Virtual reality systems are an excellent environment for stereo panorama displays. The acquisition and display methods described here combine high-resolution photography with surround vision and full stereo view in an immersive environment. This combination provides photographic stereo-panoramas for a variety of VR displays, including the StarCAVE, NexCAVE, and CORNEA. The zero parallax point used in conventional panorama photography is also the center of horizontal and vertical rotation when creating photographs for stereo panoramas. The two photographically created images are displayed on a cylinder or a sphere. The radius from the viewer to the image is set at approximately 20 feet, or at the object of major interest. A full stereo view is presented in all directions. The interocular distance, as seen from the viewer's perspective, displaces the two spherical images horizontally. This presents correct stereo separation in whatever direction the viewer is looking, even up and down. Objects at infinity will move with the viewer, contributing to an immersive experience. Stereo panoramas created with this acquisition and display technique can be applied without modification to a large array of VR devices having different screen arrangements and different VR libraries.

  4. Detection of fecal contamination on beef meat surfaces using handheld fluorescence imaging device (HFID)

    NASA Astrophysics Data System (ADS)

    Oh, Mirae; Lee, Hoonsoo; Cho, Hyunjeong; Moon, Sang-Ho; Kim, Eun-Kyung; Kim, Moon S.

    2016-05-01

    Current meat inspection in slaughter plants, for food safety and quality attributes including potential fecal contamination, is conducted through visual examination by human inspectors. A handheld fluorescence-based imaging device (HFID) was developed as an assistive tool for human inspectors, highlighting contaminated food and food-contact surfaces on a display monitor. It can be used under ambient lighting conditions in food processing plants. Critical components of the imaging device include four 405-nm 10-W LEDs for fluorescence excitation, a charge-coupled device (CCD) camera, an optical filter (670 nm for this study), and a Wi-Fi transmitter for broadcasting real-time video/images to monitoring devices such as smartphones and tablets. This study aimed to investigate the effectiveness of the HFID in enhancing visual detection of fecal contamination on red meat, fat, and bone surfaces of beef under varying ambient luminous intensities (0, 10, 30, 50 and 70 foot-candles). Overall, diluted feces on fat, red meat, and bone areas of beef surfaces were detectable in the 670-nm single-band fluorescence images when using the HFID under 0 to 50 foot-candle ambient lighting.

  5. A review of wearable technology in medicine.

    PubMed

    Iqbal, Mohammed H; Aydin, Abdullatif; Brunckhorst, Oliver; Dasgupta, Prokar; Ahmed, Kamran

    2016-10-01

    With rapid advances in technology, wearable devices have evolved and been adopted for various uses, ranging from simple devices used in aiding fitness to more complex devices used in assisting surgery. Wearable technology is broadly divided into head-mounted displays and body sensors. A broad search of the current literature revealed a total of 13 different body sensors and 11 head-mounted display devices. The latter have been reported for use in surgery (n = 7), imaging (n = 3), simulation and education (n = 2) and as navigation tools (n = 1). Body sensors have been used as vital signs monitors (n = 9) and as posture- and fitness-related devices (n = 4). Body sensors were found to have excellent functionality in aiding patient posture and rehabilitation, while head-mounted displays can provide information to surgeons while maintaining sterility during operative procedures. There is a potential role for head-mounted wearable technology and body sensors in medicine and patient care. However, there is little scientific evidence available proving that the application of such technologies improves patient satisfaction or care. Further studies need to be conducted before a clear conclusion can be drawn. © The Royal Society of Medicine.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hirata, Masaaki, E-mail: masaaki314@gmail.com; Watanabe, Ryouhei; Koyano, Yasuhiro

    Purpose: To demonstrate the use of “Smart Puncture,” a smartphone application to assist conventional CT-guided puncture without CT fluoroscopy, and to describe the advantages of this application. Materials and Methods: A puncture guideline is displayed by entering the angle into the application. Regardless of the angle at which the device is being held, the motion sensor ensures that the guideline is displayed at the appropriate angle with respect to gravity. The angle of the smartphone’s liquid crystal display (LCD) is also detected, preventing needle deflection from the CT slice image. Physicians can perform the puncture procedure by advancing the needle using the guideline while the smartphone is placed adjacent to the patient. In an experimental puncture test using a sponge as a target, the target was punctured at 30°, 50°, and 70° with the device tilted to 0°, 15°, 30°, and 45°. The punctured target was then imaged with a CT scan, and the puncture error was measured. Results: The mean puncture error in the plane parallel to the LCD was less than 2°, irrespective of device tilt. The mean puncture error in the sagittal plane was less than 3° with no device tilt. However, the mean puncture error tended to increase when the tilt was increased. Conclusion: This application can transform a smartphone into a valuable tool that is capable of objectively and accurately assisting CT-guided puncture procedures.
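
    The gravity compensation described above can be sketched as follows. This is a hypothetical reconstruction, not the app's actual code: the function name and the assumption that the accelerometer components lie in the screen plane are ours.

```python
import math

def guideline_screen_angle(target_angle_deg, accel_x, accel_y):
    """Screen angle (degrees) at which to draw the puncture guideline.

    Hypothetical reconstruction of the gravity compensation: accel_x and
    accel_y are accelerometer components in the screen plane, so
    atan2(accel_x, accel_y) is the device's tilt relative to gravity,
    which is subtracted to keep the guideline at the entered angle with
    respect to vertical.
    """
    device_tilt = math.degrees(math.atan2(accel_x, accel_y))
    return target_angle_deg - device_tilt

# Device upright (gravity along +y of the screen): guideline drawn at 30 deg.
upright = guideline_screen_angle(30.0, 0.0, 9.81)
# Device tilted by 15 deg: the on-screen guideline compensates to 15 deg,
# so it still points 30 deg from vertical in the room.
tilted = guideline_screen_angle(30.0, 9.81 * math.sin(math.radians(15)),
                                9.81 * math.cos(math.radians(15)))
```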

  7. Visualization of information with an established order

    DOEpatents

    Wong, Pak Chung [Richland, WA]; Foote, Harlan P [Richmond, WA]; Thomas, James J [Richland, WA]; Wong, Kwong-Kwok [Sugar Land, TX]

    2007-02-13

    Among the embodiments of the present invention is a system including one or more processors operable to access data representative of a biopolymer sequence of monomer units. The one or more processors are further operable to establish a pattern corresponding to at least one fractal curve and generate one or more output signals corresponding to a number of image elements each representative of one of the monomer units. Also included is a display device responsive to the one or more output signals to visualize the biopolymer sequence by displaying the image elements in accordance with the pattern.
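
    A Hilbert curve is one standard choice of fractal curve for such a layout, though the patent does not fix a particular curve. The sketch below maps an index along the curve to grid coordinates, so consecutive monomer units land in adjacent cells:

```python
def hilbert_d2xy(order, d):
    """Map index d along a Hilbert curve filling a 2**order x 2**order grid
    to (x, y) cell coordinates (standard iterative bit-twiddling form)."""
    x = y = 0
    t = d
    s = 1
    while s < (1 << order):
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:  # rotate/flip the quadrant to keep the curve continuous
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x, y = x + s * rx, y + s * ry
        t //= 4
        s *= 2
    return x, y

# Lay a toy 16-monomer sequence onto a 4 x 4 Hilbert curve: consecutive
# monomers always occupy adjacent grid cells.
layout = {hilbert_d2xy(2, i): base for i, base in enumerate("ACGTACGTACGTACGT")}
```

    The locality property (sequence neighbours stay spatial neighbours) is what makes space-filling curves attractive for visualizing very long biopolymer sequences in a compact image.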

  8. 3D interactive surgical visualization system using mobile spatial information acquisition and autostereoscopic display.

    PubMed

    Fan, Zhencheng; Weng, Yitong; Chen, Guowen; Liao, Hongen

    2017-07-01

    Three-dimensional (3D) visualization of preoperative and intraoperative medical information is becoming more and more important in minimally invasive surgery. We develop a 3D interactive surgical visualization system using mobile spatial information acquisition and autostereoscopic display that lets surgeons observe the surgical target intuitively. The spatial information of regions of interest (ROIs) is captured by the mobile device and transferred to a server for further image processing. Triangular patches of intraoperative data with texture are calculated with a dimension-reduced triangulation algorithm and a projection-weighted mapping algorithm. A point cloud selection-based warm-start iterative closest point (ICP) algorithm is also developed for fusion of the reconstructed 3D intraoperative image and the preoperative image. The fusion images are rendered for 3D autostereoscopic display using integral videography (IV) technology. Moreover, 3D visualization of the medical image corresponding to the observer's viewing direction is updated automatically using a mutual information registration method. Experimental results show that the spatial position error between the IV-based 3D autostereoscopic fusion image and the actual object was 0.38 ± 0.92 mm (n = 5). The system can be utilized in telemedicine, operative education, surgical planning, navigation, etc., to acquire spatial information conveniently and display surgical information intuitively. Copyright © 2017 Elsevier Inc. All rights reserved.
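
    The ICP fusion step can be illustrated with a minimal point-to-point ICP in two dimensions. This is a sketch only: the paper's warm-start variant seeds the initial transform from a point-cloud selection step, which is reduced here to optional init_r/init_t arguments.

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation r and translation t mapping src onto dst (Kabsch)."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    h = (src - c_src).T @ (dst - c_dst)
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))  # guard against reflections
    r = vt.T @ np.diag([1.0, d]) @ u.T
    return r, c_dst - r @ c_src

def icp(src, dst, init_r=None, init_t=None, iters=20):
    """Plain point-to-point ICP in 2-D with brute-force nearest neighbours.

    A warm start (as in the paper) would pass a pre-computed init_r/init_t
    instead of the identity used by default.
    """
    r = np.eye(2) if init_r is None else init_r
    t = np.zeros(2) if init_t is None else init_t
    for _ in range(iters):
        moved = src @ r.T + t
        # Match each moved source point to its nearest destination point.
        nn = dst[np.argmin(((moved[:, None] - dst[None]) ** 2).sum(-1), axis=1)]
        r_step, t_step = best_rigid_transform(moved, nn)
        r, t = r_step @ r, r_step @ t + t_step
    return r, t
```

    A good initial transform matters because the nearest-neighbour correspondences are only reliable when the clouds are already roughly aligned, which is exactly what a warm start buys.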

  9. Simple video format for mobile applications

    NASA Astrophysics Data System (ADS)

    Smith, John R.; Miao, Zhourong; Li, Chung-Sheng

    2000-04-01

    With the advent of pervasive computing, there is a growing demand for enabling multimedia applications on mobile devices. Large numbers of pervasive computing devices, such as personal digital assistants (PDAs), hand-held computers (HHCs), smart phones, portable audio players, automotive computing devices, and wearable computers are gaining access to online information sources. However, pervasive computing devices are often constrained along a number of dimensions, such as processing power, local storage, display size and depth, connectivity, and communication bandwidth, which makes it difficult to access rich image and video content. In this paper, we report on our initial efforts in designing a simple scalable video format with low decoding and transcoding complexity for pervasive computing. The goal is to enable image and video access for mobile applications such as electronic catalog shopping, video conferencing, remote surveillance and video mail using pervasive computing devices.

  10. Content-based image retrieval on mobile devices

    NASA Astrophysics Data System (ADS)

    Ahmad, Iftikhar; Abdullah, Shafaq; Kiranyaz, Serkan; Gabbouj, Moncef

    2005-03-01

    The area of content-based image retrieval holds tremendous potential for exploration and utilization, for researchers and industry alike, due to its promising results. Expeditious retrieval of desired images requires indexing of the content in large-scale databases along with extraction of low-level features based on the content of these images. With the recent advances in wireless communication technology and the availability of multimedia-capable phones, it has become vital to enable query operations in image databases and retrieve results based on image content. In this paper we present a content-based image retrieval system for mobile platforms, providing the capability of content-based query to any mobile device that supports the Java platform. The system consists of a light-weight client application running on a Java-enabled device and a server containing a servlet running inside a Java-enabled web server. The server responds to image queries using efficient native code operating on a selected image database. The client application, running on a mobile phone, initiates a query request, which is handled by the servlet to find the closest match to the queried image. The retrieved results are transmitted over the mobile network and the images are displayed on the mobile phone. We conclude that such a system serves as a basis for content-based information retrieval on wireless devices, and needs to cope with factors such as the constraints of hand-held devices and the reduced network bandwidth available in mobile environments.
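
    The server-side matching step can be sketched with a deliberately simple feature: a normalised grey-level histogram compared by L1 distance. The paper's actual low-level features and native matching code are not specified here, so this is a generic stand-in.

```python
import numpy as np

def grey_histogram(pixels, bins=16):
    """Normalised grey-level histogram: a minimal stand-in for the
    low-level features extracted on the server (not the paper's actual
    feature set)."""
    hist, _ = np.histogram(pixels, bins=bins, range=(0, 256))
    return hist / hist.sum()

def closest_match(query_pixels, database):
    """Key of the database image whose histogram is nearest (L1) to the query."""
    q = grey_histogram(query_pixels)
    return min(database,
               key=lambda name: np.abs(grey_histogram(database[name]) - q).sum())

# Toy database: a uniformly dark and a uniformly bright "image".
db = {"dark": np.full(100, 20), "bright": np.full(100, 230)}
match = closest_match(np.full(100, 25), db)  # nearest to the dark image
```

    Because only the compact histogram (not the image) is compared, the heavy work stays on the server, and the constrained handset only sends the query and renders the returned thumbnails.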

  11. Applying colour science in colour design

    NASA Astrophysics Data System (ADS)

    Luo, Ming Ronnier

    2006-06-01

    Although colour science has been widely used in a variety of industries over the years, it has not been fully explored in the field of product design. This paper will initially introduce the three main application fields of colour science: colour specification, colour-difference evaluation and colour appearance modelling. By integrating these advanced colour technologies with modern colour imaging devices such as displays, cameras, scanners and printers, computer systems have recently been developed to assist designers in designing colour palettes through colour selection by means of a number of widely used colour order systems, in creating harmonised colour schemes via a categorical colour system, in generating emotion colours using various colour emotional scales, and in facilitating colour naming via a colour-name library. All systems are also capable of providing accurate colour representation on displays and output to different imaging devices such as printers.

  12. Mobile visual communications and displays

    NASA Astrophysics Data System (ADS)

    Valliath, George T.

    2004-09-01

    The different types of mobile visual communication modes and the types of displays needed in cellular handsets are explored. The well-known 2-way video conferencing is only one of the possible modes. Some modes are already supported on current handsets, while others await the arrival of advanced network capabilities. Displays for devices that support these visual communication modes need to deliver the required visual experience. Over the last 20 years the display has grown in size while the rest of the handset has shrunk. However, the display is still not large enough: processor performance and network capabilities continue to outstrip the display's capability, making the display a bottleneck. This paper explores potential solutions for presenting a large image on a small handset.

  13. WE-E-12A-01: Medical Physics 1.0 to 2.0: MRI, Displays, Informatics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pickens, D; Flynn, M; Peck, D

    Medical Physics 2.0 is a bold vision for an existential transition of clinical imaging physics in the face of the new realities of value-based and evidence-based medicine, comparative effectiveness, and meaningful use. It speaks to how clinical imaging physics can expand beyond traditional insular models of inspection and acceptance testing, oriented toward compliance, towards team-based models of operational engagement, prospective definition and assurance of effective use, and retrospective evaluation of clinical performance. Organized as four sessions of the AAPM meeting, this particular session focuses on three specific modalities, as outlined below. MRI 2.0: This presentation will look into the future of clinical MR imaging and what the clinical medical physicist will need to be doing as the technology of MR imaging evolves. Many of the measurement techniques used today will need to be expanded to address the advent of higher field imaging systems and dedicated imagers for specialty applications. Included will be the need to address quality assurance and testing metrics for multi-channel MR imagers and hybrid devices such as MR/PET systems. New pulse sequences and acquisition methods, increasing use of MR spectroscopy, and real-time guidance procedures will place the burden on the medical physicist to define and use new tools to properly evaluate these systems, but the clinical applications must be understood so that these tools are used correctly. Finally, new rules, clinical requirements, and regulations will mean that the medical physicist must actively work to keep her/his sites compliant and must work closely with physicians to ensure best performance of these systems. Informatics Display 1.0 to 2.0: Medical displays are an integral part of medical imaging operation. 
The DICOM and AAPM (TG18) efforts have led to clear definitions of performance requirements of monochrome medical displays that can be followed by medical physicists to ensure proper performance. However, effective implementation of that oversight has been challenging due to the number and extent of medical displays in use at a facility. The advent of color displays and mobile displays has added additional challenges to the task of the medical physicist. This informatics display lecture first addresses the current display guidelines (the 1.0 paradigm) and further outlines the initiatives and prospects for color and mobile displays (the 2.0 paradigm). Informatics Management 1.0 to 2.0: Imaging informatics is part of every radiology practice today. Imaging informatics covers everything from the ordering of a study, through data acquisition and processing, display and archiving, to the reporting of findings and the billing for the services performed. The standardization of the processes used to manage the information, and methodologies to integrate these standards, are being developed and advanced continuously. These developments are done in an open forum, and imaging organizations and professionals all have a part in the process. In the Informatics Management presentation, the flow of information and the integration of the standards used in the processes will be reviewed. The role of radiologists and physicists in the process will be discussed. Current methods (the 1.0 paradigm) and evolving methods (the 2.0 paradigm) for validation of informatics systems function will also be discussed. Learning Objectives: Identify requirements for improving quality assurance and compliance tools for advanced and hybrid MRI systems. Identify the need for new quality assurance metrics and testing procedures for advanced systems. Identify new hardware systems and new procedures needed to evaluate MRI systems. 
Understand the components of current medical physics expectations for medical displays. Understand the role and prospect of medical physics for color and mobile display devices. Understand the different areas of imaging informatics and the methodology for developing informatics standards. Understand the current status of informatics standards, the role of physicists and radiologists in the process, and the current technology for validating the function of these systems.

  14. Progress and Prospects in Stretchable Electroluminescent Devices

    NASA Astrophysics Data System (ADS)

    Wang, Jiangxin; Lee, Pooi See

    2017-03-01

    Stretchable electroluminescent (EL) devices are a new form of mechanically deformable electronics that are gaining increasing interest and are believed to be one of the essential technologies for next-generation lighting and display applications. Beyond the simple bending capability of flexible EL devices, stretchable EL devices are required to withstand larger mechanical deformations and accommodate stretching strain beyond 10%. The excellent mechanical conformability of these devices enables their application in rigorous mechanical conditions such as flexing, twisting, stretching, and folding. Stretchable EL devices can be conformably wrapped onto arbitrary curvilinear surfaces and respond seamlessly to external or internal forces, leading to unprecedented applications that cannot be addressed with conventional technologies. For example, they are in demand for wide applications in biomedical devices and sensors and in soft interactive display systems, including activating devices for photosensitive drugs, imaging apparatus for internal tissues, electronic skins, interactive input and output devices, robotics, and volumetric displays. With increasingly stringent mechanical requirements, the fabrication of stretchable EL devices encounters many challenges that are difficult to resolve. In this review, recent progress in stretchable EL devices is covered, with a focus on the approaches adopted to tackle materials and process challenges in stretchable EL devices and to delineate the strategies of stretchable electronics. We first introduce the emission mechanisms that have been successfully demonstrated in stretchable EL devices. Limitations and advantages of the different mechanisms for stretchable EL devices are also discussed. Representative reports are reviewed based on different structural and material strategies. Unprecedented applications that have been enabled by stretchable EL devices are reviewed. 
Finally, we summarize with our perspectives on the approaches for the stretchable EL devices and our proposals on the future development in these devices.

  15. Fiber Optic Communication System For Medical Images

    NASA Astrophysics Data System (ADS)

    Arenson, Ronald L.; Morton, Dan E.; London, Jack W.

    1982-01-01

    This paper discusses a fiber optic communication system linking ultrasound devices, computerized tomography scanners, a nuclear medicine computer system, and a digital fluorographic system to a central radiology research computer. These centrally archived images are available for near-instantaneous recall at various display consoles. When a suitable laser optical disk is available for mass storage, more extensive image archiving will be added to the network, including digitized images of standard radiographs for comparison purposes and for remote display in such areas as the intensive care units, the operating room, and selected outpatient departments. This fiber optic system allows for the transfer of high-resolution images in less than a second over distances exceeding 2,000 feet. The advantages of using fiber optic cables instead of typical parallel or serial communication techniques will be described. The switching methodology and communication protocols will also be discussed.

  16. High-resolution imaging optomechatronics for precise liquid crystal display module bonding automated optical inspection

    NASA Astrophysics Data System (ADS)

    Ni, Guangming; Liu, Lin; Zhang, Jing; Liu, Juanxiu; Liu, Yong

    2018-01-01

    With the development of the liquid crystal display (LCD) module industry, LCD modules are becoming more precise and larger, which imposes demanding imaging requirements on automated optical inspection (AOI). Here, we report high-resolution, clearly focused imaging optomechatronics for precise LCD module bonding AOI. The system achieves high-resolution imaging for LCD module bonding AOI using a line scan camera (LSC) triggered by a linear optical encoder, and self-adaptive focusing over the whole large imaging region using the LSC and a laser displacement sensor, which reduces the requirements on machining, assembly, and motion control of AOI devices. Results show that this system can directly achieve clearly focused imaging for AOI of large LCD module bonding with 0.8 μm image resolution, a 2.65-mm scan imaging width, and a theoretically unlimited imaging width. All of these are significant for AOI in the LCD module industry and other fields that require imaging large regions at high resolution.

  17. Latency in Visionic Systems: Test Methods and Requirements

    NASA Technical Reports Server (NTRS)

    Bailey, Randall E.; Arthur, J. J., III; Williams, Steven P.; Kramer, Lynda J.

    2005-01-01

    A visionics device creates a pictorial representation of the external scene for the pilot. The ultimate objective of these systems may be to electronically generate a form of Visual Meteorological Conditions (VMC) to eliminate weather or time-of-day as an operational constraint and provide enhancement over actual visual conditions where eye-limiting resolution may be a limiting factor. Empirical evidence has shown that total system delays, or latencies, including those of the imaging sensors and display systems, can critically degrade their utility, usability, and acceptability. Definitions and measurement techniques are offered herein as common test and evaluation methods for latency testing in visionics device applications. Based upon available data, very different latency requirements are indicated depending upon the piloting task, the role the visionics device plays in this task, and the characteristics of the visionics cockpit display device, including its resolution, field-of-regard, and field-of-view. The least stringent latency requirements will involve Head-Up Display (HUD) applications, where the visionics imagery provides situational information as a supplement to symbology guidance and command information. Conversely, the visionics system latency requirement for a large field-of-view Head-Worn Display application, providing a Virtual-VMC capability from which the pilot will derive visual guidance, will be the most stringent, having a value as low as 20 msec.

  18. Automatic mobile device synchronization and remote control system for high-performance medical applications.

    PubMed

    Constantinescu, L; Kim, J; Chan, C; Feng, D

    2007-01-01

    The field of telemedicine is in need of generic solutions that harness the power of small, easily carried computing devices to increase efficiency and decrease the likelihood of medical errors. Our study set out to build a framework to bridge the gap between handheld and desktop solutions by developing an automated network protocol that wirelessly propagates application data and images prepared by a powerful workstation to handheld clients for storage, display and collaborative manipulation. To this end, we present the Mobile Active Medical Protocol (MAMP), a framework capable of almost effortlessly linking medical workstation solutions to corresponding control interfaces on handheld devices for remote storage, control and display. The ease of use, encapsulation and applicability of this automated solution are designed to provide significant benefits to the rapid development of telemedical solutions. Our results demonstrate that the design of this system allows an acceptable data transfer rate, a usable framerate for diagnostic solutions and enough flexibility to enable its use in a wide variety of cases. To this end, we also present a large-scale multi-modality image viewer as an example application based on the MAMP.

  19. Liquid crystal light valve technologies for display applications

    NASA Astrophysics Data System (ADS)

    Kikuchi, Hiroshi; Takizawa, Kuniharu

    2001-11-01

    The liquid crystal (LC) light valve, which is a spatial light modulator that uses LC material, is a very important device in the areas of display development, image processing, optical computing, holograms, etc. In particular, there have been dramatic developments in the past few years in the application of the LC light valve to projectors and other display technologies. Various LC operating modes have been developed, including thin film transistors, MOS-FETs and other active matrix drive techniques to meet the requirements for higher resolution, and substantial improvements have been achieved in the performance of optical systems, resulting in brighter display images. Given this background, the number of applications for the LC light valve has greatly increased. The resolution has increased from QVGA (320 x 240) to QXGA (2048 x 1536), or even a super-high resolution of eight million pixels. In the area of optical output, projectors of 600 to 13,000 lm are now available, and they are used for presentations, home theatres, electronic cinema and other diverse applications. Projectors using the LC light valve can display high-resolution images on large screens. They are now expected to be developed further as part of hyper-reality visual systems. This paper provides an overview of the needs for large-screen displays, human factors related to visual effects, the way in which LC light valves are applied to projectors, improvements in moving picture quality, and the results of the latest studies made to increase the quality of both still and moving images.

  20. Text analysis devices, articles of manufacture, and text analysis methods

    DOEpatents

    Turner, Alan E; Hetzler, Elizabeth G; Nakamura, Grant C

    2015-03-31

    Text analysis devices, articles of manufacture, and text analysis methods are described according to some aspects. In one aspect, a text analysis device includes a display configured to depict visible images, and processing circuitry coupled with the display and wherein the processing circuitry is configured to access a first vector of a text item and which comprises a plurality of components, to access a second vector of the text item and which comprises a plurality of components, to weight the components of the first vector providing a plurality of weighted values, to weight the components of the second vector providing a plurality of weighted values, and to combine the weighted values of the first vector with the weighted values of the second vector to provide a third vector.

  1. 76 FR 42136 - In the Matter of Certain Motion-Sensitive Sound Effects Devices and Image Display Devices and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-07-18

    ... the U.S. International Trade Commission on June 13, 2011, under section 337 of the Tariff Act of 1930... products containing same by reason of infringement of certain claims of U.S. Patent No. 6,150,947 (``the... submitted by the named respondents in accordance with section 210.13 of the Commission's Rules of Practice...

  2. Perception of 3D spatial relations for 3D displays

    NASA Astrophysics Data System (ADS)

    Rosen, Paul; Pizlo, Zygmunt; Hoffmann, Christoph; Popescu, Voicu S.

    2004-05-01

    We test perception of 3D spatial relations in 3D images rendered by a 3D display (Perspecta from Actuality Systems) and compare it to that of a high-resolution flat panel display. 3D images provide the observer with such depth cues as motion parallax and binocular disparity. Our 3D display is a device that renders a 3D image by displaying, in rapid succession, radial slices through the scene on a rotating screen. The image is contained in a glass globe and can be viewed from virtually any direction. In the psychophysical experiment, several families of 3D objects are used as stimuli: primitive shapes (cylinders and cuboids) and complex objects (multi-story buildings, cars, and pieces of furniture). Each object has at least one plane of symmetry. On each trial an object or its "distorted" version is shown at an arbitrary orientation. The distortion is produced by stretching the object in a random direction by 40%, which eliminates its symmetry. The subject's task is to decide whether or not the presented object is distorted, under several viewing conditions (monocular/binocular, with/without motion parallax, and near/far). The subject's performance is measured by the discriminability d', a conventional dependent variable in signal detection experiments.
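
    The discriminability index d' used above is computed from the hit and false-alarm rates as d' = z(H) − z(FA), where z is the inverse of the standard normal CDF:

```python
from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate):
    """Signal-detection discriminability: d' = z(hits) - z(false alarms)."""
    z = NormalDist().inv_cdf  # inverse standard-normal CDF
    return z(hit_rate) - z(false_alarm_rate)

# Chance performance gives d' = 0; 84% hits with 16% false alarms
# gives d' of roughly 2.
chance = d_prime(0.5, 0.5)
good = d_prime(0.84, 0.16)
```

    In this experiment a "hit" would be correctly reporting a distorted object as distorted, and a "false alarm" reporting an undistorted object as distorted; rates of exactly 0 or 1 must be nudged inward before applying z.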

  3. Nanometric holograms based on a topological insulator material.

    PubMed

    Yue, Zengji; Xue, Gaolei; Liu, Juan; Wang, Yongtian; Gu, Min

    2017-05-18

    Holography has extremely extensive applications in conventional optical instruments spanning optical microscopy and imaging, three-dimensional displays and metrology. To integrate holography with modern low-dimensional electronic devices, holograms need to be thinned to a nanometric scale. However, to keep a pronounced phase shift modulation, the thickness of holograms has been generally limited to the optical wavelength scale, which hinders their integration with ultrathin electronic devices. Here, we break this limit and achieve 60 nm holograms using a topological insulator material. We discover that nanometric topological insulator thin films act as an intrinsic optical resonant cavity due to the unequal refractive indices in their metallic surfaces and bulk. The resonant cavity leads to enhancement of phase shifts and thus the holographic imaging. Our work paves a way towards integrating holography with flat electronic devices for optical imaging, data storage and information security.

  4. Collision judgment when viewing minified images through a HMD visual field expander

    NASA Astrophysics Data System (ADS)

    Luo, Gang; Lichtenstein, Lee; Peli, Eli

    2007-02-01

    Purpose: Patients with tunnel vision have great difficulties in mobility. We have developed an augmented vision head-mounted device that can provide patients a 5x expanded field of view by superimposing minified edge images of a wider field, captured by a miniature video camera, over the natural view seen through the display. In the minified display, objects appear closer to the heading direction than they really are. This might cause users to overestimate collision risks, and therefore to perform unnecessary obstacle-avoidance maneuvers. A study was conducted in a virtual environment to test the impact of the minified view on collision judgment. Methods: Simulated scenes were presented to subjects as if they were walking in a shopping mall corridor. Subjects reported whether they would make any contact with stationary obstacles that appeared at variable distances from their walking path. Perceived safe passing distance (PSPD) was calculated by finding the transition point from reports of yes to no. Decision uncertainty was quantified by the sharpness of the transition. Collision envelope (CE) size was calculated by summing the PSPDs for the left and right sides. Ten normally sighted subjects were tested (1) when not using the device and with one eye patched, and (2) when the see-through view of the device was blocked and only minified images were visible. Results: The use of the 5x minification device caused only an 18% increase in CE (13 cm, p=0.048). No significant impact of the device on judgment uncertainty was found (p=0.089). Conclusion: Minification had only a small impact on collision judgment. This supports the use of such a minifying device as an effective field expander for patients with tunnel vision.
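
    The PSPD transition point can be sketched as the distance at which the proportion of "would contact" reports crosses 50%. Linear interpolation is our simplified stand-in for the paper's transition-point estimate; the exact fitting procedure is not given in the abstract.

```python
import numpy as np

def pspd(distances, prop_yes):
    """Perceived safe passing distance: the obstacle distance at which the
    proportion of 'would contact' reports crosses 50%.

    Simplified stand-in using linear interpolation; prop_yes is assumed to
    decrease with distance, and the steepness of the crossing indexes the
    subject's decision uncertainty.
    """
    d = np.asarray(distances, dtype=float)
    p = np.asarray(prop_yes, dtype=float)
    # np.interp needs increasing x-values, so reverse the decreasing curve.
    return float(np.interp(0.5, p[::-1], d[::-1]))

# Collision envelope: sum of the left- and right-side PSPDs.
left = pspd([0.2, 0.4, 0.6, 0.8, 1.0], [1.0, 0.9, 0.5, 0.1, 0.0])
right = pspd([0.2, 0.4, 0.6, 0.8, 1.0], [1.0, 0.8, 0.4, 0.2, 0.0])
envelope = left + right
```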

  5. Color display and encryption with a plasmonic polarizing metamirror

    NASA Astrophysics Data System (ADS)

    Song, Maowen; Li, Xiong; Pu, Mingbo; Guo, Yinghui; Liu, Kaipeng; Yu, Honglin; Ma, Xiaoliang; Luo, Xiangang

    2018-01-01

    Structural colors emerge when a particular wavelength range is filtered out from a broadband light source. Structural coloration is regarded as a valuable platform for color display and digital imaging due to its environmental friendliness, high visibility, and durability. However, current devices capable of generating colors are all based on direct transmission or reflection. Material loss, thick configurations, and the lack of tunability hinder their transition to practical applications. In this paper, a novel mechanism that generates high-purity colors by photon spin restoration on an ultrashallow plasmonic grating is proposed. We fabricated the sample by interference lithography and experimentally observed full color display, tunable color logo imaging, and chromatic sensing. The unique combination of high efficiency, high-purity colors, tunable chromatic display, ultrathin structure, and ease of fabrication makes this design an easy way to bridge the gap between theoretical investigations and daily-life applications.

  6. Illuminant-adaptive color reproduction for mobile display

    NASA Astrophysics Data System (ADS)

    Kim, Jong-Man; Park, Kee-Hyon; Kwon, Oh-Seol; Cho, Yang-Ho; Ha, Yeong-Ho

    2006-01-01

    This paper proposes an illuminant-adaptive reproduction method using light adaptation and flare conditions for a mobile display. Mobile displays, such as PDAs and cellular phones, are viewed under various lighting conditions. In particular, images displayed in daylight are perceived as quite dark due to the light adaptation of the human visual system, as the luminance of a mobile display is considerably lower than that of an outdoor environment. In addition, flare phenomena shrink the color gamut of a mobile display by raising the luminance of dark areas and de-saturating the chroma. Therefore, this paper presents an enhancement method composed of lightness enhancement and chroma compensation. First, the ambient light intensity is measured using a lux sensor, and the flare is then calculated from the reflection ratio of the display device and the ambient light intensity. The relative cone response is nonlinear in the input luminance and also changes with the ambient light intensity. Thus, to improve the perceived image, the displayed luminance is enhanced by lightness linearization: the image's luminance is transformed by linearizing the response to the input luminance according to the ambient light intensity. Next, the displayed image is compensated for the physically reduced chroma resulting from the flare phenomena. The reduced chroma value is calculated from the flare at each intensity. The chroma compensation that maintains the original image's chroma is applied differently for each hue plane, as flare affects each hue plane differently. The enhanced chroma is also constrained by the gamut boundary. Based on experimental observations, outdoor illuminance generally ranges from 1,000 lux to 30,000 lux. Thus, for an outdoor environment, i.e., greater than 1,000 lux, this study presents a color reproduction method based on an inverse cone response curve and the flare condition. Consequently, the proposed algorithm improves the quality of the perceived image adaptively for an outdoor environment.
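
    The flare term described above (reflection ratio of the display times ambient light intensity) can be approximated by treating the display face as a Lambertian reflector, so that L_flare = ρE/π converts illuminance in lux to added luminance in cd/m². The sketch below uses hypothetical display parameters and shows only this generic model, not the paper's exact formulation.

```python
import math

def flare_luminance(ambient_lux, reflectance):
    """Flare added by ambient light, modelling the display face as a
    Lambertian reflector: L = rho * E / pi  (lux -> cd/m^2)."""
    return reflectance * ambient_lux / math.pi

def effective_contrast(l_white, l_black, ambient_lux, reflectance):
    """Contrast ratio after flare is added to both white and black levels."""
    lf = flare_luminance(ambient_lux, reflectance)
    return (l_white + lf) / (l_black + lf)

# Hypothetical mobile display: 300 cd/m^2 white, 0.5 cd/m^2 black, 4% reflectance.
indoor = effective_contrast(300, 0.5, 100, 0.04)     # roughly 170:1
outdoor = effective_contrast(300, 0.5, 30000, 0.04)  # collapses to roughly 1.8:1
```

    The collapse of contrast at 30,000 lux illustrates why the chroma of dark regions must be compensated outdoors.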

  7. A cloud-based multimodality case file for mobile devices.

    PubMed

    Balkman, Jason D; Loehfelm, Thomas W

    2014-01-01

    Recent improvements in Web and mobile technology, along with the widespread use of handheld devices in radiology education, provide unique opportunities for creating scalable, universally accessible, portable image-rich radiology case files. A cloud database and a Web-based application for radiologic images were developed to create a mobile case file with reasonable usability, download performance, and image quality for teaching purposes. A total of 75 radiology cases related to breast, thoracic, gastrointestinal, musculoskeletal, and neuroimaging subspecialties were included in the database. Breast imaging cases are the focus of this article, as they best demonstrate handheld display capabilities across a wide variety of modalities. This case subset also illustrates methods for adapting radiologic content to cloud platforms and mobile devices. Readers will gain practical knowledge about storage and retrieval of cloud-based imaging data, an awareness of techniques used to adapt scrollable and high-resolution imaging content for the Web, and an appreciation for optimizing images for handheld devices. The evaluation of this software demonstrates the feasibility of adapting images from most imaging modalities to mobile devices, even in cases of full-field digital mammograms, where high resolution is required to represent subtle pathologic features. The cloud platform allows cases to be added and modified in real time by using only a standard Web browser with no application-specific software. Challenges remain in developing efficient ways to generate, modify, and upload radiologic and supplementary teaching content to this cloud-based platform. Online supplemental material is available for this article. ©RSNA, 2014.

  8. A Smart Spoofing Face Detector by Display Features Analysis.

    PubMed

    Lai, ChinLun; Tai, ChiuYuan

    2016-07-21

    In this paper, a smart face liveness detector is proposed to prevent a biometric system from being "deceived" by a video or picture of a valid user that a counterfeiter took with a high-definition handheld device (e.g., an iPad with Retina display). By analyzing the characteristics of the display platform and using an expert decision-making core, we can effectively detect whether a spoofing action comes from a fake face shown on a high-definition display by verifying the chromaticity regions in the captured face. That is, a live or spoofed face can be distinguished precisely by the designed optical image sensor. In sum, with the proposed method/system, a normal optical image sensor can be upgraded to a more powerful version that detects spoofing actions. The experimental results show that the proposed detection system achieves a very high detection rate compared to existing methods and is thus practical to implement directly in authentication systems.

  9. Dual-mode optical microscope based on single-pixel imaging

    NASA Astrophysics Data System (ADS)

    Rodríguez, A. D.; Clemente, P.; Tajahuerce, E.; Lancis, J.

    2016-07-01

    We demonstrate an inverted microscope that can image specimens in both reflection and transmission modes simultaneously with a single light source. The microscope utilizes a digital micromirror device (DMD) for patterned illumination, together with two single-pixel photosensors for efficient light detection. The system, a scan-less device with no moving parts, works by sequentially projecting onto the sample a set of binary intensity patterns encoded on a modified commercial DMD. Data to be displayed are geometrically transformed before being written into a memory cell, to cancel optical artifacts arising from the diamond-shaped structure of the micromirror array. The 24-bit color depth of the display is fully exploited to increase the frame rate by a factor of 24, which makes the technique practicable for real samples. Our commercial DMD-based LED illumination is cost-effective and can easily be coupled as an add-on module to existing inverted microscopes. The reflection and transmission information provided by our dual microscope complement each other and can be useful for imaging non-uniform samples and for preventing self-shadowing effects.
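
    The 24-fold frame-rate gain mentioned above comes from packing one binary illumination pattern into each bit plane of a 24-bit RGB frame, so that a single displayed image carries 24 patterns. A minimal sketch of such bit-plane packing (the DMD driver and timing details are assumptions, not taken from the paper):

```python
import numpy as np

def pack_patterns(patterns):
    """Pack 24 binary patterns of shape (24, H, W) into one 24-bit RGB
    frame of shape (H, W, 3): pattern i goes into bit (i % 8) of
    channel (i // 8)."""
    assert patterns.shape[0] == 24
    rgb = np.zeros(patterns.shape[1:] + (3,), dtype=np.uint8)
    for i in range(24):
        channel, bit = divmod(i, 8)
        rgb[..., channel] |= (patterns[i].astype(np.uint8) & 1) << bit
    return rgb

def unpack_patterns(rgb):
    """Inverse of pack_patterns: recover the 24 binary planes."""
    planes = np.empty((24,) + rgb.shape[:2], dtype=np.uint8)
    for i in range(24):
        channel, bit = divmod(i, 8)
        planes[i] = (rgb[..., channel] >> bit) & 1
    return planes

# Round-trip check on random binary patterns.
pats = np.random.default_rng(0).integers(0, 2, size=(24, 4, 4), dtype=np.uint8)
recovered = unpack_patterns(pack_patterns(pats))
```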

  10. 75 FR 8115 - In the Matter of Certain Electronic Devices Having Image Capture or Display Functionality and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-02-23

    ... 14157 (2009). The complainant named Eastman Kodak Company of Rochester, New York (``Kodak'') as the respondent. On December 16, 2009, LG and Kodak jointly moved to terminate the investigation based on a...

  11. 78 FR 73562 - Certain Ground Fault Circuit Interrupters and Products Containing Same Final Commission...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-12-06

    ... of cease and desist orders. See Certain Digital Photo Frames and Image Display Devices and Components... articles in question during the period of Presidential review (19 U.S.C. 1337(j)). The Commission's orders...

  12. X-Ray Imaging System

    NASA Technical Reports Server (NTRS)

    1986-01-01

    The FluoroScan Imaging System is a high resolution, low radiation device for viewing stationary or moving objects. It resulted from NASA technology developed for x-ray astronomy and Goddard's application of that technology to a low intensity x-ray imaging scope. FluoroScan Imaging Systems, Inc. (formerly HealthMate, Inc.), a NASA licensee, further refined the FluoroScan System. It is used for examining fractures, placement of catheters, and in veterinary medicine. Its major components include an x-ray generator, scintillator, visible light image intensifier and video display. It is small, light and maneuverable.

  13. Completion of a Hospital-Wide Comprehensive Image Management and Communication System

    NASA Astrophysics Data System (ADS)

    Mun, Seong K.; Benson, Harold R.; Horii, Steven C.; Elliott, Larry P.; Lo, Shih-Chung B.; Levine, Betty A.; Braudes, Robert E.; Plumlee, Gabriel S.; Garra, Brian S.; Schellinger, Dieter; Majors, Bruce; Goeringer, Fred; Kerlin, Barbara D.; Cerva, John R.; Ingeholm, Mary-Lou; Gore, Tim

    1989-05-01

    A comprehensive image management and communication (IMAC) network has been installed at Georgetown University Hospital for an extensive clinical evaluation. The network is based on the AT&T CommView system and it includes interfaces to 12 imaging devices, 15 workstations (inside and outside of the radiology department), a teleradiology link to an imaging center, an optical jukebox and a number of advanced image display and processing systems such as Sun workstations, PIXAR, and PIXEL. Details of network configuration and its role in the evaluation project are discussed.

  14. A medical application integrating remote 3D visualization tools to access picture archiving and communication system on mobile devices.

    PubMed

    He, Longjun; Ming, Xing; Liu, Qian

    2014-04-01

    With computing capability and display size growing, the mobile device has been used as a tool to help clinicians view patient information and medical images anywhere and anytime. However, for direct interactive 3D visualization, which plays an important role in radiological diagnosis, the mobile device cannot provide a satisfactory quality of experience for radiologists. This paper developed a medical system that can retrieve medical images from the picture archiving and communication system (PACS) on the mobile device over the wireless network. In the proposed application, the mobile device obtained patient information and medical images through a proxy server connected to the PACS server. Meanwhile, the proxy server integrated a range of 3D visualization techniques, including maximum intensity projection, multi-planar reconstruction and direct volume rendering, to provide shape, brightness, depth and location information generated from the original sectional images for radiologists. Furthermore, an algorithm that changes remote render parameters automatically to adapt to the network status was employed to improve the quality of experience. Finally, performance issues regarding the remote 3D visualization of medical images over the wireless network in the proposed application were also discussed. The results demonstrated that this proposed medical application could provide a smooth interactive experience over WLAN and 3G networks.
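
    Of the server-side 3D techniques listed above, maximum intensity projection is the simplest: each output pixel keeps the brightest voxel along its viewing ray. A minimal sketch on a synthetic volume (this is the generic operation, not the paper's renderer):

```python
import numpy as np

def max_intensity_projection(volume, axis=0):
    """Maximum intensity projection: for each ray along `axis`, keep the
    brightest voxel. A common rendering mode for CT/MR angiography."""
    return volume.max(axis=axis)

# Tiny synthetic volume (depth, rows, cols) with two bright voxels.
vol = np.zeros((3, 2, 2))
vol[1, 0, 0] = 5.0
vol[2, 1, 1] = 7.0
mip = max_intensity_projection(vol)  # 2x2 projection image
```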

  15. Design and testing of artifact-suppressed adaptive histogram equalization: a contrast-enhancement technique for display of digital chest radiographs.

    PubMed

    Rehm, K; Seeley, G W; Dallas, W J; Ovitt, T W; Seeger, J F

    1990-01-01

    One of the goals of our research in the field of digital radiography has been to develop contrast-enhancement algorithms for eventual use in the display of chest images on video devices with the aim of preserving the diagnostic information presently available with film, some of which would normally be lost because of the smaller dynamic range of video monitors. The ASAHE algorithm discussed in this article has been tested by investigating observer performance in a difficult detection task involving phantoms and simulated lung nodules, using film as the output medium. The results of the experiment showed that the algorithm is successful in providing contrast-enhanced, natural-looking chest images while maintaining diagnostic information. The algorithm did not effect an increase in nodule detectability, but this was not unexpected because film is a medium capable of displaying a wide range of gray levels. It is sufficient at this stage to show that there is no degradation in observer performance. Future tests will evaluate the performance of the ASAHE algorithm in preparing chest images for video display.
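
    ASAHE belongs to the histogram-equalization family of contrast-enhancement methods, in which gray levels are remapped through the normalized cumulative histogram. The sketch below shows only the basic global equalization step; ASAHE itself applies this kind of mapping locally, with additional clipping to suppress artifacts, which is not reproduced here.

```python
import numpy as np

def equalize_histogram(img, levels=256):
    """Global histogram equalization: map each gray level through the
    normalized cumulative histogram so the output uses the full range.
    `img` must be an integer array with values in [0, levels)."""
    hist = np.bincount(img.ravel(), minlength=levels)
    cdf = hist.cumsum().astype(float)
    cdf_min = cdf[cdf > 0].min()           # first occupied level
    norm = (cdf - cdf_min) / (cdf[-1] - cdf_min)
    lut = np.round(norm * (levels - 1)).astype(np.uint8)
    return lut[img]

# Toy 4-level image: the mapping stretches it to span the full range.
img = np.array([[0, 0, 1], [1, 2, 3]], dtype=np.uint8)
out = equalize_histogram(img, levels=4)
```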

  16. Research on self-calibration biaxial autocollimator based on ZYNQ

    NASA Astrophysics Data System (ADS)

    Guo, Pan; Liu, Bingguo; Liu, Guodong; Zhong, Yao; Lu, Binghui

    2018-01-01

    Autocollimators are mainly based on computers or electronic devices that can be connected to the internet; their precision, measurement range and resolution are all limited, and external displays are needed to show images in real time. Moreover, no autocollimator on the market offers real-time calibration. In this paper, we propose a biaxial autocollimator based on the ZYNQ embedded platform to solve these problems. First, the traditional optical system is improved and a light path is added for real-time calibration. Then, to improve measurement speed, an embedded platform based on ZYNQ that combines the Linux operating system with the autocollimator is designed. In this part, image acquisition, image processing, image display and a man-machine interface based on Qt are implemented. Finally, the system realizes two-dimensional small-angle measurement. Experimental results showed that the proposed method can improve angle measurement accuracy. The standard deviation at close distance (1.5 m) is 0.15" in the horizontal direction of the image and 0.24" in the vertical direction; the repeatability of measurement at long distance (10 m) is improved by 0.12 in the horizontal direction and 0.3 in the vertical direction.
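
    The small-angle measurement above rests on the standard autocollimation principle: a mirror tilt θ deflects the returned beam by 2θ, shifting the focused spot on the sensor by d = f·tan(2θ). A sketch of the inversion, with hypothetical focal length and spot-shift values (the paper's optics are not specified at this level of detail):

```python
import math

ARCSEC_PER_RAD = 206265.0

def tilt_angle_arcsec(spot_shift_mm, focal_length_mm):
    """Recover the mirror tilt from the spot displacement on the sensor.
    Inverts d = f * tan(2*theta), so theta = atan(d / f) / 2."""
    return 0.5 * math.atan(spot_shift_mm / focal_length_mm) * ARCSEC_PER_RAD

# Hypothetical: a 1 um spot shift with a 300 mm objective is a sub-arcsecond tilt.
theta = tilt_angle_arcsec(0.001, 300.0)
```

    The sub-arcsecond sensitivity of this geometry is why sub-pixel centroiding of the spot image matters for the reported 0.15" repeatability.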

  17. High color fidelity thin film multilayer systems for head-up display use

    NASA Astrophysics Data System (ADS)

    Tsou, Yi-Jen D.; Ho, Fang C.

    1996-09-01

    Head-up displays are increasingly being adopted in automotive vehicles for indication and position/navigation purposes. An optical combiner, which allows the driver to receive image information from both outside and inside the automobile, is the essential part of this display device. Two multilayer thin film combiner coating systems with distinctive polarization selectivity and broadband spectral neutrality are discussed. One of the coating systems was designed to be located at the lower portion of the windshield. The coating reduced exterior glare by approximately 45% and provided about 70% average see-through transmittance in addition to the interior information display. The other coating system was designed to be integrated with the sunshield located at the upper portion of the windshield. The coating reflected the interior information display while reducing direct sunlight penetration to 25%. Color fidelity for both interior and exterior images was maintained in both systems. This facilitated the display of full-color maps. Both coating systems were absorptionless and environmentally durable. Designs, fabrication, and performance of these coating systems are addressed.

  18. A service protocol for post-processing of medical images on the mobile device

    NASA Astrophysics Data System (ADS)

    He, Longjun; Ming, Xing; Xu, Lang; Liu, Qian

    2014-03-01

    With computing capability and display size growing, the mobile device has been used as a tool to help clinicians view patient information and medical images anywhere and anytime. It is difficult and time-consuming to transfer medical images with large data sizes from a picture archiving and communication system to a mobile client, since the wireless network is unstable and limited by bandwidth. Besides, limited by computing capability, memory and battery endurance, it is hard to provide a satisfactory quality of experience for radiologists handling complex post-processing of medical images on the mobile device, such as real-time direct interactive three-dimensional visualization. In this work, remote rendering technology is employed to implement the post-processing of medical images instead of local rendering, and a service protocol is developed to standardize the communication between the render server and the mobile client. In order to allow mobile devices with different platforms to access post-processing of medical images, the Extensible Markup Language is used to describe this protocol, which contains four main parts: user authentication, medical image query/retrieval, 2D post-processing (e.g., window leveling, pixel value retrieval) and 3D post-processing (e.g., maximum intensity projection, multi-planar reconstruction, curved planar reformation and direct volume rendering). An instance was then implemented to verify the protocol. This instance allows a mobile device to access post-processing services for medical images on the render server via a client application or a web page.
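
    The XML service protocol described above can be pictured as request documents exchanged between the mobile client and the render server. The element and attribute names below are hypothetical, since the abstract does not give the actual schema; the sketch only illustrates building and parsing a window-leveling request with Python's standard library.

```python
import xml.etree.ElementTree as ET

# Client side: build a hypothetical window-leveling request.
request = ET.Element("PostProcessingRequest")
ET.SubElement(request, "User", token="SESSION-TOKEN")
ET.SubElement(request, "Series", uid="1.2.840.9999.1")
op = ET.SubElement(request, "Operation", type="WindowLevel")
ET.SubElement(op, "Center").text = "40"   # soft-tissue window centre (HU)
ET.SubElement(op, "Width").text = "400"   # window width (HU)
xml_bytes = ET.tostring(request)

# Server side: parse the request and extract the rendering parameters.
parsed = ET.fromstring(xml_bytes)
center = int(parsed.find("./Operation/Center").text)
width = int(parsed.find("./Operation/Width").text)
```

    Describing operations declaratively like this is what lets clients on different platforms share one protocol, as the paper argues.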

  19. The effect of time in use on the display performance of the iPad.

    PubMed

    Caffery, Liam J; Manthey, Kenneth L; Sim, Lawrence H

    2016-07-01

    The aim of this study was to evaluate changes to the luminance, luminance uniformity and conformance to the Digital Imaging and Communications in Medicine (DICOM) greyscale standard display function (GSDF) as a function of time in use for the iPad. Luminance measurements of the American Association of Physicists in Medicine (AAPM) Task Group 18 (TG18) luminance uniformity and luminance test patterns (TG18-UNL and TG18-LN8) were performed using a calibrated near-range luminance meter. Nine sets of measurements were taken, with the time in use of the iPad ranging from 0 to 2500 h. The maximum luminance (Lmax) of the display decreased (367 to 338 cd/m²) as a function of time. The minimum luminance remained constant. The maximum non-uniformity coefficient was 11%. Luminance uniformity decreased slightly as a function of time in use. The conformance of the iPad deviated from the GSDF curve at commencement of use. Deviation did not increase as a function of time in use. This study has demonstrated that the iPad display exhibits luminance degradation typical of liquid crystal displays. The Lmax of the iPad fell below the American College of Radiology-AAPM-Society for Imaging Informatics in Medicine recommendation for primary displays (>350 cd/m²) at approximately 1000 h in use. The Lmax recommendation for secondary displays (>250 cd/m²) was exceeded during the entire study. The maximum non-uniformity coefficient did not exceed the recommendations for either primary or secondary displays. The deviation from the GSDF exceeded the recommendations of TG18 for use as either a primary or secondary display. The brightness, uniformity and contrast response are reasonably stable over the useful lifetime of the device; however, the device fails to meet the contrast response standard for either a primary or secondary display.
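
    The non-uniformity coefficient reported above is conventionally computed from TG18-UNL measurements as 200·(Lmax − Lmin)/(Lmax + Lmin), evaluated over five points (centre plus four corners). The readings below are hypothetical, since the study's raw measurements are not given in the abstract.

```python
def nonuniformity_coefficient(luminances):
    """Luminance non-uniformity across measurement points, in percent:
    200 * (Lmax - Lmin) / (Lmax + Lmin)."""
    lmax, lmin = max(luminances), min(luminances)
    return 200.0 * (lmax - lmin) / (lmax + lmin)

# Hypothetical five-point readings (cd/m^2) for a tablet display:
readings = [338.0, 310.0, 305.0, 312.0, 301.0]
nuc = nonuniformity_coefficient(readings)  # a bit under 12%
```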

  20. Exploring interaction with 3D volumetric displays

    NASA Astrophysics Data System (ADS)

    Grossman, Tovi; Wigdor, Daniel; Balakrishnan, Ravin

    2005-03-01

    Volumetric displays generate true volumetric 3D images by actually illuminating points in 3D space. As a result, viewing their contents is similar to viewing physical objects in the real world. These displays provide a 360 degree field of view, and do not require the user to wear hardware such as shutter glasses or head-trackers. These properties make them a promising alternative to traditional display systems for viewing imagery in 3D. Because these displays have only recently been made available commercially (e.g., www.actuality-systems.com), their current use tends to be limited to non-interactive output-only display devices. To take full advantage of the unique features of these displays, however, it would be desirable if the 3D data being displayed could be directly interacted with and manipulated. We investigate interaction techniques for volumetric display interfaces, through the development of an interactive 3D geometric model building application. While this application area itself presents many interesting challenges, our focus is on the interaction techniques that are likely generalizable to interactive applications for other domains. We explore a very direct style of interaction where the user interacts with the virtual data using direct finger manipulations on and around the enclosure surrounding the displayed 3D volumetric image.

  1. Progress in 3D imaging and display by integral imaging

    NASA Astrophysics Data System (ADS)

    Martinez-Cuenca, R.; Saavedra, G.; Martinez-Corral, M.; Pons, A.; Javidi, B.

    2009-05-01

    Three-dimensionality is currently considered an important added value in imaging devices, and therefore the search for an optimum 3D imaging and display technique is a hot topic attracting substantial research effort. As their main added value, 3D monitors should provide observers with different perspectives of a 3D scene simply by varying the head position. Three-dimensional imaging techniques have the potential to establish a future mass market in the fields of entertainment and communications. Integral imaging (InI), which can capture true 3D color images, has been seen as the right technology for 3D viewing by audiences of more than one person. Given its advanced degree of development, InI technology could be ready for commercialization in the coming years. This development is the result of a strong research effort performed over the past few years by many groups. Since integral imaging is still an emerging technology, the first aim of the "3D Imaging and Display Laboratory" at the University of Valencia has been a thorough study of the principles that govern its operation. It is remarkable that some of these principles were first recognized and characterized by our group. Other contributions of our research have addressed some of the classical limitations of InI systems, such as the limited depth of field (in pickup and in display), the poor axial and lateral resolution, the pseudoscopic-to-orthoscopic conversion, the production of 3D images with continuous relief, and the limited range of viewing angles of InI monitors.

  2. Exploring virtual reality technology and the Oculus Rift for the examination of digital pathology slides.

    PubMed

    Farahani, Navid; Post, Robert; Duboy, Jon; Ahmed, Ishtiaque; Kolowitz, Brian J; Krinchai, Teppituk; Monaco, Sara E; Fine, Jeffrey L; Hartman, Douglas J; Pantanowitz, Liron

    2016-01-01

    Digital slides obtained from whole slide imaging (WSI) platforms are typically viewed in two dimensions using desktop personal computer monitors or, more recently, on mobile devices. To the best of our knowledge, no studies have examined viewing digital pathology slides in a virtual reality (VR) environment. VR technology enables users to be artificially immersed in and interact with a computer-simulated world. Oculus Rift is among the world's first consumer-targeted VR headsets, intended primarily for enhanced gaming. Our aim was to explore the use of the Oculus Rift for examining digital pathology slides in a VR environment. An Oculus Rift Development Kit 2 (DK2) was connected to a 64-bit computer running Virtual Desktop software. Glass slides from twenty randomly selected lymph node cases (ten with benign and ten with malignant diagnoses) were digitized using a WSI scanner. Three pathologists reviewed these digital slides on a 27-inch 5K display and with the Oculus Rift after a 2-week washout period. Recorded endpoints included concordance of final diagnoses and time required to examine slides. The pathologists also rated their ease of navigation, image quality, and diagnostic confidence for both modalities. There was 90% diagnostic concordance between reviewing WSI on the 5K display and with the Oculus Rift. The time required to examine digital pathology slides on the 5K display averaged 39 s (range 10-120 s), compared to 62 s with the Oculus Rift (range 15-270 s). All pathologists confirmed that digital pathology slides were easily viewable in a VR environment. The ratings for image quality and diagnostic confidence were higher when using the 5K display. Using the Oculus Rift DK2 to view and navigate pathology whole slide images in a virtual environment is feasible for diagnostic purposes. However, image resolution with the Oculus Rift device was limited. Interactive VR technologies such as the Oculus Rift are novel tools that may be of use in digital pathology.

  3. Augmented reality on poster presentations, in the field and in the classroom

    NASA Astrophysics Data System (ADS)

    Hawemann, Friedrich; Kolawole, Folarin

    2017-04-01

    Augmented reality (AR) is the direct addition of virtual information to a real-world environment through an interface. In practice, through a mobile device such as a tablet or smartphone, information can be projected onto a target, for example an image on a poster. Mobile devices are so widely distributed today that augmented reality is easily accessible to almost everyone. Numerous studies have shown that multi-dimensional visualization is essential for efficient perception of the spatial, temporal and geometrical configuration of geological structures and processes. Print media such as posters and handouts lack the ability to display content in the third and fourth dimensions, whether in the space domain, as seen in three-dimensional (3-D) objects, or in the time domain (four-dimensional, 4-D), expressible in the form of videos. Here, we show that augmented reality content can complement geoscience poster presentations and hands-on material, and can be used in the field. In the latter case, location-based data is loaded so that, for example, a virtual geological profile can be draped over a real-world landscape. In object-based AR, the application is trained to recognize an image or object through the camera of the user's mobile device, such that specific content is automatically downloaded, displayed on the screen of the device, and positioned relative to the trained image or object. We used ZapWorks, a commercially available software application, to create and present examples of poster-based content in which important supplementary information is presented as interactive virtual images, videos and 3-D models. We suggest that the flexibility and real-time interactivity offered by AR make it an invaluable tool for effective geoscience poster presentation and for classroom and field geoscience learning.

  4. Invisibility cloak with image projection capability

    PubMed Central

    Banerjee, Debasish; Ji, Chengang; Iizuka, Hideo

    2016-01-01

    Investigations of invisibility cloaks have been led by rigorous theories and such cloak structures, in general, require extreme material parameters. Consequently, it is challenging to realize them, particularly in the full visible region. Due to the insensitivity of human eyes to the polarization and phase of light, cloaking a large object in the full visible region has been recently realized by a simplified theory. Here, we experimentally demonstrate a device concept where a large object can be concealed in a cloak structure and at the same time any images can be projected through it by utilizing a distinctively different approach; the cloaking via one polarization and the image projection via the other orthogonal polarization. Our device structure consists of commercially available optical components such as polarizers and mirrors, and therefore, provides a significant further step towards practical application scenarios such as transparent devices and see-through displays. PMID:27958334

  5. Nanometric holograms based on a topological insulator material

    PubMed Central

    Yue, Zengji; Xue, Gaolei; Liu, Juan; Wang, Yongtian; Gu, Min

    2017-01-01

    Holography has extremely extensive applications in conventional optical instruments spanning optical microscopy and imaging, three-dimensional displays and metrology. To integrate holography with modern low-dimensional electronic devices, holograms need to be thinned to a nanometric scale. However, to keep a pronounced phase shift modulation, the thickness of holograms has been generally limited to the optical wavelength scale, which hinders their integration with ultrathin electronic devices. Here, we break this limit and achieve 60 nm holograms using a topological insulator material. We discover that nanometric topological insulator thin films act as an intrinsic optical resonant cavity due to the unequal refractive indices in their metallic surfaces and bulk. The resonant cavity leads to enhancement of phase shifts and thus the holographic imaging. Our work paves the way towards integrating holography with flat electronic devices for optical imaging, data storage and information security. PMID:28516906

  6. Invisibility cloak with image projection capability

    NASA Astrophysics Data System (ADS)

    Banerjee, Debasish; Ji, Chengang; Iizuka, Hideo

    2016-12-01

    Investigations of invisibility cloaks have been led by rigorous theories and such cloak structures, in general, require extreme material parameters. Consequently, it is challenging to realize them, particularly in the full visible region. Due to the insensitivity of human eyes to the polarization and phase of light, cloaking a large object in the full visible region has been recently realized by a simplified theory. Here, we experimentally demonstrate a device concept where a large object can be concealed in a cloak structure and at the same time any images can be projected through it by utilizing a distinctively different approach; the cloaking via one polarization and the image projection via the other orthogonal polarization. Our device structure consists of commercially available optical components such as polarizers and mirrors, and therefore, provides a significant further step towards practical application scenarios such as transparent devices and see-through displays.

  7. Invisibility cloak with image projection capability.

    PubMed

    Banerjee, Debasish; Ji, Chengang; Iizuka, Hideo

    2016-12-13

    Investigations of invisibility cloaks have been led by rigorous theories and such cloak structures, in general, require extreme material parameters. Consequently, it is challenging to realize them, particularly in the full visible region. Due to the insensitivity of human eyes to the polarization and phase of light, cloaking a large object in the full visible region has been recently realized by a simplified theory. Here, we experimentally demonstrate a device concept where a large object can be concealed in a cloak structure and at the same time any images can be projected through it by utilizing a distinctively different approach; the cloaking via one polarization and the image projection via the other orthogonal polarization. Our device structure consists of commercially available optical components such as polarizers and mirrors, and therefore, provides a significant further step towards practical application scenarios such as transparent devices and see-through displays.

  8. A wearable bluetooth LE sensor for patient monitoring during MRI scans.

    PubMed

    Vogt, Christian; Reber, Jonas; Waltisberg, Daniel; Buthe, Lars; Marjanovic, Josip; Munzenrieder, Niko; Pruessmann, Klaas P; Troster, Gerhard

    2016-08-01

    This paper presents a working prototype of a wearable patient monitoring device capable of recording heart rate, blood oxygen saturation, surface temperature and humidity during a magnetic resonance imaging (MRI) examination. The measured values are transmitted via Bluetooth low energy (LE) and displayed in real time on a smartphone outside the MRI room. During 7 MRI image acquisitions of at least 1 min each and a total duration of 25 min, no Bluetooth data packets were lost. The raw light-intensity measurements for the photoplethysmogram-based heart rate measurement show a noise floor increased by 50 LSB (least significant bits) during MRI operation, whereas the temperature and humidity readings are unaffected. The device itself creates a magnetic resonance (MR) signal loss within a radius of 14 mm around its surface and causes no significant increase in the noise of an acquired MRI image due to its radio-frequency activity. This enables continuous and unobtrusive patient monitoring during MRI scans.

  9. A fast and automatic fusion algorithm for unregistered multi-exposure image sequence

    NASA Astrophysics Data System (ADS)

    Liu, Yan; Yu, Feihong

    2014-09-01

    The human visual system (HVS) can visualize all the brightness levels of a scene through visual adaptation. However, the dynamic range of most commercial digital cameras and display devices is smaller than that of the human eye. This implies that low dynamic range (LDR) images captured by a normal digital camera may lose image details. We propose an efficient approach to high dynamic range (HDR) image fusion that copes with image displacement and image blur degradation in a computationally efficient manner suitable for implementation on mobile devices. The image registration algorithms proposed in the previous literature are unable to meet the efficiency and performance requirements of mobile devices. In this paper, we selected the Oriented FAST and Rotated BRIEF (ORB) detector to extract local image structures. The descriptor used in a multi-exposure image fusion algorithm has to be fast and robust to illumination variations and geometric deformations; the ORB descriptor is the best candidate for our algorithm. Further, we apply an improved RANdom SAmple Consensus (RANSAC) algorithm to reject incorrect matches. For the fusion of the images, a new approach based on the Stationary Wavelet Transform (SWT) is used. The experimental results demonstrate that the proposed algorithm generates high-quality images at low computational cost. Comparisons with a number of other feature matching methods show that our method achieves better performance.
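The match-rejection step can be sketched in isolation. Below is a minimal, self-contained RANSAC loop assuming a pure-translation motion model between two exposures; the paper's improved RANSAC fits a full projective model to ORB correspondences, so the model, function names and parameters here are illustrative only.

```python
# Minimal RANSAC sketch for rejecting incorrect feature matches, assuming a
# pure-translation motion model (the paper estimates a richer model; this
# 1-point version is for illustration).
import random

def ransac_translation(matches, n_iters=200, tol=2.0, seed=0):
    """matches: list of ((x1, y1), (x2, y2)) correspondences.
    Returns the (dx, dy) translation and the list of inlier matches."""
    rng = random.Random(seed)
    best_inliers = []
    for _ in range(n_iters):
        (x1, y1), (x2, y2) = rng.choice(matches)  # 1 sample fixes a translation
        dx, dy = x2 - x1, y2 - y1
        inliers = [m for m in matches
                   if abs((m[1][0] - m[0][0]) - dx) < tol
                   and abs((m[1][1] - m[0][1]) - dy) < tol]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    # Refine the model by averaging over the consensus set
    dx = sum(m[1][0] - m[0][0] for m in best_inliers) / len(best_inliers)
    dy = sum(m[1][1] - m[0][1] for m in best_inliers) / len(best_inliers)
    return (dx, dy), best_inliers
```

With 20 consistent matches and a handful of gross outliers, the consensus set recovers the true shift while the outliers are rejected.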

  10. Design and fabrication of directional diffractive device on glass substrate for multiview holographic 3D display

    NASA Astrophysics Data System (ADS)

    Su, Yanfeng; Cai, Zhijian; Liu, Quan; Zou, Wenlong; Guo, Peiliang; Wu, Jianhong

    2018-01-01

    Multiview holographic 3D display based on a nano-grating patterned directional diffractive device can provide 3D images with high resolution and wide viewing angle, and has attracted considerable attention. However, current directional diffractive devices fabricated on photoresist are vulnerable to damage, which shortens the service life of the device. In this paper, we propose a directional diffractive device on a glass substrate to increase its service life. In the design process, the period and orientation of the nano-grating at each pixel are calculated from the predefined position of the viewing zone, and the groove parameters are designed by analyzing the diffraction efficiency of the nano-grating pixel on the glass substrate. In the experiment, a 4-view photoresist directional diffractive device with full coverage of pixelated nano-grating arrays is efficiently fabricated using an ultraviolet continuously variable spatial frequency lithography system, and the nano-grating patterns on the photoresist are then transferred to the glass substrate by combining ion beam etching and reactive ion beam etching to control the groove parameters precisely. The properties of the etched glass device are measured under illumination by a collimated laser beam with a wavelength of 532 nm. The experimental results demonstrate that the light utilization efficiency is improved and optimized in comparison with the photoresist device. Furthermore, the fabricated device on the glass substrate is easier to replicate and offers better durability and practicability, showing great potential in commercial applications of 3D display terminals.
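The per-pixel design step can be illustrated with the first-order grating equation, period = wavelength / sin(theta), where theta is the diffraction angle from the device normal toward the viewing zone. This sketch assumes normal incidence and omits the groove-parameter (efficiency) optimization; the function name and coordinate conventions are ours, not the paper's.

```python
# Hedged sketch: choosing the nano-grating period and orientation at one
# pixel from the target viewing-zone position, assuming normal incidence
# and first-order diffraction. Illustrative only.
import math

def nano_grating(pixel_xy, view_xyz, wavelength_nm=532.0):
    """pixel_xy: (x, y) pixel position on the device plane (mm).
    view_xyz: (x, y, z) viewing-zone position (mm), z above the plane.
    Returns (period_nm, orientation_deg) for that pixel's grating."""
    dx = view_xyz[0] - pixel_xy[0]
    dy = view_xyz[1] - pixel_xy[1]
    r = math.sqrt(dx * dx + dy * dy)
    theta = math.atan2(r, view_xyz[2])         # diffraction angle from normal
    period = wavelength_nm / math.sin(theta)   # first-order grating equation
    orientation = math.degrees(math.atan2(dy, dx))  # in-plane steering direction
    return period, orientation
```

For example, a viewing zone 30 degrees off-normal at 532 nm requires a period of 532 / sin(30 deg) = 1064 nm.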

  11. Micro-optical system based 3D imaging for full HD depth image capturing

    NASA Astrophysics Data System (ADS)

    Park, Yong-Hwa; Cho, Yong-Chul; You, Jang-Woo; Park, Chang-Young; Yoon, Heesun; Lee, Sang-Hun; Kwon, Jong-Oh; Lee, Seung-Wan

    2012-03-01

    A 20 MHz-switching high-speed image shutter device for 3D image capturing and its application to a system prototype are presented. For 3D image capturing, the system utilizes the Time-of-Flight (TOF) principle by means of a 20 MHz high-speed micro-optical image modulator, the so-called 'optical shutter'. The high-speed image modulation is obtained using the electro-optic operation of a multi-layer stacked structure having diffractive mirrors and an optical resonance cavity that maximizes the magnitude of optical modulation. The optical shutter device is specially designed and fabricated with low resistance-capacitance cell structures having a small RC time constant. The optical shutter is positioned in front of a standard high-resolution CMOS image sensor and modulates the IR image reflected from the object to capture a depth image. The novel optical shutter device enables capture of a full HD depth image with mm-scale depth accuracy, the largest depth-image resolution among the state of the art, which has been limited to VGA. The 3D camera prototype realizes a color/depth concurrent sensing optical architecture to capture 14 Mp color and full HD depth images simultaneously. The resulting high-definition color/depth images and their capturing device have a crucial impact on the 3D ecosystem in the IT industry, especially as a 3D image sensing means in the fields of 3D cameras, gesture recognition, user interfaces, and 3D displays. This paper presents the MEMS-based optical shutter design, fabrication, characterization, 3D camera system prototype, and image test results.
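The Time-of-Flight relation underlying the shutter-modulated depth measurement can be written down directly: depth is proportional to the phase delay of the IR light modulated at 20 MHz. A minimal sketch, with illustrative function names:

```python
# TOF depth from phase delay: depth = c * phase / (4 * pi * f_mod).
# The unambiguous range at modulation frequency f_mod is c / (2 * f_mod),
# about 7.5 m at 20 MHz. Function names are ours, not the paper's.
import math

C = 299_792_458.0  # speed of light, m/s

def tof_depth(phase_rad, f_mod_hz=20e6):
    """Depth (m) from the measured phase delay (radians)."""
    return C * phase_rad / (4 * math.pi * f_mod_hz)

def unambiguous_range(f_mod_hz=20e6):
    """Maximum depth before the phase wraps around."""
    return C / (2 * f_mod_hz)
```

A phase delay of pi corresponds to half the unambiguous range, about 3.75 m at 20 MHz.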

  12. Flexible high-resolution display systems for the next generation of radiology reading rooms

    NASA Astrophysics Data System (ADS)

    Caban, Jesus J.; Wood, Bradford J.; Park, Adrian

    2007-03-01

    A flexible, scalable, high-resolution display system is presented to support the next generation of radiology reading rooms or interventional radiology suites. The project aims to create an environment for radiologists that will simultaneously facilitate image interpretation, analysis, and understanding while lowering visual and cognitive stress. Displays currently in use present radiologists with technical challenges to exploring complex datasets that we seek to address. These include resolution and brightness, display and ambient lighting differences, and degrees of complexity in addition to side-by-side comparison of time-variant and 2D/3D images. We address these issues through a scalable projector-based system that uses our custom-designed geometrical and photometrical calibration process to create a seamless, bright, high-resolution display environment that can reduce the visual fatigue commonly experienced by radiologists. The system we have designed uses an array of casually aligned projectors to cooperatively increase overall resolution and brightness. Images from a set of projectors in their narrowest zoom are combined at a shared projection surface, thus increasing the global "pixels per inch" (PPI) of the display environment. Two primary challenges - geometric calibration and photometric calibration - remained to be resolved before our high-resolution display system could be used in a radiology reading room or procedure suite. In this paper we present a method that accomplishes those calibrations and creates a flexible high-resolution display environment that appears seamless, sharp, and uniform across different devices.
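The geometric-calibration step of such a tiled-projector system is commonly modeled as a planar homography per projector, mapping projector pixels onto the shared screen. As a hedged sketch (not the authors' implementation), here is the classical 4-point homography estimate in plain Python:

```python
# Hedged sketch: estimate a 3x3 planar homography from four point
# correspondences, the standard model for warping one casually aligned
# projector's pixels onto a shared screen. Illustrative, not the paper's code.

def solve(A, b):
    """Gaussian elimination with partial pivoting (small systems only)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def homography(src, dst):
    """src, dst: four (x, y) correspondences; returns 3x3 H with h33 = 1."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = solve(A, b) + [1.0]
    return [h[0:3], h[3:6], h[6:9]]

def apply_h(H, pt):
    """Apply homography H to a point (with perspective division)."""
    x, y = pt
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)
```

In practice the correspondences come from camera-observed calibration features per projector; photometric calibration then blends the overlapping tiles.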

  13. ESR paper on the proper use of mobile devices in radiology.

    PubMed

    2018-04-01

    Mobile devices (smartphones, tablets, etc.) have become key methods of communication, data access and data sharing for the population in the past decade. The technological capabilities of these devices have expanded very rapidly; for example, their built-in cameras have largely replaced conventional cameras. Their processing power is often sufficient to handle the large data sets of radiology studies and to manipulate images and studies directly on hand-held devices. Thus, they can be used to transmit and view radiology studies, often in locations remote from the source of the imaging data. They are not recommended for primary interpretation of radiology studies, but they facilitate sharing of studies for second opinions, viewing of studies and reports by clinicians at the bedside, etc. Other potential applications include remote participation in educational activities (e.g. webinars) and consultation of online educational content, e-books, journals and reference sources. Social-networking applications can be used for exchanging professional information and for teaching. Users of mobile devices must be aware of the vulnerabilities and dangers of their use, in particular the potential for inappropriate sharing of confidential patient information, and must take appropriate steps to protect confidential data. • Mobile devices have revolutionized communication in the past decade, and are now ubiquitous. • Mobile devices have sufficient processing power to manipulate and display large data sets of radiological images. • Mobile devices allow transmission and sharing of radiologic studies for second opinions, bedside review of images, teaching, etc. • Mobile devices are currently not recommended as tools for primary interpretation of radiologic studies. • The use of mobile devices for image and data transmission carries risks, especially regarding confidentiality, which must be considered.

  14. Novel computer-based endoscopic camera

    NASA Astrophysics Data System (ADS)

    Rabinovitz, R.; Hai, N.; Abraham, Martin D.; Adler, Doron; Nissani, M.; Fridental, Ron; Vitsnudel, Ilia

    1995-05-01

    We have introduced a computer-based endoscopic camera which includes (a) unique real-time digital image processing to optimize image visualization, by reducing overexposed, glared areas, brightening dark areas, and accentuating sharpness and fine structures, and (b) patient data documentation and management. The image processing is based on i Sight's iSP1000™ digital video processor chip and patented Adaptive Sensitivity™ scheme for capturing and displaying images with a wide dynamic range of light, taking into account local neighborhood image conditions and global image statistics. It provides the medical user with the ability to view images under difficult lighting conditions without losing details 'in the dark' or in completely saturated areas. The patient data documentation and management allows storage of images (approximately 1 MB per image for a full 24-bit color image) to any storage device installed in the camera, or to an external host via network. The patient data included with every image describes essential information on the patient and procedure. The operator can assign custom data descriptors and can search for a stored image by typing any of its descriptors. The camera optics has an extended zoom range of f = 20-45 mm, allowing control of the diameter of the field displayed on the monitor such that the complete field of view of the endoscope can fill the entire screen area. All these features provide a versatile endoscopic camera with excellent image quality and documentation capabilities.

  15. Evaluation of 4 Commercial Viewing Devices for Radiographic Perceptibility and Working Length Determination.

    PubMed

    Lally, Trent; Geist, James R; Yu, Qingzhao; Himel, Van T; Sabey, Kent

    2015-07-01

    This study compared images displayed on 1 desktop monitor, 1 laptop monitor, and 2 tablets for the detection of contrast and working length interpretation, with a null hypothesis of no differences between the devices. Three aluminum blocks, with milled circles of varying depth, were radiographed at various exposure levels to create 45 images of varying radiographic density. Six observers viewed the images on 4 devices: Lenovo M92z desktop (Lenovo, Beijing, China), Lenovo Z580 laptop (Lenovo), iPad 3 (Apple, Cupertino, CA), and iPad mini (Apple). Observers recorded the number of circles detected for each image, and a perceptibility curve was used to compare the devices. Additionally, 42 extracted teeth were imaged with working length files affixed at various levels (short, flush, and long) relative to the anatomic apex. Observers measured the distance from file tip to tooth apex on each device. The absolute mean measurement error was calculated for each image. Analysis of variance tests compared the devices. Observers repeated their sessions 1 month later to evaluate intraobserver reliability as measured with weighted kappa tests. Interclass correlation coefficients compared interobserver reliability. There was no significant difference in perceptibility detection between the Lenovo M92z desktop, iPad 3, and iPad mini. However, on average, all 3 were significantly better than the Lenovo Z580 laptop (P values ≤.015). No significant difference in the mean absolute error was noted for working length measurements among the 4 viewing devices (P = .3509). Although all 4 viewing devices seemed comparable with regard to working length evaluation, the laptop computer screen had lower overall ability to perceive contrast differences. Copyright © 2015 American Association of Endodontists. Published by Elsevier Inc. All rights reserved.

  16. The Effects of Towfish Motion on Sidescan Sonar Images: Extension to a Multiple-Beam Device

    DTIC Science & Technology

    1994-02-01

    simulation, the raw simulated sidescan image is formed from pixels G, which are the sum of energies E assigned to the nearest range-bin k as noted in ... for stable motion at constant velocity V0, are applied to (divided into) the G, and the simulated sidescan image is ready to display. Maximal energy ... limitation is likely to apply to all multiple-beam sonars of similar construction. The yaw correction was incorporated in the MBEAM model by an

  17. Span graphics display utilities handbook, first edition

    NASA Technical Reports Server (NTRS)

    Gallagher, D. L.; Green, J. L.; Newman, R.

    1985-01-01

    The Space Physics Analysis Network (SPAN) is a computer network connecting scientific institutions throughout the United States. This network provides an avenue for timely, correlative research between investigators in a multidisciplinary approach to space physics studies. An objective in the development of SPAN is to make available direct and simplified procedures that scientists can use, without specialized training, to exchange information over the network. Information exchanges include raw and processed data, analysis programs, correspondence, documents, and graphic images. This handbook details procedures that can be used to exchange graphic images over SPAN. The intent is to periodically update this handbook to reflect the constantly changing facilities available on SPAN. The utilities described within reflect an earnest attempt to provide useful descriptions of working utilities that can be used to transfer graphic images across the network. Whether graphic images are representative of satellite observations or theoretical modeling, and whether they are of device-dependent or device-independent type, the SPAN graphics display utilities handbook will be the user's guide to graphic image exchange.

  18. Development of a high-performance image server using ATM technology

    NASA Astrophysics Data System (ADS)

    Do Van, Minh; Humphrey, Louis M.; Ravin, Carl E.

    1996-05-01

    The ability to display digital radiographs to a radiologist in a reasonable time has long been a goal of many PACS. Intelligent routing, or pre-fetching of images, has become a solution whereby a system uses a set of rules to route images to a pre-determined destination. Images are then stored locally on a workstation for faster display times. Some PACS use a large, centralized storage approach in which workstations retrieve images over high-bandwidth connections. Another approach to image management is to provide a high-performance, clustered storage system. This has the advantage of eliminating the complexity of pre-fetching and allows for rapid image display from anywhere within the hospital. We discuss the development of such a storage device, which provides extremely fast access to images across a local area network. Among the requirements for development of the image server were high performance, DICOM 3.0 compliance, and the use of industry-standard components. The completed image server provides performance more than sufficient for use in clinical practice. Setting up modalities to send images to the image server is simple due to the adherence to the DICOM 3.0 specification. Using only off-the-shelf components keeps the cost of the server relatively low and allows for easy upgrades as technology advances. These factors make the image server ideal for use as a clustered storage system in a radiology department.

  19. Understanding the exposure-time effect on speckle contrast measurements for laser displays

    NASA Astrophysics Data System (ADS)

    Suzuki, Koji; Kubota, Shigeo

    2018-02-01

    To evaluate the influence of exposure time on speckle noise in laser displays, a speckle contrast measurement method operating at human-eye response times was developed, using a high-sensitivity camera with a signal-multiplying function. The nonlinearity of the camera's light sensitivity was calibrated to measure accurate speckle contrasts, and the lower measurement limit of speckle contrast was improved by applying a spatial-frequency low-pass filter to the captured images. Three commercially available laser displays were measured over a wide range of exposure times, from tens of milliseconds to several seconds, without adjusting their brightness. The speckle contrast of a raster-scanned mobile projector without any speckle-reduction device was nearly constant over the various exposure times. In contrast, in full-frame-projection laser displays equipped with a temporally averaging speckle-reduction device, some speckle contrasts close to the lower measurement limit increased slightly at shorter exposure times due to noise. As a result, the exposure-time effect on speckle contrast could not be observed in our measurements, although it is more reasonable to expect that the speckle contrast of laser displays equipped with a temporally averaging speckle-reduction device depends on the exposure time. This discrepancy may be attributed to underestimation of the temporal averaging factor. We expect this method to be useful for evaluating various laser displays and for clarifying the relationship between speckle noise and exposure time in further verification of speckle reduction.
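The measured quantity itself is simple to state: speckle contrast is the standard deviation of the image intensity divided by its mean, and temporally averaging N independent speckle patterns reduces it by a factor of 1/sqrt(N). A minimal sketch (function names are ours):

```python
# Speckle contrast C = sigma / mu over the image intensities. C = 1 for
# fully developed speckle; averaging N independent patterns gives C/sqrt(N).
import math

def speckle_contrast(intensities):
    """Contrast of a flat list of pixel intensities."""
    n = len(intensities)
    mu = sum(intensities) / n
    var = sum((i - mu) ** 2 for i in intensities) / n
    return math.sqrt(var) / mu

def averaged_contrast(c_single, n_patterns):
    """Expected contrast after temporally averaging n independent patterns."""
    return c_single / math.sqrt(n_patterns)
```

This is why a temporally averaging speckle-reduction device should, in principle, show exposure-time dependence: longer exposures integrate more independent patterns.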

  20. Active illumination using a digital micromirror device for quantitative phase imaging.

    PubMed

    Shin, Seungwoo; Kim, Kyoohyun; Yoon, Jonghee; Park, YongKeun

    2015-11-15

    We present a powerful and cost-effective method for active illumination using a digital micromirror device (DMD) for quantitative phase-imaging techniques. By displaying binary illumination patterns on a DMD with appropriate spatial filtering, plane waves with various illumination angles are generated and impinged onto a sample. Complex optical fields of the sample obtained at the various incident angles are then measured via Mach-Zehnder interferometry, from which a high-resolution 2D synthetic-aperture phase image and a 3D refractive index tomogram of the sample are reconstructed. We demonstrate the fast and stable illumination-control capability of the proposed method by imaging colloidal spheres and biological cells. The capability of high-speed optical diffraction tomography is also demonstrated by measuring the 3D Brownian motion of colloidal particles at a tomogram acquisition rate of 100 Hz.
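One common way to realize such binary DMD patterns is a Lee-type binary grating: thresholding a sinusoidal carrier whose spatial frequency sets the illumination angle after spatial filtering of one diffraction order. The sketch below is a hedged illustration of that idea, not necessarily the authors' exact pattern-generation scheme:

```python
# Hedged sketch: binary DMD pattern steering a plane wave. Thresholding a
# tilted sinusoidal carrier yields a Lee-type binary grating; the carrier
# frequency (fx, fy), in cycles per mirror, sets the diffraction angle.
import math

def binary_grating(nx, ny, fx, fy):
    """Return an ny-by-nx grid of 0/1 mirror states for carrier (fx, fy)."""
    return [[1 if math.cos(2 * math.pi * (fx * x + fy * y)) >= 0 else 0
             for x in range(nx)] for y in range(ny)]
```

Changing (fx, fy) frame-to-frame scans the illumination angle, which is what enables the fast, mechanically stable angle control the paper exploits.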

  1. Analysis of crystalline lens coloration using a black and white charge-coupled device camera.

    PubMed

    Sakamoto, Y; Sasaki, K; Kojima, M

    1994-01-01

    To analyze lens coloration in vivo, we used a new type of Scheimpflug camera based on a black-and-white charge-coupled device (CCD) camera, and propose a new methodology. Scheimpflug images of the lens were taken three times, through red (R), green (G), and blue (B) filters, respectively. The three images corresponding to the R, G, and B channels were combined into one image on the cathode-ray tube (CRT) display. The spectral transmittance of the tricolor filters and the spectral sensitivity of the CCD camera were used to correct the scattered-light intensity of each image. The coloration of the lens was expressed on a CIE standard chromaticity diagram. The lens coloration of seven eyes analyzed by this method showed values almost the same as those obtained by the previous method using color film.
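The final conversion onto the CIE chromaticity diagram can be sketched as a linear RGB-to-XYZ transform followed by normalization. The matrix below is the standard sRGB (D65) matrix, used purely for illustration; the paper derives its conversion from the filters' spectral transmittance and the CCD's spectral sensitivity.

```python
# Hedged sketch: corrected (R, G, B) scattering intensities -> CIE 1931
# chromaticity (x, y). The sRGB/D65 matrix here is an illustrative stand-in
# for the paper's filter-derived conversion.

M = [[0.4124, 0.3576, 0.1805],   # X row
     [0.2126, 0.7152, 0.0722],   # Y row
     [0.0193, 0.1192, 0.9505]]   # Z row

def rgb_to_xy(r, g, b):
    """Linear RGB -> CIE chromaticity coordinates (x, y)."""
    X, Y, Z = (row[0] * r + row[1] * g + row[2] * b for row in M)
    s = X + Y + Z
    return X / s, Y / s
```

With this matrix an equal-energy input (1, 1, 1) lands at the D65 white point, roughly (0.3127, 0.3290); a yellowed lens shifts the measured point toward higher x and y.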

  2. DYNAMIC PATTERN RECOGNITION BY MEANS OF THRESHOLD NETS,

    DTIC Science & Technology

    A method is expounded for the recognition of visual patterns. A circuit diagram of a device is described which is based on a multilayer threshold ... structure synthesized in accordance with the proposed method. Coded signals received each time an image is displayed are transmitted to the threshold ... circuit which distinguishes the signs, and from there to the layers of threshold resolving elements. The image at each layer is made to correspond

  3. Volumetric 3D Display System with Static Screen

    NASA Technical Reports Server (NTRS)

    Geng, Jason

    2011-01-01

    Current display technology has relied on flat, 2D screens that cannot truly convey the third dimension of visual information: depth. In contrast to conventional visualization that is primarily based on 2D flat screens, the volumetric 3D display possesses a true 3D display volume and physically places each 3D voxel of the displayed 3D image at its true 3D (x,y,z) spatial position. Each voxel, analogous to a pixel in a 2D image, emits light from that position to form a real 3D image in the eyes of the viewers. Such true volumetric 3D display technology provides both physiological (accommodation, convergence, binocular disparity, and motion parallax) and psychological (image size, linear perspective, shading, brightness, etc.) depth cues to the human visual system to help in the perception of 3D objects. In a volumetric 3D display, viewers can watch the displayed 3D images from a complete 360° range of views without using any special eyewear. Volumetric 3D display techniques may lead to a quantum leap in information display technology and can dramatically change the way humans interact with computers, leading to significant improvements in the efficiency of learning and knowledge management processes. Within a block of glass, a large number of tiny voxel dots are created using a recently available machining technique called laser subsurface engraving (LSE). LSE is able to produce tiny physical crack points (as small as 0.05 mm in diameter) at any (x,y,z) location within the cube of transparent material. The crack dots, when illuminated by a light source, scatter the light and form visible voxels within the 3D volume. The locations of these tiny voxels are strategically determined such that each can be illuminated by a light ray from a high-resolution digital mirror device (DMD) light engine. The distribution of these voxels occupies the full display volume within the static 3D glass screen.
    This design eliminates the moving screen seen in previous approaches, so there is no image jitter, and it has an inherently parallel mechanism for 3D voxel addressing. High spatial resolution is possible, and a full-color display is easy to implement. The system is low-cost and low-maintenance.

  4. Digital watermarking opportunities enabled by mobile media proliferation

    NASA Astrophysics Data System (ADS)

    Modro, Sierra; Sharma, Ravi K.

    2009-02-01

    Consumer usages of mobile devices and electronic media are changing. Mobile devices now include increased computational capabilities, mobile broadband access, better integrated sensors, and higher resolution screens. These enhanced features are driving increased consumption of media such as images, maps, e-books, audio, video, and games. As users become more accustomed to using mobile devices for media, opportunities arise for new digital watermarking usage models. For example, transient media, like images being displayed on screens, could be watermarked to provide a link between mobile devices. Applications based on these emerging usage models utilizing watermarking can provide richer user experiences and drive increased media consumption. We describe the enabling factors and highlight a few of the usage models and new opportunities. We also outline how the new opportunities are driving further innovation in watermarking technologies. We discuss challenges in market adoption of applications based on these usage models.

  5. Vergence–accommodation conflicts hinder visual performance and cause visual fatigue

    PubMed Central

    Hoffman, David M.; Girshick, Ahna R.; Akeley, Kurt; Banks, Martin S.

    2010-01-01

    Three-dimensional (3D) displays have become important for many applications including vision research, operation of remote devices, medical imaging, surgical training, scientific visualization, virtual prototyping, and more. In many of these applications, it is important for the graphic image to create a faithful impression of the 3D structure of the portrayed object or scene. Unfortunately, 3D displays often yield distortions in perceived 3D structure compared with the percepts of the real scenes the displays depict. A likely cause of such distortions is the fact that computer displays present images on one surface. Thus, focus cues—accommodation and blur in the retinal image—specify the depth of the display rather than the depths in the depicted scene. Additionally, the uncoupling of vergence and accommodation required by 3D displays frequently reduces one’s ability to fuse the binocular stimulus and causes discomfort and fatigue for the viewer. We have developed a novel 3D display that presents focus cues that are correct or nearly correct for the depicted scene. We used this display to evaluate the influence of focus cues on perceptual distortions, fusion failures, and fatigue. We show that when focus cues are correct or nearly correct, (1) the time required to identify a stereoscopic stimulus is reduced, (2) stereoacuity in a time-limited task is increased, (3) distortions in perceived depth are reduced, and (4) viewer fatigue and discomfort are reduced. We discuss the implications of this work for vision research and the design and use of displays. PMID:18484839

  6. IR sensitive photorefractive polymers, the first updateable holographic three-dimensional display

    NASA Astrophysics Data System (ADS)

    Tay, Savas

    This work presents recent advances in the development of infrared-sensitive photorefractive polymers and of updateable near-real-time holographic 3D displays based on them. Theoretical and experimental techniques used for the design, fabrication and characterization of photorefractive polymers are outlined. Materials development and technical advances that made possible the use of photorefractive polymers for infrared free-space optical communications and 3D holographic displays are presented. Photorefractive polymers are dynamic holographic materials that allow recording of highly efficient reversible holograms. The longest operation wavelength for a photorefractive polymer before this study was 950 nm, far shorter than 1550 nm, the wavelength of choice for optical communications and medical imaging. The polymers shown here were sensitized using two-photon absorption, a third-order nonlinear effect, beyond the linear absorption spectrum of organic dyes, and reach 40% diffraction efficiency with a 35 ms response time at this wavelength. As a consequence of two-photon absorption sensitization they exhibit non-destructive readout, which is an important advantage for applications that require high signal-to-noise ratios. Holographic 3D displays provide highly realistic images without the need for special eyewear, making them valuable tools for applications that require "situational awareness" such as medical, industrial and military imaging. Currently available commercial holographic 3D displays employ photopolymers that lack image-updating capability, resulting in their restricted use and high cost per 3D image. The holographic 3D display shown here employs photorefractive polymers with nearly 100% diffraction efficiency, fast writing time, hours of image persistence, rapid erasure and large area, a combination of properties that had not been shown before.
    The 3D display is based on stereography and utilizes the world's largest photorefractive devices (4 x 4 inches in size). It can be recorded within a few minutes, viewed for several hours without refreshing, and can be completely erased and updated with new images when desired, thus constituting the first updateable holographic 3D display with memory suitable for practical use.

  7. Structure Sensor for mobile markerless augmented reality

    NASA Astrophysics Data System (ADS)

    Kilgus, T.; Bux, R.; Franz, A. M.; Johnen, W.; Heim, E.; Fangerau, M.; Müller, M.; Yen, K.; Maier-Hein, L.

    2016-03-01

    3D visualization of anatomical data is an integral part of diagnostics and treatment in many medical disciplines, such as radiology, surgery and forensic medicine. To enable intuitive interaction with the data, we recently proposed a new concept for on-patient visualization of medical data which involves rendering of subsurface structures on a mobile display that can be moved along the human body. The data fusion is achieved with a range imaging device attached to the display. The range data is used to register static 3D medical imaging data with the patient's body based on a surface matching algorithm. However, our previous prototype was based on the Microsoft Kinect camera and thus required a cable connection to acquire color and depth data. The contribution of this paper is two-fold. Firstly, we replace the Kinect with the Structure Sensor, a novel cable-free range imaging device, to improve handling and user experience, and show that the resulting accuracy (target registration error: 4.8 ± 1.5 mm) is comparable to that achieved with the Kinect. Secondly, a new approach to visualizing complex 3D anatomy based on this device, as well as 3D printed models of anatomical surfaces, is presented. We demonstrate that our concept can be applied to in vivo data and to a 3D printed skull from a forensic case. Our new device is the next step towards clinical integration and shows that the concept can be applied not only during autopsy but also for presentation of forensic data to laypeople in court or in medical education.

  8. Our current knowledge of the Antikythera Mechanism

    NASA Astrophysics Data System (ADS)

    Seiradakis, J. H.; Edmunds, M. G.

    2018-01-01

    The Antikythera Mechanism is the oldest known mechanical calculator. It was constructed around the second century BCE and lost in a shipwreck very close to the small Greek island of Antikythera. The shipwreck was discovered 2,000 years later, in 1900. The Mechanism was recognized in the spring of 1902 as a geared mechanical device displaying calendars and astronomical information. Application of modern imaging methods to the surviving fragments has led to general agreement on the basic structure of the device and its solar and lunar astronomical functions. The reading of the remains of its extensive inscriptions has shown that it was also intended to display the shifting position of the planets in the zodiac. In this review, we set out our view on what is known about the device, what can reasonably be conjectured and what major uncertainties still remain regarding its origin, context and purpose.

  9. In-line positioning of ultrasound images using wireless remote display system with tablet computer facilitates ultrasound-guided radial artery catheterization.

    PubMed

    Tsuchiya, Masahiko; Mizutani, Koh; Funai, Yusuke; Nakamoto, Tatsuo

    2016-02-01

    Ultrasound-guided procedures may be easier to perform when the operator's eye axis, needle puncture site, and ultrasound image display form a straight line in the puncture direction. However, such methods have not been well tested in clinical settings because that arrangement is often impossible due to limited space in the operating room. We developed a wireless remote display system for ultrasound devices using a tablet computer (iPad Mini), which allows easy display of images at nearly any location chosen by the operator. We hypothesized that the in-line layout of ultrasound images provided by this system would allow secure and quick catheterization of the radial artery. We enrolled first-year medical interns (n = 20) with no prior experience in ultrasound-guided radial artery catheterization and had them perform it using a short-axis out-of-plane approach with two different methods. With the conventional method, only the ultrasound machine, placed at the side of the patient's head across the targeted forearm, was utilized. With the tablet method, the ultrasound images were displayed on an iPad Mini positioned on the arm in alignment with the operator's eye axis and needle puncture direction. The success rate and time required for catheterization were compared between the two methods. The success rate was significantly higher (100 vs. 70%, P = 0.02) and the catheterization time significantly shorter (28.5 ± 7.5 vs. 68.2 ± 14.3 s, P < 0.001) with the tablet method than with the conventional method. An ergonomic straight arrangement of the image display is crucial for successful and quick completion of ultrasound-guided arterial catheterization. The present remote display system is a practical method for providing such an arrangement.
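The reported P = 0.02 for the success-rate comparison is consistent with a two-sided Fisher exact test on the 2x2 table of successes and failures (20/20 vs. 14/20). The following is a sketch only, treating the two methods' attempts as independent groups, which is an assumption on our part (each intern actually performed both methods):

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher exact test for a 2x2 table [[a, b], [c, d]].

    Sums the hypergeometric probabilities of all tables with the same
    margins whose probability does not exceed the observed one.
    """
    row1, row2 = a + b, c + d
    col1 = a + c
    n = row1 + row2
    total = comb(n, row1)

    def prob(k):  # probability that cell (1,1) equals k
        return comb(col1, k) * comb(n - col1, row1 - k) / total

    p_obs = prob(a)
    lo = max(0, row1 - (n - col1))
    hi = min(row1, col1)
    return sum(prob(k) for k in range(lo, hi + 1) if prob(k) <= p_obs + 1e-12)

# 20/20 successes (tablet) vs. 14/20 (conventional)
p = fisher_exact_two_sided(20, 0, 14, 6)
print(round(p, 3))  # ~0.02, in line with the reported P = 0.02
```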

  10. Vertical viewing angle enhancement for the 360 degree integral-floating display using an anamorphic optic system.

    PubMed

    Erdenebat, Munkh-Uchral; Kwon, Ki-Chul; Yoo, Kwan-Hee; Baasantseren, Ganbat; Park, Jae-Hyeung; Kim, Eun-Soo; Kim, Nam

    2014-04-15

    We propose a 360 degree integral-floating display with an enhanced vertical viewing angle. The system projects two-dimensional elemental image arrays via a high-speed digital micromirror device projector and reconstructs them into 3D perspectives with a lens array. Double floating lenses relay the initial 3D perspectives to the center of a vertically curved convex mirror. The anamorphic optic system tailors the initial 3D perspectives horizontally and disperses the light rays more widely in the vertical direction. With the proposed method, the entire 3D image provides both monocular and binocular depth cues, full-parallax presentation with high angular ray density, and an enhanced vertical viewing angle.

  11. Device- and system-independent personal touchless user interface for operating rooms: One personal UI to control all displays in an operating room.

    PubMed

    Ma, Meng; Fallavollita, Pascal; Habert, Séverine; Weidert, Simon; Navab, Nassir

    2016-06-01

    In the modern operating room, the surgeon performs surgeries with the support of different medical systems that display patient information, physiological data, and medical images. It is generally accepted that numerous interactions must be performed by the surgical team to control the corresponding medical system to retrieve the desired information. Joysticks and physical keys are still present in the operating room due to the disadvantages of computer mice, and surgeons often communicate instructions to the surgical team when requiring information from a specific medical system. In this paper, a novel user interface is developed that allows the surgeon to personally perform touchless interaction with the various medical systems and switch effortlessly among them, all without modifying the systems' software or hardware. To achieve this, a wearable RGB-D sensor is mounted on the surgeon's head for inside-out tracking of his/her finger relative to any of the medical systems' displays. Android devices running a special application are connected to the computers on which the medical systems run, simulating a standard USB mouse and keyboard. When the surgeon interacts using pointing gestures, the desired cursor position on the targeted medical system's display, together with the gestures, is transformed into general events and sent to the corresponding Android device. Finally, the application running on the Android device generates the corresponding mouse or keyboard events for the targeted medical system. To simulate an operating room setting, our user interface was tested by seven medical participants who performed several interactions with visualizations of CT, MRI, and fluoroscopy images at varying distances. Results from the System Usability Scale and the NASA-TLX workload index indicated strong acceptance of the proposed user interface.
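The core geometric step in such a pointing interface, mapping the eye-to-fingertip ray onto a display surface, can be sketched as a ray-plane intersection. This is an illustrative reconstruction under our own assumptions (planar display, known corner and edge vectors in tracker space), not the paper's implementation:

```python
def sub(a, b): return [x - y for x, y in zip(a, b)]
def dot(a, b): return sum(x * y for x, y in zip(a, b))
def cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def cursor_from_pointing(eye, tip, corner, u_edge, v_edge):
    """Intersect the eye->fingertip ray with a display plane and return
    normalized (u, v) cursor coordinates on that display.

    corner: the display's origin corner; u_edge/v_edge: vectors spanning
    its width and height. All points are 3-vectors in tracker space.
    """
    n = cross(u_edge, v_edge)          # display plane normal
    d = sub(tip, eye)                  # pointing direction
    t = dot(sub(corner, eye), n) / dot(d, n)
    p = [e + t * di for e, di in zip(eye, d)]   # intersection point
    w = sub(p, corner)
    return (dot(w, u_edge) / dot(u_edge, u_edge),
            dot(w, v_edge) / dot(v_edge, v_edge))
```

For example, an eye at (0.25, 0.25, 1) pointing straight at a unit display in the z = 0 plane yields the normalized cursor position (0.25, 0.25).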

  12. Real-Time Detection and Reading of LED/LCD Displays for Visually Impaired Persons

    PubMed Central

    Tekin, Ender; Coughlan, James M.; Shen, Huiying

    2011-01-01

    Modern household appliances, such as microwave ovens and DVD players, increasingly require users to read an LED or LCD display to operate them, posing a severe obstacle for persons with blindness or visual impairment. While OCR-enabled devices are emerging to address the related problem of reading text in printed documents, they are not designed to tackle the challenge of finding and reading characters in appliance displays. Any system for reading these characters must address the challenge of first locating the characters among substantial amounts of background clutter; moreover, poor contrast and the abundance of specular highlights on the display surface – which degrade the image in an unpredictable way as the camera is moved – motivate the need for a system that processes images at a few frames per second, rather than forcing the user to take several photos, each of which can take seconds to acquire and process, until one is readable. We describe a novel system that acquires video, detects and reads LED/LCD characters in real time, reading them aloud to the user with synthesized speech. The system has been implemented on both a desktop and a cell phone. Experimental results are reported on videos of display images, demonstrating the feasibility of the system. PMID:21804957

  13. A CAMAC display module for fast bit-mapped graphics

    NASA Astrophysics Data System (ADS)

    Abdel-Aal, R. E.

    1992-10-01

    In many data acquisition and analysis facilities for nuclear physics research, utilities for the display of two-dimensional (2D) images and spectra on graphics terminals suffer from low speed, poor resolution, and limited accuracy. Development of CAMAC bit-mapped graphics modules for this purpose has been discouraged in the past by the large device count needed and the long times required to load the image data from the host computer into the CAMAC hardware, particularly since many such facilities have been designed to support fast DMA block transfers only for data acquisition into the host. This paper describes the design and implementation of a prototype CAMAC graphics display module with a resolution of 256×256 pixels at eight colours, all of whose components can be easily accommodated in a single-width package. A hardware technique is employed that reduces the number of programmed CAMAC data transfer operations needed for writing 2D images into the display memory by approximately an order of magnitude, with attendant improvements in display speed and CPU time consumption. Hardware and software details are given together with sample results. Information on the performance of the module in a typical VAX/MBD data acquisition environment is presented, including data on the mutual effects of simultaneous data acquisition traffic. Suggestions are made for further improvements in performance.

  14. A liquid-crystal-on-silicon color sequential display using frame buffer pixel circuits

    NASA Astrophysics Data System (ADS)

    Lee, Sangrok

    Next-generation liquid-crystal-on-silicon (LCOS) high-definition (HD) televisions and image projection displays will need to be low-cost and high quality to compete with existing systems based on digital micromirror devices (DMDs), plasma displays, and direct-view liquid crystal displays. This thesis presents a novel frame buffer pixel architecture, which buffers data for the next image frame while displaying the current frame, as such a competitive solution. The primary goal of the thesis is to demonstrate an LCOS microdisplay architecture for high-quality image projection displays at potentially low cost. The thesis covers four main research areas: new frame buffer pixel circuits to improve LCOS performance, backplane architecture design and testing, liquid crystal modes for the LCOS microdisplay, and system integration and demonstration. The design requirements for the LCOS backplane with a 64 x 32 pixel array are addressed, and the measured electrical characteristics match computer simulation results. Various liquid crystal (LC) modes applicable to LCOS microdisplays and their physical properties are discussed. One- and two-dimensional director simulations are performed for the selected LC modes. Test liquid crystal cells with the selected LC modes are made and their electro-optic effects characterized. The 64 x 32 LCOS microdisplays fabricated with the best LC mode are optically tested with interface circuitry. The characteristics of the LCOS microdisplays are summarized along with their successful demonstration.

  15. Optimization of reading conditions for flat panel displays.

    PubMed

    Thomas, J A; Chakrabarti, K; Kaczmarek, R V; Maslennikov, A; Mitchell, C A; Romanyukha, A

    2006-06-01

    Task Group 18 (TG 18) of the American Association of Physicists in Medicine has developed guidelines for the assessment of display performance for medical imaging systems. This document suggests a method for determining the maximum room lighting for displays. It is based on luminance measurements of a black target shown on each display device at different room illuminance levels. Linear extrapolation of these luminance measurements vs. room illuminance allows one to determine the diffuse and specular reflection coefficients. The TG 18 guidelines establish a recommended maximum room lighting, based on characterizing the display by its minimum and maximum luminance and the room by its diffuse and specular coefficients. We carried out these luminance measurements for three selected displays, one cathode ray tube and two flat panels, to determine their optimum viewing conditions. We found some problems in applying the TG 18 guidelines to optimize viewing conditions for IBM T221 flat panels. Introducing a requirement for minimum room illuminance allows a more accurate determination of the optimal viewing conditions (maximum and minimum room illuminance) for the IBM flat panels. It also addresses the possible loss of contrast in medical images on flat panel displays caused by the nonlinear dependence of luminance on room illuminance at low room lighting.
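The black-target extrapolation described above amounts to a straight-line fit of measured luminance against room illuminance: the slope estimates the diffuse reflection coefficient (cd/m² per lux) and the intercept the display's intrinsic minimum luminance. A minimal sketch with made-up measurements, linear by construction:

```python
# Hypothetical measurements: ambient illuminance E (lux) vs. luminance
# L (cd/m^2) of a black target shown on the display.
E = [0.0, 25.0, 50.0, 100.0, 200.0]
L = [0.50, 0.625, 0.75, 1.00, 1.50]

# Ordinary least-squares line fit, L = intercept + slope * E.
n = len(E)
mean_e = sum(E) / n
mean_l = sum(L) / n
slope = (sum((e - mean_e) * (l - mean_l) for e, l in zip(E, L))
         / sum((e - mean_e) ** 2 for e in E))
intercept = mean_l - slope * mean_e

print(f"diffuse reflection coefficient ~ {slope:.4f} cd/m^2 per lux")
print(f"intrinsic minimum luminance    ~ {intercept:.2f} cd/m^2")
```

With these synthetic numbers the fit recovers a slope of 0.005 cd/m² per lux and an intrinsic minimum luminance of 0.5 cd/m².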

  16. Availability of color calibration for consistent color display in medical images and optimization of reference brightness for clinical use

    NASA Astrophysics Data System (ADS)

    Iwai, Daiki; Suganami, Haruka; Hosoba, Minoru; Ohno, Kazuko; Emoto, Yutaka; Tabata, Yoshito; Matsui, Norihisa

    2013-03-01

    Color image consistency has not yet been achieved, apart from the Digital Imaging and Communications in Medicine (DICOM) Supplement 100, which implements a color reproduction pipeline and device-independent color spaces. Thus, most healthcare enterprises cannot check monitor degradation routinely. To ensure color consistency in medical color imaging, monitor color calibration should be introduced. Using a simple color calibration device, the chromaticity of typical colors (red, green, blue and white) is measured before and after calibration as device-independent profile connection space values called u'v'. In addition, clinical color images are displayed and visual differences are observed. In color calibration, the monitor brightness level has to be set to the rather low value of 80 cd/m2, according to the sRGB standard. As the maximum brightness of most color monitors currently available for medical use is much higher than 80 cd/m2, this level does not seem appropriate for calibration. Therefore, we propose that a new brightness standard be introduced while maintaining the color representation in clinical use. To evaluate the effect of brightness on chromaticity experimentally, the brightness level of two monitors is varied from 80 to 270 cd/m2 and the chromaticity values at each brightness level are compared. As a result, there are no significant differences in the chromaticity diagram when brightness levels are changed. In conclusion, chromaticity is close to the theoretical value after color calibration, and it does not shift when brightness is changed. The results indicate that an optimized reference brightness level for clinical use could be set at the high brightness of current monitors.
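The u'v' values referred to above are CIE 1976 UCS chromaticity coordinates, computed from tristimulus XYZ as u' = 4X/(X + 15Y + 3Z) and v' = 9Y/(X + 15Y + 3Z). A small helper, checked against the well-known D65 white point:

```python
def xyz_to_uv_prime(X, Y, Z):
    """CIE 1976 UCS chromaticity (u', v') from tristimulus XYZ."""
    d = X + 15.0 * Y + 3.0 * Z
    return 4.0 * X / d, 9.0 * Y / d

# D65 white point (a typical monitor white), XYZ normalized to Y = 1.
u, v = xyz_to_uv_prime(0.9505, 1.0, 1.0890)
print(round(u, 4), round(v, 4))  # 0.1978 0.4683
```

Monitor drift would then be quantified as the Euclidean distance between the measured and reference (u', v') pairs.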

  17. Large size three-dimensional video by electronic holography using multiple spatial light modulators

    PubMed Central

    Sasaki, Hisayuki; Yamamoto, Kenji; Wakunami, Koki; Ichihashi, Yasuyuki; Oi, Ryutaro; Senoh, Takanori

    2014-01-01

    In this paper, we propose a new method of using multiple spatial light modulators (SLMs) to increase the size of three-dimensional (3D) images that are displayed using electronic holography. The scalability of images produced by the previous method had an upper limit that was derived from the path length of the image-readout part. We were able to produce larger colour electronic holographic images with a newly devised space-saving image-readout optical system for multiple reflection-type SLMs. This optical system is designed so that the path length of the image-readout part is half that of the previous method. It consists of polarization beam splitters (PBSs), half-wave plates (HWPs), and polarizers. We used 16 (4 × 4) 4K×2K-pixel SLMs for displaying holograms. The experimental device we constructed was able to perform 20 fps video reproduction in colour of full-parallax holographic 3D images with a diagonal image size of 85 mm and a horizontal viewing-zone angle of 5.6 degrees. PMID:25146685

  18. Large size three-dimensional video by electronic holography using multiple spatial light modulators.

    PubMed

    Sasaki, Hisayuki; Yamamoto, Kenji; Wakunami, Koki; Ichihashi, Yasuyuki; Oi, Ryutaro; Senoh, Takanori

    2014-08-22

    In this paper, we propose a new method of using multiple spatial light modulators (SLMs) to increase the size of three-dimensional (3D) images that are displayed using electronic holography. The scalability of images produced by the previous method had an upper limit that was derived from the path length of the image-readout part. We were able to produce larger colour electronic holographic images with a newly devised space-saving image-readout optical system for multiple reflection-type SLMs. This optical system is designed so that the path length of the image-readout part is half that of the previous method. It consists of polarization beam splitters (PBSs), half-wave plates (HWPs), and polarizers. We used 16 (4 × 4) 4K×2K-pixel SLMs for displaying holograms. The experimental device we constructed was able to perform 20 fps video reproduction in colour of full-parallax holographic 3D images with a diagonal image size of 85 mm and a horizontal viewing-zone angle of 5.6 degrees.
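For context, the horizontal viewing-zone angle of an SLM-based holographic display is bounded by the pixel-pitch diffraction limit, θ = 2 arcsin(λ / 2p). The sketch below uses assumed values (the abstract does not state the panels' pitch or the wavelength); with green light and a ~5 µm pitch typical of 4K SLM panels, the result lands in the same few-degree range as the reported 5.6 degrees:

```python
import math

def viewing_zone_deg(wavelength_m, pixel_pitch_m):
    """Full viewing-zone angle set by the SLM's diffraction limit."""
    return 2.0 * math.degrees(math.asin(wavelength_m / (2.0 * pixel_pitch_m)))

# Assumed values: 532 nm green light, 5 um pixel pitch.
print(round(viewing_zone_deg(532e-9, 5.0e-6), 1))  # 6.1 (degrees)
```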

  19. An MRI-Guided Telesurgery System Using a Fabry-Perot Interferometry Force Sensor and a Pneumatic Haptic Device.

    PubMed

    Su, Hao; Shang, Weijian; Li, Gang; Patel, Niravkumar; Fischer, Gregory S

    2017-08-01

    This paper presents a surgical master-slave teleoperation system for percutaneous interventional procedures under continuous magnetic resonance imaging (MRI) guidance. The slave robot consists of a piezoelectrically actuated 6-degree-of-freedom (DOF) robot for needle placement with an integrated fiber optic force sensor (1-DOF axial force measurement) using the Fabry-Perot interferometry (FPI) sensing principle; it is configured to operate inside the bore of the MRI scanner during imaging. By leveraging the advantages of pneumatic and piezoelectric actuation in force and position control respectively, we have designed a pneumatically actuated master robot (haptic device) with strain gauge based force sensing that is configured to operate the slave from within the scanner room during imaging. The slave robot follows the insertion motion of the haptic device while the haptic device displays the needle insertion force as measured by the FPI sensor. Image interference evaluation demonstrates that the telesurgery system presents a signal to noise ratio reduction of less than 17% and less than 1% geometric distortion during simultaneous robot motion and imaging. Teleoperated needle insertion and rotation experiments were performed to reach 10 targets in a soft tissue-mimicking phantom with 0.70 ± 0.35 mm Cartesian space error.

  20. A device for characterising the mechanical properties of the plantar soft tissue of the foot.

    PubMed

    Parker, D; Cooper, G; Pearson, S; Crofts, G; Howard, D; Busby, P; Nester, C

    2015-11-01

    The plantar soft tissue is a highly functional viscoelastic structure involved in transferring load to the human body during walking. A Soft Tissue Response Imaging Device was developed to apply vertical compression to the plantar soft tissue while measuring the mechanical response via a combined load cell and ultrasound imaging arrangement. The device was evaluated for: accuracy of motion compared to input profiles; validity of the response measured for standard materials in compression; variability of force and displacement measures over consecutive compressive cycles; and implementation in vivo with five healthy participants. Static displacement showed an average error of 0.04 mm (over a range of 15 mm), and static load an average error of 0.15 N (over a range of 250 N). Validation tests showed acceptable agreement with a Hounsfield tensometer for both displacement (CMC > 0.99, RMSE < 0.18 mm) and load (CMC > 0.95, RMSE < 4.86 N). Device motion was highly repeatable in bench-top tests (ICC = 0.99) and participant trials (CMC = 1.00). The soft tissue response was repeatable both within (CMC > 0.98) and between trials (CMC > 0.70). The device has been shown to be capable of implementing complex loading patterns similar to gait and of capturing the compressive response of the plantar soft tissue for a range of loading conditions in vivo. Copyright © 2015. Published by Elsevier Ltd.

  1. Airtraq optical laryngoscope: advantages and disadvantages.

    PubMed

    Saracoglu, Kemal Tolga; Eti, Zeynep; Gogus, Fevzi Yilmaz

    2013-06-01

    Difficult or unsuccessful tracheal intubation is one of the important causes of morbidity and mortality in susceptible patients. Almost 30% of anesthesia-related deaths are caused by complications of difficult airway management, and more than 85% of all respiratory-related complications cause brain injury or death. Nowadays, due to advances in technology, new videolaryngoscopic devices have become available. Airtraq is a novel single-use laryngoscope which provides a view of the glottis without any deviation of the normal oral, pharyngeal or tracheal axes. With the help of the display lens, the glottis and the surrounding structures are visualized, and the tracheal tube is introduced between the vocal cords under direct view of its tip. In patients with restricted neck motion or limited mouth opening (provided it is greater than 3 cm), Airtraq offers the advantage of a better view. Moreover, the video image can be transferred to an external monitor, so that an experienced specialist can provide assistance and an educational course can be conducted simultaneously. On the other hand, Airtraq videolaryngoscopic devices have certain disadvantages, including the experience and time required for the operator to learn to use them properly, the rapid deterioration of their view in the presence of swelling or secretions, and the fact that they are rather complicated and expensive devices. The Airtraq device has documented benefits in the management of difficult airways; however, routine use clearly requires experience.

  2. Post-processing methods of rendering and visualizing 3-D reconstructed tomographic images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wong, S.T.C.

    The purpose of this presentation is to discuss the computer processing techniques applied to tomographic images, after they have been generated by imaging scanners, for volume visualization. Volume visualization is concerned with the representation, manipulation, and rendering of volumetric data. Since the first digital images were produced from computed tomography (CT) scanners in the mid-1970s, applications of visualization in medicine have expanded dramatically. Today, three-dimensional (3D) medical visualization has expanded from using CT data, the first inherently digital source of 3D medical data, to using data from various medical imaging modalities, including magnetic resonance scanners, positron emission scanners, digital ultrasound, electron and confocal microscopy, and others. We have advanced from rendering anatomy to aid diagnosis and visualize complex anatomic structures to planning and assisting surgery and radiation treatment. New, more accurate and cost-effective procedures for clinical services and biomedical research have become possible by integrating computer graphics technology with medical images. This trend is particularly noticeable in the current market-driven health care environment. For example, interventional imaging, image-guided surgery, and stereotactic and visualization techniques are now making their way into surgical practice. In this presentation, we discuss only computer-display-based approaches to volumetric medical visualization. That is, we assume that the display device available is two-dimensional (2D) in nature and that all analysis of multidimensional image data is to be carried out via the 2D screen of the device. There are technologies, such as holography and virtual reality, that do provide a "true 3D screen". To confine the scope, this presentation will not discuss such approaches.

  3. Design and application of a small size SAFT imaging system for concrete structure

    NASA Astrophysics Data System (ADS)

    Shao, Zhixue; Shi, Lihua; Shao, Zhe; Cai, Jian

    2011-07-01

    A method of ultrasonic imaging detection is presented for quick non-destructive testing (NDT) of concrete structures using the synthetic aperture focusing technique (SAFT). A low-cost ultrasonic sensor array consisting of 12 commercially available low-frequency ultrasonic transducers is designed and manufactured. A channel compensation method is proposed to improve the consistency of the different transducers. The controlling devices for the array scan as well as the virtual instrument for SAFT imaging are designed. In the coarse scan mode, with a scan step of 50 mm, the system can quickly produce an image of a cross section of 600 mm (L) × 300 mm (D) in one measurement. In the refined scan mode, the system reduces the scan step and images the same cross section by moving the sensor array several times. Experiments on a staircase specimen, a concrete slab with an embedded target, and a building floor with an underground pipeline all verify the efficiency of the proposed method.
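The SAFT reconstruction underlying such a system is, at its core, delay-and-sum: for each image point, each transducer's A-scan is sampled at the pulse-echo round-trip travel time and the contributions are summed, so that echoes from a real reflector add coherently. A minimal single-pixel sketch (our own illustration, not the authors' code):

```python
import math

def saft_pixel(signals, positions, x, z, c, fs):
    """Delay-and-sum SAFT value at image point (x, z).

    signals:   list of per-transducer A-scans (sampled waveforms)
    positions: transducer x-coordinates on the surface (z = 0)
    c:         wave speed in the concrete (m/s); fs: sampling rate (Hz)
    """
    acc = 0.0
    for sig, xt in zip(signals, positions):
        t = 2.0 * math.hypot(x - xt, z) / c      # pulse-echo round trip
        i = int(round(t * fs))
        if 0 <= i < len(sig):
            acc += sig[i]
    return acc
```

Looping this over a grid of (x, z) points, with the array shifted between measurements in the refined mode, produces the cross-sectional image.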

  4. MACMULTIVIEW 5.1

    NASA Technical Reports Server (NTRS)

    Norikane, L.

    1994-01-01

    MacMultiview is an interactive tool for the Macintosh II family which allows one to display and make computations utilizing polarimetric radar data collected by the Jet Propulsion Laboratory's imaging SAR (synthetic aperture radar) polarimeter system. The system includes the single-frequency L-band sensor mounted on the NASA CV990 aircraft and its replacement, the multi-frequency P-, L-, and C-band sensors mounted on the NASA DC-8. MacMultiview provides two basic functions: computation of synthesized polarimetric images and computation of polarization signatures. The radar data can be used to compute a variety of images. The total power image displays the sum of the polarized and unpolarized components of the backscatter for each pixel. The magnitude/phase difference image displays the HH (horizontal transmit and horizontal receive polarization) to VV (vertical transmit and vertical receive polarization) phase difference using color. Magnitude is displayed using intensity. The user may also select any transmit and receive polarization combination from which an image is synthesized. This image displays the backscatter which would have been observed had the sensor been configured using the selected transmit and receive polarizations. MacMultiview can also be used to compute polarization signatures, three dimensional plots of backscatter versus transmit and receive polarizations. The standard co-polarization signatures (transmit and receive polarizations are the same) and cross-polarization signatures (transmit and receive polarizations are orthogonal) can be plotted for any rectangular subset of pixels within a radar data set. In addition, the ratio of co- and cross-polarization signatures computed from different subsets within the same data set can also be computed. 
Computed images can be saved in a variety of formats: byte format (headerless format which saves the image as a string of byte values), MacMultiview (a byte image preceded by an ASCII header), and PICT2 format (standard format readable by MacMultiview and other image processing programs for the Macintosh). Images can also be printed on PostScript output devices. Polarization signatures can be saved in either a PICT format or as a text file containing PostScript commands and printed on any QuickDraw output device. The associated Stokes matrices can be stored in a text file. MacMultiview is written in C-language for Macintosh II series computers. MacMultiview will only run on Macintosh II series computers with 8-bit video displays (gray shades or color). The program also requires a minimum configuration of System 6.0, Finder 6.1, and 1Mb of RAM. MacMultiview is NOT compatible with System 7.0. It requires 32-Bit QuickDraw. Note: MacMultiview may not be fully compatible with preliminary versions of 32-Bit QuickDraw. Macintosh Programmer's Workshop and Macintosh Programmer's Workshop C (version 3.0) are required for recompiling and relinking. The standard distribution medium for this package is a set of three 800K 3.5 inch diskettes in Macintosh format. This program was developed in 1989 and updated in 1991. MacMultiview is a copyrighted work with all copyright vested in NASA. QuickDraw, Finder, Macintosh, and System 7 are trademarks of Apple Computer, Inc.
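Polarization synthesis of the kind MacMultiview performs can be sketched from the per-pixel 4x4 Stokes (Kennaugh) matrix: the backscattered power for any transmit/receive pair is the bilinear form of their Stokes vectors with that matrix. An illustrative sketch under standard radar-polarimetry conventions (HH corresponds to orientation ψ = 0°, VV to ψ = 90°, both with ellipticity χ = 0°), not the program's actual code:

```python
import math

def stokes_vector(psi_deg, chi_deg):
    """Unit Stokes vector for a polarization ellipse with orientation
    psi and ellipticity chi (both in degrees)."""
    p, c = math.radians(2 * psi_deg), math.radians(2 * chi_deg)
    return [1.0,
            math.cos(p) * math.cos(c),
            math.sin(p) * math.cos(c),
            math.sin(c)]

def synthesized_power(M, psi_t, chi_t, psi_r, chi_r):
    """Backscatter power for chosen transmit/receive polarizations,
    given a 4x4 Stokes (Kennaugh) matrix M for one pixel."""
    gt = stokes_vector(psi_t, chi_t)
    gr = stokes_vector(psi_r, chi_r)
    return sum(gr[i] * M[i][j] * gt[j] for i in range(4) for j in range(4))
```

Sweeping (ψ, χ) over the transmit and receive ellipses while holding M fixed yields exactly the co- and cross-polarization signature surfaces the record describes.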

  5. Image-based teleconsultation using smartphones or tablets: qualitative assessment of medical experts.

    PubMed

    Boissin, Constance; Blom, Lisa; Wallis, Lee; Laflamme, Lucie

    2017-02-01

    Mobile health has promising potential in improving healthcare delivery by facilitating access to expert advice. Enabling experts to review images on their smartphone or tablet may save valuable time. This study aims at assessing whether images viewed by medical specialists on handheld devices such as smartphones and tablets are perceived to be of comparable quality as when viewed on a computer screen. This was a prospective study comparing the perceived quality of 18 images on three different display devices (smartphone, tablet and computer) by 27 participants (4 burn surgeons and 23 emergency medicine specialists). The images, presented in random order, covered clinical (dermatological conditions, burns, ECGs and X-rays) and non-clinical subjects and their perceived quality was assessed using a 7-point Likert scale. Differences in devices' quality ratings were analysed using linear regression models for clustered data adjusting for image type and participants' characteristics (age, gender and medical specialty). Overall, the images were rated good or very good in most instances and more so for the smartphone (83.1%, mean score 5.7) and tablet (78.2%, mean 5.5) than for a standard computer (70.6%, mean 5.2). Both handheld devices had significantly higher ratings than the computer screen, even after controlling for image type and participants' characteristics. Nearly all experts expressed that they would be comfortable using smartphones (n=25) or tablets (n=26) for image-based teleconsultation. This study suggests that handheld devices could be a substitute for computer screens for teleconsultation by physicians working in emergency settings. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.

  6. High-performance web viewer for cardiac images

    NASA Astrophysics Data System (ADS)

    dos Santos, Marcelo; Furuie, Sergio S.

    2004-04-01

    With the advent of digital devices for medical diagnosis, the use of conventional film in radiology has decreased, so the management and handling of medical images in digital format has become an important and critical task. In cardiology, for example, the main difficulty is displaying dynamic images with the color palette and frame rate used during acquisition by cath, angio and echo systems. Another difficulty is handling large images in memory on any existing personal computer, including thin clients. In this work we present a web-based application that carries out these tasks with robustness and excellent performance, without burdening the server or network. This application provides near-diagnostic-quality display of cardiac images stored as DICOM 3.0 files via a web browser and provides a set of resources for viewing still and dynamic images. It can access image files from local disks or a network connection. Its features include real-time playback, dynamic thumbnail viewing during loading, access to patient database information, image processing tools, linear and angular measurements, on-screen annotations, image printing, export of DICOM images to other image formats, and many others, all within a pleasant, user-friendly interface running in a web browser by means of a Java application. This approach offers several advantages over most medical image viewers, such as ease of installation, integration with other systems through public and standardized interfaces, platform independence, and efficient manipulation and display of medical images, all with high performance.

  7. DiTour

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pelaia, II, Thomas A.

    2014-06-05

    It is common for facilities to have a lobby with a display loop while also requiring an option for guided tours. Existing solutions have required expensive hardware and awkward software. Our solution is relatively low cost as it runs on an iPad connected to an external monitor, and our software provides an intuitive touch interface. The media files are downloaded from a web server onto the device, allowing a mobile option (e.g. displays at conferences). Media may include arbitrary sequences of images, movies or PDF documents. Tour guides can select different tracks of slides to display, and the presentation will return to the default loop after a timeout.

  8. Gamut mapping in a high-dynamic-range color space

    NASA Astrophysics Data System (ADS)

    Preiss, Jens; Fairchild, Mark D.; Ferwerda, James A.; Urban, Philipp

    2014-01-01

    In this paper, we present a novel approach of tone mapping as gamut mapping in a high-dynamic-range (HDR) color space. High- and low-dynamic-range (LDR) images as well as device gamut boundaries can simultaneously be represented within such a color space. This enables a unified transformation of the HDR image into the gamut of an output device (in this paper called HDR gamut mapping). An additional aim of this paper is to investigate the suitability of a specific HDR color space to serve as a working color space for the proposed HDR gamut mapping. For the HDR gamut mapping, we use a recent approach that iteratively minimizes an image-difference metric subject to in-gamut images. A psychophysical experiment on an HDR display shows that the standard reproduction workflow of two subsequent transformations - tone mapping and then gamut mapping - may be improved by HDR gamut mapping.
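    As a minimal sketch of the general idea (iteratively minimizing an image difference subject to the result staying inside the device gamut), the code below uses plain per-pixel squared error and an axis-aligned box gamut as stand-ins for the paper's perceptual image-difference metric and real device gamut boundary; both substitutions are assumptions for illustration.

    ```python
    import numpy as np

    def gamut_map(hdr, lo, hi, steps=200, lr=0.1):
        """Toy projected-gradient gamut mapping: take gradient steps toward
        the HDR original, projecting back into a box-shaped gamut after each
        step. The real method minimizes a perceptual image-difference metric;
        plain squared error stands in for it here."""
        out = np.clip(hdr, lo, hi)          # start from a hard clip
        for _ in range(steps):
            grad = 2.0 * (out - hdr)        # gradient of ||out - hdr||^2
            out = out - lr * grad           # move toward the HDR original
            out = np.clip(out, lo, hi)      # project onto the gamut
        return out

    rng = np.random.default_rng(1)
    hdr = rng.uniform(0.0, 4.0, size=(8, 8, 3))   # synthetic HDR pixels
    mapped = gamut_map(hdr, 0.0, 1.0)
    ```

    With a box gamut and a pixelwise metric the optimum coincides with a hard clip; the iteration only becomes interesting when, as in the paper, the metric couples neighboring pixels and the gamut has a non-trivial shape.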

  9. Motmot, an open-source toolkit for realtime video acquisition and analysis.

    PubMed

    Straw, Andrew D; Dickinson, Michael H

    2009-07-22

    Video cameras sense passively from a distance, offer a rich information stream, and provide intuitively meaningful raw data. Camera-based imaging has thus proven critical for many advances in neuroscience and biology, with applications ranging from cellular imaging of fluorescent dyes to tracking of whole-animal behavior at ecologically relevant spatial scales. Here we present 'Motmot': an open-source software suite for acquiring, displaying, saving, and analyzing digital video in real-time. At the highest level, Motmot is written in the Python computer language. The large amounts of data produced by digital cameras are handled by low-level, optimized functions, usually written in C. This high-level/low-level partitioning and use of select external libraries allow Motmot, with only modest complexity, to perform well as a core technology for many high-performance imaging tasks. In its current form, Motmot allows for: (1) image acquisition from a variety of camera interfaces (package motmot.cam_iface), (2) the display of these images with minimal latency and computer resources using wxPython and OpenGL (package motmot.wxglvideo), (3) saving images with no compression in a single-pass, low-CPU-use format (package motmot.FlyMovieFormat), (4) a pluggable framework for custom analysis of images in realtime and (5) firmware for an inexpensive USB device to synchronize image acquisition across multiple cameras, with analog input, or with other hardware devices (package motmot.fview_ext_trig). These capabilities are brought together in a graphical user interface, called 'FView', allowing an end user to easily view and save digital video without writing any code. One plugin for FView, 'FlyTrax', which tracks the movement of fruit flies in real-time, is included with Motmot, and is described to illustrate the capabilities of FView. Motmot enables realtime image processing and display using the Python computer language. 
In addition to the provided complete applications, the architecture allows the user to write relatively simple plugins, which can accomplish a variety of computer vision tasks and be integrated within larger software systems. The software is available at http://code.astraw.com/projects/motmot.

  10. Imaging flaws in thin metal plates using a magneto-optic device

    NASA Technical Reports Server (NTRS)

    Wincheski, B.; Prabhu, D. R.; Namkung, M.; Birt, E. A.

    1992-01-01

    An account is given of the capabilities of the magneto-optic/eddy-current imager (MEI) apparatus in the case of aging-aircraft structure-type flaws in 2024-T3 Al alloy plates. Attention is given to images of cyclically grown fatigue cracks from riveted joints in fabricated lap-joint structures, electrical discharge machining notches, and corrosion spots. Although conventional eddy-current methods could have been used, the speed and ease of use of the MEI in these tests is unmatched by such means. Results are displayed in real time as a test piece is scanned, furnishing easily interpreted flaw images.

  11. Color enhancement for portable LCD displays in low-power mode

    NASA Astrophysics Data System (ADS)

    Shih, Kuang-Tsu; Huang, Tai-Hsiang; Chen, Homer H.

    2011-09-01

    Switching the backlight of handheld devices to low power mode saves energy but affects the color appearance of an image. In this paper, we consider the chroma degradation problem and propose an enhancement algorithm that incorporates the CIECAM02 appearance model to quantitatively characterize the problem. In the proposed algorithm, we enhance the color appearance of the image in low power mode by weighted linear superposition of the chroma of the image and that of the estimated dim-backlight image. Subjective tests are carried out to determine the perceptually optimal weighting and prove the effectiveness of our framework.
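    The weighted-superposition step can be sketched per pixel as below. HSV saturation is used as a crude stand-in for CIECAM02 chroma (the appearance model the paper actually uses), and the weight w is an arbitrary assumed value; the paper determines the perceptually optimal weighting by subjective tests.

    ```python
    import colorsys

    def enhance_pixel(rgb_full, rgb_dim, w=0.6):
        """Blend the chroma (here: HSV saturation, a crude stand-in for
        CIECAM02 chroma) of the full-backlight image with that of the
        estimated dim-backlight image; w is an assumed weight."""
        h_f, s_f, v_f = colorsys.rgb_to_hsv(*rgb_full)
        h_d, s_d, v_d = colorsys.rgb_to_hsv(*rgb_dim)
        s_new = min(1.0, w * s_f + (1.0 - w) * s_d)
        # Keep the dim image's hue and value; only chroma is boosted.
        return colorsys.hsv_to_rgb(h_d, s_new, v_d)

    # A reddish pixel whose chroma degrades under dim backlight.
    pixel = enhance_pixel((0.8, 0.2, 0.2), (0.4, 0.2, 0.2))
    ```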

  12. Visual Image Sensor Organ Replacement

    NASA Technical Reports Server (NTRS)

    Maluf, David A.

    2014-01-01

    This innovation is a system that augments human vision through a technique called "Sensing Super-position" using a Visual Instrument Sensory Organ Replacement (VISOR) device. The VISOR device translates visual and other sensors (i.e., thermal) into sounds to enable very difficult sensing tasks. Three-dimensional spatial brightness and multi-spectral maps of a sensed image are processed using real-time image processing techniques (e.g. histogram normalization) and transformed into a two-dimensional map of an audio signal as a function of frequency and time. Because the human hearing system is capable of learning to process and interpret extremely complicated and rapidly changing auditory patterns, the translation of images into sounds reduces the risk of accidentally filtering out important clues. The VISOR device was developed to augment the current state-of-the-art head-mounted (helmet) display systems. It provides the ability to sense beyond the human visible light range, to increase human sensing resolution, to use wider angle visual perception, and to improve the ability to sense distances. It also allows compensation for movement by the human or changes in the scene being viewed.
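    A basic image-to-sound mapping of the kind described (columns scanned over time, rows driving frequency, brightness driving amplitude) can be sketched as follows. The scan rate, frequency range, and linear mapping are assumptions for illustration, not the VISOR device's actual transform.

    ```python
    import numpy as np

    def image_to_audio(img, sr=8000, col_dur=0.05, f_lo=200.0, f_hi=2000.0):
        """Scan the image left to right: each column becomes one time slice,
        each row drives a sinusoid whose frequency rises toward the top of
        the image and whose amplitude is the pixel brightness."""
        rows, cols = img.shape
        freqs = np.linspace(f_hi, f_lo, rows)        # top row = highest pitch
        t = np.arange(int(sr * col_dur)) / sr
        slices = []
        for c in range(cols):
            tones = np.sin(2 * np.pi * freqs[:, None] * t[None, :])
            slices.append((img[:, c:c + 1] * tones).sum(axis=0))
        audio = np.concatenate(slices)
        peak = np.abs(audio).max()
        return audio / peak if peak > 0 else audio   # normalize to [-1, 1]

    img = np.zeros((16, 8))
    img[4, :] = 1.0                  # one bright horizontal line -> steady tone
    audio = image_to_audio(img)
    ```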

  13. Vision-based calibration of parallax barrier displays

    NASA Astrophysics Data System (ADS)

    Ranieri, Nicola; Gross, Markus

    2014-03-01

    Static and dynamic parallax barrier displays have become very popular in recent years. Especially for single-viewer applications such as tablets, phones and other hand-held devices, parallax barriers provide a convenient solution for rendering stereoscopic content. In our work we present a computer-vision-based calibration approach that relates the image layer and barrier layer of parallax barrier displays with unknown display geometry, for static or dynamic viewer positions, using homographies. We provide the math and methods to compose the required homographies on the fly and present a way to compute the barrier without the need for any iteration. Our GPU implementation is stable and general and can be used to reduce latency and increase the refresh rate of existing and upcoming barrier methods.
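    The core operation of composing homographies on the fly can be sketched with numpy: chaining two plane-to-plane maps is a matrix product, and applying the composed matrix agrees with applying the maps one after the other. The matrices below are synthetic stand-ins, not calibrated values from the paper.

    ```python
    import numpy as np

    def apply_h(H, pts):
        """Apply a 3x3 homography to Nx2 points (with homogeneous divide)."""
        p = np.hstack([pts, np.ones((len(pts), 1))]) @ H.T
        return p[:, :2] / p[:, 2:3]

    # Synthetic homographies: viewer plane -> image layer, image -> barrier layer.
    H_view_to_image = np.array([[1.1,  0.02,  5.0],
                                [0.01, 0.9,  -3.0],
                                [1e-4, 0.0,   1.0]])
    H_image_to_barrier = np.array([[1.0, 0.0, 0.25],   # small parallax shift
                                   [0.0, 1.0, 0.0],
                                   [0.0, 0.0, 1.0]])

    # Composition "on the fly": one matrix maps viewer coords straight to the barrier.
    H_view_to_barrier = H_image_to_barrier @ H_view_to_image

    pts = np.array([[0.0, 0.0], [100.0, 50.0]])
    direct = apply_h(H_view_to_barrier, pts)
    two_step = apply_h(H_image_to_barrier, apply_h(H_view_to_image, pts))
    ```

    Because homographies act on homogeneous coordinates up to scale, the composed map and the two-step map give identical points after the divide.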

  14. A new concept for medical imaging centered on cellular phone technology.

    PubMed

    Granot, Yair; Ivorra, Antoni; Rubinsky, Boris

    2008-04-30

    According to World Health Organization reports, some three quarters of the world population does not have access to medical imaging. In addition, in developing countries over 50% of medical equipment that is available is not being used because it is too sophisticated or in disrepair or because the health personnel are not trained to use it. The goal of this study is to introduce and demonstrate the feasibility of a new concept in medical imaging that is centered on cellular phone technology and which may provide a solution to medical imaging in underserved areas. The new system replaces the conventional stand-alone medical imaging device with a new medical imaging system made of two independent components connected through cellular phone technology. The independent units are: a) a data acquisition device (DAD) at a remote patient site that is simple, with limited controls and no image display capability and b) an advanced image reconstruction and hardware control multiserver unit at a central site. The cellular phone technology transmits unprocessed raw data from the patient site DAD and receives and displays the processed image from the central site. (This is different from conventional telemedicine where the image reconstruction and control is at the patient site and telecommunication is used to transmit processed images from the patient site). The primary goal of this study is to demonstrate that the cellular phone technology can function in the proposed mode. The feasibility of the concept is demonstrated using a new frequency division multiplexing electrical impedance tomography system, which we have developed for dynamic medical imaging, as the medical imaging modality. The system is used to image through a cellular phone a simulation of breast cancer tumors in a medical imaging diagnostic mode and to image minimally invasive tissue ablation with irreversible electroporation in a medical imaging interventional mode.

  15. Software for keratometry measurements using portable devices

    NASA Astrophysics Data System (ADS)

    Iyomasa, C. M.; Ventura, L.; De Groote, J. J.

    2010-02-01

    In this work we present image processing software for automatic astigmatism measurements, developed for a hand-held keratometer. The system projects 36 light spots, from LEDs, arranged in a precise circle onto the lachrymal film of the examined cornea. The displacement, size and deformation of the reflected image of these light spots are analyzed to provide the keratometry. The purpose of this research is to develop software that performs fast and precise calculations on mainstream mobile devices; in other words, software that can be implemented in portable computer systems that are low cost and easy to handle. This project brings portability to keratometers and is a precursor to a portable corneal topographer.

  16. Industrial Inspection System

    NASA Technical Reports Server (NTRS)

    1993-01-01

    Lixi, Inc. has built a thriving business on NASA-developed x-ray technology. The Low Intensity X-ray Imaging scope (LIXI) was designed to use less than one percent of radiation required by conventional x-ray devices. It is portable and can be used for a variety of industrial inspection systems as well as medical devices. A food processing plant uses the new LIXI Conveyor system to identify small bone fragments in chicken. The chicken packages on a conveyor belt enter an x-ray chamber and the image is displayed on a monitor. Defects measuring less than a millimeter can be detected. An important advantage of the system is its ability to inspect 100 percent of the product right on the production line.

  17. The universal toolbox thermal imager

    NASA Astrophysics Data System (ADS)

    Hollock, Steve; Jones, Graham; Usowicz, Paul

    2003-09-01

    The introduction of Microsoft Pocket PC 2000/2002 has brought standardisation to the operating systems used by the majority of PDA manufacturers. This, coupled with the recent price reductions associated with these devices, has led to a rapid increase in their sales; their use is now common in industrial, commercial and domestic applications throughout the world. This paper describes the results of a programme to develop a thermal imager that interfaces directly to all of these units so as to take advantage of the existing and future installed base of such devices. The imager currently interfaces with virtually any Pocket PC that provides the necessary processing, display and storage capability; as an alternative, the output of the unit can be visualised and processed in real time using a PC or laptop computer. In future, the open architecture employed by this imager will allow it to support all mobile computing devices, including phones and PDAs. The imager has been designed for one-handed or two-handed operation so that it may be pointed at awkward angles or used in confined spaces; this flexibility of use, coupled with the extensive feature range and exceedingly low cost of the imager, is extending the marketplace for thermal imaging from military and professional, through industrial, to the commercial and domestic marketplaces.

  18. [Research on WiFi-based wireless microscopy on a mobile phone and its application].

    PubMed

    Hailan, Jin; Jing, Liu

    2012-11-01

    We propose and demonstrate a new device that acquires microscopic images wirelessly based on a mobile phone and WiFi. The mobile terminal can record, display and store images from the far end via the wireless LAN. Using this system, a series of proof-of-concept experiments monitoring the microscopic images of common objects and liver cancer cells were successfully demonstrated. This system is expected to be of value in experimental investigations involving wireless monitoring of cell cultures, small insects, etc.

  19. Raster Metafile and Raster Metafile Translator

    NASA Technical Reports Server (NTRS)

    Taylor, Nancy L.; Everton, Eric L.; Randall, Donald P.; Gates, Raymond L.; Skeens, Kristi M.

    1989-01-01

    The intent is to present an effort undertaken at NASA Langley Research Center to design a generic raster image format and to develop tools for processing images prepared in this format. Both the Raster Metafile (RM) format and the Raster Metafile Translator (RMT) are addressed. This document is intended to serve a varied audience including: users wishing to display and manipulate raster image data, programmers responsible for either interfacing the RM format with other raster formats or for developing new RMT device drivers, and programmers charged with installing the software on a host platform.

  20. On-screen-display (OSD) menu detection for proper stereo content reproduction for 3D TV

    NASA Astrophysics Data System (ADS)

    Tolstaya, Ekaterina V.; Bucha, Victor V.; Rychagov, Michael N.

    2011-03-01

    Modern consumer 3D TV sets are able to show video content in two different modes: 2D and 3D. In 3D mode, the stereo pair comes from an external device such as a Blu-ray player, satellite receiver, etc. The stereo pair is split into left and right images that are shown one after another. The viewer sees a different image with each eye using shutter glasses properly synchronized with the 3D TV. In addition, some devices that provide the TV with stereo content are able to display additional information by imposing an overlay picture on the video content, an On-Screen-Display (OSD) menu. Some OSDs are not 3D compatible and lead to incorrect 3D reproduction. In this case, the TV set must recognize the type of OSD, determine whether it is 3D compatible, and visualize it correctly by either switching off stereo mode or continuing to show stereo content. We propose a new, stable method for detecting 3D-incompatible OSD menus on stereo content. A conventional OSD is a rectangular area with letters and pictograms, and OSD menus can have different transparency levels and colors. To be 3D compatible, an OSD must be overlaid separately on both images of a stereo pair. The main problem in detecting an OSD is distinguishing whether a color difference is due to OSD presence or to stereo parallax. We applied special techniques to find reliable image differences and additionally used the cue that an OSD usually has very distinctive geometrical features: straight parallel lines. The developed algorithm was tested on our video sequence database, with several types of OSD of different colors and transparency levels overlaid on video content. Detection quality exceeded 99% of true answers.
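    A toy version of the two cues (a reliable left/right image difference plus a straight-line, rectangular-overlay check) can be sketched as below. The thresholds, the synthetic frames, and the row-run heuristic are assumptions for illustration, not the paper's algorithm.

    ```python
    import numpy as np

    def osd_suspect(left, right, diff_thresh=0.2, run_frac=0.5):
        """Flag a 3D-incompatible OSD: large left/right differences that line
        up in long straight horizontal runs (stereo parallax tends to produce
        scattered, not box-shaped, differences). Thresholds are assumed."""
        diff = np.abs(left - right) > diff_thresh
        # Rows where a large fraction of pixels differ suggest a rectangular overlay.
        row_runs = diff.mean(axis=1)
        return bool((row_runs > run_frac).any())

    rng = np.random.default_rng(2)
    left = rng.uniform(size=(40, 64))
    right = left + rng.normal(0.0, 0.02, left.shape)   # small parallax-like noise
    clean = osd_suspect(left, right)                   # no overlay present

    menu = left.copy()
    menu[10:20, 4:60] = 0.0                            # OSD drawn on one eye only
    with_menu = osd_suspect(menu, right)
    ```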

  1. Television, computer and portable display device use by people with central vision impairment

    PubMed Central

    Woods, Russell L; Satgunam, PremNandhini

    2011-01-01

    Purpose: To survey the viewing experience (e.g. hours watched, difficulty) and viewing metrics (e.g. viewing distance, display size) for television (TV), computers and portable visual display devices for normally-sighted (NS) and visually impaired participants. This information may guide visual rehabilitation. Methods: The survey was administered either in person or by telephone interview to 223 participants, of whom 104 had low vision (LV, worse than 6/18, age 22 to 90y, 54 males) and 94 were NS (visual acuity 6/9 or better, age 20 to 86y, 50 males). Depending on their situation, NS participants answered up to 38 questions and LV participants answered up to a further 10 questions. Results: Many LV participants reported at least "some" difficulty watching TV (71/103), at least "often" having difficulty with computer displays (40/76), and extreme difficulty watching videos on handheld devices (11/16). The average daily TV viewing was slightly, but not significantly, higher for LV participants (3.6h) than for NS participants (3.0h). Only 18% of LV participants used visual aids (all optical) to watch TV. Most LV participants obtained effective magnification through a reduced viewing distance for both TV and computer displays. Younger LV participants also used larger displays than older LV participants to obtain increased magnification. About half of TV viewing time occurred in the absence of a companion for both LV and NS participants. The mean number of TVs at home reported by LV participants (2.2) was slightly but not significantly (p=0.09) higher than for NS participants (2.0). LV participants were equally likely to have a computer but were significantly (p=0.004) less likely to access the internet (73/104) compared to NS participants (82/94). Most LV participants expressed an interest in image-enhancing technology for TV viewing (67/104) and, among computer users, for computer use (50/74). 
Conclusion: In this study, both NS and LV participants had comparable video viewing habits. Most LV participants in our sample reported difficulty watching TV and indicated an interest in assistive technology, such as image enhancement. As our participants reported that at least half their video viewing hours are spent alone and that there is usually more than one TV per household, there are opportunities to use image enhancement on the TVs of LV viewers without interfering with the viewing experience of NS viewers. PMID:21410501

  2. Non-binary Colour Modulation for Display Device Based on Phase Change Materials.

    PubMed

    Ji, Hong-Kai; Tong, Hao; Qian, Hang; Hui, Ya-Juan; Liu, Nian; Yan, Peng; Miao, Xiang-Shui

    2016-12-19

    A reflective-type display device based on phase change materials is attractive because of its ultrafast response time and high resolution compared with a conventional display device. This paper proposes and demonstrates a unique display device in which multicolour changing can be achieved on a single device by the selective crystallization of double layer phase change materials. The optical contrast is optimized by the availability of a variety of film thicknesses of two phase change layers. The device exhibits a low sensitivity to the angle of incidence, which is important for display and colour consistency. The non-binary colour rendering on a single device is demonstrated for the first time using optical excitation. The device shows the potential for ultrafast display applications.

  3. A goggle navigation system for cancer resection surgery

    NASA Astrophysics Data System (ADS)

    Xu, Junbin; Shao, Pengfei; Yue, Ting; Zhang, Shiwu; Ding, Houzhu; Wang, Jinkun; Xu, Ronald

    2014-02-01

    We describe a portable fluorescence goggle navigation system for cancer margin assessment during oncologic surgeries. The system consists of a computer, a head mount display (HMD) device, a near infrared (NIR) CCD camera, a miniature CMOS camera, and a 780 nm laser diode excitation light source. The fluorescence and the background images of the surgical scene are acquired by the CCD camera and the CMOS camera respectively, co-registered, and displayed on the HMD device in real-time. The spatial resolution and the co-registration deviation of the goggle navigation system are evaluated quantitatively. The technical feasibility of the proposed goggle system is tested in an ex vivo tumor model. Our experiments demonstrate the feasibility of using a goggle navigation system for intraoperative margin detection and surgical guidance.
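    The display step (the fluorescence frame blended over the co-registered background frame) can be sketched as a simple masked alpha blend. The green pseudo-color, the threshold, and the alpha value are assumed for illustration, not the system's calibrated parameters.

    ```python
    import numpy as np

    def overlay(background, fluorescence, alpha=0.5, thresh=0.3):
        """Blend a grayscale fluorescence frame (0..1) over an RGB background:
        pixels above threshold are tinted green with weight alpha, mimicking
        the co-registered display on the head-mounted display."""
        out = background.astype(float).copy()
        mask = fluorescence > thresh
        green = np.zeros_like(out)
        green[..., 1] = 1.0
        out[mask] = (1 - alpha) * out[mask] + alpha * green[mask]
        return out

    bg = np.full((32, 32, 3), 0.4)                 # synthetic background frame
    fl = np.zeros((32, 32))
    fl[8:16, 8:16] = 0.9                           # simulated fluorescence signal
    frame = overlay(bg, fl)
    ```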

  4. Initial experience with a nuclear medicine viewing workstation

    NASA Astrophysics Data System (ADS)

    Witt, Robert M.; Burt, Robert W.

    1992-07-01

    Graphical user interface (GUI) workstations are now available from commercial vendors. We recently installed a GUI workstation in our nuclear medicine reading room for the exclusive use of staff and resident physicians. The system is built on a Macintosh platform and has been available as a DELTAmanager from MedImage and more recently as an ICON V from Siemens Medical Systems. The workstation provides only display functions and connects to our existing nuclear medicine imaging system via Ethernet. The system has some processing capabilities to create oblique, sagittal and coronal views from transverse tomographic views. Hard copy output is via a screen save device and a thermal color printer. The DELTAmanager replaced a MicroDELTA workstation which had both process and view functions. The mouse-activated GUI has made remarkable changes to physicians' use of the nuclear medicine viewing system. Training time to view and review studies has been reduced from hours to about 30 minutes. Generation of oblique views and display of brain and heart tomographic studies has been reduced from about 30 minutes of a technician's time to about 5 minutes of a physician's time. Overall operator functionality has been increased so that resident physicians with little prior computer experience can access all images on the image server and display pertinent patient images when consulting with other staff.

  5. SPAM- SPECTRAL ANALYSIS MANAGER (DEC VAX/VMS VERSION)

    NASA Technical Reports Server (NTRS)

    Solomon, J. E.

    1994-01-01

    The Spectral Analysis Manager (SPAM) was developed to allow easy qualitative analysis of multi-dimensional imaging spectrometer data. Imaging spectrometers provide sufficient spectral sampling to define unique spectral signatures on a per pixel basis. Thus direct material identification becomes possible for geologic studies. SPAM provides a variety of capabilities for carrying out interactive analysis of the massive and complex datasets associated with multispectral remote sensing observations. In addition to normal image processing functions, SPAM provides multiple levels of on-line help, a flexible command interpretation, graceful error recovery, and a program structure which can be implemented in a variety of environments. SPAM was designed to be visually oriented and user-friendly with the liberal employment of graphics for rapid and efficient exploratory analysis of imaging spectrometry data. SPAM provides functions to enable arithmetic manipulations of the data, such as normalization, linear mixing, band ratio discrimination, and low-pass filtering. SPAM can be used to examine the spectra of an individual pixel or the average spectra over a number of pixels. SPAM also supports image segmentation, fast spectral signature matching, spectral library usage, mixture analysis, and feature extraction. High-speed spectral signature matching is performed by using a binary spectral encoding algorithm to separate and identify mineral components present in the scene. The same binary encoding allows automatic spectral clustering. Spectral data may be entered from a digitizing tablet, stored in a user library, compared to the master library containing mineral standards, and then displayed as a time-sequence spectral movie. The output plots, histograms, and stretched histograms produced by SPAM can be sent to a lineprinter, stored as separate RGB disk files, or sent to a Quick Color Recorder. 
SPAM is written in C for interactive execution and is available for two different machine environments. There is a DEC VAX/VMS version with a central memory requirement of approximately 242K of 8 bit bytes and a machine independent UNIX 4.2 version. The display device currently supported is the Raster Technologies display processor. Other 512 x 512 resolution color display devices, such as De Anza, may be added with minor code modifications. This program was developed in 1986.
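    The binary spectral encoding idea described above can be sketched as follows: threshold each spectrum at its own mean to get one bit per band, then match by Hamming distance. The exact encoding SPAM uses, and the toy "mineral" signatures below, are assumptions for illustration.

    ```python
    import numpy as np

    def encode(spectrum):
        """Binary-encode a spectrum: one bit per band, set where the band
        exceeds the spectrum's own mean (a simple form of binary spectral
        encoding; SPAM's exact scheme is assumed)."""
        s = np.asarray(spectrum, dtype=float)
        return (s > s.mean()).astype(np.uint8)

    def best_match(pixel, library):
        """Return the library key whose code has the smallest Hamming
        distance to the pixel's code."""
        code = encode(pixel)
        return min(library, key=lambda k: int(np.sum(code != encode(library[k]))))

    library = {                      # toy signatures, 8 bands each
        "kaolinite": [0.2, 0.3, 0.8, 0.9, 0.4, 0.3, 0.7, 0.8],
        "hematite":  [0.9, 0.8, 0.3, 0.2, 0.2, 0.7, 0.8, 0.3],
    }
    pixel = [0.25, 0.35, 0.75, 0.85, 0.35, 0.3, 0.65, 0.75]  # noisy kaolinite-like
    match = best_match(pixel, library)
    ```

    Because the encoding discards amplitude and keeps only shape, matching reduces to cheap bitwise comparisons, which is what makes the high-speed search and clustering practical.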

  6. SPAM- SPECTRAL ANALYSIS MANAGER (UNIX VERSION)

    NASA Technical Reports Server (NTRS)

    Solomon, J. E.

    1994-01-01

    The Spectral Analysis Manager (SPAM) was developed to allow easy qualitative analysis of multi-dimensional imaging spectrometer data. Imaging spectrometers provide sufficient spectral sampling to define unique spectral signatures on a per pixel basis. Thus direct material identification becomes possible for geologic studies. SPAM provides a variety of capabilities for carrying out interactive analysis of the massive and complex datasets associated with multispectral remote sensing observations. In addition to normal image processing functions, SPAM provides multiple levels of on-line help, a flexible command interpretation, graceful error recovery, and a program structure which can be implemented in a variety of environments. SPAM was designed to be visually oriented and user-friendly with the liberal employment of graphics for rapid and efficient exploratory analysis of imaging spectrometry data. SPAM provides functions to enable arithmetic manipulations of the data, such as normalization, linear mixing, band ratio discrimination, and low-pass filtering. SPAM can be used to examine the spectra of an individual pixel or the average spectra over a number of pixels. SPAM also supports image segmentation, fast spectral signature matching, spectral library usage, mixture analysis, and feature extraction. High-speed spectral signature matching is performed by using a binary spectral encoding algorithm to separate and identify mineral components present in the scene. The same binary encoding allows automatic spectral clustering. Spectral data may be entered from a digitizing tablet, stored in a user library, compared to the master library containing mineral standards, and then displayed as a time-sequence spectral movie. The output plots, histograms, and stretched histograms produced by SPAM can be sent to a lineprinter, stored as separate RGB disk files, or sent to a Quick Color Recorder. 
SPAM is written in C for interactive execution and is available for two different machine environments. There is a DEC VAX/VMS version with a central memory requirement of approximately 242K of 8 bit bytes and a machine independent UNIX 4.2 version. The display device currently supported is the Raster Technologies display processor. Other 512 x 512 resolution color display devices, such as De Anza, may be added with minor code modifications. This program was developed in 1986.

  7. Exploring virtual reality technology and the Oculus Rift for the examination of digital pathology slides

    PubMed Central

    Farahani, Navid; Post, Robert; Duboy, Jon; Ahmed, Ishtiaque; Kolowitz, Brian J.; Krinchai, Teppituk; Monaco, Sara E.; Fine, Jeffrey L.; Hartman, Douglas J.; Pantanowitz, Liron

    2016-01-01

    Background: Digital slides obtained from whole slide imaging (WSI) platforms are typically viewed in two dimensions using desktop personal computer monitors or, more recently, on mobile devices. To the best of our knowledge, no studies have examined viewing digital pathology slides in a virtual reality (VR) environment. VR technology enables users to be artificially immersed in and interact with a computer-simulated world. Oculus Rift is among the world's first consumer-targeted VR headsets, intended primarily for enhanced gaming. Our aim was to explore the use of the Oculus Rift for examining digital pathology slides in a VR environment. Methods: An Oculus Rift Development Kit 2 (DK2) was connected to a 64-bit computer running Virtual Desktop software. Glass slides from twenty randomly selected lymph node cases (ten with benign and ten with malignant diagnoses) were digitized using a WSI scanner. Three pathologists reviewed these digital slides on a 27-inch 5K display and with the Oculus Rift after a 2-week washout period. Recorded endpoints included concordance of final diagnoses and the time required to examine slides. The pathologists also rated their ease of navigation, image quality, and diagnostic confidence for both modalities. Results: There was 90% diagnostic concordance when reviewing WSI using the 5K display and the Oculus Rift. The time required to examine digital pathology slides on the 5K display averaged 39 s (range 10–120 s), compared to 62 s with the Oculus Rift (range 15–270 s). All pathologists confirmed that digital pathology slides were easily viewable in a VR environment. The ratings for image quality and diagnostic confidence were higher when using the 5K display. Conclusion: Using the Oculus Rift DK2 to view and navigate pathology whole slide images in a virtual environment is feasible for diagnostic purposes. However, image resolution using the Oculus Rift device was limited. 
Interactive VR technologies such as the Oculus Rift are novel tools that may be of use in digital pathology. PMID:27217972

  8. An advanced programmable/reconfigurable color graphics display system for crew station technology research

    NASA Technical Reports Server (NTRS)

    Montoya, R. J.; England, J. N.; Hatfield, J. J.; Rajala, S. A.

    1981-01-01

    The hardware configuration, software organization, and applications software for the NASA IKONAS color graphics display system are described. The system was created at the Langley Research Center Display Device Laboratory to develop, evaluate, and demonstrate advanced generic concepts, technology, and systems integration techniques for electronic crew station systems of future civil aircraft. A minicomputer with 64K of core memory acts as a host for a raster scan graphics display generator. The architectures of the hardware system and the graphics display system are presented. The applications software features a FORTRAN-based model of an aircraft, a display system, and a utility program for real-time communications. The model accepts inputs from a two-dimensional joystick and outputs a set of aircraft states. Ongoing and planned work on image segmentation/generation, specialized graphics procedures, and a higher-level-language user interface is discussed.

  9. Head-mounted spatial instruments: Synthetic reality or impossible dream

    NASA Technical Reports Server (NTRS)

    Ellis, Stephen R.; Grunwald, Arthur; Velger, Mordekhai

    1988-01-01

    A spatial instrument is defined as a display device that has been either geometrically or symbolically enhanced to better enable a user to accomplish a particular task. Research conducted over the past several years on 3-D spatial instruments has shown that perspective displays, even when viewed from the correct viewpoint, are subject to systematic viewer biases. These biases interfere with correct spatial judgements of the presented pictorial information. It has also been found that deliberate, appropriate geometric distortion of the perspective projection of an image can improve user performance. These two findings raise intriguing questions concerning the design of head-mounted spatial instruments. The design of such instruments may not only require the introduction of compensatory distortions to remove the naturally occurring biases but may also benefit significantly from the introduction of artificial distortions that enhance performance. These image manipulations, however, can cause a loss of visual-vestibular coordination and induce motion sickness. Additionally, adaptation to these manipulations is apt to be impaired by computational delays in the image display. Consequently, the design of head-mounted spatial instruments will require an understanding of the tolerable limits of visual-vestibular discord.

  10. Method of improving a digital image

    NASA Technical Reports Server (NTRS)

    Jobson, Daniel J. (Inventor); Woodell, Glenn A. (Inventor); Rahman, Zia-ur (Inventor)

    1999-01-01

    A method of improving a digital image is provided. The image is initially represented by digital data indexed to represent positions on a display. The digital data is indicative of an intensity value I_i(x,y) for each position (x,y) in each i-th spectral band. The intensity value for each position in each i-th spectral band is adjusted to generate an adjusted intensity value R_i(x,y) in accordance with

        R_i(x,y) = sum_{n=1}^{N} W_n [ log I_i(x,y) - log( I_i(x,y) * F_n(x,y) ) ],   i = 1, ..., S,

    where S is the number of unique spectral bands included in said digital data, N is the number of surround functions, W_n is a weighting factor, and * denotes the convolution operator. Each surround function F_n(x,y) is uniquely scaled to improve an aspect of the digital image, e.g., dynamic range compression, color constancy, and lightness rendition. The adjusted intensity value for each position in each i-th spectral band is filtered with a common function and then presented to a display device. For color images, a novel color restoration step is added to give the image true-to-life color that closely matches human observation.
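    The adjustment described above is structurally similar to a multi-scale retinex. As a rough illustration only (not the patented implementation), assuming Gaussian surround functions F_n and uniform weights W_n, one spectral band could be processed as:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def multi_scale_retinex(image, sigmas=(15, 80, 250), weights=None):
    """Sketch of a multi-scale retinex adjustment for one spectral band.

    image  : 2-D array of non-negative intensity values I(x, y)
    sigmas : scales of the (assumed Gaussian) surround functions F_n
    weights: weighting factors W_n; uniform by default
    """
    if weights is None:
        weights = [1.0 / len(sigmas)] * len(sigmas)
    image = image.astype(float) + 1.0  # offset to avoid log(0)
    result = np.zeros_like(image)
    for w, s in zip(weights, sigmas):
        surround = gaussian_filter(image, sigma=s)  # F_n * I by convolution
        result += w * (np.log(image) - np.log(surround))
    return result
```

    A constant image yields an all-zero adjusted result, since each surround equals the image itself; the surround scales and the log-domain offset are illustrative choices.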

  11. A low-cost multimodal head-mounted display system for neuroendoscopic surgery.

    PubMed

    Xu, Xinghua; Zheng, Yi; Yao, Shujing; Sun, Guochen; Xu, Bainan; Chen, Xiaolei

    2018-01-01

    With rapid advances in technology, wearable devices such as head-mounted displays (HMDs) have been adopted for various uses in medical science, ranging from simple fitness aids to surgical assistance. We aimed to investigate the feasibility and practicability of a low-cost multimodal HMD system in neuroendoscopic surgery. A multimodal HMD system, consisting mainly of an HMD with two built-in displays, an action camera, and a laptop computer displaying reconstructed medical images, was developed to assist neuroendoscopic surgery. With this tightly integrated system, the neurosurgeon could freely switch between the endoscopic image, three-dimensional (3D) reconstructed virtual endoscopy images, and images of the surrounding environment. Using a Leap Motion controller, the neurosurgeon could adjust or rotate the 3D virtual endoscopic images at a distance to better understand the positional relation between lesions and normal tissues. A total of 21 consecutive patients with ventricular system diseases underwent neuroendoscopic surgery with the aid of this system. All operations were accomplished successfully, and no system-related complications occurred. The HMD was comfortable to wear and easy to operate. The screen resolution of the HMD was high enough for the neurosurgeon to operate carefully. With the system, the neurosurgeon could gain a better understanding of lesions by freely switching among images of different modalities. The system had a steep learning curve, meaning skill with it increased quickly. Compared with commercially available surgical assistant instruments, this system was relatively low-cost. The multimodal HMD system is feasible, practical, helpful, and relatively cost-efficient in neuroendoscopic surgery.

  12. High collimated coherent illumination for reconstruction of digitally calculated holograms: design and experimental realization

    NASA Astrophysics Data System (ADS)

    Morozov, Alexander; Dubinin, German; Dubynin, Sergey; Yanusik, Igor; Kim, Sun Il; Choi, Chil-Sung; Song, Hoon; Lee, Hong-Seok; Putilin, Andrey; Kopenkin, Sergey; Borodin, Yuriy

    2017-06-01

    Future commercialization of glasses-free holographic real-3D displays requires not only appropriate image quality but also a slim design of the backlight unit and the whole display device to match market needs. While much research has aimed to solve the computational issues of forming computer-generated holograms for 3D holographic displays, less attention has been paid to the development of backlight units suitable for 3D holographic display applications with the form factor of conventional 2D display systems. We therefore report a coherent backlight unit for a 3D holographic display with thickness comparable to commercially available 2D displays (cell phones, tablets, laptops, etc.). The coherent backlight unit provides uniform, highly collimated, and efficient illumination of the spatial light modulator. Realization of such a backlight unit is possible thanks to holographic optical elements, based on volume gratings, that construct a coherent collimated beam to illuminate the display plane. The design, recording, and measurement of a 5.5-inch coherent backlight unit based on two holographic optical elements are presented in this paper.

  13. Toward the light field display: autostereoscopic rendering via a cluster of projectors.

    PubMed

    Yang, Ruigang; Huang, Xinyu; Li, Sifang; Jaynes, Christopher

    2008-01-01

    Ultimately, a display device should be capable of reproducing the visual effects observed in reality. In this paper we introduce an autostereoscopic display that uses a scalable array of digital light projectors and a projection screen augmented with microlenses to simulate a light field for a given three-dimensional scene. Physical objects emit or reflect light in all directions to create a light field that can be approximated by the light field display. The display can simultaneously provide many viewers at different viewpoints with a stereoscopic effect without head tracking or special viewing glasses. This work focuses on two important technical problems related to the light field display: calibration and rendering. We present a solution to automatically calibrate the light field display using a camera and introduce two efficient algorithms to render the special multi-view images by exploiting their spatial coherence. The effectiveness of our approach is demonstrated with a four-projector prototype that can display dynamic imagery with full parallax.

  14. Visualizing 3-D microscopic specimens

    NASA Astrophysics Data System (ADS)

    Forsgren, Per-Ola; Majlof, Lars L.

    1992-06-01

    The confocal microscope can be used in a vast number of fields and applications to gather more information than is possible with a regular light microscope, in particular about depth. Compared to other three-dimensional imaging devices such as CAT, NMR, and PET, the variations of the objects studied are larger and not known from macroscopic dissections. It is therefore important to have several complementary ways of displaying the gathered information. We present a system where the user can choose display techniques such as extended focus, depth coding, solid surface modeling, maximum intensity, and other techniques, some of which may be combined. A graphical user interface provides easy and direct control of all input parameters. Motion and stereo are available options. Many three-dimensional imaging devices give recordings where one dimension has different resolution and sampling than the other two, which requires interpolation to obtain correct geometry. We have evaluated algorithms with interpolation in object space and in projection space. There are many ways to simplify the geometrical transformations to gain performance. We present results of some ways to simplify the calculations.

  15. Polarization-tuned Dynamic Color Filters Incorporating a Dielectric-loaded Aluminum Nanowire Array

    PubMed Central

    Raj Shrestha, Vivek; Lee, Sang-Shin; Kim, Eun-Soo; Choi, Duk-Yong

    2015-01-01

    Nanostructured spectral filters enabling dynamic color-tuning are saliently attractive for implementing ultra-compact color displays and imaging devices. Realization of polarization-induced dynamic color-tuning via one-dimensional periodic nanostructures is highly challenging due to the absence of plasmonic resonances for transverse-electric polarization. Here we demonstrate highly efficient dynamic subtractive color filters incorporating a dielectric-loaded aluminum nanowire array, providing a continuum of customized color according to the incident polarization. Dynamic color filtering was realized relying on selective suppression in transmission spectra via plasmonic resonance at a metal-dielectric interface and guided-mode resonance for a metal-clad dielectric waveguide, each occurring at their characteristic wavelengths for transverse-magnetic and electric polarizations, respectively. A broad palette of colors, including cyan, magenta, and yellow, has been attained with high transmission beyond 80%, by tailoring the period of the nanowire array and the incident polarization. Thanks to low cost, high durability, and mass producibility of the aluminum adopted for the proposed devices, they are anticipated to be diversely applied to color displays, holographic imaging, information encoding, and anti-counterfeiting. PMID:26211625

  16. An Algorithm to Detect the Retinal Region of Interest

    NASA Astrophysics Data System (ADS)

    Şehirli, E.; Turan, M. K.; Demiral, E.

    2017-11-01

    The retina is one of the important layers of the eye, containing cells sensitive to colour and light as well as nerve fibers. The retina can be imaged using medical devices such as a fundus camera or an ophthalmoscope. Hence, lesions associated with many diseases of the eye, such as microaneurysms, haemorrhages, and exudates, can be detected by examining the images taken by these devices. In computer vision and biomedical areas, studies to detect lesions of the eye automatically have been conducted for a long time. In order to make automated detections, the concept of ROI may be utilized. ROI, which stands for region of interest, generally serves the purpose of focusing on particular targets. The main concentration of this paper is an algorithm to automatically detect the retinal region of interest in different retinal images within a software application. The algorithm consists of three stages: a pre-processing stage, detection of the ROI on the processed images, and overlapping of the input image with the obtained ROI of the image.
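    The three-stage structure (pre-process, detect the ROI, overlap it with the input) can be sketched in toy form. The simple brightness threshold below is a stand-in assumption, not the authors' algorithm:

```python
import numpy as np

def retinal_roi(image, threshold=10):
    """Toy sketch of a three-stage ROI pipeline: pre-process the image,
    build an ROI mask, then overlap the mask with the input image.

    image: 2-D grayscale fundus image as a numpy array.
    """
    # Stage 1: pre-processing -- simple contrast stretch to the 0..255 range
    img = image.astype(float)
    img = 255.0 * (img - img.min()) / max(np.ptp(img), 1e-9)
    # Stage 2: ROI detection -- the fundus disc is brighter than the dark
    # background border, so a threshold separates it (illustrative rule)
    mask = img > threshold
    # Stage 3: overlap -- keep input pixels inside the ROI, zero elsewhere
    return np.where(mask, image, 0), mask
```

    On a synthetic image with a bright central disc on a dark border, the mask selects the disc and the overlap keeps only those input pixels.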

  17. The ACR-NEMA Digital Imaging And Communications Standard: Evolution, Overview And Implementation Considerations

    NASA Astrophysics Data System (ADS)

    Alzner, Edgar; Murphy, Laura

    1986-06-01

    The growing digital nature of radiology images led to a recognition that compatibility of communication between imaging, display, and data storage devices of different modalities and different manufacturers is necessary. The ACR-NEMA Digital Imaging and Communications Standard Committee was formed to develop a communications standard for radiological images. This standard includes the overall structure of a communication message and the protocols for bi-directional communication using end-to-end connections. The evolution and rationale of the ACR-NEMA Digital Imaging and Communications Standard are described. An overview is provided and some practical implementation considerations are discussed. PACS will become reality only if the medical community accepts and implements the ACR-NEMA Standard.

  18. A Dedicated Microprocessor Controller for a Bound Document Scanner.

    DTIC Science & Technology

    1984-06-01

    focused onto the CCD, which converts the image into 2048 pixels. After the pixel data are processed by the scanner hardware, they are sent to the display ... in real time after the data on each of the 2048 pixel elements has been transferred out of the device. Display-control commands and ... (Fig. 4.9: 2716 EPROM block diagram and pin assignment.)

  19. A microdisplay-based HUD for automotive applications: Backplane design, planarization, and optical implementation

    NASA Astrophysics Data System (ADS)

    Schuck, Miller Harry

    Automotive head-up displays require compact, bright, and inexpensive imaging systems. In this thesis, a compact head-up display (HUD) utilizing liquid-crystal-on-silicon microdisplay technology is presented from concept to implementation. The thesis comprises three primary areas of HUD research: the specification, design, and implementation of a compact HUD optical system; the development of a wafer planarization process to enhance reflective device brightness and light immunity; and the design, fabrication, and testing of an inexpensive 640 x 512 pixel active matrix backplane intended to meet the HUD requirements. The thesis addresses the HUD problem at three levels: the systems level, the device level, and the materials level. At the systems level, the optical design of an automotive HUD must meet several competing requirements, including high image brightness, compact packaging, video-rate performance, and low cost. An optical system design that meets these competing requirements has been developed utilizing a fully reconfigurable reflective microdisplay. The design consists of two optical stages: a projector stage that magnifies the display, and a second stage that forms the virtual image eventually seen by the driver. A key component of the optical system is a diffraction grating/field lens that forms a large viewing eyebox while reducing the optical system complexity. Image quality, biocular disparity, and luminous efficacy were analyzed, and results of the optical implementation are presented. At the device level, the automotive HUD requires a reconfigurable, video-rate, high-resolution image source for applications such as navigation and night vision. The design of a 640 x 512 pixel active matrix backplane that meets the requirements of the HUD is described. The backplane was designed to produce digital field-sequential color images at video rates utilizing fast-switching liquid crystal as the modulation layer. 
The design methodology is discussed, and the example of a clock generator is described from design to implementation. Electrical and optical test results of the fabricated backplane are presented. At the materials level, a planarization method was developed to meet the stringent brightness requirements of automotive HUDs. The research efforts described here have resulted in a simple, low-cost post-processing method for planarizing microdisplay substrates based on a spin-cast polymeric resin, benzocyclobutene (BCB). Six-fold reductions in substrate step height were accomplished with a single coating. Via masking and dry etching methods were developed. High-reflectivity metal was deposited and patterned over the planarized substrate to produce high-aperture pixel mirrors. The process is simple and rapid, and results in microdisplays better able to meet the stringent requirements of high-brightness display systems. Methods and results of the post-processing are described.

  20. A tone mapping operator based on neural and psychophysical models of visual perception

    NASA Astrophysics Data System (ADS)

    Cyriac, Praveen; Bertalmio, Marcelo; Kane, David; Vazquez-Corral, Javier

    2015-03-01

    High dynamic range imaging techniques involve capturing and storing real-world radiance values that span many orders of magnitude. However, common display devices can usually reproduce intensity ranges only up to two to three orders of magnitude. Therefore, in order to display a high dynamic range image on a low dynamic range screen, the dynamic range of the image needs to be compressed without losing details or introducing artefacts, a process called tone mapping. A good tone mapping operator must be able to produce a low dynamic range image that matches the perception of the real-world scene as closely as possible. We propose a two-stage tone mapping approach, in which the first stage is a global method for range compression based on the gamma curve that best equalizes the lightness histogram, and the second stage performs local contrast enhancement and color induction using neural activity models for the visual cortex.
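    The first (global) stage can be sketched under stated assumptions: luminance normalized to (0, 1], a brute-force search over candidate gammas, and histogram flatness measured by L1 distance to a uniform histogram. This is an illustration of the idea, not the authors' operator:

```python
import numpy as np

def best_gamma(luminance, gammas=np.linspace(0.1, 3.0, 30), bins=64):
    """Pick the gamma whose curve best equalizes the lightness histogram
    (stage one of the two-stage tone mapper described above).

    luminance: flat array of values normalized to the range (0, 1].
    Returns the gamma minimizing the L1 distance between the histogram
    of luminance**gamma and a flat (equalized) histogram.
    """
    flat = np.full(bins, 1.0 / bins)
    best, best_err = None, np.inf
    for g in gammas:
        hist, _ = np.histogram(luminance ** g, bins=bins, range=(0.0, 1.0))
        hist = hist / hist.sum()
        err = np.abs(hist - flat).sum()
        if err < best_err:
            best, best_err = g, err
    return best

def tone_map(luminance, gamma):
    """Global range compression: apply the chosen gamma curve."""
    return luminance ** gamma
```

    For samples distributed as u**2 (u uniform), the search recovers a gamma near 0.5, the curve that exactly equalizes that distribution.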

  1. IR Hiding: A Method to Prevent Video Re-shooting by Exploiting Differences between Human Perceptions and Recording Device Characteristics

    NASA Astrophysics Data System (ADS)

    Yamada, Takayuki; Gohshi, Seiichi; Echizen, Isao

    A method is described for preventing images and videos displayed on screens from being re-shot by digital cameras and camcorders. Conventional methods using digital watermarking for re-shooting prevention embed content IDs into images and videos, which helps to identify the place and time where the actual content was shot. However, these methods do not actually prevent digital content from being re-shot by camcorders. We developed countermeasures that stop re-shooting by exploiting the differences between the sensory characteristics of humans and devices; the countermeasures require no additional functions on the user-side devices. The method uses infrared (IR) light to corrupt the content recorded by CCD or CMOS devices, so that re-shot content is unusable. To validate the method, we developed a prototype system and implemented it on a 100-inch cinema screen. Experimental evaluations showed that the method effectively prevents re-shooting.

  2. Design and implementation of a cartographic client application for mobile devices using SVG Tiny and J2ME

    NASA Astrophysics Data System (ADS)

    Hui, L.; Behr, F.-J.; Schröder, D.

    2006-10-01

    The dissemination of digital geospatial data is now possible on mobile devices such as PDAs (personal digital assistants) and smartphones. Mobile devices that support J2ME (Java 2 Micro Edition) offer users and developers an open interface, which they can use to develop or download software according to their own demands. Currently a WMS (Web Map Service) can deliver not only traditional raster images but also vector images. SVGT (Scalable Vector Graphics Tiny) is a subset of SVG (Scalable Vector Graphics), and because of its precise vector information, original styling, and small file size, the SVGT format is well suited for geographic mapping purposes, especially on mobile devices with limited network bandwidth. This paper describes the development of a cartographic client for mobile devices using SVGT and J2ME technology. The mobile device is simulated on a desktop computer for a series of tests with a WMS; for example, sending a request, receiving the response data from the WMS, and then displaying images in both vector and raster formats. Analysis and design of the system structure, such as the user interface and code structure, are discussed; the limitations of mobile devices must be taken into consideration for such applications. The parsing of the XML document received from the WMS after the GetCapabilities request and the visual realization of SVGT and PNG (Portable Network Graphics) images are important implementation issues. Finally, the client was tested successfully on Nokia S40/60 mobile phones.
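    The GetMap request such a client sends can be sketched as URL construction over standard WMS 1.1.1 parameters. The endpoint, layer name, and bounding box below are hypothetical, and the client in the paper is written in J2ME rather than Python; this only illustrates the request shape:

```python
from urllib.parse import urlencode

def getmap_url(base_url, layers, bbox, width, height,
               fmt="image/svg+xml", srs="EPSG:4326"):
    """Build a WMS GetMap request URL of the kind a mobile map client
    sends. base_url is a hypothetical WMS endpoint; the parameter names
    follow the WMS 1.1.1 specification.
    """
    params = {
        "SERVICE": "WMS", "VERSION": "1.1.1", "REQUEST": "GetMap",
        "LAYERS": ",".join(layers),
        "BBOX": ",".join(str(v) for v in bbox),  # minx,miny,maxx,maxy
        "WIDTH": width, "HEIGHT": height,
        "SRS": srs,
        "FORMAT": fmt,  # SVGT-style vector output, or image/png for raster
    }
    return base_url + "?" + urlencode(params)
```

    Swapping `fmt` between a vector and a raster MIME type is how the client would request SVGT or PNG output from the same service.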

  3. Visible-Infrared Hyperspectral Image Projector

    NASA Technical Reports Server (NTRS)

    Bolcar, Matthew

    2013-01-01

    The VisIR HIP generates spatially and spectrally complex scenes. The generated scenes simulate real-world targets viewed by various remote sensing instruments. The VisIR HIP consists of two subsystems: a spectral engine and a spatial engine. The spectral engine generates spectrally complex uniform illumination that spans the wavelength range between 380 nm and 1,600 nm. The spatial engine generates two-dimensional gray-scale scenes. When combined, the two engines are capable of producing two-dimensional scenes with a unique spectrum at each pixel. The VisIR HIP can be used to calibrate any spectrally sensitive remote-sensing instrument. Tests were conducted on the Wide-field Imaging Interferometer Testbed at NASA's Goddard Space Flight Center. The device is a variation of the calibrated hyperspectral image projector developed by the National Institute of Standards and Technology in Gaithersburg, MD. It uses Gooch & Housego Visible and Infrared OL490 Agile Light Sources to generate arbitrary spectra. The two light sources are coupled to a digital light processing (DLP(TradeMark)) digital mirror device (DMD) that serves as the spatial engine. Scenes are displayed on the DMD synchronously with the desired spectrum. Scene/spectrum combinations are displayed in rapid succession, over time intervals that are short compared to the integration time of the system under test.

  4. Storage and retrieval of large digital images

    DOEpatents

    Bradley, J.N.

    1998-01-20

    Image compression and viewing are implemented with (1) a method for performing DWT-based compression on a large digital image with a computer system possessing a two-level system of memory and (2) a method for selectively viewing areas of the image from its compressed representation at multiple resolutions and, if desired, in a client-server environment. The compression of a large digital image I(x,y) is accomplished by first defining a plurality of discrete tile image data subsets T_ij(x,y) that, upon superposition, form the complete set of image data I(x,y). A seamless wavelet-based compression process is effected on I(x,y) that is comprised of successively inputting the tiles T_ij(x,y) in a selected sequence to a DWT routine, and storing the resulting DWT coefficients in a first primary memory. These coefficients are periodically compressed and transferred to a secondary memory to maintain sufficient memory in the primary memory for data processing. The sequence of DWT operations on the tiles T_ij(x,y) effectively calculates a seamless DWT of I(x,y). Data retrieval consists of specifying a resolution and a region of I(x,y) for display. The subset of stored DWT coefficients corresponding to each requested scene is determined and then decompressed for input to an inverse DWT, the output of which forms the image display. The repeated process whereby image views are specified may take the form of an interaction with a computer pointing device on an image display from a previous retrieval. 6 figs.

  5. Storage and retrieval of large digital images

    DOEpatents

    Bradley, Jonathan N.

    1998-01-01

    Image compression and viewing are implemented with (1) a method for performing DWT-based compression on a large digital image with a computer system possessing a two-level system of memory and (2) a method for selectively viewing areas of the image from its compressed representation at multiple resolutions and, if desired, in a client-server environment. The compression of a large digital image I(x,y) is accomplished by first defining a plurality of discrete tile image data subsets T_ij(x,y) that, upon superposition, form the complete set of image data I(x,y). A seamless wavelet-based compression process is effected on I(x,y) that is comprised of successively inputting the tiles T_ij(x,y) in a selected sequence to a DWT routine, and storing the resulting DWT coefficients in a first primary memory. These coefficients are periodically compressed and transferred to a secondary memory to maintain sufficient memory in the primary memory for data processing. The sequence of DWT operations on the tiles T_ij(x,y) effectively calculates a seamless DWT of I(x,y). Data retrieval consists of specifying a resolution and a region of I(x,y) for display. The subset of stored DWT coefficients corresponding to each requested scene is determined and then decompressed for input to an inverse DWT, the output of which forms the image display. The repeated process whereby image views are specified may take the form of an interaction with a computer pointing device on an image display from a previous retrieval.
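    The tile-by-tile DWT bookkeeping can be illustrated with a one-level Haar transform applied independently to each tile. Note this naive per-tile version does not reproduce the patent's seamless cross-tile sequencing or the two-level memory management; it only shows how tiles map to coefficient subbands and back:

```python
import numpy as np

def haar2d(tile):
    """One-level 2-D Haar DWT of a tile with even side lengths."""
    a = (tile[0::2] + tile[1::2]) / 2.0  # row pairs: average
    d = (tile[0::2] - tile[1::2]) / 2.0  # row pairs: detail
    ll, hl = (a[:, 0::2] + a[:, 1::2]) / 2.0, (a[:, 0::2] - a[:, 1::2]) / 2.0
    lh, hh = (d[:, 0::2] + d[:, 1::2]) / 2.0, (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, hl, lh, hh

def ihaar2d(ll, hl, lh, hh):
    """Inverse of haar2d: reassemble the tile from its four subbands."""
    a = np.empty((ll.shape[0], ll.shape[1] * 2))
    d = np.empty_like(a)
    a[:, 0::2], a[:, 1::2] = ll + hl, ll - hl
    d[:, 0::2], d[:, 1::2] = lh + hh, lh - hh
    out = np.empty((a.shape[0] * 2, a.shape[1]))
    out[0::2], out[1::2] = a + d, a - d
    return out

def tiled_dwt(image, tile=8):
    """Apply haar2d to each tile T_ij of the image in sequence and
    collect the coefficients, keyed by tile offset."""
    h, w = image.shape
    return {(i, j): haar2d(image[i:i + tile, j:j + tile])
            for i in range((h // tile) * tile // tile * 0, h, tile)
            for j in range(0, w, tile)} if False else \
           {(i, j): haar2d(image[i:i + tile, j:j + tile])
            for i in range(0, h, tile) for j in range(0, w, tile)}
```

    Retrieval of a region then amounts to selecting the stored coefficient subsets for the covering tiles and running the inverse transform on just those.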

  6. Advanced autostereoscopic display for G-7 pilot project

    NASA Astrophysics Data System (ADS)

    Hattori, Tomohiko; Ishigaki, Takeo; Shimamoto, Kazuhiro; Sawaki, Akiko; Ishiguchi, Tsuneo; Kobayashi, Hiromi

    1999-05-01

    An advanced autostereoscopic display is described that permits the observation of a stereo pair by several persons simultaneously without the use of special glasses or any kind of head-tracking device worn by the viewers. The system is composed of a right-eye system, a left-eye system, and a sophisticated head-tracking system. In each eye system, a transparent-type color liquid crystal imaging plate is used with a special backlight unit. The backlight unit consists of a monochrome 2D display and a large-format convex lens, and it directs light only to the correct eye of each viewer. The right-eye perspective system is combined with the left-eye perspective system by a half mirror in order to function as a time-parallel stereoscopic system. The viewer's IR image is taken through and focused by the large-format convex lens and fed back to the backlight as a modulated binary half-face image. The autostereoscopic display employs this TTL method for accurate head tracking. The system was operated as a stereoscopic TV phone between the Duke University Department of Telemedicine and the Nagoya University School of Medicine Department of Radiology using a high-speed digital line of the GIBN. Applications are also described in this paper.

  7. Digital mammography: comparative performance of color LCD and monochrome CRT displays.

    PubMed

    Samei, Ehsan; Poolla, Ananth; Ulissey, Michael J; Lewin, John M

    2007-05-01

    To evaluate the comparative performance of high-fidelity liquid crystal display (LCD) and cathode ray tube (CRT) devices for mammography applications, and to assess the impact of LCD viewing angle on detection accuracy. Ninety 1 k x 1 k images were selected from a database of digital mammograms: 30 without any abnormality present, 30 with subtle masses, and 30 with subtle microcalcifications. The images were used with waived informed consent, Health Insurance Portability and Accountability Act compliance, and Institutional Review Board approval. With postprocessing presentation identical to that of the commercial mammography system used, 1 k x 1 k sections of images were viewed on a monochrome CRT and a color LCD in native grayscale, and with a grayscale representative of images viewed from a 30-degree or 50-degree off-normal viewing angle. Randomized images were independently scored by four experienced breast radiologists for the presence of lesions using a 0-100 grading scale. To compare the diagnostic performance of the display modes, observer scores were analyzed using receiver operating characteristic (ROC) analysis and analysis of variance. For masses and microcalcifications, the detection rate in terms of the area under the ROC curve (A(z)) showed a 2% increase and a 4% decrease from CRT to LCD, respectively. However, the differences were not statistically significant (P > .05). The viewing angle data showed better microcalcification detection but lower mass detection at the 30-degree viewing orientation. The overall results varied notably from observer to observer, yielding no statistically discernible trends across all observers and suggesting that, within the 0-50 degree viewing angle range and in a controlled observer experiment, the variation in the contrast response of the LCD has little or no impact on the detection of mammographic lesions. 
Although CRTs and LCDs differ in terms of angular response, resolution, noise, and color, these characteristics seem to have little influence on the detection of mammographic lesions. The results suggest comparable performance in clinical applications of the two devices.
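    The area-under-the-ROC-curve figure of merit used in such studies can be estimated nonparametrically from observer scores via the Mann-Whitney statistic. The study's A(z) values come from fitted ROC analysis; this sketch is only the standard nonparametric analogue, and the scores in the test are made up:

```python
import numpy as np

def auc_mann_whitney(scores_abnormal, scores_normal):
    """Nonparametric area under the ROC curve: the probability that a
    randomly chosen abnormal case scores higher than a randomly chosen
    normal case, with ties counted as one half.
    """
    a = np.asarray(scores_abnormal, dtype=float)[:, None]
    n = np.asarray(scores_normal, dtype=float)[None, :]
    wins = (a > n).sum() + 0.5 * (a == n).sum()
    return wins / (a.size * n.size)
```

    An AUC of 1.0 means perfect separation of abnormal from normal scores; 0.5 means chance-level discrimination.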

  8. Unexpected surface implanted layer in static random access memory devices observed by microwave impedance microscope

    NASA Astrophysics Data System (ADS)

    Kundhikanjana, W.; Yang, Y.; Tanga, Q.; Zhang, K.; Lai, K.; Ma, Y.; Kelly, M. A.; Li, X. X.; Shen, Z.-X.

    2013-02-01

    Real-space mapping of doping concentration in semiconductor devices is of great importance for the microelectronics industry. In this work, a scanning microwave impedance microscope (MIM) is employed to resolve the local conductivity distribution of a static random access memory sample. The MIM electronics can also be adjusted to the scanning capacitance microscopy (SCM) mode, allowing both measurements on the same region. Interestingly, while the conventional SCM images match the nominal device structure, the MIM results display certain unexpected features, which originate from a thin layer of the dopant ions penetrating through the protective layers during the heavy implantation steps.

  9. Development and evaluation of vision rehabilitation devices.

    PubMed

    Luo, Gang; Peli, Eli

    2011-01-01

    We have developed a range of vision rehabilitation devices and techniques for people with impaired vision due to either central vision loss or severely restricted peripheral visual field. We have conducted evaluation studies with patients to test the utilities of these techniques in an effort to document their advantages as well as their limitations. Here we describe our work on a visual field expander based on a head mounted display (HMD) for tunnel vision, a vision enhancement device for central vision loss, and a frequency domain JPEG/MPEG based image enhancement technique. All the evaluation studies included visual search paradigms that are suitable for conducting indoor controllable experiments.

  10. Tiny Devices Project Sharp, Colorful Images

    NASA Technical Reports Server (NTRS)

    2009-01-01

    Displaytech Inc., based in Longmont, Colorado and recently acquired by Micron Technology Inc. of Boise, Idaho, first received a Small Business Innovation Research contract in 1993 from Johnson Space Center to develop tiny, electronic, color displays, called microdisplays. Displaytech has since sold over 20 million microdisplays and was ranked one of the fastest growing technology companies by Deloitte and Touche in 2005. Customers currently incorporate the microdisplays in tiny pico-projectors, which weigh only a few ounces and attach to media players, cell phones, and other devices. The projectors can convert a digital image from the typical postage stamp size into a bright, clear, four-foot projection. The company believes sales of this type of pico-projector may exceed $1.1 billion within 5 years.

  11. Pilot Task Profiles, Human Factors, And Image Realism

    NASA Astrophysics Data System (ADS)

    McCormick, Dennis

    1982-06-01

    Computer Image Generation (CIG) visual systems provide real-time scenes for state-of-the-art flight training simulators. The visual system requires a greater understanding of training tasks, human factors, and the concept of image realism to produce an effective and efficient training scene than is required by other types of visual systems. Image realism must be defined in terms of pilot visual information requirements. Human factors analysis of training and perception is necessary to determine the pilot's information requirements. System analysis then determines how the CIG and display device can best provide essential information to the pilot. This analysis procedure ensures optimum training effectiveness and system performance.

  12. Fast downscaled inverses for images compressed with M-channel lapped transforms.

    PubMed

    de Queiroz, R L; Eschbach, R

    1997-01-01

    Compressed images may be decompressed and displayed or printed using different devices at different resolutions. Full decompression followed by rescaling in the spatial domain is computationally expensive. We studied downscaled inverses in which the image is decompressed partially, and a reduced inverse transform is used to recover the image. In this fashion, fewer transform coefficients are used and the synthesis process is simplified. We studied the design of fast inverses for a given forward transform. General solutions are presented for M-channel finite impulse response (FIR) filter banks, of which block and lapped transforms are a subset. Designs of faster inverses are presented for popular block and lapped transforms.
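    The paper's M-channel lapped-transform solutions are more general, but the underlying idea can be sketched for the plain 8x8 block DCT: instead of inverting at full size and then rescaling, keep only the low-frequency quadrant and invert it at the reduced size. A minimal sketch (function name hypothetical; the two paths agree only approximately, since the DCT route low-passes differently than spatial averaging):

```python
import numpy as np
from scipy.fft import dctn, idctn

def downscaled_inverse(coeffs):
    """Half-resolution inverse of an 8x8 orthonormal DCT block: keep the
    4x4 low-frequency quadrant and invert it at the reduced size."""
    return idctn(coeffs[:4, :4] / 2.0, norm="ortho")  # /2 rescales 8->4 in 2D

rng = np.random.default_rng(0)
block = rng.standard_normal((8, 8))
coeffs = dctn(block, norm="ortho")

fast = downscaled_inverse(coeffs)                  # partial decode, 4x4 inverse
full = idctn(coeffs, norm="ortho")                 # full decode ...
slow = full.reshape(4, 2, 4, 2).mean(axis=(1, 3))  # ... then 2x2 box averaging
print(np.abs(fast - slow).max())
```

    The fast path touches only 16 of the 64 coefficients per block and runs a 4x4 rather than an 8x8 inverse transform, which is the source of the speedup the paper generalizes to lapped transforms.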

  13. Image fusion and navigation platforms for percutaneous image-guided interventions.

    PubMed

    Rajagopal, Manoj; Venkatesan, Aradhana M

    2016-04-01

    Image-guided interventional procedures, particularly image guided biopsy and ablation, serve an important role in the care of the oncology patient. The need for tumor genomic and proteomic profiling, early tumor response assessment and confirmation of early recurrence are common scenarios that may necessitate successful biopsies of targets, including those that are small, anatomically unfavorable or inconspicuous. As image-guided ablation is increasingly incorporated into interventional oncology practice, similar obstacles are posed for the ablation of technically challenging tumor targets. Navigation tools, including image fusion and device tracking, can enable abdominal interventionalists to more accurately target challenging biopsy and ablation targets. Image fusion technologies enable multimodality fusion and real-time co-displays of US, CT, MRI, and PET/CT data, with navigational technologies including electromagnetic tracking, robotic, cone beam CT, optical, and laser guidance of interventional devices. Image fusion and navigational platform technology is reviewed in this article, including the results of studies implementing their use for interventional procedures. Pre-clinical and clinical experiences to date suggest these technologies have the potential to reduce procedure risk, time, and radiation dose to both the patient and the operator, with a valuable role to play for complex image-guided interventions.

  14. Can laptops be left inside passenger bags if motion imaging is used in X-ray security screening?

    PubMed

    Mendes, Marcia; Schwaninger, Adrian; Michel, Stefan

    2013-01-01

    This paper describes a study where a new X-ray machine for security screening featuring motion imaging (i.e., 5 views of a bag are shown as an image sequence) was evaluated and compared to single view imaging available on conventional X-ray screening systems. More specifically, it was investigated whether with this new technology X-ray screening of passenger bags could be enhanced to such an extent that laptops could be left inside passenger bags, without causing a significant impairment in threat detection performance. An X-ray image interpretation test was created in four different versions, manipulating the factors packing condition (laptop and bag separate vs. laptop in bag) and display condition (single vs. motion imaging). There was a highly significant and large main effect of packing condition. When laptops and bags were screened separately, threat item detection was substantially higher. For display condition, a medium effect was observed. Detection could be slightly enhanced through the application of motion imaging. There was no interaction between display and packing condition, implying that the high negative effect of leaving laptops in passenger bags could not be fully compensated by motion imaging. Additional analyses were carried out to examine effects depending on different threat categories (guns, improvised explosive devices, knives, others), the placement of the threat items (in bag vs. in laptop) and viewpoint (easy vs. difficult view). In summary, although motion imaging provides an enhancement, it is not strong enough to allow leaving laptops in bags for security screening.

  15. Can laptops be left inside passenger bags if motion imaging is used in X-ray security screening?

    PubMed Central

    Mendes, Marcia; Schwaninger, Adrian; Michel, Stefan

    2013-01-01

    This paper describes a study where a new X-ray machine for security screening featuring motion imaging (i.e., 5 views of a bag are shown as an image sequence) was evaluated and compared to single view imaging available on conventional X-ray screening systems. More specifically, it was investigated whether with this new technology X-ray screening of passenger bags could be enhanced to such an extent that laptops could be left inside passenger bags, without causing a significant impairment in threat detection performance. An X-ray image interpretation test was created in four different versions, manipulating the factors packing condition (laptop and bag separate vs. laptop in bag) and display condition (single vs. motion imaging). There was a highly significant and large main effect of packing condition. When laptops and bags were screened separately, threat item detection was substantially higher. For display condition, a medium effect was observed. Detection could be slightly enhanced through the application of motion imaging. There was no interaction between display and packing condition, implying that the high negative effect of leaving laptops in passenger bags could not be fully compensated by motion imaging. Additional analyses were carried out to examine effects depending on different threat categories (guns, improvised explosive devices, knives, others), the placement of the threat items (in bag vs. in laptop) and viewpoint (easy vs. difficult view). In summary, although motion imaging provides an enhancement, it is not strong enough to allow leaving laptops in bags for security screening. PMID:24151457

  16. Direct-Solve Image-Based Wavefront Sensing

    NASA Technical Reports Server (NTRS)

    Lyon, Richard G.

    2009-01-01

    A method of wavefront sensing (more precisely characterized as a method of determining the deviation of a wavefront from a nominal figure) has been invented as an improved means of assessing the performance of an optical system as affected by such imperfections as misalignments, design errors, and fabrication errors. The method is implemented by software running on a single-processor computer that is connected, via a suitable interface, to the image sensor (typically, a charge-coupled device) in the system under test. The software collects a digitized single image from the image sensor. The image is displayed on a computer monitor. The software directly solves for the wavefront in a fraction of a second, and a picture of the wavefront is displayed. The solution process involves, among other things, fast Fourier transforms. It has been reported that the wavefront is decomposed into modes of the optical system under test, but it has not been reported whether this decomposition is postprocessing of the solution or part of the solution process.
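    The abstract does not disclose the solver itself, but the forward model that any image-based wavefront sensor inverts is standard Fourier optics: the measured image of a point source is the squared magnitude of the Fourier transform of the pupil function. A minimal sketch of that forward model (names and the 0.25-wave defocus value are illustrative):

```python
import numpy as np

def psf_from_wavefront(opd_waves, pupil, pad=4):
    """Point-spread function from a wavefront error map (optical path
    difference in waves): psf = |FFT(pupil * exp(2*pi*i*OPD))|^2."""
    n = pupil.shape[0]
    field = np.zeros((pad * n, pad * n), dtype=complex)  # zero-pad to sample the PSF finely
    field[:n, :n] = pupil * np.exp(2j * np.pi * opd_waves)
    psf = np.abs(np.fft.fftshift(np.fft.fft2(field))) ** 2
    return psf / psf.sum()

# Circular pupil with 0.25 waves of defocus (hypothetical aberration).
n = 64
y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
r2 = x**2 + y**2
pupil = (r2 <= 1.0).astype(float)
aberrated = psf_from_wavefront(0.25 * (2 * r2 - 1) * pupil, pupil)
perfect = psf_from_wavefront(np.zeros((n, n)), pupil)
strehl = aberrated.max() / perfect.max()  # < 1 for any aberration
```

    A direct-solve sensor works this relation backwards, recovering the OPD map from one or more recorded images.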

  17. Evaluation of a single-pixel one-transistor active pixel sensor for fingerprint imaging

    NASA Astrophysics Data System (ADS)

    Xu, Man; Ou, Hai; Chen, Jun; Wang, Kai

    2015-08-01

    Since it first appeared in the iPhone 5S in 2013, fingerprint identification (ID) has rapidly gained popularity among consumers. Current fingerprint-enabled smartphones uniformly rely on a discrete sensor to perform fingerprint ID. This architecture not only incurs higher material and manufacturing cost, but also provides only static identification and limited authentication. Hence, as the demand for thinner, lighter, and more secure handsets grows, we propose a novel pixel architecture in which a photosensitive device embedded in a display pixel detects the light reflected from the finger touch, enabling high-resolution, high-fidelity, and dynamic biometrics. To this end, an amorphous silicon (a-Si:H) dual-gate photo TFT working in both a fingerprint-imaging mode and a display-driving mode will be developed.

  18. Imaging of ultraweak spontaneous photon emission from human body displaying diurnal rhythm.

    PubMed

    Kobayashi, Masaki; Kikuchi, Daisuke; Okamura, Hitoshi

    2009-07-16

    The human body literally glimmers. The intensity of the light emitted by the body is 1000 times lower than the sensitivity of our naked eyes. Ultraweak photon emission is the energy released as light through changes in energy metabolism. We successfully imaged the diurnal change of this ultraweak photon emission with an improved, highly sensitive imaging system using a cryogenic charge-coupled device (CCD) camera. We found that the human body directly and rhythmically emits light. The diurnal changes in photon emission might be linked to changes in energy metabolism.

  19. Inkjet printing-based volumetric display projecting multiple full-colour 2D patterns

    NASA Astrophysics Data System (ADS)

    Hirayama, Ryuji; Suzuki, Tomotaka; Shimobaba, Tomoyoshi; Shiraki, Atsushi; Naruse, Makoto; Nakayama, Hirotaka; Kakue, Takashi; Ito, Tomoyoshi

    2017-04-01

    In this study, a method to construct a full-colour volumetric display is presented using a commercially available inkjet printer. Photoreactive luminescence materials are minutely and automatically printed as the volume elements, and volumetric displays are constructed with high resolution using easy-to-fabricate means that exploit inkjet printing technologies. The results experimentally demonstrate the first prototype of an inkjet printing-based volumetric display composed of multiple layers of transparent films that yield a full-colour three-dimensional (3D) image. Moreover, we propose a design algorithm with 3D structures that provide multiple different 2D full-colour patterns when viewed from different directions and experimentally demonstrate prototypes. It is considered that these types of 3D volumetric structures and their fabrication methods based on widely deployed existing printing technologies can be utilised as novel information display devices and systems, including digital signage, media art, entertainment and security.
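    The abstract does not reproduce the design algorithm, but one simple construction in the same spirit builds a binary voxel volume from two orthogonal target silhouettes: a voxel is kept only if both views need it, so each view's projection reproduces its pattern (provided every layer of each target is non-empty). A sketch with hypothetical names:

```python
import numpy as np

def volume_from_two_views(view_a, view_b):
    """Voxel occupancy whose silhouette along x is view_a (y-z plane)
    and along y is view_b (x-z plane). Valid when every z-layer of
    each target pattern contains at least one filled cell."""
    return view_a[np.newaxis, :, :] & view_b[:, np.newaxis, :]

def silhouettes(vol):
    """Projections of the volume along the two viewing directions."""
    return vol.any(axis=0), vol.any(axis=1)

a = np.array([[1, 1, 0], [0, 1, 1]], dtype=bool)  # target seen along x
b = np.array([[1, 0, 1], [1, 1, 0]], dtype=bool)  # target seen along y
vol = volume_from_two_views(a, b)
seen_a, seen_b = silhouettes(vol)                 # both match the targets
```

    The printed films in the paper correspond to the z-layers of such a volume; full-colour patterns would additionally require per-voxel colour assignment rather than binary occupancy.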

  20. Carbon Nanotube Driver Circuit for 6 × 6 Organic Light Emitting Diode Display

    NASA Astrophysics Data System (ADS)

    Zou, Jianping; Zhang, Kang; Li, Jingqi; Zhao, Yongbiao; Wang, Yilei; Pillai, Suresh Kumar Raman; Volkan Demir, Hilmi; Sun, Xiaowei; Chan-Park, Mary B.; Zhang, Qing

    2015-06-01

    Single-walled carbon nanotubes (SWNTs) are expected to be a very promising material for flexible and transparent driver circuits for active matrix organic light emitting diode (AM OLED) displays due to their high field-effect mobility, excellent current-carrying capacity, optical transparency and mechanical flexibility. Although there have been several publications about SWNT driver circuits, none of them have shown static and dynamic images with AM OLED displays. Here we report on the first successful chemical vapor deposition (CVD)-grown SWNT network thin film transistor (TFT) driver circuits for static and dynamic AM OLED displays with 6 × 6 pixels. The high device mobility of ~45 cm²V⁻¹s⁻¹ and the high channel current on/off ratio of ~10⁵ of the SWNT-TFTs fully guarantee the control capability over the OLED pixels. Our results suggest that SWNT-TFTs are promising backplane building blocks for future OLED displays.

  1. Carbon Nanotube Driver Circuit for 6 × 6 Organic Light Emitting Diode Display.

    PubMed

    Zou, Jianping; Zhang, Kang; Li, Jingqi; Zhao, Yongbiao; Wang, Yilei; Pillai, Suresh Kumar Raman; Volkan Demir, Hilmi; Sun, Xiaowei; Chan-Park, Mary B; Zhang, Qing

    2015-06-29

    Single-walled carbon nanotubes (SWNTs) are expected to be a very promising material for flexible and transparent driver circuits for active matrix organic light emitting diode (AM OLED) displays due to their high field-effect mobility, excellent current-carrying capacity, optical transparency and mechanical flexibility. Although there have been several publications about SWNT driver circuits, none of them have shown static and dynamic images with AM OLED displays. Here we report on the first successful chemical vapor deposition (CVD)-grown SWNT network thin film transistor (TFT) driver circuits for static and dynamic AM OLED displays with 6 × 6 pixels. The high device mobility of ~45 cm²V⁻¹s⁻¹ and the high channel current on/off ratio of ~10⁵ of the SWNT-TFTs fully guarantee the control capability over the OLED pixels. Our results suggest that SWNT-TFTs are promising backplane building blocks for future OLED displays.

  2. Blinded evaluation of the effects of high definition and magnification on perceived image quality in laryngeal imaging.

    PubMed

    Otto, Kristen J; Hapner, Edie R; Baker, Michael; Johns, Michael M

    2006-02-01

    Advances in commercial video technology have improved office-based laryngeal imaging. This study investigates the perceived image quality of a true high-definition (HD) video camera and the effect of magnification on laryngeal videostroboscopy. We performed a prospective, dual-armed, single-blinded analysis of a standard laryngeal videostroboscopic examination comparing 3 separate add-on camera systems: a 1-chip charge-coupled device (CCD) camera, a 3-chip CCD camera, and a true 720p (progressive scan) HD camera. Displayed images were controlled for magnification and image size (20-inch [50-cm] display, red-green-blue, and S-video cable for 1-chip and 3-chip cameras; digital visual interface cable and HD monitor for HD camera). Ten blinded observers were then asked to rate the following 5 items on a 0-to-100 visual analog scale: resolution, color, ability to see vocal fold vibration, sense of depth perception, and clarity of blood vessels. Eight unblinded observers were then asked to rate the difference in perceived resolution and clarity of laryngeal examination images when displayed on a 10-inch (25-cm) monitor versus a 42-inch (105-cm) monitor. A visual analog scale was used. These monitors were controlled for actual resolution capacity. For each item evaluated, randomized block design analysis demonstrated that the 3-chip camera scored significantly better than the 1-chip camera (p < .05). For the categories of color and blood vessel discrimination, the 3-chip camera scored significantly better than the HD camera (p < .05). For magnification alone, observers rated the 42-inch monitor statistically better than the 10-inch monitor. The expense of new medical technology must be judged against its added value. This study suggests that HD laryngeal imaging may not add significant value over currently available video systems, in perceived image quality, when a small monitor is used. 
Although differences in clarity between standard and HD cameras may not be readily apparent on small displays, a large display size coupled with HD technology may impart improved diagnosis of subtle vocal fold lesions and vibratory anomalies.

  3. Efficient use of bit planes in the generation of motion stimuli

    NASA Technical Reports Server (NTRS)

    Mulligan, Jeffrey B.; Stone, Leland S.

    1988-01-01

    The production of animated motion sequences on computer-controlled display systems presents a technical problem because large images cannot be transferred from disk storage to image memory at conventional frame rates. A technique is described in which a single base image can be used to generate a broad class of motion stimuli without the need for such memory transfers. This technique was applied to the generation of drifting sine-wave gratings (and by extension, sine wave plaids). For each drifting grating, sine and cosine spatial phase components are first reduced to 1 bit/pixel using a digital halftoning technique. The resulting pairs of 1-bit images are then loaded into pairs of bit planes of the display memory. To animate the patterns, the display hardware's color lookup table is modified on a frame-by-frame basis; for each frame the lookup table is set to display a weighted sum of the spatial sine and cosine phase components. Because the contrasts and temporal frequencies of the various components are mutually independent in each frame, the sine and cosine components can be counterphase modulated in temporal quadrature, yielding a single drifting grating. Using additional bit planes, multiple drifting gratings can be combined to form sine-wave plaid patterns. A large number of resultant plaid motions can be produced from a single image file because the temporal frequencies of all the components can be varied independently. For a graphics device having 8 bits/pixel, up to four drifting gratings may be combined, each having independently variable contrast and speed.
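    The bit-plane scheme above can be simulated in a few lines: two static 1-bit images occupy two bit planes, and the animation comes entirely from rewriting a 4-entry lookup table each frame. In the sketch below the simple thresholding is a crude stand-in for the paper's digital halftoning, and the function names are illustrative:

```python
import numpy as np

def halftone(row):
    """Crude 1-bit quantization standing in for digital halftoning."""
    return (row > 0).astype(np.uint8)

def drifting_frames(cycles, n_frames, size=64):
    """Animate a drifting grating from two static 1-bit images by
    re-weighting them each frame, as the display's 4-entry color
    lookup table would."""
    x = np.arange(size) / size
    sin_plane = np.tile(halftone(np.sin(2 * np.pi * cycles * x)), (size, 1))
    cos_plane = np.tile(halftone(np.cos(2 * np.pi * cycles * x)), (size, 1))
    code = sin_plane | (cos_plane << 1)      # 2-bit pixel values in display memory
    frames = []
    for t in range(n_frames):
        phase = 2 * np.pi * t / n_frames
        # Counterphase-modulate the two components in temporal quadrature.
        lut = np.array([0.0, np.cos(phase), np.sin(phase),
                        np.cos(phase) + np.sin(phase)])
        frames.append(lut[code])
    return frames

frames = drifting_frames(cycles=4, n_frames=8)
```

    Because only the 4-entry table changes per frame, no image data crosses the disk-to-memory bottleneck during the animation, which is the point of the technique.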

  4. Rapid Assessment of Contrast Sensitivity with Mobile Touch-screens

    NASA Technical Reports Server (NTRS)

    Mulligan, Jeffrey B.

    2013-01-01

    The availability of low-cost high-quality touch-screen displays in modern mobile devices has created opportunities for new approaches to routine visual measurements. Here we describe a novel method in which subjects use a finger swipe to indicate the transition from visible to invisible on a grating which is swept in both contrast and frequency. Because a single image can be swiped in about a second, it is practical to use a series of images to zoom in on particular ranges of contrast or frequency, both to increase the accuracy of the measurements and to obtain an estimate of the reliability of the subject. Sensitivities to chromatic and spatio-temporal modulations are easily measured using the same method. We will demonstrate a prototype for Apple Computer's iPad-iPod-iPhone family of devices, implemented using an open-source scripting environment known as QuIP (QUick Image Processing,
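    The swept stimulus itself is straightforward to reproduce: frequency varies logarithmically across the image and contrast varies logarithmically down it, in the spirit of the Campbell-Robson chart. A sketch with illustrative parameter values (not the study's actual stimulus code):

```python
import numpy as np

def sweep_grating(width=512, height=512, f0=0.5, f1=64.0, c0=0.001, c1=1.0):
    """Grating swept logarithmically in spatial frequency (left to right)
    and in contrast (increasing from top to bottom)."""
    x = np.linspace(0.0, 1.0, width)
    y = np.linspace(0.0, 1.0, height)[:, np.newaxis]
    freq = f0 * (f1 / f0) ** x                   # cycles per image width
    contrast = c0 * (c1 / c0) ** y               # 0.1% at top, 100% at bottom
    phase = 2 * np.pi * np.cumsum(freq) / width  # integrate so phase stays continuous
    return 0.5 + 0.5 * contrast * np.sin(phase)

img = sweep_grating(256, 256)  # mean-gray image with values in [0, 1]
```

    A finger swipe along the visible/invisible boundary of such an image then traces out one slice of the observer's contrast sensitivity function.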

  5. Real-time "x-ray vision" for healthcare simulation: an interactive projective overlay system to enhance intubation training and other procedural training.

    PubMed

    Samosky, Joseph T; Baillargeon, Emma; Bregman, Russell; Brown, Andrew; Chaya, Amy; Enders, Leah; Nelson, Douglas A; Robinson, Evan; Sukits, Alison L; Weaver, Robert A

    2011-01-01

    We have developed a prototype of a real-time, interactive projective overlay (IPO) system that creates augmented reality display of a medical procedure directly on the surface of a full-body mannequin human simulator. These images approximate the appearance of both anatomic structures and instrument activity occurring within the body. The key innovation of the current work is sensing the position and motion of an actual device (such as an endotracheal tube) inserted into the mannequin and using the sensed position to control projected video images portraying the internal appearance of the same devices and relevant anatomic structures. The images are projected in correct registration onto the surface of the simulated body. As an initial practical prototype to test this technique we have developed a system permitting real-time visualization of the intra-airway position of an endotracheal tube during simulated intubation training.

  6. Automatic view synthesis by image-domain-warping.

    PubMed

    Stefanoski, Nikolce; Wang, Oliver; Lang, Manuel; Greisen, Pierre; Heinzle, Simon; Smolic, Aljosa

    2013-09-01

    Today, stereoscopic 3D (S3D) cinema is already mainstream, and almost all new display devices for the home support S3D content. S3D distribution infrastructure to the home is already partly established in the form of 3D Blu-ray discs, video-on-demand services, or television channels. The necessity to wear glasses is, however, often considered an obstacle that hinders broader acceptance of this technology in the home. Multiview autostereoscopic displays enable a glasses-free perception of S3D content for several observers simultaneously, and support head motion parallax in a limited range. To support multiview autostereoscopic displays in an already established S3D distribution infrastructure, a synthesis of new views from S3D video is needed. In this paper, a view synthesis method based on image-domain-warping (IDW) is presented that synthesizes new views directly from S3D video and functions completely automatically. IDW relies on an automatic and robust estimation of sparse disparities and image saliency information, and enforces target disparities in synthesized images using an image warping framework. Two configurations of the view synthesizer in the scope of a transmission and view synthesis framework are analyzed and evaluated. A transmission and view synthesis system that uses IDW was recently submitted to MPEG's call for proposals on 3D video technology, where it was ranked among the four best-performing proposals.
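    IDW proper solves for a smooth warp that honours sparse disparities and saliency; the geometric core, shifting content horizontally by a fraction of its disparity to interpolate a new viewpoint, can be conveyed with a deliberately naive forward warp (hole filling here is the crudest possible):

```python
import numpy as np

def synthesize_view(image, disparity, alpha):
    """Naive forward warp: shift each pixel horizontally by a fraction
    alpha of its per-pixel disparity to approximate an intermediate
    view. (IDW instead optimizes a smooth image-domain warp; this
    sketch only illustrates the geometry.)"""
    h, w = image.shape
    out = np.zeros_like(image)
    filled = np.zeros((h, w), dtype=bool)
    xs = np.arange(w)
    for yy in range(h):
        tx = np.clip(np.round(xs + alpha * disparity[yy]).astype(int), 0, w - 1)
        out[yy, tx] = image[yy, xs]
        filled[yy, tx] = True
    for yy in range(h):          # crude disocclusion filling from the left
        for xx in range(1, w):
            if not filled[yy, xx]:
                out[yy, xx] = out[yy, xx - 1]
    return out
```

    Warping in the image domain avoids the dense, artifact-prone depth maps that depth-image-based rendering requires, which is the practical argument the paper makes for IDW.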

  7. Collision judgment when using an augmented-vision head-mounted display device

    PubMed Central

    Luo, Gang; Woods, Russell L; Peli, Eli

    2016-01-01

    Purpose We have developed a device to provide an expanded visual field to patients with tunnel vision by superimposing minified edge images of the wide scene, in which objects appear closer to the heading direction than they really are. We conducted experiments in a virtual environment to determine if users would overestimate collision risks. Methods Given simulated scenes of walking or standing with intention to walk towards a given direction (intended walking) in a shopping mall corridor, participants (12 normally sighted and 7 with tunnel vision) reported whether they would collide with obstacles appearing at different offsets from variable walking paths (or intended directions), with and without the device. The collision envelope (CE), a personal space based on perceived collision judgments, and judgment uncertainty (variability of response) were measured. When the device was used, combinations of two image scales (5× minified and 1:1) and two image types (grayscale or edge images) were tested. Results Image type did not significantly alter collision judgment (p>0.7). Compared to the without-device baseline, minification did not significantly change the CE of normally sighted subjects for simulated walking (p=0.12), but increased CE by 30% for intended walking (p<0.001). Their uncertainty was not affected by minification (p>0.25). For the patients, neither CE nor uncertainty was affected by minification (p>0.13) in both walking conditions. Baseline CE and uncertainty were greater for patients than normally-sighted subjects in simulated walking (p=0.03), but the two groups were not significantly different in all other conditions. Conclusion Users did not substantially overestimate collision risk, as the 5× minified images had only limited impact on collision judgments either during walking or before starting to walk. PMID:19458339

  8. Collision judgment when using an augmented-vision head-mounted display device.

    PubMed

    Luo, Gang; Woods, Russell L; Peli, Eli

    2009-09-01

    A device was developed to provide an expanded visual field to patients with tunnel vision by superimposing minified edge images of the wide scene, in which objects appear closer to the heading direction than they really are. Experiments were conducted in a virtual environment to determine whether users would overestimate collision risks. Given simulated scenes of walking or standing with intention to walk toward a given direction (intended walking) in a shopping mall corridor, participants (12 normally sighted and 7 with tunnel vision) reported whether they would collide with obstacles appearing at different offsets from variable walking paths (or intended directions), with and without the device. The collision envelope (CE), a personal space based on perceived collision judgments, and judgment uncertainty (variability of response) were measured. When the device was used, combinations of two image scales (5x minified and 1:1) and two image types (grayscale or edge images) were tested. Image type did not significantly alter collision judgment (P > 0.7). Compared to the without-device baseline, minification did not significantly change the CE of normally sighted subjects for simulated walking (P = 0.12), but increased CE by 30% for intended walking (P < 0.001). Their uncertainty was not affected by minification (P > 0.25). For the patients, neither CE nor uncertainty was affected by minification (P > 0.13) in both walking conditions. Baseline CE and uncertainty were greater for patients than normally sighted subjects in simulated walking (P = 0.03), but the two groups were not significantly different in all other conditions. Users did not substantially overestimate collision risk, as the 5x minified images had only limited impact on collision judgments either during walking or before starting to walk.

  9. Advanced freeform optics enabling ultra-compact VR headsets

    NASA Astrophysics Data System (ADS)

    Benitez, Pablo; Miñano, Juan C.; Zamora, Pablo; Grabovičkić, Dejan; Buljan, Marina; Narasimhan, Bharathwaj; Gorospe, Jorge; López, Jesús; Nikolić, Milena; Sánchez, Eduardo; Lastres, Carmen; Mohedano, Ruben

    2017-06-01

    We present novel advanced optical designs with a dramatically smaller display-to-eye distance, excellent image quality and a large field of view (FOV). This enables headsets to be much more compact, typically occupying about a fourth of the volume of a conventional headset with the same FOV. The design strategy of these optics is based on a multichannel approach, which reduces the distance from the eye to the display and the display size itself. Unlike conventional microlens arrays, which are also multichannel devices, our designs use freeform optical surfaces to produce excellent imaging quality in the entire field of view, even when operating at very oblique incidences. We present two families of compact solutions that use different types of lenslets: (1) refractive designs, whose lenslets are typically composed of two refractive surfaces each; and (2) light-folding designs that use prism-like three-surface lenslets, in which rays undergo refraction, reflection, total internal reflection and refraction again. The number of lenslets is not fixed, so different configurations may arise, adaptable to flat or curved displays with different aspect ratios. In the refractive designs the distance between the optics and the display decreases with the number of lenslets, allowing a light field to be displayed when the lenslets become significantly smaller than the eye pupil. On the other hand, the correlation between the number of lenslets and the optics-to-display distance is broken in light-folding designs, since their geometry permits achieving a very short display-to-eye distance with even a small number of lenslets.

  10. Mixed Reality in Visceral Surgery: Development of a Suitable Workflow and Evaluation of Intraoperative Use-cases.

    PubMed

    Sauer, Igor M; Queisner, Moritz; Tang, Peter; Moosburner, Simon; Hoepfner, Ole; Horner, Rosa; Lohmann, Rudiger; Pratschke, Johann

    2017-11-01

    The paper evaluates the application of a mixed reality (MR) head-mounted display (HMD) for the visualization of anatomical structures in complex visceral-surgical interventions. A workflow was developed and technical feasibility was evaluated. Medical images are still not seamlessly integrated into surgical interventions and, thus, remain separated from the surgical procedure. Surgeons need to cognitively relate 2-dimensional sectional images to the 3-dimensional (3D) anatomy during the actual intervention. MR applications simulate 3D images and reduce the offset between working space and visualization, allowing for improved spatial-visual approximation of patient and image. The surgeon's field of vision was superimposed with a 3D model of the patient's relevant liver structures displayed on an MR-HMD. This set-up was evaluated during open hepatic surgery. A suitable workflow for segmenting image masks and texture mapping of tumors, hepatic artery, portal vein, and the hepatic veins was developed. The 3D model was positioned above the surgical site. Anatomical reassurance was possible simply by looking up. Positioning in the room was stable, without drift and with minimal jittering. Users reported satisfactory comfort wearing the device, without significant impairment of movement. MR technology has a high potential to improve the surgeon's action and perception in open visceral surgery by displaying 3D anatomical models close to the surgical site. Superimposing anatomical structures directly onto the organs within the surgical site remains challenging, as the abdominal organs undergo major deformations due to manipulation, respiratory motion, and the interaction with the surgical instruments during the intervention. A further application scenario would be intraoperative ultrasound examination, displaying the image directly next to the transducer.
    Displays and sensor technologies, as well as biomechanical modeling and object-recognition algorithms, will facilitate the application of MR-HMDs in surgery in the near future.

  11. 3D image processing architecture for camera phones

    NASA Astrophysics Data System (ADS)

    Atanassov, Kalin; Ramachandra, Vikas; Goma, Sergio R.; Aleksic, Milivoje

    2011-03-01

    Putting high quality and easy-to-use 3D technology into the hands of regular consumers has become a recent challenge as interest in 3D technology has grown. Making 3D technology appealing to the average user requires that it be made fully automatic and foolproof. Designing a fully automatic 3D capture and display system requires: 1) identifying critical 3D technology issues like camera positioning, disparity control rationale, and screen geometry dependency, and 2) designing methodology to automatically control them. Implementing 3D capture functionality on phone cameras necessitates designing algorithms to fit within the processing capabilities of the device. Various constraints like sensor position tolerances, sensor 3A tolerances, post-processing, 3D video resolution and frame rate should be carefully considered for their influence on the 3D experience. Issues with migrating functions such as zoom and pan from the 2D usage model (both during capture and display) to 3D need to be resolved to ensure the highest level of user experience. It is also very important that the 3D usage scenario (including interactions between the user and the capture/display device) is carefully considered. Finally, both the processing power of the device and the practicality of the scheme need to be taken into account while designing the calibration and processing methodology.
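    The "disparity control rationale" and "screen geometry dependency" mentioned above come down to a standard similar-triangles relation between on-screen disparity and perceived depth. A sketch assuming a 65 mm interocular distance (the specific numbers are illustrative, not from the paper):

```python
def perceived_depth(screen_disparity_m, viewing_distance_m, interocular_m=0.065):
    """Perceived distance of a fused point from on-screen disparity
    (positive = uncrossed, i.e. behind the screen), by similar
    triangles: Z = V * e / (e - d)."""
    e, v, d = interocular_m, viewing_distance_m, screen_disparity_m
    return v * e / (e - d)

def disparity_for_depth(target_depth_m, viewing_distance_m, interocular_m=0.065):
    """Inverse relation: on-screen disparity that places a point at
    depth Z, i.e. d = e * (1 - V / Z)."""
    e, v, z = interocular_m, viewing_distance_m, target_depth_m
    return e * (1 - v / z)
```

    Because the same captured disparity maps to very different depths on a phone screen versus a living-room TV, an automatic system must rescale disparities per target screen geometry, which is why the paper treats disparity control as a core design issue.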

  12. Characterization of shape and deformation of MEMS by quantitative optoelectronic metrology techniques

    NASA Astrophysics Data System (ADS)

    Furlong, Cosme; Pryputniewicz, Ryszard J.

    2002-06-01

    Recent technological trends based on miniaturization of mechanical, electro-mechanical, and photonic devices to the microscopic scale have led to the development of microelectromechanical systems (MEMS). Effective development of MEMS components requires the synergism of advanced design, analysis, and fabrication methodologies, and also of quantitative metrology techniques for characterizing their performance, reliability, and integrity during the electronic packaging cycle. In this paper, we describe opto-electronic techniques for measuring, with sub-micrometer accuracy, shape and changes in states of deformation of MEMS structures. With the described opto-electronic techniques, it is possible to characterize MEMS components using the display and data modes. In the display mode, interferometric information related to shape and deformation is displayed at video frame rates, providing the capability for adjusting and setting experimental conditions. In the data mode, interferometric information related to shape and deformation is recorded as high-spatial and high-digital resolution images, which are further processed to provide quantitative 3D information. Furthermore, the quantitative 3D data are exported to computer-aided design (CAD) environments and utilized for analysis and optimization of MEMS devices. Capabilities of opto-electronic techniques are illustrated with representative applications demonstrating their applicability to provide indispensable quantitative information for the effective development and optimization of MEMS devices.
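    The quantitative step the data mode feeds into, converting a measured interferometric phase change into out-of-plane deformation, can be sketched as follows. A reflection geometry is assumed (sensitivity lambda/(4*pi)), and the simple row/column unwrap stands in for a robust 2D phase unwrapper:

```python
import numpy as np

def deformation_from_phase(phase_before, phase_after, wavelength_nm=632.8):
    """Out-of-plane deformation map (nm) from two measured phase maps.
    Double-pass (reflection) sensitivity: dz = wavelength/(4*pi) * dphi."""
    dphi = phase_after - phase_before
    dphi = np.unwrap(np.unwrap(dphi, axis=0), axis=1)  # naive 2D unwrap
    return wavelength_nm / (4.0 * np.pi) * dphi

# A uniform 0.1 rad phase change at 632.8 nm corresponds to ~5 nm of deformation.
dz = deformation_from_phase(np.zeros((4, 4)), np.full((4, 4), 0.1))
```

    The nanometre-scale sensitivity of this relation is what makes interferometric metrology suitable for the sub-micrometer MEMS measurements described above.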

  13. Computers in imaging and health care: now and in the future.

    PubMed

    Arenson, R L; Andriole, K P; Avrin, D E; Gould, R G

    2000-11-01

    Early picture archiving and communication systems (PACS) were characterized by the use of very expensive hardware devices, cumbersome display stations, duplication of database content, lack of interfaces to other clinical information systems, and immaturity in their understanding of the folder manager concepts and workflow reengineering. They were implemented historically at large academic medical centers by biomedical engineers and imaging informaticists. PACS were nonstandard, home-grown projects with mixed clinical acceptance. However, they clearly showed the great potential for PACS and filmless medical imaging. Filmless radiology is a reality today. The advent of efficient softcopy display of images provides a means for dealing with the ever-increasing number of studies and number of images per study. Computer power has increased, and archival storage cost has decreased to the extent that the economics of PACS is justifiable with respect to film. Network bandwidths have increased to allow large studies of many megabytes to arrive at display stations within seconds of examination completion. PACS vendors have recognized the need for efficient workflow and have built systems with intelligence in the management of patient data. Close integration with the hospital information system (HIS)-radiology information system (RIS) is critical for system functionality. Successful implementation of PACS requires integration or interoperation with hospital and radiology information systems. Besides the economic advantages, secure rapid access to all clinical information on patients, including imaging studies, anytime and anywhere, enhances the quality of patient care, although it is difficult to quantify. Medical image management systems are maturing, providing access outside of the radiology department to images and clinical information throughout the hospital or the enterprise via the Internet. 
Small and medium-sized community hospitals, private practices, and outpatient centers in rural areas will begin realizing the benefits of PACS already realized by the large tertiary care academic medical centers and research institutions. Hand-held devices and the World Wide Web are going to change the way people communicate and do business. The impact on health care, including radiology, will be huge. Computer-aided diagnosis, decision support tools, virtual imaging, and guidance systems will transform our practice as value-added applications utilizing the technologies pushed by PACS development efforts. Outcomes data and the electronic medical record (EMR) will drive our interactions with referring physicians, and we expect the radiologist to become the informaticist, a new version of the medical management consultant.

  14. Mental visualization of objects from cross-sectional images

    PubMed Central

    Wu, Bing; Klatzky, Roberta L.; Stetten, George D.

    2011-01-01

    We extended the classic anorthoscopic viewing procedure to test a model of visualization of 3D structures from 2D cross-sections. Four experiments were conducted to examine key processes described in the model, localizing cross-sections within a common frame of reference and spatiotemporal integration of cross sections into a hierarchical object representation. Participants used a hand-held device to reveal a hidden object as a sequence of cross-sectional images. The process of localization was manipulated by contrasting two displays, in-situ vs. ex-situ, which differed in whether cross sections were presented at their source locations or displaced to a remote screen. The process of integration was manipulated by varying the structural complexity of target objects and their components. Experiments 1 and 2 demonstrated visualization of 2D and 3D line-segment objects and verified predictions about display and complexity effects. In Experiments 3 and 4, the visualized forms were familiar letters and numbers. Errors and orientation effects showed that displacing cross-sectional images to a remote display (ex-situ viewing) impeded the ability to determine spatial relationships among pattern components, a failure of integration at the object level. PMID:22217386

  15. Projection display technology for avionics applications

    NASA Astrophysics Data System (ADS)

    Kalmanash, Michael H.; Tompkins, Richard D.

    2000-08-01

    Avionics displays often require custom image sources tailored to demanding program needs. Flat panel devices are attractive for cockpit installations, however recent history has shown that it is not possible to sustain a business manufacturing custom flat panels in small volume specialty runs. As the number of suppliers willing to undertake this effort shrinks, avionics programs unable to utilize commercial-off-the-shelf (COTS) flat panels are placed in serious jeopardy. Rear projection technology offers a new paradigm, enabling compact systems to be tailored to specific platform needs while using a complement of COTS components. Projection displays enable improved performance, lower cost and shorter development cycles based on inter-program commonality and the wide use of commercial components. This paper reviews the promise and challenges of projection technology and provides an overview of Kaiser Electronics' efforts in developing advanced avionics displays using this approach.

  16. Using mixed reality, force feedback and tactile augmentation to improve the realism of medical simulation.

    PubMed

    Fisher, J Brian; Porter, Susan M

    2002-01-01

    This paper describes an application of a display approach which uses chromakey techniques to composite real and computer-generated images allowing a user to see his hands and medical instruments collocated with the display of virtual objects during a medical training simulation. Haptic feedback is provided through the use of a PHANTOM force feedback device in addition to tactile augmentation, which allows the user to touch virtual objects by introducing corresponding real objects in the workspace. A simplified catheter introducer insertion simulation was developed to demonstrate the capabilities of this approach.

  17. Flexible Display and Integrated Communication Devices (FDICD) Technology. Volume 2

    DTIC Science & Technology

    2008-06-01

    AFRL-RH-WP-TR-2008-0072. Flexible Display and Integrated Communication Devices (FDICD) Technology, Volume II. Authors: David Huffman, Keith Tognoni. Reporting period: 14 April 2004 - 20 June 2008. Abstract: This flexible display and integrated communication devices (FDICD) technology program sought to create a family of powerful...

  18. Applications and requirements for MEMS scanner mirrors

    NASA Astrophysics Data System (ADS)

    Wolter, Alexander; Hsu, Shu-Ting; Schenk, Harald; Lakner, Hubert K.

    2005-01-01

    Micro scanning mirrors are quite versatile MEMS devices for the deflection of a laser beam or a shaped beam from another light source. The most exciting application is certainly in laser-scanned displays. Laser television, home cinema, and data projectors will display the most brilliant colors, exceeding even plasma, OLED, and CRT. Devices for front and rear projection will have advantages in size, weight, and price. These advantages will be even more important in near-eye virtual displays such as head-mounted displays or viewfinders in digital cameras, and potentially in UMTS handsets. Optical pattern generation by scanning a modulated beam over an area can also be used in a number of other applications: laser printers, direct writing of photoresist for printed circuit boards, laser marking, and, with higher laser power, laser ablation or material processing. Scanning a continuous laser beam over a printed pattern and analyzing the scattered reflection is the principle of barcode reading in 1D and 2D. This principle also works for the identification of signatures, coins, bank notes, vehicles, and other objects. With a focused white-light or RGB beam, even full-color imaging with high resolution is possible from an amazingly small device. The form factor is also very interesting for application in endoscopes. Further applications are light curtains for intrusion control and the generation of arbitrary line patterns for triangulation. Scanning a measurement beam extends point measurements to 1D or 2D scans; automotive LIDAR (laser RADAR) and scanning confocal microscopy are just two examples. Last but not least, there is the field of beam steering, e.g., for all-optical fiber switches or the positioning of read/write heads in optical storage devices. The variety of possible applications brings with it a variety of specifications. This publication discusses various applications and their requirements.

  19. Design of an ultra-thin near-eye display with geometrical waveguide and freeform optics

    NASA Astrophysics Data System (ADS)

    Tsai, Meng-Che; Lee, Tsung-Xian

    2017-02-01

    Due to worldwide trends in portable devices and illumination technology, research interest in laser diode (LD) applications has boomed in recent years. One popular and promising LD application is the near-eye display used in VR/AR. An ideal near-eye display needs to provide high-resolution, wide-FOV imagery with compact magnifying optics and long battery life for prolonged use. However, previous studies have not reached high light-utilization efficiency in the illumination and imaging optical systems, which should be raised as much as possible to increase wearing comfort. To meet these needs, a waveguide illumination system for a near-eye display is presented in this paper. We focus on proposing a high-efficiency RGB LD light engine that reduces power consumption and increases the flexibility of the mechanical design by using freeform TIR reflectors instead of beam splitters. With these structures, the total system efficiency of the near-eye display is successfully increased, and the improved results in efficiency and fabrication tolerance of near-eye displays are shown in this paper.

  20. A transportable system for the in situ recording of color Denisyuk holograms of Greek cultural heritage artifacts in silver halide panchromatic emulsions and an optimized illuminating device for the finished holograms

    NASA Astrophysics Data System (ADS)

    Sarakinos, A.; Lembessis, A.; Zervos, N.

    2013-02-01

    In this paper we will present the Z-Lab transportable color holography system, the HoLoFoS illuminator and results of actual in situ recording of color Denisyuk holograms of artifacts on panchromatic silver halide emulsions. Z-lab and HoLoFoS were developed to meet identified prerequisites of holographic recording of artifacts: a) in situ recording b) a high degree of detail and color reproduction c) a low degree of image distortions. The Z-Lab consists of the Z3RGB camera, its accessories and a mobile darkroom. HoLoFoS is an RGB LED-based lighting device for the display of color holograms. The device is capable of digitally controlled intensity mixing and provides a beam of uniform color cross section. The small footprint and emission characteristics of the device LEDs result in a narrow band, quasi point source at selected wavelengths. A case study in recording and displaying 'Optical Clones' of Greek cultural heritage artifacts with the aforementioned systems will also be presented.

  1. 76 FR 72439 - Certain Consumer Electronics and Display Devices and Products Containing Same; Receipt of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-11-23

    ... INTERNATIONAL TRADE COMMISSION [DN 2858] Certain Consumer Electronics and Display Devices and.... International Trade Commission has received a complaint entitled In Re Certain Consumer Electronics and Display... importation of certain consumer electronics and display devices and products containing same. The complaint...

  2. Emerging digital micromirror device (DMD) applications

    NASA Astrophysics Data System (ADS)

    Dudley, Dana; Duncan, Walter M.; Slaughter, John

    2003-01-01

    For the past six years, Digital Light Processing technology from Texas Instruments has made significant inroads in the projection display market. With products enabling the world's smallest data and video projectors, HDTVs, and digital cinema, DLP technology is extremely powerful and flexible. At the heart of these display solutions is Texas Instruments Digital Micromirror Device (DMD), a semiconductor-based "light switch" array of thousands of individually addressable, tiltable, mirror-pixels. With success of the DMD as a spatial light modulator for projector applications, dozens of new applications are now being enabled by general-use DMD products that are recently available to developers. The same light switching speed and "on-off" (contrast) ratio that have resulted in superior projector performance, along with the capability of operation outside the visible spectrum, make the DMD very attractive for many applications, including volumetric display, holographic data storage, lithography, scientific instrumentation, and medical imaging. This paper presents an overview of past and future DMD performance in the context of new DMD applications, cites several examples of emerging products, and describes the DMD components and tools now available to developers.

  3. Advanced millimeter-wave security portal imaging techniques

    NASA Astrophysics Data System (ADS)

    Sheen, David M.; Bernacki, Bruce E.; McMakin, Douglas L.

    2012-03-01

    Millimeter-wave (mm-wave) imaging is rapidly gaining acceptance as a security tool to augment conventional metal detectors and baggage x-ray systems for passenger screening at airports and other secured facilities. This acceptance indicates that the technology has matured; however, many potential improvements can yet be realized. The authors have developed a number of techniques over the last several years, including novel image reconstruction and display techniques, polarimetric imaging techniques, array switching schemes, and high-frequency high-bandwidth techniques. All of these may improve the performance of new systems; however, some of these techniques will increase the cost and complexity of the mm-wave security portal imaging systems. Reducing this cost may require the development of novel array designs. In particular, RF photonic methods may provide new solutions to the design and development of the sequentially switched linear mm-wave arrays that are the key element in the mm-wave portal imaging systems. High-frequency, high-bandwidth designs are difficult to achieve with conventional mm-wave electronic devices, and RF photonic devices may be a practical alternative. In this paper, the mm-wave imaging techniques developed at PNNL are reviewed and the potential for implementing RF photonic mm-wave array designs is explored.

  4. Computer Assisted Virtual Environment - CAVE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Erickson, Phillip; Podgorney, Robert; Weingartner,

    Research at the Center for Advanced Energy Studies is taking on another dimension with a 3-D device known as a Computer Assisted Virtual Environment. The CAVE uses projection to display high-end computer graphics on three walls and the floor. By wearing 3-D glasses to create depth perception and holding a wand to move and rotate images, users can delve into data.

  5. Contextualized Interdisciplinary Learning in Mainstream Schools Using Augmented Reality-Based Technology: A Dream or Reality?

    ERIC Educational Resources Information Center

    Ong, Alex

    2010-01-01

    The use of augmented reality (AR) tools, where virtual objects such as tables and graphs can be displayed and be interacted with in real scenes created from imaging devices, in mainstream school curriculum is uncommon, as they are potentially costly and sometimes bulky. Thus, such learning tools are mainly applied in tertiary institutions, such as…

  6. Computer Assisted Virtual Environment - CAVE

    ScienceCinema

    Erickson, Phillip; Podgorney, Robert; Weingartner,

    2018-05-30

    Research at the Center for Advanced Energy Studies is taking on another dimension with a 3-D device known as a Computer Assisted Virtual Environment. The CAVE uses projection to display high-end computer graphics on three walls and the floor. By wearing 3-D glasses to create depth perception and holding a wand to move and rotate images, users can delve into data.

  7. OLED microdisplays in near-to-eye applications: challenges and solutions

    NASA Astrophysics Data System (ADS)

    Vogel, Uwe; Richter, Bernd; Wartenberg, Philipp; König, Peter; Hild, Olaf R.; Fehse, Karsten; Schober, Matthias; Bodenstein, Elisabeth; Beyer, Beatrice

    2017-06-01

    Wearable augmented reality (AR) has already started to be used productively, mainly in the manufacturing industry and logistics. The next step will be to move wearable AR from "professionals to citizens" by enabling networked, everywhere augmented reality (in-/outdoor localisation, scene recognition, cloud access,…) which is non-intrusive, exhibits intuitive user interaction, is safe and secure to use at any time, and considers personal privacy protection (the user's and others'). Various hardware improvements (e.g., low power, seamless interactivity, small form factor, ergonomics,…), as well as connectivity and network integration, will become vital for consumer adoption. Smart glasses (i.e., near-to-eye (NTE) displays) have evolved as major devices for wearable AR and hold the potential to become adopted by consumers soon. Tiny microdisplays are a key component of smart glasses, creating images from, for example, organic light-emitting diodes (OLEDs), which have become popular in mobile phone displays. All microdisplay technologies on the market comprise an image-creating pixel modulation, but only the emissive ones (for example, OLED and LED) feature the image and light source in a single device, and therefore do not require an external light source. This minimizes system size and power consumption, while providing exceptional contrast and color space. These advantages make OLED microdisplays a perfect fit for near-eye applications. A low-power active-matrix CMOS backplane architecture, embedded sensors, emission spectra outside the visible, and high-resolution sub-pixel micro-patterning address some of the application challenges (e.g., long battery life, sunlight readability, user interaction modes) and enable advanced features for OLED microdisplays in near-to-eye displays, e.g., upcoming connected augmented-reality smart glasses. This report analyzes the challenges in addressing those features and discusses solutions.

  8. Stimulus factors in motion perception and spatial orientation

    NASA Technical Reports Server (NTRS)

    Post, R. B.; Johnson, C. A.

    1984-01-01

    The Malcolm horizon, or Peripheral Vision Horizon Device (PVHD), utilizes a large projected light stimulus as an attitude indicator in order to achieve a more compelling sense of roll than is obtained with smaller devices. The basic principle is that the larger stimulus is more similar to the visibility of a real horizon during roll, and does not require fixation and attention to the degree that smaller displays do. Successful implementation of such a device requires adjustment of the parameters of the visual stimulus so that its effects on motion perception and spatial orientation are optimized. With this purpose in mind, the effects of relevant image variables on the perception of object motion, self motion, and spatial orientation are reviewed.

  9. Prototype Focal-Plane-Array Optoelectronic Image Processor

    NASA Technical Reports Server (NTRS)

    Fang, Wai-Chi; Shaw, Timothy; Yu, Jeffrey

    1995-01-01

    Prototype very-large-scale integrated (VLSI) planar array of optoelectronic processing elements combines speed of optical input and output with flexibility of reconfiguration (programmability) of electronic processing medium. Basic concept of processor described in "Optical-Input, Optical-Output Morphological Processor" (NPO-18174). Performs binary operations on binary (black and white) images. Each processing element corresponds to one picture element of image and located at that picture element. Includes input-plane photodetector in form of parasitic phototransistor part of processing circuit. Output of each processing circuit used to modulate one picture element in output-plane liquid-crystal display device. Intended to implement morphological processing algorithms that transform image into set of features suitable for high-level processing; e.g., recognition.
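    The binary morphological operations such a processor implements (erosion and dilation, the building blocks of the feature-extraction algorithms mentioned above) can be sketched in NumPy as a software stand-in for the optoelectronic array; this is a generic illustration, not the NPO-18174 design. Each output pixel plays the role of one processing element acting on its neighborhood:

    ```python
    import numpy as np

    def dilate(img, se):
        """Binary dilation: output pixel is 1 if the structuring element,
        centered there, overlaps any 1-pixel of the input."""
        h, w = se.shape
        ph, pw = h // 2, w // 2
        padded = np.pad(img, ((ph, ph), (pw, pw)))
        out = np.zeros_like(img)
        for i in range(img.shape[0]):
            for j in range(img.shape[1]):
                out[i, j] = np.any(padded[i:i + h, j:j + w] & se)
        return out

    def erode(img, se):
        """Binary erosion: output pixel is 1 only if the structuring
        element fits entirely inside the 1-region."""
        h, w = se.shape
        ph, pw = h // 2, w // 2
        padded = np.pad(img, ((ph, ph), (pw, pw)))
        out = np.zeros_like(img)
        for i in range(img.shape[0]):
            for j in range(img.shape[1]):
                out[i, j] = np.all(padded[i:i + h, j:j + w][se == 1])
        return out
    ```

    Composing these (opening, closing, hit-or-miss) yields the kind of shape-feature transforms the abstract refers to.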

  10. MTF measurement of LCDs by a linear CCD imager: I. Monochrome case

    NASA Astrophysics Data System (ADS)

    Kim, Tae-hee; Choe, O. S.; Lee, Yun Woo; Cho, Hyun-Mo; Lee, In Won

    1997-11-01

    We constructed a modulation transfer function (MTF) measurement system for an LCD using a linear charge-coupled device (CCD) imager. The MTF as used in optical systems cannot describe the combined effect of resolution and contrast on the image quality of a display. We therefore present a new measurement method based on the transmission property of an LCD: the MTF is measured while the contrast and brightness levels are controlled. From the results, we show that the method is useful for describing image quality. The new measurement method and its conditions are described. To demonstrate its validity, the method is applied to compare the performance of two different LCDs.
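    The link between spread and contrast that MTF measurement exploits can be sketched numerically: the MTF is the normalized magnitude of the Fourier transform of the line spread function (LSF), so a broader LSF yields lower contrast transfer at every nonzero frequency. A minimal illustration (the Gaussian LSFs below are synthetic stand-ins, not measured data):

    ```python
    import numpy as np

    def mtf_from_lsf(lsf):
        """MTF = normalized |FFT| of the line spread function."""
        lsf = np.asarray(lsf, dtype=float)
        lsf = lsf / lsf.sum()          # normalize so that MTF(0) = 1
        return np.abs(np.fft.rfft(lsf))

    # Synthetic LSFs: a sharp display and a blurrier one.
    x = np.arange(-32, 32)
    sharp = np.exp(-x**2 / (2 * 2.0**2))
    blurry = np.exp(-x**2 / (2 * 6.0**2))
    ```

    Comparing the two curves at a fixed spatial frequency shows the blurrier device transferring less contrast, which is the quantity the CCD-based system measures directly from the panel.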

  11. Holographic diffuser by use of a silver halide sensitized gelatin process

    NASA Astrophysics Data System (ADS)

    Kim, Sun Il; Choi, Yoon Sun; Ham, Yong Nam; Park, Chong Yun; Kim, Jong Man

    2003-05-01

    Diffusers play an important role in liquid-crystal display (LCD) applications as a beam-shaping device, a brightness homogenizer, a light-scattering device, and an imaging screen. The transmittance and diffusing angle of the diffusers are the critical aspects for LCD applications. Holographic diffusers made by various processing methods have been investigated. The diffusing characteristics of different diffusing materials and processing methods have been evaluated and compared. The micro-structures of the holographic diffusers have been investigated by use of scanning electron microscopy. Holographic diffusers made by the silver halide sensitized gelatin (SHSG) method have structural merits for improving diffuser quality. The features of the holographic diffuser were exceptional in terms of transmittance and diffusing angle. The replication method by use of the SHSG process can be directly used for the manufacturing of diffusers for display applications.

  12. Research and Development of Large Area Color AC Plasma Displays

    NASA Astrophysics Data System (ADS)

    Shinoda, Tsutae

    1998-10-01

    A plasma display is essentially a gas discharge device using discharges in small cavities of about 0.1 mm. Color plasma displays utilize the visible light from phosphors excited by the ultraviolet light of the discharge, in contrast to monochrome plasma displays, which utilize visible light directly from the gas discharges. At the early stage of color plasma display development, degradation of the phosphors and unstable operating voltages prevented the realization of a practical color plasma display. The introduction of the three-electrode surface-discharge technology opened the way to solving these problems. Two key technologies, a simple panel structure with stripe ribs and phosphor alignment, and a full-color image driving method with an address-and-display-period-separated sub-field method, have made practical full-color plasma displays a reality. The first full-color plasma display, a 21-in.-diagonal PDP, was developed in 1992, followed by a 42-in.-diagonal PDP in 1995. Currently, a 50-in.-diagonal color plasma display has been developed. Large-area color plasma displays have already been put on the market and are creating new markets, such as wall-hanging TVs and multimedia displays for advertisement, information, etc. This paper shows the history of the surface-discharge color plasma display technologies and the current status of the color plasma display.
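    The sub-field driving idea mentioned above builds gray scale from binary on/off fields whose lit durations the eye integrates over a frame. A minimal sketch with binary weights (a simplification: real panels may use non-binary weight sets and redundant sub-fields to suppress motion artifacts):

    ```python
    def to_subfields(gray, n_fields=8):
        """Split an 8-bit gray level into on/off flags for sub-fields
        weighted 1, 2, 4, ..., 128; each flag says whether the cell
        sustains discharges during that sub-field's display period."""
        assert 0 <= gray <= (1 << n_fields) - 1
        return [(gray >> bit) & 1 for bit in range(n_fields)]

    def from_subfields(flags):
        """Intensity the eye integrates from the lit sub-fields."""
        return sum(flag << bit for bit, flag in enumerate(flags))
    ```

    Separating the addressing period (writing the flags) from the display period (sustain discharges) is what lets one frame carry all sub-fields for every cell.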

  13. A scalable diffraction-based scanning 3D colour video display as demonstrated by using tiled gratings and a vertical diffuser.

    PubMed

    Jia, Jia; Chen, Jhensi; Yao, Jun; Chu, Daping

    2017-03-17

    A high quality 3D display requires a high amount of optical information throughput, which needs an appropriate mechanism to distribute information in space uniformly and efficiently. This study proposes a front-viewing system which is capable of managing the required amount of information efficiently from a high bandwidth source and projecting 3D images with a decent size and a large viewing angle at video rate in full colour. It employs variable gratings to support a high bandwidth distribution. This concept is scalable and the system can be made compact in size. A horizontal parallax only (HPO) proof-of-concept system is demonstrated by projecting holographic images from a digital micro mirror device (DMD) through rotational tiled gratings before they are realised on a vertical diffuser for front-viewing.

  14. High brightness MEMS mirror based head-up display (HUD) modules with wireless data streaming capability

    NASA Astrophysics Data System (ADS)

    Milanovic, Veljko; Kasturi, Abhishek; Hachtel, Volker

    2015-02-01

    A high brightness Head-Up Display (HUD) module was demonstrated with a fast, dual-axis MEMS mirror that displays vector images and text, utilizing its ~8kHz bandwidth on both axes. Two methodologies were evaluated: in one, the mirror steers a laser at wide angles of <48° on transparent multi-color fluorescent emissive film and displays content directly on the windshield, and in the other the mirror displays content on reflective multi-color emissive phosphor plates reflected off the windshield to create a virtual image for the driver. The display module is compact, consisting of a single laser diode, off-the-shelf lenses and a MEMS mirror in combination with a MEMS controller to enable precise movement of the mirror's X- and Y-axis. The MEMS controller offers both USB and wireless streaming capability and we utilize a library of functions on a host computer for creating content and controlling the mirror. Integration with smart phone applications is demonstrated, utilizing the mobile device both for content generation based on various messages or data, and for content streaming to the MEMS controller via Bluetooth interface. The display unit is highly resistant to vibrations and shock, and requires only ~1.5W to operate, even with content readable in sunlit outdoor conditions. The low power requirement is in part due to a vector graphics approach, allowing the efficient use of laser power, and also due to the use of a single, relatively high efficiency laser and simple optics.

  15. Nonlinear mapping of the luminance in dual-layer high dynamic range displays

    NASA Astrophysics Data System (ADS)

    Guarnieri, Gabriele; Ramponi, Giovanni; Bonfiglio, Silvio; Albani, Luigi

    2009-02-01

    It has long been known that the human visual system (HVS) has a nonlinear response to luminance. This nonlinearity can be quantified using the concept of just noticeable difference (JND), which represents the minimum amplitude of a specified test pattern an average observer can discern from a uniform background. The JND depends on the background luminance following a threshold versus intensity (TVI) function. It is possible to define a curve which maps physical luminances into a perceptually linearized domain. This mapping can be used to optimize a digital encoding, by minimizing the visibility of quantization noise. It is also commonly used in medical applications to display images adapting to the characteristics of the display device. High dynamic range (HDR) displays, which are beginning to appear on the market, can display luminance levels outside the range in which most standard mapping curves are defined. In particular, dual-layer LCD displays are able to extend the gamut of luminance offered by conventional liquid crystals towards the black region; in such areas suitable and HVS-compliant luminance transformations need to be determined. In this paper we propose a method, which is primarily targeted to the extension of the DICOM curve used in medical imaging, but also has a more general application. The method can be modified in order to compensate for the ambient light, which can be significantly greater than the black level of an HDR display and consequently reduce the visibility of the details in dark areas.
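    The perceptual linearization described above can be sketched by accumulating JNDs over a threshold-versus-intensity model; the TVI constants below (an absolute threshold floor plus a Weber term) are placeholder values for illustration, not the DICOM grayscale standard display function:

    ```python
    import numpy as np

    def jnd_index_curve(l_min=0.05, l_max=4000.0, n=4096):
        """Tabulate a JND index over [l_min, l_max] cd/m^2 by accumulating
        luminance steps divided by the local detection threshold."""
        L = np.geomspace(l_min, l_max, n)
        thresh = np.maximum(1e-3, 0.01 * L)   # toy TVI: floor + Weber term
        jnd = np.concatenate(([0.0], np.cumsum(np.diff(L) / thresh[:-1])))
        return L, jnd

    def luminance_to_jnd(luminance, L, jnd):
        """Map physical luminance into the perceptually linearized domain."""
        return np.interp(luminance, L, jnd)
    ```

    Inverting the tabulated curve (interpolating jnd back to L) gives the display-side mapping from perceptually uniform code values to physical luminance, which is where an extended black range or an ambient-light offset would be folded in.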

  16. Measuring snow cover using satellite imagery during 1973 and 1974 melt season: North Santiam, Boise, and Upper Snake Basins, phase 1. [LANDSAT satellites, imaging techniques

    NASA Technical Reports Server (NTRS)

    Wiegman, E. J.; Evans, W. E.; Hadfield, R.

    1975-01-01

    Measurements are examined of snow coverage during the snow-melt seasons of 1973 and 1974 from LANDSAT imagery for three Columbia River subbasins. Satellite-derived snow cover inventories for the three test basins were obtained as an alternative to inventories performed with the current operational practice of using small aircraft flights over selected snow fields. The accuracy and precision versus cost of several different interactive image analysis procedures were investigated using a display device, the Electronic Satellite Image Analysis Console. Single-band radiance thresholding was the principal technique employed in the snow detection, although this technique was supplemented by an editing procedure involving reference to hand-generated elevation contours. For each date and view measured, a binary thematic map or "mask" depicting the snow cover was generated by a combination of objective and subjective procedures. Photographs of data analysis equipment (displays) are shown.

  17. Video image processor on the Spacelab 2 Solar Optical Universal Polarimeter /SL2 SOUP/

    NASA Technical Reports Server (NTRS)

    Lindgren, R. W.; Tarbell, T. D.

    1981-01-01

    The SOUP instrument is designed to obtain diffraction-limited digital images of the sun with high photometric accuracy. The Video Processor originated from the requirement to provide onboard real-time image processing, both to reduce the telemetry rate and to provide meaningful video displays of scientific data to the payload crew. This original concept has evolved into a versatile digital processing system with a multitude of other uses in the SOUP program. The central element in the Video Processor design is a 16-bit central processing unit based on 2900 family bipolar bit-slice devices. All arithmetic, logical and I/O operations are under control of microprograms, stored in programmable read-only memory and initiated by commands from the LSI-11. Several functions of the Video Processor are described, including interface to the High Rate Multiplexer downlink, cosmetic and scientific data processing, scan conversion for crew displays, focus and exposure testing, and use as ground support equipment.

  18. Biplane reconstruction and visualization of virtual endoscopic and fluoroscopic views for interventional device navigation

    NASA Astrophysics Data System (ADS)

    Wagner, Martin G.; Strother, Charles M.; Schafer, Sebastian; Mistretta, Charles A.

    2016-03-01

    Biplane fluoroscopic imaging is an important tool for minimally invasive procedures for the treatment of cerebrovascular diseases. However, finding a good working angle for the C-arms of the angiography system as well as navigating based on the 2D projection images can be a difficult task. The purpose of this work is to propose a novel 4D reconstruction algorithm for interventional devices from biplane fluoroscopy images and to propose new techniques for a better visualization of the results. The proposed reconstruction method binarizes the fluoroscopic images using a dedicated noise-reduction algorithm for curvilinear structures and a global thresholding approach. A topology-preserving thinning algorithm is then applied, and a path search algorithm minimizing the curvature of the device is used to extract the 2D device centerlines. Finally, the 3D device path is reconstructed using epipolar geometry. The point correspondences are determined by a monotonic mapping function that minimizes the reconstruction error. The three-dimensional reconstruction of the device path allows the rendering of virtual fluoroscopy images from arbitrary angles as well as 3D visualizations such as virtual endoscopic views or glass pipe renderings, where the vessel wall is rendered with a semi-transparent material. This work also proposes a combination of different visualization techniques in order to increase the usability and spatial orientation for the user. A combination of synchronized endoscopic and glass pipe views is proposed, where the virtual endoscopic camera position is determined based on the device tip location as well as the previous camera position using a Kalman filter in order to create a smooth path. Additionally, vessel centerlines are displayed and the path to the target is highlighted. Finally, the virtual endoscopic camera position is also visualized in the glass pipe view to further improve the spatial orientation.
The proposed techniques could considerably improve the workflow of minimally invasive procedures for the treatment of cerebrovascular diseases.
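The final epipolar-geometry step described above can be illustrated with standard linear (DLT) two-view triangulation. The projection matrices and points below are hypothetical stand-ins for the biplane C-arm geometry, not the paper's actual calibration:

```python
import numpy as np

def triangulate_point(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.

    P1, P2 : 3x4 projection matrices of the biplane views.
    x1, x2 : corresponding 2D device-centerline points.
    """
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The solution is the right singular vector with the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Two toy pinhole cameras observing the point (1, 2, 10).
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])          # camera at origin
P2 = np.hstack([np.eye(3), np.array([[-1.0, 0, 0]]).T])  # shifted 1 unit in x
X_true = np.array([1.0, 2.0, 10.0])
x1 = P1 @ np.append(X_true, 1); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1); x2 = x2[:2] / x2[2]
print(triangulate_point(P1, P2, x1, x2))
```

With noise-free correspondences the DLT recovers the 3D point exactly; with real centerline detections, the monotonic mapping described in the abstract chooses which points to pair before this step.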

  19. A world of minerals in your mobile device

    USGS Publications Warehouse

    Jenness, Jane E.; Ober, Joyce A.; Wilkins, Aleeza M.; Gambogi, Joseph

    2016-09-15

    Mobile phones and other high-technology communications devices could not exist without mineral commodities. More than one-half of all components in a mobile device—including its electronics, display, battery, speakers, and more—are made from mined and semiprocessed materials (mineral commodities). Some mineral commodities can be recovered as byproducts during the production and processing of other commodities. As an example, bauxite is mined for its aluminum content, but gallium is recovered during the aluminum production process. The images show the ore minerals (sources) of some mineral commodities that are used to make components of a mobile device. On the reverse side, the map and table depict the major source countries producing these mineral commodities along with how these commodities are used in mobile devices. For more information on minerals, visit http://minerals.usgs.gov.

  20. Synthetic environment employing a craft for providing user perspective reference

    DOEpatents

    Maples, Creve; Peterson, Craig A.

    1997-10-21

A multi-dimensional, user-oriented synthetic environment system allows application programs to be programmed and accessed with input/output-device-independent, generic functional commands which are a distillation of the actual functions performed by any application program. A shared memory structure allows the translation of device-specific commands into device-independent, generic functional commands. Complete flexibility in the mapping of synthetic environment data to the user is thereby allowed. Accordingly, synthetic environment data may be provided to the user on parallel user-information processing channels, allowing the subcognitive mind to act as a filter, eliminating irrelevant information and allowing the processing of increased amounts of data by the user. The user is further provided with a craft surrounding the user within the synthetic environment, which craft imparts important visual referential and motion parallax cues, enabling the user to better appreciate distances and directions within the synthetic environment. Display of this craft in close proximity to the user's point of perspective may be accomplished without substantially degrading the image resolution of the displayed portions of the synthetic environment.

  1. Autostereoscopic image creation by hyperview matrix controlled single pixel rendering

    NASA Astrophysics Data System (ADS)

    Grasnick, Armin

    2017-06-01

As awareness of stereoscopic cinema has increased, so has the perception of its limitations when watching movies with 3D glasses. The additional glasses are not only uncomfortable and annoying; there are tangible arguments for avoiding them. These "stereoscopic deficits" are caused by the 3D glasses themselves. In contrast to natural viewing with naked eyes, artificial 3D viewing with 3D glasses introduces specific "unnatural" side effects. Most moviegoers have experienced unspecific discomfort in 3D cinema, which they may have attributed to insufficient image quality. Obviously, quality problems with 3D glasses can be reduced by technical improvement, but this simple answer can, and already has, misled some decision makers into settling for the existing 3D glasses solution. It must be underlined that the glasses introduce inherent difficulties that can never be solved by modest advancement, because the glasses themselves cause them. To overcome the limitations of stereoscopy in display applications, several technologies have been proposed to create a 3D impression without the need for 3D glasses, known as autostereoscopy. But even today's autostereoscopic displays cannot solve all viewing problems and still show limitations. A hyperview display could be a suitable candidate, if an affordable device could be built and the necessary content generated in an acceptable time frame. All autostereoscopic displays based on the ideas of the light field, integral photography, or super-multiview can be unified within the concept of the hyperview. Each of these display technologies relies on numerous different perspective images to create the 3D impression. Calculating such a high number of views requires far more computing time than forming a simple stereoscopic image pair. 
The hyperview concept allows the screen image of any 3D technology to be described with a simple equation. This formula can be utilized to create a specific hyperview matrix for a certain 3D display, independent of the technology used. A hyperview matrix may contain references to a large number of images and acts as an instruction for the subsequent rendering of particular pixels. Naturally, a single pixel delivers an image with no resolution and does not provide any idea of the rendered scene. However, by implementing the method of pixel recycling, a 3D image can be perceived, even if all source images are different. It is shown that several million perspectives can be rendered with GPU support, benefiting from the hyperview matrix. As a result, a conventional autostereoscopic display designed to represent only a few perspectives can show a hyperview image when driven by a suitable hyperview matrix. It is also shown that a hyperview image with millions of views can be presented on a conventional autostereoscopic display; such an image requires that each display pixel be allocated from a different source image. Controlled by the hyperview matrix, an adapted renderer can render a full hyperview image in real time.
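As a rough illustration of how a hyperview matrix can act as a per-pixel rendering instruction, the sketch below assembles a display image by selecting, for every pixel, one of several source views. The slanted view assignment is an invented example, not a matrix for any real display:

```python
import numpy as np

def compose_hyperview(sources, view_matrix):
    """Assemble a display image where each pixel is taken from the source
    view selected by the hyperview matrix (illustrative; a real matrix
    encodes the per-subpixel optics of a specific 3D display)."""
    h, w = view_matrix.shape
    out = np.empty((h, w), dtype=sources.dtype)
    for y in range(h):
        for x in range(w):
            out[y, x] = sources[view_matrix[y, x], y, x]
    return out

# Four hypothetical 4x4 single-channel views, each filled with its index.
views = np.stack([np.full((4, 4), v) for v in range(4)])
# A simple slanted assignment, as a lenticular layout might produce.
matrix = np.add.outer(np.arange(4), np.arange(4)) % 4
screen = compose_hyperview(views, matrix)
print(screen)
```

Because each view here is filled with its own index, the composed screen image directly visualizes which view feeds each pixel.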

  2. A portable near-infrared fluorescence image overlay device for surgical navigation (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    McWade, Melanie A.

    2016-03-01

A rise in the use of near-infrared (NIR) fluorescent dyes and intrinsic fluorescent markers for surgical guidance and tissue diagnosis has triggered the development of NIR fluorescence imaging systems. Because NIR wavelengths are invisible to the naked eye, instrumentation must allow surgeons to visualize areas of high fluorescence. Current NIR fluorescence imaging systems have limited ease of use because they display fluorescence information on remote monitors, which requires surgeons to divert attention away from the patient to identify the location of tissue fluorescence. Furthermore, some systems lack simultaneous visible-light imaging, which provides valuable spatial context to fluorescence images. We have developed a novel, portable NIR fluorescence imaging approach for intraoperative surgical guidance that provides information for surgical navigation within the clinician's line of sight. The system utilizes a NIR CMOS detector to collect emitted NIR fluorescence from the surgical field. Tissues with NIR fluorescence are overlaid with visible light to provide information on tissue margins directly on the surgical field. In vitro studies have shown this versatile imaging system can be applied to applications with both extrinsic NIR contrast agents such as indocyanine green and weaker sources of biological fluorescence such as parathyroid gland tissue. This non-invasive, portable NIR fluorescence imaging system overlays an image directly on tissue, potentially allowing surgical decisions to be made more quickly and with greater ease than with current NIR fluorescence imaging systems.
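The overlay idea, painting high-fluorescence regions onto the visible scene, can be sketched as a simple thresholded alpha blend. The threshold, opacity, and green pseudo-color below are illustrative choices, not parameters of the described system:

```python
import numpy as np

def overlay_fluorescence(visible, nir, threshold=0.2, alpha=0.6):
    """Blend a pseudo-colored NIR fluorescence map onto an RGB visible image.
    Pixels below `threshold` keep the visible image; above it, the NIR
    signal is shown in green with opacity `alpha` (example values)."""
    out = visible.astype(float).copy()
    mask = nir > threshold
    # Paint fluorescent regions green, weighted by alpha.
    out[mask, 1] = (1 - alpha) * out[mask, 1] + alpha * 1.0
    return out

visible = np.zeros((2, 2, 3))                 # dark visible scene
nir = np.array([[0.0, 0.9], [0.1, 0.5]])      # fluorescence intensity map
blended = overlay_fluorescence(visible, nir)
```

In a projector-based overlay device, the equivalent blend happens optically on the tissue itself rather than in software.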

  3. Interpretation and mapping of geological features using mobile devices for 3D outcrop modelling

    NASA Astrophysics Data System (ADS)

    Buckley, Simon J.; Kehl, Christian; Mullins, James R.; Howell, John A.

    2016-04-01

    Advances in 3D digital geometric characterisation have resulted in widespread adoption in recent years, with photorealistic models utilised for interpretation, quantitative and qualitative analysis, as well as education, in an increasingly diverse range of geoscience applications. Topographic models created using lidar and photogrammetry, optionally combined with imagery from sensors such as hyperspectral and thermal cameras, are now becoming commonplace in geoscientific research. Mobile devices (tablets and smartphones) are maturing rapidly to become powerful field computers capable of displaying and interpreting 3D models directly in the field. With increasingly high-quality digital image capture, combined with on-board sensor pose estimation, mobile devices are, in addition, a source of primary data, which can be employed to enhance existing geological models. Adding supplementary image textures and 2D annotations to photorealistic models is therefore a desirable next step to complement conventional field geoscience. This contribution reports on research into field-based interpretation and conceptual sketching on images and photorealistic models on mobile devices, motivated by the desire to utilise digital outcrop models to generate high quality training images (TIs) for multipoint statistics (MPS) property modelling. Representative training images define sedimentological concepts and spatial relationships between elements in the system, which are subsequently modelled using artificial learning to populate geocellular models. Photorealistic outcrop models are underused sources of quantitative and qualitative information for generating TIs, explored further in this research by linking field and office workflows through the mobile device. Existing textured models are loaded to the mobile device, allowing rendering in a 3D environment. 
Because interpretation in 2D is more familiar and comfortable for users, the developed application allows new images to be captured with the device's digital camera, and an interface is available for annotating (interpreting) the image using lines and polygons. Image-to-geometry registration is then performed using a developed algorithm, initialised using the coarse pose from the on-board orientation and positioning sensors. The annotations made on the captured images are then available in the 3D model coordinate system for overlay and export. This workflow allows geologists to make interpretations and conceptual models in the field, which can then be linked to and refined in office workflows for later MPS property modelling.
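The image-to-geometry registration step relies on projecting 3D model geometry into the captured image under the coarse sensor pose. A minimal pinhole-projection sketch, with invented intrinsics and pose, looks like this:

```python
import numpy as np

def project(points, K, R, t):
    """Pinhole projection of 3D model points into a captured image,
    given intrinsics K and a coarse pose (R, t) from the device sensors."""
    cam = (R @ points.T).T + t          # world -> camera coordinates
    uv = (K @ cam.T).T
    return uv[:, :2] / uv[:, 2:3]       # perspective divide

# Hypothetical intrinsics and pose (not from the described system).
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
R = np.eye(3)
t = np.array([0.0, 0.0, 5.0])
pts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
print(project(pts, K, R, t))
```

Registration refinement then adjusts (R, t) so that projected model features line up with detected image features; once registered, 2D annotations can be lifted back into the 3D model coordinate system along the corresponding view rays.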

  4. Accuracy of remote electrocardiogram interpretation with the use of Google Glass technology.

    PubMed

    Jeroudi, Omar M; Christakopoulos, George; Christopoulos, George; Kotsia, Anna; Kypreos, Megan A; Rangan, Bavana V; Banerjee, Subhash; Brilakis, Emmanouil S

    2015-02-01

We sought to investigate the accuracy of remote electrocardiogram (ECG) interpretation using Google Glass (Google, Mountain View, California). Google Glass is an optical head-mounted display device with growing applications in medicine. We compared interpretation of 10 ECGs with 21 clinically important findings by faculty and fellow cardiologists by (1) viewing the electrocardiographic image on the Google Glass screen; (2) viewing a photograph of the ECG taken using Google Glass and interpreted on a mobile device; (3) viewing the original paper ECG; and (4) viewing a photograph of the ECG taken with a high-resolution camera and interpreted on a mobile device. One point was given for identification of each correct finding. Subjective rating of the user experience was also recorded. Twelve physicians (4 faculty and 8 fellow cardiologists) participated. The average electrocardiographic interpretation scores (maximum 21 points) as viewed through the Google Glass, as a Google Glass photograph on a mobile device, on paper, and as a high-resolution photograph on a mobile device were 13.5 ± 1.8, 16.1 ± 2.6, 18.3 ± 1.7, and 18.6 ± 1.5, respectively (p = 0.0005 between Google Glass and mobile device, p = 0.0005 between Google Glass and paper, and p = 0.002 between mobile device and paper). Of the 12 physicians, 9 (75%) were dissatisfied with viewing ECGs on the prism display of Google Glass. In conclusion, further improvements are needed before Google Glass can be reliably used for remote electrocardiographic analysis. Published by Elsevier Inc.

  5. Controlling motion sickness and spatial disorientation and enhancing vestibular rehabilitation with a user-worn see-through display.

    PubMed

    Krueger, Wesley W O

    2011-01-01

An eyewear-mounted visual display ("user-worn see-through display") projecting an artificial horizon aligned with the user's head and body position in space can prevent or lessen motion sickness in susceptible individuals in a motion-provocative environment, as well as aid patients undergoing vestibular rehabilitation. In this project, a wearable display device, including software technology and hardware, was developed, and a phase I feasibility study and phase II clinical trial for safety and efficacy were performed. Both phase I and phase II were prospective studies funded by the NIH. The phase II study used repeated measures for motion-intolerant subjects and a randomized control-group (display device/no display device) pre-posttest design for patients in vestibular rehabilitation. Following technology and display device development, 75 patients were evaluated by tests and rating scales in the phase II study; 25 subjects with motion intolerance used the technology in the display device in provocative environments and completed subjective rating scales, whereas 50 patients were evaluated before and after vestibular rehabilitation (25 using the display device and 25 in a control group) using established test measures. All patients with motion intolerance rated the technology as helpful for the nine symptoms assessed, and 96% rated the display device as simple and easy to use. Duration of symptoms significantly decreased with use of the display technology. In patients undergoing vestibular rehabilitation, there were no significant differences in the amount of change from pre- to post-therapy on objective balance tests between display device users and controls. However, those using the technology required significantly fewer rehabilitation sessions to achieve those outcomes than the control group. 
A user-worn see-through display, utilizing a visual fixation target coupled with a stable artificial horizon and aligned with user movement, has demonstrated substantial benefit for individuals susceptible to motion intolerance and spatial disorientation and those undergoing vestibular rehabilitation. The technology developed has applications in any environment where motion sensitivity affects human performance.

  6. Autostereoscopic display technology for mobile 3DTV applications

    NASA Astrophysics Data System (ADS)

    Harrold, Jonathan; Woodgate, Graham J.

    2007-02-01

Mobile TV is now a commercial reality, and an opportunity exists for the first mass-market 3DTV products based on cell phone platforms with switchable 2D/3D autostereoscopic displays. Compared to conventional cell phones, TV phones need to operate for extended periods of time with the display running at full brightness, so the efficiency of the 3D optical system is key. The desire for increased viewing freedom, to provide greater viewing comfort, can be met by increasing the number of views presented. A four-view lenticular display will have a brightness five times greater than the equivalent parallax barrier display. Therefore, lenticular displays are very strong candidates for cell phone 3DTV. The selection of Polarisation Activated Microlens™ architectures for LCD, OLED, and reflective display applications is described. The technology delivers significant advantages, especially for high-pixel-density panels, and optimises device ruggedness while maintaining display brightness. A significant manufacturing breakthrough is described, enabling switchable microlenses to be fabricated using a simple coating process that is also readily scalable to large TV panels. The 3D image performance of candidate 3DTV panels is also compared using autostereoscopic display optical output simulations.
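Multi-view panels present different views by assigning display columns (or subpixels) to views behind the lens array. The sketch below shows the simplest vertical-lenticular column interleaving with hypothetical single-channel views; real panels use slanted lenses and subpixel-level assignment:

```python
import numpy as np

def interleave_views(views):
    """Column-interleave N views for a simple vertical lenticular panel:
    display column c shows view (c mod N). A minimal illustration only."""
    n, h, w = views.shape
    cols = np.arange(w)
    out = views[cols % n, :, cols].T   # pick one view per display column
    return out

# Four hypothetical 2x4 views, each filled with its own index.
views = np.stack([np.full((2, 4), v) for v in range(4)])
print(interleave_views(views))
```

Each lenticule then directs the columns beneath it toward different viewing zones, which is why brightness, rather than transmission loss through a barrier, is preserved in the lenticular approach.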

  7. Modeling a color-rendering operator for high dynamic range images using a cone-response function

    NASA Astrophysics Data System (ADS)

    Choi, Ho-Hyoung; Kim, Gi-Seok; Yun, Byoung-Ju

    2015-09-01

    Tone-mapping operators are the typical algorithms designed to produce visibility and the overall impression of brightness, contrast, and color of high dynamic range (HDR) images on low dynamic range (LDR) display devices. Although several new tone-mapping operators have been proposed in recent years, the results of these operators have not matched those of the psychophysical experiments based on the human visual system. A color-rendering model that is a combination of tone-mapping and cone-response functions using an XYZ tristimulus color space is presented. In the proposed method, the tone-mapping operator produces visibility and the overall impression of brightness, contrast, and color in HDR images when mapped onto relatively LDR devices. The tone-mapping resultant image is obtained using chromatic and achromatic colors to avoid well-known color distortions shown in the conventional methods. The resulting image is then processed with a cone-response function wherein emphasis is placed on human visual perception (HVP). The proposed method covers the mismatch between the actual scene and the rendered image based on HVP. The experimental results show that the proposed method yields an improved color-rendering performance compared to conventional methods.
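For context, a generic global tone-mapping operator of the kind the paper builds on can be sketched as follows. This is the well-known log-average scaling with L/(1+L) compression, not the authors' combined cone-response model:

```python
import numpy as np

def tonemap_global(luminance, a=0.18, eps=1e-6):
    """Reinhard-style global tone mapping: scale the scene by its
    log-average luminance, then compress with L/(1+L) so arbitrarily
    large HDR values map into the displayable range [0, 1)."""
    log_avg = np.exp(np.mean(np.log(luminance + eps)))
    L = a * luminance / log_avg
    return L / (1.0 + L)

# Four orders of magnitude of HDR luminance.
hdr = np.array([0.01, 1.0, 100.0, 10000.0])
ldr = tonemap_global(hdr)
print(ldr)
```

The compression is monotone, so relative brightness ordering is preserved while the dynamic range is reduced; operators like the paper's additionally handle chromatic channels to avoid the color distortions a luminance-only mapping can introduce.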

  8. Design of crossed-mirror array to form floating 3D LED signs

    NASA Astrophysics Data System (ADS)

    Yamamoto, Hirotsugu; Bando, Hiroki; Kujime, Ryousuke; Suyama, Shiro

    2012-03-01

3D representation in digital signage improves its impact and the rapid notification of important points. Our goal is to realize floating 3D LED signs. The problem is that no device yet exists to form floating 3D images from LEDs. LED lamp size is around 1 cm, including wiring and substrates. Such a large pitch increases display size and can spoil image quality. The purpose of this paper is to develop an optical device to meet the three requirements and to demonstrate floating 3D arrays of LEDs. We analytically investigate image formation by a crossed-mirror structure with aerial apertures, called a CMA (crossed-mirror array). The CMA contains dihedral corner reflectors at each aperture. After double reflection, light rays emitted from an LED converge onto the corresponding image point. We have fabricated a CMA for a 3D array of LEDs. One CMA unit contains 20 x 20 apertures located diagonally. A floating image of the LEDs was formed over a wide range of incident angles. The image size of the focused beam agreed with the apparent aperture size. When the LEDs were located three-dimensionally (LEDs at three depths), the focused distances matched the distances between the real LEDs and the CMA.
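The dihedral corner reflectors at each CMA aperture reverse both in-plane components of a ray while preserving the component parallel to the mirror edges, which is why rays from an LED reconverge at the mirror-image point. A small vector sketch:

```python
import numpy as np

def reflect(d, n):
    """Reflect direction vector d across a mirror with unit normal n."""
    return d - 2 * np.dot(d, n) * n

# Dihedral corner: two perpendicular vertical mirrors with normals x and y.
d = np.array([0.6, 0.5, -0.3])        # arbitrary incoming ray direction
r = reflect(reflect(d, np.array([1.0, 0.0, 0.0])),
            np.array([0.0, 1.0, 0.0]))
print(r)   # horizontal components reversed, vertical component preserved
```

Because every ray has its horizontal direction exactly reversed regardless of incidence angle, all rays leaving one LED pass back through a single symmetric point, forming the floating aerial image.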

  9. Virtual Reality Used to Serve the Glenn Engineering Community

    NASA Technical Reports Server (NTRS)

    Carney, Dorothy V.

    2001-01-01

    There are a variety of innovative new visualization tools available to scientists and engineers for the display and analysis of their models. At the NASA Glenn Research Center, we have an ImmersaDesk, a large, single-panel, semi-immersive display device. This versatile unit can interactively display three-dimensional images in visual stereo. Our challenge is to make this virtual reality platform accessible and useful to researchers. An example of a successful application of this computer technology is the display of blade out simulations. NASA Glenn structural dynamicists, Dr. Kelly Carney and Dr. Charles Lawrence, funded by the Ultra Safe Propulsion Project under Base R&T, are researching blade outs, when turbine engines lose a fan blade during operation. Key objectives of this research include minimizing danger to the aircraft via effective blade containment, predicting destructive loads due to the imbalance following a blade loss, and identifying safe, cost-effective designs and materials for future engines.

  10. Flight instruments and helmet-mounted SWIR imaging systems

    NASA Astrophysics Data System (ADS)

    Robinson, Tim; Green, John; Jacobson, Mickey; Grabski, Greg

    2011-06-01

Night vision technology has experienced significant advances in the last two decades. Night vision goggles (NVGs) based on gallium arsenide (GaAs) continue to raise the bar for alternative technologies. Resolution, gain, and sensitivity have all improved; the image quality through these devices is nothing less than incredible. Panoramic NVGs and enhanced NVGs are examples of recent advances that increase warfighter capabilities. Even with these advances, alternative night vision devices such as solid-state indium gallium arsenide (InGaAs) focal plane arrays are under development for helmet-mounted imaging systems. The InGaAs imaging system offers advantages over the existing NVGs. Two key advantages are: (1) the new system produces digital image data, and (2) the new system is sensitive to energy in the shortwave infrared (SWIR) spectrum. While it is tempting to contrast the performance of these digital systems with the existing NVGs, the advantage of different spectral detection bands leads to the conclusion that the technologies are less competitive and more synergistic. It is likely that, by the end of the decade, pilots will use multi-band devices in the cockpit. As such, flight decks will need to be compatible with both NVGs and SWIR imaging systems. Insertion of NVGs in aircraft during the late 1970s and early 1980s resulted in many "lessons learned" concerning instrument compatibility with NVGs. These "lessons learned" ultimately resulted in specifications such as MIL-L-85762A and MIL-STD-3009. These specifications are now used throughout industry to produce NVG-compatible illuminated instruments and displays for both military and civilian applications. Inserting a SWIR imaging device in a cockpit will require similar consideration. A project evaluating flight deck instrument compatibility with SWIR devices is currently ongoing; aspects of this evaluation are described in this paper. This project is sponsored by the Air Force Research Laboratory (AFRL).

  11. Integrated Dual Imaging Detector

    NASA Technical Reports Server (NTRS)

    Rust, David M.

    1999-01-01

A new type of image detector was designed to simultaneously analyze the polarization of light at all picture elements in a scene. The Integrated Dual Imaging Detector (IDID) consists of a lenslet array and a polarizing beamsplitter bonded to a commercial charge-coupled device (CCD). The IDID simplifies the design and operation of solar vector magnetographs and of the imaging polarimeters and spectroscopic imagers used, for example, in atmospheric and solar research. When used in a solar telescope, the IDID can map the vector magnetic fields on the solar surface. Other applications include environmental monitoring, robot vision, and medical diagnoses (through the eye). Innovations in the IDID include (1) two interleaved imaging arrays (one for each polarization plane); (2) large dynamic range (well depth of 10(exp 5) electrons per pixel); (3) simultaneous readout and display of both images; and (4) laptop computer signal processing to produce polarization maps in field situations.
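With two interleaved images, one per polarization plane, a per-pixel linear-polarization signal can be formed from their normalized difference. The sketch below is a generic illustration with hypothetical intensities, not the IDID's actual processing pipeline:

```python
import numpy as np

def linear_polarization(I_par, I_perp, eps=1e-12):
    """Per-pixel linear polarization from two analyzer images:
    S0 = total intensity, S1 = linear component; return S1/S0."""
    S0 = I_par + I_perp               # total intensity
    S1 = I_par - I_perp               # linear polarization component
    return S1 / (S0 + eps)

# Hypothetical 1x2 images: an unpolarized pixel and a partially polarized one.
I_par = np.array([[1.0, 0.5]])
I_perp = np.array([[1.0, 0.1]])
print(linear_polarization(I_par, I_perp))
```

Because both images are captured simultaneously on one CCD, this ratio is immune to the temporal intensity fluctuations that plague sequential polarimetry.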

  12. Optical profilometer using laser based conical triangulation for inspection of inner geometry of corroded pipes in cylindrical coordinates

    NASA Astrophysics Data System (ADS)

    Buschinelli, Pedro D. V.; Melo, João. Ricardo C.; Albertazzi, Armando; Santos, João. M. C.; Camerini, Claudio S.

    2013-04-01

An axis-symmetrical optical laser triangulation system was developed by the authors to measure the inner geometry of long pipes used in the oil industry. It has a special optical configuration able to acquire shape information about the inner geometry of a pipe section from a single image frame. A collimated laser beam is pointed at the tip of a 45° conical mirror. The laser light is reflected such that a radial light sheet is formed, which intercepts the inner surface and forms a bright laser line on a section of the inspected pipe. A camera acquires the image of the laser line through a wide-angle lens. An odometer-based triggering system triggers the camera to acquire a set of equally spaced images at high speed while the device is moved along the pipe's axis. Image processing is done in real time (between image acquisitions) thanks to the use of parallel computing technology. The measured geometry is analyzed to identify corrosion damage. The measured geometry and results are graphically presented using virtual reality techniques and devices such as 3D glasses and head-mounted displays. The paper describes the measurement principles, calibration strategies, and laboratory evaluation of the developed device, as well as a practical example of a corroded pipe from an industrial gas production plant.
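Since the radial light sheet is a plane perpendicular to the pipe axis, each laser-line pixel maps to a cylindrical (r, theta) sample by simple pinhole triangulation. The focal length, sheet distance, and image center below are invented values for illustration, not the device's calibration:

```python
import numpy as np

def ring_to_profile(u, v, cx, cy, f, L):
    """Convert laser-ring pixel coordinates to cylindrical (r, theta).
    f is the focal length in pixels and L the axial distance between the
    camera center and the radial light sheet (simplified pinhole model)."""
    du, dv = u - cx, v - cy
    rho = np.hypot(du, dv)            # image-plane ring radius (pixels)
    theta = np.arctan2(dv, du)
    r = L * rho / f                   # triangulated inner radius
    return r, theta

# A perfect 100 mm-radius pipe imaged with f = 1000 px from L = 200 mm.
theta_true = np.linspace(0, 2 * np.pi, 8, endpoint=False)
u = 320 + 500 * np.cos(theta_true)   # 500 px laser ring
v = 240 + 500 * np.sin(theta_true)
r, theta = ring_to_profile(u, v, 320, 240, 1000.0, 200.0)
print(r)
```

Corrosion pits appear as local deviations of r(theta) from the nominal pipe radius; stacking profiles along the odometer-triggered frames yields the full inner surface.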

  13. Small convolution kernels for high-fidelity image restoration

    NASA Technical Reports Server (NTRS)

    Reichenbach, Stephen E.; Park, Stephen K.

    1991-01-01

    An algorithm is developed for computing the mean-square-optimal values for small, image-restoration kernels. The algorithm is based on a comprehensive, end-to-end imaging system model that accounts for the important components of the imaging process: the statistics of the scene, the point-spread function of the image-gathering device, sampling effects, noise, and display reconstruction. Subject to constraints on the spatial support of the kernel, the algorithm generates the kernel values that restore the image with maximum fidelity, that is, the kernel minimizes the expected mean-square restoration error. The algorithm is consistent with the derivation of the spatially unconstrained Wiener filter, but leads to a small, spatially constrained kernel that, unlike the unconstrained filter, can be efficiently implemented by convolution. Simulation experiments demonstrate that for a wide range of imaging systems these small kernels can restore images with fidelity comparable to images restored with the unconstrained Wiener filter.
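A data-driven analogue of the mean-square-optimal small kernel can be obtained by least squares: find the 3 x 3 kernel whose application to a degraded image best matches a reference in the mean-square sense. This is not the paper's model-based derivation, which works from scene statistics and the system PSF, only an empirical illustration:

```python
import numpy as np

def optimal_kernel(degraded, reference, size=3):
    """Least-squares estimate of a small restoration kernel: one linear
    equation per interior pixel, unknowns are the kernel coefficients.
    The kernel is applied as a correlation (equals convolution when the
    kernel is symmetric)."""
    k = size // 2
    h, w = degraded.shape
    rows, rhs = [], []
    for y in range(k, h - k):
        for x in range(k, w - k):
            rows.append(degraded[y - k:y + k + 1, x - k:x + k + 1].ravel())
            rhs.append(reference[y, x])
    A, b = np.array(rows), np.array(rhs)
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
    return coeffs.reshape(size, size)

rng = np.random.default_rng(0)
ref = rng.random((32, 32))
# Degrade with a known 3x3 box blur; the fitted kernel can only
# approximately invert it, since a 3x3 blur has no exact 3x3 inverse.
blur = np.ones((3, 3)) / 9.0
deg = np.zeros_like(ref)
for y in range(1, 31):
    for x in range(1, 31):
        deg[y, x] = np.sum(ref[y - 1:y + 2, x - 1:x + 2] * blur)
kernel = optimal_kernel(deg, ref)
print(kernel.shape)
```

By construction the fitted kernel can do no worse than leaving the degraded image unchanged, mirroring the paper's point that a spatially constrained kernel trades a little fidelity for an efficient convolution implementation.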

  14. Spacesuit Data Display and Management System

    NASA Technical Reports Server (NTRS)

    Hall, David G.; Sells, Aaron; Shah, Hemal

    2009-01-01

    A prototype embedded avionics system has been designed for the next generation of NASA extra-vehicular-activity (EVA) spacesuits. The system performs biomedical and other sensor monitoring, image capture, data display, and data transmission. An existing NASA Phase I and II award winning design for an embedded computing system (ZIN vMetrics - BioWATCH) has been modified. The unit has a reliable, compact form factor with flexible packaging options. These innovations are significant, because current state-of-the-art EVA spacesuits do not provide capability for data displays or embedded data acquisition and management. The Phase 1 effort achieved Technology Readiness Level 4 (high fidelity breadboard demonstration). The breadboard uses a commercial-grade field-programmable gate array (FPGA) with embedded processor core that can be upgraded to a space-rated device for future revisions.

  15. Online coupled camera pose estimation and dense reconstruction from video

    DOEpatents

    Medioni, Gerard; Kang, Zhuoliang

    2016-11-01

A product may receive each image in a stream of video images of a scene and, before processing the next image, generate information indicative of the position and orientation of the image capture device that captured the image at the time of capture. The product may do so by identifying distinguishable image feature points in the image; determining a coordinate for each identified image feature point; and, for each identified image feature point, attempting to identify one or more distinguishable model feature points in a three-dimensional (3D) model of at least a portion of the scene that appear likely to correspond to the identified image feature point. Thereafter, the product may find each of the following that, in combination, produce a consistent projection transformation of the 3D model onto the image: a subset of the identified image feature points for which one or more corresponding model feature points were identified; and, for each image feature point that has multiple likely corresponding model feature points, one of the corresponding model feature points. The product may update a 3D model of at least a portion of the scene following the receipt of each video image and before processing the next video image, based on the generated information indicative of the position and orientation of the image capture device at the time of capturing the received image. The product may display the updated 3D model after each update to the model.
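The "consistent projection transformation" test in the claim above amounts to counting feature correspondences whose reprojection error is small under a candidate pose. A minimal sketch with an invented projection matrix and correspondences:

```python
import numpy as np

def count_consistent(P, model_pts, image_pts, tol):
    """Count correspondences whose model point reprojects within `tol`
    of its image feature point under projection matrix P -- the kind of
    consistency criterion used to accept a candidate transformation."""
    X = np.hstack([model_pts, np.ones((len(model_pts), 1))])
    uv = (P @ X.T).T
    uv = uv[:, :2] / uv[:, 2:3]
    err = np.linalg.norm(uv - image_pts, axis=1)
    return int(np.sum(err < tol))

# Hypothetical normalized camera translated 4 units along the optical axis.
P = np.hstack([np.eye(3), np.array([[0.0, 0.0, 4.0]]).T])
model_pts = np.array([[0.0, 0, 1], [1, 0, 1], [0, 1, 1], [1, 1, 1]])
image_pts = np.array([[0.0, 0], [0.2, 0], [0, 0.2], [5, 5]])  # last one wrong
print(count_consistent(P, model_pts, image_pts, tol=0.05))
```

When a feature point has several candidate model points, the combination maximizing this inlier count (as in RANSAC-style estimation) selects both the pose and the winning correspondences.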

  16. Carbon Nanotube Driver Circuit for 6 × 6 Organic Light Emitting Diode Display

    PubMed Central

    Zou, Jianping; Zhang, Kang; Li, Jingqi; Zhao, Yongbiao; Wang, Yilei; Pillai, Suresh Kumar Raman; Volkan Demir, Hilmi; Sun, Xiaowei; Chan-Park, Mary B.; Zhang, Qing

    2015-01-01

Single-walled carbon nanotubes (SWNTs) are expected to be a very promising material for flexible and transparent driver circuits for active-matrix organic light emitting diode (AM OLED) displays due to their high field-effect mobility, excellent current-carrying capacity, optical transparency, and mechanical flexibility. Although there have been several publications about SWNT driver circuits, none of them have shown static and dynamic images with AM OLED displays. Here we report on the first successful chemical vapor deposition (CVD)-grown SWNT network thin film transistor (TFT) driver circuits for static and dynamic AM OLED displays with 6 × 6 pixels. The high device mobility of ~45 cm^2 V^-1 s^-1 and the high channel current on/off ratio of ~10^5 of the SWNT-TFTs fully guarantee the control capability for the OLED pixels. Our results suggest that SWNT-TFTs are promising backplane building blocks for future OLED displays. PMID:26119218

  17. Kelvin probe imaging of photo-injected electrons in metal oxide nanosheets from metal sulfide quantum dots under remote photochromic coloration

    NASA Astrophysics Data System (ADS)

    Kondo, A.; Yin, G.; Srinivasan, N.; Atarashi, D.; Sakai, E.; Miyauchi, M.

    2015-07-01

Metal oxide and quantum dot (QD) heterostructures have attracted considerable recent attention as materials for developing efficient solar cells, photocatalysts, and display devices; thus, nanoscale imaging of trapped electrons in these heterostructures provides important insight for developing efficient devices. In the present study, Kelvin probe force microscopy (KPFM) of CdS QD-grafted Cs4W11O36^2- nanosheets was performed before and after visible-light irradiation. After visible-light excitation of the CdS QDs, the Cs4W11O36^2- nanosheet surface exhibited a decreased work function in the vicinity of the junction with the CdS QDs, even though the Cs4W11O36^2- nanosheet did not absorb visible light. X-ray photoelectron spectroscopy revealed that W^5+ species were formed in the nanosheet after visible-light irradiation. These results demonstrated that excited electrons in the CdS QDs were injected and trapped in the Cs4W11O36^2- nanosheet to form color centers. Further, the CdS QD and Cs4W11O36^2- nanosheet composite films exhibited efficient remote photochromic coloration, which was attributed to the quantum nanostructure of the film. Notably, the responsive wavelength of the material is tunable by adjusting the size of the QDs, and the decoloration rate is highly efficient, as the required length for trapped electrons to diffuse to the nanosheet surface is very short owing to its nanoscale thickness. The unique properties of this photochromic device make it suitable for display or memory applications. 
In addition, the methodology described in the present study for nanoscale imaging is expected to aid in the understanding of electron transport and trapping processes in metal oxide and metal chalcogenide heterostructures, which are crucial phenomena in QD-based solar cells and/or photocatalytic water-splitting systems. Electronic supplementary information (ESI) available. See DOI: 10.1039/c5nr02405f

  18. Development and implementation of ultrasound picture archiving and communication system

    NASA Astrophysics Data System (ADS)

    Weinberg, Wolfram S.; Tessler, Franklin N.; Grant, Edward G.; Kangarloo, Hooshang; Huang, H. K.

    1990-08-01

    The Department of Radiological Sciences at the UCLA School of Medicine is developing an archiving and communication system (PACS) for digitized ultrasound images. In its final stage the system will involve the acquisition and archiving of ultrasound studies from four different locations including the Center for Health Sciences, the Department for Mental Health and the Outpatient Radiology and Endoscopy Departments with a total of 200-250 patient studies per week. The concept comprises two stages of image manipulation for each ultrasound work area. The first station is located close to the examination site and accommodates the acquisition of digital images from up to five ultrasound devices and provides for instantaneous display and primary viewing and image selection. Completed patient studies are transferred to a main workstation for secondary review, further analysis and comparison studies. The review station has an on-line storage capacity of 10,000 images with a resolution of 512x512 8 bit data to allow for immediate retrieval of active patient studies of up to two weeks. The main work stations are connected through the general network and use one central archive for long term storage and a film printer for hardcopy output. First phase development efforts concentrate on the implementation and testing of a system at one location consisting of a number of ultrasound units with video digitizer and network interfaces and a microcomputer workstation as host for the display station with two color monitors, each allowing simultaneous display of four 512x512 images. The discussion emphasizes functionality, performance and acceptance of the system in the clinical environment.
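    The on-line capacity quoted for the review station can be sanity-checked with a quick calculation (a sketch; the 10,000-image and 512x512 8-bit figures come from the abstract, everything else is illustrative):

```python
# Sanity check of the review station's on-line storage requirement
# (figures from the abstract: 10,000 images of 512x512 pixels, 8 bits each).

IMAGES = 10_000
WIDTH = HEIGHT = 512
BYTES_PER_PIXEL = 1  # 8-bit data

total_bytes = IMAGES * WIDTH * HEIGHT * BYTES_PER_PIXEL
total_gib = total_bytes / 2**30

print(f"{total_bytes} bytes = {total_gib:.2f} GiB")
```

    About 2.44 GiB, a plausible on-line store for a 1990-era departmental workstation backed by a central long-term archive.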

  19. Fiber optic engine for micro projection display.

    PubMed

    Arabi, Hesam Edin; An, Sohee; Oh, Kyunghwan

    2010-03-01

    A novel compact optical engine for a micro projector display is experimentally demonstrated, which is composed of RGB light sources, a tapered 3 x 1 Fiber Optic Color Synthesizer (FOCS) along with a fiberized ball-lens, and a two dimensional micro electromechanical scanning mirror. In the proposed optical engine, we successfully employed an all-fiber beam shaping technique combining optical fiber taper and fiberized ball lens that can render a narrow beam and enhance the resolution of the screened image in the far field. Optical performances of the proposed device assembly are investigated in terms of power loss, collimating strength of the collimator assembly, and color gamut of the output.

  20. Efficient stereoscopic contents file format on the basis of ISO base media file format

    NASA Astrophysics Data System (ADS)

    Kim, Kyuheon; Lee, Jangwon; Suh, Doug Young; Park, Gwang Hoon

    2009-02-01

    Many 3D contents have been widely used for multimedia services; however, real 3D video contents have been adopted only in limited applications such as specially designed 3D cinemas. This is because of the difficulty of capturing real 3D video contents and the limitations of the display devices available on the market. Recently, however, diverse types of display devices for stereoscopic video contents have been released. In particular, a mobile phone with a stereoscopic camera has been released, which allows a user, as a consumer, to have more realistic experiences without glasses and, as a content creator, to take stereoscopic images or record stereoscopic video contents. However, a user can only store and display these acquired stereoscopic contents on his/her own devices because no common file format for such contents exists. This limitation prevents users from sharing their contents with other users, which makes it difficult for the stereoscopic contents market to expand. Therefore, this paper proposes a common file format for stereoscopic contents on the basis of the ISO base media file format, which enables users to store and exchange pure stereoscopic contents. This technology is also currently under development as an international standard of MPEG, called the stereoscopic video application format.
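    The ISO base media file format that the proposal builds on stores data as a sequence of length-prefixed boxes; a minimal reader for the top-level box headers might look like the following (a sketch of the generic box layout only, not of the stereoscopic extensions the paper proposes; the synthetic `ftyp` payload is illustrative):

```python
import struct

def parse_boxes(data: bytes):
    """Yield (box_type, payload) for each top-level ISO BMFF box.

    Each box begins with a 32-bit big-endian size (covering the whole
    box) followed by a 4-character type code such as b'ftyp' or b'moov'.
    """
    offset = 0
    while offset + 8 <= len(data):
        size, box_type = struct.unpack_from(">I4s", data, offset)
        if size < 8:  # size 0 (to end of file) or 1 (64-bit) not handled here
            break
        yield box_type.decode("ascii"), data[offset + 8 : offset + size]
        offset += size

# Build a tiny synthetic 'ftyp' box and parse it back.
payload = b"isom" + b"\x00\x00\x02\x00" + b"isomiso2"
box = struct.pack(">I4s", 8 + len(payload), b"ftyp") + payload
boxes = list(parse_boxes(box))
print(boxes[0][0])
```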

  1. Realization of Multi-Stable Ground States in a Nematic Liquid Crystal by Surface and Electric Field Modification

    NASA Astrophysics Data System (ADS)

    Gwag, Jin Seog; Kim, Young-Ki; Lee, Chang Hoon; Kim, Jae-Hoon

    2015-06-01

    Owing to the significant price drop of liquid crystal displays (LCDs) and efforts to save natural resources, LCDs are even replacing paper for displaying static images such as price tags and advertising boards. Because of growing market demand for such devices, the LCD that can adopt any of numerous surface alignments of its directors as a ground state, the so-called multi-stable LCD, has come into the limelight owing to its great potential for low power consumption. However, a multi-stable LCD with industrial feasibility has not yet been realized. In this paper, we propose a simple and novel configuration for the multi-stable LCD. We demonstrate experimentally and theoretically that a battery of stable surface alignments can be achieved by the field-induced surface dragging effect on an aligning layer with weak surface anchoring. The simplicity and stability of the proposed system suggest that it is suitable for multi-stable LCDs that display static images with low power consumption, and it thus opens applications in various fields.

  2. Run-to-Run Optimization Control Within Exact Inverse Framework for Scan Tracking.

    PubMed

    Yeoh, Ivan L; Reinhall, Per G; Berg, Martin C; Chizeck, Howard J; Seibel, Eric J

    2017-09-01

    A run-to-run optimization controller uses a reduced set of measurement parameters, in comparison to more general feedback controllers, to converge to the best control point for a repetitive process. A new run-to-run optimization controller is presented for the scanning fiber device used for image acquisition and display. This controller utilizes very sparse measurements to estimate a system energy measure and updates the input parameterizations iteratively within a feedforward with exact-inversion framework. Analysis, simulation, and experimental investigations on the scanning fiber device demonstrate improved scan accuracy over previous methods and automatic controller adaptation to changing operating temperature. A specific application example and quantitative error analyses are provided of a scanning fiber endoscope that maintains high image quality continuously across a 20 °C temperature rise without interruption of the 56 Hz video.
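    A run-to-run controller of this kind updates its input parameters only between repetitions, using a sparse scalar measurement from the previous run. A minimal sketch of the idea (the quadratic "energy" plant and the finite-difference gradient step are illustrative assumptions, not the authors' controller or their exact-inversion framework):

```python
def run_to_run(measure_energy, theta, gain=0.5, delta=1e-3, runs=30):
    """Iteratively tune a scalar input parameter `theta` to minimise a
    scalar energy measure, one repetition of the process at a time.

    `measure_energy(theta)` stands in for the sparse per-run measurement;
    the finite-difference gradient update is an illustrative choice.
    """
    for _ in range(runs):
        grad = (measure_energy(theta + delta)
                - measure_energy(theta - delta)) / (2 * delta)
        theta -= gain * grad  # update applied before the next run
    return theta

# Toy plant: energy is minimised at theta = 2.0 (e.g. a drive amplitude).
energy = lambda theta: (theta - 2.0) ** 2 + 0.1
best = run_to_run(energy, theta=0.0)
print(round(best, 3))
```

    Because the controller needs only one scalar per run rather than a full state trajectory, it can keep adapting to slow drifts such as the 20 °C temperature rise mentioned in the abstract.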

  3. Systems, methods, and products for graphically illustrating and controlling a droplet actuator

    NASA Technical Reports Server (NTRS)

    Brafford, Keith R. (Inventor); Pamula, Vamsee K. (Inventor); Paik, Philip Y. (Inventor); Pollack, Michael G. (Inventor); Sturmer, Ryan A. (Inventor); Smith, Gregory F. (Inventor)

    2010-01-01

    Systems for controlling a droplet microactuator are provided. According to one embodiment, a system is provided and includes a controller, a droplet microactuator electronically coupled to the controller, and a display device displaying a user interface electronically coupled to the controller, wherein the system is programmed and configured to permit a user to effect a droplet manipulation by interacting with the user interface. According to another embodiment, a system is provided and includes a processor, a display device electronically coupled to the processor, and software loaded and/or stored in a storage device electronically coupled to the controller, a memory device electronically coupled to the controller, and/or the controller and programmed to display an interactive map of a droplet microactuator. According to yet another embodiment, a system is provided and includes a controller, a droplet microactuator electronically coupled to the controller, a display device displaying a user interface electronically coupled to the controller, and software for executing a protocol loaded and/or stored in a storage device electronically coupled to the controller, a memory device electronically coupled to the controller, and/or the controller.

  4. 78 FR 52211 - Certain Electronic Devices Having Placeshifting or Display Replication and Products Containing...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-22

    ... INTERNATIONAL TRADE COMMISSION [Investigation No. 337-TA-878] Certain Electronic Devices Having Placeshifting or Display Replication and Products Containing Same; Commission Determination Not To Review an... States after importation of certain electronic devices having placeshifting or display replication...

  5. Web-based platform for collaborative medical imaging research

    NASA Astrophysics Data System (ADS)

    Rittner, Leticia; Bento, Mariana P.; Costa, André L.; Souza, Roberto M.; Machado, Rubens C.; Lotufo, Roberto A.

    2015-03-01

    Medical imaging research depends basically on the availability of large image collections, image processing and analysis algorithms, hardware and a multidisciplinary research team. It has to be reproducible, free of errors, fast, accessible through a large variety of devices spread around research centers and conducted simultaneously by a multidisciplinary team. Therefore, we propose a collaborative research environment, named Adessowiki, where tools and datasets are integrated and readily available in the Internet through a web browser. Moreover, processing history and all intermediate results are stored and displayed in automatic generated web pages for each object in the research project or clinical study. It requires no installation or configuration from the client side and offers centralized tools and specialized hardware resources, since processing takes place in the cloud.

  6. Exploration of Mars by Mariner 9 - Television sensors and image processing.

    NASA Technical Reports Server (NTRS)

    Cutts, J. A.

    1973-01-01

    Two cameras equipped with selenium sulfur slow scan vidicons were used in the orbital reconnaissance of Mars by the U.S. Spacecraft Mariner 9 and the performance characteristics of these devices are presented. Digital image processing techniques have been widely applied in the analysis of images of Mars and its satellites. Photometric and geometric distortion corrections, image detail enhancement and transformation to standard map projection have been routinely employed. More specialized applications included picture differencing, limb profiling, solar lighting corrections, noise removal, line plots and computer mosaics. Information on enhancements as well as important picture geometric information was stored in a master library. Display of the library data in graphic or numerical form was accomplished by a data management computer program.

  7. Short-term memory for figure-ground organization in the visual cortex.

    PubMed

    O'Herron, Philip; von der Heydt, Rüdiger

    2009-03-12

    Whether the visual system uses a buffer to store image information and the duration of that storage have been debated intensely in recent psychophysical studies. The long phases of stable perception of reversible figures suggest a memory that persists for seconds. But persistence of similar duration has not been found in signals of the visual cortex. Here, we show that figure-ground signals in the visual cortex can persist for a second or more after the removal of the figure-ground cues. When new figure-ground information is presented, the signals adjust rapidly, but when a figure display is changed to an ambiguous edge display, the signals decay slowly, a behavior that is characteristic of memory devices. Figure-ground signals represent the layout of objects in a scene, and we propose that a short-term memory for object layout is important in providing continuity of perception in the rapid stream of images flooding our eyes.

  8. Adaptive focus for deep tissue using diffuse backscatter

    NASA Astrophysics Data System (ADS)

    Kress, Jeremy; Pourrezaei, Kambiz

    2014-02-01

    A system integrating high-density diffuse optical imaging with adaptive optics using MEMS for deep tissue interaction is presented. In this system, a laser source is scanned over a high-density fiber bundle using a Digital Micromirror Device (DMD) and channeled to a tissue phantom. Backscatter is then collected from the tissue phantom by a high-density fiber array of a different fiber type and channeled to a CMOS sensor for image acquisition. Intensity focus is directly verified using a second CMOS sensor, which measures the intensity transmitted through the tissue phantom. A set of training patterns is displayed on the DMD, and the backscatter is numerically fit to the transmission intensity. After the training patterns are displayed, adaptive focus is performed using only the backscatter and the fitting functions. Additionally, tissue reconstruction and prediction of interference focusing by photoacoustic and optical tomographic methods are discussed. Finally, potential NIR applications such as in-vivo adaptive neural photostimulation and cancer targeting are discussed.
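    The training step amounts to fitting measured backscatter values to the transmitted intensity. One simple way to picture such a fit is ordinary least squares on a single backscatter feature (a hypothetical reconstruction with synthetic data; the paper's actual fitting functions are not specified in the abstract):

```python
def fit_linear(xs, ys):
    """Ordinary least squares for ys ~ a * xs + b, via the normal equations."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

# Synthetic training data: transmitted intensity rises linearly with
# a summed backscatter feature (illustrative only).
backscatter = [0.0, 1.0, 2.0, 3.0, 4.0]
transmission = [0.5 + 2.0 * x for x in backscatter]

a, b = fit_linear(backscatter, transmission)
print(a, b)
```

    Once such coefficients are known, the transmission side of the phantom no longer needs to be observed: the backscatter alone predicts the focus quality, which is the point of the training patterns.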

  9. Meteorological Instruction Software

    NASA Technical Reports Server (NTRS)

    1990-01-01

    At Florida State University and the Naval Postgraduate School, meteorology students have the opportunity to apply theoretical studies to current weather phenomena, even prepare forecasts and see how their predictions stand up utilizing GEMPAK. GEMPAK can display data quickly in both conventional and non-traditional ways, allowing students to view multiple perspectives of the complex three-dimensional atmospheric structure. With GEMPAK, mathematical equations come alive as students do homework and laboratory assignments on the weather events happening around them. Since GEMPAK provides data on a 'today' basis, each homework assignment is new. At the Naval Postgraduate School, students are now using electronically-managed environmental data in the classroom. The School's Departments of Meteorology and Oceanography have developed the Interactive Digital Environment Analysis (IDEA) Laboratory. GEMPAK is the IDEA Lab's general purpose display package; the IDEA image processing package is a modified version of NASA's Device Management System. Bringing the graphic and image processing packages together is NASA's product, the Transportable Application Executive (TAE).

  10. Helmet-Mounted Display Of Clouds Of Harmful Gases

    NASA Technical Reports Server (NTRS)

    Diner, Daniel B.; Barengoltz, Jack B.; Schober, Wayne R.

    1995-01-01

    Proposed helmet-mounted opto-electronic instrument provides real-time stereoscopic views of clouds of otherwise invisible toxic, explosive, and/or corrosive gas. Display semitransparent: images of clouds superimposed on scene ordinarily visible to wearer. Images give indications on sizes and concentrations of gas clouds and their locations in relation to other objects in scene. Instruments serve as safety devices for astronauts, emergency response crews, fire fighters, people cleaning up chemical spills, or anyone working near invisible hazardous gases. Similar instruments used as sensors in automated emergency response systems that activate safety equipment and emergency procedures. Both helmet-mounted and automated-sensor versions used at industrial sites, chemical plants, or anywhere dangerous and invisible or difficult-to-see gases present. In addition to helmet-mounted and automated-sensor versions, there could be hand-held version. In some industrial applications, desirable to mount instruments and use them similarly to parking-lot surveillance cameras.

  11. A scalable diffraction-based scanning 3D colour video display as demonstrated by using tiled gratings and a vertical diffuser

    PubMed Central

    Jia, Jia; Chen, Jhensi; Yao, Jun; Chu, Daping

    2017-01-01

    A high quality 3D display requires a high amount of optical information throughput, which needs an appropriate mechanism to distribute information in space uniformly and efficiently. This study proposes a front-viewing system which is capable of managing the required amount of information efficiently from a high bandwidth source and projecting 3D images with a decent size and a large viewing angle at video rate in full colour. It employs variable gratings to support a high bandwidth distribution. This concept is scalable and the system can be made compact in size. A horizontal parallax only (HPO) proof-of-concept system is demonstrated by projecting holographic images from a digital micro mirror device (DMD) through rotational tiled gratings before they are realised on a vertical diffuser for front-viewing. PMID:28304371

  12. ACQ4: an open-source software platform for data acquisition and analysis in neurophysiology research.

    PubMed

    Campagnola, Luke; Kratz, Megan B; Manis, Paul B

    2014-01-01

    The complexity of modern neurophysiology experiments requires specialized software to coordinate multiple acquisition devices and analyze the collected data. We have developed ACQ4, an open-source software platform for performing data acquisition and analysis in experimental neurophysiology. This software integrates the tasks of acquiring, managing, and analyzing experimental data. ACQ4 has been used primarily for standard patch-clamp electrophysiology, laser scanning photostimulation, multiphoton microscopy, intrinsic imaging, and calcium imaging. The system is highly modular, which facilitates the addition of new devices and functionality. The modules included with ACQ4 provide for rapid construction of acquisition protocols, live video display, and customizable analysis tools. Position-aware data collection allows automated construction of image mosaics and registration of images with 3-dimensional anatomical atlases. ACQ4 uses free and open-source tools including Python, NumPy/SciPy for numerical computation, PyQt for the user interface, and PyQtGraph for scientific graphics. Supported hardware includes cameras, patch clamp amplifiers, scanning mirrors, lasers, shutters, Pockels cells, motorized stages, and more. ACQ4 is available for download at http://www.acq4.org.

  13. Web-based visualization of very large scientific astronomy imagery

    NASA Astrophysics Data System (ADS)

    Bertin, E.; Pillay, R.; Marmo, C.

    2015-04-01

    Visualizing and navigating through large astronomy images from a remote location with current astronomy display tools can be a frustrating experience in terms of speed and ergonomics, especially on mobile devices. In this paper, we present a high-performance, versatile and robust client-server system for remote visualization and analysis of extremely large scientific images. Applications of this work include survey image quality control, interactive data query and exploration, citizen science, as well as public outreach. The proposed software is entirely open source and is designed to be generic and applicable to a variety of datasets. It provides access to floating-point data at terabyte scales, with the ability to precisely adjust image settings in real time. The proposed clients are light-weight, platform-independent web applications built on standard HTML5 web technologies and compatible with both touch and mouse-based devices. We put the system to the test, assess its performance, and show that a single server can comfortably handle more than a hundred simultaneous users accessing full-precision 32-bit astronomy data.

  14. On Alternative Approaches to 3D Image Perception: Monoscopic 3D Techniques

    NASA Astrophysics Data System (ADS)

    Blundell, Barry G.

    2015-06-01

    In the eighteenth century, techniques that enabled a strong sense of 3D perception to be experienced without recourse to binocular disparities (arising from the spatial separation of the eyes) underpinned the first significant commercial sales of 3D viewing devices and associated content. However following the advent of stereoscopic techniques in the nineteenth century, 3D image depiction has become inextricably linked to binocular parallax and outside the vision science and arts communities relatively little attention has been directed towards earlier approaches. Here we introduce relevant concepts and terminology and consider a number of techniques and optical devices that enable 3D perception to be experienced on the basis of planar images rendered from a single vantage point. Subsequently we allude to possible mechanisms for non-binocular parallax based 3D perception. Particular attention is given to reviewing areas likely to be thought-provoking to those involved in 3D display development, spatial visualization, HCI, and other related areas of interdisciplinary research.

  15. Radiology on handheld devices: image display, manipulation, and PACS integration issues.

    PubMed

    Raman, Bhargav; Raman, Raghav; Raman, Lalithakala; Beaulieu, Christopher F

    2004-01-01

    Handheld personal digital assistants (PDAs) have undergone continuous and substantial improvements in hardware and graphics capabilities, making them a compelling platform for novel developments in teleradiology. The latest PDAs have processor speeds of up to 400 MHz and storage capacities of up to 80 Gbytes with memory expansion methods. A Digital Imaging and Communications in Medicine (DICOM)-compliant, vendor-independent handheld image access system was developed in which a PDA server acts as the gateway between a picture archiving and communication system (PACS) and PDAs. The system is compatible with most currently available PDA models. It is capable of both wired and wireless transfer of images and includes custom PDA software and World Wide Web interfaces that implement a variety of basic image manipulation functions. Implementation of this system, which is currently undergoing debugging and beta testing, required optimization of the user interface to efficiently display images on smaller PDA screens. The PDA server manages user work lists and implements compression and security features to accelerate transfer speeds, protect patient information, and regulate access. Although some limitations remain, PDA-based teleradiology has the potential to increase the efficiency of the radiologic work flow, increasing productivity and improving communication with referring physicians and patients. Copyright RSNA, 2004

  16. Speckless head-up display on two spatial light modulators

    NASA Astrophysics Data System (ADS)

    Siemion, Andrzej; Ducin, Izabela; Kakarenko, Karol; Makowski, Michał; Siemion, Agnieszka; Suszek, Jarosław; Sypek, Maciej; Wojnowski, Dariusz; Jaroszewicz, Zbigniew; Kołodziejczyk, Andrzej

    2010-12-01

    There is a continuous demand for computer-generated holograms that give an almost perfect reconstruction at a reasonable manufacturing cost. One method of improving the image quality is to illuminate a Fourier hologram with a quasi-random, but well-known, light-field phase distribution. This can be achieved with a lithographically produced phase mask. To date, however, the lithographic technique has been relatively complex as well as time- and money-consuming, which is why we have decided to use two Spatial Light Modulators (SLMs). For correctly adjusted light polarization, an SLM acts as a pure phase modulator with 256 adjustable phase levels between 0 and 2π. The two modulators give us the opportunity to use the whole surface of each device and to reduce the size of the experimental system. An optical system with one SLM can also be used, but it requires dividing the active surface into halves (one for the Fourier hologram and the second for the quasi-random diffuser), which implies a more complicated optical setup. A larger surface makes it possible to display three Fourier holograms, one for each primary colour: red, green and blue. This allows almost noiseless colourful dynamic images to be reconstructed. In this work we present the results of numerical simulations of image reconstructions with the use of two SLM displays.
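    Driving such an SLM means mapping a continuous hologram phase onto its 256 discrete levels over [0, 2π). A minimal sketch of that quantisation (the level count is from the abstract; the function names are illustrative):

```python
import math

LEVELS = 256  # adjustable phase levels of the SLM, per the abstract

def quantize_phase(phi: float) -> int:
    """Map a phase in radians to the nearest of 256 SLM levels over [0, 2*pi)."""
    phi = phi % (2 * math.pi)  # wrap into one period
    return round(phi * LEVELS / (2 * math.pi)) % LEVELS

def level_to_phase(level: int) -> float:
    """Phase actually displayed for a given 8-bit drive level."""
    return level * 2 * math.pi / LEVELS

print(quantize_phase(math.pi))      # mid-range phase
print(quantize_phase(2 * math.pi))  # wraps back to level 0
```

    The worst-case quantisation error is half a level, π/256 radians, which is one reason 8-bit phase modulators reconstruct Fourier holograms with little visible degradation.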

  17. X-Windows Widget for Image Display

    NASA Technical Reports Server (NTRS)

    Deen, Robert G.

    2011-01-01

    XvicImage is a high-performance XWindows (Motif-compliant) user interface widget for displaying images. It handles all aspects of low-level image display. The fully Motif-compliant image display widget handles the following tasks: (1) Image display, including dithering as needed (2) Zoom (3) Pan (4) Stretch (contrast enhancement, via lookup table) (5) Display of single-band or color data (6) Display of non-byte data (ints, floats) (7) Pseudocolor display (8) Full overlay support (drawing graphics on image) (9) Mouse-based panning (10) Cursor handling, shaping, and planting (disconnecting cursor from mouse) (11) Support for all user interaction events (passed to application) (12) Background loading and display of images (doesn't freeze the GUI) (13) Tiling of images.
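    Item (5) above, contrast enhancement via a lookup table, is a standard display technique; a minimal sketch of building and applying a linear-stretch LUT for 8-bit pixels (an illustration of the general method, not XvicImage's actual code):

```python
def build_stretch_lut(low: int, high: int) -> list[int]:
    """Linear contrast-stretch LUT for 8-bit pixels: values at or below
    `low` map to 0, values at or above `high` map to 255."""
    lut = []
    for v in range(256):
        if v <= low:
            lut.append(0)
        elif v >= high:
            lut.append(255)
        else:
            lut.append(round((v - low) * 255 / (high - low)))
    return lut

def apply_lut(pixels: list[int], lut: list[int]) -> list[int]:
    """Applying the stretch is a per-pixel table lookup, so changing the
    stretch never modifies the underlying image data."""
    return [lut[p] for p in pixels]

lut = build_stretch_lut(50, 200)
print(apply_lut([0, 50, 125, 200, 255], lut))
```

    Keeping the stretch in a LUT is what lets a widget like this restretch interactively: only 256 table entries are recomputed, then the display is repainted from the original pixels.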

  18. Analyzing Dental Implant Sites From Cone Beam Computed Tomography Scans on a Tablet Computer: A Comparative Study Between iPad and 3 Display Systems.

    PubMed

    Carrasco, Alejandro; Jalali, Elnaz; Dhingra, Ajay; Lurie, Alan; Yadav, Sumit; Tadinada, Aditya

    2017-06-01

    The aim of this study was to compare a medical-grade PACS (picture archiving and communication system) monitor, a consumer-grade monitor, a laptop computer, and a tablet computer for linear measurements of height and width for specific implant sites in the posterior maxilla and mandible, along with visualization of the associated anatomical structures. Cone beam computed tomography (CBCT) scans were evaluated. The images were reviewed using PACS-LCD monitor, consumer-grade LCD monitor using CB-Works software, a 13″ MacBook Pro, and an iPad 4 using OsiriX DICOM reader software. The operators had to identify anatomical structures in each display using a 2-point scale. User experience between PACS and iPad was also evaluated by means of a questionnaire. The measurements were very similar for each device. P-values were all greater than 0.05, indicating no significant difference between the monitors for each measurement. The intraoperator reliability was very high. The user experience was similar in each category with the most significant difference regarding the portability where the PACS display received the lowest score and the iPad received the highest score. The iPad with retina display was comparable with the medical-grade monitor, producing similar measurements and image visualization, and thus providing an inexpensive, portable, and reliable screen to analyze CBCT images in the operating room during the implant surgery.

  19. A two-dimensional location method based on digital micromirror device used in interactive projection systems

    NASA Astrophysics Data System (ADS)

    Chen, Liangjun; Ni, Kai; Zhou, Qian; Cheng, Xuemin; Ma, Jianshe; Gao, Yuan; Sun, Peng; Li, Yi; Liu, Minxia

    2010-11-01

    Interactive projection systems based on CCD/CMOS sensors have been greatly developed in recent years. They can locate and trace the movement of a pen equipped with an infrared LED, and display the user's handwriting or react to the user's operation in real time. However, a major shortcoming is that the location device and the projector are independent of each other, in both the optical system and the control system. This requires constructing two optical systems, calibrating the differences between the projector view and the camera view, and synchronizing the two control systems. In this paper, we introduce a two-dimensional location method based on a digital micro-mirror device (DMD). The DMD is used as the display device and the position detector in turn. By serially flipping the micro-mirrors on the DMD according to a specially designed scheme and monitoring the reflected light energy, the image spot of the infrared LED can be quickly located. With this method, the same optical system and the DMD can be multiplexed for projection and location, which reduces the complexity and cost of the whole system. Furthermore, the method achieves high positioning accuracy and sampling rates. The results of location experiments are given.
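    The abstract does not spell out the mirror-flipping scheme; one plausible illustration is a binary search in which half of the candidate mirror columns is switched on at a time and a single reflected-energy reading tells which half contains the LED spot (entirely a hypothetical reconstruction, not the authors' scheme):

```python
def locate_column(spot_col: int, n_cols: int = 1024) -> int:
    """Binary-search for the column holding a bright spot by 'flipping'
    mirror columns in halves and reading one scalar energy value per step.

    `measure(lo, hi)` simulates the detector: it returns nonzero energy
    only if the spot lies inside the range of columns switched on.
    """
    def measure(lo: int, hi: int) -> float:
        return 1.0 if lo <= spot_col < hi else 0.0

    lo, hi = 0, n_cols
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if measure(lo, mid) > 0:   # spot is in the lower half
            hi = mid
        else:                      # spot must be in the upper half
            lo = mid
        # each iteration needs only one DMD pattern and one energy reading
    return lo

print(locate_column(777))
```

    Run once for columns and once for rows, a scheme like this needs only about 2·log2(N) patterns per 2-D fix, which is what makes single-detector localisation fast enough for interactive sampling rates.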

  20. Imaging optical fields below metal films and metal-dielectric waveguides by a scanning microscope

    NASA Astrophysics Data System (ADS)

    Zhu, Liangfu; Wang, Yong; Zhang, Douguo; Wang, Ruxue; Qiu, Dong; Wang, Pei; Ming, Hai; Badugu, Ramachandram; Rosenfeld, Mary; Lakowicz, Joseph R.

    2017-09-01

    Laser scanning confocal fluorescence microscopy (LSCM) is now an important method for tissue and cell imaging when the samples are located on the surfaces of glass slides. In the past decade, there has been extensive development of nano-optical structures that display unique effects on incident and transmitted light, which will be used with novel configurations for medical and consumer products. For these applications, it is necessary to characterize the light distribution within short distances from the structures for efficient detection and elimination of bulky optical components. These devices will minimize or possibly eliminate the need for free-space light propagation outside of the device itself. We describe the use of the scanning function of a LSCM to obtain 3D images of the light intensities below the surface of nano-optical structures. More specifically, we image the spatial distributions inside the substrate of fluorescence emission coupled to waveguide modes after it leaks through thin metal films or dielectric-coated metal films. The observed spatial distributions were in general agreement with far-field calculations, but the scanning images also revealed light intensities at angles not observed with classical back focal plane imaging. Knowledge of the subsurface optical intensities will be crucial in the combination of nano-optical structures with rapidly evolving imaging detectors.

  1. Tools virtualization for command and control systems

    NASA Astrophysics Data System (ADS)

    Piszczek, Marek; Maciejewski, Marcin; Pomianek, Mateusz; Szustakowski, Mieczysław

    2017-10-01

    Information management is an inseparable part of the command process, so the person making decisions at a command post interacts with data-providing devices in various ways. Tool virtualization can introduce a number of significant modifications into the design of management and command solutions. The general idea is to replace a physical device's user interface with a digital representation (a so-called virtual instrument). A more advanced level of system "digitalization" is to use mixed-reality environments. In solutions using augmented reality (AR), a customized HMI is displayed to operators as they approach each device. Devices are identified by image recognition of photo codes. Visualization is achieved with an optical see-through head-mounted display (HMD), and control can be performed, for example, by means of a handheld touch panel. Using an immersive virtual environment, the command center can be digitally reconstructed: a workstation requires only a VR system (HMD) and access to the information network, and the operator can interact with devices much as in the real world (for example, with virtual hands). Because of their processing (central-vision analysis, eye tracking), MR systems offer another useful feature: reduced data-throughput requirements, since at any given moment the system need only render the single device in focus. Experiments carried out with the Moverio BT-200 and SteamVR systems, and the results of experimental application testing, clearly indicate the ability to create a fully functional information system using mixed-reality technology.

  2. 77 FR 14422 - Certain Consumer Electronics and Display Devices and Products Containing Same; Notice of Receipt...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-03-09

    ... INTERNATIONAL TRADE COMMISSION [DN 2882] Certain Consumer Electronics and Display Devices and... the U.S. International Trade Commission has received a complaint entitled Certain Consumer Electronics... importation of certain consumer electronics and display devices and products containing same. The complaint...

  3. 77 FR 31876 - Certain Consumer Electronics and Display Devices and Products Containing Same Determination Not...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-05-30

    ... INTERNATIONAL TRADE COMMISSION [Investigation No. 337-TA-836] Certain Consumer Electronics and Display Devices and Products Containing Same Determination Not To Review Initial Determination To Amend... electronics and display devices and products containing the same by reason of infringement of U.S. Patent Nos...

  4. 76 FR 38417 - In the Matter of Certain Multimedia Display and Navigation Devices and Systems, Components...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-06-30

    ... INTERNATIONAL TRADE COMMISSION [Investigation No. 337-TA-694] In the Matter of Certain Multimedia Display and Navigation Devices and Systems, Components Thereof, and Products Containing Same; Notice of... importation of certain multimedia display and navigation devices and systems, components thereof, and products...

  5. 76 FR 25707 - In The Matter of Certain Multimedia Display and Navigation Devices and Systems, Components...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-05-05

    ... INTERNATIONAL TRADE COMMISSION [Investigation No. 337-TA-694] In The Matter of Certain Multimedia Display and Navigation Devices and Systems, Components Thereof, and Products Containing Same; Notice of... multimedia display and navigation devices and systems, components thereof, and products containing same by...

  6. 78 FR 68861 - Certain Navigation Products, Including GPS Devices, Navigation and Display Systems, Radar Systems...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-11-15

    ... Devices, Navigation and Display Systems, Radar Systems, Navigational Aids, Mapping Systems and Related... navigation products, including GPS devices, navigation and display systems, radar systems, navigational aids..., radar systems, navigational aids, mapping systems and related software by reason of infringement of one...

  7. The diffractionator

    NASA Astrophysics Data System (ADS)

    Gaskill, Jack D.; Curtis, Craig H.

    1995-10-01

    Physical demonstrations of diffraction and image formation for educational purposes have long been hampered by limitations of equipment and viewing facilities: it has usually been possible to demonstrate diffraction and image formation for only a few simple apertures or objects; it has often been time-consuming to set up the optical bench used for the demonstration and difficult to keep it aligned; a darkened demonstration room has normally been required; and it has usually been possible for only small groups of people to view the diffraction patterns and images. In 1990, the Optical Sciences Center was awarded an AT&T Special Purpose Grant to construct a device that would allow diffraction and image formation demonstrations to be conducted while avoiding the limitations noted above. This device, which was completed in the fall of 1992 and is affectionately called 'The Diffractionator', makes use of video technology to permit demonstrations of diffraction, image formation and spatial filtering for large audiences in regular classrooms or auditoria. In addition, video tapes of the demonstrations can be recorded for viewing at sites where use of the actual demonstrator is inconvenient. A description of the system will be given, and video tapes will be used to display previously recorded diffraction phenomena and spatial filtering demonstrations.

  8. Image quality assessment metric for frame accumulated image

    NASA Astrophysics Data System (ADS)

    Yu, Jianping; Li, Gang; Wang, Shaohui; Lin, Ling

    2018-01-01

    The quality of a medical image determines the accuracy of diagnosis, and gray-scale resolution is an important parameter of image quality. However, current objective metrics are not well suited to assessing medical images obtained by frame-accumulation technology: they pay little attention to gray-scale resolution, being based mainly on spatial resolution and limited to the 256-level gray scale of existing display devices. This paper therefore proposes a metric, the "mean signal-to-noise ratio" (MSNR), based on signal-to-noise ratio, to evaluate frame-accumulated medical image quality more reasonably. We demonstrate its potential application through a series of images acquired under a constant illumination signal. The mean image of a sufficiently large set of images is regarded as the reference image; several groups of images with different numbers of accumulated frames were acquired and their MSNR calculated. The experimental results show that, compared with other quality assessment methods, the metric is simpler, more effective, and more suitable for assessing frame-accumulated images that surpass the gray scale and precision of the original image.
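
    The abstract does not give MSNR's exact formula. One plausible reading, sketched below under that stated assumption, takes the mean of a long frame stack as the noise-free reference and divides the mean signal level by the RMS deviation of individual frames from it; frame accumulation should then raise the score.

```python
import numpy as np

def msnr(frames):
    """One plausible reading of a 'mean signal-to-noise ratio': the mean
    image over the stack serves as the reference; noise is each frame's
    deviation from that reference. (The paper's exact definition may differ.)"""
    frames = np.asarray(frames, dtype=float)
    ref = frames.mean(axis=0)                  # reference image
    rms_noise = np.sqrt(((frames - ref) ** 2).mean())
    return ref.mean() / rms_noise

rng = np.random.default_rng(0)
clean = np.full((32, 32), 100.0)               # constant-illumination scene
single = clean + rng.normal(0.0, 5.0, (16, 32, 32))              # 16 raw frames
accum4 = clean + rng.normal(0.0, 5.0, (16, 4, 32, 32)).mean(1)   # 16 four-frame accumulations
# Accumulating 4 frames halves the noise, roughly doubling the MSNR.
```
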

  9. Objective analysis of image quality of video image capture systems

    NASA Astrophysics Data System (ADS)

    Rowberg, Alan H.

    1990-07-01

    As Picture Archiving and Communication System (PACS) technology has matured, video image capture has become a common way of capturing digital images from many modalities. While digital interfaces, such as those which use the ACR/NEMA standard, will become more common in the future, and are preferred because of the accuracy of image transfer, video image capture will be the dominant method in the short term, and may continue to be used for some time because of the low cost and high speed often associated with such devices. Currently, virtually all installed systems use methods of digitizing the video signal that is produced for display on the scanner viewing console itself. A series of digital test images has been developed for display on either a GE CT9800 or a GE Signa MRI scanner. These images have been captured with each of five commercially available image capture systems, and the resultant images digitally transferred on floppy disk to a PC1286 computer containing Optimast image analysis software. Here the images can be displayed in a comparative manner for visual evaluation, in addition to being analyzed statistically. Each of the images has been designed to support certain tests, including noise, accuracy, linearity, gray scale range, stability, slew rate, and pixel alignment. These image capture systems vary widely in these characteristics, in addition to the presence or absence of other artifacts, such as shading and moire patterns. Other accessories such as video distribution amplifiers and noise filters can also add or modify artifacts seen in the captured images, often giving unusual results. Each image is described, together with the tests which were performed using it. One image contains alternating black and white lines, each one pixel wide, after equilibration strips ten pixels wide. While some systems have a slew rate fast enough to track this correctly, others blur it to an average shade of gray and do not resolve the lines, or give horizontal or vertical streaking. While many of these results are significant from an engineering standpoint alone, there are clinical implications, and some anatomy or pathology may not be visualized if an image capture system is used improperly.
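
    The slew-rate test can be illustrated by synthesizing such a row (a ten-pixel equilibration strip followed by alternating one-pixel lines) and measuring the modulation depth that a capture system preserves. The sketch below is illustrative, not from the paper: a slew-limited system is crudely modeled as a small box blur, which pulls the one-pixel lines toward an average gray.

```python
import numpy as np

def test_row(n_lines=64, strip=10):
    # Ten-pixel solid equilibration strip followed by alternating
    # one-pixel black (0) and white (255) lines.
    return np.concatenate([np.full(strip, 255.0),
                           np.tile([0.0, 255.0], n_lines // 2)])

def modulation_depth(row, strip=10):
    # Michelson contrast over the fine-line region only.
    fine = row[strip:]
    return (fine.max() - fine.min()) / (fine.max() + fine.min())

row = test_row()
# A slew-rate-limited capture, modeled here as a 3-pixel box blur,
# cannot follow the single-pixel transitions.
blurred = np.convolve(row, np.ones(3) / 3.0, mode="same")
```

    A perfect capture keeps full modulation; the blurred version's reduced depth is the quantitative signature of the "average shade of gray" failure described above.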

  10. High throughput dual-wavelength temperature distribution imaging via compressive imaging

    NASA Astrophysics Data System (ADS)

    Yao, Xu-Ri; Lan, Ruo-Ming; Liu, Xue-Feng; Zhu, Ge; Zheng, Fu; Yu, Wen-Kai; Zhai, Guang-Jie

    2018-03-01

    Thermal imaging is an essential tool in a wide variety of research areas. In this work we demonstrate high-throughput dual-wavelength temperature distribution imaging using a modified single-pixel camera that requires no beam splitter (BS). A digital micro-mirror device (DMD) is utilized to display binary masks and split the incident radiation, which eliminates the need for a BS. Because the spatial resolution is dictated by the DMD, this thermal imaging system has the advantage of perfect spatial registration between the two images, removing the need for pixel registration and fine adjustment. Two bucket detectors, which measure the total light intensity reflected from the DMD, are employed in this system and yield an improvement in the detection efficiency of the narrow-band radiation. A compressive imaging algorithm is utilized to achieve under-sampled recovery. A proof-of-principle experiment is presented to demonstrate the feasibility of this architecture.
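
    The measurement model behind such a single-pixel camera can be sketched with Hadamard masks. Each ±1 mask row is realized on the DMD as a complementary pair of binary mirror patterns, and the bucket detector's differential reading gives one inner product with the scene. For clarity this sketch samples fully and inverts by orthogonality; the paper instead under-samples and recovers the image with a compressive-sensing solver.

```python
import numpy as np

def hadamard(n):
    # Sylvester construction; n must be a power of two.
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

n = 64                                    # an 8x8 scene, flattened
scene = np.zeros(n)
scene[18], scene[37] = 1.0, 0.5           # two "hot" regions

H = hadamard(n)
# Differential bucket readings: one inner product per displayed mask pair.
y = H @ scene
# Full sampling: invert via H H^T = n I. A compressive system would keep
# only a subset of the rows of H and solve for the scene with a sparsity prior.
recovered = H.T @ y / n
```
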

  11. Mobile-based text recognition from water quality devices

    NASA Astrophysics Data System (ADS)

    Dhakal, Shanti; Rahnemoonfar, Maryam

    2015-03-01

    Measuring the water quality of bays, estuaries, and gulfs is a complicated and time-consuming process. The YSI Sonde is an instrument used to measure water quality parameters such as pH, temperature, salinity, and dissolved oxygen. The instrument is taken out to water bodies on boat trips, and researchers note down the parameters shown on its display monitor. In this project, a mobile application was developed for the Android platform that allows a user to take a picture of the YSI Sonde monitor, extract the text from the image, and store it in a file on the phone. The captured image is first processed to remove perspective distortion: the probabilistic Hough line transform is used to identify lines in the image, and the image corners are then obtained from the intersections of the detected horizontal and vertical lines. The image is warped using the perspective transformation matrix, computed from the corner points of the source and destination images, thereby removing the perspective distortion. The black-hat morphological operation is used to correct the shading of the image. The image is binarized using Otsu's method and is then passed to optical character recognition (OCR) software. The extracted information is stored in a file on the phone and can be retrieved later for analysis. The algorithm was tested on 60 images of the YSI Sonde with different perspective distortions and shading. Experimental results, compared against ground truth, demonstrate the effectiveness of the proposed method.
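
    The perspective-removal step can be sketched in isolation: given the four detected corner points and the target rectangle, the 3×3 transformation matrix follows from the direct linear transform. The corner coordinates below are made up for illustration, and the function is a stand-in for the OpenCV equivalents the app presumably uses.

```python
import numpy as np

def homography(src, dst):
    # Direct linear transform for four point correspondences
    # (an illustrative stand-in for OpenCV's getPerspectiveTransform).
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    H = vt[-1].reshape(3, 3)          # null-space vector of the 8x9 system
    return H / H[2, 2]

def warp_point(H, p):
    x, y, w = H @ np.array([p[0], p[1], 1.0])
    return x / w, y / w

detected = [(12, 8), (630, 25), (600, 470), (5, 455)]   # hypothetical detected corners
target = [(0, 0), (640, 0), (640, 480), (0, 480)]        # undistorted rectangle
H = homography(detected, target)
```

    Applying `warp_point` to every pixel (or, in practice, `cv2.warpPerspective` with `H`) yields the fronto-parallel image handed to shading correction and OCR.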

  12. LAS - LAND ANALYSIS SYSTEM, VERSION 5.0

    NASA Technical Reports Server (NTRS)

    Pease, P. B.

    1994-01-01

    The Land Analysis System (LAS) is an image analysis system designed to manipulate and analyze digital data in raster format and provide the user with a wide spectrum of functions and statistical tools for analysis. LAS offers these features under VMS with optional image display capabilities for IVAS and other display devices as well as the X-Windows environment. LAS provides a flexible framework for algorithm development as well as for the processing and analysis of image data. Users may choose between mouse-driven commands or the traditional command-line input mode. LAS functions include supervised and unsupervised image classification, film product generation, geometric registration, image repair, radiometric correction and image statistical analysis. Data files accepted by LAS include formats such as Multi-Spectral Scanner (MSS), Thematic Mapper (TM) and Advanced Very High Resolution Radiometer (AVHRR). The enhanced geometric registration package now includes both image-to-image and map-to-map transformations. The over 200 LAS functions fall into image processing scenario categories which include: arithmetic and logical functions, data transformations, Fourier transforms, geometric registration, hard copy output, image restoration, intensity transformation, multispectral and statistical analysis, file transfer, tape profiling and file management, among others. Internal improvements to the LAS code have eliminated the VAX VMS dependencies and improved overall system performance. The maximum LAS image size has been increased to 20,000 lines by 20,000 samples with a maximum of 256 bands per image. The catalog management system used in earlier versions of LAS has been replaced by a more streamlined and maintenance-free method of file management. This system is not dependent on VAX/VMS and relies on file naming conventions alone to allow the use of identical LAS file names on different operating systems.
While the LAS code has been improved, the original capabilities of the system have been preserved. These include maintaining associated image history, session logging, and batch, asynchronous and interactive modes of operation. The LAS application programs are integrated under version 4.1 of an interface called the Transportable Applications Executive (TAE). TAE 4.1 has four modes of user interaction: menu, direct command, tutor (or help), and dynamic tutor. In addition, TAE 4.1 allows the operation of LAS functions using mouse-driven commands under the TAE-Facelift environment provided with TAE 4.1. These modes of operation allow users, from the beginner to the expert, to exercise specific application options. LAS is written in C-language and FORTRAN 77 for use with DEC VAX computers running VMS with approximately 16Mb of physical memory. This program runs under TAE 4.1. Since TAE 4.1 is not a current version of TAE, TAE 4.1 is included within the LAS distribution. Approximately 130,000 blocks (65Mb) of disk storage space are necessary to store the source code and files generated by the installation procedure for LAS, and 44,000 blocks (22Mb) of disk storage space are necessary for TAE 4.1 installation. The only other dependencies for LAS are the subroutine libraries for the specific display device(s) that will be used with LAS/DMS (e.g. X-Windows and/or IVAS). The standard distribution medium for LAS is a set of two 9-track 6250 BPI magnetic tapes in DEC VAX BACKUP format. It is also available on a set of two TK50 tape cartridges in DEC VAX BACKUP format. This program was developed in 1986 and last updated in 1992.

  13. Initial steps toward the realization of large area arrays of single photon counting pixels based on polycrystalline silicon TFTs

    NASA Astrophysics Data System (ADS)

    Liang, Albert K.; Koniczek, Martin; Antonuk, Larry E.; El-Mohri, Youcef; Zhao, Qihua; Jiang, Hao; Street, Robert A.; Lu, Jeng Ping

    2014-03-01

    The thin-film semiconductor processing methods that enabled creation of inexpensive liquid crystal displays based on amorphous silicon transistors for cell phones and televisions, as well as desktop, laptop and mobile computers, also facilitated the development of devices that have become ubiquitous in medical x-ray imaging environments. These devices, called active matrix flat-panel imagers (AMFPIs), measure the integrated signal generated by incident X rays and offer detection areas as large as ~43×43 cm². In recent years, there has been growing interest in medical x-ray imagers that record information from x-ray photons on an individual basis. However, such photon counting devices have generally been based on crystalline silicon, a material not inherently suited to the cost-effective manufacture of monolithic devices of a size comparable to that of AMFPIs. Motivated by these considerations, we have developed an initial set of small area prototype arrays using thin-film processing methods and polycrystalline silicon transistors. These prototypes were developed in the spirit of exploring the possibility of creating large area arrays offering single photon counting capabilities and, to our knowledge, are the first photon counting arrays fabricated using thin-film techniques. In this paper, the architecture of the prototype pixels is presented, and considerations that influenced the design of the pixel circuits, including amplifier noise, TFT performance variations, and minimum feature size, are discussed.

  14. Medical diagnosis system and method with multispectral imaging. [depth of burns and optical density of the skin

    NASA Technical Reports Server (NTRS)

    Anselmo, V. J.; Reilly, T. H. (Inventor)

    1979-01-01

    A skin diagnosis system includes a scanning and optical arrangement whereby light reflected from each incremental area (pixel) of the skin is directed simultaneously to three separate light filters, e.g., IR, red, and green. As a result, the three devices simultaneously produce three signals which are directly related to the reflectance of light of different wavelengths from the corresponding pixel. After processing, these three signals for each pixel are used as inputs to one or more output devices to produce a visual color display and/or a hard-copy color print, either of which is usable as a diagnostic aid by a physician.

  15. Full resolution hologram-like autostereoscopic display

    NASA Technical Reports Server (NTRS)

    Eichenlaub, Jesse B.; Hutchins, Jamie

    1995-01-01

    Under this program, Dimension Technologies Inc. (DTI) developed a prototype display that uses a proprietary illumination technique to create autostereoscopic hologram-like full resolution images on an LCD operating at 180 fps. The resulting 3D image possesses a resolution equal to that of the LCD along with properties normally associated with holograms, including change of perspective with observer position and lack of viewing position restrictions. Furthermore, this autostereoscopic technique eliminates the need to wear special glasses to achieve the parallax effect. Under the program a prototype display was developed which demonstrates the hologram-like full resolution concept. To implement such a system, DTI explored various concept designs and enabling technologies required to support those designs. Specifically required were: a parallax illumination system with sufficient brightness and control; an LCD with rapid address and pixel response; and an interface to an image generation system for creation of computer graphics. Of the possible parallax illumination system designs, we chose a design which utilizes an array of fluorescent lamps. This system creates six sets of illumination areas to be imaged behind an LCD. This controlled illumination array is interfaced to a lenticular lens assembly which images the light segments into thin vertical light lines to achieve the parallax effect. This light line formation is the foundation of DTI's autostereoscopic technique. The David Sarnoff Research Center (Sarnoff) was subcontracted to develop an LCD that would operate with a fast scan rate and pixel response. Sarnoff chose a surface mode cell technique and produced the world's first large area pi-cell active matrix TFT LCD. The device provided adequate performance to evaluate five different perspective stereo viewing zones. 
A Silicon Graphics IRIS Indigo system was used for image generation, which allowed for static and dynamic rendering of multiple-perspective images. During the development of the prototype display, we identified many critical issues associated with implementing such a technology. Testing and evaluation enabled us to prove that this illumination technique provides autostereoscopic 3D multi-perspective images with a wide range of view, smooth transitions, and flickerless operation, given suitable enabling technologies.

  16. How to reinforce perception of depth in single two-dimensional pictures

    NASA Technical Reports Server (NTRS)

    Nagata, S.

    1989-01-01

    The physical conditions under which the display of single 2-D pictures produces realistic images were studied using the characteristics of how information for visual depth perception is taken in. Depth sensitivity, defined as the ratio of viewing distance to the depth discrimination threshold, was introduced in order to evaluate the availability of various cues for depth perception: binocular parallax, motion parallax, accommodation, convergence, size, texture, brightness, and air-perspective contrast. The effects of binocular parallax under different conditions, whose depth sensitivity is greatest at distances up to about 10 m, were studied with a new versatile stereoscopic display. From these results, four conditions for reinforcing the perception of depth in single pictures were proposed; these conditions are met by the old viewing devices and by the new high-definition and wide television displays.

  17. The Visible Heart® project and free-access website 'Atlas of Human Cardiac Anatomy'.

    PubMed

    Iaizzo, Paul A

    2016-12-01

    Pre- and post-evaluations of implantable cardiac devices require innovative and critical testing in all phases of the design process. The Visible Heart® Project was successfully launched in 1997, and 3 years later the Atlas of Human Cardiac Anatomy website went online. The Visible Heart® methodologies and the Atlas website can be used to better understand human cardiac anatomy and disease states and/or to improve cardiac device design throughout the development process. To date, Visible Heart® methodologies have been used to reanimate 75 human hearts, all considered non-viable for transplantation. The Atlas is a unique free-access website featuring novel images of functional and fixed human cardiac anatomies from >400 human heart specimens. Furthermore, the website includes educational tutorials on anatomy, physiology, congenital heart disease and various imaging modalities. For instance, the Device Tutorial provides examples of commonly deployed devices that were present at the time of in vitro reanimation or were subsequently delivered, including leads, catheters, valves, annuloplasty rings, leadless pacemakers and stents. Another section of the website displays 3D models of vasculature, blood volumes, and/or tissue volumes reconstructed from computed tomography (CT) and magnetic resonance imaging (MRI) of various heart specimens. A new section allows the user to interact with various heart models. Visible Heart® methodologies have enabled our laboratory to reanimate 75 human hearts and visualize functional cardiac anatomies and device/tissue interfaces. The website freely shares all images, video clips and CT/MRI DICOM files in honour of the generous gifts received from donors and their families. Published on behalf of the European Society of Cardiology. All rights reserved. © The Author 2016. For Permissions, please email: journals.permissions@oup.com.

  18. Controlling Motion Sickness and Spatial Disorientation and Enhancing Vestibular Rehabilitation with a User-Worn See-Through Display

    PubMed Central

    Krueger, Wesley W.O.

    2010-01-01

    Objectives/Hypotheses An eyewear-mounted visual display ("user-worn see-through display") projecting an artificial horizon aligned with the user's head and body position in space can prevent or lessen motion sickness in susceptible individuals in a motion-provocative environment, as well as aid patients undergoing vestibular rehabilitation. In this project, a wearable display device, including software technology and hardware, was developed, and a phase I feasibility study and a phase II clinical trial for safety and efficacy were performed. Study Design Both phase I and phase II were prospective studies funded by the NIH. The phase II study used repeated measures for motion-intolerant subjects and a randomized control group (display device/no display device) pre-post test design for patients in vestibular rehabilitation. Methods Following technology and display device development, 75 patients were evaluated by tests and rating scales in the phase II study; 25 subjects with motion intolerance used the technology in the display device in provocative environments and completed subjective rating scales, while 50 patients were evaluated before and after vestibular rehabilitation (25 using the display device and 25 in a control group) using established test measures. Results All patients with motion intolerance rated the technology as helpful for the nine symptoms assessed, and 96% rated the display device as simple and easy to use. Duration of symptoms decreased significantly with use of the display technology. In patients undergoing vestibular rehabilitation, there were no significant differences in the amount of change from pre- to post-therapy on objective balance tests between display device users and controls. However, those using the technology required significantly fewer rehabilitation sessions to achieve those outcomes than the control group.
Conclusions A user-worn see-through display, utilizing a visual fixation target coupled with a stable artificial horizon and aligned with user movement, has demonstrated substantial benefit for individuals susceptible to motion intolerance and spatial disorientation and those undergoing vestibular rehabilitation. The technology developed has applications in any environment where motion sensitivity affects human performance. PMID:21181963

  19. Polyplanar optical display electronics

    NASA Astrophysics Data System (ADS)

    DeSanto, Leonard; Biscardi, Cyrus

    1997-07-01

    The polyplanar optical display (POD) is a unique display screen which can be used with any projection source. The prototype ten-inch display is two inches thick and has a matte black face which allows for high-contrast images. The prototype being developed is a form, fit and functional replacement display for the B-52 aircraft, which uses a monochrome ten-inch display. In order to achieve a long lifetime, the new display uses a 100 milliwatt green solid-state laser at 532 nm as its light source. To produce real-time video, the laser light is modulated by a digital light processing (DLP) chip manufactured by Texas Instruments. In order to use the solid-state laser as the light source and also fit within the constraints of the B-52 display, the digital micromirror device (DMD) circuit board is removed from the Texas Instruments DLP light engine assembly. Due to the compact architecture of the projection system within the display chassis, the DMD chip is operated remotely from the Texas Instruments circuit board. We discuss the operation of the DMD divorced from the light engine and the interfacing of the DMD board with various video formats, including the format specific to the B-52 aircraft. A brief discussion of the electronics required to drive the laser is also presented.

  20. Biotube

    NASA Technical Reports Server (NTRS)

    Richards, Stephanie E. (Compiler); Levine, Howard G.; Romero, Vergel

    2016-01-01

    Biotube was developed for plant gravitropic research investigating the potential for magnetic fields to orient plant roots as they grow in microgravity. Prior to flight, experimental seeds are placed into seed cassettes, each capable of containing up to 10 seeds, and inserted between two magnets located within one of three Magnetic Field Chambers (MFCs). Biotube is stored within an International Space Station (ISS) stowage locker and provides three levels of containment for chemical fixatives. Features include temperature monitoring, fixative/preservative delivery to specimens, and real-time video imaging downlink. Biotube's primary subsystems are: (1) The Water Delivery System, which automatically activates and controls the delivery of water (to initiate seed germination). (2) The Fixative Storage and Delivery System, which stores and delivers chemical fixative or RNAlater to each seed cassette. (3) The Digital Imaging System, consisting of 4 charge-coupled device (CCD) cameras, a video multiplexer, a lighting multiplexer, and 16 infrared light-emitting diodes (LEDs) that provide illumination while the photos are being captured. (4) The Command and Data Management System, which provides overall control of the integrated subsystems, the graphical user interface, system status and error message display, image display, and other functions.

  1. Diffraction phase microscopy realized with an automatic digital pinhole

    NASA Astrophysics Data System (ADS)

    Zheng, Cheng; Zhou, Renjie; Kuang, Cuifang; Zhao, Guangyuan; Zhang, Zhimin; Liu, Xu

    2017-12-01

    We report a novel approach to diffraction phase microscopy (DPM) with automatic pinhole alignment. The pinhole, which serves as a spatial low-pass filter to generate a uniform reference beam, is made from a liquid crystal display (LCD) device that allows electrical control. We have made DPM more accessible to users, while maintaining high phase-measurement sensitivity and accuracy, by exploring low-cost optical components and replacing the tedious pinhole alignment process with an automatic optical alignment procedure. Because its size and shape can be modified at will, the LCD device serves as a universal filter, requiring no future replacement. Moreover, a graphical user interface for real-time phase imaging has also been developed using a USB CMOS camera. Experimental results showing height maps of bead samples and the dynamics of live red blood cells (RBCs) are also presented, making this system ready for broad adoption in biological imaging and material metrology.
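
    Because the pinhole is just a pattern written to the LCD, changing its size, shape, or position is a matter of writing a new frame, which is what makes the digital pinhole automatic and adjustable. A minimal sketch follows; the panel dimensions and radius are arbitrary illustrative values, not the paper's.

```python
import numpy as np

def pinhole_frame(rows=768, cols=1024, center=(384, 512), radius=6):
    # Transmissive disc (the "pinhole") on an opaque background; radius,
    # shape and position are set purely in software.
    yy, xx = np.mgrid[:rows, :cols]
    r2 = (yy - center[0]) ** 2 + (xx - center[1]) ** 2
    return (r2 <= radius ** 2).astype(np.uint8)

frame = pinhole_frame()
```

    An automatic alignment loop could then simply sweep `center` over candidate positions and keep the one maximizing the transmitted reference-beam power.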

  2. Holography and optical information processing; Proceedings of the Soviet-Chinese Joint Seminar, Bishkek, Kyrgyzstan, Sept. 21-26, 1991

    NASA Astrophysics Data System (ADS)

    Mikaelian, Andrei L.

    Attention is given to data storage, devices, architectures, and implementations of optical memory and neural networks; holographic optical elements and computer-generated holograms; holographic display and materials; systems, pattern recognition, interferometry, and applications in optical information processing; and special measurements and devices. Topics discussed include optical immersion as a new way to increase information recording density, systems for data reading from optical disks on the basis of diffractive lenses, a new real-time optical associative memory system, an optical pattern recognition system based on a WTA model of neural networks, phase diffraction gratings for the integral transforms of coherent light fields, holographic recording with operated sensitivity and stability in chalcogenide glass layers, a compact optical logic processor, a hybrid optical system for computing invariant moments of images, optical fiber holographic interferometry, and image transmission through random media in a single pass via optical phase conjugation.

  3. Design and fabrication of vertically-integrated CMOS image sensors.

    PubMed

    Skorka, Orit; Joseph, Dileepan

    2011-01-01

    Technologies to fabricate integrated circuits (IC) with 3D structures are an emerging trend in IC design. They are based on vertical stacking of active components to form heterogeneous microsystems. Electronic image sensors will benefit from these technologies because they allow increased pixel-level data processing and device optimization. This paper covers general principles in the design of vertically-integrated (VI) CMOS image sensors that are fabricated by flip-chip bonding. These sensors are composed of a CMOS die and a photodetector die. As a specific example, the paper presents a VI-CMOS image sensor that was designed at the University of Alberta, and fabricated with the help of CMC Microsystems and Micralyne Inc. To realize prototypes, CMOS dies with logarithmic active pixels were prepared in a commercial process, and photodetector dies with metal-semiconductor-metal devices were prepared in a custom process using hydrogenated amorphous silicon. The paper also describes a digital camera that was developed to test the prototype. In this camera, scenes captured by the image sensor are read using an FPGA board, and sent in real time to a PC over USB for data processing and display. Experimental results show that the VI-CMOS prototype has a higher dynamic range and a lower dark limit than conventional electronic image sensors.

  4. A customizable system for real-time image processing using the Blackfin DSProcessor and the MicroC/OS-II real-time kernel

    NASA Astrophysics Data System (ADS)

    Coffey, Stephen; Connell, Joseph

    2005-06-01

    This paper presents a development platform for real-time image processing based on the ADSP-BF533 Blackfin processor and the MicroC/OS-II real-time operating system (RTOS). MicroC/OS-II is a completely portable, ROMable, pre-emptive, real-time kernel. The Blackfin Digital Signal Processors (DSPs), incorporating the Analog Devices/Intel Micro Signal Architecture (MSA), are a broad family of 16-bit fixed-point products with a dual Multiply Accumulate (MAC) core. In addition, they have a rich instruction set with variable instruction length and both DSP and MCU functionality, making them ideal for media-based applications. Using MicroC/OS-II for task scheduling and management, the proposed system can capture and process raw RGB data from any standard 8-bit greyscale image sensor in soft real time and then display the processed result using a simple PC graphical user interface (GUI). Additionally, the GUI allows configuration of the image capture rate and the system and core DSP clock rates, thereby allowing connectivity to a selection of image sensors and memory devices. The GUI also allows selection from a set of image processing algorithms based in the embedded operating system.

  5. Design and Fabrication of Vertically-Integrated CMOS Image Sensors

    PubMed Central

    Skorka, Orit; Joseph, Dileepan

    2011-01-01

    Technologies to fabricate integrated circuits (IC) with 3D structures are an emerging trend in IC design. They are based on vertical stacking of active components to form heterogeneous microsystems. Electronic image sensors will benefit from these technologies because they allow increased pixel-level data processing and device optimization. This paper covers general principles in the design of vertically-integrated (VI) CMOS image sensors that are fabricated by flip-chip bonding. These sensors are composed of a CMOS die and a photodetector die. As a specific example, the paper presents a VI-CMOS image sensor that was designed at the University of Alberta, and fabricated with the help of CMC Microsystems and Micralyne Inc. To realize prototypes, CMOS dies with logarithmic active pixels were prepared in a commercial process, and photodetector dies with metal-semiconductor-metal devices were prepared in a custom process using hydrogenated amorphous silicon. The paper also describes a digital camera that was developed to test the prototype. In this camera, scenes captured by the image sensor are read using an FPGA board, and sent in real time to a PC over USB for data processing and display. Experimental results show that the VI-CMOS prototype has a higher dynamic range and a lower dark limit than conventional electronic image sensors. PMID:22163860

  6. SU-C-209-06: Improving X-Ray Imaging with Computer Vision and Augmented Reality

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    MacDougall, R.D.; Scherrer, B; Don, S

    Purpose: To determine the feasibility of using a computer vision algorithm and augmented reality interface to reduce repeat rates and improve consistency of image quality and patient exposure in general radiography. Methods: A prototype device, designed for use with commercially available hardware (Microsoft Kinect 2.0) capable of depth sensing and high resolution/frame rate video, was mounted to the x-ray tube housing as part of a Philips DigitalDiagnost digital radiography room. Depth data and video was streamed to a Windows 10 PC. Proprietary software created an augmented reality interface where overlays displayed selectable information projected over real-time video of the patient. The information displayed prior to and during x-ray acquisition included: recognition and position of ordered body part, position of image receptor, thickness of anatomy, location of AEC cells, collimated x-ray field, degree of patient motion and suggested x-ray technique. Pre-clinical data was collected in a volunteer study to validate patient thickness measurements and x-ray images were not acquired. Results: Proprietary software correctly identified ordered body part, measured patient motion, and calculated thickness of anatomy. Pre-clinical data demonstrated accuracy and precision of body part thickness measurement when compared with other methods (e.g. laser measurement tool). Thickness measurements provided the basis for developing a database of thickness-based technique charts that can be automatically displayed to the technologist. Conclusion: The utilization of computer vision and commercial hardware to create an augmented reality view of the patient and imaging equipment has the potential to drastically improve the quality and safety of x-ray imaging by reducing repeats and optimizing technique based on patient thickness. Society of Pediatric Radiology Pilot Grant; Washington University Bear Cub Fund.

  7. Low vision goggles: optical design studies

    NASA Astrophysics Data System (ADS)

    Levy, Ofer; Apter, Boris; Efron, Uzi

    2006-08-01

    Low Vision (LV) due to Age-Related Macular Degeneration (AMD), Glaucoma, or Retinitis Pigmentosa (RP) is a growing problem, projected to affect more than 15 million people in the U.S. alone in 2010. Low Vision Aid Goggles (LVG) have been under development at Ben-Gurion University and the Holon Institute of Technology. The device is based on a unique Image Transceiver Device (ITD), combining the functions of imaging and display in a single chip. Using the ITD-based goggles, specifically designed for the visually impaired, our aim is to develop a head-mounted device that will capture the ambient scenery, perform the necessary image enhancement and processing, and re-direct the image to the healthy part of the patient's retina. This design methodology will allow the Goggles to be mobile, multi-tasking, and environment-adaptive. In this paper we present the optical design considerations of the Goggles, including a preliminary performance analysis. Common vision deficiencies of LV patients are usually divided into two main categories, peripheral vision loss (PVL) and central vision loss (CVL), each requiring a different Goggles design. A set of design principles has been defined for each category. Four main optical designs are presented and compared according to these principles. Each design is presented in two main optical configurations: a see-through system and a video imaging system. The use of full-color ITD-based Goggles is also discussed.

  8. Development of an ultra-portable echo device connected to USB port.

    PubMed

    Saijo, Yoshifumi; Nitta, Shin-ichi; Kobayashi, Kazuto; Arai, Hitoshi; Nemoto, Yukiko

    2004-04-01

    In practical cardiology, stethoscope-based auscultation has long been used to assess the patient's clinical status. Recently, several hand-held echo devices have come onto the market, and they are expected to serve as "visible" auscultation in place of the stethoscope. We have developed a portable and inexpensive echo device which can be used for screening of cardiac function. Two single-element transducers were attached 180 degrees apart to a rotor with a 14-mm diameter. The mechanical scanner, integrated circuits for transmitting and receiving ultrasonic signals, and an A/D converter were encapsulated in a 150 x 40 mm probe weighing 200 g. The scan started and the image was displayed on a Windows-based personal computer (PC) as soon as the probe was connected to a USB 2.0 port of the PC. The central frequency was selectable between 2.5 and 7.5 MHz, the image depth was 15 cm, and the frame rate was 30/s. The estimated price of this ultra-portable ultrasound is about 3000 US dollars including software. For 69 cardiac patients with informed consent, image quality was compared with that obtained with basic-range diagnostic echo machines. Left ventricular ejection fraction (EF) derived from the normal M-mode image of standard machines (EFm) was compared with the visual EF of the ultra-portable ultrasound device (EFv). The image quality was comparable to that of the basic-range diagnostic echo machines, although the short-axis view of the aortic root was not clearly visualized because the probe was too large for the intercostal approach. EFv agreed well with EFm. The ultra-portable ultrasound may provide useful information for screening and health care.

  9. Blue light effect on retinal pigment epithelial cells by display devices.

    PubMed

    Moon, Jiyoung; Yun, Jieun; Yoon, Yeo Dae; Park, Sang-Il; Seo, Young-Jun; Park, Won-Sang; Chu, Hye Yong; Park, Keun Hong; Lee, Myung Yeol; Lee, Chang Woo; Oh, Soo Jin; Kwak, Young-Shin; Jang, Young Pyo; Kang, Jong Soon

    2017-05-22

    Blue light has high photochemical energy and induces apoptosis in retinal pigment epithelial cells. Owing to this phototoxicity, the retinal hazard of blue-light stimulation has been well demonstrated using high-intensity light sources. However, it has not been studied whether the low-intensity blue light emitted by displays, such as those used in today's smartphones, monitors, and TVs, also causes apoptosis in retinal pigment epithelial cells. We examined the blue-light effect on human adult retinal epithelial cells using display devices with different blue-light wavelength ranges, whose peaks appear at 449 nm, 458 nm, and 470 nm. When blue light was illuminated onto A2E-loaded ARPE-19 cells using these displays, the display with a blue-light peak at a shorter wavelength produced more reactive oxygen species (ROS). Moreover, the reduction of cell viability and induction of caspase-3/7 activity were more evident in A2E-loaded ARPE-19 cells after illumination by the display with a blue-light peak at a shorter wavelength, especially at 449 nm. Additionally, white light was tested to examine the effect of blue light in mixed-color illumination with red and green lights. Consistent with the blue-light-only results, white light from display devices with a blue-light peak at a shorter wavelength also triggered increased cell death and apoptosis compared with that from display devices with a blue-light peak at a longer wavelength. These results show that, even at the low intensities used in display devices, blue light can induce ROS production and apoptosis in retinal cells. They also suggest that the blue-light hazard of display devices could be greatly reduced if the devices contained less short-wavelength blue light.

  10. Document cards: a top trumps visualization for documents.

    PubMed

    Strobelt, Hendrik; Oelke, Daniela; Rohrdantz, Christian; Stoffel, Andreas; Keim, Daniel A; Deussen, Oliver

    2009-01-01

    Finding suitable, less space-consuming views for a document's main content is crucial to providing convenient access to large document collections on display devices of different sizes. We present a novel compact visualization which represents the document's key semantics as a mixture of images and important key terms, similar to cards in a top trumps game. The key terms are extracted using an advanced text-mining approach based on fully automatic document-structure extraction. The images and their captions are extracted using a graphical heuristic, and the captions are used for a semi-semantic image weighting. Furthermore, we use the image color histogram for classification and show at least one representative from each non-empty image class. The approach is demonstrated on the IEEE InfoVis publications of a complete year. The method can easily be applied to other publication collections and sets of documents which contain images.
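
    The histogram-based selection of image representatives can be sketched as follows. The abstract does not specify the classifier, so the greedy grouping, the total-variation distance, and the `threshold` value here are illustrative stand-ins:

```python
import numpy as np

def color_histogram(img, bins=8):
    """Coarse normalized RGB histogram (bins per channel) of an HxWx3 image."""
    h, _ = np.histogramdd(img.reshape(-1, 3).astype(float),
                          bins=(bins,) * 3, range=[(0, 256)] * 3)
    h = h.ravel()
    return h / h.sum()

def pick_representatives(images, threshold=0.5):
    """Greedy grouping by total-variation distance between histograms: an
    image joins the first class whose representative is within `threshold`,
    otherwise it founds a new class; one representative per non-empty
    class is returned."""
    reps = []
    for img in images:
        h = color_histogram(img)
        if not any(np.abs(h - rh).sum() / 2 < threshold for _, rh in reps):
            reps.append((img, h))
    return [img for img, _ in reps]

# Two red thumbnails collapse into one class; the blue one stays separate:
red = np.zeros((4, 4, 3), np.uint8); red[..., 0] = 255
blue = np.zeros((4, 4, 3), np.uint8); blue[..., 2] = 255
representatives = pick_representatives([red, red, blue])
```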

  11. Computer vision research with new imaging technology

    NASA Astrophysics Data System (ADS)

    Hou, Guangqi; Liu, Fei; Sun, Zhenan

    2015-12-01

    Light field imaging is capable of capturing dense multi-view 2D images in one snapshot, recording both the intensity values and the directions of rays simultaneously. As an emerging 3D device, the light field camera has been widely used in digital refocusing, depth estimation, stereoscopic display, etc. Traditional multi-view stereo (MVS) methods perform well only on strongly textured surfaces; on textureless or low-textured regions the depth map contains numerous holes and large ambiguities. In this paper, we exploit light field imaging technology for 3D face modeling in computer vision. Based on a 3D morphable model, we estimate the pose parameters from facial feature points. Then the depth map is estimated through the epipolar plane image (EPI) method. Finally, the high-quality 3D face model is recovered via a fusion strategy. We evaluate the effectiveness and robustness on face images captured by a light field camera with different poses.

  12. Invited Paper Optical Resonators For Associative Memory

    NASA Astrophysics Data System (ADS)

    Anderson, Dana Z.

    1986-06-01

    One can construct a memory having associative characteristics using optical resonators with an internal gain medium. The device operates on the principle that an optical resonator employing a holographic grating can have user prescribed eigenmodes. Information that is to be recalled is contained in the hologram. Each information entity (e.g. an image of a cat) defines an eigenmode of the resonator. The stored information is accessed by injecting partial information (e.g. an image of the cat's ear) into the resonator. The appropriate eigenmode is selected through a competitive process in a gain medium placed inside the resonator. With a net gain greater than one, the gain amplifies the field belonging to the eigenmode that most resembles the injected field; the other eigenmodes are suppressed via the competition for the gain. One can expect this device to display several intriguing features such as recall transitions and creativity. I will discuss some of the general properties of this class of devices and present the results from a series of experiments with a simple holographic resonator employing photorefractive gain.
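
    The competitive recall described above, where the eigenmode that best matches the injected partial field captures the gain and the others are suppressed, can be mimicked with a toy winner-take-all iteration. This normalize-and-square dynamic and the example patterns are an illustrative model, not the resonator's actual physics:

```python
import numpy as np

def resonator_recall(modes, injected, steps=20):
    """Toy winner-take-all competition: each stored eigenmode is seeded
    with its overlap with the injected partial field, then repeated
    squaring under a shared (saturable) gain pool drives the amplitude
    of the best-matching mode toward 1 and the rest toward 0."""
    a = np.abs(np.asarray(modes, float) @ np.asarray(injected, float))
    for _ in range(steps):
        a = a ** 2        # richer-get-richer: gain favors the largest mode
        a /= a.sum()      # competition for a fixed gain budget
    return a              # final modal amplitude shares

# Three stored "images" and a partial cue resembling the first one
# (the "cat's ear" of the abstract's example):
modes = np.array([[1, 1, 1, 0, 0, 0],
                  [0, 0, 0, 1, 1, 1],
                  [1, 0, 1, 0, 1, 0]], float)
cue = np.array([1, 1, 0, 0, 0, 0], float)
shares = resonator_recall(modes, cue)
```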

  13. Relevance of ERTS to the State of Ohio

    NASA Technical Reports Server (NTRS)

    Sweet, D. C. (Principal Investigator)

    1973-01-01

    The author has identified the following significant results. A significant result was the fabrication of an image transfer and comparison device. To avoid problems and high costs encountered in manual drafting methods, Battelle staff members have fabricated an inexpensive, yet effective, technique for transferring ERTS-1 analysis displays from the Spatial Data 32-Color Viewer to maps and/or aircraft imagery. In brief, the image transfer-comparison device consists of a 2-way mirror which functions similarly to a zoom transfer scope. However, the device permits multiuser viewing and real time photographic recording (35-mm and Polaroid) of enhanced ERTS-1 imagery superimposed over maps and aircraft photography. Thirty-five mm, 70 mm, and 4 in. x 5 in. photographs are taken of 80% of the TV screen of the Spatial Data Density Slicing Viewer. The resulting black and white and color imagery is then used in transparent overlays, viewgraphs, 35-mm and 70-mm transparencies, and paper prints for reports and publications. Annotations can be added on the TV screen or on the finished product.

  14. Interrogation of an object for dimensional and topographical information

    DOEpatents

    McMakin, Douglas L.; Severtsen, Ronald H.; Hall, Thomas E.; Sheen, David M.; Kennedy, Mike O.

    2004-03-09

    Disclosed are systems, methods, devices, and apparatus to interrogate a clothed individual with electromagnetic radiation to determine one or more body measurements at least partially covered by the individual's clothing. The invention further includes techniques to interrogate an object with electromagnetic radiation in the millimeter and/or microwave range to provide a volumetric representation of the object. This representation can be used to display images and/or determine dimensional information concerning the object.

  15. Interrogation of an object for dimensional and topographical information

    DOEpatents

    McMakin, Doug L [Richland, WA; Severtsen, Ronald H [Richland, WA; Hall, Thomas E [Richland, WA; Sheen, David M [Richland, WA

    2003-01-14

    Disclosed are systems, methods, devices, and apparatus to interrogate a clothed individual with electromagnetic radiation to determine one or more body measurements at least partially covered by the individual's clothing. The invention further includes techniques to interrogate an object with electromagnetic radiation in the millimeter and/or microwave range to provide a volumetric representation of the object. This representation can be used to display images and/or determine dimensional information concerning the object.

  16. Medical Robotic and Tele surgical Simulation Education Research

    DTIC Science & Technology

    2017-05-01

    training exercises, DVSS = 40, dVT = 65, and RoSS = 52 for skills development. All three offer 3D visual images but use different display technologies...capabilities with an emphasis on their educational skills. They offer unique advantages and capabilities in training robotic surgeons. Each device has been...evaluate the transfer of training effect of each simulator. Collectively, this work will offer end users and potential buyers a comparison of the value

  17. Double-heterojunction nanorod light-responsive LEDs for display applications.

    PubMed

    Oh, Nuri; Kim, Bong Hoon; Cho, Seong-Yong; Nam, Sooji; Rogers, Steven P; Jiang, Yiran; Flanagan, Joseph C; Zhai, You; Kim, Jae-Hwan; Lee, Jungyup; Yu, Yongjoon; Cho, Youn Kyoung; Hur, Gyum; Zhang, Jieqian; Trefonas, Peter; Rogers, John A; Shim, Moonsub

    2017-02-10

    Dual-functioning displays, which can simultaneously transmit and receive information and energy through visible light, would enable enhanced user interfaces and device-to-device interactivity. We demonstrate that double heterojunctions designed into colloidal semiconductor nanorods allow both efficient photocurrent generation through a photovoltaic response and electroluminescence within a single device. These dual-functioning, all-solution-processed double-heterojunction nanorod light-responsive light-emitting diodes open feasible routes to a variety of advanced applications, from touchless interactive screens to energy harvesting and scavenging displays and massively parallel display-to-display data communication. Copyright © 2017, American Association for the Advancement of Science.

  18. Space Images for NASA JPL Android Version

    NASA Technical Reports Server (NTRS)

    Nelson, Jon D.; Gutheinz, Sandy C.; Strom, Joshua R.; Arca, Jeremy M.; Perez, Martin; Boggs, Karen; Stanboli, Alice

    2013-01-01

    This software addresses the demand for easily accessible NASA JPL images and videos by providing a simple, user-friendly graphical user interface that can be run via the Android platform from any location where an Internet connection is available. This app is complementary to the iPhone version of the application. A backend infrastructure stores, tracks, and retrieves space images from the JPL Photojournal and Institutional Communications Web server, and catalogs the information into a streamlined rating infrastructure. The system consists of four distinguishing components: image repository, database, server-side logic, and Android mobile application. The image repository contains images from various JPL flight projects. The database stores the image information as well as the user ratings. The server-side logic retrieves the image information from the database and categorizes each image for display. The Android mobile application is an interfacing delivery system that retrieves the image information from the server for each Android mobile device user. Also created is a reporting and tracking system for charting and monitoring usage. Unlike other Android mobile image applications, this system uses the latest emerging technologies to produce image listings based directly on user input, allowing countless combinations of returned images. The backend infrastructure uses industry-standard coding and database methods, enabling future software improvement and technology updates. The flexibility of the system design framework permits multiple levels of display possibilities and provides integration capabilities. Unique features of the software include image/video retrieval from a selected set of categories, image Web links that can be shared among e-mail users, sharing to Facebook/Twitter, marking as user favorites, and image metadata searchable for instant results.

  19. Liquid crystal true 3D displays for augmented reality applications

    NASA Astrophysics Data System (ADS)

    Li, Yan; Liu, Shuxin; Zhou, Pengcheng; Chen, Quanming; Su, Yikai

    2018-02-01

    Augmented reality (AR) technology, which integrates virtual computer-generated information into the real-world scene, is believed to be the next-generation human-machine interface. However, most AR products adopt the stereoscopic 3D display technique, which causes the accommodation-vergence conflict. To solve this problem, we have proposed two approaches. The first is a multi-planar volumetric display using fast-switching polymer-stabilized liquid crystal (PSLC) films. By rapidly switching the films between scattering and transparent states while synchronizing with a high-speed projector, the 2D slices of a 3D volume can be displayed in time sequence. We investigated high-performance PSLC films in both normal and reverse modes, and demonstrated four-depth AR images with correct accommodation cues. For the second approach, we realized a holographic AR display using digital blazed gratings and a 4f system to eliminate zero-order and higher-order noise. With a 4k liquid-crystal-on-silicon device, we achieved a field of view (FOV) of 32 deg. Moreover, we designed a compact waveguide-based holographic 3D display. In the design, there are two holographic optical elements (HOEs), each of which functions as both a diffractive grating and a Fresnel lens. Because of the grating effect, holographic 3D image light is coupled into and decoupled out of the waveguide by modifying incident angles. Because of the lens effect, the collimated zero-order light is focused to a point and filtered out. The optical power of the second HOE also helps enlarge the FOV.

  20. Development of scanning holographic display using MEMS SLM

    NASA Astrophysics Data System (ADS)

    Takaki, Yasuhiro

    2016-10-01

    Holography is an ideal three-dimensional (3D) display technique, because it produces 3D images that naturally satisfy human 3D perception including physiological and psychological factors. However, its electronic implementation is quite challenging because ultra-high resolution is required for display devices to provide sufficient screen size and viewing zone. We have developed holographic display techniques to enlarge the screen size and the viewing zone by use of microelectromechanical systems spatial light modulators (MEMS-SLMs). Because MEMS-SLMs can generate hologram patterns at a high frame rate, the time-multiplexing technique is utilized to virtually increase the resolution. Three kinds of scanning systems have been combined with MEMS-SLMs; the screen scanning system, the viewing-zone scanning system, and the 360-degree scanning system. The screen scanning system reduces the hologram size to enlarge the viewing zone and the reduced hologram patterns are scanned on the screen to increase the screen size: the color display system with a screen size of 6.2 in. and a viewing zone angle of 11° was demonstrated. The viewing-zone scanning system increases the screen size and the reduced viewing zone is scanned to enlarge the viewing zone: a screen size of 2.0 in. and a viewing zone angle of 40° were achieved. The two-channel system increased the screen size to 7.4 in. The 360-degree scanning increases the screen size and the reduced viewing zone is scanned circularly: the display system having a flat screen with a diameter of 100 mm was demonstrated, which generates 3D images viewed from any direction around the flat screen.

  1. Optical characterization of display screens by speckle-contrast measurements

    NASA Astrophysics Data System (ADS)

    Pozo, Antonio M.; Castro, José J.; Rubiño, Manuel

    2012-10-01

    In recent years, flat-panel display (FPD) technology has undergone great development, and FPDs are now present in many devices. A significant element in FPD manufacturing is the display front surface. Manufacturers sell FPDs with different types of front surface, which can be matte (also called anti-glare) or glossy. Users who prefer glossy screens consider the images shown on them to have more vivid colours than on matte-screen displays. However, external light sources may cause unpleasant reflections on glossy screens; these reflections can be reduced by a matte treatment of the FPD front surface. In this work, we present a method to characterize the front surface of FPDs using laser speckle patterns. We characterized three FPDs: a 23" Samsung XL2370 LCD monitor with a matte screen, a 15.4" Toshiba Satellite A100 laptop with a glossy screen, and a Papyre electronic book reader. The results show great differences in speckle-contrast values for the three screens and therefore demonstrate the feasibility of this method for characterizing and comparing FPDs with different types of front surface.
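
    Speckle contrast is conventionally defined as C = sigma_I / <I>, the ratio of the standard deviation of the intensity to its mean, which is the quantity such a measurement computes. A minimal sketch; the synthetic intensity samples are an assumption used only to exercise the definition:

```python
import numpy as np

def speckle_contrast(intensity):
    """Speckle contrast C = sigma_I / <I> of an intensity image. Fully
    developed speckle has negative-exponential intensity statistics, so
    C ~ 1; a diffused or averaged pattern yields a lower C."""
    I = np.asarray(intensity, dtype=float)
    return I.std() / I.mean()

rng = np.random.default_rng(1)
fully_developed = rng.exponential(1.0, 100_000)      # ideal speckle, C ~ 1
averaged = fully_developed.reshape(-1, 10).mean(1)   # 10-fold averaging lowers C
```

Differences in C between a matte and a glossy front surface are exactly the kind of contrast gap the abstract reports.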

  2. A new phase encoding approach for a compact head-up display

    NASA Astrophysics Data System (ADS)

    Suszek, Jaroslaw; Makowski, Michal; Sypek, Maciej; Siemion, Andrzej; Kolodziejczyk, Andrzej; Bartosz, Andrzej

    2008-12-01

    The possibility of encoding multiple asymmetric symbols into a single thin binary Fourier hologram would have practical application in the design of simple translucent holographic head-up displays. A Fourier hologram displays the encoded images at infinity, enabling observation without time-consuming eye accommodation. Presenting a set of the most crucial signs to a driver in this way is desirable, especially for older people with various eyesight disabilities. In this paper a method of holographic design is presented that combines spatial segmentation with carrier frequencies. It achieves multiple reconstructed images selectable by the angle of the incident laser beam. To encode several binary symbols into a single Fourier hologram, a chessboard-shaped segmentation function is used. An optimized sequence of phase-encoding steps and a final direct phase binarization enable recording of asymmetric symbols into a binary hologram. The theoretical analysis is presented, verified numerically, and confirmed in an optical experiment. We suggest and describe a practical and highly useful application of such holograms in an inexpensive HUD device for use in the automotive industry, and present two alternative car viewing setups.
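
    The two ingredients named in the abstract, carrier frequencies plus chessboard segmentation followed by direct phase binarization, can be sketched as below. The carrier values and the sign-based binarization rule are illustrative assumptions, not the paper's optimized encoding sequence:

```python
import numpy as np

def binary_fourier_hologram(symbol, carrier=(8, 0)):
    """Take the symbol's Fourier spectrum, add a linear carrier phase
    (which selects the read-out angle of the incident laser beam), then
    binarize the phase to {0, pi} to obtain a thin binary hologram."""
    n, m = symbol.shape
    fy, fx = carrier
    yy, xx = np.mgrid[:n, :m]
    tilt = np.exp(2j * np.pi * (fy * yy / n + fx * xx / m))
    phase = np.angle(np.fft.fft2(symbol) * tilt)
    return np.where(phase >= 0.0, np.pi, 0.0)

def chessboard_multiplex(h1, h2):
    """Chessboard-shaped segmentation: interleave two binary holograms so
    that each encoded symbol reconstructs under its own carrier angle."""
    yy, xx = np.mgrid[:h1.shape[0], :h1.shape[1]]
    return np.where((yy + xx) % 2 == 0, h1, h2)
```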

  3. Advanced electronic displays and their potential in future transport aircraft

    NASA Technical Reports Server (NTRS)

    Hatfield, J. J.

    1981-01-01

    It is pointed out that electronic displays represent one of the keys to continued integration and improvement of the effectiveness of avionic systems in future transport aircraft. The use of modern electronic display media and display generation has become vital given the increasing modes and functions of modern aircraft. Requirements for electronic systems of future transports are examined, and a description is provided of the tools available for cockpit integration, taking into account trends in information processing and presentation, trends in integrated display devices, and trends concerning input/output devices. Developments related to display media, display generation, and I/O devices are considered, giving attention to a comparison of CRT and flat-panel display technology, advanced HUD technology, and multifunction controls. Integrated display formats are discussed, along with integrated systems and cockpit configurations.

  4. Spacecraft 3D Augmented Reality Mobile App

    NASA Technical Reports Server (NTRS)

    Hussey, Kevin J.; Doronila, Paul R.; Kumanchik, Brian E.; Chan, Evan G.; Ellison, Douglas J.; Boeck, Andrea; Moore, Justin M.

    2013-01-01

    The Spacecraft 3D application allows users to learn about and interact with iconic NASA missions in a new and immersive way using common mobile devices. Using Augmented Reality (AR) techniques to project 3D renditions of the mission spacecraft into real-world surroundings, users can interact with and learn about Curiosity, GRAIL, Cassini, and Voyager. Additional updates on future missions, animations, and information will be ongoing. Using a printed AR Target and camera on a mobile device, users can get up close with these robotic explorers, see how some move, and learn about these engineering feats, which are used to expand knowledge and understanding about space. The software receives input from the mobile device's camera to recognize the presence of an AR marker in the camera's field of view. It then displays a 3D rendition of the selected spacecraft in the user's physical surroundings, on the mobile device's screen, while it tracks the device's movement in relation to the physical position of the spacecraft's 3D image on the AR marker.

  5. Novel low-cost 2D/3D switchable autostereoscopic system for notebook computers and other portable devices

    NASA Astrophysics Data System (ADS)

    Eichenlaub, Jesse B.

    1995-03-01

    Mounting a lenticular lens in front of a flat panel display is a well-known, inexpensive, and easy way to create an autostereoscopic system. Such a lens produces half-resolution 3D images because half the pixels on the LCD are seen by the left eye and half by the right eye. This may be acceptable for graphics, but it makes full-resolution text, as displayed by common software, nearly unreadable. Very fine alignment tolerances normally preclude removing and replacing the lens to switch between 2D and 3D applications, so lenticular-lens-based displays have been limited to use as dedicated 3D devices. DTI has devised a technique which removes this limitation, allowing switching between full-resolution 2D and half-resolution 3D imaging modes. A second element, in the form of a concave lenticular lens array whose shape is exactly the negative of the first lens, is mounted on a hinge so that it can be swung down over the first lens array. When so positioned, the two lenses cancel optically, allowing the user to see full-resolution 2D for text or numerical applications. The two lenses, having complementary shapes, naturally tend to nestle together and snap into perfect alignment when pressed together, obviating any need for user-operated alignment mechanisms. This system represents an ideal solution for laptop and notebook computer applications. It was devised to meet the stringent requirements of a laptop computer manufacturer, including very compact size, very low cost, little impact on existing manufacturing or assembly procedures, and compatibility with existing full-resolution 2D text-oriented software as well as 3D graphics. Similar requirements apply to high-end electronic calculators, several models of which now use LCDs for the display of graphics.
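
    The half-resolution behavior follows from column interleaving behind the lenticular lens: each eye is shown only alternate pixel columns. A minimal sketch of the interleaving step; the even/odd column-to-eye mapping is an assumption, as real panels vary:

```python
import numpy as np

def interleave_stereo(left, right):
    """Build the panel image for a lenticular autostereoscopic display:
    the lens directs even pixel columns to one eye and odd columns to
    the other, which is why each eye sees half the horizontal resolution."""
    out = np.empty_like(left)
    out[:, 0::2] = left[:, 0::2]    # columns routed to the left eye
    out[:, 1::2] = right[:, 1::2]   # columns routed to the right eye
    return out
```

Removing the lens (or cancelling it with its concave negative, as in the DTI design) lets the full unmixed frame reach both eyes, restoring full-resolution 2D.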

  6. ACQ4: an open-source software platform for data acquisition and analysis in neurophysiology research

    PubMed Central

    Campagnola, Luke; Kratz, Megan B.; Manis, Paul B.

    2014-01-01

    The complexity of modern neurophysiology experiments requires specialized software to coordinate multiple acquisition devices and analyze the collected data. We have developed ACQ4, an open-source software platform for performing data acquisition and analysis in experimental neurophysiology. This software integrates the tasks of acquiring, managing, and analyzing experimental data. ACQ4 has been used primarily for standard patch-clamp electrophysiology, laser scanning photostimulation, multiphoton microscopy, intrinsic imaging, and calcium imaging. The system is highly modular, which facilitates the addition of new devices and functionality. The modules included with ACQ4 provide for rapid construction of acquisition protocols, live video display, and customizable analysis tools. Position-aware data collection allows automated construction of image mosaics and registration of images with 3-dimensional anatomical atlases. ACQ4 uses free and open-source tools including Python, NumPy/SciPy for numerical computation, PyQt for the user interface, and PyQtGraph for scientific graphics. Supported hardware includes cameras, patch clamp amplifiers, scanning mirrors, lasers, shutters, Pockels cells, motorized stages, and more. ACQ4 is available for download at http://www.acq4.org. PMID:24523692
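    The position-aware mosaic construction mentioned above can be sketched with NumPy, under the assumption (not taken from ACQ4's actual API) that each tile is stored together with a pixel-space stage offset:

```python
import numpy as np

def build_mosaic(tiles, positions, tile_shape):
    """Paste image tiles into one canvas using recorded stage positions.

    `positions` are (row, col) pixel offsets stored with each tile, as a
    position-aware acquisition system might record them.
    """
    h, w = tile_shape
    rows = max(r for r, _ in positions) + h
    cols = max(c for _, c in positions) + w
    canvas = np.zeros((rows, cols), dtype=float)
    for tile, (r, c) in zip(tiles, positions):
        canvas[r:r + h, c:c + w] = tile  # later tiles overwrite overlaps
    return canvas

# Two toy 2x2 tiles acquired side by side.
tiles = [np.full((2, 2), v, dtype=float) for v in (1.0, 2.0)]
mosaic = build_mosaic(tiles, [(0, 0), (0, 2)], (2, 2))
```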

  7. Method and system for providing work machine multi-functional user interface

    DOEpatents

    Hoff, Brian D [Peoria, IL; Akasam, Sivaprasad [Peoria, IL; Baker, Thomas M [Peoria, IL

    2007-07-10

    A method is performed to provide a multi-functional user interface on a work machine for displaying suggested corrective action. The process includes receiving status information associated with the work machine and analyzing the status information to determine an abnormal condition. The process also includes displaying a warning message on the display device indicating the abnormal condition and determining one or more corrective actions to handle the abnormal condition. Further, the process includes determining an appropriate corrective action among the one or more corrective actions and displaying a recommendation message on the display device reflecting the appropriate corrective action. The process may also include displaying a list including the remaining one or more corrective actions on the display device to provide alternative actions to an operator.
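    A minimal sketch of the warning/recommendation flow described in the abstract, with entirely hypothetical condition names, thresholds, and messages:

```python
# Hypothetical condition-to-actions table, for illustration only.
ACTIONS = {
    "engine_overheat": ["reduce load", "idle engine", "shut down engine"],
}

def advise(status):
    """Analyze status info; return (warning, recommendation, alternatives)."""
    if status.get("coolant_temp_c", 0) > 110:  # invented threshold
        condition = "engine_overheat"
        actions = ACTIONS[condition]
        best, rest = actions[0], actions[1:]   # first action = recommended
        return (f"WARNING: {condition}", f"RECOMMENDED: {best}", rest)
    return ("OK", None, [])

msg = advise({"coolant_temp_c": 118})
```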

  8. See-through 3D technology for augmented reality

    NASA Astrophysics Data System (ADS)

    Lee, Byoungho; Lee, Seungjae; Li, Gang; Jang, Changwon; Hong, Jong-Young

    2017-06-01

    Augmented reality is attracting a lot of attention as one of the most spotlighted next-generation technologies. In order to move toward the realization of ideal augmented reality, we need to integrate 3D virtual information into the real world. This integration should not be noticed by users, blurring the boundary between the virtual and real worlds. Thus, the ultimate device for augmented reality can reconstruct and superimpose 3D virtual information on the real world so that the two are not distinguishable, which is referred to as see-through 3D technology. Here, we introduce our previous research on combining see-through displays and 3D technologies using emerging optical combiners: holographic optical elements and index-matched optical elements. Holographic optical elements are volume gratings that have angular and wavelength selectivity. Index-matched optical elements are partially reflective elements using a compensation element for index matching. Using these optical combiners, we could implement see-through 3D displays based on typical methodologies including integral imaging, digital holographic displays, multi-layer displays, and retinal projection. Some of these methods are expected to be optimized and customized for head-mounted or wearable displays. We conclude with demonstration and analysis of fundamental research for head-mounted see-through 3D displays.

  9. Study for verification testing of the helmet-mounted display in the Japanese Experimental Module.

    PubMed

    Nakajima, I; Yamamoto, I; Kato, H; Inokuchi, S; Nemoto, M

    2000-02-01

    Our purpose is to propose a research and development project in the field of telemedicine. The proposed Multimedia Telemedicine Experiment for Extra-Vehicular Activity will entail experiments designed to support astronaut health management during Extra-Vehicular Activity (EVA). Experiments will have relevant applications to the Japanese Experimental Module (JEM) operated by National Space Development Agency of Japan (NASDA) for the International Space Station (ISS). In essence, this is a proposal for verification testing of the Helmet-Mounted Display (HMD), which enables astronauts to verify their own blood pressures and electrocardiograms, and to view a display of instructions from the ground station and listings of work procedures. Specifically, HMD is a device designed to project images and data inside the astronaut's helmet. We consider this R&D proposal to be one of the most suitable projects under consideration in response to NASDA's open invitation calling for medical experiments to be conducted on JEM.

  10. Vacuum status-display and sector-conditioning programs

    NASA Astrophysics Data System (ADS)

    Skelly, J.; Yen, S.

    1990-08-01

    Two programs have been developed for observation and control of the AGS vacuum system, which include the following notable features: (1) they incorporate a graphical user interface and (2) they are driven by a relational database which describes the vacuum system. The vacuum system comprises some 440 devices organized into 28 vacuum sectors. The status-display program invites menu selection of a sector, interrogates the relational database for relevant vacuum devices, acquires live readbacks and posts a graphical display of their status. The sector-conditioning program likewise invites sector selection, produces the same status display and also implements process control logic on the sector devices to pump the sector down from atmospheric pressure to high vacuum over a period extending several hours. As additional devices are installed in the vacuum system, the devices are added to the relational database; these programs then automatically include the new devices.
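    The pattern of a status display driven by a relational device database can be sketched with SQLite; the schema, table, and device names below are invented and do not reflect the actual AGS database:

```python
import sqlite3

# Toy schema standing in for the vacuum-system relational database.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE devices (name TEXT, sector INTEGER, kind TEXT)")
db.executemany("INSERT INTO devices VALUES (?, ?, ?)", [
    ("IonPump01", 1, "pump"),
    ("Gauge01",   1, "gauge"),
    ("Valve07",   2, "valve"),
])

def devices_in_sector(conn, sector):
    """Interrogate the database for the devices of one vacuum sector."""
    rows = conn.execute(
        "SELECT name FROM devices WHERE sector = ? ORDER BY name", (sector,))
    return [r[0] for r in rows]

# A status-display program would now acquire live readbacks for these.
sector1 = devices_in_sector(db, 1)
```

Because the program derives its device list from the database at run time, newly installed devices are picked up automatically once their rows are added, as the abstract describes.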

  11. Hydraulic Universal Display Processor System (HUDPS).

    DTIC Science & Technology

    1981-11-21

    Fault display methods for ground support personnel were investigated during Task II, with emphasis on "smart" alphanumeric devices. Volatile and non-volatile memory components were utilized along with the Intel 8748 microprocessor system.

  12. An embedded system developed for hand held assay used in water monitoring

    NASA Astrophysics Data System (ADS)

    Wu, Lin; Wang, Jianwei; Ramakrishna, Bharath; Hsueh, Mingkai; Liu, Jonathan; Wu, Qufei; Wu, Chao-Cheng; Cao, Mang; Chang, Chein-I.; Jensen, Janet L.; Jensen, James O.; Knapp, Harlan; Daniel, Robert; Yin, Ray

    2005-11-01

    The US Army Joint Service Agent Water Monitor (JSAWM) program is currently interested in an approach that can implement a hardware-based device for the ticket-based hand-held assay (HHA, currently being developed) used for chemical/biological agent detection. This paper presents a preliminary proof-of-concept investigation. Three components are envisioned to accomplish the task. One is ticket development, which has been undertaken by ANP, Inc. Another is software development, which has been carried out by the Remote Sensing Signal and Image Processing Laboratory (RSSIPL) at the University of Maryland, Baltimore County (UMBC). A third is the development of an embedded system that can run the UMBC-developed software to analyze the ANP-developed HHA tickets on a small pocket-size device such as a PDA. The main focus of this paper is this third component, which is viable but has yet to be explored. To facilitate the proof of concept, a flatbed scanner is used in place of a ticket reader as the input device. A Stargate processor board with Embedded Linux installed serves as the embedded system. It is connected to the input device as well as to output devices such as an LCD display or a laptop. It executes the C-coded processing program developed for this embedded system and presents its findings on a display device. The embedded system developed and investigated in this paper is the core of a future hardware device. Several issues arising in such an embedded system are addressed. Finally, the proof-of-concept pilot embedded system is demonstrated.

  13. Polarization digital holographic microscopy using low-cost liquid crystal polarization rotators

    NASA Astrophysics Data System (ADS)

    Dovhaliuk, Rostyslav Yu

    2018-02-01

    Polarization imaging methods are actively used to study anisotropic objects. A number of methods and systems, such as imaging polarimeters, were proposed to measure the state of polarization of light that passed through the object. Digital holographic and interferometric approaches can be used to quantitatively measure both amplitude and phase of a wavefront. Using polarization modulation optics, the measurement capabilities of such interference-based systems can be extended to measure polarization-dependent parameters, such as phase retardation. Different kinds of polarization rotators can be used to alternate the polarization of a reference beam. Liquid crystals are used in a rapidly increasing number of different optoelectronic devices. Twisted nematic liquid crystals are widely used as amplitude modulators in electronic displays and light valves or shutter glass. Such devices are of particular interest for polarization imaging, as they can be used as polarization rotators, and due to large-scale manufacturing have relatively low cost. A simple Mach-Zehnder polarized holographic setup that uses modified shutter glass as a polarization rotator is demonstrated. The suggested approach is experimentally validated by measuring retardation of quarter-wave film.
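    The use of a rotator to alternate the reference-beam polarization can be illustrated with standard Jones calculus (a generic sketch, not the paper's specific liquid-crystal model):

```python
import numpy as np

def rotator(theta):
    """Jones matrix of an ideal polarization rotator by angle theta (rad)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s],
                     [s,  c]])

horizontal = np.array([1.0, 0.0])          # Jones vector, H-polarized light
rotated = rotator(np.pi / 2) @ horizontal  # rotate the plane by 90 degrees
```

Switching the rotator between 0 and 90 degrees yields two orthogonal reference polarizations, which is what allows an interference-based system to probe polarization-dependent parameters such as retardation.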

  14. High speed, intermediate resolution, large area laser beam induced current imaging and laser scribing system for photovoltaic devices and modules

    NASA Astrophysics Data System (ADS)

    Phillips, Adam B.; Song, Zhaoning; DeWitt, Jonathan L.; Stone, Jon M.; Krantz, Patrick W.; Royston, John M.; Zeller, Ryan M.; Mapes, Meghan R.; Roland, Paul J.; Dorogi, Mark D.; Zafar, Syed; Faykosh, Gary T.; Ellingson, Randy J.; Heben, Michael J.

    2016-09-01

    We have developed a laser beam induced current imaging tool for photovoltaic devices and modules that utilizes diode pumped Q-switched lasers. Power densities on the order of one sun (100 mW/cm2) can be produced in a ~40 μm spot size by operating the lasers at low diode current and high repetition rate. Using galvanometer-controlled mirrors in an overhead configuration and high speed data acquisition, large areas can be scanned in short times. As the beam is rastered, focus is maintained on a flat plane with an electronically controlled lens that is positioned in a coordinated fashion with the movements of the mirrors. The system can also be used in a scribing mode by increasing the diode current and decreasing the repetition rate. In either mode, the instrument can accommodate samples ranging in size from laboratory scale (few cm2) to full modules (1 m2). Customized LabVIEW programs were developed to control the components and acquire, display, and manipulate the data in imaging mode.

  15. Detection of explosives by differential hyperspectral imaging

    NASA Astrophysics Data System (ADS)

    Dubroca, Thierry; Brown, Gregory; Hummel, Rolf E.

    2014-02-01

    Our team has pioneered an explosives detection technique based on hyperspectral imaging of surfaces. Briefly, differential reflectometry (DR) shines ultraviolet (UV) and blue light on two close-by areas on a surface (for example, a piece of luggage on a moving conveyer belt). Upon reflection, the light is collected with a spectrometer combined with a charge coupled device (CCD) camera. A computer processes the data and produces in turn differential reflection spectra taken from these two adjacent areas on the surface. This differential technique is highly sensitive and provides spectroscopic data of materials, particularly of explosives. As an example, 2,4,6-trinitrotoluene displays strong and distinct features in differential reflectograms near 420 and 250 nm, that is, in the near-UV region. Similar, but distinctly different features are observed for other explosives. Finally, a custom algorithm classifies the collected spectral data and outputs an acoustic signal if a threat is detected. This paper presents the complete DR hyperspectral imager which we have designed and built from the hardware to the software, complete with an analysis of the device specifications.
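    A differential reflection spectrum of the kind described can be sketched numerically; the normalization 2(R1 - R2)/(R1 + R2) is the usual differential-reflectometry convention, and the spectra below are toy values:

```python
import numpy as np

def differential_reflectogram(r1, r2):
    """Normalized difference of reflectance spectra from two adjacent
    surface areas: Delta R / R-bar = 2 (R1 - R2) / (R1 + R2)."""
    r1 = np.asarray(r1, dtype=float)
    r2 = np.asarray(r2, dtype=float)
    return 2.0 * (r1 - r2) / (r1 + r2)

# Toy spectra: identical except near one "absorption" band on area 2.
r_clean = np.array([0.50, 0.50, 0.50, 0.50])
r_trace = np.array([0.50, 0.45, 0.40, 0.50])
dr = differential_reflectogram(r_clean, r_trace)
```

Because common-mode features of the two areas cancel in the ratio, only the spectral signature of the contaminant (e.g., an explosive residue) survives in `dr`.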

  16. A Compact Polarization Imager

    NASA Technical Reports Server (NTRS)

    Thompson, Karl E.; Rust, David M.; Chen, Hua

    1995-01-01

    A new type of image detector has been designed to analyze the polarization of light simultaneously at all picture elements (pixels) in a scene. The Integrated Dual Imaging Detector (IDID) consists of a polarizing beamsplitter bonded to a custom-designed charge-coupled device with signal-analysis circuitry, all integrated on a silicon chip. The IDID should simplify the design and operation of imaging polarimeters and spectroscopic imagers used, for example, in atmospheric and solar research. Other applications include environmental monitoring and robot vision. Innovations in the IDID include two interleaved 512 x 1024 pixel imaging arrays (one for each polarization plane), large dynamic range (well depth of 10^6 electrons per pixel), simultaneous readout and display of both images at 10^6 pixels per second, and on-chip analog signal processing to produce polarization maps in real time. When used with a lithium niobate Fabry-Perot etalon or other color filter that can encode spectral information as polarization, the IDID can reveal tiny differences between simultaneous images at two wavelengths.

  17. Model-based error diffusion for high fidelity lenticular screening.

    PubMed

    Lau, Daniel; Smith, Trebor

    2006-04-17

    Digital halftoning is the process of converting a continuous-tone image into an arrangement of black and white dots for binary display devices such as digital ink-jet and electrophotographic printers. As printers are achieving print resolutions exceeding 1,200 dots per inch, it is becoming increasingly important for halftoning algorithms to consider the variations and interactions in the size and shape of printed dots between neighboring pixels. In the case of lenticular screening where statistically independent images are spatially multiplexed together, ignoring these variations and interactions, such as dot overlap, will result in poor lenticular image quality. To this end, we describe our use of model-based error-diffusion for the lenticular screening problem where statistical independence between component images is achieved by restricting the diffusion of error to only those pixels of the same component image where, in order to avoid instabilities, the proposed approach involves a novel error-clipping procedure.
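    The restriction of error diffusion to pixels of the same component image, together with error clipping, can be sketched in one dimension; this is a simplified stand-in for the paper's model-based algorithm, with an assumed two-column component interleave:

```python
import numpy as np

def lenticular_error_diffusion(row, clip=0.5):
    """1-D error diffusion over a row of [0, 1] gray values in which two
    component images are interleaved column-wise. The quantization error
    is diffused only to the next pixel of the SAME component (two columns
    ahead) and clipped to +/- `clip` to avoid instability."""
    row = row.astype(float).copy()
    out = np.zeros_like(row)
    for i in range(len(row)):
        out[i] = 1.0 if row[i] >= 0.5 else 0.0          # binarize pixel
        err = np.clip(row[i] - out[i], -clip, clip)     # clipped error
        if i + 2 < len(row):
            row[i + 2] += err                           # same component only
    return out

halftone = lenticular_error_diffusion(np.full(8, 0.5))
```

Diffusing error only within a component keeps the two interleaved halftones statistically independent, so neither view contaminates the other under the lenticules.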

  18. Short-Term Memory for Figure-Ground Organization in the Visual Cortex

    PubMed Central

    O’Herron, Philip; von der Heydt, Rüdiger

    2009-01-01

    Summary Whether the visual system uses a buffer to store image information and the duration of that storage have been debated intensely in recent psychophysical studies. The long phases of stable perception of reversible figures suggest a memory that persists for seconds. But persistence of similar duration has not been found in signals of the visual cortex. Here we show that figure-ground signals in the visual cortex can persist for a second or more after the removal of the figure-ground cues. When new figure-ground information is presented, the signals adjust rapidly, but when a figure display is changed to an ambiguous edge display, the signals decay slowly – a behavior that is characteristic of memory devices. Figure-ground signals represent the layout of objects in a scene, and we propose that a short-term memory for object layout is important in providing continuity of perception in the rapid stream of images flooding our eyes. PMID:19285475

  19. Humanization of work circumstances in dialog communication using data display devices, volume 1

    NASA Astrophysics Data System (ADS)

    Graunke, H.; Julich, H.; Petersen, H. C.; Schaefer, H.; Strupp, K.

    1982-11-01

    The effects of data display devices on workplaces were investigated; data processing by data display devices as such is not considered. An important criterion for job satisfaction is integration into complex job structures. Corresponding to this principle of organization is teamwork with a flexible division of labor, which provides the opportunity and motivation for a cooperative, self-controlled working process and reduces the strain caused by data display devices. It was found that in public administration, a team with an institutional leadership exercising primarily socially integrative functions is appreciated most.

  20. Extremely Vivid, Highly Transparent, and Ultrathin Quantum Dot Light-Emitting Diodes.

    PubMed

    Choi, Moon Kee; Yang, Jiwoong; Kim, Dong Chan; Dai, Zhaohe; Kim, Junhee; Seung, Hyojin; Kale, Vinayak S; Sung, Sae Jin; Park, Chong Rae; Lu, Nanshu; Hyeon, Taeghwan; Kim, Dae-Hyeong

    2018-01-01

    Displaying information on transparent screens offers new opportunities in next-generation electronics, such as augmented reality devices, smart surgical glasses, and smart windows. Outstanding luminance and transparency are essential for such "see-through" displays to show vivid images over a clear background view. Here transparent quantum dot light-emitting diodes (Tr-QLEDs) are reported with high brightness (bottom: ≈43 000 cd m⁻², top: ≈30 000 cd m⁻², total: ≈73 000 cd m⁻² at 9 V), excellent transmittance (90% at 550 nm, 84% over the visible range), and an ultrathin form factor (≈2.7 µm thickness). These superb characteristics are accomplished by novel electron transport layers (ETLs) and engineered quantum dots (QDs). The ETLs, ZnO nanoparticle assemblies with ultrathin alumina overlayers, dramatically enhance the durability of the active layers and balance electron/hole injection into the QDs, which prevents nonradiative recombination processes. In addition, the QD structure is further optimized to fully exploit the device architecture. The ultrathin nature of the Tr-QLEDs allows their conformal integration on variously shaped objects. Finally, the high-resolution patterning of red, green, and blue Tr-QLEDs (513 pixels per inch) shows the potential of the full-color transparent display. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  1. OLED microdisplay design and materials

    NASA Astrophysics Data System (ADS)

    Wacyk, Ihor; Prache, Olivier; Ali, Tariq; Khayrullin, Ilyas; Ghosh, Amalkumar

    2010-04-01

    AMOLED microdisplays from eMagin Corporation are finding growing acceptance within the military display market as a result of their excellent power efficiency, wide operating temperature range, small size and weight, good system flexibility, and ease of use. The latest designs have also demonstrated improved optical performance including better uniformity, contrast, MTF, and color gamut. eMagin's largest format display is currently the SXGA design, which includes features such as a 30-bit wide RGB digital interface, automatic luminance regulation from -45 to +70°C, variable gamma control, and a dynamic range exceeding 50,000 to 1. This paper will highlight the benefits of eMagin's latest microdisplay designs and review the roadmap for next generation devices. The ongoing development of reduced size pixels and larger format displays (up to WUXGA) as well as new OLED device architecture (e.g. high-brightness yellow) will be discussed. Approaches being explored for improved performance in next generation designs such as low-power serial interfaces, high frame rate operation, and new operational modes for reduction of motion artifacts will also be described. These developments should continue to enhance the appeal of AMOLED microdisplays for a broad spectrum of near-to-the-eye applications such as night vision, simulation and training, situational awareness, augmented reality, medical imaging, and mobile video entertainment and gaming.

  2. Radar image processing module development program, phase 3

    NASA Technical Reports Server (NTRS)

    1977-01-01

    The feasibility of using charge coupled devices in an IPM for processing synthetic aperture radar signals onboard the NASA Convair 990 (CV990) aircraft was demonstrated. Radar data onboard the aircraft was recorded and processed using a CCD sampler and digital tape recorder. A description of equipment and testing was provided. The derivation of the digital presum filter was documented. Photographs of the sampler/tape recorder, real time display and circuit boards in the IPM were also included.

  3. A multi-directional backlight for a wide-angle, glasses-free three-dimensional display.

    PubMed

    Fattal, David; Peng, Zhen; Tran, Tho; Vo, Sonny; Fiorentino, Marco; Brug, Jim; Beausoleil, Raymond G

    2013-03-21

    Multiview three-dimensional (3D) displays can project the correct perspectives of a 3D image in many spatial directions simultaneously. They provide a 3D stereoscopic experience to many viewers at the same time with full motion parallax and do not require special glasses or eye tracking. None of the leading multiview 3D solutions is particularly well suited to mobile devices (watches, mobile phones or tablets), which require the combination of a thin, portable form factor, a high spatial resolution and a wide full-parallax view zone (for short viewing distance from potentially steep angles). Here we introduce a multi-directional diffractive backlight technology that permits the rendering of high-resolution, full-parallax 3D images in a very wide view zone (up to 180 degrees in principle) at an observation distance of up to a metre. The key to our design is a guided-wave illumination technique based on light-emitting diodes that produces wide-angle multiview images in colour from a thin planar transparent lightguide. Pixels associated with different views or colours are spatially multiplexed and can be independently addressed and modulated at video rate using an external shutter plane. To illustrate the capabilities of this technology, we use simple ink masks or a high-resolution commercial liquid-crystal display unit to demonstrate passive and active (30 frames per second) modulation of a 64-view backlight, producing 3D images with a spatial resolution of 88 pixels per inch and full-motion parallax in an unprecedented view zone of 90 degrees. We also present several transparent hand-held prototypes showing animated sequences of up to six different 200-view images at a resolution of 127 pixels per inch.

  4. Optimization of the polyplanar optical display electronics for a monochrome B-52 display

    NASA Astrophysics Data System (ADS)

    DeSanto, Leonard

    1998-09-01

    The Polyplanar Optical Display (POD) is a unique display screen which can be used with any projection source. The prototype ten-inch display is two inches thick and has a matte black face which allows for high contrast images. The prototype being developed is a form, fit and functional replacement display for the B-52 aircraft, which uses a monochrome ten-inch display. In order to achieve a long lifetime, the new display uses a new 200 mW green solid-state laser (10,000 hr. life) at 532 nm as its light source. To produce real-time video, the laser light is being modulated by a Digital Light Processing (DLP™) chip manufactured by Texas Instruments (TI). In order to use the solid-state laser as the light source and also fit within the constraints of the B-52 display, the Digital Micromirror Device (DMD™) chip is operated remotely from the Texas Instruments circuit board. In order to achieve increased brightness, a monochrome digitizing interface was investigated. The operation of the DMD™ divorced from the light engine and the interfacing of the DMD™ board with the RS-170 video format specific to the B-52 aircraft will be discussed, including the increased brightness of the monochrome digitizing interface. A brief description of the electronics required to drive the new 200 mW laser is also presented.

  5. A visual-display and storage device

    NASA Technical Reports Server (NTRS)

    Bosomworth, D. R.; Moles, W. H.

    1972-01-01

    Memory and display device uses cathodochromic material to store visual information and a fast phosphor to recall information for display and electronic processing. Cathodochromic material changes color when bombarded with electrons, and is restored to its original color when exposed to light of appropriate wavelength.

  6. Video display engineering and optimization system

    NASA Technical Reports Server (NTRS)

    Larimer, James (Inventor)

    1997-01-01

    A video display engineering and optimization CAD simulation system for designing an LCD display integrates models of a display device circuit, electro-optics, surface geometry, and physiological optics to model the system performance of a display. This CAD system permits system performance and design trade-offs to be evaluated without constructing a physical prototype of the device. The system includes a series of modules which permit analysis of design trade-offs in terms of their visual impact on a viewer looking at a display.

  7. Voxel-based plaque classification in coronary intravascular optical coherence tomography images using decision trees

    NASA Astrophysics Data System (ADS)

    Kolluru, Chaitanya; Prabhu, David; Gharaibeh, Yazan; Wu, Hao; Wilson, David L.

    2018-02-01

    Intravascular Optical Coherence Tomography (IVOCT) is a high contrast, 3D microscopic imaging technique that can be used to assess atherosclerosis and guide stent interventions. Despite its advantages, IVOCT image interpretation is challenging and time consuming, with over 500 image frames generated in a single pullback volume. We have developed a method to classify voxel plaque types in IVOCT images using machine learning. To train and test the classifier, we have used our unique database of labeled cadaver vessel IVOCT images accurately registered to gold standard cryoimages. This database currently contains 300 images and is growing. Each voxel is labeled as fibrotic, lipid-rich, calcified or other. Optical attenuation, intensity and texture features were extracted for each voxel and were used to build a decision tree classifier for multi-class classification. Five-fold cross-validation across images gave accuracies of 96% ± 0.01%, 90% ± 0.02%, and 90% ± 0.01% for the fibrotic, lipid-rich and calcified classes respectively. To rectify the performance degradation seen in left-out vessel specimens as opposed to left-out images, we are adding data and reducing features to limit overfitting. Following spatial noise cleaning, important vascular regions were unambiguous in display. We developed displays that enable physicians to make rapid determination of calcified and lipid regions. This will inform treatment decisions such as the need for devices (e.g., atherectomy or scoring balloon in the case of calcifications) or extended stent lengths to ensure coverage of lipid regions prone to injury at the edge of a stent.
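    The image-level (rather than voxel-level) five-fold split described above can be sketched as follows; the fold-assignment logic is an assumption for illustration, not the authors' code:

```python
import numpy as np

def five_fold_indices(n_images, k=5, seed=0):
    """Split image indices into k cross-validation folds so that all
    voxels of a given image stay in the same fold (the split is across
    images, not across voxels, as in the described evaluation)."""
    rng = np.random.default_rng(seed)
    order = rng.permutation(n_images)
    return [order[i::k] for i in range(k)]  # k roughly equal folds

# 300 labeled images, as in the database described above.
folds = five_fold_indices(300)
```

Splitting by image prevents voxels from one image from appearing in both the training and test sets, which would otherwise inflate accuracy estimates.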

  8. MACSIGMA0 - MACINTOSH TOOL FOR ANALYZING JPL AIRSAR, ERS-1, JERS-1, AND MAGELLAN MIDR DATA

    NASA Technical Reports Server (NTRS)

    Norikane, L.

    1994-01-01

    MacSigma0 is an interactive tool for the Macintosh which allows you to display and make computations from radar data collected by the following sensors: the JPL AIRSAR, ERS-1, JERS-1, and Magellan. The JPL AIRSAR system is a multi-polarimetric airborne synthetic aperture radar developed and operated by the Jet Propulsion Laboratory. It includes the single-frequency L-band sensor mounted on the NASA CV990 aircraft and its replacement, the multi-frequency P-, L-, and C-band sensors mounted on the NASA DC-8. MacSigma0 works with data in the standard JPL AIRSAR output product format, the compressed Stokes matrix format. ERS-1 and JERS-1 are single-frequency, single-polarization spaceborne synthetic aperture radars launched by the European Space Agency and NASDA respectively. To be usable by MacSigma0, the data must have been processed at the Alaska SAR Facility and must be in the "low-resolution" format. Magellan is a spacecraft mission to map the surface of Venus with imaging radar. The project is managed by the Jet Propulsion Laboratory. The spacecraft carries a single-frequency, single-polarization synthetic aperture radar. MacSigma0 works with framelets of the standard MIDR CD-ROM data products. MacSigma0 provides four basic functions: synthesis of images (if necessary), statistical analysis of selected areas, analysis of corner reflectors as a calibration measure (if appropriate and possible), and informative mouse tracking. For instance, the JPL AIRSAR data can be used to synthesize a variety of images such as a total power image. The total power image displays the sum of the polarized and unpolarized components of the backscatter for each pixel. Other images which can be synthesized are HH, HV, VV, RL, RR, HHVV*, HHHV*, HVVV*, HHVV* phase and correlation coefficient images. For the complex and phase images, phase is displayed using color and magnitude is displayed using intensity. MacSigma0 can also be used to compute statistics from within a selected area.
The statistics computed depend on the image type. For JPL AIRSAR data, the HH, HV, VV, HHVV* phase, and correlation coefficient means and standard deviation measures are calculated. The mean, relative standard deviation, minimum, and maximum values are calculated for all other data types. A histogram of the selected area is also calculated and displayed. The selected area can be rectangular, linear, or polygonal in shape. The user is allowed to select multiple rectangular areas, but not multiple linear or polygonal areas. The statistics and histogram are displayed to the user and can either be printed or saved as a text file. MacSigma0 can also be used to analyze corner reflectors as a measure of the calibration for JPL AIRSAR, ERS-1, and JERS-1 data types. It computes a theoretical radar cross section and the actual radar cross section for a selected trihedral corner reflector. The theoretical cross section, measured cross section, their ratio in dBs, and other information are displayed to the user and can be saved into a text file. For ERS-1, JERS-1, and Magellan data, MacSigma0 simultaneously displays pixel location in data coordinates and in latitude, longitude coordinates. It also displays sigma0, the incidence angle (for Magellan data), the original pixel value (for Magellan data), and the noise power value (for ERS-1 and JERS-1 data). Grey scale computed images can be saved in a byte format (a headerless format which saves the image as a string of byte values) or a PICT format (a standard format readable by other image processing programs for the Macintosh). Images can also be printed. MacSigma0 is written in C-language for use on Macintosh series computers. The minimum configuration requirements for MacSigma0 are System 6.0, Finder 6.1, 1Mb of RAM, and at least a 4-bit color or grey-scale graphics display. MacSigma0 is also System 7 compatible. 
To compile the source code, Apple's Macintosh Programmers Workbench (MPW) 3.2 and the MPW C language compiler version 3.2 are required. The source code will not compile with a later version of the compiler; however, the compiled application, which will run under the minimum hardware configuration, is provided on the distribution medium. In addition, the distribution medium includes an executable which runs significantly faster but requires a 68881-compatible math coprocessor and a 68020-compatible CPU. Since JPL AIRSAR data files can be very large, it is often desirable to reduce the size of a data file before transferring it to the Macintosh for use in MacSigma0. A small FORTRAN program which can be used for this purpose is included on the distribution medium. MacSigma0 will print statistics on any output device which supports QuickDraw, and it will print images on any device which supports QuickDraw or PostScript. The standard distribution medium for MacSigma0 is a set of five 1.4Mb Macintosh format diskettes. This program was developed in 1992 and is a copyrighted work with all copyright vested in NASA. Version 4.2 of MacSigma0 was released in 1993.
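
MacSigma0's area analysis (mean, relative standard deviation, minimum, maximum, and a histogram over a rectangular, linear, or polygonal selection) can be sketched in a few lines. This is an illustrative reimplementation in Python/NumPy, not the tool's actual C code; the function name `region_stats` and the boolean-mask selection are assumptions for the example.

```python
import numpy as np

def region_stats(image, mask):
    """Statistics over a selected region, in the spirit of MacSigma0's
    area analysis: mean, relative standard deviation, min, max, histogram.
    `image` is a 2D array of pixel values; `mask` is a boolean array of
    the same shape marking the selected pixels (a rectangle here, but a
    polygonal selection is just a different mask)."""
    vals = image[mask]
    mean = vals.mean()
    rel_std = vals.std() / mean if mean != 0 else float("nan")
    hist, edges = np.histogram(vals, bins=16)
    return {"mean": mean, "rel_std": rel_std,
            "min": vals.min(), "max": vals.max(),
            "hist": hist, "edges": edges}

# Example: a rectangular selection within a synthetic backscatter image
img = np.arange(100.0).reshape(10, 10)
sel = np.zeros((10, 10), dtype=bool)
sel[2:5, 3:7] = True          # rows 2-4, columns 3-6
stats = region_stats(img, sel)
```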

  9. Three-dimensional reconstruction and display of the heart, lungs and circulation by multiplanar X-ray scanning videodensitometry

    NASA Technical Reports Server (NTRS)

    Robb, R. A.; Ritman, E. L.; Wood, E. H.

    1975-01-01

    A device was developed which makes possible the dynamic reconstruction of the heart and lungs within the intact thorax of a living dog or human. It can record approximately 30 multiplanar X-ray images of the thorax practically instantaneously, at intervals of time frequent enough, and with sufficient density and spatial resolution, to capture and resolve the most rapid changes in cardiac structural detail throughout each cardiac cycle. It can be installed in a clinical diagnostic setting as well as in a research environment, and its construction and application for real-time determination and display of cross sections of the functioning thorax and its contents in living animals and man is technologically feasible.

  10. Preliminary display comparison for dental diagnostic applications

    NASA Astrophysics Data System (ADS)

    Odlum, Nicholas; Spalla, Guillaume; van Assche, Nele; Vandenberghe, Bart; Jacobs, Reinhilde; Quirynen, Marc; Marchessoux, Cédric

    2012-02-01

    The aim of this study is to predict the clinical performance and image quality of a display system for viewing dental images. At present, the use of dedicated medical displays is not uniform among dentists - many still view images on ordinary consumer displays. This work investigated whether the use of a medical display improved a clinician's perception of dental images compared to a consumer display. Display systems were simulated using the MEdical Virtual Imaging Chain (MEVIC). Images derived from two carefully performed studies, on periodontal bone lesion detection and endodontic file length determination, were used. Three displays were selected: a medical-grade one and two consumer displays (Barco MDRC-2120, Dell 1907FP and Dell 2007FPb). Typical characteristics of the displays, such as the Modulation Transfer Function (MTF), the Noise Power Spectrum (NPS), backlight stability and calibration, were evaluated by measurements and simulations. For the MTF, the display with the largest pixel pitch logically has the worst MTF; moreover, the medical-grade display has a slightly better MTF, and the displays have similar NPS. The study shows the instability of the emitted intensity of the consumer displays compared to the medical-grade one. Finally, the study on the calibration methodology of the displays shows that the signal in the dental images will always be more perceivable on a DICOM GSDF display than on a gamma 2.2 display.
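
The gamma 2.2 calibration mentioned above maps gray levels to luminance with a power law, which compresses the dark end of the scale; the DICOM GSDF instead spaces levels in equal just-noticeable-difference steps. A minimal sketch of the gamma mapping, with illustrative luminance values (not those of the displays measured in the study), shows why low-contrast dark detail suffers:

```python
def gamma_luminance(level, l_min=0.5, l_max=250.0, gamma=2.2, levels=256):
    """Luminance (cd/m^2) of a gray level on a gamma-calibrated display.
    l_min/l_max are assumed example values. A DICOM GSDF calibration would
    instead space levels so each step spans an equal number of JNDs."""
    v = level / (levels - 1)
    return l_min + (l_max - l_min) * v ** gamma

# A 10-level step in the shadows produces a far smaller luminance
# difference than the same step in the highlights:
dark_step = gamma_luminance(20) - gamma_luminance(10)
bright_step = gamma_luminance(245) - gamma_luminance(235)
```

Under these assumed parameters, the dark step is well under 1 cd/m^2 while the bright step is around 20 cd/m^2, which is consistent with the abstract's finding that subtle dental contrasts are more perceivable on a GSDF-calibrated display.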

  11. Synfograms: a new generation of holographic applications

    NASA Astrophysics Data System (ADS)

    Meulien Öhlmann, Odile; Öhlmann, Dietmar; Zacharovas, Stanislovas J.

    2008-04-01

    The new synthetic four-dimensional printing technique (Syn4D) Synfogram introduces time (animation) into the spatial configuration of the imprinted three-dimensional shapes. While lenticular solutions offer 2 to 9 stereoscopic images, Syn4D offers large-format, full-color, true-3D visualization printing of 300 to 2500 frames imprinted as holographic dots. Over the past two years, Syn4D high-resolution displays have proved to be extremely efficient for museum presentations, engineering design, automobile prototyping and advertising virtual presentations, as well as for portrait and fashion applications. The main advantage of Syn4D is that it offers a very easy way of using a variety of digital media: most 3D modelling programs, 3D scanning systems, video sequences, digital photography and tomography, as well as the Syn4D camera track system for live recording of spatial scenes changing in time. The use of a digital holographic printer in conjunction with Syn4D image acquisition and processing devices separates printing from image creation, making four-dimensional printing similar to a conventional digital photography process, where imaging and printing are usually separated in space and time. Besides making content easy to prepare, Syn4D has also developed new display and lighting solutions for trade shows, museums, POP, merchandising, etc. The introduction of Synfograms is opening new applications for real-life and virtual 4D displays. In this paper we analyse the 3D market, the properties of Synfograms and their specific applications, the problems we encountered and the solutions we found, and discuss customer demand and the need for new product development.

  12. Initial experience with a radiology imaging network to newborn and intensive care units.

    PubMed

    Witt, R M; Cohen, M D; Appledorn, C R

    1991-02-01

    A digital image network has been installed in the James Whitcomb Riley Hospital for Children at the Indiana University Medical Center to create a limited all-digital imaging system. The system is composed of commercial components, the Philips/AT&T CommView system (Philips Medical Systems, Shelton, CT; AT&T Bell Laboratories, West Long Beach, NJ), and connects an existing Philips Computed Radiology (PCR) system to two remote workstations that reside in the intensive care unit and the newborn nursery. The purpose of the system is to display images obtained from the PCR system on the remote workstations for direct viewing by referring clinicians, and to reduce many of their visits to the radiology reading room three floors away. The design criteria include the ability to centrally control all image management functions on the remote workstations to relieve the clinicians from any image management tasks except for recalling patient images. The principal components of the system are the Philips PCR system, the acquisition module (AM), and the PCR interface to the Data Management Module (DMM). Connected to the DMM are an Enhanced Graphics Display Workstation (EGDW), an optical disk drive, and a network gateway to an ethernet link. The ethernet network is the connection to the two Results Viewing Stations (RVS), and both RVSs are approximately 100 m from the gateway. The DMM acts as an image file server and an image archive device. The DMM manages the image data base and can load images to the EGDW and the two RVSs. The system has met the initial design specifications and can successfully capture images from the PCR and direct them to the RVSs.(ABSTRACT TRUNCATED AT 250 WORDS)

  13. Clinical development of BLZ-100 for real-time optical imaging of tumors during resection

    NASA Astrophysics Data System (ADS)

    Franklin, Heather L.; Miller, Dennis M.; Hedges, Teresa; Perry, Jeff; Parrish-Novak, Julia

    2016-03-01

    Complete initial resection can give cancer patients the best opportunity for long-term survival. There is unmet need in surgical oncology for optical imaging that enables simple and precise visualization of tumors and consistent contrast with surrounding normal tissues. Near-infrared (NIR) contrast agents and camera systems that can detect them represent an area of active research and development. The investigational Tumor Paint agent BLZ-100 is a conjugate of a chlorotoxin peptide and the NIR dye indocyanine green (ICG) that has been shown to specifically bind to a broad range of solid tumors. Clinical efficacy studies with BLZ-100 are in progress, a necessary step in bringing the product into clinical practice. To ensure a product that will be useful for and accepted by surgeons, the early clinical development of BLZ-100 incorporates multiple tumor types and imaging devices so that surgeon feedback covers the range of anticipated clinical uses. Key contrast agent characteristics include safety, specificity, flexibility in timing between dose and surgery, and breadth of tumor types recognized. Imaging devices should use wavelengths that are optimal for the contrast agent, be sensitive enough that contrast agent dosing can be adjusted for optimal contrast, include real-time video display of fluorescence and white light image, and be simple for surgeons to use with minimal disruption of surgical flow. Rapid entry into clinical studies provides the best opportunity for early surgeon feedback, enabling development of agents and devices that will gain broad acceptance and provide information that helps surgeons achieve more complete and precise resections.

  14. A framework for breast cancer visualization using augmented reality x-ray vision technique in mobile technology

    NASA Astrophysics Data System (ADS)

    Rahman, Hameedur; Arshad, Haslina; Mahmud, Rozi; Mahayuddin, Zainal Rasyid

    2017-10-01

    The number of breast cancer patients who require breast biopsy has increased over the past years, and Augmented Reality guided core biopsy of the breast has become the method of choice for researchers. However, this cancer visualization has been limited to superimposing the 3D imaging data only. In this paper, we introduce an Augmented Reality visualization framework that enables breast cancer biopsy image guidance by using an X-ray vision technique on a mobile display. This framework consists of four phases: it initially acquires the image from CT/MRI and processes the medical images into 3D slices; secondly, it refines these 3D grayscale slices into a 3D breast tumor model using a 3D model reconstruction technique. Further, in visualization processing, this virtual 3D breast tumor model is enhanced using the X-ray vision technique to see through the skin of the phantom, and the final composition is displayed on a handheld device to optimize the accuracy of the visualization in six degrees of freedom. The framework is perceived as an improved visualization experience because the Augmented Reality x-ray vision allows direct understanding of the breast tumor beyond the visible surface and direct guidance towards accurate biopsy targets.

  15. Ink-jet printout of radiographs on transparent film and glossy paper versus monitor display: an ROC analysis.

    PubMed

    Kühl, Sebastian; Krummenauer, Frank; Dagassan-Berndt, Dorothea; Lambrecht, Thomas J; d'Hoedt, Bernd; Schulze, Ralf Kurt Willy

    2011-06-01

    The aim of this study was to compare the depiction ability of small grayscale contrasts in ink-jet printouts of digital radiographs on different print media with a CRT monitor. A CCD-based digital cephalometric image of a stepless aluminum wedge containing 50 bur holes of different depth was cut into 100 isometric images. Each image was printed on glossy paper and on transparent film by means of a high-resolution desktop inkjet printer at specific settings. The printed images were viewed under standardized conditions, and the perceptibility of the bur holes was evaluated and compared to the perceptibility on a 17-in CRT monitor. Thirty observers stated their blinded decision on a five-point confidence scale. Areas (Az) under receiver operating characteristic curves were calculated and compared using pairwise sign tests. Overall agreement was estimated using Cohen's kappa, and observer bias using McNemar's test. Glossy paper prints and monitor display revealed significantly higher (P < 0.001) average Az values (0.83) compared to prints on transparent film (0.79), which was caused by higher sensitivity. Specificity was similar for all modalities. The sensitivity was dependent on the mean grayscale values for the transparent film.

  16. Immersive viewing engine

    NASA Astrophysics Data System (ADS)

    Schonlau, William J.

    2006-05-01

    An immersive viewing engine providing basic telepresence functionality for a variety of application types is presented. Augmented reality, teleoperation and virtual reality applications all benefit from the use of head-mounted display devices that present imagery appropriate to the user's head orientation at full frame rates. Our primary application is the viewing of remote environments, as with a camera-equipped teleoperated vehicle. The conventional approach, where imagery from a narrow-field camera onboard the vehicle is presented to the user on a small rectangular screen, is contrasted with an immersive viewing system where a cylindrical or spherical format image is received from a panoramic camera on the vehicle, resampled in response to sensed user head orientation, and presented via a wide-field eyewear display approaching 180 degrees of horizontal field. Of primary interest is the user's enhanced ability to perceive and understand image content, even when image resolution parameters are poor, due to the innate visual integration and 3-D model generation capabilities of the human visual system. A mathematical model for tracking user head position and resampling the panoramic image to attain distortion-free viewing of the region appropriate to the user's current head pose is presented, and consideration is given to providing the user with stereo viewing generated from depth map information derived using stereo-from-motion algorithms.
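
The resampling step described above maps each pixel of the head-mounted view to a location in the panoramic image. A minimal sketch, assuming an equirectangular (spherical-format) panorama and a pinhole view model (the function names and the yaw/pitch-only rotation are simplifications for illustration, not the paper's actual model):

```python
import math

def sample_direction(yaw, pitch, u, v, fov):
    """View-ray direction for normalized image coordinates u, v in [-1, 1]
    of a pinhole view with horizontal field of view `fov` (radians),
    rotated by the sensed head pitch (about x) then yaw (about y).
    Returns panorama longitude and latitude in radians."""
    t = math.tan(fov / 2)
    x, y, z = u * t, v * t, 1.0           # camera-space ray, z forward
    y, z = (y * math.cos(pitch) - z * math.sin(pitch),
            y * math.sin(pitch) + z * math.cos(pitch))
    x, z = (x * math.cos(yaw) + z * math.sin(yaw),
            -x * math.sin(yaw) + z * math.cos(yaw))
    lon = math.atan2(x, z)
    lat = math.atan2(y, math.hypot(x, z))
    return lon, lat

def pano_pixel(lon, lat, width, height):
    """Map spherical coordinates to equirectangular pixel indices."""
    px = int((lon / (2 * math.pi) + 0.5) * (width - 1))
    py = int((0.5 - lat / math.pi) * (height - 1))
    return px, py
```

Iterating this over every output pixel (with interpolation instead of the integer rounding shown here) yields the distortion-free view for the current head pose.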

  17. Digital photography for the light microscope: results with a gated, video-rate CCD camera and NIH-image software.

    PubMed

    Shaw, S L; Salmon, E D; Quatrano, R S

    1995-12-01

    In this report, we describe a relatively inexpensive method for acquiring, storing and processing light microscope images that combines the advantages of video technology with the powerful medium now termed digital photography. Digital photography refers to the recording of images as digital files that are stored, manipulated and displayed using a computer. This report details the use of a gated video-rate charge-coupled device (CCD) camera and a frame grabber board for capturing 256 gray-level digital images from the light microscope. This camera gives high-resolution bright-field, phase contrast and differential interference contrast (DIC) images but, also, with gated on-chip integration, has the capability to record low-light level fluorescent images. The basic components of the digital photography system are described, and examples are presented of fluorescence and bright-field micrographs. Digital processing of images to remove noise, to enhance contrast and to prepare figures for printing is discussed.

  18. High ambient contrast ratio OLED and QLED without a circular polarizer

    NASA Astrophysics Data System (ADS)

    Tan, Guanjun; Zhu, Ruidong; Tsai, Yi-Shou; Lee, Kuo-Chang; Luo, Zhenyue; Lee, Yuh-Zheng; Wu, Shin-Tson

    2016-08-01

    A high ambient contrast ratio display device using a transparent organic light emitting diode (OLED) or transparent quantum-dot light-emitting diode (QLED) with embedded multilayered structure and absorber is proposed and its performance is simulated. With the help of multilayered structure, the device structure allows almost all ambient light to get through the display device and be absorbed by the absorber. Because the reflected ambient light is greatly reduced, the ambient contrast ratio of the display system is improved significantly. Meanwhile, the multilayered structure helps to lower the effective refractive index, which in turn improves the out-coupling efficiency of the display system. Potential applications for sunlight readable flexible and rollable displays are emphasized.
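
The benefit of suppressing reflected ambient light can be quantified with the common definition of ambient contrast ratio, in which the reflected ambient luminance is added to both the on- and off-state luminance. A sketch with illustrative numbers (the luminance and reflectance values are assumptions, not measurements from the paper):

```python
import math

def ambient_contrast_ratio(l_on, l_off, ambient_illuminance, reflectance):
    """Ambient contrast ratio: display luminance plus reflected ambient
    luminance (illuminance * reflectance / pi for a Lambertian surface),
    on-state over off-state. Inputs: cd/m^2, cd/m^2, lux, dimensionless."""
    l_reflected = ambient_illuminance * reflectance / math.pi
    return (l_on + l_reflected) / (l_off + l_reflected)

# Example: a 300 cd/m^2 emitter under 500 lux office lighting.
# Cutting the luminous reflectance from 4% to 1% greatly raises the ACR.
acr_high_r = ambient_contrast_ratio(300.0, 0.01, 500.0, 0.04)
acr_low_r = ambient_contrast_ratio(300.0, 0.01, 500.0, 0.01)
```

This is the mechanism the abstract describes: absorbing transmitted ambient light shrinks the reflected term in both numerator and denominator, which mainly rescues the denominator and therefore the contrast ratio.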

  19. Digital management and regulatory submission of medical images from clinical trials: role and benefits of the core laboratory

    NASA Astrophysics Data System (ADS)

    Robbins, William L.; Conklin, James J.

    1995-10-01

    Medical images (angiography, CT, MRI, nuclear medicine, ultrasound, x ray) play an increasingly important role in the clinical development and regulatory review process for pharmaceuticals and medical devices. Since medical images are increasingly acquired and archived digitally, or are readily digitized from film, they can be visualized, processed and analyzed in a variety of ways using digital image processing and display technology. Moreover, with image-based data management and data visualization tools, medical images can be electronically organized and submitted to the U.S. Food and Drug Administration (FDA) for review. The collection, processing, analysis, archival, and submission of medical images in a digital format versus an analog (film-based) format presents both challenges and opportunities for the clinical and regulatory information management specialist. The medical imaging 'core laboratory' is an important resource for clinical trials and regulatory submissions involving medical imaging data. Use of digital imaging technology within a core laboratory can increase efficiency and decrease overall costs in the image data management and regulatory review process.

  20. Use of stereo vision and 24-bit false-color imagery to enhance visualization of multimodal confocal images

    NASA Astrophysics Data System (ADS)

    Beltrame, Francesco; Diaspro, Alberto; Fato, Marco; Martin, I.; Ramoino, Paola; Sobel, Irwin E.

    1995-03-01

    Confocal microscopy systems can be linked to 3D-data-oriented devices for interactive navigation of the operator through a 3D object space. Such environments are sometimes called 'virtual reality' or 'augmented reality' systems. We consider optical confocal laser scanning microscopy images, in fluorescence with various excitations and emissions, and versus time. The aim of our study has been the quantitative spatial analysis of confocal data using the false-color composition technique. Starting from three 2D confocal fluorescent images at the same slice location in a given biological specimen, a new single-image representation of all three parameters has been generated by the false-color technique on a HP 9000/735 workstation connected to the confocal microscope. The color composite resulting from the mapping of the three parameters is displayed using a resolution of 24 bits per pixel. The operator may independently vary the mix of each of the three components in the false-color composite via three (R, G, B) mixing sliders. Furthermore, by using the pixel data in the three fluorescent component images, a 3D space containing the density distribution of these three parameters has been constructed. The histogram has been displayed in stereo; it can be used for clustering purposes by the operator, through an original thresholding algorithm.
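
The false-color composition with operator-controlled mixing sliders can be sketched as a weighted mapping of three grayscale channels into one 24-bit RGB image. This is an illustrative NumPy reimplementation, not the workstation's actual code; the function name and the clipping-to-8-bit convention are assumptions:

```python
import numpy as np

def false_color(ch_r, ch_g, ch_b, mix=(1.0, 1.0, 1.0)):
    """Combine three grayscale fluorescence channels into a 24-bit RGB
    image, each channel scaled by its mixing weight (analogous to the
    R, G, B sliders described in the abstract), clipped to [0, 255]."""
    out = np.stack([np.clip(c * w, 0, 255)
                    for c, w in zip((ch_r, ch_g, ch_b), mix)], axis=-1)
    return out.astype(np.uint8)

# Example: three synthetic channels with uneven slider settings
r = np.full((4, 4), 100.0)
g = np.full((4, 4), 80.0)
b = np.full((4, 4), 200.0)
rgb = false_color(r, g, b, mix=(1.0, 0.5, 2.0))   # blue channel saturates
```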
