Image quality metrics for volumetric laser displays
NASA Astrophysics Data System (ADS)
Williams, Rodney D.; Donohoo, Daniel
1991-08-01
This paper addresses the extensions to image quality metrics and related human factors research that are needed to establish baseline standards for emerging volume display technologies. The existing and recently developed technologies for multiplanar volume displays are reviewed with an emphasis on basic human visual issues. Human factors image quality metrics and guidelines are needed to firmly establish this technology in the marketplace. The human visual requirements and the display design tradeoffs for these prototype laser-based volume displays are addressed, and several critical image quality issues are identified for further research. The American National Standard for Human Factors Engineering of Visual Display Terminal Workstations (ANSI/HFS-100) and other international standards (ISO, DIN) can serve as a starting point, but this research base must be extended to provide new image quality metrics for volume displays.
Effect of Display Technology on Perceived Scale of Space.
Geuss, Michael N; Stefanucci, Jeanine K; Creem-Regehr, Sarah H; Thompson, William B; Mohler, Betty J
2015-11-01
Our goal was to evaluate the degree to which display technologies influence the perception of size in an image. Research suggests that factors such as whether an image is displayed stereoscopically, whether a user's viewpoint is tracked, and the field of view of a given display can affect users' perception of scale in the displayed image. Participants directly estimated the size of a gap by matching the distance between their hands to the gap width and judged their ability to pass unimpeded through the gap in one of five common implementations of three display technologies (two head-mounted displays [HMD] and a back-projection screen). Both measures of gap width were similar for the two HMD conditions and the back projection with stereo and tracking. For the displays without tracking, stereo and monocular conditions differed from each other, with monocular viewing showing underestimation of size. Display technologies that are capable of stereoscopic display and tracking of the user's viewpoint are beneficial as perceived size does not differ from real-world estimates. Evaluations of different display technologies are necessary as display conditions vary and the availability of different display technologies continues to grow. The findings are important to those using display technologies for research, commercial, and training purposes when it is important for the displayed image to be perceived at an intended scale. © 2015, Human Factors and Ergonomics Society.
A photophoretic-trap volumetric display
NASA Astrophysics Data System (ADS)
Smalley, D. E.; Nygaard, E.; Squire, K.; van Wagoner, J.; Rasmussen, J.; Gneiting, S.; Qaderi, K.; Goodsell, J.; Rogers, W.; Lindsey, M.; Costner, K.; Monk, A.; Pearson, M.; Haymore, B.; Peatross, J.
2018-01-01
Free-space volumetric displays, or displays that create luminous image points in space, are the technology that most closely resembles the three-dimensional displays of popular fiction. Such displays are capable of producing images in ‘thin air’ that are visible from almost any direction and are not subject to clipping. Clipping restricts the utility of all three-dimensional displays that modulate light at a two-dimensional surface with an edge boundary; these include holographic displays, nanophotonic arrays, plasmonic displays, lenticular or lenslet displays and all technologies in which the light scattering surface and the image point are physically separate. Here we present a free-space volumetric display based on photophoretic optical trapping that produces full-colour graphics in free space with ten-micrometre image points using persistence of vision. This display works by first isolating a cellulose particle in a photophoretic trap created by spherical and astigmatic aberrations. The trap and particle are then scanned through a display volume while being illuminated with red, green and blue light. The result is a three-dimensional image in free space with a large colour gamut, fine detail and low apparent speckle. This platform, named the Optical Trap Display, is capable of producing image geometries that are currently unobtainable with holographic and light-field technologies, such as long-throw projections, tall sandtables and ‘wrap-around’ displays.
Display challenges resulting from the use of wide field of view imaging devices
NASA Astrophysics Data System (ADS)
Petty, Gregory J.; Fulton, Jack; Nicholson, Gail; Seals, Ean
2012-06-01
As focal plane array technologies advance and imagers increase in resolution, display technology must outpace the imaging improvements in order to adequately represent the complete data collection. Typical display devices have an aspect ratio near 4:3 or 16:9; however, a class of Wide Field of View (WFOV) imaging devices departs from the norm with aspect ratios as high as 5:1. This quality, when coupled with high spatial resolution, presents a unique challenge for display devices. A standard display must either resize the image data to fit the screen or present the image data at native resolution and truncate potentially important information. The problem is compounded by the applications: WFOV high-situational-awareness imagers are sought for space-limited military vehicles. Tradeoffs between these issues are assessed with respect to the image quality of the WFOV sensor.
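To make the tradeoff concrete, the short Python sketch below (not from the paper; the sensor and display dimensions are illustrative assumptions) compares the resolution lost by resizing a 5:1 image onto a 16:9 panel with the portion of the field of view lost by displaying it at native resolution.

```python
# Minimal sketch (not from the paper) of the resize-vs-truncate tradeoff for a
# wide-field-of-view sensor shown on a standard display. Sizes are assumptions.

def fit_scale(src_w, src_h, dst_w, dst_h):
    """Uniform scale factor that letterboxes the source onto the display."""
    return min(dst_w / src_w, dst_h / src_h)

sensor_w, sensor_h = 5000, 1000     # hypothetical 5:1 WFOV imager
display_w, display_h = 1920, 1080   # common 16:9 panel

s = fit_scale(sensor_w, sensor_h, display_w, display_h)
fitted_w, fitted_h = int(sensor_w * s), int(sensor_h * s)
print(f"Resize: {fitted_w}x{fitted_h} shown, "
      f"{100 * (1 - s):.1f}% linear resolution lost")

# Displaying at native (1:1) resolution instead truncates the field of view:
visible_fraction = display_w / sensor_w
print(f"Truncate: only {100 * visible_fraction:.1f}% of the horizontal FOV visible")
```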
Super long viewing distance light homogeneous emitting three-dimensional display
NASA Astrophysics Data System (ADS)
Liao, Hongen
2015-04-01
Three-dimensional (3D) display technology has continuously attracted public attention with the progress of today's 3D television and mature display technologies. The primary characteristics of conventional glasses-free autostereoscopic displays, such as spatial resolution, image depth, and viewing angle, are often limited by the use of optical lenses or optical gratings. We present a 3D display using a MEMS-scanning-mechanism-based light homogeneous emitting (LHE) approach and demonstrate that the display can directly generate an autostereoscopic 3D image without the need for optical lenses or gratings. The generated 3D image is free of aberration and has high-definition spatial resolution, making it the first to exhibit animated 3D images with an image depth of six meters. Our LHE 3D display approach can be used to build a natural flat-panel 3D display with a super long viewing distance and real-time image updating.
Research of an optimization design method of integral imaging three-dimensional display system
NASA Astrophysics Data System (ADS)
Gao, Hui; Yan, Zhiqiang; Wen, Jun; Jiang, Guanwu
2016-03-01
Information warfare requires a highly transparent view of the battlefield, so true three-dimensional display technology has clear advantages over traditional display technology in the current field of military science and technology. This paper summarizes the principle, characteristics, and development history of integral imaging, and reviews the research progress of lens array imaging technology, focusing on the factors that restrict the development of integral imaging: low spatial resolution, narrow depth range, and small viewing angle. A variety of methods for improving resolution, extending depth of field, increasing viewing angle, and eliminating artifacts are compared and analyzed, and the experimental results of this research are discussed by comparing the display performance of the different methods.
3D Image Display Courses for Information Media Students.
Yanaka, Kazuhisa; Yamanouchi, Toshiaki
2016-01-01
Three-dimensional displays are used extensively in movies and games. These displays are also essential in mixed reality, where virtual and real spaces overlap. Therefore, engineers and creators should be trained to master 3D display technologies. For this reason, the Department of Information Media at the Kanagawa Institute of Technology has launched two 3D image display courses specifically designed for students who aim to become information media engineers and creators.
Future Directions for Astronomical Image Display
NASA Technical Reports Server (NTRS)
Mandel, Eric
2000-01-01
In the "Future Directions for Astronomical Image Displav" project, the Smithsonian Astrophysical Observatory (SAO) and the National Optical Astronomy Observatories (NOAO) evolved our existing image display program into fully extensible. cross-platform image display software. We also devised messaging software to support integration of image display into astronomical analysis systems. Finally, we migrated our software from reliance on Unix and the X Window System to a platform-independent architecture that utilizes the cross-platform Tcl/Tk technology.
Mobile viewer system for virtual 3D space using infrared LED point markers and camera
NASA Astrophysics Data System (ADS)
Sakamoto, Kunio; Taneji, Shoto
2006-09-01
The authors have developed a 3D workspace system using collaborative imaging devices. A stereoscopic display enables this system to project 3D information. In this paper, we describe the position detecting system for a see-through 3D viewer. A 3D display system is a useful technology for virtual reality, mixed reality, and augmented reality. We have researched spatial imaging and interaction systems, and have previously proposed 3D displays using a slit parallax barrier, a lenticular screen, and holographic optical elements (HOEs) for displaying active images 1)2)3)4). The purpose of this paper is to propose an interactive system using these 3D imaging technologies. The observer can view virtual images in the real world when watching the screen of a see-through 3D viewer. The goal of our research is a display system in which users who see the real world through the mobile viewer are presented with virtual 3D images floating in the air, and can touch and interact with these floating images much as children play with modeling clay. The key technologies of this system are the position recognition system and the spatial imaging display. The 3D images are presented by an improved parallax barrier 3D display. Here the authors discuss the measuring method for the mobile viewer using infrared LED point markers and a camera in the 3D workspace (augmented reality world). The authors present a geometric analysis of the proposed measuring method, which is the simplest approach in that it uses a single camera rather than a stereo camera, and report the results of our viewer system.
Wentink, M; Jakimowicz, J J; Vos, L M; Meijer, D W; Wieringa, P A
2002-08-01
Compared to open surgery, minimally invasive surgery (MIS) relies heavily on advanced technology, such as endoscopic viewing systems and innovative instruments. The aim of the study was to objectively compare three technologically advanced laparoscopic viewing systems with the standard viewing system currently used in most Dutch hospitals. We evaluated the following advanced laparoscopic viewing systems: a Thin Film Transistor (TFT) display, a stereo endoscope, and an image projection display. The standard viewing system consisted of a monocular endoscope and a high-resolution monitor. Task completion time served as the measure of performance. Eight surgeons with laparoscopic experience participated in the experiment. The average task time was significantly greater (p < 0.05) with the stereo viewing system than with the standard viewing system. The average task times with the TFT display and the image projection display did not differ significantly from the standard viewing system. Although the stereo viewing system promises improved depth perception, and the TFT and image projection displays are supposed to improve hand-eye coordination, none of these systems provided better task performance than the standard viewing system in this pelvi-trainer experiment.
Display technologies for augmented reality
NASA Astrophysics Data System (ADS)
Lee, Byoungho; Lee, Seungjae; Jang, Changwon; Hong, Jong-Young; Li, Gang
2018-02-01
By virtue of rapid progress in optics, sensors, and computer science, commercial products and prototypes for augmented reality (AR) are penetrating the consumer market. AR is in the spotlight because it is expected to provide a much more immersive and realistic experience than ordinary displays. However, several barriers must be overcome for successful commercialization of AR. Here, we explore challenging and important topics for AR such as image combiners, enhancement of display performance, and focus cue reproduction. Image combiners are essential to integrate virtual images with the real world. Display performance (e.g., field of view and resolution) is important for a more immersive experience, and focus cue reproduction may mitigate the visual fatigue caused by the vergence-accommodation conflict. We also demonstrate emerging technologies to overcome these issues: the index-matched anisotropic crystal lens (IMACL), retinal projection displays, and 3D displays with focus cues. For image combiners, a novel optical element called the IMACL provides a relatively wide field of view. Retinal projection displays may enhance the field of view and resolution of AR displays. Focus cues could be reconstructed via multi-layer displays and holographic displays. Experimental results from our prototypes are presented.
Light-field and holographic three-dimensional displays [Invited].
Yamaguchi, Masahiro
2016-12-01
A perfect three-dimensional (3D) display that satisfies all depth cues in human vision is possible if a light field can be reproduced exactly as it appeared when it emerged from a real object. The light field can be generated based on either light ray or wavefront reconstruction, with the latter known as holography. This paper first provides an overview of the advances of ray-based and wavefront-based 3D display technologies, including integral photography and holography, and the integration of those technologies with digital information systems. Hardcopy displays have already been used in some applications, whereas the electronic display of a light field is under active investigation. Next, a fundamental question in this technology field is addressed: what is the difference between ray-based and wavefront-based methods for light-field 3D displays? In considering this question, it is of particular interest to look at the technology of holographic stereograms. The phase information in holography contributes to the resolution of a reconstructed image, especially for deep 3D images. Moreover, issues facing the electronic display system of light fields are discussed, including the resolution of the spatial light modulator, the computational techniques of holography, and the speckle in holographic images.
Further advances in autostereoscopic technology at Dimension Technologies Inc.
NASA Astrophysics Data System (ADS)
Eichenlaub, Jesse B.
1992-06-01
Dimension Technologies is currently one of three companies offering autostereoscopic displays for sale and one of several that are actively pursuing advances to the technology. We have devised a new autostereoscopic imaging technique that possesses several advantages over previously explored methods. We are currently manufacturing autostereoscopic displays based on this technology, as well as vigorously pursuing research and development toward more advanced displays. During the past year, DTI has made major strides in advancing its LCD-based autostereoscopic display technology. DTI has developed a color product -- a stand-alone 640 x 480 flat-panel LCD-based 3-D display capable of accepting input from IBM PC and Apple Mac computers or TV cameras, and capable of changing from 3-D mode to 2-D mode with the flip of a switch. DTI is working on development of a prototype second-generation color product that will provide autostereoscopic 3-D while allowing each eye to see the full resolution of the liquid crystal display. Development is also underway on a proof-of-concept display that produces hologram-like look-around images visible from a wide viewing angle, again while allowing the observer to see the full resolution of the display from all locations. Development of a high-resolution prototype display of this type has begun.
NASA Astrophysics Data System (ADS)
Schlam, E.
1983-01-01
Human factors in visible displays are discussed, taking into account an introduction to color vision, a laser optometric assessment of visual display viewability, the quantification of color contrast, human performance evaluations of digital image quality, visual problems of office video display terminals, and contemporary problems in airborne displays. Other topics considered are related to electroluminescent technology, liquid crystal and related technologies, plasma technology, and display terminals and systems. Attention is given to the application of electroluminescent technology to personal computers, electroluminescent driving techniques, thin-film electroluminescent devices with memory, the fabrication of very large electroluminescent displays, the operating properties of thermally addressed dye-switching liquid crystal displays, light-field dichroic liquid crystal displays for very large area displays, and the hardening of military plasma displays for a nuclear environment.
NASA Technical Reports Server (NTRS)
2002-01-01
Dimension Technologies Inc. developed a line of 2-D/3-D Liquid Crystal Display (LCD) screens, including a 15-inch model priced at consumer levels. DTI's family of flat panel LCD displays, called the Virtual Window(TM), provides real-time 3-D images without the use of glasses, head trackers, helmets, or other viewing aids. Most of the company's initial 3-D display research was funded through NASA's Small Business Innovation Research (SBIR) program. The images on DTI's displays appear to leap off the screen and hang in space. The display accepts input from computers or stereo video sources, and can be switched from 3-D to full-resolution 2-D viewing with the push of a button. The Virtual Window displays have applications in data visualization, medicine, architecture, business, real estate, entertainment, and other research, design, military, and consumer applications. Displays are currently used for computer games, protein analysis, and surgical imaging. The technology greatly benefits the medical field, as surgical simulators are helping to increase the skills of surgical residents. Virtual Window(TM) is a trademark of Dimension Technologies Inc.
Paradigms of perception in clinical practice.
Jacobson, Francine L; Berlanstein, Bruce P; Andriole, Katherine P
2006-06-01
Display strategies for medical images in radiology have evolved in tandem with the technology by which images are made. The close of the 20th century, nearly coincident with the 100th anniversary of the discovery of x-rays, brought radiologists to a new crossroad in the evolution of image display. The increasing availability, speed, and flexibility of computer technology can now revolutionize how images are viewed and interpreted. Radiologists are not yet in agreement regarding the next paradigm for image display. The possibilities are being explored systematically through the Society for Computer Applications in Radiology's Transforming the Radiological Interpretation Process initiative. The varied input of radiologists who work in a large variety of settings will enable new display strategies to best serve radiologists in the detection and quantification of disease. Considerations and possibilities for the future are presented in this paper.
Sunlight-readable display technology: a dual-use case study
NASA Astrophysics Data System (ADS)
Blanchard, Randall D.
1996-05-01
This paper describes our vision of sunlight readable color display requirements, an alternate technology that offers a high level of performance, and how we implemented it for the military avionics display market. This knowledge base and product development experience was then applied, with a comparable level of performance, to commercial applications. The successful dual use of this technology for these two diverse markets is presented, along with details of the technical commonality and a comparison of the design and performance differences. A basis for specifying the required level of performance for a sunlight readable full color display is discussed. With the objective of providing a high level of image brightness and high ambient light rejection, a display architecture using collimated light is used. The resulting designs of two military cockpit display products, with contrast ratios above 20:1 in sunlight, are shown. The performance of a commercial display providing several thousand foot-Lamberts of image brightness is presented.
NASA Technical Reports Server (NTRS)
1995-01-01
The 1100C Virtual Window is based on technology developed under NASA Small Business Innovation Research (SBIR) contracts with Ames Research Center. For example, under one contract Dimension Technologies, Inc. developed a large autostereoscopic display for scientific visualization applications. The Virtual Window employs an innovative illumination system to deliver the depth and color of true 3D imaging. Its applications include surgery and Magnetic Resonance Imaging scans, viewing for teleoperated robots, training, and aviation cockpit displays.
NASA Astrophysics Data System (ADS)
Robbins, Woodrow E.
1988-01-01
The present conference discusses topics in novel technologies and techniques of three-dimensional imaging, human factors-related issues in three-dimensional display system design, three-dimensional imaging applications, and image processing for remote sensing. Attention is given to a 19-inch parallactiscope, a chromostereoscopic CRT-based display, the 'SpaceGraph' true three-dimensional peripheral, advantages of three-dimensional displays, holographic stereograms generated with a liquid crystal spatial light modulator, algorithms and display techniques for four-dimensional Cartesian graphics, an image processing system for automatic retina diagnosis, the automatic frequency control of a pulsed CO2 laser, and a three-dimensional display of magnetic resonance imaging of the spine.
Flatbed-type 3D display systems using integral imaging method
NASA Astrophysics Data System (ADS)
Hirayama, Yuzo; Nagatani, Hiroyuki; Saishu, Tatsuo; Fukushima, Rieko; Taira, Kazuki
2006-10-01
We have developed prototypes of flatbed-type autostereoscopic display systems using a one-dimensional integral imaging method. The integral imaging system reproduces light beams similar to those produced by a real object. Our display architecture is suitable for flatbed configurations because it has a large margin for viewing distance and angle and provides continuous motion parallax. We have applied our technology to 15.4-inch displays, realizing a horizontal resolution of 480 with 12 parallaxes by adopting a mosaic pixel arrangement on the display panel. This allows viewers to see high quality autostereoscopic images. Viewing the display from an angle allows the viewer to experience 3-D images that stand out several centimeters from the surface of the display. Mixed reality of virtual 3-D objects and real objects is also realized on the flatbed display. In seeking reproduction of natural 3-D images on the flatbed display, we developed proprietary software. Fast playback of CG movie content and real-time interaction are realized with the aid of a graphics card. Ensuring that 3-D images are safe for human viewers is very important, so we have measured their effects on visual function and evaluated the biological effects. For example, accommodation and convergence were measured at the same time, and various biological effects were measured before and after the task of watching 3-D images. We have found that our displays show better results than a conventional stereoscopic display. The new technology opens up new areas of application for 3-D displays, including arcade games, e-learning, simulations of buildings and landscapes, and even 3-D menus in restaurants.
Recent developments in stereoscopic and holographic 3D display technologies
NASA Astrophysics Data System (ADS)
Sarma, Kalluri
2014-06-01
Currently, there is increasing interest in the development of high performance 3D display technologies to support a variety of applications including medical imaging, scientific visualization, gaming, education, entertainment, air traffic control and remote operations in 3D environments. In this paper we will review the attributes of the various 3D display technologies including stereoscopic and holographic 3D, human factors issues of stereoscopic 3D, the challenges in realizing Holographic 3D displays and the recent progress in these technologies.
Flight Simulator: Use of SpaceGraph Display in an Instructor/Operator Station. Final Report.
ERIC Educational Resources Information Center
Sher, Lawrence D.
This report describes SpaceGraph, a new computer-driven display technology capable of showing space-filling images, i.e., true three dimensional displays, and discusses the advantages of this technology over flat displays for use with the instructor/operator station (IOS) of a flight simulator. Ideas resulting from 17 brainstorming sessions with…
Three-dimensional display technologies
Geng, Jason
2014-01-01
The physical world around us is three-dimensional (3D), yet traditional display devices can show only two-dimensional (2D) flat images that lack depth (i.e., the third dimension) information. This fundamental restriction greatly limits our ability to perceive and to understand the complexity of real-world objects. Nearly 50% of the capability of the human brain is devoted to processing visual information [Human Anatomy & Physiology (Pearson, 2012)]. Flat images and 2D displays do not harness the brain’s power effectively. With rapid advances in the electronics, optics, laser, and photonics fields, true 3D display technologies are making their way into the marketplace. 3D movies, 3D TV, 3D mobile devices, and 3D games have increasingly demanded true 3D display with no eyeglasses (autostereoscopic). Therefore, it would be very beneficial to readers of this journal to have a systematic review of state-of-the-art 3D display technologies. PMID:25530827
Ultra-realistic imaging: a new beginning for display holography
NASA Astrophysics Data System (ADS)
Bjelkhagen, Hans I.; Brotherton-Ratcliffe, David
2014-02-01
Recent improvements in key foundation technologies are set to potentially transform the field of display holography. In particular, new recording systems based on recent DPSS and semiconductor lasers, combined with novel recording materials and processing, have now demonstrated full-color analogue holograms of both lower noise and higher spectral accuracy. Progress in illumination technology is leading to a further major reduction in display noise and to a significant increase in the clear image depth and brightness of such holograms. So too, recent progress in 1-step Direct-Write Digital Holography (DWDH) now opens the way to the creation of High Virtual Volume (HVV) displays: large-format full-parallax DWDH reflection holograms having fundamentally larger clear image depths. In a certain fashion, HVV displays can be thought of as providing a high-quality full-color digital equivalent to the large-format laser-illuminated transmission holograms of the sixties and seventies. Back then, the advent of such holograms led to much optimism for display holography in the market. However, problems with laser illumination, their monochromatic analogue nature, and image noise are well cited as being responsible for their failure in reality. Is there reason to believe that the latest technology improvements will make their mark this time around? This paper argues that indeed there is.
Enhanced Images for Checked and Carry-on Baggage and Cargo Screening
NASA Technical Reports Server (NTRS)
Woodell, Glenn; Rahman, Zia-ur; Jobson, Daniel J.; Hines, Glenn
2004-01-01
The current X-ray systems used by airport security personnel for the detection of contraband, and objects such as knives and guns that can impact the security of a flight, have limited effect because of the limited display quality of the X-ray images. Since the displayed images do not possess optimal contrast and sharpness, it is possible for the security personnel to miss potentially hazardous objects. This problem is also common to other disciplines such as medical X-rays, and can be mitigated, to a large extent, by the use of state-of-the-art image processing techniques to enhance the contrast and sharpness of the displayed image. The NASA Langley Research Center's Visual Information Processing Group has developed an image enhancement technology that has direct applications to this problem of inadequate display quality. Airport security X-ray imaging systems would benefit considerably by using this novel technology, making the task of the personnel who have to interpret the X-ray images considerably easier, faster, and more reliable. This improvement would translate into more accurate screening as well as minimizing the screening time delays to airline passengers. This technology, Retinex, has been optimized for consumer applications but has been applied to medical X-rays on a very preliminary basis. The resultant technology could be incorporated into a new breed of commercial X-ray imaging systems which would be transparent to the screener yet allow them to see subtle detail much more easily, reducing the amount of time needed for screening while greatly increasing the effectiveness of contraband detection and thus public safety.
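As an illustration of the kind of enhancement discussed above, the following is a minimal single-scale Retinex sketch in Python. NASA Langley's Retinex is a multiscale method with color restoration, so this should be read as a simplified stand-in rather than the actual implementation; the sigma value and the synthetic test image are assumptions.

```python
# Minimal single-scale Retinex sketch for illustration only; the NASA Langley
# Retinex is a multiscale method with color restoration, so treat this as a
# simplified stand-in, not the actual implementation.
import numpy as np
from scipy.ndimage import gaussian_filter

def single_scale_retinex(image, sigma=80.0, eps=1e-6):
    """Return log(I) - log(Gaussian-blurred I), rescaled to [0, 1]."""
    image = image.astype(np.float64) + eps
    surround = gaussian_filter(image, sigma=sigma) + eps
    r = np.log(image) - np.log(surround)
    return (r - r.min()) / (r.max() - r.min() + eps)

# Example: enhance a synthetic low-contrast 8-bit "X-ray-like" image.
rng = np.random.default_rng(0)
xray = rng.normal(120, 5, size=(256, 256)).clip(0, 255)
enhanced = single_scale_retinex(xray)
```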
Yanagita, Satoshi; Imahana, Masato; Suwa, Kazuaki; Sugimura, Hitomi; Nishiki, Masayuki
2016-01-01
The Japanese Society of Radiological Technology (JSRT) standard digital image database contains many useful cases of chest X-ray images and has been used in much state-of-the-art research. However, the pixel values of all the images are simply digitized as relative density values obtained with a scanned-film digitizer. As a result, the pixel values are completely different from the standardized display system input value of Digital Imaging and Communications in Medicine (DICOM), called the presentation value (P-value), which maintains visual consistency when images are observed on displays of different luminance. Therefore, we converted all the images in the JSRT standard digital image database to DICOM format, followed by conversion of the pixel values to P-values using an original program we developed. Consequently, the JSRT standard digital image database has been modified so that the visual consistency of images is maintained among displays of different luminance.
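A hedged sketch of the conversion pipeline described above is given below: scanned-film relative density is mapped to luminance and then to presentation values. This is not the authors' program; the density-to-luminance model is simplified, and the GSDF lookup table (luminance per JND index, defined in DICOM PS3.14) is assumed to be supplied by the caller.

```python
# Sketch of the kind of conversion described above: scanned-film relative
# density -> luminance -> DICOM presentation value (P-value). The GSDF lookup
# table (luminance per JND index, from DICOM PS3.14) is assumed to be supplied
# by the caller; the density-to-luminance model is a simplification.
import numpy as np

def density_to_pvalue(density, gsdf_luminance, l_min=1.0, l_max=400.0, bits=12):
    """density: array of film optical densities (higher = darker).
    gsdf_luminance: 1-D ascending array, luminance in cd/m^2 for JND indices 1..N.
    Returns integer P-values in [0, 2**bits - 1]."""
    # Film transmittance ~ 10^-D; map it onto the display's luminance range.
    transmittance = np.power(10.0, -np.asarray(density, dtype=np.float64))
    t = (transmittance - transmittance.min()) / (np.ptp(transmittance) + 1e-12)
    luminance = l_min + t * (l_max - l_min)

    # Find the JND index whose GSDF luminance is closest to each pixel,
    # then scale JND indices linearly to the P-value range.
    jnd = np.searchsorted(gsdf_luminance, luminance).clip(1, len(gsdf_luminance))
    j_lo, j_hi = jnd.min(), jnd.max()
    pvals = (jnd - j_lo) / max(j_hi - j_lo, 1) * (2**bits - 1)
    return pvals.astype(np.uint16)
```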
Digital image forensics for photographic copying
NASA Astrophysics Data System (ADS)
Yin, Jing; Fang, Yanmei
2012-03-01
Image display technology has greatly developed over the past few decades, making it possible to recapture high-quality images from a display medium such as a liquid crystal display (LCD) screen or a printed paper. Recaptured images are not treated as a separate image class in current digital image forensics research, even though the content of a recaptured image may have been tampered with. In this paper, two sets of features based on noise and on the traces of double JPEG compression are proposed to identify these recaptured images. Experimental results show that the proposed features perform well for detecting photographic copying.
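The sketch below illustrates one feature of the general type described above: a histogram of quantized block-DCT coefficients, a statistic commonly used to expose the periodic artifacts of double JPEG compression. It is not the authors' feature set; the chosen frequency, bin count, and value range are illustrative.

```python
# Illustrative sketch (not the authors' feature set): histogram features of
# quantized block-DCT coefficients, the kind of statistic commonly used to
# reveal the periodic artifacts left by double JPEG compression.
import numpy as np
from scipy.fft import dctn

def blockwise_dct_histogram(gray, freq=(0, 1), bins=41):
    """Histogram of one DCT frequency's coefficients over all 8x8 blocks.
    gray: 2-D float array (luminance). Returns a normalized histogram vector;
    periodic peaks/gaps in such histograms hint at double quantization."""
    h, w = gray.shape
    coeffs = []
    for y in range(0, h - 7, 8):
        for x in range(0, w - 7, 8):
            block = gray[y:y + 8, x:x + 8] - 128.0
            coeffs.append(np.round(dctn(block, norm="ortho")[freq]))
    hist, _ = np.histogram(coeffs, bins=bins, range=(-20, 20))
    return hist / max(hist.sum(), 1)
```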
64.1: Display Technologies for Therapeutic Applications of Virtual Reality
Hoffman, Hunter G.; Schowengerdt, Brian T.; Lee, Cameron M.; Magula, Jeff; Seibel, Eric J.
2015-01-01
A paradigm shift in image source technology for VR helmets is needed. Using scanning fiber displays to replace LCD displays creates lightweight, safe, low cost, wide field of view, portable VR goggles ideal for reducing pain during severe burn wound care in hospitals and possibly in austere combat-transport environments. PMID:26146424
Combining volumetric edge display and multiview display for expression of natural 3D images
NASA Astrophysics Data System (ADS)
Yasui, Ryota; Matsuda, Isamu; Kakeya, Hideki
2006-02-01
In the present paper the authors present a novel stereoscopic display method that combines volumetric edge display technology and multiview display technology to present natural 3D images in which viewers do not suffer from the contradiction between binocular convergence and focal accommodation of the eyes, which causes eyestrain and sickness. We adopt the volumetric display method only for edge drawing, while we adopt the stereoscopic approach for flat areas of the image. Since the focal accommodation of our eyes is affected only by the edge part of the image, natural focal accommodation can be induced if the edges of the 3D image are drawn at the proper depth. The conventional stereo-matching technique can give us robust depth values for the pixels that constitute noticeable edges. Occlusion and gloss of the objects can also be roughly expressed with the proposed method, since we use the stereoscopic approach for the flat areas. With this system, many users can view natural 3D objects at consistent positions and postures at the same time. A simple optometric experiment using a refractometer suggests that the proposed method can give us 3-D images without contradiction between binocular convergence and focal accommodation.
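The following minimal block-matching sketch illustrates the idea of recovering depth only at edge pixels from a rectified stereo pair, as the abstract describes. It is a generic sum-of-absolute-differences matcher, not the authors' implementation; the window size and disparity range are assumptions.

```python
# Minimal block-matching sketch: recover disparity (hence depth) only at edge
# pixels of a rectified stereo pair. Parameters are illustrative assumptions.
import numpy as np

def edge_disparities(left, right, edge_mask, max_disp=64, win=5):
    """left/right: rectified grayscale images (2-D float arrays).
    edge_mask: boolean array marking edge pixels in the left image.
    Returns a disparity map that is valid only where edge_mask is True."""
    h, w = left.shape
    r = win // 2
    disp = np.zeros((h, w), dtype=np.float32)
    ys, xs = np.nonzero(edge_mask)
    for y, x in zip(ys, xs):
        if y < r or y >= h - r or x < r + max_disp or x >= w - r:
            continue
        patch = left[y - r:y + r + 1, x - r:x + r + 1]
        costs = [np.sum(np.abs(patch - right[y - r:y + r + 1,
                                             x - d - r:x - d + r + 1]))
                 for d in range(max_disp)]
        disp[y, x] = float(np.argmin(costs))
    return disp
```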
The research on a novel type of the solar-blind UV head-mounted displays
NASA Astrophysics Data System (ADS)
Zhao, Shun-long
2011-08-01
Ultraviolet detection technology is playing an increasingly important role in civil applications, especially corona discharge detection, in modern society. The UV imaging detector is now one of the most important instruments for detecting power equipment flaws. Modern head-mounted displays (HMDs) have found applications in the fields of military, industrial production, medical treatment, entertainment, 3D visualization, education, and training. We applied a head-mounted display system to UV image detection, and a novel type of head-mounted display is presented: the solar-blind UV head-mounted display. Its structure is given. With the solar-blind UV HMD, a real-time, isometric, visible image of the corona discharge is correctly displayed over the background scene where it exists. The user sees the visible image of the corona discharge on the real scene rather than on a small screen, and can then easily find and repair the power equipment flaws. Compared with the traditional UV imaging detector, introducing the HMD simplifies the structure of the whole system. The original visible-spectrum optical system is replaced by the eye in the solar-blind UV HMD, and optical image fusion is used rather than the digital image fusion system that is necessary in a traditional UV imaging detector. The visible-spectrum optical system and digital image fusion system are therefore not needed, which makes the whole system cheaper than the traditional UV imaging detector. Another advantage of the solar-blind UV HMD is that both of the user's hands remain free, so the user can work on the equipment while observing the corona discharge. The solar-blind UV HMD therefore exposes corona discharge to the user in a better way, and it will play an important role in corona detection in the future.
Flexible Display Technologies...Do They Have a Role in the Cockpit?
2005-03-01
…can be updated as needed via wireless technology. The main element of Radio Paper(TM) is an electronic ink consisting of millions of microcapsules … creating black text and images against an otherwise white (negatively charged) background. The microcapsules can retain their charge (and hence the image) … for as long as months without additional power. [Figure 3 of the original report shows an example of an electrophoretic display (Source: E-Ink Corporation).]
On-demand server-side image processing for web-based DICOM image display
NASA Astrophysics Data System (ADS)
Sakusabe, Takaya; Kimura, Michio; Onogi, Yuzo
2000-04-01
Low-cost image delivery is needed in modern networked hospitals. If a hospital has hundreds of clients, the cost of client systems is a big problem, and a Web-based system is naturally the most effective solution. However, a Web browser alone cannot display medical images with certain image processing, such as a lookup table transformation. We developed a Web-based medical image display system using a Web browser and on-demand server-side image processing. All images displayed on a Web page are generated from DICOM files on a server and delivered on demand. User interaction on the Web page is handled by a client-side scripting technology such as JavaScript. This combination gives the look and feel of an imaging workstation, not only in functionality but also in speed. Real-time image updates that track mouse motion are achieved in the Web browser without any client-side image processing, which would otherwise require client-side plug-in technology such as Java Applets or ActiveX. We tested the performance of the system in three cases: a single client, a small number of clients on a fast network, and a large number of clients on a normal-speed network. The results show that communication overhead is very slight and that the system scales well with the number of clients.
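A minimal sketch of the server-side step described above: apply a window/level lookup transformation to a DICOM image and return a browser-ready JPEG. The library choices (pydicom, Pillow) and the window parameters are assumptions, not the authors' implementation.

```python
# Sketch of on-demand server-side processing: apply a window/level lookup
# transformation to a DICOM image and return a browser-friendly JPEG.
# Library choices and window parameters are assumptions, not the paper's code.
import io
import numpy as np
import pydicom
from PIL import Image

def dicom_to_jpeg_bytes(path, window_center=40.0, window_width=400.0):
    ds = pydicom.dcmread(path)
    pixels = ds.pixel_array.astype(np.float64)
    # Linear window/level lookup table, clipped to 8 bits for display.
    lo = window_center - window_width / 2.0
    mapped = np.clip((pixels - lo) / window_width, 0.0, 1.0) * 255.0
    buf = io.BytesIO()
    Image.fromarray(mapped.astype(np.uint8)).save(buf, format="JPEG")
    return buf.getvalue()   # e.g. returned by a web framework as image/jpeg
```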
View generation for 3D-TV using image reconstruction from irregularly spaced samples
NASA Astrophysics Data System (ADS)
Vázquez, Carlos
2007-02-01
Three-dimensional television (3D-TV) will become the next big step in the development of advanced TV systems. One of the major challenges for the deployment of 3D-TV systems is the diversity of display technologies and the high cost of capturing multi-view content. Depth image-based rendering (DIBR) has been identified as a key technology for the generation of new views for stereoscopic and multi-view displays from a small number of views captured and transmitted. We propose a disparity compensation method for DIBR that does not require spatial interpolation of the disparity map. We use a forward-mapping disparity compensation with real precision. The proposed method deals with the irregularly sampled image resulting from this disparity compensation process by applying a re-sampling algorithm based on a bi-cubic spline function space that produces smooth images. The fact that no approximation is made on the position of the samples implies that geometrical distortions in the final images due to approximations in sample positions are minimized. We also paid attention to the occlusion problem. Our algorithm detects the occluded regions in the newly generated images and uses simple depth-aware inpainting techniques to fill the gaps created by newly exposed areas. We tested the proposed method in the context of generation of views needed for viewing on SynthaGram TM auto-stereoscopic displays. We used as input either a 2D image plus a depth map or a stereoscopic pair with the associated disparity map. Our results show that this technique provides high quality images to be viewed on different display technologies such as stereoscopic viewing with shutter glasses (two views) and lenticular auto-stereoscopic displays (nine views).
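The sketch below illustrates forward-mapping disparity compensation in simplified form: each source pixel is splatted at its disparity-shifted target position, and unfilled positions are flagged as holes for later inpainting. Note that the paper keeps real-valued sample positions and reconstructs the resulting irregular samples with bicubic splines, whereas this sketch rounds to the nearest pixel.

```python
# Simplified sketch of forward-mapping disparity compensation: each source
# pixel is splatted at its target position (nearest-pixel here, whereas the
# paper reconstructs the irregular samples with bicubic splines), and unfilled
# positions are left marked as holes for later depth-aware inpainting.
import numpy as np

def forward_map_view(image, disparity, scale=1.0):
    """image: HxWx3 array, disparity: HxW array (pixels). Returns (view, holes)."""
    h, w = disparity.shape
    view = np.zeros_like(image)
    depth = np.full((h, w), -np.inf)      # keep the nearest (largest-disparity) sample
    holes = np.ones((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            xt = int(round(x + scale * disparity[y, x]))
            if 0 <= xt < w and disparity[y, x] > depth[y, xt]:
                view[y, xt] = image[y, x]
                depth[y, xt] = disparity[y, x]
                holes[y, xt] = False
    return view, holes
```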
Design of integrated eye tracker-display device for head mounted systems
NASA Astrophysics Data System (ADS)
David, Y.; Apter, B.; Thirer, N.; Baal-Zedaka, I.; Efron, U.
2009-08-01
We propose an eye tracker/display system based on a novel, dual-function device termed the ETD, which allows the eye tracker and the display to share optical paths and provides on-chip processing. The proposed ETD design is based on a CMOS chip that combines Liquid-Crystal-on-Silicon (LCoS) micro-display technology with a near infrared (NIR) Active Pixel Sensor imager. In eye-tracking operation, the device captures the NIR light back-reflected from the eye's retina; the retinal image is then used to detect the current direction of the eye's gaze. The design of the eye-tracking imager is based on "deep p-well" pixel technology, providing low crosstalk while shielding the active pixel circuitry, which serves the imaging and the display drivers, from the photo charges generated in the substrate. The use of the ETD in the HMD enables a very compact design suitable for smart goggle applications. A preliminary optical, electronic, and digital design of the goggle and its associated ETD chip and digital control is presented.
A spatio-temporal model of the human observer for use in display design
NASA Astrophysics Data System (ADS)
Bosman, Dick
1989-08-01
A "quick look" visual model, a kind of standard observer in software, is being developed to estimate the appearance of new display designs before prototypes are built. It operates on images also stored in software. It is assumed that the majority of display design flaws and technology artefacts can be identified in representations of early visual processing, and insight obtained into very local to global (supra-threshold) brightness distributions. Cognitive aspects are not considered because it seems that poor acceptance of technology and design is only weakly coupled to image content.
Progress in 3D imaging and display by integral imaging
NASA Astrophysics Data System (ADS)
Martinez-Cuenca, R.; Saavedra, G.; Martinez-Corral, M.; Pons, A.; Javidi, B.
2009-05-01
Three-dimensionality is currently considered an important added value in imaging devices, and therefore the search for an optimum 3D imaging and display technique is a hot topic attracting substantial research effort. Above all, 3D monitors should provide observers with different perspectives of a 3D scene simply by varying the head position. Three-dimensional imaging techniques have the potential to establish a future mass market in the fields of entertainment and communications. Integral imaging (InI), which can capture true 3D color images, has been seen as the right technology for 3D viewing by audiences of more than one person. Given its advanced degree of development, InI technology could be ready for commercialization in the coming years. This development is the result of a strong research effort performed over the past few years by many groups. Since integral imaging is still an emerging technology, the first aim of the "3D Imaging and Display Laboratory" at the University of Valencia has been a thorough study of the principles that govern its operation. It is remarkable that some of these principles have been recognized and characterized by our group. Other contributions of our research have addressed some of the classical limitations of InI systems, such as the limited depth of field (in pickup and in display), the poor axial and lateral resolution, the pseudoscopic-to-orthoscopic conversion, the production of 3D images with continuous relief, and the limited range of viewing angles of InI monitors.
Design and evaluation of web-based image transmission and display with different protocols
NASA Astrophysics Data System (ADS)
Tan, Bin; Chen, Kuangyi; Zheng, Xichuan; Zhang, Jianguo
2011-03-01
There are many Web-based image access technologies used in the medical imaging area, such as component-based (ActiveX control) thick-client Web display, zero-footprint thin-client Web viewers (also called server-side processing Web viewers), Flash Rich Internet Application (RIA), or HTML5-based Web display. Different Web display methods perform differently in different network environments. In this presentation, we evaluate two Web-based image display systems we developed. The first is used for thin-client Web display; it works between a PACS Web server with a WADO interface and a thin client, with the PACS Web server providing JPEG-format images to HTML pages. The second is for thick-client Web display; it works between a PACS Web server with a WADO interface and a thick client running in browsers containing an ActiveX control, a Flash RIA program, or HTML5 scripts, with the PACS Web server providing native DICOM-format images or a JPIP stream for these clients.
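Both clients ultimately issue requests of the kind sketched below: a WADO-URI query asking the PACS Web server for a rendering of one DICOM object. The base URL and UIDs are placeholders; only the standard WADO-URI parameter names (requestType, studyUID, seriesUID, objectUID, contentType) are assumed.

```python
# Sketch of a WADO-URI request for a JPEG rendering of one DICOM object.
# Server URL and UIDs are placeholders, not values from the presentation.
from urllib.parse import urlencode

def wado_jpeg_url(base_url, study_uid, series_uid, object_uid, rows=512):
    params = {
        "requestType": "WADO",
        "studyUID": study_uid,
        "seriesUID": series_uid,
        "objectUID": object_uid,
        "contentType": "image/jpeg",   # thin client; change for native DICOM
        "rows": rows,
    }
    return f"{base_url}?{urlencode(params)}"

# Example (placeholder UIDs):
# wado_jpeg_url("https://pacs.example.org/wado", "1.2.3", "1.2.3.4", "1.2.3.4.5")
```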
Viewpoint Dependent Imaging: An Interactive Stereoscopic Display
NASA Astrophysics Data System (ADS)
Fisher, Scott
1983-04-01
The design and implementation of a viewpoint-dependent imaging system is described. The resultant display is an interactive, lifesize, stereoscopic image that becomes a window into a three-dimensional visual environment. As the user physically changes his viewpoint of the represented data in relation to the display surface, the image is continuously updated. The changing viewpoints are retrieved from a comprehensive, stereoscopic image array stored on computer-controlled optical videodisc and fluidly presented in coordination with the viewer's movements as detected by a body-tracking device. This imaging system is an attempt to more closely represent an observer's interactive perceptual experience of the visual world by presenting sensory information cues not offered by traditional media technologies: binocular parallax, motion parallax, and motion perspective. Unlike holographic imaging, this display requires relatively low bandwidth.
Phage display and molecular imaging: expanding fields of vision in living subjects.
Cochran, R; Cochran, Frank
2010-01-01
In vivo molecular imaging enables non-invasive visualization of biological processes within living subjects, and holds great promise for diagnosis and monitoring of disease. The ability to create new agents that bind to molecular targets and deliver imaging probes to desired locations in the body is critically important to further advance this field. To address this need, phage display, an established technology for the discovery and development of novel binding agents, is increasingly becoming a key component of many molecular imaging research programs. This review discusses the expanding role played by phage display in the field of molecular imaging with a focus on in vivo applications. Furthermore, new methodological advances in phage display that can be directly applied to the discovery and development of molecular imaging agents are described. Various phage library selection strategies are summarized and compared, including selections against purified target, intact cells, and ex vivo tissue, plus in vivo homing strategies. An outline of the process for converting polypeptides obtained from phage display library selections into successful in vivo imaging agents is provided, including strategies to optimize in vivo performance. Additionally, the use of phage particles as imaging agents is also described. In the latter part of the review, a survey of phage-derived in vivo imaging agents is presented, and important recent examples are highlighted. Other imaging applications are also discussed, such as the development of peptide tags for site-specific protein labeling and the use of phage as delivery agents for reporter genes. The review concludes with a discussion of how phage display technology will continue to impact both basic science and clinical applications in the field of molecular imaging.
Wide-Field-of-View, High-Resolution, Stereoscopic Imager
NASA Technical Reports Server (NTRS)
Prechtl, Eric F.; Sedwick, Raymond J.
2010-01-01
A device combines video feeds from multiple cameras to provide wide-field-of-view, high-resolution, stereoscopic video to the user. The prototype under development consists of two camera assemblies, one for each eye. One of these assemblies incorporates a mounting structure with multiple cameras attached at offset angles. The video signals from the cameras are fed to a central processing platform where each frame is color processed and mapped into a single contiguous wide-field-of-view image. Because the resolution of most display devices is typically smaller than the processed map, a cropped portion of the video feed is output to the display device. The positioning of the cropped window will likely be controlled through the use of a head-tracking device, allowing the user to turn his or her head side-to-side or up and down to view different portions of the captured image. There are multiple options for the display of the stereoscopic image: head-mounted displays are one likely implementation, and 3D projection technologies are another option under consideration. The technology can be adapted in a multitude of ways. The computing platform is scalable, such that the number, resolution, and sensitivity of the cameras can be leveraged to improve image resolution and field of view. Miniaturization efforts can be pursued to shrink the package for better mobility. Power-savings studies can be performed to enable unattended remote sensing packages. Image compression and transmission technologies can be incorporated to enable an improved telepresence experience.
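A rough sketch of the cropping step described above, in Python: select the portion of the stitched wide-field panorama to send to the display based on head yaw and pitch from a tracker. The field-of-view figures and output resolution are assumptions, not values from the prototype.

```python
# Illustrative sketch of head-tracked cropping from a stitched panorama.
# Field-of-view numbers and output size are assumptions.
import numpy as np

def crop_window(panorama, yaw_deg, pitch_deg,
                pano_hfov=180.0, pano_vfov=60.0, out_w=1280, out_h=720):
    """panorama: HxWx3 stitched image covering pano_hfov x pano_vfov degrees.
    Returns the sub-image centred on the tracked gaze direction."""
    h, w = panorama.shape[:2]
    cx = (yaw_deg / pano_hfov + 0.5) * w      # map angle to panorama column
    cy = (0.5 - pitch_deg / pano_vfov) * h    # map angle to panorama row
    x0 = int(np.clip(cx - out_w / 2, 0, w - out_w))
    y0 = int(np.clip(cy - out_h / 2, 0, h - out_h))
    return panorama[y0:y0 + out_h, x0:x0 + out_w]
```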
Aggoun, Amar; Swash, Mohammad; Grange, Philippe C.R.; Challacombe, Benjamin; Dasgupta, Prokar
2013-01-01
Background and Purpose: Existing imaging modalities of urologic pathology are limited by three-dimensional (3D) representation on a two-dimensional screen. We present 3D-holoscopic imaging as a novel method of representing Digital Imaging and Communications in Medicine data images taken from CT and MRI to produce 3D-holographic representations of anatomy without special eyewear in natural light. 3D-holoscopic technology produces images that are true optical models. This technology is based on physical principles with duplication of light fields. The 3D content is captured in real time with the content viewed by multiple viewers independently of their position, without 3D eyewear. Methods: We display 3D-holoscopic anatomy relevant to minimally invasive urologic surgery without the need for 3D eyewear. Results: The results have demonstrated that medical 3D-holoscopic content can be displayed on a commercially available multiview auto-stereoscopic display. Conclusion: The next step is validation studies comparing 3D-holoscopic imaging with conventional imaging. PMID:23216303
Volumetric 3D Display System with Static Screen
NASA Technical Reports Server (NTRS)
Geng, Jason
2011-01-01
Current display technology has relied on flat, 2D screens that cannot truly convey the third dimension of visual information: depth. In contrast to conventional visualization that is primarily based on 2D flat screens, the volumetric 3D display possesses a true 3D display volume and physically places each 3D voxel of the displayed 3D image at its true 3D (x,y,z) spatial position. Each voxel, analogous to a pixel in a 2D image, emits light from that position to form a real 3D image in the eyes of the viewers. Such true volumetric 3D display technology provides both physiological (accommodation, convergence, binocular disparity, and motion parallax) and psychological (image size, linear perspective, shading, brightness, etc.) depth cues to the human visual system to help in the perception of 3D objects. In a volumetric 3D display, viewers can watch the displayed 3D images from a full 360° view without using any special eyewear. Volumetric 3D display techniques may lead to a quantum leap in information display technology and can dramatically change the ways humans interact with computers, which can lead to significant improvements in the efficiency of learning and knowledge management processes. Within a block of glass, a large number of tiny voxel dots is created using a recently available machining technique called laser subsurface engraving (LSE). LSE is able to produce tiny physical crack points (as small as 0.05 mm in diameter) at any (x,y,z) location within the cube of transparent material. The crack dots, when illuminated by a light source, scatter the light around and form visible voxels within the 3D volume. The locations of these tiny voxels are strategically determined such that each can be illuminated by a light ray from a high-resolution digital mirror device (DMD) light engine. The distribution of these voxels occupies the full display volume within the static 3D glass screen. This design eliminates the moving screen seen in previous approaches, so there is no image jitter, and it has an inherently parallel mechanism for 3D voxel addressing. High spatial resolution is possible, and a full-color display is easy to implement. The system is low-cost and low-maintenance.
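One preprocessing step implied by this design can be sketched as follows: quantize a 3-D model's points onto the fixed grid of engraved crack dots so that each occupied cell corresponds to one voxel the DMD light engine can later address. The grid pitch and block size below are purely illustrative, not the system's actual parameters.

```python
# Hypothetical sketch: quantize normalized 3-D model points onto the fixed grid
# of engraved crack dots inside the glass block. Grid pitch and block size are
# illustrative assumptions, not the actual system parameters.
import numpy as np

def quantize_to_voxel_grid(points, block_mm=(100.0, 100.0, 100.0), pitch_mm=0.5):
    """points: Nx3 array with coordinates normalized to [0, 1]^3.
    Returns unique integer (i, j, k) voxel indices inside the block."""
    pts = np.asarray(points, dtype=np.float64)
    grid_dims = np.floor(np.array(block_mm) / pitch_mm).astype(int)
    idx = np.clip((pts * grid_dims).astype(int), 0, grid_dims - 1)
    return np.unique(idx, axis=0)   # one engraved dot per occupied grid cell
```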
NCAP projection displays: key issues for commercialization
NASA Astrophysics Data System (ADS)
Tomita, Akira; Jones, Philip J.
1992-06-01
Recently there has been much interest in a new polymer nematic dispersion technology, often called NCAP, PDLC, PNLC, LCPC, etc., since projection displays using this technology have been shown to produce much brighter display images than projectors using conventional twisted nematic (TN) light valves. To commercialize projection displays based on this polymer nematic dispersion technology, the new materials must not only meet various electro-optic requirements, e.g., operational voltage, 'off-state' scattering angle, voltage holding ratio, and hysteresis, but must also be stable over the lifetime of the product. This paper reports recent progress in the development of NCAP-based projection displays and discusses some of the key commercialization issues.
A compact eyetracked optical see-through head-mounted display
NASA Astrophysics Data System (ADS)
Hua, Hong; Gao, Chunyu
2012-03-01
An eye-tracked head-mounted display (ET-HMD) system is able to display virtual images as a classical HMD does, while additionally tracking the gaze direction of the user. There is ample evidence that a fully integrated ET-HMD system offers multi-fold benefits, not only to fundamental scientific research but also to emerging applications of the technology. For instance, eye-tracking capability in HMDs adds a very valuable tool and objective metric for scientists to quantitatively assess user interaction with 3D environments and to investigate the effectiveness of various 3D visualization technologies for specific tasks, including training, education, and augmented cognition tasks. In this paper, we present an innovative optical approach to the design of an optical see-through ET-HMD system based on freeform optical technology and an innovative optical scheme that uniquely combines the display optics with the eye imaging optics. A preliminary design of the described ET-HMD system is presented.
NASA Astrophysics Data System (ADS)
Holter, Borre; Kamfjord, Thor G.; Fossum, Richard; Fagerberg, Ragnar
2000-08-01
The Norwegian based company PolyDisplay ASA, in collaboration with the Norwegian Army Material Command and SINTEF, has refined and developed an electrically addressed Smectic A reflective LCD technology, demonstrated with color and black/white technology demonstrators, featuring: (1) Good contrast, all-round viewing angle and readability under all light conditions (no wash-out in direct sunlight). (2) Infinite memory -- the image remains without power -- very low power consumption, no or very low radiation ('silent display') and narrow-band updating. (3) Clear, sharp and flicker-free images. (4) Large number of gray tones and colors possible. (5) Simple construction and production -- reduced cost, higher yield, more robust and environmentally friendly. (6) Possibility for lighter, more robust and flexible displays based on plastic substrates. The results and future implementation possibilities for cockpit and soldier-system displays are discussed.
Future directions in 3-dimensional imaging and neurosurgery: stereoscopy and autostereoscopy.
Christopher, Lauren A; William, Albert; Cohen-Gadol, Aaron A
2013-01-01
Recent advances in 3-dimensional (3-D) stereoscopic imaging have enabled 3-D display technologies in the operating room. We find 2 beneficial applications for the inclusion of 3-D imaging in clinical practice. The first is the real-time 3-D display in the surgical theater, which is useful for the neurosurgeon and observers. In surgery, a 3-D display can include a cutting-edge mixed-mode graphic overlay for image-guided surgery. The second application is to improve the training of residents and observers in neurosurgical techniques. This article documents the requirements of both applications for a 3-D system in the operating room and for clinical neurosurgical training, followed by a discussion of the strengths and weaknesses of the current and emerging 3-D display technologies. An important comparison between a new autostereoscopic display without glasses and current stereo display with glasses improves our understanding of the best applications for 3-D in neurosurgery. Today's multiview autostereoscopic display has 3 major benefits: It does not require glasses for viewing; it allows multiple views; and it improves the workflow for image-guided surgery registration and overlay tasks because of its depth-rendering format and tools. Two current limitations of the autostereoscopic display are that resolution is reduced and depth can be perceived as too shallow in some cases. Higher-resolution displays will be available soon, and the algorithms for depth inference from stereo can be improved. The stereoscopic and autostereoscopic systems from microscope cameras to displays were compared by the use of recorded and live content from surgery. To the best of our knowledge, this is the first report of application of autostereoscopy in neurosurgery.
Scanning laser beam displays based on a 2D MEMS
NASA Astrophysics Data System (ADS)
Niesten, Maarten; Masood, Taha; Miller, Josh; Tauscher, Jason
2010-05-01
The combination of laser light sources and MEMS technology enables a range of display systems such as ultra-small projectors for mobile devices, head-up displays for vehicles, wearable near-eye displays and projection systems for 3D imaging. Images are created by scanning red, green and blue lasers horizontally and vertically with a single two-dimensional MEMS. Due to the excellent beam quality of laser beams, the optical designs are efficient and compact. In addition, the laser illumination enables saturated display colors that are desirable for augmented reality applications where a virtual image is used. With this technology, the smallest projector engine for high-volume manufacturing to date has been developed. This projector module has a height of 7 mm and a volume of 5 cc. The resolution of this projector is WVGA. No additional projection optics are required, resulting in an infinite focus depth. Unlike micro-display-based projection displays, an increase in resolution will not lead to an increase in size or a decrease in efficiency. Therefore, future projectors can be developed that combine higher resolution in an even smaller and thinner form factor with increased efficiencies that will lead to lower power consumption.
NASA Astrophysics Data System (ADS)
Qin, Chen; Ren, Bin; Guo, Longfei; Dou, Wenhua
2014-11-01
Multi-projector three-dimensional (3D) display is a promising multi-view, glasses-free 3D display technology that can produce full-colour, high-definition 3D images on its screen. One key problem of multi-projector 3D display is how to acquire the source images for the projector array while avoiding the pseudoscopic problem. This paper first analyzes the display characteristics of multi-projector 3D display and then proposes a projector content synthesis method using a tetrahedral transform. A 3D video format based on a stereo image pair and an associated disparity map is presented; it is well suited to any type of multi-projector 3D display and has the advantage of saving storage. Experimental results show that our method solves the pseudoscopic problem.
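The stereo-pair-plus-disparity format lends itself to depth-image-based rendering of the projector views. The sketch below is a generic forward-warping illustration, not the tetrahedral-transform method of the paper; the array sizes and disparity values are toy assumptions.

```python
import numpy as np

def synthesize_view(left_img, disparity, alpha):
    """Forward-warp the left image by a fraction `alpha` of the disparity.

    A generic depth-image-based rendering step (not the paper's tetrahedral
    transform): alpha = 0 reproduces the left view, alpha = 1 approximates
    the right view, and intermediate values give in-between projector views.
    Disocclusion holes are left unfilled in this sketch.
    """
    h, w = disparity.shape
    out = np.zeros_like(left_img)
    xs = np.arange(w)
    for y in range(h):
        # Shift each pixel horizontally by the scaled disparity.
        target_x = np.clip((xs - alpha * disparity[y]).astype(int), 0, w - 1)
        out[y, target_x] = left_img[y, xs]
    return out

# Example: nine projector views from one stereo pair plus disparity map.
left = np.random.rand(240, 320, 3)
disp = np.full((240, 320), 8.0)          # toy constant disparity (pixels)
views = [synthesize_view(left, disp, a) for a in np.linspace(0.0, 1.0, 9)]
```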
Reconfigurable and responsive droplet-based compound micro-lenses.
Nagelberg, Sara; Zarzar, Lauren D; Nicolas, Natalie; Subramanian, Kaushikaram; Kalow, Julia A; Sresht, Vishnu; Blankschtein, Daniel; Barbastathis, George; Kreysing, Moritz; Swager, Timothy M; Kolle, Mathias
2017-03-07
Micro-scale optical components play a crucial role in imaging and display technology, biosensing, beam shaping, optical switching, wavefront analysis, and device miniaturization. Herein, we demonstrate liquid compound micro-lenses with dynamically tunable focal lengths. We employ bi-phase emulsion droplets fabricated from immiscible hydrocarbon and fluorocarbon liquids to form responsive micro-lenses that can be reconfigured to focus or scatter light, form real or virtual images, and display variable focal lengths. Experimental demonstrations of dynamic refractive control are complemented by theoretical analysis and wave-optical modelling. Additionally, we provide evidence of the micro-lenses' functionality for two potential applications, integral micro-scale imaging devices and light field display technology, thereby demonstrating both the fundamental characteristics and the promising opportunities for fluid-based dynamic refractive micro-scale compound lenses.
NASA Technical Reports Server (NTRS)
Stoller, Ray A.; Wedding, Donald K.; Friedman, Peter S.
1993-01-01
A development status evaluation is presented for gas plasma display technology, noting how tradeoffs among the parameters of size, resolution, speed, portability, color, and image quality can yield cost-effective solutions for medical imaging, CAD, teleconferencing, multimedia, and both civil and military applications. Attention is given to plasma-based large-area displays' suitability for radar, sonar, and IR, due to their lack of EM susceptibility. Both monochrome and color displays are available.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beiser, L.; Veligdan, J.
A Planar Optic Display (POD) is being built and tested for suitability as a high-brightness replacement for the cathode ray tube (CRT). The POD display technology utilizes a laminated optical waveguide structure which allows a projection type of display to be constructed in a thin (1 to 2 inch) housing. Inherent in the optical waveguide is a black cladding matrix which gives the display a black appearance, leading to very high contrast. A Digital Micromirror Device (DMD) from Texas Instruments is used to create video images in conjunction with a 100 milliwatt green solid state laser. An anamorphic optical system is used to inject light into the POD to form a stigmatic image. In addition to the design of the POD screen, we discuss image formation, image projection, and optical design constraints.
All-CMOS night vision viewer with integrated microdisplay
NASA Astrophysics Data System (ADS)
Goosen, Marius E.; Venter, Petrus J.; du Plessis, Monuko; Faure, Nicolaas M.; Janse van Rensburg, Christo; Rademeyer, Pieter
2014-02-01
The unrivalled integration potential of CMOS has made it the dominant technology for digital integrated circuits. With the advent of visible light emission from silicon through hot carrier electroluminescence, several applications arose, all of which rely upon the advantages of mature CMOS technologies for a competitive edge in a very active and attractive market. In this paper we present a low-cost night vision viewer which employs only standard CMOS technologies. A commercial CMOS imager is utilized for near infrared image capturing with a 128x96 pixel all-CMOS microdisplay implemented to convey the image to the user. The display is implemented in a standard 0.35 μm CMOS process, with no process alterations or post processing. The display features a 25 μm pixel pitch and a 3.2 mm x 2.4 mm active area, which through magnification presents the virtual image to the user equivalent of a 19-inch display viewed from a distance of 3 meters. This work represents the first application of a CMOS microdisplay in a low-cost consumer product.
Formulation of coarse integral imaging and its applications
NASA Astrophysics Data System (ADS)
Kakeya, Hideki
2008-02-01
This paper formulates the notion of coarse integral imaging and applies it to practical designs of 3D displays for robot teleoperation and automobile HUDs. 3D display technologies are in demand in applications where real-time and precise depth perception is required, such as teleoperation of robot manipulators and HUDs for automobiles. 3D displays for these applications, however, have not been realized so far. In conventional 3D display technologies, the eyes are usually induced to focus on the screen, which is not suitable for the above purposes. To overcome this problem the author adopts the coarse integral imaging system, where each component lens is large enough to cover dozens of times more pixels than the number of views. The merit of this system is that it can induce the viewer's focus onto planes at various depths by generating a real image or a virtual image off the screen. This system, however, has major disadvantages in image quality, which are caused by lens aberration and discontinuity at the joints of the component lenses. In this paper the author proposes practical optical designs for 3D monitors for robot teleoperation and 3D HUDs for automobiles that overcome the problems of aberration and image discontinuity.
Color speckle in laser displays
NASA Astrophysics Data System (ADS)
Kuroda, Kazuo
2015-07-01
At the beginning of this century, lighting technology shifted from discharge lamps, fluorescent lamps and electric bulbs to solid-state lighting. Current solid-state lighting is based on light-emitting diode (LED) technology, but laser lighting technology is developing rapidly, in forms such as laser cinema projectors, laser TVs, laser head-up displays, laser head-mounted displays, and laser headlamps for motor vehicles. One of the main issues of laser displays is the reduction of speckle noise1). For monochromatic laser light, speckle is a random interference pattern on the image plane (the retina for a human observer). For laser displays, RGB (red-green-blue) lasers form speckle patterns independently, which results in a random distribution of chromaticity, called color speckle2).
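The origin of color speckle can be pictured numerically: if the R, G and B speckle intensities at a point fluctuate independently, the point-to-point chromaticity fluctuates as well. The sketch below assumes fully developed speckle (exponentially distributed intensity) and equal mean intensities; it is an illustration, not the measurement model of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def speckle_field(shape, n_averaged=1):
    """Fully developed speckle intensity (unit mean), optionally averaged over
    `n_averaged` independent patterns, which reduces speckle contrast."""
    return rng.exponential(1.0, size=(n_averaged,) + shape).mean(axis=0)

# Independent R, G, B speckle patterns on the image plane
# (assumption: equal mean intensities, i.e. a nominally white field).
shape = (256, 256)
R, G, B = (speckle_field(shape) for _ in range(3))

# Per-pixel chromaticity: because R, G and B fluctuate independently,
# chromaticity varies randomly from point to point -- colour speckle.
total = R + G + B
r, g = R / total, G / total
print("std of r chromaticity:", r.std())
```

Increasing `n_averaged` (more independent speckle patterns averaged per colour) lowers both the intensity speckle contrast and the chromaticity spread, which is the usual direction taken by speckle-reduction measures.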
Analysis on the 3D crosstalk in stereoscopic display
NASA Astrophysics Data System (ADS)
Choi, Hee-Jin
2010-11-01
Nowadays, with rapid progress in flat panel display (FPD) technologies, the three-dimensional (3D) display is becoming the next mainstream of the display market. Among the various 3D display techniques, the stereoscopic 3D display shows different left/right images to each eye of the observer using special glasses and is the most popular 3D technique, with the advantages of low price and high 3D resolution. However, current stereoscopic 3D displays suffer from 3D crosstalk, the interference between the left-eye and right-eye images, which severely degrades the quality of the 3D image. In this paper, the meaning and causes of 3D crosstalk in stereoscopic 3D display are introduced and previously proposed methods of 3D crosstalk measurement in vision science are reviewed. Based on them, the threshold of 3D crosstalk required to realize a 3D display with no perceptible degradation is analyzed.
Digital Light Processing update: status and future applications
NASA Astrophysics Data System (ADS)
Hornbeck, Larry J.
1999-05-01
Digital Light Processing (DLP) projection displays based on the Digital Micromirror Device (DMD) were introduced to the market in 1996. Less than 3 years later, DLP-based projectors are found in such diverse applications as mobile, conference room, video wall, home theater, and large-venue. They provide high-quality, seamless, all-digital images that have exceptional stability as well as freedom from both flicker and image lag. Marked improvements have been made in the image quality of DLP-based projection displays, including brightness, resolution, contrast ratio, and border image. DLP-based mobile projectors that weighed about 27 pounds in 1996 now weigh only about 7 pounds. This weight reduction has been responsible for the definition of an entirely new projector class, the ultraportable. New applications are being developed for this important new projection display technology; these include digital photofinishing for high process speed minilab and maxilab applications and DLP Cinema for the digital delivery of films to audiences around the world. This paper describes the status of DLP-based projection display technology, including its manufacturing, performance improvements, and new applications, with emphasis on DLP Cinema.
Amorphous Silicon: Flexible Backplane and Display Application
NASA Astrophysics Data System (ADS)
Sarma, Kalluri R.
Advances in the science and technology of hydrogenated amorphous silicon (a-Si:H, also referred to as a-Si) and the associated devices, including thin-film transistors (TFT), during the past three decades have had a profound impact on the development and commercialization of major applications such as thin-film solar cells, digital image scanners, X-ray imagers and active matrix liquid crystal displays (AMLCDs). Particularly, during approximately the past 15 years, a-Si TFT-based flat panel AMLCDs have been a huge commercial success. The a-Si TFT-LCD has enabled notebook PCs and is now rapidly replacing the venerable CRT in desktop monitor and home TV applications. a-Si TFT-LCD is now the dominant technology in use for applications ranging from small displays such as those in mobile phones to large displays such as those in home TVs, as well as specialized applications such as industrial and avionics displays.
Autostereoscopic image creation by hyperview matrix controlled single pixel rendering
NASA Astrophysics Data System (ADS)
Grasnick, Armin
2017-06-01
Just as awareness of stereoscopic cinema has increased, so has the perception of its limitations while watching movies with 3D glasses. It is not only that the additional glasses are uncomfortable and annoying; there are some tangible arguments for avoiding 3D glasses. These "stereoscopic deficits" are caused by the 3D glasses themselves. In contrast to natural viewing with the naked eye, artificial 3D viewing with 3D glasses introduces specific "unnatural" side effects. Most moviegoers have experienced unspecific discomfort in 3D cinema, which they may have associated with insufficient image quality. Obviously, quality problems with 3D glasses can be solved by technical improvement. But this simple answer can, and already has, misled some decision makers into relying on the existing 3D glasses solution. It needs to be underlined that there are inherent difficulties with the glasses which can never be solved with modest advancement, as the 3D glasses themselves initiate them. To overcome the limitations of stereoscopy in display applications, several technologies have been proposed to create a 3D impression without the need for 3D glasses, known as autostereoscopy. But even today's autostereoscopic displays cannot solve all viewing problems and still show limitations. A hyperview display could be a suitable candidate, if it were possible to create an affordable device and generate the necessary content in an acceptable time frame. All autostereoscopic displays based on the ideas of lightfield, integral photography or super-multiview can be unified within the concept of hyperview. It is essential for functionality that each of these display technologies uses numerous different perspective images to create the 3D impression. Calculating such a high number of views requires much more computing time than the formation of a simple stereoscopic image pair. The hyperview concept allows the screen image of any 3D technology to be described with a simple equation. This formula can be utilized to create a specific hyperview matrix for a certain 3D display, independent of the technology used. A hyperview matrix may contain references to a large number of images and acts as an instruction for a subsequent rendering process of particular pixels. Naturally, a single pixel delivers an image with no resolution and does not provide any idea of the rendered scene. However, by implementing the method of pixel recycling, a 3D image can be perceived even if all source images are different. It will be shown that several million perspectives can be rendered with the support of GPU rendering, benefiting from the hyperview matrix. As a result, a conventional autostereoscopic display, which is designed to represent only a few perspectives, can be used to show a hyperview image by using a suitable hyperview matrix. It will be shown that a millions-of-views hyperview image can be presented on a conventional autostereoscopic display. For such a hyperview image it is required that all pixels of the display are allocated from different source images. Controlled by the hyperview matrix, an adapted renderer can render a full hyperview image in real time.
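A minimal sketch of hyperview-matrix rendering is given below: each display pixel looks up which perspective image supplies it and copies that single pixel. The column-interleaved matrix, image sizes and view count are hypothetical; a real hyperview matrix would encode the optics of the specific display.

```python
import numpy as np

H, W, N_VIEWS = 120, 160, 16   # display size and number of perspectives (assumed)

# Hypothetical hyperview matrix: for every display pixel it stores which
# perspective image supplies that pixel.  A real matrix would be derived
# from the display's optics; here a simple column interleave stands in.
view_index = (np.arange(W) % N_VIEWS)[None, :].repeat(H, axis=0)

def render_hyperview(views, view_index):
    """Assemble the screen image pixel by pixel from the referenced views.

    `views` has shape (N_VIEWS, H, W, 3); each display pixel copies its own
    value from the perspective image named by the hyperview matrix.
    """
    rows = np.arange(views.shape[1])[:, None]
    cols = np.arange(views.shape[2])[None, :]
    return views[view_index, rows, cols]

views = np.random.rand(N_VIEWS, H, W, 3)   # stand-in rendered perspectives
screen = render_hyperview(views, view_index)
print(screen.shape)                        # (120, 160, 3)
```

Since only one pixel of each perspective is ever used per screen location, a renderer can restrict itself to those pixels, which is the "single pixel rendering" economy the abstract exploits.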
Ultrahigh-definition dynamic 3D holographic display by active control of volume speckle fields
NASA Astrophysics Data System (ADS)
Yu, Hyeonseung; Lee, Kyeoreh; Park, Jongchan; Park, Yongkeun
2017-01-01
Holographic displays generate realistic 3D images that can be viewed without the need for any visual aids. They operate by generating carefully tailored light fields that replicate how humans see an actual environment. However, the realization of high-performance, dynamic 3D holographic displays has been hindered by the capabilities of present wavefront modulator technology. In particular, spatial light modulators have a small diffraction angle range and a limited pixel count, restricting the viewing angle and image size of a holographic 3D display. Here, we present an alternative method to generate dynamic 3D images by controlling volume speckle fields, significantly enhancing image definition. We use this approach to demonstrate a dynamic display of micrometre-sized optical foci in a volume of 8 mm × 8 mm × 20 mm.
NASA Astrophysics Data System (ADS)
Kim, Hak-Rin; Park, Min-Kyu; Choi, Jun-Chan; Park, Ji-Sub; Min, Sung-Wook
2016-09-01
Three-dimensional (3D) display technology has been studied actively because it can offer more realistic images than conventional 2D displays. Various depth cues such as accommodation, binocular parallax, convergence and motion parallax are used to recognize a 3D image. Glasses-type 3D displays use only binocular disparity among the 3D depth cues. However, this method causes visual fatigue and headaches due to accommodation conflict and distorted depth perception. Thus, holographic and volumetric displays are expected to be ideal 3D displays. Holographic displays can represent realistic images satisfying all the factors of depth perception, but they require a tremendous amount of data and fast signal processing. Volumetric 3D displays represent images using voxels, which occupy physical volume; however, a large amount of data is required to represent the depth information with voxels. In order to encode 3D information simply, a compact type of depth-fused 3D (DFD) display is introduced, which can create a polarization-distributed depth map (PDDM) image containing both a 2D color image and a depth image. In this paper, a new volumetric 3D display system is shown using PDDM images controlled by a polarization controller. In order to introduce the PDDM image, the polarization states of the light passing through the spatial light modulator (SLM) were analyzed with Stokes parameters as a function of gray level. Based on the analysis, a polarization controller is properly designed to convert PDDM images into sectioned depth images. After synchronizing the PDDM images with active screens, we can realize a reconstructed 3D image. Acknowledgment: This work was supported by 'The Cross-Ministry Giga KOREA Project' grant from the Ministry of Science, ICT and Future Planning, Korea.
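The Stokes-parameter analysis mentioned above can be summarized with the standard six-measurement formulation; the sketch below uses the textbook definitions, with toy intensity values standing in for actual SLM measurements at one gray level.

```python
import numpy as np

def stokes_from_intensities(I_H, I_V, I_45, I_135, I_R, I_L):
    """Stokes parameters from the six standard analyzer measurements
    (horizontal, vertical, +45 deg, -45 deg, right- and left-circular)."""
    S0 = I_H + I_V
    S1 = I_H - I_V
    S2 = I_45 - I_135
    S3 = I_R - I_L
    return np.array([S0, S1, S2, S3])

def degree_of_polarization(S):
    """Fraction of the light that is polarized."""
    return np.sqrt(S[1]**2 + S[2]**2 + S[3]**2) / S[0]

# Toy example: intensities measured behind an SLM pixel at one gray level.
S = stokes_from_intensities(0.8, 0.2, 0.55, 0.45, 0.5, 0.5)
print(S, degree_of_polarization(S))
```

Repeating this measurement across gray levels yields the gray-level-dependent polarization state that the PDDM encoding relies on.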
Three-Dimensional Media Technologies: Potentials for Study in Visual Literacy.
ERIC Educational Resources Information Center
Thwaites, Hal
This paper presents an overview of three-dimensional media technologies (3Dmt). Many of the new 3Dmt are the direct result of interactions of computing, communications, and imaging technologies. Computer graphics are particularly well suited to the creation of 3D images due to the high resolution and programmable nature of the current displays.…
Monocular display unit for 3D display with correct depth perception
NASA Astrophysics Data System (ADS)
Sakamoto, Kunio; Hosomi, Takashi
2009-11-01
Research on virtual-reality systems has become popular, and the technology has been applied to medical engineering, educational engineering, CAD/CAM systems and so on. 3D imaging display systems come in two types of presentation method: one is a 3D display system using special glasses and the other is a monitor system requiring no special glasses. Liquid crystal displays (LCDs) have recently come into common use, and such a display unit can provide a display area the same size as the image screen on the panel. A display system requiring no special glasses is useful as a 3D TV monitor, but this system has the demerit that the size of the monitor restricts the visual field for displaying images. Thus the conventional display can show only one screen, and it is impossible to enlarge the size of the screen, for example to twice its size. To enlarge the display area, the authors have developed a method of enlarging the display area using a mirror. Our extension method enables observers to see a virtual image plane and doubles the screen area. In the developed display unit, we made use of an image separating technique using polarized glasses, a parallax barrier or a lenticular lens screen for 3D imaging. The mirror generates the virtual image plane and doubles the screen area. Meanwhile, the 3D display system using special glasses can also display virtual images over a wide area. In this paper, we present a monocular 3D vision system with an accommodation mechanism, which is a useful function for perceiving depth.
Liquid crystal light valve technologies for display applications
NASA Astrophysics Data System (ADS)
Kikuchi, Hiroshi; Takizawa, Kuniharu
2001-11-01
The liquid crystal (LC) light valve, which is a spatial light modulator that uses LC material, is a very important device in the area of display development, image processing, optical computing, holograms, etc. In particular, there have been dramatic developments in the past few years in the application of the LC light valve to projectors and other display technologies. Various LC operating modes and drive techniques have been developed, including thin film transistors, MOS-FETs and other active matrix drive techniques, to meet the requirements for higher resolution, and substantial improvements have been achieved in the performance of optical systems, resulting in brighter display images. Given this background, the number of applications for the LC light valve has greatly increased. The resolution has increased from QVGA (320 x 240) to QXGA (2048 x 1536) or even super-high resolution of eight million pixels. In the area of optical output, projectors of 600 to 13,000 lm are now available, and they are used for presentations, home theatres, electronic cinema and other diverse applications. Projectors using the LC light valve can display high-resolution images on large screens. They are now expected to be developed further as part of hyper-reality visual systems. This paper provides an overview of the needs for large-screen displays, human factors related to visual effects, the way in which LC light valves are applied to projectors, improvements in moving picture quality, and the results of the latest studies that have been made to increase the quality of still and moving images.
Emergent technologies: 25 years
NASA Astrophysics Data System (ADS)
Rising, Hawley K.
2013-03-01
This paper discusses the technologies that have been emerging over the 25 years since the Human Vision and Electronic Imaging conference began, technologies that the conference has been a part of and that have been a part of the conference, and looks at those emerging today, such as social networks, haptic technologies, and still-emerging imaging technologies, and at what we might expect for the future. Twenty-five years is a long time, and it is not without difficulty that we remember what was emerging in the late 1980s. Much was yet to be developed: the first commercial digital still camera was not yet on the market, although there were hand-held electronic cameras. Personal computers were not displaying standardized images, and image quality was not something that could be talked about in a standardized fashion, if only because image compression algorithms would not be standardized for several more years. Even further away were any standards for movie compression; there was no personal computer even on the horizon which could display movies. What later became an emergent technology and filled many sessions, image comparison and search, was not possible, nor was the currently emerging technology of social networks: the world wide web was still several years away. Printer technology was still devising dithers and image size manipulations which would consume many years, as would scanning technology, and image quality for both was a major issue for dithers and Fourier noise. From these humble beginnings to the current moves that are changing computing and the meaning of both electronic devices and human interaction with them, we will trace a course through the changing technology that holds some features constant for many years, while others come and go.
Oe, Hiroki; Watanabe, Nobuhisa; Miyoshi, Toru; Osawa, Kazuhiro; Akagi, Teiji; Kanazawa, Susumu; Ito, Hiroshi
2018-06-18
Management of adult congenital heart disease (ACHD) patients requires understanding of its complex morphology and functional features. An innovative imaging technique has been developed to display a virtual multi-planar reconstruction obtained from contrast-enhanced multidetector computed tomography (MDCT) corresponding to the same cross-sectional image as transthoracic echocardiography (TTE). The aim of this study is to assess the usefulness of this imaging technology in ACHD patients. This study consisted of 46 consecutive patients (30 women; mean age, 52±18 years old) with ACHD who had undergone contrast MDCT. All patients underwent TTE within a week of MDCT. An experienced sonographer who did not know the results of MDCT conducted a diagnosis using TTE and then using the new imaging technology. We studied whether this imaging technology provided additional or unexpected findings or enabled a more accurate diagnosis. With this imaging technology, the MDCT cross-section provides a higher-resolution image at depth compared to the corresponding TTE image. Depending on the MDCT section, which can be set arbitrarily under echo guidance, we could diagnose unexpected or incremental lesions or more accurately assess lesion severity in 27 patients (59%) compared to the TTE study alone. This imaging technology was useful in the following situations: CONCLUSIONS: This integrated imaging technology provides an incremental role over TTE in complex anatomy and allows functional information in ACHD patients. Copyright © 2018 Japanese College of Cardiology. Published by Elsevier Ltd. All rights reserved.
Optical design and testing: introduction.
Liang, Chao-Wen; Koshel, John; Sasian, Jose; Breault, Robert; Wang, Yongtian; Fang, Yi Chin
2014-10-10
Optical design and testing has numerous applications in industrial, military, consumer, and medical settings. Assembling a complete imaging or nonimage optical system may require the integration of optics, mechatronics, lighting technology, optimization, ray tracing, aberration analysis, image processing, tolerance compensation, and display rendering. This issue features original research ranging from the optical design of image and nonimage optical stimuli for human perception, optics applications, bio-optics applications, 3D display, solar energy system, opto-mechatronics to novel imaging or nonimage modalities in visible and infrared spectral imaging, modulation transfer function measurement, and innovative interferometry.
Incorporating digital imaging into dental hygiene practice.
Saxe, M J; West, D J
1997-01-01
The objective of this paper is to describe digital imaging technology: available modalities, scientific imaging process, advantages and limitations, and applications to dental hygiene practice. Advances in technology have created innovative imaging modalities for intraoral radiography that eliminate film as the traditional image receptor. Digital imaging generates instantaneous radiographic images on a display monitor following exposure. Advantages include lower patient exposure per image and elimination of film processing. Digital imaging enhances diagnostic capabilities and, therefore, treatment decisions by the oral healthcare provider. Utilization of digital imaging technology for intraoral radiography will advance the practice of dental hygiene. Although spatial resolution is inferior to conventional film, digital imaging provides adequate resolution to diagnose oral diseases. Dental hygienists must evaluate new technologies in radiography to continue providing quality care while reducing patient exposure to ionizing radiation.
Display technologies: application for the discovery of drug and gene delivery agents
Sergeeva, Anna; Kolonin, Mikhail G.; Molldrem, Jeffrey J.; Pasqualini, Renata; Arap, Wadih
2007-01-01
Recognition of molecular diversity of cell surface proteomes in disease is essential for the development of targeted therapies. Progress in targeted therapeutics requires establishing effective approaches for high-throughput identification of agents specific for clinically relevant cell surface markers. Over the past decade, a number of platform strategies have been developed to screen polypeptide libraries for ligands targeting receptors selectively expressed in the context of various cell surface proteomes. Streamlined procedures for identification of ligand-receptor pairs that could serve as targets in disease diagnosis, profiling, imaging and therapy have relied on the display technologies, in which polypeptides with desired binding profiles can be serially selected, in a process called biopanning, based on their physical linkage with the encoding nucleic acid. These technologies include virus/phage display, cell display, ribosomal display, mRNA display and covalent DNA display (CDT), with phage display being by far the most utilized. The scope of this review is the recent advancements in the display technologies with a particular emphasis on molecular mapping of cell surface proteomes with peptide phage display. Prospective applications of targeted compounds derived from display libraries in the discovery of targeted drugs and gene therapy vectors are discussed. PMID:17123658
Design of video processing and testing system based on DSP and FPGA
NASA Astrophysics Data System (ADS)
Xu, Hong; Lv, Jun; Chen, Xi'ai; Gong, Xuexia; Yang, Chen'na
2007-12-01
Based on a high-speed Digital Signal Processor (DSP) and a Field Programmable Gate Array (FPGA), a video capture, processing and display system is presented that is miniaturized and low-power. In this system, a triple-buffering scheme is used for capture and display, so that the application can always obtain a new buffer without waiting. The Digital Signal Processor has image processing capability and is used to detect the boundary of the workpiece's image. A video graduation technique is used to aim at the position to be tested, which also enhances the system's flexibility. A character superposition technique realized by the DSP is used to display the test result on the screen in character format. This system can process image information in real time, ensure test precision, and help to enhance product quality and quality management.
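A minimal sketch of the triple-buffering hand-off between capture and display is shown below (in Python rather than DSP/FPGA code, with placeholder frames), illustrating why neither side ever waits for the other.

```python
import threading, time, itertools

class TripleBuffer:
    """Minimal triple-buffer hand-off: the producer always has a free buffer
    to write into and the consumer always reads the most recently completed
    frame, so neither side blocks on the other."""
    def __init__(self):
        self.buffers = [None, None, None]
        self.write_idx, self.ready_idx, self.read_idx = 0, 1, 2
        self.lock = threading.Lock()

    def publish(self, frame):
        self.buffers[self.write_idx] = frame
        with self.lock:
            # Swap the just-written buffer into the "ready" slot.
            self.write_idx, self.ready_idx = self.ready_idx, self.write_idx

    def latest(self):
        with self.lock:
            # Swap the "ready" slot with the one we are about to read.
            self.read_idx, self.ready_idx = self.ready_idx, self.read_idx
        return self.buffers[self.read_idx]

buf = TripleBuffer()

def capture():                        # stands in for the FPGA capture path
    for n in itertools.count():
        buf.publish(f"frame {n}")
        time.sleep(0.01)

threading.Thread(target=capture, daemon=True).start()
time.sleep(0.05)
print(buf.latest())                   # display side picks up the newest frame
```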
Hard copies for digital medical images: an overview
NASA Astrophysics Data System (ADS)
Blume, Hartwig R.; Muka, Edward
1995-04-01
This paper is a condensed version of an invited overview on the technology of film hard-copies used in radiology. Because the overview was given to an essentially nonmedical audience, the reliance on film hard-copies in radiology is outlined in greater detail. The overview is concerned with laser image recorders generating monochrome prints on silver-halide films. The basic components of laser image recorders are sketched. The paper concentrates on the physical parameters - characteristic function, dynamic range, digitization resolution, modulation transfer function, and noise power spectrum - which define image quality and the information transfer capability of the printed image. A preliminary approach is presented to compare the printed image quality with noise in the acquired image as well as with the noise of state-of-the-art cathode-ray-tube display systems. High-performance laser-image-recorder/silver-halide-film/light-box systems are well capable of reproducing acquired radiologic information. Most recently, development has begun toward a display function standard for soft-copy display systems to facilitate similarity of image presentation between different soft-copy displays as well as between soft- and hard-copy displays. The standard display function is based on perceptual linearization. The standard is briefly reviewed to encourage the printer industry to adopt it, too.
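The idea of a perceptually linearized display function can be illustrated with a strongly simplified LUT in which equal digital steps give equal log-luminance steps; this is only a rough proxy, not the actual standard display function referred to in the abstract, and the luminance ranges below are assumed values.

```python
import numpy as np

def perceptually_linear_lut(l_min, l_max, n_values=256):
    """Simplified perceptual-linearization LUT (illustration only, not the
    actual grayscale standard display function): presentation values are
    mapped to luminances spaced equally in log-luminance, a rough proxy for
    equal perceived-contrast steps on both soft- and hard-copy devices."""
    p = np.arange(n_values) / (n_values - 1)
    return l_min * (l_max / l_min) ** p      # cd/m^2

# Example: map a soft-copy display (assumed 0.5-500 cd/m^2) and a
# film/light-box system (assumed 0.3-2000 cd/m^2) so that equal digital
# steps produce similar relative contrast steps on both.
display_lut = perceptually_linear_lut(0.5, 500.0)
film_lut = perceptually_linear_lut(0.3, 2000.0)
print(display_lut[:3], film_lut[:3])
```

Driving both devices through LUTs of this kind is what makes image presentation look similar across soft- and hard-copy media, which is the goal of the standard described in the abstract.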
NASA Astrophysics Data System (ADS)
Hotta, Aira; Sasaki, Takashi; Okumura, Haruhiko
2007-02-01
In this paper, we propose a novel display method to realize a high-resolution image in a central visual field for a hyper-realistic head dome projector. The method uses image processing based on the characteristics of human vision, namely, high central visual acuity and low peripheral visual acuity, and pixel shift technology, which is one of the resolution-enhancing technologies for projectors. The projected image with our method is a fine wide-viewing-angle image with high definition in the central visual field. We evaluated the psychological effects of the projected images with our method in terms of sensation of reality. According to the result, we obtained 1.5 times higher resolution in the central visual field and a greater sensation of reality by using our method.
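A simple way to picture the foveated composition described above is to blend a high-definition central patch into a lower-definition wide-angle image with a smooth radial falloff; the sketch below uses hypothetical sizes and random images and does not model the pixel-shift optics of the projector.

```python
import numpy as np

def foveated_composite(wide_low, center_high, center_pos, radius):
    """Blend a high-definition central patch into a wide, lower-definition
    background with a smooth radial falloff so the seam is not visible.
    `center_pos` is the patch's top-left corner in the wide image."""
    out = wide_low.copy()
    ph, pw = center_high.shape[:2]
    y0, x0 = center_pos
    yy, xx = np.mgrid[0:ph, 0:pw]
    # Radial weight: 1 in the middle of the patch, falling to 0 at `radius`.
    r = np.sqrt((yy - ph / 2) ** 2 + (xx - pw / 2) ** 2)
    w = np.clip(1.0 - r / radius, 0.0, 1.0)[..., None]
    region = out[y0:y0 + ph, x0:x0 + pw]
    out[y0:y0 + ph, x0:x0 + pw] = w * center_high + (1 - w) * region
    return out

wide = np.random.rand(600, 1600, 3)   # wide-viewing-angle, lower definition
fovea = np.random.rand(300, 400, 3)   # high-definition central image
result = foveated_composite(wide, fovea, (150, 600), radius=200)
```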
NASA Astrophysics Data System (ADS)
Zharinov, I. O.; Zharinov, O. O.
2017-12-01
The research addresses the quantitative analysis of the influence of technological variation in screen color profile parameters on the chromaticity coordinates of the displayed image. Mathematical expressions are proposed that approximate the two-dimensional distribution of the chromaticity coordinates of an image displayed on a screen with a three-component color formation principle. The proposed expressions point the way toward correction techniques that improve the reproducibility of the colorimetric features of displays.
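As a concrete illustration of how variation in the color profile shifts chromaticity, the sketch below mixes three primaries additively and computes CIE (x, y) chromaticity before and after a small random perturbation of the primaries; the nominal primary tristimulus values are assumed, not taken from the paper.

```python
import numpy as np

# Assumed nominal XYZ tristimulus values of the screen's R, G, B primaries
# at full drive (one row per primary); real values come from the color profile.
PRIMARIES_XYZ = np.array([[0.412, 0.213, 0.019],
                          [0.358, 0.715, 0.119],
                          [0.180, 0.072, 0.950]]).T   # columns R, G, B

def chromaticity(rgb, primaries=PRIMARIES_XYZ):
    """Chromaticity (x, y) of an additive three-primary mixture."""
    X, Y, Z = primaries @ np.asarray(rgb, dtype=float)
    s = X + Y + Z
    return X / s, Y / s

# Effect of technological variation: perturb the primaries slightly and
# observe how the chromaticity of the same drive triple shifts.
rng = np.random.default_rng(1)
nominal = chromaticity([0.5, 0.5, 0.5])
perturbed = chromaticity([0.5, 0.5, 0.5],
                         PRIMARIES_XYZ * (1 + 0.02 * rng.standard_normal((3, 3))))
print(nominal, perturbed)
```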
Image design and replication for image-plane disk-type multiplex holograms
NASA Astrophysics Data System (ADS)
Chen, Chih-Hung; Cheng, Yih-Shyang
2017-09-01
The fabrication methods and parameter design for both real-image generation and virtual-image display in image-plane disk-type multiplex holography are introduced in this paper. A theoretical model of a disk-type hologram is also presented and is then used in our two-step holographic processes, including the production of a non-image-plane master hologram and optical replication using a single-beam copying system for the production of duplicated holograms. Experimental results are also presented to verify the possibility of mass production using the one-shot holographic display technology described in this study.
Otto, Kristen J; Hapner, Edie R; Baker, Michael; Johns, Michael M
2006-02-01
Advances in commercial video technology have improved office-based laryngeal imaging. This study investigates the perceived image quality of a true high-definition (HD) video camera and the effect of magnification on laryngeal videostroboscopy. We performed a prospective, dual-armed, single-blinded analysis of a standard laryngeal videostroboscopic examination comparing 3 separate add-on camera systems: a 1-chip charge-coupled device (CCD) camera, a 3-chip CCD camera, and a true 720p (progressive scan) HD camera. Displayed images were controlled for magnification and image size (20-inch [50-cm] display, red-green-blue, and S-video cable for 1-chip and 3-chip cameras; digital visual interface cable and HD monitor for HD camera). Ten blinded observers were then asked to rate the following 5 items on a 0-to-100 visual analog scale: resolution, color, ability to see vocal fold vibration, sense of depth perception, and clarity of blood vessels. Eight unblinded observers were then asked to rate the difference in perceived resolution and clarity of laryngeal examination images when displayed on a 10-inch (25-cm) monitor versus a 42-inch (105-cm) monitor. A visual analog scale was used. These monitors were controlled for actual resolution capacity. For each item evaluated, randomized block design analysis demonstrated that the 3-chip camera scored significantly better than the 1-chip camera (p < .05). For the categories of color and blood vessel discrimination, the 3-chip camera scored significantly better than the HD camera (p < .05). For magnification alone, observers rated the 42-inch monitor statistically better than the 10-inch monitor. The expense of new medical technology must be judged against its added value. This study suggests that HD laryngeal imaging may not add significant value over currently available video systems, in perceived image quality, when a small monitor is used. Although differences in clarity between standard and HD cameras may not be readily apparent on small displays, a large display size coupled with HD technology may impart improved diagnosis of subtle vocal fold lesions and vibratory anomalies.
NASA Technical Reports Server (NTRS)
1990-01-01
A review is presented of the literature concerning control and display technology that is applicable to the Orbital Maneuvering Vehicle (OMV), a system being developed by NASA that will enable the user to remotely pilot it during a mission in space. In addition to the general review, special consideration is given to virtual image displays and their potential for use in the system, and a preliminary partial task analysis of the user's functions is also presented.
Deng, William Nanqiao; Wang, Shuo; Ventrici de Souza, Joao; Kuhl, Tonya L; Liu, Gang-Yu
2018-06-25
Scanning probe microscopy (SPM), such as atomic force microscopy (AFM), is widely known for high-resolution imaging of surface structures and nanolithography in two dimensions (2D), providing important physical insights into surface science and material science. This work reports a new algorithm to enable construction and display of layer-by-layer 3D structures from SPM images. The algorithm enables alignment of SPM images acquired during layer-by-layer deposition and removal of redundant features and faithfully constructs the deposited 3D structures. The display uses a "see-through" strategy to enable the structure of each layer to be visible. The results demonstrate high spatial accuracy as well as algorithm versatility; users can set parameters for reconstruction and display as per image quality and research needs. To the best of our knowledge, this method represents the first report to enable SPM technology for 3D imaging construction and display. The detailed algorithm is provided to facilitate usage of the same approach in any SPM software. These new capabilities support wide applications of SPM that require 3D image reconstruction and display, such as 3D nanoprinting and 3D additive and subtractive manufacturing and imaging.
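A simplified stand-in for the layer-by-layer construction step is sketched below: successive SPM images are registered to the first layer by FFT cross-correlation and stacked into a 3D volume. The redundant-feature removal and see-through rendering of the actual algorithm are omitted, and the data are toy arrays.

```python
import numpy as np

def align_offset(ref, img):
    """Integer (dy, dx) shift that best aligns `img` to `ref`,
    found from the peak of the FFT-based cross-correlation."""
    corr = np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(img)))
    dy, dx = np.unravel_index(np.argmax(np.abs(corr)), corr.shape)
    # Wrap shifts into the +/- half-size range.
    if dy > ref.shape[0] // 2: dy -= ref.shape[0]
    if dx > ref.shape[1] // 2: dx -= ref.shape[1]
    return dy, dx

def stack_layers(layers):
    """Align every layer image to the first one and stack them into a 3D
    volume (a simplified stand-in for the paper's construction algorithm)."""
    ref = layers[0]
    aligned = [ref]
    for img in layers[1:]:
        dy, dx = align_offset(ref, img)
        aligned.append(np.roll(img, (dy, dx), axis=(0, 1)))
    return np.stack(aligned)          # shape: (n_layers, H, W)

layers = [np.random.rand(128, 128) for _ in range(5)]   # toy AFM height images
volume = stack_layers(layers)
print(volume.shape)
```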
Spatial noise in microdisplays for near-to-eye applications
NASA Astrophysics Data System (ADS)
Hastings, Arthur R., Jr.; Draper, Russell S.; Wood, Michael V.; Fellowes, David A.
2011-06-01
Spatial noise in imaging systems has been characterized and its impact on image quality metrics has been addressed primarily with respect to the introduction of this noise at the sensor component. However, sensor fixed pattern noise is not the only source of fixed pattern noise in an imaging system. Display fixed pattern noise cannot be easily mitigated in processing and, therefore, must be addressed. In this paper, a thorough examination of the amount and the effect of display fixed pattern noise is presented. The specific manifestation of display fixed pattern noise is dependent upon the display technology. Utilizing a calibrated camera, US Army RDECOM CERDEC NVESD has developed a microdisplay (μdisplay) spatial noise data collection capability. Noise and signal power spectra were used to characterize the display signal to noise ratio (SNR) as a function of spatial frequency analogous to the minimum resolvable temperature difference (MRTD) of a thermal sensor. The goal of this study is to establish a measurement technique to characterize μdisplay limiting performance to assist in proper imaging system specification.
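The SNR-versus-spatial-frequency characterization can be sketched as the ratio of the power spectrum of a known test pattern to the fixed-pattern-noise spectrum of a flat-field frame; the frames, pattern and normalization below are illustrative assumptions, not the NVESD measurement procedure.

```python
import numpy as np

def row_power_spectrum(img):
    """Average 1-D power spectrum across display rows (horizontal frequencies)."""
    rows = img - img.mean(axis=1, keepdims=True)
    return np.mean(np.abs(np.fft.rfft(rows, axis=1)) ** 2, axis=0)

def display_snr(signal_frame, flat_frame):
    """Display signal-to-noise ratio versus spatial frequency: the spectrum of
    a known test pattern divided by the fixed-pattern-noise spectrum measured
    on a nominally uniform (flat-field) frame."""
    s = row_power_spectrum(signal_frame)
    n = row_power_spectrum(flat_frame)
    return np.sqrt(s / (n + 1e-12))

# Toy calibrated-camera frames of a microdisplay: a sinusoidal test pattern
# and a flat field carrying only (simulated) fixed-pattern noise.
x = np.linspace(0, 2 * np.pi * 20, 640)
signal = 0.5 + 0.4 * np.sin(x)[None, :] + 0.01 * np.random.randn(480, 640)
flat = 0.5 + 0.01 * np.random.randn(480, 640)
snr = display_snr(signal, flat)
print(snr.argmax())    # frequency bin where the test pattern dominates
```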
Teng, Dongdong; Xiong, Yi; Liu, Lilin; Wang, Biao
2015-03-09
Existing multiview three-dimensional (3D) display technologies encounter a discontinuous motion parallax problem due to the limited number of stereo-images presented to corresponding sub-viewing zones (SVZs). This paper proposes a novel multiview 3D display system that obtains continuous motion parallax by using a group of planar, aligned OLED microdisplays. By blocking some light rays with baffles inserted between adjacent OLED microdisplays, a transitional stereo-image assembled from two spatially complementary segments of adjacent stereo-images is presented to a complementary fusing zone (CFZ) located between two adjacent SVZs. For a moving observation point, the spatial ratio of the two complementary segments evolves gradually, resulting in continuously changing transitional stereo-images and thus overcoming the problem of discontinuous motion parallax. The proposed display system employs a projection-type architecture, gaining the merit of full display resolution while at the same time having a thin optical structure, offering great potential for portable or mobile 3D display applications. Experimentally, a prototype display system is demonstrated with 9 OLED microdisplays.
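The transitional stereo-image can be pictured as two complementary segments of adjacent views whose split position follows the eye across the fusing zone. The sketch below uses a simplified column-split geometry, which is an assumption; the actual segment shapes are set by the baffles and optics.

```python
import numpy as np

def transitional_image(view_a, view_b, ratio):
    """Assemble a transitional stereo-image from two spatially complementary
    segments of adjacent views: a fraction `ratio` of the columns comes from
    view_b, the remainder from view_a.  As the eye moves across the
    complementary fusing zone, `ratio` sweeps from 0 to 1, so the perceived
    image changes continuously instead of jumping between sub-viewing zones."""
    h, w = view_a.shape[:2]
    split = int(round(ratio * w))
    out = view_a.copy()
    out[:, :split] = view_b[:, :split]
    return out

view_a = np.zeros((240, 320, 3))      # stand-ins for adjacent stereo-images
view_b = np.ones((240, 320, 3))
for eye_position in np.linspace(0.0, 1.0, 5):
    img = transitional_image(view_a, view_b, eye_position)
```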
A 2D/3D hybrid integral imaging display by using fast switchable hexagonal liquid crystal lens array
NASA Astrophysics Data System (ADS)
Lee, Hsin-Hsueh; Huang, Ping-Ju; Wu, Jui-Yi; Hsieh, Po-Yuan; Huang, Yi-Pai
2017-05-01
This paper proposes a new display which can switch between 2D and 3D images on a monitor, which we call a Hybrid Display. In 3D display technologies, the reduction of image resolution is still an important issue: the more angular information is offered to the observer, the less spatial resolution remains for the image, because the panel resolution is fixed. For example, in an integral photography system, the part of the image without depth, such as the background, loses resolution in the transformation from a 2D to a 3D image. Therefore, we propose a method using a liquid crystal component to quickly switch between the 2D image and the 3D image; meanwhile, the 2D image is set as a background to compensate for the resolution loss. In the experiment, a hexagonal liquid crystal lens array is used in place of a fixed lens array. Moreover, in order to increase the lens power of the hexagonal LC lens array, we applied a high-resistance (Hi-R) layer structure on the electrode; the Hi-R layer creates a gradient electric field and shapes the lens profile. Also, we use a panel with 801 PPI to display the integral image in our system. Hence, the combination of a full-resolution 2D background with 3D depth objects forms the Hybrid Display.
Full resolution hologram-like autostereoscopic display
NASA Technical Reports Server (NTRS)
Eichenlaub, Jesse B.; Hutchins, Jamie
1995-01-01
Under this program, Dimension Technologies Inc. (DTI) developed a prototype display that uses a proprietary illumination technique to create autostereoscopic hologram-like full resolution images on an LCD operating at 180 fps. The resulting 3D image possesses a resolution equal to that of the LCD along with properties normally associated with holograms, including change of perspective with observer position and lack of viewing position restrictions. Furthermore, this autostereoscopic technique eliminates the need to wear special glasses to achieve the parallax effect. Under the program a prototype display was developed which demonstrates the hologram-like full resolution concept. To implement such a system, DTI explored various concept designs and enabling technologies required to support those designs. Specifically required were: a parallax illumination system with sufficient brightness and control; an LCD with rapid address and pixel response; and an interface to an image generation system for creation of computer graphics. Of the possible parallax illumination system designs, we chose a design which utilizes an array of fluorescent lamps. This system creates six sets of illumination areas to be imaged behind an LCD. This controlled illumination array is interfaced to a lenticular lens assembly which images the light segments into thin vertical light lines to achieve the parallax effect. This light line formation is the foundation of DTI's autostereoscopic technique. The David Sarnoff Research Center (Sarnoff) was subcontracted to develop an LCD that would operate with a fast scan rate and pixel response. Sarnoff chose a surface mode cell technique and produced the world's first large area pi-cell active matrix TFT LCD. The device provided adequate performance to evaluate five different perspective stereo viewing zones. A Silicon Graphics' Iris Indigo system was used for image generation which allowed for static and dynamic multiple perspective image rendering. During the development of the prototype display, we identified many critical issues associated with implementing such a technology. Testing and evaluation enabled us to prove that this illumination technique provides autostereoscopic 3D multi perspective images with a wide range of view, smooth transition, and flickerless operation given suitable enabling technologies.
Projection type transparent 3D display using active screen
NASA Astrophysics Data System (ADS)
Kamoshita, Hiroki; Yendo, Tomohiro
2015-05-01
Much equipment for enjoying 3D images, such as movie theaters and televisions, has been developed, so 3D video is now widely known as a familiar imaging technology. Displays that present 3D images include eyewear-based, naked-eye and HMD types, which have been used for different applications and locations, but transparent 3D displays have not been widely studied. If a large transparent 3D display is realized, it will be useful for displaying 3D images overlaid on a real scene in applications such as road signs, shop windows and screens in conference rooms. In a previous study, producing a transparent 3D display by using a special transparent screen and a number of projectors was proposed. However, for smooth motion parallax, many projectors are required. In this paper, we propose a display that has transparency and a large display area by time-multiplexing the projection image from one projector, or a small number of projectors, onto an active screen. The active screen is composed of a number of vertically long, small rotating mirrors. Stereoscopic viewing is realized by changing the projector image in synchronism with the scanning of the beam as the light is swept across the mirrors. The display also has transparency, because it is possible to see through the display when the mirrors become perpendicular to the viewer. We confirmed the validity of the proposed method by simulation.
High resolution image processing on low-cost microcomputers
NASA Technical Reports Server (NTRS)
Miller, R. L.
1993-01-01
Recent advances in microcomputer technology have resulted in systems that rival the speed, storage, and display capabilities of traditionally larger machines. Low-cost microcomputers can provide a powerful environment for image processing. A new software program which offers sophisticated image display and analysis on IBM-based systems is presented. Designed specifically for a microcomputer, this program provides a wide-range of functions normally found only on dedicated graphics systems, and therefore can provide most students, universities and research groups with an affordable computer platform for processing digital images. The processing of AVHRR images within this environment is presented as an example.
Real time diffuse reflectance polarisation spectroscopy imaging to evaluate skin microcirculation
NASA Astrophysics Data System (ADS)
O'Doherty, Jim; Henricson, Joakim; Nilsson, Gert E.; Anderson, Chris; Leahy, Martin J.
2007-07-01
This article describes the theoretical development and design of a real-time microcirculation imaging system, an extension of a technology previously developed by our group. The technology utilises polarisation spectroscopy, a technique used to selectively gate photons returning from various compartments of human skin tissue, namely from the superficial layers of the epidermis and from the deeper backscattered light of the dermal matrix. A consumer-end digital camcorder captures colour data with three individual CCDs, and a custom-designed light source consisting of a 24-LED ring light provides broadband illumination over the 400 nm - 700 nm wavelength region. The theory developed leads to an image processing algorithm whose output scales linearly with increasing red blood cell (RBC) concentration. Processed images are displayed online in real time at a rate of 25 frames s-1, at a frame size of 256 x 256 pixels, limited only by computer RAM and processing speed. General demonstrations of the technique in vivo show several advantages over similar technology.
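A commonly cited form of such a polarisation-spectroscopy output is the per-pixel ratio (R - G)/R, reflecting haemoglobin's stronger absorption of green than red light; the exact algorithm and calibration used in the paper may differ, so the sketch below is only indicative.

```python
import numpy as np

def rbc_concentration_map(frame, gain=1.0):
    """Red-blood-cell concentration index from a cross-polarized colour frame.

    Uses the commonly cited ratio (R - G) / R, which exploits the fact that
    haemoglobin absorbs green light much more strongly than red; the exact
    formulation and calibration constant in the paper may differ.
    """
    r = frame[..., 0].astype(float)
    g = frame[..., 1].astype(float)
    return gain * (r - g) / np.clip(r, 1e-6, None)

# Toy 256 x 256 frame standing in for one camcorder frame at 25 fps.
frame = np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8)
rbc_index = rbc_concentration_map(frame)
print(rbc_index.mean())
```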
A large flat panel multifunction display for military and space applications
NASA Astrophysics Data System (ADS)
Pruitt, James S.
1992-09-01
A flat panel multifunction display (MFD) that offers the size and reliability benefits of liquid crystal display technology while achieving near-CRT display quality is presented. Display generation algorithms that provide exceptional display quality are being implemented in custom VLSI components to minimize MFD size. A high-performance processor converts user-specified display lists to graphics commands used by these components, resulting in high-speed updates of two-dimensional and three-dimensional images. The MFD uses the MIL-STD-1553B data bus for compatibility with virtually all avionics systems. The MFD can generate displays directly from display lists received from the MIL-STD-1553B bus. Complex formats can be stored in the MFD and displayed using parameters from the data bus. The MFD also accepts direct video input and performs special processing on this input to enhance image quality.
Advances in phage display technology for drug discovery.
Omidfar, Kobra; Daneshpour, Maryam
2015-06-01
Over the past decade, several library-based methods have been developed to discover ligands with strong binding affinities for their targets. These methods mimic the natural evolution for screening and identifying ligand-target interactions with specific functional properties. Phage display technology is a well-established method that has been applied to many technological challenges including novel drug discovery. This review describes the recent advances in the use of phage display technology for discovering novel bioactive compounds. Furthermore, it discusses the application of this technology to produce proteins and peptides as well as minimize the use of antibodies, such as antigen-binding fragment, single-chain fragment variable or single-domain antibody fragments like VHHs. Advances in screening, manufacturing and humanization technologies demonstrate that phage display derived products can play a significant role in the diagnosis and treatment of disease. The effects of this technology are inevitable in the development pipeline for bringing therapeutics into the market, and this number is expected to rise significantly in the future as new advances continue to take place in display methods. Furthermore, a widespread application of this methodology is predicted in different medical technological areas, including biosensing, monitoring, molecular imaging, gene therapy, vaccine development and nanotechnology.
Military display market: third comprehensive edition
NASA Astrophysics Data System (ADS)
Desjardins, Daniel D.; Hopper, Darrel G.
2002-08-01
Defense displays comprise a niche market whose continually high performance requirements drive technology. The military displays market is being characterized to ascertain opportunities for synergy across platforms, and needs for new technology. All weapons systems are included. Some 382,585 displays are either now in use or planned in DoD weapon systems over the next 15 years, comprising displays designed into direct-view, projection-view, and virtual- image-view applications. This defense niche market is further fractured into 1163 micro-niche markets by the some 403 program offices who make decisions independently of one another. By comparison, a consumer electronics product has volumes of tens-of-millions of units for a single fixed design. Some 81% of defense displays are ruggedized versions of consumer-market driven designs. Some 19% of defense displays, especially in avionics cockpits and combat crewstations, are custom designs to gain the additional performance available in the technology base but not available in consumer-market-driven designs. Defense display sizes range from 13.6 to 4543 mm. More than half of defense displays are now based on some form of flat panel display technology, especially thin-film-transistor active matrix liquid crystal display (TFT AMLCD); the cathode ray tube (CRT) is still widely used but continuing to drop rapidly in defense market share.
CERESVis: A QC Tool for CERES that Leverages Browser Technology for Data Validation
NASA Astrophysics Data System (ADS)
Chu, C.; Sun-Mack, S.; Heckert, E.; Chen, Y.; Doelling, D.
2015-12-01
In this poster, we present three user interfaces that the CERES team uses to validate pixel-level data. Besides our home-grown tools, we also present the browser technology that we use to provide interactive interfaces, such as jQuery, HighCharts and Google Earth. We pass data to the users' browsers and use the browsers to do some simple computations. The three user interfaces are: Thumbnails -- displays hundreds of images to allow users to browse 24-hour data files in a few seconds. Multiple synchronized cursors -- allows users to compare multiple images side by side. Bounding boxes and histograms -- allows users to draw multiple bounding boxes on an image while the browser computes and displays the histograms.
Aidlen, Jeremy T; Glick, Sara; Silverman, Kenneth; Silverman, Harvey F; Luks, Francois I
2009-08-01
Light-weight, low-profile, and high-resolution head-mounted displays (HMDs) now allow personalized viewing of a laparoscopic image. The advantages include unobstructed viewing, regardless of position at the operating table, and the possibility to customize the image (i.e., enhanced reality, picture-in-picture, etc.). The bright image display allows use in daylight surroundings, and the low profile of the HMD provides adequate peripheral vision. Theoretic disadvantages include reliance for all on the same image capture and anticues (i.e., reality disconnect) when the projected image remains static despite changes in head position. This can lead to discomfort and even nausea. We have developed a prototype of an interactive laparoscopic image display that allows hands-free control of the displayed image by changes in the spatial orientation of the operator's head. The prototype consists of an HMD, a spatial orientation device, and computer software to enable hands-free panning and zooming of a video-endoscopic image display. The spatial orientation device uses magnetic fields created by a transmitter and receiver, each containing three orthogonal coils. The transmitter coils are efficiently driven, using USB power only, by a newly developed circuit, each at a unique frequency. The HMD-mounted receiver system links to a commercially available PC-interface PCI-bus sound card (M-Audiocard Delta 44; Avid Technology, Tewksbury, MA). Analog signals at the receiver are filtered, amplified, and converted to digital signals, which are processed to control the image display. The prototype uses a proprietary static fish-eye lens and software for the distortion-free reconstitution of any portion of the captured image. Left-right and up-down motions of the head (and HMD) produce real-time panning of the displayed image. Motion of the head toward, or away from, the transmitter causes real-time zooming in or out, respectively, of the displayed image. This prototype of the interactive HMD allows hands-free, intuitive control of the laparoscopic field, independent of the captured image.
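To make the head-motion-to-image mapping concrete, here is a minimal sketch of how yaw/pitch and transmitter distance could drive panning and zooming of a crop window in the captured frame. The gains, reference distance, and clamping values are invented placeholders; the prototype's actual signal processing and fish-eye correction are proprietary and not reproduced here.

    def crop_window(yaw_deg, pitch_deg, dist_mm, src_w, src_h,
                    yaw_gain=8.0, pitch_gain=8.0,
                    ref_dist_mm=600.0, zoom_gain=0.002, base_frac=0.5):
        """Map head orientation/position to a pan/zoom crop of the captured image."""
        # Moving toward the transmitter shrinks the crop (zoom in); moving away enlarges it.
        frac = min(1.0, max(0.2, base_frac + zoom_gain * (dist_mm - ref_dist_mm)))
        w, h = int(src_w * frac), int(src_h * frac)
        # Left-right and up-down head motion pans the crop centre.
        cx = src_w / 2 + yaw_gain * yaw_deg
        cy = src_h / 2 + pitch_gain * pitch_deg
        x0 = int(min(max(cx - w / 2, 0), src_w - w))
        y0 = int(min(max(cy - h / 2, 0), src_h - h))
        return x0, y0, w, h  # region to magnify and send to the HMD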
Development of a high-performance image server using ATM technology
NASA Astrophysics Data System (ADS)
Do Van, Minh; Humphrey, Louis M.; Ravin, Carl E.
1996-05-01
The ability to display digital radiographs to a radiologist in a reasonable time has long been the goal of many PACS. Intelligent routing, or pre-fetching images, has become a solution whereby a system uses a set of rules to route the images to a pre-determined destination. Images would then be stored locally on a workstation for faster display times. Some PACS use a large, centralized storage approach, and workstations retrieve images over high bandwidth connections. Another approach to image management is to provide a high performance, clustered storage system. This has the advantage of eliminating the complexity of pre-fetching and allows for rapid image display from anywhere within the hospital. We discuss the development of such a storage device, which provides extremely fast access to images across a local area network. Among the requirements for development of the image server were high performance, DICOM 3.0 compliance, and the use of industry standard components. The completed image server provides performance more than sufficient for use in clinical practice. Setting up modalities to send images to the image server is simple due to the adherence to the DICOM 3.0 specification. Using only off-the-shelf components allows us to keep the cost of the server relatively low and allows for easy upgrades as technology becomes more advanced. These factors make the image server ideal for use as a clustered storage system in a radiology department.
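For readers unfamiliar with the DICOM side of such a server, the sketch below shows a minimal storage service class provider using the present-day open-source pynetdicom library (not the software described in the paper); the AE title, port, and file naming are arbitrary assumptions.

    from pynetdicom import AE, evt, AllStoragePresentationContexts

    def handle_store(event):
        """Persist an incoming image object sent by a modality via C-STORE."""
        ds = event.dataset
        ds.file_meta = event.file_meta
        ds.save_as(f"{ds.SOPInstanceUID}.dcm", write_like_original=False)
        return 0x0000  # DICOM success status

    ae = AE(ae_title="IMGSERVER")
    ae.supported_contexts = AllStoragePresentationContexts
    ae.start_server(("0.0.0.0", 11112), block=True,
                    evt_handlers=[(evt.EVT_C_STORE, handle_store)])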
Real-time free-viewpoint DIBR for large-size 3DLED
NASA Astrophysics Data System (ADS)
Wang, NengWen; Sang, Xinzhu; Guo, Nan; Wang, Kuiru
2017-10-01
Three-dimensional (3D) display technologies have made great progress in recent years, and lenticular-array-based 3D display is a relatively mature technology that is most likely to be commercialized. In naked-eye 3D display, the screen size is one of the most important factors affecting the viewing experience. In order to construct a large-size naked-eye 3D display system, an LED display is used. However, pixel misalignment is an inherent defect of the LED screen, which degrades rendering quality. To address this issue, an efficient image synthesis algorithm is proposed. The Texture-Plus-Depth (T+D) format is chosen for the display content, and a modified Depth Image Based Rendering (DIBR) method is proposed to synthesize new views. In order to achieve real-time performance, the whole algorithm is implemented on the GPU. With state-of-the-art hardware and the efficient algorithm, a naked-eye 3D display system with an LED screen size of 6 m × 1.8 m is achieved. Experiments show that the algorithm can process 43-view 3D video at 4K × 2K resolution in real time on the GPU, and a vivid 3D experience is perceived.
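A minimal sketch of the forward-warping core of DIBR for a Texture-Plus-Depth frame is given below. It is a simplified CPU/NumPy illustration of the general technique, not the authors' GPU implementation, and the disparity scaling and z-buffer handling are assumptions.

    import numpy as np

    def dibr_forward_warp(texture, depth, max_disp_px):
        """texture: (H, W, 3) view image; depth: (H, W) in [0, 1], 1 = nearest.
        max_disp_px: horizontal parallax (pixels) of the virtual view."""
        h, w = depth.shape
        out = np.zeros_like(texture)
        zbuf = np.full((h, w), -1.0)
        for y in range(h):
            disp = np.round(max_disp_px * depth[y]).astype(int)
            for x in range(w):
                xt = x + disp[x]
                if 0 <= xt < w and depth[y, x] > zbuf[y, xt]:  # nearer pixel wins
                    zbuf[y, xt] = depth[y, x]
                    out[y, xt] = texture[y, x]
        return out  # remaining holes would be inpainted in a full implementation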
Migration of the digital interactive breast-imaging teaching file
NASA Astrophysics Data System (ADS)
Cao, Fei; Sickles, Edward A.; Huang, H. K.; Zhou, Xiaoqiang
1998-06-01
The digital breast imaging teaching file developed during the last two years in our laboratory has been used successfully at UCSF (University of California, San Francisco) as a routine teaching tool for training radiology residents and fellows in mammography. Building on this success, we have ported the teaching file from an old Pixar imaging/Sun SPARC 470 display system to our newly designed telemammography display workstation (Ultra SPARC 2 platform with two DOME Md5/SBX display boards). The old Pixar/Sun 470 system, although adequate for fast and high-resolution image display, is 4-year-old technology, expensive to maintain and difficult to upgrade. The new display workstation is more cost-effective and is also compatible with the digital image format from a full-field direct digital mammography system. The digital teaching file is built on a sophisticated computer-aided instruction (CAI) model, which simulates the management sequences used in imaging interpretation and work-up. Each user can be prompted to respond by making his/her own observations, assessments, and work-up decisions as well as the marking of image abnormalities. This effectively replaces the traditional 'show-and-tell' teaching file experience with an interactive, response-driven type of instruction.
The development of machine technology processing for earth resource survey
NASA Technical Reports Server (NTRS)
Landgrebe, D. A.
1970-01-01
The following technologies are considered for automatic processing of earth resources data: (1) registration of multispectral and multitemporal images, (2) digital image display systems, (3) data system parameter effects on satellite remote sensing systems, and (4) data compression techniques based on spectral redundancy. The importance of proper spectral band and compression algorithm selections is pointed out.
Development and evaluation of amusement machine using autostereoscopic 3D display
NASA Astrophysics Data System (ADS)
Kawai, Takashi; Shibata, Takashi; Shimizu, Yoichi; Kawata, Mitsuhiro; Suto, Masahiro
2004-05-01
Pachinko is a pinball-like game peculiar to Japan, and is one of the most common pastimes around the country. Recently, with the videogame market contracting, various multimedia technologies have been introduced into Pachinko machines. The authors have developed a Pachinko machine incorporating an autostereoscopic 3D display, and evaluated its effect on the visual function. As of April 2003, the new Pachinko machine has been on sale in Japan. The stereoscopic 3D image is displayed using an LCD. Backlighting for the right and left images is separate, and passes through a polarizing filter before reaching the LCD, which is sandwiched with a micro polarizer. The content selected for display was ukiyoe pictures (Japanese traditional woodblocks). The authors intended to reduce visual fatigue by presenting 3D images with depth "behind" the display and switching between 3D and 2D images. For evaluation of the Pachinko machine, a 2D version with identical content was also prepared, and the effects were examined and compared by testing psycho-physiological responses.
On the Uncertain Future of the Volumetric 3D Display Paradigm
NASA Astrophysics Data System (ADS)
Blundell, Barry G.
2017-06-01
Volumetric displays permit electronically processed images to be depicted within a transparent physical volume and enable a range of cues to depth to be inherently associated with image content. Further, images can be viewed directly by multiple simultaneous observers who are able to change vantage positions in a natural way. On the basis of research to date, we assume that the technologies needed to implement useful volumetric displays able to support translucent image formation are available. Consequently, in this paper we review aspects of the volumetric paradigm and identify important issues which have, to date, precluded their successful commercialization. Potentially advantageous characteristics are outlined and demonstrate that significant research is still needed in order to overcome barriers which continue to hamper the effective exploitation of this display modality. Given the recent resurgence of interest in developing commercially viable general purpose volumetric systems, this discussion is of particular relevance.
Adaptive controller for volumetric display of neuroimaging studies
NASA Astrophysics Data System (ADS)
Bleiberg, Ben; Senseney, Justin; Caban, Jesus
2014-03-01
Volumetric display of medical images is an increasingly relevant method for examining an imaging acquisition as the prevalence of thin-slice imaging increases in clinical studies. Current mouse and keyboard implementations for volumetric control provide neither the sensitivity nor the specificity required to manipulate a volumetric display for efficient reading in a clinical setting. Solutions for efficient volumetric manipulation provide more sensitivity by removing the binary nature of actions controlled by keyboard clicks, but specificity is lost because a single action may change the display in several directions. When specificity is then further addressed by re-implementing hardware binary functions through the introduction of mode control, the result is a cumbersome interface that fails to achieve the revolutionary benefit required for adoption of a new technology. We address the specificity versus sensitivity problem of volumetric interfaces by providing adaptive positional awareness to the volumetric control device, manipulating communication between the hardware driver and existing software methods for volumetric display of medical images. This creates a tethered effect for volumetric display, providing a smooth interface that improves on existing hardware approaches to volumetric scene manipulation.
Image Quality Characteristics of Handheld Display Devices for Medical Imaging
Yamazaki, Asumi; Liu, Peter; Cheng, Wei-Chung; Badano, Aldo
2013-01-01
Handheld devices such as mobile phones and tablet computers have become widespread with thousands of available software applications. Recently, handhelds are being proposed as part of medical imaging solutions, especially in emergency medicine, where immediate consultation is required. However, handheld devices differ significantly from medical workstation displays in terms of display characteristics. Moreover, the characteristics vary significantly among device types. We investigate the image quality characteristics of various handheld devices with respect to luminance response, spatial resolution, spatial noise, and reflectance. We show that the luminance characteristics of the handheld displays are different from those of workstation displays complying with the grayscale standard target response, suggesting that luminance calibration might be needed. Our results also demonstrate that the spatial characteristics of handhelds can surpass those of medical workstation displays, particularly for recent-generation devices. While a 5 mega-pixel monochrome workstation display has horizontal and vertical modulation transfer factors of 0.52 and 0.47 at the Nyquist frequency, the handheld displays released after 2011 can have values higher than 0.63 at the respective Nyquist frequencies. The noise power spectra for workstation displays are higher than 1.2×10⁻⁵ mm² at 1 mm⁻¹, while handheld displays have values lower than 3.7×10⁻⁶ mm². Reflectance measurements on some of the handheld displays are consistent with measurements for workstation displays with, in some cases, low specular and diffuse reflectance coefficients. The variability of the characterization results among devices due to the different technological features indicates that image quality varies greatly among handheld display devices. PMID:24236113
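As a pointer to how noise figures of this kind can be reproduced, the sketch below estimates a 2-D noise power spectrum from a photographed flat-field patch; the pixel pitch, detrending, and ROI handling are simplified assumptions rather than the authors' exact measurement protocol.

    import numpy as np

    def noise_power_spectrum(flat_patch, pixel_pitch_mm):
        """flat_patch: 2-D luminance samples of a nominally uniform display area."""
        roi = flat_patch - flat_patch.mean()          # remove the DC term
        ny, nx = roi.shape
        nps = (pixel_pitch_mm ** 2 / (nx * ny)) * np.abs(np.fft.fft2(roi)) ** 2
        fx = np.fft.fftfreq(nx, d=pixel_pitch_mm)     # spatial frequency, cycles/mm
        fy = np.fft.fftfreq(ny, d=pixel_pitch_mm)
        return nps, fx, fy

    # e.g. the horizontal-axis value near 1 mm^-1:
    # ix = np.argmin(np.abs(fx - 1.0)); nps_at_1 = nps[0, ix]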
Panoramic, large-screen, 3-D flight display system design
NASA Technical Reports Server (NTRS)
Franklin, Henry; Larson, Brent; Johnson, Michael; Droessler, Justin; Reinhart, William F.
1995-01-01
The report documents and summarizes the results of the required evaluations specified in the SOW and the design specifications for the selected display system hardware. Also included are the proposed development plan and schedule as well as the estimated rough order of magnitude (ROM) cost to design, fabricate, and demonstrate a flyable prototype research flight display system. The thrust of the effort was development of a complete understanding of the user/system requirements for a panoramic, collimated, 3-D flyable avionic display system and the translation of the requirements into an acceptable system design for fabrication and demonstration of a prototype display in the early 1997 time frame. Eleven display system design concepts were presented to NASA LaRC during the program, one of which was down-selected to a preferred display system concept. A set of preliminary display requirements was formulated. The state of the art in image source technology, 3-D methods, collimation methods, and interaction methods for a panoramic, 3-D flight display system were reviewed in depth and evaluated. Display technology improvements and risk reductions associated with maturity of the technologies for the preferred display system design concept were identified.
Recent patents on electrophoretic displays and materials.
Christophersen, Marc; Phlips, Bernard F
2010-11-01
Electrophoretic displays (EPDs) have made their way into consumer products. EPDs enable displays that offer the look and form of a printed page, often called "electronic paper". We review recent apparatus and method patents for EPD devices and their fabrication. A brief introduction into the basic display operation and history of EPDs is given, while pointing out the technological challenges and difficulties for inventors. Recently, the majority of scientific publications and patenting activity has been directed to micro-segmented EPDs. These devices exhibit high optical reflectance and contrast, wide viewing angle, and high image resolution. Micro-segmented EPDs can also be integrated with flexible transistor technologies into flexible displays. Typical particle sizes range from 200 nm to 2 micrometers. Currently, one very active area of patenting is the development of full-color EPDs. We summarize the recent patenting activity for EPDs and provide comments on the perceived factors driving intellectual property protection for EPD technologies.
Research and Development of Large Area Color AC Plasma Displays
NASA Astrophysics Data System (ADS)
Shinoda, Tsutae
1998-10-01
A plasma display is essentially a gas discharge device using discharges in small cavities of about 0.1 mm. Color plasma displays utilize visible light from phosphors excited by the ultraviolet emission of the discharge, in contrast to monochrome plasma displays, which utilize visible light directly from the gas discharges. At the early stage of color plasma display development, degradation of the phosphors and unstable operating voltages prevented the realization of a practical color plasma display. The introduction of the three-electrode surface-discharge technology opened the way to solving these problems. Two key technologies, a simple panel structure with stripe ribs and aligned phosphors and a full-color image driving method based on an address-and-display-period-separated sub-field scheme, have realized practically available full-color plasma displays. A full-color plasma display was first developed in 1992 as a 21-in.-diagonal PDP, followed by a 42-in.-diagonal PDP in 1995. Currently, a 50-in.-diagonal color plasma display has been developed. Large-area color plasma displays have already been put on the market and are creating new markets, such as wall-hanging TVs and multimedia displays for advertisement, information, etc. This paper reviews the history of surface-discharge color plasma display technologies and the current status of color plasma displays.
NASA Astrophysics Data System (ADS)
Bergstedt, Robert; Fink, Charles G.; Flint, Graham W.; Hargis, David E.; Peppler, Philipp W.
1997-07-01
Laser Power Corporation has developed a new type of projection display, based upon microlaser technology and a novel scan architecture, which provides the foundation for bright, extremely high resolution images. A review of projection technologies is presented, along with the limitations of each and the difficulties they experience in trying to generate high-resolution imagery. The design of the microlaser-based projector is discussed along with the advantages of this technology. High-power red, green, and blue microlasers have been designed and developed specifically for use in projection displays. These sources, in combination with a high-resolution, high-contrast modulator, produce a 24-bit color gamut capable of supporting the full range of real-world colors. The new scan architecture, which reduces the modulation rate and scan speeds required, is described. This scan architecture, along with the inherent brightness of the laser, provides the fundamentals necessary to produce a 5120 by 4096 resolution display. The brightness and color uniformity of the display are excellent, allowing for tiling of the displays with far fewer artifacts than in a traditionally tiled display. Applications for the display include simulators, command and control centers, and electronic cinema.
A new concept for medical imaging centered on cellular phone technology.
Granot, Yair; Ivorra, Antoni; Rubinsky, Boris
2008-04-30
According to World Health Organization reports, some three quarters of the world population does not have access to medical imaging. In addition, in developing countries over 50% of medical equipment that is available is not being used because it is too sophisticated or in disrepair or because the health personnel are not trained to use it. The goal of this study is to introduce and demonstrate the feasibility of a new concept in medical imaging that is centered on cellular phone technology and which may provide a solution to medical imaging in underserved areas. The new system replaces the conventional stand-alone medical imaging device with a new medical imaging system made of two independent components connected through cellular phone technology. The independent units are: a) a data acquisition device (DAD) at a remote patient site that is simple, with limited controls and no image display capability and b) an advanced image reconstruction and hardware control multiserver unit at a central site. The cellular phone technology transmits unprocessed raw data from the patient site DAD and receives and displays the processed image from the central site. (This is different from conventional telemedicine where the image reconstruction and control is at the patient site and telecommunication is used to transmit processed images from the patient site). The primary goal of this study is to demonstrate that the cellular phone technology can function in the proposed mode. The feasibility of the concept is demonstrated using a new frequency division multiplexing electrical impedance tomography system, which we have developed for dynamic medical imaging, as the medical imaging modality. The system is used to image through a cellular phone a simulation of breast cancer tumors in a medical imaging diagnostic mode and to image minimally invasive tissue ablation with irreversible electroporation in a medical imaging interventional mode.
Advanced helmet tracking technology developments for naval aviation
NASA Astrophysics Data System (ADS)
Brindle, James H.
1996-06-01
There is a critical need across the Services to improve the effectiveness of aircrew within the crewstation by capitalizing on the natural psycho-motor skills of the pilot through the use of a variety of helmet-mounted visual display and control techniques. This has resulted in considerable interest and significant ongoing research and development efforts on the part of the Navy, as well as the Army and the Air Force, in the technology building blocks associated with this area, such as advanced head-position sensing or head-tracking technologies, helmet-mounted display optics and electronics, and advanced night vision or image intensification technologies.
FSC LCD technology for military and avionics applications
NASA Astrophysics Data System (ADS)
Sarma, Kalluri R.; Schmidt, John; Roush, Jerry
2009-05-01
Field sequential color (FSC) liquid crystal displays (LCDs) using a high-speed LCD mode and an R, G, B LED backlight offer significant potential for lower power consumption, higher resolution, higher brightness and lower cost compared to conventional R, G, B color-filter-based LCDs, and are thus of interest for various military and avionic display applications. While DLP projection TVs and camcorder LCD viewfinder displays using FSC technology have been introduced in the consumer market, large-area direct-view LCDs based on FSC technology have not yet reached the commercial market. Further, large-area FSC LCDs can present unique operational issues in avionic and military environments, particularly for operation over a broad temperature range and with respect to susceptibility to the color breakup image artifact. In this paper we review the current status of FSC LCD technology and then discuss the results of our efforts to evaluate FSC LCD technology for avionic applications.
Design of polarization imaging system based on CIS and FPGA
NASA Astrophysics Data System (ADS)
Zeng, Yan-an; Liu, Li-gang; Yang, Kun-tao; Chang, Da-ding
2008-02-01
As polarization is an important characteristic of light, polarization image detection is a new imaging technology that combines polarimetry with image processing. In contrast to traditional intensity-based imaging, polarization imaging can acquire important information that traditional imaging cannot. It is expected to be widely used in both civilian and military fields, and because it can solve problems that traditional imaging cannot, it has been researched widely around the world. This paper first introduces the physical theory of polarization imaging, and then describes image collection and polarization image processing based on a CIS (CMOS image sensor) and an FPGA. The polarization imaging system consists of hardware and software. The hardware includes the CMOS image sensor driver module, a VGA display module, an SRAM access module and a real-time image data collection system based on the FPGA; the circuit diagram and PCB were designed. Stokes-vector and polarization-angle computation methods are analyzed in the software part. The floating-point multiplications in the Stokes-vector computation are optimized into shift and addition operations. Experimental results show that the system can collect and display image data from the CMOS image sensor in real time.
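For reference, the Stokes-vector and polarization-angle arithmetic mentioned above reduces, for four linear-polarizer orientations, to the few lines sketched below (floating-point here for clarity; the paper's FPGA version replaces the multiplications with shifts and additions).

    import numpy as np

    def linear_stokes(i0, i45, i90, i135):
        """Intensities measured behind a linear polarizer at 0°, 45°, 90°, 135°."""
        s0 = 0.5 * (i0 + i45 + i90 + i135)                        # total intensity
        s1 = i0 - i90
        s2 = i45 - i135
        aop = 0.5 * np.degrees(np.arctan2(s2, s1))                # angle of polarization
        dolp = np.sqrt(s1 ** 2 + s2 ** 2) / np.maximum(s0, 1e-9)  # degree of linear polarization
        return s0, s1, s2, aop, dolp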
Dynamic integral imaging technology for 3D applications (Conference Presentation)
NASA Astrophysics Data System (ADS)
Huang, Yi-Pai; Javidi, Bahram; Martínez-Corral, Manuel; Shieh, Han-Ping D.; Jen, Tai-Hsiang; Hsieh, Po-Yuan; Hassanfiroozi, Amir
2017-05-01
Depth and resolution are always a trade-off in integral imaging technology. With dynamically adjustable devices, the two factors of integral imaging can be fully compensated through time-multiplexed addressing. Those dynamic devices can be mechanically or electrically driven. In this presentation, we mainly focus on various liquid crystal devices that can change the focal length, scan and shift the image position, or switch between 2D and 3D modes. By using liquid crystal devices, dynamic integral imaging has been successfully applied to 3D display, capture, and bio-imaging applications.
Internet Protocol Display Sharing Solution for Mission Control Center Video System
NASA Technical Reports Server (NTRS)
Brown, Michael A.
2009-01-01
With the advent of broadcast television as a constant source of information throughout the NASA manned space flight Mission Control Center (MCC) at the Johnson Space Center (JSC), the current Video Transport System (VTS) provides the ability to visually enhance real-time applications as a broadcast channel that decision-making flight controllers have come to rely on, but it can be difficult to maintain and costly. The Operations Technology Facility (OTF) of the Mission Operations Facility Division (MOFD) has been tasked to provide insight into new, innovative technological solutions for the MCC environment, focusing on alternative architectures for a VTS. New technology will be provided to enable sharing of all imagery from one specific computer display, better known as Display Sharing (DS), to other computer displays and display systems such as large projector systems, flight control rooms, and back supporting rooms throughout the facilities and other offsite centers using IP networks. It has been stated that Internet Protocol (IP) applications are ready substitutes for the current visual architecture, but quality and speed may need to be sacrificed to reduce cost and improve maintainability. Although the IP infrastructure can support many technologies, the simple task of sharing one's computer display can be rather clumsy and difficult to configure and manage across many operators and products. The DS process shall collectively automate the sharing of images while addressing such characteristics as bandwidth management, encryption and security, recovery from loss of signal or loss of acquisition, performance latency, scalability, multi-sharing, ease of initial integration and sustained configuration, integration with video adjustment packages, collaborative tools, and host/recipient controllability, with the paramount priority being an enterprise solution that provides ownership of the whole process while maintaining the integrity of the latest display devices. This study provides insight into the many possibilities that can be filtered down to a harmoniously responsive product for use in today's MCC environment.
Context-dependent JPEG backward-compatible high-dynamic range image compression
NASA Astrophysics Data System (ADS)
Korshunov, Pavel; Ebrahimi, Touradj
2013-10-01
High-dynamic range (HDR) imaging is expected, together with ultrahigh definition and high frame rate video, to become a technology that may change the photo, TV, and film industries. Many cameras and displays capable of capturing and rendering both HDR images and video are already available in the market. The popularity and full public adoption of HDR content is, however, hindered by the lack of standards for quality evaluation, file formats, and compression, as well as by the large legacy base of low-dynamic range (LDR) displays that are unable to render HDR. To facilitate the widespread use of HDR, backward compatibility of HDR with commonly used legacy technologies for storage, rendering, and compression of video and images is necessary. Although many tone-mapping algorithms have been developed for generating viewable LDR content from HDR, there is no consensus on which algorithm to use and under which conditions. Via a series of subjective evaluations, we demonstrate the dependency of the perceptual quality of tone-mapped LDR images on the context: environmental factors, display parameters, and the image content itself. Based on the results of the subjective tests, we propose to extend the JPEG file format, the most popular image format, in a backward-compatible manner to also handle HDR images. An architecture to achieve such backward compatibility with JPEG is proposed. A simple implementation of lossy compression demonstrates the efficiency of the proposed architecture compared with state-of-the-art HDR image compression.
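As an example of the kind of tone-mapping operator evaluated in such studies, here is a sketch of the classic global Reinhard operator (one common choice, not necessarily among the operators tested in the paper); the key value and gamma are conventional defaults.

    import numpy as np

    def reinhard_tonemap(hdr_rgb, key=0.18, eps=1e-6):
        """hdr_rgb: float array (H, W, 3) of linear HDR radiance values."""
        lum = 0.2126 * hdr_rgb[..., 0] + 0.7152 * hdr_rgb[..., 1] + 0.0722 * hdr_rgb[..., 2]
        log_avg = np.exp(np.mean(np.log(lum + eps)))   # log-average scene luminance
        scaled = key * lum / log_avg
        ldr_lum = scaled / (1.0 + scaled)              # global Reinhard curve
        ldr = hdr_rgb * (ldr_lum / np.maximum(lum, eps))[..., None]
        return (np.clip(ldr, 0.0, 1.0) ** (1 / 2.2) * 255).astype(np.uint8)  # gamma-encode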
ERIC Educational Resources Information Center
Ekstrom, James
2001-01-01
Advocates using computer imaging technology to assist students in doing projects in which determining density is important. Students can study quantitative comparisons of masses, lengths, and widths using computer software. Includes figures displaying computer images of shells, yeast cultures, and the Aral Sea. (SAH)
Projection display technology for avionics applications
NASA Astrophysics Data System (ADS)
Kalmanash, Michael H.; Tompkins, Richard D.
2000-08-01
Avionics displays often require custom image sources tailored to demanding program needs. Flat panel devices are attractive for cockpit installations, however recent history has shown that it is not possible to sustain a business manufacturing custom flat panels in small volume specialty runs. As the number of suppliers willing to undertake this effort shrinks, avionics programs unable to utilize commercial-off-the-shelf (COTS) flat panels are placed in serious jeopardy. Rear projection technology offers a new paradigm, enabling compact systems to be tailored to specific platform needs while using a complement of COTS components. Projection displays enable improved performance, lower cost and shorter development cycles based on inter-program commonality and the wide use of commercial components. This paper reviews the promise and challenges of projection technology and provides an overview of Kaiser Electronics' efforts in developing advanced avionics displays using this approach.
Tang, Rui; Ma, Long-Fei; Rong, Zhi-Xia; Li, Mo-Dan; Zeng, Jian-Ping; Wang, Xue-Dong; Liao, Hong-En; Dong, Jia-Hong
2018-04-01
Augmented reality (AR) technology is used to reconstruct three-dimensional (3D) images of hepatic and biliary structures from computed tomography and magnetic resonance imaging data, and to superimpose the virtual images onto a view of the surgical field. In liver surgery, these superimposed virtual images help the surgeon to visualize intrahepatic structures and therefore to operate precisely and to improve clinical outcomes. The keywords "augmented reality", "liver", "laparoscopic" and "hepatectomy" were used for searching publications in the PubMed database. The primary source of literature was peer-reviewed journals up to December 2016. Additional articles were identified by manual search of references found in the key articles. In general, AR technology mainly includes 3D reconstruction, display, registration and tracking techniques and has recently been adopted gradually for liver surgeries, including laparoscopy and laparotomy, with video-based AR-assisted laparoscopic resection as the main technical application. By applying AR technology, blood vessels and tumor structures in the liver can be displayed during surgery, which permits precise navigation during complex surgical procedures. Liver deformation and registration errors during surgery are the main factors that limit the application of AR technology. With recent advances, AR technologies have the potential to improve hepatobiliary surgical procedures. However, additional clinical studies will be required to evaluate AR as a tool for reducing postoperative morbidity and mortality and for the improvement of long-term clinical outcomes. Future research is needed in the fusion of multiple imaging modalities, improving biomechanical liver modeling, and enhancing image data processing and tracking technologies to increase the accuracy of current AR methods. Copyright © 2018 First Affiliated Hospital, Zhejiang University School of Medicine in China. Published by Elsevier B.V. All rights reserved.
Emerging Technologies: Something Borrowed, Something New
NASA Astrophysics Data System (ADS)
Heinhorst, Sabine; Cannon, Gordon
1999-04-01
The cover of the July 16, 1998 issue of Nature features a remarkable new "smart material" that can be used to print electronically on a variety of surfaces, including paper, plastic, and metal. The electrophoretic ink developed in J. Jacobson's lab at the Massachusetts Institute of Technology consists of liquid with dispersed, oppositely charged black and white microparticles that are contained in microcapsules. Application of a potential results in migration of the microparticles to opposite sides of the microcapsule, thereby generating either a white or black image that depends on the direction of the potential. Unlike liquid crystal displays, the image generated with electrophoretic ink is stable even after the power has been turned off. Cost and resolution of this new technology compare favorably with most other electronic image display systems currently in use or under development. Promising applications for electrophoretic ink in the future may range from street signs to electronic books (Comiskey et al., Vol. 394, pp 253-255; "News and Views" commentary by R. Wisnieff on pp 225-227).
Real Time Urban Acoustics Using Commerical Technologies
2011-08-01
delays, and rendering for binaural or surround sound display [2]. VibeStudio does not include propagation effects of reflections, diffusion, or...available for rendering both binaural headphone displays as well as standard and arbitrary surround sound formats. For this reason, minimal detail is...provided in this paper and the reader is referred to [2]. An image illustrating a binaural display scenario and a typical surround sound setup are
Augmented reality for the surgeon: Systematic review.
Yoon, Jang W; Chen, Robert E; Kim, Esther J; Akinduro, Oluwaseun O; Kerezoudis, Panagiotis; Han, Phillip K; Si, Phong; Freeman, William D; Diaz, Roberto J; Komotar, Ricardo J; Pirris, Stephen M; Brown, Benjamin L; Bydon, Mohamad; Wang, Michael Y; Wharen, Robert E; Quinones-Hinojosa, Alfredo
2018-04-30
Since the introduction of wearable head-up displays, there has been much interest in the surgical community adapting this technology into routine surgical practice. We used the keywords augmented reality OR wearable device OR head-up display AND surgery using PubMed, EBSCO, IEEE and SCOPUS databases. After exclusions, 74 published articles that evaluated the utility of wearable head-up displays in surgical settings were included in our review. Across all studies, the most common use of head-up displays was in cases of live streaming from surgical microscopes, navigation, monitoring of vital signs, and display of preoperative images. The most commonly used head-up display was Google Glass. Head-up displays enhanced surgeons' operating experience; common disadvantages include limited battery life, display size and discomfort. Due to ergonomic issues with dual-screen devices, augmented reality devices with the capacity to overlay images onto the surgical field will be key features of next-generation surgical head-up displays. Copyright © 2018 John Wiley & Sons, Ltd.
Parallax barrier engineering for image quality improvement in an autostereoscopic 3D display.
Kim, Sung-Kyu; Yoon, Ki-Hyuk; Yoon, Seon Kyu; Ju, Heongkyu
2015-05-18
We present an image quality improvement in a parallax barrier (PB)-based multiview autostereoscopic 3D display system under real-time tracking of the positions of a viewer's eyes. The system presented exploits a parallax barrier engineered to offer significantly improved quality of three-dimensional images for a moving viewer without eyewear under dynamic eye tracking. The improved image quality includes enhanced uniformity of image brightness, reduced point crosstalk, and no pseudoscopic effects. We control the relative ratio between two parameters, i.e., the pixel size and the aperture of a parallax barrier slit, to improve the uniformity of image brightness in the viewing zone. The eye tracking, which monitors the positions of the viewer's eyes, enables the pixel data control software to turn on only the pixels for view images near the viewer's eyes (the other pixels are turned off), thus reducing point crosstalk. The eye-tracking-combined software provides the right images for the respective eyes, therefore producing no pseudoscopic effects at zone boundaries. The viewing zone can span an area larger than the central viewing zone offered by a conventional PB-based multiview autostereoscopic 3D display (no eye tracking). Our 3D display system also provides multiple views for motion parallax under eye tracking. More importantly, we demonstrate a substantial reduction of point crosstalk at the viewing zone, to a level comparable to that of a commercialized eyewear-assisted 3D display system. The multiview autostereoscopic 3D display presented can greatly resolve the point crosstalk problem, which is one of the critical factors that have made it difficult for previous multiview autostereoscopic 3D display technologies to replace their eyewear-assisted counterparts.
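To illustrate the eye-tracked pixel-gating idea, the sketch below assigns each sub-pixel column to a view and keeps only the columns whose views land nearest the tracked left and right eyes. The geometry is a deliberately simplified single-period model with invented parameter names; the actual system's barrier design and calibration are more involved.

    import numpy as np

    def gate_columns(num_cols, num_views, eye_x_mm, view_pitch_mm, ipd_mm=65.0):
        """Return, per display column, which eye's image to show (0 = left, 1 = right),
        or -1 to switch the column off for crosstalk reduction."""
        # View index seen from straight ahead, shifted as the tracked viewer moves sideways.
        shift = int(round(eye_x_mm / view_pitch_mm))
        view_of_col = (np.arange(num_cols) + shift) % num_views
        left_view = 0
        right_view = int(round(ipd_mm / view_pitch_mm)) % num_views
        out = np.full(num_cols, -1)
        out[view_of_col == left_view] = 0
        out[view_of_col == right_view] = 1
        return out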
Optimization of electro-optical parameters of LCD for advertising systems
NASA Astrophysics Data System (ADS)
Olifierczuk, Marek; Zielinski, Jerzy; Klosowicz, Stanislaw J.
1998-02-01
An analysis of the optimization of a negative-image twisted nematic LCD is presented. Theoretical considerations are confirmed by experimental results. The effect of material parameters and technology on the contrast ratio and display dynamics is given. The effect in a TN display with black dye is presented.
A full-parallax 3D display with restricted viewing zone tracking viewer's eye
NASA Astrophysics Data System (ADS)
Beppu, Naoto; Yendo, Tomohiro
2015-03-01
Three-dimensional (3D) vision has now become a widely known and familiar imaging technique. 3D displays have been put into practical use in various fields, such as entertainment and medicine, and the development of 3D display technology will play an important role in a wide range of fields. There are various methods for displaying 3D images; we focus on one based on ray reproduction. This method requires many viewpoint images to achieve full parallax, because it displays a different image depending on the viewpoint. We propose to reduce wasteful rays by limiting the projector's rays to the area around the viewer using a spinning mirror, and thereby to increase the effectiveness of the display device and achieve a full-parallax 3D display. The proposed method uses viewer eye tracking, a high-speed projector, a rotating mirror that tracks the viewer (a spinning mirror), a concave mirror array with elements of different vertical slopes arranged circumferentially, and a cylindrical mirror. In simulations of the proposed method, we confirmed the scanning range and the locus of ray movement in the horizontal direction. In addition, we confirmed the switching of viewpoints and the convergence performance of rays in the vertical direction. Therefore, we confirmed that a full-parallax display can be realized.
Udani, Ankeet D; Harrison, T Kyle; Howard, Steven K; Kim, T Edward; Brock-Utne, John G; Gaba, David M; Mariano, Edward R
2012-08-01
A head-mounted display provides continuous real-time imaging within the practitioner's visual field. We evaluated the feasibility of using head-mounted display technology to improve ergonomics in ultrasound-guided regional anesthesia in a simulated environment. Two anesthesiologists performed an equal number of ultrasound-guided popliteal-sciatic nerve blocks using the head-mounted display on a porcine hindquarter, and an independent observer assessed each practitioner's ergonomics (eg, head turning, arching, eye movements, and needle manipulation) and the overall block quality based on the injectate spread around the target nerve for each procedure. Both practitioners performed their procedures without directly viewing the ultrasound monitor, and neither practitioner showed poor ergonomic behavior. Head-mounted display technology may offer potential advantages during ultrasound-guided regional anesthesia.
See-through 3D technology for augmented reality
NASA Astrophysics Data System (ADS)
Lee, Byoungho; Lee, Seungjae; Li, Gang; Jang, Changwon; Hong, Jong-Young
2017-06-01
Augmented reality is recently attracting a lot of attention as one of the most spotlighted next-generation technologies. In order to move toward the realization of ideal augmented reality, we need to integrate 3D virtual information into the real world. This integration should not be noticed by users, blurring the boundary between the virtual and real worlds. Thus, the ultimate device for augmented reality should reconstruct and superimpose 3D virtual information on the real world so that the two are not distinguishable, which is referred to as see-through 3D technology. Here, we introduce our previous research on combining see-through displays and 3D technologies using emerging optical combiners: holographic optical elements and index-matched optical elements. Holographic optical elements are volume gratings that have angular and wavelength selectivity. Index-matched optical elements are partially reflective elements using a compensation element for index matching. Using these optical combiners, we could implement see-through 3D displays based on typical methodologies including integral imaging, digital holographic displays, multi-layer displays, and retinal projection. Some of these methods are expected to be optimized and customized for head-mounted or wearable displays. We conclude with demonstrations and analysis of fundamental research for head-mounted see-through 3D displays.
Volumetric 3D display using a DLP projection engine
NASA Astrophysics Data System (ADS)
Geng, Jason
2012-03-01
In this article, we describe a volumetric 3D display system based on a high-speed DLP™ (Digital Light Processing) projection engine. Existing two-dimensional (2D) flat screen displays often lead to ambiguity and confusion in high-dimensional data/graphics presentation due to a lack of true depth cues. Even with the help of powerful 3D rendering software, three-dimensional (3D) objects displayed on a 2D flat screen may still fail to provide spatial relationship or depth information correctly and effectively. Essentially, 2D displays have to rely upon the capability of the human brain to piece together a 3D representation from 2D images. Despite the impressive mental capability of the human visual system, its visual perception is not reliable if certain depth cues are missing. In contrast, the volumetric 3D display technologies discussed in this article are capable of displaying 3D volumetric images in true 3D space. Each "voxel" in a 3D image (analogous to a pixel in a 2D image) is located physically at the spatial position where it is supposed to be and emits light from that position in all directions to form a real 3D image in 3D space. Such a volumetric 3D display provides both physiological and psychological depth cues to the human visual system to truthfully perceive 3D objects. It yields a realistic spatial representation of 3D objects and simplifies our understanding of the complexity of 3D objects and the spatial relationships among them.
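A minimal sketch of how a voxel volume can be decomposed into the depth-ordered frames that a swept-screen DLP system projects in sequence is shown below; the slicing axis and per-slice reduction are illustrative assumptions, not the specific engine described in the article.

    import numpy as np

    def volume_to_slices(voxels, num_slices):
        """voxels: (Z, Y, X) intensity/occupancy volume.
        Returns num_slices 2-D frames, one per position of the moving projection surface."""
        z = voxels.shape[0]
        bounds = np.linspace(0, z, num_slices + 1).astype(int)
        return [voxels[bounds[i]:bounds[i + 1]].max(axis=0)   # collapse each depth band
                for i in range(num_slices)]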
MEMS scanned laser head-up display
NASA Astrophysics Data System (ADS)
Freeman, Mark O.
2011-03-01
Head-up displays (HUD) in automobiles and other vehicles have been shown to significantly reduce accident rates by keeping the driver's eyes on the road. The requirements for automotive HUDs are quite demanding especially in terms of brightness, dimming range, supplied power, and size. Scanned laser display technology is particularly well-suited to this application since the lasers can be very efficiently relayed to the driver's eyes. Additionally, the lasers are only turned on where the light is needed in the image. This helps to provide the required brightness while minimizing power and avoiding a background glow that disturbs the see-through experience. Microvision has developed a couple of HUD architectures that are presented herein. One design uses an exit pupil expander and relay optics to produce a high quality virtual image for built-in systems where the image appears to float above the hood of the auto. A second design uses a patented see-through screen technology and pico projector to make automotive HUDs available to anyone with a projector. The presentation will go over the basic designs for the two types of HUD and discuss design tradeoffs.
Advances in lenticular lens arrays for visual display
NASA Astrophysics Data System (ADS)
Johnson, R. Barry; Jacobsen, Gary A.
2005-08-01
Lenticular lens arrays are widely used in the printed display industry and in specialized applications of electronic displays. In general, lenticular arrays can create from interlaced printed images such visual effects as 3-D, animation, flips, morph, zoom, or various combinations. The use of these typically cylindrical lens arrays for this purpose began in the late 1920s. The lenses comprise a front surface having a spherical cross-section and a flat rear surface upon which the material to be displayed is proximately located. The principal limitation to the resultant image quality for current-technology lenticular lenses is spherical aberration. This limitation causes the lenticular lens arrays to be generally thick (0.5 mm) and not easily wrapped around such items as cans or bottles. The objectives of this research effort were to develop a realistic analytical model, to significantly improve the image quality, to develop the tooling necessary to fabricate lenticular lens array extrusion cylinders, and to develop enhanced fabrication technology for the extrusion cylinder. It was determined that the most viable cross-sectional shape for the lenticular lenses is elliptical. This shape dramatically improves the image quality. The relationship between the lens radius, conic constant, material refractive index, and thickness will be discussed. A significant challenge was to fabricate a diamond-cutting tool having the proper elliptical shape. Both true elliptical and pseudo-elliptical diamond tools were designed and fabricated. The plastic sheets extruded can be quite thin (< 0.25 mm) and, consequently, can be wrapped around cans and the like. Fabrication of the lenticular engraved extrusion cylinder required remarkable development considering the large physical size and weight of the cylinder, and the tight mechanical tolerances associated with the lenticular lens molds cut into the cylinder's surface. The development of the cutting tool and the lenticular engraved extrusion cylinder will be presented in addition to an illustrative comparison of current lenticular technology and the new technology. Three U.S. patents have been issued as a consequence of this research effort.
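For orientation, one classical paraxial result for this geometry (collimated light refracted by a single surface to a focus at the flat back of the material) relates the sheet thickness and the elliptical conic constant directly to the vertex radius and refractive index. The sketch below applies that textbook result with an assumed acrylic-like index; it is an illustration, not the authors' proprietary design.

    def elliptical_lenticule(radius_mm, n):
        """Paraxial design values for a lenticule whose flat back lies at the image plane."""
        thickness_mm = n * radius_mm / (n - 1.0)  # back focal distance inside the medium
        conic = -1.0 / n ** 2                     # elliptical profile that removes spherical
                                                  # aberration for focus inside the material
        return thickness_mm, conic

    # e.g. n = 1.49, R = 0.08 mm  ->  thickness ≈ 0.24 mm, conic ≈ -0.45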
Development of exosome surface display technology in living human cells
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stickney, Zachary, E-mail: zstickney@scu.edu; Losacco, Joseph, E-mail: jlosacco@scu.edu; McDevitt, Sophie, E-mail: smmcdevitt@scu.edu
Surface display technology is an emerging key player in presenting functional proteins for targeted drug delivery and therapy. Although a number of technologies exist, a desirable mammalian surface display system is lacking. Exosomes are extracellular vesicles that facilitate cell–cell communication and can be engineered as nano-shuttles for cell-specific delivery. In this study, we report the development of a novel exosome surface display technology by exploiting mammalian cell secreted nano-vesicles and their trans-membrane protein tetraspanins. By constructing a set of fluorescent reporters for both the inner and outer surface display on exosomes at two selected sites of tetraspanins, we demonstrated the successful exosomal display via gene transfection and monitoring fluorescence in vivo. We subsequently validated our system by demonstrating the expected intracellular partitioning of reporter protein into sub-cellular compartments and secretion of exosomes from human HEK293 cells. Lastly, we established the stable engineered cells to harness the ability of this robust system for continuous production, secretion, and uptake of displayed exosomes with minimal impact on human cell biology. In sum, our work paved the way for potential applications of exosomes, including exosome tracking and imaging, targeted drug delivery, as well as exosome-mediated vaccine and therapy.
Integration of real-time 3D capture, reconstruction, and light-field display
NASA Astrophysics Data System (ADS)
Zhang, Zhaoxing; Geng, Zheng; Li, Tuotuo; Pei, Renjing; Liu, Yongchun; Zhang, Xiao
2015-03-01
Effective integration of 3D acquisition, reconstruction (modeling) and display technologies into a seamless system provides an augmented experience of visualizing and analyzing real objects and scenes with realistic 3D sensation. Applications can be found in medical imaging, gaming, virtual or augmented reality and hybrid simulations. Although 3D acquisition, reconstruction, and display technologies have gained significant momentum in recent years, there seems to be a lack of attention on synergistically combining these components into an "end-to-end" 3D visualization system. We designed, built and tested an integrated 3D visualization system that is able to capture 3D light-field images in real time, perform 3D reconstruction to build a 3D model of the objects, and display the 3D model on a large autostereoscopic screen. In this article, we present our system architecture and component designs, hardware/software implementations, and experimental results. We elaborate on our recent progress on sparse-camera-array light-field 3D acquisition, real-time dense 3D reconstruction, and autostereoscopic multi-view 3D display. A prototype is finally presented with test results to illustrate the effectiveness of our proposed integrated 3D visualization system.
Military display performance parameters
NASA Astrophysics Data System (ADS)
Desjardins, Daniel D.; Meyer, Frederick
2012-06-01
The military display market is analyzed in terms of four of its segments: avionics, vetronics, dismounted soldier, and command and control. Requirements are summarized for a number of technology-driving parameters, to include luminance, night vision imaging system compatibility, gray levels, resolution, dimming range, viewing angle, video capability, altitude, temperature, shock and vibration, etc., for direct-view and virtual-view displays in cockpits and crew stations. Technical specifications are discussed for selected programs.
NASA Technical Reports Server (NTRS)
Montoya, R. J.; England, J. N.; Hatfield, J. J.; Rajala, S. A.
1981-01-01
The hardware configuration, software organization, and applications software for the NASA IKONAS color graphics display system are described. The system was created at the Langley Research Center Display Device Laboratory to develop, evaluate, and demonstrate advanced generic concepts, technology, and systems integration techniques for electronic crew station systems of future civil aircraft. A minicomputer with 64K core memory acts as a host for a raster-scan graphics display generator. The architectures of the hardware system and the graphics display system are provided. The applications software features a FORTRAN-based model of an aircraft, a display system, and a utility program for real-time communications. The model accepts inputs from a two-dimensional joystick and outputs a set of aircraft states. Ongoing and planned work on image segmentation/generation, specialized graphics procedures, and a higher-level-language user interface is discussed.
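As a toy illustration of the joystick-to-aircraft-state path described above (not the Langley FORTRAN model itself; all gains and state names are invented), a single integration step might look like the following:

    import math

    def update_states(s, stick_x, stick_y, dt):
        """s: dict of aircraft states (includes 'tas_mps', true airspeed in m/s);
        stick_x, stick_y: joystick deflections in [-1, 1]."""
        s["roll_deg"] += 40.0 * stick_x * dt    # lateral stick commands roll rate
        s["pitch_deg"] += 15.0 * stick_y * dt   # fore/aft stick commands pitch rate
        # Coordinated-turn heading rate and a simple climb/descent model.
        s["heading_deg"] += math.degrees(9.81 / s["tas_mps"]
                                         * math.tan(math.radians(s["roll_deg"]))) * dt
        s["alt_ft"] += 3.281 * s["tas_mps"] * math.sin(math.radians(s["pitch_deg"])) * dt
        return s  # the resulting states drive the graphics display each frame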
Design of area array CCD image acquisition and display system based on FPGA
NASA Astrophysics Data System (ADS)
Li, Lei; Zhang, Ning; Li, Tianting; Pan, Yue; Dai, Yuming
2014-09-01
With the development of science and technology, the CCD (charge-coupled device) has been widely applied in various fields and plays an important role in modern sensing systems; therefore, researching a real-time image acquisition and display scheme based on a CCD device has great significance. This paper introduces an image data acquisition and display system for an area array CCD based on an FPGA. Several key technical challenges and problems of the system are analyzed and solutions put forward. The FPGA works as the core processing unit in the system and controls the overall timing sequence. The ICX285AL area array CCD image sensor produced by SONY Corporation is used in the system. The FPGA drives the area array CCD, and the analog front end (AFE) then processes the CCD image signal, including amplification, filtering, noise elimination, and correlated double sampling (CDS). The AD9945 produced by ADI Corporation converts the analog signal to a digital signal. A Camera Link high-speed data transmission circuit was developed, the PC-end image acquisition software was completed, and real-time display of images was realized. Practical testing indicates that the system is stable and reliable in image acquisition and control, and the indicators meet the actual project requirements.
Liu, Yang; Njuguna, Raphael; Matthews, Thomas; Akers, Walter J.; Sudlow, Gail P.; Mondal, Suman; Tang, Rui
2013-01-01
We have developed a near-infrared (NIR) fluorescence goggle system based on the complementary metal–oxide–semiconductor active pixel sensor imaging and see-through display technologies. The fluorescence goggle system is a compact wearable intraoperative fluorescence imaging and display system that can guide surgery in real time. The goggle is capable of detecting fluorescence of indocyanine green solution in the picomolar range. Aided by NIR quantum dots, we successfully used the fluorescence goggle to guide sentinel lymph node mapping in a rat model. We further demonstrated the feasibility of using the fluorescence goggle in guiding surgical resection of breast cancer metastases in the liver in conjunction with NIR fluorescent probes. These results illustrate the diverse potential use of the goggle system in surgical procedures. PMID:23728180
Reconfigurable and responsive droplet-based compound micro-lenses
Nagelberg, Sara; Zarzar, Lauren D.; Nicolas, Natalie; Subramanian, Kaushikaram; Kalow, Julia A.; Sresht, Vishnu; Blankschtein, Daniel; Barbastathis, George; Kreysing, Moritz; Swager, Timothy M.; Kolle, Mathias
2017-01-01
Micro-scale optical components play a crucial role in imaging and display technology, biosensing, beam shaping, optical switching, wavefront-analysis, and device miniaturization. Herein, we demonstrate liquid compound micro-lenses with dynamically tunable focal lengths. We employ bi-phase emulsion droplets fabricated from immiscible hydrocarbon and fluorocarbon liquids to form responsive micro-lenses that can be reconfigured to focus or scatter light, form real or virtual images, and display variable focal lengths. Experimental demonstrations of dynamic refractive control are complemented by theoretical analysis and wave-optical modelling. Additionally, we provide evidence of the micro-lenses' functionality for two potential applications—integral micro-scale imaging devices and light field display technology—thereby demonstrating both the fundamental characteristics and the promising opportunities for fluid-based dynamic refractive micro-scale compound lenses. PMID:28266505
Conceptual design study for an advanced cab and visual system, volume 1
NASA Technical Reports Server (NTRS)
Rue, R. J.; Cyrus, M. L.; Garnett, T. A.; Nachbor, J. W.; Seery, J. A.; Starr, R. L.
1980-01-01
A conceptual design study was conducted to define requirements for an advanced cab and visual system. The rotorcraft system integration simulator is for engineering studies in the area of mission-associated vehicle handling qualities. Principally, a technology survey and assessment of existing and proposed simulator visual display systems, image generation systems, modular cab designs, and simulator control station designs were performed and are discussed. State-of-the-art survey data were used to synthesize a set of preliminary visual display system concepts, of which five candidate display configurations were selected for further evaluation. Basic display concepts incorporated in these configurations included: real image projection, using either periscopes, fiber optic bundles, or scanned laser optics; and virtual imaging with helmet-mounted displays. These display concepts were integrated in the study with a simulator cab concept employing a modular base for aircraft controls, crew seating, and instrumentation (or other) displays. A simple concept to induce vibration in the various modules was developed and is described. Results of evaluations and trade-offs related to the candidate system concepts are given, along with a suggested weighting scheme for numerically comparing visual system performance characteristics.
Sauer, Igor M; Queisner, Moritz; Tang, Peter; Moosburner, Simon; Hoepfner, Ole; Horner, Rosa; Lohmann, Rudiger; Pratschke, Johann
2017-11-01
The paper evaluates the application of a mixed reality (MR) head-mounted display (HMD) for the visualization of anatomical structures in complex visceral-surgical interventions. A workflow was developed and technical feasibility was evaluated. Medical images are still not seamlessly integrated into surgical interventions and thus remain separated from the surgical procedure. Surgeons need to cognitively relate 2-dimensional sectional images to the 3-dimensional (3D) anatomy during the actual intervention. MR applications simulate 3D images and reduce the offset between working space and visualization, allowing for improved spatial-visual approximation of patient and image. The surgeon's field of vision was superimposed with a 3D model of the patient's relevant liver structures displayed on an MR-HMD. This set-up was evaluated during open hepatic surgery. A suitable workflow for segmenting image masks and texture mapping of tumors, the hepatic artery, the portal vein, and the hepatic veins was developed. The 3D model was positioned above the surgical site. Anatomical reassurance was possible simply by looking up. Positioning in the room was stable, without drift and with minimal jittering. Users reported satisfactory comfort wearing the device, without significant impairment of movement. MR technology has a high potential to improve the surgeon's action and perception in open visceral surgery by displaying 3D anatomical models close to the surgical site. Superimposing anatomical structures directly onto the organs within the surgical site remains challenging, as the abdominal organs undergo major deformations due to manipulation, respiratory motion, and interaction with the surgical instruments during the intervention. A further application scenario would be intraoperative ultrasound examination, displaying the image directly next to the transducer. Advances in displays and sensor technologies, as well as in biomechanical modeling and object-recognition algorithms, will facilitate the application of MR-HMDs in surgery in the near future.
Design on the x-ray oral digital image display card
NASA Astrophysics Data System (ADS)
Wang, Liping; Gu, Guohua; Chen, Qian
2009-10-01
According to the main characteristics of X-ray imaging, an X-ray display card was designed and debugged using the basic principle of correlated double sampling (CDS) combined with embedded computer technology. The CCD sensor drive circuit and the corresponding firmware were designed, along with the filtering and sample-and-hold circuits, and data exchange over the PC104 bus was implemented. A complex programmable logic device provides gating and timing logic, implementing the functions of counting, reading CPU control instructions, triggering the corresponding exposure, and controlling the sample-and-hold stage. Based on analysis of the resulting images and noise, the circuit components were adjusted and high-quality images were obtained.
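As a side note on the sampling principle mentioned above, the following minimal sketch illustrates how correlated double sampling cancels the reset offset that is common to both samples of a pixel; the noise levels and pixel count are synthetic assumptions for illustration, not parameters of the display card.

    # Illustrative sketch of correlated double sampling (CDS); all values are
    # synthetic, not taken from the display card described above.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 1000
    true_signal = rng.uniform(100, 4000, n)          # ideal pixel values (arbitrary units)
    reset_offset = rng.normal(0, 50, n)              # kTC/reset noise, identical in both samples

    reset_sample = reset_offset + rng.normal(0, 5, n)                  # sampled just after reset
    signal_sample = reset_offset + true_signal + rng.normal(0, 5, n)   # sampled after integration

    cds_output = signal_sample - reset_sample        # the common reset offset cancels out
    print("residual error std:", np.std(cds_output - true_signal))    # ~7, versus ~50 without CDS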
Projection displays and MEMS: timely convergence for a bright future
NASA Astrophysics Data System (ADS)
Hornbeck, Larry J.
1995-09-01
Projection displays and microelectromechanical systems (MEMS) have evolved independently, occasionally crossing paths as early as the 1950s. But the commercially viable use of MEMS for projection displays remained elusive until the recent invention of Texas Instruments' Digital Light Processing™ (DLP) technology. DLP technology is based on the Digital Micromirror Device™ (DMD) microchip, a MEMS-based semiconductor digital light switch that precisely controls a light source for projection display and hardcopy applications. DLP technology provides a unique business opportunity because of the timely convergence of market needs and technology advances. The world is rapidly moving to an all-digital communications and entertainment infrastructure. In the near future, most of the technologies necessary for this infrastructure will be available at the right performance and price levels. This will make commercially viable an all-digital chain (capture, compression, transmission, reception, decompression, hearing, and viewing). Unfortunately, the digital images received today must be translated into analog signals for viewing on today's televisions. Digital video is the final link in the all-digital infrastructure, and DLP technology provides that link. DLP technology is an enabler for digital, high-resolution, color projection displays that have high contrast, are bright and seamless, and have the accuracy of color and grayscale that can be achieved only by digital control. This paper contains an introduction to DMD and DLP technology, including the historical context from which to view their development. The architecture, projection operation, and fabrication are presented. Finally, the paper includes an update on current DMD business opportunities in projection displays and hardcopy.
VENI, video, VICI: The merging of computer and video technologies
NASA Technical Reports Server (NTRS)
Horowitz, Jay G.
1993-01-01
The topics covered include the following: High Definition Television (HDTV) milestones; visual information bandwidth; television frequency allocation and bandwidth; horizontal scanning; workstation RGB color domain; NTSC color domain; American HDTV time-table; HDTV image size; digital HDTV hierarchy; task force on digital image architecture; open architecture model; future displays; and the ULTIMATE imaging system.
Flat panel ferroelectric electron emission display system
Sampayan, Stephen E.; Orvis, William J.; Caporaso, George J.; Wieskamp, Ted F.
1996-01-01
A device which can produce a bright, raster scanned or non-raster scanned image from a flat panel. Unlike many flat panel technologies, this device does not require ambient light or auxiliary illumination for viewing the image. Rather, this device relies on electrons emitted from a ferroelectric emitter impinging on a phosphor. This device takes advantage of a new electron emitter technology which emits electrons with significant kinetic energy and beam current density.
Tsai, Yu-Hsiang; Huang, Mao-Hsiu; Jeng, Wei-de; Huang, Ting-Wei; Lo, Kuo-Lung; Ou-Yang, Mang
2015-10-01
Transparent displays are one of the main technologies in next-generation displays, especially for augmented reality applications. An aperture structure is attached to each display pixel to partition it into transparent and black regions. However, diffraction blur caused by the aperture structure typically degrades the transparent image when light from a background object passes through the finite aperture window. In this paper, the diffraction effect of an active-matrix organic light-emitting diode (AMOLED) display is studied. Several aperture structures have been proposed and implemented. Based on theoretical analysis and simulation, an appropriate aperture structure can effectively reduce the blur, and the analysis data are consistent with the experimental results. Compared with the various transparent aperture structures on the AMOLED, the diffraction width (the zero-energy position of the diffraction pattern) of the optimized aperture structure is reduced by 63% and 31% in the x and y directions in CASE 3. With a lenticular lens added on the aperture structure, the improvement reaches 77% and 54% of the diffraction width in the x and y directions. Modulation transfer function measurements and practical images are provided to evaluate the improvement in image blur.
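For readers unfamiliar with the "diffraction width" figure, the sketch below estimates the distance between the first diffraction zeros using the standard single-slit Fraunhofer approximation (width roughly 2*lambda*z/a); the wavelength, aperture widths, and viewing distance are hypothetical example values, not the paper's measured geometry.

    # Single-slit Fraunhofer estimate of the diffraction width (distance between
    # the first zeros) for light passing a transparent-pixel aperture.  The
    # aperture sizes and distance below are assumed values for illustration only.
    wavelength = 550e-9      # m, green light
    distance = 0.5           # m, distance from aperture to observation plane (assumed)
    for aperture in (50e-6, 100e-6, 200e-6):          # aperture window width in metres
        width = 2 * wavelength * distance / aperture  # separation of the first zeros
        print(f"aperture {aperture*1e6:.0f} um -> diffraction width {width*1e3:.2f} mm")

As the loop shows, widening the clear aperture narrows the diffraction envelope, which is the qualitative effect the optimized aperture structures exploit.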
Automatic view synthesis by image-domain-warping.
Stefanoski, Nikolce; Wang, Oliver; Lang, Manuel; Greisen, Pierre; Heinzle, Simon; Smolic, Aljosa
2013-09-01
Today, stereoscopic 3D (S3D) cinema is already mainstream, and almost all new display devices for the home support S3D content. S3D distribution infrastructure to the home is already partly established in the form of 3D Blu-ray discs, video-on-demand services, and television channels. The necessity to wear glasses is, however, often considered an obstacle that hinders broader acceptance of this technology in the home. Multiview autostereoscopic displays enable glasses-free perception of S3D content for several observers simultaneously, and support head-motion parallax in a limited range. To support multiview autostereoscopic displays within an already established S3D distribution infrastructure, a synthesis of new views from S3D video is needed. In this paper, a view synthesis method based on image-domain-warping (IDW) is presented that synthesizes new views directly from S3D video and functions fully automatically. IDW relies on an automatic and robust estimation of sparse disparities and image saliency information, and enforces target disparities in synthesized images using an image warping framework. Two configurations of the view synthesizer in the scope of a transmission and view synthesis framework are analyzed and evaluated. A transmission and view synthesis system that uses IDW was recently submitted to MPEG's call for proposals on 3D video technology, where it was ranked among the four best performing proposals.
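To make the warping idea concrete, here is a minimal sketch of disparity-driven horizontal warping for synthesizing an intermediate view. It shows only the basic principle; the IDW method summarized above additionally relies on sparse disparity estimation, saliency, and a global warp optimization, none of which is reproduced here, and the disparity sign convention and hole handling are simplified assumptions.

    # Minimal forward-warping sketch: shift each pixel of the left view horizontally
    # by a fraction alpha of its disparity to approximate an intermediate view.
    # Occlusion holes and blending are deliberately ignored in this illustration.
    import numpy as np

    def warp_view(left, disparity, alpha):
        """Forward-warp a greyscale image (H x W) by alpha * disparity pixels."""
        h, w = left.shape
        out = np.zeros_like(left)
        xs = np.arange(w)
        for y in range(h):
            target_x = np.round(xs + alpha * disparity[y]).astype(int)
            valid = (target_x >= 0) & (target_x < w)
            out[y, target_x[valid]] = left[y, valid]   # later writes simply overwrite
        return out

    left = np.tile(np.linspace(0.0, 1.0, 64), (48, 1))   # synthetic test image
    disp = np.full((48, 64), 4.0)                        # constant 4-pixel disparity
    virtual = warp_view(left, disp, alpha=0.5)           # view halfway toward the right eye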
NASA Astrophysics Data System (ADS)
Hui, Jie; Cao, Yingchun; Zhang, Yi; Kole, Ayeeshik; Wang, Pu; Yu, Guangli; Eakins, Gregory; Sturek, Michael; Chen, Weibiao; Cheng, Ji-Xin
2017-03-01
Intravascular photoacoustic-ultrasound (IVPA-US) imaging is an emerging hybrid modality for the detection of lipid-laden plaques, providing simultaneous morphological and lipid-specific chemical information about an artery wall. The clinical utility of IVPA-US technology requires real-time imaging and display at video-rate speeds. Here, we demonstrate a compact and portable IVPA-US system capable of imaging at up to 25 frames per second in real-time display mode. This unprecedented imaging speed was achieved by concurrent innovations in the excitation laser source, rotary joint assembly, 1 mm IVPA-US catheter, differentiated A-line strategy, and real-time image processing and display algorithms. By imaging pulsatile motion at different imaging speeds, 16 frames per second was deemed adequate to suppress motion artifacts from cardiac pulsation for in vivo applications. Our lateral resolution results further verified the number of A-lines used for a cross-sectional IVPA image reconstruction. The translational capability of this system for the detection of lipid-laden plaques was validated by ex vivo imaging of an atherosclerotic human coronary artery at 16 frames per second, which showed strong correlation to gold-standard histopathology.
Holographic and light-field imaging for augmented reality
NASA Astrophysics Data System (ADS)
Lee, Byoungho; Hong, Jong-Young; Jang, Changwon; Jeong, Jinsoo; Lee, Chang-Kun
2017-02-01
We discuss the recent state of augmented reality (AR) display technology. In order to realize AR, various see-through three-dimensional (3D) display techniques have been reported. We describe AR displays with 3D functionality, such as light-field displays and holography. See-through light-field displays can be categorized by the optical elements used to achieve the see-through property: elements that control the path of the light field and elements that generate a see-through light field. Holographic displays can also be good candidates for AR because they can reconstruct wavefront information and provide realistic virtual information. We introduce see-through holographic displays using various optical techniques.
A JPEG backward-compatible HDR image compression
NASA Astrophysics Data System (ADS)
Korshunov, Pavel; Ebrahimi, Touradj
2012-10-01
High Dynamic Range (HDR) imaging is expected to become one of the technologies that could shape the next generation of consumer digital photography. Manufacturers are rolling out cameras and displays capable of capturing and rendering HDR images. The popularity and full public adoption of HDR content is, however, hindered by the lack of standards for quality evaluation, file formats, and compression, as well as by the large legacy base of Low Dynamic Range (LDR) displays that are unable to render HDR. To facilitate widespread HDR usage, backward compatibility of HDR technology with commonly used legacy image storage, rendering, and compression is necessary. Although many tone-mapping algorithms have been developed for generating viewable LDR images from HDR content, there is no consensus on which algorithm to use and under which conditions. This paper, via a series of subjective evaluations, demonstrates the dependency of the perceived quality of tone-mapped LDR images on environmental parameters and image content. Based on the results of the subjective tests, it proposes to extend the JPEG file format, as the most popular image format, in a backward-compatible manner to also handle HDR pictures. To this end, the paper provides an architecture to achieve such backward compatibility with JPEG and demonstrates the efficiency of a simple implementation of this framework when compared to the state-of-the-art HDR image compression.
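As a generic illustration of backward-compatible HDR coding (not necessarily the specific architecture proposed in the paper), the sketch below tone-maps an HDR image to an LDR image that a legacy decoder could display and keeps a log-ratio residual from which an HDR-aware decoder can reconstruct the original; the Reinhard-style operator and the synthetic data are assumptions chosen for illustration.

    # Generic sketch of a backward-compatible HDR encoding: the tone-mapped LDR
    # layer is what a legacy JPEG decoder would show, and the log-ratio residual
    # lets an HDR-aware decoder recover the original values.
    import numpy as np

    def tone_map(hdr):
        """Simple global Reinhard-style operator mapping luminance into [0, 1)."""
        return hdr / (1.0 + hdr)

    def encode(hdr, eps=1e-6):
        ldr = tone_map(hdr)
        residual = np.log((hdr + eps) / (ldr + eps))   # side information (metadata)
        return ldr, residual

    def decode_hdr(ldr, residual, eps=1e-6):
        return (ldr + eps) * np.exp(residual) - eps

    hdr = np.random.default_rng(1).uniform(0, 100, (4, 4))    # synthetic HDR luminance
    ldr, res = encode(hdr)
    print(np.allclose(decode_hdr(ldr, res), hdr, atol=1e-4))  # True (before quantization)

In a real codec the LDR layer and the residual would each be quantized and compressed, which is where the rate-distortion trade-offs studied in the paper arise.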
Design of the control system for full-color LED display based on MSP430 MCU
NASA Astrophysics Data System (ADS)
Li, Xue; Xu, Hui-juan; Qin, Ling-ling; Zheng, Long-jiang
2013-08-01
The LED display integrates microelectronics, computer technology, and information processing, and has become one of the most prominent of the new generation of display media thanks to its bright colors, high dynamic range, high brightness, and long operating life. LED displays are widely used in banking, securities trading, highway signs, airports, and advertising. According to the display color, LED display screens are divided into monochrome, double-color, and full-color displays. With the diversification of LED display colors and the continuing rise in display demands, LED drive circuits and control technology have progressed correspondingly. The earliest monochrome screens displayed only Chinese characters, simple characters, or digits, so the requirements on the controller were relatively low. With the wide use of double-color LED displays, the performance demands on the controller increased. In recent years, full-color LED displays with the three primary colors of red, green, and blue and grayscale rendering have attracted great attention for their rich and colorful display effect. Every true-color pixel includes three sub-pixels of red, green, and blue, using spatial color mixture to realize multiple colors. A dynamic scanning control system for a full-color LED display is designed based on the low-power MSP430 microcontroller. The grayscale control of this system combines pulse-width modulation (PWM) with the display scanning scheme. While meeting the requirement of 256 grayscale levels, this method improves the efficiency of the LED devices and enhances the tonal gradation of the displayed image. The drive circuit uses a 1/8-scan constant-current drive mode and makes full use of the microcontroller's I/O port resources to complete the control. The system supports text and picture display on 256-grayscale full-color LED screens.
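The back-of-the-envelope sketch below illustrates how PWM on-times relate to 256 grayscale levels on a 1/8-scan panel; the 100 Hz refresh rate is an assumed example figure, not a parameter taken from the paper.

    # Rough PWM timing budget for 256-level grayscale on a 1/8-scan LED panel.
    # The refresh rate is an assumed example value.
    refresh_hz = 100                 # full-frame refresh rate (assumed)
    scan_ratio = 8                   # 1/8 scan: each row group is driven 1/8 of the time
    levels = 256

    row_time = 1.0 / (refresh_hz * scan_ratio)   # time available per row group per frame
    lsb_time = row_time / (levels - 1)           # on-time of one grayscale step

    print(f"row-group time per frame: {row_time*1e6:.1f} us")
    print(f"grayscale LSB on-time   : {lsb_time*1e6:.3f} us")
    for level in (0, 64, 255):
        print(f"level {level:3d} -> LED on for {level * lsb_time * 1e6:.1f} us per frame")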
Time multiplexing for increased FOV and resolution in virtual reality
NASA Astrophysics Data System (ADS)
Miñano, Juan C.; Benitez, Pablo; Grabovičkić, Dejan; Zamora, Pablo; Buljan, Marina; Narasimhan, Bharathwaj
2017-06-01
We introduce a time-multiplexing strategy to increase the total pixel count of the virtual image seen in a VR headset. This translates into an improvement in pixel density, in the field of view (FOV), or both. A given virtual image is displayed by generating a succession of partial real images, each representing part of the virtual image and together representing the whole. Each partial real image uses the full set of physical pixels available in the display. The partial real images are formed in succession and combine spatially and temporally to form a virtual image viewable from the eye position. Partial real images are imaged through different optical channels depending on their time slot. Shutters or other schemes are used to prevent a partial real image from being imaged through the wrong optical channel or at the wrong time slot. This time-multiplexing strategy requires the real images to be shown at high frame rates (>120 fps). Available display and shutter technologies are discussed. Several optical designs for achieving this time-multiplexing scheme in a compact format are shown. The scheme allows the resolution/FOV of the virtual image to be increased not only by increasing the physical pixel density but also by decreasing the pixel switching time, a feature that may be simpler to achieve in certain circumstances.
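The arithmetic behind the scheme can be summarized in a few lines: each of N time slots reuses all physical pixels, so the virtual image gains a factor of N in pixel count while the display frame rate must rise by the same factor. The panel resolution and perceived frame rate below are illustrative assumptions, not the authors' prototype specifications.

    # Simple accounting for the time-multiplexing trade-off between virtual pixel
    # count and required display frame rate.  All numbers are assumed examples.
    physical_pixels = 1920 * 1080        # pixels of the physical display (assumed)
    perceived_rate_hz = 60               # frame rate the viewer should perceive (assumed)

    for n_slots in (2, 3, 4):
        virtual_pixels = n_slots * physical_pixels
        display_rate = n_slots * perceived_rate_hz
        print(f"{n_slots} slots: {virtual_pixels/1e6:.1f} Mpixel virtual image, "
              f"display must run at {display_rate} fps")

With only two slots the required rate already reaches 120 fps, which is consistent with the >120 fps requirement stated above.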
Projection display industry market and technology trends
NASA Astrophysics Data System (ADS)
Castellano, Joseph A.; Mentley, David E.
1995-04-01
The projection display industry is diverse, embracing a variety of technologies and applications. In recent years, there has been a high level of interest in projection displays, particularly those using LCD panels or light valves because of the difficulty in making large screen, direct view displays. Many developers feel that projection displays will be the wave of the future for large screen HDTV (high-definition television), penetrating the huge existing market for direct view CRT-based televisions. Projection displays can have the images projected onto a screen either from the rear or the front; the main characteristic is their ability to be viewed by more than one person. In addition to large screen home television receivers, there are numerous other uses for projection displays including conference room presentations, video conferences, closed circuit programming, computer-aided design, and military command/control. For any given application, the user can usually choose from several alternative technologies. These include CRT front or rear projectors, LCD front or rear projectors, LCD overhead projector plate monitors, various liquid or solid-state light valve projectors, or laser-addressed systems. The overall worldwide market for projection information displays of all types and for all applications, including home television, will top $4.6 billion in 1995 and $6.45 billion in 2001.
Anaglyph Image Technology As a Visualization Tool for Teaching Geology of National Parks
NASA Astrophysics Data System (ADS)
Stoffer, P. W.; Phillips, E.; Messina, P.
2003-12-01
Anaglyphic stereo viewing technology emerged in the mid-1800s. Anaglyphs use offset images in contrasting colors (typically red and cyan) that, when viewed through color filters, produce a three-dimensional (3-D) image. Modern anaglyph imaging has become increasingly easy to use and relatively inexpensive thanks to digital cameras, scanners, color printing, and common image manipulation software. Perhaps the primary drawbacks of anaglyph images are visualization problems with primary colors (such as flowers, bright clothing, or blue sky) and distortion in large depth-of-field images. However, anaglyphs are more versatile than polarization techniques since they can be printed, displayed on computer screens (such as on websites), or projected with a single projector (as slides or digital images), and red-cyan viewing glasses cost less than polarization glasses and other 3-D viewing alternatives. Anaglyph images are especially well suited to most natural landscapes, such as views dominated by natural earth tones (grays, browns, greens), and they work well for sepia and black-and-white images (making the conversion of historic stereo photography into anaglyphs easy). We used a simple stereo camera setup incorporating two digital cameras on a rigid base to photograph landscape features in national parks (including arches, caverns, cactus, forests, and coastlines). We also scanned historic stereographic images. Using common digital image manipulation software, we created websites featuring anaglyphs of geologic features from national parks. We used the same images for popular 3-D poster displays at the U.S. Geological Survey Open House 2003 in Menlo Park, CA. Anaglyph photography could easily be used in combined educational outdoor activities and laboratory exercises.
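The channel-mixing step described above is simple enough to show directly: a red-cyan anaglyph takes the red channel from the left-eye image and the green and blue channels from the right-eye image. The sketch below assumes Pillow and NumPy are available and uses placeholder file names; the stereo pair must share the same dimensions.

    # Minimal red-cyan anaglyph construction from a registered stereo pair.
    # File names are placeholders; both images must have identical dimensions.
    import numpy as np
    from PIL import Image

    left = np.asarray(Image.open("left.jpg").convert("RGB"), dtype=np.uint8)
    right = np.asarray(Image.open("right.jpg").convert("RGB"), dtype=np.uint8)

    anaglyph = right.copy()
    anaglyph[..., 0] = left[..., 0]      # red from the left eye, green/blue from the right
    Image.fromarray(anaglyph).save("anaglyph.jpg")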
Tactile Media for the Visually Handicapped.
ERIC Educational Resources Information Center
Diodato, Virgil
New technological developments allow even the most severely visually handicapped person to read print, sense images, and operate calculators and meters. One of these new developments is the Optacon, which converts printed images to vibrations sensed by finger touch, and may be used to read print, handwriting, and calculator displays. Another…
Tse computers. [ultrahigh speed optical processing for two dimensional binary image
NASA Technical Reports Server (NTRS)
Schaefer, D. H.; Strong, J. P., III
1977-01-01
An ultra-high-speed computer that utilizes binary images as its basic computational entity is being developed. The basic logic components perform thousands of operations simultaneously. Technologies of the fiber optics, display, thin film, and semiconductor industries are being utilized in the building of the hardware.
Inkjet printing-based volumetric display projecting multiple full-colour 2D patterns
NASA Astrophysics Data System (ADS)
Hirayama, Ryuji; Suzuki, Tomotaka; Shimobaba, Tomoyoshi; Shiraki, Atsushi; Naruse, Makoto; Nakayama, Hirotaka; Kakue, Takashi; Ito, Tomoyoshi
2017-04-01
In this study, a method to construct a full-colour volumetric display is presented using a commercially available inkjet printer. Photoreactive luminescence materials are minutely and automatically printed as the volume elements, and volumetric displays are constructed with high resolution using easy-to-fabricate means that exploit inkjet printing technologies. The results experimentally demonstrate the first prototype of an inkjet printing-based volumetric display composed of multiple layers of transparent films that yield a full-colour three-dimensional (3D) image. Moreover, we propose a design algorithm with 3D structures that provide multiple different 2D full-colour patterns when viewed from different directions and experimentally demonstrate prototypes. It is considered that these types of 3D volumetric structures and their fabrication methods based on widely deployed existing printing technologies can be utilised as novel information display devices and systems, including digital signage, media art, entertainment and security.
Volonté, Francesco; Buchs, Nicolas C; Pugin, François; Spaltenstein, Joël; Schiltz, Boris; Jung, Minoa; Hagen, Monika; Ratib, Osman; Morel, Philippe
2013-09-01
Computerized management of medical information and 3D imaging has become the norm in everyday medical practice. Surgeons exploit these emerging technologies and bring information previously confined to the radiology rooms into the operating theatre. The paper reports the authors' experience with integrated stereoscopic 3D-rendered images in the da Vinci surgeon console. Volume-rendered images were obtained from a standard computed tomography dataset using the OsiriX DICOM workstation. A custom OsiriX plugin was created that permitted the 3D-rendered images to be displayed in the da Vinci surgeon console and to appear stereoscopic. These rendered images were displayed in the robotic console using the TilePro multi-input display. The upper part of the screen shows the real endoscopic surgical field and the bottom shows the stereoscopic 3D-rendered images. These are controlled by a 3D joystick installed on the console, and are updated in real time. Five patients underwent a robotic augmented reality-enhanced procedure. The surgeon was able to switch between the classical endoscopic view and a combined virtual view during the procedure. Subjectively, the addition of the rendered images was considered to be an undeniable help during the dissection phase. With the rapid evolution of robotics, computer-aided surgery is receiving increasing interest. This paper details the authors' experience with 3D-rendered images projected inside the surgical console. The use of this intra-operative mixed reality technology is considered very useful by the surgeon. It has been shown that the usefulness of this technique is a step toward computer-aided surgery that will progress very quickly over the next few years. Copyright © 2012 John Wiley & Sons, Ltd.
WE-E-12A-01: Medical Physics 1.0 to 2.0: MRI, Displays, Informatics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pickens, D; Flynn, M; Peck, D
Medical Physics 2.0 is a bold vision for an existential transition of clinical imaging physics in the face of the new realities of value-based and evidence-based medicine, comparative effectiveness, and meaningful use. It speaks to how clinical imaging physics can expand beyond traditional insular models of inspection and acceptance testing, oriented toward compliance, towards team-based models of operational engagement, prospective definition and assurance of effective use, and retrospective evaluation of clinical performance. Organized into four sessions of the AAPM, this particular session focuses on three specific modalities as outlined below. MRI 2.0: This presentation will look into the future of clinical MR imaging and what the clinical medical physicist will need to be doing as the technology of MR imaging evolves. Many of the measurement techniques used today will need to be expanded to address the advent of higher-field imaging systems and dedicated imagers for specialty applications. Included will be the need to address quality assurance and testing metrics for multi-channel MR imagers and hybrid devices such as MR/PET systems. New pulse sequences and acquisition methods, increasing use of MR spectroscopy, and real-time guidance procedures will place the burden on the medical physicist to define and use new tools to properly evaluate these systems, but the clinical applications must be understood so that these tools are used correctly. Finally, new rules, clinical requirements, and regulations will mean that the medical physicist must actively work to keep her/his sites compliant and must work closely with physicians to ensure the best performance of these systems. Informatics Display 1.0 to 2.0: Medical displays are an integral part of medical imaging operation. The DICOM and AAPM (TG18) efforts have led to clear definitions of the performance requirements of monochrome medical displays that can be followed by medical physicists to ensure proper performance. However, effective implementation of that oversight has been challenging due to the number and extent of medical displays in use at a facility. The advent of color displays and mobile displays has added further challenges to the task of the medical physicist. This informatics display lecture first addresses the current display guidelines (the 1.0 paradigm) and then outlines the initiatives and prospects for color and mobile displays (the 2.0 paradigm). Informatics Management 1.0 to 2.0: Imaging informatics is part of every radiology practice today. Imaging informatics covers everything from the ordering of a study, through data acquisition and processing, display and archiving, and reporting of findings, to the billing for the services performed. The standardization of the processes used to manage the information, and the methodologies to integrate these standards, are being developed and advanced continuously. These developments take place in an open forum, and imaging organizations and professionals all have a part in the process. In the Informatics Management presentation, the flow of information and the integration of the standards used in the processes will be reviewed. The role of radiologists and physicists in the process will be discussed. Current methods (the 1.0 paradigm) and evolving methods (the 2.0 paradigm) for validation of informatics systems function will also be discussed. Learning Objectives: Identify requirements for improving quality assurance and compliance tools for advanced and hybrid MRI systems.
Identify the need for new quality assurance metrics and testing procedures for advanced systems. Identify new hardware systems and new procedures needed to evaluate MRI systems. Understand the components of current medical physics expectations for medical displays. Understand the role and prospects of medical physics for color and mobile display devices. Understand the different areas of imaging informatics and the methodology for developing informatics standards. Understand the current status of informatics standards, the role of physicists and radiologists in the process, and the current technology for validating the function of these systems.
Flat panel ferroelectric electron emission display system
Sampayan, S.E.; Orvis, W.J.; Caporaso, G.J.; Wieskamp, T.F.
1996-04-16
A device is disclosed which can produce a bright, raster scanned or non-raster scanned image from a flat panel. Unlike many flat panel technologies, this device does not require ambient light or auxiliary illumination for viewing the image. Rather, this device relies on electrons emitted from a ferroelectric emitter impinging on a phosphor. This device takes advantage of a new electron emitter technology which emits electrons with significant kinetic energy and beam current density. 6 figs.
Using high-resolution displays for high-resolution cardiac data.
Goodyer, Christopher; Hodrien, John; Wood, Jason; Kohl, Peter; Brodlie, Ken
2009-07-13
The ability to perform fast, accurate, high-resolution visualization is fundamental to improving our understanding of anatomical data. As the volumes of data increase owing to improvements in scanning technology, the methods applied to visualization must evolve. In this paper, we address the interactive display of data from high-resolution magnetic resonance imaging scanning of a rabbit heart and subsequent histological imaging. We describe a visualization environment involving a tiled liquid-crystal-panel display wall and associated software, which provides an interactive and intuitive user interface. The oView software is an OpenGL application written for the VR Juggler environment. This environment abstracts displays and devices away from the application itself, aiding portability between different systems, from desktop PCs to multi-tiled display walls. Portability between display walls has been demonstrated through its use on walls at the universities of both Leeds and Oxford. We discuss important factors to be considered for interactive two-dimensional display of large three-dimensional datasets, including the use of intuitive input devices and level-of-detail aspects.
Multifocal planes head-mounted displays.
Rolland, J P; Krueger, M W; Goon, A
2000-07-01
Stereoscopic head-mounted displays (HMD's) provide an effective capability to create dynamic virtual environments. For a user of such environments, virtual objects would be displayed ideally at the appropriate distances, and natural concordant accommodation and convergence would be provided. Under such image display conditions, the user perceives these objects as if they were objects in a real environment. Current HMD technology requires convergent eye movements. However, it is currently limited by fixed visual accommodation, which is inconsistent with real-world vision. A prototype multiplanar volumetric projection display based on a stack of laminated planes was built for medical visualization as discussed in a paper presented at a 1999 Advanced Research Projects Agency workshop (Sullivan, Advanced Research Projects Agency, Arlington, Va., 1999). We show how such technology can be engineered to create a set of virtual planes appropriately configured in visual space to suppress conflicts of convergence and accommodation in HMD's. Although some scanning mechanism could be employed to create a set of desirable planes from a two-dimensional conventional display, multiplanar technology accomplishes such function with no moving parts. Based on optical principles and human vision, we present a comprehensive investigation of the engineering specification of multiplanar technology for integration in HMD's. Using selected human visual acuity and stereoacuity criteria, we show that the display requires at most 27 equally spaced planes, which is within the capability of current research and development display devices, located within a maximal 26-mm-wide stack. We further show that the necessary in-plane resolution is of the order of 5 μm.
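As a rough illustration of why a few tens of planes can be sufficient, planes spaced uniformly in diopters at a spacing Δ over a dioptric range D give a count of about D/Δ + 1. The 3.6 D range and roughly 1/7 D spacing used below are assumed example values chosen only to land near the paper's figure of 27 planes; they are not quoted from the paper.

    # Rough dioptric-spacing illustration of the plane-count argument.  Both
    # numbers are assumed example values, not the paper's derived criteria.
    dioptric_range = 3.6          # D, assumed accommodation range to be covered
    plane_spacing = 1.0 / 7.0     # D, assumed allowable spacing between adjacent planes

    n_planes = int(dioptric_range / plane_spacing) + 1
    print(f"approximately {n_planes} planes needed")   # ~26 planes under these assumptions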
Advances in three-dimensional integral imaging: sensing, display, and applications [Invited].
Xiao, Xiao; Javidi, Bahram; Martinez-Corral, Manuel; Stern, Adrian
2013-02-01
Three-dimensional (3D) sensing and imaging technologies have been extensively researched for many applications in the fields of entertainment, medicine, robotics, manufacturing, industrial inspection, security, surveillance, and defense due to their diverse and significant benefits. Integral imaging is a passive multiperspective imaging technique, which records multiple two-dimensional images of a scene from different perspectives. Unlike holography, it can capture a scene such as outdoor events with incoherent or ambient light. Integral imaging can display a true 3D color image with full parallax and continuous viewing angles by incoherent light; thus it does not suffer from speckle degradation. Because of its unique properties, integral imaging has been revived over the past decade or so as a promising approach for massive 3D commercialization. A series of key articles on this topic have appeared in the OSA journals, including Applied Optics. Thus, it is fitting that this Commemorative Review presents an overview of literature on physical principles and applications of integral imaging. Several data capture configurations, reconstruction, and display methods are overviewed. In addition, applications including 3D underwater imaging, 3D imaging in photon-starved environments, 3D tracking of occluded objects, 3D optical microscopy, and 3D polarimetric imaging are reviewed.
Fan, Zhencheng; Weng, Yitong; Chen, Guowen; Liao, Hongen
2017-07-01
Three-dimensional (3D) visualization of preoperative and intraoperative medical information is becoming more and more important in minimally invasive surgery. We develop a 3D interactive surgical visualization system using mobile spatial information acquisition and autostereoscopic display that allows surgeons to observe the surgical target intuitively. The spatial information of the regions of interest (ROIs) is captured by the mobile device and transferred to a server for further image processing. Triangular patches of intraoperative data with texture are calculated with a dimension-reduced triangulation algorithm and a projection-weighted mapping algorithm. A point-cloud-selection-based warm-start iterative closest point (ICP) algorithm is also developed for fusion of the reconstructed 3D intraoperative image and the preoperative image. The fused images are rendered for 3D autostereoscopic display using integral videography (IV) technology. Moreover, the 3D visualization of the medical image corresponding to the observer's viewing direction is updated automatically using a mutual information registration method. Experimental results show that the spatial position error between the IV-based 3D autostereoscopic fusion image and the actual object was 0.38 ± 0.92 mm (n = 5). The system can be utilized in telemedicine, operating education, surgical planning, navigation, etc., to acquire spatial information conveniently and display surgical information intuitively. Copyright © 2017 Elsevier Inc. All rights reserved.
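For context, a bare-bones rigid ICP loop (nearest-neighbour correspondences followed by an SVD-based least-squares alignment) looks like the sketch below; the point-cloud selection and warm-start refinements described above are not reproduced, and the test data are synthetic.

    # Minimal rigid ICP sketch: nearest-neighbour correspondences + Kabsch alignment.
    import numpy as np
    from scipy.spatial import cKDTree

    def best_rigid_transform(src, dst):
        """Least-squares rotation R and translation t mapping src onto dst."""
        c_src, c_dst = src.mean(0), dst.mean(0)
        H = (src - c_src).T @ (dst - c_dst)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:          # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        return R, c_dst - R @ c_src

    def icp(source, target, iterations=20):
        tree = cKDTree(target)
        src = source.copy()
        for _ in range(iterations):
            _, idx = tree.query(src)      # nearest-neighbour correspondences
            R, t = best_rigid_transform(src, target[idx])
            src = src @ R.T + t
        return src

    rng = np.random.default_rng(0)
    target = rng.uniform(size=(200, 3))
    theta = 0.1                            # small synthetic misalignment
    Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
    source = target @ Rz.T + np.array([0.02, -0.01, 0.03])
    aligned = icp(source, target)
    print("mean residual:", np.linalg.norm(aligned - target, axis=1).mean())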
Image standards in tissue-based diagnosis (diagnostic surgical pathology).
Kayser, Klaus; Görtler, Jürgen; Goldmann, Torsten; Vollmer, Ekkehard; Hufnagl, Peter; Kayser, Gian
2008-04-18
Progress in automated image analysis, virtual microscopy, hospital information systems, and interdisciplinary data exchange require image standards to be applied in tissue-based diagnosis. To describe the theoretical background, practical experiences and comparable solutions in other medical fields to promote image standards applicable for diagnostic pathology. THEORY AND EXPERIENCES: Images used in tissue-based diagnosis present with pathology-specific characteristics. It seems appropriate to discuss their characteristics and potential standardization in relation to the levels of hierarchy in which they appear. All levels can be divided into legal, medical, and technological properties. Standards applied to the first level include regulations or aims to be fulfilled. In legal properties, they have to regulate features of privacy, image documentation, transmission, and presentation; in medical properties, features of disease-image combination, human-diagnostics, automated information extraction, archive retrieval and access; and in technological properties features of image acquisition, display, formats, transfer speed, safety, and system dynamics. The next lower second level has to implement the prescriptions of the upper one, i.e. describe how they are implemented. Legal aspects should demand secure encryption for privacy of all patient related data, image archives that include all images used for diagnostics for a period of 10 years at minimum, accurate annotations of dates and viewing, and precise hardware and software information. Medical aspects should demand standardized patients' files such as DICOM 3 or HL 7 including history and previous examinations, information of image display hardware and software, of image resolution and fields of view, of relation between sizes of biological objects and image sizes, and of access to archives and retrieval. Technological aspects should deal with image acquisition systems (resolution, colour temperature, focus, brightness, and quality evaluation procedures), display resolution data, implemented image formats, storage, cycle frequency, backup procedures, operation system, and external system accessibility. The lowest third level describes the permitted limits and threshold in detail. At present, an applicable standard including all mentioned features does not exist to our knowledge; some aspects can be taken from radiological standards (PACS, DICOM 3); others require specific solutions or are not covered yet. The progress in virtual microscopy and application of artificial intelligence (AI) in tissue-based diagnosis demands fast preparation and implementation of an internationally acceptable standard. The described hierarchic order as well as analytic investigation in all potentially necessary aspects and details offers an appropriate tool to specifically determine standardized requirements.
The Eyephone: a head-mounted stereo display
NASA Astrophysics Data System (ADS)
Teitel, Michael A.
1990-09-01
Head-mounted stereo displays for virtual environments and computer simulations have been made since 1969. Most of the recent displays have been based on monochrome (black and white) liquid crystal display technology. Color LCD displays have generally not been used due to their lower resolution and color triad structure. As the resolution of color LCD displays increases, we have begun to use color displays in our Eyephone. In this paper we describe four methods for minimizing the effect of the color triads in the magnified images of LCD displays in the Eyephone stereo head-mounted display. We have settled on the use of a wavefront randomizer with a spatial frequency enhancement overlay in order to blur the triads in the displays while keeping the perceived resolution of the display high.
Raster Scan Computer Image Generation (CIG) System Based On Refresh Memory
NASA Astrophysics Data System (ADS)
Dichter, W.; Doris, K.; Conkling, C.
1982-06-01
A full color, Computer Image Generation (CIG) raster visual system has been developed which provides a high level of training sophistication by utilizing advanced semiconductor technology and innovative hardware and firmware techniques. Double buffered refresh memory and efficient algorithms eliminate the problem of conventional raster line ordering by allowing the generated image to be stored in a random fashion. Modular design techniques and simplified architecture provide significant advantages in reduced system cost, standardization of parts, and high reliability. The major system components are a general purpose computer to perform interfacing and data base functions; a geometric processor to define the instantaneous scene image; a display generator to convert the image to a video signal; an illumination control unit which provides final image processing; and a CRT monitor for display of the completed image. Additional optional enhancements include texture generators, increased edge and occultation capability, curved surface shading, and data base extensions.
Vision Algorithms Catch Defects in Screen Displays
NASA Technical Reports Server (NTRS)
2014-01-01
Andrew Watson, a senior scientist at Ames Research Center, developed a tool called the Spatial Standard Observer (SSO), which models human vision for use in robotic applications. Redmond, Washington-based Radiant Zemax LLC licensed the technology from NASA and combined it with its imaging colorimeter system, creating a powerful tool that high-volume manufacturers of flat-panel displays use to catch defects in screens.
Highly Reflective Multi-stable Electrofluidic Display Pixels
NASA Astrophysics Data System (ADS)
Yang, Shu
Electronic papers (E-papers) refer to displays that mimic the appearance of printed paper while retaining the features of conventional electronic displays, such as the ability to browse websites and play videos. The motivation for creating paper-like displays comes from the facts that reading on paper causes the least eye fatigue, owing to paper's reflective and light-diffusive nature, and that, unlike existing commercial displays, no energy of any form is required to sustain the displayed image. To achieve a visual effect equivalent to a paper print, an ideal E-paper has to be highly reflective, with a good contrast ratio and full-color capability. To sustain the image with zero power consumption, the display pixels need to be bistable, meaning that the "on" and "off" states are both lowest-energy states; a pixel can change its state only when sufficient external energy is supplied. Many emerging technologies are competing to demonstrate the first ideal E-paper device. However, none is able to achieve satisfactory visual effect, bistability, and video speed at the same time; challenges come from either the inherent physical/chemical properties or the fabrication process. Electrofluidic display is one of the most promising E-paper technologies. It has successfully demonstrated high reflectivity, brilliant color, and video-speed operation by moving a colored pigment dispersion between visible and invisible locations with electrowetting forces. However, the pixel design did not allow image bistability. Presented in this dissertation are multi-stable electrofluidic display pixels that are able to sustain grayscale levels without any power consumption, while keeping the favorable features of the previous-generation electrofluidic display. The pixel design, the fabrication method using multiple-layer dry-film photoresist lamination, and the physical/optical characterizations are discussed in detail. Based on the pixel structure, preliminary results of a simplified design and fabrication method are demonstrated. As advanced research topics regarding the device's optical performance, an optical model for evaluating reflective displays' light out-coupling efficiency is first established to guide the pixel design; furthermore, aluminum surface diffusers are analytically modeled and then fabricated onto multi-stable electrofluidic display pixels to demonstrate truly "white" multi-stable electrofluidic display modules. The achieved results successfully promote the multi-stable electrofluidic display as an excellent candidate for the ultimate E-paper device, especially for large-scale signage applications.
NASA Astrophysics Data System (ADS)
Fan, Hang; Li, Kunyang; Zhou, Yangui; Liang, Haowen; Wang, Jiahui; Zhou, Jianying
2016-09-01
The recent upsurge in virtual and augmented reality (VR and AR) has re-ignited interest in immersive display technology. VR/AR technology based on stereoscopic display is believed to be in its early stage, and glasses-free, or autostereoscopic, display is expected ultimately to be adopted for viewing convenience, visual comfort, and multi-viewer use. On the other hand, autostereoscopic displays have not received a positive market response in past years, and neither have stereoscopic displays using shutter or polarized glasses. We present an analysis of real-world applications, user demands, and the drawbacks of existing barrier- and lenticular-lens-based LCD autostereoscopy. We emphasize emerging autostereoscopic displays, notably directional-backlight LCD technology using a hybrid spatial- and temporal-control scenario. We report numerical simulation of a display system using a Monte-Carlo ray-tracing method with the human retina as the real image receiver. The system performance is optimized using a newly developed figure of merit for system design. The reduced crosstalk in an autostereoscopic system and the enhanced display quality, including the high resolution received by the retina and display homogeneity free of Moiré and defect patterns, are highlighted. Recent research progress is introduced, including a novel scheme for diffraction-free backlight illumination, an expanded viewing zone for autostereoscopic display, and a novel Fresnel lens array that achieves a near-perfect display in 2D/3D mode. An experimental demonstration is presented of an autostereoscopic display with the highest resolution, low crosstalk, and freedom from Moiré and defect patterns.
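For reference, a commonly used luminance-based crosstalk definition is sketched below as a generic illustration; it is not claimed to be the paper's own figure of merit, and the measured luminance values are hypothetical.

    # Generic luminance-based definition of stereoscopic crosstalk at one eye
    # position.  The measurement values are hypothetical examples.
    def crosstalk(l_leak, l_signal, l_black):
        """Fraction of the unintended view's luminance leaking into this eye."""
        return (l_leak - l_black) / (l_signal - l_black)

    # hypothetical measurements at one viewing position, in cd/m^2
    l_black, l_signal, l_leak = 0.5, 180.0, 4.1
    print(f"crosstalk = {100 * crosstalk(l_leak, l_signal, l_black):.1f} %")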
Silosky, Michael S; Marsh, Rebecca M; Scherzinger, Ann L
2016-07-08
When The Joint Commission updated its Requirements for Diagnostic Imaging Services for hospitals and ambulatory care facilities on July 1, 2015, among the new requirements was an annual performance evaluation for acquisition workstation displays. The purpose of this work was to evaluate a large cohort of acquisition displays used in a clinical environment and compare the results with existing performance standards provided by the American College of Radiology (ACR) and the American Association of Physicists in Medicine (AAPM). Measurements of the minimum luminance, maximum luminance, and luminance uniformity were performed on 42 acquisition displays across multiple imaging modalities. The mean values, standard deviations, and ranges were calculated for these metrics. Additionally, visual evaluations of contrast, spatial resolution, and distortion were performed using either the Society of Motion Pictures and Television Engineers test pattern or the TG-18-QC test pattern. Finally, an evaluation of local nonuniformities was performed using either a uniform white display or the TG-18-UN80 test pattern. The displays tested were flat-panel liquid crystal displays from a wide variety of manufacturers, with between less than 1 and 10 years of use. The mean values for Lmin and Lmax for the displays tested were 0.28 ± 0.13 cd/m2 and 135.07 ± 33.35 cd/m2, respectively. The mean maximum luminance deviation for ultrasound and non-ultrasound displays was 12.61% ± 4.85% and 14.47% ± 5.36%, respectively. Visual evaluation of display performance varied depending on several factors, including brightness and contrast settings and the test pattern used for image quality assessment. This work provides a snapshot of the performance of 42 acquisition displays across several imaging modalities in clinical use at a large medical center. Comparison with existing performance standards reveals that changes in display technology and the move from cathode ray tube displays to flat-panel displays may have rendered some of the tests inappropriate for modern use. © 2016 The Authors.
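As a generic illustration of how such a figure can be computed, the sketch below applies a common TG18-style non-uniformity formula to a set of luminance samples; whether this exact formula matches the paper's definition of "maximum luminance deviation" is an assumption, and the five-point measurements are hypothetical.

    # Common TG18-style luminance non-uniformity metric, shown as a generic example.
    def max_luminance_deviation(luminances):
        """Percent deviation across measurement points (e.g., centre + four corners)."""
        l_max, l_min = max(luminances), min(luminances)
        return 200.0 * (l_max - l_min) / (l_max + l_min)

    # hypothetical five-point measurements on one display, in cd/m^2
    samples = [138.0, 131.5, 129.8, 135.2, 127.4]
    print(f"maximum luminance deviation = {max_luminance_deviation(samples):.1f} %")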
NASA Astrophysics Data System (ADS)
Farnand, Susan; Jiang, Jun; Frey, Franziska
2011-01-01
A project, supported by the Andrew W. Mellon Foundation, is currently underway to evaluate current practices in fine art image reproduction, determine the image quality generally achievable, and establish a suggested framework for art image interchange. To determine the image quality currently being achieved, experimentation has been conducted in which a set of objective targets and pieces of artwork in various media were imaged by participating museums and other cultural heritage institutions. Prints and displayed images made from the delivered image files at the Rochester Institute of Technology were used as stimuli in psychometric testing in which observers were asked to evaluate them both as reproductions of the original artwork and as stand-alone images. The results indicated that there were limited differences between assessments made using displayed images relative to printed reproductions. Further, the differences between rankings made with and without the original artwork present were much smaller than expected.
Artificial Structural Color Pixels: A Review
Zhao, Yuqian; Zhao, Yong; Hu, Sheng; Lv, Jiangtao; Ying, Yu; Gervinskas, Gediminas; Si, Guangyuan
2017-01-01
Inspired by natural photonic structures (the Morpho butterfly, for instance), researchers have demonstrated a variety of artificial color display devices using different designs. Photonic-crystal and plasmonic color filters have drawn increasing attention most recently. In this review article, we show the development of artificial structural color pixels from photonic crystals to plasmonic nanostructures. Such devices normally utilize the distinctive optical features of photonic or plasmon resonance, resulting in high compatibility with current display and imaging technologies. Moreover, dynamic color filtering devices are highly desirable because tunable optical components are critical for developing new optical platforms that can be integrated or combined with existing imaging and display techniques. This has triggered and enabled extensive promising applications, including richer functionalities in integrated optics and nanophotonics. PMID:28805736
Crew and Display Concepts Evaluation for Synthetic / Enhanced Vision Systems
NASA Technical Reports Server (NTRS)
Bailey, Randall E.; Kramer, Lynda J.; Prinzel, Lawrence J., III
2006-01-01
NASA's Synthetic Vision Systems (SVS) project is developing technologies with practical applications that strive to eliminate low-visibility conditions as a causal factor in civil aircraft accidents and to replicate the operational benefits of clear-day flight operations, regardless of the actual outside visibility condition. Enhanced Vision System (EVS) technologies are analogous and complementary in many respects to SVS, the principal difference being that EVS is an imaging sensor presentation, as opposed to a database-derived image. The use of EVS in civil aircraft is projected to increase rapidly, as the Federal Aviation Administration recently changed the aircraft operating rules under Part 91, revising the flight visibility requirements for conducting operations to civil airports. Operators conducting straight-in instrument approach procedures may now operate below the published approach minimums when using an approved EVS that shows the required visual references on the pilot's Head-Up Display. An experiment was conducted to evaluate the complementary use of SVS and EVS technologies, specifically focusing on new techniques for integration and/or fusion of synthetic and enhanced vision technologies and on crew resource management while operating under the newly adopted FAA rules, which provide operating credit for EVS. Overall, the experimental data showed that the integration and/or fusion of synthetic and enhanced vision technologies could provide significant improvements in situation awareness for the pilot flying and the pilot not flying, without concomitant increases in workload and display clutter.
Autostereoscopic display technology for mobile 3DTV applications
NASA Astrophysics Data System (ADS)
Harrold, Jonathan; Woodgate, Graham J.
2007-02-01
Mobile TV is now a commercial reality, and an opportunity exists for the first mass-market 3DTV products based on cell phone platforms with switchable 2D/3D autostereoscopic displays. Compared to conventional cell phones, TV phones need to operate for extended periods of time with the display running at full brightness, so the efficiency of the 3D optical system is key. The desire for increased viewing freedom to provide greater viewing comfort can be met by increasing the number of views presented. A four-view lenticular display has a brightness five times greater than the equivalent parallax barrier display; therefore, lenticular displays are very strong candidates for cell phone 3DTV. The selection of Polarisation Activated Microlens™ architectures for LCD, OLED, and reflective display applications is described. The technology delivers significant advantages, especially for high-pixel-density panels, and optimises device ruggedness while maintaining display brightness. A significant manufacturing breakthrough is described, enabling switchable microlenses to be fabricated using a simple coating process, which is also readily scalable to large TV panels. The 3D image performance of candidate 3DTV panels is also compared using autostereoscopic display optical output simulations.
NASA Tech Briefs, April 2000. Volume 24, No. 4
NASA Technical Reports Server (NTRS)
2000-01-01
Topics covered include: Imaging/Video/Display Technology; Electronic Components and Circuits; Electronic Systems; Physical Sciences; Materials; Computer Programs; Mechanics; Bio-Medical; Test and Measurement; Mathematics and Information Sciences; Books and Reports.
Augmented Reality in Neurosurgery: A Review of Current Concepts and Emerging Applications.
Guha, Daipayan; Alotaibi, Naif M; Nguyen, Nhu; Gupta, Shaurya; McFaul, Christopher; Yang, Victor X D
2017-05-01
Augmented reality (AR) superimposes computer-generated virtual objects onto the user's view of the real world. Among medical disciplines, neurosurgery has long been at the forefront of image-guided surgery, and it continues to push the frontiers of AR technology in the operating room. In this systematic review, we explore the history of AR in neurosurgery and examine the literature on current neurosurgical applications of AR. Significant challenges to surgical AR exist, including compounded sources of registration error, impaired depth perception, visual and tactile temporal asynchrony, and operator inattentional blindness. Nevertheless, the ability to accurately display multiple three-dimensional datasets congruently over the area where they are most useful, coupled with future advances in imaging, registration, display technology, and robotic actuation, portend a promising role for AR in the neurosurgical operating room.
Stereoscopic-3D display design: a new paradigm with Intel Adaptive Stable Image Technology [IA-SIT]
NASA Astrophysics Data System (ADS)
Jain, Sunil
2012-03-01
Stereoscopic-3D (S3D) proliferation on personal computers (PC) is mired by several technical and business challenges: a) viewing discomfort due to cross-talk amongst stereo images; b) high system cost; and c) restricted content availability. Users expect S3D visual quality to be better than, or at least equal to, what they are used to enjoying on 2D in terms of resolution, pixel density, color, and interactivity. Intel Adaptive Stable Image Technology (IA-SIT) is a foundational technology, successfully developed to resolve S3D system design challenges and deliver high-quality 3D visualization at PC price points. Optimizations in the display driver, panel timing firmware, backlight hardware, eyewear optical stack, and sync mechanism combined can help accomplish this goal. Agnostic to refresh rate, IA-SIT will scale with the shrinking of display transistors and improvements in liquid crystal and LED materials. Industry could benefit greatly from the following calls to action: 1) Adopt 'IA-SIT S3D Mode' in panel specs (via VESA) to help panel makers monetize S3D; 2) Adopt 'IA-SIT Eyewear Universal Optical Stack' and algorithm (via CEA) to help PC peripheral makers develop stylish glasses; 3) Adopt 'IA-SIT Real Time Profile' for sub-100 µs latency control (via Bluetooth SIG) to extend Bluetooth into S3D; and 4) Adopt 'IA-SIT Architecture' for monitors and TVs to monetize via PC attach.
Farahani, Navid; Post, Robert; Duboy, Jon; Ahmed, Ishtiaque; Kolowitz, Brian J.; Krinchai, Teppituk; Monaco, Sara E.; Fine, Jeffrey L.; Hartman, Douglas J.; Pantanowitz, Liron
2016-01-01
Background: Digital slides obtained from whole slide imaging (WSI) platforms are typically viewed in two dimensions using desktop personal computer monitors or more recently on mobile devices. To the best of our knowledge, we are not aware of any studies viewing digital pathology slides in a virtual reality (VR) environment. VR technology enables users to be artificially immersed in and interact with a computer-simulated world. Oculus Rift is among the world's first consumer-targeted VR headsets, intended primarily for enhanced gaming. Our aim was to explore the use of the Oculus Rift for examining digital pathology slides in a VR environment. Methods: An Oculus Rift Development Kit 2 (DK2) was connected to a 64-bit computer running Virtual Desktop software. Glass slides from twenty randomly selected lymph node cases (ten with benign and ten malignant diagnoses) were digitized using a WSI scanner. Three pathologists reviewed these digital slides on a 27-inch 5K display and with the Oculus Rift after a 2-week washout period. Recorded endpoints included concordance of final diagnoses and time required to examine slides. The pathologists also rated their ease of navigation, image quality, and diagnostic confidence for both modalities. Results: There was 90% diagnostic concordance when reviewing WSI using a 5K display and Oculus Rift. The time required to examine digital pathology slides on the 5K display averaged 39 s (range 10–120 s), compared to 62 s with the Oculus Rift (range 15–270 s). All pathologists confirmed that digital pathology slides were easily viewable in a VR environment. The ratings for image quality and diagnostic confidence were higher when using the 5K display. Conclusion: Using the Oculus Rift DK2 to view and navigate pathology whole slide images in a virtual environment is feasible for diagnostic purposes. However, image resolution using the Oculus Rift device was limited. Interactive VR technologies such as the Oculus Rift are novel tools that may be of use in digital pathology. PMID:27217972
IMDISP - INTERACTIVE IMAGE DISPLAY PROGRAM
NASA Technical Reports Server (NTRS)
Martin, M. D.
1994-01-01
The Interactive Image Display Program (IMDISP) is an interactive image display utility for the IBM Personal Computer (PC, XT and AT) and compatibles. Until recently, efforts to utilize small computer systems for display and analysis of scientific data have been hampered by the lack of sufficient data storage capacity to accommodate large image arrays. Most planetary images, for example, require nearly a megabyte of storage. The recent development of the "CDROM" (Compact Disk Read-Only Memory) storage technology makes possible the storage of up to 680 megabytes of data on a single 4.72-inch disk. IMDISP was developed for use with the CDROM storage system which is currently being evaluated by the Planetary Data System. The latest disks to be produced by the Planetary Data System are a set of three disks containing all of the images of Uranus acquired by the Voyager spacecraft. The images are in both compressed and uncompressed format. IMDISP can read the uncompressed images directly, but special software is provided to decompress the compressed images, which cannot be processed directly. IMDISP can also display images stored on floppy or hard disks. A digital image is a picture converted to numerical form so that it can be stored and used in a computer. The image is divided into a matrix of small regions called picture elements, or pixels. The rows and columns of pixels are called "lines" and "samples", respectively. Each pixel has a numerical value, or DN (data number) value, quantifying the darkness or brightness of the image at that spot. In total, each pixel has an address (line number, sample number) and a DN value, which is all that the computer needs for processing. DISPLAY commands allow the IMDISP user to display all or part of an image at various positions on the display screen. The user may also zoom in and out from a point on the image defined by the cursor, and may pan around the image. To enable more or all of the original image to be displayed on the screen at once, the image can be "subsampled." For example, if the image were subsampled by a factor of 2, every other pixel from every other line would be displayed, starting from the upper left corner of the image. Any positive integer may be used for subsampling. The user may produce a histogram of an image file, which is a graph showing the number of pixels per DN value, or per range of DN values, for the entire image. IMDISP can also plot the DN value versus pixels along a line between two points on the image. The user can "stretch" or increase the contrast of an image by specifying low and high DN values; all pixels with values lower than the specified "low" will then become black, and all pixels higher than the specified "high" value will become white. Pixels between the low and high values will be evenly shaded between black and white. IMDISP is written in a modular form to make it easy to adapt to different display devices or other computers. The code can also be adapted for use in other application programs. There are device-dependent image display modules, general image display subroutines, image I/O routines, and image label and command line parsing routines. The IMDISP system is written in C-language (94%) and Assembler (6%). It was implemented on an IBM PC with the MS DOS 3.21 operating system. IMDISP has a memory requirement of about 142k bytes. IMDISP was developed in 1989 and is a copyrighted work with all copyright vested in NASA.
Additional planetary images can be obtained from the National Space Science Data Center at (301) 286-6695.
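To make the DISPLAY operations described above concrete, the following minimal Python/NumPy sketch reproduces the linear contrast "stretch", subsampling, and histogram steps as they are described in the abstract. It is an illustrative reimplementation, not part of the original IMDISP C/Assembler code, and the array and parameter names are invented for the example.

```python
import numpy as np

def stretch(image, low, high):
    """Linear contrast stretch: DN values <= low map to black (0),
    DN values >= high map to white (255), and values in between are
    shaded evenly, as described for IMDISP's stretch command."""
    img = image.astype(np.float64)
    out = (img - low) / float(high - low)   # 0..1 between low and high
    out = np.clip(out, 0.0, 1.0)            # saturate outside the range
    return (out * 255).astype(np.uint8)

def subsample(image, factor=2):
    """Keep every 'factor'-th pixel of every 'factor'-th line,
    starting from the upper-left corner of the image."""
    return image[::factor, ::factor]

def histogram(image, bins=256):
    """Number of pixels per DN value (or per range of DN values)."""
    counts, edges = np.histogram(image, bins=bins, range=(0, 255))
    return counts, edges

# Example with a synthetic 8-bit image standing in for a planetary frame
img = (np.random.rand(512, 512) * 255).astype(np.uint8)
print(stretch(img, low=40, high=200).shape, subsample(img, 2).shape)
```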
Real Time Computer Graphics From Body Motion
NASA Astrophysics Data System (ADS)
Fisher, Scott; Marion, Ann
1983-10-01
This paper focuses on the recent emergence and development of real-time, computer-aided body tracking technologies and their use in combination with various computer graphics imaging techniques. The convergence of these technologies in our research results in an interactive display environment in which multiple representations of a given body motion can be displayed in real time. Specific reference to entertainment applications is described in the development of a real-time, interactive stage set in which dancers can 'draw' with their bodies as they move through the space of the stage or manipulate virtual elements of the set with their gestures.
Yoon, Ki-Hyuk; Kang, Min-Koo; Lee, Hwasun; Kim, Sung-Kyu
2018-01-01
We study optical technologies for a viewer-tracked autostereoscopic 3D display (VTA3D), which provides improved 3D image quality and an extended viewing range. In particular, we utilize a technique, the so-called dynamic fusion of viewing zone (DFVZ), for each 3D optical line to realize image quality equivalent to that achievable at the optimal viewing distance, even when a viewer is moving in the depth direction. In addition, we examine quantitative properties of the viewing zones provided by the VTA3D system that adopted DFVZ, revealing that the optimal viewing zone can be formed at the viewer's position. Last, we show that the comfort zone is extended due to DFVZ. This is demonstrated by viewers' subjective evaluation of the 3D display system that employs both multiview autostereoscopic 3D display and DFVZ.
Image fusion and navigation platforms for percutaneous image-guided interventions.
Rajagopal, Manoj; Venkatesan, Aradhana M
2016-04-01
Image-guided interventional procedures, particularly image guided biopsy and ablation, serve an important role in the care of the oncology patient. The need for tumor genomic and proteomic profiling, early tumor response assessment and confirmation of early recurrence are common scenarios that may necessitate successful biopsies of targets, including those that are small, anatomically unfavorable or inconspicuous. As image-guided ablation is increasingly incorporated into interventional oncology practice, similar obstacles are posed for the ablation of technically challenging tumor targets. Navigation tools, including image fusion and device tracking, can enable abdominal interventionalists to more accurately target challenging biopsy and ablation targets. Image fusion technologies enable multimodality fusion and real-time co-displays of US, CT, MRI, and PET/CT data, with navigational technologies including electromagnetic tracking, robotic, cone beam CT, optical, and laser guidance of interventional devices. Image fusion and navigational platform technology is reviewed in this article, including the results of studies implementing their use for interventional procedures. Pre-clinical and clinical experiences to date suggest these technologies have the potential to reduce procedure risk, time, and radiation dose to both the patient and the operator, with a valuable role to play for complex image-guided interventions.
Earth Observation Services (Image Processing Software)
NASA Technical Reports Server (NTRS)
1992-01-01
San Diego State University and Environmental Systems Research Institute, with other agencies, have applied satellite imaging and image processing techniques to geographic information systems (GIS) updating. The resulting images display land use and are used by a regional planning agency for applications like mapping vegetation distribution and preserving wildlife habitats. The EOCAP program provides government co-funding to encourage private investment in, and to broaden the use of NASA-developed technology for analyzing information about Earth and ocean resources.
Study of a direct visualization display tool for space applications
NASA Astrophysics Data System (ADS)
Pereira do Carmo, J.; Gordo, P. R.; Martins, M.; Rodrigues, F.; Teodoro, P.
2017-11-01
The study of a Direct Visualization Display Tool (DVDT) for space applications is reported. The review of novel technologies for a compact display tool is described. Several applications for this tool have been identified with the support of ESA astronauts and are presented. A baseline design is proposed. It consists mainly of OLEDs as the image source; a specially designed optical prism as relay optics; a Personal Digital Assistant (PDA), with a data acquisition card, as the control unit; and voice control and a simplified keyboard as interfaces. Optical analysis and the final estimated performance are reported. The system is able to display information (text, pictures and/or video) with SVGA resolution directly to the astronaut using a Field of View (FOV) of 20 x 14.5 degrees. The image delivery system is a monocular Head Mounted Display (HMD) that weighs less than 100 g. The HMD optical system has an eye pupil of 7 mm and an eye relief distance of 30 mm.
Space shuttle visual simulation system design study
NASA Technical Reports Server (NTRS)
1973-01-01
The current and near-future state-of-the-art in visual simulation equipment technology is related to the requirements of the space shuttle visual system. Image source, image sensing, and displays are analyzed on a subsystem basis, and the principal conclusions are used in the formulation of a recommended baseline visual system. Perceptibility and visibility are also analyzed.
NASA Astrophysics Data System (ADS)
Meng, Yang; Yu, Zhongyuan; Jia, Fangda; Zhang, Chunyu; Wang, Ye; Liu, Yumin; Ye, Han; Chen, Laurence Lujun
2017-10-01
A multi-view autostereoscopic three-dimensional (3D) system is built by using a 2D display screen and a customized parallax-barrier shutter (PBS) screen. The shutter screen is controlled dynamically by an address-driving matrix circuit and is placed in front of the display screen at a certain location. The system can achieve the densest viewpoints owing to its special optical and geometric design, which is based on the concept of "eye space". By using limited time-division multiplexing technology, the resolution of 3D imaging is not reduced compared to the 2D mode. Diffraction effects may play an important role in 3D display imaging quality, especially for small screens, such as an iPhone screen. For small screens, diffraction effects may contribute to crosstalk between binocular views, affect image brightness uniformity, etc. Therefore, diffraction effects are analyzed and considered in a one-dimensional shutter-screen model of the 3D display, in which a numerical simulation is performed of light propagating from display pixels on the display screen through the parallax barrier slits to each viewing zone in eye space. The simulation results provide guidance on the critical screen size above which the impact of diffraction effects is negligible and below which diffraction effects must be taken into account. Finally, the simulation results are compared with the corresponding experimental measurements and observations, with discussion.
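As a rough illustration of the kind of diffraction analysis the abstract describes, the far-field (Fraunhofer) intensity pattern of a single barrier slit can be evaluated at the viewing plane. The sketch below is a generic textbook model, not the authors' simulation, and the slit width, wavelength, and viewing distance are assumed values.

```python
import numpy as np

# Assumed parameters (illustrative only)
wavelength = 550e-9      # green light, m
slit_width = 60e-6       # parallax-barrier slit width, m
view_dist  = 0.40        # viewing distance, m

# Lateral positions in the viewing (eye-space) plane
x = np.linspace(-0.05, 0.05, 2001)     # +/- 5 cm around the design viewpoint
theta = np.arctan2(x, view_dist)       # diffraction angle for each position

# Fraunhofer single-slit intensity: I(theta) = I0 * sinc^2(a*sin(theta)/lambda)
# (np.sinc(u) = sin(pi*u)/(pi*u), so the factor pi is already included)
u = slit_width * np.sin(theta) / wavelength
intensity = np.sinc(u) ** 2

# Half-width of the central lobe at the viewer: first zero at sin(theta) = lambda/a
lobe_half_width = view_dist * np.tan(np.arcsin(wavelength / slit_width))
print(f"central lobe half-width at the viewer: {lobe_half_width*1e3:.2f} mm")
```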
Cardio-PACs: a new opportunity
NASA Astrophysics Data System (ADS)
Heupler, Frederick A., Jr.; Thomas, James D.; Blume, Hartwig R.; Cecil, Robert A.; Heisler, Mary
2000-05-01
It is now possible to replace film-based image management in the cardiac catheterization laboratory with a Cardiology Picture Archiving and Communication System (Cardio-PACS) based on digital imaging technology. The first step in the conversion process is installation of a digital image acquisition system that is capable of generating high-quality DICOM-compatible images. The next three steps, which are the subject of this presentation, involve image display, distribution, and storage. Clinical requirements and associated cost considerations for these three steps are listed below: Image display: (1) Image quality equal to film, with DICOM format, lossless compression, image processing, desktop PC-based with color monitor, and physician-friendly imaging software; (2) Performance specifications include: acquire 30 frames/sec; replay 15 frames/sec; access to file server 5 seconds, and to archive 5 minutes; (3) Compatibility of image file, transmission, and processing formats; (4) Image manipulation: brightness, contrast, gray scale, zoom, biplane display, and quantification; (5) User-friendly control of image review. Image distribution: (1) Standard IP-based network between cardiac catheterization laboratories, file server, long-term archive, review stations, and remote sites; (2) Non-proprietary formats; (3) Bidirectional distribution. Image storage: (1) CD-ROM vs disk vs tape; (2) Verification of data integrity; (3) User-designated storage capacity for catheterization laboratory, file server, long-term archive. Costs: (1) Image acquisition equipment, file server, long-term archive; (2) Network infrastructure; (3) Review stations and software; (4) Maintenance and administration; (5) Future upgrades and expansion; (6) Personnel.
NASA Technical Reports Server (NTRS)
Begault, Durand R.; Null, Cynthia H. (Technical Monitor)
1997-01-01
This talk will overview the basic technologies related to the creation of virtual acoustic images and the potential of including spatial auditory displays in human-machine interfaces. Research into the perceptual error inherent in both natural and virtual spatial hearing is reviewed, since the development of improved technologies is tied to psychoacoustic research. This includes a discussion of Head-Related Transfer Function (HRTF) measurement techniques (the HRTF provides important perceptual cues within a virtual acoustic display). Many commercial applications of virtual acoustics have so far focused on games and entertainment; in this review, other types of applications are examined, including aeronautic safety, voice communications, virtual reality, and room acoustic simulation. In particular, it is argued that realistic simulation is optimized within a virtual acoustic display when head motion and reverberation cues are included within a perceptual model.
NASA Tech Briefs, December 2000. Volume 24, No. 12
NASA Technical Reports Server (NTRS)
2000-01-01
Topics include: special coverage sections on Imaging/Video/Display Technology, and sections on electronic components and systems, test and measurement, software, information sciences, and special sections of Electronics Tech Briefs and Motion Control Tech Briefs.
Proceedings of the Augmented VIsual Display (AVID) Research Workshop
NASA Technical Reports Server (NTRS)
Kaiser, Mary K. (Editor); Sweet, Barbara T. (Editor)
1993-01-01
The papers, abstracts, and presentations were presented at a three-day workshop focused on sensor modeling and simulation, and image enhancement, processing, and fusion. The technical sessions emphasized how sensor technology can be used to create visual imagery adequate for aircraft control and operations. Participants from industry, government, and academic laboratories contributed to panels on Sensor Systems, Sensor Modeling, Sensor Fusion, Image Processing (Computer and Human Vision), and Image Evaluation and Metrics.
Navigation surgery using an augmented reality for pancreatectomy.
Okamoto, Tomoyoshi; Onda, Shinji; Yasuda, Jungo; Yanaga, Katsuhiko; Suzuki, Naoki; Hattori, Asaki
2015-01-01
The aim of this study was to evaluate the utility of navigation surgery using augmented reality technology (AR-based NS) for pancreatectomy. The 3D reconstructed images from CT were created by segmentation. The initial registration was performed by using the optical location sensor. The reconstructed images were superimposed onto the real organs in the monitor display. Of the 19 patients who had undergone hepatobiliary and pancreatic surgery using AR-based NS, the accuracy, visualization ability, and utility of our system were assessed in five cases with pancreatectomy. The position of each organ in the surface-rendering image corresponded almost to that of the actual organ. Reference to the display image allowed for safe dissection while preserving the adjacent vessels or organs. The locations of the lesions and resection line on the targeted organ were overlaid on the operating field. The initial mean registration error was improved to approximately 5 mm by our refinements. However, several problems such as registration accuracy, portability and cost still remain. AR-based NS contributed to accurate and effective surgical resection in pancreatectomy. The pancreas appears to be a suitable organ for further investigations. This technology is promising to improve surgical quality, training, and education. © 2015 S. Karger AG, Basel.
NASA Technical Reports Server (NTRS)
1986-01-01
The FluoroScan Imaging System is a high-resolution, low-radiation device for viewing stationary or moving objects. It resulted from NASA technology developed for x-ray astronomy and from a Goddard application of that technology to a low-intensity x-ray imaging scope. FluoroScan Imaging Systems, Inc. (formerly HealthMate, Inc.), a NASA licensee, further refined the FluoroScan System. It is used for examining fractures, placement of catheters, and in veterinary medicine. Its major components include an x-ray generator, scintillator, visible light image intensifier, and video display. It is small, light, and maneuverable.
NASA Technical Reports Server (NTRS)
1986-01-01
Mallinckrodt Institute of Radiology (MIR) is using a digital image processing system which employs NASA-developed technology. MIR's computer system is the largest radiology system in the world. It is used in diagnostic imaging. Blood vessels are injected with x-ray dye, and the images which are produced indicate whether arteries are hardened or blocked. A computer program developed by Jet Propulsion Laboratory known as Mini-VICAR/IBIS was supplied to MIR by COSMIC. The program provides the basis for developing the computer imaging routines for data processing, contrast enhancement and picture display.
Distributing medical images with internet technologies: a DICOM web server and a DICOM java viewer.
Fernàndez-Bayó, J; Barbero, O; Rubies, C; Sentís, M; Donoso, L
2000-01-01
With the advent of filmless radiology, it becomes important to be able to distribute radiologic images digitally throughout an entire hospital. A new approach based on World Wide Web technologies was developed to accomplish this objective. This approach involves a Web server that allows the query and retrieval of images stored in a Digital Imaging and Communications in Medicine (DICOM) archive. The images can be viewed inside a Web browser with use of a small Java program known as the DICOM Java Viewer, which is executed inside the browser. The system offers several advantages over more traditional picture archiving and communication systems (PACS): It is easy to install and maintain, is platform independent, allows images to be manipulated and displayed efficiently, and is easy to integrate with existing systems that are already making use of Web technologies. The system is user-friendly and can easily be used from outside the hospital if a security policy is in place. The simplicity and flexibility of Internet technologies makes them highly preferable to the more complex PACS workstations. The system works well, especially with magnetic resonance and computed tomographic images, and can help improve and simplify interdepartmental relationships in a filmless hospital environment.
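A present-day analogue of distributing DICOM images over the web (different from the paper's Java-applet viewer) is to render an archived DICOM object to a browser-friendly PNG on the server. The sketch below assumes the pydicom, NumPy, Pillow, and Flask packages; the route name and archive path are placeholders, not part of the system described above.

```python
import io
import numpy as np
import pydicom
from PIL import Image
from flask import Flask, send_file

app = Flask(__name__)

def dicom_to_png_bytes(path):
    """Read a DICOM file and window its pixel data into an 8-bit PNG."""
    ds = pydicom.dcmread(path)
    arr = ds.pixel_array.astype(np.float64)
    lo, hi = arr.min(), arr.max()
    arr8 = ((arr - lo) / max(hi - lo, 1) * 255).astype(np.uint8)
    buf = io.BytesIO()
    Image.fromarray(arr8).save(buf, format="PNG")
    buf.seek(0)
    return buf

@app.route("/image/<study_id>")
def serve_image(study_id):
    # Placeholder lookup: map the study ID to a file in the DICOM archive.
    return send_file(dicom_to_png_bytes(f"/archive/{study_id}.dcm"),
                     mimetype="image/png")

if __name__ == "__main__":
    app.run(port=8080)   # e.g. http://localhost:8080/image/<study_id>
```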
Space Images for NASA JPL Android Version
NASA Technical Reports Server (NTRS)
Nelson, Jon D.; Gutheinz, Sandy C.; Strom, Joshua R.; Arca, Jeremy M.; Perez, Martin; Boggs, Karen; Stanboli, Alice
2013-01-01
This software addresses the demand for easily accessible NASA JPL images and videos by providing a user friendly and simple graphical user interface that can be run via the Android platform from any location where Internet connection is available. This app is complementary to the iPhone version of the application. A backend infrastructure stores, tracks, and retrieves space images from the JPL Photojournal and Institutional Communications Web server, and catalogs the information into a streamlined rating infrastructure. This system consists of four distinguishing components: image repository, database, server-side logic, and Android mobile application. The image repository contains images from various JPL flight projects. The database stores the image information as well as the user rating. The server-side logic retrieves the image information from the database and categorizes each image for display. The Android mobile application is an interfacing delivery system that retrieves the image information from the server for each Android mobile device user. Also created is a reporting and tracking system for charting and monitoring usage. Unlike other Android mobile image applications, this system uses the latest emerging technologies to produce image listings based directly on user input. This allows for countless combinations of images returned. The backend infrastructure uses industry-standard coding and database methods, enabling future software improvement and technology updates. The flexibility of the system design framework permits multiple levels of display possibilities and provides integration capabilities. Unique features of the software include image/video retrieval from a selected set of categories, image Web links that can be shared among e-mail users, sharing to Facebook/Twitter, marking as user's favorites, and image metadata searchable for instant results.
RISK ASSESSMENT OF MANUFACTURED NANOMATERIAL: MORE THAN JUST SIZE
Nanotechnology is a dynamic and enabling technology capable of producing nano-scale materials with unique electrical, catalytic, thermal, mechanical, or imaging properties for a variety of applications. Nanomaterials may display unique toxicological properties and routes of expos...
NASA Technical Reports Server (NTRS)
Serebreny, S. M.; Evans, W. E.; Wiegman, E. J.
1974-01-01
The usefulness of dynamic display techniques in exploiting the repetitive nature of ERTS imagery was investigated. A specially designed Electronic Satellite Image Analysis Console (ESIAC) was developed and employed to process data for seven ERTS principal investigators studying dynamic hydrological conditions for diverse applications. These applications include measurement of snowfield extent and sediment plumes from estuary discharge, Playa Lake inventory, and monitoring of phreatophyte and other vegetation changes. The ESIAC provides facilities for storing registered image sequences in a magnetic video disc memory for subsequent recall, enhancement, and animated display in monochrome or color. The most unique feature of the system is the capability to time lapse the imagery and analytic displays of the imagery. Data products included quantitative measurements of distances and areas, binary thematic maps based on monospectral or multispectral decisions, radiance profiles, and movie loops. Applications of animation for uses other than creating time-lapse sequences are identified. Input to the ESIAC can be either digital or via photographic transparencies.
NASA Astrophysics Data System (ADS)
Kim, Min Su; Bos, Philip J.; Kim, Dong-Woo; Yang, Deng-Ke; Lee, Joong Hee; Lee, Seung Hee
2016-10-01
The technology for displaying static images in portable displays, advertising panels, and price tags is pursuing significant reductions in power consumption and product cost. Driving with a low-frequency electric field in the fringe-field switching (FFS) mode can be one of the efficient ways to save power in recent portable devices, but a serious drop in image quality, so-called image flickering, has been found, arising from the coupling of elastic deformation to not only the quadratic dielectric effect but also the linear flexoelectric effect. Despite the urgent need to solve this issue, understanding of the phenomenon remains vague. Here, we thoroughly analyze and report for the first time the flexoelectric effect in an in-plane switching (IPS) liquid crystal cell. The effect takes place in the area above the electrodes due to splay and bend deformations of the nematic liquid crystal along oblique electric fields, so that an obvious spatial shift of the optical transmittance is experimentally observed and is clearly demonstrated based on the relation between the direction of flexoelectric polarization and the electric field polarity. In addition, we report that the IPS mode has inherent characteristics to solve the image-flickering issue in low-power-consumption displays, in terms of the physical properties of the liquid crystal material and the electrode structure.
Color image fusion for concealed weapon detection
NASA Astrophysics Data System (ADS)
Toet, Alexander
2003-09-01
Recent advances in passive and active imaging sensor technology offer the potential to detect weapons that are concealed underneath a person's clothing or carried along in bags. Although the concealed weapons can sometimes easily be detected, it can be difficult to perceive their context, due to the non-literal nature of these images. Especially for dynamic crowd surveillance purposes, it may be impossible to rapidly assess with certainty which individual in the crowd is the one carrying the observed weapon. Sensor fusion is an enabling technology that may be used to solve this problem. Through fusion the signal of the sensor that depicts the weapon can be displayed in the context provided by a sensor of a different modality. We propose an image fusion scheme in which non-literal imagery can be fused with standard color images such that the result clearly displays the observed weapons in the context of the original color image. The procedure is such that the relevant contrast details from the non-literal image are transferred to the color image without altering the original color distribution of this image. The result is a natural looking color image that fluently combines all details from both input sources. When an observer who performs a dynamic crowd surveillance task, detects a weapon in the scene, he will also be able to quickly determine which person in the crowd is actually carrying the observed weapon (e.g. "the man with the red T-shirt and blue jeans"). The method is illustrated by the fusion of thermal 8-12 μm imagery with standard RGB color images.
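A simplified way to achieve the kind of fusion described, in which contrast detail from the non-literal sensor is transferred without altering the original colour distribution, is to inject high-pass detail into the luminance channel only. The following sketch is a generic illustration of that idea under stated assumptions, not the authors' actual fusion scheme; the kernel size and blend weight are arbitrary.

```python
import numpy as np

def fuse_preserving_color(rgb, thermal, weight=0.5):
    """Blend high-frequency detail from a single-band (e.g. thermal) image
    into the luminance of an RGB image, leaving chrominance untouched.
    rgb: (H, W, 3) uint8, thermal: (H, W) uint8 of the same size."""
    rgb = rgb.astype(np.float64) / 255.0
    thermal = thermal.astype(np.float64) / 255.0

    # Split luminance / chrominance (ITU-R BT.601 luma weights)
    y  = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    cb = rgb[..., 2] - y
    cr = rgb[..., 0] - y

    # High-pass detail of the thermal image: original minus a local box mean
    k = 9
    pad = np.pad(thermal, k // 2, mode="edge")
    local_mean = np.zeros_like(thermal)
    for dy in range(k):
        for dx in range(k):
            local_mean += pad[dy:dy + thermal.shape[0], dx:dx + thermal.shape[1]]
    local_mean /= k * k
    detail = thermal - local_mean

    # Inject the detail into luminance only, then rebuild RGB
    y_fused = np.clip(y + weight * detail, 0.0, 1.0)
    r = y_fused + cr
    b = y_fused + cb
    g = (y_fused - 0.299 * r - 0.114 * b) / 0.587
    out = np.stack([r, g, b], axis=-1)
    return (np.clip(out, 0.0, 1.0) * 255).astype(np.uint8)

# Example with random stand-in imagery of matching size
rgb = (np.random.rand(240, 320, 3) * 255).astype(np.uint8)
thermal = (np.random.rand(240, 320) * 255).astype(np.uint8)
print(fuse_preserving_color(rgb, thermal).shape)
```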
DIAGNOcam--a Near Infrared Digital Imaging Transillumination (NIDIT) technology.
Abdelaziz, Marwa; Krejci, Ivo
2015-01-01
In developed countries, clinical manifestation of carious lesions is changing: instead of dentists being confronted with wide-open cavities, more and more hidden caries are seen. For a long time, the focus of the research community was on finding a method for the detection of carious lesions without the need for radiographs. The research on Digital Imaging Fiber-Optic Transillumination (DIFOTI) has been an active domain. The scope of the present article is to describe a novel technology for caries diagnostics based on Near Infrared Digital Imaging Transillumination (NIDIT), and to give first examples of its clinical indications. In addition, the coupling of NIDIT with a head-mounted retinal image display (RID) to improve clinical workflow is presented. The novel NIDIT technology was shown to be useful as a diagnostic tool in several indications, including mainly the detection of proximal caries and, less importantly, for occlusal caries, fissures, and secondary decay around amalgam and composite restorations. The coupling of this technology with a head-mounted retinal image system allows for its very efficient implementation into daily practice.
Yang, Xiaofeng; Wu, Wei; Wang, Guoan
2015-04-01
This paper presents a surgical optical navigation system with non-invasive, real-time positioning characteristics for open surgical procedures. The design is based on the principle of near-infrared fluorescence molecular imaging. In vivo fluorescence excitation technology, multi-channel spectral camera technology, and image fusion software technology were used. A visible and near-infrared ring-LED excitation source, multi-channel band-pass filters, spectral camera 2-CCD optical sensor technology, and computer systems were integrated, and, as a result, a new surgical optical navigation system was successfully developed. When the near-infrared fluorescent agent is injected, the system can display anatomical images of the tissue surface and near-infrared fluorescent functional images of the surgical field simultaneously. The system can identify lymphatic vessels, lymph nodes, and tumor edges that the surgeon cannot detect with the naked eye intra-operatively. Our system effectively guides the surgeon in removing tumor tissue, significantly improving the success rate of surgery. The technologies have obtained a national patent, patent No. ZI. 2011 1 0292374. 1.
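One simple way to realise the simultaneous display described, overlaying the near-infrared fluorescence signal on the visible anatomical image, is a thresholded pseudo-colour alpha blend. The sketch below is an illustrative stand-in with assumed parameters, not the patented system's software.

```python
import numpy as np

def overlay_fluorescence(visible_rgb, nir, threshold=0.2, alpha=0.6):
    """Overlay a normalized NIR fluorescence channel onto a visible-light
    RGB frame as a green pseudo-colour, only where the signal exceeds a
    threshold, so the anatomy stays visible elsewhere."""
    vis = visible_rgb.astype(np.float64) / 255.0
    sig = nir.astype(np.float64)
    sig = (sig - sig.min()) / max(sig.max() - sig.min(), 1e-9)  # normalise 0..1

    mask = (sig > threshold).astype(np.float64) * alpha * sig   # per-pixel opacity
    green = np.zeros_like(vis)
    green[..., 1] = 1.0                                         # pseudo-colour

    out = vis * (1.0 - mask[..., None]) + green * mask[..., None]
    return (np.clip(out, 0.0, 1.0) * 255).astype(np.uint8)

# Example with random stand-in frames
vis = (np.random.rand(480, 640, 3) * 255).astype(np.uint8)
nir = np.random.rand(480, 640).astype(np.float32)
print(overlay_fluorescence(vis, nir).shape)
```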
NASA Astrophysics Data System (ADS)
Park, Young Woo; Guo, Bing; Mogensen, Monique; Wang, Kevin; Law, Meng; Liu, Brent
2010-03-01
When a patient is admitted to the emergency room with suspected stroke, time is of the utmost importance. The infarcted brain area suffers irreparable damage as soon as three hours after the onset of stroke symptoms. A CT scan is one of the standard first-line imaging investigations and is crucial to identify and properly triage stroke cases. Ensuring that an expert radiologist is available in the emergency environment to diagnose the stroke patient in a timely manner only adds to the challenges within the clinical workflow. Therefore, a truly zero-footprint web-based system with powerful advanced visualization tools for volumetric imaging, including 2D, MIP/MPR, and 3D display, can greatly facilitate this dynamic clinical workflow for stroke patients. Together with mobile technology, the proper visualization tools can be delivered at the point of decision, anywhere and anytime. We will present a small pilot project to evaluate the use of mobile technologies, using devices such as iPhones, in evaluating stroke patients. The results of the evaluation as well as any challenges in setting up the system will also be discussed.
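Of the volumetric visualization tools mentioned (2D, MIP/MPR, 3D), the maximum intensity projection is the simplest to express: it collapses the CT volume along one axis by keeping the brightest voxel along each ray. A minimal NumPy sketch, assuming the volume is already loaded as an array (this is not the pilot system's code), is shown below.

```python
import numpy as np

def mip(volume, axis=0):
    """Maximum intensity projection of a 3D volume (slices, rows, cols)."""
    return volume.max(axis=axis)

def mpr_slice(volume, index, axis=0):
    """A single multiplanar-reformat (MPR) slice along the chosen axis."""
    return np.take(volume, index, axis=axis)

# Example with a synthetic CT-like volume (Hounsfield-ish values)
ct = np.random.randint(-1000, 2000, size=(120, 256, 256)).astype(np.int16)
axial_mip = mip(ct, axis=0)        # 256 x 256 projection
coronal   = mpr_slice(ct, 128, 1)  # 120 x 256 reformat
print(axial_mip.shape, coronal.shape)
```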
Enhanced visualization of inner ear structures
NASA Astrophysics Data System (ADS)
Niemczyk, Kazimierz; Kucharski, Tomasz; Kujawinska, Malgorzata; Bruzgielewicz, Antoni
2004-07-01
Surgery increasingly requires extensive support from imaging technologies in order to increase the effectiveness and safety of operations. One important task is to enhance the visualisation of quasi-phase (transparent) 3D structures. These structures are characterized by very low contrast, which makes differentiation of tissues in the field of view very difficult; for that reason the surgeon may be extremely uncertain during the operation. This problem arises when supporting operations on the inner ear, during which the physician has to perform cuts at specific places of quasi-transparent velums. Conventionally during such operations the medical doctor views the operating field through a stereoscopic microscope. In this paper we propose a 3D visualisation system based on a Helmet Mounted Display. Two CCD cameras placed at the output of the microscope perform acquisition of stereo pairs of images. The images are processed in real time with the goal of enhancing the quasi-phase structures. The main task is to create an algorithm that is not sensitive to changes in intensity distribution; the disadvantage of existing algorithms is their lack of adaptation to reflections and shadows occurring in the field of view. The processed images from both the left and right channels are overlaid on the actual images, then exported and displayed on the LCDs of the Helmet Mounted Display. The physician observes through the HMD (Helmet Mounted Display) a stereoscopic operating scene with indication of the places of special interest. The authors present the hardware, the procedures applied, and initial results of inner ear structure visualisation. Several problems connected with the processing of stereo-pair images are discussed.
DLP™-based dichoptic vision test system
NASA Astrophysics Data System (ADS)
Woods, Russell L.; Apfelbaum, Henry L.; Peli, Eli
2010-01-01
It can be useful to present a different image to each of the two eyes while they cooperatively view the world. Such dichoptic presentation can occur in investigations of stereoscopic and binocular vision (e.g., strabismus, amblyopia) and vision rehabilitation in clinical and research settings. Various techniques have been used to construct dichoptic displays. The most common and most flexible modern technique uses liquid-crystal (LC) shutters. When used in combination with cathode ray tube (CRT) displays, there is often leakage of light from the image intended for one eye into the view of the other eye. Such interocular crosstalk is 14% even in our state-of-the-art CRT-based dichoptic system. While such crosstalk may have minimal impact on stereo movie or video game experiences, it can defeat clinical and research investigations. We use micromirror digital light processing (DLP™) technology to create a novel dichoptic visual display system with substantially lower interocular crosstalk (0.3% remaining crosstalk comes from the LC shutters). The DLP system normally uses a color wheel to display color images. Our approach is to disable the color wheel, synchronize the display directly to the computer's sync signal, allocate each of the three (former) color presentations to one or both eyes, and open and close the LC shutters in synchrony with those color events.
Characterization of crosstalk in stereoscopic display devices.
Zafar, Fahad; Badano, Aldo
2014-12-01
Many different types of stereoscopic display devices are used for commercial and research applications. Stereoscopic displays offer the potential to improve performance in detection tasks for medical imaging diagnostic systems. Due to the variety of stereoscopic display technologies, it remains unclear how these compare with each other for detection and estimation tasks. Different stereo devices have different performance trade-offs due to their display characteristics. Among them, crosstalk is known to affect observer perception of 3D content and might affect detection performance. We measured and report the detailed luminance output and crosstalk characteristics for three different types of stereoscopic display devices. We also recorded the effect of other factors, such as viewing angle, the use of different eyewear, and screen location, on the recorded luminance profiles. Our results show that the crosstalk signature for viewing 3D content can vary considerably when using different types of 3D glasses with active stereo displays. We also show that significant differences are present in crosstalk signatures when varying the viewing angle from 0 degrees to 20 degrees for a stereo-mirror 3D display device. Our detailed characterization can help emulate the effect of crosstalk in conducting computational observer image quality assessment evaluations that minimize costly and time-consuming human reader studies.
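For reference, a commonly used definition expresses crosstalk as the black-level-corrected leakage divided by the black-level-corrected intended signal. The snippet below applies that definition to illustrative (assumed) luminance readings; it is not taken from the authors' measurement protocol.

```python
def crosstalk_percent(l_leak, l_signal, l_black):
    """Crosstalk (%) = 100 * (leakage - black) / (signal - black), where
    l_leak is the luminance seen by the 'off' eye when the other eye's
    view is driven to white, l_signal is the luminance of the intended
    white view, and l_black is the luminance with both views black."""
    return 100.0 * (l_leak - l_black) / (l_signal - l_black)

# Illustrative readings in cd/m^2 (assumed, not measured values)
print(crosstalk_percent(l_leak=2.4, l_signal=110.0, l_black=0.3))  # ~1.9 %
```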
A review of wearable technology in medicine.
Iqbal, Mohammed H; Aydin, Abdullatif; Brunckhorst, Oliver; Dasgupta, Prokar; Ahmed, Kamran
2016-10-01
With rapid advances in technology, wearable devices have evolved and been adopted for various uses, ranging from simple devices used to aid fitness to more complex devices used to assist surgery. Wearable technology is broadly divided into head-mounted displays and body sensors. A broad search of the current literature revealed a total of 13 different body sensors and 11 head-mounted display devices. The latter have been reported for use in surgery (n = 7), imaging (n = 3), simulation and education (n = 2) and as navigation tools (n = 1). Body sensors have been used as vital signs monitors (n = 9) and as posture- and fitness-related devices (n = 4). Body sensors were found to have excellent functionality in aiding patient posture and rehabilitation, while head-mounted displays can provide information to surgeons while maintaining sterility during operative procedures. There is a potential role for head-mounted wearable technology and body sensors in medicine and patient care. However, there is little scientific evidence available proving that the application of such technologies improves patient satisfaction or care. Further studies need to be conducted before a clear conclusion can be drawn. © The Royal Society of Medicine.
Eng, J
1997-01-01
Java is a programming language that runs on a "virtual machine" built into World Wide Web (WWW)-browsing programs on multiple hardware platforms. Web pages were developed with Java to enable Web-browsing programs to overlay transparent graphics and text on displayed images so that the user could control the display of labels and annotations on the images, a key feature not available with standard Web pages. This feature was extended to include the presentation of normal radiologic anatomy. Java programming was also used to make Web browsers compatible with the Digital Imaging and Communications in Medicine (DICOM) file format. By enhancing the functionality of Web pages, Java technology should provide greater incentive for using a Web-based approach in the development of radiology teaching material.
Hologlyphics: volumetric image synthesis performance system
NASA Astrophysics Data System (ADS)
Funk, Walter
2008-02-01
This paper describes a novel volumetric image synthesis system and artistic technique, which generate moving volumetric images in real-time, integrated with music. The system, called the Hologlyphic Funkalizer, is performance based, wherein the images and sound are controlled by a live performer, for the purposes of entertaining a live audience and creating a performance art form unique to volumetric and autostereoscopic images. While currently configured for a specific parallax barrier display, the Hologlyphic Funkalizer's architecture is completely adaptable to various volumetric and autostereoscopic display technologies. Sound is distributed through a multi-channel audio system; currently a quadraphonic speaker setup is implemented. The system controls volumetric image synthesis, production of music and spatial sound via acoustic analysis and human gestural control, using a dedicated control panel, motion sensors, and multiple musical keyboards. Music can be produced by external acoustic instruments, pre-recorded sounds or custom audio synthesis integrated with the volumetric image synthesis. Aspects of the sound can control the evolution of images and vice versa. Sounds can be associated with and interact with images; for example, voice synthesis can be combined with an animated volumetric mouth, where nuances of generated speech modulate the mouth's expressiveness. Different images can be sent to up to 4 separate displays. The system applies many novel volumetric special effects, and extends several film and video special effects into the volumetric realm. Extensive and varied content has been developed and shown to live audiences by a live performer. Real world applications will be explored, with feedback on the human factors.
ERIC Educational Resources Information Center
Canelos, James
An internal cognitive variable--mental imagery representation--was studied using a set of three information-processing strategies under external stimulus visual display conditions for various learning levels. The copy strategy provided verbal and visual dual-coding and required formation of a vivid mental image. The relational strategy combined…
A multi-directional backlight for a wide-angle, glasses-free three-dimensional display.
Fattal, David; Peng, Zhen; Tran, Tho; Vo, Sonny; Fiorentino, Marco; Brug, Jim; Beausoleil, Raymond G
2013-03-21
Multiview three-dimensional (3D) displays can project the correct perspectives of a 3D image in many spatial directions simultaneously. They provide a 3D stereoscopic experience to many viewers at the same time with full motion parallax and do not require special glasses or eye tracking. None of the leading multiview 3D solutions is particularly well suited to mobile devices (watches, mobile phones or tablets), which require the combination of a thin, portable form factor, a high spatial resolution and a wide full-parallax view zone (for short viewing distance from potentially steep angles). Here we introduce a multi-directional diffractive backlight technology that permits the rendering of high-resolution, full-parallax 3D images in a very wide view zone (up to 180 degrees in principle) at an observation distance of up to a metre. The key to our design is a guided-wave illumination technique based on light-emitting diodes that produces wide-angle multiview images in colour from a thin planar transparent lightguide. Pixels associated with different views or colours are spatially multiplexed and can be independently addressed and modulated at video rate using an external shutter plane. To illustrate the capabilities of this technology, we use simple ink masks or a high-resolution commercial liquid-crystal display unit to demonstrate passive and active (30 frames per second) modulation of a 64-view backlight, producing 3D images with a spatial resolution of 88 pixels per inch and full-motion parallax in an unprecedented view zone of 90 degrees. We also present several transparent hand-held prototypes showing animated sequences of up to six different 200-view images at a resolution of 127 pixels per inch.
Computer vision research with new imaging technology
NASA Astrophysics Data System (ADS)
Hou, Guangqi; Liu, Fei; Sun, Zhenan
2015-12-01
Light field imaging is capable of capturing dense multi-view 2D images in one snapshot, recording both the intensity values and the directions of rays simultaneously. As an emerging 3D device, the light field camera has been widely used in digital refocusing, depth estimation, stereoscopic display, etc. Traditional multi-view stereo (MVS) methods only perform well on strongly textured surfaces; the depth map contains numerous holes and large ambiguities in textureless or low-textured regions. In this paper, we exploit light field imaging technology for 3D face modeling in computer vision. Based on a 3D morphable model, we estimate the pose parameters from facial feature points. Then the depth map is estimated through the epipolar plane image (EPI) method. Finally, the high-quality 3D face model is recovered via a fusion strategy. We evaluate the effectiveness and robustness on face images captured by a light field camera with different poses.
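The EPI-based depth step can be illustrated with a very small local estimator: on an epipolar plane image a scene point traces a line whose slope is its disparity, so under a brightness-constancy assumption the disparity at each pixel is approximately the negative ratio of the angular and spatial gradients. The sketch below is a didactic simplification of that idea, not the authors' pipeline.

```python
import numpy as np

def epi_disparity(epi, eps=1e-9, min_grad=0.05):
    """Local disparity estimate on an epipolar plane image (EPI).
    epi has shape (n_views, width): rows are angular samples s, columns
    are spatial samples x. A scene point traces the line x = x0 + d*s,
    so brightness constancy gives I_x * d + I_s = 0, i.e. d = -I_s / I_x."""
    i_s, i_x = np.gradient(epi.astype(np.float64))   # gradients along s, then x
    disparity = -i_s / (i_x + eps)
    confidence = np.abs(i_x) > min_grad              # only trust textured pixels
    return disparity, confidence

# Synthetic EPI: a sinusoidal texture shifting by d = 2 pixels per view
n_views, width, d_true = 9, 128, 2.0
s, x = np.mgrid[0:n_views, 0:width].astype(np.float64)
epi = np.sin(2 * np.pi * (x - d_true * s) / 32.0)

d_est, conf = epi_disparity(epi)
print(np.median(d_est[conf]))   # approximately 2 (up to discretisation error)
```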
Status of development of LCOS projection displays for F-22A, F/A-18E/F, and JSF cockpits
NASA Astrophysics Data System (ADS)
Kalmanash, Michael H.
2001-09-01
Projection display technology has been found to be an attractive alternative to direct view flat panel displays in many avionics applications. The projection approach permits compact high performance systems to be tailored to specific platform needs while using a complement of commercial off the shelf (COTS) components, including liquid crystal on silicon (LCOS) microdisplay imagers. A common projection engine used on multiple platforms enables improved performance, lower cost and shorter development cycles. This paper provides a status update for projection displays under development for the F-22A, the F/A-18E/F and the Lockheed Joint Strike Fighter (JSF) aircraft.
General consumer communication tools for improved image management and communication in medicine
NASA Astrophysics Data System (ADS)
Ratib, Osman M.; Rosset, Antoine; McCoy, J. Michael
2005-04-01
We elected to explore emerging consumer technologies that can be adopted to improve and facilitate image and data communication in medical and clinical environments. Widely adopted communication paradigms such as instant messaging, chatting, and direct emailing can be integrated into specific applications. The increasing capacity of portable and handheld devices such as iPod music players offers an attractive alternative for data storage that exceeds the capabilities of traditional offline storage media such as CDs or even DVDs. We adapted the medical image display and manipulation software OSIRIX to integrate different innovative technologies facilitating communication and data transfer between remote users. We integrated email and instant messaging features into the program, allowing users to instantaneously email an image or a set of images displayed on the screen. Using Apple's iChat instant messaging application, a user can share the content of his screen with a remote correspondent and communicate in real time using voice and video. To provide a convenient mechanism for exchanging large data sets, the program can store the data in DICOM format on CD or DVD, but it was also extended to use the large storage capacity of iPod hard disks as well as Apple's online storage service "dot Mac", to which users can subscribe to benefit from scalable, secure storage accessible from anywhere on the Internet. The adoption of these innovative technologies is likely to change the architecture of traditional picture archiving and communication systems and to provide more flexible and efficient means of communication.
Concept Car Design and Ability Training
NASA Astrophysics Data System (ADS)
Lv, Jiefeng; Lu, Hairong
Concept design, as a symbol of creative design thinking, reflects the exploratory and forward-looking nature of future design. As a vehicle for exploring future car design, it is not only a bold display of design inspiration and creativity; by demonstrating the concept, it also reflects a company's technological strength and progress and thus enhances its brand image. Present Chinese automobile design still lags considerably behind the world level. Cultivating students' concept design ability, in order to establish native design features and a self-reliant brand image, is therefore a practical, effective, and pressing need.
Image-guided interventions and computer-integrated therapy: Quo vadis?
Peters, Terry M; Linte, Cristian A
2016-10-01
Significant efforts have been dedicated to minimizing invasiveness associated with surgical interventions, most of which have been possible thanks to the developments in medical imaging, surgical navigation, visualization and display technologies. Image-guided interventions have promised to dramatically change the way therapies are delivered to many organs. However, in spite of the development of many sophisticated technologies over the past two decades, other than some isolated examples of successful implementations, minimally invasive therapy is far from enjoying the wide acceptance once envisioned. This paper provides a large-scale overview of the state-of-the-art developments, identifies several barriers thought to have hampered the wider adoption of image-guided navigation, and suggests areas of research that may potentially advance the field. Copyright © 2016. Published by Elsevier B.V.
NASA Astrophysics Data System (ADS)
Smith, Joseph; Marrs, Michael; Strnad, Mark; Apte, Raj B.; Bert, Julie; Allee, David; Colaneri, Nicholas; Forsythe, Eric; Morton, David
2013-05-01
Today's flat panel digital x-ray image sensors, which have been in production since the mid-1990s, are produced exclusively on glass substrates. While acceptable for use in a hospital or doctor's office, conventional glass substrate digital x-ray sensors are too fragile for use outside these controlled environments without extensive reinforcement. Reinforcement, however, significantly increases weight, bulk, and cost, making them impractical for far-forward remote diagnostic applications, which demand rugged and lightweight x-ray detectors. Additionally, glass substrate x-ray detectors are inherently rigid. This limits their use in curved or bendable, conformal x-ray imaging applications such as the non-destructive testing (NDT) of oil pipelines. However, by extending low-temperature thin-film transistor (TFT) technology previously demonstrated on plastic substrate- based electrophoretic and organic light emitting diode (OLED) flexible displays, it is now possible to manufacture durable, lightweight, as well as flexible digital x-ray detectors. In this paper, we discuss the principal technical approaches used to apply flexible display technology to two new large-area flexible digital x-ray sensors for defense, security, and industrial applications and demonstrate their imaging capabilities. Our results include a 4.8″ diagonal, 353 x 463 resolution, flexible digital x-ray detector, fabricated on a 6″ polyethylene naphthalate (PEN) plastic substrate; and a larger, 7.9″ diagonal, 720 x 640 resolution, flexible digital x-ray detector also fabricated on PEN and manufactured on a gen 2 (370 x 470 mm) substrate.
A framework for interactive visualization of digital medical images.
Koehring, Andrew; Foo, Jung Leng; Miyano, Go; Lobe, Thom; Winer, Eliot
2008-10-01
The visualization of medical images obtained from scanning techniques such as computed tomography and magnetic resonance imaging is a well-researched field. However, advanced tools and methods to manipulate these data for surgical planning and other tasks have not seen widespread use among medical professionals. Radiologists have begun using more advanced visualization packages on desktop computer systems, but most physicians continue to work with basic two-dimensional grayscale images or do not work directly with the data at all. In addition, new display technologies that are in use in other fields have yet to be fully applied in medicine. It is our estimation that usability is the key factor keeping this new technology from being more widely used by the medical community at large. Therefore, we have developed a software and hardware framework that not only makes use of advanced visualization techniques, but also features powerful, yet simple-to-use, interfaces. A virtual reality system was created to display volume-rendered medical models in three dimensions. It was designed to run in many configurations, from a large cluster of machines powering a multiwalled display down to a single desktop computer. An augmented reality system was also created for, literally, hands-on interaction when viewing models of medical data. Last, a desktop application was designed to provide a simple visualization tool, which can be run on nearly any computer at a user's disposal. This research is directed toward improving the capabilities of medical professionals in the tasks of preoperative planning, surgical training, diagnostic assistance, and patient education.
NASA Technical Reports Server (NTRS)
Schell, J. A.
1974-01-01
The recent availability of timely synoptic earth imagery from the Earth Resources Technology Satellites (ERTS) provides a wealth of information for the monitoring and management of vital natural resources. Formal language definitions and syntax interpretation algorithms were adapted to provide a flexible, computer information system for the maintenance of resource interpretation of imagery. These techniques are incorporated, together with image analysis functions, into an Interactive Resource Information Management and Analysis System, IRIMAS, which is implemented on a Texas Instruments 980A minicomputer system augmented with a dynamic color display for image presentation. A demonstration of system usage and recommendations for further system development are also included.
Investigation of a Space Delta Technology Facility (SDTF) for Spacelab
NASA Technical Reports Server (NTRS)
Welch, J. D.
1977-01-01
The Space Data Technology Facility (SDTF) would have the role of supporting a wide range of data technology related demonstrations which might be performed on Spacelab. The SDTF design is incorporated primarily in one single width standardized Spacelab rack. It consists of various display, control and data handling components together with interfaces with the demonstration-specific equipment and Spacelab. To arrive at this design a wide range of data related technologies and potential demonstrations were also investigated. One demonstration concerned with online image rectification and registration was developed in some depth.
Design and application of a small size SAFT imaging system for concrete structure
NASA Astrophysics Data System (ADS)
Shao, Zhixue; Shi, Lihua; Shao, Zhe; Cai, Jian
2011-07-01
A method of ultrasonic imaging detection is presented for quick non-destructive testing (NDT) of concrete structures using the synthetic aperture focusing technique (SAFT). A low-cost ultrasonic sensor array consisting of 12 commercially available low-frequency ultrasonic transducers is designed and manufactured. A channel compensation method is proposed to improve the consistency of the different transducers. The control devices for the array scan, as well as the virtual instrument for SAFT imaging, are designed. In the coarse scan mode, with a scan step of 50 mm, the system can quickly produce an image of a cross section of 600 mm (L) × 300 mm (D) from one measurement. In the refined scan mode, the system reduces the scan step and images the same cross section by moving the sensor array several times. Experiments on a staircase specimen, a concrete slab with an embedded target, and a building floor with an underground pipeline all verify the efficiency of the proposed method.
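To make the reconstruction step concrete, the following is a minimal delay-and-sum SAFT sketch in Python/NumPy. It is illustrative only: the wave speed, sampling rate, grid coordinates, and array layout are assumptions for the example, not parameters taken from the paper, and the actual system runs on a virtual-instrument platform rather than this code.

```python
# Minimal delay-and-sum SAFT sketch (illustrative; parameter names are assumed).
import numpy as np

def saft_reconstruct(a_scans, scan_x, depths, xs, c=2500.0, fs=1e6):
    """Focus a set of A-scans onto an (x, depth) image grid.

    a_scans : (n_positions, n_samples) echo data, one A-scan per scan position
    scan_x  : (n_positions,) transducer x positions [m]
    depths, xs : image grid coordinates [m]
    c  : assumed wave speed in concrete [m/s]
    fs : sampling frequency [Hz]
    """
    img = np.zeros((len(depths), len(xs)))
    n_samples = a_scans.shape[1]
    for iz, z in enumerate(depths):
        for ix, x in enumerate(xs):
            # Round-trip distance from each scan position to the image pixel.
            dist = 2.0 * np.hypot(scan_x - x, z)
            idx = np.round(dist / c * fs).astype(int)
            valid = idx < n_samples
            # Coherent sum of the delayed echoes (synthetic focusing).
            img[iz, ix] = a_scans[np.where(valid)[0], idx[valid]].sum()
    return img
```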
Reimagining the microscope in the 21st century using the scalable adaptive graphics environment
Mateevitsi, Victor; Patel, Tushar; Leigh, Jason; Levy, Bruce
2015-01-01
Background: Whole-slide imaging (WSI), while technologically mature, remains in the early adopter phase of the technology adoption lifecycle. One reason for this current situation is that current methods of visualizing and using WSI closely follow long-existing workflows for glass slides. We set out to “reimagine” the digital microscope in the era of cloud computing by combining WSI with the rich collaborative environment of the Scalable Adaptive Graphics Environment (SAGE). SAGE is a cross-platform, open-source visualization and collaboration tool that enables users to access, display and share a variety of data-intensive information, in a variety of resolutions and formats, from multiple sources, on display walls of arbitrary size. Methods: A prototype of a WSI viewer app in the SAGE environment was created. While not full featured, it enabled the testing of our hypothesis that these technologies could be blended together to change the essential nature of how microscopic images are utilized for patient care, medical education, and research. Results: Using the newly created WSI viewer app, demonstration scenarios were created in the patient care and medical education scenarios. This included a live demonstration of a pathology consultation at the International Academy of Digital Pathology meeting in Boston in November 2014. Conclusions: SAGE is well suited to display, manipulate and collaborate using WSIs, along with other images and data, for a variety of purposes. It goes beyond how glass slides and current WSI viewers are being used today, changing the nature of digital pathology in the process. A fully developed WSI viewer app within SAGE has the potential to encourage the wider adoption of WSI throughout pathology. PMID:26110092
Lee, Kenneth K C; Mariampillai, Adrian; Yu, Joe X Z; Cadotte, David W; Wilson, Brian C; Standish, Beau A; Yang, Victor X D
2012-07-01
Advances in swept-source laser technology continue to increase the imaging speed of swept-source optical coherence tomography (SS-OCT) systems. These fast imaging speeds are ideal for microvascular detection schemes, such as speckle variance (SV), where interframe motion can cause severe imaging artifacts and loss of vascular contrast. However, full utilization of the laser scan speed has been hindered by the computationally intensive signal processing required by SS-OCT and SV calculations. Using a commercial graphics processing unit that has been optimized for parallel data processing, we report a complete high-speed SS-OCT platform capable of real-time data acquisition, processing, display, and saving at 108,000 lines per second. Subpixel image registration of structural images was performed in real time prior to SV calculations in order to reduce decorrelation from stationary structures induced by bulk tissue motion. The viability of the system was successfully demonstrated in a high-bulk-tissue-motion scenario of human fingernail root imaging, where SV images (512 × 512 pixels, n = 4) were displayed at 54 frames per second.
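As a rough illustration of the speckle-variance contrast itself (not the paper's GPU implementation), the per-pixel interframe variance over a gate of structural frames can be sketched in a few lines of Python/NumPy; the gate size of 4 matches the n = 4 reported above, while the array shapes and the random example data are assumptions.

```python
import numpy as np

def speckle_variance(frames):
    """Interframe speckle variance for a gate of N structural OCT frames.

    frames : (N, rows, cols) intensity images acquired at the same location.
    Returns a per-pixel variance map; high values indicate moving scatterers
    (microvasculature), low values indicate static tissue.
    """
    return np.var(frames, axis=0)

# Example with the gate size reported in the abstract (n = 4 frames).
gate = np.random.rand(4, 512, 512)
sv_map = speckle_variance(gate)
```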
Satellite Imagery Via Personal Computer
NASA Technical Reports Server (NTRS)
1989-01-01
Automatic Picture Transmission (APT) was incorporated by NASA in the Tiros 8 weather satellite. APT included an advanced satellite camera that immediately transmitted a picture as well as low cost receiving equipment. When an advanced scanning radiometer was later introduced, ground station display equipment would not readily adjust to the new format until GSFC developed an APT Digital Scan Converter that made them compatible. A NASA Technical Note by Goddard's Vermillion and Kamoski described how to build a converter. In 1979, Electro-Services, using this technology, built the first microcomputer weather imaging system in the U.S. The company changed its name to Satellite Data Systems, Inc. and now manufactures the WeatherFax facsimile display graphics system which converts a personal computer into a weather satellite image acquisition and display workstation. Hardware, antennas, receivers, etc. are also offered. Customers include U.S. Weather Service, schools, military, etc.
Event-Based Tone Mapping for Asynchronous Time-Based Image Sensor
Simon Chane, Camille; Ieng, Sio-Hoi; Posch, Christoph; Benosman, Ryad B.
2016-01-01
The asynchronous time-based neuromorphic image sensor ATIS is an array of autonomously operating pixels able to encode luminance information with an exceptionally high dynamic range (>143 dB). This paper introduces an event-based methodology for displaying data from this type of event-based imager, taking into account the large dynamic range and high temporal accuracy that go beyond available mainstream display technologies. We introduce an event-based tone mapping methodology for asynchronously acquired, time-encoded gray-level data. A global and a local tone mapping operator are proposed. Both are designed to operate on a stream of incoming events rather than on time-frame windows. Experimental results on real outdoor scenes are presented to evaluate the performance of the tone mapping operators in terms of quality, temporal stability, adaptation capability, and computational time. PMID:27642275
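The sketch below illustrates, under stated assumptions, what a global event-based tone mapping operator might look like: each incoming event carries a decoded gray level that is mapped to a display code on arrival, rather than waiting for a frame. The logarithmic operator, the sensor range, and the event tuple format are illustrative choices and are not the operators proposed in the paper.

```python
import math

def tone_map_event(gray, display_max=255, sensor_max=1e5, eps=1e-6):
    """Map one event's high-dynamic-range gray level to a display code.

    gray : linear gray-level estimate decoded from the event stream.
    A logarithmic operator is used here as a generic global mapping.
    """
    return int(display_max * math.log(1 + gray) / math.log(1 + sensor_max + eps))

# Events are processed one by one as they arrive, rather than per frame.
for (x, y, t, gray) in [(10, 12, 0.0010, 5.0e3), (11, 12, 0.0012, 2.0e2)]:
    code = tone_map_event(gray)
    # display_buffer[y, x] = code  # update only the pixel that fired
```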
A low-cost multimodal head-mounted display system for neuroendoscopic surgery.
Xu, Xinghua; Zheng, Yi; Yao, Shujing; Sun, Guochen; Xu, Bainan; Chen, Xiaolei
2018-01-01
With rapid advances in technology, wearable devices such as head-mounted displays (HMDs) have been adopted for various uses in medical science, ranging from simply aiding fitness to assisting surgery. We aimed to investigate the feasibility and practicability of a low-cost multimodal HMD system in neuroendoscopic surgery. A multimodal HMD system, consisting mainly of an HMD with two built-in displays, an action camera, and a laptop computer displaying reconstructed medical images, was developed to assist neuroendoscopic surgery. With this tightly integrated system, the neurosurgeon could freely switch between endoscopic images, three-dimensional (3D) reconstructed virtual endoscopy images, and images of the surrounding environment. Using a Leap Motion controller, the neurosurgeon could adjust or rotate the 3D virtual endoscopic images at a distance to better understand the positional relation between lesions and normal tissues at will. A total of 21 consecutive patients with ventricular system diseases underwent neuroendoscopic surgery with the aid of this system. All operations were accomplished successfully, and no system-related complications occurred. The HMD was comfortable to wear and easy to operate. The screen resolution of the HMD was high enough for the neurosurgeon to operate carefully. With the system, the neurosurgeon could gain a better comprehension of lesions by freely switching among images of different modalities. The system had a steep learning curve, meaning that skill with it increased quickly. Compared with commercially available surgical assistant instruments, this system was relatively low cost. The multimodal HMD system is feasible, practical, helpful, and relatively cost efficient in neuroendoscopic surgery.
Geller, G.N.; Fosnight, E.A.; Chaudhuri, Sambhudas
2008-01-01
Access to satellite images has been largely limited to communities with specialized tools and expertise, even though images could also benefit other communities. This situation has resulted in underutilization of the data. TerraLook, which consists of collections of georeferenced JPEG images and an open source toolkit to use them, makes satellite images available to those lacking experience with remote sensing. Users can find, roam, and zoom images, create and display vector overlays, adjust and annotate images so they can be used as a communication vehicle, compare images taken at different times, and perform other activities useful for natural resource management, sustainable development, education, and other activities. © 2007 IEEE.
Geller, G.N.; Fosnight, E.A.; Chaudhuri, Sambhudas
2007-01-01
Access to satellite images has been largely limited to communities with specialized tools and expertise, even though images could also benefit other communities. This situation has resulted in underutilization of the data. TerraLook, which consists of collections of georeferenced JPEG images and an open source toolkit to use them, makes satellite images available to those lacking experience with remote sensing. Users can find, roam, and zoom images, create and display vector overlays, adjust and annotate images so they can be used as a communication vehicle, compare images taken at different times, and perform other activities useful for natural resource management, sustainable development, education, and other activities. © 2007 IEEE.
Virtual reality 3D headset based on DMD light modulators
NASA Astrophysics Data System (ADS)
Bernacki, Bruce E.; Evans, Allan; Tang, Edward
2014-06-01
We present the design of an immersion-type 3D headset suitable for virtual reality applications based upon digital micromirror devices (DMDs). Current methods for presenting information for virtual reality are focused on either polarization-based modulators such as liquid crystal on silicon (LCoS) devices, or miniature LCD or LED displays, often using lenses to place the image at infinity. LCoS modulators are an area of active research and development, and they reduce the amount of viewing light by 50% due to the use of polarization. Viewable LCD or LED screens may suffer low resolution, cause eye fatigue, and exhibit a "screen door" or pixelation effect due to the low pixel fill factor. Our approach leverages a mature technology based on silicon micromirrors, delivering 720p-resolution displays in a small form factor with high fill factor. Supporting chip sets allow rapid integration of these devices into wearable displays with high-definition resolution and low power consumption, and many of the design methods developed for DMD projector applications can be adapted to display use. Potential applications include night driving with natural depth perception, piloting of UAVs, fusion of multiple sensors for pilots, training, vision diagnostics, and consumer gaming. Our design concept is described, in which light from the DMD is imaged to infinity and the user's own eye lens forms a real image on the user's retina, resulting in a virtual retinal display.
NASA Astrophysics Data System (ADS)
Kuroki, Hayato; Ino, Shuichi; Nakano, Satoko; Hori, Kotaro; Ifukube, Tohru
The authors of this paper have been studying a real-time speech-to-caption system that uses speech recognition technology with a “repeat-speaking” method. In this system, a “repeat-speaker” listens to a lecturer's voice and then speaks the lecturer's utterances back into a speech recognition computer. The resulting system showed a caption accuracy of about 97% for Japanese-to-Japanese conversion and a voice-to-caption conversion time of about 4 seconds for English-to-English conversion at several international conferences. Achieving this performance, however, required considerable cost. In human communication, speech understanding depends not only on verbal information but also on non-verbal information such as the speaker's gestures and face and mouth movements. This led the authors to the idea of briefly storing the information in a computer and then presenting the captions and the speaker's face-movement images in whatever sequence and timing yields the highest comprehension. In this paper, we investigate the relationship of the display sequence and display timing between captions that contain speech recognition errors and the speaker's face-movement images. The results show that the sequence “display the caption before the speaker's face image” improves comprehension of the captions. The sequence “display both simultaneously” shows an improvement of only a few percent over the question sentence, and the sequence “display the speaker's face image before the caption” shows almost no change. In addition, the sequence “display the caption 1 second before the speaker's face image” shows the most significant improvement of all the conditions.
Real-time MRI guidance of cardiac interventions.
Campbell-Washburn, Adrienne E; Tavallaei, Mohammad A; Pop, Mihaela; Grant, Elena K; Chubb, Henry; Rhode, Kawal; Wright, Graham A
2017-10-01
Cardiac magnetic resonance imaging (MRI) is appealing to guide complex cardiac procedures because it is ionizing radiation-free and offers flexible soft-tissue contrast. Interventional cardiac MR promises to improve existing procedures and enable new ones for complex arrhythmias, as well as congenital and structural heart disease. Guiding invasive procedures demands faster image acquisition, reconstruction and analysis, as well as intuitive intraprocedural display of imaging data. Standard cardiac MR techniques such as 3D anatomical imaging, cardiac function and flow, parameter mapping, and late-gadolinium enhancement can be used to gather valuable clinical data at various procedural stages. Rapid intraprocedural image analysis can extract and highlight critical information about interventional targets and outcomes. In some cases, real-time interactive imaging is used to provide a continuous stream of images displayed to interventionalists for dynamic device navigation. Alternatively, devices are navigated relative to a roadmap of major cardiac structures generated through fast segmentation and registration. Interventional devices can be visualized and tracked throughout a procedure with specialized imaging methods. In a clinical setting, advanced imaging must be integrated with other clinical tools and patient data. In order to perform these complex procedures, interventional cardiac MR relies on customized equipment, such as interactive imaging environments, in-room image display, audio communication, hemodynamic monitoring and recording systems, and electroanatomical mapping and ablation systems. Operating in this sophisticated environment requires coordination and planning. This review provides an overview of the imaging technology used in MRI-guided cardiac interventions. Specifically, this review outlines clinical targets, standard image acquisition and analysis tools, and the integration of these tools into clinical workflow. Level of Evidence: 1. Technical Efficacy: Stage 5. J. Magn. Reson. Imaging 2017;46:935-950. © 2017 International Society for Magnetic Resonance in Medicine.
Post-processing methods of rendering and visualizing 3-D reconstructed tomographic images
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wong, S.T.C.
The purpose of this presentation is to discuss the computer processing techniques applied to tomographic images, after they have been generated by imaging scanners, for volume visualization. Volume visualization is concerned with the representation, manipulation, and rendering of volumetric data. Since the first digital images were produced from computed tomography (CT) scanners in the mid-1970s, applications of visualization in medicine have expanded dramatically. Today, three-dimensional (3D) medical visualization has expanded from using CT data, the first inherently digital source of 3D medical data, to using data from various medical imaging modalities, including magnetic resonance scanners, positron emission scanners, digital ultrasound, electronic and confocal microscopy, and other medical imaging modalities. We have advanced from rendering anatomy to aid diagnosis and to visualize complex anatomic structures, to planning and assisting surgery and radiation treatment. New, more accurate and cost-effective procedures for clinical services and biomedical research have become possible by integrating computer graphics technology with medical images. This trend is particularly noticeable in the current market-driven health care environment. For example, interventional imaging, image-guided surgery, and stereotactic and visualization techniques are now making their way into surgical practice. In this presentation, we discuss only computer-display-based approaches to volumetric medical visualization. That is, we assume that the display device available is two-dimensional (2D) in nature and that all analysis of multidimensional image data is to be carried out via the 2D screen of the device. There are technologies, such as holography and virtual reality, that do provide a "true 3D screen". To confine the scope, this presentation will not discuss such approaches.
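As a minimal, assumption-laden illustration of collapsing volumetric data onto a 2D screen, the sketch below performs a maximum-intensity projection (MIP) along one axis; MIP is only one of several rendering modes discussed in the volume-visualization literature, and the placeholder volume here stands in for real CT or MR data.

```python
import numpy as np

def mip_render(volume, axis=0):
    """Maximum-intensity projection of a 3D scalar volume onto a 2D image.

    volume : (nz, ny, nx) array of CT/MR intensities.
    This is the simplest volume-rendering mode; ray casting with transfer
    functions and compositing follows the same idea of accumulating samples
    along viewing rays.
    """
    return volume.max(axis=axis)

ct = np.random.rand(64, 256, 256)      # placeholder volume (assumed data)
projection = mip_render(ct, axis=0)    # 256 x 256 image for the 2D screen
```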
Neuroradiology Using Secure Mobile Device Review.
Randhawa, Privia A; Morrish, William; Lysack, John T; Hu, William; Goyal, Mayank; Hill, Michael D
2016-04-05
Image review on computer-based workstations has made film-based review outdated. Despite advances in technology, the lack of portability of digital workstations creates an inherent disadvantage. As such, we sought to determine if the quality of image review on a handheld device is adequate for routine clinical use. Six CT/CTA cases and six MR/MRA cases were independently reviewed by three neuroradiologists in varying environments: high and low ambient light using a handheld device and on a traditional imaging workstation in ideal conditions. On first review (using a handheld device in high ambient light), a preliminary diagnosis for each case was made. Upon changes in review conditions, neuroradiologists were asked if any additional features were seen that changed their initial diagnoses. Reviewers were also asked to comment on overall clinical quality and if the handheld display was of acceptable quality for image review. After the initial CT review in high ambient light, additional findings were reported in 2 of 18 instances on subsequent reviews. Similarly, additional findings were identified in 4 of 18 instances after the initial MR review in high ambient lighting. Only one of these six additional findings contributed to the diagnosis made on the initial preliminary review. Use of a handheld device for image review is of adequate diagnostic quality based on image contrast, sharpness of structures, visible artefacts and overall display quality. Although reviewers were comfortable with using this technology, a handheld device with a larger screen may be diagnostically superior.
X-Windows Widget for Image Display
NASA Technical Reports Server (NTRS)
Deen, Robert G.
2011-01-01
XvicImage is a high-performance X Windows (Motif-compliant) user interface widget for displaying images. It handles all aspects of low-level image display. The fully Motif-compliant image display widget handles the following tasks: (1) image display, including dithering as needed; (2) zoom; (3) pan; (4) stretch (contrast enhancement, via lookup table); (5) display of single-band or color data; (6) display of non-byte data (ints, floats); (7) pseudocolor display; (8) full overlay support (drawing graphics on image); (9) mouse-based panning; (10) cursor handling, shaping, and planting (disconnecting cursor from mouse); (11) support for all user interaction events (passed to application); (12) background loading and display of images (does not freeze the GUI); and (13) tiling of images.
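A rough sketch of the stretch operation in item (4), contrast enhancement via a lookup table, is shown below in Python rather than the widget's C/X Windows code; the 8-bit range, the linear mapping, and the example thresholds are assumptions used only to illustrate how a LUT-based stretch works.

```python
import numpy as np

def build_stretch_lut(low, high):
    """8-bit linear contrast-stretch lookup table: values <= low map to 0,
    values >= high map to 255, values in between are scaled linearly."""
    codes = np.arange(256, dtype=np.float64)
    lut = np.clip((codes - low) / max(high - low, 1) * 255.0, 0, 255)
    return lut.astype(np.uint8)

def apply_lut(image, lut):
    """Contrast enhancement by table lookup, one fetch per pixel."""
    return lut[image]

img = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
stretched = apply_lut(img, build_stretch_lut(low=30, high=200))
```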
NASA Astrophysics Data System (ADS)
Robbins, William L.; Conklin, James J.
1995-10-01
Medical images (angiography, CT, MRI, nuclear medicine, ultrasound, x ray) play an increasingly important role in the clinical development and regulatory review process for pharmaceuticals and medical devices. Since medical images are increasingly acquired and archived digitally, or are readily digitized from film, they can be visualized, processed and analyzed in a variety of ways using digital image processing and display technology. Moreover, with image-based data management and data visualization tools, medical images can be electronically organized and submitted to the U.S. Food and Drug Administration (FDA) for review. The collection, processing, analysis, archival, and submission of medical images in a digital format versus an analog (film-based) format presents both challenges and opportunities for the clinical and regulatory information management specialist. The medical imaging 'core laboratory' is an important resource for clinical trials and regulatory submissions involving medical imaging data. Use of digital imaging technology within a core laboratory can increase efficiency and decrease overall costs in the image data management and regulatory review process.
A Framework for Realistic Modeling and Display of Object Surface Appearance
NASA Astrophysics Data System (ADS)
Darling, Benjamin A.
With advances in screen and video hardware technology, the type of content presented on computers has progressed from text and simple shapes to high-resolution photographs, photorealistic renderings, and high-definition video. At the same time, there have been significant advances in the area of content capture, with the development of devices and methods for creating rich digital representations of real-world objects. Unlike photo or video capture, which provide a fixed record of the light in a scene, these new technologies provide information on the underlying properties of the objects, allowing their appearance to be simulated for novel lighting and viewing conditions. These capabilities provide an opportunity to continue the computer display progression, from high-fidelity image presentations to digital surrogates that recreate the experience of directly viewing objects in the real world. In this dissertation, a framework was developed for representing objects with complex color, gloss, and texture properties and displaying them onscreen to appear as if they are part of the real-world environment. At its core, there is a conceptual shift from a traditional image-based display workflow to an object-based one. Instead of presenting the stored patterns of light from a scene, the objective is to reproduce the appearance attributes of a stored object by simulating its dynamic patterns of light for the real viewing and lighting geometry. This is accomplished using a computational approach where the physical light sources are modeled and the observer and display screen are actively tracked. Surface colors are calculated for the real spectral composition of the illumination with a custom multispectral rendering pipeline. In a set of experiments, the accuracy of color and gloss reproduction was evaluated by measuring the screen directly with a spectroradiometer. Gloss reproduction was assessed by comparing gonio measurements of the screen output to measurements of the real samples in the same measurement configuration. A chromatic adaptation experiment was performed to evaluate color appearance in the framework and explore the factors that contribute to differences when viewing self-luminous displays as opposed to reflective objects. A set of sample applications was developed to demonstrate the potential utility of the object display technology for digital proofing, psychophysical testing, and artwork display.
Holographic enhanced remote sensing system
NASA Technical Reports Server (NTRS)
Iavecchia, Helene P.; Gaynor, Edwin S.; Huff, Lloyd; Rhodes, William T.; Rothenheber, Edward H.
1990-01-01
The Holographic Enhanced Remote Sensing System (HERSS) consists of three primary subsystems: (1) an Image Acquisition System (IAS); (2) a Digital Image Processing System (DIPS); and (3) a Holographic Generation System (HGS) which multiply exposes a thermoplastic recording medium with sequential 2-D depth slices that are displayed on a Spatial Light Modulator (SLM). Full-parallax holograms were successfully generated by superimposing SLM images onto the thermoplastic and photopolymer. An improved HGS configuration utilizes the phase conjugate recording configuration, the 3-SLM-stacking technique, and the photopolymer. The holographic volume size is currently limited to the physical size of the SLM. A larger-format SLM is necessary to meet the desired 6 inch holographic volume. A photopolymer with an increased photospeed is required to ultimately meet a display update rate of less than 30 seconds. It is projected that the latter two technology developments will occur in the near future. While the IAS and DIPS subsystems were unable to meet NASA goals, an alternative technology is now available to perform the IAS/DIPS functions. Specifically, a laser range scanner can be utilized to build the HGS numerical database of the objects at the remote work site.
Recent advances in a linear micromirror array for high-resolution projection
NASA Astrophysics Data System (ADS)
Picard, Francis; Doucet, Michel; Niall, Keith K.; Larouche, Carl; Savard, Maxime; Crisan, Silviu; Thibault, Simon; Jerominek, Hubert
2004-05-01
The visual displays of contemporary military flight simulators lack adequate definition to represent scenes in basic fast-jet fighter tasks. For example, air-to-air and air-to-ground targets are not projected with sufficient contrast and resolution for a pilot to perceive aspect, aspect rate and object detail at real world slant ranges. Simulator display geometries require the development of ultra-high resolution projectors with greater than 20 megapixel resolution at 60 Hz frame rate. A new micromirror device has been developed to address this requirement; it is able to modulate light intensity in an analog fashion with switching times shorter than 5 μs. When combined with a scanner, a laser and Schlieren optics, a linear array of these flexible micromirrors can display images composed of thousands of lines at a frame rate of 60 Hz. Recent results related to evaluation of this technology for high resolution projection are presented. Alternate operation modes for light modulation with flexible micromirrors are proposed. The related importance of controlling the residual micromirror curvature is discussed and results of experiments investigating the use of the deposition pressure to achieve such control are reported. Moreover, activities aiming at minimizing the micromirror response time and, so doing, maximizing the number of image columns per image frame are discussed. Finally, contrast measurement and estimate of the contrast limit achievable with the flexible micromirror technology are presented. All reported activities support the development of a fully addressable 2000-element micromirror array.
Projection display technologies for the new millennium
NASA Astrophysics Data System (ADS)
Kahn, Frederic J.
2000-04-01
Although analog CRTs continue to enable most of the world's electronic projection displays, such as US consumer rear-projection televisions, discrete-pixel (digital) active-matrix LCD and DLP reflective mirror array projectors have rapidly created large non-consumer markets, primarily for business. Recent advances in the image quality, compactness, and cost effectiveness of digital projectors have the potential to revolutionize major consumer and entertainment markets as well. Digital penetration of the mainstream consumer projection TV market will begin in the year 2000. By 2005 digital projection HDTVs could take the major share of the consumer HDTV projection market. Digital projection is expected to dominate both the consumer HDTV and the cinema market by 2010, resulting in potential shipments for all projection markets exceeding 10 M units per year. Digital projection is improving at a rate 10X faster than analog CRT projectors and 5X faster than PDP flat panels. Continued rapid improvement of digital projection is expected due to its relative immaturity and to the wide diversity of technological improvements being pursued. Key technology enablers are the imaging panels, light sources, and micro-optics. Market shares of single-panel projectors, MEMS panels, LCOS panels, and low-temperature p-Si TFT LCD panel variants are expected to increase.
NASA Astrophysics Data System (ADS)
Min, Jae-Hong; Gelo, Nikolas J.; Jo, Hongki
2016-04-01
The smartphone application newly developed in this study, named RINO, allows measuring absolute dynamic displacements and processing them in real time using state-of-the-art smartphone technologies, such as a high-performance graphics processing unit (GPU), in addition to an already powerful CPU and memory, an embedded high-speed/high-resolution camera, and open-source computer vision libraries. A carefully designed color-patterned target and a user-adjustable crop filter enable accurate and fast image processing, allowing up to 240 fps for complete displacement calculation and real-time display. The performance of the developed smartphone application is experimentally validated, showing accuracy comparable to that of a conventional laser displacement sensor.
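A minimal sketch of the kind of target tracking such an application might perform is given below, assuming the color-patterned target can be isolated by thresholding one color channel and tracked by its centroid; the channel index, threshold, and pixel-to-millimetre scale are hypothetical, and the real app relies on GPU-accelerated computer-vision libraries rather than this NumPy code.

```python
import numpy as np

def target_centroid(frame_rgb, channel=0, thresh=200):
    """Locate the bright patch of the assumed color target in one video frame.

    frame_rgb : (rows, cols, 3) uint8 image.  Returns (row, col) in pixels.
    """
    mask = frame_rgb[:, :, channel] > thresh
    ys, xs = np.nonzero(mask)
    return ys.mean(), xs.mean()

def displacement_series(frames, mm_per_pixel):
    """Per-frame displacement of the target relative to the first frame, in mm."""
    ref = np.array(target_centroid(frames[0]))
    return [(np.array(target_centroid(f)) - ref) * mm_per_pixel for f in frames]
```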
Semi-autonomous wheelchair system using stereoscopic cameras.
Nguyen, Jordan S; Nguyen, Thanh H; Nguyen, Hung T
2009-01-01
This paper is concerned with the design and development of a semi-autonomous wheelchair system using stereoscopic cameras to assist hands-free control technologies for severely disabled people. The stereoscopic cameras capture an image from both the left and right cameras, which are then processed with a Sum of Absolute Differences (SAD) correlation algorithm to establish correspondence between image features in the different views of the scene. This is used to produce a stereo disparity image containing information about the depth of objects away from the camera in the image. A geometric projection algorithm is then used to generate a 3-Dimensional (3D) point map, placing pixels of the disparity image in 3D space. This is then converted to a 2-Dimensional (2D) depth map allowing objects in the scene to be viewed and a safe travel path for the wheelchair to be planned and followed based on the user's commands. This assistive technology utilising stereoscopic cameras has the purpose of automated obstacle detection, path planning and following, and collision avoidance during navigation. Experimental results obtained in an indoor environment displayed the effectiveness of this assistive technology.
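The following is a minimal sketch of SAD block matching for rectified grayscale stereo pairs, illustrating how the disparity image mentioned above can be formed; the block size and disparity search range are assumed values, and the wheelchair system's actual implementation is not reproduced here.

```python
import numpy as np

def sad_disparity(left, right, block=7, max_disp=48):
    """Block-matching stereo: for each pixel, find the horizontal shift that
    minimises the Sum of Absolute Differences between left and right patches.

    left, right : rectified grayscale images of equal shape.
    Returns an integer disparity map; larger disparity means closer object.
    """
    h, w = left.shape
    half = block // 2
    left = left.astype(np.float32)
    right = right.astype(np.float32)
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch = left[y - half:y + half + 1, x - half:x + half + 1]
            costs = [np.abs(patch - right[y - half:y + half + 1,
                                          x - d - half:x - d + half + 1]).sum()
                     for d in range(max_disp)]
            disp[y, x] = int(np.argmin(costs))
    return disp
```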
Slomka, P J; Elliott, E; Driedger, A A
2000-01-01
In nuclear medicine practice, images often need to be reviewed and reports prepared from locations outside the department, usually in the form of hard copy. Although hard-copy images are simple and portable, they do not offer electronic data search and image manipulation capabilities. On the other hand, picture archiving and communication systems or dedicated workstations cannot be easily deployed at numerous locations. To solve this problem, we propose a Java-based remote viewing station (JaRViS) for the reading and reporting of nuclear medicine images using Internet browser technology. JaRViS interfaces to the clinical patient database of a nuclear medicine workstation. All JaRViS software resides on a nuclear medicine department server. The contents of the clinical database can be searched through a browser interface after providing a password. Compressed images with the Java applet and color lookup tables are downloaded on the client side. This paradigm does not require nuclear medicine software to reside on remote computers, which simplifies support and deployment of such a system. To enable versatile reporting of the images, color tables and thresholds can be interactively manipulated and images can be displayed in a variety of layouts. Image filtering, frame grouping (adding frames), and movie display are available. Tomographic mode displays are supported, including gated SPECT. The time to display 14 lung perfusion images in a 128 x 128 matrix, together with the Java applet and color lookup tables, over a V.90 modem is <1 min. SPECT and PET slice reorientation is interactive (<1 s). JaRViS can run on a Windows 95/98/NT or Macintosh platform with Netscape Communicator or Microsoft Internet Explorer. The performance of Java code for bilinear interpolation, cine display, and filtering approaches that of a standard imaging workstation. It is feasible to set up a remote nuclear medicine viewing station using Java and an Internet or intranet browser. Images can be made easily and cost-effectively available to referring physicians and ambulatory clinics within and outside of the hospital, providing a convenient alternative to film media. We also find this system useful for home reporting of emergency procedures such as lung ventilation-perfusion scans or dynamic studies.
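For illustration, the bilinear interpolation that the abstract benchmarks can be sketched as follows; the sketch is in Python/NumPy rather than the applet's Java, and the zoom-by-resampling framing is an assumption made for the example.

```python
import numpy as np

def bilinear_zoom(img, zoom):
    """Resample a 2D image by bilinear interpolation (the operation JaRViS
    benchmarks in Java).  img is a 2D float array, zoom > 0."""
    h, w = img.shape
    ys = np.linspace(0, h - 1, int(h * zoom))
    xs = np.linspace(0, w - 1, int(w * zoom))
    y0 = np.floor(ys).astype(int); x0 = np.floor(xs).astype(int)
    y1 = np.minimum(y0 + 1, h - 1); x1 = np.minimum(x0 + 1, w - 1)
    fy = (ys - y0)[:, None];        fx = (xs - x0)[None, :]
    # Weighted blend of the four neighbouring samples.
    top = img[np.ix_(y0, x0)] * (1 - fx) + img[np.ix_(y0, x1)] * fx
    bot = img[np.ix_(y1, x0)] * (1 - fx) + img[np.ix_(y1, x1)] * fx
    return top * (1 - fy) + bot * fy
```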
NASA Astrophysics Data System (ADS)
Huh, Jae-Won; Yu, Byeong-Hun; Shin, Dong-Myung; Yoon, Tae-Hoon
2015-03-01
Recently, transparent displays have received much attention as one of the next-generation display devices. In particular, active studies on transparent displays using organic light-emitting diodes (OLEDs) are in progress. However, since it is not possible to obtain black color using a transparent OLED, it suffers from poor visibility. This inevitable problem can be solved by using a light shutter. Light shutter technology can be divided into two types: light absorption and light scattering. However, a light shutter based on light absorption cannot block the background image perfectly, and a light shutter based on light scattering cannot provide black color. In this work we demonstrate a light shutter using two liquid crystal (LC) layers, a light absorption layer and a light scattering layer. To realize the light absorption layer and the light scattering layer, we use the planar state of a dye-doped chiral nematic LC (CNLC) cell and the focal-conic state of a long-pitch CNLC cell, respectively. The proposed light shutter device can block the background image perfectly and show black color. We expect that the proposed light shutter can increase the visibility of a transparent display.
Liquid-crystal displays for medical imaging: a discussion of monochrome versus color
NASA Astrophysics Data System (ADS)
Wright, Steven L.; Samei, Ehsan
2004-05-01
A common view is that color displays cannot match the performance of monochrome displays, normally used for diagnostic x-ray imaging. This view is based largely on historical experience with cathode-ray tube (CRT) displays, and does not apply in the same way to liquid-crystal displays (LCDs). Recent advances in color LCD technology have considerably narrowed performance differences with monochrome LCDs for medical applications. The most significant performance advantage of monochrome LCDs is higher luminance, a concern for use under bright ambient conditions. LCD luminance is limited primarily by backlight design, yet to be optimized for color LCDs for medical applications. Monochrome LCDs have inherently higher contrast than color LCDs, but this is not a major advantage under most conditions. There is no practical difference in luminance precision between color and monochrome LCDs, with a slight theoretical advantage for color. Color LCDs can provide visualization and productivity enhancement for medical applications, using digital drive from standard commercial graphics cards. The desktop computer market for color LCDs far exceeds the medical monitor market, with an economy of scale. The performance-to-price ratio for color LCDs is much higher than monochrome, and warrants re-evaluation for medical applications.
An open architecture for medical image workstation
NASA Astrophysics Data System (ADS)
Liang, Liang; Hu, Zhiqiang; Wang, Xiangyun
2005-04-01
To deal with the difficulties of integrating various medical image viewing and processing technologies with a variety of clinical and departmental information systems and, at the same time, to overcome the performance constraints in transferring and processing large-scale and ever-increasing image data in a healthcare enterprise, we design and implement a flexible, usable, and high-performance architecture for medical image workstations. This architecture is not developed for radiology only, but for any workstation in any application environment that may need medical image retrieval, viewing, and post-processing. The architecture contains an infrastructure named Memory PACS and different kinds of image applications built on it. The Memory PACS is in charge of image data caching, pre-fetching, and management. It provides image applications with high-speed image data access and very reliable DICOM network I/O. For the image applications, we use dynamic component technology to separate the performance-constrained modules from the flexibility-constrained modules so that different image viewing or processing technologies can be developed and maintained independently. We also develop a weakly coupled collaboration service through which these image applications can communicate with each other or with third-party applications. We applied this architecture in developing our product line, and it works well. At our clinical sites, this architecture is applied not only in the Radiology Department but also in Ultrasonic, Surgery, Clinics, and the Consultation Center. Given that each department has its particular requirements and business routines, and that they all have different image processing technologies and image display devices, our workstations are still able to maintain high performance and high usability.
Active confocal imaging for visual prostheses
Jung, Jae-Hyun; Aloni, Doron; Yitzhaky, Yitzhak; Peli, Eli
2014-01-01
There are encouraging advances in prosthetic vision for the blind, including retinal and cortical implants, and other “sensory substitution devices” that use tactile or electrical stimulation. However, they all have low resolution, limited visual field, and can display only few gray levels (limited dynamic range), severely restricting their utility. To overcome these limitations, image processing or the imaging system could emphasize objects of interest and suppress the background clutter. We propose an active confocal imaging system based on light-field technology that will enable a blind user of any visual prosthesis to efficiently scan, focus on, and “see” only an object of interest while suppressing interference from background clutter. The system captures three-dimensional scene information using a light-field sensor and displays only an in-focused plane with objects in it. After capturing a confocal image, a de-cluttering process removes the clutter based on blur difference. In preliminary experiments we verified the positive impact of confocal-based background clutter removal on recognition of objects in low resolution and limited dynamic range simulated phosphene images. Using a custom-made multiple-camera system, we confirmed that the concept of a confocal de-cluttered image can be realized effectively using light field imaging. PMID:25448710
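A highly simplified sketch of blur-based de-cluttering is shown below: a Laplacian response is used as a per-pixel sharpness proxy, and regions below a threshold are suppressed as background clutter. The metric, the threshold, and the single refocused-plane input are assumptions for illustration and are not the light-field pipeline described in the paper.

```python
import numpy as np

def declutter(refocused, thresh_ratio=0.2):
    """Keep only in-focus structure in a confocal (refocused) image.

    refocused : 2D float image refocused on the plane of interest.
    Regions whose local sharpness falls below a fraction of the maximum
    are treated as blurred background clutter and zeroed out.
    """
    # Discrete Laplacian via shifted copies; strong response = sharp edges.
    lap = (-4 * refocused
           + np.roll(refocused, 1, 0) + np.roll(refocused, -1, 0)
           + np.roll(refocused, 1, 1) + np.roll(refocused, -1, 1))
    sharpness = lap ** 2
    mask = sharpness > thresh_ratio * sharpness.max()
    return refocused * mask
```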
European training network on full-parallax imaging (Conference Presentation)
NASA Astrophysics Data System (ADS)
Martínez-Corral, Manuel; Saavedra, Genaro
2017-05-01
Current displays are far from truly recreating visual reality. This requires a full-parallax display that can reproduce the radiance field emanating from real scenes. The development of such technology will require a new generation of researchers trained both in the physics and in the biology of human vision. The European Training Network on Full-Parallax Imaging (ETN-FPI) aims at developing this new generation. Under H2020 funding, ETN-FPI brings together 8 beneficiaries and 8 partner organizations from five EU countries with the aim of training 15 talented pre-doctoral students to become future research leaders in this area. In this contribution we will explain the main objectives of the network, and specifically the advances obtained at the University of Valencia.
Perceived image quality for autostereoscopic holograms in healthcare training
NASA Astrophysics Data System (ADS)
Goldiez, Brian; Abich, Julian; Carter, Austin; Hackett, Matthew
2017-03-01
The current state of dynamic light field holography requires further empirical investigation to ultimately advance this developing technology. This paper describes a user-centered design approach for gaining insight into the features most important to clinical personnel using emerging dynamic holographic displays. The approach describes the generation of a high quality holographic model of a simulated traumatic amputation above the knee using 3D scanning. Using that model, a set of static holographic prints will be created varying in color or monochrome, contrast ratio, and polygon density. Leveraging methods from image quality research, the goal for this paper is to describe an experimental approach wherein participants are asked to provide feedback regarding the elements previously mentioned in order to guide the ongoing evolution of holographic displays.
Helmet-Mounted Displays: Sensation, Perception and Cognition Issues
2009-01-01
[Fragmentary search-result excerpt] The report describes a "head-borne vision enhancement" system (an HMD), one integral part of a larger "system-of-systems," that provides fused I2/IR sensor imagery, and notes that microwave, radar, I2, infrared (IR), and other imaging sensors extend the "seeing" range of the human eye. Cited sources in the excerpt include MetaVR, Inc. (http://www.metavr.com/technology/papers/syntheticvision.html), Helmetag, A., Halbig, C., Kubbat, W., and Schmidt, R. (1999), and a U.S. Army Natick reference.
General consumer communication tools for improved image management and communication in medicine.
Rosset, Chantal; Rosset, Antoine; Ratib, Osman
2005-12-01
We elected to explore new technologies emerging on the general consumer market that can improve and facilitate image and data communication in medical and clinical environments. These new technologies, developed for communication and storage of data, can improve user convenience and facilitate the communication and transport of images and related data beyond the usual limits and restrictions of a traditional picture archiving and communication system (PACS) network. We specifically tested and implemented three new technologies provided on Apple computer platforms. (1) We adopted the iPod, a portable MP3 player with hard-disk storage, to easily and quickly move large numbers of DICOM images. (2) We adopted iChat, a videoconference and instant-messaging software, to transmit DICOM images in real time to a distant computer for teleradiology conferencing. (3) Finally, we developed a direct secure interface to the iDisk service, a file-sharing service based on WebDAV technology, to send and share DICOM files between distant computers. These three technologies were integrated into a new open-source image navigation and display software called OsiriX, allowing manipulation and communication of multimodality and multidimensional DICOM image data sets. This software is freely available as an open-source project at http://homepage.mac.com/rossetantoine/OsiriX. Our experience showed that the implementation of these technologies allowed us to significantly enhance the existing PACS with valuable new features without any additional investment or the need for complex extensions of our infrastructure. The added features, such as teleradiology, secure and convenient image and data communication, and the use of external data storage services, open the gate to a much broader extension of our imaging infrastructure to the outside world.
Can laptops be left inside passenger bags if motion imaging is used in X-ray security screening?
Mendes, Marcia; Schwaninger, Adrian; Michel, Stefan
2013-01-01
This paper describes a study where a new X-ray machine for security screening featuring motion imaging (i.e., 5 views of a bag are shown as an image sequence) was evaluated and compared to single view imaging available on conventional X-ray screening systems. More specifically, it was investigated whether with this new technology X-ray screening of passenger bags could be enhanced to such an extent that laptops could be left inside passenger bags, without causing a significant impairment in threat detection performance. An X-ray image interpretation test was created in four different versions, manipulating the factors packing condition (laptop and bag separate vs. laptop in bag) and display condition (single vs. motion imaging). There was a highly significant and large main effect of packing condition. When laptops and bags were screened separately, threat item detection was substantially higher. For display condition, a medium effect was observed. Detection could be slightly enhanced through the application of motion imaging. There was no interaction between display and packing condition, implying that the high negative effect of leaving laptops in passenger bags could not be fully compensated by motion imaging. Additional analyses were carried out to examine effects depending on different threat categories (guns, improvised explosive devices, knives, others), the placement of the threat items (in bag vs. in laptop) and viewpoint (easy vs. difficult view). In summary, although motion imaging provides an enhancement, it is not strong enough to allow leaving laptops in bags for security screening. PMID:24151457
Full-color reflective cholesteric liquid crystal display
NASA Astrophysics Data System (ADS)
Huang, Xiao-Yang; Khan, Asad A.; Davis, Donald J.; Podojil, Gregg M.; Jones, Chad M.; Miller, Nick; Doane, J. William
1999-03-01
We report a full-color 1/4 VGA reflective cholesteric display with 4096 colors. The display can deliver a brightness approaching 40 percent reflected luminance, far exceeding all other reflective technologies. With its zero-voltage bistability, images can be stored for days and months without any power consumption. This property can significantly extend battery life. The capability of displaying full-color complex graphics and images is a must in order to establish a market position in this multimedia age. Color is achieved by stacking RGB cells. The top layer is blue with right chirality, the middle layer is green with left chirality, and the bottom layer is red with right chirality. The choice of opposite chirality prevents loss in the green and red spectra from the blue layer on top. We also adjusted the thickness of each layer to achieve color balance. We implement gray scale in each layer with pulse width modulation. This modulation method is the best choice considering its lower driver cost and simpler structure with fewer crosstalk problems. Various drive schemes and modulation methods will be discussed in the conference.
Zawawi, Khalid H; Malki, Ghadah A; Al-Zahrani, Mohammad S; Alkhiary, Yaser M
2013-01-01
Aim: The aim of this study was to evaluate the influence of education on female college students' perception of the effect of lip position and gingival display upon smiling and esthetics. Methods: A photograph of a smiling subject was altered to show varying degrees of gingival display. Female students, who were studying in different colleges, assessed a total of five images using a numerical rating scale. Results: A total of 440 college students from eight educational faculties (dentistry, dental assistants, medicine, medical technology, nursing, science, arts, and pharmacology) participated in this study. There was no difference found between students' ratings of the altered images (P<0.05). The perception of a gummy smile was found to be similar among the participants. There was agreement among all participants that 2 mm of gingival display was the most attractive smile, while a 4 mm covering of the teeth by the upper lip was the least attractive. Conclusion: Educational influence did not have an effect on the perception of a gummy smile. PMID:24204173
Leveraging Open Standards and Technologies to Search and Display Planetary Image Data
NASA Astrophysics Data System (ADS)
Rose, M.; Schauer, C.; Quinol, M.; Trimble, J.
2011-12-01
Mars and the Moon have both been visited by multiple NASA spacecraft. A large number of images and other data have been gathered by the spacecraft and are publicly available in NASA's Planetary Data System. Through a collaboration with Google, Inc., the User Centered Technologies group at NASA Ames Research Center has developed a tool for searching and browsing among images from multiple Mars and Moon missions. Development of this tool was facilitated by the use of several open technologies and standards. First, an open-source full-text search engine is used both to search place names on the target and to find images matching a geographic region. Second, the published API of the Google Earth browser plugin is used to geolocate the images on a virtual globe and allow the user to navigate on the globe to see related images. The structure of the application also employs standard protocols and services. The back end is exposed as RESTful APIs, which could be reused by other client systems in the future. Further, the communication between the front-end and back-end portions of the system utilizes open data standards, including XML and KML (Keyhole Markup Language), for representation of textual and geographic data. The creation of the search index was facilitated by reuse of existing, publicly available metadata, including the Gazetteer of Planetary Nomenclature from the USGS, available in KML format, and the image metadata was reused from standards-compliant archives in the Planetary Data System. The system also supports collaboration with other tools by allowing export of search results in KML and the ability to display those results in the Google Earth desktop application. We will demonstrate the search and visualization capabilities of the system, with emphasis on how the system facilitates reuse of data and services through the adoption of open standards.
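To illustrate the KML export step, the sketch below serializes a handful of search results as KML placemarks that Google Earth can display; the result fields ('name', 'lat', 'lon', 'url') and the example values are assumed for the illustration, not taken from the actual service.

```python
from xml.sax.saxutils import escape

def results_to_kml(results):
    """Serialize search results as KML placemarks for display in Google Earth.

    results : iterable of dicts with 'name', 'lat', 'lon', 'url' keys
              (field names are assumed for illustration).
    """
    placemarks = "".join(
        "<Placemark><name>{}</name>"
        "<description><![CDATA[<a href=\"{}\">image</a>]]></description>"
        "<Point><coordinates>{},{},0</coordinates></Point></Placemark>".format(
            escape(r["name"]), r["url"], r["lon"], r["lat"])
        for r in results)
    # KML coordinates are ordered longitude, latitude, altitude.
    return ('<?xml version="1.0" encoding="UTF-8"?>'
            '<kml xmlns="http://www.opengis.net/kml/2.2"><Document>'
            + placemarks + "</Document></kml>")

print(results_to_kml([{"name": "Example crater", "lat": -2.05, "lon": -5.50,
                       "url": "http://example.com/img.jpg"}]))
```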
An Operationally Based Vision Assessment Simulator for Domes
NASA Technical Reports Server (NTRS)
Archdeacon, John; Gaska, James; Timoner, Samson
2012-01-01
The Operational Based Vision Assessment (OBVA) simulator was designed and built by NASA and the United States Air Force (USAF) to provide the Air Force School of Aerospace Medicine (USAFSAM) with a scientific testing laboratory to study human vision and testing standards in an operationally relevant environment. This paper describes the general design objectives and implementation characteristics of the simulator visual system created to meet these requirements. A key design objective for the OBVA research simulator is to develop a real-time computer image generator (IG) and display subsystem that can display and update at 120 frames per second (the design target), or at a minimum 60 frames per second, with minimal transport delay, using commercial off-the-shelf (COTS) technology. There are three key parts of the OBVA simulator described in this paper: i) the real-time computer image generator, ii) the various COTS technology used to construct the simulator, and iii) the spherical dome display and real-time distortion correction subsystem. We describe the various issues, possible COTS solutions, and remaining problem areas identified by NASA and the USAF while designing and building the simulator for future vision research. We also describe the critically important relationship of the physical display components, including distortion correction for the dome, consistent with the objective of minimizing latency in the system. The performance of the automatic calibration system used in the dome is also described. Various recommendations for possible future implementations are also discussed.
Help for the Visually Impaired
NASA Technical Reports Server (NTRS)
1995-01-01
The Low Vision Enhancement System (LVES) is a video headset that offers people with low vision a view of their surroundings equivalent to the image on a five-foot television screen four feet from the viewer. It will not make the blind see but for many people with low vision, it eases everyday activities such as reading, watching TV and shopping. LVES was developed over almost a decade of cooperation between Stennis Space Center, the Wilmer Eye Institute of the Johns Hopkins Medical Institutions, the Department of Veteran Affairs, and Visionics Corporation. With the aid of Stennis scientists, Wilmer researchers used NASA technology for computer processing of satellite images and head-mounted vision enhancement systems originally intended for the space station. The unit consists of a head-mounted video display, three video cameras, and a control unit for the cameras. The cameras feed images to the video display in the headset.
HyperGLOB/Freedom: Preparing Student Designers for a New Media.
ERIC Educational Resources Information Center
Slawson, Brian
The HyperGLOB project introduced university-level graphic design students to interactive multimedia. This technology involves using the personal computer to display and manipulate a variety of electronic media simultaneously (combining elements of text and speech, music and sound, still images, motion video, and animated graphics) and allows…
Iniaghe, Paschal O; Adie, Gilbert U
2015-11-01
Cathode ray tubes are image display units found in computer monitors and televisions. In recent years, cathode ray tubes have been generated as waste owing to the introduction of newer and more advanced image display technologies, such as liquid crystal displays and high definition televisions, among others. Generation and subsequent disposal of end-of-life cathode ray tubes presents a challenge owing to increasing volumes and the high lead content embedded in the funnel and neck sections of the glass. Disposal in landfills and open dumping are environmentally unsound practices, given the potential for large-scale contamination of environmental media by toxic metals leaching from the glass. Mitigating such environmental contamination will require sound management strategies that are environmentally friendly and economically feasible. This review covers existing and emerging management practices for end-of-life cathode ray tubes. An in-depth analysis of available technologies (glass smelting, detoxification of cathode ray tube glass, lead extraction from cathode ray tube glass) revealed that most of the techniques are environmentally friendly but are largely confined to laboratory scale, are often limited by high implementation costs, or generate secondary pollutants, while the closed-loop method has become antiquated. However, recycling in cementitious systems (cement mortar and concrete) gives an added advantage in terms of the quantity of cathode ray tube glass that can be recycled at a given time, with minimal environmental and economic implications. With significant quantities of waste cathode ray tube glass being generated globally, cementitious systems could be an economically and environmentally acceptable sound management practice for cathode ray tube glass where other technologies may not be applicable. © The Author(s) 2015.
Membrane-mirror-based autostereoscopic display for tele-operation and telepresence applications
NASA Astrophysics Data System (ADS)
McKay, Stuart; Mair, Gordon M.; Mason, Steven; Revie, Kenneth
2000-05-01
An autostereoscopic display for telepresence and tele-operation applications has been developed at the University of Strathclyde in Glasgow, Scotland. The research is a collaborative effort between the Imaging Group and the Transparent Telepresence Research Group, both based at Strathclyde. A key component of the display is the directional screen; a 1.2-m diameter Stretchable Membrane Mirror is currently used. This patented technology enables large-diameter, small f-number mirrors to be produced at a fraction of the cost of conventional optics. Another key element of the present system is an anthropomorphic and anthropometric stereo camera sensor platform. Thus, in addition to mirror development, research areas include sensor platform design focused on sight, hearing, and smell, telecommunications, display systems for visual, aural, and other senses, tele-operation, and augmented reality. The sensor platform is located at the remote site and transmits live video to the home location. Applications for this technology are as diverse as they are numerous, ranging from bomb disposal and other hazardous environment applications to tele-conferencing, sales, education, and entertainment.
Calibration, reconstruction, and rendering of cylindrical millimeter-wave image data
NASA Astrophysics Data System (ADS)
Sheen, David M.; Hall, Thomas E.
2011-05-01
Cylindrical millimeter-wave imaging systems and technology have been under development at the Pacific Northwest National Laboratory (PNNL) for several years. This technology has been commercialized, and systems are currently being deployed widely across the United States and internationally. These systems are effective at screening for concealed items of all types; however, new sensor designs, image reconstruction techniques, and image rendering algorithms could potentially improve performance. At PNNL, a number of specific techniques have been developed recently to improve cylindrical imaging methods including wideband techniques, combining data from full 360-degree scans, polarimetric imaging techniques, calibration methods, and 3-D data visualization techniques. Many of these techniques exploit the three-dimensionality of the cylindrical imaging technique by optimizing the depth resolution of the system and using this information to enhance detection. Other techniques, such as polarimetric methods, exploit scattering physics of the millimeter-wave interaction with concealed targets on the body. In this paper, calibration, reconstruction, and three-dimensional rendering techniques will be described that optimize the depth information in these images and the display of the images to the operator.
Dynamic "inline" images: context-sensitive retrieval and integration of images into Web documents.
Kahn, Charles E
2008-09-01
Integrating relevant images into web-based information resources adds value for research and education. This work sought to evaluate the feasibility of using "Web 2.0" technologies to dynamically retrieve and integrate pertinent images into a radiology web site. An online radiology reference of 1,178 textual web documents was selected as the set of target documents. The ARRS GoldMiner image search engine, which incorporated 176,386 images from 228 peer-reviewed journals, retrieved images on demand and integrated them into the documents. At least one image was retrieved in real-time for display as an "inline" image gallery for 87% of the web documents. Each thumbnail image was linked to the full-size image at its original web site. Review of 20 randomly selected Collaborative Hypertext of Radiology documents found that 69 of 72 displayed images (96%) were relevant to the target document. Users could click on the "More" link to search the image collection more comprehensively and, from there, link to the full text of the article. A gallery of relevant radiology images can be inserted easily into web pages on any web server. Indexing by concepts and keywords allows context-aware image retrieval, and searching by document title and subject metadata yields excellent results. These techniques allow web developers to incorporate easily a context-sensitive image gallery into their documents.
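As a rough illustration of the retrieval step, the sketch below queries a hypothetical JSON image-search endpoint for a document's keywords and returns thumbnail/full-size URL pairs for an inline gallery. The endpoint, query parameters, and response fields are assumptions; the actual GoldMiner integration is not shown here.

```python
# Sketch of context-sensitive image retrieval for a web document. The endpoint and
# its query/response format are hypothetical stand-ins, not the ARRS GoldMiner API.
import json
import urllib.parse
import urllib.request

def fetch_inline_gallery(document_keywords, max_images=6):
    query = urllib.parse.urlencode({"q": " ".join(document_keywords), "n": max_images})
    url = f"https://image-search.example.org/api?{query}"  # hypothetical endpoint
    with urllib.request.urlopen(url) as resp:
        hits = json.load(resp)
    # Return thumbnail/full-size pairs ready to be templated into an "inline" gallery.
    return [(h["thumbnail_url"], h["image_url"]) for h in hits.get("results", [])]

# Example (would require a live endpoint):
# gallery = fetch_inline_gallery(["pneumothorax", "chest radiograph"])
```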
Past, present, and future of sublimation transfer imaging
NASA Astrophysics Data System (ADS)
Akada, Masanori
1990-07-01
Sony's announcement of the Mavica system shook the world in 1981. In this new non-photographic imaging system, the image is acquired with a CCD, converted into an electrical image signal, stored on magnetic recording media, displayed on a CRT, and printed on a special sheet. To produce the hard copy, Sublimation Transfer technology was developed. That announcement brought about worldwide R&D on competitive color imaging systems: ink-jet, wax transfer, Sublimation Transfer (ST), and electrophotography. In spite of much effort, most of these proved insufficient for producing a good hard copy. After developing suitable ST recording media, Dai Nippon Printing started its ST recording media business in 1986, the first manufacturing-scale production and sale of ST recording media in the world. Nowadays ST technology is known for its advantages: high image quality, consistency from copy to copy, smooth tone reproduction from highlight to maximum density, and ease of use. In this paper, the progress of ST recording media and the present situation and future markets of the media are presented.
Solid models for CT/MR image display: accuracy and utility in surgical planning
NASA Astrophysics Data System (ADS)
Mankovich, Nicholas J.; Yue, Alvin; Ammirati, Mario; Kioumehr, Farhad; Turner, Scott
1991-05-01
Medical imaging can now take wider advantage of computer-aided manufacturing through rapid prototyping technologies (RPT) such as stereolithography, laser sintering, and laminated object manufacturing to directly produce solid models of patient anatomy from processed CT and MR images. While conventional surgical planning relies on consultation with the radiologist combined with direct reading and measurement of CT and MR studies, 3-D surface and volumetric display workstations are providing a more easily interpretable view of patient anatomy. RPT can provide the surgeon with a life-size model of patient anatomy constructed layer by layer with full internal detail. Although this life-size anatomic model is more easily understandable by the surgeon, its accuracy and true surgical utility remain untested. We have developed a prototype image processing and model fabrication system based on stereolithography, which provides the neurosurgeon with models of the skull base. Parallel comparison of the model with the original thresholded CT data and with a CRT-displayed surface rendering showed that both have an accuracy of 99.6 percent. Because of the ease of exact voxel localization on the model, its precision was high, with a measurement standard deviation of 0.71 percent. Measurements on the surface-rendered display proved more difficult to localize exactly and yielded a standard deviation of 2.37 percent. This paper presents our accuracy study and discusses ways of assessing the quality of neurosurgical plans when 3-D models are made available as planning tools.
Development of a stereo 3-D pictorial primary flight display
NASA Technical Reports Server (NTRS)
Nataupsky, Mark; Turner, Timothy L.; Lane, Harold; Crittenden, Lucille
1989-01-01
Computer-generated displays are becoming increasingly popular in aerospace applications. The use of stereo 3-D technology provides an opportunity to present depth perceptions which otherwise might be lacking. In addition, the third dimension could also be used as an additional dimension along which information can be encoded. Historically, stereo 3-D displays have been used in entertainment, in experimental facilities, and in the handling of hazardous waste. In the last example, the source of the stereo images generally has been remotely controlled television camera pairs. The development of a stereo 3-D pictorial primary flight display used in a flight simulation environment is described. The applicability of stereo 3-D displays for aerospace crew stations to meet the anticipated needs of the 2000 to 2020 time frame is investigated. Although the actual equipment that could be used in an aerospace vehicle is not currently available, the lab research is necessary to determine where stereo 3-D enhances the display of information and how the displays should be formatted.
Windsor, J S; Rodway, G W; Middleton, P M; McCarthy, S
2006-01-01
Objective The emergence of a new generation of “point‐and‐shoot” digital cameras offers doctors a compact, portable and user‐friendly solution to the recording of highly detailed digital photographs and video images. This work highlights the use of such technology, and provides information for those who wish to record, store and display their own medical images. Methods Over a 3‐month period, a digital camera was carried by a doctor in a busy, adult emergency department and used to record a range of clinical images that were subsequently transferred to a computer database. Results In total, 493 digital images were recorded, of which 428 were photographs and 65 were video clips. These were successfully used for teaching purposes, publications and patient records. Conclusions This study highlights the importance of informed consent, the selection of a suitable package of digital technology and the role of basic photographic technique in developing a successful digital database in a busy clinical environment. PMID:17068281
Messier, Erik; Wilcox, Jascha; Dawson-Elli, Alexander; Diaz, Gabriel; Linte, Cristian A
2016-01-01
To inspire young students (grades 6-12) to become medical practitioners and biomedical engineers, it is necessary to expose them to key concepts of the field in a way that is both exciting and informative. Recent advances in medical image acquisition, manipulation, processing, visualization, and display have revolutionized the approach in which the human body and internal anatomy can be seen and studied. It is now possible to collect 3D, 4D, and 5D medical images of patient-specific data, and display that data to the end user using consumer-level 3D stereoscopic display technology. Despite such advancements, traditional 2D modes of content presentation such as textbooks and slides are still the standard didactic equipment used to teach young students anatomy. More sophisticated methods of display can help to elucidate the complex 3D relationships between structures that are so often missed when viewing only 2D media, and can instill in students an appreciation for the interconnection between medicine and technology. Here we describe the design, implementation, and preliminary evaluation of a 3D virtual anatomy puzzle dedicated to helping users learn the anatomy of various organs and systems by manipulating 3D virtual data. The puzzle currently comprises several components of the human anatomy and can be easily extended to include additional organs and systems. The 3D virtual anatomy puzzle game was implemented and piloted using three display paradigms - a traditional 2D monitor, a 3D TV with active shutter glasses, and the Oculus Rift DK2 - as well as two different user interaction devices - a space mouse and traditional keyboard controls.
NASA Astrophysics Data System (ADS)
Pansing, Craig W.; Hua, Hong; Rolland, Jannick P.
2005-08-01
Head-mounted display (HMD) technologies find a variety of applications in the field of 3D virtual and augmented environments, 3D scientific visualization, as well as wearable displays. While most of the current HMDs use head pose to approximate line of sight, we propose to investigate approaches and designs for integrating eye tracking capability into HMDs from a low-level system design perspective and to explore schemes for optimizing system performance. In this paper, we particularly propose to optimize the illumination scheme, which is a critical component in designing an eye tracking-HMD (ET-HMD) integrated system. An optimal design can improve not only eye tracking accuracy, but also robustness. Using LightTools, we present the simulation of a complete eye illumination and imaging system using an eye model along with multiple near infrared LED (IRLED) illuminators and imaging optics, showing the irradiance variation of the different eye structures. The simulation of dark pupil effects along with multiple 1st-order Purkinje images will be presented. A parametric analysis is performed to investigate the relationships between the IRLED configurations and the irradiance distribution at the eye, and a set of optimal configuration parameters is recommended. The analysis will be further refined by actual eye image acquisition and processing.
MO-C-BRCD-03: The Role of Informatics in Medical Physics and Vice Versa.
Andriole, K
2012-06-01
Like Medical Physics, Imaging Informatics encompasses concepts touching every aspect of the imaging chain from image creation, acquisition, management and archival, to image processing, analysis, display and interpretation. The two disciplines are in fact quite complementary, with similar goals to improve the quality of care provided to patients using an evidence-based approach, to assure safety in the clinical and research environments, to facilitate efficiency in the workplace, and to accelerate knowledge discovery. Use cases describing several areas of informatics activity will be given to illustrate current limitations that would benefit from medical physicist participation, and conversely areas in which informaticists may contribute to the solution. Topics to be discussed include radiation dose monitoring, process management and quality control, display technologies, business analytics techniques, and quantitative imaging. Quantitative imaging is increasingly becoming an essential part of biomedical research as well as being incorporated into clinical diagnostic activities. Referring clinicians are asking for more objective information to be gleaned from the imaging tests that they order so that they may make the best clinical management decisions for their patients. Medical Physicists may be called upon to identify existing issues as well as develop, validate and implement new approaches and technologies to help move the field further toward quantitative imaging methods for the future. Biomedical imaging informatics tools and techniques such as standards, integration, data mining, cloud computing and new systems architectures, ontologies and lexicons, data visualization and navigation tools, and business analytics applications can be used to overcome some of the existing limitations. Learning objectives: 1. Describe what is meant by Medical Imaging Informatics and understand why the medical physicist should care. 2. Identify existing limitations in information technologies with respect to Medical Physics, and conversely see how Informatics may assist the medical physicist in filling some of the current gaps in their activities. 3. Understand general informatics concepts and areas of investigation including imaging and workflow standards, systems integration, computing architectures, ontologies, data mining and business analytics, data visualization and human-computer interface tools, and the importance of quantitative imaging for the future of Medical Physics and Imaging Informatics. 4. Become familiar with on-going efforts to address current challenges facing future research into and clinical implementation of quantitative imaging applications. © 2012 American Association of Physicists in Medicine.
NASA Astrophysics Data System (ADS)
Bradu, Adrian; Kapinchev, Konstantin; Barnes, Fred; Garway-Heath, David F.; Rajendram, Ranjan; Keane, Pearce; Podoleanu, Adrian G.
2015-03-01
Recently, we introduced a novel Optical Coherence Tomography (OCT) method, termed Master Slave OCT (MS-OCT), specialized for delivering en-face images. This method uses principles of spectral domain interferometry in two stages. MS-OCT operates like a time domain OCT, selecting signals from a chosen depth only while scanning the laser beam across the eye. Time domain OCT allows real-time production of an en-face image, although relatively slowly. As a major advance, the Master Slave method allows collection of signals from any number of depths, as required by the user. This tremendous advantage in terms of parallel provision of data from numerous depths cannot be fully exploited using commodity multi-core processors alone, since the data processing required to generate images at multiple depths simultaneously exceeds their capabilities. We compare here the major improvement in processing and display brought about by using graphics cards. We demonstrate images obtained with a swept source at 100 kHz, which determines an acquisition time of Ta = 1.6 s for a frame of 200 × 200 pixels. By the end of scanning the acquired frame, using our computing capacity, 4 simultaneous en-face images could be created in T = 0.8 s. We demonstrate that by using graphics cards, 32 en-face images can be displayed in Td = 0.3 s. Other faster swept source engines can be used with no difference in terms of Td. With 32 images (or more), volumes can be created for 3D display using en-face images, as opposed to the current technology where volumes are created using cross-section OCT images.
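A simplified way to see why graphics cards help is that producing many en-face images at once collapses into one large matrix operation. The sketch below, with assumed array sizes and a plain dot-product comparison standing in for the real Master Slave processing, only illustrates that parallel structure.

```python
# Simplified, vectorized illustration of producing many en-face images in parallel
# from channeled spectra, in the spirit of the Master Slave comparison operation.
# Array shapes and the plain dot-product comparison are assumptions for illustration;
# the real MS-OCT processing and its GPU implementation are more involved.
import numpy as np

rng = np.random.default_rng(0)
nx, ny, nk, n_depths = 200, 200, 512, 32   # frame size and spectral samples (illustrative)

spectra = rng.standard_normal((nx * ny, nk))      # one channeled spectrum per (x, y)
masters = rng.standard_normal((n_depths, nk))     # one stored "master" per chosen depth

# Comparing every spectrum against every master collapses to one matrix product,
# which is what makes GPU offloading attractive.
enface = np.abs(spectra @ masters.T).reshape(nx, ny, n_depths)
print(enface.shape)  # (200, 200, 32): one en-face image per requested depth
```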
Lin, Meng Kuan; Nicolini, Oliver; Waxenegger, Harald; Galloway, Graham J; Ullmann, Jeremy F P; Janke, Andrew L
2013-01-01
Digital Imaging Processing (DIP) requires data extraction and output from a visualization tool to be consistent. Data handling and transmission between the server and a user is a systematic process in service interpretation. The use of integrated medical services for management and viewing of imaging data in combination with a mobile visualization tool can be greatly facilitated by data analysis and interpretation. This paper presents an integrated mobile application and DIP service, called M-DIP. The objective of the system is to (1) automate the direct data tiling, conversion, pre-tiling of brain images from Medical Imaging NetCDF (MINC), Neuroimaging Informatics Technology Initiative (NIFTI) to RAW formats; (2) speed up querying of imaging measurement; and (3) display high-level of images with three dimensions in real world coordinates. In addition, M-DIP provides the ability to work on a mobile or tablet device without any software installation using web-based protocols. M-DIP implements three levels of architecture with a relational middle-layer database, a stand-alone DIP server, and a mobile application logic middle level realizing user interpretation for direct querying and communication. This imaging software has the ability to display biological imaging data at multiple zoom levels and to increase its quality to meet users' expectations. Interpretation of bioimaging data is facilitated by an interface analogous to online mapping services using real world coordinate browsing. This allows mobile devices to display multiple datasets simultaneously from a remote site. M-DIP can be used as a measurement repository that can be accessed by any network environment, such as a portable mobile or tablet device. In addition, this system and combination with mobile applications are establishing a virtualization tool in the neuroinformatics field to speed interpretation services.
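The pre-tiling step described above can be illustrated with a small sketch. The tile size, number of zoom levels, and naive stride-based downsampling below are assumptions for illustration, not the M-DIP implementation.

```python
# Sketch of pre-tiling a 2D slice into map-style zoom levels, analogous to what a
# tiling service might do before serving a mobile viewer. Tile size and the naive
# stride-based downsampling are illustrative choices, not the M-DIP implementation.
import numpy as np

def build_tile_pyramid(image, tile=256, levels=3):
    pyramid = {}
    for level in range(levels):
        scaled = image[::2**level, ::2**level]          # coarse downsample per level
        tiles = {}
        for ty in range(0, scaled.shape[0], tile):
            for tx in range(0, scaled.shape[1], tile):
                tiles[(ty // tile, tx // tile)] = scaled[ty:ty+tile, tx:tx+tile]
        pyramid[level] = tiles
    return pyramid

slice_2d = np.zeros((1024, 1024), dtype=np.uint8)       # stand-in for one brain slice
pyr = build_tile_pyramid(slice_2d)
print({lvl: len(t) for lvl, t in pyr.items()})          # tiles per zoom level
```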
Assessment of OLED displays for vision research.
Cooper, Emily A; Jiang, Haomiao; Vildavski, Vladimir; Farrell, Joyce E; Norcia, Anthony M
2013-10-23
Vision researchers rely on visual display technology for the presentation of stimuli to human and nonhuman observers. Verifying that the desired and displayed visual patterns match along dimensions such as luminance, spectrum, and spatial and temporal frequency is an essential part of developing controlled experiments. With cathode-ray tubes (CRTs) becoming virtually unavailable on the commercial market, it is useful to determine the characteristics of newly available displays based on organic light emitting diode (OLED) panels to determine how well they may serve to produce visual stimuli. This report describes a series of measurements summarizing the properties of images displayed on two commercially available OLED displays: the Sony Trimaster EL BVM-F250 and PVM-2541. The results show that the OLED displays have large contrast ratios, wide color gamuts, and precise, well-behaved temporal responses. Correct adjustment of the settings on both models produced luminance nonlinearities that were well predicted by a power function ("gamma correction"). Both displays have adjustable pixel independence and can be set to have little to no spatial pixel interactions. OLED displays appear to be a suitable, or even preferable, option for many vision research applications.
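The reported power-function ("gamma") behaviour can be checked with a simple log-log fit of measured luminance against input level. The sketch below uses synthetic measurements; real characterization would use photometer readings of the display under test.

```python
# Sketch of checking whether measured luminance follows a power function ("gamma"),
# as reported for the OLED monitors. The measurements below are synthetic examples.
import numpy as np

digital = np.linspace(0.05, 1.0, 20)                 # normalized input levels
measured = 100.0 * digital ** 2.2                    # synthetic luminance, cd/m^2

# Fit log(L) = log(a) + gamma * log(d); the slope recovers the exponent.
slope, intercept = np.polyfit(np.log(digital), np.log(measured), 1)
print(f"estimated gamma = {slope:.2f}, scale = {np.exp(intercept):.1f} cd/m^2")
```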
NASA Technical Reports Server (NTRS)
Dubin, Matthew B. (Inventor); Larson, Brent D. (Inventor); Kolosowsky, Aleksandra (Inventor)
2006-01-01
A modular and scalable seamless tiled display apparatus includes multiple display devices, a screen, and multiple lens assemblies. Each display device is subdivided into multiple sections, and each section is configured to display a sectional image. One of the lens assemblies is optically coupled to each of the sections of each of the display devices to project the sectional image displayed on that section onto the screen. The multiple lens assemblies are configured to merge the projected sectional images to form a single tiled image. The projected sectional images may be merged on the screen by magnifying and shifting the images in an appropriate manner. The magnification and shifting of these images eliminates any visual effect on the tiled display that may result from dead-band regions defined between each pair of adjacent sections on each display device, and due to gaps between multiple display devices.
Virtual reality: a reality for future military pilotage?
NASA Astrophysics Data System (ADS)
McIntire, John P.; Martinsen, Gary L.; Marasco, Peter L.; Havig, Paul R.
2009-05-01
Virtual reality (VR) systems provide exciting new ways to interact with information and with the world. The visual VR environment can be synthetic (computer generated) or be an indirect view of the real world using sensors and displays. With the potential opportunities of a VR system, the question arises about what benefits or detriments a military pilot might incur by operating in such an environment. Immersive and compelling VR displays could be accomplished with an HMD (e.g., imagery on the visor), large area collimated displays, or by putting the imagery on an opaque canopy. But what issues arise when, instead of viewing the world directly, a pilot views a "virtual" image of the world? Is 20/20 visual acuity in a VR system good enough? To deliver this acuity over the entire visual field would require over 43 megapixels (MP) of display surface for an HMD or about 150 MP for an immersive CAVE system, either of which presents a serious challenge with current technology. Additionally, the same number of sensor pixels would be required to drive the displays to this resolution (and formidable network architectures required to relay this information), or massive computer clusters are necessary to create an entirely computer-generated virtual reality with this resolution. Can we presently implement such a system? What other visual requirements or engineering issues should be considered? With the evolving technology, there are many technological issues and human factors considerations that need to be addressed before a pilot is placed within a virtual cockpit.
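The pixel-count claims follow from treating 20/20 acuity as roughly one resolvable arcminute and sampling at about one pixel per arcminute across the field of view. The sketch below shows the form of that calculation with assumed fields of view; it is not intended to reproduce the abstract's exact 43 MP and 150 MP figures.

```python
# The arithmetic behind such pixel-count estimates: 20/20 acuity is roughly one
# resolvable arcminute, so sampling at ~1 pixel per arcminute over a given field of
# view fixes the pixel budget. The fields of view below are assumptions for
# illustration, not the ones behind the abstract's 43 MP and 150 MP figures.
def megapixels_for_acuity(h_fov_deg, v_fov_deg, pixels_per_arcmin=1.0):
    h_px = h_fov_deg * 60 * pixels_per_arcmin
    v_px = v_fov_deg * 60 * pixels_per_arcmin
    return h_px * v_px / 1e6

print(f"120 x 80 deg HMD-like field  : {megapixels_for_acuity(120, 80):.0f} MP")
print(f"360 x 120 deg CAVE-like field: {megapixels_for_acuity(360, 120):.0f} MP")
```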
Perceptual Image Compression in Telemedicine
NASA Technical Reports Server (NTRS)
Watson, Andrew B.; Ahumada, Albert J., Jr.; Eckstein, Miguel; Null, Cynthia H. (Technical Monitor)
1996-01-01
The next era of space exploration, especially the "Mission to Planet Earth," will generate immense quantities of image data. For example, the Earth Observing System (EOS) is expected to generate in excess of one terabyte/day. NASA confronts a major technical challenge in managing this great flow of imagery: in collection, pre-processing, transmission to earth, archiving, and distribution to scientists at remote locations. Expected requirements in most of these areas clearly exceed current technology. Part of the solution to this problem lies in efficient image compression techniques. For much of this imagery, the ultimate consumer is the human eye. In this case image compression should be designed to match the visual capacities of the human observer. We have developed three techniques for optimizing image compression for the human viewer. The first consists of a formula, developed jointly with IBM and based on psychophysical measurements, that computes a DCT quantization matrix for any specified combination of viewing distance, display resolution, and display brightness. This DCT quantization matrix is used in most recent standards for digital image compression (JPEG, MPEG, CCITT H.261). The second technique optimizes the DCT quantization matrix for each individual image, based on the contents of the image. This is accomplished by means of a model of visual sensitivity to compression artifacts. The third technique extends the first two techniques to the realm of wavelet compression. Together these techniques will allow systematic perceptual optimization of image compression in NASA imaging systems. Many of the image management challenges faced by NASA are mirrored in the field of telemedicine. Here too there are severe demands for transmission and archiving of large image databases, and the imagery is ultimately used primarily by human observers, such as radiologists. In this presentation I will describe some of our preliminary explorations of the applications of our technology to the special problems of telemedicine.
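The role of a DCT quantization matrix can be illustrated with a toy 8x8 block: the matrix sets how coarsely each frequency coefficient is stored, which is where the perceptual tuning enters. The flat placeholder matrix below is an assumption for illustration, not the viewing-condition-derived matrix described in the abstract.

```python
# Sketch of the role a DCT quantization matrix plays in JPEG-style compression:
# coarser quantization steps discard coefficients the eye is less sensitive to.
# The flat example matrix below is a placeholder, not the viewing-condition-derived
# matrix described in the abstract.
import numpy as np

def dct2_matrix(n=8):
    k = np.arange(n)
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)
    return c

D = dct2_matrix()
block = np.arange(64, dtype=float).reshape(8, 8)     # stand-in image block
coeffs = D @ block @ D.T                             # 2-D DCT of the block
Q = np.full((8, 8), 16.0)                            # placeholder quantization matrix
quantized = np.round(coeffs / Q)                     # this is the lossy step
reconstructed = D.T @ (quantized * Q) @ D            # inverse DCT after dequantization
print(f"max reconstruction error: {np.abs(block - reconstructed).max():.2f}")
```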
3D image processing architecture for camera phones
NASA Astrophysics Data System (ADS)
Atanassov, Kalin; Ramachandra, Vikas; Goma, Sergio R.; Aleksic, Milivoje
2011-03-01
Putting high quality and easy-to-use 3D technology into the hands of regular consumers has become a recent challenge as interest in 3D technology has grown. Making 3D technology appealing to the average user requires that it be made fully automatic and foolproof. Designing a fully automatic 3D capture and display system requires: 1) identifying critical 3D technology issues like camera positioning, disparity control rationale, and screen geometry dependency, and 2) designing a methodology to automatically control them. Implementing 3D capture functionality on phone cameras necessitates designing algorithms to fit within the processing capabilities of the device. Various constraints like sensor position tolerances, sensor 3A tolerances, post-processing, 3D video resolution and frame rate should be carefully considered for their influence on the 3D experience. Issues with migrating functions such as zoom and pan from the 2D usage model (both during capture and display) to 3D need to be resolved to ensure the highest level of user experience. It is also very important that the 3D usage scenario (including interactions between the user and the capture/display device) is carefully considered. Finally, both the processing power of the device and the practicality of the scheme need to be taken into account while designing the calibration and processing methodology.
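The disparity-control rationale mentioned above reduces, for an idealized parallel camera pair, to a simple relation between baseline, focal length, and object depth. The sketch below uses hypothetical camera parameters and a placeholder comfort limit.

```python
# Sketch of the disparity-control rationale: for a parallel stereo camera pair,
# image disparity scales with baseline and focal length and falls off with depth.
# The comfort threshold and camera parameters are illustrative assumptions.
def disparity_pixels(baseline_mm, focal_px, depth_mm):
    return baseline_mm * focal_px / depth_mm

baseline, focal = 30.0, 1200.0          # mm, pixels (hypothetical phone module)
for depth in (300.0, 1000.0, 5000.0):   # near, mid, far objects in mm
    d = disparity_pixels(baseline, focal, depth)
    comfortable = d <= 60.0             # placeholder on-screen disparity limit
    print(f"depth {depth/1000:.1f} m -> disparity {d:.1f} px, comfortable={comfortable}")
```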
Boeing Displays Process Action team
NASA Astrophysics Data System (ADS)
Wright, R. Nick; Jacobsen, Alan R.
2000-08-01
Boeing uses Active Matrix Liquid Crystal Display (AMLCD) technology in a wide variety of its aerospace products, including military, commercial, and space applications. With the demise of Optical Imaging Systems (OIS) in September 1998, the source of on-shore custom AMLCD products has become very tenuous. Reliance on off-shore sources of AMLCD glass for aerospace products is also difficult when the average life of a display product is often less than one-tenth the 30 or more years expected from aerospace platforms. Boeing is addressing this problem through the development of a Displays Process Action Team that gathers input from all display users across the spectrum of our aircraft products. By consolidating requirements, developing common interface standards, working with our suppliers and constantly monitoring custom sources as well as commercially available products, Boeing is minimizing the impact (current and future) of the uncertain AMLCD avionics supply picture.
The application of autostereoscopic display in smart home system based on mobile devices
NASA Astrophysics Data System (ADS)
Zhang, Yongjun; Ling, Zhi
2015-03-01
Smart home systems, which control home devices, are becoming more and more popular in our daily life. Mobile intelligent terminals for smart homes have been developed, making remote control and monitoring possible with smartphones or tablets. Meanwhile, 3D stereo display technology has developed rapidly in recent years. Therefore, an iPad-based smart home system that adopts an autostereoscopic display as its control interface is proposed to improve the user-friendliness of the experience. In consideration of the iPad's limited hardware capabilities, we introduce a 3D image synthesizing method based on parallel processing with the Graphics Processing Unit (GPU), implemented with the OpenGL ES Application Programming Interface (API) library on the iOS platform for real-time autostereoscopic display. Compared to a traditional smart home system, the proposed system, by applying an autostereoscopic display to the control interface, enhances the realism, user-friendliness, and visual comfort of the interface.
Quality assurance and quality control in mammography: a review of available guidance worldwide.
Reis, Cláudia; Pascoal, Ana; Sakellaris, Taxiarchis; Koutalonis, Manthos
2013-10-01
Review available guidance for quality assurance (QA) in mammography and discuss its contribution to harmonise practices worldwide. Literature search was performed on different sources to identify guidance documents for QA in mammography available worldwide in international bodies, healthcare providers, professional/scientific associations. The guidance documents identified were reviewed and a selection was compared for type of guidance (clinical/technical), technology and proposed QA methodologies focusing on dose and image quality (IQ) performance assessment. Fourteen protocols (targeted at conventional and digital mammography) were reviewed. All included recommendations for testing acquisition, processing and display systems associated with mammographic equipment. All guidance reviewed highlighted the importance of dose assessment and testing the Automatic Exposure Control (AEC) system. Recommended tests for assessment of IQ showed variations in the proposed methodologies. Recommended testing focused on assessment of low-contrast detection, spatial resolution and noise. QC of image display is recommended following the American Association of Physicists in Medicine guidelines. The existing QA guidance for mammography is derived from key documents (American College of Radiology and European Union guidelines) and proposes similar tests despite the variations in detail and methodologies. Studies reported on QA data should provide detail on experimental technique to allow robust data comparison. Countries aiming to implement a mammography/QA program may select/prioritise the tests depending on available technology and resources. •An effective QA program should be practical to implement in a clinical setting. •QA should address the various stages of the imaging chain: acquisition, processing and display. •AEC system QC testing is simple to implement and provides information on equipment performance.
Digitally switchable multi-focal lens using freeform optics.
Wang, Xuan; Qin, Yi; Hua, Hong; Lee, Yun-Han; Wu, Shin-Tson
2018-04-16
Optical technologies offering electrically tunable optical power have found a broad range of applications, from head-mounted displays for virtual and augmented reality applications to microscopy. In this paper, we present a novel design and prototype of a digitally switchable multi-focal lens (MFL) that offers the capability of rapidly switching the optical power of the system among multiple foci. It consists of a freeform singlet and a customized programmable optical shutter array (POSA). Time-multiplexed multiple foci can be obtained by electrically controlling the POSA to switch the light path through different segments of the freeform singlet rapidly. While this method can be applied to a broad range of imaging and display systems, we experimentally demonstrate a proof-of-concept prototype for a multi-foci imaging system.
49 CFR 1549.103 - Qualifications and training of individuals with security-related duties.
Code of Federal Regulations, 2011 CFR
2011-10-01
... screening technologies that the facility is authorized to use. These include: (i) The ability to operate x-ray equipment and to distinguish on the x-ray monitor the appropriate imaging standard specified in the certified cargo screening facility security program. Wherever the x-ray system displays colors...
Interactive Video in Training. Computers in Personnel--Making Management Profitable.
ERIC Educational Resources Information Center
Copeland, Peter
Interactive video is achieved by merging the two powerful technologies of microcomputing and video. Using television as the vehicle for display, text and diagrams, filmic images, and sound can be used separately or in combination to achieve a specific training task. An interactive program can check understanding, determine progress, and challenge…
NASA Astrophysics Data System (ADS)
Liu, Wei-Ting; Huang, Wen-Yao
2012-10-01
This study used the novel fluorescence-based deep-blue-emitting molecule BPVPDA in an organic fluorescent color thin film to exhibit a deep blue color with CIE coordinates of (0.13, 0.16). The developed original organic RGB color thin film technology enables the optimization of the distinctive features of an organic light emitting diode (OLED) and thin-film-transistor (TFT) LCD display. The color filter structure maintains the same high resolution while obtaining a higher level of brightness in comparison with a conventional organic RGB color thin film. The image-processing engine is designed to achieve a sharp text image for a TFT LCD with organic color thin films. The organic color thin film structure uses an organic dye dopant in a limpid photoresist. With this technology, the following characteristics can be obtained: 1. a high color gamut ratio for color reproduction, and 2. improved luminous efficiency with the organic color fluorescent thin film. This performance is among the best results ever reported for a color filter used on a TFT-LCD or OLED.
NASA Astrophysics Data System (ADS)
Liu, Wei-ting; Huang, Wen-Yao
2012-06-01
This study used a novel fluorescence-based deep-blue-emitting molecule, BPVPDA; an organic fluorescent color thin film using BPVPDA exhibits deep blue emission with CIE coordinates of (0.13, 0.16). The developed original organic RGB color thin film technology enables the optimization of the distinctive features of an organic light emitting diode (OLED) and thin-film-transistor (TFT) LCD display. The color filter structure maintains the same high resolution while obtaining a higher level of brightness in comparison with a conventional organic RGB color thin film. The image-processing engine is designed to achieve a sharp text image for a TFT LCD with organic color thin films. The organic color thin film structure uses an organic dye dopant in a limpid photoresist. With this technology, the following characteristics can be obtained: (1) a high color gamut ratio for color reproduction, and (2) improved luminous efficiency with the organic color fluorescent thin film. This performance is among the best results ever reported for a color filter used on a TFT-LCD or OLED.
NASA Technical Reports Server (NTRS)
2002-01-01
In 1999, Genex submitted a proposal to Stennis Space Center for a volumetric 3-D display technique that would provide multiple users with a 360-degree perspective to simultaneously view and analyze 3-D data. The futuristic capabilities of the VolumeViewer(R) have offered tremendous benefits to commercial users in the fields of medicine and surgery, air traffic control, pilot training and education, computer-aided design/computer-aided manufacturing, and military/battlefield management. The technology has also helped NASA to better analyze and assess the various data collected by its satellite and spacecraft sensors. Genex capitalized on its success with Stennis by introducing two separate products to the commercial market that incorporate key elements of the 3-D display technology designed under an SBIR contract. The company's Rainbow 3D(R) imaging camera is a novel, three-dimensional surface profile measurement system that can obtain a full-frame 3-D image in less than 1 second. The other is the 360-degree OmniEye(R) video system. Ideal for intrusion detection, surveillance, and situation management, this unique camera system offers a continuous, panoramic view of a scene in real time.
NASA Technical Reports Server (NTRS)
Harrison, D. C.; Sandler, H.; Miller, H. A.
1975-01-01
The present collection of papers outlines advances in ultrasonography, scintigraphy, and commercialization of medical technology as applied to cardiovascular diagnosis in research and clinical practice. Particular attention is given to instrumentation, image processing and display. As necessary concomitants to mathematical analysis, recently improved magnetic recording methods using tape or disks and high-speed computers of large capacity are coming into use. Major topics include Doppler ultrasonic techniques, high-speed cineradiography, three-dimensional imaging of the myocardium with isotopes, sector-scanning echocardiography, and commercialization of the echocardioscope. Individual items are announced in this issue.
Advances in diagnostic ultrasonography.
Reef, V B
1991-08-01
A wide variety of ultrasonographic equipment currently is available for use in equine practice, but no one machine is optimal for every type of imaging. Image quality is the most important factor in equipment selection once the needs of the practitioner are ascertained. The transducer frequencies available, transducer footprints, depth of field displayed, frame rate, gray scale, simultaneous electrocardiography, Doppler, and functions to modify the image are all important considerations. The ability to make measurements off of videocassette recorder playback and future upgradability should be evaluated. Linear array and sector technology are the backbone of equine ultrasonography today. Linear array technology is most useful for a high-volume broodmare practice, whereas sector technology is ideal for a more general equine practice. The curved or convex linear scanner has more applications than the standard linear array and is equipped with the linear array rectal probe, which provides the equine practitioner with a more versatile unit for equine ultrasonographic evaluations. The annular array and phased array systems have improved image quality, but each has its own limitations. The new sector scanners still provide the most versatile affordable equipment for equine general practice.
Improved inter-layer prediction for light field content coding with display scalability
NASA Astrophysics Data System (ADS)
Conti, Caroline; Ducla Soares, Luís.; Nunes, Paulo
2016-09-01
Light field imaging based on microlens arrays - also known as plenoptic, holoscopic and integral imaging - has recently risen as a feasible and promising technology due to its ability to support functionalities not straightforwardly available in conventional imaging systems, such as post-production refocusing and depth-of-field changing. However, to gradually reach the consumer market and to provide interoperability with current 2D and 3D representations, a display scalable coding solution is essential. In this context, this paper proposes an improved display scalable light field codec comprising a three-layer hierarchical coding architecture (previously proposed by the authors) that provides interoperability with 2D (Base Layer) and 3D stereo and multiview (First Layer) representations, while the Second Layer supports the complete light field content. For further improving the compression performance, novel exemplar-based inter-layer coding tools are proposed here for the Second Layer, namely: (i) an inter-layer reference picture construction relying on an exemplar-based optimization algorithm for texture synthesis, and (ii) a direct prediction mode based on exemplar texture samples from lower layers. Experimental results show that the proposed solution performs better than the tested benchmark solutions, including the authors' previous scalable codec.
Future opportunities for advancing glucose test device electronics.
Young, Brian R; Young, Teresa L; Joyce, Margaret K; Kennedy, Spencer I; Atashbar, Massood Z
2011-09-01
Advancements in the field of printed electronics can be applied to the field of diabetes testing. A brief history and some new developments in printed electronics components applicable to personal test devices, including circuitry, batteries, transmission devices, displays, and sensors, are presented. Low-cost, thin, and lightweight materials containing printed circuits with energy storage or harvest capability and reactive/display centers, made using new printing/imaging technologies, are ideal for incorporation into personal-use medical devices such as glucose test meters. Semicontinuous rotogravure printing, which utilizes flexible substrates and polymeric, metallic, and/or nano "ink" composite materials to effect rapidly produced, lower-cost printed electronics, is showing promise. Continuing research advancing substrate, "ink," and continuous processing development presents the opportunity for research collaboration with medical device designers. © 2011 Diabetes Technology Society.
NASA Astrophysics Data System (ADS)
Phillips, Jonathan B.; Coppola, Stephen M.; Jin, Elaine W.; Chen, Ying; Clark, James H.; Mauer, Timothy A.
2009-01-01
Texture appearance is an important component of photographic image quality as well as object recognition. Noise cleaning algorithms are used to decrease sensor noise of digital images, but can hinder texture elements in the process. The Camera Phone Image Quality (CPIQ) initiative of the International Imaging Industry Association (I3A) is developing metrics to quantify texture appearance. Objective and subjective experimental results of the texture metric development are presented in this paper. Eight levels of noise cleaning were applied to ten photographic scenes that included texture elements such as faces, landscapes, architecture, and foliage. Four companies (Aptina Imaging, LLC, Hewlett-Packard, Eastman Kodak Company, and Vista Point Technologies) have performed psychophysical evaluations of overall image quality using one of two methods of evaluation. Both methods presented paired comparisons of images on thin film transistor liquid crystal displays (TFT-LCD), but the display pixel pitch and viewing distance differed. CPIQ has also been developing objective texture metrics and targets that were used to analyze the same eight levels of noise cleaning. The correlation of the subjective and objective test results indicates that texture perception can be modeled with an objective metric. The two methods of psychophysical evaluation exhibited high correlation despite the differences in methodology.
Design Issues in Video Disc Map Display.
1984-10-01
such items as the equipment used by ETL in its work with discs and selected images from a disc. VIDEO DISC TECHNOLOGY AND VOCABULARY: The term video refers to a television image. The standard home television set is equipped with a receiver, which is capable of picking up a signal... plays for one hour per side and is played at a constant linear velocity. The industrially-formatted disc has 54,000 frames per side in concentric tracks.
On-line measurement of diameter of hot-rolled steel tube
NASA Astrophysics Data System (ADS)
Zhu, Xueliang; Zhao, Huiying; Tian, Ailing; Li, Bin
2015-02-01
This paper presents the design of an online diameter measurement system for a hot-rolled seamless steel tube production line. On one hand, such a system can stimulate the development of domestic pipe measurement techniques; on the other hand, it can give domestic hot-rolled seamless steel tube manufacturers strong product competitiveness at low cost. Through analysis and comparison of various detection methods and techniques, a CCD camera-based online caliper system design was chosen. The system mainly includes a hardware measurement portion and an image processing section, combining software control technology and image processing technology to complete online measurement of hot tube diameter. Taking into account the complexity of the actual job site, a relatively simple and reasonable layout was chosen. The image processing section mainly addresses camera calibration and the application of functions in Matlab, so that the diameter is calculated from the image by the algorithm and displayed directly. In the final design phase, a simulation platform was built and images were successfully collected for processing, proving the feasibility and rationality of the design with an error of less than 2%. The design successfully uses photoelectric detection technology to solve real production problems.
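The measurement step can be illustrated with a minimal sketch: locate the tube edges along one image row by thresholding and convert the pixel width to millimetres with a calibration factor. The synthetic intensity profile and the calibration value below are placeholders, not the system described in the paper.

```python
# Sketch of the measurement step: find the tube edges along one image row by
# thresholding, then convert the pixel width to millimetres with a calibration
# factor obtained from camera calibration. The synthetic row and the calibration
# value are placeholders, not the system described in the paper.
import numpy as np

row = np.zeros(2048)
row[600:1500] = 255.0                       # synthetic bright tube against dark background

# indices where the profile crosses the threshold (rising and falling edges)
edges = np.flatnonzero(np.diff((row > 128).astype(int)) != 0)
width_px = edges[-1] - edges[0]
mm_per_px = 0.12                            # hypothetical calibration factor
print(f"diameter ~= {width_px * mm_per_px:.1f} mm ({width_px} px)")
```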
Laboratory and in-flight experiments to evaluate 3-D audio display technology
NASA Technical Reports Server (NTRS)
Ericson, Mark; Mckinley, Richard; Kibbe, Marion; Francis, Daniel
1994-01-01
Laboratory and in-flight experiments were conducted to evaluate 3-D audio display technology for cockpit applications. A 3-D audio display generator was developed which digitally encodes naturally occurring direction information onto any audio signal and presents the binaural sound over headphones. The acoustic image is stabilized for head movement by use of an electromagnetic head-tracking device. In the laboratory, a 3-D audio display generator was used to spatially separate competing speech messages to improve the intelligibility of each message. Up to a 25 percent improvement in intelligibility was measured for spatially separated speech at high ambient noise levels (115 dB SPL). During the in-flight experiments, pilots reported that spatial separation of speech communications provided a noticeable improvement in intelligibility. The use of 3-D audio for target acquisition was also investigated. In the laboratory, 3-D audio enabled the acquisition of visual targets in about two seconds average response time at 17 degrees accuracy. During the in-flight experiments, pilots correctly identified ground targets 50, 75, and 100 percent of the time at separation angles of 12, 20, and 35 degrees, respectively. In general, pilot performance in the field with the 3-D audio display generator was as expected, based on data from laboratory experiments.
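One ingredient of such spatial audio rendering is the interaural time difference (ITD) associated with a source azimuth. The sketch below uses the Woodworth spherical-head approximation with an assumed head radius; the actual 3-D audio display generator uses full head-related transfer functions, so this is only meant to show why azimuthally separated talkers become easier to segregate.

```python
# Sketch of one ingredient of spatial audio rendering: the interaural time
# difference (ITD) for a source azimuth, using the Woodworth spherical-head
# approximation. Head radius and the listed azimuths are illustrative values.
import math

def itd_seconds(azimuth_deg, head_radius_m=0.0875, speed_of_sound=343.0):
    a = math.radians(azimuth_deg)
    return (head_radius_m / speed_of_sound) * (a + math.sin(a))

for az in (0, 12, 20, 35, 90):
    print(f"azimuth {az:3d} deg -> ITD ~ {itd_seconds(az)*1e6:6.1f} microseconds")
```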
Automated Percentage of Breast Density Measurements for Full-field Digital Mammography Applications.
Fowler, Erin E E; Vachon, Celine M; Scott, Christopher G; Sellers, Thomas A; Heine, John J
2014-08-01
Increased mammographic breast density is a significant risk factor for breast cancer. A reproducible, accurate, and automated breast density measurement is required for full-field digital mammography (FFDM) to support clinical applications. We evaluated a novel automated percentage of breast density measure (PDa) and made comparisons with the standard operator-assisted measure (PD) using FFDM data. We used a nested breast cancer case-control study matched on age, year of mammogram and diagnosis with images acquired from a specific direct x-ray conversion FFDM technology. PDa was applied to the raw and clinical display (or processed) representation images. We evaluated the transformation (pixel mapping) of the raw image, giving a third representation (raw-transformed), to improve the PDa performance using differential evolution optimization. We applied PD to the raw and clinical display images as a standard for measurement comparison. Conditional logistic regression was used to estimate the odd ratios (ORs) for breast cancer with 95% confidence intervals (CI) for all measurements; analyses were adjusted for body mass index. PDa operates by evaluating signal-dependent noise (SDN), captured as local signal variation. Therefore, we characterized the SDN relationship to understand the PDa performance as a function of data representation and investigated a variation analysis of the transformation. The associations of the quartiles of operator-assisted PD with breast cancer were similar for the raw (OR: 1.00 [ref.]; 1.59 [95% CI, 0.93-2.70]; 1.70 [95% CI, 0.95-3.04]; 2.04 [95% CI, 1.13-3.67]) and clinical display (OR: 1.00 [ref.]; 1.31 [95% CI, 0.79-2.18]; 1.14 [95% CI, 0.65-1.98]; 1.95 [95% CI, 1.09-3.47]) images. PDa could not be assessed on the raw images without preprocessing. However, PDa had similar associations with breast cancer when assessed on 1) raw-transformed (OR: 1.00 [ref.]; 1.27 [95% CI, 0.74-2.19]; 1.86 [95% CI, 1.05-3.28]; 3.00 [95% CI, 1.67-5.38]) and 2) clinical display (OR: 1.00 [ref.]; 1.79 [95% CI, 1.04-3.11]; 1.61 [95% CI, 0.90-2.88]; 2.94 [95% CI, 1.66-5.19]) images. The SDN analysis showed that a nonlinear relationship between the mammographic signal and its variation (ie, the biomarker for the breast density) is required for PDa. Although variability in the transform influenced the respective PDa distribution, it did not affect the measurement's association with breast cancer. PDa assessed on either raw-transformed or clinical display images is a valid automated breast density measurement for a specific FFDM technology and compares well against PD. Further work is required for measurement generalization. Copyright © 2014 AUR. Published by Elsevier Inc. All rights reserved.
Color image quality in projection displays: a case study
NASA Astrophysics Data System (ADS)
Strand, Monica; Hardeberg, Jon Y.; Nussbaum, Peter
2005-01-01
Recently the use of projection displays has increased dramatically in different applications such as digital cinema, home theatre, and business and educational presentations. Even if the color image quality of these devices has improved significantly over the years, it is still a common situation for users of projection displays that the projected colors differ significantly from the intended ones. The study presented in this paper attempts to analyze the color image quality of a large set of projection display devices, particularly investigating the variations in color reproduction. As a case study, a set of 14 projectors (LCD and DLP technology) at Gjøvik University College have been tested under four different conditions: dark and light room, with and without using an ICC profile. To find out more about the importance of the illumination conditions in a room, and the degree of improvement when using an ICC profile, the results from the measurements were processed and analyzed. Eye-One Beamer from GretagMacbeth was used to make the profiles. The color image quality was evaluated both visually and by color difference calculations. The results from the analysis indicated large visual and colorimetric differences between the projectors. The DLP projectors generally have smaller color gamuts than the LCD projectors, and the color gamuts of older projectors are significantly smaller than those of newer ones. The amount of ambient light reaching the screen is of great importance for the visual impression: if too much reflection and other ambient light reaches the screen, the projected image becomes pale and has low contrast. When using a profile, the differences in colors between the projectors get smaller and the colors appear more correct. For one device, the average ΔE*ab color difference when compared to a relative white reference was reduced from 22 to 11, for another from 13 to 6. Blue colors show the largest variations among the projection displays and are therefore harder to predict.
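The ΔE*ab figures quoted above come from the CIE76 colour-difference formula, which is simply the Euclidean distance between two CIELAB triplets. The sketch below uses made-up Lab values for illustration.

```python
# Sketch of the dE*ab (CIE76) colour-difference computation underlying figures such
# as "reduced from 22 to 11". The Lab triplets below are made-up examples.
import math

def delta_e_ab(lab1, lab2):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(lab1, lab2)))

reference_white = (95.0, 0.0, 0.0)
projected = (78.0, 4.0, -12.0)       # hypothetical measured patch
print(f"dE*ab = {delta_e_ab(reference_white, projected):.1f}")
```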
Design of an ultra-thin near-eye display with geometrical waveguide and freeform optics
NASA Astrophysics Data System (ADS)
Tsai, Meng-Che; Lee, Tsung-Xian
2017-02-01
Driven by worldwide trends in portable devices and illumination technology, research interest in laser diode (LD) applications has grown rapidly in recent years. One popular and promising LD application is the near-eye display used in VR/AR. An ideal near-eye display needs to provide high-resolution, wide-FOV imagery with compact magnifying optics and long battery life for prolonged use. However, previous studies have not reached high light utilization efficiency in the illumination and imaging optics, which should be raised as much as possible to increase wearing comfort. To meet these needs, a waveguide illumination system for a near-eye display is presented in this paper. We focus on a high-efficiency RGB LD light engine that reduces power consumption and increases the flexibility of the mechanical design by using freeform TIR reflectors instead of beam splitters. With these structures, the total system efficiency of the near-eye display is successfully increased, and the improved efficiency and fabrication tolerance of the near-eye display are shown in this paper.
High-immersion three-dimensional display of the numerical computer model
NASA Astrophysics Data System (ADS)
Xing, Shujun; Yu, Xunbo; Zhao, Tianqi; Cai, Yuanfa; Chen, Duo; Chen, Zhidong; Sang, Xinzhu
2013-08-01
High-immersion three-dimensional (3D) displays are valuable tools for many applications, such as building and housing design, industrial architecture, aeronautics, scientific research, entertainment, media advertisement, and military use. However, most technologies provide 3D display in front of screens that are parallel with the walls, which decreases the sense of immersion. To get the correct multi-view stereo ground image, the cameras' photosensitive surfaces should be parallel to the common focus plane, and the cameras' optical axes should be offset toward the center of the common focus plane in both the vertical and horizontal directions. It is very common to use virtual cameras, which are ideal pinhole cameras, to display a 3D model in a computer system. Virtual cameras can simulate the shooting method for multi-view ground-based stereo images. Here, two virtual shooting methods for ground-based high-immersion 3D display are presented. The position of the virtual camera is determined by the observer's eye position in the real world. When the observer stands within the circumcircle of the 3D ground display, offset perspective projection virtual cameras are used. If the observer stands outside the circumcircle of the 3D ground display, offset perspective projection virtual cameras and orthogonal projection virtual cameras are adopted. In this paper, we mainly discuss the parameter settings of the virtual cameras. The near clip plane setting is the main point in the first method, while the rotation angle of the virtual cameras is the main point in the second method. In order to validate the results, we use D3D and OpenGL to render scenes from different viewpoints and generate a stereoscopic image. A realistic visualization system for 3D models is constructed and demonstrated for horizontal viewing, which provides high-immersion 3D visualization. The displayed 3D scenes are compared with the real objects in the real world.
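The "offset perspective projection virtual camera" mentioned above corresponds to an asymmetric (off-axis) viewing frustum whose apex is the viewer's eye and whose window lies on the display plane. A minimal sketch in the OpenGL convention follows; the numeric example and the assumption that the near clip plane coincides with the virtual window are illustrative only, not the paper's parameter settings.

    import numpy as np

    def offset_perspective(left, right, bottom, top, near, far):
        """Asymmetric (off-axis) perspective projection matrix, OpenGL convention.
        left/right/bottom/top locate the screen window on the near clip plane,
        measured in eye space, so shifting the eye shifts the frustum."""
        return np.array([
            [2*near/(right-left), 0.0, (right+left)/(right-left), 0.0],
            [0.0, 2*near/(top-bottom), (top+bottom)/(top-bottom), 0.0],
            [0.0, 0.0, -(far+near)/(far-near), -2*far*near/(far-near)],
            [0.0, 0.0, -1.0, 0.0],
        ])

    # Example (all values assumed): eye 0.3 m to the right of the centre of a
    # 1 m-wide virtual window placed 0.5 m away, with the near plane on the window.
    eye_x, half_w, near = 0.3, 0.5, 0.5
    P = offset_perspective(-half_w - eye_x, half_w - eye_x, -0.3, 0.3, near, 100.0)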
Preliminary display comparison for dental diagnostic applications
NASA Astrophysics Data System (ADS)
Odlum, Nicholas; Spalla, Guillaume; van Assche, Nele; Vandenberghe, Bart; Jacobs, Reinhilde; Quirynen, Marc; Marchessoux, Cédric
2012-02-01
The aim of this study is to predict the clinical performance and image quality of a display system for viewing dental images. At present, the use of dedicated medical displays is not uniform among dentists - many still view images on ordinary consumer displays. This work investigated whether the use of a medical display improved the perception of dental images by a clinician, compared to a consumer display. Display systems were simulated using the MEdical Virtual Imaging Chain (MEVIC). Images derived from two carefully performed studies, on periodontal bone lesion detection and endodontic file length determination, were used. Three displays were selected: a medical-grade one and two consumer displays (Barco MDRC-2120, Dell 1907FP and Dell 2007FPb). Some typical characteristics of the displays were evaluated by measurements and simulations, such as the Modulation Transfer Function (MTF), the Noise Power Spectrum (NPS), backlight stability and calibration. For the MTF, the display with the largest pixel pitch logically has the worst MTF. Moreover, the medical-grade display has a slightly better MTF, and the displays have similar NPS. The study shows the instability of the emitted intensity of the consumer displays compared to the medical-grade one. Finally, the study on the calibration methodology of the displays shows that the signal in the dental images will always be more perceivable on a DICOM GSDF display than on a gamma 2.2 display.
The Sharjah Center for Astronomy and Space Sciences (SCASS 2015): Concept and Resources
NASA Astrophysics Data System (ADS)
Naimiy, Hamid M. K. Al
2015-08-01
The Sharjah Center for Astronomy and Space Sciences (SCASS) was launched in 2015 at the University of Sharjah in the UAE. The center will serve to enrich research in the fields of astronomy and space sciences, promote these fields at all educational levels, and encourage community involvement in these sciences. SCASS consists of: The Planetarium: Contains a semi-circular display screen (18 meters in diameter) installed at an angle of 10°, which displays high-definition images using an advanced digital display system consisting of seven (7) high-performance light-display channels. The Planetarium Theatre offers a 200-seat capacity with seats placed at highly calculated angles. The Planetarium also contains an enormous star display (Star Ball - 10 million stars) located in the heart of the celestial dome theatre. The Sharjah Astronomy Observatory: A small optical observatory consisting of a reflector telescope 45 centimeters in diameter to observe galaxies, stars and planets. Connected to it is a refractor telescope of 20 centimeters in diameter to observe the sun and moon with highly developed astronomical devices, including a digital camera (CCD) and a high-resolution Echelle Spectrograph with auto-guiding and remote calibration ports. Astronomy, space and physics educational displays for various age groups include: An advanced space display that allows for viewing the universe during four (4) different time periods as seen by: 1) The naked eye; 2) Galileo; 3) Spectrographic technology; and 4) The space technology of today. A space technology display that includes space discoveries since the launching of the first satellite in the 1950s until now. The Design Concept for the Center (450,000 sq. meters) was originated by HH Sheikh Sultan bin Mohammed Al Qasimi, Ruler of Sharjah, and depicts the dome as representing the sun in the middle of the center surrounded by planetary bodies in orbit to form the solar system as seen in the sky.
Visually representing reality: aesthetics and accessibility aspects
NASA Astrophysics Data System (ADS)
van Nes, Floris L.
2009-02-01
This paper gives an overview of the visual representation of reality with three imaging technologies: painting, photography and electronic imaging. The contribution of the important image aspects, called dimensions hereafter, such as color, fine detail and total image size, to the degree of reality and aesthetic value of the rendered image are described for each of these technologies. Whereas quite a few of these dimensions - or approximations, or even only suggestions thereof - were already present in prehistoric paintings, apparent motion and true stereoscopic vision only recently were added - unfortunately also introducing accessibility and image safety issues. Efforts are made to reduce the incidence of undesirable biomedical effects such as photosensitive seizures (PSS), visually induced motion sickness (VIMS), and visual fatigue from stereoscopic images (VFSI) by international standardization of the image parameters to be avoided by image providers and display manufacturers. The history of this type of standardization, from an International Workshop Agreement to a strategy for accomplishing effective international standardization by ISO, is treated at some length. One of the difficulties to be mastered in this process is the reconciliation of the, sometimes opposing, interests of vulnerable persons, thrill-seeking viewers, creative video designers and the game industry.
Fulfilling the promise of holographic optical elements
NASA Astrophysics Data System (ADS)
Moss, Gaylord E.
1990-05-01
Consider the whole class of holographic optical elements which either contain pictorial image information or have the ability to modify wavefronts. Even after many years of development, there are pitifully few marketable applications. The visionary promises that holography would create a revolution in the optics and display industries have not been fulfilled. Time has shown that, while it was relatively simple to dream up ideas for myriad applications, these ideas have generally not moved beyond laboratory demonstrations. Exceptions are a few items such as optical elements for supermarket scanners, head-up displays and laser diode lenses. This paper addresses: 1. The many promises of holographic elements; 2. The difficulties of practical implementation; 3. A reassessment of research and development priorities. To give simple examples of these points, they are discussed mainly as they apply to one type of holographic application: automotive displays. These familiar displays give a clear example of both the promises and difficulties that holographic elements present in the world of high volume, low-cost production. Automotive displays could be considered as a trivial application alongside more interesting fundamental research programs or high cost, sophisticated military applications. One might even consider "trivial" automotive displays to be a disreputable subject for serious researchers. The case is made that exactly the opposite is true. The resources for large scale development exist only in a healthy commercial market. An example is the Japanese funding of high technology through commercial product development. This has been shown to be effective in the development of other technologies, such as ceramics, semiconductors, solar cells and composite materials. In like manner, if holography is to become an economically important technology, more and more competent researchers must start looking outside the universities and military laboratories for support. They must involve themselves in some of the "trivial" commercial applications.
Assessment of display performance for medical imaging systems: Executive summary of AAPM TG18 report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Samei, Ehsan; Badano, Aldo; Chakraborty, Dev
Digital imaging provides an effective means to electronically acquire, archive, distribute, and view medical images. Medical imaging display stations are an integral part of these operations. Therefore, it is vitally important to assure that electronic display devices do not compromise image quality and ultimately patient care. The AAPM Task Group 18 (TG18) recently published guidelines and acceptance criteria for acceptance testing and quality control of medical display devices. This paper is an executive summary of the TG18 report. TG18 guidelines include visual, quantitative, and advanced testing methodologies for primary and secondary class display devices. The characteristics, tested in conjunction with specially designed test patterns (i.e., TG18 patterns), include reflection, geometric distortion, luminance, the spatial and angular dependencies of luminance, resolution, noise, glare, chromaticity, and display artifacts. Geometric distortions are evaluated by linear measurements of the TG18-QC test pattern, which should render distortion coefficients less than 2%/5% for primary/secondary displays, respectively. Reflection measurements include specular and diffuse reflection coefficients from which the maximum allowable ambient lighting is determined such that contrast degradation due to display reflection remains below a 20% limit and the level of ambient luminance (L_amb) does not unduly compromise luminance ratio (LR) and contrast at low luminance levels. Luminance evaluation relies on visual assessment of low contrast features in the TG18-CT and TG18-MP test patterns, or quantitative measurements at 18 distinct luminance levels of the TG18-LN test patterns. The major acceptance criteria for primary/secondary displays are maximum luminance of greater than 170/100 cd/m², LR of greater than 250/100, and contrast conformance to that of the grayscale standard display function (GSDF) of better than 10%/20%, respectively. The angular response is tested to ascertain the viewing cone within which contrast conformance to the GSDF is better than 30%/60% and LR is greater than 175/70 for primary/secondary displays, or alternatively, within which the on-axis contrast thresholds of the TG18-CT test pattern remain discernible. The evaluation of luminance spatial uniformity at two distinct luminance levels across the display faceplate using TG18-UNL test patterns should yield nonuniformity coefficients smaller than 30%. The resolution evaluation includes the visual scoring of the CX test target in the TG18-QC or TG18-CX test patterns, which should yield scores greater than 4/6 for primary/secondary displays. Noise evaluation includes visual evaluation of the contrast threshold in the TG18-AFC test pattern, which should yield a minimum of 3/2 targets visible for primary/secondary displays. The guidelines also include methodologies for more quantitative resolution and noise measurements based on MTF and NPS analyses. The display glare test, based on the visibility of the low-contrast targets of the TG18-GV test pattern or the measurement of the glare ratio (GR), is expected to yield scores greater than 3/1 and GRs greater than 400/150 for primary/secondary displays. Chromaticity, measured across a display faceplate or between two display devices, is expected to render a u′,v′ color separation of less than 0.01 for primary displays. The report offers further descriptions of prior standardization efforts, current display technologies, testing prerequisites, streamlined procedures and timelines, and TG18 test patterns.
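Because the summary quotes explicit numeric acceptance limits, they can be collected into a simple pass/fail check. The sketch below is only an illustration built from the limits quoted above (maximum luminance, luminance ratio, GSDF conformance, distortion, nonuniformity, glare ratio, AFC targets); it omits criteria whose scoring direction is not spelled out here and is not the TG18 test procedure itself.

    # Illustrative pass/fail check against TG18 limits quoted in the summary.
    # Tuples are (primary, secondary); this is a sketch, not the TG18 procedure.
    TG18_LIMITS = {
        "max_luminance_cd_m2": {"min": (170.0, 100.0)},
        "luminance_ratio":     {"min": (250.0, 100.0)},
        "gsdf_deviation_pct":  {"max": (10.0, 20.0)},
        "distortion_pct":      {"max": (2.0, 5.0)},
        "nonuniformity_pct":   {"max": (30.0, 30.0)},
        "glare_ratio":         {"min": (400.0, 150.0)},
        "afc_targets_visible": {"min": (3, 2)},
    }

    def check_display(measurements, display_class="primary"):
        """Return a dict of criterion -> bool for the limits listed above.
        `measurements` maps the same keys to measured values."""
        idx = 0 if display_class == "primary" else 1
        results = {}
        for key, limit in TG18_LIMITS.items():
            if key not in measurements:
                continue
            value = measurements[key]
            if "min" in limit:
                results[key] = value >= limit["min"][idx]
            else:
                results[key] = value <= limit["max"][idx]
        return results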
A compact and lightweight off-axis lightguide prism in near to eye display
NASA Astrophysics Data System (ADS)
Zhuang, Zhenfeng; Cheng, Qijia; Surman, Phil; Zheng, Yuanjin; Sun, Xiao Wei
2017-06-01
We propose a method to improve the design of an off-axis lightguide configuration for near-to-eye displays (NED) using freeform optics technology. The advantage of this modified optical system, which includes an organic light-emitting diode (OLED), a doublet lens, an imaging lightguide prism and a compensation prism, is that it increases the optical path length, offers a smaller size, avoids obstructed views, and matches the user's head shape. In this system, the light emitted from the OLED passes through the doublet lens and is refracted/reflected by the imaging lightguide prism, which is used to magnify the image from the microdisplay, while the compensation prism is utilized to correct the light ray shift so that a low-distortion image can be observed in a real-world setting. A NED with a 4 mm diameter exit pupil, 21.5° diagonal full field of view (FoV), 23 mm eye relief, and a size of 33 mm by 9.3 mm by 16 mm is designed. The developed system is compact, lightweight and suitable for entertainment and education applications.
Computer-generated holographic near-eye display system based on LCoS phase only modulator
NASA Astrophysics Data System (ADS)
Sun, Peng; Chang, Shengqian; Zhang, Siman; Xie, Ting; Li, Huaye; Liu, Siqi; Wang, Chang; Tao, Xiao; Zheng, Zhenrong
2017-09-01
Augmented reality (AR) technology has been applied in various areas, such as large-scale manufacturing, national defense, healthcare, movies and mass media. An important way to realize an AR display is to use a computer-generated hologram (CGH), which has traditionally suffered from low image quality and heavy computation. Meanwhile, the diffraction of the Liquid Crystal on Silicon (LCoS) device has a negative effect on image quality. In this paper, a modified algorithm based on the traditional Gerchberg-Saxton (GS) algorithm is proposed to improve the image quality, and a new method of building the experimental system is used to broaden the field of view (FOV). In the experiment, undesired zero-order diffracted light was eliminated and a high-definition 2D image was acquired with the FOV broadened to 36.1 degrees. We have also done some pilot research on 3D reconstruction with a tomography algorithm based on Fresnel diffraction. With the same experimental system, the results demonstrate the feasibility of 3D reconstruction. These modifications are effective and efficient, and may provide a better solution for AR realization.
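The paper's modified algorithm is not detailed in this abstract; for orientation, the sketch below shows the classic Gerchberg-Saxton iteration it builds on, computing a phase-only hologram for a far-field (Fourier) target. The uniform source amplitude, random initial phase, and iteration count are assumptions.

    import numpy as np

    def gerchberg_saxton(target_amplitude, iterations=50):
        """Classic GS iteration between the hologram (SLM) plane and the image
        plane, enforcing a uniform amplitude at the SLM and the target amplitude
        at the image; returns the phase pattern for a phase-only LCoS."""
        source_amplitude = np.ones_like(target_amplitude)
        phase = 2 * np.pi * np.random.rand(*target_amplitude.shape)
        field = source_amplitude * np.exp(1j * phase)
        for _ in range(iterations):
            image = np.fft.fft2(field)
            image = target_amplitude * np.exp(1j * np.angle(image))   # keep target amplitude
            field = np.fft.ifft2(image)
            field = source_amplitude * np.exp(1j * np.angle(field))   # keep uniform SLM amplitude
        return np.angle(field)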
A Survey of Immersive Technology For Maintenance Evaluations
1998-04-01
image display system. Based on original work performed at the German National Computer Science and Mathematics Research Institute (GMD), and further...simulations, architectural walk-throughs, medical simulations, general research, entertainment applications and location-based entertainment use...simulations. This study was conducted as part of a logistics research and development program Design Evaluation for Personnel, Training, and Human Factors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Friedman, Dr. Peter S.; Ball, Robert; Chapman, J. Wehrley
2010-01-01
A new radiation sensor derived from plasma panel display technology is introduced. It has the capability to detect ionizing and non-ionizing radiation over a wide energy range and the potential for use in many applications. The principle of operation is described and some early results presented.
NASA Technical Reports Server (NTRS)
Deen, Robert G.; Andres, Paul M.; Mortensen, Helen B.; Parizher, Vadim; McAuley, Myche; Bartholomew, Paul
2009-01-01
The XVD [X-Windows VICAR (video image communication and retrieval) Display] computer program offers an interactive display of VICAR and PDS (planetary data systems) images. It is designed to efficiently display multiple-GB images and runs on Solaris, Linux, or Mac OS X systems using X-Windows.
Veligdan, James T.
2005-05-31
A video image is displayed from an optical panel by splitting the image into a plurality of image components, and then projecting the image components through corresponding portions of the panel to collectively form the image. Depth of the display is correspondingly reduced.
Veligdan, James T. [Manorville, NY]
2007-05-29
A video image is displayed from an optical panel by splitting the image into a plurality of image components, and then projecting the image components through corresponding portions of the panel to collectively form the image. Depth of the display is correspondingly reduced.
Augmenting digital displays with computation
NASA Astrophysics Data System (ADS)
Liu, Jing
As we inevitably step deeper and deeper into a world connected via the Internet, more and more information will be exchanged digitally. Displays are the interface between digital information and each individual. Naturally, one fundamental goal of displays is to reproduce information as realistically as possible since humans still care a lot about what happens in the real world. Human eyes are the receiving end of such information exchange; therefore it is impossible to study displays without studying the human visual system. In fact, the design of displays is rather closely coupled with what human eyes are capable of perceiving. For example, we are less interested in building displays that emit light in the invisible spectrum. This dissertation explores how we can augment displays with computation, which takes both display hardware and the human visual system into consideration. Four novel projects on display technologies are included in this dissertation: First, we propose a software-based approach to driving multiview autostereoscopic displays. Our display algorithm can dynamically assign views to hardware display zones based on multiple observers' current head positions, substantially reducing crosstalk and stereo inversion. Second, we present a dense projector array that creates a seamless 3D viewing experience for multiple viewers. We smoothly interpolate the set of viewer heights and distances on a per-vertex basis across the array's field of view, reducing image distortion, crosstalk, and artifacts from tracking errors. Third, we propose a method for high dynamic range display calibration that takes into account the variation of the chrominance error over luminance. We propose a data structure for enabling efficient representation and querying of the calibration function, which also allows user-guided balancing between memory consumption and the amount of computation. Fourth, we present user studies that demonstrate that the ~60 Hz critical flicker fusion rate for traditional displays is not enough for some computational displays that show complex image patterns. The study focuses on displays with hidden channels, and their application to 3D+2D TV. By taking advantage of the fast-growing power of computation and sensors, these four novel display setups - in combination with display algorithms - advance the frontier of computational display research.
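The first project assigns views to hardware display zones from tracked head positions. The dissertation's actual algorithm is not given here; the sketch below is one plausible geometric reading of the idea, matching each zone's principal emission direction to the nearest tracked eye. Zone angles, observer positions, and the 65 mm eye separation are all assumed values.

    import numpy as np

    def assign_views(zone_angles_deg, observers_xz, eye_sep=0.065):
        """For each display zone (characterised by its principal emission angle),
        pick the stereo view ('L' or 'R' of some observer) whose eye direction
        best matches the zone, reducing crosstalk and stereo inversion.
        Returns a list of (observer_index, 'L' or 'R')."""
        assignments = []
        for angle in np.radians(zone_angles_deg):
            zone_dir = np.array([np.sin(angle), np.cos(angle)])
            best, best_score = None, -np.inf
            for i, (x, z) in enumerate(observers_xz):
                for eye, dx in (("L", -eye_sep / 2), ("R", eye_sep / 2)):
                    eye_dir = np.array([x + dx, z])
                    eye_dir = eye_dir / np.linalg.norm(eye_dir)
                    score = float(zone_dir @ eye_dir)   # cosine similarity
                    if score > best_score:
                        best, best_score = (i, eye), score
            assignments.append(best)
        return assignments

    # Example: 8 zones spread over ±10°, one observer 0.2 m right of centre at 2 m.
    print(assign_views(np.linspace(-10, 10, 8), [(0.2, 2.0)]))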
Tiny Devices Project Sharp, Colorful Images
NASA Technical Reports Server (NTRS)
2009-01-01
Displaytech Inc., based in Longmont, Colorado and recently acquired by Micron Technology Inc. of Boise, Idaho, first received a Small Business Innovation Research contract in 1993 from Johnson Space Center to develop tiny, electronic, color displays, called microdisplays. Displaytech has since sold over 20 million microdisplays and was ranked one of the fastest growing technology companies by Deloitte and Touche in 2005. Customers currently incorporate the microdisplays in tiny pico-projectors, which weigh only a few ounces and attach to media players, cell phones, and other devices. The projectors can convert a digital image from the typical postage stamp size into a bright, clear, four-foot projection. The company believes sales of this type of pico-projector may exceed $1.1 billion within 5 years.
Aoki, Hisae; Yamashita, Hiromasa; Mori, Toshiyuki; Fukuyo, Tsuneo; Chiba, Toshio
2014-11-01
We developed a new ultrahigh-sensitive CMOS camera using a specific sensor that has a wide range of spectral sensitivity characteristics. The objective of this study is to present our updated endoscopic technology that has successfully integrated two innovative functions: ultrasensitive imaging and advanced fluorescence viewing. Two different experiments were conducted. One was carried out to evaluate the function of the ultrahigh-sensitive camera. The other was to test the availability of the newly developed sensor and its performance as a fluorescence endoscope. In both studies, the distance from the endoscopic tip to the target was varied, and endoscopic images were taken in each setting for further comparison. In the first experiment, the 3-CCD camera failed to display clear images under low illumination, and the target was hardly seen. In contrast, the CMOS camera was able to display the targets regardless of the camera-target distance under low illumination. Under high illumination, the image quality from the two cameras was quite similar. In the second experiment, as a fluorescence endoscope, the CMOS camera was capable of clearly showing the fluorescent-activated organs. The ultrahigh-sensitivity CMOS HD endoscopic camera is expected to provide us with clear images under low illumination in addition to the fluorescent images under high illumination in the field of laparoscopic surgery.
The relationship between three-dimensional imaging and group decision making: an exploratory study.
Litynski, D M; Grabowski, M; Wallace, W A
1997-07-01
This paper describes an empirical investigation of the effect of three-dimensional (3-D) imaging on group performance in a tactical planning task. The objective of the study is to examine the role that stereoscopic imaging can play in supporting face-to-face group problem solving and decision making, in particular the alternative generation and evaluation processes in teams. It was hypothesized that with the stereoscopic display, group members would better visualize the information concerning the task environment, producing open communication and information exchanges. The experimental setting was a tactical command and control task, and the quality of the decisions and nature of the group decision process were investigated with three treatments: 1) noncomputerized, i.e., topographic maps with depth cues; 2) two-dimensional (2-D) imaging; and 3) stereoscopic imaging. The results on group performance were mixed. However, those groups with the stereoscopic displays generated more alternatives and spent less time on evaluation. In addition, the stereoscopic decision aid did not interfere with the group problem solving and decision-making processes. The paper concludes with a discussion of potential benefits, and the need to resolve demonstrated weaknesses of the technology.
Handheld hyperspectral imager system for chemical/biological and environmental applications
NASA Astrophysics Data System (ADS)
Hinnrichs, Michele; Piatek, Bob
2004-08-01
A small, hand-held, battery-operated imaging infrared spectrometer, Sherlock, has been developed by Pacific Advanced Technology and was field tested in early 2003. The Sherlock spectral imaging camera has been designed for remote gas leak detection; however, the architecture of the camera is versatile enough that it can be applied to numerous other applications such as homeland security, chemical/biological agent detection, medical and pharmaceutical applications, as well as standard research and development. This paper describes the Sherlock camera and its theory of operation, shows current applications, and touches on potential future applications for the camera. The Sherlock has an embedded PowerPC and performs real-time image processing functions in an embedded FPGA. The camera has a built-in LCD display as well as output to a standard monitor or NTSC display. It has several I/O ports (Ethernet, FireWire, RS232) and thus can be easily controlled from a remote location. In addition, software upgrades can be performed over the Ethernet, eliminating the need to send the camera back to the factory for a retrofit. Using the USB port, a mouse and keyboard can be connected, and the camera can be used in a laboratory environment as a stand-alone imaging spectrometer.
Hand-held hyperspectral imager for chemical/biological and environmental applications
NASA Astrophysics Data System (ADS)
Hinnrichs, Michele; Piatek, Bob
2004-03-01
A small, hand-held, battery-operated imaging infrared spectrometer, Sherlock, has been developed by Pacific Advanced Technology and was field tested in early 2003. The Sherlock spectral imaging camera has been designed for remote gas leak detection; however, the architecture of the camera is versatile enough that it can be applied to numerous other applications such as homeland security, chemical/biological agent detection, medical and pharmaceutical applications, as well as standard research and development. This paper describes the Sherlock camera and its theory of operation, shows current applications, and touches on potential future applications for the camera. The Sherlock has an embedded PowerPC and performs real-time image processing functions in an embedded FPGA. The camera has a built-in LCD display as well as output to a standard monitor or NTSC display. It has several I/O ports (Ethernet, FireWire, RS232) and thus can be easily controlled from a remote location. In addition, software upgrades can be performed over the Ethernet, eliminating the need to send the camera back to the factory for a retrofit. Using the USB port, a mouse and keyboard can be connected, and the camera can be used in a laboratory environment as a stand-alone imaging spectrometer.
Assessment of OLED displays for vision research
Cooper, Emily A.; Jiang, Haomiao; Vildavski, Vladimir; Farrell, Joyce E.; Norcia, Anthony M.
2013-01-01
Vision researchers rely on visual display technology for the presentation of stimuli to human and nonhuman observers. Verifying that the desired and displayed visual patterns match along dimensions such as luminance, spectrum, and spatial and temporal frequency is an essential part of developing controlled experiments. With cathode-ray tubes (CRTs) becoming virtually unavailable on the commercial market, it is useful to determine the characteristics of newly available displays based on organic light emitting diode (OLED) panels to determine how well they may serve to produce visual stimuli. This report describes a series of measurements summarizing the properties of images displayed on two commercially available OLED displays: the Sony Trimaster EL BVM-F250 and PVM-2541. The results show that the OLED displays have large contrast ratios, wide color gamuts, and precise, well-behaved temporal responses. Correct adjustment of the settings on both models produced luminance nonlinearities that were well predicted by a power function (“gamma correction”). Both displays have adjustable pixel independence and can be set to have little to no spatial pixel interactions. OLED displays appear to be a suitable, or even preferable, option for many vision research applications. PMID:24155345
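The luminance nonlinearity reported above is well predicted by a power function. A minimal sketch of recovering that "gamma" (and the peak luminance) from measured drive-level/luminance pairs by a log-log fit is given below; the array names and the synthetic example values are assumptions.

    import numpy as np

    def fit_gamma(drive_levels, luminance, max_level=255):
        """Fit L = L_max * (v / v_max)**gamma in log-log space.
        Zero drive levels and luminances are dropped to avoid log(0)."""
        v = np.asarray(drive_levels, dtype=float) / max_level
        L = np.asarray(luminance, dtype=float)
        keep = (v > 0) & (L > 0)
        slope, intercept = np.polyfit(np.log(v[keep]), np.log(L[keep]), 1)
        return slope, np.exp(intercept)   # (gamma, L_max)

    # Example with synthetic measurements around gamma = 2.2 (illustrative only).
    levels = np.arange(16, 256, 16)
    gamma, l_max = fit_gamma(levels, 250.0 * (levels / 255.0) ** 2.2)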
Hangiandreou, Nicholas J
2003-01-01
Ultrasonography (US) has been used in medical imaging for over half a century. Current US scanners are based largely on the same basic principles used in the initial devices for human imaging. Modern equipment uses a pulse-echo approach with a brightness-mode (B-mode) display. Fundamental aspects of the B-mode imaging process include basic ultrasound physics, interactions of ultrasound with tissue, ultrasound pulse formation, scanning the ultrasound beam, and echo detection and signal processing. Recent technical innovations that have been developed to improve the performance of modern US equipment include the following: tissue harmonic imaging, spatial compound imaging, extended field of view imaging, coded pulse excitation, electronic section focusing, three-dimensional and four-dimensional imaging, and the general trend toward equipment miniaturization. US is a relatively inexpensive, portable, safe, and real-time modality, all of which make it one of the most widely used imaging modalities in medicine. Although B-mode US is sometimes referred to as a mature technology, this modality continues to experience a significant evolution in capability with even more exciting developments on the horizon. Copyright RSNA, 2003
Dual-surface dielectric depth detector for holographic millimeter-wave security scanners
NASA Astrophysics Data System (ADS)
McMakin, Douglas L.; Keller, Paul E.; Sheen, David M.; Hall, Thomas E.
2009-05-01
The Transportation Security Administration (TSA) is presently deploying millimeter-wave whole body scanners at over 20 airports in the United States. Threats that may be concealed on a person are displayed to the security operator of this scanner. "Passenger privacy is ensured through the anonymity of the image. The officer attending the passenger cannot view the image, and the officer viewing the image is remotely located and cannot see the passenger. Additionally, the image cannot be stored, transmitted or printed and is deleted immediately after being viewed. Finally, the facial area of the image has been blurred to further ensure privacy." Pacific Northwest National Laboratory (PNNL) originated research into this novel security technology which has been independently commercialized by L-3 Communications, SafeView, Inc. PNNL continues to perform fundamental research into improved software techniques which are applicable to the field of holographic security screening technology. This includes performing significant research to remove human features from the imagery. Both physical and software imaging techniques have been employed. The physical imaging techniques include polarization diversity illumination and reception, dual frequency implementation, and high frequency imaging at 100 GHz. This paper will focus on a software privacy technique using a dual surface dielectric depth detector method.
Serial sectioning for examination of photoreceptor cell architecture by focused ion beam technology
Mustafi, Debarshi; Avishai, Amir; Avishai, Nanthawan; Engel, Andreas; Heuer, Arthur; Palczewski, Krzysztof
2011-01-01
Structurally deciphering complex neural networks requires technology with sufficient resolution to allow visualization of single cells and their intimate surrounding connections. Scanning electron microscopy (SEM), coupled with serial ion ablation (SIA) technology, presents a new avenue to study these networks. SIA allows ion ablation to remove nanometer sections of tissue for SEM imaging, resulting in serial section data collection for three-dimensional reconstruction. Here we highlight a method for preparing retinal tissues for imaging of photoreceptors by SIA-SEM technology. We show that this technique can be used to visualize whole rod photoreceptors and the internal disc elements from wild-type (wt) mice. The distance parameters of the discs and photoreceptors are in good agreement with previous work with other methods. Moreover, we show that large planes of retinal tissue can be imaged at high resolution to display the packing of normal rods. Finally, SIA-SEM imaging of retinal tissue from a mouse model (Nrl−/−) with phenotypic changes akin to the human disease enhanced S-cone syndrome (ESCS) revealed a structural profile of overall photoreceptor ultrastructure and internal elements that accompany this disease. Overall, this work presents a new method to study photoreceptor cells at high structural resolution that has a broad applicability to the visual neuroscience field. PMID:21439323
Features and limitations of mobile tablet devices for viewing radiological images.
Grunert, J H
2015-03-01
Mobile radiological image display systems are becoming increasingly common, necessitating a comparison of the features of these systems, specifically the operating system employed, connection to stationary PACS, data security and range of image display and image analysis functions. In the fall of 2013, a total of 17 PACS suppliers were surveyed regarding the technical features of 18 mobile radiological image display systems using a standardized questionnaire. The study also examined to what extent the technical specifications of the mobile image display systems satisfy the provisions of the German Medical Devices Act as well as the provisions of the German X-ray ordinance (RöV). There are clear differences in terms of how the mobile systems connect to the stationary PACS. Web-based solutions allow the mobile image display systems to function independently of their operating systems. The examined systems differed very little in terms of image display and image analysis functions. Mobile image display systems complement stationary PACS and can be used to view images. The impacts of the new quality assurance guidelines (QS-RL) as well as the upcoming new standard DIN 6868-157 on the acceptance testing of mobile image display units for the purpose of image evaluation are discussed. © Georg Thieme Verlag KG Stuttgart · New York.
Ocular Tolerance of Contemporary Electronic Display Devices.
Clark, Andrew J; Yang, Paul; Khaderi, Khizer R; Moshfeghi, Andrew A
2018-05-01
Electronic displays have become an integral part of life in the developed world since the revolution of mobile computing a decade ago. With the release of multiple consumer-grade virtual reality (VR) and augmented reality (AR) products in the past 2 years utilizing head-mounted displays (HMDs), as well as the development of low-cost, smartphone-based HMDs, the ability to intimately interact with electronic screens is greater than ever. VR/AR HMDs also place the display at much closer ocular proximity than traditional electronic devices while also isolating the user from the ambient environment to create a "closed" system between the user's eyes and the display. Whether the increased interaction with these devices places the user's retina at higher risk of damage is currently unclear. Herein, the authors review the discovery of photochemical damage of the retina from visible light as well as summarize relevant clinical and preclinical data regarding the influence of modern display devices on retinal health. Multiple preclinical studies have been performed with modern light-emitting diode technology demonstrating damage to the retina at modest exposure levels, particularly from blue-light wavelengths. Unfortunately, high-quality in-human studies are lacking, and the small clinical investigations performed to date have failed to keep pace with the rapid evolutions in display technology. Clinical investigations assessing the effect of HMDs on human retinal function are also yet to be performed. From the available data, modern consumer electronic displays do not appear to pose any acute risk to vision with average use; however, future studies with well-defined clinical outcomes and illuminance metrics are needed to better understand the long-term risks of cumulative exposure to electronic displays in general and with "closed" VR/AR HMDs in particular. [Ophthalmic Surg Lasers Imaging Retina. 2018;49:346-354.]. Copyright 2018, SLACK Incorporated.
Soendergaard, Mette; Newton-Northup, Jessica R; Deutscher, Susan L
2014-01-01
Ovarian cancer is among the leading causes of cancer deaths in women, and is the most fatal gynecological malignancy. Poor outcomes of the disease are a direct result of inadequate detection and diagnostic methods, which may be overcome by the development of novel efficacious screening modalities. However, the advancement of such technologies is often time-consuming and costly. To overcome this hurdle, our laboratory has established a time and cost effective method of selecting and identifying ovarian carcinoma avid bacteriophage (phage) clones using high throughput phage display technology. These phage clones were selected from a filamentous phage fusion vector (fUSE5) 15-amino acid peptide library against human ovarian carcinoma (SKOV-3) cells, and identified by DNA sequencing. Two phage clones, pM6 and pM9, were shown to exhibit high binding affinity and specificity for SKOV-3 cells using micropanning, cell binding and fluorescent microscopy studies. To validate that the binding was mediated by the phage-displayed peptides, biotinylated peptides (M6 and M9) were synthesized and the specificity for ovarian carcinoma cells was analyzed. These results showed that M6 and M9 bound to SKOV-3 cells in a dose-response manner and exhibited EC50 values of 22.9 ± 2.0 μM and 12.2 ± 2.1μM (mean ± STD), respectively. Based on this, phage clones pM6 and pM9 were labeled with the near-infrared fluorophore AF680, and examined for their pharmacokinetic properties and tumor imaging abilities in vivo. Both phage successfully targeted and imaged SKOV-3 tumors in xenografted nude mice, demonstrating the ability of this method to quickly and cost effectively develop novel ovarian carcinoma avid phage.
JTEC panel on display technologies in Japan
NASA Technical Reports Server (NTRS)
Tannas, Lawrence E., Jr.; Glenn, William E.; Credelle, Thomas; Doane, J. William; Firester, Arthur H.; Thompson, Malcolm
1992-01-01
This report is one in a series of reports that describes research and development efforts in Japan in the area of display technologies. The following are included in this report: flat panel displays (technical findings, liquid crystal display development and production, large flat panel displays (FPD's), electroluminescent displays and plasma panels, infrastructure in Japan's FPD industry, market and projected sales, and a new a-Si active matrix liquid crystal display (AMLCD) factory); materials for flat panel displays (liquid crystal materials, and light-emissive display materials); manufacturing and infrastructure of active matrix liquid crystal displays (manufacturing logistics and equipment); passive matrix liquid crystal displays (LCD basics, twisted nematic LCD's, supertwisted nematic LCD's, ferroelectric LCD's, and a comparison of passive matrix LCD technology); active matrix technology (basic active matrix technology, investment environment, amorphous silicon, polysilicon, and commercial products and prototypes); and projection displays (comparison of Japanese and U.S. display research, and technical evaluation of work).
Image quality assessment for selfies with and without super resolution
NASA Astrophysics Data System (ADS)
Kubota, Aya; Gohshi, Seiichi
2018-04-01
With the advent of cellphone cameras, particularly on smartphones, many people now take photos of themselves alone and with others in the frame; such photos are popularly known as "selfies". Most smartphones are equipped with two cameras: the front-facing and rear cameras. The camera located on the back of the smartphone is referred to as the "out-camera," whereas the one located on the front of the smartphone is called the "in-camera." In-cameras are mainly used for selfies. Some smartphones feature high-resolution cameras. However, the original image quality cannot be obtained because smartphone cameras often have low-performance lenses. Super resolution (SR) is one of the recent technological advancements that has increased image resolution. We developed a new SR technology that can be processed on smartphones. Smartphones with the new SR technology are currently available in the market and have already registered sales. However, the effective use of the new SR technology has not yet been verified. Comparing the image quality with and without SR on a smartphone display is necessary to confirm the usefulness of this new technology. Methods based on objective and subjective assessments are required to quantitatively measure image quality. It is known that typical objective assessment values, such as the Peak Signal-to-Noise Ratio (PSNR), do not correspond well with how we perceive images and video. When digital broadcasting started, the standard was determined using subjective assessment. Although subjective assessment usually comes at high cost because of personnel expenses for observers, the results are highly reproducible when the assessments are conducted under the right conditions and analyzed statistically. In this study, the subjective assessment results for selfie images are reported.
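Since the abstract contrasts subjective assessment with objective scores such as PSNR, a minimal PSNR sketch for 8-bit images follows; comparing a processed selfie against a reference image is an assumed use, not the study's protocol.

    import numpy as np

    def psnr(reference, test, max_value=255.0):
        """Peak signal-to-noise ratio in dB between two same-sized 8-bit images."""
        ref = np.asarray(reference, dtype=np.float64)
        tst = np.asarray(test, dtype=np.float64)
        mse = np.mean((ref - tst) ** 2)
        if mse == 0:
            return float("inf")
        return 10.0 * np.log10(max_value ** 2 / mse)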
Diagnostic value of the fluoroscopic triggering 3D LAVA technique for primary liver cancer.
Shen, Xiao-Yong; Chai, Chun-Hua; Xiao, Wen-Bo; Wang, Qi-Dong
2010-04-01
Primary liver cancer (PLC) is one of the most common malignant tumors. Liver acquisition with acceleration volume acquisition (LAVA), which allows simultaneous dynamic enhancement of the hepatic parenchyma and vasculature imaging, is of great help in the diagnosis of PLC. This study aimed to evaluate the application of the fluoroscopic triggering 3D LAVA technique in the imaging of PLC and liver vasculature. The clinical data and imaging findings of 38 adults with PLC (22 men and 16 women; average age 52 years), pathologically confirmed by surgical resection or biopsy, were collected and analyzed. All magnetic resonance images were obtained with a 1.5-T system (General Electric Medical Systems) with an eight-element body array coil and application of the fluoroscopic triggering 3D LAVA technique. Overall image quality was assessed on a 5-point scale by two experienced radiologists. All nodules and blood vessels were recorded and compared. The diagnostic accuracy and feasibility of LAVA were evaluated. The 38 patients yielded high-quality images of 72 liver nodules for diagnosis. The accuracy of LAVA was 97.2% (70/72), and the coincidence rate between the extent of tumor judged by dynamic enhancement and pathological examination was 87.5% (63/72). Displayed by the maximum intensity projection reconstruction, nearly all cases gave satisfactory images of branches III and IV of the hepatic artery. Furthermore, small early-stage enhancing hepatic lesions and the parallel portal vein were also well displayed. The LAVA sequence provides good multi-phase dynamic enhanced scanning of hepatic lesions. Combined with conventional scanning technology, LAVA effectively and safely displays focal hepatic lesions and the relationship between tumor and normal tissues, especially blood vessels.
Objective analysis of image quality of video image capture systems
NASA Astrophysics Data System (ADS)
Rowberg, Alan H.
1990-07-01
As Picture Archiving and Communication System (PACS) technology has matured, video image capture has become a common way of capturing digital images from many modalities. While digital interfaces, such as those which use the ACR/NEMA standard, will become more common in the future, and are preferred because of the accuracy of image transfer, video image capture will be the dominant method in the short term, and may continue to be used for some time because of the low cost and high speed often associated with such devices. Currently, virtually all installed systems use methods of digitizing the video signal that is produced for display on the scanner viewing console itself. A series of digital test images have been developed for display on either a GE CT9800 or a GE Signa MRI scanner. These images have been captured with each of five commercially available image capture systems, and the resultant images digitally transferred on floppy disk to a PC1286 computer containing Optimast image analysis software. Here the images can be displayed in a comparative manner for visual evaluation, in addition to being analyzed statistically. Each of the images has been designed to support certain tests, including noise, accuracy, linearity, gray scale range, stability, slew rate, and pixel alignment. These image capture systems vary widely in these characteristics, in addition to the presence or absence of other artifacts, such as shading and moiré patterns. Other accessories such as video distribution amplifiers and noise filters can also add or modify artifacts seen in the captured images, often giving unusual results. Each image is described, together with the tests which were performed using it. One image contains alternating black and white lines, each one pixel wide, after equilibration strips ten pixels wide. While some systems have a slew rate fast enough to track this correctly, others blur it to an average shade of gray, and do not resolve the lines, or give horizontal or vertical streaking. While many of these results are significant from an engineering standpoint alone, there are clinical implications, and some anatomy or pathology may not be visualized if an image capture system is used improperly.
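One of the test images described above alternates one-pixel black and white lines after ten-pixel equilibration strips. The sketch below generates such a pattern and measures how much of the nominal black-to-white swing survives in a captured copy; the modulation measure is an assumption for illustration, not the Optimast analysis.

    import numpy as np

    def line_pair_pattern(width=512, height=512, strip=10, low=0, high=255):
        """Ten-pixel equilibration strips followed by alternating one-pixel
        black/white vertical lines, repeated across the image width.
        Returns the pattern plus boolean masks of the black/white line pixels."""
        row = np.full(width, low, dtype=np.uint8)
        in_lines = np.zeros(width, dtype=bool)
        x = 0
        while x < width:
            x += strip                       # equilibration strip stays `low`
            for i in range(strip):           # alternating single-pixel lines
                if x + i < width:
                    row[x + i] = high if i % 2 else low
                    in_lines[x + i] = True
            x += strip
        pattern = np.tile(row, (height, 1))
        lines = np.tile(in_lines, (height, 1))
        return pattern, lines & (pattern == high), lines & (pattern == low)

    def line_modulation(captured, white_mask, black_mask, swing=255.0):
        """~1.0 if the capture fully resolves the single-pixel lines,
        ~0.0 if they are blurred to an average grey."""
        return (captured[white_mask].mean() - captured[black_mask].mean()) / swing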
Current progress and technical challenges of flexible liquid crystal displays
NASA Astrophysics Data System (ADS)
Fujikake, Hideo; Sato, Hiroto
2009-02-01
We focus on several technical approaches to flexible liquid crystal (LC) displays in this report. We have been developing flexible displays using plastic film substrates based on polymer-dispersed LC technology with molecular alignment control. In our representative devices, molecular-aligned polymer walls keep the plastic-substrate gap constant without LC alignment disorder, and aligned polymer networks create monostable switching of fast-response ferroelectric LC (FLC) for grayscale capability. In the fabrication process, a high-viscosity FLC/monomer solution was printed, sandwiched and pressed between plastic substrates. Then the polymer walls and networks were sequentially formed based on photo-polymerization-induced phase separation in the nematic phase by two exposure processes of patterned and uniform ultraviolet light. Two flexible backlight films, using direct-illumination and light-guide methods with small three-primary-color light-emitting diodes, were fabricated to obtain high-visibility display images. The fabricated flexible FLC panels were driven by external transistor arrays, internal organic thin film transistor (TFT) arrays, and poly-Si TFT arrays. We achieved full-color moving-image displays using the flexible FLC panel and the flexible backlight film based on a field-sequential-color driving technique. Alternatively, for backlight-free flexible LC displays, flexible reflective devices of twisted guest-host nematic LC and cholesteric LC were discussed with molecular-aligned polymer walls. A single-substrate device structure and a fabrication method using a self-standing polymer-stabilized nematic LC film and a polymer ceiling layer were also proposed for obtaining LC devices with excellent flexibility.
Virtual Reality Used to Serve the Glenn Engineering Community
NASA Technical Reports Server (NTRS)
Carney, Dorothy V.
2001-01-01
There are a variety of innovative new visualization tools available to scientists and engineers for the display and analysis of their models. At the NASA Glenn Research Center, we have an ImmersaDesk, a large, single-panel, semi-immersive display device. This versatile unit can interactively display three-dimensional images in visual stereo. Our challenge is to make this virtual reality platform accessible and useful to researchers. An example of a successful application of this computer technology is the display of blade out simulations. NASA Glenn structural dynamicists, Dr. Kelly Carney and Dr. Charles Lawrence, funded by the Ultra Safe Propulsion Project under Base R&T, are researching blade outs, when turbine engines lose a fan blade during operation. Key objectives of this research include minimizing danger to the aircraft via effective blade containment, predicting destructive loads due to the imbalance following a blade loss, and identifying safe, cost-effective designs and materials for future engines.
Practical low-cost stereo head-mounted display
NASA Astrophysics Data System (ADS)
Pausch, Randy; Dwivedi, Pramod; Long, Allan C., Jr.
1991-08-01
A high-resolution head-mounted display has been developed from substantially cheaper components than previous systems. Monochrome displays provide 720 by 280 monochrome pixels to each eye in a one-inch-square region positioned approximately one inch from each eye. The display hardware is the Private Eye, manufactured by Reflection Technologies, Inc. The tracking system uses the Polhemus Isotrak, providing (x, y, z, azimuth, elevation and roll) information on the user's head position and orientation 60 times per second. In combination with a modified Nintendo Power Glove, this system provides a full-functionality virtual reality/simulation system. Using two host 80386 computers, real-time wire-frame images can be produced. Other virtual reality systems require roughly $250,000 in hardware, while this one requires only $5,000. Stereo is particularly useful for this system because shading or occlusion cannot be used as depth cues.
In memoriam: Fumio Okano, innovator of 3D display
NASA Astrophysics Data System (ADS)
Arai, Jun
2014-06-01
Dr. Fumio Okano, a well-known pioneer and innovator of three-dimensional (3D) displays, passed away on 26 November 2013 in Kanagawa, Japan, at the age of 61. Okano joined Japan Broadcasting Corporation (NHK) in Tokyo in 1978. In 1981, he began researching high-definition television (HDTV) cameras, HDTV systems, ultrahigh-definition television systems, and 3D televisions at NHK Science and Technology Research Laboratories. His publications have been frequently cited by other researchers. Okano served eight years as chair of the annual SPIE conference on Three- Dimensional Imaging, Visualization, and Display and another four years as co-chair. Okano's leadership in this field will be greatly missed and he will be remembered for his enduring contributions and innovations in the field of 3D displays. This paper is a summary of the career of Fumio Okano, as well as a tribute to that career and its lasting legacy.
NASA Astrophysics Data System (ADS)
Bianchi, R. M.; Boudreau, J.; Konstantinidis, N.; Martyniuk, A. C.; Moyse, E.; Thomas, J.; Waugh, B. M.; Yallup, D. P.; ATLAS Collaboration
2017-10-01
At the beginning, HEP experiments made use of photographic images both to record and store experimental data and to illustrate their findings. As the experiments evolved, they needed new ways to visualize their data. With the availability of computer graphics, software packages to display event data and the detector geometry started to be developed. Here, an overview of the usage of event display tools in HEP is presented. Then the case of the ATLAS experiment is considered in more detail and two widely used event display packages are presented, Atlantis and VP1, focusing on the software technologies they employ, as well as their strengths, differences and their usage in the experiment: from physics analysis to detector development, and from online monitoring to outreach and communication. Towards the end, the other ATLAS visualization tools are briefly presented as well. Future development plans and improvements in the ATLAS event display packages are also discussed.
Study on high power ultraviolet laser oil detection system
NASA Astrophysics Data System (ADS)
Jin, Qi; Cui, Zihao; Bi, Zongjie; Zhang, Yanchao; Tian, Zhaoshuo; Fu, Shiyou
2018-03-01
Laser Induced Fluorescence (LIF) is a widely used remote sensing technology. It obtains information about oil spills and oil film thickness by analyzing the characteristics of the stimulated fluorescence, and it has important applications in the rapid analysis of water composition. An LIF detection system for marine oil pollution is designed in this paper, which uses a 355 nm high-energy pulsed laser as the excitation light source. A high-sensitivity image intensifier is used in the detector. The host computer sends a digital signal through a serial port to achieve nanosecond-scale range-gate width control for the image intensifier. The target fluorescence spectrum image is displayed on the image intensifier by adjusting the delay time and the width of the gate pulse. The spectral image is coupled to a CCD by lens imaging so that the spectrum can be displayed and the data analyzed on a computer. The system was used to detect the surface of a floating oil film at a distance of 25 m and to obtain the fluorescence spectra of different oil products. The fluorescence spectra of the oil products are clearly distinguishable. The experimental results show that the system can realize high-precision long-range fluorescence detection and accurately reflect the fluorescence characteristics of the target, with broad application prospects in marine oil pollution identification and oil film thickness detection.
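The gate delay for such a range-gated system follows directly from the round-trip time of flight to the target. A minimal sketch for the 25 m stand-off mentioned above is given below; the suggested gate width is an assumption.

    C = 299_792_458.0  # speed of light, m/s

    def gate_delay_ns(target_range_m):
        """Round-trip time of flight from laser to target and back, in nanoseconds."""
        return 2.0 * target_range_m / C * 1e9

    # Example: the 25 m stand-off used in the experiment needs ~167 ns of delay;
    # the gate width around it (e.g. tens of ns) would be tuned to the fluorescence decay.
    print(f"{gate_delay_ns(25.0):.1f} ns")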
NPS assessment of color medical image displays using a monochromatic CCD camera
NASA Astrophysics Data System (ADS)
Roehrig, Hans; Gu, Xiliang; Fan, Jiahua
2012-10-01
This paper presents an approach to Noise Power Spectrum (NPS) assessment of color medical displays without using an expensive imaging colorimeter. The R, G and B color uniform patterns were shown on the display under study and the images were taken using a high-resolution monochromatic camera. A colorimeter was used to calibrate the camera images. Synthetic intensity images were formed by the weighted sum of the R, G, B and the dark screen images. Finally, the NPS analysis was conducted on the synthetic images. The proposed method replaces an expensive imaging colorimeter for NPS evaluation and suggests a potential solution for routine color medical display QA/QC in the clinical area, especially when imaging of display devices is desired.
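A minimal sketch of the two steps named above (weighted synthetic intensity image, then NPS of a uniform patch), assuming NumPy arrays for the camera captures and illustrative luminance-style weights; the paper's colorimeter-derived weights and exact NPS normalization are not reproduced.

```python
import numpy as np

def synthetic_intensity(img_r, img_g, img_b, img_dark, w=(0.2126, 0.7152, 0.0722)):
    """Weighted sum of camera captures of the R, G, B patterns minus the dark screen.
    The weights are illustrative (Rec. 709 luminance); the paper's colorimeter
    calibration would supply display-specific weights."""
    return w[0] * (img_r - img_dark) + w[1] * (img_g - img_dark) + w[2] * (img_b - img_dark)

def noise_power_spectrum(img, pixel_pitch_mm=1.0):
    """2D NPS of a uniform patch: subtract the mean, FFT, scale by pixel area / N
    (one common convention)."""
    patch = img - img.mean()
    ny, nx = patch.shape
    nps = np.abs(np.fft.fftshift(np.fft.fft2(patch))) ** 2
    return nps * (pixel_pitch_mm ** 2) / (nx * ny)

# Example with synthetic data standing in for camera captures of uniform patterns.
rng = np.random.default_rng(0)
shape = (256, 256)
dark = rng.normal(5, 1, shape)
r, g, b = (rng.normal(100, 2, shape) for _ in range(3))
nps = noise_power_spectrum(synthetic_intensity(r, g, b, dark))
```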
ERIC Educational Resources Information Center
Ong, Alex
2010-01-01
The use of augmented reality (AR) tools, where virtual objects such as tables and graphs can be displayed and be interacted with in real scenes created from imaging devices, in mainstream school curriculum is uncommon, as they are potentially costly and sometimes bulky. Thus, such learning tools are mainly applied in tertiary institutions, such as…
Contingency diagrams as teaching tools
Mattaini, Mark A.
1995-01-01
Contingency diagrams are particularly effective teaching tools, because they provide a means for students to view the complexities of contingency networks present in natural and laboratory settings while displaying the elementary processes that constitute those networks. This paper sketches recent developments in this visualization technology and illustrates approaches for using contingency diagrams in teaching. PMID:22478208
The advanced linked extended reconnaissance and targeting technology demonstration project
NASA Astrophysics Data System (ADS)
Cruickshank, James; de Villers, Yves; Maheux, Jean; Edwards, Mark; Gains, David; Rea, Terry; Banbury, Simon; Gauthier, Michelle
2007-06-01
The Advanced Linked Extended Reconnaissance & Targeting (ALERT) Technology Demonstration (TD) project is addressing key operational needs of the future Canadian Army's Surveillance and Reconnaissance forces by fusing multi-sensor and tactical data, developing automated processes, and integrating beyond line-of-sight sensing. We discuss concepts for displaying and fusing multi-sensor and tactical data within an Enhanced Operator Control Station (EOCS). The sensor data can originate from the Coyote's own visible-band and IR cameras, laser rangefinder, and ground-surveillance radar, as well as beyond line-of-sight systems such as a mini-UAV and unattended ground sensors. The authors address technical issues associated with the use of fully digital IR and day video cameras and discuss video-rate image processing developed to assist the operator to recognize poorly visible targets. Automatic target detection and recognition algorithms processing both IR and visible-band images have been investigated to draw the operator's attention to possible targets. The machine generated information display requirements are presented with the human factors engineering aspects of the user interface in this complex environment, with a view to establishing user trust in the automation. The paper concludes with a summary of achievements to date and steps to project completion.
Image change detection systems, methods, and articles of manufacture
Jones, James L.; Lassahn, Gordon D.; Lancaster, Gregory D.
2010-01-05
Aspects of the invention relate to image change detection systems, methods, and articles of manufacture. According to one aspect, a method of identifying differences between a plurality of images is described. The method includes loading a source image and a target image into the memory of a computer, constructing source and target edge images from the source and target images to enable processing of multiband images, displaying the source and target images on a display device of the computer, aligning the source and target edge images, and switching the display between the source image and the target image on the display device to enable identification of differences between the source image and the target image.
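A hedged sketch of the described workflow, with gradient-magnitude edge images and a brute-force integer-shift alignment standing in for the patent's unspecified algorithms.

```python
import numpy as np

def edge_image(img):
    """Gradient-magnitude 'edge image' for a single band; multiband inputs can be
    reduced band-by-band (a stand-in for the patent's unspecified construction)."""
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy)

def align_by_shift(source_edges, target_edges, max_shift=10):
    """Brute-force search for the integer (dy, dx) shift that best matches the edge images."""
    best, best_shift = -np.inf, (0, 0)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(target_edges, (dy, dx), axis=(0, 1))
            score = np.sum(source_edges * shifted)
            if score > best:
                best, best_shift = score, (dy, dx)
    return best_shift

# "Switching" the display then amounts to alternating the aligned source and target
# images in a viewer loop so differences pop out visually.
```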
Federal Register 2010, 2011, 2012, 2013, 2014
2011-05-19
... Effects Devices and Image Display Devices and Components and Products Containing Same; Notice of... United States after importation of certain motion-sensitive sound effects devices and image display... devices and image display devices and components and products containing same that infringe one or more of...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-09-27
... Frames and Image Display Devices and Components Thereof; Notice of Institution of Investigation... United States after importation of certain digital photo frames and image display devices and components... certain digital photo frames and image display devices and components thereof that infringe one or more of...
NASA Technical Reports Server (NTRS)
Martin, Russel A.; Ahumada, Albert J., Jr.; Larimer, James O.
1992-01-01
This paper describes the design and operation of a new simulation model for color matrix display development. It models the physical structure, the signal processing, and the visual perception of static displays, to allow optimization of display design parameters through image quality measures. The model is simple, implemented in the Mathematica computer language, and highly modular. Signal processing modules operate on the original image. The hardware modules describe backlights and filters, the pixel shape, and the tiling of the pixels over the display. Small regions of the displayed image can be visualized on a CRT. Visual perception modules assume static foveal images. The image is converted into cone catches and then into luminance, red-green, and blue-yellow images. A Haar transform pyramid separates the three images into spatial frequency and direction-specific channels. The channels are scaled by weights taken from human contrast sensitivity measurements of chromatic and luminance mechanisms at similar frequencies and orientations. Each channel provides a detectability measure. These measures allow the comparison of images displayed on prospective devices and, by that, the optimization of display designs.
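A hedged sketch of the Haar-pyramid channel step described above, here a single decomposition level with placeholder contrast-sensitivity weights; the original Mathematica model's cone-catch, opponent-colour and multi-level stages are not reproduced.

```python
import numpy as np

def haar_level(img):
    """One level of a 2D Haar transform: returns a low-pass band plus horizontal,
    vertical and diagonal detail channels (image dimensions must be even)."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    low  = (a + b + c + d) / 4.0
    horz = (a + b - c - d) / 4.0   # responds to horizontal structure
    vert = (a - b + c - d) / 4.0   # responds to vertical structure
    diag = (a - b - c + d) / 4.0   # responds to diagonal structure
    return low, (horz, vert, diag)

def channel_detectability(image, weights=(1.0, 1.0, 0.7)):
    """Weight each detail channel (placeholder CSF weights, not the measured ones)
    and return an energy-based detectability number per channel."""
    _, details = haar_level(image.astype(float))
    return [w * np.sqrt(np.mean(ch ** 2)) for ch, w in zip(details, weights)]
```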
Emerging digital micromirror device (DMD) applications
NASA Astrophysics Data System (ADS)
Dudley, Dana; Duncan, Walter M.; Slaughter, John
2003-01-01
For the past six years, Digital Light Processing technology from Texas Instruments has made significant inroads in the projection display market. With products enabling the world's smallest data and video projectors, HDTVs, and digital cinema, DLP technology is extremely powerful and flexible. At the heart of these display solutions is the Texas Instruments Digital Micromirror Device (DMD), a semiconductor-based "light switch" array of thousands of individually addressable, tiltable mirror-pixels. With the success of the DMD as a spatial light modulator for projector applications, dozens of new applications are now being enabled by general-use DMD products that have recently become available to developers. The same light switching speed and "on-off" (contrast) ratio that have resulted in superior projector performance, along with the capability of operation outside the visible spectrum, make the DMD very attractive for many applications, including volumetric display, holographic data storage, lithography, scientific instrumentation, and medical imaging. This paper presents an overview of past and future DMD performance in the context of new DMD applications, cites several examples of emerging products, and describes the DMD components and tools now available to developers.
NASA Astrophysics Data System (ADS)
Defanti, Thomas A.; Acevedo, Daniel; Ainsworth, Richard A.; Brown, Maxine D.; Cutchin, Steven; Dawe, Gregory; Doerr, Kai-Uwe; Johnson, Andrew; Knox, Chris; Kooima, Robert; Kuester, Falko; Leigh, Jason; Long, Lance; Otto, Peter; Petrovic, Vid; Ponto, Kevin; Prudhomme, Andrew; Rao, Ramesh; Renambot, Luc; Sandin, Daniel J.; Schulze, Jurgen P.; Smarr, Larry; Srinivasan, Madhu; Weber, Philip; Wickham, Gregory
2011-03-01
The CAVE, a walk-in virtual reality environment typically consisting of 4-6 3 m-by-3 m sides of a room made of rear-projected screens, was first conceived and built in 1991. In the nearly two decades since its conception, the supporting technology has improved so that current CAVEs are much brighter, at much higher resolution, and have dramatically improved graphics performance. However, rear-projection-based CAVEs typically must be housed in a 10 m-by-10 m-by-10 m room (allowing space behind the screen walls for the projectors), which limits their deployment to large spaces. The CAVE of the future will be made of tessellated panel displays, eliminating the projection distance, but the implementation of such displays is challenging. Early multi-tile, panel-based, virtual-reality displays have been designed, prototyped, and built for the King Abdullah University of Science and Technology (KAUST) in Saudi Arabia by researchers at the University of California, San Diego, and the University of Illinois at Chicago. New means of image generation and control are considered key contributions to the future viability of the CAVE as a virtual-reality device.
A design of real time image capturing and processing system using Texas Instrument's processor
NASA Astrophysics Data System (ADS)
Wee, Toon-Joo; Chaisorn, Lekha; Rahardja, Susanto; Gan, Woon-Seng
2007-09-01
In this work, we developed and implemented an image capturing and processing system that is equipped with the capability of capturing images from an input video in real time. The input video can come from a PC, a video camcorder or a DVD player. We developed two modes of operation in the system. In the first mode, an input image from the PC is processed on the processing board (a development platform with a digital signal processor) and is displayed on the PC. In the second mode, the currently captured image from the video camcorder (or DVD player) is processed on the board but is displayed on an LCD monitor. The major difference between our system and other existing conventional systems is that the image-processing functions are performed on the board instead of the PC (so that the functions can be used for further developments on the board). The user can control the operations of the board through the Graphic User Interface (GUI) provided on the PC. In order to have smooth image data transfer between the PC and the board, we employed Real Time Data Transfer (RTDX™) technology to create a link between them. For the image processing functions, we developed three main groups of functions: (1) Point Processing; (2) Filtering; and (3) 'Others'. Point Processing includes rotation, negation and mirroring. The Filtering category provides median, adaptive, smoothing and sharpening filters in the time domain. The 'Others' category provides auto-contrast adjustment, edge detection, segmentation and sepia coloring; these functions either add an effect to the image or enhance it. We developed and implemented our system using the C/C# programming languages on the TMS320DM642 (DM642) board from Texas Instruments (TI). The system was showcased at the College of Engineering (CoE) exhibition 2006 at Nanyang Technological University (NTU), where more than 40 users tried it. It is demonstrated that our system is adequate for real-time image capturing. Our system can be applied to applications such as medical imaging and video surveillance.
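A minimal sketch of the Point Processing group (negation, mirroring, rotation) in NumPy rather than in the board-side C/C# implementation, which is not reproduced here.

```python
import numpy as np

def negate(img, max_val=255):
    """Negative of an 8-bit image."""
    return max_val - img

def mirror(img, horizontal=True):
    """Mirror left-right (or top-bottom when horizontal=False)."""
    return img[:, ::-1] if horizontal else img[::-1, :]

def rotate90(img, times=1):
    """Rotate by multiples of 90 degrees; arbitrary-angle rotation on the board
    would require interpolation, which is omitted here."""
    return np.rot90(img, k=times)
```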
Preparing a business justification for going electronic.
Ortiz, A Orlando; Luyckx, Michael P
2002-01-01
Exponential advances in the technology sector and computer industry have benefited the science and practice of radiology. Modalities such as digital radiography, computed radiography, computed tomography, magnetic resonance imaging, ultrasound, digital angiography, and gamma cameras are all capable of producing DICOM compliant images. Text can likewise be acquired using voice recognition technology (VRT) and efficiently rendered into a digital format. All of these digital data sets can subsequently be transferred over a network between machines for display and further manipulation on workstations. Large capacity archiving units are required to store these voluminous data sets. The enterprise components of radiology departments and imaging centers--radiology information systems (RIS) and picture archiving and communications systems (PACS)--have thus undergone a transition from hardcopy to softcopy. When preparing to make transition to a digital environment, the first step is introspective. A detailed SWOT (strengths, weaknesses, opportunities and threats) analysis, with a focus on the status of "electronic preparedness," ensues. The next step in the strategic planning process is to formulate responses to the following questions: Will this technology acquisition provide sufficient value to my organization to justify the expense? Is there a true need for the new technology? What issues or problems does this technology address? What customer needs will this technology satisfy today and tomorrow? How will the organization's shareholders benefit from this technology? The answers to these questions and the questions that they in turn generate will stimulate the strategic planning process to define demands, investigate technology and investment options, identify resources and set goals. The mission of your radiology center will determine what you will demand from the electronic environment. All radiology practices must address the demand of clinical service. Additional demands based on your mission may include education and research. The investigation of options is probably the most time consuming portion of the analysis. It is in this stage where the system architecture is drafted. Important contributions must be solicited from your information technology division, radiologists and other physicians, hospital administration and any other service where the use of imaging technology information is required and beneficial. Vendors and consultants can be extremely valuable in generating workflow diagrams, which include imaging acquisition components and imaging display components. A request for proposal (RFP) may facilitate this step. A detailed inventory of imaging equipment, imaging equipment locations and use, imaging equipment DICOM compatibility, imaging equipment upgrade requirements, reading locations and user locations must be obtained and confirmed. It is a good idea to take a careful inventory of your resources during the process of investigating system architecture and financial options. An often-ignored issue is the human resource allocation that is required to implement, maintain and upgrade the system. These costs must be estimated and included in the financial analysis. Further, to predict the finances of your operation in the future, a solid understanding of your center's historical financial data is required. This will enable you to make legitimate and reasonable financial calculations using incremental volumes. 
The radiology center must formulate and articulate discrete clinical and business goals for the transition to a digital environment that are consistent with the institutional or enterprise mission. Once goals are set, it is possible to generate a strategic plan. It is necessary to establish individual accountability for all aspects of the planning and implementation process. A realistic timetable should be implemented. Keep in mind that this is a dynamic process; technology is rapidly changing, as are clinical service demands and regulatory initiatives. It is therefore prudent to monitor the process, make appropriate revisions when necessary and address contingencies as they arise.
Spacesuit Data Display and Management System
NASA Technical Reports Server (NTRS)
Hall, David G.; Sells, Aaron; Shah, Hemal
2009-01-01
A prototype embedded avionics system has been designed for the next generation of NASA extra-vehicular-activity (EVA) spacesuits. The system performs biomedical and other sensor monitoring, image capture, data display, and data transmission. An existing NASA Phase I and II award winning design for an embedded computing system (ZIN vMetrics - BioWATCH) has been modified. The unit has a reliable, compact form factor with flexible packaging options. These innovations are significant, because current state-of-the-art EVA spacesuits do not provide capability for data displays or embedded data acquisition and management. The Phase 1 effort achieved Technology Readiness Level 4 (high fidelity breadboard demonstration). The breadboard uses a commercial-grade field-programmable gate array (FPGA) with embedded processor core that can be upgraded to a space-rated device for future revisions.
Image based performance analysis of thermal imagers
NASA Astrophysics Data System (ADS)
Wegner, D.; Repasi, E.
2016-05-01
Due to advances in technology, modern thermal imagers resemble sophisticated image processing systems in functionality. Advanced signal and image processing tools enclosed in the camera body extend the basic image capturing capability of thermal cameras, in order to enhance the display presentation of the captured scene or of specific scene details. Usually, the implemented methods are proprietary company expertise and are distributed without extensive documentation. This makes the comparison of thermal imagers, especially those from different companies, a difficult task (or at least a very time-consuming and expensive one, e.g. requiring the execution of a field trial and/or an observer trial). A thermal camera equipped with turbulence mitigation capability is an example of such a closed system. The Fraunhofer IOSB has started to build up a system for testing thermal imagers by image-based methods in the lab environment. This will extend our capability of measuring the classical IR-system parameters (e.g. MTF, MTDP, etc.) in the lab. The system is set up around an IR-scene projector, which is necessary for the thermal display (projection) of an image sequence to the IR camera under test. The same set of thermal test sequences can be presented to every unit under test; for turbulence mitigation tests, this could be, for example, the same turbulence sequence. During system tests, gradual variation of input parameters (e.g. thermal contrast) can be applied. First ideas on test scene selection and on how to assemble an imaging suite (a set of image sequences) for the analysis of imaging thermal systems containing such black boxes in the image forming path are discussed.
A Workstation for Interactive Display and Quantitative Analysis of 3-D and 4-D Biomedical Images
Robb, R.A.; Heffeman, P.B.; Camp, J.J.; Hanson, D.P.
1986-01-01
The capability to extract objective and quantitatively accurate information from 3-D radiographic biomedical images has not kept pace with the capabilities to produce the images themselves. This is rather an ironic paradox, since on the one hand the new 3-D and 4-D imaging capabilities promise significant potential for providing greater specificity and sensitivity (i.e., precise objective discrimination and accurate quantitative measurement of body tissue characteristics and function) in clinical diagnostic and basic investigative imaging procedures than ever possible before, but on the other hand, the momentous advances in computer and associated electronic imaging technology which have made these 3-D imaging capabilities possible have not been concomitantly developed for full exploitation of these capabilities. Therefore, we have developed a powerful new microcomputer-based system which permits detailed investigations and evaluation of 3-D and 4-D (dynamic 3-D) biomedical images. The system comprises a special workstation to which all the information in a large 3-D image data base is accessible for rapid display, manipulation, and measurement. The system provides important capabilities for simultaneously representing and analyzing both structural and functional data and their relationships in various organs of the body. This paper provides a detailed description of this system, as well as some of the rationale, background, theoretical concepts, and practical considerations related to system implementation.
Softcopy quality ruler method: implementation and validation
NASA Astrophysics Data System (ADS)
Jin, Elaine W.; Keelan, Brian W.; Chen, Junqing; Phillips, Jonathan B.; Chen, Ying
2009-01-01
A softcopy quality ruler method was implemented for the International Imaging Industry Association (I3A) Camera Phone Image Quality (CPIQ) Initiative. This work extends ISO 20462 Part 3 by virtue of creating reference digital images of known subjective image quality, complementing the hardcopy Standard Reference Stimuli (SRS). The softcopy ruler method was developed using images from a Canon EOS 1Ds Mark II D-SLR digital still camera (DSC) and a Kodak P880 point-and-shoot DSC. Images were viewed on an Apple 30in Cinema Display at a viewing distance of 34 inches. Ruler images were made for 16 scenes. Thirty ruler images were generated for each scene, representing ISO 20462 Standard Quality Scale (SQS) values of approximately 2 to 31 at an increment of one just noticeable difference (JND) by adjusting the system modulation transfer function (MTF). A Matlab GUI was developed to display the ruler and test images side-by-side with a user-adjustable ruler level controlled by a slider. A validation study was performed at Kodak, Vista Point Technology, and Aptina Imaging in which all three companies set up a similar viewing lab to run the softcopy ruler method. The results show that the three sets of data are in reasonable agreement with each other, with the differences within the range expected from observer variability. Compared to previous implementations of the quality ruler, the slider-based user interface allows approximately 2x faster assessments with 21.6% better precision.
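A hedged sketch of generating ruler variants by attenuating the system MTF with a Gaussian low-pass; the calibrated ISO 20462 mapping from MTF change to one-JND SQS steps is not reproduced, so the blur parameters here are illustrative only.

```python
import numpy as np

def apply_gaussian_mtf(img, sigma_cycles):
    """Attenuate an image in the frequency domain with a Gaussian MTF.
    Larger sigma_cycles gives a stronger roll-off, i.e. a blurrier image."""
    f = np.fft.fft2(img.astype(float))
    ny, nx = img.shape
    fy = np.fft.fftfreq(ny)[:, None]   # cycles/pixel along rows
    fx = np.fft.fftfreq(nx)[None, :]   # cycles/pixel along columns
    mtf = np.exp(-(fx ** 2 + fy ** 2) * (sigma_cycles ** 2))
    return np.real(np.fft.ifft2(f * mtf))

# A hypothetical ruler: 30 progressively blurred versions of one scene.
# ruler = [apply_gaussian_mtf(scene, s) for s in np.linspace(0, 60, 30)]
```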
Veligdan, James Thomas
1997-01-01
An optical display includes a plurality of optical waveguides each including a cladding bound core for guiding internal display light between first and second opposite ends by total internal reflection. The waveguides are stacked together to define a collective display thickness. Each of the cores includes a heterogeneous portion defining a light scattering site disposed longitudinally between the first and second ends. Adjacent ones of the sites are longitudinally offset from each other for forming a longitudinal internal image display over the display thickness upon scattering of internal display light thereagainst for generating a display image. In a preferred embodiment, the waveguides and scattering sites are transparent for transmitting therethrough an external image in superposition with the display image formed by scattering the internal light off the scattering sites for defining a heads up display.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-03-18
... Image Display Devices and Components Thereof; Issuance of a Limited Exclusion Order and Cease and Desist... within the United States after importation of certain digital photo frames and image display devices and...: (1) The unlicensed entry of digital photo frames and image display devices and components thereof...
Head Mounted Display with a Roof Mirror Array Fold
NASA Technical Reports Server (NTRS)
Olczak, Eugene (Inventor)
2014-01-01
The present invention includes a head mounted display (HMD) worn by a user. The HMD includes a display projecting an image through an optical lens. The HMD also includes a one-dimensional retro reflective array receiving the image through the optical lens at a first angle with respect to the display and deflecting the image at a second angle different than the first angle with respect to the display. The one-dimensional retro reflective array reflects the image in order to project the image onto an eye of the user.
Applying colour science in colour design
NASA Astrophysics Data System (ADS)
Luo, Ming Ronnier
2006-06-01
Although colour science has been widely used in a variety of industries over the years, it has not been fully explored in the field of product design. This paper will initially introduce the three main application fields of colour science: colour specification, colour-difference evaluation and colour appearance modelling. By integrating these advanced colour technologies with modern colour imaging devices such as displays, cameras, scanners and printers, some computer systems have recently been developed to assist designers in designing colour palettes through colour selection by means of a number of widely used colour order systems, in creating harmonised colour schemes via a categorical colour system, in generating emotion colours using various colour emotional scales and in facilitating colour naming via a colour-name library. All systems are also capable of providing accurate colour representation on displays and output to different imaging devices such as printers.
NPS assessment of color medical displays using a monochromatic CCD camera
NASA Astrophysics Data System (ADS)
Roehrig, Hans; Gu, Xiliang; Fan, Jiahua
2012-02-01
This paper presents an approach to Noise Power Spectrum (NPS) assessment of color medical displays without using an expensive imaging colorimeter. The R, G and B color uniform patterns were shown on the display under study and the images were taken using a high resolution monochromatic camera. A colorimeter was used to calibrate the camera images. Synthetic intensity images were formed by the weighted sum of the R, G, B and the dark screen images. Finally the NPS analysis was conducted on the synthetic images. The proposed method replaces an expensive imaging colorimeter for NPS evaluation, which also suggests a potential solution for routine color medical display QA/QC in the clinical area, especially when imaging of display devices is desired.
Designing Websites for Displaying Large Data Sets and Images on Multiple Platforms
NASA Astrophysics Data System (ADS)
Anderson, A.; Wolf, V. G.; Garron, J.; Kirschner, M.
2012-12-01
The desire to build websites that analyze and display ever increasing amounts of scientific data and images pushes for website designs which utilize large displays and use the display area as efficiently as possible. Yet scientists and users of their data increasingly wish to access these websites in the field and on mobile devices. This results in the need to develop websites that can support a wide range of devices and screen sizes, and that optimally use whatever display area is available. Historically, designers have addressed this issue by building two websites, one for mobile devices and one for desktop environments, resulting in increased cost, duplication of work, and longer development times. Recent advancements in web design technology and techniques allow for the development of a single website that dynamically adjusts to the type of device being used to browse the website (smartphone, tablet, desktop). In addition, they provide the opportunity to truly optimize whatever display area is available. HTML5 and CSS3 give web designers media query statements which allow design style sheets to be aware of the size of the display being used, and to format web content differently based upon the queried response. Web elements can be rendered in a different size, position, or even removed from the display entirely, based upon the size of the display area. Using HTML5/CSS3 media queries in this manner is referred to as "Responsive Web Design" (RWD). RWD in combination with technologies such as LESS and Twitter Bootstrap allows the web designer to build websites which not only dynamically respond to the browser display size being used, but do so in very controlled and intelligent ways, ensuring that good layout and graphic design principles are followed. At the University of Alaska Fairbanks, the Alaska Satellite Facility SAR Data Center (ASF) recently redesigned their popular Vertex application and converted it from a traditional, fixed-layout website into an RWD site built on HTML5, LESS and Twitter Bootstrap. Vertex is a data portal for remotely sensed imagery of the earth, offering Synthetic Aperture Radar (SAR) data products from the global ASF archive. By using Responsive Web Design, ASF is able to provide access to a massive collection of SAR imagery and allow the user to use mobile devices and desktops to maximum advantage. ASF's Vertex website demonstrates that with increased interface flexibility, scientists, managers and users can increase their personal effectiveness by accessing data portals from their preferred device as their science dictates.
Sreenilayam, Sithara P.; Panarin, Yuri P.; Vij, Jagdish K.; Panov, Vitaly P.; Lehmann, Anne; Poppe, Marco; Prehm, Marko; Tschierske, Carsten
2016-01-01
Liquid crystals (LCs) represent one of the foundations of modern communication and photonic technologies. Present display technologies are based mainly on nematic LCs, which suffer from limited response time for use in active colour sequential displays and limited image grey scale. Herein we report the first observation of a spontaneously formed helix in a polar tilted smectic LC phase (SmC phase) of achiral bent-core (BC) molecules with the axis of helix lying parallel to the layer normal and a pitch much shorter than the optical wavelength. This new phase shows fast (∼30 μs) grey-scale switching due to the deformation of the helix by the electric field. Even more importantly, defect-free alignment is easily achieved for the first time for a BC mesogen, thus providing potential use in large-scale devices with fast linear and thresholdless electro-optical response. PMID:27156514
NASA Astrophysics Data System (ADS)
Sreenilayam, Sithara P.; Panarin, Yuri P.; Vij, Jagdish K.; Panov, Vitaly P.; Lehmann, Anne; Poppe, Marco; Prehm, Marko; Tschierske, Carsten
2016-05-01
Liquid crystals (LCs) represent one of the foundations of modern communication and photonic technologies. Present display technologies are based mainly on nematic LCs, which suffer from limited response time for use in active colour sequential displays and limited image grey scale. Herein we report the first observation of a spontaneously formed helix in a polar tilted smectic LC phase (SmC phase) of achiral bent-core (BC) molecules with the axis of helix lying parallel to the layer normal and a pitch much shorter than the optical wavelength. This new phase shows fast (~30 μs) grey-scale switching due to the deformation of the helix by the electric field. Even more importantly, defect-free alignment is easily achieved for the first time for a BC mesogen, thus providing potential use in large-scale devices with fast linear and thresholdless electro-optical response.
Interactive display system having a digital micromirror imaging device
Veligdan, James T.; DeSanto, Leonard; Kaull, Lisa; Brewster, Calvin
2006-04-11
A display system includes a waveguide optical panel having an inlet face and an opposite outlet face. A projector cooperates with a digital imaging device, e.g. a digital micromirror imaging device, for projecting an image through the panel for display on the outlet face. The imaging device includes an array of mirrors tiltable between opposite display and divert positions. The display positions reflect an image light beam from the projector through the panel for display on the outlet face. The divert positions divert the image light beam away from the panel, and are additionally used for reflecting a probe light beam through the panel toward the outlet face. Covering a spot on the panel, e.g. with a finger, reflects the probe light beam back through the panel toward the inlet face for detection thereat and providing interactive capability.
Second International Airborne Remote Sensing Conference and Exhibition
NASA Technical Reports Server (NTRS)
1996-01-01
The conference provided four days of displays and scientific presentations on applications, technology, and science of sub-orbital data gathering and analysis. The twelve displayed aircraft equipped with sophisticated instrumentation represented a wide range of environmental and reconnaissance missions, including marine pollution control, fire detection, Open Skies Treaty verification, thermal mapping, hydrographical measurements, military research, ecological and agricultural observations, geophysical research, atmospheric and meteorological observations, and aerial photography. The U.S. Air Force and the On-Site Inspection Agency displayed the new Open Skies Treaty verification Boeing OC-135B that promotes international monitoring of military forces and activities. SRI's Jetstream uses foliage and ground penetrating SAR for forest inventories, toxic waste delineation, and concealed target and buried unexploded ordnance detection. Earth Search Sciences's Gulfstream 1 with prototype miniaturized airborne hyperspectral imaging equipment specializes in accurate mineral differentiation, low-cost hydrocarbon exploration, and nonproliferation applications. John E. Chance and the U.S. Army Corps of Engineers displayed the Bell 2 helicopter with SHOALS that performs hydrographic surveying of navigation projects, coastal environment assessment, and nautical charting surveys. Bechtel Nevada and the U.S. DOE displayed both the Beech King Air B-200 platform, equipped to provide first response to nuclear accidents and routine environmental surveillance, and the MBB BO-105 helicopter, used in spectral analysis for environmental assessment and military appraisal. NASA Ames Research Center's high-altitude Lockheed ER-2 assists in earth resources monitoring research in atmospheric chemistry, oceanography, and electronic sensors, as well as ozone and greenhouse studies and satellite calibration and data validation. Ames also showcased the Learjet 24 Airborne Observatory that completed missions in Venus cloud cover analysis, Quadrantid meteor shower studies, extra-solar far-infrared ionic structure line measurements, Cape Kennedy launch support, and studies in air pollution. The Products and Services Exhibit showcased new sensor and image processing technologies, aircraft data collection services, unmanned vehicle technology, platform equipment, turn-key services, software and workstations, GPS services, publications, and processing and integration systems by 58 exhibitors. The participation of aircraft users and crews provided unique dialogue between those who plan data collection and operate the remote sensing technology, and those who supply the data processing and integration equipment. Research results using hyperspectral imagery, radar and optical sensors, lidar, digital aerial photography, and integrated systems were presented. Major research and development programs and campaigns were reviewed, including CNR's LARA Project and the European Space Agency's 1991-1995 Airborne Campaign. The pre-conference short courses addressed airborne video, photogrammetry, hyperspectral data analysis, digital orthophotography, imagery and GIS integration, IFSAR, GPS, and spectrometer calibration.
Obstacles encountered in the development of the low vision enhancement system.
Massof, R W; Rickman, D L
1992-01-01
The Johns Hopkins Wilmer Eye Institute and the NASA Stennis Space Center are collaborating on the development of a new high technology low vision aid called the Low Vision Enhancement System (LVES). The LVES consists of a binocular head-mounted video display system, video cameras mounted on the head-mounted display, and real-time video image processing in a system package that is battery powered and portable. Through a phased development approach, several generations of the LVES can be made available to the patient in a timely fashion. This paper describes the LVES project with major emphasis on technical problems encountered or anticipated during the development process.
Extracting the Data From the LCM vk4 Formatted Output File
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wendelberger, James G.
These are slides about extracting the data from the LCM vk4 formatted output file. The following is covered: the vk4 file produced by the Keyence VK software; custom analysis, since there is no off-the-shelf way to read the file; reading the binary data in a vk4 file; the various offsets in decimal lines; finding the height image data directly in MATLAB; the binary output at the beginning of the height image data; color image information; color image binary data; color image decimal and binary data; MATLAB code to read a vk4 file (choose a file, read the file, compute offsets, read the optical image and laser optical image, read and compute the laser intensity image, read the height image, timing, display the height image, display the laser intensity image, display the RGB laser optical images, display the RGB optical images, display the beginning data and save images to the workspace, gamma correction subroutine); reading intensity from the vk4 file (linear in the low range, linear in the high range); gamma correction for vk4 files; computing the gamma intensity correction; and observations.
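A heavily hedged Python counterpart to the MATLAB reading pattern described in the slides; the offsets, field order and bit depth below are hypothetical placeholders, not the actual vk4 layout.

```python
import struct
import numpy as np

def read_height_image(path, offset_table_pos=12, height_entry_index=0):
    """Illustrative binary reader: the offset positions and header fields used
    here are hypothetical placeholders, not the real vk4 specification."""
    with open(path, "rb") as f:
        data = f.read()
    # Read a hypothetical 32-bit little-endian offset pointing at the height block.
    (height_offset,) = struct.unpack_from("<I", data, offset_table_pos + 4 * height_entry_index)
    # Hypothetical block header: width, height, bit depth as three 32-bit values.
    width, height, bit_depth = struct.unpack_from("<3I", data, height_offset)
    pixels = np.frombuffer(data, dtype="<u4", count=width * height,
                           offset=height_offset + 12)
    return pixels.reshape(height, width)
```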
Display of travelling 3D scenes from single integral-imaging capture
NASA Astrophysics Data System (ADS)
Martinez-Corral, Manuel; Dorado, Adrian; Hong, Seok-Min; Sola-Pikabea, Jorge; Saavedra, Genaro
2016-06-01
Integral imaging (InI) is a 3D auto-stereoscopic technique that captures and displays 3D images. We present a method for easily projecting the information recorded with this technique by transforming the integral image into a plenoptic image, as well as choosing, at will, the field of view (FOV) and the focused plane of the displayed plenoptic image. Furthermore, with this method we can generate, from a single integral image, a sequence of images that simulates a camera travelling through the scene. The application of this method makes it possible to improve the quality of 3D display images and videos.
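A hedged sketch of the basic integral-to-plenoptic pixel remapping (gathering the same pixel position from every elemental image into a sub-aperture view); the paper's FOV selection, refocusing and travelling-camera synthesis are not reproduced.

```python
import numpy as np

def subaperture_views(integral_img, n_lens_y, n_lens_x):
    """Rearrange an integral image (a grid of elemental images) into sub-aperture views.
    integral_img: 2D array of shape (n_lens_y * py, n_lens_x * px)."""
    H, W = integral_img.shape
    py, px = H // n_lens_y, W // n_lens_x
    # views[v, u] collects pixel (v, u) from every elemental image.
    views = np.empty((py, px, n_lens_y, n_lens_x), dtype=integral_img.dtype)
    for v in range(py):
        for u in range(px):
            views[v, u] = integral_img[v::py, u::px]
    return views
```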
Ehlers, Justis P; Srivastava, Sunil K; Feiler, Daniel; Noonan, Amanda I; Rollins, Andrew M; Tao, Yuankai K
2014-01-01
To demonstrate key integrative advances in microscope-integrated intraoperative optical coherence tomography (iOCT) technology that will facilitate adoption and utilization during ophthalmic surgery. We developed a second-generation prototype microscope-integrated iOCT system that interfaces directly with a standard ophthalmic surgical microscope. Novel features for improved design and functionality included improved profile and ergonomics, as well as a tunable lens system for optimized image quality and heads-up display (HUD) system for surgeon feedback. Novel material testing was performed for potential suitability for OCT-compatible instrumentation based on light scattering and transmission characteristics. Prototype surgical instruments were developed based on material testing and tested using the microscope-integrated iOCT system. Several surgical maneuvers were performed and imaged, and surgical motion visualization was evaluated with a unique scanning and image processing protocol. High-resolution images were successfully obtained with the microscope-integrated iOCT system with HUD feedback. Six semi-transparent materials were characterized to determine their attenuation coefficients and scatter density with an 830 nm OCT light source. Based on these optical properties, polycarbonate was selected as a material substrate for prototype instrument construction. A surgical pick, retinal forceps, and corneal needle were constructed with semi-transparent materials. Excellent visualization of both the underlying tissues and surgical instrument were achieved on OCT cross-section. Using model eyes, various surgical maneuvers were visualized, including membrane peeling, vessel manipulation, cannulation of the subretinal space, subretinal intraocular foreign body removal, and corneal penetration. Significant iterative improvements in integrative technology related to iOCT and ophthalmic surgery are demonstrated.
A virtual image chain for perceived image quality of medical display
NASA Astrophysics Data System (ADS)
Marchessoux, Cédric; Jung, Jürgen
2006-03-01
This paper describes a virtual image chain for medical display (project VICTOR, granted in the 5th framework program by the European Commission). The chain starts from the raw data of an image digitizer (CR, DR) or from synthetic patterns and covers image enhancement (MUSICA by Agfa) and both display possibilities, hardcopy (film on a viewing box) and softcopy (monitor). A key feature of the chain is a complete image-wise approach. A first prototype is implemented in an object-oriented software platform. The display chain consists of several modules. Raw images are either taken from scanners (CR, DR) or from a pattern generator, in which the characteristics of CR/DR systems are introduced by their MTF and their dose-dependent Poisson noise. The image undergoes image enhancement and then goes to display. For softcopy display, color and monochrome monitors are used in the simulation. The image is down-sampled. The non-linear response of a color monitor is taken into account by the GOG or S-curve model, whereas the DICOM Grayscale Standard Display Function (GSDF) is used for monochrome display. The MTF of the monitor is applied to the image in intensity levels. For hardcopy display, the combination of film, printer, lightbox and viewing condition is modeled. The image is up-sampled and the DICOM GSDF or a Kanamori look-up table is applied. An anisotropic model for the MTF of the printer is applied to the image in intensity levels. The density-dependent color (XYZ) of the hardcopy film is introduced by look-up tables. Finally, a human visual system model is applied to the intensity images (XYZ in terms of cd/m2) in order to eliminate non-visible differences. Comparison leads to visible differences, which are quantified by higher-order image quality metrics. A specific image viewer is used for the visualization of the intensity image and the visual difference maps.
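A hedged sketch of the digitizer-model stage (MTF plus dose-dependent Poisson noise) applied to a synthetic pattern; the parameter values are illustrative and the enhancement, display and human-visual-system modules of the chain are not reproduced.

```python
import numpy as np

def simulate_detector(pattern, dose=1000.0, mtf_sigma=0.15):
    """pattern: values in [0, 1] giving relative exposure. Blurs the expected
    quanta with a Gaussian MTF, then adds dose-dependent Poisson noise.
    dose and mtf_sigma are illustrative, not values from the paper."""
    rng = np.random.default_rng()
    ny, nx = pattern.shape
    fy = np.fft.fftfreq(ny)[:, None]
    fx = np.fft.fftfreq(nx)[None, :]
    mtf = np.exp(-(fx ** 2 + fy ** 2) / (2 * mtf_sigma ** 2))
    blurred = np.real(np.fft.ifft2(np.fft.fft2(pattern * dose) * mtf))
    # Poisson noise on the blurred expected quanta (clipped to stay non-negative).
    return rng.poisson(np.clip(blurred, 0, None)).astype(float)
```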
NASA Astrophysics Data System (ADS)
Zamorano, Lucia J.; Dujovny, Manuel; Ausman, James I.
1990-01-01
"Real time" surgical treatment planning utilizing multimodality imaging (CT, MRI, DA) has been developed to provide the neurosurgeon with 2D multiplanar and 3D views of a patient's lesion for stereotactic planning. Both diagnostic and therapeutic stereotactic procedures have been implemented utilizing workstation (SUN 1/10) and specially developed software and hardware (developed in collaboration with TOMO Medical Imaging Technology, Southfield, MI). This provides complete 3D and 2D free-tilt views as part of the system instrumentation. The 2D Multiplanar includes reformatted sagittal, coronal, paraaxial and free tilt oblique vectors at any arbitrary plane of the patient's lesion. The 3D includes features for extracting a view of the target volume localized by a process including steps of automatic segmentation, thresholding, and/or boundary detection with 3D display of the volumes of interest. The system also includes the capability of interactive playback of reconstructed 3D movies, which can be viewed at any hospital network having compatible software on strategical locations or at remote sites through data transmission and record documentation by image printers. Both 2D and 3D menus include real time stereotactic coordinate measurements and trajectory definition capabilities as well as statistical functions for computing distances, angles, areas, and volumes. A combined interactive 3D-2D multiplanar menu allows simultaneous display of selected trajectory, final optimization, and multiformat 2D display of free-tilt reformatted images perpendicular to selected trajectory of the entire target volume.
Development of an immersive virtual reality head-mounted display with high performance.
Wang, Yunqi; Liu, Weiqi; Meng, Xiangxiang; Fu, Hanyi; Zhang, Daliang; Kang, Yusi; Feng, Rui; Wei, Zhonglun; Zhu, Xiuqing; Jiang, Guohua
2016-09-01
To resolve the contradiction between large field of view and high resolution in immersive virtual reality (VR) head-mounted displays (HMDs), an HMD monocular optical system with a large field of view and high resolution was designed. The system was fabricated by adopting aspheric technology with CNC grinding and using a high-resolution LCD as the image source. With this monocular optical system, an HMD binocular optical system with a wide-range, continuously adjustable interpupillary distance was achieved in the form of partially overlapping fields of view (FOV) combined with a screw adjustment mechanism. A fast image-processor-centered LCD driver circuit and an image preprocessing system were also built to address binocular vision inconsistency in the partially overlapping FOV binocular optical system. The distortions of the HMD optical system with a large field of view were measured, and the optical distortions in the display and the trapezoidal distortions introduced during image processing were corrected by a calibration model of reverse rotations and translations. A high-performance, not-fully-transparent VR HMD device with high resolution (1920×1080) and a large FOV [141.6°(H)×73.08°(V)] was developed. The average angular resolution over the full field of view is 18.6 pixels/degree. With the device, high-quality VR simulations can be completed under various scenarios, and the device can be utilized for simulated training in aeronautics, astronautics, and other fields with corresponding platforms. The developed device is of practical significance.
A method of rapidly evaluating image quality of NED optical system
NASA Astrophysics Data System (ADS)
Sun, Qi; Qiu, Chuankai; Yang, Huan
2014-11-01
In recent years, with the development of micro-display technology, advanced optics, and software and hardware, near-to-eye display (NED) optical systems are finding a wide range of potential applications in the fields of entertainment and virtual reality. However, research on evaluating the image quality of this kind of optical system is comparatively lagging behind. Although some methods and equipment for evaluation exist, they cannot be applied in commercial production because of their complex operation and inaccuracy. In this paper, a method is proposed and a Rapid Evaluation System (RES) is designed to evaluate the image quality of such optical systems rapidly and accurately. Firstly, a set of parameters to which the eye is sensitive and which also express the quality of the system is extracted and quantized as criteria, so that evaluation standards can be established. Then, these parameters can be measured by the RES, which consists of a micro-display, a CCD camera, a computer and other components. Through a calibration process, the measurement results of the RES are made accurate and credible, and the relationship between objective measurement, subjective evaluation and the RES is established. After that, the image quality of an optical system can be evaluated simply by measuring its parameters. The RES is simple, and the evaluation results are accurate and consistent with human vision. The method can therefore be used not only for optimizing the design of optical systems, but also for evaluation in commercial production.
exVis: a visual analysis tool for wind tunnel data
NASA Astrophysics Data System (ADS)
Deardorff, D. G.; Keeley, Leslie E.; Uselton, Samuel P.
1998-05-01
exVis is a software tool created to support interactive display and analysis of data collected during wind tunnel experiments. It is a result of a continuing project to explore the uses of information technology in improving the effectiveness of aeronautical design professionals. The data analysis goals are accomplished by allowing aerodynamicists to display and query data collected by new data acquisition systems and to create traditional wind tunnel plots from this data by interactively interrogating these images. exVis was built as a collection of distinct modules to allow for rapid prototyping, to foster evolution of capabilities, and to facilitate object reuse within other applications being developed. It was implemented using C++ and Open Inventor, commercially available object-oriented tools. The initial version was composed of three main classes. Two of these modules are autonomous viewer objects intended to display the test images (ImageViewer) and the plots (GraphViewer). The third main class is the Application User Interface (AUI) which manages the passing of data and events between the viewers, as well as providing a user interface to certain features. User feedback was obtained on a regular basis, which allowed for quick revision cycles and appropriately enhanced feature sets. During the development process additional classes were added, including a color map editor and a data set manager. The ImageViewer module was substantially rewritten to add features and to use the data set manager. The use of an object-oriented design was successful in allowing rapid prototyping and easy feature addition.
Image quality assessment metric for frame accumulated image
NASA Astrophysics Data System (ADS)
Yu, Jianping; Li, Gang; Wang, Shaohui; Lin, Ling
2018-01-01
The quality of a medical image determines the accuracy of diagnosis, and gray-scale resolution is an important parameter of image quality. However, current objective metrics are not well suited to assessing medical images obtained by frame accumulation technology: they pay little attention to gray-scale resolution, are based mainly on spatial resolution, and are limited to the 256-level gray scale of existing display devices. This paper therefore proposes a metric, the "mean signal-to-noise ratio" (MSNR), based on signal-to-noise considerations, in order to evaluate frame-accumulated medical image quality more reasonably. We demonstrate its potential application through a series of images acquired under a constant illumination signal. Here, the mean of a sufficiently large number of images was regarded as the reference image. Several groups of images with different numbers of accumulated frames were formed and their MSNR values were calculated. The results of the experiment show that, compared with other quality assessment methods, the metric is simpler, more effective, and more suitable for assessing frame-accumulated images whose gray scale and precision surpass those of the original images.
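The abstract does not give the exact MSNR formula, so the sketch below implements one plausible reading (mean reference signal divided by the RMS deviation of the accumulated image from the reference), labeled as an assumption.

```python
import numpy as np

def msnr(accumulated_img, reference_img, eps=1e-12):
    """One plausible 'mean signal-to-noise ratio': mean reference signal divided by
    the RMS deviation of the test image from the reference. This is an assumed
    formulation, not necessarily the paper's exact definition."""
    noise_rms = np.sqrt(np.mean((accumulated_img - reference_img) ** 2))
    return float(np.mean(reference_img) / (noise_rms + eps))

# reference_img would be the mean of a large stack of frames under constant
# illumination; accumulated_img the average of k frames, so MSNR should rise
# roughly with sqrt(k).
```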
Zhang, Xiao-Bo; Ge, Xiao-Guang; Jin, Yan; Shi, Ting-Ting; Wang, Hui; Li, Meng; Jing, Zhi-Xian; Guo, Lan-Ping; Huang, Lu-Qi
2017-11-01
With the development of computer and image processing technology, image recognition technology has been applied at all stages of the national census of traditional Chinese medicine resources. ① In the preparatory work, in order to establish a unified library of traditional Chinese medicine resources, text recognition technology applied to paper materials assists in digitizing the various categories of material related to Chinese medicine resources; and to determine the representative survey areas and plots for each census team, remote sensing image classification and other techniques, based on satellite remote sensing images, vegetation maps and other basic data, assist in determining the key investigation areas. ② In the field investigation, in order to obtain the planting area of Chinese herbal medicines accurately, decision tree models, spectral features and object-oriented methods are used to assist the regional identification and area estimation of Chinese medicinal materials. ③ In the data processing stage, in order to determine the types of Chinese medicine resources in a region relatively accurately, image recognition techniques applied to individual plant photographs, specimens and names assist the statistical summary of the types of traditional Chinese medicine resources. ④ In the application of the census results, a Chinese medicine resource identification app and an authentic-herbs 3D display system have been developed, based on the medicinal resources and individual samples of medicinal herbs, to assist the identification of Chinese medicine resources and of the distinguishing characteristics of herbs. Introducing image recognition technology into the census of Chinese medicine resources to assist census personnel in their work not only reduces manual workload and improves work efficiency, but also improves the informatization and shareability of the census results. As the census of Chinese medicine resources deepens, image recognition technology will continue to play its unique role in the related work. Copyright© by the Chinese Pharmaceutical Association.
Active learning in optics and photonics: Liquid Crystal Display in the do-it-yourself
NASA Astrophysics Data System (ADS)
Vauderwange, Oliver; Haiss, Ulrich; Wozniak, Peter; Israel, Kai; Curticapean, Dan
2015-10-01
Monitors are at the center of media productions and serve an important function as the main visual interface. Tablets and smartphones are becoming more and more important work tools in the media industry. As an extension to our lecture contents, an intensive discussion of different display technologies and their applications now takes place. The established LCD (Liquid Crystal Display) technology and the promising OLED (Organic Light Emitting Diode) technology are the focus. The classic LCD is currently the most important display technology. The paper will present how the students develop a feel for display technologies alongside the theoretical scientific basics. The workshop focuses increasingly on the technical aspects of display technology and has the goal of deepening the students' understanding of the functionality by having them build simple liquid crystal displays themselves. The authors will present their experience in the field of display technologies. A mixture of theoretical and practical lectures has the goal of a deeper understanding in the field of digital color representation and display technologies. The design and development of a suitable learning environment with the required infrastructure is crucial. The main focus of this paper is on the hands-on optics workshop "Liquid Crystal Display in the do-it-yourself".
ACTS Satellite Telemammography Network Experiments
NASA Technical Reports Server (NTRS)
Kachmar, Brian A.; Kerczewski, Robert J.
2000-01-01
The Satellite Networks and Architectures Branch of NASA's Glenn Research Center has developed and demonstrated several advanced satellite communications technologies through the Advanced Communications Technology Satellite (ACTS) program. One of these technologies is the implementation of a Satellite Telemammography Network (STN) encompassing NASA Glenn, the Cleveland Clinic Foundation, the University of Virginia, and the Ashtabula County Medical Center. This paper will present a look at the STN from its beginnings to the impact it may have on future telemedicine applications. Results obtained using the experimental ACTS satellite demonstrate the feasibility of Satellite Telemammography. These results have improved teleradiology processes and mammography image manipulation, and enabled advances in remote screening methodologies. Future implementation of satellite telemammography using next generation commercial satellite networks will be explored. In addition, the technical aspects of the project will be discussed, in particular how the project has evolved from using NASA developed hardware and software to commercial off the shelf (COTS) products. Development of asymmetrical link technologies was an outcome of this work. Improvements in the display of digital mammographic images, better understanding of end-to-end system requirements, and advances in radiological image compression were achieved as a result of the research. Finally, rigorous clinical medical studies are required for new technologies such as digital satellite telemammography to gain acceptance in the medical establishment. These experiments produced data that were useful in two key medical studies that addressed the diagnostic accuracy of compressed satellite transmitted digital mammography images. The results of these studies will also be discussed.
Stereo 3D vision adapter using commercial DIY goods
NASA Astrophysics Data System (ADS)
Sakamoto, Kunio; Ohara, Takashi
2009-10-01
A conventional display can show only one screen, and it is impossible to enlarge the screen area, for example to twice its size. A mirror, meanwhile, supplies the same image, but the mirror image is usually upside down. Assume that the images on the original screen and on the virtual screen in the mirror are completely different and that both images can be displayed independently; it would then be possible to double the screen area. This extension method enables observers to view the virtual image plane and enlarges the screen area by a factor of two. Although the display region is doubled, such a virtual display cannot by itself produce 3D images. In this paper, we present an extension method using a unidirectional diffusing image screen and an improvement for displaying a 3D image using orthogonally polarized image projection.
NASA Astrophysics Data System (ADS)
Choe, Giseok; Nang, Jongho
The tiled-display system has been used as a Computer Supported Cooperative Work (CSCW) environment, in which multiple local (and/or remote) participants cooperate using some shared applications whose outputs are displayed on a large-scale and high-resolution tiled display, which is controlled by a cluster of PCs, one PC per display. In order to make the collaboration effective, each remote participant should be aware of all CSCW activities on the tiled-display system in real time. This paper presents a mechanism for capturing and delivering all activities on the tiled-display system to remote participants in real time. In the proposed mechanism, the screen images of all PCs are periodically captured and delivered to the Merging Server, which maintains separate buffers to store the captured images from the PCs. The mechanism selects one tile image from each buffer, merges the images to make a screen shot of the whole tiled display, clips a Region of Interest (ROI), compresses it and streams it to remote participants in real time. A technical challenge in the proposed mechanism is how to select a set of tile images, one from each buffer, for merging so that the tile images displayed at the same time on the tiled display can be properly merged together. This paper presents three selection algorithms: a sequential selection algorithm, a capturing-time-based algorithm, and a capturing-time and visual-consistency-based algorithm. It also proposes a mechanism for providing several virtual cameras on the tiled-display system to remote participants by concurrently clipping several different ROIs from the same merged tiled-display images, and delivering them after compression with the video encoders requested by the remote participants. By interactively changing and resizing his/her own ROI, a remote participant can check the activities on the tiled display effectively. Experiments on a 3 × 2 tiled-display system show that the proposed merging algorithm can build a tiled-display image stream synchronously, and the ROI-based clipping and delivering mechanism can provide individual views on the tiled-display system to multiple remote participants in real time.
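The capturing-time-based selection idea lends itself to a short illustration. The sketch below is a minimal, hypothetical rendering of that step, assuming each tile PC timestamps its captured frames; the buffer layout, the 40 ms tolerance and the function names are illustrative and not taken from the paper.

```python
import numpy as np

def select_by_capture_time(buffers, tolerance=0.040):
    """Pick one captured frame per tile so that all capture times agree
    within `tolerance` seconds (illustrative value, not from the paper).

    buffers: list of lists of (timestamp, image) tuples, one list per tile PC,
             each sorted by timestamp.
    Returns a list of images (one per tile) or None if no consistent set exists.
    """
    # Use each frame of the first buffer as a reference time, then look for
    # the temporally closest frame in every other buffer.
    for t_ref, img_ref in buffers[0]:
        selection = [img_ref]
        consistent = True
        for buf in buffers[1:]:
            ts = np.array([t for t, _ in buf])
            i = int(np.argmin(np.abs(ts - t_ref)))
            if abs(ts[i] - t_ref) > tolerance:
                consistent = False
                break
            selection.append(buf[i][1])
        if consistent:
            return selection
    return None

def merge_tiles(tiles, cols):
    """Stitch equally sized tile images (H x W x 3 arrays, row-major order)
    into one full tiled-display frame."""
    rows = [np.hstack(tiles[r * cols:(r + 1) * cols])
            for r in range(len(tiles) // cols)]
    return np.vstack(rows)
```

The merged frame would then be cropped to the requested ROI and passed to a video encoder, as the abstract describes.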
Algorithm-Based Motion Magnification for Video Processing in Urological Laparoscopy.
Adams, Fabian; Schoelly, Reto; Schlager, Daniel; Schoenthaler, Martin; Schoeb, Dominik S; Wilhelm, Konrad; Hein, Simon; Wetterauer, Ulrich; Miernik, Arkadiusz
2017-06-01
Minimally invasive surgery is in constant further development and has replaced many conventional operative procedures. If vascular structure movement could be detected during these procedures, it could reduce the risk of vascular injury and conversion to open surgery. The recently proposed motion-amplifying algorithm, Eulerian Video Magnification (EVM), has been shown to substantially enhance minimal object changes in digitally recorded video that are barely perceptible to the human eye. We adapted and examined this technology for use in urological laparoscopy. Video sequences of routine urological laparoscopic interventions were recorded and further processed using spatial decomposition and filtering algorithms. The freely available EVM algorithm was investigated for its usability in real-time processing. In addition, a new image processing technology, the CRS iimotion Motion Magnification (CRSMM) algorithm, was specifically adjusted for endoscopic requirements, applied, and validated by our working group. Using EVM, no significant motion enhancement could be detected without severe impairment of the image resolution, motion, and color presentation. The CRSMM algorithm significantly improved image quality in terms of motion enhancement. In particular, the pulsation of vascular structures could be displayed more accurately than with EVM. Motion magnification image processing technology has the potential for clinical importance as a video-optimizing modality in endoscopic and laparoscopic surgery. Barely detectable (micro)movements can be visualized using this noninvasive, marker-free method. Despite these optimistic results, the technology requires considerable further technical development and clinical testing.
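For readers unfamiliar with the underlying algorithm, the following is a minimal sketch of the Eulerian Video Magnification idea referenced above (spatial low-pass, temporal band-pass, amplify and add back), assuming a grayscale video stored as a NumPy array. The filter order, pass band and amplification factor are illustrative defaults, not the settings used in the cited study or in the CRSMM algorithm.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.signal import butter, filtfilt

def eulerian_magnify(frames, fps, amplification=20.0,
                     low_hz=0.8, high_hz=3.0, sigma=10):
    """Sketch of Eulerian Video Magnification:
    spatially low-pass each frame, temporally band-pass every pixel,
    amplify the band-passed signal and add it back.

    frames: (T, H, W) float array in [0, 1] of a grayscale video.
    The 0.8-3 Hz pass band roughly covers human pulse rates; all
    parameters are assumed example values.
    """
    blurred = np.stack([gaussian_filter(f, sigma) for f in frames])
    b, a = butter(2, [low_hz, high_hz], btype="bandpass", fs=fps)
    band = filtfilt(b, a, blurred, axis=0)   # temporal filtering per pixel
    return np.clip(frames + amplification * band, 0.0, 1.0)
```

Real-time use, as discussed in the abstract, would require a streaming (causal) temporal filter rather than the offline zero-phase filtering used here.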
[Remote access to a web-based image distribution system].
Bergh, B; Schlaefke, A; Frankenbach, R; Vogl, T J
2004-06-01
To assess different network and security technologies for remote access to a web-based image distribution system of a hospital intranet. Following preparatory testing, the time-to-display (TTD) was measured for three image types (CR, CT, MR). The evaluation included two remote access technologies consisting of direct ISDN-Dial-Up or VPN connection (Virtual Private Network), with three different connection speeds of 64, 128 (ISDN) and 768 Kbit/s (ADSL-Asymmetric Digital Subscriber Line), as well as with lossless and lossy compression. Depending on the image type, the TTD with lossless compression for 64 Kbit/s varied from 1:00 to 2:40 minutes, for 128 Kbit/s from 0:35 to 1:15 minutes and for ADSL from 0:15 to 0:45 minutes. The ISDN-Dial-Up connection was superior to VPN technology at 64 Kbit/s but did not allow higher connection speeds. Lossy compression reduced the TTD by half for all measurements. VPN technology is preferable to direct Dial-Up connections since it offers higher connection speeds and advantages in usage and security. For occasional usage, 128 Kbit/s (ISDN) can be considered sufficient, especially in conjunction with lossy compression. ADSL should be chosen when a more frequent usage is anticipated, whereby lossy compression may be omitted. Due to higher bandwidths and improved usability, the web-based approach appears superior to conventional teleradiology systems.
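The reported times-to-display are consistent with a simple back-of-the-envelope model in which TTD is dominated by transmission time. The sketch below illustrates that relationship; the image size, compression ratio and overhead factor are assumptions for illustration, not values from the study.

```python
def time_to_display(image_mbytes, link_kbits, compression_ratio=1.0, overhead=1.2):
    """Rough time-to-display estimate in seconds.

    image_mbytes      uncompressed image size in megabytes (assumed)
    link_kbits        nominal link speed in kbit/s (64, 128, 768, ...)
    compression_ratio e.g. 2.0 for typical lossless, higher for lossy
    overhead          fudge factor for protocol/rendering overhead (assumed)
    """
    bits = image_mbytes * 8e6 / compression_ratio
    return overhead * bits / (link_kbits * 1e3)

# Example: an assumed 2 MB image, lossless 2:1, over a 64 kbit/s line
print(round(time_to_display(2, 64, 2.0)))
# -> ~150 s, about 2.5 minutes, the same order as the measured lossless TTDs at 64 Kbit/s
```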
Image volume analysis of omnidirectional parallax regular-polyhedron three-dimensional displays.
Kim, Hwi; Hahn, Joonku; Lee, Byoungho
2009-04-13
Three-dimensional (3D) displays having regular-polyhedron structures are proposed and their imaging characteristics are analyzed. Four types of conceptual regular-polyhedron 3D displays, i.e., hexahedron, octahedron, dodecahedron, and icosahedron, are considered. In principle, a regular-polyhedron 3D display can present omnidirectional full-parallax 3D images. Design conditions for structural factors such as the viewing angle of a facet panel and the observation distance for a 3D display with omnidirectional full parallax are studied. As a main issue, the image volumes containing virtual 3D objects represented by the four types of regular-polyhedron displays are comparatively analyzed.
Han, Ruizhen; He, Yong; Liu, Fei
2012-01-01
This paper presents a feasibility study on a real-time in-field pest classification system based on a Blackfin DSP and 3G wireless communication technology. The prototype system is composed of a remote on-line classification platform (ROCP), which uses a digital signal processor (DSP) as its core CPU, and a host control platform (HCP). The ROCP is in charge of acquiring the pest image, extracting image features and detecting the pest class using an Artificial Neural Network (ANN) classifier. It sends the image data, encoded using JPEG 2000 on the DSP, to the HCP through the 3G network at the same time for further identification. The image transmission and communication are accomplished using 3G technology. Our system transmits the data via a commercial base station. The system works properly within the effective coverage of base stations, regardless of the distance from the ROCP to the HCP. In the HCP, the image data are decoded and the pest image displayed in real time for further identification. Authentication and performance tests of the prototype system were conducted. The authentication test showed that the image data were transmitted correctly. Based on the performance test results on six classes of pests, the average accuracy is 82%. Considering the different poses of live pests and the varying field lighting conditions, the result is satisfactory. The proposed technique is well suited for implementation in on-line field pest classification for precision agriculture. PMID:22736996
Efficient fabrication method of nano-grating for 3D holographic display with full parallax views.
Wan, Wenqiang; Qiao, Wen; Huang, Wenbin; Zhu, Ming; Fang, Zongbao; Pu, Donglin; Ye, Yan; Liu, Yanhua; Chen, Linsen
2016-03-21
Without any special glasses, multiview 3D displays based on diffractive optics can present high-resolution, full-parallax 3D images over an ultra-wide viewing angle. The enabling optical component, namely the phase plate, can produce arbitrarily distributed view zones by carefully designing the orientation and the period of each nano-grating pixel. However, such a 3D display screen has been restricted to a limited size owing to the time-consuming process of fabricating the nano-gratings on the phase plate. In this paper, we propose and develop a lithography system that can fabricate the phase plate efficiently. We made two phase plates with full nano-grating pixel coverage at a speed of 20 mm²/min, a 500-fold increase in efficiency compared with E-beam lithography. One 2.5-inch phase plate generated 9-view 3D images with horizontal parallax, while the other 6-inch phase plate produced 64-view 3D images with full parallax. The angular divergence along the horizontal and vertical axes was 1.5 degrees and 1.25 degrees, respectively, slightly larger than the value of 1.2 degrees simulated by the Finite Difference Time Domain (FDTD) method. The intensity variation was less than 10% for each viewpoint, consistent with the simulation results. On top of each phase plate, a high-resolution binary masking pattern containing the amplitude information of all viewing zones was well aligned. We achieved a resolution of 400 pixels/inch and a viewing angle of 40 degrees for 9-view 3D images with horizontal parallax. In the other prototype, the resolution of each view was 160 pixels/inch and the viewing angle was 50 degrees for 64-view 3D images with full parallax. As demonstrated in the experiments, the homemade lithography system provides the key fabrication technology for multiview 3D holographic displays.
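The relationship between a nano-grating pixel's period and orientation and the view zone it serves can be illustrated with the first-order grating equation. The sketch below assumes normal incidence and a single design wavelength; it is a geometric illustration only, not the authors' design procedure.

```python
import numpy as np

def grating_for_view(x_v, y_v, z_v, wavelength=532e-9):
    """For a nano-grating pixel at the origin that should steer first-order
    diffracted light toward a viewing point (x_v, y_v, z_v) metres away,
    return (period, orientation_deg) under a simple normal-incidence,
    first-order grating-equation model: sin(theta) = wavelength / period.
    """
    r = np.sqrt(x_v**2 + y_v**2 + z_v**2)
    sin_theta = np.sqrt(x_v**2 + y_v**2) / r        # polar angle of the view point
    if sin_theta == 0:
        raise ValueError("on-axis view point: no deflection required")
    orientation = np.degrees(np.arctan2(y_v, x_v))  # in-plane grating orientation
    period = wavelength / sin_theta
    return period, orientation

# Example: a view zone 0.2 m off-axis at 0.5 m distance, green light
p, phi = grating_for_view(0.2, 0.0, 0.5)
print(f"period = {p*1e9:.0f} nm, orientation = {phi:.0f} deg")
```

Sweeping the desired view direction over the screen yields the pixel-by-pixel period/orientation map that the lithography system would then write.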
Intelligent navigation to improve obstetrical sonography.
Yeo, Lami; Romero, Roberto
2016-04-01
'Manual navigation' by the operator is the standard method used to obtain information from two-dimensional and volumetric sonography. Two-dimensional sonography is highly operator dependent and requires extensive training and expertise to assess fetal anatomy properly. Most of the sonographic examination time is devoted to acquisition of images, while 'retrieval' and display of diagnostic planes occurs rapidly (essentially instantaneously). In contrast, volumetric sonography has a rapid acquisition phase, but the retrieval and display of relevant diagnostic planes is often time-consuming, tedious and challenging. We propose the term 'intelligent navigation' to refer to a new method of interrogation of a volume dataset whereby identification and selection of key anatomical landmarks allow the system to: 1) generate a geometrical reconstruction of the organ of interest; and 2) automatically navigate, find, extract and display specific diagnostic planes. This is accomplished using operator-independent algorithms that are both predictable and adaptive. Virtual Intelligent Sonographer Assistance (VIS-Assistance®) is a tool that allows operator-independent sonographic navigation and exploration of the surrounding structures in previously identified diagnostic planes. The advantage of intelligent (over manual) navigation in volumetric sonography is the short time required for both acquisition and retrieval and display of diagnostic planes. Intelligent navigation technology automatically realigns the volume, and reorients and standardizes the anatomical position, so that the fetus and the diagnostic planes are consistently displayed in the same manner each time, regardless of the fetal position or the initial orientation. Automatic labeling of anatomical structures, subject orientation and each of the diagnostic planes is also possible. Intelligent navigation technology can operate on conventional computers, and is not dependent on specific ultrasound platforms or on the use of software to perform manual navigation of volume datasets. Diagnostic planes and VIS-Assistance videoclips can be transmitted by telemedicine so that expert consultants can evaluate the images to provide an opinion. The end result is a user-friendly, simple, fast and consistent method of obtaining sonographic images with decreased operator dependency. Intelligent navigation is one approach to improve obstetrical sonography. Published 2015. This article is a U.S. Government work and is in the public domain in the USA.
Emergence of solid state helmet-mounted displays in military applications
NASA Astrophysics Data System (ADS)
Casey, Curtis J.
2002-08-01
Helmet-Mounted Displays (HMDs) are used to provide pilots with out-the-window capabilities for engaging tactical threats. The first modern system to be employed was the Apache Integrated Helmet and Display Sighting System (IHADSS). Using an optical tracker and multiple sensors, the pilot is able to navigate and engage the enemy with weapons systems cued by the HMD in day and night conditions. Over the next several years HMDs were tested on tactical jet aircraft. The tactical fighter environment, with high-G maneuvering and the possibility of ejection, created several problems regarding integration and head-borne weight. However, these problems were soon solved by American, British, Israeli, and Russian companies, and the resulting systems are employed, or are in the process of being fielded, aboard the respective countries' tactical aircraft. It is noteworthy that the current configuration employs both the Heads-Up Display (HUD) and the HMD. The new Joint Strike Fighter (JSF), however, will become the first tactical jet to employ only an HMD. HMDs have increasingly become part of the avionics and weapons systems of new aircraft and helicopter platforms. Their use, however, is migrating to other military applications. They are currently under evaluation on combat vehicle platforms for tasks ranging from driving to target acquisition and designation under near-all-weather, 24-hour conditions. Their use has also extended to the individual soldier, for example providing data and situational awareness in the U.S. Army's Land Warrior program. Current HMD systems are CRT-based and have many shortcomings, including weight and reliability. The emergence of new microelectronics and solid-state image sources, namely Flat Panel Displays (FPDs), has expanded the application of vision devices across all facets of military applications. Some of the greatest contributions are derived from the following enabling technologies, and it is these technologies and their applications to HMDs that this paper addresses: active-matrix liquid crystal displays (improved response times, compensation films); sub-micron electronics; backlight technology to address brightness issues across the spectrum of operations; and distortion correction to compensate for optical aberrations in near-real time.
Ghosting in anaglyphic stereoscopic images
NASA Astrophysics Data System (ADS)
Woods, Andrew J.; Rourke, Tegan
2004-05-01
Anaglyphic 3D images are an easy way of displaying stereoscopic 3D images on a wide range of display types, e.g. CRT, LCD, print, etc. While the anaglyphic 3D image method is cheap and accessible, its use requires a compromise in stereoscopic image quality. A common problem with anaglyphic 3D images is ghosting. Ghosting (or crosstalk) is the leakage of an image to one eye when it is intended exclusively for the other eye. Ghosting degrades the ability of the observer to fuse the stereoscopic image and hence reduces the quality of the 3D image. Ghosting is present at various levels in most stereoscopic displays; however, it is often particularly evident with anaglyphic 3D images. This paper describes a project whose aim was to characterize the presence of ghosting in anaglyphic 3D images due to spectral issues. The spectral response curves of several different display types and several different brands of anaglyph glasses were measured using a spectroradiometer or spectrophotometer. A mathematical model was then developed to predict the amount of crosstalk in anaglyphic 3D images when different combinations of displays and glasses are used, and therefore to predict the best type of anaglyph glasses for use with a particular display type.
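A simplified version of such a spectral crosstalk model can be written down directly from measured spectra. The sketch below assumes spectra sampled on a common wavelength grid and ignores luminous-efficiency weighting and display gamma; it illustrates the general approach rather than the exact model developed in the paper.

```python
import numpy as np

def crosstalk_percent(display_intended, display_unintended, filter_trans, wavelengths):
    """Simplified spectral crosstalk estimate for one eye of an anaglyph:
    the ratio of the leakage of the unintended channel to the throughput of
    the intended channel, both seen through that eye's filter.

    All inputs are 1-D arrays sampled on the same `wavelengths` grid:
    e.g. for the red-filtered eye, `display_intended` is the display's red
    primary spectrum, `display_unintended` the cyan (green + blue) spectrum,
    and `filter_trans` the red filter transmittance.
    """
    leak = np.trapz(display_unintended * filter_trans, wavelengths)
    signal = np.trapz(display_intended * filter_trans, wavelengths)
    return 100.0 * leak / signal

# Usage: measure the primaries and the filter transmittance with a
# spectroradiometer/spectrophotometer, then compare crosstalk_percent()
# across display-glasses combinations to rank them.
```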
Television, computer and portable display device use by people with central vision impairment
Woods, Russell L; Satgunam, PremNandhini
2011-01-01
Purpose To survey the viewing experience (e.g. hours watched, difficulty) and viewing metrics (e.g. distance viewed, display size) for television (TV), computers and portable visual display devices for normally-sighted (NS) and visually impaired participants. This information may guide visual rehabilitation. Methods The survey was administered either in person or by telephone interview to 223 participants, of whom 104 had low vision (LV, worse than 6/18, age 22 to 90y, 54 males) and 94 were NS (visual acuity 6/9 or better, age 20 to 86y, 50 males). Depending on their situation, NS participants answered up to 38 questions and LV participants answered up to a further 10 questions. Results Many LV participants reported at least “some” difficulty watching TV (71/103), at least “often” having difficulty with computer displays (40/76), and extreme difficulty watching videos on handheld devices (11/16). The average daily TV viewing was slightly, but not significantly, higher for the LV participants (3.6h) than the NS (3.0h). Only 18% of LV participants used visual aids (all optical) to watch TV. Most LV participants obtained effective magnification from a reduced viewing distance for both TV and computer display. Younger LV participants also used larger displays than older LV participants to obtain increased magnification. About half of the TV viewing time occurred in the absence of a companion for both the LV and the NS participants. The mean number of TVs at home reported by LV participants (2.2) was slightly but not significantly (p=0.09) higher than that reported by NS participants (2.0). LV participants were equally likely to have a computer but were significantly (p=0.004) less likely to access the internet (73/104) compared to NS participants (82/94). Most LV participants expressed an interest in image-enhancing technology for TV viewing (67/104) and for computer use (50/74), if they used a computer. Conclusion In this study, both NS and LV participants had comparable video viewing habits. Most LV participants in our sample reported difficulty watching TV, and indicated an interest in assistive technology, such as image enhancement. As our participants reported that at least half their video viewing hours are spent alone and that there is usually more than one TV per household, this suggests that there are opportunities to use image enhancement on the TVs of LV viewers without interfering with the viewing experience of NS viewers. PMID:21410501
Liquid crystal true 3D displays for augmented reality applications
NASA Astrophysics Data System (ADS)
Li, Yan; Liu, Shuxin; Zhou, Pengcheng; Chen, Quanming; Su, Yikai
2018-02-01
Augmented reality (AR) technology, which integrates virtual computer-generated information into the real-world scene, is believed to be the next-generation human-machine interface. However, most AR products adopt the stereoscopic 3D display technique, which causes the accommodation-vergence conflict. To solve this problem, we have proposed two approaches. The first is a multi-planar volumetric display using fast-switching polymer-stabilized liquid crystal (PSLC) films. By rapidly switching the films between scattering and transparent states while synchronizing with a high-speed projector, the 2D slices of a 3D volume can be displayed in time sequence. We delved into the research on developing high-performance PSLC films in both normal mode and reverse mode; moreover, we also demonstrated four-depth AR images with correct accommodation cues. For the second approach, we realized a holographic AR display using digital blazed gratings and a 4f system to eliminate zero-order and higher-order noise. With a 4k liquid-crystal-on-silicon device, we achieved a field of view (FOV) of 32 deg. Moreover, we designed a compact waveguide-based holographic 3D display. In the design, there are two holographic optical elements (HOEs), each of which functions as a diffractive grating and a Fresnel lens. Because of the grating effect, holographic 3D image light is coupled into and decoupled out of the waveguide by modifying incident angles. Because of the lens effect, the collimated zero-order light is focused to a point and filtered out. The optical power of the second HOE also helps enlarge the FOV.
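The field of view quoted for the 4k LCoS holographic path follows the usual diffraction-limited scaling with pixel pitch and relay demagnification. The sketch below shows that scaling; the 3.74-micrometre pitch and the 4x relay factor are assumed example values, not specifications taken from the paper.

```python
import numpy as np

def holographic_fov(pixel_pitch, wavelength=532e-9, demagnification=1.0):
    """Diffraction-limited field of view (degrees) of a pixelated hologram.

    The native half-angle is asin(wavelength / (2 * pixel_pitch)); relay
    optics that demagnify the hologram by a factor M enlarge the diffraction
    angles by roughly the same factor (small-angle approximation).
    """
    half = np.arcsin(wavelength / (2.0 * pixel_pitch))
    scaled = np.clip(demagnification * np.sin(half), -1.0, 1.0)
    return np.degrees(2.0 * np.arcsin(scaled))

# A 3.74 um pixel pitch (typical of some 4k LCoS panels) gives ~8 deg natively;
# an assumed 4x demagnifying relay would bring it to roughly 33 deg.
print(holographic_fov(3.74e-6))
print(holographic_fov(3.74e-6, demagnification=4))
```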
Ivanova, Maria V.; Hallowell, Brooke
2017-01-01
Purpose Language comprehension in people with aphasia (PWA) is frequently evaluated using multiple-choice displays: PWA are asked to choose the image that best corresponds to the verbal stimulus in a display. When a nontarget image is selected, comprehension failure is assumed. However, stimulus-driven factors unrelated to linguistic comprehension may influence performance. In this study we explore the influence of physical image characteristics of multiple-choice image displays on visual attention allocation by PWA. Method Eye fixations of 41 PWA were recorded while they viewed 40 multiple-choice image sets presented with and without verbal stimuli. Within each display, 3 images (majority images) were the same and 1 (singleton image) differed in terms of 1 image characteristic. The mean proportion of fixation duration (PFD) allocated across majority images was compared against the PFD allocated to singleton images. Results PWA allocated significantly greater PFD to the singleton than to the majority images in both nonverbal and verbal conditions. Those with greater severity of comprehension deficits allocated greater PFD to nontarget singleton images in the verbal condition. Conclusion When using tasks that rely on multiple-choice displays and verbal stimuli, one cannot assume that verbal stimuli will override the effect of visual-stimulus characteristics. PMID:28520866
Color imaging technologies in the prepress industry
NASA Astrophysics Data System (ADS)
Silverman, Lee
1992-05-01
Over much of the last half century, electronic technologies have played an increasing role in the prepress production of film and plates prepared for printing presses. The last decade has seen an explosion of technologies capable of supplementing this production. The most outstanding technology infusing this growth has been the microcomputer, but other component technologies have also diversified the capacity for high-quality scanning of photographs. In addition, some fundamental software and affordable laser recorder technologies have provided new approaches to the merging of typographic and halftoned photographic data onto film. The next decade will evolve the methods and the technologies to achieve superior text and image communication on mass distribution media used in the printed page or instead of the printed page. This paper focuses on three domains of electronic prepress classified as the input, transformation, and output phases of the production process. The evolution of the component technologies in each of these three phases is described. The unique attributes of each are defined, followed by a discussion of the pertinent technologies that overlap all three domains. Unique to input is sensor technology and analogue-to-digital conversion. Unique to the transformation phase is the display on monitor for soft proofing and interactive processing. The display requires special technologies for digital frame storage and high-speed, gamma-compensated, digital-to-analogue conversion. Unique to output is the need for halftoning and binary recording device linearization or calibration. Specialized direct digital color technologies now allow color quality proofing without the need for writing intermediate separation films, but ultimately these technologies will be supplanted by direct printing technologies: first, dry film processing, then direct plate writing, and finally direct application of ink or toner onto paper at the 20-30 thousand impressions per hour now achieved by offset printing. In summary, a review of technological evolution guides industry methodologies that will define a transformation of workflow in graphic arts during the next decade. Prepress production will integrate component technologies with microcomputers in order to optimize the production cycle from graphic design to printed piece. These changes will drastically alter the business structures and tools used to put type and photographs on paper in the volumes expected from printing presses.
Shen, Xin; Javidi, Bahram
2018-03-01
We have developed a three-dimensional (3D) dynamic integral-imaging (InIm)-system-based optical see-through augmented reality display with an enhanced depth range of the 3D augmented image. A focus-tunable lens is adopted in the 3D display unit to relay the elemental images at various positions to the microlens array. Based on resolution-priority integral imaging, multiple lenslet image planes are generated to enhance the depth range of the 3D image. The depth range is further increased by utilizing both the real and virtual 3D imaging fields. The 3D reconstructed image and the real-world scene are overlaid using an optical see-through display for augmented reality. The proposed system significantly enhances the depth range of the 3D reconstructed image with high image quality in the micro InIm unit. This approach provides enhanced functionality for augmented information and alleviates the vergence-accommodation conflict of a traditional augmented reality display.
Teaching systems thinking to 4th and 5th graders using Environmental Dashboard display technology.
Clark, Shane; Petersen, John E; Frantz, Cindy M; Roose, Deborah; Ginn, Joel; Rosenberg Daneri, Daniel
2017-01-01
Tackling complex environmental challenges requires the capacity to understand how relationships and interactions between parts result in dynamic behavior of whole systems. There has been convincing research that these "systems thinking" skills can be learned. However, there is little research on methods for teaching these skills to children or assessing their impact. The Environmental Dashboard is a technology that uses "sociotechnical" feedback: information feedback designed to affect thought and behavior. Environmental Dashboard (ED) combines real-time information on community resource use with images and words that reflect pro-environmental actions of community members. Prior research indicates that ED supports the development of systems thinking in adults. To assess its impact on children, the technology was installed in a primary school and children were passively exposed to ED displays. This resulted in no measurable impact on systems thinking skills. The next stage of this research examined the impact of actively integrating ED into lessons on electricity in 4th and 5th grade. This active integration enhanced both content-related systems thinking skills and content retention. PMID:28448586
Automation of fluorescent differential display with digital readout.
Meade, Jonathan D; Cho, Yong-Jig; Fisher, Jeffrey S; Walden, Jamie C; Guo, Zhen; Liang, Peng
2006-01-01
Since its invention in 1992, differential display (DD) has become the most commonly used technique for identifying differentially expressed genes because of its many advantages over competing technologies such as DNA microarrays, serial analysis of gene expression (SAGE), and subtractive hybridization. Despite the great impact of the method on biomedical research, there has been a lack of automation of DD technology to increase its throughput and accuracy for systematic gene expression analysis. Most previous DD work has taken a "shot-gun" approach of identifying one gene at a time, with a limited number of polymerase chain reaction (PCR) reactions set up manually, giving DD a low-tech and low-throughput image. We have optimized the DD process with a new platform that incorporates fluorescent digital readout, automated liquid handling, and large-format gels capable of running entire 96-well plates. The resulting streamlined fluorescent DD (FDD) technology offers unprecedented accuracy, sensitivity, and throughput in comprehensive and quantitative analysis of gene expression. These major improvements will allow researchers to find differentially expressed genes of interest, both known and novel, quickly and easily.
Dynamic beam steering at submm- and mm-wave frequencies using an optically controlled lens antenna
NASA Astrophysics Data System (ADS)
Gallacher, T. F.; Søndenâ, R.; Robertson, D. A.; Smith, G. M.
2013-05-01
We present details of our work on improving the efficiency and scan rate of the photo-injected Fresnel zone plate antenna (piFZPA) technique, which utilizes commercially available visible display technologies. This approach presents a viable low-cost solution for non-mechanical beam steering, suitable for many applications at (sub)mm-wave frequencies that require rapid beam steering capabilities in order to meet their technological goals, such as imaging, surveillance, and remote sensing. The method is comparatively low-cost, is based on a simple and flexible architecture that enables rapid and precise arbitrary beam forming, and is scalable to higher frame rates and higher submm-wave frequencies. We discuss the optimization stages of a range of piFZPA designs that implement fast visible projection displays, enabling up to 30,000 beams per second. We also outline the suitability of this technology across mm-wave and submm-wave frequencies as a low-cost and simple solution for dynamic optoelectronic beam steering.
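The pattern written by the visible projector in a photo-injected FZP follows the standard Fresnel zone plate geometry. The sketch below computes the zone-boundary radii for an assumed 94 GHz design; both the frequency and the focal length are illustrative, not parameters of the systems described.

```python
import numpy as np

def fzp_zone_radii(focal_length, wavelength, n_zones):
    """Radii of the Fresnel zone boundaries for a zone plate of focal
    length F at wavelength lam:  r_n = sqrt(n*lam*F + (n*lam/2)**2).
    In a photo-injected FZPA the alternating zones are drawn by the visible
    projector onto the photoconductive panel (illustrative sketch only)."""
    n = np.arange(1, n_zones + 1)
    return np.sqrt(n * wavelength * focal_length + (n * wavelength / 2.0) ** 2)

# Example: a 94 GHz beam (wavelength ~3.2 mm) focused at 0.5 m
radii = fzp_zone_radii(0.5, 3.2e-3, 8)
print(np.round(radii * 1e3, 1))   # zone radii in mm
```

Steering the beam then amounts to redrawing the zone pattern about a shifted centre, which is why a fast projection display translates directly into a high beam-update rate.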
A versatile stereoscopic visual display system for vestibular and oculomotor research.
Kramer, P D; Roberts, D C; Shelhamer, M; Zee, D S
1998-01-01
Testing of the vestibular system requires a vestibular stimulus (motion) and/or a visual stimulus. We have developed a versatile, low cost, stereoscopic visual display system, using "virtual reality" (VR) technology. The display system can produce images for each eye that correspond to targets at any virtual distance relative to the subject, and so require the appropriate ocular vergence. We elicited smooth pursuit, "stare" optokinetic nystagmus (OKN) and after-nystagmus (OKAN), vergence for targets at various distances, and short-term adaptation of the vestibulo-ocular reflex (VOR), using both conventional methods and the stereoscopic display. Pursuit, OKN, and OKAN were comparable with both methods. When used with a vestibular stimulus, VR induced appropriate adaptive changes of the phase and gain of the angular VOR. In addition, using the VR display system and a human linear acceleration sled, we adapted the phase of the linear VOR. The VR-based stimulus system not only offers an alternative to more cumbersome means of stimulating the visual system in vestibular experiments, it also can produce visual stimuli that would otherwise be impractical or impossible. Our techniques provide images without the latencies encountered in most VR systems. Its inherent versatility allows it to be useful in several different types of experiments, and because it is software driven it can be quickly adapted to provide a new stimulus. These two factors allow VR to provide considerable savings in time and money, as well as flexibility in developing experimental paradigms.
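The way such a stereoscopic display drives vergence for a target at an arbitrary virtual distance can be illustrated with simple geometry. The sketch below computes the per-eye horizontal image positions and the required vergence angle for a target straight ahead; the 64 mm interpupillary distance and the example distances are assumptions, and the authors' rendering pipeline is of course more elaborate.

```python
import numpy as np

def per_eye_image_positions(target_distance, screen_distance, ipd=0.064):
    """Horizontal screen coordinates (left-eye image, right-eye image), in
    metres from the screen centre, for a point target straight ahead at
    `target_distance`, rendered on a display plane at optical distance
    `screen_distance`.  Each image sits where that eye's line of sight to
    the target crosses the display plane, so fixating the target produces
    the correct vergence.  The 64 mm IPD is an assumed typical value."""
    half_ipd = ipd / 2.0
    # The left eye sits at x = -half_ipd; its ray to (0, target_distance)
    # crosses the screen plane at:
    x_left = -half_ipd * (1.0 - screen_distance / target_distance)
    x_right = -x_left
    vergence_deg = 2.0 * np.degrees(np.arctan(half_ipd / target_distance))
    return x_left, x_right, vergence_deg

# Example: display optics focused at 1 m, virtual target at 0.33 m.
xl, xr, v = per_eye_image_positions(0.33, 1.0)
print(round(xl * 1e3), round(xr * 1e3), round(v, 1))
# -> roughly 65, -65 (mm): crossed disparity, with about 11.1 deg of vergence.
```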
Visual-conformal display format for helicopter guidance
NASA Astrophysics Data System (ADS)
Doehler, H.-U.; Schmerwitz, Sven; Lueken, Thomas
2014-06-01
Helicopter guidance in situations where natural vision is reduced is still a challenging task. Besides newly available sensors, which are able to "see" through darkness, fog and dust, display technology remains one of the key issues of pilot assistance systems. As long as we have pilots within aircraft cockpits, we have to keep them informed about the outside situation. Situational awareness in humans is mainly driven by the visual channel. Therefore, display systems that are able to cross-fade seamlessly from natural vision to artificial computer vision and vice versa are of greatest interest in this context. Helmet-mounted displays (HMDs) have this property when they apply a head tracker to measure the pilot's head orientation relative to the aircraft reference frame. Together with the aircraft's position and orientation relative to the world reference frame, the on-board graphics computer can generate images which are perfectly aligned with the outside world. We call image elements which match the outside world "visual-conformal". Published display formats for helicopter guidance in degraded visual environments apply mostly 2D symbologies, which fall far short of what is possible. We propose a perspective 3D symbology for a head-tracked HMD which shows as many visual-conformal elements as possible. We implemented and tested our proposal within our fixed-base cockpit simulator as well as in our flying helicopter simulator (FHS). Recently conducted simulation trials with experienced helicopter pilots give first evaluation results for our proposal.
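The core of visual-conformal symbology is the transform chain from a world-referenced point, through the aircraft's navigation solution and the head tracker, to a display pixel. The sketch below is a bare-bones pinhole version of that chain; the frame conventions, rotation matrices and intrinsic parameters are placeholders, not the system described in the paper.

```python
import numpy as np

def conformal_pixel(p_world, R_world2ac, t_ac, R_ac2head, t_head, f_pix, cx, cy):
    """Project a world-referenced point into head-tracked HMD pixel
    coordinates: world frame -> aircraft frame (navigation solution) ->
    head/display frame (head tracker) -> pinhole projection.

    All rotations, offsets and the focal length in pixels are illustrative
    placeholders.  Returns (u, v) or None if the point is behind the viewer.
    """
    p_ac = R_world2ac @ (np.asarray(p_world, dtype=float) - t_ac)  # into aircraft frame
    p_head = R_ac2head @ (p_ac - t_head)                           # into head frame
    x, y, z = p_head                                               # z along the line of sight
    if z <= 0.0:
        return None
    return (cx + f_pix * x / z, cy - f_pix * y / z)
```

Because both poses are updated every frame, a symbol anchored to a world point stays registered with the outside scene as the aircraft and the pilot's head move, which is exactly the "visual-conformal" property described above.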
Infrared Imagery of Shuttle (IRIS). Task 1, summary report
NASA Technical Reports Server (NTRS)
Chocol, C. J.
1977-01-01
The feasibility of remote, high-resolution infrared imagery of the Shuttle Orbiter lower surface during entry to obtain accurate measurements of aerodynamic heat transfer was demonstrated. Using available technology, such images can be taken from an existing aircraft/telescope system (the C141 AIRO) with minimum modification or addition of systems. Images with a spatial resolution of 1 m or better and a temperature resolution of 2.5% between temperatures of 800 and 1900 K can be obtained. Data reconstruction techniques can provide a geometrically and radiometrically corrected array on addressable magnetic tape ready for display by NASA.
NASA Astrophysics Data System (ADS)
Lee, Chang-Kun; Moon, Seokil; Lee, Byounghyo; Jeong, Youngmo; Lee, Byoungho
2016-10-01
A head-mounted compressive three-dimensional (3D) display system is proposed that combines a polarization beam splitter (PBS), a fast-switching polarization rotator, and a microdisplay with high pixel density. According to the polarization state of the image, which is controlled by the polarization rotator, the optical path of the image in the PBS is divided into transmitted and reflected components. Since the optical paths of the two images are spatially separated, it is possible to focus them independently at different depth positions. The transmitted p-polarized and reflected s-polarized images are focused by a convex lens and a mirror, respectively. When the focal lengths of the convex lens and the mirror are properly chosen, the two image planes can be located at the intended positions; the geometrical relationship is easily modified by replacing the components. Fast switching of the polarization realizes real-time operation of multi-focal image planes with a single display panel. Since the device characteristics of a single panel are preserved, high image quality, reliability and uniformity are retained. For generating 3D images, layer images for a compressive light field display between the two image planes are calculated. Since a display panel with high pixel density is adopted, high-quality 3D images are reconstructed. In addition, image degradation by diffraction between physically stacked display panels can be mitigated. A simple optical configuration of the proposed system is implemented and the feasibility of the proposed method is verified through experiments.
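Where the two image planes end up can be estimated from the Gaussian imaging equation applied separately to the lens path and the mirror path. The sketch below uses assumed focal lengths and display distances purely for illustration; they are not the prototype's parameters.

```python
def image_plane(object_distance, focal_length):
    """Gaussian imaging equation 1/d_o + 1/d_i = 1/f.
    Returns the image distance d_i; a negative value means a virtual image
    on the object side, which is the case of interest when the microdisplay
    sits inside the focal length of the lens or mirror (d_o < f)."""
    return 1.0 / (1.0 / focal_length - 1.0 / object_distance)

# Assumed example: a display 45 mm from a 50 mm lens places a virtual image
# about 0.45 m away; the same display 40 mm from a 50 mm mirror path places
# its virtual image at about 0.2 m, giving two distinct focal planes from
# one panel.
print(image_plane(0.045, 0.050), image_plane(0.040, 0.050))
```

Replacing either component changes only its focal length in this relation, which is the "geometrical relationship is easily modified" point made in the abstract.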
Collimated autostereoscopic displays for cockpit applications
NASA Astrophysics Data System (ADS)
Eichenlaub, Jesse B.
1995-06-01
The use of an autostereoscopic display (a display that produces stereoscopic images that the user can see without wearing special glasses) for cockpit applications is now under investigation at Wright Patterson Air Force Base. DTI reported on this display, built for testing in a simulator, at last year's conference. It is believed, based on testing performed at NASA's Langley Research Center, that collimating this type of display will bring benefits to the user, including a greater useful imaging volume and more accurate stereo perception. DTI has therefore investigated the feasibility of collimating an autostereoscopic display, and has experimentally demonstrated a proof-of-concept model of such a display. As in the case of conventional displays, a collimated autostereoscopic display utilizes an optical element located one focal length from the surface of the image-forming device. The presence of this element must be taken into account when designing the optics used to create the autostereoscopic images. The major design issues associated with collimated 2D displays are also associated with collimated autostereoscopic displays.
Display nonlinearity in digital image processing for visual communications
NASA Astrophysics Data System (ADS)
Peli, Eli
1992-11-01
The luminance emitted from a cathode ray tube (CRT) display is a nonlinear function (the gamma function) of the input video signal voltage. In most analog video systems, compensation for this nonlinear transfer function is implemented in the camera amplifiers. When CRT displays are used to present psychophysical stimuli in vision research, the specific display nonlinearity usually is measured and accounted for to ensure that the luminance of each pixel in the synthetic image properly represents the intended value. However, when using digital image processing, the linear analog-to-digital converters store a digital image that is nonlinearly related to the displayed or recorded image. The effect of this nonlinear transformation on a variety of image-processing applications used in visual communications is described.
Display nonlinearity in digital image processing for visual communications
NASA Astrophysics Data System (ADS)
Peli, Eli
1991-11-01
The luminance emitted from a cathode ray tube (CRT) display is a nonlinear function (the gamma function) of the input video signal voltage. In most analog video systems, compensation for this nonlinear transfer function is implemented in the camera amplifiers. When CRT displays are used to present psychophysical stimuli in vision research, the specific display nonlinearity usually is measured and accounted for to ensure that the luminance of each pixel in the synthetic image properly represents the intended value. However, when using digital image processing, the linear analog-to-digital converters store a digital image that is nonlinearly related to the displayed or recorded image. This paper describes the effect of this nonlinear transformation on a variety of image-processing applications used in visual communications.
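In digital systems the compensation discussed above is often applied through a look-up table. The following is a minimal sketch of such a gamma-correcting LUT, assuming a simple power-law display with an assumed gamma of 2.2; as both abstracts stress, the actual nonlinearity of a given display should be measured rather than assumed.

```python
import numpy as np

def gamma_correction_lut(gamma=2.2, bits=8):
    """Look-up table that linearizes a frame buffer for a display whose
    luminance follows L ~ (v / v_max) ** gamma.  gamma = 2.2 is a commonly
    assumed value, used here only for illustration."""
    levels = np.arange(2 ** bits) / (2 ** bits - 1)
    return np.round((levels ** (1.0 / gamma)) * (2 ** bits - 1)).astype(np.uint8)

# Writing lut[i] instead of i to the frame buffer makes the displayed
# luminance approximately proportional to the intended pixel value i.
lut = gamma_correction_lut()
```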
Fundamentals of cone beam computed tomography for a prosthodontist
John, George Puthenpurayil; Joy, Tatu Elenjickal; Mathew, Justin; Kumar, Vinod R. B.
2015-01-01
Cone beam computed tomography (CBCT, also referred to as C-arm computed tomography [CT], cone beam volume CT, or flat panel CT) is a medical imaging technique of X-ray CT where the X-rays are divergent, forming a cone.[1] CBCT systems have been designed for imaging hard tissues of the maxillofacial region. CBCT is capable of providing sub-millimeter resolution in images of high diagnostic quality, with short scanning times (10–70 s) and radiation dosages reportedly up to 15–100 times lower than those of conventional CT scans. Increasing availability of this technology provides the dental clinician with an imaging modality capable of providing a three-dimensional representation of the maxillofacial skeleton with minimal distortion. The aim of this article is to sensitize the Prosthodontist to CBCT technology, provide an overview of currently available maxillofacial CBCT systems and review the specific application of various CBCT display modes to clinical Prosthodontic practice. A MEDLINE search for relevant articles in this specific area of interest was conducted. The selected articles were critically reviewed and the data acquired were systematically compiled. PMID:26929479
Design of dual energy x-ray detector for conveyor belt with steel wire ropes
NASA Astrophysics Data System (ADS)
Dai, Yue; Miao, Changyun; Rong, Feng
2009-07-01
A dual-energy X-ray detector for conveyor belts with steel wire ropes is presented in this paper. Steel-cord conveyor belts are among the primary transfer equipment in modern production. Traditional test methods, such as those based on electromagnetic induction, cannot directly display an image of the interior of the steel wire ropes, so X-ray detection technology has been used to inspect the conveyor belt. However, the image is degraded by the interference of the rubber belt. Therefore, dual-energy X-ray detection with a subtraction method is developed to numerically remove the rubber belt from the radiograph, thus improving the definition of the rope image. The purpose of this research is to design a dual-energy X-ray detector that makes it easier for the operator to find faults in the belt. The detection system is composed of an X-ray source, a detector controlled by an FPGA chip, and a PC running the image-processing system. Simulation results show that this design improves the staff's ability to inspect the conveyor belt.
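The subtraction step can be illustrated with the weighted logarithmic subtraction commonly used in dual-energy radiography. The sketch below is a generic version of that idea, assuming registered low- and high-energy images; the weight calibration on a rubber-only region is an illustrative choice, not necessarily the method implemented in the described detector.

```python
import numpy as np

def dual_energy_subtraction(low_kv_img, high_kv_img, w, eps=1e-6):
    """Weighted logarithmic subtraction: S = ln(I_high) - w * ln(I_low).
    With w tuned so that the rubber (low atomic number) cancels, the steel
    cords dominate the subtracted image."""
    return np.log(high_kv_img + eps) - w * np.log(low_kv_img + eps)

def calibrate_weight(low_roi, high_roi, eps=1e-6):
    """Estimate w on a rubber-only region as the least-squares slope of
    ln(I_high) against ln(I_low), which flattens the belt after subtraction."""
    x = np.log(low_roi + eps).ravel()
    y = np.log(high_roi + eps).ravel()
    return np.polyfit(x, y, 1)[0]
```

In practice the weight would be calibrated once per belt type and applied in real time to the streaming line-scan data.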
NASA Astrophysics Data System (ADS)
Oswald, Helmut; Mueller-Jones, Kay; Builtjes, Jan; Fleck, Eckart
1998-07-01
The developments in information technologies -- computer hardware, networking and storage media -- have led to expectations that these advances make it possible to replace 35 mm film completely with digital techniques in the catheter laboratory. Besides its role as an archival medium, cine film is used as the major image review and exchange medium in cardiology. None of today's technologies can completely fulfill the requirements to replace cine film. One of the major drawbacks of cine film is that it can be accessed at only one time and in one location. For the four catheter laboratories in our institutions we have designed a complementary concept combining the CD-R, also called CD-Medical, as a single-patient storage and exchange medium, with a digital archive for on-line access and image review of selected frames or short sequences on adequate medical workstations. The image data from the various modalities, as well as all digital documents relating to a patient, are part of an electronic patient record. The access, processing and display of documents are supported by an integrated medical application.
Application of an imaging system to a museum exhibition for developing interactive exhibitions
NASA Astrophysics Data System (ADS)
Miyata, Kimiyoshi; Inoue, Yuka; Takiguchi, Takahiro; Tsumura, Norimichi; Nakaguchi, Toshiya; Miyake, Yoichi
2009-10-01
In the National Museum of Japanese History, 215,759 artifacts are stored and used for research and exhibitions. In museums, due to the limitation of space in the galleries, a guidance system is required to satisfy visitors' needs and to enhance their understanding of the artifacts. We introduce one exhibition using imaging technology to improve visitors' understanding of a kimono (traditional Japanese clothing) exhibition. In the imaging technology introduced, one data projector, one display with touch panel interface, and magnifiers were used as exhibition tools together with a real kimono. The validity of this exhibition method was confirmed by results from a visitors' interview survey. Second, to further develop the interactive guidance system, an augmented reality system that consisted of cooperation between the projector and a digital video camera was also examined. A white paper board in the observer's hand was used as a projection screen and also as an interface to control the images projected on the board. The basic performance of the proposed system was confirmed; however continuous development was necessary for applying the system to actual exhibitions.
SWIR hyperspectral imaging detector for surface residues
NASA Astrophysics Data System (ADS)
Nelson, Matthew P.; Mangold, Paul; Gomer, Nathaniel; Klueva, Oksana; Treado, Patrick
2013-05-01
ChemImage has developed a SWIR hyperspectral imaging (HSI) sensor which uses hyperspectral imaging for wide-area surveillance and standoff detection of surface residues. Existing detection technologies often require close proximity for sensing or detecting, endangering operators and costly equipment. Furthermore, most existing sensors do not support autonomous, real-time, mobile-platform-based detection of threats. The SWIR HSI sensor provides real-time standoff detection of surface residues, with wide-area surveillance and HSI capability enabled by liquid crystal tunable filter technology. Easy-to-use detection software with a simple, intuitive user interface produces automated alarms and a real-time display of threat and type. The system has the potential to be used for the detection of a variety of threats, including chemicals and illicit drug substances, and allows for easy updates in the field for detection of new hazardous materials. SWIR HSI technology could be used by law enforcement for standoff screening of suspicious locations and vehicles in pursuit of illegal labs, or by combat engineers to support route-clearance applications, ultimately to save the lives of soldiers and civilians. In this paper, results from a SWIR HSI sensor are discussed, including detection of various materials in bulk form as well as residue amounts on vehicles, people and other surfaces.
NASA Astrophysics Data System (ADS)
Woo, Sungsoo; Kang, Sungsam; Yoon, Changhyeong; Choi, Wonshik
2016-03-01
With the advancement of 3D display technology, 3D imaging of macroscopic objects has drawn much attention, as such objects provide the content to display. The most widely used imaging methods include the depth camera, which measures time of flight for depth discrimination, and various structured illumination techniques. However, these existing methods have poor depth resolution, which makes imaging complicated structures a difficult task. In order to resolve this issue, we propose an imaging system based upon low-coherence interferometry and off-axis digital holographic imaging. By using a light source with a coherence length of 200 microns, we achieved a depth resolution of 100 microns. In order to map macroscopic objects with this high axial resolution, we installed a pair of prisms in the reference beam path for long-range scanning of the optical path length. Specifically, one prism was fixed in position, and the other prism was mounted on a translation stage and translated parallel to the first prism. Due to the multiple internal reflections between the two prisms, the overall path length was elongated by a factor of 50. In this way, we could cover a depth range of more than 1 meter. In addition, we employed multiple speckle illuminations and incoherent averaging of the acquired holographic images to reduce specular reflections from the target surface. Using this newly developed system, we imaged targets with multiple different layers and demonstrated imaging of targets hidden behind scattering layers. The method was also applied to imaging targets located around a corner.
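Two of the quoted numbers follow directly from the geometry of low-coherence gating and the folded delay line; the short sketch below reproduces them (a 200-micron coherence length giving roughly 100-micron depth resolution, and a 50-fold folding reducing the stage travel needed to scan over a metre of path length).

```python
def axial_resolution(coherence_length):
    """Low-coherence gating: the round trip through the sample makes the
    depth resolution roughly half the source coherence length
    (200 um -> 100 um, matching the figures in the abstract)."""
    return coherence_length / 2.0

def required_stage_travel(path_range, fold_factor=50):
    """With the prism pair folding the reference path `fold_factor` times,
    a path-length range of `path_range` needs only path_range / fold_factor
    of physical stage travel."""
    return path_range / fold_factor

print(axial_resolution(200e-6))       # 1e-4 m, i.e. 100 um
print(required_stage_travel(1.0))     # 0.02 m: about 2 cm of travel per 1 m of path
```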
Data grid: a distributed solution to PACS
NASA Astrophysics Data System (ADS)
Zhang, Xiaoyan; Zhang, Jianguo
2004-04-01
In a hospital, various kinds of medical images acquired from different modalities are generally used and stored in different departments, and each modality usually has several attached workstations to display or process images. To improve diagnosis, radiologists or physicians often need to retrieve other kinds of images for reference. The traditional image storage solution is to build up a large-scale PACS archive server. However, the disadvantages of purely centralized management of a PACS archive server are obvious: besides high costs, any failure of the PACS archive server would cripple the entire PACS operation. Here we present a new approach to developing a storage grid in PACS, which can provide more reliable image storage and more efficient query/retrieval for hospital-wide applications. In this paper, we also give a performance evaluation comparing three popular technologies: mirroring, clustering, and grid.
3D augmented reality with integral imaging display
NASA Astrophysics Data System (ADS)
Shen, Xin; Hua, Hong; Javidi, Bahram
2016-06-01
In this paper, a three-dimensional (3D) integral imaging display for augmented reality is presented. By implementing the pseudoscopic-to-orthoscopic conversion method, elemental image arrays with different capturing parameters can be transferred into the identical format for 3D display. With the proposed merging algorithm, a new set of elemental images for augmented reality display is generated. The newly generated elemental images contain both the virtual objects and real world scene with desired depth information and transparency parameters. The experimental results indicate the feasibility of the proposed 3D augmented reality with integral imaging.
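For orientation, the simplest classical form of pseudoscopic-to-orthoscopic conversion just rotates every elemental image by 180 degrees about its own centre; the sketch below shows that operation on an array of elemental images. The conversion used in the paper may well be a more elaborate, resampling-based variant, so treat this only as an illustration of the idea.

```python
import numpy as np

def pseudoscopic_to_orthoscopic(elemental_images):
    """Rotate every elemental image by 180 degrees about its own centre.
    `elemental_images` has shape (rows, cols, h, w) or (rows, cols, h, w, 3);
    axes 2 and 3 are the pixel axes of each elemental image."""
    return np.flip(np.flip(elemental_images, axis=2), axis=3)
```

After this step, elemental images captured with different parameters can be resampled to a common format and merged with the computer-generated content, as the abstract outlines.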
New ultraportable display technology and applications
NASA Astrophysics Data System (ADS)
Alvelda, Phillip; Lewis, Nancy D.
1998-08-01
MicroDisplay devices are based on a combination of technologies rooted in the extreme integration capability of conventionally fabricated CMOS active-matrix liquid crystal display substrates. Customized diffraction grating and optical distortion correction technology for lens-system compensation allow the elimination of many lenses and system-level components. The MicroDisplay Corporation's miniature integrated information display technology is rapidly leading to many new defense and commercial applications. There are no moving parts in MicroDisplay substrates, and the fabrication of the color-generating gratings, already part of the CMOS circuit fabrication process, is effectively cost- and manufacturing-process-free. The entire suite of the MicroDisplay Corporation's technologies was devised to create a line of application-specific integrated circuit single-chip display systems with integrated computing, memory, and communication circuitry. Next-generation portable communication, computer, and consumer electronic devices such as truly portable monitor and TV projectors, eyeglass and head-mounted displays, pagers and Personal Communication Services handsets, and wristwatch-mounted video phones are among the many target commercial markets for MicroDisplay technology. Defense applications range from maintenance and repair support, to night-vision systems, to portable projectors for mobile command and control centers.
Three-dimensional hologram display system
NASA Technical Reports Server (NTRS)
Mintz, Frederick (Inventor); Chao, Tien-Hsin (Inventor); Bryant, Nevin (Inventor); Tsou, Peter (Inventor)
2009-01-01
The present invention relates to a three-dimensional (3D) hologram display system. The 3D hologram display system includes a projector device for projecting an image upon a display medium to form a 3D hologram. The 3D hologram is formed such that a viewer can view the holographic image from multiple angles up to 360 degrees. Multiple display media are described, namely a spinning diffusive screen, a circular diffuser screen, and an aerogel. The spinning diffusive screen utilizes spatial light modulators to control the image such that the 3D image is displayed on the rotating screen in a time-multiplexing manner. The circular diffuser screen includes multiple, simultaneously-operated projectors to project the image onto the circular diffuser screen from a plurality of locations, thereby forming the 3D image. The aerogel can use the projection device described as applicable to either the spinning diffusive screen or the circular diffuser screen.
Life Sciences Division Spaceflight Hardware
NASA Technical Reports Server (NTRS)
Yost, B.
1999-01-01
The Ames Research Center (ARC) is responsible for the development, integration, and operation of non-human life sciences payloads in support of NASA's Gravitational Biology and Ecology (GB&E) program. To help stimulate discussion and interest in the development and application of novel technologies for incorporation within non-human life sciences experiment systems, three hardware system models will be displayed with associated graphics/text explanations. First, an Animal Enclosure Model (AEM) will be shown to communicate the nature and types of constraints physiological researchers must deal with during manned space flight experiments using rodent specimens. Second, a model of the Modular Cultivation System (MCS) under development by ESA will be presented to highlight technologies that may benefit cell-based research, including advanced imaging technologies. Finally, subsystems of the Cell Culture Unit (CCU) in development by ARC will also be shown. A discussion will be provided on candidate technology requirements in the areas of specimen environmental control, biotelemetry, telescience and telerobotics, and in situ analytical techniques and imaging. In addition, an overview of the Center for Gravitational Biology Research facilities will be provided.
The x-ray light valve: a low-cost, digital radiographic imaging system-spatial resolution
NASA Astrophysics Data System (ADS)
MacDougall, Robert D.; Koprinarov, Ivaylo; Webster, Christie A.; Rowlands, J. A.
2007-03-01
In recent years, new x-ray radiographic systems based on large-area flat panel technology have revolutionized our capability to produce digital x-ray radiographic images. However, these active matrix flat panel imagers (AMFPIs) are extraordinarily expensive compared to the systems they are replacing. Thus there is a need for a low-cost digital imaging system for general applications in radiology. Different approaches have been considered to make lower-cost, integrated x-ray imaging devices for digital radiography, including scanned projection x-ray, an integrated approach based on computed radiography technology, and optically demagnified x-ray screen/CCD systems. These approaches suffer from either high cost or high mechanical complexity and do not have the image quality of AMFPIs. We have identified a new approach - the X-ray Light Valve (XLV). The XLV has the potential to achieve immediate readout in an integrated system with image quality comparable to AMFPIs. The XLV concept combines three well-established and hence low-cost technologies: an amorphous selenium (a-Se) layer to convert x-rays to image charge, a liquid crystal (LC) cell as an analog display, and an optical scanner for image digitization. Here we investigate the spatial resolution possible with XLV systems. a-Se and LC cells have both been shown separately to have inherently very high spatial resolution. Due to the close electrostatic coupling in the XLV, it can be expected that the spatial resolution of this system will also be very high. A prototype XLV was made and a typical office scanner was used for image digitization. The Modulation Transfer Function was measured and the limiting factor was seen to be the optical scanner. However, even with this limitation the XLV system is able to meet or exceed the resolution requirements for chest radiography.
Fetal cerebral imaging - ultrasound vs. MRI: an update.
Blondiaux, Eléonore; Garel, Catherine
2013-11-01
The purpose of this article is to analyze the advantages and limitations of prenatal ultrasonography (US) and magnetic resonance imaging (MRI) in the evaluation of the fetal brain. These imaging modalities should not be seen as competitive but rather as complementary. There are wide variations in the world regarding screening policies, technology, skills, and legislation about termination of pregnancy, and these variations markedly impact on the way of using prenatal imaging. According to the contribution expected from each technique and to local working conditions, one should choose the most appropriate imaging modality on a case-by-case basis. The advantages and limitations of US and MRI in the setting of fetal brain imaging are displayed. Different anatomical regions (midline, ventricles, subependymal area, cerebral parenchyma, pericerebral space, posterior fossa) and pathological conditions are analyzed and illustrated in order to compare the respective contribution of each technique. An accurate prenatal diagnosis of cerebral abnormalities is of utmost importance for prenatal counseling.
Optical characterization of display screens by speckle-contrast measurements
NASA Astrophysics Data System (ADS)
Pozo, Antonio M.; Castro, José J.; Rubiño, Manuel
2012-10-01
In recent years, the flat-panel display (FPD) technology has undergone great development. Currently, FPDs are present in many devices. A significant element in FPD manufacturing is the display front surface. Manufacturers sell FPDs with different types of front surface which can be matte (also called anti-glare) or glossy screens. Users who prefer glossy screens consider images shown in these types of displays to have more vivid colours compared with matte-screen displays. However, external light sources may cause unpleasant reflections on the glossy screens. These reflections can be reduced by a matte treatment in the front surface of FPDs. In this work, we present a method to characterize the front surface of FPDs using laser speckle patterns. We characterized three FPDs: a Samsung XL2370 LCD monitor of 23" with matte screen, a Toshiba Satellite A100 laptop of 15.4" with glossy screen, and a Papyre electronic book reader. The results show great differences in speckle contrast values for the three screens characterized and, therefore, this work shows the feasibility of this method for characterizing and comparing FPDs which have different types of front surfaces.
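The statistic behind this characterization method is the standard speckle contrast, the ratio of the standard deviation to the mean of the captured intensity. The sketch below shows that computation only; the function name and the synthetic example are illustrative, not the authors' measurement pipeline.

```python
# Hedged sketch: the standard speckle-contrast statistic C = sigma_I / <I>,
# computed over a captured speckle image. Names are illustrative, not from the cited work.
import numpy as np

def speckle_contrast(image: np.ndarray) -> float:
    """Return the speckle contrast (standard deviation / mean) of an intensity image."""
    pixels = image.astype(np.float64).ravel()
    return pixels.std() / pixels.mean()

# Example: fully developed speckle has exponential intensity statistics and C ~ 1,
# whereas a matte (diffusing) front surface lowers the measured contrast.
rng = np.random.default_rng(0)
fully_developed = rng.exponential(scale=1.0, size=(512, 512))
print(round(speckle_contrast(fully_developed), 2))  # approximately 1.0
```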
NASA Astrophysics Data System (ADS)
Lin, Chien-Liang; Su, Yu-Zheng; Hung, Min-Wei; Huang, Kuo-Cheng
2010-08-01
In recent years, Augmented Reality (AR) [1][2][3] has become very popular in universities and research organizations. AR technology has been widely used in Virtual Reality (VR) fields such as sophisticated weapons, flight vehicle development, data model visualization, virtual training, entertainment, and the arts. AR enhances the display output of a real environment with specific user-interaction functions or specific object recognition. It can be used in medical treatment, anatomy training, precision instrument casting, warplane guidance, engineering, and remote robot control. AR has many advantages over VR. The system developed here combines sensors, software, and imaging algorithms to make what users see feel real, actual, and present. The imaging algorithms include a gray-level method, an image binarization method, and a white balance method in order to achieve accurate image recognition and overcome the effects of lighting.
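The three pre-processing steps the abstract names are standard operations; the sketch below shows generic versions of them (luminance-weighted grey conversion, fixed-threshold binarization, grey-world white balance). The thresholds, weights, and function names are assumptions for illustration, not the authors' exact algorithms.

```python
# Hedged sketch of the three named pre-processing steps; generic formulations only.
import numpy as np

def to_gray(rgb: np.ndarray) -> np.ndarray:
    """Luminance-weighted grey-level conversion of an RGB frame."""
    return rgb @ np.array([0.299, 0.587, 0.114])

def binarize(gray: np.ndarray, threshold: float = 128.0) -> np.ndarray:
    """Fixed-threshold binarization, as commonly used before marker recognition."""
    return (gray >= threshold).astype(np.uint8)

def gray_world_white_balance(rgb: np.ndarray) -> np.ndarray:
    """Scale each channel so its mean matches the overall mean (grey-world assumption)."""
    means = rgb.reshape(-1, 3).mean(axis=0)
    gains = means.mean() / means
    return np.clip(rgb * gains, 0, 255)

frame = np.random.default_rng(1).integers(0, 256, size=(240, 320, 3)).astype(np.float64)
mask = binarize(to_gray(gray_world_white_balance(frame)))
```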
Research and Construction Lunar Stereoscopic Visualization System Based on Chang'E Data
NASA Astrophysics Data System (ADS)
Gao, Xingye; Zeng, Xingguo; Zhang, Guihua; Zuo, Wei; Li, ChunLai
2017-04-01
With lunar exploration activities carried by Chang'E-1, Chang'E-2 and Chang'E-3 lunar probe, a large amount of lunar data has been obtained, including topographical and image data covering the whole moon, as well as the panoramic image data of the spot close to the landing point of Chang'E-3. In this paper, we constructed immersive virtual moon system based on acquired lunar exploration data by using advanced stereoscopic visualization technology, which will help scholars to carry out research on lunar topography, assist the further exploration of lunar science, and implement the facilitation of lunar science outreach to the public. In this paper, we focus on the building of lunar stereoscopic visualization system with the combination of software and hardware by using binocular stereoscopic display technology, real-time rendering algorithm for massive terrain data, and building virtual scene technology based on panorama, to achieve an immersive virtual tour of the whole moon and local moonscape of Chang'E-3 landing point.
Accommodation response measurements for integral 3D image
NASA Astrophysics Data System (ADS)
Hiura, H.; Mishina, T.; Arai, J.; Iwadate, Y.
2014-03-01
We measured accommodation responses under integral photography (IP), binocular stereoscopic, and real-object display conditions, each under binocular and monocular viewing. The equipment we used was an optometric device and a 3D display. We developed the 3D display for IP and binocular stereoscopic images; it comprises a high-resolution liquid crystal display (LCD) and a high-density lens array. The LCD has a resolution of 468 dpi and a diagonal size of 4.8 inches. The high-density lens array comprises 106 x 69 micro lenses that have a focal length of 3 mm and a diameter of 1 mm. The lenses are arranged in a honeycomb pattern. The 3D display was positioned 60 cm from the observer under the IP and binocular stereoscopic display conditions. The target was presented at eight depth positions relative to the 3D display: 15, 10, and 5 cm in front of the 3D display, on the 3D display panel, and 5, 10, 15, and 30 cm behind the 3D display under the IP and binocular stereoscopic display conditions. Under the real-object display condition, the target was displayed on the 3D display panel, and the 3D display was placed at the eight positions. The results suggest that the IP image induced more natural accommodation responses compared to the binocular stereoscopic image. The accommodation responses to the IP image were weaker than those to a real object; however, they showed a similar tendency to those for the real object under the two viewing conditions. Therefore, IP can induce accommodation to the depth positions of 3D images.
A Low-Cost PC-Based Image Workstation for Dynamic Interactive Display of Three-Dimensional Anatomy
NASA Astrophysics Data System (ADS)
Barrett, William A.; Raya, Sai P.; Udupa, Jayaram K.
1989-05-01
A system for interactive definition, automated extraction, and dynamic interactive display of three-dimensional anatomy has been developed and implemented on a low-cost PC-based image workstation. An iconic display is used for staging predefined image sequences through specified increments of tilt and rotation over a solid viewing angle. Use of a fast processor facilitates rapid extraction and rendering of the anatomy into predefined image views. These views are formatted into a display matrix in a large image memory for rapid interactive selection and display of arbitrary spatially adjacent images within the viewing angle, thereby providing motion parallax depth cueing for efficient and accurate perception of true three-dimensional shape, size, structure, and spatial interrelationships of the imaged anatomy. The visual effect is that of holding and rotating the anatomy in the hand.
[Display technologies for augmented reality in medical applications].
Eck, Ulrich; Winkler, Alexander
2018-04-01
One of the main challenges for modern surgery is the effective use of the many available imaging modalities and diagnostic methods. Augmented reality systems can be used in the future to blend patient and planning information into the view of surgeons, which can improve the efficiency and safety of interventions. In this article we present five visualization methods for integrating augmented reality displays into medical procedures and explain their advantages and disadvantages. Based on an extensive literature review, the various existing approaches for integrating augmented reality displays into medical procedures are divided into five categories and the most important research results for each approach are presented. A large number of mixed and augmented reality solutions for medical interventions have been developed as research prototypes; however, only very few systems have been tested on patients. In order to integrate mixed and augmented reality displays into medical practice, highly specialized solutions need to be developed. Such systems must comply with requirements with respect to accuracy, fidelity, ergonomics, and seamless integration into the surgical workflow.
Flexible Display and Integrated Communication Devices (FDICD) Technology. Volume 2
2008-06-01
Report AFRL-RH-WP-TR-2008-0072; authors include David Huffman and Keith Tognoni; period of performance 14 April 2004 - 20 June 2008. Abstract (fragment): this flexible display and integrated communication devices (FDICD) technology program sought to create a family of powerful ...
Human Visual Performance and Flat Panel Display Image Quality
1980-07-01
Only fragments of this report's body were extracted. The recoverable text notes that the research required to relate human operator performance to the geometric properties of these display designs has characteristically lagged, and cites human factors work of the National Research Council's Committee on Undersea Warfare; the remainder of the extracted material is distribution-list residue.
Display system for imaging scientific telemetric information
NASA Technical Reports Server (NTRS)
Zabiyakin, G. I.; Rykovanov, S. N.
1979-01-01
A system for imaging scientific telemetric information, based on the M-6000 minicomputer and the SIGD graphic display, is described. Two-dimensional graphic display of telemetric information and interaction with the computer in the analysis and processing of telemetric parameters displayed on the screen are provided. The running-parameter information output method is presented. User capabilities in the analysis and processing of telemetric information imaged on the display screen, and the user language, are discussed and illustrated.
NASA Technical Reports Server (NTRS)
2003-01-01
Dark smoke from oil fires extends for about 60 kilometers south of Iraq's capital city of Baghdad in these images acquired by the Multi-angle Imaging SpectroRadiometer (MISR) on April 2, 2003. The thick, almost black smoke is apparent near image center and contains chemical and particulate components hazardous to human health and the environment. The top panel is from MISR's vertical-viewing (nadir) camera. Vegetated areas appear red here because this display is constructed using near-infrared, red, and blue band data, displayed as red, green, and blue, respectively, to produce a false-color image. The bottom panel is a combination of two camera views of the same area and is a 3-D stereo anaglyph in which red band nadir camera data are displayed as red, and red band data from the 60-degree backward-viewing camera are displayed as green and blue. Both panels are oriented with north to the left in order to facilitate stereo viewing. Viewing the 3-D anaglyph with red/blue glasses (with the red filter placed over the left eye and the blue filter over the right) makes it possible to see the rising smoke against the surface terrain. This technique helps to distinguish features in the atmosphere from those on the surface. In addition to the smoke, several high, thin cirrus clouds (barely visible in the nadir view) are readily observed using the stereo image. The Multi-angle Imaging SpectroRadiometer observes the daylit Earth continuously and every 9 days views the entire globe between 82 degrees north and 82 degrees south latitude. These data products were generated from a portion of the imagery acquired during Terra orbit 17489. The panels cover an area of about 187 kilometers x 123 kilometers, and use data from blocks 63 to 65 within World Reference System-2 path 168. MISR was built and is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Office of Earth Science, Washington, DC. The Terra satellite is managed by NASA's Goddard Space Flight Center, Greenbelt, MD. JPL is a division of the California Institute of Technology.
Trelease, R B
1996-01-01
Advances in computer visualization and user interface technologies have enabled development of "virtual reality" programs that allow users to perceive and to interact with objects in artificial three-dimensional environments. Such technologies were used to create an image database and program for studying the human skull, a specimen that has become increasingly expensive and scarce. Stereoscopic image pairs of a museum-quality skull were digitized from multiple views. For each view, the stereo pairs were interlaced into a single, field-sequential stereoscopic picture using an image processing program. The resulting interlaced image files are organized in an interactive multimedia program. At run-time, gray-scale 3-D images are displayed on a large-screen computer monitor and observed through liquid-crystal shutter goggles. Users can then control the program and change views with a mouse and cursor to point-and-click on screen-level control words ("buttons"). For each view of the skull, an ID control button can be used to overlay pointers and captions for important structures. Pointing and clicking on "hidden buttons" overlying certain structures triggers digitized audio spoken word descriptions or mini lectures.
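The field-sequential interlacing step this program relies on can be expressed very compactly: left-eye content goes on one set of scanlines and right-eye content on the other, so shutter goggles can separate them. The sketch below is a minimal illustration of that merge; array and function names are assumptions, not from the cited program.

```python
# Hedged sketch of field-sequential stereo interlacing: left-eye rows on even
# scanlines, right-eye rows on odd scanlines. Names are illustrative only.
import numpy as np

def interlace_stereo(left: np.ndarray, right: np.ndarray) -> np.ndarray:
    """Merge a stereo pair line-by-line into one field-sequential frame."""
    assert left.shape == right.shape, "stereo pair must have identical dimensions"
    frame = np.empty_like(left)
    frame[0::2] = left[0::2]    # even scanlines carry the left-eye view
    frame[1::2] = right[1::2]   # odd scanlines carry the right-eye view
    return frame
```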
Status review of field emission displays
NASA Astrophysics Data System (ADS)
Ghrayeb, Joseph; Daniels, Reginald
2001-09-01
Cathode ray tube (CRT) technology dominates the direct-view display market. Mature CRT technology is still the preferred choice for many designs. CRT manufacturers have greatly improved the size and weight of CRT displays. High-performance CRTs continue to be in great demand; however, suppliers have to contend with the vanishing CRT vendor syndrome. This syndrome fuels the search for an alternate display technology source. Within the past 10 years, field emission display (FED) technology had gained momentum and, at one time, was considered the most viable electronic display technology candidate to replace the CRT. The FED community had advocated and promised many advantages over active matrix liquid crystal displays (AMLCD), electroluminescent (EL), or plasma displays. Some observers, including potential FED manufacturers and the Department of Defense (especially the Defense Advanced Research Projects Agency (DARPA)), considered the FED entry as having leapfrog potential. Despite major investments by US as well as Asian manufacturers, reliability and manufacturing difficulties greatly slowed the advancement of the technology. The FED manufacturing difficulties have caused many would-be FED manufacturing participants to abandon FED research. This paper will examine the trends that are leading this nascent technology to its downfall. FED technology was once considered to have the potential to leapfrog AMLCD's dominance in the display industry. At present the FED has suffered severe setbacks and there are very few FED manufacturers still pursuing research in the area. These companies have yet to deliver a display beyond the prototype stage.
Display technology - Human factors concepts
NASA Astrophysics Data System (ADS)
Stokes, Alan; Wickens, Christopher; Kite, Kirsten
1990-03-01
Recent advances in the design of aircraft cockpit displays are reviewed, with an emphasis on their applicability to automobiles. The fundamental principles of display technology are introduced, and individual chapters are devoted to selective visual attention, command and status displays, foveal and peripheral displays, navigational displays, auditory displays, color and pictorial displays, head-up displays, automated systems, and dual-task performance and pilot workload. Diagrams, drawings, and photographs of typical displays are provided.
Operator vision aids for space teleoperation assembly and servicing
NASA Technical Reports Server (NTRS)
Brooks, Thurston L.; Ince, Ilhan; Lee, Greg
1992-01-01
This paper investigates concepts for visual operator aids required for effective telerobotic control. Operator visual aids, as defined here, mean any operational enhancement that improves man-machine control through the visual system. These concepts were derived as part of a study of vision issues for space teleoperation. Extensive literature on teleoperation, robotics, and human factors was surveyed to definitively specify appropriate requirements. This paper presents these visual aids in three general categories of camera/lighting functions, display enhancements, and operator cues. In the area of camera/lighting functions concepts are discussed for: (1) automatic end effector or task tracking; (2) novel camera designs; (3) computer-generated virtual camera views; (4) computer assisted camera/lighting placement; and (5) voice control. In the technology area of display aids, concepts are presented for: (1) zone displays, such as imminent collision or indexing limits; (2) predictive displays for temporal and spatial location; (3) stimulus-response reconciliation displays; (4) graphical display of depth cues such as 2-D symbolic depth, virtual views, and perspective depth; and (5) view enhancements through image processing and symbolic representations. Finally, operator visual cues (e.g., targets) that help identify size, distance, shape, orientation and location are discussed.
[New simulation technologies in neurosurgery].
Byvaltsev, V A; Belykh, E G; Konovalov, N A
2016-01-01
The article presents a literature review on the current state of simulation technologies in neurosurgery, a brief description of the basic technology and the classification of simulation models, and examples of simulation models and skills simulators used in neurosurgery. Basic models for the development of physical skills, the spectrum of available computer virtual simulators, and their main characteristics are described. It would be instructive to include microneurosurgical training and a cadaver course of neurosurgical approaches in neurosurgery training programs and to extend the use of three-dimensional imaging. Technologies for producing three-dimensional anatomical models and patient-specific computer simulators as well as improvement of tactile feedback systems and display quality of virtual models are promising areas. Continued professional education necessitates further research for assessing the validity and practical use of simulators and physical models.
Gonçalves, Luís F; Romero, Roberto; Espinoza, Jimmy; Lee, Wesley; Treadwell, Marjorie; Chintala, Kavitha; Brandl, Helmut; Chaiworapongsa, Tinnakorn
2004-04-01
To describe clinical and research applications of 4-dimensional imaging of the fetal heart using color Doppler spatiotemporal image correlation. Forty-four volume data sets were acquired by color Doppler spatiotemporal image correlation. Seven subjects were examined: 4 fetuses without abnormalities, 1 fetus with ventriculomegaly and a hypoplastic cerebellum but normal cardiac anatomy, and 2 fetuses with cardiac anomalies detected by fetal echocardiography (1 case of a ventricular septal defect associated with trisomy 21 and 1 case of a double-inlet right ventricle with a 46,XX karyotype). The median gestational age at the time of examination was 21 3/7 weeks (range, 19 5/7-34 0/7 weeks). Volume data sets were reviewed offline by multiplanar display and volume-rendering methods. Representative images and online video clips illustrating the diagnostic potential of this technology are presented. Color Doppler spatiotemporal image correlation allowed multiplanar visualization of ventricular septal defects, multiplanar display and volume rendering of tricuspid regurgitation, volume rendering of the outflow tracts by color and power Doppler ultrasonography (both in a normal case and in a case of a double-inlet right ventricle with a double-outlet right ventricle), and visualization of venous streams at the level of the foramen ovale. Color Doppler spatiotemporal image correlation has the potential to simplify visualization of the outflow tracts and improve the evaluation of the location and extent of ventricular septal defects. Other applications include 3-dimensional evaluation of regurgitation jets and venous streams at the level of the foramen ovale.
Evaluating motion parallax and stereopsis as depth cues for autostereoscopic displays
NASA Astrophysics Data System (ADS)
Braun, Marius; Leiner, Ulrich; Ruschin, Detlef
2011-03-01
The perception of space in the real world is based on multifaceted depth cues, most of them monocular, some binocular. Developing 3D displays raises the question of which of these depth cues are predominant and should be simulated by computational means in such a panel. Beyond the cues based on image content, such as shadows or patterns, stereopsis and depth from motion parallax are the most significant mechanisms supplying observers with depth information. We set up a carefully designed test situation, widely excluding other, undesired distance hints. Thereafter we conducted a user test to find out which of these two depth cues is more relevant and whether a combination of both would increase accuracy in a depth estimation task. The trials were conducted using our autostereoscopic "Free2C" displays, which are capable of detecting the user's eye position and steering the image lobes dynamically in that direction. At the same time, the eye position was used to update the virtual camera's location, thereby offering motion parallax to the observer. As far as we know, this was the first time that such a test has been conducted using an autostereoscopic display without any assistive technologies. Our results showed, in accordance with prior experiments, that both cues are effective; however, stereopsis is more relevant by an order of magnitude. Combining both cues improved the precision of distance estimation by another 30-40%.
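The motion-parallax cue described here follows from simple screen-plane geometry: when the tracked eye moves laterally, a point rendered at a given depth behind the screen must shift accordingly. The sketch below works through that geometry under generic assumptions; the variable names, viewing distance, and scaling are illustrative, not the Free2C implementation.

```python
# Hedged sketch of the geometry behind rendered motion parallax: a point at
# depth d behind the screen, viewed from distance D, shifts on the screen plane
# by e*d/(D+d) when the eye moves laterally by e. Values are illustrative.
def parallax_shift_mm(eye_shift_mm: float, depth_behind_screen_mm: float,
                      viewing_distance_mm: float) -> float:
    """Screen-plane shift of a point at the given depth for a lateral eye movement."""
    d = depth_behind_screen_mm
    return eye_shift_mm * d / (viewing_distance_mm + d)

# e.g. a 50 mm head movement at 700 mm viewing distance, target 100 mm behind the screen
print(round(parallax_shift_mm(50.0, 100.0, 700.0), 1))  # ~6.2 mm
```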
Adolescents' reactions to the imagery displayed in smoking and antismoking advertisements.
Shadel, William G; Niaura, Raymond; Abrams, David B
2002-06-01
This study compared adolescents' unbiased perceptions of the images displayed in smoking and antismoking advertising. Twenty-nine adolescents (ages 11-17) were shown images taken from both advertising types; all images were digitally edited so that no product information appeared in them. Participants described each image in a free-response format and rated each image on self-report dimensions. Content analyses of the free-response descriptions and analyses of the self-reports revealed that adolescents viewed images taken from cigarette advertisements more positively compared with images taken from antismoking advertisements. These findings suggest that one reason for the potency of cigarette advertising, compared with antismoking advertising, is the inherent positive appeal of the images displayed. Antismoking advertising may be more effective at limiting adolescent smoking if the images displayed have a more positive valence.
Display And Analysis Of Tomographic Volumetric Images Utilizing A Vari-Focal Mirror
NASA Astrophysics Data System (ADS)
Harris, L. D.; Camp, J. J.
1984-10-01
A system for the three-dimensional (3-D) display and analysis of stacks of tomographic images is described. The device utilizes the principle of a variable focal (vari-focal) length optical element in the form of an aluminized membrane stretched over a loudspeaker to generate a virtual 3-D image which is a visible representation of a 3-D array of image elements (voxels). The system displays 500,000 voxels per mirror cycle in a 3-D raster which appears continuous and demonstrates no distracting artifacts. The display is bright enough so that portions of the image can be dimmed without compromising the number of shades of gray. For x-ray CT, a displayed volume image looks like a 3-D radiograph which appears to be in the space directly behind the mirror. The viewer sees new views by moving his/her head from side to side or up and down. The system facilitates a variety of operator interactive functions which allow the user to point at objects within the image, control the orientation and location of brightened oblique planes within the volume, numerically dissect away selected image regions, and control intensity window levels. Photographs of example volume images displayed on the system illustrate, to the degree possible in a flat picture, the nature of displayed images and the capabilities of the system. Preliminary application of the display device to the analysis of volume reconstructions obtained from the Dynamic Spatial Reconstructor indicates significant utility of the system in selecting oblique sections and gaining an appreciation of the shape and dimensions of complex organ systems.
Development of a mini-mobile digital radiography system by using wireless smart devices.
Jeong, Chang-Won; Joo, Su-Chong; Ryu, Jong-Hyun; Lee, Jinseok; Kim, Kyong-Woo; Yoon, Kwon-Ha
2014-08-01
Current trends in digital radiography (DR) are toward systems that use portable smart mobile devices for patient-centered care. We aimed to develop a mini-mobile DR system that uses smart devices for wireless connection to medical information systems. We developed a mini-mobile DR system consisting of an X-ray source and a Complementary Metal-Oxide Semiconductor (CMOS) sensor based on a flat panel detector for small-field diagnostics in patients. It is used in situations that are difficult to handle with a fixed traditional device. We also designed a method for embedded systems in the development of portable DR systems. The external interface used the fast and stable IEEE 802.11n wireless protocol, and we adapted the device for connections with the Picture Archiving and Communication System (PACS) and smart devices. The smart device could display images on an external monitor other than the monitor in the DR system. The communication modules, main control board, and external interface supporting smart devices were implemented. Further, a smart viewer based on the external interface was developed to display image files on various smart devices. In addition, operators benefit from a reduced radiation dose when using remote smart devices. The system is integrated with smart devices so that it can provide X-ray imaging services anywhere, and it permits image observation on a smart device from a remote location by connecting to the external interface. We evaluated the response time of the mini-mobile DR system in comparison with mobile PACS. The experimental results show that our system outperforms conventional mobile PACS in this regard.
Layered approach to workstation design for medical image viewing
NASA Astrophysics Data System (ADS)
Haynor, David R.; Zick, Gregory L.; Heritage, Marcus B.; Kim, Yongmin
1992-07-01
Software engineering principles suggest that complex software systems are best constructed from independent, self-contained modules, thereby maximizing the portability, maintainability, and modifiability of the produced code. This principle is important in the design of medical imaging workstations, where further developments in technology (CPU, memory, interface devices, displays, network connections) are required for clinically acceptable workstations, and it is desirable to provide different hardware platforms with the same "look and feel" for the user. In addition, the set of desired functions is relatively well understood, but the optimal user interface for delivering these functions on a clinically acceptable workstation still differs by department, specialty, or individual preference. At the University of Washington, we are developing a viewing station based on the IBM RISC/6000 computer and on new technologies that are just becoming commercially available. These include advanced voice recognition systems and an ultra-high-speed network. We are developing a set of specifications and a conceptual design for the workstation, and will be producing a prototype. This paper presents our current concepts concerning the architecture and software system design of the future prototype. Our conceptual design specifies requirements for a Database Application Programming Interface (DBAPI) and for a User API (UAPI). The DBAPI consists of a set of subroutine calls that define the admissible transactions between the workstation and an image archive. The UAPI describes the requests a user interface program can make of the workstation. It incorporates basic display and image processing functions, yet is specifically designed to allow extensions to the basic set at the application level. We discuss the fundamental elements of the two APIs and illustrate their application to workstation design.
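The layered idea of an archive-facing DBAPI and a user-facing UAPI can be sketched as two small interfaces. The method names and signatures below are assumptions for illustration only, not the University of Washington design.

```python
# Hedged sketch of the two layered interfaces the abstract describes;
# all method names and signatures are illustrative assumptions.
from abc import ABC, abstractmethod

class DatabaseAPI(ABC):
    """Admissible transactions between the workstation and an image archive."""
    @abstractmethod
    def query_studies(self, patient_id: str) -> list: ...
    @abstractmethod
    def fetch_image(self, study_id: str, image_id: str) -> bytes: ...

class UserAPI(ABC):
    """Requests a user-interface program can make of the workstation."""
    @abstractmethod
    def display(self, image_id: str, viewport: int) -> None: ...
    @abstractmethod
    def window_level(self, viewport: int, center: float, width: float) -> None: ...
```

Separating the two interfaces is what lets the same user-interface layer run on different hardware platforms against different archives, which is the portability argument the abstract makes.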
Ehlers, Justis P.; Srivastava, Sunil K.; Feiler, Daniel; Noonan, Amanda I.; Rollins, Andrew M.; Tao, Yuankai K.
2014-01-01
Purpose To demonstrate key integrative advances in microscope-integrated intraoperative optical coherence tomography (iOCT) technology that will facilitate adoption and utilization during ophthalmic surgery. Methods We developed a second-generation prototype microscope-integrated iOCT system that interfaces directly with a standard ophthalmic surgical microscope. Novel features for improved design and functionality included improved profile and ergonomics, as well as a tunable lens system for optimized image quality and heads-up display (HUD) system for surgeon feedback. Novel material testing was performed for potential suitability for OCT-compatible instrumentation based on light scattering and transmission characteristics. Prototype surgical instruments were developed based on material testing and tested using the microscope-integrated iOCT system. Several surgical maneuvers were performed and imaged, and surgical motion visualization was evaluated with a unique scanning and image processing protocol. Results High-resolution images were successfully obtained with the microscope-integrated iOCT system with HUD feedback. Six semi-transparent materials were characterized to determine their attenuation coefficients and scatter density with an 830 nm OCT light source. Based on these optical properties, polycarbonate was selected as a material substrate for prototype instrument construction. A surgical pick, retinal forceps, and corneal needle were constructed with semi-transparent materials. Excellent visualization of both the underlying tissues and surgical instrument were achieved on OCT cross-section. Using model eyes, various surgical maneuvers were visualized, including membrane peeling, vessel manipulation, cannulation of the subretinal space, subretinal intraocular foreign body removal, and corneal penetration. Conclusions Significant iterative improvements in integrative technology related to iOCT and ophthalmic surgery are demonstrated. PMID:25141340
A Laboratory-Based Course in Display Technology
ERIC Educational Resources Information Center
Sarik, J.; Akinwande, A. I.; Kymissis, I.
2011-01-01
A laboratory-based class in flat-panel display technology is presented. The course introduces fundamental concepts of display systems and reinforces these concepts through the fabrication of three display devices--an inorganic electroluminescent seven-segment display, a dot-matrix organic light-emitting diode (OLED) display, and a dot-matrix…
NASA Technical Reports Server (NTRS)
1972-01-01
The growth of common as well as emerging visual display technologies is surveyed. The major inference is that contemporary society is rapidly growing ever more reliant on visual displays for a variety of purposes. Because of its unique mission requirements, the National Aeronautics and Space Administration has contributed in an important and specific way to the growth of visual display technology. These contributions are characterized by the use of computer-driven visual displays to provide an enormous amount of information concisely, rapidly, and accurately.
72-directional display having VGA resolution for high-appearance image generation
NASA Astrophysics Data System (ADS)
Takaki, Yasuhiro; Dairiki, Takeshi
2006-02-01
The high-density directional display, which was originally developed in order to realize a natural 3D display, is not only a 3D display but also a high-appearance display. The appearances of objects, such as glare and transparency, are the results of the reflection and the refraction of rays. The faithful reproduction of such appearances of objects is impossible using conventional 2D displays because rays diffuse on the display screen. The high-density directional display precisely controls the horizontal ray directions so that it can reproduce the appearances of objects. The fidelity of the reproduction of object appearances depends on the ray angle sampling pitch. The angle sampling pitch is determined by considering the human eye imaging system. In the present study the high-appearance display which has the resolution of 640×400 and emits rays in 72 different horizontal directions with the angle pitch of 0.38° was constructed. Two 72-directional displays were combined, each of which consisted of a high-resolution LCD panel (3,840×2,400) and a slanted lenticular sheet. Two images produced by two displays were superimposed by a half mirror. A slit array was placed at the focal plane of the lenticular sheet for each display to reduce the horizontal image crosstalk in the combined image. The impression analysis shows that the high-appearance display provides higher appearances and presence than the conventional 2D displays do.
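The quoted figures are mutually consistent, which is worth spelling out: 72 directions at 640 x 400 requires exactly the number of rays that two 3,840 x 2,400 panels supply. The check below is my own arithmetic on the numbers stated in the abstract, not a statement from the paper.

```python
# Consistency check (my arithmetic, not from the paper) of the quoted numbers:
# a 640x400 image emitted in 72 directions needs as many rays as two
# 3,840 x 2,400 LCD panels provide.
rays_needed = 640 * 400 * 72          # 18,432,000 directional image points
rays_available = 2 * 3840 * 2400      # two high-resolution panels combined
assert rays_needed == rays_available  # 18,432,000 == 18,432,000
```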
Dual-view integral imaging three-dimensional display using polarized glasses.
Wu, Fei; Lv, Guo-Jiao; Deng, Huan; Zhao, Bai-Chuan; Wang, Qiong-Hua
2018-02-20
We propose a dual-view integral imaging (DVII) three-dimensional (3D) display using polarized glasses. The DVII 3D display consists of a display panel, a polarized parallax barrier, a microlens array, and two pairs of polarized glasses. Two kinds of elemental images, which are captured from two different 3D scenes, are alternately arranged on the display panel. The polarized parallax barrier is attached to the display panel and composed of two kinds of units that are also alternately arranged. The polarization directions between adjacent units are perpendicular. The polarization directions of the two pairs of polarized glasses are the same as those of the two kinds of units of the polarized parallax barrier, respectively. The lights emitted from the two kinds of elemental images are modulated by the corresponding polarizer units and microlenses, respectively. Two different 3D images are reconstructed in the viewing zone and separated by using two pairs of polarized glasses. A prototype of the DVII 3D display is developed and two 3D images can be presented simultaneously, verifying the hypothesis.
Display Techniques for Advanced Crew Stations (DTACS). Phase 1. Display Techniques Study.
1984-03-01
Only table-of-contents fragments of this report were extracted; recoverable section titles include Off-Screen Displays, Flat Panel Displays, Format Requirements, Head-Up Display, Display Panel, RGB Calligraphic Display, Voice Warning/Response Technology, and Touch Panel Technology.
Multimission helicopter information display technology
NASA Astrophysics Data System (ADS)
Terry, William S.
1995-06-01
A new operator display subsystem is being incorporated as part of the next-generation United States Navy (USN) helicopter avionics system to be integrated into the Multi-Mission Helicopter (MMH), which will replace both the SH-60B and the SH-60F in 2001. This subsystem exploits state-of-the-art technology for the display hardware, the display driver hardware, information presentation methodologies, and software architecture. The baseline technologies have evolved during the development period, and the solution has been modified to include current elements, including high-resolution AMLCD color displays that are sunlight readable, highly reliable, and significantly lighter than CRT technology, as well as Reduced Instruction Set Computer (RISC) based high-performance display generators that have only recently become feasible to implement in a military aircraft. This paper describes the overall subsystem architecture, some detail on the individual elements along with supporting rationale, and the manner in which the display subsystem provides the necessary tools to significantly enhance the performance of the weapon system through the vital operator-system interface. Also addressed is a summary of the evolution of design leading to the current approach to MMH operator displays and display processing, as well as the growth path that the MMH display subsystem will most likely follow as additional technology evolution occurs.
Fractional screen video enhancement apparatus
Spletzer, Barry L [Albuquerque, NM; Davidson, George S [Albuquerque, NM; Zimmerer, Daniel J [Tijeras, NM; Marron, Lisa C [Albuquerque, NM
2005-07-19
The present invention provides a method and apparatus for displaying two portions of an image at two resolutions. For example, the invention can display an entire image at a first resolution, and a subset of the image at a second, higher resolution. Two inexpensive, low resolution displays can be used to produce a large image with high resolution only where needed.
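The two-resolution idea can be illustrated with a simple composite: show the whole scene coarsely and paste a selected region back in at full detail. The sketch below is a generic illustration of that idea for a single grayscale frame; the nearest-neighbour downsampling, region format, and names are assumptions, not the patented apparatus.

```python
# Hedged sketch of the two-resolution display idea: a coarse full-frame view
# with a full-resolution inset over a region of interest. Illustrative only.
import numpy as np

def composite_views(full_res: np.ndarray, roi: tuple, down_factor: int) -> np.ndarray:
    """Downsample the whole image, then paste the region of interest back at full detail."""
    coarse = full_res[::down_factor, ::down_factor]
    display = np.repeat(np.repeat(coarse, down_factor, axis=0), down_factor, axis=1)
    display = display[:full_res.shape[0], :full_res.shape[1]].copy()
    y0, y1, x0, x1 = roi
    display[y0:y1, x0:x1] = full_res[y0:y1, x0:x1]   # high-resolution inset
    return display
```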
New DICOM extensions for softcopy and hardcopy display consistency.
Eichelberg, M; Riesmeier, J; Kleber, K; Grönemeyer, D H; Oosterwijk, H; Jensch, P
2000-01-01
The DICOM standard defines in detail how medical images can be communicated. However, the rules on how to interpret the parameters contained in a DICOM image which deal with the image presentation were either lacking or not well defined. As a result, the same image frequently looks different when displayed on different workstations or printed on a film from various printers. Three new DICOM extensions attempt to close this gap by defining a comprehensive model for the display of images on softcopy and hardcopy devices: Grayscale Standard Display Function, Grayscale Softcopy Presentation State and Presentation Look Up Table.
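One of the presentation parameters such a model pins down is the windowing step applied before device-specific calibration. The sketch below shows a generic linear window-centre/width mapping of the kind a softcopy presentation state can carry; it is an illustrative form only, and the exact DICOM formula and the subsequent Grayscale Standard Display Function mapping are defined in the standard and omitted here.

```python
# Hedged sketch: a generic linear window-centre/width mapping to a normalized
# presentation range, illustrating the kind of parameter the presentation
# state fixes. Not the normative DICOM formula; the GSDF step is omitted.
import numpy as np

def apply_window(pixels: np.ndarray, center: float, width: float) -> np.ndarray:
    """Map stored pixel values to [0, 1] with a linear window of given centre and width."""
    low = center - width / 2.0
    out = (pixels.astype(np.float64) - low) / width
    return np.clip(out, 0.0, 1.0)
```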
Display gamma is an important factor in Web image viewing
NASA Astrophysics Data System (ADS)
Zhang, Xuemei; Lavin, Yingmei; Silverstein, D. Amnon
2001-06-01
We conducted a perceptual image preference experiment over the web to find out (1) whether typical computer users have significant variations in their display gamma settings, and (2) if so, whether the gamma settings have a significant perceptual effect on the appearance of images in their web browsers. The digital image renderings used were found to have preferred tone characteristics in a previous lab-controlled experiment. They were rendered with 4 different gamma settings. The subjects were asked to view the images over the web, with their own computer equipment and web browsers, and made pair-wise subjective preference judgements on which rendering they liked best for each image. Each subject's display gamma setting was estimated using a 'gamma estimator' tool, implemented as a Java applet. The results indicated that (1) the users' gamma settings, as estimated in the experiment, span a wide range from about 1.8 to about 3.0; (2) the subjects preferred images that were rendered with a 'correct' gamma value matching their display setting, and disliked images rendered with a gamma value not matching their display's. This indicates that display gamma estimation is a perceptually significant factor in web image optimization.
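Both pieces of this study rest on the standard power-law display model. The sketch below shows (a) a common way to estimate gamma from the grey level a viewer matches to a 50%-luminance black/white dither, and (b) re-encoding a linear-light image for a given display gamma. The formulas are the generic power-law model, not the authors' Java applet.

```python
# Hedged sketch: power-law gamma estimation and correction, generic model only.
import numpy as np

def estimate_gamma(matched_gray: float) -> float:
    """If grey level g (0..1) visually matches a 50%-luminance dither,
    then g**gamma = 0.5, so gamma = ln(0.5) / ln(g)."""
    return np.log(0.5) / np.log(matched_gray)

def rerender_for_display(image_linear: np.ndarray, display_gamma: float) -> np.ndarray:
    """Encode a linear-light image so it displays correctly on a power-law monitor."""
    return np.clip(image_linear, 0.0, 1.0) ** (1.0 / display_gamma)

print(round(estimate_gamma(0.73), 2))  # a match near 0.73 implies gamma of about 2.2
```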
On-line 3-dimensional confocal imaging in vivo.
Li, J; Jester, J V; Cavanagh, H D; Black, T D; Petroll, W M
2000-09-01
In vivo confocal microscopy through focusing (CMTF) can provide a 3-D stack of high-resolution corneal images and allows objective measurements of corneal sublayer thickness and backscattering. However, current systems require time-consuming off-line image processing and analysis on multiple software platforms. Furthermore, there is a trade off between the CMTF speed and measurement precision. The purpose of this study was to develop a novel on-line system for in vivo corneal imaging and analysis that overcomes these limitations. A tandem scanning confocal microscope (TSCM) was used for corneal imaging. The TSCM video camera was interfaced directly to a PC image acquisition board to implement real-time digitization. Software was developed to allow in vivo 2-D imaging, CMTF image acquisition, interactive 3-D reconstruction, and analysis of CMTF data to be performed on line in a single user-friendly environment. A procedure was also incorporated to separate the odd/even video fields, thereby doubling the CMTF sampling rate and theoretically improving the precision of CMTF thickness measurements by a factor of two. In vivo corneal examinations of a normal human and a photorefractive keratectomy patient are presented to demonstrate the capabilities of the new system. Improvements in the convenience, speed, and functionality of in vivo CMTF image acquisition, display, and analysis are demonstrated. This is the first full-featured software package designed for in vivo TSCM imaging of the cornea, which performs both 2-D and 3-D image acquisition, display, and processing as well as CMTF analysis. The use of a PC platform and incorporation of easy to use, on line, and interactive features should help to improve the clinical utility of this technology.
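The field-separation step that doubles the CMTF sampling rate amounts to splitting each interlaced video frame into its even and odd scanline fields and treating them as successive depth samples. The sketch below illustrates only that split; array and function names are assumptions, not the authors' software.

```python
# Hedged sketch of odd/even field separation: each interlaced frame yields two
# depth samples, doubling the CMTF sampling rate. Names are illustrative only.
import numpy as np

def split_fields(frame: np.ndarray) -> tuple:
    """Return (even_field, odd_field) from one interlaced video frame."""
    return frame[0::2], frame[1::2]

def cmtf_samples(frames: list) -> list:
    """Interleave the two fields of every frame so the sample count doubles."""
    samples = []
    for frame in frames:
        even, odd = split_fields(frame)
        samples.extend([even, odd])
    return samples
```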
Design and fabrication of vertically-integrated CMOS image sensors.
Skorka, Orit; Joseph, Dileepan
2011-01-01
Technologies to fabricate integrated circuits (IC) with 3D structures are an emerging trend in IC design. They are based on vertical stacking of active components to form heterogeneous microsystems. Electronic image sensors will benefit from these technologies because they allow increased pixel-level data processing and device optimization. This paper covers general principles in the design of vertically-integrated (VI) CMOS image sensors that are fabricated by flip-chip bonding. These sensors are composed of a CMOS die and a photodetector die. As a specific example, the paper presents a VI-CMOS image sensor that was designed at the University of Alberta, and fabricated with the help of CMC Microsystems and Micralyne Inc. To realize prototypes, CMOS dies with logarithmic active pixels were prepared in a commercial process, and photodetector dies with metal-semiconductor-metal devices were prepared in a custom process using hydrogenated amorphous silicon. The paper also describes a digital camera that was developed to test the prototype. In this camera, scenes captured by the image sensor are read using an FPGA board, and sent in real time to a PC over USB for data processing and display. Experimental results show that the VI-CMOS prototype has a higher dynamic range and a lower dark limit than conventional electronic image sensors.
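The dynamic-range claim for logarithmic pixels can be illustrated with the usual response model, in which many decades of photocurrent compress into a small voltage swing. The model and its constants below are illustrative assumptions, not the fabricated sensor's calibration.

```python
# Hedged sketch of a generic logarithmic pixel response, V = a + b*ln(I),
# illustrating dynamic-range compression. Constants are illustrative only.
import numpy as np

def log_pixel_response(irradiance: np.ndarray, a: float = 0.5, b: float = 0.05) -> np.ndarray:
    """Simple logarithmic response model (volts) over a wide irradiance range."""
    return a + b * np.log(irradiance)

irradiance = np.logspace(-3, 3, 7)          # six decades of scene irradiance
print(np.round(log_pixel_response(irradiance), 3))
```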
Knowledge representation in space flight operations
NASA Technical Reports Server (NTRS)
Busse, Carl
1989-01-01
In space flight operations rapid understanding of the state of the space vehicle is essential. Representation of knowledge depicting space vehicle status in a dynamic environment presents a difficult challenge. The NASA Jet Propulsion Laboratory has pursued areas of technology associated with the advancement of spacecraft operations environment. This has led to the development of several advanced mission systems which incorporate enhanced graphics capabilities. These systems include: (1) Spacecraft Health Automated Reasoning Prototype (SHARP); (2) Spacecraft Monitoring Environment (SME); (3) Electrical Power Data Monitor (EPDM); (4) Generic Payload Operations Control Center (GPOCC); and (5) Telemetry System Monitor Prototype (TSM). Knowledge representation in these systems provides a direct representation of the intrinsic images associated with the instrument and satellite telemetry and telecommunications systems. The man-machine interface includes easily interpreted contextual graphic displays. These interactive video displays contain multiple display screens with pop-up windows and intelligent, high resolution graphics linked through context and mouse-sensitive icons and text.
Actively addressed single pixel full-colour plasmonic display
NASA Astrophysics Data System (ADS)
Franklin, Daniel; Frank, Russell; Wu, Shin-Tson; Chanda, Debashis
2017-05-01
Dynamic, colour-changing surfaces have many applications including displays, wearables and active camouflage. Plasmonic nanostructures can fill this role by having the advantages of ultra-small pixels, high reflectivity and post-fabrication tuning through control of the surrounding media. However, previous reports of post-fabrication tuning have yet to cover a full red-green-blue (RGB) colour basis set with a single nanostructure of singular dimensions. Here, we report a method which greatly advances this tuning and demonstrates a liquid crystal-plasmonic system that covers the full RGB colour basis set, only as a function of voltage. This is accomplished through a surface morphology-induced, polarization-dependent plasmonic resonance and a combination of bulk and surface liquid crystal effects that manifest at different voltages. We further demonstrate the system's compatibility with existing LCD technology by integrating it with a commercially available thin-film-transistor array. The imprinted surface interfaces readily with computers to display images as well as video.
NASA Technical Reports Server (NTRS)
Randle, R. J.; Roscoe, S. N.; Petitt, J. C.
1980-01-01
Twenty professional pilots observed a computer-generated airport scene during simulated autopilot-coupled night landing approaches and at two points (20 sec and 10 sec before touchdown) judged whether the airplane would undershoot or overshoot the aimpoint. Visual accommodation was continuously measured using an automatic infrared optometer. Experimental variables included approach slope angle, display magnification, visual focus demand (using ophthalmic lenses), and presentation of the display as either a real (direct view) or a virtual (collimated) image. Aimpoint judgments shifted predictably with actual approach slope and display magnification. Both pilot judgments and measured accommodation interacted with focus demand with real-image displays but not with virtual-image displays. With either type of display, measured accommodation lagged far behind focus demand and was reliably less responsive to the virtual images. Pilot judgments shifted dramatically from an overwhelming perceived-overshoot bias 20 sec before touchdown to a reliable undershoot bias 10 sec later.
Image quality evaluation of medical color and monochrome displays using an imaging colorimeter
NASA Astrophysics Data System (ADS)
Roehrig, Hans; Gu, Xiliang; Fan, Jiahua
2012-10-01
The purpose of this presentation is to demonstrate the means that permit examining image quality with respect to the MTF (Modulation Transfer Function) and NPS (Noise Power Spectrum) of color and monochrome displays. There were indications in the past that color displays could negatively affect clinical performance compared to monochrome displays. Colorimeters such as the PM-1423 are now available that have higher sensitivity and color accuracy than traditional CCD cameras; reference (1) was not based on measurements made with a colorimeter. This paper focuses on measurements of the spatial resolution and noise performance of color and monochrome medical displays made with a colorimeter; the data will subsequently be submitted to an ROC study for presentation at a future SPIE conference. Specifically, the MTF and NPS were evaluated and compared at different digital driving levels (DDL) between the two medical displays. Measurements of color image quality were made with an imaging colorimeter. Imaging colorimetry is ideally suited to FPD measurement because imaging systems capture spatial data, generating millions of data points in a single measurement operation. The imaging colorimeter used was the PM-1423 from Radiant Imaging. It uses full-frame, 14-bit, thermoelectrically cooled and temperature-stabilized, scientific-grade CCDs with 100% fill factor, which makes it well suited to measuring the luminance and chrominance of individual pixels and sub-pixels on an LCD display.
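The NPS estimate such a study reports is typically obtained from a flat-field capture of the display. The sketch below shows one common form of that computation; the normalization convention, pixel-pitch handling, and names are assumptions for illustration, not the authors' processing chain.

```python
# Hedged sketch of a 2-D noise power spectrum estimate from a flat-field capture;
# normalization conventions vary and this one is illustrative only.
import numpy as np

def noise_power_spectrum(flat_field: np.ndarray, pixel_pitch_mm: float) -> np.ndarray:
    """Estimate the 2-D NPS of a uniform (flat-field) luminance image."""
    noise = flat_field - flat_field.mean()            # remove the DC (mean luminance) term
    ny, nx = noise.shape
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(noise))) ** 2
    return spectrum * (pixel_pitch_mm ** 2) / (nx * ny)
```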
Display Considerations For Intravascular Ultrasonic Imaging
NASA Astrophysics Data System (ADS)
Gessert, James M.; Krinke, Charlie; Mallery, John A.; Zalesky, Paul J.
1989-08-01
A display has been developed for intravascular ultrasonic imaging. The design of this display has a primary goal of providing guidance information for therapeutic interventions such as balloons, lasers, and atherectomy devices. Design considerations include catheter configuration, anatomy, acoustic properties of normal and diseased tissue, the catheterization laboratory and operating room environment, acoustic and electrical safety, acoustic data sampling issues, and logistical support such as image measurement, storage, and retrieval. Intravascular imaging is in an early stage of development, so design flexibility and expandability are very important. The display which has been developed is capable of acquisition and display of grey-scale images at rates varying from static B-scans to 30 frames per second. It stores images in a 640 x 480 x 8 bit format and is capable of black-and-white as well as color display in multiple video formats. The design is based on the industry-standard PC-AT architecture and consists of two AT-style circuit cards, one for high-speed sampling and the other for scan conversion, graphics, and video generation.
High-Resolution Large-Field-of-View Three-Dimensional Hologram Display System and Method Thereof
NASA Technical Reports Server (NTRS)
Chao, Tien-Hsin (Inventor); Mintz, Frederick W. (Inventor); Tsou, Peter (Inventor); Bryant, Nevin A. (Inventor)
2001-01-01
A real-time, dynamic, free-space virtual reality 3-D image display system is enabled by using a unique form of Aerogel as the primary display medium. A preferred embodiment of this system comprises a 3-D mosaic topographic map which is displayed by fusing four projected hologram images. In this embodiment, four holographic images are projected from four separate holograms. Each holographic image subtends a quadrant of the 4(pi) solid angle. By fusing these four holographic images, a static 3-D image such as a featured terrain map would be visible for 360 deg in the horizontal plane and 180 deg in the vertical plane. An input, either acquired by a 3-D image sensor or generated by computer animation, is first converted into a 2-D computer-generated hologram (CGH). This CGH is then downloaded into a large liquid crystal (LC) panel. A laser projector illuminates the CGH-filled LC panel and generates and displays a real 3-D image in the Aerogel matrix.
Mobile display technologies: Past developments, present technologies, and future opportunities
NASA Astrophysics Data System (ADS)
Ohshima, Hiroyuki
2014-01-01
It has been thirty years since the first active matrix (AM) flat panel display (FPD) was industrialized for portable televisions (TVs) in 1984. The AM FPD has become a dominant electronic display technology widely used from mobile displays to large TVs. The development of AM FPDs for mobile displays has significantly changed our lives by enabling new applications, such as notebook personal computers (PCs), smartphones and tablet PCs. In the future, the role of mobile displays will become even more important, since mobile displays are the live interface for the world of mobile communications in the era of ubiquitous networks. Various developments are being conducted to improve visual performance, reduce power consumption and add new functionality. At the same time, innovative display concepts and novel manufacturing technologies are being investigated to create new values.
Digital Image Display Control System, DIDCS. [for astronomical analysis
NASA Technical Reports Server (NTRS)
Fischel, D.; Klinglesmith, D. A., III
1979-01-01
DIDCS is an interactive image display and manipulation system that is used for a variety of astronomical image reduction and analysis operations. The hardware system consists of a PDP-11/40 mainframe with 32K of 16-bit core memory; 96K of 16-bit MOS memory; two 9-track 800-BPI tape drives; eight 2.5-million-byte RK05-type disk packs; three user terminals; and a COMTAL 8000-S display system which has sufficient memory to store and display three 512 x 512 x 8-bit images along with an overlay plane and function table for each image, a pseudocolor table and the capability for displaying true color. The software system is based around the language FORTH, which will permit an open-ended dictionary of user-level words for image analysis and display. A description of the hardware and software systems will be presented along with examples of the types of astronomical research that are being performed. A short discussion of the commonality and exchange of this type of image analysis system will also be given.
Image display device in digital TV
Choi, Seung Jong [Seoul, KR
2006-07-18
Disclosed is an image display device in a digital TV that is capable of carrying out conversion into various kinds of resolution by using single bit map data in the digital TV. The image display device includes: a data processing part for executing bit map conversion, compression, restoration and format-conversion for text data; a memory for storing the bit map data obtained according to the bit map conversion and compression in the data processing part and image data inputted from an arbitrary receiving part, the receiving part receiving one of digital image data and analog image data; an image outputting part for reading the image data from the memory; and a display processing part for mixing the image data read from the image outputting part and the bit map data converted in format by the data processing part. Therefore, the image display device according to the present invention can convert text data so as to correspond with various resolutions, carry out compression of bit map data, thereby reducing the memory space, and support text data of an HTML format, thereby providing the image with text data of various shapes.
Head-Mounted Display Technology for Low Vision Rehabilitation and Vision Enhancement
Ehrlich, Joshua R.; Ojeda, Lauro V.; Wicker, Donna; Day, Sherry; Howson, Ashley; Lakshminarayanan, Vasudevan; Moroi, Sayoko E.
2017-01-01
Purpose: To describe the various types of head-mounted display technology, their optical and human factors considerations, and their potential for use in low vision rehabilitation and vision enhancement. Design: Expert perspective. Methods: An overview of head-mounted display technology by an interdisciplinary team of experts drawing on key literature in the field. Results: Head-mounted display technologies can be classified based on their display type and optical design. See-through displays such as retinal projection devices have the greatest potential for use as low vision aids. Devices vary by their relationship to the user's eyes, field of view, illumination, resolution, color, stereopsis, effect on head motion and user interface. These optical and human factors considerations are important when selecting head-mounted displays for specific applications and patient groups. Conclusions: Head-mounted display technologies may offer advantages over conventional low vision aids. Future research should compare head-mounted displays to commonly prescribed low vision aids in order to compare their effectiveness in addressing the impairments and rehabilitation goals of diverse patient populations. PMID:28048975
NASA Astrophysics Data System (ADS)
Eichenlaub, Jesse B.
1998-04-01
At the 1997 conference DTI first reported on a low-cost, thin, lightweight backlight for LCDs that generates a special illumination pattern to create autostereoscopic 3D images and can switch to conventional diffuse illumination for 2D images. The backlight is thin and efficient enough for use in portable computers and handheld games, as well as thin desktop displays. The system has been embodied in 5-inch (13 cm) diagonal backlights for gambling machines, and in the 12.1-inch (31 cm) diagonal DTI Virtual Window(TM) desktop product. During the past year, DTI has improved the technology considerably, reducing crosstalk, increasing efficiency, improving components for mass production, and developing prototypes that move the 3D viewing zones in response to the observer's head position. The paper will describe the 2D/3D backlights, improvements that have been made to their function, and their embodiments within the latest display products and prototypes.
Relevance between SV and components based on water quality inspection by gas plumes
NASA Astrophysics Data System (ADS)
Nakanishi, A.; Aoyama, C.; Fukuoka, H.; Tajima, H.; Kumagai, H.; Takahashi, A.
2017-12-01
Gas and hydrate seeping from the seafloor into ocean water can be monitored on board, as images on an echogram (the display of inboard acoustic equipment), by utilizing acoustic measurement equipment such as multi-beam sonars. The colors and shades of these images displayed on the monitor vary depending on the acoustic impedance. Backscattering strength (hereinafter referred to as SV) depends on the type and density of plume components. Therefore, plume components should not be determined only by examining volume scattering density. By standardizing the relevance between gas plume SV and the components, the types of plume components can be presumed just by calculating plume SV from multi-beam data. Data from the following explorations will be utilized to perform the analysis, together with metal sensor data, CTD measurements, and sampling: July 2017, KAIYO-MARU 2 (KAIYO ENGINEERING CO., LTD), Sea of Japan; July 2017, SHINYO MARU (Tokyo University of Marine Science and Technology), Sea of Japan. Chemical data obtained through the YK16-07 cruise are also to be discussed.
The North Alabama Severe Thunderstorm Observations, Research, and Monitoring Network (STORMnet)
NASA Technical Reports Server (NTRS)
Goodman, S. J.; Blakeslee, R.; Christian, H.; Boccippio, D.; Koshak, W.; Bailey, J.; Hall, J.; Bateman, M.; McCaul, E.; Buechler, D.;
2002-01-01
The Severe Thunderstorm Observations, Research, and Monitoring network (STORMnet) became operational in 2001 as a test bed to infuse new science and technologies into the severe and hazardous weather forecasting and warning process. STORMnet is a collaboration among NASA scientists, National Weather Service (NWS) forecasters, emergency managers and other partners. STORMnet integrates total lightning observations from a ten-station 3-D VHF regional lightning mapping array, the National Lightning Detection Network (NLDN), real-time regional NEXRAD Doppler radar, satellite visible and infrared imagers, and a mobile atmospheric profiling system to characterize storms and their evolution. The storm characteristics and life-cycle trending are accomplished in real time through the second-generation Lightning Imaging Sensor Demonstration and Display (LISDAD II), a distributed processing system with a Java-based display application that allows anyone, anywhere to track individual storm histories within the Tennessee Valley region of north Alabama and Tennessee, a region of the southeastern U.S. well known for abundant severe weather.
A noninvasive technique for real-time detection of bruises in apple surface based on machine vision
NASA Astrophysics Data System (ADS)
Zhao, Juan; Peng, Yankun; Dhakal, Sagar; Zhang, Leilei; Sasao, Akira
2013-05-01
Apples are among the most highly consumed fruits in daily life. However, because bruising strongly affects taste and export value, apple quality must be inspected before the fruit reaches the consumer's hands. This study aimed to develop a hardware and software unit for real-time detection of apple bruises based on machine vision technology. The hardware unit consisted of a light shield with two monochrome cameras installed at different angles, an LED light source to illuminate the sample, and sensors at the entrance of the box to signal the positioning of the sample. A graphical user interface (GUI) was developed on the VS2010 platform to control the overall hardware and display the image processing results. The hardware-software system was developed to acquire images of three samples from each camera and display the image processing results in real time. An image processing algorithm was developed using OpenCV and C++. The software is able to control the hardware system and classify apples into two grades based on the presence or absence of surface bruises of 5 mm in size. The experimental results are promising, and with further modification the system could be applied to industrial production in the near future.
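The abstract does not give the authors' algorithm in detail; the following is only a minimal OpenCV sketch of one plausible bruise-screening step (dark-blob thresholding and size filtering), with the pixels-per-millimetre calibration treated as an assumed value rather than anything reported in the paper.

```python
import cv2
import numpy as np

# Assumed calibration: pixels per millimetre at the working distance (hypothetical value).
PX_PER_MM = 4.0
MIN_BRUISE_AREA_PX = int(np.pi * (2.5 * PX_PER_MM) ** 2)  # area of a 5 mm diameter spot

def grade_apple(bgr_image: np.ndarray) -> str:
    """Classify an apple image as 'accept' or 'reject' based on dark surface blobs."""
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (5, 5), 0)
    # Bruised tissue tends to appear darker than sound tissue under diffuse LED lighting.
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) >= MIN_BRUISE_AREA_PX:
            return "reject"   # at least one blob roughly 5 mm across or larger
    return "accept"
```

In practice the background and the fruit boundary would have to be masked out first; the sketch only illustrates the grading decision described in the abstract.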
NASA Astrophysics Data System (ADS)
Rahman, Hameedur; Arshad, Haslina; Mahmud, Rozi; Mahayuddin, Zainal Rasyid
2017-10-01
The number of breast cancer patients who require breast biopsy has increased over the past years. Augmented Reality guided core biopsy of the breast has become the method of choice for researchers. However, this cancer visualization has been limited to superimposing the 3D imaging data only. In this paper, we introduce an Augmented Reality visualization framework that enables breast cancer biopsy image guidance by using an X-ray vision technique on a mobile display. This framework consists of four phases: it initially acquires the images from CT/MRI and processes the medical images into 3D slices; secondly, it refines these 3D grayscale slices into a 3D breast tumor model using a 3D modeling reconstruction technique. Further, in visualization processing, this virtual 3D breast tumor model is enhanced using the X-ray vision technique to see through the skin of the phantom, and the final composition is displayed on a handheld device to optimize the accuracy of the visualization in six degrees of freedom. The framework is perceived as an improved visualization experience because the Augmented Reality X-ray vision allows direct understanding of the breast tumor beyond the visible surface and direct guidance toward accurate biopsy targets.
NASA Astrophysics Data System (ADS)
Mohon, N.
A 'simulator' is defined as a machine which imitates the behavior of a real system in a very precise manner. The major components of a simulator and their interaction are outlined in brief form, taking into account the major components of an aircraft flight simulator. Particular attention is given to the visual display portion of the simulator, the basic components of the display, their interactions, and their characteristics. Real image displays are considered along with virtual image displays, and image generators. Attention is given to an advanced simulator for pilot training, a holographic pancake window, a scan laser image generator, the construction of an infrared target simulator, and the Apollo Command Module Simulator.
Biocular vehicle display optical designs
NASA Astrophysics Data System (ADS)
Chu, H.; Carter, Tom
2012-06-01
Biocular vehicle display optics is a fast collimating lens (f/# < 0.9) that presents the image of the display at infinity to both eyes of the viewer. Each eye captures the scene independently and the brain merges the two images into one through the overlapping portions of the images. With the recent conversion from analog CRT-based displays to lighter, more compact active-matrix organic light-emitting diode (AMOLED) digital image sources, display optical designs have evolved to take advantage of the higher-resolution AMOLED image sources. To maximize the field of view of the display optics and fully resolve the smaller pixels, the digital image source is pre-magnified by relay optics or a coherent taper fiber optic plate. Coherent taper fiber optic plates are used extensively to: (1) convert plano focal planes to spherical focal planes in order to eliminate Petzval field curvature, which enables faster lens speed and/or a larger field of view for eyepieces and display optics; (2) provide pre-magnification to lighten the workload of the optics and further increase the numerical aperture and/or field of view; (3) improve light flux collection efficiency and field of view by collecting all the light emitted by the image source and guiding imaging light bundles toward the lens aperture stop; and (4) reduce the complexity of the optical design and the overall packaging volume by replacing pre-magnification optics with a compact taper fiber optic plate. This paper will review and compare the performance of biocular vehicle display designs with and without a taper fiber optic plate.
MRI Artifacts of a Metallic Stent Derived From a Human Aorta Specimen
DOE Office of Scientific and Technical Information (OSTI.GOV)
Soto, M. E.; Flores, P.; Marrufo, O.
Magnetic resonance imaging has proved to be a useful technique to get images of the whole body. However, the presence of ferromagnetic material can cause susceptibility artifacts, which result from microscopic gradients that occur near the boundaries between areas displaying different magnetic susceptibility. These gradients cause dephasing of spins and frequency shifts in the surrounding tissues. Intravoxel dephasing and spatial mis-registration can degrade image quality. An aorta with a metallic stent was preserved in formaldehyde at 10% inside acrylic cylinders and used to obtain MR images. We tested pulsed spin echo and gradient echo sequences to improve image quality. All experiments were performed on a 7T/21 cm Varian system (Varian, Inc, Palo Alto, CA) equipped with Direct Drive technology and a 16-rung birdcage coil transceiver. The presence of metallic stents produces a lack of signal that might give falsely reassuring appearances within the vessel lumen.
Evaluation of visual acuity with Gen 3 night vision goggles
NASA Technical Reports Server (NTRS)
Bradley, Arthur; Kaiser, Mary K.
1994-01-01
Using laboratory simulations, visual performance was measured at luminance and night vision imaging system (NVIS) radiance levels typically encountered in the natural nocturnal environment. Comparisons were made between visual performance with unaided vision and that observed with subjects using image intensification. An Aviator's Night Vision Imaging System (ANVIS-6) binocular image intensifier was used. Light levels available in the experiments (using video display technology and filters) were matched to those of reflecting objects illuminated under representative night-sky conditions (e.g., full moon, starlight). Results show that, as expected, the precipitous decline in foveal acuity experienced with decreasing mesopic luminance levels is effectively shifted to much lower light levels by use of an image intensification system. The benefits of intensification are most pronounced foveally, but still observable at 20 deg eccentricity. Binocularity provides a small improvement in visual acuity under both intensified and unintensified conditions.
Medical Robotic and Telesurgical Simulation Education Research
2017-05-01
training exercises, DVSS = 40, dVT = 65, and RoSS = 52 for skills development. All three offer 3D visual images but use different display technologies...capabilities with an emphasis on their educational skills. They offer unique advantages and capabilities in training robotic sur- geons. Each device has been...evaluate the transfer of training effect of each simulator. Collectively, this work will offer end users and potential buyers a comparison of the value
Range Image Processing for Local Navigation of an Autonomous Land Vehicle.
1986-09-01
such as doing long term exploration missions on the surface of the planets which mankind may wish to investigate. Certainly, mankind will soon return...intelligence programming, walking technology, and vision sensors to name but a few. The purpose of this thesis will be to investigate, by simulation...bitmap graphics, both of which are important to this simulation. Finally, the methodology for displaying the symbolic information generated by the
Motmot, an open-source toolkit for realtime video acquisition and analysis.
Straw, Andrew D; Dickinson, Michael H
2009-07-22
Video cameras sense passively from a distance, offer a rich information stream, and provide intuitively meaningful raw data. Camera-based imaging has thus proven critical for many advances in neuroscience and biology, with applications ranging from cellular imaging of fluorescent dyes to tracking of whole-animal behavior at ecologically relevant spatial scales. Here we present 'Motmot': an open-source software suite for acquiring, displaying, saving, and analyzing digital video in real-time. At the highest level, Motmot is written in the Python computer language. The large amounts of data produced by digital cameras are handled by low-level, optimized functions, usually written in C. This high-level/low-level partitioning and use of select external libraries allow Motmot, with only modest complexity, to perform well as a core technology for many high-performance imaging tasks. In its current form, Motmot allows for: (1) image acquisition from a variety of camera interfaces (package motmot.cam_iface), (2) the display of these images with minimal latency and computer resources using wxPython and OpenGL (package motmot.wxglvideo), (3) saving images with no compression in a single-pass, low-CPU-use format (package motmot.FlyMovieFormat), (4) a pluggable framework for custom analysis of images in realtime and (5) firmware for an inexpensive USB device to synchronize image acquisition across multiple cameras, with analog input, or with other hardware devices (package motmot.fview_ext_trig). These capabilities are brought together in a graphical user interface, called 'FView', allowing an end user to easily view and save digital video without writing any code. One plugin for FView, 'FlyTrax', which tracks the movement of fruit flies in real-time, is included with Motmot, and is described to illustrate the capabilities of FView. Motmot enables realtime image processing and display using the Python computer language. In addition to the provided complete applications, the architecture allows the user to write relatively simple plugins, which can accomplish a variety of computer vision tasks and be integrated within larger software systems. The software is available at http://code.astraw.com/projects/motmot.
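Motmot's own package APIs (motmot.cam_iface, motmot.wxglvideo, and so on) are not reproduced here; the snippet below is only a generic OpenCV sketch of the acquire-display-analyze loop that an FView-style plugin wraps, with `process_frame` standing in for a hypothetical per-frame analysis hook such as fly tracking.

```python
import cv2

def process_frame(frame):
    """Placeholder for a realtime analysis plugin (hypothetical); returns an annotated frame."""
    return frame

def main(camera_index: int = 0):
    cap = cv2.VideoCapture(camera_index)       # image acquisition from a camera interface
    if not cap.isOpened():
        raise RuntimeError("camera not available")
    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            annotated = process_frame(frame)   # per-frame analysis in realtime
            cv2.imshow("live", annotated)      # low-latency display
            if cv2.waitKey(1) & 0xFF == ord("q"):
                break
    finally:
        cap.release()
        cv2.destroyAllWindows()

if __name__ == "__main__":
    main()
```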
LCD displays performance comparison by MTF measurement using the white noise stimulus method
NASA Astrophysics Data System (ADS)
Mitjà, Carles; Escofet, Jaume
2011-01-01
The number of images produced to be viewed as soft copies on output displays is increasing significantly. This growth occurs at the expense of images targeted for hard-copy versions on paper or any other physical support. Even in the case of high-quality hard-copy production, people working in professional imaging use different displays when selecting, editing, processing and showing images, from laptop screens to specialized high-end displays. The quality performance of these devices is therefore crucial in the chain of decisions to be taken in image production, and metrics of this quality performance can help guide equipment acquisition. Different metrics and methods have been described to determine the quality performance of CRT and LCD computer displays in the clinical area. One of the most important metrics in this field is the device's spatial frequency response, obtained by measuring the modulation transfer function (MTF). This work presents a comparison between the MTF of three different LCD displays, Apple MacBook Pro 15", Apple LED Cinema Display 24" and Apple iPhone 4, measured by the white noise stimulus method over the vertical and horizontal directions. Additionally, different displays show particular pixel structure patterns. In order to identify this pixel structure, a set of high-magnification images is taken from each display and related to the respective vertical and horizontal MTF.
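As a rough illustration of the white-noise stimulus idea, and not the authors' exact measurement protocol, the ratio of the captured to displayed noise power spectra yields a squared transfer-function estimate. A minimal numpy sketch, assuming the displayed noise pattern and a registered photograph of it are already available as grayscale arrays:

```python
import numpy as np

def mtf_from_white_noise(stimulus: np.ndarray, captured: np.ndarray, axis: int = 1) -> np.ndarray:
    """Estimate a 1-D MTF along `axis` from a white-noise stimulus and its captured image.

    Both inputs are assumed to be registered, same-size, grayscale float arrays.
    """
    def psd_1d(img):
        spec = np.fft.rfft(img - img.mean(), axis=axis)      # remove the mean (DC offset)
        return (np.abs(spec) ** 2).mean(axis=1 - axis)       # average over the other direction

    psd_in = psd_1d(stimulus)
    psd_out = psd_1d(captured)
    mtf = np.sqrt(psd_out / np.maximum(psd_in, 1e-12))
    return mtf / mtf[1]       # normalise to unity at the lowest measured frequency
```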
Basics of Antibody Phage Display Technology.
Ledsgaard, Line; Kilstrup, Mogens; Karatt-Vellatt, Aneesh; McCafferty, John; Laustsen, Andreas H
2018-06-09
Antibody discovery has become increasingly important in almost all areas of modern medicine. Different antibody discovery approaches exist, but one that has gained increasing interest in the field of toxinology and antivenom research is phage display technology. In this review, the lifecycle of the M13 phage and the basics of phage display technology are presented together with important factors influencing the success rates of phage display experiments. Moreover, the pros and cons of different antigen display methods and the use of naïve versus immunized phage display antibody libraries is discussed, and selected examples from the field of antivenom research are highlighted. This review thus provides in-depth knowledge on the principles and use of phage display technology with a special focus on discovery of antibodies that target animal toxins.
Numerical image manipulation and display in solar astronomy
NASA Technical Reports Server (NTRS)
Levine, R. H.; Flagg, J. C.
1977-01-01
The paper describes the system configuration and data manipulation capabilities of a solar image display system which allows interactive analysis of visual images and on-line manipulation of digital data. Image processing features include smoothing or filtering of images stored in the display, contrast enhancement, and blinking or flickering of images. A computer with a core memory of 28,672 words provides the capacity to perform complex calculations based on stored images, including computing histograms, selecting subsets of images for further analysis, combining portions of images to produce images with physical meaning, and constructing mathematical models of features in an image. Some of the processing modes are illustrated by image sequences from solar observations.
NASA Astrophysics Data System (ADS)
Garcia-Belmonte, Germà
2017-06-01
Spatial visualization is a well-established topic of education research that has allowed improving science and engineering students' skills in spatial relations. Connections have been established between visualization as a comprehension tool and instruction in several scientific fields. Learning about dynamic processes mainly relies upon static spatial representations or images. Visualization of time is inherently problematic because time can be conceptualized in terms of two opposite conceptual metaphors based on spatial relations, as inferred from conventional linguistic patterns. The situation is particularly demanding when time-varying signals are recorded using electronic display instruments and the image has to be properly interpreted. This work deals with the interplay between linguistic metaphors, visual thinking and scientific instrument mediation in the process of interpreting time-varying signals displayed by electronic instruments. The analysis draws on a simplified version of a communication system as an example of practical signal recording and image visualization in a physics and engineering laboratory experience. Instrumentation delivers meaningful signal representations because it is designed to incorporate a specific and culturally favored view of time. It is suggested that difficulties in interpreting time-varying signals are linked with the existing dual perception of conflicting time metaphors. The activation of a specific space-time conceptual mapping might allow for a proper signal interpretation. Instruments then play a central role as visualization mediators by yielding an image that matches specific perception abilities and practical purposes. Here I identify two ways of understanding time that appear in the different trajectories along which students are situated. Interestingly, specific display instruments belonging to different cultural traditions incorporate contrasting views of time. One of them sees time in terms of a dynamic metaphor consisting of a static observer looking at passing events. This is a general and widespread practice in contemporary mass culture, which lies behind the process of making sense of moving images usually visualized by means of movie shots. In contrast, scientific culture has favored another way of conceptualizing time (the static time metaphor) that historically fostered the construction of graphs and the incorporation of time-dependent functions, as represented on the Cartesian plane, into display instruments. Both types of culture, scientific and mass, are considered highly technological in the sense that complex instruments, apparatus or machines participate in their visual practices.
Researchers at the National Cancer Institute (NCI) have developed an improved class of heptamethine cyanine fluorophore dyes useful for imaging applications in the near-IR range (750-850 nm). A new chemical reaction has been developed that provides easy access to novel molecules with improved properties. Specifically, the dyes display greater resistance to thiol nucleophiles and are more robust while maintaining excellent optical properties. The dyes have been successfully employed in various in vivo imaging applications and in vitro labeling and microscopy applications. NCI seeks co-development partners or licensees to develop the dyes as targetable agents for optically guided surgical interventions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cohen, Ezra E. W.; Ahmed, Omar; Kocherginsky, Masha
2013-10-01
Chemoradiotherapy (CRT) has improved efficacy in treating locally advanced squamous cell carcinoma of the head and neck (LA-SCCHN) but has led to almost universal in-field mucositis. Patients treated with the same regimen often differ in the occurrence and severity of mucositis. Mucositis induced by radiation is known to represent an intense inflammatory response histologically. We hypothesized that patients destined to display severe mucocutaneous toxicity would demonstrate greater alterations in thermal intensity early in therapy than identically treated counterparts. This would allow thermal imaging technology to identify patients who will require more intensive supportive care.
The ethics of clinical photography and social media.
Palacios-González, César
2015-02-01
Clinical photography is an important tool for medical practice, training and research. While in the past clinical pictures were confined to the stringent controls of surgeries and hospitals, technological advances have made it possible to take pictures and share them through the internet with only a few clicks. Confronted with this possibility, I explore whether a case could be made for using clinical photography in tandem with social media. In order to do this I explore: (1) whether a patient's informed consent is required for the publication of any clinical image that depicts her, irrespective of whether the patient can be identified from the image or not; (2) whether social media is an adequate place for clinical images to be displayed; and finally (3) whether there are special considerations that should be taken into account when publishing clinical images on social media.
Image-based electronic patient records for secured collaborative medical applications.
Zhang, Jianguo; Sun, Jianyong; Yang, Yuanyuan; Liang, Chenwen; Yao, Yihong; Cai, Weihua; Jin, Jin; Zhang, Guozhen; Sun, Kun
2005-01-01
We developed a Web-based system to interactively display image-based electronic patient records (EPR) for secured intranet and Internet collaborative medical applications. The system consists of four major components: an EPR DICOM gateway (EPR-GW), an image-based EPR repository server (EPR-Server), a Web server and an EPR DICOM viewer (EPR-Viewer). In the EPR-GW and EPR-Viewer, security modules for digital signature and authentication are integrated to perform security processing on the EPR data with integrity and authenticity. The privacy of EPR in data communication and exchange is provided by SSL/TLS-based secure communication. This presentation describes a new approach to creating and managing image-based EPR from actual patient records, and also presents a way to use Web technology and the DICOM standard to build an open architecture for collaborative medical applications.
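The paper does not spell out its exact signature scheme; purely as a hedged sketch of how integrity and authenticity can be attached to an EPR object before exchange, here is an RSA-PSS sign/verify example using the Python `cryptography` package (the EPR payload bytes are illustrative only):

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Key pair held by the EPR gateway (in practice loaded from a protected key store).
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

epr_payload = b"<illustrative DICOM/EPR object bytes>"

# Sign on export from the EPR-GW ...
signature = private_key.sign(
    epr_payload,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)

# ... and verify on import at the EPR-Viewer (raises InvalidSignature if the data was altered).
public_key.verify(
    signature,
    epr_payload,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)
```

Transport privacy, as the abstract notes, is handled separately by the SSL/TLS channel; the signature protects the record itself end to end.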
Du, Weiqi; Zhang, Gaofei; Ye, Liangchen
2016-01-01
Micromirror-based scanning displays have been the focus of a variety of applications. Lissajous scanning displays have advantages in terms of power consumption; however, the image quality is not good enough. The main reason for this is the varying size and the contrast ratio of pixels at different positions of the image. In this paper, the Lissajous scanning trajectory is analyzed and a new method based on the diamond pixel is introduced to Lissajous displays. The optical performance of micromirrors is discussed. A display system demonstrator is built, and tests of resolution and contrast ratio are conducted. The test results show that the new Lissajous scanning method can be used in displays by using diamond pixels and image quality remains stable at different positions. PMID:27187390
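To see why pixel size varies across a Lissajous frame (the trajectory crossings are densest near the edges, where the mirror velocity drops), it is enough to plot the scan path; a small numpy sketch with arbitrary, assumed scan frequencies rather than the authors' mirror parameters:

```python
import numpy as np
import matplotlib.pyplot as plt

# Assumed biaxial mirror frequencies and phase (illustrative values only).
fx, fy, phase = 1200.0, 1193.0, np.pi / 2
frame_time = 1.0 / np.gcd(int(fx), int(fy))      # repeat period of the Lissajous pattern
t = np.linspace(0.0, frame_time, 200_000)

x = np.sin(2 * np.pi * fx * t)                   # horizontal mirror deflection
y = np.sin(2 * np.pi * fy * t + phase)           # vertical mirror deflection

plt.figure(figsize=(5, 5))
plt.plot(x, y, linewidth=0.2)
plt.title("Lissajous scan trajectory (denser near the edges)")
plt.xlabel("x deflection")
plt.ylabel("y deflection")
plt.show()
```

The diamond-pixel idea in the paper reshapes how this trajectory is binned into display pixels so that the effective pixel size and contrast stay more uniform across the frame.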
High-performance 4x4-inch AMLCD for avionic applications
NASA Astrophysics Data System (ADS)
Syroid, Daniel D.; Hansen, Glenn A.; Boling, Ed
1996-05-01
There is a need for high-performance flat panel displays to replace and upgrade the electromechanical flight indicators and CRT-based displays used in the cockpits of many older aircraft that are in active service today. The need for replacement of these older-generation instruments is well known in the industry and was discussed in a previous paper by Duane Grave of Rockwell Collins. Furthermore, because of the limited activity in new aircraft development today, the need to upgrade existing aircraft avionics is accelerating. Many of the electromechanical instruments currently provide flight indications to the pilot and include horizontal situation indicators (HSI) and attitude director indicators (ADI). These instruments are used on both military and commercial aircraft. The indicators are typically housed in a 5ATI case that slides into a 5-inch square opening in the cockpit. Image Quest has developed a 4 by 4 inch active area, flight-quality, high-resolution, full-color, high-luminance, wide-temperature-range display module based on active matrix liquid crystal display (AMLCD) technology that has excellent contrast in full sunlight. The display module is well suited for use in electronic instruments to replace or upgrade the electromechanical 5ATI flight indicators. The AMLCD-based display offers greatly improved display format flexibility, operating reliability and display contrast in all ambient lighting conditions, as well as significant short- and long-term cost-of-ownership advantages.
Shaw, S L; Salmon, E D; Quatrano, R S
1995-12-01
In this report, we describe a relatively inexpensive method for acquiring, storing and processing light microscope images that combines the advantages of video technology with the powerful medium now termed digital photography. Digital photography refers to the recording of images as digital files that are stored, manipulated and displayed using a computer. This report details the use of a gated video-rate charge-coupled device (CCD) camera and a frame grabber board for capturing 256 gray-level digital images from the light microscope. This camera gives high-resolution bright-field, phase contrast and differential interference contrast (DIC) images and, with gated on-chip integration, also has the capability to record low-light-level fluorescence images. The basic components of the digital photography system are described, and examples are presented of fluorescence and bright-field micrographs. Digital processing of images to remove noise, to enhance contrast and to prepare figures for printing is discussed.
Retinal imaging analysis based on vessel detection.
Jamal, Arshad; Hazim Alkawaz, Mohammed; Rehman, Amjad; Saba, Tanzila
2017-07-01
With advances in digital imaging and computing power, computationally intelligent technologies are in high demand for use in ophthalmic care and treatment. In the current research, Retina Image Analysis (RIA) is developed for optometrists at the Eye Care Center of Management and Science University. This research aims to analyze the retina through vessel detection. RIA assists in the analysis of retinal images, and specialists are served with various options such as saving, processing and analyzing retinal images through its advanced interface layout. Additionally, RIA assists in the selection of vessel segments, processing these vessels by calculating their diameter, standard deviation and length, and displaying the detected vessels on the retina. The Agile Unified Process is adopted as the methodology in developing this research. To conclude, Retina Image Analysis might help the optometrist gain a better understanding when analyzing the patient's retina. Finally, the Retina Image Analysis procedure is developed using MATLAB (R2011b). Promising results are attained that are comparable to the state of the art. © 2017 Wiley Periodicals, Inc.
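RIA itself is implemented in MATLAB; the following is only a rough Python analogue of one measurement step it describes, estimating the length and mean diameter of a segmented vessel region. The binary vessel mask and the millimetre-per-pixel calibration are assumed inputs, not values from the paper.

```python
import numpy as np
from skimage.morphology import skeletonize

def vessel_metrics(vessel_mask: np.ndarray, mm_per_pixel: float = 0.006) -> dict:
    """Estimate vessel length and mean diameter from a binary vessel mask.

    `mm_per_pixel` is an assumed calibration factor for the fundus camera.
    """
    mask = vessel_mask.astype(bool)
    skeleton = skeletonize(mask)                 # one-pixel-wide centreline of the vessel
    length_px = int(skeleton.sum())              # crude centreline length in pixels
    area_px = int(mask.sum())
    mean_diameter_px = area_px / max(length_px, 1)   # area divided by length
    return {
        "length_mm": length_px * mm_per_pixel,
        "mean_diameter_mm": mean_diameter_px * mm_per_pixel,
    }
```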
Multiscale infrared and visible image fusion using gradient domain guided image filtering
NASA Astrophysics Data System (ADS)
Zhu, Jin; Jin, Weiqi; Li, Li; Han, Zhenghao; Wang, Xia
2018-03-01
For better surveillance with infrared and visible imaging, a novel hybrid multiscale decomposition fusion method using gradient domain guided image filtering (HMSD-GDGF) is proposed in this study. In this method, a hybrid multiscale decomposition of the source images using guided image filtering and gradient domain guided image filtering is applied first; the weight maps at each scale are then obtained using a saliency detection technique and filtering, with three different fusion rules applied at different scales. The three types of fusion rules are for the small-scale detail level, the large-scale detail level, and the base level. Finally, the target becomes more salient and can be more easily detected in the fusion result, with the detail information of the scene being fully displayed. Experimental comparisons with state-of-the-art fusion methods show that the HMSD-GDGF method has obvious advantages in fidelity of salient information (including structural similarity, brightness, and contrast), preservation of edge features, and human visual perception. Therefore, visual effects can be improved by using the proposed HMSD-GDGF method.
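A full HMSD-GDGF implementation is beyond an abstract; as a hedged sketch, here is a much-simplified two-scale infrared/visible fusion using OpenCV's guided filter (cv2.ximgproc, from opencv-contrib), with an absolute-contrast weight rule standing in for the paper's saliency-based rules.

```python
import cv2
import numpy as np

def fuse_ir_visible(ir: np.ndarray, vis: np.ndarray, radius: int = 8, eps: float = 1e-3):
    """Very simplified two-scale fusion of registered, same-size grayscale images in [0, 1]."""
    ir = ir.astype(np.float32)
    vis = vis.astype(np.float32)

    # Base / detail decomposition with a guided filter (each image guides itself).
    base_ir = cv2.ximgproc.guidedFilter(ir, ir, radius, eps)
    base_vis = cv2.ximgproc.guidedFilter(vis, vis, radius, eps)
    detail_ir, detail_vis = ir - base_ir, vis - base_vis

    # Crude weight map: prefer the source with the larger local detail magnitude.
    w = (np.abs(detail_ir) >= np.abs(detail_vis)).astype(np.float32)
    w = cv2.ximgproc.guidedFilter(vis, w, radius, eps)   # smooth weights along image structure

    fused_detail = w * detail_ir + (1.0 - w) * detail_vis
    fused_base = 0.5 * (base_ir + base_vis)
    return np.clip(fused_base + fused_detail, 0.0, 1.0)
```

The point of filtering the weight map with the visible image as guide is the same as in the paper's design: weights follow scene edges instead of producing halo artifacts at the fusion boundaries.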
Stereoscopic construction and practice of optoelectronic technology textbook
NASA Astrophysics Data System (ADS)
Zhou, Zigang; Zhang, Jinlong; Wang, Huili; Yang, Yongjia; Han, Yanling
2017-08-01
This is a professional degree course textbook for the national-class specialty of Optoelectronic Information Science and Engineering, and it is also an engineering practice textbook for the cultivation of excellent photoelectric engineers. The book seeks to comprehensively introduce the theoretical and applied basis of optoelectronic technology. It is closely linked to current developments at the frontier of the optoelectronic industry and is made up of the following core contents: the laser source and the transmission, modulation, detection, imaging and display of light. At the same time, it also embodies the features of laser sources, waveguide transmission, electronic means and optical processing methods.
NASA Technical Reports Server (NTRS)
Friedman, S. Z.; Walker, R. E.; Aitken, R. B.
1986-01-01
The Image Based Information System (IBIS) has been under development at the Jet Propulsion Laboratory (JPL) since 1975. It is a collection of more than 90 programs that enable processing of image, graphical, and tabular data for spatial analysis. IBIS can be utilized to create comprehensive geographic data bases. From these data, an analyst can study various attributes describing characteristics of a given study area. Even complex combinations of disparate data types can be synthesized to obtain a new perspective on spatial phenomena. In 1984, new query software was developed enabling direct Boolean queries of IBIS data bases through the submission of easily understood expressions. An improved syntax methodology, a data dictionary, and display software simplified the analysts' tasks associated with building, executing, and subsequently displaying the results of a query. The primary purpose of this report is to describe the features and capabilities of the new query software. A secondary purpose of this report is to compare this new query software to the query software developed previously (Friedman, 1982). With respect to this topic, the relative merits and drawbacks of both approaches are covered.
NASA Astrophysics Data System (ADS)
Held, Marcel Philipp; Ley, Peer-Phillip; Lachmayer, Roland
2018-02-01
High-resolution vehicle headlamps represent a future-oriented technology that increases traffic safety and driving comfort in the dark. A further development of current matrix-beam headlamps is LED-based pixellight systems, which enable additional lighting functions (e.g. the projection of navigation information onto the road) to be activated for given driving scenarios. The image generation is based on spatial light modulators (SLM) such as digital micromirror devices (DMD), liquid crystal displays (LCD), liquid crystal on silicon (LCoS) devices or LED arrays. For DMD-, LCD- and LCoS-based headlamps, the optical system uses illumination optics to ensure a precise illumination of the corresponding SLM. LED arrays, however, have to use imaging optics to project the LED die onto an intermediate image plane and thus create the light distribution via an apposition of gapless, juxtaposed LED die images. Nevertheless, the Lambertian radiation characteristics complicate the design of imaging optics with regard to a high-efficiency setup with maximum resolution and luminous flux. Simplifying the light source model and its emitting characteristics allows a balanced setup between these parameters to be determined by using the étendue, and the maximum possible efficacy and luminous flux to be calculated for each technology at an early design stage. Therefore, we present a calculation comparison of how simplifying the light source model can affect the étendue conservation and the setup design for two high-resolution technologies. The approach is evaluated and compared to simulation models to show the occurring deviation and its applicability.
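As a back-of-the-envelope illustration of the étendue argument (with purely assumed geometry, not the authors' headlamp data): the étendue of a Lambertian LED die of area A emitting into half-angle θ is G = n²·π·A·sin²θ, and flux can only be transferred without geometric loss if the collecting optic has at least that étendue.

```python
import math

def etendue_mm2_sr(area_mm2: float, half_angle_deg: float, n: float = 1.0) -> float:
    """Etendue G = n^2 * pi * A * sin(theta)^2 of a Lambertian emitter or an optic's acceptance."""
    return (n ** 2) * math.pi * area_mm2 * math.sin(math.radians(half_angle_deg)) ** 2

# Assumed, illustrative numbers: a 1 mm x 1 mm LED die emitting into a hemisphere,
# imaged by projection optics that accept a 10 degree half-angle over a 30 mm^2 pupil.
g_source = etendue_mm2_sr(area_mm2=1.0, half_angle_deg=90.0)
g_optic = etendue_mm2_sr(area_mm2=30.0, half_angle_deg=10.0)

# Upper bound on geometric collection efficiency: the smaller etendue limits the transfer.
max_collection = min(g_optic / g_source, 1.0)
print(f"source etendue = {g_source:.2f} mm^2 sr")
print(f"optic etendue  = {g_optic:.2f} mm^2 sr")
print(f"max collection = {max_collection:.0%}")
```

With these assumed values the optic's étendue is slightly smaller than the die's, so at most roughly 90% of the emitted flux can be collected regardless of how well the lens is corrected, which is the trade-off between resolution, luminous flux and efficiency the paper examines.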
Enhancing the radiology learning experience with electronic whiteboard technology.
Lipton, Michael L; Lipton, Leah G
2010-06-01
The purpose of this study is to quantitatively evaluate an interactive whiteboard for use in teaching diagnostic radiology and MRI physics. An interactive whiteboard (SMART Board model 3000i) was used during an MRI physics course and diagnostic radiology teaching conferences. A multi-question instrument was used to quantify responses. Responses are reported as simple percentages of the number of responses and, for ordinal-scale questions, the two-tailed Student's t test was used to assess deviation from the neutral response. All of the subjects attended all sessions and completed the assessment questionnaire; 89% of respondents said that the image quality of the SMART Board was superior to that of a projector-screen combination, 11% said that the image quality was similar, and none said that it was inferior. Sixty-seven percent of respondents said that the SMART Board's display of diagrams was superior to that of a conventional whiteboard, 33% said it was similar, and none said it was inferior. Participants thought that the smaller SMART Board display compared with the projector screen was an unimportant limitation (p = 0.03). Room lighting did not degrade image quality (p = 0.007), and a trend toward preference for the lighted room (while using the SMART Board) was detected (p = 0.15) but was not significant. The impact of the SMART Board on the visual material and flow of teaching sessions was favorable (p = 0.005). All of the subjects preferred the SMART Board over a traditional projector and screen combination. Learners endorsed that the SMART Board significantly enhanced learning, universally preferring it to the standard projector and screen approach. Major advantages include enhanced engagement of learners; enhanced integration of images and annotations or diagrams, including display of both images and diagrams simultaneously on a single screen; and the ability to review, revise, save, and distribute diagrams and annotated images. Disadvantages include cost and potentially complicated setup in very large auditoriums.
NASA Technical Reports Server (NTRS)
Lovelace, Jeffrey J.; Cios, Krzysztof J.; Roth, Don J.; Cao, Wei N.
2001-01-01
Post-Scan Interactive Data Display (PSIDD) III is a user-oriented Windows-based system that facilitates the display and comparison of ultrasonic contact measurement data obtained at NASA Glenn Research Center's Ultrasonic Nondestructive Evaluation measurement facility. The system is optimized to compare ultrasonic measurements made at different locations within a material or at different stages of material degradation. PSIDD III provides complete analysis of the primary waveforms in the time and frequency domains, along with the calculation of several frequency-dependent properties, including the phase velocity and attenuation coefficient, and several frequency-independent properties, such as the cross-correlation velocity. The system allows image generation for all of the frequency-dependent properties at any available frequency (limited by the bandwidth used in the scans) and for any of the frequency-independent properties. From ultrasonic contact scans, areas of interest on an image can be studied with regard to the underlying raw waveforms and derived ultrasonic properties by simply selecting the point on the image. The system offers various modes of in-depth comparison between scan points. Up to five scan points can be selected for comparative analysis at once. The system was developed with Borland Delphi software (Visual Pascal) and is based on an SQL database. It is ideal for the classification of material properties or the location of microstructure variations in materials. Together with the ultrasonic contact measurement software it is partnered with, this system is technology ready and can be transferred to users worldwide.
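PSIDD III itself is written in Delphi; purely to illustrate the kinds of quantities it reports, here is a numpy sketch that derives a cross-correlation velocity and a spectral attenuation coefficient from two successive back-surface echoes. The echo waveforms, sampling rate and sample thickness are assumed inputs, and diffraction and reflection-loss corrections are deliberately ignored.

```python
import numpy as np

def cross_correlation_velocity(echo1, echo2, fs_hz: float, thickness_m: float) -> float:
    """Velocity from the delay between two successive back-surface echoes (pulse-echo setup).

    Assumes echo2 is the later echo, i.e. it has made one extra round trip through the sample.
    """
    corr = np.correlate(echo2, echo1, mode="full")
    lag = int(np.argmax(corr)) - (len(echo1) - 1)   # delay of echo2 relative to echo1, in samples
    dt = lag / fs_hz
    return 2.0 * thickness_m / dt                   # extra round-trip distance / transit time

def attenuation_db_per_m(echo1, echo2, fs_hz: float, thickness_m: float):
    """Frequency-dependent attenuation from the spectral ratio of the two echoes."""
    f = np.fft.rfftfreq(len(echo1), d=1.0 / fs_hz)
    s1 = np.abs(np.fft.rfft(echo1))
    s2 = np.abs(np.fft.rfft(echo2))
    alpha = 20.0 * np.log10(np.maximum(s1, 1e-12) / np.maximum(s2, 1e-12)) / (2.0 * thickness_m)
    return f, alpha
```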
Augmented reality 3D display based on integral imaging
NASA Astrophysics Data System (ADS)
Deng, Huan; Zhang, Han-Le; He, Min-Yang; Wang, Qiong-Hua
2017-02-01
Integral imaging (II) is a good candidate for augmented reality (AR) display, since it provides various physiological depth cues so that viewers can freely change the accommodation and convergence between the virtual three-dimensional (3D) images and the real-world scene without feeling any visual discomfort. We propose two AR 3D display systems based on the theory of II. In the first AR system, a micro II display unit reconstructs a micro 3D image, and the micro 3D image is magnified by a convex lens. The lateral and depth distortions of the magnified 3D image are analyzed and resolved by pitch scaling and depth scaling. The magnified 3D image and the real 3D scene are overlapped by using a half-mirror to realize AR 3D display. The second AR system uses a micro-lens array holographic optical element (HOE) as an image combiner. The HOE is a volume holographic grating which functions as a micro-lens array for Bragg-matched light, and as transparent glass for Bragg-mismatched light. A reference beam can reproduce a virtual 3D image from one side, and a reference beam with conjugated phase can reproduce a second 3D image from the other side of the micro-lens array HOE, which gives the display a double-sided 3D display feature.
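The pitch and depth scaling compensate for the fact that a convex lens stretches depth much more than it magnifies laterally. A thin-lens sketch with assumed focal length and object depths (not the authors' system parameters) makes that mismatch explicit:

```python
def image_distance(f: float, u: float) -> float:
    """Thin-lens image distance (real-is-positive convention): 1/f = 1/u + 1/v."""
    return 1.0 / (1.0 / f - 1.0 / u)

# Assumed, illustrative geometry: focal length 50 mm, micro 3D image spanning 60-61 mm
# in front of the magnifying convex lens.
f = 50.0
u_near, u_far = 60.0, 61.0
v_near, v_far = image_distance(f, u_near), image_distance(f, u_far)

m_lateral = v_near / u_near                            # lateral magnification of the near plane
m_longitudinal = abs(v_far - v_near) / (u_far - u_near)  # depth stretch over this interval

print(f"lateral magnification      ~ {m_lateral:.1f}x")
print(f"longitudinal magnification ~ {m_longitudinal:.1f}x "
      "(roughly the square of the lateral value for a shallow scene)")
```

Because the magnified image's depth grows roughly with the square of its lateral size, the reconstructed micro 3D image must be pre-scaled in pitch and depth so that the final AR image appears undistorted.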
Accommodation measurements of horizontally scanning holographic display.
Takaki, Yasuhiro; Yokouchi, Masahito
2012-02-13
Eye accommodation is considered to function properly for three-dimensional (3D) images generated by holography. We developed a horizontally scanning holographic display technique that enlarges both the screen size and viewing zone angle. A 3D image generated by this technique can be easily seen by both eyes. In this study, we measured the accommodation responses to a 3D image generated by the horizontally scanning holographic display technique that has a horizontal viewing zone angle of 14.6° and screen size of 4.3 in. We found that the accommodation responses to a 3D image displayed within 400 mm from the display screen were similar to those of a real object.
ZEISS Angioplex™ Spectral Domain Optical Coherence Tomography Angiography: Technical Aspects.
Rosenfeld, Philip J; Durbin, Mary K; Roisman, Luiz; Zheng, Fang; Miller, Andrew; Robbins, Gillian; Schaal, Karen B; Gregori, Giovanni
2016-01-01
ZEISS Angioplex™ optical coherence tomography (OCT) angiography generates high-resolution three-dimensional maps of the retinal and choroidal microvasculature while retaining all of the capabilities of the existing CIRRUS™ HD-OCT Model 5000 instrument. Angioplex™ OCT angiographic imaging on the CIRRUS™ HD-OCT platform was made possible by increasing the scanning rate to 68,000 A-scans per second and introducing improved tracking software known as FastTrac™ retinal-tracking technology. The generation of en face microvascular flow images with Angioplex™ OCT uses an algorithm known as OCT microangiography-complex, which incorporates differences in both the phase and intensity information contained within sequential B-scans performed at the same position. Current scanning patterns for en face angiographic visualization include a 3 × 3 and a 6 × 6 mm scan pattern on the retina. A volumetric dataset showing erythrocyte flow information can then be displayed as a color-coded retinal depth map in which the microvasculature of the superficial, deep, and avascular layers of the retina are displayed together with the colors red, representing the superficial microvasculature; green, representing the deep retinal vasculature; and blue, representing any vessels present in the normally avascular outer retina. Each retinal layer can be viewed separately, and the microvascular layers representing the choriocapillaris and the remaining choroid can be viewed separately as well. In addition, readjusting the contours of the slabs to target different layers of interest can generate custom en face flow images. Moreover, each en face flow image is accompanied by an en face intensity image to help with the interpretation of the flow results. Current clinical experience with this technology would suggest that OCT angiography should replace fluorescein angiography for retinovascular diseases involving any area of the retina that can be currently scanned with the CIRRUS™ HD-OCT instrument and may replace fluorescein angiography and indocyanine green angiography for some choroidal vascular diseases. © 2016 S. Karger AG, Basel.
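Zeiss's OMAG-complex algorithm is proprietary and is not reproduced here; as a generic sketch of the underlying idea (flow contrast from both amplitude and phase differences between repeated B-scans at the same position), assuming the complex, motion-corrected OCT B-scans are already available as a numpy array:

```python
import numpy as np

def omag_like_flow(bscans: np.ndarray) -> np.ndarray:
    """Flow contrast from N repeated complex B-scans with shape (N, depth, x).

    The magnitude of the complex difference between consecutive repeats is sensitive to
    both intensity (amplitude) and phase changes caused by moving erythrocytes, while
    static tissue yields small differences.
    """
    diffs = np.abs(np.diff(bscans, axis=0))      # |C_(n+1) - C_n| for each pair of repeats
    return diffs.mean(axis=0)                    # average over repeat pairs -> 2-D flow image
```

An en face angiogram is then obtained by projecting this flow contrast over the retinal slab of interest (superficial, deep or avascular), which corresponds to the color-coded depth maps the instrument displays.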
ERIC Educational Resources Information Center
Edstrom, Malin
1987-01-01
Discusses the characteristics of different computer screen technologies including the possible harmful effects on health of cathode ray tube (CRT) terminals. CRTs are compared to other technologies including liquid crystal displays, plasma displays, electroluminescence displays, and light emitting diodes. A chart comparing the different…
Demonstration of three gorges archaeological relics based on 3D-visualization technology
NASA Astrophysics Data System (ADS)
Xu, Wenli
2015-12-01
This paper mainly focuses on the digital demonstration of Three Gorges archaeological relics to exhibit the achievements of the protective measures. A novel and effective method based on 3D-visualization technology, which includes large-scale landscape reconstruction, virtual studio, and virtual panoramic roaming, etc., is proposed to create a digitized interactive demonstration system. The method contains three stages: pre-processing, 3D modeling and integration. Firstly, abundant archaeological information is classified according to its historical and geographical information. Secondly, a 3D-model library is built up using digital image processing and 3D modeling technology. Thirdly, virtual reality technology is used to display the archaeological scenes and cultural relics vividly and realistically. The present work promotes the application of virtual reality to digital projects and enriches the content of digital archaeology.
NASA Astrophysics Data System (ADS)
Reed, Judd E.; Rumberger, John A.; Buithieu, Jean; Behrenbeck, Thomas; Breen, Jerome F.; Sheedy, Patrick F., II
1995-05-01
Electron beam computed tomography is unparalleled in its ability to consistently produce high-quality dynamic images of the human heart. Its use in quantification of left ventricular dynamics is well established in both clinical and research applications. However, the image analysis tools supplied with the scanners offer a limited number of analysis options. They are based on embedded computer systems which have not been significantly upgraded since the scanner was introduced over a decade ago, in spite of the explosive improvements in available computer power which have occurred during this period. To address these shortcomings, a workstation-based ventricular analysis system has been developed at our institution. This system, which has been in use for over five years, is based on current workstation technology and therefore has benefited from the periodic upgrades in processor performance available to these systems. The dynamic image segmentation component of this system is an interactively supervised, semi-automatic surface identification and tracking system. It characterizes the endocardial and epicardial surfaces of the left ventricle as two concentric 4D hyper-space polyhedrons. Each of these polyhedrons has nearly ten thousand vertices, which are deposited into a relational database. The right ventricle is also processed in a similar manner. This database is queried by other custom components which extract ventricular function parameters such as regional ejection fraction and wall stress. The interactive tool which supervises dynamic image segmentation has been enhanced with a temporal domain display. The operator interactively chooses the spatial location of the endpoints of a line segment while the corresponding space/time image is displayed. These images, with content resembling M-mode echocardiography, benefit from electron beam computed tomography's high spatial and contrast resolution. The segmented surfaces are displayed along with the imagery. These displays give the operator valuable feedback pertaining to the contiguity of the extracted surfaces. As with M-mode echocardiography, the velocity of moving structures can be easily visualized and measured. However, many views inaccessible to standard transthoracic echocardiography are easily generated. These features have augmented the interpretability of cine electron beam computed tomography and have prompted the recent cloning of this system into an 'omni-directional M-mode display' system for use in digital post-processing of echocardiographic parasternal short axis tomograms. This enhances the functional assessment in orthogonal views of the left ventricle, accounting for shape changes, particularly in the asymmetric post-infarction ventricle. Conclusions: A new tool has been developed for analysis and visualization of cine electron beam computed tomography. It has been found to be very useful in verifying the consistency of myocardial surface definition with a semi-automated segmentation tool. By drawing on M-mode echocardiography experience, electron beam tomography's interpretability has been enhanced. Use of this feature, in conjunction with the existing image processing tools, will enhance the presentation of data on regional systolic and diastolic functions to clinicians in a format that is familiar to most cardiologists. Additionally, this tool reinforces the advantages of electron beam tomography as a single imaging modality for the assessment of left and right ventricular size, shape, and regional function.
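For orientation only (this is not the authors' database query code), the global ejection fraction derived from the segmented endocardial surfaces is simply EF = (EDV - ESV) / EDV; a minimal helper that could sit on top of per-frame chamber volumes looks like this, with purely illustrative volume values:

```python
def ejection_fraction(volumes_ml) -> float:
    """Global ejection fraction from a list of chamber volumes over one cardiac cycle."""
    edv = max(volumes_ml)          # end-diastolic volume
    esv = min(volumes_ml)          # end-systolic volume
    return (edv - esv) / edv

# Example with illustrative (assumed) left-ventricular volumes in millilitres:
lv_volumes = [120.0, 112.0, 95.0, 74.0, 58.0, 52.0, 66.0, 90.0, 110.0, 119.0]
print(f"EF = {ejection_fraction(lv_volumes):.0%}")   # about 57% for these values
```

Regional ejection fractions follow the same formula applied to sub-volumes bounded by the segmented surface patches stored in the relational database.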
Digital Image Processing Overview For Helmet Mounted Displays
NASA Astrophysics Data System (ADS)
Parise, Michael J.
1989-09-01
Digital image processing provides a means to manipulate an image and presents a user with a variety of display formats that are not available in the analog image processing environment. When performed in real time and presented on a Helmet Mounted Display, system capability and flexibility are greatly enhanced. The information content of a display can be increased by the addition of real time insets and static windows from secondary sensor sources, near real time 3-D imaging from a single sensor can be achieved, graphical information can be added, and enhancement techniques can be employed. Such increased functionality is generating a considerable amount of interest in the military and commercial markets. This paper discusses some of these image processing techniques and their applications.
Vroom: designing an augmented environment for remote collaboration in digital cinema production
NASA Astrophysics Data System (ADS)
Margolis, Todd; Cornish, Tracy
2013-03-01
As media technologies become increasingly affordable, compact and inherently networked, new generations of telecollaborative platforms continue to arise which integrate these new affordances. Virtual reality has been primarily concerned with creating simulations of environments that can transport participants to real or imagined spaces that replace the "real world". Meanwhile Augmented Reality systems have evolved to interleave objects from Virtual Reality environments into the physical landscape. Perhaps now there is a new class of systems that reverse this precept to enhance dynamic media landscapes and immersive physical display environments to enable intuitive data exploration through collaboration. Vroom (Virtual Room) is a next-generation reconfigurable tiled display environment in development at the California Institute for Telecommunications and Information Technology (Calit2) at the University of California, San Diego. Vroom enables freely scalable digital collaboratories, connecting distributed, high-resolution visualization resources for collaborative work in the sciences, engineering and the arts. Vroom transforms a physical space into an immersive media environment with large-format interactive display surfaces, video teleconferencing and spatialized audio built on a high-speed optical network backbone. Vroom enables group collaboration for local and remote participants to share knowledge and experiences. Possible applications include: remote learning, command and control, storyboarding, post-production editorial review, high-resolution video playback, 3D visualization, screencasting and image, video and multimedia file sharing. To support these various scenarios, Vroom features support for multiple user interfaces (optical tracking, touch UI, gesture interface, etc.), support for directional and spatialized audio, giga-pixel image interactivity, 4K video streaming, 3D visualization and telematic production. This paper explains the design process that has been utilized to make Vroom an accessible and intuitive immersive environment for remote collaboration, specifically for digital cinema production.
SPAM- SPECTRAL ANALYSIS MANAGER (DEC VAX/VMS VERSION)
NASA Technical Reports Server (NTRS)
Solomon, J. E.
1994-01-01
The Spectral Analysis Manager (SPAM) was developed to allow easy qualitative analysis of multi-dimensional imaging spectrometer data. Imaging spectrometers provide sufficient spectral sampling to define unique spectral signatures on a per pixel basis. Thus direct material identification becomes possible for geologic studies. SPAM provides a variety of capabilities for carrying out interactive analysis of the massive and complex datasets associated with multispectral remote sensing observations. In addition to normal image processing functions, SPAM provides multiple levels of on-line help, a flexible command interpretation, graceful error recovery, and a program structure which can be implemented in a variety of environments. SPAM was designed to be visually oriented and user friendly with the liberal employment of graphics for rapid and efficient exploratory analysis of imaging spectrometry data. SPAM provides functions to enable arithmetic manipulations of the data, such as normalization, linear mixing, band ratio discrimination, and low-pass filtering. SPAM can be used to examine the spectra of an individual pixel or the average spectra over a number of pixels. SPAM also supports image segmentation, fast spectral signature matching, spectral library usage, mixture analysis, and feature extraction. High speed spectral signature matching is performed by using a binary spectral encoding algorithm to separate and identify mineral components present in the scene. The same binary encoding allows automatic spectral clustering. Spectral data may be entered from a digitizing tablet, stored in a user library, compared to the master library containing mineral standards, and then displayed as a timesequence spectral movie. The output plots, histograms, and stretched histograms produced by SPAM can be sent to a lineprinter, stored as separate RGB disk files, or sent to a Quick Color Recorder. SPAM is written in C for interactive execution and is available for two different machine environments. There is a DEC VAX/VMS version with a central memory requirement of approximately 242K of 8 bit bytes and a machine independent UNIX 4.2 version. The display device currently supported is the Raster Technologies display processor. Other 512 x 512 resolution color display devices, such as De Anza, may be added with minor code modifications. This program was developed in 1986.
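The binary spectral encoding that SPAM uses for fast matching is a classic technique: each band is coded 1 if the pixel's value exceeds the spectrum's own mean, and library candidates are ranked by Hamming distance. The following numpy sketch shows the amplitude-bit version only, with a tiny made-up library; it is an illustration of the idea, not SPAM's own C implementation.

```python
import numpy as np

def binary_encode(spectrum: np.ndarray) -> np.ndarray:
    """One-bit-per-band encoding: 1 where the band exceeds the spectrum's mean value."""
    return (spectrum > spectrum.mean()).astype(np.uint8)

def match_library(pixel_spectrum: np.ndarray, library: dict) -> list:
    """Rank library entries (name -> spectrum) by Hamming distance to the pixel's binary code."""
    code = binary_encode(pixel_spectrum)
    scores = {name: int(np.count_nonzero(binary_encode(spec) != code))
              for name, spec in library.items()}
    return sorted(scores.items(), key=lambda kv: kv[1])

# Illustrative use with a tiny synthetic library (spectra are made up, not mineral standards):
bands = np.linspace(0.4, 2.5, 64)
library = {
    "kaolinite-like": np.exp(-((bands - 2.2) ** 2) / 0.02),
    "hematite-like": np.exp(-((bands - 0.9) ** 2) / 0.05),
}
pixel = library["hematite-like"] + 0.05 * np.random.default_rng(0).normal(size=bands.size)
print(match_library(pixel, library)[0][0])   # expected to rank 'hematite-like' first
```

Because the codes are single bits per band, matching reduces to bit comparisons, which is what makes whole-scene signature matching and automatic spectral clustering fast.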
SPAM- SPECTRAL ANALYSIS MANAGER (UNIX VERSION)
NASA Technical Reports Server (NTRS)
Solomon, J. E.
1994-01-01
The Spectral Analysis Manager (SPAM) was developed to allow easy qualitative analysis of multi-dimensional imaging spectrometer data. Imaging spectrometers provide sufficient spectral sampling to define unique spectral signatures on a per-pixel basis. Thus, direct material identification becomes possible for geologic studies. SPAM provides a variety of capabilities for carrying out interactive analysis of the massive and complex datasets associated with multispectral remote sensing observations. In addition to normal image processing functions, SPAM provides multiple levels of on-line help, flexible command interpretation, graceful error recovery, and a program structure which can be implemented in a variety of environments. SPAM was designed to be visually oriented and user-friendly, with the liberal employment of graphics for rapid and efficient exploratory analysis of imaging spectrometry data. SPAM provides functions to enable arithmetic manipulations of the data, such as normalization, linear mixing, band ratio discrimination, and low-pass filtering. SPAM can be used to examine the spectra of an individual pixel or the average spectra over a number of pixels. SPAM also supports image segmentation, fast spectral signature matching, spectral library usage, mixture analysis, and feature extraction. High-speed spectral signature matching is performed by using a binary spectral encoding algorithm to separate and identify mineral components present in the scene. The same binary encoding allows automatic spectral clustering. Spectral data may be entered from a digitizing tablet, stored in a user library, compared to the master library containing mineral standards, and then displayed as a time-sequence spectral movie. The output plots, histograms, and stretched histograms produced by SPAM can be sent to a line printer, stored as separate RGB disk files, or sent to a Quick Color Recorder. SPAM is written in C for interactive execution and is available for two different machine environments. There is a DEC VAX/VMS version with a central memory requirement of approximately 242K 8-bit bytes and a machine-independent UNIX 4.2 version. The display device currently supported is the Raster Technologies display processor. Other 512 x 512 resolution color display devices, such as De Anza, may be added with minor code modifications. This program was developed in 1986.
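This abstract also notes that the same binary encoding supports automatic spectral clustering. As a hedged illustration of how such clustering could work, the sketch below runs a simple single-pass ("leader") clustering over binary codes, joining each code to the first cluster whose representative lies within a Hamming-distance threshold; the threshold, data and clustering rule are assumptions for illustration, not taken from SPAM itself.

```c
/* Minimal sketch of automatic clustering over binary spectral codes:
 * a single "leader" pass assigns each code to the first cluster whose
 * representative is within a Hamming-distance threshold, or starts a
 * new cluster otherwise.  All parameters are illustrative assumptions. */
#include <stdio.h>

#define NBANDS 8
#define MAXCODES 16
#define MAXCLUSTERS 16
#define THRESHOLD 2          /* max Hamming distance to join a cluster */

static int hamming(const unsigned char *a, const unsigned char *b, int n)
{
    int d = 0;
    for (int i = 0; i < n; i++) d += (a[i] != b[i]);
    return d;
}

int main(void)
{
    /* Toy binary codes, e.g. produced by thresholding each band. */
    unsigned char codes[MAXCODES][NBANDS] = {
        {0,0,1,1,1,0,0,0},
        {0,0,1,1,1,1,0,0},
        {1,1,0,0,0,1,1,1},
        {1,1,0,0,1,1,1,1},
        {0,0,1,1,1,0,0,0},
    };
    int ncodes = 5;

    unsigned char reps[MAXCLUSTERS][NBANDS];   /* cluster representatives */
    int nclusters = 0;

    for (int i = 0; i < ncodes; i++) {
        int assigned = -1;
        for (int c = 0; c < nclusters && assigned < 0; c++)
            if (hamming(codes[i], reps[c], NBANDS) <= THRESHOLD)
                assigned = c;
        if (assigned < 0) {                    /* start a new cluster */
            for (int b = 0; b < NBANDS; b++) reps[nclusters][b] = codes[i][b];
            assigned = nclusters++;
        }
        printf("code %d -> cluster %d\n", i, assigned);
    }
    printf("total clusters: %d\n", nclusters);
    return 0;
}
```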
Design of embedded endoscopic ultrasonic imaging system
NASA Astrophysics Data System (ADS)
Li, Ming; Zhou, Hao; Wen, Shijie; Chen, Xiodong; Yu, Daoyin
2008-12-01
The endoscopic ultrasonic imaging system is an important component of the endoscopic ultrasonography system (EUS). Through the ultrasonic probe, the EUS detects the histological features of the digestive organ walls in the form of ultrasonic echoes, which are received by a reception circuit consisting of amplification, gain compensation, filtering and A/D conversion stages. The endoscopic ultrasonic imaging system is the back-end processing system of the EUS: it receives the digitized ultrasonic echoes modulated by the digestive tract wall from the reception circuit and, after digital signal processing such as demodulation, acquires and displays the histological features as images and characteristic data. Traditional endoscopic ultrasonic imaging systems are mainly built around image acquisition and processing chips connected to a personal computer through a USB 2.0 interface; they are expensive, structurally complicated, poorly portable and difficult to deploy widely. To address these shortcomings, this paper presents a digital signal acquisition and processing design based on embedded technology, with ARM and FPGA as the core hardware, replacing the traditional USB 2.0 and personal computer design. Using a built-in FIFO and dual buffers, the FPGA implements ping-pong data storage while simultaneously transferring the image data to the ARM over the EBI bus by DMA, under ARM control, to achieve high-speed transmission. The ARM displays an image each time a DMA transfer completes and performs system control through the drivers and applications running on the embedded operating system Windows CE, which provides a stable, safe and reliable platform for the embedded device software. Benefiting from the excellent graphical user interface (GUI) and good performance of Windows CE, the application not only clearly displays 511×511-pixel ultrasonic echo images but also provides a simple and friendly operating interface with mouse and touch screen, which is more convenient than the traditional endoscopic ultrasonic imaging system. We designed the complete embedded system, including the core and peripheral circuits of the FPGA and ARM, the power network circuit and the LCD display circuit; experimental verification showed that it displays ultrasonic images correctly, overcoming the bulk and complexity of the traditional endoscopic ultrasonic imaging system.
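As a rough illustration of the ping-pong (dual-buffer) scheme described above, the following host-side C sketch alternates two buffers so that one is filled while the other is consumed, with the roles swapping each frame. On the real system the filling would be done by the FPGA FIFO and the hand-off by DMA over the EBI bus; the buffer size, the fill/process stand-ins and all names here are assumptions for illustration only, not the actual firmware.

```c
/* Minimal sketch of the ping-pong (double-buffer) idea: while one
 * buffer is being filled with echo samples, the other is read out for
 * display/processing, and the roles swap each frame.  This is a
 * sequential host-side simulation for illustration only. */
#include <stdio.h>

#define BUF_SAMPLES 512

static short buffers[2][BUF_SAMPLES];   /* the two ping-pong buffers */

/* Stand-in for the acquisition side (on the real system, the FPGA
 * writes echo samples into the idle buffer through its FIFO). */
static void fill_buffer(short *buf, int frame)
{
    for (int i = 0; i < BUF_SAMPLES; i++)
        buf[i] = (short)((frame * 100 + i) % 2048);
}

/* Stand-in for the processing/display side running on the ARM. */
static void process_buffer(const short *buf, int frame)
{
    long sum = 0;
    for (int i = 0; i < BUF_SAMPLES; i++) sum += buf[i];
    printf("frame %d processed, mean sample = %ld\n", frame, sum / BUF_SAMPLES);
}

int main(void)
{
    int fill_idx = 0;                    /* buffer currently being filled */
    fill_buffer(buffers[fill_idx], 0);   /* prime the first buffer */

    for (int frame = 1; frame <= 4; frame++) {
        int ready_idx = fill_idx;        /* last filled buffer is now ready */
        fill_idx ^= 1;                   /* swap roles: fill the other one */
        fill_buffer(buffers[fill_idx], frame);          /* "acquisition"   */
        process_buffer(buffers[ready_idx], frame - 1);  /* "DMA + display" */
    }
    process_buffer(buffers[fill_idx], 4);
    return 0;
}
```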
Recent progress in flexible OLED displays
NASA Astrophysics Data System (ADS)
Hack, Michael G.; Weaver, Michael S.; Mahon, Janice K.; Brown, Julie J.
2001-09-01
Organic light emitting device (OLED) technology has recently been shown to demonstrate excellent performance and cost characteristics for use in numerous flat panel display (FPD) applications. OLED displays emit bright, colorful light with excellent power efficiency, wide viewing angle and video response rates. OLEDs are also demonstrating the requisite environmental robustness for a wide variety of applications. OLED technology is also the first FPD technology with the potential to be highly functional and durable in a flexible format. The use of plastic and other flexible substrate materials offers numerous advantages over commonly used glass substrates, including impact resistance, light weight, thinness and conformability. Currently, OLED displays are being fabricated on rigid substrates, such as glass or silicon wafers. At Universal Display Corporation (UDC), we are developing a new class of flexible OLED displays (FOLEDs). These displays also have extremely low power consumption through the use of electrophosphorescent doped OLEDs. To commercialize FOLED technology, a number of technical issues related to packaging and display processing on flexible substrates need to be addressed. In this paper, we report on our recent results to demonstrate the key technologies that enable the manufacture of power efficient, long-life flexible OLED displays for commercial and military applications.
NASA Technical Reports Server (NTRS)
Robb, R. A.; Ritman, E. L.; Wood, E. H.
1975-01-01
A device was developed that makes possible the dynamic reconstruction of the heart and lungs within the intact thorax of a living dog or human. It can record approximately 30 multiplanar X-ray images of the thorax practically instantaneously, at intervals frequent enough, and with sufficient density and spatial resolution, to capture and resolve the most rapid changes in cardiac structural detail throughout each cardiac cycle. The device can be installed in a clinical diagnostic setting as well as in a research environment, and its construction and application for the determination and real-time display of cross sections of the functioning thorax and its contents in living animals and humans are technologically feasible.
Beukelman, David R; Hux, Karen; Dietz, Aimee; McKelvey, Miechelle; Weissling, Kristy
2015-01-01
Research about the effectiveness of communicative supports, together with advances in photographic technology, has prompted changes in the way speech-language pathologists design and implement interventions for people with aphasia. The purpose of this paper is to describe the use of photographic images as a basis for developing communication supports for people with chronic aphasia secondary to sudden-onset events due to cerebrovascular accidents (strokes). Topics include the evolution of AAC-based supports as they relate to people with aphasia, the development and key features of visual scene displays (VSDs), and future directions concerning the incorporation of photographs into communication supports for people with chronic and severe aphasia.
Krueger, Wesley W O
2011-01-01
An eyewear-mounted visual display ("User-worn see-through display") projecting an artificial horizon aligned with the user's head and body position in space can prevent or lessen motion sickness in susceptible individuals in a motion-provocative environment, and can also aid patients undergoing vestibular rehabilitation. In this project, a wearable display device, including the software technology and hardware, was developed, and a phase I feasibility study and a phase II clinical trial for safety and efficacy were performed. Both phase I and phase II were prospective studies funded by the NIH. The phase II study used repeated measures for motion-intolerant subjects and a randomized control-group (display device/no display device) pre-/post-test design for patients in vestibular rehabilitation. Following technology and display device development, 75 patients were evaluated by tests and rating scales in the phase II study; 25 subjects with motion intolerance used the display device in provocative environments and completed subjective rating scales, whereas 50 patients were evaluated before and after vestibular rehabilitation (25 using the display device and 25 in a control group) using established test measures. All patients with motion intolerance rated the technology as helpful for the nine symptoms assessed, and 96% rated the display device as simple and easy to use. Duration of symptoms decreased significantly with use of the display technology. In patients undergoing vestibular rehabilitation, there were no significant differences between display device users and controls in the amount of change from pre- to post-therapy on objective balance tests. However, those using the technology required significantly fewer rehabilitation sessions to achieve those outcomes than the control group. A user-worn see-through display, utilizing a visual fixation target coupled with a stable artificial horizon and aligned with user movement, has demonstrated substantial benefit for individuals susceptible to motion intolerance and spatial disorientation and for those undergoing vestibular rehabilitation. The technology developed has applications in any environment where motion sensitivity affects human performance.
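The abstract does not detail how the artificial horizon is kept aligned with the user, so the sketch below shows one plausible approach under stated assumptions: counter-rotate the rendered horizon line by the head's roll angle and shift it vertically in proportion to pitch, scaled by the display's vertical field of view. The geometry, sign conventions, display resolution and names are hypothetical and are not the device's actual algorithm.

```c
/* Minimal sketch of keeping a rendered horizon line gravity-aligned in a
 * head-worn display.  Angles, field of view and names are illustrative
 * assumptions, not the patented device's implementation. */
#include <stdio.h>
#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif
#define DEG2RAD (M_PI / 180.0)

typedef struct { double x, y; } Vec2;   /* screen coordinates, pixels */

/* Endpoints of the horizon line for a display of (width x height) pixels,
 * given head roll and pitch in degrees and the vertical field of view. */
static void horizon_endpoints(double roll_deg, double pitch_deg,
                              double width, double height, double vfov_deg,
                              Vec2 *left, Vec2 *right)
{
    double roll = roll_deg * DEG2RAD;
    /* Pitch shifts the horizon up/down; scale degrees to pixels via vfov. */
    double dy = (pitch_deg / vfov_deg) * height;
    double half = width / 2.0;
    double cx = width / 2.0, cy = height / 2.0 + dy;

    /* Counter-rotate by -roll so the line stays level in the world. */
    left->x  = cx - half * cos(-roll);  left->y  = cy - half * sin(-roll);
    right->x = cx + half * cos(-roll);  right->y = cy + half * sin(-roll);
}

int main(void)
{
    Vec2 l, r;
    horizon_endpoints(15.0, 5.0, 1280.0, 720.0, 40.0, &l, &r);
    printf("horizon: (%.1f, %.1f) -> (%.1f, %.1f)\n", l.x, l.y, r.x, r.y);
    return 0;
}
```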
NASA Astrophysics Data System (ADS)
Nolte, David D.
2016-03-01
Biodynamic imaging is an emerging 3D optical imaging technology that probes up to 1 mm deep inside three-dimensional living tissue, using short-coherence dynamic light scattering to measure the intracellular motions of cells inside their natural microenvironments. Biodynamic imaging is label-free and non-invasive. The information content of biodynamic imaging is captured through tissue dynamics spectroscopy, which displays the changes in the Doppler signatures from intracellular constituents in response to applied compounds. The affected dynamic intracellular mechanisms include organelle transport, membrane undulations, cytoskeletal restructuring, strain at cellular adhesions, cytokinesis, mitosis, and exo- and endocytosis, among others. The development of 3D high-content assays such as biodynamic profiling can become a critical new tool for assessing the efficacy of drugs and the suitability of specific types of tissue growth for drug discovery and development. The use of biodynamic profiling to predict the clinical outcome of living biopsies exposed to cancer therapeutics can be developed into a phenotypic companion diagnostic, as well as a new tool for therapy selection in personalized medicine. This invited talk will present an overview of the optical, physical and physiological processes involved in biodynamic imaging. Biodynamic imaging modalities include motility contrast imaging (MCI), tissue-dynamics spectroscopy (TDS) and tissue-dynamics imaging (TDI). A wide range of potential applications will be described, including process monitoring for 3D tissue culture, drug discovery and development, cancer therapy selection, and embryo assessment for in-vitro fertilization and artificial reproductive technologies, among others.
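As a simplified illustration of the kind of Doppler signature that tissue-dynamics spectroscopy tracks, the sketch below turns a synthetic intensity time series into a fluctuation power spectrum with a plain DFT and reports the dominant fluctuation frequency. The synthetic signal, sampling rate and names are assumptions made for illustration; the actual biodynamic imaging pipeline is considerably more involved.

```c
/* Minimal sketch: compute the fluctuation (Doppler) power spectrum of an
 * intensity time series with a naive DFT and report the dominant frequency.
 * Signal, sampling rate and names are illustrative assumptions. */
#include <stdio.h>
#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

#define N  256                /* samples in the intensity time series */
#define FS 25.0               /* sampling rate, Hz (assumed) */

int main(void)
{
    double x[N];

    /* Synthetic intensity trace: slow drift plus a ~2 Hz fluctuation. */
    for (int n = 0; n < N; n++)
        x[n] = 1.0 + 0.05 * n / N + 0.2 * sin(2.0 * M_PI * 2.0 * n / FS);

    /* Remove the mean so the spectrum reflects fluctuations only. */
    double mean = 0.0;
    for (int n = 0; n < N; n++) mean += x[n];
    mean /= N;

    /* Naive DFT over positive frequencies; track the strongest bin. */
    double peak_power = 0.0, peak_freq = 0.0;
    for (int k = 1; k <= N / 2; k++) {
        double re = 0.0, im = 0.0;
        for (int n = 0; n < N; n++) {
            double ang = -2.0 * M_PI * k * n / N;
            re += (x[n] - mean) * cos(ang);
            im += (x[n] - mean) * sin(ang);
        }
        double p = (re * re + im * im) / N;
        if (p > peak_power) { peak_power = p; peak_freq = k * FS / N; }
    }
    printf("dominant fluctuation near %.2f Hz (power %.2f)\n",
           peak_freq, peak_power);
    return 0;
}
```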
What Is A Picture Archiving And Communication System (PACS)?
NASA Astrophysics Data System (ADS)
Marceau, Carla
1982-01-01
A PACS is a digital system for acquiring, storing, moving and displaying picture or image information. It is an alternative to film jackets made possible by recent breakthroughs in computer technology: telecommunications, local area networks and optical disks. The fundamental concept of the digital representation of image information is introduced. It is shown that freeing images from a material representation on film or paper leads to a dramatic increase in the flexibility with which the images can be used. The ultimate goal of a medical PACS is a radiology department without film jackets. The inherent nature of digital images and the power of the computer allow instant, free "copies" of images to be made and discarded. These copies can be transmitted to distant sites in seconds, without the "original" ever leaving the archives of the radiology department. The result is a radiology department with much freer access to patient images and greater protection against lost or misplaced image information. Finally, images in digital form can be treated as data for computer image processing, which includes enhancement, reconstruction and even computer-aided analysis.
XML-based scripting of multimodality image presentations in multidisciplinary clinical conferences
NASA Astrophysics Data System (ADS)
Ratib, Osman M.; Allada, Vivekanand; Dahlbom, Magdalena; Marcus, Phillip; Fine, Ian; Lapstra, Lorelle
2002-05-01
We developed a multi-modality image presentation software for display and analysis of images and related data from different imaging modalities. The software is part of a cardiac image review and presentation platform that supports integration of digital images and data from digital and analog media such as videotapes, analog x-ray films and 35 mm cine films. The software supports standard DICOM image files as well as AVI and PDF data formats. The system is integrated in a digital conferencing room that includes projections of digital and analog sources, remote videoconferencing capabilities, and an electronic whiteboard. The goal of this pilot project is to: 1) develop a new paradigm for image and data management for presentation in a clinically meaningful sequence adapted to case-specific scenarios, 2) design and implement a multi-modality review and conferencing workstation using component technology and customizable 'plug-in' architecture to support complex review and diagnostic tasks applicable to all cardiac imaging modalities and 3) develop an XML-based scripting model of image and data presentation for clinical review and decision making during routine clinical tasks and multidisciplinary clinical conferences.
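The abstract does not show the XML scripting model itself, so the following C sketch illustrates the general idea under assumed names: a presentation is an ordered list of steps, each naming a modality, a source and a display action, and the hypothetical XML in the comment shows how such a script might be written. None of the element names or fields are taken from the authors' actual schema.

```c
/* Minimal sketch of an ordered presentation "script".  The step fields,
 * example file names and the XML shown in this comment are illustrative
 * assumptions only.  A corresponding hypothetical XML script might be:
 *
 *   <presentation case="cardiac-review">
 *     <step modality="DICOM" src="echo_series_01" action="cine"/>
 *     <step modality="AVI"   src="cath_loop.avi"  action="play"/>
 *     <step modality="PDF"   src="report.pdf"     action="show"/>
 *   </presentation>
 */
#include <stdio.h>

typedef struct {
    const char *modality;   /* DICOM, AVI, PDF, ... */
    const char *source;     /* file or series identifier */
    const char *action;     /* how the viewer should present it */
} PresentationStep;

static void play_script(const PresentationStep *steps, int n)
{
    for (int i = 0; i < n; i++)
        printf("step %d: %-5s %-16s -> %s\n",
               i + 1, steps[i].modality, steps[i].source, steps[i].action);
}

int main(void)
{
    PresentationStep script[] = {
        { "DICOM", "echo_series_01", "cine" },
        { "AVI",   "cath_loop.avi",  "play" },
        { "PDF",   "report.pdf",     "show" },
    };
    play_script(script, 3);
    return 0;
}
```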
Method and apparatus for the simultaneous display and correlation of independently generated images
Vaitekunas, Jeffrey J.; Roberts, Ronald A.
1991-01-01
An apparatus and method for location-by-location correlation of multiple images from Non-Destructive Evaluation (NDE) and other sources. Multiple images of a material specimen are displayed on one or more monitors of an interactive graphics system. Specimen landmarks are located in each image, and mapping functions from a reference image to each other image are calculated using the landmark locations. A location selected by positioning a cursor in the reference image is mapped to the other images, and location identifiers are simultaneously displayed in those images. Movement of the cursor in the reference image causes simultaneous movement of the location identifiers in the other images to positions corresponding to the location of the reference image cursor.
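As an illustration of the landmark-based mapping the patent describes, the sketch below fits a 2D affine transform from three landmark pairs (reference image to another image) and uses it to map a cursor position. The choice of exactly three landmarks and an affine model is an assumption made here for simplicity; the patent abstract does not commit to a particular form of mapping function.

```c
/* Minimal sketch: fit x' = a0*x + a1*y + a2 and y' = b0*x + b1*y + b2
 * from three landmark correspondences, then map a cursor position from
 * the reference image into the other image.  Landmark values and names
 * are illustrative assumptions. */
#include <stdio.h>

typedef struct { double x, y; } Pt;

/* Solve the 3x3 system M*c = b by Cramer's rule (returns 0 if singular). */
static int solve3(double M[3][3], const double b[3], double c[3])
{
    double det =
        M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1]) -
        M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0]) +
        M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]);
    if (det == 0.0) return 0;
    for (int j = 0; j < 3; j++) {
        double A[3][3];
        for (int r = 0; r < 3; r++)
            for (int s = 0; s < 3; s++)
                A[r][s] = (s == j) ? b[r] : M[r][s];
        double dj =
            A[0][0] * (A[1][1] * A[2][2] - A[1][2] * A[2][1]) -
            A[0][1] * (A[1][0] * A[2][2] - A[1][2] * A[2][0]) +
            A[0][2] * (A[1][0] * A[2][1] - A[1][1] * A[2][0]);
        c[j] = dj / det;
    }
    return 1;
}

int main(void)
{
    /* Three landmarks located in the reference image and the other image. */
    Pt ref[3]   = { {100, 100}, {400, 120}, {150, 380} };
    Pt other[3] = { {110, 140}, {420, 150}, {170, 420} };

    double M[3][3], bx[3], by[3], a[3], b[3];
    for (int i = 0; i < 3; i++) {
        M[i][0] = ref[i].x;  M[i][1] = ref[i].y;  M[i][2] = 1.0;
        bx[i] = other[i].x;  by[i] = other[i].y;
    }
    if (!solve3(M, bx, a) || !solve3(M, by, b)) {
        printf("landmarks are collinear; cannot fit mapping\n");
        return 1;
    }

    /* Map a cursor position from the reference image to the other image. */
    Pt cursor = { 250, 200 };
    Pt mapped = { a[0] * cursor.x + a[1] * cursor.y + a[2],
                  b[0] * cursor.x + b[1] * cursor.y + b[2] };
    printf("cursor (%.0f, %.0f) -> (%.1f, %.1f)\n",
           cursor.x, cursor.y, mapped.x, mapped.y);
    return 0;
}
```

With more than three landmarks, the same model would normally be fit by least squares so that localization errors in individual landmarks average out.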