Video display engineering and optimization system
NASA Technical Reports Server (NTRS)
Larimer, James (Inventor)
1997-01-01
A video display engineering and optimization CAD simulation system for designing an LCD display integrates models of a display device circuit, electro-optics, surface geometry, and physiological optics to model the system performance of a display. This CAD system permits system performance and design trade-offs to be evaluated without constructing a physical prototype of the device. The system includes a series of modules which permit analysis of design trade-offs in terms of their visual impact on a viewer looking at a display.
Using ARINC 818 Avionics Digital Video Bus (ADVB) for military displays
NASA Astrophysics Data System (ADS)
Alexander, Jon; Keller, Tim
2007-04-01
ARINC 818 Avionics Digital Video Bus (ADVB) is a new digital video interface and protocol standard developed especially for high bandwidth uncompressed digital video. The first draft of this standard, released in January of 2007, has been advanced by ARINC and the aerospace community to meet the acute needs of commercial aviation for higher performance digital video. This paper analyzes ARINC 818 for use in military display systems found in avionics, helicopters, and ground vehicles. The flexibility of ARINC 818 for the diverse resolutions, grayscales, pixel formats, and frame rates of military displays is analyzed as well as the suitability of ARINC 818 to support requirements for military video systems including bandwidth, latency, and reliability. Implementation issues relevant to military displays are presented.
A Scalable, Collaborative, Interactive Light-field Display System
2014-06-01
Keywords: light-field, holographic displays, 3D display, holographic video, integral photography, plenoptic, computed photography. Distribution Statement A: Approved for public release.
Video Games: A Human Factors Guide to Visual Display Design and Instructional System Design
1984-04-01
Electronic video games have many of the same technological and psychological characteristics that are found in military computer-based systems. Two research programs, both of which employ video games as experimental stimuli, are presented here. The first research program seeks to identify and exploit the characteristics of video games in the design of game-based training devices. The second program is designed to explore the effects of electronic video displays.
Microcomputer Selection Guide for Construction Field Offices. Revision.
1984-09-01
The monitor displays information on a video display screen. Microcomputer systems today are available in a variety of configurations. White-on-black monitors reportedly cause more eye fatigue, while amber is reported to cause the least. The video display should be amber or green, with a resolution of at least 640 x 200 dots.
NASA Technical Reports Server (NTRS)
Bogart, Edward H. (Inventor); Pope, Alan T. (Inventor)
2000-01-01
A system for display on a single video display terminal of multiple physiological measurements is provided. A subject is monitored by a plurality of instruments which feed data to a computer programmed to receive data, calculate data products such as index of engagement and heart rate, and display the data in a graphical format simultaneously on a single video display terminal. In addition live video representing the view of the subject and the experimental setup may also be integrated into the single data display. The display may be recorded on a standard video tape recorder for retrospective analysis.
Compression of stereoscopic video using MPEG-2
NASA Astrophysics Data System (ADS)
Puri, A.; Kollarits, Richard V.; Haskell, Barry G.
1995-10-01
Many current as well as emerging applications in entertainment, remote operations, manufacturing, and medicine can benefit from the depth perception offered by stereoscopic video systems, which employ two views of a scene imaged under the constraints imposed by the human visual system. Among the many challenges to be overcome for practical realization and widespread use of 3D/stereoscopic systems are good 3D displays and efficient techniques for digital compression of enormous amounts of data while maintaining compatibility with normal video decoding and display systems. After a brief introduction to the basics of 3D/stereo, including issues of depth perception, stereoscopic 3D displays, and terminology in stereoscopic imaging and display, we present an overview of tools in the MPEG-2 video standard that are relevant to compression of stereoscopic video, the main topic of this paper. Next, we outline the various approaches to compression of stereoscopic video and then focus on compatible stereoscopic video coding using MPEG-2 temporal scalability concepts. Compatible coding becomes possible with two types of prediction structures: disparity-compensated prediction, and combined disparity- and motion-compensated prediction. To further improve coding performance and display quality, preprocessing to reduce mismatch between the two views forming the stereoscopic video is considered. Results of simulations performed on stereoscopic video of normal TV resolution are then reported, comparing the performance of the two prediction structures with the simulcast solution. Combined disparity- and motion-compensated prediction is found to offer the best performance. The results indicate that compression of both views of stereoscopic video of normal TV resolution appears feasible in a total of 6 to 8 Mbit/s. We then discuss multi-viewpoint video, a generalization of stereoscopic video.
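Both prediction structures rest on block matching between views. As an illustrative sketch only (not the paper's coder), a minimal sum-of-absolute-differences search for horizontal disparity, which dominates for parallel camera rigs, might look like:

```python
import numpy as np

def sad(a, b):
    """Sum of absolute differences between two equally sized blocks."""
    return int(np.abs(a.astype(int) - b.astype(int)).sum())

def best_horizontal_disparity(left, right, y, x, block=8, search=16):
    """Find the horizontal shift of a block in the left view that best
    predicts the co-located block in the right view (minimum SAD cost)."""
    target = right[y:y+block, x:x+block]
    best_d, best_cost = 0, None
    for d in range(-search, search + 1):
        xs = x + d
        if xs < 0 or xs + block > left.shape[1]:
            continue  # candidate block would fall outside the left view
        cost = sad(left[y:y+block, xs:xs+block], target)
        if best_cost is None or cost < best_cost:
            best_d, best_cost = d, cost
    return best_d, best_cost

# Toy example: the right view is the left view shifted 3 pixels left.
rng = np.random.default_rng(0)
left = rng.integers(0, 256, (32, 64), dtype=np.uint8)
right = np.roll(left, -3, axis=1)
d, cost = best_horizontal_disparity(left, right, 8, 24)
print(d, cost)  # 3 0
```

A real coder would add motion-compensated prediction from earlier frames of the same view and pick whichever predictor (or combination) minimizes rate-distortion cost, as the abstract's "combined" structure does.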
Finally, we describe ongoing efforts within MPEG-2 to define a profile for stereoscopic video coding, as well as, the promise of MPEG-4 in addressing coding of multi-viewpoint video.
A system for the real-time display of radar and video images of targets
NASA Technical Reports Server (NTRS)
Allen, W. W.; Burnside, W. D.
1990-01-01
Described here is a software and hardware system for the real-time display of radar and video images for use in a measurement range. The main purpose is to give the reader a clear idea of the software and hardware design and its functions. The system is designed around a Tektronix XD88-30 graphics workstation, used to display radar images superimposed on video images of the actual target. The system's purpose is to provide a platform for the analysis and documentation of radar images and their associated targets in a menu-driven, user-oriented environment.
Display Sharing: An Alternative Paradigm
NASA Technical Reports Server (NTRS)
Brown, Michael A.
2010-01-01
The current Johnson Space Center (JSC) Mission Control Center (MCC) Video Transport System (VTS) provides flight controllers and management the ability to meld raw video from various sources with telemetry to improve situational awareness. However, maintaining a separate infrastructure for video delivery and integrating video content with data adds significant complexity and cost to the system. When considering alternative architectures for a VTS, the current system's ability to share specific computer displays in their entirety to other locations, such as large projector systems, flight control rooms, and back support rooms throughout the facilities and centers, must be incorporated into any new architecture. Internet Protocol (IP)-based systems also support video delivery and integration, and generally have an advantage in terms of cost and maintainability. Although IP-based systems are versatile, the task of sharing a computer display from one workstation to another can be time consuming for an end user and inconvenient to administer at a system level. The objective of this paper is to present a prototype display sharing enterprise solution. Display sharing is a system which delivers image sharing across the LAN while simultaneously managing bandwidth, supporting encryption, enabling recovery and resynchronization following a loss of signal, and minimizing latency. Additional critical elements include image scaling support, multi-sharing, ease of initial integration and configuration, integration with desktop window managers, collaboration tools, and host and recipient controls. The goal of this paper is to summarize the various elements of an IP-based display sharing system that can be used in today's control center environment.
IVTS-CEV (Interactive Video Tape System-Combat Engineer Vehicle) Gunnery Trainer.
1981-07-01
The IVTS/CEV exploits video game technology developed for and marketed in consumer video games. It is a conceptual/breadboard-level classroom interactive training system designed to train Combat Engineer Vehicle (CEV) gunners in target acquisition and engagement with the main gun. The concept demonstration consists of two units: a gunner station and a display module. The gunner station has optics and gun controls replicating those of the CEV gunner station. The display module contains a standard large-screen color video monitor and a video tape player.
Natural 3D content on glasses-free light-field 3D cinema
NASA Astrophysics Data System (ADS)
Balogh, Tibor; Nagy, Zsolt; Kovács, Péter Tamás; Adhikarla, Vamsi K.
2013-03-01
This paper presents a complete framework for capturing, processing, and displaying free viewpoint video on a large-scale immersive light-field display. We present a combined hardware-software solution to visualize free viewpoint 3D video on a cinema-sized screen. The new glasses-free 3D projection technology can support a larger audience than existing autostereoscopic displays. We introduce and describe our new display system, including optical and mechanical design considerations, the capturing system and render cluster for producing the 3D content, and the various software modules driving the system. The indigenous display is the first of its kind, equipped with front-projection light-field HoloVizio technology controlling up to 63 MP. It has all the advantages of previous light-field displays and, in addition, allows a more flexible arrangement with a larger screen size, matching cinema or meeting room geometries, yet is simpler to set up. The software system makes it possible to show 3D applications in real time, besides natural content captured from dense camera arrangements as well as from sparse cameras covering a wider baseline. Our software system, running on the GPU-accelerated render cluster, can also visualize pre-recorded Multi-view Video plus Depth (MVD4) videos on this glasses-free light-field cinema system, interpolating and extrapolating missing views.
NASA Technical Reports Server (NTRS)
Jedlovec, Gary; Srikishen, Jayanthi; Edwards, Rita; Cross, David; Welch, Jon; Smith, Matt
2013-01-01
The use of collaborative scientific visualization systems for the analysis, visualization, and sharing of "big data" available from new high resolution remote sensing satellite sensors or four-dimensional numerical model simulations is propelling the wider adoption of ultra-resolution tiled display walls interconnected by high speed networks. These systems require a globally connected and well-integrated operating environment that provides persistent visualization and collaboration services. This abstract and subsequent presentation describes a new collaborative visualization system installed for NASA's Short-term Prediction Research and Transition (SPoRT) program at Marshall Space Flight Center and its use for Earth science applications. The system consists of a 3 x 4 array of 1920 x 1080 pixel thin bezel video monitors mounted on a wall in a scientific collaboration lab. The monitors are physically and virtually integrated into a 14' x 7' video display. The display of scientific data on the video wall is controlled by a single Alienware Aurora PC with a 2nd Generation Intel Core 4.1 GHz processor, 32 GB memory, and an AMD FirePro W600 video card with six mini DisplayPort connections. Six mini DisplayPort-to-dual-DVI cables are used to connect the 12 individual video monitors. The open source Scalable Adaptive Graphics Environment (SAGE) windowing and media control framework, running on top of the Ubuntu 12 Linux operating system, allows several users to simultaneously control the display and storage of high resolution still and moving graphics, in a variety of formats, on tiled display walls of any size. SAGE provides a common environment, or framework, enabling its users to access, display, and share a variety of data-intensive information.
This information can be digital-cinema animations, high-resolution images, high-definition video-teleconferences, presentation slides, documents, spreadsheets or laptop screens. SAGE is cross-platform, community-driven, open-source visualization and collaboration middleware that utilizes shared national and international cyberinfrastructure for the advancement of scientific research and education.
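The wall geometry quoted above can be sanity-checked in a few lines. Note the panel orientation (4 monitors across, 3 down) is an assumption; the abstract only states a 3 x 4 array:

```python
# Assumed layout: 4 landscape 1080p panels across, 3 down.
cols, rows = 4, 3
panel_w, panel_h = 1920, 1080

wall_w = cols * panel_w          # total horizontal pixels
wall_h = rows * panel_h          # total vertical pixels
total_mp = wall_w * wall_h / 1e6 # total resolution in megapixels

# Approximate pixel density for a 14-foot-wide wall.
ppi = wall_w / (14 * 12)

print(wall_w, wall_h, round(total_mp, 1), round(ppi, 1))
# 7680 3240 24.9 45.7
```

So the 12-panel wall delivers roughly 25 MP of display area, which is why a single PC with a six-output workstation card suffices to drive it.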
Display device-adapted video quality-of-experience assessment
NASA Astrophysics Data System (ADS)
Rehman, Abdul; Zeng, Kai; Wang, Zhou
2015-03-01
Today's viewers consume video content from a variety of connected devices, including smart phones, tablets, notebooks, TVs, and PCs. This imposes significant challenges for managing video traffic efficiently to ensure an acceptable quality-of-experience (QoE) for end users, because the perceptual quality of video content strongly depends on the properties of the display device and the viewing conditions. State-of-the-art full-reference objective video quality assessment algorithms do not take into account the combined impact of display device properties, viewing conditions, and video resolution. We performed a subjective study in order to understand the impact of these factors on perceptual video QoE. We also propose a full-reference video QoE measure, named SSIMplus, that provides real-time prediction of the perceptual quality of a video based on human visual system behaviors, video content characteristics (such as spatial and temporal complexity, and video resolution), display device properties (such as screen size, resolution, and brightness), and viewing conditions (such as viewing distance and angle). Experimental results show that the proposed algorithm outperforms state-of-the-art video quality measures in terms of accuracy and speed.
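A display- and distance-aware quality model needs the angular resolution of the viewing setup; a common intermediate quantity is pixels per degree of visual angle. A minimal sketch (the screen sizes and distances below are illustrative examples, not values from the paper):

```python
import math

def pixels_per_degree(screen_width_px, screen_width_m, viewing_distance_m):
    """How many pixels fall within one degree of visual angle at the
    given viewing distance. Perceptual models use this to decide which
    spatial distortions are actually visible to the viewer."""
    px_per_m = screen_width_px / screen_width_m
    # Width on screen subtended by one degree, centered on the line of sight.
    m_per_degree = 2 * viewing_distance_m * math.tan(math.radians(0.5))
    return px_per_m * m_per_degree

# A 1080p TV (1.2 m wide) viewed from 2.7 m vs. a phone (65 mm wide) at 0.3 m.
tv = pixels_per_degree(1920, 1.2, 2.7)
phone = pixels_per_degree(1080, 0.065, 0.30)
print(round(tv, 1), round(phone, 1))  # 75.4 87.0
```

The same encoded video thus presents noticeably different angular detail on different devices, which is why a device-agnostic quality score misranks streams.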
1983-12-01
Video Display Terminal (VDT): a cathode ray tube or gas plasma tube display screen terminal that allows the user to view data and override a value with a different data value.
NASA Technical Reports Server (NTRS)
Bolton, Matthew L.; Bass, Ellen J.; Comstock, James R., Jr.
2006-01-01
The evaluation of human-centered systems can be performed using a variety of different methodologies. This paper describes a human-centered systems evaluation methodology where participants watch 5-second non-interactive videos of a system in operation before supplying judgments and subjective measures based on the information conveyed in the videos. This methodology was used to evaluate the ability of different textures and fields of view to convey spatial awareness in synthetic vision systems (SVS) displays. It produced significant results for both judgment based and subjective measures. This method is compared to other methods commonly used to evaluate SVS displays based on cost, the amount of experimental time required, experimental flexibility, and the type of data provided.
Motion sickness, console video games, and head-mounted displays.
Merhi, Omar; Faugloire, Elise; Flanagan, Moira; Stoffregen, Thomas A
2007-10-01
We evaluated the nauseogenic properties of commercial console video games (i.e., games that are sold to the public) when presented through a head-mounted display. Anecdotal reports suggest that motion sickness may occur among players of contemporary commercial console video games. Participants played standard console video games using an Xbox game system. We varied the participants' posture (standing vs. sitting) and the game (two Xbox games). Participants played for up to 50 min and were asked to discontinue if they experienced any symptoms of motion sickness. Sickness occurred in all conditions, but it was more common during standing. During seated play there were significant differences in head motion between sick and well participants before the onset of motion sickness. The results indicate that commercial console video game systems can induce motion sickness when presented via a head-mounted display and support the hypothesis that motion sickness is preceded by instability in the control of seated posture. Potential applications of this research include changes in the design of console video games and recommendations for how such systems should be used.
Telemetry and Communication IP Video Player
NASA Technical Reports Server (NTRS)
OFarrell, Zachary L.
2011-01-01
Aegis Video Player is the name of the video-over-IP system for the Telemetry and Communications group of the Launch Services Program. Aegis' purpose is to display video streamed over a network connection to be viewed during launches. To accomplish this task, a VLC ActiveX plug-in was used in C# to provide the basic capabilities of video streaming. The program was then customized to be used during launches. The VLC plug-in can be configured programmatically to display a single stream, but for this project multiple streams needed to be accessed. To accomplish this, an easy-to-use, informative menu system was added to the program to enable users to quickly switch between videos. Other features were added to make the player more useful, such as watching multiple videos and watching a video in full screen.
Carlson, Jay; Kowalczuk, Jędrzej; Psota, Eric; Pérez, Lance C
2012-01-01
Robotic surgical platforms require vision feedback systems, which often consist of low-resolution, expensive, single-imager analog cameras. These systems are retooled for 3D display by simply doubling the cameras and outboard control units. Here, a fully-integrated digital stereoscopic video camera employing high-definition sensors and a class-compliant USB video interface is presented. This system can be used with low-cost PC hardware and consumer-level 3D displays for tele-medical surgical applications including military medical support, disaster relief, and space exploration.
Real-Time Acquisition and Display of Data and Video
NASA Technical Reports Server (NTRS)
Bachnak, Rafic; Chakinarapu, Ramya; Garcia, Mario; Kar, Dulal; Nguyen, Tien
2007-01-01
This paper describes the development of a prototype that takes in an analog National Television System Committee (NTSC) video signal generated by a video camera, along with data acquired by a microcontroller, and displays them in real time on a digital panel. An 8051 microcontroller is used to acquire the power dissipation of the display panel, room temperature, and camera zoom level. The paper describes the major hardware components and shows how they are interfaced into a functional prototype. Test data results are presented and discussed.
Video monitoring system for car seat
NASA Technical Reports Server (NTRS)
Elrod, Susan Vinz (Inventor); Dabney, Richard W. (Inventor)
2004-01-01
A video monitoring system for use with a child car seat has video camera(s) mounted in the car seat. The video images are wirelessly transmitted to a remote receiver/display encased in a portable housing that can be removably mounted in the vehicle in which the car seat is installed.
RAPID: A random access picture digitizer, display, and memory system
NASA Technical Reports Server (NTRS)
Yakimovsky, Y.; Rayfield, M.; Eskenazi, R.
1976-01-01
RAPID is a system capable of providing convenient digital analysis of video data in real-time. It has two modes of operation. The first allows for continuous digitization of an EIA RS-170 video signal. Each frame in the video signal is digitized and written in 1/30 of a second into RAPID's internal memory. The second mode leaves the content of the internal memory independent of the current input video. In both modes of operation the image contained in the memory is used to generate an EIA RS-170 composite video output signal representing the digitized image in the memory so that it can be displayed on a monitor.
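RAPID's two modes can be mimicked in a toy frame store. This is a behavioral sketch only; the real system is hardware operating on RS-170 timing, digitizing each frame within 1/30 of a second:

```python
import numpy as np

class FrameStore:
    """Sketch of a RAPID-style picture memory with two modes:
    'live'   - each incoming video frame overwrites the stored image
               (continuous digitization),
    'freeze' - the memory is held independent of the current input,
               while still being served out for display."""

    def __init__(self, height=480, width=640):
        self.memory = np.zeros((height, width), dtype=np.uint8)
        self.mode = "live"

    def on_frame(self, frame):
        if self.mode == "live":
            self.memory = frame.copy()

    def readout(self):
        # In hardware, this readout is regenerated as an EIA RS-170
        # composite video signal for display on a monitor.
        return self.memory

store = FrameStore(2, 2)
store.on_frame(np.full((2, 2), 10, np.uint8))
store.mode = "freeze"
store.on_frame(np.full((2, 2), 99, np.uint8))  # ignored while frozen
print(int(store.readout()[0, 0]))  # 10
```

The key property, as in RAPID, is that display readout always comes from the memory, so a frozen image keeps refreshing the monitor regardless of the live input.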
Wrap-Around Out-the-Window Sensor Fusion System
NASA Technical Reports Server (NTRS)
Fox, Jeffrey; Boe, Eric A.; Delgado, Francisco; Secor, James B.; Clark, Michael R.; Ehlinger, Kevin D.; Abernathy, Michael F.
2009-01-01
The Advanced Cockpit Evaluation System (ACES) includes communication, computing, and display subsystems, mounted in a van, that synthesize out-the-window views to approximate the views of the outside world as they would be seen from the cockpit of a crewed spacecraft or aircraft, or from the remote-control station of a ground vehicle or UAV (unmanned aerial vehicle). The system includes five flat-panel display units arranged approximately in a semicircle around an operator, like cockpit windows. The scene displayed on each panel represents the view through the corresponding cockpit window. Each display unit is driven by a personal computer equipped with a video-capture card that accepts live input from any of a variety of sensors (typically, visible and/or infrared video cameras). Software running in the computers blends the live video images with synthetic images that could be generated, for example, from heads-up-display outputs, waypoints, corridors, or satellite photographs of the same geographic region. Data from a Global Positioning System receiver and an inertial navigation system aboard the remote vehicle are used by the ACES software to keep the synthetic and live views in registration. If the live image were to fail, the synthetic scenes could still be displayed to maintain situational awareness.
77 FR 75617 - 36(b)(1) Arms Sales Notification
Federal Register 2010, 2011, 2012, 2013, 2014
2012-12-21
... transmittal, policy justification, and Sensitivity of Technology. Dated: December 18, 2012. Aaron Siegel... Processor Cabinets, 2 Video Wall Screen and Projector Systems, 46 Flat Panel Displays, and 2 Distributed Video Systems), 2 ship sets AN/SPQ-15 Digital Video Distribution Systems, 2 ship sets Operational...
Apparatus for monitoring crystal growth
Sachs, Emanual M.
1981-01-01
A system and method are disclosed for monitoring the growth of a crystalline body from a liquid meniscus in a furnace. The system provides an improved human/machine interface so as to reduce operator stress, strain, and fatigue while improving the conditions for observation and control of the growing process. The system comprises suitable optics for forming an image of the meniscus and body, wherein the image is anamorphic so that the entire meniscus can be viewed with good resolution in both the width and height dimensions. The system also comprises a video display for displaying the anamorphic image. The video display includes means for enhancing the contrast between any two contrasting points in the image. The video display also comprises a signal averager for averaging the intensity of at least one preselected portion of the image. The value of the average intensity can, in turn, be utilized to control the growth of the body. The system and method are also capable of observing and monitoring multiple processes.
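The signal-averaging step described in the abstract amounts to a windowed mean over the image. A minimal sketch (the region coordinates and intensities below are made up for illustration, not from the patent):

```python
import numpy as np

def region_mean_intensity(image, y0, y1, x0, x1):
    """Average intensity over a preselected window of the image: the
    kind of scalar a meniscus-monitoring loop could feed back to a
    crystal growth controller (e.g., to adjust pull rate or heater power)."""
    return float(image[y0:y1, x0:x1].mean())

img = np.zeros((8, 8), dtype=np.uint8)
img[2:4, 2:6] = 200  # bright band standing in for the meniscus highlight
avg = region_mean_intensity(img, 2, 4, 2, 6)
print(avg)  # 200.0
```

Tracking this scalar over time reduces the operator's task from continuous visual inspection to supervising a single control signal.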
Method of monitoring crystal growth
Sachs, Emanual M.
1982-01-01
A system and method are disclosed for monitoring the growth of a crystalline body from a liquid meniscus in a furnace. The system provides an improved human/machine interface so as to reduce operator stress, strain, and fatigue while improving the conditions for observation and control of the growing process. The system comprises suitable optics for forming an image of the meniscus and body, wherein the image is anamorphic so that the entire meniscus can be viewed with good resolution in both the width and height dimensions. The system also comprises a video display for displaying the anamorphic image. The video display includes means for enhancing the contrast between any two contrasting points in the image. The video display also comprises a signal averager for averaging the intensity of at least one preselected portion of the image. The value of the average intensity can, in turn, be utilized to control the growth of the body. The system and method are also capable of observing and monitoring multiple processes.
Live HDR video streaming on commodity hardware
NASA Astrophysics Data System (ADS)
McNamee, Joshua; Hatchett, Jonathan; Debattista, Kurt; Chalmers, Alan
2015-09-01
High Dynamic Range (HDR) video provides a step change in viewing experience, for example the ability to clearly see the soccer ball when it is kicked from the shadow of the stadium into sunshine. To achieve the full potential of HDR video, so-called true HDR, it is crucial that all the dynamic range that was captured is delivered to the display device and tone mapping is confined only to the display. Furthermore, to ensure widespread uptake of HDR imaging, it should be low cost and available on commodity hardware. This paper describes an end-to-end HDR pipeline for capturing, encoding and streaming high-definition HDR video in real-time using off-the-shelf components. All the lighting that is captured by HDR-enabled consumer cameras is delivered via the pipeline to any display, including HDR displays and even mobile devices with minimum latency. The system thus provides an integrated HDR video pipeline that includes everything from capture to post-production, archival and storage, compression, transmission, and display.
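Confining tone mapping to the display implies a display-side compression operator applied to the full delivered dynamic range. As one standard example (the global Reinhard operator, shown here for illustration and not necessarily the one used in this pipeline):

```python
import numpy as np

def reinhard_tonemap(hdr, key=0.18, eps=1e-6):
    """Global Reinhard tone mapping: scale scene luminance by the image
    'key' relative to its log-average, then compress with L / (1 + L),
    which maps any positive luminance into [0, 1) for the display."""
    log_mean = np.exp(np.mean(np.log(hdr + eps)))  # log-average luminance
    scaled = key * hdr / log_mean
    return scaled / (1.0 + scaled)

# Luminances spanning six orders of magnitude, as HDR capture can produce.
hdr = np.array([0.01, 1.0, 100.0, 10000.0])
ldr = reinhard_tonemap(hdr)
print(ldr)
```

Because the operator is monotone, relative brightness ordering is preserved while the full captured range is squeezed into the display's output range, which is what lets each target display tone map the same true-HDR stream to its own capabilities.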
Psychophysical Comparison Of A Video Display System To Film By Using Bone Fracture Images
NASA Astrophysics Data System (ADS)
Seeley, George W.; Stempski, Mark; Roehrig, Hans; Nudelman, Sol; Capp, M. P.
1982-11-01
This study investigated the possibility of using a video display system instead of film for radiological diagnosis. Also investigated were the relationships between characteristics of the system and the observer's accuracy level. Radiologists were used as observers. Thirty-six clinical bone fractures were separated into two matched sets of equal difficulty. The difficulty parameters and ratings were defined by a panel of expert bone radiologists at the Arizona Health Sciences Center, Radiology Department. These two sets of fracture images were then matched with verifiably normal images using parameters such as film type, angle of view, size, portion of anatomy, the film's density range, and the patient's age and sex. The two sets of images were then displayed, using a counterbalanced design, to each of the participating radiologists for diagnosis. Whenever a response was given to a video image, the radiologist used enhancement controls to "window in" on the grey levels of interest. During the TV phase, the radiologist was required to record the settings of the calibrated controls of the image enhancer during interpretation. At no time did any single radiologist see the same film in both modes. The study was designed so that a standard analysis of variance would show the effects of viewing mode (film vs TV), the effects due to stimulus set, and any interactions with observers. A signal detection analysis of observer performance was also performed. Results indicate that the TV display system is almost as good as the view box display; an average of only two more errors were made on the TV display. The difference between the systems has been traced to four observers who had poor accuracy on a small number of films viewed on the TV display. 
This information is now being correlated with the video system's signal-to-noise ratio (SNR), signal transfer function (STF), and resolution measurements, to obtain information on the basic display and enhancement requirements for a video-based radiologic system. Due to time constraints the results are not included here. The complete results of this study will be reported at the conference.
Secure Video Surveillance System Acquisition Software
DOE Office of Scientific and Technical Information (OSTI.GOV)
2009-12-04
The SVSS Acquisition Software collects and displays video images from two cameras through a VPN and stores the images on a collection controller. The software allows a user to enter a time window to display up to 2.5 hours of video for review. It collects images from the cameras at a rate of one image per second and automatically deletes images older than 3 hours. The code runs in a Linux environment and can be run in a virtual machine on Windows XP. The Sandia software integrates several COTS software packages to build the video review system.
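The retention policy described above (one image per second, a review window of up to 2.5 hours, automatic deletion after 3 hours) can be sketched as a rolling frame store. The class and names below are illustrative, not taken from the SVSS code:

```python
from collections import deque

RETENTION_S = 3 * 3600        # delete frames older than 3 hours
CAPTURE_PERIOD_S = 1          # one image per camera per second

class FrameStore:
    """Rolling store mimicking the collector's retention policy."""
    def __init__(self):
        self.frames = deque()              # (timestamp, camera_id) pairs

    def add(self, t, camera_id):
        self.frames.append((t, camera_id))
        self.prune(t)

    def prune(self, now):
        # drop everything older than the retention limit
        while self.frames and now - self.frames[0][0] > RETENTION_S:
            self.frames.popleft()

    def window(self, start, end):
        # frames available for a user-selected review window (<= 2.5 h)
        return [f for f in self.frames if start <= f[0] <= end]

store = FrameStore()
for t in range(0, 4 * 3600, CAPTURE_PERIOD_S):   # simulate 4 hours of capture
    store.add(t, "cam1")
```

After four simulated hours, only the most recent three hours of frames remain, so any 2.5-hour review window the user requests is guaranteed to be covered.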
Obstacles encountered in the development of the low vision enhancement system.
Massof, R W; Rickman, D L
1992-01-01
The Johns Hopkins Wilmer Eye Institute and the NASA Stennis Space Center are collaborating on the development of a new high technology low vision aid called the Low Vision Enhancement System (LVES). The LVES consists of a binocular head-mounted video display system, video cameras mounted on the head-mounted display, and real-time video image processing in a system package that is battery powered and portable. Through a phased development approach, several generations of the LVES can be made available to the patient in a timely fashion. This paper describes the LVES project with major emphasis on technical problems encountered or anticipated during the development process.
Storing Data and Video on One Tape
NASA Technical Reports Server (NTRS)
Nixon, J. H.; Cater, J. P.
1985-01-01
Microprocessor-based system originally developed for anthropometric research merges digital data with video images for storage on video cassette recorder. Combined signals later retrieved and displayed simultaneously on television monitor. System also extracts digital portion of stored information and transfers it to solid-state memory.
Design of video processing and testing system based on DSP and FPGA
NASA Astrophysics Data System (ADS)
Xu, Hong; Lv, Jun; Chen, Xi'ai; Gong, Xuexia; Yang, Chen'na
2007-12-01
Based on a high-speed Digital Signal Processor (DSP) and a Field Programmable Gate Array (FPGA), a miniaturized, low-power video capture, processing, and display system is presented. The system uses a triple-buffering scheme for capture and display, so the application can always obtain a fresh buffer without waiting. The DSP provides image-processing capability and is used to detect the boundary of a workpiece's image. A video graticule (on-screen graduation) is used to aim at the position to be tested, which also enhances the system's flexibility. Character superposition, implemented on the DSP, displays the test result on the screen in text form. The system processes image information in real time, ensures test precision, and helps improve product quality and quality management.
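The triple-buffering scheme mentioned above can be sketched as follows. This is a generic illustration of the technique with hypothetical names, not the DSP/FPGA implementation itself:

```python
import threading

class TripleBuffer:
    """Three rotating buffers: capture writes one, display reads one,
    and a third 'ready' slot means neither side ever waits for the other."""
    def __init__(self):
        self.write = 0          # buffer the capture side fills
        self.ready = 1          # most recently completed frame
        self.read = 2           # buffer the display side shows
        self.fresh = False
        self.lock = threading.Lock()
        self.bufs = [None, None, None]

    def publish(self, frame):
        # capture side: fill the write buffer, then swap write <-> ready
        self.bufs[self.write] = frame
        with self.lock:
            self.write, self.ready = self.ready, self.write
            self.fresh = True

    def acquire(self):
        # display side: take the latest ready frame without blocking capture
        with self.lock:
            if self.fresh:
                self.read, self.ready = self.ready, self.read
                self.fresh = False
        return self.bufs[self.read]

tb = TripleBuffer()
tb.publish("frame-1")
tb.publish("frame-2")      # overwrites: display never sees a stale backlog
latest = tb.acquire()      # always the newest complete frame
```

Because the swaps are pointer exchanges under a short lock, the capture side never stalls on the display side; this is why the paper's application "can always get a new buffer without waiting."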
NASA Technical Reports Server (NTRS)
Deen, Robert G.; Andres, Paul M.; Mortensen, Helen B.; Parizher, Vadim; McAuley, Myche; Bartholomew, Paul
2009-01-01
The XVD [X-Windows VICAR (video image communication and retrieval) Display] computer program offers an interactive display of VICAR and PDS (planetary data systems) images. It is designed to efficiently display multiple-GB images and runs on Solaris, Linux, or Mac OS X systems using X-Windows.
Mesoscale and severe storms (Mass) data management and analysis system
NASA Technical Reports Server (NTRS)
Hickey, J. S.; Karitani, S.; Dickerson, M.
1984-01-01
Progress on the Mesoscale and Severe Storms (MASS) data management and analysis system is described. An interactive atmospheric database management software package that converts four types of data (sounding, single level, grid, image) into standard random-access formats has been implemented and integrated with the MASS AVE80 Series general-purpose plotting and graphics display data analysis software package. The interactive analysis and display graphics software package (AVE80), which analyzes large volumes of conventional and satellite-derived meteorological data, has been enhanced to provide imaging/color graphics display using color video hardware integrated into the MASS computer system. Local and remote smart-terminal capability is provided by installing APPLE III computer systems in individual scientists' offices and integrating them with the MASS system, thus providing color video, graphics, and character display of the four data types.
Increased ISR operator capability utilizing a centralized 360° full motion video display
NASA Astrophysics Data System (ADS)
Andryc, K.; Chamberlain, J.; Eagleson, T.; Gottschalk, G.; Kowal, B.; Kuzdeba, P.; LaValley, D.; Myers, E.; Quinn, S.; Rose, M.; Rusiecki, B.
2012-06-01
In many situations, the difference between success and failure comes down to taking the right actions quickly. While the myriad electronic sensors available today can provide data quickly, they can also overload the operator; only a contextualized, centralized display of information with an intuitive human interface can support the quick and effective decisions needed. If these decisions are to result in quick actions, the operator must be able to understand all of the data describing the environment. In this paper we present a novel approach to contextualizing multi-sensor data on a real-time, full-motion-video 360-degree imaging display. The system described could function as a primary display system for command and control in security, military, and observation posts. It can process, and enable interactive control of, multiple other sensor systems, and it enhances the value of those sensors by overlaying their information on a panorama of the surroundings. It can also interface to other systems, including auxiliary electro-optical systems, aerial video, contact management, Hostile Fire Indicators (HFI), and Remote Weapon Stations (RWS).
NASA Astrophysics Data System (ADS)
Lee, Seokhee; Lee, Kiyoung; Kim, Man Bae; Kim, JongWon
2005-11-01
In this paper, we propose a design for a multi-view stereoscopic HD video transmission system based on MPEG-21 Digital Item Adaptation (DIA). It focuses on compatibility and scalability to meet various user preferences and terminal capabilities. A large variety of multi-view 3D HD video types exist, depending on the methods of acquisition, display, and processing. Following the MPEG-21 DIA framework, the multi-view stereoscopic HD video is adapted according to user feedback: a user can be served multi-view stereoscopic video that matches his or her preferences and terminal capabilities. In our preliminary prototype, we verify that the proposed design can support two different types of display device (stereoscopic and auto-stereoscopic) and can switch between two available viewpoints.
NASA Astrophysics Data System (ADS)
Deckard, Michael; Ratib, Osman M.; Rubino, Gregory
2002-05-01
Our project was to design and implement a ceiling-mounted multi-monitor display unit for use in a high-field MRI surgical suite. The system is designed to simultaneously display images and data from four different digital and/or analog sources with minimal interference from the adjacent high magnetic field, minimal signal-to-noise/artifact contribution to the MRI images, and compliance with codes and regulations for the sterile neurosurgical environment. Provisions were also made to accommodate the importing and exporting of video information via PACS and remote processing/display for clinical and educational uses. Commercial fiber-optic receivers/transmitters were implemented along with supporting video processing and distribution equipment to solve the video communication problem. A new generation of high-resolution color flat-panel displays was selected for the project. A custom-made monitor mount and in-suite electronics enclosure were designed and constructed at UCLA. Difficulties in implementing an isolated AC power system are discussed and a workaround solution presented.
Multilocation Video Conference By Optical Fiber
NASA Astrophysics Data System (ADS)
Gray, Donald J.
1982-10-01
An experimental system that permits interconnection of many offices in a single video conference is described. Video images transmitted to conference participants are selected by the conference chairman and switched by a microprocessor-controlled video switch. Speakers can, at their choice, transmit their own images or images of graphics they wish to display. Users are connected to the Switching Center by optical fiber subscriber loops that carry analog video, digitized telephone, data and signaling. The same system also provides user-selectable distribution of video program and video library material. Experience in the operation of the conference system is discussed.
A generic flexible and robust approach for intelligent real-time video-surveillance systems
NASA Astrophysics Data System (ADS)
Desurmont, Xavier; Delaigle, Jean-Francois; Bastide, Arnaud; Macq, Benoit
2004-05-01
In this article we present a generic, flexible and robust approach for an intelligent real-time video-surveillance system. A previous version of the system was presented in [1]. The goal of these advanced tools is to help operators by detecting events of interest in visual scenes, highlighting alarms, and computing statistics. The proposed system is a multi-camera platform able to handle different standards of video input (composite, IP, IEEE 1394) and can compress (MPEG-4), store, and display them. The platform also integrates advanced video analysis tools, such as motion detection, segmentation, tracking, and interpretation. The architecture is optimized to play back, display, and process video flows efficiently for video-surveillance applications. The implementation is distributed on a scalable computer cluster based on Linux and an IP network, and relies on POSIX threads for multitasking scheduling. Data flows are transmitted between the different modules using multicast technology under the control of a TCP-based command network (e.g., for bandwidth occupation control). We report some results, show the potential use of such a flexible system in third-generation video-surveillance systems, and illustrate its interest in a real case study of indoor surveillance.
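Of the video analysis tools listed, motion detection is the simplest to illustrate. Below is a minimal sketch using per-pixel frame differencing, a common baseline technique; the thresholds and names are illustrative, not taken from the system described:

```python
def motion_mask(prev, curr, threshold=25):
    """Per-pixel absolute frame difference: 1 where intensity changed
    by more than `threshold`, 0 elsewhere (grayscale frames as 2-D lists)."""
    return [[1 if abs(c - p) > threshold else 0
             for p, c in zip(prow, crow)]
            for prow, crow in zip(prev, curr)]

def motion_detected(prev, curr, min_pixels=2):
    # declare an event only if enough pixels changed (rejects sensor noise)
    mask = motion_mask(prev, curr)
    return sum(map(sum, mask)) >= min_pixels

frame_a = [[10, 10, 10], [10, 10, 10]]        # static background
frame_b = [[10, 200, 10], [10, 200, 200]]     # a bright object entered
```

A production system would follow this with segmentation of the changed region and tracking over time, as the abstract indicates.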
Xiao, Yan; Dexter, Franklin; Hu, Peter; Dutton, Richard P
2008-02-01
On the day of surgery, real-time information of both room occupancy and activities within the operating room (OR) is needed for management of staff, equipment, and unexpected events. A status display system showed color OR video with controllable image quality and showed times that patients entered and exited each OR (obtained automatically). The system was installed and its use was studied in a 6-OR trauma suite and at four locations in a 19-OR tertiary suite. Trauma staff were surveyed for their perceptions of the system. Evidence of staff acceptance of distributed OR video included its operational use for >3 yr in the two suites, with no administrative complaints. Individuals of all job categories used the video. Anesthesiologists were the most frequent users for more than half of the days (95% confidence interval [CI] >50%) in the tertiary ORs. The OR charge nurses accessed the video mostly early in the day when the OR occupancy was high. In comparison (P < 0.001), anesthesiologists accessed it mostly at the end of the workday when occupancy was declining and few cases were starting. Of all 30-min periods during which the video was accessed in the trauma suite, many accesses (95% CI >42%) occurred in periods with no cases starting or ending (i.e., the video was used during the middle of cases). The three stated reasons for using video that had median surveyed responses of "very useful" were "to see if cases are finished," "to see if a room is ready," and "to see when cases are about to finish." Our nurses and physicians both accepted and used distributed OR video as it provided useful information, regardless of whether real-time display of milestones was available (e.g., through anesthesia information system data).
Rapid Damage Assessment. Volume II. Development and Testing of Rapid Damage Assessment System.
1981-02-01
The abstract excerpt preserves only fragments of a hardware parameter table and component list: camera line rate 732.4 lines/s; pixels per line: 1728 video, 314 blank, 4 line number (binary), 2 run number (BCD), 2048 total; pixel resolution 8 bits. The image processor system consists of an LSI-11 microprocessor, a VDI-200 video display processor, an FD-2 dual floppy diskette subsystem, and an FT-1 function key-trackball module.
Computer Graphics in Research: Some State-of-the-Art Systems
ERIC Educational Resources Information Center
Reddy, R.; And Others
1975-01-01
A description is given of the structure and functional characteristics of three types of interactive computer graphics systems developed by the Department of Computer Science at Carnegie-Mellon: a high-speed programmable display capable of displaying 50,000 short vectors flicker-free; a shaded-color video display for the display of gray-scale…
Sequential color video to parallel color video converter
NASA Technical Reports Server (NTRS)
1975-01-01
The engineering design, development, breadboard fabrication, test, and delivery of a breadboard field sequential color video to parallel color video converter is described. The converter was designed for use onboard a manned space vehicle to eliminate a flickering TV display picture and to reduce the weight and bulk of previous ground conversion systems.
Ethernet direct display: a new dimension for in-vehicle video connectivity solutions
NASA Astrophysics Data System (ADS)
Rowley, Vincent
2009-05-01
To improve the local situational awareness (LSA) of personnel in light or heavily armored vehicles, most military organizations recognize the need to equip their fleets with high-resolution digital video systems. Several related upgrade programs are already in progress and, almost invariably, COTS IP/Ethernet is specified as the underlying transport mechanism. The high bandwidth, long reach, networking flexibility, scalability, and affordability of IP/Ethernet make it an attractive choice. There are significant technical challenges, however, in achieving high-performance, real-time video connectivity over the IP/Ethernet platform. As an early pioneer in performance-oriented video systems based on IP/Ethernet, Pleora Technologies has developed core expertise in meeting these challenges and applied a singular focus to innovating within the required framework. The company's field-proven iPORT Video Connectivity Solution is deployed successfully in thousands of real-world applications in medical, military, and manufacturing operations. Pleora's latest innovation is eDisplay, a small-footprint, low-power, highly efficient IP engine that acquires video from an Ethernet connection and sends it directly to a standard HDMI/DVI monitor for real-time viewing; more costly PCs are not required. This paper describes Pleora's eDisplay IP engine in more detail. It demonstrates how, in concert with other elements of the end-to-end iPORT Video Connectivity Solution, the engine can be used to build standards-based, in-vehicle video systems that increase the safety and effectiveness of military personnel while fully leveraging the advantages of the low-cost COTS IP/Ethernet platform.
Analysis and Selection of a Remote Docking Simulation Visual Display System
NASA Technical Reports Server (NTRS)
Shields, N., Jr.; Fagg, M. F.
1984-01-01
The development of a remote docking simulation visual display system is examined. Video system and operator performance are discussed as well as operator command and control requirements and a design analysis of the reconfigurable work station.
Affordable multisensor digital video architecture for 360° situational awareness displays
NASA Astrophysics Data System (ADS)
Scheiner, Steven P.; Khan, Dina A.; Marecki, Alexander L.; Berman, David A.; Carberry, Dana
2011-06-01
One of the major challenges facing today's military ground combat vehicle operations is the ability to achieve and maintain full-spectrum situational awareness while under armor (i.e., closed hatch). Basic tasks such as driving, maintaining local situational awareness, surveillance, and targeting will require a high-density array of real-time information to be processed, distributed, and presented to the vehicle operators and crew in near real time (i.e., with low latency). Advances in display and sensor technologies are providing unprecedented opportunities to supply large amounts of high-fidelity imagery and video to the vehicle operators and crew in real time. To fully realize the advantages of these emerging display and sensor technologies, an underlying digital architecture must be developed that is capable of processing these large amounts of video and data from separate sensor systems and distributing them simultaneously within the vehicle to multiple vehicle operators and crew. This paper examines the systems and software engineering efforts required to overcome these challenges and addresses development of an affordable, integrated digital video architecture. The approaches evaluated will give both current and future ground combat vehicle systems the flexibility to readily adopt emerging display and sensor technologies, while optimizing the Warfighter Machine Interface (WMI), minimizing lifecycle costs, and improving the survivability of the vehicle crew working in closed-hatch systems during complex ground combat operations.
Markerless client-server augmented reality system with natural features
NASA Astrophysics Data System (ADS)
Ning, Shuangning; Sang, Xinzhu; Chen, Duo
2017-10-01
A markerless client-server augmented reality system is presented. In this research, the more widespread and mature virtual reality head-mounted display is adopted to assist the implementation of augmented reality. The viewer is shown an image in front of their eyes on the head-mounted display. The front-facing camera captures video signals into the workstation, where the generated virtual scene is merged with the outside-world information received from the camera; the integrated video is then sent to the helmet display system. The distinguishing feature and novelty is that augmented reality is realized with natural features instead of a marker, which addresses the marker's limitations: it is restricted to black and white, is unsuited to varying environmental conditions, and in particular fails when the marker is partially blocked. Further, 3D stereoscopic perception of the virtual animation model is achieved. A high-speed, stable native socket communication method is adopted for transmission of the key video stream data, which reduces the computational burden of the system.
Objective video presentation QoE predictor for smart adaptive video streaming
NASA Astrophysics Data System (ADS)
Wang, Zhou; Zeng, Kai; Rehman, Abdul; Yeganeh, Hojatollah; Wang, Shiqi
2015-09-01
How to deliver videos to consumers over the network for optimal quality-of-experience (QoE) has been the central goal of modern video delivery services. Surprisingly, despite the large volume of video delivered every day through systems attempting to improve visual QoE, the actual QoE of end consumers is not properly assessed, let alone used as the key factor in critical decisions at the video hosting, network, and receiving sites. Real-world video streaming systems typically use bitrate as the main indicator of video presentation quality, but encoding different content at the same bitrate can result in drastically different visual QoE, which is further affected by the display device and viewing conditions of each individual consumer who receives the video. To correct this, we have to put QoE back in the driver's seat and redesign video delivery systems. A major challenge toward this goal is finding an objective video presentation QoE predictor that is accurate, fast, easy to use, display-device adaptive, and meaningful across resolutions and content. We propose the newly developed SSIMplus index (https://ece.uwaterloo.ca/~z70wang/research/ssimplus/) for this role. We demonstrate that, based on SSIMplus, one can develop a smart adaptive video streaming strategy that yields much smoother visual QoE than is possible with existing adaptive-bitrate streaming approaches. Furthermore, SSIMplus finds many more applications: in live and file-based quality monitoring, in benchmarking video encoders and transcoders, and in guiding network resource allocation.
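The core idea, selecting renditions by predicted QoE rather than by bitrate alone, can be sketched as follows. The ladder and QoE scores below are hypothetical placeholders; in the authors' proposal they would come from a display-adaptive predictor such as SSIMplus:

```python
def pick_rendition(renditions, bandwidth_kbps):
    """Choose the encoding that maximizes predicted QoE among those
    fitting in the available bandwidth: bitrate is only a constraint,
    not the quality metric itself."""
    feasible = [r for r in renditions if r["kbps"] <= bandwidth_kbps]
    if not feasible:
        # nothing fits: fall back to the cheapest stream
        return min(renditions, key=lambda r: r["kbps"])
    return max(feasible, key=lambda r: r["qoe"])

# Hypothetical encoding ladder; per-device QoE scores would come from
# a predictor evaluated for this viewer's display and viewing condition.
ladder = [
    {"name": "480p",  "kbps": 1200, "qoe": 62},
    {"name": "720p",  "kbps": 2800, "qoe": 81},
    {"name": "1080p", "kbps": 5300, "qoe": 88},
]
choice = pick_rendition(ladder, bandwidth_kbps=3000)
```

Because the QoE column changes per display device, the same bandwidth can yield different "best" choices on a phone and on a large TV, which is exactly what a bitrate-only ladder cannot express.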
The Video PATSEARCH System: An Interview with Peter Urbach.
ERIC Educational Resources Information Center
Videodisc/Videotext, 1982
1982-01-01
The Video PATSEARCH system consists of a microcomputer with a special keyboard and two display screens which accesses the PATSEARCH database of United States government patents on the Bibliographic Retrieval Services (BRS) search system. The microcomputer retrieves text from BRS and matching graphics from an analog optical videodisc. (Author/JJD)
NASA Technical Reports Server (NTRS)
Diner, Daniel B. (Inventor); Venema, Steven C. (Inventor)
1991-01-01
A system for real-time video image display for robotics or remote-vehicle teleoperation is described that has at least one robot arm or remotely operated vehicle controlled by an operator through hand-controllers, and one or more television cameras and optional lighting element. The system has at least one television monitor for display of a television image from a selected camera and the ability to select one of the cameras for image display. Graphics are generated with icons of cameras and lighting elements for display surrounding the television image to provide the operator information on: the location and orientation of each camera and lighting element; the region of illumination of each lighting element; the viewed region and range of focus of each camera; which camera is currently selected for image display for each monitor; and when the controller coordinate for said robot arms or remotely operated vehicles have been transformed to correspond to coordinates of a selected or nonselected camera.
Composite video and graphics display for camera viewing systems in robotics and teleoperation
NASA Technical Reports Server (NTRS)
Diner, Daniel B. (Inventor); Venema, Steven C. (Inventor)
1993-01-01
A system for real-time video image display for robotics or remote-vehicle teleoperation is described that has at least one robot arm or remotely operated vehicle controlled by an operator through hand-controllers, and one or more television cameras and optional lighting element. The system has at least one television monitor for display of a television image from a selected camera and the ability to select one of the cameras for image display. Graphics are generated with icons of cameras and lighting elements for display surrounding the television image to provide the operator information on: the location and orientation of each camera and lighting element; the region of illumination of each lighting element; the viewed region and range of focus of each camera; which camera is currently selected for image display for each monitor; and when the controller coordinate for said robot arms or remotely operated vehicles have been transformed to correspond to coordinates of a selected or nonselected camera.
NASA Astrophysics Data System (ADS)
Starks, Michael R.
1990-09-01
A variety of low-cost devices for capturing, editing, and displaying field-sequential 60-cycle stereoscopic video have recently been marketed by 3D TV Corp. and others. When properly used, they give very high-quality images with most consumer and professional equipment. Our stereoscopic multiplexers for creating and editing field-sequential video in NTSC or component formats (SVHS, Betacam, RGB), together with our Home 3D Theater system employing LCD eyeglasses, have made 3D movies and television available to a large audience.
Video PATSEARCH: A Mixed-Media System.
ERIC Educational Resources Information Center
Schulman, Jacque-Lynne
1982-01-01
Describes a videodisc-based information display system in which a computer terminal is used to search the online PATSEARCH database from a remote host with local microcomputer control to select and display drawings from the retrieved records. System features and system components are discussed and criteria for system evaluation are presented.…
Standardized access, display, and retrieval of medical video
NASA Astrophysics Data System (ADS)
Bellaire, Gunter; Steines, Daniel; Graschew, Georgi; Thiel, Andreas; Bernarding, Johannes; Tolxdorff, Thomas; Schlag, Peter M.
1999-05-01
The system presented here enhances documentation and data-secured second-opinion facilities by integrating video sequences into DICOM 3.0. We present an implementation of a medical video server extended by a DICOM interface. Security mechanisms conforming to DICOM are integrated to enable secure Internet access. Digital video documents of diagnostic and therapeutic procedures should be examined with regard to the clip length and size necessary for second opinion yet manageable with today's hardware. Image sources relevant for this paper include the 3D laparoscope, 3D surgical microscope, 3D open-surgery camera, synthetic video, and monoscopic endoscopes. The global DICOM video concept and three special workplaces for distinct applications are described. Additionally, an approach is presented for analyzing the motion of the endoscopic camera for future automatic video cutting. Digital stereoscopic video sequences (DSVS) are especially in demand for surgery, so DSVS are also integrated into the DICOM video concept. Results are presented describing the suitability of stereoscopic display techniques for the operating room.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-01-20
... INTERNATIONAL TRADE COMMISSION [DN 2871] Certain Video Displays and Products Using and Containing... Trade Commission has received a complaint entitled In Re Certain Video Displays and Products Using and... for importation, and the sale within the United States after importation of certain video displays and...
Display system employing acousto-optic tunable filter
NASA Technical Reports Server (NTRS)
Lambert, James L. (Inventor)
1995-01-01
An acousto-optic tunable filter (AOTF) is employed to generate a display by driving the AOTF with a RF electrical signal comprising modulated red, green, and blue video scan line signals and scanning the AOTF with a linearly polarized, pulsed light beam, resulting in encoding of color video columns (scan lines) of an input video image into vertical columns of the AOTF output beam. The AOTF is illuminated periodically as each acoustically-encoded scan line fills the cell aperture of the AOTF. A polarizing beam splitter removes the unused first order beam component of the AOTF output and, if desired, overlays a real world scene on the output plane. Resolutions as high as 30,000 lines are possible, providing holographic display capability.
Display system employing acousto-optic tunable filter
NASA Technical Reports Server (NTRS)
Lambert, James L. (Inventor)
1993-01-01
An acousto-optic tunable filter (AOTF) is employed to generate a display by driving the AOTF with a RF electrical signal comprising modulated red, green, and blue video scan line signals and scanning the AOTF with a linearly polarized, pulsed light beam, resulting in encoding of color video columns (scan lines) of an input video image into vertical columns of the AOTF output beam. The AOTF is illuminated periodically as each acoustically-encoded scan line fills the cell aperture of the AOTF. A polarizing beam splitter removes the unused first order beam component of the AOTF output and, if desired, overlays a real world scene on the output plane. Resolutions as high as 30,000 lines are possible, providing holographic display capability.
Virtual displays for 360-degree video
NASA Astrophysics Data System (ADS)
Gilbert, Stephen; Boonsuk, Wutthigrai; Kelly, Jonathan W.
2012-03-01
In this paper we describe a novel approach for comparing users' spatial cognition when using different depictions of 360- degree video on a traditional 2D display. By using virtual cameras within a game engine and texture mapping of these camera feeds to an arbitrary shape, we were able to offer users a 360-degree interface composed of four 90-degree views, two 180-degree views, or one 360-degree view of the same interactive environment. An example experiment is described using these interfaces. This technique for creating alternative displays of wide-angle video facilitates the exploration of how compressed or fish-eye distortions affect spatial perception of the environment and can benefit the creation of interfaces for surveillance and remote system teleoperation.
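The mapping from a 360-degree panorama onto equal side-by-side views can be sketched with simple angle arithmetic. This illustrative helper (not code from the study) locates a world yaw angle within one of N views:

```python
def viewport_for_yaw(yaw_deg, num_views=4):
    """Map a world yaw angle to (view index, horizontal offset within view)
    for an interface splitting 360 degrees into equal side-by-side views."""
    fov = 360.0 / num_views                 # e.g. 4 views of 90 degrees each
    yaw = yaw_deg % 360.0                   # normalize into [0, 360)
    index = int(yaw // fov)                 # which view contains this angle
    offset = (yaw - index * fov) / fov      # 0.0 = left edge, 1.0 = right edge
    return index, offset

view, x = viewport_for_yaw(135.0)           # lands in the second 90-degree view
```

Changing `num_views` to 2 or 1 reproduces the paper's 180-degree and single-360-degree conditions, with the fish-eye compression of each view growing as `num_views` shrinks.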
Holo-Chidi video concentrator card
NASA Astrophysics Data System (ADS)
Nwodoh, Thomas A.; Prabhakar, Aditya; Benton, Stephen A.
2001-12-01
The Holo-Chidi Video Concentrator Card is a frame buffer for the Holo-Chidi holographic video processing system. Holo-Chidi was designed at the MIT Media Laboratory for real-time computation of computer-generated holograms and the subsequent display of the holograms at video frame rates. The Holo-Chidi system is made of two sets of cards: the Processor cards and the Video Concentrator Cards (VCCs). The Processor cards are used for hologram computation, data archival/retrieval from a host system, and higher-level control of the VCCs. The VCC formats computed holographic data from multiple hologram-computing Processor cards, converting the digital data to analog form to feed the acousto-optic modulators of the Media Lab's Mark-II holographic display system. The Video Concentrator Card comprises: a High-Speed I/O (HSIO) interface through which data is transferred from the hologram-computing Processor cards; a set of FIFOs and video RAM used as buffers for the hololines being displayed; a one-chip integrated microprocessor and peripheral combination that handles communication with other VCCs and furnishes the card with a USB port; a co-processor that controls display data formatting; and D-to-A converters that convert digital fringes to analog form. The co-processor is implemented with an SRAM-based FPGA with over 500,000 gates and controls all the signals needed to format the data from the multiple Processor cards into the format required by Mark-II. A VCC has three HSIO ports through which up to 500 Megabytes of computed holographic data can flow from the Processor cards to the VCC per second. A Holo-Chidi system with three VCCs has enough frame-buffering capacity to hold up to thirty-two 36-Megabyte hologram frames at a time. Pre-computed holograms may also be loaded into the VCC from a host computer through the low-speed USB port.
Both the microprocessor and the co-processor in the VCC can access the main system memory used to store control programs and data for the VCC. The card also generates the control signals used by the scanning mirrors of Mark-II. In this paper we discuss the design of the VCC and its implementation in the Holo-Chidi system.
Video stereo-laparoscopy system
NASA Astrophysics Data System (ADS)
Xiang, Yang; Hu, Jiasheng; Jiang, Huilin
2006-01-01
Minimally invasive surgery (MIS) has contributed significantly to patient care by reducing the morbidity associated with more invasive procedures. MIS procedures have become standard treatment for gallbladder disease and some abdominal malignancies. The imaging system has played a major role in the evolving field of MIS. The image must have good resolution and large magnification and, in particular, must provide a depth cue while remaining flicker-free at a suitable brightness. The video stereo-laparoscopy system can meet these demands of surgeons. This paper introduces a 3D video laparoscope with these characteristics: field frequency 100 Hz, depth space 150 mm, and resolution 10 lp/mm. The working principle of the system is introduced in detail, and the optical system and time-division stereo-display system are described briefly. The system has a focusing imaging lens that forms an image on the CCD chip; the optical signal is converted to a video signal, digitized by the A/D stage of the image processing system, and the polarized images are then displayed on the monitor screen through liquid crystal shutters. Wearing polarized glasses, doctors can watch a flicker-free 3D image of the tissue or organ. The 3D video laparoscope system has been applied in the MIS field and praised by doctors. Compared with a traditional 2D video laparoscopy system, it offers merits such as reduced surgery time, fewer surgical problems, and shorter training time.
Use of videotape for off-line viewing of computer-assisted radionuclide cardiology studies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thrall, J.H.; Pitt, B.; Marx, R.S.
1978-02-01
Videotape offers an inexpensive method for off-line viewing of dynamic radionuclide cardiac studies. Two approaches to videotaping have been explored and demonstrated to be feasible. In the first, a video camera in conjunction with a cassette-type recorder is used to record from the computer display scope. Alternatively, for computer systems already linked to video display units, the video signal can be routed directly to the recorder. Acceptance and use of tracer cardiology studies will be enhanced by increased availability of the studies for clinical review. Videotape offers an inexpensive flexible means of achieving this.
Help for the Visually Impaired
NASA Technical Reports Server (NTRS)
1995-01-01
The Low Vision Enhancement System (LVES) is a video headset that offers people with low vision a view of their surroundings equivalent to the image on a five-foot television screen four feet from the viewer. It will not make the blind see, but for many people with low vision, it eases everyday activities such as reading, watching TV, and shopping. LVES was developed over almost a decade of cooperation between Stennis Space Center, the Wilmer Eye Institute of the Johns Hopkins Medical Institutions, the Department of Veterans Affairs, and Visionics Corporation. With the aid of Stennis scientists, Wilmer researchers used NASA technology for computer processing of satellite images and head-mounted vision enhancement systems originally intended for the space station. The unit consists of a head-mounted video display, three video cameras, and a control unit for the cameras. The cameras feed images to the video display in the headset.
Design of video interface conversion system based on FPGA
NASA Astrophysics Data System (ADS)
Zhao, Heng; Wang, Xiang-jun
2014-11-01
This paper presents an FPGA-based video interface conversion system that enables inter-conversion between digital and analog video. A Cyclone IV series EP4CE22F17C chip from Altera Corporation is used as the main video processing chip, and a single-chip microcontroller serves as the information interaction control unit between the FPGA and a PC. The system is able to encode/decode messages from the PC. Technologies including video decoding/encoding circuits, the bus communication protocol, data stream de-interleaving and de-interlacing, color space conversion, and the Camera Link timing generator module of the FPGA are introduced. The system converts the Composite Video Broadcast Signal (CVBS) from a CCD camera into Low Voltage Differential Signaling (LVDS), which is collected by a video processing unit with a Camera Link interface. The processed video signals are then fed to the system output board and displayed on the monitor. Experiments show that the system achieves high-quality video conversion with minimal board size.
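The color space conversion step mentioned above is commonly a BT.601-style matrix in such pipelines; a minimal software sketch of that conversion (standard full-range BT.601 coefficients, not taken from this paper, with integer rounding in the spirit of an FPGA fixed-point datapath) might look like:

```python
# Illustrative full-range BT.601 RGB -> YCbCr conversion, the kind of
# color-space step a video-conversion pipeline performs. Coefficients are
# the standard BT.601 values; this is a generic sketch, not the paper's
# implementation.

def rgb_to_ycbcr(r, g, b):
    """Convert one 8-bit RGB pixel to full-range YCbCr (BT.601)."""
    y  =  0.299    * r + 0.587    * g + 0.114    * b
    cb = -0.168736 * r - 0.331264 * g + 0.5      * b + 128
    cr =  0.5      * r - 0.418688 * g - 0.081312 * b + 128
    clamp = lambda v: max(0, min(255, int(round(v))))
    return clamp(y), clamp(cb), clamp(cr)

print(rgb_to_ycbcr(255, 255, 255))  # white -> (255, 128, 128)
print(rgb_to_ycbcr(0, 0, 0))        # black -> (0, 128, 128)
```

In hardware the same matrix would be realized with fixed-point multipliers and adders rather than floating-point arithmetic.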
Design and implementation of H.264 based embedded video coding technology
NASA Astrophysics Data System (ADS)
Mao, Jian; Liu, Jinming; Zhang, Jiemin
2016-03-01
In this paper, an embedded system for remote online video monitoring was designed and developed to capture and record real-time conditions in an elevator. To improve the efficiency of video acquisition and processing, the system uses the Samsung S5PV210 chip, which integrates a graphics processing unit, as the core processor, and the video is encoded in H.264 format for efficient storage and transmission. Based on the S5PV210 chip, hardware video coding was investigated, which is more efficient than software coding. Testing showed that hardware video coding can markedly reduce system cost and produce smoother video display. It can be widely applied in security supervision [1].
Mobile Vehicle Teleoperated Over Wireless IP
2007-06-13
VideoLAN software suite. The VLC media player portion of this suite handles net- work streaming of video, as well as the receipt and display of the video...is found in appendix C.7. Video Display The video feed is displayed for the operator using VLC opened independently from the control sending program...This gives the operator the most choice in how to configure the display. To connect VLC to the feed all you need is the IP address from the Java
Display nonlinearity in digital image processing for visual communications
NASA Astrophysics Data System (ADS)
Peli, Eli
1992-11-01
The luminance emitted from a cathode ray tube (CRT) display is a nonlinear function (the gamma function) of the input video signal voltage. In most analog video systems, compensation for this nonlinear transfer function is implemented in the camera amplifiers. When CRT displays are used to present psychophysical stimuli in vision research, the specific display nonlinearity usually is measured and accounted for to ensure that the luminance of each pixel in the synthetic image properly represents the intended value. However, when using digital image processing, the linear analog-to-digital converters store a digital image that is nonlinearly related to the displayed or recorded image. The effect of this nonlinear transformation on a variety of image-processing applications used in visual communications is described.
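The gamma relationship described above can be sketched in a few lines. The exponent 2.2 is a typical measured CRT value, assumed here only for illustration; in vision research the actual display curve would be measured:

```python
# Sketch of CRT display nonlinearity: an 8-bit code value v maps to
# emitted luminance roughly as L ∝ (v/255)**gamma, so presenting an
# intended luminance requires inverting the curve (gamma correction).
# gamma = 2.2 is an assumed typical value, not a measured one.

GAMMA = 2.2

def code_to_luminance(v, gamma=GAMMA):
    """Relative luminance emitted for an 8-bit code value."""
    return (v / 255.0) ** gamma

def luminance_to_code(L, gamma=GAMMA):
    """Code value needed to display a desired relative luminance."""
    return round(255.0 * L ** (1.0 / gamma))

# Half of maximum luminance requires a code value well above 128:
v = luminance_to_code(0.5)
print(v, code_to_luminance(v))  # code 186, luminance ~0.5
```

Storing the linear A/D samples without this inversion is exactly the mismatch the abstract describes: the stored digital image is nonlinearly related to what is displayed.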
Display nonlinearity in digital image processing for visual communications
NASA Astrophysics Data System (ADS)
Peli, Eli
1991-11-01
The luminance emitted from a cathode ray tube (CRT) display is a nonlinear function (the gamma function) of the input video signal voltage. In most analog video systems, compensation for this nonlinear transfer function is implemented in the camera amplifiers. When CRT displays are used to present psychophysical stimuli in vision research, the specific display nonlinearity usually is measured and accounted for to ensure that the luminance of each pixel in the synthetic image properly represents the intended value. However, when using digital image processing, the linear analog-to-digital converters store a digital image that is nonlinearly related to the displayed or recorded image. This paper describes the effect of this nonlinear transformation on a variety of image-processing applications used in visual communications.
Development of 40-in hybrid hologram screen for auto-stereoscopic video display
NASA Astrophysics Data System (ADS)
Song, Hyun Ho; Nakashima, Y.; Momonoi, Y.; Honda, Toshio
2004-06-01
Auto-stereoscopic displays usually face two problems. The first is that large image display is difficult; the second is that the view zone (the zone in which both eyes must be placed for stereoscopic or 3-D image observation) is very narrow. We have been developing an auto-stereoscopic large video display system (over 100 inches diagonal) that a few people can view simultaneously. Displays over 100 inches diagonal usually use an optical video projection system. The hologram screen has been proposed as one type of auto-stereoscopic display system. However, if the hologram screen becomes too large, the view zone (corresponding to the reconstructed diffused object) suffers color dispersion and color aberration. We proposed attaching an additional Fresnel lens to the hologram screen; we call this screen a "hybrid hologram screen" (HHS for short). We made an HHS of 866 mm (H) × 433 mm (V) (about 40 inches diagonal). By using the lens in the reconstruction step, the angle between object light and reference light can be made smaller than without the lens, so the spread of the view zone caused by color dispersion and color aberration becomes small. Also, the virtual image reconstructed from the hologram screen can be transformed into a real image (view zone), so it is not necessary to use a large lens or concave mirror when making a large hologram screen.
77 FR 9964 - Certain Video Displays and Products Using and Containing Same
Federal Register 2010, 2011, 2012, 2013, 2014
2012-02-21
... INTERNATIONAL TRADE COMMISSION [Investigation No. 337-TA-828] Certain Video Displays and Products... importation, and the sale within the United States after importation of certain video displays and products... States, the sale for importation, or the sale within the United States after importation of certain video...
Payload specialist station study. Part 2: CEI specifications (part 1). [space shuttles
NASA Technical Reports Server (NTRS)
1976-01-01
The performance, design, and verification specifications are established for the multifunction display system (MFDS) to be located at the payload station in the shuttle orbiter aft flight deck. The system provides the display units (with video, alphanumeric, and graphics capabilities), associated electronic units, and the keyboards in support of the payload-dedicated controls and displays concept.
Video bandwidth compression system
NASA Astrophysics Data System (ADS)
Ludington, D.
1980-08-01
The objective of this program was the development of a Video Bandwidth Compression brassboard model for use by the Air Force Avionics Laboratory, Wright-Patterson Air Force Base, in evaluation of bandwidth compression techniques for use in tactical weapons and to aid in the selection of particular operational modes to be implemented in an advanced flyable model. The bandwidth compression system is partitioned into two major divisions: the encoder, which processes the input video with a compression algorithm and transmits the most significant information; and the decoder where the compressed data is reconstructed into a video image for display.
Imaging System for Vaginal Surgery.
Taylor, G Bernard; Myers, Erinn M
2015-12-01
The vaginal surgeon is challenged with performing complex procedures within a surgical field of limited light and exposure. The video telescopic operating microscope is an illumination and imaging system that provides visualization during open surgical procedures with a limited field of view. The imaging system is positioned within the surgical field and then secured to the operating room table with a maneuverable holding arm. A high-definition camera and xenon light source allow transmission of the magnified image to a high-definition monitor in the operating room. The monitor screen is positioned above the patient for the surgeon and assistants to view in real time throughout the operation. The video telescopic operating microscope system was used to provide surgical illumination and magnification during total vaginal hysterectomy and salpingectomy, midurethral sling, and release of vaginal scar procedures. All procedures were completed without complications. The video telescopic operating microscope provided illumination of the vaginal operative field and display of the magnified image onto high-definition monitors in the operating room for the surgeon and staff to simultaneously view the procedures. The video telescopic operating microscope provides high-definition display, magnification, and illumination during vaginal surgery.
Multi-star processing and gyro filtering for the video inertial pointing system
NASA Technical Reports Server (NTRS)
Murphy, J. P.
1976-01-01
The video inertial pointing (VIP) system is being developed to satisfy the acquisition and pointing requirements of astronomical telescopes. The VIP system uses a single video sensor to provide star position information that can be used to generate three-axis pointing error signals (multi-star processing) and to drive a cathode ray tube (CRT) display of the star field. The pointing error signals are used to update the telescope's gyro stabilization system (gyro filtering). The CRT display facilitates target acquisition and positioning of the telescope by a remote operator. Linearized small-angle equations are used for the multi-star processing, and consideration of error performance and singularities leads to star-pair location restrictions and equation selection criteria. A discrete steady-state Kalman filter that uses the integrated gyro outputs is developed and analyzed. The filter includes unit time delays representing the asynchronous operation of the VIP microprocessor and video sensor. A digital simulation of a typical gyro-stabilized gimbal is developed and used to validate the gyro filtering approach.
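The gyro-filtering idea above can be illustrated with a heavily simplified one-axis sketch: propagate attitude by integrating a biased gyro, then correct it with noisy star fixes through a fixed steady-state Kalman gain. All numbers and the 1-D reduction are our own illustrative assumptions, not values from the paper:

```python
# Minimal 1-D sketch of gyro filtering: gyro integration for propagation,
# star-position measurements for correction via a constant (steady-state)
# Kalman gain. Gain, bias, noise, and rates are illustrative assumptions.

import random

random.seed(0)
DT = 0.1            # update interval, s
GAIN = 0.2          # steady-state Kalman gain (assumed)
GYRO_BIAS = 0.01    # rad/s gyro drift
STAR_NOISE = 0.002  # rad, star-position measurement noise

true_angle = 0.0
est_angle = 0.0
for _ in range(200):                      # 20 s of operation
    rate = 0.05                           # true slew rate, rad/s
    true_angle += rate * DT
    est_angle += (rate + GYRO_BIAS) * DT  # propagate with biased gyro
    star_meas = true_angle + random.gauss(0, STAR_NOISE)
    est_angle += GAIN * (star_meas - est_angle)  # measurement update

# Unaided gyro integration would drift by GYRO_BIAS * 20 s = 0.2 rad;
# the star-corrected estimate stays far closer to truth.
print(f"pointing error after 20 s: {abs(est_angle - true_angle):.4f} rad")
```

The real filter is multi-axis and accounts for the asynchronous sensor timing via the unit delays mentioned in the abstract, but the propagate/correct structure is the same.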
First Use of Heads-up Display for Astronomy Education
NASA Astrophysics Data System (ADS)
Mumford, Holly; Hintz, E. G.; Jones, M.; Lawler, J.; Fisler, A.
2013-01-01
As part of our work on deaf education in a planetarium environment we are exploring the use of heads-up display systems. This allows us to overlap an ASL interpreter with our educational videos. The overall goal is to allow a student to watch a full-dome planetarium show and have the interpreter tracking to any portion of the video. We will present the first results of using a heads-up display to provide an ASL ‘sound-track’ for a deaf audience. This work is partially funded by an NSF IIS-1124548 grant and funding from the Sorenson Foundation.
An integrated port camera and display system for laparoscopy.
Terry, Benjamin S; Ruppert, Austin D; Steinhaus, Kristen R; Schoen, Jonathan A; Rentschler, Mark E
2010-05-01
In this paper, we built and tested the port camera, a novel, inexpensive, portable, and battery-powered laparoscopic tool that integrates the components of a vision system with a cannula port. This new device 1) minimizes the invasiveness of laparoscopic surgery by combining a camera port and tool port; 2) reduces the cost of laparoscopic vision systems by integrating an inexpensive CMOS sensor and LED light source; and 3) enhances laparoscopic surgical procedures by mechanically coupling the camera, tool port, and liquid crystal display (LCD) screen to provide an on-patient visual display. The port camera video system was compared to two laparoscopic video systems: a standard resolution unit from Karl Storz (model 22220130) and a high definition unit from Stryker (model 1188HD). Brightness, contrast, hue, colorfulness, and sharpness were compared. The port camera video is superior to the Storz scope and approximately equivalent to the Stryker scope. An ex vivo study was conducted to measure the operative performance of the port camera. The results suggest that simulated tissue identification and biopsy acquisition with the port camera is as efficient as with a traditional laparoscopic system. The port camera was successfully used by a laparoscopic surgeon for exploratory surgery and liver biopsy during a porcine surgery, demonstrating initial surgical feasibility.
Naval Research Laboratory 1984 Review.
1985-07-16
pulsed infrared sources and electronics for video signal processing... comprehensive characterization of ultrahigh-transparency fluoride glasses and... operates a video system through this port if desired. The optical bench in the trailer holds a high-resolution Fourier transform spectrometer to use in the receiving..., consisting of visible and infrared television cameras, a high-quality video cassette recorder and display, and a digitizer to convert
Multi-Target Camera Tracking, Hand-off and Display LDRD 158819 Final Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderson, Robert J.
2014-10-01
Modern security control rooms gather video and sensor feeds from tens to hundreds of cameras. Advanced camera analytics can detect motion from individual video streams and convert unexpected motion into alarms, but the interpretation of these alarms depends heavily upon human operators. Unfortunately, these operators can be overwhelmed when a large number of events happen simultaneously, or lulled into complacency due to frequent false alarms. This LDRD project has focused on improving video surveillance-based security systems by changing the fundamental focus from the cameras to the targets being tracked. If properly integrated, more cameras shouldn't lead to more alarms, more monitors, more operators, and increased response latency but instead should lead to better information and more rapid response times. For the course of the LDRD we have been developing algorithms that take live video imagery from multiple video cameras, identify individual moving targets from the background imagery, and then display the results in a single 3D interactive video. In this document we summarize the work in developing this multi-camera, multi-target system, including lessons learned, tools developed, technologies explored, and a description of current capability.
Contour Detector and Data Acquisition System for the Left Ventricular Outline
NASA Technical Reports Server (NTRS)
Reiber, J. H. C. (Inventor)
1978-01-01
A real-time contour detector and data acquisition system is described for an angiographic apparatus having a video scanner for converting an X-ray image of a structure characterized by a change in brightness level compared with its surroundings into video format and displaying the X-ray image in recurring video fields. The real-time contour detector and data acquisition system includes track and hold circuits; a reference level analog computer circuit; an analog comparator; a digital processor; a field memory; and a computer interface.
Video-speed electronic paper based on electrowetting
NASA Astrophysics Data System (ADS)
Hayes, Robert A.; Feenstra, B. J.
2003-09-01
In recent years, a number of different technologies have been proposed for use in reflective displays. One of the most appealing applications of a reflective display is electronic paper, which combines the desirable viewing characteristics of conventional printed paper with the ability to manipulate the displayed information electronically. Electronic paper based on the electrophoretic motion of particles inside small capsules has been demonstrated and commercialized; but the response speed of such a system is rather slow, limited by the velocity of the particles. Recently, we have demonstrated that electrowetting is an attractive technology for the rapid manipulation of liquids on a micrometre scale. Here we show that electrowetting can also be used to form the basis of a reflective display that is significantly faster than electrophoretic displays, so that video content can be displayed. Our display principle utilizes the voltage-controlled movement of a coloured oil film adjacent to a white substrate. The reflectivity and contrast of our system approach those of paper. In addition, we demonstrate a colour concept, which is intrinsically four times brighter than reflective liquid-crystal displays and twice as bright as other emerging technologies. The principle of microfluidic motion at low voltages is applicable in a wide range of electro-optic devices.
Coupled auralization and virtual video for immersive multimedia displays
NASA Astrophysics Data System (ADS)
Henderson, Paul D.; Torres, Rendell R.; Shimizu, Yasushi; Radke, Richard; Lonsway, Brian
2003-04-01
The implementation of maximally-immersive interactive multimedia in exhibit spaces requires not only the presentation of realistic visual imagery but also the creation of a perceptually accurate aural experience. While conventional implementations treat audio and video problems as essentially independent, this research seeks to couple the visual sensory information with dynamic auralization in order to enhance perceptual accuracy. An implemented system has been developed for integrating accurate auralizations with virtual video techniques for both interactive presentation and multi-way communication. The current system utilizes a multi-channel loudspeaker array and real-time signal processing techniques for synthesizing the direct sound, early reflections, and reverberant field excited by a moving sound source whose path may be interactively defined in real-time or derived from coupled video tracking data. In this implementation, any virtual acoustic environment may be synthesized and presented in a perceptually-accurate fashion to many participants over a large listening and viewing area. Subject tests support the hypothesis that the cross-modal coupling of aural and visual displays significantly affects perceptual localization accuracy.
Backscatter absorption gas imaging system
McRae, Jr., Thomas G.
1985-01-01
A video imaging system for detecting hazardous gas leaks. Visual displays of invisible gas clouds are produced by radiation augmentation of the field of view of an imaging device by radiation corresponding to an absorption line of the gas to be detected. The field of view of an imager is irradiated by a laser. The imager receives both backscattered laser light and background radiation. When a detectable gas is present, the backscattered laser light is highly attenuated, producing a region of contrast or shadow on the image. A flying spot imaging system is utilized to synchronously irradiate and scan the area to lower laser power requirements. The imager signal is processed to produce a video display.
NASA Astrophysics Data System (ADS)
Kim, Kyung-Su; Lee, Hae-Yeoun; Im, Dong-Hyuck; Lee, Heung-Kyu
Commercial markets employ digital rights management (DRM) systems to protect valuable high-definition (HD) quality videos. DRM systems use watermarking to provide copyright protection and ownership authentication of multimedia content. We propose a real-time video watermarking scheme for HD video in the uncompressed domain. In particular, our approach takes a practical perspective, satisfying perceptual quality, real-time processing, and robustness requirements. We simplify and optimize a human visual system mask for real-time performance and apply a dithering technique for invisibility. Extensive experiments show that the proposed scheme satisfies the invisibility, real-time processing, and robustness requirements against video processing attacks. We concentrate on video processing attacks that commonly occur when HD-quality videos are displayed on portable devices. These attacks include not only scaling and low-bit-rate encoding, but also malicious attacks such as format conversion and frame-rate change.
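For readers unfamiliar with the general embed/detect structure such schemes build on, here is a toy additive spread-spectrum watermark in the luminance domain. This is a generic textbook construction for illustration only, not the authors' algorithm; the key, strength, and frame size are made up, and the HVS masking and dithering the paper describes are omitted:

```python
# Toy additive spread-spectrum watermark: add a keyed +/-1 pseudo-noise
# pattern to luminance samples; detect by correlating with the same
# keyed pattern. Generic illustration, not the paper's scheme.

import random

def pn_sequence(n, key):
    """Keyed pseudo-noise sequence of +/-1 values."""
    rng = random.Random(key)
    return [rng.choice((-1, 1)) for _ in range(n)]

def embed(pixels, key, strength=4):
    """Add a scaled PN sequence to 8-bit luminance samples."""
    pn = pn_sequence(len(pixels), key)
    return [max(0, min(255, p + strength * w)) for p, w in zip(pixels, pn)]

def detect(pixels, key):
    """Correlate the frame with the keyed PN sequence; large -> mark present."""
    pn = pn_sequence(len(pixels), key)
    mean = sum(pixels) / len(pixels)
    return sum((p - mean) * w for p, w in zip(pixels, pn)) / len(pixels)

rng = random.Random(42)
frame = [rng.randrange(30, 220) for _ in range(10000)]  # fake luminance frame
marked = embed(frame, key=1234)
print(detect(marked, key=1234) > detect(marked, key=9999))  # True
```

A practical HD scheme replaces the flat `strength` with a per-pixel perceptual mask, which is exactly where the real-time cost the authors optimize comes from.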
Yoshida, Soichiro; Kihara, Kazunori; Takeshita, Hideki; Fujii, Yasuhisa
2014-12-01
The head-mounted display (HMD) is a new image monitoring system. We developed the Personal Integrated-image Monitoring System (PIM System) using the HMD (HMZ-T2, Sony Corporation, Tokyo, Japan) in combination with video splitters and multiplexers as a surgical guide system for transurethral resection of the prostate (TURP). The imaging information obtained from the cystoscope, the transurethral ultrasonography (TRUS), the video camera attached to the HMD, and the patient's vital signs monitor was split and integrated by the PIM System, and a composite image was displayed by the HMD using a four-split screen technique. Wearing the HMD, the lead surgeon and the assistant could simultaneously and continuously monitor the same information displayed by the HMD in an ergonomically efficient posture. Each participant could independently rearrange the images comprising the composite image depending on the step being performed. Two benign prostatic hyperplasia (BPH) patients underwent TURP performed by surgeons guided with this system. In both cases, the TURP procedure was successfully performed, and the postoperative clinical courses showed no remarkable adverse events. During the procedure, none of the participants experienced any HMD-wear-related adverse effects or reported any discomfort.
Autonomous spacecraft rendezvous and docking
NASA Technical Reports Server (NTRS)
Tietz, J. C.; Almand, B. J.
1985-01-01
A storyboard display is presented which summarizes work done recently in design and simulation of autonomous video rendezvous and docking systems for spacecraft. This display includes: photographs of the simulation hardware, plots of chase vehicle trajectories from simulations, pictures of the docking aid including image processing interpretations, and drawings of the control system strategy. Viewgraph-style sheets on the display bulletin board summarize the simulation objectives, benefits, special considerations, approach, and results.
19. SITE BUILDING 002 SCANNER BUILDING AIR POLICE ...
19. SITE BUILDING 002 - SCANNER BUILDING - AIR POLICE SITE SECURITY OFFICE WITH "SITE PERIMETER STATUS PANEL" AND REAL TIME VIDEO DISPLAY OUTPUT FROM VIDEO CAMERA SYSTEM AT SECURITY FENCE LOCATIONS. - Cape Cod Air Station, Technical Facility-Scanner Building & Power Plant, Massachusetts Military Reservation, Sandwich, Barnstable County, MA
High-speed reconstruction of compressed images
NASA Astrophysics Data System (ADS)
Cox, Jerome R., Jr.; Moore, Stephen M.
1990-07-01
A compression scheme is described that allows high-definition radiological images with greater than 8-bit intensity resolution to be represented by 8-bit pixels. Reconstruction of the images with their original intensity resolution can be carried out by means of a pipeline architecture suitable for compact, high-speed implementation. A reconstruction system is described that can be fabricated according to this approach and placed between an 8-bit display buffer and the display's video system, thereby allowing contrast control of images at video rates. Results for 50 CR chest images show that error-free reconstruction of the original 10-bit CR images can be achieved.
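The abstract does not reproduce its compression curve, but the general idea of companding greater-than-8-bit data into 8-bit codes and reconstructing through a small lookup table at video rates can be sketched with an assumed logarithmic law (the actual CR scheme may differ, and unlike this lossy sketch it achieves error-free reconstruction):

```python
# Generic companding sketch: map 10-bit pixel values to 8-bit codes with
# a logarithmic curve, then reconstruct via a 256-entry lookup table, as
# a pipelined display-side system could do at video rates. The log law
# is an assumption for illustration, not the paper's curve.

import math

BITS_IN, BITS_OUT = 10, 8
MAX_IN, MAX_OUT = (1 << BITS_IN) - 1, (1 << BITS_OUT) - 1

def compress(v):
    """10-bit value -> 8-bit code via a log curve."""
    return round(MAX_OUT * math.log1p(v) / math.log1p(MAX_IN))

# Reconstruction LUT: one 10-bit estimate per possible 8-bit code.
decode_lut = [round(math.expm1(c * math.log1p(MAX_IN) / MAX_OUT))
              for c in range(MAX_OUT + 1)]

v = 512
code = compress(v)
print(code, decode_lut[code])  # 8-bit code and its 10-bit reconstruction
```

Because the decoder is just a 256-entry table lookup per pixel, it maps naturally onto the compact pipeline hardware the abstract describes, sitting between the display buffer and the video system.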
AOIPS water resources data management system
NASA Technical Reports Server (NTRS)
Vanwie, P.
1977-01-01
The text and computer-generated displays used to demonstrate the AOIPS (Atmospheric and Oceanographic Information Processing System) water resources data management system are investigated. The system was developed to assist hydrologists in analyzing the physical processes occurring in watersheds. It was designed to alleviate some of the problems encountered while investigating the complex interrelationships of variables such as land-cover type, topography, precipitation, snow melt, surface runoff, evapotranspiration, and streamflow rates. The system has an interactive image processing capability and a color video display for presenting results as they are obtained.
NASA Astrophysics Data System (ADS)
Newman, R. L.
2002-12-01
How many images can you display at one time with PowerPoint without getting "postage stamps"? Do you have fantastic datasets that you cannot view because your computer is too slow or too small? Do you assume a few 2-D images of a 3-D picture are sufficient? High-end visualization centers can minimize and often eliminate these problems. The new visualization center [http://siovizcenter.ucsd.edu] at Scripps Institution of Oceanography [SIO] immerses users into a virtual world by projecting 3-D images onto a Panoram GVR-120E wall-sized floor-to-ceiling curved screen [7' x 23'] that has 3.2 mega-pixels of resolution. The Infinite Reality graphics subsystem is driven by a single-pipe SGI Onyx 3400 with a system bandwidth of 44 Gbps. The Onyx is powered by 16 MIPS R12K processors and 16 GB of addressable memory. The system is also equipped with transmitters and LCD shutter glasses which permit stereographic 3-D viewing of high-resolution images. This center is ideal for groups of up to 60 people who can simultaneously view these large-format images. A wide range of hardware and software is available, giving the users a totally immersive working environment in which to display, analyze, and discuss large datasets. The system enables simultaneous display of video and audio streams from sources such as SGI megadesktop and stereo megadesktop, S-VHS video, DVD video, and video from a Macintosh or PC. For instance, one-third of the screen might be displaying S-VHS video from a remotely-operated-vehicle [ROV], while the remaining portion of the screen might be used for an interactive 3-D flight over the same parcel of seafloor. The video and audio combinations using this system are numerous, allowing users to combine and explore data and images in innovative ways, greatly enhancing scientists' ability to visualize, understand and collaborate on complex datasets.
In the not-too-distant future, with the rapid growth of networking speeds in the US, it will be possible for Earth Sciences departments to collaborate effectively while limiting the amount of physical travel required. This includes porting visualization content to the popular, low-cost Geowall visualization systems and providing web-based access to databanks filled with stock geoscience visualizations.
Real-Time Detection and Reading of LED/LCD Displays for Visually Impaired Persons
Tekin, Ender; Coughlan, James M.; Shen, Huiying
2011-01-01
Modern household appliances, such as microwave ovens and DVD players, increasingly require users to read an LED or LCD display to operate them, posing a severe obstacle for persons with blindness or visual impairment. While OCR-enabled devices are emerging to address the related problem of reading text in printed documents, they are not designed to tackle the challenge of finding and reading characters in appliance displays. Any system for reading these characters must address the challenge of first locating the characters among substantial amounts of background clutter; moreover, poor contrast and the abundance of specular highlights on the display surface – which degrade the image in an unpredictable way as the camera is moved – motivate the need for a system that processes images at a few frames per second, rather than forcing the user to take several photos, each of which can take seconds to acquire and process, until one is readable. We describe a novel system that acquires video, detects and reads LED/LCD characters in real time, reading them aloud to the user with synthesized speech. The system has been implemented on both a desktop and a cell phone. Experimental results are reported on videos of display images, demonstrating the feasibility of the system. PMID:21804957
The USL NASA PC R and D interactive presentation development system
NASA Technical Reports Server (NTRS)
Dominick, Wayne D. (Editor); Moreau, Dennis R.
1984-01-01
The Interactive Presentation Development System (IPDS) is a highly interactive system for creating, editing, and displaying video presentation sequences, e.g., for developing and presenting displays of instructional material similar to overhead transparency or slide presentations. However, since this system is PC-based, users (instructors) can step through sequences forward or backward, focusing attention on areas of the display with special cursor pointers. Additionally, screen displays may be dynamically modified during the presentation to show assignments or to answer questions, much like a traditional blackboard. This system is now implemented at the University of Southwestern Louisiana for use within the piloting phases of the NASA contract work.
Preliminary experience with a stereoscopic video system in a remotely piloted aircraft application
NASA Technical Reports Server (NTRS)
Rezek, T. W.
1983-01-01
Remote piloting video display development at the Dryden Flight Research Facility of NASA's Ames Research Center is summarized, and the reasons for considering stereo television are presented. Pertinent equipment is described. Limited flight experience is also discussed, along with recommendations for further study.
Internet Protocol Display Sharing Solution for Mission Control Center Video System
NASA Technical Reports Server (NTRS)
Brown, Michael A.
2009-01-01
With the advent of broadcast television as a constant source of information throughout the NASA manned space flight Mission Control Center (MCC) at the Johnson Space Center (JSC), the current Video Transport System (VTS) gives decision-making flight controllers a broadcast channel that visually enhances real-time applications, but it can be difficult to maintain and costly. The Operations Technology Facility (OTF) of the Mission Operations Facility Division (MOFD) has been tasked to provide insight into new, innovative technological solutions for the MCC environment, focusing on alternative architectures for a VTS. New technology will enable sharing of all imagery from one specific computer display, better known as Display Sharing (DS), to other computer displays and display systems such as large projector systems, flight control rooms, and back supporting rooms throughout the facilities and other offsite centers using IP networks. It has been stated that Internet Protocol (IP) applications are ready substitutes for the current visual architecture, but quality and speed may need to be traded for reduced cost and improved maintainability. Although the IP infrastructure can support many technologies, the simple task of sharing one's computer display can be rather clumsy and difficult to configure and manage across the many operators and products.
The DS process will focus on automating the sharing of images while addressing characteristics such as bandwidth management, security and encryption measures, synchronized disconnection on loss of signal or loss of acquisition, and latency. It must also provide functions such as scalability, multi-sharing, ease of initial integration and sustained configuration, integration with video adjustment packages, collaborative tools, host/recipient controllability, and, as the paramount priority, an enterprise solution that provides ownership of the whole process while maintaining the integrity of the latest display devices. This study will provide insight into the many possibilities that can be filtered down to a harmoniously responsive product for use in today's MCC environment.
Video System Highlights Hydrogen Fires
NASA Technical Reports Server (NTRS)
Youngquist, Robert C.; Gleman, Stuart M.; Moerk, John S.
1992-01-01
Video system combines images from visible spectrum and from three bands in infrared spectrum to produce color-coded display in which hydrogen fires distinguished from other sources of heat. Includes linear array of 64 discrete lead selenide mid-infrared detectors operating at room temperature. Images overlaid on black and white image of same scene from standard commercial video camera. In final image, hydrogen fires appear red; carbon-based fires, blue; and other hot objects, mainly green and combinations of green and red. Where no thermal source present, image remains in black and white. System enables high degree of discrimination between hydrogen flames and other thermal emitters.
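The band-to-color mapping described in this abstract can be sketched in a few lines; the array layout, threshold, and channel assignments below are illustrative assumptions, not the actual processing used in the NASA system:

```python
import numpy as np

def color_code_fires(visible, ir_h2, ir_carbon, ir_other, threshold=0.5):
    """Combine a grayscale visible image with three IR band images into an
    RGB display: hydrogen fires red, carbon-based fires blue, other hot
    objects mainly green; cold regions stay black and white.

    All inputs are float arrays in [0, 1] of the same shape."""
    rgb = np.stack([visible] * 3, axis=-1)  # start from the B/W scene
    hot = np.maximum.reduce([ir_h2, ir_carbon, ir_other]) > threshold
    # Overlay each IR band as a color channel wherever a thermal source exists
    rgb[hot, 0] = ir_h2[hot]      # red   <- hydrogen band
    rgb[hot, 2] = ir_carbon[hot]  # blue  <- carbon band
    rgb[hot, 1] = ir_other[hot]   # green <- other thermal sources
    return rgb
```

A hydrogen flame then shows up as a red region over the otherwise monochrome scene, matching the discrimination behavior the abstract describes.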
Video enhancement of X-ray and neutron radiographs
NASA Technical Reports Server (NTRS)
Vary, A.
1973-01-01
System was devised for displaying radiographs on television screen and enhancing fine detail in picture. System uses analog-computer circuits to process television signal from low-noise television camera. Enhanced images are displayed in black and white and can be controlled to vary degree of enhancement and magnification of details in either radiographic transparencies or opaque photographs.
Exploiting spatio-temporal characteristics of human vision for mobile video applications
NASA Astrophysics Data System (ADS)
Jillani, Rashad; Kalva, Hari
2008-08-01
Video applications on handheld devices such as smart phones pose a significant challenge to achieving a high quality user experience. Recent advances in processor and wireless networking technology are producing a new class of multimedia applications (e.g., video streaming) for mobile handheld devices. These devices are lightweight and modestly sized, and therefore have very limited resources: lower processing power, smaller display resolution, less memory, and limited battery life compared to desktop and laptop systems. Multimedia applications, on the other hand, have extensive processing requirements that make them extremely resource hungry on mobile devices. In addition, device-specific properties (e.g., the display screen) significantly influence the human perception of multimedia quality. In this paper we propose a saliency-based framework that exploits the structure in content creation as well as the human visual system to find the salient points in the incoming bitstream and adapt it to the target device, thus improving the quality of the adapted area around salient points. Our experimental results indicate that an adaptation process that is cognizant of video content and user preferences can produce video of better perceptual quality for mobile devices. Furthermore, we demonstrate how such a framework can affect the user experience on a handheld device.
A Low Cost Video Display System Using the Motorola 6811 Single-Chip Microcomputer.
1986-08-01
Excerpt from the scanned Motorola 6811 assembly listing (addresses and machine-code bytes omitted):

```
        JSR  VIDEO      display data; wait for key entry
        JSR  CLRBUFF    clean out buffer
        LDAB #1         reset pointer
        STAB ...
REG1    CMPA 0,X
        BEQ  REG3
        LDAB 0,X
        INX
        CMPB #'S'
        BNE  REG1       jump if ...
```
Real-time rendering for multiview autostereoscopic displays
NASA Astrophysics Data System (ADS)
Berretty, R.-P. M.; Peters, F. J.; Volleberg, G. T. G.
2006-02-01
In video systems, the introduction of 3D video might be the next revolution after the introduction of color. Multiview autostereoscopic displays are now in development. Such displays offer various views at the same time, and the image content observed by the viewer depends upon his position with respect to the screen. His left eye receives a signal that is different from what his right eye gets; provided the signals have been properly processed, this gives the impression of depth. The various views produced on the display differ with respect to their associated camera positions. A possible video format suited for rendering from different camera positions is the usual 2D format enriched with a depth-related channel: for each pixel in the video, not only its color is given but also, for example, its distance to the camera. In this paper we provide a theoretical framework for the parallactic transformations which relates captured and observed depths to screen and image disparities. Moreover, we present an efficient real-time rendering algorithm that uses forward mapping to reduce aliasing artefacts and that deals properly with occlusions. For improved perceived resolution, we take into account the relative position of the color subpixels and the optics of the lenticular screen. Sophisticated filtering techniques result in high-quality images.
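The parallactic relation between depth and screen disparity can be illustrated with a textbook similar-triangles model; this is a simplified sketch under an assumed viewing geometry (viewer facing the screen, symmetric eyes), not the paper's full framework:

```python
def screen_disparity(z, viewer_distance, eye_separation=0.065):
    """Screen-plane disparity (meters) of a point at distance z (meters)
    from the viewer, by similar triangles: a point on the screen plane has
    zero disparity, a point at infinity approaches the eye separation, and
    a point in front of the screen has negative (crossed) disparity."""
    return eye_separation * (z - viewer_distance) / z
```

For example, with a viewer 2 m from the screen, a point 4 m away projects with a 3.25 cm disparity, while a point on the screen plane has none.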
Augmenting reality in Direct View Optical (DVO) overlay applications
NASA Astrophysics Data System (ADS)
Hogan, Tim; Edwards, Tim
2014-06-01
The integration of overlay displays into rifle scopes can transform precision Direct View Optical (DVO) sights into intelligent interactive fire-control systems. Overlay displays can provide ballistic solutions within the sight for dramatically improved targeting, can fuse sensor video to extend targeting into nighttime or dirty battlefield conditions, and can overlay complex situational awareness information over the real-world scene. High brightness overlay solutions for dismounted soldier applications have previously been hindered by excessive power consumption, weight, and bulk, making them unsuitable for man-portable, battery powered applications. This paper describes the advancements and capabilities of a high brightness, ultra-low power text and graphics overlay display module developed specifically for integration into DVO weapon sight applications. Central to the overlay display module was the development of a new general purpose low power graphics controller and dual-path display driver electronics. The graphics controller interface is a simple 2-wire RS-232 serial interface compatible with existing weapon systems such as the IBEAM ballistic computer and the RULR and STORM laser rangefinders (LRF). The module features include multiple graphics layers, user configurable fonts and icons, and parameterized vector rendering, making it suitable for general purpose DVO overlay applications. The module is configured for graphics-only operation for daytime use and overlays graphics with video for nighttime applications. The miniature footprint and ultra-low power consumption of the module enables a new generation of intelligent DVO systems and has been implemented for resolutions from VGA to SXGA, in monochrome and color, and in graphics applications with and without sensor video.
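A serial graphics-controller interface of this kind typically exchanges small framed command packets. The frame layout below (STX/ETX framing, ASCII payload, XOR checksum) is entirely hypothetical, for illustration only; the actual IBEAM/RULR command set is not described in the abstract:

```python
def overlay_text_command(x, y, text):
    """Build a hypothetical serial command frame placing a text string at
    screen position (x, y): STX, ASCII payload, one-byte XOR checksum, ETX.
    This is an illustrative sketch, not the real weapon-system protocol."""
    payload = f"T,{x},{y},{text}".encode("ascii")
    checksum = 0
    for b in payload:
        checksum ^= b  # simple longitudinal XOR over the payload bytes
    return b"\x02" + payload + bytes([checksum]) + b"\x03"
```

Such a frame would be written to the 2-wire RS-232 link; the receiver validates the checksum before rendering.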
ERIC Educational Resources Information Center
Walsh, Janet
1982-01-01
Discusses the health hazards of working with the visual display systems of computers, in particular the eye problems associated with long-term use of video display terminals. Excerpts from and ordering information for the National Institute for Occupational Safety and Health report on such hazards are included. (JJD)
Motion sickness and postural sway in console video games.
Stoffregen, Thomas A; Faugloire, Elise; Yoshida, Ken; Flanagan, Moira B; Merhi, Omar
2008-04-01
We tested the hypotheses that (a) participants might develop motion sickness while playing "off-the-shelf" console video games and (b) postural motion would differ between sick and well participants, prior to the onset of motion sickness. There have been many anecdotal reports of motion sickness among people who play console video games (e.g., Xbox, PlayStation). Participants (40 undergraduate students) played a game continuously for up to 50 min while standing or sitting. We varied the distance to the display screen (and, consequently, the visual angle of the display). Across conditions, the incidence of motion sickness ranged from 42% to 56%; incidence did not differ across conditions. During game play, head and torso motion differed between sick and well participants prior to the onset of subjective symptoms of motion sickness. The results indicate that console video games carry a significant risk of motion sickness. Potential applications of this research include changes in the design of console video games and recommendations for how such systems should be used.
Video image processor on the Spacelab 2 Solar Optical Universal Polarimeter /SL2 SOUP/
NASA Technical Reports Server (NTRS)
Lindgren, R. W.; Tarbell, T. D.
1981-01-01
The SOUP instrument is designed to obtain diffraction-limited digital images of the sun with high photometric accuracy. The Video Processor originated from the requirement to provide onboard real-time image processing, both to reduce the telemetry rate and to provide meaningful video displays of scientific data to the payload crew. This original concept has evolved into a versatile digital processing system with a multitude of other uses in the SOUP program. The central element in the Video Processor design is a 16-bit central processing unit based on 2900 family bipolar bit-slice devices. All arithmetic, logical and I/O operations are under control of microprograms, stored in programmable read-only memory and initiated by commands from the LSI-11. Several functions of the Video Processor are described, including interface to the High Rate Multiplexer downlink, cosmetic and scientific data processing, scan conversion for crew displays, focus and exposure testing, and use as ground support equipment.
Polyplanar optical display electronics
DOE Office of Scientific and Technical Information (OSTI.GOV)
DeSanto, L.; Biscardi, C.
The Polyplanar Optical Display (POD) is a unique display screen which can be used with any projection source. The prototype ten-inch display is two inches thick and has a matte black face which allows for high contrast images. The prototype being developed is a form, fit and functional replacement display for the B-52 aircraft, which uses a monochrome ten-inch display. In order to achieve a long lifetime, the new display uses a 100 milliwatt green solid-state laser (10,000 hr life) at 532 nm as its light source. To produce real-time video, the laser light is modulated by a Digital Light Processing (DLP™) chip manufactured by Texas Instruments. In order to use the solid-state laser as the light source and also fit within the constraints of the B-52 display, the Digital Micromirror Device (DMD™) circuit board is removed from the Texas Instruments DLP light engine assembly. Due to the compact architecture of the projection system within the display chassis, the DMD™ chip is operated remotely from the Texas Instruments circuit board. The authors discuss the operation of the DMD™ divorced from the light engine and the interfacing of the DMD™ board with various video formats (CVBS, Y/C or S-video, and RGB), including the format specific to the B-52 aircraft. A brief discussion of the electronics required to drive the laser is also presented.
Electronic data generation and display system
NASA Technical Reports Server (NTRS)
Wetekamm, Jules
1988-01-01
The Electronic Data Generation and Display System (EDGADS) is a field tested paperless technical manual system. The authoring system provides subject matter experts the option of developing procedureware from digital or hardcopy inputs of technical information from text, graphics, pictures, and recorded media (video, audio, etc.). The display system provides multi-window presentations of graphics, pictures, animations, and action sequences with text and audio overlays on high resolution color CRT and monochrome portable displays. The database management system allows direct access via hierarchical menus, keyword name, ID number, voice command, or touch of a screen pictorial of the item (icon). It contains operations and maintenance technical information at three levels of intelligence for a total system.
ARINC 818 specification revisions enable new avionics architectures
NASA Astrophysics Data System (ADS)
Grunwald, Paul
2014-06-01
The ARINC 818 Avionics Digital Video Bus is the standard for cockpit video that has gained wide acceptance in both commercial and military cockpits. The Boeing 787, A350XWB, A400M, KC-46A, and many other aircraft use it. The ARINC 818 specification, which was initially released in 2006, has recently undergone a major update to address new avionics architectures and capabilities. Over the seven years since its release, projects have gone beyond the specification due to the complexity of new architectures and desired capabilities, such as video switching, bi-directional communication, data-only paths, and camera and sensor control provisions. The ARINC 818 specification was revised in 2013, and ARINC 818-2 was approved in November 2013. The revisions in the ARINC 818-2 specification enable switching, stereo and 3-D provisions, color sequential implementations, regions of interest, bi-directional communication, higher link rates, data-only transmission, and synchronization signals. This paper discusses each of the new capabilities and their impact on avionics and display architectures, especially when integrating large area displays, stereoscopic displays, multiple displays, and systems that include a large number of sensors.
Using Globe Browsing Systems in Planetariums to Take Audiences to Other Worlds.
NASA Astrophysics Data System (ADS)
Emmart, C. B.
2014-12-01
For the last decade planetariums have been adding the capability of "full dome video" systems for both movie playback and interactive display. True scientific data visualization has now come to planetarium audiences as a means to display the actual three-dimensional layout of the universe, the time-based array of planets, minor bodies, and spacecraft across the solar system, and now globe browsing systems to examine planetary bodies to the limits of the resolutions acquired. Additionally, such planetarium facilities can be networked for simultaneous display across the world, widening audience reach and providing access to authoritative scientist description and commentary. Data repositories such as NASA's Lunar Mapping and Modeling Project (LMMP), NASA GSFC's LANCE-MODIS, and others conforming to the Open Geospatial Consortium (OGC) standard Web Map Server (WMS) protocols make geospatial data available to a growing number of dome-supporting globe visualization systems. The immersive surround graphics of full dome video replicate our visual system, creating authentic virtual scenes that effectively place audiences on location, in some cases on other worlds only mapped robotically.
Bar-Chart-Monitor System For Wind Tunnels
NASA Technical Reports Server (NTRS)
Jung, Oscar
1993-01-01
Real-time monitor system provides bar-chart displays of significant operating parameters; developed for the National Full-Scale Aerodynamic Complex at Ames Research Center. Designed to gather and process sensory data on operating conditions of wind tunnels and models, and to display data for test engineers and technicians concerned with safety and validation of operating conditions. Bar-chart video monitor displays data in as many as 50 channels at maximum update rate of 2 Hz in format facilitating quick interpretation.
NASA Astrophysics Data System (ADS)
Khosla, Deepak; Huber, David J.; Bhattacharyya, Rajan
2017-05-01
In this paper, we describe an algorithm and system for optimizing search and detection performance for "items of interest" (IOI) in large images and videos. The system employs the Rapid Serial Visual Presentation (RSVP) EEG paradigm together with surprise algorithms that incorporate motion processing to determine whether static or video RSVP is used. It works by first computing a motion surprise map on image sub-regions (chips) of incoming sensor video data and then using those surprise maps to label the chips as either "static" or "moving". This information tells the system whether to use a static or video RSVP presentation and decoding algorithm in order to optimize EEG-based detection of IOI in each chip. Using this method, we demonstrate classification of a series of image regions from video with an Az value of 1 (area under the ROC curve), indicating perfect classification, over a range of display frequencies and video speeds.
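The chip-labeling step can be sketched with a deliberately crude motion score (mean absolute frame difference per chip); the paper's actual surprise algorithm is more sophisticated, and the chip size and threshold here are placeholder assumptions:

```python
import numpy as np

def label_chips(frames, chip_size=64, motion_threshold=5.0):
    """Label image sub-regions (chips) as 'static' or 'moving' using a
    simple motion score: the mean absolute frame-to-frame difference
    averaged over each chip.

    frames: array of shape (T, H, W), grayscale.
    Returns {(top, left): 'static' | 'moving'}."""
    diff = np.abs(np.diff(frames.astype(float), axis=0)).mean(axis=0)
    h, w = diff.shape
    labels = {}
    for y in range(0, h, chip_size):
        for x in range(0, w, chip_size):
            score = diff[y:y + chip_size, x:x + chip_size].mean()
            labels[(y, x)] = "moving" if score > motion_threshold else "static"
    return labels
```

Chips labeled "moving" would then be routed to the video-RSVP presentation path, and "static" chips to the static-RSVP path.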
NASA Technical Reports Server (NTRS)
Gilliland, M. G.; Rougelot, R. S.; Schumaker, R. A.
1966-01-01
Video signal processor uses special-purpose integrated circuits with nonsaturating current mode switching to accept texture and color information from a digital computer in a visual spaceflight simulator and to combine these, for display on color CRT with analog information concerning fading.
Riby, Deborah M; Whittle, Lisa; Doherty-Sneddon, Gwyneth
2012-01-01
The human face is a powerful elicitor of emotion, which induces autonomic nervous system responses. In this study, we explored physiological arousal and reactivity to affective facial displays shown in person and through video-mediated communication. We compared measures of physiological arousal and reactivity in typically developing individuals and those with the developmental disorders Williams syndrome (WS) and autism spectrum disorder (ASD). Participants attended to facial displays of happy, sad, and neutral expressions via live and video-mediated communication. Skin conductance level (SCL) indicated that live faces, but not video-mediated faces, increased arousal, especially for typically developing individuals and those with WS. There was less increase of SCL, and physiological reactivity was comparable for live and video-mediated faces in ASD. In typical development and WS, physiological reactivity was greater for live than for video-mediated communication. Individuals with WS showed lower SCL than typically developing individuals, suggesting possible hypoarousal in this group, even though they showed an increase in arousal for faces. The results are discussed in terms of the use of video-mediated communication with typically and atypically developing individuals and atypicalities of physiological arousal across neurodevelopmental disorder groups.
Travel guidance system for vehicles
DOE Office of Scientific and Technical Information (OSTI.GOV)
Takanabe, K.; Yamamoto, M.; Ito, K.
1987-02-24
A travel guidance system is described for vehicles including: a heading sensor for detecting a direction of movement of a vehicle; a distance sensor for detecting a distance traveled by the vehicle; a map data storage medium preliminarily storing map data; a control unit for receiving a heading signal from the heading sensor and a distance signal from the distance sensor to successively compute a present position of the vehicle and for generating video signals corresponding to display data including map data from the map data storage medium and data of the present position; and a display having first and second display portions and responsive to the video signals from the control unit to display on the first display portion a map and a present position mark, in which: the map data storage medium comprises means for preliminarily storing administrative division name data and landmark data; and the control unit comprises: landmark display means for (1) determining a landmark closest to the present position, (2) causing a position of the landmark to be displayed on the map, and (3) retrieving a landmark message concerning the landmark from the storage medium to cause the display to display the landmark message on the second display portion; division name display means for retrieving the name of the administrative division to which the present position belongs from the storage medium and causing the display to display a division name message on the second display portion; and selection means for selectively actuating at least one of the landmark display means and the division name display means.
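The core computations in the claim, dead reckoning from the heading and distance sensors and selection of the closest landmark, can be sketched as follows; the flat-plane coordinates and heading convention are simplifying assumptions:

```python
import math

def dead_reckon(position, heading_deg, distance):
    """Advance the present position by the distance-sensor reading along
    the heading-sensor direction (0 deg = north, measured clockwise),
    on an assumed flat x-east / y-north plane."""
    rad = math.radians(heading_deg)
    return (position[0] + distance * math.sin(rad),
            position[1] + distance * math.cos(rad))

def nearest_landmark(present_position, landmarks):
    """Return the name of the landmark closest to the present position.

    landmarks: dict mapping name -> (x, y)."""
    return min(landmarks,
               key=lambda name: math.dist(present_position, landmarks[name]))
```

The control unit would call `dead_reckon` on each sensor update and `nearest_landmark` before refreshing the second display portion with the landmark message.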
47 CFR 79.101 - Closed caption decoder requirements for analog television receivers.
Code of Federal Regulations, 2012 CFR
2012-10-01
...) BROADCAST RADIO SERVICES CLOSED CAPTIONING AND VIDEO DESCRIPTION OF VIDEO PROGRAMMING § 79.101 Closed... display the captioning for whichever channel the user selects. The TV Mode of operation allows the video... and rows. The characters must be displayed clearly separated from the video over which they are placed...
An Attention-Information-Based Spatial Adaptation Framework for Browsing Videos via Mobile Devices
NASA Astrophysics Data System (ADS)
Li, Houqiang; Wang, Yi; Chen, Chang Wen
2007-12-01
With the growing popularity of personal digital assistant devices and smart phones, more and more consumers are eager to watch videos on mobile devices. However, the limited display size of mobile devices imposes significant barriers for users who wish to browse high-resolution videos. In this paper, we present an attention-information-based spatial adaptation framework to address this problem. The framework includes two major parts: video content generation and a video adaptation system. During video compression, the attention information in video sequences is detected using an attention model and embedded into bitstreams with the proposed supplemental enhancement information (SEI) structure. Furthermore, we develop an innovative scheme to adaptively adjust quantization parameters in order to simultaneously improve the quality of overall encoding and the quality of transcoding the attention areas. When the high-resolution bitstream is transmitted to mobile users, a fast transcoding algorithm we developed earlier is applied to generate a new bitstream for the attention areas in frames. This new low-resolution bitstream, containing mostly attention information, instead of the high-resolution one, is sent to users for display on their mobile devices. Experimental results show that the proposed spatial adaptation scheme improves both subjective and objective video quality.
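The spatial-adaptation idea, keeping the attention area and discarding the periphery when the target screen is small, can be sketched as a crop-window computation; the clamping rule and box format are illustrative assumptions, not the paper's transcoder:

```python
def crop_to_attention(frame_size, attention_box, target_size):
    """Choose a crop window of target_size centred on the attention area,
    clamped so it stays inside the frame, for display on a small screen.

    frame_size: (W, H); attention_box: (x, y, w, h); target_size: (tw, th).
    Returns (left, top, tw, th)."""
    W, H = frame_size
    x, y, w, h = attention_box
    tw, th = target_size
    cx, cy = x + w // 2, y + h // 2          # centre of the attention area
    left = min(max(cx - tw // 2, 0), W - tw)  # clamp to the frame
    top = min(max(cy - th // 2, 0), H - th)
    return (left, top, tw, th)
```

A transcoder would then re-encode only this window at the mobile device's resolution rather than downscaling the whole frame.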
[Development of a system for ultrasonic three-dimensional reconstruction of fetus].
Baba, K
1989-04-01
We have developed a computer system for ultrasonic three-dimensional (3-D) reconstruction of the fetus. Either a real-time linear array probe or a convex array probe of an ultrasonic scanner was mounted on the position sensor arm of a manual compound scanner in order to detect the position of the probe. A microcomputer was used to convert the position information into an image that could be recorded on video tape. This image was superimposed on the ultrasonic tomographic image with a superimposer and recorded on video tape. Fetuses in utero were scanned in seven cases. More than forty ultrasonic section images on the video tape were fed into a minicomputer. The shape of the fetus was displayed three-dimensionally by means of computer graphics. The computer-generated display produced a 3-D image of the fetus and showed the usefulness and accuracy of this system. Since data collection by ultrasonic inspection took only a few seconds, fetal movement did not adversely affect the results. Data input took about ten minutes for 40 slices, and 3-D reconstruction and display took about two minutes. The system made it possible to observe and record the 3-D image of the fetus in utero non-invasively and is therefore expected to make it much easier to obtain a 3-D picture of the fetus in utero.
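The geometric core of such freehand 3-D reconstruction is mapping each 2-D slice pixel into 3-D space using the tracked probe pose. A minimal sketch, assuming the pose is given as an origin plus two in-plane axis vectors (each scaled to one pixel's physical length):

```python
import numpy as np

def slice_points_to_3d(pixels, probe_origin, probe_x_axis, probe_y_axis):
    """Map 2-D pixel coordinates (u, v) on an ultrasound slice into 3-D
    space: point = origin + u * x_axis + v * y_axis, where the axis
    vectors encode the slice's orientation and pixel spacing.

    pixels: (N, 2) array-like; returns an (N, 3) array."""
    pixels = np.asarray(pixels, dtype=float)
    return (np.asarray(probe_origin, dtype=float)
            + pixels[:, :1] * np.asarray(probe_x_axis, dtype=float)
            + pixels[:, 1:2] * np.asarray(probe_y_axis, dtype=float))
```

Accumulating the transformed points (or resampled voxels) from all forty-odd slices yields the 3-D volume that is then rendered.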
Orbital thermal analysis of lattice structured spacecraft using color video display techniques
NASA Technical Reports Server (NTRS)
Wright, R. L.; Deryder, D. D.; Palmer, M. T.
1983-01-01
A color video display technique is demonstrated as a tool for rapid determination of thermal problems during the preliminary design of complex space systems. A thermal analysis is presented for the lattice-structured Earth Observation Satellite (EOS) spacecraft at 32 points in a baseline non Sun-synchronous (60 deg inclination) orbit. Large temperature variations (on the order of 150 K) were observed on the majority of the members. A gradual decrease in temperature was observed as the spacecraft traversed the Earth's shadow, followed by a sudden rise in temperature (100 K) as the spacecraft exited the shadow. Heating rate and temperature histories of selected members and color graphic displays of temperatures on the spacecraft are presented.
1981 Image II Conference Proceedings.
1981-11-01
rapid motion of terrain detail across the display requires fast display processors. Other difficulties are perceptual: the visual displays must convey...has been a continuing effort by Vought in the last decade. Early systems were restricted by the unavailability of video bulk storage with fast random...each photograph. The calculations aided in the proper sequencing of the scanned scenes on the tape recorder and eventually facilitated fast random
Digital image processing of bone - Problems and potentials
NASA Technical Reports Server (NTRS)
Morey, E. R.; Wronski, T. J.
1980-01-01
The development of a digital image processing system for bone histomorphometry and fluorescent marker monitoring is discussed. The system in question is capable of making measurements of UV or light microscope features on a video screen with either video or computer-generated images, and comprises a microscope, low-light-level video camera, video digitizer and display terminal, color monitor, and PDP 11/34 computer. Capabilities demonstrated in the analysis of an undecalcified rat tibia include the measurement of perimeter and total bone area, and the generation of microscope images, false color images, digitized images and contoured images for further analysis. Software development will be based on an existing software library, specifically the mini-VICAR system developed at JPL. It is noted that the potentials of the system in terms of speed and reliability far exceed any problems associated with hardware and software development.
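The perimeter and area measurements mentioned above can be sketched on a binary mask; this edge-count perimeter (one unit per exposed 4-neighbour face) is a simple stand-in, not the VICAR system's actual histomorphometry algorithm:

```python
import numpy as np

def area_and_perimeter(mask):
    """Pixel-count area and a simple edge-count perimeter for a binary
    mask: each boundary pixel contributes one unit per exposed
    4-neighbour face (crude, but monotone with the true perimeter)."""
    mask = np.asarray(mask, dtype=bool)
    padded = np.pad(mask, 1, constant_values=False)
    area = int(mask.sum())
    perimeter = 0
    # Count foreground pixels whose neighbour in each direction is background
    for axis, shift in [(0, 1), (0, -1), (1, 1), (1, -1)]:
        perimeter += int((padded & ~np.roll(padded, shift, axis=axis)).sum())
    return area, perimeter
```

Multiplying by the calibrated pixel size and spacing converts these counts into the physical bone areas and perimeters reported by such systems.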
Millisecond accuracy video display using OpenGL under Linux.
Stewart, Neil
2006-02-01
To measure people's reaction times to the nearest millisecond, it is necessary to know exactly when a stimulus is displayed. This article describes how to display stimuli with millisecond accuracy on a normal CRT monitor, using a PC running Linux. A simple C program is presented to illustrate how this may be done within X Windows using the OpenGL rendering system. A test of this system is reported that demonstrates that stimuli may be consistently displayed with millisecond accuracy. An algorithm is presented that allows the exact time of stimulus presentation to be deduced, even if there are relatively large errors in measuring the display time.
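The key deduction step, recovering the exact presentation time even when the measured swap timestamp is noisy, exploits the fact that frames can only appear on vertical-refresh boundaries. A minimal sketch (the phase reference and refresh rate are assumed known):

```python
def snap_to_refresh(swap_time, first_vsync, refresh_interval=1 / 60):
    """Deduce the actual presentation time of a stimulus by snapping the
    measured buffer-swap timestamp onto the monitor's refresh grid:
    the display can only change on vertical-refresh boundaries, so
    measurement error smaller than half a refresh period is eliminated."""
    n = round((swap_time - first_vsync) / refresh_interval)
    return first_vsync + n * refresh_interval
```

With a 60 Hz CRT, a timestamp measured a few milliseconds late still snaps to the correct refresh, giving effectively millisecond-accurate display times.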
Implementation of a Landscape Lighting System to Display Images
NASA Astrophysics Data System (ADS)
Sun, Gi-Ju; Cho, Sung-Jae; Kim, Chang-Beom; Moon, Cheol-Hong
The system implemented in this study consists of a PC, a MASTER, SLAVEs and MODULEs. The PC sets up the various landscape lighting displays, and image files can be sent to the MASTER through a virtual serial port connected over USB (Universal Serial Bus). The MASTER sends a sync signal to each SLAVE, which uses it to time the landscape lighting display pattern. The video file is saved in NAND Flash memory, and the R, G, B signals are separated using the self-made display signal and sent to the MODULE so that it can display the image.
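The channel-separation step performed by the SLAVE can be sketched in a few lines. This is a schematic Python illustration only; the actual firmware operates on video files stored in NAND Flash:

```python
def split_rgb(frame):
    """Separate an RGB frame (list of rows of (r, g, b) tuples)
    into three per-channel planes, one per colour signal."""
    r = [[px[0] for px in row] for row in frame]
    g = [[px[1] for px in row] for row in frame]
    b = [[px[2] for px in row] for row in frame]
    return r, g, b

frame = [[(255, 0, 0), (0, 128, 0)],
         [(0, 0, 64), (10, 20, 30)]]
r, g, b = split_rgb(frame)
print(r)  # [[255, 0], [0, 10]]
```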
Video-Out Projection and Lecture Hall Set-Up. Microcomputing Working Paper Series.
ERIC Educational Resources Information Center
Gibson, Chris
This paper details the considerations involved in determining suitable video projection systems for displaying the Apple Macintosh's screen to large groups of people, both in classrooms with approximately 25 people, and in lecture halls with approximately 250. To project the Mac screen to groups in lecture halls, the Electrohome EDP-57 video…
VENI, video, VICI: The merging of computer and video technologies
NASA Technical Reports Server (NTRS)
Horowitz, Jay G.
1993-01-01
The topics covered include the following: High Definition Television (HDTV) milestones; visual information bandwidth; television frequency allocation and bandwidth; horizontal scanning; workstation RGB color domain; NTSC color domain; American HDTV time-table; HDTV image size; digital HDTV hierarchy; task force on digital image architecture; open architecture model; future displays; and the ULTIMATE imaging system.
Objective analysis of image quality of video image capture systems
NASA Astrophysics Data System (ADS)
Rowberg, Alan H.
1990-07-01
As Picture Archiving and Communication System (PACS) technology has matured, video image capture has become a common way of capturing digital images from many modalities. While digital interfaces, such as those which use the ACR/NEMA standard, will become more common in the future, and are preferred because of the accuracy of image transfer, video image capture will be the dominant method in the short term, and may continue to be used for some time because of the low cost and high speed often associated with such devices. Currently, virtually all installed systems use methods of digitizing the video signal that is produced for display on the scanner viewing console itself. A series of digital test images has been developed for display on either a GE CT9800 or a GE Signa MRI scanner. These images have been captured with each of five commercially available image capture systems, and the resultant images digitally transferred on floppy disk to a PC1286 computer containing Optimast image analysis software. Here the images can be displayed in a comparative manner for visual evaluation, in addition to being analyzed statistically. Each of the images has been designed to support certain tests, including noise, accuracy, linearity, gray scale range, stability, slew rate, and pixel alignment. These image capture systems vary widely in these characteristics, in addition to the presence or absence of other artifacts, such as shading and moire pattern. Other accessories such as video distribution amplifiers and noise filters can also add or modify artifacts seen in the captured images, often giving unusual results. Each image is described, together with the tests which were performed using it. One image contains alternating black and white lines, each one pixel wide, after equilibration strips ten pixels wide.
While some systems have a slew rate fast enough to track this correctly, others blur it to an average shade of gray, and do not resolve the lines, or give horizontal or vertical streaking. While many of these results are significant from an engineering standpoint alone, there are clinical implications and some anatomy or pathology may not be visualized if an image capture system is used improperly.
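The alternating-line test can be simulated numerically. The sketch below (an illustration, not the study's software) generates a one-pixel black/white line pattern and measures how much of its contrast survives a neighbourhood average, which is roughly how an inadequate slew rate manifests in the captured image:

```python
def line_pattern(n):
    """One-pixel-wide alternating black (0) and white (255) columns."""
    return [0 if i % 2 == 0 else 255 for i in range(n)]

def horizontal_blur(row):
    """Crude model of a capture chain too slow to track the signal:
    each pixel is averaged with its immediate neighbours."""
    out = []
    for i in range(len(row)):
        nbrs = row[max(0, i - 1):i + 2]
        out.append(sum(nbrs) / len(nbrs))
    return out

def contrast(row):
    return max(row) - min(row)

ideal = line_pattern(16)
blurred = horizontal_blur(ideal)
print(contrast(ideal))          # 255
print(contrast(blurred) < 255)  # True: the slow system loses modulation
```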
Predictive Displays for High Latency Teleoperation
2016-08-04
"PREDICTIVE DISPLAYS FOR HIGH LATENCY TELEOPERATION" Analysis of existing approach: a comms channel links the vehicle and the OCU, carrying throttle, steer and brake commands in one direction and video in the other…presents an opportunity to mitigate outgoing latency. Video is not governed by physics; however, video is dependent on the state of the vehicle, which…Commands and estimates via UDP; H.264 video via UDP; vehicle state via UDP. C++ implementation with 2 threads, using OpenCV for image manipulation and FFMPEG for video decoding.
High Resolution Displays Using NCAP Liquid Crystals
NASA Astrophysics Data System (ADS)
Macknick, A. Brian; Jones, Phil; White, Larry
1989-07-01
Nematic curvilinear aligned phase (NCAP) liquid crystals have been found useful for high information content video displays. NCAP materials are liquid crystals which have been encapsulated in a polymer matrix and which have a light transmission which is variable with applied electric fields. Because NCAP materials do not require polarizers, their on-state transmission is substantially better than twisted nematic cells. All dimensional tolerances are locked in during the encapsulation process and hence there are no critical sealing or spacing issues. By controlling the polymer/liquid crystal morphology, switching speeds of NCAP materials have been significantly improved over twisted nematic systems. Recent work has combined active matrix addressing with NCAP materials. Active matrices, such as thin film transistors, have given displays of high resolution. The paper will discuss the advantages of NCAP materials specifically designed for operation at video rates on transistor arrays; applications for both backlit and projection displays will be discussed.
Free viewpoint TV and its international standardization
NASA Astrophysics Data System (ADS)
Tanimoto, Masayuki
2009-05-01
We have developed a new type of television named FTV (Free-viewpoint TV). FTV is an innovative visual media that enables us to view a 3D scene by freely changing our viewpoints. We proposed the concept of FTV and constructed the world's first real-time system including the complete chain of operation from image capture to display. We also realized FTV on a single PC and FTV with free listening-point audio. FTV is based on the ray-space method that represents one ray in real space with one point in the ray-space. We have also developed new types of ray capture and display technologies such as a 360-degree mirror-scan ray capturing system and a 360-degree ray-reproducing display. MPEG regarded FTV as the most challenging 3D media and started the international standardization activities of FTV. The first phase of FTV is MVC (Multi-view Video Coding) and the second phase is 3DV (3D Video). MVC was completed in March 2009. 3DV is a standard that targets serving a variety of 3D displays. It will be completed within the next two years.
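The ray-space idea can be illustrated with the common two-plane parameterization, used here as a simplified stand-in for Tanimoto's formulation: a ray is reduced to the coordinates at which it crosses two parallel reference planes, so one ray in real space becomes one point (x, u) in ray-space:

```python
def ray_to_point(origin, direction, z1=0.0, z2=1.0):
    """Map a ray in the x-z plane to a two-plane ray-space point.
    origin = (x0, z0), direction = (dx, dz);
    returns (x at plane z1, x at plane z2)."""
    x0, z0 = origin
    dx, dz = direction
    t1 = (z1 - z0) / dz
    t2 = (z2 - z0) / dz
    return (x0 + t1 * dx, x0 + t2 * dx)

# A ray through (0.5, -1) heading straight along +z pierces both
# reference planes at x = 0.5, so its ray-space point is (0.5, 0.5).
print(ray_to_point((0.5, -1.0), (0.0, 1.0)))  # (0.5, 0.5)
```

Free-viewpoint rendering then amounts to reading out, for each desired view, the subset of ray-space points that pass through the virtual camera.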
Military display performance parameters
NASA Astrophysics Data System (ADS)
Desjardins, Daniel D.; Meyer, Frederick
2012-06-01
The military display market is analyzed in terms of four of its segments: avionics, vetronics, dismounted soldier, and command and control. Requirements are summarized for a number of technology-driving parameters, to include luminance, night vision imaging system compatibility, gray levels, resolution, dimming range, viewing angle, video capability, altitude, temperature, shock and vibration, etc., for direct-view and virtual-view displays in cockpits and crew stations. Technical specifications are discussed for selected programs.
Jia, Jia; Chen, Jhensi; Yao, Jun; Chu, Daping
2017-03-17
A high quality 3D display requires a high amount of optical information throughput, which needs an appropriate mechanism to distribute information in space uniformly and efficiently. This study proposes a front-viewing system which is capable of managing the required amount of information efficiently from a high bandwidth source and projecting 3D images with a decent size and a large viewing angle at video rate in full colour. It employs variable gratings to support a high bandwidth distribution. This concept is scalable and the system can be made compact in size. A horizontal parallax only (HPO) proof-of-concept system is demonstrated by projecting holographic images from a digital micro mirror device (DMD) through rotational tiled gratings before they are realised on a vertical diffuser for front-viewing.
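The role of the variable gratings can be sketched with the basic grating equation, sin θ = mλ/d (first order, m = 1): coarser or finer grating periods steer the replayed image over different angles, which is how the system distributes bandwidth across a large viewing zone. The numbers below are illustrative and not taken from the paper:

```python
import math

def diffraction_angle_deg(wavelength_nm, period_um, order=1):
    """First-order diffraction angle for a grating of the given period."""
    s = order * (wavelength_nm * 1e-9) / (period_um * 1e-6)
    if abs(s) > 1:
        raise ValueError("evanescent: no propagating order")
    return math.degrees(math.asin(s))

# Green light (532 nm) on a 2 um grating deflects by about 15.4 degrees;
# halving the period roughly doubles the deflection, widening the zone.
print(round(diffraction_angle_deg(532, 2.0), 1))
print(round(diffraction_angle_deg(532, 1.0), 1))
```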
Integrating cockpit display and video recorder systems
NASA Astrophysics Data System (ADS)
Bailey, David C.; Jones, Romie; Testerman, David
1995-06-01
A pair of flight data recording and playback systems is described for the F-22 and F-15. These systems employ multiplexing techniques to expand the amount of data recorded, with the inherent benefits that follow. Variations between the systems accommodate the different avionics architectures of the two aircraft.
VID-R and SCAN: Tools and Methods for the Automated Analysis of Visual Records.
ERIC Educational Resources Information Center
Ekman, Paul; And Others
The VID-R (Visual Information Display and Retrieval) system that enables computer-aided analysis of visual records is composed of a film-to-television chain, two videotape recorders with complete remote control of functions, a video-disc recorder, three high-resolution television monitors, a teletype, a PDP-8, a video and audio interface, three…
Study to Expand Simulation Cockpit Displays of Advanced Sensors
1981-03-01
common source is being used for multiple sensor types). If independent displays and controls are desired, then two independent video sources or sensor…line is inserted in each gap, the result is the familiar 2:1 interlace. If two lines are inserted, the result is 3:1 interlace, and so on. The total…symbol generators. If these systems are operating at various scan rates and if a common display device, such as a multifunction display (MFD), is to
NASA Astrophysics Data System (ADS)
Morita, Shinji; Yamazawa, Kazumasa; Yokoya, Naokazu
2003-01-01
This paper describes a new networked telepresence system which realizes virtual tours into a visualized dynamic real world without significant time delay. Our system is realized by the following three steps: (1) video-rate omnidirectional image acquisition, (2) transportation of an omnidirectional video stream via internet, and (3) real-time view-dependent perspective image generation from the omnidirectional video stream. Our system is applicable to real-time telepresence in the situation where the real world to be seen is far from an observation site, because the time delay from the change of the user's viewing direction to the change of displayed image is small and does not depend on the actual distance between both sites. Moreover, multiple users can look around from a single viewpoint in a visualized dynamic real world in different directions at the same time. In experiments, we have proved that the proposed system is useful for internet telepresence.
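Step (3), generating a view-dependent perspective image, amounts to resampling the already-received omnidirectional frame along the user's gaze. A toy Python version for a cylindrical 360-degree panorama (the paper's sensor geometry differs; this is only a sketch of the principle) maps each output column to a panorama column by its viewing angle:

```python
import math

def view_columns(pan_width, heading_deg, hfov_deg=60.0, out_width=8):
    """Panorama column index for each column of a perspective view
    centred on heading_deg, for a 360-degree cylindrical panorama."""
    cols = []
    f = (out_width / 2) / math.tan(math.radians(hfov_deg / 2))
    for i in range(out_width):
        ang = math.degrees(math.atan((i + 0.5 - out_width / 2) / f))
        a = (heading_deg + ang) % 360.0
        cols.append(int(a / 360.0 * pan_width) % pan_width)
    return cols

# Turning the head only changes which panorama columns are sampled;
# no new data is needed from the remote site, hence the low latency.
print(view_columns(3600, heading_deg=0.0))
print(view_columns(3600, heading_deg=90.0))
```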
Lim, Tae Ho; Choi, Hyuk Joong; Kang, Bo Seung
2010-01-01
We assessed the feasibility of using a camcorder mobile phone for teleconsulting about cardiac echocardiography. The diagnostic performance of evaluating left ventricle (LV) systolic function was measured by three emergency medicine physicians. A total of 138 short echocardiography video sequences (from 70 subjects) was selected from previous emergency room ultrasound examinations. The measurement of LV ejection fraction based on the transmitted video displayed on a mobile phone was compared with the original video displayed on the LCD monitor of the ultrasound machine. The image quality was evaluated using the double stimulus impairment scale (DSIS). All observers showed high sensitivity. There was an improvement in specificity with the observer's increasing experience of cardiac ultrasound. Although the image quality of video on the mobile phone was lower than that of the original, a receiver operating characteristic (ROC) analysis indicated that there was no significant difference in diagnostic performance. Immediate basic teleconsulting of echocardiography movies is possible using current commercially-available mobile phone systems.
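The diagnostic-performance figures rest on standard confusion-matrix arithmetic. A small illustration with made-up counts (not the study's data):

```python
def sens_spec(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN), specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical reading of 138 clips: 60 true positives, 4 missed,
# 66 true negatives, 8 false alarms.
sens, spec = sens_spec(60, 4, 66, 8)
print(round(sens, 2), round(spec, 2))  # 0.94 0.89
```

An ROC analysis repeats this computation over a range of decision thresholds and compares the resulting curves between the phone and the reference display.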
Computer-aided video exposure monitoring.
Walsh, P T; Clark, R D; Flaherty, S; Gentry, S J
2000-01-01
A computer-aided video exposure monitoring system was used to record exposure information. The system comprised a handheld camcorder, portable video cassette recorder, radio-telemetry transmitter/receiver, and handheld or notebook computers for remote data logging, photoionization gas/vapor detectors (PIDs), and a personal aerosol monitor. The following workplaces were surveyed using the system: dry cleaning establishments--monitoring tetrachloroethylene in the air and in breath; printing works--monitoring white spirit type solvent; tire manufacturing factory--monitoring rubber fume; and a slate quarry--monitoring respirable dust and quartz. The system based on the handheld computer, in particular, simplified the data acquisition process compared with earlier systems in use by our laboratory. The equipment is more compact and easier to operate, and allows more accurate calibration of the instrument reading on the video image. Although a variety of data display formats are possible, the best format for videos intended for educational and training purposes was the review-preview chart superimposed on the video image of the work process. Recommendations for reducing exposure by engineering or by modifying work practice were possible through use of the video exposure system in the dry cleaning and tire manufacturing applications. The slate quarry work illustrated how the technique can be used to test ventilation configurations quickly to see their effect on the worker's personal exposure.
Interactive display system having a scaled virtual target zone
Veligdan, James T.; DeSanto, Leonard
2006-06-13
A display system includes a waveguide optical panel having an inlet face and an opposite outlet face. A projector and imaging device cooperate with the panel for projecting a video image thereon. An optical detector bridges at least a portion of the waveguides for detecting a location on the outlet face within a target zone of an inbound light spot. A controller is operatively coupled to the imaging device and detector for displaying a cursor on the outlet face corresponding with the detected location of the spot within the target zone.
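The "scaled virtual target zone" amounts to an affine mapping from the detected spot position inside the zone to cursor coordinates on the full outlet face. A hypothetical sketch (the coordinate names and numbers are assumptions, not from the patent):

```python
def zone_to_screen(spot, zone_origin, zone_size, screen_size):
    """Scale a light-spot hit inside the target zone to a cursor
    position on the full outlet face."""
    sx = (spot[0] - zone_origin[0]) / zone_size[0] * screen_size[0]
    sy = (spot[1] - zone_origin[1]) / zone_size[1] * screen_size[1]
    return (sx, sy)

# A spot at the centre of a 200x150 zone maps to the centre
# of a 1024x768 display.
print(zone_to_screen((150, 125), (50, 50), (200, 150), (1024, 768)))
# (512.0, 384.0)
```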
The 30/20 GHz fixed communications systems service demand assessment. Volume 3: Appendices
NASA Technical Reports Server (NTRS)
Gabriszeski, T.; Reiner, P.; Rogers, J.; Terbo, W.
1979-01-01
The market analysis of voice, video, and data 18/30 GHz communications systems services and satellite transmission services is discussed. Detail calculations, computer displays of traffic, survey questionnaires, and detailed service forecasts are presented.
Prevention: lessons from video display installations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Margach, C.B.
1983-04-01
Workers interacting with video display units for periods in excess of two hours per day report significantly increased visual discomfort, fatigue and inefficiencies, as compared with workers performing similar tasks, but without the video viewing component. Difficulties in focusing and the appearance of myopia are among the problems being described. With a view to preventing or minimizing such problems, principles and procedures are presented providing for (a) modification of physical features of the video workstation and (b) improvement in the visual performances of the individual video unit operator.
Broadening the interface bandwidth in simulation based training
NASA Technical Reports Server (NTRS)
Somers, Larry E.
1989-01-01
Currently most computer based simulations rely exclusively on computer generated graphics to create the simulation. When training is involved, the method almost exclusively used to display information to the learner is text displayed on the cathode ray tube. MICROEXPERT Systems is concentrating on broadening the communications bandwidth between the computer and user by employing a novel approach to video image storage combined with sound and voice output. An expert system is used to combine and control the presentation of analog video, sound, and voice output with computer based graphics and text. Researchers are currently involved in the development of several graphics based user interfaces for NASA, the U.S. Army, and the U.S. Navy. Here, the focus is on the human factors considerations, software modules, and hardware components being used to develop these interfaces.
Li, Xiangrui; Lu, Zhong-Lin
2012-02-29
Display systems based on conventional computer graphics cards are capable of generating images with 8-bit gray level resolution. However, most experiments in vision research require displays with more than 12 bits of luminance resolution. Several solutions are available. Bits++ (1) and DataPixx (2) use the Digital Visual Interface (DVI) output from graphics cards and high resolution (14 or 16-bit) digital-to-analog converters to drive analog display devices. The VideoSwitcher (3) described here combines analog video signals from the red and blue channels of graphics cards with different weights using a passive resistor network (4) and an active circuit to deliver identical video signals to the three channels of color monitors. The method provides an inexpensive way to enable high-resolution monochromatic displays using conventional graphics cards and analog monitors. It can also provide trigger signals that can be used to mark stimulus onsets, making it easy to synchronize visual displays with physiological recordings or response time measurements. Although computer keyboards and mice are frequently used in measuring response times (RT), the accuracy of these measurements is quite low. The RTbox is a specialized hardware and software solution for accurate RT measurements. Connected to the host computer through a USB connection, the driver of the RTbox is compatible with all conventional operating systems. It uses a microprocessor and high-resolution clock to record the identities and timing of button events, which are buffered until the host computer retrieves them. The recorded button events are not affected by potential timing uncertainties or biases associated with data transmission and processing in the host computer. The asynchronous storage greatly simplifies the design of user programs. Several methods are available to synchronize the clocks of the RTbox and the host computer.
The RTbox can also receive external triggers and be used to measure RT with respect to external events. Both VideoSwitcher and RTbox are available for users to purchase. The relevant information and many demonstration programs can be found at http://lobes.usc.edu/.
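The gain in luminance resolution from mixing two 8-bit channels with unequal weights can be counted directly. The weight of 1/128 below is illustrative; the actual VideoSwitcher ratio depends on its resistor network:

```python
import math

def gray_levels(weight=1 / 128):
    """Count distinct output levels of blue + weight*red,
    with both channels 8-bit (0-255)."""
    return len({b + weight * r for b in range(256) for r in range(256)})

n = gray_levels()
print(n)                       # 32896 distinct levels
print(round(math.log2(n), 1))  # about 15 bits of luminance resolution
```

With a 1/128 weight every combination (128*blue + red)/128 is distinct, so the two 8-bit channels yield 32896 levels rather than 256.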
2008-04-01
Index (NASA-TLX: Hart & Staveland, 1988), and a Post-Test Questionnaire. Demographic data/Background Questionnaire. This questionnaire was used…very confident). NASA-TLX. The NASA-TLX (Hart & Staveland, 1988) is a subjective workload assessment tool. A multidimensional weighting…completed the NASA-TLX. The test trials were randomized across participants and occurred in a counterbalanced order that took into account video display
An evaluation of the efficacy of video displays for use with chimpanzees (Pan troglodytes).
Hopper, Lydia M; Lambeth, Susan P; Schapiro, Steven J
2012-05-01
Video displays for behavioral research lend themselves particularly well to studies with chimpanzees (Pan troglodytes), as their vision is comparable to humans', yet there has been no formal test of the efficacy of video displays as a form of social information for chimpanzees. To address this, we compared the learning success of chimpanzees shown video footage of a conspecific compared to chimpanzees shown a live conspecific performing the same novel task. Footage of an unfamiliar chimpanzee operating a bidirectional apparatus was presented to 24 chimpanzees (12 males, 12 females), and their responses were compared to those of a further 12 chimpanzees given the same task but with no form of information. Secondly, we also compared the responses of the chimpanzees in the video display condition to responses of eight chimpanzees from a previously published study of ours, in which chimpanzees observed live models. Chimpanzees shown a video display were more successful than those in the control condition and showed comparable success to those that saw a live model. Regarding fine-grained copying (i.e. the direction that the door was pushed), only chimpanzees that observed a live model showed significant matching to the model's methods with their first response. Yet, when all the responses made by the chimpanzees were considered, comparable levels of matching were shown by chimpanzees in both the live and video conditions. © 2012 Wiley Periodicals, Inc.
Sector-scanning echocardiography
NASA Technical Reports Server (NTRS)
Henry, W. L.; Griffith, J. M.
1975-01-01
The mechanical sector scanner is described in detail, and its clinical application is discussed. Cross-sectional images of the heart are obtained in real time using this system. The sector scanner has three major components: (a) a hand-held scanner, (b) a video display, and (c) a video recorder. The system provides diagnostic information in a wide spectrum of cardiac diseases: it quantitates the severity of mitral stenosis by measurement of the mitral valve orifice area, and it is useful in diagnosing infants, children, and adults with cyanotic congenital heart disease.
A design of real time image capturing and processing system using Texas Instrument's processor
NASA Astrophysics Data System (ADS)
Wee, Toon-Joo; Chaisorn, Lekha; Rahardja, Susanto; Gan, Woon-Seng
2007-09-01
In this work, we developed and implemented an image capturing and processing system that is equipped with the capability of capturing images from an input video in real time. The input video can come from a PC, video camcorder or DVD player. We developed two modes of operation in the system. In the first mode, an input image from the PC is processed on the processing board (a development platform with a digital signal processor) and is displayed on the PC. In the second mode, the current captured image from the video camcorder (or from the DVD player) is processed on the board but is displayed on the LCD monitor. The major difference between our system and other existing conventional systems is that image-processing functions are performed on the board instead of the PC (so that the functions can be used for further developments on the board). The user can control the operations of the board through the Graphic User Interface (GUI) provided on the PC. In order to have a smooth image data transfer between the PC and the board, we employed Real Time Data Transfer (RTDX) technology to create a link between them. For image processing functions, we developed three main groups of function: (1) Point Processing; (2) Filtering; and (3) 'Others'. Point Processing includes rotation, negation and mirroring. The Filtering category provides median, adaptive, smooth and sharpen filtering. In the 'Others' category, auto-contrast adjustment, edge detection, segmentation and sepia color are provided; these functions either add an effect to the image or enhance it. We have developed and implemented our system using the C/C# programming languages on the TMS320DM642 (or DM642) board from Texas Instruments (TI). The system was showcased at the College of Engineering (CoE) exhibition 2006 at Nanyang Technological University (NTU), where more than 40 users tried our system. It is demonstrated that our system is adequate for real time image capturing.
Our system can be used or applied for applications such as medical imaging, video surveillance, etc.
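The Point Processing group is the simplest to illustrate. A plain-Python sketch of negation and horizontal mirroring on an 8-bit grayscale image follows; the actual system implements these in C/C# on the DM642:

```python
def negate(img):
    """Invert an 8-bit grayscale image (list of rows of 0-255 ints)."""
    return [[255 - p for p in row] for row in img]

def mirror(img):
    """Mirror the image horizontally."""
    return [list(reversed(row)) for row in img]

img = [[0, 64], [128, 255]]
print(negate(img))  # [[255, 191], [127, 0]]
print(mirror(img))  # [[64, 0], [255, 128]]
```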
Plant Chlorophyll Content Imager with Reference Detection Signals
NASA Technical Reports Server (NTRS)
Spiering, Bruce A. (Inventor); Carter, Gregory A. (Inventor)
2000-01-01
A portable plant chlorophyll imaging system is described which collects light reflected from a target plant and separates the collected light into two different wavelength bands. These wavelength bands, or channels, are described as having center wavelengths of 700 nm and 840 nm. The light collected in these two channels is processed using synchronized video cameras. A controller provided in the system compares the level of light of video images reflected from a target plant with a reference level of light from a source illuminating the plant. The percent of reflection in the two separate wavelength bands from a target plant are compared to provide a ratio video image which indicates a relative level of plant chlorophyll content and physiological stress. Multiple display modes are described for viewing the video images.
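The core computation is a per-pixel band ratio. A schematic Python version follows (the patented system performs this at video rate in hardware; the sample reflectance values are invented for illustration):

```python
def ratio_image(r700, r840, eps=1e-6):
    """Per-pixel R700/R840 reflectance ratio. Higher values indicate
    lower chlorophyll content: stressed tissue reflects more near
    700 nm, while 840 nm reflectance is largely insensitive."""
    return [[a / (b + eps) for a, b in zip(ra, rb)]
            for ra, rb in zip(r700, r840)]

healthy = ratio_image([[0.05]], [[0.50]])   # strong 700 nm absorption
stressed = ratio_image([[0.25]], [[0.50]])  # absorption reduced
print(healthy[0][0] < stressed[0][0])  # True
```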
Polyplanar optical display electronics
NASA Astrophysics Data System (ADS)
DeSanto, Leonard; Biscardi, Cyrus
1997-07-01
The polyplanar optical display (POD) is a unique display screen which can be used with any projection source. The prototype ten inch display is two inches thick and has a matte black face which allows for high contrast images. The prototype being developed is a form, fit and functional replacement display for the B-52 aircraft which uses a monochrome ten-inch display. In order to achieve a long lifetime, the new display uses a 100 milliwatt green solid-state laser at 532 nm as its light source. To produce real-time video, the laser light is being modulated by a digital light processing (DLP) chip manufactured by Texas Instruments. In order to use the solid-state laser as the light source and also fit within the constraints of the B-52 display, the digital micromirror device (DMD) circuit board is removed from the Texas Instruments DLP light engine assembly. Due to the compact architecture of the projection system within the display chassis, the DMD chip is operated remotely from the Texas Instruments circuit board. We discuss the operation of the DMD divorced from the light engine and the interfacing of the DMD board with various video formats including the format specific to the B-52 aircraft. A brief discussion of the electronics required to drive the laser is also presented.
Development of a microportable imaging system for otoscopy and nasoendoscopy evaluations.
VanLue, Michael; Cox, Kenneth M; Wade, James M; Tapp, Kevin; Linville, Raymond; Cosmato, Charlie; Smith, Tom
2007-03-01
Imaging systems for patients with cleft palate typically are not portable, but are essential to obtain an audiovisual record of nasoendoscopy and otoscopy procedures. Practitioners who evaluate patients in rural, remote, or otherwise medically underserved areas are expected to obtain audiovisual recordings of these procedures as part of standard clinical practice. Therefore, patients must travel substantial distances to medical facilities that have standard recording equipment. This project describes the specific components, strengths and weaknesses of an MPEG-4 digital recording system for otoscopy/nasoendoscopy evaluation of patients with cleft palate that is both portable and compatible with store-and-forward telemedicine applications. Three digital recording configurations (TabletPC, handheld digital video recorder, and an 8-mm digital camcorder) were used to record the audio/video signal from an analog video scope system. The handheld digital video recorder was most effective at capturing audio/video and displaying procedures in real time. The system described was particularly easy to use, because it required no postrecording file capture or compression for later review, transfer, and/or archiving. The handheld digital recording system was assembled from commercially available components. The portability and the telemedicine compatibility of the handheld digital video recorder offers a viable solution for the documentation of nasoendoscopy and otoscopy procedures in remote, rural, or other locations where reduced medical access precludes the use of larger component audio/video systems.
A Macintosh-Based Scientific Images Video Analysis System
NASA Technical Reports Server (NTRS)
Groleau, Nicolas; Friedland, Peter (Technical Monitor)
1994-01-01
A set of experiments was designed at MIT's Man-Vehicle Laboratory in order to evaluate the effects of zero gravity on the human orientation system. During many of these experiments, the movements of the eyes are recorded on high quality video cassettes. The images must be analyzed off-line to calculate the position of the eyes at every moment in time. To this aim, I have implemented a simple inexpensive computerized system which measures the angle of rotation of the eye from digitized video images. The system is implemented on a desktop Macintosh computer, processes one play-back frame per second and exhibits adequate levels of accuracy and precision. The system uses LabVIEW, a digital output board, and a video input board to control a VCR, digitize video images, analyze them, and provide a user friendly interface for the various phases of the process. The system uses the Concept Vi LabVIEW library (Graftek's Image, Meudon la Foret, France) for image grabbing and displaying as well as translation to and from LabVIEW arrays. Graftek's software layer drives an Image Grabber board from Neotech (Eastleigh, United Kingdom). A Colour Adapter box from Neotech provides adequate video signal synchronization. The system also requires a LabVIEW driven digital output board (MacADIOS II from GW Instruments, Cambridge, MA) controlling a slightly modified VCR remote control used mainly to advance the video tape frame by frame.
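The per-frame measurement reduces to locating an eye feature and converting its displacement into a rotation angle. A simplified sketch follows (the MIT system's exact algorithm is not given in the abstract; the threshold, calibration factor and toy image here are assumptions): threshold the dark pupil, take its centroid, and convert horizontal displacement to gaze angle with a linear small-angle calibration:

```python
def pupil_centroid(img, thresh=50):
    """Centroid of pixels darker than thresh (the pupil region)."""
    xs, ys = [], []
    for y, row in enumerate(img):
        for x, p in enumerate(row):
            if p < thresh:
                xs.append(x)
                ys.append(y)
    return sum(xs) / len(xs), sum(ys) / len(ys)

def gaze_angle_deg(cx, center_x, px_per_deg=5.0):
    """Linear pixels-to-degrees calibration (small angles only)."""
    return (cx - center_x) / px_per_deg

# 6x6 frame: bright sclera (200) with a dark 2x2 pupil offset right.
img = [[200] * 6 for _ in range(6)]
for y in (2, 3):
    for x in (4, 5):
        img[y][x] = 10
cx, cy = pupil_centroid(img)
print(gaze_angle_deg(cx, center_x=2.5))  # 0.4 (positive: rotated right)
```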
NASA Astrophysics Data System (ADS)
Kuehl, C. Stephen
1996-06-01
Video signal system performance can be compromised in a military aircraft cockpit management system (CMS) with the tailoring of vintage Electronic Industries Association (EIA) RS170 and RS343A video interface standards. Video analog interfaces degrade when induced system noise is present. Further signal degradation has been traditionally associated with signal data conversions between avionics sensor outputs and the cockpit display system. If the CMS engineering process is not carefully applied during the avionics video and computing architecture development, extensive and costly redesign will occur when visual sensor technology upgrades are incorporated. Close monitoring and technical involvement in video standards groups provides the knowledge base necessary for avionic systems engineering organizations to architect adaptable and extendible cockpit management systems. With the Federal Communications Commission (FCC) in the process of adopting the Digital HDTV Grand Alliance System standard proposed by the Advanced Television Systems Committee (ATSC), the entertainment and telecommunications industries are adopting and supporting the emergence of new serial/parallel digital video interfaces and data compression standards that will drastically alter present NTSC-M video processing architectures. The re-engineering of the U.S. broadcasting system must initially preserve the electronic equipment wiring networks within broadcast facilities to make the transition to HDTV affordable. International committee activities in technical forums like the ITU-R (formerly CCIR), ANSI/SMPTE, IEEE, and ISO/IEC are establishing global consensus on video signal parameterizations that support a smooth transition from existing analog-based broadcasting facilities to fully digital computerized systems. An opportunity exists for implementing these new video interface standards over existing video coax/triax cabling in military aircraft cockpit management systems.
Reductions in signal conversion processing steps, major improvements in video noise reduction, and an added capability to pass audio/embedded digital data within the digital video signal stream are the significant performance increases associated with the incorporation of digital video interface standards. By analyzing the historical progression of military CMS developments, establishing a systems engineering process for CMS design, tracing the commercial evolution of video signal standardization, adopting commercial video signal terminology and definitions, and comparing/contrasting CMS architecture modifications using digital video interfaces, this paper provides a technical explanation of how a systems engineering process approach to video interface standardization can result in extendible and affordable cockpit management systems.
A system for automatic analysis of blood pressure data for digital computer entry
NASA Technical Reports Server (NTRS)
Miller, R. L.
1972-01-01
The operation of an automatic blood pressure data system is described. The analog blood pressure signal is analyzed by three separate circuits: systolic, diastolic, and cycle defect. Digital computer output is displayed on a teletype, a paper tape punch, and a video screen. An illustration of the system is included.
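The three analysis functions the abstract names (systolic, diastolic, and cycle defect) map naturally onto a digital sketch: per heartbeat, systolic pressure is the waveform maximum and diastolic the minimum. The function names and the trigger-based cycle splitting below are illustrative assumptions, not a description of the original analog circuitry:

```python
def analyze_cycle(samples):
    """Systolic (max) and diastolic (min) pressure for one beat's samples."""
    if not samples:
        raise ValueError("empty cycle")
    return max(samples), min(samples)

def split_cycles(signal, trigger):
    """Split a pressure trace into beats at upward crossings of `trigger`.
    A cycle that never crosses back would be flagged as defective upstream."""
    cycles, current = [], []
    prev = signal[0]
    for v in signal:
        if prev < trigger <= v and current:
            cycles.append(current)  # close out the previous beat
            current = []
        current.append(v)
        prev = v
    if current:
        cycles.append(current)
    return cycles
```

Running `analyze_cycle` over each element of `split_cycles(trace, trigger)` yields the per-beat systolic/diastolic pairs a computer entry system would log.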
DMD: a digital light processing application to projection displays
NASA Astrophysics Data System (ADS)
Feather, Gary A.
1989-01-01
Revolutionary technologies achieve rapid product and subsequent business diffusion only when the inventors focus on technology application, maturation, and proliferation. A revolutionary technology is emerging with micro-electromechanical systems (MEMS). MEMS are being developed by leveraging mature semiconductor processing coupled with mechanical systems into complete, integrated, useful systems. The digital micromirror device (DMD), a Texas Instruments-invented MEMS, has focused on its application to projection displays. The DMD has demonstrated its application as a digital light processor, processing and producing compelling computer and video projection displays. This tutorial discusses requirements in the projection display market and the potential solutions offered by this digital light processing system. The seminar includes an evaluation of the market, system needs, design, fabrication, application, and performance results of a system using digital light processing solutions.
Recent advances in nondestructive evaluation made possible by novel uses of video systems
NASA Technical Reports Server (NTRS)
Generazio, Edward R.; Roth, Don J.
1990-01-01
Complex materials are being developed for use in future advanced aerospace systems. High temperature materials have been targeted as a major area of materials development. The development of composites consisting of ceramic matrix and ceramic fibers or whiskers is currently being aggressively pursued internationally. These new advanced materials are difficult and costly to produce; however, their low density and high operating temperature range are needed for the next generation of advanced aerospace systems. These materials represent a challenge to the nondestructive evaluation community. Video imaging techniques not only enhance the nondestructive evaluation, but they are also required for proper evaluation of these advanced materials. Specific research examples are given, highlighting the impact that video systems have had on the nondestructive evaluation of ceramics. An image processing technique for computerized determination of grain and pore size distribution functions from microstructural images is discussed. The uses of video and computer systems for displaying, evaluating, and interpreting ultrasonic image data are presented.
Video Altimeter and Obstruction Detector for an Aircraft
NASA Technical Reports Server (NTRS)
Delgado, Frank J.; Abernathy, Michael F.; White, Janis; Dolson, William R.
2013-01-01
Video-based altimetric and obstruction-detection systems for aircraft have been partially developed. The hardware of a system of this type includes a downward-looking video camera, a video digitizer, a Global Positioning System receiver or other means of measuring the aircraft velocity relative to the ground, a gyroscope-based or other attitude-determination subsystem, and a computer running altimetric and/or obstruction-detection software. From the digitized video data, the altimetric software computes the pixel velocity in an appropriate part of the video image and the corresponding angular relative motion of the ground within the field of view of the camera. Then, by use of trigonometric relationships among the aircraft velocity, the attitude of the camera, the angular relative motion, and the altitude, the software computes the altitude. The obstruction-detection software performs somewhat similar calculations as part of a larger task in which it uses the pixel velocity data from the entire video image to compute a depth map, which can be correlated with a terrain map, showing locations of potential obstructions. The depth map can be used as a real-time hazard display and/or to update an obstruction database.
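For the simplest geometry, a nadir-pointing camera over flat terrain, the trigonometric relationship reduces to altitude = ground speed / angular rate of the scene. A minimal sketch, assuming a hypothetical per-pixel field of view and a small off-nadir tilt correction (the constant and function below are illustrative, not from the developed system):

```python
import math

IFOV_RAD = 0.5e-3  # assumed per-pixel instantaneous field of view, radians (hypothetical)

def altitude_m(ground_speed_mps, pixel_velocity_px_s, tilt_rad=0.0):
    """Altitude from optical-flow pixel velocity for a downward-looking camera.

    The scene's angular rate is pixel_velocity * IFOV; for a nadir view,
    altitude = ground_speed / angular_rate, scaled by cos(tilt) when the
    camera is pitched off nadir by `tilt_rad`.
    """
    if pixel_velocity_px_s <= 0:
        raise ValueError("no measurable scene motion")
    angular_rate = pixel_velocity_px_s * IFOV_RAD  # rad/s
    return ground_speed_mps * math.cos(tilt_rad) / angular_rate
```

For example, at 50 m/s ground speed with the scene drifting 100 px/s under the assumed IFOV, the estimate is 1000 m; the real system would additionally fold in the full camera attitude from the gyroscope subsystem.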
Young Children's Analogical Problem Solving: Gaining Insights from Video Displays
ERIC Educational Resources Information Center
Chen, Zhe; Siegler, Robert S.
2013-01-01
This study examined how toddlers gain insights from source video displays and use the insights to solve analogous problems. Two- to 2.5-year-olds viewed a source video illustrating a problem-solving strategy and then attempted to solve analogous problems. Older but not younger toddlers extracted the problem-solving strategy depicted in the video…
Platform for intraoperative analysis of video streams
NASA Astrophysics Data System (ADS)
Clements, Logan; Galloway, Robert L., Jr.
2004-05-01
Interactive, image-guided surgery (IIGS) has proven to increase the specificity of a variety of surgical procedures. However, current IIGS systems do not compensate for changes that occur intraoperatively and are not reflected in preoperative tomograms. Endoscopes and intraoperative ultrasound, used in minimally invasive surgery, provide real-time (RT) information in a surgical setting. Combining the information from RT imaging modalities with traditional IIGS techniques will further increase surgical specificity by providing enhanced anatomical information. In order to merge these techniques and obtain quantitative data from RT imaging modalities, a platform was developed to allow both the display and processing of video streams in RT. Using a Bandit-II CV frame grabber board (Coreco Imaging, St. Laurent, Quebec) and the associated library API, a dynamic link library was created in Microsoft Visual C++ 6.0 such that the platform could be incorporated into the IIGS system developed at Vanderbilt University. Performance characterization, using two relatively inexpensive host computers, has shown the platform capable of performing simple image processing operations on frames captured from a CCD camera and displaying the processed video data at near RT rates both independent of and while running the IIGS system.
Group tele-immersion: enabling natural interactions between groups at distant sites.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Christine L.; Stewart, Corbin; Nashel, Andrew
2005-08-01
We present techniques and a system for synthesizing views for video teleconferencing between small groups. In place of replicating one-to-one systems for each pair of users, we create a single unified display of the remote group. Instead of performing dense 3D scene computation, we use more cameras and trade off storage and hardware for computation. While it is expensive to directly capture a scene from all possible viewpoints, we have observed that the participants' viewpoints usually remain at a constant height (eye level) during video teleconferencing. Therefore, we can restrict the possible viewpoints to a virtual plane without sacrificing much of the realism, and in doing so we significantly reduce the number of required cameras. Based on this observation, we have developed a technique that uses light-field-style rendering to guarantee the quality of the synthesized views, using a linear array of cameras with a life-sized, projected display. Our full-duplex prototype system between Sandia National Laboratories, California and the University of North Carolina at Chapel Hill has been able to synthesize photo-realistic views at interactive rates, and has been used to video conference during regular meetings between the sites.
Augmented Reality-Based Navigation System for Wrist Arthroscopy: Feasibility
Zemirline, Ahmed; Agnus, Vincent; Soler, Luc; Mathoulin, Christophe L.; Liverneaux, Philippe A.; Obdeijn, Miryam
2013-01-01
Purpose In video surgery, and more specifically in arthroscopy, one of the major problems is positioning the camera and instruments within the anatomic environment. The concept of computer-guided video surgery has already been used in ear, nose, and throat (ENT), gynecology, and even in hip arthroscopy. These systems, however, rely on optical or mechanical sensors, which turn out to be restricting and cumbersome. The aim of our study was to develop and evaluate the accuracy of a navigation system based on electromagnetic sensors in video surgery. Methods We used an electromagnetic localization device (Aurora, Northern Digital Inc., Ontario, Canada) to track the movements in space of both the camera and the instruments. We have developed a dedicated application in the Python language, using the VTK library for the graphic display and the OpenCV library for camera calibration. Results A prototype has been designed and evaluated for wrist arthroscopy. It allows display of the theoretical position of instruments onto the arthroscopic view with useful accuracy. Discussion The augmented reality view represents valuable assistance when surgeons want to position the arthroscope or locate their instruments. It makes the maneuver more intuitive, increases comfort, saves time, and enhances concentration. PMID:24436832
A portable high-definition electronic endoscope based on embedded system
NASA Astrophysics Data System (ADS)
Xu, Guang; Wang, Liqiang; Xu, Jin
2012-11-01
This paper presents a low-power, portable high-definition (HD) electronic endoscope based on a Cortex-A8 embedded system. A 1/6-inch CMOS image sensor is used to acquire HD images with 1280×800 pixels. The camera interface of the A8 is designed to support images of various sizes and multiple video input formats such as the ITU-R BT.601/656 standard. Image rotation (90 degrees clockwise) and image processing functions are achieved by the CAMIF. The decode engine of the processor plays back or records HD videos at 30 frames per second, and the built-in HDMI interface transmits high-definition images to an external display. Image processing procedures such as demosaicking, color correction, and auto white balance are realized on the A8 platform. Other functions are selected through OSD settings. An LCD panel displays the real-time images. Snapshot pictures or compressed videos are saved to an SD card or transmitted to a computer through a USB interface. The size of the camera head is 4×4.8×15 mm with more than 3 meters of working distance. The whole endoscope system can be powered by a lithium battery, with the advantages of miniature size, low cost, and portability.
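Of the image processing steps listed (demosaicking, color correction, auto white balance), auto white balance is the easiest to illustrate. A gray-world sketch follows; the abstract does not specify which algorithm runs on the Cortex-A8, so this is only one plausible choice:

```python
def gray_world_awb(pixels):
    """Gray-world auto white balance for a list of (R, G, B) tuples:
    scale each channel so the per-channel means converge to the overall mean,
    on the assumption that the average scene color is neutral gray."""
    n = len(pixels)
    means = [sum(p[c] for p in pixels) / n for c in range(3)]
    gray = sum(means) / 3.0
    gains = [gray / m if m else 1.0 for m in means]
    return [tuple(min(255, int(round(p[c] * gains[c]))) for c in range(3))
            for p in pixels]
```

An embedded implementation would compute the channel means in fixed point over a subsampled frame rather than a Python list, but the gain logic is the same.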
Augmented reality-based navigation system for wrist arthroscopy: feasibility.
Zemirline, Ahmed; Agnus, Vincent; Soler, Luc; Mathoulin, Christophe L; Obdeijn, Miryam; Liverneaux, Philippe A
2013-11-01
In video surgery, and more specifically in arthroscopy, one of the major problems is positioning the camera and instruments within the anatomic environment. The concept of computer-guided video surgery has already been used in ear, nose, and throat (ENT), gynecology, and even in hip arthroscopy. These systems, however, rely on optical or mechanical sensors, which turn out to be restricting and cumbersome. The aim of our study was to develop and evaluate the accuracy of a navigation system based on electromagnetic sensors in video surgery. We used an electromagnetic localization device (Aurora, Northern Digital Inc., Ontario, Canada) to track the movements in space of both the camera and the instruments. We have developed a dedicated application in the Python language, using the VTK library for the graphic display and the OpenCV library for camera calibration. A prototype has been designed and evaluated for wrist arthroscopy. It allows display of the theoretical position of instruments onto the arthroscopic view with useful accuracy. The augmented reality view represents valuable assistance when surgeons want to position the arthroscope or locate their instruments. It makes the maneuver more intuitive, increases comfort, saves time, and enhances concentration.
Micro-video display with ocular tracking and interactive voice control
NASA Technical Reports Server (NTRS)
Miller, James E.
1993-01-01
In certain space-restricted environments, many of the benefits resulting from computer technology have been foregone because of the size, weight, inconvenience, and lack of mobility associated with existing computer interface devices. Accordingly, an effort to develop a highly miniaturized and 'wearable' computer display and control interface device, referred to as the Sensory Integrated Data Interface (SIDI), is underway. The system incorporates a micro-video display that provides data display and ocular tracking on a lightweight headset. Software commands are implemented by conjunctive eye movements and voice commands of the operator. In this initial prototyping effort, various 'off-the-shelf' components have been integrated with a desktop computer and a customized menu-tree software application to demonstrate feasibility and conceptual capabilities. When fully developed as a customized system, the interface device will allow mobile, 'hands-free' operation of portable computer equipment. It will thus allow integration of information technology applications into those restrictive environments, both military and industrial, that have not yet taken advantage of the computer revolution. This effort is Phase 1 of Small Business Innovative Research (SBIR) Topic number N90-331 sponsored by the Naval Undersea Warfare Center Division, Newport. The prime contractor is Foster-Miller, Inc. of Waltham, MA.
NASA Tech Briefs, April 2000. Volume 24, No. 4
NASA Technical Reports Server (NTRS)
2000-01-01
Topics covered include: Imaging/Video/Display Technology; Electronic Components and Circuits; Electronic Systems; Physical Sciences; Materials; Computer Programs; Mechanics; Bio-Medical; Test and Measurement; Mathematics and Information Sciences; Books and Reports.
Development of Targeting UAVs Using Electric Helicopters and Yamaha RMAX
2007-05-17
including the QNX real-time operating system. The video overlay board is useful to display the onboard camera's image with important information such as... real-time operating system. Fully utilizing the built-in multi-processing architecture with inter-process synchronization and communication
Code of Federal Regulations, 2011 CFR
2011-04-01
... capability to display the date and time of recorded events on video and/or digital recordings. The displayed... digital record retention. (1) All video recordings and/or digital records of coverage provided by the.... (3) Duly authenticated copies of video recordings and/or digital records shall be provided to the...
Code of Federal Regulations, 2014 CFR
2014-04-01
... capability to display the date and time of recorded events on video and/or digital recordings. The displayed... digital record retention. (1) All video recordings and/or digital records of coverage provided by the.... (3) Duly authenticated copies of video recordings and/or digital records shall be provided to the...
Code of Federal Regulations, 2013 CFR
2013-04-01
... capability to display the date and time of recorded events on video and/or digital recordings. The displayed... digital record retention. (1) All video recordings and/or digital records of coverage provided by the.... (3) Duly authenticated copies of video recordings and/or digital records shall be provided to the...
Code of Federal Regulations, 2010 CFR
2010-04-01
... capability to display the date and time of recorded events on video and/or digital recordings. The displayed... digital record retention. (1) All video recordings and/or digital records of coverage provided by the.... (3) Duly authenticated copies of video recordings and/or digital records shall be provided to the...
Code of Federal Regulations, 2012 CFR
2012-04-01
... capability to display the date and time of recorded events on video and/or digital recordings. The displayed... digital record retention. (1) All video recordings and/or digital records of coverage provided by the.... (3) Duly authenticated copies of video recordings and/or digital records shall be provided to the...
NASA Astrophysics Data System (ADS)
Schlam, E.
1983-01-01
Human factors in visible displays are discussed, taking into account an introduction to color vision, a laser optometric assessment of visual display viewability, the quantification of color contrast, human performance evaluations of digital image quality, visual problems of office video display terminals, and contemporary problems in airborne displays. Other topics considered relate to electroluminescent technology, liquid crystal and related technologies, plasma technology, and display terminals and systems. Attention is given to the application of electroluminescent technology to personal computers, electroluminescent driving techniques, thin-film electroluminescent devices with memory, the fabrication of very large electroluminescent displays, the operating properties of thermally addressed dye-switching liquid crystal displays, light field dichroic liquid crystal displays for very large area displays, and the hardening of military plasma displays for a nuclear environment.
Stockdale, Laura; Coyne, Sarah M
2018-01-01
The Internet Gaming Disorder Scale (IGDS) is a widely used measure of video game addiction, a pathology affecting a small percentage of all people who play video games. Emerging adult males are significantly more likely to be video game addicts. Few researchers have examined how people who qualify as video game addicts based on the IGDS compare to matched controls based on age, gender, race, and marital status. The current study compared IGDS video game addicts to matched non-addicts in terms of their mental, physical, and social-emotional health, using self-report survey methods. Addicts had poorer mental health and cognitive functioning, including poorer impulse control and more ADHD symptoms, compared to controls. Additionally, addicts displayed increased emotional difficulties, including increased depression and anxiety, felt more socially isolated, and were more likely to display symptoms of pathological internet pornography use. Female video game addicts were at unique risk for negative outcomes. The sample for this study was undergraduate college students, and self-report measures were used. Participants who met the IGDS criteria for video game addiction displayed poorer emotional, physical, mental, and social health, adding to the growing evidence that video game addiction is a valid phenomenon.
A teleconference with three-dimensional surgical video presentation on the 'usual' Internet.
Obuchi, Toshiro; Moroga, Toshihiko; Nakamura, Hiroshige; Shima, Hiroji; Iwasaki, Akinori
2015-03-01
Endoscopic surgery employing three-dimensional (3D) video images, such as a robotic surgery, has recently become common. However, the number of opportunities to watch such actual 3D videos is still limited due to many technical difficulties associated with showing 3D videos in front of an audience. A teleconference with 3D video presentations of robotic surgeries was held between our institution and a distant institution using a commercially available telecommunication appliance on the 'usual' Internet. Although purpose-built video displays and 3D glasses were necessary, no technical problems occurred during the presentation and discussion. This high-definition 3D telecommunication system can be applied to discussions about and education on 3D endoscopic surgeries for many surgeons, even in distant places, without difficulty over the usual Internet connection.
Video integrated measurement system. [Diagnostic display devices
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spector, B.; Eilbert, L.; Finando, S.
A Video Integrated Measurement (VIM) System is described which incorporates the use of various noninvasive diagnostic procedures (moire contourography, electromyography, posturometry, infrared thermography, etc.), used individually or in combination, for the evaluation of neuromusculoskeletal and other disorders and their management with biofeedback and other therapeutic procedures. The system provides for measuring individual diagnostic and therapeutic modes, or multiple modes by split-screen superimposition, of real-time (actual) images of the patient and idealized (ideal-normal) models on a video monitor, along with analog and digital data, graphics, color, and other transduced symbolic information. It is concluded that this system provides an innovative and efficient method by which the therapist and patient can interact in biofeedback training/learning processes and holds considerable promise for more effective measurement and treatment of a wide variety of physical and behavioral disorders.
NASA Technical Reports Server (NTRS)
1988-01-01
Hughes Aircraft Company's Probeye Model 3300 Thermal Video System consists of a tripod-mounted infrared scanner that detects the degree of heat emitted by an object and a TV monitor on which results are displayed. The latest addition to the Hughes line of infrared medical applications can detect temperature variations as fine as one-tenth of a degree centigrade. Thermography, proving to be a valuable screening tool in diagnosis, can produce information that precludes the necessity of performing more invasive tests that may be painful and hazardous. It is also useful in verifying a patient's progress through therapy and rehabilitation.
Automatic view synthesis by image-domain-warping.
Stefanoski, Nikolce; Wang, Oliver; Lang, Manuel; Greisen, Pierre; Heinzle, Simon; Smolic, Aljosa
2013-09-01
Today, stereoscopic 3D (S3D) cinema is already mainstream, and almost all new display devices for the home support S3D content. S3D distribution infrastructure to the home is already partly established in the form of 3D Blu-ray discs, video-on-demand services, and television channels. The necessity of wearing glasses, however, is often considered an obstacle that hinders broader acceptance of this technology in the home. Multiview autostereoscopic displays enable glasses-free perception of S3D content by several observers simultaneously, and support head-motion parallax in a limited range. To support multiview autostereoscopic displays within an already established S3D distribution infrastructure, a synthesis of new views from S3D video is needed. In this paper, a view synthesis method based on image-domain-warping (IDW) is presented that synthesizes new views directly from S3D video and operates fully automatically. IDW relies on an automatic and robust estimation of sparse disparities and image saliency information, and enforces target disparities in synthesized images using an image warping framework. Two configurations of the view synthesizer in the scope of a transmission and view synthesis framework are analyzed and evaluated. A transmission and view synthesis system that uses IDW was recently submitted to MPEG's call for proposals on 3D video technology, where it ranked among the four best-performing proposals.
Otto, Kristen J; Hapner, Edie R; Baker, Michael; Johns, Michael M
2006-02-01
Advances in commercial video technology have improved office-based laryngeal imaging. This study investigates the perceived image quality of a true high-definition (HD) video camera and the effect of magnification on laryngeal videostroboscopy. We performed a prospective, dual-armed, single-blinded analysis of a standard laryngeal videostroboscopic examination comparing 3 separate add-on camera systems: a 1-chip charge-coupled device (CCD) camera, a 3-chip CCD camera, and a true 720p (progressive scan) HD camera. Displayed images were controlled for magnification and image size (20-inch [50-cm] display, red-green-blue, and S-video cable for 1-chip and 3-chip cameras; digital visual interface cable and HD monitor for HD camera). Ten blinded observers were then asked to rate the following 5 items on a 0-to-100 visual analog scale: resolution, color, ability to see vocal fold vibration, sense of depth perception, and clarity of blood vessels. Eight unblinded observers were then asked to rate the difference in perceived resolution and clarity of laryngeal examination images when displayed on a 10-inch (25-cm) monitor versus a 42-inch (105-cm) monitor. A visual analog scale was used. These monitors were controlled for actual resolution capacity. For each item evaluated, randomized block design analysis demonstrated that the 3-chip camera scored significantly better than the 1-chip camera (p < .05). For the categories of color and blood vessel discrimination, the 3-chip camera scored significantly better than the HD camera (p < .05). For magnification alone, observers rated the 42-inch monitor statistically better than the 10-inch monitor. The expense of new medical technology must be judged against its added value. This study suggests that HD laryngeal imaging may not add significant value in perceived image quality over currently available video systems when a small monitor is used.
Although differences in clarity between standard and HD cameras may not be readily apparent on small displays, a large display size coupled with HD technology may impart improved diagnosis of subtle vocal fold lesions and vibratory anomalies.
LMDS Lightweight Modular Display System.
1982-02-16
based on standard functions. This means that the cost to produce a particular display function can be met in the most economical fashion and at the same... does not mean that the NTDS interface would be eliminated. What is anticipated is the use of ETHERNET at a low level of system interface, i.e., internal to... The architecture of the unit's (fig 3-4) input circuitry is based on a video table look-up ROM. The function
Video Bandwidth Compression System.
1980-08-01
scaling function, located between the inverse DPCM and inverse transform, on the decoder matrix multiplier chips. ...Bit Unpacker and Inverse DPCM Slave Sync Board... Inverse DPCM Loop Boards... Inverse Transform Board... Composite Video Output Board... Display Refresh Memory (Memory Section; Timing and Control)... Bit Unpacker and Inverse DPCM... Inverse Transform Processor
New ultraportable display technology and applications
NASA Astrophysics Data System (ADS)
Alvelda, Phillip; Lewis, Nancy D.
1998-08-01
MicroDisplay devices are based on a combination of technologies rooted in the extreme integration capability of conventionally fabricated CMOS active-matrix liquid crystal display substrates. Customized diffraction grating and optical distortion correction technology for lens-system compensation allow the elimination of many lenses and system-level components. The MicroDisplay Corporation's miniature integrated information display technology is rapidly leading to many new defense and commercial applications. There are no moving parts in MicroDisplay substrates, and the fabrication of the color-generating gratings, already part of the CMOS circuit fabrication process, is effectively cost- and manufacturing-process-free. The entire suite of the MicroDisplay Corporation's technologies was devised to create a line of application-specific integrated circuit single-chip display systems with integrated computing, memory, and communication circuitry. Next-generation portable communication, computer, and consumer electronic devices such as truly portable monitor and TV projectors, eyeglass- and head-mounted displays, pagers and Personal Communication Services handsets, and wristwatch-mounted video phones are among the many target commercial markets for MicroDisplay technology. Defense applications range from maintenance and repair support, to night-vision systems, to portable projectors for mobile command and control centers.
Haptic display for the VR arthroscopy training simulator
NASA Astrophysics Data System (ADS)
Ziegler, Rolf; Brandt, Christoph; Kunstmann, Christian; Mueller, Wolfgang; Werkhaeuser, Holger
1997-05-01
A specific desire to find new training methods arose from the new field called 'minimally invasive surgery.' With technical advances, modern video arthroscopy became the standard procedure in the OR. Holding the optical system with the video camera in one hand and watching the operation field on the monitor, the surgeon's other hand is free to guide, e.g., a probe. As arthroscopy became a more common procedure, it became obvious that some sort of special training was necessary to guarantee a certain level of qualification of the surgeons. Therefore, a hospital in Frankfurt, Germany approached the Fraunhofer Institute for Computer Graphics to develop a training system for arthroscopy based on VR techniques. The main remaining drawback of the developed simulator is the lack of haptic perception, especially of force feedback. In cooperation with the Department of Electro-Mechanical Construction at the Darmstadt Technical University, we have designed and built a haptic display for the VR arthroscopy training simulator. In parallel, we developed a concept for the integration of the haptic display in a configurable way.
Deaf-And-Mute Sign Language Generation System
NASA Astrophysics Data System (ADS)
Kawai, Hideo; Tamura, Shinichi
1984-08-01
We have developed a system which can recognize speech and generate the corresponding animation-like sign language sequence. The system is implemented on a popular personal computer with three video RAMs and a voice recognition board that recognizes only the registered voice of a specific speaker. Presently, forty sign language patterns and fifty finger spellings are stored on two floppy disks. Each sign pattern is composed of one to four sub-patterns: a pattern composed of a single sub-pattern is displayed as a still pattern; otherwise it is displayed as a motion pattern. This system will help communication between deaf-and-mute persons and hearing persons. For high-speed display, most of the programs are written in machine language.
High-definition video display based on the FPGA and THS8200
NASA Astrophysics Data System (ADS)
Qian, Jia; Sui, Xiubao
2014-11-01
This paper presents a high-definition video display solution based on an FPGA and the THS8200. The THS8200 is a video encoder chip from TI with three 10-bit DAC channels; it accepts video data in both 4:2:2 and 4:4:4 formats, and its data synchronization can come either from the dedicated synchronization signals HSYNC and VSYNC or from the SAV/EAV synchronization codes embedded in the video stream. In this design, the FPGA generates the address and control signals that access the data-storage array and produces the corresponding digital YCbCr video signals. These signals, together with the HSYNC and VSYNC synchronization signals also generated by the FPGA, serve as the inputs to the THS8200. To meet the bandwidth requirements of high-definition TV, we adopt video input in the 4:2:2 format over a 2×10-bit interface. The FPGA configures the THS8200's internal registers over the I2C bus so that the chip generates synchronization signals compliant with the SMPTE standard and converts the digital YCbCr signals into analog YPbPr signals. The composite analog YPbPr outputs thus consist of the image data and synchronization signals, superimposed inside the THS8200. Experimental results indicate that the method presented in this paper is a viable solution for high-definition video display and conforms to the input requirements of the new high-definition display devices.
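The embedded SAV/EAV synchronization codes mentioned above follow the BT.656 convention: an FF 00 00 preamble followed by a status word carrying field (F), vertical-blanking (V), and horizontal (H) flags plus four protection bits. As an illustrative sketch (not taken from the paper), the status word can be built like this:

```python
def sav_eav_code(f, v, h):
    """Build a 4-byte BT.656-style SAV/EAV timing reference code.

    f: field bit, v: vertical blanking bit, h: 0 for SAV, 1 for EAV.
    The preamble is always FF 00 00; the fourth byte carries F/V/H
    plus four Hamming-style protection bits P3..P0.
    """
    p3 = v ^ h
    p2 = f ^ h
    p1 = f ^ v
    p0 = f ^ v ^ h
    xy = (0x80 | (f << 6) | (v << 5) | (h << 4)
          | (p3 << 3) | (p2 << 2) | (p1 << 1) | p0)
    return bytes([0xFF, 0x00, 0x00, xy])
```

For example, the SAV code for active video in field 0 is `FF 00 00 80`, and the corresponding EAV code is `FF 00 00 9D`, matching the standard BT.656 tables.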
Analysis and design of stereoscopic display in stereo television endoscope system
NASA Astrophysics Data System (ADS)
Feng, Dawei
2008-12-01
Many 3D displays have been proposed for medical use. When designing and evaluating a new system, surgeons make three demands: first, precision; second, displayed images should be easy to understand; and third, because surgery lasts for hours, the display must not be fatiguing. The stereo television endoscope studied in this paper images the celiac viscera on the photosurfaces of the left and right CCDs, imitating human binocular stereo vision by means of a double optical path. The left and right video signals are processed by frequency multiplication and displayed on the monitor; the viewer can observe a stereo image with depth impression by using a polarized LCD screen and a pair of polarized glasses. Clinical experiments show that the stereo TV endoscope makes minimally invasive surgery safer and more reliable, shortens the operation time, and improves operation accuracy.
Woo, Kevin L; Rieucau, Guillaume
2008-07-01
The increasing use of the video playback technique in behavioural ecology reveals a growing need to ensure better control of the visual stimuli that focal animals experience. Technological advances now allow researchers to develop computer-generated animations instead of using video sequences of live-acting demonstrators. However, care must be taken to match the motion characteristics (speed and velocity) of the animation to the original video source. Here, we present a tool based on an optic flow analysis program that measures how closely the motion characteristics of computer-generated animations resemble those of videos of live-acting animals. We examined three distinct displays (tail-flick (TF), push-up body rock (PUBR), and slow arm wave (SAW)) exhibited by animations of Jacky dragons (Amphibolurus muricatus) that were compared to the original video sequences of live lizards. We found no significant differences between the motion characteristics of videos and animations across all three displays. Our results showed that our animations are similar to the originals in the speed and velocity features of each display. Researchers need to ensure that similar motion characteristics are represented in animation and video stimuli; this feature is a critical component in the future success of the video playback technique.
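The authors' optic-flow program is not described in detail in the abstract; a much simpler trajectory-based analogue of their speed comparison might look like the sketch below, where the two tracks and the tolerance value are invented for illustration only:

```python
import numpy as np

def mean_speed(trajectory, fps):
    """Mean speed (units/s) of a tracked point trajectory.

    trajectory: (n_frames, 2) array of x, y positions.
    """
    steps = np.diff(trajectory, axis=0)              # per-frame displacement
    return float(np.linalg.norm(steps, axis=1).mean() * fps)

# Compare a display motion in the original video vs. its animation
# (made-up coordinates; real data would come from tracking software).
video_track = np.array([[0, 0], [1, 0], [2, 0], [3, 0]], float)
anim_track = np.array([[0, 0], [1.1, 0], [2.1, 0], [3.2, 0]], float)
similar = abs(mean_speed(video_track, 25) - mean_speed(anim_track, 25)) < 2.5
```

A statistical test across many tracked points, rather than this single-threshold check, would be closer to the comparison the paper reports.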
A faster technique for rendering meshes in multiple display systems
NASA Astrophysics Data System (ADS)
Hand, Randall E.; Moorhead, Robert J., II
2003-05-01
Level-of-detail algorithms have been widely implemented in architectural VR walkthroughs and video games, but have not seen widespread use in VR terrain visualization systems. This thesis explains a set of optimizations that allow most current level-of-detail algorithms to run in the types of multiple-display systems used in VR. It improves the visual quality of the system through graphics hardware acceleration, and improves the framerate and running time through modifications to the computations that drive the algorithms. Using ROAM as a testbed, results show improvements between 10% and 100% on varying machines.
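ROAM-style refinement decisions typically project a patch's world-space geometric error into screen space and split the patch when the projected error exceeds a pixel tolerance. A generic sketch of that test (not the thesis's actual code) is:

```python
import math

def should_split(geometric_error, distance, fov_rad, screen_height_px, tau_px=2.0):
    """Decide whether a terrain patch needs refinement.

    Projects the patch's world-space geometric error to pixels and
    splits when it exceeds the screen-space tolerance tau_px.
    """
    if distance <= 0:
        return True  # viewer inside the patch: always refine
    # Pixels subtended by one world unit at this distance for a vertical FOV.
    pixels_per_unit = screen_height_px / (2.0 * distance * math.tan(fov_rad / 2.0))
    return geometric_error * pixels_per_unit > tau_px
```

A nearby patch with 1 unit of error splits, while a distant patch with tiny error does not, which is what bounds triangle counts independently of terrain size.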
NASA Tech Briefs, December 2000. Volume 24, No. 12
NASA Technical Reports Server (NTRS)
2000-01-01
Topics include: special coverage sections on Imaging/Video/Display Technology, and sections on electronic components and systems, test and measurement, software, information sciences, and special sections of Electronics Tech Briefs and Motion Control Tech Briefs.
The Use Of Videography For Three-Dimensional Motion Analysis
NASA Astrophysics Data System (ADS)
Hawkins, D. A.; Hawthorne, D. L.; DeLozier, G. S.; Campbell, K. R.; Grabiner, M. D.
1988-02-01
Special video path editing capabilities, with custom hardware and software, have been developed for use in conjunction with existing video acquisition hardware and firmware. This system has simplified the task of quantifying the kinematics of human movement. A set of retro-reflective markers is secured to a subject performing a given task (e.g., walking, throwing, swinging a golf club). Multiple cameras, a video processor, and a computer workstation collect video data while the task is performed. Software has been developed to edit video files, create centroid data, and identify marker paths. Multi-camera path files are combined to form a 3D path file using the DLT method of cinematography. A separate program converts the 3D path file into kinematic data by creating a set of local coordinate axes and performing a series of coordinate transformations from one local system to the next. The kinematic data are then displayed for review and/or comparison.
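The DLT (direct linear transformation) step described above combines each camera's two projection equations into one linear least-squares system for the unknown 3D point. A minimal sketch, with synthetic camera coefficients chosen purely for illustration:

```python
import numpy as np

def dlt_reconstruct(cameras, image_points):
    """Least-squares 3D reconstruction from two or more calibrated views.

    cameras: list of 11-element DLT coefficient vectors (L1..L11).
    image_points: matching (u, v) observations, one per camera.
    Each view contributes two linear equations in the unknown X, Y, Z:
      (L1 - u*L9)X + (L2 - u*L10)Y + (L3 - u*L11)Z = u - L4, and
      (L5 - v*L9)X + (L6 - v*L10)Y + (L7 - v*L11)Z = v - L8.
    """
    A, b = [], []
    for L, (u, v) in zip(cameras, image_points):
        L1, L2, L3, L4, L5, L6, L7, L8, L9, L10, L11 = L
        A.append([L1 - u * L9, L2 - u * L10, L3 - u * L11]); b.append(u - L4)
        A.append([L5 - v * L9, L6 - v * L10, L7 - v * L11]); b.append(v - L8)
    xyz, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return xyz

# Two toy cameras: one reporting u = X, v = Y; the other u = Z, v = Y.
cameras = [[1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0],
           [0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0]]
points = [(2.0, 3.0), (4.0, 3.0)]
xyz = dlt_reconstruct(cameras, points)  # recovers the point (2, 3, 4)
```

In practice the L coefficients come from calibrating each camera against a control-point frame of known 3D coordinates.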
Vroom: designing an augmented environment for remote collaboration in digital cinema production
NASA Astrophysics Data System (ADS)
Margolis, Todd; Cornish, Tracy
2013-03-01
As media technologies become increasingly affordable, compact and inherently networked, new generations of telecollaborative platforms continue to arise which integrate these new affordances. Virtual reality has been primarily concerned with creating simulations of environments that can transport participants to real or imagined spaces that replace the "real world". Meanwhile, Augmented Reality systems have evolved to interleave objects from Virtual Reality environments into the physical landscape. Perhaps now there is a new class of systems that reverses this precept to enhance dynamic media landscapes and immersive physical display environments, enabling intuitive data exploration through collaboration. Vroom (Virtual Room) is a next-generation reconfigurable tiled display environment in development at the California Institute for Telecommunications and Information Technology (Calit2) at the University of California, San Diego. Vroom enables freely scalable digital collaboratories, connecting distributed, high-resolution visualization resources for collaborative work in the sciences, engineering and the arts. Vroom transforms a physical space into an immersive media environment with large-format interactive display surfaces, video teleconferencing and spatialized audio built on a high-speed optical network backbone. Vroom enables group collaboration for local and remote participants to share knowledge and experiences. Possible applications include: remote learning, command and control, storyboarding, post-production editorial review, high-resolution video playback, 3D visualization, screencasting, and image, video and multimedia file sharing. To support these various scenarios, Vroom features support for multiple user interfaces (optical tracking, touch UI, gesture interface, etc.), directional and spatialized audio, giga-pixel image interactivity, 4K video streaming, 3D visualization and telematic production.
This paper explains the design process that has been utilized to make Vroom an accessible and intuitive immersive environment for remote collaboration specifically for digital cinema production.
Novel use of video glasses during binocular microscopy in the otolaryngology clinic.
Fastenberg, Judd H; Fang, Christina H; Akbar, Nadeem A; Abuzeid, Waleed M; Moskowitz, Howard S
2018-06-06
The development of portable, high-resolution video displays such as video glasses allows clinicians the opportunity to offer patients an increased ability to visualize aspects of their physical examination in an ergonomic and cost-effective manner. The objective of this pilot study is to trial the use of video glasses for patients undergoing binocular microscopy and to better understand some of the potential benefits of the enhanced display option. This study comprised a single treatment group. Patients seen in the otolaryngology clinic who required binocular microscopy for diagnosis and treatment were recruited. All patients wore video glasses during their otoscopic examination. An additional cohort of patients who required binocular microscopy was also recruited but did not use the video glasses during their examination. Patients subsequently completed a 10-point Likert scale survey that assessed their comfort, anxiety, and satisfaction with the examination as well as their general understanding of their otologic condition. A total of 29 patients who used the video glasses were recruited, including those with normal examinations, cerumen impaction, or chronic ear disease. Based on the survey results, patients reported a high level of satisfaction and comfort during their exam with video glasses. Patients who used the video glasses did not exhibit any increased anxiety with their examination. Patients reported that video glasses improved their understanding, and they expressed a desire to wear the glasses again during repeat exams. This pilot study demonstrates that video glasses may represent a viable alternative display option in the otolaryngology clinic. The results show that the use of video glasses is associated with high patient comfort and satisfaction during binocular microscopy.
Further investigation is warranted to determine the potential for this display option in other facets of patient care as well as in expanding patient understanding of disease and anatomy. Copyright © 2018 Elsevier Inc. All rights reserved.
Spherical versus flat displays for communicating climate science concepts through stories
NASA Astrophysics Data System (ADS)
Schollaert Uz, S.; Storksdieck, M.; Duncan, B. N.
2016-12-01
One of the most compelling ways to display global Earth science data is through spherical displays. Museums around the world use Science On a Sphere for informal education of the general public, commonly for Earth science. An increasing number of universities and K-12 school systems are acquiring spheres to support formal education curricula, but the use of spheres in education is relatively new, and understanding of their advantages and best practices is still evolving. Many museums do not have the resources to staff their sphere with a facilitator, or they have high turnover of volunteer facilitators without a science background. Many K-12 teachers lack the resources or training needed to utilize sphere technology to address global phenomena or Earth system science. One solution to this "facilitator problem" has been the creation of "canned shows" for spheres, like ClimateBits: short videos that help people visualize Earth science concepts through global data sets and simple story-telling. To understand whether and when data-driven story-telling works best on a sphere, we surveyed groups that saw identical Earth system science stories presented on a spherical display versus a flat screen. We also surveyed identical groups comparing live Earth science data story-telling to the ClimateBits videos. Some of the advantages of each format were most apparent in the qualitative comments at the end of the surveys.
Impact of pain behaviors on evaluations of warmth and competence.
Ashton-James, Claire E; Richardson, Daniel C; de C Williams, Amanda C; Bianchi-Berthouze, Nadia; Dekker, Peter H
2014-12-01
This study investigated the social judgments that are made about people who appear to be in pain. Fifty-six participants viewed 2 video clips of human figures exercising. The videos were created by a motion tracking system, and showed dots that had been placed at various points on the body, so that body motion was the only visible cue. One of the figures displayed pain behaviors (eg, rubbing, holding, hesitating), while the other did not. Without any other information about the person in each video, participants evaluated each person on a variety of attributes associated with interpersonal warmth, competence, mood, and physical fitness. As well as judging them to be in more pain, participants evaluated the person who displayed pain behavior as less warm and less competent than the person who did not display pain behavior. In addition, the person who displayed pain behavior was perceived to be in a more negative mood and to have poorer physical fitness than the person who did not, and these perceptions contributed to the impact of pain behaviors on evaluations of warmth and competence, respectively. The implications of these negative social evaluations for social relationships, well-being, and pain assessment in persons in chronic pain are discussed. Copyright © 2014 International Association for the Study of Pain. Published by Elsevier B.V. All rights reserved.
Effects of Immediate Instructor Feedback on Group Discussion Participants.
ERIC Educational Resources Information Center
Jurma, William E.; Froelich, Deidre L.
1984-01-01
Investigated the effects of immediate instructor feedback, via a video display system (ComET system), on the performance of group discussion participants. Found that receivers of immediate feedback were more satisfied with their performances, participated in discussions of higher quality, and were no more anxious than individuals not receiving…
77 FR 53184 - 36(b)(1) Arms Sales Notification
Federal Register 2010, 2011, 2012, 2013, 2014
2012-08-31
..., multi-field of view EO/IR system. The system provides color daylight TV and night time IR video with a... along with ground moving target indicator (GMTI) modes. It will also have two onboard workstations that...-locate, collect, and display the relevant information to two operators for analysis and recording...
Does a video displaying a stair climbing model increase stair use in a worksite setting?
Van Calster, L; Van Hoecke, A-S; Octaef, A; Boen, F
2017-08-01
This study evaluated the effects of improving the visibility of the stairwell and of displaying a video with a stair climbing model on climbing and descending stair use in a worksite setting. Intervention study. Three consecutive one-week intervention phases were implemented: (1) the visibility of the stairs was improved by the attachment of pictograms that indicated the stairwell; (2) a video showing a stair climbing model was sent to the employees by email; and (3) the same video was displayed on a television screen at the point-of-choice (POC) between the stairs and the elevator. The interventions took place in two buildings. The implementation of the interventions varied between these buildings and the sequence was reversed. Improving the visibility of the stairs increased both stair climbing (+6%) and descending stair use (+7%) compared with baseline. Sending the video by email yielded no additional effect on stair use. By contrast, displaying the video at the POC increased stair climbing in both buildings by 12.5% on average. One week after the intervention, the positive effects on stair climbing remained in one of the buildings, but not in the other. These findings suggest that improving the visibility of the stairwell and displaying a stair climbing model on a screen at the POC can result in a short-term increase in both climbing and descending stair use. Copyright © 2017 The Royal Society for Public Health. Published by Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Richards, Stephanie E. (Compiler); Levine, Howard G.; Romero, Vergel
2016-01-01
Biotube was developed for plant gravitropic research investigating the potential for magnetic fields to orient plant roots as they grow in microgravity. Prior to flight, experimental seeds are placed into seed cassettes, each capable of containing up to 10 seeds, and inserted between two magnets located within one of three Magnetic Field Chambers (MFCs). Biotube is stored within an International Space Station (ISS) stowage locker and provides three levels of containment for chemical fixatives. Features include temperature monitoring, fixative/preservative delivery to specimens, and real-time video imaging downlink. Biotube's primary subsystems are: (1) the Water Delivery System, which automatically activates and controls the delivery of water to initiate seed germination; (2) the Fixative Storage and Delivery System, which stores and delivers chemical fixative or RNAlater to each seed cassette; (3) the Digital Imaging System, consisting of 4 charge-coupled device (CCD) cameras, a video multiplexer, a lighting multiplexer, and 16 infrared light-emitting diodes (LEDs) that provide illumination while photos are being captured; and (4) the Command and Data Management System, which provides overall control of the integrated subsystems, the graphical user interface, system status and error message display, image display, and other functions.
Video Display Terminals: Radiation Issues.
ERIC Educational Resources Information Center
Murray, William E.
1985-01-01
Discusses information gathered in past few years related to health effects of video display terminals (VDTs) with particular emphasis given to issues raised by VDT users. Topics covered include radiation emissions, health concerns, radiation surveys, occupational radiation exposure standards, and long-term risks. (17 references) (EJS)
The Eye Catching Property of Digital-Signage with Scent and a Scent-Emitting Video Display System
NASA Astrophysics Data System (ADS)
Tomono, Akira; Otake, Syunya
In this paper, an effective method of attracting glances to digital signage by emitting a scent is described. The simulation experiment was conducted in an immersive VR system because an experiment in an actual passageway faced many restrictions. To investigate the eye-catching property of the digital signage, passers-by's eye movements were analyzed. The experiments clarified that digital signage with scent attracted attention and left a strong impression in memory. Next, a scent-emitting video display system applied to digital signage is described. To this end, a scent-emitting device must be developed that can quickly change the scents it releases and present them from a distance (by a non-contact method), thus maintaining the relationship between the scent and the image. We propose a new method in which a device that can release pressurized gases is placed behind a display screen perforated with tiny pores. Scents ejected from this device travel through the pores to the front side of the screen. An excellent scent delivery characteristic was obtained because the distance to the user is short and the scent is presented from the front. We also present a method for inducing viewer reactions using on-screen images, enabling scent release to coincide precisely with viewer inhalations. We anticipate that the simultaneous presentation of scents and video images will deepen viewers' comprehension of these images.
Quick-disconnect harness system for helmet-mounted displays
NASA Astrophysics Data System (ADS)
Bapu, P. T.; Aulds, M. J.; Fuchs, Steven P.; McCormick, David M.
1992-10-01
We have designed a pilot's harness-mounted, high-voltage quick-disconnect connector with 62 pins to transmit voltages up to 13.5 kV and video signals with 70 MHz bandwidth for a binocular helmet-mounted display system. It connects and disconnects with power off, and disconnects 'hot' without pilot intervention and without producing external sparks or exposing hot embers to the explosive cockpit environment. We have implemented a procedure in which the high-voltage pins disconnect inside a hermetically sealed unit before the physical separation of the connector. The 'hot' separation triggers a crowbar circuit in the high-voltage power supplies for additional protection. Conductor locations and shields are designed to reduce capacitance in the circuit and avoid crosstalk among adjacent circuits. The quick-disconnect connector and wiring harness are human-engineered to ensure pilot safety and mobility. The connector backshell is equipped with two hybrid video amplifiers to improve the clarity of the video signals. Shielded wires and coaxial cables are molded as a multi-layered ribbon for maximum flexibility between the pilot's harness and helmet. Stiff cabling is provided between the quick-disconnect connector and the aircraft console to control behavior during seat ejection. The components of the system have been successfully tested for safety, performance, ergonomic considerations, and reliability.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nguyen, V; James, J; Wang, B
Purpose: To describe an in-house video goggle feedback system for motion management during simulation and treatment of radiation therapy patients. Methods: This video goggle system works by splitting and amplifying the video output signal directly from the Varian Real-Time Position Management (RPM) workstation or TrueBeam imaging workstation into two signals using a Distribution Amplifier. The first signal S[1] gets reconnected back to the monitor. The second signal S[2] gets connected to the input of a Video Scaler. The S[2] signal can be scaled, cropped and panned in real time to display only the relevant information to the patient. The output signal from the Video Scaler gets connected to an HDMI Extender Transmitter via a DVI-D to HDMI converter cable. The S[2] signal can be transported from the HDMI Extender Transmitter to the HDMI Extender Receiver located inside the treatment room via a Cat5e/6 cable. Inside the treatment room, the HDMI Extender Receiver is permanently mounted on the wall near the conduit where the Cat5e/6 cable is located. An HDMI cable is used to connect from the output of the HDMI Receiver to the video goggles. Results: This video goggle feedback system is currently being used at two institutions. At one institution, the system was just recently implemented for simulation and treatments on two breath-hold gated patients with 8+ total fractions over a two month period. At the other institution, the system was used to treat 100+ breath-hold gated patients on three Varian TrueBeam linacs and has been operational for twelve months. The average time to prepare the video goggle system for treatment is less than 1 minute. Conclusion: The video goggle system provides an efficient and reliable method to set up a video feedback signal for radiotherapy patients with motion management.
Spatial constraints of stereopsis in video displays
NASA Technical Reports Server (NTRS)
Schor, Clifton
1989-01-01
Recent developments in video technology, such as liquid crystal displays and shutters, have made it feasible to incorporate stereoscopic depth into the 3-D representations on 2-D displays. However, depth has already been vividly portrayed in video displays without stereopsis using the classical artists' depth cues described by Helmholtz (1866) and the dynamic depth cues described in detail by Ittelson (1952). Successful static depth cues include overlap, size, linear perspective, texture gradients, and shading. Effective dynamic cues include looming (Regan and Beverly, 1979) and motion parallax (Rogers and Graham, 1982). Stereoscopic depth is superior to the monocular distance cues under certain circumstances. It is most useful at portraying depth intervals as small as 5 to 10 arc secs. For this reason it is extremely useful in user-video interactions such as telepresence. Objects can be manipulated in 3-D space, for example, while a person who controls the operations views a virtual image of the manipulated object on a remote 2-D video display. Stereopsis also provides structure and form information in camouflaged surfaces such as tree foliage. Motion parallax also reveals form; however, without other monocular cues such as overlap, motion parallax can yield an ambiguous perception. For example, a turning sphere, portrayed as solid by parallax, can appear to rotate either leftward or rightward. However, only one direction of rotation is perceived when stereo-depth is included. If the scene is static, then stereopsis is the principal cue for revealing the camouflaged surface structure. Finally, dynamic stereopsis provides information about the direction of motion in depth (Regan and Beverly, 1979). Clearly there are many spatial constraints, including spatial frequency content, retinal eccentricity, exposure duration, target spacing, and disparity gradient, which - when properly adjusted - can greatly enhance stereodepth in video displays.
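The 5 to 10 arc sec sensitivity quoted above corresponds to remarkably small depth intervals. Under the standard small-angle approximation, disparity ≈ I·Δd/d², where I is the interocular distance and d the viewing distance. The sketch below inverts that relation; the 65 mm interocular distance is a conventional assumed value, not a figure from this report:

```python
import math

def depth_from_disparity(disparity_arcsec, viewing_distance_m, interocular_m=0.065):
    """Depth interval (m) corresponding to a given retinal disparity.

    Inverts the small-angle approximation disparity = I * dd / d**2.
    """
    delta_rad = disparity_arcsec * math.pi / (180 * 3600)  # arcsec -> radians
    return delta_rad * viewing_distance_m ** 2 / interocular_m
```

At a 0.5 m viewing distance, a 10 arc sec disparity works out to a depth step on the order of 0.2 mm, which is why stereopsis so strongly outperforms monocular cues for fine depth intervals.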
Feasibility study of utilizing ultraportable projectors for endoscopic video display (with videos).
Tang, Shou-Jiang; Fehring, Amanda; Mclemore, Mac; Griswold, Michael; Wang, Wanmei; Paine, Elizabeth R; Wu, Ruonan; To, Filip
2014-10-01
Modern endoscopy requires video display. Recent miniaturized, ultraportable projectors are affordable, durable, and offer quality image display. We explored the feasibility of using ultraportable projectors in endoscopy. Prospective bench-top comparison; clinical feasibility study. Masked comparison study of images displayed via 2 Samsung ultraportable light-emitting diode projectors (pocket-sized SP-HO3; pico projector SP-P410M) and 1 Microvision Showwx-II Laser pico projector. BENCH-TOP FEASIBILITY STUDY: Prerecorded endoscopic video was streamed via computer. CLINICAL COMPARISON STUDY: Live high-definition endoscopy video was simultaneously displayed through each processor onto a standard liquid crystal display monitor and projected onto a portable, pull-down projection screen. Endoscopists, endoscopy nurses, and technicians rated the video images; ratings were analyzed by linear mixed-effects regression models with random intercepts. All projectors were easy to set up, adjust, focus, and operate, with no real-time lapse for any. Bench-top study outcomes: the Samsung pico was preferred to the Laser pico, with an overall rating 1.5 units higher (95% confidence interval [CI] = 0.7-2.4), P < .001; the Samsung pocket was preferred to the Laser pico, 3.3 units higher (95% CI = 2.4-4.1), P < .001; and the Samsung pocket was preferred to the Samsung pico, 1.7 units higher (95% CI = 0.9-2.5), P < .001. The clinical comparison study confirmed the Samsung pocket projector as best, with an overall rating 2.3 units higher (95% CI = 1.6-3.0), P < .001, than the Samsung pico. Low brightness currently limits pico projector use in clinical endoscopy. The pocket projector, with higher brightness levels (170 lumens), is clinically useful. Continued improvements to ultraportable projectors will fill a needed niche in endoscopy through portability, reduced cost, and equal or better image quality. © The Author(s) 2013.
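A full linear mixed-effects model with random intercepts is beyond a short sketch, but its core idea, removing each rater's personal baseline by pairing ratings within raters, can be illustrated with stdlib tools. The ratings below are invented, and the normal-approximation confidence interval is a simplification of the paper's analysis:

```python
import statistics

def within_rater_preference(ratings_a, ratings_b):
    """Mean within-rater rating difference (A - B) with a rough 95% CI.

    Pairing the two projectors within each rater cancels the
    rater-specific intercept, the simplest analogue of a
    random-intercept mixed model.
    """
    diffs = [a - b for a, b in zip(ratings_a, ratings_b)]
    mean = statistics.fmean(diffs)
    se = statistics.stdev(diffs) / len(diffs) ** 0.5
    return mean, (mean - 1.96 * se, mean + 1.96 * se)

# Four hypothetical raters scoring projector A vs. projector B.
mean_diff, ci = within_rater_preference([8, 9, 7, 8], [6, 7, 5, 6])
```

A CI excluding zero would indicate a consistent preference across raters, analogous to the P < .001 contrasts reported above.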
Advanced Extravehicular Mobility Unit Informatics Software Design
NASA Technical Reports Server (NTRS)
Wright, Theodore
2014-01-01
This is a description of the software design for the 2013 edition of the Advanced Extravehicular Mobility Unit (AEMU) Informatics computer assembly. The Informatics system is an optional part of the space suit assembly. It adds a graphical interface for displaying suit status, timelines, procedures, and caution and warning information. In the future it will display maps with GPS position data, and video and still images captured by the astronaut.
Flight simulator with spaced visuals
NASA Technical Reports Server (NTRS)
Gilson, Richard D. (Inventor); Thurston, Marlin O. (Inventor); Olson, Karl W. (Inventor); Ventola, Ronald W. (Inventor)
1980-01-01
A flight simulator arrangement wherein a conventional, movable-base flight trainer is combined with a visual cue display surface spaced a predetermined distance from an eye position within the trainer. Thus, three degrees of motive freedom (roll, pitch and crab) are provided for a visual, proprioceptive, and vestibular cue system by the trainer, while the remaining geometric visual cue image alterations are developed by a video system. A geometric approach to computing the runway image eliminates the need to electronically compute trigonometric functions, while utilization of a line generator and a designated vanishing point at the video system raster permits facile development of the images of the longitudinal edges of the runway.
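The "geometric approach" mentioned above can avoid trigonometric functions because perspective projection reduces to similar triangles, and parallel runway edges converge to a single vanishing point on the raster. A hypothetical sketch of this idea (not the patent's actual circuit logic):

```python
def project_point(x, y, z, focal=1.0):
    """Perspective-project a runway point using similar triangles only.

    Camera at the origin looking down +z; no trigonometric functions
    are required, mirroring the geometric approach described above.
    """
    return focal * x / z, focal * y / z

# A straight runway edge parallel to the view axis (constant x, y):
# as z grows, its projected image converges toward the vanishing point.
near_edge = project_point(-20.0, -3.0, 100.0)
far_edge = project_point(-20.0, -3.0, 10000.0)
```

Drawing each longitudinal edge then reduces to a straight line from a near projected point to the fixed vanishing point, which is exactly what a line generator can produce in hardware.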
Method and System for Producing Full Motion Media to Display on a Spherical Surface
NASA Technical Reports Server (NTRS)
Starobin, Michael A. (Inventor)
2015-01-01
A method and system for producing full motion media for display on a spherical surface is described. The method may include selecting a subject of full motion media for display on a spherical surface. The method may then include capturing the selected subject as full motion media (e.g., full motion video) in a rectilinear domain. The method may then include processing the full motion media in the rectilinear domain for display on a spherical surface, such as by orienting the full motion media, adding rotation to the full motion media, processing edges of the full motion media, and/or distorting the full motion media in the rectilinear domain for instance. After processing the full motion media, the method may additionally include providing the processed full motion media to a spherical projection system, such as a Science on a Sphere system.
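One common way to relate rectilinear-domain media to a spherical display is an equirectangular mapping between frame pixels and sphere coordinates. The sketch below assumes the frame covers the full sphere; the actual Science on a Sphere processing pipeline may differ:

```python
def pixel_to_latlon(x, y, width, height):
    """Map an equirectangular frame pixel to latitude/longitude (degrees).

    Assumes the rectilinear-domain frame covers the whole sphere:
    x spans longitude -180..180 and y spans latitude 90..-90,
    sampling each pixel at its center.
    """
    lon = (x + 0.5) / width * 360.0 - 180.0
    lat = 90.0 - (y + 0.5) / height * 180.0
    return lat, lon
```

Orientation and rotation adjustments, like those the method describes, then become simple offsets in longitude before the projector warp is applied.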
Janosik, Elzbieta; Grzesik, Jan
2003-01-01
The aim of this work was to evaluate the influence of different lighting levels at workstations with video display terminals (VDTs) on the course of the operators' visual work, and to determine the optimal lighting levels at VDT workstations. For two kinds of job (entry of figures from a typescript and editing of text displayed on the screen), the work capacity, the degree of visual strain and the operators' subjective symptoms were determined for four lighting levels (200, 300, 500 and 750 lx). It was found that work at VDT workstations may overload the visual system and cause eye complaints as well as a reduction of accommodation or convergence strength. It was also noted that editing text displayed on the screen is more burdensome for operators than entering figures from a typescript. Moreover, the examination results showed that the lighting at VDT workstations should be higher than 200 lx, and that 300 lx makes work conditions most comfortable during the entry of figures from a typescript, and 500 lx during the editing of text displayed on the screen.
ERIC Educational Resources Information Center
Walsh, Janet
1982-01-01
Discusses issues related to possible health hazards associated with viewing video display terminals. Includes some findings of the 1979 NIOSH report on Potential Hazards of Video Display Terminals indicating level of radiation emitted is low and providing recommendations related to glare and back pain/muscular fatigue problems. (JN)
Virtual navigation performance: the relationship to field of view and prior video gaming experience.
Richardson, Anthony E; Collaer, Marcia L
2011-04-01
Two experiments examined whether learning a virtual environment was influenced by field of view and how it related to prior video gaming experience. In the first experiment, participants (42 men, 39 women; M age = 19.5 yr., SD = 1.8) performed worse on a spatial orientation task displayed with a narrow field of view in comparison to medium and wide field-of-view displays. Counter to initial hypotheses, wide field-of-view displays did not improve performance over medium displays, and this was replicated in a second experiment (30 men, 30 women; M age = 20.4 yr., SD = 1.9) presenting a more complex learning environment. Self-reported video gaming experience correlated with several spatial tasks: virtual environment pointing and tests of Judgment of Line Angle and Position, mental rotation, and Useful Field of View (with correlations between .31 and .45). When prior video gaming experience was included as a covariate, sex differences in spatial tasks disappeared.
A Smart Spoofing Face Detector by Display Features Analysis.
Lai, ChinLun; Tai, ChiuYuan
2016-07-21
In this paper, a smart face liveness detector is proposed to prevent a biometric system from being "deceived" by a video or picture of a valid user that a counterfeiter has captured with a high definition handheld device (e.g., an iPad with Retina display). By analyzing the characteristics of the display platform and using an expert decision-making core, the system can effectively detect whether a spoofing attempt comes from a fake face shown on a high definition display by verifying the chromaticity regions in the captured face. That is, a live face and a spoofed face can be distinguished precisely by the designed optical image sensor. In this way, a normal optical image sensor can be upgraded to detect spoofing actions. The experimental results show that the proposed detection system achieves a very high detection rate compared to existing methods and is thus practical to implement directly in authentication systems.
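The chromaticity-region idea can be illustrated with a toy check: a face replayed on a screen inherits the narrow chromaticity footprint of the emitting display. This is a minimal sketch, assuming a simple rg-chromaticity bounding box in place of the paper's expert decision-making core; `looks_like_screen` and the gamut bounds are hypothetical, not the authors' algorithm.

```python
def chromaticity(r, g, b):
    """Normalized rg chromaticity of one pixel (robust to brightness)."""
    s = r + g + b
    if s == 0:
        return 0.0, 0.0
    return r / s, g / s

def looks_like_screen(pixels, gamut, tolerance=0.9):
    """Flag a face crop as a screen replay if nearly all of its pixel
    chromaticities fall inside a box approximating the display's gamut.
    `gamut` is (r_min, r_max, g_min, g_max)."""
    r0, r1, g0, g1 = gamut
    inside = 0
    for (r, g, b) in pixels:
        rc, gc = chromaticity(r, g, b)
        if r0 <= rc <= r1 and g0 <= gc <= g1:
            inside += 1
    return inside / len(pixels) >= tolerance
```

A real detector would learn the decision region from data rather than hard-code a box, but the sketch shows why chromaticity, rather than raw RGB, is the useful feature here.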
NASA Technical Reports Server (NTRS)
Sawyer, Kevin; Jacobsen, Robert; Aiken, Edwin W. (Technical Monitor)
1995-01-01
NASA Ames Research Center and the US Army are developing the Rotorcraft Aircrew Systems Concepts Airborne Laboratory (RASCAL) using a Sikorsky UH-60 helicopter for the purpose of flight systems research. A primary use of the RASCAL is in-flight simulation, for which the visual scene will use computer-generated imagery and synthetic vision. This research is made possible in part by a full-color, wide-field-of-view Helmet Mounted Display (HMD) system that provides high performance color imagery suitable for daytime operations in a flight-rated package. This paper describes the design and performance characteristics of the HMD system. Emphasis is placed on the design specifications, testing, and integration into the aircraft of Kaiser Electronics' RASCAL HMD system, which was designed and built under contract for NASA. The optical performance and design of the helmet-mounted display unit are discussed, as well as the unique capabilities provided by the system's Programmable Display Generator (PDG).
NASA Technical Reports Server (NTRS)
1991-01-01
When Michael Henry wanted to start an aerial video service, he turned to Johnson Space Center for assistance. Two NASA engineers - one had designed and developed TV systems in Apollo, Skylab, Apollo- Soyuz and Space Shuttle programs - designed a wing-mounted fiberglass camera pod. Camera head and angles are adjustable, and the pod is shaped to reduce vibration. The controls are located so a solo pilot can operate the system. A microprocessor displays latitude, longitude, and bearing, and a GPS receiver provides position data for possible legal references. The service has been successfully utilized by railroads, oil companies, real estate companies, etc.
Kim, Young Ju; Xiao, Yan; Hu, Peter; Dutton, Richard
2009-08-01
To understand staff acceptance of a remote video monitoring system for operating room (OR) coordination. Improved real-time remote visual access to the OR may enhance situational awareness but also raises privacy concerns for patients and staff. Survey. A system was implemented in a six-room surgical suite to display OR monitoring video at an access-restricted control desk area. Image quality was manipulated to improve staff acceptance. Two months after installation, interviews and a survey were conducted on staff acceptance of video monitoring. About half of all OR personnel responded (n = 63). Overall levels of concern were low, with 53% reporting no concern and 42% little concern. The top two reported uses of the video were to see if cases are finished and to see if a room is ready. Viewing the video monitoring system as useful did not reduce levels of concern. Staff in supervisory positions perceived less concern about the system's impact on privacy than did those supervised (p < 0.03). Concerns for patient privacy correlated with concerns for staff privacy and performance monitoring. Technical means such as manipulating image quality helped staff acceptance. Manipulation of image quality resulted in overall acceptance of the monitoring video, with residual levels of concern. OR nurses may express staff privacy concern in the form of concerns over patient privacy. This study provided suggestions for technological and implementation strategies of video monitoring for coordination use in the OR. Deployment of communication technology and integration of clinical information will likely raise concerns over staff privacy and performance monitoring. The potential gain of increased information access may be offset by the negative impact of a perceived loss of autonomy.
Optical links in handheld multimedia devices
NASA Astrophysics Data System (ADS)
van Geffen, S.; Duis, J.; Miller, R.
2008-04-01
Emerging applications in handheld multimedia devices such as mobile phones, laptop computers, portable video games, and digital cameras require increased screen resolutions and drive higher aggregate bitrates between the host processor and display(s), enabling services such as mobile video conferencing, video on demand, and TV broadcasting. Larger displays and smaller phones require complex mechanical 3D hinge configurations that strive to combine maximum functionality with compact building volumes. Conventional galvanic interconnections such as Micro-Coax and FPC, carrying parallel digital data between host processor and display module, may produce Electromagnetic Interference (EMI) and suffer bandwidth limitations caused by small cable size and tight cable bends. To reduce the number of signals through a hinge, the mobile phone industry, organized in the MIPI (Mobile Industry Processor Interface) alliance, is currently defining an electrical interface transmitting serialized digital data at speeds >1 Gbps. This interface allows for electrical or optical interconnects. Above 1 Gbps, optical links may offer a cost-effective alternative because of their flexibility, increased bandwidth, and immunity to EMI. This paper describes the development of optical links for handheld communication devices. A cable assembly based on a special Plastic Optical Fiber (POF), selected for its mechanical durability, is terminated with a small-form-factor molded lens assembly that interfaces between an 850 nm VCSEL transmitter and a receiving device on the printed circuit board of the display module. A statistical approach based on a Lean Design For Six Sigma (LDFSS) roadmap for new product development seeks an optimum link definition that is robust and low cost while meeting the power consumption requirements appropriate for battery-operated systems.
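A quick back-of-the-envelope calculation shows why serialized display links of this era approach the 1 Gbps mark. The sketch below assumes a hypothetical 25% serialization-overhead allowance (in the spirit of 8b/10b-style line coding; the actual MIPI framing differs) and computes the raw bitrate for a WVGA phone panel.

```python
def display_link_bitrate(width, height, bits_per_pixel, frame_rate, overhead=1.25):
    """Raw serial-link bitrate (bit/s) needed to refresh a display,
    including a coding-overhead multiplier (assumed, not from the paper)."""
    return width * height * bits_per_pixel * frame_rate * overhead

# A WVGA panel at 24 bpp and 60 Hz already needs roughly 0.69 Gbps serialized,
# so modest growth in resolution or color depth pushes past 1 Gbps.
wvga = display_link_bitrate(800, 480, 24, 60)
```

This is the arithmetic behind the paper's claim that galvanic hinge wiring becomes the bottleneck: the payload scales with resolution, color depth, and refresh rate all at once.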
Hardware/Software Issues for Video Guidance Systems: The Coreco Frame Grabber
NASA Technical Reports Server (NTRS)
Bales, John W.
1996-01-01
The F64 frame grabber is a high-performance video image acquisition and processing board utilizing the TMS320C40 and TMS34020 processors. The hardware is designed for the 16-bit ISA bus and supports multiple digital or analog cameras. It has an acquisition rate of 40 million pixels per second, with a variable sampling frequency of 510 kHz to 40 MHz. The board has a 4 MB frame buffer memory expandable to 32 MB, and supports simultaneous acquisition and processing. It supports both VGA and RGB displays, and accepts all analog and digital video input standards.
Method and apparatus for calibrating a tiled display
NASA Technical Reports Server (NTRS)
Chen, Chung-Jen (Inventor); Johnson, Michael J. (Inventor); Chandrasekhar, Rajesh (Inventor)
2001-01-01
A display system that can be calibrated and re-calibrated with a minimal amount of manual intervention. To accomplish this, one or more cameras are provided to capture an image of the display screen. The resulting captured image is processed to identify any non-desirable characteristics, including visible artifacts such as seams, bands, rings, etc. Once the non-desirable characteristics are identified, an appropriate transformation function is determined. The transformation function is used to pre-warp the input video signal that is provided to the display such that the non-desirable characteristics are reduced or eliminated from the display. The transformation function preferably compensates for spatial non-uniformity, color non-uniformity, luminance non-uniformity, and other visible artifacts.
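The pre-warp idea can be sketched for the luminance-uniformity case alone: measure the screen with the camera, then attenuate the input signal so bright regions match the dimmest one. This is a minimal sketch under that simplification only; the patent's transformation function also corrects spatial and color non-uniformity, and the function names here are illustrative.

```python
def build_prewarp_gain(captured):
    """From a camera capture of a flat-field test image, derive a per-pixel
    gain that flattens luminance down to the dimmest measured region.
    `captured` is a 2D list of measured luminances standing in for the
    camera image."""
    floor = min(min(row) for row in captured)
    return [[floor / v if v > 0 else 0.0 for v in row] for row in captured]

def apply_prewarp(frame, gain):
    """Attenuate the input video frame so bright tiles match dim ones,
    reducing visible seams and bands between projectors."""
    return [[p * g for p, g in zip(frow, grow)]
            for frow, grow in zip(frame, gain)]
```

Flattening to the dimmest region trades peak brightness for uniformity, which is the usual compromise in tiled-display calibration.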
Federal Register 2010, 2011, 2012, 2013, 2014
2013-04-19
... INTERNATIONAL TRADE COMMISSION [Investigation No. 337-TA-828] Certain Video Displays and Products Using and Containing Same; Investigations: Terminations, Modifications and Rulings AGENCY: U.S. International Trade Commission. ACTION: Notice. SUMMARY: Notice is hereby given that the U.S. International...
Art History Interactive Videodisc Project at the University of Iowa.
ERIC Educational Resources Information Center
Sustik, Joan M.
A project which developed a retrieval system to evaluate the advantages and disadvantages of an interactive computer and video display system over traditional methods for using a slide library is described in this publication. The art school slide library of the University of Iowa stores transparencies which are arranged alphabetically within…
Integrating critical interface elements for intuitive single-display aviation control of UAVs
NASA Astrophysics Data System (ADS)
Cooper, Joseph L.; Goodrich, Michael A.
2006-05-01
Although advancing levels of technology allow UAV operators to give increasingly complex commands with expanding temporal scope, it is unlikely that the need for immediate situation awareness and local, short-term flight adjustment will ever be completely superseded. Local awareness and control are particularly important when the operator uses the UAV to perform a search or inspection task. There are many different tasks which would be facilitated by search and inspection capabilities of a camera-equipped UAV. These tasks range from bridge inspection and news reporting to wilderness search and rescue. The system should be simple, inexpensive, and intuitive for non-pilots. An appropriately designed interface should (a) provide a context for interpreting video and (b) support UAV tasking and control, all within a single display screen. In this paper, we present and analyze an interface that attempts to accomplish this goal. The interface utilizes a georeferenced terrain map rendered from publicly available altitude data and terrain imagery to create a context in which the location of the UAV and the source of the video are communicated to the operator. Rotated and transformed imagery from the UAV provides a stable frame of reference for the operator and integrates cleanly into the terrain model. Simple icons overlaid onto the main display provide intuitive control and feedback when necessary but fade to a semi-transparent state when not in use to avoid distracting the operator's attention from the video signal. With various interface elements integrated into a single display, the interface runs nicely on a small, portable, inexpensive system with a single display screen and simple input device, but is powerful enough to allow a single operator to deploy, control, and recover a small UAV when coupled with appropriate autonomy. As we present elements of the interface design, we will identify concepts that can be leveraged into a large class of UAV applications.
Multiple Target Tracking in a Wide-Field-of-View Camera System
1990-01-01
The assembly is mounted on a Contraves alt-azimuth axis table with a pointing accuracy of < 2 µrad. (Work performed under the auspices of the U.S. Department of ...) [Block-diagram residue: SUN 3 workstations networked over Ethernet, CCD cameras with RS170 video, a video amplifier, a DR11W/VME interface, a WWV clock, a VCR, monitors, and a Datacube image processor.] Processed images are displayed with overlay from the Datacube. We control the Contraves table using a GPIB interface on the SUN. GPIB also interfaces a
Synchronized voltage contrast display analysis system
NASA Technical Reports Server (NTRS)
Johnston, M. F.; Shumka, A.; Miller, E.; Evans, K. C. (Inventor)
1982-01-01
An apparatus and method for comparing internal voltage potentials of first and second operating electronic components such as large scale integrated circuits (LSI's) in which voltage differentials are visually identified via an appropriate display means are described. More particularly, in a first embodiment of the invention a first and second scanning electron microscope (SEM) are configured to scan a first and second operating electronic component respectively. The scan pattern of the second SEM is synchronized to that of the first SEM so that both simultaneously scan corresponding portions of the two operating electronic components. Video signals from each SEM corresponding to secondary electron signals generated as a result of a primary electron beam intersecting each operating electronic component in accordance with a predetermined scan pattern are provided to a video mixer and color encoder.
3D laptop for defense applications
NASA Astrophysics Data System (ADS)
Edmondson, Richard; Chenault, David
2012-06-01
Polaris Sensor Technologies has developed numerous 3D display systems using a US Army patented approach. These displays have been developed as prototypes for handheld controllers for robotic systems and closed hatch driving, and as part of a TALON robot upgrade for 3D vision, providing depth perception for the operator for improved manipulation and hazard avoidance. In this paper we discuss the prototype rugged 3D laptop computer and its applications to defense missions. The prototype 3D laptop combines full temporal and spatial resolution display with the rugged Amrel laptop computer. The display is viewed through protective passive polarized eyewear, and allows combined 2D and 3D content. Uses include robot tele-operation with live 3D video or synthetically rendered scenery, mission planning and rehearsal, enhanced 3D data interpretation, and simulation.
Reconstruction, Enhancement, Visualization, and Ergonomic Assessment for Laparoscopic Surgery
2007-02-01
(1) support and upgrade of the REVEAL display system and tool suite in the University of Maryland Medical Center's Simulation Center, (2) stereo video display technology deployment, (3) stereo probe calibration benchmarks and support tools, (4) the production of research media, and (5) baseline results from ... An endoscope can be used to generate a stereoscopic view for a surgeon, as with the DaVinci robot in use today. In order to use such an endoscope for
An Airborne Programmable Digital to Video Converter Interface and Operation Manual.
1981-02-01
[Report-form residue; keywords: scan converter, video display, television display.] A programmable cathode ray tube (CRT) controller is accessed by the CPU to permit operation in a wide variety of modes. The Alphanumeric Generator
Potential Health Hazards of Video Display Terminals.
ERIC Educational Resources Information Center
Murray, William E.; And Others
In response to a request from three California unions to evaluate potential health hazards from the use of video display terminals (VDT's) in information processing applications, the National Institute for Occupational Safety and Health (NIOSH) conducted a limited field investigation of three companies in the San Francisco-Oakland Bay Area. A…
NASA Technical Reports Server (NTRS)
Robbins, Woodrow E. (Editor); Fisher, Scott S. (Editor)
1989-01-01
Special attention was given to problems of stereoscopic display devices, such as CAD for enhancement of the design process in visual arts, stereo-TV improvement of remote manipulator performance, a voice-controlled stereographic video camera system, and head-mounted displays and their low-cost design alternatives. Also discussed were a novel approach to chromostereoscopic microscopy, computer-generated barrier-strip autostereography and lenticular stereograms, and parallax-barrier three-dimensional TV. Additional topics include processing and user interface issues and visualization applications, including automated analysis of fluid flow topology, optical tomographic measurements of mixing fluids, visualization of complex data, visualization environments, and visualization management systems.
A large flat panel multifunction display for military and space applications
NASA Astrophysics Data System (ADS)
Pruitt, James S.
1992-09-01
A flat panel multifunction display (MFD) that offers the size and reliability benefits of liquid crystal display technology while achieving near-CRT display quality is presented. Display generation algorithms that provide exceptional display quality are being implemented in custom VLSI components to minimize MFD size. A high-performance processor converts user-specified display lists to graphics commands used by these components, resulting in high-speed updates of two-dimensional and three-dimensional images. The MFD uses the MIL-STD-1553B data bus for compatibility with virtually all avionics systems. The MFD can generate displays directly from display lists received from the MIL-STD-1553B bus. Complex formats can be stored in the MFD and displayed using parameters from the data bus. The MFD also accepts direct video input and performs special processing on this input to enhance image quality.
A computer-aided telescope pointing system utilizing a video star tracker
NASA Technical Reports Server (NTRS)
Murphy, J. P.; Lorell, K. R.; Swift, C. D.
1975-01-01
The Video Inertial Pointing (VIP) system, developed to satisfy the acquisition and pointing requirements of astronomical telescopes, is described. A unique feature of the system is the use of a single sensor to provide information both for the generation of three-axis pointing error signals and for a cathode ray tube (CRT) display of the star field. The pointing error signals are used to update the telescope's gyro stabilization, and the CRT display is used by an operator to facilitate target acquisition and to aid in manual positioning of the telescope optical axis. A model of the system using a low-light-level vidicon, built and flown on a balloon-borne infrared telescope, is briefly described, as is an advanced system built around a state-of-the-art charge coupled device (CCD) sensor. The advanced system hardware is described, and an analysis of the multi-star tracking and three-axis error signal generation, along with an analysis and design of the gyro update filter, are presented. Results of a hybrid simulation are described in which the advanced VIP system hardware is driven by a digital simulation of the star field/CCD sensor and an analog simulation of the telescope and gyro stabilization dynamics.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-05-24
... Accessible Emergency Information; Apparatus Requirements for Emergency Information and Video Description...] Accessible Emergency Information; Apparatus Requirements for Emergency Information and Video Description... manufacturers of devices that display video programming to ensure that certain apparatus are able to make...
NASA Astrophysics Data System (ADS)
Hui, Jie; Cao, Yingchun; Zhang, Yi; Kole, Ayeeshik; Wang, Pu; Yu, Guangli; Eakins, Gregory; Sturek, Michael; Chen, Weibiao; Cheng, Ji-Xin
2017-03-01
Intravascular photoacoustic-ultrasound (IVPA-US) imaging is an emerging hybrid modality for the detection of lipid-laden plaques, providing simultaneous morphological and lipid-specific chemical information of an artery wall. The clinical utility of IVPA-US technology requires real-time imaging and display at video rate. Here, we demonstrate a compact and portable IVPA-US system capable of imaging at up to 25 frames per second in real-time display mode. This unprecedented imaging speed was achieved by concurrent innovations in the excitation laser source, rotary joint assembly, 1 mm IVPA-US catheter, differentiated A-line strategy, and real-time image processing and display algorithms. By imaging pulsatile motion at different imaging speeds, 16 frames per second was deemed adequate to suppress motion artifacts from cardiac pulsation for in vivo applications. Our lateral resolution results further verified the number of A-lines used for cross-sectional IVPA image reconstruction. The translational capability of this system for the detection of lipid-laden plaques was validated by ex vivo imaging of an atherosclerotic human coronary artery at 16 frames per second, which showed strong correlation to gold-standard histopathology.
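The A-line budget behind the frame-rate trade-off follows from simple arithmetic: each excitation pulse yields one A-line, so the pulse repetition rate divided by the catheter rotation (frame) rate fixes the A-lines available per cross-sectional image. A sketch with a hypothetical 2 kHz repetition rate, which the abstract does not state:

```python
def alines_per_frame(pulse_rate_hz, frame_rate_hz):
    """A-lines per cross-sectional frame: one pulse yields one A-line,
    so divide pulse repetition rate by rotation (frame) rate."""
    return pulse_rate_hz // frame_rate_hz

def frame_period_ms(frame_rate_hz):
    """Time budget for acquiring and rendering one frame."""
    return 1000.0 / frame_rate_hz
```

This is why raising the frame rate thins out the angular sampling, and why the authors cross-check the chosen A-line count against measured lateral resolution.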
Videotex and Education: A Review of British Developments.
ERIC Educational Resources Information Center
Real, Michael R.
Defining videotex, viewdata, teletext, and their cognates as systems that transmit computerized pages of information for remote display (on a television screen, variously integrating computers, and video, broadcasting, telephone, typewriter, and related technologies), this report explores educational and related applications of videotex…
Code of Federal Regulations, 2010 CFR
2010-10-01
... 48 Federal Acquisition Regulations System 1 2010-10-01 2010-10-01 false Definitions. 23.701... DRUG-FREE WORKPLACE Contracting for Environmentally Preferable Products and Services 23.701 Definitions. As used in this subpart— Computer monitor means a video display unit used with a computer. Desktop...
A clinical pilot study of a modular video-CT augmentation system for image-guided skull base surgery
NASA Astrophysics Data System (ADS)
Liu, Wen P.; Mirota, Daniel J.; Uneri, Ali; Otake, Yoshito; Hager, Gregory; Reh, Douglas D.; Ishii, Masaru; Gallia, Gary L.; Siewerdsen, Jeffrey H.
2012-02-01
Augmentation of endoscopic video with preoperative or intraoperative image data [e.g., planning data and/or anatomical segmentations defined in computed tomography (CT) and magnetic resonance (MR)] can improve navigation, spatial orientation, confidence, and tissue resection in skull base surgery, especially with respect to critical neurovascular structures that may be difficult to visualize in the video scene. This paper presents the engineering and evaluation of a video augmentation system for endoscopic skull base surgery translated to use in a clinical study. Extension of previous research yielded a practical system with a modular design that can be applied to other endoscopic surgeries, including orthopedic, abdominal, and thoracic procedures. A clinical pilot study is underway to assess feasibility and benefit to surgical performance by overlaying CT or MR planning data in real-time, high-definition endoscopic video. Preoperative planning included segmentation of the carotid arteries, optic nerves, and surgical target volume (e.g., tumor). An automated camera calibration process was developed that demonstrates a mean re-projection error of (0.7 +/- 0.3) pixels and a mean target registration error of (2.3 +/- 1.5) mm. An IRB-approved clinical study involving fifteen patients undergoing skull base tumor surgery is underway, in which each surgery includes the experimental video-CT system deployed in parallel to the standard-of-care (unaugmented) video display. Questionnaires distributed to one neurosurgeon and two otolaryngologists are used to assess primary outcome measures regarding the benefit to surgical confidence in localizing critical structures and targets by means of video overlay during surgical approach, resection, and reconstruction.
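The mean re-projection error reported for the calibration is the average pixel distance between points projected through the calibrated camera model and their detected image locations. A minimal sketch of that metric; the point lists below are illustrative, not the study's data:

```python
import math

def mean_reprojection_error(projected, detected):
    """Mean Euclidean distance (pixels) between model-projected 2D points
    and the corresponding detected image points, the standard figure of
    merit for camera calibration quality."""
    dists = [math.hypot(px - dx, py - dy)
             for (px, py), (dx, dy) in zip(projected, detected)]
    return sum(dists) / len(dists)
```

Target registration error is computed the same way but in millimeters, between physical targets and their positions predicted through the full registration chain.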
NASA Astrophysics Data System (ADS)
Figl, Michael; Birkfellner, Wolfgang; Watzinger, Franz; Wanschitz, Felix; Hummel, Johann; Hanel, Rudolf A.; Ewers, Rolf; Bergmann, Helmar
2002-05-01
Two main concepts of Head Mounted Displays (HMDs) for augmented reality (AR) visualization exist: the optical see-through and the video see-through type. Several research groups have pursued both approaches to utilizing HMDs for computer-aided surgery. While the hardware requirements for a video see-through HMD to achieve acceptable time delay and frame rate are enormous, the clinical acceptance of such a device is also doubtful from a practical point of view. Starting from previous work on displaying additional computer-generated graphics in operating microscopes, we have adapted a miniature head-mounted operating microscope for AR by integrating two very small computer displays. To calibrate the projection parameters of this so-called Varioscope AR, we used Tsai's algorithm for camera calibration. Connection to a surgical navigation system was made by defining an open interface to the control unit of the Varioscope AR. The control unit consists of a standard PC with a dual-head graphics adapter to render and display the desired augmentation of the scene. We connected this control unit to a computer aided surgery (CAS) system through a TCP/IP interface. In this paper we present the control unit for the HMD and its software design. We tested two different optical tracking systems: the Flashpoint (Image Guided Technologies, Boulder, CO), which provided about 10 frames per second, and the Polaris (Northern Digital, Ontario, Canada), which provided at least 30 frames per second, both with a time delay of one frame.
Introducing a Public Stereoscopic 3D High Dynamic Range (SHDR) Video Database
NASA Astrophysics Data System (ADS)
Banitalebi-Dehkordi, Amin
2017-03-01
High dynamic range (HDR) displays and cameras are paving their ways through the consumer market at a rapid growth rate. Thanks to TV and camera manufacturers, HDR systems are now becoming available commercially to end users. This is taking place only a few years after the blooming of 3D video technologies. MPEG/ITU are also actively working towards the standardization of these technologies. However, preliminary research efforts in these video technologies are hammered by the lack of sufficient experimental data. In this paper, we introduce a Stereoscopic 3D HDR database of videos that is made publicly available to the research community. We explain the procedure taken to capture, calibrate, and post-process the videos. In addition, we provide insights on potential use-cases, challenges, and research opportunities, implied by the combination of higher dynamic range of the HDR aspect, and depth impression of the 3D aspect.
A real-time remote video streaming platform for ultrasound imaging.
Ahmadi, Mehdi; Gross, Warren J; Kadoury, Samuel
2016-08-01
Ultrasound is a viable imaging technology in remote and resource-limited areas. Ultrasonography is a user-dependent skill that requires extensive training and hands-on experience. However, few skilled sonographers are located in remote areas. In this work, we aim to develop a real-time video streaming platform that allows specialist physicians to remotely monitor ultrasound exams. To this end, an ultrasound stream is captured and transmitted through a wireless network to remote computers, smartphones, and tablets. In addition, the system is equipped with a camera to track the position of the ultrasound probe. The main advantage of our work is the use of an open-source platform for video streaming, which gives us more control over streaming parameters than available commercial products. The transmission delays of the system were evaluated for several ultrasound video resolutions, and the results show that ultrasound video close to high-definition (HD) resolution can be received and displayed on an Android tablet with a delay of 0.5 seconds, which is acceptable for accurate real-time diagnosis.
Real-time 3D visualization of volumetric video motion sensor data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carlson, J.; Stansfield, S.; Shawver, D.
1996-11-01
This paper addresses the problem of improving detection, assessment, and response capabilities of security systems. Our approach combines two state-of-the-art technologies: volumetric video motion detection (VVMD) and virtual reality (VR). This work capitalizes on the ability of VVMD technology to provide three-dimensional (3D) information about the position, shape, and size of intruders within a protected volume. The 3D information is obtained by fusing motion detection data from multiple video sensors. The second component involves the application of VR technology to display information relating to the sensors and the sensor environment. VR technology enables an operator, or security guard, to be immersed in a 3D graphical representation of the remote site. VVMD data is transmitted from the remote site via ordinary telephone lines. There are several benefits to displaying VVMD information in this way. Because the VVMD system provides 3D information and because the sensor environment is a physical 3D space, it seems natural to display this information in 3D. Also, the 3D graphical representation depicts essential details within and around the protected volume in a natural way for human perception. Sensor information can also be more easily interpreted when the operator can 'move' through the virtual environment and explore the relationships between the sensor data, objects and other visual cues present in the virtual environment. By exploiting the powerful ability of humans to understand and interpret 3D information, we expect to improve the means for visualizing and interpreting sensor information, allow a human operator to assess a potential threat more quickly and accurately, and enable a more effective response. This paper will detail both the VVMD and VR technologies and will discuss a prototype system based upon their integration.
Privacy-protecting video surveillance
NASA Astrophysics Data System (ADS)
Wickramasuriya, Jehan; Alhazzazi, Mohanned; Datt, Mahesh; Mehrotra, Sharad; Venkatasubramanian, Nalini
2005-02-01
Forms of surveillance are very quickly becoming an integral part of crime control policy, crisis management, social control theory and community consciousness. In turn, it has been used as a simple and effective solution to many of these problems. However, privacy-related concerns have been expressed over the development and deployment of this technology. Used properly, video cameras help expose wrongdoing but typically come at the cost of privacy to those not involved in any maleficent activity. This work describes the design and implementation of a real-time, privacy-protecting video surveillance infrastructure that fuses additional sensor information (e.g. Radio-frequency Identification) with video streams and an access control framework in order to make decisions about how and when to display the individuals under surveillance. This video surveillance system is a particular instance of a more general paradigm of privacy-protecting data collection. In this paper we describe in detail the video processing techniques used in order to achieve real-time tracking of users in pervasive spaces while utilizing the additional sensor data provided by various instrumented sensors. In particular, we discuss background modeling techniques, object tracking and implementation techniques that pertain to the overall development of this system.
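The background-modeling step discussed above is commonly an exponential running average with a threshold test for foreground. A minimal one-row sketch; real systems operate per-pixel on full frames, often with per-pixel variance estimates, and the parameter values here are illustrative rather than taken from the paper.

```python
def update_background(bg, frame, alpha=0.05):
    """Exponential running-average background model for one grayscale row.
    Small alpha adapts slowly, so brief motion does not pollute the model."""
    return [(1 - alpha) * b + alpha * f for b, f in zip(bg, frame)]

def foreground_mask(bg, frame, thresh=25):
    """Pixels deviating from the background by more than `thresh` are
    marked as moving (candidate people/objects to track)."""
    return [abs(f - b) > thresh for b, f in zip(bg, frame)]
```

In the privacy-protecting pipeline, the mask produced this way is what gets tracked and selectively blurred or revealed according to the access-control framework.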
Study of a direct visualization display tool for space applications
NASA Astrophysics Data System (ADS)
Pereira do Carmo, J.; Gordo, P. R.; Martins, M.; Rodrigues, F.; Teodoro, P.
2017-11-01
The study of a Direct Visualization Display Tool (DVDT) for space applications is reported. The review of novel technologies for a compact display tool is described. Several applications for this tool have been identified with the support of ESA astronauts and are presented. A baseline design is proposed. It consists mainly of OLEDs as image source; a specially designed optical prism as relay optics; a Personal Digital Assistant (PDA), with data acquisition card, as control unit; and voice control and a simplified keyboard as interfaces. Optical analysis and the final estimated performance are reported. The system is able to display information (text, pictures and/or video) with SVGA resolution directly to the astronaut using a Field of View (FOV) of 20x14.5 degrees. The image delivery system is a monocular Head Mounted Display (HMD) that weighs less than 100 g. The HMD optical system has an eye pupil of 7 mm and an eye relief distance of 30 mm.
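The figures quoted above can be sanity-checked: SVGA (800x600) pixels spread over a 20 x 14.5 degree field give the angular size of one pixel. The arithmetic below is a back-of-envelope check, not from the paper.

```python
# Angular resolution of an 800x600 image over a 20 x 14.5 degree FOV.
# ~1.5 arcmin/pixel, close to the ~1 arcmin limit of normal visual acuity.

def arcmin_per_pixel(fov_deg, pixels):
    return fov_deg * 60.0 / pixels

h = arcmin_per_pixel(20.0, 800)    # horizontal: 1.5 arcmin per pixel
v = arcmin_per_pixel(14.5, 600)    # vertical: 1.45 arcmin per pixel
```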
Use of Internet Resources in the Biology Lecture Classroom.
ERIC Educational Resources Information Center
Francis, Joseph W.
2000-01-01
Introduces internet resources that are available for instructional use in biology classrooms. Provides information on video-based technologies to create and capture video sequences, interactive web sites that allow interaction with biology simulations, online texts, and interactive videos that display animated video sequences. (YDS)
3D video coding: an overview of present and upcoming standards
NASA Astrophysics Data System (ADS)
Merkle, Philipp; Müller, Karsten; Wiegand, Thomas
2010-07-01
An overview of existing and upcoming 3D video coding standards is given. Various 3D video formats are available, each with individual pros and cons. The 3D video formats can be separated into two classes: video-only formats (such as stereo and multiview video) and depth-enhanced formats (such as video plus depth and multiview video plus depth). Since all these formats consist of at least two video sequences and possibly additional depth data, efficient compression is essential for the success of 3D video applications and technologies. For the video-only formats, the H.264 family of coding standards already provides efficient and widely established compression algorithms: H.264/AVC simulcast, H.264/AVC stereo SEI message, and H.264/MVC. For the depth-enhanced formats, standardized coding algorithms are currently being developed. New and specially adapted coding approaches are necessary, as the depth or disparity information included in these formats has significantly different characteristics than video and is not displayed directly, but used for rendering. Motivated by evolving market needs, MPEG has started an activity to develop a generic 3D video standard within the 3DVC ad-hoc group. Key features of the standard are efficient and flexible compression of depth-enhanced 3D video representations and decoupling of content creation and display requirements.
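The abstract's point that depth is "not displayed directly, but used for rendering" can be illustrated with the core of depth-image-based rendering: each pixel is shifted by a disparity proportional to its depth to synthesize a nearby virtual view. This toy 1-D sketch (illustrative constants, no real occlusion handling) is not part of any of the standards named above.

```python
# Toy depth-image-based rendering on one scanline: nearer pixels
# (normalized depth ~ 1) shift further; holes are filled from the left.

def synthesize_view(scanline, depth, max_disp=3):
    out = [None] * len(scanline)
    for x, (v, d) in enumerate(zip(scanline, depth)):
        nx = x + round(d * max_disp)
        if 0 <= nx < len(out):
            out[nx] = v   # later writes win here; a real renderer orders by depth
    for x in range(len(out)):          # naive hole filling
        if out[x] is None:
            out[x] = out[x - 1] if x else 0
    return out

line = [10, 20, 30, 40, 50, 60]
depth = [0.0, 0.0, 1.0, 1.0, 0.0, 0.0]   # pixels 2-3 are "near"
virtual = synthesize_view(line, depth)
```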
Airborne Navigation Remote Map Reader Evaluation.
1986-03-01
James C. Byrd, Integrated Controls/Displays Branch, Avionics Systems Division, Directorate of Avionics Engineering, March 1986, Final Report. Contents include: resolution; accuracy; symbology; video standard; simulator control box; software; display performance; reliability. ...can be selected depending on the detail required and will automatically be presented at his present position. The French RMR uses a Flying Spot Scanner
The virtual brain: 30 years of video-game play and cognitive abilities.
Latham, Andrew J; Patston, Lucy L M; Tippett, Lynette J
2013-09-13
Forty years have passed since video-games were first made widely available to the public and subsequently playing games has become a favorite past-time for many. Players continuously engage with dynamic visual displays with success contingent on the time-pressured deployment, and flexible allocation, of attention as well as precise bimanual movements. Evidence to date suggests that both brief and extensive exposure to video-game play can result in a broad range of enhancements to various cognitive faculties that generalize beyond the original context. Despite promise, video-game research is host to a number of methodological issues that require addressing before progress can be made in this area. Here an effort is made to consolidate the past 30 years of literature examining the effects of video-game play on cognitive faculties and, more recently, neural systems. Future work is required to identify the mechanism that allows the act of video-game play to generate such a broad range of generalized enhancements.
An attentive multi-camera system
NASA Astrophysics Data System (ADS)
Napoletano, Paolo; Tisato, Francesco
2014-03-01
Intelligent multi-camera systems that integrate computer vision algorithms are not error free, and thus both false positive and false negative detections need to be reviewed by a specialized human operator. Traditional multi-camera systems usually include a control center with a wall of monitors displaying video from each camera in the network. Nevertheless, as the number of cameras increases, switching from one camera to another becomes hard for a human operator. In this work we propose a new method that dynamically selects and displays the content of one video camera from all the available contents in the multi-camera system. The proposed method is based on a computational model of human visual attention that integrates top-down and bottom-up cues. We believe that this is the first work that attempts to use a model of human visual attention for dynamic selection of the camera view in a multi-camera system. The proposed method has been tested in a given scenario and has demonstrated its effectiveness with respect to other methods and manually generated ground truth. The effectiveness has been evaluated in terms of the number of correct best views generated by the method with respect to the camera views manually selected by a human operator.
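A minimal stand-in for the selection step described above: score each camera's current frame bottom-up by motion energy (mean absolute frame difference) and display the highest-scoring view. The actual paper also integrates top-down cues; this sketch, with invented toy frames, is bottom-up only.

```python
# Pick the "best view" as the camera whose frame changed the most.

def motion_energy(prev, curr):
    """Mean absolute per-pixel difference between two frames."""
    n = sum(len(row) for row in curr)
    return sum(abs(a - b) for pr, cr in zip(prev, curr)
               for a, b in zip(pr, cr)) / n

def best_view(prev_frames, curr_frames):
    scores = [motion_energy(p, c) for p, c in zip(prev_frames, curr_frames)]
    return max(range(len(scores)), key=scores.__getitem__)

still = [[5, 5], [5, 5]]
moved = [[5, 5], [5, 90]]
prev = [still, still, still]
curr = [still, moved, still]   # only camera 1 sees motion
chosen = best_view(prev, curr)
```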
NASA Technical Reports Server (NTRS)
Liu, Hua-Kuang (Inventor)
1992-01-01
A relatively small and low-cost system is provided for projecting a large and bright television image onto a screen. A miniature liquid crystal array is driven by video circuitry to produce a pattern of transparencies in the array corresponding to a television image. Light is directed against the rear surface of the array to illuminate it, while a projection lens lies in front of the array to project the image of the array onto a large screen. Grid lines in the liquid crystal array are eliminated by a spacial filter which comprises a negative of the Fourier transform of the grid.
System for training and evaluation of security personnel in use of firearms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hall, H.F.
This patent describes an interactive video display system comprising a laser disc player with a remote large-screen projector to view life-size video scenarios and a control computer. A video disc has at least one basic scenario and one or more branches of the basic scenario, with one or more subbranches from any one or more of the branches and further subbranches, if desired, to any level of programming desired. The control computer is programmed for interactive control of the branching, and control of other effects that enhance the scenario, in response to detection of when the trainee has drawn an infrared laser handgun from his holster, fired his laser handgun, taken cover, advanced or retreated from the adversary on the screen, and when the adversary has fired his gun at the trainee.
System for training and evaluation of security personnel in use of firearms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hall, H.F.
An interactive video display system comprising a laser disc player with a remote large-screen projector to view life-size video scenarios and a control computer. A video disc has at least one basic scenario and one or more branches of the basic scenario, with one or more subbranches from any one or more of the branches and further subbranches, if desired, to any level of programming desired. The control computer is programmed for interactive control of the branching, and control of other effects that enhance the scenario, in response to detection of when the trainee has drawn an infrared laser handgun from his holster, fired his laser handgun, taken cover, advanced or retreated from the adversary on the screen, and when the adversary has fired his gun at the trainee. 8 figs.
System for training and evaluation of security personnel in use of firearms
Hall, Howard F.
1990-01-01
An interactive video display system comprising a laser disc player with a remote large-screen projector to view life-size video scenarios and a control computer. A video disc has at least one basic scenario and one or more branches of the basic scenario with one or more subbranches from any one or more of the branches and further subbranches, if desired, to any level of programming desired. The control computer is programmed for interactive control of the branching, and control of other effects that enhance the scenario, in response to detection of when the trainee has (1) drawn an infrared laser handgun from his holster, (2) fired his laser handgun, (3) taken cover, (4) advanced or retreated from the adversary on the screen, and (5) when the adversary has fired his gun at the trainee.
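The branching control the patent describes, scenario branches selected in response to detected trainee events, is naturally a small state machine. The node and event names below are invented for illustration; they are not from the patent.

```python
# Each scenario node maps detected trainee events to the next video branch.

SCENARIO = {
    "confrontation":      {"drew_weapon": "adversary_flees",
                           "took_cover":  "adversary_advances"},
    "adversary_advances": {"fired": "adversary_down"},
    "adversary_flees":    {},
    "adversary_down":     {},
}

def play(start, events):
    """Follow trainee events through the branch tree; events the current
    node does not branch on are ignored, as a disc player would."""
    node = start
    for e in events:
        node = SCENARIO[node].get(e, node)
    return node

outcome = play("confrontation", ["took_cover", "fired"])
```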
36 CFR 1194.24 - Video and multimedia products.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 36 Parks, Forests, and Public Property 3 2010-07-01 2010-07-01 false Video and multimedia products... Video and multimedia products. (a) All analog television displays 13 inches and larger, and computer... training and informational video and multimedia productions which support the agency's mission, regardless...
36 CFR 1194.24 - Video and multimedia products.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 36 Parks, Forests, and Public Property 3 2011-07-01 2011-07-01 false Video and multimedia products... Video and multimedia products. (a) All analog television displays 13 inches and larger, and computer... training and informational video and multimedia productions which support the agency's mission, regardless...
36 CFR 1194.24 - Video and multimedia products.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 36 Parks, Forests, and Public Property 3 2012-07-01 2012-07-01 false Video and multimedia products... Video and multimedia products. (a) All analog television displays 13 inches and larger, and computer... training and informational video and multimedia productions which support the agency's mission, regardless...
36 CFR 1194.24 - Video and multimedia products.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 36 Parks, Forests, and Public Property 3 2014-07-01 2014-07-01 false Video and multimedia products... Video and multimedia products. (a) All analog television displays 13 inches and larger, and computer... training and informational video and multimedia productions which support the agency's mission, regardless...
DC-8 Scanning Lidar Characterization of Aircraft Contrails and Cirrus Clouds
NASA Technical Reports Server (NTRS)
Uthe, Edward E.; Nielsen, Norman B.; Oseberg, Terje E.
1998-01-01
An angular-scanning large-aperture (36 cm) backscatter lidar was developed and deployed on the NASA DC-8 research aircraft as part of the SUCCESS (Subsonic Aircraft: Contrail and Cloud Effects Special Study) program. The lidar viewing direction could be scanned continuously during aircraft flight from vertically upward to forward to vertically downward, or the viewing could be at fixed angles. Real-time pictorial displays generated from the lidar signatures were broadcast on the DC-8 video network and used to locate clouds and contrails above, ahead of, and below the DC-8 to depict their spatial structure and to help select DC-8 altitudes for achieving optimum sampling by onboard in situ sensors. Several lidar receiver systems and real-time data displays were evaluated to help extend in situ data into vertical dimensions and to help establish possible lidar configurations and applications on future missions. Digital lidar signatures were recorded on 8 mm Exabyte tape and generated real-time displays were recorded on 8mm video tape. The digital records were transcribed in a common format to compact disks to facilitate data analysis and delivery to SUCCESS participants. Data selected from the real-time display video recordings were processed for publication-quality displays incorporating several standard lidar data corrections. Data examples are presented that illustrate: (1) correlation with particulate, gas, and radiometric measurements made by onboard sensors, (2) discrimination and identification between contrails observed by onboard sensors, (3) high-altitude (13 km) scattering layer that exhibits greatly enhanced vertical backscatter relative to off-vertical backscatter, and (4) mapping of vertical distributions of individual precipitating ice crystals and their capture by cloud layers. An angular scan plotting program was developed that accounts for DC-8 pitch and velocity.
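The abstract's final sentence mentions a scan-plotting program that accounts for DC-8 pitch and velocity. The geometry behind that correction can be sketched as follows; the numbers and function name are illustrative, not from the SUCCESS data system.

```python
import math

# Earth-referenced position of a lidar return: the scan angle (aircraft
# frame) is corrected by pitch, and along-track position advances with
# ground speed and elapsed time.

def return_position(scan_deg, pitch_deg, slant_range_m, speed_ms, t_s):
    """(along-track x, height z) relative to the t=0 aircraft position.
    scan_deg is measured up from the horizontal in the aircraft frame."""
    elev = math.radians(scan_deg + pitch_deg)    # earth-referenced elevation
    x_air = speed_ms * t_s                       # aircraft along-track travel
    return (x_air + slant_range_m * math.cos(elev),
            slant_range_m * math.sin(elev))

# A nose-up pitch of 2 deg makes an 88 deg scan point straight up.
x, z = return_position(scan_deg=88.0, pitch_deg=2.0, slant_range_m=1000.0,
                       speed_ms=230.0, t_s=10.0)
```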
Laser-driven polyplanar optic display
NASA Astrophysics Data System (ADS)
Veligdan, James T.; Beiser, Leo; Biscardi, Cyrus; Brewster, Calvin; DeSanto, Leonard
1997-07-01
The polyplanar optical display (POD) is a unique display screen which can be used with any projection source. This display screen is 2 inches thick and has a matte black face which allows for high contrast images. The prototype being developed is a form, fit and functional replacement display for the B-52 aircraft which uses a monochrome ten-inch display. The new display uses a 100 milliwatt green solid state laser as its optical source. In order to produce real-time video, the laser light is being modulated by a digital light processing (DLP) chip manufactured by Texas Instruments, Inc. A variable astigmatic focusing system is used to produce a stigmatic image on the viewing face of the POD. In addition to the optical design, we discuss the electronic interfacing to the DLP chip, the opto-mechanical design and viewing angle characteristics.
Laser-driven polyplanar optic display
DOE Office of Scientific and Technical Information (OSTI.GOV)
Veligdan, J.T.; Biscardi, C.; Brewster, C.
1998-01-01
The Polyplanar Optical Display (POD) is a unique display screen which can be used with any projection source. This display screen is 2 inches thick and has a matte-black face which allows for high contrast images. The prototype being developed is a form, fit and functional replacement display for the B-52 aircraft which uses a monochrome ten-inch display. The new display uses a 200 milliwatt green solid-state laser (532 nm) as its optical source. In order to produce real-time video, the laser light is being modulated by a Digital Light Processing (DLP) chip manufactured by Texas Instruments, Inc. A variable astigmatic focusing system is used to produce a stigmatic image on the viewing face of the POD. In addition to the optical design, the authors discuss the DLP chip, the optomechanical design and viewing angle characteristics.
Laser-driven polyplanar optic display
NASA Astrophysics Data System (ADS)
Veligdan, James T.; Beiser, Leo; Biscardi, Cyrus; Brewster, Calvin; DeSanto, Leonard
1998-05-01
The Polyplanar Optical Display (POD) is a unique display screen which can be used with any projection source. This display screen is 2 inches thick and has a matte-black face which allows for high contrast images. The prototype being developed is a form, fit and functional replacement display for the B-52 aircraft which uses a monochrome ten-inch display. The new display uses a 200 milliwatt green solid-state laser (532 nm) as its optical source. In order to produce real-time video, the laser light is being modulated by a Digital Light Processing (DLP™) chip manufactured by Texas Instruments, Inc. A variable astigmatic focusing system is used to produce a stigmatic image on the viewing face of the POD. In addition to the optical design, we discuss the DLP™ chip, the opto-mechanical design and viewing angle characteristics.
Immersive video for virtual tourism
NASA Astrophysics Data System (ADS)
Hernandez, Luis A.; Taibo, Javier; Seoane, Antonio J.
2001-11-01
This paper describes a new panoramic, 360-degree video system and its use in a real application for virtual tourism. The development of this system required the design of new hardware for multi-camera recording, and of software for video processing, in order to build the panorama frames and to play back the resulting high-resolution video footage on a regular PC. The system makes use of new VR display hardware, such as WindowVR, to make the view dependent on the viewer's spatial orientation and so enhance immersiveness. There are very few examples of similar technologies, and the existing ones are extremely expensive and/or impossible to implement on personal computers with acceptable quality. The idea of the system starts from the concept of the panorama picture, developed in technologies such as QuickTimeVR. This idea is extended to the concept of the panorama frame, which leads to panorama video. However, many problems must be solved to implement this simple scheme. Data acquisition involves simultaneous footage recording in every direction, and later processing to convert every set of frames into a single high-resolution panorama frame. Since no common hardware is capable of 4096x512 video playback at 25 fps, each frame must be split into smaller strips, and the system must fetch the right pieces of the right frames as the user's movement demands. As the system must be immersive, the physical interface for watching the 360-degree video is a WindowVR, that is, a flat screen with an orientation tracker that the user holds in his hands, moving it as if it were a virtual window through which the city and its activity are shown.
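The strip-fetching step the abstract describes can be sketched as a lookup: given the viewer's heading, find which strips of the 4096-pixel panorama cover the visible field. The layout below, 8 strips of 512 pixels covering 360 degrees with a 90-degree viewing field, is an illustrative assumption, not the paper's actual tiling.

```python
# Which panorama strips must be fetched for a given viewer heading?

PANO_W, STRIP_W, N_STRIPS = 4096, 512, 8   # 8 strips x 512 px = 4096 px / 360 deg

def visible_strips(heading_deg, fov_deg=90.0):
    """Indices of strips covering [heading - fov/2, heading + fov/2]."""
    left = (heading_deg - fov_deg / 2) % 360
    first = int(left / 360 * PANO_W) // STRIP_W
    # ceil(fov in strips) + 1 extra for straddling a strip boundary
    n = -(-int(fov_deg / 360 * PANO_W) // STRIP_W) + 1
    return [(first + i) % N_STRIPS for i in range(n)]

front = visible_strips(0.0)     # looking "north": wraps around the seam
side = visible_strips(180.0)
```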
Knowledge representation in space flight operations
NASA Technical Reports Server (NTRS)
Busse, Carl
1989-01-01
In space flight operations rapid understanding of the state of the space vehicle is essential. Representation of knowledge depicting space vehicle status in a dynamic environment presents a difficult challenge. The NASA Jet Propulsion Laboratory has pursued areas of technology associated with the advancement of the spacecraft operations environment. This has led to the development of several advanced mission systems which incorporate enhanced graphics capabilities. These systems include: (1) Spacecraft Health Automated Reasoning Prototype (SHARP); (2) Spacecraft Monitoring Environment (SME); (3) Electrical Power Data Monitor (EPDM); (4) Generic Payload Operations Control Center (GPOCC); and (5) Telemetry System Monitor Prototype (TSM). Knowledge representation in these systems provides a direct representation of the intrinsic images associated with the instrument and satellite telemetry and telecommunications systems. The man-machine interface includes easily interpreted contextual graphic displays. These interactive video displays contain multiple display screens with pop-up windows and intelligent, high resolution graphics linked through context- and mouse-sensitive icons and text.
A gaze-contingent display to study contrast sensitivity under natural viewing conditions
NASA Astrophysics Data System (ADS)
Dorr, Michael; Bex, Peter J.
2011-03-01
Contrast sensitivity has been extensively studied over the last decades and there are well-established models of early vision that were derived by presenting the visual system with synthetic stimuli such as sine-wave gratings near threshold contrasts. Natural scenes, however, contain a much wider distribution of orientations, spatial frequencies, and both luminance and contrast values. Furthermore, humans typically move their eyes two to three times per second under natural viewing conditions, but most laboratory experiments require subjects to maintain central fixation. We here describe a gaze-contingent display capable of performing real-time contrast modulations of video in retinal coordinates, thus allowing us to study contrast sensitivity when dynamically viewing dynamic scenes. Our system is based on a Laplacian pyramid for each frame that efficiently represents individual frequency bands. Each output pixel is then computed as a locally weighted sum of pyramid levels to introduce local contrast changes as a function of gaze. Our GPU implementation achieves real-time performance with more than 100 fps on high-resolution video (1920 by 1080 pixels) and a synthesis latency of only 1.5ms. Psychophysical data show that contrast sensitivity is greatly decreased in natural videos and under dynamic viewing conditions. Synthetic stimuli therefore only poorly characterize natural vision.
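The pyramid machinery described above, decompose each frame into frequency bands, then resynthesize with gaze-dependent local weights, can be shown in one dimension. This is a toy sketch (pair-averaging decimation, nearest-neighbour expansion, power-of-two signals), not the authors' GPU implementation; unit weights reproduce the input exactly, and per-level weights below 1 attenuate that band's contrast.

```python
# 1-D Laplacian pyramid with weighted resynthesis.

def down(x):                       # 2:1 decimation by pair averaging
    return [(x[i] + x[i + 1]) / 2 for i in range(0, len(x), 2)]

def up(x):                         # nearest-neighbour 1:2 expansion
    return [v for v in x for _ in (0, 1)]

def build_pyramid(x, levels):
    pyr = []
    for _ in range(levels):
        lo = down(x)
        pyr.append([a - b for a, b in zip(x, up(lo))])  # band-pass residual
        x = lo
    pyr.append(x)                  # low-pass residue
    return pyr

def collapse(pyr, weights):
    """Weighted synthesis; per-level gains model local contrast changes."""
    x = pyr[-1]
    for band, w in zip(reversed(pyr[:-1]), reversed(weights)):
        x = [u + w * b for u, b in zip(up(x), band)]
    return x

sig = [4.0, 8.0, 6.0, 2.0, 0.0, 0.0, 4.0, 4.0]
pyr = build_pyramid(sig, 2)
rec = collapse(pyr, [1.0, 1.0])    # unit weights: exact reconstruction
```

In the real display the weight at each pixel is a function of gaze position, so contrast is modulated in retinal coordinates as the eyes move.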
New generation of 3D desktop computer interfaces
NASA Astrophysics Data System (ADS)
Skerjanc, Robert; Pastoor, Siegmund
1997-05-01
Today's computer interfaces use 2-D displays showing windows, icons and menus and support mouse interactions for handling programs and data files. The interface metaphor is that of a writing desk with (partly) overlapping sheets of documents placed on its top. Recent advances in the development of 3-D display technology give the opportunity to take the interface concept a radical stage further by breaking the design limits of the desktop metaphor. The major advantage of the envisioned 'application space' is that it offers an additional, immediately perceptible dimension to clearly and constantly visualize the structure and current state of interrelations between documents, videos, application programs and networked systems. In this context, we describe the development of a visual operating system (VOS). Under VOS, applications appear as objects in 3-D space. Users can graphically connect selected objects to enable communication between the respective applications. VOS includes a general concept of visual and object-oriented programming for tasks ranging from, e.g., low-level programming up to high-level application configuration. In order to enable practical operation in an office or at home for many hours, the system should be very comfortable to use. Since typical 3-D equipment used, e.g., in virtual-reality applications (head-mounted displays, data gloves) is rather cumbersome and straining, we suggest to use off-head displays and contact-free interaction techniques. In this article, we introduce an autostereoscopic 3-D display and connected video-based interaction techniques which allow viewpoint-dependent imaging (by head tracking) and visually controlled modification of data objects and links (by gaze tracking, e.g., to pick 3-D objects just by looking at them).
NASA Technical Reports Server (NTRS)
Cohen, Tamar E.; Lees, David S.; Deans, Matthew C.; Lim, Darlene S. S.; Lee, Yeon Jin Grace
2018-01-01
Exploration Ground Data Systems (xGDS) supports rapid scientific decision making by synchronizing video in context with map, instrument data visualization, geo-located notes and any other collected data. xGDS is an open source web-based software suite developed at NASA Ames Research Center to support remote science operations in analog missions and prototype solutions for remote planetary exploration. (See Appendix B) Typical video systems are designed to play or stream video only, independent of other data collected in the context of the video. Providing customizable displays for monitoring live video and data as well as replaying recorded video and data helps end users build up a rich situational awareness. xGDS was designed to support remote field exploration with unreliable networks. Commercial digital recording systems operate under the assumption that there is a stable and reliable network between the source of the video and the recording system. In many field deployments and space exploration scenarios, this is not the case - there are both anticipated and unexpected network losses. xGDS' Video Module handles these interruptions, storing the available video, organizing and characterizing the dropouts, and presenting the video for streaming or replay to the end user including visualization of the dropouts. Scientific instruments often require custom or expensive software to analyze and visualize collected data. This limits the speed at which the data can be visualized and limits access to the data to those users with the software. xGDS' Instrument Module integrates with instruments that collect and broadcast data in a single snapshot or that continually collect and broadcast a stream of data. While seeing a visualization of collected instrument data is informative, showing the context for the collected data, other data collected nearby along with events indicating current status helps remote science teams build a better understanding of the environment. 
Further, sharing geo-located, tagged notes recorded by the scientists and others on the team spurs deeper analysis of the data.
Development of a real time multiple target, multi camera tracker for civil security applications
NASA Astrophysics Data System (ADS)
Åkerlund, Hans
2009-09-01
A surveillance system has been developed that can use multiple TV-cameras to detect and track personnel and objects in real time in public areas. The document describes the development and the system setup. The system is called NIVS Networked Intelligent Video Surveillance. Persons in the images are tracked and displayed on a 3D map of the surveyed area.
Large-screen display technology assessment for military applications
NASA Astrophysics Data System (ADS)
Blaha, Richard J.
1990-08-01
Full-color, large screen display systems can enhance military applications that require group presentation, coordinated decisions, or interaction between decision makers. The technology already plays an important role in operations centers, simulation facilities, conference rooms, and training centers. Some applications display situational, status, or briefing information, while others portray instructional material for procedural training or depict realistic panoramic scenes that are used in simulators. While each specific application requires unique values of luminance, resolution, response time, reliability, and the video interface, suitable performance can be achieved with available commercial large screen displays. Advances in the technology of large screen displays are driven by the commercial applications because the military applications do not provide the significant market share enjoyed by high definition television (HDTV), entertainment, advertisement, training, and industrial applications. This paper reviews the status of full-color, large screen display technologies and includes the performance and cost metrics of available systems. For this discussion, performance data is based upon either measurements made by our personnel or extractions from vendors' data sheets.
Benady-Chorney, Jessica; Yau, Yvonne; Zeighami, Yashar; Bohbot, Veronique D; West, Greg L
2018-03-21
Action video game players (aVGPs) display increased performance in attention-based tasks and enhanced procedural motor learning. In parallel, the anterior cingulate cortex (ACC) is centrally implicated in specific types of reward-based learning and attentional control, the execution or inhibition of motor commands, and error detection. These processes are hypothesized to support aVGP in-game performance and enhanced learning through in-game feedback. We, therefore, tested the hypothesis that habitual aVGPs would display increased cortical thickness compared with nonvideo game players (nonVGPs). Results showed that the aVGP group (n=17) displayed significantly higher levels of cortical thickness specifically in the dorsal ACC compared with the nonVGP group (n=16). Results are discussed in the context of previous findings examining video game experience, attention/performance, and responses to affective components such as pain and fear.
Wrist display concept demonstration based on 2-in. color AMOLED
NASA Astrophysics Data System (ADS)
Meyer, Frederick M.; Longo, Sam J.; Hopper, Darrel G.
2004-09-01
The wrist watch needs an upgrade. Recent advances in optoelectronics, microelectronics, and communication theory have established a technology base that now makes the multimedia Dick Tracy watch attainable during the next decade. As a first step towards stuffing the functionality of an entire personal computer (PC) and television receiver under a watch face, we have set a goal of providing wrist video capability to warfighters. Commercial sector work on the wrist form factor already includes all the functionality of a personal digital assistant (PDA) and a full PC operating system. Our strategy is to leverage these commercial developments. In this paper we describe our use of a 2.2 in. diagonal color active matrix organic light-emitting diode (AMOLED) device as a wrist-mounted display (WMD) to present either full motion video or computer-generated graphical image formats.
36 CFR § 1194.24 - Video and multimedia products.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 36 Parks, Forests, and Public Property 3 2013-07-01 2012-07-01 true Video and multimedia products... § 1194.24 Video and multimedia products. (a) All analog television displays 13 inches and larger, and... circuitry. (c) All training and informational video and multimedia productions which support the agency's...
Peden, Robert G; Mercer, Rachel; Tatham, Andrew J
2016-10-01
To investigate whether 'surgeon's eye view' videos provided via head-mounted displays can improve skill acquisition and satisfaction in basic surgical training compared with conventional wet-lab teaching. A prospective randomised study of 14 medical students with no prior suturing experience, randomised to 3 groups: 1) conventional teaching; 2) head-mounted display-assisted teaching and 3) head-mounted display self-learning. All were instructed in interrupted suturing followed by 15 minutes' practice. Head-mounted displays provided a 'surgeon's eye view' video demonstrating the technique, available during practice. Subsequently students undertook a practical assessment, where suturing was videoed and graded by masked assessors using a 10-point surgical skill score (1 = very poor technique, 10 = very good technique). Students completed a questionnaire assessing confidence and satisfaction. Suturing ability after teaching was similar between groups (P = 0.229, Kruskal-Wallis test). Median surgical skill scores were 7.5 (range 6-10), 6 (range 3-8) and 7 (range 1-7) following head-mounted display-assisted teaching, conventional teaching, and head-mounted display self-learning respectively. There was good agreement between graders regarding surgical skill scores (rho.c = 0.599, r = 0.603), and no difference in number of sutures placed between groups (P = 0.120). The head-mounted display-assisted teaching group reported greater enjoyment than those attending conventional teaching (P = 0.033). Head-mounted display self-learning was regarded as least useful (7.4 vs 9.0 for conventional teaching, P = 0.021), but more enjoyable than conventional teaching (9.6 vs 8.0, P = 0.050). Teaching augmented with head-mounted displays was significantly more enjoyable than conventional teaching. Students undertaking self-directed learning using head-mounted displays with pre-recorded videos had comparable skill acquisition to those attending traditional wet-lab tutorials. 
Copyright © 2016 IJS Publishing Group Ltd. Published by Elsevier Ltd. All rights reserved.
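The study's group comparison uses a Kruskal-Wallis test. The sketch below reproduces the mechanics of that test on hypothetical skill scores (not the study's data), using mid-ranks for ties, omitting the tie correction (so H is slightly conservative), and comparing H against the chi-square critical value for two degrees of freedom at the 0.05 level.

```python
def kruskal_wallis_h(groups):
    """Kruskal-Wallis H statistic with mid-ranks for ties (no tie correction)."""
    pooled = sorted(v for g in groups for v in g)
    rank = {}
    for v in set(pooled):
        positions = [i + 1 for i, p in enumerate(pooled) if p == v]
        rank[v] = sum(positions) / len(positions)   # average rank of tied values
    n = len(pooled)
    return 12.0 / (n * (n + 1)) * sum(
        sum(rank[v] for v in g) ** 2 / len(g) for g in groups
    ) - 3 * (n + 1)

groups = [[7, 8, 6, 10, 9],   # HMD-assisted teaching (hypothetical scores)
          [6, 3, 7, 8, 5],    # conventional teaching
          [7, 1, 6, 7, 4]]    # HMD self-learning
h = kruskal_wallis_h(groups)
print(h < 5.99)  # H below the chi-square critical value (df=2, alpha=0.05):
                 # no significant difference between groups
```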
Vision systems for manned and robotic ground vehicles
NASA Astrophysics Data System (ADS)
Sanders-Reed, John N.; Koon, Phillip L.
2010-04-01
A Distributed Aperture Vision System for ground vehicles is described. An overview of the hardware including sensor pod, processor, video compression, and displays is provided. This includes a discussion of the choice between an integrated sensor pod and individually mounted sensors, open architecture design, and latency issues as well as flat panel versus head mounted displays. This technology is applied to various ground vehicle scenarios, including closed-hatch operations (operator in the vehicle), remote operator tele-operation, and supervised autonomy for multi-vehicle unmanned convoys. In addition, remote vision for automatic perimeter surveillance using autonomous vehicles and automatic detection algorithms is demonstrated.
Conformal, Transparent Printed Antenna Developed for Communication and Navigation Systems
NASA Technical Reports Server (NTRS)
Lee, Richard Q.; Simons, Rainee N.
1999-01-01
Conformal, transparent printed antennas have advantages over conventional antennas in terms of space reuse and aesthetics. Because of their compactness and thin profile, these antennas can be mounted on video displays for efficient integration in communication systems such as palmtop computers, digital telephones, and flat-panel television displays. As an array of multiple elements, the antenna subsystem may save weight by reusing space (via vertical stacking) on photovoltaic arrays or on Earth-facing sensors. Also, the antenna could go unnoticed on automobile windshields or building windows, enabling satellite uplinks and downlinks or other emerging high-frequency communications.
NASA Technical Reports Server (NTRS)
1975-01-01
Signal processing equipment specifications, operating and test procedures, and systems design and engineering are described. Five subdivisions of the overall circuitry are treated: (1) the spectrum analyzer; (2) the spectrum integrator; (3) the velocity discriminator; (4) the display interface; and (5) the formatter. They function in series: (1) first in analog form to provide frequency resolution, (2) then in digital form to achieve signal to noise improvement (video integration) and frequency discrimination, and (3) finally in analog form again for the purpose of real-time display of the significant velocity data. The formatter collects binary data from various points in the processor and provides a serial output for bi-phase recording. Block diagrams are used to illustrate the system.
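The signal-to-noise improvement attributed to video integration comes from averaging successive spectra: the noise standard deviation falls by roughly the square root of the number of integrated spectra. A small illustrative sketch with synthetic data (not the flight hardware's processing):

```python
import numpy as np

rng = np.random.default_rng(0)
n_spectra, n_bins = 64, 128
signal = np.zeros(n_bins)
signal[40] = 1.0                          # a single velocity line

# 64 noisy spectra of the same scene; noise std = 0.5 per spectrum.
spectra = signal + rng.normal(0.0, 0.5, size=(n_spectra, n_bins))
integrated = spectra.mean(axis=0)         # "video integration" stage

# Averaging cuts the noise std by ~sqrt(64) = 8x, so the velocity line
# at bin 40 now stands clearly above the residual noise floor.
print(int(np.argmax(integrated)))  # → 40
```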
Flexible active-matrix displays and shift registers based on solution-processed organic transistors.
Gelinck, Gerwin H; Huitema, H Edzer A; van Veenendaal, Erik; Cantatore, Eugenio; Schrijnemakers, Laurens; van der Putten, Jan B P H; Geuns, Tom C T; Beenhakkers, Monique; Giesbers, Jacobus B; Huisman, Bart-Hendrik; Meijer, Eduard J; Benito, Estrella Mena; Touwslager, Fred J; Marsman, Albert W; van Rens, Bas J E; de Leeuw, Dago M
2004-02-01
At present, flexible displays are an important focus of research. Further development of large, flexible displays requires a cost-effective manufacturing process for the active-matrix backplane, which contains one transistor per pixel. One way to further reduce costs is to integrate (part of) the display drive circuitry, such as row shift registers, directly on the display substrate. Here, we demonstrate flexible active-matrix monochrome electrophoretic displays based on solution-processed organic transistors on 25-microm-thick polyimide substrates. The displays can be bent to a radius of 1 cm without significant loss in performance. Using the same process flow we prepared row shift registers. With 1,888 transistors, these are the largest organic integrated circuits reported to date. More importantly, the operating frequency of 5 kHz is sufficiently high to allow integration with the display operating at video speed. This work therefore represents a major step towards 'system-on-plastic'.
A color video display technique for flow field surveys
NASA Technical Reports Server (NTRS)
Winkelmann, A. E.; Tsao, C. P.
1982-01-01
A computer-driven color video display technique has been developed for the presentation of wind tunnel flow field survey data. The results of both qualitative and quantitative flow field surveys can be presented in high-spatial-resolution, color-coded displays. The technique has been used for data obtained with a hot-wire probe, a split-film probe, a Conrad (pitch) probe, and a 5-tube pressure probe in surveys above and behind a wing with partially stalled and fully stalled flow.
Depth assisted compression of full parallax light fields
NASA Astrophysics Data System (ADS)
Graziosi, Danillo B.; Alpaslan, Zahir Y.; El-Ghoroury, Hussein S.
2015-03-01
Full parallax light field displays require high pixel density and huge amounts of data. Compression is a necessary tool used by 3D display systems to cope with the high bandwidth requirements. One of the formats adopted by MPEG for 3D video coding standards is the use of multiple views with associated depth maps. Depth maps enable the coding of a reduced number of views, and are used by compression and synthesis software to reconstruct the light field. However, most of the developed coding and synthesis tools target linearly arranged cameras with small baselines. Here we propose to use the 3D video coding format for full parallax light field coding. We introduce a view selection method inspired by plenoptic sampling, followed by transform-based view coding and view synthesis prediction to code residual views. We determine the minimal requirements for view sub-sampling and present the rate-distortion performance of our proposal. We also compare our method with established video compression techniques, such as H.264/AVC, H.264/MVC, and the new 3D video coding algorithm, 3DV-ATM. Our results show that our method not only improves rate-distortion performance but also better preserves the structure of the perceived light fields.
Segmented cold cathode display panel
NASA Technical Reports Server (NTRS)
Payne, Leslie (Inventor)
1998-01-01
The present invention is a video display device that utilizes the novel concept of generating an electronically controlled pattern of electron emission at the output of a segmented photocathode. This pattern of electron emission is amplified via a channel plate. The result is that an intense electronic image can be accelerated toward a phosphor, thus creating a bright video image. This novel arrangement allows one to produce a full-color flat video display capable of implementation in large formats. In an alternate arrangement, the present invention is provided without the channel plate, and a porous conducting surface is provided instead. In this alternate arrangement, the brightness of the image is reduced, but the cost of the overall device is significantly lowered because fabrication complexity is significantly decreased.
Microcomputer-Based Digital Signal Processing Laboratory Experiments.
ERIC Educational Resources Information Center
Tinari, Jr., Rocco; Rao, S. Sathyanarayan
1985-01-01
Describes a system (Apple II microcomputer interfaced to flexible, custom-designed digital hardware) which can provide: (1) Fast Fourier Transform (FFT) computation on real-time data with a video display of spectrum; (2) frequency synthesis experiments using the inverse FFT; and (3) real-time digital filtering experiments. (JN)
NASA Astrophysics Data System (ADS)
Jridi, Maher; Alfalou, Ayman
2017-05-01
The major goal of this paper is to investigate the multi-CPU/FPGA SoC (System on Chip) design flow and to transfer the know-how and skills needed to rapidly design embedded real-time vision systems. Our aim is to show how the use of these devices can benefit system-level integration, since they make simultaneous hardware and software development possible. We take facial detection and pretreatments as a case study, since they have great potential to be used in several applications such as video surveillance, building access control, and criminal identification. The designed system uses the Xilinx Zedboard platform, which is the central element of the developed vision system. Video acquisition is performed using either a standard webcam connected to the Zedboard via a USB interface or several IP camera devices. Visualization of the video content and intermediate results is possible via an HDMI interface connected to an HD display. The treatments embedded in the system are as follows: (i) pre-processing, such as edge detection, implemented in the ARM and in the reconfigurable logic; (ii) software implementation of motion detection and face detection using either Viola-Jones or LBP (Local Binary Pattern); and (iii) an application layer to select the processing application and to display results in a web page. One uniquely interesting feature of the proposed system is that two functions have been developed to transmit data from and to the VDMA port. With the proposed optimization, the hardware implementation of the Sobel filter takes 27 ms and 76 ms for 640x480 and 720p resolutions, respectively. Hence, with the FPGA implementation, an acceleration of 5 times is obtained, which allows the processing of 37 fps and 13 fps for 640x480 and 720p resolutions, respectively.
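The reported frame rates follow directly from the per-frame processing times of the FPGA Sobel implementation, since frame rate is the reciprocal of frame time:

```python
# Reported per-frame processing times for the FPGA Sobel filter.
frame_time_vga  = 0.027    # 27 ms per 640x480 frame
frame_time_720p = 0.076    # 76 ms per 720p frame

fps_vga  = 1.0 / frame_time_vga    # ≈ 37 frames per second
fps_720p = 1.0 / frame_time_720p   # ≈ 13 frames per second
print(int(fps_vga), int(fps_720p))  # → 37 13
```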
American Carrier Air Power at the Dawn of a New Century
2005-01-01
NASA Astrophysics Data System (ADS)
Maurer, Calvin R., Jr.; Sauer, Frank; Hu, Bo; Bascle, Benedicte; Geiger, Bernhard; Wenzel, Fabian; Recchi, Filippo; Rohlfing, Torsten; Brown, Christopher R.; Bakos, Robert J.; Maciunas, Robert J.; Bani-Hashemi, Ali R.
2001-05-01
We are developing a video see-through head-mounted display (HMD) augmented reality (AR) system for image-guided neurosurgical planning and navigation. The surgeon wears an HMD that presents him with the augmented stereo view. The HMD is custom fitted with two miniature color video cameras that capture a stereo view of the real-world scene. We are concentrating specifically at this point on cranial neurosurgery, so the images will be of the patient's head. A third video camera, operating in the near infrared, is also attached to the HMD and is used for head tracking. The pose (i.e., position and orientation) of the HMD is used to determine where to overlay anatomic structures segmented from preoperative tomographic images (e.g., CT, MR) on the intraoperative video images. Two SGI 540 Visual Workstation computers process the three video streams and render the augmented stereo views for display on the HMD. The AR system operates in real time at 30 frames/sec with a temporal latency of about three frames (100 ms) and zero relative lag between the virtual objects and the real-world scene. For an initial evaluation of the system, we created AR images using a head phantom with actual internal anatomic structures (segmented from CT and MR scans of a patient) realistically positioned inside the phantom. When using shaded renderings, many users had difficulty appreciating overlaid brain structures as being inside the head. When using wire frames and texture-mapped dot patterns, most users correctly visualized brain anatomy as being internal and could generally appreciate spatial relationships among various objects. The 3D perception of these structures is based on both stereoscopic depth cues and kinetic depth cues, with the user looking at the head phantom from varying positions. The perception of the augmented visualization is natural and convincing. The brain structures appear rigidly anchored in the head, manifesting little or no apparent swimming or jitter. The initial evaluation of the system is encouraging, and we believe that AR visualization might become an important tool for image-guided neurosurgical planning and navigation.
NASA Astrophysics Data System (ADS)
Zetterlind, V.; Pledgie, S.
2009-12-01
Low-cost, low-latency, robust geolocation and display of aerial video is a common need for a wide range of earth observing as well as emergency response and security applications. While hardware costs for aerial video collection systems, GPS, and inertial sensors have been decreasing, software costs for geolocation algorithms and reference imagery/DTED remain expensive and highly proprietary. As part of a Federal Small Business Innovative Research project, MosaicATM and EarthNC, Inc have developed a simple geolocation system based on the Google Earth API and Google's 'built-in' DTED and reference imagery libraries. This system geolocates aerial video based on platform and camera position, attitude, and field-of-view metadata using geometric photogrammetric principles of ray-intersection with DTED. Geolocated video can be directly rectified and viewed in the Google Earth API during processing. Work is underway to extend our geolocation code to NASA World Wind for additional flexibility and a fully open-source platform. In addition to our airborne remote sensing work, MosaicATM has developed the Surface Operations Data Analysis and Adaptation (SODAA) tool, funded by NASA Ames, which supports analysis of airport surface operations to optimize aircraft movements and reduce fuel burn and delays. As part of SODAA, MosaicATM and EarthNC, Inc have developed powerful tools to display national airspace data and time-animated 3D flight tracks in Google Earth for 4D analysis. The SODAA tool can convert raw format flight track data, FAA National Flight Data (NFD), and FAA 'Adaptation' airport surface data to a spatial database representation and then to Google Earth KML. The SODAA client provides users with a simple graphical interface through which to generate queries with a wide range of predefined and custom filters, plot results, and export for playback in Google Earth in conjunction with NFD and Adaptation overlays.
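The ray-intersection geolocation described above can be sketched as a simple march along the camera's view ray against a terrain-elevation function standing in for DTED lookup. This is an illustrative reconstruction of the geometric principle only, not MosaicATM's code; the step size and frame are arbitrary choices.

```python
import math

def geolocate(origin, direction, elevation_at, step=10.0, max_range=50000.0):
    """March along the view ray until it passes below the terrain surface.

    origin and direction are (x, y, z) in a local metric frame;
    elevation_at(x, y) returns terrain height, a stand-in for DTED lookup.
    """
    x, y, z = origin
    dx, dy, dz = direction
    t = 0.0
    while t < max_range:
        px, py, pz = x + t * dx, y + t * dy, z + t * dz
        if pz <= elevation_at(px, py):   # ray has reached the ground
            return (px, py, pz)
        t += step
    return None                          # ray never hit terrain in range

# Usage: camera at 1000 m altitude looking 45 degrees downward, flat terrain.
d = (math.cos(math.radians(45)), 0.0, -math.sin(math.radians(45)))
hit = geolocate((0.0, 0.0, 1000.0), d, lambda x, y: 0.0)
```

A production system would refine the hit point by bisecting the final step and interpolating the elevation grid.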
ERIC Educational Resources Information Center
Plavnick, Joshua B.
2012-01-01
Video modeling is an effective and efficient methodology for teaching new skills to individuals with autism. New technology may enhance video modeling as smartphones or tablet computers allow for portable video displays. However, the reduced screen size may decrease the likelihood of attending to the video model for some children. The present…
Design of multifunction anti-terrorism robotic system based on police dog
NASA Astrophysics Data System (ADS)
You, Bo; Liu, Suju; Xu, Jun; Li, Dongjie
2007-11-01
To address typical limitations of the police dogs and robots currently used for reconnaissance and counter-terrorism, a multifunction anti-terrorism robotic system based on the police dog is introduced. The system is made up of two parts: a portable commanding device and the police dog robotic system. The portable commanding device consists of a power supply module, microprocessor module, LCD display module, wireless data receiving and dispatching module, and commanding module; it implements remote control of the police dogs and real-time monitoring of video and images. The police dog robotic system consists of a microprocessor module, micro video module, wireless data transmission module, power supply module, and offensive weapon module; it collects and transmits video and image data of counter-terrorism sites in real time and launches attacks on command. The system combines the police dog's biological intelligence with a micro robot. Not only does it avoid the complexity of a typical anti-terrorism robot's mechanical structure and control algorithm, but it also widens the working scope of the police dog, meeting the requirements of anti-terrorism in the new era.
Interactive color display for multispectral imagery using correlation clustering
NASA Technical Reports Server (NTRS)
Haskell, R. E. (Inventor)
1979-01-01
A method for processing multispectral data is provided, which permits an operator to make parameter level changes during the processing of the data. The system is directed to production of a color classification map on a video display in which a given color represents a localized region in multispectral feature space. Interactive controls permit an operator to alter the size and change the location of these regions, permitting the classification of such region to be changed from a broad to a narrow classification.
Protective laser beam viewing device
Neil, George R.; Jordan, Kevin Carl
2012-12-18
A protective laser beam viewing system or device including a camera selectively sensitive to laser light wavelengths and a viewing screen receiving images from the laser sensitive camera. According to a preferred embodiment of the invention, the camera is worn on the head of the user or incorporated into a goggle-type viewing display so that it is always aimed at the area of viewing interest to the user and the viewing screen is incorporated into a video display worn as goggles over the eyes of the user.
Smart Camera System for Aircraft and Spacecraft
NASA Technical Reports Server (NTRS)
Delgado, Frank; White, Janis; Abernathy, Michael F.
2003-01-01
This paper describes a new approach to situation awareness that combines video sensor technology and synthetic vision technology in a unique fashion to create a hybrid vision system. Our implementation of the technology, called "SmartCam3D" (SC3D), has been flight tested by both NASA and the Department of Defense with excellent results. This paper details its development and flight test results. Windshields and windows add considerable weight and risk to vehicle design, and because of this, many future vehicles will employ a windowless cockpit design. This windowless cockpit design philosophy prompted us to look at what would be required to develop a system that provides crewmembers and operations personnel an appropriate level of situation awareness. The system created to date provides a real-time 3D perspective display that can be used during all weather and visibility conditions. While the advantages of a synthetic vision only system are considerable, the major disadvantage of such a system is that it displays a synthetic scene created using "static" data acquired by an aircraft or satellite at some point in the past. The SC3D system we are presenting in this paper is a hybrid synthetic vision system that fuses live video stream information with a computer-generated synthetic scene. This hybrid system can display a dynamic, real-time scene of a region of interest, enriched by information from a synthetic environment system, see figure 1. The SC3D system has been flight tested on several X-38 flight tests performed over the last several years and on an Army Unmanned Aerial Vehicle (UAV) ground control station earlier this year. Additional testing using an assortment of UAV ground control stations and UAV simulators from the Army and Air Force will be conducted later this year.
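The fusion of a live video stream with a synthetic scene can be illustrated with a simple per-pixel weighted blend. This is a hypothetical stand-in, since the paper does not specify SC3D's fusion method at this level of detail.

```python
import numpy as np

def fuse(video_frame, synthetic_frame, alpha=0.7):
    """Weighted blend: alpha weights the live video over the synthetic scene."""
    return (alpha * video_frame + (1.0 - alpha) * synthetic_frame).astype(np.uint8)

video = np.full((480, 640, 3), 200, dtype=np.uint8)   # stand-in live frame
synth = np.full((480, 640, 3), 100, dtype=np.uint8)   # stand-in synthetic render
out = fuse(video, synth)
print(out[0, 0, 0])  # → 170
```

In a real hybrid system the synthetic frame would be rendered from the aircraft's tracked pose so both layers share one viewpoint before blending.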
47 CFR 79.109 - Activating accessibility features.
Code of Federal Regulations, 2014 CFR
2014-10-01
... ACCESSIBILITY OF VIDEO PROGRAMMING Apparatus § 79.109 Activating accessibility features. (a) Requirements... video programming transmitted in digital format simultaneously with sound, including apparatus designed to receive or display video programming transmitted in digital format using Internet protocol, with...
ERIC Educational Resources Information Center
Huang, Hsiu-Mei; Liaw, Shu-Sheng; Lai, Chung-Min
2016-01-01
Advanced technologies have been widely applied in medical education, including human-patient simulators, immersive virtual reality Cave Automatic Virtual Environment systems, and video conferencing. Evaluating learner acceptance of such virtual reality (VR) learning environments is a critical issue for ensuring that such technologies are used to…
Video System for Viewing From a Remote or Windowless Cockpit
NASA Technical Reports Server (NTRS)
Banerjee, Amamath
2009-01-01
A system of electronic hardware and software synthesizes, in nearly real time, an image of a portion of a scene surveyed by as many as eight video cameras aimed, in different directions, at portions of the scene. This is a prototype of systems that would enable a pilot to view the scene outside a remote or windowless cockpit. The outputs of the cameras are digitized. Direct memory addressing is used to store the data of a few captured images in sequence, and the sequence is repeated in cycles. Cylindrical warping is used in merging adjacent images at their borders to construct a mosaic image of the scene. The mosaic-image data are written to a memory block from which they can be rendered on a head-mounted display (HMD) device. A subsystem in the HMD device tracks the direction of gaze of the wearer, providing data that are used to select, for display, the portion of the mosaic image corresponding to the direction of gaze. The basic functionality of the system has been demonstrated by mounting the cameras on the roof of a van and steering the van by use of the images presented on the HMD device.
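The cylindrical warping step used to merge adjacent images can be sketched per pixel: each image coordinate is projected onto a cylinder whose radius is the focal length in pixels, so views taken at different pan angles from a common center line up at their borders. The focal length and coordinates below are arbitrary illustrations, not the prototype's parameters.

```python
import math

def cylindrical_warp_point(x, y, cx, cy, f):
    """Map image coordinates (x, y) to cylindrical coordinates.

    (cx, cy) is the image center; f is the focal length in pixels.
    """
    theta = math.atan2(x - cx, f)           # pan angle of the pixel's ray
    h = (y - cy) / math.hypot(x - cx, f)    # normalized height on the cylinder
    return (f * theta + cx, f * h + cy)

# The image center maps to itself; columns farther from the center are
# compressed inward, which lets adjacent views meet smoothly at their borders.
print(cylindrical_warp_point(320, 240, 320, 240, 500))  # → (320.0, 240.0)
```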
Impact of packet losses in scalable 3D holoscopic video coding
NASA Astrophysics Data System (ADS)
Conti, Caroline; Nunes, Paulo; Ducla Soares, Luís.
2014-05-01
Holoscopic imaging has become a promising glasses-free 3D technology for providing more natural 3D viewing experiences to the end user. Additionally, holoscopic systems allow new post-production degrees of freedom, such as controlling the plane of focus or the viewing angle presented to the user. However, to successfully introduce this technology into the consumer market, a display-scalable coding approach is essential to achieve backward compatibility with legacy 2D and 3D displays. Moreover, to effectively transmit 3D holoscopic content over error-prone networks, e.g., wireless networks or the Internet, error resilience techniques are required to mitigate the impact of data impairments on the user's perceived quality. Therefore, it is essential to understand in depth the impact of packet losses on decoded video quality for the specific case of 3D holoscopic content, notably when a scalable approach is used. In this context, this paper studies the impact of packet losses when using a three-layer display-scalable 3D holoscopic video coding architecture previously proposed, where each layer represents a different level of display scalability (i.e., L0 - 2D, L1 - stereo or multiview, and L2 - full 3D holoscopic). For this, a simple error concealment algorithm is used, which makes use of inter-layer redundancy between multiview and 3D holoscopic content and the inherent correlation of the 3D holoscopic content to estimate lost data. Furthermore, a study of the influence of the 2D view generation parameters used in lower layers on the performance of the error concealment algorithm is also presented.
Code of Federal Regulations, 2012 CFR
2012-01-01
... concerning the accessibility of videos, DVDs, and other audio-visual presentations shown on-aircraft to... meet concerning the accessibility of videos, DVDs, and other audio-visual presentations shown on... videos, DVDs, and other audio-visual displays played on aircraft for safety purposes, and all such new...
Code of Federal Regulations, 2013 CFR
2013-01-01
... concerning the accessibility of videos, DVDs, and other audio-visual presentations shown on-aircraft to... meet concerning the accessibility of videos, DVDs, and other audio-visual presentations shown on... videos, DVDs, and other audio-visual displays played on aircraft for safety purposes, and all such new...
47 CFR Appendix - Technical Appendix 1
Code of Federal Regulations, 2010 CFR
2010-10-01
... display program material that has been encoded in any and all of the video formats contained in Table A3... frame rate of the transmitted video format. 2. Output Formats Equipment shall support 4:3 center cut-out... for composite video (yellow). Output shall produce video with ITU-R BT.500-11 quality scale of Grade 4...
Technique for improving solid state mosaic images
NASA Technical Reports Server (NTRS)
Saboe, J. M.
1969-01-01
Method identifies and corrects mosaic image faults in solid state visual displays and opto-electronic presentation systems. Composite video signals containing faults due to defective sensing elements are corrected by a memory unit that contains the stored fault pattern and supplies the appropriate fault word to the blanking circuit.
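The fault-correction idea, substituting replacement values for known-defective elements identified by a stored fault pattern, can be sketched with a fault mask. Here a neighbor average stands in for the stored "fault word" that the actual system supplies to the blanking circuit; the frame data are illustrative.

```python
import numpy as np

frame = np.arange(25, dtype=float).reshape(5, 5)   # stand-in video frame
frame[2, 2] = 255.0                 # defective element reads stuck-bright
fault_mask = np.zeros((5, 5), dtype=bool)
fault_mask[2, 2] = True             # stored fault pattern marks the bad element

corrected = frame.copy()
for r, c in zip(*np.nonzero(fault_mask)):
    # Replace the faulty value with the mean of its valid neighbors.
    block = frame[max(r - 1, 0):r + 2, max(c - 1, 0):c + 2]
    corrected[r, c] = (block.sum() - frame[r, c]) / (block.size - 1)
print(corrected[2, 2])  # → 12.0
```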
User interface using a 3D model for video surveillance
NASA Astrophysics Data System (ADS)
Hata, Toshihiko; Boh, Satoru; Tsukada, Akihiro; Ozaki, Minoru
1998-02-01
These days, industrial surveillance and monitoring applications such as plant control and building security must be carried out quickly and precisely by fewer people. Utilizing multimedia technology is a good approach to meeting this need, and we previously developed Media Controller, which is designed for these applications and provides real-time recording and retrieval of digital video data in a distributed environment. In this paper, we propose a user interface for such a distributed video surveillance system in which 3D models of buildings and facilities are connected to the surveillance video. A novel method of synchronizing camera field data with each frame of a video stream is considered. This method records and reads the camera field data similarly to the video data and transmits it synchronously with the video stream. This enables the user interface to offer such useful functions as comprehending the camera field immediately and providing clues when visibility is poor, for not only live video but also playback video. We have also implemented and evaluated the display function which makes the surveillance video and the 3D model work together, using Media Controller with Java and the Virtual Reality Modeling Language for multi-purpose and intranet use of the 3D model.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beiser, L.; Veligdan, J.
A Planar Optic Display (POD) is being built and tested for suitability as a high-brightness replacement for the cathode ray tube (CRT). The POD display technology utilizes a laminated optical waveguide structure which allows a projection type of display to be constructed in a thin (1 to 2 inch) housing. Inherent in the optical waveguide is a black cladding matrix which gives the display a black appearance, leading to very high contrast. A Digital Micromirror Device (DMD) from Texas Instruments is used to create video images in conjunction with a 100 milliwatt green solid-state laser. An anamorphic optical system is used to inject light into the POD to form a stigmatic image. In addition to the design of the POD screen, we discuss image formation, image projection, and optical design constraints.
Polyplanar optic display for cockpit application
DOE Office of Scientific and Technical Information (OSTI.GOV)
Veligdan, J.; Biscardi, C.; Brewster, C.
1998-04-01
The Polyplanar Optical Display (POD) is a high contrast display screen being developed for cockpit applications. This display screen is 2 inches thick and has a matte black face which allows for high contrast images. The prototype being developed is a form, fit and functional replacement display for the B-52 aircraft, which uses a monochrome ten-inch display. The new display uses a long-lifetime (10,000 hour), 200 mW green solid-state laser (532 nm) as its optical source. In order to produce real-time video, the laser light is modulated by a Digital Light Processing (DLP™) chip manufactured by Texas Instruments, Inc. A variable astigmatic focusing system is used to produce a stigmatic image on the viewing face of the POD. In addition to the optical design and speckle reduction, the authors discuss the electronic interfacing to the DLP™ chip, the opto-mechanical design and viewing angle characteristics.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Veligdan, J.; Biscardi, C.; Brewster, C.
1997-07-01
The Polyplanar Optical Display (POD) is a unique display screen which can be used with any projection source. This display screen is 2 inches thick and has a matte black face which allows for high contrast images. The prototype being developed is a form, fit and functional replacement display for the B-52 aircraft, which uses a monochrome ten-inch display. The new display uses a 100 milliwatt green solid-state laser (532 nm) as its optical source. In order to produce real-time video, the laser light is modulated by a Digital Light Processing (DLP™) chip manufactured by Texas Instruments, Inc. A variable astigmatic focusing system is used to produce a stigmatic image on the viewing face of the POD. In addition to the optical design, the authors discuss the electronic interfacing to the DLP™ chip, the opto-mechanical design and viewing angle characteristics.
Polyplanar optic display for cockpit application
NASA Astrophysics Data System (ADS)
Veligdan, James T.; Biscardi, Cyrus; Brewster, Calvin; DeSanto, Leonard; Freibott, William C.
1998-09-01
The Polyplanar Optical Display (POD) is a high contrast display screen being developed for cockpit applications. This display screen is 2 inches thick and has a matte black face which allows for high contrast images. The prototype being developed is a form, fit and functional replacement display for the B-52 aircraft, which uses a monochrome ten-inch display. The new display uses a long-lifetime (10,000 hour), 200 mW green solid-state laser (532 nm) as its optical source. In order to produce real-time video, the laser light is modulated by a Digital Light Processing (DLP™) chip manufactured by Texas Instruments, Inc. A variable astigmatic focusing system is used to produce a stigmatic image on the viewing face of the POD. In addition to the optical design and speckle reduction, we discuss the electronic interfacing to the DLP™ chip, the opto-mechanical design and viewing angle characteristics.
Three-dimensional simulation, surgical navigation and thoracoscopic lung resection
Kanzaki, Masato; Kikkawa, Takuma; Sakamoto, Kei; Maeda, Hideyuki; Wachi, Naoko; Komine, Hiroshi; Oyama, Kunihiro; Murasugi, Masahide; Onuki, Takamasa
2013-01-01
This report describes a 3-dimensional (3-D) video-assisted thoracoscopic lung resection guided by a 3-D video navigation system with a patient-specific 3-D reconstructed pulmonary model obtained by preoperative simulation. A 78-year-old man was found on chest computed tomography to have a small solitary pulmonary nodule in the left upper lobe. Using a virtual 3-D pulmonary model, the tumor was found to involve two subsegments (S1 + 2c and S3a). Complete video-assisted thoracoscopic bi-subsegmentectomy was selected in the simulation and was performed with lymph node dissection. A 3-D digital vision system was used for the 3-D thoracoscopic procedure. Wearing 3-D glasses, the surgeons observed the patient's reconstructed 3-D model on 3-D liquid-crystal displays and compared the 3-D intraoperative field with the reconstructed pulmonary model. PMID:24964426
Research of Pedestrian Crossing Safety Facilities Based on the Video Detection
NASA Astrophysics Data System (ADS)
Li, Sheng-Zhen; Xie, Quan-Long; Zang, Xiao-Dong; Tang, Guo-Jun
Because existing pedestrian crossing facilities are imperfect, crossings are often chaotic: pedestrians from opposite directions conflict and congest with each other, which severely reduces pedestrian traffic efficiency, obstructs vehicles, and creates potential safety problems. To solve these problems, a pedestrian crossing guidance system based on video detection was researched and designed. A camera monitors pedestrians in real time, a video detection program counts them, and an array of guidance lamps installed along the crosswalk adjusts its color display according to the proportion of pedestrians on each side, guiding the two opposing streams to cross separately. Simulation analysis with a cellular automaton shows that the system reduces pedestrian crossing conflicts, shortens crossing time, and improves the safety of pedestrians crossing.
C-130 Automated Digital Data System (CADDS)
NASA Technical Reports Server (NTRS)
Scofield, C. P.; Nguyen, Chien
1991-01-01
Real-time airborne data acquisition, archiving, and distribution on the NASA/Ames Research Center (ARC) C-130 has been improved over the past three years by the implementation of the C-130 Automated Digital Data System (CADDS). CADDS is a real-time, multitasking, multiprocessing, ROM-based system. It acquires data in flight from both avionics and environmental sensors on all C-130 data lines, and displays the data on video monitors throughout the aircraft.
Full color laser projection display using Kr-Ar laser (white laser) beam-scanning technology
NASA Astrophysics Data System (ADS)
Kim, Yonghoon; Lee, Hang W.; Cha, Seungnam; Lee, Jin-Ho; Park, Youngjun; Park, Jungho; Hong, Sung S.; Hwang, Young M.
1997-07-01
A full-color laser projection display is realized on a large screen using a krypton-argon (white) laser as the light source and acousto-optic devices as light modulators. The main red, green, and blue wavelengths of 647, 515, and 488 nm are separated by dichroic mirrors designed for best performance with s-polarized light at a 45-degree angle of incidence. The separated beams are modulated by three acousto-optic modulators, driven by 1 W RF drivers at 144 MHz, and then recombined by dichroic mirrors. The acousto-optic modulators (AOMs) are fabricated to achieve diffraction efficiency above 80% and a rise time below 50 ns at a video bandwidth of 5 MHz. The recombined RGB beams are scanned by polygonal mirrors for horizontal lines and by a galvanometer for vertical lines. Photodiode detection monitors the rotating polygonal mirrors to compensate for mechanical scanning tolerances and prevent horizontal image jitter. With the acousto-optic modulators extended to a 30 MHz video bandwidth, the laser projection display system described in this paper is expected to be applicable to HDTV.
Graphic overlays in high-precision teleoperation: Current and future work at JPL
NASA Technical Reports Server (NTRS)
Diner, Daniel B.; Venema, Steven C.
1989-01-01
In space teleoperation additional problems arise, including signal transmission time delays. These can greatly reduce operator performance. Recent advances in graphics open new possibilities for addressing these and other problems. Currently a multi-camera system with normal 3-D TV and video graphics capabilities is being developed. Trained and untrained operators will be tested for high precision performance using two force reflecting hand controllers and a voice recognition system to control two robot arms and up to 5 movable stereo or non-stereo TV cameras. A number of new techniques of integrating TV and video graphics displays to improve operator training and performance in teleoperation and supervised automation are evaluated.
Real-Time Visualization of Tissue Ischemia
NASA Technical Reports Server (NTRS)
Bearman, Gregory H. (Inventor); Chrien, Thomas D. (Inventor); Eastwood, Michael L. (Inventor)
2000-01-01
A real-time display of tissue ischemia comprising three CCD video cameras, each with a narrow-bandwidth filter at the appropriate wavelength, is discussed. Through beamsplitters, the cameras simultaneously view an area of tissue suspected of containing ischemic regions. The output of each camera is adjusted to the correct signal intensity before being combined with the others into an image for display. If necessary, a digital signal processor (DSP) can apply image-enhancement algorithms prior to display; current DSP engines are fast enough for real-time operation. Measurement at three wavelengths, combined into a real-time red-green-blue (RGB) video display with a DSP board implementing the image algorithms, provides direct visualization of ischemic areas.
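In outline, the per-camera gain adjustment and RGB combination step might look like the pure-Python sketch below; the function name and gain values are hypothetical, and the patent's actual signal path is camera/DSP hardware, not software:

```python
def combine_channels(imgs, gains):
    """Combine three narrowband camera images into one RGB image.

    imgs: three same-sized 2-D lists of raw intensities, one per wavelength.
    gains: per-camera scale factors that equalize signal intensity before
    the channels are merged (illustrative values, not from the patent).
    Returns a 2-D list of (r, g, b) tuples clipped to 0..255.
    """
    h, w = len(imgs[0]), len(imgs[0][0])
    out = []
    for y in range(h):
        row = []
        for x in range(w):
            # scale each channel by its gain, then clip to 8-bit range
            px = tuple(min(255, max(0, round(img[y][x] * g)))
                       for img, g in zip(imgs, gains))
            row.append(px)
        out.append(row)
    return out
```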
Recent progress of flexible AMOLED displays
NASA Astrophysics Data System (ADS)
Pang, Huiqing; Rajan, Kamala; Silvernail, Jeff; Mandlik, Prashant; Ma, Ruiqing; Hack, Mike; Brown, Julie J.; Yoo, Juhn S.; Jung, Sang-Hoon; Kim, Yong-Cheol; Byun, Seung-Chan; Kim, Jong-Moo; Yoon, Soo-Young; Kim, Chang-Dong; Hwang, Yong-Kee; Chung, In-Jae; Fletcher, Mark; Green, Derek; Pangle, Mike; McIntyre, Jim; Smith, Randal D.
2011-03-01
Significant progress has been made in recent years in flexible AMOLED displays and numerous prototypes have been demonstrated. Replacing rigid glass with flexible substrates and thin-film encapsulation makes displays thinner, lighter, and non-breakable - all attractive features for portable applications. Flexible AMOLEDs equipped with phosphorescent OLEDs are considered one of the best candidates for low-power, rugged, full-color video applications. Recently, we have demonstrated a portable communication display device, built upon a full-color 4.3-inch HVGA foil display with a resolution of 134 dpi using an all-phosphorescent OLED frontplane. The prototype is shaped into a thin and rugged housing that will fit over a user's wrist, providing situational awareness and enabling the wearer to see real-time video and graphics information.
Testing the accuracy of timing reports in visual timing tasks with a consumer-grade digital camera.
Smyth, Rachael E; Oram Cardy, Janis; Purcell, David
2017-06-01
This study tested the accuracy of a visual timing task using a readily available and relatively inexpensive consumer grade digital camera. A visual inspection time task was recorded using short high-speed video clips and the timing as reported by the task's program was compared to the timing as recorded in the video clips. Discrepancies in these two timing reports were investigated further and based on display refresh rate, a decision was made whether the discrepancy was large enough to affect the results as reported by the task. In this particular study, the errors in timing were not large enough to impact the results of the study. The procedure presented in this article offers an alternative method for performing a timing test, which uses readily available hardware and can be used to test the timing in any software program on any operating system and display.
Image sequence analysis workstation for multipoint motion analysis
NASA Astrophysics Data System (ADS)
Mostafavi, Hassan
1990-08-01
This paper describes an application-specific engineering workstation designed and developed to analyze the motion of objects in video sequences. The system combines the software and hardware environment of a modern graphics-oriented workstation with digital image acquisition, processing, and display techniques. In addition to automating and increasing the throughput of data reduction tasks, the objective of the system is to provide less invasive measurement methods by offering the ability to track objects more complex than reflective markers. Grey-level image processing with spatial/temporal adaptation of the processing parameters is used to locate and track complex object features under uncontrolled lighting and background conditions. Applications of such an automated, noninvasive measurement tool include analysis of the trajectory and attitude of rigid bodies such as human limbs, robots, and aircraft in flight. The system's key features are: 1) acquisition and storage of image sequences by digitizing and storing real-time video; 2) computer-controlled movie-loop playback, freeze-frame display, and digital image enhancement; 3) multiple leading-edge tracking, in addition to object centroids, at up to 60 fields per second from live input video or a stored image sequence; 4) model-based estimation and tracking of the six degrees of freedom of a rigid body; 5) field-of-view and spatial calibration; 6) image sequence and measurement database management; and 7) offline analysis software for trajectory plotting and statistical analysis.
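As a toy illustration of grey-level feature localisation, an intensity-weighted centroid can be computed as below; the function and threshold are illustrative assumptions, not the workstation's actual algorithm:

```python
def centroid(frame, threshold):
    """Grey-level centroid of all pixels above threshold.

    frame: 2-D list of intensities. Returns (x, y) weighted by intensity,
    or None if no pixel exceeds the threshold."""
    sx = sy = sw = 0
    for y, row in enumerate(frame):
        for x, v in enumerate(row):
            if v > threshold:
                sx += x * v
                sy += y * v
                sw += v
    return (sx / sw, sy / sw) if sw else None
```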
Space Shuttle Placement Announcement
2011-04-12
A video highlighting the 30 years of space flight and more than 130 missions of the Space Shuttle transportation system is shown at an event where NASA Administrator Charles Bolden announced where the four space shuttle orbiters will be permanently displayed, Tuesday, April 12, 2011, at Kennedy Space Center in Cape Canaveral, Fla. Enterprise, currently on display at the Smithsonian's Steven F. Udvar-Hazy Center near Washington Dulles International Airport, will move to the Intrepid Sea, Air & Space Museum in New York; Discovery will move to Udvar-Hazy; Endeavour will be displayed at the California Science Center in Los Angeles; and Atlantis, in background, will be displayed at the Kennedy Space Center Visitor's Complex. Photo Credit: (NASA/Bill Ingalls)
Static omnidirectional stereoscopic display system
NASA Astrophysics Data System (ADS)
Barton, George G.; Feldman, Sidney; Beckstead, Jeffrey A.
1999-11-01
A unique three-camera stereoscopic omnidirectional viewing system is described, based on the periscopic panoramic camera presented in the 11/98 SPIE proceedings (AM13). The three panoramic cameras are combined in an equilateral arrangement so that each leg of the triangle approximates the human interocular spacing, allowing each panoramic camera to view 240 degrees of the panoramic scene: the most counterclockwise 120-degree segment forms the left-eye field and the other 120-degree segment the right-eye field. The fields may be defined by green/red filtration or by time discrimination of the video signal; in the first case, two-color spectacles are used to view the display, and in the second, LCD goggles differentiate the right and left fields. Radially scanned vidicons or re-mapped CCDs may be used. The display consists of three vertically stacked 120-degree segments of the panoramic field of view with two fields per frame, field A being the left-eye display and field B the right-eye display.
Analysis of the color rendition of flexible endoscopes
NASA Astrophysics Data System (ADS)
Murphy, Edward M.; Hegarty, Francis J.; McMahon, Barry P.; Boyle, Gerard
2003-03-01
Endoscopes are imaging devices routinely used for the diagnosis of disease within the human digestive tract. Light is transmitted into the body cavity via incoherent fibreoptic bundles and is controlled by a light feedback system. Fibreoptic endoscopes use coherent fibreoptic bundles to provide the clinician with an image; it is also possible to couple fibreoptic endoscopes to a clip-on video camera. Video endoscopes consist of a small CCD camera, which is inserted into the gastrointestinal tract, and an associated image processor that converts the signal to analogue RGB video signals. Images from both types of endoscope are displayed on standard video monitors. Diagnosis depends on being able to detect changes in the structure and colour of tissues and biological fluids, and therefore on the ability of the endoscope to reproduce the colour of these tissues and fluids with fidelity. This study investigates the colour reproduction of flexible optical and video endoscopes, which alter image colour characteristics in different ways. The colour rendition of fibreoptic endoscopes was assessed by coupling them to a video camera and applying video colorimetric techniques. These techniques were then used on video endoscopes to assess how their colour rendition compared with that of optical endoscopes. In both cases results were obtained at fixed illumination settings; video endoscopes were then assessed with varying levels of illumination. Initial results show that, at constant luminance, endoscopy systems introduce non-linear shifts in colour. Techniques for examining how this colour shift varies with illumination intensity were developed, and both methodology and results will be presented. We conclude that more rigorous quality assurance is required to reduce colour error, and we are developing calibration procedures applicable to medical endoscopes.
Optical Head-Mounted Computer Display for Education, Research, and Documentation in Hand Surgery.
Funk, Shawn; Lee, Donald H
2016-01-01
Intraoperative photography and video capture are important for the hand surgeon. Recently, the optical head-mounted computer display has been introduced as a means of capturing photographs and videos. In this article, we discuss this new technology and review its potential uses in hand surgery. Copyright © 2016 American Society for Surgery of the Hand. Published by Elsevier Inc. All rights reserved.
Impact of video games on plasticity of the hippocampus.
West, G L; Konishi, K; Diarra, M; Benady-Chorney, J; Drisdelle, B L; Dahmani, L; Sodums, D J; Lepore, F; Jolicoeur, P; Bohbot, V D
2017-08-08
The hippocampus is critical to healthy cognition, yet results in the current study show that action video game players have reduced grey matter within the hippocampus. A subsequent randomised longitudinal training experiment demonstrated that first-person shooting games reduce grey matter within the hippocampus in participants using non-spatial memory strategies. Conversely, participants who use hippocampus-dependent spatial strategies showed increased grey matter in the hippocampus after training. A control group that trained on 3D-platform games displayed growth in either the hippocampus or the functionally connected entorhinal cortex. A third study replicated the effect of action video game training on grey matter in the hippocampus. These results show that video games can be beneficial or detrimental to the hippocampal system depending on the navigation strategy that a person employs and the genre of the game. Molecular Psychiatry advance online publication, 8 August 2017; doi:10.1038/mp.2017.155.
Hanse, J J; Forsman, M
2001-02-01
A method for psychosocial evaluation of potentially stressful or unsatisfactory situations in manual work was developed. It focuses on subjective responses regarding specific situations and is based on interactive worker assessment when viewing video recordings of oneself. The worker is first video-recorded during work. The video is then displayed on the computer terminal, and the filmed worker clicks on virtual controls on the screen whenever an unsatisfactory psychosocial situation appears; a window of questions regarding psychological demands, mental strain and job control is then opened. A library with pictorial information and comments on the selected situations is formed in the computer. The evaluation system, called PSIDAR, was applied in two case studies, one of manual materials handling in an automotive workshop and one of a group of workers producing and testing instrument panels. The findings indicate that PSIDAR can provide data that are useful in a participatory ergonomic process of change.
Meghdadi, Amir H; Irani, Pourang
2013-12-01
We propose a novel video visual analytics system for interactive exploration of surveillance video data. Our approach consists of providing analysts with various views of information related to moving objects in a video. To do this we first extract each object's movement path. We visualize each movement by (a) creating a single action shot image (a still image that coalesces multiple frames), (b) plotting its trajectory in a space-time cube and (c) displaying an overall timeline view of all the movements. The action shots provide a still view of the moving object while the path view presents movement properties such as speed and location. We also provide tools for spatial and temporal filtering based on regions of interest. This allows analysts to filter out large amounts of movement activities while the action shot representation summarizes the content of each movement. We incorporated this multi-part visual representation of moving objects in sViSIT, a tool to facilitate browsing through the video content by interactive querying and retrieval of data. Based on our interaction with security personnel who routinely work with surveillance video data, we identified some of the most common tasks performed. This resulted in designing a user study to measure time-to-completion of the various tasks, which generally required searching for specific events of interest (targets) in videos. Fourteen different tasks were designed and a total of 120 min of surveillance video were recorded (indoor and outdoor locations recording movements of people and vehicles). The time-to-completion of these tasks was compared against manual fast-forward video browsing guided by movement detection. We demonstrate how our system can facilitate lengthy video exploration and significantly reduce browsing time to find events of interest. Reports from expert users identify positive aspects of our approach, which we summarize in our recommendations for future video visual analytics systems.
Flat-panel display solutions for ground-environment military displays (Invited Paper)
NASA Astrophysics Data System (ADS)
Thomas, J., II; Roach, R.
2005-05-01
Displays for military vehicles have very distinct operational and cost requirements that differ from other military applications. These requirements demand that display suppliers for Army and Marine ground environments provide low-cost equipment capable of operating across environmental extremes. Inevitably, COTS components form the foundation of these "affordable" display solutions. This paper outlines the major display requirements and reviews the options that satisfy conflicting and difficult operational demands, using newly developed equipment as an example. Recently, a new supplier was selected for the Drivers Vision Enhancer (DVE) equipment, including the Display Control Module (DCM). The paper outlines the DVE and describes development of a new DCM solution. The DVE programme, with several thousand units presently in service and operational in conflicts such as "Operation Iraqi Freedom", represents a critical balance between cost and performance. We describe design considerations that include selection of COTS sources; the need to minimise display modification; video, power, and operator interfaces; and new provisions to optimise displayed video content.
Helping Video Games Rewire "Our Minds"
NASA Technical Reports Server (NTRS)
Pope, Alan T.; Palsson, Olafur S.
2001-01-01
Biofeedback-modulated video games are games that respond to physiological signals as well as mouse, joystick or game controller input; they embody the concept of improving physiological functioning by rewarding specific healthy body signals with success at playing a video game. The NASA patented biofeedback-modulated game method blends biofeedback into popular off-the-shelf video games in such a way that the games do not lose their entertainment value. This method uses physiological signals (e.g., electroencephalogram frequency band ratio) not simply to drive a biofeedback display directly, or periodically modify a task as in other systems, but to continuously modulate parameters (e.g., game character speed and mobility) of a game task in real time while the game task is being performed by other means (e.g., a game controller). Biofeedback-modulated video games represent a new generation of computer and video game environments that train valuable mental skills beyond eye-hand coordination. These psychophysiological training technologies are poised to exploit the revolution in interactive multimedia home entertainment for the personal improvement, not just the diversion, of the user.
North, Frederick; Hanna, Barbara K; Crane, Sarah J; Smith, Steven A; Tulledge-Scheitel, Sidna M; Stroebel, Robert J
2011-12-01
The patient portal is a web service which allows patients to view their electronic health record, communicate online with their care teams, and manage healthcare appointments and medications. Despite advantages of the patient portal, registrations for portal use have often been slow. Using a secure video system on our existing exam room electronic health record displays during regular office visits, the authors showed patients a video which promoted use of the patient portal. The authors compared portal registrations and portal use following the video to providing a paper instruction sheet and to a control (no additional portal promotion). From the 12,050 office appointments examined, portal registrations within 45 days of the appointment were 11.7%, 7.1%, and 2.5% for video, paper instructions, and control respectively (p<0.0001). Within 6 months following the interventions, 3.5% in the video cohort, 1.2% in the paper, and 0.75% of the control patients demonstrated portal use by initiating portal messages to their providers (p<0.0001).
PeakVizor: Visual Analytics of Peaks in Video Clickstreams from Massive Open Online Courses.
Chen, Qing; Chen, Yuanzhe; Liu, Dongyu; Shi, Conglei; Wu, Yingcai; Qu, Huamin
2016-10-01
Massive open online courses (MOOCs) aim to facilitate open-access and massive-participation education. These courses have attracted millions of learners recently. At present, most MOOC platforms record the web log data of learner interactions with course videos. Such large amounts of multivariate data pose a new challenge in terms of analyzing online learning behaviors. Previous studies have mainly focused on the aggregate behaviors of learners from a summative view; however, few attempts have been made to conduct a detailed analysis of such behaviors. To determine complex learning patterns in MOOC video interactions, this paper introduces a comprehensive visualization system called PeakVizor. This system enables course instructors and education experts to analyze the "peaks" or the video segments that generate numerous clickstreams. The system features three views at different levels: the overview with glyphs to display valuable statistics regarding the peaks detected; the flow view to present spatio-temporal information regarding the peaks; and the correlation view to show the correlation between different learner groups and the peaks. Case studies and interviews conducted with domain experts have demonstrated the usefulness and effectiveness of PeakVizor, and new findings about learning behaviors in MOOC platforms have been reported.
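The notion of a clickstream "peak" can be illustrated with a simple local-maximum detector over per-segment click counts; PeakVizor's actual detector is not specified in the abstract, so the function below is only a hypothetical sketch:

```python
def find_peaks(clicks, threshold):
    """Return indices of video-time bins whose click count is a local
    maximum and exceeds threshold.

    clicks: list of click counts per video segment. A toy stand-in for
    the peak-detection step that feeds PeakVizor's views."""
    peaks = []
    for i in range(1, len(clicks) - 1):
        if clicks[i] > threshold and clicks[i] >= clicks[i - 1] and clicks[i] > clicks[i + 1]:
            peaks.append(i)
    return peaks
```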
A System for Video Surveillance and Monitoring CMU VSAM Final Report
1999-11-30
motion-based skeletonization, neural network, spatio-temporal salience patterns inside image chips, spurious motion rejection, model-based... network of sensors with respect to the model coordinate system, computation of 3D geolocation estimates, and graphical display of object hypotheses... rithms have been developed. The first uses view-dependent visual properties to train a neural network classifier to recognize four classes: single
NASA Astrophysics Data System (ADS)
Froehlich, Jan; Grandinetti, Stefan; Eberhardt, Bernd; Walter, Simon; Schilling, Andreas; Brendel, Harald
2014-03-01
High quality video sequences are required for the evaluation of tone mapping operators and high dynamic range (HDR) displays. We provide scenic and documentary scenes with a dynamic range of up to 18 stops. The scenes are staged using professional film lighting, make-up and set design to enable the evaluation of image and material appearance. To address challenges for HDR-displays and temporal tone mapping operators, the sequences include highlights entering and leaving the image, brightness changing over time, high contrast skin tones, specular highlights and bright, saturated colors. HDR-capture is carried out using two cameras mounted on a mirror-rig. To achieve a cinematic depth of field, digital motion picture cameras with Super-35mm size sensors are used. We provide HDR-video sequences to serve as a common ground for the evaluation of temporal tone mapping operators and HDR-displays. They are available to the scientific community for further research.
Feasibility of video codec algorithms for software-only playback
NASA Astrophysics Data System (ADS)
Rodriguez, Arturo A.; Morse, Ken
1994-05-01
Software-only video codecs can provide good playback performance in desktop computers with a 486 or 68040 CPU running at 33 MHz without special hardware assistance. Typically, playback of compressed video can be categorized into three tasks: the actual decoding of the video stream, color conversion, and the transfer of decoded video data from system RAM to video RAM. By current standards, good playback performance is the decoding and display of video streams of 320 by 240 (or larger) compressed frames at 15 (or greater) frames per second. Software-only video codecs have evolved by modifying and tailoring existing compression methodologies to suit video playback in desktop computers. In this paper we examine the characteristics used to evaluate software-only video codec algorithms, namely: image fidelity (i.e., image quality), bandwidth (i.e., compression), ease of decoding (i.e., playback performance), memory consumption, compression-to-decompression asymmetry, scalability, and delay. We discuss the tradeoffs among these variables and the compromises that can be made to achieve low numerical complexity for software-only playback. Frame-differencing approaches are described, since software-only video codecs typically employ them to enhance playback performance. To complement other papers that appear in this session of the Proceedings, we review methods derived from binary pattern image coding, since these methods are amenable to software-only playback. In particular, we introduce a novel approach called pixel distribution image coding.
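The frame-differencing idea can be sketched as follows: only blocks that changed between consecutive frames need to be re-encoded or redrawn, which cuts both decoding work and RAM-to-video-RAM transfers. Names, block size, and tolerance are illustrative, not any specific codec's parameters:

```python
def frame_diff_blocks(prev, cur, block=8, tol=0):
    """Return (row, col) coordinates of blocks whose pixels changed
    between two frames; only these blocks need re-encoding/redrawing.

    prev, cur: same-sized 2-D lists of pixel values."""
    h, w = len(cur), len(cur[0])
    changed = []
    for by in range(0, h, block):
        for bx in range(0, w, block):
            # flag the block if any pixel differs by more than tol
            if any(abs(cur[y][x] - prev[y][x]) > tol
                   for y in range(by, min(by + block, h))
                   for x in range(bx, min(bx + block, w))):
                changed.append((by // block, bx // block))
    return changed
```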
Satellite-aided coastal zone monitoring and vessel traffic system
NASA Technical Reports Server (NTRS)
Baker, J. L.
1981-01-01
The development and demonstration of a coastal zone monitoring and vessel traffic system is described. This technique uses a LORAN-C navigational system and relays signals via the ATS-3 satellite to a computer driven color video display for real time control. Multi-use applications of the system to search and rescue operations, coastal zone management and marine safety are described. It is emphasized that among the advantages of the system are: its unlimited range; compatibility with existing navigation systems; and relatively inexpensive cost.
A method for the real-time construction of a full parallax light field
NASA Astrophysics Data System (ADS)
Tanaka, Kenji; Aoki, Soko
2006-02-01
We designed and implemented a light field acquisition and reproduction system for dynamic objects called LiveDimension, which serves as a 3D live video system for multiple viewers. The acquisition unit consists of circularly arranged NTSC cameras surrounding an object. The display consists of circularly arranged projectors and a rotating screen. The projectors are constantly projecting images captured by the corresponding cameras onto the screen. The screen rotates around an in-plane vertical axis at a sufficient speed so that it faces each of the projectors in sequence. Since the Lambertian surfaces of the screens are covered by light-collimating plastic films with vertical louver patterns that are used for the selection of appropriate light rays, viewers can only observe images from a projector located in the same direction as the viewer. Thus, the dynamic view of an object is dependent on the viewer's head position. We evaluated the system by projecting both objects and human figures and confirmed that the entire system can reproduce light fields with a horizontal parallax to display video sequences of 430x770 pixels at a frame rate of 45 fps. Applications of this system include product design reviews, sales promotion, art exhibits, fashion shows, and sports training with form checking.
47 CFR 79.107 - User interfaces provided by digital apparatus.
Code of Federal Regulations, 2014 CFR
2014-10-01
... SERVICES ACCESSIBILITY OF VIDEO PROGRAMMING Apparatus § 79.107 User interfaces provided by digital... States and designed to receive or play back video programming transmitted in digital format simultaneously with sound, including apparatus designed to receive or display video programming transmitted in...
47 CFR 79.103 - Closed caption decoder requirements for apparatus.
Code of Federal Regulations, 2014 CFR
2014-10-01
... RADIO SERVICES ACCESSIBILITY OF VIDEO PROGRAMMING Apparatus § 79.103 Closed caption decoder requirements... video programming transmitted simultaneously with sound, if such apparatus is manufactured in the United... with built-in closed caption decoder circuitry or capability designed to display closed-captioned video...
Patterned Video Sensors For Low Vision
NASA Technical Reports Server (NTRS)
Juday, Richard D.
1996-01-01
Miniature video cameras containing photoreceptors arranged in prescribed non-Cartesian patterns, to compensate partly for some visual defects, are proposed. The cameras, accompanied by (and possibly integrated with) miniature head-mounted video display units, would restore some visual function in humans whose visual fields are reduced by defects like retinitis pigmentosa.
Spatiotemporal video deinterlacing using control grid interpolation
NASA Astrophysics Data System (ADS)
Venkatesan, Ragav; Zwart, Christine M.; Frakes, David H.; Li, Baoxin
2015-03-01
With the advent of progressive-format display and broadcast technologies, video deinterlacing has become an important video-processing technique, and numerous approaches exist in the literature. While most earlier methods were simple linear filtering-based approaches, the emergence of faster computing technologies and even dedicated video-processing hardware in display units has allowed higher quality but more computationally intense deinterlacing algorithms to become practical. Most modern approaches analyze motion and content in video to select different deinterlacing methods for various spatiotemporal regions. We introduce a family of deinterlacers that employs spectral residue to choose between, and weight, spatial and temporal deinterlacing methods based on control grid interpolation. The proposed approaches perform better than the prior state-of-the-art based on peak signal-to-noise ratio, other visual quality metrics, and simple perception-based subjective evaluations conducted by human viewers. We further study the advantages of using soft and hard decision thresholds on the visual performance.
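A much-simplified sketch of blending spatial and temporal estimates for a missing scan-line pixel follows. The paper's method uses control grid interpolation for the spatial term and a spectral-residue measure for the weight; the plain line average and fixed alpha here are stand-in assumptions:

```python
def deinterlace_pixel(frame, prev_frame, y, x, alpha):
    """Estimate one missing interior scan-line pixel when deinterlacing.

    Spatial estimate: average of the lines above and below in the
    current frame. Temporal estimate: the co-sited pixel from the
    previous frame. alpha in [0, 1] weights spatial vs temporal."""
    spatial = (frame[y - 1][x] + frame[y + 1][x]) / 2
    temporal = prev_frame[y][x]
    return alpha * spatial + (1 - alpha) * temporal
```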
Manufacturing Methods and Technology Program Automatic In-Process Microcircuit Evaluation.
1980-10-01
methods of controlling the AIME system are with the computer and associated interface (CPU control), and with controls located on the front panels... Sync and Blanking signals. When the AIME system is being operated by the front panel controls, the computer does not influence the system operation. SU... the color video monitor display. The operator controls these parameters by 1) depressing the appropriate key on the keyboard, 2) observing on the
DoD Acquisition Programs. Status of Selected Systems
1988-06-01
the Sense and Destroy Armor Munition were the preferred munitions mix to satisfy this need. In addition, a December 1986 System Threat Assessment... alternative for meeting the need. Recent budget decisions indicate that the Army is wavering on what the system is to consist of or whether all... through a video display, which will portray what the missile seeker sees as the missile cruises at low altitudes. These images will pass through the fiber
A Tool for the Analysis of Motion Picture Film or Video Tape.
ERIC Educational Resources Information Center
Ekman, Paul; Friesen, Wallace V.
1969-01-01
A visual information display and retrieval system (VID-R) is described for application to visual records. VID-R searches and retrieves events by time address (location) or by previously stored observations or measurements. Fields are labeled by writing discriminable binary addresses on the horizontal lines outside the normal viewing area. The…
Distance Learning Plan for the Defense Finance and Accounting Service (DFAS): A Study for the DBMU
1994-09-01
according to the standard (H.261) motion video compression algorithm. Schaphorst, Richard, notes presented at TELECON XIII, San Jose, California, 10...include automatic microphone mixing systems with one microphone for every two student seats, a large screen interactive computer display and the Socrates
Field-Sequential Color Converter
NASA Technical Reports Server (NTRS)
Studer, Victor J.
1989-01-01
Electronic conversion circuit enables display of signals from field-sequential color-television camera on color video monitor. Designed for incorporation into color-television monitor on Space Shuttle, circuit weighs less, takes up less space, and consumes less power than previous conversion equipment. Incorporates state-of-art memory devices, also used in terrestrial stationary or portable closed-circuit television systems.
Display Device Color Management and Visual Surveillance of Vehicles
ERIC Educational Resources Information Center
Srivastava, Satyam
2011-01-01
Digital imaging has seen an enormous growth in the last decade. Today users have numerous choices in creating, accessing, and viewing digital image/video content. Color management is important to ensure consistent visual experience across imaging systems. This is typically achieved using color profiles. In this thesis we identify the limitations…
2007-05-07
Queen Elizabeth II and Prince Philip, The Duke of Edinburgh look on as Goddard employees demonstrate “Science on a Sphere.” This system, developed by the National Oceanic and Atmospheric Administration (NOAA), uses computers and four video projectors to display animated images on the outside of a 6-foot diameter sphere. Photo Credit: (NASA/Pat Izzo)
Flat-panel video resolution LED display system
NASA Astrophysics Data System (ADS)
Wareberg, P. G.; Kennedy, D. I.
The system consists of a 128 x 128 element X-Y addressable LED array fabricated from green-emitting gallium phosphide. The LED array is interfaced with a 128 x 128 matrix TV camera. Associated electronics provides for seven levels of grey scale above zero with a grey scale ratio of square root of 2. Picture elements are on 0.008 inch centers resulting in a resolution of 125 lines-per-inch and a display area of approximately 1 sq. in. The LED array concept lends itself to modular construction, permitting assembly of a flat panel screen of any desired size from 1 x 1 inch building blocks without loss of resolution. A wide range of prospective aerospace applications exist extending from helmet-mounted systems involving small dedicated arrays to multimode cockpit displays constructed as modular screens. High-resolution LED arrays are already used as CRT replacements in military film-marking reconnaissance applications.
Large size three-dimensional video by electronic holography using multiple spatial light modulators
Sasaki, Hisayuki; Yamamoto, Kenji; Wakunami, Koki; Ichihashi, Yasuyuki; Oi, Ryutaro; Senoh, Takanori
2014-01-01
In this paper, we propose a new method of using multiple spatial light modulators (SLMs) to increase the size of three-dimensional (3D) images that are displayed using electronic holography. The scalability of images produced by the previous method had an upper limit that was derived from the path length of the image-readout part. We were able to produce larger colour electronic holographic images with a newly devised space-saving image-readout optical system for multiple reflection-type SLMs. This optical system is designed so that the path length of the image-readout part is half that of the previous method. It consists of polarization beam splitters (PBSs), half-wave plates (HWPs), and polarizers. We used 16 (4 × 4) 4K×2K-pixel SLMs for displaying holograms. The experimental device we constructed was able to perform 20 fps video reproduction in colour of full-parallax holographic 3D images with a diagonal image size of 85 mm and a horizontal viewing-zone angle of 5.6 degrees. PMID:25146685
Large size three-dimensional video by electronic holography using multiple spatial light modulators.
Sasaki, Hisayuki; Yamamoto, Kenji; Wakunami, Koki; Ichihashi, Yasuyuki; Oi, Ryutaro; Senoh, Takanori
2014-08-22
In this paper, we propose a new method of using multiple spatial light modulators (SLMs) to increase the size of three-dimensional (3D) images that are displayed using electronic holography. The scalability of images produced by the previous method had an upper limit that was derived from the path length of the image-readout part. We were able to produce larger colour electronic holographic images with a newly devised space-saving image-readout optical system for multiple reflection-type SLMs. This optical system is designed so that the path length of the image-readout part is half that of the previous method. It consists of polarization beam splitters (PBSs), half-wave plates (HWPs), and polarizers. We used 16 (4 × 4) 4K×2K-pixel SLMs for displaying holograms. The experimental device we constructed was able to perform 20 fps video reproduction in colour of full-parallax holographic 3D images with a diagonal image size of 85 mm and a horizontal viewing-zone angle of 5.6 degrees.
Horowitz, L; Sarkin, J M
1992-01-01
Surveys indicate over 50 million Americans, mostly women, currently operate video display terminals (VDTs) at home or in the workplace. Recent epidemiological studies reveal more than 75% of approximately 30 million American temporomandibular disorder (TMD) sufferers are women. What do VDTs and TMD have in common besides an affinity for the female gender? TMD is associated with numerous risk factors that commonly initiate sympathetic nervous system and stress hormone response mechanisms, resulting in muscle spasms, trigger point formation, and pain in the head and neck. Likewise, VDT operation may be linked to three additional sympathetic nervous system irritants: (1) electrostatic ambient air negative ion depletion, (2) electromagnetic radiation, and (3) eyestrain and postural stress associated with poor work habits and improper work station design. Additional research considering the roles these three factors may play in the etiology of TMD and other myofascial pain problems is indicated. Furthermore, dentists are advised to educate patients as to these possible risks, encourage preventive behaviors on the part of employers and employees, and recommend workplace health, safety, and ergonomic upgrades when indicated.
Overview of FTV (free-viewpoint television)
NASA Astrophysics Data System (ADS)
Tanimoto, Masayuki
2010-07-01
We have developed a new type of television named FTV (Free-viewpoint TV). FTV is the ultimate 3DTV that enables us to view a 3D scene by freely changing our viewpoints. We proposed the concept of FTV and constructed the world's first real-time system including the complete chain of operation from image capture to display. FTV is based on the ray-space method, which represents one ray in real space with one point in the ray-space. We have developed ray capture, processing, and display technologies for FTV. FTV can be carried out today in real time on a single PC or on a mobile player. We also realized FTV with free listening-point audio. The international standardization of FTV has been conducted in MPEG. The first phase of FTV was MVC (Multi-view Video Coding) and the second phase is 3DV (3D Video). MVC was completed in May 2009. The Blu-ray 3D specification has adopted MVC for compression. 3DV is a standard that targets serving a variety of 3D displays. The view generation function of FTV is used to decouple capture and display in 3DV. FDU (FTV Data Unit) is proposed as a data format for 3DV. The FDU can compensate for errors in the synthesized views caused by depth error.
Efficient stereoscopic contents file format on the basis of ISO base media file format
NASA Astrophysics Data System (ADS)
Kim, Kyuheon; Lee, Jangwon; Suh, Doug Young; Park, Gwang Hoon
2009-02-01
A lot of 3D content has been widely used for multimedia services; however, real 3D video content has been adopted only for limited applications such as specially designed 3D cinemas. This is because of the difficulty of capturing real 3D video content and the limitations of the display devices available in the market. Recently, however, diverse types of display devices for stereoscopic video content have been released in the market. In particular, a mobile phone with a stereoscopic camera has been released, which allows a user, as a consumer, to have more realistic experiences without glasses, and also, as a content creator, to take stereoscopic images or record stereoscopic video. However, a user can only store and display these acquired stereoscopic contents on his/her own devices because no common file format for them exists. This limitation prevents users from sharing their content with other users, which makes it difficult for the market for stereoscopic content to expand. Therefore, this paper proposes a common file format on the basis of the ISO base media file format for stereoscopic content, which enables users to store and exchange pure stereoscopic content. This technology is also currently under development as an international standard of MPEG, called the stereoscopic video application format.
OPSO - The OpenGL based Field Acquisition and Telescope Guiding System
NASA Astrophysics Data System (ADS)
Škoda, P.; Fuchs, J.; Honsa, J.
2006-07-01
We present OPSO, a modular pointing and auto-guiding system for the coudé spectrograph of the Ondřejov observatory 2m telescope. The current field and slit viewing CCD cameras with image intensifiers give only standard TV video output. To allow the acquisition and guiding of very faint targets, we have designed an image-enhancing system working in real time on TV frames grabbed by a BT878-based video capture card. Its basic capabilities include the sliding averaging of hundreds of frames with bad-pixel masking and removal of outliers, display of the median of a set of frames, quick zooming, contrast and brightness adjustment, plotting of horizontal and vertical cross-cuts of the seeing disk within a given intensity range, and more. From the programmer's point of view, the system consists of three tasks running in parallel on a Linux PC. One C task controls the video capturing over the Video for Linux (v4l2) interface and feeds the frames into a large block of shared memory, where the core image processing is done by another C program calling the OpenGL library. The GUI is, however, dynamically built in Python from an XML description of widgets prepared in Glade. All tasks exchange information by IPC calls using the shared memory segments.
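The frame-integration step described above (sliding averaging with outlier removal and bad-pixel masking) can be sketched as follows. This is an illustrative reconstruction, not OPSO's code; the MAD-based outlier cut and all names are our assumptions:

```python
import numpy as np

def enhance_stack(frames, bad_pixel_mask, sigma=3.0):
    """Average a stack of noisy frames, rejecting per-pixel outliers
    against the median (MAD-based cut) and patching flagged bad pixels
    with a local neighbourhood mean.
    """
    stack = np.asarray(frames, dtype=np.float64)            # (N, H, W)
    med = np.median(stack, axis=0)
    mad = np.median(np.abs(stack - med), axis=0) + 1e-9     # robust spread
    keep = np.abs(stack - med) < sigma * 1.4826 * mad       # outlier cut
    avg = np.where(keep, stack, 0.0).sum(axis=0) / keep.sum(axis=0).clip(min=1)
    out = avg.copy()
    for y, x in zip(*np.nonzero(bad_pixel_mask)):           # bad-pixel patching
        out[y, x] = avg[max(y - 1, 0):y + 2, max(x - 1, 0):x + 2].mean()
    return out
```

A real-time system would maintain this as a running (sliding) sum rather than re-stacking hundreds of frames per update, but the per-pixel logic is the same.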
NASA Technical Reports Server (NTRS)
2004-01-01
Ever wonder whether a still shot from a home video could serve as a "picture perfect" photograph worthy of being framed and proudly displayed on the mantle? Wonder no more. A critical imaging code used to enhance video footage taken from spaceborne imaging instruments is now available within a portable photography tool capable of producing an optimized, high-resolution image from multiple video frames.
Code of Federal Regulations, 2013 CFR
2013-04-01
... recordings and/or digital records shall be provided to the Commission upon request. (x) Video library log. A... events on video and/or digital recordings. The displayed date and time shall not significantly obstruct... each gaming machine change booth. (w) Video recording and/or digital record retention. (1) All video...
Code of Federal Regulations, 2012 CFR
2012-04-01
... recordings and/or digital records shall be provided to the Commission upon request. (x) Video library log. A... events on video and/or digital recordings. The displayed date and time shall not significantly obstruct... each gaming machine change booth. (w) Video recording and/or digital record retention. (1) All video...
Code of Federal Regulations, 2014 CFR
2014-04-01
... recordings and/or digital records shall be provided to the Commission upon request. (x) Video library log. A... events on video and/or digital recordings. The displayed date and time shall not significantly obstruct... each gaming machine change booth. (w) Video recording and/or digital record retention. (1) All video...
Effectiveness of Immersive Videos in Inducing Awe: An Experimental Study.
Chirico, Alice; Cipresso, Pietro; Yaden, David B; Biassoni, Federica; Riva, Giuseppe; Gaggioli, Andrea
2017-04-27
Awe, a complex emotion composed of the appraisal components of vastness and need for accommodation, is a profound and often meaningful experience. Despite its importance, psychologists have only recently begun empirical study of awe. At the experimental level, a main issue concerns how to elicit high-intensity awe experiences in the lab. To address this issue, Virtual Reality (VR) has been proposed as a potential solution. Here, we considered the most realistic form of VR: immersive videos. Forty-two participants watched immersive and normal 2D videos displaying awe-inducing or neutral content. After the experience, they rated their level of awe and sense of presence. Participants' psychophysiological responses (BVP, SC, sEMG) were recorded during the whole video exposure. We hypothesized that the immersive video condition would increase the intensity of awe experienced compared to 2D screen videos. Results indicated that immersive videos significantly enhanced the self-reported intensity of awe as well as the sense of presence. Immersive videos displaying awe-inducing content also led to higher parasympathetic activation. These findings indicate the advantages of using VR in the experimental study of awe, with methodological implications for the study of other emotions.
High speed imager test station
Yates, George J.; Albright, Kevin L.; Turko, Bojan T.
1995-01-01
A test station enables the performance of a solid state imager (herein called a focal plane array or FPA) to be determined at high image frame rates. A programmable waveform generator is adapted to generate clock pulses at determinable rates for clocking light-induced charges from an FPA. The FPA is mounted on an imager header board for placing the imager in operable proximity to level shifters for receiving the clock pulses and outputting pulses effective to clock charge from the pixels forming the FPA. Each of the clock level shifters is driven by leading and trailing edge portions of the clock pulses to reduce power dissipation in the FPA. Analog circuits receive output charge pulses clocked from the FPA pixels. The analog circuits condition the charge pulses to cancel noise in the pulses and to determine and hold a peak value of the charge for digitizing. A high speed digitizer receives the peak signal value and outputs a digital representation of each one of the charge pulses. A video system then displays an image associated with the digital representation of the output charge pulses clocked from the FPA. In one embodiment, the FPA image is formatted to a standard video format for display on conventional video equipment.
High speed imager test station
Yates, G.J.; Albright, K.L.; Turko, B.T.
1995-11-14
A test station enables the performance of a solid state imager (herein called a focal plane array or FPA) to be determined at high image frame rates. A programmable waveform generator is adapted to generate clock pulses at determinable rates for clocking light-induced charges from an FPA. The FPA is mounted on an imager header board for placing the imager in operable proximity to level shifters for receiving the clock pulses and outputting pulses effective to clock charge from the pixels forming the FPA. Each of the clock level shifters is driven by leading and trailing edge portions of the clock pulses to reduce power dissipation in the FPA. Analog circuits receive output charge pulses clocked from the FPA pixels. The analog circuits condition the charge pulses to cancel noise in the pulses and to determine and hold a peak value of the charge for digitizing. A high speed digitizer receives the peak signal value and outputs a digital representation of each one of the charge pulses. A video system then displays an image associated with the digital representation of the output charge pulses clocked from the FPA. In one embodiment, the FPA image is formatted to a standard video format for display on conventional video equipment. 12 figs.
Glasses-free large size high-resolution three-dimensional display based on the projector array
NASA Astrophysics Data System (ADS)
Sang, Xinzhu; Wang, Peng; Yu, Xunbo; Zhao, Tianqi; Gao, Xing; Xing, Shujun; Yu, Chongxiu; Xu, Daxiong
2014-11-01
Normally, a huge amount of spatial information is required to increase the number of views and to provide the smooth motion parallax needed for a natural three-dimensional (3D) display similar to real life. However, the minimum 3D information needed by the eyes should be used, to reduce the demands on display devices and processing time. For a 3D display with smooth motion parallax similar to a holographic stereogram, the size of the virtual viewing slit should be smaller than the pupil of the eye at the largest viewing distance. To increase the resolution, two glasses-free 3D display systems, rear-projection and front-projection, are presented based on space multiplexing with a micro-projector array and specially designed 3D diffuse screens larger than 1.8 m x 1.2 m. The displayed clear depth is larger than 1.5 m. The flexibility of digitized recording and reconstruction based on the 3D diffuse screen relieves the limitations of conventional 3D display technologies and can realize fully continuous, natural 3D display. In the display system, aberration is well suppressed and low crosstalk is achieved.
1987-07-01
OVER TIME The phosphor stability over time was studied by measuring the spectrum over an extended period of time. On each day the spectrum of the...intensity, it causes the display to change in order to keep the light intensity constant. For example, in one case, the high intensity room lights were...MC1445. This device has the capability of switching from one video source to another in a very short time, 20 ns. The MC1445 is used to switch from
1989-09-01
additional information on the TSV or CGV records. The added capability of directly accessing the TSV or CGV records from the CRT display would be very...or CGV records to find the appropriate time tag and begin playback at that point. With a dual or split screen arrangement, the TSV and CGV recordings...features that will be needed in a computer generated video (CGV) map display in order to provide feedback on tactical movement as it relates to crew
Mask, Lisa; Blanchard, Céline M
2011-09-01
The present study examines the protective role of an autonomous regulation of eating behaviors (AREB) on the relationship between trait body dissatisfaction and women's body image concerns and eating-related intentions in response to "thin ideal" media. Undergraduate women (n=138) were randomly assigned to view a "thin ideal" video or a neutral video. As hypothesized, trait body dissatisfaction predicted more negative affect and size dissatisfaction following exposure to the "thin ideal" video among women who displayed less AREB. Conversely, trait body dissatisfaction predicted greater intentions to monitor food intake and limit unhealthy foods following exposure to the "thin ideal" video among women who displayed more AREB. Copyright © 2011 Elsevier Ltd. All rights reserved.
Video image stabilization and registration--plus
NASA Technical Reports Server (NTRS)
Hathaway, David H. (Inventor)
2009-01-01
A method of stabilizing a video image displayed in multiple video fields of a video sequence includes the steps of: subdividing a selected area of a first video field into nested pixel blocks; determining horizontal and vertical translation of each of the pixel blocks in each of the pixel block subdivision levels from the first video field to a second video field; and determining translation of the image from the first video field to the second video field by determining a change in magnification of the image from the first video field to the second video field in each of horizontal and vertical directions, and determining shear of the image from the first video field to the second video field in each of the horizontal and vertical directions.
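The per-block translation estimate in the method above can be illustrated with an exhaustive block-matching search; this is a generic sketch, not the patented nested-block procedure, and the sum-of-absolute-differences criterion and names are our assumptions:

```python
import numpy as np

def block_translation(ref, block, search=3):
    """Find the (dy, dx) shift of `block` inside reference frame `ref`
    by exhaustive search minimising the sum of absolute differences.
    `ref` must extend `search` pixels around the block's nominal
    position; (0, 0) means the block sits at the centre of `ref`.
    """
    h, w = block.shape
    best_sad, best = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            ys, xs = dy + search, dx + search
            window = ref[ys:ys + h, xs:xs + w]
            sad = np.abs(window.astype(np.float64) - block).sum()
            if sad < best_sad:
                best_sad, best = sad, (dy, dx)
    return best
```

Running this on nested block subdivisions of successive fields, as the abstract describes, gives per-level shifts from which overall translation, magnification change, and shear between fields can be fitted.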
Hardware and software improvements to a low-cost horizontal parallax holographic video monitor.
Henrie, Andrew; Codling, Jesse R; Gneiting, Scott; Christensen, Justin B; Awerkamp, Parker; Burdette, Mark J; Smalley, Daniel E
2018-01-01
Displays capable of true holographic video have been prohibitively expensive and difficult to build. With this paper, we present a suite of modularized hardware components and software tools needed to build a HoloMonitor with basic "hacker-space" equipment, highlighting improvements that have enabled the total materials cost to fall to $820, well below that of other holographic displays. It is our hope that the current level of simplicity, development, design flexibility, and documentation will enable the lay engineer, programmer, and scientist to relatively easily replicate, modify, and build upon our designs, bringing true holographic video to the masses.
NASA Technical Reports Server (NTRS)
Culp, Robert D. (Editor); Bickley, George (Editor)
1993-01-01
Papers from the sixteenth annual American Astronautical Society Rocky Mountain Guidance and Control Conference are presented. The topics covered include the following: advances in guidance, navigation, and control; control system videos; guidance, navigation and control embedded flight control systems; recent experiences; guidance and control storyboard displays; and applications of modern control, featuring the Hubble Space Telescope (HST) performance enhancement study.
Viewing Welds By Computer Tomography
NASA Technical Reports Server (NTRS)
Pascua, Antonio G.; Roy, Jagatjit
1990-01-01
Computer tomography system used to inspect welds for root penetration. Source illuminates rotating welded part with fan-shaped beam of x rays or gamma rays. Detectors in circular array on opposite side of part intercept beam and convert it into electrical signals. Computer processes signals into image of cross section of weld. Image displayed on video monitor. System offers only nondestructive way to check penetration from outside when inner surfaces inaccessible.
NASA Astrophysics Data System (ADS)
Kachejian, Kerry C.; Vujcic, Doug
1999-07-01
The Tactical Visualization Module (TVM) research effort will develop and demonstrate a portable, tactical information system to enhance the situational awareness of individual warfighters and small military units by providing real-time access to manned and unmanned aircraft, tactically mobile robots, and unattended sensors. TVM consists of a family of portable and hand-held devices being advanced into a next- generation, embedded capability. It enables warfighters to visualize the tactical situation by providing real-time video, imagery, maps, floor plans, and 'fly-through' video on demand. When combined with unattended ground sensors, such as Combat- Q, TVM permits warfighters to validate and verify tactical targets. The use of TVM results in faster target engagement times, increased survivability, and reduction of the potential for fratricide. TVM technology can support both mounted and dismounted tactical forces involved in land, sea, and air warfighting operations. As a PCMCIA card, TVM can be embedded in portable, hand-held, and wearable PCs. Thus, it leverages emerging tactical displays including flat-panel, head-mounted displays. The end result of the program will be the demonstration of the system with U.S. Army and USMC personnel in an operational environment. Raytheon Systems Company, the U.S. Army Soldier Systems Command -- Natick RDE Center (SSCOM- NRDEC) and the Defense Advanced Research Projects Agency (DARPA) are partners in developing and demonstrating the TVM technology.
Real-time free-viewpoint DIBR for large-size 3DLED
NASA Astrophysics Data System (ADS)
Wang, NengWen; Sang, Xinzhu; Guo, Nan; Wang, Kuiru
2017-10-01
Three-dimensional (3D) display technologies have made great progress in recent years, and lenticular-array-based 3D display is a relatively mature technology that is most likely to be commercialized. In naked-eye 3D display, the screen size is one of the most important factors affecting the viewing experience. In order to construct a large-size naked-eye 3D display system, an LED display is used. However, pixel misalignment is an inherent defect of the LED screen, which degrades the rendering quality. To address this issue, an efficient image synthesis algorithm is proposed. The Texture-Plus-Depth (T+D) format is chosen for the display content, and a modified Depth Image Based Rendering (DIBR) method is proposed to synthesize new views. In order to achieve real-time performance, the whole algorithm is implemented on a GPU. With state-of-the-art hardware and the efficient algorithm, a naked-eye 3D display system with an LED screen size of 6 m x 1.8 m is achieved. Experiments show that the algorithm can process 43-view 3D video with 4K x 2K resolution in real time on the GPU, and a vivid 3D experience is perceived.
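The core of DIBR view synthesis from a Texture-Plus-Depth frame can be sketched as a forward warp with a z-buffer. This is a bare-bones illustration, not the paper's modified method: hole filling and LED-misalignment correction are omitted, depth is assumed normalised to [0, 1], and `baseline` is an illustrative scale rather than a parameter from the paper:

```python
import numpy as np

def dibr_shift(texture, depth, baseline=8.0):
    """Synthesize a horizontally shifted view from a texture-plus-depth
    (T+D) frame: each pixel is forward-warped by a disparity proportional
    to its normalised depth, with a z-buffer so nearer pixels win at
    collisions.
    """
    h, w = depth.shape
    view = np.zeros_like(texture, dtype=np.float64)
    zbuf = np.full((h, w), -np.inf)
    disparity = np.round(baseline * depth).astype(int)  # depth in [0, 1]
    for y in range(h):
        for x in range(w):
            xs = x + disparity[y, x]
            if 0 <= xs < w and depth[y, x] > zbuf[y, xs]:
                zbuf[y, xs] = depth[y, x]               # nearer pixel wins
                view[y, xs] = texture[y, x]
    return view
```

Generating 43 views amounts to running this warp once per virtual camera with a different `baseline`, which is why the per-pixel independence of the loop maps well onto a GPU.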
A Web-based, secure, light weight clinical multimedia data capture and display system.
Wang, S S; Starren, J
2000-01-01
Computer-based patient records are traditionally composed of textual data. Integration of multimedia data has historically been slow. Multimedia data such as images, audio, and video have traditionally been more difficult to handle. An implementation of a clinical system for multimedia data is discussed. The system implementation uses Java, Secure Socket Layer (SSL), and Oracle 8i. The system runs on top of the Internet, so it is architecture-independent, cross-platform, cross-vendor, and secure. Design and implementation issues are discussed.
Veligdan, James T.
2005-05-31
A video image is displayed from an optical panel by splitting the image into a plurality of image components, and then projecting the image components through corresponding portions of the panel to collectively form the image. Depth of the display is correspondingly reduced.
Veligdan, James T [Manorville, NY]
2007-05-29
A video image is displayed from an optical panel by splitting the image into a plurality of image components, and then projecting the image components through corresponding portions of the panel to collectively form the image. Depth of the display is correspondingly reduced.
An Imaging And Graphics Workstation For Image Sequence Analysis
NASA Astrophysics Data System (ADS)
Mostafavi, Hassan
1990-01-01
This paper describes an application-specific engineering workstation designed and developed to analyze imagery sequences from a variety of sources. The system combines the software and hardware environment of the modern graphic-oriented workstations with the digital image acquisition, processing and display techniques. The objective is to achieve automation and high throughput for many data reduction tasks involving metric studies of image sequences. The applications of such an automated data reduction tool include analysis of the trajectory and attitude of aircraft, missile, stores and other flying objects in various flight regimes including launch and separation as well as regular flight maneuvers. The workstation can also be used in an on-line or off-line mode to study three-dimensional motion of aircraft models in simulated flight conditions such as wind tunnels. The system's key features are: 1) Acquisition and storage of image sequences by digitizing real-time video or frames from a film strip; 2) computer-controlled movie loop playback, slow motion and freeze frame display combined with digital image sharpening, noise reduction, contrast enhancement and interactive image magnification; 3) multiple leading edge tracking in addition to object centroids at up to 60 fields per second from both live input video or a stored image sequence; 4) automatic and manual field-of-view and spatial calibration; 5) image sequence data base generation and management, including the measurement data products; 6) off-line analysis software for trajectory plotting and statistical analysis; 7) model-based estimation and tracking of object attitude angles; and 8) interface to a variety of video players and film transport sub-systems.
System for clinical photometric stereo endoscopy
NASA Astrophysics Data System (ADS)
Durr, Nicholas J.; González, Germán.; Lim, Daryl; Traverso, Giovanni; Nishioka, Norman S.; Vakoc, Benjamin J.; Parot, Vicente
2014-02-01
Photometric stereo endoscopy is a technique that captures information about the high-spatial-frequency topography of the field of view simultaneously with a conventional color image. Here we describe a system that will enable photometric stereo endoscopy to be clinically evaluated in the large intestine of human patients. The clinical photometric stereo endoscopy system consists of a commercial gastroscope, a commercial video processor, an image capturing and processing unit, custom synchronization electronics, white light LEDs, a set of four fibers with diffusing tips, and an alignment cap. The custom pieces that come into contact with the patient are composed of biocompatible materials that can be sterilized before use. The components can then be assembled in the endoscopy suite before use. The resulting endoscope has the same outer diameter as a conventional colonoscope (14 mm), plugs into a commercial video processor, captures topography and color images at 15 Hz, and displays the conventional color image to the gastroenterologist in real-time. We show that this system can capture a color and topographical video in a tubular colon phantom, demonstrating robustness to complex geometries and motion. The reported system is suitable for in vivo evaluation of photometric stereo endoscopy in the human large intestine.
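The principle behind recovering topography from several known illumination directions, as in the four diffusing fibers above, is classical photometric stereo; a minimal least-squares sketch follows (a textbook illustration under the Lambertian assumption, not the clinical system's code):

```python
import numpy as np

def photometric_normals(images, light_dirs):
    """Recover per-pixel surface normals and albedo from images taken
    under known distant lighting directions, by least squares on the
    Lambertian model I = albedo * dot(N, L).
    """
    n_img = len(images)
    h, w = images[0].shape
    I = np.stack(images).reshape(n_img, -1)        # (n_img, H*W) intensities
    L = np.asarray(light_dirs, dtype=np.float64)   # (n_img, 3) unit lights
    G, *_ = np.linalg.lstsq(L, I, rcond=None)      # G = albedo * N, (3, H*W)
    albedo = np.linalg.norm(G, axis=0)
    normals = (G / (albedo + 1e-12)).reshape(3, h, w)
    return normals, albedo.reshape(h, w)
```

With at least three non-coplanar lighting directions the system is determined; a fourth light, as in the endoscope, over-determines it and improves robustness to noise and specular outliers.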
Holodeck: Telepresence Dome Visualization System Simulations
NASA Technical Reports Server (NTRS)
Hite, Nicolas
2012-01-01
This paper explores the simulation and consideration of different image-projection strategies for the Holodeck, a dome that will be used for highly immersive telepresence operations in future endeavors of the National Aeronautics and Space Administration (NASA). Its visualization system will include a full 360 degree projection onto the dome's interior walls in order to display video streams from both simulations and recorded video. Because humans innately trust their vision to precisely report their surroundings, the Holodeck's visualization system is crucial to its realism. This system will be rigged with an integrated hardware and software infrastructure: a system of projectors that will relay with a Graphics Processing Unit (GPU) and computer to both project images onto the dome and correct warping in those projections in real time. Using both Computer-Aided Design (CAD) and ray-tracing software, virtual models of various dome/projector geometries were created and simulated via tracking and analysis of virtual light sources, leading to the selection of two possible configurations for installation. Research into image warping and the generation of dome-ready video content was also conducted, including generation of fisheye images, distortion correction, and the creation of a reliable content-generation pipeline.
Digital video system for on-line portal verification
NASA Astrophysics Data System (ADS)
Leszczynski, Konrad W.; Shalev, Shlomo; Cosby, N. Scott
1990-07-01
A digital system has been developed for on-line acquisition, processing and display of portal images during radiation therapy treatment. A metal/phosphor screen combination is the primary detector, where the conversion from high-energy photons to visible light takes place. A mirror angled at 45 degrees reflects the primary image to a low-light-level camera, which is removed from the direct radiation beam. The image registered by the camera is digitized, processed and displayed on a CRT monitor. Advanced digital techniques for processing of on-line images have been developed and implemented to enhance image contrast and suppress the noise. Some elements of automated radiotherapy treatment verification have been introduced.
The optical design of ultra-short throw system for panel emitted theater video system
NASA Astrophysics Data System (ADS)
Huang, Jiun-Woei
2015-07-01
In the past decade, display formats have advanced from HD (High Definition) through Full HD (1920x1080) to UHD (4K x 2K), pushing the display industry in two directions: liquid crystal displays (LCDs), from 10 inches to 100 inches and beyond, and projectors. Although LCDs dominate the market, producing such displays requires heavy capital investment and raises environmental concerns [1]. Projection systems merit consideration for their wider viewing access, flexible placement, energy savings, and environmental benefits. This work designs and fabricates a short-throw liquid crystal on silicon (LCoS) projection system for cinema. It provides a projection lens system, including a telecentric lens matched to the LCoS panel that collimates light to enlarge the field angle. The optical path is then guided by a symmetric lens; light from the LCoS passes through the lens and reflects off an aspherical mirror to form a low-distortion image on a blank wall or screen for home cinema. The throw ratio is less than 0.33.
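The quoted throw ratio can be checked with simple arithmetic: throw ratio is the projection distance divided by the projected image width. A minimal sketch (the distances below are illustrative, not taken from the paper):

```python
# Throw ratio = projection distance / projected image width.
# An ultra-short-throw design such as this one targets a ratio below 0.33.
def throw_ratio(distance_m: float, image_width_m: float) -> float:
    return distance_m / image_width_m

# e.g. a 2.2 m wide image projected from only 0.6 m away
r = throw_ratio(0.6, 2.2)
assert r < 0.33  # qualifies as ultra-short throw
```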
Rehm, K; Seeley, G W; Dallas, W J; Ovitt, T W; Seeger, J F
1990-01-01
One of the goals of our research in the field of digital radiography has been to develop contrast-enhancement algorithms for eventual use in the display of chest images on video devices with the aim of preserving the diagnostic information presently available with film, some of which would normally be lost because of the smaller dynamic range of video monitors. The ASAHE algorithm discussed in this article has been tested by investigating observer performance in a difficult detection task involving phantoms and simulated lung nodules, using film as the output medium. The results of the experiment showed that the algorithm is successful in providing contrast-enhanced, natural-looking chest images while maintaining diagnostic information. The algorithm did not effect an increase in nodule detectability, but this was not unexpected because film is a medium capable of displaying a wide range of gray levels. It is sufficient at this stage to show that there is no degradation in observer performance. Future tests will evaluate the performance of the ASAHE algorithm in preparing chest images for video display.
Interactive Video in Training. Computers in Personnel--Making Management Profitable.
ERIC Educational Resources Information Center
Copeland, Peter
Interactive video is achieved by merging the two powerful technologies of microcomputing and video. Using television as the vehicle for display, text and diagrams, filmic images, and sound can be used separately or in combination to achieve a specific training task. An interactive program can check understanding, determine progress, and challenge…
Future of photorefractive based holographic 3D display
NASA Astrophysics Data System (ADS)
Blanche, P.-A.; Bablumian, A.; Voorakaranam, R.; Christenson, C.; Lemieux, D.; Thomas, J.; Norwood, R. A.; Yamamoto, M.; Peyghambarian, N.
2010-02-01
The very first demonstration of our refreshable holographic display based on a photorefractive polymer was published in Nature in early 2008 [1]. Based on the unique properties of a new organic photorefractive material and the holographic stereography technique, this display addressed a gap between large static holograms printed in permanent media (photopolymers) and small real-time holographic systems like the MIT holovideo. Applications range from medical imaging to refreshable maps and advertisement. Here we present several technical solutions for improving the performance parameters of the initial display from an optical point of view. Full-color holograms can be generated thanks to angular multiplexing, the recording time can be reduced from minutes to seconds with a pulsed laser, and full-parallax holograms can be recorded in a reasonable time thanks to parallel writing. We also discuss the future of such a display and the possibility of video-rate operation.
Tactile Cueing for Target Acquisition and Identification
2005-09-01
method of coding tactile information, and the method of presenting elevation information were studied. Results: Subjects were divided into video-game experienced...(VGP) subjects and non-video-game experienced (NVGP) subjects. VGPs showed a significantly lower target acquisition time with the 12...that video game players performed better with the highest level of tactile resolution, while non-video game players performed better with a simpler pattern and a lower-resolution display.
Image system for three dimensional, 360 DEGREE, time sequence surface mapping of moving objects
Lu, Shin-Yee
1998-01-01
A three-dimensional motion camera system comprises a light projector placed between two synchronous video cameras, all focused on an object-of-interest. The light projector shines a sharp pattern of vertical lines (Ronchi ruling) on the object-of-interest that appear to be bent differently to each camera by virtue of the surface shape of the object-of-interest and the relative geometry of the cameras, light projector and object-of-interest. Each video frame is captured in a computer memory and analyzed. Since the relative geometry is known and the system pre-calibrated, the unknown three-dimensional shape of the object-of-interest can be solved for by matching the intersections of the projected light lines with orthogonal epipolar lines corresponding to horizontal rows in the video camera frames. A surface reconstruction is made and displayed on a monitor screen. For 360-degree all-around coverage of the object-of-interest, two additional sets of light projectors and corresponding cameras are distributed about 120 degrees apart from one another.
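The geometry behind solving for shape from the bent stripes can be illustrated with a single-stripe triangulation sketch, using a simplified pinhole model with assumed parameters; this is not the patent's full epipolar-matching procedure:

```python
import math

# Camera at the origin looking along z; projector offset by a baseline b along x,
# projecting a vertical light plane tilted by theta from the optical axis.
# Camera ray:  X = Z * x / f      (pinhole model, pixel coordinate x, focal length f)
# Light plane: X - b = Z * tan(theta)
# Intersecting the two gives the depth Z of the illuminated surface point.
def depth_from_stripe(x_px: float, f_px: float, baseline_m: float, theta: float) -> float:
    return baseline_m / (x_px / f_px - math.tan(theta))

# A stripe point imaged at x = 200 px, f = 1000 px, 0.5 m baseline, plane tilted -5 deg
z = depth_from_stripe(200.0, 1000.0, 0.5, math.radians(-5.0))
```

The full system repeats this intersection for every stripe/epipolar-line crossing in both cameras, which is why pre-calibration of the relative geometry is essential.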
Image system for three dimensional, 360{degree}, time sequence surface mapping of moving objects
Lu, S.Y.
1998-12-22
A three-dimensional motion camera system comprises a light projector placed between two synchronous video cameras, all focused on an object-of-interest. The light projector shines a sharp pattern of vertical lines (Ronchi ruling) on the object-of-interest that appear to be bent differently to each camera by virtue of the surface shape of the object-of-interest and the relative geometry of the cameras, light projector and object-of-interest. Each video frame is captured in a computer memory and analyzed. Since the relative geometry is known and the system pre-calibrated, the unknown three-dimensional shape of the object-of-interest can be solved for by matching the intersections of the projected light lines with orthogonal epipolar lines corresponding to horizontal rows in the video camera frames. A surface reconstruction is made and displayed on a monitor screen. For 360-degree all-around coverage of the object-of-interest, two additional sets of light projectors and corresponding cameras are distributed about 120 degrees apart from one another. 20 figs.
Backwards compatible high dynamic range video compression
NASA Astrophysics Data System (ADS)
Dolzhenko, Vladimir; Chesnokov, Vyacheslav; Edirisinghe, Eran A.
2014-02-01
This paper presents a two-layer CODEC architecture for high dynamic range video compression. The base layer contains the tone-mapped video stream encoded with 8 bits per component, which can be decoded using conventional equipment. The base layer content is optimized for rendering on low dynamic range displays. The enhancement layer contains the image difference, in a perceptually uniform color space, between the result of inverse tone mapping the base layer content and the original video stream. Prediction of the high dynamic range content reduces the redundancy in the transmitted data while still preserving highlights and out-of-gamut colors. A perceptually uniform colorspace enables the use of standard rate-distortion optimization algorithms. We present techniques for efficient implementation and encoding of non-uniform tone mapping operators with low overhead in terms of bitstream size and number of operations. The transform representation is based on a human visual system model and is suitable for global and local tone mapping operators. The compression techniques include predicting the transform parameters from previously decoded frames and from already decoded data for the current frame. Different video compression techniques are compared: backwards compatible and non-backwards compatible, using AVC and HEVC codecs.
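The layered split described above can be sketched as follows, with a hypothetical global tone-mapping operator and a cube-root stand-in for the perceptually uniform space (both are assumptions, not the paper's actual transforms):

```python
import numpy as np

def tonemap(hdr):
    # Simple global operator (assumption): Reinhard-style compression to 8 bits.
    return np.clip((hdr / (1.0 + hdr)) * 255.0, 0, 255).astype(np.uint8)

def inverse_tonemap(base):
    # Approximate inverse of the operator above, back to linear HDR values.
    y = base.astype(np.float64) / 255.0
    return y / np.maximum(1.0 - y, 1e-6)

def perceptual(x):
    # Stand-in for a perceptually uniform encoding.
    return np.cbrt(x)

hdr = np.random.default_rng(0).uniform(0.0, 50.0, (4, 4))   # toy HDR frame
base = tonemap(hdr)                                          # decodable on LDR gear
# Enhancement layer: perceptual-space difference vs. the inverse-tone-mapped base.
residual = perceptual(hdr) - perceptual(inverse_tonemap(base))
# Decoder side: prediction from the base layer plus the residual recovers HDR.
recon = (perceptual(inverse_tonemap(base)) + residual) ** 3
assert np.allclose(recon, hdr, atol=1e-6)
```

In the real codec the residual is itself compressed, so reconstruction is approximate rather than exact as in this lossless toy round trip.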
Jedi training: playful evaluation of head-mounted augmented reality display systems
NASA Astrophysics Data System (ADS)
Ozbek, Christopher S.; Giesler, Bjorn; Dillmann, Ruediger
2004-05-01
A fundamental decision in building augmented reality (AR) systems is how to accomplish the combining of the real and virtual worlds. Nowadays this key question boils down to two alternatives: video see-through (VST) vs. optical see-through (OST). Both systems have advantages and disadvantages in areas like production simplicity, resolution, flexibility in composition strategies, field of view, etc. To provide additional decision criteria for high-dexterity, high-accuracy tasks and subjective user acceptance, a gaming environment was programmed that allowed good evaluation of hand-eye coordination and that was inspired by the Star Wars movies. During an experimentation session with more than thirty participants, a preference for optical see-through glasses in conjunction with infrared tracking was found. In particular, the high computational demand for video capture and processing and the resulting drop in frame rate emerged as a key weakness of the VST system.
Jiao, Yang; Xu, Liang; Gao, Min-Guang; Feng, Ming-Chun; Jin, Ling; Tong, Jing-Jing; Li, Sheng
2012-07-01
Passive remote sensing by Fourier-transform infrared (FTIR) spectrometry allows detection of air pollution. However, for localizing a leak and completely assessing the situation when a hazardous cloud is released, information about the position and distribution of the cloud is essential. Therefore, an imaging passive remote sensing system comprising an interferometer, data acquisition and processing software, a scan system, a video system, and a personal computer has been developed. Remote sensing of SF6 was performed. The column densities in all directions in which a target compound has been identified may be retrieved by a nonlinear least-squares fitting algorithm and a radiative transfer algorithm, and a false-color image is displayed. The results were visualized as a video image overlaid with a false-color concentration distribution image. The system has high selectivity and allows visualization and quantification of pollutant clouds.
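The retrieval step can be illustrated with a toy one-parameter nonlinear least-squares fit (Gauss-Newton) under a simple Beer-Lambert transmittance model; the cross-sections and units below are invented for illustration and this is not the paper's radiative-transfer treatment:

```python
import numpy as np

rng = np.random.default_rng(1)
sigma = rng.uniform(0.5, 2.0, 64)      # absorption cross-section vs. wavenumber (toy)
N_true = 0.8                           # true column density (arbitrary units)
y = np.exp(-sigma * N_true)            # noise-free transmittance "measurement"

# One-parameter Gauss-Newton: minimize ||y - exp(-sigma*N)|| over N.
N = 0.1                                # initial guess
for _ in range(20):
    model = np.exp(-sigma * N)
    J = -sigma * model                 # Jacobian d(model)/dN
    N += (J @ (y - model)) / (J @ J)   # Gauss-Newton update

assert abs(N - N_true) < 1e-6
```

The operational system solves an analogous fit per viewing direction, then colors each pixel by the retrieved column density to form the overlay image.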
Hu, Peter F; Xiao, Yan; Ho, Danny; Mackenzie, Colin F; Hu, Hao; Voigt, Roger; Martz, Douglas
2006-06-01
One of the major challenges for day-of-surgery operating room coordination is accurate and timely situation awareness. Distributed and secure real-time status information is key to addressing these challenges. This article reports on the design and implementation of a passive status monitoring system in a 19-room surgical suite of a major academic medical center. Key design requirements considered included integrated real-time operating room status display, access control, security, and network impact. The system used live operating room video images and patient vital signs obtained through monitors to automatically update events and operating room status. Images were presented on a "need-to-know" basis, and access was controlled by identification badge authorization. The system delivered reliable real-time operating room images and status with acceptable network impact. Operating room status was visualized at 4 separate locations and was used continuously by clinicians and operating room service providers to coordinate operating room activities.
Task-dependent color discrimination
NASA Technical Reports Server (NTRS)
Poirson, Allen B.; Wandell, Brian A.
1990-01-01
When color video displays are used in time-critical applications (e.g., head-up displays, video control panels), the observer must discriminate among briefly presented targets seen within a complex spatial scene. Color-discrimination thresholds are compared by using two tasks. In one task the observer makes color matches between two halves of a continuously displayed bipartite field. In a second task the observer detects a color target in a set of briefly presented objects. The data from both tasks are well summarized by ellipsoidal isosensitivity contours. The fitted ellipsoids differ both in their size, which indicates an absolute sensitivity difference, and orientation, which indicates a relative sensitivity difference.
NASA Astrophysics Data System (ADS)
Baca, Michael J.
1990-09-01
A system to display images generated by the Naval Postgraduate School Infrared Search and Target Designation (a modified AN/SAR-8 Advanced Development Model) in near real time was developed using a 33 MHz NIC computer as the central controller. This computer was enhanced with a Data Translation DT2861 Frame Grabber for image processing and an interface board designed and constructed at NPS to provide synchronization between the IRSTD and Frame Grabber. Images are displayed in false color in a video raster format on a 512 by 480 pixel resolution monitor. Using FORTRAN, programs have been written to acquire, unscramble, expand and display a 3 deg sector of data. The time line for acquisition, processing and display has been analyzed and repetition periods of less than four seconds for successive screen displays have been achieved. This represents a marked improvement over previous methods necessitating slower Direct Memory Access transfers of data into the Frame Grabber. Recommendations are made for further improvements to enhance the speed and utility of images produced.
Remote stereoscopic video play platform for naked eyes based on the Android system
NASA Astrophysics Data System (ADS)
Jia, Changxin; Sang, Xinzhu; Liu, Jing; Cheng, Mingsheng
2014-11-01
As quality of life has improved significantly, traditional 2D video can no longer satisfy viewers' desire for better video quality, which has driven the rapid development of 3D video technology. At the same time, people want to watch 3D video on portable devices. To this end, we set up a remote stereoscopic video play platform consisting of a server and clients. The server transmits video in different formats, and the client receives the remote video for decoding and pixel restructuring. We use and extend Live555 as the video transmission server; Live555 is a cross-platform open-source project that provides streaming-media solutions such as the RTSP protocol and supports transmission of multiple video formats. At the receiving end, we use our laboratory's own player for Android, which has all the basic functions of an ordinary player, can play normal 2D video, and serves as the base structure for redevelopment; RTSP is implemented in this structure for communication. To achieve stereoscopic display, pixels are rearranged in the player's decoding stage, which is native code called through the JNI interface so that video frames can be extracted more efficiently. The video formats we process are left-right, top-bottom, and nine-grid. The design and development employ a number of key technologies from Android application development, including wireless transmission, pixel restructuring, and JNI calls. After updates and optimizations, the player plays remote 3D video well anytime and anywhere and meets users' requirements.
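The pixel-restructuring step for a left-right ("side by side") frame can be sketched as a column interleave of the two views, a common layout for lenticular naked-eye panels; the exact rearrangement used by the player is not specified in the abstract, so this layout is an assumption:

```python
import numpy as np

def rearrange_side_by_side(frame: np.ndarray) -> np.ndarray:
    """Split a side-by-side stereo frame into left/right views and
    column-interleave them for an autostereoscopic panel."""
    h, w, c = frame.shape
    left, right = frame[:, : w // 2], frame[:, w // 2 :]
    out = np.empty_like(frame[:, : (w // 2) * 2])
    out[:, 0::2] = left    # even display columns <- left view
    out[:, 1::2] = right   # odd display columns  <- right view
    return out

frame = np.arange(2 * 8 * 3, dtype=np.uint8).reshape(2, 8, 3)  # tiny test frame
out = rearrange_side_by_side(frame)
```

Top-bottom and nine-grid inputs differ only in how the source views are sliced out of the decoded frame before the interleave.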
Design of a system based on DSP and FPGA for video recording and replaying
NASA Astrophysics Data System (ADS)
Kang, Yan; Wang, Heng
2013-08-01
This paper presents a video recording and replaying system built on a Digital Signal Processor (DSP) and a Field Programmable Gate Array (FPGA). The system encodes, records, decodes, and replays the Video Graphics Array (VGA) signals displayed on monitors during aircraft and ship navigation. In this architecture, the DSP is the main processor, handling the heavy computation of digital signal processing, while the FPGA is a coprocessor that preprocesses the video signals and implements logic control. In the hardware design, the Peripheral Device Transfer (PDT) function of the External Memory Interface (EMIF) provides a seamless interface among the DSP, the synchronous dynamic RAM (SDRAM), and the First-In-First-Out (FIFO) buffer; this transfer mode avoids a data-transfer bottleneck and simplifies the circuitry between the DSP and its peripheral chips. The DSP's EMIF and two level-matching chips implement the Advanced Technology Attachment (ATA) protocol on the physical layer of an Integrated Drive Electronics (IDE) hard disk interface, giving high-speed data access without relying on a computer. The main functions of the FPGA logic are described, and screenshots of the behavioral simulation are provided. In the DSP software design, Enhanced Direct Memory Access (EDMA) channels transfer data between the FIFO and the SDRAM without CPU intervention, freeing the CPU for computation. JPEG2000 is implemented to obtain high fidelity in video recording and replaying, and techniques for achieving high code performance are briefly presented. The system's data-processing capability is satisfactory, and the smoothness of the replayed video is acceptable. Owing to its design flexibility and reliable operation, the system has considerable promise for post-event analysis, simulated training, and similar applications.
ICL: The Image Composition Language
NASA Technical Reports Server (NTRS)
Foley, James D.; Kim, Won Chul
1986-01-01
The Image Composition Language (ICL) provides a convenient way for programmers of interactive graphics application programs to define how the video look-up table of a raster display system is to be loaded. The ICL allows one or several images stored in the frame buffer to be combined in a variety of ways. The ICL treats these images as variables, and provides arithmetic, relational, and conditional operators to combine the images, scalar variables, and constants in image composition expressions. The objective of ICL is to provide programmers with a simple way to compose images, to relieve the tedium usually associated with loading the video look-up table to obtain desired results.
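The flavor of ICL's composition expressions can be sketched with array operations; the images and threshold are hypothetical, and ICL itself drives the video look-up table rather than operating on arrays in application memory:

```python
import numpy as np

a = np.array([[10, 200], [60, 90]], dtype=np.uint8)   # image A in the frame buffer
b = np.array([[50, 50], [50, 50]], dtype=np.uint8)    # image B
threshold = 80

# Conditional composition: "where A > threshold, show A, else show B".
composed = np.where(a > threshold, a, b)

# Arithmetic composition: pixel-wise average of the two images
# (widen before adding to avoid uint8 overflow).
blended = ((a.astype(np.uint16) + b) // 2).astype(np.uint8)
```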
VSTOL Systems Research Aircraft (VSRA) Harrier
NASA Technical Reports Server (NTRS)
1994-01-01
NASA's Ames Research Center has developed and is testing a new integrated flight and propulsion control system that will help pilots land aircraft in adverse weather conditions and in small confined areas (such as on a small ship or flight deck). The system is being tested in the V/STOL (Vertical/Short Takeoff and Landing) Systems Research Aircraft (VSRA), a modified version of the U.S. Marine Corps' AV-8B Harrier jet fighter, which can take off and land vertically. The new automated flight control system features both head-up and panel-mounted computer displays and automatically integrates control of the aircraft's thrust and thrust vector control, thereby reducing the pilot's workload and helping stabilize the aircraft for landing. Visiting pilots will be encouraged to test the new system and provide formal flight evaluation data and feedback. An actual flight test and the control system's display panel are shown in this video.
[Effects on visual functions following several hours' usage of a head mounted display].
Hara, N; Ukai, K; Ishikawa, S; Takagi, M; Bando, T; Oyamada, H
1996-07-01
We investigated the effects of viewing video movies with a head-mounted display (HMD) for 4 to 6 hours on visual functions such as refraction, visual acuity, and the accommodation-vergence system. Two or three video movies were watched without any breaks by 13 normal volunteers (ages 22 to 40). Measurements were made of (1) objective and subjective refraction, (2) corrected visual acuity, (3) tonic level and step response of accommodation with a computer-assisted infrared optometer, and (4) near and far phorias and AC/A ratio. Significant transient myopia was found following 4 hours' viewing, but not following 6 hours' viewing. Scrutinizing individual data, myopia was consistently found in some subjects, and hyperopia in others. We presume that many subjects may have been influenced by initial instrumental myopia when they adjusted the focus by using the mechanism built into the HMD. No significant change was observed in any other examination. However, there was a tendency for the AC/A ratio to change after a short time and then recover to its original value. Based on the results of this study, it appears that some changes in the accommodation and vergence systems are caused by viewing video movies with an HMD. Although the changes were within normal physiological variation in this study, the possibility remains that longer usage may lead to other changes in visual function. Care is also necessary when using the HMD in subjects with subclinical problems.
DLP™-based dichoptic vision test system
NASA Astrophysics Data System (ADS)
Woods, Russell L.; Apfelbaum, Henry L.; Peli, Eli
2010-01-01
It can be useful to present a different image to each of the two eyes while they cooperatively view the world. Such dichoptic presentation can occur in investigations of stereoscopic and binocular vision (e.g., strabismus, amblyopia) and vision rehabilitation in clinical and research settings. Various techniques have been used to construct dichoptic displays. The most common and most flexible modern technique uses liquid-crystal (LC) shutters. When used in combination with cathode ray tube (CRT) displays, there is often leakage of light from the image intended for one eye into the view of the other eye. Such interocular crosstalk is 14% even in our state-of-the-art CRT-based dichoptic system. While such crosstalk may have minimal impact on stereo movie or video game experiences, it can defeat clinical and research investigations. We use micromirror digital light processing (DLP™) technology to create a novel dichoptic visual display system with substantially lower interocular crosstalk (the 0.3% remaining crosstalk comes from the LC shutters). The DLP system normally uses a color wheel to display color images. Our approach is to disable the color wheel, synchronize the display directly to the computer's sync signal, allocate each of the three (former) color presentations to one or both eyes, and open and close the LC shutters in synchrony with those color events.
1979-11-01
a generalized cooccurrence matrix. Describing image texture is an important problem in the design of image understanding systems. Applications as...display system design optimization and video signal processing. Based on a study by Southern Research Institute, a number of options were identified...Specification for Target Acquisition Designation System (U), RFP # AMC-DP-AAH-H4020, 12 Apr 77. 4. Terminal Homing Applications of Solid State Image
NASA Astrophysics Data System (ADS)
Yamada, Takayuki; Gohshi, Seiichi; Echizen, Isao
A method is described to prevent images and video displayed on screens from being re-shot by digital cameras and camcorders. Conventional methods using digital watermarking for re-shooting prevention embed content IDs into images and videos, and they help to identify the place and time where the actual content was shot. However, these methods do not actually prevent digital content from being re-shot by camcorders. We developed countermeasures that stop re-shooting by exploiting the differences between the sensory characteristics of humans and devices. The countermeasures require no additional functions on user-side devices. The method uses infrared light (IR) to corrupt the content recorded by CCD or CMOS devices, so that re-shot content is unusable. To validate the method, we developed a prototype system and implemented it on a 100-inch cinema screen. Experimental evaluations showed that the method effectively prevents re-shooting.
Efficient implementation of neural network deinterlacing
NASA Astrophysics Data System (ADS)
Seo, Guiwon; Choi, Hyunsoo; Lee, Chulhee
2009-02-01
Interlaced scanning has been widely used in most broadcasting systems. However, it produces undesirable artifacts such as jagged patterns, flickering, and line twitter. Moreover, most recent TV monitors utilize flat panel display technologies such as LCD or PDP, and these monitors require progressive formats. Consequently, conversion of interlaced video into progressive video is required in many applications, and a number of deinterlacing methods have been proposed. Recently, deinterlacing methods based on neural networks have been proposed with good results. On the other hand, with high-resolution video content such as HDTV, the amount of video data to be processed is very large; as a result, processing time and hardware complexity become important issues. In this paper, we propose an efficient implementation of neural network deinterlacing using polynomial approximation of the sigmoid function. Experimental results show that these approximations provide equivalent performance with a considerable reduction of complexity. This implementation of neural network deinterlacing can be efficiently incorporated in hardware implementations.
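The core idea, replacing the sigmoid with a low-order polynomial over the bounded input range the network actually sees, can be sketched as follows; the fitting range and polynomial order are assumptions, not the paper's choices:

```python
import numpy as np

# Sample the sigmoid over a bounded input range (network pre-activations
# rarely stray far outside a few units in either direction).
x = np.linspace(-4.0, 4.0, 401)
sigmoid = 1.0 / (1.0 + np.exp(-x))

# 5th-order least-squares polynomial fit: evaluating this needs only
# multiply-adds, which map far more cheaply to hardware than exp().
coeffs = np.polyfit(x, sigmoid, 5)
approx = np.polyval(coeffs, x)

# Worst-case deviation over the fitted range.
max_err = float(np.max(np.abs(approx - sigmoid)))
```

In hardware, Horner evaluation of the fitted polynomial replaces the exponential, trading a small bounded approximation error for a large reduction in complexity.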
NASA Astrophysics Data System (ADS)
Schuck, Miller Harry
Automotive head-up displays require compact, bright, and inexpensive imaging systems. In this thesis, a compact head-up display (HUD) utilizing liquid-crystal-on-silicon microdisplay technology is presented from concept to implementation. The thesis comprises three primary areas of HUD research: the specification, design and implementation of a compact HUD optical system; the development of a wafer planarization process to enhance reflective device brightness and light immunity; and the design, fabrication and testing of an inexpensive 640 x 512 pixel active matrix backplane intended to meet the HUD requirements. The thesis addresses the HUD problem at three levels: the systems level, the device level, and the materials level. At the systems level, the optical design of an automotive HUD must meet several competing requirements, including high image brightness, compact packaging, video-rate performance, and low cost. An optical system design which meets the competing requirements has been developed utilizing a fully-reconfigurable reflective microdisplay. The design consists of two optical stages, the first a projector stage which magnifies the display, and a second stage which forms the virtual image eventually seen by the driver. A key component of the optical system is a diffraction grating/field lens which forms a large viewing eyebox while reducing the optical system complexity. Image quality, biocular disparity, and luminous efficacy were analyzed, and results of the optical implementation are presented. At the device level, the automotive HUD requires a reconfigurable, video-rate, high resolution image source for applications such as navigation and night vision. The design of a 640 x 512 pixel active matrix backplane which meets the requirements of the HUD is described. The backplane was designed to produce digital field sequential color images at video rates utilizing fast switching liquid crystal as the modulation layer.
The design methodology is discussed, and the example of a clock generator is described from design to implementation. Electrical and optical test results of the fabricated backplane are presented. At the materials level, a planarization method was developed to meet the stringent brightness requirements of automotive HUDs. The research efforts described here have resulted in a simple, low-cost post-processing method for planarizing microdisplay substrates based on a spin-cast polymeric resin, benzocyclobutene (BCB). Six-fold reductions in substrate step height were accomplished with a single coating. Via masking and dry etching methods were developed. High-reflectivity metal was deposited and patterned over the planarized substrate to produce high-aperture pixel mirrors. The process is simple, rapid, and results in microdisplays better able to meet the stringent requirements of high-brightness display systems. Methods and results of the post-processing are described.
Computer-based desktop system for surgical videotape editing.
Vincent-Hamelin, E; Sarmiento, J M; de la Puente, J M; Vicente, M
1997-05-01
The educational role of surgical video presentations should be optimized by linking surgical images to graphic evaluation of indications, techniques, and results. We describe a PC-based video production system for personal editing of surgical tapes, according to the objectives of each presentation. The hardware requirement is a personal computer (100 MHz processor, 1-Gb hard disk, 16 Mb RAM) with a PC-to-TV/video transfer card plugged into a slot. Computer-generated numerical data, texts, and graphics are transformed into analog signals displayed on TV/video. A Genlock interface (a special interface card) synchronizes digital and analog signals, to overlay surgical images to electronic illustrations. The presentation is stored as digital information or recorded on a tape. The proliferation of multimedia tools is leading us to adapt presentations to the objectives of lectures and to integrate conceptual analyses with dynamic image-based information. We describe a system that handles both digital and analog signals, production being recorded on a tape. Movies may be managed in a digital environment, with either an "on-line" or "off-line" approach. System requirements are high, but handling a single device optimizes editing without incurring such complexity that management becomes impractical to surgeons. Our experience suggests that computerized editing allows linking surgical scientific and didactic messages on a single communication medium, either a videotape or a CD-ROM.
Co-Located Collaborative Learning Video Game with Single Display Groupware
ERIC Educational Resources Information Center
Infante, Cristian; Weitz, Juan; Reyes, Tomas; Nussbaum, Miguel; Gomez, Florencia; Radovic, Darinka
2010-01-01
Role Game is a co-located CSCL video game played by three students sitting at one machine sharing a single screen, each with their own input device. Inspired by video console games, Role Game enables students to learn by doing, acquiring social abilities and mastering subject matter in a context of co-located collaboration. After describing the…
Author Correction: Single-molecule imaging by optical absorption
NASA Astrophysics Data System (ADS)
Celebrano, Michele; Kukura, Philipp; Renn, Alois; Sandoghdar, Vahid
2018-05-01
In the Supplementary Video initially published with this Letter, the right-hand panel displaying the fluorescence emission was not showing on some video players due to a formatting problem; this has now been fixed. The video has also now been amended to include colour scale bars for both the left- (differential transmission signal) and right-hand panels.
Aidlen, Jeremy T; Glick, Sara; Silverman, Kenneth; Silverman, Harvey F; Luks, Francois I
2009-08-01
Light-weight, low-profile, and high-resolution head-mounted displays (HMDs) now allow personalized viewing of a laparoscopic image. The advantages include unobstructed viewing, regardless of position at the operating table, and the possibility to customize the image (i.e., enhanced reality, picture-in-picture, etc.). The bright image display allows use in daylight surroundings, and the low profile of the HMD provides adequate peripheral vision. Theoretic disadvantages include reliance by all viewers on the same captured image and anticues (i.e., reality disconnect) when the projected image remains static despite changes in head position; this can lead to discomfort and even nausea. We have developed a prototype of an interactive laparoscopic image display that allows hands-free control of the displayed image by changes in spatial orientation of the operator's head. The prototype consists of an HMD, a spatial orientation device, and computer software to enable hands-free panning and zooming of a video-endoscopic image display. The spatial orientation device uses magnetic fields created by a transmitter and receiver, each containing three orthogonal coils. The transmitter coils are efficiently driven, using USB power only, by a newly developed circuit, each at a unique frequency. The HMD-mounted receiver system links to a commercially available PC-interface PCI-bus sound card (M-Audio card Delta 44; Avid Technology, Tewksbury, MA). Analog signals at the receiver are filtered, amplified, and converted to digital signals, which are processed to control the image display. The prototype uses a proprietary static fish-eye lens and software for the distortion-free reconstitution of any portion of the captured image. Left-right and up-down motions of the head (and HMD) produce real-time panning of the displayed image. Motion of the head toward, or away from, the transmitter causes real-time zooming in or out, respectively, of the displayed image.
This prototype of the interactive HMD allows hands-free, intuitive control of the laparoscopic field, independent of the captured image.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-09-28
...In this document, the Commission proposes rules to implement provisions of the Twenty-First Century Communications and Video Accessibility Act of 2010 (``CVAA'') that mandate rules for closed captioning of certain video programming delivered using Internet protocol (``IP''). The Commission seeks comment on rules that would apply to the distributors, providers, and owners of IP-delivered video programming, as well as the devices that display such programming.
Actively addressed single pixel full-colour plasmonic display
NASA Astrophysics Data System (ADS)
Franklin, Daniel; Frank, Russell; Wu, Shin-Tson; Chanda, Debashis
2017-05-01
Dynamic, colour-changing surfaces have many applications including displays, wearables and active camouflage. Plasmonic nanostructures can fill this role by having the advantages of ultra-small pixels, high reflectivity and post-fabrication tuning through control of the surrounding media. However, previous reports of post-fabrication tuning have yet to cover a full red-green-blue (RGB) colour basis set with a single nanostructure of singular dimensions. Here, we report a method which greatly advances this tuning and demonstrates a liquid crystal-plasmonic system that covers the full RGB colour basis set, only as a function of voltage. This is accomplished through a surface morphology-induced, polarization-dependent plasmonic resonance and a combination of bulk and surface liquid crystal effects that manifest at different voltages. We further demonstrate the system's compatibility with existing LCD technology by integrating it with a commercially available thin-film-transistor array. The imprinted surface interfaces readily with computers to display images as well as video.
VEVI: A Virtual Reality Tool For Robotic Planetary Explorations
NASA Technical Reports Server (NTRS)
Piguet, Laurent; Fong, Terry; Hine, Butler; Hontalas, Phil; Nygren, Erik
1994-01-01
The Virtual Environment Vehicle Interface (VEVI), developed by the NASA Ames Research Center's Intelligent Mechanisms Group, is a modular operator interface for direct teleoperation and supervisory control of robotic vehicles. Virtual environments enable the efficient display and visualization of complex data. This characteristic allows operators to perceive and control complex systems in a natural fashion, utilizing the highly-evolved human sensory system. VEVI utilizes real-time, interactive, 3D graphics and position / orientation sensors to produce a range of interface modalities from the flat panel (windowed or stereoscopic) screen displays to head mounted/head-tracking stereo displays. The interface provides generic video control capability and has been used to control wheeled, legged, air bearing, and underwater vehicles in a variety of different environments. VEVI was designed and implemented to be modular, distributed and easily operated through long-distance communication links, using a communication paradigm called SYNERGY.
NASA Astrophysics Data System (ADS)
Choe, Giseok; Nang, Jongho
The tiled-display system has been used as a Computer Supported Cooperative Work (CSCW) environment, in which multiple local (and/or remote) participants cooperate using shared applications whose outputs are displayed on a large-scale, high-resolution tiled display controlled by a cluster of PCs, one PC per display. In order to make the collaboration effective, each remote participant should be aware of all CSCW activities on the tiled-display system in real-time. This paper presents a mechanism for capturing and delivering all activities on the tiled-display system to remote participants in real-time. In the proposed mechanism, the screen images of all PCs are periodically captured and delivered to the Merging Server, which maintains separate buffers to store the captured images from the PCs. The mechanism selects one tile image from each buffer, merges the images to make a screen shot of the whole tiled-display, clips a Region of Interest (ROI), compresses it, and streams it to remote participants in real-time. A technical challenge in the proposed mechanism is how to select a set of tile images, one from each buffer, for merging so that the tile images displayed at the same time on the tiled-display are properly merged together. This paper presents three selection algorithms: a sequential selection algorithm, a capturing-time-based algorithm, and a capturing-time and visual-consistency-based algorithm. It also proposes a mechanism for providing several virtual cameras on the tiled-display system to remote participants by concurrently clipping several different ROIs from the same merged tiled-display images and delivering them after compression with the video encoders requested by the remote participants. By interactively changing and resizing his/her own ROI, a remote participant can check the activities on the tiled-display effectively.
Experiments on a 3 × 2 tiled-display system show that the proposed merging algorithm can build a tiled-display image stream synchronously, and the ROI-based clipping and delivering mechanism can provide individual views on the tiled-display system to multiple remote participants in real-time.
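The capturing-time-based selection can be sketched as follows; the buffer layout and the exhaustive search over combinations are illustrative assumptions for a small tile count, not the paper's implementation (which must also bound latency):

```python
from itertools import product

def select_frames(buffers):
    """Pick one frame per tile buffer so capture times are as close as possible.

    buffers: one list per tile PC, each holding (timestamp, image) tuples.
    Returns the combination whose timestamp spread (max - min) is smallest,
    i.e. the set of tile images most likely displayed at the same instant.
    """
    best, best_spread = None, float("inf")
    for combo in product(*buffers):
        times = [t for t, _ in combo]
        spread = max(times) - min(times)
        if spread < best_spread:
            best, best_spread = combo, spread
    return [img for _, img in best]
```

For a 3 × 2 tiled display the product has six factors, so the search stays cheap; a visual-consistency criterion would add a second term to the score rather than change the structure.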
Heo, Hwan; Lee, Won Oh; Shin, Kwang Yong; Park, Kang Ryoung
2014-05-15
We propose a new method for measuring the degree of eyestrain on 3D stereoscopic displays using a glasses-type eye-tracking device. Our study is novel in the following four ways: first, the circular area where a user's gaze position exists is defined based on the calculated gaze position and gaze estimation error. Within this circular area, the position where edge strength is maximized can be detected, and we determine this position as the gaze position that has a higher probability of being the correct one. Based on this gaze point, the eye foveation model is defined. Second, we quantitatively evaluate the correlation between the degree of eyestrain and the causal factors of visual fatigue, such as the degree of change of stereoscopic disparity (CSD), stereoscopic disparity (SD), frame cancellation effect (FCE), and edge component (EC) of the 3D stereoscopic display using the eye foveation model. Third, by comparing the eyestrain in conventional 3D video and experimental 3D sample video, we analyze the characteristics of eyestrain according to various factors and types of 3D video. Fourth, by comparing the eyestrain with or without the compensation of eye saccade movements in 3D video, we analyze the characteristics of eyestrain according to the types of eye movements in 3D video. Experimental results show that the degree of CSD causes more eyestrain than other factors.
Design of batch audio/video conversion platform based on JavaEE
NASA Astrophysics Data System (ADS)
Cui, Yansong; Jiang, Lianpin
2018-03-01
With the rapid development of the digital publishing industry, audio/video publishing is characterized by a diversity of coding standards for audio and video files, massive data volumes, and other significant features. Faced with massive and diverse data, quickly and efficiently converting it to a unified coding format has brought great difficulties to digital publishing organizations. In view of this demand and the present situation, this paper proposes a distributed online audio and video format conversion platform with a B/S structure, built on the Spring+SpringMVC+MyBatis development architecture and combined with the open-source FFmpeg format conversion tool. Based on the Java language, the key technologies and strategies in the platform architecture are analyzed emphatically, and an efficient audio and video format conversion system is designed and developed, composed of a front-end display system, a core scheduling server, and conversion servers. The test results show that, compared with ordinary audio and video conversion schemes, the batch audio and video format conversion platform can effectively improve the conversion efficiency of audio and video files and reduce the complexity of the work. Practice has proved that the key technology discussed in this paper can be applied in the field of large-batch file processing and has practical application value.
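A minimal sketch of the kind of FFmpeg-driven batch conversion such a platform performs, in Python rather than the paper's Java; the directory layout, codec choices, and the `build_cmd`/`convert_all` helpers are illustrative assumptions, and a real deployment would distribute jobs across conversion servers rather than run them serially:

```python
import subprocess
from pathlib import Path

def build_cmd(src, out, vcodec="libx264", acodec="aac"):
    """Assemble an ffmpeg invocation converting src to a unified format."""
    return ["ffmpeg", "-y", "-i", str(src),
            "-c:v", vcodec, "-c:a", acodec, str(out)]

def convert_all(src_dir, dst_dir, target_ext=".mp4"):
    """Convert every file in src_dir, writing results into dst_dir."""
    dst = Path(dst_dir)
    dst.mkdir(parents=True, exist_ok=True)
    for src in Path(src_dir).iterdir():
        if not src.is_file():
            continue
        out = dst / (src.stem + target_ext)
        # check=True surfaces conversion failures to the scheduler
        subprocess.run(build_cmd(src, out), check=True)
```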
Presentation of Information on Visual Displays.
ERIC Educational Resources Information Center
Pettersson, Rune
This discussion of factors involved in the presentation of text, numeric data, and/or visuals using video display devices describes in some detail the following types of presentation: (1) visual displays, with attention to additive color combination; measurements, including luminance, radiance, brightness, and lightness; and standards, with…
The Objective Force Soldier/Soldier Team. Volume II - The Science and Technology Challenges
2001-11-01
Report fragment: ...processing for buried mines; chemical sniffing to detect explosives; UGV/robotic systems to carry sensors into risk areas; specialized electronic... An equipment weight table lists: Close Combat Optic 1.4, Thermal Weapons Sight 4.9, AN/PAQ-4C Aiming Light 0.6, Daylight Video Sight 0.2, Improved Helmet 3.2, Helmet Mounted Display 1.5.
ERIC Educational Resources Information Center
Dwyer, Paul F.
Drawing on testimony presented at hearings before the Subcommittee on Health and Safety of the House of Representatives conducted between February 28 and June 12, 1984, this staff report addresses the general topic of video display terminals (VDTs) and possible health hazards in the workplace. An introduction presents the history of the…
The Development of the AFIT Communications Laboratory and Experiments for Communications Students.
1985-12-01
Fragment of instrument-panel control descriptions: video signals above the level set by the PEAK AVERAGE control are peak detected; video signals below that level are digitally averaged and stored; the VERT POS control positions the display baseline.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Olson, B.M.
1985-01-01
The USAF OEHL conducted an extensive literature review of Video Display Terminals (VDTs) and the health problems commonly associated with them. The report is presented in a question-and-answer format in an attempt to paraphrase the most commonly asked questions about VDTs that are forwarded to USAF OEHL/RZN. The questions and answers have been divided into several topic areas: Ionizing Radiation; Nonionizing Radiation; Optical Radiation; Ultrasound; Static Electricity; Health Complaints/Ergonomics; Pregnancy.
NASA Technical Reports Server (NTRS)
Lotz, Robert W. (Inventor); Westerman, David J. (Inventor)
1980-01-01
The visual system within an aircraft flight simulation system receives flight data and terrain data which is formatted into a buffer memory. The image data is forwarded to an image processor which translates the image data into face vertex vectors Vf, defining the position relationship between the vertices of each terrain object and the aircraft. The image processor then rotates, clips, and projects the image data into two-dimensional display vectors (Vd). A display generator receives the Vd faces and other image data to provide analog inputs to CRT devices which provide the window displays for the simulated aircraft. The video signal to the CRT devices passes through an edge smoothing device which prolongs the rise time (and fall time) of the video data inversely as the slope of the edge being smoothed. An operational amplifier within the edge smoothing device has a plurality of independently selectable feedback capacitors, each having a different value. The capacitor values form a binary-weighted series, each twice the preceding value. Each feedback capacitor has a fast switch responsive to the corresponding bit of a digital binary control word for selecting (1) or not selecting (0) that capacitor. The control word is determined by the slope of each edge. The resulting actual feedback capacitance for each edge is the sum of all the selected capacitors and is directly proportional to the value of the binary control word. The output rise time (or fall time) is a function of the feedback capacitance, and is controlled by the slope through the binary control word.
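The binary-weighted capacitor selection can be modeled numerically; the unit capacitance `c_unit` and the 4-bit word width below are illustrative assumptions, not values from the patent:

```python
def feedback_capacitance(control_word, c_unit=1e-12, bits=4):
    """Total feedback capacitance of a binary-weighted capacitor bank.

    Bit k of the control word switches in a capacitor of value
    c_unit * 2**k, so the total is directly proportional to the
    integer value of the control word, as the patent describes.
    """
    total = 0.0
    for bit in range(bits):
        if (control_word >> bit) & 1:
            total += c_unit * (2 ** bit)
    return total
```

With `c_unit` as the smallest capacitor, a control word of 0b1011 selects the 1x, 2x, and 8x capacitors, giving 11 units in total, so doubling the word roughly doubles the rise time.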
TRECVID: the utility of a content-based video retrieval evaluation
NASA Astrophysics Data System (ADS)
Hauptmann, Alexander G.
2006-01-01
TRECVID, an annual retrieval evaluation benchmark organized by NIST, encourages research in information retrieval from digital video. TRECVID benchmarking covers both interactive and manual searching by end users, as well as the benchmarking of some supporting technologies including shot boundary detection, extraction of semantic features, and the automatic segmentation of TV news broadcasts. Evaluations done in the context of the TRECVID benchmarks show that generally, speech transcripts and annotations provide the single most important clue for successful retrieval. However, automatically finding the individual images is still a tremendous and unsolved challenge. The evaluations repeatedly found that none of the multimedia analysis and retrieval techniques provide a significant benefit over retrieval using only textual information such as from automatic speech recognition transcripts or closed captions. In interactive systems, we do find significant differences among the top systems, indicating that interfaces can make a huge difference for effective video/image search. For interactive tasks efficient interfaces require few key clicks, but display large numbers of images for visual inspection by the user. The text search finds the right context region in the video in general, but to select specific relevant images we need good interfaces to easily browse the storyboard pictures. In general, TRECVID has motivated the video retrieval community to be honest about what we don't know how to do well (sometimes through painful failures), and has focused us to work on the actual task of video retrieval, as opposed to flashy demos based on technological capabilities.
Experiences in teleoperation of land vehicles
NASA Technical Reports Server (NTRS)
Mcgovern, Douglas E.
1989-01-01
Teleoperation of land vehicles allows the removal of the operator from the vehicle to a remote location. This can greatly increase operator safety and comfort in applications such as security patrol or military combat. The cost includes system complexity and reduced system performance. All feedback on vehicle performance and on environmental conditions must pass through sensors, a communications channel, and displays. In particular, this requires vision to be transmitted by closed-circuit television with a consequent degradation of information content. Vehicular teleoperation, as a result, places severe demands on the operator. Teleoperated land vehicles have been built and tested by many organizations, including Sandia National Laboratories (SNL). The SNL fleet presently includes eight vehicles of varying capability. These vehicles have been operated using different types of controls, displays, and visual systems. Experimentation studying the effects of vision system characteristics on off-road, remote driving was performed for conditions of fixed camera versus steering-coupled camera and of color versus black-and-white video display. Additionally, much experience was gained through system demonstrations and hardware development trials. The preliminary experimental findings and the results of the accumulated operational experience are discussed.
Augmented reality system for CT-guided interventions: system description and initial phantom trials
NASA Astrophysics Data System (ADS)
Sauer, Frank; Schoepf, Uwe J.; Khamene, Ali; Vogt, Sebastian; Das, Marco; Silverman, Stuart G.
2003-05-01
We are developing an augmented reality (AR) image guidance system, in which information derived from medical images is overlaid onto a video view of the patient. The interventionalist wears a head-mounted display (HMD) that presents him with the augmented stereo view. The HMD is custom fitted with two miniature color video cameras that capture the stereo view of the scene. A third video camera, operating in the near IR, is also attached to the HMD and is used for head tracking. The system achieves real-time performance of 30 frames per second. The graphics appear firmly anchored in the scene, without any noticeable swimming, jitter, or time lag. For the application of CT-guided interventions, we extended our original prototype system to include tracking of a biopsy needle to which we attached a set of optical markers. The AR visualization provides very intuitive guidance for planning and placement of the needle and reduces radiation to patient and radiologist. We used an interventional abdominal phantom with simulated liver lesions to perform an initial set of experiments. The users were consistently able to locate the target lesion with the first needle pass. These results provide encouragement to move the system towards clinical trials.
Head-mounted display for use in functional endoscopic sinus surgery
NASA Astrophysics Data System (ADS)
Wong, Brian J.; Lee, Jon P.; Dugan, F. Markoe; MacArthur, Carol J.
1995-05-01
Since the introduction of functional endoscopic sinus surgery (FESS), the procedure has undergone rapid change, with evolution keeping pace with technological advances. The advent of low-cost charge-coupled device (CCD) cameras revolutionized the practice and instruction of FESS. Video-based FESS has allowed for documentation of the surgical procedure as well as interactive instruction during surgery. Presently, the technical requirements of video-based FESS include the addition of one or more television monitors positioned strategically in the operating room. Though video monitors have greatly enhanced surgical endoscopy by re-involving nurses and assistants in the actual mechanics of surgery, video monitors require the operating surgeon to be focused on the screen instead of the patient. In this study, we describe the use of a new low-cost liquid crystal display (LCD) based device that functions as a monitor but is mounted on a head-worn visor (PT-O1, O1 Products, Westlake Village, CA). This study illustrates the application of these HMD devices to FESS operations. The same surgeon performed the operation in each patient. In one nasal fossa, surgery was performed using conventional video FESS methods. The contralateral side was operated on while wearing the head-mounted video display. The device had adequate resolution for the purposes of FESS. No adverse effects were noted intraoperatively. The results on the patients' ipsilateral and contralateral sides were similar. The visor did eliminate significant torsion of the surgeon's neck during the operation, while at the same time permitting simultaneous viewing of both the patient and the intranasal surgical field.
NASA Astrophysics Data System (ADS)
Rzhanov, Y.; Beaulieu, S.; Soule, S. A.; Shank, T.; Fornari, D.; Mayer, L. A.
2005-12-01
Many advances in understanding geologic, tectonic, biologic, and sedimentologic processes in the deep ocean are facilitated by direct observation of the seafloor. However, making such observations is both difficult and expensive. Optical systems (e.g., video, still camera, or direct observation) will always be constrained by the severe attenuation of light in the deep ocean, limiting the field of view to distances that are typically less than 10 meters. Acoustic systems can 'see' much larger areas, but at the cost of spatial resolution. Ultimately, scientists want to study and observe deep-sea processes in the same way we do land-based phenomena, so that the spatial distribution and juxtaposition of processes and features can be resolved. We have begun development of algorithms that will, in near real-time, generate mosaics from video collected by deep-submergence vehicles. Mosaics consist of >>10 video frames and can cover hundreds of square meters. This work builds on a publicly available still and video mosaicking software package developed by Rzhanov and Mayer. Here we present the results of initial tests of data collection methodologies (e.g., transects across the seafloor and panoramas across features of interest), algorithm application, and GIS integration conducted during a recent cruise to the Eastern Galapagos Spreading Center (0 deg N, 86 deg W). We have developed a GIS database for the region that will act as a means to access and display mosaics within a geospatially-referenced framework. We have constructed numerous mosaics using both video and still imagery and assessed the quality of the mosaics (including registration errors) under different lighting conditions and with different navigation procedures. We have begun to develop algorithms for efficient and timely mosaicking of collected video as well as integration with navigation data for georeferencing the mosaics.
Initial results indicate that operators must be properly versed in the control of the video systems as well as maintaining vehicle attitude and altitude in order to achieve the best results possible.
Prototype microprocessor controller. [for STDN antennas
NASA Technical Reports Server (NTRS)
Zarur, J.; Kraeuter, R.
1980-01-01
A microcomputer controller for STDN antennas was developed. The microcomputer technology reduces the system's physical size by implementing functions in firmware. The reduction in the number of components increases system reliability, and a similar benefit is derived when a graphic video display is substituted for several control and indicator panels. A substantial reduction in the number of cables, connectors, and mechanical switches is achieved. The microcomputer-based system is programmed to perform calibration and diagnostics, to update the satellite orbital vector, and to communicate with other network systems. The design is applicable to antennas and lasers.
A Web-based, secure, light weight clinical multimedia data capture and display system.
Wang, S. S.; Starren, J.
2000-01-01
Computer-based patient records are traditionally composed of textual data, and integration of multimedia data has been historically slow. Multimedia data such as images, audio, and video have traditionally been more difficult to handle. An implementation of a clinical system for multimedia data is discussed. The system implementation uses Java, Secure Sockets Layer (SSL), and Oracle 8i. The system runs on top of the Internet, so it is architecture-independent, cross-platform, cross-vendor, and secure. Design and implementation issues are discussed. PMID:11080014
NASA Technical Reports Server (NTRS)
1986-01-01
The FluoroScan Imaging System is a high resolution, low radiation device for viewing stationary or moving objects. It resulted from NASA technology developed for x-ray astronomy and Goddard application to a low intensity x-ray imaging scope. FluoroScan Imaging Systems, Inc. (formerly HealthMate, Inc.), a NASA licensee, further refined the FluoroScan System. It is used for examining fractures, placement of catheters, and in veterinary medicine. Its major components include an x-ray generator, scintillator, visible light image intensifier, and video display. It is small, light, and maneuverable.
Video quality assessment using M-SVD
NASA Astrophysics Data System (ADS)
Tao, Peining; Eskicioglu, Ahmet M.
2007-01-01
Objective video quality measurement is a challenging problem in a variety of video processing applications ranging from lossy compression to printing. An ideal video quality measure should be able to mimic the human observer. We present a new video quality measure, M-SVD, to evaluate distorted video sequences based on singular value decomposition. A computationally efficient approach is developed for full-reference (FR) video quality assessment. This measure is tested on the Video Quality Experts Group (VQEG) phase I FR-TV test data set. Our experiments show that the graphical measure displays the amount of distortion as well as the distribution of error in all frames of the video sequence, while the numerical measure has a good correlation with perceived video quality and outperforms PSNR and other objective measures by a clear margin.
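A rough sketch of a singular-value-based distortion map in the spirit of M-SVD; the 8x8 block size and the pooling statistic used here are assumptions for illustration, not the paper's exact definition:

```python
import numpy as np

def svd_block_distance(ref, dist, block=8):
    """Per-block singular-value distance between reference and distorted frames.

    For each co-located block pair, compute singular values of both blocks
    and take the Euclidean distance between the two spectra. The map of
    distances is the 'graphical' measure; a global statistic (here, mean
    absolute deviation from the median) serves as a numerical score.
    """
    h, w = ref.shape
    dmap = []
    for i in range(0, h - block + 1, block):
        row = []
        for j in range(0, w - block + 1, block):
            s_r = np.linalg.svd(ref[i:i+block, j:j+block], compute_uv=False)
            s_d = np.linalg.svd(dist[i:i+block, j:j+block], compute_uv=False)
            row.append(float(np.sqrt(np.sum((s_r - s_d) ** 2))))
        dmap.append(row)
    dmap = np.array(dmap)
    return dmap, float(np.mean(np.abs(dmap - np.median(dmap))))
```

An undistorted frame yields an all-zero map and a zero score; distortion concentrated in a few blocks shows up as localized peaks in the map.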
Code of Federal Regulations, 2010 CFR
2010-01-01
... carriers must use an equivalent non-video alternative for transmitting the briefing to passengers with... audio-visual displays played on aircraft for informational purposes that were created under your control...
Code of Federal Regulations, 2011 CFR
2011-01-01
... carriers must use an equivalent non-video alternative for transmitting the briefing to passengers with... audio-visual displays played on aircraft for informational purposes that were created under your control...
Code of Federal Regulations, 2014 CFR
2014-01-01
... carriers must use an equivalent non-video alternative for transmitting the briefing to passengers with... audio-visual displays played on aircraft for informational purposes that were created under your control...
Predictable Programming on a Precision Timed Architecture
2008-04-18
Application: A Video Game. [Figure 6: Structure of the Video Game Example] Inspired by an example game supplied with the Hydra development board [17], we implemented a simple video game in C targeted to our PRET architecture. Our example centers on rendering graphics and is otherwise fairly simple... background image. [Figure 10: A Screen Dump From Our Video Game] Ultimately, each displayed pixel is one of only four colors, but the pixels in...
Shenai, Mahesh B; Tubbs, R Shane; Guthrie, Barton L; Cohen-Gadol, Aaron A
2014-08-01
The shortage of surgeons compels the development of novel technologies that geographically extend the capabilities of individual surgeons and enhance surgical skills. The authors have developed "Virtual Interactive Presence" (VIP), a platform that allows remote participants to simultaneously view each other's visual field, creating a shared field of view for real-time surgical telecollaboration. The authors demonstrate the capability of VIP to facilitate long-distance telecollaboration during cadaveric dissection. Virtual Interactive Presence consists of local and remote workstations with integrated video capture devices and video displays. Each workstation mutually connects via commercial teleconferencing devices, allowing worldwide point-to-point communication. Software composites the local and remote video feeds, displaying a hybrid perspective to each participant. For demonstration, local and remote VIP stations were situated in Indianapolis, Indiana, and Birmingham, Alabama, respectively. A suboccipital craniotomy and microsurgical dissection of the pineal region was performed in a cadaveric specimen using VIP. Task and system performance were subjectively evaluated, while additional video analysis was used for objective assessment of delay and resolution. Participants at both stations were able to visually and verbally interact while identifying anatomical structures, guiding surgical maneuvers, and discussing overall surgical strategy. Video analysis of 3 separate video clips yielded a mean compositing delay of 760 ± 606 msec (when compared with the audio signal). Image resolution was adequate to visualize complex intracranial anatomy and provide interactive guidance. Virtual Interactive Presence is a feasible paradigm for real-time, long-distance surgical telecollaboration. Delay, resolution, scaling, and registration are parameters that require further optimization, but are within the realm of current technology. 
The paradigm potentially enables remotely located experts to mentor less experienced personnel located at the surgical site, with applications in surgical training programs, remote proctoring for proficiency, and expert support for rural settings and across different countries.
Slow Monitoring Systems for CUORE
NASA Astrophysics Data System (ADS)
Dutta, Suryabrata; Cuore Collaboration
2016-09-01
The Cryogenic Underground Observatory for Rare Events (CUORE) is a ton-scale neutrinoless double-beta decay experiment under construction at the Laboratori Nazionali del Gran Sasso (LNGS). The experiment is comprised of 988 TeO2 bolometric crystals arranged into 19 towers and operated at a temperature of 10 mK. We have developed slow monitoring systems to monitor the cryostat during detector installation, commissioning, data taking, and other crucial phases of the experiment. Our systems use responsive LabVIEW virtual instruments and video streams of the cryostat. We built a website using the Angular, Bootstrap, and MongoDB frameworks to display this data in real-time. The website can also display archival data and send alarms. I will present how we constructed these slow monitoring systems to be robust, accurate, and secure, while maintaining reliable access for the entire collaboration from any platform in order to ensure efficient communications and fast diagnoses of all CUORE systems.
Fast repurposing of high-resolution stereo video content for mobile use
NASA Astrophysics Data System (ADS)
Karaoglu, Ali; Lee, Bong Ho; Boev, Atanas; Cheong, Won-Sik; Gotchev, Atanas
2012-06-01
3D video content is captured and created mainly in high resolution, targeting big cinema or home TV screens. For 3D mobile devices equipped with small-size auto-stereoscopic displays, such content has to be properly repurposed, preferably in real-time. The repurposing requires not only spatial resizing but also properly maintaining the output stereo disparity, as it should deliver realistic, pleasant, and harmless 3D perception. In this paper, we propose an approach to adapt the disparity range of the source video to the comfort disparity zone of the target display. To achieve this, we adapt the scale and the aspect ratio of the source video. We aim at maximizing the disparity range of the retargeted content within the comfort zone and minimizing the letterboxing of the cropped content. The proposed algorithm consists of five stages. First, we analyse the display profile, which characterises what 3D content can be comfortably observed on the target display. Then, we perform fast disparity analysis of the input stereoscopic content. Instead of returning the dense disparity map, it returns an estimate of the disparity statistics (min, max, mean, and variance) per frame. Additionally, we detect scene cuts, where sharp transitions in disparities occur. Based on the estimated input and desired output disparity ranges, we derive the optimal cropping parameters and scale of the cropping window, which would yield the targeted disparity range and minimize the area of cropped and letterboxed content. Once the rescaling and cropping parameters are known, we perform a resampling procedure using spline-based and perceptually optimized resampling (anti-aliasing) kernels, which also have a very efficient computational structure. Perceptual optimization is achieved through adjusting the cut-off frequency of the anti-aliasing filter to the throughput of the target display.
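The disparity-range adaptation can be reduced to a simple scale computation under the assumption that uniform spatial scaling of a stereo pair scales disparities proportionally; this sketch ignores the paper's joint cropping and letterboxing optimization:

```python
def retarget_scale(d_in_min, d_in_max, d_out_min, d_out_max):
    """Scale factor fitting the source disparity range into the comfort zone.

    A simplified model of the adaptation stage: shrink the frame (and with
    it all disparities) just enough that the source range fits inside the
    display's comfort range, and never enlarge beyond 1:1.
    """
    in_range = d_in_max - d_in_min
    out_range = d_out_max - d_out_min
    if in_range <= 0:
        return 1.0  # flat scene: no rescaling needed
    return min(1.0, out_range / in_range)
```

For example, a source spanning 40 pixels of disparity shown on a display whose comfort zone spans 20 pixels would be scaled by 0.5, while content already inside the comfort zone is left at full scale.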
Innovative railroad information displays : video guide
DOT National Transportation Integrated Search
1998-01-01
The objectives of this study were to explore the potential of advanced digital technology, : novel concepts of information management, geographic information databases and : display capabilities in order to enhance planning and decision-making proces...
Perceptual tools for quality-aware video networks
NASA Astrophysics Data System (ADS)
Bovik, A. C.
2014-01-01
Monitoring and controlling the quality of the viewing experience of videos transmitted over increasingly congested networks (especially wireless networks) is a pressing problem owing to rapid advances in video-centric mobile communication and display devices that are straining the capacity of the network infrastructure. New developments in automatic perceptual video quality models offer tools that have the potential to be used to perceptually optimize wireless video, leading to more efficient video data delivery and better received quality. In this talk I will review key perceptual principles that are, or could be used to create effective video quality prediction models, and leading quality prediction models that utilize these principles. The goal is to be able to monitor and perceptually optimize video networks by making them "quality-aware."
Representing videos in tangible products
NASA Astrophysics Data System (ADS)
Fageth, Reiner; Weiting, Ralf
2014-03-01
Videos can be taken with nearly every camera: digital point-and-shoot cameras, DSLRs, smartphones, and, increasingly, so-called action cameras mounted on sports devices. A software implementation that represents videos in print by generating QR codes and extracting relevant pictures from the video stream was the content of last year's paper. This year we present first data about what content is displayed and how users represent their videos in printed products, e.g. CEWE PHOTOBOOKS and greeting cards. We report the share of the different video formats used, the number of images extracted from each video in order to represent it, their positions in the book, and design strategies compared to regular books.
Usability testing of an mHealth device for swallowing therapy in head and neck cancer survivors.
Constantinescu, Gabriela; Kuffel, Kristina; King, Ben; Hodgetts, William; Rieger, Jana
2018-04-01
The objective of this study was to conduct the first patient usability testing of a mobile health (mHealth) system for in-home swallowing therapy. Five participants with a history of head and neck cancer evaluated the mHealth system. After completing an in-application (app) tutorial with the clinician, participants were asked to independently complete five tasks: pair the device to the smartphone, place the device correctly, exercise, interpret progress displays, and close the system. Quantitative and qualitative methods were used to evaluate the effectiveness, efficiency, and satisfaction with the system. Critical changes to the app were found in three of the tasks, resulting in recommendations for the next iteration. These issues were related to ease of Bluetooth pairing, placement of device, and interpretation of statistics. Usability testing with patients identified issues that were essential to address prior to implementing the mHealth system in subsequent clinical trials. Of the usability methods used, video observation (synced screen capture with videoed gestures) revealed the most information.
Design and implementation of a PC-based image-guided surgical system.
Stefansic, James D; Bass, W Andrew; Hartmann, Steven L; Beasley, Ryan A; Sinha, Tuhin K; Cash, David M; Herline, Alan J; Galloway, Robert L
2002-11-01
In interactive, image-guided surgery, current physical space position in the operating room is displayed on various sets of medical images used for surgical navigation. We have developed a PC-based surgical guidance system (ORION) which synchronously displays surgical position on up to four image sets and updates them in real time. There are three essential components which must be developed for this system: (1) accurately tracked instruments; (2) accurate registration techniques to map physical space to image space; and (3) methods to display and update the image sets on a computer monitor. For each of these components, we have developed a set of dynamic link libraries in MS Visual C++ 6.0 supporting various hardware tools and software techniques. Surgical instruments are tracked in physical space using an active optical tracking system. Several of the different registration algorithms were developed with a library of robust math kernel functions, and the accuracy of all registration techniques was thoroughly investigated. Our display was developed using the Win32 API for windows management and tomographic visualization, a frame grabber for live video capture, and OpenGL for visualization of surface renderings. We have begun to use this current implementation of our system for several surgical procedures, including open and minimally invasive liver surgery.
Advanced Training Techniques Using Computer Generated Imagery.
1983-02-28
A video tape of the imagery described in this report has been made and is submitted along with the report. Unfortunately, the quality possible on standard monochrome 525-line video tape is not representative of the quality of the presentations as displayed on a color beam-penetration visual system. Demonstration scenes include: New York LaGuardia (twilight); sea surface and wake; Minneapolis-St. Paul International (twilight) with KC-135 tanker; and Minneapolis-St. Paul ground targets.
Competitive action video game players display rightward error bias during on-line video game play.
Roebuck, Andrew J; Dubnyk, Aurora J B; Cochran, David; Mandryk, Regan L; Howland, John G; Harms, Victoria
2017-09-12
Research in asymmetrical visuospatial attention has identified a leftward bias in the general population across a variety of measures, including visual attention and line-bisection tasks. In addition, increases in rightward collisions, or bumping, during visuospatial navigation tasks have been demonstrated in real-world and virtual environments. However, little research has investigated these biases beyond the laboratory. The present study uses a semi-naturalistic approach and the online video game streaming service Twitch to examine navigational errors and assaults as skilled action video game players (n = 60) compete in Counter-Strike: Global Offensive. The study showed a significant rightward bias in both fatal assaults and navigational errors. Analysis using the in-game ranking system as a measure of skill failed to show a relationship between bias and skill. These results suggest that a leftward visuospatial bias may exist in skilled players during online video game play. However, the present study was unable to account for some factors, such as environmental symmetry and player handedness. In conclusion, video game streaming is a promising method for behavioural research; however, further study is required before one can determine whether these results are an artefact of the method applied or representative of a genuine rightward bias.
Eye movement analysis of reading from computer displays, eReaders and printed books.
Zambarbieri, Daniela; Carniglia, Elena
2012-09-01
To compare eye movements during silent reading of three eBooks and a printed book. The three different eReading tools were a desktop PC, iPad tablet and Kindle eReader. Video-oculographic technology was used for recording eye movements. In the case of reading from the computer display the recordings were made by a video camera placed below the computer screen, whereas for reading from the iPad tablet, eReader and printed book the recording system was worn by the subject and had two cameras: one for recording the movement of the eyes and the other for recording the scene in front of the subject. Data analysis provided quantitative information in terms of number of fixations, their duration, and the direction of the movement, the latter to distinguish between fixations and regressions. Mean fixation duration was different only in reading from the computer display, and was similar for the Tablet, eReader and printed book. The percentage of regressions with respect to the total amount of fixations was comparable for eReading tools and the printed book. The analysis of eye movements during reading an eBook from different eReading tools suggests that subjects' reading behaviour is similar to reading from a printed book. © 2012 The College of Optometrists.
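The fixation/regression distinction used in the analysis above, based on the direction of movement, reduces to a sign test on successive horizontal gaze positions. A simplified sketch (hypothetical function names; assumes left-to-right reading and ignores return sweeps at line ends, which real analyses must handle):

```python
def classify_fixations(x_positions, regression_threshold=0.0):
    """Label each fixation after the first as 'forward' or 'regression'
    based on the horizontal direction of the preceding saccade.

    Assumes left-to-right reading, so a leftward jump counts as a
    regression (an illustrative simplification of video-oculographic
    fixation analysis).
    """
    labels = []
    for prev, curr in zip(x_positions, x_positions[1:]):
        labels.append("regression" if curr - prev < regression_threshold else "forward")
    return labels

# Gaze x-coordinates (arbitrary units) across one line of text:
print(classify_fixations([10, 25, 40, 32, 55]))  # one leftward jump
```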
Individual recognition based on communication behaviour of male fowl.
Smith, Carolynn L; Taubert, Jessica; Weldon, Kimberly; Evans, Christopher S
2016-04-01
Correctly directing social behaviour towards a specific individual requires an ability to discriminate between conspecifics. The mechanisms of individual recognition include phenotype matching and familiarity-based recognition. Communication-based recognition is a subset of familiarity-based recognition wherein the classification is based on behavioural or distinctive signalling properties. Male fowl (Gallus gallus) produce a visual display (tidbitting) upon finding food in the presence of a female. Females typically approach displaying males. However, males may tidbit without food. We used the distinctiveness of the visual display and the unreliability of some males to test for communication-based recognition in female fowl. We manipulated the prior experience of the hens with the males to create two classes of males: S(+), wherein the tidbitting signal was paired with a food reward to the female, and S(-), wherein the tidbitting signal occurred without food reward. We then conducted a sequential discrimination test with hens using a live video feed of a familiar male. The results of the discrimination tests revealed that hens discriminated between categories of males based on their signalling behaviour. These results suggest that fowl possess a communication-based recognition system. This is the first demonstration of live-to-video transfer of recognition in any species of bird. Copyright © 2016 Elsevier B.V. All rights reserved.
Dissecting children's observational learning of complex actions through selective video displays.
Flynn, Emma; Whiten, Andrew
2013-10-01
Children can learn how to use complex objects by watching others, yet the relative importance of different elements they may observe, such as the interactions of the individual parts of the apparatus, a model's movements, and desirable outcomes, remains unclear. In total, 140 3-year-olds and 140 5-year-olds participated in a study where they observed a video showing tools being used to extract a reward item from a complex puzzle box. Conditions varied according to the elements that could be seen in the video: (a) the whole display, including the model's hands, the tools, and the box; (b) the tools and the box but not the model's hands; (c) the model's hands and the tools but not the box; (d) only the end state with the box opened; and (e) no demonstration. Children's later attempts at the task were coded to establish whether they imitated the hierarchically organized sequence of the model's actions, the action details, and/or the outcome. Children's successful retrieval of the reward from the box and the replication of hierarchical sequence information were reduced in all but the whole display condition. Only once children had attempted the task and witnessed a second demonstration did the display focused on the tools and box prove to be better for hierarchical sequence information than the display focused on the tools and hands only. Copyright © 2013 Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
Haines, Richard F.; Chuang, Sherry L.
1993-01-01
Current plans indicate that there will be a large number of life science experiments carried out during the thirty-year-long mission of the Biological Flight Research Laboratory (BFRL) on board Space Station Freedom (SSF). Non-human life science experiments will be performed in the BFRL. Two distinct types of activities have already been identified for this facility: (1) collect, store, distribute, analyze and manage engineering and science data from the Habitats, Glovebox and Centrifuge; and (2) perform a broad range of remote science activities in the Glovebox and Habitat chambers in conjunction with the remotely located principal investigator (PI). These activities require extensive video coverage, viewing and/or recording, and distribution to video displays on board SSF and to the ground. This paper concentrates mainly on the second type of activity. Each of the two BFRL habitat racks is designed to be configurable for either six rodent habitats per rack, four plant habitats per rack, or a combination of the above. Two video cameras will be installed in each habitat, with a spare attachment for a third camera when needed. Therefore, a video system that can accommodate up to 12-18 camera inputs per habitat rack must be considered.
Heo, Hwan; Lee, Won Oh; Shin, Kwang Yong; Park, Kang Ryoung
2014-01-01
We propose a new method for measuring the degree of eyestrain on 3D stereoscopic displays using a glasses-type eye tracking device. Our study is novel in the following four ways: first, the circular area where a user's gaze position exists is defined based on the calculated gaze position and gaze estimation error. Within this circular area, the position where edge strength is maximized can be detected, and we take this position as the gaze position with the higher probability of being the correct one. Based on this gaze point, the eye foveation model is defined. Second, we quantitatively evaluate the correlation between the degree of eyestrain and the causal factors of visual fatigue, such as the degree of change of stereoscopic disparity (CSD), stereoscopic disparity (SD), frame cancellation effect (FCE), and edge component (EC) of the 3D stereoscopic display using the eye foveation model. Third, by comparing the eyestrain in conventional 3D video and experimental 3D sample video, we analyze the characteristics of eyestrain according to various factors and types of 3D video. Fourth, by comparing the eyestrain with and without compensation of saccadic eye movements in 3D video, we analyze the characteristics of eyestrain according to the types of eye movements in 3D video. Experimental results show that the degree of CSD causes more eyestrain than other factors. PMID:24834910
Network and user interface for PAT DOME virtual motion environment system
NASA Technical Reports Server (NTRS)
Worthington, J. W.; Duncan, K. M.; Crosier, W. G.
1993-01-01
The Device for Orientation and Motion Environments Preflight Adaptation Trainer (DOME PAT) provides astronauts a virtual microgravity sensory environment designed to help alleviate the symptoms of space motion sickness (SMS). The system consists of four microcomputers networked to provide real-time control, and an image generator (IG) driving a wide-angle video display inside a dome structure. The spherical display demands distortion correction. The system is currently being modified with a new graphical user interface (GUI) and a new Silicon Graphics IG. This paper will concentrate on the new GUI and the networking scheme. The new GUI eliminates proprietary graphics hardware and software, and instead makes use of standard, low-cost PC video (CGA) and off-the-shelf software (Microsoft's Quick C). Mouse selection for user input is supported. The new Silicon Graphics IG requires an Ethernet interface. The microcomputer known as the Real Time Controller (RTC), which has overall control of the system and is written in Ada, was modified to use the free public-domain NCSA Telnet software for Ethernet communications with the Silicon Graphics IG. The RTC also maintains the original ARCNET communications through Novell Netware IPX with the rest of the system. The Telnet TCP/IP protocol was first used for real-time communication, but because of buffering problems the Telnet datagram (UDP) protocol needed to be implemented. Since the Telnet modules are written in C, the Ada pragma 'Interface' was used to interface with the network calls.
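The buffering problem that forced the switch from Telnet's TCP stream to UDP datagrams is characteristic of real-time control traffic: with UDP each state update is an independent packet, so a late update is simply superseded by the next one instead of queueing behind it. A Python sketch of the datagram approach (the original RTC code was C called from Ada via pragma 'Interface'; the packet format and names here are hypothetical):

```python
import socket

def send_state(sock, address, frame_id, payload):
    """Fire-and-forget one real-time state update as a single UDP datagram.

    No connection, no retransmission, no in-order buffering: a dropped or
    late packet is superseded by the next frame's update, which is exactly
    the behavior a real-time motion environment needs.
    """
    packet = frame_id.to_bytes(4, "big") + payload
    sock.sendto(packet, address)
    return len(packet)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sent = send_state(sock, ("127.0.0.1", 5005), 42, b"viewpoint-update")
sock.close()
```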
Giera, Brian; Bukosky, Scott; Lee, Elaine; ...
2018-01-23
Here, quantitative color analysis is performed on videos of high-contrast, low-power reversible electrophoretic deposition (EPD)-based displays operated under different applied voltages. The analysis is coded in open-source software, relies on a color differentiation metric, ΔE*00, derived from digital video, and provides an intuitive relationship between the operating conditions of the devices and their performance. Time-dependent ΔE*00 color analysis reveals color relaxation behavior, recoverability for different voltage sequences, and operating conditions that can lead to optimal performance.
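The per-frame color-differentiation idea can be illustrated with the simpler CIE76 ΔE*ab distance (the study itself uses the more elaborate CIEDE2000 ΔE*00 formula; the values and function names below are hypothetical):

```python
import math

def delta_e_cie76(lab1, lab2):
    """Euclidean distance between two CIELAB colors (CIE76 ΔE*ab).

    A simpler stand-in for the CIEDE2000 (ΔE*00) metric used in the study;
    both express a perceptual color difference as a single number.
    """
    return math.dist(lab1, lab2)

def color_trace(frames, reference):
    """Per-frame color difference against a reference state, e.g. the
    fully bleached (undeposited) appearance of the EPD display."""
    return [delta_e_cie76(frame, reference) for frame in frames]

# Mean L*a*b* of the device region in three successive video frames,
# compared against its initial state (hypothetical values):
trace = color_trace([(70, 5, -2), (55, 8, -4), (40, 10, -6)], (70, 5, -2))
```

A rising trace indicates progressive deposition; a falling trace after the voltage is removed captures the color relaxation behavior described above.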
Neutrons Image Additive Manufactured Turbine Blade in 3-D
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
2016-04-29
The video displays the Inconel 718 turbine blade made by additive manufacturing. First, a gray-scale neutron computed tomogram (CT) is displayed with transparency in order to show the internal structure. Then the neutron CT is overlaid with the engineering drawing that was used to print the part, and a comparison of external and internal structures is possible. This provides a map of the accuracy of the printed turbine (printing tolerance). Internal surface roughness can also be observed. Credits: Experimental measurements: Hassina Z. Bilheux; video and printing tolerance analysis: Jean C. Bilheux
Evaluating the content and reception of messages from incarcerated parents to their children.
Folk, Johanna B; Nichols, Emily B; Dallaire, Danielle H; Loper, Ann B
2012-10-01
In the current study, children's reactions to video messages from their incarcerated parents were evaluated. Previous research has yielded mixed results when examining the impact of contact between incarcerated parents and their children; one reason for these mixed results may be a lack of attention to the quality of contact. This is the first study to examine the actual content and quality of a remote form of contact in this population. Participants included 186 incarcerated parents (54% mothers) who participated in a filming with The Messages Project and 61 caregivers of their children. Parental mood prior to filming the message and children's mood after viewing the message were assessed using the Positive and Negative Affect Scale. After coding the content of 172 videos, the data from the 61 videos with caregiver responses were used in subsequent path analyses. Analyses indicated that when parents were in more negative moods prior to filming their message, they displayed more negative emotions in the video messages (β = .210), and their children were in more negative moods after viewing the message (β = .288). Considering that displays of negative emotion can directly affect how children respond to contact, it seems important for parents to learn to regulate these emotional displays to improve the quality of their contact with their children. © 2012 American Orthopsychiatric Association.
Stereoscopic 3D video games and their effects on engagement
NASA Astrophysics Data System (ADS)
Hogue, Andrew; Kapralos, Bill; Zerebecki, Chris; Tawadrous, Mina; Stanfield, Brodie; Hogue, Urszula
2012-03-01
With television manufacturers developing low-cost stereoscopic 3D displays, a large number of consumers will undoubtedly have access to 3D-capable televisions at home. The availability of 3D technology places the onus on content creators to develop interesting and engaging content. While the technology of stereoscopic displays and content generation is well understood, there are many questions yet to be answered surrounding its effects on the viewer. The effects of stereoscopic display on passive viewers of film are known; however, video games are fundamentally different, since the viewer/player is actively (rather than passively) engaged in the content. Questions of how stereoscopic viewing affects interaction mechanics have previously been studied in the context of player performance, but few studies have attempted to quantify the player experience to determine whether stereoscopic 3D has a positive or negative influence on overall engagement. In this paper we present a preliminary study of the effects stereoscopic 3D has on player engagement in video games. Participants played a video game in two conditions, traditional 2D and stereoscopic 3D, and their engagement was quantified using a previously validated self-reporting tool. The results suggest that S3D has a positive effect on immersion, presence, flow, and absorption.
Raster Scan Computer Image Generation (CIG) System Based On Refresh Memory
NASA Astrophysics Data System (ADS)
Dichter, W.; Doris, K.; Conkling, C.
1982-06-01
A full color, Computer Image Generation (CIG) raster visual system has been developed which provides a high level of training sophistication by utilizing advanced semiconductor technology and innovative hardware and firmware techniques. Double buffered refresh memory and efficient algorithms eliminate the problem of conventional raster line ordering by allowing the generated image to be stored in a random fashion. Modular design techniques and simplified architecture provide significant advantages in reduced system cost, standardization of parts, and high reliability. The major system components are a general purpose computer to perform interfacing and data base functions; a geometric processor to define the instantaneous scene image; a display generator to convert the image to a video signal; an illumination control unit which provides final image processing; and a CRT monitor for display of the completed image. Additional optional enhancements include texture generators, increased edge and occultation capability, curved surface shading, and data base extensions.
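The double-buffered refresh memory that frees the image generator from conventional raster-line ordering can be sketched as two framebuffers with a flip at frame boundaries (an illustrative software model with hypothetical names, not the actual hardware design):

```python
class DoubleBufferedRefreshMemory:
    """Sketch of the double-buffering idea described above: the display
    generator scans out one buffer while the geometric processor writes the
    next image into the other in arbitrary (random) order, so pixels need
    not arrive in raster-line order."""

    def __init__(self, width, height):
        self.width, self.height = width, height
        self._buffers = [bytearray(width * height), bytearray(width * height)]
        self._front = 0  # index of the buffer currently being displayed

    def write_pixel(self, x, y, value):
        """Store into the back buffer at any address, in any order."""
        back = self._buffers[1 - self._front]
        back[y * self.width + x] = value

    def swap(self):
        """Flip buffers at the frame boundary; the completed image becomes visible."""
        self._front = 1 - self._front

    def read_displayed(self, x, y):
        return self._buffers[self._front][y * self.width + x]

fb = DoubleBufferedRefreshMemory(4, 4)
fb.write_pixel(2, 1, 200)            # written to the back buffer, not yet visible
before = fb.read_displayed(2, 1)     # still the old (blank) frame
fb.swap()
after = fb.read_displayed(2, 1)      # new frame now displayed
```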
Sensor fusion and augmented reality with the SAFIRE system
NASA Astrophysics Data System (ADS)
Saponaro, Philip; Treible, Wayne; Phelan, Brian; Sherbondy, Kelly; Kambhamettu, Chandra
2018-04-01
The Spectrally Agile Frequency-Incrementing Reconfigurable (SAFIRE) mobile radar system was developed and exercised at an arid U.S. test site. The system can detect hidden targets using radar, a global positioning system (GPS), dual stereo color cameras, and dual stereo thermal cameras. An Augmented Reality (AR) software interface allows the user to see a single fused video stream containing the SAR, color, and thermal imagery. The stereo sensors allow the AR system to display both fused 2D imagery and 3D metric reconstructions, where the user can "fly" around the 3D model and switch between the modalities.
Real-time interactive 3D computer stereography for recreational applications
NASA Astrophysics Data System (ADS)
Miyazawa, Atsushi; Ishii, Motonaga; Okuzawa, Kazunori; Sakamoto, Ryuuichi
2008-02-01
With the increasing calculation costs of 3D computer stereography, its low-cost, high-speed implementation requires effective distribution of computing resources. In this paper, we attempt to re-classify 3D display technologies on the basis of humans' 3D perception, in order to determine what level of presence or reality is required in recreational video game systems. We then discuss the design and implementation of stereography systems in two categories of the new classification.
NASA Technical Reports Server (NTRS)
Mohlenbrink, Christoph P.; Omar, Faisal Gamal; Homola, Jeffrey R.
2017-01-01
This is a video replay of system data that was generated from the UAS Traffic Management (UTM) Technical Capability Level (TCL) 2 flight demonstration in Nevada and rendered in Google Earth. What is depicted in the replay is a particular set of flights conducted as part of what was referred to as the Ocean scenario. The test range and surrounding area are presented followed by an overview of operational volumes. System messaging is also displayed as well as a replay of all of the five test flights as they occurred.
NASA Astrophysics Data System (ADS)
Culp, Robert D.; Bickley, George
Papers from the sixteenth annual American Astronautical Society Rocky Mountain Guidance and Control Conference are presented. The topics covered include the following: advances in guidance, navigation, and control; control system videos; guidance, navigation and control embedded flight control systems; recent experiences; guidance and control storyboard displays; and applications of modern control, featuring the Hubble Space Telescope (HST) performance enhancement study. For individual titles, see A95-80390 through A95-80436.
NASA Technical Reports Server (NTRS)
1998-01-01
Crystal River Engineering was originally featured in Spinoff 1992 with the Convolvotron, a high-speed digital audio processing system that delivers three-dimensional sound over headphones. The Convolvotron was developed for Ames' research on virtual acoustic displays. Crystal River is now a subsidiary of Aureal Semiconductor, Inc., and together they develop and market the technology, a 3-D (three-dimensional) audio technology known commercially today as Aureal 3D (A-3D). The technology has been incorporated into video games, surround sound systems, and sound cards.
Web-based video monitoring of CT and MRI procedures
NASA Astrophysics Data System (ADS)
Ratib, Osman M.; Dahlbom, Magdalena; Kho, Hwa T.; Valentino, Daniel J.; McCoy, J. Michael
2000-05-01
A web-based video transmission of images from CT and MRI consoles was implemented in an Intranet environment for real-time monitoring of ongoing procedures. Images captured from the consoles are compressed to video resolution and broadcast through a web server. When called upon, the attending radiologists can view these live images on any computer within the secured Intranet network. With adequate compression, these images can be displayed simultaneously in different locations at a rate of 2 to 5 images/sec over a standard LAN. While image quality is insufficient for diagnostic purposes, our user survey showed that the images were suitable for supervising a procedure, positioning the imaging slices and performing routine quality checks before completion of a study. The system was implemented at UCLA to monitor 9 CTs and 6 MRIs distributed across 4 buildings. This system significantly improved the radiologists' productivity by saving precious time spent in trips between reading rooms and examination rooms. It also improved patient throughput by reducing the time spent waiting for the radiologists to come and check a study before moving the patient from the scanner.
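A rough bandwidth budget shows why 2 to 5 images/sec was feasible on a standard LAN; the frame size and compression ratio below are assumptions for illustration, not figures from the paper:

```python
def stream_bandwidth_mbps(width, height, bytes_per_pixel, compression_ratio, fps):
    """Rough bandwidth of one monitoring stream in megabits per second."""
    raw_bytes = width * height * bytes_per_pixel
    return raw_bytes / compression_ratio * fps * 8 / 1e6

# Video-resolution frames (640x480, 8-bit gray, hypothetical) compressed
# ~20:1 and sent at the abstract's upper rate of 5 images/sec:
bw = stream_bandwidth_mbps(640, 480, 1, 20, 5)  # well under a 10 Mbps LAN
```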
New space sensor and mesoscale data analysis
NASA Technical Reports Server (NTRS)
Hickey, John S.
1987-01-01
The Earth Science and Application Division (ESAD) system/software that was developed provides the research scientist with the following capabilities: an extensive database management capability to convert various experiment data types into a standard format; an interactive analysis and display package (AVE80); an interactive imaging/color graphics capability utilizing Apple III and IBM PC workstations integrated into the ESAD computer system; and local and remote smart-terminal capability providing color video, graphics, and LaserJet output. Recommendations for updating and enhancing the performance of the ESAD computer system are listed.
NASA work unit system users manual
NASA Technical Reports Server (NTRS)
1972-01-01
The NASA Work Unit System is a management information system for research tasks (i.e., work units) performed under NASA grants and contracts. It supplies profiles to indicate how much effort is being expended to what types of research, where the effort is being expended, and how funds are being distributed. The user obtains information by entering requests on the keyboard of a time-sharing terminal. Responses are received as video displays or typed messages at the terminal, or as lists printed in the computer room for subsequent delivery by messenger.
Pictorial communication in virtual and real environments
NASA Technical Reports Server (NTRS)
Ellis, Stephen R. (Editor)
1991-01-01
Papers about the communication between human users and machines in real and synthetic environments are presented. Individual topics addressed include: pictorial communication, distortions in memory for visual displays, cartography and map displays, efficiency of graphical perception, volumetric visualization of 3D data, spatial displays to increase pilot situational awareness, teleoperation of land vehicles, computer graphics system for visualizing spacecraft in orbit, visual display aid for orbital maneuvering, multiaxis control in telemanipulation and vehicle guidance, visual enhancements in pick-and-place tasks, target axis effects under transformed visual-motor mappings, adapting to variable prismatic displacement. Also discussed are: spatial vision within egocentric and exocentric frames of reference, sensory conflict in motion sickness, interactions of form and orientation, perception of geometrical structure from congruence, prediction of three-dimensionality across continuous surfaces, effects of viewpoint in the virtual space of pictures, visual slant underestimation, spatial constraints of stereopsis in video displays, stereoscopic stance perception, paradoxical monocular stereopsis and perspective vergence. (No individual items are abstracted in this volume)
Lei, Tim C.; Pendyala, Srinivas; Scherrer, Larry; Li, Buhong; Glazner, Gregory F.; Huang, Zheng
2016-01-01
Recent clinical reports suggest that overexposure to light emissions generated from cathode ray tube (CRT) and liquid crystal display (LCD) color monitors after topical or systemic administration of a photosensitizer could cause noticeable skin phototoxicity. In this study, we examined the light emission profiles (optical irradiance, spectral irradiance) of CRT and LCD monitors under simulated movie and video game modes. Results suggest that peak emissions and integrated fluence generated from monitors are clinically relevant and therefore prolonged exposure to these light sources at a close distance should be avoided after the administration of a photosensitizer or phototoxic drug. PMID:23669681
Membrane-mirror-based autostereoscopic display for tele-operation and telepresence applications
NASA Astrophysics Data System (ADS)
McKay, Stuart; Mair, Gordon M.; Mason, Steven; Revie, Kenneth
2000-05-01
An autostereoscopic display for telepresence and tele-operation applications has been developed at the University of Strathclyde in Glasgow, Scotland. The research is a collaborative effort between the Imaging Group and the Transparent Telepresence Research Group, both based at Strathclyde. A key component of the display is the directional screen; a 1.2-m diameter Stretchable Membrane Mirror is currently used. This patented technology enables large-diameter, small f-number mirrors to be produced at a fraction of the cost of conventional optics. Another key element of the present system is an anthropomorphic and anthropometric stereo camera sensor platform. Thus, in addition to mirror development, research areas include sensor platform design focused on sight, hearing, and smell; telecommunications; display systems for visual, aural and other senses; tele-operation; and augmented reality. The sensor platform is located at the remote site and transmits live video to the home location. Applications for this technology are as diverse as they are numerous, ranging from bomb disposal and other hazardous-environment applications to tele-conferencing, sales, education and entertainment.
Fiber-channel audio video standard for military and commercial aircraft product lines
NASA Astrophysics Data System (ADS)
Keller, Jack E.
2002-08-01
Fibre channel is an emerging high-speed digital network technology that is making inroads into the avionics arena. The suitability of fibre channel for such applications is largely due to its flexibility in several key areas: network topologies can be configured in point-to-point, arbitrated loop or switched fabric connections; the physical layer supports either copper or fiber optic implementations with a bit error rate of less than 10^-12; multiple classes of service are available; multiple upper level protocols are supported; and multiple high-speed data rates offer open-ended growth paths with speed negotiation within a single network. Current speeds supported by commercially available hardware are 1 and 2 Gbps, providing effective data rates of 100 and 200 MBps respectively. Such networks lend themselves well to the transport of digital video and audio data. This paper summarizes an ANSI standard currently in the final approval cycle of the InterNational Committee for Information Technology Standards (INCITS). This standard defines a flexible mechanism whereby digital video, audio and ancillary data are systematically packaged for transport over a fibre channel network. The basic mechanism, called a container, houses audio and video content functionally grouped as elements of the container called objects. Featured in this paper is a specific container mapping called Simple Parametric Digital Video (SPDV), developed particularly to address digital video in avionics systems. SPDV provides pixel-based video with associated ancillary data, typically sourced by various sensors, to be processed and/or distributed in the cockpit for presentation via high-resolution displays. Also highlighted in this paper is a streamlined Upper Level Protocol (ULP) called Frame Header Control Procedure (FHCP), targeted for avionics systems where the functionality of a more complex ULP is not required.
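The container-and-objects idea, tagged payloads multiplexed onto one stream so a receiver can demultiplex video, audio and ancillary data, can be illustrated with a toy packing scheme. The field layout below is invented for illustration and is NOT the FC-AV / ARINC 818 header format defined by the standard:

```python
import struct

# Hypothetical fixed-size object header: container id, object type,
# object count, payload length (big-endian, as network protocols prefer).
HEADER = struct.Struct(">IHHI")

OBJECT_ANCILLARY, OBJECT_VIDEO, OBJECT_AUDIO = 0, 1, 2

def pack_object(container_id, obj_type, payload):
    """Prefix a payload with a header so a receiver can demultiplex video,
    audio and ancillary objects arriving on the same stream."""
    return HEADER.pack(container_id, obj_type, 1, len(payload)) + payload

def unpack_object(packet):
    """Recover the container id, object type and payload from one packet."""
    cid, obj_type, count, length = HEADER.unpack_from(packet)
    return cid, obj_type, packet[HEADER.size:HEADER.size + length]

pkt = pack_object(7, OBJECT_VIDEO, b"\x00" * 16)   # one 16-byte video object
cid, kind, payload = unpack_object(pkt)
```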
Fakhruddin, Kausar Sadia; El Batawi, Hisham; Gorduysus, Mehmet Omer
2015-01-01
The aim of this study was to assess the effectiveness of an audiovisual distraction technique with video eyewear and a computerized delivery system-intrasulcular (CDS-IS) during the application of local anesthetic in phobic pediatric patients undergoing pulp therapy of primary molars. This randomized, crossover clinical study included 60 children aged 4 to 7 years (31 boys and 29 girls). Children were randomly distributed equally into two groups, A and B. The study involved two treatment sessions of pulp therapy, 1 week apart. During treatment session I, group A had audiovisual distraction with video eyewear, whereas group B had audiovisual distraction using a projector display only, without video eyewear. During treatment session II, group A underwent pulp therapy without video eyewear distraction, whereas group B had the pulp treatment using video eyewear distraction. Each session involved pulp therapy of equivalent teeth on opposite sides of the mouth. At each visit, scores on the Modified Child Dental Anxiety Scale (MCDAS) (f) were used to evaluate the level of anxiety before treatment. After the procedure, children were instructed to rate their pain during treatment on the Wong-Baker FACES pain scale. Pulse oximeter and heart rate readings were recorded every 10 min. From preoperative treatment session I (with video eyewear) to preoperative treatment session II (without video eyewear), a significant (P > 0.03) change in the mean anxiety score on the MCDAS (f) was observed for group A. Self-reported mean pain scores decreased markedly after treatment sessions with video eyewear for both groups.
The use of audiovisual distraction with video eyewear, together with the CDS-IS system for anesthetic delivery, was demonstrated to be more effective in improving children's cooperation than routine psychological interventions and is therefore highly recommended as a behavior management technique for long, invasive pulp therapy procedures in young children.
ERIC Educational Resources Information Center
Dahlgren, Sally
2000-01-01
Discusses how advances in light-emitting diode (LED) technology are helping video displays at sporting events get fans closer to the action than ever before. The types of LED displays available are discussed, as are their operation and maintenance issues. (GR)
Abrahamsen, Emil Riis; Christensen, Ann-Eva; Hougaard, Dan Dupont
2018-02-01
To evaluate intra- and interexaminer variability of the video Head Impulse Test (v-HIT) when assessing all six semicircular canals (SCCs) with two separate v-HIT systems. Prospective study. Department of Otolaryngology, Head and Neck Surgery, Aalborg University Hospital, Denmark. One hundred twenty healthy subjects. Four separate tests of all six SCCs with either system A or system B. Two examiners tested all subjects twice. Pretest randomization included type of v-HIT system, order of paired SCC testing, and initial examiner. Gain values and the presence of pathological saccades were registered. Ninety-five percent limits of agreement (LOAs) were calculated for both intra- and interexaminer variability; the upper and lower LOAs are obtained by adding the limit value to, or subtracting it from, the mean difference, and 95% of the differences lie within these limits. Interexaminer reliability: system A: LOAs between 0.13 and 0.24 for the horizontal SCCs and between 0.42 and 0.74 for the vertical SCCs; system B: LOAs between 0.09 and 0.13 for the horizontal SCCs and between 0.13 and 0.20 for the vertical SCCs. Intraexaminer reliability: system A: LOAs were 0.19 and 0.14 for the horizontal SCCs and varied from 0.43 to 0.53 for the vertical SCCs; system B: LOAs were 0.14 for the horizontal SCCs and varied from 0.13 to 0.22 for the vertical SCCs. For horizontal SCC testing, both v-HIT systems displayed good intra- and interexaminer variability. For vertical SCC testing, system B displayed good intra- and interexaminer variability, whereas the opposite was true of system A.
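The 95% limits of agreement reported above are the standard Bland-Altman bounds: mean difference ± 1.96 × SD of the paired differences. A minimal sketch, using made-up gain values rather than data from the study:

```python
import statistics

def limits_of_agreement(gains_a, gains_b):
    """Bland-Altman 95% limits of agreement between paired gain measurements:
    mean difference +/- 1.96 * SD of the differences."""
    diffs = [a - b for a, b in zip(gains_a, gains_b)]
    mean_diff = statistics.mean(diffs)
    half_width = 1.96 * statistics.stdev(diffs)
    return mean_diff - half_width, mean_diff + half_width

# Illustrative v-HIT gain values, NOT data from the study
run1 = [0.95, 1.02, 0.88, 0.97, 1.05, 0.91]
run2 = [0.93, 1.00, 0.90, 0.99, 1.01, 0.94]
low, high = limits_of_agreement(run1, run2)
assert low < 0 < high   # ~95% of the differences fall inside [low, high]
```

Narrower limits (as seen for system B's vertical canals) mean the two examiners, or repeated runs, agree more closely.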
3-D video techniques in endoscopic surgery.
Becker, H; Melzer, A; Schurr, M O; Buess, G
1993-02-01
Three-dimensional visualisation of the operative field is an important prerequisite for precise and fast handling in open surgical operations. Until now it has only been possible to display a two-dimensional image on the monitor during endoscopic procedures. The increasing complexity of minimally invasive interventions requires endoscopic suturing and ligature of larger vessels, which are difficult to perform without the impression of space. Three-dimensional vision may therefore decrease the operative risk, accelerate interventions and widen the operative spectrum. In April 1992 a 3-D video system developed at the Nuclear Research Center Karlsruhe, Germany (IAI Institute) was applied in various animal experimental procedures and clinically in laparoscopic cholecystectomy. The system works with a single monitor and active high-speed shutter glasses. Our first trials with this new 3-D imaging system clearly showed that it facilitated complex surgical manoeuvres such as mobilisation of organs, preparation in the deep space and suture techniques. The 3-D system introduced in this article will enter the market in 1993 (Opticon Co., Karlsruhe, Germany).
Expert Behavior in Children's Video Game Play.
ERIC Educational Resources Information Center
VanDeventer, Stephanie S.; White, James A.
2002-01-01
Investigates the display of expert behavior by seven outstanding video game-playing children ages 10 and 11. Analyzes observation and debriefing transcripts for evidence of self-monitoring, pattern recognition, principled decision making, qualitative thinking, and superior memory, and discusses implications for educators regarding the development…
Video-laryngoscopy introduction in a Sub-Saharan national teaching hospital: luxury or necessity?
Alain, Traoré Ibrahim; Drissa, Barro Sié; Flavien, Kaboré; Serge, Ilboudo; Idriss, Traoré
2015-01-01
Tracheal intubation using a Macintosh blade is the technique of choice for securing the airway. It can prove difficult, causing severe complications that can compromise the prognosis for survival or force postponement of the surgical operation. The video-laryngoscope allows a better display of the larynx and a good exposure of the glottis, making tracheal intubation simpler than with a conventional laryngoscope. It is not widely available in sub-Saharan Africa, and in Burkina Faso in particular, because of its high cost. We report our first experiences with the video-laryngoscope through two cases of difficult tracheal intubation that had required postponement of the interventions. The video-laryngoscope makes tracheal intubation easier even on first use, because of the good glottal view it provides and the ease of learning it allows. It is therefore not a luxury to have it in our therapeutic arsenal. PMID:27047621
An Augmented Reality-Based Approach for Surgical Telementoring in Austere Environments.
Andersen, Dan; Popescu, Voicu; Cabrera, Maria Eugenia; Shanghavi, Aditya; Mullis, Brian; Marley, Sherri; Gomez, Gerardo; Wachs, Juan P
2017-03-01
Telementoring can improve treatment of combat trauma injuries by connecting remote experienced surgeons with local less-experienced surgeons in an austere environment. Current surgical telementoring systems force the local surgeon to regularly shift focus away from the operating field to receive expert guidance, which can lead to surgery delays or even errors. The System for Telementoring with Augmented Reality (STAR) integrates expert-created annotations directly into the local surgeon's field of view. The local surgeon views the operating field by looking at a tablet display suspended between the patient and the surgeon that captures video of the surgical field. The remote surgeon remotely adds graphical annotations to the video. The annotations are sent back and displayed to the local surgeon while being automatically anchored to the operating field elements they describe. A technical evaluation demonstrates that STAR robustly anchors annotations despite tablet repositioning and occlusions. In a user study, participants used either STAR or a conventional telementoring system to precisely mark locations on a surgical simulator under a remote surgeon's guidance. Participants who used STAR completed the task with fewer focus shifts and with greater accuracy. The STAR reduces the local surgeon's need to shift attention during surgery, allowing him or her to continuously work while looking "through" the tablet screen. Reprint & Copyright © 2017 Association of Military Surgeons of the U.S.
Image Descriptors for Displays
1975-03-01
Only fragments of the report's figure list and text survive in this record: (a) a signal sampled with a composite blanking signal; (c) the signal in (a) formed into a composite video signal; power spectral density of the signals shown, where curve A is a composite video signal formed from 20 Hz to 2.5 MHz band-limited, Gaussian white noise and curve B is the average spectrum of off-the-air video. Off-the-air television signals broadcast on VHF channels were analyzed with a commercially…
An Augmented Virtuality Display for Improving UAV Usability
2005-01-01
…cockpit. For a more universally-understood metaphor, we have turned to virtual environments of the type represented in video games. Many of the people who have the need to fly UAVs (such as military personnel) have experience with playing video games. They are skilled in navigating virtual… Another aspect of tailoring the interface to those with video game experience is to use familiar controls. Microsoft has developed a popular and…
Toward a 3D video format for auto-stereoscopic displays
NASA Astrophysics Data System (ADS)
Vetro, Anthony; Yea, Sehoon; Smolic, Aljoscha
2008-08-01
There has been increased momentum recently in the production of 3D content for cinema applications; for the most part, this has been limited to stereo content. There are also a variety of display technologies on the market that support 3DTV, each offering a different viewing experience and having different input requirements. More specifically, stereoscopic displays support stereo content and require glasses, while auto-stereoscopic displays avoid the need for glasses by rendering view-dependent stereo pairs for a multitude of viewing angles. To realize high quality auto-stereoscopic displays, multiple views of the video must either be provided as input to the display, or these views must be created locally at the display. The former approach has difficulties in that the production environment is typically limited to stereo, and transmission bandwidth for a large number of views is not likely to be available. This paper discusses an emerging 3D data format that enables the latter approach to be realized. A new framework for efficiently representing a 3D scene and enabling the reconstruction of an arbitrarily large number of views prior to rendering is introduced. Several design challenges are also highlighted through experimental results.
Data acquisition and analysis in the DOE/NASA Wind Energy Program
NASA Technical Reports Server (NTRS)
Neustadter, H. E.
1980-01-01
Four categories of data systems, each responding to a distinct information need, are presented. The categories are: control, technology, engineering and performance. The focus is on the technology data system, which consists of the following elements: sensors which measure critical parameters such as wind speed and direction, output power, blade loads and strains, and tower vibrations; remote multiplexing units (RMU) mounted on each wind turbine which frequency modulate, multiplex and transmit sensor outputs; the instrumentation available to record, process and display these signals; and centralized computer analysis of data. The RMU characteristics and multiplexing techniques are presented. Data processing is illustrated by following a typical signal through instruments such as the analog tape recorder, analog to digital converter, data compressor, digital tape recorder, video (CRT) display, and strip chart recorder.
Wide-Field-of-View, High-Resolution, Stereoscopic Imager
NASA Technical Reports Server (NTRS)
Prechtl, Eric F.; Sedwick, Raymond J.
2010-01-01
A device combines video feeds from multiple cameras to provide wide-field-of-view, high-resolution, stereoscopic video to the user. The prototype under development consists of two camera assemblies, one for each eye. One of these assemblies incorporates a mounting structure with multiple cameras attached at offset angles. The video signals from the cameras are fed to a central processing platform where each frame is color processed and mapped into a single contiguous wide-field-of-view image. Because the resolution of most display devices is typically smaller than the processed map, a cropped portion of the video feed is output to the display device. The positioning of the cropped window will likely be controlled through the use of a head tracking device, allowing the user to turn his or her head side-to-side or up and down to view different portions of the captured image. There are multiple options for the display of the stereoscopic image. The use of head mounted displays is one likely implementation; the use of 3D projection technologies is another under consideration. The technology can be adapted in a multitude of ways. The computing platform is scalable, such that the number, resolution, and sensitivity of the cameras can be leveraged to improve image resolution and field of view. Miniaturization efforts can be pursued to shrink the package down for better mobility. Power savings studies can be performed to enable unattended, remote sensing packages. Image compression and transmission technologies can be incorporated to enable an improved telepresence experience.
NASA Astrophysics Data System (ADS)
Sasaki, T.; Azuma, S.; Matsuda, S.; Nagayama, A.; Ogido, M.; Saito, H.; Hanafusa, Y.
2016-12-01
The Japan Agency for Marine-Earth Science and Technology (JAMSTEC) archives a large amount of deep-sea research video and photos obtained by JAMSTEC's research submersibles and camera-equipped vehicles. The web site "JAMSTEC E-library of Deep-sea Images: J-EDI" (http://www.godac.jamstec.go.jp/jedi/e/) has made these videos and photos available to the public via the Internet since 2011. Users can search for target videos and photos at J-EDI by keywords, easy-to-understand icons, and dive information, because operating staff classify videos and photos by content, e.g. living organisms and geological environment, and add comments to them. Dive survey data, including videos and photos, are not only valuable academically but also helpful for education and outreach activities. With the aim of improving visibility for broader communities, this year we added new functions for 3-dimensional display that synchronize various dive survey data with videos. New functions: Users can search for dive survey data on 3D maps with plotted dive points using the WebGL virtual map engine "Cesium". By selecting a dive point, users can watch deep-sea videos and photos and associated environmental data, e.g. water temperature, salinity, and rock and biological sample photos, obtained during the dive survey. Users can browse a dive track visualized in a 3D virtual space using a WebGL JavaScript library. By synchronizing this virtual dive track with videos, users can watch deep-sea videos recorded at any point on the track. Users can play an animation in which a submersible-shaped polygon automatically traces the 3D virtual dive track while the displays of dive survey data stay synchronized with the trace. Users can also refer directly to additional information in other JAMSTEC data sites, such as the marine biodiversity database, marine biological sample database, rock sample database, and cruise and dive information database, from each page on which a 3D virtual dive track is displayed.
A 3D visualization of a dive track lets users experience a virtual dive survey. In addition, by synchronizing the virtual dive track with videos, the living organisms and geological environment at a dive point become easier to understand. These functions will therefore visually support understanding of deep-sea environments in lectures and educational activities.
[The prevalence and influencing factors of eye diseases for IT industry video operation workers].
Zhao, Liang-liang; Yu, Yan-yan; Yu, Wen-lan; Xu, Ming; Cao, Wen-dong; Zhang, Hong-bing; Han, Lei; Zhang, Heng-dong
2013-05-01
To investigate video exposure and eye disease among IT industry video operation workers, to analyze the influencing factors, and to provide scientific evidence for formulating health strategies for these workers. We used random cluster sampling to select 190 IT industry video operation workers in a city of Jiangsu province and analyzed the relations between video contact and eye disease. The daily video contact time of these workers was 6.0-16.0 hours, with a mean of (10.1 ± 1.8) hours. 79.5% of the workers surveyed wore myopic lenses, 35.8% took breaks during work, and 14.2% used protective products when their eyes felt unwell. On the tear break-up time (BUT) test, 54.7% of workers had normal results in both eyes, while 45.3% had an abnormal result in at least one eye. Similarly, on the Schirmer I test (SIT), 54.7% of workers had normal results in both eyes, whereas 42.1% were abnormal. In a generalized linear model, six factors (mean daily video time, distance between eye and display, frequency of rest breaks, use of protective products when the eyes felt unwell, type of display, and daily time spent watching TV) had a statistically significant influence on vision. Likewise, six factors (regular rest breaks, sex, corneal transparency, pupil shape, family history, and use of protective products when the eyes felt unwell) had a statistically significant influence on BUT test results, and seven factors (type of computer, sex, pupil shape, corneal transparency, angle between the display and the worker's line of sight, type of display, and height of the work surface) had a statistically significant influence on SIT test results.
The eye health of IT industry video operation workers is not encouraging, and most workers lack protection awareness. Education should be strengthened according to these influencing factors, and the level of medical prevention and control of eye disease in the relevant industries improved.
Optical cross-talk and visual comfort of a stereoscopic display used in a real-time application
NASA Astrophysics Data System (ADS)
Pala, S.; Stevens, R.; Surman, P.
2007-02-01
Many 3D systems work by presenting to the observer stereoscopic pairs of images that are combined to give the impression of a 3D image. Discomfort experienced when viewing for extended periods may be due to several factors, including the presence of optical crosstalk between the stereo image channels. In this paper we use two video cameras and two LCD panels, viewed via a Helmholtz arrangement of mirrors, to display a stereoscopic image inherently free of crosstalk. Simple depth discrimination tasks are performed while viewing the 3D image, and controlled amounts of image crosstalk are introduced by electronically mixing the video signals. Error monitoring and skin conductance are used as measures of workload, alongside traditional subjective questionnaires. We report qualitative measurements of user workload under a variety of viewing conditions. This pilot study revealed a decrease in task performance and an increase in workload as crosstalk was increased. The observations will assist in the design of further trials planned to be conducted in a medical environment.
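Introducing a controlled crosstalk fraction by electronically mixing the two video channels can be modelled as a per-sample linear blend; this is a simplified sketch of the mixing, not the authors' hardware implementation:

```python
def mix_crosstalk(left, right, c):
    """Blend a fraction c (0..1) of each channel into the opposite one,
    emulating electronically introduced stereo crosstalk."""
    left_out = [(1 - c) * l + c * r for l, r in zip(left, right)]
    right_out = [(1 - c) * r + c * l for l, r in zip(left, right)]
    return left_out, right_out

# c = 0 leaves the channels untouched; c = 0.5 fully merges them
l_out, r_out = mix_crosstalk([100, 200], [0, 100], 0.5)
assert l_out == r_out == [50.0, 150.0]
```

Sweeping c from 0 upward reproduces the experimental manipulation: at c = 0 the Helmholtz display is crosstalk-free, and increasing c degrades channel separation.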
Novel ultrasonic real-time scanner featuring servo controlled transducers displaying a sector image.
Matzuk, T; Skolnick, M L
1978-07-01
This paper describes a new real-time servo-controlled sector scanner that produces high resolution images and has functionally programmable features similar to phased array systems, but possesses the simplicity of design and low cost best achievable in a mechanical sector scanner. The unique feature is the transducer head, which contains a single moving part, the transducer, enclosed within a light-weight, hand-held, and vibration-free case. The frame rate, sector width, and stop-action angle are all operator programmable. The frame rate can be varied from 12 to 30 frames s⁻¹ and the sector width from 0 to 60 degrees. Conversion from sector to time-motion (T/M) mode is instantaneous, and two options are available: a freeze-position high-density T/M and a low-density T/M obtainable simultaneously during sector visualization. Unusual electronic features are automatic gain control, electronic recording of images on video tape in rf format, and the ability to post-process images during video playback to extract the T/M display and to change time gain control (tgc) and image size.
Planetary Education and Outreach Using the NOAA Science on a Sphere
NASA Technical Reports Server (NTRS)
Simon-Miller, A. A.; Williams, D. R.; Smith, S. M.; Friedlander, J. S.; Mayo, L. A.; Clark, P. E.; Henderson, M. A.
2011-01-01
Science On a Sphere (SOS) is a large visualization system, developed by the National Oceanic and Atmospheric Administration (NOAA), that uses computers running Red Hat Linux and four video projectors to display animated data onto the outside of a sphere. Said another way, SOS is a stationary globe that can show dynamic, animated images in spherical form. Visualization of cylindrical data maps shows planets, their atmospheres, oceans, and land in very realistic form. Each projector is driven by a separate computer, and a fifth computer is used to control the operation of the display computers. Each computer is a relatively powerful PC with a high-end graphics card. The video projectors have native XGA resolution. The projectors are placed at the corners of a 30' x 30' square with a 68" carbon fiber sphere suspended in the center of the square. The equator of the sphere is typically located 86" off the floor. SOS uses common image formats such as JPEG or TIFF in a very specific but simple form: the images are plotted in an equatorial cylindrical equidistant projection or, as it is commonly known, a latitude/longitude grid, where the image is twice as wide as it is high (rectangular). 2048x1024 is the minimum usable spatial resolution without noticeable pixelation. Labels and text can be applied within the image, or using a timestamp-like feature within the SOS system software. There are two basic modes of operation for SOS: displaying a single image or an animated sequence of frames. The frame or frames can be set up to rotate or tilt, as in a planetary rotation. Sequences of images that animate through time produce a movie visualization, with or without an overlain soundtrack. After the images are processed, SOS displays them in sequence and plays them like a movie across the entire sphere surface.
Movies can be of any arbitrary length, limited mainly by disk space and can be animated at frame rates up to 30 frames per second. Transitions, special effects, and other computer graphics techniques can be added to a sequence through the use of off-the-shelf software, like Final Cut Pro. However, one drawback is that the Sphere cannot be used in the same manner as a flat movie screen; images cannot be pushed to a "side", a highlighted area must be viewable to all sides of the room simultaneously, and some transitions do not work as well as others. We discuss these issues and workarounds in our poster.
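The SOS input constraint described above, an equirectangular (latitude/longitude) frame twice as wide as it is tall and at least 2048x1024, is simple to check programmatically; a minimal sketch:

```python
def valid_sos_frame(width, height):
    """SOS expects an equirectangular (latitude/longitude) frame twice as
    wide as it is tall; 2048x1024 is the minimum usable resolution."""
    return width == 2 * height and width >= 2048

assert valid_sos_frame(2048, 1024)       # minimum acceptable frame
assert not valid_sos_frame(1920, 1080)   # HD video is the wrong aspect
```

A check like this is useful when preparing movie frames with off-the-shelf tools, since ordinary 16:9 video must be re-projected before the sphere will display it correctly.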
Method and apparatus for telemetry adaptive bandwidth compression
NASA Technical Reports Server (NTRS)
Graham, Olin L.
1987-01-01
Methods and apparatus are provided for automatic and/or manual adaptive bandwidth compression of telemetry. An adaptive sampler samples a video signal from a scanning sensor and generates a sequence of sampled fields. Each field and range rate information from the sensor are then sequentially transmitted to and stored in a multiple and adaptive field storage means. The field storage means then, in response to an automatic or manual control signal, transfers the stored sampled field signals to a video monitor in a form for sequential or simultaneous display of a desired number of stored signal fields. The sampling ratio of the adaptive sampler, the relative proportion of available communication bandwidth allocated respectively to transmitted data and video information, and the number of fields simultaneously displayed are manually or automatically selectively adjustable in functional relationship to each other and to the detected range rate. In one embodiment, when relatively little or no scene motion is detected, the control signal maximizes the sampling ratio and causes simultaneous display of all stored fields, thus maximizing resolution and the bandwidth available for data transmission. When increased scene motion is detected, the control signal is adjusted accordingly to cause display of fewer fields. If greater resolution is desired, the control signal is adjusted to increase the sampling ratio.
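The motion-dependent trade-off the abstract describes (static scene: display all stored fields at maximum resolution; increasing motion: display fewer fields) can be sketched as a simple control rule. The 0.1 threshold and the linear scaling below are illustrative assumptions, not values from the patent:

```python
def choose_display_fields(motion_level, max_fields=4):
    """Control-rule sketch: a nearly static scene shows all stored fields
    (maximum resolution); growing motion reduces the number of fields.
    The 0.1 threshold and linear scaling are illustrative assumptions."""
    if motion_level <= 0.1:
        return max_fields
    return max(1, round(max_fields * (1.0 - motion_level)))

assert choose_display_fields(0.0) == 4   # static scene: all fields shown
assert choose_display_fields(0.9) == 1   # fast motion: single field
```

In the patented scheme the same control signal also adjusts the sampling ratio and the data/video bandwidth split; this sketch isolates only the field-count decision.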
Nissen, Nicholas N; Menon, Vijay; Williams, James; Berci, George
2011-01-01
Background The use of loupe magnification during complex hepatobiliary and pancreatic (HBP) surgery has become routine. Unfortunately, loupe magnification has several disadvantages including limited magnification, a fixed field and non-variable magnification parameters. The aim of this report is to describe a simple system of video-microscopy for use in open surgery as an alternative to loupe magnification. Methods In video-microscopy, the operative field is displayed on a TV monitor using a high-definition (HD) camera with a special optic mounted on an adjustable mechanical arm. The set-up and application of this system are described and illustrated using examples drawn from pancreaticoduodenectomy, bile duct repair and liver transplantation. Results The system is easy to use and can provide variable magnification of ×4–12 at a camera distance of 25–35 cm from the operative field, with a depth of field of 15 mm. It allows the surgeon and assistant to work from an HD TV screen during critical phases of microsurgery. Conclusions The system described here provides better magnification than loupe lenses and thus may be beneficial during complex HBP procedures. Other benefits include the fact that its use decreases neck strain and postural fatigue in the surgeon, and that it can be used as a tool for documentation and teaching. PMID:21929677
NASA Astrophysics Data System (ADS)
Minamoto, Masahiko; Matsunaga, Katsuya
1999-05-01
Operator performance while using a remote-controlled backhoe shovel is described for three different stereoscopic viewing conditions: direct view, fixed stereoscopic cameras connected to a helmet-mounted display (HMD), and a rotating stereo camera slaved to the head orientation of a freely moving stereo HMD. Results showed that the head-slaved system provided the best performance.
Eavesdropping and signal matching in visual courtship displays of spiders.
Clark, David L; Roberts, J Andrew; Uetz, George W
2012-06-23
Eavesdropping on communication is widespread among animals, e.g. bystanders observing male-male contests, female mate choice copying and predator detection of prey cues. Some animals also exhibit signal matching, e.g. overlapping of competitors' acoustic signals in aggressive interactions. Fewer studies have examined male eavesdropping on conspecific courtship, although males could increase mating success by attending to others' behaviour and displaying whenever courtship is detected. In this study, we show that field-experienced male Schizocosa ocreata wolf spiders exhibit eavesdropping and signal matching when exposed to video playback of courting male conspecifics. Male spiders had longer bouts of interaction with a courting male stimulus, and more bouts of courtship signalling during and after the presence of a male on the video screen. Rates of courtship (leg tapping) displayed by individual focal males were correlated with the rates of the video exemplar to which they were exposed. These findings suggest male wolf spiders might gain information by eavesdropping on conspecific courtship and adjust performance to match that of rivals. This represents a novel finding, as these behaviours have previously been seen primarily among vertebrates.
Wireless Augmented Reality Prototype (WARP)
NASA Technical Reports Server (NTRS)
Devereaux, A. S.
1999-01-01
Initiated in January 1997, under NASA's Office of Life and Microgravity Sciences and Applications, the Wireless Augmented Reality Prototype (WARP) is a means to leverage recent advances in communications, displays, imaging sensors, biosensors, voice recognition and microelectronics to develop a hands-free, tetherless system capable of real-time personal display and control of computer system resources. Using WARP, an astronaut may efficiently operate and monitor any computer-controllable activity inside or outside the vehicle or station. The WARP concept is a lightweight, unobtrusive heads-up display with a wireless wearable control unit. Connectivity to the external system is achieved through a high-rate radio link from the WARP personal unit to a base station unit installed into any system PC. The radio link has been specially engineered to operate within the high-interference, high-multipath environment of a space shuttle or space station module. Through this virtual terminal, the astronaut will be able to view and manipulate imagery, text or video, using voice commands to control the terminal operations. WARP's hands-free access to computer-based instruction texts, diagrams and checklists replaces juggling manuals and clipboards, and tetherless computer system access allows free motion throughout a cabin while monitoring and operating equipment.
Actively addressed single pixel full-colour plasmonic display
Franklin, Daniel; Frank, Russell; Wu, Shin-Tson; Chanda, Debashis
2017-01-01
Dynamic, colour-changing surfaces have many applications including displays, wearables and active camouflage. Plasmonic nanostructures can fill this role by having the advantages of ultra-small pixels, high reflectivity and post-fabrication tuning through control of the surrounding media. However, previous reports of post-fabrication tuning have yet to cover a full red-green-blue (RGB) colour basis set with a single nanostructure of singular dimensions. Here, we report a method which greatly advances this tuning and demonstrates a liquid crystal-plasmonic system that covers the full RGB colour basis set, only as a function of voltage. This is accomplished through a surface morphology-induced, polarization-dependent plasmonic resonance and a combination of bulk and surface liquid crystal effects that manifest at different voltages. We further demonstrate the system's compatibility with existing LCD technology by integrating it with a commercially available thin-film-transistor array. The imprinted surface interfaces readily with computers to display images as well as video. PMID:28488671
Fractional screen video enhancement apparatus
Spletzer, Barry L. (Albuquerque, NM); Davidson, George S. (Albuquerque, NM); Zimmerer, Daniel J. (Tijeras, NM); Marron, Lisa C. (Albuquerque, NM)
2005-07-19
The present invention provides a method and apparatus for displaying two portions of an image at two resolutions. For example, the invention can display an entire image at a first resolution, and a subset of the image at a second, higher resolution. Two inexpensive, low resolution displays can be used to produce a large image with high resolution only where needed.
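The two-resolution idea in this patent abstract can be illustrated with a toy sketch (plain Python with hypothetical helper names, not the patented apparatus): the inexpensive display shows a block-averaged version of the whole image, while the second display shows a full-resolution crop of the region that needs detail.

```python
def downsample(image, factor):
    """Average non-overlapping factor x factor blocks of a 2D grayscale
    image, simulating the low-resolution full-view display."""
    rows = len(image) // factor
    cols = len(image[0]) // factor
    out = []
    for r in range(rows):
        row = []
        for c in range(cols):
            block = [image[r * factor + i][c * factor + j]
                     for i in range(factor) for j in range(factor)]
            row.append(sum(block) / len(block))
        out.append(row)
    return out

def crop(image, top, left, height, width):
    """Full-resolution sub-window for the high-resolution inset display."""
    return [row[left:left + width] for row in image[top:top + height]]

# A 4x4 test image: the low-res display shows 2x2 block averages,
# while the inset keeps the original detail of the upper-right region.
img = [[0, 0, 10, 10],
       [0, 0, 10, 10],
       [20, 20, 30, 30],
       [20, 20, 30, 30]]
low = downsample(img, 2)        # [[0.0, 10.0], [20.0, 30.0]]
inset = crop(img, 0, 2, 2, 2)   # [[10, 10], [10, 10]]
```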
X-window-based 2K display workstation
NASA Astrophysics Data System (ADS)
Weinberg, Wolfram S.; Hayrapetian, Alek S.; Cho, Paul S.; Valentino, Daniel J.; Taira, Ricky K.; Huang, H. K.
1991-07-01
A high-definition, high-performance display station for reading and review of digital radiological images is introduced. The station is based on a Sun SPARC Station 4 and employs the X Window System for display and manipulation of images. A mouse-operated graphical user interface is implemented utilizing Motif-style tools. The system supports up to four MegaScan gray-scale 2560 × 2048 monitors. A special configuration of frame and video buffer yields a data transfer of 50 Mpixels/s. A magnetic disk array supplies a storage capacity of 2 GB with a data transfer rate of 4-6 MB/s. The system has access to the central archive through an ultrahigh-speed fiber-optic network, and patient studies are automatically transferred to the local disk. The available image processing functions include change of lookup table, zoom and pan, and cine. Future enhancements will provide for manual contour tracing; length, area, and density measurements; text and graphic overlay; and composition of selected images. Additional preprocessing procedures under development will optimize the initial lookup table and adjust the images to a standard orientation.
Highly Reflective Multi-stable Electrofluidic Display Pixels
NASA Astrophysics Data System (ADS)
Yang, Shu
Electronic paper (E-paper) refers to displays that mimic the appearance of printed paper while retaining the features of conventional electronic displays, such as the ability to browse websites and play videos. The motivation for creating paper-like displays comes from the facts that reading on paper causes the least eye fatigue, owing to paper's reflective and light-diffusive nature, and that, unlike existing commercial displays, no energy of any form is spent sustaining the displayed image. To achieve the visual effect of a paper print, an ideal E-paper has to be highly reflective with good contrast ratio and full-color capability. To sustain the image with zero power consumption, the display pixels need to be bistable, meaning the "on" and "off" states are both lowest-energy states; a pixel can change its state only when sufficient external energy is supplied. Many emerging technologies are competing to demonstrate the first ideal E-paper device, but none has yet achieved satisfactory visual effect, bistability and video speed at the same time. The challenges come from either the inherent physical/chemical properties or the fabrication process. Electrofluidic display is one of the most promising E-paper technologies. It has demonstrated high reflectivity, brilliant color and video-speed operation by moving a colored pigment dispersion between visible and invisible positions with electrowetting force; however, the pixel design did not allow image bistability. Presented in this dissertation are multi-stable electrofluidic display pixels that sustain grayscale levels without any power consumption while keeping the favorable features of the previous-generation electrofluidic display. The pixel design, the fabrication method using multiple-layer dry-film photoresist lamination, and the physical/optical characterizations are discussed in detail.
Based on this pixel structure, preliminary results from a simplified design and fabrication method are demonstrated. As advanced research topics concerning the device's optical performance, an optical model for evaluating the light out-coupling efficiency of reflective displays is first established to guide the pixel design; aluminum surface diffusers are then analytically modeled and fabricated onto multi-stable electrofluidic display pixels to demonstrate truly "white" multi-stable electrofluidic display modules. The achieved results promote the multi-stable electrofluidic display as an excellent candidate for the ultimate E-paper device, especially for large-scale signage applications.
Stereoscopic Integrated Imaging Goggles for Multimodal Intraoperative Image Guidance
Mela, Christopher A.; Patterson, Carrie; Thompson, William K.; Papay, Francis; Liu, Yang
2015-01-01
We have developed novel stereoscopic wearable multimodal intraoperative imaging and display systems entitled Integrated Imaging Goggles for guiding surgeries. The prototype systems offer real time stereoscopic fluorescence imaging and color reflectance imaging capacity, along with in vivo handheld microscopy and ultrasound imaging. With the Integrated Imaging Goggle, both wide-field fluorescence imaging and in vivo microscopy are provided. The real time ultrasound images can also be presented in the goggle display. Furthermore, real time goggle-to-goggle stereoscopic video sharing is demonstrated, which can greatly facilitate telemedicine. In this paper, the prototype systems are described, characterized and tested in surgeries in biological tissues ex vivo. We have found that the system can detect fluorescent targets with as low as 60 nM indocyanine green and can resolve structures down to 0.25 mm with large FOV stereoscopic imaging. The system has successfully guided simulated cancer surgeries in chicken. The Integrated Imaging Goggle is novel in 4 aspects: it is (a) the first wearable stereoscopic wide-field intraoperative fluorescence imaging and display system, (b) the first wearable system offering both large FOV and microscopic imaging simultaneously, (c) the first wearable system that offers both ultrasound imaging and fluorescence imaging capacities, and (d) the first demonstration of goggle-to-goggle communication to share stereoscopic views for medical guidance. PMID:26529249
Display aids for remote control of untethered undersea vehicles
NASA Technical Reports Server (NTRS)
Verplank, W. L.
1978-01-01
A predictor display superimposed on slow-scan video or sonar data is proposed as a method to allow better remote manual control of an untethered submersible. Simulation experiments show good control under circumstances which otherwise make control practically impossible.
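A common form of predictor display is dead reckoning: extrapolating the vehicle's last known state across the communication delay and overlaying the predicted symbol on the delayed video. As an illustrative sketch only, the abstract does not specify the predictor model, assuming constant speed and heading:

```python
import math

def predict_position(x, y, heading_rad, speed, delay_s):
    """Dead-reckoning predictor: extrapolate the vehicle's last known
    position across the communication delay, assuming constant speed
    and heading. The predicted symbol would be superimposed on the
    slow-scan video or sonar display."""
    return (x + speed * delay_s * math.cos(heading_rad),
            y + speed * delay_s * math.sin(heading_rad))

# Vehicle last seen at (0, 0), heading due east at 2 m/s, with 3 s of delay:
px, py = predict_position(0.0, 0.0, 0.0, 2.0, 3.0)
# px == 6.0, py == 0.0
```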
Development of a Low Cost Graphics Terminal.
ERIC Educational Resources Information Center
Lehr, Ted
1985-01-01
Describes modifications made to expand the capabilities of a display unit (Lear Siegler ADM-3A) to include medium-resolution graphics. The modifying circuitry is detailed along with software subroutines written in Z-80 machine language for controlling the video display. (JN)
Comparing Pictures and Videos for Teaching Action Labels to Children with Communication Delays
ERIC Educational Resources Information Center
Schebell, Shannon; Shepley, Collin; Mataras, Theologia; Wunderlich, Kara
2018-01-01
Children with communication delays often display difficulties labeling stimuli in their environment, particularly related to actions. Research supports direct instruction with video and picture stimuli for increasing children's action labeling repertoires; however, no studies have compared which type of stimuli results in more efficient,…
1996-01-01
Ted Brunzie and Peter Mason observe the float package and the data rack aboard the DC-9 reduced gravity aircraft. The float package contains a cryostat, a video camera, a pump and accelerometers. The data rack displays and records the video signal from the float package on tape and stores acceleration and temperature measurements on disk.
ERIC Educational Resources Information Center
Krumboltz, John D.; Babineaux, Ryan; Wientjes, Greg
2010-01-01
The supply of occupational information appears to exceed the demand. A website displaying over 100 videos about various occupations was created to help career searchers find attractive alternatives. Access to the videos was free for anyone in the world. It had been hoped that many thousands of people would make use of the resource. However, the…
Realizing the increased potential of an open-system high-definition digital projector design
NASA Astrophysics Data System (ADS)
Daniels, Reginald
1999-05-01
Modern video projectors are becoming more compact and capable. The various display technologies are highly competitive and are delivering higher-performance, more compact projectors to market at an ever-quickening pace. However, end users are often left with the daunting task of integrating off-the-shelf projectors into a previously existing system. As projectors become more digitally enhanced, there will be a series of designs while the digital projector technology matures, and each design solution will be restricted by the state of the art at the time of manufacture. To allow the most growth and performance for a given price, many design decisions will be made and revisited over a period of years or decades. A modular, open digital system design concept is indeed a major challenge for future high-definition digital displays in all applications.
Computed intraoperative navigation guidance--a preliminary report on a new technique.
Enislidis, G; Wagner, A; Ploder, O; Ewers, R
1997-08-01
Objective: To assess the value of a computer-assisted three-dimensional guidance system (Virtual Patient System) in maxillofacial operations. Design: Laboratory and open clinical study. Setting: Teaching hospital, Austria. Patients: 6 patients undergoing various procedures including removal of foreign body (n=3) and biopsy, maxillary advancement, and insertion of implants (n=1 each). Interventions: Storage of computed tomographic (CT) pictures on an optical disc, and imposition of intraoperative video images on to these; the resulting display is shown to the surgeon on a micromonitor in his head-up display for guidance during the operations. Main outcome measures: Improved orientation during complex or minimally invasive maxillofacial procedures, making such operations easier and less traumatic. Results: Successful transfer of computed navigation technology into an operating room environment and positive evaluation of the method by the surgeons involved. Conclusion: Computer-assisted three-dimensional guidance systems have the potential to make complex or minimally invasive procedures easier to do, thereby reducing postoperative morbidity.
Portable Computer Technology (PCT) Research and Development Program Phase 2
NASA Technical Reports Server (NTRS)
Castillo, Michael; McGuire, Kenyon; Sorgi, Alan
1995-01-01
This project report focuses on: (1) design and development of two Advanced Portable Workstation 2 (APW 2) units, which incorporate advanced technology features such as a low-power Pentium processor, a high-resolution color display, National Television Standards Committee (NTSC) video handling capabilities, a Personal Computer Memory Card International Association (PCMCIA) interface, and Small Computer System Interface (SCSI) and Ethernet interfaces; (2) use of these units to integrate and demonstrate advanced wireless network and portable video capabilities; and (3) qualification of the APW 2 systems for use in specific experiments aboard the Mir Space Station. A major objective of the PCT Phase 2 program was to help guide future choices in computing platforms and techniques for meeting National Aeronautics and Space Administration (NASA) mission objectives, with a focus on developing optimal configurations of computing hardware, software applications, and network technologies for use on NASA missions.
PCI-based WILDFIRE reconfigurable computing engines
NASA Astrophysics Data System (ADS)
Fross, Bradley K.; Donaldson, Robert L.; Palmer, Douglas J.
1996-10-01
WILDFORCE is the first PCI-based custom reconfigurable computer that is based on the Splash 2 technology transferred from the National Security Agency and the Institute for Defense Analyses, Supercomputing Research Center (SRC). The WILDFORCE architecture has many of the features of the WILDFIRE computer, such as field-programmable gate array (FPGA) based processing elements, linear array and crossbar interconnection, and high-performance memory and I/O subsystems. New features introduced in the PCI-based WILDFIRE systems include memory/processor options that can be added to any processing element. These options include static and dynamic memory, digital signal processors (DSPs), FPGAs, and microprocessors. In addition to memory/processor options, many different application-specific connectors can be used to extend the I/O capabilities of the system, including systolic I/O, camera input and video display output. This paper also discusses how this new PCI-based reconfigurable computing engine is used for rapid prototyping, real-time video processing and other DSP applications.
On-line content creation for photo products: understanding what the user wants
NASA Astrophysics Data System (ADS)
Fageth, Reiner
2015-03-01
This paper describes how videos can be implemented in printed photo books and greeting cards. We show that, surprisingly or not, pictures from videos are used to tell compelling stories in much the same way as classical images. Videos can be taken with nearly every camera: digital point-and-shoot cameras, DSLRs, smartphones and, increasingly, so-called action cameras mounted on sports devices. The implementation of videos by generating QR codes and extracting relevant pictures from the video stream via software was the content of last year's paper. This year we present first data on what content is displayed and how users represent their videos in printed products, e.g. CEWE PHOTOBOOKS and greeting cards. We report the share of the different video formats used.
High dynamic range adaptive real-time smart camera: an overview of the HDR-ARTiST project
NASA Astrophysics Data System (ADS)
Lapray, Pierre-Jean; Heyrman, Barthélémy; Ginhac, Dominique
2015-04-01
Standard cameras capture only a fraction of the information that is visible to the human visual system. This is specifically true for natural scenes including areas of low and high illumination due to transitions between sunlit and shaded areas. When capturing such a scene, many cameras are unable to store the full Dynamic Range (DR), resulting in low-quality video where details are concealed in shadows or washed out by sunlight. The imaging technique that can overcome this problem is called HDR (High Dynamic Range) imaging. This paper describes a complete smart camera built around a standard off-the-shelf LDR (Low Dynamic Range) sensor and a Virtex-6 FPGA board. This smart camera, called HDR-ARtiSt (High Dynamic Range Adaptive Real-time Smart camera), is able to produce a real-time HDR live color video stream by recording and combining multiple acquisitions of the same scene while varying the exposure time. This technique appears to be one of the most appropriate and cheapest solutions for enhancing the dynamic range of real-life environments. HDR-ARtiSt embeds real-time multiple capture, HDR processing, data display and transfer of an HDR color video at full sensor resolution (1280 × 1024 pixels) and 60 frames per second. The main contributions of this work are: (1) a Multiple Exposure Control (MEC) dedicated to smart image capture with three alternating exposure times that are dynamically evaluated from frame to frame, (2) a Multi-streaming Memory Management Unit (MMMU) dedicated to the memory read/write operations of the three parallel video streams corresponding to the different exposure times, (3) HDR creation by combining the video streams using a specific hardware version of Debevec's technique, and (4) Global Tone Mapping (GTM) of the HDR scene for display on a standard LCD monitor.
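The per-pixel combination step that the abstract attributes to Debevec's technique can be sketched, under the simplifying assumption of a linear sensor response, as a hat-weighted average of radiance estimates z/Δt over the bracketed exposures (illustrative code, not the HDR-ARtiSt hardware implementation):

```python
def hat_weight(z, z_min=0, z_max=255):
    """Triangular ("hat") weight: trust mid-range pixel values most;
    under- and over-exposed samples get weight near zero."""
    mid = 0.5 * (z_min + z_max)
    return (z - z_min) if z <= mid else (z_max - z)

def radiance(values, exposure_times):
    """Weighted average of per-exposure radiance estimates z / dt for a
    single pixel, assuming a linear sensor response (Debevec's method
    additionally recovers the nonlinear response curve)."""
    num = sum(hat_weight(z) * (z / dt) for z, dt in zip(values, exposure_times))
    den = sum(hat_weight(z) for z in values)
    return num / den if den else 0.0

# One pixel captured at three exposure times, saturated in the longest:
values = [40, 80, 255]        # 8-bit pixel values
times = [0.01, 0.02, 0.08]    # exposure times in seconds
E = radiance(values, times)   # E ≈ 4000; the saturated sample gets weight 0
```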
Projecting 2D gene expression data into 3D and 4D space.
Gerth, Victor E; Katsuyama, Kaori; Snyder, Kevin A; Bowes, Jeff B; Kitayama, Atsushi; Ueno, Naoto; Vize, Peter D
2007-04-01
Video games typically generate virtual 3D objects by texture mapping an image onto a 3D polygonal frame. The feeling of movement is then achieved by mathematically simulating camera movement relative to the polygonal frame. We have built customized scripts that adapt video game authoring software to texture-map images of gene expression data onto b-spline-based embryo models. This approach, known as UV mapping, associates two-dimensional (U and V) coordinates within images with the three dimensions (X, Y, and Z) of a b-spline model. B-spline model frameworks were built either from confocal data or de novo extracted from 2D images, once again using video game authoring approaches. This system was then used to build 3D models of 182 genes expressed in developing Xenopus embryos and to implement these in a web-accessible database. Models can be viewed via simple Internet browsers and utilize OpenGL hardware acceleration via a Shockwave plugin. Not only does this database display static data in a dynamic and scalable manner, the UV mapping system also serves as a method to align different images to a common framework, an approach that may make high-throughput automated comparisons of gene expression patterns possible. Finally, video game systems also have elegant methods for handling movement, allowing biomechanical algorithms to drive the animation of models. With further development, these biomechanical techniques offer practical methods for generating virtual embryos that recapitulate morphogenesis.
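The UV-to-XYZ association described above can be illustrated with bilinear interpolation over a single four-corner surface patch, a simplified stand-in for the paper's b-spline models (function names here are hypothetical):

```python
def uv_to_xyz(u, v, p00, p10, p01, p11):
    """Bilinear UV mapping: (u, v) are 2D texture-image coordinates in
    [0, 1]; p00..p11 are the 3D corner points of one surface patch.
    A b-spline patch generalises this with higher-order blending."""
    def lerp(a, b, t):
        return tuple(a[i] + (b[i] - a[i]) * t for i in range(3))
    bottom = lerp(p00, p10, u)   # interpolate along the bottom edge
    top = lerp(p01, p11, u)      # interpolate along the top edge
    return lerp(bottom, top, v)  # blend between the two edges

# Map the centre of a texture image onto a unit patch lifted at one corner:
p = uv_to_xyz(0.5, 0.5, (0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 1))
# p == (0.5, 0.5, 0.25)
```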
Optimization of the polyplanar optical display electronics for a monochrome B-52 display
NASA Astrophysics Data System (ADS)
DeSanto, Leonard
1998-09-01
The Polyplanar Optical Display (POD) is a unique display screen which can be used with any projection source. The prototype ten-inch display is two inches thick and has a matte black face which allows for high-contrast images. The prototype being developed is a form, fit and functional replacement display for the B-52 aircraft, which uses a monochrome ten-inch display. In order to achieve a long lifetime, the new display uses a new 200 mW green solid-state laser (10,000 hr life) at 532 nm as its light source. To produce real-time video, the laser light is modulated by a Digital Light Processing (DLP™) chip manufactured by Texas Instruments (TI). To use the solid-state laser as the light source while fitting within the constraints of the B-52 display, the Digital Micromirror Device (DMD™) chip is operated remotely from the Texas Instruments circuit board. To achieve increased brightness, a monochrome digitizing interface was investigated. The operation of the DMD™ divorced from the light engine and the interfacing of the DMD™ board with the RS-170 video format specific to the B-52 aircraft are discussed, including the increased brightness of the monochrome digitizing interface. A brief description of the electronics required to drive the new 200 mW laser is also presented.
NASA Astrophysics Data System (ADS)
Bada, Adedayo; Alcaraz-Calero, Jose M.; Wang, Qi; Grecos, Christos
2014-05-01
This paper describes a comprehensive empirical performance evaluation of 3D video processing employing the physical/virtual architecture implemented in a cloud environment. Different virtualization technologies, virtual video cards and various 3D benchmark tools have been utilized in order to analyse the optimal performance in the context of 3D online gaming applications. This study highlights 3D video rendering performance under each type of hypervisor, along with other factors including network I/O, disk I/O and memory usage. Comparisons of these factors under well-known virtual display technologies such as VNC, Spice and Virtual 3D adaptors reveal the strengths and weaknesses of the various hypervisors with respect to 3D video rendering and streaming.
Low-cost telepresence for collaborative virtual environments.
Rhee, Seon-Min; Ziegler, Remo; Park, Jiyoung; Naef, Martin; Gross, Markus; Kim, Myoung-Hee
2007-01-01
We present a novel low-cost method for visual communication and telepresence in a CAVE-like environment, relying on 2D stereo-based video avatars. The system combines a selection of proven efficient algorithms and approximations in a unique way, resulting in a convincing stereoscopic real-time representation of a remote user acquired in a spatially immersive display. The system was designed to extend existing projection systems with acquisition capabilities requiring minimal hardware modifications and cost. The system uses infrared-based image segmentation to enable concurrent acquisition and projection in an immersive environment without a static background. The system consists of two color cameras and two additional b/w cameras used for segmentation in the near-IR spectrum. There is no need for special optics as the mask and color image are merged using image warping based on a depth estimation. The resulting stereo image stream is compressed, streamed across a network, and displayed as a frame-sequential stereo texture on a billboard in the remote virtual environment.
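The IR-based segmentation step can be sketched as a threshold on the near-IR image followed by masking of the color image. This is a toy illustration with nested lists standing in for camera frames; the actual system additionally merges mask and color via depth-based image warping:

```python
def segment(ir_image, threshold):
    """Binary foreground mask from a near-IR intensity image: the
    IR-illuminated user is brighter than the projection background."""
    return [[1 if px > threshold else 0 for px in row] for row in ir_image]

def apply_mask(color_image, mask, background=(0, 0, 0)):
    """Keep color pixels where the mask is set; blank the background
    so only the user's silhouette is streamed to the remote site."""
    return [[c if m else background for c, m in zip(crow, mrow)]
            for crow, mrow in zip(color_image, mask)]

# 2x2 toy frames: IR intensities and corresponding RGB pixels.
ir = [[10, 200], [220, 15]]
color = [[(1, 2, 3), (4, 5, 6)], [(7, 8, 9), (1, 1, 1)]]
mask = segment(ir, 128)       # [[0, 1], [1, 0]]
fg = apply_mask(color, mask)  # background pixels become (0, 0, 0)
```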
AlliedSignal driver's viewer enhancement (DVE) for paramilitary and commercial applications
NASA Astrophysics Data System (ADS)
Emanuel, Michael; Caron, Hubert; Kovacevic, Branislav; Faina-Cherkaoui, Marcela; Wrobel, Leslie; Turcotte, Gilles
1999-07-01
AlliedSignal Driver's Viewer Enhancement (DVE) system is a thermal imager using a 320 × 240 uncooled microbolometer array. This high-performance system was initially developed for military combat and tactical wheeled vehicles. It features a very small sensor head remotely mounted from the display, control and processing module. The sensor head has a modular design and is being adapted to various commercial applications such as truck and car driving aids, using specifically designed low-cost optics. Tradeoffs in the system design, system features and test results are discussed in this paper. A short video shows footage of the DVE system while driving at night.
Improving School Lighting for Video Display Units.
ERIC Educational Resources Information Center
Parker-Jenkins, Marie; Parker-Jenkins, William
1985-01-01
Provides information to identify and implement the key characteristics which contribute to an efficient and comfortable visual display unit (VDU) lighting installation. Areas addressed include VDU lighting requirements, glare, lighting controls, VDU environment, lighting retrofit, optical filters, and lighting recommendations. A checklist to…
When less is best: female brown-headed cowbirds prefer less intense male displays.
O'Loghlen, Adrian L; Rothstein, Stephen I
2012-01-01
Sexual selection theory predicts that females should prefer males with the most intense courtship displays. However, wing-spread song displays that male brown-headed cowbirds (Molothrus ater) direct at females are generally less intense than versions of this display that are directed at other males. Because male-directed displays are used in aggressive signaling, we hypothesized that females should prefer lower intensity performances of this display. To test this hypothesis, we played audiovisual recordings showing the same males performing both high intensity male-directed and low intensity female-directed displays to females (N = 8) and recorded the females' copulation solicitation display (CSD) responses. All eight females responded strongly to both categories of playbacks but were more sexually stimulated by the low intensity female-directed displays. Because each pair of high and low intensity playback videos had the exact same audio track, the divergent responses of females must have been based on differences in the visual content of the displays shown in the videos. Preferences female cowbirds show in acoustic CSD studies are correlated with mate choice in field and captivity studies and this is also likely to be true for preferences elucidated by playback of audiovisual displays. Female preferences for low intensity female-directed displays may explain why male cowbirds rarely use high intensity displays when signaling to females. Repetitive high intensity displays may demonstrate a male's current condition and explain why these displays are used in male-male interactions which can escalate into physical fights in which males in poorer condition could be injured or killed. This is the first study in songbirds to use audiovisual playbacks to assess how female sexual behavior varies in response to variation in a male visual display.
Modern Display Technologies for Airborne Applications.
1983-04-01
the case of LED head-down direct view displays, this requires that special attention be paid to the optical filtering, the electrical drive/address...effectively attenuates the LED specular reflectance component, the colour and neutral density filtering attenuate the diffuse component and the...filter techniques are planned for use with video, multi-colour and advanced versions of numeric, alphanumeric and graphic displays; this technique
Motmot, an open-source toolkit for realtime video acquisition and analysis.
Straw, Andrew D; Dickinson, Michael H
2009-07-22
Video cameras sense passively from a distance, offer a rich information stream, and provide intuitively meaningful raw data. Camera-based imaging has thus proven critical for many advances in neuroscience and biology, with applications ranging from cellular imaging of fluorescent dyes to tracking of whole-animal behavior at ecologically relevant spatial scales. Here we present 'Motmot': an open-source software suite for acquiring, displaying, saving, and analyzing digital video in real-time. At the highest level, Motmot is written in the Python computer language. The large amounts of data produced by digital cameras are handled by low-level, optimized functions, usually written in C. This high-level/low-level partitioning and use of select external libraries allow Motmot, with only modest complexity, to perform well as a core technology for many high-performance imaging tasks. In its current form, Motmot allows for: (1) image acquisition from a variety of camera interfaces (package motmot.cam_iface), (2) the display of these images with minimal latency and computer resources using wxPython and OpenGL (package motmot.wxglvideo), (3) saving images with no compression in a single-pass, low-CPU-use format (package motmot.FlyMovieFormat), (4) a pluggable framework for custom analysis of images in realtime and (5) firmware for an inexpensive USB device to synchronize image acquisition across multiple cameras, with analog input, or with other hardware devices (package motmot.fview_ext_trig). These capabilities are brought together in a graphical user interface, called 'FView', allowing an end user to easily view and save digital video without writing any code. One plugin for FView, 'FlyTrax', which tracks the movement of fruit flies in real-time, is included with Motmot, and is described to illustrate the capabilities of FView. Motmot enables realtime image processing and display using the Python computer language. 
In addition to the provided complete applications, the architecture allows the user to write relatively simple plugins, which can accomplish a variety of computer vision tasks and be integrated within larger software systems. The software is available at http://code.astraw.com/projects/motmot.
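A pluggable realtime-analysis framework of the kind Motmot provides can be sketched as follows; the class and method names here are illustrative only, not Motmot's actual plugin API (its real packages are named in the abstract, e.g. motmot.fview_ext_trig):

```python
class FramePipeline:
    """Minimal sketch of a pluggable frame-analysis chain in the spirit
    of Motmot's FView plugins (hypothetical names, not Motmot's API)."""

    def __init__(self):
        self.plugins = []

    def register(self, fn):
        """Register an analysis callable; usable as a decorator."""
        self.plugins.append(fn)
        return fn

    def process(self, frame, timestamp):
        """Run every registered plugin on one frame and collect results."""
        return {fn.__name__: fn(frame, timestamp) for fn in self.plugins}

pipeline = FramePipeline()

@pipeline.register
def mean_brightness(frame, timestamp):
    """Example plugin: average pixel value of a 2D grayscale frame."""
    pixels = [px for row in frame for px in row]
    return sum(pixels) / len(pixels)

frame = [[0, 255], [128, 1]]
out = pipeline.process(frame, timestamp=0.0)
# out["mean_brightness"] == 96.0
```

A real plugin would run inside the camera acquisition loop; the decorator-based registration keeps user analysis code decoupled from the capture machinery, mirroring Motmot's high-level/low-level partitioning.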
Near-real-time biplanar fluoroscopic tracking system for the video tumor fighter
NASA Astrophysics Data System (ADS)
Lawson, Michael A.; Wika, Kevin G.; Gilles, George T.; Ritter, Rogers C.
1991-06-01
We have developed software capable of the three-dimensional tracking of objects in the brain volume, and the subsequent overlaying of an image of the object onto previously obtained MR or CT scans. This software has been developed for use with the Magnetic Stereotaxis System (MSS), also called the 'Video Tumor Fighter' (VTF). The software was written for a Sun 4/110 SPARC workstation with an ANDROX ICS-400 image processing card installed to manage this task. At present, the system uses input from two orthogonally-oriented, visible-light cameras and a simulated scene to determine the three-dimensional position of the object of interest. The coordinates are then transformed into MR or CT coordinates and an image of the object is displayed in the appropriate intersecting MR slice on a computer screen. This paper describes the tracking algorithm and discusses how it was implemented in software. The system's hardware is also described. The limitations of the present system are discussed and plans for incorporating bi-planar, x-ray fluoroscopy are presented.
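With two orthogonal views, each camera constrains two of the three coordinates, so a 3D position follows directly. A minimal sketch under an idealized orthographic-camera assumption (the actual system calibrates real cameras and later biplanar fluoroscopy):

```python
def triangulate_orthogonal(front_xy, side_zy):
    """Recover a 3D point from two ideal orthographic cameras mounted
    at right angles: the front view measures (x, y), the side view
    measures (z, y). The shared y coordinate is averaged to damp
    measurement noise between the two views."""
    x, y1 = front_xy
    z, y2 = side_zy
    return (x, 0.5 * (y1 + y2), z)

# Front camera sees the object at (3.0, 4.1); side camera at (7.0, 3.9):
p = triangulate_orthogonal((3.0, 4.1), (7.0, 3.9))
# p == (3.0, 4.0, 7.0)
```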
NASA Technical Reports Server (NTRS)
1984-01-01
A key tool of Redken Laboratories' new line of hair styling appliances is an instrument called a thermograph, a heat-sensing device originally developed by Hughes Aircraft Co. under U.S. Army and NASA funding. Redken Laboratories bought one of the early models of the Hughes Probeye Thermal Video System, or TVS, which detects the various degrees of heat emitted by an object and displays the results in color on a TV monitor, with colors representing the different temperatures detected.