Using ARINC 818 Avionics Digital Video Bus (ADVB) for military displays
NASA Astrophysics Data System (ADS)
Alexander, Jon; Keller, Tim
2007-04-01
ARINC 818 Avionics Digital Video Bus (ADVB) is a new digital video interface and protocol standard developed especially for high-bandwidth uncompressed digital video. The first draft of the standard, released in January 2007, has been advanced by ARINC and the aerospace community to meet commercial aviation's acute need for higher-performance digital video. This paper analyzes ARINC 818 for use in military display systems found in avionics, helicopters, and ground vehicles. The flexibility of ARINC 818 for the diverse resolutions, grayscales, pixel formats, and frame rates of military displays is analyzed, as is its suitability to meet military video system requirements for bandwidth, latency, and reliability. Implementation issues relevant to military displays are presented.
Live HDR video streaming on commodity hardware
NASA Astrophysics Data System (ADS)
McNamee, Joshua; Hatchett, Jonathan; Debattista, Kurt; Chalmers, Alan
2015-09-01
High Dynamic Range (HDR) video provides a step change in viewing experience, for example the ability to clearly see the soccer ball when it is kicked from the shadow of the stadium into sunshine. To achieve the full potential of HDR video, so-called true HDR, it is crucial that all the dynamic range that was captured is delivered to the display device and tone mapping is confined only to the display. Furthermore, to ensure widespread uptake of HDR imaging, it should be low cost and available on commodity hardware. This paper describes an end-to-end HDR pipeline for capturing, encoding and streaming high-definition HDR video in real-time using off-the-shelf components. All the lighting that is captured by HDR-enabled consumer cameras is delivered via the pipeline to any display, including HDR displays and even mobile devices with minimum latency. The system thus provides an integrated HDR video pipeline that includes everything from capture to post-production, archival and storage, compression, transmission, and display.
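The pipeline's central claim is that all captured dynamic range travels end-to-end and tone mapping is confined to the display. As a purely illustrative sketch (not the paper's pipeline), a display-side global tone mapper in the style of Reinhard's operator might look like the following; the `key` mid-grey target and the synthetic test frame are assumptions:

```python
import numpy as np

def reinhard_tonemap(hdr, key=0.18, eps=1e-6):
    """Global Reinhard-style tone mapping: scale scene luminance so its
    geometric mean sits at a mid-grey 'key', then compress with L/(1+L).
    Illustrative only; not the paper's method."""
    # Relative luminance from linear RGB (BT.709 weights).
    lum = 0.2126 * hdr[..., 0] + 0.7152 * hdr[..., 1] + 0.0722 * hdr[..., 2]
    log_avg = np.exp(np.mean(np.log(lum + eps)))   # geometric mean luminance
    scaled = key * lum / log_avg
    mapped = scaled / (1.0 + scaled)               # compress into [0, 1)
    ratio = mapped / (lum + eps)                   # per-pixel luminance ratio
    return np.clip(hdr * ratio[..., None], 0.0, 1.0)

# A synthetic frame spanning many stops collapses into the displayable range.
frame = np.random.uniform(0.001, 260000.0, size=(4, 4, 3))
ldr = reinhard_tonemap(frame)
```

Because mapping happens last, the same HDR stream can feed an HDR panel directly while a legacy display applies an operator like this locally.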
Display device-adapted video quality-of-experience assessment
NASA Astrophysics Data System (ADS)
Rehman, Abdul; Zeng, Kai; Wang, Zhou
2015-03-01
Today's viewers consume video content from a variety of connected devices, including smart phones, tablets, notebooks, TVs, and PCs. This imposes significant challenges for managing video traffic efficiently to ensure an acceptable quality-of-experience (QoE) for the end users, as the perceptual quality of video content strongly depends on the properties of the display device and the viewing conditions. State-of-the-art full-reference objective video quality assessment algorithms do not take into account the combined impact of display device properties, viewing conditions, and video resolution while performing video quality assessment. We performed a subjective study in order to understand the impact of the aforementioned factors on perceptual video QoE. We also propose a full-reference video QoE measure, named SSIMplus, that provides real-time prediction of the perceptual quality of a video based on human visual system behaviors, video content characteristics (such as spatial and temporal complexity, and video resolution), display device properties (such as screen size, resolution, and brightness), and viewing conditions (such as viewing distance and angle). Experimental results have shown that the proposed algorithm outperforms state-of-the-art video quality measures in terms of accuracy and speed.
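SSIMplus itself is not specified in this abstract, so the following is only a hypothetical illustration of the device-adaptation idea: a base full-reference quality score is adjusted by how many pixels per degree the display subtends at the given viewing distance, on the premise that distortions finer than the eye can resolve matter less. The function name, parameters, and the 60 ppd threshold are all assumptions, not the authors' model:

```python
import math

def device_adapted_score(base_score, screen_h_px, screen_h_m, view_dist_m):
    """Hypothetical device adaptation (0-100 scale): distortions are
    attenuated when the display packs more pixels per degree of visual
    angle than the eye can resolve (~60 ppd, an assumed threshold)."""
    # Vertical visual angle subtended by the screen, in degrees.
    deg = 2 * math.degrees(math.atan(screen_h_m / (2 * view_dist_m)))
    ppd = screen_h_px / deg                    # pixels per degree
    visibility = min(1.0, 60.0 / ppd)          # <1 when pixels are sub-resolvable
    return 100.0 - (100.0 - base_score) * visibility

# The same base score reads higher on a dense phone screen at arm's length
# than on a coarse panel viewed up close (all numbers illustrative).
phone = device_adapted_score(70.0, 2400, 0.14, 0.35)
coarse = device_adapted_score(70.0, 1080, 0.60, 1.00)
```

Under this toy model, identical encoded video earns different QoE scores on different devices, which is the qualitative behavior the paper argues a practical QoE measure must capture.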
Microcomputer Selection Guide for Construction Field Offices. Revision.
1984-09-01
the system, and the monitor displays information on a video display screen. Microcomputer systems today are available in a variety of configurations...background. White-on-black monitors reportedly cause more eye fatigue, while amber is reported to cause the least eye fatigue. Reverse video ...The video should be an amber or green display with a resolution of at least 640 x 200 dots per in. Additional features of the monitor include an
1983-12-01
storage included room for not only the video display incompatibilities which have been plaguing the terminal (VDT), but also for the disk drive, the...once at system implementation time. This sample Video Display Terminal (VDT) screen shows the Appendix N Code...override the value with a different data value. Video Display Terminal (VDT): A cathode ray tube or gas plasma tube display screen terminal that allows
Spatial constraints of stereopsis in video displays
NASA Technical Reports Server (NTRS)
Schor, Clifton
1989-01-01
Recent developments in video technology, such as liquid crystal displays and shutters, have made it feasible to incorporate stereoscopic depth into the 3-D representations on 2-D displays. However, depth has already been vividly portrayed in video displays without stereopsis using the classical artists' depth cues described by Helmholtz (1866) and the dynamic depth cues described in detail by Ittelson (1952). Successful static depth cues include overlap, size, linear perspective, texture gradients, and shading. Effective dynamic cues include looming (Regan and Beverly, 1979) and motion parallax (Rogers and Graham, 1982). Stereoscopic depth is superior to the monocular distance cues under certain circumstances. It is most useful at portraying depth intervals as small as 5 to 10 arc secs. For this reason it is extremely useful in user-video interactions such as telepresence. Objects can be manipulated in 3-D space, for example, while a person who controls the operations views a virtual image of the manipulated object on a remote 2-D video display. Stereopsis also provides structure and form information in camouflaged surfaces such as tree foliage. Motion parallax also reveals form; however, without other monocular cues such as overlap, motion parallax can yield an ambiguous perception. For example, a turning sphere, portrayed as solid by parallax, can appear to rotate either leftward or rightward; only one direction of rotation is perceived when stereo-depth is included. If the scene is static, then stereopsis is the principal cue for revealing the camouflaged surface structure. Finally, dynamic stereopsis provides information about the direction of motion in depth (Regan and Beverly, 1979). Clearly there are many spatial constraints, including spatial frequency content, retinal eccentricity, exposure duration, target spacing, and disparity gradient, which, when properly adjusted, can greatly enhance stereodepth in video displays.
Video display engineering and optimization system
NASA Technical Reports Server (NTRS)
Larimer, James (Inventor)
1997-01-01
A video display engineering and optimization CAD simulation system for designing an LCD display integrates models of a display device circuit, electro-optics, surface geometry, and physiological optics to model the system performance of a display. This CAD system permits system performance and design trade-offs to be evaluated without constructing a physical prototype of the device. The system includes a series of modules which permit analysis of design trade-offs in terms of their visual impact on a viewer looking at a display.
Stockdale, Laura; Coyne, Sarah M
2018-01-01
The Internet Gaming Disorder Scale (IGDS) is a widely used measure of video game addiction, a pathology affecting a small percentage of all people who play video games. Emerging adult males are significantly more likely to be video game addicts. Few researchers have examined how people who qualify as video game addicts based on the IGDS compare to controls matched on age, gender, race, and marital status. The current study compared IGDS video game addicts to matched non-addicts in terms of their mental, physical, and social-emotional health, using self-report survey methods. Addicts had poorer mental health and cognitive functioning, including poorer impulse control and more ADHD symptoms, compared to controls. Additionally, addicts displayed increased emotional difficulties, including increased depression and anxiety, felt more socially isolated, and were more likely to display symptoms of pathological internet pornography use. Female video game addicts were at unique risk for negative outcomes. The sample for this study was undergraduate college students, and self-report measures were used. Participants who met the IGDS criteria for video game addiction displayed poorer emotional, physical, mental, and social health, adding to the growing evidence that video game addictions are a valid phenomenon. Copyright © 2017 Elsevier B.V. All rights reserved.
Video Display Terminals: Radiation Issues.
ERIC Educational Resources Information Center
Murray, William E.
1985-01-01
Discusses information gathered in past few years related to health effects of video display terminals (VDTs) with particular emphasis given to issues raised by VDT users. Topics covered include radiation emissions, health concerns, radiation surveys, occupational radiation exposure standards, and long-term risks. (17 references) (EJS)
ERIC Educational Resources Information Center
Walsh, Janet
1982-01-01
Discusses issues related to possible health hazards associated with viewing video display terminals. Includes some findings of the 1979 NIOSH report on Potential Hazards of Video Display Terminals indicating level of radiation emitted is low and providing recommendations related to glare and back pain/muscular fatigue problems. (JN)
47 CFR 79.109 - Activating accessibility features.
Code of Federal Regulations, 2014 CFR
2014-10-01
... ACCESSIBILITY OF VIDEO PROGRAMMING Apparatus § 79.109 Activating accessibility features. (a) Requirements... video programming transmitted in digital format simultaneously with sound, including apparatus designed to receive or display video programming transmitted in digital format using Internet protocol, with...
Compression of stereoscopic video using MPEG-2
NASA Astrophysics Data System (ADS)
Puri, A.; Kollarits, Richard V.; Haskell, Barry G.
1995-10-01
Many current as well as emerging applications in areas of entertainment, remote operations, manufacturing industry and medicine can benefit from the depth perception offered by stereoscopic video systems, which employ two views of a scene imaged under the constraints imposed by the human visual system. Among the many challenges to be overcome for practical realization and widespread use of 3D/stereoscopic systems are good 3D displays and efficient techniques for digital compression of enormous amounts of data while maintaining compatibility with normal video decoding and display systems. After a brief introduction to the basics of 3D/stereo, including issues of depth perception, stereoscopic 3D displays and terminology in stereoscopic imaging and display, we present an overview of tools in the MPEG-2 video standard that are relevant to our discussion on compression of stereoscopic video, which is the main topic of this paper. Next, we outline the various approaches for compression of stereoscopic video and then focus on compatible stereoscopic video coding using MPEG-2 Temporal scalability concepts. Compatible coding employing two different types of prediction structures becomes potentially possible: disparity-compensated prediction and combined disparity- and motion-compensated prediction. To further improve coding performance and display quality, preprocessing for reducing mismatch between the two views forming the stereoscopic video is considered. Results of simulations performed on stereoscopic video of normal TV resolution are then reported, comparing the performance of the two prediction structures with the simulcast solution. It is found that combined disparity- and motion-compensated prediction offers the best performance. Results indicate that compression of both views of stereoscopic video of normal TV resolution appears feasible in a total of 6 to 8 Mbit/s. We then discuss multi-viewpoint video, a generalization of stereoscopic video. Finally, we describe ongoing efforts within MPEG-2 to define a profile for stereoscopic video coding, as well as the promise of MPEG-4 in addressing coding of multi-viewpoint video.
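In the combined prediction structure described above, each block of the enhancement-view frame can be predicted either by disparity compensation from the other view or by motion compensation from the previous frame of the same view. The sketch below illustrates only that per-block selection by SAD cost; the data layout and names are hypothetical, and this is not MPEG-2 bitstream syntax:

```python
def choose_predictor(block, disparity_pred, motion_pred):
    """Pick the lower-SAD candidate: disparity-compensated prediction
    (from the other view) vs. motion-compensated prediction (from the
    previous frame of the same view). Blocks are flat lists of samples."""
    sad = lambda pred: sum(abs(a - b) for a, b in zip(block, pred))
    if sad(disparity_pred) <= sad(motion_pred):
        return 'disparity', sad(disparity_pred)
    return 'motion', sad(motion_pred)
```

A real encoder would also weigh the signalling cost of each mode; the paper's finding that combined prediction performs best reflects exactly this freedom to pick the cheaper reference per block.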
Flat-panel display solutions for ground-environment military displays (Invited Paper)
NASA Astrophysics Data System (ADS)
Thomas, J., II; Roach, R.
2005-05-01
Displays for military vehicles have very distinct operational and cost requirements that differ from other military applications. These requirements demand that display suppliers to Army and Marine ground environments provide low-cost equipment that is capable of operation across environmental extremes. Inevitably, COTS components form the foundation of these "affordable" display solutions. This paper will outline the major display requirements and review the options that satisfy conflicting and difficult operational demands, using newly developed equipment as an example. Recently, a new supplier was selected for the Drivers Vision Enhancer (DVE) equipment, including the Display Control Module (DCM). The paper will describe the DVE and the development of a new DCM solution. The DVE programme, with several thousand units presently in service and operational in conflicts such as "Operation Iraqi Freedom", represents a critical balance between cost and performance. We shall describe design considerations that include selection of COTS sources, the need to minimise display modification, video interfaces, power interfaces, operator interfaces, and new provisions to optimise displayed video content.
47 CFR 79.107 - User interfaces provided by digital apparatus.
Code of Federal Regulations, 2014 CFR
2014-10-01
... SERVICES ACCESSIBILITY OF VIDEO PROGRAMMING Apparatus § 79.107 User interfaces provided by digital... States and designed to receive or play back video programming transmitted in digital format simultaneously with sound, including apparatus designed to receive or display video programming transmitted in...
Wrap-Around Out-the-Window Sensor Fusion System
NASA Technical Reports Server (NTRS)
Fox, Jeffrey; Boe, Eric A.; Delgado, Francisco; Secor, James B.; Clark, Michael R.; Ehlinger, Kevin D.; Abernathy, Michael F.
2009-01-01
The Advanced Cockpit Evaluation System (ACES) includes communication, computing, and display subsystems, mounted in a van, that synthesize out-the-window views to approximate the views of the outside world as they would be seen from the cockpit of a crewed spacecraft or aircraft, or from a remotely operated ground vehicle or UAV (unmanned aerial vehicle). The system includes five flat-panel display units arranged approximately in a semicircle around an operator, like cockpit windows. The scene displayed on each panel represents the view through the corresponding cockpit window. Each display unit is driven by a personal computer equipped with a video-capture card that accepts live input from any of a variety of sensors (typically, visible and/or infrared video cameras). Software running in the computers blends the live video images with synthetic images that could be generated, for example, from heads-up-display outputs, waypoints, corridors, or satellite photographs of the same geographic region. Data from a Global Positioning System receiver and an inertial navigation system aboard the remote vehicle are used by the ACES software to keep the synthetic and live views in registration. If the live image were to fail, the synthetic scenes could still be displayed to maintain situational awareness.
Virtual navigation performance: the relationship to field of view and prior video gaming experience.
Richardson, Anthony E; Collaer, Marcia L
2011-04-01
Two experiments examined whether learning a virtual environment was influenced by field of view and how it related to prior video gaming experience. In the first experiment, participants (42 men, 39 women; M age = 19.5 yr., SD = 1.8) performed worse on a spatial orientation task displayed with a narrow field of view in comparison to medium and wide field-of-view displays. Counter to initial hypotheses, wide field-of-view displays did not improve performance over medium displays, and this was replicated in a second experiment (30 men, 30 women; M age = 20.4 yr., SD = 1.9) presenting a more complex learning environment. Self-reported video gaming experience correlated with several spatial tasks: virtual environment pointing and tests of Judgment of Line Angle and Position, mental rotation, and Useful Field of View (with correlations between .31 and .45). When prior video gaming experience was included as a covariate, sex differences in spatial tasks disappeared.
Carlson, Jay; Kowalczuk, Jędrzej; Psota, Eric; Pérez, Lance C
2012-01-01
Robotic surgical platforms require vision feedback systems, which often consist of low-resolution, expensive, single-imager analog cameras. These systems are retooled for 3D display by simply doubling the cameras and outboard control units. Here, a fully-integrated digital stereoscopic video camera employing high-definition sensors and a class-compliant USB video interface is presented. This system can be used with low-cost PC hardware and consumer-level 3D displays for tele-medical surgical applications including military medical support, disaster relief, and space exploration.
Standardized access, display, and retrieval of medical video
NASA Astrophysics Data System (ADS)
Bellaire, Gunter; Steines, Daniel; Graschew, Georgi; Thiel, Andreas; Bernarding, Johannes; Tolxdorff, Thomas; Schlag, Peter M.
1999-05-01
The system presented here enhances documentation and data-secured, second-opinion facilities by integrating video sequences into DICOM 3.0. We present an implementation for a medical video server extended by a DICOM interface. Security mechanisms conforming with DICOM are integrated to enable secure internet access. Digital video documents of diagnostic and therapeutic procedures should be examined regarding the clip length and size necessary for second opinion and manageable with today's hardware. Image sources relevant for this paper include the 3D laparoscope, 3D surgical microscope, 3D open surgery camera, synthetic video, and monoscopic endoscopes. The global DICOM video concept and three special workplaces for distinct applications are described. Additionally, an approach is presented to analyze the motion of the endoscopic camera for future automatic video cutting. Digital stereoscopic video sequences (DSVS) are especially in demand for surgery. Therefore DSVS are also integrated into the DICOM video concept. Results are presented describing the suitability of stereoscopic display techniques for the operating room.
Motion sickness, console video games, and head-mounted displays.
Merhi, Omar; Faugloire, Elise; Flanagan, Moira; Stoffregen, Thomas A
2007-10-01
We evaluated the nauseogenic properties of commercial console video games (i.e., games that are sold to the public) when presented through a head-mounted display. Anecdotal reports suggest that motion sickness may occur among players of contemporary commercial console video games. Participants played standard console video games using an Xbox game system. We varied the participants' posture (standing vs. sitting) and the game (two Xbox games). Participants played for up to 50 min and were asked to discontinue if they experienced any symptoms of motion sickness. Sickness occurred in all conditions, but it was more common during standing. During seated play there were significant differences in head motion between sick and well participants before the onset of motion sickness. The results indicate that commercial console video game systems can induce motion sickness when presented via a head-mounted display and support the hypothesis that motion sickness is preceded by instability in the control of seated posture. Potential applications of this research include changes in the design of console video games and recommendations for how such systems should be used.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-01-20
... INTERNATIONAL TRADE COMMISSION [DN 2871] Certain Video Displays and Products Using and Containing... Trade Commission has received a complaint entitled In Re Certain Video Displays and Products Using and... for importation, and the sale within the United States after importation of certain video displays and...
Natural 3D content on glasses-free light-field 3D cinema
NASA Astrophysics Data System (ADS)
Balogh, Tibor; Nagy, Zsolt; Kovács, Péter Tamás.; Adhikarla, Vamsi K.
2013-03-01
This paper presents a complete framework for capturing, processing and displaying free viewpoint video on a large-scale immersive light-field display. We present a combined hardware-software solution to visualize free viewpoint 3D video on a cinema-sized screen. The new glasses-free 3D projection technology can support a larger audience than existing autostereoscopic displays. We introduce and describe our new display system, including optical and mechanical design considerations, the capturing system and render cluster for producing the 3D content, and the various software modules driving the system. The indigenous display is the first of its kind, equipped with front-projection light-field HoloVizio technology, controlling up to 63 MP. It has all the advantages of previous light-field displays and, in addition, allows a more flexible arrangement with a larger screen size, matching cinema or meeting room geometries, yet is simpler to set up. The software system makes it possible to show 3D applications in real time, besides natural content captured from dense camera arrangements as well as from sparse cameras covering a wider baseline. Our software system, on the GPU-accelerated render cluster, can also visualize pre-recorded Multi-view Video plus Depth (MVD4) videos on this light-field glasses-free cinema system, interpolating and extrapolating missing views.
Mobile Vehicle Teleoperated Over Wireless IP
2007-06-13
VideoLAN software suite. The VLC media player portion of this suite handles network streaming of video, as well as the receipt and display of the video...is found in appendix C.7. Video Display: The video feed is displayed for the operator using VLC, opened independently from the control sending program...This gives the operator the most choice in how to configure the display. To connect VLC to the feed all you need is the IP address from the Java
Presentation of Information on Visual Displays.
ERIC Educational Resources Information Center
Pettersson, Rune
This discussion of factors involved in the presentation of text, numeric data, and/or visuals using video display devices describes in some detail the following types of presentation: (1) visual displays, with attention to additive color combination; measurements, including luminance, radiance, brightness, and lightness; and standards, with…
77 FR 9964 - Certain Video Displays and Products Using and Containing Same
Federal Register 2010, 2011, 2012, 2013, 2014
2012-02-21
... INTERNATIONAL TRADE COMMISSION [Investigation No. 337-TA-828] Certain Video Displays and Products... importation, and the sale within the United States after importation of certain video displays and products... States, the sale for importation, or the sale within the United States after importation of certain video...
Novel use of video glasses during binocular microscopy in the otolaryngology clinic.
Fastenberg, Judd H; Fang, Christina H; Akbar, Nadeem A; Abuzeid, Waleed M; Moskowitz, Howard S
2018-06-06
The development of portable, high-resolution video displays such as video glasses allows clinicians the opportunity to offer patients an increased ability to visualize aspects of their physical examination in an ergonomic and cost-effective manner. The objective of this pilot study is to trial the use of video glasses for patients undergoing binocular microscopy as well as to better understand some of the potential benefits of the enhanced display option. This study comprised a single treatment group. Patients seen in the otolaryngology clinic who required binocular microscopy for diagnosis and treatment were recruited. All patients wore video glasses during their otoscopic examination. An additional cohort of patients who required binocular microscopy were also recruited, but did not use the video glasses during their examination. Patients subsequently completed a 10-point Likert scale survey that assessed their comfort, anxiety, and satisfaction with the examination as well as their general understanding of their otologic condition. A total of 29 patients who used the video glasses were recruited, including those with normal examinations, cerumen impaction, or chronic ear disease. Based on the survey results, patients reported a high level of satisfaction and comfort during their exam with video glasses. Patients who used the video glasses did not exhibit any increased anxiety with their examination. Patients reported that video glasses improved their understanding and they expressed a desire to wear the glasses again during repeat exams. This pilot study demonstrates that video glasses may represent a viable alternative display option in the otolaryngology clinic. The results show that the use of video glasses is associated with high patient comfort and satisfaction during binocular microscopy.
Further investigation is warranted to determine the potential for this display option in other facets of patient care as well as in expanding patient understanding of disease and anatomy. Copyright © 2018 Elsevier Inc. All rights reserved.
Author Correction: Single-molecule imaging by optical absorption
NASA Astrophysics Data System (ADS)
Celebrano, Michele; Kukura, Philipp; Renn, Alois; Sandoghdar, Vahid
2018-05-01
In the Supplementary Video initially published with this Letter, the right-hand panel displaying the fluorescence emission was not showing on some video players due to a formatting problem; this has now been fixed. The video has also now been amended to include colour scale bars for both the left- (differential transmission signal) and right-hand panels.
3D video coding: an overview of present and upcoming standards
NASA Astrophysics Data System (ADS)
Merkle, Philipp; Müller, Karsten; Wiegand, Thomas
2010-07-01
An overview of existing and upcoming 3D video coding standards is given. Various different 3D video formats are available, each with individual pros and cons. The 3D video formats can be separated into two classes: video-only formats (such as stereo and multiview video) and depth-enhanced formats (such as video plus depth and multiview video plus depth). Since all these formats consist of at least two video sequences and possibly additional depth data, efficient compression is essential for the success of 3D video applications and technologies. For the video-only formats, the H.264 family of coding standards already provides efficient and widely established compression algorithms: H.264/AVC simulcast, the H.264/AVC stereo SEI message, and H.264/MVC. For the depth-enhanced formats, standardized coding algorithms are currently being developed. New and specially adapted coding approaches are necessary, as the depth or disparity information included in these formats has significantly different characteristics than video and is not displayed directly, but used for rendering. Motivated by evolving market needs, MPEG has started an activity to develop a generic 3D video standard within the 3DVC ad-hoc group. Key features of the standard are efficient and flexible compression of depth-enhanced 3D video representations and decoupling of content creation and display requirements.
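Depth-enhanced formats are not displayed directly but used for rendering, as the abstract notes. A toy sketch of depth-image-based rendering for one scanline follows, with a forward warp and a naive hole fill; real renderers are considerably more involved, and the layout here (flat lists, integer disparities) is an assumption for illustration:

```python
def render_view(row, disparity, shift=1.0):
    """Forward-warp one scanline of a video-plus-depth frame to a virtual
    viewpoint: each pixel moves horizontally by shift * disparity."""
    out = [None] * len(row)
    # Paint far-to-near so nearer pixels (larger disparity) win occlusions.
    for x in sorted(range(len(row)), key=lambda x: disparity[x]):
        nx = x + round(shift * disparity[x])
        if 0 <= nx < len(row):
            out[nx] = row[x]
    # Fill disocclusion holes from the nearest rendered left neighbour.
    for x in range(len(row)):
        if out[x] is None:
            out[x] = out[x - 1] if x > 0 and out[x - 1] is not None else row[x]
    return out
```

The hole-filling step is exactly where depth data behaves unlike video: warping exposes regions with no source pixels, which is why depth-enhanced coding must preserve sharp depth edges rather than optimize perceptual fidelity.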
Display Sharing: An Alternative Paradigm
NASA Technical Reports Server (NTRS)
Brown, Michael A.
2010-01-01
The current Johnson Space Center (JSC) Mission Control Center (MCC) Video Transport System (VTS) provides flight controllers and management the ability to meld raw video from various sources with telemetry to improve situational awareness. However, maintaining a separate infrastructure for video delivery and integration of video content with data adds significant complexity and cost to the system. When considering alternative architectures for a VTS, the current system's ability to share specific computer displays in their entirety to other locations, such as large projector systems, flight control rooms, and back support rooms throughout the facilities and centers, must be incorporated into any new architecture. Internet Protocol (IP)-based systems also support video delivery and integration. IP-based systems generally have an advantage in terms of cost and maintainability. Although IP-based systems are versatile, the task of sharing a computer display from one workstation to another can be time consuming for an end-user and inconvenient to administer at a system level. The objective of this paper is to present a prototype display sharing enterprise solution. Display sharing is a system which delivers image sharing across the LAN while simultaneously managing bandwidth, supporting encryption, enabling recovery and resynchronization following a loss of signal, and minimizing latency. Additional critical elements include image scaling support, multi-sharing, ease of initial integration and configuration, integration with desktop window managers, collaboration tools, and host and recipient controls. The goal of this paper is to summarize the various elements of an IP-based display sharing system that can be used in today's control center environment.
NASA Technical Reports Server (NTRS)
Bogart, Edward H. (Inventor); Pope, Alan T. (Inventor)
2000-01-01
A system for display on a single video display terminal of multiple physiological measurements is provided. A subject is monitored by a plurality of instruments which feed data to a computer programmed to receive data, calculate data products such as index of engagement and heart rate, and display the data in a graphical format simultaneously on a single video display terminal. In addition, live video representing the view of the subject and the experimental setup may also be integrated into the single data display. The display may be recorded on a standard video tape recorder for retrospective analysis.
Video image stabilization and registration--plus
NASA Technical Reports Server (NTRS)
Hathaway, David H. (Inventor)
2009-01-01
A method of stabilizing a video image displayed in multiple video fields of a video sequence includes the steps of: subdividing a selected area of a first video field into nested pixel blocks; determining horizontal and vertical translation of each of the pixel blocks in each of the pixel block subdivision levels from the first video field to a second video field; and determining translation of the image from the first video field to the second video field by determining a change in magnification of the image from the first video field to the second video field in each of horizontal and vertical directions, and determining shear of the image from the first video field to the second video field in each of the horizontal and vertical directions.
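The per-block translation step this method describes can be sketched as exhaustive block matching between two video fields. This is an illustrative NumPy sketch, not the patented algorithm: the block size, search range, and sum-of-absolute-differences criterion are assumptions.

```python
import numpy as np

def block_translation(block, next_field, top, left, search=4):
    """Estimate the (dy, dx) translation of `block` (taken from the first
    field at position (top, left)) into `next_field` by exhaustive
    sum-of-absolute-differences search within +/- `search` pixels."""
    h, w = block.shape
    best, best_dydx = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            # Skip candidate positions that fall outside the field.
            if y < 0 or x < 0 or y + h > next_field.shape[0] or x + w > next_field.shape[1]:
                continue
            sad = np.abs(block - next_field[y:y + h, x:x + w]).sum()
            if best is None or sad < best:
                best, best_dydx = sad, (dy, dx)
    return best_dydx

# Toy example: a bright square shifted by (2, 3) between fields.
f1 = np.zeros((32, 32)); f1[8:16, 8:16] = 1.0
f2 = np.zeros((32, 32)); f2[10:18, 11:19] = 1.0
print(block_translation(f1[8:16, 8:16], f2, 8, 8))  # -> (2, 3)
```

Repeating this over the nested pixel-block subdivisions, and comparing the translations of neighboring blocks, is what lets the method separate whole-image translation from magnification and shear.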
NASA Astrophysics Data System (ADS)
Froehlich, Jan; Grandinetti, Stefan; Eberhardt, Bernd; Walter, Simon; Schilling, Andreas; Brendel, Harald
2014-03-01
High quality video sequences are required for the evaluation of tone mapping operators and high dynamic range (HDR) displays. We provide scenic and documentary scenes with a dynamic range of up to 18 stops. The scenes are staged using professional film lighting, make-up and set design to enable the evaluation of image and material appearance. To address challenges for HDR-displays and temporal tone mapping operators, the sequences include highlights entering and leaving the image, brightness changing over time, high contrast skin tones, specular highlights and bright, saturated colors. HDR-capture is carried out using two cameras mounted on a mirror-rig. To achieve a cinematic depth of field, digital motion picture cameras with Super-35mm size sensors are used. We provide HDR-video sequences to serve as a common ground for the evaluation of temporal tone mapping operators and HDR-displays. They are available to the scientific community for further research.
Apparatus for monitoring crystal growth
Sachs, Emanual M.
1981-01-01
A system and method are disclosed for monitoring the growth of a crystalline body from a liquid meniscus in a furnace. The system provides an improved human/machine interface so as to reduce operator stress, strain and fatigue while improving the conditions for observation and control of the growing process. The system comprises suitable optics for forming an image of the meniscus and body, wherein the image is anamorphic so that the entire meniscus can be viewed with good resolution in both the width and height dimensions. The system also comprises a video display for displaying the anamorphic image. The video display includes means for enhancing the contrast between any two contrasting points in the image. The video display also comprises a signal averager for averaging the intensity of at least one preselected portion of the image. The value of the average intensity can in turn be utilized to control the growth of the body. The system and method are also capable of observing and monitoring multiple processes.
Method of monitoring crystal growth
Sachs, Emanual M.
1982-01-01
A system and method are disclosed for monitoring the growth of a crystalline body from a liquid meniscus in a furnace. The system provides an improved human/machine interface so as to reduce operator stress, strain and fatigue while improving the conditions for observation and control of the growing process. The system comprises suitable optics for forming an image of the meniscus and body, wherein the image is anamorphic so that the entire meniscus can be viewed with good resolution in both the width and height dimensions. The system also comprises a video display for displaying the anamorphic image. The video display includes means for enhancing the contrast between any two contrasting points in the image. The video display also comprises a signal averager for averaging the intensity of at least one preselected portion of the image. The value of the average intensity can in turn be utilized to control the growth of the body. The system and method are also capable of observing and monitoring multiple processes.
A Scalable, Collaborative, Interactive Light-field Display System
2014-06-01
Keywords: light-field, holographic displays, 3D display, holographic video, integral photography, plenoptic, computed photography. Distribution A: Approved
Autonomous spacecraft rendezvous and docking
NASA Technical Reports Server (NTRS)
Tietz, J. C.; Almand, B. J.
1985-01-01
A storyboard display is presented which summarizes work done recently in design and simulation of autonomous video rendezvous and docking systems for spacecraft. This display includes: photographs of the simulation hardware, plots of chase vehicle trajectories from simulations, pictures of the docking aid including image processing interpretations, and drawings of the control system strategy. Viewgraph-style sheets on the display bulletin board summarize the simulation objectives, benefits, special considerations, approach, and results.
Autonomous spacecraft rendezvous and docking
NASA Astrophysics Data System (ADS)
Tietz, J. C.; Almand, B. J.
A storyboard display is presented which summarizes work done recently in design and simulation of autonomous video rendezvous and docking systems for spacecraft. This display includes: photographs of the simulation hardware, plots of chase vehicle trajectories from simulations, pictures of the docking aid including image processing interpretations, and drawings of the control system strategy. Viewgraph-style sheets on the display bulletin board summarize the simulation objectives, benefits, special considerations, approach, and results.
Development of a Low Cost Graphics Terminal.
ERIC Educational Resources Information Center
Lehr, Ted
1985-01-01
Describes modifications made to expand the capabilities of a display unit (Lear Siegler ADM-3A) to include medium resolution graphics. The modifying circuitry is detailed along with software subroutines written in Z-80 machine language for controlling the video display. (JN)
Improving School Lighting for Video Display Units.
ERIC Educational Resources Information Center
Parker-Jenkins, Marie; Parker-Jenkins, William
1985-01-01
Provides information to identify and implement the key characteristics which contribute to an efficient and comfortable visual display unit (VDU) lighting installation. Areas addressed include VDU lighting requirements, glare, lighting controls, VDU environment, lighting retrofit, optical filters, and lighting recommendations. A checklist to…
Polyplanar optical display electronics
DOE Office of Scientific and Technical Information (OSTI.GOV)
DeSanto, L.; Biscardi, C.
The Polyplanar Optical Display (POD) is a unique display screen which can be used with any projection source. The prototype ten-inch display is two inches thick and has a matte black face which allows for high contrast images. The prototype being developed is a form, fit and functional replacement display for the B-52 aircraft, which uses a monochrome ten-inch display. In order to achieve a long lifetime, the new display uses a 100 milliwatt green solid-state laser (10,000 hr life) at 532 nm as its light source. To produce real-time video, the laser light is modulated by a Digital Light Processing (DLP™) chip manufactured by Texas Instruments. In order to use the solid-state laser as the light source and also fit within the constraints of the B-52 display, the Digital Micromirror Device (DMD™) circuit board is removed from the Texas Instruments DLP light engine assembly. Due to the compact architecture of the projection system within the display chassis, the DMD™ chip is operated remotely from the Texas Instruments circuit board. The authors discuss the operation of the DMD™ divorced from the light engine and the interfacing of the DMD™ board with various video formats (CVBS, Y/C or S-video, and RGB), including the format specific to the B-52 aircraft. A brief discussion of the electronics required to drive the laser is also presented.
Motion sickness and postural sway in console video games.
Stoffregen, Thomas A; Faugloire, Elise; Yoshida, Ken; Flanagan, Moira B; Merhi, Omar
2008-04-01
We tested the hypotheses that (a) participants might develop motion sickness while playing "off-the-shelf" console video games and (b) postural motion would differ between sick and well participants, prior to the onset of motion sickness. There have been many anecdotal reports of motion sickness among people who play console video games (e.g., Xbox, PlayStation). Participants (40 undergraduate students) played a game continuously for up to 50 min while standing or sitting. We varied the distance to the display screen (and, consequently, the visual angle of the display). Across conditions, the incidence of motion sickness ranged from 42% to 56%; incidence did not differ across conditions. During game play, head and torso motion differed between sick and well participants prior to the onset of subjective symptoms of motion sickness. The results indicate that console video games carry a significant risk of motion sickness. Potential applications of this research include changes in the design of console video games and recommendations for how such systems should be used.
Video image processor on the Spacelab 2 Solar Optical Universal Polarimeter /SL2 SOUP/
NASA Technical Reports Server (NTRS)
Lindgren, R. W.; Tarbell, T. D.
1981-01-01
The SOUP instrument is designed to obtain diffraction-limited digital images of the sun with high photometric accuracy. The Video Processor originated from the requirement to provide onboard real-time image processing, both to reduce the telemetry rate and to provide meaningful video displays of scientific data to the payload crew. This original concept has evolved into a versatile digital processing system with a multitude of other uses in the SOUP program. The central element in the Video Processor design is a 16-bit central processing unit based on 2900 family bipolar bit-slice devices. All arithmetic, logical and I/O operations are under control of microprograms, stored in programmable read-only memory and initiated by commands from the LSI-11. Several functions of the Video Processor are described, including interface to the High Rate Multiplexer downlink, cosmetic and scientific data processing, scan conversion for crew displays, focus and exposure testing, and use as ground support equipment.
ERIC Educational Resources Information Center
Walsh, Janet
1982-01-01
Discusses the health hazards of working with the visual display systems of computers, in particular the eye problems associated with long-term use of video display terminals. Excerpts from and ordering information for the National Institute for Occupational Safety and Health report on such hazards are included. (JJD)
ARINC 818 specification revisions enable new avionics architectures
NASA Astrophysics Data System (ADS)
Grunwald, Paul
2014-06-01
The ARINC 818 Avionics Digital Video Bus is the standard for cockpit video that has gained wide acceptance in both commercial and military cockpits. The Boeing 787, A350XWB, A400M, KC-46A, and many other aircraft use it. The ARINC 818 specification, which was initially released in 2006, has recently undergone a major update to address new avionics architectures and capabilities. Over the seven years since its release, projects have gone beyond the specification due to the complexity of new architectures and desired capabilities, such as video switching, bi-directional communication, data-only paths, and camera and sensor control provisions. The ARINC 818 specification was revised in 2013, and ARINC 818-2 was approved in November 2013. The revisions in ARINC 818-2 enable switching, stereo and 3-D provisions, color sequential implementations, regions of interest, bi-directional communication, higher link rates, data-only transmission, and synchronization signals. This paper discusses each of the new capabilities and their impact on avionics and display architectures, especially when integrating large area displays, stereoscopic displays, multiple displays, and systems that include a large number of sensors.
Increased ISR operator capability utilizing a centralized 360° full motion video display
NASA Astrophysics Data System (ADS)
Andryc, K.; Chamberlain, J.; Eagleson, T.; Gottschalk, G.; Kowal, B.; Kuzdeba, P.; LaValley, D.; Myers, E.; Quinn, S.; Rose, M.; Rusiecki, B.
2012-06-01
In many situations, the difference between success and failure comes down to taking the right actions quickly. While the myriad of electronic sensors available today can provide data quickly, that data may overload the operator; only a contextualized, centralized display of information and an intuitive human interface can support the quick and effective decisions needed. If these decisions are to result in quick actions, then the operator must be able to understand all of the data of his environment. In this paper we present a novel approach to contextualizing multi-sensor data onto a real-time 360 degree full motion video display. The system described could function as a primary display system for command and control in security, military, and observation posts. It has the ability to process and enable interactive control of multiple other sensor systems, and it enhances the value of these other sensors by overlaying their information on a panorama of the surroundings. It can also be used to interface to other systems, including auxiliary electro-optical systems, aerial video, contact management, Hostile Fire Indicators (HFI), and Remote Weapon Stations (RWS).
Xiao, Yan; Dexter, Franklin; Hu, Peter; Dutton, Richard P
2008-02-01
On the day of surgery, real-time information of both room occupancy and activities within the operating room (OR) is needed for management of staff, equipment, and unexpected events. A status display system showed color OR video with controllable image quality and showed times that patients entered and exited each OR (obtained automatically). The system was installed and its use was studied in a 6-OR trauma suite and at four locations in a 19-OR tertiary suite. Trauma staff were surveyed for their perceptions of the system. Evidence of staff acceptance of distributed OR video included its operational use for >3 yr in the two suites, with no administrative complaints. Individuals of all job categories used the video. Anesthesiologists were the most frequent users for more than half of the days (95% confidence interval [CI] >50%) in the tertiary ORs. The OR charge nurses accessed the video mostly early in the day when the OR occupancy was high. In comparison (P < 0.001), anesthesiologists accessed it mostly at the end of the workday when occupancy was declining and few cases were starting. Of all 30-min periods during which the video was accessed in the trauma suite, many accesses (95% CI >42%) occurred in periods with no cases starting or ending (i.e., the video was used during the middle of cases). The three stated reasons for using video that had median surveyed responses of "very useful" were "to see if cases are finished," "to see if a room is ready," and "to see when cases are about to finish." Our nurses and physicians both accepted and used distributed OR video as it provided useful information, regardless of whether real-time display of milestones was available (e.g., through anesthesia information system data).
Travel guidance system for vehicles
DOE Office of Scientific and Technical Information (OSTI.GOV)
Takanabe, K.; Yamamoto, M.; Ito, K.
1987-02-24
A travel guidance system is described for vehicles including: a heading sensor for detecting a direction of movement of a vehicle; a distance sensor for detecting a distance traveled by the vehicle; a map data storage medium preliminarily storing map data; a control unit for receiving a heading signal from the heading sensor and a distance signal from the distance sensor to successively compute a present position of the vehicle and for generating video signals corresponding to display data including map data from the map data storage medium and data of the present position; and a display having first and second display portions and responsive to the video signals from the control unit to display on the first display portion a map and a present position mark, in which: the map data storage medium comprises means for preliminarily storing administrative division name data and landmark data; and the control unit comprises: landmark display means for (1) determining a landmark closest to the present position, (2) causing a position of the landmark to be displayed on the map, and (3) retrieving a landmark message concerning the landmark from the storage medium to cause the display to display the landmark message on the second display portion; division name display means for retrieving the name of an administrative division to which the present position belongs from the storage medium and causing the display to display a division name message on the second display portion; and selection means for selectively actuating at least one of the landmark display means and the division name display means.
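The landmark display means in the claim reduces to a nearest-neighbour lookup: pick the stored landmark closest to the present position and retrieve its message. A minimal sketch follows; the landmark coordinates and messages are invented for illustration and are not from the patent.

```python
import math

# Hypothetical landmark store: (x_km, y_km, message).
LANDMARKS = [
    (0.0, 0.0, "City Hall ahead"),
    (3.0, 4.0, "Central Station 200 m"),
    (10.0, 0.0, "Harbour exit"),
]

def nearest_landmark(x, y):
    """Return (distance, message) of the landmark closest to (x, y).
    The index i breaks distance ties without comparing message strings."""
    dist, _, msg = min(
        (math.hypot(lx - x, ly - y), i, lmsg)
        for i, (lx, ly, lmsg) in enumerate(LANDMARKS)
    )
    return dist, msg

print(nearest_landmark(2.0, 3.0))  # -> (1.4142135623730951, 'Central Station 200 m')
```

In the claimed system this lookup would run each time the dead-reckoned present position is updated, with the selected message routed to the second display portion.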
47 CFR 79.101 - Closed caption decoder requirements for analog television receivers.
Code of Federal Regulations, 2012 CFR
2012-10-01
...) BROADCAST RADIO SERVICES CLOSED CAPTIONING AND VIDEO DESCRIPTION OF VIDEO PROGRAMMING § 79.101 Closed... display the captioning for whichever channel the user selects. The TV Mode of operation allows the video... and rows. The characters must be displayed clearly separated from the video over which they are placed...
Psychophysical Comparison Of A Video Display System To Film By Using Bone Fracture Images
NASA Astrophysics Data System (ADS)
Seeley, George W.; Stempski, Mark; Roehrig, Hans; Nudelman, Sol; Capp, M. P.
1982-11-01
This study investigated the possibility of using a video display system instead of film for radiological diagnosis. Also investigated were the relationships between characteristics of the system and the observer's accuracy level. Radiologists were used as observers. Thirty-six clinical bone fractures were separated into two matched sets of equal difficulty. The difficulty parameters and ratings were defined by a panel of expert bone radiologists at the Arizona Health Sciences Center, Radiology Department. These two sets of fracture images were then matched with verifiably normal images using parameters such as film type, angle of view, size, portion of anatomy, the film's density range, and the patient's age and sex. The two sets of images were then displayed, using a counterbalanced design, to each of the participating radiologists for diagnosis. Whenever a response was given to a video image, the radiologist used enhancement controls to "window in" on the grey levels of interest. During the TV phase, the radiologist was required to record the settings of the calibrated controls of the image enhancer during interpretation. At no time did any single radiologist see the same film in both modes. The study was designed so that a standard analysis of variance would show the effects of viewing mode (film vs TV), the effects due to stimulus set, and any interactions with observers. A signal detection analysis of observer performance was also performed. Results indicate that the TV display system is almost as good as the view box display; an average of only two more errors were made on the TV display. The difference between the systems has been traced to four observers who had poor accuracy on a small number of films viewed on the TV display. 
This information is now being correlated with the video system's signal-to-noise ratio (SNR), signal transfer function (STF), and resolution measurements, to obtain information on the basic display and enhancement requirements for a video-based radiologic system. Due to time constraints the results are not included here. The complete results of this study will be reported at the conference.
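The signal detection analysis mentioned above conventionally summarizes each observer's fracture/normal decisions as a sensitivity index d' = z(hit rate) - z(false-alarm rate). A minimal sketch follows; the response counts are invented for illustration, not taken from the study.

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Detection sensitivity d' = z(hit rate) - z(false-alarm rate),
    where z is the inverse standard-normal CDF."""
    z = NormalDist().inv_cdf
    hit_rate = hits / (hits + misses)
    fa_rate = false_alarms / (false_alarms + correct_rejections)
    return z(hit_rate) - z(fa_rate)

# Illustrative counts for one observer reading 18 fractures and 18 normals.
print(round(d_prime(15, 3, 2, 16), 2))
```

Comparing d' for the film and TV conditions separates genuine sensitivity differences from shifts in the observers' decision criterion.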
An Attention-Information-Based Spatial Adaptation Framework for Browsing Videos via Mobile Devices
NASA Astrophysics Data System (ADS)
Li, Houqiang; Wang, Yi; Chen, Chang Wen
2007-12-01
With the growing popularity of personal digital assistant devices and smart phones, more and more consumers are eager to watch videos on mobile devices. However, the limited display size of mobile devices imposes significant barriers for users browsing high-resolution videos. In this paper, we present an attention-information-based spatial adaptation framework to address this problem. The framework includes two major parts: video content generation and a video adaptation system. During video compression, the attention information in video sequences is detected using an attention model and embedded into bitstreams with the proposed supplemental enhancement information (SEI) structure. Furthermore, we also develop an innovative scheme to adaptively adjust quantization parameters in order to simultaneously improve the quality of overall encoding and the quality of transcoding the attention areas. When the high-resolution bitstream is transmitted to mobile users, a fast transcoding algorithm we developed earlier is applied to generate a new bitstream for the attention areas in frames. This new low-resolution bitstream, containing mostly attention information, is sent to users instead of the high-resolution one for display on mobile devices. Experimental results show that the proposed spatial adaptation scheme improves both subjective and objective video quality.
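The spatial-adaptation idea, delivering only the detected attention region at the mobile display's resolution, can be sketched in the pixel domain. This is a deliberate simplification: the paper's actual method operates on H.264 bitstreams with a fast transcoder, whereas the sketch below crops and resamples decoded frames.

```python
import numpy as np

def adapt_frame(frame, roi, out_h, out_w):
    """Crop the attention region `roi` = (top, left, h, w) from a decoded
    frame and resample it to the mobile display size (nearest neighbour)."""
    top, left, h, w = roi
    crop = frame[top:top + h, left:left + w]
    ys = np.arange(out_h) * h // out_h   # source row for each output row
    xs = np.arange(out_w) * w // out_w   # source column for each output column
    return crop[np.ix_(ys, xs)]

hd = np.arange(1080 * 1920).reshape(1080, 1920)   # stand-in for a 1080p frame
small = adapt_frame(hd, roi=(100, 200, 480, 640), out_h=240, out_w=320)
print(small.shape)  # -> (240, 320)
```

In the paper's pipeline the ROI comes from the attention model via the SEI metadata, so the transcoder never needs to re-run attention detection.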
VENI, video, VICI: The merging of computer and video technologies
NASA Technical Reports Server (NTRS)
Horowitz, Jay G.
1993-01-01
The topics covered include the following: High Definition Television (HDTV) milestones; visual information bandwidth; television frequency allocation and bandwidth; horizontal scanning; workstation RGB color domain; NTSC color domain; American HDTV time-table; HDTV image size; digital HDTV hierarchy; task force on digital image architecture; open architecture model; future displays; and the ULTIMATE imaging system.
Video Games: A Human Factors Guide to Visual Display Design and Instructional System Design
1984-04-01
Electronic video games have many of the same technological and psychological characteristics that are found in military computer-based systems. Two research programs, both of which employ video games as experimental stimuli, are presented here. The first research program seeks to identify and exploit the … characteristics of video games in the design of game-based training devices. The second program is designed to explore the effects of electronic video display …
Predictive Displays for High Latency Teleoperation
2016-08-04
"Predictive Displays for High Latency Teleoperation": analysis of the existing approach. Commands (throttle, steer, brake) travel from the OCU over the comms channel to the vehicle, and video returns with delay; prediction presents an opportunity to mitigate outgoing latency. Video is not governed by physics; however, video is dependent on the state of the vehicle. Transport is over UDP: commands and state estimates outbound, H.264 video and vehicle state inbound. The implementation is in C++ with 2 threads, using OpenCV for image manipulation and FFMPEG for video decoding.
Multi-Target Camera Tracking, Hand-off and Display LDRD 158819 Final Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderson, Robert J.
2014-10-01
Modern security control rooms gather video and sensor feeds from tens to hundreds of cameras. Advanced camera analytics can detect motion from individual video streams and convert unexpected motion into alarms, but the interpretation of these alarms depends heavily upon human operators. Unfortunately, these operators can be overwhelmed when a large number of events happen simultaneously, or lulled into complacency due to frequent false alarms. This LDRD project has focused on improving video surveillance-based security systems by changing the fundamental focus from the cameras to the targets being tracked. If properly integrated, more cameras shouldn't lead to more alarms, more monitors, more operators, and increased response latency, but instead should lead to better information and more rapid response times. Over the course of the LDRD we have been developing algorithms that take live video imagery from multiple video cameras, identify individual moving targets from the background imagery, and then display the results in a single 3D interactive video. In this document we summarize the work in developing this multi-camera, multi-target system, including lessons learned, tools developed, technologies explored, and a description of current capability.
Multi-target camera tracking, hand-off and display LDRD 158819 final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderson, Robert J.
2014-10-01
Modern security control rooms gather video and sensor feeds from tens to hundreds of cameras. Advanced camera analytics can detect motion from individual video streams and convert unexpected motion into alarms, but the interpretation of these alarms depends heavily upon human operators. Unfortunately, these operators can be overwhelmed when a large number of events happen simultaneously, or lulled into complacency due to frequent false alarms. This LDRD project has focused on improving video surveillance-based security systems by changing the fundamental focus from the cameras to the targets being tracked. If properly integrated, more cameras shouldn't lead to more alarms, more monitors, more operators, and increased response latency, but instead should lead to better information and more rapid response times. Over the course of the LDRD we have been developing algorithms that take live video imagery from multiple video cameras, identify individual moving targets from the background imagery, and then display the results in a single 3D interactive video. In this document we summarize the work in developing this multi-camera, multi-target system, including lessons learned, tools developed, technologies explored, and a description of current capability.
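The per-camera stage both LDRD reports describe, separating moving targets from background imagery, is commonly done with a running-average background model. The sketch below is a generic textbook technique, not the project's specific algorithm; the learning rate and threshold are illustrative.

```python
import numpy as np

def detect_motion(frames, alpha=0.1, thresh=0.25):
    """Maintain an exponential running-average background and flag pixels
    whose deviation from it exceeds `thresh` as moving-target pixels.
    Returns one boolean mask per frame after the first."""
    bg = frames[0].astype(float)
    masks = []
    for f in frames[1:]:
        f = f.astype(float)
        masks.append(np.abs(f - bg) > thresh)
        bg = (1 - alpha) * bg + alpha * f   # slowly absorb scene changes
    return masks

# Static scene with a target appearing in the last frame.
static = np.zeros((16, 16))
moving = static.copy(); moving[4:8, 4:8] = 1.0
masks = detect_motion([static, static, moving])
print(masks[0].any(), masks[1].sum())  # -> False 16
```

Running this independently per camera yields the candidate target pixels that the project's later stages would associate across views and render into the single 3D interactive video.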
Military display performance parameters
NASA Astrophysics Data System (ADS)
Desjardins, Daniel D.; Meyer, Frederick
2012-06-01
The military display market is analyzed in terms of four of its segments: avionics, vetronics, dismounted soldier, and command and control. Requirements are summarized for a number of technology-driving parameters, to include luminance, night vision imaging system compatibility, gray levels, resolution, dimming range, viewing angle, video capability, altitude, temperature, shock and vibration, etc., for direct-view and virtual-view displays in cockpits and crew stations. Technical specifications are discussed for selected programs.
NASA Astrophysics Data System (ADS)
Bada, Adedayo; Alcaraz-Calero, Jose M.; Wang, Qi; Grecos, Christos
2014-05-01
This paper describes a comprehensive empirical performance evaluation of 3D video processing employing the physical/virtual architecture implemented in a cloud environment. Different virtualization technologies, virtual video cards and various 3D benchmark tools have been utilized in order to analyse the optimal performance in the context of 3D online gaming applications. This study highlights 3D video rendering performance under each type of hypervisor, along with other factors including network I/O, disk I/O and memory usage. Comparisons of these factors under well-known virtual display technologies such as VNC, Spice and Virtual 3D adaptors reveal the strengths and weaknesses of the various hypervisors with respect to 3D video rendering and streaming.
Reconfigurable work station for a video display unit and keyboard
NASA Technical Reports Server (NTRS)
Shields, Nicholas L. (Inventor); Roe, Fred D., Jr. (Inventor); Fagg, Mary F. (Inventor); Henderson, David E. (Inventor)
1988-01-01
A reconfigurable workstation is described having video, keyboard, and hand operated motion controller capabilities. The workstation includes main side panels between which a primary work panel is pivotally carried in a manner in which the primary work panel may be adjusted and set in a negatively declined or positively inclined position for proper forearm support when operating hand controllers. A keyboard table supports a keyboard in such a manner that the keyboard is set in a positively inclined position with respect to the negatively declined work panel. Various adjustable devices are provided for adjusting the relative declinations and inclinations of the work panels, tables, and visual display panels.
Prevention: lessons from video display installations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Margach, C.B.
1983-04-01
Workers interacting with video display units for periods in excess of two hours per day report significantly increased visual discomfort, fatigue and inefficiencies, as compared with workers performing similar tasks, but without the video viewing component. Difficulties in focusing and the appearance of myopia are among the problems being described. With a view to preventing or minimizing such problems, principles and procedures are presented providing for (a) modification of physical features of the video workstation and (b) improvement in the visual performances of the individual video unit operator.
2008-04-01
… Index (NASA-TLX: Hart & Staveland, 1988), and a Post-Test Questionnaire. Demographic data/Background Questionnaire: this questionnaire was used … very confident). NASA-TLX: the NASA-TLX (Hart & Staveland, 1988) is a subjective workload assessment tool. A multidimensional weighting … completed the NASA-TLX. The test trials were randomized across participants and occurred in a counterbalanced order that took into account video display
An evaluation of the efficacy of video displays for use with chimpanzees (Pan troglodytes).
Hopper, Lydia M; Lambeth, Susan P; Schapiro, Steven J
2012-05-01
Video displays for behavioral research lend themselves particularly well to studies with chimpanzees (Pan troglodytes), as their vision is comparable to humans', yet there has been no formal test of the efficacy of video displays as a form of social information for chimpanzees. To address this, we compared the learning success of chimpanzees shown video footage of a conspecific compared to chimpanzees shown a live conspecific performing the same novel task. Footage of an unfamiliar chimpanzee operating a bidirectional apparatus was presented to 24 chimpanzees (12 males, 12 females), and their responses were compared to those of a further 12 chimpanzees given the same task but with no form of information. Secondly, we also compared the responses of the chimpanzees in the video display condition to responses of eight chimpanzees from a previously published study of ours, in which chimpanzees observed live models. Chimpanzees shown a video display were more successful than those in the control condition and showed comparable success to those that saw a live model. Regarding fine-grained copying (i.e. the direction that the door was pushed), only chimpanzees that observed a live model showed significant matching to the model's methods with their first response. Yet, when all the responses made by the chimpanzees were considered, comparable levels of matching were shown by chimpanzees in both the live and video conditions. © 2012 Wiley Periodicals, Inc.
Head-mounted display for use in functional endoscopic sinus surgery
NASA Astrophysics Data System (ADS)
Wong, Brian J.; Lee, Jon P.; Dugan, F. Markoe; MacArthur, Carol J.
1995-05-01
Since the introduction of functional endoscopic sinus surgery (FESS), the procedure has undergone rapid change, with evolution keeping pace with technological advances. The advent of low-cost charge-coupled device (CCD) cameras revolutionized the practice and instruction of FESS. Video-based FESS has allowed for documentation of the surgical procedure as well as interactive instruction during surgery. Presently, the technical requirements of video-based FESS include the addition of one or more television monitors positioned strategically in the operating room. Though video monitors have greatly enhanced surgical endoscopy by re-involving nurses and assistants in the actual mechanics of surgery, they require the operating surgeon to be focused on the screen instead of the patient. In this study, we describe the use of a new low-cost liquid crystal display (LCD) based device that functions as a monitor but is mounted on the head on a visor (PT-O1, O1 Products, Westlake Village, CA). This study illustrates the application of these head-mounted display (HMD) devices to FESS operations. The same surgeon performed the operation in each patient. In one nasal fossa, surgery was performed using conventional video FESS methods. The contralateral side was operated on while wearing the head-mounted video display. The device had adequate resolution for the purposes of FESS. No adverse effects were noted intraoperatively. The results on the patients' ipsilateral and contralateral sides were similar. The visor eliminated significant torsion of the surgeon's neck during the operation while permitting simultaneous viewing of both the patient and the intranasal surgical field.
Evaluating the content and reception of messages from incarcerated parents to their children.
Folk, Johanna B; Nichols, Emily B; Dallaire, Danielle H; Loper, Ann B
2012-10-01
In the current study, children's reactions to video messages from their incarcerated parents were evaluated. Previous research has yielded mixed results when it examined the impact of contact between incarcerated parents and their children; one reason for these mixed results may be a lack of attention to the quality of contact. This is the first study to examine the actual content and quality of a remote form of contact in this population. Participants included 186 incarcerated parents (54% mothers) who participated in a filming with The Messages Project and 61 caregivers of their children. Parental mood prior to filming the message and children's mood after viewing the message were assessed using the Positive and Negative Affect Scale. After coding the content of 172 videos, the data from the 61 videos with caregiver responses were used in subsequent path analyses. Analyses indicated that when parents were in more negative moods prior to filming their message, they displayed more negative emotions in the video messages (β = .210), and their children were in more negative moods after viewing the message (β = .288). Considering that displays of negative emotion can directly affect how children respond to contact, it seems important for parents to learn to regulate these emotional displays to improve the quality of their contact with their children. © 2012 American Orthopsychiatric Association.
Dissecting children's observational learning of complex actions through selective video displays.
Flynn, Emma; Whiten, Andrew
2013-10-01
Children can learn how to use complex objects by watching others, yet the relative importance of different elements they may observe, such as the interactions of the individual parts of the apparatus, a model's movements, and desirable outcomes, remains unclear. In total, 140 3-year-olds and 140 5-year-olds participated in a study where they observed a video showing tools being used to extract a reward item from a complex puzzle box. Conditions varied according to the elements that could be seen in the video: (a) the whole display, including the model's hands, the tools, and the box; (b) the tools and the box but not the model's hands; (c) the model's hands and the tools but not the box; (d) only the end state with the box opened; and (e) no demonstration. Children's later attempts at the task were coded to establish whether they imitated the hierarchically organized sequence of the model's actions, the action details, and/or the outcome. Children's successful retrieval of the reward from the box and the replication of hierarchical sequence information were reduced in all but the whole display condition. Only once children had attempted the task and witnessed a second demonstration did the display focused on the tools and box prove to be better for hierarchical sequence information than the display focused on the tools and hands only. Copyright © 2013 Elsevier Inc. All rights reserved.
Polyplanar optical display electronics
NASA Astrophysics Data System (ADS)
DeSanto, Leonard; Biscardi, Cyrus
1997-07-01
The polyplanar optical display (POD) is a unique display screen which can be used with any projection source. The prototype ten-inch display is two inches thick and has a matte black face which allows for high contrast images. The prototype being developed is a form, fit and functional replacement display for the B-52 aircraft which uses a monochrome ten-inch display. In order to achieve a long lifetime, the new display uses a 100 milliwatt green solid-state laser at 532 nm as its light source. To produce real-time video, the laser light is being modulated by a digital light processing (DLP) chip manufactured by Texas Instruments. In order to use the solid-state laser as the light source and also fit within the constraints of the B-52 display, the digital micromirror device (DMD) circuit board is removed from the Texas Instruments DLP light engine assembly. Due to the compact architecture of the projection system within the display chassis, the DMD chip is operated remotely from the Texas Instruments circuit board. We discuss the operation of the DMD divorced from the light engine and the interfacing of the DMD board with various video formats including the format specific to the B-52 aircraft. A brief discussion of the electronics required to drive the laser is also presented.
Real-Time Acquisition and Display of Data and Video
NASA Technical Reports Server (NTRS)
Bachnak, Rafic; Chakinarapu, Ramya; Garcia, Mario; Kar, Dulal; Nguyen, Tien
2007-01-01
This paper describes the development of a prototype that takes in an analog National Television System Committee (NTSC) video signal generated by a video camera, along with data acquired by a microcontroller, and displays them in real time on a digital panel. An 8051 microcontroller is used to acquire the power dissipation of the display panel, the room temperature, and the camera zoom level. The paper describes the major hardware components and shows how they are interfaced into a functional prototype. Test data results are presented and discussed.
Contour Detector and Data Acquisition System for the Left Ventricular Outline
NASA Technical Reports Server (NTRS)
Reiber, J. H. C. (Inventor)
1978-01-01
A real-time contour detector and data acquisition system is described for an angiographic apparatus having a video scanner for converting an X-ray image of a structure, characterized by a change in brightness level compared with its surroundings, into video format and displaying the X-ray image in recurring video fields. The real-time contour detector and data acquisition system includes track and hold circuits; a reference level analog computer circuit; an analog comparator; a digital processor; a field memory; and a computer interface.
IVTS-CEV (Interactive Video Tape System-Combat Engineer Vehicle) Gunnery Trainer.
1981-07-01
video game technology developed for and marketed in consumer video games. The IVTS/CEV is a conceptual/breadboard-level classroom interactive training system designed to train Combat Engineer Vehicle (CEV) gunners in target acquisition and engagement with the main gun. The concept demonstration consists of two units: a gunner station and a display module. The gunner station has optics and gun controls replicating those of the CEV gunner station. The display module contains a standard large-screen color video monitor and a video tape player. The gunner’s sight
Young Children's Analogical Problem Solving: Gaining Insights from Video Displays
ERIC Educational Resources Information Center
Chen, Zhe; Siegler, Robert S.
2013-01-01
This study examined how toddlers gain insights from source video displays and use the insights to solve analogous problems. Two- to 2.5-year-olds viewed a source video illustrating a problem-solving strategy and then attempted to solve analogous problems. Older but not younger toddlers extracted the problem-solving strategy depicted in the video…
NASA Tech Briefs, April 2000. Volume 24, No. 4
NASA Technical Reports Server (NTRS)
2000-01-01
Topics covered include: Imaging/Video/Display Technology; Electronic Components and Circuits; Electronic Systems; Physical Sciences; Materials; Computer Programs; Mechanics; Bio-Medical; Test and Measurement; Mathematics and Information Sciences; Books and Reports.
Code of Federal Regulations, 2011 CFR
2011-04-01
... capability to display the date and time of recorded events on video and/or digital recordings. The displayed... digital record retention. (1) All video recordings and/or digital records of coverage provided by the.... (3) Duly authenticated copies of video recordings and/or digital records shall be provided to the...
Code of Federal Regulations, 2014 CFR
2014-04-01
... capability to display the date and time of recorded events on video and/or digital recordings. The displayed... digital record retention. (1) All video recordings and/or digital records of coverage provided by the.... (3) Duly authenticated copies of video recordings and/or digital records shall be provided to the...
Code of Federal Regulations, 2013 CFR
2013-04-01
... capability to display the date and time of recorded events on video and/or digital recordings. The displayed... digital record retention. (1) All video recordings and/or digital records of coverage provided by the.... (3) Duly authenticated copies of video recordings and/or digital records shall be provided to the...
Code of Federal Regulations, 2010 CFR
2010-04-01
... capability to display the date and time of recorded events on video and/or digital recordings. The displayed... digital record retention. (1) All video recordings and/or digital records of coverage provided by the.... (3) Duly authenticated copies of video recordings and/or digital records shall be provided to the...
Code of Federal Regulations, 2012 CFR
2012-04-01
... capability to display the date and time of recorded events on video and/or digital recordings. The displayed... digital record retention. (1) All video recordings and/or digital records of coverage provided by the.... (3) Duly authenticated copies of video recordings and/or digital records shall be provided to the...
Multi-star processing and gyro filtering for the video inertial pointing system
NASA Technical Reports Server (NTRS)
Murphy, J. P.
1976-01-01
The video inertial pointing (VIP) system is being developed to satisfy the acquisition and pointing requirements of astronomical telescopes. The VIP system uses a single video sensor to provide star position information that can be used to generate three-axis pointing error signals (multi-star processing) and for input to a cathode ray tube (CRT) display of the star field. The pointing error signals are used to update the telescope's gyro stabilization system (gyro filtering). The CRT display facilitates target acquisition and positioning of the telescope by a remote operator. Linearized small-angle equations are used for the multi-star processing, and a consideration of error performance and singularities leads to star-pair location restrictions and equation selection criteria. A discrete steady-state Kalman filter that uses the integration of the gyros is developed and analyzed. The filter includes unit time delays representing the asynchronous operation of the VIP microprocessor and video sensor. A digital simulation of a typical gyro-stabilized gimbal is developed and used to validate the approach to the gyro filtering.
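The gyro-filtering idea in this abstract, a prediction from gyro integration followed by a fixed-gain correction from the star sensor, can be illustrated with a minimal single-axis sketch. The function name, the scalar attitude model, and the gain value are illustrative assumptions, not the paper's actual multi-axis design:

```python
def kalman_pointing_update(att_est, gyro_rate, star_meas, dt, gain=0.2):
    """One cycle of a single-axis steady-state Kalman update:
    propagate the attitude estimate by integrating the gyro rate,
    then correct it with the star-position residual at a fixed
    (steady-state) gain."""
    att_pred = att_est + gyro_rate * dt  # prediction from gyro integration
    residual = star_meas - att_pred      # star-sensor pointing error
    return att_pred + gain * residual    # steady-state correction

# With a stationary target and zero gyro drift, the estimate
# converges to the star measurement:
att = 0.0
for _ in range(50):
    att = kalman_pointing_update(att, gyro_rate=0.0, star_meas=1.0, dt=0.1)
```

In the real system the gain would come from solving the steady-state filter equations, and the state would include gyro drift; the fixed scalar gain here only shows the structure of the update.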
NASA Astrophysics Data System (ADS)
Kim, Kyung-Su; Lee, Hae-Yeoun; Im, Dong-Hyuck; Lee, Heung-Kyu
Commercial markets employ digital rights management (DRM) systems to protect valuable high-definition (HD) quality videos. DRM systems use watermarking to provide copyright protection and ownership authentication of multimedia content. We propose a real-time video watermarking scheme for HD video in the uncompressed domain. In particular, our approach takes a practical perspective, addressing perceptual quality, real-time processing, and robustness requirements. We simplify and optimize a human visual system mask for real-time performance and also apply a dithering technique for invisibility. Extensive experiments are performed to prove that the proposed scheme satisfies the invisibility, real-time processing, and robustness requirements against video processing attacks. We concentrate on video processing attacks that commonly occur when HD-quality videos are displayed on portable devices. These attacks include not only scaling and low bit-rate encoding, but also malicious attacks such as format conversion and frame-rate change.
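The kind of masked, dithered spread-spectrum embedding this abstract describes can be sketched as follows. The gradient-based activity mask, the `strength` parameter, and the function name are illustrative stand-ins for the paper's simplified human-visual-system mask, not its actual algorithm:

```python
import numpy as np

def embed_watermark(frame, wm_bit, strength=2.0, seed=42):
    """Additive spread-spectrum watermark on a luma frame.
    A crude local-contrast mask attenuates the watermark in flat
    regions (where it would be most visible), and dither is added
    before quantization to hide contouring."""
    rng = np.random.default_rng(seed)
    pattern = rng.choice([-1.0, 1.0], size=frame.shape)  # pseudo-random carrier
    # local activity mask: gradient magnitude, normalized and floored
    gy, gx = np.gradient(frame.astype(float))
    mask = np.clip(np.hypot(gx, gy) / 64.0, 0.1, 1.0)
    wm = strength * mask * pattern * (1.0 if wm_bit else -1.0)
    dither = rng.uniform(-0.5, 0.5, size=frame.shape)
    return np.clip(frame + wm + dither, 0, 255).astype(np.uint8)
```

Detection would correlate the (possibly attacked) frame against the same seeded carrier pattern; robustness to scaling and re-encoding depends on choices this sketch does not model.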
Design of video interface conversion system based on FPGA
NASA Astrophysics Data System (ADS)
Zhao, Heng; Wang, Xiang-jun
2014-11-01
This paper presents an FPGA-based video interface conversion system that enables inter-conversion between digital and analog video. A Cyclone IV series EP4CE22F17C chip from Altera Corporation is used as the main video processing chip, and a single-chip microcontroller is used as the information interaction control unit between the FPGA and PC. The system is able to encode/decode messages from the PC. Technologies including video decoding/encoding circuits, the bus communication protocol, data stream de-interleaving and de-interlacing, color space conversion, and the Camera Link timing generator module of the FPGA are introduced. The system converts the Composite Video Broadcast Signal (CVBS) from the CCD camera into Low Voltage Differential Signaling (LVDS), which is collected by the video processing unit through the Camera Link interface. The processed video signals are then input to the system output board and displayed on the monitor. The current experiment shows that the system achieves high-quality video conversion with minimal board size.
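One stage of such a pipeline, color space conversion, can be modeled in software. The BT.601 full-range coefficients below are a standard choice for SD video of this kind; the paper does not specify its hardware conversion at this level, so this is a reference model, not the design itself:

```python
def rgb_to_ycbcr(r, g, b):
    """Full-range RGB -> YCbCr using ITU-R BT.601 coefficients,
    the kind of color-space conversion stage an FPGA video
    pipeline implements in fixed-point hardware."""
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.168736 * r - 0.331264 * g + 0.5 * b + 128
    cr =  0.5 * r - 0.418688 * g - 0.081312 * b + 128
    clamp = lambda v: min(255, max(0, round(v)))
    return clamp(y), clamp(cb), clamp(cr)
```

An FPGA implementation would replace the floating-point multiplies with scaled integer multiply-adds, but the arithmetic is the same.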
Ergonomic Training for Tomorrow's Office.
ERIC Educational Resources Information Center
Gross, Clifford M.; Chapnik, Elissa Beth
1987-01-01
The authors focus on issues related to the continual use of video display terminals in the office, including safety and health regulations, potential health problems, and the role of training in minimizing work-related health problems. (CH)
Optimization of the polyplanar optical display electronics for a monochrome B-52 display
NASA Astrophysics Data System (ADS)
DeSanto, Leonard
1998-09-01
The Polyplanar Optical Display (POD) is a unique display screen which can be used with any projection source. The prototype ten-inch display is two inches thick and has a matte black face which allows for high contrast images. The prototype being developed is a form, fit and functional replacement display for the B-52 aircraft which uses a monochrome ten-inch display. In order to achieve a long lifetime, the new display uses a new 200 mW green solid-state laser (10,000 hr. life) at 532 nm as its light source. To produce real-time video, the laser light is being modulated by a Digital Light Processing (DLPTM) chip manufactured by Texas Instruments (TI). In order to use the solid-state laser as the light source and also fit within the constraints of the B-52 display, the Digital Micromirror Device (DMDTM) chip is operated remotely from the Texas Instruments circuit board. In order to achieve increased brightness a monochrome digitizing interface was investigated. The operation of the DMDTM divorced from the light engine and the interfacing of the DMDTM board with the RS-170 video format specific to the B-52 aircraft will be discussed, including the increased brightness of the monochrome digitizing interface. A brief description of the electronics required to drive the new 200 mW laser is also presented.
NASA Technical Reports Server (NTRS)
Jedlovec, Gary; Srikishen, Jayanthi; Edwards, Rita; Cross, David; Welch, Jon; Smith, Matt
2013-01-01
The use of collaborative scientific visualization systems for the analysis, visualization, and sharing of "big data" available from new high resolution remote sensing satellite sensors or four-dimensional numerical model simulations is propelling the wider adoption of ultra-resolution tiled display walls interconnected by high speed networks. These systems require a globally connected and well-integrated operating environment that provides persistent visualization and collaboration services. This abstract and subsequent presentation describe a new collaborative visualization system installed for NASA's Short-term Prediction Research and Transition (SPoRT) program at Marshall Space Flight Center and its use for Earth science applications. The system consists of a 3 x 4 array of 1920 x 1080 pixel thin bezel video monitors mounted on a wall in a scientific collaboration lab. The monitors are physically and virtually integrated into a single 14' x 7' video display. The display of scientific data on the video wall is controlled by a single Alienware Aurora PC with a 2nd Generation Intel Core 4.1 GHz processor, 32 GB memory, and an AMD FirePro W600 video card with 6 mini DisplayPort connections. Six mini DisplayPort-to-dual-DVI cables are used to connect the 12 individual video monitors. The open source Scalable Adaptive Graphics Environment (SAGE) windowing and media control framework, running on top of the Ubuntu 12 Linux operating system, allows several users to simultaneously control the display and storage of high resolution still and moving graphics in a variety of formats, on tiled display walls of any size. SAGE provides a common environment, or framework, enabling its users to access, display, and share a variety of data-intensive information.
This information can be digital-cinema animations, high-resolution images, high-definition video-teleconferences, presentation slides, documents, spreadsheets or laptop screens. SAGE is cross-platform, community-driven, open-source visualization and collaboration middleware that utilizes shared national and international cyberinfrastructure for the advancement of scientific research and education.
Video System Highlights Hydrogen Fires
NASA Technical Reports Server (NTRS)
Youngquist, Robert C.; Gleman, Stuart M.; Moerk, John S.
1992-01-01
Video system combines images from visible spectrum and from three bands in infrared spectrum to produce color-coded display in which hydrogen fires distinguished from other sources of heat. Includes linear array of 64 discrete lead selenide mid-infrared detectors operating at room temperature. Images overlaid on black and white image of same scene from standard commercial video camera. In final image, hydrogen fires appear red; carbon-based fires, blue; and other hot objects, mainly green and combinations of green and red. Where no thermal source present, image remains in black and white. System enables high degree of discrimination between hydrogen flames and other thermal emitters.
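The band-comparison and overlay logic this brief describes might be modeled roughly as follows. The band names, the threshold, and the hard per-pixel classification are illustrative assumptions; the actual system blends 64-detector mid-infrared data with the visible image rather than thresholding it:

```python
import numpy as np

def colorize_fires(visible, ir_h2, ir_carbon, ir_thermal, thresh=0.3):
    """Overlay color-coded heat classes on a grayscale scene, after the
    scheme in the brief: hydrogen fires red, carbon-based fires blue,
    other hot objects green; pixels with no thermal source stay in
    black and white."""
    out = np.stack([visible] * 3, axis=-1).astype(float)  # grayscale -> RGB
    h2 = ir_h2 > thresh
    carbon = (ir_carbon > thresh) & ~h2
    other = (ir_thermal > thresh) & ~h2 & ~carbon
    out[h2] = [1.0, 0.0, 0.0]      # hydrogen fire: red
    out[carbon] = [0.0, 0.0, 1.0]  # carbon-based fire: blue
    out[other] = [0.0, 1.0, 0.0]   # other hot object: green
    return out
```

A real implementation would mix the class color with the visible intensity instead of replacing it, so that scene detail survives under the overlay.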
Advances in Projection Technology for On-Line Instruction.
ERIC Educational Resources Information Center
Davis, H. Scott; Miller, Marsha
This document consists of supplemental information designed to accompany a presentation on the application of projection technology, including video projectors and liquid crystal display (LCD) devices, in the online catalog library instruction program at the Indiana State University libraries. Following an introductory letter, the packet includes:…
Sexual Orientation and U.S. Military Personnel Policy: Options and Assessment
1993-01-01
include smaller actions, such as allocation of time to the new policy and keeping the change before members through video or other messages such as...were also taken. A condensed video and still picture record has been provided separately, and the complete videotape and all photography have been...touching, leering, lascivious remarks and the display of porno...
High-definition video display based on the FPGA and THS8200
NASA Astrophysics Data System (ADS)
Qian, Jia; Sui, Xiubao
2014-11-01
This paper presents a high-definition video display solution based on the FPGA and THS8200. The THS8200 is a video encoder chip from Texas Instruments (TI); it has three 10-bit DAC channels, accepts video data in both 4:2:2 and 4:4:4 formats, and can synchronize data either through the dedicated synchronization signals HSYNC and VSYNC or from the SAV/EAV codes embedded in the video stream. In this paper, we utilize the address and control signals generated by the FPGA to access the data-storage array; the FPGA then generates the corresponding digital video signals YCbCr. These signals, combined with the synchronization signals HSYNC and VSYNC that are also generated by the FPGA, act as the input signals of the THS8200. To meet the bandwidth requirements of high-definition TV, we adopt video input in the 4:2:2 format over a 2×10-bit interface. The THS8200's internal registers are set by the FPGA over the I2C bus; as a result, the chip generates synchronization signals that conform to the SMPTE standard and converts the digital video signals YCbCr into analog video signals YPbPr. Hence, the composite analog output signals YPbPr consist of the image data signal and the synchronization signal, which are superimposed inside the THS8200. The experimental research indicates that the method presented in this paper is a viable solution for high-definition video display, conforming to the input requirements of new high-definition display devices.
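The I2C register setup step might look like the following in software. The register addresses and values below are placeholders, not the THS8200's actual register map (consult the TI datasheet), and the `write_byte_data` bus interface is an assumption modeled on the smbus2 Python API; the paper's design performs these writes from FPGA logic:

```python
# Hypothetical register map -- placeholders only; the real addresses
# and values come from the TI THS8200 datasheet.
THS8200_ADDR = 0x20  # 7-bit I2C address (assumed)

INIT_SEQUENCE = [
    (0x03, 0x01),  # e.g. release reset / enable DACs (placeholder)
    (0x1C, 0x58),  # e.g. select 4:2:2 input over the 2x10-bit bus (placeholder)
    (0x82, 0x1B),  # e.g. SMPTE sync generation mode (placeholder)
]

def configure_ths8200(bus, addr=THS8200_ADDR, seq=INIT_SEQUENCE):
    """Write the init sequence over I2C. `bus` needs a
    write_byte_data(addr, reg, val) method, as smbus2 provides."""
    for reg, val in seq:
        bus.write_byte_data(addr, reg, val)
```

With a real bus object (e.g. `smbus2.SMBus(1)`), `configure_ths8200(bus)` would issue the three writes in order; the sequencing, not the specific values, is the point of the sketch.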
Woo, Kevin L; Rieucau, Guillaume
2008-07-01
The increasing use of the video playback technique in behavioural ecology reveals a growing need to ensure better control of the visual stimuli that focal animals experience. Technological advances now allow researchers to develop computer-generated animations instead of using video sequences of live-acting demonstrators. However, care must be taken to match the motion characteristics (speed and velocity) of the animation to the original video source. Here, we presented a tool based on the use of an optic flow analysis program to measure the resemblance of the motion characteristics of computer-generated animations to videos of live-acting animals. We examined three distinct displays (tail-flick (TF), push-up body rock (PUBR), and slow arm wave (SAW)) exhibited by animations of Jacky dragons (Amphibolurus muricatus) that were compared to the original video sequences of live lizards. We found no significant differences between the motion characteristics of videos and animations across all three displays. Our results showed that our animations matched the speed and velocity features of each display. Researchers need to ensure that similar motion characteristics in animation and video stimuli are represented, and this feature is a critical component in the future success of the video playback technique.
Telemetry and Communication IP Video Player
NASA Technical Reports Server (NTRS)
OFarrell, Zachary L.
2011-01-01
Aegis Video Player is the name of the video-over-IP system for the Telemetry and Communications group of the Launch Services Program. Aegis' purpose is to display video streamed over a network connection to be viewed during launches. To accomplish this task, a VLC ActiveX plug-in was used in C# to provide the basic capabilities of video streaming. The program was then customized for use during launches. The VLC plug-in can be configured programmatically to display a single stream, but for this project multiple streams needed to be accessed. To accomplish this, an easy-to-use, informative menu system was added to the program to enable users to quickly switch between videos. Other features were added to make the player more useful, such as watching multiple videos at once and watching a video in full screen.
Eye movements while viewing narrated, captioned, and silent videos
Ross, Nicholas M.; Kowler, Eileen
2013-01-01
Videos are often accompanied by narration delivered either by an audio stream or by captions, yet little is known about saccadic patterns while viewing narrated video displays. Eye movements were recorded while viewing video clips with (a) audio narration, (b) captions, (c) no narration, or (d) concurrent captions and audio. A surprisingly large proportion of time (>40%) was spent reading captions even in the presence of a redundant audio stream. Redundant audio did not affect the saccadic reading patterns but did lead to skipping of some portions of the captions and to delays of saccades made into the caption region. In the absence of captions, fixations were drawn to regions with a high density of information, such as the central region of the display, and to regions with high levels of temporal change (actions and events), regardless of the presence of narration. The strong attraction to captions, with or without redundant audio, raises the question of what determines how time is apportioned between captions and video regions so as to minimize information loss. The strategies of apportioning time may be based on several factors, including the inherent attraction of the line of sight to any available text, the moment by moment impressions of the relative importance of the information in the caption and the video, and the drive to integrate visual text accompanied by audio into a single narrative stream. PMID:23457357
A system for the real-time display of radar and video images of targets
NASA Technical Reports Server (NTRS)
Allen, W. W.; Burnside, W. D.
1990-01-01
Described here is a software and hardware system for the real-time display of radar and video images for use in a measurement range. The main purpose is to give the reader a clear idea of the software and hardware design and its functions. This system is designed around a Tektronix XD88-30 graphics workstation, used to display radar images superimposed on video images of the actual target. The system's purpose is to provide a platform for the analysis and documentation of radar images and their associated targets in a menu-driven, user-oriented environment.
NASA Tech Briefs, December 2000. Volume 24, No. 12
NASA Technical Reports Server (NTRS)
2000-01-01
Topics include: special coverage sections on Imaging/Video/Display Technology, and sections on electronic components and systems, test and measurement, software, information sciences, and special sections of Electronics Tech Briefs and Motion Control Tech Briefs.
Does a video displaying a stair climbing model increase stair use in a worksite setting?
Van Calster, L; Van Hoecke, A-S; Octaef, A; Boen, F
2017-08-01
This study evaluated the effects of improving the visibility of the stairwell and of displaying a video with a stair climbing model on climbing and descending stair use in a worksite setting. Intervention study. Three consecutive one-week intervention phases were implemented: (1) the visibility of the stairs was improved by the attachment of pictograms that indicated the stairwell; (2) a video showing a stair climbing model was sent to the employees by email; and (3) the same video was displayed on a television screen at the point-of-choice (POC) between the stairs and the elevator. The interventions took place in two buildings. The implementation of the interventions varied between these buildings and the sequence was reversed. Improving the visibility of the stairs increased both stair climbing (+6%) and descending stair use (+7%) compared with baseline. Sending the video by email yielded no additional effect on stair use. By contrast, displaying the video at the POC increased stair climbing in both buildings by 12.5% on average. One week after the intervention, the positive effects on stair climbing remained in one of the buildings, but not in the other. These findings suggest that improving the visibility of the stairwell and displaying a stair climbing model on a screen at the POC can result in a short-term increase in both climbing and descending stair use. Copyright © 2017 The Royal Society for Public Health. Published by Elsevier Ltd. All rights reserved.
Feasibility study of utilizing ultraportable projectors for endoscopic video display (with videos).
Tang, Shou-Jiang; Fehring, Amanda; Mclemore, Mac; Griswold, Michael; Wang, Wanmei; Paine, Elizabeth R; Wu, Ruonan; To, Filip
2014-10-01
Modern endoscopy requires video display. Recent miniaturized, ultraportable projectors are affordable, durable, and offer quality image display. Explore feasibility of using ultraportable projectors in endoscopy. Prospective bench-top comparison; clinical feasibility study. Masked comparison study of images displayed via 2 Samsung ultraportable light-emitting diode projectors (pocket-sized SP-HO3; pico projector SP-P410M) and 1 Microvision Showwx-II Laser pico projector. BENCH-TOP FEASIBILITY STUDY: Prerecorded endoscopic video was streamed via computer. CLINICAL COMPARISON STUDY: Live high-definition endoscopy video was simultaneously displayed through each projector onto a standard liquid crystal display monitor and projected onto a portable, pull-down projection screen. Endoscopists, endoscopy nurses, and technicians rated video images; ratings were analyzed by linear mixed-effects regression models with random intercepts. All projectors were easy to set up, adjust, focus, and operate, with no real-time lapse for any. Bench-top study outcomes: Samsung pico preferred to Laser pico, overall rating 1.5 units higher (95% confidence interval [CI] = 0.7-2.4), P < .001; Samsung pocket preferred to Laser pico, 3.3 units higher (95% CI = 2.4-4.1), P < .001; Samsung pocket preferred to Samsung pico, 1.7 units higher (95% CI = 0.9-2.5), P < .001. The clinical comparison study confirmed the Samsung pocket projector as best, with a higher overall rating of 2.3 units (95% CI = 1.6-3.0), P < .001, than Samsung pico. Low brightness currently limits pico projector use in clinical endoscopy. The pocket projector, with higher brightness levels (170 lumens), is clinically useful. Continued improvements to ultraportable projectors will fill a needed niche in endoscopy through portability, reduced cost, and equal or better image quality. © The Author(s) 2013.
STS-114 Flight Day 13 and 14 Highlights
NASA Technical Reports Server (NTRS)
2005-01-01
On Flight Day 13, the crew of Space Shuttle Discovery on the STS-114 Return to Flight mission (Commander Eileen Collins, Pilot James Kelly, Mission Specialists Soichi Noguchi, Stephen Robinson, Andrew Thomas, Wendy Lawrence, and Charles Camarda) hear a weather report from Mission Control on conditions at the shuttle's possible landing sites. The video includes a view of a storm at sea. Noguchi appears in front of a banner for the Japanese Space Agency JAXA, displaying a baseball signed by Japanese MLB players, demonstrating origami, displaying other crafts, and playing the keyboard. The primary event on the video is an interview of the whole crew, in which they discuss the importance of their mission, lessons learned, shuttle operations, shuttle safety and repair, extravehicular activities (EVAs), astronaut training, and shuttle landing. Mission Control dedicates the song "A Piece of Sky" to the Shuttle crew, while the Earth is visible below the orbiter. The video ends with a view of the Earth limb lit against a dark background.
Evaluation of a HDR image sensor with logarithmic response for mobile video-based applications
NASA Astrophysics Data System (ADS)
Tektonidis, Marco; Pietrzak, Mateusz; Monnin, David
2017-10-01
The performance of mobile video-based applications using conventional LDR (Low Dynamic Range) image sensors highly depends on the illumination conditions. As an alternative, HDR (High Dynamic Range) image sensors with logarithmic response are capable of acquiring illumination-invariant HDR images in a single shot. We have implemented a complete image processing framework for a HDR sensor, including preprocessing methods (nonuniformity correction (NUC), cross-talk correction (CTC), and demosaicing) as well as tone mapping (TM). We have evaluated the HDR sensor for video-based applications w.r.t. the display of images and w.r.t. image analysis techniques. Regarding the display, we have investigated the image intensity statistics over time, and regarding image analysis we assessed the number of feature correspondences between consecutive frames of temporal image sequences. For the evaluation we used HDR image data recorded from a vehicle on outdoor or combined outdoor/indoor itineraries, and we performed a comparison with corresponding conventional LDR image data.
Resources for Improving Computerized Learning Environments.
ERIC Educational Resources Information Center
Yeaman, Andrew R. J.
1989-01-01
Presents an annotated review of human factors literature that discusses computerized environments. Topics discussed include the application of office automation practices to educational environments; video display terminal (VDT) workstations; health and safety hazards; planning educational facilities; ergonomics in computerized offices; and…
Method and System for Producing Full Motion Media to Display on a Spherical Surface
NASA Technical Reports Server (NTRS)
Starobin, Michael A. (Inventor)
2015-01-01
A method and system for producing full motion media for display on a spherical surface is described. The method may include selecting a subject of full motion media for display on a spherical surface. The method may then include capturing the selected subject as full motion media (e.g., full motion video) in a rectilinear domain. The method may then include processing the full motion media in the rectilinear domain for display on a spherical surface, such as by orienting the full motion media, adding rotation to the full motion media, processing edges of the full motion media, and/or distorting the full motion media in the rectilinear domain for instance. After processing the full motion media, the method may additionally include providing the processed full motion media to a spherical projection system, such as a Science on a Sphere system.
Free viewpoint TV and its international standardization
NASA Astrophysics Data System (ADS)
Tanimoto, Masayuki
2009-05-01
We have developed a new type of television named FTV (Free-viewpoint TV). FTV is an innovative visual medium that enables us to view a 3D scene by freely changing our viewpoints. We proposed the concept of FTV and constructed the world's first real-time system including the complete chain of operation from image capture to display. We also realized FTV on a single PC and FTV with free listening-point audio. FTV is based on the ray-space method that represents one ray in real space with one point in the ray-space. We have also developed new types of ray capture and display technologies, such as a 360-degree mirror-scan ray capturing system and a 360-degree ray-reproducing display. MPEG regarded FTV as the most challenging 3D media and started the international standardization activities of FTV. The first phase of FTV is MVC (Multi-view Video Coding) and the second phase is 3DV (3D Video). MVC was completed in March 2009. 3DV is a standard that targets serving a variety of 3D displays. It will be completed within the next two years.
NASA Astrophysics Data System (ADS)
Sasaki, T.; Azuma, S.; Matsuda, S.; Nagayama, A.; Ogido, M.; Saito, H.; Hanafusa, Y.
2016-12-01
The Japan Agency for Marine-Earth Science and Technology (JAMSTEC) archives a large amount of deep-sea research videos and photos obtained by JAMSTEC's research submersibles and camera-equipped vehicles. The web site "JAMSTEC E-library of Deep-sea Images: J-EDI" (http://www.godac.jamstec.go.jp/jedi/e/) has made videos and photos available to the public via the Internet since 2011. Users can search for target videos and photos at J-EDI by keywords, easy-to-understand icons, and dive information, because operating staff classify videos and photos by content, e.g. living organisms and geological environment, and add comments to them. Dive survey data, including videos and photos, are not only valuable academically but also helpful for education and outreach activities. To improve visibility for broader communities, this year we added new 3-dimensional display functions that synchronize various dive survey data with videos. New functions: Users can search for dive survey data on 3D maps with plotted dive points using the WebGL virtual map engine "Cesium". By selecting a dive point, users can watch the deep-sea videos and photos and associated environmental data, e.g. water temperature, salinity, rock and biological sample photos, obtained by the dive survey. Users can browse a dive track visualized in a 3D virtual space using a WebGL JavaScript library. By synchronizing this virtual dive track with videos, users can watch deep-sea videos recorded at any point on a dive track. Users can play an animation in which a submersible-shaped polygon automatically traces a 3D virtual dive track while the displays of dive survey data are synchronized with the trace. Users can also refer directly to additional information in other JAMSTEC data sites, such as the marine biodiversity database, marine biological sample database, rock sample database, and cruise and dive information database, from each page on which a 3D virtual dive track is displayed.
A 3D visualization of a dive track lets users experience a virtual dive survey. In addition, by synchronizing a virtual dive track with videos, the living organisms and geological environments at a dive point become easier to understand. These functions will therefore visually support the understanding of deep-sea environments in lectures and educational activities.
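The synchronization idea this abstract describes can be sketched in a few lines: given a dive track as timestamped 3D points, interpolate the submersible's position at the current video time so that a marker can trace the track while the video plays. A minimal sketch, assuming linear interpolation and an illustrative `(time_s, x, y, depth)` record layout (J-EDI's actual data model is not specified here):

```python
# Hypothetical sketch of video/track synchronization: look up the
# submersible's interpolated position at the current video timestamp.
from bisect import bisect_right

def position_at(track, t):
    """track: sorted list of (time_s, x, y, depth); linear interpolation."""
    times = [p[0] for p in track]
    i = bisect_right(times, t)
    if i == 0:                     # before the dive started
        return track[0][1:]
    if i == len(track):            # after the dive ended
        return track[-1][1:]
    t0, *p0 = track[i - 1]
    t1, *p1 = track[i]
    f = (t - t0) / (t1 - t0)       # fraction of the way between fixes
    return tuple(a + f * (b - a) for a, b in zip(p0, p1))

track = [(0, 0.0, 0.0, 0.0), (60, 10.0, 0.0, 100.0)]
print(position_at(track, 30))      # halfway along the first segment
```

Each video frame time would be fed to `position_at` to move the submersible-shaped polygon along the 3D track.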
Interactive display system having a scaled virtual target zone
Veligdan, James T.; DeSanto, Leonard
2006-06-13
A display system includes a waveguide optical panel having an inlet face and an opposite outlet face. A projector and imaging device cooperate with the panel for projecting a video image thereon. An optical detector bridges at least a portion of the waveguides for detecting a location on the outlet face within a target zone of an inbound light spot. A controller is operatively coupled to the imaging device and detector for displaying a cursor on the outlet face corresponding with the detected location of the spot within the target zone.
Augmenting reality in Direct View Optical (DVO) overlay applications
NASA Astrophysics Data System (ADS)
Hogan, Tim; Edwards, Tim
2014-06-01
The integration of overlay displays into rifle scopes can transform precision Direct View Optical (DVO) sights into intelligent interactive fire-control systems. Overlay displays can provide ballistic solutions within the sight for dramatically improved targeting, can fuse sensor video to extend targeting into nighttime or dirty battlefield conditions, and can overlay complex situational awareness information over the real-world scene. High brightness overlay solutions for dismounted soldier applications have previously been hindered by excessive power consumption, weight, and bulk, making them unsuitable for man-portable, battery powered applications. This paper describes the advancements and capabilities of a high brightness, ultra-low power text and graphics overlay display module developed specifically for integration into DVO weapon sight applications. Central to the overlay display module was the development of a new general purpose low power graphics controller and dual-path display driver electronics. The graphics controller interface is a simple 2-wire RS-232 serial interface compatible with existing weapon systems such as the IBEAM ballistic computer and the RULR and STORM laser rangefinders (LRF). The module features include multiple graphics layers, user configurable fonts and icons, and parameterized vector rendering, making it suitable for general purpose DVO overlay applications. The module is configured for graphics-only operation for daytime use and overlays graphics with video for nighttime applications. The miniature footprint and ultra-low power consumption of the module enable a new generation of intelligent DVO systems; the module has been implemented for resolutions from VGA to SXGA, in monochrome and color, and in graphics applications with and without sensor video.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-04-19
... INTERNATIONAL TRADE COMMISSION [Investigation No. 337-TA-828] Certain Video Displays and Products Using and Containing Same; Investigations: Terminations, Modifications and Rulings AGENCY: U.S. International Trade Commission. ACTION: Notice. SUMMARY: Notice is hereby given that the U.S. International...
RAPID: A random access picture digitizer, display, and memory system
NASA Technical Reports Server (NTRS)
Yakimovsky, Y.; Rayfield, M.; Eskenazi, R.
1976-01-01
RAPID is a system capable of providing convenient digital analysis of video data in real-time. It has two modes of operation. The first allows for continuous digitization of an EIA RS-170 video signal. Each frame in the video signal is digitized and written in 1/30 of a second into RAPID's internal memory. The second mode leaves the content of the internal memory independent of the current input video. In both modes of operation the image contained in the memory is used to generate an EIA RS-170 composite video output signal representing the digitized image in the memory so that it can be displayed on a monitor.
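The two operating modes described in this abstract amount to a frame store that is either continuously overwritten by the input or frozen while the output keeps reading from it. A minimal sketch under those assumptions (class and method names are illustrative, not from the original report):

```python
# Hypothetical sketch of RAPID's two modes: "live" digitizes every incoming
# frame into internal memory; "hold" freezes the memory while input
# continues. The display path always regenerates video from the memory.
class FrameStore:
    def __init__(self):
        self.memory = None   # last digitized frame
        self.live = True     # mode 1: continuous digitization

    def on_frame(self, frame):
        """Called once per incoming RS-170 frame (every 1/30 s)."""
        if self.live:
            self.memory = frame   # digitize and store

    def hold(self):
        """Switch to mode 2: memory becomes independent of the input."""
        self.live = False

    def output_frame(self):
        """Regenerate the composite output from the stored image."""
        return self.memory

store = FrameStore()
store.on_frame("frame-1")
store.on_frame("frame-2")
store.hold()
store.on_frame("frame-3")    # ignored: memory is frozen
print(store.output_frame())  # prints frame-2
```

Because the output is always synthesized from memory rather than passed through, the held image remains displayable on a monitor indefinitely.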
Optimization of the polyplanar optical display electronics for a monochrome B-52 display
DOE Office of Scientific and Technical Information (OSTI.GOV)
DeSanto, L.
The Polyplanar Optical Display (POD) is a unique display screen which can be used with any projection source. The prototype ten-inch display is two inches thick and has a matte black face which allows for high contrast images. The prototype being developed is a form, fit and functional replacement display for the B-52 aircraft, which uses a monochrome ten-inch display. In order to achieve a long lifetime, the new display uses a new 200 mW green solid-state laser (10,000 hr. life) at 532 nm as its light source. To produce real-time video, the laser light is being modulated by a Digital Light Processing (DLP™) chip manufactured by Texas Instruments (TI). In order to use the solid-state laser as the light source and also fit within the constraints of the B-52 display, the Digital Micromirror Device (DMD™) chip is operated remotely from the Texas Instruments circuit board. In order to achieve increased brightness, a monochrome digitizing interface was investigated. The operation of the DMD™ divorced from the light engine and the interfacing of the DMD™ board with the RS-170 video format specific to the B-52 aircraft will be discussed, including the increased brightness of the monochrome digitizing interface. A brief description of the electronics required to drive the new 200 mW laser is also presented.
Vision systems for manned and robotic ground vehicles
NASA Astrophysics Data System (ADS)
Sanders-Reed, John N.; Koon, Phillip L.
2010-04-01
A Distributed Aperture Vision System for ground vehicles is described. An overview of the hardware including sensor pod, processor, video compression, and displays is provided. This includes a discussion of the choice between an integrated sensor pod and individually mounted sensors, open architecture design, and latency issues as well as flat panel versus head mounted displays. This technology is applied to various ground vehicle scenarios, including closed-hatch operations (operator in the vehicle), remote operator tele-operation, and supervised autonomy for multi-vehicle unmanned convoys. In addition, remote vision for automatic perimeter surveillance using autonomous vehicles and automatic detection algorithms is demonstrated.
Method and apparatus for calibrating a tiled display
NASA Technical Reports Server (NTRS)
Chen, Chung-Jen (Inventor); Johnson, Michael J. (Inventor); Chandrasekhar, Rajesh (Inventor)
2001-01-01
A display system that can be calibrated and re-calibrated with a minimal amount of manual intervention. To accomplish this, one or more cameras are provided to capture an image of the display screen. The resulting captured image is processed to identify any non-desirable characteristics, including visible artifacts such as seams, bands, rings, etc. Once the non-desirable characteristics are identified, an appropriate transformation function is determined. The transformation function is used to pre-warp the input video signal that is provided to the display such that the non-desirable characteristics are reduced or eliminated from the display. The transformation function preferably compensates for spatial non-uniformity, color non-uniformity, luminance non-uniformity, and other visible artifacts.
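One piece of the calibration loop described above, luminance-uniformity correction, can be sketched concretely: the camera measures the brightness each screen region actually produces, and the transformation function becomes a per-region gain that pre-warps the input so seams and bands flatten out. This gain-only model and all names below are illustrative simplifications, not the patent's method (which also handles spatial and color non-uniformity):

```python
# Hedged sketch of luminance pre-warping for a tiled display: regions that
# the camera measured as too bright are attenuated in the input signal.
def build_gain_map(measured, target):
    """Per-region gain computed from camera measurements."""
    return [[target / m for m in row] for row in measured]

def prewarp(frame, gain):
    """Apply the transformation to an input frame (clamped to 8 bits)."""
    return [[min(255, round(p * g)) for p, g in zip(frow, grow)]
            for frow, grow in zip(frame, gain)]

# A bright seam (120) next to uniform tiles (100), target luminance 100:
measured = [[100.0, 120.0],
            [100.0, 100.0]]
gain = build_gain_map(measured, target=100.0)

frame = [[200, 200],
         [200, 200]]
print(prewarp(frame, gain))   # the over-bright region is attenuated
```

Re-calibration then reduces to re-photographing the screen and rebuilding the gain map, with no manual adjustment of individual projectors.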
An Airborne Programmable Digital to Video Converter Interface and Operation Manual.
1981-02-01
Keywords: scan converter; video display; television display. The abstract describes a programmable cathode ray tube (CRT) controller which is accessed by the CPU to permit operation in a wide variety of modes, and an alphanumeric generator.
Potential Health Hazards of Video Display Terminals.
ERIC Educational Resources Information Center
Murray, William E.; And Others
In response to a request from three California unions to evaluate potential health hazards from the use of video display terminals (VDT's) in information processing applications, the National Institute for Occupational Safety and Health (NIOSH) conducted a limited field investigation of three companies in the San Francisco-Oakland Bay Area. A…
NASA Technical Reports Server (NTRS)
Robbins, Woodrow E. (Editor); Fisher, Scott S. (Editor)
1989-01-01
Special attention was given to problems of stereoscopic display devices, such as CAD for enhancement of the design process in visual arts, stereo-TV improvement of remote manipulator performance, a voice-controlled stereographic video camera system, and head-mounted displays and their low-cost design alternatives. Also discussed were a novel approach to chromostereoscopic microscopy, computer-generated barrier-strip autostereography and lenticular stereograms, and parallax barrier three-dimensional TV. Additional topics include processing and user interface issues and visualization applications, including automated analysis and fluid flow topology, optical tomographic measurements of mixing fluids, visualization of complex data, visualization environments, and visualization management systems.
NASA Astrophysics Data System (ADS)
Dastageeri, H.; Storz, M.; Koukofikis, A.; Knauth, S.; Coors, V.
2016-09-01
Providing mobile location-based information for pedestrians faces many challenges. On the one hand, the accuracy of localisation indoors and outdoors is restricted due to technical limitations of GPS and Beacons. On the other, only a small display is available both to present information and to build a user interface. In addition, the software solution has to consider the hardware characteristics of mobile devices during implementation in order to achieve minimum latency. This paper describes our approach, which combines image tracking with GPS or Beacons to ensure orientation and precise localisation. To communicate the information on Points of Interest (POIs), we chose Augmented Reality (AR). For this concept of operations, we used not only the display but also the acceleration and position sensors as a user interface. This paper goes into detail on the optimization of the image tracking algorithms, the development of the video-based AR player for the Android platform, and the evaluation of videos as an AR element with a view to providing a good user experience. To set up content for the POIs, or even generate a tour, we used and extended the Open Geospatial Consortium (OGC) standard Augmented Reality Markup Language (ARML).
Federal Register 2010, 2011, 2012, 2013, 2014
2013-05-24
... Accessible Emergency Information; Apparatus Requirements for Emergency Information and Video Description...] Accessible Emergency Information; Apparatus Requirements for Emergency Information and Video Description... manufacturers of devices that display video programming to ensure that certain apparatus are able to make...
ERIC Educational Resources Information Center
McKimmie, Tim; Smith, Jeanette
1994-01-01
Presents an overview of the issues related to extremely low frequency (ELF) radiation from computer video display terminals. Highlights include electromagnetic fields; measuring ELF; computer use in libraries; possible health effects; electromagnetic radiation; litigation and legislation; standards and safety; and what libraries can do. (Contains…
Use of Internet Resources in the Biology Lecture Classroom.
ERIC Educational Resources Information Center
Francis, Joseph W.
2000-01-01
Introduces internet resources that are available for instructional use in biology classrooms. Provides information on video-based technologies to create and capture video sequences, interactive web sites that allow interaction with biology simulations, online texts, and interactive videos that display animated video sequences. (YDS)
Secure Video Surveillance System Acquisition Software
DOE Office of Scientific and Technical Information (OSTI.GOV)
2009-12-04
The SVSS Acquisition Software collects and displays video images from two cameras through a VPN and stores the images on a collection controller. The software is configured to allow a user to enter a time window to display up to 2 1/2 hours of video for review. The software collects images from the cameras at a rate of 1 image per second and automatically deletes images older than 3 hours. The software runs in a Linux environment and can be run in a virtual machine on Windows XP. The Sandia software integrates the different COTS software packages to build the video review system.
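The retention policy this record describes (1 image per second in, automatic deletion after 3 hours, review windows of at most 2.5 hours) is simple enough to sketch. All names below are illustrative; the actual SVSS stores camera image files, not in-memory tuples:

```python
# Hypothetical sketch of the SVSS retention/review rules: ingest prunes
# anything older than 3 hours; review caps the requested window at 2.5 h.
RETAIN_S = 3 * 3600              # auto-delete threshold (seconds)
MAX_WINDOW_S = int(2.5 * 3600)   # maximum review window (seconds)

class Collector:
    def __init__(self):
        self.frames = []         # (timestamp, image) in arrival order

    def ingest(self, t, image):
        self.frames.append((t, image))
        # prune anything older than the retention limit
        self.frames = [(ts, im) for ts, im in self.frames
                       if t - ts <= RETAIN_S]

    def review(self, start, end):
        end = min(end, start + MAX_WINDOW_S)   # cap the window at 2.5 h
        return [im for ts, im in self.frames if start <= ts <= end]

c = Collector()
for t in range(0, 4 * 3600, 60):   # one frame a minute for 4 hours
    c.ingest(t, f"img{t}")
print(len(c.frames))               # only the most recent 3 hours survive
```

Pruning on ingest keeps storage bounded without a separate cleanup process, at the cost of an O(n) scan per frame; a real system would likely delete files in batches.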
36 CFR 1194.24 - Video and multimedia products.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 36 Parks, Forests, and Public Property 3 2010-07-01 2010-07-01 false Video and multimedia products... Video and multimedia products. (a) All analog television displays 13 inches and larger, and computer... training and informational video and multimedia productions which support the agency's mission, regardless...
36 CFR 1194.24 - Video and multimedia products.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 36 Parks, Forests, and Public Property 3 2011-07-01 2011-07-01 false Video and multimedia products... Video and multimedia products. (a) All analog television displays 13 inches and larger, and computer... training and informational video and multimedia productions which support the agency's mission, regardless...
36 CFR 1194.24 - Video and multimedia products.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 36 Parks, Forests, and Public Property 3 2012-07-01 2012-07-01 false Video and multimedia products... Video and multimedia products. (a) All analog television displays 13 inches and larger, and computer... training and informational video and multimedia productions which support the agency's mission, regardless...
36 CFR 1194.24 - Video and multimedia products.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 36 Parks, Forests, and Public Property 3 2014-07-01 2014-07-01 false Video and multimedia products... Video and multimedia products. (a) All analog television displays 13 inches and larger, and computer... training and informational video and multimedia productions which support the agency's mission, regardless...
Development of Targeting UAVs Using Electric Helicopters and Yamaha RMAX
2007-05-17
including the QNX real-time operating system. The video overlay board is useful to display the onboard camera's image with important information such as... real-time operating system. Fully utilizing the built-in multi-processing architecture with inter-process synchronization and communication
Aidlen, Jeremy T; Glick, Sara; Silverman, Kenneth; Silverman, Harvey F; Luks, Francois I
2009-08-01
Light-weight, low-profile, and high-resolution head-mounted displays (HMDs) now allow personalized viewing of a laparoscopic image. The advantages include unobstructed viewing, regardless of position at the operating table, and the possibility to customize the image (i.e., enhanced reality, picture-in-picture, etc.). The bright image display allows use in daylight surroundings and the low profile of the HMD provides adequate peripheral vision. Theoretic disadvantages include reliance for all on the same image capture and anticues (i.e., reality disconnect) when the projected image remains static despite changes in head position. This can lead to discomfort and even nausea. We have developed a prototype of an interactive laparoscopic image display that allows hands-free control of the displayed image by changes in spatial orientation of the operator's head. The prototype consists of an HMD, a spatial orientation device, and computer software to enable hands-free panning and zooming of a video-endoscopic image display. The spatial orientation device uses magnetic fields created by a transmitter and receiver, each containing three orthogonal coils. The transmitter coils are efficiently driven, using USB power only, by a newly developed circuit, each at a unique frequency. The HMD-mounted receiver system links to a commercially available PC-interface PCI-bus sound card (M-Audiocard Delta 44; Avid Technology, Tewksbury, MA). Analog signals at the receiver are filtered, amplified, and converted to digital signals, which are processed to control the image display. The prototype uses a proprietary static fish-eye lens and software for the distortion-free reconstitution of any portion of the captured image. Left-right and up-down motions of the head (and HMD) produce real-time panning of the displayed image. Motion of the head toward, or away from, the transmitter causes real-time zooming in or out, respectively, of the displayed image.
This prototype of the interactive HMD allows hands-free, intuitive control of the laparoscopic field, independent of the captured image.
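The head-to-image mapping the abstract describes (left-right and up-down motion pans, distance to the transmitter zooms) can be sketched as a small pure function. The gains, reference distance, and crop-window model below are assumptions for illustration, not the prototype's actual parameters:

```python
# Hedged sketch of head-tracked pan/zoom: yaw/pitch move a viewing window
# across the captured image; moving closer to the transmitter zooms in.
def view_window(yaw_deg, pitch_deg, distance_cm,
                img_w=1920, img_h=1080,
                pan_gain=10.0, ref_dist=60.0):
    """Return (x, y, w, h) of the sub-image to display on the HMD."""
    zoom = max(1.0, ref_dist / distance_cm)   # closer head -> zoom in
    w, h = int(img_w / zoom), int(img_h / zoom)
    cx = img_w / 2 + yaw_deg * pan_gain       # left/right pans horizontally
    cy = img_h / 2 - pitch_deg * pan_gain     # up/down pans vertically
    x = min(max(0, int(cx - w / 2)), img_w - w)   # clamp to image bounds
    y = min(max(0, int(cy - h / 2)), img_h - h)
    return x, y, w, h

print(view_window(0, 0, 60))    # head centered at the reference distance
print(view_window(20, 0, 30))   # look right and lean in: pan plus 2x zoom
```

In the real prototype the crop is taken from the fish-eye-corrected image each frame, so panning and zooming happen with no mechanical camera motion.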
Benady-Chorney, Jessica; Yau, Yvonne; Zeighami, Yashar; Bohbot, Veronique D; West, Greg L
2018-03-21
Action video game players (aVGPs) display increased performance in attention-based tasks and enhanced procedural motor learning. In parallel, the anterior cingulate cortex (ACC) is centrally implicated in specific types of reward-based learning and attentional control, the execution or inhibition of motor commands, and error detection. These processes are hypothesized to support aVGP in-game performance and enhanced learning through in-game feedback. We, therefore, tested the hypothesis that habitual aVGPs would display increased cortical thickness compared with nonvideo game players (nonVGPs). Results showed that the aVGP group (n=17) displayed significantly higher levels of cortical thickness specifically in the dorsal ACC compared with the nonVGP group (n=16). Results are discussed in the context of previous findings examining video game experience, attention/performance, and responses to affective components such as pain and fear.
Video monitoring system for car seat
NASA Technical Reports Server (NTRS)
Elrod, Susan Vinz (Inventor); Dabney, Richard W. (Inventor)
2004-01-01
A video monitoring system for use with a child car seat has video camera(s) mounted in the car seat. The video images are wirelessly transmitted to a remote receiver/display encased in a portable housing that can be removably mounted in the vehicle in which the car seat is installed.
36 CFR § 1194.24 - Video and multimedia products.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 36 Parks, Forests, and Public Property 3 2013-07-01 2012-07-01 true Video and multimedia products... § 1194.24 Video and multimedia products. (a) All analog television displays 13 inches and larger, and... circuitry. (c) All training and informational video and multimedia productions which support the agency's...
Peden, Robert G; Mercer, Rachel; Tatham, Andrew J
2016-10-01
To investigate whether 'surgeon's eye view' videos provided via head-mounted displays can improve skill acquisition and satisfaction in basic surgical training compared with conventional wet-lab teaching. A prospective randomised study of 14 medical students with no prior suturing experience, randomised to 3 groups: 1) conventional teaching; 2) head-mounted display-assisted teaching and 3) head-mounted display self-learning. All were instructed in interrupted suturing followed by 15 minutes' practice. Head-mounted displays provided a 'surgeon's eye view' video demonstrating the technique, available during practice. Subsequently students undertook a practical assessment, where suturing was videoed and graded by masked assessors using a 10-point surgical skill score (1 = very poor technique, 10 = very good technique). Students completed a questionnaire assessing confidence and satisfaction. Suturing ability after teaching was similar between groups (P = 0.229, Kruskal-Wallis test). Median surgical skill scores were 7.5 (range 6-10), 6 (range 3-8) and 7 (range 1-7) following head-mounted display-assisted teaching, conventional teaching, and head-mounted display self-learning respectively. There was good agreement between graders regarding surgical skill scores (rho.c = 0.599, r = 0.603), and no difference in number of sutures placed between groups (P = 0.120). The head-mounted display-assisted teaching group reported greater enjoyment than those attending conventional teaching (P = 0.033). Head-mounted display self-learning was regarded as least useful (7.4 vs 9.0 for conventional teaching, P = 0.021), but more enjoyable than conventional teaching (9.6 vs 8.0, P = 0.050). Teaching augmented with head-mounted displays was significantly more enjoyable than conventional teaching. Students undertaking self-directed learning using head-mounted displays with pre-recorded videos had comparable skill acquisition to those attending traditional wet-lab tutorials. 
Copyright © 2016 IJS Publishing Group Ltd. Published by Elsevier Ltd. All rights reserved.
Miniaturized LEDs for flat-panel displays
NASA Astrophysics Data System (ADS)
Radauscher, Erich J.; Meitl, Matthew; Prevatte, Carl; Bonafede, Salvatore; Rotzoll, Robert; Gomez, David; Moore, Tanya; Raymond, Brook; Cok, Ronald; Fecioru, Alin; Trindade, António Jose; Fisher, Brent; Goodwin, Scott; Hines, Paul; Melnik, George; Barnhill, Sam; Bower, Christopher A.
2017-02-01
Inorganic light emitting diodes (LEDs) serve as bright pixel-level emitters in displays, from indoor/outdoor video walls with pixel sizes ranging from one to thirty millimeters to micro displays with more than one thousand pixels per inch. Pixel sizes that fall between those ranges, roughly 50 to 500 microns, are some of the most commercially significant ones, including flat panel displays used in smart phones, tablets, and televisions. Flat panel displays that use inorganic LEDs as pixel level emitters (μILED displays) can offer levels of brightness, transparency, and functionality that are difficult to achieve with other flat panel technologies. Cost-effective production of μILED displays requires techniques for precisely arranging sparse arrays of extremely miniaturized devices on a panel substrate, such as transfer printing with an elastomer stamp. Here we present lab-scale demonstrations of transfer printed μILED displays and the processes used to make them. Demonstrations include passive matrix μILED displays that use conventional off-the-shelf drive ASICs and active matrix μILED displays that use miniaturized pixel-level control circuits from CMOS wafers. We present a discussion of key considerations in the design and fabrication of highly miniaturized emitters for μILED displays.
Wrist display concept demonstration based on 2-in. color AMOLED
NASA Astrophysics Data System (ADS)
Meyer, Frederick M.; Longo, Sam J.; Hopper, Darrel G.
2004-09-01
The wrist watch needs an upgrade. Recent advances in optoelectronics, microelectronics, and communication theory have established a technology base that now makes the multimedia Dick Tracy watch attainable during the next decade. As a first step towards stuffing the functionality of an entire personal computer (PC) and television receiver under a watch face, we have set a goal of providing wrist video capability to warfighters. Commercial sector work on the wrist form factor already includes all the functionality of a personal digital assistant (PDA) and a full PC operating system. Our strategy is to leverage these commercial developments. In this paper we describe our use of a 2.2 in. diagonal color active matrix organic light emitting diode (AMOLED) device as a wrist-mounted display (WMD) to present either full motion video or computer-generated graphical image formats.
A color video display technique for flow field surveys
NASA Technical Reports Server (NTRS)
Winkelmann, A. E.; Tsao, C. P.
1982-01-01
A computer-driven color video display technique has been developed for the presentation of wind tunnel flow field survey data. The results of both qualitative and quantitative flow field surveys can be presented in high-spatial-resolution, color-coded displays. The technique has been used for data obtained with a hot-wire probe, a split-film probe, a Conrad (pitch) probe and a 5-tube pressure probe in surveys above and behind a wing with partially stalled and fully stalled flow.
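A color-coded display of survey data reduces to mapping each measured scalar to a color. A toy sketch of such a mapping (the paper's actual palette and display hardware are not specified; this blue-green-red scale is an assumption):

```python
def color_code(value, vmin, vmax):
    """Map a scalar (e.g. a measured velocity) to an 8-bit RGB triple on a
    simple blue -> green -> red scale, a stand-in for the paper's palette."""
    t = (value - vmin) / (vmax - vmin)
    t = min(1.0, max(0.0, t))          # clamp out-of-range probe readings
    if t < 0.5:                        # lower half: blue fades into green
        return (0, int(510 * t), int(255 * (1 - 2 * t)))
    return (int(510 * (t - 0.5)), int(255 * (2 - 2 * t)), 0)

# Color-code a small row of hypothetical velocity samples (m/s).
pixels = [color_code(v, 0.0, 40.0) for v in (0.0, 10.0, 20.0, 30.0, 40.0)]
```

Applying this per survey point yields the color-coded image described; quantitative data only changes the scalar being mapped.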
NASA Astrophysics Data System (ADS)
Lee, Seokhee; Lee, Kiyoung; Kim, Man Bae; Kim, JongWon
2005-11-01
In this paper, we propose a design of multi-view stereoscopic HD video transmission system based on MPEG-21 Digital Item Adaptation (DIA). It focuses on the compatibility and scalability to meet various user preferences and terminal capabilities. There exist a large variety of multi-view 3D HD video types according to the methods for acquisition, display, and processing. By following the MPEG-21 DIA framework, the multi-view stereoscopic HD video is adapted according to user feedback. A user can be served multi-view stereoscopic video which corresponds with his or her preferences and terminal capabilities. In our preliminary prototype, we verify that the proposed design can support two different types of display device (stereoscopic and auto-stereoscopic) and switching viewpoints between two available viewpoints.
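The adaptation step can be sketched as capability-driven variant selection. The field names below are assumptions for illustration, not the MPEG-21 DIA schema:

```python
# Candidate stream variants the server could offer (illustrative only).
VARIANTS = [
    {"display": "stereoscopic",      "views": 2, "interleave": "frame-sequential"},
    {"display": "auto-stereoscopic", "views": 2, "interleave": "column"},
    {"display": "2d",                "views": 1, "interleave": None},
]

def adapt(terminal_capability, preferred_viewpoint=0):
    """Return the stream description a DIA-style engine might serve,
    matching the terminal's declared display type and the user's viewpoint."""
    for v in VARIANTS:
        if v["display"] == terminal_capability:
            return {**v, "viewpoint": preferred_viewpoint}
    return {**VARIANTS[-1], "viewpoint": preferred_viewpoint}  # fall back to 2D
```

Viewpoint switching then amounts to re-running `adapt` with a new `preferred_viewpoint` when user feedback arrives.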
Display system employing acousto-optic tunable filter
NASA Technical Reports Server (NTRS)
Lambert, James L. (Inventor)
1995-01-01
An acousto-optic tunable filter (AOTF) is employed to generate a display by driving the AOTF with an RF electrical signal comprising modulated red, green, and blue video scan line signals and scanning the AOTF with a linearly polarized, pulsed light beam, resulting in encoding of color video columns (scan lines) of an input video image into vertical columns of the AOTF output beam. The AOTF is illuminated periodically as each acoustically-encoded scan line fills the cell aperture of the AOTF. A polarizing beam splitter removes the unused first order beam component of the AOTF output and, if desired, overlays a real world scene on the output plane. Resolutions as high as 30,000 lines are possible, providing holographic display capability.
Display system employing acousto-optic tunable filter
NASA Technical Reports Server (NTRS)
Lambert, James L. (Inventor)
1993-01-01
An acousto-optic tunable filter (AOTF) is employed to generate a display by driving the AOTF with an RF electrical signal comprising modulated red, green, and blue video scan line signals and scanning the AOTF with a linearly polarized, pulsed light beam, resulting in encoding of color video columns (scan lines) of an input video image into vertical columns of the AOTF output beam. The AOTF is illuminated periodically as each acoustically-encoded scan line fills the cell aperture of the AOTF. A polarizing beam splitter removes the unused first order beam component of the AOTF output and, if desired, overlays a real world scene on the output plane. Resolutions as high as 30,000 lines are possible, providing holographic display capability.
Virtual displays for 360-degree video
NASA Astrophysics Data System (ADS)
Gilbert, Stephen; Boonsuk, Wutthigrai; Kelly, Jonathan W.
2012-03-01
In this paper we describe a novel approach for comparing users' spatial cognition when using different depictions of 360-degree video on a traditional 2D display. By using virtual cameras within a game engine and texture mapping of these camera feeds to an arbitrary shape, we were able to offer users a 360-degree interface composed of four 90-degree views, two 180-degree views, or one 360-degree view of the same interactive environment. An example experiment is described using these interfaces. This technique for creating alternative displays of wide-angle video facilitates the exploration of how compressed or fish-eye distortions affect spatial perception of the environment and can benefit the creation of interfaces for surveillance and remote system teleoperation.
NASA Technical Reports Server (NTRS)
Bolton, Matthew L.; Bass, Ellen J.; Comstock, James R., Jr.
2006-01-01
The evaluation of human-centered systems can be performed using a variety of different methodologies. This paper describes a human-centered systems evaluation methodology where participants watch 5-second non-interactive videos of a system in operation before supplying judgments and subjective measures based on the information conveyed in the videos. This methodology was used to evaluate the ability of different textures and fields of view to convey spatial awareness in synthetic vision systems (SVS) displays. It produced significant results for both judgment-based and subjective measures. This method is compared to other methods commonly used to evaluate SVS displays based on cost, the amount of experimental time required, experimental flexibility, and the type of data provided.
Segmented cold cathode display panel
NASA Technical Reports Server (NTRS)
Payne, Leslie (Inventor)
1998-01-01
The present invention is a video display device that utilizes the novel concept of generating an electronically controlled pattern of electron emission at the output of a segmented photocathode. This pattern of electron emission is amplified via a channel plate. The result is that an intense electronic image can be accelerated toward a phosphor, thus creating a bright video image. This novel arrangement allows one to produce a full-color flat video display capable of implementation in large formats. In an alternate arrangement, the present invention is provided without the channel plate and a porous conducting surface is provided instead. In this alternate arrangement, the brightness of the image is reduced but the cost of the overall device is significantly lowered because fabrication complexity is significantly decreased.
NASA Astrophysics Data System (ADS)
Newman, R. L.
2002-12-01
How many images can you display at one time with Power Point without getting "postage stamps"? Do you have fantastic datasets that you cannot view because your computer is too slow/small? Do you assume a few 2-D images of a 3-D picture are sufficient? High-end visualization centers can minimize and often eliminate these problems. The new visualization center [http://siovizcenter.ucsd.edu] at Scripps Institution of Oceanography [SIO] immerses users into a virtual world by projecting 3-D images onto a Panoram GVR-120E wall-sized floor-to-ceiling curved screen [7' x 23'] that has 3.2 mega-pixels of resolution. The Infinite Reality graphics subsystem is driven by a single-pipe SGI Onyx 3400 with a system bandwidth of 44 Gbps. The Onyx is powered by 16 MIPS R12K processors and 16 GB of addressable memory. The system is also equipped with transmitters and LCD shutter glasses which permit stereographic 3-D viewing of high-resolution images. This center is ideal for groups of up to 60 people who can simultaneously view these large-format images. A wide range of hardware and software is available, giving the users a totally immersive working environment in which to display, analyze, and discuss large datasets. The system enables simultaneous display of video and audio streams from sources such as SGI megadesktop and stereo megadesktop, S-VHS video, DVD video, and video from a Macintosh or PC. For instance, one-third of the screen might be displaying S-VHS video from a remotely-operated-vehicle [ROV], while the remaining portion of the screen might be used for an interactive 3-D flight over the same parcel of seafloor. The video and audio combinations using this system are numerous, allowing users to combine and explore data and images in innovative ways, greatly enhancing scientists' ability to visualize, understand and collaborate on complex datasets. 
In the not-too-distant future, with the rapid growth in networking speeds in the US, it will be possible for Earth Sciences Departments to collaborate effectively while limiting the amount of physical travel required. This includes porting visualization content to the popular, low-cost Geowall visualization systems, and providing web-based access to databanks filled with stock geoscience visualizations.
Fronto-parietal regulation of media violence exposure in adolescents: a multi-method study
Strenziok, Maren; Krueger, Frank; Deshpande, Gopikrishna; Lenroot, Rhoshel K.; van der Meer, Elke
2011-01-01
Adolescents spend a significant part of their leisure time watching TV programs and movies that portray violence. It is unknown, however, how the extent of violent media use and the severity of aggression displayed affect adolescents’ brain function. We investigated skin conductance responses, brain activation and functional brain connectivity to media violence in healthy adolescents. In an event-related functional magnetic resonance imaging experiment, subjects repeatedly viewed normed videos that displayed different degrees of aggressive behavior. We found a downward linear adaptation in skin conductance responses with increasing aggression and desensitization towards more aggressive videos. Our results further revealed adaptation in a fronto-parietal network including the left lateral orbitofrontal cortex (lOFC), right precuneus and bilateral inferior parietal lobules, again showing downward linear adaptations and desensitization towards more aggressive videos. Granger causality mapping analyses revealed attenuation in the left lOFC, indicating that activation during viewing aggressive media is driven by input from parietal regions that decreased over time, for more aggressive videos. We conclude that aggressive media activates an emotion–attention network that has the capability to blunt emotional responses through reduced attention with repeated viewing of aggressive media contents, which may restrict the linking of the consequences of aggression with an emotional response, and therefore potentially promotes aggressive attitudes and behavior. PMID:20934985
High Tech and Library Access for People with Disabilities.
ERIC Educational Resources Information Center
Roatch, Mary A.
1992-01-01
Describes tools that enable people with disabilities to access print information, including optical character recognition, synthetic voice output, other input devices, Braille access devices, large print displays, television and video, TDD (Telecommunications Devices for the Deaf), and Telebraille. Use of technology by libraries to meet mandates…
A system for automatic analysis of blood pressure data for digital computer entry
NASA Technical Reports Server (NTRS)
Miller, R. L.
1972-01-01
Operation of automatic blood pressure data system is described. Analog blood pressure signal is analyzed by three separate circuits, systolic, diastolic, and cycle defect. Digital computer output is displayed on teletype paper tape punch and video screen. Illustration of system is included.
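The systolic/diastolic analysis can be illustrated with a toy version of the circuits' job: extract the per-cycle maximum (systolic) and minimum (diastolic) from the pressure waveform. The original system used analog circuits, not software; the 120/80 sine below is synthetic:

```python
import math

# Synthetic arterial pressure waveform (mmHg): a 1 Hz "beat" around 100 mmHg.
fs = 100                                     # samples per second
wave = [100 + 20 * math.sin(2 * math.pi * i / fs) for i in range(300)]

def per_cycle_extrema(signal, samples_per_cycle):
    """Systolic = max, diastolic = min of each cardiac cycle."""
    out = []
    for start in range(0, len(signal) - samples_per_cycle + 1, samples_per_cycle):
        cycle = signal[start:start + samples_per_cycle]
        out.append((max(cycle), min(cycle)))
    return out

readings = per_cycle_extrema(wave, fs)       # ~ (120, 80) for each beat
```

A real analyzer must also detect cycle boundaries from the waveform itself rather than assume a fixed rate, which is what the cycle-detect circuit in the system provides.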
NASA Astrophysics Data System (ADS)
Starks, Michael R.
1990-09-01
A variety of low-cost devices for capturing, editing and displaying field sequential 60-cycle stereoscopic video have recently been marketed by 3D TV Corp. and others. When properly used, they give very high quality images with most consumer and professional equipment. Our stereoscopic multiplexers for creating and editing field sequential video in NTSC or component formats (S-VHS, Betacam, RGB) and our Home 3D Theater system employing LCD eyeglasses have made 3D movies and television available to a large audience.
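Field-sequential 3D simply time-multiplexes the two eye views: alternate fields carry alternate eyes, and shuttered glasses demultiplex them in time. A sketch of that logic with placeholder frame contents:

```python
def field_sequential(left_frames, right_frames):
    """Interleave left/right eye frames into a field-sequential stream:
    even fields carry the left eye, odd fields the right eye."""
    fields = []
    for l, r in zip(left_frames, right_frames):
        fields.extend([("L", l), ("R", r)])
    return fields

def demultiplex(fields):
    """Recover the two eye streams, as LCD shutter glasses do in time."""
    left = [f for eye, f in fields if eye == "L"]
    right = [f for eye, f in fields if eye == "R"]
    return left, right

stream = field_sequential(["L0", "L1", "L2"], ["R0", "R1", "R2"])
```

At 60 fields per second this gives each eye 30 views per second, which is why the scheme works with unmodified NTSC-rate equipment.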
ERIC Educational Resources Information Center
Plavnick, Joshua B.
2012-01-01
Video modeling is an effective and efficient methodology for teaching new skills to individuals with autism. New technology may enhance video modeling as smartphones or tablet computers allow for portable video displays. However, the reduced screen size may decrease the likelihood of attending to the video model for some children. The present…
Digital image processing of bone - Problems and potentials
NASA Technical Reports Server (NTRS)
Morey, E. R.; Wronski, T. J.
1980-01-01
The development of a digital image processing system for bone histomorphometry and fluorescent marker monitoring is discussed. The system in question is capable of making measurements of UV or light microscope features on a video screen with either video or computer-generated images, and comprises a microscope, low-light-level video camera, video digitizer and display terminal, color monitor, and PDP 11/34 computer. Capabilities demonstrated in the analysis of an undecalcified rat tibia include the measurement of perimeter and total bone area, and the generation of microscope images, false color images, digitized images and contoured images for further analysis. Software development will be based on an existing software library, specifically the mini-VICAR system developed at JPL. It is noted that the potentials of the system in terms of speed and reliability far exceed any problems associated with hardware and software development.
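The perimeter and area measurements described can be sketched on a toy binary image. The real system worked on digitized microscope video, and this 4-connectivity edge count is just one possible perimeter estimator, not the mini-VICAR implementation:

```python
# Toy binary "bone section": 1 = bone pixel, 0 = background.
IMAGE = [
    [0, 0, 0, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 0, 0, 0],
]

def area(img):
    """Total bone area in pixels (scale by pixel size for physical units)."""
    return sum(sum(row) for row in img)

def perimeter(img):
    """Count exposed edges of bone pixels (4-connectivity)."""
    h, w = len(img), len(img[0])
    edges = 0
    for y in range(h):
        for x in range(w):
            if not img[y][x]:
                continue
            for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ny, nx = y + dy, x + dx
                if ny < 0 or ny >= h or nx < 0 or nx >= w or not img[ny][nx]:
                    edges += 1
    return edges
```

For the 3x2 rectangle above this gives an area of 6 pixels and a perimeter of 10 edge units; calibrating against the microscope magnification converts both to physical measurements.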
Protective laser beam viewing device
Neil, George R.; Jordan, Kevin Carl
2012-12-18
A protective laser beam viewing system or device including a camera selectively sensitive to laser light wavelengths and a viewing screen receiving images from the laser sensitive camera. According to a preferred embodiment of the invention, the camera is worn on the head of the user or incorporated into a goggle-type viewing display so that it is always aimed at the area of viewing interest to the user and the viewing screen is incorporated into a video display worn as goggles over the eyes of the user.
A clinical pilot study of a modular video-CT augmentation system for image-guided skull base surgery
NASA Astrophysics Data System (ADS)
Liu, Wen P.; Mirota, Daniel J.; Uneri, Ali; Otake, Yoshito; Hager, Gregory; Reh, Douglas D.; Ishii, Masaru; Gallia, Gary L.; Siewerdsen, Jeffrey H.
2012-02-01
Augmentation of endoscopic video with preoperative or intraoperative image data [e.g., planning data and/or anatomical segmentations defined in computed tomography (CT) and magnetic resonance (MR)] can improve navigation, spatial orientation, confidence, and tissue resection in skull base surgery, especially with respect to critical neurovascular structures that may be difficult to visualize in the video scene. This paper presents the engineering and evaluation of a video augmentation system for endoscopic skull base surgery translated to use in a clinical study. Extension of previous research yielded a practical system with a modular design that can be applied to other endoscopic surgeries, including orthopedic, abdominal, and thoracic procedures. A clinical pilot study is underway to assess feasibility and benefit to surgical performance by overlaying CT or MR planning data in real-time, high-definition endoscopic video. Preoperative planning included segmentation of the carotid arteries, optic nerves, and surgical target volume (e.g., tumor). An automated camera calibration process was developed that demonstrates mean re-projection accuracy of 0.7 ± 0.3 pixels and mean target registration error of 2.3 ± 1.5 mm. An IRB-approved clinical study involving fifteen patients undergoing skull base tumor surgery is underway in which each surgery includes the experimental video-CT system deployed in parallel to the standard-of-care (unaugmented) video display. Questionnaires distributed to one neurosurgeon and two otolaryngologists are used to assess primary outcome measures regarding the benefit to surgical confidence in localizing critical structures and targets by means of video overlay during surgical approach, resection, and reconstruction.
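Both accuracy figures quoted (mean re-projection error in pixels, mean target registration error in mm) are mean point-to-point Euclidean distances; a sketch with made-up measurements, not the study's data:

```python
import math

def mean_error(pairs):
    """Mean Euclidean distance between measured and predicted points."""
    dists = [math.dist(a, b) for a, b in pairs]
    return sum(dists) / len(dists)

# Hypothetical 2D re-projections (pixels): (detected corner, projected corner).
reproj = [((10.0, 20.0), (10.5, 20.0)), ((30.0, 5.0), (30.0, 5.5))]
# Hypothetical 3D target registrations (mm): (true fiducial, registered fiducial).
targets = [((0, 0, 0), (1, 2, 2)), ((5, 5, 5), (5, 5, 8))]

reproj_err = mean_error(reproj)   # pixels
tre = mean_error(targets)         # mm
```

Re-projection error validates the camera calibration alone, while target registration error additionally folds in the video-to-CT registration, which is why the latter is the clinically meaningful figure.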
Sequential color video to parallel color video converter
NASA Technical Reports Server (NTRS)
1975-01-01
The engineering design, development, breadboard fabrication, test, and delivery of a breadboard field sequential color video to parallel color video converter is described. The converter was designed for use onboard a manned space vehicle to eliminate a flickering TV display picture and to reduce the weight and bulk of previous ground conversion systems.
Code of Federal Regulations, 2012 CFR
2012-01-01
... concerning the accessibility of videos, DVDs, and other audio-visual presentations shown on-aircraft to... meet concerning the accessibility of videos, DVDs, and other audio-visual presentations shown on... videos, DVDs, and other audio-visual displays played on aircraft for safety purposes, and all such new...
Code of Federal Regulations, 2013 CFR
2013-01-01
... concerning the accessibility of videos, DVDs, and other audio-visual presentations shown on-aircraft to... meet concerning the accessibility of videos, DVDs, and other audio-visual presentations shown on... videos, DVDs, and other audio-visual displays played on aircraft for safety purposes, and all such new...
47 CFR Appendix - Technical Appendix 1
Code of Federal Regulations, 2010 CFR
2010-10-01
... display program material that has been encoded in any and all of the video formats contained in Table A3... frame rate of the transmitted video format. 2. Output Formats Equipment shall support 4:3 center cut-out... for composite video (yellow). Output shall produce video with ITU-R BT.500-11 quality scale of Grade 4...
Things the Teacher of Your Media Utilization Course May Not Have Told You.
ERIC Educational Resources Information Center
Ekhaml, Leticia
1995-01-01
Discusses maintenance and safety information that may not be covered in a technology training program. Topics include computers, printers, televisions, video and audio equipment, electric roll laminators, overhead and slide projectors, equipment carts, power cords and outlets, batteries, darkrooms, barcode readers, Liquid Crystal Display units,…
Real-Time Visualization of Tissue Ischemia
NASA Technical Reports Server (NTRS)
Bearman, Gregory H. (Inventor); Chrien, Thomas D. (Inventor); Eastwood, Michael L. (Inventor)
2000-01-01
A real-time display of tissue ischemia is discussed, which comprises three CCD video cameras, each with a narrow-bandwidth filter at the correct wavelength. The cameras simultaneously view an area of tissue suspected of having ischemic areas through beamsplitters. The output from each camera is adjusted to give the correct signal intensity for combining with the others into an image for display. If necessary, a digital signal processor (DSP) can implement algorithms for image enhancement prior to display. Current DSP engines are fast enough to give real-time display. Measurement at three wavelengths, combined into a real-time Red-Green-Blue (RGB) video display with a digital signal processing (DSP) board to implement image algorithms, provides direct visualization of ischemic areas.
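Combining the three narrowband camera outputs into an RGB frame amounts to a per-channel gain and merge. A sketch with hypothetical 8-bit frames (the patent's actual DSP enhancement algorithms are not specified):

```python
def combine(red_band, green_band, blue_band, gains=(1.0, 1.0, 1.0)):
    """Merge three narrowband camera frames into one RGB frame,
    applying per-channel gains so the signal intensities are comparable."""
    gr, gg, gb = gains
    return [
        [(min(255, int(r * gr)), min(255, int(g * gg)), min(255, int(b * gb)))
         for r, g, b in zip(rrow, grow, brow)]
        for rrow, grow, brow in zip(red_band, green_band, blue_band)
    ]

# Tiny 1x2 example frames, one per filtered camera; green gain doubled.
frame = combine([[100, 200]], [[50, 50]], [[25, 300]], gains=(1.0, 2.0, 1.0))
```

In the described system this merge runs per video field on the DSP board, so ischemic regions appear as a distinct color in the live RGB image.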
Display nonlinearity in digital image processing for visual communications
NASA Astrophysics Data System (ADS)
Peli, Eli
1992-11-01
The luminance emitted from a cathode ray tube (CRT) display is a nonlinear function (the gamma function) of the input video signal voltage. In most analog video systems, compensation for this nonlinear transfer function is implemented in the camera amplifiers. When CRT displays are used to present psychophysical stimuli in vision research, the specific display nonlinearity usually is measured and accounted for to ensure that the luminance of each pixel in the synthetic image properly represents the intended value. However, when using digital image processing, the linear analog-to-digital converters store a digital image that is nonlinearly related to the displayed or recorded image. The effect of this nonlinear transformation on a variety of image-processing applications used in visual communications is described.
Display nonlinearity in digital image processing for visual communications
NASA Astrophysics Data System (ADS)
Peli, Eli
1991-11-01
The luminance emitted from a cathode ray tube (CRT) display is a nonlinear function (the gamma function) of the input video signal voltage. In most analog video systems, compensation for this nonlinear transfer function is implemented in the camera amplifiers. When CRT displays are used to present psychophysical stimuli in vision research, the specific display nonlinearity usually is measured and accounted for to ensure that the luminance of each pixel in the synthetic image properly represents the intended value. However, when using digital image processing, the linear analog-to-digital converters store a digital image that is nonlinearly related to the displayed or recorded image. This paper describes the effect of this nonlinear transformation on a variety of image-processing applications used in visual communications.
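The nonlinearity at issue can be sketched numerically. With an exponent of 2.2 (an assumed, typical value; as both abstracts note, a real display's nonlinearity must be measured), a mid-scale code displays at well under half luminance unless it is pre-corrected:

```python
GAMMA = 2.2   # assumed typical CRT exponent; actual displays must be measured

def displayed_luminance(code, gamma=GAMMA, max_code=255):
    """Relative luminance the CRT emits for a stored digital code value."""
    return (code / max_code) ** gamma

def precorrect(linear, gamma=GAMMA, max_code=255):
    """Inverse-gamma encode a linear luminance so it displays as intended."""
    return round(max_code * linear ** (1.0 / gamma))

# A mid-scale code displays far darker than half luminance...
mid = displayed_luminance(128)        # about 0.22, not 0.5
# ...so a stimulus meant to appear at 50% luminance needs a higher code.
code_for_half = precorrect(0.5)
```

This is exactly why processing the linearly digitized values as if they were luminances distorts results: any filter applied to the stored codes operates in the gamma-compressed domain, not the luminance domain.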
Recent progress of flexible AMOLED displays
NASA Astrophysics Data System (ADS)
Pang, Huiqing; Rajan, Kamala; Silvernail, Jeff; Mandlik, Prashant; Ma, Ruiqing; Hack, Mike; Brown, Julie J.; Yoo, Juhn S.; Jung, Sang-Hoon; Kim, Yong-Cheol; Byun, Seung-Chan; Kim, Jong-Moo; Yoon, Soo-Young; Kim, Chang-Dong; Hwang, Yong-Kee; Chung, In-Jae; Fletcher, Mark; Green, Derek; Pangle, Mike; McIntyre, Jim; Smith, Randal D.
2011-03-01
Significant progress has been made in recent years in flexible AMOLED displays and numerous prototypes have been demonstrated. Replacing rigid glass with flexible substrates and thin-film encapsulation makes displays thinner, lighter, and non-breakable - all attractive features for portable applications. Flexible AMOLEDs equipped with phosphorescent OLEDs are considered one of the best candidates for low-power, rugged, full-color video applications. Recently, we have demonstrated a portable communication display device, built upon a full-color 4.3-inch HVGA foil display with a resolution of 134 dpi using an all-phosphorescent OLED frontplane. The prototype is shaped into a thin and rugged housing that will fit over a user's wrist, providing situational awareness and enabling the wearer to see real-time video and graphics information.
Engineering visualization utilizing advanced animation
NASA Technical Reports Server (NTRS)
Sabionski, Gunter R.; Robinson, Thomas L., Jr.
1989-01-01
Engineering visualization is the use of computer graphics to depict engineering analysis and simulation in visual form from project planning through documentation. Graphics displays let engineers see data represented dynamically which permits the quick evaluation of results. The current state of graphics hardware and software generally allows the creation of two types of 3D graphics. The use of animated video as an engineering visualization tool is presented. The engineering, animation, and videography aspects of animated video production are each discussed. Specific issues include the integration of staffing expertise, hardware, software, and the various production processes. A detailed explanation of the animation process reveals the capabilities of this unique engineering visualization method. Automation of animation and video production processes are covered and future directions are proposed.
Projection display industry market and technology trends
NASA Astrophysics Data System (ADS)
Castellano, Joseph A.; Mentley, David E.
1995-04-01
The projection display industry is diverse, embracing a variety of technologies and applications. In recent years, there has been a high level of interest in projection displays, particularly those using LCD panels or light valves because of the difficulty in making large screen, direct view displays. Many developers feel that projection displays will be the wave of the future for large screen HDTV (high-definition television), penetrating the huge existing market for direct view CRT-based televisions. Projection displays can have the images projected onto a screen either from the rear or the front; the main characteristic is their ability to be viewed by more than one person. In addition to large screen home television receivers, there are numerous other uses for projection displays including conference room presentations, video conferences, closed circuit programming, computer-aided design, and military command/control. For any given application, the user can usually choose from several alternative technologies. These include CRT front or rear projectors, LCD front or rear projectors, LCD overhead projector plate monitors, various liquid or solid-state light valve projectors, or laser-addressed systems. The overall worldwide market for projection information displays of all types and for all applications, including home television, will top $4.6 billion in 1995 and $6.45 billion in 2001.
Ethernet direct display: a new dimension for in-vehicle video connectivity solutions
NASA Astrophysics Data System (ADS)
Rowley, Vincent
2009-05-01
To improve the local situational awareness (LSA) of personnel in light or heavily armored vehicles, most military organizations recognize the need to equip their fleets with high-resolution digital video systems. Several related upgrade programs are already in progress and, almost invariably, COTS IP/Ethernet is specified as the underlying transport mechanism. The high bandwidths, long reach, networking flexibility, scalability, and affordability of IP/Ethernet make it an attractive choice. There are significant technical challenges, however, in achieving high-performance, real-time video connectivity over the IP/Ethernet platform. As an early pioneer in performance-oriented video systems based on IP/Ethernet, Pleora Technologies has developed core expertise in meeting these challenges and applied a singular focus to innovating within the required framework. The company's field-proven iPORT™ Video Connectivity Solution is deployed successfully in thousands of real-world applications for medical, military, and manufacturing operations. Pleora's latest innovation is eDisplay™, a small-footprint, low-power, highly efficient IP engine that acquires video from an Ethernet connection and sends it directly to a standard HDMI/DVI monitor for real-time viewing. More costly PCs are not required. This paper describes Pleora's eDisplay IP Engine in more detail. It demonstrates how - in concert with other elements of the end-to-end iPORT Video Connectivity Solution - the engine can be used to build standards-based, in-vehicle video systems that increase the safety and effectiveness of military personnel while fully leveraging the advantages of the low-cost COTS IP/Ethernet platform.
Use of videotape for off-line viewing of computer-assisted radionuclide cardiology studies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thrall, J.H.; Pitt, B.; Marx, R.S.
1978-02-01
Videotape offers an inexpensive method for off-line viewing of dynamic radionuclide cardiac studies. Two approaches to videotaping have been explored and demonstrated to be feasible. In the first, a video camera in conjunction with a cassette-type recorder is used to record from the computer display scope. Alternatively, for computer systems already linked to video display units, the video signal can be routed directly to the recorder. Acceptance and use of tracer cardiology studies will be enhanced by increased availability of the studies for clinical review. Videotape offers an inexpensive flexible means of achieving this.
Optical Head-Mounted Computer Display for Education, Research, and Documentation in Hand Surgery.
Funk, Shawn; Lee, Donald H
2016-01-01
Intraoperative photography and video capture are important for the hand surgeon. Recently, optical head-mounted computer displays have been introduced as a means of capturing photographs and videos. In this article, we discuss this new technology and review its potential use in hand surgery. Copyright © 2016 American Society for Surgery of the Hand. Published by Elsevier Inc. All rights reserved.
Vroom: designing an augmented environment for remote collaboration in digital cinema production
NASA Astrophysics Data System (ADS)
Margolis, Todd; Cornish, Tracy
2013-03-01
As media technologies become increasingly affordable, compact and inherently networked, new generations of telecollaborative platforms continue to arise which integrate these new affordances. Virtual reality has been primarily concerned with creating simulations of environments that can transport participants to real or imagined spaces that replace the "real world". Meanwhile Augmented Reality systems have evolved to interleave objects from Virtual Reality environments into the physical landscape. Perhaps now there is a new class of systems that reverse this precept to enhance dynamic media landscapes and immersive physical display environments to enable intuitive data exploration through collaboration. Vroom (Virtual Room) is a next-generation reconfigurable tiled display environment in development at the California Institute for Telecommunications and Information Technology (Calit2) at the University of California, San Diego. Vroom enables freely scalable digital collaboratories, connecting distributed, high-resolution visualization resources for collaborative work in the sciences, engineering and the arts. Vroom transforms a physical space into an immersive media environment with large format interactive display surfaces, video teleconferencing and spatialized audio built on a high-speed optical network backbone. Vroom enables group collaboration for local and remote participants to share knowledge and experiences. Possible applications include: remote learning, command and control, storyboarding, post-production editorial review, high resolution video playback, 3D visualization, screencasting and image, video and multimedia file sharing. To support these various scenarios, Vroom features support for multiple user interfaces (optical tracking, touch UI, gesture interface, etc.), support for directional and spatialized audio, giga-pixel image interactivity, 4K video streaming, 3D visualization and telematic production.
This paper explains the design process that has been utilized to make Vroom an accessible and intuitive immersive environment for remote collaboration specifically for digital cinema production.
NASA Astrophysics Data System (ADS)
Deckard, Michael; Ratib, Osman M.; Rubino, Gregory
2002-05-01
Our project was to design and implement a ceiling-mounted multi-monitor display unit for use in a high-field MRI surgical suite. The system is designed to simultaneously display images/data from four different digital and/or analog sources with: minimal interference from the adjacent high magnetic field, minimal signal-to-noise/artifact contribution to the MRI images and compliance with codes and regulations for the sterile neuro-surgical environment. Provisions were also made to accommodate the importing and exporting of video information via PACS and remote processing/display for clinical and education uses. Commercial fiber optic receivers/transmitters were implemented along with supporting video processing and distribution equipment to solve the video communication problem. A new generation of high-resolution color flat panel displays was selected for the project. A custom-made monitor mount and in-suite electronics enclosure was designed and constructed at UCLA. Difficulties with implementing an isolated AC power system are discussed and a work-around solution presented.
Fat stigmatization on YouTube: a content analysis.
Hussin, Mallory; Frazier, Savannah; Thompson, J Kevin
2011-01-01
YouTube.com is an internet website that is viewed by two billion individuals daily, and thus may serve as a source of images and messages regarding weight acceptance or weight bias. In the current study, a targeted sample of YouTube videos that displayed fat stigmatization was content rated on a variety of video characteristics. The findings revealed that men were the target of fat stigmatization (62.1%) almost twice as often as women (36.4%). When an antagonist was present in the video, the aggressor was male the great majority of the time (88.5%) rather than female (7.7%). These findings indicate that men were the antagonist at 11.5 times the rate of women, but were only 1.7 times more often stigmatized. Future research avenues, including an experimental analysis of the effects of viewing stigmatizing videos on body image, are recommended. Copyright © 2010 Elsevier Ltd. All rights reserved.
Is Tickling Torture? Assessing Welfare towards Slow Lorises (Nycticebus spp.) within Web 2.0 Videos.
Nekaris, K Anne I; Musing, Louisa; Vazquez, Asier Gil; Donati, Giuseppe
2015-01-01
Videos, memes and images of pet slow lorises have become increasingly popular on the Internet. Although some video sites allow viewers to tag material as 'animal cruelty', no site has yet acknowledged the presence of cruelty in slow loris videos. We examined 100 online videos to assess whether they violated the 'five freedoms' of animal welfare and whether the presence or absence of these conditions contributed to the number of thumbs up and views received by the videos. We found that all 100 videos showed at least 1 condition known to be negative for lorises, indicating absence of the necessary freedom; 4% showed only 1 condition, but in nearly one third (31.3%) all 5 chosen criteria were present, including human contact (57%), daylight (87%), signs of stress/ill health (53%), unnatural environment (91%) and isolation from conspecifics (77%). The public were more likely to like videos in which a slow loris was kept in the light or displayed signs of stress. Recent work on primates has shown that imagery of primates in a human context can cause viewers to perceive them as less threatened. The prevalence of a positive public opinion of such videos is a real threat to awareness of the conservation crisis faced by slow lorises. © 2016 S. Karger AG, Basel.
Storing Data and Video on One Tape
NASA Technical Reports Server (NTRS)
Nixon, J. H.; Cater, J. P.
1985-01-01
Microprocessor-based system originally developed for anthropometric research merges digital data with video images for storage on video cassette recorder. Combined signals later retrieved and displayed simultaneously on television monitor. System also extracts digital portion of stored information and transfers it to solid-state memory.
47 CFR 79.103 - Closed caption decoder requirements for apparatus.
Code of Federal Regulations, 2014 CFR
2014-10-01
... RADIO SERVICES ACCESSIBILITY OF VIDEO PROGRAMMING Apparatus § 79.103 Closed caption decoder requirements... video programming transmitted simultaneously with sound, if such apparatus is manufactured in the United... with built-in closed caption decoder circuitry or capability designed to display closed-captioned video...
Patterned Video Sensors For Low Vision
NASA Technical Reports Server (NTRS)
Juday, Richard D.
1996-01-01
Miniature video cameras containing photoreceptors arranged in prescribed non-Cartesian patterns are proposed to partly compensate for some visual defects. The cameras, accompanied by (and possibly integrated with) miniature head-mounted video display units, would restore some visual function in humans whose visual fields are reduced by defects like retinitis pigmentosa.
Spatiotemporal video deinterlacing using control grid interpolation
NASA Astrophysics Data System (ADS)
Venkatesan, Ragav; Zwart, Christine M.; Frakes, David H.; Li, Baoxin
2015-03-01
With the advent of progressive format display and broadcast technologies, video deinterlacing has become an important video-processing technique. Numerous approaches exist in the literature to accomplish deinterlacing. While most earlier methods were simple linear filtering-based approaches, the emergence of faster computing technologies and even dedicated video-processing hardware in display units has allowed higher quality but also more computationally intense deinterlacing algorithms to become practical. Most modern approaches analyze motion and content in video to select different deinterlacing methods for various spatiotemporal regions. We introduce a family of deinterlacers that employs spectral residue to choose between and weight control grid interpolation based spatial and temporal deinterlacing methods. The proposed approaches perform better than the prior state-of-the-art based on peak signal-to-noise ratio, other visual quality metrics, and simple perception-based subjective evaluations conducted by human viewers. We further study the advantages of using soft and hard decision thresholds on the visual performance.
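The blend of spatial and temporal estimates the abstract describes can be illustrated with a toy sketch. This is not the authors' control-grid-interpolation method: plain line averaging stands in for the spatial deinterlacer, field weaving for the temporal one, and `alpha` for the per-region weight that a motion/content analysis would supply.

```python
def deinterlace_field(field, prev_frame, alpha):
    """Reconstruct missing (odd) lines of a progressive frame from an even
    field, blending spatial line averaging with temporal field weave.

    field      : list of even lines (each a list of pixel values)
    prev_frame : full frame from the previous time step
    alpha      : spatial/temporal weight in [0, 1] (1 = fully spatial)
    """
    out = []
    n = len(field)
    for i, line in enumerate(field):
        out.append(list(line))                       # even line: keep as-is
        above = line
        below = field[i + 1] if i + 1 < n else line  # clamp at bottom edge
        prev = prev_frame[2 * i + 1]                 # co-located odd line
        spatial = [(a + b) / 2.0 for a, b in zip(above, below)]
        out.append([alpha * s + (1 - alpha) * p
                    for s, p in zip(spatial, prev)])
    return out
```

In a real deinterlacer `alpha` would vary per region: near 1 where motion is detected (temporal weaving would comb) and near 0 in static areas (weaving preserves full vertical detail).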
NASA Technical Reports Server (NTRS)
Deen, Robert G.; Andres, Paul M.; Mortensen, Helen B.; Parizher, Vadim; McAuley, Myche; Bartholomew, Paul
2009-01-01
The XVD [X-Windows VICAR (video image communication and retrieval) Display] computer program offers an interactive display of VICAR and PDS (planetary data systems) images. It is designed to efficiently display multiple-GB images and runs on Solaris, Linux, or Mac OS X systems using X-Windows.
The Use of Smart Glasses for Surgical Video Streaming.
Hiranaka, Takafumi; Nakanishi, Yuta; Fujishiro, Takaaki; Hida, Yuichi; Tsubosaka, Masanori; Shibata, Yosaku; Okimura, Kenjiro; Uemoto, Harunobu
2017-04-01
Observation of surgical procedures performed by experts is extremely important for acquisition and improvement of surgical skills. Smart glasses are small computers, which comprise a head-mounted monitor and video camera, and can be connected to the internet. They can be used for remote observation of surgeries by video streaming. Although Google Glass is the most commonly used smart glasses for medical purposes, it is still unavailable commercially and has some limitations. This article reports the use of a different type of smart glasses, InfoLinker, for surgical video streaming. InfoLinker has been commercially available in Japan for industrial purposes for more than 2 years. It is connected to a video server via wireless internet directly, and streaming video can be seen anywhere an internet connection is available. We have attempted live video streaming of knee arthroplasty operations that were viewed at several different locations, including foreign countries, on a common web browser. Although the quality of video images depended on the resolution and dynamic range of the video camera, speed of internet connection, and the wearer's attention to minimize image shaking, video streaming could be easily performed throughout the procedure. The wearer could confirm the quality of the video as the video was being shot by the head-mounted display. The time and cost for observation of surgical procedures can be reduced by InfoLinker, and further improvement of hardware as well as the wearer's video shooting technique is expected. We believe that this can be used in other medical settings.
Efficient stereoscopic contents file format on the basis of ISO base media file format
NASA Astrophysics Data System (ADS)
Kim, Kyuheon; Lee, Jangwon; Suh, Doug Young; Park, Gwang Hoon
2009-02-01
A lot of 3D content has been widely used for multimedia services; however, real 3D video content has been adopted only in limited applications such as specially designed 3D cinemas. This is because of the difficulty of capturing real 3D video content and the limitations of the display devices available in the market. Recently, however, diverse types of display devices for stereoscopic video content have been released in the market. In particular, a mobile phone with a stereoscopic camera has been released, which allows a user, as a consumer, to have more realistic experiences without glasses, and also, as a content creator, to take stereoscopic images or record stereoscopic video. However, a user can only store and display these acquired stereoscopic contents on his/her own devices due to the absence of a common file format for them. This limitation prevents users from sharing their content with other users, which hinders expansion of the market for stereoscopic content. Therefore, this paper proposes a common file format on the basis of the ISO base media file format for stereoscopic contents, which enables users to store and exchange pure stereoscopic content. This technology is also currently under development as an international standard of MPEG, called the stereoscopic video application format.
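The ISO base media file format the proposal builds on is a tree of "boxes", each prefixed by a 4-byte big-endian size (including the header) and a 4-byte ASCII type code; stereoscopic extensions define additional box types within the same framing. A minimal top-level walk might look like this (an illustrative sketch, not the standard's reference parser; 64-bit and to-end-of-file sizes are omitted):

```python
import struct

def parse_boxes(data):
    """Walk the top-level boxes of an ISO base media file and return a
    list of (type, size) pairs."""
    boxes = []
    offset = 0
    while offset + 8 <= len(data):
        size, btype = struct.unpack_from(">I4s", data, offset)
        if size < 8:   # size 0 (to end of file) / 1 (64-bit) not handled here
            break
        boxes.append((btype.decode("ascii"), size))
        offset += size
    return boxes
```

Because every extension rides on the same size/type framing, a legacy player can skip unknown stereoscopic boxes while a 3D-aware player interprets them, which is what makes a common interchange format practical.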
Overview of FTV (free-viewpoint television)
NASA Astrophysics Data System (ADS)
Tanimoto, Masayuki
2010-07-01
We have developed a new type of television named FTV (Free-viewpoint TV). FTV is the ultimate 3DTV that enables us to view a 3D scene by freely changing our viewpoint. We proposed the concept of FTV and constructed the world's first real-time system including the complete chain of operation from image capture to display. FTV is based on the ray-space method, which represents one ray in real space with one point in the ray-space. We have developed ray capture, processing and display technologies for FTV. FTV can be carried out today in real time on a single PC or on a mobile player. We also realized FTV with free listening-point audio. The international standardization of FTV has been conducted in MPEG. The first phase of FTV was MVC (Multi-view Video Coding) and the second phase is 3DV (3D Video). MVC was completed in May 2009. The Blu-ray 3D specification has adopted MVC for compression. 3DV is a standard that targets serving a variety of 3D displays. The view generation function of FTV is used to decouple capture and display in 3DV. FDU (FTV Data Unit) is proposed as a data format for 3DV. The FDU can compensate for errors in the synthesized views caused by depth errors.
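The ray-space idea of representing one ray by one point can be sketched with a common two-parameter form: record where a 2D ray crosses a reference plane together with the tangent of its direction. This is a simplified illustration of the general concept, not necessarily the exact parameterization used in FTV:

```python
import math

def ray_to_ray_space(x0, z0, theta):
    """Map a 2-D ray to a ray-space point (x, u).

    The ray passes through (x0, z0) at angle theta to the z-axis; x is
    where it crosses the reference plane z = 0, and u = tan(theta) is
    its direction coordinate.
    """
    u = math.tan(theta)
    x = x0 - z0 * u          # back-project the ray to the z = 0 plane
    return (x, u)
```

Under such a parameterization, all rays reaching a single viewpoint lie on a line in ray-space, which is why free-viewpoint view synthesis reduces to sampling (and interpolating) the captured ray-space.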
NASA Technical Reports Server (NTRS)
2004-01-01
Ever wonder whether a still shot from a home video could serve as a "picture perfect" photograph worthy of being framed and proudly displayed on the mantle? Wonder no more. A critical imaging code used to enhance video footage taken from spaceborne imaging instruments is now available within a portable photography tool capable of producing an optimized, high-resolution image from multiple video frames.
Code of Federal Regulations, 2013 CFR
2013-04-01
... recordings and/or digital records shall be provided to the Commission upon request. (x) Video library log. A... events on video and/or digital recordings. The displayed date and time shall not significantly obstruct... each gaming machine change booth. (w) Video recording and/or digital record retention. (1) All video...
Code of Federal Regulations, 2012 CFR
2012-04-01
... recordings and/or digital records shall be provided to the Commission upon request. (x) Video library log. A... events on video and/or digital recordings. The displayed date and time shall not significantly obstruct... each gaming machine change booth. (w) Video recording and/or digital record retention. (1) All video...
Code of Federal Regulations, 2014 CFR
2014-04-01
... recordings and/or digital records shall be provided to the Commission upon request. (x) Video library log. A... events on video and/or digital recordings. The displayed date and time shall not significantly obstruct... each gaming machine change booth. (w) Video recording and/or digital record retention. (1) All video...
Effectiveness of Immersive Videos in Inducing Awe: An Experimental Study.
Chirico, Alice; Cipresso, Pietro; Yaden, David B; Biassoni, Federica; Riva, Giuseppe; Gaggioli, Andrea
2017-04-27
Awe, a complex emotion composed of the appraisal components of vastness and need for accommodation, is a profound and often meaningful experience. Despite its importance, psychologists have only recently begun empirical study of awe. At the experimental level, a main issue concerns how to elicit high-intensity awe experiences in the lab. To address this issue, Virtual Reality (VR) has been proposed as a potential solution. Here, we considered the most realistic form of VR: immersive videos. Forty-two participants watched immersive and normal 2D videos displaying awe-inducing or neutral content. After the experience, they rated their level of awe and sense of presence. Participants' psychophysiological responses (BVP, SC, sEMG) were recorded during the whole video exposure. We hypothesized that the immersive video condition would increase the intensity of awe experienced compared to 2D screen videos. Results indicated that immersive videos significantly enhanced the self-reported intensity of awe as well as the sense of presence. Immersive videos displaying awe-inducing content also led to higher parasympathetic activation. These findings indicate the advantages of using VR in the experimental study of awe, with methodological implications for the study of other emotions.
77 FR 75617 - 36(b)(1) Arms Sales Notification
Federal Register 2010, 2011, 2012, 2013, 2014
2012-12-21
... transmittal, policy justification, and Sensitivity of Technology. Dated: December 18, 2012. Aaron Siegel... Processor Cabinets, 2 Video Wall Screen and Projector Systems, 46 Flat Panel Displays, and 2 Distributed Video Systems), 2 ship sets AN/SPQ-15 Digital Video Distribution Systems, 2 ship sets Operational...
Distance Learning Plan for the Defense Finance and Accounting Service (DFAS): A Study for the DBMU
1994-09-01
according to the standard (H.261) motion video compression algorithm.24 Schaphorst, Richard, notes presented at TELECON XIII, San Jose, California, 10...include automatic microphone mixing systems with one microphone for every two student seats, a large screen interactive computer display and the Socrates
Teachers Should Be Concerned, Chosen and Cared For.
ERIC Educational Resources Information Center
McDonald, Mary C.
2001-01-01
Describes the strategies employed by the Diocese of Memphis, Tennessee, to alleviate its teacher shortage, including: (1) starting a teacher Recruitment and Retention Leadership Team; and (2) creating a Web site, radio spots, display booths at job fairs, and a teacher recruitment video that was sent to religious communities. States that, for the…
Video Altimeter and Obstruction Detector for an Aircraft
NASA Technical Reports Server (NTRS)
Delgado, Frank J.; Abernathy, Michael F.; White, Janis; Dolson, William R.
2013-01-01
Video-based altimetric and obstruction-detection systems for aircraft have been partially developed. The hardware of a system of this type includes a downward-looking video camera, a video digitizer, a Global Positioning System receiver or other means of measuring the aircraft velocity relative to the ground, a gyroscope-based or other attitude-determination subsystem, and a computer running altimetric and/or obstruction-detection software. From the digitized video data, the altimetric software computes the pixel velocity in an appropriate part of the video image and the corresponding angular relative motion of the ground within the field of view of the camera. Then, by use of trigonometric relationships among the aircraft velocity, the attitude of the camera, the angular relative motion, and the altitude, the software computes the altitude. The obstruction-detection software performs somewhat similar calculations as part of a larger task in which it uses the pixel-velocity data from the entire video image to compute a depth map, which can be correlated with a terrain map, showing locations of potential obstructions. The depth map can be used as a real-time hazard display and/or to update an obstruction database.
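For the simplest level-flight, nadir-camera case, the trigonometric relationship reduces to a one-liner: ground features sweep past at angular rate ω = v/h, so h = v/ω, with ω obtained from the measured pixel velocity and the camera's angular resolution. A hedged sketch (the function name and the `rad_per_pixel` calibration constant are illustrative, not taken from the system described):

```python
def altitude_from_pixel_flow(ground_speed, pixel_velocity, rad_per_pixel):
    """Estimate altitude from the optical flow of a nadir-pointing camera.

    ground_speed   : aircraft speed over the ground, m/s (e.g., from GPS)
    pixel_velocity : measured image motion, pixels/s
    rad_per_pixel  : camera angular resolution, rad/pixel (calibration)
    """
    omega = pixel_velocity * rad_per_pixel   # angular rate of the ground, rad/s
    if omega <= 0:
        raise ValueError("no measurable ground motion")
    return ground_speed / omega              # h = v / omega
```

The full system generalizes this with the camera attitude so the relation holds for off-nadir viewing, and the obstruction detector inverts the same relation per pixel to build a depth map.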
NASA Astrophysics Data System (ADS)
Schmalz, Mark S.; Ritter, Gerhard X.; Caimi, Frank M.
2001-12-01
A wide variety of digital image compression transforms developed for still imaging and broadcast video transmission are unsuitable for Internet video applications due to insufficient compression ratio, poor reconstruction fidelity, or excessive computational requirements. Examples include hierarchical transforms that require all, or a large portion of, a source image to reside in memory at one time, transforms that induce a significant blocking effect at operationally salient compression ratios, and algorithms that require large amounts of floating-point computation. The latter constraint holds especially for video compression by small mobile imaging devices for transmission to, and decompression on, platforms such as palmtop computers or personal digital assistants (PDAs). As Internet video requirements for frame rate and resolution increase to produce more detailed, less discontinuous motion sequences, a new class of compression transforms will be needed, especially for small memory models and displays such as those found on PDAs. In this, the third paper in the series, we discuss the EBLAST compression transform and its application to Internet communication. Leading transforms for compression of Internet video and still imagery are reviewed and analyzed, including GIF, JPEG, AWIC (wavelet-based), wavelet packets, and SPIHT, whose performance is compared with EBLAST. Performance analysis criteria include time and space complexity and quality of the decompressed image; the latter is determined by rate-distortion data obtained from a database of realistic test images. Discussion also includes issues such as robustness of the compressed format to channel noise. EBLAST has been shown to perform superiorly to JPEG and, unlike current wavelet compression transforms, supports fast implementation on embedded processors with small memory models.
Real-time rendering for multiview autostereoscopic displays
NASA Astrophysics Data System (ADS)
Berretty, R.-P. M.; Peters, F. J.; Volleberg, G. T. G.
2006-02-01
In video systems, the introduction of 3D video might be the next revolution after the introduction of color. Nowadays multiview autostereoscopic displays are in development. Such displays offer various views at the same time, and the image content observed by the viewer depends upon his position with respect to the screen. His left eye receives a signal that is different from what his right eye gets; this gives, provided the signals have been properly processed, the impression of depth. The various views produced on the display differ with respect to their associated camera positions. A possible video format that is suited for rendering from different camera positions is the usual 2D format enriched with a depth-related channel: for each pixel in the video not only its color is given, but also, e.g., its distance to the camera. In this paper we provide a theoretical framework for the parallactic transformations which relates captured and observed depths to screen and image disparities. Moreover, we present an efficient real-time rendering algorithm that uses forward mapping to reduce aliasing artefacts and that deals properly with occlusions. For improved perceived resolution, we take the relative position of the color subpixels and the optics of the lenticular screen into account. Sophisticated filtering techniques result in high-quality images.
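As a rough illustration of the parallactic relationship between depth and screen disparity (not the paper's exact derivation), one common form makes the disparity vanish for points lying on the screen/convergence plane and grow with the relative depth offset:

```python
def screen_disparity(depth, eye_separation, viewer_distance, screen_depth):
    """Illustrative parallactic relation for a 2D-plus-depth renderer.

    depth           : scene distance of the point from the camera/viewer
    eye_separation  : interocular baseline (e.g., ~0.065 m)
    viewer_distance : scale factor for the viewing geometry (assumed)
    screen_depth    : distance of the zero-disparity (screen) plane

    Returns 0 for points on the screen plane; positive (uncrossed)
    disparity for points behind it, negative for points in front.
    """
    return eye_separation * viewer_distance * (1.0 / screen_depth - 1.0 / depth)
```

A forward-mapping renderer such as the one described would evaluate a relation of this kind per pixel of the depth channel, shift the color sample by the resulting disparity for each view, and resolve occlusions where shifted samples collide.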
Objective video presentation QoE predictor for smart adaptive video streaming
NASA Astrophysics Data System (ADS)
Wang, Zhou; Zeng, Kai; Rehman, Abdul; Yeganeh, Hojatollah; Wang, Shiqi
2015-09-01
How to deliver videos to consumers over the network for optimal quality-of-experience (QoE) has been the central goal of modern video delivery services. Surprisingly, regardless of the large volume of videos being delivered every day through various systems attempting to improve visual QoE, the actual QoE of end consumers is not properly assessed, let alone used as the key factor in making critical decisions at the video hosting, network and receiving sites. Real-world video streaming systems typically use bitrate as the main video presentation quality indicator, but using the same bitrate to encode different video content can result in drastically different visual QoE, which is further affected by the display device and viewing condition of each individual consumer who receives the video. To correct this, we have to put QoE back in the driver's seat and redesign video delivery systems. To achieve this goal, a major challenge is to find an objective video presentation QoE predictor that is accurate, fast, easy to use, display-device adaptive, and provides meaningful QoE predictions across resolutions and content. We propose the newly developed SSIMplus index (https://ece.uwaterloo.ca/~z70wang/research/ssimplus/) for this role. We demonstrate that, based on SSIMplus, one can develop a smart adaptive video streaming strategy that leads to much smoother visual QoE than is possible with existing adaptive bitrate video streaming approaches. Furthermore, SSIMplus finds many more applications: in live and file-based quality monitoring, in benchmarking video encoders and transcoders, and in guiding network resource allocation.
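The shift from bitrate-driven to QoE-driven adaptation can be sketched as follows. The rendition list and QoE scores below are hypothetical stand-ins for values a perceptual predictor such as SSIMplus would supply per title, device, and viewing condition:

```python
def pick_rendition(renditions, bandwidth):
    """Choose the encoding with the highest predicted QoE that fits the
    available bandwidth, instead of simply the highest affordable bitrate.

    renditions : list of (bitrate_kbps, predicted_qoe) pairs
    bandwidth  : currently available throughput, kbps
    """
    feasible = [r for r in renditions if r[0] <= bandwidth]
    if not feasible:
        return min(renditions)                # fall back to the cheapest stream
    return max(feasible, key=lambda r: r[1])  # maximize predicted QoE
```

Because predicted QoE is content-dependent, a lower-bitrate rendition can legitimately outrank a higher-bitrate one (e.g., for easy-to-encode content), which is exactly the decision a pure bitrate ladder cannot make.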
Mask, Lisa; Blanchard, Céline M
2011-09-01
The present study examines the protective role of an autonomous regulation of eating behaviors (AREB) on the relationship between trait body dissatisfaction and women's body image concerns and eating-related intentions in response to "thin ideal" media. Undergraduate women (n=138) were randomly assigned to view a "thin ideal" video or a neutral video. As hypothesized, trait body dissatisfaction predicted more negative affect and size dissatisfaction following exposure to the "thin ideal" video among women who displayed less AREB. Conversely, trait body dissatisfaction predicted greater intentions to monitor food intake and limit unhealthy foods following exposure to the "thin ideal" video among women who displayed more AREB. Copyright © 2011 Elsevier Ltd. All rights reserved.
First Use of Heads-up Display for Astronomy Education
NASA Astrophysics Data System (ADS)
Mumford, Holly; Hintz, E. G.; Jones, M.; Lawler, J.; Fisler, A.
2013-01-01
As part of our work on deaf education in a planetarium environment we are exploring the use of heads-up display systems. This allows us to overlap an ASL interpreter with our educational videos. The overall goal is to allow a student to watch a full-dome planetarium show and have the interpreter tracking to any portion of the video. We will present the first results of using a heads-up display to provide an ASL ‘sound-track’ for a deaf audience. This work is partially funded by an NSF IIS-1124548 grant and funding from the Sorenson Foundation.
Hardware and software improvements to a low-cost horizontal parallax holographic video monitor.
Henrie, Andrew; Codling, Jesse R; Gneiting, Scott; Christensen, Justin B; Awerkamp, Parker; Burdette, Mark J; Smalley, Daniel E
2018-01-01
Displays capable of true holographic video have been prohibitively expensive and difficult to build. With this paper, we present a suite of modularized hardware components and software tools needed to build a HoloMonitor with basic "hacker-space" equipment, highlighting improvements that have enabled the total materials cost to fall to $820, well below that of other holographic displays. It is our hope that the current level of simplicity, development, design flexibility, and documentation will enable the lay engineer, programmer, and scientist to relatively easily replicate, modify, and build upon our designs, bringing true holographic video to the masses.
Obstacles encountered in the development of the low vision enhancement system.
Massof, R W; Rickman, D L
1992-01-01
The Johns Hopkins Wilmer Eye Institute and the NASA Stennis Space Center are collaborating on the development of a new high technology low vision aid called the Low Vision Enhancement System (LVES). The LVES consists of a binocular head-mounted video display system, video cameras mounted on the head-mounted display, and real-time video image processing in a system package that is battery powered and portable. Through a phased development approach, several generations of the LVES can be made available to the patient in a timely fashion. This paper describes the LVES project with major emphasis on technical problems encountered or anticipated during the development process.
3D laptop for defense applications
NASA Astrophysics Data System (ADS)
Edmondson, Richard; Chenault, David
2012-06-01
Polaris Sensor Technologies has developed numerous 3D display systems using a US Army patented approach. These displays have been developed as prototypes for handheld controllers for robotic systems and closed hatch driving, and as part of a TALON robot upgrade for 3D vision, providing depth perception for the operator for improved manipulation and hazard avoidance. In this paper we discuss the prototype rugged 3D laptop computer and its applications to defense missions. The prototype 3D laptop combines full temporal and spatial resolution display with the rugged Amrel laptop computer. The display is viewed through protective passive polarized eyewear, and allows combined 2D and 3D content. Uses include robot tele-operation with live 3D video or synthetically rendered scenery, mission planning and rehearsal, enhanced 3D data interpretation, and simulation.
Informative-frame filtering in endoscopy videos
NASA Astrophysics Data System (ADS)
An, Yong Hwan; Hwang, Sae; Oh, JungHwan; Lee, JeongKyu; Tavanapong, Wallapak; de Groen, Piet C.; Wong, Johnny
2005-04-01
Advances in video technology are being incorporated into today's healthcare practice. For example, colonoscopy is an important screening tool for colorectal cancer. Colonoscopy allows for the inspection of the entire colon and provides the ability to perform a number of therapeutic operations during a single procedure. During a colonoscopic procedure, a tiny video camera at the tip of the endoscope generates a video signal of the internal mucosa of the colon. The video data are displayed on a monitor for real-time analysis by the endoscopist. Other endoscopic procedures include upper gastrointestinal endoscopy, enteroscopy, bronchoscopy, cystoscopy, and laparoscopy. However, a significant number of out-of-focus frames are included in these videos, since current endoscopes are equipped with a single, wide-angle lens that cannot be focused. The out-of-focus frames do not hold any useful information. To reduce the burden of further processes such as computer-aided image processing or examination by human experts, these frames need to be removed. We refer to an out-of-focus frame as a non-informative frame and an in-focus frame as an informative frame. We propose a new technique to classify the video frames into these two classes using a combination of the Discrete Fourier Transform (DFT), texture analysis, and k-means clustering. The proposed technique can evaluate the frames without any reference image and does not need any predefined threshold value. Our experimental studies indicate that it achieves over 96% on four different performance metrics (precision, sensitivity, specificity, and accuracy).
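A toy version of this pipeline can make the idea concrete. Here gradient energy stands in for the DFT/texture sharpness features used in the paper, followed by a 1-D two-means split, which, like the paper's clustering step, needs no predefined threshold:

```python
def sharpness(img):
    """Mean squared horizontal/vertical gradient of a grayscale image
    (list of rows): a cheap stand-in for high-frequency DFT energy."""
    h, w = len(img), len(img[0])
    total = 0.0
    for y in range(h):
        for x in range(w):
            if x + 1 < w:
                total += (img[y][x + 1] - img[y][x]) ** 2
            if y + 1 < h:
                total += (img[y + 1][x] - img[y][x]) ** 2
    return total / (h * w)

def two_means(scores, iters=20):
    """1-D k-means with k=2 over per-frame sharpness scores; returns an
    'informative' flag per frame (True = high-sharpness cluster)."""
    lo, hi = min(scores), max(scores)
    for _ in range(iters):
        assign = [abs(s - hi) < abs(s - lo) for s in scores]
        hi_pts = [s for s, a in zip(scores, assign) if a]
        lo_pts = [s for s, a in zip(scores, assign) if not a]
        if hi_pts:
            hi = sum(hi_pts) / len(hi_pts)
        if lo_pts:
            lo = sum(lo_pts) / len(lo_pts)
    return [abs(s - hi) < abs(s - lo) for s in scores]
```

Out-of-focus frames concentrate their spectral energy at low frequencies, so their scores fall into the low cluster and can be dropped before any downstream analysis.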
Design of video processing and testing system based on DSP and FPGA
NASA Astrophysics Data System (ADS)
Xu, Hong; Lv, Jun; Chen, Xi'ai; Gong, Xuexia; Yang, Chen'na
2007-12-01
Based on a high-speed Digital Signal Processor (DSP) and a Field Programmable Gate Array (FPGA), a miniaturized, low-power video capture, processing and display system is presented. In this system, a triple-buffering scheme is used for capture and display, so that the application can always get a new buffer without waiting. The DSP provides image-processing capability and is used to detect the boundary of a workpiece's image. A video graduation (on-screen scale) is used to aim at the position to be tested, which also enhances the system's flexibility. Character superposition, realized by the DSP, displays the test result on the screen in character format. The system can process image information in real time, ensure test precision, and help to enhance product quality and quality management.
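The triple-buffering scheme can be sketched as follows (an illustrative Python model of the idea; the actual system implements it in DSP/FPGA hardware): three buffers rotate through writer, pending, and reader roles, so the capture side always has a free buffer to fill and the display side always latches the newest completed frame, with neither blocking the other.

```python
import threading

class TripleBuffer:
    """Minimal triple-buffering model for a capture/display pipeline."""

    def __init__(self):
        self.buffers = [None, None, None]
        self.back = 0      # buffer being written by capture
        self.pending = 1   # most recently completed frame
        self.front = 2     # buffer being read by display
        self.fresh = False
        self.lock = threading.Lock()

    def write(self, frame):
        self.buffers[self.back] = frame
        with self.lock:    # publish: swap back and pending
            self.back, self.pending = self.pending, self.back
            self.fresh = True

    def read(self):
        with self.lock:    # latch the newest frame if one arrived
            if self.fresh:
                self.front, self.pending = self.pending, self.front
                self.fresh = False
        return self.buffers[self.front]
```

If capture runs faster than display, intermediate frames are silently overwritten in the pending slot; the display simply sees the latest one, which is the behavior a real-time video tester wants.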
Veligdan, James T.
2005-05-31
A video image is displayed from an optical panel by splitting the image into a plurality of image components, and then projecting the image components through corresponding portions of the panel to collectively form the image. Depth of the display is correspondingly reduced.
Veligdan, James T [Manorville, NY
2007-05-29
A video image is displayed from an optical panel by splitting the image into a plurality of image components, and then projecting the image components through corresponding portions of the panel to collectively form the image. Depth of the display is correspondingly reduced.
Mesoscale and severe storms (Mass) data management and analysis system
NASA Technical Reports Server (NTRS)
Hickey, J. S.; Karitani, S.; Dickerson, M.
1984-01-01
Progress on the Mesoscale and Severe Storms (MASS) data management and analysis system is described. An interactive atmospheric database management software package to convert four types of data (sounding, single level, grid, image) into standard random-access formats is implemented and integrated with the MASS AVE80 Series general-purpose plotting and graphics display data analysis software package. An interactive analysis and display graphics software package (AVE80) to analyze large volumes of conventional and satellite-derived meteorological data is enhanced to provide imaging/color graphics display utilizing color video hardware integrated into the MASS computer system. Local and remote smart-terminal capability is provided by installing APPLE III computer systems within individual scientists' offices and integrating them with the MASS system, thus providing color video display, graphics, and character display of the four data types.
Multilocation Video Conference By Optical Fiber
NASA Astrophysics Data System (ADS)
Gray, Donald J.
1982-10-01
An experimental system that permits interconnection of many offices in a single video conference is described. Video images transmitted to conference participants are selected by the conference chairman and switched by a microprocessor-controlled video switch. Speakers can, at their choice, transmit their own images or images of graphics they wish to display. Users are connected to the Switching Center by optical fiber subscriber loops that carry analog video, digitized telephone, data and signaling. The same system also provides user-selectable distribution of video program and video library material. Experience in the operation of the conference system is discussed.
Rehm, K; Seeley, G W; Dallas, W J; Ovitt, T W; Seeger, J F
1990-01-01
One of the goals of our research in the field of digital radiography has been to develop contrast-enhancement algorithms for eventual use in the display of chest images on video devices with the aim of preserving the diagnostic information presently available with film, some of which would normally be lost because of the smaller dynamic range of video monitors. The ASAHE algorithm discussed in this article has been tested by investigating observer performance in a difficult detection task involving phantoms and simulated lung nodules, using film as the output medium. The results of the experiment showed that the algorithm is successful in providing contrast-enhanced, natural-looking chest images while maintaining diagnostic information. The algorithm did not effect an increase in nodule detectability, but this was not unexpected because film is a medium capable of displaying a wide range of gray levels. It is sufficient at this stage to show that there is no degradation in observer performance. Future tests will evaluate the performance of the ASAHE algorithm in preparing chest images for video display.
Interactive Video in Training. Computers in Personnel--Making Management Profitable.
ERIC Educational Resources Information Center
Copeland, Peter
Interactive video is achieved by merging the two powerful technologies of microcomputing and video. Using television as the vehicle for display, text and diagrams, filmic images, and sound can be used separately or in combination to achieve a specific training task. An interactive program can check understanding, determine progress, and challenge…
Riby, Deborah M; Whittle, Lisa; Doherty-Sneddon, Gwyneth
2012-01-01
The human face is a powerful elicitor of emotion, which induces autonomic nervous system responses. In this study, we explored physiological arousal and reactivity to affective facial displays shown in person and through video-mediated communication. We compared measures of physiological arousal and reactivity in typically developing individuals and those with the developmental disorders Williams syndrome (WS) and autism spectrum disorder (ASD). Participants attended to facial displays of happy, sad, and neutral expressions via live and video-mediated communication. Skin conductance level (SCL) indicated that live faces, but not video-mediated faces, increased arousal, especially for typically developing individuals and those with WS. There was less increase of SCL, and physiological reactivity was comparable for live and video-mediated faces in ASD. In typical development and WS, physiological reactivity was greater for live than for video-mediated communication. Individuals with WS showed lower SCL than typically developing individuals, suggesting possible hypoarousal in this group, even though they showed an increase in arousal for faces. The results are discussed in terms of the use of video-mediated communication with typically and atypically developing individuals and atypicalities of physiological arousal across neurodevelopmental disorder groups.
An Imaging And Graphics Workstation For Image Sequence Analysis
NASA Astrophysics Data System (ADS)
Mostafavi, Hassan
1990-01-01
This paper describes an application-specific engineering workstation designed and developed to analyze imagery sequences from a variety of sources. The system combines the software and hardware environment of the modern graphic-oriented workstations with the digital image acquisition, processing and display techniques. The objective is to achieve automation and high throughput for many data reduction tasks involving metric studies of image sequences. The applications of such an automated data reduction tool include analysis of the trajectory and attitude of aircraft, missile, stores and other flying objects in various flight regimes including launch and separation as well as regular flight maneuvers. The workstation can also be used in an on-line or off-line mode to study three-dimensional motion of aircraft models in simulated flight conditions such as wind tunnels. The system's key features are: 1) Acquisition and storage of image sequences by digitizing real-time video or frames from a film strip; 2) computer-controlled movie loop playback, slow motion and freeze frame display combined with digital image sharpening, noise reduction, contrast enhancement and interactive image magnification; 3) multiple leading edge tracking in addition to object centroids at up to 60 fields per second from both live input video or a stored image sequence; 4) automatic and manual field-of-view and spatial calibration; 5) image sequence data base generation and management, including the measurement data products; 6) off-line analysis software for trajectory plotting and statistical analysis; 7) model-based estimation and tracking of object attitude angles; and 8) interface to a variety of video players and film transport sub-systems.
The compatibility of consumer DLP projectors with time-sequential stereoscopic 3D visualisation
NASA Astrophysics Data System (ADS)
Woods, Andrew J.; Rourke, Tegan
2007-02-01
A range of advertised "Stereo-Ready" DLP projectors are now available in the market which allow high-quality flicker-free stereoscopic 3D visualization using the time-sequential stereoscopic display method. The ability to use a single projector for stereoscopic viewing offers a range of advantages, including extremely good stereoscopic alignment, and in some cases, portability. It has also recently become known that some consumer DLP projectors can be used for time-sequential stereoscopic visualization; however, it was not well understood which projectors are compatible and incompatible, what display modes (frequency and resolution) are compatible, and what stereoscopic display quality attributes are important. We conducted a study to test a wide range of projectors for stereoscopic compatibility. This paper reports on the testing of 45 consumer DLP projectors of widely different specifications (brand, resolution, brightness, etc.). The projectors were tested for stereoscopic compatibility with various video formats (PAL, NTSC, 480P, 576P, and various VGA resolutions) and video input connections (composite, S-Video, component, and VGA). Fifteen projectors were found to work well at up to 85 Hz stereo in VGA mode. Twenty-three projectors would work at 60 Hz stereo in VGA mode.
Keebler, Joseph R; Jentsch, Florian; Schuster, David
2014-12-01
We investigated the effects of active stereoscopic simulation-based training and individual differences in video game experience on multiple indices of combat identification (CID) performance. Fratricide is a major problem in combat operations involving military vehicles. In this research, we aimed to evaluate the effects of training on CID performance in order to reduce fratricide errors. Individuals were trained on 12 combat vehicles in a simulation, which were presented via either a non-stereoscopic or active stereoscopic display using NVIDIA's GeForce shutter glass technology. Self-report was used to assess video game experience, leading to four between-subjects groups: high video game experience with stereoscopy, low video game experience with stereoscopy, high video game experience without stereoscopy, and low video game experience without stereoscopy. We then tested participants on their memory of each vehicle's alliance and name across multiple measures, including photographs and videos. There was a main effect for both video game experience and stereoscopy across many of the dependent measures. Further, we found interactions between video game experience and stereoscopic training, such that those individuals with high video game experience in the non-stereoscopic group had the highest performance outcomes in the sample on multiple dependent measures. This study suggests that individual differences in video game experience may be predictive of enhanced performance in CID tasks. Selection based on video game experience in CID tasks may be a useful strategy for future military training. Future research should investigate the generalizability of these effects, such as identification through unmanned vehicle sensors.
Tactile Cueing for Target Acquisition and Identification
2005-09-01
method of coding tactile information, and the method of presenting elevation information were studied. Results: Subjects were divided into video game experienced...VGP) subjects and non-video game (NVGP) experienced subjects. VGPs showed a significantly lower target acquisition time with the 12...that video game players performed better with the highest level of tactile resolution, while non-video game players performed better with a simpler pattern and a lower resolution display.
Backwards compatible high dynamic range video compression
NASA Astrophysics Data System (ADS)
Dolzhenko, Vladimir; Chesnokov, Vyacheslav; Edirisinghe, Eran A.
2014-02-01
This paper presents a two-layer CODEC architecture for high dynamic range video compression. The base layer contains the tone-mapped video stream encoded with 8 bits per component, which can be decoded using conventional equipment. The base layer content is optimized for rendering on low dynamic range displays. The enhancement layer contains the image difference, in a perceptually uniform color space, between the inverse tone-mapped base layer content and the original video stream. Prediction of the high dynamic range content reduces the redundancy in the transmitted data while preserving highlights and out-of-gamut colors. The perceptually uniform color space enables the use of standard rate-distortion optimization algorithms. We present techniques for efficient implementation and encoding of non-uniform tone mapping operators with low overhead in terms of bitstream size and number of operations. The transform representation is based on a human visual system model and is suitable for global and local tone mapping operators. The compression techniques include predicting the transform parameters from previously decoded frames and from already decoded data for the current frame. Different video compression techniques are compared: backwards compatible and non-backwards compatible, using AVC and HEVC codecs.
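The base/enhancement split can be illustrated with a minimal sketch, assuming a simple global logarithmic tone-mapping operator applied to per-sample luminances; the paper's actual operators are non-uniform and locally adaptive, and its residual is computed in a perceptually uniform color space rather than on raw values:

```python
import math

def tone_map(hdr, peak):
    """Global log tone map: HDR luminance -> 8-bit base-layer code value."""
    return [round(255 * math.log(1 + v) / math.log(1 + peak)) for v in hdr]

def inverse_tone_map(base, peak):
    """Approximate HDR reconstruction from the 8-bit base layer."""
    return [math.exp(b / 255 * math.log(1 + peak)) - 1 for b in base]

def encode_two_layer(hdr):
    peak = max(hdr)
    base = tone_map(hdr, peak)                 # backwards-compatible 8-bit layer
    predicted = inverse_tone_map(base, peak)   # prediction from the base layer
    residual = [h - p for h, p in zip(hdr, predicted)]  # enhancement layer
    return base, residual, peak

def decode_two_layer(base, residual, peak):
    predicted = inverse_tone_map(base, peak)
    return [p + r for p, r in zip(predicted, residual)]

hdr_line = [0.05, 1.0, 80.0, 4000.0]           # sample HDR luminances (cd/m^2)
base, residual, peak = encode_two_layer(hdr_line)
recon = decode_two_layer(base, residual, peak)
assert all(abs(a - b) < 1e-6 for a, b in zip(hdr_line, recon))
```

A legacy decoder simply displays `base`; an HDR decoder adds the enhancement residual back onto the inverse tone-mapped prediction.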
Task-dependent color discrimination
NASA Technical Reports Server (NTRS)
Poirson, Allen B.; Wandell, Brian A.
1990-01-01
When color video displays are used in time-critical applications (e.g., head-up displays, video control panels), the observer must discriminate among briefly presented targets seen within a complex spatial scene. Color-discrimination thresholds are compared using two tasks. In one task the observer makes color matches between two halves of a continuously displayed bipartite field. In a second task the observer detects a color target in a set of briefly presented objects. The data from both tasks are well summarized by ellipsoidal isosensitivity contours. The fitted ellipsoids differ both in their size, which indicates an absolute sensitivity difference, and orientation, which indicates a relative sensitivity difference.
Affordable multisensor digital video architecture for 360° situational awareness displays
NASA Astrophysics Data System (ADS)
Scheiner, Steven P.; Khan, Dina A.; Marecki, Alexander L.; Berman, David A.; Carberry, Dana
2011-06-01
One of the major challenges facing today's military ground combat vehicle operations is the ability to achieve and maintain full-spectrum situational awareness while under armor (i.e., closed hatch). Thus, the ability to perform basic tasks such as driving, maintaining local situational awareness, surveillance, and targeting requires that a high-density array of real-time information be processed, distributed, and presented to the vehicle operators and crew in near real time (i.e., with low latency). Advances in display and sensor technologies are providing never before seen opportunities to supply large amounts of high-fidelity imagery and video to the vehicle operators and crew in real time. To fully realize the advantages of these emerging display and sensor technologies, an underlying digital architecture must be developed that is capable of processing these large amounts of video and data from separate sensor systems and distributing them simultaneously within the vehicle to multiple vehicle operators and crew. This paper examines the systems and software engineering efforts required to overcome these challenges and addresses development of an affordable, integrated digital video architecture. The approaches evaluated will give both current and future ground combat vehicle systems the flexibility to readily adopt emerging display and sensor technologies, while optimizing the Warfighter Machine Interface (WMI), minimizing lifecycle costs, and improving the survivability of the vehicle crew working in closed-hatch systems during complex ground combat operations.
Digital Light Processing update: status and future applications
NASA Astrophysics Data System (ADS)
Hornbeck, Larry J.
1999-05-01
Digital Light Processing (DLP) projection displays based on the Digital Micromirror Device (DMD) were introduced to the market in 1996. Less than 3 years later, DLP-based projectors are found in such diverse applications as mobile, conference room, video wall, home theater, and large-venue. They provide high-quality, seamless, all-digital images that have exceptional stability as well as freedom from both flicker and image lag. Marked improvements have been made in the image quality of DLP-based projection displays, including brightness, resolution, contrast ratio, and border image. DLP-based mobile projectors that weighed about 27 pounds in 1996 now weigh only about 7 pounds. This weight reduction has been responsible for the definition of an entirely new projector class, the ultraportable. New applications are being developed for this important new projection display technology; these include digital photofinishing for high process speed minilab and maxilab applications and DLP Cinema for the digital delivery of films to audiences around the world. This paper describes the status of DLP-based projection display technology, including its manufacturing, performance improvements, and new applications, with emphasis on DLP Cinema.
Remote stereoscopic video play platform for naked eyes based on the Android system
NASA Astrophysics Data System (ADS)
Jia, Changxin; Sang, Xinzhu; Liu, Jing; Cheng, Mingsheng
2014-11-01
As people's quality of life has improved significantly, traditional 2D video technology can no longer meet the urgent desire for better video quality, which has led to the rapid development of 3D video technology. At the same time, people want to watch 3D video on portable devices. To achieve this, we set up a remote stereoscopic video play platform. The platform consists of a server and clients. The server is used for transmission of different video formats, and the client is responsible for receiving remote video for subsequent decoding and pixel restructuring. We utilize and improve Live555 as the video transmission server. Live555 is a cross-platform open-source project that provides solutions for streaming media, such as the RTSP protocol, and supports transmission of multiple video formats. At the receiving end, we use our laboratory's own player for Android, which has all the basic functions of ordinary players and can play normal 2D video, as the basic structure for redevelopment; RTSP is also implemented in this structure for communication. In order to achieve stereoscopic display, we rearrange pixels in the player's decoding part. The decoding part is native code called through the JNI interface, so that video frames can be extracted more efficiently. The video formats that we process are left-and-right, top-and-bottom, and nine-grid. In the design and development, a number of key technologies from Android application development have been employed, including wireless transmission, pixel restructuring, and JNI calls. After some updates and optimizations, the video player can play remote 3D video well anytime and anywhere and meet people's requirements.
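The pixel-restructuring step for the left-and-right (side-by-side) format can be sketched as follows — a hypothetical column-interleave of the two half-width views, as used by many autostereoscopic panels; the player's real rearrangement runs in native code via JNI and depends on the target display's subpixel layout:

```python
def interleave_side_by_side(frame):
    """Rearrange a side-by-side stereo frame (list of rows of pixels)
    into a column-interleaved frame: even output columns come from the
    left view, odd output columns from the right view."""
    width = len(frame[0])
    half = width // 2
    out = []
    for row in frame:
        left, right = row[:half], row[half:]
        mixed = []
        for i in range(half):
            mixed.append(left[i])   # even output column <- left view
            mixed.append(right[i])  # odd output column  <- right view
        out.append(mixed)
    return out

# One row of 4 pixels: left view = L0, L1; right view = R0, R1
frame = [["L0", "L1", "R0", "R1"]]
print(interleave_side_by_side(frame))  # [['L0', 'R0', 'L1', 'R1']]
```

The top-and-bottom and nine-grid formats would follow the same pattern, interleaving rows or tiles instead of columns.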
Computer Graphics in Research: Some State -of-the-Art Systems
ERIC Educational Resources Information Center
Reddy, R.; And Others
1975-01-01
A description is given of the structure and functional characteristics of three types of interactive computer graphic systems, developed by the Department of Computer Science at Carnegie-Mellon; a high-speed programmable display capable of displaying 50,000 short vectors, flicker free; a shaded-color video display for the display of gray-scale…
Effects of blurring and vertical misalignment on visual fatigue of stereoscopic displays
NASA Astrophysics Data System (ADS)
Baek, Sangwook; Lee, Chulhee
2015-03-01
In this paper, we investigate two error issues in stereo images, which may produce visual fatigue. When two cameras are used to produce 3D video sequences, vertical misalignment can be a problem. Although this problem may not occur in professionally produced 3D programs, it is still a major issue in many low-cost 3D programs. Recently, efforts have been made to produce 3D video programs using smart phones or tablets, which may present the vertical alignment problem. Also, in 2D-3D conversion techniques, the simulated frame may have blur effects, which can also introduce visual fatigue in 3D programs. In this paper, to investigate the relationship between these two errors (vertical misalignment and blurring in one image), we performed a subjective test using simulated 3D video sequences that include stereo video sequences with various vertical misalignments and blurring in a stereo image. We present some analyses along with objective models to predict the degree of visual fatigue from vertical misalignment and blurring.
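The two error types under study — a vertical offset between the views and blur in one image — can be simulated as in the following sketch (the helper functions are hypothetical; the study's actual test sequences were produced with its own tooling):

```python
def vertical_shift(view, offset, fill=0):
    """Shift a single view (list of pixel rows) down by `offset` rows,
    padding the top with `fill` -- simulates vertical misalignment."""
    height = len(view)
    pad = [[fill] * len(view[0]) for _ in range(offset)]
    return (pad + view)[:height]

def horizontal_blur3(view):
    """3-tap box blur along each row -- simulates blur in one image."""
    out = []
    for row in view:
        n = len(row)
        blurred = []
        for j in range(n):
            window = row[max(0, j - 1):min(n, j + 2)]
            blurred.append(sum(window) / len(window))
        out.append(blurred)
    return out

left = [[10, 20, 30], [40, 50, 60], [70, 80, 90]]
# Right view derived from the left: misaligned by one row, then blurred.
right = horizontal_blur3(vertical_shift(left, 1))
```

Pairing the untouched `left` view with the degraded `right` view yields a stimulus with both error factors under experimenter control.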
Co-Located Collaborative Learning Video Game with Single Display Groupware
ERIC Educational Resources Information Center
Infante, Cristian; Weitz, Juan; Reyes, Tomas; Nussbaum, Miguel; Gomez, Florencia; Radovic, Darinka
2010-01-01
Role Game is a co-located CSCL video game played by three students sitting at one machine sharing a single screen, each with their own input device. Inspired by video console games, Role Game enables students to learn by doing, acquiring social abilities and mastering subject matter in a context of co-located collaboration. After describing the…
Federal Register 2010, 2011, 2012, 2013, 2014
2011-09-28
...In this document, the Commission proposes rules to implement provisions of the Twenty-First Century Communications and Video Accessibility Act of 2010 (``CVAA'') that mandate rules for closed captioning of certain video programming delivered using Internet protocol (``IP''). The Commission seeks comment on rules that would apply to the distributors, providers, and owners of IP-delivered video programming, as well as the devices that display such programming.
Objective analysis of image quality of video image capture systems
NASA Astrophysics Data System (ADS)
Rowberg, Alan H.
1990-07-01
As Picture Archiving and Communication System (PACS) technology has matured, video image capture has become a common way of capturing digital images from many modalities. While digital interfaces, such as those which use the ACR/NEMA standard, will become more common in the future, and are preferred because of the accuracy of image transfer, video image capture will be the dominant method in the short term, and may continue to be used for some time because of the low cost and high speed often associated with such devices. Currently, virtually all installed systems use methods of digitizing the video signal that is produced for display on the scanner viewing console itself. A series of digital test images have been developed for display on either a GE CT9800 or a GE Signa MRI scanner. These images have been captured with each of five commercially available image capture systems, and the resultant images digitally transferred on floppy disk to a PC1286 computer containing Optimast image analysis software. Here the images can be displayed in a comparative manner for visual evaluation, in addition to being analyzed statistically. Each of the images has been designed to support certain tests, including noise, accuracy, linearity, gray scale range, stability, slew rate, and pixel alignment. These image capture systems vary widely in these characteristics, in addition to the presence or absence of other artifacts, such as shading and moire pattern. Other accessories such as video distribution amplifiers and noise filters can also add or modify artifacts seen in the captured images, often giving unusual results. Each image is described, together with the tests which were performed using them. One image contains alternating black and white lines, each one pixel wide, after equilibration strips ten pixels wide.
While some systems have a slew rate fast enough to track this correctly, others blur it to an average shade of gray, and do not resolve the lines, or give horizontal or vertical streaking. While many of these results are significant from an engineering standpoint alone, there are clinical implications and some anatomy or pathology may not be visualized if an image capture system is used improperly.
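The alternating-line pattern described above is simple to regenerate; a sketch of one test row (0 = black, 255 = white), assuming the layout stated in the text of ten-pixel equilibration strips followed by single-pixel alternation:

```python
def slew_rate_test_row(width, strip=10):
    """One row of the slew-rate test image: a black equilibration strip,
    a white equilibration strip, then alternating 1-pixel black/white lines."""
    row = [0] * strip + [255] * strip            # ten-pixel equilibration strips
    while len(row) < width:
        row.append(0 if len(row) % 2 == 0 else 255)  # alternating single pixels
    return row[:width]

row = slew_rate_test_row(32)
print(row)  # strips of 0s and 255s, then 0, 255, 0, 255, ...
```

A capture system with an adequate slew rate reproduces the full 0-to-255 swings of the single-pixel region; a slow one averages them toward mid-gray, which is exactly the failure mode the abstract describes.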
Direct measurements of protein-stabilized gold nanoparticle interactions.
Eichmann, Shannon L; Bevan, Michael A
2010-09-21
We report integrated video and total internal reflection microscopy measurements of protein-stabilized 110 nm Au nanoparticles confined in 280 nm gaps in physiological media. Measured potential energy profiles display quantitative agreement with Brownian dynamics simulations that include hydrodynamic interactions and camera exposure time and noise effects. Our results demonstrate agreement between measured nonspecific van der Waals and adsorbed protein interactions with theoretical potentials. Confined, lateral nanoparticle diffusivity measurements also display excellent agreement with predictions. These findings provide a basis to interrogate specific biomacromolecular interactions in similar experimental configurations and to design future improved measurement methods.
Heo, Hwan; Lee, Won Oh; Shin, Kwang Yong; Park, Kang Ryoung
2014-05-15
We propose a new method for measuring the degree of eyestrain on 3D stereoscopic displays using a glasses-type eye tracking device. Our study is novel in the following four ways: first, the circular area where a user's gaze position exists is defined based on the calculated gaze position and gaze estimation error. Within this circular area, the position where edge strength is maximized can be detected, and we determine this position as the gaze position that has a higher probability of being the correct one. Based on this gaze point, the eye foveation model is defined. Second, we quantitatively evaluate the correlation between the degree of eyestrain and the causal factors of visual fatigue, such as the degree of change of stereoscopic disparity (CSD), stereoscopic disparity (SD), frame cancellation effect (FCE), and edge component (EC) of the 3D stereoscopic display using the eye foveation model. Third, by comparing the eyestrain in conventional 3D video and experimental 3D sample video, we analyze the characteristics of eyestrain according to various factors and types of 3D video. Fourth, by comparing the eyestrain with or without the compensation of eye saccade movements in 3D video, we analyze the characteristics of eyestrain according to the types of eye movements in 3D video. Experimental results show that the degree of CSD causes more eyestrain than other factors.
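The first step — snapping the estimated gaze position to the point of maximum edge strength within the circle defined by the gaze-estimation error — can be sketched as follows (toy edge-strength map; the actual system derives edge strength from the scene image and feeds the refined point into the foveation model):

```python
def refine_gaze(edge_strength, gaze, radius):
    """Within a circle of `radius` around the estimated `gaze` (x, y),
    return the pixel with maximum edge strength -- the refined gaze point
    assumed more likely to be the true fixation."""
    gx, gy = gaze
    best, best_pos = -1.0, gaze
    for y, row in enumerate(edge_strength):
        for x, s in enumerate(row):
            if (x - gx) ** 2 + (y - gy) ** 2 <= radius ** 2 and s > best:
                best, best_pos = s, (x, y)
    return best_pos

edges = [
    [0, 1, 0, 0],
    [0, 2, 9, 0],   # strong edge at (x=2, y=1)
    [0, 0, 3, 0],
]
print(refine_gaze(edges, gaze=(1, 1), radius=1.5))  # (2, 1)
```

The radius would be set from the measured gaze-estimation error of the eye tracker, so the search never strays outside the uncertainty region.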
Knowledge representation in space flight operations
NASA Technical Reports Server (NTRS)
Busse, Carl
1989-01-01
In space flight operations, rapid understanding of the state of the space vehicle is essential. Representation of knowledge depicting space vehicle status in a dynamic environment presents a difficult challenge. The NASA Jet Propulsion Laboratory has pursued areas of technology associated with the advancement of the spacecraft operations environment. This has led to the development of several advanced mission systems which incorporate enhanced graphics capabilities. These systems include: (1) Spacecraft Health Automated Reasoning Prototype (SHARP); (2) Spacecraft Monitoring Environment (SME); (3) Electrical Power Data Monitor (EPDM); (4) Generic Payload Operations Control Center (GPOCC); and (5) Telemetry System Monitor Prototype (TSM). Knowledge representation in these systems provides a direct representation of the intrinsic images associated with the instrument and satellite telemetry and telecommunications systems. The man-machine interface includes easily interpreted contextual graphic displays. These interactive video displays contain multiple display screens with pop-up windows and intelligent, high-resolution graphics linked through context- and mouse-sensitive icons and text.
NASA Technical Reports Server (NTRS)
Diner, Daniel B. (Inventor); Venema, Steven C. (Inventor)
1991-01-01
A system for real-time video image display for robotics or remote-vehicle teleoperation is described that has at least one robot arm or remotely operated vehicle controlled by an operator through hand-controllers, and one or more television cameras and optional lighting element. The system has at least one television monitor for display of a television image from a selected camera and the ability to select one of the cameras for image display. Graphics are generated with icons of cameras and lighting elements for display surrounding the television image to provide the operator information on: the location and orientation of each camera and lighting element; the region of illumination of each lighting element; the viewed region and range of focus of each camera; which camera is currently selected for image display for each monitor; and when the controller coordinate for said robot arms or remotely operated vehicles have been transformed to correspond to coordinates of a selected or nonselected camera.
Composite video and graphics display for camera viewing systems in robotics and teleoperation
NASA Technical Reports Server (NTRS)
Diner, Daniel B. (Inventor); Venema, Steven C. (Inventor)
1993-01-01
A system for real-time video image display for robotics or remote-vehicle teleoperation is described that has at least one robot arm or remotely operated vehicle controlled by an operator through hand-controllers, and one or more television cameras and optional lighting element. The system has at least one television monitor for display of a television image from a selected camera and the ability to select one of the cameras for image display. Graphics are generated with icons of cameras and lighting elements for display surrounding the television image to provide the operator information on: the location and orientation of each camera and lighting element; the region of illumination of each lighting element; the viewed region and range of focus of each camera; which camera is currently selected for image display for each monitor; and when the controller coordinate for said robot arms or remotely operated vehicles have been transformed to correspond to coordinates of a selected or nonselected camera.
Help for the Visually Impaired
NASA Technical Reports Server (NTRS)
1995-01-01
The Low Vision Enhancement System (LVES) is a video headset that offers people with low vision a view of their surroundings equivalent to the image on a five-foot television screen four feet from the viewer. It will not make the blind see but for many people with low vision, it eases everyday activities such as reading, watching TV and shopping. LVES was developed over almost a decade of cooperation between Stennis Space Center, the Wilmer Eye Institute of the Johns Hopkins Medical Institutions, the Department of Veteran Affairs, and Visionics Corporation. With the aid of Stennis scientists, Wilmer researchers used NASA technology for computer processing of satellite images and head-mounted vision enhancement systems originally intended for the space station. The unit consists of a head-mounted video display, three video cameras, and a control unit for the cameras. The cameras feed images to the video display in the headset.
ERIC Educational Resources Information Center
Dwyer, Paul F.
Drawing on testimony presented at hearings before the Subcommittee on Health and Safety of the House of Representatives conducted between February 28 and June 12, 1984, this staff report addresses the general topic of video display terminals (VDTs) and possible health hazards in the workplace. An introduction presents the history of the…
Rapid Damage Assessment. Volume II. Development and Testing of Rapid Damage Assessment System.
1981-02-01
...pixels/s. Camera Line Rate: 732.4 lines/s. Pixels per Line: 1728 video + 314 blank + 4 line number (binary) + 2 run number (BCD) = 2048 total. Pixel Resolution: 8 bits. ...sists of an LSI-11 microprocessor, a VDI-200 video display processor, an FD-2 dual floppy diskette subsystem, an FT-1 function key-trackball module... COMPONENT LIST FOR IMAGE PROCESSOR SYSTEM: VDI-200 Display Processor; Racks, Table; FD-2 Dual Floppy Diskette Subsystem; FT-1...
The Development of the AFIT Communications Laboratory and Experiments for Communications Students.
1985-12-01
Activates digital storage and permits monitoring of maximum and minimum signal excursions over an indefinite time... selects the "A" or "B"... level at which the vertical display is either peak detected or digitally averaged. Video signals above the level set by the... Video signals below the level set by the PEAK AVERAGE control are digitally averaged and stored. VERT POS positions the display or baseline...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Olson, B.M.
1985-01-01
The USAF OEHL conducted an extensive literature review of Video Display Terminals (VDTs) and the health problems commonly associated with them. The report is presented in a question-and-answer format in an attempt to paraphrase the most commonly asked questions about VDTs that are forwarded to USAF OEHL/RZN. The questions and answers have been divided into several topic areas: Ionizing Radiation; Nonionizing Radiation; Optical Radiation; Ultrasound; Static Electricity; Health Complaints/Ergonomics; Pregnancy.
Large-screen display technology assessment for military applications
NASA Astrophysics Data System (ADS)
Blaha, Richard J.
1990-08-01
Full-color, large screen display systems can enhance military applications that require group presentation, coordinated decisions, or interaction between decision makers. The technology already plays an important role in operations centers, simulation facilities, conference rooms, and training centers. Some applications display situational, status, or briefing information, while others portray instructional material for procedural training or depict realistic panoramic scenes that are used in simulators. While each specific application requires unique values of luminance, resolution, response time, reliability, and the video interface, suitable performance can be achieved with available commercial large screen displays. Advances in the technology of large screen displays are driven by the commercial applications because the military applications do not provide the significant market share enjoyed by high definition television (HDTV), entertainment, advertisement, training, and industrial applications. This paper reviews the status of full-color, large screen display technologies and includes the performance and cost metrics of available systems. For this discussion, performance data is based upon either measurements made by our personnel or extractions from vendors' data sheets.
Display Considerations For Intravascular Ultrasonic Imaging
NASA Astrophysics Data System (ADS)
Gessert, James M.; Krinke, Charlie; Mallery, John A.; Zalesky, Paul J.
1989-08-01
A display has been developed for intravascular ultrasonic imaging. Design of this display has a primary goal of providing guidance information for therapeutic interventions such as balloons, lasers, and atherectomy devices. Design considerations include catheter configuration, anatomy, acoustic properties of normal and diseased tissue, catheterization laboratory and operating room environment, acoustic and electrical safety, acoustic data sampling issues, and logistical support such as image measurement, storage and retrieval. Intravascular imaging is in an early stage of development so design flexibility and expandability are very important. The display which has been developed is capable of acquisition and display of grey scale images at rates varying from static B-scans to 30 frames per second. It stores images in a 640 x 480 x 8-bit format and is capable of black-and-white as well as color display in multiple video formats. The design is based on the industry standard PC-AT architecture and consists of two AT-style circuit cards, one for high speed sampling and the other for scan conversion, graphics and video generation.
ERIC Educational Resources Information Center
Huang, Hsiu-Mei; Liaw, Shu-Sheng; Lai, Chung-Min
2016-01-01
Advanced technologies have been widely applied in medical education, including human-patient simulators, immersive virtual reality Cave Automatic Virtual Environment systems, and video conferencing. Evaluating learner acceptance of such virtual reality (VR) learning environments is a critical issue for ensuring that such technologies are used to…
People with Hemianopia Report Difficulty with TV, Computer, Cinema Use, and Photography.
Costela, Francisco M; Sheldon, Sarah S; Walker, Bethany; Woods, Russell L
2018-05-01
Our survey found that participants with hemianopia report more difficulties watching video in various formats, including television (TV), on computers, and in a movie theater, compared with participants with normal vision (NV). These reported difficulties were not as marked as those reported by people with central vision loss. The aim of this study was to survey the viewing experience (e.g., frequency, difficulty) of viewing video on TV, computers and portable visual display devices, and at the cinema of people with hemianopia and NV. This information may guide vision rehabilitation. We administered a cross-sectional survey to investigate the viewing habits of people with hemianopia (n = 91) or NV (n = 192). The survey, consisting of 22 items, was administered either in person or in a telephone interview. Descriptive statistics are reported. There were five major differences between the hemianopia and NV groups. Many participants with hemianopia reported (1) at least "some" difficulty watching TV (39/82); (2) at least "some" difficulty watching video on a computer (16/62); (3) never attending the cinema (30/87); (4) at least some difficulty watching movies in the cinema (20/56), among those who did attend the cinema; and (5) never taking photographs (24/80). Some people with hemianopia reported methods that they used to help them watch video, including video playback and head turn. Although people with hemianopia report more difficulty with viewing video on TV and at the cinema, we are not aware of any rehabilitation methods specifically designed to assist people with hemianopia to watch video. The results of this survey may guide future vision rehabilitation.
Video quality assessment using M-SVD
NASA Astrophysics Data System (ADS)
Tao, Peining; Eskicioglu, Ahmet M.
2007-01-01
Objective video quality measurement is a challenging problem in a variety of video processing applications ranging from lossy compression to printing. An ideal video quality measure should be able to mimic the human observer. We present a new video quality measure, M-SVD, to evaluate distorted video sequences based on singular value decomposition. A computationally efficient approach is developed for full-reference (FR) video quality assessment. This measure is tested on the Video Quality Experts Group (VQEG) phase I FR-TV test data set. Our experiments show that the graphical measure displays the amount of distortion as well as the distribution of error in all frames of the video sequence, while the numerical measure correlates well with perceived video quality and outperforms PSNR and other objective measures by a clear margin.
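The block-SVD idea behind such a measure can be sketched as follows. This is a simplified single-frame version under assumptions (grayscale frames, 8x8 blocks, plain mean aggregation), not the authors' exact M-SVD formulation, which derives its global score from the distribution of block distances.

```python
import numpy as np

def block_svd_distance(ref, dist, block=8):
    """Mean Euclidean distance between the singular-value vectors of
    co-located blocks in a reference and a distorted frame."""
    h, w = ref.shape
    scores = []
    for i in range(0, h - h % block, block):
        for j in range(0, w - w % block, block):
            s_ref = np.linalg.svd(ref[i:i+block, j:j+block], compute_uv=False)
            s_dst = np.linalg.svd(dist[i:i+block, j:j+block], compute_uv=False)
            scores.append(np.sqrt(np.sum((s_ref - s_dst) ** 2)))
    return float(np.mean(scores))
```

Keeping the per-block scores (rather than only the mean) is what lets a graphical measure show where in the frame the distortion concentrates.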
A generic flexible and robust approach for intelligent real-time video-surveillance systems
NASA Astrophysics Data System (ADS)
Desurmont, Xavier; Delaigle, Jean-Francois; Bastide, Arnaud; Macq, Benoit
2004-05-01
In this article we present a generic, flexible and robust approach for an intelligent real-time video-surveillance system. A previous version of the system was presented in [1]. The goal of these advanced tools is to help operators by detecting events of interest in visual scenes, highlighting alarms, and computing statistics. The proposed system is a multi-camera platform able to handle different standards of video inputs (composite, IP, IEEE1394) and which can compress (MPEG4), store and display them. This platform also integrates advanced video analysis tools, such as motion detection, segmentation, tracking and interpretation. The design of the architecture is optimised for playing back, displaying, and processing video flows efficiently in video-surveillance applications. The implementation is distributed on a scalable computer cluster based on Linux and IP networking. It relies on POSIX threads for multitasking scheduling. Data flows are transmitted between the different modules using multicast technology under the control of a TCP-based command network (e.g. for bandwidth occupation control). We report some results and show the potential use of such a flexible system in third-generation video surveillance systems. We illustrate the interest of the system in a real case study: indoor surveillance.
Code of Federal Regulations, 2010 CFR
2010-01-01
... carriers must use an equivalent non-video alternative for transmitting the briefing to passengers with... audio-visual displays played on aircraft for informational purposes that were created under your control...
Code of Federal Regulations, 2011 CFR
2011-01-01
... carriers must use an equivalent non-video alternative for transmitting the briefing to passengers with... audio-visual displays played on aircraft for informational purposes that were created under your control...
Code of Federal Regulations, 2014 CFR
2014-01-01
... carriers must use an equivalent non-video alternative for transmitting the briefing to passengers with... audio-visual displays played on aircraft for informational purposes that were created under your control...
Predictable Programming on a Precision Timed Architecture
2008-04-18
Application: A Video Game. Figure 6: Structure of the Video Game Example. Inspired by an example game supplied with the Hydra development board [17]... we implemented a simple video game in C targeted to our PRET architecture. Our example centers on rendering graphics and is otherwise fairly simple... background image. Figure 10: A Screen Dump From Our Video Game. Ultimately, each displayed pixel is one of only four colors, but the pixels in
Markerless client-server augmented reality system with natural features
NASA Astrophysics Data System (ADS)
Ning, Shuangning; Sang, Xinzhu; Chen, Duo
2017-10-01
A markerless client-server augmented reality system is presented. In this research, the more extensive and mature virtual reality head-mounted display is adopted to assist the implementation of augmented reality. The head-mounted display presents an image directly in front of the viewer's eyes. The front-facing camera captures video signals into the workstation, where the generated virtual scene is merged with the outside-world information received from the camera, and the integrated video is sent to the helmet display system. The distinguishing feature and novelty is the use of natural features instead of a marker, which addresses the marker's limitations: it is only black and white, it is unsuited to many environmental conditions, and in particular it fails when partially blocked. Further, 3D stereoscopic perception of the virtual animation model is achieved. A high-speed and stable native socket communication method is adopted for transmission of the key video stream data, which reduces the computational burden of the system.
Naval Research Laboratory 1984 Review.
1985-07-16
...pulsed infrared sources and electronics for video signal processing... comprehensive characterization of ultrahigh-transparency fluoride glasses and... operates a video system through this port if desired. The optical bench in the trailer holds a high-resolution Fourier transform spectrometer to use in the receiving..., consisting of visible and infrared television cameras, a high-quality video cassette recorder and display, and a digitizer to convert...
Fast repurposing of high-resolution stereo video content for mobile use
NASA Astrophysics Data System (ADS)
Karaoglu, Ali; Lee, Bong Ho; Boev, Atanas; Cheong, Won-Sik; Gotchev, Atanas
2012-06-01
3D video content is captured and created mainly in high resolution targeting big cinema or home TV screens. For 3D mobile devices, equipped with small-size auto-stereoscopic displays, such content has to be properly repurposed, preferably in real time. The repurposing requires not only spatial resizing but also properly maintaining the output stereo disparity, as it should deliver realistic, pleasant and harmless 3D perception. In this paper, we propose an approach to adapt the disparity range of the source video to the comfort disparity zone of the target display. To achieve this, we adapt the scale and the aspect ratio of the source video. We aim at maximizing the disparity range of the retargeted content within the comfort zone, and minimizing the letterboxing of the cropped content. The proposed algorithm consists of five stages. First, we analyse the display profile, which characterises what 3D content can be comfortably observed on the target display. Then, we perform fast disparity analysis of the input stereoscopic content. Instead of returning the dense disparity map, it returns an estimate of the disparity statistics (min, max, mean and variance) per frame. Additionally, we detect scene cuts, where sharp transitions in disparities occur. Based on the estimated input and desired output disparity ranges, we derive the optimal cropping parameters and scale of the cropping window, which would yield the targeted disparity range and minimize the area of cropped and letterboxed content. Once the rescaling and cropping parameters are known, we perform a resampling procedure using spline-based and perceptually optimized resampling (anti-aliasing) kernels, which also have a very efficient computational structure. Perceptual optimization is achieved by adjusting the cut-off frequency of the anti-aliasing filter to the throughput of the target display.
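The core disparity-mapping step can be illustrated with a linear sketch: given the measured disparity extremes and the display's comfort zone, pick the largest uniform spatial scale that keeps all disparities comfortable (rescaling a stereo pair by s scales its screen disparities by s as well). The function name and the sign convention (negative = crossed disparity, comfort zone straddling zero) are assumptions; the published algorithm additionally optimizes the cropping window.

```python
def retarget_scale(d_min, d_max, c_min, c_max):
    """Largest uniform scale s mapping the measured disparity range
    [d_min, d_max] (pixels) into the comfort zone [c_min, c_max]."""
    candidates = []
    if d_max > 0:
        candidates.append(c_max / d_max)   # constraint from the far extreme
    if d_min < 0:
        candidates.append(c_min / d_min)   # both negative, so the ratio is positive
    return min(candidates) if candidates else 1.0
```

For example, a source spanning [-10, 20] px retargeted to a comfort zone of [-5, 30] px is limited by the crossed-disparity side, giving s = 0.5.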
PCI-based WILDFIRE reconfigurable computing engines
NASA Astrophysics Data System (ADS)
Fross, Bradley K.; Donaldson, Robert L.; Palmer, Douglas J.
1996-10-01
WILDFORCE is the first PCI-based custom reconfigurable computer built on the Splash 2 technology transferred from the National Security Agency and the Institute for Defense Analyses, Supercomputing Research Center (SRC). The WILDFORCE architecture has many of the features of the WILDFIRE computer, such as field-programmable gate array (FPGA) based processing elements, linear array and crossbar interconnection, and high-performance memory and I/O subsystems. New features introduced in the PCI-based WILDFIRE systems include memory/processor options that can be added to any processing element. These options include static and dynamic memory, digital signal processors (DSPs), FPGAs, and microprocessors. In addition to memory/processor options, many different application-specific connectors can be used to extend the I/O capabilities of the system, including systolic I/O, camera input and video display output. This paper also discusses how this new PCI-based reconfigurable computing engine is used for rapid prototyping, real-time video processing and other DSP applications.
Innovative railroad information displays : video guide
DOT National Transportation Integrated Search
1998-01-01
The objectives of this study were to explore the potential of advanced digital technology, : novel concepts of information management, geographic information databases and : display capabilities in order to enhance planning and decision-making proces...
Development of 40-in hybrid hologram screen for auto-stereoscopic video display
NASA Astrophysics Data System (ADS)
Song, Hyun Ho; Nakashima, Y.; Momonoi, Y.; Honda, Toshio
2004-06-01
Auto-stereoscopic displays usually face two problems: first, a large image is difficult to display; second, the view zone (the zone in which both eyes must be placed to observe the stereoscopic or 3-D image) is very narrow. We have been developing an auto-stereoscopic large video display system (over 100 inches diagonal) that a few people can view simultaneously [1,2]. Displays over 100 inches diagonal usually use an optical video projection system. The hologram screen has been proposed as one type of auto-stereoscopic display system [3-6]. However, if the hologram screen becomes too large, the view zone (corresponding to the reconstructed diffused object) suffers color dispersion and color aberration [7]. We therefore proposed attaching an additional Fresnel lens to the hologram screen; we call this a "hybrid hologram screen" (HHS for short). We made an HHS of 866 mm (H) x 433 mm (V), about 40 inches diagonal [8-11]. By using the lens in the reconstruction step, the angle between object light and reference light can be kept small compared to a screen without the lens, so the spread of the view zone caused by color dispersion and color aberration becomes small. Also, the virtual image reconstructed from the hologram screen can be transformed into a real image (view zone), so a large lens or concave mirror is not necessary when making a large hologram screen.
Perceptual tools for quality-aware video networks
NASA Astrophysics Data System (ADS)
Bovik, A. C.
2014-01-01
Monitoring and controlling the quality of the viewing experience of videos transmitted over increasingly congested networks (especially wireless networks) is a pressing problem owing to rapid advances in video-centric mobile communication and display devices that are straining the capacity of the network infrastructure. New developments in automatic perceptual video quality models offer tools that have the potential to be used to perceptually optimize wireless video, leading to more efficient video data delivery and better received quality. In this talk I will review key perceptual principles that are, or could be used to create effective video quality prediction models, and leading quality prediction models that utilize these principles. The goal is to be able to monitor and perceptually optimize video networks by making them "quality-aware."
Representing videos in tangible products
NASA Astrophysics Data System (ADS)
Fageth, Reiner; Weiting, Ralf
2014-03-01
Videos can be taken with nearly every camera: digital point-and-shoot cameras, DSLRs, smartphones and, increasingly, so-called action cameras mounted on sports devices. The software implementation that embeds videos in printed products by generating QR codes and extracting relevant pictures from the video stream was the content of last year's paper. This year we present first data about what content is displayed and how users represent their videos in printed products, e.g. CEWE PHOTOBOOKS and greeting cards. We report the share of the different video formats used, the number of images extracted from each video in order to represent it, their positions in the book, and different design strategies compared to regular books.
Video-speed electronic paper based on electrowetting
NASA Astrophysics Data System (ADS)
Hayes, Robert A.; Feenstra, B. J.
2003-09-01
In recent years, a number of different technologies have been proposed for use in reflective displays. One of the most appealing applications of a reflective display is electronic paper, which combines the desirable viewing characteristics of conventional printed paper with the ability to manipulate the displayed information electronically. Electronic paper based on the electrophoretic motion of particles inside small capsules has been demonstrated and commercialized; but the response speed of such a system is rather slow, limited by the velocity of the particles. Recently, we have demonstrated that electrowetting is an attractive technology for the rapid manipulation of liquids on a micrometre scale. Here we show that electrowetting can also be used to form the basis of a reflective display that is significantly faster than electrophoretic displays, so that video content can be displayed. Our display principle utilizes the voltage-controlled movement of a coloured oil film adjacent to a white substrate. The reflectivity and contrast of our system approach those of paper. In addition, we demonstrate a colour concept, which is intrinsically four times brighter than reflective liquid-crystal displays and twice as bright as other emerging technologies. The principle of microfluidic motion at low voltages is applicable in a wide range of electro-optic devices.
Ohira, Yoshiyuki; Uehara, Takanori; Noda, Kazutaka; Suzuki, Shingo; Shikino, Kiyoshi; Kajiwara, Hideki; Kondo, Takeshi; Hirota, Yusuke; Ikusaka, Masatomi
2017-01-01
Objectives We examined whether problem-based learning tutorials using patient-simulated videos showing daily life are more practical for clinical learning, compared with traditional paper-based problem-based learning, for the consideration rate of psychosocial issues and the recall rate for experienced learning. Methods Twenty-two groups with 120 fifth-year students were each assigned paper-based problem-based learning and video-based problem-based learning using patient-simulated videos. We compared target achievement rates in questionnaires using the Wilcoxon signed-rank test and discussion contents diversity using the Mann-Whitney U test. A follow-up survey used a chi-square test to measure students’ recall of cases in three categories: video, paper, and non-experienced. Results Video-based problem-based learning displayed significantly higher achievement rates for imagining authentic patients (p=0.001), incorporating a comprehensive approach including psychosocial aspects (p<0.001), and satisfaction with sessions (p=0.001). No significant differences existed in the discussion contents diversity regarding the International Classification of Primary Care Second Edition codes and chapter types or in the rate of psychological codes. In a follow-up survey comparing video and paper groups to non-experienced groups, the rates were higher for video (χ2=24.319, p<0.001) and paper (χ2=11.134, p=0.001). Although the video rate tended to be higher than the paper rate, no significant difference was found between the two. Conclusions Patient-simulated videos showing daily life facilitate imagining true patients and support a comprehensive approach that fosters better memory. The clinical patient-simulated video method is more practical and clinical problem-based tutorials can be implemented if we create patient-simulated videos for each symptom as teaching materials. PMID:28245193
Ikegami, Akiko; Ohira, Yoshiyuki; Uehara, Takanori; Noda, Kazutaka; Suzuki, Shingo; Shikino, Kiyoshi; Kajiwara, Hideki; Kondo, Takeshi; Hirota, Yusuke; Ikusaka, Masatomi
2017-02-27
We examined whether problem-based learning tutorials using patient-simulated videos showing daily life are more practical for clinical learning, compared with traditional paper-based problem-based learning, for the consideration rate of psychosocial issues and the recall rate for experienced learning. Twenty-two groups with 120 fifth-year students were each assigned paper-based problem-based learning and video-based problem-based learning using patient-simulated videos. We compared target achievement rates in questionnaires using the Wilcoxon signed-rank test and discussion contents diversity using the Mann-Whitney U test. A follow-up survey used a chi-square test to measure students' recall of cases in three categories: video, paper, and non-experienced. Video-based problem-based learning displayed significantly higher achievement rates for imagining authentic patients (p=0.001), incorporating a comprehensive approach including psychosocial aspects (p<0.001), and satisfaction with sessions (p=0.001). No significant differences existed in the discussion contents diversity regarding the International Classification of Primary Care Second Edition codes and chapter types or in the rate of psychological codes. In a follow-up survey comparing video and paper groups to non-experienced groups, the rates were higher for video (χ2=24.319, p<0.001) and paper (χ2=11.134, p=0.001). Although the video rate tended to be higher than the paper rate, no significant difference was found between the two. Patient-simulated videos showing daily life facilitate imagining true patients and support a comprehensive approach that fosters better memory. The clinical patient-simulated video method is more practical and clinical problem-based tutorials can be implemented if we create patient-simulated videos for each symptom as teaching materials.
An iPod treatment of amblyopia: an updated binocular approach.
Hess, Robert F; Thompson, B; Black, J M; Machara, G; Zhang, P; Bobier, W R; Cooperstock, J
2012-02-15
We describe the successful translation of computerized, space-consuming laboratory equipment for the treatment of suppression to a small handheld iPod device (Apple iPod; Apple Inc., Cupertino, California). A portable and easily obtainable Apple iPod display, using current video technology, offers an ideal solution for the clinical treatment of suppression. The following is a description of the iPod device and illustrates how a video game has been adapted to provide the appropriate stimulation to implement our recent antisuppression treatment protocol. One to 2 hours per day of video game playing under controlled conditions for 1 to 3 weeks can improve acuity and restore binocular function, including stereopsis in adults, well beyond the age at which traditional patching is used. This handheld device provides a convenient and effective platform for implementing the newly proposed binocular treatment of amblyopia in the clinic, home, or elsewhere. American Optometric Association.
Walker, H Jack; Feild, Hubert S; Giles, William F; Armenakis, Achilles A; Bernerth, Jeremy B
2009-09-01
This study investigated participants' reactions to employee testimonials presented on recruitment Web sites. The authors manipulated the presence of employee testimonials, richness of media communicating testimonials (video with audio vs. picture with text), and representation of racial minorities in employee testimonials. Participants were more attracted to organizations and perceived information as more credible when testimonials were included on recruitment Web sites. Testimonials delivered via video with audio had higher attractiveness and information credibility ratings than those given via picture with text. Results also showed that Blacks responded more favorably, whereas Whites responded more negatively, to the recruiting organization as the proportion of minorities shown giving testimonials on the recruitment Web site increased. However, post hoc analyses revealed that use of a richer medium (video with audio vs. picture with text) to communicate employee testimonials tended to attenuate these racial effects.
NASA Technical Reports Server (NTRS)
2002-01-01
Dimension Technologies Inc. developed a line of 2-D/3-D Liquid Crystal Display (LCD) screens, including a 15-inch model priced at consumer levels. DTI's family of flat panel LCD displays, called the Virtual Window(TM), provides real-time 3-D images without the use of glasses, head trackers, helmets, or other viewing aids. Most of the company's initial 3-D display research was funded through NASA's Small Business Innovation Research (SBIR) program. The images on DTI's displays appear to leap off the screen and hang in space. The display accepts input from computers or stereo video sources, and can be switched from 3-D to full-resolution 2-D viewing with the push of a button. The Virtual Window displays have applications in data visualization, medicine, architecture, business, real estate, entertainment, and other research, design, military, and consumer applications. Displays are currently used for computer games, protein analysis, and surgical imaging. The technology greatly benefits the medical field, as surgical simulators are helping to increase the skills of surgical residents. Virtual Window(TM) is a trademark of Dimension Technologies Inc.
Graphic overlays in high-precision teleoperation: Current and future work at JPL
NASA Technical Reports Server (NTRS)
Diner, Daniel B.; Venema, Steven C.
1989-01-01
In space teleoperation additional problems arise, including signal transmission time delays. These can greatly reduce operator performance. Recent advances in graphics open new possibilities for addressing these and other problems. Currently a multi-camera system with normal 3-D TV and video graphics capabilities is being developed. Trained and untrained operators will be tested for high precision performance using two force reflecting hand controllers and a voice recognition system to control two robot arms and up to 5 movable stereo or non-stereo TV cameras. A number of new techniques of integrating TV and video graphics displays to improve operator training and performance in teleoperation and supervised automation are evaluated.
Competitive action video game players display rightward error bias during on-line video game play.
Roebuck, Andrew J; Dubnyk, Aurora J B; Cochran, David; Mandryk, Regan L; Howland, John G; Harms, Victoria
2017-09-12
Research in asymmetrical visuospatial attention has identified a leftward bias in the general population across a variety of measures, including visual attention and line-bisection tasks. In addition, increases in rightward collisions, or bumping, during visuospatial navigation tasks have been demonstrated in real-world and virtual environments. However, little research has investigated these biases beyond the laboratory. The present study uses a semi-naturalistic approach and the online video game streaming service Twitch to examine navigational errors and assaults as skilled action video game players (n = 60) compete in Counter Strike: Global Offensive. This study showed a significant rightward bias in both fatal assaults and navigational errors. Analysis using the in-game ranking system as a measure of skill failed to show a relationship between bias and skill. These results suggest that a leftward visuospatial bias may exist in skilled players during online video game play. However, the present study was unable to account for some factors such as environmental symmetry and player handedness. In conclusion, video game streaming is a promising method for behavioural research; however, further study is required before one can determine whether these results are an artefact of the method applied or representative of a genuine rightward bias.
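The left/right error comparison at the heart of such a bias analysis can be run as an exact two-sided sign test on error counts. This is a generic statistical sketch, not necessarily the analysis the authors performed.

```python
from math import comb

def two_sided_sign_test(right, left):
    """Exact two-sided binomial test of whether errors are equally
    likely to be rightward or leftward (p = 0.5 under the null)."""
    n = right + left
    k = max(right, left)
    # P(X >= k) for X ~ Binomial(n, 0.5), doubled for a two-sided test
    tail = sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n
    return min(1.0, 2 * tail)
```

With balanced counts the test returns 1.0; a strongly skewed split such as 9 rightward vs 1 leftward errors falls below the conventional 0.05 threshold.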
Individual recognition based on communication behaviour of male fowl.
Smith, Carolynn L; Taubert, Jessica; Weldon, Kimberly; Evans, Christopher S
2016-04-01
Correctly directing social behaviour towards a specific individual requires an ability to discriminate between conspecifics. The mechanisms of individual recognition include phenotype matching and familiarity-based recognition. Communication-based recognition is a subset of familiarity-based recognition wherein the classification is based on behavioural or distinctive signalling properties. Male fowl (Gallus gallus) produce a visual display (tidbitting) upon finding food in the presence of a female. Females typically approach displaying males. However, males may tidbit without food. We used the distinctiveness of the visual display and the unreliability of some males to test for communication-based recognition in female fowl. We manipulated the prior experience of the hens with the males to create two classes of males: S(+) wherein the tidbitting signal was paired with a food reward to the female, and S(-) wherein the tidbitting signal occurred without food reward. We then conducted a sequential discrimination test with hens using a live video feed of a familiar male. The results of the discrimination tests revealed that hens discriminated between categories of males based on their signalling behaviour. These results suggest that fowl possess a communication-based recognition system. This is the first demonstration of live-to-video transfer of recognition in any species of bird. Copyright © 2016 Elsevier B.V. All rights reserved.
Biological Response to the Dynamic Spectral-Polarized Underwater Light Field
2009-01-01
...George W. Kattawar, Department of Physics, Texas A&M... temperature sensors plus 3-dimensional accelerometers (all sampled at 1 Hz, Figure 8). Videos revealed a squid flickering display that is visually similar to... Applications include anti-submarine warfare, special operations, clandestine reconnaissance, and harbor security operations. RELATED PROJECTS: The CCNY group
Delay/Disruption Tolerant Networks for Human Space Flight Video Project
NASA Technical Reports Server (NTRS)
Fink, Patrick W.; Ngo, Phong; Schlesinger, Adam
2010-01-01
The movie describes collaboration between NASA and Vint Cerf on the development of Disruption Tolerant Networks (DTN) for use in space exploration. Current evaluation efforts at Johnson Space Center are focused on the use of DTNs in space communications. Tests include the ability of rovers to store data for later display, tracking local and remote habitat inventory using radio-frequency identification tags, and merging networks.
Heo, Hwan; Lee, Won Oh; Shin, Kwang Yong; Park, Kang Ryoung
2014-01-01
We propose a new method for measuring the degree of eyestrain on 3D stereoscopic displays using a glasses-type of eye tracking device. Our study is novel in the following four ways: first, the circular area where a user's gaze position exists is defined based on the calculated gaze position and gaze estimation error. Within this circular area, the position where edge strength is maximized can be detected, and we determine this position as the gaze position that has a higher probability of being the correct one. Based on this gaze point, the eye foveation model is defined. Second, we quantitatively evaluate the correlation between the degree of eyestrain and the causal factors of visual fatigue, such as the degree of change of stereoscopic disparity (CSD), stereoscopic disparity (SD), frame cancellation effect (FCE), and edge component (EC) of the 3D stereoscopic display using the eye foveation model. Third, by comparing the eyestrain in conventional 3D video and experimental 3D sample video, we analyze the characteristics of eyestrain according to various factors and types of 3D video. Fourth, by comparing the eyestrain with or without the compensation of eye saccades movement in 3D video, we analyze the characteristics of eyestrain according to the types of eye movements in 3D video. Experimental results show that the degree of CSD causes more eyestrain than other factors. PMID:24834910
Recent progress in flexible OLED displays
NASA Astrophysics Data System (ADS)
Hack, Michael G.; Weaver, Michael S.; Mahon, Janice K.; Brown, Julie J.
2001-09-01
Organic light emitting device (OLED) technology has recently been shown to demonstrate excellent performance and cost characteristics for use in numerous flat panel display (FPD) applications. OLED displays emit bright, colorful light with excellent power efficiency, wide viewing angle and video response rates. OLEDs are also demonstrating the requisite environmental robustness for a wide variety of applications. OLED technology is also the first FPD technology with the potential to be highly functional and durable in a flexible format. The use of plastic and other flexible substrate materials offers numerous advantages over commonly used glass substrates, including impact resistance, light weight, thinness and conformability. Currently, OLED displays are being fabricated on rigid substrates, such as glass or silicon wafers. At Universal Display Corporation (UDC), we are developing a new class of flexible OLED displays (FOLEDs). These displays also have extremely low power consumption through the use of electrophosphorescent doped OLEDs. To commercialize FOLED technology, a number of technical issues related to packaging and display processing on flexible substrates need to be addressed. In this paper, we report on our recent results to demonstrate the key technologies that enable the manufacture of power efficient, long-life flexible OLED displays for commercial and military applications.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Giera, Brian; Bukosky, Scott; Lee, Elaine; ...
2018-01-23
Here, quantitative color analysis is performed on videos of high-contrast, low-power reversible electrophoretic deposition (EPD)-based displays operated under different applied voltages. The analysis is coded in open-source software, relies on a color-differentiation metric, ΔE*00, derived from digital video, and provides an intuitive relationship between the operating conditions of the devices and their performance. Time-dependent ΔE*00 color analysis reveals color relaxation behavior, recoverability for different voltage sequences, and operating conditions that can lead to optimal performance.
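The color-differentiation idea can be illustrated with a small script. The paper uses ΔE*00 (CIEDE2000); for brevity, this hedged sketch computes the simpler CIE76 ΔE (Euclidean distance in CIELAB) between two sRGB pixels, which captures the same notion of perceptual color difference. All names are illustrative and nothing here reproduces the authors' open-source tool.

```python
import math

def srgb_to_linear(c):
    # undo sRGB gamma; input is 0..255
    c /= 255.0
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def rgb_to_lab(r, g, b):
    """Convert an 8-bit sRGB triple to CIELAB (D65 white point)."""
    rl, gl, bl = (srgb_to_linear(v) for v in (r, g, b))
    # sRGB -> XYZ (D65) matrix
    x = 0.4124 * rl + 0.3576 * gl + 0.1805 * bl
    y = 0.2126 * rl + 0.7152 * gl + 0.0722 * bl
    z = 0.0193 * rl + 0.1192 * gl + 0.9505 * bl
    xn, yn, zn = 0.95047, 1.0, 1.08883  # D65 reference white
    def f(t):
        return t ** (1 / 3) if t > 0.008856 else 7.787 * t + 16 / 116
    fx, fy, fz = f(x / xn), f(y / yn), f(z / zn)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

def delta_e76(c1, c2):
    """CIE76 color difference between two sRGB colors (proxy for ΔE*00)."""
    l1, a1, b1 = rgb_to_lab(*c1)
    l2, a2, b2 = rgb_to_lab(*c2)
    return math.sqrt((l1 - l2) ** 2 + (a1 - a2) ** 2 + (b1 - b2) ** 2)
```

Applied frame-by-frame against a reference color, a curve of ΔE over time exposes the relaxation and recovery behavior the abstract describes.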
A Low Cost Video Display System Using the Motorola 6811 Single-Chip Microcomputer.
1986-08-01
(Only fragments of the report's 6811 assembly source listing survive in this record: a routine that calls VIDEO to display data and wait for a key entry, CLRBUFF to clear the input buffer, and a register-comparison branch loop; no abstract is available.)
Neutrons Image Additive Manufactured Turbine Blade in 3-D
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
2016-04-29
The video displays the Inconel 718 Turbine Blade made by Additive Manufacturing. First a gray scale neutron computed tomogram (CT) is displayed with transparency in order to show the internal structure. Then the neutron CT is overlapped with the engineering drawing that was used to print the part and a comparison of external and internal structures is possible. This provides a map of the accuracy of the printed turbine (printing tolerance). Internal surface roughness can also be observed. Credits: Experimental Measurements: Hassina Z. Bilheaux, Video and Printing Tolerance Analysis: Jean C. Bilheaux
Stereoscopic 3D video games and their effects on engagement
NASA Astrophysics Data System (ADS)
Hogue, Andrew; Kapralos, Bill; Zerebecki, Chris; Tawadrous, Mina; Stanfield, Brodie; Hogue, Urszula
2012-03-01
With television manufacturers developing low-cost stereoscopic 3D displays, a large number of consumers will undoubtedly have access to 3D-capable televisions at home. The availability of 3D technology places the onus on content creators to develop interesting and engaging content. While the technology of stereoscopic displays and content generation is well understood, many questions about its effects on the viewer are yet to be answered. The effects of stereoscopic display on passive viewers of film are known; video games, however, are fundamentally different, since the viewer/player is actively (rather than passively) engaged with the content. Questions of how stereoscopic viewing affects interaction mechanics have previously been studied in the context of player performance, but very few studies have attempted to quantify the player experience to determine whether stereoscopic 3D has a positive or negative influence on overall engagement. In this paper we present a preliminary study of the effects of stereoscopic 3D on player engagement in video games. Participants played a video game in two conditions, traditional 2D and stereoscopic 3D (S3D), and their engagement was quantified using a previously validated self-reporting tool. The results suggest that S3D has a positive effect on immersion, presence, flow, and absorption.
Orbital thermal analysis of lattice structured spacecraft using color video display techniques
NASA Technical Reports Server (NTRS)
Wright, R. L.; Deryder, D. D.; Palmer, M. T.
1983-01-01
A color video display technique is demonstrated as a tool for rapid determination of thermal problems during the preliminary design of complex space systems. A thermal analysis is presented for the lattice-structured Earth Observation Satellite (EOS) spacecraft at 32 points in a baseline non-Sun-synchronous (60 deg inclination) orbit. Large temperature variations (on the order of 150 K) were observed on the majority of the members. A gradual decrease in temperature was observed as the spacecraft traversed the Earth's shadow, followed by a sudden rise in temperature (100 K) as the spacecraft exited the shadow. Heating rate and temperature histories of selected members and color graphic displays of temperatures on the spacecraft are presented.
Pictorial communication in virtual and real environments
NASA Technical Reports Server (NTRS)
Ellis, Stephen R. (Editor)
1991-01-01
Papers about the communication between human users and machines in real and synthetic environments are presented. Individual topics addressed include: pictorial communication, distortions in memory for visual displays, cartography and map displays, efficiency of graphical perception, volumetric visualization of 3D data, spatial displays to increase pilot situational awareness, teleoperation of land vehicles, computer graphics system for visualizing spacecraft in orbit, visual display aid for orbital maneuvering, multiaxis control in telemanipulation and vehicle guidance, visual enhancements in pick-and-place tasks, target axis effects under transformed visual-motor mappings, adapting to variable prismatic displacement. Also discussed are: spatial vision within egocentric and exocentric frames of reference, sensory conflict in motion sickness, interactions of form and orientation, perception of geometrical structure from congruence, prediction of three-dimensionality across continuous surfaces, effects of viewpoint in the virtual space of pictures, visual slant underestimation, spatial constraints of stereopsis in video displays, stereoscopic stance perception, paradoxical monocular stereopsis and perspective vergence. (No individual items are abstracted in this volume)
Emotions are understood from biological motion across remote cultures.
Parkinson, Carolyn; Walker, Trent T; Memmi, Sarah; Wheatley, Thalia
2017-04-01
Patterns of bodily movement can be used to signal a wide variety of information, including emotional states. Are these signals reliant on culturally learned cues or are they intelligible across individuals lacking exposure to a common culture? To find out, we traveled to a remote Kreung village in Ratanakiri, Cambodia. First, we recorded Kreung portrayals of 5 emotions through bodily movement. These videos were later shown to American participants, who matched the videos with appropriate emotional labels with above chance accuracy (Study 1). The Kreung also viewed Western point-light displays of emotions. After each display, they were asked to either freely describe what was being expressed (Study 2) or choose from 5 predetermined response options (Study 3). Across these studies, Kreung participants recognized Western point-light displays of anger, fear, happiness, sadness, and pride with above chance accuracy. Kreung raters were not above chance in deciphering an American point-light display depicting love, suggesting that recognizing love may rely, at least in part, on culturally specific cues or modalities other than bodily movement. In addition, multidimensional scaling of the patterns of nonverbal behavior associated with each emotion in each culture suggested that similar patterns of nonverbal behavior are used to convey the same emotions across cultures. The considerable cross-cultural intelligibility observed across these studies suggests that the communication of emotion through movement is largely shaped by aspects of physiology and the environment shared by all humans, irrespective of differences in cultural context. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Membrane-mirror-based autostereoscopic display for tele-operation and telepresence applications
NASA Astrophysics Data System (ADS)
McKay, Stuart; Mair, Gordon M.; Mason, Steven; Revie, Kenneth
2000-05-01
An autostereoscopic display for telepresence and tele-operation applications has been developed at the University of Strathclyde in Glasgow, Scotland. The research is a collaborative effort between the Imaging Group and the Transparent Telepresence Research Group, both based at Strathclyde. A key component of the display is the directional screen; a 1.2-m diameter Stretchable Membrane Mirror is currently used. This patented technology enables large-diameter, small f-number mirrors to be produced at a fraction of the cost of conventional optics. Another key element of the present system is an anthropomorphic and anthropometric stereo camera sensor platform. Thus, in addition to mirror development, research areas include sensor platform design focused on sight, hearing, and smell; telecommunications; display systems for visual, aural and other senses; tele-operation; and augmented reality. The sensor platform is located at the remote site and transmits live video to the home location. Applications for this technology are as diverse as they are numerous, ranging from bomb disposal and other hazardous-environment applications to tele-conferencing, sales, education and entertainment.
ERIC Educational Resources Information Center
Dahlgren, Sally
2000-01-01
Discusses how advances in light-emitting diode (LED) technology are helping video displays at sporting events get fans closer to the action than ever before. The types of LED displays available are discussed, as are their operation and maintenance issues. (GR)
High Resolution Displays Using NCAP Liquid Crystals
NASA Astrophysics Data System (ADS)
Macknick, A. Brian; Jones, Phil; White, Larry
1989-07-01
Nematic curvilinear aligned phase (NCAP) liquid crystals have been found useful for high information content video displays. NCAP materials are liquid crystals which have been encapsulated in a polymer matrix and which have a light transmission which is variable with applied electric fields. Because NCAP materials do not require polarizers, their on-state transmission is substantially better than twisted nematic cells. All dimensional tolerances are locked in during the encapsulation process and hence there are no critical sealing or spacing issues. By controlling the polymer/liquid crystal morphology, switching speeds of NCAP materials have been significantly improved over twisted nematic systems. Recent work has combined active matrix addressing with NCAP materials. Active matrices, such as thin film transistors, have given displays of high resolution. The paper will discuss the advantages of NCAP materials specifically designed for operation at video rates on transistor arrays; applications for both backlit and projection displays will be discussed.
Oh, Ding Yuan; Barr, Ian G.; Hurt, Aeron C.
2015-01-01
Ferrets are the preferred animal model to assess influenza virus infection, virulence and transmission as they display clinical symptoms and pathogenesis similar to those of humans. Measures of disease severity in the ferret include weight loss, temperature rise, sneezing, viral shedding and reduced activity. To date, the only available method for activity measurement has been the assignment of an arbitrary score by a ‘blind’ observer based on a pre-defined responsiveness scale. This manual scoring method is subjective and can be prone to bias. In this study, we describe a novel video-tracking methodology for determining activity changes in a ferret model of influenza infection. This method eliminates the various limitations of manual scoring, which include the need for a sole ‘blind’ observer and the requirement to recognise the ‘normal’ activity of ferrets in order to assign relative activity scores. In ferrets infected with an A(H1N1)pdm09 virus, video-tracking was more sensitive than manual scoring in detecting ferret activity changes. Using this video-tracking method, oseltamivir treatment was found to ameliorate the effect of influenza infection on activity in ferrets. Oseltamivir treatment of animals was associated with an improvement in clinical symptoms, including reduced inflammatory responses in the upper respiratory tract, lower body weight loss and a smaller rise in body temperature, despite there being no significant reduction in viral shedding. In summary, this novel video-tracking method is an easy-to-use, objective and sensitive methodology for measuring ferret activity. PMID:25738900
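A simple frame-differencing proxy for the kind of activity measure described above might look like the following. The study's actual tracking software is not specified, so the function name and threshold are assumptions; real trackers would also localize the animal rather than score the whole frame.

```python
def activity_score(prev_frame, frame, threshold=15):
    """Fraction of pixels whose grayscale value changed by more than `threshold`.

    prev_frame, frame: 2D lists of grayscale values (same shape).  A minimal,
    illustrative motion measure: averaged over a recording session, higher
    scores mean a more active animal.
    """
    changed = total = 0
    for prow, row in zip(prev_frame, frame):
        for p, q in zip(prow, row):
            total += 1
            if abs(p - q) > threshold:
                changed += 1
    return changed / total
```

A static scene scores 0.0; an animal crossing the cage raises the score in proportion to how many pixels it disturbs between frames.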
Expert Behavior in Children's Video Game Play.
ERIC Educational Resources Information Center
VanDeventer, Stephanie S.; White, James A.
2002-01-01
Investigates the display of expert behavior by seven outstanding video game-playing children ages 10 and 11. Analyzes observation and debriefing transcripts for evidence of self-monitoring, pattern recognition, principled decision making, qualitative thinking, and superior memory, and discusses implications for educators regarding the development…
SU-C-209-06: Improving X-Ray Imaging with Computer Vision and Augmented Reality
DOE Office of Scientific and Technical Information (OSTI.GOV)
MacDougall, R.D.; Scherrer, B; Don, S
Purpose: To determine the feasibility of using a computer vision algorithm and augmented reality interface to reduce repeat rates and improve consistency of image quality and patient exposure in general radiography. Methods: A prototype device, designed for use with commercially available hardware (Microsoft Kinect 2.0) capable of depth sensing and high resolution/frame rate video, was mounted to the x-ray tube housing as part of a Philips DigitalDiagnost digital radiography room. Depth data and video were streamed to a Windows 10 PC. Proprietary software created an augmented reality interface where overlays displayed selectable information projected over real-time video of the patient. The information displayed prior to and during x-ray acquisition included: recognition and position of ordered body part, position of image receptor, thickness of anatomy, location of AEC cells, collimated x-ray field, degree of patient motion and suggested x-ray technique. Pre-clinical data was collected in a volunteer study to validate patient thickness measurements; x-ray images were not acquired. Results: Proprietary software correctly identified the ordered body part, measured patient motion, and calculated thickness of anatomy. Pre-clinical data demonstrated accuracy and precision of body-part thickness measurement when compared with other methods (e.g. a laser measurement tool). Thickness measurements provided the basis for developing a database of thickness-based technique charts that can be automatically displayed to the technologist. Conclusion: The utilization of computer vision and commercial hardware to create an augmented reality view of the patient and imaging equipment has the potential to drastically improve the quality and safety of x-ray imaging by reducing repeats and optimizing technique based on patient thickness. Society of Pediatric Radiology Pilot Grant; Washington University Bear Cub Fund.
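In principle, the thickness measurement can be derived from the depth stream as the difference between the table (or receptor) plane and the nearest point on the patient surface. This toy sketch of that idea uses hypothetical names and ignores calibration, segmentation, and noise filtering, all of which the proprietary software would have to handle.

```python
def patient_thickness_mm(depth_map_mm, table_depth_mm):
    """Estimate anatomy thickness from an overhead depth camera.

    depth_map_mm: 2D list of per-pixel distances from the camera (mm), assumed
    to view the patient lying on the table; table_depth_mm: calibrated distance
    from the camera to the table surface.  The nearest pixel is taken as the
    top of the patient, so thickness = table depth - nearest surface depth.
    """
    surface = min(min(row) for row in depth_map_mm)
    return table_depth_mm - surface
```

The resulting thickness would index into a thickness-based technique chart of the kind the abstract describes.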
Lim, Tae Ho; Choi, Hyuk Joong; Kang, Bo Seung
2010-01-01
We assessed the feasibility of using a camcorder mobile phone for teleconsulting about cardiac echocardiography. The diagnostic performance of evaluating left ventricle (LV) systolic function was measured by three emergency medicine physicians. A total of 138 short echocardiography video sequences (from 70 subjects) were selected from previous emergency room ultrasound examinations. The measurement of LV ejection fraction based on the transmitted video displayed on a mobile phone was compared with the original video displayed on the LCD monitor of the ultrasound machine. The image quality was evaluated using the double-stimulus impairment scale (DSIS). All observers showed high sensitivity. There was an improvement in specificity with the observer's increasing experience of cardiac ultrasound. Although the image quality of video on the mobile phone was lower than that of the original, a receiver operating characteristic (ROC) analysis indicated that there was no significant difference in diagnostic performance. Immediate basic teleconsulting of echocardiography videos is possible using current commercially available mobile phone systems.
Video-laryngoscopy introduction in a Sub-Saharan national teaching hospital: luxury or necessity?
Alain, Traoré Ibrahim; Drissa, Barro Sié; Flavien, Kaboré; Serge, Ilboudo; Idriss, Traoré
2015-01-01
Tracheal intubation using a Macintosh blade is the technique of choice for securing the airway. It can prove difficult, causing severe complications that can threaten survival or force postponement of the surgical operation. The video-laryngoscope gives a better view of the larynx and good exposure of the glottis, making tracheal intubation simpler than with a conventional laryngoscope. It is little used in sub-Saharan Africa, and in Burkina Faso in particular, because of its high cost. We report our first experiences with the video-laryngoscope through two cases of difficult tracheal intubation that had required postponement of the interventions. The video-laryngoscope made tracheal intubation easier even on first use, because of the good view of the glottis it provides and because it is easy to learn. It is therefore not a luxury to have it in our therapeutic arsenal. PMID:27047621
Coupled auralization and virtual video for immersive multimedia displays
NASA Astrophysics Data System (ADS)
Henderson, Paul D.; Torres, Rendell R.; Shimizu, Yasushi; Radke, Richard; Lonsway, Brian
2003-04-01
The implementation of maximally-immersive interactive multimedia in exhibit spaces requires not only the presentation of realistic visual imagery but also the creation of a perceptually accurate aural experience. While conventional implementations treat audio and video problems as essentially independent, this research seeks to couple the visual sensory information with dynamic auralization in order to enhance perceptual accuracy. An implemented system has been developed for integrating accurate auralizations with virtual video techniques for both interactive presentation and multi-way communication. The current system utilizes a multi-channel loudspeaker array and real-time signal processing techniques for synthesizing the direct sound, early reflections, and reverberant field excited by a moving sound source whose path may be interactively defined in real-time or derived from coupled video tracking data. In this implementation, any virtual acoustic environment may be synthesized and presented in a perceptually-accurate fashion to many participants over a large listening and viewing area. Subject tests support the hypothesis that the cross-modal coupling of aural and visual displays significantly affects perceptual localization accuracy.
Image Descriptors for Displays
1975-03-01
(Only fragments of this report survive in the record: figure captions describing a composite video signal formed from 20 Hz to 2.5 MHz band-limited Gaussian white noise, power spectral density plots comparing that signal with the average spectrum of off-the-air video, and a note that off-the-air television signals broadcast on VHF channels were analyzed with a commercially …)
An Augmented Virtuality Display for Improving UAV Usability
2005-01-01
cockpit. For a more universally-understood metaphor, we have turned to virtual environments of the type represented in video games. Many of the people who have the need to fly UAVs (such as military personnel) have experience with playing video games. They are skilled in navigating virtual … Another aspect of tailoring the interface to those with video game experience is to use familiar controls. Microsoft has developed a popular and …
Toward a 3D video format for auto-stereoscopic displays
NASA Astrophysics Data System (ADS)
Vetro, Anthony; Yea, Sehoon; Smolic, Aljoscha
2008-08-01
There has been increased momentum recently in the production of 3D content for cinema applications; for the most part, this has been limited to stereo content. There are also a variety of display technologies on the market that support 3DTV, each offering a different viewing experience and having different input requirements. More specifically, stereoscopic displays support stereo content and require glasses, while auto-stereoscopic displays avoid the need for glasses by rendering view-dependent stereo pairs for a multitude of viewing angles. To realize high quality auto-stereoscopic displays, multiple views of the video must either be provided as input to the display, or these views must be created locally at the display. The former approach has difficulties in that the production environment is typically limited to stereo, and transmission bandwidth for a large number of views is not likely to be available. This paper discusses an emerging 3D data format that enables the latter approach to be realized. A new framework for efficiently representing a 3D scene and enabling the reconstruction of an arbitrarily large number of views prior to rendering is introduced. Several design challenges are also highlighted through experimental results.
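Reconstructing views locally at the display is typically done with depth-image-based rendering (DIBR): each pixel shifts horizontally by a depth-dependent disparity. The one-scanline toy below sketches that idea under assumed names; it is not the paper's format, and real systems add hole filling and sub-pixel filtering.

```python
def synthesize_view(row, depth_row, k_px):
    """Warp one scanline to a virtual viewpoint via depth-dependent shift.

    Disparity is modeled as k_px / z, where k_px lumps focal length and camera
    baseline; when two source pixels land on the same target column, the nearer
    one (smaller z) wins.  None marks disocclusion holes left for hole filling.
    """
    width = len(row)
    out = [None] * width
    out_z = [float('inf')] * width
    for x, (px, z) in enumerate(zip(row, depth_row)):
        tx = x - int(round(k_px / z))   # shift by per-pixel disparity
        if 0 <= tx < width and z < out_z[tx]:
            out[tx], out_z[tx] = px, z
    return out
```

Running this once per desired viewing angle (with a different k_px) is how an auto-stereoscopic display can generate its multitude of stereo pairs from a single video-plus-depth stream.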
Rosman, Yossi; Eisenkraft, Arik; Milk, Nadav; Shiyovich, Arthur; Ophir, Nimrod; Shrot, Shai; Kreiss, Yitshak; Kassirer, Michael
2014-05-06
On the night of 21 August 2013, sarin was dispersed in the eastern outskirts of Damascus, killing 1400 civilians and severely affecting thousands more. This article aims to delineate the clinical presentation and management of a mass casualty event caused by a nerve agent as shown on social media. The authors searched YouTube for videos of this attack and identified 210; of these, 67 met the inclusion criteria and were evaluated in the final analysis. These videos displayed 130 casualties, 119 (91.5%) of whom were defined as moderately injured or worse. The most common clinical signs were dyspnea (53.0%), diaphoresis (48.5%), and loss of consciousness (40.7%). Important findings included a severe shortage of supporting measures and a lack of antidotal autoinjectors. Decontamination, documented in 25% of the videos, was done in an inefficient manner. Protective gear was not observed, except for sporadic use of latex gloves and surgical masks. This is believed to be the first time that social media has been used to evaluate clinical data and management protocols to better prepare against possible future events.
Wide-Field-of-View, High-Resolution, Stereoscopic Imager
NASA Technical Reports Server (NTRS)
Prechtl, Eric F.; Sedwick, Raymond J.
2010-01-01
A device combines video feeds from multiple cameras to provide wide-field-of-view, high-resolution, stereoscopic video to the user. The prototype under development consists of two camera assemblies, one for each eye. One of these assemblies incorporates a mounting structure with multiple cameras attached at offset angles. The video signals from the cameras are fed to a central processing platform where each frame is color processed and mapped into a single contiguous wide-field-of-view image. Because the resolution of most display devices is typically smaller than the processed map, a cropped portion of the video feed is output to the display device. The positioning of the cropped window will likely be controlled through the use of a head-tracking device, allowing the user to turn his or her head side-to-side or up and down to view different portions of the captured image. There are multiple options for the display of the stereoscopic image. The use of head-mounted displays is one likely implementation; the use of 3D projection technologies is another option under consideration. The technology can be adapted in a multitude of ways. The computing platform is scalable, such that the number, resolution, and sensitivity of the cameras can be leveraged to improve image resolution and field of view. Miniaturization efforts can be pursued to shrink the package down for better mobility. Power savings studies can be performed to enable unattended, remote sensing packages. Image compression and transmission technologies can be incorporated to enable an improved telepresence experience.
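The head-tracked cropping described above amounts to mapping head yaw onto a window offset within the stitched panorama. A hedged sketch, in which the names and the simple linear yaw-to-pixel mapping are assumptions:

```python
def crop_window(pano_width, fov_deg, display_width, yaw_deg):
    """Map head yaw to the starting column of the cropped display window.

    pano_width: columns in the stitched wide-FOV image spanning fov_deg
    degrees; display_width: columns the display can show; yaw_deg: head yaw,
    with 0 looking at the panorama center.  Returns the left edge of the crop,
    clamped so the window stays inside the panorama.
    """
    px_per_deg = pano_width / fov_deg
    center = pano_width / 2 + yaw_deg * px_per_deg
    left = int(round(center - display_width / 2))
    return max(0, min(left, pano_width - display_width))
```

Each video frame then copies columns `left .. left + display_width` to the headset, so turning the head pans across the captured field of view without moving any camera.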
[The prevalence and influencing factors of eye diseases for IT industry video operation workers].
Zhao, Liang-liang; Yu, Yan-yan; Yu, Wen-lan; Xu, Ming; Cao, Wen-dong; Zhang, Hong-bing; Han, Lei; Zhang, Heng-dong
2013-05-01
To investigate video exposure and eye disease among IT-industry video operation workers and to analyze the influencing factors, providing scientific evidence for health strategies for these workers, we used random cluster sampling to select 190 IT-industry video operation workers in a city of Jiangsu province and analyzed the relations between video exposure and eye disease. The daily video-contact time of the workers is 6.0-16.0 hours, with a mean of (10.1 ± 1.8) hours. 79.5% of the workers wear myopic lenses, 35.8% take a rest during their work, and 14.2% use protective products when their eyes feel unwell. In the tear break-up time (BUT) test, 54.7% of the workers had normal results in both eyes, while 45.3% had an abnormal result in at least one eye. Similarly, in the Schirmer I test (SIT), 54.7% of the workers had normal results in both eyes, while 42.1% were abnormal. According to a generalized linear model, six factors (mean daily video time, distance between eye and display, frequency of rest, use of protective products when the eyes feel unwell, type of display, and daily time watching TV) have a statistically significant influence on vision. Likewise, six factors (regularity of rest, sex, corneal transparency, pupil shape, family history, and use of protective products when the eyes feel unwell) significantly influence the BUT results, and seven factors (type of computer, sex, pupil shape, corneal transparency, angle between the display and the worker's line of sight, type of display, and height of the work surface) significantly influence the SIT results. The eye health of IT-industry video operation workers is not promising, and most workers lack protection awareness; education should be strengthened according to these influencing factors, and the level of medical prevention and control of eye diseases in the relevant industries improved.
Gabbiadini, Alessandro; Riva, Paolo
2018-03-01
Violent video game playing has been linked to a wide range of negative outcomes, especially in adolescents. In the present research, we focused on a potential determinant of adolescents' willingness to play violent video games: social exclusion. We also tested whether exclusion can predict increased aggressiveness following violent video game playing. In two experiments, we predicted that exclusion could increase adolescents' preferences for violent video games and interact with violent game playing, fostering adolescents' aggressive inclinations. In Study 1, 121 adolescents (aged 10-18 years) were randomly assigned to a manipulation of social exclusion. Then, they evaluated the violent content of nine different video games (violent, nonviolent, or prosocial) and reported their willingness to play each presented video game. The results showed that excluded participants expressed a greater willingness to play violent games than nonviolent or prosocial games. No such effect was found for included participants. In Study 2, both inclusionary status and video game contents were manipulated. After a manipulation of inclusionary status, 113 adolescents (aged 11-16 years) were randomly assigned to play either a violent or a nonviolent video game. Then, they were given an opportunity to express their aggressive inclinations toward the excluders. Results showed that excluded participants who played a violent game displayed higher levels of aggressive inclinations than participants assigned to the other experimental conditions. Overall, these findings suggest that exclusion increases preferences for violent games and that the combination of exclusion and violent game playing fuels aggressive inclinations. © 2017 Wiley Periodicals, Inc.
Holo-Chidi video concentrator card
NASA Astrophysics Data System (ADS)
Nwodoh, Thomas A.; Prabhakar, Aditya; Benton, Stephen A.
2001-12-01
The Holo-Chidi Video Concentrator Card is a frame buffer for the Holo-Chidi holographic video processing system. Holo-Chidi is designed at the MIT Media Laboratory for real-time computation of computer generated holograms and the subsequent display of the holograms at video frame rates. The Holo-Chidi system is made of two sets of cards - the set of Processor cards and the set of Video Concentrator Cards (VCCs). The Processor cards are used for hologram computation, data archival/retrieval from a host system, and for higher-level control of the VCCs. The VCC formats computed holographic data from multiple hologram-computing Processor cards, converting the digital data to analog form to feed the acousto-optic modulators of the Media Lab's Mark-II holographic display system. The Video Concentrator Card is made of: a High-Speed I/O (HSIO) interface whence data is transferred from the hologram-computing Processor cards; a set of FIFOs and video RAM used as buffer for data for the hololines being displayed; a one-chip integrated microprocessor and peripheral combination that handles communication with other VCCs and furnishes the card with a USB port; a co-processor which controls display data formatting; and D-to-A converters that convert digital fringes to analog form. The co-processor is implemented with an SRAM-based FPGA with over 500,000 gates and controls all the signals needed to format the data from the multiple Processor cards into the format required by Mark-II. A VCC has three HSIO ports through which up to 500 Megabytes of computed holographic data can flow from the Processor cards to the VCC per second. A Holo-Chidi system with three VCCs has enough frame buffering capacity to hold up to thirty-two 36-Megabyte hologram frames at a time. Pre-computed holograms may also be loaded into the VCC from a host computer through the low-speed USB port. Both the microprocessor and the co-processor in the VCC can access the main system memory used to store control programs and data for the VCC. The card also generates the control signals used by the scanning mirrors of Mark-II. In this paper we discuss the design of the VCC and its implementation in the Holo-Chidi system.
Method and apparatus for telemetry adaptive bandwidth compression
NASA Technical Reports Server (NTRS)
Graham, Olin L.
1987-01-01
Methods and apparatus are provided for automatic and/or manual adaptive bandwidth compression of telemetry. An adaptive sampler samples a video signal from a scanning sensor and generates a sequence of sampled fields. Each field and range rate information from the sensor are then sequentially transmitted to and stored in a multiple and adaptive field storage means. The field storage means then, in response to an automatic or manual control signal, transfers the stored sampled field signals to a video monitor in a form for sequential or simultaneous display of a desired number of stored signal fields. The sampling ratio of the adaptive sampler, the relative proportion of available communication bandwidth allocated respectively to transmitted data and video information, and the number of fields simultaneously displayed are manually or automatically selectively adjustable in functional relationship to each other and the detected range rate. In one embodiment, when relatively little or no scene motion is detected, the control signal maximizes the sampling ratio and causes simultaneous display of all stored fields, thus maximizing resolution and bandwidth available for data transmission. When increased scene motion is detected, the control signal is adjusted accordingly to cause display of fewer fields. If greater resolution is desired, the control signal is adjusted to increase the sampling ratio.
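The control logic above, display all stored fields when the scene is static, fewer as motion increases, might be sketched as follows. The thresholds, names, and field counts are illustrative assumptions; the patent text gives no numeric values.

```python
def fields_to_display(motion_level, max_fields=8):
    """Choose how many stored fields to display given detected scene motion.

    motion_level: normalized 0..1 estimate of scene motion / range rate.
    Low motion -> display all stored fields (maximum resolution, maximum
    bandwidth left for data); high motion -> fewer fields for faster update.
    """
    if motion_level <= 0.1:          # essentially static scene
        return max_fields
    if motion_level <= 0.5:          # moderate motion: trade resolution
        return max(1, max_fields // 2)
    return 1                          # rapid motion: single-field update
```

The same motion estimate would also drive the sampling ratio and the data/video bandwidth split, so the three quantities stay in the functional relationship the patent describes.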
Optical cross-talk and visual comfort of a stereoscopic display used in a real-time application
NASA Astrophysics Data System (ADS)
Pala, S.; Stevens, R.; Surman, P.
2007-02-01
Many 3D systems work by presenting to the observer stereoscopic pairs of images that are combined to give the impression of a 3D image. Discomfort experienced when viewing for extended periods may be due to several factors, including the presence of optical crosstalk between the stereo image channels. In this paper we use two video cameras and two LCD panels, viewed via a Helmholtz arrangement of mirrors, to display a stereoscopic image inherently free of crosstalk. Simple depth discrimination tasks are performed whilst viewing the 3D image, and controlled amounts of image crosstalk are introduced by electronically mixing the video signals. Error monitoring and skin conductance are used as measures of workload, alongside traditional subjective questionnaires. We report qualitative measurements of user workload under a variety of viewing conditions. This pilot study revealed a decrease in task performance and increased workload as crosstalk was increased. The observations will assist in the design of further trials planned to be conducted in a medical environment.
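The electronic mixing of the two video signals can be modelled as a symmetric linear leak between channels (a hedged sketch: the study does not give its exact mixing law, and the function name and constant-luminance form are our assumptions):

```python
import numpy as np

def mix_crosstalk(left, right, c):
    """Leak a fraction c of each stereo channel into the other.

    c = 0 reproduces the crosstalk-free display; increasing c simulates
    increasing optical crosstalk while keeping total luminance constant.
    """
    left = np.asarray(left, dtype=float)
    right = np.asarray(right, dtype=float)
    mixed_left = (1 - c) * left + c * right
    mixed_right = (1 - c) * right + c * left
    return mixed_left, mixed_right
```

Sweeping c over a set of levels during the depth-discrimination task would yield the controlled crosstalk conditions the study compares.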
NASA Technical Reports Server (NTRS)
Richards, Stephanie E. (Compiler); Levine, Howard G.; Romero, Vergel
2016-01-01
Biotube was developed for plant gravitropic research investigating the potential for magnetic fields to orient plant roots as they grow in microgravity. Prior to flight, experimental seeds are placed into seed cassettes, each capable of containing up to 10 seeds, and inserted between two magnets located within one of three Magnetic Field Chambers (MFCs). Biotube is stored within an International Space Station (ISS) stowage locker and provides three levels of containment for chemical fixatives. Features include temperature monitoring, fixative/preservative delivery to specimens, and real-time video imaging downlink. Biotube's primary subsystems are: (1) the Water Delivery System, which automatically activates and controls the delivery of water to initiate seed germination; (2) the Fixative Storage and Delivery System, which stores and delivers chemical fixative or RNAlater to each seed cassette; (3) the Digital Imaging System, consisting of 4 charge-coupled device (CCD) cameras, a video multiplexer, a lighting multiplexer, and 16 infrared light-emitting diodes (LEDs) that provide illumination while the photos are being captured; and (4) the Command and Data Management System, which provides overall control of the integrated subsystems, a graphical user interface, system status and error-message display, image display, and other functions.
Horowitz, L; Sarkin, J M
1992-01-01
Surveys indicate that over 50 million Americans, mostly women, currently operate video display terminals (VDTs) at home or in the workplace. Recent epidemiological studies reveal that more than 75% of the approximately 30 million American temporomandibular disorder (TMD) sufferers are women. What do VDTs and TMD have in common besides an affinity for the female gender? TMD is associated with numerous risk factors that commonly initiate sympathetic nervous system and stress-hormone response mechanisms, resulting in muscle spasms, trigger-point formation, and pain in the head and neck. Likewise, VDT operation may be linked to three additional sympathetic nervous system irritants: (1) electrostatic depletion of negative ions in the ambient air, (2) electromagnetic radiation, and (3) eyestrain and postural stress associated with poor work habits and improper workstation design. Additional research considering the roles these three factors may play in the etiology of TMD and other myofascial pain problems is indicated. Furthermore, dentists are advised to educate patients about these possible risks, encourage preventive behaviors on the part of employers and employees, and recommend workplace health, safety, and ergonomic upgrades when indicated.
Analysis and Selection of a Remote Docking Simulation Visual Display System
NASA Technical Reports Server (NTRS)
Shields, N., Jr.; Fagg, M. F.
1984-01-01
The development of a remote docking simulation visual display system is examined. Video system and operator performance are discussed as well as operator command and control requirements and a design analysis of the reconfigurable work station.
Eavesdropping and signal matching in visual courtship displays of spiders.
Clark, David L; Roberts, J Andrew; Uetz, George W
2012-06-23
Eavesdropping on communication is widespread among animals, e.g. bystanders observing male-male contests, female mate choice copying and predator detection of prey cues. Some animals also exhibit signal matching, e.g. overlapping of competitors' acoustic signals in aggressive interactions. Fewer studies have examined male eavesdropping on conspecific courtship, although males could increase mating success by attending to others' behaviour and displaying whenever courtship is detected. In this study, we show that field-experienced male Schizocosa ocreata wolf spiders exhibit eavesdropping and signal matching when exposed to video playback of courting male conspecifics. Male spiders had longer bouts of interaction with a courting male stimulus, and more bouts of courtship signalling during and after the presence of a male on the video screen. Rates of courtship (leg tapping) displayed by individual focal males were correlated with the rates of the video exemplar to which they were exposed. These findings suggest male wolf spiders might gain information by eavesdropping on conspecific courtship and adjust performance to match that of rivals. This represents a novel finding, as these behaviours have previously been seen primarily among vertebrates.
19. SITE BUILDING 002 SCANNER BUILDING AIR POLICE ...
19. SITE BUILDING 002 - SCANNER BUILDING - AIR POLICE SITE SECURITY OFFICE WITH "SITE PERIMETER STATUS PANEL" AND REAL TIME VIDEO DISPLAY OUTPUT FROM VIDEO CAMERA SYSTEM AT SECURITY FENCE LOCATIONS. - Cape Cod Air Station, Technical Facility-Scanner Building & Power Plant, Massachusetts Military Reservation, Sandwich, Barnstable County, MA
Video bandwidth compression system
NASA Astrophysics Data System (ADS)
Ludington, D.
1980-08-01
The objective of this program was the development of a Video Bandwidth Compression brassboard model for use by the Air Force Avionics Laboratory, Wright-Patterson Air Force Base, in evaluating bandwidth compression techniques for use in tactical weapons and in selecting particular operational modes to be implemented in an advanced flyable model. The bandwidth compression system is partitioned into two major divisions: the encoder, which processes the input video with a compression algorithm and transmits the most significant information; and the decoder, where the compressed data are reconstructed into a video image for display.
ERIC Educational Resources Information Center
Robson, Sue
2016-01-01
Recent years have seen considerable growth of evidence that young children possess metacognitive and self-regulatory skills, alongside a view that some research tools, including observation and video-stimulated interviews, may provide better opportunities to see them. This paper examines possible differences in the evidence these two tools may…
NASA Technical Reports Server (NTRS)
Culp, Robert D. (Editor); Bickley, George (Editor)
1993-01-01
Papers from the sixteenth annual American Astronautical Society Rocky Mountain Guidance and Control Conference are presented. The topics covered include the following: advances in guidance, navigation, and control; control system videos; guidance, navigation and control embedded flight control systems; recent experiences; guidance and control storyboard displays; and applications of modern control, featuring the Hubble Space Telescope (HST) performance enhancement study.
RMS active damping augmentation
NASA Technical Reports Server (NTRS)
Gilbert, Michael G.; Scott, Michael A.; Demeo, Martha E.
1992-01-01
The topics are presented in viewgraph form and include: RMS active damping augmentation; potential space station assembly benefits to CSI; LaRC/JSC bridge program; control law design process; draper RMS simulator; MIMO acceleration control laws improve damping; potential load reduction benefit; DRS modified to model distributed accelerations; accelerometer location; Space Shuttle aft cockpit simulator; simulated shuttle video displays; SES test goals and objectives; and SES modifications to support RMS active damping augmentation.
Ota, Nao; Gahr, Manfred; Soma, Masayo
2015-11-19
According to classical sexual selection theory, complex multimodal courtship displays have evolved in males through female choice. While it is well known that socially monogamous songbird males sing to attract females, we report here the first example of a multimodal dance display that is not a uniquely male trait in these birds. In the blue-capped cordon-bleu (Uraeginthus cyanocephalus), a socially monogamous songbird, both sexes perform courtship displays that are characterised by singing and simultaneous visual displays. By recording these displays with a high-speed video camera, we discovered that in addition to bobbing, their visual courtship display includes very rapid step-dancing, which presumably produces vibrations and/or non-vocal sounds. Dance performances did not differ between sexes but varied among individuals. Both male and female cordon-bleus intensified their dance performances when their mate was on the same perch. The multimodal (acoustic, visual, tactile) and multicomponent (vocal and non-vocal sounds) courtship display observed was a combination of several motor behaviours (singing, bobbing, stepping). The fact that both sexes of this socially monogamous songbird perform such a complex courtship display is a novel finding and suggests that the evolution of multimodal courtship display as intersexual communication should be considered.
Fractional screen video enhancement apparatus
Spletzer, Barry L [Albuquerque, NM; Davidson, George S [Albuquerque, NM; Zimmerer, Daniel J [Tijeras, NM; Marron, Lisa C [Albuquerque, NM
2005-07-19
The present invention provides a method and apparatus for displaying two portions of an image at two resolutions. For example, the invention can display an entire image at a first resolution, and a subset of the image at a second, higher resolution. Two inexpensive, low resolution displays can be used to produce a large image with high resolution only where needed.
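The dual-resolution idea can be sketched as a simple composite of two views (a minimal sketch under our own assumptions: the patent drives two physical displays, whereas this illustrative function and its parameter names merely paste a high-resolution inset over a low-resolution base frame):

```python
import numpy as np

def composite_views(full_lowres, inset_highres, top, left):
    """Show the whole scene at low resolution with a high-resolution
    window pasted only where detail is needed (illustrative sketch)."""
    out = full_lowres.copy()
    h, w = inset_highres.shape[:2]
    out[top:top + h, left:left + w] = inset_highres  # overlay the inset
    return out
```

In the two-display arrangement the same logic applies, except the base image and the inset are routed to separate inexpensive panels rather than blended into one framebuffer.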
Exploiting spatio-temporal characteristics of human vision for mobile video applications
NASA Astrophysics Data System (ADS)
Jillani, Rashad; Kalva, Hari
2008-08-01
Video applications on handheld devices such as smart phones pose a significant challenge to achieving a high-quality user experience. Recent advances in processor and wireless networking technology are producing a new class of multimedia applications (e.g. video streaming) for mobile handheld devices. These devices are lightweight and modestly sized, and therefore have very limited resources: lower processing power, smaller display resolution, less memory, and limited battery life compared to desktop and laptop systems. Multimedia applications, on the other hand, have extensive processing requirements, which makes mobile devices extremely resource-hungry. In addition, device-specific properties (e.g. the display screen) significantly influence the human perception of multimedia quality. In this paper we propose a saliency-based framework that exploits the structure in content creation as well as the human visual system to find the salient points in the incoming bitstream and adapt it to the target device, thus improving the quality of the adapted area around salient points. Our experimental results indicate that an adaptation process that is cognizant of video content and user preferences can produce video of better perceptual quality for mobile devices. Furthermore, we demonstrate how such a framework can affect user experience on a handheld device.
Highly Reflective Multi-stable Electrofluidic Display Pixels
NASA Astrophysics Data System (ADS)
Yang, Shu
Electronic papers (E-papers) are displays that mimic the appearance of printed paper while retaining the features of conventional electronic displays, such as the ability to browse websites and play videos. The motivation for creating paper-like displays comes from the facts that reading on paper causes the least eye fatigue, owing to paper's reflective and light-diffusive nature, and that, unlike existing commercial displays, no energy is expended to sustain the displayed image. To achieve the visual effect of a paper print, an ideal E-paper has to be highly reflective with a good contrast ratio and full-color capability. To sustain the image with zero power consumption, the display pixels need to be bistable: the "on" and "off" states are both lowest-energy states, and a pixel can change its state only when sufficient external energy is supplied. Many emerging technologies are competing to demonstrate the first ideal E-paper device, but none has achieved satisfactory visual effect, bistability, and video speed at the same time. Challenges come from either inherent physical/chemical properties or the fabrication process. Electrofluidic display is one of the most promising E-paper technologies. It has successfully demonstrated high reflectivity, brilliant color, and video-speed operation by moving a colored pigment dispersion between visible and invisible positions with electrowetting force. However, the pixel design did not allow image bistability. Presented in this dissertation are multi-stable electrofluidic display pixels that can sustain grayscale levels without any power consumption while keeping the favorable features of the previous-generation electrofluidic display. The pixel design, a fabrication method using multiple-layer dry-film photoresist lamination, and physical/optical characterizations are discussed in detail.
Based on this pixel structure, preliminary results of a simplified design and fabrication method are demonstrated. As advanced research topics concerning the device's optical performance, an optical model for evaluating reflective displays' light out-coupling efficiency is first established to guide the pixel design; furthermore, aluminum surface diffusers are analytically modeled and then fabricated onto multi-stable electrofluidic display pixels to demonstrate truly "white" multi-stable electrofluidic display modules. These results establish the multi-stable electrofluidic display as an excellent candidate for the ultimate E-paper device, especially for large-scale signage applications.
NASA Technical Reports Server (NTRS)
Serebreny, S. M.; Evans, W. E.; Wiegman, E. J.
1974-01-01
The usefulness of dynamic display techniques in exploiting the repetitive nature of ERTS imagery was investigated. A specially designed Electronic Satellite Image Analysis Console (ESIAC) was developed and employed to process data for seven ERTS principal investigators studying dynamic hydrological conditions for diverse applications. These applications include measurement of snowfield extent and of sediment plumes from estuary discharge, playa lake inventory, and monitoring of phreatophyte and other vegetation changes. The ESIAC provides facilities for storing registered image sequences in a magnetic video disc memory for subsequent recall, enhancement, and animated display in monochrome or color. The most distinctive feature of the system is the capability to time-lapse the imagery and analytic displays of the imagery. Data products included quantitative measurements of distances and areas, binary thematic maps based on monospectral or multispectral decisions, radiance profiles, and movie loops. Applications of animation for uses other than creating time-lapse sequences are identified. Input to the ESIAC can be either digital or via photographic transparencies.
Interactive visualization and analysis of multimodal datasets for surgical applications.
Kirmizibayrak, Can; Yim, Yeny; Wakid, Mike; Hahn, James
2012-12-01
Surgeons use information from multiple sources when making surgical decisions. These include volumetric datasets (such as CT, PET, MRI, and their variants), 2D datasets (such as endoscopic videos), and vector-valued datasets (such as computer simulations). Presenting all the information to the user in an effective manner is a challenging problem. In this paper, we present a visualization approach that displays the information from various sources in a single coherent view. The system allows the user to explore and manipulate volumetric datasets, display analysis of dataset values in local regions, combine 2D and 3D imaging modalities and display results of vector-based computer simulations. Several interaction methods are discussed: in addition to traditional interfaces including mouse and trackers, gesture-based natural interaction methods are shown to control these visualizations with real-time performance. An example of a medical application (medialization laryngoplasty) is presented to demonstrate how the combination of different modalities can be used in a surgical setting with our approach.
New generation of 3D desktop computer interfaces
NASA Astrophysics Data System (ADS)
Skerjanc, Robert; Pastoor, Siegmund
1997-05-01
Today's computer interfaces use 2-D displays showing windows, icons and menus and support mouse interactions for handling programs and data files. The interface metaphor is that of a writing desk with (partly) overlapping sheets of documents placed on its top. Recent advances in the development of 3-D display technology give the opportunity to take the interface concept a radical stage further by breaking the design limits of the desktop metaphor. The major advantage of the envisioned 'application space' is that it offers an additional, immediately perceptible dimension to clearly and constantly visualize the structure and current state of interrelations between documents, videos, application programs and networked systems. In this context, we describe the development of a visual operating system (VOS). Under VOS, applications appear as objects in 3-D space. Users can graphically connect selected objects to enable communication between the respective applications. VOS includes a general concept of visual and object-oriented programming for tasks ranging from, e.g., low-level programming up to high-level application configuration. In order to enable practical operation in an office or at home for many hours, the system should be very comfortable to use. Since typical 3-D equipment used, e.g., in virtual-reality applications (head-mounted displays, data gloves) is rather cumbersome and straining, we suggest using off-head displays and contact-free interaction techniques. In this article, we introduce an autostereoscopic 3-D display and connected video-based interaction techniques which allow viewpoint-dependent imaging (by head tracking) and visually controlled modification of data objects and links (by gaze tracking, e.g., to pick 3-D objects just by looking at them).
Wachter, S. Blake; Johnson, Ken; Albert, Robert; Syroid, Noah; Drews, Frank; Westenskow, Dwayne
2006-01-01
Objective The authors developed a picture-graphics display for pulmonary function to present typical respiratory data used in perioperative and intensive care environments. The display uses color, shape and emergent alerting to highlight abnormal pulmonary physiology, and serves as an adjunct to traditional operating room displays and monitors. Design To evaluate the prototype, nineteen clinician volunteers each managed four adverse respiratory events and one normal event using a high-resolution patient simulator that included the new displays (intervention subjects) and traditional displays (control subjects). Between-group comparisons included (i) time to diagnosis and treatment for each adverse respiratory event; (ii) the number of unnecessary treatments during the normal scenario; and (iii) self-reported workload estimates while managing study events. Measurements Two expert anesthesiologists reviewed videotaped transcriptions of the volunteers to determine time to treat and time to diagnose. Time values were then compared between groups using a Mann-Whitney U test. Estimated workload for both groups was assessed using the NASA-TLX and compared between groups using an ANOVA. P-values < 0.05 were considered significant. Results Clinician volunteers detected and treated obstructed endotracheal tubes and intrinsic PEEP problems faster with graphical than with conventional displays (p < 0.05). During the normal scenario simulation, 3 clinicians using the graphical display and 5 clinicians using the conventional display gave unnecessary treatments. Clinician volunteers reported significantly lower subjective workloads using the graphical display for the obstructed endotracheal tube scenario (p < 0.001) and the intrinsic PEEP scenario (p < 0.03). Conclusion The authors conclude that the graphical pulmonary display may serve as a useful adjunct to traditional displays in identifying adverse respiratory events. PMID:16929038
Display aids for remote control of untethered undersea vehicles
NASA Technical Reports Server (NTRS)
Verplank, W. L.
1978-01-01
A predictor display superimposed on slow-scan video or sonar data is proposed as a method to allow better remote manual control of an untethered submersible. Simulation experiments show good control under circumstances which otherwise make control practically impossible.
Experiences in teleoperation of land vehicles
NASA Technical Reports Server (NTRS)
Mcgovern, Douglas E.
1989-01-01
Teleoperation of land vehicles allows the removal of the operator from the vehicle to a remote location. This can greatly increase operator safety and comfort in applications such as security patrol or military combat. The cost includes system complexity and reduced system performance. All feedback on vehicle performance and on environmental conditions must pass through sensors, a communications channel, and displays. In particular, this requires vision to be transmitted by closed-circuit television, with a consequent degradation of information content. Vehicular teleoperation, as a result, places severe demands on the operator. Teleoperated land vehicles have been built and tested by many organizations, including Sandia National Laboratories (SNL). The SNL fleet presently includes eight vehicles of varying capability. These vehicles have been operated using different types of controls, displays, and visual systems. Experimentation studying the effects of vision-system characteristics on off-road remote driving was performed for conditions of fixed camera versus steering-coupled camera and of color versus black-and-white video display. Additionally, much experience was gained through system demonstrations and hardware development trials. The preliminary experimental findings and the results of the accumulated operational experience are discussed.
The Video PATSEARCH System: An Interview with Peter Urbach.
ERIC Educational Resources Information Center
Videodisc/Videotext, 1982
1982-01-01
The Video PATSEARCH system consists of a microcomputer with a special keyboard and two display screens which accesses the PATSEARCH database of United States government patents on the Bibliographic Retrieval Services (BRS) search system. The microcomputer retrieves text from BRS and matching graphics from an analog optical videodisc. (Author/JJD)
Preliminary experience with a stereoscopic video system in a remotely piloted aircraft application
NASA Technical Reports Server (NTRS)
Rezek, T. W.
1983-01-01
Remote piloting video display development at the Dryden Flight Research Facility of NASA's Ames Research Center is summarized, and the reasons for considering stereo television are presented. Pertinent equipment is described. Limited flight experience is also discussed, along with recommendations for further study.
Comparing Pictures and Videos for Teaching Action Labels to Children with Communication Delays
ERIC Educational Resources Information Center
Schebell, Shannon; Shepley, Collin; Mataras, Theologia; Wunderlich, Kara
2018-01-01
Children with communication delays often display difficulties labeling stimuli in their environment, particularly related to actions. Research supports direct instruction with video and picture stimuli for increasing children's action labeling repertoires; however, no studies have compared which type of stimuli results in more efficient,…
1996-01-01
Ted Brunzie and Peter Mason observe the float package and the data rack aboard the DC-9 reduced gravity aircraft. The float package contains a cryostat, a video camera, a pump and accelerometers. The data rack displays and records the video signal from the float package on tape and stores acceleration and temperature measurements on disk.
ERIC Educational Resources Information Center
Krumboltz, John D.; Babineaux, Ryan; Wientjes, Greg
2010-01-01
The supply of occupational information appears to exceed the demand. A website displaying over 100 videos about various occupations was created to help career searchers find attractive alternatives. Access to the videos was free for anyone in the world. It had been hoped that many thousands of people would make use of the resource. However, the…
On-line content creation for photo products: understanding what the user wants
NASA Astrophysics Data System (ADS)
Fageth, Reiner
2015-03-01
This paper describes how videos can be implemented into printed photo books and greeting cards. We show that, surprisingly or not, pictures from videos are used much like classical images to tell compelling stories. Videos can be taken with nearly every camera: digital point-and-shoot cameras, DSLRs, smartphones, and, increasingly, so-called action cameras mounted on sports devices. The implementation of videos, generating QR codes and extracting relevant pictures from the video stream in software, was the content of last year's paper. This year we present first data about what content is displayed and how users represent their videos in printed products, e.g. CEWE PHOTOBOOKS and greeting cards. We report the share of the different video formats used.
Design and implementation of H.264 based embedded video coding technology
NASA Astrophysics Data System (ADS)
Mao, Jian; Liu, Jinming; Zhang, Jiemin
2016-03-01
In this paper, an embedded system for remote online video monitoring was designed and developed to capture and record real-time conditions in an elevator. To improve the efficiency of video acquisition and processing, the system uses the Samsung S5PV210 chip, which integrates a graphics processing unit, as its core processor, and the video is encoded in the H.264 format for efficient storage and transmission. Based on the S5PV210 chip, hardware video coding was investigated, which is more efficient than software coding. Running tests proved that hardware video coding can markedly reduce system cost and yield smoother video display. It can be widely applied to security supervision [1].
Automatic view synthesis by image-domain-warping.
Stefanoski, Nikolce; Wang, Oliver; Lang, Manuel; Greisen, Pierre; Heinzle, Simon; Smolic, Aljosa
2013-09-01
Today, stereoscopic 3D (S3D) cinema is already mainstream, and almost all new display devices for the home support S3D content. S3D distribution infrastructure to the home is already partly established in the form of 3D Blu-ray discs, video-on-demand services, and television channels. The necessity of wearing glasses, however, is often considered an obstacle that hinders broader acceptance of this technology in the home. Multiview autostereoscopic displays enable glasses-free perception of S3D content for several observers simultaneously, and support head-motion parallax in a limited range. To support multiview autostereoscopic displays within an already established S3D distribution infrastructure, new views must be synthesized from S3D video. In this paper, a view synthesis method based on image-domain-warping (IDW) is presented that synthesizes new views directly from S3D video and operates fully automatically. IDW relies on automatic and robust estimation of sparse disparities and image saliency information, and enforces target disparities in synthesized images using an image-warping framework. Two configurations of the view synthesizer within a transmission and view synthesis framework are analyzed and evaluated. A transmission and view synthesis system using IDW was recently submitted to MPEG's call for proposals on 3D video technology, where it ranked among the four best-performing proposals.
Scalable large format 3D displays
NASA Astrophysics Data System (ADS)
Chang, Nelson L.; Damera-Venkata, Niranjan
2010-02-01
We present a general framework for the modeling and optimization of scalable large format 3-D displays using multiple projectors. Based on this framework, we derive algorithms that can robustly optimize the visual quality of an arbitrary combination of projectors (e.g. tiled, superimposed, combinations of the two) without manual adjustment. The framework creates for the first time a new unified paradigm that is agnostic to a particular configuration of projectors yet robustly optimizes for the brightness, contrast, and resolution of that configuration. In addition, we demonstrate that our algorithms support high resolution stereoscopic video at real-time interactive frame rates achieved on commodity graphics hardware. Through complementary polarization, the framework creates high quality multi-projector 3-D displays at low hardware and operational cost for a variety of applications including digital cinema, visualization, and command-and-control walls.
High dynamic range adaptive real-time smart camera: an overview of the HDR-ARTiST project
NASA Astrophysics Data System (ADS)
Lapray, Pierre-Jean; Heyrman, Barthélémy; Ginhac, Dominique
2015-04-01
Standard cameras capture only a fraction of the information that is visible to the human visual system. This is specifically true for natural scenes including areas of low and high illumination due to transitions between sunlit and shaded areas. When capturing such a scene, many cameras are unable to store the full dynamic range (DR), resulting in low-quality video where details are concealed in shadows or washed out by sunlight. The imaging technique that can overcome this problem is called HDR (High Dynamic Range) imaging. This paper describes a complete smart camera built around a standard off-the-shelf LDR (Low Dynamic Range) sensor and a Virtex-6 FPGA board. This smart camera, called HDR-ARtiSt (High Dynamic Range Adaptive Real-time Smart camera), is able to produce a real-time HDR live color video stream by recording and combining multiple acquisitions of the same scene while varying the exposure time. This technique appears to be one of the most appropriate and cheapest solutions for enhancing the dynamic range of real-life environments. HDR-ARtiSt embeds real-time multiple capture, HDR processing, data display, and transfer of an HDR color video at full sensor resolution (1280 × 1024 pixels) at 60 frames per second. The main contributions of this work are: (1) Multiple Exposure Control (MEC) dedicated to smart image capture, alternating three exposure times that are dynamically evaluated from frame to frame; (2) a Multi-streaming Memory Management Unit (MMMU) dedicated to the memory read/write operations of the three parallel video streams corresponding to the different exposure times; (3) HDR creation by combining the video streams using a dedicated hardware version of Debevec's technique; and (4) Global Tone Mapping (GTM) of the HDR scene for display on a standard LCD monitor.
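The HDR creation step the paper attributes to Debevec's technique can be sketched as an exposure-weighted average (a minimal sketch assuming a linear sensor response and a triangle weighting; the function names and normalisation are our assumptions and differ from the camera's FPGA pipeline):

```python
import numpy as np

def merge_hdr(frames, exposures):
    """Merge LDR frames taken at different exposure times into radiance.

    Each pixel's radiance is estimated as a weighted average of z/t over
    the frames, where z is the normalised pixel value, t the exposure
    time, and the triangle weight discounts under/over-exposed samples.
    """
    acc = np.zeros_like(np.asarray(frames[0], dtype=float))
    wsum = np.zeros_like(acc)
    for img, t in zip(frames, exposures):
        z = np.asarray(img, dtype=float) / 255.0     # normalise to [0, 1]
        w = 1.0 - np.abs(2.0 * z - 1.0)              # triangle weighting
        acc += w * z / t                             # radiance contribution
        wsum += w
    return acc / np.maximum(wsum, 1e-6)              # avoid divide-by-zero
```

In the camera, the three streams selected by the MEC would play the role of `frames`, and the result feeds the global tone mapper for LCD display.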
When less is best: female brown-headed cowbirds prefer less intense male displays.
O'Loghlen, Adrian L; Rothstein, Stephen I
2012-01-01
Sexual selection theory predicts that females should prefer males with the most intense courtship displays. However, wing-spread song displays that male brown-headed cowbirds (Molothrus ater) direct at females are generally less intense than versions of this display that are directed at other males. Because male-directed displays are used in aggressive signaling, we hypothesized that females should prefer lower intensity performances of this display. To test this hypothesis, we played audiovisual recordings showing the same males performing both high intensity male-directed and low intensity female-directed displays to females (N = 8) and recorded the females' copulation solicitation display (CSD) responses. All eight females responded strongly to both categories of playbacks but were more sexually stimulated by the low intensity female-directed displays. Because each pair of high and low intensity playback videos had the exact same audio track, the divergent responses of females must have been based on differences in the visual content of the displays shown in the videos. Preferences that female cowbirds show in acoustic CSD studies are correlated with mate choice in field and captivity studies, and this is likely also true for preferences elicited by playback of audiovisual displays. Female preferences for low intensity female-directed displays may explain why male cowbirds rarely use high intensity displays when signaling to females. Repetitive high intensity displays may demonstrate a male's current condition and explain why these displays are used in male-male interactions, which can escalate into physical fights in which males in poorer condition could be injured or killed. This is the first study in songbirds to use audiovisual playbacks to assess how female sexual behavior varies in response to variation in a male visual display.
Li, Xiangrui; Lu, Zhong-Lin
2012-02-29
Display systems based on conventional computer graphics cards are capable of generating images with 8-bit gray level resolution. However, most experiments in vision research require displays with more than 12 bits of luminance resolution. Several solutions are available. Bits++ (1) and DataPixx (2) use the Digital Visual Interface (DVI) output from graphics cards and high resolution (14 or 16-bit) digital-to-analog converters to drive analog display devices. The VideoSwitcher (3) described here combines analog video signals from the red and blue channels of graphics cards with different weights using a passive resistor network (4) and an active circuit to deliver identical video signals to the three channels of color monitors. The method provides an inexpensive way to enable high-resolution monochromatic displays using conventional graphics cards and analog monitors. It can also provide trigger signals that can be used to mark stimulus onsets, making it easy to synchronize visual displays with physiological recordings or response time measurements. Although computer keyboards and mice are frequently used in measuring response times (RT), the accuracy of these measurements is quite low. The RTbox is a specialized hardware and software solution for accurate RT measurements. Connected to the host computer through a USB connection, the driver of the RTbox is compatible with all conventional operating systems. It uses a microprocessor and high-resolution clock to record the identities and timing of button events, which are buffered until the host computer retrieves them. The recorded button events are not affected by potential timing uncertainties or biases associated with data transmission and processing in the host computer. The asynchronous storage greatly simplifies the design of user programs. Several methods are available to synchronize the clocks of the RTbox and the host computer.
The RTbox can also receive external triggers and be used to measure RT with respect to external events. Both VideoSwitcher and RTbox are available for users to purchase. The relevant information and many demonstration programs can be found at http://lobes.usc.edu/.
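The channel-weighting idea behind the VideoSwitcher can be illustrated with a small sketch. Assuming the blue signal is attenuated by a fixed ratio before being summed with red (the ratio below is hypothetical; the actual value is set by the resistor network), a high-resolution gray level splits into two 8-bit drive values:

```python
def split_gray(value, ratio=128.0):
    """Split a high-resolution gray level into 8-bit red and blue
    drive values, assuming the blue channel is attenuated by `ratio`
    relative to red before the two analog signals are summed.
    Illustrative of the VideoSwitcher principle only; the real
    attenuation ratio is a property of the hardware."""
    red = min(int(value), 255)                          # coarse step
    blue = min(int(round((value - red) * ratio)), 255)  # fine step
    return red, blue

def recombine(red, blue, ratio=128.0):
    """The analog sum the monitor effectively displays."""
    return red + blue / ratio
```

With a ratio of 128, each red step is subdivided into 128 blue sub-steps, extending 8-bit output to roughly 15 usable bits of luminance resolution.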
Modern Display Technologies for Airborne Applications.
1983-04-01
the case of LED head-down direct view displays, this requires that special attention be paid to the optical filtering, the electrical drive/address...effectively attenuates the LED specular reflectance component, the colour and neutral density filtering attenuate the diffuse component and the... filter techniques are planned for use with video, multi-colour and advanced versions of numeric, alphanumeric and graphic displays; this technique
Payload specialist station study. Part 2: CEI specifications (part 1). [space shuttles
NASA Technical Reports Server (NTRS)
1976-01-01
The performance, design, and verification specifications are established for the multifunction display system (MFDS) to be located at the payload station in the shuttle orbiter aft flight deck. The system provides the display units (with video, alphanumeric, and graphics capabilities), the associated electronic units, and the keyboards in support of the payload-dedicated controls and displays concept.
I Think We're Alone Now: Solitary Social Behaviors in Adolescents with Autism Spectrum Disorder.
Zane, Emily; Neumeyer, Kayla; Mertens, Julia; Chugg, Amanda; Grossman, Ruth B
2017-10-10
Research into emotional responsiveness in Autism Spectrum Disorder (ASD) has yielded mixed findings. Some studies report uniform, flat and emotionless expressions in ASD; others describe highly variable expressions that are as or even more intense than those of typically developing (TD) individuals. Variability in findings is likely due to differences in study design: some studies have examined posed (i.e., not spontaneous expressions) and others have examined spontaneous expressions in social contexts, during which individuals with ASD-by nature of the disorder-are likely to behave differently than their TD peers. To determine whether (and how) spontaneous facial expressions and other emotional responses are different from TD individuals, we video-recorded the spontaneous responses of children and adolescents with and without ASD (between the ages of 10 and 17 years) as they watched emotionally evocative videos in a non-social context. Researchers coded facial expressions for intensity, and noted the presence of laughter and other responsive vocalizations. Adolescents with ASD displayed more intense, frequent and varied spontaneous facial expressions than their TD peers. They also produced significantly more emotional vocalizations, including laughter. Individuals with ASD may display their emotions more frequently and more intensely than TD individuals when they are unencumbered by social pressure. Differences in the interpretation of the social setting and/or understanding of emotional display rules may also contribute to differences in emotional behaviors between groups.
From Antarctica to space: Use of telepresence and virtual reality in control of remote vehicles
NASA Technical Reports Server (NTRS)
Stoker, Carol; Hine, Butler P., III; Sims, Michael; Rasmussen, Daryl; Hontalas, Phil; Fong, Terrence W.; Steele, Jay; Barch, Don; Andersen, Dale; Miles, Eric
1994-01-01
In the Fall of 1993, NASA Ames deployed a modified Phantom S2 Remotely-Operated underwater Vehicle (ROV) into an ice-covered sea environment near McMurdo Science Station, Antarctica. This deployment was part of the Antarctic Space Analog Program, a joint program between NASA and the National Science Foundation to demonstrate technologies relevant for space exploration in a realistic field setting in the Antarctic. The goal of the mission was to operationally test the use of telepresence and virtual reality technology in the operator interface to a remote vehicle, while performing a benthic ecology study. The vehicle was operated both locally, from above a dive hole in the ice through which it was launched, and remotely over a satellite communications link from a control room at NASA's Ames Research Center. Local control of the vehicle was accomplished using the standard Phantom control box containing joysticks and switches, with the operator viewing stereo video camera images on a stereo display monitor. Remote control of the vehicle over the satellite link was accomplished using the Virtual Environment Vehicle Interface (VEVI) control software developed at NASA Ames. The remote operator interface included either a stereo display monitor similar to that used locally or a stereo head-mounted head-tracked display. The compressed video signal from the vehicle was transmitted to NASA Ames over a 768 Kbps satellite channel. Another channel was used to provide a bi-directional Internet link to the vehicle control computer through which the command and telemetry signals traveled, along with a bi-directional telephone service. In addition to the live stereo video from the satellite link, the operator could view a computer-generated graphic representation of the underwater terrain, modeled from the vehicle's sensors.
The virtual environment contained an animate graphic model of the vehicle which reflected the state of the actual vehicle, along with ancillary information such as the vehicle track, science markers, and locations of video snapshots. The actual vehicle was driven either from within the virtual environment or through a telepresence interface. All vehicle functions could be controlled remotely over the satellite link.
Motmot, an open-source toolkit for realtime video acquisition and analysis.
Straw, Andrew D; Dickinson, Michael H
2009-07-22
Video cameras sense passively from a distance, offer a rich information stream, and provide intuitively meaningful raw data. Camera-based imaging has thus proven critical for many advances in neuroscience and biology, with applications ranging from cellular imaging of fluorescent dyes to tracking of whole-animal behavior at ecologically relevant spatial scales. Here we present 'Motmot': an open-source software suite for acquiring, displaying, saving, and analyzing digital video in real-time. At the highest level, Motmot is written in the Python computer language. The large amounts of data produced by digital cameras are handled by low-level, optimized functions, usually written in C. This high-level/low-level partitioning and use of select external libraries allow Motmot, with only modest complexity, to perform well as a core technology for many high-performance imaging tasks. In its current form, Motmot allows for: (1) image acquisition from a variety of camera interfaces (package motmot.cam_iface), (2) the display of these images with minimal latency and computer resources using wxPython and OpenGL (package motmot.wxglvideo), (3) saving images with no compression in a single-pass, low-CPU-use format (package motmot.FlyMovieFormat), (4) a pluggable framework for custom analysis of images in realtime and (5) firmware for an inexpensive USB device to synchronize image acquisition across multiple cameras, with analog input, or with other hardware devices (package motmot.fview_ext_trig). These capabilities are brought together in a graphical user interface, called 'FView', allowing an end user to easily view and save digital video without writing any code. One plugin for FView, 'FlyTrax', which tracks the movement of fruit flies in real-time, is included with Motmot, and is described to illustrate the capabilities of FView. Motmot enables realtime image processing and display using the Python computer language. 
In addition to the provided complete applications, the architecture allows the user to write relatively simple plugins, which can accomplish a variety of computer vision tasks and be integrated within larger software systems. The software is available at http://code.astraw.com/projects/motmot.
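Motmot's high-level/low-level split and pluggable analysis framework can be illustrated generically. The sketch below is not Motmot's actual API; it only shows the common producer/consumer pattern in which a capture thread feeds frames through a chain of plugin callbacks, the pattern such realtime-acquisition toolkits are built around:

```python
import queue
import threading

def run_pipeline(grab_frame, plugins, num_frames):
    """Generic capture/process split in the spirit of Motmot's
    pluggable design (illustrative only; not Motmot's API):
    a producer thread grabs frames while the consumer applies
    each plugin callback in order."""
    q = queue.Queue(maxsize=8)  # bounded buffer between capture and analysis
    results = []

    def producer():
        for i in range(num_frames):
            q.put(grab_frame(i))
        q.put(None)  # sentinel: end of stream

    t = threading.Thread(target=producer)
    t.start()
    while True:
        frame = q.get()
        if frame is None:
            break
        for plugin in plugins:
            frame = plugin(frame)  # each plugin transforms or inspects the frame
        results.append(frame)
    t.join()
    return results
```

Decoupling capture from analysis through a bounded queue is what lets acquisition continue at camera rate even when an individual analysis step is momentarily slow.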
A design of real time image capturing and processing system using Texas Instrument's processor
NASA Astrophysics Data System (ADS)
Wee, Toon-Joo; Chaisorn, Lekha; Rahardja, Susanto; Gan, Woon-Seng
2007-09-01
In this work, we developed and implemented an image capturing and processing system equipped with the capability of capturing images from an input video in real time. The input video can be a video from a PC, video camcorder, or DVD player. We developed two modes of operation in the system. In the first mode, an input image from the PC is processed on the processing board (a development platform with a digital signal processor) and is displayed on the PC. In the second mode, the current captured image from the video camcorder (or from the DVD player) is processed on the board but is displayed on the LCD monitor. The major difference between our system and other existing conventional systems is that image-processing functions are performed on the board instead of the PC (so that the functions can be used for further developments on the board). The user can control the operations of the board through the Graphic User Interface (GUI) provided on the PC. In order to have a smooth image data transfer between the PC and the board, we employed Real Time Data Transfer (RTDX TM) technology to create a link between them. For image processing, we developed three main groups of functions: (1) Point Processing; (2) Filtering; and (3) 'Others'. Point Processing includes rotation, negation and mirroring. The Filtering category provides median, adaptive, smooth and sharpen filtering in the time domain. The 'Others' category provides auto-contrast adjustment, edge detection, segmentation and sepia color; these functions either add an effect to the image or enhance it. We developed and implemented our system using the C/C# programming languages on the TMS320DM642 (DM642) board from Texas Instruments (TI). The system was showcased at the College of Engineering (CoE) exhibition 2006 at Nanyang Technological University (NTU), where more than 40 users tried it. It is demonstrated that our system is adequate for real time image capturing.
Our system can be used or applied for applications such as medical imaging, video surveillance, etc.
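The point-processing and filtering categories described above are standard image operations. A minimal NumPy sketch of negation, mirroring, and a 3×3 median filter follows; this is illustrative only, since the original system implements these functions in C on the DM642 DSP:

```python
import numpy as np

def negate(img):
    """Point processing: invert an 8-bit image."""
    return 255 - img

def mirror(img):
    """Mirror about the vertical axis (flip columns)."""
    return img[:, ::-1]

def median3(img):
    """3x3 median filter built by stacking the nine shifted copies
    of the image and taking a per-pixel median. Interior pixels
    only; the one-pixel border is left unfiltered."""
    out = img.copy()
    h, w = img.shape
    stack = np.stack([img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
                      for dy in (-1, 0, 1) for dx in (-1, 0, 1)])
    out[1:-1, 1:-1] = np.median(stack, axis=0)
    return out
```

The median filter is the classic choice for removing impulse ("salt-and-pepper") noise while preserving edges, which is why it appears in the Filtering group alongside smoothing and sharpening.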
American Carrier Air Power at the Dawn of a New Century
2005-01-01
Systems, Office of the Secretary of Defense (Operational Test and Evaluation); then–Commander Calvin Craig, OPNAV N81; Captain Kenneth Neubauer and...TACP Tactical Air Control Party TARPS Tactical Air Reconnaissance Pod System TCS Television Camera System TLAM Tomahawk Land-Attack Missile TST Time...store any video imagery acquired by the aircraft’s systems, including the TARPS pod, the pilot’s head-up display (HUD), the Television Camera System (TCS
Contaminated and uncontaminated feeding influence perceived intimacy in mixed-sex dyads.
Alley, Thomas R
2012-06-01
It was expected that viewers watching adult mixed-sex pairs dining together would give higher ratings of the perceived intimacy and involvement of the pair if feeding is displayed while eating, especially if the feeding involves contaminated (i.e., with potential germ transfer) foods. Our hypotheses were tested using a design in which participants viewed five videotapes in varying order. Each video showed a different mixed-sex pair of actors sharing a meal and included a distinct form of food sharing or none. These were shown to 50 small groups of young adults in quasi-random sequences to control for order effects. Immediately after each video, viewers were asked about the attractiveness, attraction and intimacy in the dyad they had just observed. As predicted, videos featuring contaminated feeding consistently produced higher ratings on involvement and attraction than those showing uncontaminated feeding which, in turn, mostly produced higher ratings on involvement and attraction than those showing no feeding behaviors. Copyright © 2012 Elsevier Ltd. All rights reserved.
The interactive digital video interface
NASA Technical Reports Server (NTRS)
Doyle, Michael D.
1989-01-01
A frequent complaint in the computer oriented trade journals is that current hardware technology is progressing so quickly that software developers cannot keep up. An example of this phenomenon can be seen in the field of microcomputer graphics. To exploit the advantages of new mechanisms of information storage and retrieval, new approaches must be made towards incorporating existing programs as well as developing entirely new applications. A particular area of need is the correlation of discrete image elements to textual information. The interactive digital video (IDV) interface embodies a new concept in software design which addresses these needs. The IDV interface is a patented device- and language-independent process for identifying image features on a digital video display which allows a number of different processes to be keyed to that identification. Its capabilities include the correlation of discrete image elements to relevant text information and the correlation of these image features to other images as well as to program control mechanisms. Sophisticated interrelationships can be set up between images, text, and program control mechanisms.
Effects of Picture Prompts Delivered by a Video iPod on Pedestrian Navigation
ERIC Educational Resources Information Center
Kelley, Kelly R.; Test, David W.; Cooke, Nancy L.
2013-01-01
Transportation access is a major contributor to independence, productivity, and societal inclusion for individuals with intellectual and development disabilities (IDD). This study examined the effects of pedestrian navigation training using picture prompts displayed through a video iPod on travel route completion with 4 adults and IDD. Results…
NASA Technical Reports Server (NTRS)
Gilliland, M. G.; Rougelot, R. S.; Schumaker, R. A.
1966-01-01
Video signal processor uses special-purpose integrated circuits with nonsaturating current mode switching to accept texture and color information from a digital computer in a visual spaceflight simulator and to combine these, for display on color CRT with analog information concerning fading.
38 CFR 1.9 - Description, use, and display of VA seal and flag.
Code of Federal Regulations, 2010 CFR
2010-07-01
...) Official awards, certificates, medals, and plaques. (E) Motion picture film, video tape, and other... governments. (F) Official awards, certificates, and medals. (G) Motion picture film, video tape, and other... with this section shall be subject to the penalty provisions of 18 U.S.C. 506, 701, or 1017, providing...
Imaging System for Vaginal Surgery.
Taylor, G Bernard; Myers, Erinn M
2015-12-01
The vaginal surgeon is challenged with performing complex procedures within a surgical field of limited light and exposure. The video telescopic operating microscope is an illumination and imaging system that provides visualization during open surgical procedures with a limited field of view. The imaging system is positioned within the surgical field and then secured to the operating room table with a maneuverable holding arm. A high-definition camera and Xenon light source allow transmission of the magnified image to a high-definition monitor in the operating room. The monitor screen is positioned above the patient for the surgeon and assistants to view real time throughout the operation. The video telescopic operating microscope system was used to provide surgical illumination and magnification during total vaginal hysterectomy and salpingectomy, midurethral sling, and release of vaginal scar procedures. All procedures were completed without complications. The video telescopic operating microscope provided illumination of the vaginal operative field and display of the magnified image onto high-definition monitors in the operating room for the surgeon and staff to simultaneously view the procedures. The video telescopic operating microscope provides high-definition display, magnification, and illumination during vaginal surgery.
Impact of pain behaviors on evaluations of warmth and competence.
Ashton-James, Claire E; Richardson, Daniel C; de C Williams, Amanda C; Bianchi-Berthouze, Nadia; Dekker, Peter H
2014-12-01
This study investigated the social judgments that are made about people who appear to be in pain. Fifty-six participants viewed 2 video clips of human figures exercising. The videos were created by a motion tracking system, and showed dots that had been placed at various points on the body, so that body motion was the only visible cue. One of the figures displayed pain behaviors (eg, rubbing, holding, hesitating), while the other did not. Without any other information about the person in each video, participants evaluated each person on a variety of attributes associated with interpersonal warmth, competence, mood, and physical fitness. As well as judging them to be in more pain, participants evaluated the person who displayed pain behavior as less warm and less competent than the person who did not display pain behavior. In addition, the person who displayed pain behavior was perceived to be in a more negative mood and to have poorer physical fitness than the person who did not, and these perceptions contributed to the impact of pain behaviors on evaluations of warmth and competence, respectively. The implications of these negative social evaluations for social relationships, well-being, and pain assessment in persons in chronic pain are discussed. Copyright © 2014 International Association for the Study of Pain. Published by Elsevier B.V. All rights reserved.
Kutsuna, Kenichiro; Matsuura, Yasuyuki; Fujikake, Kazuhiro; Miyao, Masaru; Takada, Hiroki
2013-01-01
Visually induced motion sickness (VIMS) is caused by sensory conflict, the disagreement between vergence and visual accommodation while observing stereoscopic images. VIMS can be measured by psychological and physiological methods. We propose a mathematical methodology to measure the effect of three-dimensional (3D) images on the equilibrium function. In this study, body sway in the resting state is compared with that during exposure to 3D video clips on a liquid crystal display (LCD) and on a head mounted display (HMD). In addition, the Simulator Sickness Questionnaire (SSQ) was completed immediately afterward. Based on the statistical analysis of the SSQ subscores and each index for stabilograms, we succeeded in quantifying the VIMS during exposure to the stereoscopic images. Moreover, we discuss the metamorphism in the potential functions used to control the standing posture during exposure to stereoscopic video clips.
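Stabilogram indices of the kind used above to quantify body sway are computed from center-of-pressure samples. The sketch below shows two simple, commonly reported measures, total locus length and bounding-rectangle area; it is illustrative only, and the study's actual indices (e.g., sparse-density measures) may differ:

```python
import numpy as np

def sway_indices(x, y):
    """Simple stabilogram indices from center-of-pressure samples.

    x, y: 1-D arrays of CoP coordinates over time (e.g., in cm).
    Returns (locus_length, rect_area): the total path length of the
    sway trajectory and the area of its bounding rectangle."""
    dx, dy = np.diff(x), np.diff(y)
    locus_length = float(np.sum(np.hypot(dx, dy)))
    rect_area = float((x.max() - x.min()) * (y.max() - y.min()))
    return locus_length, rect_area
```

Comparing such indices between the resting state and the viewing conditions (LCD vs. HMD) is what allows sway changes during stereoscopic viewing to be tested statistically.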
Avey, Marc T; Phillmore, Leslie S; MacDougall-Shackleton, Scott A
2005-12-07
Sensory-driven immediate early gene (IEG) expression has been a key tool to explore auditory perceptual areas in the avian brain. Most work on IEG expression in songbirds such as zebra finches has focused on playback of acoustic stimuli and its effect on auditory processing areas such as the caudal medial mesopallium (CMM) and caudal medial nidopallium (NCM). However, in a natural setting, the courtship displays of songbirds (including zebra finches) include visual as well as acoustic components. To determine whether the visual stimulus of a courting male modifies song-induced expression of the IEG ZENK in the auditory forebrain, we exposed male and female zebra finches to acoustic (song) and visual (dancing) components of courtship. Birds were played digital movies with either combined audio and video, audio only, video only, or neither audio nor video (control). We found significantly increased levels of Zenk response in the auditory region CMM in the two treatment groups exposed to acoustic stimuli compared to the control group. The video only group had an intermediate response, suggesting a potential effect of visual input on activity in these auditory brain regions. Finally, we unexpectedly found a lateralization of Zenk response that was independent of sex, brain region, or treatment condition, such that Zenk immunoreactivity was consistently higher in the left hemisphere than in the right and the majority of individual birds were left-hemisphere dominant.
SarcOptiM for ImageJ: high-frequency online sarcomere length computing on stimulated cardiomyocytes.
Pasqualin, Côme; Gannier, François; Yu, Angèle; Malécot, Claire O; Bredeloux, Pierre; Maupoil, Véronique
2016-08-01
Accurate measurement of cardiomyocyte contraction is a critical issue for scientists working on cardiac physiology and the pathophysiology of diseases involving contraction impairment. Cardiomyocyte contraction can be quantified by measuring sarcomere length, but few tools are available for this, and none is freely distributed. We developed a plug-in (SarcOptiM) for the ImageJ/Fiji image analysis platform developed by the National Institutes of Health. SarcOptiM computes sarcomere length via fast Fourier transform analysis of video frames captured or displayed in ImageJ and thus is not tied to a dedicated video camera. It can work in real time or offline, the latter overcoming rotating motion or displacement-related artifacts. SarcOptiM includes a simulator and video generator of cardiomyocyte contraction. Acquisition parameters, such as pixel size and camera frame rate, were tested with both experimental recordings of rat ventricular cardiomyocytes and synthetic videos. It is freely distributed, and its source code is available. It works under Windows, Mac, or Linux operating systems. The camera speed is the limiting factor, since the algorithm can compute online sarcomere shortening at frame rates >10 kHz. In conclusion, SarcOptiM is a free and validated user-friendly tool for studying cardiomyocyte contraction in all species, including human. Copyright © 2016 the American Physiological Society.
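The FFT principle behind SarcOptiM can be sketched as follows: treat an intensity profile taken along the cell axis as a periodic signal and read the sarcomere length off the dominant spectral peak. This is an illustrative NumPy version with invented parameter names, not the plug-in's actual code:

```python
import numpy as np

def sarcomere_length(profile, pixel_size_um):
    """Estimate sarcomere length as the spatial period of the
    dominant FFT peak of a 1-D intensity profile along the cell axis
    (the principle behind SarcOptiM; illustrative sketch only).

    profile: 1-D array of intensity samples.
    pixel_size_um: size of one pixel in micrometers.
    Returns the estimated sarcomere length in micrometers."""
    n = len(profile)
    # remove the mean so the DC component does not dominate
    spectrum = np.abs(np.fft.rfft(profile - np.mean(profile)))
    freqs = np.fft.rfftfreq(n, d=pixel_size_um)  # cycles per micrometer
    peak = np.argmax(spectrum[1:]) + 1           # skip the DC bin
    return 1.0 / freqs[peak]                     # period = 1 / frequency
```

Because only a single transform and peak search are needed per frame, this kind of computation can keep up with very high camera frame rates, consistent with the >10 kHz figure quoted above.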
NASA Astrophysics Data System (ADS)
Ilgner, Justus F. R.; Kawai, Takashi; Shibata, Takashi; Yamazoe, Takashi; Westhofen, Martin
2006-02-01
Introduction: An increasing number of surgical procedures are performed in a microsurgical and minimally-invasive fashion. However, the performance of surgery, its possibilities and limitations become difficult to teach. Stereoscopic video has evolved from a complex production process and expensive hardware towards rapid editing of video streams with standard and HDTV resolution which can be displayed on portable equipment. This study evaluates the usefulness of stereoscopic video in teaching undergraduate medical students. Material and methods: From an earlier study we chose two clips each of three different microsurgical operations (tympanoplasty type III of the ear, endonasal operation of the paranasal sinuses and laser chordectomy for carcinoma of the larynx). This material was supplemented with 23 clips of a cochlear implantation, which were specifically edited for a portable computer with an autostereoscopic display (PC-RD1-3D, SHARP Corp., Japan). The recording and synchronization of left and right images was performed at the University Hospital Aachen. The footage was edited stereoscopically at Waseda University by means of our original software for non-linear editing of stereoscopic 3-D movies. Then the material was converted into the streaming 3-D video format. The purpose of the conversion was to present the video clips in a file type that does not depend on a television signal such as PAL or NTSC. 25 4th year medical students who participated in the general ENT course at Aachen University Hospital were asked to estimate depth clues within the six video clips plus the cochlear implantation clips. Another 25 4th year students who were shown the material monoscopically on a conventional laptop served as control. Results: All participants noted that the additional depth information helped with understanding the relation of anatomical structures, even though none had hands-on experience with Ear, Nose and Throat operations before or during the course.
The monoscopic group generally estimated resection depth at much lower values than in reality. Although this was also the case for some participants in the stereoscopic group, the estimation of depth features reflected the enhanced depth impression provided by stereoscopy. Conclusion: Following this first implementation of stereoscopic video teaching, medical students who are inexperienced with ENT surgical procedures were able to reproduce depth information, and therefore anatomically complex structures, to a greater extent. Besides extending video teaching to junior doctors, the next evaluation step will address its effect on the learning curve during the surgical training program.
Holodeck: Telepresence Dome Visualization System Simulations
NASA Technical Reports Server (NTRS)
Hite, Nicolas
2012-01-01
This paper explores the simulation and consideration of different image-projection strategies for the Holodeck, a dome that will be used for highly immersive telepresence operations in future endeavors of the National Aeronautics and Space Administration (NASA). Its visualization system will include a full 360 degree projection onto the dome's interior walls in order to display video streams from both simulations and recorded video. Because humans innately trust their vision to precisely report their surroundings, the Holodeck's visualization system is crucial to its realism. This system will be rigged with an integrated hardware and software infrastructure: a system of projectors that will work with a Graphics Processing Unit (GPU) and computer to both project images onto the dome and correct warping in those projections in real time. Using both Computer-Aided Design (CAD) and ray-tracing software, virtual models of various dome/projector geometries were created and simulated via tracking and analysis of virtual light sources, leading to the selection of two possible configurations for installation. Research into image warping and the generation of dome-ready video content was also conducted, including generation of fisheye images, distortion correction, and the creation of a reliable content-generation pipeline.
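Fisheye content generation of the kind mentioned above starts from a mapping between image pixels and view directions on the dome. A minimal sketch of the equidistant fisheye model follows; the model and parameters are assumptions for illustration, since real dome pipelines calibrate this mapping per projector:

```python
import math

def fisheye_to_direction(u, v, size, fov_deg=180.0):
    """Map pixel (u, v) in a square fisheye image of width `size`
    to a unit 3-D view direction, using the equidistant fisheye
    model (radius proportional to angle from the dome zenith).
    Returns None for pixels outside the image circle."""
    # normalized coordinates in [-1, 1], center at the zenith
    x = 2.0 * u / size - 1.0
    y = 2.0 * v / size - 1.0
    r = math.hypot(x, y)
    if r > 1.0:
        return None
    theta = r * math.radians(fov_deg) / 2.0  # angle from zenith
    phi = math.atan2(y, x)                   # azimuth around the dome
    return (math.sin(theta) * math.cos(phi),
            math.sin(theta) * math.sin(phi),
            math.cos(theta))
```

Inverting this mapping for each projector pixel, after accounting for the projector's pose relative to the dome, is the basis of the real-time warping correction the GPU performs.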
DOE Office of Scientific and Technical Information (OSTI.GOV)
Springmeyer, R R; Brugger, E; Cook, R
The Data group provides data analysis and visualization support to its customers. This consists primarily of the development and support of VisIt, a data analysis and visualization tool. Support ranges from answering questions about the tool, providing classes on how to use the tool, and performing data analysis and visualization for customers. The Information Management and Graphics Group supports and develops tools that enhance our ability to access, display, and understand large, complex data sets. Activities include applying visualization software for large scale data exploration; running video production labs on two networks; supporting graphics libraries and tools for end users; maintaining PowerWalls and assorted other displays; and developing software for searching and managing scientific data. Researchers in the Center for Applied Scientific Computing (CASC) work on various projects including the development of visualization techniques for large scale data exploration that are funded by the ASC program, among others. The researchers also have LDRD projects and collaborations with other lab researchers, academia, and industry. The IMG group is located in the Terascale Simulation Facility, home to Dawn, Atlas, BGL, and others, which includes both classified and unclassified visualization theaters, a visualization computer floor and deployment workshop, and video production labs. We continued to provide the traditional graphics group consulting and video production support. We maintained five PowerWalls and many other displays. We deployed a 576-node Opteron/IB cluster with 72 TB of memory providing a visualization production server on our classified network. We continue to support a 128-node Opteron/IB cluster providing a visualization production server for our unclassified systems and an older 256-node Opteron/IB cluster for the classified systems, as well as several smaller clusters to drive the PowerWalls.
The visualization production system includes NFS servers to provide dedicated storage for data analysis and visualization. The ASC projects have delivered new versions of visualization and scientific data management tools to end users and continue to refine them. VisIt had 4 releases during the past year, ending with VisIt 2.0. We released version 2.4 of Hopper, a Java application for managing and transferring files. This release included a graphical disk usage view which works on all types of connections and an aggregated copy feature for transferring massive datasets quickly and efficiently to HPSS. We continue to use and develop Blockbuster and Telepath. Both the VisIt and IMG teams were engaged in a variety of movie production efforts during the past year in addition to the development tasks.
High-speed reconstruction of compressed images
NASA Astrophysics Data System (ADS)
Cox, Jerome R., Jr.; Moore, Stephen M.
1990-07-01
A compression scheme is described that allows high-definition radiological images with greater than 8-bit intensity resolution to be represented by 8-bit pixels. Reconstruction of the images with their original intensity resolution can be carried out by means of a pipeline architecture suitable for compact, high-speed implementation. A reconstruction system is described that can be fabricated according to this approach and placed between an 8-bit display buffer and the display's video system, thereby allowing contrast control of images at video rates. Results for 50 CR chest images are described, showing that error-free reconstruction of the original 10-bit CR images can be achieved.
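The abstract does not specify the mapping used to pack 10-bit intensities into 8-bit codes. A common approach for this kind of scheme, shown here purely as a hedged illustration (square-root companding is an assumption, not the authors' method, and this naive version is lossy rather than error-free), pairs a nonlinear forward map with a small inverse lookup table of the kind a display-side pipeline stage could apply at video rates:

```python
import numpy as np

def compress_10_to_8(img10):
    """Map 10-bit values (0-1023) to 8-bit codes via square-root companding,
    allocating finer steps to the darker intensities."""
    return np.round(np.sqrt(img10.astype(np.float64))
                    * 255.0 / np.sqrt(1023.0)).astype(np.uint8)

def reconstruct_8_to_10(img8):
    """Invert the companding with a 256-entry lookup table, as a compact
    hardware pipeline stage might between frame buffer and video output."""
    lut = np.round((np.arange(256) * np.sqrt(1023.0) / 255.0) ** 2).astype(np.uint16)
    return lut[img8]

orig = np.array([0, 100, 512, 1023], dtype=np.uint16)
rec = reconstruct_8_to_10(compress_10_to_8(orig))   # close to, but not equal to, orig
```

The paper's error-free reconstruction implies additional side information beyond this simple LUT; the sketch only conveys the pipeline placement described in the abstract.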
Description and flight tests of an oculometer
NASA Technical Reports Server (NTRS)
Middleton, D. B.; Hurt, G. J., Jr.; Wise, M. A.; Holt, J. D.
1977-01-01
A remote sensing oculometer was successfully operated during flight tests with a NASA experimental Twin Otter aircraft at the Langley Research Center. Although the oculometer was designed primarily for the laboratory, it was able to track the pilot's eye-point-of-regard (lookpoint) consistently and unobtrusively in the flight environment. The instantaneous position of the lookpoint was determined to within approximately 1 deg. Data were recorded on both analog and video tape. The video data consisted of continuous scenes of the aircraft's instrument display and a superimposed white dot (simulating the lookpoint) dwelling on an instrument or moving from instrument to instrument as the pilot monitored the display information during landing approaches.
Ochiai, Tetsuji; Mushiake, Hajime; Tanji, Jun
2005-07-01
The ventral premotor cortex (PMv) has been implicated in the visual guidance of movement. To examine whether neuronal activity in the PMv is involved in controlling the direction of motion of a visual image of the hand or the actual movement of the hand, we trained a monkey to capture a target that was presented on a video display using the same side of its hand as was displayed on the video display. We found that PMv neurons predominantly exhibited premovement activity that reflected the image motion to be controlled, rather than the physical motion of the hand. We also found that the activity of half of such direction-selective PMv neurons depended on which side (left versus right) of the video image of the hand was used to capture the target. Furthermore, this selectivity for a portion of the hand was not affected by changing the starting position of the hand movement. These findings suggest that PMv neurons play a crucial role in determining which part of the body moves in which direction, at least under conditions in which a visual image of a limb is used to guide limb movements.
Optical links in handheld multimedia devices
NASA Astrophysics Data System (ADS)
van Geffen, S.; Duis, J.; Miller, R.
2008-04-01
Ever-emerging applications in handheld multimedia devices such as mobile phones, laptop computers, portable video games and digital cameras, requiring increased screen resolutions, are driving higher aggregate bitrates between host processor and display(s), enabling services such as mobile video conferencing, video on demand and TV broadcasting. Larger displays and smaller phones require complex mechanical 3D hinge configurations striving to combine maximum functionality with compact building volumes. Conventional galvanic interconnections such as Micro-Coax and FPC carrying parallel digital data between host processor and display module may produce Electromagnetic Interference (EMI) and bandwidth limitations caused by small cable size and tight cable bends. To reduce the number of signals through a hinge, the mobile phone industry, organized in the MIPI (Mobile Industry Processor Interface) alliance, is currently defining an electrical interface transmitting serialized digital data at speeds >1Gbps. This interface allows for electrical or optical interconnects. Above 1Gbps, optical links may offer a cost-effective alternative because of their flexibility, increased bandwidth and immunity to EMI. This paper describes the development of optical links for handheld communication devices. A cable assembly based on a special Plastic Optical Fiber (POF), selected for its mechanical durability, is terminated with a small form factor molded lens assembly which interfaces between an 850nm VCSEL transmitter and a receiving device on the printed circuit board of the display module. A statistical approach based on a Lean Design For Six Sigma (LDFSS) roadmap for new product development seeks an optimum link definition which will be robust and low cost while meeting the power consumption requirements appropriate for battery-operated systems.
1981 Image II Conference Proceedings.
1981-11-01
rapid motion of terrain detail across the display requires fast display processors. Other difficulties are perceptual: the visual displays must convey...has been a continuing effort by Vought in the last decade. Early systems were restricted by the unavailability of video bulk storage with fast random...each photograph. The calculations aided in the proper sequencing of the scanned scenes on the tape recorder and eventually facilitated fast random
Image sequence analysis workstation for multipoint motion analysis
NASA Astrophysics Data System (ADS)
Mostafavi, Hassan
1990-08-01
This paper describes an application-specific engineering workstation designed and developed to analyze the motion of objects from video sequences. The system combines the software and hardware environment of a modern graphics-oriented workstation with digital image acquisition, processing, and display techniques. In addition to automation and increased throughput of data reduction tasks, the objective of the system is to provide less invasive methods of measurement by offering the ability to track objects that are more complex than reflective markers. Grey-level image processing and spatial/temporal adaptation of the processing parameters are used for location and tracking of more complex features of objects under uncontrolled lighting and background conditions. The applications of such an automated and noninvasive measurement tool include analysis of the trajectory and attitude of rigid bodies such as human limbs, robots, aircraft in flight, etc. The system's key features are: 1) acquisition and storage of image sequences by digitizing and storing real-time video; 2) computer-controlled movie-loop playback, freeze-frame display, and digital image enhancement; 3) multiple leading-edge tracking, in addition to object centroids, at up to 60 fields per second from either live input video or a stored image sequence; 4) model-based estimation and tracking of the six degrees of freedom of a rigid body; 5) field-of-view and spatial calibration; 6) image sequence and measurement database management; and 7) offline analysis software for trajectory plotting and statistical analysis.
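The centroid tracking the workstation performs can be illustrated with a minimal sketch. The function and window conventions below are hypothetical (the paper's actual grey-level processing and adaptive parameters are more elaborate); the sketch shows only the basic intensity-weighted centroid inside a search window:

```python
import numpy as np

def weighted_centroid(frame, win, thresh):
    """Intensity-weighted centroid of pixels above `thresh` inside the
    window (r0, r1, c0, c1); returns (row, col) or None if no feature."""
    r0, r1, c0, c1 = win
    patch = frame[r0:r1, c0:c1].astype(np.float64)
    patch[patch < thresh] = 0.0            # suppress background clutter
    total = patch.sum()
    if total == 0:
        return None                        # feature lost; a tracker would widen the window
    rows, cols = np.mgrid[r0:r1, c0:c1]
    return (rows * patch).sum() / total, (cols * patch).sum() / total

frame = np.zeros((40, 40))
frame[10:13, 20:23] = 200.0                # bright 3x3 blob centred at (11, 21)
print(weighted_centroid(frame, (0, 40, 0, 40), 50))   # → (11.0, 21.0)
```

A field-rate tracker would re-center the window on each new centroid and adapt `thresh` to the local grey-level statistics, in the spirit of the spatial/temporal adaptation the paper describes.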
Recovery of Images from the AMOS ELSI Data for STS-33
1990-04-19
were recorded on tape in both video and digital formats. The ELSI was used on three passes, orbits 21, 37, and 67, on 24, 25, and 27 November. These data...November, in video format, were hand-carried to Geophysics Laboratory (GL) at the beginning of December 1989; the classified data, in digital format, were...are also sampled and reconverted to analog form, in a standard video format, for display on a video monitor and recording on videotape. 3. TAPE FORMAT
Design Issues in Video Disc Map Display.
1984-10-01
such items as the equipment used by ETL in its work with discs and selected images from a disc. II. VIDEO DISC TECHNOLOGY AND VOCABULARY...The term video refers to a television image. The standard home television set is equipped with a receiver, which is capable of picking up a signal...plays for one hour per side and is played at a constant linear velocity. The industrially-formatted disc has 54,000 frames per side in concentric tracks
12. NBS LOWER ROOM. BEHIND FAR GLASS WALL IS VIDEO ...
12. NBS LOWER ROOM. BEHIND FAR GLASS WALL IS VIDEO TAPE EQUIPMENT AND VOICE INTERCOM EQUIPMENT. THE MONITORS ABOVE GLASS WALL DISPLAY UNDERWATER TEST VIDEO TO CONTROL ROOM. FARTHEST CONSOLE ROW CONTAINS CAMERA SWITCHING, PANNING, TILTING, FOCUSING, AND ZOOMING. MIDDLE CONSOLE ROW CONTAINS TEST CONDUCTOR CONSOLES FOR MONITORING TEST ACTIVITIES AND DATA. THE CLOSEST CONSOLE ROW IS NBS FACILITY CONSOLES FOR TEST DIRECTOR, SAFETY AND QUALITY ASSURANCE REPRESENTATIVES. - Marshall Space Flight Center, Neutral Buoyancy Simulator Facility, Rideout Road, Huntsville, Madison County, AL
13. NBS LOWER ROOM. BEHIND FAR GLASS WALL IS VIDEO ...
13. NBS LOWER ROOM. BEHIND FAR GLASS WALL IS VIDEO TAPE EQUIPMENT AND VOICE INTERCOM EQUIPMENT. THE MONITORS ABOVE GLASS WALL DISPLAY UNDERWATER TEST VIDEO TO CONTROL ROOM. FARTHEST CONSOLE ROW CONTAINS CAMERA SWITCHING, PANNING, TILTING, FOCUSING, AND ZOOMING. MIDDLE CONSOLE ROW CONTAINS TEST CONDUCTOR CONSOLES FOR MONITORING TEST ACTIVITIES AND DATA. THE CLOSEST CONSOLE ROW IS NBS FACILITY CONSOLES FOR TEST DIRECTOR, SAFETY AND QUALITY ASSURANCE REPRESENTATIVES. - Marshall Space Flight Center, Neutral Buoyancy Simulator Facility, Rideout Road, Huntsville, Madison County, AL
Effects Of Frame Rates In Video Displays
NASA Technical Reports Server (NTRS)
Kellogg, Gary V.; Wagner, Charles A.
1991-01-01
This report describes an experiment on the subjective effects of the rates at which a cathode-ray-tube display in a flight simulator is updated and refreshed. The experiment was conducted to learn more about the jumping, blurring, flickering, and multiple lines that an observer perceives when a line moves at high speed across the screen of a calligraphic CRT.
Internet Protocol Display Sharing Solution for Mission Control Center Video System
NASA Technical Reports Server (NTRS)
Brown, Michael A.
2009-01-01
With the advent of broadcast television as a constant source of information throughout the NASA manned space flight Mission Control Center (MCC) at the Johnson Space Center (JSC), the current Video Transport System (VTS) provides visually enhanced real-time applications as a broadcast channel that decision-making flight controllers have come to rely on, but it can be costly and difficult to maintain. The Operations Technology Facility (OTF) of the Mission Operations Facility Division (MOFD) has been tasked to provide insight into new innovative technological solutions for the MCC environment, focusing on alternative architectures for a VTS. New technology will be provided to enable sharing of all imagery from one specific computer display, better known as Display Sharing (DS), to other computer displays and display systems such as large projector systems, flight control rooms, and back supporting rooms throughout the facilities and other offsite centers using IP networks. It has been stated that Internet Protocol (IP) applications are ready substitutes for the current visual architecture, but quality and speed may need to be forfeited to reduce cost and improve maintainability. Although the IP infrastructure can support many technologies, the simple task of sharing one's computer display can be rather clumsy and difficult to configure and manage across the many operators and products.
The DS process shall invest in collectively automating the sharing of images while focusing on such characteristics as managing bandwidth, encrypting security measures, synchronizing disconnections from loss of signal / loss of acquisition, and performance latency, and on providing functions such as scalability, multi-sharing, ease of initial integration and sustained configuration, integration with video adjustment packages, collaborative tools, and host/recipient controllability, with the paramount priority being an enterprise solution that provides ownership of the whole process while maintaining the integrity of the latest displayed-image devices. This study will provide insight into the many possibilities that can be filtered down to a harmoniously responsive product for use in today's MCC environment.
ARINC 818 express for high-speed avionics video and power over coax
NASA Astrophysics Data System (ADS)
Keller, Tim; Alexander, Jon
2012-06-01
CoaXPress is a new standard for high-speed video over coax cabling developed for the machine vision industry. CoaXPress includes both a physical layer and a video protocol. The physical layer has desirable features for aerospace and defense applications: it allows 3Gbps (up to 6Gbps) communication, includes a 21Mbps return path allowing for bidirectional communication, and provides up to 13W of power, all over a single coax connection. ARINC 818, titled "Avionics Digital Video Bus", is a protocol standard developed specifically for high speed, mission critical aerospace video systems. ARINC 818 is being widely adopted for new military and commercial display and sensor applications. The ARINC 818 protocol combined with the CoaXPress physical layer provides desirable characteristics for many aerospace systems. This paper presents the results of a technology demonstration program to marry the physical layer from CoaXPress with the ARINC 818 protocol. ARINC 818 is a protocol, not a physical layer. Typically, ARINC 818 is implemented over fiber or copper for speeds of 1 to 2Gbps; beyond 2Gbps, it has been implemented exclusively over fiber optic links. In many rugged applications a copper interface is still desired; implementing ARINC 818 over the CoaXPress physical layer provides a path to 3 and 6Gbps copper interfaces for ARINC 818. Results of the successful technology demonstration, dubbed ARINC 818 Express, are presented, showing 3Gbps communication while powering a remote module over a single coax cable. The paper concludes with suggested next steps for bringing this technology to production readiness.
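A quick way to see why 3 Gbps links matter for avionics video is to budget the line rate for a candidate format. The sketch below assumes 8b/10b line coding (used by Fibre Channel, on which ARINC 818 is based) and a rough 15% allowance for framing and ancillary data; the `blanking` factor is an illustrative assumption, not a figure from the standard:

```python
def required_line_rate_gbps(width, height, bpp, fps, blanking=1.15):
    """Estimate the serial line rate for an uncompressed video format:
    payload bit rate, times a rough framing/ancillary allowance,
    times 10/8 for 8b/10b line-coding overhead."""
    payload = width * height * bpp * fps        # active pixel bits per second
    return payload * blanking * 10 / 8 / 1e9

# Does 1280x1024, 24-bit colour at 60 Hz fit a 3 Gbps coax lane?
rate = required_line_rate_gbps(1280, 1024, 24, 60)
print(round(rate, 2))                           # → 2.71
```

Under these assumptions the format fits a single 3 Gbps lane, while higher resolutions or frame rates would push toward the 6 Gbps option the paper mentions.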
Putnam, P.T.; Roman, J.M.; Zimmerman, P.E.; Gothard, K.M.
2017-01-01
Gaze following is a basic building block of social behavior that has been observed in multiple species, including primates. The absence of gaze following is associated with abnormal development of social cognition, such as in autism spectrum disorders (ASD). Some social deficits in ASD, including the failure to look at eyes and the inability to recognize facial expressions, are ameliorated by intranasal administration of oxytocin (IN-OT). Here we tested the hypothesis that IN-OT might enhance social processes that require active engagement with a social partner, such as gaze following. Alternatively, IN-OT may only enhance the perceptual salience of the eyes, and may not modify behavioral responses to social signals. To test this hypothesis, we presented four monkeys with videos of conspecifics displaying natural behaviors. Each video was viewed multiple times before and after the monkeys received intranasally either 50 IU of OT or saline. We found that despite a gradual decrease in attention to the repeated viewing of the same videos (habituation), IN-OT consistently increased the frequency of gaze following saccades. Further analysis confirmed that these behaviors did not occur randomly, but rather predictably in response to the same segments of the videos. These findings suggest that in response to more naturalistic social stimuli IN-OT enhances the propensity to interact with a social partner rather than merely elevating the perceptual salience of the eyes. In light of these findings, gaze following may serve as a metric for pro-social effects of oxytocin that target social action more than social perception. PMID:27343726
NASA Technical Reports Server (NTRS)
Cohen, Tamar E.; Lees, David S.; Deans, Matthew C.; Lim, Darlene S. S.; Lee, Yeon Jin Grace
2018-01-01
Exploration Ground Data Systems (xGDS) supports rapid scientific decision making by synchronizing video in context with map, instrument data visualization, geo-located notes and any other collected data. xGDS is an open source web-based software suite developed at NASA Ames Research Center to support remote science operations in analog missions and prototype solutions for remote planetary exploration. (See Appendix B) Typical video systems are designed to play or stream video only, independent of other data collected in the context of the video. Providing customizable displays for monitoring live video and data as well as replaying recorded video and data helps end users build up a rich situational awareness. xGDS was designed to support remote field exploration with unreliable networks. Commercial digital recording systems operate under the assumption that there is a stable and reliable network between the source of the video and the recording system. In many field deployments and space exploration scenarios, this is not the case - there are both anticipated and unexpected network losses. xGDS' Video Module handles these interruptions, storing the available video, organizing and characterizing the dropouts, and presenting the video for streaming or replay to the end user including visualization of the dropouts. Scientific instruments often require custom or expensive software to analyze and visualize collected data. This limits the speed at which the data can be visualized and limits access to the data to those users with the software. xGDS' Instrument Module integrates with instruments that collect and broadcast data in a single snapshot or that continually collect and broadcast a stream of data. While seeing a visualization of collected instrument data is informative, showing the context for the collected data, other data collected nearby along with events indicating current status helps remote science teams build a better understanding of the environment. 
Further, sharing geo-located, tagged notes recorded by the scientists and others on the team spurs deeper analysis of the data.
Actively addressed single pixel full-colour plasmonic display
NASA Astrophysics Data System (ADS)
Franklin, Daniel; Frank, Russell; Wu, Shin-Tson; Chanda, Debashis
2017-05-01
Dynamic, colour-changing surfaces have many applications including displays, wearables and active camouflage. Plasmonic nanostructures can fill this role by having the advantages of ultra-small pixels, high reflectivity and post-fabrication tuning through control of the surrounding media. However, previous reports of post-fabrication tuning have yet to cover a full red-green-blue (RGB) colour basis set with a single nanostructure of singular dimensions. Here, we report a method which greatly advances this tuning and demonstrates a liquid crystal-plasmonic system that covers the full RGB colour basis set, only as a function of voltage. This is accomplished through a surface morphology-induced, polarization-dependent plasmonic resonance and a combination of bulk and surface liquid crystal effects that manifest at different voltages. We further demonstrate the system's compatibility with existing LCD technology by integrating it with a commercially available thin-film-transistor array. The imprinted surface interfaces readily with computers to display images as well as video.
NASA Astrophysics Data System (ADS)
McIntire, John; Geiselman, Eric; Heft, Eric; Havig, Paul
2011-06-01
Designers, researchers, and users of binocular stereoscopic head- or helmet-mounted displays (HMDs) face the tricky issue of what imagery to present in their particular displays, and how to do so effectively. Stereoscopic imagery must often be created in-house with a 3D graphics program or from within a 3D virtual environment, or stereoscopic photos/videos must be carefully captured, perhaps for relaying to an operator in a teleoperative system. In such situations, the question arises as to what camera separation (real or virtual) is appropriate or desirable for end-users and operators. We review some of the relevant literature regarding the question of stereo pair camera separation using desk-mounted or larger scale stereoscopic displays, and apply our findings to potential HMD applications, including command & control, teleoperation, information and scientific visualization, and entertainment.
Polnau, D G; Ma, P M
2001-12-01
Neuroethology seeks to uncover the neural mechanisms underlying natural behaviour. One of the major challenges in this field is the need to correlate directly neural activity and behavioural output. In most cases, recording of neural activity in freely moving animals is extremely difficult. However, electromyographic recording can often be used in lieu of neural recording to gain an understanding of the motor output program underlying a well-defined behaviour. Electromyographic recording is less invasive than most other recording methods, and does not impede the performance of most natural tasks. Using the opercular display of the Siamese fighting fish as a model, we developed a protocol for correlating directly electromyographic activity and kinematics of opercular movement: electromyographic activity was recorded in the audio channel of a video cassette recorder while video taping the display behaviour. By combining computer-assisted, quantitative video analysis and spike analysis, the kinematics of opercular movement are linked to the motor output program. Since the muscle that mediates opercular abduction in this fish, the dilator operculi, is a relatively small muscle with several subdivisions, we also describe methods for recording from small muscles and marking the precise recording site with electrolytic corrosion. The protocol described here is applicable to studies of a variety of natural behaviour that can be performed in a relatively confined space. It is also useful for analyzing complex or rapidly changing behaviour in which a precise correlation between kinematics and electromyography is required.
ERIC Educational Resources Information Center
Norling, Martina; Lillvist, Anne
2016-01-01
This study investigates language-promoting strategies and support of concept development displayed by preschool staffs' when interacting with preschool children in literacy-related play activities. The data analysed consisted of 39 minutes of video, selected systematically from a total of 11 hours of video material from six Swedish preschool…
Video-Out Projection and Lecture Hall Set-Up. Microcomputing Working Paper Series.
ERIC Educational Resources Information Center
Gibson, Chris
This paper details the considerations involved in determining suitable video projection systems for displaying the Apple Macintosh's screen to large groups of people, both in classrooms with approximately 25 people, and in lecture halls with approximately 250. To project the Mac screen to groups in lecture halls, the Electrohome EDP-57 video…
Float Package and the Data Rack aboard the DC-9
NASA Technical Reports Server (NTRS)
1996-01-01
Ted Brunzie and Peter Mason observe the float package and the data rack aboard the DC-9 reduced gravity aircraft. The float package contains a cryostat, a video camera, a pump and accelerometers. The data rack displays and records the video signal from the float package on tape and stores acceleration and temperature measurements on disk.
An Intuitive Graphical User Interface for Small UAS
2013-05-01
reduced from two to one. The stock displays, including video with text overlay on one and FalconView on the other, are replaced with a single, graphics...INTRODUCTION Tactical UAVs such as the Raven, Puma and Wasp are often used by dismounted warfighters on missions that require a high degree of mobility by...the operators on the ground. The current ground control stations (GCS) for the Wasp, Raven and Puma tactical UAVs require two people and two user
Bandera, Cesar
2016-05-25
The Office of Public Health Preparedness and Response (OPHPR) in the Centers for Disease Control and Prevention conducts outreach for public preparedness for natural and manmade incidents. In 2011, OPHPR conducted a nationwide mobile public health (m-Health) campaign that pushed brief videos on preparing for severe winter weather onto cell phones, with the objective of evaluating the interoperability of multimedia m-Health outreach with diverse cell phones (including handsets without Internet capability), carriers, and user preferences. Existing OPHPR outreach material on winter weather preparedness was converted into mobile-ready multimedia using mobile marketing best practices to improve audiovisual quality and relevance. Middleware complying with opt-in requirements was developed to push nine bi-weekly multimedia broadcasts onto subscribers' cell phones, and OPHPR promoted the campaign on its web site and to subscribers on its govdelivery.com notification platform. Multimedia, text, and voice messaging activity to/from the middleware was logged and analyzed. Adapting existing media into mobile video was straightforward using open source and commercial software, including web pages, PDF documents, and public service announcements. The middleware successfully delivered all outreach videos to all participants (a total of 504 videos) regardless of the participant's device. 54 % of videos were viewed on cell phones, 32 % on computers, and 14 % were retrieved by search engine web crawlers. 21 % of participating cell phones did not have Internet access, yet still received and displayed all videos. The time from media push to media viewing on cell phones was half that of push to viewing on computers. Video delivered through multimedia messaging can be as interoperable as text messages, while providing much richer information. This may be the only multimedia mechanism available to outreach campaigns targeting vulnerable populations impacted by the digital divide. 
Anti-spam laws preserve the integrity of mobile messaging, but complicate campaign promotion. Person-to-person messages may boost enrollment.
Study to Expand Simulation Cockpit Displays of Advanced Sensors
1981-03-01
common source is being used for multiple sensor types). If independent displays and controls are desired then two independent video sources or sensor...line is inserted in each gap, the result is the familiar 2:1 interlace. If two lines are inserted, the result is 3:1 interlace, and so on. The total...symbol generators. If these systems are operating at various scan rates and if a common display device, such as a multifunction display (MFD) is to
Video PATSEARCH: A Mixed-Media System.
ERIC Educational Resources Information Center
Schulman, Jacque-Lynne
1982-01-01
Describes a videodisc-based information display system in which a computer terminal is used to search the online PATSEARCH database from a remote host with local microcomputer control to select and display drawings from the retrieved records. System features and system components are discussed and criteria for system evaluation are presented.…
Software Aids Visualization Of Mars Pathfinder Mission
NASA Technical Reports Server (NTRS)
Weidner, Richard J.
1996-01-01
Report describes Simulator for Imager for Mars Pathfinder (SIMP) computer program. SIMP generates "virtual reality" display of view through video camera on Mars lander spacecraft of Mars Pathfinder mission, along with display of pertinent textual and graphical data, for use by scientific investigators in planning sequences of activities for mission.
An attentive multi-camera system
NASA Astrophysics Data System (ADS)
Napoletano, Paolo; Tisato, Francesco
2014-03-01
Intelligent multi-camera systems that integrate computer vision algorithms are not error free, and thus both false positive and negative detections need to be revised by a specialized human operator. Traditional multi-camera systems usually include a control center with a wall of monitors displaying videos from each camera of the network. Nevertheless, as the number of cameras increases, switching from one camera to another becomes hard for a human operator. In this work we propose a new method that dynamically selects and displays the content of a video camera from all the available contents in the multi-camera system. The proposed method is based on a computational model of human visual attention that integrates top-down and bottom-up cues. We believe that this is the first work that tries to use a model of human visual attention for the dynamic selection of the camera view of a multi-camera system. The proposed method has been evaluated in a given scenario and has demonstrated its effectiveness with respect to other methods and manually generated ground truth. The effectiveness has been evaluated in terms of the number of correct best views generated by the method with respect to the camera views manually generated by a human operator.
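The best-view selection the paper describes can be sketched in miniature. The scoring below is a deliberately crude stand-in (frame-to-frame change as a bottom-up saliency proxy; the paper's actual model combines top-down and bottom-up cues), shown only to make the selection loop concrete:

```python
import numpy as np

def best_view(prev_frames, curr_frames):
    """Pick the camera whose frame-to-frame change (a crude bottom-up
    saliency proxy) is highest; returns the camera index."""
    scores = [np.abs(c.astype(np.float64) - p.astype(np.float64)).mean()
              for p, c in zip(prev_frames, curr_frames)]
    return int(np.argmax(scores))

prev = [np.zeros((8, 8)) for _ in range(3)]
curr = [np.zeros((8, 8)) for _ in range(3)]
curr[1][2:5, 2:5] = 255.0                 # activity only on camera 1
print(best_view(prev, curr))              # → 1
```

A real system would replace the score with a full attention model and add hysteresis so the displayed view does not flicker between cameras.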
Learned saliency transformations for gaze guidance
NASA Astrophysics Data System (ADS)
Vig, Eleonora; Dorr, Michael; Barth, Erhardt
2011-03-01
The saliency of an image or video region indicates how likely it is that the viewer of the image or video fixates that region due to its conspicuity. An intriguing question is how we can change the video region to make it more or less salient. Here, we address this problem by using a machine learning framework to learn from a large set of eye movements collected on real-world dynamic scenes how to alter the saliency level of the video locally. We derive saliency transformation rules by performing spatio-temporal contrast manipulations (on a spatio-temporal Laplacian pyramid) on the particular video region. Our goal is to improve visual communication by designing gaze-contingent interactive displays that change, in real time, the saliency distribution of the scene.
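The contrast manipulations on a Laplacian pyramid can be sketched in their simplest spatial-only form. The one-level decomposition below is an illustrative reduction of the paper's spatio-temporal pyramid (the temporal dimension and the learned transformation rules are omitted); it shows how scaling a detail band raises or lowers local contrast in a region:

```python
import numpy as np

def blur(img):
    """Separable 3-tap binomial blur with edge replication."""
    k = np.array([0.25, 0.5, 0.25])
    p = np.pad(img, 1, mode='edge')
    h = k[0]*p[:, :-2] + k[1]*p[:, 1:-1] + k[2]*p[:, 2:]     # horizontal pass
    return k[0]*h[:-2] + k[1]*h[1:-1] + k[2]*h[2:]           # vertical pass

def scale_band(img, gain):
    """One-level Laplacian decomposition: scale the detail band by `gain`
    to raise (gain > 1) or lower (gain < 1) local contrast, then recombine."""
    low = blur(img)
    return low + gain * (img - low)

img = np.random.default_rng(0).uniform(0, 255, (32, 32))
damped = scale_band(img, 0.5)    # attenuated detail: the region becomes less salient
```

A gaze-guidance display would apply such gains spatially, boosting the band gain where fixations are to be attracted and damping it elsewhere, updated in real time as described in the abstract.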
TRECVID: the utility of a content-based video retrieval evaluation
NASA Astrophysics Data System (ADS)
Hauptmann, Alexander G.
2006-01-01
TRECVID, an annual retrieval evaluation benchmark organized by NIST, encourages research in information retrieval from digital video. TRECVID benchmarking covers both interactive and manual searching by end users, as well as the benchmarking of some supporting technologies including shot boundary detection, extraction of semantic features, and the automatic segmentation of TV news broadcasts. Evaluations done in the context of the TRECVID benchmarks show that generally, speech transcripts and annotations provide the single most important clue for successful retrieval. However, automatically finding the individual images is still a tremendous and unsolved challenge. The evaluations repeatedly found that none of the multimedia analysis and retrieval techniques provide a significant benefit over retrieval using only textual information such as from automatic speech recognition transcripts or closed captions. In interactive systems, we do find significant differences among the top systems, indicating that interfaces can make a huge difference for effective video/image search. For interactive tasks efficient interfaces require few key clicks, but display large numbers of images for visual inspection by the user. The text search finds the right context region in the video in general, but to select specific relevant images we need good interfaces to easily browse the storyboard pictures. In general, TRECVID has motivated the video retrieval community to be honest about what we don't know how to do well (sometimes through painful failures), and has focused us to work on the actual task of video retrieval, as opposed to flashy demos based on technological capabilities.
Video stereo-laparoscopy system
NASA Astrophysics Data System (ADS)
Xiang, Yang; Hu, Jiasheng; Jiang, Huilin
2006-01-01
Minimally invasive surgery (MIS) has contributed significantly to patient care by reducing the morbidity associated with more invasive procedures. MIS procedures have become standard treatment for gallbladder disease and some abdominal malignancies. The imaging system has played a major role in the evolving field of MIS. The image must have good resolution and large magnification and, in particular, must provide a depth cue while remaining flicker free and at a suitable brightness. The video stereo-laparoscopy system can meet these demands. This paper introduces a 3D video laparoscope with the following characteristics: field frequency 100 Hz, depth range 150 mm, resolution 10 lp/mm. The working principle of the system is presented in detail, and the optical system and time-division stereo-display system are described briefly. The system uses a focusing imaging lens to form an image on the CCD chip; the optical signal is converted into a video signal and, through the A/D conversion of the image processing system, into a digital signal, which is then displayed as polarized images on the monitor screen through liquid crystal shutters. Wearing polarized glasses, surgeons can view a flicker-free 3D image of the tissue or organ. The 3D video laparoscope system has been applied in the MIS field and praised by surgeons. Compared with the traditional 2D video laparoscopy system, it offers advantages such as reduced surgery time, fewer surgical complications, and shorter training time.
Integrated Launch Operations Applications Remote Display Developer
NASA Technical Reports Server (NTRS)
Flemming, Cedric M., II
2014-01-01
This internship provides the opportunity to support the creation and use of Firing Room Displays and Firing Room Applications that use an abstraction layer called the Application Control Language (ACL). Required training included video watching, reading assignments, face-to-face instruction and job shadowing other Firing Room software developers as they completed their daily duties. During the training period, the various computer accounts and access rights needed for creating the applications were obtained. The specific ground subsystems supported are the Cryogenics Subsystems, Liquid Hydrogen (LH2) and Liquid Oxygen (LO2). The cryogenics team is tasked with finding the best way to safely handle these highly volatile liquids, which are used to fuel the Space Launch System (SLS) and Orion flight vehicles.
Generating Stereoscopic Television Images With One Camera
NASA Technical Reports Server (NTRS)
Coan, Paul P.
1996-01-01
Straightforward technique for generating stereoscopic television images involves use of single television camera translated laterally between left- and right-eye positions. Camera acquires one of images (left- or right-eye image), and video signal from image delayed while camera translated to position where it acquires other image. Length of delay chosen so both images displayed simultaneously or as nearly simultaneously as necessary to obtain stereoscopic effect. Technique amenable to zooming in on small areas within broad scenes. Potential applications include three-dimensional viewing of geological features and meteorological events from spacecraft and aircraft, inspection of workpieces moving along conveyor belts, and aiding ground and water search-and-rescue operations. Also used to generate and display imagery for public education and general information, and possibly for medical purposes.
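The delay-line pairing at the heart of the technique can be sketched as a frame buffer: each current frame is displayed together with the frame captured one translation period earlier, i.e. from the other eye position. This is an illustrative model under assumed names and parameters, not the NTRS implementation:

```python
from collections import deque

def stereo_pairs(frames, fps, translation_time_s):
    """Pair each incoming frame with the frame captured
    `translation_time_s` earlier (hypothetical single-camera rig that
    shuttles laterally between eye positions between captures)."""
    delay = max(1, round(fps * translation_time_s))  # delay in frames
    buf = deque(maxlen=delay)
    for frame in frames:
        if len(buf) == delay:
            # Oldest buffered frame = other-eye image for `frame`.
            yield buf[0], frame
        buf.append(frame)
```

For example, at 30 frames/s and a 2-frame translation time, frames 0..4 yield the stereo pairs (0, 2), (1, 3), (2, 4); the shorter the translation time, the smaller the temporal mismatch between the two eye views.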
Janosik, Elzbieta; Grzesik, Jan
2003-01-01
The aim of this work was to evaluate the influence of different lighting levels at workstations with video display terminals (VDTs) on the course of the operators' visual work, and to determine the optimal levels of lighting at VDT workstations. For two kinds of job (entry of figures from a typescript and editing of text displayed on the screen), the work capacity, the degree of visual strain and the operators' subjective symptoms were determined at four lighting levels (200, 300, 500 and 750 lx). It was found that work at VDT workstations may overload the visual system and cause eye complaints as well as a reduction in accommodation or convergence strength. It was also noted that editing text displayed on the screen is more burdensome for operators than entering figures from a typescript. Moreover, the examination results showed that the lighting at VDT workstations should be higher than 200 lx, and that 300 lx makes the work conditions most comfortable during the entry of figures from a typescript, and 500 lx during the editing of text displayed on the screen.
A teleconference with three-dimensional surgical video presentation on the 'usual' Internet.
Obuchi, Toshiro; Moroga, Toshihiko; Nakamura, Hiroshige; Shima, Hiroji; Iwasaki, Akinori
2015-03-01
Endoscopic surgery employing three-dimensional (3D) video images, such as a robotic surgery, has recently become common. However, the number of opportunities to watch such actual 3D videos is still limited due to many technical difficulties associated with showing 3D videos in front of an audience. A teleconference with 3D video presentations of robotic surgeries was held between our institution and a distant institution using a commercially available telecommunication appliance on the 'usual' Internet. Although purpose-built video displays and 3D glasses were necessary, no technical problems occurred during the presentation and discussion. This high-definition 3D telecommunication system can be applied to discussions about and education on 3D endoscopic surgeries for many surgeons, even in distant places, without difficulty over the usual Internet connection.
Simple video format for mobile applications
NASA Astrophysics Data System (ADS)
Smith, John R.; Miao, Zhourong; Li, Chung-Sheng
2000-04-01
With the advent of pervasive computing, there is a growing demand for enabling multimedia applications on mobile devices. Large numbers of pervasive computing devices, such as personal digital assistants (PDAs), hand-held computers (HHCs), smart phones, portable audio players, automotive computing devices, and wearable computers are gaining access to online information sources. However, pervasive computing devices are often constrained along a number of dimensions, such as processing power, local storage, display size and depth, connectivity, and communication bandwidth, which makes it difficult to access rich image and video content. In this paper, we report on our initial efforts in designing a simple scalable video format with low decoding and transcoding complexity for pervasive computing. The goal is to enable image and video access for mobile applications such as electronic catalog shopping, video conferencing, remote surveillance and video mail using pervasive computing devices.
NASA Astrophysics Data System (ADS)
Schlam, E.
1983-01-01
Human factors in visible displays are discussed, taking into account an introduction to color vision, a laser optometric assessment of visual display viewability, the quantification of color contrast, human performance evaluations of digital image quality, visual problems of office video display terminals, and contemporary problems in airborne displays. Other topics considered are related to electroluminescent technology, liquid crystal and related technologies, plasma technology, and display terminals and systems. Attention is given to the application of electroluminescent technology to personal computers, electroluminescent driving techniques, thin film electroluminescent devices with memory, the fabrication of very large electroluminescent displays, the operating properties of thermally addressed dye-switching liquid crystal displays, light-field dichroic liquid crystal displays for very large area displays, and hardening military plasma displays for a nuclear environment.
Backscatter absorption gas imaging system
McRae, Jr., Thomas G.
1985-01-01
A video imaging system for detecting hazardous gas leaks. Visual displays of invisible gas clouds are produced by radiation augmentation of the field of view of an imaging device by radiation corresponding to an absorption line of the gas to be detected. The field of view of an imager is irradiated by a laser. The imager receives both backscattered laser light and background radiation. When a detectable gas is present, the backscattered laser light is highly attenuated, producing a region of contrast or shadow on the image. A flying spot imaging system is utilized to synchronously irradiate and scan the area to lower laser power requirements. The imager signal is processed to produce a video display.
DC-8 Scanning Lidar Characterization of Aircraft Contrails and Cirrus Clouds
NASA Technical Reports Server (NTRS)
Uthe, Edward E.; Nielsen, Norman B.; Oseberg, Terje E.
1998-01-01
An angular-scanning large-aperture (36 cm) backscatter lidar was developed and deployed on the NASA DC-8 research aircraft as part of the SUCCESS (Subsonic Aircraft: Contrail and Cloud Effects Special Study) program. The lidar viewing direction could be scanned continuously during aircraft flight from vertically upward to forward to vertically downward, or the viewing could be at fixed angles. Real-time pictorial displays generated from the lidar signatures were broadcast on the DC-8 video network and used to locate clouds and contrails above, ahead of, and below the DC-8 to depict their spatial structure and to help select DC-8 altitudes for achieving optimum sampling by onboard in situ sensors. Several lidar receiver systems and real-time data displays were evaluated to help extend in situ data into vertical dimensions and to help establish possible lidar configurations and applications on future missions. Digital lidar signatures were recorded on 8 mm Exabyte tape and generated real-time displays were recorded on 8mm video tape. The digital records were transcribed in a common format to compact disks to facilitate data analysis and delivery to SUCCESS participants. Data selected from the real-time display video recordings were processed for publication-quality displays incorporating several standard lidar data corrections. Data examples are presented that illustrate: (1) correlation with particulate, gas, and radiometric measurements made by onboard sensors, (2) discrimination and identification between contrails observed by onboard sensors, (3) high-altitude (13 km) scattering layer that exhibits greatly enhanced vertical backscatter relative to off-vertical backscatter, and (4) mapping of vertical distributions of individual precipitating ice crystals and their capture by cloud layers. An angular scan plotting program was developed that accounts for DC-8 pitch and velocity.
Emotional Processing of Infants Displays in Eating Disorders
Cardi, Valentina; Corfield, Freya; Leppanen, Jenni; Rhind, Charlotte; Deriziotis, Stephanie; Hadjimichalis, Alexandra; Hibbs, Rebecca; Micali, Nadia; Treasure, Janet
2014-01-01
Aim: The aim of this study is to examine emotional processing of infant displays in people with Eating Disorders (EDs). Background: Social and emotional factors are implicated as causal and maintaining factors in EDs. Difficulties in emotional regulation have been mainly studied in relation to adult interactions, with less interest given to interactions with infants. Method: A sample of 138 women were recruited, of which 49 suffered from Anorexia Nervosa (AN), 16 from Bulimia Nervosa (BN), and 73 were healthy controls (HCs). Attentional responses to happy and sad infant faces were tested with the visual probe detection task. Emotional identification of, and reactivity to, infant displays were measured using self-report measures. Facial expressions to video clips depicting sad, happy and frustrated infants were also recorded. Results: No significant differences between groups were observed in the attentional response to infant photographs. However, there was a trend for patients to disengage from happy faces. People with EDs also reported lower positive ratings of happy infant displays and greater subjective negative reactions to sad infants. Finally, patients showed a significantly lower production of facial expressions, especially in response to the happy infant video clip. Insecure attachment was negatively correlated with positive facial expressions displayed in response to the happy infant and positively correlated with the intensity of negative emotions experienced in response to the sad infant video clip. Conclusion: People with EDs do not have marked abnormalities in their attentional processing of infant emotional faces. However, they do have a reduction in facial affect, particularly in response to happy infants. Also, they report greater negative reactions to sadness, and rate positive emotions less intensively than HCs. This pattern of emotional responsivity suggests abnormalities in social reward sensitivity and might indicate new treatment targets.
PMID:25463051
ERIC Educational Resources Information Center
Carrein, Cindy; Bernaud, Jean-Luc
2010-01-01
This study investigated the effects of nonverbal self-disclosure within the dynamic of aptitude-treatment interaction. Participants (N = 94) watched a video of a career counseling session aimed at helping the jobseeker to find employment. The video was then edited to display 3 varying degrees of nonverbal self-disclosure. In conjunction with the…
VID-R and SCAN: Tools and Methods for the Automated Analysis of Visual Records.
ERIC Educational Resources Information Center
Ekman, Paul; And Others
The VID-R (Visual Information Display and Retrieval) system that enables computer-aided analysis of visual records is composed of a film-to-television chain, two videotape recorders with complete remote control of functions, a video-disc recorder, three high-resolution television monitors, a teletype, a PDP-8, a video and audio interface, three…
Glass Vision 3D: Digital Discovery for the Deaf
ERIC Educational Resources Information Center
Parton, Becky Sue
2017-01-01
Glass Vision 3D was a grant-funded project focused on developing and researching a Google Glass app that would allow young Deaf children to look at the QR code of an object in the classroom and see an augmented reality projection that displays a related American Sign Language (ASL) video. Twenty-five objects and videos were prepared and tested…
Otto, Kristen J; Hapner, Edie R; Baker, Michael; Johns, Michael M
2006-02-01
Advances in commercial video technology have improved office-based laryngeal imaging. This study investigates the perceived image quality of a true high-definition (HD) video camera and the effect of magnification on laryngeal videostroboscopy. We performed a prospective, dual-armed, single-blinded analysis of a standard laryngeal videostroboscopic examination comparing 3 separate add-on camera systems: a 1-chip charge-coupled device (CCD) camera, a 3-chip CCD camera, and a true 720p (progressive scan) HD camera. Displayed images were controlled for magnification and image size (20-inch [50-cm] display, red-green-blue, and S-video cable for 1-chip and 3-chip cameras; digital visual interface cable and HD monitor for HD camera). Ten blinded observers were then asked to rate the following 5 items on a 0-to-100 visual analog scale: resolution, color, ability to see vocal fold vibration, sense of depth perception, and clarity of blood vessels. Eight unblinded observers were then asked to rate the difference in perceived resolution and clarity of laryngeal examination images when displayed on a 10-inch (25-cm) monitor versus a 42-inch (105-cm) monitor. A visual analog scale was used. These monitors were controlled for actual resolution capacity. For each item evaluated, randomized block design analysis demonstrated that the 3-chip camera scored significantly better than the 1-chip camera (p < .05). For the categories of color and blood vessel discrimination, the 3-chip camera scored significantly better than the HD camera (p < .05). For magnification alone, observers rated the 42-inch monitor statistically better than the 10-inch monitor. The expense of new medical technology must be judged against its added value. This study suggests that HD laryngeal imaging may not add significant value over currently available video systems, in perceived image quality, when a small monitor is used. 
Although differences in clarity between standard and HD cameras may not be readily apparent on small displays, a large display size coupled with HD technology may impart improved diagnosis of subtle vocal fold lesions and vibratory anomalies.
The Video Display Terminal Health Hazard Debate.
ERIC Educational Resources Information Center
Clark, Carolyn A.
A study was conducted to identify the potential health hazards of visual display terminals for employees and then to develop a list of recommendations for improving the physical conditions of the workplace. Data were collected by questionnaires from 55 employees in 10 word processing departments in Topeka, Kansas. A majority of the employees…
Perceived Intensity of Emotional Point-Light Displays Is Reduced in Subjects with ASD
ERIC Educational Resources Information Center
Krüger, Britta; Kaletsch, Morten; Pilgramm, Sebastian; Schwippert, Sven-Sören; Hennig, Jürgen; Stark, Rudolf; Lis, Stefanie; Gallhofer, Bernd; Sammer, Gebhard; Zentgraf, Karen; Munzert, Jörn
2018-01-01
One major characteristic of autism spectrum disorder (ASD) is problems with social interaction and communication. The present study explored ASD-related alterations in perceiving emotions expressed via body movements. 16 participants with ASD and 16 healthy controls observed video scenes of human interactions conveyed by point-light displays. They…
A new display stream compression standard under development in VESA
NASA Astrophysics Data System (ADS)
Jacobson, Natan; Thirumalai, Vijayaraghavan; Joshi, Rajan; Goel, James
2017-09-01
The Advanced Display Stream Compression (ADSC) codec project is in development in response to a call for technologies from the Video Electronics Standards Association (VESA). This codec targets visually lossless compression of display streams at a high compression rate (typically 6 bits/pixel) for mobile/VR/HDR applications. Functionality of the ADSC codec is described in this paper, and subjective trials results are provided using the ISO 29170-2 testing protocol.
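The "high compression rate (typically 6 bits/pixel)" translates into simple bandwidth arithmetic for a display link; the figures below are illustrative examples, not numbers from the paper:

```python
def stream_gbps(width, height, fps, bpp):
    """Raw display-stream rate in Gbit/s for a given mode and bit depth."""
    return width * height * fps * bpp / 1e9

# A 4K60 RGB stream at 24 bits/pixel vs. visually lossless 6 bits/pixel
# coding (the 4:1 regime a display-stream codec like ADSC targets):
raw = stream_gbps(3840, 2160, 60, 24)    # ~11.94 Gbit/s uncompressed
coded = stream_gbps(3840, 2160, 60, 6)   # ~2.99 Gbit/s after coding
ratio = raw / coded                      # 4:1
```

The point of the arithmetic: halving the link width or lane count of a mobile/VR display interface hinges entirely on sustaining that fixed bits/pixel budget with no visible loss, which is what the ISO 29170-2 subjective protocol is designed to verify.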
Master/Programmable-Slave Computer
NASA Technical Reports Server (NTRS)
Smaistrla, David; Hall, William A.
1990-01-01
Unique modular computer features compactness, low power, mass storage of data, multiprocessing, and choice of various input/output modes. Master processor communicates with user via usual keyboard and video display terminal. Coordinates operations of as many as 24 slave processors, each dedicated to different experiment. Each slave circuit card includes slave microprocessor and assortment of input/output circuits for communication with external equipment, with master processor, and with other slave processors. Adaptable to industrial process control with selectable degrees of automatic control, automatic and/or manual monitoring, and manual intervention.
NASA Astrophysics Data System (ADS)
Culp, Robert D.; Bickley, George
Papers from the sixteenth annual American Astronautical Society Rocky Mountain Guidance and Control Conference are presented. The topics covered include the following: advances in guidance, navigation, and control; control system videos; guidance, navigation and control embedded flight control systems; recent experiences; guidance and control storyboard displays; and applications of modern control, featuring the Hubble Space Telescope (HST) performance enhancement study. For individual titles, see A95-80390 through A95-80436.
Sensing And Force-Reflecting Exoskeleton
NASA Technical Reports Server (NTRS)
Eberman, Brian; Fontana, Richard; Marcus, Beth
1993-01-01
Sensing and force-reflecting exoskeleton (SAFiRE) provides control signals to robot hand and force feedback from robot hand to human operator. Operator makes robot hand touch objects gently and manipulates them finely without exerting excessive forces. Device attaches to operator's hand; comfortable and lightweight. Includes finger exoskeleton, cable mechanical transmission, two dc servomotors, partial thumb exoskeleton, harness, amplifier box, two computer circuit boards, and software. Transduces motion of index finger and thumb. Video monitor of associated computer displays image corresponding to motion.
Benoit, Justin L; Vogele, Jennifer; Hart, Kimberly W; Lindsell, Christopher J; McMullan, Jason T
2017-06-01
Bystander compression-only cardiopulmonary resuscitation (CPR) improves survival after out-of-hospital cardiac arrest. To broaden CPR training, 1-2 min ultra-brief videos have been disseminated via the Internet and television. Our objective was to determine whether participants passively exposed to a televised ultra-brief video perform CPR better than unexposed controls. This before-and-after study was conducted with non-patients in an urban Emergency Department waiting room. The intervention was an ultra-brief CPR training video displayed via closed-circuit television 3-6 times/hour. Participants were unaware of the study and not told to watch the video. Pre-intervention, no video was displayed. Participants were asked to demonstrate compression-only CPR on a manikin. Performance was scored based on critical actions: check for responsiveness, call for help, begin compressions immediately, and correct hand placement, compression rate and depth. The primary outcome was the proportion of participants who performed all actions correctly. There were 50 control and 50 exposed participants. Mean age was 37, 51% were African-American, 52% were female, and 10% self-reported current CPR certification. There were no statistically significant differences in baseline characteristics between groups. The number of participants who performed all actions correctly was 0 (0%) control vs. 10 (20%) exposed (difference 20%, 95% confidence interval [CI] 8.9-31.1%, p<0.001). Correct compression rate and depth were 11 (22%) control vs. 22 (44%) exposed (22%, 95% CI 4.1-39.9%, p=0.019), and 5 (10%) control vs. 15 (30%) exposed (20%, 95% CI 4.8-35.2%, p=0.012), respectively. Passive ultra-brief video training is associated with improved performance of compression-only CPR. Copyright © 2017 Elsevier B.V. All rights reserved.
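The reported interval for the primary outcome can be reproduced with a standard Wald confidence interval for the difference of two independent proportions (assuming that is the method the authors used; the helper below is an illustrative sketch):

```python
from math import sqrt

def diff_proportions_ci(x1, n1, x2, n2, z=1.96):
    """Wald 95% CI for the difference of two independent proportions
    (z = 1.96 for 95% coverage)."""
    p1, p2 = x1 / n1, x2 / n2
    d = p1 - p2
    se = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return d, d - z * se, d + z * se

# Primary outcome: 10/50 exposed vs. 0/50 control performed all actions
# correctly -> difference 0.20, CI roughly (0.089, 0.311), matching the
# reported 20% (95% CI 8.9-31.1%).
d, lo, hi = diff_proportions_ci(10, 50, 0, 50)
```

The same helper recovers the other two intervals in the abstract (22/50 vs. 11/50 and 15/50 vs. 5/50) to within rounding.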
The relative importance of different perceptual-cognitive skills during anticipation.
North, Jamie S; Hope, Ed; Williams, A Mark
2016-10-01
We examined whether anticipation is underpinned by perceiving structured patterns or postural cues and whether the relative importance of these processes varied as a function of task constraints. Skilled and less-skilled soccer players completed anticipation paradigms in video-film and point light display (PLD) format. Skilled players anticipated more accurately regardless of display condition, indicating that both perception of structured patterns between players and postural cues contribute to anticipation. However, the Skill×Display interaction showed skilled players' advantage was enhanced in the video-film condition, suggesting that they make better use of postural cues when available during anticipation. We also examined anticipation as a function of proximity to the ball. When participants were near the ball, anticipation was more accurate for video-film than PLD clips, whereas when the ball was far away there was no difference between viewing conditions. Perceiving advance postural cues appears more important than structured patterns when the ball is closer to the observer, whereas the reverse is true when the ball is far away. Various perceptual-cognitive skills contribute to anticipation with the relative importance of perceiving structured patterns and advance postural cues being determined by task constraints and the availability of perceptual information. Copyright © 2016 Elsevier B.V. All rights reserved.
An integrated port camera and display system for laparoscopy.
Terry, Benjamin S; Ruppert, Austin D; Steinhaus, Kristen R; Schoen, Jonathan A; Rentschler, Mark E
2010-05-01
In this paper, we built and tested the port camera, a novel, inexpensive, portable, and battery-powered laparoscopic tool that integrates the components of a vision system with a cannula port. This new device 1) minimizes the invasiveness of laparoscopic surgery by combining a camera port and tool port; 2) reduces the cost of laparoscopic vision systems by integrating an inexpensive CMOS sensor and LED light source; and 3) enhances laparoscopic surgical procedures by mechanically coupling the camera, tool port, and liquid crystal display (LCD) screen to provide an on-patient visual display. The port camera video system was compared to two laparoscopic video systems: a standard resolution unit from Karl Storz (model 22220130) and a high definition unit from Stryker (model 1188HD). Brightness, contrast, hue, colorfulness, and sharpness were compared. The port camera video is superior to the Storz scope and approximately equivalent to the Stryker scope. An ex vivo study was conducted to measure the operative performance of the port camera. The results suggest that simulated tissue identification and biopsy acquisition with the port camera is as efficient as with a traditional laparoscopic system. The port camera was successfully used by a laparoscopic surgeon for exploratory surgery and liver biopsy during a porcine surgery, demonstrating initial surgical feasibility.
NASA Astrophysics Data System (ADS)
Mantel, Claire; Korhonen, Jari; Pedersen, Jesper M.; Bech, Søren; Andersen, Jakob Dahl; Forchhammer, Søren
2015-01-01
This paper focuses on the influence of ambient light on the perceived quality of videos displayed on Liquid Crystal Display (LCD) with local backlight dimming. A subjective test assessing the quality of videos with two backlight dimming methods and three lighting conditions, i.e. no light, low light level (5 lux) and higher light level (60 lux) was organized to collect subjective data. Results show that participants prefer the method exploiting local dimming possibilities to the conventional full backlight but that this preference varies depending on the ambient light level. The clear preference for one method at the low light conditions decreases at the high ambient light, confirming that the ambient light significantly attenuates the perception of the leakage defect (light leaking through dark pixels). Results are also highly dependent on the content of the sequence, which can modulate the effect of the ambient light from having an important influence on the quality grades to no influence at all.
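A minimal local-dimming policy of the kind the paper evaluates can be sketched as follows. This is an illustrative percentile-per-zone rule under assumed zone layout and parameters, not one of the two methods actually compared in the study:

```python
import numpy as np

def zone_backlight(luma, zones=(4, 4), q=95):
    """Set each backlight zone to the q-th percentile of the 8-bit luma
    in that zone, normalized to [0, 1]. Dark zones get a dim backlight
    (saving power and reducing leakage through dark pixels); bright
    zones stay fully lit."""
    H, W = luma.shape
    zh, zw = H // zones[0], W // zones[1]
    bl = np.empty(zones)
    for i in range(zones[0]):
        for j in range(zones[1]):
            block = luma[i * zh:(i + 1) * zh, j * zw:(j + 1) * zw]
            bl[i, j] = np.percentile(block, q)
    return bl / 255.0
```

The leakage defect discussed in the paper arises exactly where such a policy keeps a zone bright while neighbouring pixels are dark; the subjective result that ambient light masks this defect is what makes the dimming method preference depend on viewing conditions.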
Lord, D.E.; Carter, G.W.; Petrini, R.R.
1983-08-02
A video flowmeter is described that is capable of specifying flow nature and pattern and, at the same time, the quantitative value of the rate of volumetric flow. An image of a determinable volumetric region within a fluid containing entrained particles is formed and positioned by a rod optic lens assembly on the raster area of a low-light level television camera. The particles are illuminated by light transmitted through a bundle of glass fibers surrounding the rod optic lens assembly. Only particle images having speeds on the raster area below the raster line scanning speed may be used to form a video picture which is displayed on a video screen. The flowmeter is calibrated so that the locus of positions of origin of the video picture gives a determination of the volumetric flow rate of the fluid. 4 figs.
Şaşmaz, M I; Akça, A H
2017-06-01
In this study, the reliability of trauma management scenario videos (in English) on YouTube and their compliance with Advanced Trauma Life Support (ATLS®) guidelines were investigated. The search was conducted on February 15, 2016 using the terms "assessment of trauma" and "management of trauma". All videos that were uploaded between January 2011 and June 2016 were viewed by two experienced emergency physicians. The data regarding the date of upload, the type of uploader, the duration of the video and view counts were recorded. The videos were categorized according to the video source and scores. The search results yielded 880 videos. Eight hundred and thirteen videos were excluded by the researchers. The distribution of videos by years was found to be balanced. The scores of videos uploaded by an institution were determined to be higher than those of the other groups (p = 0.003). The findings of this study show that the majority of trauma management videos on YouTube are not reliable or compliant with ATLS guidelines and can therefore not be recommended for educational purposes. These data may only be used in public education after making the necessary arrangements.
Image enhancement software for underwater recovery operations: User's manual
NASA Astrophysics Data System (ADS)
Partridge, William J.; Therrien, Charles W.
1989-06-01
This report describes software for performing image enhancement on live or recorded video images. The software was developed for operational use during underwater recovery operations at the Naval Undersea Warfare Engineering Station. The image processing is performed on an IBM-PC/AT compatible computer equipped with hardware to digitize and display video images. The software provides the capability to perform contrast enhancement and other similar functions in real time through hardware lookup tables, to automatically perform histogram equalization, and to capture one or more frames and average them or apply one of several processing algorithms to a captured frame. The report is in the form of a user manual for the software and includes guided tutorial and reference sections. A Digital Image Processing Primer in the appendix explains the principal concepts used in the image processing.
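The histogram equalization the software automates is conventionally implemented as a 256-entry lookup table, the same mechanism the report applies in hardware for real-time contrast enhancement. The sketch below is illustrative, not the report's code:

```python
import numpy as np

def equalize_lut(img):
    """Histogram-equalize an 8-bit grayscale frame via a 256-entry LUT.

    The LUT maps each input intensity to a rescaled position in the
    cumulative histogram, spreading the occupied gray levels across the
    full 0-255 range."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]          # first occupied gray level
    if cdf[-1] == cdf_min:             # flat image: nothing to equalize
        return img.copy()
    lut = np.clip(np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255),
                  0, 255).astype(np.uint8)
    return lut[img]                    # one table lookup per pixel
```

Because the per-frame work reduces to building one 256-entry table and indexing it, the same operation maps naturally onto the hardware lookup tables the report describes, which is what makes real-time operation on mid-1980s PC hardware feasible.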
Stereo Imaging Miniature Endoscope with Single Imaging Chip and Conjugated Multi-Bandpass Filters
NASA Technical Reports Server (NTRS)
Shahinian, Hrayr Karnig (Inventor); Bae, Youngsam (Inventor); White, Victor E. (Inventor); Shcheglov, Kirill V. (Inventor); Manohara, Harish M. (Inventor); Kowalczyk, Robert S. (Inventor)
2018-01-01
A dual objective endoscope for insertion into a cavity of a body for providing a stereoscopic image of a region of interest inside of the body, including an imaging device at the distal end for obtaining optical images of the region of interest (ROI) and processing the optical images to form video signals for wired and/or wireless transmission and display of 3D images on a rendering device. The imaging device includes a focal plane detector array (FPA) for obtaining the optical images of the ROI, and processing circuits behind the FPA. The processing circuits convert the optical images into the video signals. The imaging device includes right and left pupils for receiving right and left images through right and left conjugated multi-bandpass filters. Illuminators illuminate the ROI through a multi-bandpass filter having three right and three left pass bands that are matched to the right and left conjugated multi-bandpass filters. A full color image is collected after three or six sequential illuminations with red, green, and blue light.
Playing a first-person shooter video game induces neuroplastic change.
Wu, Sijing; Cheng, Cho Kin; Feng, Jing; D'Angelo, Lisa; Alain, Claude; Spence, Ian
2012-06-01
Playing a first-person shooter (FPS) video game alters the neural processes that support spatial selective attention. Our experiment establishes a causal relationship between playing an FPS game and neuroplastic change. Twenty-five participants completed an attentional visual field task while we measured ERPs before and after playing an FPS video game for a cumulative total of 10 hr. Early visual ERPs sensitive to bottom-up attentional processes were little affected by video game playing for only 10 hr. However, participants who played the FPS video game and also showed the greatest improvement on the attentional visual field task displayed increased amplitudes in the later visual ERPs. These potentials are thought to index top-down enhancement of spatial selective attention via increased inhibition of distractors. Individual variations in learning were observed, and these differences show that not all video game players benefit equally, either behaviorally or in terms of neural change.
Xu, Jing; Wong, Kevin; Jian, Yifan; Sarunic, Marinko V
2014-02-01
In this report, we describe a graphics processing unit (GPU)-accelerated processing platform for real-time acquisition and display of flow contrast images with Fourier domain optical coherence tomography (FDOCT) in mouse and human eyes in vivo. Motion contrast from blood flow is processed using the speckle variance OCT (svOCT) technique, which relies on the acquisition of multiple B-scan frames at the same location and tracking the change of the speckle pattern. Real-time mouse and human retinal imaging using two different custom-built OCT systems with processing and display performed on GPU are presented with an in-depth analysis of performance metrics. The display output included structural OCT data, en face projections of the intensity data, and the svOCT en face projections of retinal microvasculature; these results compare projections with and without speckle variance in the different retinal layers to reveal significant contrast improvements. As a demonstration, videos of real-time svOCT for in vivo human and mouse retinal imaging are included in our results. The capability of performing real-time svOCT imaging of the retinal vasculature may be a useful tool in a clinical environment for monitoring disease-related pathological changes in the microcirculation such as diabetic retinopathy.
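The core svOCT computation described here reduces to a per-pixel variance across repeated B-scans acquired at the same location. The sketch below is a minimal CPU illustration of that principle, not the authors' GPU implementation.

```python
import numpy as np

def speckle_variance(bscans: np.ndarray) -> np.ndarray:
    """Compute a speckle-variance (svOCT) frame from N repeated B-scans
    at the same location; bscans has shape (N, depth, width). Static
    tissue gives low variance; flowing blood decorrelates the speckle
    pattern and gives high variance."""
    return bscans.var(axis=0)

def en_face_projection(sv_stack: np.ndarray) -> np.ndarray:
    """Project a stack of sv frames (n_slices, depth, width) along depth
    to form the en face microvasculature view."""
    return sv_stack.max(axis=1)
```

On a GPU, the same reduction is performed per pixel over the frame buffer, which is what makes real-time display feasible.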
Method and apparatus for calibrating a display using an array of cameras
NASA Technical Reports Server (NTRS)
Johnson, Michael J. (Inventor); Chen, Chung-Jen (Inventor); Chandrasekhar, Rajesh (Inventor)
2001-01-01
The present invention overcomes many of the disadvantages of the prior art by providing a display that can be calibrated and re-calibrated with a minimal amount of manual intervention. To accomplish this, the present invention provides one or more cameras to capture an image that is projected on a display screen. In one embodiment, the one or more cameras are placed on the same side of the screen as the projectors. In another embodiment, an array of cameras is provided on either or both sides of the screen for capturing a number of adjacent and/or overlapping capture images of the screen. In either of these embodiments, the resulting capture images are processed to identify any non-desirable characteristics including any visible artifacts such as seams, bands, rings, etc. Once the non-desirable characteristics are identified, an appropriate transformation function is determined. The transformation function is used to pre-warp the input video signal to the display such that the non-desirable characteristics are reduced or eliminated from the display. The transformation function preferably compensates for spatial non-uniformity, color non-uniformity, luminance non-uniformity, and/or other visible artifacts.
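The pre-warp step the patent describes can be illustrated for the luminance case: measure the screen's response with a flat-field capture, then attenuate the input so the displayed result appears uniform. The function names and the purely multiplicative model below are simplifying assumptions for illustration.

```python
import numpy as np

def luminance_gain_map(captured_flat: np.ndarray) -> np.ndarray:
    """From a camera capture of a uniform white test frame, derive a
    per-pixel gain that pre-compensates luminance non-uniformity.
    The dimmest region sets the target so all gains stay <= 1."""
    target = captured_flat.min()
    return target / captured_flat

def prewarp(frame: np.ndarray, gain: np.ndarray) -> np.ndarray:
    """Attenuate the input video frame so the displayed result,
    after the screen's non-uniform response, appears uniform."""
    return frame * gain
```

Modeling the display response as multiplication by the captured flat field, a pre-warped uniform input comes out uniform on screen.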
Organ donation video messaging in motor vehicle offices: results of a randomized trial.
Rodrigue, James R; Fleishman, Aaron; Fitzpatrick, Sean; Boger, Matthew
2015-12-01
Since nearly all registered organ donors in the United States signed up via a driver's license transaction, motor vehicle (MV) offices represent an important venue for organ donation education. To evaluate the impact of organ donation video messaging in MV offices. A 2-group (usual care vs usual care+video messaging) randomized trial with baseline, intervention, and follow-up assessment phases. Twenty-eight MV offices in Massachusetts. Usual care comprised education of MV clerks, display of organ donation print materials (ie, posters, brochures, signing mats), and a volunteer ambassador program. The intervention included video messaging with silent (subtitled) segments highlighting individuals affected by donation, playing on a recursive loop on monitors in MV waiting rooms. Aggregate monthly donor designation rates at MV offices (primary) and percentage of MV customers who registered as donors after viewing the video (secondary). Controlling for baseline donor designation rate, analysis of covariance showed a significant group effect for intervention phase (F=7.3, P=.01). The usual-care group had a significantly higher aggregate monthly donor designation rate than the intervention group had. In the logistic regression model of customer surveys (n=912), prior donor designation (β=-1.29, odds ratio [OR]=0.27 [95% CI=0.20-0.37], P<.001), white race (β=0.57, OR=1.77 [95% CI=1.23-2.54], P=.002), and viewing the intervention video (β=0.73, OR=1.54 [95% CI=1.24-2.60], P=.01) were statistically significant predictors of donor registration on the day of the survey. The relatively low uptake of the video intervention by customers most likely contributed to the negative trial finding.
ERIC Educational Resources Information Center
Shriver, Edgar L.; And Others
This volume reports an effort to use the video medium as an approach for the preparation of a battery of symbolic tests that would be empirically valid substitutes for criterion-referenced Job Task Performance Tests. The graphic symbolic tests require the storage of a large amount of pictorial information which must be searched rapidly for display.…
Fakhruddin, Kausar Sadia; El Batawi, Hisham; Gorduysus, Mehmet Omer
2015-01-01
The aim of this study was to assess the effectiveness of an audiovisual distraction technique with video eyewear and a computerized delivery system-intrasulcular (CDS-IS) during the application of local anesthetic in phobic pediatric patients undergoing pulp therapy of primary molars. This randomized, crossover clinical study included 60 children aged 4 to 7 years (31 boys and 29 girls). Children were randomly and equally distributed into two groups, A and B. The study involved two pulp therapy treatment sessions, 1 week apart. During treatment session I, group A had audiovisual distraction with video eyewear, whereas group B had audiovisual distraction using a projector display only, without video eyewear. During treatment session II, group A underwent pulp therapy without video eyewear distraction, whereas group B had the pulp treatment using video eyewear distraction. Each session involved pulp therapy of equivalent teeth on opposite sides of the mouth. At each visit, scores on the Modified Child Dental Anxiety Scale (MCDAS) (f) were used to evaluate the level of anxiety before treatment. After the procedure, children were instructed to rate their pain during treatment on the Wong-Baker faces pain scale. Changes in pulse oximetry and heart rate were recorded every 10 min. From preoperative treatment session I (with video eyewear) to preoperative treatment session II (without video eyewear), a significant (P > 0.03) change in the mean MCDAS (f) anxiety score was observed for group A. Self-reported mean pain scores decreased markedly after treatment sessions with video eyewear for both groups.
The use of audiovisual distraction with video eyewear and of the CDS-IS system for anesthetic delivery was demonstrated to be more effective than routine psychological interventions in improving children's cooperation and is therefore highly recommended as a behavior management technique for long, invasive pulp therapy procedures in young children.
Distributed Coding/Decoding Complexity in Video Sensor Networks
Cordeiro, Paulo J.; Assunção, Pedro
2012-01-01
Video Sensor Networks (VSNs) are recent communication infrastructures used to capture and transmit dense visual information from an application context. In such large scale environments which include video coding, transmission and display/storage, there are several open problems to overcome in practical implementations. This paper addresses the most relevant challenges posed by VSNs, namely stringent bandwidth usage and processing time/power constraints. In particular, the paper proposes a novel VSN architecture where large sets of visual sensors with embedded processors are used for compression and transmission of coded streams to gateways, which in turn transrate the incoming streams and adapt them to the variable complexity requirements of both the sensor encoders and end-user decoder terminals. Such gateways provide real-time transcoding functionalities for bandwidth adaptation and coding/decoding complexity distribution by transferring the most complex video encoding/decoding tasks to the transcoding gateway at the expense of a limited increase in bit rate. Then, a method to reduce the decoding complexity, suitable for system-on-chip implementation, is proposed to operate at the transcoding gateway whenever decoders with constrained resources are targeted. The results show that the proposed method achieves good performance and its inclusion into the VSN infrastructure provides an additional level of complexity control functionality. PMID:22736972
OPSO - The OpenGL based Field Acquisition and Telescope Guiding System
NASA Astrophysics Data System (ADS)
Škoda, P.; Fuchs, J.; Honsa, J.
2006-07-01
We present OPSO, a modular pointing and auto-guiding system for the coudé spectrograph of the Ondřejov observatory 2m telescope. The current field and slit viewing CCD cameras with image intensifiers give only standard TV video output. To allow the acquisition and guiding of very faint targets, we have designed an image enhancing system working in real time on TV frames grabbed by a BT878-based video capture card. Its basic capabilities include sliding averaging of hundreds of frames with bad-pixel masking and removal of outliers, display of the median of a set of frames, quick zooming, contrast and brightness adjustment, and plotting of horizontal and vertical cross-cuts of the seeing disk within a given intensity range, among others. From the programmer's point of view, the system consists of three tasks running in parallel on a Linux PC. One C task controls the video capturing over the Video for Linux (v4l2) interface and feeds the frames into a large block of shared memory, where the core image processing is done by another C program calling the OpenGL library. The GUI, however, is dynamically built in Python from an XML description of widgets prepared in Glade. All tasks exchange information by IPC calls using shared memory segments.
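The frame-averaging stage with bad-pixel masking and outlier removal can be sketched as follows. The MAD-based rejection rule and its parameters are illustrative assumptions, not OPSO's actual algorithm.

```python
import numpy as np

def robust_frame_average(frames, bad_pixel_mask, k=5.0):
    """Average a stack of grabbed TV frames (N, H, W), ignoring known
    bad pixels and rejecting per-pixel outliers far from the per-pixel
    median (e.g., transient spikes), via the median absolute deviation."""
    frames = np.asarray(frames, dtype=float)
    med = np.median(frames, axis=0)
    mad = np.median(np.abs(frames - med), axis=0)
    thresh = k * 1.4826 * mad + 1e-6   # robust sigma estimate, with a floor
    outliers = np.abs(frames - med) > thresh
    invalid = outliers | bad_pixel_mask[None, :, :]
    valid = np.ma.masked_array(frames, mask=invalid)
    # Pixels masked in every frame fall back to 0.
    return valid.mean(axis=0).filled(0.0)
```

A spike in a single frame is rejected before averaging, while a pixel flagged bad in the mask contributes nothing at all.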
A new technique for presentation of scientific works: video in poster.
Bozdag, Ali Dogan
2008-07-01
Presentations at scientific congresses and symposiums can take two different forms: poster or oral presentation. Each method has advantages and disadvantages. To combine the advantages of oral and poster presentations, a new presentation type was conceived: "video in poster." The top of a portable digital video disc (DVD) player is opened 180 degrees to keep the screen and the body of the DVD player in the same plane. The poster is attached to the DVD player and a window is cut in the poster to expose the screen of the DVD player, so the screen appears as a picture on the poster. Then this video in poster is fixed to the panel. When the DVD player is turned on, the video presentation of the surgical procedure starts. Several posters were presented at different medical congresses in 2007 using the "video in poster" technique, and they received poster awards. The video in poster combines the advantages of both oral and poster presentations.
NASA Astrophysics Data System (ADS)
Khosla, Deepak; Huber, David J.; Bhattacharyya, Rajan
2017-05-01
In this paper, we describe an algorithm and system for optimizing search and detection performance for "items of interest" (IOI) in large images and videos that employ the Rapid Serial Visual Presentation (RSVP) based EEG paradigm and surprise algorithms that incorporate motion processing to determine whether static or video RSVP is used. The system works by first computing a motion surprise map on image sub-regions (chips) of incoming sensor video data and then using those surprise maps to label the chips as either "static" or "moving". This information tells the system whether to use a static or video RSVP presentation and decoding algorithm in order to optimize EEG-based detection of IOI in each chip. Using this method, we are able to demonstrate classification of a series of image regions from video with an Az value (area under the ROC curve) of 1, indicating perfect classification, over a range of display frequencies and video speeds.
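The static/moving chip labeling can be illustrated with a simple frame-difference stand-in for the motion surprise map. The chip size and threshold below are assumed tuning parameters, not values from the paper.

```python
import numpy as np

def label_chips(prev_frame, frame, chip=8, thresh=10.0):
    """Split a frame into square chips and label each 'moving' or
    'static' by mean absolute frame difference, a simplified stand-in
    for the motion surprise map that routes chips to video or static
    RSVP presentation."""
    diff = np.abs(frame.astype(float) - prev_frame.astype(float))
    h, w = diff.shape
    labels = {}
    for y in range(0, h, chip):
        for x in range(0, w, chip):
            score = diff[y:y + chip, x:x + chip].mean()
            labels[(y, x)] = "moving" if score > thresh else "static"
    return labels
```

Chips labeled "moving" would then be presented with the video RSVP decoding algorithm, the rest with the static one.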
Secure Display of Space-Exploration Images
NASA Technical Reports Server (NTRS)
Cheng, Cecilia; Thornhill, Gillian; McAuley, Michael
2006-01-01
Java EDR Display Interface (JEDI) is software for either local display or secure Internet distribution, to authorized clients, of image data acquired from cameras aboard spacecraft engaged in exploration of remote planets. (EDR signifies experimental data record, which, in effect, signifies image data.) Processed at NASA's Multimission Image Processing Laboratory (MIPL), the data can be from either near-real-time processing streams or stored files. JEDI uses the Java Advanced Imaging application program interface, plus input/output packages that are parts of the Video Image Communication and Retrieval software of the MIPL, to display images. JEDI can be run as either a standalone application program or within a Web browser as a servlet with an applet front end. In either operating mode, JEDI communicates using the HTTP(s) protocol(s). In the Web-browser case, the user must provide a password to gain access. For each user and/or image data type, there is a configuration file, called a "personality file," containing parameters that control the layout of the displays and the information to be included in them. Once JEDI has accepted the user's password, it processes the requested EDR (provided that user is authorized to receive the specific EDR) to create a display according to the user's personality file.
Bar-Chart-Monitor System For Wind Tunnels
NASA Technical Reports Server (NTRS)
Jung, Oscar
1993-01-01
Real-time monitor system provides bar-chart displays of significant operating parameters developed for National Full-Scale Aerodynamic Complex at Ames Research Center. Designed to gather and process sensory data on operating conditions of wind tunnels and models, and displays data for test engineers and technicians concerned with safety and validation of operating conditions. Bar-chart video monitor displays data in as many as 50 channels at maximum update rate of 2 Hz in format facilitating quick interpretation.
ERIC Educational Resources Information Center
Kroski, Ellyssa
2008-01-01
A widget displays Web content from external sources and can be embedded into a blog, social network, or other Web page, or downloaded to one's desktop. With widgets--sometimes referred to as gadgets--one can insert video into a blog post, display slideshows on MySpace, get the weather delivered to his mobile device, drag-and-drop his Netflix queue…
Video enhancement of X-ray and neutron radiographs
NASA Technical Reports Server (NTRS)
Vary, A.
1973-01-01
System was devised for displaying radiographs on television screen and enhancing fine detail in picture. System uses analog-computer circuits to process television signal from low-noise television camera. Enhanced images are displayed in black and white and can be controlled to vary degree of enhancement and magnification of details in either radiographic transparencies or opaque photographs.
Halloran, M C; Kalil, K
1994-04-01
During development, axons of the mammalian corpus callosum must navigate across the midline to establish connections with corresponding targets in the contralateral cerebral cortex. To gain insight into how growth cones of callosal axons respond to putative guidance cues along this CNS pathway, we have used time-lapse video microscopy to observe dynamic behaviors of individual callosal growth cones extending in living brain slices from neonatal hamster sensorimotor cortex. Crystals of the lipophilic dye 1,1'-dioctadecyl-3,3,3',3'-tetramethylindocarbocyanine perchlorate (DiI) were inserted into the cortex in vivo to label small populations of callosal axons and their growth cones. Subsequently, 400-μm brain slices that included the injection site, the corpus callosum, and the target cortex were placed in culture and viewed under low-light-level conditions with a silicon-intensified target (SIT) camera. Time-lapse video observations revealed striking differences in growth cone behaviors in different regions of the callosal pathway. In the tract, which is defined as the region of the callosal pathway from the injection site to the corresponding target cortex, growth cones advanced rapidly, displaying continual lamellipodial shape changes and filopodial exploration. Forward advance was sometimes interrupted by brief pauses or retraction. Growth cones in the target cortex had almost uniform compact shapes that were consistently smaller than those in the tract. In cortex, axons adhered to straight radial trajectories and their growth cones extended at only half the speed of those in the tract. Growth cones in subtarget regions of the callosum beneath cortical targets displayed complex behaviors characterized by long pauses, extension of transitory branches, and repeated cycles of collapse, withdrawal, and resurgence.
Video observations suggested that extension of axons into cortical targets could occur by interstitial branching from callosal axons rather than by turning behaviors of the primary growth cones. These results suggest the existence of guidance cues distinct for each of these callosal regions that elicit characteristic growth cone behaviors.
CARMA: Software for continuous affect rating and media annotation
Girard, Jeffrey M
2017-01-01
CARMA is a media annotation program that collects continuous ratings while displaying audio and video files. It is designed to be highly user-friendly and easily customizable. Based on Gottman and Levenson's affect rating dial, CARMA enables researchers and study participants to provide moment-by-moment ratings of multimedia files using a computer mouse or keyboard. The rating scale can be configured on a number of parameters including the labels for its upper and lower bounds, its numerical range, and its visual representation. Annotations can be displayed alongside the multimedia file and saved for easy import into statistical analysis software. CARMA provides a tool for researchers in affective computing, human-computer interaction, and the social sciences who need to capture the unfolding of subjective experience and observable behavior over time. PMID:29308198
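A moment-by-moment rating collector in the spirit of CARMA can be sketched as a fixed-rate polling loop; `read_rating` and the polling design below are illustrative assumptions, not CARMA's actual implementation.

```python
import time

def collect_ratings(read_rating, duration_s=5.0, hz=10.0):
    """Sample a rating source at a fixed rate, producing the
    moment-by-moment (time, rating) series that continuous affect
    annotation needs. read_rating is any callable returning the
    current dial/keyboard rating value."""
    samples = []
    period = 1.0 / hz
    start = time.monotonic()
    while (t := time.monotonic() - start) < duration_s:
        samples.append((round(t, 3), read_rating()))
        time.sleep(period)
    return samples
```

The resulting series of timestamped ratings can be written out for import into statistical software, as the abstract describes.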
Mobile visual communications and displays
NASA Astrophysics Data System (ADS)
Valliath, George T.
2004-09-01
The different types of mobile visual communication modes and the types of displays needed in cellular handsets are explored. The well-known 2-way video conferencing is only one of the possible modes. Some modes are already supported on current handsets while others await the arrival of advanced network capabilities. Displays for devices that support these visual communication modes need to deliver the required visual experience. Over the last 20 years the display has grown in size while the rest of the handset has shrunk. However, the display is still not large enough: processor performance and network capabilities continue to outstrip the display's ability, making the display a bottleneck. This paper explores potential solutions for presenting a large image on a small handset.
Real-Time Detection and Reading of LED/LCD Displays for Visually Impaired Persons
Tekin, Ender; Coughlan, James M.; Shen, Huiying
2011-01-01
Modern household appliances, such as microwave ovens and DVD players, increasingly require users to read an LED or LCD display to operate them, posing a severe obstacle for persons with blindness or visual impairment. While OCR-enabled devices are emerging to address the related problem of reading text in printed documents, they are not designed to tackle the challenge of finding and reading characters in appliance displays. Any system for reading these characters must address the challenge of first locating the characters among substantial amounts of background clutter; moreover, poor contrast and the abundance of specular highlights on the display surface – which degrade the image in an unpredictable way as the camera is moved – motivate the need for a system that processes images at a few frames per second, rather than forcing the user to take several photos, each of which can take seconds to acquire and process, until one is readable. We describe a novel system that acquires video, detects and reads LED/LCD characters in real time, reading them aloud to the user with synthesized speech. The system has been implemented on both a desktop and a cell phone. Experimental results are reported on videos of display images, demonstrating the feasibility of the system. PMID:21804957
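Once the segment states of a detected digit have been extracted from the video frames, decoding reduces to a table lookup over the standard seven-segment encodings. The a-to-g segment ordering below is a convention assumed for illustration; this is not the paper's actual recognizer.

```python
# Standard seven-segment encodings, segments ordered (a, b, c, d, e, f, g):
# a = top, b = top-right, c = bottom-right, d = bottom,
# e = bottom-left, f = top-left, g = middle.
SEGMENT_TABLE = {
    (1, 1, 1, 1, 1, 1, 0): "0",
    (0, 1, 1, 0, 0, 0, 0): "1",
    (1, 1, 0, 1, 1, 0, 1): "2",
    (1, 1, 1, 1, 0, 0, 1): "3",
    (0, 1, 1, 0, 0, 1, 1): "4",
    (1, 0, 1, 1, 0, 1, 1): "5",
    (1, 0, 1, 1, 1, 1, 1): "6",
    (1, 1, 1, 0, 0, 0, 0): "7",
    (1, 1, 1, 1, 1, 1, 1): "8",
    (1, 1, 1, 1, 0, 1, 1): "9",
}

def decode_digit(segments) -> str:
    """Map a tuple of lit-segment flags to a character; '?' if the
    pattern is not a valid digit (e.g., due to glare or low contrast),
    in which case the frame can simply be skipped and the next tried."""
    return SEGMENT_TABLE.get(tuple(segments), "?")
```

The "skip and retry on the next frame" behavior is exactly what processing at a few frames per second buys over single-photo OCR.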
Impact of packet losses in scalable 3D holoscopic video coding
NASA Astrophysics Data System (ADS)
Conti, Caroline; Nunes, Paulo; Ducla Soares, Luís.
2014-05-01
Holoscopic imaging became a prospective glassless 3D technology to provide more natural 3D viewing experiences to the end user. Additionally, holoscopic systems also allow new post-production degrees of freedom, such as controlling the plane of focus or the viewing angle presented to the user. However, to successfully introduce this technology into the consumer market, a display scalable coding approach is essential to achieve backward compatibility with legacy 2D and 3D displays. Moreover, to effectively transmit 3D holoscopic content over error-prone networks, e.g., wireless networks or the Internet, error resilience techniques are required to mitigate the impact of data impairments in the user quality perception. Therefore, it is essential to deeply understand the impact of packet losses in terms of decoding video quality for the specific case of 3D holoscopic content, notably when a scalable approach is used. In this context, this paper studies the impact of packet losses when using a three-layer display scalable 3D holoscopic video coding architecture previously proposed, where each layer represents a different level of display scalability (i.e., L0 - 2D, L1 - stereo or multiview, and L2 - full 3D holoscopic). For this, a simple error concealment algorithm is used, which makes use of inter-layer redundancy between multiview and 3D holoscopic content and the inherent correlation of the 3D holoscopic content to estimate lost data. Furthermore, a study of the influence of 2D views generation parameters used in lower layers on the performance of the used error concealment algorithm is also presented.
Smith, M J
1997-10-01
Psychosocial aspects of using video display terminals (VDTs) have been recognized as contributors to employees' mental and physical health problems for more than 15 years. Yet, little has been done by employers to change work organization conditions to improve the psychosocial work environment of VDT users. Thus, psychosocial aspects of work are emerging as one of the biggest problems for VDT users in the late 1990s. This paper explores how psychosocial aspects of VDT work are related to job stress, and their consequences for mental and physical health. Using the research literature, it defines various aspects of work organization and job design that have been shown to be related to VDT users' ill-health. Some of the important work design aspects uncovered include a lack of employee skill use, monotonous tasks, high job demands and work pressure, a lack of control over the job, poor supervisory relations, fear of job loss, and unreliable technology. These are the same job stressors that have been defined as problematic for a variety of blue collar jobs in previous research. Work organization improvements for healthier VDT jobs are proposed. These include organizational support, employee participation, improved task content, increased job control, reasonable production standards, career development, enhanced peer socialization, and improved workstation ergonomics. These organizational improvements are derived from a more detailed organizational strategy for job stress reduction. A model of job redesign through proper 'balancing' of work organization features is discussed.
Millisecond accuracy video display using OpenGL under Linux.
Stewart, Neil
2006-02-01
To measure people's reaction times to the nearest millisecond, it is necessary to know exactly when a stimulus is displayed. This article describes how to display stimuli with millisecond accuracy on a normal CRT monitor, using a PC running Linux. A simple C program is presented to illustrate how this may be done within X Windows using the OpenGL rendering system. A test of this system is reported that demonstrates that stimuli may be consistently displayed with millisecond accuracy. An algorithm is presented that allows the exact time of stimulus presentation to be deduced, even if there are relatively large errors in measuring the display time.
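The deduction of exact stimulus onset can be illustrated arithmetically: since presentation is quantized to the monitor's vertical retraces, the onset is the first retrace boundary at or after the buffer swap. The function below is a sketch of that reasoning with assumed names; it is not the article's C listing.

```python
import math

def deduce_onset(swap_time_ms, refresh_period_ms, first_retrace_ms=0.0):
    """Deduce when a stimulus actually appeared on a CRT: snap the
    measured buffer-swap time forward to the next retrace boundary,
    since the frame cannot be shown between retraces."""
    n = math.ceil((swap_time_ms - first_retrace_ms) / refresh_period_ms)
    return first_retrace_ms + n * refresh_period_ms
```

With a 100 Hz refresh (10 ms period), a swap measured at 23 ms yields an onset at the 30 ms retrace, even if the swap call returned earlier.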
Implementation of a Landscape Lighting System to Display Images
NASA Astrophysics Data System (ADS)
Sun, Gi-Ju; Cho, Sung-Jae; Kim, Chang-Beom; Moon, Cheol-Hong
The system implemented in this study consists of a PC, MASTER, SLAVEs and MODULEs. The PC sets the various landscape lighting displays, and the image files can be sent to the MASTER through a virtual serial port connected to the USB (Universal Serial Bus). The MASTER sends a sync signal to the SLAVE. The SLAVE uses the signal received from the MASTER and the landscape lighting display pattern. The video file is saved in the NAND Flash memory and the R, G, B signals are separated using the self-made display signal and sent to the MODULE so that it can display the image.
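The R, G, B separation step can be sketched as plane extraction from an interleaved image frame; the byte layout sent to the MODULEs is an assumption for illustration.

```python
import numpy as np

def split_rgb_planes(frame: np.ndarray):
    """Separate an (H, W, 3) interleaved frame into R, G, B byte
    planes for transmission to the lighting MODULEs."""
    r, g, b = frame[..., 0], frame[..., 1], frame[..., 2]
    return r.tobytes(), g.tobytes(), b.tobytes()
```

Each plane can then be streamed to the corresponding MODULE channel, with the MASTER's sync signal keeping the planes aligned frame by frame.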
Lord, David E.; Carter, Gary W.; Petrini, Richard R.
1983-01-01
A video flowmeter is described that is capable of specifying flow nature and pattern and, at the same time, the quantitative value of the rate of volumetric flow. An image of a determinable volumetric region within a fluid (10) containing entrained particles (12) is formed and positioned by a rod optic lens assembly (31) on the raster area of a low-light level television camera (20). The particles (12) are illuminated by light transmitted through a bundle of glass fibers (32) surrounding the rod optic lens assembly (31). Only particle images having speeds on the raster area below the raster line scanning speed may be used to form a video picture which is displayed on a video screen (40). The flowmeter is calibrated so that the locus of positions of origin of the video picture gives a determination of the volumetric flow rate of the fluid (10).
Age-related changes in perception of movement in driving scenes.
Lacherez, Philippe; Turner, Laura; Lester, Robert; Burns, Zoe; Wood, Joanne M
2014-07-01
Age-related changes in motion sensitivity have been found to relate to reductions in various indices of driving performance and safety. The aim of this study was to investigate the basis of this relationship in terms of determining which aspects of motion perception are most relevant to driving. Participants included 61 regular drivers (age range 22-87 years). Visual performance was measured binocularly. Measures included visual acuity, contrast sensitivity and motion sensitivity assessed using four different approaches: (1) threshold minimum drift rate for a drifting Gabor patch, (2) Dmin from a random dot display, (3) threshold coherence from a random dot display, and (4) threshold drift rate for a second-order (contrast modulated) sinusoidal grating. Participants then completed the Hazard Perception Test (HPT) in which they were required to identify moving hazards in videos of real driving scenes, and also a Direction of Heading task (DOH) in which they identified deviations from normal lane keeping in brief videos of driving filmed from the interior of a vehicle. In bivariate correlation analyses, all motion sensitivity measures significantly declined with age. Motion coherence thresholds, and minimum drift rate threshold for the first-order stimulus (Gabor patch) both significantly predicted HPT performance even after controlling for age, visual acuity and contrast sensitivity. Bootstrap mediation analysis showed that individual differences in DOH accuracy partly explained these relationships, where those individuals with poorer motion sensitivity on the coherence and Gabor tests showed decreased ability to perceive deviations in motion in the driving videos, which related in turn to their ability to detect the moving hazards. 
The ability to detect subtle movements in the driving environment (as determined by the DOH task) may be an important contributor to effective hazard perception, and is associated with age and with an individual's performance on tests of motion sensitivity. The locus of the processing deficits appears to lie in first-order, rather than second-order, motion pathways. © 2014 The Authors Ophthalmic & Physiological Optics © 2014 The College of Optometrists.
Power-Constrained Fuzzy Logic Control of Video Streaming over a Wireless Interconnect
NASA Astrophysics Data System (ADS)
Razavi, Rouzbeh; Fleury, Martin; Ghanbari, Mohammed
2008-12-01
Wireless communication of video, with Bluetooth as an example, represents a compromise between channel conditions, display and decode deadlines, and energy constraints. This paper proposes fuzzy logic control (FLC) of automatic repeat request (ARQ) as a way of reconciling these factors, with a 40% saving in power in the worst channel conditions from economizing on transmissions when channel errors occur. Whatever the channel conditions are, FLC is shown to outperform the default Bluetooth scheme and an alternative Bluetooth-adaptive ARQ scheme in terms of reduced packet loss and delay, as well as improved video quality.
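The paper's controller design is not reproduced in the abstract, but the general shape of fuzzy ARQ control can be sketched: fuzzify channel quality and decode-deadline slack, fire a small rule base, and defuzzify to a retransmission cap. The membership shapes, rule consequents, and the `arq_limit` interface below are illustrative assumptions, not the authors' controller.

```python
def tri(x, a, b, c):
    """Triangular membership function with peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def arq_limit(channel_quality, deadline_slack):
    """Map channel quality and decode-deadline slack (both in [0, 1])
    to a retransmission cap via a four-rule fuzzy controller."""
    poor = tri(channel_quality, -1.0, 0.0, 1.0)
    good = tri(channel_quality, 0.0, 1.0, 2.0)
    tight = tri(deadline_slack, -1.0, 0.0, 1.0)
    loose = tri(deadline_slack, 0.0, 1.0, 2.0)
    # Rule base: antecedent strength (min) -> crisp retransmission cap.
    rules = [
        (min(poor, tight), 0),  # bad channel, no slack: give up, save power
        (min(poor, loose), 2),
        (min(good, tight), 3),
        (min(good, loose), 5),  # good channel, plenty of slack: retry freely
    ]
    total = sum(strength for strength, _ in rules)
    # Weighted-average defuzzification.
    return sum(s * cap for s, cap in rules) / total if total else 0.0
```

Capping retransmissions when the channel is poor and the display deadline is near is exactly where the power saving comes from: doomed or late packets are simply not re-sent.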
NASA Astrophysics Data System (ADS)
Kachejian, Kerry C.; Vujcic, Doug
1999-07-01
The Tactical Visualization Module (TVM) research effort will develop and demonstrate a portable, tactical information system to enhance the situational awareness of individual warfighters and small military units by providing real-time access to manned and unmanned aircraft, tactically mobile robots, and unattended sensors. TVM consists of a family of portable and hand-held devices being advanced into a next-generation, embedded capability. It enables warfighters to visualize the tactical situation by providing real-time video, imagery, maps, floor plans, and 'fly-through' video on demand. When combined with unattended ground sensors, such as Combat-Q, TVM permits warfighters to validate and verify tactical targets. The use of TVM results in faster target engagement times, increased survivability, and reduction of the potential for fratricide. TVM technology can support both mounted and dismounted tactical forces involved in land, sea, and air warfighting operations. As a PCMCIA card, TVM can be embedded in portable, hand-held, and wearable PCs. Thus, it leverages emerging tactical displays including flat-panel, head-mounted displays. The end result of the program will be the demonstration of the system with U.S. Army and USMC personnel in an operational environment. Raytheon Systems Company, the U.S. Army Soldier Systems Command -- Natick RDE Center (SSCOM-NRDEC) and the Defense Advanced Research Projects Agency (DARPA) are partners in developing and demonstrating the TVM technology.
A method for the real-time construction of a full parallax light field
NASA Astrophysics Data System (ADS)
Tanaka, Kenji; Aoki, Soko
2006-02-01
We designed and implemented a light field acquisition and reproduction system for dynamic objects called LiveDimension, which serves as a 3D live video system for multiple viewers. The acquisition unit consists of circularly arranged NTSC cameras surrounding an object. The display consists of circularly arranged projectors and a rotating screen. The projectors are constantly projecting images captured by the corresponding cameras onto the screen. The screen rotates around an in-plane vertical axis at a sufficient speed so that it faces each of the projectors in sequence. Since the Lambertian surfaces of the screens are covered by light-collimating plastic films with vertical louver patterns that are used for the selection of appropriate light rays, viewers can only observe images from a projector located in the same direction as the viewer. Thus, the dynamic view of an object is dependent on the viewer's head position. We evaluated the system by projecting both objects and human figures and confirmed that the entire system can reproduce light fields with a horizontal parallax to display video sequences of 430x770 pixels at a frame rate of 45 fps. Applications of this system include product design reviews, sales promotion, art exhibits, fashion shows, and sports training with form checking.
Design of large format commercial display holograms
NASA Astrophysics Data System (ADS)
Perry, John F. W.
1989-05-01
Commercial display holography is approaching a critical stage where the ability to compete with other graphic media will dictate its future. Factors involved will be cost, technical quality and, in particular, design. The tenuous commercial success of display holography has relied heavily on its appeal to an audience with little or no previous experience in the medium. Well designed images were scarce, leading many commercial designers to avoid holography. As the public became more accustomed to holograms, the excitement dissipated, leaving a need for strong visual design if the medium is to survive in this marketplace. Drawing on the vast experience of TV, rock music and magazine advertising, competitive techniques such as video walls, mural duratrans, laser light shows and interactive videos attract a professional support structure far greater than does holography. This paper will address design principles developed at Holographics North for large format commercial holography. Examples will be drawn from a number of foreign and domestic corporate trade exhibitions. Recommendations will also be made on how to develop greater awareness of a holographic design.
Actively addressed single pixel full-colour plasmonic display
Franklin, Daniel; Frank, Russell; Wu, Shin-Tson; Chanda, Debashis
2017-01-01
Dynamic, colour-changing surfaces have many applications including displays, wearables and active camouflage. Plasmonic nanostructures can fill this role by having the advantages of ultra-small pixels, high reflectivity and post-fabrication tuning through control of the surrounding media. However, previous reports of post-fabrication tuning have yet to cover a full red-green-blue (RGB) colour basis set with a single nanostructure of singular dimensions. Here, we report a method which greatly advances this tuning and demonstrates a liquid crystal-plasmonic system that covers the full RGB colour basis set, only as a function of voltage. This is accomplished through a surface morphology-induced, polarization-dependent plasmonic resonance and a combination of bulk and surface liquid crystal effects that manifest at different voltages. We further demonstrate the system's compatibility with existing LCD technology by integrating it with a commercially available thin-film-transistor array. The imprinted surface interfaces readily with computers to display images as well as video. PMID:28488671
An Automatic Portable Telecine Camera.
1978-08-01
five television frames to achieve synchronous operation, that is about 0.2 second. 6.3 Video recorder noise immunity The synchronisation pulse separator...display is filmed by a modified 16 mm cine camera driven by a control unit in which the camera supply voltage is derived from the field synchronisation ...pulses of the video signal. Automatic synchronisation of the camera mechanism is achieved over a wide range of television field frequencies and the
Fusion Helmet: Electronic Analysis
2014-04-01
Table 1 lists the LYR203-101B board features, including the DM648 DSP with GPIO, two video ports, bootmode SPI/UART, I2C, CLKIN, MDIO, 128 MB/16-bit DDR2 memory, SPI flash, McASP, EMAC-SGMII, JTAG, clock generation, power-good/PORn logic, and a video display interface.
On Target: Organizing and Executing the Strategic Air Campaign Against Iraq
2002-01-01
possession, use, sale, creation or display of any pornographic photograph, videotape, movie, drawing, book, or magazine or similar representations. This...forward-looking infrared (FLIR) sensor to create daylight-quality video images of terrain and utilized terrain-following radar to enable the aircraft to...The Black Hole Planners had pleaded with CENTAF Intel to provide them with photos of targets, provide additional personnel to analyze PGM video
Depth assisted compression of full parallax light fields
NASA Astrophysics Data System (ADS)
Graziosi, Danillo B.; Alpaslan, Zahir Y.; El-Ghoroury, Hussein S.
2015-03-01
Full parallax light field displays require high pixel density and huge amounts of data. Compression is a necessary tool used by 3D display systems to cope with the high bandwidth requirements. One of the formats adopted by MPEG for 3D video coding standards is the use of multiple views with associated depth maps. Depth maps enable the coding of a reduced number of views, and are used by compression and synthesis software to reconstruct the light field. However, most of the developed coding and synthesis tools target linearly arranged cameras with small baselines. Here we propose to use the 3D video coding format for full parallax light field coding. We introduce a view selection method inspired by plenoptic sampling, followed by transform-based view coding and view synthesis prediction to code residual views. We determine the minimal requirements for view sub-sampling and present the rate-distortion performance of our proposal. We also compare our method with established video compression techniques, such as H.264/AVC, H.264/MVC, and the new 3D video coding algorithm, 3DV-ATM. Our results show that our method not only improves rate-distortion performance but also better preserves the structure of the perceived light fields.
Discontinuity minimization for omnidirectional video projections
NASA Astrophysics Data System (ADS)
Alshina, Elena; Zakharchenko, Vladyslav
2017-09-01
Advances in display technologies, both for head-mounted devices and television panels, demand a resolution increase beyond 4K for the source signal in virtual reality video streaming applications. This poses a problem of content delivery through bandwidth-limited distribution networks. Considering the fact that the source signal covers the entire surrounding space, investigation revealed that compression efficiency may fluctuate by 40% on average depending on the origin selected at the conversion stage from 3D space to a 2D projection. Based on this knowledge, an origin selection algorithm for video compression applications has been proposed. Using a discontinuity entropy minimization function, the projection origin rotation may be chosen to provide optimal compression results. The outcome of this research may be applied across various video compression solutions for omnidirectional content.
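The origin-selection idea, rotating the projection origin so that the 2D unwrap's discontinuities fall in smooth image regions, can be illustrated for an equirectangular layout: score the left/right wrap seam for each candidate yaw shift and keep the smoothest. This is a deliberate simplification (the paper uses a discontinuity entropy function); all names below are assumptions.

```python
import numpy as np

def seam_discontinuity(frame, yaw_shift):
    """Mean absolute luma jump across the left/right wrap seam after
    rotating the equirectangular frame by yaw_shift columns."""
    rolled = np.roll(frame.astype(float), yaw_shift, axis=1)
    return float(np.mean(np.abs(rolled[:, 0] - rolled[:, -1])))

def best_origin(frame, candidate_shifts):
    """Pick the yaw origin whose unwrap seam crosses the smoothest region."""
    return min(candidate_shifts, key=lambda s: seam_discontinuity(frame, s))
```

A seam running through flat sky instead of a textured facade costs the encoder far less, which is where the reported compression-efficiency spread comes from.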
Peña, Raul; Ávila, Alfonso; Muñoz, David; Lavariega, Juan
2015-01-01
The recognition of clinical manifestations in both video images and physiological-signal waveforms is an important aid to improve the safety and effectiveness of medical care. Physicians can rely on video-waveform (VW) observations to recognize difficult-to-spot signs and symptoms. VW observations can also reduce the number of false positive incidents and expand the recognition coverage to abnormal health conditions. The synchronization between the video images and the physiological-signal waveforms is fundamental to the successful recognition of clinical manifestations. The use of conventional equipment to synchronously acquire and display video-waveform information involves complex tasks such as video capture/compression, acquisition/compression of each physiological signal, and video-waveform synchronization based on timestamps. This paper introduces a data hiding technique capable of both enabling embedding channels and synchronously hiding samples of physiological signals into encoded video sequences. Our data hiding technique offers large data capacity and simplifies the complexity of video-waveform acquisition and reproduction. The experimental results revealed successful embedding and full restoration of the signals' samples. Our results also demonstrated a small distortion in the video objective quality, a small increment in bit-rate, and embedded cost savings of -2.6196% for high- and medium-motion video sequences.
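As a rough illustration of hiding physiological samples in video: the paper embeds into encoded sequences, whereas the sketch below works on raw luma pixels instead, spreading each 8-bit sample over the two least-significant bits of four carrier pixels so the objective distortion stays small. Function names and the 2-bit layout are assumptions for illustration only.

```python
import numpy as np

def embed_samples(luma, samples):
    """Hide 8-bit samples in the 2 LSBs of consecutive luma pixels
    (4 carrier pixels per sample); max per-pixel error is 3."""
    out = luma.flatten().copy()
    for i, s in enumerate(samples):
        for j in range(4):
            two_bits = (s >> (2 * j)) & 0b11
            out[i * 4 + j] = (out[i * 4 + j] & 0xFC) | two_bits
    return out.reshape(luma.shape)

def extract_samples(luma, count):
    """Recover `count` samples from the 2 LSBs of the carrier pixels."""
    flat = luma.flatten()
    return [
        sum((int(flat[i * 4 + j]) & 0b11) << (2 * j) for j in range(4))
        for i in range(count)
    ]
```

Because the waveform samples ride inside the video frames themselves, playback of the video is inherently synchronized with the waveforms, with no separate timestamp bookkeeping.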
2004-03-01
mirror device (DMD) for C4ISR applications, the IBM 9.2 megapixel 22-in. diagonal active matrix liquid crystal display (AMLCD) monitor for data...FED, VFD, OLED and a variety of microdisplays (uD, comprising uLCD, uOLED, DMD and other MEMs) (see glossary). 3 CDT = cathode display tubes (used in...than SVGA, greater battery life and brightness, decreased weight and thickness, electromagnetic interference (EMI), and development of video
On-screen-display (OSD) menu detection for proper stereo content reproduction for 3D TV
NASA Astrophysics Data System (ADS)
Tolstaya, Ekaterina V.; Bucha, Victor V.; Rychagov, Michael N.
2011-03-01
Modern consumer 3D TV sets are able to show video content in two different modes: 2D and 3D. In 3D mode, the stereo pair comes from an external device such as a Blu-ray player or satellite receiver. The stereo pair is split into left and right images that are shown one after another. The viewer sees a different image with each eye using shutter glasses properly synchronized with the 3D TV. In addition, some devices that supply the TV with stereo content can display additional information by imposing an overlay picture on the video content, an On-Screen-Display (OSD) menu. Some OSDs are not 3D compatible and lead to incorrect 3D reproduction. In this case, the TV set must recognize the type of OSD, determine whether it is 3D compatible, and visualize it correctly by either switching off stereo mode or continuing to display the stereo content. We propose a new, stable method for detecting 3D-incompatible OSD menus on stereo content. A conventional OSD is a rectangular area with letters and pictograms, and OSD menus can have different transparency levels and colors. To be 3D compatible, an OSD must be overlaid separately on both images of a stereo pair. The main problem in detecting an OSD is distinguishing whether a color difference is due to OSD presence or to stereo parallax. We applied special techniques to find a reliable image difference and additionally used the cue that an OSD usually has distinctive geometrical features: straight parallel lines. The developed algorithm was tested on our database of video sequences, with several types of OSD of different colors and transparency levels overlaid on the video content. Detection quality exceeded 99% of true answers.
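One way to separate an OSD difference from stereo parallax, in the spirit of the "reliable image difference" the abstract mentions (the authors' actual technique is not specified here), is to exploit the fact that parallax is a horizontal shift: a left/right mismatch that survives every tested disparity cannot be parallax. A minimal sketch, with assumed names and thresholds:

```python
import numpy as np

def osd_mask(left, right, max_disp=16, thresh=20.0):
    """Flag pixels whose left/right mismatch survives every horizontal
    shift up to max_disp: parallax cannot explain them, an OSD can."""
    best = np.full(left.shape, np.inf)
    for d in range(-max_disp, max_disp + 1):
        shifted = np.roll(right.astype(float), d, axis=1)
        best = np.minimum(best, np.abs(left.astype(float) - shifted))
    return best > thresh
```

A rectangular cluster of flagged pixels bounded by straight parallel lines would then match the geometric OSD cue described in the abstract.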
Projection displays and MEMS: timely convergence for a bright future
NASA Astrophysics Data System (ADS)
Hornbeck, Larry J.
1995-09-01
Projection displays and microelectromechanical systems (MEMS) have evolved independently, occasionally crossing paths as early as the 1950s. But the commercially viable use of MEMS for projection displays had been elusive until the recent invention of Texas Instruments Digital Light Processing™ (DLP) technology. DLP technology is based on the Digital Micromirror Device™ (DMD) microchip, a MEMS semiconductor digital light switch that precisely controls a light source for projection display and hardcopy applications. DLP technology provides a unique business opportunity because of the timely convergence of market needs and technology advances. The world is rapidly moving to an all-digital communications and entertainment infrastructure. In the near future, most of the technologies necessary for this infrastructure will be available at the right performance and price levels. This will make commercially viable an all-digital chain (capture, compression, transmission, reception, decompression, hearing, and viewing). Unfortunately, the digital images received today must be translated into analog signals for viewing on today's televisions. Digital video is the final link in the all-digital infrastructure, and DLP technology provides that link. DLP technology is an enabler for digital, high-resolution, color projection displays that have high contrast, are bright and seamless, and have the accuracy of color and grayscale that can be achieved only by digital control. This paper contains an introduction to DMD and DLP technology, including the historical context from which to view their development. The architecture, projection operation, and fabrication are presented. Finally, the paper includes an update on current DMD business opportunities in projection displays and hardcopy.
The impact of video technology on learning: A cooking skills experiment.
Surgenor, Dawn; Hollywood, Lynsey; Furey, Sinéad; Lavelle, Fiona; McGowan, Laura; Spence, Michelle; Raats, Monique; McCloat, Amanda; Mooney, Elaine; Caraher, Martin; Dean, Moira
2017-07-01
This study examines the role of video technology in the development of cooking skills. The study explored the views of 141 female participants on whether video technology can promote confidence in learning new cooking skills to assist in meal preparation. Prior to each focus group participants took part in a cooking experiment to assess the most effective method of learning for low-skilled cooks across four experimental conditions (recipe card only; recipe card plus video demonstration; recipe card plus video demonstration conducted in segmented stages; and recipe card plus video demonstration whereby participants freely accessed video demonstrations as and when needed). Focus group findings revealed that video technology was perceived to assist learning in the cooking process in the following ways: (1) improved comprehension of the cooking process; (2) real-time reassurance in the cooking process; (3) assisting the acquisition of new cooking skills; and (4) enhancing the enjoyment of the cooking process. These findings display the potential for video technology to promote motivation and confidence as well as enhancing cooking skills among low-skilled individuals wishing to cook from scratch using fresh ingredients. Copyright © 2017 Elsevier Ltd. All rights reserved.
A Framework for Realistic Modeling and Display of Object Surface Appearance
NASA Astrophysics Data System (ADS)
Darling, Benjamin A.
With advances in screen and video hardware technology, the type of content presented on computers has progressed from text and simple shapes to high-resolution photographs, photorealistic renderings, and high-definition video. At the same time, there have been significant advances in the area of content capture, with the development of devices and methods for creating rich digital representations of real-world objects. Unlike photo or video capture, which provide a fixed record of the light in a scene, these new technologies provide information on the underlying properties of the objects, allowing their appearance to be simulated for novel lighting and viewing conditions. These capabilities provide an opportunity to continue the computer display progression, from high-fidelity image presentations to digital surrogates that recreate the experience of directly viewing objects in the real world. In this dissertation, a framework was developed for representing objects with complex color, gloss, and texture properties and displaying them onscreen to appear as if they are part of the real-world environment. At its core, there is a conceptual shift from a traditional image-based display workflow to an object-based one. Instead of presenting the stored patterns of light from a scene, the objective is to reproduce the appearance attributes of a stored object by simulating its dynamic patterns of light for the real viewing and lighting geometry. This is accomplished using a computational approach where the physical light sources are modeled and the observer and display screen are actively tracked. Surface colors are calculated for the real spectral composition of the illumination with a custom multispectral rendering pipeline. In a set of experiments, the accuracy of color and gloss reproduction was evaluated by measuring the screen directly with a spectroradiometer. 
Gloss reproduction was assessed by comparing gonio measurements of the screen output to measurements of the real samples in the same measurement configuration. A chromatic adaptation experiment was performed to evaluate color appearance in the framework and explore the factors that contribute to differences when viewing self-luminous displays as opposed to reflective objects. A set of sample applications was developed to demonstrate the potential utility of the object display technology for digital proofing, psychophysical testing, and artwork display.
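The multispectral rendering step described above, computing surface colors for the real spectral composition of the illumination, reduces per pixel to a wavelength-wise product of reflectance and illuminant weighted by the colour-matching functions. A minimal sketch of that computation (the function name and sampling grid are assumptions; the dissertation's pipeline is considerably richer):

```python
import numpy as np

def surface_xyz(reflectance, illuminant, cmfs):
    """CIE XYZ of a surface under a measured illuminant: per-wavelength
    product of reflectance and illuminant, weighted by the colour-matching
    functions (all arrays sampled on the same wavelength grid)."""
    stimulus = reflectance * illuminant          # light leaving the surface
    k = 100.0 / np.dot(illuminant, cmfs[:, 1])   # so a perfect white has Y = 100
    return k * (stimulus @ cmfs)                 # -> (X, Y, Z)
```

Re-evaluating this product whenever the tracked light sources change is what lets an object-based display update its on-screen colors for the real, current illumination rather than a baked-in scene.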
New teaching methods in use at UC Irvine's optical engineering and instrument design programs
NASA Astrophysics Data System (ADS)
Silberman, Donn M.; Rowe, T. Scott; Jo, Joshua; Dimas, David
2012-10-01
New teaching methods reach geographically dispersed students with advances in Distance Education. Capabilities include a new "Hybrid" teaching method with an instructor in a classroom and a live WebEx simulcast for remote students. Our Distance Education Geometric and Physical Optics courses include Hands-On Optics experiments. Low cost laboratory kits have been developed and YouTube type video recordings of the instructor using these tools guide the students through their labs. A weekly "Office Hour" has been developed using WebEx and a Live Webcam the instructor uses to display his live writings from his notebook for answering students' questions.
Hu, Peter F; Xiao, Yan; Ho, Danny; Mackenzie, Colin F; Hu, Hao; Voigt, Roger; Martz, Douglas
2006-06-01
One of the major challenges for day-of-surgery operating room coordination is accurate and timely situation awareness. Distributed and secure real-time status information is key to addressing these challenges. This article reports on the design and implementation of a passive status monitoring system in a 19-room surgical suite of a major academic medical center. Key design requirements considered included integrated real-time operating room status display, access control, security, and network impact. The system used live operating room video images and patient vital signs obtained through monitors to automatically update events and operating room status. Images were presented on a "need-to-know" basis, and access was controlled by identification badge authorization. The system delivered reliable real-time operating room images and status with acceptable network impact. Operating room status was visualized at 4 separate locations and was used continuously by clinicians and operating room service providers to coordinate operating room activities.
ERIC Educational Resources Information Center
Kucalaba, Linda
Previous studies have found that the librarian's use of book displays and recommended lists are an effective means to increase circulation in the public library. Yet conflicting results were found when these merchandising techniques were used with collection materials in the nonprint format, specifically audiobooks and videos, instead of books.…
[Development of a system for ultrasonic three-dimensional reconstruction of fetus].
Baba, K
1989-04-01
We have developed a system for ultrasonic three-dimensional (3-D) fetus reconstruction using computers. Either a real-time linear array probe or a convex array probe of an ultrasonic scanner was mounted on a position sensor arm of a manual compound scanner in order to detect the position of the probe. A microcomputer was used to convert the position information into an image that could be recorded on video tape. This image was superimposed on the ultrasonic tomographic image simultaneously with a superimposer and recorded on video tape. Fetuses in utero were scanned in seven cases. More than forty ultrasonic section images on the video tape were fed into a minicomputer. The shape of the fetus was displayed three-dimensionally by means of computer graphics. The computer-generated display produced a 3-D image of the fetus and showed the usefulness and accuracy of this system. Since it took only a few seconds for data collection by ultrasonic inspection, fetal movement did not adversely affect the results. Data input took about ten minutes for 40 slices, and 3-D reconstruction and display took about two minutes. The system made it possible to observe and record the 3-D image of the fetus in utero non-invasively and is therefore expected to make it much easier to obtain a 3-D picture of the fetus in utero.
Yoshida, Soichiro; Kihara, Kazunori; Takeshita, Hideki; Fujii, Yasuhisa
2014-12-01
The head-mounted display (HMD) is a new image monitoring system. We developed the Personal Integrated-image Monitoring System (PIM System) using the HMD (HMZ-T2, Sony Corporation, Tokyo, Japan) in combination with video splitters and multiplexers as a surgical guide system for transurethral resection of the prostate (TURP). The imaging information obtained from the cystoscope, the transurethral ultrasonography (TRUS), the video camera attached to the HMD, and the patient's vital signs monitor was split and integrated by the PIM System, and a composite image was displayed by the HMD using a four-split screen technique. Wearing the HMD, the lead surgeon and the assistant could simultaneously and continuously monitor the same information displayed by the HMD in an ergonomically efficient posture. Each participant could independently rearrange the images comprising the composite image depending on the step being performed. Two benign prostatic hyperplasia (BPH) patients underwent TURP performed by surgeons guided with this system. In both cases, the TURP procedure was successfully performed, and the postoperative clinical courses had no remarkable unfavorable events. During the procedure, none of the participants experienced any HMD-wear related adverse effects or reported any discomfort.
Micro-video display with ocular tracking and interactive voice control
NASA Technical Reports Server (NTRS)
Miller, James E.
1993-01-01
In certain space-restricted environments, many of the benefits resulting from computer technology have been foregone because of the size, weight, inconvenience, and lack of mobility associated with existing computer interface devices. Accordingly, an effort to develop a highly miniaturized and 'wearable' computer display and control interface device, referred to as the Sensory Integrated Data Interface (SIDI), is underway. The system incorporates a micro-video display that provides data display and ocular tracking on a lightweight headset. Software commands are implemented by conjunctive eye movement and voice commands of the operator. In this initial prototyping effort, various 'off-the-shelf' components have been integrated with a desktop computer and a customized menu-tree software application to demonstrate feasibility and conceptual capabilities. When fully developed as a customized system, the interface device will allow mobile, 'hands-free' operation of portable computer equipment. It will thus allow integration of information technology applications into those restrictive environments, both military and industrial, that have not yet taken advantage of the computer revolution. This effort is Phase 1 of Small Business Innovative Research (SBIR) Topic number N90-331 sponsored by the Naval Undersea Warfare Center Division, Newport. The prime contractor is Foster-Miller, Inc. of Waltham, MA.
A large flat panel multifunction display for military and space applications
NASA Astrophysics Data System (ADS)
Pruitt, James S.
1992-09-01
A flat panel multifunction display (MFD) that offers the size and reliability benefits of liquid crystal display technology while achieving near-CRT display quality is presented. Display generation algorithms that provide exceptional display quality are being implemented in custom VLSI components to minimize MFD size. A high-performance processor converts user-specified display lists to graphics commands used by these components, resulting in high-speed updates of two-dimensional and three-dimensional images. The MFD uses the MIL-STD-1553B data bus for compatibility with virtually all avionics systems. The MFD can generate displays directly from display lists received from the MIL-STD-1553B bus. Complex formats can be stored in the MFD and displayed using parameters from the data bus. The MFD also accepts direct video input and performs special processing on this input to enhance image quality.
Thomas, W P; Gaber, C E; Jacobs, G J; Kaplan, P M; Lombard, C W; Moise, N S; Moses, B L
1993-01-01
Recommendations are presented for standardized imaging planes and display conventions for two-dimensional echocardiography in the dog and cat. Three transducer locations ("windows") provide access to consistent imaging planes: the right parasternal location, the left caudal (apical) parasternal location, and the left cranial parasternal location. Recommendations for image display orientations are very similar to those for comparable human cardiac images, with the heart base or cranial aspect of the heart displayed to the examiner's right on the video display. From the right parasternal location, standard views include a long-axis four-chamber view and a long-axis left ventricular outflow view, and short-axis views at the levels of the left ventricular apex, papillary muscles, chordae tendineae, mitral valve, aortic valve, and pulmonary arteries. From the left caudal (apical) location, standard views include long-axis two-chamber and four-chamber views. From the left cranial parasternal location, standard views include a long-axis view of the left ventricular outflow tract and ascending aorta (with variations to image the right atrium and tricuspid valve, and the pulmonary valve and pulmonary artery), and a short-axis view of the aortic root encircled by the right heart. These images are presented by means of idealized line drawings. Adoption of these standards should facilitate consistent performance, recording, teaching, and communicating results of studies obtained by two-dimensional echocardiography.
NASA Astrophysics Data System (ADS)
Rzhanov, Y.; Beaulieu, S.; Soule, S. A.; Shank, T.; Fornari, D.; Mayer, L. A.
2005-12-01
Many advances in understanding geologic, tectonic, biologic, and sedimentologic processes in the deep ocean are facilitated by direct observation of the seafloor. However, making such observations is both difficult and expensive. Optical systems (e.g., video, still camera, or direct observation) will always be constrained by the severe attenuation of light in the deep ocean, limiting the field of view to distances that are typically less than 10 meters. Acoustic systems can 'see' much larger areas, but at the cost of spatial resolution. Ultimately, scientists want to study and observe deep-sea processes in the same way we do land-based phenomena so that the spatial distribution and juxtaposition of processes and features can be resolved. We have begun development of algorithms that will, in near real-time, generate mosaics from video collected by deep-submergence vehicles. Mosaics consist of >>10 video frames and can cover 100's of square-meters. This work builds on a publicly available still and video mosaicking software package developed by Rzhanov and Mayer. Here we present the results of initial tests of data collection methodologies (e.g., transects across the seafloor and panoramas across features of interest), algorithm application, and GIS integration conducted during a recent cruise to the Eastern Galapagos Spreading Center (0 deg N, 86 deg W). We have developed a GIS database for the region that will act as a means to access and display mosaics within a geospatially-referenced framework. We have constructed numerous mosaics using both video and still imagery and assessed the quality of the mosaics (including registration errors) under different lighting conditions and with different navigation procedures. We have begun to develop algorithms for efficient and timely mosaicking of collected video as well as integration with navigation data for georeferencing the mosaics. 
Initial results indicate that operators must be properly versed in the control of the video systems as well as maintaining vehicle attitude and altitude in order to achieve the best results possible.
NASA Astrophysics Data System (ADS)
Qin, Chen; Ren, Bin; Guo, Longfei; Dou, Wenhua
2014-11-01
Multi-projector three-dimensional (3D) display is a promising multi-view glasses-free 3D display technology that can produce full-colour, high-definition 3D images on its screen. One key problem of multi-projector 3D display is how to acquire the source images for the projector array while avoiding the pseudoscopic problem. This paper first analyses the display characteristics of multi-projector 3D displays and then proposes a projector content synthesis method using a tetrahedral transform. A 3D video format based on a stereo image pair and an associated disparity map is presented; it is well suited to any type of multi-projector 3D display and has the advantage of saving storage. Experimental results show that our method solves the pseudoscopic problem.
NASA Technical Reports Server (NTRS)
1986-01-01
The FluoroScan Imaging System is a high resolution, low radiation device for viewing stationary or moving objects. It resulted from NASA technology developed for x-ray astronomy and its application at Goddard to a low intensity x-ray imaging scope. FluoroScan Imaging Systems, Inc. (formerly HealthMate, Inc.), a NASA licensee, further refined the FluoroScan System. It is used for examining fractures, placing catheters, and in veterinary medicine. Its major components include an x-ray generator, scintillator, visible light image intensifier and video display. It is small, light and maneuverable.
Broadening the interface bandwidth in simulation based training
NASA Technical Reports Server (NTRS)
Somers, Larry E.
1989-01-01
Currently most computer based simulations rely exclusively on computer generated graphics to create the simulation. When training is involved, the method almost exclusively used to display information to the learner is text displayed on the cathode ray tube. MICROEXPERT Systems is concentrating on broadening the communications bandwidth between the computer and user by employing a novel approach to video image storage combined with sound and voice output. An expert system is used to combine and control the presentation of analog video, sound, and voice output with computer based graphics and text. Researchers are currently involved in the development of several graphics based user interfaces for NASA, the U.S. Army, and the U.S. Navy. Here, the focus is on the human factors considerations, software modules, and hardware components being used to develop these interfaces.
Psycho-physiological effects of head-mounted displays in ubiquitous use
NASA Astrophysics Data System (ADS)
Kawai, Takashi; Häkkinen, Jukka; Oshima, Keisuke; Saito, Hiroko; Yamazoe, Takashi; Morikawa, Hiroyuki; Nyman, Göte
2011-02-01
In this study, two experiments were conducted to evaluate the psycho-physiological effects of practical use of a monocular head-mounted display (HMD) in a real-world environment, assuming consumer-level applications such as viewing video content and receiving navigation information while walking. In Experiment 1, the workload was examined for different types of stimulus presentation using an HMD (monocular or binocular, see-through or non-see-through). Experiment 2 focused on the relationship between the real-world environment and the visual information presented using a monocular HMD. The workload was compared between a case where participants walked while viewing video content unrelated to the real-world environment and a case where participants walked while viewing visual information, such as navigation cues, that augmented the real-world environment.
Synchronized voltage contrast display analysis system
NASA Technical Reports Server (NTRS)
Johnston, M. F.; Shumka, A.; Miller, E.; Evans, K. C. (Inventor)
1982-01-01
An apparatus and method for comparing internal voltage potentials of first and second operating electronic components such as large scale integrated circuits (LSI's) in which voltage differentials are visually identified via an appropriate display means are described. More particularly, in a first embodiment of the invention a first and second scanning electron microscope (SEM) are configured to scan a first and second operating electronic component respectively. The scan pattern of the second SEM is synchronized to that of the first SEM so that both simultaneously scan corresponding portions of the two operating electronic components. Video signals from each SEM corresponding to secondary electron signals generated as a result of a primary electron beam intersecting each operating electronic component in accordance with a predetermined scan pattern are provided to a video mixer and color encoder.
Flow visualization of CFD using graphics workstations
NASA Technical Reports Server (NTRS)
Lasinski, Thomas; Buning, Pieter; Choi, Diana; Rogers, Stuart; Bancroft, Gordon
1987-01-01
High performance graphics workstations are used to visualize the fluid flow dynamics obtained from supercomputer solutions of computational fluid dynamic programs. The visualizations can be done independently on the workstation or while the workstation is connected to the supercomputer in a distributed computing mode. In the distributed mode, the supercomputer interactively performs the computationally intensive graphics rendering tasks while the workstation performs the viewing tasks. A major advantage of the workstations is that the viewers can interactively change their viewing position while watching the dynamics of the flow fields. An overview of the computer hardware and software required to create these displays is presented. For complex scenes the workstation cannot create the displays fast enough for good motion analysis. For these cases, the animation sequences are recorded on video tape or 16 mm film a frame at a time and played back at the desired speed. The additional software and hardware required to create these video tapes or 16 mm movies are also described. Photographs illustrating current visualization techniques are discussed. Examples of the use of the workstations for flow visualization through animation are available on video tape.
The virtual brain: 30 years of video-game play and cognitive abilities.
Latham, Andrew J; Patston, Lucy L M; Tippett, Lynette J
2013-09-13
Forty years have passed since video-games were first made widely available to the public and subsequently playing games has become a favorite past-time for many. Players continuously engage with dynamic visual displays with success contingent on the time-pressured deployment, and flexible allocation, of attention as well as precise bimanual movements. Evidence to date suggests that both brief and extensive exposure to video-game play can result in a broad range of enhancements to various cognitive faculties that generalize beyond the original context. Despite promise, video-game research is host to a number of methodological issues that require addressing before progress can be made in this area. Here an effort is made to consolidate the past 30 years of literature examining the effects of video-game play on cognitive faculties and, more recently, neural systems. Future work is required to identify the mechanism that allows the act of video-game play to generate such a broad range of generalized enhancements. PMID:24062712
Scorebox extraction from mobile sports videos using Support Vector Machines
NASA Astrophysics Data System (ADS)
Kim, Wonjun; Park, Jimin; Kim, Changick
2008-08-01
The scorebox plays an important role in understanding the content of sports videos. However, a tiny scorebox may make it difficult for viewers on small displays to grasp the game situation. In this paper, we propose a novel framework to extract the scorebox from sports video frames. We first extract candidates by using accumulated intensity and edge information after a short learning period. Since various types of scoreboxes are inserted in sports videos, multiple attributes need to be used for efficient extraction. Based on those attributes, the information gain is computed and the top three ranked attributes in terms of information gain are selected as a three-dimensional feature vector for Support Vector Machines (SVM) to distinguish the scorebox from other candidates, such as logos and advertisement boards. The proposed method is tested on various videos of sports games, and experimental results show the efficiency and robustness of our proposed method.
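The SVM classification step described in this abstract can be sketched as follows; the feature names, the values, and the linear kernel are illustrative assumptions for the sketch, not details taken from the paper:

```python
# Hypothetical sketch of the classification step: a 3-D feature vector
# (the top three attributes by information gain) fed to an SVM that
# separates scoreboxes from other static overlays. Features and values
# below are invented for illustration.
from sklearn.svm import SVC

# Each row: [temporal intensity stability, edge density, aspect ratio]
candidates = [
    [0.95, 0.80, 4.5],   # scorebox-like: stable, edge-rich, wide
    [0.90, 0.75, 5.0],
    [0.40, 0.30, 1.0],   # advertisement board: less stable, squarish
    [0.35, 0.20, 1.2],
]
labels = [1, 1, 0, 0]    # 1 = scorebox, 0 = other candidate

clf = SVC(kernel="linear")
clf.fit(candidates, labels)

# Classify a new candidate region
print(clf.predict([[0.92, 0.78, 4.8]]))  # -> [1], i.e. scorebox
```

In practice the features would be measured over the learning period on each candidate region before classification.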
Viewing the viewers: how adults with attentional deficits watch educational videos.
Hassner, Tal; Wolf, Lior; Lerner, Anat; Leitner, Yael
2014-10-01
Knowing how adults with ADHD interact with prerecorded video lessons at home may provide a novel means of early screening and long-term monitoring for ADHD. Viewing patterns of 484 students with known ADHD were compared with 484 age, gender, and academically matched controls chosen from 8,699 non-ADHD students. Transcripts generated by their video playback software were analyzed using t tests and regression analysis. ADHD students displayed significant tendencies (p ≤ .05) to watch videos with more pauses and more reviews of previously watched parts. Other parameters showed similar tendencies. Regression analysis indicated that attentional deficits remained constant for age and gender but varied for learning experience. There were measurable and significant differences between the video-viewing habits of the ADHD and non-ADHD students. This provides a new perspective on how adults cope with attention deficits and suggests a novel means of early screening for ADHD. © 2011 SAGE Publications.
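The kind of group comparison reported in this study can be illustrated with a standard two-sample t test; the numbers below are invented for the sketch (the study itself compared 484 matched pairs):

```python
# Illustrative two-sample t test on per-student pause counts,
# mirroring the reported analysis. Data are made up.
from scipy import stats

adhd_pauses    = [12, 15, 9, 14, 11, 13, 16, 10]
control_pauses = [6, 8, 5, 7, 9, 6, 7, 8]

t, p = stats.ttest_ind(adhd_pauses, control_pauses)
print(f"t = {t:.2f}, p = {p:.4f}")  # a small p supports a group difference
```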
Introducing a Public Stereoscopic 3D High Dynamic Range (SHDR) Video Database
NASA Astrophysics Data System (ADS)
Banitalebi-Dehkordi, Amin
2017-03-01
High dynamic range (HDR) displays and cameras are paving their way through the consumer market at a rapid growth rate. Thanks to TV and camera manufacturers, HDR systems are now becoming commercially available to end users. This is taking place only a few years after the blooming of 3D video technologies. MPEG/ITU are also actively working towards the standardization of these technologies. However, preliminary research efforts in these video technologies are hampered by the lack of sufficient experimental data. In this paper, we introduce a Stereoscopic 3D HDR database of videos that is made publicly available to the research community. We explain the procedure taken to capture, calibrate, and post-process the videos. In addition, we provide insights on potential use-cases, challenges, and research opportunities implied by the combination of the higher dynamic range of the HDR aspect and the depth impression of the 3D aspect.
Bartholow, Bruce D; Sestir, Marc A; Davis, Edward B
2005-11-01
Research has shown that exposure to violent video games causes increases in aggression, but the mechanisms of this effect have remained elusive. Also, potential differences in short-term and long-term exposure are not well understood. An initial correlational study shows that video game violence exposure (VVE) is positively correlated with self-reports of aggressive behavior and that this relation is robust to controlling for multiple aspects of personality. A lab experiment showed that individuals low in VVE behave more aggressively after playing a violent video game than after a nonviolent game but that those high in VVE display relatively high levels of aggression regardless of game content. Mediational analyses show that trait hostility, empathy, and hostile perceptions partially account for the VVE effect on aggression. These findings suggest that repeated exposure to video game violence increases aggressive behavior in part via changes in cognitive and personality factors associated with desensitization.
User interface using a 3D model for video surveillance
NASA Astrophysics Data System (ADS)
Hata, Toshihiko; Boh, Satoru; Tsukada, Akihiro; Ozaki, Minoru
1998-02-01
These days, industrial surveillance and monitoring applications such as plant control and building security require fewer people, who must carry out their tasks quickly and precisely. Utilizing multimedia technology is a good approach to meeting this need, and we previously developed Media Controller, which is designed for these applications and provides realtime recording and retrieval of digital video data in a distributed environment. In this paper, we propose a user interface for such a distributed video surveillance system in which 3D models of buildings and facilities are connected to the surveillance video. A novel method of synchronizing camera field data with each frame of a video stream is considered. This method records and reads the camera field data similarly to the video data and transmits it synchronously with the video stream. This enables the user interface to offer such useful functions as comprehending the camera field immediately and providing clues when visibility is poor, for not only live video but also playback video. We have also implemented and evaluated the display function, which makes the surveillance video and the 3D model work together, using Media Controller with Java and Virtual Reality Modeling Language for multi-purpose and intranet use of the 3D model.
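The per-frame synchronization of camera field data described above might be modeled as one pose record per video frame; the record fields (pan, tilt, zoom) are our assumption for the sketch, not the paper's actual data model:

```python
# Minimal sketch of recording camera field data "similarly to the video
# data": one record per frame, keyed by frame number, so playback can
# look up the camera field for any frame, live or recorded.
from dataclasses import dataclass

@dataclass
class CameraField:
    frame: int       # video frame number this record accompanies
    pan: float       # degrees
    tilt: float      # degrees
    zoom: float      # focal-length factor

# One CameraField per encoded frame, transmitted alongside the stream
track = {f.frame: f for f in [
    CameraField(0, 10.0, -5.0, 1.0),
    CameraField(1, 10.5, -5.0, 1.0),
    CameraField(2, 11.0, -4.5, 1.2),
]}

# During playback, frame N's record drives the 3D-model viewpoint
print(track[2].zoom)  # -> 1.2
```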
Speech Auditory Alerts Promote Memory for Alerted Events in a Video-Simulated Self-Driving Car Ride.
Nees, Michael A; Helbein, Benji; Porter, Anna
2016-05-01
Auditory displays could be essential to helping drivers maintain situation awareness in autonomous vehicles, but to date, few or no studies have examined the effectiveness of different types of auditory displays for this application scenario. Recent advances in the development of autonomous vehicles (i.e., self-driving cars) have suggested that widespread automation of driving may be tenable in the near future. Drivers may be required to monitor the status of automation programs and vehicle conditions as they engage in secondary leisure or work tasks (entertainment, communication, etc.) in autonomous vehicles. An experiment compared memory for alerted events (a component of Level 1 situation awareness) using speech alerts, auditory icons, and a visual control condition during a video-simulated self-driving car ride with a visual secondary task. The alerts gave information about the vehicle's operating status and the driving scenario. Speech alerts resulted in better memory for alerted events. Both auditory display types resulted in less perceived effort devoted toward the study tasks but also greater perceived annoyance with the alerts. Speech auditory displays promoted Level 1 situation awareness during a simulation of a ride in a self-driving vehicle under routine conditions, but annoyance remains a concern with auditory displays. Speech auditory displays showed promise as a means of increasing Level 1 situation awareness of routine scenarios during an autonomous vehicle ride with an unrelated secondary task. © 2016, Human Factors and Ergonomics Society.
Video Bandwidth Compression System.
1980-08-01
[Table-of-contents excerpt, reconstructed from OCR:] A scaling function is located between the inverse DPCM and inverse transform on the decoder matrix multiplier chips. Decoder sections include: Bit Unpacker and Inverse DPCM Slave Sync Board; Inverse DPCM Loop Boards; Inverse Transform Board; Composite Video Output Board; Display Refresh Memory (Memory Section; Timing and Control); Bit Unpacker and Inverse DPCM; Inverse Transform Processor.
Leading the Development of Concepts of Operations for Next-Generation Remotely Piloted Aircraft
2016-01-01
[Search-snippet excerpt:] ... overarching CONOPS. RPAs must provide full motion video and signals intelligence (SIGINT) capabilities to fulfill their intelligence, surveillance, and ... reached full capacity, combatant commanders had an insatiable demand for this new breed of capability, and phrases like "Pred porn" and "drone strike" ... dimensional steering line on the video feed of the pilot's head-up display (HUD) that would indicate turning cues and finite steering paths for optimal ...
A manual for microcomputer image analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rich, P.M.; Ranken, D.M.; George, J.S.
1989-12-01
This manual is intended to serve three basic purposes: as a primer in microcomputer image analysis theory and techniques, as a guide to the use of IMAGE©, a public domain microcomputer program for image analysis, and as a stimulus to encourage programmers to develop microcomputer software suited for scientific use. Topics discussed include the principles of image processing and analysis, use of standard video for input and display, spatial measurement techniques, and the future of microcomputer image analysis. A complete reference guide that lists the commands for IMAGE is provided. IMAGE includes capabilities for digitization, input and output of images, hardware display lookup table control, editing, edge detection, histogram calculation, measurement along lines and curves, measurement of areas, examination of intensity values, output of analytical results, conversion between raster and vector formats, and region movement and rescaling. The control structure of IMAGE emphasizes efficiency, precision of measurement, and scientific utility. 18 refs., 18 figs., 2 tabs.
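As a concrete illustration of one capability listed above, histogram calculation, here is a minimal generic sketch for an 8-bit raster (not IMAGE's actual code, which the excerpt does not reproduce):

```python
# Count occurrences of each 0-255 intensity value in a raster image,
# the basic operation behind IMAGE-style histogram calculation.
def intensity_histogram(raster):
    """Return a 256-bin count of pixel intensities."""
    hist = [0] * 256
    for row in raster:
        for value in row:
            hist[value] += 1
    return hist

image = [
    [0, 0, 128],
    [128, 255, 255],
]
h = intensity_histogram(image)
print(h[0], h[128], h[255])  # -> 2 2 2
```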
ESIAC: A data products system for ERTS imagery (time-lapse viewing and measuring)
NASA Technical Reports Server (NTRS)
Evans, W. E.; Serebreny, S. M.
1974-01-01
An Electronic Satellite Image Analysis Console (ESIAC) has been developed for visual analysis and objective measurement of earth resources imagery. The system is being employed to process imagery for use by USGS investigators in several different disciplines studying dynamic hydrologic conditions. The ESIAC provides facilities for storing registered image sequences in a magnetic video disc memory for subsequent recall, enhancement, and animated display in monochrome or color. The unique feature of the system is the capability to time-lapse the ERTS imagery and/or analytic displays of the imagery. Data products have included quantitative measurements of distances and areas, brightness profiles, and movie loops of selected themes. The applications of these data products are identified and include such diverse problem areas as measurement of snowfield extent, sediment plumes from estuary discharge, playa inventory, phreatophyte and other vegetation changes. A comparative ranking of the electronic system in terms of accuracy, cost effectiveness and data output shows it to be a viable means of data analysis.
Integrating critical interface elements for intuitive single-display aviation control of UAVs
NASA Astrophysics Data System (ADS)
Cooper, Joseph L.; Goodrich, Michael A.
2006-05-01
Although advancing levels of technology allow UAV operators to give increasingly complex commands with expanding temporal scope, it is unlikely that the need for immediate situation awareness and local, short-term flight adjustment will ever be completely superseded. Local awareness and control are particularly important when the operator uses the UAV to perform a search or inspection task. There are many different tasks which would be facilitated by search and inspection capabilities of a camera-equipped UAV. These tasks range from bridge inspection and news reporting to wilderness search and rescue. The system should be simple, inexpensive, and intuitive for non-pilots. An appropriately designed interface should (a) provide a context for interpreting video and (b) support UAV tasking and control, all within a single display screen. In this paper, we present and analyze an interface that attempts to accomplish this goal. The interface utilizes a georeferenced terrain map rendered from publicly available altitude data and terrain imagery to create a context in which the location of the UAV and the source of the video are communicated to the operator. Rotated and transformed imagery from the UAV provides a stable frame of reference for the operator and integrates cleanly into the terrain model. Simple icons overlaid onto the main display provide intuitive control and feedback when necessary but fade to a semi-transparent state when not in use to avoid distracting the operator's attention from the video signal. With various interface elements integrated into a single display, the interface runs nicely on a small, portable, inexpensive system with a single display screen and simple input device, but is powerful enough to allow a single operator to deploy, control, and recover a small UAV when coupled with appropriate autonomy. As we present elements of the interface design, we will identify concepts that can be leveraged into a large class of UAV applications.
VAP/VAT: video analytics platform and test bed for testing and deploying video analytics
NASA Astrophysics Data System (ADS)
Gorodnichy, Dmitry O.; Dubrofsky, Elan
2010-04-01
Deploying Video Analytics (VA) in operational environments is extremely challenging. This paper presents a methodological approach developed by the Video Surveillance and Biometrics Section (VSB) of the Science and Engineering Directorate (S&E) of the Canada Border Services Agency (CBSA) to resolve these problems. A three-phase approach to enable VA deployment within an operational agency is presented, and the Video Analytics Platform and Testbed (VAP/VAT) developed by the VSB section is introduced. In addition to allowing the integration of third-party and in-house built VA codes into an existing video surveillance infrastructure, VAP/VAT also allows the agency to conduct an unbiased performance evaluation of the cameras and VA software available on the market. VAP/VAT consists of two components: EventCapture, which serves to automatically detect a "Visual Event", and EventBrowser, which serves to display and peruse the "Visual Details" captured at the "Visual Event". To deal with both open-architecture and closed-architecture cameras, two video-feed capture mechanisms have been developed within the EventCapture component: IPCamCapture and ScreenCapture.
Tanco, Kimberson; Rhondali, Wadih; Perez-Cruz, Pedro; Tanzi, Silvia; Chisholm, Gary B; Baile, Walter; Frisbee-Hume, Susan; Williams, Janet; Masino, Charles; Cantu, Hilda; Sisson, Amy; Arthur, Joseph; Bruera, Eduardo
2015-05-01
Information regarding treatment options and prognosis is essential for patient decision making. Patient perception of physicians as being less compassionate when they deliver bad news might be a contributor to physicians' reluctance in delivering these types of communication. To compare patients' perception of physician compassion after watching video vignettes of 2 physicians conveying a more optimistic vs a less optimistic message, determine patients' physician preference after watching both videos, and establish demographic and clinical predictors of compassion. Randomized clinical trial at an outpatient supportive care center in a cancer center in Houston, Texas, including English-speaking adult patients with advanced cancer who were able to understand the nature of the study and complete the consent process. Actors and patients were blinded to the purpose of the study. Investigators were blinded to the videos observed by the patient. One hundred patients were randomized to observe 2 standardized, roughly 4-minute videos depicting a physician discussing treatment information (more optimistic message vs less optimistic message) with a patient with advanced cancer. Both physicians made an identical number of empathetic statements (5) and displayed identical posture. After viewing each video, patients completed assessments including the Physician Compassion Questionnaire (0 = best, 50 = worst). Patients' perception of physician compassion after being exposed to a more optimistic vs an equally empathetic but less optimistic message. Patients reported significantly better compassion scores after watching the more optimistic video as compared with the less optimistic video (median [interquartile range], 15 [5-23] vs 23 [10-31]; P < .001). There was a sequence effect favoring the second video on both compassion scores (P < .001) and physician preference (P < .001). 
Higher perception of compassion was found to be associated with greater trust in the medical profession independent of message type: 63 patients observing the more optimistic message ranked the physician as trustworthy vs 39 after the less optimistic message (P = .03). Patients perceived a higher level of compassion and preferred physicians who provided a more optimistic message. More research is needed in structuring less optimistic message content to support health care professionals in delivering less optimistic news. clinicaltrials.gov Identifier: NCT02357108.
Hardware/Software Issues for Video Guidance Systems: The Coreco Frame Grabber
NASA Technical Reports Server (NTRS)
Bales, John W.
1996-01-01
The F64 frame grabber is a high performance video image acquisition and processing board utilizing the TMS320C40 and TMS34020 processors. The hardware is designed for the 16-bit ISA bus and supports multiple digital or analog cameras. It has an acquisition rate of 40 million pixels per second, with a variable sampling frequency of 510 kHz to 40 MHz. The board has a 4 MB frame buffer memory expandable to 32 MB, and has a simultaneous acquisition and processing capability. It supports both VGA and RGB displays, and accepts all analog and digital video input standards.
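The quoted figures allow some back-of-the-envelope sizing; the 512 x 512, 8-bit frame geometry below is an assumption for illustration, not a board specification:

```python
# Rough sizing from the abstract's figures: how many frames fit in the
# base frame buffer, and the peak frame rate at the stated pixel rate.
PIXELS_PER_SEC = 40_000_000          # 40 million pixels per second
FRAME_W, FRAME_H = 512, 512          # assumed frame geometry
BYTES_PER_PIXEL = 1                  # assumed 8-bit monochrome

frame_bytes = FRAME_W * FRAME_H * BYTES_PER_PIXEL
buffer_bytes = 4 * 1024 * 1024       # base 4 MB frame buffer

frames_in_buffer = buffer_bytes // frame_bytes
max_frame_rate = PIXELS_PER_SEC / (FRAME_W * FRAME_H)

print(frames_in_buffer)              # -> 16 frames in the base buffer
print(round(max_frame_rate, 1))      # -> 152.6 frames/s peak acquisition
```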
ARINC 818 adds capabilities for high-speed sensors and systems
NASA Astrophysics Data System (ADS)
Keller, Tim; Grunwald, Paul
2014-06-01
ARINC 818, titled Avionics Digital Video Bus (ADVB), is the standard for cockpit video that has gained wide acceptance in both commercial and military cockpits, including the Boeing 787, the A350XWB, the A400M, the KC-46A and many others. Initially conceived for cockpit displays, ARINC 818 is now propagating into high-speed sensors, such as infrared and optical cameras, due to its high bandwidth and high reliability. The ARINC 818 specification, initially released in 2006, has recently undergone a major update that enhances its applicability as a high-speed sensor interface. The ARINC 818-2 specification was published in December 2013. The revisions to the specification include: video switching, stereo and 3-D provisions, color sequential implementations, regions of interest, data-only transmissions, multi-channel implementations, bi-directional communication, higher link rates to 32 Gbps, synchronization signals, options for high-speed coax interfaces and optical interface details. The additions to the specification are especially appealing for high-bandwidth, multi-sensor systems that face throughput bottlenecks and SWaP concerns. ARINC 818 is implemented on either copper or fiber optic high-speed physical layers, and allows for time multiplexing multiple sensors onto a single link. This paper discusses each of the new capabilities in the ARINC 818-2 specification and the benefits for ISR and countermeasures implementations; several examples are provided.
Nissen, Nicholas N; Menon, Vijay; Williams, James; Berci, George
2011-01-01
Background The use of loupe magnification during complex hepatobiliary and pancreatic (HBP) surgery has become routine. Unfortunately, loupe magnification has several disadvantages including limited magnification, a fixed field and non-variable magnification parameters. The aim of this report is to describe a simple system of video-microscopy for use in open surgery as an alternative to loupe magnification. Methods In video-microscopy, the operative field is displayed on a TV monitor using a high-definition (HD) camera with a special optic mounted on an adjustable mechanical arm. The set-up and application of this system are described and illustrated using examples drawn from pancreaticoduodenectomy, bile duct repair and liver transplantation. Results This system is easy to use and can provide variable magnification of ×4–12 at a camera distance of 25–35 cm from the operative field and a depth of field of 15 mm. This system allows the surgeon and assistant to work from a HD TV screen during critical phases of microsurgery. Conclusions The system described here provides better magnification than loupe lenses and thus may be beneficial during complex HPB procedures. Other benefits of this system include the fact that its use decreases neck strain and postural fatigue in the surgeon and it can be used as a tool for documentation and teaching. PMID:21929677
Plant Chlorophyll Content Imager with Reference Detection Signals
NASA Technical Reports Server (NTRS)
Spiering, Bruce A. (Inventor); Carter, Gregory A. (Inventor)
2000-01-01
A portable plant chlorophyll imaging system is described which collects light reflected from a target plant and separates the collected light into two different wavelength bands. These wavelength bands, or channels, are described as having center wavelengths of 700 nm and 840 nm. The light collected in these two channels is processed using synchronized video cameras. A controller provided in the system compares the level of light of video images reflected from a target plant with a reference level of light from a source illuminating the plant. The percentages of reflection in the two separate wavelength bands from a target plant are compared to provide a ratio video image that indicates the relative level of plant chlorophyll content and physiological stress. Multiple display modes are described for viewing the video images.
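The two-channel ratio computation described above can be sketched per pixel as follows; the reflectance values are invented for the sketch, and the interpretation (lower 700/840 ratios are generally associated with higher chlorophyll content) is a common reading, not a claim from the abstract:

```python
# Per-pixel ratio image from two reflectance bands (700 nm and 840 nm),
# the operation behind the patent's ratio video image. Values invented.
def ratio_image(band_700, band_840):
    """Divide the 700 nm band by the 840 nm band, pixel by pixel."""
    return [
        [p700 / p840 if p840 else 0.0
         for p700, p840 in zip(row700, row840)]
        for row700, row840 in zip(band_700, band_840)
    ]

r700 = [[0.05, 0.20],
        [0.06, 0.25]]
r840 = [[0.50, 0.50],
        [0.48, 0.50]]
print(ratio_image(r700, r840))  # low values suggest chlorophyll-rich pixels
```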
1991-08-15
[Garbled OCR excerpt; recoverable content:] Displays were constructed with normal density-controlled KDE (kinetic depth effect) cues, with a gray background displayed on even frames introducing 50% scintillation (labelled 1:0). Video tapes were prepared, each of which contained all the experimental ASL signs but distributed into different filter groups. ...
Prygun, A V; Lazarev, N V
1998-10-01
Radiation measurements at the workplaces of operators of command and control installations showed that the environmental parameters associated with electronic display operation comply with regulatory requirements. Nevertheless, operator health assessments show that the problem of personnel safety still exists. The authors recommend measures to improve the situation.
How Children’s Mentalistic Theory Widens their Conception of Pictorial Possibilities
Gilli, Gabriella M.; Ruggi, Simona; Gatti, Monica; Freeman, Norman H.
2016-01-01
An interpretative theory of mind enables young children to grasp that people fulfill varying intentions when making pictures. We tested the hypothesis that in middle childhood a unifunctional conception of artists’ intention to produce a picture widens to include artists’ intention to display their pictures to others. Children aged between 5 and 10 years viewed a brief video of an artist deliberately hiding her picture; her intention was thwarted when her picture was discovered and displayed. By 8 years of age children were almost unanimous that a picture-producer without an intention to show her work to others cannot be considered to be an artist. Further exploratory studies centered on aspects of picture-display involving normal public display as well as the contrary intentions of hiding an original picture and of deceitfully displaying a forgery. Interviews suggested that the concept of exhibition widened to take others’ minds into account: viewers’ critical judgments and the effects of forgeries on viewers’ minds. The approach of interpolating probes of typical possibilities between atypical intentions generated evidence that in middle childhood the foundations are laid for a conception of communication between artists’ minds and viewers’ minds via pictorial display. The combination of hypothesis-testing and exploratory opening-up of the area generates a new testable hypothesis about how an increasingly mentalistic approach enables children to understand diverse possibilities in the pictorial domain. PMID:26955360
Hybrid markerless tracking of complex articulated motion in golf swings.
Fung, Sim Kwoh; Sundaraj, Kenneth; Ahamed, Nizam Uddin; Kiang, Lam Chee; Nadarajah, Sivadev; Sahayadhas, Arun; Ali, Md Asraf; Islam, Md Anamul; Palaniappan, Rajkumar
2014-04-01
Sports video tracking is a research topic that has attracted increasing attention due to its high commercial potential. A number of sports, including tennis, soccer, gymnastics, running, golf, badminton and cricket, have been used to showcase novel ideas in sports motion tracking. The main challenge in this research is the extraction of highly complex articulated motion from a video scene. Our research focuses on the development of a markerless human motion tracking system that tracks the major body parts of an athlete directly from a sports broadcast video. We propose a hybrid tracking method, which combines three algorithms (pyramidal Lucas-Kanade optical flow (LK), normalised correlation-based template matching and background subtraction), to track the golfer's head, body, hands, shoulders, knees and feet during a full swing. We then match, track and map the results onto a 2D articulated human stick model to represent the pose of the golfer over time. Our work was tested using two video broadcasts of a golfer, and we obtained satisfactory results. The outcomes of this research can play an important role in enhancing the performance of a golfer, can provide vital information to sports medicine practitioners by offering technically sound guidance on movements, and should help diminish the risk of golfing injuries. Copyright © 2013 Elsevier Ltd. All rights reserved.
Print, Broadcast Students Share VDTs at West Fla.
ERIC Educational Resources Information Center
Roberts, Churchill L.; Dickson, Sandra H.
1985-01-01
Describes the use of video display terminals in the journalism lab of a Florida university. Discusses the different purposes for which broadcast and print journalism students use such equipment. (HTH)
Feasibility of video codec algorithms for software-only playback
NASA Astrophysics Data System (ADS)
Rodriguez, Arturo A.; Morse, Ken
1994-05-01
Software-only video codecs can provide good playback performance in desktop computers with a 486 or 68040 CPU running at 33 MHz without special hardware assistance. Typically, playback of compressed video can be categorized into three tasks: the actual decoding of the video stream, color conversion, and the transfer of decoded video data from system RAM to video RAM. By current standards, good playback performance is the decoding and display of video streams of 320 by 240 (or larger) compressed frames at 15 (or greater) frames per second. Software-only video codecs have evolved by modifying and tailoring existing compression methodologies to suit video playback in desktop computers. In this paper we examine the characteristics used to evaluate software-only video codec algorithms, namely: image fidelity (i.e., image quality), bandwidth (i.e., compression), ease of decoding (i.e., playback performance), memory consumption, compression-to-decompression asymmetry, scalability, and delay. We discuss the tradeoffs among these variables and the compromises that can be made to achieve low numerical complexity for software-only playback. Frame-differencing approaches are described, since software-only video codecs typically employ them to enhance playback performance. To complement other papers that appear in this session of the Proceedings, we review methods derived from binary pattern image coding, since these methods are amenable to software-only playback. In particular, we introduce a novel approach called pixel distribution image coding.
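The frame-differencing idea mentioned in the abstract can be sketched as follows; this is a generic illustration of temporal differencing, not the specific codec the paper describes:

```python
import numpy as np

def encode_frame_differences(frames):
    """Emit the first frame intact, then per-pixel differences from the
    previous frame; unchanged regions become zeros, which are cheap to
    code and cheap to skip during playback."""
    prev = None
    for frame in frames:
        frame = np.asarray(frame, dtype=np.int16)  # widen to hold negative deltas
        yield frame if prev is None else frame - prev
        prev = frame

def decode_frame_differences(deltas):
    """Invert the encoder by accumulating the differences."""
    prev = None
    for delta in deltas:
        prev = delta if prev is None else prev + delta
        yield prev
```

The round trip is lossless; real codecs quantize the deltas, which is where the fidelity/bandwidth tradeoff discussed above enters.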
Prediction-guided quantization for video tone mapping
NASA Astrophysics Data System (ADS)
Le Dauphin, Agnès; Boitard, Ronan; Thoreau, Dominique; Olivier, Yannick; Francois, Edouard; LeLéannec, Fabrice
2014-09-01
Tone Mapping Operators (TMOs) compress High Dynamic Range (HDR) content to address Low Dynamic Range (LDR) displays. However, before reaching the end-user, this tone-mapped content is usually compressed for broadcasting or storage purposes. Any TMO includes a quantization step to convert floating-point values to integer ones. In this work, we propose to adapt this quantization, in the loop of an encoder, to reduce the entropy of the tone-mapped video content. Our technique provides an appropriate quantization for each mode of both the intra- and inter-prediction that is performed in the loop of a block-based encoder. The mode that minimizes a rate-distortion criterion uses its associated quantization to provide integer values for the rest of the encoding process. The method has been implemented in HEVC and was tested over two different scenarios: the compression of tone-mapped LDR video content (using HM10.0) and the compression of perceptually encoded HDR content (HM14.0). For all the sequences and TMOs considered, results show average bit-rate reductions at the same PSNR of 20.3% and 27.3% for tone-mapped content, and of 2.4% and 2.7% for HDR content.
The optical design of ultra-short throw system for panel emitted theater video system
NASA Astrophysics Data System (ADS)
Huang, Jiun-Woei
2015-07-01
Over the past decade, the evolution of display formats from HD (High Definition) through Full HD (1920×1080) to UHD (4K×2K) has guided the display industry in two directions: one is the liquid crystal display (LCD), from 10 inches to 100 inches and more; the other is the projector. Although the LCD is popular in the market, production of such displays requires greater capital expenditure and gives less consideration to environmental pollution and protection [1]. A projection system may therefore be preferable, owing to wider viewing access, flexibility in location, energy saving and environmental protection. The aim of this work is to design and fabricate a short-throw liquid crystal on silicon (LCoS) projection system for cinema. It provides a projection lens system, including a telecentric lens fitted to the LCoS panel to collimate light and enlarge the field angle. The optical path is then guided by a symmetric lens. Light from the LCoS panel passes through the lens, hits and reflects off an aspherical mirror, and forms a low-distortion image on a blank wall or screen for home cinema. The throw ratio is less than 0.33.
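The figure of merit quoted at the end of the abstract is simple to compute: throw ratio is the projection distance divided by the projected image width. A minimal sketch (the function name is illustrative):

```python
def throw_ratio(throw_distance, image_width):
    """Throw ratio = distance from lens to screen / projected image width.
    Both arguments must be in the same unit. Ultra-short-throw projectors
    aim for small values; the system in this paper targets below 0.33."""
    return throw_distance / image_width
```

For example, projecting a 100 cm-wide image from 30 cm away gives a ratio of 0.3, under the 0.33 target.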
Display of travelling 3D scenes from single integral-imaging capture
NASA Astrophysics Data System (ADS)
Martinez-Corral, Manuel; Dorado, Adrian; Hong, Seok-Min; Sola-Pikabea, Jorge; Saavedra, Genaro
2016-06-01
Integral imaging (InI) is a 3D auto-stereoscopic technique that captures and displays 3D images. We present a method for easily projecting the information recorded with this technique by transforming the integral image into a plenoptic image, as well as choosing, at will, the field of view (FOV) and the focused plane of the displayed plenoptic image. Furthermore, with this method we can generate, from a single integral image, a sequence of images that simulates a camera travelling through the scene. Applying this method improves the quality of 3D display images and videos.
The USL NASA PC R and D interactive presentation development system
NASA Technical Reports Server (NTRS)
Dominick, Wayne D. (Editor); Moreau, Dennis R.
1984-01-01
The Interactive Presentation Development System (IPDS) is a highly interactive system for creating, editing, and displaying video presentation sequences, e.g., for developing and presenting displays of instructional material similar to overhead transparency or slide presentations. Because the system is PC-based, users (instructors) can step through sequences forward or backward, focusing attention on areas of the display with special cursor pointers. Additionally, screen displays may be dynamically modified during the presentation to show assignments or to answer questions, much like a traditional blackboard. The system is now implemented at the University of Southwestern Louisiana for use within the piloting phases of the NASA contract work.
AOIPS water resources data management system
NASA Technical Reports Server (NTRS)
Vanwie, P.
1977-01-01
The text and computer-generated displays used to demonstrate the AOIPS (Atmospheric and Oceanographic Information Processing System) water resources data management system are investigated. The system was developed to assist hydrologists in analyzing the physical processes occurring in watersheds. It was designed to alleviate some of the problems encountered while investigating the complex interrelationships of variables such as land-cover type, topography, precipitation, snow melt, surface runoff, evapotranspiration, and streamflow rates. The system has an interactive image processing capability and a color video display for presenting results as they are obtained.
Enhanced Eddy-Current Detection Of Weld Flaws
NASA Technical Reports Server (NTRS)
Van Wyk, Lisa M.; Willenberg, James D.
1992-01-01
Mixing of impedances measured at different frequencies reduces noise and helps reveal flaws. In new method, one excites eddy-current probe simultaneously at two different frequencies, usually one an integral multiple of the other. Resistive and reactive components of impedance of eddy-current probe measured at two frequencies, mixed in computer, and displayed in real time on video terminal of computer. Mixing of measurements obtained at two different frequencies often "cleans up" displayed signal in situations in which band-pass filtering alone cannot: mixing removes most noise, and displayed signal resolves flaws well.
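One common way to realise this kind of two-frequency mixing is a least-squares subtraction of one channel from the other, so that a noise component common to both channels cancels while flaw indications survive. The sketch below is illustrative of that general idea, not the exact mixing used in this method:

```python
import numpy as np

def mix_two_frequency_signals(z_f1, z_f2):
    """Subtract the best least-squares scaling of the second-frequency
    channel from the first; components common to both channels (e.g.,
    lift-off noise) largely cancel, leaving flaw indications."""
    z_f1 = np.asarray(z_f1, dtype=float)
    z_f2 = np.asarray(z_f2, dtype=float)
    k = np.dot(z_f1, z_f2) / np.dot(z_f2, z_f2)  # least-squares mixing gain
    return z_f1 - k * z_f2
```

In practice the same mixing is applied separately to the resistive and reactive components before display.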
NASA Astrophysics Data System (ADS)
Veligdan, James T.; Beiser, Leo; Biscardi, Cyrus; Brewster, Calvin; DeSanto, Leonard
1997-07-01
The polyplanar optical display (POD) is a unique display screen which can be used with any projection source. This display screen is 2 inches thick and has a matte black face which allows for high contrast images. The prototype being developed is a form, fit and functional replacement display for the B-52 aircraft which uses a monochrome ten-inch display. The new display uses a 100 milliwatt green solid state laser as its optical source. In order to produce real-time video, the laser light is being modulated by a digital light processing (DLP) chip manufactured by Texas Instruments, Inc. A variable astigmatic focusing system is used to produce a stigmatic image on the viewing face of the POD. In addition to the optical design, we discuss the electronic interfacing to the DLP chip, the opto-mechanical design and viewing angle characteristics.
Laser-driven polyplanar optic display
DOE Office of Scientific and Technical Information (OSTI.GOV)
Veligdan, J.T.; Biscardi, C.; Brewster, C.
1998-01-01
The Polyplanar Optical Display (POD) is a unique display screen which can be used with any projection source. This display screen is 2 inches thick and has a matte-black face which allows for high contrast images. The prototype being developed is a form, fit and functional replacement display for the B-52 aircraft which uses a monochrome ten-inch display. The new display uses a 200 milliwatt green solid-state laser (532 nm) as its optical source. In order to produce real-time video, the laser light is being modulated by a Digital Light Processing (DLP) chip manufactured by Texas Instruments, Inc. A variable astigmatic focusing system is used to produce a stigmatic image on the viewing face of the POD. In addition to the optical design, the authors discuss the DLP chip, the optomechanical design and viewing angle characteristics.
Laser-driven polyplanar optic display
NASA Astrophysics Data System (ADS)
Veligdan, James T.; Beiser, Leo; Biscardi, Cyrus; Brewster, Calvin; DeSanto, Leonard
1998-05-01
The Polyplanar Optical Display (POD) is a unique display screen which can be used with any projection source. This display screen is 2 inches thick and has a matte-black face which allows for high contrast images. The prototype being developed is a form, fit and functional replacement display for the B-52 aircraft which uses a monochrome ten-inch display. The new display uses a 200 milliwatt green solid-state laser (532 nm) as its optical source. In order to produce real-time video, the laser light is being modulated by a Digital Light Processing (DLP™) chip manufactured by Texas Instruments, Inc. A variable astigmatic focusing system is used to produce a stigmatic image on the viewing face of the POD. In addition to the optical design, we discuss the DLP™ chip, the opto-mechanical design and viewing angle characteristics.
Toward enhancing the distributed video coder under a multiview video codec framework
NASA Astrophysics Data System (ADS)
Lee, Shih-Chieh; Chen, Jiann-Jone; Tsai, Yao-Hong; Chen, Chin-Hua
2016-11-01
The advance of video coding technology enables multiview video (MVV) or three-dimensional television (3-D TV) display for users with or without glasses. For mobile devices or wireless applications, a distributed video coder (DVC) can be utilized to shift the encoder complexity to the decoder under the MVV coding framework, denoted as multiview distributed video coding (MDVC). We propose to exploit both inter- and intraview video correlations to enhance the side information (SI) and improve MDVC performance: (1) based on the multiview motion estimation (MVME) framework, a categorized block matching prediction with fidelity weights (COMPETE) is proposed to yield a high-quality SI frame for better DVC reconstructed images; (2) the block transform coefficient properties, i.e., DCs and ACs, are exploited to design a priority rate control for the turbo code, such that DVC decoding can be carried out with the fewest parity bits. The proposed COMPETE method demonstrated lower time complexity while presenting better reconstructed video quality. Simulations show that COMPETE reduces the time complexity of MVME by a factor of 1.29 to 2.56 compared to previous hybrid MVME methods, while the peak signal-to-noise ratios (PSNRs) of decoded video improve by 0.2 to 3.5 dB compared to H.264/AVC intracoding.
Flexible active-matrix displays and shift registers based on solution-processed organic transistors.
Gelinck, Gerwin H; Huitema, H Edzer A; van Veenendaal, Erik; Cantatore, Eugenio; Schrijnemakers, Laurens; van der Putten, Jan B P H; Geuns, Tom C T; Beenhakkers, Monique; Giesbers, Jacobus B; Huisman, Bart-Hendrik; Meijer, Eduard J; Benito, Estrella Mena; Touwslager, Fred J; Marsman, Albert W; van Rens, Bas J E; de Leeuw, Dago M
2004-02-01
At present, flexible displays are an important focus of research. Further development of large, flexible displays requires a cost-effective manufacturing process for the active-matrix backplane, which contains one transistor per pixel. One way to further reduce costs is to integrate (part of) the display drive circuitry, such as row shift registers, directly on the display substrate. Here, we demonstrate flexible active-matrix monochrome electrophoretic displays based on solution-processed organic transistors on 25-microm-thick polyimide substrates. The displays can be bent to a radius of 1 cm without significant loss in performance. Using the same process flow we prepared row shift registers. With 1,888 transistors, these are the largest organic integrated circuits reported to date. More importantly, the operating frequency of 5 kHz is sufficiently high to allow integration with the display operating at video speed. This work therefore represents a major step towards 'system-on-plastic'.
National Weather Service: Watch, Warning, Advisory Display
National Niemann-Pick Disease Foundation
Jia, Jia; Chen, Jhensi; Yao, Jun; Chu, Daping
2017-03-17
A high quality 3D display requires a high amount of optical information throughput, which needs an appropriate mechanism to distribute information in space uniformly and efficiently. This study proposes a front-viewing system which is capable of managing the required amount of information efficiently from a high bandwidth source and projecting 3D images with a decent size and a large viewing angle at video rate in full colour. It employs variable gratings to support a high bandwidth distribution. This concept is scalable and the system can be made compact in size. A horizontal parallax only (HPO) proof-of-concept system is demonstrated by projecting holographic images from a digital micro mirror device (DMD) through rotational tiled gratings before they are realised on a vertical diffuser for front-viewing.
Haptic display for the VR arthroscopy training simulator
NASA Astrophysics Data System (ADS)
Ziegler, Rolf; Brandt, Christoph; Kunstmann, Christian; Mueller, Wolfgang; Werkhaeuser, Holger
1997-05-01
A specific desire to find new training methods arose from the new field of minimally invasive surgery. With technical advances, modern video arthroscopy became the standard procedure in the OR. Holding the optical system with the video camera in one hand and watching the operative field on the monitor, the surgeon has the other hand free to guide, e.g., a probe. As arthroscopy became a more common procedure, it became obvious that special training was necessary to guarantee a certain level of qualification of the surgeons. Therefore, a hospital in Frankfurt, Germany approached the Fraunhofer Institute for Computer Graphics to develop a training system for arthroscopy based on VR techniques. The main drawback of the developed simulator, however, is the lack of haptic perception, especially force feedback. In cooperation with the Department of Electro-Mechanical Construction at Darmstadt Technical University, we have designed and built a haptic display for the VR arthroscopy training simulator. In parallel, we developed a concept for integrating the haptic display in a configurable way.
Using Globe Browsing Systems in Planetariums to Take Audiences to Other Worlds.
NASA Astrophysics Data System (ADS)
Emmart, C. B.
2014-12-01
For the last decade planetariums have been adding "full dome video" capability for both movie playback and interactive display. True scientific data visualization has now come to planetarium audiences as a means to display the actual three-dimensional layout of the universe; the time-based array of planets, minor bodies and spacecraft across the solar system; and now globe browsing systems to examine planetary bodies to the limits of acquired resolution. Additionally, such planetarium facilities can be networked for simultaneous display across the world, widening audience reach and access to authoritative scientist description and commentary. Data repositories such as NASA's Lunar Mapping and Modeling Project (LMMP), NASA GSFC's LANCE-MODIS, and others conforming to the Open Geospatial Consortium (OGC) Web Map Service (WMS) protocol make geospatial data available to a growing number of dome-supporting globe visualization systems. The immersive surround graphics of full dome video replicate our visual system, creating authentic virtual scenes that effectively place audiences on location, in some cases on other worlds mapped only robotically.
Context-dependent JPEG backward-compatible high-dynamic range image compression
NASA Astrophysics Data System (ADS)
Korshunov, Pavel; Ebrahimi, Touradj
2013-10-01
High-dynamic range (HDR) imaging is expected, together with ultrahigh definition and high frame rate video, to become a technology that may change the photo, TV, and film industries. Many cameras and displays capable of capturing and rendering both HDR images and video are already available in the market. The popularity and full public adoption of HDR content is, however, hindered by the lack of standards for quality evaluation, file formats, and compression, as well as the large legacy base of low-dynamic range (LDR) displays that are unable to render HDR. To facilitate the widespread use of HDR, backward compatibility of HDR with commonly used legacy technologies for storage, rendering, and compression of video and images is necessary. Although many tone-mapping algorithms have been developed for generating viewable LDR content from HDR, there is no consensus on which algorithm to use and under which conditions. Via a series of subjective evaluations, we demonstrate the dependency of the perceptual quality of tone-mapped LDR images on the context: environmental factors, display parameters, and the image content itself. Based on the results of the subjective tests, we propose to extend the JPEG file format, the most popular image format, in a backward-compatible manner to also handle HDR images. An architecture to achieve such backward compatibility with JPEG is proposed. A simple implementation of lossy compression demonstrates the efficiency of the proposed architecture compared with the state-of-the-art HDR image compression.
Hsieh, K S; Lin, C C; Liu, W S; Chen, F L
1996-01-01
Two-dimensional echocardiography has long been a standard diagnostic modality for congenital heart disease. Attempts at three-dimensional reconstruction using two-dimensional echocardiographic images to visualize the stereotypic structure of cardiac lesions have been successful only recently. So far, only very few studies have been done to display the three-dimensional anatomy of the heart through two-dimensional image acquisition, because complex procedures are involved. This study introduces a recently developed image acquisition and processing system for dynamic three-dimensional visualization of various congenital cardiac lesions. From December 1994 to April 1995, 35 cases were selected in the Echo Laboratory here from about 3000 completed Echo examinations. Each image was acquired on-line with a specially designed high-resolution image grabber with EKG and respiratory gating. Off-line image processing using a window-architectured interactive software package includes conversion of 2-D echocardiographic pixels to 3-D voxels with transformation from an orthogonal to a rotatory axial system, interpolation, extraction of the region of interest, segmentation, shading and, finally, 3-D rendering. The three-dimensional anatomy of various congenital cardiac defects was shown, including four cases with ventricular septal defects, two cases with atrial septal defects, and two cases with aortic stenosis. Dynamic reconstruction of a "beating heart" was recorded on video tape with a video interface. The potential application of 3-D display reconstructed from 2-D echocardiographic images for the diagnosis of various congenital heart defects has been shown. The 3-D display was able to improve the diagnostic ability of echocardiography, and clear-cut display of the various congenital cardiac defects and valvular stenosis could be demonstrated. Reinforcement of current techniques will expand future application of 3-D display of conventional 2-D images.
Liquid crystal display (LCD) drive electronics
NASA Astrophysics Data System (ADS)
Loudin, Jeffrey A.; Duffey, Jason N.; Booth, Joseph J.; Jones, Brian K.
1995-03-01
A new drive circuit for the liquid crystal display (LCD) of the InFocus TVT-6000 video projector is currently under development at the U.S. Army Missile Command. The new circuit will allow individual pixel control of the LCD and increase the frame rate by a factor of two while yielding a major reduction in space and power requirements. This paper will discuss results of the effort to date.
Advanced Extravehicular Mobility Unit Informatics Software Design
NASA Technical Reports Server (NTRS)
Wright, Theodore
2014-01-01
This is a description of the software design for the 2013 edition of the Advanced Extravehicular Mobility Unit (AEMU) Informatics computer assembly. The Informatics system is an optional part of the space suit assembly. It adds a graphical interface for displaying suit status, timelines, procedures, and caution and warning information. In the future it will display maps with GPS position data, and video and still images captured by the astronaut.
Open Source Subtitle Editor Software Study for Section 508 Close Caption Applications
NASA Technical Reports Server (NTRS)
Murphy, F. Brandon
2013-01-01
This paper will focus on a specific item within the NASA Electronic Information Accessibility Policy - multimedia presentations shall have synchronized captions, thus making information accessible to persons with hearing impairment. Synchronized captions assist a person with a hearing or cognitive disability in accessing the same information as everyone else. This paper focuses on the research and implementation of CC (subtitle option) support for video multimedia. The goal of this research is to identify the best available open-source (free) software to achieve the synchronized caption requirement and achieve savings, while meeting the security requirements for Government information integrity and assurance. CC and subtitling are processes that display text within a video to provide additional or interpretive information for those who may need it or those who choose it. Closed captions typically show the transcription of the audio portion of a program (video) as it occurs (either verbatim or in edited form), sometimes including non-speech elements (such as sound effects). The transcript can be provided by a third-party source or can be extracted word for word from the video. This feature can be made available for videos in two forms: Soft-Coded or Hard-Coded. Soft-Coded is the optional version of CC: captions can be turned on or off. Most of the time, when using the Soft-Coded option, the transcript is also provided to the viewer alongside the video. This option is subject to compromise, as the transcript is merely a text file that can be changed by anyone who has access to it; with this option, the integrity of the CC is at the mercy of the user. Hard-Coded CC is a more permanent form of CC: a Hard-Coded transcript is embedded within the video, without the option of removal.
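Soft-coded caption data of the kind discussed above is commonly stored in the SubRip (.srt) format: a numbered cue, a `HH:MM:SS,mmm --> HH:MM:SS,mmm` time range, the caption text, and a blank line. A minimal writer sketch (function names are illustrative, not taken from any of the surveyed tools):

```python
def srt_timestamp(seconds):
    """Format a time in seconds as the SubRip HH:MM:SS,mmm timestamp."""
    ms = int(round(seconds * 1000))
    h, ms = divmod(ms, 3_600_000)
    m, ms = divmod(ms, 60_000)
    s, ms = divmod(ms, 1_000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def to_srt(cues):
    """Render (start_seconds, end_seconds, text) cues as an .srt document."""
    blocks = []
    for index, (start, end, text) in enumerate(cues, start=1):
        blocks.append(f"{index}\n{srt_timestamp(start)} --> {srt_timestamp(end)}\n{text}\n")
    return "\n".join(blocks)
```

Bracketed non-speech cues such as `[door slams]` are simply caption text in this format; hard-coding then means burning the rendered text into the video frames.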
Accuracy of pulse oximetry measurement of heart rate of newborn infants in the delivery room.
Kamlin, C Omar F; Dawson, Jennifer A; O'Donnell, Colm P F; Morley, Colin J; Donath, Susan M; Sekhon, Jasbir; Davis, Peter G
2008-06-01
To determine the accuracy of heart rate obtained by pulse oximetry (HR(PO)) relative to HR obtained by 3-lead electrocardiography (HR(ECG)) in newborn infants in the delivery room. Immediately after birth, a preductal PO sensor and ECG leads were applied. PO and ECG monitor displays were recorded by a video camera. Two investigators reviewed the videos. Every two seconds, 1 of the investigators recorded HR(PO) and indicators of signal quality from the oximeter while masked to ECG, whereas the other recorded HR(ECG) and ECG signal quality while masked to PO. HR(PO) and HR(ECG) measurements were compared using Bland-Altman analysis. We attended 92 deliveries; 37 infants were excluded due to equipment malfunction. The 55 infants studied had a mean (+/-standard deviation [SD]) gestational age of 35 (+/-3.7) weeks, and birth weight 2399 (+/-869) g. In total, we analyzed 5877 data pairs. The mean difference (+/-2 SD) between HR(ECG) and HR(PO) was -2 (+/-26) beats per minute (bpm) overall and -0.5 (+/-16) bpm in those infants who received positive-pressure ventilation and/or cardiac massage. The sensitivity and specificity of PO for detecting HR(ECG) <100 bpm was 89% and 99%, respectively. PO provided an accurate display of newborn infants' HR in the delivery room, including those infants receiving advanced resuscitation.
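The Bland-Altman comparison used in this study reduces to the mean of the paired differences (the bias) and limits of agreement at mean ± 2 SD, matching the "-2 (+/-26) bpm" style of reporting above. A minimal sketch (function name assumed):

```python
import statistics

def bland_altman_limits(method_a, method_b):
    """Return (bias, lower, upper) for paired measurements from two methods:
    the mean difference and the mean +/- 2 SD limits of agreement."""
    diffs = [a - b for a, b in zip(method_a, method_b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)  # sample standard deviation of the differences
    return bias, bias - 2.0 * sd, bias + 2.0 * sd
```

Note that many presentations use 1.96 SD rather than 2 SD; the abstract's "+/-2 SD" convention is followed here.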
Simulating video-assisted thoracoscopic lobectomy: a virtual reality cognitive task simulation.
Solomon, Brian; Bizekis, Costas; Dellis, Sophia L; Donington, Jessica S; Oliker, Aaron; Balsam, Leora B; Zervos, Michael; Galloway, Aubrey C; Pass, Harvey; Grossi, Eugene A
2011-01-01
Current video-assisted thoracoscopic surgery training models rely on animals or mannequins to teach procedural skills. These approaches lack inherent teaching/testing capability and are limited by cost, anatomic variations, and single use. In response, we hypothesized that video-assisted thoracoscopic surgery right upper lobe resection could be simulated in a virtual reality environment with commercial software. An anatomy explorer (Maya [Autodesk Inc, San Rafael, Calif] models of the chest and hilar structures) and simulation engine were adapted. Design goals included freedom of port placement, incorporation of well-known anatomic variants, teaching and testing modes, haptic feedback for the dissection, ability to perform the anatomic divisions, and a portable platform. Preexisting commercial models did not provide sufficient surgical detail, and extensive modeling modifications were required. Video-assisted thoracoscopic surgery right upper lobe resection simulation is initiated with a random vein and artery variation. The trainee proceeds in a teaching or testing mode. A knowledge database currently includes 13 anatomic identifications and 20 high-yield lung cancer learning points. The "patient" is presented in the left lateral decubitus position. After initial camera port placement, the endoscopic view is displayed and the thoracoscope is manipulated via the haptic device. The thoracoscope port can be relocated; additional ports are placed using an external "operating room" view. Unrestricted endoscopic exploration of the thorax is allowed. An endo-dissector tool allows for hilar dissection, and a virtual stapling device divides structures. The trainee's performance is reported. A virtual reality cognitive task simulation can overcome the deficiencies of existing training models. Performance scoring is being validated as we assess this simulator for cognitive and technical surgical education. Copyright © 2011. Published by Mosby, Inc.
Helping Video Games Rewire "Our Minds"
NASA Technical Reports Server (NTRS)
Pope, Alan T.; Palsson, Olafur S.
2001-01-01
Biofeedback-modulated video games are games that respond to physiological signals as well as mouse, joystick or game controller input; they embody the concept of improving physiological functioning by rewarding specific healthy body signals with success at playing a video game. The NASA patented biofeedback-modulated game method blends biofeedback into popular off-the-shelf video games in such a way that the games do not lose their entertainment value. This method uses physiological signals (e.g., electroencephalogram frequency band ratio) not simply to drive a biofeedback display directly, or periodically modify a task as in other systems, but to continuously modulate parameters (e.g., game character speed and mobility) of a game task in real time while the game task is being performed by other means (e.g., a game controller). Biofeedback-modulated video games represent a new generation of computer and video game environments that train valuable mental skills beyond eye-hand coordination. These psychophysiological training technologies are poised to exploit the revolution in interactive multimedia home entertainment for the personal improvement, not just the diversion, of the user.
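The continuous-modulation idea described above, where a physiological index scales a game parameter in real time rather than driving a biofeedback display directly, can be sketched in a few lines. The function name, the gain, and the clipping bounds below are illustrative assumptions, not values from the NASA patent:

```python
def speed_multiplier(beta_power, theta_power, gain=0.5, lo=0.25, hi=1.0):
    """Map an EEG engagement index (beta/theta band-power ratio) to a
    game-speed factor. High engagement lets the character move at full
    speed; low engagement slows it, rewarding the desired brain state."""
    ratio = beta_power / max(theta_power, 1e-9)  # guard against divide-by-zero
    return max(lo, min(hi, gain * ratio))

# Inside the game loop, the factor would scale a base parameter each frame,
# while the player keeps controlling the character with an ordinary gamepad:
# character.speed = BASE_SPEED * speed_multiplier(beta, theta)
```

An engaged player (high beta/theta ratio) keeps the multiplier pinned at 1.0; a disengaged one is clipped to the floor of 0.25, so the game stays playable but noticeably sluggish.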
Maier, Hans; de Heer, Gert; Ortac, Ajda; Kuijten, Jan
2015-11-01
To analyze, interpret and evaluate microscopic images, used in medical diagnostics and forensic science, video images for educational purposes were made with a very high resolution of 4096 × 2160 pixels (4K), which is more than four times as many pixels as High-Definition video (1920 × 1080 pixels). The unprecedented high resolution makes it possible to see details that remain invisible in any other video format. The images of the specimens (blood cells, tissue sections, hair, fibre, etc.) are recorded using a 4K video camera attached to a light microscope. After processing, this resulted in very sharp and highly detailed images. This material was then used in education for classroom discussion. Spoken explanation by experts in the field of medical diagnostics and forensic science was also added to the high-resolution video images to make them suitable for self-study. © 2015 The Authors. Journal of Microscopy published by John Wiley & Sons Ltd on behalf of Royal Microscopical Society.
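The resolution comparison is simple arithmetic; the 4096 × 2160 format used here carries slightly more than four Full-HD frames' worth of pixels:

```python
pixels_4k = 4096 * 2160   # 4K recording resolution from the abstract
pixels_hd = 1920 * 1080   # High-Definition video
ratio = pixels_4k / pixels_hd
print(f"{pixels_4k} vs {pixels_hd} pixels, ratio {ratio:.2f}")  # ratio 4.27
```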
Strategies for combining physics videos and virtual laboratories in the training of physics teachers
NASA Astrophysics Data System (ADS)
Dickman, Adriana; Vertchenko, Lev; Martins, Maria Inés
2007-03-01
Among the multimedia resources used in physics education, the most prominent are virtual laboratories and videos. On one hand, computer simulations and applets have very attractive graphic interfaces, showing a remarkable amount of detail and movement. On the other hand, videos offer the possibility of displaying high quality images, and are becoming more feasible with the increasing availability of digital resources. We believe it is important to discuss, throughout the teacher training program, both the functionality of information and communication technology (ICT) in physics education and the varied applications of these resources. In our work we suggest introducing ICT resources in a sequence that integrates these important tools into the teacher training program, as opposed to the traditional approach, in which virtual laboratories and videos are introduced separately. From this perspective, when we introduce and utilize virtual laboratory techniques we also demonstrate their use in videos, taking advantage of their graphic interfaces. Thus the students in our program learn to use instructional software in the production of videos for classroom use.
Kwon, Min-Woo; Kim, Seung-Cheol; Kim, Eun-Soo
2016-01-20
A three-directional motion-compensation mask-based novel look-up table method is proposed and implemented on graphics processing units (GPUs) for video-rate generation of digital holographic videos of three-dimensional (3D) scenes. Since the proposed method is designed to be well matched with the software and memory structures of GPUs, the number of compute-unified-device-architecture kernel function calls can be significantly reduced. This results in a great increase of the computational speed of the proposed method, allowing video-rate generation of the computer-generated hologram (CGH) patterns of 3D scenes. Experimental results reveal that the proposed method can generate 39.8 frames of Fresnel CGH patterns with 1920×1080 pixels per second for the test 3D video scenario with 12,088 object points on dual GPU boards of NVIDIA GTX TITANs, and they confirm the feasibility of the proposed method in the practical application fields of electroholographic 3D displays.
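For reference, the quantity that look-up-table CGH methods like the one above accelerate is the Fresnel phase contribution of every object point, accumulated at every hologram pixel. A minimal brute-force sketch is shown below; the wavelength, pixel pitch, and the paraxial phase kernel are standard textbook assumptions, not details taken from the paper, and real GPU implementations precompute these exponentials per depth plane instead of evaluating them inline:

```python
import cmath
import math

WAVELENGTH = 633e-9   # He-Ne red, metres (assumed)
PIXEL_PITCH = 8e-6    # hologram/SLM pixel pitch, metres (assumed)

def fresnel_cgh(points, width, height):
    """Accumulate the paraxial Fresnel phase exp(j*pi*r^2/(lambda*z)) of
    each 3D object point (x0, y0, z0, amplitude) at every hologram pixel.
    This is the O(points * pixels) reference computation that LUT-based
    GPU methods are designed to speed up."""
    field = [[0j] * width for _ in range(height)]
    for (x0, y0, z0, amp) in points:
        k = math.pi / (WAVELENGTH * z0)
        for j in range(height):
            y = (j - height / 2) * PIXEL_PITCH
            for i in range(width):
                x = (i - width / 2) * PIXEL_PITCH
                r2 = (x - x0) ** 2 + (y - y0) ** 2
                field[j][i] += amp * cmath.exp(1j * k * r2)
    return field

# Single on-axis point 0.1 m behind a tiny 16x16 hologram patch:
h = fresnel_cgh([(0.0, 0.0, 0.1, 1.0)], 16, 16)
```

At 1920 × 1080 pixels and thousands of object points this inner loop is why a 12,088-point scene needs dual GPUs for video-rate generation; the abstract's point is that matching the LUT layout to GPU memory structure cuts the number of kernel calls.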