77 FR 9964 - Certain Video Displays and Products Using and Containing Same
Federal Register 2010, 2011, 2012, 2013, 2014
2012-02-21
... INTERNATIONAL TRADE COMMISSION [Investigation No. 337-TA-828] Certain Video Displays and Products... importation, and the sale within the United States after importation of certain video displays and products... States, the sale for importation, or the sale within the United States after importation of certain video...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-01-20
... INTERNATIONAL TRADE COMMISSION [DN 2871] Certain Video Displays and Products Using and Containing... Trade Commission has received a complaint entitled In Re Certain Video Displays and Products Using and... for importation, and the sale within the United States after importation of certain video displays and...
Prevention: lessons from video display installations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Margach, C.B.
1983-04-01
Workers interacting with video display units for periods in excess of two hours per day report significantly increased visual discomfort, fatigue and inefficiencies, as compared with workers performing similar tasks, but without the video viewing component. Difficulties in focusing and the appearance of myopia are among the problems being described. With a view to preventing or minimizing such problems, principles and procedures are presented providing for (a) modification of physical features of the video workstation and (b) improvement in the visual performances of the individual video unit operator.
Feasibility study of utilizing ultraportable projectors for endoscopic video display (with videos).
Tang, Shou-Jiang; Fehring, Amanda; Mclemore, Mac; Griswold, Michael; Wang, Wanmei; Paine, Elizabeth R; Wu, Ruonan; To, Filip
2014-10-01
Modern endoscopy requires video display. Recent miniaturized, ultraportable projectors are affordable, durable, and offer quality image display. Explore feasibility of using ultraportable projectors in endoscopy. Prospective bench-top comparison; clinical feasibility study. Masked comparison study of images displayed via 2 Samsung ultraportable light-emitting diode projectors (pocket-sized SP-HO3; pico projector SP-P410M) and 1 Microvision Showwx-II Laser pico projector. BENCH-TOP FEASIBILITY STUDY: Prerecorded endoscopic video was streamed via computer. CLINICAL COMPARISON STUDY: Live high-definition endoscopy video was simultaneously displayed through each processor onto a standard liquid crystal display monitor and projected onto a portable, pull-down projection screen. Endoscopists, endoscopy nurses, and technicians rated video images; ratings were analyzed by linear mixed-effects regression models with random intercepts. All projectors were easy to set up, adjust, focus, and operate, with no real-time lapse for any. Bench-top study outcomes: Samsung pico preferred to Laser pico, overall rating 1.5 units higher (95% confidence interval [CI] = 0.7-2.4), P < .001; Samsung pocket preferred to Laser pico, 3.3 units higher (95% CI = 2.4-4.1), P < .001; Samsung pocket preferred to Samsung pico, 1.7 units higher (95% CI = 0.9-2.5), P < .001. The clinical comparison study confirmed the Samsung pocket projector as best, with a higher overall rating of 2.3 units (95% CI = 1.6-3.0), P < .001, than Samsung pico. Low brightness currently limits pico projector use in clinical endoscopy. The pocket projector, with higher brightness levels (170 lumens), is clinically useful. Continued improvements to ultraportable projectors will supply a needed niche in endoscopy through portability, reduced cost, and equal or better image quality. © The Author(s) 2013.
IVTS-CEV (Interactive Video Tape System-Combat Engineer Vehicle) Gunnery Trainer.
1981-07-01
video game technology developed for and marketed in consumer video games. The IVTS/CEV is a conceptual/breadboard-level classroom interactive training system designed to train Combat Engineer Vehicle (CEV) gunners in target acquisition and engagement with the main gun. The concept demonstration consists of two units: a gunner station and a display module. The gunner station has optics and gun controls replicating those of the CEV gunner station. The display module contains a standard large-screen color video monitor and a video tape player. The gunner’s sight
Improving School Lighting for Video Display Units.
ERIC Educational Resources Information Center
Parker-Jenkins, Marie; Parker-Jenkins, William
1985-01-01
Provides information to identify and implement the key characteristics which contribute to an efficient and comfortable visual display unit (VDU) lighting installation. Areas addressed include VDU lighting requirements, glare, lighting controls, VDU environment, lighting retrofit, optical filters, and lighting recommendations. A checklist to…
47 CFR 79.103 - Closed caption decoder requirements for apparatus.
Code of Federal Regulations, 2014 CFR
2014-10-01
... RADIO SERVICES ACCESSIBILITY OF VIDEO PROGRAMMING Apparatus § 79.103 Closed caption decoder requirements... video programming transmitted simultaneously with sound, if such apparatus is manufactured in the United... with built-in closed caption decoder circuitry or capability designed to display closed-captioned video...
Patterned Video Sensors For Low Vision
NASA Technical Reports Server (NTRS)
Juday, Richard D.
1996-01-01
Miniature video cameras containing photoreceptors arranged in prescribed non-Cartesian patterns are proposed to compensate partly for some visual defects. The cameras, accompanied by (and possibly integrated with) miniature head-mounted video display units, would restore some visual function in humans whose visual fields are reduced by defects such as retinitis pigmentosa.
Wrap-Around Out-the-Window Sensor Fusion System
NASA Technical Reports Server (NTRS)
Fox, Jeffrey; Boe, Eric A.; Delgado, Francisco; Secor, James B.; Clark, Michael R.; Ehlinger, Kevin D.; Abernathy, Michael F.
2009-01-01
The Advanced Cockpit Evaluation System (ACES) includes communication, computing, and display subsystems, mounted in a van, that synthesize out-the-window views to approximate the views of the outside world as it would be seen from the cockpit of a crewed spacecraft, aircraft, or remote control of a ground vehicle or UAV (unmanned aerial vehicle). The system includes five flat-panel display units arranged approximately in a semicircle around an operator, like cockpit windows. The scene displayed on each panel represents the view through the corresponding cockpit window. Each display unit is driven by a personal computer equipped with a video-capture card that accepts live input from any of a variety of sensors (typically, visible and/or infrared video cameras). Software running in the computers blends the live video images with synthetic images that could be generated, for example, from heads-up-display outputs, waypoints, corridors, or from satellite photographs of the same geographic region. Data from a Global Positioning System receiver and an inertial navigation system aboard the remote vehicle are used by the ACES software to keep the synthetic and live views in registration. If the live image were to fail, the synthetic scenes could still be displayed to maintain situational awareness.
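The ACES fusion step above blends live sensor video with registered synthetic imagery. A minimal sketch of that blend, assuming simple per-pixel alpha compositing (the alpha value and frame contents are illustrative, not from the ACES software):

```python
import numpy as np

def blend(live, synthetic, alpha=0.6):
    """Alpha-blend a live sensor frame with a registered synthetic frame,
    a minimal stand-in for the ACES fusion step (alpha chosen arbitrarily)."""
    return alpha * live + (1.0 - alpha) * synthetic

live = np.full((2, 2), 200.0)    # bright live video patch
synth = np.full((2, 2), 100.0)   # darker synthetic overlay
out = blend(live, synth)
print(out[0, 0])  # → 160.0
```

If the live feed fails, passing `alpha=0.0` degrades gracefully to the pure synthetic scene, mirroring the fallback behavior the abstract describes.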
Payload specialist station study. Part 2: CEI specifications (part 1). [space shuttles
NASA Technical Reports Server (NTRS)
1976-01-01
The performance, design, and verification specifications are established for the multifunction display system (MFDS) to be located at the payload station in the shuttle orbiter aft flight deck. The system provides the display units (with video, alphanumeric, and graphics capabilities), the associated electronic units, and the keyboards in support of the payload-dedicated controls and displays concept.
Help for the Visually Impaired
NASA Technical Reports Server (NTRS)
1995-01-01
The Low Vision Enhancement System (LVES) is a video headset that offers people with low vision a view of their surroundings equivalent to the image on a five-foot television screen four feet from the viewer. It will not make the blind see but for many people with low vision, it eases everyday activities such as reading, watching TV and shopping. LVES was developed over almost a decade of cooperation between Stennis Space Center, the Wilmer Eye Institute of the Johns Hopkins Medical Institutions, the Department of Veteran Affairs, and Visionics Corporation. With the aid of Stennis scientists, Wilmer researchers used NASA technology for computer processing of satellite images and head-mounted vision enhancement systems originally intended for the space station. The unit consists of a head-mounted video display, three video cameras, and a control unit for the cameras. The cameras feed images to the video display in the headset.
Carlson, Jay; Kowalczuk, Jędrzej; Psota, Eric; Pérez, Lance C
2012-01-01
Robotic surgical platforms require vision feedback systems, which often consist of low-resolution, expensive, single-imager analog cameras. These systems are retooled for 3D display by simply doubling the cameras and outboard control units. Here, a fully-integrated digital stereoscopic video camera employing high-definition sensors and a class-compliant USB video interface is presented. This system can be used with low-cost PC hardware and consumer-level 3D displays for tele-medical surgical applications including military medical support, disaster relief, and space exploration.
Use of videotape for off-line viewing of computer-assisted radionuclide cardiology studies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thrall, J.H.; Pitt, B.; Marx, R.S.
1978-02-01
Videotape offers an inexpensive method for off-line viewing of dynamic radionuclide cardiac studies. Two approaches to videotaping have been explored and demonstrated to be feasible. In the first, a video camera in conjunction with a cassette-type recorder is used to record from the computer display scope. Alternatively, for computer systems already linked to video display units, the video signal can be routed directly to the recorder. Acceptance and use of tracer cardiology studies will be enhanced by increased availability of the studies for clinical review. Videotape offers an inexpensive flexible means of achieving this.
Travel guidance system for vehicles
DOE Office of Scientific and Technical Information (OSTI.GOV)
Takanabe, K.; Yamamoto, M.; Ito, K.
1987-02-24
A travel guidance system is described for vehicles including: a heading sensor for detecting a direction of movement of a vehicle; a distance sensor for detecting a distance traveled by the vehicle; a map data storage medium preliminarily storing map data; a control unit for receiving a heading signal from the heading sensor and a distance signal from the distance sensor to successively compute a present position of the vehicle and for generating video signals corresponding to display data including map data from the map data storage medium and data of the present position; and a display having first and second display portions and responsive to the video signals from the control unit to display on the first display portion a map and a present position mark, in which: the map data storage medium comprises means for preliminarily storing administrative division name data and landmark data; and the control unit comprises: landmark display means for (1) determining a landmark closest to the present position, (2) causing a position of the landmark to be displayed on the map, and (3) retrieving a landmark message concerning the landmark from the storage medium to cause the display to display the landmark message on the second display portion; division name display means for retrieving the name of an administrative division to which the present position belongs from the storage medium and causing the display to display a division name message on the second display portion; and selection means for selectively actuating at least one of the landmark display means and the division name display means.
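The patent's control unit successively computes the present position from heading and distance signals. That dead-reckoning update can be sketched as follows (coordinate convention and sensor values are illustrative, not from the patent):

```python
import math

def dead_reckon(pos_xy, heading_deg, distance):
    """Advance an (x, y) position by `distance` along `heading_deg`
    (degrees clockwise from north), the way a travel-guidance control
    unit might integrate heading- and distance-sensor readings."""
    x, y = pos_xy
    theta = math.radians(heading_deg)
    return (x + distance * math.sin(theta),  # east component
            y + distance * math.cos(theta))  # north component

# Integrate a short trip: 100 m north, then 100 m east.
pos = (0.0, 0.0)
for heading, dist in [(0.0, 100.0), (90.0, 100.0)]:
    pos = dead_reckon(pos, heading, dist)
print(pos)  # ≈ (100.0, 100.0)
```

Successive calls accumulate sensor error, which is why the patent pairs the computed position with stored map and landmark data for display-based confirmation.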
Development of a Low Cost Graphics Terminal.
ERIC Educational Resources Information Center
Lehr, Ted
1985-01-01
Describes modifications made to expand the capabilities of a display unit (Lear Siegler ADM-3A) to include medium-resolution graphics. The modifying circuitry is detailed, along with software subroutines written in Z-80 machine language for controlling the video display. (JN)
The Video PATSEARCH System: An Interview with Peter Urbach.
ERIC Educational Resources Information Center
Videodisc/Videotext, 1982
1982-01-01
The Video PATSEARCH system consists of a microcomputer with a special keyboard and two display screens which accesses the PATSEARCH database of United States government patents on the Bibliographic Retrieval Services (BRS) search system. The microcomputer retrieves text from BRS and matching graphics from an analog optical videodisc. (Author/JJD)
Spatiotemporal video deinterlacing using control grid interpolation
NASA Astrophysics Data System (ADS)
Venkatesan, Ragav; Zwart, Christine M.; Frakes, David H.; Li, Baoxin
2015-03-01
With the advent of progressive format display and broadcast technologies, video deinterlacing has become an important video-processing technique. Numerous approaches exist in the literature to accomplish deinterlacing. While most earlier methods were simple linear filtering-based approaches, the emergence of faster computing technologies and even dedicated video-processing hardware in display units has allowed higher quality but also more computationally intense deinterlacing algorithms to become practical. Most modern approaches analyze motion and content in video to select different deinterlacing methods for various spatiotemporal regions. We introduce a family of deinterlacers that employs spectral residue to choose between and weight control grid interpolation based spatial and temporal deinterlacing methods. The proposed approaches perform better than the prior state-of-the-art based on peak signal-to-noise ratio, other visual quality metrics, and simple perception-based subjective evaluations conducted by human viewers. We further study the advantages of using soft and hard decision thresholds on the visual performance.
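The motion- and content-adaptive selection the abstract describes can be illustrated with a generic motion-adaptive deinterlacer: missing lines come from a blend of spatial line averaging and the previous frame, weighted by a local motion measure. This is a simplified stand-in, not the paper's control-grid-interpolation or spectral-residue method, and the weighting constant `k` is arbitrary:

```python
import numpy as np

def deinterlace_field(prev_frame, field, top=True, k=10.0):
    """Motion-adaptive deinterlacing sketch: fill the missing lines of a
    field with a per-pixel blend of spatial interpolation and the
    co-located lines of the previous full frame."""
    h2, w = field.shape                        # field holds every other line
    out = np.empty((2 * h2, w), dtype=float)
    keep = slice(0, None, 2) if top else slice(1, None, 2)
    fill = slice(1, None, 2) if top else slice(0, None, 2)
    out[keep] = field
    below = np.vstack([field, field[-1:]])[1:]  # next kept line (edge replicated)
    spatial = (field + below) / 2.0             # average of neighbouring lines
    temporal = prev_frame[fill]                 # co-located lines, previous frame
    motion = np.abs(field - prev_frame[keep])   # crude per-pixel motion cue
    w_t = np.exp(-motion / k)                   # still regions → trust temporal
    out[fill] = w_t * temporal + (1.0 - w_t) * spatial
    return out
```

On a static scene the motion cue is zero, so the temporal branch reconstructs the previous frame exactly; on moving content the weight shifts toward the artifact-free spatial estimate, which is the trade-off the paper's soft thresholds also navigate.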
Advanced Extravehicular Mobility Unit Informatics Software Design
NASA Technical Reports Server (NTRS)
Wright, Theodore
2014-01-01
This is a description of the software design for the 2013 edition of the Advanced Extravehicular Mobility Unit (AEMU) Informatics computer assembly. The Informatics system is an optional part of the space suit assembly. It adds a graphical interface for displaying suit status, timelines, procedures, and caution and warning information. In the future it will display maps with GPS position data, and video and still images captured by the astronaut.
Design of video interface conversion system based on FPGA
NASA Astrophysics Data System (ADS)
Zhao, Heng; Wang, Xiang-jun
2014-11-01
This paper presents an FPGA-based video interface conversion system that enables inter-conversion between digital and analog video. A Cyclone IV series EP4CE22F17C chip from Altera Corporation is used as the main video processing chip, and a single-chip microcontroller serves as the information interaction control unit between the FPGA and a PC. The system is able to encode/decode messages from the PC. Technologies including video decoding/encoding circuits, the bus communication protocol, data stream de-interleaving and de-interlacing, color space conversion, and the Camera Link timing generator module of the FPGA are introduced. The system converts the Composite Video Broadcast Signal (CVBS) from a CCD camera into Low Voltage Differential Signaling (LVDS), which is collected by a video processing unit with a Camera Link interface. The processed video signals are then input to the system output board and displayed on the monitor. The current experiment shows that the system achieves high-quality video conversion with minimum board size.
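One stage named in this pipeline, color space conversion, reduces to a fixed matrix multiply that maps naturally onto FPGA multiplier blocks. A reference-model sketch using the standard BT.601 full-range RGB-to-YCbCr coefficients (the standard coefficients, not values taken from this paper's design):

```python
import numpy as np

# BT.601 full-range RGB -> YCbCr conversion matrix.
M = np.array([[ 0.299,     0.587,     0.114    ],
              [-0.168736, -0.331264,  0.5      ],
              [ 0.5,      -0.418688, -0.081312 ]])

def rgb_to_ycbcr(rgb):
    """rgb: (..., 3) array in [0, 255]. Returns Y in [0, 255] and
    Cb/Cr offset by 128, per the usual fixed-point video convention."""
    ycc = rgb @ M.T
    ycc[..., 1:] += 128.0
    return ycc

print(rgb_to_ycbcr(np.array([255.0, 255.0, 255.0])))  # white → [255. 128. 128.]
```

In hardware the same arithmetic is typically done with scaled integer coefficients and pipelined multiply-accumulate stages; the floating-point model above serves as the golden reference for verifying such an implementation.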
Flat-panel display solutions for ground-environment military displays (Invited Paper)
NASA Astrophysics Data System (ADS)
Thomas, J., II; Roach, R.
2005-05-01
Displays for military vehicles have very distinct operational and cost requirements that differ from other military applications. These requirements demand that display suppliers to Army and Marine ground environments provide low-cost equipment capable of operation across environmental extremes. Inevitably, COTS components form the foundation of these "affordable" display solutions. This paper will outline the major display requirements and review the options that satisfy conflicting and difficult operational demands, using newly developed equipment as an example. Recently, a new supplier was selected for the Drivers Vision Enhancer (DVE) equipment, including the Display Control Module (DCM). The paper will outline the DVE and describe development of a new DCM solution. The DVE programme, with several thousand units presently in service and operational in conflicts such as "Operation Iraqi Freedom", represents a critical balance between cost and performance. We shall describe design considerations that include selection of COTS sources; the need to minimise display modification; video, power, and operator interfaces; and new provisions to optimise displayed video content.
NASA Astrophysics Data System (ADS)
Deckard, Michael; Ratib, Osman M.; Rubino, Gregory
2002-05-01
Our project was to design and implement a ceiling-mounted multi monitor display unit for use in a high-field MRI surgical suite. The system is designed to simultaneously display images/data from four different digital and/or analog sources with: minimal interference from the adjacent high magnetic field, minimal signal-to-noise/artifact contribution to the MRI images and compliance with codes and regulations for the sterile neuro-surgical environment. Provisions were also made to accommodate the importing and exporting of video information via PACS and remote processing/display for clinical and education uses. Commercial fiber optic receivers/transmitters were implemented along with supporting video processing and distribution equipment to solve the video communication problem. A new generation of high-resolution color flat panel displays was selected for the project. A custom-made monitor mount and in-suite electronics enclosure was designed and constructed at UCLA. Difficulties with implementing an isolated AC power system are discussed and a work-around solution presented.
NASA Astrophysics Data System (ADS)
Figl, Michael; Birkfellner, Wolfgang; Watzinger, Franz; Wanschitz, Felix; Hummel, Johann; Hanel, Rudolf A.; Ewers, Rolf; Bergmann, Helmar
2002-05-01
Two main concepts of head-mounted displays (HMDs) for augmented reality (AR) visualization exist: the optical see-through and the video see-through type. Several research groups have pursued both approaches to utilizing HMDs for computer-aided surgery. While the hardware requirements for a video see-through HMD to achieve acceptable time delay and frame rate seem to be enormous, the clinical acceptance of such a device is doubtful from a practical point of view. Starting from previous work on displaying additional computer-generated graphics in operating microscopes, we have adapted a miniature head-mounted operating microscope for AR by integrating two very small computer displays. To calibrate the projection parameters of this so-called Varioscope AR, we used Tsai's algorithm for camera calibration. Connection to a surgical navigation system was performed by defining an open interface to the control unit of the Varioscope AR. The control unit consists of a standard PC with a dual-head graphics adapter to render and display the desired augmentation of the scene. We connected this control unit to a computer-aided surgery (CAS) system through a TCP/IP interface. In this paper we present the control unit for the HMD and its software design. We tested two different optical tracking systems: the FlashPoint (Image Guided Technologies, Boulder, CO), which provided about 10 frames per second, and the Polaris (Northern Digital, Ontario, Canada), which provided at least 30 frames per second, both with a time delay of one frame.
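The projection parameters that Tsai's algorithm estimates feed a forward pinhole model mapping tracked 3D points into the display. A sketch of that forward projection (intrinsic values are illustrative; lens distortion, which Tsai's model also estimates, is omitted):

```python
import numpy as np

def project(K, R, t, X):
    """Pinhole forward projection used by calibration methods such as
    Tsai's: world point X -> pixel (u, v), given intrinsics K and
    camera pose (R, t)."""
    x_cam = R @ X + t          # world -> camera coordinates
    uvw = K @ x_cam            # camera -> homogeneous image coordinates
    return uvw[:2] / uvw[2]    # perspective divide

K = np.array([[800.0,   0.0, 320.0],    # fx, skew, cx (hypothetical values)
              [  0.0, 800.0, 240.0],    # fy, cy
              [  0.0,   0.0,   1.0]])
R, t = np.eye(3), np.zeros(3)           # camera at the world origin
print(project(K, R, t, np.array([0.0, 0.0, 2.0])))  # on-axis point → [320. 240.]
```

Calibration inverts this relationship: given many known 3D points and their observed pixels, it solves for K, R, and t (plus distortion) that best reproduce the observations.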
Video image processor on the Spacelab 2 Solar Optical Universal Polarimeter /SL2 SOUP/
NASA Technical Reports Server (NTRS)
Lindgren, R. W.; Tarbell, T. D.
1981-01-01
The SOUP instrument is designed to obtain diffraction-limited digital images of the sun with high photometric accuracy. The Video Processor originated from the requirement to provide onboard real-time image processing, both to reduce the telemetry rate and to provide meaningful video displays of scientific data to the payload crew. This original concept has evolved into a versatile digital processing system with a multitude of other uses in the SOUP program. The central element in the Video Processor design is a 16-bit central processing unit based on 2900 family bipolar bit-slice devices. All arithmetic, logical and I/O operations are under control of microprograms, stored in programmable read-only memory and initiated by commands from the LSI-11. Several functions of the Video Processor are described, including interface to the High Rate Multiplexer downlink, cosmetic and scientific data processing, scan conversion for crew displays, focus and exposure testing, and use as ground support equipment.
An integrated port camera and display system for laparoscopy.
Terry, Benjamin S; Ruppert, Austin D; Steinhaus, Kristen R; Schoen, Jonathan A; Rentschler, Mark E
2010-05-01
In this paper, we built and tested the port camera, a novel, inexpensive, portable, and battery-powered laparoscopic tool that integrates the components of a vision system with a cannula port. This new device 1) minimizes the invasiveness of laparoscopic surgery by combining a camera port and tool port; 2) reduces the cost of laparoscopic vision systems by integrating an inexpensive CMOS sensor and LED light source; and 3) enhances laparoscopic surgical procedures by mechanically coupling the camera, tool port, and liquid crystal display (LCD) screen to provide an on-patient visual display. The port camera video system was compared to two laparoscopic video systems: a standard resolution unit from Karl Storz (model 22220130) and a high definition unit from Stryker (model 1188HD). Brightness, contrast, hue, colorfulness, and sharpness were compared. The port camera video is superior to the Storz scope and approximately equivalent to the Stryker scope. An ex vivo study was conducted to measure the operative performance of the port camera. The results suggest that simulated tissue identification and biopsy acquisition with the port camera is as efficient as with a traditional laparoscopic system. The port camera was successfully used by a laparoscopic surgeon for exploratory surgery and liver biopsy during a porcine surgery, demonstrating initial surgical feasibility.
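The brightness, contrast, and sharpness comparisons above can be grounded with simple no-reference metrics. The formulas below (mean brightness, RMS contrast, gradient-energy sharpness) are common illustrative choices, not the study's actual rating protocol:

```python
import numpy as np

def image_metrics(img):
    """Simple no-reference metrics of the kind used to compare video
    systems: mean brightness, RMS contrast, gradient-energy sharpness."""
    img = img.astype(float)
    brightness = img.mean()
    contrast = img.std()                  # RMS contrast
    gy, gx = np.gradient(img)
    sharpness = np.mean(gx**2 + gy**2)    # gradient energy
    return brightness, contrast, sharpness

flat = np.full((8, 8), 100.0)             # uniform gray patch
edges = np.tile([0.0, 200.0], (8, 4))     # alternating dark/bright columns
b1, c1, s1 = image_metrics(flat)
b2, c2, s2 = image_metrics(edges)
assert c2 > c1 and s2 > s1   # the striped patch is higher-contrast and sharper
```

Objective scores like these complement, rather than replace, the observer ratings the study relied on, since perceived quality also depends on color rendition and noise.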
Reconfigurable work station for a video display unit and keyboard
NASA Technical Reports Server (NTRS)
Shields, Nicholas L. (Inventor); Roe, Fred D., Jr. (Inventor); Fagg, Mary F. (Inventor); Henderson, David E. (Inventor)
1988-01-01
A reconfigurable workstation is described having video, keyboard, and hand operated motion controller capabilities. The workstation includes main side panels between which a primary work panel is pivotally carried in a manner in which the primary work panel may be adjusted and set in a negatively declined or positively inclined position for proper forearm support when operating hand controllers. A keyboard table supports a keyboard in such a manner that the keyboard is set in a positively inclined position with respect to the negatively declined work panel. Various adjustable devices are provided for adjusting the relative declinations and inclinations of the work panels, tables, and visual display panels.
Design and implementation of H.264 based embedded video coding technology
NASA Astrophysics Data System (ADS)
Mao, Jian; Liu, Jinming; Zhang, Jiemin
2016-03-01
In this paper, an embedded system for remote online video monitoring was designed and developed to capture and record real-time conditions in an elevator. To improve the efficiency of video acquisition and processing, the system uses the Samsung S5PV210 chip, which integrates a graphics processing unit, as the core processor. The video was encoded in the H.264 format for efficient storage and transmission. Based on the S5PV210 chip, hardware video coding was investigated, which is more efficient than software coding. Running tests proved that hardware video coding can markedly reduce system cost and yield smoother video display. It can be widely applied in security supervision [1].
An Automatic Portable Telecine Camera.
1978-08-01
...five television frames to achieve synchronous operation, that is, about 0.2 second. 6.3 Video recorder noise immunity. The synchronisation pulse separator...display is filmed by a modified 16 mm cine camera driven by a control unit in which the camera supply voltage is derived from the field synchronisation...pulses of the video signal. Automatic synchronisation of the camera mechanism is achieved over a wide range of television field frequencies and the
Kwon, Min-Woo; Kim, Seung-Cheol; Kim, Eun-Soo
2016-01-20
A three-directional motion-compensation mask-based novel look-up table method is proposed and implemented on graphics processing units (GPUs) for video-rate generation of digital holographic videos of three-dimensional (3D) scenes. Since the proposed method is designed to be well matched with the software and memory structures of GPUs, the number of compute-unified-device-architecture kernel function calls can be significantly reduced. This results in a great increase of the computational speed of the proposed method, allowing video-rate generation of the computer-generated hologram (CGH) patterns of 3D scenes. Experimental results reveal that the proposed method can generate 39.8 frames of Fresnel CGH patterns with 1920×1080 pixels per second for the test 3D video scenario with 12,088 object points on dual GPU boards of NVIDIA GTX TITANs, and they confirm the feasibility of the proposed method in the practical application fields of electroholographic 3D displays.
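The per-point computation that look-up-table and GPU methods like this one accelerate is the summation of Fresnel zone patterns over all object points. A direct NumPy evaluation for a small hologram (a brute-force sketch, not the proposed LUT scheme; pixel pitch and wavelength are typical illustrative values):

```python
import numpy as np

def fresnel_cgh(points, H, W, pitch=8e-6, wavelength=532e-9):
    """Sum Fresnel zone contributions of object points (x, y, z, amplitude)
    into an H x W phase-only computer-generated hologram."""
    ys = (np.arange(H) - H / 2) * pitch
    xs = (np.arange(W) - W / 2) * pitch
    Y, X = np.meshgrid(ys, xs, indexing="ij")
    field = np.zeros((H, W), dtype=complex)
    for x0, y0, z0, amp in points:
        r2 = (X - x0) ** 2 + (Y - y0) ** 2          # squared lateral distance
        field += amp * np.exp(1j * np.pi * r2 / (wavelength * z0))
    return np.angle(field)                           # phase-only CGH pattern

cgh = fresnel_cgh([(0.0, 0.0, 0.1, 1.0)], 64, 64)   # one on-axis point at z = 0.1 m
```

The cost grows as (object points) x (hologram pixels), which is why precomputed tables and GPU kernels are needed to reach the video rates the paper reports.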
Madrigal-Garcia, Maria Isabel; Rodrigues, Marcos; Shenfield, Alex; Singer, Mervyn; Moreno-Cuesta, Jeronimo
2018-07-01
To identify facial expressions occurring in patients at risk of deterioration in hospital wards. Prospective observational feasibility study. General ward patients in a London Community Hospital, United Kingdom. Thirty-four patients at risk of clinical deterioration. A 5-minute video (25 frames/s; 7,500 images) was recorded, encrypted, and subsequently analyzed for action units by a trained facial action coding system psychologist blinded to outcome. Action units of the upper face, head position, eyes position, lips and jaw position, and lower face were analyzed in conjunction with clinical measures collected within the National Early Warning Score. The most frequently detected action units were action unit 43 (73%) for upper face, action unit 51 (11.7%) for head position, action unit 62 (5.8%) for eyes position, action unit 25 (44.1%) for lips and jaw, and action unit 15 (67.6%) for lower face. The presence of certain combined face displays was increased in patients requiring admission to intensive care, namely, action units 43 + 15 + 25 (face display 1, p < 0.013), action units 43 + 15 + 51/52 (face display 2, p < 0.003), and action units 43 + 15 + 51 + 25 (face display 3, p < 0.002). Having face display 1, face display 2, and face display 3 increased the risk of being admitted to intensive care eight-fold, 18-fold, and as a sure event, respectively. A logistic regression model with face display 1, face display 2, face display 3, and National Early Warning Score as independent covariates described admission to intensive care with an average concordance statistic (C-index) of 0.71 (p = 0.009). Patterned facial expressions can be identified in deteriorating general ward patients. This tool may potentially augment risk prediction of current scoring systems.
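The reported fold-increases in ICU admission risk are the exponentiated coefficients of the logistic model described above. A sketch of that relationship, using a hypothetical coefficient and intercept rather than the study's fitted values:

```python
import math

def odds_ratio(beta):
    """exp(beta): multiplicative change in odds per unit of the covariate."""
    return math.exp(beta)

def predicted_prob(intercept, beta, x):
    """Logistic model: P(ICU admission | face-display indicator = x)."""
    z = intercept + beta * x
    return 1.0 / (1.0 + math.exp(-z))

beta = math.log(8.0)   # coefficient corresponding to an eight-fold odds increase,
                       # as reported for face display 1 (intercept below is invented)
p0 = predicted_prob(-2.0, beta, 0)   # display absent
p1 = predicted_prob(-2.0, beta, 1)   # display present: higher predicted risk
```

The C-index of 0.71 then measures how often the model assigns a higher predicted probability to an admitted patient than to a non-admitted one across all such pairs.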
Code of Federal Regulations, 2010 CFR
2010-10-01
... 48 Federal Acquisition Regulations System 1 2010-10-01 2010-10-01 false Definitions. 23.701... DRUG-FREE WORKPLACE Contracting for Environmentally Preferable Products and Services 23.701 Definitions. As used in this subpart— Computer monitor means a video display unit used with a computer. Desktop...
Mobile Vehicle Teleoperated Over Wireless IP
2007-06-13
VideoLAN software suite. The VLC media player portion of this suite handles network streaming of video, as well as the receipt and display of the video...is found in appendix C.7. Video Display: The video feed is displayed for the operator using VLC, opened independently from the control sending program...This gives the operator the most choice in how to configure the display. To connect VLC to the feed, all you need is the IP address from the Java
Multi-star processing and gyro filtering for the video inertial pointing system
NASA Technical Reports Server (NTRS)
Murphy, J. P.
1976-01-01
The video inertial pointing (VIP) system is being developed to satisfy the acquisition and pointing requirements of astronomical telescopes. The VIP system uses a single video sensor to provide star position information that can be used to generate three-axis pointing error signals (multi-star processing) and for input to a cathode ray tube (CRT) display of the star field. The pointing error signals are used to update the telescope's gyro stabilization system (gyro filtering). The CRT display facilitates target acquisition and positioning of the telescope by a remote operator. Linearized small angle equations are used for the multistar processing and a consideration of error performance and singularities lead to star pair location restrictions and equation selection criteria. A discrete steady-state Kalman filter which uses the integration of the gyros is developed and analyzed. The filter includes unit time delays representing asynchronous operations of the VIP microprocessor and video sensor. A digital simulation of a typical gyro stabilized gimbal is developed and used to validate the approach to the gyro filtering.
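The discrete steady-state Kalman filter described above can be sketched in scalar form: iterate the Riccati recursion offline to a fixed gain, then apply that constant gain to blend the gyro-propagated estimate with each star-sensor measurement. This is a simplified random-walk stand-in, not the VIP system's actual filter (noise values are illustrative):

```python
def steady_state_gain(q, r, tol=1e-12):
    """Iterate the scalar Riccati recursion to the steady-state Kalman
    gain for a random-walk state x_k = x_{k-1} + w observed in noise.
    q: process-noise variance, r: measurement-noise variance."""
    p = q
    while True:
        p_pred = p + q                 # predict covariance
        k = p_pred / (p_pred + r)      # Kalman gain
        p_new = (1.0 - k) * p_pred     # update covariance
        if abs(p_new - p) < tol:
            return k
        p = p_new

def filter_step(x_est, z, k):
    """Blend the propagated estimate with the sensor measurement z."""
    return x_est + k * (z - x_est)

k = steady_state_gain(q=1e-4, r=1e-2)
x = filter_step(0.0, 0.5, k)
assert 0.0 < x < 0.5   # estimate moves toward the measurement, weighted by k
```

Using the precomputed steady-state gain instead of a full time-varying filter keeps the per-update arithmetic small, which suits the microprocessor constraints the paper mentions; the unit delays modeling asynchronous sensor and processor timing would enter as extra state.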
NASA Astrophysics Data System (ADS)
Horii, Steven C.; Kundel, Harold L.; Shile, Peter E.; Carey, Bruce; Seshadri, Sridhar B.; Feingold, Eric R.
1994-05-01
As part of a study of the use of a PACS workstation compared to film in a Medical Intensive Care Unit, logs of workstation activity were maintained. The software for the workstation kept track of the type of user (i.e., intern, resident, fellow, or attending physician) and also of the workstation image manipulation functions used. The functions logged were: no operation, brightness/contrast adjustment, invert video, zoom, and high-resolution display (this last function resulted in the display of the full 2K x 2K image rather than the usual subsampled 1K x 1K image). Associated data collection allows us to obtain the diagnostic category of the examination being viewed (e.g., location of tubes and lines, rule out: pneumonia, congestive heart failure, pneumothorax, and pleural effusion). The diagnostic categories and user type were then correlated with the use of workstation functions during viewing of images. In general, there was an inverse relationship between the level of training and the number of workstation uses. About two-thirds of the time, there was no image manipulation operation performed. Adjustment of brightness/contrast had the highest percentage of use overall, followed by zoom, video invert, and high-resolution display.
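The usage percentages reported above come down to tallying logged function events by user type. A sketch of that aggregation (the event tuples are invented illustration, not the study's data):

```python
from collections import Counter

# Each log entry: (user type, image-manipulation function used).
log = [
    ("intern", "brightness/contrast"), ("intern", "zoom"),
    ("intern", "no-op"), ("resident", "no-op"),
    ("attending", "no-op"), ("attending", "invert"),
]

by_user = Counter(user for user, _ in log)   # events per user type
by_func = Counter(func for _, func in log)   # events per function
no_op_share = by_func["no-op"] / len(log)    # fraction with no manipulation
print(by_user["intern"], no_op_share)  # → 3 0.5
```

Cross-tabulating the same counters against the diagnostic category of each examination yields the correlations the study reports.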
Performance and Preference with Various VDT (Video Display Terminal) Phosphors
1987-04-24
Unit M\\100.001-1302 was submitted for review on 13 March 1987, approved for publication on 24 April 1987, and has been designated as Naval Submarine... designed to investigate reading fatigue, Nordqvist et al. (1986) had their subjects read texts for 15 minutes, followed by 5 minutes of performance tests... Doc. Ophthalmol. 3: 138-163. Tullis, T.S. (1981). An evaluation of alphanumeric, graphic, and color information displays. Human Factors 23: 541-550.
Ketola, Ritva; Toivonen, Risto; Luukkonen, Ritva; Takala, Esa-Pekka; Viikari-Juntura, Eira
2004-08-01
Inter-observer repeatability, validity and responsiveness to change were determined for an expert assessment method for video-display unit (VDU) workstation ergonomics. The aim was to determine to what extent the expert assessment of ergonomics is related to technical measurements, tidiness and available space, and work chair ergonomics, and how well it responds to changes in these characteristics. Technical measurements and video-recordings before and 2 months after an ergonomic intervention were made for 109 VDU office workstations. Two experts in ergonomics analysed and rated the ergonomics of the workstations. A researcher analysed tidiness and available space. A physiotherapist classified the work chairs used according to their ergonomic properties. The intra-class correlation coefficient between the workstation ergonomic ratings of the two experts was 0.74 at baseline and 0.81 at follow-up. Workstation tidiness and space, and work chair ergonomics, had a strong effect on the assessments of both experts. For both experts, changes during the intervention in the locations of the mouse, screen and keyboard, and in the values for tidiness and space and work chair ergonomics, showed a significant association with the ratings. The assessment method studied can be utilized by an expert in a repeatable manner in both cross-sectional and longitudinal settings.
NASA Technical Reports Server (NTRS)
Bogart, Edward H. (Inventor); Pope, Alan T. (Inventor)
2000-01-01
A system for display on a single video display terminal of multiple physiological measurements is provided. A subject is monitored by a plurality of instruments which feed data to a computer programmed to receive data, calculate data products such as an index of engagement and heart rate, and display the data in a graphical format simultaneously on a single video display terminal. In addition, live video representing the view of the subject and the experimental setup may also be integrated into the single data display. The display may be recorded on a standard video tape recorder for retrospective analysis.
Technique for improving solid state mosaic images
NASA Technical Reports Server (NTRS)
Saboe, J. M.
1969-01-01
Method identifies and corrects mosaic image faults in solid state visual displays and opto-electronic presentation systems. Composite video signals containing faults due to defective sensing elements are corrected by a memory unit that contains the stored fault pattern and supplies the appropriate fault word to the blanking circuit.
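The stored-fault-pattern idea can be illustrated in software (a hypothetical mask-based sketch, not the original hardware memory unit and blanking circuit): known-bad sensing elements are replaced with the mean of their valid neighbors.

```python
import numpy as np

# Hedged sketch: correct known-bad sensor elements using a stored fault
# map, modeled here as a boolean mask (names and approach illustrative;
# the original system substituted a stored "fault word" in hardware).
def correct_faults(frame, fault_mask):
    frame = frame.astype(float)
    out = frame.copy()
    rows, cols = frame.shape
    for r, c in zip(*np.nonzero(fault_mask)):
        # average the non-faulty pixels in the 3x3 neighborhood
        nbrs = [frame[i, j]
                for i in range(max(0, r - 1), min(rows, r + 2))
                for j in range(max(0, c - 1), min(cols, c + 2))
                if not fault_mask[i, j]]
        if nbrs:
            out[r, c] = sum(nbrs) / len(nbrs)
    return out
```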
Things the Teacher of Your Media Utilization Course May Not Have Told You.
ERIC Educational Resources Information Center
Ekhaml, Leticia
1995-01-01
Discusses maintenance and safety information that may not be covered in a technology training program. Topics include computers, printers, televisions, video and audio equipment, electric roll laminators, overhead and slide projectors, equipment carts, power cords and outlets, batteries, darkrooms, barcode readers, Liquid Crystal Display units,…
Dual-Use Applications of Infrared Sensitive Materials: Appendices
1993-06-01
CLUs. The Command Launch Unit and missile round each use a second generation forward looking IR detector. The CLU uses an LWIR, MCT-based, 240x1 scanning... read out digitally to other display units or video equipment through a port on the unit's side. The IR detector in the CLU operates in the LWIR for two... greater than about a kilometer needs to operate generally in the LWIR to achieve the sensitivity necessary to image objects at those distances. Second
Stereoscopic display technologies for FHD 3D LCD TV
NASA Astrophysics Data System (ADS)
Kim, Dae-Sik; Ko, Young-Ji; Park, Sang-Moo; Jung, Jong-Hoon; Shestak, Sergey
2010-04-01
Stereoscopic display technologies have been developed as one of the advanced display technologies, and many TV manufacturers have been pursuing the commercialization of 3D TV. We have been developing 3D TV based on LCD with an LED BLU (backlight unit) since Samsung launched the world's first 3D TV based on PDP. However, the data scanning of the panel and the LC response characteristics of LCD TV cause interference among frames (that is, crosstalk), which degrades 3D video quality. We propose a method to reduce crosstalk by LCD driving and backlight control of an FHD 3D LCD TV.
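The crosstalk described above can be captured with a simple leakage blend (an illustrative model only, not the authors' driving scheme): a fraction of the unintended eye's image leaks through because LC pixels have not finished switching when that eye's view is shown.

```python
# Illustrative crosstalk model (not the paper's method): a fraction c of
# the unintended view's luminance leaks into the eye that should not see
# it, raising black levels and creating ghosting.
def perceived(intended, unintended, c):
    """Blend intended and leaked normalized luminance for one eye."""
    return (1.0 - c) * intended + c * unintended

# Black-white test pattern: with 5% leakage, the eye that should see
# black instead sees 5% of full white.
left_sees = perceived(0.0, 1.0, 0.05)
```

Backlight blanking between frames, as proposed in the abstract, effectively reduces c by not illuminating the panel while pixels are still transitioning.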
A Scalable, Collaborative, Interactive Light-field Display System
2014-06-01
Keywords: light-field, holographic displays, 3D display, holographic video, integral photography, plenoptic, computed photography. Distribution A: Approved
Display device-adapted video quality-of-experience assessment
NASA Astrophysics Data System (ADS)
Rehman, Abdul; Zeng, Kai; Wang, Zhou
2015-03-01
Today's viewers consume video content on a variety of connected devices, including smartphones, tablets, notebooks, TVs, and PCs. This imposes significant challenges for managing video traffic efficiently to ensure an acceptable quality-of-experience (QoE) for end users, because the perceptual quality of video content strongly depends on the properties of the display device and the viewing conditions. State-of-the-art full-reference objective video quality assessment algorithms do not take into account the combined impact of display device properties, viewing conditions, and video resolution when assessing video quality. We performed a subjective study in order to understand the impact of the aforementioned factors on perceptual video QoE. We also propose a full-reference video QoE measure, named SSIMplus, that provides real-time prediction of the perceptual quality of a video based on human visual system behaviors, video content characteristics (such as spatial and temporal complexity, and video resolution), display device properties (such as screen size, resolution, and brightness), and viewing conditions (such as viewing distance and angle). Experimental results have shown that the proposed algorithm outperforms state-of-the-art video quality measures in terms of accuracy and speed.
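One concrete device-and-viewing-condition factor of the kind such a measure must account for is angular resolution. The sketch below (a hedged illustration; the internals of SSIMplus are not given in this abstract) computes pixels per degree of visual angle from screen geometry and viewing distance:

```python
import math

# Illustrative calculation: angular resolution in pixels per degree,
# from pixel count along one dimension, the physical screen length along
# that dimension, and the viewing distance (same length units throughout).
def pixels_per_degree(res_px, screen_len, view_dist):
    px_per_unit = res_px / screen_len          # pixel density
    # physical length subtended by one degree at this viewing distance
    len_per_degree = 2.0 * view_dist * math.tan(math.radians(0.5))
    return px_per_unit * len_per_degree
```

Moving closer to the same panel lowers the pixels-per-degree figure, which is one reason the same video can look noticeably worse on a large nearby screen than on a phone at arm's length.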
Using ARINC 818 Avionics Digital Video Bus (ADVB) for military displays
NASA Astrophysics Data System (ADS)
Alexander, Jon; Keller, Tim
2007-04-01
ARINC 818 Avionics Digital Video Bus (ADVB) is a new digital video interface and protocol standard developed especially for high bandwidth uncompressed digital video. The first draft of this standard, released in January 2007, has been advanced by ARINC and the aerospace community to meet the acute needs of commercial aviation for higher performance digital video. This paper analyzes ARINC 818 for use in military display systems found in avionics, helicopters, and ground vehicles. The flexibility of ARINC 818 for the diverse resolutions, grayscales, pixel formats, and frame rates of military displays is analyzed, as is the suitability of ARINC 818 to support requirements for military video systems, including bandwidth, latency, and reliability. Implementation issues relevant to military displays are presented.
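When sizing an uncompressed digital video link of this kind, a first-order payload estimate is often useful. The sketch below is a back-of-the-envelope calculation only; it ignores ADVB container overhead, blanking intervals, and line coding, all of which increase the real link rate.

```python
# Rough raw-payload bandwidth estimate for uncompressed video (a sketch;
# protocol overhead and line coding are deliberately ignored).
def video_payload_gbps(width, height, bits_per_pixel, fps):
    return width * height * bits_per_pixel * fps / 1e9

# Example: a 1024x768 display, 24 bits per pixel, at 60 Hz.
xga = video_payload_gbps(1024, 768, 24, 60)
```

Even this modest format already exceeds 1 Gbps of raw payload, which illustrates why a dedicated high-bandwidth video protocol is needed rather than a general-purpose avionics bus.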
LMDS Lightweight Modular Display System.
1982-02-16
based on standard functions. This means that the cost to produce a particular display function can be met in the most economical fashion and at the same... not mean that the NTDS interface would be eliminated. What is anticipated is the use of ETHERNET at a low level of system interface, i.e., internal to... The architecture of the unit's (fig 3-4) input circuitry is based on a video table look-up ROM. The function
Capture and playback synchronization in video conferencing
NASA Astrophysics Data System (ADS)
Shae, Zon-Yin; Chang, Pao-Chi; Chen, Mon-Song
1995-03-01
Packet-switching-based video conferencing has emerged as one of the most important multimedia applications. Lip synchronization can be disrupted in the packet network as a result of network properties: packet delay jitter at the capture end, network delay jitter, packet loss, packets arriving out of sequence, local clock mismatch, and video playback overlay with the graphics system. The synchronization problem becomes more demanding with the real-time and multiparty requirements of video conferencing applications. Some of the above-mentioned problems can be solved in more advanced network architectures, as ATM has promised. This paper presents solutions that can be applied at the end-station terminals in the massively deployed packet-switching networks of today. The playback scheme in the end station consists of two units: a compression-domain buffer management unit and a pixel-domain buffer management unit. The pixel-domain buffer management unit is responsible for removing the annoying frame-shearing effect in the display. The compression-domain buffer management unit is responsible for parsing the incoming packets to identify the complete data blocks in the compressed data stream that can be decoded independently. It is also responsible for concealing the effects of clock mismatch, lip-synchronization error, packet loss, out-of-sequence arrival, and network jitter. This scheme can also be applied to the multiparty teleconferencing environment. Some of the schemes presented in this paper have been implemented in the Multiparty Multimedia Teleconferencing (MMT) system prototype at the IBM Watson Research Center.
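The buffering idea behind such playback schemes can be sketched as a fixed-delay jitter buffer (illustrative only; the MMT prototype's actual policy is not described in the abstract): each frame is released at its capture time plus a constant playout delay, so that network delay variation is absorbed as long as it stays within that budget.

```python
# Simplified playout scheduling sketch (not the MMT implementation):
# hold each frame until capture_time + delay, absorbing delay jitter.
def playout_times(capture_times, arrival_times, delay):
    out = []
    for cap, arr in zip(capture_times, arrival_times):
        target = cap + delay
        # a frame arriving after its deadline plays late; a stricter
        # policy might instead drop it to preserve lip sync
        out.append(max(target, arr))
    return out
```

Choosing the delay trades interactivity against robustness: a larger budget hides more jitter but increases end-to-end latency, which matters in multiparty conferencing.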
NASA work unit system users manual
NASA Technical Reports Server (NTRS)
1972-01-01
The NASA Work Unit System is a management information system for research tasks (i.e., work units) performed under NASA grants and contracts. It supplies profiles to indicate how much effort is being expended to what types of research, where the effort is being expended, and how funds are being distributed. The user obtains information by entering requests on the keyboard of a time-sharing terminal. Responses are received as video displays or typed messages at the terminal, or as lists printed in the computer room for subsequent delivery by messenger.
Portable Computer Technology (PCT) Research and Development Program Phase 2
NASA Technical Reports Server (NTRS)
Castillo, Michael; McGuire, Kenyon; Sorgi, Alan
1995-01-01
This project report focuses on: (1) the design and development of two Advanced Portable Workstation 2 (APW 2) units, which incorporate advanced technology features such as a low power Pentium processor, a high resolution color display, National Television Standards Committee (NTSC) video handling capabilities, a Personal Computer Memory Card International Association (PCMCIA) interface, and Small Computer System Interface (SCSI) and ethernet interfaces; (2) the use of these units to integrate and demonstrate advanced wireless network and portable video capabilities; and (3) the qualification of the APW 2 systems for use in specific experiments aboard the Mir Space Station. A major objective of the PCT Phase 2 program was to help guide future choices in computing platforms and techniques for meeting National Aeronautics and Space Administration (NASA) mission objectives, with the focus on the development of optimal configurations of computing hardware, software applications, and network technologies for use on NASA missions.
Women and Office Automation: Issues for the Decade Ahead.
ERIC Educational Resources Information Center
Women's Bureau (DOL), Washington, DC.
More than 7 million workers in the United States today use computer-based video display terminals to do word and data processing; an overwhelming number of these workers are women. Women make up most of the occupational groups identified as "administrative support," and they are particularly affected by the changes taking place in the workplace.…
47 CFR 79.101 - Closed caption decoder requirements for analog television receivers.
Code of Federal Regulations, 2012 CFR
2012-10-01
...) BROADCAST RADIO SERVICES CLOSED CAPTIONING AND VIDEO DESCRIPTION OF VIDEO PROGRAMMING § 79.101 Closed... display the captioning for whichever channel the user selects. The TV Mode of operation allows the video... and rows. The characters must be displayed clearly separated from the video over which they are placed...
Live HDR video streaming on commodity hardware
NASA Astrophysics Data System (ADS)
McNamee, Joshua; Hatchett, Jonathan; Debattista, Kurt; Chalmers, Alan
2015-09-01
High Dynamic Range (HDR) video provides a step change in viewing experience, for example the ability to clearly see the soccer ball when it is kicked from the shadow of the stadium into sunshine. To achieve the full potential of HDR video, so-called true HDR, it is crucial that all the dynamic range that was captured is delivered to the display device and tone mapping is confined only to the display. Furthermore, to ensure widespread uptake of HDR imaging, it should be low cost and available on commodity hardware. This paper describes an end-to-end HDR pipeline for capturing, encoding and streaming high-definition HDR video in real-time using off-the-shelf components. All the lighting that is captured by HDR-enabled consumer cameras is delivered via the pipeline to any display, including HDR displays and even mobile devices with minimum latency. The system thus provides an integrated HDR video pipeline that includes everything from capture to post-production, archival and storage, compression, transmission, and display.
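Confining tone mapping to the display, as the pipeline above advocates, can be as simple as applying a global operator at the last stage. The sketch below uses the classic Reinhard curve purely as a stand-in (the paper does not prescribe a particular operator): it compresses unbounded scene-referred luminance into [0, 1) for a conventional display.

```python
# Minimal display-side tone-mapping sketch: the global Reinhard operator
# L' = L / (1 + L), used here only as an illustrative stand-in for
# whatever operator a given display applies.
def reinhard(lum):
    """Map non-negative relative luminance values into [0, 1)."""
    return [v / (1.0 + v) for v in lum]
```

Because the operator is applied only at the display, the full captured dynamic range survives encoding and transmission, and an HDR-capable display can simply use a gentler (or no) curve.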
Compression of stereoscopic video using MPEG-2
NASA Astrophysics Data System (ADS)
Puri, A.; Kollarits, Richard V.; Haskell, Barry G.
1995-10-01
Many current as well as emerging applications in areas of entertainment, remote operations, manufacturing industry and medicine can benefit from the depth perception offered by stereoscopic video systems, which employ two views of a scene imaged under the constraints imposed by the human visual system. Among the many challenges to be overcome for practical realization and widespread use of 3D/stereoscopic systems are good 3D displays and efficient techniques for digital compression of enormous amounts of data while maintaining compatibility with normal video decoding and display systems. After a brief introduction to the basics of 3D/stereo, including issues of depth perception, stereoscopic 3D displays, and terminology in stereoscopic imaging and display, we present an overview of tools in the MPEG-2 video standard that are relevant to our discussion on compression of stereoscopic video, which is the main topic of this paper. Next, we outline the various approaches for compression of stereoscopic video and then focus on compatible stereoscopic video coding using MPEG-2 Temporal scalability concepts. Compatible coding employing two different types of prediction structures becomes potentially possible: disparity compensated prediction and combined disparity and motion compensated prediction. To further improve coding performance and display quality, preprocessing for reducing mismatch between the two views forming stereoscopic video is considered. Results of simulations performed on stereoscopic video of normal TV resolution are then reported, comparing the performance of the two prediction structures with the simulcast solution. It is found that combined disparity and motion compensated prediction offers the best performance. Results indicate that compression of both views of stereoscopic video of normal TV resolution appears feasible in a total of 6 to 8 Mbit/s. We then discuss multi-viewpoint video, a generalization of stereoscopic video. Finally, we describe ongoing efforts within MPEG-2 to define a profile for stereoscopic video coding, as well as the promise of MPEG-4 in addressing coding of multi-viewpoint video.
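Disparity-compensated prediction rests on finding, for each block of one view, its best horizontal match in the other view. A toy sum-of-absolute-differences search along a scanline (an illustrative sketch, not the MPEG-2 encoder's algorithm, which also uses subpixel refinement and rate-distortion decisions):

```python
# Toy 1-D disparity search: find the horizontal shift d that best matches
# a block from the right view against the left view (SAD criterion).
def best_disparity(left_row, right_block, start, max_d):
    n = len(right_block)
    best, best_sad = 0, float("inf")
    for d in range(max_d + 1):
        s = start - d  # candidate position in the left view
        if s < 0 or s + n > len(left_row):
            continue
        sad = sum(abs(a - b) for a, b in zip(left_row[s:s + n], right_block))
        if sad < best_sad:
            best, best_sad = d, sad
    return best
```

The encoder then transmits only the disparity vector and the (small) prediction residual for the second view, which is why combined disparity and motion compensation beats simulcast.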
Video Games: A Human Factors Guide to Visual Display Design and Instructional System Design
1984-04-01
Electronic video games have many of the same technological and psychological characteristics that are found in military computer-based systems. For...both of which employ video games as experimental stimuli, are presented here. The first research program seeks to identify and exploit the...characteristics of video games in the design of game-based training devices. The second program is designed to explore the effects of electronic video display
Predictive Displays for High Latency Teleoperation
2016-08-04
"Predictive Displays for High Latency Teleoperation." Analysis of existing approach: a comms channel connects the vehicle and the OCU, carrying throttle, steer, and brake commands down and video back... presents an opportunity to mitigate outgoing latency. Video is not governed by physics; however, video is dependent on the state of the vehicle, which... Commands, estimates; UDP: H.264 video; UDP: vehicle state. C++ implementation; 2 threads; OpenCV for image manipulation; FFMPEG for video decoding.
1983-12-01
storage included room for not only the video display incompatibilities which have been plaguing the terminal (VDT), but also for the disk drive, the... once at system implementation time. This sample Video Display Terminal (VDT) screen shows the Appendix N code... override the value with a different data value. Video Display Terminal (VDT): a cathode ray tube or gas plasma tube display screen terminal that allows
Microcomputer Selection Guide for Construction Field Offices. Revision.
1984-09-01
the system, and the monitor displays information on a video display screen. Microcomputer systems today are available in a variety of configurations... background. White-on-black monitors reportedly cause more eye fatigue, while amber is reported to cause the least eye fatigue. Reverse video... The video should be an amber or green display with a resolution of at least 640 x 200 dots. Additional features of the monitor include an
Overview of FTV (free-viewpoint television)
NASA Astrophysics Data System (ADS)
Tanimoto, Masayuki
2010-07-01
We have developed a new type of television named FTV (Free-viewpoint TV). FTV is the ultimate 3DTV that enables us to view a 3D scene by freely changing our viewpoints. We proposed the concept of FTV and constructed the world's first real-time system including the complete chain of operation from image capture to display. FTV is based on the ray-space method, which represents one ray in real space with one point in the ray-space. We have developed ray capture, processing and display technologies for FTV. FTV can be carried out today in real time on a single PC or on a mobile player. We also realized FTV with free listening-point audio. The international standardization of FTV has been conducted in MPEG. The first phase of FTV was MVC (Multi-view Video Coding) and the second phase is 3DV (3D Video). MVC was completed in May 2009. The Blu-ray 3D specification has adopted MVC for compression. 3DV is a standard that targets serving a variety of 3D displays. The view generation function of FTV is used to decouple capture and display in 3DV. FDU (FTV Data Unit) is proposed as a data format for 3DV. The FDU can compensate for errors in the synthesized views caused by depth error.
Method and Apparatus for Improved Spatial Light Modulation
NASA Technical Reports Server (NTRS)
Soutar, Colin (Inventor); Juday, Richard D. (Inventor)
2000-01-01
A method and apparatus for modulating a light beam in an optical processing system is described. Preferably, an electrically-controlled polarizer unit and/or an analyzer unit are utilized in combination with a spatial light modulator and a controller. Preferably, the spatial light modulator comprises a pixelated birefringent medium such as a liquid crystal video display. The combination of the electrically controlled polarizer unit and analyzer unit makes it simple and fast to reconfigure the modulation described by the Jones matrix of the spatial light modulator. A particular optical processing objective is provided to the controller. The controller performs calculations and supplies control signals to the polarizer unit, the analyzer unit, and the spatial light modulator in order to obtain the optical processing objective.
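The Jones-matrix description referenced in the patent can be made concrete with a small example (standard Jones calculus, not the patented controller logic): a linear polarizer at angle theta acts on a polarization state expressed as a two-component Jones vector.

```python
import numpy as np

# Standard Jones calculus sketch (illustrative, not the patent's method):
# a linear polarizer at angle theta, as a 2x2 Jones matrix acting on a
# [horizontal, vertical] field-amplitude vector.
def polarizer(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c * c, c * s],
                     [c * s, s * s]])

horizontal = np.array([1.0, 0.0])
# A vertical polarizer (theta = 90 degrees) extinguishes horizontal light.
out = polarizer(np.pi / 2) @ horizontal
```

Cascading such matrices (polarizer, birefringent pixel, analyzer) is exactly the kind of product the controller must reconfigure when the polarizer and analyzer settings change.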
Method and Apparatus for Improved Spatial Light Modulation
NASA Technical Reports Server (NTRS)
Soutar, Colin (Inventor); Juday, Richard D. (Inventor)
1999-01-01
A method and apparatus for modulating a light beam in an optical processing system is described. Preferably, an electrically-controlled polarizer unit and/or an analyzer unit are utilized in combination with a spatial light modulator and a controller. Preferably, the spatial light modulator comprises a pixelated birefringent medium such as a liquid crystal video display. The combination of the electrically controlled polarizer unit and analyzer unit makes it simple and fast to reconfigure the modulation described by the Jones matrix of the spatial light modulator. A particular optical processing objective is provided to the controller. The controller performs calculations and supplies control signals to the polarizer unit, the analyzer unit, and the spatial light modulator in order to obtain the optical processing objective.
2008-04-01
Index (NASA-TLX; Hart & Staveland, 1988), and a Post-Test Questionnaire. Demographic data/Background Questionnaire. This questionnaire was used... very confident). NASA-TLX. The NASA-TLX (Hart & Staveland, 1988) is a subjective workload assessment tool. A multidimensional weighting... completed the NASA-TLX. The test trials were randomized across participants and occurred in a counterbalanced order that took into account video display
An evaluation of the efficacy of video displays for use with chimpanzees (Pan troglodytes).
Hopper, Lydia M; Lambeth, Susan P; Schapiro, Steven J
2012-05-01
Video displays for behavioral research lend themselves particularly well to studies with chimpanzees (Pan troglodytes), as their vision is comparable to humans', yet there has been no formal test of the efficacy of video displays as a form of social information for chimpanzees. To address this, we compared the learning success of chimpanzees shown video footage of a conspecific with that of chimpanzees shown a live conspecific performing the same novel task. Footage of an unfamiliar chimpanzee operating a bidirectional apparatus was presented to 24 chimpanzees (12 males, 12 females), and their responses were compared to those of a further 12 chimpanzees given the same task but with no form of information. Secondly, we also compared the responses of the chimpanzees in the video display condition to responses of eight chimpanzees from a previously published study of ours, in which chimpanzees observed live models. Chimpanzees shown a video display were more successful than those in the control condition and showed comparable success to those that saw a live model. Regarding fine-grained copying (i.e. the direction that the door was pushed), only chimpanzees that observed a live model showed significant matching to the model's methods with their first response. Yet, when all the responses made by the chimpanzees were considered, comparable levels of matching were shown by chimpanzees in both the live and video conditions. © 2012 Wiley Periodicals, Inc.
Real-Time Acquisition and Display of Data and Video
NASA Technical Reports Server (NTRS)
Bachnak, Rafic; Chakinarapu, Ramya; Garcia, Mario; Kar, Dulal; Nguyen, Tien
2007-01-01
This paper describes the development of a prototype that takes in an analog National Television System Committee (NTSC) video signal generated by a video camera and data acquired by a microcontroller and displays them in real time on a digital panel. An 8051 microcontroller is used to acquire the power dissipation of the display panel, the room temperature, and the camera zoom level. The paper describes the major hardware components and shows how they are interfaced into a functional prototype. Test data results are presented and discussed.
Video display engineering and optimization system
NASA Technical Reports Server (NTRS)
Larimer, James (Inventor)
1997-01-01
A video display engineering and optimization CAD simulation system for designing an LCD display integrates models of a display device circuit, electro-optics, surface geometry, and physiological optics to model the system performance of a display. This CAD system permits system performance and design trade-offs to be evaluated without constructing a physical prototype of the device. The system includes a series of modules which permit analysis of design trade-offs in terms of their visual impact on a viewer looking at a display.
Young Children's Analogical Problem Solving: Gaining Insights from Video Displays
ERIC Educational Resources Information Center
Chen, Zhe; Siegler, Robert S.
2013-01-01
This study examined how toddlers gain insights from source video displays and use the insights to solve analogous problems. Two- to 2.5-year-olds viewed a source video illustrating a problem-solving strategy and then attempted to solve analogous problems. Older but not younger toddlers extracted the problem-solving strategy depicted in the video…
Code of Federal Regulations, 2011 CFR
2011-04-01
... capability to display the date and time of recorded events on video and/or digital recordings. The displayed... digital record retention. (1) All video recordings and/or digital records of coverage provided by the.... (3) Duly authenticated copies of video recordings and/or digital records shall be provided to the...
Code of Federal Regulations, 2014 CFR
2014-04-01
... capability to display the date and time of recorded events on video and/or digital recordings. The displayed... digital record retention. (1) All video recordings and/or digital records of coverage provided by the.... (3) Duly authenticated copies of video recordings and/or digital records shall be provided to the...
Code of Federal Regulations, 2013 CFR
2013-04-01
... capability to display the date and time of recorded events on video and/or digital recordings. The displayed... digital record retention. (1) All video recordings and/or digital records of coverage provided by the.... (3) Duly authenticated copies of video recordings and/or digital records shall be provided to the...
Code of Federal Regulations, 2010 CFR
2010-04-01
... capability to display the date and time of recorded events on video and/or digital recordings. The displayed... digital record retention. (1) All video recordings and/or digital records of coverage provided by the.... (3) Duly authenticated copies of video recordings and/or digital records shall be provided to the...
Code of Federal Regulations, 2012 CFR
2012-04-01
... capability to display the date and time of recorded events on video and/or digital recordings. The displayed... digital record retention. (1) All video recordings and/or digital records of coverage provided by the.... (3) Duly authenticated copies of video recordings and/or digital records shall be provided to the...
Stockdale, Laura; Coyne, Sarah M
2018-01-01
The Internet Gaming Disorder Scale (IGDS) is a widely used measure of video game addiction, a pathology affecting a small percentage of all people who play video games. Emerging adult males are significantly more likely to be video game addicts. Few researchers have examined how people who qualify as video game addicts based on the IGDS compare to controls matched on age, gender, race, and marital status. The current study compared IGDS video game addicts to matched non-addicts in terms of their mental, physical, and social-emotional health, using self-report survey methods. Addicts had poorer mental health and cognitive functioning, including poorer impulse control and more ADHD symptoms, compared to controls. Additionally, addicts displayed increased emotional difficulties, including increased depression and anxiety, felt more socially isolated, and were more likely to display symptoms of pathological internet pornography use. Female video game addicts were at unique risk for negative outcomes. The sample for this study was undergraduate college students, and self-report measures were used. Participants who met the IGDS criteria for video game addiction displayed poorer emotional, physical, mental, and social health, adding to the growing evidence that video game addiction is a valid phenomenon. Copyright © 2017 Elsevier B.V. All rights reserved.
Korhonen, T; Ketola, R; Toivonen, R; Luukkonen, R; Hakkanen, M; Viikari-Juntura, E
2003-01-01
Aims: To investigate work related and individual factors as predictors for incident neck pain among office employees working with video display units (VDUs). Methods: Employees in three administrative units of a medium sized city in Finland (n = 515) received mailed questionnaires in the baseline survey in 1998 and in the follow up survey in 1999. Response rate for the baseline was 81% (n = 416); respondents who reported neck pain for less than eight days during the preceding 12 months were included in the study cohort as healthy subjects (n = 232). The follow up questionnaire 12 months later was completed by 78% (n = 180). Incident neck pain cases were those reporting neck pain for at least eight days during the preceding 12 months. Results: The annual incidence of neck pain was 34.4% (95% CI 25.5 to 41.3). Poor physical work environment and poor placement of the keyboard increased the risk of neck pain. Among the individual factors, female sex was a strong predictor. Smoking showed a tendency towards an increased risk of neck pain. There was an interaction between mental stress and physical exercise: those with higher mental stress and less physical exercise had an especially high risk. Conclusion: In the prevention of neck disorders in office work with a high frequency of VDU tasks, attention should be given to the work environment in general and to the more specific aspects of VDU workstation layout. Physical exercise may prevent neck disorders among sedentary employees. PMID:12819280
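The incidence figure above can be reproduced approximately from the cohort numbers. A minimal sketch: the 62-case count is inferred here from 34.4% of the 180 follow-up respondents, and a simple normal-approximation interval is used, which is an assumption (the abstract's asymmetric 25.5-41.3 interval suggests the authors used a different method):

```python
import math

def incidence_with_ci(cases, n, z=1.96):
    """Incidence proportion with a normal-approximation 95% CI."""
    p = cases / n
    se = math.sqrt(p * (1 - p) / n)
    return p, p - z * se, p + z * se

# ~34.4% of the 180 follow-up respondents reported incident neck pain
p, lo, hi = incidence_with_ci(62, 180)
print(f"incidence {p:.1%}, 95% CI {lo:.1%} to {hi:.1%}")
```

The normal approximation gives roughly 27.5% to 41.4%; exact or adjusted methods shift the lower bound further down, as in the published interval.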
Wireless Augmented Reality Prototype (WARP)
NASA Technical Reports Server (NTRS)
Devereaux, A. S.
1999-01-01
Initiated in January 1997 under NASA's Office of Life and Microgravity Sciences and Applications, the Wireless Augmented Reality Prototype (WARP) is a means to leverage recent advances in communications, displays, imaging sensors, biosensors, voice recognition and microelectronics to develop a hands-free, tetherless system capable of real-time personal display and control of computer system resources. Using WARP, an astronaut may efficiently operate and monitor any computer-controllable activity inside or outside the vehicle or station. The WARP concept is a lightweight, unobtrusive heads-up display with a wireless wearable control unit. Connectivity to the external system is achieved through a high-rate radio link from the WARP personal unit to a base station unit installed into any system PC. The radio link has been specially engineered to operate within the high-interference, high-multipath environment of a space shuttle or space station module. Through this virtual terminal, the astronaut will be able to view and manipulate imagery, text or video, using voice commands to control the terminal operations. WARP's hands-free access to computer-based instruction texts, diagrams and checklists replaces juggling manuals and clipboards, and tetherless computer system access allows free motion throughout a cabin while monitoring and operating equipment.
Quick-disconnect harness system for helmet-mounted displays
NASA Astrophysics Data System (ADS)
Bapu, P. T.; Aulds, M. J.; Fuchs, Steven P.; McCormick, David M.
1992-10-01
We have designed a pilot's harness-mounted, high-voltage, quick-disconnect connector with 62 pins to transmit voltages up to 13.5 kV and video signals with 70 MHz bandwidth, for a binocular helmet-mounted display system. It connects and disconnects with power off, and disconnects 'hot' without pilot intervention and without producing external sparks or exposing hot embers to the explosive cockpit environment. We have implemented a procedure in which the high voltage pins disconnect inside a hermetically-sealed unit before the physical separation of the connector. The 'hot' separation triggers a crowbar circuit in the high voltage power supplies for additional protection. Conductor locations and shields are designed to reduce capacitance in the circuit and avoid crosstalk among adjacent circuits. The quick-disconnect connector and wiring harness are human-engineered to ensure pilot safety and mobility. The connector backshell is equipped with two hybrid video amplifiers to improve the clarity of the video signals. Shielded wires and coaxial cables are molded as a multi-layered ribbon for maximum flexibility between the pilot's harness and helmet. Stiff cabling is provided between the quick-disconnect connector and the aircraft console to control behavior during seat ejection. The components of the system have been successfully tested for safety, performance, ergonomic considerations, and reliability.
NASA Technical Reports Server (NTRS)
Jedlovec, Gary; Srikishen, Jayanthi; Edwards, Rita; Cross, David; Welch, Jon; Smith, Matt
2013-01-01
The use of collaborative scientific visualization systems for the analysis, visualization, and sharing of "big data" available from new high resolution remote sensing satellite sensors or four-dimensional numerical model simulations is propelling the wider adoption of ultra-resolution tiled display walls interconnected by high speed networks. These systems require a globally connected and well-integrated operating environment that provides persistent visualization and collaboration services. This abstract and subsequent presentation describe a new collaborative visualization system installed for NASA's Short-term Prediction Research and Transition (SPoRT) program at Marshall Space Flight Center and its use for Earth science applications. The system consists of a 3 x 4 array of 1920 x 1080 pixel thin bezel video monitors mounted on a wall in a scientific collaboration lab. The monitors are physically and virtually integrated into a 14' x 7' video display. The display of scientific data on the video wall is controlled by a single Alienware Aurora PC with a 2nd Generation Intel Core 4.1 GHz processor, 32 GB memory, and an AMD FirePro W600 video card with 6 Mini DisplayPort connections. Six Mini DisplayPort-to-dual DVI cables are used to connect the 12 individual video monitors. The open source Scalable Adaptive Graphics Environment (SAGE) windowing and media control framework, running on top of the Ubuntu 12 Linux operating system, allows several users to simultaneously control the display and storage of high resolution still and moving graphics in a variety of formats, on tiled display walls of any size. SAGE provides a common environment, or framework, enabling its users to access, display and share a variety of data-intensive information. 
This information can be digital-cinema animations, high-resolution images, high-definition video-teleconferences, presentation slides, documents, spreadsheets or laptop screens. SAGE is cross-platform, community-driven, open-source visualization and collaboration middleware that utilizes shared national and international cyberinfrastructure for the advancement of scientific research and education.
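The aggregate resolution of such a wall follows directly from the tile layout. A quick sketch, assuming the "3 x 4 array" means 3 rows of 4 landscape monitors and ignoring bezel gaps:

```python
def wall_resolution(rows, cols, tile_w=1920, tile_h=1080):
    """Total pixel dimensions and megapixel count of a tiled display wall."""
    w, h = cols * tile_w, rows * tile_h
    return w, h, w * h / 1e6

w, h, mp = wall_resolution(rows=3, cols=4)
print(f"{w} x {h} pixels, {mp:.1f} MP")  # 7680 x 3240 pixels, 24.9 MP
```

At roughly 25 MP, a single GPU with six dual-link outputs can drive the whole wall, which is why one PC suffices in the setup described above.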
NASA Astrophysics Data System (ADS)
Jedlovec, G.; Srikishen, J.; Edwards, R.; Cross, D.; Welch, J. D.; Smith, M. R.
2013-12-01
The use of collaborative scientific visualization systems for the analysis, visualization, and sharing of 'big data' available from new high resolution remote sensing satellite sensors or four-dimensional numerical model simulations is propelling the wider adoption of ultra-resolution tiled display walls interconnected by high speed networks. These systems require a globally connected and well-integrated operating environment that provides persistent visualization and collaboration services. This abstract and subsequent presentation describe a new collaborative visualization system installed for NASA's Short-term Prediction Research and Transition (SPoRT) program at Marshall Space Flight Center and its use for Earth science applications. The system consists of a 3 x 4 array of 1920 x 1080 pixel thin bezel video monitors mounted on a wall in a scientific collaboration lab. The monitors are physically and virtually integrated into a 14' x 7' video display. The display of scientific data on the video wall is controlled by a single Alienware Aurora PC with a 2nd Generation Intel Core 4.1 GHz processor, 32 GB memory, and an AMD FirePro W600 video card with 6 Mini DisplayPort connections. Six Mini DisplayPort-to-dual DVI cables are used to connect the 12 individual video monitors. The open source Scalable Adaptive Graphics Environment (SAGE) windowing and media control framework, running on top of the Ubuntu 12 Linux operating system, allows several users to simultaneously control the display and storage of high resolution still and moving graphics in a variety of formats, on tiled display walls of any size. SAGE provides a common environment, or framework, enabling its users to access, display and share a variety of data-intensive information. 
This information can be digital-cinema animations, high-resolution images, high-definition video-teleconferences, presentation slides, documents, spreadsheets or laptop screens. SAGE is cross-platform, community-driven, open-source visualization and collaboration middleware that utilizes shared national and international cyberinfrastructure for the advancement of scientific research and education.
High-definition video display based on the FPGA and THS8200
NASA Astrophysics Data System (ADS)
Qian, Jia; Sui, Xiubao
2014-11-01
This paper presents a high-definition video display solution based on an FPGA and the THS8200. The THS8200 is a video encoder chip from Texas Instruments (TI); it has three 10-bit DAC channels, accepts video data in both 4:2:2 and 4:4:4 formats, and can synchronize either through the dedicated synchronization signals HSYNC and VSYNC or from the SAV/EAV codes embedded in the video stream. In this design, the FPGA generates the address and control signals used to access the data-storage array and produces the corresponding digital video signals, YCbCr. These signals, combined with the HSYNC and VSYNC synchronization signals (also generated by the FPGA), act as the input signals of the THS8200. To meet the bandwidth requirements of high-definition TV, video is input in the 4:2:2 format over a 2x10-bit interface. The FPGA configures the internal registers of the THS8200 over the I2C bus, so that the chip generates synchronization signals compliant with the SMPTE standard and converts the digital YCbCr video into analog YPbPr video. The composite analog output YPbPr thus consists of the image data signal and the synchronization signal, superimposed inside the THS8200. The experimental research indicates that the method presented in this paper is a viable solution for high-definition video display, conforming to the input requirements of new high-definition display devices.
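Configuring such an encoder amounts to a sequence of I2C register writes issued by the FPGA (or, during bring-up, by any I2C master). The sketch below only illustrates that pattern; the device address, register names, addresses, and values are hypothetical placeholders, not the real THS8200 register map:

```python
# Hypothetical register table -- placeholders, not the actual THS8200 registers.
THS8200_I2C_ADDR = 0x20          # 7-bit device address (assumed, not from datasheet)
INIT_SEQUENCE = [
    ("SYSTEM_CTL", 0x03, 0x01),  # release internal reset (hypothetical)
    ("DTG_MODE",   0x38, 0x20),  # select SMPTE HD timing (hypothetical)
    ("DATA_PATH",  0x1C, 0x02),  # 2x10-bit 4:2:2 YCbCr input (hypothetical)
]

def build_i2c_writes(sequence, dev_addr=THS8200_I2C_ADDR):
    """Turn a (name, register, value) table into raw I2C write transactions."""
    return [(dev_addr, reg, val) for _name, reg, val in sequence]

for addr, reg, val in build_i2c_writes(INIT_SEQUENCE):
    print(f"i2c write dev=0x{addr:02X} reg=0x{reg:02X} val=0x{val:02X}")
```

In the real design this write sequence would be emitted by an I2C master state machine in the FPGA fabric; the actual addresses and values must come from the THS8200 datasheet.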
Woo, Kevin L; Rieucau, Guillaume
2008-07-01
The increasing use of the video playback technique in behavioural ecology reveals a growing need to ensure better control of the visual stimuli that focal animals experience. Technological advances now allow researchers to develop computer-generated animations instead of using video sequences of live-acting demonstrators. However, care must be taken to match the motion characteristics (speed and velocity) of the animation to the original video source. Here, we presented a tool based on an optic flow analysis program that measures how closely the motion characteristics of computer-generated animations resemble those of videos of live-acting animals. We examined three distinct displays (tail-flick (TF), push-up body rock (PUBR), and slow arm wave (SAW)) exhibited by animations of Jacky dragons (Amphibolurus muricatus), compared to the original video sequences of live lizards. We found no significant differences between the motion characteristics of videos and animations across all three displays. Our results showed that our animations matched the speed and velocity features of each display. Researchers need to ensure that animation and video stimuli share similar motion characteristics; this is a critical component in the future success of the video playback technique.
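The comparison described above can be approximated with any motion-estimation method. As a simplified stand-in for the authors' optic-flow program, here is a minimal sketch that scores mean frame-to-frame intensity change for two clips and compares them; the clips are synthetic arrays, and real footage would first be decoded into grayscale frame stacks:

```python
import numpy as np

def mean_motion(frames):
    """Mean absolute frame-to-frame intensity change -- a crude motion proxy."""
    diffs = np.abs(np.diff(frames.astype(float), axis=0))
    return diffs.mean()

def moving_square_clip(n_frames=10, size=32, step=1):
    """Synthetic clip: a bright square translating `step` pixels per frame."""
    frames = np.zeros((n_frames, size, size))
    for t in range(n_frames):
        x = t * step
        frames[t, 10:15, x:x + 5] = 1.0
    return frames

video = moving_square_clip(step=1)      # stand-in for live-lizard footage
animation = moving_square_clip(step=1)  # animation rendered at the same speed

print(abs(mean_motion(video) - mean_motion(animation)) < 1e-9)  # True: speeds match
```

A mismatch in display speed (e.g. an animation moving twice as fast) shows up directly as a larger motion score, which is the kind of discrepancy the optic-flow comparison is meant to catch.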
Telemetry and Communication IP Video Player
NASA Technical Reports Server (NTRS)
OFarrell, Zachary L.
2011-01-01
Aegis Video Player is the name of the video over IP system for the Telemetry and Communications group of the Launch Services Program. Aegis' purpose is to display video streamed over a network connection to be viewed during launches. To accomplish this task, a VLC ActiveX plug-in was used in C# to provide the basic capabilities of video streaming. The program was then customized to be used during launches. The VLC plug-in can be configured programmatically to display a single stream, but for this project multiple streams needed to be accessed. To accomplish this, an easy-to-use, informative menu system was added to the program to enable users to quickly switch between videos. Other features were added to make the player more useful, such as watching multiple videos at once and watching a video in full screen.
A system for the real-time display of radar and video images of targets
NASA Technical Reports Server (NTRS)
Allen, W. W.; Burnside, W. D.
1990-01-01
Described here is a software and hardware system for the real-time display of radar and video images for use in a measurement range. The main purpose is to give the reader a clear idea of the software and hardware design and its functions. This system is designed around a Tektronix XD88-30 graphics workstation, used to display radar images superimposed on video images of the actual target. The system's purpose is to provide a platform for the analysis and documentation of radar images and their associated targets in a menu-driven, user-oriented environment.
Novel use of video glasses during binocular microscopy in the otolaryngology clinic.
Fastenberg, Judd H; Fang, Christina H; Akbar, Nadeem A; Abuzeid, Waleed M; Moskowitz, Howard S
2018-06-06
The development of portable, high resolution video displays such as video glasses allows clinicians the opportunity to offer patients an increased ability to visualize aspects of their physical examination in an ergonomic and cost-effective manner. The objective of this pilot study is to trial the use of video glasses for patients undergoing binocular microscopy and to better understand some of the potential benefits of the enhanced display option. This study comprised a single treatment group. Patients seen in the otolaryngology clinic who required binocular microscopy for diagnosis and treatment were recruited. All patients wore video glasses during their otoscopic examination. An additional cohort of patients who required binocular microscopy was also recruited but did not use the video glasses during their examination. Patients subsequently completed a 10-point Likert scale survey that assessed their comfort, anxiety, and satisfaction with the examination as well as their general understanding of their otologic condition. A total of 29 patients who used the video glasses were recruited, including those with normal examinations, cerumen impaction, or chronic ear disease. Based on the survey results, patients reported a high level of satisfaction and comfort during their exam with video glasses. Patients who used the video glasses did not exhibit any increased anxiety with their examination. Patients reported that video glasses improved their understanding, and they expressed a desire to wear the glasses again during repeat exams. This pilot study demonstrates that video glasses may represent a viable alternative display option in the otolaryngology clinic. The results show that the use of video glasses is associated with high patient comfort and satisfaction during binocular microscopy. 
Further investigation is warranted to determine the potential for this display option in other facets of patient care as well as in expanding patient understanding of disease and anatomy. Copyright © 2018 Elsevier Inc. All rights reserved.
Study of a direct visualization display tool for space applications
NASA Astrophysics Data System (ADS)
Pereira do Carmo, J.; Gordo, P. R.; Martins, M.; Rodrigues, F.; Teodoro, P.
2017-11-01
The study of a Direct Visualization Display Tool (DVDT) for space applications is reported. The review of novel technologies for a compact display tool is described. Several applications for this tool have been identified with the support of ESA astronauts and are presented. A baseline design is proposed. It consists mainly of OLEDs as the image source; a specially designed optical prism as relay optics; a Personal Digital Assistant (PDA), with data acquisition card, as the control unit; and voice control and a simplified keyboard as interfaces. Optical analysis and the final estimated performance are reported. The system is able to display information (text, pictures and/or video) with SVGA resolution directly to the astronaut using a Field of View (FOV) of 20x14.5 degrees. The image delivery system is a monocular Head Mounted Display (HMD) that weighs less than 100 g. The HMD optical system has an eye pupil of 7 mm and an eye relief distance of 30 mm.
Does a video displaying a stair climbing model increase stair use in a worksite setting?
Van Calster, L; Van Hoecke, A-S; Octaef, A; Boen, F
2017-08-01
This study evaluated the effects of improving the visibility of the stairwell and of displaying a video with a stair climbing model on climbing and descending stair use in a worksite setting. Intervention study. Three consecutive one-week intervention phases were implemented: (1) the visibility of the stairs was improved by the attachment of pictograms that indicated the stairwell; (2) a video showing a stair climbing model was sent to the employees by email; and (3) the same video was displayed on a television screen at the point-of-choice (POC) between the stairs and the elevator. The interventions took place in two buildings. The implementation of the interventions varied between these buildings and the sequence was reversed. Improving the visibility of the stairs increased both stair climbing (+6%) and descending stair use (+7%) compared with baseline. Sending the video by email yielded no additional effect on stair use. By contrast, displaying the video at the POC increased stair climbing in both buildings by 12.5% on average. One week after the intervention, the positive effects on stair climbing remained in one of the buildings, but not in the other. These findings suggest that improving the visibility of the stairwell and displaying a stair climbing model on a screen at the POC can result in a short-term increase in both climbing and descending stair use. Copyright © 2017 The Royal Society for Public Health. Published by Elsevier Ltd. All rights reserved.
Video Display Terminals: Radiation Issues.
ERIC Educational Resources Information Center
Murray, William E.
1985-01-01
Discusses information gathered in past few years related to health effects of video display terminals (VDTs) with particular emphasis given to issues raised by VDT users. Topics covered include radiation emissions, health concerns, radiation surveys, occupational radiation exposure standards, and long-term risks. (17 references) (EJS)
Spatial constraints of stereopsis in video displays
NASA Technical Reports Server (NTRS)
Schor, Clifton
1989-01-01
Recent developments in video technology, such as liquid crystal displays and shutters, have made it feasible to incorporate stereoscopic depth into the 3-D representations on 2-D displays. However, depth has already been vividly portrayed in video displays without stereopsis using the classical artists' depth cues described by Helmholtz (1866) and the dynamic depth cues described in detail by Ittelson (1952). Successful static depth cues include overlap, size, linear perspective, texture gradients, and shading. Effective dynamic cues include looming (Regan and Beverly, 1979) and motion parallax (Rogers and Graham, 1982). Stereoscopic depth is superior to the monocular distance cues under certain circumstances. It is most useful at portraying depth intervals corresponding to disparities as small as 5 to 10 arc secs. For this reason it is extremely useful in user-video interactions such as telepresence. Objects can be manipulated in 3-D space, for example, while a person who controls the operations views a virtual image of the manipulated object on a remote 2-D video display. Stereopsis also provides structure and form information in camouflaged surfaces such as tree foliage. Motion parallax also reveals form; however, without other monocular cues such as overlap, motion parallax can yield an ambiguous perception. For example, a turning sphere, portrayed as solid by parallax, can appear to rotate either leftward or rightward. However, only one direction of rotation is perceived when stereo-depth is included. If the scene is static, then stereopsis is the principal cue for revealing the camouflaged surface structure. Finally, dynamic stereopsis provides information about the direction of motion in depth (Regan and Beverly, 1979). Clearly there are many spatial constraints, including spatial frequency content, retinal eccentricity, exposure duration, target spacing, and disparity gradient, which, when properly adjusted, can greatly enhance stereodepth in video displays.
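The 5-10 arcsec figure translates into a depth interval via the standard small-angle relation delta-d = d^2 * eta / I, where eta is the disparity threshold in radians and I the interocular separation. A quick sketch, assuming a 0.065 m interocular distance (a typical textbook value, not stated in the source):

```python
import math

def depth_interval(distance_m, disparity_arcsec, interocular_m=0.065):
    """Smallest detectable depth step at a given viewing distance for a given
    stereoacuity, using the small-angle approximation d^2 * eta / I."""
    disparity_rad = math.radians(disparity_arcsec / 3600.0)
    return distance_m ** 2 * disparity_rad / interocular_m

# At a 1 m viewing distance, 10 arcsec stereoacuity resolves sub-millimetre depth
print(f"{depth_interval(1.0, 10) * 1000:.2f} mm")
```

The quadratic dependence on distance explains why stereopsis is most valuable at near telepresence working distances and fades as a depth cue for far scenes.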
ERIC Educational Resources Information Center
Walsh, Janet
1982-01-01
Discusses issues related to possible health hazards associated with viewing video display terminals. Includes some findings of the 1979 NIOSH report on Potential Hazards of Video Display Terminals indicating level of radiation emitted is low and providing recommendations related to glare and back pain/muscular fatigue problems. (JN)
Virtual navigation performance: the relationship to field of view and prior video gaming experience.
Richardson, Anthony E; Collaer, Marcia L
2011-04-01
Two experiments examined whether learning a virtual environment was influenced by field of view and how it related to prior video gaming experience. In the first experiment, participants (42 men, 39 women; M age = 19.5 yr., SD = 1.8) performed worse on a spatial orientation task displayed with a narrow field of view in comparison to medium and wide field-of-view displays. Counter to initial hypotheses, wide field-of-view displays did not improve performance over medium displays, and this was replicated in a second experiment (30 men, 30 women; M age = 20.4 yr., SD = 1.9) presenting a more complex learning environment. Self-reported video gaming experience correlated with several spatial tasks: virtual environment pointing and tests of Judgment of Line Angle and Position, mental rotation, and Useful Field of View (with correlations between .31 and .45). When prior video gaming experience was included as a covariate, sex differences in spatial tasks disappeared.
Motion sickness, console video games, and head-mounted displays.
Merhi, Omar; Faugloire, Elise; Flanagan, Moira; Stoffregen, Thomas A
2007-10-01
We evaluated the nauseogenic properties of commercial console video games (i.e., games that are sold to the public) when presented through a head-mounted display. Anecdotal reports suggest that motion sickness may occur among players of contemporary commercial console video games. Participants played standard console video games using an Xbox game system. We varied the participants' posture (standing vs. sitting) and the game (two Xbox games). Participants played for up to 50 min and were asked to discontinue if they experienced any symptoms of motion sickness. Sickness occurred in all conditions, but it was more common during standing. During seated play there were significant differences in head motion between sick and well participants before the onset of motion sickness. The results indicate that commercial console video game systems can induce motion sickness when presented via a head-mounted display and support the hypothesis that motion sickness is preceded by instability in the control of seated posture. Potential applications of this research include changes in the design of console video games and recommendations for how such systems should be used.
Advanced Spacesuit Informatics Software Design for Power, Avionics and Software Version 2.0
NASA Technical Reports Server (NTRS)
Wright, Theodore W.
2016-01-01
A description of the software design for the 2016 edition of the Informatics computer assembly of NASA's Advanced Extravehicular Mobility Unit (AEMU), also called the Advanced Spacesuit. The Informatics system is an optional part of the spacesuit assembly. It adds a graphical interface for displaying suit status, timelines, procedures, and warning information. It also provides an interface to the suit mounted camera for recording still images, video, and audio field notes.
A Macintosh-Based Scientific Images Video Analysis System
NASA Technical Reports Server (NTRS)
Groleau, Nicolas; Friedland, Peter (Technical Monitor)
1994-01-01
A set of experiments was designed at MIT's Man-Vehicle Laboratory in order to evaluate the effects of zero gravity on the human orientation system. During many of these experiments, the movements of the eyes are recorded on high quality video cassettes. The images must be analyzed off-line to calculate the position of the eyes at every moment in time. To this aim, I have implemented a simple inexpensive computerized system which measures the angle of rotation of the eye from digitized video images. The system is implemented on a desktop Macintosh computer, processes one play-back frame per second and exhibits adequate levels of accuracy and precision. The system uses LabVIEW, a digital output board, and a video input board to control a VCR, digitize video images, analyze them, and provide a user friendly interface for the various phases of the process. The system uses the Concept Vi LabVIEW library (Graftek's Image, Meudon la Foret, France) for image grabbing and displaying as well as translation to and from LabVIEW arrays. Graftek's software layer drives an Image Grabber board from Neotech (Eastleigh, United Kingdom). A Colour Adapter box from Neotech provides adequate video signal synchronization. The system also requires a LabVIEW driven digital output board (MacADIOS II from GW Instruments, Cambridge, MA) controlling a slightly modified VCR remote control used mainly to advance the video tape frame by frame.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-04-19
... INTERNATIONAL TRADE COMMISSION [Investigation No. 337-TA-828] Certain Video Displays and Products Using and Containing Same; Investigations: Terminations, Modifications and Rulings AGENCY: U.S. International Trade Commission. ACTION: Notice. SUMMARY: Notice is hereby given that the U.S. International...
Natural 3D content on glasses-free light-field 3D cinema
NASA Astrophysics Data System (ADS)
Balogh, Tibor; Nagy, Zsolt; Kovács, Péter Tamás.; Adhikarla, Vamsi K.
2013-03-01
This paper presents a complete framework for capturing, processing and displaying free viewpoint video on a large scale immersive light-field display. We present a combined hardware-software solution to visualize free viewpoint 3D video on a cinema-sized screen. The new glasses-free 3D projection technology can support a larger audience than existing autostereoscopic displays. We introduce and describe our new display system, including optical and mechanical design considerations, the capturing system and render cluster for producing the 3D content, and the various software modules driving the system. The indigenous display is the first of its kind, equipped with front-projection light-field HoloVizio technology, controlling up to 63 MP. It has all the advantages of previous light-field displays and in addition allows a more flexible arrangement with a larger screen size, matching cinema or meeting room geometries, yet is simpler to set up. The software system makes it possible to show 3D applications in real time, besides the natural content captured from dense camera arrangements as well as from sparse cameras covering a wider baseline. Our software system on the GPU accelerated render cluster can also visualize pre-recorded Multi-view Video plus Depth (MVD4) videos on this light-field glasses-free cinema system, interpolating and extrapolating missing views.
RAPID: A random access picture digitizer, display, and memory system
NASA Technical Reports Server (NTRS)
Yakimovsky, Y.; Rayfield, M.; Eskenazi, R.
1976-01-01
RAPID is a system capable of providing convenient digital analysis of video data in real-time. It has two modes of operation. The first allows for continuous digitization of an EIA RS-170 video signal. Each frame in the video signal is digitized and written in 1/30 of a second into RAPID's internal memory. The second mode leaves the content of the internal memory independent of the current input video. In both modes of operation the image contained in the memory is used to generate an EIA RS-170 composite video output signal representing the digitized image in the memory so that it can be displayed on a monitor.
Standardized access, display, and retrieval of medical video
NASA Astrophysics Data System (ADS)
Bellaire, Gunter; Steines, Daniel; Graschew, Georgi; Thiel, Andreas; Bernarding, Johannes; Tolxdorff, Thomas; Schlag, Peter M.
1999-05-01
The system presented here enhances documentation and data-secured, second-opinion facilities by integrating video sequences into DICOM 3.0. We present an implementation for a medical video server extended by a DICOM interface. Security mechanisms conforming with DICOM are integrated to enable secure internet access. Digital video documents of diagnostic and therapeutic procedures should be examined regarding the clip length and size necessary for second opinion and manageable with today's hardware. Image sources relevant for this paper include 3D laparoscope, 3D surgical microscope, 3D open surgery camera, synthetic video, and monoscopic endoscopes, etc. The global DICOM video concept and three special workplaces for distinct applications are described. Additionally, an approach is presented to analyze the motion of the endoscopic camera for future automatic video-cutting. Digital stereoscopic video sequences (DSVS) are especially in demand for surgery. Therefore, DSVS are also integrated into the DICOM video concept. Results are presented describing the suitability of stereoscopic display techniques for the operating room.
A new AS-display as part of the MIRO lightweight robot for surgical applications
NASA Astrophysics Data System (ADS)
Grossmann, Christoph M.
2010-02-01
The DLR MIRO is the second generation of versatile robot arms for surgical applications, developed at the Institute for Robotics and Mechatronics at Deutsches Zentrum für Luft- und Raumfahrt (DLR) in Oberpfaffenhofen, Germany. With its low weight of 10 kg and dimensions similar to those of the human arm, the MIRO robot can assist the surgeon directly at the operating table where space is scarce. The planned scope of applications of this robot arm ranges from guiding a laser unit for the precise separation of bone tissue in orthopedics to positioning holes for bone screws, robot assisted endoscope guidance, and on to the multi-robot concept for endoscopic minimally invasive surgery. A stereo-endoscope delivers two full HD video streams that can even be augmented with information, e.g. vectors indicating the forces that act on the surgical tool at any given moment. SeeFront's new autostereoscopic 3D display SF 2223, being a part of the MIRO assembly, will let the surgeon view the stereo video stream in excellent quality, in real time and without the need for any viewing aids. The presentation is meant to provide an insight into the principles at the basis of the SeeFront 3D technology and how they allow the creation of autostereoscopic display solutions ranging from smallest "stamp-sized" displays to 30" desktop versions, all of which provide comfortable freedom of movement for the viewer along with excellent 3D image quality.
An Airborne Programmable Digital to Video Converter Interface and Operation Manual.
1981-02-01
Keywords: scan converter, video display, television display. … programmable cathode ray tube (CRT) controller which is accessed by the CPU to permit operation in a wide variety of modes. The Alphanumeric Generator
Potential Health Hazards of Video Display Terminals.
ERIC Educational Resources Information Center
Murray, William E.; And Others
In response to a request from three California unions to evaluate potential health hazards from the use of video display terminals (VDT's) in information processing applications, the National Institute for Occupational Safety and Health (NIOSH) conducted a limited field investigation of three companies in the San Francisco-Oakland Bay Area. A…
Accuracy of pulse oximetry in assessing heart rate of infants in the neonatal intensive care unit.
Singh, Jasbir K S B; Kamlin, C Omar F; Morley, Colin J; O'Donnell, Colm P F; Donath, Susan M; Davis, Peter G
2008-05-01
To determine the accuracy of pulse oximetry measurement of heart rate in the neonatal intensive care unit. Stable preterm infants were monitored with a pulse oximeter and an ECG. The displays of both monitors were captured on video. Heart rate data from both monitors, including measures of signal quality, were extracted and analysed using Bland-Altman plots. In 30 infants the mean (SD) difference between heart rate measured by pulse oximetry and electrocardiography was -0.4 (12) beats per minute. Accuracy was maintained when the signal quality or perfusion was low. Pulse oximetry may provide a useful measurement of heart rate in the neonatal intensive care unit. Studies of this technique in the delivery room are indicated.
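The Bland-Altman analysis used in this study reduces to a few summary statistics: the mean of the paired differences (the bias), their standard deviation, and the 95% limits of agreement. A minimal sketch in Python, using illustrative heart-rate values rather than the study's data:

```python
import numpy as np

def bland_altman(a, b):
    """Bland-Altman statistics for two paired measurement series.

    Returns the bias (mean difference), the SD of the differences,
    and the 95% limits of agreement (bias +/- 1.96 SD).
    """
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)  # sample SD, as is conventional here
    return bias, sd, (bias - 1.96 * sd, bias + 1.96 * sd)

# Paired heart-rate readings (illustrative values, not study data):
oximeter = [120, 135, 142, 128, 150]
ecg      = [122, 133, 140, 130, 149]
bias, sd, (lo, hi) = bland_altman(oximeter, ecg)
```

A bias near zero with narrow limits of agreement, as the study reports (-0.4 bpm, SD 12), indicates the two methods can be used interchangeably.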
Display Sharing: An Alternative Paradigm
NASA Technical Reports Server (NTRS)
Brown, Michael A.
2010-01-01
The current Johnson Space Center (JSC) Mission Control Center (MCC) Video Transport System (VTS) provides flight controllers and management the ability to meld raw video from various sources with telemetry to improve situational awareness. However, maintaining a separate infrastructure for video delivery and integration of video content with data adds significant complexity and cost to the system. When considering alternative architectures for a VTS, the current system's ability to share specific computer displays in their entirety to other locations, such as large projector systems, flight control rooms, and back support rooms throughout the facilities and centers, must be incorporated into any new architecture. Internet Protocol (IP)-based systems also support video delivery and integration, and generally have an advantage in terms of cost and maintainability. Although IP-based systems are versatile, the task of sharing a computer display from one workstation to another can be time consuming for an end-user and inconvenient to administer at a system level. The objective of this paper is to present a prototype display sharing enterprise solution. Display sharing is a system that delivers image sharing across the LAN while simultaneously managing bandwidth, supporting encryption, enabling recovery and resynchronization following a loss of signal, and minimizing latency. Additional critical elements include image scaling support, multi-sharing, ease of initial integration and configuration, integration with desktop window managers, collaboration tools, and host and recipient controls. The goal of this paper is to summarize the various elements of an IP-based display sharing system that can be used in today's control center environment.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-05-24
... Accessible Emergency Information; Apparatus Requirements for Emergency Information and Video Description...] Accessible Emergency Information; Apparatus Requirements for Emergency Information and Video Description... manufacturers of devices that display video programming to ensure that certain apparatus are able to make...
Remote Video Auditing in the Surgical Setting.
Pedersen, Anne; Getty Ritter, Elizabeth; Beaton, Megan; Gibbons, David
2017-02-01
Remote video auditing, a method first adopted by the food preparation industry, was later introduced to the health care industry as a novel approach to improving hand hygiene practices. This strategy yielded tremendous and sustained improvement, causing leaders to consider the potential effects of such technology on the complex surgical environment. This article outlines the implementation of remote video auditing and the first year of activity, outcomes, and measurable successes in a busy surgery department in the eastern United States. A team of anesthesia care providers, surgeons, and OR personnel used low-resolution cameras, large-screen displays, and cell phone alerts to make significant progress in three domains: application of the Universal Protocol for preventing wrong site, wrong procedure, wrong person surgery; efficiency metrics; and cleaning compliance. The use of cameras with real-time auditing and results-sharing created an environment of continuous learning, compliance, and synergy, which has resulted in a safer, cleaner, and more efficient OR. Copyright © 2017 AORN, Inc. Published by Elsevier Inc. All rights reserved.
Yamasaki, Toshiki; Moritake, Kouzo; Nagai, Hidemasa; Kimura, Yoriyoshi
2002-06-01
A technique to integrate ultrasonography and endoscopy is described for transsphenoidal surgery to prevent intraoperative internal carotid artery (ICA)-related, life-threatening complications such as aneurysmal formation and carotid-cavernous fistula. The ultrasound unit helps avoid direct injury to the ICA. The technical advantage of this system is the miniature 1-mm diameter microvascular probe, which does not disturb the operative field. An arterial or venous flow source of even an invisible vessel can be detected easily, noninvasively, and reproducibly. Real-time information with a 100% detection rate for the ICA is helpful for predicting localization even in the intracavernous portion, where the ICA is invisible. The endoscope unit can visualize the dead angle areas of the operating microscope by varying the endoscopic gateways and display on a "picture-in-picture" system. The advantage of both devices is the integration with a video processor, so that the real-time information from each unit can be switched intraoperatively onto the display as required. This method is of particular help for removing lesions with intracavernous invasion or encasement of the ICA.
Xu, Jing; Wong, Kevin; Jian, Yifan; Sarunic, Marinko V
2014-02-01
In this report, we describe a graphics processing unit (GPU)-accelerated processing platform for real-time acquisition and display of flow contrast images with Fourier domain optical coherence tomography (FDOCT) in mouse and human eyes in vivo. Motion contrast from blood flow is processed using the speckle variance OCT (svOCT) technique, which relies on the acquisition of multiple B-scan frames at the same location and tracking the change of the speckle pattern. Real-time mouse and human retinal imaging using two different custom-built OCT systems with processing and display performed on GPU are presented with an in-depth analysis of performance metrics. The display output included structural OCT data, en face projections of the intensity data, and the svOCT en face projections of retinal microvasculature; these results compare projections with and without speckle variance in the different retinal layers to reveal significant contrast improvements. As a demonstration, videos of real-time svOCT for in vivo human and mouse retinal imaging are included in our results. The capability of performing real-time svOCT imaging of the retinal vasculature may be a useful tool in a clinical environment for monitoring disease-related pathological changes in the microcirculation such as diabetic retinopathy.
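The speckle variance (svOCT) contrast described above is, at its core, a per-pixel variance computed across repeated B-scans acquired at the same location: static tissue produces a stable speckle pattern (low variance), while flowing blood decorrelates it (high variance). A minimal sketch of that computation, with synthetic arrays standing in for OCT frames:

```python
import numpy as np

def speckle_variance(bscans):
    """svOCT contrast: per-pixel intensity variance across N B-scans
    acquired at the same location. bscans: array shaped (N, depth, width).
    """
    bscans = np.asarray(bscans, float)
    return bscans.var(axis=0)

# Static tissue keeps the same speckle frame to frame; flow changes it.
rng = np.random.default_rng(0)
static = np.full((8, 4, 4), 100.0)             # unchanging speckle
flow = 100.0 + rng.normal(0, 10, (8, 4, 4))    # decorrelating speckle
sv_static = speckle_variance(static)
sv_flow = speckle_variance(flow)
```

On a GPU, as in the paper, the same reduction is performed per pixel across the frame stack, which is what makes real-time display feasible.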
Use of Internet Resources in the Biology Lecture Classroom.
ERIC Educational Resources Information Center
Francis, Joseph W.
2000-01-01
Introduces internet resources that are available for instructional use in biology classrooms. Provides information on video-based technologies to create and capture video sequences, interactive web sites that allow interaction with biology simulations, online texts, and interactive videos that display animated video sequences. (YDS)
NASA Technical Reports Server (NTRS)
Sawyer, Kevin; Jacobsen, Robert; Aiken, Edwin W. (Technical Monitor)
1995-01-01
NASA Ames Research Center and the US Army are developing the Rotorcraft Aircrew Systems Concepts Airborne Laboratory (RASCAL) using a Sikorsky UH-60 helicopter for the purpose of flight systems research. A primary use of the RASCAL is in-flight simulation, for which the visual scene will use computer generated imagery and synthetic vision. This research is made possible in part by a full color wide field of view Helmet Mounted Display (HMD) system that provides high performance color imagery suitable for daytime operations in a flight-rated package. This paper describes the design and performance characteristics of the HMD system. Emphasis is placed on the design specifications, testing, and integration into the aircraft of Kaiser Electronics' RASCAL HMD system that was designed and built under contract for NASA. The optical performance and design of the helmet-mounted display unit will be discussed as well as the unique capabilities provided by the system's Programmable Display Generator (PDG).
Payload specialist station study: Volume 2, part 3: Program analysis and planning for phase C/D
NASA Technical Reports Server (NTRS)
1976-01-01
The controls and displays (C&D) required at the Orbiter aft-flight deck (AFD) and the core C&D required at the Payload Specialist Station (PSS) are identified in this document. The AFD C&D Concept consists of a multifunction display system (MFDS) and elements of multiuse mission support equipment (MMSE). The MFDS consists of two CRTs, a display electronics unit (DEU), and a keyboard. The MMSE consists of a manual pointing controller (MPC), five digit numeric displays, 10 character alphanumeric legends, event timers, analog meters, rotary and toggle switches. The MMSE may be hardwired to the experiment, or interface with a data bus at the PSS for signal processing. The MFDS has video capability, with alphanumeric and graphic overlay features, on one CRT and alphanumeric and graphic (tricolor) capability on a second CRT. The DEU will have the capability to communicate, via redundant data buses, with both the spacelab experiment and subsystem computers.
Secure Video Surveillance System Acquisition Software
DOE Office of Scientific and Technical Information (OSTI.GOV)
2009-12-04
The SVSS Acquisition Software collects and displays video images from two cameras through a VPN, and stores the images on a collection controller. The software is configured to allow a user to enter a time window to display up to 2 1/2 hours of video for review. The software collects images from the cameras at a rate of 1 image per second and automatically deletes images older than 3 hours. The software code operates in a Linux environment and can be run in a virtual machine on Windows XP. The Sandia software integrates the different COTS software packages to build the video review system.
3D video coding: an overview of present and upcoming standards
NASA Astrophysics Data System (ADS)
Merkle, Philipp; Müller, Karsten; Wiegand, Thomas
2010-07-01
An overview of existing and upcoming 3D video coding standards is given. Various different 3D video formats are available, each with individual pros and cons. The 3D video formats can be separated into two classes: video-only formats (such as stereo and multiview video) and depth-enhanced formats (such as video plus depth and multiview video plus depth). Since all these formats consist of at least two video sequences and possibly additional depth data, efficient compression is essential for the success of 3D video applications and technologies. For the video-only formats the H.264 family of coding standards already provides efficient and widely established compression algorithms: H.264/AVC simulcast, H.264/AVC stereo SEI message, and H.264/MVC. For the depth-enhanced formats standardized coding algorithms are currently being developed. New and specially adapted coding approaches are necessary, as the depth or disparity information included in these formats has significantly different characteristics than video and is not displayed directly, but used for rendering. Motivated by evolving market needs, MPEG has started an activity to develop a generic 3D video standard within the 3DVC ad-hoc group. Key features of the standard are efficient and flexible compression of depth-enhanced 3D video representations and decoupling of content creation and display requirements.
36 CFR 1194.24 - Video and multimedia products.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 36 Parks, Forests, and Public Property 3 2010-07-01 2010-07-01 false Video and multimedia products... Video and multimedia products. (a) All analog television displays 13 inches and larger, and computer... training and informational video and multimedia productions which support the agency's mission, regardless...
36 CFR 1194.24 - Video and multimedia products.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 36 Parks, Forests, and Public Property 3 2011-07-01 2011-07-01 false Video and multimedia products... Video and multimedia products. (a) All analog television displays 13 inches and larger, and computer... training and informational video and multimedia productions which support the agency's mission, regardless...
36 CFR 1194.24 - Video and multimedia products.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 36 Parks, Forests, and Public Property 3 2012-07-01 2012-07-01 false Video and multimedia products... Video and multimedia products. (a) All analog television displays 13 inches and larger, and computer... training and informational video and multimedia productions which support the agency's mission, regardless...
36 CFR 1194.24 - Video and multimedia products.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 36 Parks, Forests, and Public Property 3 2014-07-01 2014-07-01 false Video and multimedia products... Video and multimedia products. (a) All analog television displays 13 inches and larger, and computer... training and informational video and multimedia productions which support the agency's mission, regardless...
Portable Airborne Laser System Measures Forest-Canopy Height
NASA Technical Reports Server (NTRS)
Nelson, Ross
2005-01-01
The Portable Airborne Laser System (PALS) is a combination of laser ranging, video imaging, positioning, and data-processing subsystems designed for measuring the heights of forest canopies along linear transects from tens to thousands of kilometers long. Unlike prior laser ranging systems designed to serve the same purpose, the PALS is not restricted to use aboard a single aircraft of a specific type: the PALS fits into two large suitcases that can be carried to any convenient location, and the PALS can be installed in almost any local aircraft for hire, thereby making it possible to sample remote forests at relatively low cost. The initial cost and the cost of repairing the PALS are also lower because the PALS hardware consists mostly of commercial off-the-shelf (COTS) units that can easily be replaced in the field. The COTS units include a laser ranging transceiver, a charge-coupled-device camera that images the laser-illuminated targets, a differential Global Positioning System (dGPS) receiver capable of operation within the Wide Area Augmentation System, a video titler, a video cassette recorder (VCR), and a laptop computer equipped with two serial ports. The VCR and computer are powered by batteries; the other units are powered at 12 VDC from the 28-VDC aircraft power system via a low-pass filter and a voltage converter. The dGPS receiver feeds location and time data, at an update rate of 0.5 Hz, to the video titler and the computer. The laser ranging transceiver, operating at a sampling rate of 2 kHz, feeds its serial range and amplitude data stream to the computer. The analog video signal from the CCD camera is fed into the video titler wherein the signal is annotated with position and time information. The titler then forwards the annotated signal to the VCR for recording on 8-mm tapes.
The dGPS and laser range and amplitude serial data streams are processed by software that displays the laser trace and the dGPS information as they are fed into the computer, subsamples the laser range and amplitude data, interleaves the subsampled data with the dGPS information, and records the resulting interleaved data stream.
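The logging step described above, subsampling the 2 kHz laser stream and interleaving it with the 0.5 Hz dGPS fixes by time, can be sketched as follows. The stream rates come from the text; the subsampling factor and record shapes are assumed for illustration:

```python
def interleave(laser, gps, laser_hz=2000, keep_every=10):
    """Subsample a laser range stream and interleave time-stamped dGPS
    fixes, mirroring the PALS logging flow. laser: list of range samples
    at laser_hz; gps: list of (time_s, fix) tuples in time order.
    The keep_every subsampling factor is an assumed illustration.
    """
    out, gi = [], 0
    for i, sample in enumerate(laser):
        if i % keep_every:
            continue  # keep only every keep_every-th laser sample
        t = i / laser_hz
        # emit any dGPS fixes that occurred up to this laser sample
        while gi < len(gps) and gps[gi][0] <= t:
            out.append(("gps", gps[gi]))
            gi += 1
        out.append(("laser", t, sample))
    return out

# 4 seconds of laser data at 2 kHz, two dGPS fixes:
records = interleave(list(range(8000)), [(0.0, "fixA"), (2.0, "fixB")])
```

Each dGPS record lands in the output stream just before the first laser sample at or after its timestamp, preserving time order in the merged log.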
IR sensors and imagers in networked operations
NASA Astrophysics Data System (ADS)
Breiter, Rainer; Cabanski, Wolfgang
2005-05-01
"Network-centric Warfare" is a common slogan describing an overall concept of networked operation of sensors, information and weapons to gain command and control superiority. Referring to IR sensors, integration and fusion of different channels like day/night or SAR images or the ability to spread image data among various users are typical requirements. Looking for concrete implementations the German Army future infantryman IdZ is an example where a group of ten soldiers build a unit with every soldier equipped with a personal digital assistant (PDA) for information display, day photo camera and a high performance thermal imager for every unit. The challenge to allow networked operation among such a unit is bringing information together and distribution over a capable network. So also AIM's thermal reconnaissance and targeting sight HuntIR which was selected for the IdZ program provides this capabilities by an optional wireless interface. Besides the global approach of Network-centric Warfare network technology can also be an interesting solution for digital image data distribution and signal processing behind the FPA replacing analog video networks or specific point to point interfaces. The resulting architecture can provide capabilities of data fusion from e.g. IR dual-band or IR multicolor sensors. AIM has participated in a German/UK collaboration program to produce a demonstrator for day/IR video distribution via Gigabit Ethernet for vehicle applications. In this study Ethernet technology was chosen for network implementation and a set of electronics was developed for capturing video data of IR and day imagers and Gigabit Ethernet video distribution. The demonstrator setup follows the requirements of current and future vehicles having a set of day and night imager cameras and a crew station with several members. 
Replacing the analog video path by a digital video network also makes it easy to implement embedded training by simply feeding the network with simulation data. The paper addresses the special capabilities, requirements and design considerations of IR sensors and imagers in applications like thermal weapon sights and UAVs for networked operating infantry forces.
Benady-Chorney, Jessica; Yau, Yvonne; Zeighami, Yashar; Bohbot, Veronique D; West, Greg L
2018-03-21
Action video game players (aVGPs) display increased performance in attention-based tasks and enhanced procedural motor learning. In parallel, the anterior cingulate cortex (ACC) is centrally implicated in specific types of reward-based learning and attentional control, the execution or inhibition of motor commands, and error detection. These processes are hypothesized to support aVGP in-game performance and enhanced learning though in-game feedback. We, therefore, tested the hypothesis that habitual aVGPs would display increased cortical thickness compared with nonvideo game players (nonVGPs). Results showed that the aVGP group (n=17) displayed significantly higher levels of cortical thickness specifically in the dorsal ACC compared with the nonVGP group (n=16). Results are discussed in the context of previous findings examining video game experience, attention/performance, and responses to affective components such as pain and fear.
Video monitoring system for car seat
NASA Technical Reports Server (NTRS)
Elrod, Susan Vinz (Inventor); Dabney, Richard W. (Inventor)
2004-01-01
A video monitoring system for use with a child car seat has video camera(s) mounted in the car seat. The video images are wirelessly transmitted to a remote receiver/display encased in a portable housing that can be removably mounted in the vehicle in which the car seat is installed.
36 CFR § 1194.24 - Video and multimedia products.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 36 Parks, Forests, and Public Property 3 2013-07-01 2012-07-01 true Video and multimedia products... § 1194.24 Video and multimedia products. (a) All analog television displays 13 inches and larger, and... circuitry. (c) All training and informational video and multimedia productions which support the agency's...
Peden, Robert G; Mercer, Rachel; Tatham, Andrew J
2016-10-01
To investigate whether 'surgeon's eye view' videos provided via head-mounted displays can improve skill acquisition and satisfaction in basic surgical training compared with conventional wet-lab teaching. A prospective randomised study of 14 medical students with no prior suturing experience, randomised to 3 groups: 1) conventional teaching; 2) head-mounted display-assisted teaching and 3) head-mounted display self-learning. All were instructed in interrupted suturing followed by 15 minutes' practice. Head-mounted displays provided a 'surgeon's eye view' video demonstrating the technique, available during practice. Subsequently students undertook a practical assessment, where suturing was videoed and graded by masked assessors using a 10-point surgical skill score (1 = very poor technique, 10 = very good technique). Students completed a questionnaire assessing confidence and satisfaction. Suturing ability after teaching was similar between groups (P = 0.229, Kruskal-Wallis test). Median surgical skill scores were 7.5 (range 6-10), 6 (range 3-8) and 7 (range 1-7) following head-mounted display-assisted teaching, conventional teaching, and head-mounted display self-learning respectively. There was good agreement between graders regarding surgical skill scores (rho.c = 0.599, r = 0.603), and no difference in number of sutures placed between groups (P = 0.120). The head-mounted display-assisted teaching group reported greater enjoyment than those attending conventional teaching (P = 0.033). Head-mounted display self-learning was regarded as least useful (7.4 vs 9.0 for conventional teaching, P = 0.021), but more enjoyable than conventional teaching (9.6 vs 8.0, P = 0.050). Teaching augmented with head-mounted displays was significantly more enjoyable than conventional teaching. Students undertaking self-directed learning using head-mounted displays with pre-recorded videos had comparable skill acquisition to those attending traditional wet-lab tutorials. 
Copyright © 2016 IJS Publishing Group Ltd. Published by Elsevier Ltd. All rights reserved.
A color video display technique for flow field surveys
NASA Technical Reports Server (NTRS)
Winkelmann, A. E.; Tsao, C. P.
1982-01-01
A computer-driven color video display technique has been developed for the presentation of wind tunnel flow field survey data. The results of both qualitative and quantitative flow field surveys can be presented in high-spatial-resolution, color-coded displays. The technique has been used for data obtained with a hot-wire probe, a split-film probe, a Conrad (pitch) probe, and a 5-tube pressure probe in surveys above and behind a wing with partially stalled and fully stalled flow.
NASA Astrophysics Data System (ADS)
Lee, Seokhee; Lee, Kiyoung; Kim, Man Bae; Kim, JongWon
2005-11-01
In this paper, we propose a design of a multi-view stereoscopic HD video transmission system based on MPEG-21 Digital Item Adaptation (DIA). It focuses on the compatibility and scalability to meet various user preferences and terminal capabilities. There exists a large variety of multi-view 3D HD video types according to the methods for acquisition, display, and processing. By following the MPEG-21 DIA framework, the multi-view stereoscopic HD video is adapted according to user feedback. A user can be served multi-view stereoscopic video which corresponds with his or her preferences and terminal capabilities. In our preliminary prototype, we verify that the proposed design can support two different types of display device (stereoscopic and auto-stereoscopic) and switching viewpoints between two available viewpoints.
Display system employing acousto-optic tunable filter
NASA Technical Reports Server (NTRS)
Lambert, James L. (Inventor)
1995-01-01
An acousto-optic tunable filter (AOTF) is employed to generate a display by driving the AOTF with a RF electrical signal comprising modulated red, green, and blue video scan line signals and scanning the AOTF with a linearly polarized, pulsed light beam, resulting in encoding of color video columns (scan lines) of an input video image into vertical columns of the AOTF output beam. The AOTF is illuminated periodically as each acoustically-encoded scan line fills the cell aperture of the AOTF. A polarizing beam splitter removes the unused first order beam component of the AOTF output and, if desired, overlays a real world scene on the output plane. Resolutions as high as 30,000 lines are possible, providing holographic display capability.
Display system employing acousto-optic tunable filter
NASA Technical Reports Server (NTRS)
Lambert, James L. (Inventor)
1993-01-01
An acousto-optic tunable filter (AOTF) is employed to generate a display by driving the AOTF with a RF electrical signal comprising modulated red, green, and blue video scan line signals and scanning the AOTF with a linearly polarized, pulsed light beam, resulting in encoding of color video columns (scan lines) of an input video image into vertical columns of the AOTF output beam. The AOTF is illuminated periodically as each acoustically-encoded scan line fills the cell aperture of the AOTF. A polarizing beam splitter removes the unused first order beam component of the AOTF output and, if desired, overlays a real world scene on the output plane. Resolutions as high as 30,000 lines are possible, providing holographic display capability.
Virtual displays for 360-degree video
NASA Astrophysics Data System (ADS)
Gilbert, Stephen; Boonsuk, Wutthigrai; Kelly, Jonathan W.
2012-03-01
In this paper we describe a novel approach for comparing users' spatial cognition when using different depictions of 360-degree video on a traditional 2D display. By using virtual cameras within a game engine and texture mapping of these camera feeds to an arbitrary shape, we were able to offer users a 360-degree interface composed of four 90-degree views, two 180-degree views, or one 360-degree view of the same interactive environment. An example experiment is described using these interfaces. This technique for creating alternative displays of wide-angle video facilitates the exploration of how compressed or fish-eye distortions affect spatial perception of the environment and can benefit the creation of interfaces for surveillance and remote system teleoperation.
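The partitioning described above, one 360-degree field of view sliced into four 90-degree or two 180-degree views, amounts to splitting the horizontal axis of an equirectangular frame into equal column bands. A minimal sketch of that slicing, with a toy one-row "panorama" standing in for a video frame:

```python
import numpy as np

def split_views(pano, n_views=4):
    """Split an equirectangular panorama (H x W) into n_views equal
    horizontal fields of view, e.g. four 90-degree views of a
    360-degree frame. Any remainder columns are dropped.
    """
    h, w = pano.shape[:2]
    step = w // n_views
    return [pano[:, i * step:(i + 1) * step] for i in range(n_views)]

# One column per degree: a 1 x 360 "panorama" whose values are degrees.
pano = np.arange(360).reshape(1, 360)
views = split_views(pano)          # four 90-degree views
halves = split_views(pano, 2)      # two 180-degree views
```

In the paper's setup the bands come from virtual game-engine cameras rather than a single equirectangular source, but the interface logic, which view contains which span of azimuth, is the same.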
NASA Technical Reports Server (NTRS)
Boton, Matthew L.; Bass, Ellen J.; Comstock, James R., Jr.
2006-01-01
The evaluation of human-centered systems can be performed using a variety of different methodologies. This paper describes a human-centered systems evaluation methodology where participants watch 5-second non-interactive videos of a system in operation before supplying judgments and subjective measures based on the information conveyed in the videos. This methodology was used to evaluate the ability of different textures and fields of view to convey spatial awareness in synthetic vision systems (SVS) displays. It produced significant results for both judgment based and subjective measures. This method is compared to other methods commonly used to evaluate SVS displays based on cost, the amount of experimental time required, experimental flexibility, and the type of data provided.
Segmented cold cathode display panel
NASA Technical Reports Server (NTRS)
Payne, Leslie (Inventor)
1998-01-01
The present invention is a video display device that utilizes the novel concept of generating an electronically controlled pattern of electron emission at the output of a segmented photocathode. This pattern of electron emission is amplified via a channel plate. The result is that an intense electronic image can be accelerated toward a phosphor, thus creating a bright video image. This novel arrangement allows one to produce a full-color flat video display capable of implementation in large formats. In an alternate arrangement, the present invention is provided without the channel plate and a porous conducting surface is provided instead. In this alternate arrangement, the brightness of the image is reduced but the cost of the overall device is significantly lowered because fabrication complexity is significantly decreased.
NASA Astrophysics Data System (ADS)
Starks, Michael R.
1990-09-01
A variety of low cost devices for capturing, editing and displaying field sequential 60 cycle stereoscopic video have recently been marketed by 3D TV Corp. and others. When properly used, they give very high quality images with most consumer and professional equipment. Our stereoscopic multiplexers for creating and editing field sequential video in NTSC or component (SVHS, Betacam, RGB) and the Home 3D Theater system employing LCD eyeglasses have made 3D movies and television available to a large audience.
ERIC Educational Resources Information Center
Plavnick, Joshua B.
2012-01-01
Video modeling is an effective and efficient methodology for teaching new skills to individuals with autism. New technology may enhance video modeling as smartphones or tablet computers allow for portable video displays. However, the reduced screen size may decrease the likelihood of attending to the video model for some children. The present…
47 CFR 79.109 - Activating accessibility features.
Code of Federal Regulations, 2014 CFR
2014-10-01
... ACCESSIBILITY OF VIDEO PROGRAMMING Apparatus § 79.109 Activating accessibility features. (a) Requirements... video programming transmitted in digital format simultaneously with sound, including apparatus designed to receive or display video programming transmitted in digital format using Internet protocol, with...
Sequential color video to parallel color video converter
NASA Technical Reports Server (NTRS)
1975-01-01
The engineering design, development, breadboard fabrication, test, and delivery of a breadboard field sequential color video to parallel color video converter is described. The converter was designed for use onboard a manned space vehicle to eliminate a flickering TV display picture and to reduce the weight and bulk of previous ground conversion systems.
Code of Federal Regulations, 2012 CFR
2012-01-01
... concerning the accessibility of videos, DVDs, and other audio-visual presentations shown on-aircraft to... meet concerning the accessibility of videos, DVDs, and other audio-visual presentations shown on... videos, DVDs, and other audio-visual displays played on aircraft for safety purposes, and all such new...
Code of Federal Regulations, 2013 CFR
2013-01-01
... concerning the accessibility of videos, DVDs, and other audio-visual presentations shown on-aircraft to... meet concerning the accessibility of videos, DVDs, and other audio-visual presentations shown on... videos, DVDs, and other audio-visual displays played on aircraft for safety purposes, and all such new...
47 CFR Appendix - Technical Appendix 1
Code of Federal Regulations, 2010 CFR
2010-10-01
... display program material that has been encoded in any and all of the video formats contained in Table A3... frame rate of the transmitted video format. 2. Output Formats Equipment shall support 4:3 center cut-out... for composite video (yellow). Output shall produce video with ITU-R BT.500-11 quality scale of Grade 4...
High dynamic range adaptive real-time smart camera: an overview of the HDR-ARTiST project
NASA Astrophysics Data System (ADS)
Lapray, Pierre-Jean; Heyrman, Barthélémy; Ginhac, Dominique
2015-04-01
Standard cameras capture only a fraction of the information that is visible to the human visual system. This is specifically true for natural scenes including areas of low and high illumination due to transitions between sunlit and shaded areas. When capturing such a scene, many cameras are unable to store the full Dynamic Range (DR), resulting in low-quality video where details are concealed in shadows or washed out by sunlight. The imaging technique that can overcome this problem is called HDR (High Dynamic Range) imaging. This paper describes a complete smart camera built around a standard off-the-shelf LDR (Low Dynamic Range) sensor and a Virtex-6 FPGA board. This smart camera, called HDR-ARtiSt (High Dynamic Range Adaptive Real-time Smart camera), is able to produce a real-time HDR live color video stream by recording and combining multiple acquisitions of the same scene while varying the exposure time. This technique appears to be one of the most appropriate and cheapest solutions for enhancing the dynamic range of real-life environments. HDR-ARtiSt embeds real-time multiple capture, HDR processing, data display, and transfer of an HDR color video at full sensor resolution (1280 × 1024 pixels) at 60 frames per second. The main contributions of this work are: (1) a Multiple Exposure Control (MEC) dedicated to smart image capture with three alternating exposure times that are dynamically evaluated from frame to frame, (2) a Multi-streaming Memory Management Unit (MMMU) dedicated to the memory read/write operations of the three parallel video streams corresponding to the different exposure times, (3) HDR creation by combining the video streams using a specific hardware version of the Debevec technique, and (4) Global Tone Mapping (GTM) of the HDR scene for display on a standard LCD monitor.
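The exposure-fusion step described in this abstract can be sketched in a few lines. This is a minimal, illustrative Python version of Debevec-style radiance recovery for a single pixel, assuming a linear sensor response; function names and the weighting function are illustrative, not taken from the HDR-ARtiSt hardware:

```python
# Hedged sketch: Debevec-style fusion of several LDR exposures into one HDR
# radiance estimate per pixel. Assumes a linear sensor response; names and
# weights are illustrative, not from the HDR-ARtiSt implementation.
import math

def hat_weight(z, z_min=0.05, z_max=0.95):
    """Triangular weight favouring well-exposed pixel values in [0, 1]."""
    if z < z_min or z > z_max:
        return 0.0
    mid = 0.5 * (z_min + z_max)
    return (z - z_min) if z <= mid else (z_max - z)

def fuse_radiance(samples):
    """samples: list of (pixel_value in [0, 1], exposure_time in seconds).
    Returns a weighted log-average radiance estimate for one pixel."""
    num = den = 0.0
    for z, t in samples:
        w = hat_weight(z)
        if w > 0.0 and z > 0.0:
            num += w * (math.log(z) - math.log(t))
            den += w
    return math.exp(num / den) if den > 0.0 else 0.0

# Three exposures of the same scene point: value doubles with exposure time,
# so all three agree on a radiance of 20 (in arbitrary linear units).
hdr = fuse_radiance([(0.2, 0.01), (0.4, 0.02), (0.8, 0.04)])
```

On real data the per-exposure estimates disagree near saturation and noise floor, which is exactly what the hat weighting suppresses.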
Psychophysical Comparison Of A Video Display System To Film By Using Bone Fracture Images
NASA Astrophysics Data System (ADS)
Seeley, George W.; Stempski, Mark; Roehrig, Hans; Nudelman, Sol; Capp, M. P.
1982-11-01
This study investigated the possibility of using a video display system instead of film for radiological diagnosis. Also investigated were the relationships between characteristics of the system and the observer's accuracy level. Radiologists were used as observers. Thirty-six clinical bone fractures were separated into two matched sets of equal difficulty. The difficulty parameters and ratings were defined by a panel of expert bone radiologists at the Arizona Health Sciences Center, Radiology Department. These two sets of fracture images were then matched with verifiably normal images using parameters such as film type, angle of view, size, portion of anatomy, the film's density range, and the patient's age and sex. The two sets of images were then displayed, using a counterbalanced design, to each of the participating radiologists for diagnosis. Whenever a response was given to a video image, the radiologist used enhancement controls to "window in" on the grey levels of interest. During the TV phase, the radiologist was required to record the settings of the calibrated controls of the image enhancer during interpretation. At no time did any single radiologist see the same film in both modes. The study was designed so that a standard analysis of variance would show the effects of viewing mode (film vs TV), the effects due to stimulus set, and any interactions with observers. A signal detection analysis of observer performance was also performed. Results indicate that the TV display system is almost as good as the view box display; an average of only two more errors were made on the TV display. The difference between the systems has been traced to four observers who had poor accuracy on a small number of films viewed on the TV display. 
This information is now being correlated with the video system's signal-to-noise ratio (SNR), signal transfer function (STF), and resolution measurements, to obtain information on the basic display and enhancement requirements for a video-based radiologic system. Due to time constraints the results are not included here. The complete results of this study will be reported at the conference.
Real-Time Visualization of Tissue Ischemia
NASA Technical Reports Server (NTRS)
Bearman, Gregory H. (Inventor); Chrien, Thomas D. (Inventor); Eastwood, Michael L. (Inventor)
2000-01-01
A real-time display of tissue ischemia is discussed, which comprises three CCD video cameras, each with a narrow-bandwidth filter at the correct wavelength. The cameras simultaneously view an area of tissue suspected of having ischemic areas through beamsplitters. The output from each camera is adjusted to give the correct signal intensity for combining with the others into an image for display. If necessary, a digital signal processor (DSP) can implement algorithms for image enhancement prior to display. Current DSP engines are fast enough to give real-time display. Measurement at three wavelengths, combined into a real-time Red-Green-Blue (RGB) video display with a digital signal processing (DSP) board to implement image algorithms, provides direct visualization of ischemic areas.
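The three-channel combination step described above can be sketched as a per-pixel gain-and-clip merge. The channel gains and pixel data below are illustrative, not the patent's calibration procedure:

```python
# Hedged sketch: merging three narrow-band camera signals into one RGB frame
# after per-channel gain adjustment, as the ischemia display does.
# Gains and pixel values are illustrative.
def combine_rgb(ch1, ch2, ch3, gains=(1.0, 1.0, 1.0)):
    """Each ch*: flat list of sensor values in [0, 1]. Returns a list of
    (r, g, b) tuples, each channel scaled by its gain and clipped to [0, 1]."""
    def clip(v):
        return min(1.0, max(0.0, v))
    return [(clip(a * gains[0]), clip(b * gains[1]), clip(c * gains[2]))
            for a, b, c in zip(ch1, ch2, ch3)]

# Two pixels; the first channel is boosted 2x, and over-range values saturate.
frame = combine_rgb([0.2, 0.9], [0.5, 0.5], [0.8, 0.1], gains=(2.0, 1.0, 1.0))
```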
Display nonlinearity in digital image processing for visual communications
NASA Astrophysics Data System (ADS)
Peli, Eli
1992-11-01
The luminance emitted from a cathode ray tube (CRT) display is a nonlinear function (the gamma function) of the input video signal voltage. In most analog video systems, compensation for this nonlinear transfer function is implemented in the camera amplifiers. When CRT displays are used to present psychophysical stimuli in vision research, the specific display nonlinearity usually is measured and accounted for to ensure that the luminance of each pixel in the synthetic image properly represents the intended value. However, when using digital image processing, the linear analog-to-digital converters store a digital image that is nonlinearly related to the displayed or recorded image. The effect of this nonlinear transformation on a variety of image-processing applications used in visual communications is described.
Display nonlinearity in digital image processing for visual communications
NASA Astrophysics Data System (ADS)
Peli, Eli
1991-11-01
The luminance emitted from a cathode ray tube (CRT) display is a nonlinear function (the gamma function) of the input video signal voltage. In most analog video systems, compensation for this nonlinear transfer function is implemented in the camera amplifiers. When CRT displays are used to present psychophysical stimuli in vision research, the specific display nonlinearity usually is measured and accounted for to ensure that the luminance of each pixel in the synthetic image properly represents the intended value. However, when using digital image processing, the linear analog-to-digital converters store a digital image that is nonlinearly related to the displayed or recorded image. This paper describes the effect of this nonlinear transformation on a variety of image-processing applications used in visual communications.
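The gamma relationship described in the two records above can be sketched directly. The value gamma = 2.2 below is a typical CRT figure used only as an illustration; the papers stress that the actual nonlinearity of a given display must be measured:

```python
# Hedged sketch of CRT display nonlinearity: emitted luminance follows
# L = V**gamma for a normalized signal V, so a linear digital image must be
# gamma-corrected before display. gamma = 2.2 is illustrative only.
GAMMA = 2.2

def crt_luminance(v, gamma=GAMMA):
    """Normalized luminance emitted for a normalized signal voltage v."""
    return v ** gamma

def gamma_correct(v, gamma=GAMMA):
    """Pre-distort a linear pixel value so the CRT emits it faithfully."""
    return v ** (1.0 / gamma)

# Correcting and then displaying recovers the intended linear value.
v = 0.5
recovered = crt_luminance(gamma_correct(v))
```

This is also why linearly stored digital images look too dark on an uncorrected CRT: mid-grey 0.5 is emitted as 0.5**2.2, roughly 0.22 of peak luminance.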
Recent progress of flexible AMOLED displays
NASA Astrophysics Data System (ADS)
Pang, Huiqing; Rajan, Kamala; Silvernail, Jeff; Mandlik, Prashant; Ma, Ruiqing; Hack, Mike; Brown, Julie J.; Yoo, Juhn S.; Jung, Sang-Hoon; Kim, Yong-Cheol; Byun, Seung-Chan; Kim, Jong-Moo; Yoon, Soo-Young; Kim, Chang-Dong; Hwang, Yong-Kee; Chung, In-Jae; Fletcher, Mark; Green, Derek; Pangle, Mike; McIntyre, Jim; Smith, Randal D.
2011-03-01
Significant progress has been made in recent years in flexible AMOLED displays and numerous prototypes have been demonstrated. Replacing rigid glass with flexible substrates and thin-film encapsulation makes displays thinner, lighter, and non-breakable - all attractive features for portable applications. Flexible AMOLEDs equipped with phosphorescent OLEDs are considered one of the best candidates for low-power, rugged, full-color video applications. Recently, we have demonstrated a portable communication display device, built upon a full-color 4.3-inch HVGA foil display with a resolution of 134 dpi using an all-phosphorescent OLED frontplane. The prototype is shaped into a thin and rugged housing that will fit over a user's wrist, providing situational awareness and enabling the wearer to see real-time video and graphics information.
Data acquisition and analysis in the DOE/NASA Wind Energy Program
NASA Technical Reports Server (NTRS)
Neustadter, H. E.
1980-01-01
Four categories of data systems, each responding to a distinct information need are presented. The categories are: control, technology, engineering and performance. The focus is on the technology data system which consists of the following elements: sensors which measure critical parameters such as wind speed and direction, output power, blade loads and strains, and tower vibrations; remote multiplexing units (RMU) mounted on each wind turbine which frequency modulate, multiplex and transmit sensor outputs; the instrumentation available to record, process and display these signals; and centralized computer analysis of data. The RMU characteristics and multiplexing techniques are presented. Data processing is illustrated by following a typical signal through instruments such as the analog tape recorder, analog to digital converter, data compressor, digital tape recorder, video (CRT) display, and strip chart recorder.
Xiao, Yan; Dexter, Franklin; Hu, Peter; Dutton, Richard P
2008-02-01
On the day of surgery, real-time information of both room occupancy and activities within the operating room (OR) is needed for management of staff, equipment, and unexpected events. A status display system showed color OR video with controllable image quality and showed times that patients entered and exited each OR (obtained automatically). The system was installed and its use was studied in a 6-OR trauma suite and at four locations in a 19-OR tertiary suite. Trauma staff were surveyed for their perceptions of the system. Evidence of staff acceptance of distributed OR video included its operational use for >3 yr in the two suites, with no administrative complaints. Individuals of all job categories used the video. Anesthesiologists were the most frequent users for more than half of the days (95% confidence interval [CI] >50%) in the tertiary ORs. The OR charge nurses accessed the video mostly early in the day when the OR occupancy was high. In comparison (P < 0.001), anesthesiologists accessed it mostly at the end of the workday when occupancy was declining and few cases were starting. Of all 30-min periods during which the video was accessed in the trauma suite, many accesses (95% CI >42%) occurred in periods with no cases starting or ending (i.e., the video was used during the middle of cases). The three stated reasons for using video that had median surveyed responses of "very useful" were "to see if cases are finished," "to see if a room is ready," and "to see when cases are about to finish." Our nurses and physicians both accepted and used distributed OR video as it provided useful information, regardless of whether real-time display of milestones was available (e.g., through anesthesia information system data).
Ethernet direct display: a new dimension for in-vehicle video connectivity solutions
NASA Astrophysics Data System (ADS)
Rowley, Vincent
2009-05-01
To improve the local situational awareness (LSA) of personnel in light or heavily armored vehicles, most military organizations recognize the need to equip their fleets with high-resolution digital video systems. Several related upgrade programs are already in progress and, almost invariably, COTS IP/Ethernet is specified as the underlying transport mechanism. The high bandwidths, long reach, networking flexibility, scalability, and affordability of IP/Ethernet make it an attractive choice. There are significant technical challenges, however, in achieving high-performance, real-time video connectivity over the IP/Ethernet platform. As an early pioneer in performance-oriented video systems based on IP/Ethernet, Pleora Technologies has developed core expertise in meeting these challenges and applied a singular focus to innovating within the required framework. The company's field-proven iPORT™ Video Connectivity Solution is deployed successfully in thousands of real-world applications for medical, military, and manufacturing operations. Pleora's latest innovation is eDisplay™, a small-footprint, low-power, highly efficient IP engine that acquires video from an Ethernet connection and sends it directly to a standard HDMI/DVI monitor for real-time viewing. More costly PCs are not required. This paper describes Pleora's eDisplay IP engine in more detail. It demonstrates how, in concert with other elements of the end-to-end iPORT Video Connectivity Solution, the engine can be used to build standards-based, in-vehicle video systems that increase the safety and effectiveness of military personnel while fully leveraging the advantages of the low-cost COTS IP/Ethernet platform.
Optical Head-Mounted Computer Display for Education, Research, and Documentation in Hand Surgery.
Funk, Shawn; Lee, Donald H
2016-01-01
Intraoperative photography and video capture are important for the hand surgeon. Recently, optical head-mounted computer displays have been introduced as a means of capturing photographs and videos. In this article, we discuss this new technology and review its potential use in hand surgery. Copyright © 2016 American Society for Surgery of the Hand. Published by Elsevier Inc. All rights reserved.
Molecular dynamics simulations through GPU video games technologies
Loukatou, Styliani; Papageorgiou, Louis; Fakourelis, Paraskevas; Filntisi, Arianna; Polychronidou, Eleftheria; Bassis, Ioannis; Megalooikonomou, Vasileios; Makałowski, Wojciech; Vlachakis, Dimitrios; Kossida, Sophia
2016-01-01
Bioinformatics is the scientific field that focuses on the application of computer technology to the management of biological information. Over the years, bioinformatics applications have been used to store, process and integrate biological and genetic information, using a wide range of methodologies. One of the foremost techniques used to understand the physical movements of atoms and molecules is molecular dynamics (MD). MD is an in silico method to simulate the physical motions of atoms and molecules under certain conditions. It has become a strategic technique and now plays a key role in many areas of the exact sciences, such as chemistry, biology, physics and medicine. Due to their complexity, MD calculations can require enormous amounts of computer memory and time, and their execution has therefore been a major problem. Despite the huge computational cost, molecular dynamics has traditionally been implemented on computers built around a central processing unit (CPU). Graphics processing unit (GPU) computing technology was first designed with the goal of improving video games, by rapidly creating and displaying images in a frame buffer such as a screen. The hybrid GPU-CPU implementation, combined with parallel computing, is a novel technology for performing a wide range of calculations. GPUs have been proposed and used to accelerate many scientific computations, including MD simulations. Herein, we describe these methodologies, developed initially for video games, and how they are now applied in MD simulations. PMID:27525251
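The inner loop that such GPU MD codes accelerate is a numerical integrator applied independently to many particles. A minimal velocity-Verlet step for a single particle in a harmonic potential, with illustrative units and constants, looks like this:

```python
# Hedged sketch: one velocity-Verlet integration step, the kind of
# per-particle inner loop MD codes offload to GPUs. A harmonic potential
# and unit mass/stiffness are illustrative choices, not any real force field.
def force(x, k=1.0):
    return -k * x  # harmonic restoring force F = -kx

def verlet_step(x, v, dt, m=1.0):
    a = force(x) / m
    x_new = x + v * dt + 0.5 * a * dt * dt
    a_new = force(x_new) / m
    v_new = v + 0.5 * (a + a_new) * dt
    return x_new, v_new

# Integrate for 1000 steps; the symplectic integrator keeps total energy
# (kinetic + potential) close to its initial value of 0.5.
x, v = 1.0, 0.0
for _ in range(1000):
    x, v = verlet_step(x, v, dt=0.01)
energy = 0.5 * v * v + 0.5 * x * x
```

On a GPU the same update runs in parallel across all atoms, with the force evaluation (here trivial) dominating the cost in real force fields.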
NASA Astrophysics Data System (ADS)
Froehlich, Jan; Grandinetti, Stefan; Eberhardt, Bernd; Walter, Simon; Schilling, Andreas; Brendel, Harald
2014-03-01
High quality video sequences are required for the evaluation of tone mapping operators and high dynamic range (HDR) displays. We provide scenic and documentary scenes with a dynamic range of up to 18 stops. The scenes are staged using professional film lighting, make-up and set design to enable the evaluation of image and material appearance. To address challenges for HDR-displays and temporal tone mapping operators, the sequences include highlights entering and leaving the image, brightness changing over time, high contrast skin tones, specular highlights and bright, saturated colors. HDR-capture is carried out using two cameras mounted on a mirror-rig. To achieve a cinematic depth of field, digital motion picture cameras with Super-35mm size sensors are used. We provide HDR-video sequences to serve as a common ground for the evaluation of temporal tone mapping operators and HDR-displays. They are available to the scientific community for further research.
Apparatus for monitoring crystal growth
Sachs, Emanual M.
1981-01-01
A system and method are disclosed for monitoring the growth of a crystalline body from a liquid meniscus in a furnace. The system provides an improved human/machine interface so as to reduce operator stress, strain and fatigue while improving the conditions for observation and control of the growing process. The system comprises suitable optics for forming an image of the meniscus and body wherein the image is anamorphic so that the entire meniscus can be viewed with good resolution in both the width and height dimensions. The system also comprises a video display for displaying the anamorphic image. The video display includes means for enhancing the contrast between any two contrasting points in the image. The video display also comprises a signal averager for averaging the intensity of at least one preselected portion of the image. The value of the average intensity can in turn be utilized to control the growth of the body. The system and method are also capable of observing and monitoring multiple processes.
Method of monitoring crystal growth
Sachs, Emanual M.
1982-01-01
A system and method are disclosed for monitoring the growth of a crystalline body from a liquid meniscus in a furnace. The system provides an improved human/machine interface so as to reduce operator stress, strain and fatigue while improving the conditions for observation and control of the growing process. The system comprises suitable optics for forming an image of the meniscus and body wherein the image is anamorphic so that the entire meniscus can be viewed with good resolution in both the width and height dimensions. The system also comprises a video display for displaying the anamorphic image. The video display includes means for enhancing the contrast between any two contrasting points in the image. The video display also comprises a signal averager for averaging the intensity of at least one preselected portion of the image. The value of the average intensity can in turn be utilized to control the growth of the body. The system and method are also capable of observing and monitoring multiple processes.
Polyplanar optical display electronics
DOE Office of Scientific and Technical Information (OSTI.GOV)
DeSanto, L.; Biscardi, C.
The Polyplanar Optical Display (POD) is a unique display screen which can be used with any projection source. The prototype ten-inch display is two inches thick and has a matte black face which allows for high-contrast images. The prototype being developed is a form, fit and functional replacement display for the B-52 aircraft, which uses a monochrome ten-inch display. In order to achieve a long lifetime, the new display uses a 100-milliwatt green solid-state laser (10,000 hr life) at 532 nm as its light source. To produce real-time video, the laser light is being modulated by a Digital Light Processing (DLP™) chip manufactured by Texas Instruments. In order to use the solid-state laser as the light source and also fit within the constraints of the B-52 display, the Digital Micromirror Device (DMD™) circuit board is removed from the Texas Instruments DLP light engine assembly. Due to the compact architecture of the projection system within the display chassis, the DMD™ chip is operated remotely from the Texas Instruments circuit board. The authors discuss the operation of the DMD™ divorced from the light engine and the interfacing of the DMD™ board with various video formats (CVBS, Y/C or S-video, and RGB) including the format specific to the B-52 aircraft. A brief discussion of the electronics required to drive the laser is also presented.
Storing Data and Video on One Tape
NASA Technical Reports Server (NTRS)
Nixon, J. H.; Cater, J. P.
1985-01-01
Microprocessor-based system originally developed for anthropometric research merges digital data with video images for storage on video cassette recorder. Combined signals later retrieved and displayed simultaneously on television monitor. System also extracts digital portion of stored information and transfers it to solid-state memory.
47 CFR 79.107 - User interfaces provided by digital apparatus.
Code of Federal Regulations, 2014 CFR
2014-10-01
... SERVICES ACCESSIBILITY OF VIDEO PROGRAMMING Apparatus § 79.107 User interfaces provided by digital... States and designed to receive or play back video programming transmitted in digital format simultaneously with sound, including apparatus designed to receive or display video programming transmitted in...
Armellino, Donna; Hussain, Erfan; Schilling, Mary Ellen; Senicola, William; Eichorn, Ann; Dlugacz, Yosef; Farber, Bruce F
2012-01-01
Hand hygiene is a key measure in preventing infections. We evaluated healthcare worker (HCW) hand hygiene with the use of remote video auditing with and without feedback. The study was conducted in a 17-bed intensive care unit from June 2008 through June 2010. We placed cameras with views of every sink and hand sanitizer dispenser to record hand hygiene of HCWs. Sensors in doorways identified when an individual(s) entered/exited. When video auditors observed a HCW performing hand hygiene upon entering/exiting, they assigned a pass; if not, a fail was assigned. Hand hygiene was measured during a 16-week period of remote video auditing without feedback and a 91-week period with feedback of data. Performance feedback was continuously displayed on electronic boards mounted within the hallways, and summary reports were delivered to supervisors by electronic mail. During the 16-week prefeedback period, hand hygiene rates were less than 10% (3933/60 542); in the 16-week postfeedback period, the rate was 81.6% (59 627/73 080). The increase was maintained through 75 weeks at 87.9% (262 826/298 860). The data suggest that remote video auditing combined with feedback produced a significant and sustained improvement in hand hygiene.
Kwon, M-W; Kim, S-C; Yoon, S-E; Ho, Y-S; Kim, E-S
2015-02-09
A new object tracking mask-based novel-look-up-table (OTM-NLUT) method is proposed and implemented on graphics-processing-units (GPUs) for real-time generation of holographic videos of three-dimensional (3-D) scenes. Since the proposed method is designed to be matched with software and memory structures of the GPU, the number of compute-unified-device-architecture (CUDA) kernel function calls and the computer-generated hologram (CGH) buffer size of the proposed method have been significantly reduced. It therefore results in a great increase of the computational speed of the proposed method and enables real-time generation of CGH patterns of 3-D scenes. Experimental results show that the proposed method can generate 31.1 frames of Fresnel CGH patterns with 1,920 × 1,080 pixels per second, on average, for three test 3-D video scenarios with 12,666 object points on three GPU boards of NVIDIA GTX TITAN, and confirm the feasibility of the proposed method in the practical application of electro-holographic 3-D displays.
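The per-point Fresnel fringe accumulation that look-up-table CGH methods like the one above accelerate can be sketched in plain Python. This is a naive reference version with illustrative hologram size, pixel pitch, and wavelength, not the paper's GPU/NLUT implementation:

```python
# Hedged sketch: naive point-source Fresnel CGH accumulation. Each object
# point contributes a quadratic-phase cosine fringe to the hologram plane.
# Sizes, pitch, and wavelength are illustrative, not from the paper.
import math

def fresnel_cgh(points, nx, ny, pitch=8e-6, wavelength=532e-9):
    """points: list of (x, y, z, amplitude) in metres (z = distance to the
    hologram plane). Returns nx*ny fringe values as a flat list
    (real cosine hologram)."""
    k = 2.0 * math.pi / wavelength
    holo = [0.0] * (nx * ny)
    for px, py, pz, amp in points:
        for j in range(ny):
            y = (j - ny / 2) * pitch
            for i in range(nx):
                x = (i - nx / 2) * pitch
                r2 = (x - px) ** 2 + (y - py) ** 2
                holo[j * nx + i] += amp * math.cos(k * r2 / (2.0 * pz))
    return holo

# One on-axis point 10 cm behind a tiny 16x16 hologram.
h = fresnel_cgh([(0.0, 0.0, 0.1, 1.0)], nx=16, ny=16)
```

The cost is proportional to object points times hologram pixels, which is why look-up tables, mask-based culling, and GPU parallelism are needed for the frame rates reported above.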
NASA Technical Reports Server (NTRS)
Deen, Robert G.; Andres, Paul M.; Mortensen, Helen B.; Parizher, Vadim; McAuley, Myche; Bartholomew, Paul
2009-01-01
The XVD [X-Windows VICAR (video image communication and retrieval) Display] computer program offers an interactive display of VICAR and PDS (planetary data systems) images. It is designed to efficiently display multiple-GB images and runs on Solaris, Linux, or Mac OS X systems using X-Windows.
Efficient stereoscopic contents file format on the basis of ISO base media file format
NASA Astrophysics Data System (ADS)
Kim, Kyuheon; Lee, Jangwon; Suh, Doug Young; Park, Gwang Hoon
2009-02-01
Many 3D contents have been widely used for multimedia services; however, real 3D video contents have been adopted only for limited applications such as specially designed 3D cinemas. This is because of the difficulty of capturing real 3D video contents and the limitations of the display devices available in the market. Recently, however, diverse types of display devices for stereoscopic video contents have been released in the market. In particular, a mobile phone with a stereoscopic camera has been released, which allows a user, as a consumer, to have more realistic experiences without glasses, and also, as a content creator, to take stereoscopic images or record stereoscopic video contents. However, a user can only store and display these acquired stereoscopic contents with his/her own devices due to the absence of a common file format for these contents. This limitation prevents users from sharing their contents with other users, which makes it difficult for the market for stereoscopic contents to expand. Therefore, this paper proposes a common file format, on the basis of the ISO base media file format, for stereoscopic contents, which enables users to store and exchange pure stereoscopic contents. This technology is also currently under development as an international standard of MPEG, called the stereoscopic video application format.
NASA Technical Reports Server (NTRS)
2004-01-01
Ever wonder whether a still shot from a home video could serve as a "picture perfect" photograph worthy of being framed and proudly displayed on the mantle? Wonder no more. A critical imaging code used to enhance video footage taken from spaceborne imaging instruments is now available within a portable photography tool capable of producing an optimized, high-resolution image from multiple video frames.
Code of Federal Regulations, 2013 CFR
2013-04-01
... recordings and/or digital records shall be provided to the Commission upon request. (x) Video library log. A... events on video and/or digital recordings. The displayed date and time shall not significantly obstruct... each gaming machine change booth. (w) Video recording and/or digital record retention. (1) All video...
Code of Federal Regulations, 2012 CFR
2012-04-01
... recordings and/or digital records shall be provided to the Commission upon request. (x) Video library log. A... events on video and/or digital recordings. The displayed date and time shall not significantly obstruct... each gaming machine change booth. (w) Video recording and/or digital record retention. (1) All video...
Code of Federal Regulations, 2014 CFR
2014-04-01
... recordings and/or digital records shall be provided to the Commission upon request. (x) Video library log. A... events on video and/or digital recordings. The displayed date and time shall not significantly obstruct... each gaming machine change booth. (w) Video recording and/or digital record retention. (1) All video...
Effectiveness of Immersive Videos in Inducing Awe: An Experimental Study.
Chirico, Alice; Cipresso, Pietro; Yaden, David B; Biassoni, Federica; Riva, Giuseppe; Gaggioli, Andrea
2017-04-27
Awe, a complex emotion composed of the appraisal components of vastness and need for accommodation, is a profound and often meaningful experience. Despite its importance, psychologists have only recently begun empirical study of awe. At the experimental level, a main issue concerns how to elicit high-intensity awe experiences in the lab. To address this issue, Virtual Reality (VR) has been proposed as a potential solution. Here, we considered the most realistic form of VR: immersive videos. 42 participants watched immersive and normal 2D videos displaying awe or neutral content. After the experience, they rated their level of awe and sense of presence. Participants' psychophysiological responses (BVP, SC, sEMG) were recorded during the whole video exposure. We hypothesized that the immersive video condition would increase the intensity of awe experienced compared to 2D screen videos. Results indicated that immersive videos significantly enhanced the self-reported intensity of awe as well as the sense of presence. Immersive videos displaying awe content also led to higher parasympathetic activation. These findings indicate the advantages of using VR in the experimental study of awe, with methodological implications for the study of other emotions.
77 FR 75617 - 36(b)(1) Arms Sales Notification
Federal Register 2010, 2011, 2012, 2013, 2014
2012-12-21
... transmittal, policy justification, and Sensitivity of Technology. Dated: December 18, 2012. Aaron Siegel... Processor Cabinets, 2 Video Wall Screen and Projector Systems, 46 Flat Panel Displays, and 2 Distributed Video Systems), 2 ship sets AN/SPQ-15 Digital Video Distribution Systems, 2 ship sets Operational...
Real-time rendering for multiview autostereoscopic displays
NASA Astrophysics Data System (ADS)
Berretty, R.-P. M.; Peters, F. J.; Volleberg, G. T. G.
2006-02-01
In video systems, the introduction of 3D video might be the next revolution after the introduction of color. Nowadays, multiview autostereoscopic displays are in development. Such displays offer various views at the same time, and the image content observed by the viewer depends upon his position with respect to the screen. His left eye receives a signal that is different from what his right eye gets; this gives, provided the signals have been properly processed, the impression of depth. The various views produced on the display differ with respect to their associated camera positions. A possible video format that is suited for rendering from different camera positions is the usual 2D format enriched with a depth-related channel: for each pixel in the video, not only its color is given but also, e.g., its distance to a camera. In this paper we provide a theoretical framework for the parallactic transformations, which relates captured and observed depths to screen and image disparities. Moreover, we present an efficient real-time rendering algorithm that uses forward mapping to reduce aliasing artefacts and that deals properly with occlusions. For improved perceived resolution, we take the relative position of the color subpixels and the optics of the lenticular screen into account. Sophisticated filtering techniques result in high-quality images.
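A minimal sketch of a parallactic depth-to-disparity mapping of the kind such a framework formalizes follows; the viewing geometry and parameter values are illustrative assumptions, not the paper's notation:

```python
# Hedged sketch: screen disparity for a point perceived at a given depth
# behind the screen plane, from similar triangles between the two eyes and
# the point. All parameter values are illustrative.
def disparity_px(depth_m, eye_sep_m=0.065, view_dist_m=3.0,
                 screen_w_m=1.0, screen_w_px=1920):
    """Signed horizontal disparity in pixels for a point at depth_m behind
    the screen plane (positive = behind the screen)."""
    # Similar triangles: d / e = depth / (view_dist + depth)
    d_m = eye_sep_m * depth_m / (view_dist_m + depth_m)
    return d_m * screen_w_px / screen_w_m

# A point on the screen plane has zero disparity; a point at (near) infinity
# approaches the full eye separation expressed in screen pixels.
on_screen = disparity_px(0.0)
far_away = disparity_px(1e9)
```

A renderer using a 2D-plus-depth format applies a mapping like this per pixel to shift colors into each view, which is where the forward-mapping and occlusion handling described above come in.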
Objective video presentation QoE predictor for smart adaptive video streaming
NASA Astrophysics Data System (ADS)
Wang, Zhou; Zeng, Kai; Rehman, Abdul; Yeganeh, Hojatollah; Wang, Shiqi
2015-09-01
How to deliver videos to consumers over the network for optimal quality-of-experience (QoE) has been the central goal of modern video delivery services. Surprisingly, despite the large volume of videos being delivered every day through various systems attempting to improve visual QoE, the actual QoE of end consumers is not properly assessed, let alone used as the key factor in making critical decisions at the video hosting, network, and receiving sites. Real-world video streaming systems typically use bitrate as the main video presentation quality indicator, but using the same bitrate to encode different video content can result in drastically different visual QoE, which is further affected by the display device and viewing condition of each individual consumer who receives the video. To correct this, we have to put QoE back in the driver's seat and redesign video delivery systems. To achieve this goal, a major challenge is to find an objective video presentation QoE predictor that is accurate, fast, easy to use, display-device adaptive, and provides meaningful QoE predictions across resolution and content. We propose the newly developed SSIMplus index (https://ece.uwaterloo.ca/~z70wang/research/ssimplus/) for this role. We demonstrate that, based on SSIMplus, one can develop a smart adaptive video streaming strategy that leads to much smoother visual QoE than is possible using existing adaptive bitrate video streaming approaches. Furthermore, SSIMplus finds many more applications: in live and file-based quality monitoring, in benchmarking video encoders and transcoders, and in guiding network resource allocations.
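For background, the classic SSIM core that the SSIMplus family extends can be written in a single-window (global statistics) form. This sketch uses the common SSIM constants only; it is not SSIMplus itself, whose device- and resolution-adaptive extensions are not reproduced here:

```python
# Hedged sketch: global (single-window) SSIM between two equal-length
# grayscale signals, using the conventional constants K1=0.01, K2=0.03
# and an 8-bit dynamic range. Illustrative of the SSIM core only.
def ssim_global(x, y, dynamic_range=255.0):
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / (n - 1)
    vy = sum((b - my) ** 2 for b in y) / (n - 1)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (n - 1)
    c1 = (0.01 * dynamic_range) ** 2
    c2 = (0.03 * dynamic_range) ** 2
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx * mx + my * my + c1) * (vx + vy + c2))

# Identical signals score 1.0, the maximum similarity.
ref = [float(v % 256) for v in range(64)]
score_same = ssim_global(ref, ref)
```

Practical SSIM implementations compute these statistics in local sliding windows and average the map; the point of the sketch is only the luminance/contrast/structure comparison at the heart of the index.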
Mask, Lisa; Blanchard, Céline M
2011-09-01
The present study examines the protective role of an autonomous regulation of eating behaviors (AREB) on the relationship between trait body dissatisfaction and women's body image concerns and eating-related intentions in response to "thin ideal" media. Undergraduate women (n=138) were randomly assigned to view a "thin ideal" video or a neutral video. As hypothesized, trait body dissatisfaction predicted more negative affect and size dissatisfaction following exposure to the "thin ideal" video among women who displayed less AREB. Conversely, trait body dissatisfaction predicted greater intentions to monitor food intake and limit unhealthy foods following exposure to the "thin ideal" video among women who displayed more AREB. Copyright © 2011 Elsevier Ltd. All rights reserved.
First Use of Heads-up Display for Astronomy Education
NASA Astrophysics Data System (ADS)
Mumford, Holly; Hintz, E. G.; Jones, M.; Lawler, J.; Fisler, A.
2013-01-01
As part of our work on deaf education in a planetarium environment, we are exploring the use of heads-up display systems. This allows us to overlay an ASL interpreter on our educational videos. The overall goal is to allow a student to watch a full-dome planetarium show with the interpreter able to track to any portion of the video. We will present the first results of using a heads-up display to provide an ASL ‘sound-track’ for a deaf audience. This work is partially funded by NSF grant IIS-1124548 and by funding from the Sorenson Foundation.
Video image stabilization and registration--plus
NASA Technical Reports Server (NTRS)
Hathaway, David H. (Inventor)
2009-01-01
A method of stabilizing a video image displayed in multiple video fields of a video sequence includes the steps of: subdividing a selected area of a first video field into nested pixel blocks; determining horizontal and vertical translation of each of the pixel blocks in each of the pixel block subdivision levels from the first video field to a second video field; and determining translation of the image from the first video field to the second video field by determining a change in magnification of the image from the first video field to the second video field in each of horizontal and vertical directions, and determining shear of the image from the first video field to the second video field in each of the horizontal and vertical directions.
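The per-block translation step above can be sketched as an exhaustive sum-of-absolute-differences (SAD) search. The synthetic frames and search range are illustrative assumptions; the patent's nested subdivision and its magnification and shear estimates would be built on many such block motions.

```python
# Minimal block-matching sketch: find the (dx, dy) that best aligns a
# block of field 1 with field 2 by minimizing the sum of absolute
# differences over a small search window.
import random

def block_translation(f1, f2, x, y, size, search=3):
    """Return (dx, dy) minimizing SAD between f1's block at (x, y) and
    the shifted block in f2. Frames are 2D lists of intensities."""
    best, best_dxdy = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            sad = 0
            for j in range(size):
                for i in range(size):
                    sad += abs(f1[y + j][x + i] - f2[y + j + dy][x + i + dx])
            if best is None or sad < best:
                best, best_dxdy = sad, (dx, dy)
    return best_dxdy

# Synthetic test: field 2 is field 1 shifted right by 2 and down by 1.
random.seed(7)
w = h = 16
f1 = [[random.randrange(256) for _ in range(w)] for _ in range(h)]
f2 = [[f1[max(0, j - 1)][max(0, i - 2)] for i in range(w)] for j in range(h)]
print(block_translation(f1, f2, 5, 5, 4))  # → (2, 1)
```

Repeating this over nested block subdivisions, as the method describes, yields a motion field from which the global translation, magnification change, and shear can be estimated.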
Hardware and software improvements to a low-cost horizontal parallax holographic video monitor.
Henrie, Andrew; Codling, Jesse R; Gneiting, Scott; Christensen, Justin B; Awerkamp, Parker; Burdette, Mark J; Smalley, Daniel E
2018-01-01
Displays capable of true holographic video have been prohibitively expensive and difficult to build. With this paper, we present a suite of modularized hardware components and software tools needed to build a HoloMonitor with basic "hacker-space" equipment, highlighting improvements that have enabled the total materials cost to fall to $820, well below that of other holographic displays. It is our hope that the current level of simplicity, development, design flexibility, and documentation will enable the lay engineer, programmer, and scientist to relatively easily replicate, modify, and build upon our designs, bringing true holographic video to the masses.
Obstacles encountered in the development of the low vision enhancement system.
Massof, R W; Rickman, D L
1992-01-01
The Johns Hopkins Wilmer Eye Institute and the NASA Stennis Space Center are collaborating on the development of a new high technology low vision aid called the Low Vision Enhancement System (LVES). The LVES consists of a binocular head-mounted video display system, video cameras mounted on the head-mounted display, and real-time video image processing in a system package that is battery powered and portable. Through a phased development approach, several generations of the LVES can be made available to the patient in a timely fashion. This paper describes the LVES project with major emphasis on technical problems encountered or anticipated during the development process.
Design of video processing and testing system based on DSP and FPGA
NASA Astrophysics Data System (ADS)
Xu, Hong; Lv, Jun; Chen, Xi'ai; Gong, Xuexia; Yang, Chen'na
2007-12-01
Based on a high-speed Digital Signal Processor (DSP) and a Field Programmable Gate Array (FPGA), a miniaturized, low-power video capture, processing, and display system is presented. In this system, a triple-buffering scheme is used for capture and display, so that the application can always obtain a new buffer without waiting. The DSP provides image-processing capability and is used to detect the boundary of the workpiece's image. A video graduation technique is used to aim at the position to be tested, which also enhances the system's flexibility. A character-superposition technique, realized on the DSP, displays the test result on screen in character format. The system can process image information in real time, ensure test precision, and help to enhance product quality and quality management.
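The triple-buffering idea mentioned above can be sketched as follows. The class, names, and single-threaded bookkeeping are an assumed illustration, not the DSP/FPGA implementation: with three buffers, the capture side always has a free buffer to fill, regardless of what the display side is doing.

```python
# Triple-buffering sketch: one buffer being written, one holding the
# most recently completed frame, one possibly held by the reader.
class TripleBuffer:
    def __init__(self):
        self.buffers = [None, None, None]
        self.writing = 0     # buffer the capture side fills next
        self.ready = None    # most recently completed frame
        self.reading = None  # buffer the display side currently holds

    def write_frame(self, frame):
        self.buffers[self.writing] = frame
        self.ready = self.writing
        # pick the buffer that is neither ready nor being read
        self.writing = next(i for i in range(3)
                            if i != self.ready and i != self.reading)

    def read_frame(self):
        if self.ready is None:
            return None
        self.reading = self.ready
        return self.buffers[self.reading]

tb = TripleBuffer()
tb.write_frame("frame-1")
tb.write_frame("frame-2")   # writer never waits, even if nothing was read
print(tb.read_frame())       # → frame-2 (the freshest complete frame)
tb.write_frame("frame-3")
print(tb.read_frame())       # → frame-3
```

The reader always sees the freshest complete frame and the writer never blocks; a real capture/display pipeline would guard the index updates against concurrent access.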
Veligdan, James T.
2005-05-31
A video image is displayed from an optical panel by splitting the image into a plurality of image components, and then projecting the image components through corresponding portions of the panel to collectively form the image. Depth of the display is correspondingly reduced.
Veligdan, James T [Manorville, NY
2007-05-29
A video image is displayed from an optical panel by splitting the image into a plurality of image components, and then projecting the image components through corresponding portions of the panel to collectively form the image. Depth of the display is correspondingly reduced.
Mesoscale and Severe Storms (MASS) data management and analysis system
NASA Technical Reports Server (NTRS)
Hickey, J. S.; Karitani, S.; Dickerson, M.
1984-01-01
Progress on the Mesoscale and Severe Storms (MASS) data management and analysis system is described. An interactive atmospheric database management software package that converts four types of data (sounding, single level, grid, image) into standard random-access formats is implemented and integrated with the MASS AVE80 Series general-purpose plotting and graphics display data analysis software package. An interactive analysis and display graphics software package (AVE80) for analyzing large volumes of conventional and satellite-derived meteorological data is enhanced to provide imaging/color graphics display utilizing color video hardware integrated into the MASS computer system. Local and remote smart-terminal capability is provided by installing APPLE III computer systems in individual scientists' offices, integrated with the MASS system, thus providing color video, graphics, and character display of the four data types.
NASA Astrophysics Data System (ADS)
Kachejian, Kerry C.; Vujcic, Doug
1999-07-01
The Tactical Visualization Module (TVM) research effort will develop and demonstrate a portable, tactical information system to enhance the situational awareness of individual warfighters and small military units by providing real-time access to manned and unmanned aircraft, tactically mobile robots, and unattended sensors. TVM consists of a family of portable and hand-held devices being advanced into a next-generation, embedded capability. It enables warfighters to visualize the tactical situation by providing real-time video, imagery, maps, floor plans, and 'fly-through' video on demand. When combined with unattended ground sensors, such as Combat-Q, TVM permits warfighters to validate and verify tactical targets. The use of TVM results in faster target engagement times, increased survivability, and reduction of the potential for fratricide. TVM technology can support both mounted and dismounted tactical forces involved in land, sea, and air warfighting operations. As a PCMCIA card, TVM can be embedded in portable, hand-held, and wearable PCs. Thus, it leverages emerging tactical displays, including flat-panel, head-mounted displays. The end result of the program will be the demonstration of the system with U.S. Army and USMC personnel in an operational environment. Raytheon Systems Company, the U.S. Army Soldier Systems Command -- Natick RDE Center (SSCOM-NRDEC), and the Defense Advanced Research Projects Agency (DARPA) are partners in developing and demonstrating the TVM technology.
A method for the real-time construction of a full parallax light field
NASA Astrophysics Data System (ADS)
Tanaka, Kenji; Aoki, Soko
2006-02-01
We designed and implemented a light field acquisition and reproduction system for dynamic objects called LiveDimension, which serves as a 3D live video system for multiple viewers. The acquisition unit consists of circularly arranged NTSC cameras surrounding an object. The display consists of circularly arranged projectors and a rotating screen. The projectors are constantly projecting images captured by the corresponding cameras onto the screen. The screen rotates around an in-plane vertical axis at a sufficient speed so that it faces each of the projectors in sequence. Since the Lambertian surfaces of the screens are covered by light-collimating plastic films with vertical louver patterns that are used for the selection of appropriate light rays, viewers can only observe images from a projector located in the same direction as the viewer. Thus, the dynamic view of an object is dependent on the viewer's head position. We evaluated the system by projecting both objects and human figures and confirmed that the entire system can reproduce light fields with a horizontal parallax to display video sequences of 430x770 pixels at a frame rate of 45 fps. Applications of this system include product design reviews, sales promotion, art exhibits, fashion shows, and sports training with form checking.
Multilocation Video Conference By Optical Fiber
NASA Astrophysics Data System (ADS)
Gray, Donald J.
1982-10-01
An experimental system that permits interconnection of many offices in a single video conference is described. Video images transmitted to conference participants are selected by the conference chairman and switched by a microprocessor-controlled video switch. Speakers can, at their choice, transmit their own images or images of graphics they wish to display. Users are connected to the Switching Center by optical fiber subscriber loops that carry analog video, digitized telephone, data and signaling. The same system also provides user-selectable distribution of video program and video library material. Experience in the operation of the conference system is discussed.
ARINC 818 specification revisions enable new avionics architectures
NASA Astrophysics Data System (ADS)
Grunwald, Paul
2014-06-01
The ARINC 818 Avionics Digital Video Bus is the standard for cockpit video and has gained wide acceptance in both commercial and military cockpits; the Boeing 787, A350XWB, A400M, KC-46A, and many other aircraft use it. The ARINC 818 specification, initially released in 2006, has recently undergone a major update to address new avionics architectures and capabilities. Over the seven years since its release, projects had gone beyond the specification due to the complexity of new architectures and desired capabilities, such as video switching, bi-directional communication, data-only paths, and camera and sensor control provisions. The specification was therefore revised in 2013, and ARINC 818-2 was approved in November 2013. The revisions enable switching, stereo and 3D provisions, color sequential implementations, regions of interest, bi-directional communication, higher link rates, data-only transmission, and synchronization signals. This paper discusses each of the new capabilities and their impact on avionics and display architectures, especially when integrating large-area displays, stereoscopic displays, multiple displays, and systems that include a large number of sensors.
Rehm, K; Seeley, G W; Dallas, W J; Ovitt, T W; Seeger, J F
1990-01-01
One of the goals of our research in digital radiography has been to develop contrast-enhancement algorithms for the display of chest images on video devices, with the aim of preserving the diagnostic information presently available with film, some of which would normally be lost because of the smaller dynamic range of video monitors. The ASAHE algorithm discussed in this article was tested by investigating observer performance in a difficult detection task involving phantoms and simulated lung nodules, using film as the output medium. The results showed that the algorithm succeeds in providing contrast-enhanced, natural-looking chest images while maintaining diagnostic information. The algorithm did not effect an increase in nodule detectability, but this was not unexpected, because film is a medium capable of displaying a wide range of gray levels; it is sufficient at this stage to show that there is no degradation in observer performance. Future tests will evaluate the performance of the ASAHE algorithm in preparing chest images for video display.
Increased ISR operator capability utilizing a centralized 360° full motion video display
NASA Astrophysics Data System (ADS)
Andryc, K.; Chamberlain, J.; Eagleson, T.; Gottschalk, G.; Kowal, B.; Kuzdeba, P.; LaValley, D.; Myers, E.; Quinn, S.; Rose, M.; Rusiecki, B.
2012-06-01
In many situations, the difference between success and failure comes down to taking the right actions quickly. While the myriad electronic sensors available today can provide data quickly, they may overload the operator; only a contextualized, centralized display of information and an intuitive human interface can support the quick and effective decisions needed. If these decisions are to result in quick actions, the operator must be able to understand all of the data describing the environment. In this paper we present a novel approach to contextualizing multi-sensor data on a real-time, full-motion-video 360-degree imaging display. The system described could function as a primary display system for command and control in security, military, and observation posts. It can process and enable interactive control of multiple other sensor systems, and it enhances the value of those sensors by overlaying their information on a panorama of the surroundings. It can also interface to other systems, including auxiliary electro-optical systems, aerial video, contact management, Hostile Fire Indicators (HFI), and Remote Weapon Stations (RWS).
Interactive Video in Training. Computers in Personnel--Making Management Profitable.
ERIC Educational Resources Information Center
Copeland, Peter
Interactive video is achieved by merging the two powerful technologies of microcomputing and video. Using television as the vehicle for display, text and diagrams, filmic images, and sound can be used separately or in combination to achieve a specific training task. An interactive program can check understanding, determine progress, and challenge…
Riby, Deborah M; Whittle, Lisa; Doherty-Sneddon, Gwyneth
2012-01-01
The human face is a powerful elicitor of emotion, which induces autonomic nervous system responses. In this study, we explored physiological arousal and reactivity to affective facial displays shown in person and through video-mediated communication. We compared measures of physiological arousal and reactivity in typically developing individuals and those with the developmental disorders Williams syndrome (WS) and autism spectrum disorder (ASD). Participants attended to facial displays of happy, sad, and neutral expressions via live and video-mediated communication. Skin conductance level (SCL) indicated that live faces, but not video-mediated faces, increased arousal, especially for typically developing individuals and those with WS. There was less increase of SCL, and physiological reactivity was comparable for live and video-mediated faces in ASD. In typical development and WS, physiological reactivity was greater for live than for video-mediated communication. Individuals with WS showed lower SCL than typically developing individuals, suggesting possible hypoarousal in this group, even though they showed an increase in arousal for faces. The results are discussed in terms of the use of video-mediated communication with typically and atypically developing individuals and atypicalities of physiological arousal across neurodevelopmental disorder groups.
Organ donation video messaging in motor vehicle offices: results of a randomized trial.
Rodrigue, James R; Fleishman, Aaron; Fitzpatrick, Sean; Boger, Matthew
2015-12-01
Since nearly all registered organ donors in the United States signed up via a driver's license transaction, motor vehicle (MV) offices represent an important venue for organ donation education. To evaluate the impact of organ donation video messaging in MV offices. A 2-group (usual care vs usual care+video messaging) randomized trial with baseline, intervention, and follow-up assessment phases. Twenty-eight MV offices in Massachusetts. Usual care comprised education of MV clerks, display of organ donation print materials (ie, posters, brochures, signing mats), and a volunteer ambassador program. The intervention included video messaging with silent (subtitled) segments highlighting individuals affected by donation, playing on a recursive loop on monitors in MV waiting rooms. Aggregate monthly donor designation rates at MV offices (primary) and percentage of MV customers who registered as donors after viewing the video (secondary). Controlling for baseline donor designation rate, analysis of covariance showed a significant group effect for intervention phase (F=7.3, P=.01). The usual-care group had a significantly higher aggregate monthly donor designation rate than the intervention group had. In the logistic regression model of customer surveys (n=912), prior donor designation (β=-1.29, odds ratio [OR]=0.27 [95% CI=0.20-0.37], P<.001), white race (β=0.57, OR=1.77 [95% CI=1.23-2.54], P=.002), and viewing the intervention video (β=0.73, OR=1.54 [95% CI=1.24-2.60], P=.01) were statistically significant predictors of donor registration on the day of the survey. The relatively low uptake of the video intervention by customers most likely contributed to the negative trial finding.
Tactile Cueing for Target Acquisition and Identification
2005-09-01
method of coding tactile information, and the method of presenting elevation information were studied. Results: Subjects were divided into video game experienced (VGP) subjects and non-video game experienced (NVGP) subjects. VGPs showed a significantly lower target acquisition time with the 12...that video game players performed better with the highest level of tactile resolution, while non-video game players performed better with a simpler pattern and a lower resolution display.
An Attention-Information-Based Spatial Adaptation Framework for Browsing Videos via Mobile Devices
NASA Astrophysics Data System (ADS)
Li, Houqiang; Wang, Yi; Chen, Chang Wen
2007-12-01
With the growing popularity of personal digital assistant devices and smart phones, more and more consumers are becoming quite enthusiastic to appreciate videos via mobile devices. However, limited display size of the mobile devices has been imposing significant barriers for users to enjoy browsing high-resolution videos. In this paper, we present an attention-information-based spatial adaptation framework to address this problem. The whole framework includes two major parts: video content generation and video adaptation system. During video compression, the attention information in video sequences will be detected using an attention model and embedded into bitstreams with proposed supplement-enhanced information (SEI) structure. Furthermore, we also develop an innovative scheme to adaptively adjust quantization parameters in order to simultaneously improve the quality of overall encoding and the quality of transcoding the attention areas. When the high-resolution bitstream is transmitted to mobile users, a fast transcoding algorithm we developed earlier will be applied to generate a new bitstream for attention areas in frames. The new low-resolution bitstream containing mostly attention information, instead of the high-resolution one, will be sent to users for display on the mobile devices. Experimental results show that the proposed spatial adaptation scheme is able to improve both subjective and objective video qualities.
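The spatial-adaptation step can be sketched as a crop-and-fit computation around the detected attention area. The function, its ROI policy, and the numbers below are assumptions for illustration only; the paper itself operates on compressed bitstreams, with the attention information embedded in SEI messages and a fast transcoder producing the low-resolution stream.

```python
# Sketch: expand the attention region of interest (ROI) to the display's
# aspect ratio and clamp it to the frame, giving the crop rectangle to
# transcode for the mobile screen. Assumes the expanded crop fits the frame.

def attention_crop(frame_w, frame_h, roi, disp_w, disp_h):
    """roi = (x, y, w, h) in the high-resolution frame; returns the
    crop rectangle (x, y, w, h) matching the display aspect ratio."""
    x, y, w, h = roi
    target = disp_w / disp_h
    if w / h < target:              # ROI too narrow: widen it
        w = int(round(h * target))
    else:                           # ROI too flat: heighten it
        h = int(round(w / target))
    # centre the expanded rectangle on the ROI centre, then clamp to frame
    cx, cy = x + roi[2] / 2, y + roi[3] / 2
    nx = min(max(0, int(cx - w / 2)), frame_w - w)
    ny = min(max(0, int(cy - h / 2)), frame_h - h)
    return nx, ny, w, h

# 1080p source, 320x240 display, attention on a 200x200 region
print(attention_crop(1920, 1080, (100, 100, 200, 200), 320, 240))
```

The resulting rectangle, rather than the full frame, would then be transcoded down to the display resolution, spending the limited pixels on the attention area.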
Gerber, Stephan M; Jeitziner, Marie-Madlen; Wyss, Patric; Chesham, Alvin; Urwyler, Prabitha; Müri, René M; Jakob, Stephan M; Nef, Tobias
2017-10-16
After prolonged stay in an intensive care unit (ICU), patients often complain about cognitive impairments that affect health-related quality of life after discharge. The aim of this proof-of-concept study was to test the feasibility and effects of controlled visual and acoustic stimulation in a virtual reality (VR) setup in the ICU. The VR setup consisted of a head-mounted display in combination with an eye tracker and sensors to assess vital signs. The stimulation consisted of videos featuring natural scenes and was tested in 37 healthy participants in the ICU. The VR stimulation led to a reduction of heart rate (p = 0.049) and blood pressure (p = 0.044). The fixation/saccade ratio (p < 0.001) was increased when a visual target was presented superimposed on the videos (reduced search activity), reflecting enhanced visual processing. Overall, the VR stimulation had a relaxing effect, as shown in vital markers of physical stress, and participants explored less when attending to the target. Our study indicates that VR stimulation in ICU settings is feasible and beneficial for critically ill patients.
Motion sickness and postural sway in console video games.
Stoffregen, Thomas A; Faugloire, Elise; Yoshida, Ken; Flanagan, Moira B; Merhi, Omar
2008-04-01
We tested the hypotheses that (a) participants might develop motion sickness while playing "off-the-shelf" console video games and (b) postural motion would differ between sick and well participants, prior to the onset of motion sickness. There have been many anecdotal reports of motion sickness among people who play console video games (e.g., Xbox, PlayStation). Participants (40 undergraduate students) played a game continuously for up to 50 min while standing or sitting. We varied the distance to the display screen (and, consequently, the visual angle of the display). Across conditions, the incidence of motion sickness ranged from 42% to 56%; incidence did not differ across conditions. During game play, head and torso motion differed between sick and well participants prior to the onset of subjective symptoms of motion sickness. The results indicate that console video games carry a significant risk of motion sickness. Potential applications of this research include changes in the design of console video games and recommendations for how such systems should be used.
Task-dependent color discrimination
NASA Technical Reports Server (NTRS)
Poirson, Allen B.; Wandell, Brian A.
1990-01-01
When color video displays are used in time-critical applications (e.g., head-up displays, video control panels), the observer must discriminate among briefly presented targets seen within a complex spatial scene. Color-discrimination thresholds are compared using two tasks. In one task the observer makes color matches between the two halves of a continuously displayed bipartite field. In the second task the observer detects a color target in a set of briefly presented objects. The data from both tasks are well summarized by ellipsoidal isosensitivity contours. The fitted ellipsoids differ both in size, indicating an absolute sensitivity difference, and in orientation, indicating a relative sensitivity difference.
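An ellipsoidal isosensitivity contour can be written as a quadratic form: a color difference Δc is at threshold when Δcᵀ A Δc = 1, below threshold when the quadratic form is smaller. A minimal sketch follows; the matrix A is illustrative, not one of the paper's fitted ellipsoids.

```python
# Ellipsoidal discrimination-threshold sketch: dc^T A dc >= 1 means the
# color difference dc (a 3-vector) exceeds threshold. A's eigenstructure
# encodes the ellipsoid's size (absolute sensitivity) and orientation
# (relative sensitivity). The A below is a made-up diagonal example.

def quad_form(dc, A):
    """Compute dc^T A dc for a 3-vector dc and 3x3 matrix A."""
    return sum(dc[i] * A[i][j] * dc[j] for i in range(3) for j in range(3))

def discriminable(dc, A):
    return quad_form(dc, A) >= 1.0

# Illustrative: most sensitive along the first color axis.
A = [[400.0, 0.0, 0.0],
     [0.0, 100.0, 0.0],
     [0.0, 0.0, 25.0]]
print(discriminable((0.06, 0.0, 0.0), A))  # 400 * 0.06^2 = 1.44 → True
print(discriminable((0.0, 0.0, 0.06), A))  # 25 * 0.06^2 = 0.09 → False
```

The same-size step is visible along one axis and invisible along another, which is exactly what a non-spherical isosensitivity contour expresses; task-dependent discrimination amounts to A changing with the task.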
Affordable multisensor digital video architecture for 360° situational awareness displays
NASA Astrophysics Data System (ADS)
Scheiner, Steven P.; Khan, Dina A.; Marecki, Alexander L.; Berman, David A.; Carberry, Dana
2011-06-01
One of the major challenges facing today's military ground combat vehicle operations is achieving and maintaining full-spectrum situational awareness while under armor (i.e., closed hatch). Basic tasks such as driving, maintaining local situational awareness, surveillance, and targeting will require a high-density array of real-time information to be processed, distributed, and presented to the vehicle operators and crew with low latency. Advances in display and sensor technologies are providing never-before-seen opportunities to supply large amounts of high-fidelity imagery and video to the vehicle operators and crew in real time. To fully realize the advantages of these emerging display and sensor technologies, an underlying digital architecture must be developed that is capable of processing the large amounts of video and data from separate sensor systems and distributing them simultaneously within the vehicle to multiple vehicle operators and crew. This paper examines the systems and software engineering efforts required to overcome these challenges and addresses the development of an affordable, integrated digital video architecture. The approaches evaluated will give both current and future ground combat vehicle systems the flexibility to readily adopt emerging display and sensor technologies, while optimizing the Warfighter Machine Interface (WMI), minimizing lifecycle costs, and improving the survivability of the vehicle crew working in closed-hatch systems during complex ground combat operations.
Computer Graphics in Research: Some State -of-the-Art Systems
ERIC Educational Resources Information Center
Reddy, R.; And Others
1975-01-01
A description is given of the structure and functional characteristics of three types of interactive computer graphic systems developed by the Department of Computer Science at Carnegie-Mellon: a high-speed programmable display capable of displaying 50,000 short vectors, flicker free; a shaded-color video display for the display of gray-scale…
Co-Located Collaborative Learning Video Game with Single Display Groupware
ERIC Educational Resources Information Center
Infante, Cristian; Weitz, Juan; Reyes, Tomas; Nussbaum, Miguel; Gomez, Florencia; Radovic, Darinka
2010-01-01
Role Game is a co-located CSCL video game played by three students sitting at one machine sharing a single screen, each with their own input device. Inspired by video console games, Role Game enables students to learn by doing, acquiring social abilities and mastering subject matter in a context of co-located collaboration. After describing the…
Author Correction: Single-molecule imaging by optical absorption
NASA Astrophysics Data System (ADS)
Celebrano, Michele; Kukura, Philipp; Renn, Alois; Sandoghdar, Vahid
2018-05-01
In the Supplementary Video initially published with this Letter, the right-hand panel displaying the fluorescence emission was not showing on some video players due to a formatting problem; this has now been fixed. The video has also now been amended to include colour scale bars for both the left- (differential transmission signal) and right-hand panels.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-09-28
...In this document, the Commission proposes rules to implement provisions of the Twenty-First Century Communications and Video Accessibility Act of 2010 (``CVAA'') that mandate rules for closed captioning of certain video programming delivered using Internet protocol (``IP''). The Commission seeks comment on rules that would apply to the distributors, providers, and owners of IP-delivered video programming, as well as the devices that display such programming.
Heo, Hwan; Lee, Won Oh; Shin, Kwang Yong; Park, Kang Ryoung
2014-05-15
We propose a new method for measuring the degree of eyestrain on 3D stereoscopic displays using a glasses-type eye-tracking device. Our study is novel in the following four ways: first, the circular area where a user's gaze position exists is defined based on the calculated gaze position and the gaze estimation error. Within this circular area, the position where edge strength is maximized can be detected, and we take this position as the gaze position with the higher probability of being correct. Based on this gaze point, the eye foveation model is defined. Second, we quantitatively evaluate the correlation between the degree of eyestrain and the causal factors of visual fatigue, such as the degree of change of stereoscopic disparity (CSD), stereoscopic disparity (SD), frame cancellation effect (FCE), and edge component (EC) of the 3D stereoscopic display, using the eye foveation model. Third, by comparing the eyestrain in conventional 3D video and experimental 3D sample video, we analyze the characteristics of eyestrain according to various factors and types of 3D video. Fourth, by comparing the eyestrain with and without compensation for saccadic eye movements in 3D video, we analyze the characteristics of eyestrain according to the types of eye movements in 3D video. Experimental results show that the degree of CSD causes more eyestrain than the other factors.
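The first step, picking the maximum-edge-strength point inside the circle defined by the gaze estimate and its error radius, can be sketched as follows. The toy edge map stands in for a real image-gradient magnitude; the function name and grid are illustrative assumptions.

```python
# Sketch of gaze refinement: within a circle of radius `radius` (the
# gaze-estimation error) around the raw gaze estimate, return the pixel
# with maximum edge strength as the corrected gaze point.

def refine_gaze(edge_map, gaze, radius):
    gx, gy = gaze
    best, best_pt = -1.0, gaze
    for y, row in enumerate(edge_map):
        for x, strength in enumerate(row):
            inside = (x - gx) ** 2 + (y - gy) ** 2 <= radius ** 2
            if inside and strength > best:
                best, best_pt = strength, (x, y)
    return best_pt

edges = [
    [0, 0, 0, 0, 0],
    [0, 1, 0, 2, 0],
    [0, 0, 9, 0, 0],   # strong edge near the raw estimate
    [0, 5, 0, 0, 0],
    [0, 0, 0, 0, 8],   # stronger edge, but outside the error circle
]
print(refine_gaze(edges, gaze=(1, 1), radius=2))  # → (2, 2)
```

The strongest edge overall is ignored because it lies outside the error circle; only candidates consistent with the estimated gaze and its error bound are considered.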
Presentation of Information on Visual Displays.
ERIC Educational Resources Information Center
Pettersson, Rune
This discussion of factors involved in the presentation of text, numeric data, and/or visuals using video display devices describes in some detail the following types of presentation: (1) visual displays, with attention to additive color combination; measurements, including luminance, radiance, brightness, and lightness; and standards, with…
NASA Technical Reports Server (NTRS)
Diner, Daniel B. (Inventor); Venema, Steven C. (Inventor)
1991-01-01
A system for real-time video image display for robotics or remote-vehicle teleoperation is described that has at least one robot arm or remotely operated vehicle controlled by an operator through hand-controllers, and one or more television cameras and optional lighting element. The system has at least one television monitor for display of a television image from a selected camera and the ability to select one of the cameras for image display. Graphics are generated with icons of cameras and lighting elements for display surrounding the television image to provide the operator information on: the location and orientation of each camera and lighting element; the region of illumination of each lighting element; the viewed region and range of focus of each camera; which camera is currently selected for image display for each monitor; and when the controller coordinates for said robot arms or remotely operated vehicles have been transformed to correspond to coordinates of a selected or nonselected camera.
Composite video and graphics display for camera viewing systems in robotics and teleoperation
NASA Technical Reports Server (NTRS)
Diner, Daniel B. (Inventor); Venema, Steven C. (Inventor)
1993-01-01
A system for real-time video image display for robotics or remote-vehicle teleoperation is described that has at least one robot arm or remotely operated vehicle controlled by an operator through hand-controllers, and one or more television cameras and optional lighting element. The system has at least one television monitor for display of a television image from a selected camera and the ability to select one of the cameras for image display. Graphics are generated with icons of cameras and lighting elements for display surrounding the television image to provide the operator information on: the location and orientation of each camera and lighting element; the region of illumination of each lighting element; the viewed region and range of focus of each camera; which camera is currently selected for image display for each monitor; and when the controller coordinates for said robot arms or remotely operated vehicles have been transformed to correspond to coordinates of a selected or nonselected camera.
ERIC Educational Resources Information Center
Dwyer, Paul F.
Drawing on testimony presented at hearings before the Subcommittee on Health and Safety of the House of Representatives conducted between February 28 and June 12, 1984, this staff report addresses the general topic of video display terminals (VDTs) and possible health hazards in the workplace. An introduction presents the history of the…
Rapid Damage Assessment. Volume II. Development and Testing of Rapid Damage Assessment System.
1981-02-01
Camera line rate: 732.4 lines/s. Pixels per line: 1728 video, 314 blank, 4 line number (binary), 2 run number (BCD), 2048 total. Pixel resolution: 8 bits. The image processor system consists of an LSI-11 microprocessor, a VDI-200 video display processor, an FD-2 dual floppy diskette subsystem, and an FT-1 function key/trackball module.
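The quoted pixel budget is internally consistent; a quick arithmetic check (the per-line figures are from the excerpt, while the derived pixel rate is computed here, not a reported number):

```python
# Sanity check of the scan-line pixel budget quoted in the report excerpt.
video_px = 1728     # active video pixels per line
blank_px = 314      # blanking interval
line_no_px = 4      # binary line number
run_no_px = 2       # BCD run number
total_px = video_px + blank_px + line_no_px + run_no_px
assert total_px == 2048  # matches the "2048 total" figure

line_rate_hz = 732.4               # camera line rate, lines/s
pixel_rate = total_px * line_rate_hz  # derived, not stated in the excerpt
print(f"{total_px} pixels/line -> {pixel_rate / 1e6:.2f} Mpixels/s")
```

This puts the raw sensor throughput at roughly 1.5 Mpixels/s, consistent with the truncated "pixels/s" figure preceding the line rate in the excerpt.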
The Development of the AFIT Communications Laboratory and Experiments for Communications Students.
1985-12-01
Activates digital storage and permits monitoring of maximum and minimum signal excursions over an indefinite time. Selects the "A" or ... the level at which the vertical display is ... Video signals above the level set by the PEAK AVERAGE control are either peak detected or digitally averaged; video signals below the level set by the PEAK AVERAGE control are digitally averaged and stored. VERT POS positions the display or baseline ...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Olson, B.M.
1985-01-01
The USAF OEHL conducted an extensive literature review of Video Display Terminals (VDTs) and the health problems commonly associated with them. The report is presented in a question-and-answer format in an attempt to paraphrase the most commonly asked questions about VDTs that are forwarded to USAF OEHL/RZN. The questions and answers have been divided into several topic areas: Ionizing Radiation; Nonionizing Radiation; Optical Radiation; Ultrasound; Static Electricity; Health Complaints/Ergonomics; Pregnancy.
System for clinical photometric stereo endoscopy
NASA Astrophysics Data System (ADS)
Durr, Nicholas J.; González, Germán.; Lim, Daryl; Traverso, Giovanni; Nishioka, Norman S.; Vakoc, Benjamin J.; Parot, Vicente
2014-02-01
Photometric stereo endoscopy is a technique that captures information about the high-spatial-frequency topography of the field of view simultaneously with a conventional color image. Here we describe a system that will enable photometric stereo endoscopy to be clinically evaluated in the large intestine of human patients. The clinical photometric stereo endoscopy system consists of a commercial gastroscope, a commercial video processor, an image capturing and processing unit, custom synchronization electronics, white light LEDs, a set of four fibers with diffusing tips, and an alignment cap. The custom pieces that come into contact with the patient are composed of biocompatible materials that can be sterilized before use. The components can then be assembled in the endoscopy suite before use. The resulting endoscope has the same outer diameter as a conventional colonoscope (14 mm), plugs into a commercial video processor, captures topography and color images at 15 Hz, and displays the conventional color image to the gastroenterologist in real-time. We show that this system can capture a color and topographical video in a tubular colon phantom, demonstrating robustness to complex geometries and motion. The reported system is suitable for in vivo evaluation of photometric stereo endoscopy in the human large intestine.
Holodeck: Telepresence Dome Visualization System Simulations
NASA Technical Reports Server (NTRS)
Hite, Nicolas
2012-01-01
This paper explores the simulation and consideration of different image-projection strategies for the Holodeck, a dome that will be used for highly immersive telepresence operations in future endeavors of the National Aeronautics and Space Administration (NASA). Its visualization system will include a full 360 degree projection onto the dome's interior walls in order to display video streams from both simulations and recorded video. Because humans innately trust their vision to precisely report their surroundings, the Holodeck's visualization system is crucial to its realism. This system will be rigged with an integrated hardware and software infrastructure: namely, a system of projectors that will relay with a Graphics Processing Unit (GPU) and computer to both project images onto the dome and correct warping in those projections in real time. Using both Computer-Aided Design (CAD) and ray-tracing software, virtual models of various dome/projector geometries were created and simulated via tracking and analysis of virtual light sources, leading to the selection of two possible configurations for installation. Research into image warping and the generation of dome-ready video content was also conducted, including generation of fisheye images, distortion correction, and the creation of a reliable content-generation pipeline.
Accommodative performance for chromatic displays.
Lovasik, J V; Kergoat, H
1988-01-01
Over the past few years, video display units (VDUs) have been incorporated into many varieties of workplaces and occupational demands. The success of electro-optical displays in facilitating and improving job performance has spawned interest in extracting further advantage from VDUs by incorporating colour coding into such communication systems. However, concerns have been raised about the effect of chromatic stimuli on visual comfort and task efficiency, because of the chromatic aberration inherent in the optics of the human eye. In this study, we used a computer-aided laser speckle optometer system to measure the accommodative responses to brightness-matched chromatic letters displayed on a high-resolution RGB monitor. Twenty visually normal, paid volunteers aged 22-35 years served as subjects. Stimuli were letters of 14, 21, and 28 minutes of arc presented in a 'monochromatic' (white, red, green or blue, on a black background) or 'multichromatic' (blue-red, blue-green, red-green foreground-background combinations) mode at 40 and 80 cm viewing distances. The results demonstrated that while the accommodative responses were strongly influenced by the foreground-background colour combination, the group-averaged dioptric difference across colours was relatively small. Further, accommodative responses were not guided in any systematic fashion by the size of the letters presented for fixation. Implications of these findings for display designs are discussed.
Raster Scan Computer Image Generation (CIG) System Based On Refresh Memory
NASA Astrophysics Data System (ADS)
Dichter, W.; Doris, K.; Conkling, C.
1982-06-01
A full color, Computer Image Generation (CIG) raster visual system has been developed which provides a high level of training sophistication by utilizing advanced semiconductor technology and innovative hardware and firmware techniques. Double buffered refresh memory and efficient algorithms eliminate the problem of conventional raster line ordering by allowing the generated image to be stored in a random fashion. Modular design techniques and simplified architecture provide significant advantages in reduced system cost, standardization of parts, and high reliability. The major system components are a general purpose computer to perform interfacing and data base functions; a geometric processor to define the instantaneous scene image; a display generator to convert the image to a video signal; an illumination control unit which provides final image processing; and a CRT monitor for display of the completed image. Additional optional enhancements include texture generators, increased edge and occultation capability, curved surface shading, and data base extensions.
Head-mounted display for use in functional endoscopic sinus surgery
NASA Astrophysics Data System (ADS)
Wong, Brian J.; Lee, Jon P.; Dugan, F. Markoe; MacArthur, Carol J.
1995-05-01
Since the introduction of functional endoscopic sinus surgery (FESS), the procedure has undergone rapid change, with evolution keeping pace with technological advances. The advent of low-cost charge-coupled device (CCD) cameras revolutionized the practice and instruction of FESS. Video-based FESS has allowed for documentation of the surgical procedure as well as interactive instruction during surgery. Presently, the technical requirements of video-based FESS include the addition of one or more television monitors positioned strategically in the operating room. Though video monitors have greatly enhanced surgical endoscopy by re-involving nurses and assistants in the actual mechanics of surgery, they require the operating surgeon to be focused on the screen instead of the patient. In this study, we describe the use of a new low-cost liquid crystal display (LCD) based device that functions as a monitor but is mounted on the head on a visor (PT-O1, O1 Products, Westlake Village, CA). This study illustrates the application of these head-mounted display (HMD) devices to FESS operations. The same surgeon performed the operation in each patient. In one nasal fossa, surgery was performed using conventional video FESS methods; the contralateral side was operated on while wearing the head-mounted video display. The device had adequate resolution for the purposes of FESS. No adverse effects were noted intraoperatively. The results on the patients' ipsilateral and contralateral sides were similar. The visor eliminated significant torsion of the surgeon's neck during the operation while permitting simultaneous viewing of both the patient and the intranasal surgical field.
Frohlich, Dennis Owen; Zmyslinski-Seelig, Anne
2012-01-01
The purpose of this study was to explore the types of social support messages YouTube users posted on medical videos. Specifically, the study compared messages posted on inflammatory bowel disease-related videos and ostomy-related videos. Additionally, the study analyzed the differences in social support messages posted on lay-created videos and professionally-created videos. Conducting a content analysis, the researchers unitized the comments on each video; the total number of thought units amounted to 5,960. Researchers coded each thought unit through the use of a coding scheme modified from a previous study. YouTube users posted informational support messages most frequently (65.1%), followed by emotional support messages (18.3%), and finally, instrumental support messages (8.2%).
Video quality assessment using M-SVD
NASA Astrophysics Data System (ADS)
Tao, Peining; Eskicioglu, Ahmet M.
2007-01-01
Objective video quality measurement is a challenging problem in a variety of video processing applications ranging from lossy compression to printing. An ideal video quality measure should be able to mimic the human observer. We present a new video quality measure, M-SVD, to evaluate distorted video sequences based on singular value decomposition. A computationally efficient approach is developed for full-reference (FR) video quality assessment. This measure is tested on the Video Quality Experts Group (VQEG) phase I FR-TV test data set. Our experiments show that the graphical measure displays the amount of distortion as well as the distribution of error in all frames of the video sequence, while the numerical measure has a good correlation with perceived video quality and outperforms PSNR and other objective measures by a clear margin.
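The core of an SVD-based measure of this kind can be sketched as a block-wise distance between the singular values of reference and distorted frames. The 8x8 block size and the median-based aggregation below follow the general M-SVD construction for images; treat this as an illustration under those assumptions, not the paper's implementation:

```python
import numpy as np

def block_sv_distance(ref, dist, b=8):
    """Per-block distance between singular values of a reference and a
    distorted grayscale frame (same shape). Returns the block-distance
    map (the 'graphical' measure) and a scalar summary (the 'numerical'
    measure: mean absolute deviation from the median block distance)."""
    h, w = ref.shape
    h, w = h - h % b, w - w % b          # crop to a multiple of the block size
    dmap = np.empty((h // b, w // b))
    for i in range(0, h, b):
        for j in range(0, w, b):
            s_ref = np.linalg.svd(ref[i:i+b, j:j+b], compute_uv=False)
            s_dst = np.linalg.svd(dist[i:i+b, j:j+b], compute_uv=False)
            dmap[i // b, j // b] = np.sqrt(np.sum((s_ref - s_dst) ** 2))
    score = np.abs(dmap - np.median(dmap)).mean()
    return dmap, score
```

Identical frames yield an all-zero distance map and a zero score; localized distortion shows up as a hot spot in `dmap`, which is how the graphical measure exposes the spatial distribution of error.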
A generic flexible and robust approach for intelligent real-time video-surveillance systems
NASA Astrophysics Data System (ADS)
Desurmont, Xavier; Delaigle, Jean-Francois; Bastide, Arnaud; Macq, Benoit
2004-05-01
In this article we present a generic, flexible and robust approach for an intelligent real-time video-surveillance system. A previous version of the system was presented in [1]. The goal of these advanced tools is to help operators by detecting events of interest in visual scenes, highlighting alarms and computing statistics. The proposed system is a multi-camera platform able to handle different standards of video inputs (composite, IP, IEEE 1394) and which can compress (MPEG-4), store and display them. This platform also integrates advanced video analysis tools, such as motion detection, segmentation, tracking and interpretation. The design of the architecture is optimised to play back, display, and process video flows in an efficient way for video-surveillance applications. The implementation is distributed on a scalable computer cluster based on Linux and an IP network. It relies on POSIX threads for multitasking scheduling. Data flows are transmitted between the different modules using multicast technology and under control of a TCP-based command network (e.g. for bandwidth occupation control). We report some results and show the potential use of such a flexible system in third-generation video surveillance systems. We illustrate the interest of the system in a real case study of indoor surveillance.
Code of Federal Regulations, 2010 CFR
2010-01-01
... carriers must use an equivalent non-video alternative for transmitting the briefing to passengers with... audio-visual displays played on aircraft for informational purposes that were created under your control...
Code of Federal Regulations, 2011 CFR
2011-01-01
... carriers must use an equivalent non-video alternative for transmitting the briefing to passengers with... audio-visual displays played on aircraft for informational purposes that were created under your control...
Code of Federal Regulations, 2014 CFR
2014-01-01
... carriers must use an equivalent non-video alternative for transmitting the briefing to passengers with... audio-visual displays played on aircraft for informational purposes that were created under your control...
Predictable Programming on a Precision Timed Architecture
2008-04-18
Application: A Video Game. Figure 6: Structure of the Video Game Example. Inspired by an example game supplied with the Hydra development board [17...we implemented a simple video game in C targeted to our PRET architecture. Our example centers on rendering graphics and is otherwise fairly simple...background image. Figure 10: A Screen Dump From Our Video Game. Ultimately, each displayed pixel is one of only four colors, but the pixels in
Markerless client-server augmented reality system with natural features
NASA Astrophysics Data System (ADS)
Ning, Shuangning; Sang, Xinzhu; Chen, Duo
2017-10-01
A markerless client-server augmented reality system is presented. In this research, the more extensive and mature virtual reality head-mounted display is adopted to assist the implementation of augmented reality. The viewer is provided an image in front of their eyes with the head-mounted display. The front-facing camera is used to capture video signals into the workstation. The generated virtual scene is merged with the outside-world information received from the camera, and the integrated video is sent to the helmet display system. The distinguishing feature and novelty is realizing augmented reality with natural features instead of a marker, which addresses the limitations of markers: they are restricted to black and white, are not applicable under varied environmental conditions, and in particular fail when the marker is partially blocked. Further, 3D stereoscopic perception of the virtual animation model is achieved. A high-speed and stable native socket communication method is adopted for transmission of the key video stream data, which reduces the computational burden of the system.
Naval Research Laboratory 1984 Review.
1985-07-16
...pulsed infrared sources and electronics for video signal processing... comprehensive characterization of ultrahigh-transparency fluoride glasses... operates a video system through this port if desired. The optical bench in the trailer holds a high-resolution Fourier transform spectrometer to use in the receiving..., consisting of visible and infrared television cameras, a high-quality video cassette recorder and display, and a digitizer to convert...
Multi-Target Camera Tracking, Hand-off and Display LDRD 158819 Final Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderson, Robert J.
2014-10-01
Modern security control rooms gather video and sensor feeds from tens to hundreds of cameras. Advanced camera analytics can detect motion from individual video streams and convert unexpected motion into alarms, but the interpretation of these alarms depends heavily upon human operators. Unfortunately, these operators can be overwhelmed when a large number of events happen simultaneously, or lulled into complacency due to frequent false alarms. This LDRD project has focused on improving video surveillance-based security systems by changing the fundamental focus from the cameras to the targets being tracked. If properly integrated, more cameras shouldn't lead to more alarms, more monitors, more operators, and increased response latency but instead should lead to better information and more rapid response times. For the course of the LDRD we have been developing algorithms that take live video imagery from multiple video cameras, identify individual moving targets from the background imagery, and then display the results in a single 3D interactive video. In this document we summarize the work in developing this multi-camera, multi-target system, including lessons learned, tools developed, technologies explored, and a description of current capability.
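Separating moving targets from background imagery, the first step the report describes, can be illustrated with a running-average background model. This is a minimal stand-in for the (unspecified) analytics developed in the project; the class name, adaptation rate, and threshold are invented for the sketch:

```python
import numpy as np

class BackgroundSubtractor:
    """Minimal running-average background model for pulling moving
    targets out of a static-camera feed (illustrative only)."""

    def __init__(self, first_frame, alpha=0.05, thresh=25.0):
        self.bg = first_frame.astype(float)  # background estimate
        self.alpha = alpha                   # background adaptation rate
        self.thresh = thresh                 # foreground intensity threshold

    def apply(self, frame):
        """Return a boolean mask of pixels that differ from the background."""
        frame = frame.astype(float)
        mask = np.abs(frame - self.bg) > self.thresh
        # Adapt the model only where the scene still looks like background,
        # so a stationary target is not absorbed immediately.
        self.bg = np.where(mask, self.bg,
                           (1 - self.alpha) * self.bg + self.alpha * frame)
        return mask
```

Per-camera masks like this are what downstream stages (tracking, hand-off between cameras, 3D display) would consume.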
Fast repurposing of high-resolution stereo video content for mobile use
NASA Astrophysics Data System (ADS)
Karaoglu, Ali; Lee, Bong Ho; Boev, Atanas; Cheong, Won-Sik; Gotchev, Atanas
2012-06-01
3D video content is captured and created mainly in high resolution, targeting big cinema or home TV screens. For 3D mobile devices equipped with small-size auto-stereoscopic displays, such content has to be properly repurposed, preferably in real time. The repurposing requires not only spatial resizing but also properly maintaining the output stereo disparity, as it should deliver realistic, pleasant and harmless 3D perception. In this paper, we propose an approach to adapt the disparity range of the source video to the comfort disparity zone of the target display. To achieve this, we adapt the scale and the aspect ratio of the source video. We aim at maximizing the disparity range of the retargeted content within the comfort zone, and minimizing the letterboxing of the cropped content. The proposed algorithm consists of five stages. First, we analyse the display profile, which characterises what 3D content can be comfortably observed on the target display. Then, we perform fast disparity analysis of the input stereoscopic content. Instead of returning the dense disparity map, it returns an estimate of the disparity statistics (min, max, mean and variance) per frame. Additionally, we detect scene cuts, where sharp transitions in disparities occur. Based on the estimated input and desired output disparity ranges, we derive the optimal cropping parameters and scale of the cropping window, which would yield the targeted disparity range and minimize the area of cropped and letterboxed content. Once the rescaling and cropping parameters are known, we perform a resampling procedure using spline-based and perceptually optimized resampling (anti-aliasing) kernels, which also have a very efficient computational structure. Perceptual optimization is achieved by adjusting the cut-off frequency of the anti-aliasing filter to the throughput of the target display.
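The range-fitting stage can be sketched in a simplified closed form. Spatially rescaling a stereo pair by a factor s also scales its pixel disparities by s, so a scale and a horizontal shift suffice to place the source disparity range inside the comfort zone. The function name and the uniform-scale assumption are mine; the paper additionally optimizes cropping and letterboxing, which this sketch omits:

```python
def retarget_scale(d_min, d_max, c_min, c_max):
    """Choose a uniform spatial scale s and disparity shift t so that the
    source disparity range [d_min, d_max] (pixels) maps into the display's
    comfort zone [c_min, c_max] via d -> s * d + t.

    Simplified sketch: uses the full comfort range when the source range is
    wider, never upscales, and centers the result in the comfort zone."""
    src_range = d_max - d_min
    tgt_range = c_max - c_min
    s = min(tgt_range / src_range, 1.0) if src_range > 0 else 1.0
    # Shift so the midpoint of the scaled source range sits at the
    # midpoint of the comfort zone.
    t = (c_min + c_max) / 2.0 - s * (d_min + d_max) / 2.0
    return s, t
```

For example, a source range of [-10, 30] px and a comfort zone of [-5, 15] px give s = 0.5 and t = 0, mapping the extremes exactly onto the zone boundaries.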
Innovative railroad information displays : video guide
DOT National Transportation Integrated Search
1998-01-01
The objectives of this study were to explore the potential of advanced digital technology, : novel concepts of information management, geographic information databases and : display capabilities in order to enhance planning and decision-making proces...
Development of 40-in hybrid hologram screen for auto-stereoscopic video display
NASA Astrophysics Data System (ADS)
Song, Hyun Ho; Nakashima, Y.; Momonoi, Y.; Honda, Toshio
2004-06-01
Auto-stereoscopic displays usually face two problems: first, large image display is difficult, and second, the view zone (the zone in which both eyes must be placed for stereoscopic or 3D image observation) is very narrow. We have been developing an auto-stereoscopic large video display system (over 100 inches diagonal) which a few people can view simultaneously [1,2]. Displays over 100 inches diagonal usually use an optical video projection system, and the hologram screen has been proposed as one such auto-stereoscopic display system [3-6]. However, if the hologram screen becomes too large, the view zone (corresponding to the reconstructed diffused object) suffers color dispersion and color aberration [7]. We therefore proposed attaching an additional Fresnel lens to the hologram screen; we call this a "hybrid hologram screen" (HHS for short). We made an HHS of 866 mm(H) × 433 mm(V) (about 40 inches diagonal) [8-11]. By using the lens in the reconstruction step, the angle between object light and reference light can be made small compared to the case without the lens, so the spread of the view zone caused by color dispersion and color aberration becomes small. Also, the virtual image reconstructed from the hologram screen can be transformed into a real image (view zone), so it is not necessary to use a large lens or concave mirror when making a large hologram screen.
MobileASL: intelligibility of sign language video over mobile phones.
Cavender, Anna; Vanam, Rahul; Barney, Dane K; Ladner, Richard E; Riskin, Eve A
2008-01-01
For Deaf people, access to the mobile telephone network in the United States is currently limited to text messaging, forcing communication in English as opposed to American Sign Language (ASL), the preferred language. Because ASL is a visual language, mobile video phones have the potential to give Deaf people access to real-time mobile communication in their preferred language. However, even today's best video compression techniques cannot yield intelligible ASL at limited cell phone network bandwidths. Motivated by this constraint, we conducted one focus group and two user studies with members of the Deaf Community to determine the intelligibility effects of video compression techniques that exploit the visual nature of sign language. Inspired by eye tracking results that show high-resolution foveal vision is maintained around the face, we studied region-of-interest encodings (where the face is encoded at higher quality) as well as reduced frame rates (where fewer, better-quality frames are displayed every second). At all bit rates studied here, participants preferred moderate quality increases in the face region, sacrificing quality in other regions. They also preferred slightly lower frame rates because they yield better quality frames for a fixed bit rate. The limited processing power of cell phones is a serious concern because a real-time video encoder and decoder will be needed. Choosing less complex settings for the encoder can reduce encoding time, but will affect video quality. We studied the intelligibility effects of this tradeoff and found that we can significantly speed up encoding time without severely affecting intelligibility. These results show promise for real-time access to the current low-bandwidth cell phone network through sign-language-specific encoding techniques.
Perceptual tools for quality-aware video networks
NASA Astrophysics Data System (ADS)
Bovik, A. C.
2014-01-01
Monitoring and controlling the quality of the viewing experience of videos transmitted over increasingly congested networks (especially wireless networks) is a pressing problem, owing to rapid advances in video-centric mobile communication and display devices that are straining the capacity of the network infrastructure. New developments in automatic perceptual video quality models offer tools that have the potential to be used to perceptually optimize wireless video, leading to more efficient video data delivery and better received quality. In this talk I will review key perceptual principles that are, or could be, used to create effective video quality prediction models, and leading quality prediction models that utilize these principles. The goal is to be able to monitor and perceptually optimize video networks by making them "quality-aware."
Representing videos in tangible products
NASA Astrophysics Data System (ADS)
Fageth, Reiner; Weiting, Ralf
2014-03-01
Videos can be taken with nearly every camera: digital point-and-shoot cameras and DSLRs as well as smartphones, and increasingly with so-called action cameras mounted on sports devices. The implementation of videos by generating QR codes and relevant pictures out of the video stream via a software implementation was the content of last year's paper. This year we present first data about what content is displayed and how users represent their videos in printed products, e.g. CEWE PHOTOBOOKS and greeting cards. We report the share of the different video formats used, the number of images extracted out of the video in order to represent it, the positions in the book, and different design strategies compared to regular books.
Very High-Speed Digital Video Capability for In-Flight Use
NASA Technical Reports Server (NTRS)
Corda, Stephen; Tseng, Ting; Reaves, Matthew; Mauldin, Kendall; Whiteman, Donald
2006-01-01
A digital video camera system has been qualified for use in flight on the NASA supersonic F-15B Research Testbed aircraft. This system is capable of very-high-speed color digital imaging at flight speeds up to Mach 2. The components of this system have been ruggedized and shock-mounted in the aircraft to survive the severe pressure, temperature, and vibration of the flight environment. The system includes two synchronized camera subsystems installed in fuselage-mounted camera pods (see Figure 1). Each camera subsystem comprises a camera controller/recorder unit and a camera head. The two camera subsystems are synchronized by use of an M-Hub(TM) synchronization unit. Each camera subsystem is capable of recording at a rate up to 10,000 pictures per second (pps). A state-of-the-art complementary metal oxide/semiconductor (CMOS) sensor in the camera head has a maximum resolution of 1,280 × 1,024 pixels at 1,000 pps. Exposure times of the electronic shutter of the camera range from 1/200,000 of a second to full open. The recorded images are captured in a dynamic random-access memory (DRAM) and can be downloaded directly to a personal computer or saved on a compact flash memory card. In addition to the high-rate recording of images, the system can display images in real time at 30 pps. Inter Range Instrumentation Group (IRIG) time code can be inserted into the individual camera controllers or into the M-Hub unit. The video data could also be used to obtain quantitative, three-dimensional trajectory information. The first use of this system was in support of the Space Shuttle Return to Flight effort. Data were needed to help in understanding how thermally insulating foam is shed from a space shuttle external fuel tank during launch. The cameras captured images of simulated external tank debris ejected from a fixture mounted under the centerline of the F-15B aircraft.
Digital video was obtained at subsonic and supersonic flight conditions, including speeds up to Mach 2 and altitudes up to 50,000 ft (15.24 km). The digital video was used to determine the structural survivability of the debris in a real flight environment and quantify the aerodynamic trajectories of the debris.
United Sugpiaq Alutiiq (USA) Video Game: Preserving Traditional Knowledge, Culture, and Language
ERIC Educational Resources Information Center
Hall, Leslie D.; Sanderville, James Mountain Chief
2009-01-01
Video games are explored as a means of reviving dying indigenous languages. The design and production of the place-based United Sugpiaq Alutiiq (USA) video game prototype involved work across generations and across cultures. The video game is one part of a proposed digital environment where Sugcestun speakers in traditional Alaskan villages could…
Video-speed electronic paper based on electrowetting
NASA Astrophysics Data System (ADS)
Hayes, Robert A.; Feenstra, B. J.
2003-09-01
In recent years, a number of different technologies have been proposed for use in reflective displays. One of the most appealing applications of a reflective display is electronic paper, which combines the desirable viewing characteristics of conventional printed paper with the ability to manipulate the displayed information electronically. Electronic paper based on the electrophoretic motion of particles inside small capsules has been demonstrated and commercialized; but the response speed of such a system is rather slow, limited by the velocity of the particles. Recently, we have demonstrated that electrowetting is an attractive technology for the rapid manipulation of liquids on a micrometre scale. Here we show that electrowetting can also be used to form the basis of a reflective display that is significantly faster than electrophoretic displays, so that video content can be displayed. Our display principle utilizes the voltage-controlled movement of a coloured oil film adjacent to a white substrate. The reflectivity and contrast of our system approach those of paper. In addition, we demonstrate a colour concept, which is intrinsically four times brighter than reflective liquid-crystal displays and twice as bright as other emerging technologies. The principle of microfluidic motion at low voltages is applicable in a wide range of electro-optic devices.
Dissecting children's observational learning of complex actions through selective video displays.
Flynn, Emma; Whiten, Andrew
2013-10-01
Children can learn how to use complex objects by watching others, yet the relative importance of different elements they may observe, such as the interactions of the individual parts of the apparatus, a model's movements, and desirable outcomes, remains unclear. In total, 140 3-year-olds and 140 5-year-olds participated in a study where they observed a video showing tools being used to extract a reward item from a complex puzzle box. Conditions varied according to the elements that could be seen in the video: (a) the whole display, including the model's hands, the tools, and the box; (b) the tools and the box but not the model's hands; (c) the model's hands and the tools but not the box; (d) only the end state with the box opened; and (e) no demonstration. Children's later attempts at the task were coded to establish whether they imitated the hierarchically organized sequence of the model's actions, the action details, and/or the outcome. Children's successful retrieval of the reward from the box and the replication of hierarchical sequence information were reduced in all but the whole display condition. Only once children had attempted the task and witnessed a second demonstration did the display focused on the tools and box prove to be better for hierarchical sequence information than the display focused on the tools and hands only. Copyright © 2013 Elsevier Inc. All rights reserved.
Heo, Hwan; Lee, Won Oh; Shin, Kwang Yong; Park, Kang Ryoung
2014-01-01
We propose a new method for measuring the degree of eyestrain on 3D stereoscopic displays using a glasses-type eye-tracking device. Our study is novel in the following four ways: first, the circular area where a user's gaze position exists is defined based on the calculated gaze position and gaze estimation error. Within this circular area, the position where edge strength is maximized can be detected, and we determine this position as the gaze position that has a higher probability of being the correct one. Based on this gaze point, the eye foveation model is defined. Second, we quantitatively evaluate the correlation between the degree of eyestrain and the causal factors of visual fatigue, such as the degree of change of stereoscopic disparity (CSD), stereoscopic disparity (SD), frame cancellation effect (FCE), and edge component (EC) of the 3D stereoscopic display using the eye foveation model. Third, by comparing the eyestrain in conventional 3D video and experimental 3D sample video, we analyze the characteristics of eyestrain according to various factors and types of 3D video. Fourth, by comparing the eyestrain with and without compensation for saccadic eye movements in 3D video, we analyze the characteristics of eyestrain according to the types of eye movements in 3D video. Experimental results show that the degree of CSD causes more eyestrain than the other factors. PMID:24834910
Giera, Brian; Bukosky, Scott; Lee, Elaine; ...
2018-01-23
Here, quantitative color analysis is performed on videos of high contrast, low power reversible electrophoretic deposition (EPD)-based displays operated under different applied voltages. This analysis is coded in an open-source software, relies on a color differentiation metric, ΔE * 00, derived from digital video, and provides an intuitive relationship between the operating conditions of the devices and their performance. Time-dependent ΔE * 00 color analysis reveals color relaxation behavior, recoverability for different voltage sequences, and operating conditions that can lead to optimal performance.
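The frame-by-frame color analysis described here can be illustrated with a short sketch. The paper's metric is CIEDE2000 (ΔE*00); the sketch below substitutes the simpler CIE76 Euclidean ΔE, computed after converting sRGB video frames to CIELAB, so the function names and the choice of metric are illustrative rather than the authors' code.

```python
import numpy as np

def srgb_to_lab(rgb):
    """Convert an (..., 3) array of sRGB values in [0, 1] to CIELAB (D65)."""
    rgb = np.asarray(rgb, dtype=float)
    # Undo the sRGB gamma curve
    linear = np.where(rgb <= 0.04045, rgb / 12.92, ((rgb + 0.055) / 1.055) ** 2.4)
    # Linear sRGB -> XYZ (D65)
    m = np.array([[0.4124, 0.3576, 0.1805],
                  [0.2126, 0.7152, 0.0722],
                  [0.0193, 0.1192, 0.9505]])
    xyz = linear @ m.T
    # Normalize by the D65 white point, then apply the Lab nonlinearity
    xyz = xyz / np.array([0.95047, 1.0, 1.08883])
    f = np.where(xyz > (6 / 29) ** 3, np.cbrt(xyz), xyz / (3 * (6 / 29) ** 2) + 4 / 29)
    L = 116 * f[..., 1] - 16
    a = 500 * (f[..., 0] - f[..., 1])
    b = 200 * (f[..., 1] - f[..., 2])
    return np.stack([L, a, b], axis=-1)

def mean_delta_e(frame_a, frame_b):
    """Mean CIE76 color difference between two RGB frames."""
    lab_a, lab_b = srgb_to_lab(frame_a), srgb_to_lab(frame_b)
    return float(np.mean(np.linalg.norm(lab_a - lab_b, axis=-1)))
```

Computing `mean_delta_e` between each video frame and the initial frame, as a function of time, yields the kind of time-dependent color-relaxation curve the abstract describes.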
A Low Cost Video Display System Using the Motorola 6811 Single-Chip Microcomputer.
1986-08-01
(The abstract field of this record is an OCR fragment of the report's 6811 assembly listing: JSR calls to a VIDEO display routine and a CLRBUFF buffer-clear routine, pointer resets, and a register-compare loop at labels REG1-REG3.)
Neutrons Image Additive Manufactured Turbine Blade in 3-D
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
2016-04-29
The video displays the Inconel 718 turbine blade made by additive manufacturing. First a gray-scale neutron computed tomogram (CT) is displayed with transparency in order to show the internal structure. Then the neutron CT is overlapped with the engineering drawing that was used to print the part, and a comparison of external and internal structures is possible. This provides a map of the accuracy of the printed turbine (printing tolerance). Internal surface roughness can also be observed. Credits: Experimental Measurements: Hassina Z. Bilheux; Video and Printing Tolerance Analysis: Jean C. Bilheux
Evaluating the content and reception of messages from incarcerated parents to their children.
Folk, Johanna B; Nichols, Emily B; Dallaire, Danielle H; Loper, Ann B
2012-10-01
In the current study, children's reactions to video messages from their incarcerated parents were evaluated. Previous research has yielded mixed results when it examined the impact of contact between incarcerated parents and their children; one reason for these mixed results may be a lack of attention to the quality of contact. This is the first study to examine the actual content and quality of a remote form of contact in this population. Participants included 186 incarcerated parents (54% mothers) who participated in a filming with The Messages Project and 61 caregivers of their children. Parental mood prior to filming the message and children's mood after viewing the message were assessed using the Positive and Negative Affect Scale. After coding the content of 172 videos, the data from the 61 videos with caregiver responses were used in subsequent path analyses. Analyses indicated that when parents were in more negative moods prior to filming their message, they displayed more negative emotions in the video messages ( = .210), and their children were in more negative moods after viewing the message ( = .288). Considering that displays of negative emotion can directly affect how children respond to contact, it seems important for parents to learn to regulate these emotional displays to improve the quality of their contact with their children. © 2012 American Orthopsychiatric Association.
Stereoscopic 3D video games and their effects on engagement
NASA Astrophysics Data System (ADS)
Hogue, Andrew; Kapralos, Bill; Zerebecki, Chris; Tawadrous, Mina; Stanfield, Brodie; Hogue, Urszula
2012-03-01
With television manufacturers developing low-cost stereoscopic 3D displays, a large number of consumers will undoubtedly have access to 3D-capable televisions at home. The availability of 3D technology places the onus on content creators to develop interesting and engaging content. While the technology of stereoscopic displays and content generation are well understood, there are many questions yet to be answered surrounding its effects on the viewer. Effects of stereoscopic display on passive viewers for film are known, however video games are fundamentally different since the viewer/player is actively (rather than passively) engaged in the content. Questions of how stereoscopic viewing affects interaction mechanics have previously been studied in the context of player performance but very few have attempted to quantify the player experience to determine whether stereoscopic 3D has a positive or negative influence on their overall engagement. In this paper we present a preliminary study of the effects stereoscopic 3D have on player engagement in video games. Participants played a video game in two conditions, traditional 2D and stereoscopic 3D and their engagement was quantified using a previously validated self-reporting tool. The results suggest that S3D has a positive effect on immersion, presence, flow, and absorption.
VideoBeam portable laser communicator
NASA Astrophysics Data System (ADS)
Mecherle, G. Stephen; Holcomb, Terry L.
1999-01-01
A VideoBeam™ portable laser communicator has been developed which provides full duplex communication links consisting of high quality analog video and stereo audio. The 3.2-pound unit has a binocular-like form factor and an operational range of over two miles (clear air) with excellent jam-resistance and low probability of interception characteristics. The VideoBeam™ unit is ideally suited for numerous military scenarios, surveillance/espionage, industrial precious mineral exploration, and campus video teleconferencing applications.
Orbital thermal analysis of lattice structured spacecraft using color video display techniques
NASA Technical Reports Server (NTRS)
Wright, R. L.; Deryder, D. D.; Palmer, M. T.
1983-01-01
A color video display technique is demonstrated as a tool for rapid determination of thermal problems during the preliminary design of complex space systems. A thermal analysis is presented for the lattice-structured Earth Observation Satellite (EOS) spacecraft at 32 points in a baseline non-Sun-synchronous (60 deg inclination) orbit. Large temperature variations (on the order of 150 K) were observed on the majority of the members. A gradual decrease in temperature was observed as the spacecraft traversed the Earth's shadow, followed by a sudden rise in temperature (100 K) as the spacecraft exited the shadow. Heating rate and temperature histories of selected members and color graphic displays of temperatures on the spacecraft are presented.
Polyplanar optical display electronics
NASA Astrophysics Data System (ADS)
DeSanto, Leonard; Biscardi, Cyrus
1997-07-01
The polyplanar optical display (POD) is a unique display screen which can be used with any projection source. The prototype ten-inch display is two inches thick and has a matte black face which allows for high contrast images. The prototype being developed is a form, fit, and functional replacement display for the B-52 aircraft, which uses a monochrome ten-inch display. In order to achieve a long lifetime, the new display uses a 100-milliwatt green solid-state laser at 532 nm as its light source. To produce real-time video, the laser light is modulated by a digital light processing (DLP) chip manufactured by Texas Instruments. In order to use the solid-state laser as the light source and also fit within the constraints of the B-52 display, the digital micromirror device (DMD) circuit board is removed from the Texas Instruments DLP light engine assembly. Due to the compact architecture of the projection system within the display chassis, the DMD chip is operated remotely from the Texas Instruments circuit board. We discuss the operation of the DMD divorced from the light engine and the interfacing of the DMD board with various video formats, including the format specific to the B-52 aircraft. A brief discussion of the electronics required to drive the laser is also presented.
MPEG-1 low-cost encoder solution
NASA Astrophysics Data System (ADS)
Grueger, Klaus; Schirrmeister, Frank; Filor, Lutz; von Reventlow, Christian; Schneider, Ulrich; Mueller, Gerriet; Sefzik, Nicolai; Fiedrich, Sven
1995-02-01
A solution for real-time compression of digital YCRCB video data to an MPEG-1 video data stream has been developed. As an additional option, motion JPEG and video telephone streams (H.261) can be generated. For MPEG-1, up to two bidirectional predicted images are supported. The required computational power for motion estimation and DCT/IDCT, memory size and memory bandwidth have been the main challenges. The design uses fast-page-mode memory accesses and requires only one single 80 ns EDO-DRAM with 256 X 16 organization for video encoding. This can be achieved only by using adequate access and coding strategies. The architecture consists of an input processing and filter unit, a memory interface, a motion estimation unit, a motion compensation unit, a DCT unit, a quantization control, a VLC unit and a bus interface. For using the available memory bandwidth by the processing tasks, a fixed schedule for memory accesses has been applied, that can be interrupted for asynchronous events. The motion estimation unit implements a highly sophisticated hierarchical search strategy based on block matching. The DCT unit uses a separated fast-DCT flowgraph realized by a switchable hardware unit for both DCT and IDCT operation. By appropriate multiplexing, only one multiplier is required for: DCT, quantization, inverse quantization, and IDCT. The VLC unit generates the video-stream up to the video sequence layer and is directly coupled with an intelligent bus-interface. Thus, the assembly of video, audio and system data can easily be performed by the host computer. Having a relatively low complexity and only small requirements for DRAM circuits, the developed solution can be applied to low-cost encoding products for consumer electronics.
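The block-matching motion estimation at the heart of this encoder is easiest to see in its exhaustive form. The chip described uses a hierarchical search; the sketch below shows only the brute-force sum-of-absolute-differences (SAD) baseline that such hierarchies refine, with the function name and search radius chosen for illustration.

```python
import numpy as np

def best_match(ref, block, top, left, radius=4):
    """Exhaustive block matching: return the displacement (dy, dx), within
    +/-radius of (top, left) in the reference frame, that minimizes the SAD."""
    h, w = block.shape
    best, best_sad = (0, 0), np.inf
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = top + dy, left + dx
            # Skip candidate positions that fall outside the reference frame
            if y < 0 or x < 0 or y + h > ref.shape[0] or x + w > ref.shape[1]:
                continue
            sad = np.abs(ref[y:y + h, x:x + w] - block).sum()
            if sad < best_sad:
                best_sad, best = sad, (dy, dx)
    return best
```

A hierarchical search runs the same matching on downsampled frames first, then refines the coarse vector at full resolution, which is how the memory-bandwidth budget described above becomes tractable.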
ERIC Educational Resources Information Center
Dahlgren, Sally
2000-01-01
Discusses how advances in light-emitting diode (LED) technology are helping video displays at sporting events get fans closer to the action than ever before. The types of LED displays available are discussed, as are their operation and maintenance issues. (GR)
High Resolution Displays Using NCAP Liquid Crystals
NASA Astrophysics Data System (ADS)
Macknick, A. Brian; Jones, Phil; White, Larry
1989-07-01
Nematic curvilinear aligned phase (NCAP) liquid crystals have been found useful for high information content video displays. NCAP materials are liquid crystals which have been encapsulated in a polymer matrix and which have a light transmission which is variable with applied electric fields. Because NCAP materials do not require polarizers, their on-state transmission is substantially better than twisted nematic cells. All dimensional tolerances are locked in during the encapsulation process and hence there are no critical sealing or spacing issues. By controlling the polymer/liquid crystal morphology, switching speeds of NCAP materials have been significantly improved over twisted nematic systems. Recent work has combined active matrix addressing with NCAP materials. Active matrices, such as thin film transistors, have given displays of high resolution. The paper will discuss the advantages of NCAP materials specifically designed for operation at video rates on transistor arrays; applications for both backlit and projection displays will be discussed.
Expert Behavior in Children's Video Game Play.
ERIC Educational Resources Information Center
VanDeventer, Stephanie S.; White, James A.
2002-01-01
Investigates the display of expert behavior by seven outstanding video game-playing children ages 10 and 11. Analyzes observation and debriefing transcripts for evidence of self-monitoring, pattern recognition, principled decision making, qualitative thinking, and superior memory, and discusses implications for educators regarding the development…
Lim, Tae Ho; Choi, Hyuk Joong; Kang, Bo Seung
2010-01-01
We assessed the feasibility of using a camcorder mobile phone for teleconsulting about cardiac echocardiography. The diagnostic performance of evaluating left ventricle (LV) systolic function was measured by three emergency medicine physicians. A total of 138 short echocardiography video sequences (from 70 subjects) was selected from previous emergency room ultrasound examinations. The measurement of LV ejection fraction based on the transmitted video displayed on a mobile phone was compared with the original video displayed on the LCD monitor of the ultrasound machine. The image quality was evaluated using the double-stimulus impairment scale (DSIS). All observers showed high sensitivity. There was an improvement in specificity with the observer's increasing experience of cardiac ultrasound. Although the image quality of video on the mobile phone was lower than that of the original, a receiver operating characteristic (ROC) analysis indicated that there was no significant difference in diagnostic performance. Immediate basic teleconsulting of echocardiography movies is possible using current commercially-available mobile phone systems.
Video-laryngoscopy introduction in a Sub-Saharan national teaching hospital: luxury or necessity?
Alain, Traoré Ibrahim; Drissa, Barro Sié; Flavien, Kaboré; Serge, Ilboudo; Idriss, Traoré
2015-01-01
Tracheal intubation using a Macintosh blade is the technique of choice for securing the airway. It can prove difficult, causing severe complications that can compromise survival or force postponement of the surgical operation. The video-laryngoscope gives a better view of the larynx and good exposure of the glottis, making tracheal intubation simpler than with a conventional laryngoscope. It is not widespread in sub-Saharan Africa, and particularly in Burkina Faso, because of its high cost. We report our first experiences with the video-laryngoscope in two cases of difficult tracheal intubation that had required postponement of the interventions. The video-laryngoscope made tracheal intubation easier even on first use, owing to the good glottal view it provides, and it is easy to learn. It is therefore not a luxury to have it in our therapeutic arsenal. PMID:27047621
Coupled auralization and virtual video for immersive multimedia displays
NASA Astrophysics Data System (ADS)
Henderson, Paul D.; Torres, Rendell R.; Shimizu, Yasushi; Radke, Richard; Lonsway, Brian
2003-04-01
The implementation of maximally-immersive interactive multimedia in exhibit spaces requires not only the presentation of realistic visual imagery but also the creation of a perceptually accurate aural experience. While conventional implementations treat audio and video problems as essentially independent, this research seeks to couple the visual sensory information with dynamic auralization in order to enhance perceptual accuracy. An implemented system has been developed for integrating accurate auralizations with virtual video techniques for both interactive presentation and multi-way communication. The current system utilizes a multi-channel loudspeaker array and real-time signal processing techniques for synthesizing the direct sound, early reflections, and reverberant field excited by a moving sound source whose path may be interactively defined in real-time or derived from coupled video tracking data. In this implementation, any virtual acoustic environment may be synthesized and presented in a perceptually-accurate fashion to many participants over a large listening and viewing area. Subject tests support the hypothesis that the cross-modal coupling of aural and visual displays significantly affects perceptual localization accuracy.
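The synthesis of early reflections in such auralization systems typically rests on the image-source method: mirror the sound source across each reflecting surface and treat the mirror image as a secondary source. A minimal 2D sketch under that assumption, with hypothetical names and a single wall, shows how delay and 1/r attenuation fall out of the geometry; a real renderer would add wall absorption, higher-order images, and loudspeaker panning.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s, at roughly room temperature

def first_reflection(src, listener, wall_x):
    """Image-source model for a single wall at x = wall_x: mirror the source
    across the wall; the reflection's delay and 1/r attenuation then follow
    from the image-to-listener distance."""
    image = (2 * wall_x - src[0], src[1])
    direct = math.dist(src, listener)
    reflected = math.dist(image, listener)
    return {"direct_delay_s": direct / SPEED_OF_SOUND,
            "reflection_delay_s": reflected / SPEED_OF_SOUND,
            # Gain of the reflection relative to the direct path (1/r law)
            "reflection_gain": direct / reflected}
```

Updating these delays and gains every time the tracked source moves is what makes the auralization dynamic and couplable to the video tracking data described above.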
Image Descriptors for Displays
1975-03-01
(The abstract field of this record is an OCR fragment of the report's figure captions: composite video signals formed from 20 Hz to 2.5 MHz band-limited Gaussian white noise, power spectral densities of those signals, and average spectra of off-the-air VHF television broadcasts analyzed with commercial equipment.)
An Augmented Virtuality Display for Improving UAV Usability
2005-01-01
cockpit. For a more universally-understood metaphor, we have turned to virtual environments of the type represented in video games. Many of the... people who have the need to fly UAVs (such as military personnel) have experience with playing video games. They are skilled in navigating virtual... Another aspect of tailoring the interface to those with video game experience is to use familiar controls. Microsoft has developed a popular and
Toward a 3D video format for auto-stereoscopic displays
NASA Astrophysics Data System (ADS)
Vetro, Anthony; Yea, Sehoon; Smolic, Aljoscha
2008-08-01
There has been increased momentum recently in the production of 3D content for cinema applications; for the most part, this has been limited to stereo content. There are also a variety of display technologies on the market that support 3DTV, each offering a different viewing experience and having different input requirements. More specifically, stereoscopic displays support stereo content and require glasses, while auto-stereoscopic displays avoid the need for glasses by rendering view-dependent stereo pairs for a multitude of viewing angles. To realize high quality auto-stereoscopic displays, multiple views of the video must either be provided as input to the display, or these views must be created locally at the display. The former approach has difficulties in that the production environment is typically limited to stereo, and transmission bandwidth for a large number of views is not likely to be available. This paper discusses an emerging 3D data format that enables the latter approach to be realized. A new framework for efficiently representing a 3D scene and enabling the reconstruction of an arbitrarily large number of views prior to rendering is introduced. Several design challenges are also highlighted through experimental results.
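The local view creation this format enables is usually depth-image-based rendering: each pixel of a reference view is shifted horizontally by a disparity derived from its depth. The sketch below is a minimal illustration under assumed conventions (a single grayscale texture plus a depth map normalized to [0, 1], with 1.0 nearest); the occlusion holes that real renderers must inpaint are only flagged, and all names are hypothetical.

```python
import numpy as np

def render_view(texture, depth, max_disp):
    """Synthesize a horizontally shifted view from one texture + depth map.
    Pixels are warped far-to-near so nearer surfaces overwrite farther ones."""
    h, w = depth.shape
    out = np.zeros_like(texture)
    filled = np.zeros((h, w), dtype=bool)
    # Visit pixels in order of increasing depth value (farthest first)
    order = np.argsort(depth, axis=None)
    ys, xs = np.unravel_index(order, depth.shape)
    for y, x in zip(ys, xs):
        d = int(round(max_disp * depth[y, x]))  # nearer -> larger disparity
        tx = x - d
        if 0 <= tx < w:
            out[y, tx] = texture[y, x]
            filled[y, tx] = True
    return out, filled  # unfilled entries are occlusion holes to inpaint
```

Repeating the warp with a different `max_disp` per output view is how an auto-stereoscopic display can derive its many viewing angles from one transmitted texture-plus-depth stream.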
Wide-Field-of-View, High-Resolution, Stereoscopic Imager
NASA Technical Reports Server (NTRS)
Prechtl, Eric F.; Sedwick, Raymond J.
2010-01-01
A device combines video feeds from multiple cameras to provide wide-field-of-view, high-resolution, stereoscopic video to the user. The prototype under development consists of two camera assemblies, one for each eye. One of these assemblies incorporates a mounting structure with multiple cameras attached at offset angles. The video signals from the cameras are fed to a central processing platform where each frame is color processed and mapped into a single contiguous wide-field-of-view image. Because the resolution of most display devices is typically smaller than the processed map, a cropped portion of the video feed is output to the display device. The positioning of the cropped window will likely be controlled through the use of a head tracking device, allowing the user to turn his or her head side-to-side or up and down to view different portions of the captured image. There are multiple options for the display of the stereoscopic image. The use of head mounted displays is one likely implementation; however, the use of 3D projection technologies is another potential approach under consideration. The technology can be adapted in a multitude of ways. The computing platform is scalable, such that the number, resolution, and sensitivity of the cameras can be leveraged to improve image resolution and field of view. Miniaturization efforts can be pursued to shrink the package down for better mobility. Power savings studies can be performed to enable unattended, remote sensing packages. Image compression and transmission technologies can be incorporated to enable an improved telepresence experience.
NASA Astrophysics Data System (ADS)
Sasaki, T.; Azuma, S.; Matsuda, S.; Nagayama, A.; Ogido, M.; Saito, H.; Hanafusa, Y.
2016-12-01
The Japan Agency for Marine-Earth Science and Technology (JAMSTEC) archives a large amount of deep-sea research videos and photos obtained by JAMSTEC's research submersibles and vehicles with cameras. The web site "JAMSTEC E-library of Deep-sea Images: J-EDI" (http://www.godac.jamstec.go.jp/jedi/e/) has made videos and photos available to the public via the Internet since 2011. Users can search for target videos and photos at J-EDI by keywords, easy-to-understand icons, and dive information, because operating staff classify videos and photos by content, e.g. living organisms and geological environment, and add comments to them. Dive survey data, including videos and photos, are not only valuable academically but also helpful for education and outreach activities. With the aim of improving visibility for broader communities, this year we added new functions for 3-dimensional display of various dive survey data synchronized with videos. New functions: Users can search for dive survey data on 3D maps with plotted dive points using the WebGL virtual map engine "Cesium". By selecting a dive point, users can watch deep-sea videos and photos and associated environmental data, e.g. water temperature, salinity, and rock and biological sample photos, obtained by the dive survey. Users can browse a dive track visualized in 3D virtual space using a WebGL JavaScript library. By synchronizing this virtual dive track with videos, users can watch the deep-sea video recorded at any point on a dive track. Users can play an animation in which a submersible-shaped polygon automatically traces a 3D virtual dive track while the displays of dive survey data stay synchronized with the trace. Users can directly refer to additional information from other JAMSTEC data sites, such as the marine biodiversity database, marine biological sample database, rock sample database, and cruise and dive information database, on each page where a 3D virtual dive track is displayed.
A 3D visualization of a dive track lets users experience a virtual dive survey. In addition, by synchronizing a virtual dive track with videos, it is easy to understand the living organisms and geological environment of a dive point. These functions will therefore visually support understanding of deep-sea environments in lectures and educational activities.
[The prevalence and influencing factors of eye diseases for IT industry video operation workers].
Zhao, Liang-liang; Yu, Yan-yan; Yu, Wen-lan; Xu, Ming; Cao, Wen-dong; Zhang, Hong-bing; Han, Lei; Zhang, Heng-dong
2013-05-01
To investigate video exposure and eye diseases among IT industry video operation workers, and to analyze the influencing factors, providing scientific evidence for formulating health strategy for these workers. We used random cluster sampling to choose 190 IT industry video operation workers in a city of Jiangsu province, analyzing the relations between video exposure and eye diseases. The daily video exposure of the workers is 6.0-16.0 hours, with a mean of (10.1 ± 1.8) hours. 79.5% of the workers in this survey wear myopic lenses, 35.8% take rests during work, and 14.2% use protective products when their eyes feel unwell. In the tear break-up time (BUT) test, 54.7% of the workers had normal results in both eyes, while 45.3% had an abnormal result in at least one eye. Likewise, in the Schirmer I test (SIT), 54.7% of the workers had normal results in both eyes, while 42.1% were abnormal. According to a generalized linear model, six factors (mean daily video time, distance between eye and display, frequency of rest, use of protective products when the eyes feel unwell, type of display, and daily time watching TV) had a statistically significant influence on vision. Six factors (regular rests, sex, corneal transparency, pupil shape, family history, and use of protective products when the eyes feel unwell) had a statistically significant influence on BUT results. Seven factors (type of computer, sex, pupil shape, corneal transparency, angle between display and the worker's line of sight, type of display, and height of the working surface)
had a statistically significant influence on SIT results. The eye health of IT industry video operation workers is not optimistic, and most workers lack protection awareness; education on the influencing factors should be strengthened and the medical prevention and control of eye diseases in the relevant industries improved.
Holo-Chidi video concentrator card
NASA Astrophysics Data System (ADS)
Nwodoh, Thomas A.; Prabhakar, Aditya; Benton, Stephen A.
2001-12-01
The Holo-Chidi Video Concentrator Card is a frame buffer for the Holo-Chidi holographic video processing system. Holo-Chidi is designed at the MIT Media Laboratory for real-time computation of computer generated holograms and the subsequent display of the holograms at video frame rates. The Holo-Chidi system is made of two sets of cards - the set of Processor cards and the set of Video Concentrator Cards (VCCs). The Processor cards are used for hologram computation, data archival/retrieval from a host system, and for higher-level control of the VCCs. The VCC formats computed holographic data from multiple hologram computing Processor cards, converting the digital data to analog form to feed the acousto-optic modulators of the Media Lab's Mark-II holographic display system. The Video Concentrator Card is made of: a High-Speed I/O (HSIO) interface whence data is transferred from the hologram computing Processor cards; a set of FIFOs and video RAM used as buffers for the hololines being displayed; a one-chip integrated microprocessor and peripheral combination that handles communication with other VCCs and furnishes the card with a USB port; a co-processor which controls display data formatting; and D-to-A converters that convert digital fringes to analog form. The co-processor is implemented with an SRAM-based FPGA with over 500,000 gates and controls all the signals needed to format the data from the multiple Processor cards into the format required by Mark-II. A VCC has three HSIO ports through which up to 500 megabytes of computed holographic data can flow from the Processor cards to the VCC per second. A Holo-Chidi system with three VCCs has enough frame buffering capacity to hold up to thirty-two 36-megabyte hologram frames at a time. Pre-computed holograms may also be loaded into the VCC from a host computer through the low-speed USB port.
Both the microprocessor and the co-processor in the VCC can access the main system memory used to store control programs and data for the VCC. The card also generates the control signals used by the scanning mirrors of Mark-II. In this paper we discuss the design of the VCC and its implementation in the Holo-Chidi system.
Method and apparatus for telemetry adaptive bandwidth compression
NASA Technical Reports Server (NTRS)
Graham, Olin L.
1987-01-01
Methods and apparatus are provided for automatic and/or manual adaptive bandwidth compression of telemetry. An adaptive sampler samples a video signal from a scanning sensor and generates a sequence of sampled fields. Each field and range rate information from the sensor are then sequentially transmitted to and stored in a multiple and adaptive field storage means. The field storage means then, in response to an automatic or manual control signal, transfers the stored sampled field signals to a video monitor in a form for sequential or simultaneous display of a desired number of stored signal fields. The sampling ratio of the adaptive sampler, the relative proportion of available communication bandwidth allocated respectively to transmitted data and video information, and the number of fields simultaneously displayed are manually or automatically selectively adjustable in functional relationship to each other and the detected range rate. In one embodiment, when relatively little or no scene motion is detected, the control signal maximizes the sampling ratio and causes simultaneous display of all stored fields, thus maximizing resolution and the bandwidth available for data transmission. When increased scene motion is detected, the control signal is adjusted accordingly to cause display of fewer fields. If greater resolution is desired, the control signal is adjusted to increase the sampling ratio.
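The motion-dependent trade-off this patent describes can be caricatured in a few lines: estimate inter-frame motion, then trade sampling density against the number of stored fields displayed (and hence the bandwidth left for data). The thresholds, ratios, and function name below are hypothetical, chosen only to mirror the qualitative behavior described, not the apparatus itself.

```python
import numpy as np

def choose_sampling(prev_frame, frame, thresholds=(1.0, 5.0)):
    """Pick a field-sampling policy from inter-frame motion: low motion ->
    sparse sampling with many fields displayed together (high resolution,
    more bandwidth for data); high motion -> dense sampling, fewer fields."""
    motion = float(np.mean(np.abs(frame - prev_frame)))
    low, high = thresholds
    if motion < low:
        return {"sample_every": 4, "fields_displayed": 4}
    if motion < high:
        return {"sample_every": 2, "fields_displayed": 2}
    return {"sample_every": 1, "fields_displayed": 1}
```

Combining several sparsely sampled fields on the monitor recovers full resolution for static scenes, which is exactly why the low-motion branch can afford the highest sampling ratio.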
Seeing and Doing Science--With Video.
ERIC Educational Resources Information Center
Berger, Michelle Abel
1994-01-01
The article presents a video-based unit on camouflage for students in grades K-5, explaining how to make the classroom VCR a dynamic teaching tool. Information is offered on introducing the unit, active viewing strategies, and follow-up activities. Tips for teaching with video are included. (SM)
Analysis and Selection of a Remote Docking Simulation Visual Display System
NASA Technical Reports Server (NTRS)
Shields, N., Jr.; Fagg, M. F.
1984-01-01
The development of a remote docking simulation visual display system is examined. Video system and operator performance are discussed as well as operator command and control requirements and a design analysis of the reconfigurable work station.
Eavesdropping and signal matching in visual courtship displays of spiders.
Clark, David L; Roberts, J Andrew; Uetz, George W
2012-06-23
Eavesdropping on communication is widespread among animals, e.g. bystanders observing male-male contests, female mate choice copying and predator detection of prey cues. Some animals also exhibit signal matching, e.g. overlapping of competitors' acoustic signals in aggressive interactions. Fewer studies have examined male eavesdropping on conspecific courtship, although males could increase mating success by attending to others' behaviour and displaying whenever courtship is detected. In this study, we show that field-experienced male Schizocosa ocreata wolf spiders exhibit eavesdropping and signal matching when exposed to video playback of courting male conspecifics. Male spiders had longer bouts of interaction with a courting male stimulus, and more bouts of courtship signalling during and after the presence of a male on the video screen. Rates of courtship (leg tapping) displayed by individual focal males were correlated with the rates of the video exemplar to which they were exposed. These findings suggest male wolf spiders might gain information by eavesdropping on conspecific courtship and adjust performance to match that of rivals. This represents a novel finding, as these behaviours have previously been seen primarily among vertebrates.
19. SITE BUILDING 002 SCANNER BUILDING AIR POLICE ...
19. SITE BUILDING 002 - SCANNER BUILDING - AIR POLICE SITE SECURITY OFFICE WITH "SITE PERIMETER STATUS PANEL" AND REAL TIME VIDEO DISPLAY OUTPUT FROM VIDEO CAMERA SYSTEM AT SECURITY FENCE LOCATIONS. - Cape Cod Air Station, Technical Facility-Scanner Building & Power Plant, Massachusetts Military Reservation, Sandwich, Barnstable County, MA
Video bandwidth compression system
NASA Astrophysics Data System (ADS)
Ludington, D.
1980-08-01
The objective of this program was the development of a Video Bandwidth Compression brassboard model for use by the Air Force Avionics Laboratory, Wright-Patterson Air Force Base, in evaluation of bandwidth compression techniques for use in tactical weapons and to aid in the selection of particular operational modes to be implemented in an advanced flyable model. The bandwidth compression system is partitioned into two major divisions: the encoder, which processes the input video with a compression algorithm and transmits the most significant information; and the decoder where the compressed data is reconstructed into a video image for display.
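The encoder/decoder partition described above can be illustrated with a toy scheme in which the encoder transmits only the largest-magnitude samples and the decoder reconstructs a frame for display. The abstract does not describe the brassboard's actual compression algorithm; this is only a sketch of the division of labor between the two blocks.

```python
def encode(frame, keep=4):
    """Toy 'most significant information' encoder: keep only the
    largest-magnitude samples and their positions (illustrative,
    not the actual Air Force brassboard algorithm)."""
    ranked = sorted(enumerate(frame), key=lambda p: -abs(p[1]))
    return len(frame), ranked[:keep]

def decode(packet):
    """Decoder block: rebuild a displayable frame from the
    compressed data, zero-filling the discarded samples."""
    length, coeffs = packet
    frame = [0] * length
    for i, v in coeffs:
        frame[i] = v
    return frame

frame = [0, 9, 1, 0, 7, 0, 2, 8]
print(decode(encode(frame)))  # [0, 9, 0, 0, 7, 0, 2, 8]
```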
Fractional screen video enhancement apparatus
Spletzer, Barry L [Albuquerque, NM; Davidson, George S [Albuquerque, NM; Zimmerer, Daniel J [Tijeras, NM; Marron, Lisa C [Albuquerque, NM
2005-07-19
The present invention provides a method and apparatus for displaying two portions of an image at two resolutions. For example, the invention can display an entire image at a first resolution, and a subset of the image at a second, higher resolution. Two inexpensive, low resolution displays can be used to produce a large image with high resolution only where needed.
Exploiting spatio-temporal characteristics of human vision for mobile video applications
NASA Astrophysics Data System (ADS)
Jillani, Rashad; Kalva, Hari
2008-08-01
Video applications on handheld devices such as smart phones pose a significant challenge to achieving a high-quality user experience. Recent advances in processor and wireless networking technology are producing a new class of multimedia applications (e.g. video streaming) for mobile handheld devices. These devices are lightweight and of modest size, and therefore have very limited resources: lower processing power, smaller display resolution, less memory, and limited battery life compared with desktop and laptop systems. Multimedia applications, on the other hand, have extensive processing requirements, which makes mobile devices extremely resource hungry. In addition, device-specific properties (e.g. the display screen) significantly influence the human perception of multimedia quality. In this paper we propose a saliency-based framework that exploits the structure in content creation as well as the human vision system to find the salient points in the incoming bitstream and adapt it according to the target device, thus improving the quality of the adapted area around salient points. Our experimental results indicate that an adaptation process that is cognizant of video content and user preferences can produce video of better perceptual quality for mobile devices. Furthermore, we demonstrate how such a framework can affect user experience on a handheld device.
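One simple form of saliency-driven adaptation for a small display is to center a device-sized crop window on the most salient point, clamped to the frame. The function below is a hypothetical sketch of that step only; the paper's saliency detection from the bitstream structure is not modelled.

```python
def adapt_for_device(frame_w, frame_h, salient, target_w, target_h):
    """Toy content-aware adaptation: place a target_w x target_h crop
    window centered on the salient point (sx, sy), clamping the
    window so it stays inside the source frame. Purely illustrative
    of salient-region adaptation for a small display."""
    sx, sy = salient
    x = min(max(sx - target_w // 2, 0), frame_w - target_w)
    y = min(max(sy - target_h // 2, 0), frame_h - target_h)
    return x, y, target_w, target_h

# 1080p source adapted for a hypothetical 320x240 handheld screen
print(adapt_for_device(1920, 1080, (1700, 500), 320, 240))
```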
Highly Reflective Multi-stable Electrofluidic Display Pixels
NASA Astrophysics Data System (ADS)
Yang, Shu
Electronic papers (E-papers) are displays that mimic the appearance of printed paper while retaining the features of conventional electronic displays, such as the ability to browse websites and play videos. The motivation for creating paper-like displays comes from the facts that reading on paper causes the least eye fatigue, owing to paper's reflective and light-diffusive nature, and that, unlike existing commercial displays, no energy of any form is spent sustaining the displayed image. To achieve the visual effect of a paper print, an ideal E-paper has to be highly reflective, with a good contrast ratio and full-color capability. To sustain an image with zero power consumption, the display pixels need to be bistable, meaning the "on" and "off" states are both lowest-energy states; a pixel can change state only when sufficient external energy is supplied. Many emerging technologies are competing to demonstrate the first ideal E-paper device, but none yet achieves satisfactory visual effect, bistability, and video speed at the same time; the challenges come from inherent physical/chemical properties or from the fabrication process. Electrofluidic display is one of the most promising E-paper technologies. It has successfully demonstrated high reflectivity, brilliant color, and video-speed operation by moving a colored pigment dispersion between visible and invisible positions with electrowetting force, but the pixel design did not allow image bistability. Presented in this dissertation are multi-stable electrofluidic display pixels that can sustain grayscale levels without any power consumption while keeping the favorable features of the previous-generation electrofluidic display. The pixel design, the fabrication method using multiple-layer dry-film photoresist lamination, and the physical/optical characterizations are discussed in detail.
Based on the pixel structure, preliminary results of a simplified design and fabrication method are demonstrated. As advanced research topics concerning device optical performance, an optical model for evaluating the light out-coupling efficiency of reflective displays is first established to guide the pixel design; aluminum surface diffusers are then analytically modeled and fabricated onto multi-stable electrofluidic display pixels to demonstrate truly "white" multi-stable electrofluidic display modules. These results establish the multi-stable electrofluidic display as an excellent candidate for the ultimate E-paper device, especially for larger-scale signage applications.
[Flipped classroom as a strategy to enhance active learning].
Wakabayashi, Noriyuki
2015-03-01
This paper reviews the introduction of a flipped class for fourth-grade dentistry students and analyzes the characteristics of the learning method. In fiscal 2013 and 2014, a series of ten three-hour units on removable partial prosthodontics was completed with the flipped class method: a lecture video of approximately 60 minutes was made by the teacher (author) and uploaded to the university's e-learning website one week before each class. Students were instructed to prepare for the class by watching the streaming video on their PC, tablet, or smartphone. In the flipped class, students were not given a lecture but were asked to solve short questions displayed on screen, to give a short presentation on part of the video lecture, and to discuss a critical question related to the main subject of the day. An additional team-based learning (TBL) session with individual and group answers was implemented. The average individual scores were considerably higher in the last two years, when the flipped method was implemented, than in the three previous years, when conventional lectures were used. The following learning concepts are discussed: the role of the flipped method as an active learning strategy, the efficacy of lecture videos and short questions, students' participation in class discussion, the present-day value of the method, cooperation with TBL, the significance of active learning in relation to students' learning ability, and the potential increase in preparation time and workload for students.
Design of a highly integrated video acquisition module for smart video flight unit development
NASA Astrophysics Data System (ADS)
Lebre, V.; Gasti, W.
2017-11-01
CCD and APS devices are widely used in space missions as instrument sensors and/or in avionics units such as star detectors/trackers, so various and numerous designs of video acquisition chains have been produced. A classical video acquisition chain consists of two main functional blocks: the Proximity Electronics (PEC), which includes the detector drivers, and the Analogue Processing Chain (APC) electronics, which embeds the ADC, a master sequencer, and the host interface. Nowadays, low-power technologies make it possible to improve the integration, radiometric performance, and power budget of video units and to standardize video unit design and development. To this end, ESA has initiated a development activity through a competitive process requesting the expertise of experienced actors in the field of high-resolution electronics for Earth observation and scientific missions. THALES ALENIA SPACE has been awarded this activity as prime contractor through an ESA contract called HIVAC, which stands for Highly Integrated Video Acquisition Chain. This paper presents the main objectives of the ongoing HIVAC project and focuses on the functionalities and performance offered by the HIVAC board under development for future optical instruments.
Display aids for remote control of untethered undersea vehicles
NASA Technical Reports Server (NTRS)
Verplank, W. L.
1978-01-01
A predictor display superimposed on slow-scan video or sonar data is proposed as a method to allow better remote manual control of an untethered submersible. Simulation experiments show good control under circumstances which otherwise make control practically impossible.
Preliminary experience with a stereoscopic video system in a remotely piloted aircraft application
NASA Technical Reports Server (NTRS)
Rezek, T. W.
1983-01-01
Remote piloting video display development at the Dryden Flight Research Facility of NASA's Ames Research Center is summarized, and the reasons for considering stereo television are presented. Pertinent equipment is described. Limited flight experience is also discussed, along with recommendations for further study.
Comparing Pictures and Videos for Teaching Action Labels to Children with Communication Delays
ERIC Educational Resources Information Center
Schebell, Shannon; Shepley, Collin; Mataras, Theologia; Wunderlich, Kara
2018-01-01
Children with communication delays often display difficulties labeling stimuli in their environment, particularly related to actions. Research supports direct instruction with video and picture stimuli for increasing children's action labeling repertoires; however, no studies have compared which type of stimuli results in more efficient,…
1996-01-01
Ted Brunzie and Peter Mason observe the float package and the data rack aboard the DC-9 reduced gravity aircraft. The float package contains a cryostat, a video camera, a pump, and accelerometers. The data rack displays and records the video signal from the float package on tape and stores acceleration and temperature measurements on disk.
ERIC Educational Resources Information Center
Krumboltz, John D.; Babineaux, Ryan; Wientjes, Greg
2010-01-01
The supply of occupational information appears to exceed the demand. A website displaying over 100 videos about various occupations was created to help career searchers find attractive alternatives. Access to the videos was free for anyone in the world. It had been hoped that many thousands of people would make use of the resource. However, the…
On-line content creation for photo products: understanding what the user wants
NASA Astrophysics Data System (ADS)
Fageth, Reiner
2015-03-01
This paper describes how videos can be implemented in printed photo books and greeting cards. We will show that, surprisingly or not, pictures taken from videos are used much like classical images to tell compelling stories. Videos can be taken with nearly every camera: digital point-and-shoot cameras, DSLRs, smartphones, and increasingly so-called action cameras mounted on sports devices. The software implementation that generates QR codes and extracts relevant pictures from the video stream was the content of last year's paper. This year we present first data on what content is displayed and how users represent their videos in printed products, e.g. CEWE PHOTOBOOKS and greeting cards. We report the share of the different video formats used.
Automatic view synthesis by image-domain-warping.
Stefanoski, Nikolce; Wang, Oliver; Lang, Manuel; Greisen, Pierre; Heinzle, Simon; Smolic, Aljosa
2013-09-01
Today, stereoscopic 3D (S3D) cinema is already mainstream, and almost all new display devices for the home support S3D content. S3D distribution infrastructure to the home is already partly established in the form of 3D Blu-ray discs, video-on-demand services, or television channels. The necessity to wear glasses, however, is often considered an obstacle that hinders broader acceptance of this technology in the home. Multiview autostereoscopic displays enable glasses-free perception of S3D content for several observers simultaneously and support head motion parallax in a limited range. To support multiview autostereoscopic displays in an already established S3D distribution infrastructure, a synthesis of new views from S3D video is needed. In this paper, a view synthesis method based on image-domain-warping (IDW) is presented that synthesizes new views directly from S3D video and functions completely automatically. IDW relies on an automatic and robust estimation of sparse disparities and image saliency information, and enforces target disparities in synthesized images using an image warping framework. Two configurations of the view synthesizer in the scope of a transmission and view synthesis framework are analyzed and evaluated. A transmission and view synthesis system that uses IDW was recently submitted to MPEG's call for proposals on 3D video technology, where it ranked among the four best performing proposals.
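The core idea of warping-based view synthesis, shifting pixels by scaled disparities to render an intermediate viewpoint, can be sketched on a single scanline. This sketch assumes a dense per-pixel disparity map and crude hole filling; the actual IDW method instead estimates sparse disparities and solves for a smooth, saliency-weighted warp of the whole image.

```python
def warp_scanline(row, disparity, alpha=0.5):
    """Minimal view-synthesis sketch: move each pixel of one stereo
    view by alpha times its disparity to approximate a view between
    the two original cameras. Illustrative only; not the IDW
    optimization described in the paper."""
    out = [None] * len(row)
    for x, value in enumerate(row):
        nx = round(x + alpha * disparity[x])  # target column in new view
        if 0 <= nx < len(row):
            out[nx] = value
    # fill disocclusion holes from the nearest left neighbour (crude)
    for x in range(len(out)):
        if out[x] is None:
            out[x] = out[x - 1] if x else 0
    return out

print(warp_scanline([1, 2, 3, 4], [2, 2, 2, 2]))  # [0, 1, 2, 3]
```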
Update on POCIT portable optical communicators: VideoBeam and EtherBeam
NASA Astrophysics Data System (ADS)
Mecherle, G. Stephen; Holcomb, Terry L.
2000-05-01
LDSC is developing the POCIT™ (Portable Optical Communication Integrated Transceiver) family of products, which includes VideoBeam™ and the latest addition, EtherBeam™. Each is a full-duplex portable laser communicator: VideoBeam™ provides near-broadcast-quality analog video and stereo audio, and EtherBeam™ provides standard Ethernet connectivity. Each POCIT™ transceiver consists of a 3.5-pound unit with a binocular-type form factor, which can be manually pointed, tripod-mounted, or gyro-stabilized. Both units have an operational range of over two miles (clear air) with excellent jam-resistance and low probability of interception characteristics. The transmission wavelength of 1550 nm enables Class 1 eyesafe operation (ANSI, IEC). The POCIT™ units are ideally suited for numerous military scenarios, surveillance/espionage, industrial precious mineral exploration, and campus video teleconferencing applications. VideoBeam will be available in the second quarter of 2000, followed by EtherBeam in the third quarter of 2000.
NASA Technical Reports Server (NTRS)
Stute, Robert A. (Inventor); Galloway, F. Houston (Inventor); Medelius, Pedro J. (Inventor); Swindle, Robert W. (Inventor); Bierman, Tracy A. (Inventor)
1996-01-01
A remote monitor alarm system monitors discrete alarm and analog power supply voltage conditions at remotely located communications terminal equipment. A central monitoring unit (CMU) is connected via serial data links to each of a plurality of remote terminal units (RTUs) that monitor the alarm and power supply conditions of the remote terminal equipment. Each RTU can monitor and store condition information from both discrete alarm points and analog power supply voltage points in its associated communications terminal equipment. The stored alarm information is periodically transmitted to the CMU in response to sequential polling of the RTUs. The number of monitored alarm inputs and the permissible voltage ranges for the analog inputs can be remotely configured at the CMU and downloaded into programmable memory at each RTU. The CMU includes a video display, a hard disk memory, a line printer, and an audio alarm for communicating and storing the alarm information received from each RTU.
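The polling protocol, with each RTU buffering alarm and voltage conditions until the CMU polls it over its serial link, can be sketched as below. Class names, the voltage limits, and the record layout are illustrative assumptions, not details from the patent.

```python
class RTU:
    """Stand-in for a remote terminal unit: buffers discrete alarm
    and analog voltage readings until polled by the CMU. The limits
    are downloadable configuration in the patent; here they are
    hypothetical defaults."""
    def __init__(self, ident, v_lo=11.5, v_hi=12.5):
        self.ident, self.v_lo, self.v_hi = ident, v_lo, v_hi
        self.buffer = []

    def sample(self, alarms, voltage):
        self.buffer.append({
            "alarms": alarms,
            "voltage": voltage,
            "in_range": self.v_lo <= voltage <= self.v_hi,
        })

    def poll(self):
        """CMU side of the exchange: fetch and clear stored conditions."""
        records, self.buffer = self.buffer, []
        return {"rtu": self.ident, "records": records}

# CMU sequentially polls each RTU, as in the patented system
rtus = [RTU("RTU-1"), RTU("RTU-2")]
rtus[0].sample(alarms=[], voltage=12.1)
rtus[1].sample(alarms=["door"], voltage=10.9)  # voltage out of range
for r in rtus:
    print(r.poll())
```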
Optimization of the polyplanar optical display electronics for a monochrome B-52 display
NASA Astrophysics Data System (ADS)
DeSanto, Leonard
1998-09-01
The Polyplanar Optical Display (POD) is a unique display screen which can be used with any projection source. The prototype ten-inch display is two inches thick and has a matte black face which allows for high contrast images. The prototype being developed is a form, fit and functional replacement display for the B-52 aircraft which uses a monochrome ten-inch display. In order to achieve a long lifetime, the new display uses a new 200 mW green solid-state laser (10,000 hr. life) at 532 nm as its light source. To produce real-time video, the laser light is being modulated by a Digital Light Processing (DLPTM) chip manufactured by Texas Instruments (TI). In order to use the solid-state laser as the light source and also fit within the constraints of the B-52 display, the Digital Micromirror Device (DMDTM) chip is operated remotely from the Texas Instruments circuit board. In order to achieve increased brightness a monochrome digitizing interface was investigated. The operation of the DMDTM divorced from the light engine and the interfacing of the DMDTM board with the RS-170 video format specific to the B-52 aircraft will be discussed, including the increased brightness of the monochrome digitizing interface. A brief description of the electronics required to drive the new 200 mW laser is also presented.
The Short Life and Ignominious Death of ALA Video and Special Projects.
ERIC Educational Resources Information Center
Handman, Gary
1991-01-01
Discussion of videocassettes in our culture and the function of video collections in libraries focuses on the creation and demise of a unit sponsored by the American Library Association, the ALA Video and Special Projects. The unit's role is discussed and funding decisions that led to its demise are explained. (LRW)
NASA Astrophysics Data System (ADS)
Bada, Adedayo; Alcaraz-Calero, Jose M.; Wang, Qi; Grecos, Christos
2014-05-01
This paper describes a comprehensive empirical performance evaluation of 3D video processing employing a physical/virtual architecture implemented in a cloud environment. Different virtualization technologies, virtual video cards, and various 3D benchmark tools have been utilized in order to analyse the optimal performance in the context of 3D online gaming applications. This study highlights 3D video rendering performance under each type of hypervisor, along with other factors including network I/O, disk I/O, and memory usage. Comparisons of these factors under well-known virtual display technologies such as VNC, Spice, and Virtual 3D adaptors reveal the strengths and weaknesses of the various hypervisors with respect to 3D video rendering and streaming.
System Synchronizes Recordings from Separated Video Cameras
NASA Technical Reports Server (NTRS)
Nail, William; Nail, William L.; Nail, Jasper M.; Le, Doung T.
2009-01-01
A system of electronic hardware and software for synchronizing recordings from multiple, physically separated video cameras is being developed, primarily for use in multiple-look-angle video production. The system, the time code used in the system, and the underlying method of synchronization upon which the design of the system is based are denoted generally by the term "Geo-TimeCode(TradeMark)." The system is embodied mostly in compact, lightweight, portable units (see figure) denoted video time-code units (VTUs) - one VTU for each video camera. The system is scalable in that any number of camera recordings can be synchronized. The estimated retail price per unit would be about $350 (in 2006 dollars). The need for this or another synchronization system external to video cameras arises because most video cameras do not include internal means for maintaining synchronization with other video cameras. Unlike prior video-camera-synchronization systems, this system does not depend on continuous cable or radio links between cameras (however, it does depend on occasional cable links lasting a few seconds). Also, whereas the time codes used in prior video-camera-synchronization systems typically repeat after 24 hours, the time code used in this system does not repeat for slightly more than 136 years; hence, this system is much better suited for long-term deployment of multiple cameras.
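The 136-year non-repeat figure quoted above is consistent with a time code that counts seconds in a 32-bit field; that code width is an assumption here, since the abstract does not state it.

```python
# A 32-bit counter of seconds rolls over after 2**32 seconds.
SECONDS_PER_YEAR = 365.25 * 24 * 3600       # Julian year: 31,557,600 s
span_years = 2**32 / SECONDS_PER_YEAR       # assumed 32-bit seconds field
print(f"{span_years:.1f} years")            # about 136.1 years
```

That matches "slightly more than 136 years", versus the 24-hour wrap of conventional SMPTE-style time codes.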
When less is best: female brown-headed cowbirds prefer less intense male displays.
O'Loghlen, Adrian L; Rothstein, Stephen I
2012-01-01
Sexual selection theory predicts that females should prefer males with the most intense courtship displays. However, wing-spread song displays that male brown-headed cowbirds (Molothrus ater) direct at females are generally less intense than versions of this display that are directed at other males. Because male-directed displays are used in aggressive signaling, we hypothesized that females should prefer lower intensity performances of this display. To test this hypothesis, we played audiovisual recordings showing the same males performing both high intensity male-directed and low intensity female-directed displays to females (N = 8) and recorded the females' copulation solicitation display (CSD) responses. All eight females responded strongly to both categories of playbacks but were more sexually stimulated by the low intensity female-directed displays. Because each pair of high and low intensity playback videos had the exact same audio track, the divergent responses of females must have been based on differences in the visual content of the displays shown in the videos. Preferences female cowbirds show in acoustic CSD studies are correlated with mate choice in field and captivity studies and this is also likely to be true for preferences elucidated by playback of audiovisual displays. Female preferences for low intensity female-directed displays may explain why male cowbirds rarely use high intensity displays when signaling to females. Repetitive high intensity displays may demonstrate a male's current condition and explain why these displays are used in male-male interactions which can escalate into physical fights in which males in poorer condition could be injured or killed. This is the first study in songbirds to use audiovisual playbacks to assess how female sexual behavior varies in response to variation in a male visual display.
Li, Xiangrui; Lu, Zhong-Lin
2012-02-29
Display systems based on conventional computer graphics cards are capable of generating images with 8-bit gray-level resolution. However, most experiments in vision research require displays with more than 12 bits of luminance resolution. Several solutions are available. Bit++ (1) and DataPixx (2) use the Digital Visual Interface (DVI) output from graphics cards and high-resolution (14- or 16-bit) digital-to-analog converters to drive analog display devices. The VideoSwitcher (3) described here combines analog video signals from the red and blue channels of graphics cards with different weights using a passive resistor network (4) and an active circuit to deliver identical video signals to the three channels of color monitors. The method provides an inexpensive way to enable high-resolution monochromatic displays using conventional graphics cards and analog monitors. It can also provide trigger signals that can be used to mark stimulus onsets, making it easy to synchronize visual displays with physiological recordings or response time measurements. Although computer keyboards and mice are frequently used in measuring response times (RT), the accuracy of these measurements is quite low. The RTbox is a specialized hardware and software solution for accurate RT measurements. Connected to the host computer through a USB connection, the driver of the RTbox is compatible with all conventional operating systems. It uses a microprocessor and a high-resolution clock to record the identities and timing of button events, which are buffered until the host computer retrieves them. The recorded button events are not affected by potential timing uncertainties or biases associated with data transmission and processing in the host computer. The asynchronous storage greatly simplifies the design of user programs. Several methods are available to synchronize the clocks of the RTbox and the host computer.
The RTbox can also receive external triggers and be used to measure RT with respect to external events. Both VideoSwitcher and RTbox are available for users to purchase. The relevant information and many demonstration programs can be found at http://lobes.usc.edu/.
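The channel-combining idea behind the VideoSwitcher, mixing two 8-bit DAC outputs with very unequal weights into one high-resolution luminance signal, can be illustrated as follows. The weight of 128 and the simple mixing rule are illustrative assumptions, not the device's actual resistor values.

```python
import math

def combine(red, blue, w=128):
    """Sketch of combining two 8-bit channels with unequal weights:
    the blue channel supplies coarse luminance steps and the red
    channel, attenuated by the factor w, supplies fine steps
    between them. Illustrative only."""
    return blue + red / w

# With one channel attenuated by 128, the pair can address roughly
# 256 * 128 = 32768 distinct levels, i.e. about 15 bits -- well past
# the 12-bit requirement cited for vision research.
levels = 256 * 128
print(math.log2(levels))  # 15.0
```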
Modern Display Technologies for Airborne Applications.
1983-04-01
the case of LED head-down direct view displays, this requires that special attention be paid to the optical filtering, the electrical drive/address... effectively attenuates the LED specular reflectance component, the colour and neutral density filtering attenuate the diffuse component and the... filter techniques are planned for use with video, multi-colour and advanced versions of numeric, alphanumeric and graphic displays; this technique
Eye movements while viewing narrated, captioned, and silent videos
Ross, Nicholas M.; Kowler, Eileen
2013-01-01
Videos are often accompanied by narration delivered either by an audio stream or by captions, yet little is known about saccadic patterns while viewing narrated video displays. Eye movements were recorded while viewing video clips with (a) audio narration, (b) captions, (c) no narration, or (d) concurrent captions and audio. A surprisingly large proportion of time (>40%) was spent reading captions even in the presence of a redundant audio stream. Redundant audio did not affect the saccadic reading patterns but did lead to skipping of some portions of the captions and to delays of saccades made into the caption region. In the absence of captions, fixations were drawn to regions with a high density of information, such as the central region of the display, and to regions with high levels of temporal change (actions and events), regardless of the presence of narration. The strong attraction to captions, with or without redundant audio, raises the question of what determines how time is apportioned between captions and video regions so as to minimize information loss. The strategies of apportioning time may be based on several factors, including the inherent attraction of the line of sight to any available text, the moment by moment impressions of the relative importance of the information in the caption and the video, and the drive to integrate visual text accompanied by audio into a single narrative stream. PMID:23457357
Effects of Picture Prompts Delivered by a Video iPod on Pedestrian Navigation
ERIC Educational Resources Information Center
Kelley, Kelly R.; Test, David W.; Cooke, Nancy L.
2013-01-01
Transportation access is a major contributor to independence, productivity, and societal inclusion for individuals with intellectual and development disabilities (IDD). This study examined the effects of pedestrian navigation training using picture prompts displayed through a video iPod on travel route completion with 4 adults and IDD. Results…
NASA Technical Reports Server (NTRS)
Gilliland, M. G.; Rougelot, R. S.; Schumaker, R. A.
1966-01-01
Video signal processor uses special-purpose integrated circuits with nonsaturating current-mode switching to accept texture and color information from a digital computer in a visual spaceflight simulator and to combine these with analog fading information for display on a color CRT.
38 CFR 1.9 - Description, use, and display of VA seal and flag.
Code of Federal Regulations, 2010 CFR
2010-07-01
...) Official awards, certificates, medals, and plaques. (E) Motion picture film, video tape, and other... governments. (F) Official awards, certificates, and medals. (G) Motion picture film, video tape, and other... with this section shall be subject to the penalty provisions of 18 U.S.C. 506, 701, or 1017, providing...
Imaging System for Vaginal Surgery.
Taylor, G Bernard; Myers, Erinn M
2015-12-01
The vaginal surgeon is challenged with performing complex procedures within a surgical field of limited light and exposure. The video telescopic operating microscope is an illumination and imaging system that provides visualization during open surgical procedures with a limited field of view. The imaging system is positioned within the surgical field and then secured to the operating room table with a maneuverable holding arm. A high-definition camera and xenon light source allow transmission of the magnified image to a high-definition monitor in the operating room. The monitor screen is positioned above the patient for the surgeon and assistants to view in real time throughout the operation. The video telescopic operating microscope system was used to provide surgical illumination and magnification during total vaginal hysterectomy and salpingectomy, midurethral sling, and release of vaginal scar procedures. All procedures were completed without complications. The video telescopic operating microscope provided illumination of the vaginal operative field and display of the magnified image onto high-definition monitors in the operating room for the surgeon and staff to simultaneously view the procedures. The video telescopic operating microscope provides high-definition display, magnification, and illumination during vaginal surgery.
ERIC Educational Resources Information Center
Walsh, Janet
1982-01-01
Discusses the health hazards of working with the visual display systems of computers, in particular the eye problems associated with long-term use of video display terminals. Excerpts from and ordering information for the National Institute for Occupational Safety and Health report on such hazards are included. (JJD)
Impact of pain behaviors on evaluations of warmth and competence.
Ashton-James, Claire E; Richardson, Daniel C; de C Williams, Amanda C; Bianchi-Berthouze, Nadia; Dekker, Peter H
2014-12-01
This study investigated the social judgments that are made about people who appear to be in pain. Fifty-six participants viewed 2 video clips of human figures exercising. The videos were created by a motion tracking system, and showed dots that had been placed at various points on the body, so that body motion was the only visible cue. One of the figures displayed pain behaviors (e.g., rubbing, holding, hesitating), while the other did not. Without any other information about the person in each video, participants evaluated each person on a variety of attributes associated with interpersonal warmth, competence, mood, and physical fitness. As well as judging them to be in more pain, participants evaluated the person who displayed pain behavior as less warm and less competent than the person who did not display pain behavior. In addition, the person who displayed pain behavior was perceived to be in a more negative mood and to have poorer physical fitness than the person who did not, and these perceptions contributed to the impact of pain behaviors on evaluations of warmth and competence, respectively. The implications of these negative social evaluations for social relationships, well-being, and pain assessment in persons in chronic pain are discussed. Copyright © 2014 International Association for the Study of Pain. Published by Elsevier B.V. All rights reserved.
Kutsuna, Kenichiro; Matsuura, Yasuyuki; Fujikake, Kazuhiro; Miyao, Masaru; Takada, Hiroki
2013-01-01
Visually induced motion sickness (VIMS) is caused by sensory conflict, the disagreement between vergence and visual accommodation while observing stereoscopic images. VIMS can be measured by psychological and physiological methods. We propose a mathematical methodology to measure the effect of three-dimensional (3D) images on the equilibrium function. In this study, body sway in the resting state is compared with that during exposure to 3D video clips on a liquid crystal display (LCD) and on a head mounted display (HMD). In addition, the Simulator Sickness Questionnaire (SSQ) was completed immediately afterward. Based on the statistical analysis of the SSQ subscores and each index for the stabilograms, we quantified the VIMS experienced during exposure to the stereoscopic images. Moreover, we discuss changes in the potential functions used to model control of the standing posture during exposure to stereoscopic video clips.
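The abstract does not state which stabilogram indices the authors computed; as an illustrative sketch only, two indices commonly derived from centre-of-pressure (COP) recordings, total locus length of the sway path and the area of the circle whose radius is the RMS distance from the mean COP, can be written as:

```python
# Illustrative stabilogram indices from centre-of-pressure samples
# (x, y coordinates, e.g. in cm). These are common sway measures,
# not necessarily the exact indices used in the study above.
from math import hypot, pi

def total_path_length(xs, ys):
    """Total locus length: sum of distances between successive COP samples."""
    return sum(hypot(xs[i + 1] - xs[i], ys[i + 1] - ys[i])
               for i in range(len(xs) - 1))

def rms_area(xs, ys):
    """Area of the circle with radius equal to the RMS distance
    of the COP samples from their mean position."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    r2 = sum((x - mx) ** 2 + (y - my) ** 2 for x, y in zip(xs, ys)) / len(xs)
    return pi * r2
```

Comparing such indices between the resting state and exposure to 3D clips is one way the body-sway analysis described above could be operationalized.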
Priority-based methods for reducing the impact of packet loss on HEVC encoded video streams
NASA Astrophysics Data System (ADS)
Nightingale, James; Wang, Qi; Grecos, Christos
2013-02-01
The rapid growth in the use of video streaming over IP networks has outstripped the rate at which new network infrastructure has been deployed. These bandwidth-hungry applications now comprise a significant part of all Internet traffic and present major challenges for network service providers. The situation is more acute in mobile networks where the available bandwidth is often limited. Work towards the standardisation of High Efficiency Video Coding (HEVC), the next generation video coding scheme, is currently on track for completion in 2013. HEVC offers the prospect of a 50% improvement in compression over the current H.264 Advanced Video Coding standard (H.264/AVC) for the same quality. However, there has been very little published research on HEVC streaming or the challenges of delivering HEVC streams in resource-constrained network environments. In this paper we consider the problem of adapting an HEVC encoded video stream to meet the bandwidth limitation in a mobile network environment. Video sequences were encoded using the Test Model under Consideration (TMuC HM6) for HEVC. Network abstraction layer (NAL) units were packetized, on a one NAL unit per RTP packet basis, and transmitted over a realistic hybrid wired/wireless testbed configured with dynamically changing network path conditions and multiple independent network paths from the streamer to the client. Two different schemes for the prioritisation of RTP packets, based on the NAL units they contain, have been implemented and empirically compared using a range of video sequences, encoder configurations, bandwidths and network topologies. In the first prioritisation method the importance of an RTP packet was determined by the type of picture and the temporal switching point information carried in the NAL unit header. 
Packets containing parameter set NAL units and video coding layer (VCL) NAL units of the instantaneous decoder refresh (IDR) and the clean random access (CRA) pictures were given the highest priority, followed by NAL units containing pictures used as reference pictures from which others can be predicted. The second method assigned a priority to each NAL unit based on the rate-distortion cost of the VCL coding units contained in the NAL unit. The sum of the rate-distortion costs of each coding unit contained in a NAL unit was used as the priority weighting. The preliminary results of extensive experiments have shown that both prioritisation methods offered an improvement in PSNR, when comparing original and decoded received streams, over uncontrolled packet loss. The first method consistently delivered a significant average improvement of 0.97dB over the uncontrolled scenario, while the second method provided a measurable, but less consistent, improvement across the range of testing conditions and encoder configurations.
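The two prioritisation rules above can be sketched as follows. The type labels, the `rd_costs` field, and the `drop_order` helper are illustrative assumptions for the purpose of the sketch, not the actual TMuC HM6 data structures or the paper's implementation:

```python
# Sketch of the two NAL-unit prioritisation rules described above.
# Type labels are symbolic stand-ins; real HEVC NAL headers carry
# numeric type codes.
from dataclasses import dataclass

# Parameter sets plus IDR/CRA random-access pictures get top priority.
HIGHEST = {"PS", "IDR", "CRA"}

@dataclass
class NalUnit:
    nal_type: str       # e.g. "PS", "IDR", "CRA", "REF", "NONREF"
    rd_costs: list      # per-coding-unit rate-distortion costs (method 2)

def priority_by_type(nal: NalUnit) -> int:
    """Method 1: class-based priority from the NAL unit type."""
    if nal.nal_type in HIGHEST:
        return 2        # dropped last under congestion
    if nal.nal_type == "REF":
        return 1        # reference pictures next
    return 0            # non-reference pictures dropped first

def priority_by_rd_cost(nal: NalUnit) -> float:
    """Method 2: sum of per-coding-unit rate-distortion costs."""
    return sum(nal.rd_costs)

def drop_order(nals, key):
    """Packets sorted ascending by priority: drop from the front."""
    return sorted(nals, key=key)
```

Under congestion a streamer using either rule would discard packets in ascending priority order, which is the behaviour the PSNR comparison above evaluates.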
NASA Astrophysics Data System (ADS)
Ilgner, Justus F. R.; Kawai, Takashi; Shibata, Takashi; Yamazoe, Takashi; Westhofen, Martin
2006-02-01
Introduction: An increasing number of surgical procedures are performed in a microsurgical and minimally-invasive fashion. However, the performance of surgery, its possibilities and limitations become difficult to teach. Stereoscopic video has evolved from a complex production process and expensive hardware towards rapid editing of video streams with standard and HDTV resolution which can be displayed on portable equipment. This study evaluates the usefulness of stereoscopic video in teaching undergraduate medical students. Material and methods: From an earlier study we chose two clips each of three different microsurgical operations (tympanoplasty type III of the ear, endonasal operation of the paranasal sinuses and laser chordectomy for carcinoma of the larynx). This material was supplemented with 23 clips of a cochlear implantation, which were specifically edited for a portable computer with an autostereoscopic display (PC-RD1-3D, SHARP Corp., Japan). The recording and synchronization of left and right image was performed at the University Hospital Aachen. The footage was edited stereoscopically at Waseda University by means of our original software for non-linear editing of stereoscopic 3-D movies. Then the material was converted into the streaming 3-D video format. The purpose of the conversion was to present the video clips in a file type that does not depend on a television signal such as PAL or NTSC. 25 4th year medical students who participated in the general ENT course at Aachen University Hospital were asked to estimate depth cues within the six video clips plus the cochlear implantation clips. Another 25 4th year students who were shown the material monoscopically on a conventional laptop served as control. Results: All participants noted that the additional depth information helped with understanding the relation of anatomical structures, even though none had hands-on experience with Ear, Nose and Throat operations before or during the course. 
The monoscopic group generally estimated resection depth at much lower values than in reality. Although this was also the case for some participants in the stereoscopic group, their estimation of depth features reflected the enhanced depth impression provided by stereoscopy. Conclusion: Following this first implementation of stereoscopic video teaching, medical students who are inexperienced with ENT surgical procedures were able to reproduce depth information, and therefore anatomically complex structures, to a greater extent. Besides extending video teaching to junior doctors, the next evaluation step will address its effect on the learning curve during the surgical training program.
NASA Astrophysics Data System (ADS)
Riendeau, Diane
2012-04-01
I am indebted to Frank Noschese for this month's column. Frank has a fantastic blog and video list. He uses these videos in his class and asks the students to determine if they show a Physics Win or a Physics Fail. He will hook his students at the beginning of a new unit with a video and then refer back to that video at the end of the unit. Check out Frank's blog at: fnoschese.wordpress.com/2010/07/21/win-fail-physics-an-introduction/.
High-speed reconstruction of compressed images
NASA Astrophysics Data System (ADS)
Cox, Jerome R., Jr.; Moore, Stephen M.
1990-07-01
A compression scheme is described that allows high-definition radiological images with greater than 8-bit intensity resolution to be represented by 8-bit pixels. Reconstruction of the images with their original intensity resolution can be carried out by means of a pipeline architecture suitable for compact, high-speed implementation. A reconstruction system is described that can be fabricated according to this approach and placed between an 8-bit display buffer and the display's video system thereby allowing contrast control of images at video rates. Results for 50 CR chest images are described showing that error-free reconstruction of the original 10-bit CR images can be achieved.
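The core idea, a nonlinear many-to-few-bit companding table inverted by a lookup table sitting between the 8-bit display buffer and the video system, can be sketched as below. The square-root curve is an invented stand-in; the paper's actual compression scheme is not specified here:

```python
# Sketch: represent 10-bit intensities as 8-bit codes via a nonlinear
# companding table, and reconstruct them with a lookup table (LUT).
# The square-root curve is an illustrative assumption, not the
# paper's actual scheme.
def build_tables(in_bits=10, out_bits=8):
    in_max, out_max = (1 << in_bits) - 1, (1 << out_bits) - 1
    # Forward (compression) table: 10-bit value -> 8-bit code.
    fwd = [round(out_max * (v / in_max) ** 0.5) for v in range(in_max + 1)]
    # Inverse (reconstruction) LUT: 8-bit code -> representative 10-bit value.
    inv = [round(in_max * (c / out_max) ** 2) for c in range(out_max + 1)]
    return fwd, inv

fwd, inv = build_tables()

def reconstruct(codes):
    """One table lookup per pixel: cheap enough for a hardware pipeline."""
    return [inv[c] for c in codes]
```

Because reconstruction is a single lookup per pixel, contrast control at video rates amounts to rewriting the inverse table, which matches the pipeline-between-buffer-and-display architecture described above.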
Description and flight tests of an oculometer
NASA Technical Reports Server (NTRS)
Middleton, D. B.; Hurt, G. J., Jr.; Wise, M. A.; Holt, J. D.
1977-01-01
A remote sensing oculometer was successfully operated during flight tests with a NASA experimental Twin Otter aircraft at the Langley Research Center. Although the oculometer was designed primarily for the laboratory, it was able to track the pilot's eye-point-of-regard (lookpoint) consistently and unobtrusively in the flight environment. The instantaneous position of the lookpoint was determined to within approximately 1 deg. Data were recorded on both analog and video tape. The video data consisted of continuous scenes of the aircraft's instrument display and a superimposed white dot (simulating the lookpoint) dwelling on an instrument or moving from instrument to instrument as the pilot monitored the display information during landing approaches.
Ochiai, Tetsuji; Mushiake, Hajime; Tanji, Jun
2005-07-01
The ventral premotor cortex (PMv) has been implicated in the visual guidance of movement. To examine whether neuronal activity in the PMv is involved in controlling the direction of motion of a visual image of the hand or the actual movement of the hand, we trained a monkey to capture a target that was presented on a video display using the same side of its hand as was displayed on the video display. We found that PMv neurons predominantly exhibited premovement activity that reflected the image motion to be controlled, rather than the physical motion of the hand. We also found that the activity of half of such direction-selective PMv neurons depended on which side (left versus right) of the video image of the hand was used to capture the target. Furthermore, this selectivity for a portion of the hand was not affected by changing the starting position of the hand movement. These findings suggest that PMv neurons play a crucial role in determining which part of the body moves in which direction, at least under conditions in which a visual image of a limb is used to guide limb movements.
A retrospective study of the performance of video laryngoscopy in an obstetric unit.
Aziz, Michael F; Kim, Diana; Mako, Jeffrey; Hand, Karen; Brambrink, Ansgar M
2012-10-01
We evaluated the performance of tracheal intubation using video laryngoscopy in an obstetric unit. We analyzed airway management details during a 3-year period and observed 180 intubations. All cases were managed with direct or video laryngoscopy. Direct laryngoscopy succeeded on the first attempt in 157 of 163 cases (95% confidence interval [CI], 92%-99%) and failed outright once. Video laryngoscopy succeeded on the first attempt in 18 of 18 cases (95% CI, 81%-100%). The failed direct laryngoscopy was rescued with video laryngoscopy. The patients managed with video laryngoscopy frequently required urgent or emergency surgery and had predictors of difficult direct laryngoscopy in 16 of 18 cases. Video laryngoscopy may be a useful adjunct for obstetric airway management, and its role in this difficult airway scenario should be further studied.
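Intervals of this shape, such as 81%-100% for 18 of 18 successes, are consistent with the exact (Clopper-Pearson) binomial interval; whether that is the method the authors used is an assumption here. A stdlib-only sketch using bisection:

```python
# Exact (Clopper-Pearson) binomial confidence interval, stdlib only.
# Assumed to be the interval type behind figures like "18/18, 81%-100%".
from math import comb

def binom_tail_ge(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, x) * p**x * (1 - p)**(n - x) for x in range(k, n + 1))

def clopper_pearson(k, n, alpha=0.05):
    def bisect(pred):
        lo, hi = 0.0, 1.0          # pred is True at 0, False at 1
        for _ in range(60):
            mid = (lo + hi) / 2
            lo, hi = (mid, hi) if pred(mid) else (lo, mid)
        return (lo + hi) / 2
    # Lower bound solves P(X >= k | p) = alpha/2 (0 when k == 0).
    lower = 0.0 if k == 0 else bisect(lambda p: binom_tail_ge(k, n, p) <= alpha / 2)
    # Upper bound solves P(X <= k | p) = alpha/2 (1 when k == n).
    upper = 1.0 if k == n else bisect(lambda p: binom_tail_ge(k + 1, n, p) <= 1 - alpha / 2)
    return lower, upper
```

For 18 of 18 the lower bound reduces to (alpha/2)**(1/18), roughly 0.815, matching the quoted 81%; for 157 of 163 the interval lands near the quoted 92%-99%.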
Optical links in handheld multimedia devices
NASA Astrophysics Data System (ADS)
van Geffen, S.; Duis, J.; Miller, R.
2008-04-01
Ever-emerging applications in handheld multimedia devices such as mobile phones, laptop computers, portable video games and digital cameras requiring increased screen resolutions are driving higher aggregate bitrates between host processor and display(s), enabling services such as mobile video conferencing, video on demand and TV broadcasting. Larger displays and smaller phones require complex mechanical 3D hinge configurations striving to combine maximum functionality with compact building volumes. Conventional galvanic interconnections such as Micro-Coax and FPC carrying parallel digital data between host processor and display module may produce Electromagnetic Interference (EMI) and bandwidth limitations caused by small cable size and tight cable bends. To reduce the number of signals through a hinge, the mobile phone industry, organized in the MIPI (Mobile Industry Processor Interface) alliance, is currently defining an electrical interface transmitting serialized digital data at speeds >1Gbps. This interface allows for electrical or optical interconnects. Above 1Gbps, optical links may offer a cost-effective alternative because of their flexibility, increased bandwidth and immunity to EMI. This paper describes the development of optical links for handheld communication devices. A cable assembly based on a special Plastic Optical Fiber (POF) selected for its mechanical durability is terminated with a small form factor molded lens assembly which interfaces between an 850nm VCSEL transmitter and a receiving device on the printed circuit board of the display module. A statistical approach based on a Lean Design For Six Sigma (LDFSS) roadmap for new product development tries to find an optimum link definition that is robust and low cost while meeting the power-consumption requirements appropriate for battery-operated systems.
Augmenting reality in Direct View Optical (DVO) overlay applications
NASA Astrophysics Data System (ADS)
Hogan, Tim; Edwards, Tim
2014-06-01
The integration of overlay displays into rifle scopes can transform precision Direct View Optical (DVO) sights into intelligent interactive fire-control systems. Overlay displays can provide ballistic solutions within the sight for dramatically improved targeting, can fuse sensor video to extend targeting into nighttime or dirty battlefield conditions, and can overlay complex situational awareness information over the real-world scene. High brightness overlay solutions for dismounted soldier applications have previously been hindered by excessive power consumption, weight and bulk making them unsuitable for man-portable, battery powered applications. This paper describes the advancements and capabilities of a high brightness, ultra-low power text and graphics overlay display module developed specifically for integration into DVO weapon sight applications. Central to the overlay display module was the development of a new general purpose low power graphics controller and dual-path display driver electronics. The graphics controller interface is a simple 2-wire RS-232 serial interface compatible with existing weapon systems such as the IBEAM ballistic computer and the RULR and STORM laser rangefinders (LRF). The module features include multiple graphics layers, user configurable fonts and icons, and parameterized vector rendering, making it suitable for general purpose DVO overlay applications. The module is configured for graphics-only operation for daytime use and overlays graphics with video for nighttime applications. The miniature footprint and ultra-low power consumption of the module enables a new generation of intelligent DVO systems and has been implemented for resolutions from VGA to SXGA, in monochrome and color, and in graphics applications with and without sensor video.
POCIT portable optical communicators: VideoBeam and EtherBeam
NASA Astrophysics Data System (ADS)
Mecherle, G. Stephen; Holcomb, Terry L.
1999-12-01
LDSC is developing the POCIT™ (Portable Optical Communication Integrated Transceiver) family of products, which now includes VideoBeam™ and the latest addition, EtherBeam™. Each is a full-duplex portable laser communicator: VideoBeam™ providing near-broadcast-quality analog video and stereo audio, and EtherBeam™ providing standard Ethernet connectivity. Each POCIT™ transceiver consists of a 3.5-pound unit with a binocular-type form factor, which can be manually pointed, tripod-mounted or gyro-stabilized. Both units have an operational range of over two miles (clear air) with excellent jam-resistance and low probability of interception characteristics. The transmission wavelength of 1550 nm enables Class I eyesafe operation (ANSI, IEC). The POCIT™ units are ideally suited for numerous military scenarios, surveillance/espionage, industrial precious mineral exploration, and campus video teleconferencing applications.
1981 Image II Conference Proceedings.
1981-11-01
rapid motion of terrain detail across the display requires fast display processors. Other difficulties are perceptual: the visual displays must convey...has been a continuing effort by Vought in the last decade. Early systems were restricted by the unavailability of video bulk storage with fast random...each photograph. The calculations aided in the proper sequencing of the scanned scenes on the tape recorder and eventually facilitated fast random
Contrast Transmission In Medical Image Display
NASA Astrophysics Data System (ADS)
Pizer, Stephen M.; Zimmerman, John B.; Johnston, R. Eugene
1982-11-01
The display of medical images involves transforming recorded intensities such as CT numbers into perceivable intensities such as combinations of color and luminance. For the viewer to extract the most information about patterns of decreasing and increasing recorded intensity, the display designer must pay attention to three issues: 1) choice of display scale, including its discretization; 2) correction for variations in contrast sensitivity across the display scale due to the observer and the display device (producing an honest display); and 3) contrast enhancement based on the information in the recorded image and its importance, determined by viewing objectives. This paper will present concepts and approaches in all three of these areas. In choosing display scales three properties are important: sensitivity, associability, and naturalness of order. The unit of just noticeable difference (jnd) will be carefully defined. An observer experiment to measure the jnd values across a display scale will be specified. The overall sensitivity provided by a scale as measured in jnd's gives a measure of sensitivity called the perceived dynamic range (PDR). Methods for determining the PDR from the aforementioned jnd values, and PDRs for various grey and pseudocolor scales, will be presented. Methods of achieving sensitivity while retaining associability and naturalness of order with pseudocolor scales will be suggested. For any display device and scale it is useful to compensate for the device and observer by preceding the device with an intensity mapping (lookup table) chosen so that perceived intensity is linear with display-driving intensity. This mapping can be determined from the aforementioned jnd values. With a linearized display it is possible to standardize display devices so that the same image displayed on different devices or scales (e.g. video and hard copy) will be in some sense perceptually equivalent. 
Furthermore, with a linearized display, it is possible to design contrast enhancement mappings that optimize the transmission of information from the recorded image to the display-driving signal with the assurance that this information will not then be lost by a further nonlinear relation between display-driving and perceived intensity. It is suggested that optimal contrast enhancement mappings are adaptive to the local distribution of recorded intensities.
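The linearization step described above can be sketched as follows; the jnd measurements passed in are invented for illustration, where in practice they would come from the observer experiment the paper specifies:

```python
# Sketch: build a display-linearizing lookup table from jnd measurements.
# jnd_per_level[i] is the measured number of just-noticeable differences
# between driving levels i and i+1 (values would come from an observer
# experiment; anything passed here is illustrative).
def linearizing_lut(jnd_per_level):
    n = len(jnd_per_level) + 1            # number of driving levels
    # Perceived intensity = cumulative jnd count up to each level.
    perceived = [0.0]
    for j in jnd_per_level:
        perceived.append(perceived[-1] + j)
    pdr = perceived[-1]                    # perceived dynamic range, in jnd's
    # LUT: for each desired (perceptually linear) step, choose the driving
    # level whose cumulative jnd count is closest to the target.
    lut = []
    for i in range(n):
        target = pdr * i / (n - 1)
        lut.append(min(range(n), key=lambda v: abs(perceived[v] - target)))
    return lut, pdr
```

Preceding the display with this table makes perceived intensity approximately linear in the display-driving value, which is the precondition the passage above gives for designing contrast enhancement mappings safely.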
Recovery of Images from the AMOS ELSI Data for STS-33
1990-04-19
...were recorded on tape in both video and digital formats. The ELSI was used on three passes, orbits 21, 37, and 67, on 24, 25, and 27 November. These data...November, in video format, were hand-carried to Geophysics Laboratory (GL) at the beginning of December 1989; the classified data, in digital format, were...are also sampled and reconverted to analog form, in a standard video format, for display on a video monitor and recording on videotape. 3. TAPE FORMAT
Design Issues in Video Disc Map Display.
1984-10-01
such items as the equipment used by ETL in its work with discs and selected images from a disc. ... II. VIDEO DISC TECHNOLOGY AND VOCABULARY ...The term video refers to a television image. The standard home television set is equipped with a receiver, which is capable of picking up a signal...plays for one hour per side and is played at a constant linear velocity. The industrially-formatted disc has 54,000 frames per side in concentric tracks
12. NBS LOWER ROOM. BEHIND FAR GLASS WALL IS VIDEO ...
12. NBS LOWER ROOM. BEHIND FAR GLASS WALL IS VIDEO TAPE EQUIPMENT AND VOICE INTERCOM EQUIPMENT. THE MONITORS ABOVE GLASS WALL DISPLAY UNDERWATER TEST VIDEO TO CONTROL ROOM. FARTHEST CONSOLE ROW CONTAINS CAMERA SWITCHING, PANNING, TILTING, FOCUSING, AND ZOOMING. MIDDLE CONSOLE ROW CONTAINS TEST CONDUCTOR CONSOLES FOR MONITORING TEST ACTIVITIES AND DATA. THE CLOSEST CONSOLE ROW IS NBS FACILITY CONSOLES FOR TEST DIRECTOR, SAFETY AND QUALITY ASSURANCE REPRESENTATIVES. - Marshall Space Flight Center, Neutral Buoyancy Simulator Facility, Rideout Road, Huntsville, Madison County, AL
13. NBS LOWER ROOM. BEHIND FAR GLASS WALL IS VIDEO ...
13. NBS LOWER ROOM. BEHIND FAR GLASS WALL IS VIDEO TAPE EQUIPMENT AND VOICE INTERCOM EQUIPMENT. THE MONITORS ABOVE GLASS WALL DISPLAY UNDERWATER TEST VIDEO TO CONTROL ROOM. FARTHEST CONSOLE ROW CONTAINS CAMERA SWITCHING, PANNING, TILTING, FOCUSING, AND ZOOMING. MIDDLE CONSOLE ROW CONTAINS TEST CONDUCTOR CONSOLES FOR MONITORING TEST ACTIVITIES AND DATA. THE CLOSEST CONSOLE ROW IS NBS FACILITY CONSOLES FOR TEST DIRECTOR, SAFETY AND QUALITY ASSURANCE REPRESENTATIVES. - Marshall Space Flight Center, Neutral Buoyancy Simulator Facility, Rideout Road, Huntsville, Madison County, AL
Effects Of Frame Rates In Video Displays
NASA Technical Reports Server (NTRS)
Kellogg, Gary V.; Wagner, Charles A.
1991-01-01
Report describes an experiment on the subjective effects of the rates at which the display on a cathode-ray tube in a flight simulator is updated and refreshed. The experiment was conducted to learn more about the jumping, blurring, flickering, and multiple lines that an observer perceives when a line moves at high speed across the screen of a calligraphic CRT.
Using Videos and 3D Animations for Conceptual Learning in Basic Computer Units
ERIC Educational Resources Information Center
Cakiroglu, Unal; Yilmaz, Huseyin
2017-01-01
This article draws on a one-semester study to investigate the effect of videos and 3D animations on students' conceptual understandings about basic computer units. A quasi-experimental design was carried out in two classrooms; videos and 3D animations were used in classroom activities in one group and those were used for homework in the other…
Internet Protocol Display Sharing Solution for Mission Control Center Video System
NASA Technical Reports Server (NTRS)
Brown, Michael A.
2009-01-01
With the advent of broadcast television as a constant source of information throughout the NASA manned space flight Mission Control Center (MCC) at the Johnson Space Center (JSC), the current Video Transport System (VTS) provides the ability to visually enhance real-time applications as a broadcast channel that decision-making flight controllers have come to rely on, but it can be difficult to maintain and costly. The Operations Technology Facility (OTF) of the Mission Operations Facility Division (MOFD) has been tasked to provide insight into new innovative technological solutions for the MCC environment, focusing on alternative architectures for a VTS. New technology will be provided to enable sharing of all imagery from one specific computer display, better known as Display Sharing (DS), to other computer displays and display systems such as large projector systems, flight control rooms, and back supporting rooms throughout the facilities and other offsite centers using IP networks. It has been stated that Internet Protocol (IP) applications are ready substitutes for the current visual architecture, but quality and speed may need to be forfeited to reduce cost and improve maintainability. Although the IP infrastructure can support many technologies, the simple task of sharing one's computer display can be rather clumsy and difficult to configure and manage across the many operators and products. 
The DS process shall collectively automate the sharing of images while addressing such characteristics as bandwidth management, encryption, synchronized disconnection on loss of signal or loss of acquisition, and latency. It shall also provide functions such as scalability, multi-sharing, ease of initial integration and sustained configuration, integration with video adjustment packages, collaborative tools, and host/recipient controllability, with the utmost priority being an enterprise solution that provides ownership of the whole process while maintaining the integrity of the latest display devices. This study will provide insights into the many possibilities that can be filtered down to a harmoniously responsive product for today's MCC environment.
NASA Astrophysics Data System (ADS)
Newman, R. L.
2002-12-01
How many images can you display at one time with Power Point without getting "postage stamps"? Do you have fantastic datasets that you cannot view because your computer is too slow/small? Do you assume a few 2-D images of a 3-D picture are sufficient? High-end visualization centers can minimize and often eliminate these problems. The new visualization center [http://siovizcenter.ucsd.edu] at Scripps Institution of Oceanography [SIO] immerses users into a virtual world by projecting 3-D images onto a Panoram GVR-120E wall-sized floor-to-ceiling curved screen [7' x 23'] that has 3.2 mega-pixels of resolution. The Infinite Reality graphics subsystem is driven by a single-pipe SGI Onyx 3400 with a system bandwidth of 44 Gbps. The Onyx is powered by 16 MIPS R12K processors and 16 GB of addressable memory. The system is also equipped with transmitters and LCD shutter glasses which permit stereographic 3-D viewing of high-resolution images. This center is ideal for groups of up to 60 people who can simultaneously view these large-format images. A wide range of hardware and software is available, giving the users a totally immersive working environment in which to display, analyze, and discuss large datasets. The system enables simultaneous display of video and audio streams from sources such as SGI megadesktop and stereo megadesktop, S-VHS video, DVD video, and video from a Macintosh or PC. For instance, one-third of the screen might be displaying S-VHS video from a remotely-operated-vehicle [ROV], while the remaining portion of the screen might be used for an interactive 3-D flight over the same parcel of seafloor. The video and audio combinations using this system are numerous, allowing users to combine and explore data and images in innovative ways, greatly enhancing scientists' ability to visualize, understand and collaborate on complex datasets. 
In the not-distant future, with the rapid growth in networking speeds in the US, it will be possible for Earth Sciences Departments to collaborate effectively while limiting the amount of physical travel required. This includes porting visualization content to the popular, low-cost Geowall visualization systems, and providing web-based access to databanks filled with stock geoscience visualizations.
Optimization of the polyplanar optical display electronics for a monochrome B-52 display
DOE Office of Scientific and Technical Information (OSTI.GOV)
DeSanto, L.
The Polyplanar Optical Display (POD) is a unique display screen which can be used with any projection source. The prototype ten-inch display is two inches thick and has a matte black face which allows for high contrast images. The prototype being developed is a form, fit and functional replacement display for the B-52 aircraft, which uses a monochrome ten-inch display. In order to achieve a long lifetime, the new display uses a new 200 mW green solid-state laser (10,000 hr life) at 532 nm as its light source. To produce real-time video, the laser light is modulated by a Digital Light Processing (DLP™) chip manufactured by Texas Instruments (TI). In order to use the solid-state laser as the light source and also fit within the constraints of the B-52 display, the Digital Micromirror Device (DMD™) chip is operated remotely from the Texas Instruments circuit board. In order to achieve increased brightness, a monochrome digitizing interface was investigated. The operation of the DMD™ divorced from the light engine and the interfacing of the DMD™ board with the RS-170 video format specific to the B-52 aircraft will be discussed, including the increased brightness of the monochrome digitizing interface. A brief description of the electronics required to drive the new 200 mW laser is also presented.
Polnau, D G; Ma, P M
2001-12-01
Neuroethology seeks to uncover the neural mechanisms underlying natural behaviour. One of the major challenges in this field is the need to directly correlate neural activity and behavioural output. In most cases, recording of neural activity in freely moving animals is extremely difficult. However, electromyographic recording can often be used in lieu of neural recording to gain an understanding of the motor output program underlying a well-defined behaviour. Electromyographic recording is less invasive than most other recording methods, and does not impede the performance of most natural tasks. Using the opercular display of the Siamese fighting fish as a model, we developed a protocol for directly correlating electromyographic activity and the kinematics of opercular movement: electromyographic activity was recorded in the audio channel of a video cassette recorder while videotaping the display behaviour. By combining computer-assisted quantitative video analysis and spike analysis, the kinematics of opercular movement are linked to the motor output program. Since the muscle that mediates opercular abduction in this fish, the dilator operculi, is a relatively small muscle with several subdivisions, we also describe methods for recording from small muscles and marking the precise recording site with electrolytic corrosion. The protocol described here is applicable to studies of a variety of natural behaviours that can be performed in a relatively confined space. It is also useful for analyzing complex or rapidly changing behaviour in which a precise correlation between kinematics and electromyography is required.
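At analysis time, the audio-channel synchronization described above reduces to mapping EMG spike timestamps onto video frame indices. A minimal sketch in Python; the frame rate and spike times are illustrative assumptions, not values from the study:

```python
VIDEO_FPS = 30.0  # assumed NTSC-class frame rate, not stated in the study

def spikes_to_frames(spike_times_s, fps=VIDEO_FPS):
    """Map each EMG spike timestamp (seconds) to the video frame it falls in."""
    return [int(t * fps) for t in spike_times_s]

def spikes_per_frame(spike_times_s, n_frames, fps=VIDEO_FPS):
    """Spike count per frame, ready to overlay on the kinematic trace."""
    counts = [0] * n_frames
    for f in spikes_to_frames(spike_times_s, fps):
        if 0 <= f < n_frames:
            counts[f] += 1
    return counts

spikes = [0.01, 0.034, 0.51, 0.52, 0.53]   # hypothetical spike times (s)
print(spikes_to_frames(spikes))            # → [0, 1, 15, 15, 15]
```

Each kinematic measurement (e.g. opercular angle per frame) can then be plotted against the per-frame spike counts to link motor output to movement.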
Design and development of a new facility for teaching and research in clinical anatomy.
Greene, John Richard T
2009-01-01
This article discusses factors in the design, commissioning, project management, and intellectual property protection of developments within a new clinical anatomy facility in the United Kingdom. The project was aimed at creating cost-effective facilities that would address widespread concerns over anatomy teaching, and support other activities central to the university mission, namely research and community interaction. The new facilities comprise an engaging learning environment and were designed to support a range of pedagogies appropriate to the needs of healthcare professionals at different stages of their careers. Specific innovations include integrated workstations, each comprising a dissection table with removable top sections, an overhead operating light, and a ceiling-mounted camera. The tables incorporate waterproof touch-screen monitors to display images from the camera, an endoscope, or a database of images, videos, and tutorials. The screens work independently so that instructors can run different teaching sessions simultaneously and students can progress at different speeds to suit themselves. Further, database access is provided from within an integrated anatomy and pathology museum and display units dedicated to the correlation of cross-sectional anatomy with medical imaging. A new functional neuroanatomy modeling system, called the BrainTower, has been developed to aid integration of anatomy with physiology and clinical neurology. Many aspects of the new facility are reproduced within a Mobile Teaching Unit, which can be driven to hospitals, colleges, and schools to provide appropriate work-based education and community interaction. (c) 2009 American Association of Anatomists
ERIC Educational Resources Information Center
Norling, Martina; Lillvist, Anne
2016-01-01
This study investigates language-promoting strategies and support of concept development displayed by preschool staff when interacting with preschool children in literacy-related play activities. The data analysed consisted of 39 minutes of video, selected systematically from a total of 11 hours of video material from six Swedish preschool…
Video-Out Projection and Lecture Hall Set-Up. Microcomputing Working Paper Series.
ERIC Educational Resources Information Center
Gibson, Chris
This paper details the considerations involved in determining suitable video projection systems for displaying the Apple Macintosh's screen to large groups of people, both in classrooms with approximately 25 people, and in lecture halls with approximately 250. To project the Mac screen to groups in lecture halls, the Electrohome EDP-57 video…
Float Package and the Data Rack aboard the DC-9
NASA Technical Reports Server (NTRS)
1996-01-01
Ted Brunzie and Peter Mason observe the float package and the data rack aboard the DC-9 reduced gravity aircraft. The float package contains a cryostat, a video camera, a pump, and accelerometers. The data rack displays and records the video signal from the float package on tape and stores acceleration and temperature measurements on disk.
VENI, video, VICI: The merging of computer and video technologies
NASA Technical Reports Server (NTRS)
Horowitz, Jay G.
1993-01-01
The topics covered include the following: High Definition Television (HDTV) milestones; visual information bandwidth; television frequency allocation and bandwidth; horizontal scanning; workstation RGB color domain; NTSC color domain; American HDTV time-table; HDTV image size; digital HDTV hierarchy; task force on digital image architecture; open architecture model; future displays; and the ULTIMATE imaging system.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-12-21
... INTERNATIONAL TRADE COMMISSION [Investigation No. 337-TA-852] Certain Video Analytics Software..., 2012, based on a complaint filed by ObjectVideo, Inc. (``ObjectVideo'') of Reston, Virginia. 77 FR... United States after importation of certain video analytics software systems, components thereof, and...
Hoang, Danthanh; Khawar, Nayaab; George, Maria; Gad, Ashraf; Sy, Farrah; Narula, Pramod
2018-04-01
To increase the hand-washing (HW) duration of staff and visitors in the NICU to a minimum of 20 seconds, as recommended by the Centers for Disease Control and Prevention (CDC). The intervention included a didactic video, triggered by a motion sensor, that played above the wash basin. The video enacted the CDC HW technique in real time and displayed a 20-second timer. HW was reviewed from surveillance video. Swabs of hands were plated and observed for qualitative growth (QG) of bacterial colonies. In visitors, the mean HW duration at baseline was 16.3 seconds and increased to 23.4 seconds at the 2-week interval (p = .003) and 22.9 seconds at the 9-month interval (p < .0005). In staff, the mean HW duration at baseline was 18.4 seconds and increased to 29.0 seconds at the 2-week interval (p = .001) and 25.7 seconds at the 9-month interval (p < .0005). In visitors, HW compliance at baseline was 33% and increased to 52% at the 2-week interval (p = .076) and 69% at the 9-month interval (p = .001). In staff, HW compliance at baseline was 42% and increased to 64% at the 2-week interval (p = .025) and 72% at the 9-month interval (p = .001). Increasing HW was significantly associated with a linear decrease in bacterial QG. The intervention significantly increased mean HW time and compliance with the 20-second wash time, and decreased bacterial QG of hands; these results were sustained over a 9-month period. © 2018 American Society for Healthcare Risk Management of the American Hospital Association.
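The study's two outcome measures (mean wash duration and compliance with the 20-second minimum) can be computed from surveillance-video timings roughly as follows; the durations below are invented for illustration:

```python
MIN_WASH_S = 20.0  # CDC-recommended minimum hand-washing duration

def hw_metrics(durations_s):
    """Mean duration and fraction of washes meeting the 20-second minimum."""
    mean = sum(durations_s) / len(durations_s)
    compliance = sum(d >= MIN_WASH_S for d in durations_s) / len(durations_s)
    return mean, compliance

durations = [12.0, 25.0, 31.0, 18.0]   # invented wash durations (seconds)
mean, compliance = hw_metrics(durations)
print(round(mean, 1), compliance)      # → 21.5 0.5
```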
Study to Expand Simulation Cockpit Displays of Advanced Sensors
1981-03-01
common source is being used for multiple sensor types). If independent displays and controls are desired then two independent video sources or sensor...line is inserted in each gap, the result is the familiar 2:1 interlace. If two lines are inserted, the result is 3:1 interlace, and so on. The total...symbol generators. If these systems are operating at various scan rates and if a common display device, such as a multifunction display (MFD) is to
NASA Astrophysics Data System (ADS)
Kim, Kyung-Su; Lee, Hae-Yeoun; Im, Dong-Hyuck; Lee, Heung-Kyu
Commercial markets employ digital rights management (DRM) systems to protect valuable high-definition (HD) quality videos. DRM systems use watermarking to provide copyright protection and ownership authentication of multimedia contents. We propose a real-time video watermarking scheme for HD video in the uncompressed domain. In particular, our approach takes a practical perspective, addressing perceptual quality, real-time processing, and robustness requirements. We simplify and optimize a human visual system mask for real-time performance and also apply a dithering technique for invisibility. Extensive experiments are performed to prove that the proposed scheme satisfies the invisibility, real-time processing, and robustness requirements against video processing attacks. We concentrate upon video processing attacks that commonly occur when HD quality videos are displayed on portable devices. These attacks include not only scaling and low bit-rate encoding, but also malicious attacks such as format conversion and frame rate change.
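As a rough illustration of uncompressed-domain watermarking in general (not the authors' actual scheme, which uses an optimized human visual system mask and dithering), a basic additive spread-spectrum embed/detect pair might look like this; the key, block length, and strength `alpha` are arbitrary assumptions:

```python
import random

def pn_pattern(key, n):
    """Key-seeded pseudo-random +/-1 pattern (the watermark)."""
    rng = random.Random(key)
    return [rng.choice((-1.0, 1.0)) for _ in range(n)]

def embed(luma, key, alpha=2.0):
    """Add the scaled pattern to the luminance samples."""
    w = pn_pattern(key, len(luma))
    return [p + alpha * wi for p, wi in zip(luma, w)]

def detect(luma, key):
    """Correlate mean-removed samples with the pattern; ~alpha if marked."""
    w = pn_pattern(key, len(luma))
    mu = sum(luma) / len(luma)
    return sum((p - mu) * wi for p, wi in zip(luma, w)) / len(luma)

rng = random.Random(0)
frame = [rng.gauss(128.0, 10.0) for _ in range(4096)]  # fake luminance block
marked = embed(frame, key=42)
# correlation is close to alpha with the right key, close to zero otherwise
print(detect(marked, key=42) > 1.0, abs(detect(frame, key=42)) < 1.0)
```

A real HD pipeline would apply a perceptual mask to `alpha` per pixel and run per frame in real time, which is exactly the engineering problem the paper addresses.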
Video PATSEARCH: A Mixed-Media System.
ERIC Educational Resources Information Center
Schulman, Jacque-Lynne
1982-01-01
Describes a videodisc-based information display system in which a computer terminal is used to search the online PATSEARCH database from a remote host with local microcomputer control to select and display drawings from the retrieved records. System features and system components are discussed and criteria for system evaluation are presented.…
Software Aids Visualization Of Mars Pathfinder Mission
NASA Technical Reports Server (NTRS)
Weidner, Richard J.
1996-01-01
Report describes Simulator for Imager for Mars Pathfinder (SIMP) computer program. SIMP generates "virtual reality" display of view through video camera on Mars lander spacecraft of Mars Pathfinder mission, along with display of pertinent textual and graphical data, for use by scientific investigators in planning sequences of activities for mission.
Learned saliency transformations for gaze guidance
NASA Astrophysics Data System (ADS)
Vig, Eleonora; Dorr, Michael; Barth, Erhardt
2011-03-01
The saliency of an image or video region indicates how likely it is that the viewer of the image or video fixates that region due to its conspicuity. An intriguing question is how we can change the video region to make it more or less salient. Here, we address this problem by using a machine learning framework to learn from a large set of eye movements collected on real-world dynamic scenes how to alter the saliency level of the video locally. We derive saliency transformation rules by performing spatio-temporal contrast manipulations (on a spatio-temporal Laplacian pyramid) on the particular video region. Our goal is to improve visual communication by designing gaze-contingent interactive displays that change, in real time, the saliency distribution of the scene.
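The contrast-manipulation idea can be sketched in one dimension: split a signal into a smoothed component and a detail (Laplacian) band, rescale the band over a region, and recombine. This toy Python version uses an assumed 3-tap kernel and gain, not the authors' spatio-temporal pyramid:

```python
def smooth(x):
    """3-tap binomial lowpass with edge clamping."""
    n = len(x)
    return [(x[max(i - 1, 0)] + 2 * x[i] + x[min(i + 1, n - 1)]) / 4
            for i in range(n)]

def saliency_scale(x, gain, lo, hi):
    """Rescale the detail band over x[lo:hi], then recombine."""
    low = smooth(x)
    band = [xi - li for xi, li in zip(x, low)]   # Laplacian (detail) band
    for i in range(lo, hi):
        band[i] *= gain                          # gain > 1 raises conspicuity
    return [li + bi for li, bi in zip(low, band)]

sig = [0, 0, 4, 0, 0, 4, 0, 0]
print(saliency_scale(sig, 1.0, 0, 8) == [float(v) for v in sig])  # gain 1 is identity
print(saliency_scale(sig, 2.0, 4, 8))  # → [0.0, 0.0, 4.0, 0.0, -1.0, 6.0, -1.0, 0.0]
```

The second blip's local contrast is doubled while the first is untouched, which is the local saliency raising/lowering the display would perform gaze-contingently.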
Video stereo-laparoscopy system
NASA Astrophysics Data System (ADS)
Xiang, Yang; Hu, Jiasheng; Jiang, Huilin
2006-01-01
Minimally invasive surgery (MIS) has contributed significantly to patient care by reducing the morbidity associated with more invasive procedures. MIS procedures have become standard treatment for gallbladder disease and some abdominal malignancies. The imaging system has played a major role in the evolving field of MIS. The image needs good resolution and large magnification and, especially, must provide a depth cue while remaining flicker-free and of suitable brightness. The video stereo-laparoscopy system can meet these demands of the doctors. This paper introduces a 3-D video laparoscope with the following characteristics: field frequency, 100 Hz; depth space, 150 mm; resolution, 10 lp/mm. The working principle of the system is introduced in detail, and the optical system and time-division stereo-display system are described briefly. The system's focusing lens forms an image on the CCD chip; the optical signal is converted into a video signal, digitized by the A/D converter of the image-processing system, and the polarized images are displayed on the monitor screen through liquid-crystal shutters. Wearing polarized glasses, doctors can watch a flicker-free 3-D image of the tissue or organ. The 3-D video laparoscope system has been applied in the MIS field and praised by doctors. Compared with the traditional 2-D video laparoscopy system, it has merits such as reducing the time of surgery, surgical complications, and training time.
Autonomous spacecraft rendezvous and docking
NASA Technical Reports Server (NTRS)
Tietz, J. C.; Almand, B. J.
1985-01-01
A storyboard display is presented which summarizes work done recently in design and simulation of autonomous video rendezvous and docking systems for spacecraft. This display includes: photographs of the simulation hardware, plots of chase vehicle trajectories from simulations, pictures of the docking aid including image processing interpretations, and drawings of the control system strategy. Viewgraph-style sheets on the display bulletin board summarize the simulation objectives, benefits, special considerations, approach, and results.
Military display performance parameters
NASA Astrophysics Data System (ADS)
Desjardins, Daniel D.; Meyer, Frederick
2012-06-01
The military display market is analyzed in terms of four of its segments: avionics, vetronics, dismounted soldier, and command and control. Requirements are summarized for a number of technology-driving parameters, to include luminance, night vision imaging system compatibility, gray levels, resolution, dimming range, viewing angle, video capability, altitude, temperature, shock and vibration, etc., for direct-view and virtual-view displays in cockpits and crew stations. Technical specifications are discussed for selected programs.
Janosik, Elzbieta; Grzesik, Jan
2003-01-01
The aim of this work was to evaluate the influence of different lighting levels at workstations with video display terminals (VDTs) on the course of the operators' visual work, and to determine the optimal levels of lighting at VDT workstations. For two kinds of job (entry of figures from a typescript and editing of text displayed on the screen), the work capacity, the degree of visual strain and the operators' subjective symptoms were determined for four lighting levels (200, 300, 500 and 750 lx). It was found that work at VDT workstations may overload the visual system and cause eye complaints as well as a reduction in accommodation or convergence strength. It was also noted that editing text displayed on the screen is more burdensome for operators than entering figures from a typescript. Moreover, the examination results showed that the lighting at VDT workstations should be higher than 200 lx, and that 300 lx makes the work conditions most comfortable during the entry of figures from a typescript, and 500 lx during the editing of text displayed on the screen.
A teleconference with three-dimensional surgical video presentation on the 'usual' Internet.
Obuchi, Toshiro; Moroga, Toshihiko; Nakamura, Hiroshige; Shima, Hiroji; Iwasaki, Akinori
2015-03-01
Endoscopic surgery employing three-dimensional (3D) video images, such as a robotic surgery, has recently become common. However, the number of opportunities to watch such actual 3D videos is still limited due to many technical difficulties associated with showing 3D videos in front of an audience. A teleconference with 3D video presentations of robotic surgeries was held between our institution and a distant institution using a commercially available telecommunication appliance on the 'usual' Internet. Although purpose-built video displays and 3D glasses were necessary, no technical problems occurred during the presentation and discussion. This high-definition 3D telecommunication system can be applied to discussions about and education on 3D endoscopic surgeries for many surgeons, even in distant places, without difficulty over the usual Internet connection.
Simple video format for mobile applications
NASA Astrophysics Data System (ADS)
Smith, John R.; Miao, Zhourong; Li, Chung-Sheng
2000-04-01
With the advent of pervasive computing, there is a growing demand for enabling multimedia applications on mobile devices. Large numbers of pervasive computing devices, such as personal digital assistants (PDAs), hand-held computers (HHCs), smart phones, portable audio players, automotive computing devices, and wearable computers are gaining access to online information sources. However, pervasive computing devices are often constrained along a number of dimensions, such as processing power, local storage, display size and depth, connectivity, and communication bandwidth, which makes it difficult to access rich image and video content. In this paper, we report on our initial efforts in designing a simple scalable video format with low decoding and transcoding complexity for pervasive computing. The goal is to enable image and video access for mobile applications such as electronic catalog shopping, video conferencing, remote surveillance, and video mail using pervasive computing devices.
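One way such a low-complexity scalable format could work, sketched here as an assumption rather than the paper's actual design, is a two-layer coding of each scan line: a constrained device decodes only a half-resolution base layer, while a capable device adds a residual to recover the line exactly:

```python
def encode(line):
    """Split one scan line into a half-resolution base layer and a residual."""
    base = [(line[i] + line[i + 1]) / 2 for i in range(0, len(line), 2)]
    predicted = [b for b in base for _ in (0, 1)]   # nearest-neighbour upsample
    residual = [p - q for p, q in zip(line, predicted)]
    return base, residual

def decode_full(base, residual):
    """Full-quality decode: upsample the base layer and add the residual."""
    predicted = [b for b in base for _ in (0, 1)]
    return [q + r for q, r in zip(predicted, residual)]

line = [10, 12, 200, 180, 60, 64]                  # made-up pixel values
base, residual = encode(line)
print(base)                                        # → [11.0, 190.0, 62.0]
print(decode_full(base, residual) == [float(v) for v in line])  # lossless
```

Transcoding for a small display is then trivial: drop the residual and ship only `base`, with no re-encoding pass.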
NASA Technical Reports Server (NTRS)
2003-01-01
KENNEDY SPACE CENTER, FLA. -- Students display an experiment that will fly in SPACEHAB on Space Shuttle Columbia on mission STS-107. SPACEHAB's complement of commercial experiments includes six educational experiments designed and developed by students in six different countries under the auspices of Space Technology and Research Students (STARS), a global education program managed by SPACEHAB subsidiary Space Media. The countries represented are Australia, China, Israel, Japan, Liechtenstein and the United States. The student investigators who conceived these experiments will monitor their operations in space. The experiments will be housed in BioServe Space Technologies' Isothermal Containment Module (ICM, a small temperature-controlled facility that provides experiment support such as physical containment, lighting, and video imaging) and stowed in a middeck-size locker aboard the SPACEHAB Research Double Module.
NASA Technical Reports Server (NTRS)
2003-01-01
KENNEDY SPACE CENTER, FLA. -- A student displays an experiment that will fly in SPACEHAB on Space Shuttle Columbia on mission STS-107. SPACEHAB's complement of commercial experiments includes six educational experiments designed and developed by students in six different countries under the auspices of Space Technology and Research Students (STARS), a global education program managed by SPACEHAB subsidiary Space Media. The countries represented are Australia, China, Israel, Japan, Liechtenstein and the United States. The student investigators who conceived these experiments will monitor their operations in space. The experiments will be housed in BioServe Space Technologies' Isothermal Containment Module (ICM, a small temperature-controlled facility that provides experiment support such as physical containment, lighting, and video imaging) and stowed in a middeck-size locker aboard the SPACEHAB Research Double Module.
NASA Astrophysics Data System (ADS)
Schlam, E.
1983-01-01
Human factors in visible displays are discussed, taking into account an introduction to color vision, a laser optometric assessment of visual display viewability, the quantification of color contrast, human performance evaluations of digital image quality, visual problems of office video display terminals, and contemporary problems in airborne displays. Other topics considered are related to electroluminescent technology, liquid crystal and related technologies, plasma technology, and display terminals and systems. Attention is given to the application of electroluminescent technology to personal computers, electroluminescent driving techniques, thin-film electroluminescent devices with memory, the fabrication of very large electroluminescent displays, the operating properties of thermally addressed dye-switching liquid crystal displays, light-field dichroic liquid crystal displays for very large area displays, and hardening military plasma displays for a nuclear environment.
Backscatter absorption gas imaging system
McRae, Jr., Thomas G.
1985-01-01
A video imaging system for detecting hazardous gas leaks. Visual displays of invisible gas clouds are produced by radiation augmentation of the field of view of an imaging device by radiation corresponding to an absorption line of the gas to be detected. The field of view of an imager is irradiated by a laser. The imager receives both backscattered laser light and background radiation. When a detectable gas is present, the backscattered laser light is highly attenuated, producing a region of contrast or shadow on the image. A flying spot imaging system is utilized to synchronously irradiate and scan the area to lower laser power requirements. The imager signal is processed to produce a video display.
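The shadow-contrast principle can be illustrated with a toy calculation: pixels whose backscattered laser return drops well below the gas-free background level are flagged as lying behind the absorbing cloud. The threshold is an arbitrary assumption, not a value from the patent:

```python
def gas_mask(image, background, threshold=0.5):
    """Flag pixels whose laser return drops below `threshold` of gas-free level."""
    return [[pix / bg < threshold for pix, bg in zip(img_row, bg_row)]
            for img_row, bg_row in zip(image, background)]

background = [[100.0, 100.0], [100.0, 100.0]]  # gas-free backscatter levels
image      = [[ 95.0,  20.0], [ 98.0,  30.0]]  # frame with a cloud on the right
print(gas_mask(image, background))             # → [[False, True], [False, True]]
```

In the actual system this contrast appears directly in the video display; a mask like this would only be a post-processing overlay.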
DC-8 Scanning Lidar Characterization of Aircraft Contrails and Cirrus Clouds
NASA Technical Reports Server (NTRS)
Uthe, Edward E.; Nielsen, Norman B.; Oseberg, Terje E.
1998-01-01
An angular-scanning large-aperture (36 cm) backscatter lidar was developed and deployed on the NASA DC-8 research aircraft as part of the SUCCESS (Subsonic Aircraft: Contrail and Cloud Effects Special Study) program. The lidar viewing direction could be scanned continuously during aircraft flight from vertically upward to forward to vertically downward, or the viewing could be at fixed angles. Real-time pictorial displays generated from the lidar signatures were broadcast on the DC-8 video network and used to locate clouds and contrails above, ahead of, and below the DC-8 to depict their spatial structure and to help select DC-8 altitudes for achieving optimum sampling by onboard in situ sensors. Several lidar receiver systems and real-time data displays were evaluated to help extend in situ data into vertical dimensions and to help establish possible lidar configurations and applications on future missions. Digital lidar signatures were recorded on 8 mm Exabyte tape, and the generated real-time displays were recorded on 8 mm video tape. The digital records were transcribed in a common format to compact disks to facilitate data analysis and delivery to SUCCESS participants. Data selected from the real-time display video recordings were processed for publication-quality displays incorporating several standard lidar data corrections. Data examples are presented that illustrate: (1) correlation with particulate, gas, and radiometric measurements made by onboard sensors; (2) discrimination and identification between contrails observed by onboard sensors; (3) a high-altitude (13 km) scattering layer that exhibits greatly enhanced vertical backscatter relative to off-vertical backscatter; and (4) mapping of vertical distributions of individual precipitating ice crystals and their capture by cloud layers. An angular scan plotting program was developed that accounts for DC-8 pitch and velocity.
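One of the standard lidar data corrections mentioned above is range-squared correction, shown here as a generic sketch (not necessarily the processing used on this mission): the raw return is scaled by r² so distant layers are not under-displayed. The range-gate spacing is an assumed value:

```python
BIN_M = 30.0  # range-gate spacing in metres (an assumed value)

def range_corrected(signal, bin_m=BIN_M):
    """P(r) * r^2 for each range bin (bin i centred at (i + 1) * bin_m)."""
    return [p * ((i + 1) * bin_m) ** 2 for i, p in enumerate(signal)]

raw = [9.0, 2.0, 1.0]              # invented raw returns, near to far
print(range_corrected(raw))        # → [8100.0, 7200.0, 8100.0]
```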
Emotional Processing of Infants Displays in Eating Disorders
Cardi, Valentina; Corfield, Freya; Leppanen, Jenni; Rhind, Charlotte; Deriziotis, Stephanie; Hadjimichalis, Alexandra; Hibbs, Rebecca; Micali, Nadia; Treasure, Janet
2014-01-01
Aim: The aim of this study is to examine emotional processing of infant displays in people with Eating Disorders (EDs). Background: Social and emotional factors are implicated as causal and maintaining factors in EDs. Difficulties in emotional regulation have been mainly studied in relation to adult interactions, with less interest given to interactions with infants. Method: A sample of 138 women were recruited, of which 49 suffered from Anorexia Nervosa (AN), 16 from Bulimia Nervosa (BN), and 73 were healthy controls (HCs). Attentional responses to happy and sad infant faces were tested with the visual probe detection task. Emotional identification of, and reactivity to, infant displays were measured using self-report measures. Facial expressions to video clips depicting sad, happy and frustrated infants were also recorded. Results: No significant differences between groups were observed in the attentional response to infant photographs. However, there was a trend for patients to disengage from happy faces. People with EDs also reported lower positive ratings of happy infant displays and greater subjective negative reactions to sad infants. Finally, patients showed a significantly lower production of facial expressions, especially in response to the happy infant video clip. Insecure attachment was negatively correlated with positive facial expressions displayed in response to the happy infant and positively correlated with the intensity of negative emotions experienced in response to the sad infant video clip. Conclusion: People with EDs do not have marked abnormalities in their attentional processing of infant emotional faces. However, they do have a reduction in facial affect, particularly in response to happy infants. Also, they report greater negative reactions to sadness, and rate positive emotions less intensively than HCs. This pattern of emotional responsivity suggests abnormalities in social reward sensitivity and might indicate new treatment targets.
PMID:25463051
ERIC Educational Resources Information Center
Carrein, Cindy; Bernaud, Jean-Luc
2010-01-01
This study investigated the effects of nonverbal self-disclosure within the dynamic of aptitude-treatment interaction. Participants (N = 94) watched a video of a career counseling session aimed at helping the jobseeker to find employment. The video was then edited to display 3 varying degrees of nonverbal self-disclosure. In conjunction with the…
VID-R and SCAN: Tools and Methods for the Automated Analysis of Visual Records.
ERIC Educational Resources Information Center
Ekman, Paul; And Others
The VID-R (Visual Information Display and Retrieval) system that enables computer-aided analysis of visual records is composed of a film-to-television chain, two videotape recorders with complete remote control of functions, a video-disc recorder, three high-resolution television monitors, a teletype, a PDP-8, a video and audio interface, three…
Glass Vision 3D: Digital Discovery for the Deaf
ERIC Educational Resources Information Center
Parton, Becky Sue
2017-01-01
Glass Vision 3D was a grant-funded project focused on developing and researching a Google Glass app that would allow young Deaf children to look at the QR code of an object in the classroom and see an augmented reality projection that displays a related American Sign Language (ASL) video. Twenty-five objects and videos were prepared and tested…
Otto, Kristen J; Hapner, Edie R; Baker, Michael; Johns, Michael M
2006-02-01
Advances in commercial video technology have improved office-based laryngeal imaging. This study investigates the perceived image quality of a true high-definition (HD) video camera and the effect of magnification on laryngeal videostroboscopy. We performed a prospective, dual-armed, single-blinded analysis of a standard laryngeal videostroboscopic examination comparing 3 separate add-on camera systems: a 1-chip charge-coupled device (CCD) camera, a 3-chip CCD camera, and a true 720p (progressive scan) HD camera. Displayed images were controlled for magnification and image size (20-inch [50-cm] display, red-green-blue, and S-video cable for 1-chip and 3-chip cameras; digital visual interface cable and HD monitor for HD camera). Ten blinded observers were then asked to rate the following 5 items on a 0-to-100 visual analog scale: resolution, color, ability to see vocal fold vibration, sense of depth perception, and clarity of blood vessels. Eight unblinded observers were then asked to rate the difference in perceived resolution and clarity of laryngeal examination images when displayed on a 10-inch (25-cm) monitor versus a 42-inch (105-cm) monitor. A visual analog scale was used. These monitors were controlled for actual resolution capacity. For each item evaluated, randomized block design analysis demonstrated that the 3-chip camera scored significantly better than the 1-chip camera (p < .05). For the categories of color and blood vessel discrimination, the 3-chip camera scored significantly better than the HD camera (p < .05). For magnification alone, observers rated the 42-inch monitor statistically better than the 10-inch monitor. The expense of new medical technology must be judged against its added value. This study suggests that HD laryngeal imaging may not add significant value over currently available video systems, in perceived image quality, when a small monitor is used. 
Although differences in clarity between standard and HD cameras may not be readily apparent on small displays, a large display size coupled with HD technology may impart improved diagnosis of subtle vocal fold lesions and vibratory anomalies.
Contour Detector and Data Acquisition System for the Left Ventricular Outline
NASA Technical Reports Server (NTRS)
Reiber, J. H. C. (Inventor)
1978-01-01
A real-time contour detector and data acquisition system is described for an angiographic apparatus having a video scanner for converting an X-ray image of a structure characterized by a change in brightness level compared with its surroundings into video format and displaying the X-ray image in recurring video fields. The real-time contour detector and data acquisition system includes track-and-hold circuits; a reference-level analog computer circuit; an analog comparator; a digital processor; a field memory; and a computer interface.
The Video Display Terminal Health Hazard Debate.
ERIC Educational Resources Information Center
Clark, Carolyn A.
A study was conducted to identify the potential health hazards of visual display terminals for employees and then to develop a list of recommendations for improving the physical conditions of the workplace. Data were collected by questionnaires from 55 employees in 10 word processing departments in Topeka, Kansas. A majority of the employees…
Perceived Intensity of Emotional Point-Light Displays Is Reduced in Subjects with ASD
ERIC Educational Resources Information Center
Krüger, Britta; Kaletsch, Morten; Pilgramm, Sebastian; Schwippert, Sven-Sören; Hennig, Jürgen; Stark, Rudolf; Lis, Stefanie; Gallhofer, Bernd; Sammer, Gebhard; Zentgraf, Karen; Munzert, Jörn
2018-01-01
One major characteristic of autism spectrum disorder (ASD) is problems with social interaction and communication. The present study explored ASD-related alterations in perceiving emotions expressed via body movements. 16 participants with ASD and 16 healthy controls observed video scenes of human interactions conveyed by point-light displays. They…
Free viewpoint TV and its international standardization
NASA Astrophysics Data System (ADS)
Tanimoto, Masayuki
2009-05-01
We have developed a new type of television named FTV (Free-viewpoint TV). FTV is an innovative visual medium that enables us to view a 3D scene by freely changing our viewpoints. We proposed the concept of FTV and constructed the world's first real-time system including the complete chain of operation from image capture to display. We also realized FTV on a single PC and FTV with free listening-point audio. FTV is based on the ray-space method that represents one ray in real space with one point in the ray-space. We have also developed new types of ray capture and display technologies, such as a 360-degree mirror-scan ray-capturing system and a 360-degree ray-reproducing display. MPEG regarded FTV as the most challenging 3D medium and started the international standardization activities of FTV. The first phase of FTV is MVC (Multi-view Video Coding) and the second phase is 3DV (3D Video). MVC was completed in March 2009. 3DV is a standard that targets serving a variety of 3D displays. It will be completed within the next two years.
A new display stream compression standard under development in VESA
NASA Astrophysics Data System (ADS)
Jacobson, Natan; Thirumalai, Vijayaraghavan; Joshi, Rajan; Goel, James
2017-09-01
The Advanced Display Stream Compression (ADSC) codec project is in development in response to a call for technologies from the Video Electronics Standards Association (VESA). This codec targets visually lossless compression of display streams at a high compression rate (typically 6 bits/pixel) for mobile/VR/HDR applications. Functionality of the ADSC codec is described in this paper, and subjective trial results are provided using the ISO 29170-2 testing protocol.
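The quoted rate of 6 bits/pixel can be put in perspective with a little arithmetic: against uncompressed 24-bit RGB it corresponds to a 4:1 reduction in link bandwidth. A minimal sketch, using an assumed panel configuration (the 1440x2560 at 60 Hz figures are illustrative, not taken from the VESA call):

```python
def compression_ratio(source_bpp, compressed_bpp):
    """Ratio of uncompressed to compressed bits per pixel."""
    return source_bpp / compressed_bpp

def link_rate_mbps(width, height, fps, bpp):
    """Raw video payload rate in megabits per second."""
    return width * height * fps * bpp / 1e6

# 24-bit RGB compressed to the 6 bits/pixel cited for ADSC is 4:1.
ratio = compression_ratio(24, 6)

# Illustrative (assumed) mobile panel: 1440x2560 at 60 Hz.
raw = link_rate_mbps(1440, 2560, 60, 24)          # uncompressed payload
compressed = link_rate_mbps(1440, 2560, 60, 6)    # after 6 bpp compression
```

The same two functions apply to any display-link budget question; only the panel parameters change.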
Benoit, Justin L; Vogele, Jennifer; Hart, Kimberly W; Lindsell, Christopher J; McMullan, Jason T
2017-06-01
Bystander compression-only cardiopulmonary resuscitation (CPR) improves survival after out-of-hospital cardiac arrest. To broaden CPR training, 1-2 min ultra-brief videos have been disseminated via the Internet and television. Our objective was to determine whether participants passively exposed to a televised ultra-brief video perform CPR better than unexposed controls. This before-and-after study was conducted with non-patients in an urban Emergency Department waiting room. The intervention was an ultra-brief CPR training video displayed via closed-circuit television 3-6 times/hour. Participants were unaware of the study and not told to watch the video. Pre-intervention, no video was displayed. Participants were asked to demonstrate compression-only CPR on a manikin. Performance was scored based on critical actions: check for responsiveness, call for help, begin compressions immediately, and correct hand placement, compression rate and depth. The primary outcome was the proportion of participants who performed all actions correctly. There were 50 control and 50 exposed participants. Mean age was 37, 51% were African-American, 52% were female, and 10% self-reported current CPR certification. There were no statistically significant differences in baseline characteristics between groups. The number of participants who performed all actions correctly was 0 (0%) control vs. 10 (20%) exposed (difference 20%, 95% confidence interval [CI] 8.9-31.1%, p<0.001). Correct compression rate and depth were 11 (22%) control vs. 22 (44%) exposed (22%, 95% CI 4.1-39.9%, p=0.019), and 5 (10%) control vs. 15 (30%) exposed (20%, 95% CI 4.8-35.2%, p=0.012), respectively. Passive ultra-brief video training is associated with improved performance of compression-only CPR. Copyright © 2017 Elsevier B.V. All rights reserved.
The relative importance of different perceptual-cognitive skills during anticipation.
North, Jamie S; Hope, Ed; Williams, A Mark
2016-10-01
We examined whether anticipation is underpinned by perceiving structured patterns or postural cues and whether the relative importance of these processes varied as a function of task constraints. Skilled and less-skilled soccer players completed anticipation paradigms in video-film and point light display (PLD) format. Skilled players anticipated more accurately regardless of display condition, indicating that both perception of structured patterns between players and postural cues contribute to anticipation. However, the Skill×Display interaction showed skilled players' advantage was enhanced in the video-film condition, suggesting that they make better use of postural cues when available during anticipation. We also examined anticipation as a function of proximity to the ball. When participants were near the ball, anticipation was more accurate for video-film than PLD clips, whereas when the ball was far away there was no difference between viewing conditions. Perceiving advance postural cues appears more important than structured patterns when the ball is closer to the observer, whereas the reverse is true when the ball is far away. Various perceptual-cognitive skills contribute to anticipation with the relative importance of perceiving structured patterns and advance postural cues being determined by task constraints and the availability of perceptual information. Copyright © 2016 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Dastageeri, H.; Storz, M.; Koukofikis, A.; Knauth, S.; Coors, V.
2016-09-01
Providing mobile location-based information for pedestrians faces many challenges. On one hand, localisation accuracy indoors and outdoors is restricted by the technical limitations of GPS and Beacons. On the other hand, only a small display is available both to present information and to host the user interface. In addition, the software must account for the hardware characteristics of mobile devices during implementation in order to achieve performance with minimum latency. This paper describes our approach, which combines image tracking with GPS or Beacons to ensure correct orientation and precise localisation. To communicate information on Points of Interest (POIs), we chose Augmented Reality (AR). For this concept of operations, we used not only the display but also the acceleration and position sensors as a user interface. The paper goes into particular detail on the optimization of the image tracking algorithms, the development of the video-based AR player for the Android platform, and the evaluation of videos as AR elements with a view to providing a good user experience. For setting up content for the POIs, or even generating a tour, we used and extended the Open Geospatial Consortium (OGC) standard Augmented Reality Markup Language (ARML).
NASA Astrophysics Data System (ADS)
Mantel, Claire; Korhonen, Jari; Pedersen, Jesper M.; Bech, Søren; Andersen, Jakob Dahl; Forchhammer, Søren
2015-01-01
This paper focuses on the influence of ambient light on the perceived quality of videos displayed on Liquid Crystal Display (LCD) with local backlight dimming. A subjective test assessing the quality of videos with two backlight dimming methods and three lighting conditions, i.e. no light, low light level (5 lux) and higher light level (60 lux) was organized to collect subjective data. Results show that participants prefer the method exploiting local dimming possibilities to the conventional full backlight but that this preference varies depending on the ambient light level. The clear preference for one method at the low light conditions decreases at the high ambient light, confirming that the ambient light significantly attenuates the perception of the leakage defect (light leaking through dark pixels). Results are also highly dependent on the content of the sequence, which can modulate the effect of the ambient light from having an important influence on the quality grades to no influence at all.
STS-114 Flight Day 13 and 14 Highlights
NASA Technical Reports Server (NTRS)
2005-01-01
On Flight Day 13, the crew of Space Shuttle Discovery on the STS-114 Return to Flight mission (Commander Eileen Collins, Pilot James Kelly, Mission Specialists Soichi Noguchi, Stephen Robinson, Andrew Thomas, Wendy Lawrence, and Charles Camarda) hear a weather report from Mission Control on conditions at the shuttle's possible landing sites. The video includes a view of a storm at sea. Noguchi appears in front of a banner for the Japanese Space Agency JAXA, displaying a baseball signed by Japanese MLB players, demonstrating origami, displaying other crafts, and playing the keyboard. The primary event on the video is an interview of the whole crew, in which they discuss the importance of their mission, lessons learned, shuttle operations, shuttle safety and repair, extravehicular activities (EVAs), astronaut training, and shuttle landing. Mission Control dedicates the song "A Piece of Sky" to the Shuttle crew, while the Earth is visible below the orbiter. The video ends with a view of the Earth limb lit against a dark background.
Evaluation of a HDR image sensor with logarithmic response for mobile video-based applications
NASA Astrophysics Data System (ADS)
Tektonidis, Marco; Pietrzak, Mateusz; Monnin, David
2017-10-01
The performance of mobile video-based applications using conventional LDR (Low Dynamic Range) image sensors highly depends on the illumination conditions. As an alternative, HDR (High Dynamic Range) image sensors with logarithmic response are capable to acquire illumination-invariant HDR images in a single shot. We have implemented a complete image processing framework for a HDR sensor, including preprocessing methods (nonuniformity correction (NUC), cross-talk correction (CTC), and demosaicing) as well as tone mapping (TM). We have evaluated the HDR sensor for video-based applications w.r.t. the display of images and w.r.t. image analysis techniques. Regarding the display we have investigated the image intensity statistics over time, and regarding image analysis we assessed the number of feature correspondences between consecutive frames of temporal image sequences. For the evaluation we used HDR image data recorded from a vehicle on outdoor or combined outdoor/indoor itineraries, and we performed a comparison with corresponding conventional LDR image data.
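The logarithmic response central to the abstract above can be made concrete with a small sketch. Assuming an idealized model code = OFFSET + GAIN·ln(E), where OFFSET and GAIN are illustrative constants rather than values from any sensor datasheet, inverting the response recovers linear irradiance, and a simple Reinhard-style global operator stands in for the tone-mapping (TM) step:

```python
import math

# Idealized logarithmic sensor response: code = OFFSET + GAIN * ln(E).
# OFFSET and GAIN are assumed, illustrative constants.
OFFSET, GAIN = 32.0, 48.0

def code_to_irradiance(code):
    """Invert the logarithmic response to recover linear irradiance E."""
    return math.exp((code - OFFSET) / GAIN)

def tone_map(e, e_max):
    """Reinhard-style global operator: compress an HDR value into [0, 1)
    so it can be shown on an LDR display."""
    x = e / e_max
    return x / (1.0 + x)

# Doubling the irradiance adds a constant code step of GAIN * ln(2):
# equal ratios of scene illumination map to equal code differences,
# which is what makes the log response illumination-invariant.
e1 = code_to_irradiance(100.0)
e2 = code_to_irradiance(100.0 + GAIN * math.log(2.0))
```

The constant-step-per-stop property is what lets image-analysis features survive large illumination changes between frames.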
Lord, D.E.; Carter, G.W.; Petrini, R.R.
1983-08-02
A video flowmeter is described that is capable of specifying flow nature and pattern and, at the same time, the quantitative value of the rate of volumetric flow. An image of a determinable volumetric region within a fluid containing entrained particles is formed and positioned by a rod optic lens assembly on the raster area of a low-light level television camera. The particles are illuminated by light transmitted through a bundle of glass fibers surrounding the rod optic lens assembly. Only particle images having speeds on the raster area below the raster line scanning speed may be used to form a video picture which is displayed on a video screen. The flowmeter is calibrated so that the locus of positions of origin of the video picture gives a determination of the volumetric flow rate of the fluid. 4 figs.
Şaşmaz, M I; Akça, A H
2017-06-01
In this study, the reliability of trauma management scenario videos (in English) on YouTube and their compliance with Advanced Trauma Life Support (ATLS®) guidelines were investigated. The search was conducted on February 15, 2016 using the terms "assessment of trauma" and "management of trauma". All videos uploaded between January 2011 and June 2016 were viewed by two experienced emergency physicians. The data regarding the date of upload, the type of uploader, the duration of the video and view counts were recorded. The videos were categorized according to video source and score. The search yielded 880 videos, of which 813 were excluded by the researchers. The distribution of videos by year was found to be balanced. The scores of videos uploaded by an institution were higher than those of other groups (p = 0.003). The findings of this study show that trauma management videos on YouTube are, in the majority of cases, not reliable or compliant with ATLS guidelines and therefore cannot be recommended for educational purposes. These data may only be used in public education after the necessary adjustments are made.
Gabbett, Tim J
2013-08-01
The physical demands of rugby league, rugby union, and American football are significantly increased by the large number of collisions players are required to perform during match play. Because of the labor-intensive nature of coding collisions from video recordings, manufacturers of wearable microsensor (e.g., global positioning system [GPS]) units have refined the technology to automatically detect collisions, with several sport scientists attempting to use these microsensors to quantify the physical demands of collision sports. However, a question remains over the validity of these microtechnology units for quantifying the contact demands of collision sports. Indeed, recent evidence has shown significant differences between the number of "impacts" recorded by microtechnology units (GPSports) and the actual number of collisions coded from video. However, a separate study investigated the validity of a different microtechnology unit (minimaxX; Catapult Sports), which included GPS and triaxial accelerometers as well as a gyroscope and magnetometer, for quantifying collisions. Collisions detected by the minimaxX unit were compared with video-based coding of the actual events. No significant differences were detected between the number of mild, moderate, and heavy collisions detected via the minimaxX units and those coded from video recordings of the actual event. Furthermore, a strong correlation (r = 0.96, p < 0.01) was observed between collisions recorded via the minimaxX units and those coded from video recordings of the event. These findings demonstrate that only one commercially available and wearable microtechnology unit (minimaxX) can be considered capable of offering a valid method of quantifying the contact loads that typically occur in collision sports. Until such validation research is completed, sport scientists should be circumspect about the ability of other units to perform similar functions.
Playing a first-person shooter video game induces neuroplastic change.
Wu, Sijing; Cheng, Cho Kin; Feng, Jing; D'Angelo, Lisa; Alain, Claude; Spence, Ian
2012-06-01
Playing a first-person shooter (FPS) video game alters the neural processes that support spatial selective attention. Our experiment establishes a causal relationship between playing an FPS game and neuroplastic change. Twenty-five participants completed an attentional visual field task while we measured ERPs before and after playing an FPS video game for a cumulative total of 10 hr. Early visual ERPs sensitive to bottom-up attentional processes were little affected by video game playing for only 10 hr. However, participants who played the FPS video game and also showed the greatest improvement on the attentional visual field task displayed increased amplitudes in the later visual ERPs. These potentials are thought to index top-down enhancement of spatial selective attention via increased inhibition of distractors. Individual variations in learning were observed, and these differences show that not all video game players benefit equally, either behaviorally or in terms of neural change.
Dominant, open nonverbal displays are attractive at zero-acquaintance
Vacharkulksemsuk, Tanya; Reit, Emily; Khambatta, Poruz; Eastwick, Paul W.; Finkel, Eli J.; Carney, Dana R.
2016-01-01
Across two field studies of romantic attraction, we demonstrate that postural expansiveness makes humans more romantically appealing. In a field study (n = 144 speed-dates), we coded nonverbal behaviors associated with liking, love, and dominance. Postural expansiveness—expanding the body in physical space—was most predictive of attraction, with each one-unit increase in coded behavior from the video recordings nearly doubling a person’s odds of getting a “yes” response from one’s speed-dating partner. In a subsequent field experiment (n = 3,000), we tested the causality of postural expansion (vs. contraction) on attraction using a popular Global Positioning System-based online-dating application. Mate-seekers rapidly flipped through photographs of potential sexual/date partners, selecting those they desired to meet for a date. Mate-seekers were significantly more likely to select partners displaying an expansive (vs. contractive) nonverbal posture. Mediation analyses demonstrate one plausible mechanism through which expansiveness is appealing: Expansiveness makes the dating candidate appear more dominant. In a dating world in which success sometimes is determined by a split-second decision rendered after a brief interaction or exposure to a static photograph, single persons have very little time to make a good impression. Our research suggests that a nonverbal dominance display increases a person’s chances of being selected as a potential mate. PMID:27035937
Task automation in a successful industrial telerobot
NASA Technical Reports Server (NTRS)
Spelt, Philip F.; Jones, Sammy L.
1994-01-01
In this paper, we discuss cooperative work by Oak Ridge National Laboratory and Remotec, Inc., to automate components of the operator's workload using Remotec's Andros telerobot, thereby providing an enhanced user interface which can be retrofit to existing fielded units as well as being incorporated into new production units. Remotec's Andros robots are presently used by numerous electric utilities to perform tasks in reactors where substantial exposure to radiation exists, as well as by the armed forces and numerous law enforcement agencies. The automation of task components, as well as the video graphics display of the robot's position in the environment, will enhance all tasks performed by these users, as well as enabling performance in terrain where the robots cannot presently perform due to lack of knowledge about, for instance, the degree of tilt of the robot. Enhanced performance of a successful industrial mobile robot leads to increased safety and efficiency of performance in hazardous environments. The addition of these capabilities will greatly enhance the utility of the robot, as well as its marketability.
ERIC Educational Resources Information Center
Shriver, Edgar L.; And Others
This volume reports an effort to use the video media as an approach for the preparation of a battery of symbolic tests that would be empirically valid substitutes for criterion referenced Job Task Performance Tests. The graphic symbolic tests require the storage of a large amount of pictorial information which must be searched rapidly for display.…
A new technique for presentation of scientific works: video in poster.
Bozdag, Ali Dogan
2008-07-01
Presentations at scientific congresses and symposiums can take two different forms: poster or oral presentation. Each method has some advantages and disadvantages. To combine the advantages of oral and poster presentations, a new presentation type was conceived: "video in poster." The top of a portable digital video disc (DVD) player is opened 180 degrees to keep the screen and the body of the DVD player in the same plane. The poster is attached to the DVD player, and a window is made in the poster to expose the screen of the DVD player so that the screen appears as a picture on the poster. Then this video in poster is fixed to the panel. When the DVD player is turned on, the video presentation of the surgical procedure starts. Several posters were presented at different medical congresses in 2007 using the "video in poster" technique, and they received poster awards. The video in poster combines the advantages of both oral and poster presentations.
NASA Astrophysics Data System (ADS)
Khosla, Deepak; Huber, David J.; Bhattacharyya, Rajan
2017-05-01
In this paper, we describe an algorithm and system for optimizing search and detection performance for "items of interest" (IOI) in large-sized images and videos. The system employs the Rapid Serial Visual Presentation (RSVP) based EEG paradigm together with surprise algorithms that incorporate motion processing to determine whether static or video RSVP is used. It works by first computing a motion surprise map on image sub-regions (chips) of incoming sensor video data and then using those surprise maps to label the chips as either "static" or "moving". This information tells the system whether to use a static or video RSVP presentation and decoding algorithm in order to optimize EEG-based detection of IOI in each chip. Using this method, we are able to demonstrate classification of a series of image regions from video with an Az value of 1 (perfect classification) over a range of display frequencies and video speeds.
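The chip-labeling step can be illustrated with a minimal sketch. Mean absolute frame differencing stands in here for the paper's motion surprise computation, and the chip size and threshold are assumed tuning constants, not values from the system described:

```python
def label_chips(prev_frame, curr_frame, chip, thresh=12.0):
    """Label each chip of a grayscale frame pair as 'static' or 'moving'.

    A crude stand-in for a motion surprise map: the mean absolute
    frame difference inside each chip is compared against a fixed
    threshold (`thresh` is an assumed tuning constant). Frames are
    lists of lists of pixel intensities; chips are chip x chip tiles.
    """
    h, w = len(curr_frame), len(curr_frame[0])
    labels = {}
    for y in range(0, h, chip):
        for x in range(0, w, chip):
            diffs = [abs(curr_frame[r][c] - prev_frame[r][c])
                     for r in range(y, min(y + chip, h))
                     for c in range(x, min(x + chip, w))]
            score = sum(diffs) / len(diffs)
            labels[(y, x)] = "moving" if score > thresh else "static"
    return labels
```

In the system described above, a "static" label would route a chip to static RSVP presentation and decoding, while a "moving" label would route it to video RSVP.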
Visual and ocular effects from the use of flat-panel displays.
Porcar, Esteban; Pons, Alvaro M; Lorente, Amalia
2016-01-01
To evaluate the prevalence of eye symptoms in a non-presbyopic population of video display unit (VDU) users with flat-panel displays. One hundred and sixteen VDU users with flat-panel displays from an urban population participated in the study; their ages ranged from 20 to 34 years. There were 60 females and 56 males. An eye examination to rule out the presence of significant uncorrected refractive errors, general binocular dysfunctions and eye conditions was carried out. To determine and quantify the type and nature of eye symptoms, participants were asked to answer a written questionnaire, and the results were grouped by gender, age and number of hours a day spent using a VDU. Seventy-two percent of participants reported eye symptoms related to VDU use. Moderate-to-severe eye symptoms were found in 23% of participants. The main symptom was moderate-to-severe tired eyes (14%), followed by sensitivity to bright lights (12%), blurred vision at far distances (10%), eyestrain or dry eye or irritated or burning eyes (9%), difficulty in refocusing from one distance to another or headache (8%), and blurred vision at near or intermediate distances (<4%). Eye symptoms were greater among females (P=0.005) and increased with VDU use, markedly above 6 h spent using a VDU in a typical day (P=0.01). Significant eye symptoms related to VDU use often occur and should not be underestimated. The increasing use of electronic devices with flat-panel displays should prompt users to take appropriate measures to prevent or relieve the eye symptoms arising from their use.
Bar-Chart-Monitor System For Wind Tunnels
NASA Technical Reports Server (NTRS)
Jung, Oscar
1993-01-01
Real-time monitor system provides bar-chart displays of significant operating parameters developed for National Full-Scale Aerodynamic Complex at Ames Research Center. Designed to gather and process sensory data on operating conditions of wind tunnels and models, and displays data for test engineers and technicians concerned with safety and validation of operating conditions. Bar-chart video monitor displays data in as many as 50 channels at maximum update rate of 2 Hz in format facilitating quick interpretation.
ERIC Educational Resources Information Center
Kroski, Ellyssa
2008-01-01
A widget displays Web content from external sources and can be embedded into a blog, social network, or other Web page, or downloaded to one's desktop. With widgets--sometimes referred to as gadgets--one can insert video into a blog post, display slideshows on MySpace, get the weather delivered to his mobile device, drag-and-drop his Netflix queue…
Video enhancement of X-ray and neutron radiographs
NASA Technical Reports Server (NTRS)
Vary, A.
1973-01-01
System was devised for displaying radiographs on television screen and enhancing fine detail in picture. System uses analog-computer circuits to process television signal from low-noise television camera. Enhanced images are displayed in black and white and can be controlled to vary degree of enhancement and magnification of details in either radiographic transparencies or opaque photographs.
Ang, Cheah Kiok; Mohidin, Norhani; Chung, Kah Meng
2014-09-01
Wink glass (WG), an invention to stimulate blinking at intervals of 5 s, was designed to reduce dry eye symptoms during visual display unit (VDU) use. The objective of this study was to investigate the effect of WG on visual functions including blink rate, ocular surface symptoms (OSS) and tear stability during VDU use. A total of 26 young and asymptomatic subjects were instructed to read articles in Malay on a computer for 20 min with WG, during which their blink rate, pre- and post-task tear break-up time, and OSS were recorded. The results were compared with another reading session in which the subjects wore a transparent plastic sheet as a control. Non-invasive tear break-up time was reduced after the reading session with the transparent plastic sheet (pre-task = 5.97 s, post-task = 5.14 s, z = -2.426, p = 0.015, Wilcoxon) but remained stable (pre-task = 5.62 s, post-task = 5.35 s, z = -0.67, p = 0.501) during the reading session with WG. The blink rate recorded during the reading session with the plastic sheet was 9 blinks/min (median), and this increased to 15 blinks/min (z = -3.315, p = 0.001) with WG. The reading task caused OSS (maximum score = 20), with the median score of 1 (range 0-8) reduced to 0 (range 0-3) after wearing WG (z = -2.417, p = 0.016). WG was found to increase post-task tear stability, increase blinking rate and reduce OSS during video display unit use among young and healthy adults. Although it may be considered as an option to improve dry eye symptoms among VDU users, further studies are warranted to establish its stability and its effect on subjects with dry eyes.
Optical instrument for measurement of vaginal coating thickness by drug delivery formulations
NASA Astrophysics Data System (ADS)
Henderson, Marcus H.; Peters, Jennifer J.; Walmer, David K.; Couchman, Grace M.; Katz, David F.
2005-03-01
An optical device has been developed for imaging the human vaginal epithelial surfaces, and quantitatively measuring distributions of coating thickness of drug delivery formulations—such as gels—applied for prophylaxis, contraception or therapy. The device consists of a rigid endoscope contained within a 27-mm-diameter hollow, polished-transparent polycarbonate tube (150 mm long) with a hemispherical cap. Illumination is from a xenon arc. The device is inserted into, and remains stationary within, the vagina. A custom gearing mechanism moves the endoscope relative to the tube, so that it views epithelial surfaces immediately apposing its outer surface (i.e., 150 mm long by 360° azimuthal angle). Thus, with the tube fixed relative to the vagina, the endoscope views local regions at distinct and measurable locations that span the vaginal epithelium. The returning light path is split between a video camera and a photomultiplier. Excitation and emission filters in the light path enable measurement of fluorescence of the viewed region. Thus, the instrument captures video images simultaneously with photometric measurement of fluorescence of each video field (~10 mm diameter; formulations are labeled with 0.1% w/w United States Pharmacopeia (USP) injectable sodium fluorescein). Position, time and fluorescence measurements are continuously displayed (on video) and recorded (to a computer database). The photomultiplier output is digitized to quantify fluorescence of the endoscope field of view. Quantification of the thickness of formulation coating of a surface viewed by the device is achieved owing to the linear relationship between thickness and fluorescence intensity for biologically relevant thin layers (of the order of 0.5 mm). Summary measures of coating have been developed, focusing upon extent, location and uniformity. The device has begun to be applied in human studies of model formulations for prophylaxis against infection with HIV and other sexually transmitted pathogens.
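The linear thickness-fluorescence relationship means the instrument can be calibrated against standards of known thickness with an ordinary least-squares line. A minimal sketch, using made-up calibration points rather than measurements from the actual device:

```python
def fit_thickness_calibration(samples):
    """Least-squares fit of thickness = slope * intensity + intercept.

    `samples` is a list of (fluorescence_intensity, known_thickness_mm)
    pairs measured on calibration standards of known coating thickness.
    """
    n = len(samples)
    sx = sum(i for i, _ in samples)
    sy = sum(t for _, t in samples)
    sxx = sum(i * i for i, _ in samples)
    sxy = sum(i * t for i, t in samples)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return slope, intercept

# Two-point calibration with illustrative (made-up) standards:
# 100 counts -> 0.1 mm and 400 counts -> 0.4 mm, i.e. a line through
# the origin at 1 mm per 1000 counts.
slope, intercept = fit_thickness_calibration([(100, 0.1), (400, 0.4)])
thickness_mm = lambda counts: slope * counts + intercept
```

Once calibrated, each digitized photomultiplier reading maps directly to a local coating thickness, which is what the summary measures of extent, location and uniformity are built from.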
Mobile visual communications and displays
NASA Astrophysics Data System (ADS)
Valliath, George T.
2004-09-01
The different types of mobile visual communication modes and the types of displays needed in cellular handsets are explored. The well-known 2-way video conferencing is only one of the possible modes. Some modes are already supported on current handsets, while others need the arrival of advanced network capabilities. Displays for devices that support these visual communication modes need to deliver the required visual experience. Over the last 20 years the display has grown in size while the rest of the handset has shrunk. However, the display is still not large enough: processor performance and network capabilities continue to outstrip the display's capability. This makes the display a bottleneck. This paper will explore potential solutions to the problem of presenting a large image on a small handset.
Real-Time Detection and Reading of LED/LCD Displays for Visually Impaired Persons
Tekin, Ender; Coughlan, James M.; Shen, Huiying
2011-01-01
Modern household appliances, such as microwave ovens and DVD players, increasingly require users to read an LED or LCD display to operate them, posing a severe obstacle for persons with blindness or visual impairment. While OCR-enabled devices are emerging to address the related problem of reading text in printed documents, they are not designed to tackle the challenge of finding and reading characters in appliance displays. Any system for reading these characters must address the challenge of first locating the characters among substantial amounts of background clutter; moreover, poor contrast and the abundance of specular highlights on the display surface – which degrade the image in an unpredictable way as the camera is moved – motivate the need for a system that processes images at a few frames per second, rather than forcing the user to take several photos, each of which can take seconds to acquire and process, until one is readable. We describe a novel system that acquires video, detects and reads LED/LCD characters in real time, reading them aloud to the user with synthesized speech. The system has been implemented on both a desktop and a cell phone. Experimental results are reported on videos of display images, demonstrating the feasibility of the system. PMID:21804957
Impact of packet losses in scalable 3D holoscopic video coding
NASA Astrophysics Data System (ADS)
Conti, Caroline; Nunes, Paulo; Ducla Soares, Luís.
2014-05-01
Holoscopic imaging has become a prospective glassless 3D technology to provide more natural 3D viewing experiences to the end user. Additionally, holoscopic systems allow new post-production degrees of freedom, such as controlling the plane of focus or the viewing angle presented to the user. However, to successfully introduce this technology into the consumer market, a display scalable coding approach is essential to achieve backward compatibility with legacy 2D and 3D displays. Moreover, to effectively transmit 3D holoscopic content over error-prone networks, e.g., wireless networks or the Internet, error resilience techniques are required to mitigate the impact of data impairments on the user's quality perception. Therefore, it is essential to deeply understand the impact of packet losses on decoded video quality for the specific case of 3D holoscopic content, notably when a scalable approach is used. In this context, this paper studies the impact of packet losses when using a previously proposed three-layer display scalable 3D holoscopic video coding architecture, where each layer represents a different level of display scalability (i.e., L0 - 2D, L1 - stereo or multiview, and L2 - full 3D holoscopic). For this, a simple error concealment algorithm is used, which makes use of inter-layer redundancy between multiview and 3D holoscopic content and the inherent correlation of the 3D holoscopic content to estimate lost data. Furthermore, a study of the influence of the 2D view generation parameters used in lower layers on the performance of the error concealment algorithm is also presented.
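The inter-layer concealment idea can be pictured as a fallback chain (my simplification for illustration, not the paper's algorithm): a lost block is first estimated from co-located lower-layer data, and otherwise from intact neighboring blocks of the same layer.

```python
def conceal_block(received, base_layer, neighbors):
    """Toy error concealment: prefer the received block, then the
    co-located lower-layer estimate (inter-layer redundancy), then the
    mean of intact neighbors (intra-layer correlation)."""
    if received is not None:
        return received
    if base_layer is not None:
        return base_layer
    intact = [n for n in neighbors if n is not None]
    return sum(intact) / len(intact) if intact else 0
```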
Millisecond accuracy video display using OpenGL under Linux.
Stewart, Neil
2006-02-01
To measure people's reaction times to the nearest millisecond, it is necessary to know exactly when a stimulus is displayed. This article describes how to display stimuli with millisecond accuracy on a normal CRT monitor, using a PC running Linux. A simple C program is presented to illustrate how this may be done within X Windows using the OpenGL rendering system. A test of this system is reported that demonstrates that stimuli may be consistently displayed with millisecond accuracy. An algorithm is presented that allows the exact time of stimulus presentation to be deduced, even if there are relatively large errors in measuring the display time.
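The deduction algorithm mentioned above rests on the fact that a CRT can only present a stimulus on a vertical refresh. A sketch of that idea in Python (not Stewart's actual C code; the 100 Hz refresh rate is an assumed example):

```python
def snap_to_refresh(measured_ms, refresh_ms=10.0, origin_ms=0.0):
    """Deduce the true stimulus onset from a noisy measured timestamp by
    snapping it to the nearest vertical-refresh boundary, since frames
    can only appear on refreshes (10 ms period assumes a 100 Hz CRT)."""
    n = round((measured_ms - origin_ms) / refresh_ms)
    return origin_ms + n * refresh_ms

# A measurement jittered by a few milliseconds still identifies the
# refresh on which the stimulus actually appeared.
onset = snap_to_refresh(43.7)
```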
Implementation of a Landscape Lighting System to Display Images
NASA Astrophysics Data System (ADS)
Sun, Gi-Ju; Cho, Sung-Jae; Kim, Chang-Beom; Moon, Cheol-Hong
The system implemented in this study consists of a PC, a MASTER, SLAVEs and MODULEs. The PC sets up the various landscape lighting displays, and the image files can be sent to the MASTER through a virtual serial port connected over USB (Universal Serial Bus). The MASTER sends a sync signal to the SLAVE. The SLAVE uses the sync signal received from the MASTER together with the stored landscape lighting display pattern. The video file is saved in NAND Flash memory, and the R, G, B signals are separated using the self-made display signal and sent to the MODULE so that it can display the image.
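The SLAVE's channel-separation step can be sketched as follows (a toy illustration; in the real system the separation is done in hardware from the stored video file):

```python
def split_rgb(frame):
    """Separate packed (R, G, B) pixels into three per-channel streams,
    as delivered to the MODULE for display (illustrative only)."""
    reds = [p[0] for p in frame]
    greens = [p[1] for p in frame]
    blues = [p[2] for p in frame]
    return reds, greens, blues
```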
Interactive Video: What the Research Says.
ERIC Educational Resources Information Center
Copeland, Peter
1988-01-01
Discussion of research that evaluates the effectiveness of interactive video used for training in the United States and in the United Kingdom highlights a program developed for the Ford Motor Company. Topics discussed include content-treatment interaction; learning strategies; intermode differences; research criteria; pretest and posttest results;…
Emotional display rules as work unit norms: a multilevel analysis of emotional labor among nurses.
Diefendorff, James M; Erickson, Rebecca J; Grandey, Alicia A; Dahling, Jason J
2011-04-01
Emotional labor theory has conceptualized emotional display rules as shared norms governing the expression of emotions at work. Using a sample of registered nurses working in different units of a hospital system, we provided the first empirical evidence that display rules can be represented as shared, unit-level beliefs. Additionally, controlling for the influence of dispositional affectivity, individual-level display rule perceptions, and emotion regulation, we found that unit-level display rules are associated with individual-level job satisfaction. We also showed that unit-level display rules relate to burnout indirectly through individual-level display rule perceptions and emotion regulation strategies. Finally, unit-level display rules also interacted with individual-level dispositional affectivity to predict employee use of emotion regulation strategies. We discuss how future research on emotional labor and display rules, particularly in the health care setting, can build on these findings.
Lord, David E.; Carter, Gary W.; Petrini, Richard R.
1983-01-01
A video flowmeter is described that is capable of specifying flow nature and pattern and, at the same time, the quantitative value of the rate of volumetric flow. An image of a determinable volumetric region within a fluid (10) containing entrained particles (12) is formed and positioned by a rod optic lens assembly (31) on the raster area of a low-light level television camera (20). The particles (12) are illuminated by light transmitted through a bundle of glass fibers (32) surrounding the rod optic lens assembly (31). Only particle images having speeds on the raster area below the raster line scanning speed may be used to form a video picture which is displayed on a video screen (40). The flowmeter is calibrated so that the locus of positions of origin of the video picture gives a determination of the volumetric flow rate of the fluid (10).
Development and testing for physical security robots
NASA Astrophysics Data System (ADS)
Carroll, Daniel M.; Nguyen, Chinh; Everett, H. R.; Frederick, Brian
2005-05-01
The Mobile Detection Assessment Response System (MDARS) provides physical security for Department of Defense bases and depots using autonomous unmanned ground vehicles (UGVs) to patrol the site while operating payloads for intruder detection and assessment, barrier assessment, and product assessment. MDARS is in the System Development and Demonstration acquisition phase and is currently undergoing developmental testing, including an Early User Appraisal (EUA) at the Hawthorne Army Depot, Nevada, the world's largest army depot. The Multiple Resource Host Architecture (MRHA) allows the human guard force to command and control several MDARS platforms simultaneously. The MRHA graphically displays video, map, and status for each resource using wireless digital communications for integrated data, video, and audio. Events are prioritized and the user is prompted with audio alerts and text instructions for alarms and warnings. The MRHA also interfaces to remote resources to automate legacy physical devices such as fence gate controls, garage doors, and remote power on/off capability for the MDARS patrol units. This paper provides an overview and history of the MDARS program and control station software with details on the installation and operation at Hawthorne Army Depot, including discussions of scenarios for EUA excursions. Special attention is given to the MDARS technical development strategy for spiral evolutions.
Unmanned ground vehicles for integrated force protection
NASA Astrophysics Data System (ADS)
Carroll, Daniel M.; Mikell, Kenneth; Denewiler, Thomas
2004-09-01
The combination of Command and Control (C2) systems with Unmanned Ground Vehicles (UGVs) provides Integrated Force Protection from the Robotic Operation Command Center. Autonomous UGVs are directed as Force Projection units. UGV payloads and fixed sensors provide situational awareness while unattended munitions provide a less-than-lethal response capability. Remote resources serve as automated interfaces to legacy physical devices such as manned response vehicles, barrier gates, fence openings, garage doors, and remote power on/off capability for unmanned systems. The Robotic Operations Command Center executes the Multiple Resource Host Architecture (MRHA) to simultaneously control heterogeneous unmanned systems. The MRHA graphically displays video, map, and status for each resource using wireless digital communications for integrated data, video, and audio. Events are prioritized and the user is prompted with audio alerts and text instructions for alarms and warnings. A control hierarchy of missions and duty rosters support autonomous operations. This paper provides an overview of the key technology enablers for Integrated Force Protection with details on a force-on-force scenario to test and demonstrate concept of operations using Unmanned Ground Vehicles. Special attention is given to development and applications for the Remote Detection Challenge and Response (REDCAR) initiative for Integrated Base Defense.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-04-27
... INTERNATIONAL TRADE COMMISSION [Inv. No. 337-TA-770] In the Matter of Certain Video Game Systems... importation of certain video game systems and wireless controllers and components thereof by reason of... sale within the United States after importation of certain video game systems and wireless controllers...
Alternative Fuels Data Center: Schwan's Home Service Delivers With
distribute products across the United States. For information about this project, contact Twin Cities Clean Cities Coalition.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-07-31
... INTERNATIONAL TRADE COMMISSION [Investigation No. 337-TA-852] Certain Video Analytics Software... 337 of the Tariff Act of 1930, as amended, 19 U.S.C. 1337, on behalf of ObjectVideo, Inc. of Reston... sale within the United States after importation of certain video analytics software, systems...
Longitudinal effects of violent video games on aggression in Japan and the United States.
Anderson, Craig A; Sakamoto, Akira; Gentile, Douglas A; Ihori, Nobuko; Shibuya, Akiko; Yukawa, Shintaro; Naito, Mayumi; Kobayashi, Kumiko
2008-11-01
Youth worldwide play violent video games many hours per week. Previous research suggests that such exposure can increase physical aggression. We tested whether high exposure to violent video games increases physical aggression over time in both high- (United States) and low- (Japan) violence cultures. We hypothesized that the amount of exposure to violent video games early in a school year would predict changes in physical aggressiveness assessed later in the school year, even after statistically controlling for gender and previous physical aggressiveness. In 3 independent samples, participants' video game habits and physically aggressive behavior tendencies were assessed at 2 points in time, separated by 3 to 6 months. One sample consisted of 181 Japanese junior high students ranging in age from 12 to 15 years. A second Japanese sample consisted of 1050 students ranging in age from 13 to 18 years. The third sample consisted of 364 United States 3rd-, 4th-, and 5th-graders ranging in age from 9 to 12 years. Habitual violent video game play early in the school year predicted later aggression, even after controlling for gender and previous aggressiveness in each sample. Those who played a lot of violent video games became relatively more physically aggressive. Multisample structural equation modeling revealed that this longitudinal effect was of a similar magnitude in the United States and Japan for similar-aged youth and was smaller (but still significant) in the sample that included older youth. These longitudinal results confirm earlier experimental and cross-sectional studies that had suggested that playing violent video games is a significant risk factor for later physically aggressive behavior and that this violent video game effect on youth generalizes across very different cultures. As a whole, the research strongly suggests reducing the exposure of youth to this risk factor.
Video System Highlights Hydrogen Fires
NASA Technical Reports Server (NTRS)
Youngquist, Robert C.; Gleman, Stuart M.; Moerk, John S.
1992-01-01
Video system combines images from visible spectrum and from three bands in infrared spectrum to produce color-coded display in which hydrogen fires distinguished from other sources of heat. Includes linear array of 64 discrete lead selenide mid-infrared detectors operating at room temperature. Images overlaid on black and white image of same scene from standard commercial video camera. In final image, hydrogen fires appear red; carbon-based fires, blue; and other hot objects, mainly green and combinations of green and red. Where no thermal source present, image remains in black and white. System enables high degree of discrimination between hydrogen flames and other thermal emitters.
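The color-coding rule can be sketched as a per-pixel classifier. Band names, thresholds, and the winner-takes-all rule below are invented for illustration; the real system fuses a 64-element lead selenide mid-infrared array with a visible-spectrum camera:

```python
def color_code(h2_band, carbon_band, other_band, noise_floor=10):
    """Toy per-pixel fusion rule: hydrogen fires render red, carbon-based
    fires blue, other hot objects green; cold pixels stay black-and-white."""
    strongest = max(h2_band, carbon_band, other_band)
    if strongest < noise_floor:
        return "bw"  # no thermal source: keep the visible image as-is
    if strongest == h2_band:
        return "red"
    if strongest == carbon_band:
        return "blue"
    return "green"
```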
Power-Constrained Fuzzy Logic Control of Video Streaming over a Wireless Interconnect
NASA Astrophysics Data System (ADS)
Razavi, Rouzbeh; Fleury, Martin; Ghanbari, Mohammed
2008-12-01
Wireless communication of video, with Bluetooth as an example, represents a compromise between channel conditions, display and decode deadlines, and energy constraints. This paper proposes fuzzy logic control (FLC) of automatic repeat request (ARQ) as a way of reconciling these factors, with a 40% saving in power in the worst channel conditions from economizing on transmissions when channel errors occur. Whatever the channel conditions are, FLC is shown to outperform the default Bluetooth scheme and an alternative Bluetooth-adaptive ARQ scheme in terms of reduced packet loss and delay, as well as improved video quality.
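The trade-off the controller manages can be caricatured with two fuzzy inputs, channel badness and deadline proximity. This is a toy sketch with invented membership ramps and a single rule, not the paper's rule base:

```python
def clamp01(x):
    return max(0.0, min(1.0, x))

def arq_limit(error_rate, slack_ms, max_retx=5):
    """Permit more ARQ retransmissions to the degree the channel is good
    AND the display/decode deadline is far; min() acts as the fuzzy AND.
    The ramp breakpoints (0.05-0.30 error rate, 0-100 ms slack) are
    invented for illustration."""
    channel_bad = clamp01((error_rate - 0.05) / 0.25)
    deadline_near = clamp01(1.0 - slack_ms / 100.0)
    permit = min(1.0 - channel_bad, 1.0 - deadline_near)
    return round(1 + permit * (max_retx - 1))
```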
Design of large format commercial display holograms
NASA Astrophysics Data System (ADS)
Perry, John F. W.
1989-05-01
Commercial display holography is approaching a critical stage where the ability to compete with other graphic media will dictate its future. Factors involved will be cost, technical quality and, in particular, design. The tenuous commercial success of display holography has relied heavily on its appeal to an audience with little or no previous experience in the medium. Well designed images were scarce, leading many commercial designers to avoid holography. As the public became more accustomed to holograms, the excitement dissipated, leaving a need for strong visual design if the medium is to survive in this marketplace. Drawing on the vast experience of TV, rock music and magazine advertising, competitive techniques such as video walls, mural duratrans, laser light shows and interactive videos attract a professional support structure far greater than does holography. This paper will address design principles developed at Holographics North for large format commercial holography. Examples will be drawn from a number of foreign and domestic corporate trade exhibitions. Recommendations will also be made on how to develop greater awareness of a holographic design.
Explosive Transient Camera (ETC) Program
1991-10-01
Block-diagram residue (recoverable labels: CCD clocking unit, "upstairs" electronics, analog-to-digital processor, video out, commands plus data/status lines to the instrument). The "upstairs" electronics digitizes the CCD output and transmits digital video and status information to the "downstairs" system. The clocking unit and regulator/driver board are the only CCD-dependent components.
Fusion Helmet: Electronic Analysis
2014-04-01
Table 1: LYR203-101B board features (flattened in extraction): DM648 DSP with GPIO and PORn, two video ports, bootmode, SPI/UART, I2C, CLKIN, MDIO, DDR2 128 MB/16-bit memory, SPI flash, McASP, EMAC-SGMII, JTAG, clock generation, power-good/power LED, and a video display interface.
On Target: Organizing and Executing the Strategic Air Campaign Against Iraq
2002-01-01
possession, use, sale, creation or display of any pornographic photograph, videotape, movie, drawing, book, or magazine or similar representations. This... forward-looking infrared (FLIR) sensor to create daylight-quality video images of terrain and utilized terrain-following radar to enable the aircraft to... The Black Hole Planners had pleaded with CENTAF Intel to provide them with photos of targets, provide additional personnel to analyze PGM video
Early development of fern gametophytes in microgravity
NASA Technical Reports Server (NTRS)
Roux, Stanley J.; Chatterjee, Ani; Hillier, Sheila; Cannon, Tom
2003-01-01
Dormant spores of the fern Ceratopteris richardii were flown on Shuttle mission STS-93 to evaluate the effects of micro-g on their development and on their pattern of gene expression. Prior to flight the spores were sterilized and sown into one of two environments: (1) microscope slides in a video-microscopy module; and (2) Petri dishes. All spores were then stored in darkness until use. Spore germination was initiated on orbit after exposure to light. For the spores on microscope slides, cell-level changes were recorded through the clear spore coat by video microscopy. After their exposure to light, spores in Petri dishes were frozen in orbit at four different time points during which, on Earth, gravity fixes the polarity of their development. Spores were then stored frozen in Biological Research in Canister units until recovery on Earth. The RNAs from these cells and from 1-g control cells were extracted and analyzed on Earth after flight to assay changes in gene expression. Video microscopy results revealed that the germinated spores developed normally in microgravity, although the polarity of their development, which is guided by gravity on Earth, was random in space. Differential Display-PCR analyses of RNA extracted from space-flown cells showed that there was about a 5% change in the pattern of gene expression between cells developing in micro-g compared to those developing on Earth. © 2002 Published by Elsevier Science Ltd on behalf of COSPAR.
Fronto-parietal regulation of media violence exposure in adolescents: a multi-method study
Strenziok, Maren; Krueger, Frank; Deshpande, Gopikrishna; Lenroot, Rhoshel K.; van der Meer, Elke
2011-01-01
Adolescents spend a significant part of their leisure time watching TV programs and movies that portray violence. It is unknown, however, how the extent of violent media use and the severity of aggression displayed affect adolescents’ brain function. We investigated skin conductance responses, brain activation and functional brain connectivity to media violence in healthy adolescents. In an event-related functional magnetic resonance imaging experiment, subjects repeatedly viewed normed videos that displayed different degrees of aggressive behavior. We found a downward linear adaptation in skin conductance responses with increasing aggression and desensitization towards more aggressive videos. Our results further revealed adaptation in a fronto-parietal network including the left lateral orbitofrontal cortex (lOFC), right precuneus and bilateral inferior parietal lobules, again showing downward linear adaptations and desensitization towards more aggressive videos. Granger causality mapping analyses revealed attenuation in the left lOFC, indicating that activation during viewing aggressive media is driven by input from parietal regions that decreased over time, for more aggressive videos. We conclude that aggressive media activates an emotion–attention network that has the capability to blunt emotional responses through reduced attention with repeated viewing of aggressive media contents, which may restrict the linking of the consequences of aggression with an emotional response, and therefore potentially promotes aggressive attitudes and behavior. PMID:20934985
Depth assisted compression of full parallax light fields
NASA Astrophysics Data System (ADS)
Graziosi, Danillo B.; Alpaslan, Zahir Y.; El-Ghoroury, Hussein S.
2015-03-01
Full parallax light field displays require high pixel density and huge amounts of data. Compression is a necessary tool used by 3D display systems to cope with the high bandwidth requirements. One of the formats adopted by MPEG for 3D video coding standards is the use of multiple views with associated depth maps. Depth maps enable the coding of a reduced number of views, and are used by compression and synthesis software to reconstruct the light field. However, most of the developed coding and synthesis tools target linearly arranged cameras with small baselines. Here we propose to use the 3D video coding format for full parallax light field coding. We introduce a view selection method inspired by plenoptic sampling, followed by transform-based view coding and view synthesis prediction to code residual views. We determine the minimal requirements for view sub-sampling and present the rate-distortion performance of our proposal. We also compare our method with established video compression techniques, such as H.264/AVC, H.264/MVC, and the new 3D video coding algorithm, 3DV-ATM. Our results show that our method not only has improved rate-distortion performance but also preserves the structure of the perceived light fields better.
Discontinuity minimization for omnidirectional video projections
NASA Astrophysics Data System (ADS)
Alshina, Elena; Zakharchenko, Vladyslav
2017-09-01
Advances in display technologies, both for head-mounted devices and television panels, demand a resolution increase beyond 4K for the source signal in virtual reality video streaming applications. This poses a problem of content delivery through bandwidth-limited distribution networks. Considering the fact that the source signal covers the entire surrounding space, investigation revealed that compression efficiency may fluctuate by 40% on average depending on the origin selected at the conversion stage from 3D space to a 2D projection. Based on this knowledge, an origin selection algorithm for video compression applications has been proposed. Using a discontinuity entropy minimization function, the projection origin rotation may be chosen to provide optimal compression results. The outcome of this research may be applied across various video compression solutions for omnidirectional content.
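The origin-selection idea can be illustrated on a single equirectangular frame: try every horizontal cut position and keep the one with the least discontinuity across the wrap seam. The sum-of-absolute-differences cost below is a toy stand-in for the paper's entropy-based function:

```python
def best_origin_shift(frame):
    """frame: rows of luma samples that wrap around horizontally.
    Returns the column shift whose cut has minimal seam discontinuity."""
    width = len(frame[0])

    def seam_cost(shift):
        # Discontinuity across the cut between column (shift-1) and shift,
        # with the wrap handled by the modulo.
        left = (shift - 1) % width
        return sum(abs(row[shift] - row[left]) for row in frame)

    return min(range(width), key=seam_cost)

# The large wrap discontinuity (9 -> 0) is moved away from the cut.
shift = best_origin_shift([[0, 1, 2, 9]])
```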
Peña, Raul; Ávila, Alfonso; Muñoz, David; Lavariega, Juan
2015-01-01
The recognition of clinical manifestations in both video images and physiological-signal waveforms is an important aid to improve the safety and effectiveness of medical care. Physicians can rely on video-waveform (VW) observations to recognize difficult-to-spot signs and symptoms. The VW observations can also reduce the number of false positive incidents and expand the recognition coverage to abnormal health conditions. The synchronization between the video images and the physiological-signal waveforms is fundamental for the successful recognition of the clinical manifestations. The use of conventional equipment to synchronously acquire and display the video-waveform information involves complex tasks such as video capture/compression, the acquisition/compression of each physiological signal, and video-waveform synchronization based on timestamps. This paper introduces a data hiding technique capable of both enabling embedding channels and synchronously hiding samples of physiological signals into encoded video sequences. Our data hiding technique offers large data capacity and simplifies the complexity of the video-waveform acquisition and reproduction. The experimental results revealed successful embedding and full restoration of the signals' samples. Our results also demonstrated a small distortion in the video objective quality, a small increment in bit-rate, and embedded cost savings of -2.6196% for high and medium motion video sequences.
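The embedding idea can be demonstrated with plain least-significant-bit hiding in raw frame bytes. This is a didactic stand-in only: the paper embeds at the encoded-video level, which this sketch does not attempt.

```python
def embed_samples(frame_bytes, samples):
    """Hide 8-bit physiological samples in the LSBs of frame bytes,
    one bit per byte, LSB-first within each sample (toy raw-domain
    steganography, not the paper's codec-level method)."""
    out = bytearray(frame_bytes)
    bits = [(s >> b) & 1 for s in samples for b in range(8)]
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit
    return bytes(out)

def extract_samples(frame_bytes, count):
    """Recover `count` 8-bit samples hidden by embed_samples."""
    vals = []
    for s in range(count):
        v = 0
        for b in range(8):
            v |= (frame_bytes[s * 8 + b] & 1) << b
        vals.append(v)
    return vals
```

A round trip through a 16-byte "frame" restores the samples exactly while changing each carrier byte by at most one luma level.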
2004-03-01
mirror device (DMD) for C4ISR applications, the IBM 9.2 megapixel 22-in. diagonal active matrix liquid crystal display (AMLCD) monitor for data... FED, VFD, OLED and a variety of microdisplays (uD, comprising uLCD, uOLED, DMD and other MEMS) (see glossary). CDT = cathode display tubes (used in... than SVGA, greater battery life and brightness, decreased weight and thickness, electromagnetic interference (EMI), and development of video
On-screen-display (OSD) menu detection for proper stereo content reproduction for 3D TV
NASA Astrophysics Data System (ADS)
Tolstaya, Ekaterina V.; Bucha, Victor V.; Rychagov, Michael N.
2011-03-01
Modern consumer 3D TV sets are able to show video content in two different modes: 2D and 3D. In 3D mode, the stereo pair comes from an external device such as a Blu-ray player, satellite receiver, etc. The stereo pair is split into left and right images that are shown one after another. The viewer sees a different image with each eye using shutter glasses properly synchronized with the 3D TV. In addition, some devices that provide the TV with stereo content are able to display additional information by imposing an overlay picture on the video content, an On-Screen-Display (OSD) menu. Some OSDs are not 3D compatible and lead to incorrect 3D reproduction. In this case, the TV set must recognize the type of OSD, determine whether it is 3D compatible, and visualize it correctly by either switching off stereo mode or continuing demonstration of the stereo content. We propose a new, stable method for detection of 3D-incompatible OSD menus on stereo content. A conventional OSD is a rectangular area with letters and pictograms. An OSD menu can be of different transparency levels and colors. To be 3D compatible, an OSD must be overlaid separately on both images of a stereo pair. The main problem in detecting an OSD is to distinguish whether a color difference is due to OSD presence or due to stereo parallax. We applied special techniques to find a reliable image difference and additionally used the cue that an OSD usually has characteristic geometrical features: straight parallel lines. The developed algorithm was tested on our video sequence database, with several types of OSD with different colors and transparency levels overlaid upon video content. Detection quality exceeded 99% of true answers.
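The core disambiguation (is a left/right difference explainable by horizontal parallax?) can be sketched per pixel. This is a drastic simplification with an invented disparity range and threshold, not the authors' detector:

```python
def osd_mask(left, right, max_disp=3, thresh=40):
    """Flag pixels whose left/right luma difference cannot be explained
    by any horizontal shift within +/-max_disp columns: such pixels are
    OSD candidates rather than stereo parallax. Inputs are 2D lists of
    luma samples; thresholds are invented for illustration."""
    h, w = len(left), len(left[0])
    mask = [[False] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # Best match over the allowed disparity range (edge-clamped).
            best = min(
                abs(left[y][x] - right[y][min(max(x + d, 0), w - 1)])
                for d in range(-max_disp, max_disp + 1)
            )
            mask[y][x] = best > thresh
    return mask
```

A purely shifted (parallax) row yields an empty mask, while a bright overlay present in only one image is flagged.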
Vroom: designing an augmented environment for remote collaboration in digital cinema production
NASA Astrophysics Data System (ADS)
Margolis, Todd; Cornish, Tracy
2013-03-01
As media technologies become increasingly affordable, compact and inherently networked, new generations of telecollaborative platforms continue to arise which integrate these new affordances. Virtual reality has been primarily concerned with creating simulations of environments that can transport participants to real or imagined spaces that replace the "real world". Meanwhile Augmented Reality systems have evolved to interleave objects from Virtual Reality environments into the physical landscape. Perhaps now there is a new class of systems that reverse this precept to enhance dynamic media landscapes and immersive physical display environments to enable intuitive data exploration through collaboration. Vroom (Virtual Room) is a next-generation reconfigurable tiled display environment in development at the California Institute for Telecommunications and Information Technology (Calit2) at the University of California, San Diego. Vroom enables freely scalable digital collaboratories, connecting distributed, high-resolution visualization resources for collaborative work in the sciences, engineering and the arts. Vroom transforms a physical space into an immersive media environment with large format interactive display surfaces, video teleconferencing and spatialized audio built on a high-speed optical network backbone. Vroom enables group collaboration for local and remote participants to share knowledge and experiences. Possible applications include: remote learning, command and control, storyboarding, post-production editorial review, high resolution video playback, 3D visualization, screencasting and image, video and multimedia file sharing. To support these various scenarios, Vroom features support for multiple user interfaces (optical tracking, touch UI, gesture interface, etc.), support for directional and spatialized audio, giga-pixel image interactivity, 4K video streaming, 3D visualization and telematic production.
This paper explains the design process that has been utilized to make Vroom an accessible and intuitive immersive environment for remote collaboration specifically for digital cinema production.
The impact of video technology on learning: A cooking skills experiment.
Surgenor, Dawn; Hollywood, Lynsey; Furey, Sinéad; Lavelle, Fiona; McGowan, Laura; Spence, Michelle; Raats, Monique; McCloat, Amanda; Mooney, Elaine; Caraher, Martin; Dean, Moira
2017-07-01
This study examines the role of video technology in the development of cooking skills. The study explored the views of 141 female participants on whether video technology can promote confidence in learning new cooking skills to assist in meal preparation. Prior to each focus group participants took part in a cooking experiment to assess the most effective method of learning for low-skilled cooks across four experimental conditions (recipe card only; recipe card plus video demonstration; recipe card plus video demonstration conducted in segmented stages; and recipe card plus video demonstration whereby participants freely accessed video demonstrations as and when needed). Focus group findings revealed that video technology was perceived to assist learning in the cooking process in the following ways: (1) improved comprehension of the cooking process; (2) real-time reassurance in the cooking process; (3) assisting the acquisition of new cooking skills; and (4) enhancing the enjoyment of the cooking process. These findings display the potential for video technology to promote motivation and confidence as well as enhancing cooking skills among low-skilled individuals wishing to cook from scratch using fresh ingredients. Copyright © 2017 Elsevier Ltd. All rights reserved.
A Framework for Realistic Modeling and Display of Object Surface Appearance
NASA Astrophysics Data System (ADS)
Darling, Benjamin A.
With advances in screen and video hardware technology, the type of content presented on computers has progressed from text and simple shapes to high-resolution photographs, photorealistic renderings, and high-definition video. At the same time, there have been significant advances in the area of content capture, with the development of devices and methods for creating rich digital representations of real-world objects. Unlike photo or video capture, which provide a fixed record of the light in a scene, these new technologies provide information on the underlying properties of the objects, allowing their appearance to be simulated for novel lighting and viewing conditions. These capabilities provide an opportunity to continue the computer display progression, from high-fidelity image presentations to digital surrogates that recreate the experience of directly viewing objects in the real world. In this dissertation, a framework was developed for representing objects with complex color, gloss, and texture properties and displaying them onscreen to appear as if they are part of the real-world environment. At its core, there is a conceptual shift from a traditional image-based display workflow to an object-based one. Instead of presenting the stored patterns of light from a scene, the objective is to reproduce the appearance attributes of a stored object by simulating its dynamic patterns of light for the real viewing and lighting geometry. This is accomplished using a computational approach where the physical light sources are modeled and the observer and display screen are actively tracked. Surface colors are calculated for the real spectral composition of the illumination with a custom multispectral rendering pipeline. In a set of experiments, the accuracy of color and gloss reproduction was evaluated by measuring the screen directly with a spectroradiometer. 
Gloss reproduction was assessed by comparing gonio measurements of the screen output to measurements of the real samples in the same measurement configuration. A chromatic adaptation experiment was performed to evaluate color appearance in the framework and explore the factors that contribute to differences when viewing self-luminous displays as opposed to reflective objects. A set of sample applications was developed to demonstrate the potential utility of the object display technology for digital proofing, psychophysical testing, and artwork display.
Nimbalkar, Somashekhar Marutirao; Raval, Himalaya; Bansal, Satvik Chaitanya; Pandya, Utkarsh; Pathak, Ajay
2018-05-03
Effective communication with parents is a very important skill for pediatricians, especially in a neonatal setup. The authors analyzed the non-verbal communication of medical caregivers during counseling sessions. Recorded videos of counseling sessions from March-April 2016 were audited. Counseling episodes were scored using the Non-verbal Immediacy Scale Observer Report (NIS-O). A total of 150 videos of counseling sessions were audited. The mean (SD) total score on the NIS-O was 78.96 (7.07). Sessions in which females were counseled had a significantly higher proportion of low scores (p < 0.001). No video revealed a high score. Overall, 67 (44.67%) sessions revealed a low total score. This reflects an urgent need to develop strategies to improve communication skills in the neonatal unit. This study lays down a template on which other neonatal intensive care units (NICUs) can carry out gap-defining audits.
ERIC Educational Resources Information Center
Kucalaba, Linda
Previous studies have found that the librarian's use of book displays and recommended lists is an effective means of increasing circulation in the public library. Yet conflicting results were found when these merchandising techniques were used with collection materials in the nonprint format, specifically audiobooks and videos, instead of books.…
[Development of a system for ultrasonic three-dimensional reconstruction of fetus].
Baba, K
1989-04-01
We have developed a system for ultrasonic three-dimensional (3-D) fetus reconstruction using computers. Either a real-time linear array probe or a convex array probe of an ultrasonic scanner was mounted on a position sensor arm of a manual compound scanner in order to detect the position of the probe. A microcomputer was used to convert the position information into an image that could be recorded on a video tape. This image was superimposed on the ultrasonic tomographic image simultaneously with a superimposer and recorded on a video tape. Fetuses in utero were scanned in seven cases. More than forty ultrasonic section images on the video tape were fed into a minicomputer. The shape of the fetus was displayed three-dimensionally by means of computer graphics. The computer-generated display produced a 3-D image of the fetus and showed the usefulness and accuracy of this system. Since it took only a few seconds for data collection by ultrasonic inspection, fetal movement did not adversely affect the results. Data input took about ten minutes for 40 slices, and 3-D reconstruction and display took about two minutes. The system made it possible to observe and record the 3-D image of the fetus in utero non-invasively and therefore is expected to make it much easier to obtain a 3-D picture of the fetus in utero.
Yoshida, Soichiro; Kihara, Kazunori; Takeshita, Hideki; Fujii, Yasuhisa
2014-12-01
The head-mounted display (HMD) is a new image monitoring system. We developed the Personal Integrated-image Monitoring System (PIM System) using the HMD (HMZ-T2, Sony Corporation, Tokyo, Japan) in combination with video splitters and multiplexers as a surgical guide system for transurethral resection of the prostate (TURP). The imaging information obtained from the cystoscope, the transurethral ultrasonography (TRUS), the video camera attached to the HMD, and the patient's vital signs monitor was split and integrated by the PIM System, and a composite image was displayed by the HMD using a four-split screen technique. Wearing the HMD, the lead surgeon and the assistant could simultaneously and continuously monitor the same information displayed by the HMD in an ergonomically efficient posture. Each participant could independently rearrange the images comprising the composite image depending on the current step of the procedure. Two benign prostatic hyperplasia (BPH) patients underwent TURP performed by surgeons guided by this system. In both cases, the TURP procedure was successfully performed, and the postoperative clinical courses had no remarkable unfavorable events. During the procedure, none of the participants experienced any HMD-wear-related adverse effects or reported any discomfort.
Micro-video display with ocular tracking and interactive voice control
NASA Technical Reports Server (NTRS)
Miller, James E.
1993-01-01
In certain space-restricted environments, many of the benefits resulting from computer technology have been foregone because of the size, weight, inconvenience, and lack of mobility associated with existing computer interface devices. Accordingly, an effort to develop a highly miniaturized and 'wearable' computer display and control interface device, referred to as the Sensory Integrated Data Interface (SIDI), is underway. The system incorporates a micro-video display that provides data display and ocular tracking on a lightweight headset. Software commands are implemented by conjunctive eye movement and voice commands of the operator. In this initial prototyping effort, various 'off-the-shelf' components have been integrated with a desktop computer and a customized menu-tree software application to demonstrate feasibility and conceptual capabilities. When fully developed as a customized system, the interface device will allow mobile, 'hands-free' operation of portable computer equipment. It will thus allow integration of information technology applications into those restrictive environments, both military and industrial, that have not yet taken advantage of the computer revolution. This effort is Phase 1 of Small Business Innovation Research (SBIR) Topic number N90-331 sponsored by the Naval Undersea Warfare Center Division, Newport. The prime contractor is Foster-Miller, Inc. of Waltham, MA.
Videos for Teachers: Successful Teaching Strategies in Middle and High School Classrooms. [CD-ROM].
ERIC Educational Resources Information Center
Teachers Network, New York, NY.
This CD-ROM presents six videos that feature veteran middle and high school teachers in action in their classrooms. Each video offers links to supplemental education resources, including innovative lesson plans. The six videos are: "Monsters and Myths" (a humanities unit for middle school students); "The Bleeding Edge" (a thematic…
A large flat panel multifunction display for military and space applications
NASA Astrophysics Data System (ADS)
Pruitt, James S.
1992-09-01
A flat panel multifunction display (MFD) that offers the size and reliability benefits of liquid crystal display technology while achieving near-CRT display quality is presented. Display generation algorithms that provide exceptional display quality are being implemented in custom VLSI components to minimize MFD size. A high-performance processor converts user-specified display lists to graphics commands used by these components, resulting in high-speed updates of two-dimensional and three-dimensional images. The MFD uses the MIL-STD-1553B data bus for compatibility with virtually all avionics systems. The MFD can generate displays directly from display lists received from the MIL-STD-1553B bus. Complex formats can be stored in the MFD and displayed using parameters from the data bus. The MFD also accepts direct video input and performs special processing on this input to enhance image quality.
NASA Astrophysics Data System (ADS)
Qin, Chen; Ren, Bin; Guo, Longfei; Dou, Wenhua
2014-11-01
Multi-projector three-dimensional (3D) display is a promising multi-view, glasses-free 3D display technology that can produce full-colour, high-definition 3D images on its screen. One key problem of multi-projector 3D display is how to acquire the source images for the projector array while avoiding the pseudoscopic problem. This paper first analyses the display characteristics of multi-projector 3D displays and then proposes a projector content synthesis method using a tetrahedral transform. A 3D video format based on a stereo image pair and an associated disparity map is presented; it is well suited to any type of multi-projector 3D display and has an advantage in saving storage. Experimental results show that our method solves the pseudoscopic problem.
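The paper's own synthesis method is the tetrahedral transform; as a much simpler, hedged illustration of why the "stereo pair + disparity map" format saves storage, the sketch below synthesizes an intermediate view by forward-warping pixels by a scaled disparity (a generic technique, not the authors' algorithm; the one-scanline data is invented).

```python
# Toy 1-D view synthesis from one scanline plus a per-pixel disparity map.
# Intermediate projector views are generated on the fly instead of stored.
import numpy as np

row = np.array([10.0, 20.0, 30.0, 40.0, 50.0])  # one scanline of the left view
disparity = np.array([1, 1, 0, 0, 0])           # per-pixel disparity, in pixels

def synthesize(alpha: float) -> np.ndarray:
    """Forward-warp the scanline toward a virtual view at fraction alpha
    of the baseline; occluded/unfilled positions are left as zeros (holes)."""
    out = np.zeros_like(row)
    for x in range(len(row)):
        tx = x + int(round(alpha * disparity[x]))
        if 0 <= tx < len(row):
            out[tx] = row[x]
    return out

left_again = synthesize(0.0)   # alpha = 0 reproduces the stored view exactly
half_view = synthesize(0.5)    # a virtual view between the stereo pair
```

A real renderer would additionally fill holes and resolve occlusions by depth ordering; the point here is only that many views derive from one pair plus its disparity map.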
Broadening the interface bandwidth in simulation based training
NASA Technical Reports Server (NTRS)
Somers, Larry E.
1989-01-01
Currently, most computer-based simulations rely exclusively on computer-generated graphics to create the simulation. When training is involved, the method almost exclusively used to display information to the learner is text displayed on the cathode ray tube. MICROEXPERT Systems is concentrating on broadening the communications bandwidth between the computer and user by employing a novel approach to video image storage combined with sound and voice output. An expert system is used to combine and control the presentation of analog video, sound, and voice output with computer-based graphics and text. Researchers are currently involved in the development of several graphics-based user interfaces for NASA, the U.S. Army, and the U.S. Navy. Here, the focus is on the human factors considerations, software modules, and hardware components being used to develop these interfaces.
Psycho-physiological effects of head-mounted displays in ubiquitous use
NASA Astrophysics Data System (ADS)
Kawai, Takashi; Häkkinen, Jukka; Oshima, Keisuke; Saito, Hiroko; Yamazoe, Takashi; Morikawa, Hiroyuki; Nyman, Göte
2011-02-01
In this study, two experiments were conducted to evaluate the psycho-physiological effects of practical use of a monocular head-mounted display (HMD) in a real-world environment, based on the assumption of consumer-level applications such as viewing video content and receiving navigation information while walking. In experiment 1, the workload was examined for different types of stimulus presentation using an HMD (monocular or binocular, see-through or non-see-through). Experiment 2 focused on the relationship between the real-world environment and the visual information presented using a monocular HMD. The workload was compared between a case where participants walked while viewing video content unrelated to the real-world environment, and a case where participants walked while viewing visual information that augmented the real-world environment, such as navigation cues.
Synchronized voltage contrast display analysis system
NASA Technical Reports Server (NTRS)
Johnston, M. F.; Shumka, A.; Miller, E.; Evans, K. C. (Inventor)
1982-01-01
An apparatus and method for comparing internal voltage potentials of first and second operating electronic components such as large scale integrated circuits (LSI's) in which voltage differentials are visually identified via an appropriate display means are described. More particularly, in a first embodiment of the invention a first and second scanning electron microscope (SEM) are configured to scan a first and second operating electronic component respectively. The scan pattern of the second SEM is synchronized to that of the first SEM so that both simultaneously scan corresponding portions of the two operating electronic components. Video signals from each SEM corresponding to secondary electron signals generated as a result of a primary electron beam intersecting each operating electronic component in accordance with a predetermined scan pattern are provided to a video mixer and color encoder.
Flow visualization of CFD using graphics workstations
NASA Technical Reports Server (NTRS)
Lasinski, Thomas; Buning, Pieter; Choi, Diana; Rogers, Stuart; Bancroft, Gordon
1987-01-01
High performance graphics workstations are used to visualize the fluid flow dynamics obtained from supercomputer solutions of computational fluid dynamic programs. The visualizations can be done independently on the workstation or while the workstation is connected to the supercomputer in a distributed computing mode. In the distributed mode, the supercomputer interactively performs the computationally intensive graphics rendering tasks while the workstation performs the viewing tasks. A major advantage of the workstations is that the viewers can interactively change their viewing position while watching the dynamics of the flow fields. An overview of the computer hardware and software required to create these displays is presented. For complex scenes the workstation cannot create the displays fast enough for good motion analysis. For these cases, the animation sequences are recorded on video tape or 16 mm film a frame at a time and played back at the desired speed. The additional software and hardware required to create these video tapes or 16 mm movies are also described. Photographs illustrating current visualization techniques are discussed. Examples of the use of the workstations for flow visualization through animation are available on video tape.
The virtual brain: 30 years of video-game play and cognitive abilities.
Latham, Andrew J; Patston, Lucy L M; Tippett, Lynette J
2013-09-13
Forty years have passed since video-games were first made widely available to the public and subsequently playing games has become a favorite past-time for many. Players continuously engage with dynamic visual displays with success contingent on the time-pressured deployment, and flexible allocation, of attention as well as precise bimanual movements. Evidence to date suggests that both brief and extensive exposure to video-game play can result in a broad range of enhancements to various cognitive faculties that generalize beyond the original context. Despite promise, video-game research is host to a number of methodological issues that require addressing before progress can be made in this area. Here an effort is made to consolidate the past 30 years of literature examining the effects of video-game play on cognitive faculties and, more recently, neural systems. Future work is required to identify the mechanism that allows the act of video-game play to generate such a broad range of generalized enhancements.
Scorebox extraction from mobile sports videos using Support Vector Machines
NASA Astrophysics Data System (ADS)
Kim, Wonjun; Park, Jimin; Kim, Changick
2008-08-01
The scorebox plays an important role in understanding the content of sports videos. However, the tiny scorebox may give small-display viewers an uncomfortable experience in grasping the game situation. In this paper, we propose a novel framework to extract the scorebox from sports video frames. We first extract candidates by using accumulated intensity and edge information after a short learning period. Since there are various types of scoreboxes inserted in sports videos, multiple attributes need to be used for efficient extraction. Based on those attributes, the optimal information gain is computed, and the top three ranked attributes in terms of information gain are selected as a three-dimensional feature vector for Support Vector Machines (SVM) to distinguish the scorebox from other candidates, such as logos and advertisement boards. The proposed method is tested on videos of various sports games, and experimental results show the efficiency and robustness of our proposed method.
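The selection-then-classification step described above can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' code: the five candidate-region attributes and the synthetic labels are invented, information gain is estimated with a simple median-split entropy reduction, and the SVM is a small Pegasos-style linear SVM trained by subgradient descent rather than the paper's implementation.

```python
# Rank candidate-region attributes by information gain, keep the top three
# as a 3-D feature vector, and train a linear SVM to separate scoreboxes
# (+1) from other overlays such as logos (-1). All data is synthetic.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                 # 5 hypothetical attributes per candidate
y = np.where(X[:, 0] + X[:, 2] > 0, 1, -1)    # labels driven by attributes 0 and 2

def information_gain(col, labels):
    """Entropy reduction from splitting the labels at the column's median."""
    def H(ls):
        if len(ls) == 0:
            return 0.0
        p = np.mean(ls == 1)
        return 0.0 if p in (0.0, 1.0) else -p * np.log2(p) - (1 - p) * np.log2(1 - p)
    mask = col > np.median(col)
    w = mask.mean()
    return H(labels) - w * H(labels[mask]) - (1 - w) * H(labels[~mask])

gains = np.array([information_gain(X[:, j], y) for j in range(X.shape[1])])
top3 = np.argsort(gains)[-3:]                 # three most informative attributes
F = X[:, top3]

# Pegasos-style subgradient descent on the hinge loss (stand-in for SVM training).
w, b, lam = np.zeros(3), 0.0, 0.01
for t in range(1, 2001):
    i = rng.integers(len(F))
    eta = 1.0 / (lam * t)
    if y[i] * (F[i] @ w + b) < 1:             # margin violated: full subgradient step
        w = (1 - eta * lam) * w + eta * y[i] * F[i]
        b += eta * y[i]
    else:                                     # only the regularization shrinkage
        w = (1 - eta * lam) * w

accuracy = np.mean(np.sign(F @ w + b) == y)
```

In practice one would use a library SVM with a validation split; the sketch only shows how a three-attribute feature vector chosen by information gain feeds a margin classifier.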
Viewing the viewers: how adults with attentional deficits watch educational videos.
Hassner, Tal; Wolf, Lior; Lerner, Anat; Leitner, Yael
2014-10-01
Knowing how adults with ADHD interact with prerecorded video lessons at home may provide a novel means of early screening and long-term monitoring for ADHD. Viewing patterns of 484 students with known ADHD were compared with those of 484 age-, gender-, and academically matched controls chosen from 8,699 non-ADHD students. Transcripts generated by their video playback software were analyzed using t tests and regression analysis. ADHD students displayed significant tendencies (p ≤ .05) to watch videos with more pauses and more reviews of previously watched parts. Other parameters showed similar tendencies. Regression analysis indicated that attentional deficits remained constant across age and gender but varied with learning experience. There were measurable and significant differences between the video-viewing habits of the ADHD and non-ADHD students. This provides a new perspective on how adults cope with attention deficits and suggests a novel means of early screening for ADHD. © 2011 SAGE Publications.
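The style of comparison the abstract reports can be illustrated with a short stand-alone sketch. The numbers below are invented, not the study's transcripts; the test statistic is Welch's t with a normal-approximation p value, which is adequate for groups of this size.

```python
# Two-sample comparison of per-student pause counts, ADHD vs. matched
# controls (synthetic data shaped like the study's group sizes, n = 484).
import math
import random
from statistics import NormalDist, mean, variance

random.seed(1)
pauses_adhd = [random.gauss(9.0, 3.0) for _ in range(484)]  # more pauses on average
pauses_ctrl = [random.gauss(7.0, 3.0) for _ in range(484)]

def welch_t(a, b):
    """Welch's t statistic and a two-sided p value via the normal
    approximation (fine here because both samples are large)."""
    va, vb = variance(a) / len(a), variance(b) / len(b)
    t = (mean(a) - mean(b)) / math.sqrt(va + vb)
    p = 2 * (1 - NormalDist().cdf(abs(t)))
    return t, p

t_stat, p_value = welch_t(pauses_adhd, pauses_ctrl)
# With a mean difference of ~2 pauses at n = 484 per group, p is far below .05.
```

The regression step in the study would additionally model age, gender, and learning experience as covariates rather than testing the group difference alone.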
Introducing a Public Stereoscopic 3D High Dynamic Range (SHDR) Video Database
NASA Astrophysics Data System (ADS)
Banitalebi-Dehkordi, Amin
2017-03-01
High dynamic range (HDR) displays and cameras are paving their way through the consumer market at a rapid growth rate. Thanks to TV and camera manufacturers, HDR systems are now becoming commercially available to end users. This is taking place only a few years after the blooming of 3D video technologies. MPEG/ITU are also actively working towards the standardization of these technologies. However, preliminary research efforts in these video technologies are hampered by the lack of sufficient experimental data. In this paper, we introduce a stereoscopic 3D HDR database of videos that is made publicly available to the research community. We explain the procedure taken to capture, calibrate, and post-process the videos. In addition, we provide insights on potential use-cases, challenges, and research opportunities implied by the combination of the higher dynamic range of the HDR aspect and the depth impression of the 3D aspect.
Bartholow, Bruce D; Sestir, Marc A; Davis, Edward B
2005-11-01
Research has shown that exposure to violent video games causes increases in aggression, but the mechanisms of this effect have remained elusive. Also, potential differences in short-term and long-term exposure are not well understood. An initial correlational study shows that video game violence exposure (VVE) is positively correlated with self-reports of aggressive behavior and that this relation is robust to controlling for multiple aspects of personality. A lab experiment showed that individuals low in VVE behave more aggressively after playing a violent video game than after a nonviolent game but that those high in VVE display relatively high levels of aggression regardless of game content. Mediational analyses show that trait hostility, empathy, and hostile perceptions partially account for the VVE effect on aggression. These findings suggest that repeated exposure to video game violence increases aggressive behavior in part via changes in cognitive and personality factors associated with desensitization.
Video: useful tool for delivering family planning messages.
Sumarsono, S K
1985-10-01
In 1969, the Government of Indonesia declared that the population explosion was a national problem. The National Family Planning Program was consequently launched to encourage adoption of the ideal of a small, happy and prosperous family norm. Micro-approach messages are composed of the following: physiology of menstruation; reproductive process; healthy pregnancy; rational family planning; rational application of contraceptives; infant and child care; nutrition improvement; increase in breastfeeding; increase in family income; education in family life; family health; and deferred marriage age. Macro-approach messages include: the population problem and its impact on socioeconomic aspects; efforts to cope with the population problem; and improvement of women's lot. In utilizing the media and communication channels, the program encourages the implementation of units and working units of IEC to produce IEC materials; utilizes all possible existing media and IEC channels; maintains the consistent linkage between the activity of mass media and the IEC activities in the field; and encourages the private sector to participate in the production of IEC media and materials. A media production center was set up and carries out the following activities: producing video cassettes for tv broadcasts of family planning drama, family planning news, and tv spots; producing duplicates of the video cassettes for distribution to provinces in support of the video network; producing teaching materials for family planning workers; and transferring family planning films onto video cassettes. A video network was developed and includes video monitors in family planning service points such as hospitals, family planning clinics and public places like bus stations. In 1985, the program will be expanded by 50 mobile information units equipped with video monitors. Video has potential to increase the productivity and effectiveness of the family planning program.
The video production process is cheaper and simpler than film production. Video will be very helpful as a communication aid in group meetings. It can also be used as a teaching aid for training.
User interface using a 3D model for video surveillance
NASA Astrophysics Data System (ADS)
Hata, Toshihiko; Boh, Satoru; Tsukada, Akihiro; Ozaki, Minoru
1998-02-01
These days, fewer people, who must carry out their tasks quickly and precisely, are required in industrial surveillance and monitoring applications such as plant control or building security. Utilizing multimedia technology is a good approach to meeting this need, and we previously developed Media Controller, which is designed for these applications and provides real-time recording and retrieval of digital video data in a distributed environment. In this paper, we propose a user interface for such a distributed video surveillance system in which 3D models of buildings and facilities are connected to the surveillance video. A novel method of synchronizing camera field data with each frame of a video stream is considered. This method records and reads the camera field data similarly to the video data and transmits it synchronously with the video stream. This enables the user interface to offer such useful functions as comprehending the camera field immediately and providing clues when visibility is poor, for not only live video but also playback video. We have also implemented and evaluated the display function which makes the surveillance video and the 3D model work together, using Media Controller with Java and Virtual Reality Modeling Language employed for multi-purpose and intranet use of the 3D model.
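The per-frame synchronization idea above can be sketched in a few lines. This is a toy with invented names (the paper does not publish its data layout): camera field records are stored keyed by frame index, exactly as the video frames are, so live and playback modes can both look up the camera field for whatever frame is on screen.

```python
# Per-frame camera field track, recorded "similarly to the video data":
# one record per video frame index.
from dataclasses import dataclass

@dataclass
class CameraField:
    pan: float   # degrees
    tilt: float  # degrees
    zoom: float  # focal-length multiplier

field_track = {
    0: CameraField(pan=10.0, tilt=-5.0, zoom=1.0),
    1: CameraField(pan=12.0, tilt=-5.0, zoom=1.0),
}

def field_for_frame(n: int) -> CameraField:
    """Return the camera field synchronized with video frame n, so the 3D
    model view can be updated for both live and playback video."""
    return field_track[n]
```

A real system would stream these records alongside the video and interpolate between them for frames without an exact record.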
Speech Auditory Alerts Promote Memory for Alerted Events in a Video-Simulated Self-Driving Car Ride.
Nees, Michael A; Helbein, Benji; Porter, Anna
2016-05-01
Auditory displays could be essential to helping drivers maintain situation awareness in autonomous vehicles, but to date, few or no studies have examined the effectiveness of different types of auditory displays for this application scenario. Recent advances in the development of autonomous vehicles (i.e., self-driving cars) have suggested that widespread automation of driving may be tenable in the near future. Drivers may be required to monitor the status of automation programs and vehicle conditions as they engage in secondary leisure or work tasks (entertainment, communication, etc.) in autonomous vehicles. An experiment compared memory for alerted events (a component of Level 1 situation awareness) using speech alerts, auditory icons, and a visual control condition during a video-simulated self-driving car ride with a visual secondary task. The alerts gave information about the vehicle's operating status and the driving scenario. Speech alerts resulted in better memory for alerted events. Both auditory display types resulted in less perceived effort devoted toward the study tasks but also greater perceived annoyance with the alerts. Speech auditory displays promoted Level 1 situation awareness during a simulation of a ride in a self-driving vehicle under routine conditions, but annoyance remains a concern with auditory displays. Speech auditory displays showed promise as a means of increasing Level 1 situation awareness of routine scenarios during an autonomous vehicle ride with an unrelated secondary task. © 2016, Human Factors and Ergonomics Society.
Video Bandwidth Compression System.
1980-08-01
The report describes decoder hardware for the video bandwidth compression system, including a scaling function located between the inverse DPCM and inverse transform stages on the decoder matrix multiplier chips; a bit unpacker and inverse DPCM slave sync board; inverse DPCM loop boards; an inverse transform board; a composite video output board; a display refresh memory (memory section, timing and control); and an inverse transform processor.
Sexual Orientation and U.S. Military Personnel Policy: Options and Assessment
1993-01-01
include smaller actions, such as allocation of time to the new policy and keeping the change before members through video or other messages such as... were also taken. A condensed video and still-picture record has been provided separately, and the complete videotape and all photography have been... touching, leering, lascivious remarks, and the display of pornographic material...
Leading the Development of Concepts of Operations for Next-Generation Remotely Piloted Aircraft
2016-01-01
overarching CONOPS. RPAs must provide full motion video and signals intelligence (SIGINT) capabilities to fulfill their intelligence, surveillance, and... reached full capacity, combatant commanders had an insatiable demand for this new breed of capability, and phrases like "Pred porn" and "drone strike"... dimensional steering line on the video feed of the pilot's head-up display (HUD) that would indicate turning cues and finite steering paths for optimal
Method and apparatus for calibrating a tiled display
NASA Technical Reports Server (NTRS)
Chen, Chung-Jen (Inventor); Johnson, Michael J. (Inventor); Chandrasekhar, Rajesh (Inventor)
2001-01-01
A display system that can be calibrated and re-calibrated with a minimal amount of manual intervention. To accomplish this, one or more cameras are provided to capture an image of the display screen. The resulting captured image is processed to identify any non-desirable characteristics, including visible artifacts such as seams, bands, rings, etc. Once the non-desirable characteristics are identified, an appropriate transformation function is determined. The transformation function is used to pre-warp the input video signal that is provided to the display such that the non-desirable characteristics are reduced or eliminated from the display. The transformation function preferably compensates for spatial non-uniformity, color non-uniformity, luminance non-uniformity, and other visible artifacts.
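One part of the patented approach (compensating luminance non-uniformity by pre-warping the input signal) can be shown numerically. The capture below is simulated and the per-pixel gain map is a deliberately minimal stand-in for the patent's transformation function, which also handles spatial and color non-uniformity.

```python
# Derive a per-pixel correction from a (simulated) camera capture of the
# tiled screen showing a flat field, then pre-warp a uniform input frame
# so that the displayed result comes out uniform despite the screen.
import numpy as np

h, w = 4, 6
# Simulated flat-field capture: tiles brighter in the centre, dimmer at seams.
captured = 0.6 + 0.4 * np.outer(np.hanning(h), np.hanning(w))

gain = captured.min() / captured        # attenuate bright regions to the dimmest level
frame = np.full((h, w), 200.0)          # a uniform input video frame
prewarped = frame * gain                # signal actually sent to the display

# The physical screen re-applies its non-uniformity; the two effects cancel.
displayed = prewarped * (captured / captured.min())
assert np.allclose(displayed, frame)
```

Correcting to the dimmest measured level trades peak brightness for uniformity, which is why such systems benefit from periodic, camera-driven re-calibration rather than a one-time factory map.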
Integrating critical interface elements for intuitive single-display aviation control of UAVs
NASA Astrophysics Data System (ADS)
Cooper, Joseph L.; Goodrich, Michael A.
2006-05-01
Although advancing levels of technology allow UAV operators to give increasingly complex commands with expanding temporal scope, it is unlikely that the need for immediate situation awareness and local, short-term flight adjustment will ever be completely superseded. Local awareness and control are particularly important when the operator uses the UAV to perform a search or inspection task. There are many different tasks which would be facilitated by search and inspection capabilities of a camera-equipped UAV. These tasks range from bridge inspection and news reporting to wilderness search and rescue. The system should be simple, inexpensive, and intuitive for non-pilots. An appropriately designed interface should (a) provide a context for interpreting video and (b) support UAV tasking and control, all within a single display screen. In this paper, we present and analyze an interface that attempts to accomplish this goal. The interface utilizes a georeferenced terrain map rendered from publicly available altitude data and terrain imagery to create a context in which the location of the UAV and the source of the video are communicated to the operator. Rotated and transformed imagery from the UAV provides a stable frame of reference for the operator and integrates cleanly into the terrain model. Simple icons overlaid onto the main display provide intuitive control and feedback when necessary but fade to a semi-transparent state when not in use to avoid distracting the operator's attention from the video signal. With various interface elements integrated into a single display, the interface runs nicely on a small, portable, inexpensive system with a single display screen and simple input device, but is powerful enough to allow a single operator to deploy, control, and recover a small UAV when coupled with appropriate autonomy. As we present elements of the interface design, we will identify concepts that can be leveraged into a large class of UAV applications.
VAP/VAT: video analytics platform and test bed for testing and deploying video analytics
NASA Astrophysics Data System (ADS)
Gorodnichy, Dmitry O.; Dubrofsky, Elan
2010-04-01
Deploying Video Analytics (VA) in operational environments is extremely challenging. This paper presents a methodological approach developed by the Video Surveillance and Biometrics Section (VSB) of the Science and Engineering Directorate (S&E) of the Canada Border Services Agency (CBSA) to resolve these problems. A three-phase approach to enabling VA deployment within an operational agency is presented, and the Video Analytics Platform and Testbed (VAP/VAT) developed by the VSB section is introduced. In addition to allowing the integration of third-party and in-house VA codes into an existing video surveillance infrastructure, VAP/VAT also allows the agency to conduct an unbiased performance evaluation of the cameras and VA software available on the market. VAP/VAT consists of two components: EventCapture, which automatically detects a "Visual Event", and EventBrowser, which displays and allows perusal of the "Visual Details" captured at the "Visual Event". To deal with both open-architecture and closed-architecture cameras, two video-feed capture mechanisms have been developed within the EventCapture component: IPCamCapture and ScreenCapture.
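The two capture mechanisms can be pictured as interchangeable back-ends behind a common interface; a minimal sketch of that plug-in structure (class and method names here are illustrative, not the CBSA's actual API):

```python
from abc import ABC, abstractmethod

class FeedCapture(ABC):
    """Common interface for video-feed capture back-ends (illustrative)."""
    @abstractmethod
    def grab_frame(self):
        """Return the next frame from the feed."""

class IPCamCapture(FeedCapture):
    """Open-architecture cameras: pull frames directly over the network."""
    def __init__(self, url):
        self.url = url
    def grab_frame(self):
        return f"frame from {self.url}"  # placeholder for an RTSP/HTTP fetch

class ScreenCapture(FeedCapture):
    """Closed-architecture cameras: grab the vendor's on-screen output."""
    def grab_frame(self):
        return "frame from screen buffer"  # placeholder for a screen grab

def detect_visual_event(capture: FeedCapture) -> bool:
    """EventCapture-style loop body: run analytics on one frame (stubbed)."""
    frame = capture.grab_frame()
    return frame is not None  # a real detector would analyze the frame here
```

The point of the abstraction is that EventCapture's detection loop never needs to know which of the two mechanisms supplied the frame.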
... It is a painless process that uses a computer and a video monitor to display bodily functions ... or as linegraphs we can see on a computer screen. In this way, we receive information (feedback) ...
75 FR 68379 - In the Matter of: Certain Video Game Systems and Controllers; Notice of Investigation
Federal Register 2010, 2011, 2012, 2013, 2014
2010-11-05
... INTERNATIONAL TRADE COMMISSION [Inv. No. 337-TA-743] In the Matter of: Certain Video Game Systems... within the United States after importation of certain video game systems and controllers by reason of... certain video game systems and controllers that infringe one or more of claims 16, 27-32, 44, 57, 68, 81...
ERIC Educational Resources Information Center
What Works Clearinghouse, 2015
2015-01-01
In the 2014 study, "The Effects of Math Video Games on Learning," researchers examined the impacts of math video games on the fractions knowledge of 1,468 sixth-grade students in 23 schools. The video games focused on fractions concepts including: whole units, numerator and denominator, understanding the number line, fractions…
Fair Play? Violence, Gender and Race in Video Games.
ERIC Educational Resources Information Center
Glaubke, Christina R.; Miller, Patti; Parker, McCrae A.; Espejo, Eileen
Based on the view that the level of market penetration of video games combined with the high levels of realism portrayed in these games make it important to investigate the messages video games send children, this report details a study of the 10 top-selling video games for each of 6 game systems available in the United States and for personal…
Ham Video Commissioning in Columbus
2014-04-13
Documentation of the Ham Video unit installed in the Columbus European Laboratory. Part number (P/N) is HAM-11000-0F, serial number (S/N) is 01, barcode is HAMV0001E. Image was taken during Expedition 39 Ham Video commissioning activities and released by astronaut on Twitter.
47 CFR 76.66 - Satellite broadcast signal carriage.
Code of Federal Regulations, 2012 CFR
2012-10-01
... MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Carriage of Television Broadcast Signals § 76.66 Satellite... satellite carrier that offers multichannel video programming distribution service in the United States to... entirety the primary video, accompanying audio, and closed captioning data contained in line 21 of the...
47 CFR 76.66 - Satellite broadcast signal carriage.
Code of Federal Regulations, 2013 CFR
2013-10-01
... MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Carriage of Television Broadcast Signals § 76.66 Satellite... satellite carrier that offers multichannel video programming distribution service in the United States to... entirety the primary video, accompanying audio, and closed captioning data contained in line 21 of the...
47 CFR 76.66 - Satellite broadcast signal carriage.
Code of Federal Regulations, 2011 CFR
2011-10-01
... MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Carriage of Television Broadcast Signals § 76.66 Satellite... satellite carrier that offers multichannel video programming distribution service in the United States to... entirety the primary video, accompanying audio, and closed captioning data contained in line 21 of the...
47 CFR 76.66 - Satellite broadcast signal carriage.
Code of Federal Regulations, 2014 CFR
2014-10-01
... MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Carriage of Television Broadcast Signals § 76.66 Satellite... satellite carrier that offers multichannel video programming distribution service in the United States to... entirety the primary video, accompanying audio, and closed captioning data contained in line 21 of the...
Wireless, relative-motion computer input device
Holzrichter, John F.; Rosenbury, Erwin T.
2004-05-18
The present invention provides a system for controlling a computer display in a workspace using an input unit/output unit. A train of EM waves is sent out to flood the workspace. EM waves are reflected from the input unit/output unit. A relative-distance-moved information signal is created using the EM waves that are reflected from the input unit/output unit. Algorithms are used to convert the relative-distance-moved information signal to a display signal. The computer display is controlled in response to the display signal.
Hardware/Software Issues for Video Guidance Systems: The Coreco Frame Grabber
NASA Technical Reports Server (NTRS)
Bales, John W.
1996-01-01
The F64 frame grabber is a high-performance video image acquisition and processing board utilizing the TMS320C40 and TMS34020 processors. The hardware is designed for the 16-bit ISA bus and supports multiple digital or analog cameras. It has an acquisition rate of 40 million pixels per second, with a variable sampling frequency of 510 kHz to 40 MHz. The board has a 4 MB frame buffer memory expandable to 32 MB, and has simultaneous acquisition and processing capability. It supports both VGA and RGB displays, and accepts all analog and digital video input standards.
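As a rough sanity check on those figures (our arithmetic, not the board's documentation): a 4 MB frame buffer holds about a dozen full 640 x 480 8-bit frames, and at 40 Mpixels/s such a frame is acquired in under 8 ms:

```python
MB = 1024 * 1024
buffer_bytes = 4 * MB
frame_bytes = 640 * 480          # one 640x480 8-bit monochrome frame
frames_in_buffer = buffer_bytes // frame_bytes

acq_rate = 40_000_000            # pixels per second
frame_time_ms = 1000 * frame_bytes / acq_rate
```

So the base configuration buffers 13 such frames, each captured in 7.68 ms, which is comfortably faster than a 30 Hz video field rate.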
Topor, David R; Swenson, Lance P; Liguori, Gina M; Spirito, Anthony; Lowenhaupt, Elizabeth A; Hunt, Jeffrey I
2011-12-01
Excessive video game use among youth has been a growing concern in the United States and elsewhere. The aims of this study are to establish validity of a video game measure in a large adolescent inpatient sample, identify clinical factors underlying problem video game use, and identify associations with measures of psychopathology. Three hundred eighty participants admitted to an adolescent inpatient psychiatric unit between November 2007 and March 2009 were administered a battery of self-report measures, including a questionnaire developed for this study that assessed reinforcers and consequences of past-year video game use (ie, Problematic Video Game Use Scale). Factor analysis was used to identify the underlying structure of behaviors associated with problem video game use. A factor analysis of the Problematic Video Game Use Scale indicated 2 primary factors. One was associated with engaging in problem behaviors that impaired the adolescent's functioning as a result of playing video games and one reflected the reinforcing effects of playing video games. Both factors were associated with measures of psychopathology, although associations were generally stronger for impairment in functioning than for reinforcing effects. Both factors were significantly correlated with self-reported daily video game use (P < .001). Two underlying factors emerged to account for problem video game playing: impairment in functioning and reinforcing effects. Initial evidence of the content validity of the video game measure was established. Findings highlight the importance of assessing video game use among an adolescent population, the factors associated with video game use, and associations with symptoms of psychopathology. Limitations include a common reporter for multiple measures and cross-sectional data that do not allow for causal links to be made. © Copyright 2011 Physicians Postgraduate Press, Inc.
Plant Chlorophyll Content Imager with Reference Detection Signals
NASA Technical Reports Server (NTRS)
Spiering, Bruce A. (Inventor); Carter, Gregory A. (Inventor)
2000-01-01
A portable plant chlorophyll imaging system is described which collects light reflected from a target plant and separates the collected light into two different wavelength bands. These wavelength bands, or channels, are described as having center wavelengths of 700 nm and 840 nm. The light collected in these two channels is processed using synchronized video cameras. A controller provided in the system compares the level of light of video images reflected from a target plant with a reference level of light from a source illuminating the plant. The percent of reflection in the two separate wavelength bands from a target plant are compared to provide a ratio video image which indicates a relative level of plant chlorophyll content and physiological stress. Multiple display modes are described for viewing the video images.
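The ratio-image idea can be sketched in a few lines: normalize each band by its illumination reference, then divide pixel-wise. This is a simplified model of the comparison the controller performs; the function and variable names are ours:

```python
import numpy as np

def ratio_image(band700, band840, ref700, ref840, eps=1e-6):
    """Percent reflectance in each band, then the 700 nm / 840 nm ratio.

    band700, band840: camera images in the two wavelength channels.
    ref700, ref840: reference detector levels for the source
    illuminating the plant (scalars here for simplicity).
    eps guards against division by zero in dark pixels.
    """
    refl700 = band700 / ref700          # percent reflection, 700 nm channel
    refl840 = band840 / ref840          # percent reflection, 840 nm channel
    return refl700 / (refl840 + eps)    # low ratio ~ high chlorophyll
```

Chlorophyll absorbs strongly near 700 nm but not at 840 nm, so a healthy leaf drives the ratio down while a stressed one drives it up.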
1991-08-15
Conversely, displays were constructed with normal density-controlled KDE cues… A gray background is displayed on all even frames, introducing 50% scintillation (density controlled)… Video tapes were prepared, each of which contained all the experimental ASL signs, distributed into different filter groups.
Prygun, A V; Lazarev, N V
1998-10-01
Radiation measurements at the workplaces of operators in command and control installations proved that the environmental parameters produced by electronic display operation comply with regulatory requirements. Nevertheless, operator health assessments show that the problem of personnel safety still exists. The authors recommend measures to improve the situation.
Print, Broadcast Students Share VDTs at West Fla.
ERIC Educational Resources Information Center
Roberts, Churchill L.; Dickson, Sandra H.
1985-01-01
Describes the use of video display terminals in the journalism lab of a Florida university. Discusses the different purposes for which broadcast and print journalism students use such equipment. (HTH)
The Use of Smart Glasses for Surgical Video Streaming.
Hiranaka, Takafumi; Nakanishi, Yuta; Fujishiro, Takaaki; Hida, Yuichi; Tsubosaka, Masanori; Shibata, Yosaku; Okimura, Kenjiro; Uemoto, Harunobu
2017-04-01
Observation of surgical procedures performed by experts is extremely important for acquisition and improvement of surgical skills. Smart glasses are small computers, which comprise a head-mounted monitor and video camera, and can be connected to the internet. They can be used for remote observation of surgeries by video streaming. Although Google Glass is the most commonly used smart glasses for medical purposes, it is still unavailable commercially and has some limitations. This article reports the use of a different type of smart glasses, InfoLinker, for surgical video streaming. InfoLinker has been commercially available in Japan for industrial purposes for more than 2 years. It is connected to a video server via wireless internet directly, and streaming video can be seen anywhere an internet connection is available. We have attempted live video streaming of knee arthroplasty operations that were viewed at several different locations, including foreign countries, on a common web browser. Although the quality of video images depended on the resolution and dynamic range of the video camera, speed of internet connection, and the wearer's attention to minimize image shaking, video streaming could be easily performed throughout the procedure. The wearer could confirm the quality of the video as the video was being shot by the head-mounted display. The time and cost for observation of surgical procedures can be reduced by InfoLinker, and further improvement of hardware as well as the wearer's video shooting technique is expected. We believe that this can be used in other medical settings.
Scholes, Edwin
2008-01-01
Ethology is rooted in the idea that behavior is composed of discrete units and sub-units that can be compared among taxa in a phylogenetic framework. This means that behavior, like morphology and genes, is inherently modular. Yet, the concept of modularity is not well integrated into how we envision the behavioral components of phenotype. Understanding ethological modularity, and its implications for animal phenotype organization and evolution, requires that we construct interpretive schemes that permit us to examine it. In this study, I describe the structure and composition of a complex part of the behavioral phenotype of Parotia lawesii Ramsay, 1885--a bird of paradise (Aves: Paradisaeidae) from the forests of eastern New Guinea. I use archived voucher video clips, photographic ethograms, and phenotype ontology diagrams to describe the modular units comprising courtship at various levels of integration. Results show P. lawesii to have 15 courtship and mating behaviors (11 males, 4 females) hierarchically arranged within a complex seven-level structure. At the finest level examined, male displays are comprised of 49 modular sub-units (elements) differentially employed to form more complex modular units (phases and versions) at higher-levels of integration. With its emphasis on hierarchical modularity, this study provides an important conceptual framework for understanding courtship-related phenotypic complexity and provides a solid basis for comparative study of the genus Parotia.
NASA Astrophysics Data System (ADS)
Morozov, Alexander; Dubinin, German; Dubynin, Sergey; Yanusik, Igor; Kim, Sun Il; Choi, Chil-Sung; Song, Hoon; Lee, Hong-Seok; Putilin, Andrey; Kopenkin, Sergey; Borodin, Yuriy
2017-06-01
Future commercialization of glasses-free holographic real 3D displays requires not only appropriate image quality but also a slim design of the backlight unit and the whole display device to match market needs. While much research has aimed to solve the computational issues of forming computer-generated holograms for 3D holographic displays, less attention has been given to the development of backlight units suitable for 3D holographic display applications with the form factor of conventional 2D display systems. We therefore report a coherent backlight unit for a 3D holographic display with thickness comparable to commercially available 2D displays (cell phones, tablets, laptops, etc.). The coherent backlight unit forms uniform, highly collimated and effective illumination of the spatial light modulator. Realization of such a backlight unit is possible due to holographic optical elements, based on volume gratings, constructing a coherent collimated beam to illuminate the display plane. The design, recording and measurement of a 5.5 inch coherent backlight unit based on two holographic optical elements are presented in this paper.
Feasibility of video codec algorithms for software-only playback
NASA Astrophysics Data System (ADS)
Rodriguez, Arturo A.; Morse, Ken
1994-05-01
Software-only video codecs can provide good playback performance in desktop computers with a 486 or 68040 CPU running at 33 MHz without special hardware assistance. Typically, playback of compressed video can be categorized into three tasks: the actual decoding of the video stream, color conversion, and the transfer of decoded video data from system RAM to video RAM. By current standards, good playback performance is the decoding and display of video streams of 320 by 240 (or larger) compressed frames at 15 (or greater) frames per second. Software-only video codecs have evolved by modifying and tailoring existing compression methodologies to suit video playback in desktop computers. In this paper we examine the characteristics used to evaluate software-only video codec algorithms, namely: image fidelity (i.e., image quality), bandwidth (i.e., compression), ease of decoding (i.e., playback performance), memory consumption, compression-to-decompression asymmetry, scalability, and delay. We discuss the tradeoffs among these variables and the compromises that can be made to achieve low numerical complexity for software-only playback. Frame-differencing approaches are described since software-only video codecs typically employ them to enhance playback performance. To complement other papers that appear in this session of the Proceedings, we review methods derived from binary pattern image coding since these methods are amenable to software-only playback. In particular, we introduce a novel approach called pixel distribution image coding.
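Frame differencing, the staple of those software-only codecs, can be sketched as: transmit only the blocks that changed beyond a threshold, and patch them into the previous decoded frame. This is a toy illustration of the principle, not any specific codec:

```python
import numpy as np

BLOCK = 8  # 8x8 pixel blocks, a common granularity

def encode_diff(prev, cur, thresh=4):
    """Return ((y, x), block) pairs for blocks that changed noticeably."""
    updates = []
    h, w = cur.shape
    for y in range(0, h, BLOCK):
        for x in range(0, w, BLOCK):
            pb = prev[y:y+BLOCK, x:x+BLOCK].astype(int)
            cb = cur[y:y+BLOCK, x:x+BLOCK].astype(int)
            if np.abs(cb - pb).max() > thresh:
                updates.append(((y, x), cur[y:y+BLOCK, x:x+BLOCK].copy()))
    return updates

def decode_diff(prev, updates):
    """Apply the changed blocks onto a copy of the previous frame."""
    out = prev.copy()
    for (y, x), blk in updates:
        out[y:y+BLOCK, x:x+BLOCK] = blk
    return out
```

On largely static desktop video most blocks are skipped, which is exactly why this approach made 15 frames per second feasible without hardware assistance.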
Multiple-Flat-Panel System Displays Multidimensional Data
NASA Technical Reports Server (NTRS)
Gundo, Daniel; Levit, Creon; Henze, Christopher; Sandstrom, Timothy; Ellsworth, David; Green, Bryan; Joly, Arthur
2006-01-01
The NASA Ames hyperwall is a display system designed to facilitate the visualization of sets of multivariate and multidimensional data like those generated in complex engineering and scientific computations. The hyperwall includes a 7 x 7 matrix of computer-driven flat-panel video display units, each presenting an image of 1,280 x 1,024 pixels. The term hyperwall reflects the fact that this system is a more capable successor to prior computer-driven multiple-flat-panel display systems known by names that include the generic term powerwall and the trade names PowerWall and Powerwall. Each of the 49 flat-panel displays is driven by a rack-mounted, dual-central-processing-unit, workstation-class personal computer equipped with a high-performance graphical-display circuit card and with a hard-disk drive having a storage capacity of 100 GB. Each such computer is a slave node in a master/slave computing/data-communication system (see Figure 1). The computer that acts as the master node is similar to the slave-node computers, except that it runs the master portion of the system software and is equipped with a keyboard and mouse for control by a human operator. The system utilizes commercially available master/slave software along with custom software that enables the human controller to interact simultaneously with any number of selected slave nodes. In a powerwall, a single rendering task is spread across multiple processors and then the multiple outputs are tiled into one seamless super-display. It must be noted that the hyperwall concept subsumes the powerwall concept in that a single scene could be rendered as a mosaic image on the hyperwall. However, the hyperwall offers a wider set of capabilities to serve a different purpose: the hyperwall concept is one of (1) simultaneously displaying multiple different but related images, and (2) providing means for composing and controlling such sets of images.
In place of elaborate software or hardware crossbar switches, the hyperwall concept substitutes reliance on the human visual system for integration, synthesis, and discrimination of patterns in complex and high-dimensional data spaces represented by the multiple displayed images. The variety of multidimensional data sets that can be displayed on the hyperwall is practically unlimited. For example, Figure 2 shows a hyperwall display of surface pressures and streamlines from a computational simulation of airflow about an aerospacecraft at various Mach numbers and angles of attack. In this display, Mach numbers increase from left to right and angles of attack increase from bottom to top. That is, all images in the same column represent simulations at the same Mach number, while all images in the same row represent simulations at the same angle of attack. The same viewing transformations and the same mapping from surface pressure to colors were used in generating all the images.
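The parameter-sweep layout in Figure 2 amounts to a simple mapping from (Mach index, angle-of-attack index) to a slave node in the panel matrix; a sketch of that bookkeeping (the row-major node numbering is our assumption, for illustration):

```python
COLS = 7  # Mach numbers, increasing left to right
ROWS = 7  # angles of attack, increasing bottom to top

def node_for(mach_idx, aoa_idx):
    """Slave node driving the panel at column mach_idx, row aoa_idx
    (row-major numbering from the bottom-left panel; illustrative)."""
    if not (0 <= mach_idx < COLS and 0 <= aoa_idx < ROWS):
        raise ValueError("index outside the 7x7 panel matrix")
    return aoa_idx * COLS + mach_idx

def panels_at_mach(mach_idx):
    """All nodes in one column, i.e. one Mach number across all angles."""
    return [node_for(mach_idx, a) for a in range(ROWS)]
```

A master-node command such as "re-color every simulation at Mach index 3" then reduces to broadcasting to the node list returned by `panels_at_mach(3)`.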
Display of travelling 3D scenes from single integral-imaging capture
NASA Astrophysics Data System (ADS)
Martinez-Corral, Manuel; Dorado, Adrian; Hong, Seok-Min; Sola-Pikabea, Jorge; Saavedra, Genaro
2016-06-01
Integral imaging (InI) is a 3D auto-stereoscopic technique that captures and displays 3D images. We present a method for easily projecting the information recorded with this technique by transforming the integral image into a plenoptic image, as well as choosing, at will, the field of view (FOV) and the focused plane of the displayed plenoptic image. Furthermore, with this method we can generate, from a single integral image, a sequence of images that simulates a camera travelling through the scene. The application of this method improves the quality of 3D display images and videos.
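One standard way to turn an integral image into a set of perspective views, from which travelling sequences like these can be synthesized, is to gather the same pixel offset from every micro-lens cell. A minimal sketch of that remapping, assuming a square p x p pixel pitch per lenslet (this is the generic sub-aperture extraction, not necessarily the paper's exact pipeline):

```python
import numpy as np

def subaperture_view(integral_img, u, v, p):
    """Perspective view (u, v): pixel (u, v) taken from each p x p
    lenslet cell of the integral image (strided slicing does the gather)."""
    return integral_img[u::p, v::p]
```

Sweeping (u, v) across the lenslet pitch yields the stack of views whose parallax a virtual camera path can then interpolate.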
The USL NASA PC R and D interactive presentation development system
NASA Technical Reports Server (NTRS)
Dominick, Wayne D. (Editor); Moreau, Dennis R.
1984-01-01
The Interactive Presentation Development System (IPFS) is a highly interactive system for creating, editing, and displaying video presentation sequences, e.g., for developing and presenting displays of instructional material similar to overhead transparency or slide presentations. However, since this system is PC-based, users (instructors) can step through sequences forward or backward, focusing attention on areas of the display with special cursor pointers. Additionally, screen displays may be dynamically modified during the presentation to show assignments or to answer questions, much like a traditional blackboard. This system is now implemented at the University of Southwestern Louisiana for use within the piloting phases of the NASA contract work.
AOIPS water resources data management system
NASA Technical Reports Server (NTRS)
Vanwie, P.
1977-01-01
The text and computer-generated displays used to demonstrate the AOIPS (Atmospheric and Oceanographic Information Processing System) water resources data management system are described. The system was developed to assist hydrologists in analyzing the physical processes occurring in watersheds. It was designed to alleviate some of the problems encountered while investigating the complex interrelationships of variables such as land-cover type, topography, precipitation, snow melt, surface runoff, evapotranspiration, and streamflow rates. The system has an interactive image processing capability and a color video display for presenting results as they are obtained.
Interactive display system having a scaled virtual target zone
Veligdan, James T.; DeSanto, Leonard
2006-06-13
A display system includes a waveguide optical panel having an inlet face and an opposite outlet face. A projector and imaging device cooperate with the panel for projecting a video image thereon. An optical detector bridges at least a portion of the waveguides for detecting a location on the outlet face within a target zone of an inbound light spot. A controller is operatively coupled to the imaging device and detector for displaying a cursor on the outlet face corresponding with the detected location of the spot within the target zone.
Enhanced Eddy-Current Detection Of Weld Flaws
NASA Technical Reports Server (NTRS)
Van Wyk, Lisa M.; Willenberg, James D.
1992-01-01
Mixing of impedances measured at different frequencies reduces noise and helps reveal flaws. In new method, one excites eddy-current probe simultaneously at two different frequencies; usually, one of which integral multiple of other. Resistive and reactive components of impedance of eddy-current probe measured at two frequencies, mixed in computer, and displayed in real time on video terminal of computer. Mixing of measurements obtained at two different frequencies often "cleans up" displayed signal in situations in which band-pass filtering alone cannot: mixing removes most noise, and displayed signal resolves flaws well.
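The mixing step can be illustrated as a linear combination that exactly nulls a known nuisance response (e.g. probe lift-off) measured at both frequencies, so that flaw responses with a different frequency signature survive. The signal model and coefficient choice here are our illustration, not the paper's specific mixer:

```python
def mix(z1, z2, n1, n2):
    """Combine complex probe impedances z1, z2 (measured at the two
    excitation frequencies) so the nuisance signature (n1, n2) cancels.

    z1 - k*z2 with k = n1/n2 sends any pure-nuisance pair
    (a*n1, a*n2) to zero, while a flaw contribution that does not
    scale the same way at both frequencies passes through.
    """
    k = n1 / n2
    return z1 - k * z2
```

In practice (n1, n2) would be calibrated on a flaw-free region, after which the mixed channel displays flaws against a much quieter background.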
User Friendly Real Time Display
NASA Astrophysics Data System (ADS)
McCarthy, Denise M.; McCracken, Bill
1989-02-01
Real-time viewing of high-resolution infrared line scan reconnaissance imagery is greatly facilitated using Honeywell's Real-Time Display in conjunction with a D-500 Infrared Reconnaissance System. The Real-Time Display (RTD) provides the capability of on-board review of high-resolution infrared imagery, using the wide infrared dynamic range of the D-500 infrared receiver to maximum advantage. The scan converter accepts, processes, and displays imagery from four channels of the IR receiver after formatting by a multiplexer. The scan converter interfaces with a standard RS-170 video monitor. Detailed review and on-board analysis of infrared reconnaissance imagery stored on a videotape is easily accomplished using the many user-friendly features of the RTD. Using a convenient joystick controller, on-screen mode menus, and a moveable cursor, the operator can examine scenes of interest at four different display magnifications using a four-step bidirectional zoom. Imagery areas of interest are first noted using the scrolling wide-field display mode at 8x reduced display resolution. On noting an area of interest, the imagery can be marked on the tape record for future recovery and a freeze-frame mode can be initiated. The operator can then move the cursor to the area of interest and zoom to higher display magnification for 4x, 2x, and 1x display resolutions so that the full 4096 x 4096 pixel infrared frame can be matched to the 512 x 512 pixel display frame. At 8x wide-field display magnification the full line scanner field of view is displayed at 8x reduced resolution. There are two selectable modes of obtaining this reduced resolution. The operator can use the default method, which averages the signal from an 8 x 8 pixel group, or select the peak signal of the 8 x 8 pixel block to represent the entire block on the display.
In this alternate peak-signal display the wide field can be effectively scanned for hot objects which are more likely to be candidate targets. The intermediate 4x and 2x zoom steps are very useful in maintaining operator orientation in examining target clusters and industrial complexes. The four operating modes of the RTD are described and their use to the operator on a typical mission is outlined. Some installation details are given. The RTD as part of a complete D-500 Infrared Linescan Reconnaissance System is now being installed on a Beech 1900 Environmental Control Aircraft to monitor pollution in very sensitive and commercially important marine ecologies. Its application on military reconnaissance missions will allow the normal review of recorded videotape imagery at a ground station immediately after return of the aircraft to base. The areas of highest interest will have been previously marked during the airborne real-time review by the operator. The RTD packages into only two Line Replaceable Units (LRUs), a Scan Converter, and a Control Unit which includes a joystick hand controller. The CRT display is assumed to be part of the aircraft.
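The two 8x reduction modes described above are simply block-wise mean versus block-wise maximum over 8 x 8 pixel groups. A NumPy sketch of the idea (our reconstruction of the arithmetic, not the RTD hardware path):

```python
import numpy as np

def reduce8(frame, mode="average"):
    """Downsample a frame by 8 in each axis.

    mode="average": mean of each 8x8 block (the RTD's default method).
    mode="peak": hottest pixel of each 8x8 block, useful for spotting
    small hot objects that averaging would wash out.
    """
    h, w = frame.shape  # assumes h and w are multiples of 8
    blocks = frame.reshape(h // 8, 8, w // 8, 8)
    if mode == "average":
        return blocks.mean(axis=(1, 3))
    return blocks.max(axis=(1, 3))
```

The test case below makes the trade-off concrete: a single hot pixel nearly vanishes under averaging (1/64 of its value) but is preserved intact in peak mode.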
ERIC Educational Resources Information Center
Ozdemir, Muzaffer; Izmirli, Serkan; Sahin-Izmirli, Ozden
2016-01-01
The purpose of the present study was to investigate the effect of captioned vs. non-captioned instructional videos on motivation and achievement. To this end, a pre-test and post-test experimental design was used with 109 sophomores from a Turkish state university. Videos with and without captions of the unit in question were prepared by the…
NASA Astrophysics Data System (ADS)
Veligdan, James T.; Beiser, Leo; Biscardi, Cyrus; Brewster, Calvin; DeSanto, Leonard
1997-07-01
The polyplanar optical display (POD) is a unique display screen which can be used with any projection source. This display screen is 2 inches thick and has a matte black face which allows for high contrast images. The prototype being developed is a form, fit and functional replacement display for the B-52 aircraft which uses a monochrome ten-inch display. The new display uses a 100 milliwatt green solid state laser as its optical source. In order to produce real-time video, the laser light is being modulated by a digital light processing (DLP) chip manufactured by Texas Instruments, Inc. A variable astigmatic focusing system is used to produce a stigmatic image on the viewing face of the POD. In addition to the optical design, we discuss the electronic interfacing to the DLP chip, the opto-mechanical design and viewing angle characteristics.
Laser-driven polyplanar optic display
DOE Office of Scientific and Technical Information (OSTI.GOV)
Veligdan, J.T.; Biscardi, C.; Brewster, C.
1998-01-01
The Polyplanar Optical Display (POD) is a unique display screen which can be used with any projection source. This display screen is 2 inches thick and has a matte-black face which allows for high contrast images. The prototype being developed is a form, fit and functional replacement display for the B-52 aircraft which uses a monochrome ten-inch display. The new display uses a 200 milliwatt green solid-state laser (532 nm) as its optical source. In order to produce real-time video, the laser light is being modulated by a Digital Light Processing (DLP) chip manufactured by Texas Instruments, Inc. A variable astigmatic focusing system is used to produce a stigmatic image on the viewing face of the POD. In addition to the optical design, the authors discuss the DLP chip, the optomechanical design and viewing angle characteristics.
Laser-driven polyplanar optic display
NASA Astrophysics Data System (ADS)
Veligdan, James T.; Beiser, Leo; Biscardi, Cyrus; Brewster, Calvin; DeSanto, Leonard
1998-05-01
The Polyplanar Optical Display (POD) is a unique display screen which can be used with any projection source. This display screen is 2 inches thick and has a matte-black face which allows for high contrast images. The prototype being developed is a form, fit and functional replacement display for the B-52 aircraft which uses a monochrome ten-inch display. The new display uses a 200 milliwatt green solid-state laser (532 nm) as its optical source. In order to produce real-time video, the laser light is being modulated by a Digital Light Processing (DLP) chip manufactured by Texas Instruments, Inc. A variable astigmatic focusing system is used to produce a stigmatic image on the viewing face of the POD. In addition to the optical design, we discuss the DLP chip, the opto-mechanical design and viewing angle characteristics.
Toward enhancing the distributed video coder under a multiview video codec framework
NASA Astrophysics Data System (ADS)
Lee, Shih-Chieh; Chen, Jiann-Jone; Tsai, Yao-Hong; Chen, Chin-Hua
2016-11-01
The advance of video coding technology enables multiview video (MVV) or three-dimensional television (3-D TV) display for users with or without glasses. For mobile devices or wireless applications, a distributed video coder (DVC) can be utilized to shift the encoder complexity to the decoder under the MVV coding framework, denoted as multiview distributed video coding (MDVC). We proposed to exploit both inter- and intraview video correlations to enhance side information (SI) and improve the MDVC performance: (1) based on the multiview motion estimation (MVME) framework, a categorized block matching prediction with fidelity weights (COMPETE) was proposed to yield a high-quality SI frame for better DVC reconstructed images. (2) The block transform coefficient properties, i.e., DCs and ACs, were exploited to design the priority rate control for the turbo code, such that the DVC decoding can be carried out with the fewest parity bits. In comparison, the proposed COMPETE method demonstrated lower time complexity while presenting better reconstructed video quality. Simulations show that the proposed COMPETE can reduce the time complexity of MVME by a factor of 1.29 to 2.56, as compared to previous hybrid MVME methods, while the peak signal-to-noise ratio (PSNR) of a decoded video can be improved by 0.2 to 3.5 dB, as compared to H.264/AVC intracoding.
Is Tickling Torture? Assessing Welfare towards Slow Lorises (Nycticebus spp.) within Web 2.0 Videos.
Nekaris, K Anne I; Musing, Louisa; Vazquez, Asier Gil; Donati, Giuseppe
2015-01-01
Videos, memes and images of pet slow lorises have become increasingly popular on the Internet. Although some video sites allow viewers to tag material as 'animal cruelty', no site has yet acknowledged the presence of cruelty in slow loris videos. We examined 100 online videos to assess whether they violated the 'five freedoms' of animal welfare and whether presence or absence of these conditions contributed to the number of thumbs up and views received by the videos. We found that all 100 videos showed at least 1 condition known to be negative for lorises, indicating absence of the necessary freedom; 4% showed only 1 condition, but in nearly one third (31.3%) all 5 chosen criteria were present, including human contact (57%), daylight (87%), signs of stress/ill health (53%), unnatural environment (91%) and isolation from conspecifics (77%). The public were more likely to like videos where a slow loris was kept in the light or displayed signs of stress. Recent work on primates has shown that imagery of primates in a human context can cause viewers to perceive them as less threatened. Prevalence of a positive public opinion of such videos is a real threat to awareness of the conservation crisis faced by slow lorises. © 2016 S. Karger AG, Basel.
Deriving video content type from HEVC bitstream semantics
NASA Astrophysics Data System (ADS)
Nightingale, James; Wang, Qi; Grecos, Christos; Goma, Sergio R.
2014-05-01
As network service providers seek to improve customer satisfaction and retention levels, they are increasingly moving from traditional quality of service (QoS) driven delivery models to customer-centred quality of experience (QoE) delivery models. QoS models consider only metrics derived from the network; QoE models, however, also consider metrics derived from within the video sequence itself. Various spatial and temporal characteristics of a video sequence have been used, both individually and in combination, to derive methods of classifying video content either on a continuous scale or as a set of discrete classes. QoE models can be divided into three broad categories: full-reference, reduced-reference, and no-reference models. Due to the need to have the original video available at the client for comparison, full-reference metrics are of limited practical value in adaptive real-time video applications. Reduced-reference metrics often require metadata to be transmitted with the bitstream, while no-reference metrics typically operate in the decompressed domain at the client side and require significant processing to extract spatial and temporal features. This paper proposes a heuristic, no-reference approach to video content classification which is specific to HEVC-encoded bitstreams. The HEVC encoder already makes use of spatial characteristics to determine partitioning of coding units and temporal characteristics to determine the splitting of prediction units. We derive a function which approximates the spatio-temporal characteristics of the video sequence by using the weighted averages of the depth at which the coding unit quadtree is split and the prediction mode decision made by the encoder to estimate spatial and temporal characteristics, respectively.
Since the video content type of a sequence is determined by using high level information parsed from the video stream, spatio-temporal characteristics are identified without the need for full decoding and can be used in a timely manner to aid decision making in QoE oriented adaptive real time streaming.
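The weighted-average approximation described above can be sketched as follows, assuming (illustratively) that coding units are weighted by pixel area and that the inter/intra prediction decision stands in for the encoder's prediction-mode statistics; the names and weightings are ours, not the paper's exact formulation:

```python
def weighted_average(values, weights):
    """Weighted mean; weights need not be normalised."""
    total = sum(weights)
    return sum(v * w for v, w in zip(values, weights)) / total

def spatio_temporal_features(cu_records):
    """cu_records: list of (split_depth, is_inter, n_pixels) per coding unit.
    Area-weighted quadtree split depth approximates spatial complexity;
    the area-weighted share of inter-predicted units approximates temporal
    character (illustrative stand-ins for the bitstream-derived weights)."""
    depths = [d for d, _, _ in cu_records]
    inter = [1.0 if i else 0.0 for _, i, _ in cu_records]
    areas = [n for _, _, n in cu_records]
    return weighted_average(depths, areas), weighted_average(inter, areas)
```

Both quantities are parsed from high-level syntax, so no pixel decoding is required to compute them.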
Flexible active-matrix displays and shift registers based on solution-processed organic transistors.
Gelinck, Gerwin H; Huitema, H Edzer A; van Veenendaal, Erik; Cantatore, Eugenio; Schrijnemakers, Laurens; van der Putten, Jan B P H; Geuns, Tom C T; Beenhakkers, Monique; Giesbers, Jacobus B; Huisman, Bart-Hendrik; Meijer, Eduard J; Benito, Estrella Mena; Touwslager, Fred J; Marsman, Albert W; van Rens, Bas J E; de Leeuw, Dago M
2004-02-01
At present, flexible displays are an important focus of research. Further development of large, flexible displays requires a cost-effective manufacturing process for the active-matrix backplane, which contains one transistor per pixel. One way to further reduce costs is to integrate (part of) the display drive circuitry, such as row shift registers, directly on the display substrate. Here, we demonstrate flexible active-matrix monochrome electrophoretic displays based on solution-processed organic transistors on 25-microm-thick polyimide substrates. The displays can be bent to a radius of 1 cm without significant loss in performance. Using the same process flow we prepared row shift registers. With 1,888 transistors, these are the largest organic integrated circuits reported to date. More importantly, the operating frequency of 5 kHz is sufficiently high to allow integration with the display operating at video speed. This work therefore represents a major step towards 'system-on-plastic'.
Effects of Segmenting, Signalling, and Weeding on Learning from Educational Video
ERIC Educational Resources Information Center
Ibrahim, Mohamed; Antonenko, Pavlo D.; Greenwood, Carmen M.; Wheeler, Denna
2012-01-01
Informed by the cognitive theory of multimedia learning, this study examined the effects of three multimedia design principles on undergraduate students' learning outcomes and perceived learning difficulty in the context of learning entomology from an educational video. These principles included segmenting the video into smaller units, signalling…
An Overview of Video Description: History, Benefits, and Guidelines
ERIC Educational Resources Information Center
Packer, Jaclyn; Vizenor, Katie; Miele, Joshua A.
2015-01-01
This article provides an overview of the historical context in which video description services have evolved in the United States, a summary of research demonstrating benefits to people with vision loss, an overview of current video description guidelines, and information about current software programs that are available to produce video…
47 CFR 25.114 - Applications for space station authorizations.
Code of Federal Regulations, 2013 CFR
2013-10-01
... space station that will be used to provide video programming directly to consumers in the United States... application a technical analysis demonstrating that providing video programming service to consumers in Alaska and Hawaii that is comparable to the video programming service provided to consumers in the 48...
47 CFR 25.114 - Applications for space station authorizations.
Code of Federal Regulations, 2011 CFR
2011-10-01
... space station that will be used to provide video programming directly to consumers in the United States... application a technical analysis demonstrating that providing video programming service to consumers in Alaska and Hawaii that is comparable to the video programming service provided to consumers in the 48...
47 CFR 25.114 - Applications for space station authorizations.
Code of Federal Regulations, 2012 CFR
2012-10-01
... space station that will be used to provide video programming directly to consumers in the United States... application a technical analysis demonstrating that providing video programming service to consumers in Alaska and Hawaii that is comparable to the video programming service provided to consumers in the 48...
Children and Electronic Games in the United States.
ERIC Educational Resources Information Center
Funk, Jeanne B.; Bermann, Julie N.; Buchman, Debra D.
1997-01-01
Reports video game playing demographics. Reviews the literature on video game health hazards and positive health applications; cutting-edge applications in education and controversies about learning; and effects on personality. Discusses laboratory and survey research on the effects of video games violence. Considers whether some children may be…
Cultural Variation in Antismoking Video Ads between the United States, Taiwan, and China
ERIC Educational Resources Information Center
Wong, Tzu-Jung; King, Jessica L.; Pomeranz, Jamie L.
2016-01-01
Antitobacco advertisement components, including types of messages and advertising appeals, have not been evaluated among multinational groups. This study identified and compared the content of antismoking video ads across three countries. We reviewed 86 antismoking video advertisements for the following information: severity of the consequences of…
Objective analysis of image quality of video image capture systems
NASA Astrophysics Data System (ADS)
Rowberg, Alan H.
1990-07-01
As Picture Archiving and Communication System (PACS) technology has matured, video image capture has become a common way of capturing digital images from many modalities. While digital interfaces, such as those which use the ACR/NEMA standard, will become more common in the future, and are preferred because of the accuracy of image transfer, video image capture will be the dominant method in the short term, and may continue to be used for some time because of the low cost and high speed often associated with such devices. Currently, virtually all installed systems use methods of digitizing the video signal that is produced for display on the scanner viewing console itself. A series of digital test images has been developed for display on either a GE CT9800 or a GE Signa MRI scanner. These images have been captured with each of five commercially available image capture systems, and the resultant images digitally transferred on floppy disk to a PC1286 computer containing Optimast image analysis software. Here the images can be displayed in a comparative manner for visual evaluation, in addition to being analyzed statistically. Each of the images has been designed to support certain tests, including noise, accuracy, linearity, gray scale range, stability, slew rate, and pixel alignment. These image capture systems vary widely in these characteristics, in addition to the presence or absence of other artifacts, such as shading and moire pattern. Other accessories such as video distribution amplifiers and noise filters can also add or modify artifacts seen in the captured images, often giving unusual results. Each image is described, together with the tests which were performed using it. One image contains alternating black and white lines, each one pixel wide, after equilibration strips ten pixels wide.
While some systems have a slew rate fast enough to track this correctly, others blur it to an average shade of gray, and do not resolve the lines, or give horizontal or vertical streaking. While many of these results are significant from an engineering standpoint alone, there are clinical implications and some anatomy or pathology may not be visualized if an image capture system is used improperly.
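The line-pair target and the slew-rate failure it exposes can be sketched as follows. This is a crude model under our own assumptions (a capture chain limited to a fixed grey-level change per pixel), not the authors' test procedure:

```python
def line_pair_pattern(width, strip=10, black=0, white=255):
    """One scan line of the slew-rate test target: a white equilibration
    strip, a black strip, then alternating single-pixel black/white lines."""
    row = [white] * strip + [black] * strip
    for i in range(width - len(row)):
        row.append(black if i % 2 == 0 else white)
    return row[:width]

def slew_limited(row, max_step):
    """Simulate a capture chain whose output can change by at most
    max_step grey levels per pixel (a crude slew-rate model)."""
    out = [row[0]]
    for v in row[1:]:
        prev = out[-1]
        out.append(prev + max(-max_step, min(max_step, v - prev)))
    return out
```

With a small `max_step`, the alternating lines never reach full black or white, so the capture renders them as an intermediate grey, which is the blurring behaviour described above.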
NASA Tech Briefs, April 2000. Volume 24, No. 4
NASA Technical Reports Server (NTRS)
2000-01-01
Topics covered include: Imaging/Video/Display Technology; Electronic Components and Circuits; Electronic Systems; Physical Sciences; Materials; Computer Programs; Mechanics; Bio-Medical; Test and Measurement; Mathematics and Information Sciences; Books and Reports.
Wrist display concept demonstration based on 2-in. color AMOLED
NASA Astrophysics Data System (ADS)
Meyer, Frederick M.; Longo, Sam J.; Hopper, Darrel G.
2004-09-01
The wrist watch needs an upgrade. Recent advances in optoelectronics, microelectronics, and communication theory have established a technology base that now makes the multimedia Dick Tracy watch attainable during the next decade. As a first step towards stuffing the functionality of an entire personal computer (PC) and television receiver under a watch face, we have set a goal of providing wrist video capability to warfighters. Commercial-sector work on the wrist form factor already includes all the functionality of a personal digital assistant (PDA) and a full PC operating system. Our strategy is to leverage these commercial developments. In this paper we describe our use of a 2.2 in. diagonal color active-matrix organic light-emitting diode (AMOLED) device as a wrist-mounted display (WMD) to present either full motion video or computer-generated graphical image formats.
Jia, Jia; Chen, Jhensi; Yao, Jun; Chu, Daping
2017-03-17
A high quality 3D display requires a high amount of optical information throughput, which needs an appropriate mechanism to distribute information in space uniformly and efficiently. This study proposes a front-viewing system which is capable of managing the required amount of information efficiently from a high bandwidth source and projecting 3D images with a decent size and a large viewing angle at video rate in full colour. It employs variable gratings to support a high bandwidth distribution. This concept is scalable and the system can be made compact in size. A horizontal parallax only (HPO) proof-of-concept system is demonstrated by projecting holographic images from a digital micro mirror device (DMD) through rotational tiled gratings before they are realised on a vertical diffuser for front-viewing.
Haptic display for the VR arthroscopy training simulator
NASA Astrophysics Data System (ADS)
Ziegler, Rolf; Brandt, Christoph; Kunstmann, Christian; Mueller, Wolfgang; Werkhaeuser, Holger
1997-05-01
A specific desire for new training methods arose from the new field of 'minimally invasive surgery.' With technical advances, video arthroscopy became the standard procedure in the OR. Holding the optical system with the video camera in one hand and watching the operative field on the monitor, the surgeon's other hand is free to guide, e.g., a probe. As arthroscopy became a more common procedure, it became obvious that some form of special training was necessary to guarantee a certain level of qualification among surgeons. Therefore, a hospital in Frankfurt, Germany approached the Fraunhofer Institute for Computer Graphics to develop a training system for arthroscopy based on VR techniques. The main drawback of the simulator developed so far is the lack of haptic perception, especially force feedback. In cooperation with the Department of Electro-Mechanical Construction at the Darmstadt Technical University, we have designed and built a haptic display for the VR arthroscopy training simulator. In parallel, we developed a concept for integrating the haptic display in a configurable way.
Using Globe Browsing Systems in Planetariums to Take Audiences to Other Worlds.
NASA Astrophysics Data System (ADS)
Emmart, C. B.
2014-12-01
For the last decade planetariums have been adding 'full dome video' capability for both movie playback and interactive display. True scientific data visualization has now come to planetarium audiences as a means to display the actual three-dimensional layout of the universe, the time-based arrangement of planets, minor bodies and spacecraft across the solar system, and now globe browsing systems to examine planetary bodies to the limits of the resolutions acquired. Additionally, such planetarium facilities can be networked for simultaneous display across the world, widening audience reach and providing access to authoritative description and commentary by scientists. Data repositories such as NASA's Lunar Mapping and Modeling Project (LMMP), NASA GSFC's LANCE-MODIS, and others conforming to the Open Geospatial Consortium (OGC) Web Map Service (WMS) protocol make geospatial data available to a growing number of dome-supporting globe visualization systems. The immersive surround graphics of full dome video replicate our visual system, creating authentic virtual scenes that effectively place audiences on location, in some cases on other worlds mapped only robotically.
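A map request of the kind such globe-browsing systems issue can be sketched as follows, using the OGC WMS 1.3.0 GetMap parameter names; the server URL and layer name here are hypothetical:

```python
from urllib.parse import urlencode

def wms_getmap_url(base_url, layer, bbox, width, height,
                   crs="EPSG:4326", fmt="image/png"):
    """Build an OGC WMS 1.3.0 GetMap request URL.
    bbox is (min, min, max, max) in the axis order the CRS dictates."""
    params = {
        "SERVICE": "WMS",
        "VERSION": "1.3.0",
        "REQUEST": "GetMap",
        "LAYERS": layer,
        "CRS": crs,
        "BBOX": ",".join(str(v) for v in bbox),
        "WIDTH": width,
        "HEIGHT": height,
        "FORMAT": fmt,
    }
    return base_url + "?" + urlencode(params)
```

A globe renderer would fetch such tiles at increasing resolution as the viewer zooms toward a planetary surface.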
Context-dependent JPEG backward-compatible high-dynamic range image compression
NASA Astrophysics Data System (ADS)
Korshunov, Pavel; Ebrahimi, Touradj
2013-10-01
High-dynamic range (HDR) imaging is expected, together with ultrahigh definition and high frame rate video, to become a technology that may change the photo, TV, and film industries. Many cameras and displays capable of capturing and rendering both HDR images and video are already available on the market. The popularity and full public adoption of HDR content is, however, hindered by the lack of standards for quality evaluation, file formats, and compression, as well as by the large legacy base of low-dynamic range (LDR) displays that are unable to render HDR. To facilitate the widespread use of HDR, backward compatibility of HDR with commonly used legacy technologies for storage, rendering, and compression of video and images is necessary. Although many tone-mapping algorithms have been developed for generating viewable LDR content from HDR, there is no consensus on which algorithm to use and under which conditions. Via a series of subjective evaluations, we demonstrate the dependency of the perceptual quality of tone-mapped LDR images on the context: environmental factors, display parameters, and the image content itself. Based on the results of the subjective tests, we propose to extend the JPEG file format, the most popular image format, in a backward-compatible manner to also handle HDR images. An architecture to achieve such backward compatibility with JPEG is proposed. A simple implementation of lossy compression demonstrates the efficiency of the proposed architecture compared with state-of-the-art HDR image compression.
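As an illustration of the tone-mapping step discussed above, a global Reinhard-style operator is one common choice among the many algorithms the abstract notes there is no consensus on; this sketch is ours, not the paper's method:

```python
import math

def reinhard_tonemap(luminances, key=0.18, eps=1e-6):
    """Global Reinhard-style operator: scale by a key value over the
    log-average (geometric mean) luminance, then compress with
    L / (1 + L) so every HDR value maps into [0, 1)."""
    log_avg = math.exp(sum(math.log(l + eps) for l in luminances)
                       / len(luminances))
    scaled = [key * l / log_avg for l in luminances]
    return [l / (1.0 + l) for l in scaled]
```

Because the operator is monotonic, relative brightness ordering is preserved even as a five-decade dynamic range is squeezed into displayable values.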
Symptomatic accommodative and binocular dysfunctions from the use of flat-panel displays
Porcar, Esteban; Montalt, Juan Carlos; Pons, Álvaro M.; España-Gregori, Enrique
2018-01-01
AIM To determine the presence of symptomatic accommodative and non-strabismic binocular dysfunctions (ANSBD) in a non-presbyopic population of video display unit (VDU) users with flat-panel displays. METHODS One hundred and one VDU users, aged between 20 and 34 years, initially participated in the study. The study excluded contact-lens wearers and subjects who had undergone refractive surgery or had any systemic or ocular disease. First, subjects were asked about the type and nature of eye symptoms they experienced during VDU use. Then, a thorough eye examination excluded those subjects with a significant uncorrected refractive error or another problem, such as ocular motility disorders, vertical deviation, strabismus or eye disease. Finally, the remaining participants underwent an exhaustive assessment of their accommodative and binocular vision status. RESULTS Eighty-nine VDU users (46 females and 43 males) were included in the study. They used flat-panel displays for an average of 5±1.9h a day. Twenty subjects (22.5%) presented with ANSBD. Convergence excess was the most frequent non-strabismic binocular dysfunction (9 subjects), followed by fusional vergence dysfunction (3 subjects) and convergence insufficiency (2 subjects). Among the accommodative dysfunctions, accommodative excess was the most common (4 subjects), followed by accommodative insufficiency (2 subjects). Moderate to severe eye symptoms were found in 13 subjects with ANSBD. CONCLUSION Significant eye symptoms often occur in VDU users with accommodative and/or non-strabismic binocular dysfunctions and should not be underestimated; an appropriate evaluation of accommodative and binocular vision status is therefore important for this population. PMID:29600186
Liquid crystal display (LCD) drive electronics
NASA Astrophysics Data System (ADS)
Loudin, Jeffrey A.; Duffey, Jason N.; Booth, Joseph J.; Jones, Brian K.
1995-03-01
A new drive circuit for the liquid crystal display (LCD) of the InFocus TVT-6000 video projector is currently under development at the U.S. Army Missile Command. The new circuit will allow individual pixel control of the LCD and increase the frame rate by a factor of two while yielding a major reduction in space and power requirements. This paper will discuss results of the effort to date.
State of the art in video system performance
NASA Technical Reports Server (NTRS)
Lewis, Michael J.
1990-01-01
The closed-circuit television (CCTV) system onboard the Space Shuttle comprises cameras, a video signal switching and routing unit (VSU), and a video tape recorder. However, this system is inadequate for use with many experiments that require video imaging. In order to assess the state of the art in video technology and data storage systems, a survey was conducted of High Resolution, High Frame Rate Video Technology (HHVT) products. The performance of state-of-the-art solid-state cameras and image sensors, video recording systems, data transmission devices, and data storage systems is shown graphically against users' requirements.
NASA Technical Reports Server (NTRS)
1974-01-01
A descriptive handbook for the audio/CTE splitter/interleaver (RCA part No. 8673734-502) was presented. This unit is designed to perform two major functions: extract audio and time data from an interleaved video/audio signal (splitter section), and provide a test interleaved video/audio/CTE signal for the system (interleaver section). It is a rack mounting unit 7 inches high, 19 inches wide, 20 inches deep, mounted on slides for retracting from the rack, and weighs approximately 40 pounds. The following information is provided: installation, operation, principles of operation, maintenance, schematics and parts lists.
Video coding for 3D-HEVC based on saliency information
NASA Astrophysics Data System (ADS)
Yu, Fang; An, Ping; Yang, Chao; You, Zhixiang; Shen, Liquan
2016-11-01
As an extension of High Efficiency Video Coding (HEVC), 3D-HEVC has been widely researched in recent years under the impetus of the new-generation coding standard. Compared with H.264/AVC, its compression efficiency is doubled while keeping the same video quality. However, its higher encoding complexity and longer encoding time are not negligible. To reduce the computational complexity and guarantee the subjective quality of virtual views, this paper presents a novel video coding method for 3D-HEVC based on saliency information, an important aspect of the Human Visual System (HVS). First, the relationship between the current coding unit and its adjacent units is used to adjust the maximum depth of each largest coding unit (LCU) and to determine the SKIP mode reasonably. Then, according to the saliency information of each frame, the texture and its corresponding depth map are divided into three regions: a salient area, a middle area and a non-salient area. Different quantization parameters are then assigned to the different regions to achieve low-complexity coding. Finally, the compressed video generates new viewpoint videos through the renderer tool. As shown in our experiments, the proposed method saves more bit rate than other approaches and achieves up to a 38% reduction in encoding time without subjective quality loss in compression or rendering.
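The region-dependent quantization described above can be sketched as a simple mapping from a normalized saliency score to a quantization parameter; the thresholds and offsets here are illustrative assumptions, not the paper's values:

```python
def qp_for_region(saliency, base_qp=32, delta=4, low=0.33, high=0.66):
    """Map a normalised saliency score in [0, 1] to a quantisation
    parameter: salient areas get a lower QP (finer quantisation, more
    bits), non-salient areas a higher QP (coarser quantisation).
    The three bands mirror the salient / middle / non-salient split."""
    if saliency >= high:       # salient area
        return base_qp - delta
    if saliency >= low:        # middle area
        return base_qp
    return base_qp + delta     # non-salient area
```

Shifting bits toward salient regions is what lets the encoder cut rate and time without visible subjective loss.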
Kouloulias, V E; Ntasis, E; Poortmans, Ph; Maniatis, T A; Nikita, K S
2003-01-01
Developing web-based platforms for remote collaboration among physicians and technologists has become a great challenge. In this paper we describe a web-based radiotherapy treatment planning (WBRTP) system to facilitate decentralized radiotherapy services by allowing remote treatment planning and quality assurance (QA) of treatment delivery. Significant prerequisites are digital storage of the relevant data as well as an efficient and reliable telecommunication system between collaborating units. The WBRTP system includes video conferencing, display of medical images (CT scans, dose distributions, etc.), replication of selected data from a common database, remote treatment planning, evaluation of the treatment technique, and follow-up of treated patients. Moreover, the system features real-time remote operations for tele-consulting, such as target volume delineation performed by a team of experts at different and distant units. An appraisal of its possibilities for quality assurance in radiotherapy is also discussed. In conclusion, a WBRTP system would be not only a medium for communication between experts in oncology but, above all, a tool for improving QA in radiotherapy.
Real-time blood flow visualization using the graphics processing unit
NASA Astrophysics Data System (ADS)
Yang, Owen; Cuccia, David; Choi, Bernard
2011-01-01
Laser speckle imaging (LSI) is a technique in which coherent light incident on a surface produces a reflected speckle pattern that is related to the underlying movement of optical scatterers, such as red blood cells, indicating blood flow. Image-processing algorithms can be applied to produce speckle flow index (SFI) maps of relative blood flow. We present a novel algorithm that employs the NVIDIA Compute Unified Device Architecture (CUDA) platform to perform laser speckle image processing on the graphics processing unit. Software written in C was combined with CUDA and integrated into a LabVIEW Virtual Instrument (VI) that is interfaced with a monochrome CCD camera able to acquire high-resolution raw speckle images at nearly 10 fps. With the CUDA code integrated into the LabVIEW VI, the processing and display of SFI images were also performed at ~10 fps. We present three video examples depicting real-time flow imaging during a reactive hyperemia maneuver, with fluid flow through an in vitro phantom, and a demonstration of real-time LSI during laser surgery of a port wine stain birthmark.
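The speckle-contrast computation underlying SFI maps can be sketched per pixel window as follows; this is a plain-Python illustration of the general LSI quantities, not the authors' CUDA kernel, and the 1/K² form is one common SFI definition:

```python
import statistics

def speckle_contrast(window):
    """Local speckle contrast K = sigma / mean over a pixel window."""
    mean = statistics.fmean(window)
    return statistics.pstdev(window) / mean

def speckle_flow_index(window):
    """One common SFI definition, 1 / K^2: faster flow blurs the
    speckle pattern during the exposure, lowering K and raising
    the index (relative flow, higher = faster)."""
    k = speckle_contrast(window)
    return 1.0 / (k * k)
```

A full SFI map applies this to a sliding (e.g. 7x7) window over every pixel of the raw speckle image, which is why the per-window independence maps so well onto GPU threads.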
Development of High-speed Visualization System of Hypocenter Data Using CUDA-based GPU computing
NASA Astrophysics Data System (ADS)
Kumagai, T.; Okubo, K.; Uchida, N.; Matsuzawa, T.; Kawada, N.; Takeuchi, N.
2014-12-01
After the Great East Japan Earthquake on March 11, 2011, intelligent visualization of seismic information has become important for understanding earthquake phenomena. At the same time, the quantity of seismic data has become enormous as high-accuracy observation networks have expanded, and many parameters (e.g., positional information, origin time, magnitude, etc.) must be handled to display seismic information efficiently. High-speed processing of data and image information is therefore necessary to handle these enormous amounts of seismic data. Recently, the GPU (Graphics Processing Unit) has been used as an acceleration tool for data processing and calculation in various fields of study, a movement called GPGPU (General-Purpose computing on GPUs). Over the last few years the performance of GPUs has improved rapidly, and GPU computing now provides a high-performance computing environment at lower cost than before. Moreover, using a GPU has the advantage of direct visualization of processed data, because the GPU is originally an architecture for graphics processing. In GPU computing, the processed data are always stored in video memory, so drawing information can be written directly to the VRAM on the video card by combining CUDA with a graphics API. In this study, we employ CUDA together with OpenGL and/or DirectX to realize a full-GPU implementation. This method makes it possible to write drawing information to the VRAM on the video card without PCIe bus data transfers, enabling high-speed processing of seismic data. The present study examines GPU computing-based high-speed visualization and its feasibility for a high-speed visualization system for hypocenter data.
ERIC Educational Resources Information Center
2000
As children learn to practice responsible behaviors, discipline problems in the early childhood classroom can be reduced. As Part 3 of a 3-part video series designed to help adults working with 3- to 8-year-olds use a proactive approach to prevent discipline problems, this video training package is comprised of a Facilitators' Guide, a Viewers'…
The Use of the Library of Video Excerpts (L.O.V.E.) in Personnel Preparation Programs
ERIC Educational Resources Information Center
Trief, Ellen; Rosenblum, L. Penny
2016-01-01
A three-year, grant-funded program to create an online video clip library for personnel programs preparing teachers of students with visual impairments in the United States and Canada was launched in September 2014. The first author was the developer of the Library of Video Excerpts (L.O.V.E.) and collected over 300 video clips that were 8 to 10…
Wieland, Mark L; Nelson, Jonathan; Palmer, Tiffany; O'Hara, Connie; Weis, Jennifer A; Nigon, Julie A; Sia, Irene G
2013-01-01
Tuberculosis disproportionately affects immigrants and refugees to the United States. Upon arrival to the United States, many of these individuals attend adult education centers, but little is known about how to deliver tuberculosis health information at these venues. Therefore, the authors used a participatory approach to design and evaluate a tuberculosis education video in this setting. The authors used focus group data to inform the content of the video that was produced and delivered by adult learners and their teachers. The video was evaluated by learners for acceptability through 3 items with a 3-point Likert scale. Knowledge (4 items) and self-efficacy (2 items) about tuberculosis were evaluated before and after viewing the video. A total of 159 learners (94%) rated the video as highly acceptable. Knowledge about tuberculosis improved after viewing the video (56% correct vs. 82% correct; p <.001), as did tuberculosis-related self-efficacy (77% vs. 90%; p <.001). Adult education centers that serve large immigrant and refugee populations may be excellent venues for health education, and a video may be an effective tool to educate these populations. Furthermore, a participatory approach in designing health education materials may enhance the efficacy of these tools.
Pilot-Configurable Information on a Display Unit
NASA Technical Reports Server (NTRS)
Bell, Charles Frederick (Inventor); Ametsitsi, Julian (Inventor); Che, Tan Nhat (Inventor); Shafaat, Syed Tahir (Inventor)
2017-01-01
A small, thin display unit that can be installed in the flight deck for displaying only the flight crew-selected tactical information needed for the task at hand. The flight crew can select the tactical information to be displayed by means of any conventional user interface. Whenever the flight crew selects tactical information for display, the system processes the request, periodically retrieving measured current values or computing current values for the requested tactical parameters and returning those values to the display unit for display.
Digital image processing of bone - Problems and potentials
NASA Technical Reports Server (NTRS)
Morey, E. R.; Wronski, T. J.
1980-01-01
The development of a digital image processing system for bone histomorphometry and fluorescent marker monitoring is discussed. The system in question is capable of making measurements of UV or light microscope features on a video screen with either video or computer-generated images, and comprises a microscope, low-light-level video camera, video digitizer and display terminal, color monitor, and PDP 11/34 computer. Capabilities demonstrated in the analysis of an undecalcified rat tibia include the measurement of perimeter and total bone area, and the generation of microscope images, false color images, digitized images and contoured images for further analysis. Software development will be based on an existing software library, specifically the mini-VICAR system developed at JPL. It is noted that the potentials of the system in terms of speed and reliability far exceed any problems associated with hardware and software development.
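The measurements the abstract names (total bone area, perimeter) can be illustrated on a binary image. The sketch below is a minimal stand-in, not the mini-VICAR software; the function names and the 4-connected perimeter convention are illustrative assumptions.

```python
# Minimal sketch of area/perimeter measurement on a binary bone image.
# Illustration only, not the mini-VICAR system; the 4-connected edge-count
# definition of perimeter is one common convention among several.

def bone_area(mask):
    """Area = number of foreground (bone) pixels."""
    return sum(v for row in mask for v in row)

def bone_perimeter(mask):
    """Perimeter = foreground pixel edges facing background or the image border."""
    rows, cols = len(mask), len(mask[0])
    perim = 0
    for r in range(rows):
        for c in range(cols):
            if not mask[r][c]:
                continue
            for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                nr, nc = r + dr, c + dc
                if not (0 <= nr < rows and 0 <= nc < cols) or not mask[nr][nc]:
                    perim += 1
    return perim

# 3x3 solid square of bone pixels: area 9, perimeter 12 edge units
square = [[1, 1, 1], [1, 1, 1], [1, 1, 1]]
print(bone_area(square), bone_perimeter(square))  # 9 12
```

A production system would work on grayscale microscope frames with thresholding and calibration to physical units; the edge-counting core stays the same.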
Aidlen, Jeremy T; Glick, Sara; Silverman, Kenneth; Silverman, Harvey F; Luks, Francois I
2009-08-01
Light-weight, low-profile, and high-resolution head-mounted displays (HMDs) now allow personalized viewing of a laparoscopic image. The advantages include unobstructed viewing, regardless of position at the operating table, and the possibility of customizing the image (i.e., enhanced reality, picture-in-picture, etc.). The bright image display allows use in daylight surroundings, and the low profile of the HMD provides adequate peripheral vision. Theoretic disadvantages include reliance by all viewers on the same image capture and anticues (i.e., reality disconnect) when the projected image remains static despite changes in head position. This can lead to discomfort and even nausea. We have developed a prototype of interactive laparoscopic image display that allows hands-free control of the displayed image by changes in spatial orientation of the operator's head. The prototype consists of an HMD, a spatial orientation device, and computer software to enable hands-free panning and zooming of a video-endoscopic image display. The spatial orientation device uses magnetic fields created by a transmitter and receiver, each containing three orthogonal coils. The transmitter coils are efficiently driven, using USB power only, by a newly developed circuit, each at a unique frequency. The HMD-mounted receiver system links to a commercially available PC-interface PCI-bus sound card (M-Audiocard Delta 44; Avid Technology, Tewksbury, MA). Analog signals at the receiver are filtered, amplified, and converted to digital signals, which are processed to control the image display. The prototype uses a proprietary static fish-eye lens and software for the distortion-free reconstitution of any portion of the captured image. Left-right and up-down motions of the head (and HMD) produce real-time panning of the displayed image. Motion of the head toward, or away from, the transmitter causes real-time zooming in or out, respectively, of the displayed image.
This prototype of the interactive HMD allows hands-free, intuitive control of the laparoscopic field, independent of the captured image.
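The head-to-image mapping described above can be sketched as a simple transfer function: yaw and pitch pan the view, distance to the transmitter zooms it. The gains, ranges, and axis conventions below are illustrative assumptions, not values from the prototype.

```python
# Sketch of hands-free pan/zoom control from head pose.
# Gains and reference distance are illustrative assumptions,
# not the prototype's actual calibration.

PAN_GAIN = 4.0    # pixels of pan per degree of head rotation (assumed)
ZOOM_GAIN = 0.02  # zoom change per cm of motion toward the transmitter (assumed)

def clamp(value, lo, hi):
    return max(lo, min(hi, value))

def update_view(yaw_deg, pitch_deg, distance_cm, ref_distance_cm=60.0):
    """Map head pose to a (pan_x, pan_y, zoom) view state.

    Left-right head motion (yaw) pans horizontally, up-down motion (pitch)
    pans vertically, and moving toward the transmitter zooms in.
    """
    pan_x = PAN_GAIN * yaw_deg
    pan_y = PAN_GAIN * pitch_deg
    zoom = clamp(1.0 + ZOOM_GAIN * (ref_distance_cm - distance_cm), 0.5, 4.0)
    return pan_x, pan_y, zoom

# Head turned 10 degrees right, level, 10 cm closer than the reference distance:
print(update_view(10.0, 0.0, 50.0))  # (40.0, 0.0, 1.2)
```

In the real device these inputs would come from the demodulated coil signals at ~100 Hz rather than being passed in directly.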
Keebler, Joseph R; Jentsch, Florian; Schuster, David
2014-12-01
We investigated the effects of active stereoscopic simulation-based training and individual differences in video game experience on multiple indices of combat identification (CID) performance. Fratricide is a major problem in combat operations involving military vehicles. In this research, we aimed to evaluate the effects of training on CID performance in order to reduce fratricide errors. Individuals were trained on 12 combat vehicles in a simulation, which were presented via either a non-stereoscopic or active stereoscopic display using NVIDIA's GeForce shutter glass technology. Self-report was used to assess video game experience, leading to four between-subjects groups: high video game experience with stereoscopy, low video game experience with stereoscopy, high video game experience without stereoscopy, and low video game experience without stereoscopy. We then tested participants on their memory of each vehicle's alliance and name across multiple measures, including photographs and videos. There was a main effect for both video game experience and stereoscopy across many of the dependent measures. Further, we found interactions between video game experience and stereoscopic training, such that those individuals with high video game experience in the non-stereoscopic group had the highest performance outcomes in the sample on multiple dependent measures. This study suggests that individual differences in video game experience may be predictive of enhanced performance in CID tasks. Selection based on video game experience in CID tasks may be a useful strategy for future military training. Future research should investigate the generalizability of these effects, such as identification through unmanned vehicle sensors.
Helping Video Games Rewire "Our Minds"
NASA Technical Reports Server (NTRS)
Pope, Alan T.; Palsson, Olafur S.
2001-01-01
Biofeedback-modulated video games are games that respond to physiological signals as well as mouse, joystick or game controller input; they embody the concept of improving physiological functioning by rewarding specific healthy body signals with success at playing a video game. The NASA patented biofeedback-modulated game method blends biofeedback into popular off-the-shelf video games in such a way that the games do not lose their entertainment value. This method uses physiological signals (e.g., electroencephalogram frequency band ratio) not simply to drive a biofeedback display directly, or periodically modify a task as in other systems, but to continuously modulate parameters (e.g., game character speed and mobility) of a game task in real time while the game task is being performed by other means (e.g., a game controller). Biofeedback-modulated video games represent a new generation of computer and video game environments that train valuable mental skills beyond eye-hand coordination. These psychophysiological training technologies are poised to exploit the revolution in interactive multimedia home entertainment for the personal improvement, not just the diversion, of the user.
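The modulation idea, a physiological signal continuously scaling a game parameter rather than driving a display, can be sketched as follows. The band ratio, baseline, and gain limits are illustrative assumptions, not details of the patented method.

```python
# Sketch of biofeedback modulation: an EEG band ratio continuously scales
# a game parameter (character speed) while the game is played normally
# with a controller. The ratio choice, baseline, and clamp limits are
# illustrative assumptions, not NASA's patented method.

BASE_SPEED = 5.0  # character speed under neutral physiology (assumed units)

def modulated_speed(beta_power, theta_power, baseline_ratio=1.0):
    """Scale character speed by how far the beta/theta ratio sits above baseline.

    A higher ratio (often read as greater engagement) is rewarded with more
    speed; a lower ratio slows the character, so the healthy signal itself
    becomes the path to success in the game.
    """
    ratio = beta_power / max(theta_power, 1e-9)
    scale = max(0.25, min(2.0, ratio / baseline_ratio))
    return BASE_SPEED * scale

print(modulated_speed(beta_power=3.0, theta_power=2.0))  # 7.5 (ratio 1.5 -> 1.5x speed)
```

Called once per frame, a function like this modulates difficulty smoothly instead of interrupting play the way a conventional biofeedback display would.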
Maier, Hans; de Heer, Gert; Ortac, Ajda; Kuijten, Jan
2015-11-01
To analyze, interpret and evaluate microscopic images, used in medical diagnostics and forensic science, video images for educational purposes were made with a very high resolution of 4096 × 2160 pixels (4K), which is more than four times as many pixels as High-Definition video (1920 × 1080 pixels). The unprecedented high resolution makes it possible to see details that remain invisible in any other video format. The images of the specimens (blood cells, tissue sections, hair, fibre, etc.) are recorded using a 4K video camera which is attached to a light microscope. After processing, this resulted in very sharp and highly detailed images. This material was then used in education for classroom discussion. Spoken explanation by experts in the field of medical diagnostics and forensic science was also added to the high-resolution video images to make it suitable for self-study. © 2015 The Authors. Journal of Microscopy published by John Wiley & Sons Ltd on behalf of Royal Microscopical Society.
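The pixel-count comparison is simple arithmetic, shown below: 4K as used here (4096 × 2160) carries roughly 4.27 times the pixels of 1080p HD.

```python
# Pixel-count arithmetic behind the 4K vs. HD comparison.
four_k = 4096 * 2160  # 8,847,360 pixels
hd = 1920 * 1080      # 2,073,600 pixels
print(four_k, hd, round(four_k / hd, 2))  # 8847360 2073600 4.27
```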
Strategies for combining physics videos and virtual laboratories in the training of physics teachers
NASA Astrophysics Data System (ADS)
Dickman, Adriana; Vertchenko, Lev; Martins, Maria Inés
2007-03-01
Among the multimedia resources used in physics education, the most prominent are virtual laboratories and videos. On one hand, computer simulations and applets have very attractive graphic interfaces, showing an incredible amount of detail and movement. On the other hand, videos offer the possibility of displaying high-quality images and are becoming more feasible with the increasing availability of digital resources. We believe it is important to discuss, throughout the teacher training program, both the functionality of information and communication technology (ICT) in physics education and the varied applications of these resources. In our work we suggest introducing ICT resources in a sequence that integrates these important tools in the teacher training program, as opposed to the traditional approach, in which virtual laboratories and videos are introduced separately. In this perspective, when we introduce and utilize virtual laboratory techniques we also provide for their use in videos, taking advantage of graphic interfaces. Thus the students in our program learn to use instructional software in the production of videos for classroom use.
North, Frederick; Hanna, Barbara K; Crane, Sarah J; Smith, Steven A; Tulledge-Scheitel, Sidna M; Stroebel, Robert J
2011-12-01
The patient portal is a web service that allows patients to view their electronic health record, communicate online with their care teams, and manage healthcare appointments and medications. Despite the advantages of the patient portal, registrations for portal use have often been slow. Using a secure video system on our existing exam room electronic health record displays during regular office visits, the authors showed patients a video that promoted use of the patient portal. The authors compared portal registrations and portal use following the video to providing a paper instruction sheet and to a control (no additional portal promotion). From the 12,050 office appointments examined, portal registrations within 45 days of the appointment were 11.7%, 7.1%, and 2.5% for video, paper instructions, and control, respectively (p<0.0001). Within 6 months following the interventions, 3.5% of the video cohort, 1.2% of the paper cohort, and 0.75% of the control patients demonstrated portal use by initiating portal messages to their providers (p<0.0001).
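A comparison of registration rates between two cohorts like these is commonly done with a two-proportion z-test; the sketch below shows the mechanics. The group sizes used are hypothetical, since the abstract reports only the total appointment count and percentages.

```python
# Sketch of a two-proportion z-test of the kind used to compare portal
# registration rates between cohorts. The counts below are hypothetical,
# not the study's actual group sizes.
import math

def two_proportion_z(success1, n1, success2, n2):
    """z statistic and two-sided p-value for comparing two proportions."""
    p1, p2 = success1 / n1, success2 / n2
    pooled = (success1 + success2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical cohorts: 468/4000 register after the video vs. 100/4000 controls.
z, p = two_proportion_z(468, 4000, 100, 4000)
print(round(z, 1), p < 0.0001)  # large z, p far below 0.0001
```

With three groups, the published analysis would more likely use a chi-square test over the full 3×2 table; the pairwise z-test above is the simplest building block.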
Ergonomic Training for Tomorrow's Office.
ERIC Educational Resources Information Center
Gross, Clifford M.; Chapnik, Elissa Beth
1987-01-01
The authors focus on issues related to the continual use of video display terminals in the office, including safety and health regulations, potential health problems, and the role of training in minimizing work-related health problems. (CH)
Transducer with a sense of touch
NASA Technical Reports Server (NTRS)
Bejczy, A. K.; Paine, G.
1979-01-01
Matrix of pressure sensors determines shape and pressure distribution of object in contact with its surface. Output can be used to develop pressure map of object's surface, displayed as array of alphanumeric symbols on video monitor.
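Rendering a pressure map as alphanumeric symbols can be sketched by binning sensor values into characters. The symbol set, bin thresholds, and sensor layout below are illustrative assumptions, not the transducer's actual display format.

```python
# Sketch of displaying a pressure-sensor matrix as alphanumeric symbols.
# Symbol set and pressure bins are illustrative assumptions, not the
# transducer's actual display format.

SYMBOLS = " .:*#"  # low -> high pressure (assumed 5-level scale)

def pressure_map(matrix, max_pressure=100.0):
    """Render a matrix of pressure readings as rows of symbols."""
    levels = len(SYMBOLS) - 1
    lines = []
    for row in matrix:
        line = "".join(
            SYMBOLS[min(levels, int(levels * p / max_pressure))] for p in row
        )
        lines.append(line)
    return "\n".join(lines)

# A round object pressed against a 4x4 sensor patch (arbitrary pressure units):
readings = [
    [0, 25, 25, 0],
    [25, 100, 100, 25],
    [25, 100, 100, 25],
    [0, 25, 25, 0],
]
print(pressure_map(readings))
```

The character grid makes both the contact shape (which cells are non-blank) and the pressure distribution (which symbol appears) visible at a glance on a text display.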