Sample records for video display engineering

  1. Video display engineering and optimization system

    NASA Technical Reports Server (NTRS)

    Larimer, James (Inventor)

    1997-01-01

A video display engineering and optimization CAD simulation system for designing an LCD display integrates models of a display device's circuitry, electro-optics, surface geometry, and physiological optics to model the system performance of a display. This CAD system permits system performance and design trade-offs to be evaluated without constructing a physical prototype of the device. The system includes a series of modules which permit analysis of design trade-offs in terms of their visual impact on a viewer looking at the display.

  2. IVTS-CEV (Interactive Video Tape System-Combat Engineer Vehicle) Gunnery Trainer.

    DTIC Science & Technology

    1981-07-01

    video game technology developed for and marketed in consumer video games. The IVTS/CEV is a conceptual/breadboard-level classroom interactive training system designed to train Combat Engineer Vehicle (CEV) gunners in target acquisition and engagement with the main gun. The concept demonstration consists of two units: a gunner station and a display module. The gunner station has optics and gun controls replicating those of the CEV gunner station. The display module contains a standard large-screen color video monitor and a video tape player. The gunner’s sight

  3. Ethernet direct display: a new dimension for in-vehicle video connectivity solutions

    NASA Astrophysics Data System (ADS)

    Rowley, Vincent

    2009-05-01

To improve the local situational awareness (LSA) of personnel in light or heavily armored vehicles, most military organizations recognize the need to equip their fleets with high-resolution digital video systems. Several related upgrade programs are already in progress and, almost invariably, COTS IP/Ethernet is specified as the underlying transport mechanism. The high bandwidths, long reach, networking flexibility, scalability, and affordability of IP/Ethernet make it an attractive choice. There are significant technical challenges, however, in achieving high-performance, real-time video connectivity over the IP/Ethernet platform. As an early pioneer in performance-oriented video systems based on IP/Ethernet, Pleora Technologies has developed core expertise in meeting these challenges and applied a singular focus to innovating within the required framework. The company's field-proven iPORT™ Video Connectivity Solution is deployed successfully in thousands of real-world applications for medical, military, and manufacturing operations. Pleora's latest innovation is eDisplay™, a small-footprint, low-power, highly efficient IP engine that acquires video from an Ethernet connection and sends it directly to a standard HDMI/DVI monitor for real-time viewing. More costly PCs are not required. This paper describes Pleora's eDisplay IP Engine in more detail. It demonstrates how - in concert with other elements of the end-to-end iPORT Video Connectivity Solution - the engine can be used to build standards-based, in-vehicle video systems that increase the safety and effectiveness of military personnel while fully leveraging the advantages of the low-cost COTS IP/Ethernet platform.

  4. Sequential color video to parallel color video converter

    NASA Technical Reports Server (NTRS)

    1975-01-01

    The engineering design, development, breadboard fabrication, test, and delivery of a breadboard field sequential color video to parallel color video converter is described. The converter was designed for use onboard a manned space vehicle to eliminate a flickering TV display picture and to reduce the weight and bulk of previous ground conversion systems.

  5. Engineering visualization utilizing advanced animation

    NASA Technical Reports Server (NTRS)

    Sabionski, Gunter R.; Robinson, Thomas L., Jr.

    1989-01-01

Engineering visualization is the use of computer graphics to depict engineering analysis and simulation in visual form from project planning through documentation. Graphics displays let engineers see data represented dynamically, which permits the quick evaluation of results. The current state of graphics hardware and software generally allows the creation of two types of 3D graphics. The use of animated video as an engineering visualization tool is presented. The engineering, animation, and videography aspects of animated video production are each discussed. Specific issues include the integration of staffing expertise, hardware, software, and the various production processes. A detailed explanation of the animation process reveals the capabilities of this unique engineering visualization method. Automation of the animation and video production processes is covered and future directions are proposed.

  6. Polyplanar optical display electronics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DeSanto, L.; Biscardi, C.

The Polyplanar Optical Display (POD) is a unique display screen which can be used with any projection source. The prototype ten-inch display is two inches thick and has a matte black face which allows for high contrast images. The prototype being developed is a form, fit and functional replacement display for the B-52 aircraft which uses a monochrome ten-inch display. In order to achieve a long lifetime, the new display uses a 100 milliwatt green solid-state laser (10,000 hr. life) at 532 nm as its light source. To produce real-time video, the laser light is being modulated by a Digital Light Processing (DLP™) chip manufactured by Texas Instruments. In order to use the solid-state laser as the light source and also fit within the constraints of the B-52 display, the Digital Micromirror Device (DMD™) circuit board is removed from the Texas Instruments DLP light engine assembly. Due to the compact architecture of the projection system within the display chassis, the DMD™ chip is operated remotely from the Texas Instruments circuit board. The authors discuss the operation of the DMD™ divorced from the light engine and the interfacing of the DMD™ board with various video formats (CVBS, Y/C or S-video and RGB) including the format specific to the B-52 aircraft. A brief discussion of the electronics required to drive the laser is also presented.

  7. Virtual displays for 360-degree video

    NASA Astrophysics Data System (ADS)

    Gilbert, Stephen; Boonsuk, Wutthigrai; Kelly, Jonathan W.

    2012-03-01

    In this paper we describe a novel approach for comparing users' spatial cognition when using different depictions of 360- degree video on a traditional 2D display. By using virtual cameras within a game engine and texture mapping of these camera feeds to an arbitrary shape, we were able to offer users a 360-degree interface composed of four 90-degree views, two 180-degree views, or one 360-degree view of the same interactive environment. An example experiment is described using these interfaces. This technique for creating alternative displays of wide-angle video facilitates the exploration of how compressed or fish-eye distortions affect spatial perception of the environment and can benefit the creation of interfaces for surveillance and remote system teleoperation.
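The decomposition described in the abstract (one 360-degree feed presented as four 90-degree views, two 180-degree views, or one full view) can be approximated with simple array slicing on an equirectangular frame. This is a minimal sketch in NumPy with hypothetical frame dimensions, not the authors' game-engine texture-mapping implementation:

```python
import numpy as np

def split_panorama(frame: np.ndarray, n_views: int) -> list:
    """Split an equirectangular 360-degree frame into n equal horizontal views.

    frame: H x W x 3 image array.
    n_views: 4 gives four 90-degree views, 2 gives two 180-degree views,
    1 keeps the single 360-degree view.
    """
    width = frame.shape[1]
    step = width // n_views
    return [frame[:, i * step:(i + 1) * step] for i in range(n_views)]

# Example: a synthetic 512 x 2048 panorama split into four 90-degree views.
pano = np.zeros((512, 2048, 3), dtype=np.uint8)
views = split_panorama(pano, 4)
print([v.shape for v in views])
```

Each returned view covers 360 / n degrees of azimuth; the fish-eye and compression distortions the experiment studies would come from how these slices are then warped onto display shapes.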

  8. Real-Time Visualization of Tissue Ischemia

    NASA Technical Reports Server (NTRS)

    Bearman, Gregory H. (Inventor); Chrien, Thomas D. (Inventor); Eastwood, Michael L. (Inventor)

    2000-01-01

A real-time display of tissue ischemia which comprises three CCD video cameras, each with a narrow bandwidth filter at the correct wavelength, is discussed. The cameras simultaneously view an area of tissue suspected of having ischemic areas through beamsplitters. The output from each camera is adjusted to give the correct signal intensity for combining with the others into an image for display. If necessary, a digital signal processor (DSP) can implement algorithms for image enhancement prior to display. Current DSP engines are fast enough to give real-time display. Measurement at three wavelengths, combined into a real-time Red-Green-Blue (RGB) video display with a digital signal processing (DSP) board to implement image algorithms, provides direct visualization of ischemic areas.
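The three-camera combination the abstract describes amounts to a per-channel gain adjustment followed by stacking into an RGB frame. The sketch below assumes that reading; the gain values are illustrative placeholders, not the patented calibration:

```python
import numpy as np

def compose_rgb(band1, band2, band3, gains=(1.0, 1.0, 1.0)):
    """Combine three narrow-band grayscale frames into one RGB frame.

    Each band is an H x W float array in [0, 1] from one filtered camera;
    gains adjust each camera's signal intensity before display, per the
    abstract. Values are clipped to the displayable range.
    """
    channels = [np.clip(b * g, 0.0, 1.0)
                for b, g in zip((band1, band2, band3), gains)]
    return np.stack(channels, axis=-1)  # H x W x 3 RGB image

# Example with flat synthetic frames and hypothetical gains.
h, w = 4, 4
rgb = compose_rgb(np.full((h, w), 0.5), np.full((h, w), 0.5),
                  np.full((h, w), 0.5), gains=(1.2, 1.0, 0.8))
print(rgb.shape)
```

Any DSP-based enhancement mentioned in the abstract would slot in between the gain step and the stack.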

  9. Practical question-and-answer guide on VDTS (video display terminals) for BEES (base bioenvironmental engineer). Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Olson, B.M.

    1985-01-01

    The USAF OEHL conducted an extensive literature review of Video Display Terminals (VDTs) and the health problems commonly associated with them. The report is presented in a question-and-answer format in an attempt to paraphrase the most commonly asked questions about VDTs that are forwarded to USAF OEHL/RZN. The questions and answers have been divided into several topic areas: Ionizing Radiation; Nonionizing Radiation; Optical Radiation; Ultrasound; Static Electricity; Health Complaints/Ergonomics; Pregnancy.

  10. Hardware and software improvements to a low-cost horizontal parallax holographic video monitor.

    PubMed

    Henrie, Andrew; Codling, Jesse R; Gneiting, Scott; Christensen, Justin B; Awerkamp, Parker; Burdette, Mark J; Smalley, Daniel E

    2018-01-01

    Displays capable of true holographic video have been prohibitively expensive and difficult to build. With this paper, we present a suite of modularized hardware components and software tools needed to build a HoloMonitor with basic "hacker-space" equipment, highlighting improvements that have enabled the total materials cost to fall to $820, well below that of other holographic displays. It is our hope that the current level of simplicity, development, design flexibility, and documentation will enable the lay engineer, programmer, and scientist to relatively easily replicate, modify, and build upon our designs, bringing true holographic video to the masses.

  11. Polyplanar optical display electronics

    NASA Astrophysics Data System (ADS)

    DeSanto, Leonard; Biscardi, Cyrus

    1997-07-01

The polyplanar optical display (POD) is a unique display screen which can be used with any projection source. The prototype ten-inch display is two inches thick and has a matte black face which allows for high contrast images. The prototype being developed is a form, fit and functional replacement display for the B-52 aircraft which uses a monochrome ten-inch display. In order to achieve a long lifetime, the new display uses a 100 milliwatt green solid-state laser at 532 nm as its light source. To produce real-time video, the laser light is being modulated by a digital light processing (DLP) chip manufactured by Texas Instruments. In order to use the solid-state laser as the light source and also fit within the constraints of the B-52 display, the digital micromirror device (DMD) circuit board is removed from the Texas Instruments DLP light engine assembly. Due to the compact architecture of the projection system within the display chassis, the DMD chip is operated remotely from the Texas Instruments circuit board. We discuss the operation of the DMD divorced from the light engine and the interfacing of the DMD board with various video formats including the format specific to the B-52 aircraft. A brief discussion of the electronics required to drive the laser is also presented.

  12. Neutrons Image Additive Manufactured Turbine Blade in 3-D

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    2016-04-29

The video displays the Inconel 718 turbine blade made by additive manufacturing. First a gray-scale neutron computed tomogram (CT) is displayed with transparency in order to show the internal structure. Then the neutron CT is overlapped with the engineering drawing that was used to print the part, and a comparison of external and internal structures is possible. This provides a map of the accuracy of the printed turbine (printing tolerance). Internal surface roughness can also be observed. Credits: Experimental measurements: Hassina Z. Bilheaux; video and printing tolerance analysis: Jean C. Bilheaux.

  13. Bar-Chart-Monitor System For Wind Tunnels

    NASA Technical Reports Server (NTRS)

    Jung, Oscar

    1993-01-01

Real-time monitor system provides bar-chart displays of significant operating parameters, developed for the National Full-Scale Aerodynamic Complex at Ames Research Center. Designed to gather and process sensory data on operating conditions of wind tunnels and models, and to display data for test engineers and technicians concerned with safety and validation of operating conditions. Bar-chart video monitor displays data in as many as 50 channels at a maximum update rate of 2 Hz in a format facilitating quick interpretation.

  14. Applying emerging digital video interface standards to airborne avionics sensor and digital map integrations: benefits outweigh the initial costs

    NASA Astrophysics Data System (ADS)

    Kuehl, C. Stephen

    1996-06-01

Video signal system performance can be compromised in a military aircraft cockpit management system (CMS) with the tailoring of vintage Electronics Industries Association (EIA) RS170 and RS343A video interface standards. Video analog interfaces degrade when induced system noise is present. Further signal degradation has been traditionally associated with signal data conversions between avionics sensor outputs and the cockpit display system. If the CMS engineering process is not carefully applied during the avionics video and computing architecture development, extensive and costly redesign will occur when visual sensor technology upgrades are incorporated. Close monitoring of and technical involvement in video standards groups provide the knowledge base necessary for avionic systems engineering organizations to architect adaptable and extendible cockpit management systems. With the Federal Communications Commission (FCC) in the process of adopting the Digital HDTV Grand Alliance System standard proposed by the Advanced Television Systems Committee (ATSC), the entertainment and telecommunications industries are adopting and supporting the emergence of new serial/parallel digital video interfaces and data compression standards that will drastically alter present NTSC-M video processing architectures. The re-engineering of the U.S. broadcasting system must initially preserve the electronic equipment wiring networks within broadcast facilities to make the transition to HDTV affordable. International committee activities in technical forums like ITU-R (formerly CCIR), ANSI/SMPTE, IEEE, and ISO/IEC are establishing global consensus on video signal parameterizations that support a smooth transition from existing analog-based broadcasting facilities to fully digital computerized systems. An opportunity exists for implementing these new video interface standards over existing video coax/triax cabling in military aircraft cockpit management systems.
Reductions in signal conversion processing steps, major improvement in video noise reduction, and an added capability to pass audio/embedded digital data within the digital video signal stream are the significant performance increases associated with the incorporation of digital video interface standards. By analyzing the historical progression of military CMS developments, establishing a systems engineering process for CMS design, tracing the commercial evolution of video signal standardization, adopting commercial video signal terminology and definitions, and comparing and contrasting CMS architecture modifications using digital video interfaces, this paper provides a technical explanation of how a systems engineering process approach to video interface standardization can result in extendible and affordable cockpit management systems.

  15. Affordable multisensor digital video architecture for 360° situational awareness displays

    NASA Astrophysics Data System (ADS)

    Scheiner, Steven P.; Khan, Dina A.; Marecki, Alexander L.; Berman, David A.; Carberry, Dana

    2011-06-01

One of the major challenges facing today's military ground combat vehicle operations is the ability to achieve and maintain full-spectrum situational awareness while under armor (i.e. closed hatch). Thus, the ability to perform basic tasks such as driving, maintaining local situational awareness, surveillance, and targeting will require a high-density array of real-time information be processed, distributed, and presented to the vehicle operators and crew in near real time (i.e. low latency). Advances in display and sensor technologies are providing never-before-seen opportunities to supply large amounts of high fidelity imagery and video to the vehicle operators and crew in real time. To fully realize the advantages of these emerging display and sensor technologies, an underlying digital architecture must be developed that is capable of processing these large amounts of video and data from separate sensor systems and distributing it simultaneously within the vehicle to multiple vehicle operators and crew. This paper will examine the systems and software engineering efforts required to overcome these challenges and will address development of an affordable, integrated digital video architecture. The approaches evaluated will give both current and future ground combat vehicle systems the flexibility to readily adopt emerging display and sensor technologies, while optimizing the Warfighter Machine Interface (WMI), minimizing lifecycle costs, and improving the survivability of the vehicle crew working in closed-hatch systems during complex ground combat operations.

  16. AFRC2017-0076-1

    NASA Image and Video Library

    2017-04-04

NASA Armstrong’s Mission Control Center, or MCC, is where the culmination of all data gathering occurs. Engineers, flight controllers and researchers monitor flights and missions as they are carried out. Data and video run through the MCC and are recorded, displayed and archived. Data is then processed and prepared for post-flight analysis.

  17. Airborne Navigation Remote Map Reader Evaluation.

    DTIC Science & Technology

    1986-03-01

EVALUATION. James C. Byrd, Integrated Controls/Displays Branch, Avionics Systems Division, Directorate of Avionics Engineering, March 1986 Final Report. ... Contents include: 3.1 Resolution; 3.2 Accuracy; 3.3 Symbology; 3.4 Video Standard; 3.5 Simulator Control Box; 3.6 Software; 3.7 Display Performance; 3.8 Reliability. ... can be selected depending on the detail required and will automatically be presented at his present position. The French RMR uses a Flying Spot Scanner

  18. Model-Based Method for Terrain-Following Display Design

    DTIC Science & Technology

    1989-06-15

data into a more compact set of model parameters. These model parameters provide insights into the interpretation of the experimental results as well... 2.8 presents the VSD display, and is taken from figure 1.95 of the B-1B Flight Manual, NA-77-400. There are two primary elements in the VSD: 1) the... baseline VSD based on figures such as these from the B-1B Flight Manual, a video tape of an operating VSD in the engineering research simulator, and

  19. Optimization of the polyplanar optical display electronics for a monochrome B-52 display

    NASA Astrophysics Data System (ADS)

    DeSanto, Leonard

    1998-09-01

The Polyplanar Optical Display (POD) is a unique display screen which can be used with any projection source. The prototype ten-inch display is two inches thick and has a matte black face which allows for high contrast images. The prototype being developed is a form, fit and functional replacement display for the B-52 aircraft which uses a monochrome ten-inch display. In order to achieve a long lifetime, the new display uses a new 200 mW green solid-state laser (10,000 hr. life) at 532 nm as its light source. To produce real-time video, the laser light is being modulated by a Digital Light Processing (DLP™) chip manufactured by Texas Instruments (TI). In order to use the solid-state laser as the light source and also fit within the constraints of the B-52 display, the Digital Micromirror Device (DMD™) chip is operated remotely from the Texas Instruments circuit board. In order to achieve increased brightness, a monochrome digitizing interface was investigated. The operation of the DMD™ divorced from the light engine and the interfacing of the DMD™ board with the RS-170 video format specific to the B-52 aircraft will be discussed, including the increased brightness of the monochrome digitizing interface. A brief description of the electronics required to drive the new 200 mW laser is also presented.

  20. 77 FR 3000 - Certain Video Displays and Products Using and Containing Same; Receipt of Complaint; Solicitation...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-01-20

    ... INTERNATIONAL TRADE COMMISSION [DN 2871] Certain Video Displays and Products Using and Containing... Trade Commission has received a complaint entitled In Re Certain Video Displays and Products Using and... for importation, and the sale within the United States after importation of certain video displays and...

  1. Aerial Video Imaging

    NASA Technical Reports Server (NTRS)

    1991-01-01

When Michael Henry wanted to start an aerial video service, he turned to Johnson Space Center for assistance. Two NASA engineers - one had designed and developed TV systems in the Apollo, Skylab, Apollo-Soyuz and Space Shuttle programs - designed a wing-mounted fiberglass camera pod. Camera head and angles are adjustable, and the pod is shaped to reduce vibration. The controls are located so a solo pilot can operate the system. A microprocessor displays latitude, longitude, and bearing, and a GPS receiver provides position data for possible legal references. The service has been successfully utilized by railroads, oil companies, real estate companies, etc.

  2. Optimization of the polyplanar optical display electronics for a monochrome B-52 display

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DeSanto, L.

The Polyplanar Optical Display (POD) is a unique display screen which can be used with any projection source. The prototype ten-inch display is two inches thick and has a matte black face which allows for high contrast images. The prototype being developed is a form, fit and functional replacement display for the B-52 aircraft which uses a monochrome ten-inch display. In order to achieve a long lifetime, the new display uses a new 200 mW green solid-state laser (10,000 hr. life) at 532 nm as its light source. To produce real-time video, the laser light is being modulated by a Digital Light Processing (DLP™) chip manufactured by Texas Instruments (TI). In order to use the solid-state laser as the light source and also fit within the constraints of the B-52 display, the Digital Micromirror Device (DMD™) chip is operated remotely from the Texas Instruments circuit board. In order to achieve increased brightness, a monochrome digitizing interface was investigated. The operation of the DMD™ divorced from the light engine and the interfacing of the DMD™ board with the RS-170 video format specific to the B-52 aircraft will be discussed, including the increased brightness of the monochrome digitizing interface. A brief description of the electronics required to drive the new 200 mW laser is also presented.

  3. Mobile Vehicle Teleoperated Over Wireless IP

    DTIC Science & Technology

    2007-06-13

VideoLAN software suite. The VLC media player portion of this suite handles network streaming of video, as well as the receipt and display of the video... is found in appendix C.7. Video Display: The video feed is displayed for the operator using VLC, opened independently from the control-sending program... This gives the operator the most choice in how to configure the display. To connect VLC to the feed, all you need is the IP address from the Java

  4. 77 FR 9964 - Certain Video Displays and Products Using and Containing Same

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-02-21

    ... INTERNATIONAL TRADE COMMISSION [Investigation No. 337-TA-828] Certain Video Displays and Products... importation, and the sale within the United States after importation of certain video displays and products... States, the sale for importation, or the sale within the United States after importation of certain video...

  5. VAP/VAT: video analytics platform and test bed for testing and deploying video analytics

    NASA Astrophysics Data System (ADS)

    Gorodnichy, Dmitry O.; Dubrofsky, Elan

    2010-04-01

Deploying video analytics in operational environments is extremely challenging. This paper presents a methodological approach developed by the Video Surveillance and Biometrics Section (VSB) of the Science and Engineering Directorate (S&E) of the Canada Border Services Agency (CBSA) to resolve these problems. A three-phase approach to enable VA deployment within an operational agency is presented and the Video Analytics Platform and Testbed (VAP/VAT) developed by the VSB section is introduced. In addition to allowing the integration of third-party and in-house built VA codes into an existing video surveillance infrastructure, VAP/VAT also allows the agency to conduct an unbiased performance evaluation of the cameras and VA software available on the market. VAP/VAT consists of two components: EventCapture, which serves to automatically detect a "Visual Event", and EventBrowser, which serves to display and peruse the "Visual Details" captured at the "Visual Event". To deal with open-architecture as well as closed-architecture cameras, two video-feed capture mechanisms have been developed within the EventCapture component: IPCamCapture and ScreenCapture.
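The abstract does not say how EventCapture decides that a "Visual Event" has occurred; a common baseline for this kind of detector is inter-frame differencing. The following is a hedged sketch of that baseline with illustrative thresholds, not the CBSA's actual code:

```python
import numpy as np

def detect_visual_event(prev_frame, frame, pixel_delta=25, area_fraction=0.01):
    """Flag a 'visual event' when enough pixels change between frames.

    Both frames are H x W uint8 grayscale arrays. pixel_delta is the
    per-pixel intensity change counted as motion; area_fraction is the
    share of the frame that must change. Both thresholds are assumptions.
    """
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    changed = np.count_nonzero(diff > pixel_delta)
    return changed >= area_fraction * frame.size

# A static scene triggers nothing; a large intruding object does.
quiet = np.zeros((240, 320), dtype=np.uint8)
busy = quiet.copy()
busy[:120, :] = 200  # simulate an object covering half the frame
print(detect_visual_event(quiet, quiet), detect_visual_event(quiet, busy))
```

A production detector would add background modeling and debouncing, but the thresholding structure is the same.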

  6. Design of a projection display screen with vanishing color shift for rear-projection HDTV

    NASA Astrophysics Data System (ADS)

    Liu, Xiu; Zhu, Jin-lin

    1996-09-01

Using bi-convex cylindrical lenses in a matrix structure, transmissive projection display screens with high contrast and wide viewing angles have been widely used in large rear-projection TVs and video projectors, but they exhibit an inherent color shift that has puzzled designers of display screens during RGB projection-tube in-line adjustment. Based on the method of light-beam tracing, general software for designing projection display screens has been developed and a computer model of vanishing color shift for rear-projection HDTV has been completed. This paper discusses the practical design method for eliminating the color-shift defect and describes the relations between the primary optical parameters of the display screen and the relative geometry of the lens surfaces. The distribution of optical gain versus viewing angle and its influence on engineering design are briefly analyzed.

  7. Method for Visually Integrating Multiple Data Acquisition Technologies for Real Time and Retrospective Analysis

    NASA Technical Reports Server (NTRS)

    Bogart, Edward H. (Inventor); Pope, Alan T. (Inventor)

    2000-01-01

A system for display on a single video display terminal of multiple physiological measurements is provided. A subject is monitored by a plurality of instruments which feed data to a computer programmed to receive data, calculate data products such as index of engagement and heart rate, and display the data in a graphical format simultaneously on a single video display terminal. In addition, live video representing the view of the subject and the experimental setup may also be integrated into the single data display. The display may be recorded on a standard video tape recorder for retrospective analysis.
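The "data products" named in the abstract can be sketched in a few lines. In Pope's related published work the EEG engagement index takes the form beta / (alpha + theta); the sketch below assumes that form, and all input values are made up for illustration:

```python
def engagement_index(beta: float, alpha: float, theta: float) -> float:
    """EEG engagement index: ratio of beta band power to alpha + theta power.

    Form taken from Pope et al.'s published engagement-index work; the
    patent itself may compute it differently.
    """
    return beta / (alpha + theta)

def heart_rate(beat_intervals_s) -> float:
    """Mean heart rate in beats per minute from inter-beat intervals (seconds)."""
    mean_ibi = sum(beat_intervals_s) / len(beat_intervals_s)
    return 60.0 / mean_ibi

# Hypothetical band powers and inter-beat intervals.
print(round(engagement_index(12.0, 8.0, 4.0), 2))   # 1.0
print(round(heart_rate([0.8, 0.85, 0.75]), 1))      # 75.0
```

Each computed product would then be drawn into its own panel alongside the live video feed, per the abstract.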

  8. JAMSTEC E-library of Deep-sea Images (J-EDI) Realizes a Virtual Journey to the Earth's Unexplored Deep Ocean

    NASA Astrophysics Data System (ADS)

    Sasaki, T.; Azuma, S.; Matsuda, S.; Nagayama, A.; Ogido, M.; Saito, H.; Hanafusa, Y.

    2016-12-01

The Japan Agency for Marine-Earth Science and Technology (JAMSTEC) archives a large amount of deep-sea research videos and photos obtained by JAMSTEC's research submersibles and vehicles with cameras. The web site "JAMSTEC E-library of Deep-sea Images: J-EDI" (http://www.godac.jamstec.go.jp/jedi/e/) has made videos and photos available to the public via the Internet since 2011. Users can search for target videos and photos by keywords, easy-to-understand icons, and dive information at J-EDI because operations staff classify videos and photos by content, e.g. living organisms and geological environment, and add comments to them. Dive survey data, including videos and photos, are not only valuable academically but also helpful for education and outreach activities. To improve visibility for broader communities, this year we added new functions that display various dive survey data in 3D, synchronized with videos. New functions: Users can search for dive survey data on 3D maps with plotted dive points using the WebGL virtual map engine "Cesium". By selecting a dive point, users can watch deep-sea videos and photos and associated environmental data, e.g. water temperature, salinity, rock and biological sample photos, obtained by the dive survey. Users can browse a dive track visualized in 3D virtual space using the WebGL JavaScript library. By synchronizing this virtual dive track with videos, users can watch deep-sea videos recorded at a point on a dive track. Users can play an animation in which a submersible-shaped polygon automatically traces a 3D virtual dive track while displays of dive survey data stay synchronized with the traced track. Users can directly refer to additional information from other JAMSTEC data sites, such as the marine biodiversity database, marine biological sample database, rock sample database, and cruise and dive information database, on each page where a 3D virtual dive track is displayed.
A 3D visualization of a dive track lets users experience a virtual dive survey. In addition, by synchronizing a virtual dive track with videos, it is easy to understand the living organisms and geological environments of a dive point. Therefore, these functions will visually support understanding of deep-sea environments in lectures and educational activities.

  9. PCI-based WILDFIRE reconfigurable computing engines

    NASA Astrophysics Data System (ADS)

    Fross, Bradley K.; Donaldson, Robert L.; Palmer, Douglas J.

    1996-10-01

WILDFORCE is the first PCI-based custom reconfigurable computer that is based on the Splash 2 technology transferred from the National Security Agency and the Institute for Defense Analyses, Supercomputing Research Center (SRC). The WILDFORCE architecture has many of the features of the WILDFIRE computer, such as field-programmable gate array (FPGA) based processing elements, linear array and crossbar interconnection, and high-performance memory and I/O subsystems. New features introduced in the PCI-based WILDFIRE systems include memory/processor options that can be added to any processing element. These options include static and dynamic memory, digital signal processors (DSPs), FPGAs, and microprocessors. In addition to memory/processor options, many different application-specific connectors can be used to extend the I/O capabilities of the system, including systolic I/O, camera input and video display output. This paper also discusses how this new PCI-based reconfigurable computing engine is used for rapid prototyping, real-time video processing and other DSP applications.

  10. A Scalable, Collaborative, Interactive Light-field Display System

    DTIC Science & Technology

    2014-06-01

    Keywords: light-field, holographic displays, 3D display, holographic video, integral photography, plenoptic, computed photography. Distribution A: Approved

  11. Display device-adapted video quality-of-experience assessment

    NASA Astrophysics Data System (ADS)

    Rehman, Abdul; Zeng, Kai; Wang, Zhou

    2015-03-01

    Today's viewers consume video content from a variety of connected devices, including smart phones, tablets, notebooks, TVs, and PCs. This imposes significant challenges for managing video traffic efficiently to ensure an acceptable quality-of-experience (QoE) for the end users, as the perceptual quality of video content strongly depends on the properties of the display device and the viewing conditions. State-of-the-art full-reference objective video quality assessment algorithms do not take into account the combined impact of display device properties, viewing conditions, and video resolution when performing video quality assessment. We performed a subjective study in order to understand the impact of the aforementioned factors on perceptual video QoE. We also propose a full-reference video QoE measure, named SSIMplus, that provides real-time prediction of the perceptual quality of a video based on human visual system behaviors, video content characteristics (such as spatial and temporal complexity, and video resolution), display device properties (such as screen size, resolution, and brightness), and viewing conditions (such as viewing distance and angle). Experimental results show that the proposed algorithm outperforms state-of-the-art video quality measures in terms of accuracy and speed.
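
    A display-adapted measure of this kind must reduce screen size, resolution, and viewing distance to a common perceptual unit. One standard such quantity is pixels per degree of visual angle; a minimal sketch (illustrative only, not the SSIMplus formula):

```python
import math

def pixels_per_degree(screen_h_cm, pixels_v, distance_cm):
    """Vertical pixels per degree of visual angle for a given display and viewer.

    The screen subtends 2*atan(h / 2d) degrees vertically; dividing the
    vertical pixel count by that angle gives the sampling density the eye sees.
    """
    angle_deg = 2.0 * math.degrees(math.atan(screen_h_cm / (2.0 * distance_cm)))
    return pixels_v / angle_deg
```

    The same physical video shown on a phone at arm's length and on a TV across the room yields very different pixels-per-degree values, which is why a device-agnostic quality score cannot predict QoE on both.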

  12. Using ARINC 818 Avionics Digital Video Bus (ADVB) for military displays

    NASA Astrophysics Data System (ADS)

    Alexander, Jon; Keller, Tim

    2007-04-01

    ARINC 818 Avionics Digital Video Bus (ADVB) is a new digital video interface and protocol standard developed especially for high bandwidth uncompressed digital video. The first draft of this standard, released in January of 2007, has been advanced by ARINC and the aerospace community to meet the acute needs of commercial aviation for higher performance digital video. This paper analyzes ARINC 818 for use in military display systems found in avionics, helicopters, and ground vehicles. The flexibility of ARINC 818 for the diverse resolutions, grayscales, pixel formats, and frame rates of military displays is analyzed as well as the suitability of ARINC 818 to support requirements for military video systems including bandwidth, latency, and reliability. Implementation issues relevant to military displays are presented.
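
    A first-order suitability check for such a link is whether the uncompressed video payload fits a given link rate. A hedged sketch (the rate table follows the Fibre Channel serial rates on which ARINC 818 is based; the framing-overhead margin is an assumed placeholder, not a figure from the standard):

```python
# Fibre Channel serial rates on which ARINC 818 link speeds are based (Gb/s).
ARINC818_LINK_RATES_GBPS = [1.0625, 2.125, 3.1875, 4.25, 8.5]

def min_link_rate(width, height, bits_per_pixel, fps, overhead=1.15):
    """Pick the slowest link rate whose payload capacity covers the video.

    8b/10b line coding carries 8 payload bits per 10 line bits (factor 0.8);
    `overhead` is an assumed margin for container/framing bytes, not a value
    taken from the ARINC 818 specification.
    """
    needed_gbps = width * height * bits_per_pixel * fps * overhead / 1e9
    for rate in ARINC818_LINK_RATES_GBPS:
        if rate * 0.8 >= needed_gbps:
            return rate
    raise ValueError("video exceeds the fastest rate in this table")
```

    A check like this is one way to reason about the bandwidth flexibility the paper analyzes across the diverse resolutions and frame rates of military displays.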

  13. Bathymetric and underwater video survey of Lower Granite Reservoir and vicinity, Washington and Idaho, 2009-10

    USGS Publications Warehouse

    Williams, Marshall L.; Fosness, Ryan L.; Weakland, Rhonda J.

    2012-01-01

    The U.S. Geological Survey conducted a bathymetric survey of the Lower Granite Reservoir, Washington, using a multibeam echosounder, and an underwater video mapping survey during autumn 2009 and winter 2010. The surveys were conducted as part of the U.S. Army Corps of Engineers' study on sediment deposition and control in the reservoir. The multibeam echosounder survey was performed in 1-mile increments between river mile (RM) 130 and 142 on the Snake River, and between RM 0 and 2 on the Clearwater River. The result of the survey is a digital elevation dataset in ASCII coordinate positioning data (easting, northing, and elevation) useful in rendering a 3×3-foot point grid showing bed elevation and reservoir geomorphology. The underwater video mapping survey was conducted from RM 107.73 to 141.78 on the Snake River and RM 0 to 1.66 on the Clearwater River, along 61 cross sections established by the U.S. Army Corps of Engineers, and along dredge material deposit transects. More than 900 videos and 90 bank photographs were used to characterize the sediment facies and ground-truth the multibeam echosounder data. Combined, the surveys were used to create a surficial sediment facies map that displays type of substrate, level of embeddedness, and presence of silt.
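
    Rendering a point grid of bed elevations from scattered echosounder soundings is, at its simplest, cell-averaging of (easting, northing, elevation) points. A minimal sketch (an illustration of the gridding idea, not the USGS processing chain):

```python
import numpy as np

def grid_elevations(easting, northing, elevation, cell=3.0):
    """Average scattered (easting, northing, elevation) soundings onto a
    square grid with `cell`-foot spacing; empty cells are NaN."""
    e, n, z = (np.asarray(a, dtype=float) for a in (easting, northing, elevation))
    ci = ((e - e.min()) // cell).astype(int)   # column index per sounding
    ri = ((n - n.min()) // cell).astype(int)   # row index per sounding
    sums = np.zeros((ri.max() + 1, ci.max() + 1))
    counts = np.zeros_like(sums)
    np.add.at(sums, (ri, ci), z)               # unbuffered accumulation per cell
    np.add.at(counts, (ri, ci), 1)
    return np.where(counts > 0, sums / np.maximum(counts, 1), np.nan)
```

    Real multibeam processing adds sound-speed, tide, and outlier corrections before gridding, but the cell-averaging step is the core of producing a regular elevation surface from irregular soundings.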

  14. Virtual Reality System Offers a Wide Perspective

    NASA Technical Reports Server (NTRS)

    2008-01-01

    Robot Systems Technology Branch engineers at Johnson Space Center created the remotely controlled Robonaut for use as an additional "set of hands" in extravehicular activities (EVAs) and to allow exploration of environments that would be too dangerous or difficult for humans. One of the problems Robonaut developers encountered was that the robot's interface offered an extremely limited field of vision. Johnson robotics engineer Darby Magruder explained that the 40-degree field-of-view (FOV) in initial robotic prototypes provided very narrow tunnel vision, which posed difficulties for Robonaut operators trying to see the robot's surroundings. Because of the narrow FOV, NASA decided to reach out to the private sector for assistance. In addition to a wider FOV, NASA also desired higher resolution in a head-mounted display (HMD) with the added ability to capture and display video.

  15. A portable high-definition electronic endoscope based on embedded system

    NASA Astrophysics Data System (ADS)

    Xu, Guang; Wang, Liqiang; Xu, Jin

    2012-11-01

    This paper presents a low-power, portable high-definition (HD) electronic endoscope based on a Cortex-A8 embedded system. A 1/6-inch CMOS image sensor is used to acquire HD images of 1280×800 pixels. The camera interface of the A8 is designed to support images of various sizes and multiple video input formats such as the ITU-R BT.601/656 standard. Image rotation (90 degrees clockwise) and image processing functions are achieved by the camera interface (CAMIF). The decode engine of the processor plays back or records HD video at 30 frames per second, and the built-in HDMI interface transmits high-definition images to an external display. Image processing procedures such as demosaicking, color correction and auto white balance are realized on the A8 platform. Other functions are selected through OSD settings. An LCD panel displays the real-time images. Snapshot pictures or compressed videos are saved to an SD card or transmitted to a computer through a USB interface. The size of the camera head is 4×4.8×15 mm with more than 3 meters working distance. The whole endoscope system can be powered by a lithium battery, and has the advantages of small size, low cost and portability.
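
    Of the image processing steps mentioned, auto white balance is the easiest to illustrate. A minimal gray-world sketch (one common algorithm; the paper does not specify which method the A8 platform actually uses):

```python
import numpy as np

def gray_world_awb(rgb):
    """Gray-world auto white balance: assume the scene averages to gray,
    so scale each channel until all three channel means are equal."""
    img = np.asarray(rgb, dtype=np.float64)
    means = img.reshape(-1, 3).mean(axis=0)        # per-channel means (R, G, B)
    gains = means.mean() / means                   # bring each channel to the mean
    balanced = np.rint(np.clip(img * gains, 0, 255))  # round before quantizing
    return balanced.astype(np.uint8)
```

    On an embedded pipeline this per-channel gain is typically applied in fixed point right after demosaicking, before color correction.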

  16. Signal processing and display interface studies. [performance tests - design analysis/equipment specifications

    NASA Technical Reports Server (NTRS)

    1975-01-01

    Signal processing equipment specifications, operating and test procedures, and systems design and engineering are described. Five subdivisions of the overall circuitry are treated: (1) the spectrum analyzer; (2) the spectrum integrator; (3) the velocity discriminator; (4) the display interface; and (5) the formatter. They function in series: (1) first in analog form to provide frequency resolution, (2) then in digital form to achieve signal to noise improvement (video integration) and frequency discrimination, and (3) finally in analog form again for the purpose of real-time display of the significant velocity data. The formatter collects binary data from various points in the processor and provides a serial output for bi-phase recording. Block diagrams are used to illustrate the system.
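
    The analog-digital-analog chain above pairs frequency resolution (spectrum analysis) with signal-to-noise improvement (video integration, i.e. averaging successive magnitude spectra). A minimal sketch of those two digital stages (an illustration of the principle, not the documented circuitry):

```python
import numpy as np

def integrated_spectrum(signal, n_fft, n_avg):
    """Stage 1: spectrum analysis of consecutive n_fft-sample blocks.
    Stage 2: video integration, averaging n_avg magnitude spectra,
    which improves the signal-to-noise ratio before display."""
    blocks = np.asarray(signal)[: n_fft * n_avg].reshape(n_avg, n_fft)
    return np.abs(np.fft.rfft(blocks, axis=1)).mean(axis=0)
```

    Averaging N independent magnitude spectra reduces the variance of the noise floor roughly by a factor of N while the coherent tone stays fixed, which is exactly the "signal to noise improvement" the integrator stage provides.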

  17. Vroom: designing an augmented environment for remote collaboration in digital cinema production

    NASA Astrophysics Data System (ADS)

    Margolis, Todd; Cornish, Tracy

    2013-03-01

    As media technologies become increasingly affordable, compact and inherently networked, new generations of telecollaborative platforms continue to arise which integrate these new affordances. Virtual reality has been primarily concerned with creating simulations of environments that can transport participants to real or imagined spaces that replace the "real world". Meanwhile, Augmented Reality systems have evolved to interleave objects from Virtual Reality environments into the physical landscape. Perhaps now there is a new class of systems that reverses this precept, enhancing dynamic media landscapes and immersive physical display environments to enable intuitive data exploration through collaboration. Vroom (Virtual Room) is a next-generation reconfigurable tiled display environment in development at the California Institute for Telecommunications and Information Technology (Calit2) at the University of California, San Diego. Vroom enables freely scalable digital collaboratories, connecting distributed, high-resolution visualization resources for collaborative work in the sciences, engineering and the arts. Vroom transforms a physical space into an immersive media environment with large-format interactive display surfaces, video teleconferencing and spatialized audio built on a high-speed optical network backbone. Vroom enables group collaboration for local and remote participants to share knowledge and experiences. Possible applications include: remote learning, command and control, storyboarding, post-production editorial review, high resolution video playback, 3D visualization, screencasting and image, video and multimedia file sharing. To support these various scenarios, Vroom features support for multiple user interfaces (optical tracking, touch UI, gesture interface, etc.), directional and spatialized audio, giga-pixel image interactivity, 4K video streaming, 3D visualization and telematic production.
    This paper explains the design process used to make Vroom an accessible and intuitive immersive environment for remote collaboration, specifically for digital cinema production.

  18. 47 CFR 79.101 - Closed caption decoder requirements for analog television receivers.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ...) BROADCAST RADIO SERVICES CLOSED CAPTIONING AND VIDEO DESCRIPTION OF VIDEO PROGRAMMING § 79.101 Closed... display the captioning for whichever channel the user selects. The TV Mode of operation allows the video... and rows. The characters must be displayed clearly separated from the video over which they are placed...

  19. Computer-aided video exposure monitoring.

    PubMed

    Walsh, P T; Clark, R D; Flaherty, S; Gentry, S J

    2000-01-01

    A computer-aided video exposure monitoring system was used to record exposure information. The system comprised a handheld camcorder, a portable video cassette recorder, a radio-telemetry transmitter/receiver, handheld or notebook computers for remote data logging, photoionization gas/vapor detectors (PIDs), and a personal aerosol monitor. The following workplaces were surveyed using the system: dry cleaning establishments--monitoring tetrachloroethylene in the air and in breath; printing works--monitoring white spirit type solvent; tire manufacturing factory--monitoring rubber fume; and a slate quarry--monitoring respirable dust and quartz. The system based on the handheld computer, in particular, simplified the data acquisition process compared with earlier systems in use by our laboratory. The equipment is more compact and easier to operate, and allows more accurate calibration of the instrument reading on the video image. Although a variety of data display formats are possible, the best format for videos intended for educational and training purposes was the review-preview chart superimposed on the video image of the work process. Recommendations for reducing exposure by engineering or by modifying work practice were possible through use of the video exposure system in the dry cleaning and tire manufacturing applications. The slate quarry work illustrated how the technique can be used to test ventilation configurations quickly to see their effect on the worker's personal exposure.
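
    Exposure records like these are typically summarized as a time-weighted average over a working shift. A minimal sketch of the standard 8-hour TWA calculation (illustrative; the paper does not state its exact formula):

```python
def time_weighted_average(samples, shift_minutes=480.0):
    """8-hour time-weighted average exposure from (duration_min, conc_ppm)
    pairs; any remaining shift time is assumed to be at zero exposure."""
    exposed = sum(duration * conc for duration, conc in samples)
    return exposed / shift_minutes
```

    The video overlay adds what the TWA alone cannot: it ties each short-term peak in the logged concentration to the specific task being performed at that moment.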

  20. Quick-disconnect harness system for helmet-mounted displays

    NASA Astrophysics Data System (ADS)

    Bapu, P. T.; Aulds, M. J.; Fuchs, Steven P.; McCormick, David M.

    1992-10-01

    We have designed a pilot's harness-mounted, high-voltage quick-disconnect connector with 62 pins, to transmit voltages up to 13.5 kV and video signals with 70 MHz bandwidth, for a binocular helmet-mounted display system. It connects and disconnects with power off, and disconnects 'hot' without pilot intervention and without producing external sparks or exposing hot embers to the explosive cockpit environment. We have implemented a procedure in which the high voltage pins disconnect inside a hermetically sealed unit before the physical separation of the connector. The 'hot' separation triggers a crowbar circuit in the high voltage power supplies for additional protection. Conductor locations and shields are designed to reduce capacitance in the circuit and avoid crosstalk among adjacent circuits. The quick-disconnect connector and wiring harness are human-engineered to ensure pilot safety and mobility. The connector backshell is equipped with two hybrid video amplifiers to improve the clarity of the video signals. Shielded wires and coaxial cables are molded as a multi-layered ribbon for maximum flexibility between the pilot's harness and helmet. Stiff cabling is provided between the quick-disconnect connector and the aircraft console to control behavior during seat ejection. The components of the system have been successfully tested for safety, performance, ergonomic considerations, and reliability.

  1. Live HDR video streaming on commodity hardware

    NASA Astrophysics Data System (ADS)

    McNamee, Joshua; Hatchett, Jonathan; Debattista, Kurt; Chalmers, Alan

    2015-09-01

    High Dynamic Range (HDR) video provides a step change in viewing experience, for example the ability to clearly see the soccer ball when it is kicked from the shadow of the stadium into sunshine. To achieve the full potential of HDR video, so-called true HDR, it is crucial that all the dynamic range that was captured is delivered to the display device and tone mapping is confined only to the display. Furthermore, to ensure widespread uptake of HDR imaging, it should be low cost and available on commodity hardware. This paper describes an end-to-end HDR pipeline for capturing, encoding and streaming high-definition HDR video in real-time using off-the-shelf components. All the lighting that is captured by HDR-enabled consumer cameras is delivered via the pipeline to any display, including HDR displays and even mobile devices with minimum latency. The system thus provides an integrated HDR video pipeline that includes everything from capture to post-production, archival and storage, compression, transmission, and display.
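
    Confining tone mapping to the display means the pipeline delivers scene-referred luminance and the display applies an operator such as Reinhard's global curve. A minimal sketch of that last stage (the paper does not say which operator its target displays use):

```python
import numpy as np

def reinhard_tonemap(luminance, white=None):
    """Reinhard global operator: compress HDR luminance L to L / (1 + L),
    optionally with a white point above which values burn out to 1.0."""
    L = np.asarray(luminance, dtype=np.float64)
    if white is None:
        return L / (1.0 + L)
    return L * (1.0 + L / white**2) / (1.0 + L)
```

    Because the curve is applied only at the display, the full captured dynamic range survives encoding and streaming, which is the "true HDR" property the pipeline is built around.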

  2. Compression of stereoscopic video using MPEG-2

    NASA Astrophysics Data System (ADS)

    Puri, A.; Kollarits, Richard V.; Haskell, Barry G.

    1995-10-01

    Many current as well as emerging applications in areas of entertainment, remote operations, manufacturing industry and medicine can benefit from the depth perception offered by stereoscopic video systems, which employ two views of a scene imaged under the constraints imposed by the human visual system. Among the many challenges to be overcome for practical realization and widespread use of 3D/stereoscopic systems are good 3D displays and efficient techniques for digital compression of enormous amounts of data while maintaining compatibility with normal video decoding and display systems. After a brief introduction to the basics of 3D/stereo, including issues of depth perception, stereoscopic 3D displays and terminology in stereoscopic imaging and display, we present an overview of tools in the MPEG-2 video standard that are relevant to our discussion on compression of stereoscopic video, which is the main topic of this paper. Next, we outline the various approaches for compression of stereoscopic video and then focus on compatible stereoscopic video coding using MPEG-2 Temporal scalability concepts. Two types of prediction structures become potentially possible for compatible coding: disparity-compensated prediction, and combined disparity- and motion-compensated prediction. To further improve coding performance and display quality, preprocessing for reducing mismatch between the two views forming stereoscopic video is considered. Results of simulations performed on stereoscopic video of normal TV resolution are then reported, comparing the performance of the two prediction structures with the simulcast solution. It is found that combined disparity- and motion-compensated prediction offers the best performance. Results indicate that compression of both views of stereoscopic video of normal TV resolution appears feasible in a total of 6 to 8 Mbit/s. We then discuss multi-viewpoint video, a generalization of stereoscopic video.
    Finally, we describe ongoing efforts within MPEG-2 to define a profile for stereoscopic video coding, as well as the promise of MPEG-4 in addressing coding of multi-viewpoint video.
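
    Disparity-compensated prediction, the key tool discussed above, predicts a block of one view from a horizontally shifted region of the other view. A minimal exhaustive-search sketch (illustrative only, not the MPEG-2 encoder's actual mode decision):

```python
import numpy as np

def best_disparity(left_block, right_strip, max_disp):
    """Exhaustive disparity search: find the horizontal shift d of the
    right-view strip that best predicts the left-view block, by minimum
    sum of absolute differences (SAD)."""
    h, w = left_block.shape
    sads = [np.abs(left_block - right_strip[:, d:d + w]).sum()
            for d in range(max_disp + 1)]
    return int(np.argmin(sads))
```

    An encoder would then transmit only the winning disparity and the small prediction residual, which is where the bit-rate savings over simulcast coding come from.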

  3. Compression of stereoscopic video using MPEG-2

    NASA Astrophysics Data System (ADS)

    Puri, Atul; Kollarits, Richard V.; Haskell, Barry G.

    1995-12-01

    Many current as well as emerging applications in areas of entertainment, remote operations, manufacturing industry and medicine can benefit from the depth perception offered by stereoscopic video systems, which employ two views of a scene imaged under the constraints imposed by the human visual system. Among the many challenges to be overcome for practical realization and widespread use of 3D/stereoscopic systems are good 3D displays and efficient techniques for digital compression of enormous amounts of data while maintaining compatibility with normal video decoding and display systems. After a brief introduction to the basics of 3D/stereo, including issues of depth perception, stereoscopic 3D displays and terminology in stereoscopic imaging and display, we present an overview of tools in the MPEG-2 video standard that are relevant to our discussion on compression of stereoscopic video, which is the main topic of this paper. Next, we outline the various approaches for compression of stereoscopic video and then focus on compatible stereoscopic video coding using MPEG-2 Temporal scalability concepts. Two types of prediction structures become potentially possible for compatible coding: disparity-compensated prediction, and combined disparity- and motion-compensated prediction. To further improve coding performance and display quality, preprocessing for reducing mismatch between the two views forming stereoscopic video is considered. Results of simulations performed on stereoscopic video of normal TV resolution are then reported, comparing the performance of the two prediction structures with the simulcast solution. It is found that combined disparity- and motion-compensated prediction offers the best performance. Results indicate that compression of both views of stereoscopic video of normal TV resolution appears feasible in a total of 6 to 8 Mbit/s. We then discuss multi-viewpoint video, a generalization of stereoscopic video.
    Finally, we describe ongoing efforts within MPEG-2 to define a profile for stereoscopic video coding, as well as the promise of MPEG-4 in addressing coding of multi-viewpoint video.

  4. Video Games: A Human Factors Guide to Visual Display Design and Instructional System Design

    DTIC Science & Technology

    1984-04-01

    Electronic video games have many of the same technological and psychological characteristics that are found in military computer-based systems. For...both of which employ video games as experimental stimuli, are presented here. The first research program seeks to identify and exploit the...characteristics of video games in the design of game-based training devices. The second program is designed to explore the effects of electronic video display

  5. Predictive Displays for High Latency Teleoperation

    DTIC Science & Technology

    2016-08-04

    PREDICTIVE DISPLAYS FOR HIGH LATENCY TELEOPERATION” Analysis of existing approach 3 Comms. Channel Vehicle OCU D Throttle, Steer, Brake D Video ...presents opportunity mitigate outgoing latency. • Video is not governed by physics, however, video is dependent on the state of the vehicle, which...Commands, estimates UDP: H.264 Video UDP: Vehicle state • C++ implementation • 2 threads • OpenCV for image manipulation • FFMPEG for video decoding

  6. Proceedings of the Federal Acquisition Research Symposium with Theme: Government, Industry, Academe: Synergism for Acquisition Improvement, Held at the Williamsburg Hilton and National Conference Center, Williamsburg, Virginia on 7-9 December 1983

    DTIC Science & Technology

    1983-12-01

    storage included room for not only the video display incompatibilities which have been plaguing the terminal (VDT), but also for the disk drive, the...once at system implementation time. This sample Video Display Terminal ---------------------------------- (VDT) screen shows the Appendix N Code...override the value with a different data value. Video Display Terminal (VDT): A cathode ray tube or gas plasma tube display screen terminal that allows

  7. The effects of video compression on acceptability of images for monitoring life sciences' experiments

    NASA Technical Reports Server (NTRS)

    Haines, Richard F.; Chuang, Sherry L.

    1993-01-01

    Current plans indicate that there will be a large number of life science experiments carried out during the thirty-year-long mission of the Biological Flight Research Laboratory (BFRL) on board Space Station Freedom (SSF). Non-human life science experiments will be performed in the BFRL. Two distinct types of activities have already been identified for this facility: (1) collect, store, distribute, analyze and manage engineering and science data from the Habitats, Glovebox and Centrifuge, and (2) perform a broad range of remote science activities in the Glovebox and Habitat chambers in conjunction with the remotely located principal investigator (PI). These activities require extensive video coverage, viewing and/or recording and distribution to video displays on board SSF and to the ground. This paper concentrates mainly on the second type of activity. Each of the two BFRL habitat racks is designed to be configurable for either six rodent habitats per rack, four plant habitats per rack, or a combination of the above. Two video cameras will be installed in each habitat with a spare attachment for a third camera when needed. Therefore, a video system that can accommodate up to 12-18 camera inputs per habitat rack must be considered.

  8. Microcomputer Selection Guide for Construction Field Offices. Revision.

    DTIC Science & Technology

    1984-09-01

    the system, and the monitor displays information on a video display screen. Microcomputer systems today are available in a variety of configurations...background. White on black monitors reportedly cause more eye fatigue, while amber is reported to cause the least eye fatigue. Reverse video ...The video should be amber or green display with a resolution of at least 640 x 200 dots per in. Additional features of the monitor include an

  9. Real World Audio

    NASA Technical Reports Server (NTRS)

    1998-01-01

    Crystal River Engineering was originally featured in Spinoff 1992 with the Convolvotron, a high-speed digital audio processing system that delivers three-dimensional sound over headphones. The Convolvotron was developed for Ames' research on virtual acoustic displays. Crystal River is now a subsidiary of Aureal Semiconductor, Inc., and together they develop and market the technology, a 3-D (three-dimensional) audio technology known commercially today as Aureal 3D (A-3D). The technology has been incorporated into video games, surround sound systems, and sound cards.
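
    The Convolvotron's core operation, convolving a source signal with a left/right pair of head-related impulse responses, can be sketched in a few lines (the impulse responses below are hypothetical stand-ins for measured HRTF data, and real systems run the convolution in dedicated hardware at audio rate):

```python
import numpy as np

def spatialize(mono, hrir_left, hrir_right):
    """Render a mono signal over headphones by convolving it with a
    left/right head-related impulse response (HRIR) pair for one direction."""
    return np.convolve(mono, hrir_left), np.convolve(mono, hrir_right)
```

    Changing the HRIR pair as the listener's head moves is what makes the source appear fixed in external space rather than inside the head.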

  10. Prevention: lessons from video display installations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Margach, C.B.

    1983-04-01

    Workers interacting with video display units for periods in excess of two hours per day report significantly increased visual discomfort, fatigue and inefficiencies, as compared with workers performing similar tasks, but without the video viewing component. Difficulties in focusing and the appearance of myopia are among the problems being described. With a view to preventing or minimizing such problems, principles and procedures are presented providing for (a) modification of physical features of the video workstation and (b) improvement in the visual performances of the individual video unit operator.

  11. Multi-Aircraft Video - Human/Automation Target Recognition Studies: Video Display Size in Unaided Target Acquisition Involving Multiple Videos

    DTIC Science & Technology

    2008-04-01

    Index (NASA-TLX: Hart & Staveland, 1988), and a Post-Test Questionnaire. Demographic data/Background Questionnaire. This questionnaire was used...very confident). NASA-TLX. The NASA-TLX (Hart & Staveland, 1988) is a subjective workload assessment tool. A multidimensional weighting...completed the NASA-TLX. The test trials were randomized across participants and occurred in a counterbalanced order that took into account video display

  12. An evaluation of the efficacy of video displays for use with chimpanzees (Pan troglodytes).

    PubMed

    Hopper, Lydia M; Lambeth, Susan P; Schapiro, Steven J

    2012-05-01

    Video displays for behavioral research lend themselves particularly well to studies with chimpanzees (Pan troglodytes), as their vision is comparable to humans', yet there has been no formal test of the efficacy of video displays as a form of social information for chimpanzees. To address this, we compared the learning success of chimpanzees shown video footage of a conspecific compared to chimpanzees shown a live conspecific performing the same novel task. Footage of an unfamiliar chimpanzee operating a bidirectional apparatus was presented to 24 chimpanzees (12 males, 12 females), and their responses were compared to those of a further 12 chimpanzees given the same task but with no form of information. Secondly, we also compared the responses of the chimpanzees in the video display condition to responses of eight chimpanzees from a previously published study of ours, in which chimpanzees observed live models. Chimpanzees shown a video display were more successful than those in the control condition and showed comparable success to those that saw a live model. Regarding fine-grained copying (i.e. the direction that the door was pushed), only chimpanzees that observed a live model showed significant matching to the model's methods with their first response. Yet, when all the responses made by the chimpanzees were considered, comparable levels of matching were shown by chimpanzees in both the live and video conditions. © 2012 Wiley Periodicals, Inc.

  13. An Evaluation of the Efficacy of Video Displays for Use With Chimpanzees (Pan troglodytes)

    PubMed Central

    HOPPER, LYDIA M.; LAMBETH, SUSAN P.; SCHAPIRO, STEVEN J.

    2013-01-01

    Video displays for behavioral research lend themselves particularly well to studies with chimpanzees (Pan troglodytes), as their vision is comparable to humans’, yet there has been no formal test of the efficacy of video displays as a form of social information for chimpanzees. To address this, we compared the learning success of chimpanzees shown video footage of a conspecific compared to chimpanzees shown a live conspecific performing the same novel task. Footage of an unfamiliar chimpanzee operating a bidirectional apparatus was presented to 24 chimpanzees (12 males, 12 females), and their responses were compared to those of a further 12 chimpanzees given the same task but with no form of information. Secondly, we also compared the responses of the chimpanzees in the video display condition to responses of eight chimpanzees from a previously published study of ours, in which chimpanzees observed live models. Chimpanzees shown a video display were more successful than those in the control condition and showed comparable success to those that saw a live model. Regarding fine-grained copying (i.e. the direction that the door was pushed), only chimpanzees that observed a live model showed significant matching to the model’s methods with their first response. Yet, when all the responses made by the chimpanzees were considered, comparable levels of matching were shown by chimpanzees in both the live and video conditions. PMID:22318867

  14. Packet based serial link realized in FPGA dedicated for high resolution infrared image transmission

    NASA Astrophysics Data System (ADS)

    Bieszczad, Grzegorz

    2015-05-01

    This article describes the external digital interface specially designed for a thermographic camera built at the Military University of Technology. The aim of the article is to illustrate challenges encountered during the design of a thermal vision camera, especially those related to infrared data processing and transmission. The article explains the main requirements for an interface to transfer infrared or video digital data, and describes the solution we elaborated based on the Low Voltage Differential Signaling (LVDS) physical layer and signaling scheme. The elaborated image-transmission link is built using an FPGA with built-in high-speed serial transceivers achieving up to 2.5 Gbps throughput. Image transmission is realized using a proprietary packet protocol, whose protocol engine was described in VHDL and tested in FPGA hardware. The link is able to transmit 1280×1024@60Hz 24-bit video data using one signal pair, and was tested by transmitting the thermal-vision camera picture to a remote monitor. Constructing a dedicated video link reduces power consumption compared with solutions based on ASIC encoders and decoders realizing video links such as DVI or the packet-based DisplayPort, while simultaneously reducing the wiring needed to establish the link to one pair. The article describes the functions of the modules integrated in the FPGA design: synchronization to the video source, video stream packetization, interfacing the transceiver module, and dynamic clock generation for video standard conversion.
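
    A quick feasibility check for carrying 1280×1024@60 Hz 24-bit video on one serial pair is to compare the raw pixel payload against the lane rate after line coding. A hedged sketch (8b/10b coding efficiency assumed; blanking and packet-header overhead are ignored for simplicity):

```python
def fits_on_lane(width, height, bpp, fps, line_rate_gbps, coding_eff=0.8):
    """Check whether the active video payload fits one serial lane.

    coding_eff = 0.8 models 8b/10b line coding (8 payload bits per 10 line
    bits); framing and blanking overhead are deliberately ignored here.
    """
    payload_bps = width * height * bpp * fps
    return payload_bps <= line_rate_gbps * 1e9 * coding_eff
```

    For the resolution in the article, the payload is about 1.89 Gb/s, which fits within the roughly 2 Gb/s of usable payload on a 2.5 Gb/s 8b/10b lane.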

  15. Real-Time Acquisition and Display of Data and Video

    NASA Technical Reports Server (NTRS)

    Bachnak, Rafic; Chakinarapu, Ramya; Garcia, Mario; Kar, Dulal; Nguyen, Tien

    2007-01-01

    This paper describes the development of a prototype that takes an analog National Television System Committee (NTSC) video signal generated by a video camera, along with data acquired by a microcontroller, and displays them in real time on a digital panel. An 8051 microcontroller is used to acquire the power dissipated by the display panel, the room temperature, and the camera zoom level. The paper describes the major hardware components and shows how they are interfaced into a functional prototype. Test data results are presented and discussed.

  16. Objective analysis of image quality of video image capture systems

    NASA Astrophysics Data System (ADS)

    Rowberg, Alan H.

    1990-07-01

    As Picture Archiving and Communication System (PACS) technology has matured, video image capture has become a common way of capturing digital images from many modalities. While digital interfaces, such as those which use the ACR/NEMA standard, will become more common in the future, and are preferred because of the accuracy of image transfer, video image capture will be the dominant method in the short term, and may continue to be used for some time because of the low cost and high speed often associated with such devices. Currently, virtually all installed systems use methods of digitizing the video signal that is produced for display on the scanner viewing console itself. A series of digital test images has been developed for display on either a GE CT9800 or a GE Signa MRI scanner. These images have been captured with each of five commercially available image capture systems, and the resultant images digitally transferred on floppy disk to a PC1286 computer containing Optimast image analysis software. Here the images can be displayed in a comparative manner for visual evaluation, in addition to being analyzed statistically. Each of the images has been designed to support certain tests, including noise, accuracy, linearity, gray scale range, stability, slew rate, and pixel alignment. These image capture systems vary widely in these characteristics, in addition to the presence or absence of other artifacts, such as shading and moire pattern. Other accessories such as video distribution amplifiers and noise filters can also add or modify artifacts seen in the captured images, often giving unusual results. Each image is described, together with the tests which were performed using them. One image contains alternating black and white lines, each one pixel wide, after equilibration strips ten pixels wide.
While some systems have a slew rate fast enough to track this correctly, others blur it to an average shade of gray, and do not resolve the lines, or give horizontal or vertical streaking. While many of these results are significant from an engineering standpoint alone, there are clinical implications and some anatomy or pathology may not be visualized if an image capture system is used improperly.
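    The slew-rate test in this record - single-pixel black/white lines after ten-pixel equilibration strips - can be simulated in a few lines. The pattern generator and the first-order low-pass model below are illustrative assumptions, not the actual Optimast analysis.

```python
# Sketch of the single-pixel alternating-line slew-rate test (assumed
# model, not the Optimast software's actual analysis).

def make_test_line(strip=10, pairs=16):
    """Ten-pixel black and white equilibration strips followed by
    alternating single-pixel black (0.0) / white (1.0) lines."""
    line = [0.0] * strip + [1.0] * strip
    for _ in range(pairs):
        line += [0.0, 1.0]
    return line

def lowpass(signal, alpha):
    """First-order low-pass: a stand-in for a digitizer whose slew rate
    cannot follow single-pixel transitions (alpha in (0, 1])."""
    out, prev = [], signal[0]
    for s in signal:
        prev = prev + alpha * (s - prev)
        out.append(prev)
    return out

def modulation(region):
    """Peak-to-peak amplitude over a region of the scan line."""
    return max(region) - min(region)

line = make_test_line()
fast = lowpass(line, alpha=1.0)   # ideal capture: tracks every transition
slow = lowpass(line, alpha=0.2)   # slow slew: lines blur toward gray

# Compare contrast over the alternating-line region (last 32 pixels).
print(modulation(fast[-32:]), round(modulation(slow[-32:]), 3))
```

    A capture system with adequate slew rate preserves the full black-to-white modulation of the single-pixel lines, while a bandwidth-limited one blurs them toward an average gray, exactly the failure mode the record describes.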

  17. Young Children's Analogical Problem Solving: Gaining Insights from Video Displays

    ERIC Educational Resources Information Center

    Chen, Zhe; Siegler, Robert S.

    2013-01-01

    This study examined how toddlers gain insights from source video displays and use the insights to solve analogous problems. Two- to 2.5-year-olds viewed a source video illustrating a problem-solving strategy and then attempted to solve analogous problems. Older but not younger toddlers extracted the problem-solving strategy depicted in the video…

  18. What Young Adolescents Think about Engineering: Immediate and Longer Lasting Impressions of a Video Intervention

    ERIC Educational Resources Information Center

    Jennings, Sybillyn; McIntyre, Julie Guay; Butler, Sarah E.

    2015-01-01

    To explore young adolescents' interest in engineering as a future career, we examined the influence of gender and grade level on participants' (N = 197, aged 10-13) views of engineering. One group (107 students) viewed a brief engineering video and wrote why they felt the same or different about engineering following the video. Qualitative…

  19. 25 CFR 542.33 - What are the minimum internal control standards for surveillance for Tier B gaming operations?

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... capability to display the date and time of recorded events on video and/or digital recordings. The displayed... digital record retention. (1) All video recordings and/or digital records of coverage provided by the.... (3) Duly authenticated copies of video recordings and/or digital records shall be provided to the...

  20. 25 CFR 542.33 - What are the minimum internal control standards for surveillance for Tier B gaming operations?

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... capability to display the date and time of recorded events on video and/or digital recordings. The displayed... digital record retention. (1) All video recordings and/or digital records of coverage provided by the.... (3) Duly authenticated copies of video recordings and/or digital records shall be provided to the...

  1. 25 CFR 542.33 - What are the minimum internal control standards for surveillance for Tier B gaming operations?

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... capability to display the date and time of recorded events on video and/or digital recordings. The displayed... digital record retention. (1) All video recordings and/or digital records of coverage provided by the.... (3) Duly authenticated copies of video recordings and/or digital records shall be provided to the...

  2. 25 CFR 542.33 - What are the minimum internal control standards for surveillance for Tier B gaming operations?

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... capability to display the date and time of recorded events on video and/or digital recordings. The displayed... digital record retention. (1) All video recordings and/or digital records of coverage provided by the.... (3) Duly authenticated copies of video recordings and/or digital records shall be provided to the...

  3. 25 CFR 542.33 - What are the minimum internal control standards for surveillance for Tier B gaming operations?

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... capability to display the date and time of recorded events on video and/or digital recordings. The displayed... digital record retention. (1) All video recordings and/or digital records of coverage provided by the.... (3) Duly authenticated copies of video recordings and/or digital records shall be provided to the...

  4. Video game addiction in emerging adulthood: Cross-sectional evidence of pathology in video game addicts as compared to matched healthy controls.

    PubMed

    Stockdale, Laura; Coyne, Sarah M

    2018-01-01

    The Internet Gaming Disorder Scale (IGDS) is a widely used measure of video game addiction, a pathology affecting a small percentage of all people who play video games. Emerging adult males are significantly more likely to be video game addicts. Few researchers have examined how people who qualify as video game addicts based on the IGDS compare to controls matched on age, gender, race, and marital status. The current study compared IGDS video game addicts to matched non-addicts in terms of their mental, physical, and social-emotional health, using self-report survey methods. Addicts had poorer mental health and cognitive functioning, including poorer impulse control and more ADHD symptoms, than controls. Additionally, addicts displayed increased emotional difficulties, including greater depression and anxiety, felt more socially isolated, and were more likely to display symptoms of pathological internet pornography use. Female video game addicts were at unique risk for negative outcomes. The sample for this study was undergraduate college students, and self-report measures were used. Participants who met the IGDS criteria for video game addiction displayed poorer emotional, physical, mental, and social health, adding to the growing evidence that video game addiction is a valid phenomenon. Copyright © 2017 Elsevier B.V. All rights reserved.

  5. Video personalization for usage environment

    NASA Astrophysics Data System (ADS)

    Tseng, Belle L.; Lin, Ching-Yung; Smith, John R.

    2002-07-01

    A video personalization and summarization system is designed and implemented incorporating usage environment to dynamically generate a personalized video summary. The personalization system adopts the three-tier server-middleware-client architecture in order to select, adapt, and deliver rich media content to the user. The server stores the content sources along with their corresponding MPEG-7 metadata descriptions. Our semantic metadata is provided through the use of the VideoAnnEx MPEG-7 Video Annotation Tool. When the user initiates a request for content, the client communicates the MPEG-21 usage environment description along with the user query to the middleware. The middleware is powered by the personalization engine and the content adaptation engine. Our personalization engine includes the VideoSue Summarization on Usage Environment engine that selects the optimal set of desired contents according to user preferences. Afterwards, the adaptation engine performs the required transformations and compositions of the selected contents for the specific usage environment using our VideoEd Editing and Composition Tool. Finally, two personalization and summarization systems are demonstrated for the IBM Websphere Portal Server and for the pervasive PDA devices.
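    The selection step an engine like VideoSue performs can be sketched as a preference-weighted ranking of annotated shots under a duration budget. The shot structure, field names, and greedy strategy below are assumptions for illustration, not the actual MPEG-7/MPEG-21 schemas or the VideoSue algorithm.

```python
# Illustrative sketch of preference-based shot selection for a video
# summary. Field names ("keywords", "duration", "id") are assumptions.

def select_shots(shots, preferences, budget_s):
    """shots: list of dicts with 'id', 'keywords' (set), 'duration' (s).
    Greedily keep the shots with the largest preference overlap that
    still fit within the summary duration budget."""
    scored = sorted(shots,
                    key=lambda s: len(s["keywords"] & preferences),
                    reverse=True)
    summary, used = [], 0.0
    for shot in scored:
        if used + shot["duration"] <= budget_s:
            summary.append(shot["id"])
            used += shot["duration"]
    return summary

shots = [
    {"id": "s1", "keywords": {"goal", "soccer"}, "duration": 20},
    {"id": "s2", "keywords": {"crowd"}, "duration": 15},
    {"id": "s3", "keywords": {"goal", "replay"}, "duration": 25},
]
print(select_shots(shots, {"goal", "soccer"}, budget_s=50))  # → ['s1', 's3']
```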

  6. Image enhancement software for underwater recovery operations: User's manual

    NASA Astrophysics Data System (ADS)

    Partridge, William J.; Therrien, Charles W.

    1989-06-01

    This report describes software for performing image enhancement on live or recorded video images. The software was developed for operational use during underwater recovery operations at the Naval Undersea Warfare Engineering Station. The image processing is performed on an IBM PC/AT-compatible computer equipped with hardware to digitize and display video images. The software provides contrast enhancement and similar functions in real time through hardware lookup tables, automatic histogram equalization, and the ability to capture one or more frames and either average them or apply one of several processing algorithms to a captured frame. The report is in the form of a user manual for the software and includes guided tutorial and reference sections. A Digital Image Processing Primer in the appendix explains the principal concepts used in the image processing.
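    The automatic histogram equalization mentioned above follows a standard recipe: build the gray-level histogram, form its cumulative distribution, and use the scaled CDF as a lookup table. A minimal pure-Python sketch, assuming 8-bit grayscale pixels (the NUWES software applied such LUTs in hardware):

```python
# Standard histogram equalization via a CDF-derived lookup table.

def equalize(pixels, levels=256):
    """Map pixel values through the normalized cumulative histogram."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    cdf, running = [], 0
    for count in hist:
        running += count
        cdf.append(running)
    # Build the LUT: scale the CDF to the full output range.
    lut = [round((c * (levels - 1)) / total) for c in cdf]
    return [lut[p] for p in pixels]

# A low-contrast image crowded into [100, 120] spreads toward [0, 255].
dark = [100, 105, 110, 115, 120] * 20
out = equalize(dark)
print(min(out), max(out))  # → 51 255
```

    Real-time operation comes from the fact that, once the LUT is built, every pixel is a single table lookup, which is exactly what hardware lookup tables accelerate.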

  7. Scalable Adaptive Graphics Environment (SAGE) Software for the Visualization of Large Data Sets on a Video Wall

    NASA Technical Reports Server (NTRS)

    Jedlovec, Gary; Srikishen, Jayanthi; Edwards, Rita; Cross, David; Welch, Jon; Smith, Matt

    2013-01-01

    The use of collaborative scientific visualization systems for the analysis, visualization, and sharing of "big data" available from new high resolution remote sensing satellite sensors or four-dimensional numerical model simulations is propelling the wider adoption of ultra-resolution tiled display walls interconnected by high speed networks. These systems require a globally connected and well-integrated operating environment that provides persistent visualization and collaboration services. This abstract and subsequent presentation describes a new collaborative visualization system installed for NASA's Short-term Prediction Research and Transition (SPoRT) program at Marshall Space Flight Center and its use for Earth science applications. The system consists of a 3 x 4 array of 1920 x 1080 pixel thin bezel video monitors mounted on a wall in a scientific collaboration lab. The monitors are physically and virtually integrated into a single 14' x 7' video display. The display of scientific data on the video wall is controlled by a single Alienware Aurora PC with a 2nd Generation Intel Core 4.1 GHz processor, 32 GB memory, and an AMD FirePro W600 video card with 6 mini DisplayPort connections. Six mini DisplayPort-to-dual-DVI cables connect the 12 individual video monitors. The open source Scalable Adaptive Graphics Environment (SAGE) windowing and media control framework, running on top of the Ubuntu 12 Linux operating system, allows several users to simultaneously control the display and storage of high resolution still and moving graphics in a variety of formats, on tiled display walls of any size. SAGE provides a common environment, or framework, enabling its users to access, display, and share a variety of data-intensive information. This information can be digital-cinema animations, high-resolution images, high-definition video-teleconferences, presentation slides, documents, spreadsheets, or laptop screens. SAGE is cross-platform, community-driven, open-source visualization and collaboration middleware that utilizes shared national and international cyberinfrastructure for the advancement of scientific research and education.
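    A quick pixel-budget check for the wall described above, assuming the 3 x 4 array means three rows of four landscape 1920 x 1080 panels (consistent with the 14' x 7' footprint):

```python
# Back-of-the-envelope resolution of the SPoRT tiled display wall,
# assuming 3 rows x 4 columns of landscape 1920x1080 panels.

ROWS, COLS = 3, 4
PANEL_W, PANEL_H = 1920, 1080

wall_w = COLS * PANEL_W          # total wall width in pixels
wall_h = ROWS * PANEL_H          # total wall height in pixels
megapixels = wall_w * wall_h / 1e6

print(wall_w, wall_h, round(megapixels, 1))  # → 7680 3240 24.9
```

    At roughly 25 megapixels, the wall offers an order of magnitude more pixels than a single monitor, which is why a windowing framework like SAGE is needed to manage content placement across the tiles.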

  8. Scalable Adaptive Graphics Environment (SAGE) Software for the Visualization of Large Data Sets on a Video Wall

    NASA Astrophysics Data System (ADS)

    Jedlovec, G.; Srikishen, J.; Edwards, R.; Cross, D.; Welch, J. D.; Smith, M. R.

    2013-12-01

    The use of collaborative scientific visualization systems for the analysis, visualization, and sharing of 'big data' available from new high resolution remote sensing satellite sensors or four-dimensional numerical model simulations is propelling the wider adoption of ultra-resolution tiled display walls interconnected by high speed networks. These systems require a globally connected and well-integrated operating environment that provides persistent visualization and collaboration services. This abstract and subsequent presentation describes a new collaborative visualization system installed for NASA's Short-term Prediction Research and Transition (SPoRT) program at Marshall Space Flight Center and its use for Earth science applications. The system consists of a 3 x 4 array of 1920 x 1080 pixel thin bezel video monitors mounted on a wall in a scientific collaboration lab. The monitors are physically and virtually integrated into a single 14' x 7' video display. The display of scientific data on the video wall is controlled by a single Alienware Aurora PC with a 2nd Generation Intel Core 4.1 GHz processor, 32 GB memory, and an AMD FirePro W600 video card with 6 mini DisplayPort connections. Six mini DisplayPort-to-dual-DVI cables connect the 12 individual video monitors. The open source Scalable Adaptive Graphics Environment (SAGE) windowing and media control framework, running on top of the Ubuntu 12 Linux operating system, allows several users to simultaneously control the display and storage of high resolution still and moving graphics in a variety of formats, on tiled display walls of any size. SAGE provides a common environment, or framework, enabling its users to access, display, and share a variety of data-intensive information. This information can be digital-cinema animations, high-resolution images, high-definition video-teleconferences, presentation slides, documents, spreadsheets, or laptop screens. SAGE is cross-platform, community-driven, open-source visualization and collaboration middleware that utilizes shared national and international cyberinfrastructure for the advancement of scientific research and education.

  9. High-definition video display based on the FPGA and THS8200

    NASA Astrophysics Data System (ADS)

    Qian, Jia; Sui, Xiubao

    2014-11-01

    This paper presents a high-definition video display solution based on an FPGA and the THS8200. The THS8200 is a video encoder (DAC) chip from TI with three 10-bit DAC channels; it accepts video data in both 4:2:2 and 4:4:4 formats, and its data synchronization can come either from the dedicated synchronization signals HSYNC and VSYNC or from the SAV/EAV codes embedded in the video stream. In this design, the FPGA generates the address and control signals used to access the data-storage array and produces the corresponding digital YCbCr video signals. These signals, combined with the HSYNC and VSYNC synchronization signals (also generated by the FPGA), form the inputs to the THS8200. To meet the bandwidth requirements of high-definition TV, video is input in the 4:2:2 format over a 2x10-bit interface. The FPGA configures the THS8200's internal registers over the I2C bus, so that the chip generates synchronization signals compliant with the relevant SMPTE standards and converts the digital YCbCr signals into analog YPbPr. The analog YPbPr outputs thus carry the image data and the synchronization signal superimposed inside the THS8200. Experimental results indicate that the method presented in this paper is a viable solution for high-definition video display that conforms to the input requirements of new high-definition display devices.
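    The 4:2:2 transfer over the 2x10-bit interface can be illustrated as follows: one bus carries a luma sample every clock while the other alternates Cb and Cr, so each chroma pair is shared by two pixels. The function below is a behavioral sketch with assumed names, not the THS8200's actual bus or register protocol.

```python
# Behavioral sketch of 4:2:2 packing onto a 2x10-bit parallel interface.
# Names and the chroma-averaging choice are illustrative assumptions.

def pack_422(ycbcr_pixels):
    """ycbcr_pixels: list of (Y, Cb, Cr) tuples, even length.
    Returns parallel (y_bus, c_bus) word streams: luma every clock on
    one bus, alternating Cb/Cr (subsampled per pixel pair) on the other."""
    assert len(ycbcr_pixels) % 2 == 0
    y_bus, c_bus = [], []
    for i in range(0, len(ycbcr_pixels), 2):
        (y0, cb0, cr0), (y1, cb1, cr1) = ycbcr_pixels[i], ycbcr_pixels[i + 1]
        y_bus += [y0, y1]
        # Chroma is subsampled: average the pair, send Cb then Cr.
        c_bus += [(cb0 + cb1) // 2, (cr0 + cr1) // 2]
    return y_bus, c_bus

pixels = [(512, 300, 700), (520, 310, 690)]
y_bus, c_bus = pack_422(pixels)
print(y_bus, c_bus)  # → [512, 520] [305, 695]
```

    Halving the chroma rate this way is what lets two 10-bit lanes carry what would otherwise need three, which is the bandwidth argument the abstract makes for choosing 4:2:2.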

  10. The advanced linked extended reconnaissance and targeting technology demonstration project

    NASA Astrophysics Data System (ADS)

    Cruickshank, James; de Villers, Yves; Maheux, Jean; Edwards, Mark; Gains, David; Rea, Terry; Banbury, Simon; Gauthier, Michelle

    2007-06-01

    The Advanced Linked Extended Reconnaissance & Targeting (ALERT) Technology Demonstration (TD) project is addressing key operational needs of the future Canadian Army's Surveillance and Reconnaissance forces by fusing multi-sensor and tactical data, developing automated processes, and integrating beyond line-of-sight sensing. We discuss concepts for displaying and fusing multi-sensor and tactical data within an Enhanced Operator Control Station (EOCS). The sensor data can originate from the Coyote's own visible-band and IR cameras, laser rangefinder, and ground-surveillance radar, as well as beyond line-of-sight systems such as a mini-UAV and unattended ground sensors. The authors address technical issues associated with the use of fully digital IR and day video cameras and discuss video-rate image processing developed to assist the operator in recognizing poorly visible targets. Automatic target detection and recognition algorithms processing both IR and visible-band images have been investigated to draw the operator's attention to possible targets. The machine-generated information display requirements are presented along with the human factors engineering aspects of the user interface in this complex environment, with a view to establishing user trust in the automation. The paper concludes with a summary of achievements to date and steps to project completion.

  11. Data acquisition and analysis in the DOE/NASA Wind Energy Program

    NASA Technical Reports Server (NTRS)

    Neustadter, H. E.

    1980-01-01

    Four categories of data systems, each responding to a distinct information need are presented. The categories are: control, technology, engineering and performance. The focus is on the technology data system which consists of the following elements: sensors which measure critical parameters such as wind speed and direction, output power, blade loads and strains, and tower vibrations; remote multiplexing units (RMU) mounted on each wind turbine which frequency modulate, multiplex and transmit sensor outputs; the instrumentation available to record, process and display these signals; and centralized computer analysis of data. The RMU characteristics and multiplexing techniques are presented. Data processing is illustrated by following a typical signal through instruments such as the analog tape recorder, analog to digital converter, data compressor, digital tape recorder, video (CRT) display, and strip chart recorder.

  12. Generalized pipeline for preview and rendering of synthetic holograms

    NASA Astrophysics Data System (ADS)

    Pappu, Ravikanth; Sparrell, Carlton J.; Underkoffler, John S.; Kropp, Adam B.; Chen, Benjie; Plesniak, Wendy J.

    1997-04-01

    We describe a general pipeline for the computation and display of either fully-computed holograms or holographic stereograms using the same 3D database. A rendering previewer on a Silicon Graphics Onyx allows a user to specify viewing geometry, database transformations, and scene lighting. The previewer then generates one of two descriptions of the object--a series of perspective views or a polygonal model--which is then used by a fringe rendering engine to compute fringes specific to hologram type. The images are viewed on the second generation MIT Holographic Video System. This allows a viewer to compare holographic stereograms with fully-computed holograms originating from the same database and comes closer to the goal of a single pipeline being able to display the same data in different formats.

  13. Considerations in video playback design: using optic flow analysis to examine motion characteristics of live and computer-generated animation sequences.

    PubMed

    Woo, Kevin L; Rieucau, Guillaume

    2008-07-01

    The increasing use of the video playback technique in behavioural ecology reveals a growing need to ensure better control of the visual stimuli that focal animals experience. Technological advances now allow researchers to develop computer-generated animations instead of using video sequences of live-acting demonstrators. However, care must be taken to match the motion characteristics (speed and velocity) of the animation to the original video source. Here, we present a tool based on an optic flow analysis program that measures how closely the motion characteristics of computer-generated animations resemble those of videos of live-acting animals. We examined three distinct displays (tail-flick (TF), push-up body rock (PUBR), and slow arm wave (SAW)) exhibited by animations of Jacky dragons (Amphibolurus muricatus) compared to the original video sequences of live lizards. We found no significant differences between the motion characteristics of videos and animations across all three displays. Our results showed that the animations matched the speed and velocity features of each display. Researchers need to ensure that animation and video stimuli share similar motion characteristics, a feature critical to the future success of the video playback technique.
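    The motion characteristics being compared (speed and velocity) reduce to frame-to-frame displacements of tracked features. As a simplified stand-in for full optic flow analysis, a per-frame speed comparison might look like this (positions, feature names, and frame rate are illustrative):

```python
# Simplified motion-characteristic comparison: per-frame speed of a
# tracked display feature (e.g. a tail tip in a TF display). A stand-in
# for optic flow, not the authors' actual analysis program.

import math

def speeds(track, fps):
    """Per-frame speed (pixels/second) from successive (x, y) positions."""
    out = []
    for (x0, y0), (x1, y1) in zip(track, track[1:]):
        out.append(math.hypot(x1 - x0, y1 - y0) * fps)
    return out

def mean_speed(track, fps):
    s = speeds(track, fps)
    return sum(s) / len(s)

# Identical tracks should yield identical motion characteristics,
# which is the matching criterion the study applies.
video_track = [(0, 0), (3, 4), (6, 8)]   # 5 px of motion per frame
anim_track  = [(0, 0), (3, 4), (6, 8)]
fps = 30
print(mean_speed(video_track, fps))  # → 150.0
```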

  14. A clinical pilot study of a modular video-CT augmentation system for image-guided skull base surgery

    NASA Astrophysics Data System (ADS)

    Liu, Wen P.; Mirota, Daniel J.; Uneri, Ali; Otake, Yoshito; Hager, Gregory; Reh, Douglas D.; Ishii, Masaru; Gallia, Gary L.; Siewerdsen, Jeffrey H.

    2012-02-01

    Augmentation of endoscopic video with preoperative or intraoperative image data [e.g., planning data and/or anatomical segmentations defined in computed tomography (CT) and magnetic resonance (MR)] can improve navigation, spatial orientation, confidence, and tissue resection in skull base surgery, especially with respect to critical neurovascular structures that may be difficult to visualize in the video scene. This paper presents the engineering and evaluation of a video augmentation system for endoscopic skull base surgery translated to use in a clinical study. Extension of previous research yielded a practical system with a modular design that can be applied to other endoscopic surgeries, including orthopedic, abdominal, and thoracic procedures. A clinical pilot study is underway to assess feasibility and benefit to surgical performance by overlaying CT or MR planning data in real-time, high-definition endoscopic video. Preoperative planning included segmentation of the carotid arteries, optic nerves, and surgical target volume (e.g., tumor). An automated camera calibration process was developed that demonstrates a mean re-projection error of (0.7+/-0.3) pixels and a mean target registration error of (2.3+/-1.5) mm. An IRB-approved clinical study involving fifteen patients undergoing skull base tumor surgery is underway, in which each surgery includes the experimental video-CT system deployed in parallel to the standard-of-care (unaugmented) video display. Questionnaires distributed to one neurosurgeon and two otolaryngologists are used to assess primary outcome measures regarding the benefit to surgical confidence in localizing critical structures and targets by means of video overlay during surgical approach, resection, and reconstruction.
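    The mean re-projection error quoted above is simply the average pixel distance between model points projected through the calibrated camera and their detected image locations. A minimal sketch with made-up point sets (not the study's calibration data):

```python
# Mean re-projection error: average Euclidean pixel distance between
# projected model points and detected image points. Point sets below
# are illustrative, not the study's data.

import math

def mean_reprojection_error(projected, detected):
    """Mean pixel distance between corresponding 2D point pairs."""
    dists = [math.hypot(px - dx, py - dy)
             for (px, py), (dx, dy) in zip(projected, detected)]
    return sum(dists) / len(dists)

projected = [(100.0, 100.0), (200.0, 150.0), (300.0, 220.0)]
detected  = [(100.5, 100.0), (200.0, 150.5), (300.0, 221.0)]
print(round(mean_reprojection_error(projected, detected), 3))  # → 0.667
```

    Target registration error is computed the same way but in millimeters, between registered and true 3D target positions rather than 2D image points.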

  15. Telemetry and Communication IP Video Player

    NASA Technical Reports Server (NTRS)

    OFarrell, Zachary L.

    2011-01-01

    Aegis Video Player is the video-over-IP system for the Telemetry and Communications group of the Launch Services Program. Its purpose is to display video streamed over a network connection for viewing during launches. To accomplish this, a VLC ActiveX plug-in was used in C# to provide basic video streaming capabilities, and the program was then customized for use during launches. The VLC plug-in can be configured programmatically to display a single stream, but this project required access to multiple streams. To support this, an easy-to-use, informative menu system was added so users can quickly switch between videos. Other features, such as watching multiple videos at once and full-screen viewing, make the player more useful.

  16. A system for the real-time display of radar and video images of targets

    NASA Technical Reports Server (NTRS)

    Allen, W. W.; Burnside, W. D.

    1990-01-01

    Described here is a software and hardware system for the real-time display of radar and video images for use in a measurement range. The main purpose is to give the reader a clear idea of the software and hardware design and its functions. The system is designed around a Tektronix XD88-30 graphics workstation, used to display radar images superimposed on video images of the actual target. The system's purpose is to provide a platform for the analysis and documentation of radar images and their associated targets in a menu-driven, user-oriented environment.

  17. Novel use of video glasses during binocular microscopy in the otolaryngology clinic.

    PubMed

    Fastenberg, Judd H; Fang, Christina H; Akbar, Nadeem A; Abuzeid, Waleed M; Moskowitz, Howard S

    2018-06-06

    The development of portable, high-resolution video displays such as video glasses gives clinicians the opportunity to offer patients an increased ability to visualize aspects of their physical examination in an ergonomic and cost-effective manner. The objective of this pilot study was to trial the use of video glasses for patients undergoing binocular microscopy and to better understand some of the potential benefits of the enhanced display option. This study comprised a single treatment group. Patients seen in the otolaryngology clinic who required binocular microscopy for diagnosis and treatment were recruited. All patients wore video glasses during their otoscopic examination. An additional cohort of patients who required binocular microscopy was also recruited but did not use the video glasses during their examination. Patients subsequently completed a 10-point Likert scale survey that assessed their comfort, anxiety, and satisfaction with the examination as well as their general understanding of their otologic condition. A total of 29 patients who used the video glasses were recruited, including those with normal examinations, cerumen impaction, or chronic ear disease. Based on the survey results, patients reported a high level of satisfaction and comfort during their exam with video glasses. Patients who used the video glasses did not exhibit any increased anxiety with their examination. Patients reported that the video glasses improved their understanding, and they expressed a desire to wear the glasses again during repeat exams. This pilot study demonstrates that video glasses may represent a viable alternative display option in the otolaryngology clinic. The results show that the use of video glasses is associated with high patient comfort and satisfaction during binocular microscopy.
Further investigation is warranted to determine the potential for this display option in other facets of patient care as well as in expanding patient understanding of disease and anatomy. Copyright © 2018 Elsevier Inc. All rights reserved.

  18. Does a video displaying a stair climbing model increase stair use in a worksite setting?

    PubMed

    Van Calster, L; Van Hoecke, A-S; Octaef, A; Boen, F

    2017-08-01

    This study evaluated the effects of improving the visibility of the stairwell and of displaying a video with a stair climbing model on climbing and descending stair use in a worksite setting. Intervention study. Three consecutive one-week intervention phases were implemented: (1) the visibility of the stairs was improved by the attachment of pictograms that indicated the stairwell; (2) a video showing a stair climbing model was sent to the employees by email; and (3) the same video was displayed on a television screen at the point-of-choice (POC) between the stairs and the elevator. The interventions took place in two buildings. The implementation of the interventions varied between these buildings and the sequence was reversed. Improving the visibility of the stairs increased both stair climbing (+6%) and descending stair use (+7%) compared with baseline. Sending the video by email yielded no additional effect on stair use. By contrast, displaying the video at the POC increased stair climbing in both buildings by 12.5% on average. One week after the intervention, the positive effects on stair climbing remained in one of the buildings, but not in the other. These findings suggest that improving the visibility of the stairwell and displaying a stair climbing model on a screen at the POC can result in a short-term increase in both climbing and descending stair use. Copyright © 2017 The Royal Society for Public Health. Published by Elsevier Ltd. All rights reserved.

  19. Video Display Terminals: Radiation Issues.

    ERIC Educational Resources Information Center

    Murray, William E.

    1985-01-01

    Discusses information gathered in past few years related to health effects of video display terminals (VDTs) with particular emphasis given to issues raised by VDT users. Topics covered include radiation emissions, health concerns, radiation surveys, occupational radiation exposure standards, and long-term risks. (17 references) (EJS)

  20. Image sequence analysis workstation for multipoint motion analysis

    NASA Astrophysics Data System (ADS)

    Mostafavi, Hassan

    1990-08-01

    This paper describes an application-specific engineering workstation designed and developed to analyze the motion of objects from video sequences. The system combines the software and hardware environment of a modern graphics-oriented workstation with digital image acquisition, processing, and display techniques. In addition to automating and increasing the throughput of data reduction tasks, the objective of the system is to provide less invasive methods of measurement by offering the ability to track objects that are more complex than reflective markers. Grey-level image processing and spatial/temporal adaptation of the processing parameters are used for locating and tracking more complex features of objects under uncontrolled lighting and background conditions. The applications of such an automated and noninvasive measurement tool include analysis of the trajectory and attitude of rigid bodies such as human limbs, robots, aircraft in flight, etc. The system's key features are: 1) acquisition and storage of image sequences by digitizing and storing real-time video; 2) computer-controlled movie loop playback, freeze-frame display, and digital image enhancement; 3) multiple leading-edge tracking, in addition to object centroids, at up to 60 fields per second from either live input video or a stored image sequence; 4) model-based estimation and tracking of the six degrees of freedom of a rigid body; 5) field-of-view and spatial calibration; 6) image sequence and measurement database management; and 7) offline analysis software for trajectory plotting and statistical analysis.
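    The object-centroid tracking listed among the key features can be sketched as an intensity-weighted centroid over a grey-level region, which yields sub-pixel positions each field. The tiny synthetic frame below is an illustrative stand-in for digitized video.

```python
# Grey-level centroid tracking sketch: the intensity-weighted centroid
# of a region gives a sub-pixel object position per video field.

def centroid(frame):
    """Intensity-weighted centroid (x, y) of a 2D grey-level frame."""
    total = sx = sy = 0.0
    for y, row in enumerate(frame):
        for x, v in enumerate(row):
            total += v
            sx += x * v
            sy += y * v
    return sx / total, sy / total

# A bright blob centered at (2, 1) in a 5x3 synthetic frame.
frame = [
    [0, 0, 0, 0, 0],
    [0, 10, 20, 10, 0],
    [0, 0, 0, 0, 0],
]
print(centroid(frame))  # → (2.0, 1.0)
```

    Running this per field, at the 60 fields per second the abstract cites, produces the position time series that the offline trajectory-plotting software would then analyze.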

  1. Spatial constraints of stereopsis in video displays

    NASA Technical Reports Server (NTRS)

    Schor, Clifton

    1989-01-01

    Recent developments in video technology, such as liquid crystal displays and shutters, have made it feasible to incorporate stereoscopic depth into the 3-D representations on 2-D displays. However, depth has already been vividly portrayed in video displays without stereopsis using the classical artists' depth cues described by Helmholtz (1866) and the dynamic depth cues described in detail by Ittelson (1952). Successful static depth cues include overlap, size, linear perspective, texture gradients, and shading. Effective dynamic cues include looming (Regan and Beverly, 1979) and motion parallax (Rogers and Graham, 1982). Stereoscopic depth is superior to the monocular distance cues under certain circumstances. It is most useful at portraying depth intervals as small as 5 to 10 arc sec. For this reason it is extremely useful in user-video interactions such as telepresence. Objects can be manipulated in 3-D space, for example, while a person who controls the operations views a virtual image of the manipulated object on a remote 2-D video display. Stereopsis also provides structure and form information in camouflaged surfaces such as tree foliage. Motion parallax also reveals form; however, without other monocular cues such as overlap, motion parallax can yield an ambiguous perception. For example, a turning sphere, portrayed as solid by parallax, can appear to rotate either leftward or rightward, whereas only one direction of rotation is perceived when stereo-depth is included. If the scene is static, then stereopsis is the principal cue for revealing camouflaged surface structure. Finally, dynamic stereopsis provides information about the direction of motion in depth (Regan and Beverly, 1979). Clearly there are many spatial constraints, including spatial frequency content, retinal eccentricity, exposure duration, target spacing, and disparity gradient, which, when properly adjusted, can greatly enhance stereo-depth in video displays.
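    The small depth intervals stereopsis resolves can be sanity-checked with the small-angle approximation for binocular disparity: a depth step dD at viewing distance D subtends roughly I*dD/D^2 radians for interocular separation I. The numbers below are illustrative, not values from the record.

```python
# Small-angle estimate of binocular disparity for a small depth step.
# The 65 mm interocular separation and 1 m viewing distance are
# illustrative assumptions.

import math

RAD_TO_ARCSEC = 180 / math.pi * 3600  # radians → seconds of arc

def disparity_arcsec(interocular_m, depth_step_m, distance_m):
    """Approximate disparity of a depth step dD at distance D: I*dD/D^2."""
    return interocular_m * depth_step_m / distance_m**2 * RAD_TO_ARCSEC

# A 1 mm depth step viewed at 1 m with a 65 mm interocular separation:
d = disparity_arcsec(0.065, 0.001, 1.0)
print(round(d, 1))  # → 13.4 arcsec, the same order as the 5-10 arcsec cited
```

    The quadratic falloff with distance in this formula is one reason stereoscopic depth is most effective for near-field tasks such as telemanipulation.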

  2. Feasibility study of utilizing ultraportable projectors for endoscopic video display (with videos).

    PubMed

    Tang, Shou-Jiang; Fehring, Amanda; Mclemore, Mac; Griswold, Michael; Wang, Wanmei; Paine, Elizabeth R; Wu, Ruonan; To, Filip

    2014-10-01

    Modern endoscopy requires video display. Recent miniaturized, ultraportable projectors are affordable, durable, and offer quality image display. To explore the feasibility of using ultraportable projectors in endoscopy. Prospective bench-top comparison; clinical feasibility study. Masked comparison study of images displayed via 2 Samsung ultraportable light-emitting diode projectors (pocket-sized SP-HO3; pico projector SP-P410M) and 1 Microvision Showwx-II Laser pico projector. BENCH-TOP FEASIBILITY STUDY: Prerecorded endoscopic video was streamed via computer. CLINICAL COMPARISON STUDY: Live high-definition endoscopy video was simultaneously displayed through each processor onto a standard liquid crystal display monitor and projected onto a portable, pull-down projection screen. Endoscopists, endoscopy nurses, and technicians rated video images; ratings were analyzed by linear mixed-effects regression models with random intercepts. All projectors were easy to set up, adjust, focus, and operate, with no real-time lapse for any. Bench-top study outcomes: Samsung pico preferred to Laser pico, overall rating 1.5 units higher (95% confidence interval [CI] = 0.7-2.4), P < .001; Samsung pocket preferred to Laser pico, 3.3 units higher (95% CI = 2.4-4.1), P < .001; Samsung pocket preferred to Samsung pico, 1.7 units higher (95% CI = 0.9-2.5), P < .001. The clinical comparison study confirmed the Samsung pocket projector as best, with an overall rating 2.3 units higher (95% CI = 1.6-3.0), P < .001, than the Samsung pico. Low brightness currently limits pico projector use in clinical endoscopy. The pocket projector, with higher brightness levels (170 lumens), is clinically useful. Continued improvements to ultraportable projectors will supply a needed niche in endoscopy through portability, reduced cost, and equal or better image quality. © The Author(s) 2013.

  3. The Eyes Have It.

    ERIC Educational Resources Information Center

    Walsh, Janet

    1982-01-01

    Discusses issues related to possible health hazards associated with viewing video display terminals. Includes some findings of the 1979 NIOSH report on Potential Hazards of Video Display Terminals, indicating that the level of radiation emitted is low, and providing recommendations related to glare and back pain/muscular fatigue problems. (JN)

  4. Virtual navigation performance: the relationship to field of view and prior video gaming experience.

    PubMed

    Richardson, Anthony E; Collaer, Marcia L

    2011-04-01

    Two experiments examined whether learning a virtual environment was influenced by field of view and how it related to prior video gaming experience. In the first experiment, participants (42 men, 39 women; M age = 19.5 yr., SD = 1.8) performed worse on a spatial orientation task displayed with a narrow field of view in comparison to medium and wide field-of-view displays. Counter to initial hypotheses, wide field-of-view displays did not improve performance over medium displays, and this was replicated in a second experiment (30 men, 30 women; M age = 20.4 yr., SD = 1.9) presenting a more complex learning environment. Self-reported video gaming experience correlated with several spatial tasks: virtual environment pointing and tests of Judgment of Line Angle and Position, mental rotation, and Useful Field of View (with correlations between .31 and .45). When prior video gaming experience was included as a covariate, sex differences in spatial tasks disappeared.

  5. Motion sickness, console video games, and head-mounted displays.

    PubMed

    Merhi, Omar; Faugloire, Elise; Flanagan, Moira; Stoffregen, Thomas A

    2007-10-01

    We evaluated the nauseogenic properties of commercial console video games (i.e., games that are sold to the public) when presented through a head-mounted display. Anecdotal reports suggest that motion sickness may occur among players of contemporary commercial console video games. Participants played standard console video games using an Xbox game system. We varied the participants' posture (standing vs. sitting) and the game (two Xbox games). Participants played for up to 50 min and were asked to discontinue if they experienced any symptoms of motion sickness. Sickness occurred in all conditions, but it was more common during standing. During seated play there were significant differences in head motion between sick and well participants before the onset of motion sickness. The results indicate that commercial console video game systems can induce motion sickness when presented via a head-mounted display and support the hypothesis that motion sickness is preceded by instability in the control of seated posture. Potential applications of this research include changes in the design of console video games and recommendations for how such systems should be used.

  6. 78 FR 23591 - Certain Video Displays and Products Using and Containing Same; Investigations: Terminations...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-04-19

    ... INTERNATIONAL TRADE COMMISSION [Investigation No. 337-TA-828] Certain Video Displays and Products Using and Containing Same; Investigations: Terminations, Modifications and Rulings AGENCY: U.S. International Trade Commission. ACTION: Notice. SUMMARY: Notice is hereby given that the U.S. International...

  7. Natural 3D content on glasses-free light-field 3D cinema

    NASA Astrophysics Data System (ADS)

    Balogh, Tibor; Nagy, Zsolt; Kovács, Péter Tamás.; Adhikarla, Vamsi K.

    2013-03-01

    This paper presents a complete framework for capturing, processing, and displaying free viewpoint video on a large-scale immersive light-field display. We present a combined hardware-software solution to visualize free viewpoint 3D video on a cinema-sized screen. The new glasses-free 3D projection technology can support a larger audience than existing autostereoscopic displays. We introduce and describe our new display system, including optical and mechanical design considerations, the capturing system and render cluster for producing the 3D content, and the various software modules driving the system. The indigenous display is the first of its kind, equipped with front-projection light-field HoloVizio technology controlling up to 63 MP. It has all the advantages of previous light-field displays and, in addition, allows a more flexible arrangement with a larger screen size, matching cinema or meeting-room geometries, yet is simpler to set up. The software system makes it possible to show 3D applications in real time, in addition to natural content captured from dense camera arrangements as well as from sparse cameras covering a wider baseline. Our software system, running on the GPU-accelerated render cluster, can also visualize pre-recorded Multi-view Video plus Depth (MVD4) videos on this glasses-free light-field cinema system, interpolating and extrapolating missing views.

  8. RAPID: A random access picture digitizer, display, and memory system

    NASA Technical Reports Server (NTRS)

    Yakimovsky, Y.; Rayfield, M.; Eskenazi, R.

    1976-01-01

    RAPID is a system capable of providing convenient digital analysis of video data in real time. It has two modes of operation. The first allows continuous digitization of an EIA RS-170 video signal: each frame in the video signal is digitized and written into RAPID's internal memory in 1/30 of a second. The second mode leaves the content of the internal memory independent of the current input video. In both modes of operation, the image contained in the memory is used to generate an EIA RS-170 composite video output signal representing the digitized image so that it can be displayed on a monitor.
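
    The two modes amount to a frame store that either tracks the live input or freezes while continuing to drive the output. A minimal sketch of that logic (class and frame representation are invented for illustration; the actual hardware is not described at this level in the record):

    ```python
    class FrameStore:
        """Sketch of RAPID's two modes: continuous digitize vs. freeze.

        The memory always feeds the video output; the mode only controls
        whether incoming frames overwrite it.
        """

        def __init__(self):
            self.memory = None        # last digitized frame
            self.continuous = True    # mode 1: overwrite on every frame

        def on_input_frame(self, frame):
            if self.continuous:
                self.memory = frame   # written within one 1/30 s frame time

        def output_frame(self):
            return self.memory        # regenerated as composite video out

    store = FrameStore()
    store.on_input_frame("frame-1")
    store.continuous = False          # mode 2: memory independent of input
    store.on_input_frame("frame-2")   # ignored while frozen
    ```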

  9. Standardized access, display, and retrieval of medical video

    NASA Astrophysics Data System (ADS)

    Bellaire, Gunter; Steines, Daniel; Graschew, Georgi; Thiel, Andreas; Bernarding, Johannes; Tolxdorff, Thomas; Schlag, Peter M.

    1999-05-01

    The system presented here enhances documentation and data-secured second-opinion facilities by integrating video sequences into DICOM 3.0. We present an implementation of a medical video server extended by a DICOM interface. Security mechanisms conforming with DICOM are integrated to enable secure internet access. Digital video documents of diagnostic and therapeutic procedures should be examined regarding the clip length and size necessary for second opinion and manageable with today's hardware. Image sources relevant for this paper include the 3D laparoscope, 3D surgical microscope, 3D open surgery camera, synthetic video, and monoscopic endoscopes. The global DICOM video concept and three special workplaces for distinct applications are described. Additionally, an approach is presented to analyze the motion of the endoscopic camera for future automatic video cutting. Digital stereoscopic video sequences (DSVS) are especially in demand for surgery; therefore, DSVS are also integrated into the DICOM video concept. Results are presented describing the suitability of stereoscopic display techniques for the operating room.

  10. A design of real time image capturing and processing system using Texas Instrument's processor

    NASA Astrophysics Data System (ADS)

    Wee, Toon-Joo; Chaisorn, Lekha; Rahardja, Susanto; Gan, Woon-Seng

    2007-09-01

    In this work, we developed and implemented an image capturing and processing system equipped with the capability of capturing images from an input video in real time. The input video can come from a PC, video camcorder, or DVD player. We developed two modes of operation in the system. In the first mode, an input image from the PC is processed on the processing board (a development platform with a digital signal processor) and is displayed on the PC. In the second mode, the currently captured image from the video camcorder (or from the DVD player) is processed on the board but is displayed on the LCD monitor. The major difference between our system and other existing conventional systems is that image-processing functions are performed on the board instead of the PC (so that the functions can be used for further developments on the board). The user can control the operations of the board through the Graphical User Interface (GUI) provided on the PC. In order to have smooth image data transfer between the PC and the board, we employed Real-Time Data Exchange (RTDX) technology to create a link between them. For image processing, we developed three main groups of functions: (1) Point Processing; (2) Filtering; and (3) 'Others'. Point Processing includes rotation, negation, and mirroring. The Filtering category provides median, adaptive, smooth, and sharpen filtering in the spatial domain. The 'Others' category provides auto-contrast adjustment, edge detection, segmentation, and sepia color; these functions either add an effect to the image or enhance it. We developed and implemented our system using the C/C# programming languages on the TMS320DM642 (DM642) board from Texas Instruments (TI). The system was showcased at the College of Engineering (CoE) exhibition 2006 at Nanyang Technological University (NTU), where more than 40 users tried it. The demonstration showed that our system is adequate for real-time image capturing. Our system can be used or applied in applications such as medical imaging, video surveillance, etc.
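
    The Point Processing operations named above are per-pixel transforms; pure-Python stand-ins sketch the idea (the DM642 DSP implementation is not shown in the record, and the sepia weights below are the commonly used coefficients, an assumption rather than the authors' values):

    ```python
    def negate(pixels):
        """Negative of an 8-bit grayscale image given as a list of rows."""
        return [[255 - p for p in row] for row in pixels]

    def mirror(pixels):
        """Horizontal mirror: reverse each row."""
        return [row[::-1] for row in pixels]

    def sepia(rgb):
        """Classic sepia weighting of one (r, g, b) pixel, clamped to 255."""
        r, g, b = rgb
        return (min(255, int(0.393 * r + 0.769 * g + 0.189 * b)),
                min(255, int(0.349 * r + 0.686 * g + 0.168 * b)),
                min(255, int(0.272 * r + 0.534 * g + 0.131 * b)))
    ```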

  11. An Airborne Programmable Digital to Video Converter Interface and Operation Manual.

    DTIC Science & Technology

    1981-02-01

    Keywords: scan converter; video display; television display. Abstract (fragment): ...programmable cathode ray tube (CRT) controller which is accessed by the CPU to permit operation in a wide variety of modes. The Alphanumeric Generator

  12. Potential Health Hazards of Video Display Terminals.

    ERIC Educational Resources Information Center

    Murray, William E.; And Others

    In response to a request from three California unions to evaluate potential health hazards from the use of video display terminals (VDT's) in information processing applications, the National Institute for Occupational Safety and Health (NIOSH) conducted a limited field investigation of three companies in the San Francisco-Oakland Bay Area. A…

  13. Uninhabited Military Vehicles (UMVs): Human Factors Issues in Augmenting the Force (Vehicules Militaires sans Pilote (UMV): Questions Relatives aux Facteurs Humains lies a l’augmentation des Forces)

    DTIC Science & Technology

    2007-07-01

    ...engineering of a process or system that mimics biology, to investigate behaviours in robots that emulate animals, such as self-healing and swarming [2... Contents fragments: 7.4 Adaptive Automation for Robotic Military Systems; Human Performance Issues; Figure 6-7, Integrated Display of Video, Range Readings, and Robot Representation; Figure 6-8, Representing the Pose of a Panning Camera.

  14. Display Sharing: An Alternative Paradigm

    NASA Technical Reports Server (NTRS)

    Brown, Michael A.

    2010-01-01

    The current Johnson Space Center (JSC) Mission Control Center (MCC) Video Transport System (VTS) provides flight controllers and management the ability to meld raw video from various sources with telemetry to improve situational awareness. However, maintaining a separate infrastructure for video delivery and integration of video content with data adds significant complexity and cost to the system. When considering alternative architectures for a VTS, the current system's ability to share specific computer displays in their entirety with other locations - such as large projector systems, flight control rooms, and back supporting rooms throughout the facilities and centers - must be incorporated into any new architecture. Internet Protocol (IP)-based systems also support video delivery and integration, and generally have an advantage in terms of cost and maintainability. Although IP-based systems are versatile, the task of sharing a computer display from one workstation to another can be time-consuming for an end user and inconvenient to administer at a system level. The objective of this paper is to present a prototype display-sharing enterprise solution. Display sharing is a system that delivers image sharing across the LAN while simultaneously managing bandwidth, supporting encryption, enabling recovery and resynchronization following a loss of signal, and minimizing latency. Additional critical elements include image-scaling support, multi-sharing, ease of initial integration and configuration, integration with desktop window managers, collaboration tools, and host and recipient controls. The goal of this paper is to summarize the various elements of an IP-based display-sharing system that can be used in today's control center environment.
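
    The bandwidth-management idea behind display sharing is typically to ship only the screen regions that changed between frames. A toy sketch of that tile-diff step (all data structures invented for illustration; the paper does not describe the prototype's actual protocol):

    ```python
    def changed_tiles(prev, curr, tile=2):
        """Compare two frames (lists of pixel rows) tile by tile and return
        the (row, col) indices of tiles that differ -- only these tiles
        would need to be encoded and sent over the LAN."""
        dirty = []
        for ty in range(0, len(curr), tile):
            for tx in range(0, len(curr[0]), tile):
                block = lambda f: [r[tx:tx + tile] for r in f[ty:ty + tile]]
                if prev is None or block(prev) != block(curr):
                    dirty.append((ty // tile, tx // tile))
        return dirty

    # A single changed pixel dirties exactly one 2x2 tile.
    a = [[0] * 4 for _ in range(4)]
    b = [row[:] for row in a]
    b[0][0] = 1
    ```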

  15. 78 FR 31769 - Accessible Emergency Information; Apparatus Requirements for Emergency Information and Video...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-05-24

    ... Accessible Emergency Information; Apparatus Requirements for Emergency Information and Video Description...] Accessible Emergency Information; Apparatus Requirements for Emergency Information and Video Description... manufacturers of devices that display video programming to ensure that certain apparatus are able to make...

  16. HEVC for high dynamic range services

    NASA Astrophysics Data System (ADS)

    Kim, Seung-Hwan; Zhao, Jie; Misra, Kiran; Segall, Andrew

    2015-09-01

    Displays capable of showing a greater range of luminance values can render content containing high dynamic range information in a way that gives viewers a more immersive experience. This paper introduces the design aspects of a high dynamic range (HDR) system and examines the performance of the HDR processing chain in terms of compression efficiency. Specifically, it examines the relation between the recently introduced Society of Motion Picture and Television Engineers (SMPTE) ST 2084 transfer function and the High Efficiency Video Coding (HEVC) standard. SMPTE ST 2084 is designed to cover the full range of an HDR signal from 0 to 10,000 nits; however, in many situations the valid signal range of actual video may be smaller than the range supported by SMPTE ST 2084. This restricted signal range results in a restricted range of code values for the input video data and adversely impacts compression efficiency. In this paper, we propose a code-value remapping method that extends the restricted-range code values into full-range code values so that existing standards such as HEVC may better compress the video content. The paper also identifies related non-normative, encoder-only changes that are required for the remapping method to allow a fair comparison with the anchor. Results are presented comparing the efficiency of the current approach versus the proposed remapping method for HM-16.2.
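
    The remapping described is, in essence, a linear stretch of the occupied code-value interval onto the full code range before encoding, with the inverse applied after decoding. A minimal sketch for 10-bit code values (the paper's exact mapping is not given in this abstract, so this is only the generic form, with an assumed restricted range):

    ```python
    def remap_to_full_range(v, in_min, in_max, bit_depth=10):
        """Linearly stretch a restricted code-value range onto the full
        [0, 2**bit_depth - 1] range before HEVC encoding."""
        full = (1 << bit_depth) - 1
        return round((v - in_min) * full / (in_max - in_min))

    def inverse_remap(v, in_min, in_max, bit_depth=10):
        """Map a full-range code value back to the restricted range
        after decoding."""
        full = (1 << bit_depth) - 1
        return round(in_min + v * (in_max - in_min) / full)
    ```

    Spreading the signal over more code values reduces quantization loss for a given QP, which is the intuition behind the reported compression gains.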

  17. Use of Internet Resources in the Biology Lecture Classroom.

    ERIC Educational Resources Information Center

    Francis, Joseph W.

    2000-01-01

    Introduces internet resources that are available for instructional use in biology classrooms. Provides information on video-based technologies to create and capture video sequences, interactive web sites that allow interaction with biology simulations, online texts, and interactive videos that display animated video sequences. (YDS)

  18. Secure Video Surveillance System Acquisition Software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2009-12-04

    The SVSS Acquisition Software collects and displays video images from two cameras through a VPN and stores the images on a collection controller. The software is configured to allow a user to enter a time window to display up to 2.5 hours of video for review. The software collects images from the cameras at a rate of 1 image per second and automatically deletes images older than 3 hours. The software runs in a Linux environment and can be run in a virtual machine on Windows XP. The Sandia software integrates the different COTS software packages to build the video review system.
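
    The stated acquisition policy (one image per second, nothing kept beyond three hours) is a simple rolling-retention rule. A sketch of the pruning step, using plain timestamps rather than the real file layout, which the record does not describe:

    ```python
    RETENTION_S = 3 * 60 * 60  # delete images older than 3 hours

    def prune(image_timestamps, now):
        """Return only the capture timestamps still inside the
        retention window; the rest would be deleted from disk."""
        return [t for t in image_timestamps if now - t <= RETENTION_S]

    # One image per second for four hours -> only the last three hours
    # (10,800 images) survive the prune.
    now = 4 * 60 * 60
    stamps = list(range(0, now))
    kept = prune(stamps, now)
    ```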

  19. 3D video coding: an overview of present and upcoming standards

    NASA Astrophysics Data System (ADS)

    Merkle, Philipp; Müller, Karsten; Wiegand, Thomas

    2010-07-01

    An overview of existing and upcoming 3D video coding standards is given. Various 3D video formats are available, each with individual pros and cons. The 3D video formats can be separated into two classes: video-only formats (such as stereo and multiview video) and depth-enhanced formats (such as video plus depth and multiview video plus depth). Since all these formats consist of at least two video sequences and possibly additional depth data, efficient compression is essential for the success of 3D video applications and technologies. For the video-only formats, the H.264 family of coding standards already provides efficient and widely established compression algorithms: H.264/AVC simulcast, H.264/AVC stereo SEI message, and H.264/MVC. For the depth-enhanced formats, standardized coding algorithms are currently being developed. New and specially adapted coding approaches are necessary, as the depth or disparity information included in these formats has significantly different characteristics than video and is not displayed directly, but used for rendering. Motivated by evolving market needs, MPEG has started an activity to develop a generic 3D video standard within the 3DVC ad hoc group. Key features of the standard are efficient and flexible compression of depth-enhanced 3D video representations and decoupling of content creation and display requirements.

  20. 36 CFR 1194.24 - Video and multimedia products.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 36 Parks, Forests, and Public Property 3 2010-07-01 2010-07-01 false Video and multimedia products... Video and multimedia products. (a) All analog television displays 13 inches and larger, and computer... training and informational video and multimedia productions which support the agency's mission, regardless...

  1. 36 CFR 1194.24 - Video and multimedia products.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 36 Parks, Forests, and Public Property 3 2011-07-01 2011-07-01 false Video and multimedia products... Video and multimedia products. (a) All analog television displays 13 inches and larger, and computer... training and informational video and multimedia productions which support the agency's mission, regardless...

  2. 36 CFR 1194.24 - Video and multimedia products.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 36 Parks, Forests, and Public Property 3 2012-07-01 2012-07-01 false Video and multimedia products... Video and multimedia products. (a) All analog television displays 13 inches and larger, and computer... training and informational video and multimedia productions which support the agency's mission, regardless...

  3. 36 CFR 1194.24 - Video and multimedia products.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 36 Parks, Forests, and Public Property 3 2014-07-01 2014-07-01 false Video and multimedia products... Video and multimedia products. (a) All analog television displays 13 inches and larger, and computer... training and informational video and multimedia productions which support the agency's mission, regardless...

  4. An Imaging And Graphics Workstation For Image Sequence Analysis

    NASA Astrophysics Data System (ADS)

    Mostafavi, Hassan

    1990-01-01

    This paper describes an application-specific engineering workstation designed and developed to analyze imagery sequences from a variety of sources. The system combines the software and hardware environment of modern graphics-oriented workstations with digital image acquisition, processing, and display techniques. The objective is to achieve automation and high throughput for many data reduction tasks involving metric studies of image sequences. The applications of such an automated data reduction tool include analysis of the trajectory and attitude of aircraft, missiles, stores, and other flying objects in various flight regimes, including launch and separation as well as regular flight maneuvers. The workstation can also be used in an on-line or off-line mode to study three-dimensional motion of aircraft models in simulated flight conditions such as wind tunnels. The system's key features are: 1) acquisition and storage of image sequences by digitizing real-time video or frames from a film strip; 2) computer-controlled movie loop playback, slow motion, and freeze-frame display combined with digital image sharpening, noise reduction, contrast enhancement, and interactive image magnification; 3) multiple leading-edge tracking, in addition to object centroids, at up to 60 fields per second from either live input video or a stored image sequence; 4) automatic and manual field-of-view and spatial calibration; 5) image sequence database generation and management, including the measurement data products; 6) off-line analysis software for trajectory plotting and statistical analysis; 7) model-based estimation and tracking of object attitude angles; and 8) interface to a variety of video players and film transport sub-systems.
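
    The centroid tracking in feature 3 reduces, per object, to an intensity-weighted mean of pixel coordinates. A sketch of that computation (the workstation's actual tracker is not described at this level in the record):

    ```python
    def centroid(image):
        """Intensity-weighted centroid (x, y) of a grayscale image given
        as a list of rows; returns None for an all-zero image."""
        total = sx = sy = 0.0
        for y, row in enumerate(image):
            for x, v in enumerate(row):
                total += v
                sx += x * v
                sy += y * v
        if total == 0:
            return None
        return (sx / total, sy / total)
    ```

    At 60 fields per second, each field's centroid update is a single pass over the object's pixels, which is what makes field-rate tracking feasible.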

  5. Habitual action video game players display increased cortical thickness in the dorsal anterior cingulate cortex.

    PubMed

    Benady-Chorney, Jessica; Yau, Yvonne; Zeighami, Yashar; Bohbot, Veronique D; West, Greg L

    2018-03-21

    Action video game players (aVGPs) display increased performance in attention-based tasks and enhanced procedural motor learning. In parallel, the anterior cingulate cortex (ACC) is centrally implicated in specific types of reward-based learning and attentional control, the execution or inhibition of motor commands, and error detection. These processes are hypothesized to support aVGP in-game performance and enhanced learning through in-game feedback. We therefore tested the hypothesis that habitual aVGPs would display increased cortical thickness compared with nonvideo game players (nonVGPs). Results showed that the aVGP group (n=17) displayed significantly higher levels of cortical thickness, specifically in the dorsal ACC, compared with the nonVGP group (n=16). Results are discussed in the context of previous findings examining video game experience, attention/performance, and responses to affective components such as pain and fear.

  6. Optical detection of blade flutter. [in YF-100 turbofan engine

    NASA Technical Reports Server (NTRS)

    Nieberding, W. C.; Pollack, J. L.

    1977-01-01

    The paper examines the capabilities of photoelectric scanning (PES) and stroboscopic imagery (SI) as optical monitoring tools for detection of the onset of flutter in the fan blades of an aircraft gas turbine engine. Both optical techniques give visual data in real time as well as video-tape records. PES is shown to be an ideal flutter monitor, since a single cathode ray tube displays the behavior of all the blades in a stage simultaneously. Operation of the SI system continuously while searching for a flutter condition imposes severe demands on the flash tube and affects its reliability, thus limiting its use as a flutter monitor. A better method of operation is to search for flutter with the PES and limit the use of SI to those times when the PES indicates interesting blade activity.

  7. Advances in Engine Test Capabilities at the NASA Glenn Research Center's Propulsion Systems Laboratory

    NASA Technical Reports Server (NTRS)

    Pachlhofer, Peter M.; Panek, Joseph W.; Dicki, Dennis J.; Piendl, Barry R.; Lizanich, Paul J.; Klann, Gary A.

    2006-01-01

    The Propulsion Systems Laboratory at the National Aeronautics and Space Administration (NASA) Glenn Research Center is one of the premier U.S. facilities for research on advanced aeropropulsion systems. The facility can simulate a wide range of altitude and Mach number conditions while supplying the aeropropulsion system with all the support services necessary to operate at those conditions. Test data are recorded on a combination of steady-state and high-speed data-acquisition systems. Recently, a number of upgrades were made to the facility to meet demanding new requirements for the latest aeropropulsion concepts and to improve operational efficiency. Improvements were made to data-acquisition systems, facility and engine-control systems, test-condition simulation systems, video capture and display capabilities, and personnel training procedures. This paper discusses the facility's capabilities, recent upgrades, and planned future improvements.

  8. Video monitoring system for car seat

    NASA Technical Reports Server (NTRS)

    Elrod, Susan Vinz (Inventor); Dabney, Richard W. (Inventor)

    2004-01-01

    A video monitoring system for use with a child car seat has video camera(s) mounted in the car seat. The video images are wirelessly transmitted to a remote receiver/display encased in a portable housing that can be removably mounted in the vehicle in which the car seat is installed.

  9. 36 CFR § 1194.24 - Video and multimedia products.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 36 Parks, Forests, and Public Property 3 2013-07-01 2012-07-01 true Video and multimedia products... § 1194.24 Video and multimedia products. (a) All analog television displays 13 inches and larger, and... circuitry. (c) All training and informational video and multimedia productions which support the agency's...

  10. The use of head-mounted display eyeglasses for teaching surgical skills: A prospective randomised study.

    PubMed

    Peden, Robert G; Mercer, Rachel; Tatham, Andrew J

    2016-10-01

    To investigate whether 'surgeon's eye view' videos provided via head-mounted displays can improve skill acquisition and satisfaction in basic surgical training compared with conventional wet-lab teaching. A prospective randomised study of 14 medical students with no prior suturing experience, randomised to 3 groups: 1) conventional teaching; 2) head-mounted display-assisted teaching and 3) head-mounted display self-learning. All were instructed in interrupted suturing followed by 15 minutes' practice. Head-mounted displays provided a 'surgeon's eye view' video demonstrating the technique, available during practice. Subsequently students undertook a practical assessment, where suturing was videoed and graded by masked assessors using a 10-point surgical skill score (1 = very poor technique, 10 = very good technique). Students completed a questionnaire assessing confidence and satisfaction. Suturing ability after teaching was similar between groups (P = 0.229, Kruskal-Wallis test). Median surgical skill scores were 7.5 (range 6-10), 6 (range 3-8) and 7 (range 1-7) following head-mounted display-assisted teaching, conventional teaching, and head-mounted display self-learning respectively. There was good agreement between graders regarding surgical skill scores (rho.c = 0.599, r = 0.603), and no difference in number of sutures placed between groups (P = 0.120). The head-mounted display-assisted teaching group reported greater enjoyment than those attending conventional teaching (P = 0.033). Head-mounted display self-learning was regarded as least useful (7.4 vs 9.0 for conventional teaching, P = 0.021), but more enjoyable than conventional teaching (9.6 vs 8.0, P = 0.050). Teaching augmented with head-mounted displays was significantly more enjoyable than conventional teaching. Students undertaking self-directed learning using head-mounted displays with pre-recorded videos had comparable skill acquisition to those attending traditional wet-lab tutorials. 
Copyright © 2016 IJS Publishing Group Ltd. Published by Elsevier Ltd. All rights reserved.

  11. A color video display technique for flow field surveys

    NASA Technical Reports Server (NTRS)

    Winkelmann, A. E.; Tsao, C. P.

    1982-01-01

    A computer-driven color video display technique has been developed for the presentation of wind tunnel flow field survey data. The results of both qualitative and quantitative flow field surveys can be presented in high-spatial-resolution, color-coded displays. The technique has been used for data obtained with a hot-wire probe, a split-film probe, a Conrad (pitch) probe, and a 5-tube pressure probe in surveys above and behind a wing with partially stalled and fully stalled flow.
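
    Color coding a scalar survey quantity amounts to mapping each measured value from its normalized range to an RGB triple. A toy version of such a mapping (the paper's actual palette is not specified; the blue-to-red ramp is an assumption):

    ```python
    def color_code(value, vmin, vmax):
        """Map a scalar measurement to an RGB triple on a simple
        blue-to-red ramp for a color-coded survey display."""
        t = (value - vmin) / (vmax - vmin)
        t = min(1.0, max(0.0, t))          # clamp out-of-range survey points
        return (int(255 * t), 0, int(255 * (1 - t)))
    ```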

  12. Design of multi-view stereoscopic HD video transmission system based on MPEG-21 digital item adaptation

    NASA Astrophysics Data System (ADS)

    Lee, Seokhee; Lee, Kiyoung; Kim, Man Bae; Kim, JongWon

    2005-11-01

    In this paper, we propose a design for a multi-view stereoscopic HD video transmission system based on MPEG-21 Digital Item Adaptation (DIA). It focuses on compatibility and scalability to meet various user preferences and terminal capabilities. A large variety of multi-view 3D HD video types exist, according to the methods used for acquisition, display, and processing. By following the MPEG-21 DIA framework, the multi-view stereoscopic HD video is adapted according to user feedback: a user can be served multi-view stereoscopic video that corresponds to his or her preferences and terminal capabilities. In our preliminary prototype, we verify that the proposed design can support two different types of display device (stereoscopic and auto-stereoscopic) and switching between two available viewpoints.

  13. Display system employing acousto-optic tunable filter

    NASA Technical Reports Server (NTRS)

    Lambert, James L. (Inventor)

    1995-01-01

An acousto-optic tunable filter (AOTF) is employed to generate a display by driving the AOTF with an RF electrical signal comprising modulated red, green, and blue video scan line signals and scanning the AOTF with a linearly polarized, pulsed light beam, resulting in encoding of color video columns (scan lines) of an input video image into vertical columns of the AOTF output beam. The AOTF is illuminated periodically as each acoustically-encoded scan line fills the cell aperture of the AOTF. A polarizing beam splitter removes the unused first order beam component of the AOTF output and, if desired, overlays a real world scene on the output plane. Resolutions as high as 30,000 lines are possible, providing holographic display capability.

  14. Display system employing acousto-optic tunable filter

    NASA Technical Reports Server (NTRS)

    Lambert, James L. (Inventor)

    1993-01-01

An acousto-optic tunable filter (AOTF) is employed to generate a display by driving the AOTF with an RF electrical signal comprising modulated red, green, and blue video scan line signals and scanning the AOTF with a linearly polarized, pulsed light beam, resulting in encoding of color video columns (scan lines) of an input video image into vertical columns of the AOTF output beam. The AOTF is illuminated periodically as each acoustically-encoded scan line fills the cell aperture of the AOTF. A polarizing beam splitter removes the unused first order beam component of the AOTF output and, if desired, overlays a real world scene on the output plane. Resolutions as high as 30,000 lines are possible, providing holographic display capability.

  15. Using Videos Derived from Simulations to Support the Analysis of Spatial Awareness in Synthetic Vision Displays

    NASA Technical Reports Server (NTRS)

    Boton, Matthew L.; Bass, Ellen J.; Comstock, James R., Jr.

    2006-01-01

    The evaluation of human-centered systems can be performed using a variety of different methodologies. This paper describes a human-centered systems evaluation methodology where participants watch 5-second non-interactive videos of a system in operation before supplying judgments and subjective measures based on the information conveyed in the videos. This methodology was used to evaluate the ability of different textures and fields of view to convey spatial awareness in synthetic vision systems (SVS) displays. It produced significant results for both judgment based and subjective measures. This method is compared to other methods commonly used to evaluate SVS displays based on cost, the amount of experimental time required, experimental flexibility, and the type of data provided.

  16. Segmented cold cathode display panel

    NASA Technical Reports Server (NTRS)

    Payne, Leslie (Inventor)

    1998-01-01

The present invention is a video display device that utilizes the novel concept of generating an electronically controlled pattern of electron emission at the output of a segmented photocathode. This pattern of electron emission is amplified via a channel plate. The result is that an intense electronic image can be accelerated toward a phosphor, thus creating a bright video image. This novel arrangement allows one to provide a full-color, flat video display that can be implemented in large formats. In an alternate arrangement, the present invention is provided without the channel plate and a porous conducting surface is provided instead. In this alternate arrangement, the brightness of the image is reduced but the cost of the overall device is significantly lowered because fabrication complexity is significantly decreased.

  17. Teaching Complicated Conceptual Knowledge with Simulation Videos in Foundational Electrical Engineering Courses

    ERIC Educational Resources Information Center

    Chen, Baiyun; Wei, Lei; Li, Huihui

    2016-01-01

    Building a solid foundation of conceptual knowledge is critical for students in electrical engineering. This mixed-method case study explores the use of simulation videos to illustrate complicated conceptual knowledge in foundational communications and signal processing courses. Students found these videos to be very useful for establishing…

  18. A compact high-definition low-cost digital stereoscopic video camera for rapid robotic surgery development.

    PubMed

    Carlson, Jay; Kowalczuk, Jędrzej; Psota, Eric; Pérez, Lance C

    2012-01-01

    Robotic surgical platforms require vision feedback systems, which often consist of low-resolution, expensive, single-imager analog cameras. These systems are retooled for 3D display by simply doubling the cameras and outboard control units. Here, a fully-integrated digital stereoscopic video camera employing high-definition sensors and a class-compliant USB video interface is presented. This system can be used with low-cost PC hardware and consumer-level 3D displays for tele-medical surgical applications including military medical support, disaster relief, and space exploration.

  19. Portable low-cost devices for videotaping, editing, and displaying field-sequential stereoscopic motion pictures and video

    NASA Astrophysics Data System (ADS)

    Starks, Michael R.

    1990-09-01

A variety of low-cost devices for capturing, editing, and displaying field-sequential 60-cycle stereoscopic video have recently been marketed by 3D TV Corp. and others. When properly used, they give very high quality images with most consumer and professional equipment. Our stereoscopic multiplexers for creating and editing field-sequential video in NTSC or component formats (SVHS, Betacam, RGB), together with our Home 3D Theater system employing LCD eyeglasses, have made 3D movies and television available to a large audience.

  20. A Practical Strategy for Teaching a Child with Autism to Attend to and Imitate a Portable Video Model

    ERIC Educational Resources Information Center

    Plavnick, Joshua B.

    2012-01-01

    Video modeling is an effective and efficient methodology for teaching new skills to individuals with autism. New technology may enhance video modeling as smartphones or tablet computers allow for portable video displays. However, the reduced screen size may decrease the likelihood of attending to the video model for some children. The present…

  1. 47 CFR 79.109 - Activating accessibility features.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... ACCESSIBILITY OF VIDEO PROGRAMMING Apparatus § 79.109 Activating accessibility features. (a) Requirements... video programming transmitted in digital format simultaneously with sound, including apparatus designed to receive or display video programming transmitted in digital format using Internet protocol, with...

  2. 14 CFR 382.69 - What requirements must carriers meet concerning the accessibility of videos, DVDs, and other...

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... concerning the accessibility of videos, DVDs, and other audio-visual presentations shown on-aircraft to... meet concerning the accessibility of videos, DVDs, and other audio-visual presentations shown on... videos, DVDs, and other audio-visual displays played on aircraft for safety purposes, and all such new...

  3. 14 CFR 382.69 - What requirements must carriers meet concerning the accessibility of videos, DVDs, and other...

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... concerning the accessibility of videos, DVDs, and other audio-visual presentations shown on-aircraft to... meet concerning the accessibility of videos, DVDs, and other audio-visual presentations shown on... videos, DVDs, and other audio-visual displays played on aircraft for safety purposes, and all such new...

  4. 47 CFR Appendix - Technical Appendix 1

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... display program material that has been encoded in any and all of the video formats contained in Table A3... frame rate of the transmitted video format. 2. Output Formats Equipment shall support 4:3 center cut-out... for composite video (yellow). Output shall produce video with ITU-R BT.500-11 quality scale of Grade 4...

  5. A Continuing Engineering Education Program Utilizing Video Tape

    ERIC Educational Resources Information Center

    Biedenbach, Joseph M.

    1970-01-01

    Radio Corporation of America has developed a series of courses on video tape for use with their engineering staffs at locations throughout the country. The courses include such topics as FORTRAN Programming, Engineering Mathematics, and Holography. Thirty-six course topics are proposed to date. (MF)

  6. Using virtual reality to analyze sports performance.

    PubMed

    Bideau, Benoit; Kulpa, Richard; Vignais, Nicolas; Brault, Sébastien; Multon, Franck; Craig, Cathy

    2010-01-01

    Improving performance in sports can be difficult because many biomechanical, physiological, and psychological factors come into play during competition. A better understanding of the perception-action loop employed by athletes is necessary. This requires isolating contributing factors to determine their role in player performance. Because of its inherent limitations, video playback doesn't permit such in-depth analysis. Interactive, immersive virtual reality (VR) can overcome these limitations and foster a better understanding of sports performance from a behavioral-neuroscience perspective. Two case studies using VR technology and a sophisticated animation engine demonstrate how to use information from visual displays to inform a player's future course of action.

  7. New teaching methods in use at UC Irvine's optical engineering and instrument design programs

    NASA Astrophysics Data System (ADS)

    Silberman, Donn M.; Rowe, T. Scott; Jo, Joshua; Dimas, David

    2012-10-01

New teaching methods reach geographically dispersed students with advances in Distance Education. Capabilities include a new "Hybrid" teaching method with an instructor in a classroom and a live WebEx simulcast for remote students. Our Distance Education Geometric and Physical Optics courses include Hands-On Optics experiments. Low-cost laboratory kits have been developed, and YouTube-type video recordings of the instructor using these tools guide the students through their labs. A weekly "Office Hour" has been developed using WebEx and a live webcam that the instructor uses to display his notebook writings in real time when answering students' questions.

  8. Wrap-Around Out-the-Window Sensor Fusion System

    NASA Technical Reports Server (NTRS)

    Fox, Jeffrey; Boe, Eric A.; Delgado, Francisco; Secor, James B.; Clark, Michael R.; Ehlinger, Kevin D.; Abernathy, Michael F.

    2009-01-01

The Advanced Cockpit Evaluation System (ACES) includes communication, computing, and display subsystems, mounted in a van, that synthesize out-the-window views to approximate the views of the outside world as it would be seen from the cockpit of a crewed spacecraft or aircraft, or from the remote-control station of a ground vehicle or UAV (unmanned aerial vehicle). The system includes five flat-panel display units arranged approximately in a semicircle around an operator, like cockpit windows. The scene displayed on each panel represents the view through the corresponding cockpit window. Each display unit is driven by a personal computer equipped with a video-capture card that accepts live input from any of a variety of sensors (typically, visible and/or infrared video cameras). Software running in the computers blends the live video images with synthetic images that could be generated, for example, from heads-up-display outputs, waypoints, corridors, or from satellite photographs of the same geographic region. Data from a Global Positioning System receiver and an inertial navigation system aboard the remote vehicle are used by the ACES software to keep the synthetic and live views in registration. If the live image were to fail, the synthetic scenes could still be displayed to maintain situational awareness.
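
The blend-with-fallback behavior described above can be sketched per display panel; the grayscale frame layout, the fixed alpha weight, and the function name are illustrative assumptions, not the ACES design:

```python
def compose_window(live, synthetic, alpha=0.7):
    """Blend a live sensor frame with a registered synthetic frame for
    one display panel. Frames are 2-D lists of pixel intensities; the
    fixed alpha weight is an assumption for illustration.
    If the live feed fails (None), fall back to the synthetic scene so
    situational awareness is maintained."""
    if live is None:
        return [row[:] for row in synthetic]
    return [[alpha * l + (1 - alpha) * s for l, s in zip(lr, sr)]
            for lr, sr in zip(live, synthetic)]
```

In the real system the two frames must first be brought into registration using the GPS and inertial navigation data before any per-pixel combination is meaningful.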

  9. Psychophysical Comparison Of A Video Display System To Film By Using Bone Fracture Images

    NASA Astrophysics Data System (ADS)

    Seeley, George W.; Stempski, Mark; Roehrig, Hans; Nudelman, Sol; Capp, M. P.

    1982-11-01

    This study investigated the possibility of using a video display system instead of film for radiological diagnosis. Also investigated were the relationships between characteristics of the system and the observer's accuracy level. Radiologists were used as observers. Thirty-six clinical bone fractures were separated into two matched sets of equal difficulty. The difficulty parameters and ratings were defined by a panel of expert bone radiologists at the Arizona Health Sciences Center, Radiology Department. These two sets of fracture images were then matched with verifiably normal images using parameters such as film type, angle of view, size, portion of anatomy, the film's density range, and the patient's age and sex. The two sets of images were then displayed, using a counterbalanced design, to each of the participating radiologists for diagnosis. Whenever a response was given to a video image, the radiologist used enhancement controls to "window in" on the grey levels of interest. During the TV phase, the radiologist was required to record the settings of the calibrated controls of the image enhancer during interpretation. At no time did any single radiologist see the same film in both modes. The study was designed so that a standard analysis of variance would show the effects of viewing mode (film vs TV), the effects due to stimulus set, and any interactions with observers. A signal detection analysis of observer performance was also performed. Results indicate that the TV display system is almost as good as the view box display; an average of only two more errors were made on the TV display. The difference between the systems has been traced to four observers who had poor accuracy on a small number of films viewed on the TV display. 
This information is now being correlated with the video system's signal-to-noise ratio (SNR), signal transfer function (STF), and resolution measurements, to obtain information on the basic display and enhancement requirements for a video-based radiologic system. Due to time constraints the results are not included here. The complete results of this study will be reported at the conference.

  10. Display nonlinearity in digital image processing for visual communications

    NASA Astrophysics Data System (ADS)

    Peli, Eli

    1992-11-01

The luminance emitted from a cathode ray tube (CRT) display is a nonlinear function (the gamma function) of the input video signal voltage. In most analog video systems, compensation for this nonlinear transfer function is implemented in the camera amplifiers. When CRT displays are used to present psychophysical stimuli in vision research, the specific display nonlinearity usually is measured and accounted for to ensure that the luminance of each pixel in the synthetic image properly represents the intended value. However, when using digital image processing, the linear analog-to-digital converters store a digital image that is nonlinearly related to the displayed or recorded image. The effect of this nonlinear transformation on a variety of image-processing applications used in visual communications is described.

  11. Display nonlinearity in digital image processing for visual communications

    NASA Astrophysics Data System (ADS)

    Peli, Eli

    1991-11-01

The luminance emitted from a cathode ray tube (CRT) display is a nonlinear function (the gamma function) of the input video signal voltage. In most analog video systems, compensation for this nonlinear transfer function is implemented in the camera amplifiers. When CRT displays are used to present psychophysical stimuli in vision research, the specific display nonlinearity usually is measured and accounted for to ensure that the luminance of each pixel in the synthetic image properly represents the intended value. However, when using digital image processing, the linear analog-to-digital converters store a digital image that is nonlinearly related to the displayed or recorded image. This paper describes the effect of this nonlinear transformation on a variety of image-processing applications used in visual communications.
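
The gamma function and its compensation, as described in these two records, can be sketched as a power law and its inverse; the gamma value of 2.2 is a typical figure assumed here, not a value from the papers:

```python
def crt_luminance(code, gamma=2.2, peak=1.0):
    """Displayed luminance as a nonlinear (power-law) function of an
    8-bit video code -- the 'gamma function' described above.
    gamma = 2.2 is a typical value, assumed for illustration."""
    return peak * (code / 255.0) ** gamma

def compensate(target_luminance, gamma=2.2, peak=1.0):
    """Inverse mapping: the video code that makes the CRT emit the
    intended luminance, as a camera amplifier or lookup table would."""
    return round(255.0 * (target_luminance / peak) ** (1.0 / gamma))
```

A linear analog-to-digital converter that digitizes the uncompensated signal stores codes on the nonlinear side of this mapping, which is precisely the mismatch the papers analyze.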

  12. Recent progress of flexible AMOLED displays

    NASA Astrophysics Data System (ADS)

    Pang, Huiqing; Rajan, Kamala; Silvernail, Jeff; Mandlik, Prashant; Ma, Ruiqing; Hack, Mike; Brown, Julie J.; Yoo, Juhn S.; Jung, Sang-Hoon; Kim, Yong-Cheol; Byun, Seung-Chan; Kim, Jong-Moo; Yoon, Soo-Young; Kim, Chang-Dong; Hwang, Yong-Kee; Chung, In-Jae; Fletcher, Mark; Green, Derek; Pangle, Mike; McIntyre, Jim; Smith, Randal D.

    2011-03-01

    Significant progress has been made in recent years in flexible AMOLED displays and numerous prototypes have been demonstrated. Replacing rigid glass with flexible substrates and thin-film encapsulation makes displays thinner, lighter, and non-breakable - all attractive features for portable applications. Flexible AMOLEDs equipped with phosphorescent OLEDs are considered one of the best candidates for low-power, rugged, full-color video applications. Recently, we have demonstrated a portable communication display device, built upon a full-color 4.3-inch HVGA foil display with a resolution of 134 dpi using an all-phosphorescent OLED frontplane. The prototype is shaped into a thin and rugged housing that will fit over a user's wrist, providing situational awareness and enabling the wearer to see real-time video and graphics information.

  13. Process of videotape making: presentation design, software, and hardware

    NASA Astrophysics Data System (ADS)

    Dickinson, Robert R.; Brady, Dan R.; Bennison, Tim; Burns, Thomas; Pines, Sheldon

    1991-06-01

The use of technical video tape presentations for communicating abstractions of complex data is now becoming commonplace. While the use of video tapes in the day-to-day work of scientists and engineers is still in its infancy, their use at applications-oriented conferences is now growing rapidly. Despite these advancements, there is still very little that is written down about the process of making technical videotapes. For printed media, different presentation styles are well known for categories such as results reports, executive summary reports, and technical papers and articles. In this paper, the authors present ideas on the topic of technical videotape presentation design in a format worth referring back to. They have started to document the ways in which the experience of media specialists, teaching professionals, and character animators can be applied to scientific animation. Software and hardware considerations are also discussed. For this portion, distinctions are drawn between the software and hardware required for computer animation (frame-at-a-time) productions and live recorded interaction with a computer graphics display.

  14. The use of distributed displays of operating room video when real-time occupancy status was available.

    PubMed

    Xiao, Yan; Dexter, Franklin; Hu, Peter; Dutton, Richard P

    2008-02-01

    On the day of surgery, real-time information of both room occupancy and activities within the operating room (OR) is needed for management of staff, equipment, and unexpected events. A status display system showed color OR video with controllable image quality and showed times that patients entered and exited each OR (obtained automatically). The system was installed and its use was studied in a 6-OR trauma suite and at four locations in a 19-OR tertiary suite. Trauma staff were surveyed for their perceptions of the system. Evidence of staff acceptance of distributed OR video included its operational use for >3 yr in the two suites, with no administrative complaints. Individuals of all job categories used the video. Anesthesiologists were the most frequent users for more than half of the days (95% confidence interval [CI] >50%) in the tertiary ORs. The OR charge nurses accessed the video mostly early in the day when the OR occupancy was high. In comparison (P < 0.001), anesthesiologists accessed it mostly at the end of the workday when occupancy was declining and few cases were starting. Of all 30-min periods during which the video was accessed in the trauma suite, many accesses (95% CI >42%) occurred in periods with no cases starting or ending (i.e., the video was used during the middle of cases). The three stated reasons for using video that had median surveyed responses of "very useful" were "to see if cases are finished," "to see if a room is ready," and "to see when cases are about to finish." Our nurses and physicians both accepted and used distributed OR video as it provided useful information, regardless of whether real-time display of milestones was available (e.g., through anesthesia information system data).

  15. Use of videotape for off-line viewing of computer-assisted radionuclide cardiology studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thrall, J.H.; Pitt, B.; Marx, R.S.

    1978-02-01

    Videotape offers an inexpensive method for off-line viewing of dynamic radionuclide cardiac studies. Two approaches to videotaping have been explored and demonstrated to be feasible. In the first, a video camera in conjunction with a cassette-type recorder is used to record from the computer display scope. Alternatively, for computer systems already linked to video display units, the video signal can be routed directly to the recorder. Acceptance and use of tracer cardiology studies will be enhanced by increased availability of the studies for clinical review. Videotape offers an inexpensive flexible means of achieving this.

  16. Optical Head-Mounted Computer Display for Education, Research, and Documentation in Hand Surgery.

    PubMed

    Funk, Shawn; Lee, Donald H

    2016-01-01

    Intraoperative photography and capturing videos is important for the hand surgeon. Recently, optical head-mounted computer display has been introduced as a means of capturing photographs and videos. In this article, we discuss this new technology and review its potential use in hand surgery. Copyright © 2016 American Society for Surgery of the Hand. Published by Elsevier Inc. All rights reserved.

  17. Flat-panel display solutions for ground-environment military displays (Invited Paper)

    NASA Astrophysics Data System (ADS)

    Thomas, J., II; Roach, R.

    2005-05-01

    Displays for military vehicles have very distinct operational and cost requirements that differ from other military applications. These requirements demand that display suppliers to Army and Marine ground-environments provide low cost equipment that is capable of operation across environmental extremes. Inevitably, COTS components form the foundation of these "affordable" display solutions. This paper will outline the major display requirements and review the options that satisfy conflicting and difficult operational demands, using newly developed equipment as an example. Recently, a new supplier was selected for the Drivers Vision Enhancer (DVE) equipment, including the Display Control Module (DCM). The paper will outline the DVE and describe development of a new DCM solution. The DVE programme, with several thousand units presently in service and operational in conflicts such as "Operation Iraqi Freedom", represents a critical balance between cost and performance. We shall describe design considerations that include selection of COTS sources, the need to minimise display modification; video interfaces, power interfaces, operator interfaces and new provisions to optimise displayed video content.

  18. Creating cinematic wide gamut HDR-video for the evaluation of tone mapping operators and HDR-displays

    NASA Astrophysics Data System (ADS)

    Froehlich, Jan; Grandinetti, Stefan; Eberhardt, Bernd; Walter, Simon; Schilling, Andreas; Brendel, Harald

    2014-03-01

    High quality video sequences are required for the evaluation of tone mapping operators and high dynamic range (HDR) displays. We provide scenic and documentary scenes with a dynamic range of up to 18 stops. The scenes are staged using professional film lighting, make-up and set design to enable the evaluation of image and material appearance. To address challenges for HDR-displays and temporal tone mapping operators, the sequences include highlights entering and leaving the image, brightness changing over time, high contrast skin tones, specular highlights and bright, saturated colors. HDR-capture is carried out using two cameras mounted on a mirror-rig. To achieve a cinematic depth of field, digital motion picture cameras with Super-35mm size sensors are used. We provide HDR-video sequences to serve as a common ground for the evaluation of temporal tone mapping operators and HDR-displays. They are available to the scientific community for further research.

  19. Apparatus for monitoring crystal growth

    DOEpatents

    Sachs, Emanual M.

    1981-01-01

A system and method are disclosed for monitoring the growth of a crystalline body from a liquid meniscus in a furnace. The system provides an improved human/machine interface so as to reduce operator stress, strain and fatigue while improving the conditions for observation and control of the growing process. The system comprises suitable optics for forming an image of the meniscus and body wherein the image is anamorphic so that the entire meniscus can be viewed with good resolution in both the width and height dimensions. The system also comprises a video display for displaying the anamorphic image. The video display includes means for enhancing the contrast between any two contrasting points in the image. The video display also comprises a signal averager for averaging the intensity of at least one preselected portion of the image. The value of the average intensity can in turn be utilized to control the growth of the body. The system and method are also capable of observing and monitoring multiple processes.
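
The signal averager's role, averaging intensity over a preselected portion of the image, can be sketched as a region-of-interest mean; the 2-D list representation and slice parameters are illustrative assumptions, not the patent's implementation:

```python
def roi_mean_intensity(image, rows, cols):
    """Average pixel intensity over one preselected portion of the
    meniscus image -- the role of the patent's 'signal averager'.
    image is a grayscale 2-D list; rows and cols are (start, stop)
    index pairs selecting the region of interest."""
    r0, r1 = rows
    c0, c1 = cols
    region = [image[r][c] for r in range(r0, r1) for c in range(c0, c1)]
    return sum(region) / len(region)
```

A control loop could then compare this average against a setpoint to adjust growth parameters, which is the use the patent describes for the averaged value.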

  20. Method of monitoring crystal growth

    DOEpatents

    Sachs, Emanual M.

    1982-01-01

A system and method are disclosed for monitoring the growth of a crystalline body from a liquid meniscus in a furnace. The system provides an improved human/machine interface so as to reduce operator stress, strain and fatigue while improving the conditions for observation and control of the growing process. The system comprises suitable optics for forming an image of the meniscus and body wherein the image is anamorphic so that the entire meniscus can be viewed with good resolution in both the width and height dimensions. The system also comprises a video display for displaying the anamorphic image. The video display includes means for enhancing the contrast between any two contrasting points in the image. The video display also comprises a signal averager for averaging the intensity of at least one preselected portion of the image. The value of the average intensity can in turn be utilized to control the growth of the body. The system and method are also capable of observing and monitoring multiple processes.

  1. Advantages and difficulties of implementation of flat-panel multimedia monitoring system in a surgical MRI suite

    NASA Astrophysics Data System (ADS)

    Deckard, Michael; Ratib, Osman M.; Rubino, Gregory

    2002-05-01

Our project was to design and implement a ceiling-mounted, multi-monitor display unit for use in a high-field MRI surgical suite. The system is designed to simultaneously display images/data from four different digital and/or analog sources with: minimal interference from the adjacent high magnetic field, minimal signal-to-noise/artifact contribution to the MRI images and compliance with codes and regulations for the sterile neuro-surgical environment. Provisions were also made to accommodate the importing and exporting of video information via PACS and remote processing/display for clinical and education uses. Commercial fiber optic receivers/transmitters were implemented along with supporting video processing and distribution equipment to solve the video communication problem. A new generation of high-resolution color flat panel displays was selected for the project. A custom-made monitor mount and in-suite electronics enclosure was designed and constructed at UCLA. Difficulties with implementing an isolated AC power system are discussed and a work-around solution is presented.

  2. Storing Data and Video on One Tape

    NASA Technical Reports Server (NTRS)

    Nixon, J. H.; Cater, J. P.

    1985-01-01

    Microprocessor-based system originally developed for anthropometric research merges digital data with video images for storage on video cassette recorder. Combined signals later retrieved and displayed simultaneously on television monitor. System also extracts digital portion of stored information and transfers it to solid-state memory.

  3. 47 CFR 79.107 - User interfaces provided by digital apparatus.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... SERVICES ACCESSIBILITY OF VIDEO PROGRAMMING Apparatus § 79.107 User interfaces provided by digital... States and designed to receive or play back video programming transmitted in digital format simultaneously with sound, including apparatus designed to receive or display video programming transmitted in...

  4. 47 CFR 79.103 - Closed caption decoder requirements for apparatus.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... RADIO SERVICES ACCESSIBILITY OF VIDEO PROGRAMMING Apparatus § 79.103 Closed caption decoder requirements... video programming transmitted simultaneously with sound, if such apparatus is manufactured in the United... with built-in closed caption decoder circuitry or capability designed to display closed-captioned video...

  5. Patterned Video Sensors For Low Vision

    NASA Technical Reports Server (NTRS)

    Juday, Richard D.

    1996-01-01

Miniature video cameras containing photoreceptors arranged in prescribed non-Cartesian patterns to compensate partly for some visual defects are proposed. Cameras, accompanied by (and possibly integrated with) miniature head-mounted video display units, restore some visual function in humans whose visual fields are reduced by defects like retinitis pigmentosa.

  6. Spatiotemporal video deinterlacing using control grid interpolation

    NASA Astrophysics Data System (ADS)

    Venkatesan, Ragav; Zwart, Christine M.; Frakes, David H.; Li, Baoxin

    2015-03-01

    With the advent of progressive format display and broadcast technologies, video deinterlacing has become an important video-processing technique. Numerous approaches exist in the literature to accomplish deinterlacing. While most earlier methods were simple linear filtering-based approaches, the emergence of faster computing technologies and even dedicated video-processing hardware in display units has allowed higher quality but also more computationally intense deinterlacing algorithms to become practical. Most modern approaches analyze motion and content in video to select different deinterlacing methods for various spatiotemporal regions. We introduce a family of deinterlacers that employs spectral residue to choose between and weight control grid interpolation based spatial and temporal deinterlacing methods. The proposed approaches perform better than the prior state-of-the-art based on peak signal-to-noise ratio, other visual quality metrics, and simple perception-based subjective evaluations conducted by human viewers. We further study the advantages of using soft and hard decision thresholds on the visual performance.
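
The general spatial/temporal blending strategy the abstract describes can be sketched in a few lines; the field layout, the simple line-average spatial estimate, and the fixed blend weight are assumptions for illustration, not the authors' control-grid interpolation method:

```python
def deinterlace_top_field(field, prev_field, w):
    """Blend spatial (line-averaged 'bob') and temporal ('weave' from
    the previous opposite field) estimates of the missing scan lines.

    field      -- list of rows holding even lines 0, 2, 4, ... of the frame
    prev_field -- list of rows holding odd lines 1, 3, 5, ... from the
                  prior field (same length as field)
    w          -- weight of the spatial estimate (0.0 = pure weave,
                  1.0 = pure bob); a fixed scalar here for simplicity
    """
    h = 2 * len(field)
    frame = [None] * h
    for i, row in enumerate(field):           # copy the lines we have
        frame[2 * i] = list(row)
    for i in range(len(prev_field)):          # synthesize the missing lines
        above = field[i]
        below = field[i + 1] if i + 1 < len(field) else field[i]
        spatial = [(a + b) / 2 for a, b in zip(above, below)]
        temporal = prev_field[i]
        frame[2 * i + 1] = [w * s + (1 - w) * t
                            for s, t in zip(spatial, temporal)]
    return frame
```

Modern deinterlacers choose w per region from motion and content analysis; the paper's contribution is using spectral residue to drive that choice, which this sketch does not attempt.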

  7. XVD Image Display Program

    NASA Technical Reports Server (NTRS)

    Deen, Robert G.; Andres, Paul M.; Mortensen, Helen B.; Parizher, Vadim; McAuley, Myche; Bartholomew, Paul

    2009-01-01

    The XVD [X-Windows VICAR (video image communication and retrieval) Display] computer program offers an interactive display of VICAR and PDS (planetary data systems) images. It is designed to efficiently display multiple-GB images and runs on Solaris, Linux, or Mac OS X systems using X-Windows.

  8. Attracting STEM talent: do STEM students prefer traditional or work/life-interaction labs?

    PubMed

    DeFraine, William C; Williams, Wendy M; Ceci, Stephen J

    2014-01-01

    The demand for employees trained in science, technology, engineering, and mathematics (STEM) fields continues to increase, yet the number of Millennial students pursuing STEM is not keeping pace. We evaluated whether this shortfall is associated with Millennials' preference for flexibility and work/life-interaction in their careers, a preference that may be inconsistent with the traditional idea of a science career endorsed by many lab directors. Two contrasting approaches to running STEM labs and training students were explored, and we created a lab recruitment video depicting each. The work-focused video emphasized the traditional notions of a science lab, characterized by long work hours and a focus on individual achievement and conducting research above all else. In contrast, the work/life-interaction-focused video emphasized a more progressive view - lack of demarcation between work and non-work lives, flexible hours, and group achievement. In Study 1, 40 professors rated the videos, and the results confirmed that the two lab types reflected meaningful real-world differences in training approaches. In Study 2, we recruited 53 current and prospective graduate students in STEM fields who displayed high math-identification and a commitment to science careers. In a between-subjects design, they watched one of the two lab-recruitment videos, and then reported their anticipated sense of belonging to and desire to participate in the lab depicted in the video. Very large effects were observed on both primary measures: Participants who watched the work/life-interaction-focused video reported a greater sense of belonging to (d = 1.49) and desire to participate in (d = 1.33) the lab, relative to participants who watched the work-focused video. These results suggest Millennials possess a strong desire for work/life-interaction, which runs counter to the traditional lab-training model endorsed by many lab directors. We discuss implications of these findings for STEM recruitment.

  9. Attracting STEM Talent: Do STEM Students Prefer Traditional or Work/Life-Interaction Labs?

    PubMed Central

    DeFraine, William C.; Williams, Wendy M.; Ceci, Stephen J.

    2014-01-01

    The demand for employees trained in science, technology, engineering, and mathematics (STEM) fields continues to increase, yet the number of Millennial students pursuing STEM is not keeping pace. We evaluated whether this shortfall is associated with Millennials' preference for flexibility and work/life-interaction in their careers, a preference that may be inconsistent with the traditional idea of a science career endorsed by many lab directors. Two contrasting approaches to running STEM labs and training students were explored, and we created a lab recruitment video depicting each. The work-focused video emphasized the traditional notions of a science lab, characterized by long work hours and a focus on individual achievement and conducting research above all else. In contrast, the work/life-interaction-focused video emphasized a more progressive view – lack of demarcation between work and non-work lives, flexible hours, and group achievement. In Study 1, 40 professors rated the videos, and the results confirmed that the two lab types reflected meaningful real-world differences in training approaches. In Study 2, we recruited 53 current and prospective graduate students in STEM fields who displayed high math-identification and a commitment to science careers. In a between-subjects design, they watched one of the two lab-recruitment videos, and then reported their anticipated sense of belonging to and desire to participate in the lab depicted in the video. Very large effects were observed on both primary measures: Participants who watched the work/life-interaction-focused video reported a greater sense of belonging to (d = 1.49) and desire to participate in (d = 1.33) the lab, relative to participants who watched the work-focused video. These results suggest Millennials possess a strong desire for work/life-interaction, which runs counter to the traditional lab-training model endorsed by many lab directors. We discuss implications of these findings for STEM recruitment. PMID:24587044

  10. Efficient stereoscopic contents file format on the basis of ISO base media file format

    NASA Astrophysics Data System (ADS)

    Kim, Kyuheon; Lee, Jangwon; Suh, Doug Young; Park, Gwang Hoon

    2009-02-01

    Many 3D contents have been widely used for multimedia services; however, real 3D video contents have been adopted only for limited applications such as specially designed 3D cinemas. This is because of the difficulty of capturing real 3D video contents and the limitations of the display devices available on the market. Recently, however, diverse types of display devices for stereoscopic video contents have been released. In particular, a mobile phone with a stereoscopic camera has been released, which allows a user, as a consumer, to have more realistic experiences without glasses and, as a content creator, to take stereoscopic images or record stereoscopic video contents. However, a user can only store and display these acquired stereoscopic contents on his/her own devices, because no common file format exists for them. This limitation keeps users from sharing their contents with other users, which makes it difficult for the market for stereoscopic contents to expand. Therefore, this paper proposes a common file format, based on the ISO base media file format, for stereoscopic contents, which enables users to store and exchange pure stereoscopic contents. This technology is also currently under development as an international standard of MPEG, called the stereoscopic video application format.
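
    The ISO base media file format that the proposal extends stores everything as length-prefixed boxes: a 4-byte big-endian size (which includes the 8-byte header) followed by a 4-byte type code, then the payload. A minimal serialization sketch; the stereoscopic brand string below is a made-up placeholder, not the brand MPEG standardized:

```python
import struct

def make_box(box_type: bytes, payload: bytes) -> bytes:
    """Serialize one ISO base media file format box:
    4-byte big-endian size (header included) + 4-byte type + payload."""
    assert len(box_type) == 4
    return struct.pack(">I", 8 + len(payload)) + box_type + payload

# An 'ftyp' box: major brand, minor version, compatible brands.
# "ss01" is a hypothetical stereoscopic brand used only for illustration.
ftyp = make_box(b"ftyp", b"ss01" + struct.pack(">I", 0) + b"isom")
```

    Because every box carries its own size, a parser that does not recognize a stereoscopic extension box can simply skip it, which is what makes the format a workable basis for content exchange.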

  11. From Video to Photo

    NASA Technical Reports Server (NTRS)

    2004-01-01

    Ever wonder whether a still shot from a home video could serve as a "picture perfect" photograph worthy of being framed and proudly displayed on the mantle? Wonder no more. A critical imaging code used to enhance video footage taken from spaceborne imaging instruments is now available within a portable photography tool capable of producing an optimized, high-resolution image from multiple video frames.

  12. 25 CFR 542.43 - What are the minimum internal control standards for surveillance for a Tier C gaming operation?

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... recordings and/or digital records shall be provided to the Commission upon request. (x) Video library log. A... events on video and/or digital recordings. The displayed date and time shall not significantly obstruct... each gaming machine change booth. (w) Video recording and/or digital record retention. (1) All video...

  13. 25 CFR 542.43 - What are the minimum internal control standards for surveillance for a Tier C gaming operation?

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... recordings and/or digital records shall be provided to the Commission upon request. (x) Video library log. A... events on video and/or digital recordings. The displayed date and time shall not significantly obstruct... each gaming machine change booth. (w) Video recording and/or digital record retention. (1) All video...

  14. 25 CFR 542.43 - What are the minimum internal control standards for surveillance for a Tier C gaming operation?

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... recordings and/or digital records shall be provided to the Commission upon request. (x) Video library log. A... events on video and/or digital recordings. The displayed date and time shall not significantly obstruct... each gaming machine change booth. (w) Video recording and/or digital record retention. (1) All video...

  15. Effectiveness of Immersive Videos in Inducing Awe: An Experimental Study.

    PubMed

    Chirico, Alice; Cipresso, Pietro; Yaden, David B; Biassoni, Federica; Riva, Giuseppe; Gaggioli, Andrea

    2017-04-27

    Awe, a complex emotion composed of the appraisal components of vastness and need for accommodation, is a profound and often meaningful experience. Despite its importance, psychologists have only recently begun the empirical study of awe. At the experimental level, a main issue concerns how to elicit high-intensity awe experiences in the lab. To address this issue, Virtual Reality (VR) has been proposed as a potential solution. Here, we considered the most realistic form of VR: immersive videos. Forty-two participants watched immersive and normal 2D videos displaying awe-inducing or neutral content. After the experience, they rated their level of awe and sense of presence. Participants' psychophysiological responses (BVP, SC, sEMG) were recorded during the whole video exposure. We hypothesized that the immersive video condition would increase the intensity of awe experienced compared to 2D screen videos. Results indicated that immersive videos significantly enhanced the self-reported intensity of awe as well as the sense of presence. Immersive videos displaying awe-inducing content also led to higher parasympathetic activation. These findings indicate the advantages of using VR in the experimental study of awe, with methodological implications for the study of other emotions.

  16. 77 FR 75617 - 36(b)(1) Arms Sales Notification

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-12-21

    ... transmittal, policy justification, and Sensitivity of Technology. Dated: December 18, 2012. Aaron Siegel... Processor Cabinets, 2 Video Wall Screen and Projector Systems, 46 Flat Panel Displays, and 2 Distributed Video Systems), 2 ship sets AN/SPQ-15 Digital Video Distribution Systems, 2 ship sets Operational...

  17. Real-time rendering for multiview autostereoscopic displays

    NASA Astrophysics Data System (ADS)

    Berretty, R.-P. M.; Peters, F. J.; Volleberg, G. T. G.

    2006-02-01

    In video systems, the introduction of 3D video might be the next revolution after the introduction of color. Multiview autostereoscopic displays are now in development. Such displays offer various views at the same time, and the image content observed by the viewer depends upon his position with respect to the screen. His left eye receives a signal that is different from what his right eye gets; this gives, provided the signals have been properly processed, the impression of depth. The various views produced on the display differ with respect to their associated camera positions. A possible video format suited for rendering from different camera positions is the usual 2D format enriched with a depth-related channel, e.g., for each pixel in the video not only its color is given, but also its distance to a camera. In this paper we provide a theoretical framework for the parallactic transformations, which relates captured and observed depths to screen and image disparities. Moreover, we present an efficient real-time rendering algorithm that uses forward mapping to reduce aliasing artefacts and that deals properly with occlusions. For improved perceived resolution, we take the relative positions of the color subpixels and the optics of the lenticular screen into account. Sophisticated filtering techniques result in high-quality images.
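
    The paper's full parallactic-transformation framework is not reproduced in the abstract, but the textbook relation behind it is easy to state: for eye separation e, a viewer at distance D from the screen, and a point at distance z from the viewer, screen parallax is p = e(z - D)/z, so points on the screen plane get zero parallax. A sketch (this is the standard relation, not necessarily the authors' exact formulation):

```python
def screen_parallax(z, viewing_distance, eye_sep=0.065):
    """Screen parallax (same units as eye_sep) for a point at distance z
    from the viewer, with the screen at `viewing_distance`.
    Positive (uncrossed) parallax: point appears behind the screen;
    negative (crossed) parallax: point appears in front of it."""
    return eye_sep * (z - viewing_distance) / z
```

    A renderer maps this parallax to a per-pixel image disparity between adjacent views, which is where the forward mapping and occlusion handling described above come in.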

  18. Objective video presentation QoE predictor for smart adaptive video streaming

    NASA Astrophysics Data System (ADS)

    Wang, Zhou; Zeng, Kai; Rehman, Abdul; Yeganeh, Hojatollah; Wang, Shiqi

    2015-09-01

    How to deliver videos to consumers over the network for optimal quality-of-experience (QoE) has been the central goal of modern video delivery services. Surprisingly, despite the large volume of video delivered every day through various systems attempting to improve visual QoE, the actual QoE of end consumers is not properly assessed, let alone used as the key factor in making critical decisions at the video hosting, network, and receiving sites. Real-world video streaming systems typically use bitrate as the main video presentation quality indicator, but using the same bitrate to encode different video content can result in drastically different visual QoE, which is further affected by the display device and viewing condition of each individual consumer who receives the video. To correct this, we have to put QoE back in the driver's seat and redesign video delivery systems. To achieve this goal, a major challenge is to find an objective video presentation QoE predictor that is accurate, fast, easy to use, display-device adaptive, and provides meaningful QoE predictions across resolutions and content. We propose to use the newly developed SSIMplus index (https://ece.uwaterloo.ca/~z70wang/research/ssimplus/) for this role. We demonstrate that, based on SSIMplus, one can develop a smart adaptive video streaming strategy that leads to much smoother visual QoE than is possible with existing adaptive bitrate video streaming approaches. Furthermore, SSIMplus finds many more applications: in live and file-based quality monitoring, in benchmarking video encoders and transcoders, and in guiding network resource allocation.
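
    SSIMplus itself cannot be reimplemented from an abstract, but the decision logic of quality-driven (rather than bitrate-driven) adaptive streaming can be sketched. Here `quality` stands in for any 0-100 perceptual score such as a SSIMplus prediction; the ladder, threshold, and function name are illustrative:

```python
def pick_rendition(ladder, bandwidth_kbps, min_quality=80):
    """Quality-driven rendition choice.
    ladder: list of (bitrate_kbps, quality) pairs for one title.
    Pick the cheapest affordable rung whose predicted quality clears the
    target; fall back to the best affordable rung if none does."""
    affordable = [r for r in ladder if r[0] <= bandwidth_kbps]
    if not affordable:
        return min(ladder)                          # last resort: cheapest rung
    good = [r for r in affordable if r[1] >= min_quality]
    return min(good) if good else max(affordable, key=lambda r: r[1])
```

    The contrast with pure bitrate-driven adaptation is that two titles with identical ladders can legitimately stream at different bitrates when their content yields different predicted quality at the same rate.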

  19. The effects of "thin ideal" media on women's body image concerns and eating-related intentions: the beneficial role of an autonomous regulation of eating behaviors.

    PubMed

    Mask, Lisa; Blanchard, Céline M

    2011-09-01

    The present study examines the protective role of an autonomous regulation of eating behaviors (AREB) on the relationship between trait body dissatisfaction and women's body image concerns and eating-related intentions in response to "thin ideal" media. Undergraduate women (n=138) were randomly assigned to view a "thin ideal" video or a neutral video. As hypothesized, trait body dissatisfaction predicted more negative affect and size dissatisfaction following exposure to the "thin ideal" video among women who displayed less AREB. Conversely, trait body dissatisfaction predicted greater intentions to monitor food intake and limit unhealthy foods following exposure to the "thin ideal" video among women who displayed more AREB. Copyright © 2011 Elsevier Ltd. All rights reserved.

  20. First Use of Heads-up Display for Astronomy Education

    NASA Astrophysics Data System (ADS)

    Mumford, Holly; Hintz, E. G.; Jones, M.; Lawler, J.; Fisler, A.

    2013-01-01

    As part of our work on deaf education in a planetarium environment, we are exploring the use of heads-up display systems. These allow us to overlay an ASL interpreter on our educational videos. The overall goal is to allow a student to watch a full-dome planetarium show and have the interpreter track to any portion of the video. We will present the first results of using a heads-up display to provide an ASL ‘sound-track’ for a deaf audience. This work is partially funded by NSF grant IIS-1124548 and funding from the Sorenson Foundation.

  1. Blade counting tool with a 3D borescope for turbine applications

    NASA Astrophysics Data System (ADS)

    Harding, Kevin G.; Gu, Jiajun; Tao, Li; Song, Guiju; Han, Jie

    2014-07-01

    Video borescopes are widely used for turbine and aviation engine inspection to verify the health of blades and prevent blade failure during operation. When the moving components of a turbine engine are inspected with a video borescope, the operator must view every blade in a given stage. The blade counting tool is video interpretation software that runs in the background during inspection. It identifies moving turbine blades in a video stream and tracks and counts the blades as they move across the screen. This approach includes blade detection, to identify blades in different inspection scenarios, and blade tracking, to perceive blade movement even in hand-turned engine inspections. The software is able to label each blade by comparing counting results to a known blade count for the engine type and stage. On-screen indications show the borescope user the label for each blade and how many blades have been viewed as the turbine is rotated.
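
    The abstract does not give the detector itself, so the sketch below substitutes a generic hysteresis counter over a per-frame edge-strength signal: a blade is counted each time the signal rises above a high threshold after having dropped below a low one, which tolerates the jitter of hand-turned inspections. Thresholds and names are illustrative:

```python
def count_blades(edge_signal, hi=0.6, lo=0.4):
    """Count blade passages in a per-frame edge-strength signal using
    hysteresis thresholding. The two thresholds prevent one noisy blade
    edge from being counted twice."""
    count, armed = 0, True
    for v in edge_signal:
        if armed and v > hi:
            count += 1
            armed = False          # ignore further peaks from the same blade
        elif v < lo:
            armed = True           # signal dropped: re-arm for the next blade
    return count
```

    Comparing the running count against the known blade count for the stage, as the software above does, lets the tool label each blade and flag a completed revolution.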

  2. Video-Based Big Data Analytics in Cyberlearning

    ERIC Educational Resources Information Center

    Wang, Shuangbao; Kelly, William

    2017-01-01

    In this paper, we present a novel system, inVideo, for video data analytics, and its use in transforming linear videos into interactive learning objects. InVideo is able to analyze video content automatically without the need for initial viewing by a human. Using a highly efficient video indexing engine we developed, the system is able to analyze…

  3. Video image stabilization and registration--plus

    NASA Technical Reports Server (NTRS)

    Hathaway, David H. (Inventor)

    2009-01-01

    A method of stabilizing a video image displayed in multiple video fields of a video sequence includes the steps of: subdividing a selected area of a first video field into nested pixel blocks; determining horizontal and vertical translation of each of the pixel blocks in each of the pixel block subdivision levels from the first video field to a second video field; and determining translation of the image from the first video field to the second video field by determining a change in magnification of the image from the first video field to the second video field in each of horizontal and vertical directions, and determining shear of the image from the first video field to the second video field in each of the horizontal and vertical directions.
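
    The patent's nested-block scheme refines translation estimates level by level; the basic step at any one level is ordinary block matching. A minimal sketch using mean absolute difference over a small search window (a textbook formulation, simplified from the patent; it omits the magnification and shear terms described above):

```python
def block_translation(ref, cur, search=2):
    """Estimate the (dy, dx) shift that maps `ref` onto `cur` by minimizing
    the mean absolute difference over a small search window.
    ref, cur: equal-sized 2D lists of pixel intensities."""
    h, w = len(ref), len(ref[0])
    best, best_score = (0, 0), float("inf")
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            sad = n = 0
            # Compare only the region where both frames overlap at this shift.
            for y in range(max(0, -dy), min(h, h - dy)):
                for x in range(max(0, -dx), min(w, w - dx)):
                    sad += abs(ref[y][x] - cur[y + dy][x + dx])
                    n += 1
            if n and sad / n < best_score:
                best, best_score = (dy, dx), sad / n
    return best
```

    Normalizing by the overlap size keeps large shifts, which compare fewer pixels, from winning spuriously; the nested subdivision in the patent then repeats this estimate on smaller and smaller blocks.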

  4. Obstacles encountered in the development of the low vision enhancement system.

    PubMed

    Massof, R W; Rickman, D L

    1992-01-01

    The Johns Hopkins Wilmer Eye Institute and the NASA Stennis Space Center are collaborating on the development of a new high technology low vision aid called the Low Vision Enhancement System (LVES). The LVES consists of a binocular head-mounted video display system, video cameras mounted on the head-mounted display, and real-time video image processing in a system package that is battery powered and portable. Through a phased development approach, several generations of the LVES can be made available to the patient in a timely fashion. This paper describes the LVES project with major emphasis on technical problems encountered or anticipated during the development process.

  5. An Evaluation of Streaming Digital Video Resources in On- and Off-Campus Engineering Management Education

    ERIC Educational Resources Information Center

    Palmer, Stuart

    2007-01-01

    A recent television documentary on the Columbia space shuttle disaster was converted to streaming digital video format for educational use by on- and off-campus students in an engineering management study unit examining issues in professional engineering ethics. An evaluation was conducted to assess the effectiveness of this new resource. Use of…

  6. Design of video processing and testing system based on DSP and FPGA

    NASA Astrophysics Data System (ADS)

    Xu, Hong; Lv, Jun; Chen, Xi'ai; Gong, Xuexia; Yang, Chen'na

    2007-12-01

    A miniaturized, low-power video capture, processing, and display system based on a high-speed Digital Signal Processor (DSP) and a Field Programmable Gate Array (FPGA) is presented. In this system, a triple-buffering scheme is used for capture and display, so that the application can always get a new buffer without waiting. The DSP provides image-processing capability and can be used to detect the boundary of a workpiece's image. A video graduation (on-screen scale) technique is used to aim at the position to be tested, which also enhances the system's flexibility. A character-superposition function implemented on the DSP displays the test result on the screen in character format. The system can process image information in real time, ensure test precision, and help to improve product quality and quality management.
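
    The triple-buffering scheme mentioned above can be illustrated in a few lines: the capturer and the display each own one buffer, and a third "ready" buffer is swapped between them, so the capturer never waits. A single-threaded sketch (the real system would guard the swaps with hardware or interrupt-level synchronization; class and method names are illustrative):

```python
class TripleBuffer:
    """Triple buffering: writer and reader each hold a buffer, a third
    'ready' slot carries the newest completed frame between them."""
    def __init__(self):
        self.buffers = [None, None, None]
        self.write_idx, self.ready_idx, self.read_idx = 0, 1, 2
        self.fresh = False          # True when 'ready' holds an unseen frame

    def capture(self, frame):
        self.buffers[self.write_idx] = frame
        # Swap the finished buffer into the 'ready' slot; never blocks.
        self.write_idx, self.ready_idx = self.ready_idx, self.write_idx
        self.fresh = True

    def display(self):
        if self.fresh:
            # Take the newest completed frame; stale frames are skipped.
            self.read_idx, self.ready_idx = self.ready_idx, self.read_idx
            self.fresh = False
        return self.buffers[self.read_idx]
```

    If capture outpaces display, intermediate frames are silently dropped rather than queued, which is exactly the behavior a real-time display path wants.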

  7. Split image optical display

    DOEpatents

    Veligdan, James T.

    2005-05-31

    A video image is displayed from an optical panel by splitting the image into a plurality of image components, and then projecting the image components through corresponding portions of the panel to collectively form the image. Depth of the display is correspondingly reduced.

  8. Split image optical display

    DOEpatents

    Veligdan, James T [Manorville, NY

    2007-05-29

    A video image is displayed from an optical panel by splitting the image into a plurality of image components, and then projecting the image components through corresponding portions of the panel to collectively form the image. Depth of the display is correspondingly reduced.

  9. A blended learning concept for an engineering course in the field of color representation and display technologies

    NASA Astrophysics Data System (ADS)

    Vauderwange, Oliver; Wozniak, Peter; Javahiraly, Nicolas; Curticapean, Dan

    2016-09-01

    The paper presents the design and development of a blended learning concept for an engineering course in the field of color representation and display technologies. A suitable learning environment is crucial to the success of the teaching scenario. The main topic of the paper is a mixture of theoretical lectures and hands-on activities with practical applications and experiments, combined with the advantages of modern digital media. Blended learning describes the didactic alternation of attendance periods and online periods. The e-learning environment for the online period is designed for easy access and interaction. Modern digital media extend the established teaching scenarios and enable the presentation of videos, animations, and augmented reality (AR). Visualizations are effective tools for imparting learning content with lasting effect. The preparation and evaluation of the theoretical lectures and the hands-on activities are stimulated, which positively affects the attendance periods. The tasks and experiments require the students to work independently and to develop individual solution strategies; this engages and motivates the students and deepens their knowledge. The authors present their experience with the blended learning scenario implemented in this field of optics and photonics. All aspects of the learning environment are introduced.

  10. Mesoscale and severe storms (Mass) data management and analysis system

    NASA Technical Reports Server (NTRS)

    Hickey, J. S.; Karitani, S.; Dickerson, M.

    1984-01-01

    Progress on the Mesoscale and Severe Storms (MASS) data management and analysis system is described. An interactive atmospheric database management software package that converts four types of data (sounding, single-level, grid, and image) into standard random-access formats is implemented and integrated with the MASS AVE80 Series general-purpose plotting and graphics display data analysis software package. An interactive analysis and display graphics software package (AVE80) for analyzing large volumes of conventional and satellite-derived meteorological data is enhanced to provide imaging and color graphics display, utilizing color video hardware integrated into the MASS computer system. Local and remote smart-terminal capability is provided by installing Apple III computer systems in individual scientists' offices and integrating them with the MASS system, thus providing color video display, graphics, and character display of the four data types.

  11. Multilocation Video Conference By Optical Fiber

    NASA Astrophysics Data System (ADS)

    Gray, Donald J.

    1982-10-01

    An experimental system that permits interconnection of many offices in a single video conference is described. Video images transmitted to conference participants are selected by the conference chairman and switched by a microprocessor-controlled video switch. Speakers can, at their choice, transmit their own images or images of graphics they wish to display. Users are connected to the Switching Center by optical fiber subscriber loops that carry analog video, digitized telephone, data and signaling. The same system also provides user-selectable distribution of video program and video library material. Experience in the operation of the conference system is discussed.

  12. ARINC 818 specification revisions enable new avionics architectures

    NASA Astrophysics Data System (ADS)

    Grunwald, Paul

    2014-06-01

    The ARINC 818 Avionics Digital Video Bus is the standard for cockpit video that has gained wide acceptance in both commercial and military cockpits. The Boeing 787, A350XWB, A400M, KC-46A, and many other aircraft use it. The ARINC 818 specification, initially released in 2006, has recently undergone a major update to address new avionics architectures and capabilities. Over the seven years since its release, projects have gone beyond the specification because of the complexity of new architectures and desired capabilities, such as video switching, bi-directional communication, data-only paths, and camera and sensor control provisions. The ARINC 818 specification was revised in 2013, and ARINC 818-2 was approved in November 2013. The revisions in ARINC 818-2 enable switching, stereo and 3-D provisions, color-sequential implementations, regions of interest, bi-directional communication, higher link rates, data-only transmission, and synchronization signals. This paper discusses each of the new capabilities and their impact on avionics and display architectures, especially when integrating large-area displays, stereoscopic displays, multiple displays, and systems that include a large number of sensors.

  13. Design and testing of artifact-suppressed adaptive histogram equalization: a contrast-enhancement technique for display of digital chest radiographs.

    PubMed

    Rehm, K; Seeley, G W; Dallas, W J; Ovitt, T W; Seeger, J F

    1990-01-01

    One of the goals of our research in the field of digital radiography has been to develop contrast-enhancement algorithms for eventual use in the display of chest images on video devices with the aim of preserving the diagnostic information presently available with film, some of which would normally be lost because of the smaller dynamic range of video monitors. The ASAHE algorithm discussed in this article has been tested by investigating observer performance in a difficult detection task involving phantoms and simulated lung nodules, using film as the output medium. The results of the experiment showed that the algorithm is successful in providing contrast-enhanced, natural-looking chest images while maintaining diagnostic information. The algorithm did not effect an increase in nodule detectability, but this was not unexpected because film is a medium capable of displaying a wide range of gray levels. It is sufficient at this stage to show that there is no degradation in observer performance. Future tests will evaluate the performance of the ASAHE algorithm in preparing chest images for video display.
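
    ASAHE itself is not specified in the abstract; the sketch below shows only the generic contrast-limited idea it builds on: clip the histogram before equalizing, so that near-uniform regions cannot be over-amplified into artifacts. The clip fraction and names are illustrative:

```python
def clipped_equalize(img, levels=256, clip=0.05):
    """Contrast-limited histogram equalization on a 2D grayscale image
    (a generic CLAHE-style step illustrating artifact suppression,
    not the ASAHE algorithm itself)."""
    flat = [p for row in img for p in row]
    n = len(flat)
    hist = [0] * levels
    for p in flat:
        hist[p] += 1
    # Clip histogram bins and redistribute the excess uniformly:
    # this bounds the slope of the mapping, limiting local contrast gain.
    limit = max(1, int(clip * n))
    excess = sum(max(0, h - limit) for h in hist)
    hist = [min(h, limit) + excess // levels for h in hist]
    # Cumulative distribution -> intensity mapping.
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total)
    scale = (levels - 1) / cdf[-1]
    return [[round(cdf[p] * scale) for p in row] for row in img]
```

    Without the clipping step, plain equalization would stretch the noise in large flat regions of a chest radiograph; bounding the histogram bins bounds that amplification.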

  14. Increased ISR operator capability utilizing a centralized 360° full motion video display

    NASA Astrophysics Data System (ADS)

    Andryc, K.; Chamberlain, J.; Eagleson, T.; Gottschalk, G.; Kowal, B.; Kuzdeba, P.; LaValley, D.; Myers, E.; Quinn, S.; Rose, M.; Rusiecki, B.

    2012-06-01

    In many situations, the difference between success and failure comes down to taking the right actions quickly. While the myriad electronic sensors available today can provide data quickly, they may overload the operator; only a contextualized, centralized display of information and an intuitive human interface can support the quick and effective decisions needed. If these decisions are to result in quick actions, then the operator must be able to understand all of the data describing his environment. In this paper we present a novel approach to contextualizing multi-sensor data onto a full-motion-video, real-time, 360-degree imaging display. The system described could function as a primary display system for command and control in security, military, and observation posts. It has the ability to process, and enable interactive control of, multiple other sensor systems. It enhances the value of these other sensors by overlaying their information on a panorama of the surroundings. It can also be used to interface to other systems, including auxiliary electro-optical systems, aerial video, contact management, Hostile Fire Indicators (HFI), and Remote Weapon Stations (RWS).

  15. Interactive Video in Training. Computers in Personnel--Making Management Profitable.

    ERIC Educational Resources Information Center

    Copeland, Peter

    Interactive video is achieved by merging the two powerful technologies of microcomputing and video. Using television as the vehicle for display, text and diagrams, filmic images, and sound can be used separately or in combination to achieve a specific training task. An interactive program can check understanding, determine progress, and challenge…

  16. Physiological reactivity to faces via live and video-mediated communication in typical and atypical development.

    PubMed

    Riby, Deborah M; Whittle, Lisa; Doherty-Sneddon, Gwyneth

    2012-01-01

    The human face is a powerful elicitor of emotion, which induces autonomic nervous system responses. In this study, we explored physiological arousal and reactivity to affective facial displays shown in person and through video-mediated communication. We compared measures of physiological arousal and reactivity in typically developing individuals and those with the developmental disorders Williams syndrome (WS) and autism spectrum disorder (ASD). Participants attended to facial displays of happy, sad, and neutral expressions via live and video-mediated communication. Skin conductance level (SCL) indicated that live faces, but not video-mediated faces, increased arousal, especially for typically developing individuals and those with WS. There was less increase of SCL, and physiological reactivity was comparable for live and video-mediated faces in ASD. In typical development and WS, physiological reactivity was greater for live than for video-mediated communication. Individuals with WS showed lower SCL than typically developing individuals, suggesting possible hypoarousal in this group, even though they showed an increase in arousal for faces. The results are discussed in terms of the use of video-mediated communication with typically and atypically developing individuals and atypicalities of physiological arousal across neurodevelopmental disorder groups.

  17. Tactile Cueing for Target Acquisition and Identification

    DTIC Science & Technology

    2005-09-01

    method of coding tactile information, and the method of presenting elevation information were studied. Results: Subjects were divided into video game experienced... (VGP) subjects and non-video game (NVGP) experienced subjects. VGPs showed a significantly lower target acquisition time with the 12... that video game players performed better with the highest level of tactile resolution, while non-video game players performed better with a simpler pattern and a lower resolution display.

  18. An Attention-Information-Based Spatial Adaptation Framework for Browsing Videos via Mobile Devices

    NASA Astrophysics Data System (ADS)

    Li, Houqiang; Wang, Yi; Chen, Chang Wen

    2007-12-01

    With the growing popularity of personal digital assistant devices and smart phones, more and more consumers are becoming quite enthusiastic to appreciate videos via mobile devices. However, limited display size of the mobile devices has been imposing significant barriers for users to enjoy browsing high-resolution videos. In this paper, we present an attention-information-based spatial adaptation framework to address this problem. The whole framework includes two major parts: video content generation and video adaptation system. During video compression, the attention information in video sequences will be detected using an attention model and embedded into bitstreams with proposed supplement-enhanced information (SEI) structure. Furthermore, we also develop an innovative scheme to adaptively adjust quantization parameters in order to simultaneously improve the quality of overall encoding and the quality of transcoding the attention areas. When the high-resolution bitstream is transmitted to mobile users, a fast transcoding algorithm we developed earlier will be applied to generate a new bitstream for attention areas in frames. The new low-resolution bitstream containing mostly attention information, instead of the high-resolution one, will be sent to users for display on the mobile devices. Experimental results show that the proposed spatial adaptation scheme is able to improve both subjective and objective video qualities.
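The crop-then-downscale step such an adaptation framework applies to attention areas can be sketched in plain NumPy. This is a minimal sketch with an illustrative attention box, frame size, and target resolution; the paper's actual system embeds attention information in SEI structures and transcodes the compressed bitstream rather than operating on raw frames:

```python
import numpy as np

def adapt_for_mobile(frame: np.ndarray, attention_box, target_w: int, target_h: int) -> np.ndarray:
    """Crop the attention area from a high-resolution frame and
    nearest-neighbour resample it to the mobile display size."""
    x0, y0, x1, y1 = attention_box          # attention area in pixel coordinates
    roi = frame[y0:y1, x0:x1]               # keep only the attention region
    h, w = roi.shape[:2]
    # nearest-neighbour index maps for the target resolution
    rows = np.arange(target_h) * h // target_h
    cols = np.arange(target_w) * w // target_w
    return roi[rows][:, cols]

# hypothetical 1080p luma frame with a bright "attention" object near its centre
frame = np.zeros((1080, 1920), dtype=np.uint8)
frame[400:700, 800:1200] = 200
small = adapt_for_mobile(frame, (760, 360, 1240, 740), 320, 240)
print(small.shape)  # → (240, 320)
```

Sending only this low-resolution attention crop, instead of the full high-resolution frame, is what lets the mobile device display the region of interest at full screen size.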

  19. Design and management of public health outreach using interoperable mobile multimedia: an analysis of a national winter weather preparedness campaign.

    PubMed

    Bandera, Cesar

    2016-05-25

The Office of Public Health Preparedness and Response (OPHPR) in the Centers for Disease Control and Prevention conducts outreach for public preparedness for natural and manmade incidents. In 2011, OPHPR conducted a nationwide mobile public health (m-Health) campaign that pushed brief videos on preparing for severe winter weather onto cell phones, with the objective of evaluating the interoperability of multimedia m-Health outreach with diverse cell phones (including handsets without Internet capability), carriers, and user preferences. Existing OPHPR outreach material on winter weather preparedness was converted into mobile-ready multimedia using mobile marketing best practices to improve audiovisual quality and relevance. Middleware complying with opt-in requirements was developed to push nine bi-weekly multimedia broadcasts onto subscribers' cell phones, and OPHPR promoted the campaign on its web site and to subscribers on its govdelivery.com notification platform. Multimedia, text, and voice messaging activity to/from the middleware was logged and analyzed. Adapting existing media into mobile video was straightforward using open source and commercial software, including web pages, PDF documents, and public service announcements. The middleware successfully delivered all outreach videos to all participants (a total of 504 videos) regardless of the participant's device. 54% of videos were viewed on cell phones, 32% on computers, and 14% were retrieved by search engine web crawlers. 21% of participating cell phones did not have Internet access, yet still received and displayed all videos. The time from media push to media viewing on cell phones was half that of push to viewing on computers. Video delivered through multimedia messaging can be as interoperable as text messages, while providing much richer information. This may be the only multimedia mechanism available to outreach campaigns targeting vulnerable populations impacted by the digital divide.
Anti-spam laws preserve the integrity of mobile messaging, but complicate campaign promotion. Person-to-person messages may boost enrollment.

  20. Motion sickness and postural sway in console video games.

    PubMed

    Stoffregen, Thomas A; Faugloire, Elise; Yoshida, Ken; Flanagan, Moira B; Merhi, Omar

    2008-04-01

    We tested the hypotheses that (a) participants might develop motion sickness while playing "off-the-shelf" console video games and (b) postural motion would differ between sick and well participants, prior to the onset of motion sickness. There have been many anecdotal reports of motion sickness among people who play console video games (e.g., Xbox, PlayStation). Participants (40 undergraduate students) played a game continuously for up to 50 min while standing or sitting. We varied the distance to the display screen (and, consequently, the visual angle of the display). Across conditions, the incidence of motion sickness ranged from 42% to 56%; incidence did not differ across conditions. During game play, head and torso motion differed between sick and well participants prior to the onset of subjective symptoms of motion sickness. The results indicate that console video games carry a significant risk of motion sickness. Potential applications of this research include changes in the design of console video games and recommendations for how such systems should be used.

  1. Video image processor on the Spacelab 2 Solar Optical Universal Polarimeter /SL2 SOUP/

    NASA Technical Reports Server (NTRS)

    Lindgren, R. W.; Tarbell, T. D.

    1981-01-01

    The SOUP instrument is designed to obtain diffraction-limited digital images of the sun with high photometric accuracy. The Video Processor originated from the requirement to provide onboard real-time image processing, both to reduce the telemetry rate and to provide meaningful video displays of scientific data to the payload crew. This original concept has evolved into a versatile digital processing system with a multitude of other uses in the SOUP program. The central element in the Video Processor design is a 16-bit central processing unit based on 2900 family bipolar bit-slice devices. All arithmetic, logical and I/O operations are under control of microprograms, stored in programmable read-only memory and initiated by commands from the LSI-11. Several functions of the Video Processor are described, including interface to the High Rate Multiplexer downlink, cosmetic and scientific data processing, scan conversion for crew displays, focus and exposure testing, and use as ground support equipment.

  2. Task-dependent color discrimination

    NASA Technical Reports Server (NTRS)

    Poirson, Allen B.; Wandell, Brian A.

    1990-01-01

When color video displays are used in time-critical applications (e.g., head-up displays, video control panels), the observer must discriminate among briefly presented targets seen within a complex spatial scene. Color-discrimination thresholds are compared using two tasks. In one task the observer makes color matches between two halves of a continuously displayed bipartite field. In a second task the observer detects a color target in a set of briefly presented objects. The data from both tasks are well summarized by ellipsoidal isosensitivity contours. The fitted ellipsoids differ both in their size, which indicates an absolute sensitivity difference, and orientation, which indicates a relative sensitivity difference.

  3. Computer Graphics in Research: Some State -of-the-Art Systems

    ERIC Educational Resources Information Center

    Reddy, R.; And Others

    1975-01-01

A description is given of the structure and functional characteristics of three types of interactive computer graphic systems developed by the Department of Computer Science at Carnegie-Mellon: a high-speed programmable display capable of displaying 50,000 short vectors, flicker-free; a shaded-color video display for the display of gray-scale…

  4. Wireless Augmented Reality Prototype (WARP)

    NASA Technical Reports Server (NTRS)

    Devereaux, A. S.

    1999-01-01

Initiated in January 1997, under NASA's Office of Life and Microgravity Sciences and Applications, the Wireless Augmented Reality Prototype (WARP) is a means to leverage recent advances in communications, displays, imaging sensors, biosensors, voice recognition and microelectronics to develop a hands-free, tetherless system capable of real-time personal display and control of computer system resources. Using WARP, an astronaut may efficiently operate and monitor any computer-controllable activity inside or outside the vehicle or station. The WARP concept is a lightweight, unobtrusive heads-up display with a wireless wearable control unit. Connectivity to the external system is achieved through a high-rate radio link from the WARP personal unit to a base station unit installed into any system PC. The radio link has been specially engineered to operate within the high-interference, high-multipath environment of a space shuttle or space station module. Through this virtual terminal, the astronaut will be able to view and manipulate imagery, text or video, using voice commands to control the terminal operations. WARP's hands-free access to computer-based instruction texts, diagrams and checklists replaces juggling manuals and clipboards, and tetherless computer system access allows free motion throughout a cabin while monitoring and operating equipment.

  5. Evaluation of advanced displays for engine monitoring and control

    NASA Technical Reports Server (NTRS)

    Summers, L. G.

    1993-01-01

The relative effectiveness of two advanced display concepts for monitoring engine performance for commercial transport aircraft was studied. The concepts were the Engine Monitoring and Control System (EMACS) display developed by NASA Langley and a display-by-exception design. Both of these concepts were based on the philosophy of providing information that is directly related to the pilot's task, and both used a normalized thrust display. In addition, EMACS used column deviation indicators, i.e., the difference between the actual parameter value and the value predicted by an engine model, for engine health monitoring, while the display-by-exception design displayed the engine parameters only if the automated system detected a difference between the actual and predicted values. The results showed that the advanced display concepts had shorter detection and response times. There were no differences in any of the results between manual and auto throttles, and no effects upon perceived workload or performance on the primary flight task. The majority of pilots preferred the advanced displays and thought they were operationally acceptable. Certification of these concepts depends on the validation of the engine model. Recommendations are made to improve both the EMACS and display-by-exception formats.

  6. Using Student Video Cases to Assess Pre-service Elementary Teachers' Engineering Teaching Responsiveness

    NASA Astrophysics Data System (ADS)

    Dalvi, Tejaswini; Wendell, Kristen

    2017-10-01

Our study addresses the need for new approaches to prepare novice elementary teachers to teach both science and engineering, and for new tools to measure how well those approaches are working. In particular, this would inform teacher educators of the extent to which novice teachers are developing expertise in facilitating their students' engineering design work. One important dimension to measure is novice teachers' abilities to notice the substance of student thinking and to respond in productive ways. This teacher noticing is particularly important in science and engineering education, where students' initial, idiosyncratic ideas and practices influence the likelihood that particular instructional strategies will help them learn. This paper describes evidence of validity and reliability for the Video Case Diagnosis (VCD) task, a new instrument for measuring pre-service elementary teachers' engineering teaching responsiveness. To complete the VCD, participants view a 6-min video episode of children solving an engineering design problem, describe in writing what they notice about the students' science ideas and engineering practices, and propose how a teacher could productively respond to the students. The rubric for scoring VCD responses allowed two independent scorers to achieve inter-rater reliability. Content analysis of the video episode, systematic review of literature on science and engineering practices, and solicitation of external expert educator responses establish content validity for VCD. Field test results with three different participant groups who have different levels of engineering education experience offer evidence of construct validity.

  7. Co-Located Collaborative Learning Video Game with Single Display Groupware

    ERIC Educational Resources Information Center

    Infante, Cristian; Weitz, Juan; Reyes, Tomas; Nussbaum, Miguel; Gomez, Florencia; Radovic, Darinka

    2010-01-01

    Role Game is a co-located CSCL video game played by three students sitting at one machine sharing a single screen, each with their own input device. Inspired by video console games, Role Game enables students to learn by doing, acquiring social abilities and mastering subject matter in a context of co-located collaboration. After describing the…

  8. Author Correction: Single-molecule imaging by optical absorption

    NASA Astrophysics Data System (ADS)

    Celebrano, Michele; Kukura, Philipp; Renn, Alois; Sandoghdar, Vahid

    2018-05-01

    In the Supplementary Video initially published with this Letter, the right-hand panel displaying the fluorescence emission was not showing on some video players due to a formatting problem; this has now been fixed. The video has also now been amended to include colour scale bars for both the left- (differential transmission signal) and right-hand panels.

  9. 76 FR 59963 - Closed Captioning of Internet Protocol-Delivered Video Programming: Implementation of the Twenty...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-09-28

    ...In this document, the Commission proposes rules to implement provisions of the Twenty-First Century Communications and Video Accessibility Act of 2010 (``CVAA'') that mandate rules for closed captioning of certain video programming delivered using Internet protocol (``IP''). The Commission seeks comment on rules that would apply to the distributors, providers, and owners of IP-delivered video programming, as well as the devices that display such programming.

  10. Quantitative measurement of eyestrain on 3D stereoscopic display considering the eye foveation model and edge information.

    PubMed

    Heo, Hwan; Lee, Won Oh; Shin, Kwang Yong; Park, Kang Ryoung

    2014-05-15

    We propose a new method for measuring the degree of eyestrain on 3D stereoscopic displays using a glasses-type of eye tracking device. Our study is novel in the following four ways: first, the circular area where a user's gaze position exists is defined based on the calculated gaze position and gaze estimation error. Within this circular area, the position where edge strength is maximized can be detected, and we determine this position as the gaze position that has a higher probability of being the correct one. Based on this gaze point, the eye foveation model is defined. Second, we quantitatively evaluate the correlation between the degree of eyestrain and the causal factors of visual fatigue, such as the degree of change of stereoscopic disparity (CSD), stereoscopic disparity (SD), frame cancellation effect (FCE), and edge component (EC) of the 3D stereoscopic display using the eye foveation model. Third, by comparing the eyestrain in conventional 3D video and experimental 3D sample video, we analyze the characteristics of eyestrain according to various factors and types of 3D video. Fourth, by comparing the eyestrain with or without the compensation of eye saccades movement in 3D video, we analyze the characteristics of eyestrain according to the types of eye movements in 3D video. Experimental results show that the degree of CSD causes more eyestrain than other factors.
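The first step described above, snapping the estimated gaze position to the point of maximal edge strength inside the gaze-error circle, can be sketched in NumPy. The synthetic image, gaze position, and radius below are illustrative assumptions, not the authors' data:

```python
import numpy as np

def refine_gaze(gray: np.ndarray, gaze_xy, radius: int):
    """Within a circle of `radius` pixels around the estimated gaze point,
    return the pixel coordinate with maximal gradient (edge) strength."""
    gy, gx = np.gradient(gray.astype(float))     # gradients along rows, cols
    edge = np.hypot(gx, gy)                      # edge-strength map
    ys, xs = np.mgrid[0:gray.shape[0], 0:gray.shape[1]]
    cx, cy = gaze_xy
    inside = (xs - cx) ** 2 + (ys - cy) ** 2 <= radius ** 2
    edge[~inside] = -1.0                         # exclude pixels outside the circle
    y, x = np.unravel_index(np.argmax(edge), edge.shape)
    return int(x), int(y)

# synthetic image: vertical step edge at column 60, gaze estimate nearby
img = np.zeros((100, 100))
img[:, 60:] = 255.0
print(refine_gaze(img, gaze_xy=(55, 50), radius=10))  # snaps onto the edge
```

The refined point then serves as the centre of the eye foveation model used to weight the visual-fatigue factors.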

  11. Presentation of Information on Visual Displays.

    ERIC Educational Resources Information Center

    Pettersson, Rune

    This discussion of factors involved in the presentation of text, numeric data, and/or visuals using video display devices describes in some detail the following types of presentation: (1) visual displays, with attention to additive color combination; measurements, including luminance, radiance, brightness, and lightness; and standards, with…

  12. Composite video and graphics display for multiple camera viewing system in robotics and teleoperation

    NASA Technical Reports Server (NTRS)

    Diner, Daniel B. (Inventor); Venema, Steven C. (Inventor)

    1991-01-01

A system for real-time video image display for robotics or remote-vehicle teleoperation is described that has at least one robot arm or remotely operated vehicle controlled by an operator through hand-controllers, and one or more television cameras and optional lighting element. The system has at least one television monitor for display of a television image from a selected camera and the ability to select one of the cameras for image display. Graphics are generated with icons of cameras and lighting elements for display surrounding the television image to provide the operator information on: the location and orientation of each camera and lighting element; the region of illumination of each lighting element; the viewed region and range of focus of each camera; which camera is currently selected for image display for each monitor; and when the controller coordinates for said robot arms or remotely operated vehicles have been transformed to correspond to coordinates of a selected or nonselected camera.

  13. Composite video and graphics display for camera viewing systems in robotics and teleoperation

    NASA Technical Reports Server (NTRS)

    Diner, Daniel B. (Inventor); Venema, Steven C. (Inventor)

    1993-01-01

A system for real-time video image display for robotics or remote-vehicle teleoperation is described that has at least one robot arm or remotely operated vehicle controlled by an operator through hand-controllers, and one or more television cameras and optional lighting element. The system has at least one television monitor for display of a television image from a selected camera and the ability to select one of the cameras for image display. Graphics are generated with icons of cameras and lighting elements for display surrounding the television image to provide the operator information on: the location and orientation of each camera and lighting element; the region of illumination of each lighting element; the viewed region and range of focus of each camera; which camera is currently selected for image display for each monitor; and when the controller coordinates for said robot arms or remotely operated vehicles have been transformed to correspond to coordinates of a selected or nonselected camera.

  14. Help for the Visually Impaired

    NASA Technical Reports Server (NTRS)

    1995-01-01

    The Low Vision Enhancement System (LVES) is a video headset that offers people with low vision a view of their surroundings equivalent to the image on a five-foot television screen four feet from the viewer. It will not make the blind see but for many people with low vision, it eases everyday activities such as reading, watching TV and shopping. LVES was developed over almost a decade of cooperation between Stennis Space Center, the Wilmer Eye Institute of the Johns Hopkins Medical Institutions, the Department of Veteran Affairs, and Visionics Corporation. With the aid of Stennis scientists, Wilmer researchers used NASA technology for computer processing of satellite images and head-mounted vision enhancement systems originally intended for the space station. The unit consists of a head-mounted video display, three video cameras, and a control unit for the cameras. The cameras feed images to the video display in the headset.

  15. Oversight of OSHA with Respect to Video Display Terminals in the Workplace. A Staff Report for the Subcommittee on Health and Safety of the Committee on Education and Labor. House of Representatives, Ninety-Ninth Congress, First Session (August 1985).

    ERIC Educational Resources Information Center

    Dwyer, Paul F.

    Drawing on testimony presented at hearings before the Subcommittee on Health and Safety of the House of Representatives conducted between February 28 and June 12, 1984, this staff report addresses the general topic of video display terminals (VDTs) and possible health hazards in the workplace. An introduction presents the history of the…

  16. Rapid Damage Assessment. Volume II. Development and Testing of Rapid Damage Assessment System.

    DTIC Science & Technology

    1981-02-01

pixels/s Camera Line Rate 732.4 lines/s Pixels per Line 1728 video 314 blank 4 line number (binary) 2 run number (BCD) 2048 total Pixel Resolution 8 bits...sists of an LSI-11 microprocessor, a VDI-200 video display processor, an FD-2 dual floppy diskette subsystem, an FT-1 function key-trackball module...COMPONENT LIST FOR IMAGE PROCESSOR SYSTEM IMAGE PROCESSOR SYSTEM VIEWS I VDI-200 Display Processor Racks, Table FD-2 Dual Floppy Diskette Subsystem FT-1

  17. The Development of the AFIT Communications Laboratory and Experiments for Communications Students.

    DTIC Science & Technology

    1985-12-01

Activates digital storage and permits monitoring of maximum and minimum signal excursions over an indefinite time...selects the "A" or "B"...level at which the vertical display is either peak detected or digitally averaged. Video signals above the level set by the... Video signals below the level set by the PEAK AVERAGE control are digitally averaged and stored. VERT POS positions the display or baseline...

  18. Annual Technical Symposium (28th): Achieving Technical and Management Excellence. Held in Arlington, Virginia on April 11, 1991,

    DTIC Science & Technology

    1991-04-11

Perplexed: Think Energy Again. Video Enhanced SECAT - An Energy Program; Quality Ship Service Power with an Integrated Diesel Electric Propulsion...Design Branch (5011), NAVSEA * "Think Energy Again! Video Enhanced SECAT - 5 An Energy Program"' Hasan Pehlivan, Mechanical Engineer/Ship Trials, Surface...1.015, or 1.5% increase.) Association of Scientists and Engineers 28th Annual Technical Symposium, 11 April 1991 THINK ENERGY AGAIN! A VIDEO ENHANCED

  19. Earthscape, a Multi-Purpose Interactive 3d Globe Viewer for Hybrid Data Visualization and Analysis

    NASA Astrophysics Data System (ADS)

    Sarthou, A.; Mas, S.; Jacquin, M.; Moreno, N.; Salamon, A.

    2015-08-01

The hybrid visualization and interaction tool EarthScape is presented here. The software is able to display simultaneously LiDAR point clouds, draped videos with moving footprint, volume scientific data (using volume rendering, isosurface and slice plane), raster data such as still satellite images, vector data and 3D models such as buildings or vehicles. The application runs on touch screen devices such as tablets. The software is based on open source libraries, such as OpenSceneGraph, osgEarth and OpenCV, and shader programming is used to implement volume rendering of scientific data. The next goal of EarthScape is to perform data analysis using ENVI Services Engine, a cloud data analysis solution. EarthScape is also designed to be a client of Jagwire, which provides multisource geo-referenced video fluxes. When all these components are included, EarthScape will be a multi-purpose platform that provides data analysis, hybrid visualization and complex interactions at the same time. The software is available on demand for free at france@exelisvis.com.

  20. Head-mounted display for use in functional endoscopic sinus surgery

    NASA Astrophysics Data System (ADS)

    Wong, Brian J.; Lee, Jon P.; Dugan, F. Markoe; MacArthur, Carol J.

    1995-05-01

Since the introduction of functional endoscopic sinus surgery (FESS), the procedure has undergone rapid change, with evolution keeping pace with technological advances. The advent of low-cost charge-coupled device (CCD) cameras revolutionized the practice and instruction of FESS. Video-based FESS has allowed for documentation of the surgical procedure as well as interactive instruction during surgery. Presently, the technical requirements of video-based FESS include the addition of one or more television monitors positioned strategically in the operating room. Though video monitors have greatly enhanced surgical endoscopy by re-involving nurses and assistants in the actual mechanics of surgery, they require the operating surgeon to be focused on the screen instead of the patient. In this study, we describe the use of a new low-cost liquid crystal display (LCD) based device that functions as a monitor but is mounted on the head on a visor (PT-O1, O1 Products, Westlake Village, CA). This study illustrates the application of these HMD devices to FESS operations. The same surgeon performed the operation in each patient. In one nasal fossa, surgery was performed using conventional video FESS methods. The contralateral side was operated on while wearing the head-mounted video display. The device had adequate resolution for the purposes of FESS. No adverse effects were noted intraoperatively. The results on the patients' ipsilateral and contralateral sides were similar. The visor eliminated significant torsion of the surgeon's neck during the operation, while at the same time permitting simultaneous viewing of both the patient and the intranasal surgical field.

  1. Putting Your Camp on Video.

    ERIC Educational Resources Information Center

    Peterson, Michael

    1997-01-01

    Creating a video to use in marketing camp involves selecting a format, writing the script, determining the video's length, obtaining release forms from campers who appear in the video, determining strategies for filming, choosing a narrator, and renting a studio and a mixing engineer (videotape editor). Includes distribution tips. (LP)

  2. Video quality assessment using M-SVD

    NASA Astrophysics Data System (ADS)

    Tao, Peining; Eskicioglu, Ahmet M.

    2007-01-01

    Objective video quality measurement is a challenging problem in a variety of video processing applications ranging from lossy compression to printing. An ideal video quality measure should be able to mimic the human observer. We present a new video quality measure, M-SVD, to evaluate distorted video sequences based on singular value decomposition. A computationally efficient approach is developed for full-reference (FR) video quality assessment. This measure is tested on the Video Quality Experts Group (VQEG) phase I FR-TV test data set. Our experiments show that the graphical measure displays the amount of distortion as well as the distribution of error in all frames of the video sequence, while the numerical measure correlates well with perceived video quality and outperforms PSNR and other objective measures by a clear margin.
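A minimal sketch of an SVD-based measure for one frame pair, assuming the common block-wise formulation (8x8 blocks, Euclidean distance between singular-value vectors, median-deviation pooling); the authors' exact pooling and temporal aggregation may differ:

```python
import numpy as np

def msvd_frame(ref: np.ndarray, dist: np.ndarray, block: int = 8):
    """Per-block SVD distance map for one frame pair, plus a scalar score.
    Each block's distortion is the Euclidean distance between the singular
    values of the reference block and those of the distorted block."""
    h, w = ref.shape
    rows, cols = h // block, w // block
    dmap = np.empty((rows, cols))
    for r in range(rows):
        for c in range(cols):
            rb = ref[r*block:(r+1)*block, c*block:(c+1)*block]
            db = dist[r*block:(r+1)*block, c*block:(c+1)*block]
            s_ref = np.linalg.svd(rb, compute_uv=False)
            s_dst = np.linalg.svd(db, compute_uv=False)
            dmap[r, c] = np.linalg.norm(s_ref - s_dst)
    # scalar score: mean absolute deviation of block distances from their median
    score = float(np.mean(np.abs(dmap - np.median(dmap))))
    return dmap, score

rng = np.random.default_rng(0)
ref = rng.uniform(0, 255, size=(64, 64))
dmap, score = msvd_frame(ref, ref)   # identical frames -> zero distortion
print(dmap.max(), score)             # → 0.0 0.0
```

The distance map `dmap` plays the role of the graphical measure (showing where in the frame the error lies), while `score` is the numerical measure for that frame.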

  3. A generic flexible and robust approach for intelligent real-time video-surveillance systems

    NASA Astrophysics Data System (ADS)

    Desurmont, Xavier; Delaigle, Jean-Francois; Bastide, Arnaud; Macq, Benoit

    2004-05-01

    In this article we present a generic, flexible and robust approach for an intelligent real-time video-surveillance system. A previous version of the system was presented in [1]. The goal of these advanced tools is to help operators by detecting events of interest in visual scenes, highlighting alarms and computing statistics. The proposed system is a multi-camera platform able to handle different standards of video inputs (composite, IP, IEEE 1394) and which can basically compress (MPEG4), store and display them. This platform also integrates advanced video analysis tools, such as motion detection, segmentation, tracking and interpretation. The design of the architecture is optimised to playback, display, and process video flows in an efficient way for video-surveillance applications. The implementation is distributed on a scalable computer cluster based on Linux and IP network. It relies on POSIX threads for multitasking scheduling. Data flows are transmitted between the different modules using multicast technology and under control of a TCP-based command network (e.g. for bandwidth occupation control). We report here some results and we show the potential use of such a flexible system in a third-generation video surveillance system. We illustrate the interest of the system in a real case study, which is indoor surveillance.

  4. 14 CFR 382.69 - What requirements must carriers meet concerning the accessibility of videos, DVDs, and other...

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... carriers must use an equivalent non-video alternative for transmitting the briefing to passengers with... audio-visual displays played on aircraft for informational purposes that were created under your control...

  5. 14 CFR 382.69 - What requirements must carriers meet concerning the accessibility of videos, DVDs, and other...

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... carriers must use an equivalent non-video alternative for transmitting the briefing to passengers with... audio-visual displays played on aircraft for informational purposes that were created under your control...

  6. 14 CFR 382.69 - What requirements must carriers meet concerning the accessibility of videos, DVDs, and other...

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... carriers must use an equivalent non-video alternative for transmitting the briefing to passengers with... audio-visual displays played on aircraft for informational purposes that were created under your control...

  7. Predictable Programming on a Precision Timed Architecture

    DTIC Science & Technology

    2008-04-18

Application: A Video Game Figure 6: Structure of the Video Game Example Inspired by an example game supplied with the Hydra development board [17...we implemented a simple video game in C targeted to our PRET architecture. Our example centers on rendering graphics and is otherwise fairly simple...background image. Figure 10: A Screen Dump From Our Video Game Ultimately, each displayed pixel is one of only four colors, but the pixels in

  8. Markerless client-server augmented reality system with natural features

    NASA Astrophysics Data System (ADS)

    Ning, Shuangning; Sang, Xinzhu; Chen, Duo

    2017-10-01

    A markerless client-server augmented reality system is presented. In this research, the more extensive and mature virtual reality head-mounted display is adopted to assist the implementation of augmented reality. The viewer is provided an image in front of their eyes with the head-mounted display. The front-facing camera is used to capture video signals into the workstation. The generated virtual scene is merged with the outside-world information received from the camera, and the integrated video is sent to the helmet display system. The distinguishing feature and novelty is realizing augmented reality with natural features instead of a marker, which addresses the marker's limitations: a marker is restricted to black and white, is inapplicable under some environmental conditions, and in particular fails when it is partially blocked. Further, 3D stereoscopic perception of the virtual animation model is achieved. The high-speed and stable socket native communication method is adopted for transmission of the key video stream data, which reduces the calculation burden of the system.

  9. Naval Research Laboratory 1984 Review.

    DTIC Science & Technology

    1985-07-16

    ...pulsed infrared sources and electronics for video signal processing...comprehensive characterization of ultrahigh-transparency fluoride glasses...operates a video system through this port if desired. The optical bench in the trailer holds a high-resolution Fourier transform spectrometer for use in the receiving...consisting of visible and infrared television cameras, a high-quality video cassette recorder and display, and a digitizer to convert...

  10. Multi-Target Camera Tracking, Hand-off and Display LDRD 158819 Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, Robert J.

    2014-10-01

    Modern security control rooms gather video and sensor feeds from tens to hundreds of cameras. Advanced camera analytics can detect motion from individual video streams and convert unexpected motion into alarms, but the interpretation of these alarms depends heavily upon human operators. Unfortunately, these operators can be overwhelmed when a large number of events happen simultaneously, or lulled into complacency due to frequent false alarms. This LDRD project has focused on improving video surveillance-based security systems by changing the fundamental focus from the cameras to the targets being tracked. If properly integrated, more cameras shouldn’t lead to more alarms, more monitors, more operators, and increased response latency but instead should lead to better information and more rapid response times. For the course of the LDRD we have been developing algorithms that take live video imagery from multiple video cameras, identify individual moving targets from the background imagery, and then display the results in a single 3D interactive video. In this document we summarize the work in developing this multi-camera, multi-target system, including lessons learned, tools developed, technologies explored, and a description of current capability.

  11. Multi-target camera tracking, hand-off and display LDRD 158819 final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, Robert J.

    2014-10-01

    Modern security control rooms gather video and sensor feeds from tens to hundreds of cameras. Advanced camera analytics can detect motion from individual video streams and convert unexpected motion into alarms, but the interpretation of these alarms depends heavily upon human operators. Unfortunately, these operators can be overwhelmed when a large number of events happen simultaneously, or lulled into complacency due to frequent false alarms. This LDRD project has focused on improving video surveillance-based security systems by changing the fundamental focus from the cameras to the targets being tracked. If properly integrated, more cameras shouldn't lead to more alarms, more monitors, more operators, and increased response latency but instead should lead to better information and more rapid response times. For the course of the LDRD we have been developing algorithms that take live video imagery from multiple video cameras, identify individual moving targets from the background imagery, and then display the results in a single 3D interactive video. In this document we summarize the work in developing this multi-camera, multi-target system, including lessons learned, tools developed, technologies explored, and a description of current capability.

  12. 40 CFR 91.1007 - Display exemption.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ...) CONTROL OF EMISSIONS FROM MARINE SPARK-IGNITION ENGINES Exclusion and Exemption of Marine SI Engines § 91.1007 Display exemption. An uncertified marine SI engine is a display engine when it is to be used... will not be sold unless an applicable certificate of conformity has been received or the engine has...

  13. Fast repurposing of high-resolution stereo video content for mobile use

    NASA Astrophysics Data System (ADS)

    Karaoglu, Ali; Lee, Bong Ho; Boev, Atanas; Cheong, Won-Sik; Gotchev, Atanas

    2012-06-01

    3D video content is captured and created mainly in high resolution targeting big cinema or home TV screens. For 3D mobile devices, equipped with small-size auto-stereoscopic displays, such content has to be properly repurposed, preferably in real time. The repurposing requires not only spatial resizing but also properly maintaining the output stereo disparity, as it should deliver realistic, pleasant and harmless 3D perception. In this paper, we propose an approach to adapt the disparity range of the source video to the comfort disparity zone of the target display. To achieve this, we adapt the scale and the aspect ratio of the source video. We aim at maximizing the disparity range of the retargeted content within the comfort zone, and minimizing the letterboxing of the cropped content. The proposed algorithm consists of five stages. First, we analyse the display profile, which characterises what 3D content can be comfortably observed on the target display. Then, we perform fast disparity analysis of the input stereoscopic content. Instead of returning the dense disparity map, it returns an estimate of the disparity statistics (min, max, mean, and variance) per frame. Additionally, we detect scene cuts, where sharp transitions in disparities occur. Based on the estimated input and desired output disparity ranges, we derive the optimal cropping parameters and scale of the cropping window, which yield the targeted disparity range and minimize the area of cropped and letterboxed content. Once the rescaling and cropping parameters are known, we perform a resampling procedure using spline-based, perceptually optimized resampling (anti-aliasing) kernels, which also have a very efficient computational structure. Perceptual optimization is achieved by adjusting the cut-off frequency of the anti-aliasing filter to the throughput of the target display.
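    The core of the retargeting step described in this abstract is mapping the measured source disparity range into the display's comfort zone. A minimal sketch of that mapping follows; the function name, the linear-scaling model, and the example numbers are illustrative assumptions, not the authors' actual implementation (spatially scaling a stereo pair by a factor s scales its disparities by s as well).

```python
def retarget_scale(d_min, d_max, comfort_min, comfort_max):
    """Return the largest uniform spatial scale s (0 < s <= 1) such that
    the scaled disparity range [s*d_min, s*d_max] fits inside the
    comfort zone [comfort_min, comfort_max] of the target display."""
    if d_max <= d_min:
        raise ValueError("degenerate disparity range")
    s = 1.0
    if d_max > 0:
        s = min(s, comfort_max / d_max)   # positive (crossed) disparities bind
    if d_min < 0:
        s = min(s, comfort_min / d_min)   # negative (uncrossed) disparities bind
    return max(s, 0.0)

# Example: source disparities span [-30, +18] px per frame, while the mobile
# display's comfort zone is [-10, +12] px. The negative side is the binding
# constraint, giving s = (-10)/(-30) = 1/3.
print(retarget_scale(-30, 18, -10, 12))
```

    In the paper's pipeline this scale would then be traded off against cropping-window size to minimize letterboxing; that optimization is omitted here.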

  14. Modernizing engine displays

    NASA Technical Reports Server (NTRS)

    Schneider, E. T.; Enevoldson, E. K.

    1984-01-01

    The introduction of electronic fuel control to modern turbine engines has a number of advantages, which are related to an increase in engine performance and to a reduction or elimination of the problems associated with high-angle-of-attack engine operation from the surface to 50,000 feet. If the appropriate engine display devices are available to the pilot, the fuel control system can provide a great amount of information. Some of the wealth of information available from modern fuel controls is discussed in this paper. The electronic engine control systems considered, in their most recent forms, are known as the Full Authority Digital Engine Control (FADEC) and the Digital Electronic Engine Control (DEEC). Attention is given to some details regarding the control systems, typical engine problems, the solution of problems with the aid of displays, engine displays in normal operation, an example display format, a multipage format, flight strategies, and hardware considerations.

  15. Innovative railroad information displays : video guide

    DOT National Transportation Integrated Search

    1998-01-01

    The objectives of this study were to explore the potential of advanced digital technology, : novel concepts of information management, geographic information databases and : display capabilities in order to enhance planning and decision-making proces...

  16. Development of 40-in hybrid hologram screen for auto-stereoscopic video display

    NASA Astrophysics Data System (ADS)

    Song, Hyun Ho; Nakashima, Y.; Momonoi, Y.; Honda, Toshio

    2004-06-01

    Auto-stereoscopic displays typically face two problems: first, large-image display is difficult, and second, the view zone (the zone in which both eyes must be placed for stereoscopic or 3-D image observation) is very narrow. We have been developing an auto-stereoscopic large video display system (over 100 inches diagonal) which a few people can view simultaneously [1,2]. Displays over 100 inches diagonal usually use an optical video projection system, and the hologram screen has been proposed as one auto-stereoscopic display system [3-6]. However, if the hologram screen becomes too large, the view zone (corresponding to the reconstructed diffused object) suffers color dispersion and color aberration [7]. We therefore proposed attaching an additional Fresnel lens to the hologram screen; we call this a "hybrid hologram screen" (HHS for short). We made an HHS of 866 mm (H) × 433 mm (V), about 40 inches diagonal [8-11]. By using the lens in the reconstruction step, the angle between the object light and the reference light can be made small compared to the case without the lens, so the spread of the view zone caused by color dispersion and color aberration becomes small. In addition, the virtual image reconstructed from the hologram screen can be transformed into a real image (view zone), so it is not necessary to use a large lens or concave mirror when making a large hologram screen.

  17. Perceptual tools for quality-aware video networks

    NASA Astrophysics Data System (ADS)

    Bovik, A. C.

    2014-01-01

    Monitoring and controlling the quality of the viewing experience of videos transmitted over increasingly congested networks (especially wireless networks) is a pressing problem owing to rapid advances in video-centric mobile communication and display devices that are straining the capacity of the network infrastructure. New developments in automatic perceptual video quality models offer tools that have the potential to be used to perceptually optimize wireless video, leading to more efficient video data delivery and better received quality. In this talk I will review key perceptual principles that are, or could be, used to create effective video quality prediction models, along with leading quality prediction models that utilize these principles. The goal is to be able to monitor and perceptually optimize video networks by making them "quality-aware."

  18. Representing videos in tangible products

    NASA Astrophysics Data System (ADS)

    Fageth, Reiner; Weiting, Ralf

    2014-03-01

    Videos can be taken with nearly every camera: digital point-and-shoot cameras, DSLRs, smartphones, and increasingly the so-called action cameras mounted on sports devices. The incorporation of videos into printed products, by generating QR codes and extracting relevant pictures from the video stream in software, was the subject of last year's paper. This year we present first data about what content is displayed and how users represent their videos in printed products, e.g. CEWE PHOTOBOOKS and greeting cards. We report the share of the different video formats used, the number of images extracted from a video in order to represent it, the positions of these images in the book, and design strategies compared to regular books.

  19. New generation of the multimedia search engines

    NASA Astrophysics Data System (ADS)

    Mijes Cruz, Mario Humberto; Soto Aldaco, Andrea; Maldonado Cano, Luis Alejandro; López Rodríguez, Mario; Rodríguez Vázqueza, Manuel Antonio; Amaya Reyes, Laura Mariel; Cano Martínez, Elizabeth; Pérez Rosas, Osvaldo Gerardo; Rodríguez Espejo, Luis; Flores Secundino, Jesús Abimelek; Rivera Martínez, José Luis; García Vázquez, Mireya Saraí; Zamudio Fuentes, Luis Miguel; Sánchez Valenzuela, Juan Carlos; Montoya Obeso, Abraham; Ramírez Acosta, Alejandro Álvaro

    2016-09-01

    Current search engines are based upon search methods that involve the combination of words (text-based search), which has been efficient until now. However, the Internet grows more diverse with each passing day, and text-based searches are becoming limited, as most of the information on the Internet is found in different types of content denominated multimedia content (images, audio files, video files). What needs to be improved in current search engines is search content and precision, as well as an accurate display of the search results the user expects. Any search can be made more precise by using more text parameters, but this does not improve the content or the speed of the search itself. One solution is to characterize the content of multimedia files for the search. In this article, an analysis of new-generation multimedia search engines is presented, focusing on the needs created by new technologies. Multimedia content has become a central part of the flow of information in our daily life, which reflects the necessity of having multimedia search engines and of knowing the real tasks they must fulfill. Through this analysis, it is shown that there are not many search engines that can perform content-based searches. The research area of new-generation multimedia search engines is a multidisciplinary area in constant growth, generating tools that satisfy the different needs of new-generation systems.

  20. A simulation evaluation of the engine monitoring and control system display

    NASA Technical Reports Server (NTRS)

    Abbott, Terence S.

    1990-01-01

    The Engine Monitoring and Control System (E-MACS) display is a new concept for an engine instrument display, the purpose of which is to provide an enhanced means for a pilot to control and monitor aircraft engine performance. It provides graphically-presented information about performance capabilities, current performance, and engine component or subsystem operational conditions relative to nominal conditions. The concept was evaluated by sixteen pilot-subjects against a traditional, state-of-the-art electronic engine display format. The results of this evaluation showed a substantial pilot preference for the E-MACS display relative to the traditional display. The results of the failure detection portion of the evaluation showed a 100 percent detection rate for the E-MACS display relative to a 57 percent rate for the traditional display. From these results, it is concluded that by providing this type of information in the cockpit, a reduction in pilot workload and an enhanced ability for detecting degraded or off-nominal conditions is probable, thus leading to an increase in operational safety.

  1. Video-speed electronic paper based on electrowetting

    NASA Astrophysics Data System (ADS)

    Hayes, Robert A.; Feenstra, B. J.

    2003-09-01

    In recent years, a number of different technologies have been proposed for use in reflective displays. One of the most appealing applications of a reflective display is electronic paper, which combines the desirable viewing characteristics of conventional printed paper with the ability to manipulate the displayed information electronically. Electronic paper based on the electrophoretic motion of particles inside small capsules has been demonstrated and commercialized; but the response speed of such a system is rather slow, limited by the velocity of the particles. Recently, we have demonstrated that electrowetting is an attractive technology for the rapid manipulation of liquids on a micrometre scale. Here we show that electrowetting can also be used to form the basis of a reflective display that is significantly faster than electrophoretic displays, so that video content can be displayed. Our display principle utilizes the voltage-controlled movement of a coloured oil film adjacent to a white substrate. The reflectivity and contrast of our system approach those of paper. In addition, we demonstrate a colour concept, which is intrinsically four times brighter than reflective liquid-crystal displays and twice as bright as other emerging technologies. The principle of microfluidic motion at low voltages is applicable in a wide range of electro-optic devices.

  2. Applications of yeast surface display for protein engineering

    PubMed Central

    Cherf, Gerald M.; Cochran, Jennifer R.

    2015-01-01

    The method of displaying recombinant proteins on the surface of Saccharomyces cerevisiae via genetic fusion to an abundant cell wall protein, a technology known as yeast surface display, or simply, yeast display, has become a valuable protein engineering tool for a broad spectrum of biotechnology and biomedical applications. This review focuses on the use of yeast display for engineering protein affinity, stability, and enzymatic activity. Strategies and examples for each protein engineering goal are discussed. Additional applications of yeast display are also briefly presented, including protein epitope mapping, identification of protein-protein interactions, and uses of displayed proteins in industry and medicine. PMID:26060074

  3. Deep Web video

    ScienceCinema

    None Available

    2018-02-06

    To make the web work better for science, OSTI has developed state-of-the-art technologies and services including a deep web search capability. The deep web includes content in searchable databases available to web users but not accessible by popular search engines, such as Google. This video provides an introduction to the deep web search engine.

  4. 47 CFR 73.3617 - Information available on the Internet.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ....fcc.gov/mb/; the Audio Division's address is http://www.fcc.gov/mmb/audio; the Video Division's address is http://www.fcc.gov/mb/video; the Policy Division's address is http://www.fcc.gov/mb/policy; the Engineering Division's address is http://www.fcc.gov/mb/engineering; and the Industry Analysis Division's...

  5. 47 CFR 73.3617 - Information available on the Internet.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ....fcc.gov/mb/; the Audio Division's address is http://www.fcc.gov/mmb/audio; the Video Division's address is http://www.fcc.gov/mb/video; the Policy Division's address is http://www.fcc.gov/mb/policy; the Engineering Division's address is http://www.fcc.gov/mb/engineering; and the Industry Analysis Division's...

  6. 47 CFR 73.3617 - Information available on the Internet.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ....fcc.gov/mb/; the Audio Division's address is http://www.fcc.gov/mmb/audio; the Video Division's address is http://www.fcc.gov/mb/video; the Policy Division's address is http://www.fcc.gov/mb/policy; the Engineering Division's address is http://www.fcc.gov/mb/engineering; and the Industry Analysis Division's...

  7. 2. CHANNEL DIMENSIONS AND ALIGNMENT RESEARCH INSTRUMENTATION. HYDRAULIC ENGINEER PILOTING ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    2. CHANNEL DIMENSIONS AND ALIGNMENT RESEARCH INSTRUMENTATION. HYDRAULIC ENGINEER PILOTING VIDEO-CONTROLLED BOAT MODEL FROM CONTROL TRAILER. NOTE VIEW FROM BOAT-MOUNTED VIDEO CAMERA SHOWN ON MONITOR, AND MODEL WATERWAY VISIBLE THROUGH WINDOW AT LEFT. - Waterways Experiment Station, Hydraulics Laboratory, Halls Ferry Road, 2 miles south of I-20, Vicksburg, Warren County, MS

  8. Students Designing Video Games about Immunology: Insights for Science Learning

    ERIC Educational Resources Information Center

    Khalili, Neda; Sheridan, Kimberly; Williams, Asia; Clark, Kevin; Stegman, Melanie

    2011-01-01

    Exposing American K-12 students to science, technology, engineering, and math (STEM) content is a national initiative. Game Design Through Mentoring and Collaboration targets students from underserved communities and uses their interest in video games as a way to introduce science, technology, engineering, and math topics. This article describes a…

  9. Dissecting children's observational learning of complex actions through selective video displays.

    PubMed

    Flynn, Emma; Whiten, Andrew

    2013-10-01

    Children can learn how to use complex objects by watching others, yet the relative importance of different elements they may observe, such as the interactions of the individual parts of the apparatus, a model's movements, and desirable outcomes, remains unclear. In total, 140 3-year-olds and 140 5-year-olds participated in a study where they observed a video showing tools being used to extract a reward item from a complex puzzle box. Conditions varied according to the elements that could be seen in the video: (a) the whole display, including the model's hands, the tools, and the box; (b) the tools and the box but not the model's hands; (c) the model's hands and the tools but not the box; (d) only the end state with the box opened; and (e) no demonstration. Children's later attempts at the task were coded to establish whether they imitated the hierarchically organized sequence of the model's actions, the action details, and/or the outcome. Children's successful retrieval of the reward from the box and the replication of hierarchical sequence information were reduced in all but the whole display condition. Only once children had attempted the task and witnessed a second demonstration did the display focused on the tools and box prove to be better for hierarchical sequence information than the display focused on the tools and hands only. Copyright © 2013 Elsevier Inc. All rights reserved.

  10. 4D megahertz optical coherence tomography (OCT): imaging and live display beyond 1 gigavoxel/sec (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Huber, Robert A.; Draxinger, Wolfgang; Wieser, Wolfgang; Kolb, Jan Philip; Pfeiffer, Tom; Karpf, Sebastian N.; Eibl, Matthias; Klein, Thomas

    2016-03-01

    Over the last 20 years, optical coherence tomography (OCT) has become a valuable diagnostic tool in ophthalmology, with several tens of thousands of devices sold to date. Other applications, like intravascular OCT in cardiology and gastro-intestinal imaging, will follow. OCT provides 3-dimensional image data with microscopic resolution of biological tissue in vivo. In most applications, off-line processing of the acquired OCT data is sufficient. However, for OCT applications like OCT-aided surgical microscopes, functional OCT imaging of tissue after a stimulus, or interactive endoscopy, an OCT engine capable of acquiring, processing and displaying large, high-quality 3D OCT data sets at video rate is highly desired. We developed such a prototype OCT engine and demonstrate live OCT with 25 volumes per second at a size of 320x320x320 pixels. The computer processing load of more than 1.5 TFLOPS was handled by a GTX 690 graphics processing unit with more than 3000 stream processors operating in parallel. In the talk, we will describe the optics and electronics hardware as well as the software of the system in detail and analyze current limitations. The talk also focuses on new OCT applications, where such a system improves diagnosis and monitoring of medical procedures. The additional acquisition of hyperspectral stimulated Raman signals with the system will be discussed.

  11. Thermal Protection System Imagery Inspection Management System -TIIMS

    NASA Technical Reports Server (NTRS)

    Goza, Sharon; Melendrez, David L.; Henningan, Marsha; LaBasse, Daniel; Smith, Daniel J.

    2011-01-01

    TIIMS is used during the inspection phases of every mission to provide quick visual feedback, detailed inspection data, and determination to the mission management team. This system consists of a visual Web page interface, an SQL database, and a graphical image generator. These combine to allow a user to quickly ascertain the status of the inspection process and the current determination of any problem zones. The TIIMS system allows inspection engineers to enter their determinations into a database and to link pertinent images and video to those database entries. The database then assigns criteria to each zone and tile and, via query, sends the information to a graphical image generation program. Using the official TIPS database tile positions and sizes, the graphical image generation program creates images of the current status of the orbiter, coloring zones and tiles based on a predefined key code. These images are then displayed on a Web page using customized JAVA scripts to display the appropriate zone of the orbiter based on the location of the user's cursor. The close-up graphic and database entry for a particular zone can then be seen by selecting the zone. This page contains links into the database to access the images used by the inspection engineers when making the determinations entered into the database. Status for the inspection zones changes as determinations are refined and is shown by the appropriate color code.

  12. Quantitative Measurement of Eyestrain on 3D Stereoscopic Display Considering the Eye Foveation Model and Edge Information

    PubMed Central

    Heo, Hwan; Lee, Won Oh; Shin, Kwang Yong; Park, Kang Ryoung

    2014-01-01

    We propose a new method for measuring the degree of eyestrain on 3D stereoscopic displays using a glasses-type eye tracking device. Our study is novel in the following four ways: first, the circular area where a user's gaze position exists is defined based on the calculated gaze position and gaze estimation error. Within this circular area, the position where edge strength is maximized can be detected, and we take this position as the gaze position with a higher probability of being the correct one. Based on this gaze point, the eye foveation model is defined. Second, we quantitatively evaluate the correlation between the degree of eyestrain and the causal factors of visual fatigue, such as the degree of change of stereoscopic disparity (CSD), stereoscopic disparity (SD), frame cancellation effect (FCE), and edge component (EC) of the 3D stereoscopic display using the eye foveation model. Third, by comparing the eyestrain in conventional 3D video and experimental 3D sample video, we analyze the characteristics of eyestrain according to various factors and types of 3D video. Fourth, by comparing the eyestrain with and without the compensation of eye saccade movements in 3D video, we analyze the characteristics of eyestrain according to the types of eye movements in 3D video. Experimental results show that the degree of CSD causes more eyestrain than the other factors. PMID:24834910
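    The abstract's first step, refining a noisy gaze estimate to the maximum-edge-strength pixel within the error circle, can be sketched as below. This is an illustrative reconstruction under stated assumptions (grayscale frames, gradient magnitude as "edge strength"); the function and parameter names are hypothetical, not the authors' code.

```python
import numpy as np

def refine_gaze(gray, gaze_xy, radius):
    """Within a circle of `radius` pixels around the estimated gaze position,
    return the (x, y) pixel whose edge strength (gradient magnitude) is maximal."""
    gy, gx = np.gradient(gray.astype(float))   # per-axis intensity gradients
    edge = np.hypot(gx, gy)                    # edge-strength map
    h, w = gray.shape
    ys, xs = np.mgrid[0:h, 0:w]
    cx, cy = gaze_xy
    inside = (xs - cx) ** 2 + (ys - cy) ** 2 <= radius ** 2
    edge_masked = np.where(inside, edge, -np.inf)  # ignore pixels outside circle
    iy, ix = np.unravel_index(np.argmax(edge_masked), edge_masked.shape)
    return int(ix), int(iy)

# Example: a dark frame with one bright vertical stripe near the gaze estimate;
# the refined gaze snaps to the stripe's edge inside the error circle.
frame = np.zeros((40, 40))
frame[:, 22] = 255                 # strong vertical edge around x = 22
print(refine_gaze(frame, gaze_xy=(20, 20), radius=6))
```

    The paper additionally weights fatigue factors by a foveation model around this refined point; that weighting is beyond this sketch.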

  13. A Conceptual Characterization of Online Videos Explaining Natural Selection

    ERIC Educational Resources Information Center

    Bohlin, Gustav; Göransson, Andreas; Höst, Gunnar E.; Tibell, Lena A. E.

    2017-01-01

    Educational videos on the Internet comprise a vast and highly diverse source of information. Online search engines facilitate access to numerous videos claiming to explain natural selection, but little is known about the degree to which the video content match key evolutionary content identified as important in evolution education research. In…

  14. Quantitative Analysis of Color Differences within High Contrast, Low Power Reversible Electrophoretic Displays

    DOE PAGES

    Giera, Brian; Bukosky, Scott; Lee, Elaine; ...

    2018-01-23

    Here, quantitative color analysis is performed on videos of high-contrast, low-power reversible electrophoretic deposition (EPD)-based displays operated under different applied voltages. The analysis is implemented in open-source software, relies on a color-difference metric, ΔE*00, derived from digital video, and provides an intuitive relationship between the operating conditions of the devices and their performance. Time-dependent ΔE*00 color analysis reveals color relaxation behavior, recoverability for different voltage sequences, and operating conditions that can lead to optimal performance.
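    The per-frame color-difference idea behind this analysis can be illustrated as follows. The paper's metric is CIEDE2000 (ΔE*00); as a simpler stand-in, this sketch computes the older Euclidean CIELAB difference ΔE*ab between the mean colors of two frames. The function name and the frame format (H x W x 3 CIELAB arrays) are assumptions for illustration only.

```python
import numpy as np

def delta_e_ab(lab_frame_a, lab_frame_b):
    """Mean-color ΔE*ab (CIE76) between two CIELAB video frames."""
    mean_a = lab_frame_a.reshape(-1, 3).mean(axis=0)  # average L*, a*, b*
    mean_b = lab_frame_b.reshape(-1, 3).mean(axis=0)
    return float(np.linalg.norm(mean_a - mean_b))     # Euclidean Lab distance

# Example: a uniform mid-gray frame vs. a slightly lighter, redder one.
a = np.tile(np.array([50.0, 0.0, 0.0]), (4, 4, 1))   # L*=50, a*=0, b*=0
b = np.tile(np.array([53.0, 4.0, 0.0]), (4, 4, 1))   # L*=53, a*=+4, b*=0
print(delta_e_ab(a, b))  # 5.0
```

    Tracking this value frame by frame over a voltage sequence yields the kind of time-dependent color-relaxation curve the abstract describes; the full ΔE*00 formula adds perceptual weighting terms omitted here.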

  15. Quantitative Analysis of Color Differences within High Contrast, Low Power Reversible Electrophoretic Displays

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Giera, Brian; Bukosky, Scott; Lee, Elaine

    Here, quantitative color analysis is performed on videos of high-contrast, low-power reversible electrophoretic deposition (EPD)-based displays operated under different applied voltages. The analysis is implemented in open-source software, relies on a color-difference metric, ΔE*00, derived from digital video, and provides an intuitive relationship between the operating conditions of the devices and their performance. Time-dependent ΔE*00 color analysis reveals color relaxation behavior, recoverability for different voltage sequences, and operating conditions that can lead to optimal performance.

  16. A Low Cost Video Display System Using the Motorola 6811 Single-Chip Microcomputer.

    DTIC Science & Technology

    1986-08-01

    ...JSR VIDEO (display data; wait for key entry); JSR CLRBUFF (clean out buffer); LDAB #1 (reset pointer); STAB... REG1 CMPA 0,X; BEQ REG3; LDAB 0,X; INX; CMPB #'S'; BNE REG1 (jump if...)

  17. Evaluating the content and reception of messages from incarcerated parents to their children.

    PubMed

    Folk, Johanna B; Nichols, Emily B; Dallaire, Danielle H; Loper, Ann B

    2012-10-01

    In the current study, children's reactions to video messages from their incarcerated parents were evaluated. Previous research has yielded mixed results when it examined the impact of contact between incarcerated parents and their children; one reason for these mixed results may be a lack of attention to the quality of contact. This is the first study to examine the actual content and quality of a remote form of contact in this population. Participants included 186 incarcerated parents (54% mothers) who participated in a filming with The Messages Project and 61 caregivers of their children. Parental mood prior to filming the message and children's mood after viewing the message were assessed using the Positive and Negative Affect Scale. After coding the content of 172 videos, the data from the 61 videos with caregiver responses were used in subsequent path analyses. Analyses indicated that when parents were in more negative moods prior to filming their message, they displayed more negative emotions in the video messages ( = .210), and their children were in more negative moods after viewing the message ( = .288). Considering that displays of negative emotion can directly affect how children respond to contact, it seems important for parents to learn to regulate these emotional displays to improve the quality of their contact with their children. © 2012 American Orthopsychiatric Association.

  18. Stereoscopic 3D video games and their effects on engagement

    NASA Astrophysics Data System (ADS)

    Hogue, Andrew; Kapralos, Bill; Zerebecki, Chris; Tawadrous, Mina; Stanfield, Brodie; Hogue, Urszula

    2012-03-01

    With television manufacturers developing low-cost stereoscopic 3D displays, a large number of consumers will undoubtedly have access to 3D-capable televisions at home. The availability of 3D technology places the onus on content creators to develop interesting and engaging content. While the technology of stereoscopic displays and content generation are well understood, there are many questions yet to be answered surrounding its effects on the viewer. Effects of stereoscopic display on passive viewers for film are known, however video games are fundamentally different since the viewer/player is actively (rather than passively) engaged in the content. Questions of how stereoscopic viewing affects interaction mechanics have previously been studied in the context of player performance but very few have attempted to quantify the player experience to determine whether stereoscopic 3D has a positive or negative influence on their overall engagement. In this paper we present a preliminary study of the effects stereoscopic 3D have on player engagement in video games. Participants played a video game in two conditions, traditional 2D and stereoscopic 3D and their engagement was quantified using a previously validated self-reporting tool. The results suggest that S3D has a positive effect on immersion, presence, flow, and absorption.

  19. Orbital thermal analysis of lattice structured spacecraft using color video display techniques

    NASA Technical Reports Server (NTRS)

    Wright, R. L.; Deryder, D. D.; Palmer, M. T.

    1983-01-01

    A color video display technique is demonstrated as a tool for rapid determination of thermal problems during the preliminary design of complex space systems. A thermal analysis is presented for the lattice-structured Earth Observation Satellite (EOS) spacecraft at 32 points in a baseline non-Sun-synchronous (60 deg inclination) orbit. Large temperature variations (on the order of 150 K) were observed on the majority of the members. A gradual decrease in temperature was observed as the spacecraft traversed the Earth's shadow, followed by a sudden rise in temperature (100 K) as the spacecraft exited the shadow. Heating rate and temperature histories of selected members and color graphic displays of temperatures on the spacecraft are presented.
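    The color-graphic step here amounts to mapping each member's computed temperature onto a color scale for display. A minimal Python sketch, assuming a simple linear blue-(cold)-to-red-(hot) ramp over roughly the 150 K spread reported above; the paper does not specify its actual palette, so `temp_to_rgb` and its bounds are illustrative:

```python
def temp_to_rgb(temp_k, t_min=150.0, t_max=350.0):
    """Map a member temperature in kelvin to an (R, G, B) triple.

    Hypothetical helper: linearly interpolates blue (cold) to red (hot),
    clamping temperatures outside [t_min, t_max].
    """
    # Normalize to [0, 1], clamping out-of-range temperatures
    x = max(0.0, min(1.0, (temp_k - t_min) / (t_max - t_min)))
    r = int(round(255 * x))
    b = int(round(255 * (1.0 - x)))
    return (r, 0, b)
```

    Each of the 32 orbit points would then be rendered by coloring every structural member with its mapped triple.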

  20. LEDing the Way.

    ERIC Educational Resources Information Center

    Dahlgren, Sally

    2000-01-01

    Discusses how advances in light-emitting diode (LED) technology are helping video displays at sporting events get fans closer to the action than ever before. The types of LED displays available are discussed, as are their operation and maintenance issues. (GR)

  1. High Resolution Displays Using NCAP Liquid Crystals

    NASA Astrophysics Data System (ADS)

    Macknick, A. Brian; Jones, Phil; White, Larry

    1989-07-01

    Nematic curvilinear aligned phase (NCAP) liquid crystals have been found useful for high information content video displays. NCAP materials are liquid crystals which have been encapsulated in a polymer matrix and which have a light transmission which is variable with applied electric fields. Because NCAP materials do not require polarizers, their on-state transmission is substantially better than twisted nematic cells. All dimensional tolerances are locked in during the encapsulation process and hence there are no critical sealing or spacing issues. By controlling the polymer/liquid crystal morphology, switching speeds of NCAP materials have been significantly improved over twisted nematic systems. Recent work has combined active matrix addressing with NCAP materials. Active matrices, such as thin film transistors, have given displays of high resolution. The paper will discuss the advantages of NCAP materials specifically designed for operation at video rates on transistor arrays; applications for both backlit and projection displays will be discussed.

  2. An Intensive Presentations Course in English for Aeronautical Engineering Students Using Cyclic Video Recordings

    ERIC Educational Resources Information Center

    Tatzl, Dietmar

    2017-01-01

    This article presents the design and evaluation of an intensive presentations course for aeronautical engineering students based on cyclic video recordings. The target group of this course in English for specific purposes (ESP) were undergraduate final-year students who needed to improve their presentation and foreign language skills to prepare…

  3. Effectiveness of Using a Video Game to Teach a Course in Mechanical Engineering

    ERIC Educational Resources Information Center

    Coller, B. D.; Scott, M. J.

    2009-01-01

    One of the core courses in the undergraduate mechanical engineering curriculum has been completely redesigned. In the new numerical methods course, all assignments and learning experiences are built around a video/computer game. Students are given the task of writing computer programs to race a simulated car around a track. In doing so, students…

  4. Expert Behavior in Children's Video Game Play.

    ERIC Educational Resources Information Center

    VanDeventer, Stephanie S.; White, James A.

    2002-01-01

    Investigates the display of expert behavior by seven outstanding video game-playing children ages 10 and 11. Analyzes observation and debriefing transcripts for evidence of self-monitoring, pattern recognition, principled decision making, qualitative thinking, and superior memory, and discusses implications for educators regarding the development…

  5. Feasibility of dynamic cardiac ultrasound transmission via mobile phone for basic emergency teleconsultation.

    PubMed

    Lim, Tae Ho; Choi, Hyuk Joong; Kang, Bo Seung

    2010-01-01

    We assessed the feasibility of using a camcorder mobile phone for teleconsulting about cardiac echocardiography. The diagnostic performance of evaluating left ventricle (LV) systolic function was measured by three emergency medicine physicians. A total of 138 short echocardiography video sequences (from 70 subjects) were selected from previous emergency room ultrasound examinations. The measurement of LV ejection fraction based on the transmitted video displayed on a mobile phone was compared with the original video displayed on the LCD monitor of the ultrasound machine. The image quality was evaluated using the double stimulus impairment scale (DSIS). All observers showed high sensitivity, and specificity improved with the observer's increasing experience of cardiac ultrasound. Although the image quality of video on the mobile phone was lower than that of the original, a receiver operating characteristic (ROC) analysis indicated that there was no significant difference in diagnostic performance. Immediate basic teleconsulting of echocardiography movies is possible using current commercially-available mobile phone systems.

  6. Video-laryngoscopy introduction in a Sub-Saharan national teaching hospital: luxury or necessity?

    PubMed Central

    Alain, Traoré Ibrahim; Drissa, Barro Sié; Flavien, Kaboré; Serge, Ilboudo; Idriss, Traoré

    2015-01-01

    Tracheal intubation using a Macintosh blade is the technique of choice for securing the airway. It can prove difficult, causing severe complications that may compromise the prognosis for survival or force postponement of the surgical operation. The video-laryngoscope provides a better view of the larynx and good exposure of the glottis, making tracheal intubation simpler than with a conventional laryngoscope. It remains uncommon in sub-Saharan Africa, and particularly in Burkina Faso, because of its high cost. We report our first experiences with the video-laryngoscope in two cases of difficult tracheal intubation that had required postponement of the interventions. The video-laryngoscope made tracheal intubation easier even on first use, owing to the good glottal view it provides and the ease with which its use is learned. It is therefore not a luxury to have it in our therapeutic arsenal. PMID:27047621

  7. Coupled auralization and virtual video for immersive multimedia displays

    NASA Astrophysics Data System (ADS)

    Henderson, Paul D.; Torres, Rendell R.; Shimizu, Yasushi; Radke, Richard; Lonsway, Brian

    2003-04-01

    The implementation of maximally-immersive interactive multimedia in exhibit spaces requires not only the presentation of realistic visual imagery but also the creation of a perceptually accurate aural experience. While conventional implementations treat the audio and video problems as essentially independent, this research seeks to couple the visual sensory information with dynamic auralization in order to enhance perceptual accuracy. A system has been implemented for integrating accurate auralizations with virtual video techniques for both interactive presentation and multi-way communication. The current system utilizes a multi-channel loudspeaker array and real-time signal processing techniques for synthesizing the direct sound, early reflections, and reverberant field excited by a moving sound source whose path may be interactively defined in real-time or derived from coupled video tracking data. In this implementation, any virtual acoustic environment may be synthesized and presented in a perceptually-accurate fashion to many participants over a large listening and viewing area. Subject tests support the hypothesis that the cross-modal coupling of aural and visual displays significantly affects perceptual localization accuracy.
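    The direct-sound component of such an auralization can be sketched with a textbook point-source model: a propagation delay set by the source distance and a 1/r amplitude rolloff. This is only an illustration of the simplest term the abstract mentions; the actual engine also synthesizes early reflections and the reverberant field, and the function name and reference distance below are assumptions:

```python
SPEED_OF_SOUND = 343.0  # m/s in air at room temperature

def direct_sound(distance_m, ref_distance=1.0):
    """Delay (seconds) and linear gain for the direct path from a point
    source at distance_m, using the 1/r law clamped at ref_distance."""
    delay = distance_m / SPEED_OF_SOUND
    gain = ref_distance / max(distance_m, ref_distance)
    return delay, gain
```

    As the tracked source moves, the renderer would update this delay and gain per audio block, alongside the reflection and reverberation terms.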

  8. Image Descriptors for Displays

    DTIC Science & Technology

    1975-03-01

    sampled with composite blanking signal; (c) signal in (a) formed into composite video signal ... Power spectral density of the signals shown in ... Curve A: composite video signal formed from 20 Hz to 2.5 MHz band-limited, Gaussian white noise. Curve B: average spectrum of off-the-air video ... previously. Our experimental procedure was the following. Off-the-air television signals broadcast on VHF channels were analyzed with a commercially

  9. An Augmented Virtuality Display for Improving UAV Usability

    DTIC Science & Technology

    2005-01-01

    cockpit. For a more universally-understood metaphor, we have turned to virtual environments of the type represented in video games. Many of the ... people who have the need to fly UAVs (such as military personnel) have experience with playing video games. They are skilled in navigating virtual ... Another aspect of tailoring the interface to those with video game experience is to use familiar controls. Microsoft has developed a popular and

  10. Toward a 3D video format for auto-stereoscopic displays

    NASA Astrophysics Data System (ADS)

    Vetro, Anthony; Yea, Sehoon; Smolic, Aljoscha

    2008-08-01

    There has been increased momentum recently in the production of 3D content for cinema applications; for the most part, this has been limited to stereo content. There are also a variety of display technologies on the market that support 3DTV, each offering a different viewing experience and having different input requirements. More specifically, stereoscopic displays support stereo content and require glasses, while auto-stereoscopic displays avoid the need for glasses by rendering view-dependent stereo pairs for a multitude of viewing angles. To realize high quality auto-stereoscopic displays, multiple views of the video must either be provided as input to the display, or these views must be created locally at the display. The former approach has difficulties in that the production environment is typically limited to stereo, and transmission bandwidth for a large number of views is not likely to be available. This paper discusses an emerging 3D data format that enables the latter approach to be realized. A new framework for efficiently representing a 3D scene and enabling the reconstruction of an arbitrarily large number of views prior to rendering is introduced. Several design challenges are also highlighted through experimental results.

  11. Enabling Collaboration and Video Assessment: Exposing Trends in Science Preservice Teachers' Assessments

    ERIC Educational Resources Information Center

    Borowczak, Mike; Burrows, Andrea C.

    2016-01-01

    This article details a new, free resource for continuous video assessment named YouDemo. The tool enables real time rating of uploaded YouTube videos for use in science, technology, engineering, and mathematics (STEM) education and beyond. The authors discuss trends of preservice science teachers' assessments of self- and peer-created videos using…

  12. Wide-Field-of-View, High-Resolution, Stereoscopic Imager

    NASA Technical Reports Server (NTRS)

    Prechtl, Eric F.; Sedwick, Raymond J.

    2010-01-01

    A device combines video feeds from multiple cameras to provide wide-field-of-view, high-resolution, stereoscopic video to the user. The prototype under development consists of two camera assemblies, one for each eye. One of these assemblies incorporates a mounting structure with multiple cameras attached at offset angles. The video signals from the cameras are fed to a central processing platform where each frame is color processed and mapped into a single contiguous wide-field-of-view image. Because the resolution of most display devices is typically smaller than the processed map, a cropped portion of the video feed is output to the display device. The positioning of the cropped window will likely be controlled through the use of a head-tracking device, allowing the user to turn his or her head side-to-side or up and down to view different portions of the captured image. There are multiple options for the display of the stereoscopic image. The use of head-mounted displays is one likely implementation; the use of 3D projection technologies is another option under consideration. The technology can be adapted in a multitude of ways. The computing platform is scalable, such that the number, resolution, and sensitivity of the cameras can be leveraged to improve image resolution and field of view. Miniaturization efforts can be pursued to shrink the package down for better mobility. Power-saving studies can be performed to enable unattended, remote sensing packages. Image compression and transmission technologies can be incorporated to enable an improved telepresence experience.
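    The head-tracked cropping described above can be sketched as a mapping from pan/tilt angles to the top-left corner of the cropped window inside the stitched map. The field-of-view figures and the function below are illustrative assumptions, not values from the article:

```python
def crop_window(pan_deg, tilt_deg, map_w, map_h, win_w, win_h,
                hfov=180.0, vfov=90.0):
    """Map head pan/tilt angles to the top-left pixel of the crop window.

    Assumed geometry: the stitched map spans hfov x vfov degrees and the
    angle (0, 0) centers the window on the map.
    """
    # Fraction of the map the head angle points at, centered at 0.5
    fx = 0.5 + pan_deg / hfov
    fy = 0.5 - tilt_deg / vfov  # looking up moves the window toward row 0
    x = int(fx * map_w - win_w / 2)
    y = int(fy * map_h - win_h / 2)
    # Keep the window entirely inside the stitched map
    x = max(0, min(map_w - win_w, x))
    y = max(0, min(map_h - win_h, y))
    return x, y
```

    Each new head-tracker reading would recompute the window origin, and the cropped region is what gets sent to the display.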

  13. [The prevalence and influencing factors of eye diseases for IT industry video operation workers].

    PubMed

    Zhao, Liang-liang; Yu, Yan-yan; Yu, Wen-lan; Xu, Ming; Cao, Wen-dong; Zhang, Hong-bing; Han, Lei; Zhang, Heng-dong

    2013-05-01

    To investigate video exposure and eye disease among IT industry video operation workers, and to analyze the influencing factors, providing scientific evidence for health strategies for these workers. We used random cluster sampling to select 190 IT industry video operation workers in a city of Jiangsu province, analyzing the relations between video exposure and eye disease. The daily video contact time of the workers was 6.0-16.0 hours, with a mean of (10.1 ± 1.8) hours. 79.5% of workers in this survey wore myopic lenses, 35.8% took rests during work, and 14.2% used protective products when their eyes felt unwell. In the BUT (tear break-up time) test, 54.7% of workers had normal results in both eyes, while 45.3% had an abnormal result in at least one eye. Similarly, 54.7% of workers had normal results in both eyes in the SIT test, while 42.1% were abnormal. According to a broad linear model, six factors (daily mean video time, eye-to-display distance, frequency of rest, use of protective products when the eyes felt unwell, display type, and daily time watching TV) had a statistically significant influence on vision. Likewise, six factors (regular rest, sex, corneal transparency, pupil shape, family history, and use of protective products when the eyes felt unwell) significantly influenced the BUT results, and seven factors (computer type, sex, pupil shape, corneal transparency, angle between the display and the worker's line of sight, display type, and height of the work surface) significantly influenced the SIT results. The eye health of IT industry video operation workers is not encouraging, and most workers lack protection awareness; education should be strengthened according to these influencing factors, and the level of medical prevention and control of eye diseases in the relevant industries improved.

  14. Travel guidance system for vehicles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Takanabe, K.; Yamamoto, M.; Ito, K.

    1987-02-24

    A travel guidance system is described for vehicles including: a heading sensor for detecting a direction of movement of a vehicle; a distance sensor for detecting a distance traveled by the vehicle; a map data storage medium preliminarily storing map data; a control unit for receiving a heading signal from the heading sensor and a distance signal from the distance sensor to successively compute a present position of the vehicle and for generating video signals corresponding to display data including map data from the map data storage medium and data of the present position; and a display having first and second display portions and responsive to the video signals from the control unit to display on the first display portion a map and a present position mark, in which: the map data storage medium comprises means for preliminarily storing administrative division name data and landmark data; and the control unit comprises: landmark display means for: (1) determining a landmark closest to the present position, (2) causing a position of the landmark to be displayed on the map and (3) retrieving a landmark message concerning the landmark from the storage medium to cause the display to display the landmark message on the second display portion; division name display means for retrieving the name of an administrative division to which the present position belongs from the storage medium and causing the display to display a division name message on the second display portion; and selection means for selectively actuating at least one of the landmark display means and the division name display means.
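    Step (1) of the landmark display means, determining the landmark closest to the present position, is a nearest-neighbor lookup over the stored landmark records. A minimal Python sketch with toy coordinates; the names, fields, and values are hypothetical stand-ins for the patent's map data storage medium:

```python
import math

# Hypothetical landmark records standing in for the map data storage medium
LANDMARKS = [
    {"name": "City Hall", "x": 2.0, "y": 3.0, "message": "City Hall ahead"},
    {"name": "Station", "x": 5.0, "y": 1.0, "message": "Station nearby"},
]

def nearest_landmark(pos_x, pos_y, landmarks=LANDMARKS):
    """Return the stored landmark closest to the present position."""
    return min(landmarks,
               key=lambda lm: math.hypot(lm["x"] - pos_x, lm["y"] - pos_y))
```

    The control unit would then plot the returned landmark's position on the first display portion and show its `message` on the second.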

  15. Holo-Chidi video concentrator card

    NASA Astrophysics Data System (ADS)

    Nwodoh, Thomas A.; Prabhakar, Aditya; Benton, Stephen A.

    2001-12-01

    The Holo-Chidi Video Concentrator Card is a frame buffer for the Holo-Chidi holographic video processing system. Holo-Chidi is designed at the MIT Media Laboratory for real-time computation of computer generated holograms and the subsequent display of the holograms at video frame rates. The Holo-Chidi system is made of two sets of cards - the set of Processor cards and the set of Video Concentrator Cards (VCCs). The Processor cards are used for hologram computation, data archival/retrieval from a host system, and for higher-level control of the VCCs. The VCC formats computed holographic data from multiple hologram-computing Processor cards, converting the digital data to analog form to feed the acousto-optic modulators of the Media Lab's Mark-II holographic display system. The Video Concentrator Card is made of: a High-Speed I/O (HSIO) interface through which data is transferred from the hologram-computing Processor cards, a set of FIFOs and video RAM used as buffer for data for the hololines being displayed, a one-chip integrated microprocessor and peripheral combination that handles communication with other VCCs and furnishes the card with a USB port, a co-processor which controls display data formatting, and D-to-A converters that convert digital fringes to analog form. The co-processor is implemented with an SRAM-based FPGA with over 500,000 gates and controls all the signals needed to format the data from the multiple Processor cards into the format required by Mark-II. A VCC has three HSIO ports through which up to 500 Megabytes of computed holographic data can flow from the Processor cards to the VCC per second. A Holo-Chidi system with three VCCs has enough frame buffering capacity to hold up to thirty-two 36-Megabyte hologram frames at a time. Pre-computed holograms may also be loaded into the VCC from a host computer through the low-speed USB port. Both the microprocessor and the co-processor in the VCC can access the main system memory used to store control programs and data for the VCC. The card also generates the control signals used by the scanning mirrors of Mark-II. In this paper we discuss the design of the VCC and its implementation in the Holo-Chidi system.

  16. Method and apparatus for telemetry adaptive bandwidth compression

    NASA Technical Reports Server (NTRS)

    Graham, Olin L.

    1987-01-01

    Methods and apparatus are provided for automatic and/or manual adaptive bandwidth compression of telemetry. An adaptive sampler samples a video signal from a scanning sensor and generates a sequence of sampled fields. Each field and range rate information from the sensor are then sequentially transmitted to and stored in a multiple and adaptive field storage means. The field storage means then, in response to an automatic or manual control signal, transfers the stored sampled field signals to a video monitor in a form for sequential or simultaneous display of a desired number of stored signal fields. The sampling ratio of the adaptive sampler, the relative proportion of available communication bandwidth allocated respectively to transmitted data and video information, and the number of fields simultaneously displayed are manually or automatically selectively adjustable in functional relationship to each other and the detected range rate. In one embodiment, when relatively little or no scene motion is detected, the control signal maximizes the sampling ratio and causes simultaneous display of all stored fields, thus maximizing resolution and bandwidth available for data transmission. When increased scene motion is detected, the control signal is adjusted accordingly to cause display of fewer fields. If greater resolution is desired, the control signal is adjusted to increase the sampling ratio.
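    The control policy in the abstract (little scene motion: maximize the sampling ratio and display all stored fields; more motion: display fewer fields) can be sketched as a small function. The normalization, field counts, and maximum ratio below are illustrative assumptions, not values from the patent:

```python
def adapt_compression(scene_motion, max_ratio=8, max_fields=8):
    """Choose a sampling ratio and a number of simultaneously displayed
    fields from a normalized scene-motion estimate in [0, 1].

    Low motion -> maximum ratio and all stored fields shown;
    high motion -> fewer fields, trading resolution for update rate.
    """
    scene_motion = max(0.0, min(1.0, scene_motion))
    # High motion reduces the number of fields displayed at once
    fields = max(1, round(max_fields * (1.0 - scene_motion)))
    # Sampling ratio tracks the number of fields shown
    ratio = max(1, round(max_ratio * fields / max_fields))
    return ratio, fields
```

    In the automatic mode described, `scene_motion` would be derived from the detected range rate; in manual mode the operator sets the control signal directly.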

  17. Analysis and Selection of a Remote Docking Simulation Visual Display System

    NASA Technical Reports Server (NTRS)

    Shields, N., Jr.; Fagg, M. F.

    1984-01-01

    The development of a remote docking simulation visual display system is examined. Video system and operator performance are discussed as well as operator command and control requirements and a design analysis of the reconfigurable work station.

  18. MEMS-based flexible reflective analog modulators (FRAM) for projection displays: a technology review and scale-down study

    NASA Astrophysics Data System (ADS)

    Picard, Francis; Ilias, Samir; Asselin, Daniel; Boucher, Marc-André; Duchesne, François; Jacob, Michel; Larouche, Carl; Vachon, Carl; Niall, Keith K.; Jerominek, Hubert

    2011-02-01

    A MEMS-based technology for projection displays is reviewed. This technology relies on mechanically flexible, reflective microbridges made of an aluminum alloy. A linear array of such micromirrors is combined with illumination and Schlieren optics to produce a line of pixels. Each microbridge in the array is individually controlled using electrostatic actuation to adjust pixel intensities. Results of the simulation, fabrication, and characterization of these microdevices are presented. Activation voltages below 250 V with response times below 10 μs were obtained for 25 μm × 25 μm micromirrors. With appropriate actuation voltage waveforms, response times of 5 μs and less are achievable. A damage threshold of the mirrors above 8 kW/cm2 has been evaluated. Development of the technology has produced projector engines demonstrating this light modulation principle. The most recent of these engines is DVI compatible and displays VGA video streams at 60 Hz. Recently, applications have emerged that impose more stringent requirements on the dimensions of the MEMS array and the associated optical system. This triggered a scale-down study to evaluate the minimum achievable micromirror size, the impact of this reduced size on the damage threshold, and the achievable minimum size of the associated optical system. Preliminary results of this scale-down study are reported. FRAM devices with an active surface as small as 5 μm × 5 μm have been investigated. Simulations have shown that such micromirrors could be activated with 107 V to achieve an f-number of 1.25. The damage threshold has been estimated for various FRAM sizes. Finally, the design of a conceptual miniaturized projector based on a 1000×1 array of 5 μm × 5 μm micromirrors is presented. The volume of this projector concept is about 12 cm3.

  19. Eavesdropping and signal matching in visual courtship displays of spiders.

    PubMed

    Clark, David L; Roberts, J Andrew; Uetz, George W

    2012-06-23

    Eavesdropping on communication is widespread among animals, e.g. bystanders observing male-male contests, female mate choice copying and predator detection of prey cues. Some animals also exhibit signal matching, e.g. overlapping of competitors' acoustic signals in aggressive interactions. Fewer studies have examined male eavesdropping on conspecific courtship, although males could increase mating success by attending to others' behaviour and displaying whenever courtship is detected. In this study, we show that field-experienced male Schizocosa ocreata wolf spiders exhibit eavesdropping and signal matching when exposed to video playback of courting male conspecifics. Male spiders had longer bouts of interaction with a courting male stimulus, and more bouts of courtship signalling during and after the presence of a male on the video screen. Rates of courtship (leg tapping) displayed by individual focal males were correlated with the rates of the video exemplar to which they were exposed. These findings suggest male wolf spiders might gain information by eavesdropping on conspecific courtship and adjust performance to match that of rivals. This represents a novel finding, as these behaviours have previously been seen primarily among vertebrates.

  20. Eavesdropping and signal matching in visual courtship displays of spiders

    PubMed Central

    Clark, David L.; Roberts, J. Andrew; Uetz, George W.

    2012-01-01

    Eavesdropping on communication is widespread among animals, e.g. bystanders observing male–male contests, female mate choice copying and predator detection of prey cues. Some animals also exhibit signal matching, e.g. overlapping of competitors' acoustic signals in aggressive interactions. Fewer studies have examined male eavesdropping on conspecific courtship, although males could increase mating success by attending to others' behaviour and displaying whenever courtship is detected. In this study, we show that field-experienced male Schizocosa ocreata wolf spiders exhibit eavesdropping and signal matching when exposed to video playback of courting male conspecifics. Male spiders had longer bouts of interaction with a courting male stimulus, and more bouts of courtship signalling during and after the presence of a male on the video screen. Rates of courtship (leg tapping) displayed by individual focal males were correlated with the rates of the video exemplar to which they were exposed. These findings suggest male wolf spiders might gain information by eavesdropping on conspecific courtship and adjust performance to match that of rivals. This represents a novel finding, as these behaviours have previously been seen primarily among vertebrates. PMID:22219390

  1. 19. SITE BUILDING 002 SCANNER BUILDING AIR POLICE ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    19. SITE BUILDING 002 - SCANNER BUILDING - AIR POLICE SITE SECURITY OFFICE WITH "SITE PERIMETER STATUS PANEL" AND REAL TIME VIDEO DISPLAY OUTPUT FROM VIDEO CAMERA SYSTEM AT SECURITY FENCE LOCATIONS. - Cape Cod Air Station, Technical Facility-Scanner Building & Power Plant, Massachusetts Military Reservation, Sandwich, Barnstable County, MA

  2. Video bandwidth compression system

    NASA Astrophysics Data System (ADS)

    Ludington, D.

    1980-08-01

    The objective of this program was the development of a Video Bandwidth Compression brassboard model for use by the Air Force Avionics Laboratory, Wright-Patterson Air Force Base, in evaluation of bandwidth compression techniques for use in tactical weapons and to aid in the selection of particular operational modes to be implemented in an advanced flyable model. The bandwidth compression system is partitioned into two major divisions: the encoder, which processes the input video with a compression algorithm and transmits the most significant information; and the decoder where the compressed data is reconstructed into a video image for display.

  3. Fractional screen video enhancement apparatus

    DOEpatents

    Spletzer, Barry L [Albuquerque, NM; Davidson, George S [Albuquerque, NM; Zimmerer, Daniel J [Tijeras, NM; Marron, Lisa C [Albuquerque, NM

    2005-07-19

    The present invention provides a method and apparatus for displaying two portions of an image at two resolutions. For example, the invention can display an entire image at a first resolution, and a subset of the image at a second, higher resolution. Two inexpensive, low-resolution displays can thus be used to produce a large image with high resolution only where needed.
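    The two-portion idea can be sketched directly: one output decimates the whole image for the low-resolution overview display, the other crops the region of interest at full resolution. A minimal Python sketch over a 2-D list of pixels; the function name and box convention are assumptions for illustration:

```python
def dual_resolution_views(image, box, factor=4):
    """Split an image (2-D list of pixels) into a decimated overview and a
    full-resolution crop of the region box = (row, col, height, width)."""
    r, c, h, w = box
    # Low-resolution overview: keep every `factor`-th pixel in each axis
    overview = [row[::factor] for row in image[::factor]]
    # Full-resolution detail window
    detail = [row[c:c + w] for row in image[r:r + h]]
    return overview, detail
```

    Each returned view would drive one of the two displays, giving high resolution only inside the region of interest.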

  4. A Comparison of a Traditional Lecture-Based and Online Supplemental Video and Lecture-Based Approach in an Engineering Statics Class

    ERIC Educational Resources Information Center

    Halupa, Colleen M.; Caldwell, Benjamin W.

    2015-01-01

    This quasi-experimental research study evaluated two intact undergraduate engineering statics classes at a private university in Texas. Students in the control group received traditional lecture, readings and homework assignments. Those in the experimental group also were given access to a complete set of online video lectures and videos…

  5. Video Outreach Graduate Program.

    ERIC Educational Resources Information Center

    Rigas, Anthony L.

    The University of Idaho's video outreach graduate program is described. The program is designed to provide continuing education, credit courses, and graduate degree-granting programs anywhere in the state by producing these programs on video cassette and Betamax formats. Presently, the Master of Engineering in Electrical and Mechanical Engineering…

  6. Dementia and Robotics: People with Advancing Dementia and Their Carers Driving an Exploration into an Engineering Solution to Maintaining Safe Exercise Regimes.

    PubMed

    Cooper, Carol; Penders, Jacques; Procter, Paula M

    2016-01-01

    The merging of the human world and the information technology world is advancing apace; even for those with dementia there are many useful smartphone applications, including reminders, family picture displays, GPS functions, and video communications. This paper reports upon initial collaborative work developing a robotic solution to engaging individuals with advancing dementia in safe exercise regimes. The research team has been driven by the needs of people with advancing dementia and their carers through a focus group methodology; the format, discussions, and outcomes of these groups will be reported. The plans for the next stage of the research will be outlined, including the continuing collaboration with people with advancing dementia and their carers.

  7. Exploiting spatio-temporal characteristics of human vision for mobile video applications

    NASA Astrophysics Data System (ADS)

    Jillani, Rashad; Kalva, Hari

    2008-08-01

    Video applications on handheld devices such as smart phones pose a significant challenge to achieving a high-quality user experience. Recent advances in processor and wireless networking technology are producing a new class of multimedia applications (e.g. video streaming) for mobile handheld devices. These devices are lightweight and of modest size, and therefore have very limited resources - lower processing power, smaller display resolution, less memory, and limited battery life compared to desktop and laptop systems. Multimedia applications, on the other hand, have extensive processing requirements which make them extremely resource-hungry on mobile devices. In addition, device-specific properties (e.g. the display screen) significantly influence the human perception of multimedia quality. In this paper we propose a saliency-based framework that exploits the structure in content creation as well as the human vision system to find the salient points in the incoming bitstream and adapt it to the target device, thus improving the quality of the newly adapted area around salient points. Our experimental results indicate that an adaptation process that is cognizant of video content and user preferences can produce video of better perceptual quality for mobile devices. Furthermore, we demonstrate how such a framework can affect user experience on a handheld device.
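    The adaptation step can be sketched as choosing, per frame, a device-sized crop centered on the most salient point. This is a deliberate simplification of the proposed framework, which operates on the compressed bitstream; the function and its signature are assumptions:

```python
def adapt_to_device(frame_w, frame_h, sx, sy, dev_w, dev_h):
    """Pick the crop of a source frame to send to a small display,
    centered on a salient point (sx, sy), clamped to the frame."""
    x = max(0, min(frame_w - dev_w, sx - dev_w // 2))
    y = max(0, min(frame_h - dev_h, sy - dev_h // 2))
    return x, y, dev_w, dev_h
```

    A full pipeline would smooth the crop trajectory over time so the window does not jump between salient points frame to frame.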

  8. Highly Reflective Multi-stable Electrofluidic Display Pixels

    NASA Astrophysics Data System (ADS)

    Yang, Shu

    Electronic papers (E-papers) are displays that mimic the appearance of printed paper while retaining the features of conventional electronic displays, such as the ability to browse websites and play videos. The motivation for creating paper-like displays stems from the facts that reading on paper causes the least eye fatigue, owing to paper's reflective and light-diffusive nature, and that, unlike existing commercial displays, no energy of any form is needed to sustain the displayed image. To achieve the visual effect of a paper print, an ideal E-paper has to be highly reflective with good contrast ratio and full-color capability. To sustain the image with zero power consumption, the display pixels need to be bistable, meaning the "on" and "off" states are both lowest-energy states; a pixel can change its state only when sufficient external energy is supplied. Many emerging technologies are competing to demonstrate the first ideal E-paper device, but none has achieved satisfactory visual effect, bistability and video speed at the same time. The challenges come from either inherent physical/chemical properties or the fabrication process. Electrofluidic display is one of the most promising E-paper technologies. It has demonstrated high reflectivity, brilliant color and video-speed operation by moving a colored pigment dispersion between visible and invisible regions with electrowetting force. However, its pixel design did not allow image bistability. Presented in this dissertation are multi-stable electrofluidic display pixels that can sustain grayscale levels without any power consumption while keeping the favorable features of the previous-generation electrofluidic display. The pixel design, a fabrication method using multiple-layer dry-film photoresist lamination, and physical/optical characterizations are discussed in detail.
Based on the pixel structure, preliminary results of a simplified design and fabrication method are demonstrated. As advanced research topics regarding the device's optical performance, an optical model for evaluating the light out-coupling efficiency of reflective displays is first established to guide the pixel design; furthermore, aluminum surface diffusers are analytically modeled and then fabricated onto multi-stable electrofluidic display pixels to demonstrate truly "white" multi-stable electrofluidic display modules. These results establish the multi-stable electrofluidic display as an excellent candidate for the ultimate E-paper device, especially for large-scale signage applications.

  9. Display aids for remote control of untethered undersea vehicles

    NASA Technical Reports Server (NTRS)

    Verplank, W. L.

    1978-01-01

    A predictor display superimposed on slow-scan video or sonar data is proposed as a method to allow better remote manual control of an untethered submersible. Simulation experiments show good control under circumstances which otherwise make control practically impossible.
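
    The predictor idea this record describes can be sketched as simple dead reckoning: given the vehicle's last known state and the delay of the slow-scan link, the display marks where the vehicle will be when a command takes effect. The function name, the constant-velocity model, and the delay value below are illustrative, not the paper's actual predictor.

```python
def predict_position(pos, vel, delay_s):
    """Constant-velocity dead reckoning: extrapolate the last known
    2D position forward by the communication delay (seconds)."""
    return (pos[0] + vel[0] * delay_s, pos[1] + vel[1] * delay_s)

# Predicted marker to superimpose on the video/sonar frame.
marker = predict_position(pos=(10.0, 5.0), vel=(0.5, -0.25), delay_s=4.0)
# -> (12.0, 4.0)
```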

  10. Development of a Low Cost Graphics Terminal.

    ERIC Educational Resources Information Center

    Lehr, Ted

    1985-01-01

    Describes modifications made to expand the capabilities of a display unit (Lear Siegler ADM-3A) to include medium-resolution graphics. The modifying circuitry is detailed along with software subroutines written in Z-80 machine language for controlling the video display. (JN)

  11. The Video PATSEARCH System: An Interview with Peter Urbach.

    ERIC Educational Resources Information Center

    Videodisc/Videotext, 1982

    1982-01-01

    The Video PATSEARCH system consists of a microcomputer with a special keyboard and two display screens which accesses the PATSEARCH database of United States government patents on the Bibliographic Retrieval Services (BRS) search system. The microcomputer retrieves text from BRS and matching graphics from an analog optical videodisc. (Author/JJD)

  12. Preliminary experience with a stereoscopic video system in a remotely piloted aircraft application

    NASA Technical Reports Server (NTRS)

    Rezek, T. W.

    1983-01-01

    Remote piloting video display development at the Dryden Flight Research Facility of NASA's Ames Research Center is summarized, and the reasons for considering stereo television are presented. Pertinent equipment is described. Limited flight experience is also discussed, along with recommendations for further study.

  13. Comparing Pictures and Videos for Teaching Action Labels to Children with Communication Delays

    ERIC Educational Resources Information Center

    Schebell, Shannon; Shepley, Collin; Mataras, Theologia; Wunderlich, Kara

    2018-01-01

    Children with communication delays often display difficulties labeling stimuli in their environment, particularly related to actions. Research supports direct instruction with video and picture stimuli for increasing children's action labeling repertoires; however, no studies have compared which type of stimuli results in more efficient,…

  14. Microgravity

    NASA Image and Video Library

    1996-01-01

    Ted Brunzie and Peter Mason observe the float package and the data rack aboard the DC-9 reduced gravity aircraft. The float package contains a cryostat, a video camera, a pump and accelerometers. The data rack displays and records the video signal from the float package on tape and stores acceleration and temperature measurements on disk.

  15. Inspired to Work

    ERIC Educational Resources Information Center

    Krumboltz, John D.; Babineaux, Ryan; Wientjes, Greg

    2010-01-01

    The supply of occupational information appears to exceed the demand. A website displaying over 100 videos about various occupations was created to help career searchers find attractive alternatives. Access to the videos was free for anyone in the world. It had been hoped that many thousands of people would make use of the resource. However, the…

  16. On-line content creation for photo products: understanding what the user wants

    NASA Astrophysics Data System (ADS)

    Fageth, Reiner

    2015-03-01

    This paper describes how videos can be implemented into printed photo books and greeting cards. We will show that, surprisingly or not, pictures from videos are used much like classical images to tell compelling stories. Videos can be taken with nearly every camera: digital point-and-shoot cameras, DSLRs, as well as smartphones, and increasingly with so-called action cameras mounted on sports devices. The implementation of videos, generating QR codes and extracting relevant pictures from the video stream via a software implementation, was the content of last year's paper. This year we present first data about what content is displayed and how users represent their videos in printed products, e.g. CEWE PHOTOBOOKS and greeting cards. We report the share of the different video formats used.

  17. Design and implementation of H.264 based embedded video coding technology

    NASA Astrophysics Data System (ADS)

    Mao, Jian; Liu, Jinming; Zhang, Jiemin

    2016-03-01

    In this paper, an embedded system for remote online video monitoring was designed and developed to capture and record real-time conditions in an elevator. To improve the efficiency of video acquisition and processing, the system uses the Samsung S5PV210 chip, which integrates a graphics processing unit, as the core processor, and the video is encoded in H.264 format for efficient storage and transmission. Based on the S5PV210 chip, hardware video coding technology was researched, which is more efficient than software coding. Running tests proved that hardware video coding can markedly reduce system cost and yield smoother video display. It can be widely applied to security supervision [1].

  18. Automatic view synthesis by image-domain-warping.

    PubMed

    Stefanoski, Nikolce; Wang, Oliver; Lang, Manuel; Greisen, Pierre; Heinzle, Simon; Smolic, Aljosa

    2013-09-01

    Today, stereoscopic 3D (S3D) cinema is already mainstream, and almost all new display devices for the home support S3D content. S3D distribution infrastructure to the home is already partly established in the form of 3D Blu-ray discs, video-on-demand services, and television channels. The necessity to wear glasses, however, is often considered an obstacle that hinders broader acceptance of this technology in the home. Multiview autostereoscopic displays enable glasses-free perception of S3D content for several observers simultaneously, and support head-motion parallax within a limited range. To support multiview autostereoscopic displays in an already established S3D distribution infrastructure, a synthesis of new views from S3D video is needed. In this paper, a view synthesis method based on image-domain-warping (IDW) is presented that synthesizes new views directly from S3D video and operates fully automatically. IDW relies on an automatic and robust estimation of sparse disparities and image saliency information, and enforces target disparities in synthesized images using an image warping framework. Two configurations of the view synthesizer in the scope of a transmission and view synthesis framework are analyzed and evaluated. A transmission and view synthesis system using IDW was recently submitted to MPEG's call for proposals on 3D video technology, where it was ranked among the four best-performing proposals.
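
    The resampling at the heart of such a warp can be illustrated with a drastically simplified sketch: each output pixel samples the input shifted horizontally by a disparity. The function name and the constant per-pixel disparity map below are illustrative; actual IDW solves for a smooth warp field from sparse disparities and saliency rather than applying a fixed shift.

```python
import numpy as np

def warp_horizontal(image, disparity):
    """Backward-warp an image horizontally: each output pixel (y, x)
    samples the input at column x - disparity[y, x], rounded and
    clamped to the image bounds."""
    h, w = image.shape[:2]
    xs = np.arange(w)[None, :] - disparity        # source column per pixel
    xs = np.clip(np.rint(xs).astype(int), 0, w - 1)
    rows = np.arange(h)[:, None]                  # broadcast row indices
    return image[rows, xs]

# Toy image whose pixel value equals its column index.
img = np.tile(np.arange(8, dtype=np.uint8), (4, 1))
shifted = warp_horizontal(img, np.full((4, 8), 2.0))  # shift right by 2 px
```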

  19. Comparative analysis of video processing and 3D rendering for cloud video games using different virtualization technologies

    NASA Astrophysics Data System (ADS)

    Bada, Adedayo; Alcaraz-Calero, Jose M.; Wang, Qi; Grecos, Christos

    2014-05-01

    This paper describes a comprehensive empirical performance evaluation of 3D video processing employing a physical/virtual architecture implemented in a cloud environment. Different virtualization technologies, virtual video cards and various 3D benchmark tools have been utilized in order to analyse the optimal performance in the context of 3D online gaming applications. This study highlights 3D video rendering performance under each type of hypervisor, as well as other factors including network I/O, disk I/O and memory usage. Comparisons of these factors under well-known virtual display technologies such as VNC, Spice and virtual 3D adaptors reveal the strengths and weaknesses of the various hypervisors with respect to 3D video rendering and streaming.

  20. Improving School Lighting for Video Display Units.

    ERIC Educational Resources Information Center

    Parker-Jenkins, Marie; Parker-Jenkins, William

    1985-01-01

    Provides information to identify and implement the key characteristics which contribute to an efficient and comfortable visual display unit (VDU) lighting installation. Areas addressed include VDU lighting requirements, glare, lighting controls, VDU environment, lighting retrofit, optical filters, and lighting recommendations. A checklist to…

  1. When less is best: female brown-headed cowbirds prefer less intense male displays.

    PubMed

    O'Loghlen, Adrian L; Rothstein, Stephen I

    2012-01-01

    Sexual selection theory predicts that females should prefer males with the most intense courtship displays. However, wing-spread song displays that male brown-headed cowbirds (Molothrus ater) direct at females are generally less intense than versions of this display that are directed at other males. Because male-directed displays are used in aggressive signaling, we hypothesized that females should prefer lower intensity performances of this display. To test this hypothesis, we played audiovisual recordings showing the same males performing both high intensity male-directed and low intensity female-directed displays to females (N = 8) and recorded the females' copulation solicitation display (CSD) responses. All eight females responded strongly to both categories of playbacks but were more sexually stimulated by the low intensity female-directed displays. Because each pair of high and low intensity playback videos had the exact same audio track, the divergent responses of females must have been based on differences in the visual content of the displays shown in the videos. Preferences female cowbirds show in acoustic CSD studies are correlated with mate choice in field and captivity studies and this is also likely to be true for preferences elucidated by playback of audiovisual displays. Female preferences for low intensity female-directed displays may explain why male cowbirds rarely use high intensity displays when signaling to females. Repetitive high intensity displays may demonstrate a male's current condition and explain why these displays are used in male-male interactions which can escalate into physical fights in which males in poorer condition could be injured or killed. This is the first study in songbirds to use audiovisual playbacks to assess how female sexual behavior varies in response to variation in a male visual display.

  2. Game On, Science - How Video Game Technology May Help Biologists Tackle Visualization Challenges

    PubMed Central

    Da Silva, Franck; Empereur-mot, Charly; Chavent, Matthieu; Baaden, Marc

    2013-01-01

    The video games industry develops ever more advanced technologies to improve the rendering, image quality, ergonomics and user experience of its creations, providing very simple-to-use tools to design new games. In the molecular sciences, only a small number of experts with specialized know-how are able to design interactive visualization applications, typically static computer programs that cannot easily be modified. Are there lessons to be learned from video games? Could their technology help us explore new molecular graphics ideas and render graphics developments accessible to non-specialists? This approach points to an extension of open computer programs, not only providing access to the source code, but also delivering an easily modifiable and extensible scientific research tool. In this work, we explore these questions using the Unity3D game engine to develop and prototype a biological network and molecular visualization application for subsequent use in research or education. We have compared several routines to represent spheres and links between them, using either built-in Unity3D features or our own implementation. These developments resulted in a stand-alone viewer capable of displaying molecular structures, surfaces, animated electrostatic field lines and biological networks with powerful, artistic and illustrative rendering methods. We consider this work a proof of principle demonstrating that the functionalities of classical viewers and more advanced novel features can be implemented in substantially less time and with less development effort. Our prototype is easily modifiable and extensible and may serve others as a starting point and platform for their own developments. A webserver example, standalone versions for MacOS X, Linux and Windows, source code, screenshots, videos and documentation are available at the address: http://unitymol.sourceforge.net/. PMID:23483961

  3. Simulating video-assisted thoracoscopic lobectomy: a virtual reality cognitive task simulation.

    PubMed

    Solomon, Brian; Bizekis, Costas; Dellis, Sophia L; Donington, Jessica S; Oliker, Aaron; Balsam, Leora B; Zervos, Michael; Galloway, Aubrey C; Pass, Harvey; Grossi, Eugene A

    2011-01-01

    Current video-assisted thoracoscopic surgery training models rely on animals or mannequins to teach procedural skills. These approaches lack inherent teaching/testing capability and are limited by cost, anatomic variations, and single use. In response, we hypothesized that video-assisted thoracoscopic surgery right upper lobe resection could be simulated in a virtual reality environment with commercial software. An anatomy explorer (Maya [Autodesk Inc, San Rafael, Calif] models of the chest and hilar structures) and simulation engine were adapted. Design goals included freedom of port placement, incorporation of well-known anatomic variants, teaching and testing modes, haptic feedback for the dissection, ability to perform the anatomic divisions, and a portable platform. Preexisting commercial models did not provide sufficient surgical detail, and extensive modeling modifications were required. Video-assisted thoracoscopic surgery right upper lobe resection simulation is initiated with a random vein and artery variation. The trainee proceeds in a teaching or testing mode. A knowledge database currently includes 13 anatomic identifications and 20 high-yield lung cancer learning points. The "patient" is presented in the left lateral decubitus position. After initial camera port placement, the endoscopic view is displayed and the thoracoscope is manipulated via the haptic device. The thoracoscope port can be relocated; additional ports are placed using an external "operating room" view. Unrestricted endoscopic exploration of the thorax is allowed. An endo-dissector tool allows for hilar dissection, and a virtual stapling device divides structures. The trainee's performance is reported. A virtual reality cognitive task simulation can overcome the deficiencies of existing training models. Performance scoring is being validated as we assess this simulator for cognitive and technical surgical education. Copyright © 2011. Published by Mosby, Inc.

  4. Game on, science - how video game technology may help biologists tackle visualization challenges.

    PubMed

    Lv, Zhihan; Tek, Alex; Da Silva, Franck; Empereur-mot, Charly; Chavent, Matthieu; Baaden, Marc

    2013-01-01

    The video games industry develops ever more advanced technologies to improve the rendering, image quality, ergonomics and user experience of its creations, providing very simple-to-use tools to design new games. In the molecular sciences, only a small number of experts with specialized know-how are able to design interactive visualization applications, typically static computer programs that cannot easily be modified. Are there lessons to be learned from video games? Could their technology help us explore new molecular graphics ideas and render graphics developments accessible to non-specialists? This approach points to an extension of open computer programs, not only providing access to the source code, but also delivering an easily modifiable and extensible scientific research tool. In this work, we explore these questions using the Unity3D game engine to develop and prototype a biological network and molecular visualization application for subsequent use in research or education. We have compared several routines to represent spheres and links between them, using either built-in Unity3D features or our own implementation. These developments resulted in a stand-alone viewer capable of displaying molecular structures, surfaces, animated electrostatic field lines and biological networks with powerful, artistic and illustrative rendering methods. We consider this work a proof of principle demonstrating that the functionalities of classical viewers and more advanced novel features can be implemented in substantially less time and with less development effort. Our prototype is easily modifiable and extensible and may serve others as a starting point and platform for their own developments. A webserver example, standalone versions for MacOS X, Linux and Windows, source code, screenshots, videos and documentation are available at the address: http://unitymol.sourceforge.net/.

  5. Communicating Science on YouTube and Beyond: OSIRIS-REx Presents 321Science!

    NASA Astrophysics Data System (ADS)

    Spitz, Anna H.; Dykhuis, Melissa; Platts, Symeon; Keane, James T.; Tanquary, Hannah E.; Zellem, Robert; Hawley, Tiffany; Lauretta, Dante; Beshore, Ed; Bottke, Bill; Hergenrother, Carl; Dworkin, Jason P.; Patchell, Rose; Spitz, Sarah E.; Bentley, Zoe

    2014-11-01

    NASA’s OSIRIS-REx asteroid sample return mission launched OSIRIS-REx Presents 321Science!, a series of short videos, in December 2013 at youtube.com/osirisrex. A multi-disciplinary team of communicators, film and graphic arts students, teens, scientists, and engineers produces one video per month on a science and engineering topic related to the OSIRIS-REx mission. The format is designed to engage all members of the public, especially younger audiences, with the science and engineering of the mission. The videos serve as a resource for team members and others, complementing more traditional formats such as formal video interviews, mission animations, and hands-on activities. In creating this new form of OSIRIS-REx engagement, we developed 321Science! as an umbrella program to encourage expansion of the concept and topics beyond the OSIRIS-REx mission through partnerships. Such an expansion strengthens and magnifies the reach of the OSIRIS-REx efforts. 321Science! has a detailed proposed schedule of video production through launch in 2016. Production plans are categorized to coincide with the course of the mission, beginning with Learning the basics (about asteroids and the mission) and proceeding to Building the spacecraft, Run up to launch, Cruising to Bennu, Run up to rendezvous, Mapping Bennu, Sampling, Analyzing data, Cruising home, and Returning and analyzing the sample. The video library will host a combination of videos on broad science topics and short specialized concepts with an average length of 2-3 minutes. Video production also takes into account external events, such as other missions’ milestones, to draw attention to our videos. Production will remain flexible and responsive to audience interests and needs and to developments in the mission, science, and external events. As of August 2014, 321Science! videos have over 22,000 views.
We use YouTube analytics to evaluate our success and we are investigating additional and more rigorous evaluation methods for future analysis.

  6. Enabling high grayscale resolution displays and accurate response time measurements on conventional computers.

    PubMed

    Li, Xiangrui; Lu, Zhong-Lin

    2012-02-29

    Display systems based on conventional computer graphics cards are capable of generating images with 8-bit gray level resolution. However, most experiments in vision research require displays with more than 12 bits of luminance resolution. Several solutions are available. Bits++ (1) and DataPixx (2) use the Digital Visual Interface (DVI) output from graphics cards and high-resolution (14- or 16-bit) digital-to-analog converters to drive analog display devices. The VideoSwitcher (3) described here combines analog video signals from the red and blue channels of graphics cards with different weights using a passive resistor network (4) and an active circuit to deliver identical video signals to the three channels of color monitors. The method provides an inexpensive way to enable high-resolution monochromatic displays using conventional graphics cards and analog monitors. It can also provide trigger signals that can be used to mark stimulus onsets, making it easy to synchronize visual displays with physiological recordings or response time measurements. Although computer keyboards and mice are frequently used in measuring response times (RT), the accuracy of these measurements is quite low. The RTbox is a specialized hardware and software solution for accurate RT measurements. Connected to the host computer through a USB connection, the driver of the RTbox is compatible with all conventional operating systems. It uses a microprocessor and high-resolution clock to record the identities and timing of button events, which are buffered until the host computer retrieves them. The recorded button events are not affected by potential timing uncertainties or biases associated with data transmission and processing in the host computer. The asynchronous storage greatly simplifies the design of user programs. Several methods are available to synchronize the clocks of the RTbox and the host computer.
The RTbox can also receive external triggers and be used to measure RT with respect to external events. Both VideoSwitcher and RTbox are available for users to purchase. The relevant information and many demonstration programs can be found at http://lobes.usc.edu/.
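
    The channel-combination principle this record describes can be sketched as follows: if the monitor effectively sums the two channels as coarse + fine/ratio, a high-resolution luminance level can be split into two 8-bit drive values. The function and the ratio of 128 below are illustrative; the actual mixing ratio is fixed by the resistor network.

```python
def split_luminance(level, ratio=128):
    """Split a high-resolution luminance level (in 8-bit units, i.e.
    0 up to roughly 255 + 255/ratio) into an 8-bit coarse value for
    the heavily weighted channel and an 8-bit fine value for the
    attenuated channel. With the channels mixed as coarse + fine/ratio,
    far more than 256 distinct luminance levels become addressable."""
    coarse = min(int(level), 255)
    fine = round((level - coarse) * ratio)
    return coarse, min(fine, 255)

coarse, fine = split_luminance(100.5)   # -> (100, 64)
```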

  7. Fuels Performance | Transportation Research | NREL

    Science.gov Websites

    Video Promotes Safe CNG Tank Decommissioning Practices: a video on CNG fuel tank defueling instructs transit agencies and others about safe CNG tank end-of-life practices. The video was previewed at ... Biodiesel Performance in Modern Engines: NREL is working cooperatively with the National Biodiesel Board on ...

  8. Modern Display Technologies for Airborne Applications.

    DTIC Science & Technology

    1983-04-01

    the case of LED head-down direct view displays, this requires that special attention be paid to the optical filtering, the electrical drive/address...effectively attenuates the LED specular reflectance component, the colour and neutral density filtering attenuate the diffuse component and the... filter techniques are planned for use with video, multi-colour and advanced versions of numeric, alphanumeric and graphic displays; this technique

  9. Payload specialist station study. Part 2: CEI specifications (part 1). [space shuttles

    NASA Technical Reports Server (NTRS)

    1976-01-01

    The performance, design, and verification specifications are established for the multifunction display system (MFDS) to be located at the payload station in the shuttle orbiter aft flight deck. The system provides the display units (with video, alphanumerics, and graphics capabilities), the associated electronic units, and the keyboards in support of the payload-dedicated controls and displays concept.

  10. Time-Lapse Video of SLS Engine Section Test Article Being Stacked at Michoud

    NASA Image and Video Library

    2017-04-25

    This time-lapse video shows the Space Launch System engine section structural qualification test article being stacked at NASA's Michoud Assembly Facility in New Orleans. The rocket's engine section is the bottom of the core stage and houses the four RS-25 engines. The engine section test article was moved to Michoud's Cell A in Building 110 for vertical stacking with hardware that simulates the rocket's liquid hydrogen tank, which is the fuel tank that joins to the engine section. Once stacked, the entire test article will be loaded onto the barge Pegasus and shipped to NASA's Marshall Space Flight Center in Huntsville, Alabama. There, it will be subjected to millions of pounds of force during testing to ensure the hardware can withstand the incredible stresses of launch.

  11. Eye movements while viewing narrated, captioned, and silent videos

    PubMed Central

    Ross, Nicholas M.; Kowler, Eileen

    2013-01-01

    Videos are often accompanied by narration delivered either by an audio stream or by captions, yet little is known about saccadic patterns while viewing narrated video displays. Eye movements were recorded while viewing video clips with (a) audio narration, (b) captions, (c) no narration, or (d) concurrent captions and audio. A surprisingly large proportion of time (>40%) was spent reading captions even in the presence of a redundant audio stream. Redundant audio did not affect the saccadic reading patterns but did lead to skipping of some portions of the captions and to delays of saccades made into the caption region. In the absence of captions, fixations were drawn to regions with a high density of information, such as the central region of the display, and to regions with high levels of temporal change (actions and events), regardless of the presence of narration. The strong attraction to captions, with or without redundant audio, raises the question of what determines how time is apportioned between captions and video regions so as to minimize information loss. The strategies of apportioning time may be based on several factors, including the inherent attraction of the line of sight to any available text, the moment by moment impressions of the relative importance of the information in the caption and the video, and the drive to integrate visual text accompanied by audio into a single narrative stream. PMID:23457357

  12. Effects of Picture Prompts Delivered by a Video iPod on Pedestrian Navigation

    ERIC Educational Resources Information Center

    Kelley, Kelly R.; Test, David W.; Cooke, Nancy L.

    2013-01-01

    Transportation access is a major contributor to independence, productivity, and societal inclusion for individuals with intellectual and developmental disabilities (IDD). This study examined the effects of pedestrian navigation training using picture prompts displayed through a video iPod on travel route completion with 4 adults with IDD. Results…

  13. Video signal processing system uses gated current mode switches to perform high speed multiplication and digital-to-analog conversion

    NASA Technical Reports Server (NTRS)

    Gilliland, M. G.; Rougelot, R. S.; Schumaker, R. A.

    1966-01-01

    Video signal processor uses special-purpose integrated circuits with nonsaturating current mode switching to accept texture and color information from a digital computer in a visual spaceflight simulator and to combine it, for display on a color CRT, with analog information concerning fading.

  14. 38 CFR 1.9 - Description, use, and display of VA seal and flag.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ...) Official awards, certificates, medals, and plaques. (E) Motion picture film, video tape, and other... governments. (F) Official awards, certificates, and medals. (G) Motion picture film, video tape, and other... with this section shall be subject to the penalty provisions of 18 U.S.C. 506, 701, or 1017, providing...

  15. 38 CFR 1.9 - Description, use, and display of VA seal and flag.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ...) Official awards, certificates, medals, and plaques. (E) Motion picture film, video tape, and other... governments. (F) Official awards, certificates, and medals. (G) Motion picture film, video tape, and other... with this section shall be subject to the penalty provisions of 18 U.S.C. 506, 701, or 1017, providing...

  19. CCD high-speed videography system with new concepts and techniques

    NASA Astrophysics Data System (ADS)

    Zheng, Zengrong; Zhao, Wenyi; Wu, Zhiqiang

    1997-05-01

    A novel CCD high-speed videography system based on new concepts and techniques was recently developed at Zhejiang University. The system sends a series of short flash pulses to the moving object. All of the parameters, such as the number, duration, interval, intensity, and color of the flashes, can be controlled by the computer as needed. A series of images of the moving object frozen by the flash pulses, carrying information about the object, is recorded by a CCD video camera, and the resulting images are sent to a computer to be recognized and processed with special hardware and software. The obtained parameters can be displayed, output as remote control signals, or written to CD. The highest videography frequency is 30,000 images per second. The shortest image-freezing time is several microseconds. The system has been applied in a wide range of fields, including energy, chemistry, medicine, biological engineering, aerodynamics, explosions, multiphase flow, mechanics, vibration, athletic training, weapon development, and national defense engineering. It can also be used on production lines to carry out online, real-time monitoring and control.

  20. Resource Materials for Nanoscale Science and Technology Education

    NASA Astrophysics Data System (ADS)

    Lisensky, George

    2006-12-01

    Nanotechnology and advanced materials examples can be used to explore science and engineering concepts, exhibiting the "wow" and potential of nanotechnology, introducing prospective scientists to key ideas, and educating a citizenry capable of making well-informed technology-driven decisions. For example, the synthesis of materials one atomic layer at a time has already revolutionized lighting and display technologies and dramatically expanded hard drive storage capacities. Resource materials include kits, models, and demonstrations that explain scanning probe microscopy, x-ray diffraction, information storage, energy and light, carbon nanotubes, and solid-state structures. An online Video Lab Manual, in which movies show each step of the experiment, illustrates more than a dozen laboratory experiments involving nanoscale science and technology. Examples that are useful at a variety of levels when instructors provide the context include preparation of self-assembled monolayers, liquid crystals, colloidal gold, ferrofluid nanoparticles, nickel nanowires, solar cells, electrochromic thin films, organic light emitting diodes, and quantum dots. These resources have been developed, refined, and class-tested at institutions working with the Materials Research Science and Engineering Center on Nanostructured Interfaces at the University of Wisconsin-Madison (http://mrsec.wisc.edu/nano).

  1. Imaging System for Vaginal Surgery.

    PubMed

    Taylor, G Bernard; Myers, Erinn M

    2015-12-01

    The vaginal surgeon is challenged with performing complex procedures within a surgical field of limited light and exposure. The video telescopic operating microscope is an illumination and imaging system that provides visualization during open surgical procedures with a limited field of view. The imaging system is positioned within the surgical field and then secured to the operating room table with a maneuverable holding arm. A high-definition camera and xenon light source allow transmission of the magnified image to a high-definition monitor in the operating room. The monitor screen is positioned above the patient for the surgeon and assistants to view in real time throughout the operation. The video telescopic operating microscope system was used to provide surgical illumination and magnification during total vaginal hysterectomy and salpingectomy, midurethral sling, and release of vaginal scar procedures. All procedures were completed without complications. The video telescopic operating microscope provided illumination of the vaginal operative field and display of the magnified image on high-definition monitors in the operating room for the surgeon and staff to view the procedures simultaneously. The video telescopic operating microscope provides high-definition display, magnification, and illumination during vaginal surgery.

  2. Multi-star processing and gyro filtering for the video inertial pointing system

    NASA Technical Reports Server (NTRS)

    Murphy, J. P.

    1976-01-01

    The video inertial pointing (VIP) system is being developed to satisfy the acquisition and pointing requirements of astronomical telescopes. The VIP system uses a single video sensor to provide star position information that can be used to generate three-axis pointing error signals (multi-star processing) and for input to a cathode ray tube (CRT) display of the star field. The pointing error signals are used to update the telescope's gyro stabilization system (gyro filtering). The CRT display facilitates target acquisition and positioning of the telescope by a remote operator. Linearized small-angle equations are used for the multi-star processing, and a consideration of error performance and singularities leads to star-pair location restrictions and equation selection criteria. A discrete steady-state Kalman filter that uses the integration of the gyros is developed and analyzed. The filter includes unit time delays representing the asynchronous operations of the VIP microprocessor and video sensor. A digital simulation of a typical gyro-stabilized gimbal is developed and used to validate the approach to the gyro filtering.
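    The record describes a discrete steady-state Kalman filter that blends gyro integration with star-derived pointing errors. A minimal single-axis sketch of that idea, assuming a fixed steady-state gain and synchronous updates (neither of which the record specifies):

```python
def steady_state_filter(gyro_rates, star_errors, dt=0.01, gain=0.1):
    """Propagate a single-axis attitude estimate by integrating the gyro,
    then correct it with the star-derived pointing measurement using a
    fixed steady-state gain (a simplified stand-in for the Kalman update)."""
    estimate, history = 0.0, []
    for rate, star in zip(gyro_rates, star_errors):
        estimate += rate * dt                  # prediction: gyro integration
        estimate += gain * (star - estimate)   # correction: star measurement
        history.append(estimate)
    return history
```

    With a stationary star measurement and a quiet gyro, the estimate converges toward the star-derived value at a rate set by the gain, which is the essential trade the steady-state form makes against a full time-varying Kalman gain.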

  3. The Eyes Have It.

    ERIC Educational Resources Information Center

    Walsh, Janet

    1982-01-01

    Discusses the health hazards of working with the visual display systems of computers, in particular the eye problems associated with long-term use of video display terminals. Excerpts from and ordering information for the National Institute for Occupational Safety and Health report on such hazards are included. (JJD)

  4. Impact of pain behaviors on evaluations of warmth and competence.

    PubMed

    Ashton-James, Claire E; Richardson, Daniel C; de C Williams, Amanda C; Bianchi-Berthouze, Nadia; Dekker, Peter H

    2014-12-01

    This study investigated the social judgments that are made about people who appear to be in pain. Fifty-six participants viewed 2 video clips of human figures exercising. The videos were created by a motion tracking system, and showed dots that had been placed at various points on the body, so that body motion was the only visible cue. One of the figures displayed pain behaviors (eg, rubbing, holding, hesitating), while the other did not. Without any other information about the person in each video, participants evaluated each person on a variety of attributes associated with interpersonal warmth, competence, mood, and physical fitness. As well as judging them to be in more pain, participants evaluated the person who displayed pain behavior as less warm and less competent than the person who did not display pain behavior. In addition, the person who displayed pain behavior was perceived to be in a more negative mood and to have poorer physical fitness than the person who did not, and these perceptions contributed to the impact of pain behaviors on evaluations of warmth and competence, respectively. The implications of these negative social evaluations for social relationships, well-being, and pain assessment in persons in chronic pain are discussed.

  5. Expedition 32 Video Message Recording

    NASA Image and Video Library

    2012-07-25

    ISS032-E-009061 (25 July 2012) --- NASA astronauts Joe Acaba and Sunita Williams, both Expedition 32 flight engineers, perform video message recording in the Destiny laboratory of the International Space Station.

  6. Comparison of form in potential functions while maintaining upright posture during exposure to stereoscopic video clips.

    PubMed

    Kutsuna, Kenichiro; Matsuura, Yasuyuki; Fujikake, Kazuhiro; Miyao, Masaru; Takada, Hiroki

    2013-01-01

    Visually induced motion sickness (VIMS) is caused by sensory conflict, the disagreement between vergence and visual accommodation while observing stereoscopic images. VIMS can be measured by psychological and physiological methods. We propose a mathematical methodology to measure the effect of three-dimensional (3D) images on the equilibrium function. In this study, body sway in the resting state is compared with that during exposure to 3D video clips on a liquid crystal display (LCD) and on a head-mounted display (HMD). In addition, the Simulator Sickness Questionnaire (SSQ) was completed immediately afterward. Based on the statistical analysis of the SSQ subscores and each index for the stabilograms, we succeeded in quantifying the VIMS induced during exposure to the stereoscopic images. Moreover, we discuss changes in the form of the potential functions that control the standing posture during exposure to stereoscopic video clips.
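    The record does not name its stabilogram indices, but one widely used index, the total locus length of the center-of-pressure path, can be computed as follows (a generic sketch, not the paper's specific method):

```python
import math

def locus_length(xs, ys):
    """Total length of the center-of-pressure trajectory: the sum of the
    Euclidean distances between consecutive stabilogram samples. Longer
    locus length generally indicates greater postural sway."""
    return sum(math.hypot(x1 - x0, y1 - y0)
               for (x0, y0), (x1, y1) in zip(zip(xs, ys), zip(xs[1:], ys[1:])))
```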

  7. Evaluation of stereoscopic medical video content on an autostereoscopic display for undergraduate medical education

    NASA Astrophysics Data System (ADS)

    Ilgner, Justus F. R.; Kawai, Takashi; Shibata, Takashi; Yamazoe, Takashi; Westhofen, Martin

    2006-02-01

    Introduction: An increasing number of surgical procedures are performed in a microsurgical and minimally-invasive fashion. However, the performance of surgery, its possibilities and limitations become difficult to teach. Stereoscopic video has evolved from a complex production process and expensive hardware towards rapid editing of video streams with standard and HDTV resolution which can be displayed on portable equipment. This study evaluates the usefulness of stereoscopic video in teaching undergraduate medical students. Material and methods: From an earlier study we chose two clips each of three different microsurgical operations (tympanoplasty type III of the ear, endonasal operation of the paranasal sinuses and laser chordectomy for carcinoma of the larynx). This material was supplemented with 23 clips of a cochlear implantation, specifically edited for a portable computer with an autostereoscopic display (PC-RD1-3D, SHARP Corp., Japan). The recording and synchronization of left and right images was performed at the University Hospital Aachen. The footage was edited stereoscopically at Waseda University using our original software for non-linear editing of stereoscopic 3-D movies. The material was then converted into the streaming 3-D video format. The purpose of the conversion was to present the video clips in a file type that does not depend on a television signal such as PAL or NTSC. 25 4th-year medical students who participated in the general ENT course at Aachen University Hospital were asked to estimate depth cues within the six video clips plus the cochlear implantation clips. Another 25 4th-year students who were shown the material monoscopically on a conventional laptop served as controls. Results: All participants noted that the additional depth information helped with understanding the relation of anatomical structures, even though none had hands-on experience with Ear, Nose and Throat operations before or during the course. 
The monoscopic group generally estimated resection depth at much lower values than in reality. Although this was also the case for some participants in the stereoscopic group, their estimates of depth features reflected the enhanced depth impression provided by stereoscopy. Conclusion: After this first implementation of stereoscopic video teaching, medical students who are inexperienced with ENT surgical procedures were able to reproduce depth information, and therefore anatomically complex structures, to a greater extent. Besides extending video teaching to junior doctors, the next evaluation step will address its effect on the learning curve during the surgical training program.

  8. Compilation of Abstracts of Theses Submitted by Candidates for Degrees.

    DTIC Science & Technology

    1986-09-30

    Musitano, J.R. Fin-line Horn Antennas 118 LCDR, USNR Muth, L.R. VLSI Tutorials Through the Video-computer Courseware Implementation... 119 LT, USN ...Engineer Allocation Model 432 CPT, USA Kiziltan, M. Cognitive Performance Degradation on Sonar Operator and Torpedo Data... 433 LTJG, Turkish Navy ...and Computer Engineering 118 VLSI TUTORIALS THROUGH THE VIDEO-COMPUTER COURSEWARE IMPLEMENTATION SYSTEM Liesel R. Muth Lieutenant, United States Navy

  9. Method and system for monitoring and displaying engine performance parameters

    NASA Technical Reports Server (NTRS)

    Abbott, Terence S. (Inventor); Person, Lee H., Jr. (Inventor)

    1988-01-01

    The invention is believed to be a major improvement that will have broad application in governmental and commercial aviation. It provides a dynamic method and system for monitoring and simultaneously displaying, in easily scanned form, the available, predicted, and actual thrust of a jet aircraft engine under actual operating conditions. The available and predicted thrusts are based on the performance of a functional model of the aircraft engine under the same operating conditions. Other critical performance parameters of the aircraft engine and functional model are generated and compared, the differences in value being simultaneously displayed in conjunction with the displayed thrust values. Thus, the displayed information permits the pilot to make power adjustments directly while staying aware of total performance at a glance of a single display panel.
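    The compare-and-display step the record describes reduces to differencing measured parameters against the functional model's predictions. A minimal sketch (the parameter names here are illustrative, not taken from the patent):

```python
def performance_deltas(actual, predicted):
    """Difference between measured engine parameters and the values a
    functional model predicts under the same operating conditions; the
    display would show these deltas alongside the thrust values."""
    return {name: actual[name] - predicted[name] for name in predicted}
```

    For example, `performance_deltas({"thrust": 95.0, "egt": 610.0}, {"thrust": 100.0, "egt": 600.0})` flags a thrust shortfall of 5.0 and an exhaust-gas-temperature excess of 10.0 in one scan.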

  10. Video over IP design guidebook.

    DOT National Transportation Integrated Search

    2009-12-01

    Texas Department of Transportation (TxDOT) engineers are responsible for the design, evaluation, and : implementation of video solutions across the entire state. These installations occur with vast differences in : requirements, expectations, and con...

  11. Using Photogrammetry to Estimate Tank Waste Volumes from Video

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Field, Jim G.

    Washington River Protection Solutions (WRPS) contracted with HiLine Engineering & Fabrication, Inc. to assess the accuracy of photogrammetry tools as compared to video Camera/CAD Modeling System (CCMS) estimates. This test report documents the results of using photogrammetry to estimate the volume of waste in tank 241-C-104 from post-retrieval videos, and of using photogrammetry to estimate the volume of waste piles in the CCMS test video.
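    Once photogrammetry has reconstructed a surface, the volume estimate reduces to integrating a height map over its grid. A generic sketch of that final step (not the HiLine or CCMS algorithm, which the record does not detail):

```python
def pile_volume(height_map, cell_area):
    """Approximate pile volume from a photogrammetry-derived height map:
    each grid cell contributes (height above the floor) x (cell footprint
    area), summed over the whole grid."""
    return sum(h for row in height_map for h in row) * cell_area
```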

  12. Problem Decomposition and Recomposition in Engineering Design: A Comparison of Design Behavior between Professional Engineers, Engineering Seniors, and Engineering Freshmen

    ERIC Educational Resources Information Center

    Song, Ting; Becker, Kurt; Gero, John; DeBerard, Scott; DeBerard, Oenardi; Reeve, Edward

    2016-01-01

    The authors investigated the differences in using problem decomposition and problem recomposition between dyads of engineering experts, engineering seniors, and engineering freshmen. Participants worked in dyads to complete an engineering design challenge within 1 hour. The entire design process was video and audio recorded. After the design…

  13. High-speed reconstruction of compressed images

    NASA Astrophysics Data System (ADS)

    Cox, Jerome R., Jr.; Moore, Stephen M.

    1990-07-01

    A compression scheme is described that allows high-definition radiological images with greater than 8-bit intensity resolution to be represented by 8-bit pixels. Reconstruction of the images with their original intensity resolution can be carried out by means of a pipeline architecture suitable for compact, high-speed implementation. A reconstruction system is described that can be fabricated according to this approach and placed between an 8-bit display buffer and the display's video system, thereby allowing contrast control of images at video rates. Results for 50 CR chest images show that error-free reconstruction of the original 10-bit CR images can be achieved.
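    The paper's exact code assignment is not given here, but the pipeline idea — 8-bit codes expanded back to 10-bit intensities by a table lookup sitting between the display buffer and the video system — can be sketched as follows. This toy version is error-free only when the image uses at most 256 distinct intensity values, an assumption of the sketch rather than a claim about the paper's scheme:

```python
def encode(image10):
    """Assign an 8-bit code to each distinct 10-bit value in the image;
    returns the coded pixels and the reconstruction lookup table."""
    lut = sorted(set(image10))
    assert len(lut) <= 256, "toy scheme: needs <= 256 distinct values"
    code = {v: i for i, v in enumerate(lut)}
    return [code[v] for v in image10], lut

def reconstruct(codes, lut):
    """One table lookup per pixel -- a natural fit for pipelined hardware."""
    return [lut[c] for c in codes]
```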

  14. Reconfigurable work station for a video display unit and keyboard

    NASA Technical Reports Server (NTRS)

    Shields, Nicholas L. (Inventor); Roe, Fred D., Jr. (Inventor); Fagg, Mary F. (Inventor); Henderson, David E. (Inventor)

    1988-01-01

    A reconfigurable workstation is described having video, keyboard, and hand operated motion controller capabilities. The workstation includes main side panels between which a primary work panel is pivotally carried in a manner in which the primary work panel may be adjusted and set in a negatively declined or positively inclined position for proper forearm support when operating hand controllers. A keyboard table supports a keyboard in such a manner that the keyboard is set in a positively inclined position with respect to the negatively declined work panel. Various adjustable devices are provided for adjusting the relative declinations and inclinations of the work panels, tables, and visual display panels.

  15. Description and flight tests of an oculometer

    NASA Technical Reports Server (NTRS)

    Middleton, D. B.; Hurt, G. J., Jr.; Wise, M. A.; Holt, J. D.

    1977-01-01

    A remote sensing oculometer was successfully operated during flight tests with a NASA experimental Twin Otter aircraft at the Langley Research Center. Although the oculometer was designed primarily for the laboratory, it was able to track the pilot's eye-point-of-regard (lookpoint) consistently and unobtrusively in the flight environment. The instantaneous position of the lookpoint was determined to within approximately 1 deg. Data were recorded on both analog and video tape. The video data consisted of continuous scenes of the aircraft's instrument display and a superimposed white dot (simulating the lookpoint) dwelling on an instrument or moving from instrument to instrument as the pilot monitored the display information during landing approaches.

  16. Involvement of the ventral premotor cortex in controlling image motion of the hand during performance of a target-capturing task.

    PubMed

    Ochiai, Tetsuji; Mushiake, Hajime; Tanji, Jun

    2005-07-01

    The ventral premotor cortex (PMv) has been implicated in the visual guidance of movement. To examine whether neuronal activity in the PMv is involved in controlling the direction of motion of a visual image of the hand or the actual movement of the hand, we trained a monkey to capture a target that was presented on a video display using the same side of its hand as was displayed on the video display. We found that PMv neurons predominantly exhibited premovement activity that reflected the image motion to be controlled, rather than the physical motion of the hand. We also found that the activity of half of such direction-selective PMv neurons depended on which side (left versus right) of the video image of the hand was used to capture the target. Furthermore, this selectivity for a portion of the hand was not affected by changing the starting position of the hand movement. These findings suggest that PMv neurons play a crucial role in determining which part of the body moves in which direction, at least under conditions in which a visual image of a limb is used to guide limb movements.

  17. Optical links in handheld multimedia devices

    NASA Astrophysics Data System (ADS)

    van Geffen, S.; Duis, J.; Miller, R.

    2008-04-01

    Ever-emerging applications in handheld multimedia devices such as mobile phones, laptop computers, portable video games and digital cameras requiring increased screen resolutions are driving higher aggregate bitrates between host processor and display(s), enabling services such as mobile video conferencing, video on demand and TV broadcasting. Larger displays and smaller phones require complex mechanical 3D hinge configurations striving to combine maximum functionality with compact building volumes. Conventional galvanic interconnections such as Micro-Coax and FPC carrying parallel digital data between host processor and display module may produce Electromagnetic Interference (EMI) and bandwidth limitations caused by small cable size and tight cable bends. To reduce the number of signals through a hinge, the mobile phone industry, organized in the MIPI (Mobile Industry Processor Interface) alliance, is currently defining an electrical interface transmitting serialized digital data at speeds >1Gbps. This interface allows for electrical or optical interconnects. Above 1 Gbps, optical links may offer a cost-effective alternative because of their flexibility, increased bandwidth and immunity to EMI. This paper describes the development of optical links for handheld communication devices. A cable assembly based on a special Plastic Optical Fiber (POF) selected for its mechanical durability is terminated with a small form factor molded lens assembly which interfaces between an 850nm VCSEL transmitter and a receiving device on the printed circuit board of the display module. A statistical approach based on a Lean Design For Six Sigma (LDFSS) roadmap for new product development tries to find an optimum link definition which will be robust and low cost while meeting the power consumption requirements appropriate for battery-operated systems.

  18. Augmenting reality in Direct View Optical (DVO) overlay applications

    NASA Astrophysics Data System (ADS)

    Hogan, Tim; Edwards, Tim

    2014-06-01

    The integration of overlay displays into rifle scopes can transform precision Direct View Optical (DVO) sights into intelligent interactive fire-control systems. Overlay displays can provide ballistic solutions within the sight for dramatically improved targeting, can fuse sensor video to extend targeting into nighttime or dirty battlefield conditions, and can overlay complex situational awareness information over the real-world scene. High-brightness overlay solutions for dismounted soldier applications have previously been hindered by excessive power consumption, weight, and bulk, making them unsuitable for man-portable, battery-powered applications. This paper describes the advancements and capabilities of a high-brightness, ultra-low-power text and graphics overlay display module developed specifically for integration into DVO weapon sight applications. Central to the overlay display module was the development of a new general-purpose low-power graphics controller and dual-path display driver electronics. The graphics controller interface is a simple 2-wire RS-232 serial interface compatible with existing weapon systems such as the IBEAM ballistic computer and the RULR and STORM laser rangefinders (LRF). The module features include multiple graphics layers, user-configurable fonts and icons, and parameterized vector rendering, making it suitable for general-purpose DVO overlay applications. The module is configured for graphics-only operation for daytime use and overlays graphics with video for nighttime applications. The miniature footprint and ultra-low power consumption of the module enable a new generation of intelligent DVO systems; the module has been implemented for resolutions from VGA to SXGA, in monochrome and color, and in graphics applications with and without sensor video.
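    The module's actual RS-232 protocol is not public. Purely to illustrate what a 2-wire serial graphics command might look like, here is a hypothetical framing in which the opcode, coordinate layout, and checksum are all invented for this sketch:

```python
def draw_text_frame(x, y, text):
    """Hypothetical frame: [opcode][x hi][x lo][y hi][y lo][ASCII...][checksum].
    Every field here is invented for illustration -- the real controller's
    command set is not described in the source."""
    payload = bytes([0x01, (x >> 8) & 0xFF, x & 0xFF,
                     (y >> 8) & 0xFF, y & 0xFF]) + text.encode("ascii")
    return payload + bytes([sum(payload) & 0xFF])  # simple additive checksum
```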

  19. Reviewing Instructional Studies Conducted Using Video Modeling to Children with Autism

    ERIC Educational Resources Information Center

    Acar, Cimen; Diken, Ibrahim H.

    2012-01-01

    This study explored 31 instructional research articles written using video modeling to children with autism and published in peer-reviewed journals. The studies in this research have been reached by searching EBSCO, Academic Search Complete, ERIC and other Anadolu University online search engines and using keywords such as "autism, video modeling,…

  20. 1981 Image II Conference Proceedings.

    DTIC Science & Technology

    1981-11-01

    rapid motion of terrain detail across the display requires fast display processors. Other difficulties are perceptual: the visual displays must convey...has been a continuing effort by Vought in the last decade. Early systems were restricted by the unavailability of video bulk storage with fast random...each photograph. The calculations aided in the proper sequencing of the scanned scenes on the tape recorder and eventually facilitated fast random

  1. Dynamic resource allocation engine for cloud-based real-time video transcoding in mobile cloud computing environments

    NASA Astrophysics Data System (ADS)

    Adedayo, Bada; Wang, Qi; Alcaraz Calero, Jose M.; Grecos, Christos

    2015-02-01

    The recent explosion in video-related Internet traffic has been driven by the widespread use of smart mobile devices, particularly smartphones with advanced cameras that are able to record high-quality videos. Although many of these devices offer the facility to record videos at different spatial and temporal resolutions, primarily with local storage considerations in mind, most users only ever use the highest quality settings. The vast majority of these devices are optimised for compressing the acquired video using a single built-in codec and have neither the computational resources nor battery reserves to transcode the video to alternative formats. This paper proposes a new low-complexity dynamic resource allocation engine for cloud-based video transcoding services that are both scalable and capable of being delivered in real-time. Firstly, through extensive experimentation, we establish resource requirement benchmarks for a wide range of transcoding tasks. The set of tasks investigated covers the most widely used input formats (encoder type, resolution, amount of motion and frame rate) associated with mobile devices and the most popular output formats derived from a comprehensive set of use cases, e.g. a mobile news reporter directly transmitting video to a TV audience with various video format requirements, with minimal usage of resources both at the reporter's end and at the cloud infrastructure end for transcoding services.
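    A benchmark-driven allocation engine of the kind described can be sketched as a first-fit packer: each task's CPU cost is read from a benchmark table and placed on the first server with enough spare capacity. This is illustrative only; the paper's actual allocation policy and benchmark keys are not given here:

```python
def allocate(tasks, server_capacity, benchmarks):
    """Place each transcoding task on the first server with enough spare
    CPU, looking its cost up in a benchmark table keyed by
    (input_format, output_format); open a new server when none fits."""
    free = []        # remaining capacity per server
    placement = []   # server index chosen for each task
    for task in tasks:
        cost = benchmarks[task]
        for i, spare in enumerate(free):
            if spare >= cost:
                free[i] -= cost
                placement.append(i)
                break
        else:
            free.append(server_capacity - cost)
            placement.append(len(free) - 1)
    return placement
```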

  2. Recovery of Images from the AMOS ELSI Data for STS-33

    DTIC Science & Technology

    1990-04-19

    ...were recorded on tape in both video and digital formats. The ELSI was used on three passes, orbits 21, 37, and 67, on 24, 25, and 27 November. These data...November, in video format, were hand-carried to Geophysics Laboratory (GL) at the beginning of December 1989; the classified data, in digital format, were...are also sampled and reconverted to analog form, in a standard video format, for display on a video monitor and recording on videotape. 3. TAPE FORMAT

  3. Design Issues in Video Disc Map Display.

    DTIC Science & Technology

    1984-10-01

    such items as the equipment used by ETL in its work with discs and selected images from a disc. ... VIDEO DISC TECHNOLOGY AND VOCABULARY ...The term video refers to a television image. The standard home television set is equipped with a receiver, which is capable of picking up a signal...plays for one hour per side and is played at a constant linear velocity. The industrially formatted disc has 54,000 frames per side in concentric tracks

  4. 12. NBS LOWER ROOM. BEHIND FAR GLASS WALL IS VIDEO ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    12. NBS LOWER ROOM. BEHIND FAR GLASS WALL IS VIDEO TAPE EQUIPMENT AND VOICE INTERCOM EQUIPMENT. THE MONITORS ABOVE GLASS WALL DISPLAY UNDERWATER TEST VIDEO TO CONTROL ROOM. FARTHEST CONSOLE ROW CONTAINS CAMERA SWITCHING, PANNING, TILTING, FOCUSING, AND ZOOMING. MIDDLE CONSOLE ROW CONTAINS TEST CONDUCTOR CONSOLES FOR MONITORING TEST ACTIVITIES AND DATA. THE CLOSEST CONSOLE ROW IS NBS FACILITY CONSOLES FOR TEST DIRECTOR, SAFETY AND QUALITY ASSURANCE REPRESENTATIVES. - Marshall Space Flight Center, Neutral Buoyancy Simulator Facility, Rideout Road, Huntsville, Madison County, AL

  5. 13. NBS LOWER ROOM. BEHIND FAR GLASS WALL IS VIDEO ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    13. NBS LOWER ROOM. BEHIND FAR GLASS WALL IS VIDEO TAPE EQUIPMENT AND VOICE INTERCOM EQUIPMENT. THE MONITORS ABOVE GLASS WALL DISPLAY UNDERWATER TEST VIDEO TO CONTROL ROOM. FARTHEST CONSOLE ROW CONTAINS CAMERA SWITCHING, PANNING, TILTING, FOCUSING, AND ZOOMING. MIDDLE CONSOLE ROW CONTAINS TEST CONDUCTOR CONSOLES FOR MONITORING TEST ACTIVITIES AND DATA. THE CLOSEST CONSOLE ROW IS NBS FACILITY CONSOLES FOR TEST DIRECTOR, SAFETY AND QUALITY ASSURANCE REPRESENTATIVES. - Marshall Space Flight Center, Neutral Buoyancy Simulator Facility, Rideout Road, Huntsville, Madison County, AL

  6. Effects Of Frame Rates In Video Displays

    NASA Technical Reports Server (NTRS)

    Kellogg, Gary V.; Wagner, Charles A.

    1991-01-01

    Report describes an experiment on the subjective effects of the rates at which the display on a cathode-ray tube (CRT) in a flight simulator is updated and refreshed. Conducted to learn more about the jumping, blurring, flickering, and multiple lines that an observer perceives when a line moves at high speed across the screen of a calligraphic CRT.

  7. Aqua Education and Public Outreach

    NASA Astrophysics Data System (ADS)

    Graham, S. M.; Parkinson, C. L.; Chambers, L. H.; Ray, S. E.

    2011-12-01

    NASA's Aqua satellite was launched on May 4, 2002, with six instruments designed to collect data about the Earth's atmosphere, biosphere, hydrosphere, and cryosphere. Since the late 1990s, the Aqua mission has involved considerable education and public outreach (EPO) activities, including printed products, formal education, an engineering competition, webcasts, and high-profile multimedia efforts. The printed products include Aqua and instrument brochures, an Aqua lithograph, Aqua trading cards, NASA Fact Sheets on Aqua, the water cycle, and weather forecasting, and an Aqua science writers' guide. On-going formal education efforts include the Students' Cloud Observations On-Line (S'COOL) Project, the MY NASA DATA Project, the Earth System Science Education Alliance, and, in partnership with university professors, undergraduate student research modules. Each of these projects incorporates Aqua data into its inquiry-based framework. Additionally, high school and undergraduate students have participated in summer internship programs. An earlier formal education activity was the Aqua Engineering Competition, which was a high school program sponsored by the NASA Goddard Space Flight Center, Morgan State University, and the Baltimore Museum of Industry. The competition began with the posting of a Round 1 Aqua-related engineering problem in December 2002 and concluded in April 2003 with a final round of competition among the five finalist teams. The Aqua EPO efforts have also included a wide range of multimedia products. Prior to launch, the Aqua team worked closely with the Special Projects Initiative (SPI) Office to produce a series of live webcasts on Aqua science and the Cool Science website aqua.nasa.gov/coolscience, which displays short video clips of Aqua scientists and engineers explaining the many aspects of the Aqua mission. 
These video clips, the Aqua website, and numerous presentations have benefited from dynamic visualizations showing the Aqua launch, instrument deployments, instrument sensing, and the Aqua orbit. More recently, in 2008 the Aqua team worked with the ViewSpace production team from the Space Telescope Science Institute to create an 18-minute ViewSpace feature showcasing the science and applications of the Aqua mission. Then in 2010 and 2011, Aqua and other NASA Earth-observing missions partnered with National CineMedia on the "Know Your Earth" (KYE) project. During January and July 2010 and 2011, KYE ran 2-minute segments highlighting questions that promoted global climate literacy on lobby LCD screens in movie theaters throughout the U.S. Among the ongoing Aqua EPO efforts is the incorporation of Aqua data sets onto the Dynamic Planet, a large digital video globe that projects a wide variety of spherical data sets. Aqua also has a highly successful collaboration with EarthSky communications on the production of an Aqua/EarthSky radio show and podcast series. To date, eleven productions have been completed and distributed via the EarthSky network. In addition, a series of eight video podcasts (i.e., vodcasts) are under production by NASA Goddard TV in conjunction with Aqua personnel, highlighting various aspects of the Aqua mission.

  8. Internet Protocol Display Sharing Solution for Mission Control Center Video System

    NASA Technical Reports Server (NTRS)

    Brown, Michael A.

    2009-01-01

With the advent of broadcast television as a constant source of information throughout the NASA manned space flight Mission Control Center (MCC) at the Johnson Space Center (JSC), the current Video Transport System (VTS) visually enhances real-time applications as a broadcast channel that decision-making flight controllers rely on, but it is costly and difficult to maintain. The Operations Technology Facility (OTF) of the Mission Operations Facility Division (MOFD) has been tasked to provide insight into innovative technological solutions for the MCC environment, focusing on alternative architectures for a VTS. New technology will enable sharing of all imagery from one specific computer display, better known as Display Sharing (DS), to other computer displays and display systems, such as large projector systems, flight control rooms, and back supporting rooms throughout the facilities and other offsite centers, using IP networks. It has been stated that Internet Protocol (IP) applications can readily substitute for the current visual architecture, but quality and speed may need to be sacrificed to reduce cost and improve maintainability. Although the IP infrastructure can support many technologies, the simple task of sharing one's computer display can be clumsy and difficult to configure and manage across the many operators and products. 
The DS process shall automate the sharing of images while addressing such characteristics as bandwidth management, security encryption, synchronization across loss of signal / loss of acquisition, and latency, and shall provide functions such as scalability, multi-sharing, ease of initial integration and sustained configuration, integration with video adjustment packages, collaborative tools, and host/recipient controllability, with the highest priority being an enterprise solution that provides ownership of the whole process while maintaining the integrity of the latest display devices. This study provides insight into the many possibilities that can be filtered down to a responsive product suitable for today's MCC environment.

  9. High End Visualization of Geophysical Datasets Using Immersive Technology: The SIO Visualization Center.

    NASA Astrophysics Data System (ADS)

    Newman, R. L.

    2002-12-01

How many images can you display at one time with PowerPoint without getting "postage stamps"? Do you have fantastic datasets that you cannot view because your computer is too slow/small? Do you assume a few 2-D images of a 3-D picture are sufficient? High-end visualization centers can minimize and often eliminate these problems. The new visualization center [http://siovizcenter.ucsd.edu] at Scripps Institution of Oceanography [SIO] immerses users into a virtual world by projecting 3-D images onto a Panoram GVR-120E wall-sized floor-to-ceiling curved screen [7' x 23'] that has 3.2 mega-pixels of resolution. The Infinite Reality graphics subsystem is driven by a single-pipe SGI Onyx 3400 with a system bandwidth of 44 Gbps. The Onyx is powered by 16 MIPS R12K processors and 16 GB of addressable memory. The system is also equipped with transmitters and LCD shutter glasses which permit stereographic 3-D viewing of high-resolution images. This center is ideal for groups of up to 60 people who can simultaneously view these large-format images. A wide range of hardware and software is available, giving the users a totally immersive working environment in which to display, analyze, and discuss large datasets. The system enables simultaneous display of video and audio streams from sources such as SGI megadesktop and stereo megadesktop, S-VHS video, DVD video, and video from a Macintosh or PC. For instance, one-third of the screen might be displaying S-VHS video from a remotely-operated-vehicle [ROV], while the remaining portion of the screen might be used for an interactive 3-D flight over the same parcel of seafloor. The video and audio combinations using this system are numerous, allowing users to combine and explore data and images in innovative ways, greatly enhancing scientists' ability to visualize, understand and collaborate on complex datasets. 
In the not-too-distant future, with the rapid growth in networking speeds in the US, it will be possible for Earth Sciences departments to collaborate effectively while limiting the amount of physical travel required. This includes porting visualization content to the popular, low-cost Geowall visualization systems, and providing web-based access to databanks filled with stock geoscience visualizations.

  10. Design of video interface conversion system based on FPGA

    NASA Astrophysics Data System (ADS)

    Zhao, Heng; Wang, Xiang-jun

    2014-11-01

This paper presents an FPGA-based video interface conversion system that enables inter-conversion between digital and analog video. A Cyclone IV series EP4CE22F17C chip from Altera Corporation is used as the main video processing chip, and a single-chip microcontroller serves as the information interaction control unit between the FPGA and the PC. The system is able to encode/decode messages from the PC. Technologies including video decoding/encoding circuits, the bus communication protocol, data stream de-interleaving and de-interlacing, color space conversion, and the Camera Link timing generator module of the FPGA are introduced. The system converts the Composite Video Broadcast Signal (CVBS) from the CCD camera into Low Voltage Differential Signaling (LVDS), which is then collected by a video processing unit with a Camera Link interface. The processed video signals are then input to the system output board and displayed on the monitor. The current experiment shows that the system achieves high-quality video conversion with minimal board size.
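The color space conversion step mentioned in this record can be illustrated in software. The sketch below uses the standard full-range BT.601 YCbCr-to-RGB equations; the paper does not state which coefficients its FPGA pipeline uses, so these constants are an assumption, and the function name is purely illustrative:

```python
def ycbcr_to_rgb(y, cb, cr):
    """Convert one 8-bit YCbCr pixel (full-range BT.601 assumed) to RGB.

    Illustrative only: the paper does not specify its exact conversion
    coefficients, so the common BT.601 constants are used here.
    """
    r = y + 1.402 * (cr - 128)
    g = y - 0.344136 * (cb - 128) - 0.714136 * (cr - 128)
    b = y + 1.772 * (cb - 128)

    # Clamp to the valid 8-bit range, as a hardware pipeline would.
    def clamp(v):
        return max(0, min(255, int(round(v))))

    return clamp(r), clamp(g), clamp(b)

# A neutral gray stays gray: ycbcr_to_rgb(128, 128, 128) -> (128, 128, 128)
```

In an FPGA implementation the same arithmetic would typically be done with fixed-point multipliers rather than floating point, but the per-pixel math is identical.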

  11. Simultaneous video analysis of the kinematics of opercular movement and electromyographic activity during agonistic display in Siamese fighting fish.

    PubMed

    Polnau, D G; Ma, P M

    2001-12-01

    Neuroethology seeks to uncover the neural mechanisms underlying natural behaviour. One of the major challenges in this field is the need to correlate directly neural activity and behavioural output. In most cases, recording of neural activity in freely moving animals is extremely difficult. However, electromyographic recording can often be used in lieu of neural recording to gain an understanding of the motor output program underlying a well-defined behaviour. Electromyographic recording is less invasive than most other recording methods, and does not impede the performance of most natural tasks. Using the opercular display of the Siamese fighting fish as a model, we developed a protocol for correlating directly electromyographic activity and kinematics of opercular movement: electromyographic activity was recorded in the audio channel of a video cassette recorder while video taping the display behaviour. By combining computer-assisted, quantitative video analysis and spike analysis, the kinematics of opercular movement are linked to the motor output program. Since the muscle that mediates opercular abduction in this fish, the dilator operculi, is a relatively small muscle with several subdivisions, we also describe methods for recording from small muscles and marking the precise recording site with electrolytic corrosion. The protocol described here is applicable to studies of a variety of natural behaviour that can be performed in a relatively confined space. It is also useful for analyzing complex or rapidly changing behaviour in which a precise correlation between kinematics and electromyography is required.
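The core of the correlation protocol described above is aligning spike events (recorded on the VCR's audio channel) with individual video frames. A minimal sketch of that alignment, assuming the audio and video tracks share a common start time (the function name and 30 fps default are illustrative assumptions, not the authors' code):

```python
def spikes_per_frame(spike_times_s, n_frames, fps=30.0):
    """Count EMG spikes falling within each video frame interval.

    spike_times_s: spike timestamps in seconds, extracted from the
    audio track. Assumes audio and video share a common time origin,
    as when EMG is recorded on the VCR's audio channel during taping.
    """
    counts = [0] * n_frames
    for t in spike_times_s:
        idx = int(t * fps)  # index of the frame during which the spike occurred
        if 0 <= idx < n_frames:
            counts[idx] += 1
    return counts

# Spikes at 10 ms, 20 ms, and 50 ms map to frames 0, 0, and 1 at 30 fps.
```

Per-frame spike counts can then be plotted against the digitized opercular angle for the same frames to link kinematics to the motor output program.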

  12. Development of a video over IP guidebook.

    DOT National Transportation Integrated Search

    2009-09-01

    Texas Department of Transportation (TxDOT) engineers are responsible for the design, evaluation, and implementation of video solutions across the entire state. These installations occur with vast differences in requirements, expectations, and constra...

  13. Designing Real-time Decision Support for Trauma Resuscitations

    PubMed Central

    Yadav, Kabir; Chamberlain, James M.; Lewis, Vicki R.; Abts, Natalie; Chawla, Shawn; Hernandez, Angie; Johnson, Justin; Tuveson, Genevieve; Burd, Randall S.

    2016-01-01

    Background Use of electronic clinical decision support (eCDS) has been recommended to improve implementation of clinical decision rules. Many eCDS tools, however, are designed and implemented without taking into account the context in which clinical work is performed. Implementation of the pediatric traumatic brain injury (TBI) clinical decision rule at one Level I pediatric emergency department includes an electronic questionnaire triggered when ordering a head computed tomography using computerized physician order entry (CPOE). Providers use this CPOE tool in less than 20% of trauma resuscitation cases. A human factors engineering approach could identify the implementation barriers that are limiting the use of this tool. Objectives The objective was to design a pediatric TBI eCDS tool for trauma resuscitation using a human factors approach. The hypothesis was that clinical experts will rate a usability-enhanced eCDS tool better than the existing CPOE tool for user interface design and suitability for clinical use. Methods This mixed-methods study followed usability evaluation principles. Pediatric emergency physicians were surveyed to identify barriers to using the existing eCDS tool. Using standard trauma resuscitation protocols, a hierarchical task analysis of pediatric TBI evaluation was developed. Five clinical experts, all board-certified pediatric emergency medicine faculty members, then iteratively modified the hierarchical task analysis until reaching consensus. The software team developed a prototype eCDS display using the hierarchical task analysis. Three human factors engineers provided feedback on the prototype through a heuristic evaluation, and the software team refined the eCDS tool using a rapid prototyping process. The eCDS tool then underwent iterative usability evaluations by the five clinical experts using video review of 50 trauma resuscitation cases. 
A final eCDS tool was created based on their feedback, with content analysis of the evaluations performed to ensure all concerns were identified and addressed. Results Among 26 EPs (76% response rate), the main barriers to using the existing tool were that the information displayed is redundant and does not fit clinical workflow. After the prototype eCDS tool was developed based on the trauma resuscitation hierarchical task analysis, the human factors engineers rated it to be better than the CPOE tool for nine of 10 standard user interface design heuristics on a three-point scale. The eCDS tool was also rated better for clinical use on the same scale, in 84% of 50 expert–video pairs, and was rated equivalent in the remainder. Clinical experts also rated barriers to use of the eCDS tool as being low. Conclusions An eCDS tool for diagnostic imaging designed using human factors engineering methods has improved perceived usability among pediatric emergency physicians. PMID:26300010

  14. Slow Scan Telemedicine

    NASA Technical Reports Server (NTRS)

    1984-01-01

    Originally developed under contract for NASA by Ball Bros. Research Corporation for acquiring visual information from lunar and planetary spacecraft, system uses standard closed circuit camera connected to a device called a scan converter, which slows the stream of images to match an audio circuit, such as a telephone line. Transmitted to its destination, the image is reconverted by another scan converter and displayed on a monitor. In addition to assist scans, technique allows transmission of x-rays, nuclear scans, ultrasonic imagery, thermograms, electrocardiograms or live views of patient. Also allows conferencing and consultation among medical centers, general practitioners, specialists and disease control centers. Commercialized by Colorado Video, Inc., major employment is in business and industry for teleconferencing, cable TV news, transmission of scientific/engineering data, security, information retrieval, insurance claim adjustment, instructional programs, and remote viewing of advertising layouts, real estate, construction sites or products.
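The trade-off behind slow-scan conversion is simple arithmetic: a full frame must be stretched out until it fits the narrow bandwidth of an audio circuit. A back-of-the-envelope sketch (the resolution and line rate below are illustrative assumptions; the article gives no specific figures):

```python
def slowscan_frame_time_s(width, height, bits_per_pixel, line_rate_bps):
    """Seconds needed to send one uncompressed frame over a narrowband
    audio circuit, e.g. a telephone line. All parameters are supplied
    by the caller; none are taken from the article."""
    return width * height * bits_per_pixel / line_rate_bps

# e.g. a 256 x 256, 8-bit image over a 9600 bit/s telephone circuit:
t = slowscan_frame_time_s(256, 256, 8, 9600)  # about 54.6 seconds per frame
```

This is why slow scan suits still imagery such as x-rays and thermograms rather than live motion: each frame takes tens of seconds to arrive, which is acceptable for consultation but not for continuous video.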

  15. Literacy-Related Play Activities and Preschool Staffs' Strategies to Support Children's Concept Development

    ERIC Educational Resources Information Center

    Norling, Martina; Lillvist, Anne

    2016-01-01

    This study investigates language-promoting strategies and support of concept development displayed by preschool staffs' when interacting with preschool children in literacy-related play activities. The data analysed consisted of 39 minutes of video, selected systematically from a total of 11 hours of video material from six Swedish preschool…

  16. Video-Out Projection and Lecture Hall Set-Up. Microcomputing Working Paper Series.

    ERIC Educational Resources Information Center

    Gibson, Chris

    This paper details the considerations involved in determining suitable video projection systems for displaying the Apple Macintosh's screen to large groups of people, both in classrooms with approximately 25 people, and in lecture halls with approximately 250. To project the Mac screen to groups in lecture halls, the Electrohome EDP-57 video…

  17. Float Package and the Data Rack aboard the DC-9

    NASA Technical Reports Server (NTRS)

    1996-01-01

Ted Brunzie and Peter Mason observe the float package and the data rack aboard the DC-9 reduced gravity aircraft. The float package contains a cryostat, a video camera, a pump, and accelerometers. The data rack displays and records the video signal from the float package on tape and stores acceleration and temperature measurements on disk.

  18. VENI, video, VICI: The merging of computer and video technologies

    NASA Technical Reports Server (NTRS)

    Horowitz, Jay G.

    1993-01-01

    The topics covered include the following: High Definition Television (HDTV) milestones; visual information bandwidth; television frequency allocation and bandwidth; horizontal scanning; workstation RGB color domain; NTSC color domain; American HDTV time-table; HDTV image size; digital HDTV hierarchy; task force on digital image architecture; open architecture model; future displays; and the ULTIMATE imaging system.

  19. Study to Expand Simulation Cockpit Displays of Advanced Sensors

    DTIC Science & Technology

    1981-03-01

common source is being used for multiple sensor types). If independent displays and controls are desired then two independent video sources or sensor... line is inserted in each gap, the result is the familiar 2:1 interlace. If two lines are inserted, the result is 3:1 interlace, and so on. The total... symbol generators. If these systems are operating at various scan rates and if a common display device, such as a multifunction display (MFD) is to

  20. Practical, Real-Time, and Robust Watermarking on the Spatial Domain for High-Definition Video Contents

    NASA Astrophysics Data System (ADS)

    Kim, Kyung-Su; Lee, Hae-Yeoun; Im, Dong-Hyuck; Lee, Heung-Kyu

Commercial markets employ digital rights management (DRM) systems to protect valuable high-definition (HD) quality videos. DRM systems use watermarking to provide copyright protection and ownership authentication of multimedia contents. We propose a real-time video watermarking scheme for HD video in the uncompressed domain. In particular, our approach takes a practical perspective, satisfying perceptual quality, real-time processing, and robustness requirements. We simplify and optimize a human visual system mask for real-time performance and also apply a dithering technique for invisibility. Extensive experiments are performed to prove that the proposed scheme satisfies the invisibility, real-time processing, and robustness requirements against video processing attacks. We concentrate on video processing attacks that commonly occur when HD quality videos are displayed on portable devices. These attacks include not only scaling and low bit-rate encoding, but also malicious attacks such as format conversion and frame rate change.
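The basic mechanics of spatial-domain watermarking can be sketched as an additive pseudo-random pattern plus a correlation detector. This is a deliberately simplified stand-in for the paper's scheme: it omits the human visual system mask and dithering the authors describe, and the function names and fixed strength are assumptions for illustration:

```python
import numpy as np

def embed_watermark(frame, key, strength=2.0):
    """Additive spatial-domain watermark: frame + strength * w, where w
    is a +/-1 pseudo-random pattern derived from `key`. Simplified: no
    HVS mask or dithering, unlike the scheme in the paper."""
    rng = np.random.default_rng(key)
    w = rng.choice([-1.0, 1.0], size=frame.shape)
    marked = np.clip(frame + strength * w, 0, 255)
    return marked, w

def detect_watermark(frame, w):
    """Correlate the mean-removed frame with the known pattern w.
    A marked frame yields a value near `strength`; an unmarked,
    uncorrelated frame yields a value near zero."""
    f = frame - frame.mean()
    return float((f * w).mean())
```

A real-time HD implementation would compute the perceptual mask per region to raise `strength` where the eye is less sensitive, which is the role of the HVS mask in the paper.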

  1. Video PATSEARCH: A Mixed-Media System.

    ERIC Educational Resources Information Center

    Schulman, Jacque-Lynne

    1982-01-01

    Describes a videodisc-based information display system in which a computer terminal is used to search the online PATSEARCH database from a remote host with local microcomputer control to select and display drawings from the retrieved records. System features and system components are discussed and criteria for system evaluation are presented.…

  2. Software Aids Visualization Of Mars Pathfinder Mission

    NASA Technical Reports Server (NTRS)

    Weidner, Richard J.

    1996-01-01

    Report describes Simulator for Imager for Mars Pathfinder (SIMP) computer program. SIMP generates "virtual reality" display of view through video camera on Mars lander spacecraft of Mars Pathfinder mission, along with display of pertinent textual and graphical data, for use by scientific investigators in planning sequences of activities for mission.

  3. System status display information

    NASA Technical Reports Server (NTRS)

    Summers, L. G.; Erickson, J. B.

    1984-01-01

The System Status Display is an electronic display system which provides the flight crew with enhanced capabilities for monitoring and managing aircraft systems. Guidelines for the design of the electronic system displays were established. The technical approach involved the application of a system engineering approach to the design of candidate displays and the evaluation of alternative concepts by part-task simulation. The system engineering and selection of candidate displays are covered.

  4. Kuipers installs and routes RCS Video Cables in the U.S. Laboratory

    NASA Image and Video Library

    2012-02-01

ISS030-E-060117 (1 Feb. 2012) --- In the International Space Station's Destiny laboratory, European Space Agency astronaut Andre Kuipers, Expedition 30 flight engineer, routes video cable for the High Rate Communication System (HRCS). HRCS will allow for two additional space-to-ground audio channels and two additional downlink video channels.

  5. The Use of Video-Taped Lectures and Web-Based Communications in Teaching: A Distance-Teaching and Cross-Atlantic Collaboration Experiment.

    ERIC Educational Resources Information Center

    Herder, P. M.; Subrahmanian, E.; Talukdar, S.; Turk, A. L.; Westerberg, A. W.

    2002-01-01

    Explains distance education approach applied to the 'Engineering Design Problem Formulation' course simultaneously at the Delft University of Technology (the Netherlands) and at Carnegie Mellon University (CMU, Pittsburgh, USA). Uses video taped lessons, video conferencing, electronic mails and web-accessible document management system LIRE in the…

  6. YouTube Fridays: Student Led Development of Engineering Estimate Problems

    ERIC Educational Resources Information Center

    Liberatore, Matthew W.; Vestal, Charles R.; Herring, Andrew M.

    2012-01-01

    YouTube Fridays devotes a small fraction of class time to student-selected videos related to the course topic, e.g., thermodynamics. The students then write and solve a homework-like problem based on the events in the video. Three recent pilots involving over 300 students have developed a database of videos and questions that reinforce important…

  7. Student Interactions with Online Videos in a Large Hybrid Mechanics of Materials Course

    ERIC Educational Resources Information Center

    Ahn, Benjamin; Bir, Devayan D.

    2018-01-01

    The hybrid course format has gained popularity in the engineering education community over the past few years. Although studies have examined student outcomes and attitudes toward hybrid courses, a limited number of studies have examined how students interact with online videos in hybrid courses. This study examined the video-viewing behaviors of…

  8. Learned saliency transformations for gaze guidance

    NASA Astrophysics Data System (ADS)

    Vig, Eleonora; Dorr, Michael; Barth, Erhardt

    2011-03-01

    The saliency of an image or video region indicates how likely it is that the viewer of the image or video fixates that region due to its conspicuity. An intriguing question is how we can change the video region to make it more or less salient. Here, we address this problem by using a machine learning framework to learn from a large set of eye movements collected on real-world dynamic scenes how to alter the saliency level of the video locally. We derive saliency transformation rules by performing spatio-temporal contrast manipulations (on a spatio-temporal Laplacian pyramid) on the particular video region. Our goal is to improve visual communication by designing gaze-contingent interactive displays that change, in real time, the saliency distribution of the scene.
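The contrast manipulation the authors perform on a Laplacian band can be illustrated with a minimal one-level sketch. The paper uses a full spatio-temporal Laplacian pyramid with learned transformation rules; the single level, box blur, and function names below are simplifying assumptions for illustration only:

```python
import numpy as np

def box_blur(img, k=3):
    """Simple separable box blur (a stand-in for the Gaussian filter
    normally used when building a Laplacian pyramid)."""
    pad = k // 2
    padded = np.pad(img, pad, mode='edge')
    kernel = np.ones(k) / k
    # Blur along rows, then along columns.
    tmp = np.apply_along_axis(lambda r: np.convolve(r, kernel, 'valid'), 1, padded)
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, 'valid'), 0, tmp)

def adjust_saliency(img, gain):
    """Scale the band-pass (Laplacian) component by `gain`:
    gain > 1 raises local contrast (more salient), gain < 1 lowers it.
    One pyramid level only -- a sketch of the idea, not the paper's method."""
    low = box_blur(img)
    lap = img - low          # band-pass residual (one Laplacian level)
    return low + gain * lap  # recombine with modified contrast
```

With `gain=1.0` the image is reconstructed unchanged; a gaze-contingent display would vary `gain` per region in real time, guided by the learned saliency model.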

  9. Video stereo-laparoscopy system

    NASA Astrophysics Data System (ADS)

    Xiang, Yang; Hu, Jiasheng; Jiang, Huilin

    2006-01-01

Minimally invasive surgery (MIS) has contributed significantly to patient care by reducing the morbidity associated with more invasive procedures. MIS procedures have become standard treatment for gallbladder disease and some abdominal malignancies. The imaging system has played a major role in the evolving field of MIS. The image needs good resolution and large magnification and, especially, must provide depth cues while remaining flicker-free and suitably bright. The video stereo-laparoscopy system can meet these demands of the doctors. This paper introduces a 3D video laparoscope with the following characteristics: field frequency 100 Hz, depth space 150 mm, resolution 10 lp/mm. The working principle of the system is introduced in detail, and the optical system and time-division stereo-display system are described briefly. The system's focusing lens images onto the CCD chip; the optical signal is converted into a video signal, digitized through the A/D converter of the image processing system, and the polarized image is displayed on the monitor screen through the liquid crystal shutters. Wearing polarized glasses, the doctors can watch a flicker-free 3D image of the tissue or organ. The 3D video laparoscope system has been applied in the MIS field and praised by doctors. Compared with the traditional 2D video laparoscopy system, it has merits such as reducing the time of surgery, reducing surgical complications, and shortening training time.

  10. Autonomous spacecraft rendezvous and docking

    NASA Technical Reports Server (NTRS)

    Tietz, J. C.; Almand, B. J.

    1985-01-01

    A storyboard display is presented which summarizes work done recently in design and simulation of autonomous video rendezvous and docking systems for spacecraft. This display includes: photographs of the simulation hardware, plots of chase vehicle trajectories from simulations, pictures of the docking aid including image processing interpretations, and drawings of the control system strategy. Viewgraph-style sheets on the display bulletin board summarize the simulation objectives, benefits, special considerations, approach, and results.

  11. Autonomous spacecraft rendezvous and docking

    NASA Astrophysics Data System (ADS)

    Tietz, J. C.; Almand, B. J.

    A storyboard display is presented which summarizes work done recently in design and simulation of autonomous video rendezvous and docking systems for spacecraft. This display includes: photographs of the simulation hardware, plots of chase vehicle trajectories from simulations, pictures of the docking aid including image processing interpretations, and drawings of the control system strategy. Viewgraph-style sheets on the display bulletin board summarize the simulation objectives, benefits, special considerations, approach, and results.

  12. Military display performance parameters

    NASA Astrophysics Data System (ADS)

    Desjardins, Daniel D.; Meyer, Frederick

    2012-06-01

    The military display market is analyzed in terms of four of its segments: avionics, vetronics, dismounted soldier, and command and control. Requirements are summarized for a number of technology-driving parameters, to include luminance, night vision imaging system compatibility, gray levels, resolution, dimming range, viewing angle, video capability, altitude, temperature, shock and vibration, etc., for direct-view and virtual-view displays in cockpits and crew stations. Technical specifications are discussed for selected programs.

  13. [Influence of different lighting levels at workstations with video display terminals on operators' work efficiency].

    PubMed

    Janosik, Elzbieta; Grzesik, Jan

    2003-01-01

The aim of this work was to evaluate the influence of different lighting levels at workstations with video display terminals (VDTs) on the course of the operators' visual work, and to determine the optimal levels of lighting at VDT workstations. For two kinds of tasks (entering figures from a typescript and editing text displayed on the screen), the work capacity, the degree of visual strain, and the operators' subjective symptoms were determined for four lighting levels (200, 300, 500 and 750 lx). It was found that work at VDT workstations may overload the visual system and cause eye complaints as well as reduced accommodation or convergence strength. It was also noted that editing text displayed on the screen is more burdensome for operators than entering figures from a typescript. Moreover, the examination results showed that the lighting at VDT workstations should be higher than 200 lx, and that 300 lx makes the work conditions most comfortable during the entry of figures from a typescript, and 500 lx during the editing of text displayed on the screen.

  14. A teleconference with three-dimensional surgical video presentation on the 'usual' Internet.

    PubMed

    Obuchi, Toshiro; Moroga, Toshihiko; Nakamura, Hiroshige; Shima, Hiroji; Iwasaki, Akinori

    2015-03-01

    Endoscopic surgery employing three-dimensional (3D) video images, such as a robotic surgery, has recently become common. However, the number of opportunities to watch such actual 3D videos is still limited due to many technical difficulties associated with showing 3D videos in front of an audience. A teleconference with 3D video presentations of robotic surgeries was held between our institution and a distant institution using a commercially available telecommunication appliance on the 'usual' Internet. Although purpose-built video displays and 3D glasses were necessary, no technical problems occurred during the presentation and discussion. This high-definition 3D telecommunication system can be applied to discussions about and education on 3D endoscopic surgeries for many surgeons, even in distant places, without difficulty over the usual Internet connection.

  15. Simple video format for mobile applications

    NASA Astrophysics Data System (ADS)

    Smith, John R.; Miao, Zhourong; Li, Chung-Sheng

    2000-04-01

With the advent of pervasive computing, there is a growing demand for enabling multimedia applications on mobile devices. Large numbers of pervasive computing devices, such as personal digital assistants (PDAs), hand-held computers (HHCs), smart phones, portable audio players, automotive computing devices, and wearable computers are gaining access to online information sources. However, pervasive computing devices are often constrained along a number of dimensions, such as processing power, local storage, display size and depth, connectivity, and communication bandwidth, which makes it difficult to access rich image and video content. In this paper, we report on our initial efforts in designing a simple scalable video format with low decoding and transcoding complexity for pervasive computing. The goal is to enable image and video access for mobile applications such as electronic catalog shopping, video conferencing, remote surveillance, and video mail using pervasive computing devices.

  16. Advances in display technology III; Proceedings of the Meeting, Los Angeles, CA, January 18, 19, 1983

    NASA Astrophysics Data System (ADS)

    Schlam, E.

    1983-01-01

Human factors in visible displays are discussed, taking into account an introduction to color vision, a laser optometric assessment of visual display viewability, the quantification of color contrast, human performance evaluations of digital image quality, visual problems of office video display terminals, and contemporary problems in airborne displays. Other topics considered are related to electroluminescent technology, liquid crystal and related technologies, plasma technology, and display terminals and systems. Attention is given to the application of electroluminescent technology to personal computers, electroluminescent driving techniques, thin film electroluminescent devices with memory, the fabrication of very large electroluminescent displays, the operating properties of thermally addressed dye switching liquid crystal displays, light field dichroic liquid crystal displays for very large area displays, and hardening military plasma displays for a nuclear environment.

  17. Are Health Videos from Hospitals, Health Organizations, and Active Users Available to Health Consumers? An Analysis of Diabetes Health Video Ranking in YouTube

    PubMed Central

    Borras-Morell, Jose-Enrique; Martinez-Millana, Antonio; Karlsen, Randi

    2017-01-01

    Health consumers are increasingly using the Internet to search for health information. The existence of overloaded, inaccurate, obsolete, or simply incorrect health information available on the Internet is a serious obstacle for finding relevant and good-quality data that actually helps patients. Search engines of multimedia Internet platforms are thought to help users to find relevant information according to their search. But, is the information recovered by those search engines from quality sources? Is the health information uploaded from reliable sources, such as hospitals and health organizations, easily available to patients? The availability of videos is directly related to the ranking position in YouTube search. The higher the ranking of the information is, the more accessible it is. The aim of this study is to analyze the ranking evolution of diabetes health videos on YouTube in order to discover how videos from reliable channels, such as hospitals and health organizations, are evolving in the ranking. The analysis was done by tracking the ranking of 2372 videos on a daily basis during a 30-day period using 20 diabetes-related queries. Our conclusions are that the current YouTube algorithm favors the presence of reliable videos in upper rank positions in diabetes-related searches. PMID:28243314

  18. Are Health Videos from Hospitals, Health Organizations, and Active Users Available to Health Consumers? An Analysis of Diabetes Health Video Ranking in YouTube.

    PubMed

    Fernandez-Llatas, Carlos; Traver, Vicente; Borras-Morell, Jose-Enrique; Martinez-Millana, Antonio; Karlsen, Randi

    2017-01-01

    Health consumers are increasingly using the Internet to search for health information. The existence of overloaded, inaccurate, obsolete, or simply incorrect health information available on the Internet is a serious obstacle for finding relevant and good-quality data that actually helps patients. Search engines of multimedia Internet platforms are thought to help users to find relevant information according to their search. But, is the information recovered by those search engines from quality sources? Is the health information uploaded from reliable sources, such as hospitals and health organizations, easily available to patients? The availability of videos is directly related to the ranking position in YouTube search. The higher the ranking of the information is, the more accessible it is. The aim of this study is to analyze the ranking evolution of diabetes health videos on YouTube in order to discover how videos from reliable channels, such as hospitals and health organizations, are evolving in the ranking. The analysis was done by tracking the ranking of 2372 videos on a daily basis during a 30-day period using 20 diabetes-related queries. Our conclusions are that the current YouTube algorithm favors the presence of reliable videos in upper rank positions in diabetes-related searches.

  19. Backscatter absorption gas imaging system

    DOEpatents

    McRae, Jr., Thomas G.

    1985-01-01

    A video imaging system for detecting hazardous gas leaks. Visual displays of invisible gas clouds are produced by radiation augmentation of the field of view of an imaging device by radiation corresponding to an absorption line of the gas to be detected. The field of view of an imager is irradiated by a laser. The imager receives both backscattered laser light and background radiation. When a detectable gas is present, the backscattered laser light is highly attenuated, producing a region of contrast or shadow on the image. A flying spot imaging system is utilized to synchronously irradiate and scan the area to lower laser power requirements. The imager signal is processed to produce a video display.
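
    The shadow-detection idea, flagging pixels where the backscattered laser return is strongly attenuated relative to a gas-free reference, can be illustrated with a minimal sketch. The threshold and intensity values are hypothetical, not taken from the patent:

```python
def gas_shadow_mask(reference, current, attenuation_threshold=0.5):
    """Flag pixels whose backscattered-laser return has dropped well below
    the gas-free reference level, indicating absorption by the target gas.

    reference, current: 2D lists of backscatter intensities (same shape).
    A pixel is flagged when current < attenuation_threshold * reference.
    """
    return [[cur < attenuation_threshold * ref
             for ref, cur in zip(ref_row, cur_row)]
            for ref_row, cur_row in zip(reference, current)]

reference = [[100, 100, 100],
             [100, 100, 100]]
current   = [[ 95,  30,  98],   # middle column attenuated by the gas cloud
             [ 97,  25,  99]]
print(gas_shadow_mask(reference, current))
# [[False, True, False], [False, True, False]]
```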

  20. Backscatter absorption gas imaging system

    DOEpatents

    McRae, T.G. Jr.

    A video imaging system for detecting hazardous gas leaks. Visual displays of invisible gas clouds are produced by radiation augmentation of the field of view of an imaging device by radiation corresponding to an absorption line of the gas to be detected. The field of view of an imager is irradiated by a laser. The imager receives both backscattered laser light and background radiation. When a detectable gas is present, the backscattered laser light is highly attenuated, producing a region of contrast or shadow on the image. A flying spot imaging system is utilized to synchronously irradiate and scan the area to lower laser power requirements. The imager signal is processed to produce a video display.

  1. Intersection video detection field handbook : an update.

    DOT National Transportation Integrated Search

    2010-12-01

    This handbook is intended to assist engineers and technicians with the design, layout, and operation of a video imaging vehicle detection system (VIVDS). This assistance is provided in three ways. First, the handbook identifies the optimal detect...

  2. DC-8 Scanning Lidar Characterization of Aircraft Contrails and Cirrus Clouds

    NASA Technical Reports Server (NTRS)

    Uthe, Edward E.; Nielsen, Norman B.; Oseberg, Terje E.

    1998-01-01

    An angular-scanning large-aperture (36 cm) backscatter lidar was developed and deployed on the NASA DC-8 research aircraft as part of the SUCCESS (Subsonic Aircraft: Contrail and Cloud Effects Special Study) program. The lidar viewing direction could be scanned continuously during aircraft flight from vertically upward to forward to vertically downward, or the viewing could be at fixed angles. Real-time pictorial displays generated from the lidar signatures were broadcast on the DC-8 video network and used to locate clouds and contrails above, ahead of, and below the DC-8 to depict their spatial structure and to help select DC-8 altitudes for achieving optimum sampling by onboard in situ sensors. Several lidar receiver systems and real-time data displays were evaluated to help extend in situ data into vertical dimensions and to help establish possible lidar configurations and applications on future missions. Digital lidar signatures were recorded on 8 mm Exabyte tape and generated real-time displays were recorded on 8mm video tape. The digital records were transcribed in a common format to compact disks to facilitate data analysis and delivery to SUCCESS participants. Data selected from the real-time display video recordings were processed for publication-quality displays incorporating several standard lidar data corrections. Data examples are presented that illustrate: (1) correlation with particulate, gas, and radiometric measurements made by onboard sensors, (2) discrimination and identification between contrails observed by onboard sensors, (3) high-altitude (13 km) scattering layer that exhibits greatly enhanced vertical backscatter relative to off-vertical backscatter, and (4) mapping of vertical distributions of individual precipitating ice crystals and their capture by cloud layers. An angular scan plotting program was developed that accounts for DC-8 pitch and velocity.
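
    A scan-plotting program that accounts for aircraft pitch needs a frame conversion of this kind. The following is a simplified two-dimensional sketch (vertical-plane, flat-earth geometry, pitch only, no roll or velocity correction), not the actual SUCCESS software:

```python
import math

def earth_relative_elevation(scan_angle_deg, pitch_deg):
    """Convert a lidar scan angle measured in the aircraft frame
    (0 = forward along the fuselage axis, +90 = straight up,
    -90 = straight down) to an earth-relative elevation angle by
    adding the aircraft pitch. Illustrative 2D geometry only.
    """
    return scan_angle_deg + pitch_deg

def range_to_altitude(lidar_range_m, scan_angle_deg, pitch_deg, aircraft_alt_m):
    """Altitude of the scattering volume for a given slant range."""
    elev = math.radians(earth_relative_elevation(scan_angle_deg, pitch_deg))
    return aircraft_alt_m + lidar_range_m * math.sin(elev)

# Beam pointed 30 deg above the fuselage axis while the DC-8 pitches up 3 deg:
print(round(range_to_altitude(5000, 30.0, 3.0, 11000), 1))
```

    Ignoring pitch here would misplace the scattering layer by hundreds of meters at these ranges, which is why the correction matters for mapping cloud and contrail structure.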

  3. Emotional Processing of Infants Displays in Eating Disorders

    PubMed Central

    Cardi, Valentina; Corfield, Freya; Leppanen, Jenni; Rhind, Charlotte; Deriziotis, Stephanie; Hadjimichalis, Alexandra; Hibbs, Rebecca; Micali, Nadia; Treasure, Janet

    2014-01-01

    Aim: The aim of this study is to examine emotional processing of infant displays in people with Eating Disorders (EDs). Background: Social and emotional factors are implicated as causal and maintaining factors in EDs. Difficulties in emotional regulation have mainly been studied in relation to adult interactions, with less attention given to interactions with infants. Method: A sample of 138 women was recruited, of whom 49 suffered from Anorexia Nervosa (AN), 16 from Bulimia Nervosa (BN), and 73 were healthy controls (HCs). Attentional responses to happy and sad infant faces were tested with the visual probe detection task. Emotional identification of, and reactivity to, infant displays were measured using self-report measures. Facial expressions in response to video clips depicting sad, happy, and frustrated infants were also recorded. Results: No significant differences between groups were observed in the attentional response to infant photographs. However, there was a trend for patients to disengage from happy faces. People with EDs also reported lower positive ratings of happy infant displays and greater subjective negative reactions to sad infants. Finally, patients showed a significantly lower production of facial expressions, especially in response to the happy infant video clip. Insecure attachment was negatively correlated with positive facial expressions displayed in response to the happy infant and positively correlated with the intensity of negative emotions experienced in response to the sad infant video clip. Conclusion: People with EDs do not have marked abnormalities in their attentional processing of infant emotional faces. However, they do show a reduction in facial affect, particularly in response to happy infants. They also report greater negative reactions to sadness and rate positive emotions less intensely than HCs. This pattern of emotional responsivity suggests abnormalities in social reward sensitivity and might indicate new treatment targets. PMID:25463051

  4. Counselor Nonverbal Self-Disclosure and Fear of Intimacy during Employment Counseling: An Aptitude-Treatment Interaction Illustration

    ERIC Educational Resources Information Center

    Carrein, Cindy; Bernaud, Jean-Luc

    2010-01-01

    This study investigated the effects of nonverbal self-disclosure within the dynamic of aptitude-treatment interaction. Participants (N = 94) watched a video of a career counseling session aimed at helping the jobseeker to find employment. The video was then edited to display 3 varying degrees of nonverbal self-disclosure. In conjunction with the…

  5. VID-R and SCAN: Tools and Methods for the Automated Analysis of Visual Records.

    ERIC Educational Resources Information Center

    Ekman, Paul; And Others

    The VID-R (Visual Information Display and Retrieval) system that enables computer-aided analysis of visual records is composed of a film-to-television chain, two videotape recorders with complete remote control of functions, a video-disc recorder, three high-resolution television monitors, a teletype, a PDP-8, a video and audio interface, three…

  6. Glass Vision 3D: Digital Discovery for the Deaf

    ERIC Educational Resources Information Center

    Parton, Becky Sue

    2017-01-01

    Glass Vision 3D was a grant-funded project focused on developing and researching a Google Glass app that would allow young Deaf children to look at the QR code of an object in the classroom and see an augmented reality projection that displays a related American Sign Language (ASL) video. Twenty-five objects and videos were prepared and tested…

  7. Engine monitoring display study

    NASA Technical Reports Server (NTRS)

    Hornsby, Mary E.

    1992-01-01

    The current study is part of a larger NASA effort to develop displays for an engine-monitoring system to enable the crew to monitor engine parameter trends more effectively. The objective was to evaluate the operational utility of adding three types of information to the basic Boeing Engine Indicating and Crew Alerting System (EICAS) display formats: alphanumeric alerting messages for engine parameters whose values exceed caution or warning limits; alphanumeric messages to monitor engine parameters that deviate from expected values; and a graphic depiction of the range of expected values for current conditions. Ten training and line pilots each flew 15 simulated flight scenarios with five variants of the basic EICAS format; these variants included different combinations of the added information. The pilots detected engine problems more quickly when engine alerting messages were included in the display; adding a graphic depiction of the range of expected values did not affect detection speed. The pilots rated both types of alphanumeric messages (alert and monitor parameter) as more useful and easier to interpret than the graphic depiction. Integrating engine parameter messages into the EICAS alerting system appears to be both useful and preferred.

  8. Blinded evaluation of the effects of high definition and magnification on perceived image quality in laryngeal imaging.

    PubMed

    Otto, Kristen J; Hapner, Edie R; Baker, Michael; Johns, Michael M

    2006-02-01

    Advances in commercial video technology have improved office-based laryngeal imaging. This study investigates the perceived image quality of a true high-definition (HD) video camera and the effect of magnification on laryngeal videostroboscopy. We performed a prospective, dual-armed, single-blinded analysis of a standard laryngeal videostroboscopic examination comparing 3 separate add-on camera systems: a 1-chip charge-coupled device (CCD) camera, a 3-chip CCD camera, and a true 720p (progressive scan) HD camera. Displayed images were controlled for magnification and image size (20-inch [50-cm] display, red-green-blue, and S-video cable for 1-chip and 3-chip cameras; digital visual interface cable and HD monitor for HD camera). Ten blinded observers were then asked to rate the following 5 items on a 0-to-100 visual analog scale: resolution, color, ability to see vocal fold vibration, sense of depth perception, and clarity of blood vessels. Eight unblinded observers were then asked to rate the difference in perceived resolution and clarity of laryngeal examination images when displayed on a 10-inch (25-cm) monitor versus a 42-inch (105-cm) monitor. A visual analog scale was used. These monitors were controlled for actual resolution capacity. For each item evaluated, randomized block design analysis demonstrated that the 3-chip camera scored significantly better than the 1-chip camera (p < .05). For the categories of color and blood vessel discrimination, the 3-chip camera scored significantly better than the HD camera (p < .05). For magnification alone, observers rated the 42-inch monitor statistically better than the 10-inch monitor. The expense of new medical technology must be judged against its added value. This study suggests that HD laryngeal imaging may not add significant value over currently available video systems, in perceived image quality, when a small monitor is used. 
Although differences in clarity between standard and HD cameras may not be readily apparent on small displays, a large display size coupled with HD technology may impart improved diagnosis of subtle vocal fold lesions and vibratory anomalies.

  9. Use of videos for Distribution Construction and Maintenance (DC&M) training

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Long, G.M.

    This paper presents the results of a survey taken among members of the American Gas Association (AGA) Distribution Construction and Maintenance (DC&M) committee to gauge the extent, sources, mode of use, and degree of satisfaction with videos as a training aid in distribution construction and maintenance skills. Also cites AGA Engineering Technical Note DCM-88-3-1 as a catalog of the videos listed by respondents to the survey. Comments on the various sources of training videos and the characteristics of videos from each. The conference presentation included a showing of sample video segments from these various sources. 1 fig.

  10. Video games: a route to large-scale STEM education?

    PubMed

    Mayo, Merrilea J

    2009-01-02

    Video games have enormous mass appeal, reaching audiences in the hundreds of thousands to millions. They also embed many pedagogical practices known to be effective in other environments. This article reviews the sparse but encouraging data on learning outcomes for video games in science, technology, engineering, and math (STEM) disciplines, then reviews the infrastructural obstacles to wider adoption of this new medium.

  11. Mapping Self-Guided Learners' Searches for Video Tutorials on YouTube

    ERIC Educational Resources Information Center

    Garrett, Nathan

    2016-01-01

    While YouTube has a wealth of educational videos, how self-guided learners use these resources has not been fully described. An analysis of search engine queries for help with the use of Microsoft Excel shows that few users search for specific features or functions but instead use very general terms. Because the same videos are returned in…

  12. Objectively Determining the Educational Potential of Computer and Video-Based Courseware; or, Producing Reliable Evaluations Despite the Dog and Pony Show.

    ERIC Educational Resources Information Center

    Barrett, Andrew J.; And Others

    The Center for Interactive Technology, Applications, and Research at the College of Engineering of the University of South Florida (Tampa) has developed objective and descriptive evaluation models to assist in determining the educational potential of computer and video courseware. The computer-based courseware evaluation model and the video-based…

  13. Contour Detector and Data Acquisition System for the Left Ventricular Outline

    NASA Technical Reports Server (NTRS)

    Reiber, J. H. C. (Inventor)

    1978-01-01

    A real-time contour detector and data acquisition system is described for an angiographic apparatus having a video scanner for converting an X-ray image of a structure, characterized by a change in brightness level compared with its surroundings, into video format and displaying the X-ray image in recurring video fields. The real-time contour detector and data acquisition system includes track-and-hold circuits; a reference-level analog computer circuit; an analog comparator; a digital processor; a field memory; and a computer interface.

  14. The Video Display Terminal Health Hazard Debate.

    ERIC Educational Resources Information Center

    Clark, Carolyn A.

    A study was conducted to identify the potential health hazards of visual display terminals for employees and then to develop a list of recommendations for improving the physical conditions of the workplace. Data were collected by questionnaires from 55 employees in 10 word processing departments in Topeka, Kansas. A majority of the employees…

  15. Perceived Intensity of Emotional Point-Light Displays Is Reduced in Subjects with ASD

    ERIC Educational Resources Information Center

    Krüger, Britta; Kaletsch, Morten; Pilgramm, Sebastian; Schwippert, Sven-Sören; Hennig, Jürgen; Stark, Rudolf; Lis, Stefanie; Gallhofer, Bernd; Sammer, Gebhard; Zentgraf, Karen; Munzert, Jörn

    2018-01-01

    One major characteristic of autism spectrum disorder (ASD) is problems with social interaction and communication. The present study explored ASD-related alterations in perceiving emotions expressed via body movements. 16 participants with ASD and 16 healthy controls observed video scenes of human interactions conveyed by point-light displays. They…

  16. Free viewpoint TV and its international standardization

    NASA Astrophysics Data System (ADS)

    Tanimoto, Masayuki

    2009-05-01

    We have developed a new type of television named FTV (Free-viewpoint TV). FTV is an innovative visual medium that enables us to view a 3D scene while freely changing our viewpoint. We proposed the concept of FTV and constructed the world's first real-time system covering the complete chain of operation from image capture to display. We also realized FTV on a single PC and FTV with free listening-point audio. FTV is based on the ray-space method, which represents one ray in real space as one point in the ray-space. We have also developed new ray-capture and display technologies, such as a 360-degree mirror-scan ray-capturing system and a 360-degree ray-reproducing display. MPEG regarded FTV as the most challenging 3D medium and started international standardization activities for FTV. The first phase of FTV is MVC (Multi-view Video Coding) and the second phase is 3DV (3D Video). MVC was completed in March 2009. 3DV is a standard that targets serving a variety of 3D displays. It is expected to be completed within the next two years.

  17. Human Factors Engineering Program Review Model

    DTIC Science & Technology

    2004-02-01

    References ANSI HFS-100: American National Standard for Human Factors Engineering of Visual Display Terminal Workstations (ANSI HFS-100-1988), Human Factors Society, Santa Monica, California.

  18. A new display stream compression standard under development in VESA

    NASA Astrophysics Data System (ADS)

    Jacobson, Natan; Thirumalai, Vijayaraghavan; Joshi, Rajan; Goel, James

    2017-09-01

    The Advanced Display Stream Compression (ADSC) codec project is in development in response to a call for technologies from the Video Electronics Standards Association (VESA). This codec targets visually lossless compression of display streams at a high compression rate (typically 6 bits/pixel) for mobile/VR/HDR applications. Functionality of the ADSC codec is described in this paper, and subjective trials results are provided using the ISO 29170-2 testing protocol.
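
    The 6 bits/pixel target implies straightforward link-rate arithmetic. The sketch below assumes a 24 bpp (8-bit RGB) source and a 4K/60 stream purely for illustration; these figures are not from the paper:

```python
def link_rate_gbps(width, height, fps, bits_per_pixel):
    """Raw video payload rate in Gbit/s (ignores blanking and overhead)."""
    return width * height * fps * bits_per_pixel / 1e9

uncompressed = link_rate_gbps(3840, 2160, 60, 24)  # 8-bit RGB source
adsc         = link_rate_gbps(3840, 2160, 60, 6)   # 6 bpp compressed rate
print(round(uncompressed, 2), round(adsc, 2), uncompressed / adsc)
# 11.94 2.99 4.0
```

    A 24 bpp source compressed to 6 bpp is a fixed 4:1 ratio, which is what makes the codec attractive for bandwidth-constrained mobile and VR display links.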

  19. Hip Hop Dance Experience Linked to Sociocognitive Ability.

    PubMed

    Bonny, Justin W; Lindberg, Jenna C; Pacampara, Marc C

    2017-01-01

    Expertise in gaming (e.g., chess, video games) and kinesthetic (e.g., sports, classical dance) activities has been found to be linked with specific cognitive skills. Some of these skills (working memory, mental rotation, problem solving) are linked to higher performance in science, technology, engineering, and math (STEM) disciplines. In the present study, we examined whether experience in a different activity, hip hop dance, is also linked to cognitive abilities connected with STEM skills, as well as to social cognition ability. Dancers who varied in hip hop and other dance style experience were presented with a set of computerized tasks that assessed working memory capacity, mental rotation speed, problem solving efficiency, and theory of mind. We found that, when controlling for demographic factors and other dance style experience, those with greater hip hop dance experience were faster at mentally rotating images of hands at greater angle disparities, and there was a trend toward greater accuracy at identifying positive emotions displayed by cropped images of human faces. We suggest that hip hop dance, like other more technical activities such as video gameplay, taps specific cognitive abilities that underlie STEM skills. Furthermore, we suggest that hip hop dance experience can be used to reach populations who may not otherwise be interested in other kinesthetic or gaming activities, and potentially to enhance select sociocognitive skills.

  20. Design and implementation of a flipped classroom learning environment in the biomedical engineering context.

    PubMed

    Corrias, Alberto; Cho Hong, James Goh

    2015-01-01

    The design and implementation of a learning environment that leverages various technologies is presented. The context is an undergraduate core engineering course within the biomedical engineering curriculum. The topic of the course is data analysis in biomedical engineering problems. One of the key ideas of this study is to confine the most mathematical and statistical aspects of data analysis to prerecorded video lectures. Students are asked to watch the video lectures before coming to class. Since the classroom session does not need to cover the mathematical theory, the time is spent on a selected real-world scenario in the field of biomedical engineering that exposes students to an actual application of the theory. The weekly cycle is concluded with a hands-on tutorial session in the computer rooms. A potential problem would arise in such a learning environment if the students did not follow the recommendation of watching the video lecture before coming to class. In an attempt to limit these occurrences, two key instruments were put in place: a set of online self-assessment questions that students are asked to take before the classroom session and a simple rewards system during the classroom session. Thanks to modern learning analytics tools, we were able to show that, on average, 57.9% of students followed the recommendation of watching the video lecture before class. The efficacy of the learning environment was assessed through various means. A survey was conducted among the students and the gathered data support the view that the learning environment was well received by the students. Attempts were made to quantify the impacts on learning of the proposed measures by taking into account the results of selected questions of the final examination of the course.
Although the presence of confounding factors demands caution in the interpretation, these data seem to indicate a possible positive effect of the use of video lectures in this technologically enhanced learning environment.

  1. Passive ultra-brief video training improves performance of compression-only cardiopulmonary resuscitation.

    PubMed

    Benoit, Justin L; Vogele, Jennifer; Hart, Kimberly W; Lindsell, Christopher J; McMullan, Jason T

    2017-06-01

    Bystander compression-only cardiopulmonary resuscitation (CPR) improves survival after out-of-hospital cardiac arrest. To broaden CPR training, 1-2 min ultra-brief videos have been disseminated via the Internet and television. Our objective was to determine whether participants passively exposed to a televised ultra-brief video perform CPR better than unexposed controls. This before-and-after study was conducted with non-patients in an urban Emergency Department waiting room. The intervention was an ultra-brief CPR training video displayed via closed-circuit television 3-6 times/hour. Participants were unaware of the study and not told to watch the video. Pre-intervention, no video was displayed. Participants were asked to demonstrate compression-only CPR on a manikin. Performance was scored based on critical actions: check for responsiveness, call for help, begin compressions immediately, and correct hand placement, compression rate and depth. The primary outcome was the proportion of participants who performed all actions correctly. There were 50 control and 50 exposed participants. Mean age was 37, 51% were African-American, 52% were female, and 10% self-reported current CPR certification. There were no statistically significant differences in baseline characteristics between groups. The number of participants who performed all actions correctly was 0 (0%) control vs. 10 (20%) exposed (difference 20%, 95% confidence interval [CI] 8.9-31.1%, p<0.001). Correct compression rate and depth were 11 (22%) control vs. 22 (44%) exposed (22%, 95% CI 4.1-39.9%, p=0.019), and 5 (10%) control vs. 15 (30%) exposed (20%, 95% CI 4.8-35.2%, p=0.012), respectively. Passive ultra-brief video training is associated with improved performance of compression-only CPR. Copyright © 2017 Elsevier B.V. All rights reserved.
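
    The reported differences and confidence intervals are consistent with a standard Wald interval for the difference of two independent binomial proportions, which can be reproduced as follows (a reconstruction for illustration, not the authors' published code):

```python
import math

def wald_diff_ci(x1, n1, x2, n2, z=1.96):
    """95% Wald confidence interval for the difference p1 - p2
    between two independent binomial proportions."""
    p1, p2 = x1 / n1, x2 / n2
    diff = p1 - p2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return diff, diff - z * se, diff + z * se

# All-actions-correct: 10/50 exposed vs. 0/50 control
diff, lo, hi = wald_diff_ci(10, 50, 0, 50)
print(f"{diff:.0%} (95% CI {lo:.1%} to {hi:.1%})")  # 20% (95% CI 8.9% to 31.1%)
```

    The same function reproduces the compression-rate comparison (22/50 vs. 11/50 gives a 22% difference, 95% CI 4.1% to 39.9%), matching the abstract.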

  2. The relative importance of different perceptual-cognitive skills during anticipation.

    PubMed

    North, Jamie S; Hope, Ed; Williams, A Mark

    2016-10-01

    We examined whether anticipation is underpinned by perceiving structured patterns or postural cues and whether the relative importance of these processes varied as a function of task constraints. Skilled and less-skilled soccer players completed anticipation paradigms in video-film and point light display (PLD) format. Skilled players anticipated more accurately regardless of display condition, indicating that both perception of structured patterns between players and postural cues contribute to anticipation. However, the Skill×Display interaction showed skilled players' advantage was enhanced in the video-film condition, suggesting that they make better use of postural cues when available during anticipation. We also examined anticipation as a function of proximity to the ball. When participants were near the ball, anticipation was more accurate for video-film than PLD clips, whereas when the ball was far away there was no difference between viewing conditions. Perceiving advance postural cues appears more important than structured patterns when the ball is closer to the observer, whereas the reverse is true when the ball is far away. Various perceptual-cognitive skills contribute to anticipation with the relative importance of perceiving structured patterns and advance postural cues being determined by task constraints and the availability of perceptual information. Copyright © 2016 Elsevier B.V. All rights reserved.

  3. An integrated port camera and display system for laparoscopy.

    PubMed

    Terry, Benjamin S; Ruppert, Austin D; Steinhaus, Kristen R; Schoen, Jonathan A; Rentschler, Mark E

    2010-05-01

    In this paper, we built and tested the port camera, a novel, inexpensive, portable, and battery-powered laparoscopic tool that integrates the components of a vision system with a cannula port. This new device 1) minimizes the invasiveness of laparoscopic surgery by combining a camera port and tool port; 2) reduces the cost of laparoscopic vision systems by integrating an inexpensive CMOS sensor and LED light source; and 3) enhances laparoscopic surgical procedures by mechanically coupling the camera, tool port, and liquid crystal display (LCD) screen to provide an on-patient visual display. The port camera video system was compared to two laparoscopic video systems: a standard resolution unit from Karl Storz (model 22220130) and a high definition unit from Stryker (model 1188HD). Brightness, contrast, hue, colorfulness, and sharpness were compared. The port camera video is superior to the Storz scope and approximately equivalent to the Stryker scope. An ex vivo study was conducted to measure the operative performance of the port camera. The results suggest that simulated tissue identification and biopsy acquisition with the port camera is as efficient as with a traditional laparoscopic system. The port camera was successfully used by a laparoscopic surgeon for exploratory surgery and liver biopsy during a porcine surgery, demonstrating initial surgical feasibility.

  4. Approach and Evaluation of a Mobile Video-Based and Location-Based Augmented Reality Platform for Information Brokerage

    NASA Astrophysics Data System (ADS)

    Dastageeri, H.; Storz, M.; Koukofikis, A.; Knauth, S.; Coors, V.

    2016-09-01

    Providing mobile location-based information for pedestrians faces many challenges. On the one hand, the accuracy of localisation indoors and outdoors is restricted by technical limitations of GPS and beacons. On the other, only a small display is available for presenting information and building a user interface. In addition, the software solution has to take the hardware characteristics of mobile devices into account during implementation in order to achieve minimum latency. This paper describes our approach, which combines image tracking with GPS or beacons to ensure orientation and precise localisation. To communicate information on Points of Interest (POIs), we chose Augmented Reality (AR). In this concept of operations, we used not only the display but also the acceleration and position sensors as a user interface. The paper goes into detail on the optimization of the image-tracking algorithms, the development of the video-based AR player for the Android platform, and the evaluation of videos as an AR element with a view to providing a good user experience. To set up content for the POIs, or even generate a tour, we used and extended the Open Geospatial Consortium (OGC) standard Augmented Reality Markup Language (ARML).

  5. NASA Lewis' Telescience Support Center Supports Orbiting Microgravity Experiments

    NASA Technical Reports Server (NTRS)

    Hawersaat, Bob W.

    1998-01-01

    The Telescience Support Center (TSC) at the NASA Lewis Research Center was developed to enable Lewis-based science teams and principal investigators to monitor and control experimental and operational payloads onboard the International Space Station. The TSC is a remote operations hub that can interface with other remote facilities, such as universities and industrial laboratories. As a pathfinder for International Space Station telescience operations, the TSC has incrementally developed an operational capability by supporting space shuttle missions. The TSC has evolved into an environment where experimenters and scientists can control and monitor the health and status of their experiments in near real time. Remote operations (or telescience) allow local scientists and their experiment teams to minimize their travel and maintain a local complement of expertise for hardware and software troubleshooting and data analysis. The TSC was designed, developed, and is operated by Lewis' Engineering and Technical Services Directorate and its support contractors, Analex Corporation and White's Information System, Inc. It is managed by Lewis' Microgravity Science Division. The TSC provides operational support in conjunction with the NASA Marshall Space Flight Center and NASA Johnson Space Center. It enables its customers to command, receive, and view telemetry; monitor the science video from their on-orbit experiments; and communicate over mission-support voice loops. Data can be received and routed to experimenter-supplied ground support equipment and/or to the TSC data system for display. Video teleconferencing capability and other video sources, such as NASA TV, are also available. The TSC has a full complement of standard services to aid experimenters in telemetry operations.

  6. Subjective quality of video sequences rendered on LCD with local backlight dimming at different lighting conditions

    NASA Astrophysics Data System (ADS)

    Mantel, Claire; Korhonen, Jari; Pedersen, Jesper M.; Bech, Søren; Andersen, Jakob Dahl; Forchhammer, Søren

    2015-01-01

    This paper focuses on the influence of ambient light on the perceived quality of videos displayed on a Liquid Crystal Display (LCD) with local backlight dimming. A subjective test assessing the quality of videos with two backlight dimming methods under three lighting conditions, i.e., no light, a low light level (5 lux), and a higher light level (60 lux), was organized to collect subjective data. Results show that participants prefer the method exploiting local dimming possibilities to the conventional full backlight, but that this preference varies depending on the ambient light level. The clear preference for one method under the low-light conditions decreases under high ambient light, confirming that ambient light significantly attenuates the perception of the leakage defect (light leaking through dark pixels). Results are also highly dependent on the content of the sequence, which can modulate the effect of the ambient light from having an important influence on the quality grades to no influence at all.

  7. STS-114 Flight Day 13 and 14 Highlights

    NASA Technical Reports Server (NTRS)

    2005-01-01

    On Flight Day 13, the crew of Space Shuttle Discovery on the STS-114 Return to Flight mission (Commander Eileen Collins, Pilot James Kelly, Mission Specialists Soichi Noguchi, Stephen Robinson, Andrew Thomas, Wendy Lawrence, and Charles Camarda) hears a weather report from Mission Control on conditions at the shuttle's possible landing sites. The video includes a view of a storm at sea. Noguchi appears in front of a banner for the Japanese Space Agency JAXA, displaying a baseball signed by Japanese MLB players, demonstrating origami, displaying other crafts, and playing the keyboard. The primary event on the video is an interview of the whole crew, in which they discuss the importance of their mission, lessons learned, shuttle operations, shuttle safety and repair, extravehicular activities (EVAs), astronaut training, and shuttle landing. Mission Control dedicates the song "A Piece of Sky" to the Shuttle crew, while the Earth is visible below the orbiter. The video ends with a view of the Earth limb lit against a dark background.

  8. Evaluation of a HDR image sensor with logarithmic response for mobile video-based applications

    NASA Astrophysics Data System (ADS)

    Tektonidis, Marco; Pietrzak, Mateusz; Monnin, David

    2017-10-01

    The performance of mobile video-based applications using conventional LDR (Low Dynamic Range) image sensors depends highly on the illumination conditions. As an alternative, HDR (High Dynamic Range) image sensors with logarithmic response are capable of acquiring illumination-invariant HDR images in a single shot. We have implemented a complete image processing framework for an HDR sensor, including preprocessing methods (nonuniformity correction (NUC), cross-talk correction (CTC), and demosaicing) as well as tone mapping (TM). We have evaluated the HDR sensor for video-based applications with respect to both image display and image analysis. Regarding display, we investigated the image intensity statistics over time; regarding image analysis, we assessed the number of feature correspondences between consecutive frames of temporal image sequences. For the evaluation we used HDR image data recorded from a vehicle on outdoor or combined outdoor/indoor itineraries, and we performed a comparison with corresponding conventional LDR image data.

  9. Video flowmeter

    DOEpatents

    Lord, D.E.; Carter, G.W.; Petrini, R.R.

    1983-08-02

    A video flowmeter is described that is capable of specifying flow nature and pattern and, at the same time, the quantitative value of the rate of volumetric flow. An image of a determinable volumetric region within a fluid containing entrained particles is formed and positioned by a rod optic lens assembly on the raster area of a low-light level television camera. The particles are illuminated by light transmitted through a bundle of glass fibers surrounding the rod optic lens assembly. Only particle images having speeds on the raster area below the raster line scanning speed may be used to form a video picture which is displayed on a video screen. The flowmeter is calibrated so that the locus of positions of origin of the video picture gives a determination of the volumetric flow rate of the fluid. 4 figs.
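    The imaging condition above — only particles whose image moves slower than the raster line scanning speed contribute to the video picture — can be expressed as a small check (the magnification parameter is a hypothetical stand-in for the actual scaling of the rod optic lens assembly):

    ```python
    def forms_video_image(particle_speed, magnification, line_scan_speed):
        """Return True if a particle's image on the camera raster moves
        slower than the raster line scanning speed, i.e. the particle can
        contribute to the displayed video picture.

        All speeds must be in consistent units (e.g. m/s); the particle's
        image speed is its speed in the fluid scaled by the optical
        magnification onto the raster area.
        """
        image_speed = particle_speed * magnification
        return image_speed < line_scan_speed
    ```

    Faster particles simply fail the condition and vanish from the picture, which is what makes the locus of visible particle images usable as a flow-rate calibration.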

  10. Sixty Symbols, by The University of Nottingham

    NASA Astrophysics Data System (ADS)

    MacIsaac, Dan

    2009-11-01

    Faculty at the University of Nottingham are continuing to develop short (5-10 minutes long) insightful video-streamed vignettes for the web. Their earlier sites: Test Tube: Behind the World of Science and the widely known Periodic Table of Videos (a video on each element in the periodic table featured in WebSights last semester) have been joined by a new effort from the faculty of Physics, Astronomy and Engineering-Sixty Symbols: Videos about the Symbols of Physics and Astronomy. I liked the vignette on chi myself.

  11. Reliability of trauma management videos on YouTube and their compliance with ATLS® (9th edition) guideline.

    PubMed

    Şaşmaz, M I; Akça, A H

    2017-06-01

    In this study, the reliability of trauma management scenario videos (in English) on YouTube and their compliance with Advanced Trauma Life Support (ATLS®) guidelines were investigated. The search was conducted on February 15, 2016 using the terms "assessment of trauma" and "management of trauma". All videos uploaded between January 2011 and June 2016 were viewed by two experienced emergency physicians. Data regarding the date of upload, the type of uploader, the duration of the video, and view counts were recorded. The videos were categorized according to video source and scores. The search yielded 880 videos, of which 813 were excluded by the researchers. The distribution of videos by year was found to be balanced. The scores of videos uploaded by an institution were higher than those of the other groups (p = 0.003). The findings of this study show that the majority of trauma management videos on YouTube are neither reliable nor compliant with ATLS guidelines and therefore cannot be recommended for educational purposes. These videos could only be used in public education after the necessary revisions are made.

  12. Improving land vehicle situational awareness using a distributed aperture system

    NASA Astrophysics Data System (ADS)

    Fortin, Jean; Bias, Jason; Wells, Ashley; Riddle, Larry; van der Wal, Gooitzen; Piacentino, Mike; Mandelbaum, Robert

    2005-05-01

    U.S. Army Research, Development, and Engineering Command (RDECOM) Communications-Electronics Research, Development and Engineering Center (CERDEC) Night Vision and Electronic Sensors Directorate (NVESD) has performed early work to develop a Distributed Aperture System (DAS). The DAS aims to improve the situational awareness of armored fighting vehicle crews under closed-hatch conditions. The concept is based on a plurality of sensors configured to create a day-and-night dome of surveillance, coupled with head-up displays slaved to the operator's head to give a "glass turret" feel. State-of-the-art image processing is used to produce multiple seamless hemispherical views simultaneously available to the vehicle commander, crew members, and dismounting infantry. On-the-move automatic cueing of multiple moving/pop-up low-silhouette threats is also provided, with the ability to save, revisit, and share past events. As a first step in this development program, a contract was awarded to United Defense to further develop the Eagle VisionTM system. The second-generation prototype features two camera heads, each comprising four high-resolution (2048x1536) color sensors and each covering a field of view of 270°h x 150°v. High-bandwidth digital links interface the camera heads with a field programmable gate array (FPGA) based custom processor developed by Sarnoff Corporation. The processor computes the hemispherical stitch and warp functions required for real-time, low-latency, immersive viewing (360°h x 120°v, 30° down) and generates up to six simultaneous extended graphics array (XGA) video outputs for independent display either on a helmet-mounted display (with associated head-tracking device) or a flat panel display (and joystick). The prototype is currently in its last stage of development and will be integrated on a vehicle for user evaluation and testing. Near-term improvements include the replacement of the color camera heads with a pixel-level fused combination of uncooled long-wave infrared (LWIR) and low-light-level intensified imagery. It is believed that the DAS will significantly increase situational awareness by providing users with a day-and-night, wide-area-coverage, immersive visualization capability.

  13. Aircraft Engine-Monitoring System And Display

    NASA Technical Reports Server (NTRS)

    Abbott, Terence S.; Person, Lee H., Jr.

    1992-01-01

    Proposed Engine Health Monitoring System and Display (EHMSD) provides enhanced means for pilot to control and monitor performances of engines. Processes raw sensor data into information meaningful to pilot. Provides graphical information about performance capabilities, current performance, and operational conditions in components or subsystems of engines. Provides means to control engine thrust directly and innovative means to monitor performance of engine system rapidly and reliably. Features reduce pilot workload and increase operational safety.

  14. Stennis Space Center's approach to liquid rocket engine health monitoring using exhaust plume diagnostics

    NASA Technical Reports Server (NTRS)

    Gardner, D. G.; Tejwani, G. D.; Bircher, F. E.; Loboda, J. A.; Van Dyke, D. B.; Chenevert, D. J.

    1991-01-01

    Details are presented of the approach used in a comprehensive program to utilize exhaust plume diagnostics for rocket engine health-and-condition monitoring and assessing SSME component wear and degradation. This approach incorporates both spectral and video monitoring of the exhaust plume. Video monitoring provides qualitative data for certain types of component wear while spectral monitoring allows both quantitative and qualitative information. Consideration is given to spectral identification of SSME materials and baseline plume emissions.

  15. Playing a first-person shooter video game induces neuroplastic change.

    PubMed

    Wu, Sijing; Cheng, Cho Kin; Feng, Jing; D'Angelo, Lisa; Alain, Claude; Spence, Ian

    2012-06-01

    Playing a first-person shooter (FPS) video game alters the neural processes that support spatial selective attention. Our experiment establishes a causal relationship between playing an FPS game and neuroplastic change. Twenty-five participants completed an attentional visual field task while we measured ERPs before and after playing an FPS video game for a cumulative total of 10 hr. Early visual ERPs sensitive to bottom-up attentional processes were little affected by video game playing for only 10 hr. However, participants who played the FPS video game and also showed the greatest improvement on the attentional visual field task displayed increased amplitudes in the later visual ERPs. These potentials are thought to index top-down enhancement of spatial selective attention via increased inhibition of distractors. Individual variations in learning were observed, and these differences show that not all video game players benefit equally, either behaviorally or in terms of neural change.

  16. Using a Low Cost Flight Simulation Environment for Interdisciplinary Education

    NASA Technical Reports Server (NTRS)

    Khan, M. Javed; Rossi, Marcia; Ali, Syed F.

    2004-01-01

    A multi-disciplinary and inter-disciplinary education is increasingly being emphasized for engineering undergraduates. However, the focus is often on interaction between engineering disciplines. This paper discusses the experience at Tuskegee University in providing interdisciplinary research experiences for undergraduate students in both Aerospace Engineering and Psychology through the use of a low-cost flight simulation environment. The PC-based environment runs low-cost off-the-shelf software and is configured for multiple out-of-the-window views and a synthetic head-down display with joystick, rudder, and throttle controls. While the environment is being used to investigate and evaluate various strategies for training novice pilots, students were involved to give them experience in conducting such interdisciplinary research. At the global inter-disciplinary level, these experiences included developing experimental designs and research protocols, considering human-participant ethical issues, and planning and executing the research studies. During the planning phase, students were apprised of the limitations of the software in its basic form and the enhancements needed to investigate human factors issues. A number of enhancements to the flight environment were then undertaken, from creating Excel macros for determining the performance of the 'pilots' to interacting with the software to provide various audio/video cues based on the experimental protocol. These enhancements involved understanding the flight model and performance, stability, and control issues. Throughout this process, discussions of data analysis included a focus from a human factors perspective as well as an engineering point of view.

  17. Joint force protection advanced security system (JFPASS) "the future of force protection: integrate and automate"

    NASA Astrophysics Data System (ADS)

    Lama, Carlos E.; Fagan, Joe E.

    2009-09-01

    The United States Department of Defense (DoD) defines 'force protection' as "preventive measures taken to mitigate hostile actions against DoD personnel (to include family members), resources, facilities, and critical information." Advanced technologies enable significant improvements in automating and distributing situation awareness, optimizing operator time, and improving sustainability, which enhance protection and lower costs. The JFPASS Joint Capability Technology Demonstration (JCTD) demonstrates a force protection environment that combines physical security and Chemical, Biological, Radiological, Nuclear, and Explosive (CBRNE) defense through the application of integrated command and control and data fusion. The JFPASS JCTD provides a layered approach to force protection by integrating traditional sensors used in physical security, such as video cameras, battlefield surveillance radars, unmanned and unattended ground sensors. The optimization of human participation and automation of processes is achieved by employment of unmanned ground vehicles, along with remotely operated lethal and less-than-lethal weapon systems. These capabilities are integrated via a tailorable, user-defined common operational picture display through a data fusion engine operating in the background. The combined systems automate the screening of alarms, manage the information displays, and provide assessment and response measures. The data fusion engine links disparate sensors and systems, and applies tailored logic to focus the assessment of events. It enables timely responses by providing the user with automated and semi-automated decision support tools. The JFPASS JCTD uses standard communication/data exchange protocols, which allow the system to incorporate future sensor technologies or communication networks, while maintaining the ability to communicate with legacy or existing systems.

  18. Influencing Gameplay in Support of Early Synthetic Prototyping Studies

    DTIC Science & Technology

    2016-06-01

    Synthetic prototyping, acquisition, video games, crowdsourcing, Engineering Resilient Systems, science and technology, game environment. ...even though he did not play video games for entertainment, he would find time to participate in ESP (Vogt, Megiveron, & Smith, 2015). Meckler's... reward used in video games: 1) score systems, 2) experience points, 3) item granting, 4) collectible resources, 5) achievement systems, 6) feedback

  19. Applying the systems engineering approach to video over IP projects : workshop.

    DOT National Transportation Integrated Search

    2011-12-01

    In 2009, the Texas Transportation Institute produced for the Texas Department of Transportation a document called Video over IP Design Guidebook. This report summarizes an implementation of that project in the form of a workshop. The workshop was...

  20. Educational Outreach at the M.I.T. Plasma Fusion Center

    NASA Astrophysics Data System (ADS)

    Censabella, V.

    1996-11-01

    Educational outreach at the MIT Plasma Fusion Center consists of volunteers working together to increase the public's knowledge of fusion and plasma-related experiments. Seeking to generate excitement about science, engineering and mathematics, the PFC holds a number of outreach activities throughout the year, such as Middle and High School Outreach Days. Outreach also includes the Mr. Magnet Program, which uses an interactive strategy to engage elementary school children. Included in this year's presentation will be a new and improved C-MOD Jr, a confinement video game which helps students to discover how computers manipulate magnetic pulses to keep a plasma confined for as long as possible. Also on display will be an educational toy created by the Cambridge Physics Outlet, a PFC spin-off company. The PFC maintains a Home Page on the World Wide Web, which can be reached at http://cmod2.pfc.mit.edu/.

  1. In-network adaptation of SHVC video in software-defined networks

    NASA Astrophysics Data System (ADS)

    Awobuluyi, Olatunde; Nightingale, James; Wang, Qi; Alcaraz Calero, Jose Maria; Grecos, Christos

    2016-04-01

    Software Defined Networks (SDNs), when combined with Network Function Virtualization (NFV), represent a paradigm shift in how future networks will behave and be managed. SDNs are expected to provide the underpinning technologies for future innovations such as 5G mobile networks and the Internet of Everything. The SDN architecture offers features that facilitate an abstracted and centralized global network view in which packet forwarding or dropping decisions are based on application flows. Software Defined Networks facilitate a wide range of network management tasks, including the adaptation of real-time video streams as they traverse the network. SHVC, the scalable extension to the recent H.265 standard, is a new video encoding standard that supports ultra-high-definition video streams with spatial resolutions of up to 7680×4320 and frame rates of 60 fps or more. The massive increase in bandwidth required to deliver these UHD video streams dwarfs the bandwidth requirements of current high-definition (HD) video. Such large bandwidth increases pose very significant challenges for network operators. In this paper we go substantially beyond the limited number of existing implementations and proposals for video streaming in SDNs, all of which have primarily focused on traffic engineering solutions such as load balancing. By implementing and empirically evaluating an SDN-enabled Media Adaptation Network Entity (MANE) we provide valuable empirical insight into the benefits and limitations of SDN-enabled video adaptation for real-time video applications. The SDN-MANE is the video adaptation component of our Video Quality Assurance Manager (VQAM) SDN control plane application, which also includes an SDN monitoring component to acquire network metrics and a decision-making engine using algorithms to determine the optimum adaptation strategy for any real-time video application flow given the current network conditions. Our proposed VQAM application has been implemented and evaluated on an SDN, allowing us to provide important benchmarks for video streaming over SDN and for SDN control plane latency.
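    One plausible core of such a MANE decision engine is greedy layer selection: always keep the SHVC base layer and add enhancement layers while the bandwidth reported by the monitoring component allows. A sketch under assumed per-layer bitrates (the abstract does not publish VQAM's actual algorithms):

    ```python
    def select_shvc_layers(available_mbps, layer_rates_mbps):
        """Greedily pick SHVC layers (base layer first) whose cumulative
        bitrate fits the measured available bandwidth.

        layer_rates_mbps: per-layer bitrates ordered base -> highest
        enhancement. Returns the indices of layers to forward; layers
        that do not fit are dropped in-network by the MANE.
        """
        chosen, used = [], 0.0
        for i, rate in enumerate(layer_rates_mbps):
            if used + rate > available_mbps:
                break  # this and all higher layers are dropped
            chosen.append(i)
            used += rate
        return chosen
    ```

    For example, with hypothetical layer rates of [10, 8, 15] Mbps and 20 Mbps available, only layers 0 and 1 would be forwarded; if even the base layer does not fit, a real MANE would have to transcode or reroute rather than simply drop everything.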

  2. Evaluating Maintenance Performance: A Video Approach to Symbolic Testing of Electronics Maintenance Tasks. Final Report.

    ERIC Educational Resources Information Center

    Shriver, Edgar L.; And Others

    This volume reports an effort to use the video media as an approach for the preparation of a battery of symbolic tests that would be empirically valid substitutes for criterion referenced Job Task Performance Tests. The graphic symbolic tests require the storage of a large amount of pictorial information which must be searched rapidly for display.…

  3. Multimedia category preferences of working engineers

    NASA Astrophysics Data System (ADS)

    Baukal, Charles E.; Ausburn, Lynna J.

    2016-09-01

    Many have argued for the importance of continuing engineering education (CEE), but relatively few recommendations were found in the literature for how to use multimedia technologies to deliver it most effectively. The study reported here addressed this gap by investigating the multimedia category preferences of working engineers. Four categories of multimedia, with two types in each category, were studied: verbal (text and narration), static graphics (drawing and photograph), dynamic non-interactive graphics (animation and video), and dynamic interactive graphics (simulated virtual reality (VR) and photo-real VR). The results showed that working engineers strongly preferred text over narration and somewhat preferred drawing over photograph, animation over video, and simulated VR over photo-real VR. These results suggest that a variety of multimedia types should be used in the instructional design of CEE content.

  4. 3D Image Display Courses for Information Media Students.

    PubMed

    Yanaka, Kazuhisa; Yamanouchi, Toshiaki

    2016-01-01

    Three-dimensional displays are used extensively in movies and games. These displays are also essential in mixed reality, where virtual and real spaces overlap. Therefore, engineers and creators should be trained to master 3D display technologies. For this reason, the Department of Information Media at the Kanagawa Institute of Technology has launched two 3D image display courses specifically designed for students who aim to become information media engineers and creators.

  5. DYNA3D, INGRID, and TAURUS: an integrated, interactive software system for crashworthiness engineering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Benson, D.J.; Hallquist, J.O.; Stillman, D.W.

    1985-04-01

    Crashworthiness engineering has always been a high priority at Lawrence Livermore National Laboratory because of its role in the safe transport of radioactive material for the nuclear power industry and military. As a result, the authors have developed an integrated, interactive set of finite element programs for crashworthiness analysis. The heart of the system is DYNA3D, an explicit, fully vectorized, large deformation structural dynamics code. DYNA3D has the following four capabilities that are critical for the efficient and accurate analysis of crashes: (1) fully nonlinear solid, shell, and beam elements for representing a structure, (2) a broad range of constitutive models for representing the materials, (3) sophisticated contact algorithms for the impact interactions, and (4) a rigid body capability to represent the bodies away from the impact zones at a greatly reduced cost without sacrificing any accuracy in the momentum calculations. To generate the large and complex data files for DYNA3D, INGRID, a general purpose mesh generator, is used. It runs on everything from IBM PCs to CRAYs, and can generate 1000 nodes/minute on a PC. With its efficient hidden line algorithms and many options for specifying geometry, INGRID also doubles as a geometric modeller. TAURUS, an interactive post processor, is used to display DYNA3D output. In addition to the standard monochrome hidden line display, time history plotting, and contouring, TAURUS generates interactive color displays on 8-color video screens by plotting color bands superimposed on the mesh which indicate the value of the state variables. For higher quality color output, graphic output files may be sent to the DICOMED film recorders. We have found that color is every bit as important as hidden line removal in aiding the analyst in understanding his results. In this paper the basic methodologies of the programs are presented along with several crashworthiness calculations.

  6. Are New Image Quality Figures of Merit Needed for Flat Panel Displays?

    DTIC Science & Technology

    1998-06-01

    The American National Standard for Human Factors Engineering of Visual Display Terminal Workstations (ANSI/HFS 100-1988) adopted the MTFA as the standard... Reference: American National Standard for Human Factors Engineering of Visual Display Terminal Workstations (ANSI/HFS 100-1988). 1988. Santa Monica

  7. A new technique for presentation of scientific works: video in poster.

    PubMed

    Bozdag, Ali Dogan

    2008-07-01

    Presentations at scientific congresses and symposiums can take two different forms: poster or oral presentation. Each method has advantages and disadvantages. To combine the advantages of oral and poster presentations, a new presentation type was conceived: the "video in poster." The top of a portable digital video disc (DVD) player is opened 180 degrees to keep the screen and the body of the player in the same plane. The poster is attached to the DVD player, and a window is made in the poster to expose the screen, so that the screen appears as a picture on the poster. This video in poster is then fixed to the panel. When the DVD player is turned on, the video presentation of the surgical procedure starts. Several posters were presented at different medical congresses in 2007 using the "video in poster" technique, and they received poster awards. The video in poster combines the advantages of both oral and poster presentations.

  8. Optimized static and video EEG rapid serial visual presentation (RSVP) paradigm based on motion surprise computation

    NASA Astrophysics Data System (ADS)

    Khosla, Deepak; Huber, David J.; Bhattacharyya, Rajan

    2017-05-01

    In this paper, we describe an algorithm and system for optimizing search and detection performance for "items of interest" (IOI) in large images and videos. The system employs the Rapid Serial Visual Presentation (RSVP) EEG paradigm together with surprise algorithms that incorporate motion processing to determine whether static or video RSVP is used. The system works by first computing a motion surprise map on image sub-regions (chips) of incoming sensor video data and then using those surprise maps to label the chips as either "static" or "moving". This information tells the system whether to use a static or video RSVP presentation and decoding algorithm in order to optimize EEG-based detection of IOI in each chip. Using this method, we are able to demonstrate classification of a series of image regions from video with an area-under-the-ROC-curve (Az) value of 1, indicating perfect classification, over a range of display frequencies and video speeds.
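    The static/moving chip labeling can be illustrated with a frame-difference stand-in for the motion surprise map (the chip size and threshold below are hypothetical, and the paper's actual surprise computation is more involved than a plain frame difference):

    ```python
    import numpy as np

    def label_chips(prev, curr, chip=64, thresh=0.01):
        """Label each image chip 'moving' or 'static' by the mean absolute
        difference between consecutive frames; the label decides whether
        video RSVP or static RSVP is used for that chip.

        prev, curr: 2D grayscale frames with values in [0, 1].
        Returns {(top, left): 'moving' | 'static'} per chip.
        """
        labels = {}
        h, w = curr.shape
        for y in range(0, h, chip):
            for x in range(0, w, chip):
                diff = np.abs(curr[y:y + chip, x:x + chip] -
                              prev[y:y + chip, x:x + chip]).mean()
                labels[(y, x)] = "moving" if diff > thresh else "static"
        return labels
    ```

    Chips with appreciable inter-frame change are routed to the video RSVP presentation; quiet chips go to the cheaper static RSVP path.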

  9. Widgets to the Rescue

    ERIC Educational Resources Information Center

    Kroski, Ellyssa

    2008-01-01

    A widget displays Web content from external sources and can be embedded into a blog, social network, or other Web page, or downloaded to one's desktop. With widgets--sometimes referred to as gadgets--one can insert video into a blog post, display slideshows on MySpace, get the weather delivered to his mobile device, drag-and-drop his Netflix queue…

  10. Video enhancement of X-ray and neutron radiographs

    NASA Technical Reports Server (NTRS)

    Vary, A.

    1973-01-01

    System was devised for displaying radiographs on television screen and enhancing fine detail in picture. System uses analog-computer circuits to process television signal from low-noise television camera. Enhanced images are displayed in black and white and can be controlled to vary degree of enhancement and magnification of details in either radiographic transparencies or opaque photographs.

  11. Engineering education using a remote laboratory through the Internet

    NASA Astrophysics Data System (ADS)

    Axaopoulos, Petros J.; Moutsopoulos, Konstantinos N.; Theodoridis, Michael P.

    2012-03-01

    An experiment using real hardware under real test conditions can be remotely conducted by engineering students and other interested individuals anywhere in the world via the Internet, with the capability of live video streaming from the test site. This innovative experiment concerns the determination of the current-voltage characteristic curve of a photovoltaic panel installed on the roof of a laboratory, facing south and with the ability to alter its tilt angle using a closed-loop servo motor mounted on the horizontal axis of the panel. The user has the sense of direct contact with the system, since they can intervene to alter the tilt of the panel and get live visual feedback in addition to the remote instrumentation panel. The whole procedure takes a few seconds to complete, and the characteristic curve is displayed in a chart, giving the student and anyone else interested the chance to analyse the results and understand the respective theory; meanwhile, the test data are stored in a file for future use. This type of remote experiment could be used for distance education, training, and part-time study, and to help students with disabilities participate in a laboratory environment.
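    The current-voltage characteristic curve the students retrieve can be reproduced offline with the textbook single-diode model: panel current is the photocurrent minus the diode's dark current. All parameter values below are illustrative, not those of the panel in the paper, and series and shunt resistance are neglected:

    ```python
    import math

    BOLTZMANN = 1.380649e-23     # J/K
    ELEMENTARY_CHARGE = 1.602176634e-19  # C

    def pv_current(v, i_ph=5.0, i_0=1e-9, n=1.3, cells=36, temp=298.15):
        """Panel current (A) at terminal voltage v (V) from the ideal
        single-diode model. i_ph: photocurrent, i_0: diode saturation
        current, n: ideality factor, cells: series cells in the panel.
        """
        vt = BOLTZMANN * temp / ELEMENTARY_CHARGE  # thermal voltage per cell
        return i_ph - i_0 * (math.exp(v / (cells * n * vt)) - 1.0)

    # Sweep voltage to trace the I-V characteristic, as the remote rig does.
    curve = [(v / 10.0, pv_current(v / 10.0)) for v in range(0, 280)]
    ```

    The sweep shows the expected shape: current stays near the short-circuit value at low voltage, then drops sharply toward the open-circuit point where it crosses zero.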

  12. Mobile visual communications and displays

    NASA Astrophysics Data System (ADS)

    Valliath, George T.

    2004-09-01

    The different types of mobile visual communication modes and the types of displays needed in cellular handsets are explored. Well-known two-way video conferencing is only one of the possible modes. Some modes are already supported on current handsets, while others await the arrival of advanced network capabilities. Displays for devices that support these visual communication modes need to deliver the required visual experience. Over the last 20 years the display has grown in size while the rest of the handset has shrunk. However, the display is still not large enough: processor performance and network capabilities continue to outstrip display capability, making the display a bottleneck. This paper explores potential solutions for presenting a large image on a small handset.

  13. Real-Time Detection and Reading of LED/LCD Displays for Visually Impaired Persons

    PubMed Central

    Tekin, Ender; Coughlan, James M.; Shen, Huiying

    2011-01-01

    Modern household appliances, such as microwave ovens and DVD players, increasingly require users to read an LED or LCD display to operate them, posing a severe obstacle for persons with blindness or visual impairment. While OCR-enabled devices are emerging to address the related problem of reading text in printed documents, they are not designed to tackle the challenge of finding and reading characters in appliance displays. Any system for reading these characters must address the challenge of first locating the characters among substantial amounts of background clutter; moreover, poor contrast and the abundance of specular highlights on the display surface – which degrade the image in an unpredictable way as the camera is moved – motivate the need for a system that processes images at a few frames per second, rather than forcing the user to take several photos, each of which can take seconds to acquire and process, until one is readable. We describe a novel system that acquires video, detects and reads LED/LCD characters in real time, reading them aloud to the user with synthesized speech. The system has been implemented on both a desktop and a cell phone. Experimental results are reported on videos of display images, demonstrating the feasibility of the system. PMID:21804957

  14. Impact of packet losses in scalable 3D holoscopic video coding

    NASA Astrophysics Data System (ADS)

    Conti, Caroline; Nunes, Paulo; Ducla Soares, Luís.

    2014-05-01

    Holoscopic imaging became a prospective glassless 3D technology to provide more natural 3D viewing experiences to the end user. Additionally, holoscopic systems also allow new post-production degrees of freedom, such as controlling the plane of focus or the viewing angle presented to the user. However, to successfully introduce this technology into the consumer market, a display scalable coding approach is essential to achieve backward compatibility with legacy 2D and 3D displays. Moreover, to effectively transmit 3D holoscopic content over error-prone networks, e.g., wireless networks or the Internet, error resilience techniques are required to mitigate the impact of data impairments in the user quality perception. Therefore, it is essential to deeply understand the impact of packet losses in terms of decoding video quality for the specific case of 3D holoscopic content, notably when a scalable approach is used. In this context, this paper studies the impact of packet losses when using a three-layer display scalable 3D holoscopic video coding architecture previously proposed, where each layer represents a different level of display scalability (i.e., L0 - 2D, L1 - stereo or multiview, and L2 - full 3D holoscopic). For this, a simple error concealment algorithm is used, which makes use of inter-layer redundancy between multiview and 3D holoscopic content and the inherent correlation of the 3D holoscopic content to estimate lost data. Furthermore, a study of the influence of 2D views generation parameters used in lower layers on the performance of the used error concealment algorithm is also presented.
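    The inter-layer redundancy idea can be sketched as the simplest possible concealment: fill samples lost in the full holoscopic layer (L2) with co-located pixels reconstructed from a lower layer. This is a stand-in for the paper's algorithm, which additionally exploits the internal correlation of the holoscopic content, and it assumes both layers are available at the same resolution:

    ```python
    import numpy as np

    def conceal_from_lower_layer(l2_frame, lost_mask, l1_frame):
        """Replace lost L2 (full 3D holoscopic) samples with co-located
        pixels from the decoded lower layer.

        l2_frame: partially decoded L2 frame (2D array).
        lost_mask: boolean array marking samples lost to packet errors.
        l1_frame: decoded lower-layer frame at the same resolution.
        Returns a concealed copy; the input frame is left untouched.
        """
        out = l2_frame.copy()
        out[lost_mask] = l1_frame[lost_mask]
        return out
    ```

    The display-scalable layering makes this cheap: a legacy-compatible lower layer is decoded anyway, so its pixels are free estimates for damaged holoscopic regions.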

  15. The Stirling Engine: A Wave of the Future

    NASA Technical Reports Server (NTRS)

    1992-01-01

    This video describes the Stirling engine, an external combustion engine that converts heat from an external source into mechanical power and can run on many types of fuel. It can be used for both stationary and propulsion purposes and has the advantages of better fuel economy and cleaner exhaust than internal combustion engines. The engine is shown being road tested at Langley Air Force Base.

  16. Video conferencing made easy

    NASA Astrophysics Data System (ADS)

    Larsen, D. G.; Schwieder, P. R.

    Network video conferencing is advancing rapidly throughout the nation, and the Idaho National Engineering Laboratory (INEL), a Department of Energy (DOE) facility, is at the forefront of the development. Engineers at INEL/EG&G designed and installed a unique DOE video conferencing system whose outstanding features include true multipoint conferencing, user-friendly design and operation with no full-time operators required, and the potential for cost-effective expansion of the system. One area where INEL/EG&G engineers made a significant contribution to video conferencing was the development of effective, user-friendly, end-station-driven scheduling software. A PC at each user site is used to schedule conferences via a Windows package. This software interface provides users with information on conference availability, scheduling, initiation, and termination; the menus are mouse controlled. Once a conference is scheduled, a workstation at the hub monitors the network to initiate all scheduled conferences. No active operator participation is required once a user schedules a conference through the local PC; the workstation automatically initiates and terminates the conference as scheduled. As each conference is scheduled, hard-copy notification is also printed at each participating site. Video conferencing is the wave of the future. The use of these user-friendly systems will save millions in lost productivity and travel costs throughout the nation, and ease of operation and conference scheduling will play a key role in the extent to which industry adopts this new technology. INEL/EG&G has developed a prototype scheduling system for both commercial and federal government use.

  17. Teaching engineering ethics using BLOCKS game.

    PubMed

    Lau, Shiew Wei; Tan, Terence Peng Lian; Goh, Suk Meng

    2013-09-01

    The aim of this study was to investigate the use of a newly developed design game called BLOCKS to stimulate awareness of ethical responsibilities among engineering students. The design game was played by seventeen teams of chemical engineering students, with each team arranging pieces of colored paper to produce two letters. Before the end of the game, additional constraints were introduced so that the teams faced ambiguity in the technical facts similar to what the engineers involved in the Challenger disaster had faced prior to the space shuttle launch. At this stage, the teams had to decide whether to continue with their original design or to develop alternative solutions. After the teams had made their decisions, a video of the Challenger explosion was shown, followed by a post-game discussion. The students' opinions on five statements about ethics were tracked via a five-item Likert survey administered three times: before and after the ethical scenario was introduced, and again after the video and post-game discussion. The results of this study indicated that the combination of the game and the real-life incident from the video generally strengthened the students' agreement with the statements.

  18. Millisecond accuracy video display using OpenGL under Linux.

    PubMed

    Stewart, Neil

    2006-02-01

    To measure people's reaction times to the nearest millisecond, it is necessary to know exactly when a stimulus is displayed. This article describes how to display stimuli with millisecond accuracy on a normal CRT monitor, using a PC running Linux. A simple C program is presented to illustrate how this may be done within X Windows using the OpenGL rendering system. A test of this system is reported that demonstrates that stimuli may be consistently displayed with millisecond accuracy. An algorithm is presented that allows the exact time of stimulus presentation to be deduced, even if there are relatively large errors in measuring the display time.
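
    The deduction step mentioned in the final sentence can be sketched without any OpenGL at all: if the monitor refreshes at a fixed rate, true stimulus onsets must lie on a regular frame grid, so noisy timing measurements can be snapped back to it. A minimal Python sketch under that assumption (the `snap_to_frame_grid` helper and the 100 Hz refresh figure are illustrative, not taken from the article):

    ```python
    def snap_to_frame_grid(timestamps, refresh_hz=100.0):
        """Recover exact frame-onset times from noisy measurements.

        Assumes the CRT refreshes at a fixed rate, so true onsets lie on the
        grid t0 + k / refresh_hz.  Each noisy timestamp is snapped to the
        nearest grid point, tolerating errors up to half a frame period.
        """
        period = 1.0 / refresh_hz
        t0 = timestamps[0]
        recovered = []
        for t in timestamps:
            k = round((t - t0) / period)   # frame index of this measurement
            recovered.append(t0 + k * period)
        return recovered

    # Measurements jittered around a 10 ms grid snap back to exact multiples.
    print(snap_to_frame_grid([0.0, 0.0103, 0.0198, 0.0304, 0.0399]))
    ```

    The tolerance here is half a frame (5 ms at 100 Hz); larger measurement errors would need the consistency checks the article describes.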

  19. Implementation of a Landscape Lighting System to Display Images

    NASA Astrophysics Data System (ADS)

    Sun, Gi-Ju; Cho, Sung-Jae; Kim, Chang-Beom; Moon, Cheol-Hong

    The system implemented in this study consists of a PC, a MASTER, SLAVEs, and MODULEs. The PC sets up the various landscape lighting displays, and the image files can be sent to the MASTER through a virtual serial port connected over USB (Universal Serial Bus). The MASTER sends a sync signal to each SLAVE, which uses it together with the stored landscape lighting display pattern. The video file is saved in NAND flash memory, and the R, G, and B signals are separated using the self-made display signal and sent to the MODULE, which displays the image.

  20. Video flowmeter

    DOEpatents

    Lord, David E.; Carter, Gary W.; Petrini, Richard R.

    1983-01-01

    A video flowmeter is described that characterizes the nature and pattern of a flow while simultaneously measuring the volumetric flow rate. An image of a determinable volumetric region within a fluid (10) containing entrained particles (12) is formed and positioned by a rod optic lens assembly (31) on the raster area of a low-light-level television camera (20). The particles (12) are illuminated by light transmitted through a bundle of glass fibers (32) surrounding the rod optic lens assembly (31). Only particle images whose speeds on the raster area are below the raster line scanning speed may be used to form a video picture, which is displayed on a video screen (40). The flowmeter is calibrated so that the locus of positions of origin of the video picture gives a determination of the volumetric flow rate of the fluid (10).

  1. Engineering M13 for phage display.

    PubMed

    Sidhu, S S

    2001-09-01

    Phage display is achieved by fusing polypeptide libraries to phage coat proteins. The resulting phage particles display the polypeptides on their surfaces and they also contain the encoding DNA. Library members with particular functions can be isolated with simple selections and polypeptide sequences can be decoded from the encapsulated DNA. The technology's success depends on the efficiency with which polypeptides can be displayed on the phage surface, and significant progress has been made in engineering M13 bacteriophage coat proteins as improved phage display platforms. Functional display has been achieved with all five M13 coat proteins, with both N- and C-terminal fusions. Also, coat protein mutants have been designed and selected to improve the efficiency of heterologous protein display, and in the extreme case, completely artificial coat proteins have been evolved specifically as display platforms. These studies demonstrate that the M13 phage coat is extremely malleable, and this property can be used to engineer the phage particle specifically for phage display. These improvements expand the utility of phage display as a powerful tool in modern biotechnology.

  2. SAFER Under Vehicle Inspection Through Video Mosaic Building

    DTIC Science & Technology

    2004-01-01

    this work were taken using a Polaris Wp-300c Lipstick video camera mounted on a mobile platform. Infrared video was taken using a Raytheon PalmIR PRO...Tank-Automotive Research, Development and Engineering Center, US Army RDECOM, Warren, Michigan, USA. Keywords: Inspection, Road vehicles, State...security, Robotics. Abstract: The current threats to US security, both military and civilian, have led to an increased interest in the development of

  3. Video System Highlights Hydrogen Fires

    NASA Technical Reports Server (NTRS)

    Youngquist, Robert C.; Gleman, Stuart M.; Moerk, John S.

    1992-01-01

    Video system combines images from visible spectrum and from three bands in infrared spectrum to produce color-coded display in which hydrogen fires distinguished from other sources of heat. Includes linear array of 64 discrete lead selenide mid-infrared detectors operating at room temperature. Images overlaid on black and white image of same scene from standard commercial video camera. In final image, hydrogen fires appear red; carbon-based fires, blue; and other hot objects, mainly green and combinations of green and red. Where no thermal source present, image remains in black and white. System enables high degree of discrimination between hydrogen flames and other thermal emitters.

  4. Power-Constrained Fuzzy Logic Control of Video Streaming over a Wireless Interconnect

    NASA Astrophysics Data System (ADS)

    Razavi, Rouzbeh; Fleury, Martin; Ghanbari, Mohammed

    2008-12-01

    Wireless communication of video, with Bluetooth as an example, represents a compromise between channel conditions, display and decode deadlines, and energy constraints. This paper proposes fuzzy logic control (FLC) of automatic repeat request (ARQ) as a way of reconciling these factors, with a 40% saving in power in the worst channel conditions from economizing on transmissions when channel errors occur. Whatever the channel conditions are, FLC is shown to outperform the default Bluetooth scheme and an alternative Bluetooth-adaptive ARQ scheme in terms of reduced packet loss and delay, as well as improved video quality.
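
    As a rough illustration of how fuzzy logic control of ARQ can work, the sketch below fuzzifies the channel error rate and the remaining deadline slack with triangular membership functions and defuzzifies a retransmission limit by weighted mean. The rule base and all membership parameters are invented for illustration and are not taken from the paper:

    ```python
    def tri(x, a, b, c):
        """Triangular membership function with support [a, c] and peak at b."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x < b else (c - x) / (c - b)

    def arq_retx_limit(error_rate, deadline_slack):
        """Pick an ARQ retransmission limit from fuzzified channel/deadline state.

        error_rate     : observed packet error rate, in [0, 1]
        deadline_slack : fraction of the display deadline still unused, in [0, 1]
        """
        good_channel   = tri(error_rate, -0.8, 0.0, 0.8)
        bad_channel    = tri(error_rate, 0.2, 1.0, 1.8)
        tight_deadline = tri(deadline_slack, -0.8, 0.0, 0.8)
        loose_deadline = tri(deadline_slack, 0.2, 1.0, 1.8)

        # Rule base: (firing strength, candidate limit); weighted-mean defuzzify.
        rules = [
            (min(good_channel, loose_deadline), 8),  # clean channel, time to spare
            (min(good_channel, tight_deadline), 3),
            (min(bad_channel,  loose_deadline), 2),  # economize when channel is bad
            (min(bad_channel,  tight_deadline), 0),  # drop rather than burn power
        ]
        total = sum(w for w, _ in rules)
        return round(sum(w * v for w, v in rules) / total) if total else 1

    print(arq_retx_limit(0.05, 0.9))   # good channel, loose deadline -> 8
    print(arq_retx_limit(0.90, 0.1))   # bad channel, tight deadline -> 0
    ```

    Cutting the retransmission limit in bad channels is what yields the power saving the abstract describes: fewer doomed transmissions per lost packet.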

  5. Design of large format commercial display holograms

    NASA Astrophysics Data System (ADS)

    Perry, John F. W.

    1989-05-01

    Commercial display holography is approaching a critical stage where the ability to compete with other graphic media will dictate its future. Factors involved will be cost, technical quality and, in particular, design. The tenuous commercial success of display holography has relied heavily on its appeal to an audience with little or no previous experience in the medium. Well designed images were scarce, leading many commercial designers to avoid holography. As the public became more accustomed to holograms, the excitement dissipated, leaving a need for strong visual design if the medium is to survive in this marketplace. Drawing on the vast experience of TV, rock music and magazine advertising, competitive techniques such as video walls, mural duratrans, laser light shows and interactive videos attract a professional support structure far greater than does holography. This paper will address design principles developed at Holographics North for large format commercial holography. Examples will be drawn from a number of foreign and domestic corporate trade exhibitions. Recommendations will also be made on how to develop greater awareness of a holographic design.

  6. The use of animation video in teaching to enhance the imagination and visualization of student in engineering drawing

    NASA Astrophysics Data System (ADS)

    Ismail M., E.; Mahazir I., Irwan; Othman, H.; Amiruddin M., H.; Ariffin, A.

    2017-05-01

    The rapid development of information technology has given a new breath to the use of computers in education. One increasingly popular area is multimedia technology, which merges a variety of media such as text, graphics, animation, video, and audio under computer control. With this technology, a wide range of multimedia elements can be developed to improve the quality of education. For that reason, this study investigated the use of a multimedia element, an animated video, developed for the Engineering Drawing subject according to the syllabus of the Vocational College of Malaysia. The study used a survey design with a quantitative approach and involved 30 respondents from among Industrial Machining students. The instrument was a questionnaire with a Cronbach's alpha reliability coefficient of 0.83. Data were collected and analyzed descriptively using SPSS. The study found that the animated video was significantly capable of increasing students' imagination and visualization. In general, these findings contribute to the development of multimedia materials appropriate for enhancing the quality of learning materials for engineering drawing.

  7. Measurement of interfacial tension by use of pendant drop video techniques

    NASA Astrophysics Data System (ADS)

    Herd, Melvin D.; Thomas, Charles P.; Bala, Gregory A.; Lassahn, Gordon D.

    1993-09-01

    This report describes an instrument to measure the interfacial tension (IFT) of aqueous surfactant solutions and crude oil. The method involves injecting a drop of fluid (such as crude oil) into a second immiscible phase to determine the IFT between the two phases. The instrument is composed of an AT-class computer, optical cell, illumination, video camera and lens, video frame digitizer board, monitor, and software. The camera displays an image of the pendant drop on the monitor, which is then processed by the frame digitizer board and non-proprietary software to determine the IFT. Several binary and ternary phase systems were taken from the literature and used to measure the precision and accuracy of the instrument in determining IFTs. A copy of the software program is included in the report; a copy of the program on diskette can be obtained from the Energy Science and Technology Software Center, P.O. Box 1020, Oak Ridge, TN 37831-1020. The accuracy and precision of the technique and apparatus are very good for measurement of IFTs in the range from 72 to 10^-2 mN/m, which is adequate for many EOR applications. With modifications to the equipment and the numerical techniques, measurements of ultralow IFTs (less than 10^-3 mN/m) should be possible, as well as measurements at reservoir temperature and pressure conditions. The instrument has been used at the Idaho National Engineering Laboratory to support the research program on microbial enhanced oil recovery. Measurements of IFTs are reported for several bacterial supernatants and unfractionated acid precipitates of microbial cultures containing biosurfactants against medium to heavy crude oils. These experiments demonstrate that automated video imaging of pendant drops is a simple and fast method to reliably determine the interfacial tension between two immiscible liquid phases, or between a gas and a liquid phase.
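
    Drop-shape analysis of this kind rests on the classic pendant-drop relation gamma = delta_rho * g * d_e^2 * (1/H). A minimal Python sketch of that arithmetic follows; the 1/H value in the example is illustrative, since real instruments obtain it from published Bashforth-Adams shape-factor tables via the ratio S = d_s/d_e:

    ```python
    def pendant_drop_ift(delta_rho, d_e, one_over_H, g=9.81):
        """Interfacial tension from pendant-drop geometry.

        gamma = delta_rho * g * d_e**2 * (1/H)

        delta_rho  : density difference between the two phases, kg/m^3
        d_e        : maximum (equatorial) drop diameter, m
        one_over_H : drop shape factor; in practice read from published
                     Bashforth-Adams tables via S = d_s / d_e
        Returns gamma in mN/m.
        """
        return delta_rho * g * d_e ** 2 * one_over_H * 1000.0

    # Water drop in air, d_e = 2 mm, with an illustrative 1/H of 0.5:
    print(pendant_drop_ift(998.0, 2e-3, 0.5))   # about 19.6 mN/m
    ```

    The video-imaging part of the instrument exists to measure d_e and d_s accurately; the tension itself then follows from this one-line formula.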

  8. An Automatic Portable Telecine Camera.

    DTIC Science & Technology

    1978-08-01

    five television frames to achieve synchronous operation, that is about 0.2 second. 6.3 Video recorder noise immunity The synchronisation pulse separator...display is filmed by a modified 16 mm cine camera driven by a control unit in which the camera supply voltage is derived from the field synchronisation pulses of the video signal. Automatic synchronisation of the camera mechanism is achieved over a wide range of television field frequencies and the

  9. Fusion Helmet: Electronic Analysis

    DTIC Science & Technology

    2014-04-01

    [Extraction residue of Table 1, "LYR203-101B Board Features": DM648 DSP with GPIO and PORn, two video ports, boot-mode pins, SPI/UART, I2C, CLKIN, MDIO, 128 MB/16-bit DDR2, SPI flash, McASP, EMAC-SGMII, JTAG, clock generation, power-good supervision, power LED, and a video display interface.]

  10. On Target: Organizing and Executing the Strategic Air Campaign Against Iraq

    DTIC Science & Technology

    2002-01-01

    possession, use, sale, creation or display of any pornographic photograph, videotape, movie, drawing, book, or magazine or similar representations. This...forward-looking infrared (FLIR) sensor to create daylight-quality video images of terrain and utilized terrain-following radar to enable the aircraft to...The Black Hole Planners had pleaded with CENTAF Intel to provide them with photos of targets, provide additional personnel to analyze PGM video

  11. STS-111 Flight Day 2 Highlights

    NASA Technical Reports Server (NTRS)

    2002-01-01

    On Flight Day 2 of STS-111, the crew of Endeavour (Kenneth Cockrell, Commander; Paul Lockhart, Pilot; Franklin Chang-Diaz, Mission Specialist; Philippe Perrin, Mission Specialist) and the Expedition 5 crew (Valery Korzun, Commander; Peggy Whitson, Flight Engineer; Sergei Treschev, Flight Engineer), having successfully entered orbit around the Earth, begin to maneuver towards the International Space Station (ISS), where the Expedition 5 crew will replace the Expedition 4 crew. Live video is shown of the Earth from several vantage points aboard the Shuttle. The center-line camera, which will allow Shuttle pilots to align the docking apparatus with that on the ISS, provides footage of the Earth. Chang-Diaz participates in an interview, in Spanish, conducted from the ground via radio communications, with Cockrell also appearing. Footage of the Earth includes: Daytime video of the Eastern United States with some cloud cover as Endeavour passes over the Florida panhandle, Georgia, and the Carolinas; Daytime video of Lake Michigan unobscured by cloud cover; Nighttime low-light camera video of Madrid, Spain.

  12. Fronto-parietal regulation of media violence exposure in adolescents: a multi-method study

    PubMed Central

    Strenziok, Maren; Krueger, Frank; Deshpande, Gopikrishna; Lenroot, Rhoshel K.; van der Meer, Elke

    2011-01-01

    Adolescents spend a significant part of their leisure time watching TV programs and movies that portray violence. It is unknown, however, how the extent of violent media use and the severity of aggression displayed affect adolescents’ brain function. We investigated skin conductance responses, brain activation and functional brain connectivity to media violence in healthy adolescents. In an event-related functional magnetic resonance imaging experiment, subjects repeatedly viewed normed videos that displayed different degrees of aggressive behavior. We found a downward linear adaptation in skin conductance responses with increasing aggression and desensitization towards more aggressive videos. Our results further revealed adaptation in a fronto-parietal network including the left lateral orbitofrontal cortex (lOFC), right precuneus and bilateral inferior parietal lobules, again showing downward linear adaptations and desensitization towards more aggressive videos. Granger causality mapping analyses revealed attenuation in the left lOFC, indicating that activation during viewing aggressive media is driven by input from parietal regions that decreased over time, for more aggressive videos. We conclude that aggressive media activates an emotion–attention network that has the capability to blunt emotional responses through reduced attention with repeated viewing of aggressive media contents, which may restrict the linking of the consequences of aggression with an emotional response, and therefore potentially promotes aggressive attitudes and behavior. PMID:20934985

  13. Depth assisted compression of full parallax light fields

    NASA Astrophysics Data System (ADS)

    Graziosi, Danillo B.; Alpaslan, Zahir Y.; El-Ghoroury, Hussein S.

    2015-03-01

    Full parallax light field displays require high pixel density and huge amounts of data. Compression is a necessary tool used by 3D display systems to cope with the high bandwidth requirements. One of the formats adopted by MPEG for 3D video coding standards is the use of multiple views with associated depth maps. Depth maps enable the coding of a reduced number of views, and are used by compression and synthesis software to reconstruct the light field. However, most of the developed coding and synthesis tools target linearly arranged cameras with small baselines. Here we propose to use the 3D video coding format for full parallax light field coding. We introduce a view selection method inspired by plenoptic sampling followed by transform-based view coding and view synthesis prediction to code residual views. We determine the minimal requirements for view sub-sampling and present the rate-distortion performance of our proposal. We also compare our method with established video compression techniques, such as H.264/AVC, H.264/MVC, and the new 3D video coding algorithm, 3DV-ATM. Our results show that our method not only has an improved rate-distortion performance, it also preserves the structure of the perceived light fields better.

  14. Discontinuity minimization for omnidirectional video projections

    NASA Astrophysics Data System (ADS)

    Alshina, Elena; Zakharchenko, Vladyslav

    2017-09-01

    Advances in display technologies, both for head-mounted devices and television panels, demand resolution increases beyond 4K for the source signal in virtual reality video streaming applications. This poses a problem for content delivery through bandwidth-limited distribution networks. Considering the fact that the source signal covers the entire surrounding space, our investigation revealed that compression efficiency may fluctuate by 40% on average depending on the origin selected at the conversion stage from 3D space to a 2D projection. Based on this knowledge, an origin selection algorithm for video compression applications has been proposed. Using a discontinuity entropy minimization function, a projection origin rotation may be found that provides optimal compression results. The outcome of this research may be applied across various video compression solutions for omnidirectional content.

  15. A Data Hiding Technique to Synchronously Embed Physiological Signals in H.264/AVC Encoded Video for Medicine Healthcare.

    PubMed

    Peña, Raul; Ávila, Alfonso; Muñoz, David; Lavariega, Juan

    2015-01-01

    The recognition of clinical manifestations in both video images and physiological-signal waveforms is an important aid to improving safety and effectiveness in medical care. Physicians can rely on video-waveform (VW) observations to recognize difficult-to-spot signs and symptoms. VW observations can also reduce the number of false-positive incidents and expand recognition coverage to abnormal health conditions. Synchronization between the video images and the physiological-signal waveforms is fundamental to successful recognition of the clinical manifestations. Using conventional equipment to synchronously acquire and display video-waveform information involves complex tasks such as video capture/compression, acquisition/compression of each physiological signal, and video-waveform synchronization based on timestamps. This paper introduces a data hiding technique capable of both enabling embedding channels and synchronously hiding samples of physiological signals in encoded video sequences. Our data hiding technique offers large data capacity and reduces the complexity of video-waveform acquisition and reproduction. The experimental results revealed successful embedding and full restoration of the signals' samples. Our results also demonstrated a small distortion in objective video quality, a small increment in bit rate, and embedding cost savings of -2.6196% for high- and medium-motion video sequences.
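
    To illustrate the general idea of synchronously hiding signal samples in a video stream (a toy stand-in, not the paper's actual H.264/AVC syntax-level method), the sketch below writes each sample's bits into the least significant bits of a frame's quantized coefficients and recovers them on the other end; because the bits ride inside that frame's data, sample and frame stay synchronized by construction:

    ```python
    import numpy as np

    def embed(coeffs, samples, bits=8):
        """Hide integer signal samples in the LSBs of quantized coefficients.

        coeffs  : 1-D integer array of quantized transform coefficients of a frame
        samples : integer samples, each 'bits' wide, acquired during that frame
        Writing one bit per coefficient keeps the hidden samples synchronous
        with the frame they belong to.
        """
        bitstream = [(s >> b) & 1 for s in samples for b in range(bits)]
        assert len(bitstream) <= len(coeffs), "frame has too few coefficients"
        out = coeffs.copy()
        for i, bit in enumerate(bitstream):
            out[i] = (out[i] & ~1) | bit        # overwrite the LSB
        return out

    def extract(coeffs, n_samples, bits=8):
        """Recover the hidden samples from the coefficient LSBs."""
        samples = []
        for s in range(n_samples):
            value = 0
            for b in range(bits):
                value |= int(coeffs[s * bits + b] & 1) << b
            samples.append(value)
        return samples

    # A frame's worth of coefficients carrying three 8-bit ECG samples.
    frame = np.arange(1, 65, dtype=np.int32)
    stego = embed(frame, [200, 17, 255])
    print(extract(stego, 3))   # the hidden samples come back intact
    ```

    Each LSB overwrite changes a coefficient by at most 1, which is the toy analogue of the small objective-quality distortion the paper reports.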

  16. Signaling threat: how situational cues affect women in math, science, and engineering settings.

    PubMed

    Murphy, Mary C; Steele, Claude M; Gross, James J

    2007-10-01

    This study examined the cues hypothesis, which holds that situational cues, such as a setting's features and organization, can make potential targets vulnerable to social identity threat. Objective and subjective measures of identity threat were collected from male and female math, science, and engineering (MSE) majors who watched an MSE conference video depicting either an unbalanced ratio of men to women or a balanced ratio. Women who viewed the unbalanced video exhibited more cognitive and physiological vigilance, and reported a lower sense of belonging and less desire to participate in the conference, than did women who viewed the gender-balanced video. Men were unaffected by this situational cue. The implications for understanding vulnerability to social identity threat, particularly among women in MSE settings, are discussed.

  17. Heat Exchanger Design and Testing for a 6-Inch Rotating Detonation Engine

    DTIC Science & Technology

    2013-03-01

    Engine Research Facility; HHV, higher heating value; LHV, lower heating value; PDE, pulsed detonation engine; RDE, rotating detonation engine; RTD...the combustion community are pulse detonation engines (PDEs) and rotating detonation engines (RDEs). 1.1 Differences between Pulsed and Rotating ...steadier than that of a PDE (2, 3). Figure 1. Unrolled rotating detonation wave from high-speed video (4). Another difference that

  18. Special Technology Area Review on Displays. Report of Department of Defense Advisory Group on Electron Devices Working Group C (Electro-Optics)

    DTIC Science & Technology

    2004-03-01

    mirror device (DMD) for C4ISR applications, the IBM 9.2-megapixel 22-in. diagonal active matrix liquid crystal display (AMLCD) monitor for data...FED, VFD, OLED and a variety of microdisplays (uD, comprising uLCD, uOLED, DMD and other MEMs) (see glossary). 3 CDT = cathode display tubes (used in...than SVGA, greater battery life and brightness, decreased weight and thickness, electromagnetic interference (EMI), and development of video

  19. Assessment of 3D Viewers for the Display of Interactive Documents in the Learning of Graphic Engineering

    ERIC Educational Resources Information Center

    Barbero, Basilio Ramos; Pedrosa, Carlos Melgosa; Mate, Esteban Garcia

    2012-01-01

    The purpose of this study is to determine which 3D viewers should be used for the display of interactive graphic engineering documents, so that the visualization and manipulation of 3D models provide useful support to students of industrial engineering (mechanical, organizational, electronic engineering, etc). The technical features of 26 3D…

  20. ORNL Fuels, Engines, and Emissions Research Center (FEERC)

    ScienceCinema

    None

    2018-02-13

    This video highlights the Vehicle Research Laboratory's capabilities at the Fuels, Engines, and Emissions Research Center (FEERC). FEERC is a Department of Energy user facility located at the Oak Ridge National Laboratory.

  1. On-screen-display (OSD) menu detection for proper stereo content reproduction for 3D TV

    NASA Astrophysics Data System (ADS)

    Tolstaya, Ekaterina V.; Bucha, Victor V.; Rychagov, Michael N.

    2011-03-01

    Modern consumer 3D TV sets can show video content in two different modes: 2D and 3D. In 3D mode, the stereo pair comes from an external device such as a Blu-ray player or satellite receiver. The stereo pair is split into left and right images that are shown one after another; the viewer sees a different image with each eye through shutter glasses properly synchronized with the 3D TV. In addition, some devices that supply the TV with stereo content can display additional information by imposing an overlay picture on the video content, an On-Screen-Display (OSD) menu. Some OSDs are not 3D compatible and lead to incorrect 3D reproduction. In this case, the TV set must recognize the type of OSD, i.e., whether it is 3D compatible, and visualize it correctly by either switching off stereo mode or continuing to display the stereo content. We propose a new, stable method for detecting 3D-incompatible OSD menus on stereo content. A conventional OSD is a rectangular area with letters and pictograms; OSD menus can have different transparency levels and colors. To be 3D compatible, an OSD must be overlaid separately on both images of a stereo pair. The main problem in detecting an OSD is distinguishing whether a color difference is due to the presence of the OSD or due to stereo parallax. We applied special techniques to find a reliable image difference and additionally used the cue that an OSD usually has distinctive geometric features: straight parallel lines. The developed algorithm was tested on our video sequence database, with several types of OSD of different colors and transparency levels overlaid on the video content. The detection rate exceeded 99% of true answers.
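
    One way to separate an overlay difference from a parallax difference, sketched here as a toy stand-in for the "reliable image difference" step (all thresholds and the shift search are assumptions, not the paper's method), is to take the per-pixel minimum of |left - right| over a range of horizontal shifts: genuine parallax matches at its own disparity, while an OSD present in only one view differs at every shift.

    ```python
    import numpy as np

    def osd_difference_map(left, right, max_disparity=8, thresh=30):
        """Flag pixels whose left/right difference cannot be explained by parallax.

        For each horizontal shift up to max_disparity, compute |left - shifted
        right| and keep the per-pixel minimum: a genuinely stereoscopic detail
        matches at its own disparity, while an overlay drawn in only one view
        differs at every shift.
        """
        h, w = left.shape
        best = np.full((h, w), 255.0)
        for d in range(max_disparity + 1):
            shifted = np.empty_like(right, dtype=float)
            shifted[:, d:] = right[:, :w - d]
            shifted[:, :d] = right[:, :d]        # crude border padding
            best = np.minimum(best, np.abs(left.astype(float) - shifted))
        return best > thresh                     # True where an OSD is suspected

    # Synthetic check: the scene is shifted 3 px between the views (parallax),
    # and a bright OSD rectangle is drawn in the left image only.
    rng = np.random.default_rng(0)
    scene = rng.integers(0, 100, size=(40, 60))
    right = scene.copy()
    left = np.empty_like(scene)
    left[:, 3:] = scene[:, :-3]
    left[:, :3] = scene[:, :3]
    left[5:15, 10:30] = 250                      # the overlay
    mask = osd_difference_map(left, right)       # True exactly on the overlay
    ```

    The paper's second cue, straight parallel lines, would then be checked only inside the flagged region, e.g. by verifying that it forms a filled rectangle.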

  2. The impact of video technology on learning: A cooking skills experiment.

    PubMed

    Surgenor, Dawn; Hollywood, Lynsey; Furey, Sinéad; Lavelle, Fiona; McGowan, Laura; Spence, Michelle; Raats, Monique; McCloat, Amanda; Mooney, Elaine; Caraher, Martin; Dean, Moira

    2017-07-01

    This study examines the role of video technology in the development of cooking skills. The study explored the views of 141 female participants on whether video technology can promote confidence in learning new cooking skills to assist in meal preparation. Prior to each focus group participants took part in a cooking experiment to assess the most effective method of learning for low-skilled cooks across four experimental conditions (recipe card only; recipe card plus video demonstration; recipe card plus video demonstration conducted in segmented stages; and recipe card plus video demonstration whereby participants freely accessed video demonstrations as and when needed). Focus group findings revealed that video technology was perceived to assist learning in the cooking process in the following ways: (1) improved comprehension of the cooking process; (2) real-time reassurance in the cooking process; (3) assisting the acquisition of new cooking skills; and (4) enhancing the enjoyment of the cooking process. These findings display the potential for video technology to promote motivation and confidence as well as enhancing cooking skills among low-skilled individuals wishing to cook from scratch using fresh ingredients. Copyright © 2017 Elsevier Ltd. All rights reserved.

  3. A Framework for Realistic Modeling and Display of Object Surface Appearance

    NASA Astrophysics Data System (ADS)

    Darling, Benjamin A.

    With advances in screen and video hardware technology, the type of content presented on computers has progressed from text and simple shapes to high-resolution photographs, photorealistic renderings, and high-definition video. At the same time, there have been significant advances in the area of content capture, with the development of devices and methods for creating rich digital representations of real-world objects. Unlike photo or video capture, which provide a fixed record of the light in a scene, these new technologies provide information on the underlying properties of the objects, allowing their appearance to be simulated for novel lighting and viewing conditions. These capabilities provide an opportunity to continue the computer display progression, from high-fidelity image presentations to digital surrogates that recreate the experience of directly viewing objects in the real world. In this dissertation, a framework was developed for representing objects with complex color, gloss, and texture properties and displaying them onscreen to appear as if they are part of the real-world environment. At its core, there is a conceptual shift from a traditional image-based display workflow to an object-based one. Instead of presenting the stored patterns of light from a scene, the objective is to reproduce the appearance attributes of a stored object by simulating its dynamic patterns of light for the real viewing and lighting geometry. This is accomplished using a computational approach where the physical light sources are modeled and the observer and display screen are actively tracked. Surface colors are calculated for the real spectral composition of the illumination with a custom multispectral rendering pipeline. In a set of experiments, the accuracy of color and gloss reproduction was evaluated by measuring the screen directly with a spectroradiometer. Gloss reproduction was assessed by comparing gonio measurements of the screen output to measurements of the real samples in the same measurement configuration. A chromatic adaptation experiment was performed to evaluate color appearance in the framework and explore the factors that contribute to differences when viewing self-luminous displays as opposed to reflective objects. A set of sample applications was developed to demonstrate the potential utility of the object display technology for digital proofing, psychophysical testing, and artwork display.

  4. The Effect of a Poster, Display, and Recommended Listening List on the Circulation of Audiobooks in the Public Library.

    ERIC Educational Resources Information Center

    Kucalaba, Linda

    Previous studies have found that the librarian's use of book displays and recommended lists is an effective means of increasing circulation in the public library. Yet conflicting results were found when these merchandising techniques were used with collection materials in the nonprint format, specifically audiobooks and videos, instead of books.…

  5. [Development of a system for ultrasonic three-dimensional reconstruction of fetus].

    PubMed

    Baba, K

    1989-04-01

    We have developed a system for ultrasonic three-dimensional (3-D) reconstruction of the fetus using computers. Either a real-time linear array probe or a convex array probe of an ultrasonic scanner was mounted on the position sensor arm of a manual compound scanner in order to detect the position of the probe. A microcomputer was used to convert the position information into an image that could be recorded on video tape. This image was superimposed on the ultrasonic tomographic image simultaneously with a superimposer and recorded on video tape. Fetuses in utero were scanned in seven cases. More than forty ultrasonic section images on the video tape were fed into a minicomputer. The shape of the fetus was displayed three-dimensionally by means of computer graphics. The computer-generated display produced a 3-D image of the fetus and showed the usefulness and accuracy of this system. Since it took only a few seconds for data collection by ultrasonic inspection, fetal movement did not adversely affect the results. Data input took about ten minutes for 40 slices, and 3-D reconstruction and display took about two minutes. The system made it possible to observe and record the 3-D image of the fetus in utero non-invasively and is therefore expected to make it much easier to obtain a 3-D picture of the fetus in utero.
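    The slice-stacking step described above can be sketched as follows. This is an illustrative reconstruction under simplifying assumptions (equally spaced slices, a single intensity threshold for segmentation); the function name, array shapes, and threshold are hypothetical, not from the original system.

```python
import numpy as np

def reconstruct_volume(slices, threshold=0.5):
    """Stack 2-D ultrasound slices into a 3-D voxel volume and segment it.

    slices    -- list of equally spaced 2-D intensity arrays, one per probe position
    threshold -- illustrative intensity cutoff separating tissue from background
    """
    volume = np.stack(slices, axis=0)   # (n_slices, height, width) voxel grid
    mask = volume > threshold           # crude surface segmentation for 3-D rendering
    return volume, mask

# Toy example: 40 synthetic 64x64 slices standing in for the taped section images
rng = np.random.default_rng(0)
slices = [rng.random((64, 64)) for _ in range(40)]
volume, mask = reconstruct_volume(slices)
print(volume.shape)  # (40, 64, 64)
```

    In the actual system the probe's position sensor would supply each slice's spatial offset; here the slices are simply assumed to be parallel and equally spaced.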

  6. A head-mounted display-based personal integrated-image monitoring system for transurethral resection of the prostate.

    PubMed

    Yoshida, Soichiro; Kihara, Kazunori; Takeshita, Hideki; Fujii, Yasuhisa

    2014-12-01

    The head-mounted display (HMD) is a new image monitoring system. We developed the Personal Integrated-image Monitoring System (PIM System) using the HMD (HMZ-T2, Sony Corporation, Tokyo, Japan) in combination with video splitters and multiplexers as a surgical guide system for transurethral resection of the prostate (TURP). The imaging information obtained from the cystoscope, the transurethral ultrasonography (TRUS), the video camera attached to the HMD, and the patient's vital signs monitor were split and integrated by the PIM System, and a composite image was displayed by the HMD using a four-split screen technique. Wearing the HMD, the lead surgeon and the assistant could simultaneously and continuously monitor the same information displayed by the HMD in an ergonomically efficient posture. Each participant could independently rearrange the images comprising the composite image depending on the step being performed. Two benign prostatic hyperplasia (BPH) patients underwent TURP performed by surgeons guided by this system. In both cases, the TURP procedure was successfully performed, and their postoperative clinical courses showed no remarkable unfavorable events. During the procedure, none of the participants experienced any HMD-wear-related adverse effects or reported any discomfort.

  7. Micro-video display with ocular tracking and interactive voice control

    NASA Technical Reports Server (NTRS)

    Miller, James E.

    1993-01-01

    In certain space-restricted environments, many of the benefits resulting from computer technology have been foregone because of the size, weight, inconvenience, and lack of mobility associated with existing computer interface devices. Accordingly, an effort to develop a highly miniaturized and 'wearable' computer display and control interface device, referred to as the Sensory Integrated Data Interface (SIDI), is underway. The system incorporates a micro-video display that provides data display and ocular tracking on a lightweight headset. Software commands are implemented by conjunctive eye movement and voice commands of the operator. In this initial prototyping effort, various 'off-the-shelf' components have been integrated with a desktop computer and a customized menu-tree software application to demonstrate feasibility and conceptual capabilities. When fully developed as a customized system, the interface device will allow mobile, 'hands-free' operation of portable computer equipment. It will thus allow integration of information technology applications into those restrictive environments, both military and industrial, that have not yet taken advantage of the computer revolution. This effort is Phase 1 of Small Business Innovative Research (SBIR) Topic number N90-331 sponsored by the Naval Undersea Warfare Center Division, Newport. The prime contractor is Foster-Miller, Inc. of Waltham, MA.

  8. How to make mathematics relevant to first-year engineering students: perceptions of students on student-produced resources

    NASA Astrophysics Data System (ADS)

    Loch, Birgit; Lamborn, Julia

    2016-01-01

    Many approaches to make mathematics relevant to first-year engineering students have been described. These include teaching practical engineering applications, or a close collaboration between engineering and mathematics teaching staff on unit design and teaching. In this paper, we report on a novel approach where we gave higher year engineering and multimedia students the task to 'make maths relevant' for first-year students. This approach is novel because we moved away from the traditional assumption that staff should produce these resources and instead had students produce them. These students have more recently undertaken first-year mathematical study themselves and can also provide a more mature student perspective to the task than first-year students. Two final-year engineering students and three final-year multimedia students worked on this project over the Australian summer term and produced two animated videos showing where concepts taught in first-year mathematics are applied by professional engineers. It is this student perspective on how to make mathematics relevant to first-year students that we investigate in this paper. We analyse interviews with higher year students as well as focus groups with first-year students who had been shown the videos in class, with a focus on answering the following three research questions: (1) How would students demonstrate the relevance of mathematics in engineering? (2) What are first-year students' views on the resources produced for them? (3) Who should produce resources to demonstrate the relevance of mathematics? There seemed to be some disagreement between first- and final-year students as to how the importance of mathematics should be demonstrated in a video. We therefore argue that it should ideally be a collaboration between higher year students and first-year students, with advice from lecturers, to produce such resources.

  9. A large flat panel multifunction display for military and space applications

    NASA Astrophysics Data System (ADS)

    Pruitt, James S.

    1992-09-01

    A flat panel multifunction display (MFD) that offers the size and reliability benefits of liquid crystal display technology while achieving near-CRT display quality is presented. Display generation algorithms that provide exceptional display quality are being implemented in custom VLSI components to minimize MFD size. A high-performance processor converts user-specified display lists to graphics commands used by these components, resulting in high-speed updates of two-dimensional and three-dimensional images. The MFD uses the MIL-STD-1553B data bus for compatibility with virtually all avionics systems. The MFD can generate displays directly from display lists received from the MIL-STD-1553B bus. Complex formats can be stored in the MFD and displayed using parameters from the data bus. The MFD also accepts direct video input and performs special processing on this input to enhance image quality.

  10. Hierarchical video summarization based on context clustering

    NASA Astrophysics Data System (ADS)

    Tseng, Belle L.; Smith, John R.

    2003-11-01

    A personalized video summary is dynamically generated in our video personalization and summarization system based on user preference and usage environment. The three-tier personalization system adopts the server-middleware-client architecture in order to maintain, select, adapt, and deliver rich media content to the user. The server stores the content sources along with their corresponding MPEG-7 metadata descriptions. In this paper, the metadata includes visual semantic annotations and automatic speech transcriptions. Our personalization and summarization engine in the middleware selects the optimal set of desired video segments by matching shot annotations and sentence transcripts with user preferences. Besides finding the desired contents, the objective is to present a coherent summary. There are diverse methods for creating summaries, and we focus on the challenges of generating a hierarchical video summary based on context information. In our summarization algorithm, three inputs are used to generate the hierarchical video summary output. These inputs are (1) MPEG-7 metadata descriptions of the contents in the server, (2) user preference and usage environment declarations from the user client, and (3) context information including the MPEG-7 controlled term list and classification scheme. In a video sequence, descriptions and relevance scores are assigned to each shot. Based on these shot descriptions, context clustering is performed to group consecutive similar shots into hierarchical scene representations. The context clustering is based on the available context information, and may be derived from domain knowledge or rule engines. Finally, the selection of structured video segments to generate the hierarchical summary efficiently balances scene representation against shot selection.
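    The consecutive-shot clustering step can be sketched as below. This is a minimal illustration, assuming each shot is represented by a feature vector and scenes are split where cosine similarity between neighbors drops below a threshold; the paper derives shot descriptions from MPEG-7 metadata and context information, and the feature vectors and threshold here are toy stand-ins.

```python
import numpy as np

def cluster_consecutive_shots(shot_features, threshold=0.8):
    """Group consecutive shots whose cosine similarity exceeds a threshold,
    yielding candidate scene clusters for a hierarchical summary."""
    scenes, current = [], [0]
    for i in range(1, len(shot_features)):
        a, b = shot_features[i - 1], shot_features[i]
        sim = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
        if sim >= threshold:
            current.append(i)        # similar to previous shot: extend the scene
        else:
            scenes.append(current)   # dissimilar: close the scene, start a new one
            current = [i]
    scenes.append(current)
    return scenes

# Two visually distinct groups of shots
shots = [np.array([1.0, 0.0]), np.array([0.9, 0.1]), np.array([0.0, 1.0])]
print(cluster_consecutive_shots(shots))  # [[0, 1], [2]]
```

    A real implementation would operate on richer shot descriptors and fold in the relevance scores when selecting segments from each scene.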

  11. Rocket Engines Displayed for 1966 Inspection at Lewis Research Center

    NASA Image and Video Library

    1966-10-21

    An array of rocket engines displayed in the Propulsion Systems Laboratory for the 1966 Inspection held at the National Aeronautics and Space Administration (NASA) Lewis Research Center. Lewis engineers had been working on chemical, nuclear, and solid rocket engines throughout the 1960s. The engines on display are, from left to right: two scale models of the Aerojet M-1, a Rocketdyne J-2, a Pratt and Whitney RL-10, and a Rocketdyne throttleable engine. Also on display are several ejector plates and nozzles. The Chemical Rocket Division resolved issues such as combustion instability and screech, and improved operation of cooling systems and turbopumps. The 1.5-million-pound-thrust M-1 engine was the largest hydrogen-fueled rocket engine ever created. It was a joint project between NASA Lewis and Aerojet-General. Although much larger in size, the M-1 used technology developed for the RL-10 and J-2. The M-1 program was cancelled in late 1965 due to budget cuts and the lack of a post-Apollo mission. The October 1966 Inspection was the culmination of almost a year of events held to mark the center's 25th anniversary. The three-day Inspection, Lewis' first since 1957, drew 2000 business, industry, and government executives and included an employee open house. The visitors witnessed presentations at the major facilities and viewed the Gemini VII spacecraft, a Centaur rocket, and other displays in the hangar. In addition, Lewis' newest facility, the Zero Gravity Facility, was shown off for the first time.

  12. Synthesis multi-projector content for multi-projector three dimension display using a layered representation

    NASA Astrophysics Data System (ADS)

    Qin, Chen; Ren, Bin; Guo, Longfei; Dou, Wenhua

    2014-11-01

    Multi-projector three-dimension display is a promising multi-view glasses-free three-dimension (3D) display technology that can produce full-colour, high-definition 3D images on its screen. One key problem for multi-projector 3D displays is how to acquire the source images for the projector array while avoiding the pseudoscopic problem. This paper first analyzes the display characteristics of multi-projector 3D displays and then proposes a projector-content synthesis method using a tetrahedral transform. A 3D video format based on a stereo image pair and an associated disparity map is presented; it is well suited to any type of multi-projector 3D display and reduces storage requirements. Experimental results show that our method solves the pseudoscopic problem.

  13. Mechanical Objects and the Engineering Learner: An Experimental Study of How the Presence of Objects Affects Students' Performance on Engineering Related Tasks

    ERIC Educational Resources Information Center

    Bairaktarova, Diana N.

    2013-01-01

    People display varying levels of interaction with the mechanical objects in their environment; engineers in particular as makers and users of these objects display a higher level of interaction with them. Investigating the educational potential of mechanical objects in stimulating and supporting learning in engineering is warranted by the fact…

  14. A Human Factors Framework for Payload Display Design

    NASA Technical Reports Server (NTRS)

    Dunn, Mariea C.; Hutchinson, Sonya L.

    1998-01-01

    During missions to space, one charge of the astronaut crew is to conduct research experiments. These experiments, referred to as payloads, typically are controlled by computers. Crewmembers interact with payload computers by using visual interfaces or displays. To enhance the safety, productivity, and efficiency of crewmember interaction with payload displays, particular attention must be paid to the usability of these displays. Enhancing display usability requires adoption of a design process that incorporates human factors engineering principles at each stage. This paper presents a proposed framework for incorporating human factors engineering principles into the payload display design process.

  15. Broadening the interface bandwidth in simulation based training

    NASA Technical Reports Server (NTRS)

    Somers, Larry E.

    1989-01-01

    Currently most computer based simulations rely exclusively on computer generated graphics to create the simulation. When training is involved, the method almost exclusively used to display information to the learner is text displayed on the cathode ray tube. MICROEXPERT Systems is concentrating on broadening the communications bandwidth between the computer and user by employing a novel approach to video image storage combined with sound and voice output. An expert system is used to combine and control the presentation of analog video, sound, and voice output with computer based graphics and text. Researchers are currently involved in the development of several graphics based user interfaces for NASA, the U.S. Army, and the U.S. Navy. Here, the focus is on the human factors considerations, software modules, and hardware components being used to develop these interfaces.

  16. Psycho-physiological effects of head-mounted displays in ubiquitous use

    NASA Astrophysics Data System (ADS)

    Kawai, Takashi; Häkkinen, Jukka; Oshima, Keisuke; Saito, Hiroko; Yamazoe, Takashi; Morikawa, Hiroyuki; Nyman, Göte

    2011-02-01

    In this study, two experiments were conducted to evaluate the psycho-physiological effects of practical use of a monocular head-mounted display (HMD) in a real-world environment, based on the assumption of consumer-level applications such as viewing video content and receiving navigation information while walking. In experiment 1, the workload was examined for different types of stimulus presentation using an HMD (monocular or binocular, see-through or non-see-through). Experiment 2 focused on the relationship between the real-world environment and the visual information presented using a monocular HMD. The workload was compared between a case where participants walked while viewing video content unrelated to the real-world environment, and a case where participants walked while viewing visual information that augmented the real-world environment, such as navigation cues.

  17. Synchronized voltage contrast display analysis system

    NASA Technical Reports Server (NTRS)

    Johnston, M. F.; Shumka, A.; Miller, E.; Evans, K. C. (Inventor)

    1982-01-01

    An apparatus and method for comparing internal voltage potentials of first and second operating electronic components such as large scale integrated circuits (LSI's) in which voltage differentials are visually identified via an appropriate display means are described. More particularly, in a first embodiment of the invention a first and second scanning electron microscope (SEM) are configured to scan a first and second operating electronic component respectively. The scan pattern of the second SEM is synchronized to that of the first SEM so that both simultaneously scan corresponding portions of the two operating electronic components. Video signals from each SEM corresponding to secondary electron signals generated as a result of a primary electron beam intersecting each operating electronic component in accordance with a predetermined scan pattern are provided to a video mixer and color encoder.

  18. Flow visualization of CFD using graphics workstations

    NASA Technical Reports Server (NTRS)

    Lasinski, Thomas; Buning, Pieter; Choi, Diana; Rogers, Stuart; Bancroft, Gordon

    1987-01-01

    High performance graphics workstations are used to visualize the fluid flow dynamics obtained from supercomputer solutions of computational fluid dynamic programs. The visualizations can be done independently on the workstation or while the workstation is connected to the supercomputer in a distributed computing mode. In the distributed mode, the supercomputer interactively performs the computationally intensive graphics rendering tasks while the workstation performs the viewing tasks. A major advantage of the workstations is that the viewers can interactively change their viewing position while watching the dynamics of the flow fields. An overview of the computer hardware and software required to create these displays is presented. For complex scenes the workstation cannot create the displays fast enough for good motion analysis. For these cases, the animation sequences are recorded on video tape or 16 mm film a frame at a time and played back at the desired speed. The additional software and hardware required to create these video tapes or 16 mm movies are also described. Photographs illustrating current visualization techniques are discussed. Examples of the use of the workstations for flow visualization through animation are available on video tape.

  19. The virtual brain: 30 years of video-game play and cognitive abilities.

    PubMed

    Latham, Andrew J; Patston, Lucy L M; Tippett, Lynette J

    2013-09-13

    Forty years have passed since video-games were first made widely available to the public and subsequently playing games has become a favorite pastime for many. Players continuously engage with dynamic visual displays with success contingent on the time-pressured deployment, and flexible allocation, of attention as well as precise bimanual movements. Evidence to date suggests that both brief and extensive exposure to video-game play can result in a broad range of enhancements to various cognitive faculties that generalize beyond the original context. Despite promise, video-game research is host to a number of methodological issues that require addressing before progress can be made in this area. Here an effort is made to consolidate the past 30 years of literature examining the effects of video-game play on cognitive faculties and, more recently, neural systems. Future work is required to identify the mechanism that allows the act of video-game play to generate such a broad range of generalized enhancements.

  20. The virtual brain: 30 years of video-game play and cognitive abilities

    PubMed Central

    Latham, Andrew J.; Patston, Lucy L. M.; Tippett, Lynette J.

    2013-01-01

    Forty years have passed since video-games were first made widely available to the public and subsequently playing games has become a favorite pastime for many. Players continuously engage with dynamic visual displays with success contingent on the time-pressured deployment, and flexible allocation, of attention as well as precise bimanual movements. Evidence to date suggests that both brief and extensive exposure to video-game play can result in a broad range of enhancements to various cognitive faculties that generalize beyond the original context. Despite promise, video-game research is host to a number of methodological issues that require addressing before progress can be made in this area. Here an effort is made to consolidate the past 30 years of literature examining the effects of video-game play on cognitive faculties and, more recently, neural systems. Future work is required to identify the mechanism that allows the act of video-game play to generate such a broad range of generalized enhancements. PMID:24062712

  1. Scorebox extraction from mobile sports videos using Support Vector Machines

    NASA Astrophysics Data System (ADS)

    Kim, Wonjun; Park, Jimin; Kim, Changick

    2008-08-01

    The scorebox plays an important role in understanding the contents of sports videos. However, the tiny scorebox may make it hard for viewers on small displays to grasp the game situation. In this paper, we propose a novel framework to extract the scorebox from sports video frames. We first extract candidates by using accumulated intensity and edge information after a short learning period. Since there are various types of scoreboxes inserted in sports videos, multiple attributes need to be used for efficient extraction. Based on those attributes, the information gain is computed and the top three ranked attributes in terms of information gain are selected as a three-dimensional feature vector for Support Vector Machines (SVM) to distinguish the scorebox from other candidates, such as logos and advertisement boards. The proposed method is tested on various videos of sports games, and experimental results show the efficiency and robustness of our proposed method.
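    The information-gain ranking step can be sketched as below. The attribute names, discretized values, and labels are toy data chosen for illustration; in the paper the top-ranked attributes then form the feature vector fed to an SVM (omitted here).

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a label list, in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(attribute_values, labels):
    """Reduction in label entropy from splitting on a discrete attribute."""
    base, n, remainder = entropy(labels), len(labels), 0.0
    for v in set(attribute_values):
        subset = [l for a, l in zip(attribute_values, labels) if a == v]
        remainder += len(subset) / n * entropy(subset)
    return base - remainder

def top_k_attributes(attributes, labels, k=3):
    """Rank candidate attributes (name -> value list) by information gain."""
    ranked = sorted(attributes,
                    key=lambda name: information_gain(attributes[name], labels),
                    reverse=True)
    return ranked[:k]

# Toy data: 'edge_density' separates scorebox from logo perfectly, 'color' does not
labels = ['scorebox', 'scorebox', 'logo', 'logo']
attributes = {
    'edge_density': ['high', 'high', 'low', 'low'],
    'color': ['red', 'blue', 'red', 'blue'],
}
print(top_k_attributes(attributes, labels, k=1))  # ['edge_density']
```

    With real candidates, the three selected attribute values per region would be assembled into the 3-D feature vector used for SVM classification.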

  2. Viewing the viewers: how adults with attentional deficits watch educational videos.

    PubMed

    Hassner, Tal; Wolf, Lior; Lerner, Anat; Leitner, Yael

    2014-10-01

    Knowing how adults with ADHD interact with prerecorded video lessons at home may provide a novel means of early screening and long-term monitoring for ADHD. Viewing patterns of 484 students with known ADHD were compared with 484 age, gender, and academically matched controls chosen from 8,699 non-ADHD students. Transcripts generated by their video playback software were analyzed using t tests and regression analysis. ADHD students displayed significant tendencies (p ≤ .05) to watch videos with more pauses and more reviews of previously watched parts. Other parameters showed similar tendencies. Regression analysis indicated that attentional deficits remained constant for age and gender but varied for learning experience. There were measurable and significant differences between the video-viewing habits of the ADHD and non-ADHD students. This provides a new perspective on how adults cope with attention deficits and suggests a novel means of early screening for ADHD. © 2011 SAGE Publications.
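    The group comparison of viewing parameters can be sketched with Welch's t statistic (the unequal-variance form of the two-sample t test). The pause counts below are invented toy numbers, not data from the study, and the function name is illustrative.

```python
import math

def welch_t(sample_a, sample_b):
    """Welch's t statistic for two independent samples with unequal variances,
    e.g. comparing pause counts per video between two viewer groups."""
    na, nb = len(sample_a), len(sample_b)
    ma, mb = sum(sample_a) / na, sum(sample_b) / nb
    va = sum((x - ma) ** 2 for x in sample_a) / (na - 1)  # sample variance
    vb = sum((x - mb) ** 2 for x in sample_b) / (nb - 1)
    return (ma - mb) / math.sqrt(va / na + vb / nb)

# Toy pause counts: the first group pauses more often than the second
group_a_pauses = [8, 10, 9, 12, 11]
group_b_pauses = [4, 5, 3, 6, 5]
t = welch_t(group_a_pauses, group_b_pauses)
print(round(t, 2))  # 6.19
```

    In practice one would convert the statistic to a p-value (e.g. with `scipy.stats.ttest_ind(..., equal_var=False)`) and, as in the study, follow up with regression analysis controlling for age, gender, and learning experience.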

  3. Introducing a Public Stereoscopic 3D High Dynamic Range (SHDR) Video Database

    NASA Astrophysics Data System (ADS)

    Banitalebi-Dehkordi, Amin

    2017-03-01

    High dynamic range (HDR) displays and cameras are paving their way through the consumer market at a rapid growth rate. Thanks to TV and camera manufacturers, HDR systems are now becoming commercially available to end users. This is taking place only a few years after the blooming of 3D video technologies. MPEG/ITU are also actively working towards the standardization of these technologies. However, preliminary research efforts in these video technologies are hampered by the lack of sufficient experimental data. In this paper, we introduce a stereoscopic 3D HDR database of videos that is made publicly available to the research community. We explain the procedure taken to capture, calibrate, and post-process the videos. In addition, we provide insights on potential use-cases, challenges, and research opportunities implied by the combination of the higher dynamic range of the HDR aspect and the depth impression of the 3D aspect.

  4. Correlates and consequences of exposure to video game violence: hostile personality, empathy, and aggressive behavior.

    PubMed

    Bartholow, Bruce D; Sestir, Marc A; Davis, Edward B

    2005-11-01

    Research has shown that exposure to violent video games causes increases in aggression, but the mechanisms of this effect have remained elusive. Also, potential differences in short-term and long-term exposure are not well understood. An initial correlational study shows that video game violence exposure (VVE) is positively correlated with self-reports of aggressive behavior and that this relation is robust to controlling for multiple aspects of personality. A lab experiment showed that individuals low in VVE behave more aggressively after playing a violent video game than after a nonviolent game but that those high in VVE display relatively high levels of aggression regardless of game content. Mediational analyses show that trait hostility, empathy, and hostile perceptions partially account for the VVE effect on aggression. These findings suggest that repeated exposure to video game violence increases aggressive behavior in part via changes in cognitive and personality factors associated with desensitization.

  5. Do you think you have what it takes to set up a long-term video monitoring unit?

    PubMed

    Smith, Sheila L

    2006-03-01

    The single most important factor when setting up a long-term video monitoring unit is research. Research all vendors by traveling to other sites and calling other facilities. Considerations with equipment include the server, acquisition units, review units, cameras, software, and monitors as well as other factors including Health Insurance Portability and Accountability Act (HIPAA) compliance. Research customer support including both field and telephone support. Involve your Clinical Engineering Department in your investigations. Be sure to obtain warranty information. Researching placement of the equipment is essential. Communication with numerous groups is vital. Administration, engineers, clinical engineering, physicians, infection control, environmental services, house supervisors, security, and all involved parties should be involved in the planning.

  6. The Study on Neuro-IE Management Software in Manufacturing Enterprises. -The Application of Video Analysis Technology

    NASA Astrophysics Data System (ADS)

    Bian, Jun; Fu, Huijian; Shang, Qian; Zhou, Xiangyang; Ma, Qingguo

    This paper analyzes the outstanding problems in current industrial production by reviewing the three stages of industrial engineering development. Based on investigations and interviews in enterprises, we propose the new idea of applying "computer video analysis technology" to industrial engineering management software, and we add a workstation "loose coefficient" to this software to enable production scheduling that is both scientific and humane. We also suggest utilizing biofeedback technology to promote further research on "the rules of workers' physiological, psychological, and emotional changes in production". This combination will advance industrial engineering theory and help enterprises progress towards flexible social production; it thus offers substantial value in theoretical innovation, social significance, and practical application.

  7. User interface using a 3D model for video surveillance

    NASA Astrophysics Data System (ADS)

    Hata, Toshihiko; Boh, Satoru; Tsukada, Akihiro; Ozaki, Minoru

    1998-02-01

    These days, industrial surveillance and monitoring applications such as plant control and building security must be carried out quickly and precisely by fewer people. Utilizing multimedia technology is a good approach to meeting this need, and we previously developed Media Controller, which is designed for these applications and provides real-time recording and retrieval of digital video data in a distributed environment. In this paper, we propose a user interface for such a distributed video surveillance system in which 3D models of buildings and facilities are connected to the surveillance video. A novel method of synchronizing camera field data with each frame of a video stream is considered. This method records and reads the camera field data similarly to the video data and transmits it synchronously with the video stream. This enables the user interface to offer useful functions such as immediately comprehending the camera's field of view and providing clues when visibility is poor, for both live and playback video. We have also implemented and evaluated the display function that makes the surveillance video and the 3D model work together, using Media Controller with Java and with Virtual Reality Modeling Language employed for multi-purpose and intranet use of the 3D model.

  8. Speech Auditory Alerts Promote Memory for Alerted Events in a Video-Simulated Self-Driving Car Ride.

    PubMed

    Nees, Michael A; Helbein, Benji; Porter, Anna

    2016-05-01

    Auditory displays could be essential to helping drivers maintain situation awareness in autonomous vehicles, but to date, few or no studies have examined the effectiveness of different types of auditory displays for this application scenario. Recent advances in the development of autonomous vehicles (i.e., self-driving cars) have suggested that widespread automation of driving may be tenable in the near future. Drivers may be required to monitor the status of automation programs and vehicle conditions as they engage in secondary leisure or work tasks (entertainment, communication, etc.) in autonomous vehicles. An experiment compared memory for alerted events-a component of Level 1 situation awareness-using speech alerts, auditory icons, and a visual control condition during a video-simulated self-driving car ride with a visual secondary task. The alerts gave information about the vehicle's operating status and the driving scenario. Speech alerts resulted in better memory for alerted events. Both auditory display types resulted in less perceived effort devoted toward the study tasks but also greater perceived annoyance with the alerts. Speech auditory displays promoted Level 1 situation awareness during a simulation of a ride in a self-driving vehicle under routine conditions, but annoyance remains a concern with auditory displays. Speech auditory displays showed promise as a means of increasing Level 1 situation awareness of routine scenarios during an autonomous vehicle ride with an unrelated secondary task. © 2016, Human Factors and Ergonomics Society.

  9. Video Bandwidth Compression System.

    DTIC Science & Technology

    1980-08-01

    scaling function, located between the inverse DPCM and inverse transform, on the decoder matrix multiplier chips. ... Bit Unpacker and Inverse DPCM Slave Sync Board 15 e. Inverse DPCM Loop Boards 15 f. Inverse Transform Board 16 g. Composite Video Output Board 16 ... 36 a. Display Refresh Memory 36 (1) Memory Section 37 (2) Timing and Control 39 b. Bit Unpacker and Inverse DPCM 40 c. Inverse Transform Processor 43

  10. Sexual Orientation and U.S. Military Personnel Policy: Options and Assessment

    DTIC Science & Technology

    1993-01-01

    include smaller actions, such as allocation of time to the new policy and keeping the change before members through video or other messages such as... were also taken. A condensed video and still picture record has been provided separately, and the complete videotape and all photography have been... touching, leering, lascivious remarks, and the display of porno-... -graphic material

  11. Leading the Development of Concepts of Operations for Next-Generation Remotely Piloted Aircraft

    DTIC Science & Technology

    2016-01-01

    overarching CONOPS. RPAs must provide full motion video and signals intelligence (SIGINT) capabilities to fulfill their intelligence, surveillance, and... reached full capacity, combatant commanders had an insatiable demand for this new breed of capability, and phrases like Pred porn and drone strike... dimensional steering line on the video feed of the pilot's head-up display (HUD) that would indicate turning cues and finite steering paths for optimal

  12. Method and apparatus for calibrating a tiled display

    NASA Technical Reports Server (NTRS)

    Chen, Chung-Jen (Inventor); Johnson, Michael J. (Inventor); Chandrasekhar, Rajesh (Inventor)

    2001-01-01

    A display system that can be calibrated and re-calibrated with a minimal amount of manual intervention. To accomplish this, one or more cameras are provided to capture an image of the display screen. The resulting captured image is processed to identify any non-desirable characteristics, including visible artifacts such as seams, bands, rings, etc. Once the non-desirable characteristics are identified, an appropriate transformation function is determined. The transformation function is used to pre-warp the input video signal that is provided to the display such that the non-desirable characteristics are reduced or eliminated from the display. The transformation function preferably compensates for spatial non-uniformity, color non-uniformity, luminance non-uniformity, and other visible artifacts.

  13. Integrating critical interface elements for intuitive single-display aviation control of UAVs

    NASA Astrophysics Data System (ADS)

    Cooper, Joseph L.; Goodrich, Michael A.

    2006-05-01

    Although advancing levels of technology allow UAV operators to give increasingly complex commands with expanding temporal scope, it is unlikely that the need for immediate situation awareness and local, short-term flight adjustment will ever be completely superseded. Local awareness and control are particularly important when the operator uses the UAV to perform a search or inspection task. There are many different tasks which would be facilitated by search and inspection capabilities of a camera-equipped UAV. These tasks range from bridge inspection and news reporting to wilderness search and rescue. The system should be simple, inexpensive, and intuitive for non-pilots. An appropriately designed interface should (a) provide a context for interpreting video and (b) support UAV tasking and control, all within a single display screen. In this paper, we present and analyze an interface that attempts to accomplish this goal. The interface utilizes a georeferenced terrain map rendered from publicly available altitude data and terrain imagery to create a context in which the location of the UAV and the source of the video are communicated to the operator. Rotated and transformed imagery from the UAV provides a stable frame of reference for the operator and integrates cleanly into the terrain model. Simple icons overlaid onto the main display provide intuitive control and feedback when necessary but fade to a semi-transparent state when not in use to avoid distracting the operator's attention from the video signal. With various interface elements integrated into a single display, the interface runs nicely on a small, portable, inexpensive system with a single display screen and simple input device, but is powerful enough to allow a single operator to deploy, control, and recover a small UAV when coupled with appropriate autonomy. As we present elements of the interface design, we will identify concepts that can be leveraged into a large class of UAV applications.

  14. SES cupola interactive display design environment

    NASA Technical Reports Server (NTRS)

    Vu, Bang Q.; Kirkhoff, Kevin R.

    1989-01-01

    The Systems Engineering Simulator, located at the Lyndon B. Johnson Space Center in Houston, Texas, is tasked with providing a real-time simulator for developing displays and controls targeted for the Space Station Freedom. These displays and controls will exist inside an enclosed workstation located on the space station. The simulation is currently providing the engineering analysis environment for NASA and contractor personnel to design, prototype, and test alternatives for graphical presentation of data to an astronaut while he performs specified tasks. A highly desirable aspect of this environment is to have the capability to rapidly develop and bring on-line a number of different displays for use in determining the best utilization of graphics techniques in achieving maximum efficiency of the test subject fulfilling his task. The Systems Engineering Simulator now has available a tool which assists in the rapid development of displays for these graphic workstations. The Display Builder was developed in-house to provide an environment which allows easy construction and modification of displays within minutes of receiving requirements for specific tests.

  15. Engineering Novel and Improved Biocatalysts by Cell Surface Display

    PubMed Central

    Smith, Mason R.; Khera, Eshita; Wen, Fei

    2017-01-01

    Biocatalysts, especially enzymes, have the ability to catalyze reactions with high product selectivity, utilize a broad range of substrates, and maintain activity at low temperature and pressure. Therefore, they represent a renewable, environmentally friendly alternative to conventional catalysts. Most current industrial-scale chemical production processes using biocatalysts employ soluble enzymes or whole cells expressing intracellular enzymes. Cell surface display systems differ by presenting heterologous enzymes extracellularly, overcoming some of the limitations associated with enzyme purification and substrate transport. Additionally, coupled with directed evolution, cell surface display is a powerful platform for engineering enzymes with enhanced properties. In this review, we will introduce the molecular and cellular principles of cell surface display and discuss how it has been applied to engineer enzymes with improved properties as well as to develop surface-engineered microbes as whole-cell biocatalysts. PMID:29056821

  16. M13 bacteriophage displaying DOPA on surfaces: fabrication of various nanostructured inorganic materials without time-consuming screening processes.

    PubMed

    Park, Joseph P; Do, Minjae; Jin, Hyo-Eon; Lee, Seung-Wuk; Lee, Haeshin

    2014-01-01

    M13 bacteriophage (phage) was engineered for use as a versatile template for preparing various nanostructured materials via genetic engineering coupled to enzymatic chemical conversions. First, we engineered the M13 phage to display TyrGluGluGlu (YEEE) on the pVIII coat protein and then enzymatically converted the Tyr residue to 3,4-dihydroxyl-l-phenylalanine (DOPA). The DOPA-displayed M13 phage could perform two functions: assembly and nucleation. The engineered phage assembles various noble metals, metal oxides, and semiconducting nanoparticles into one-dimensional arrays. Furthermore, the DOPA-displayed phage triggered the nucleation and growth of gold, silver, platinum, bimetallic cobalt-platinum, and bimetallic iron-platinum nanowires. This versatile phage template enables rapid preparation of phage-based prototype devices by eliminating the screening process, thus reducing effort and time.

  17. Pelvic Floor Dyssynergia

    MedlinePlus

    ... It is a painless process that uses a computer and a video monitor to display bodily functions ... or as line graphs we can see on a computer screen. In this way, we receive information (feedback) ...

  18. Issues in visual support to real-time space system simulation solved in the Systems Engineering Simulator

    NASA Technical Reports Server (NTRS)

    Yuen, Vincent K.

    1989-01-01

    The Systems Engineering Simulator has addressed the major issues in providing visual data to its real-time man-in-the-loop simulations. Out-the-window views and CCTV views are provided by three scene systems to give the astronauts their real-world views. To expand the window coverage for the Space Station Freedom workstation a rotating optics system is used to provide the widest field of view possible. To provide video signals to as many viewpoints as possible, windows and CCTVs, with a limited amount of hardware, a video distribution system has been developed to time-share the video channels among viewpoints at the selection of the simulation users. These solutions have provided the visual simulation facility for real-time man-in-the-loop simulations for the NASA space program.

  19. Hardware/Software Issues for Video Guidance Systems: The Coreco Frame Grabber

    NASA Technical Reports Server (NTRS)

    Bales, John W.

    1996-01-01

    The F64 frame grabber is a high-performance video image acquisition and processing board utilizing the TMS320C40 and TMS34020 processors. The hardware is designed for the 16-bit ISA bus and supports multiple digital or analog cameras. It has an acquisition rate of 40 million pixels per second, with a variable sampling frequency of 510 kHz to 40 MHz. The board has a 4 MB frame buffer memory expandable to 32 MB and supports simultaneous acquisition and processing. It supports both VGA and RGB displays, and accepts all analog and digital video input standards.

  20. Plant Chlorophyll Content Imager with Reference Detection Signals

    NASA Technical Reports Server (NTRS)

    Spiering, Bruce A. (Inventor); Carter, Gregory A. (Inventor)

    2000-01-01

    A portable plant chlorophyll imaging system is described which collects light reflected from a target plant and separates the collected light into two different wavelength bands. These wavelength bands, or channels, are described as having center wavelengths of 700 nm and 840 nm. The light collected in these two channels is processed using synchronized video cameras. A controller provided in the system compares the level of light of video images reflected from a target plant with a reference level of light from a source illuminating the plant. The percentages of reflection in the two separate wavelength bands from the target plant are compared to provide a ratio video image that indicates a relative level of plant chlorophyll content and physiological stress. Multiple display modes are described for viewing the video images.
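
The two processing steps described (normalizing each channel against the reference illumination, then ratioing the two bands) can be sketched per pixel with numpy. The function names are assumptions, and the mapping from ratio value to chlorophyll level would be calibrated against ground-truth measurements in practice.

```python
import numpy as np

def reflectance(signal: np.ndarray, reference: np.ndarray,
                eps: float = 1e-6) -> np.ndarray:
    # Normalize the camera signal by the reference (illumination) level,
    # giving a percent-reflection image that is independent of lighting.
    return signal / np.maximum(reference, eps)

def ratio_image(r700: np.ndarray, r840: np.ndarray,
                eps: float = 1e-6) -> np.ndarray:
    # Per-pixel ratio of the 700 nm band to the 840 nm band; relative
    # variations in this ratio track chlorophyll content and stress.
    return r700 / np.maximum(r840, eps)
```
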

  1. Visual Motion Perception

    DTIC Science & Technology

    1991-08-15

    Conversely, displays were constructed with normal density-controlled KDE cues but with... past experience to the experimental stimuli... Excluding... a gray background is displayed on even frames (labelled 1:0); this introduces 50% scintillation (density control)... video tapes were prepared, each of which contained all the experimental ASL signs but distributed into different filter groups. Eight

  2. [The hygienic characteristics of the medical technology accompaniment to the development, creation and operation of installations equipped with video display terminals].

    PubMed

    Prygun, A V; Lazarev, N V

    1998-10-01

    Radiation measurements at the workplaces of operators in command and control installations proved that the environmental parameters that depend on electronic display operation comply with regulatory requirements. Nevertheless, operator health assessments show that the problem of personnel safety still exists. The authors recommend measures to improve the situation.

  3. The Systems Engineering Design of a Smart Forward Operating Base Surveillance System for Forward Operating Base Protection

    DTIC Science & Technology

    2013-06-01

    fixed sensors located along the perimeter of the FOB. The video is analyzed for facial recognition to alert the Network Operations Center (NOC)... the UAV is processed on board for facial recognition, and video for behavior analysis is sent directly to the Network Operations Center (NOC). Video... captured by the fixed sensors is sent directly to the NOC for facial recognition and behavior analysis processing. The multi-directional signal

  4. Hip Hop Dance Experience Linked to Sociocognitive Ability

    PubMed Central

    Bonny, Justin W.; Lindberg, Jenna C.; Pacampara, Marc C.

    2017-01-01

    Expertise within gaming (e.g., chess, video games) and kinesthetic (e.g., sports, classical dance) activities has been found to be linked with specific cognitive skills. Some of these skills (working memory, mental rotation, problem solving) are linked to higher performance in science, technology, math, and engineering (STEM) disciplines. In the present study, we examined whether experience in a different activity, hip hop dance, is also linked to cognitive abilities connected with STEM skills as well as social cognition ability. Dancers who varied in hip hop and other dance style experience were presented with a set of computerized tasks that assessed working memory capacity, mental rotation speed, problem solving efficiency, and theory of mind. We found that, when controlling for demographic factors and other dance style experience, those with greater hip hop dance experience were faster at mentally rotating images of hands at greater angle disparities, and there was a trend for greater accuracy at identifying positive emotions displayed by cropped images of human faces. We suggest that hip hop dance, similar to other more technical activities such as video gameplay, taps some of the specific cognitive abilities that underlie STEM skills. Furthermore, we suggest that hip hop dance experience can be used to reach populations who may not otherwise be interested in other kinesthetic or gaming activities and potentially enhance select sociocognitive skills. PMID:28146562

  5. Print, Broadcast Students Share VDTs at West Fla.

    ERIC Educational Resources Information Center

    Roberts, Churchill L.; Dickson, Sandra H.

    1985-01-01

    Describes the use of video display terminals in the journalism lab of a Florida university. Discusses the different purposes for which broadcast and print journalism students use such equipment. (HTH)

  6. The Use of Smart Glasses for Surgical Video Streaming.

    PubMed

    Hiranaka, Takafumi; Nakanishi, Yuta; Fujishiro, Takaaki; Hida, Yuichi; Tsubosaka, Masanori; Shibata, Yosaku; Okimura, Kenjiro; Uemoto, Harunobu

    2017-04-01

    Observation of surgical procedures performed by experts is extremely important for acquisition and improvement of surgical skills. Smart glasses are small computers, which comprise a head-mounted monitor and video camera, and can be connected to the internet. They can be used for remote observation of surgeries by video streaming. Although Google Glass is the most commonly used smart glasses for medical purposes, it is still unavailable commercially and has some limitations. This article reports the use of a different type of smart glasses, InfoLinker, for surgical video streaming. InfoLinker has been commercially available in Japan for industrial purposes for more than 2 years. It is connected to a video server via wireless internet directly, and streaming video can be seen anywhere an internet connection is available. We have attempted live video streaming of knee arthroplasty operations that were viewed at several different locations, including foreign countries, on a common web browser. Although the quality of video images depended on the resolution and dynamic range of the video camera, speed of internet connection, and the wearer's attention to minimize image shaking, video streaming could be easily performed throughout the procedure. The wearer could confirm the quality of the video as the video was being shot by the head-mounted display. The time and cost for observation of surgical procedures can be reduced by InfoLinker, and further improvement of hardware as well as the wearer's video shooting technique is expected. We believe that this can be used in other medical settings.

  7. 14. AERIAL VIEW OF ENGINE DISPLAY INSIDE PASSENGER CAR SHOP ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    14. AERIAL VIEW OF ENGINE DISPLAY INSIDE PASSENGER CAR SHOP (NOW A TRANSPORTATION MUSEUM) - Baltimore & Ohio Railroad, Mount Clare Passenger Car Shop, Southwest corner of Pratt & Poppleton Streets, Baltimore, Independent City, MD

  8. Feasibility of video codec algorithms for software-only playback

    NASA Astrophysics Data System (ADS)

    Rodriguez, Arturo A.; Morse, Ken

    1994-05-01

    Software-only video codecs can provide good playback performance in desktop computers with a 486 or 68040 CPU running at 33 MHz without special hardware assistance. Typically, playback of compressed video can be categorized into three tasks: the actual decoding of the video stream, color conversion, and the transfer of decoded video data from system RAM to video RAM. By current standards, good playback performance is the decoding and display of video streams of 320 by 240 (or larger) compressed frames at 15 (or greater) frames per second. Software-only video codecs have evolved by modifying and tailoring existing compression methodologies to suit video playback in desktop computers. In this paper we examine the characteristics used to evaluate software-only video codec algorithms, namely: image fidelity (i.e., image quality), bandwidth (i.e., compression), ease of decoding (i.e., playback performance), memory consumption, compression-to-decompression asymmetry, scalability, and delay. We discuss the tradeoffs among these variables and the compromises that can be made to achieve low numerical complexity for software-only playback. Frame-differencing approaches are described since software-only video codecs typically employ them to enhance playback performance. To complement other papers that appear in this session of the Proceedings, we review methods derived from binary pattern image coding since these methods are amenable to software-only playback. In particular, we introduce a novel approach called pixel distribution image coding.
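
The frame-differencing idea mentioned above can be sketched simply: compare each frame to its predecessor block by block, and decode/redraw only the blocks that changed, so the player touches few pixels per frame. This is a generic illustration (block size and threshold are assumptions), not the paper's specific codec.

```python
import numpy as np

def changed_blocks(prev: np.ndarray, curr: np.ndarray,
                   block: int = 8, threshold: float = 10.0):
    """Return the (row, col) origins of blocks whose mean absolute
    difference from the previous frame exceeds the threshold; only
    these blocks need to be re-encoded and re-displayed."""
    h, w = curr.shape
    changed = []
    for y in range(0, h, block):
        for x in range(0, w, block):
            a = prev[y:y + block, x:x + block].astype(np.int32)
            b = curr[y:y + block, x:x + block].astype(np.int32)
            if np.abs(b - a).mean() > threshold:
                changed.append((y, x))
    return changed
```
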

  9. An Evaluation of HigherEd 2.0 Technologies in Undergraduate Mechanical Engineering Courses

    ERIC Educational Resources Information Center

    Orange, Amy; Heinecke, Walter; Berger, Edward; Krousgrill, Charles; Mikic, Borjana; Quinn, Dane

    2012-01-01

    Between 2006 and 2010, sophomore engineering students at four universities were exposed to technologies designed to increase their learning in undergraduate engineering courses. Our findings suggest that students at all sites found the technologies integrated into their courses useful to their learning. Video solutions received the most positive…

  10. Engineering Students' Conceptions of Entrepreneurial Learning as Part of Their Education

    ERIC Educational Resources Information Center

    Täks, Marge; Tynjälä, Päivi; Kukemelk, Hasso

    2016-01-01

    The purpose of this study was to examine what kinds of conceptions of entrepreneurial learning engineering students expressed in an entrepreneurship course integrated in their study programme. The data were collected during an entrepreneurship course in Estonia that was organised for fourth-year engineering students, using video-recorded group…

  11. Use of the Colorado SURGE System for Continuing Education for Civil Engineers.

    ERIC Educational Resources Information Center

    Fead, J. W. N.

    The Colorado State University Resources in Graduate Education (SURGE) program is described in this report. Since it is expected that not all the participants in a graduate engineering program will be able to attend university-based lectures, presentations are video-taped and transported to industrial plants, engineering offices, and other…

  12. Next generation phage display by use of pVII and pIX as display scaffolds.

    PubMed

    Løset, Geir Åge; Sandlie, Inger

    2012-09-01

    Phage display technology has evolved to become an extremely versatile and powerful platform for protein engineering. The robustness of the phage particle, its ease of handling, and its ability to tolerate a range of different capsid fusions are key features that explain the dominance of phage display in combinatorial engineering. Implementation of new technology is likely to ensure the continuation of its success, but has also revealed important shortcomings inherent to current phage display systems. This relates in particular to the biology of the two most popular display capsids, namely pIII and pVIII. Recent findings using two alternative capsids, pVII and pIX, located at the phage tip opposite that of pIII, suggest how they may be exploited to alleviate or circumvent many of these shortcomings. This review addresses important aspects of the current phage display standard and then discusses the use of pVII and pIX. These may both complement current systems and be used as alternative scaffolds for display and selection to further improve phage display as the ultimate combinatorial engineering platform. Copyright © 2012 Elsevier Inc. All rights reserved.

  13. Engineering dihydropteroate synthase (DHPS) for efficient expression on M13 phage.

    PubMed

    Brockmann, Eeva-Christine; Lamminmäki, Urpo; Saviranta, Petri

    2005-06-20

    Phage display is a commonly used selection technique in protein engineering, but not all proteins can be expressed on phage. Here, we describe the expression of a cytoplasmic homodimeric enzyme, dihydropteroate synthase (DHPS), on M13 phage, established by protein engineering of DHPS. The strategy included replacement of cysteine residues and screening for periplasmic expression, followed by random mutagenesis and phage display selection with a conformation-specific anti-DHPS antibody. Cysteine replacement alone resulted in a 12-fold improvement in phage display of DHPS, but after random mutagenesis and three rounds of phage display selection, the phage display efficiency of the library had improved 280-fold. Most of the selected clones had a common Asp96Asn mutation that was largely responsible for the efficient phage display of DHPS. Asp96Asn acted synergistically with the cysteine-replacing mutations that were needed to remove the denaturing effect of potential wrong disulfide bridging in phage display. Asp96Asn alone resulted in a 1.8-fold improvement in phage display efficiency, but in combination with the cysteine-replacing mutations, a total of 130-fold improvement in the phage display efficiency of DHPS was achieved.

  14. Display of travelling 3D scenes from single integral-imaging capture

    NASA Astrophysics Data System (ADS)

    Martinez-Corral, Manuel; Dorado, Adrian; Hong, Seok-Min; Sola-Pikabea, Jorge; Saavedra, Genaro

    2016-06-01

    Integral imaging (InI) is a 3D auto-stereoscopic technique that captures and displays 3D images. We present a method for easily projecting the information recorded with this technique by transforming the integral image into a plenoptic image, as well as choosing, at will, the field of view (FOV) and the focused plane of the displayed plenoptic image. Furthermore, with this method we can generate, from a single integral image, a sequence of images that simulates a camera travelling through the scene. Applying this method improves the quality of 3D display images and videos.
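
A common building block behind such transformations can be sketched as follows: an integral image is a grid of k-by-k-pixel elemental images, and extracting the same pixel offset (u, v) from every elemental image yields one perspective view; sweeping (u, v) across a sequence simulates a travelling camera. This is a generic sub-view extraction sketch, not the authors' specific plenoptic transform.

```python
import numpy as np

def sub_view(integral_img: np.ndarray, k: int, u: int, v: int) -> np.ndarray:
    """Extract the perspective view at offset (u, v), 0 <= u, v < k,
    by striding over the grid of k-by-k elemental images."""
    return integral_img[u::k, v::k]
```

Varying `u` and `v` frame by frame yields the camera-travelling sequence mentioned in the abstract.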

  15. The USL NASA PC R and D interactive presentation development system

    NASA Technical Reports Server (NTRS)

    Dominick, Wayne D. (Editor); Moreau, Dennis R.

    1984-01-01

    The Interactive Presentation Development System (IPDS) is a highly interactive system for creating, editing, and displaying video presentation sequences, e.g., for developing and presenting displays of instructional material similar to overhead transparency or slide presentations. Because the system is PC-based, however, users (instructors) can step through sequences forward or backward, focusing attention on areas of the display with special cursor pointers. Additionally, screen displays may be dynamically modified during the presentation to show assignments or to answer questions, much like a traditional blackboard. The system is now implemented at the University of Southwestern Louisiana for use within the piloting phases of the NASA contract work.

  16. AOIPS water resources data management system

    NASA Technical Reports Server (NTRS)

    Vanwie, P.

    1977-01-01

    The text and computer-generated displays used to demonstrate the AOIPS (Atmospheric and Oceanographic Information Processing System) water resources data management system are described. The system was developed to assist hydrologists in analyzing the physical processes occurring in watersheds. It was designed to alleviate some of the problems encountered while investigating the complex interrelationships of variables such as land-cover type, topography, precipitation, snowmelt, surface runoff, evapotranspiration, and streamflow rates. The system has an interactive image processing capability and a color video display for presenting results as they are obtained.

  17. Interactive display system having a scaled virtual target zone

    DOEpatents

    Veligdan, James T.; DeSanto, Leonard

    2006-06-13

    A display system includes a waveguide optical panel having an inlet face and an opposite outlet face. A projector and imaging device cooperate with the panel for projecting a video image thereon. An optical detector bridges at least a portion of the waveguides for detecting a location on the outlet face within a target zone of an inbound light spot. A controller is operatively coupled to the imaging device and detector for displaying a cursor on the outlet face corresponding with the detected location of the spot within the target zone.
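
The controller's final step, mapping a spot detected inside the scaled target zone to a cursor position on the full outlet face, amounts to a linear rescaling. This is a hypothetical sketch; the names and the zone geometry are assumptions, not taken from the patent.

```python
from typing import Tuple

def cursor_position(spot_xy: Tuple[float, float],
                    zone_origin: Tuple[float, float],
                    zone_size: Tuple[float, float],
                    screen_size: Tuple[float, float]) -> Tuple[float, float]:
    """Scale a spot detected within the target zone to full-screen
    cursor coordinates: normalize within the zone, then expand."""
    sx = (spot_xy[0] - zone_origin[0]) / zone_size[0] * screen_size[0]
    sy = (spot_xy[1] - zone_origin[1]) / zone_size[1] * screen_size[1]
    return (sx, sy)
```
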

  18. Enhanced Eddy-Current Detection Of Weld Flaws

    NASA Technical Reports Server (NTRS)

    Van Wyk, Lisa M.; Willenberg, James D.

    1992-01-01

    Mixing of impedances measured at different frequencies reduces noise and helps reveal flaws. In the new method, the eddy-current probe is excited simultaneously at two different frequencies, usually with one an integral multiple of the other. The resistive and reactive components of the probe impedance are measured at the two frequencies, mixed in a computer, and displayed in real time on the computer's video terminal. Mixing measurements obtained at two different frequencies often "cleans up" the displayed signal in situations where band-pass filtering alone cannot: mixing removes most of the noise, and the displayed signal resolves flaws well.
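
The mixing step can be sketched as a linear combination whose coefficient is chosen, during calibration, to cancel the unwanted signature (e.g., lift-off noise) common to both frequencies, leaving flaw responses visible. This is a minimal one-coefficient least-squares sketch under assumed names, not the specific NASA implementation.

```python
import numpy as np

def mix_coeff(noise_f1: np.ndarray, noise_f2: np.ndarray) -> float:
    """Least-squares coefficient c such that noise_f1 - c * noise_f2
    is minimized, computed from flaw-free calibration traces."""
    return float(np.dot(noise_f1, noise_f2) / np.dot(noise_f2, noise_f2))

def mixed_signal(sig_f1: np.ndarray, sig_f2: np.ndarray, c: float) -> np.ndarray:
    # Apply the same mix to inspection data: the common noise cancels,
    # while flaw responses (which differ between frequencies) remain.
    return sig_f1 - c * sig_f2
```
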

  19. Debunking a Video on YouTube as an Authentic Research Experience

    NASA Astrophysics Data System (ADS)

    Davidowsky, Philip; Rogers, Michael

    2015-05-01

    Students are exposed to a variety of unrealistic physical experiences seen in movies, video games, and short online videos. A popular classroom activity has students examine footage to identify what aspects of physics are correctly and incorrectly represented.1-7 Some of the physical phenomena pictured might be tricks or illusions made easier to perform with the use of video, while others are removed from their historical context, leaving the audience to form misguided conclusions about what they saw with only the information in the video. One such video in which the late Eric Laithwaite, a successful British engineer and inventor, claims that a spinning wheel "becomes light as a feather" provides an opportunity for students to investigate Laithwaite's claim.8 The use of video footage can engage students in learning physics9 but also provide an opportunity for authentic research experiences.

  20. Effect of protein properties on display efficiency using the M13 phage display system.

    PubMed

    Imai, S; Mukai, Y; Takeda, T; Abe, Y; Nagano, K; Kamada, H; Nakagawa, S; Tsunoda, S; Tsutsumi, Y

    2008-10-01

    The M13 phage display system is a powerful technology for engineering proteins such as functional mutant proteins and peptides. In this system, it is necessary that the protein is displayed on the phage surface. Therefore, its application is often limited when a protein is poorly displayed. In this study, we attempted to understand the relationship between a protein's properties and its display efficiency using the well-known pIII and pVIII type phage display system. The display of the positively charged SV40 NLS and HIV-1 Tat peptides on pIII was less efficient than that of the neutrally charged RGDS peptide. When proteins of different molecular weights (1.5-58 kDa) were displayed on pIII and pVIII, their display efficiencies were directly influenced by their molecular weights. These results indicate the usefulness of predicting a desired protein's compatibility with protein and peptide engineering using the phage display system.
