ERIC Educational Resources Information Center
Bing, Jon
1982-01-01
The rapid evolution of today's video games now fills arcades, snack bars, and homes with an array of highly interactive, graphically vivid technical devices. This electronic environment is creating a worldwide communication network. Developments in this area will be beneficial provided that appropriate media policies can be framed. (Author/JN)
Low-SWaP coincidence processing for Geiger-mode LIDAR video
NASA Astrophysics Data System (ADS)
Schultz, Steven E.; Cervino, Noel P.; Kurtz, Zachary D.; Brown, Myron Z.
2015-05-01
Photon-counting Geiger-mode lidar detector arrays provide a promising approach for producing three-dimensional (3D) video at full motion video (FMV) data rates, resolution, and image size from long ranges. However, the coincidence processing required to filter raw photon counts is computationally expensive, generally requiring significant size, weight, and power (SWaP) as well as processing time. In this paper, we describe a laboratory test-bed developed to assess the feasibility of low-SWaP, real-time processing for 3D FMV based on Geiger-mode lidar. First, we examine a design based on field programmable gate arrays (FPGA) and demonstrate proof-of-concept results. Then we examine a design based on a first-of-its-kind embedded graphical processing unit (GPU) and compare performance with the FPGA. Results indicate feasibility of real-time Geiger-mode lidar processing for 3D FMV and also suggest utility for real-time onboard processing for mapping lidar systems.
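The abstract does not detail the coincidence filter itself; as a hedged illustration of the general idea (an assumption, not the authors' algorithm), the sketch below keeps only those range bins whose local spatio-temporal neighborhood contains enough photon detections, which is the basic thresholding notion behind coincidence processing of Geiger-mode returns.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def coincidence_filter(counts, window=3, min_hits=4):
    """Keep voxels whose local neighborhood holds enough photon detections.

    counts   : 3-D array (x, y, range bin) of raw Geiger-mode photon counts
    window   : size of the cubic neighborhood used for the coincidence test
    min_hits : minimum detections in the neighborhood to accept a return
    """
    # uniform_filter returns the local mean; multiply by the window volume to get the sum.
    neighborhood_sum = uniform_filter(counts.astype(float), size=window) * window**3
    return np.where(neighborhood_sum >= min_hits, counts, 0)

# Toy example: sparse random dark counts plus a small cluster of "real" surface returns.
rng = np.random.default_rng(0)
frame = (rng.random((64, 64, 128)) > 0.999).astype(int)
frame[30:33, 30:33, 60] += 1            # injected surface return
filtered = coincidence_filter(frame)
print(int((frame > 0).sum()), "raw detection voxels ->", int((filtered > 0).sum()), "after filtering")
```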
Stereo and IMU-Assisted Visual Odometry for Small Robots
NASA Technical Reports Server (NTRS)
2012-01-01
This software performs two functions: (1) taking stereo image pairs as input, it computes stereo disparity maps from them by cross-correlation to achieve 3D (three-dimensional) perception; (2) taking a sequence of stereo image pairs as input, it tracks features in the image sequence to estimate the motion of the cameras between successive image pairs. A real-time stereo vision system with IMU (inertial measurement unit)-assisted visual odometry was implemented on a single 750 MHz/520 MHz OMAP3530 SoC (system on chip) from TI (Texas Instruments). Frame rates of 46 fps (frames per second) were achieved at QVGA (Quarter Video Graphics Array, i.e., 320 × 240), or 8 fps at VGA (Video Graphics Array, 640 × 480) resolutions, while simultaneously tracking up to 200 features, taking full advantage of the OMAP3530's integer DSP (digital signal processor) and floating point ARM processors. This is a substantial advancement over previous work as the stereo implementation produces 146 Mde/s (millions of disparities evaluated per second) in 2.5 W, yielding a stereo energy efficiency of 58.8 Mde/J, which is 3.75× better than prior DSP stereo while providing more functionality.
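For orientation, the sketch below shows a brute-force sum-of-absolute-differences disparity search along the horizontal epipolar line. It is an assumption about the general cross-correlation-style matching described above, not the optimized OMAP3530 implementation.

```python
import numpy as np

def disparity_map(left, right, block=8, max_disp=32):
    """Block matching: for each block in the left image, find the horizontal
    shift in the right image that minimizes the SAD cost."""
    h, w = left.shape
    disp = np.zeros((h // block, w // block), dtype=int)
    for by in range(h // block):
        for bx in range(w // block):
            y, x = by * block, bx * block
            patch = left[y:y + block, x:x + block].astype(int)
            best, best_d = None, 0
            for d in range(min(max_disp, x) + 1):
                cand = right[y:y + block, x - d:x - d + block].astype(int)
                cost = np.abs(patch - cand).sum()
                if best is None or cost < best:
                    best, best_d = cost, d
            disp[by, bx] = best_d
    return disp

left = np.random.randint(0, 256, (240, 320), dtype=np.uint8)
right = np.roll(left, -4, axis=1)        # synthetic scene with 4-pixel disparity
print(np.median(disparity_map(left, right)))   # ~4
```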
NASA Technical Reports Server (NTRS)
Jedlovec, Gary; Srikishen, Jayanthi; Edwards, Rita; Cross, David; Welch, Jon; Smith, Matt
2013-01-01
The use of collaborative scientific visualization systems for the analysis, visualization, and sharing of "big data" available from new high resolution remote sensing satellite sensors or four-dimensional numerical model simulations is propelling the wider adoption of ultra-resolution tiled display walls interconnected by high speed networks. These systems require a globally connected and well-integrated operating environment that provides persistent visualization and collaboration services. This abstract and subsequent presentation describes a new collaborative visualization system installed for NASA's Short-term Prediction Research and Transition (SPoRT) program at Marshall Space Flight Center and its use for Earth science applications. The system consists of a 3 x 4 array of 1920 x 1080 pixel thin bezel video monitors mounted on a wall in a scientific collaboration lab. The monitors are physically and virtually integrated into a 14' x 7' video display. The display of scientific data on the video wall is controlled by a single Alienware Aurora PC with a 2nd Generation Intel Core 4.1 GHz processor, 32 GB memory, and an AMD FirePro W600 video card with 6 mini DisplayPort connections. Six mini DisplayPort-to-dual-DVI cables are used to connect the 12 individual video monitors. The open source Scalable Adaptive Graphics Environment (SAGE) windowing and media control framework, running on top of the Ubuntu 12 Linux operating system, allows several users to simultaneously control the display and storage of high resolution still and moving graphics in a variety of formats, on tiled display walls of any size. SAGE provides a common environment, or framework, enabling its users to access, display, and share a variety of data-intensive information, including digital-cinema animations, high-resolution images, high-definition video-teleconferences, presentation slides, documents, spreadsheets, and laptop screens. It is cross-platform, community-driven, open-source visualization and collaboration middleware that utilizes shared national and international cyberinfrastructure for the advancement of scientific research and education.
NASA Astrophysics Data System (ADS)
Jedlovec, G.; Srikishen, J.; Edwards, R.; Cross, D.; Welch, J. D.; Smith, M. R.
2013-12-01
The use of collaborative scientific visualization systems for the analysis, visualization, and sharing of 'big data' available from new high resolution remote sensing satellite sensors or four-dimensional numerical model simulations is propelling the wider adoption of ultra-resolution tiled display walls interconnected by high speed networks. These systems require a globally connected and well-integrated operating environment that provides persistent visualization and collaboration services. This abstract and subsequent presentation describes a new collaborative visualization system installed for NASA's Short-term Prediction Research and Transition (SPoRT) program at Marshall Space Flight Center and its use for Earth science applications. The system consists of a 3 x 4 array of 1920 x 1080 pixel thin bezel video monitors mounted on a wall in a scientific collaboration lab. The monitors are physically and virtually integrated into a 14' x 7' video display. The display of scientific data on the video wall is controlled by a single Alienware Aurora PC with a 2nd Generation Intel Core 4.1 GHz processor, 32 GB memory, and an AMD FirePro W600 video card with 6 mini DisplayPort connections. Six mini DisplayPort-to-dual-DVI cables are used to connect the 12 individual video monitors. The open source Scalable Adaptive Graphics Environment (SAGE) windowing and media control framework, running on top of the Ubuntu 12 Linux operating system, allows several users to simultaneously control the display and storage of high resolution still and moving graphics in a variety of formats, on tiled display walls of any size. SAGE provides a common environment, or framework, enabling its users to access, display, and share a variety of data-intensive information, including digital-cinema animations, high-resolution images, high-definition video-teleconferences, presentation slides, documents, spreadsheets, and laptop screens. It is cross-platform, community-driven, open-source visualization and collaboration middleware that utilizes shared national and international cyberinfrastructure for the advancement of scientific research and education.
17 CFR 232.304 - Graphic, image, audio and video material.
Code of Federal Regulations, 2011 CFR
2011-04-01
... video material. 232.304 Section 232.304 Commodity and Securities Exchanges SECURITIES AND EXCHANGE... Submissions § 232.304 Graphic, image, audio and video material. (a) If a filer includes graphic, image, audio or video material in a document delivered to investors and others that is not reproduced in an...
17 CFR 232.304 - Graphic, image, audio and video material.
Code of Federal Regulations, 2012 CFR
2012-04-01
... video material. 232.304 Section 232.304 Commodity and Securities Exchanges SECURITIES AND EXCHANGE... Submissions § 232.304 Graphic, image, audio and video material. (a) If a filer includes graphic, image, audio or video material in a document delivered to investors and others that is not reproduced in an...
17 CFR 232.304 - Graphic, image, audio and video material.
Code of Federal Regulations, 2013 CFR
2013-04-01
... video material. 232.304 Section 232.304 Commodity and Securities Exchanges SECURITIES AND EXCHANGE... Submissions § 232.304 Graphic, image, audio and video material. (a) If a filer includes graphic, image, audio or video material in a document delivered to investors and others that is not reproduced in an...
17 CFR 232.304 - Graphic, image, audio and video material.
Code of Federal Regulations, 2010 CFR
2010-04-01
... video material. 232.304 Section 232.304 Commodity and Securities Exchanges SECURITIES AND EXCHANGE... Submissions § 232.304 Graphic, image, audio and video material. (a) If a filer includes graphic, image, audio or video material in a document delivered to investors and others that is not reproduced in an...
17 CFR 232.304 - Graphic, image, audio and video material.
Code of Federal Regulations, 2014 CFR
2014-04-01
... video material. 232.304 Section 232.304 Commodity and Securities Exchanges SECURITIES AND EXCHANGE... Submissions § 232.304 Graphic, image, audio and video material. (a) If a filer includes graphic, image, audio or video material in a document delivered to investors and others that is not reproduced in an...
Adaptive correlation filter-based video stabilization without accumulative global motion estimation
NASA Astrophysics Data System (ADS)
Koh, Eunjin; Lee, Chanyong; Jeong, Dong Gil
2014-12-01
We present a digital video stabilization approach that provides both robustness and efficiency for practical applications. In this approach, we adopt a stabilization model that efficiently maintains spatio-temporal information from past input frames and can track the original stabilization position. Because of this model, the proposed method does not need accumulative global motion estimation and can recover the original position even if interframe motion estimation fails. It can also intelligently handle damaged or interrupted video sequences. Moreover, because the method is simple and well suited to parallel implementation, we implemented it with little effort on a commercial field programmable gate array and on a graphics processing unit board using the compute unified device architecture. Experimental results show that the proposed approach is both fast and robust.
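The paper's adaptive correlation filter is not reproduced here; as a hedged illustration of correlation-based interframe shift estimation, the sketch below uses plain FFT phase correlation to find the translation between two frames, which a stabilizer could then compensate.

```python
import numpy as np

def phase_correlation_shift(prev, curr):
    """Estimate the integer (dy, dx) translation that maps prev onto curr."""
    F1, F2 = np.fft.fft2(prev), np.fft.fft2(curr)
    cross_power = F2 * np.conj(F1)
    cross_power /= np.abs(cross_power) + 1e-12
    corr = np.fft.ifft2(cross_power).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map wrapped peak indices to signed shifts.
    if dy > prev.shape[0] // 2: dy -= prev.shape[0]
    if dx > prev.shape[1] // 2: dx -= prev.shape[1]
    return dy, dx

a = np.random.rand(128, 128)
b = np.roll(a, (3, -5), axis=(0, 1))     # simulated camera jitter: down 3, left 5
print(phase_correlation_shift(a, b))     # (3, -5)
```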
Probabilistic Methods for Image Generation and Encoding.
1993-10-15
video and graphics lab at Georgia Tech, linking together Silicon Graphics workstations, a laser video recorder, a Betacam video recorder, scanner...computer laboratory at Georgia Tech, based on two Silicon Graphics Personal Iris workstations, a SONY laser video recorder, a SONY Betacam SP video...laser disk in component RGB form, with variable speed playback. From the laser recorder the images can be dubbed to the Betacam or the VHS recorder in
New Integrated Video and Graphics Technology: Digital Video Interactive.
ERIC Educational Resources Information Center
Optical Information Systems, 1987
1987-01-01
Describes digital video interactive (DVI), a new technology which combines the interactivity of the graphics capabilities in personal computers with the realism of high-quality motion video and multitrack audio in an all-digital integrated system. (MES)
Comparison of ISS Power System Telemetry with Analytically Derived Data for Shadowed Cases
NASA Technical Reports Server (NTRS)
Fincannon, H. James
2002-01-01
Accurate International Space Station (ISS) power prediction requires the quantification of solar array shadowing. Prior papers have discussed the NASA Glenn Research Center (GRC) ISS power system tool SPACE (System Power Analysis for Capability Evaluation) and its integrated shadowing algorithms. On-orbit telemetry has become available that permits the correlation of theoretical shadowing predictions with actual data. This paper documents the comparison of a shadowing metric (total solar array current) as derived from SPACE predictions and on-orbit flight telemetry data for representative significant shadowing cases. Images from flight video recordings and the SPACE computer program graphical output are used to illustrate the comparison. The accuracy of the SPACE shadowing capability is demonstrated for the cases examined.
Interactive Video Courseware for Graphic Communications Teachers and Students.
ERIC Educational Resources Information Center
Sanders, Mark
1985-01-01
At Virginia Polytechnic Institute and State University, interactive video serves both as an instructional tool and a project for creative students in graphic communications. The package facilitates courseware development and teaches students simultaneously about microcomputer and video technology. (SK)
Automated fall detection on privacy-enhanced video.
Edgcomb, Alex; Vahid, Frank
2012-01-01
A privacy-enhanced video obscures the appearance of a person in the video. We consider four privacy enhancements: blurring of the person, silhouetting of the person, covering the person with a graphical box, and covering the person with a graphical oval. We demonstrate that an automated video-based fall detection algorithm can be as accurate on privacy-enhanced video as on raw video. The algorithm operated on video from a stationary in-home camera, using a foreground-background segmentation algorithm to extract a minimum bounding rectangle (MBR) around the motion in the video, and using time series shapelet analysis on the height and width of the rectangle to detect falls. We report accuracy applying fall detection on 23 scenarios depicted as raw video and privacy-enhanced videos involving a sole actor portraying normal activities and various falls. We found that fall detection on privacy-enhanced video, except for the common approach of blurring of the person, was competitive with raw video, and in particular that the graphical oval privacy enhancement yielded the same accuracy as raw video, namely 0.91 sensitivity and 0.92 specificity.
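A small sketch of the bounding-rectangle idea described above: given a binary foreground mask, it extracts the minimum bounding rectangle and a simple width/height ratio that rises sharply when a person transitions from upright to lying. The shapelet classification step itself is omitted, and the threshold idea is an assumption for illustration.

```python
import numpy as np

def minimum_bounding_rectangle(mask):
    """Return (top, left, height, width) of the foreground pixels in a binary mask."""
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None
    top, left = ys.min(), xs.min()
    return top, left, ys.max() - top + 1, xs.max() - left + 1

def aspect_series(masks):
    """Width/height ratio per frame; a jump well above 1 is a crude fall cue."""
    out = []
    for m in masks:
        mbr = minimum_bounding_rectangle(m)
        out.append(np.nan if mbr is None else mbr[3] / mbr[2])
    return np.array(out)

# Two synthetic frames: an upright person, then the same person lying down.
standing = np.zeros((120, 160), dtype=bool); standing[20:100, 70:90] = True
lying    = np.zeros((120, 160), dtype=bool); lying[80:100, 40:120] = True
print(aspect_series([standing, lying]))   # [0.25, 4.0]
```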
[Development of a system for ultrasonic three-dimensional reconstruction of fetus].
Baba, K
1989-04-01
We have developed a system for ultrasonic three-dimensional (3-D) fetus reconstruction using computers. Either a real-time linear array probe or a convex array probe of an ultrasonic scanner was mounted on the position sensor arm of a manual compound scanner in order to detect the position of the probe. A microcomputer was used to convert the position information into an image that could be recorded on video tape. This image was simultaneously superimposed on the ultrasonic tomographic image with a superimposer and recorded on video tape. Fetuses in utero were scanned in seven cases. More than forty ultrasonic section images on the video tape were fed into a minicomputer. The shape of the fetus was displayed three-dimensionally by means of computer graphics. The computer-generated display produced a 3-D image of the fetus and showed the usefulness and accuracy of this system. Since it took only a few seconds for data collection by ultrasonic inspection, fetal movement did not adversely affect the results. Data input took about ten minutes for 40 slices, and 3-D reconstruction and display took about two minutes. The system made it possible to observe and record the 3-D image of the fetus in utero non-invasively and is therefore expected to make it much easier to obtain a 3-D picture of the fetus in utero.
Engineering visualization utilizing advanced animation
NASA Technical Reports Server (NTRS)
Sabionski, Gunter R.; Robinson, Thomas L., Jr.
1989-01-01
Engineering visualization is the use of computer graphics to depict engineering analysis and simulation in visual form from project planning through documentation. Graphics displays let engineers see data represented dynamically which permits the quick evaluation of results. The current state of graphics hardware and software generally allows the creation of two types of 3D graphics. The use of animated video as an engineering visualization tool is presented. The engineering, animation, and videography aspects of animated video production are each discussed. Specific issues include the integration of staffing expertise, hardware, software, and the various production processes. A detailed explanation of the animation process reveals the capabilities of this unique engineering visualization method. Automation of animation and video production processes are covered and future directions are proposed.
Authoritative Authoring: Software That Makes Multimedia Happen.
ERIC Educational Resources Information Center
Florio, Chris; Murie, Michael
1996-01-01
Compares seven mid- to high-end multimedia authoring software systems that combine graphics, sound, animation, video, and text for Windows and Macintosh platforms. A run-time project was created with each program using video, animation, graphics, sound, formatted text, hypertext, and buttons. (LRW)
Purohit, Bharathi M; Singh, Abhinav; Dwivedi, Ashish
2017-03-01
The study aims to assess the reliability of a video-graphic method as a tool to screen for dental caries among 12-year-old school children in a rural region of India. A total of 139 school children participated in the study. Visual-tactile examinations were conducted using the Decayed, Missing, and Filled Teeth (DMFT) index. Simultaneously, standardized video recording of the oral cavity was performed. Sensitivity and specificity values were calculated for video-graphic assessment of dental caries. A Bland-Altman plot was used to assess agreement between the two methods of caries assessment. The likelihood ratio (LR) and receiver-operating characteristic (ROC) curve were used to assess the predictive accuracy of the video-graphic method. Mean DMFT for the study population was 2.47 ± 2.01 by visual-tactile assessment and 2.46 ± 1.91 by video-graphic assessment (P = 0.76 > 0.05). Sensitivity and specificity values of 0.86 and 0.58 were established for video-graphic assessment. A fair degree of agreement was noted between the two methods, with an intraclass correlation coefficient (ICC) of 0.56. The LR for video-graphic assessment was 2.05. The Bland-Altman plot confirmed the level of agreement between the two assessment methods. The area under the curve was 0.69 (CI 0.57, 0.80, P = 0.001). Teledentistry examination is comparable to clinical examination when screening for dental caries among school children. This study provides evidence that teledentistry may be used as an alternative screening tool for assessment of dental caries and is viable for remote consultation and treatment planning. Teledentistry offers to change the dynamics of dental care delivery and may effectively bridge the rural-urban oral health divide. © 2016 American Association of Public Health Dentistry.
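For readers unfamiliar with the screening metrics quoted above, the sketch below computes sensitivity, specificity, and the positive likelihood ratio from a 2×2 confusion matrix. The counts are illustrative assumptions, not the study's raw data.

```python
def screening_metrics(tp, fn, fp, tn):
    """Sensitivity, specificity and positive likelihood ratio from 2x2 counts."""
    sensitivity = tp / (tp + fn)          # carious teeth flagged by video
    specificity = tn / (tn + fp)          # sound teeth correctly passed
    lr_positive = sensitivity / (1 - specificity)
    return sensitivity, specificity, lr_positive

# Illustrative counts only (not taken from the paper).
sens, spec, lr = screening_metrics(tp=60, fn=10, fp=29, tn=40)
print(f"sensitivity={sens:.2f} specificity={spec:.2f} LR+={lr:.2f}")
```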
Pairwise graphical models for structural health monitoring with dense sensor arrays
NASA Astrophysics Data System (ADS)
Mohammadi Ghazi, Reza; Chen, Justin G.; Büyüköztürk, Oral
2017-09-01
Through advances in sensor technology and the development of camera-based measurement techniques, it has become affordable to obtain high spatial resolution data from structures. Although measured datasets become more informative as the number of sensors increases, the spatial dependencies between sensor data increase at the same time. Therefore, appropriate data analysis techniques are needed to handle the inference problem in the presence of these dependencies. In this paper, we propose a novel approach that uses graphical models (GM) to account for the spatial dependencies between sensor measurements in dense sensor networks or arrays, in order to improve damage localization accuracy in structural health monitoring (SHM) applications. Because there are always unobserved damaged states in this application, the available information is insufficient for learning the GMs. To overcome this challenge, we propose an approximated model that uses the mutual information between sensor measurements to learn the GMs. The study is backed by experimental validation of the method on two test structures. The first is a three-story, two-bay steel model structure instrumented with MEMS accelerometers. The second experimental setup consists of a plate structure and a video camera to measure the displacement field of the plate. Our results show that considering the spatial dependencies with the proposed algorithm can significantly improve damage localization accuracy.
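The abstract states that the graphical model is learned from mutual information between sensor measurements; a minimal histogram-based estimate of that quantity for one sensor pair is sketched below. This estimator is an assumption for illustration, since the paper's exact procedure is not given here.

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Histogram estimate of I(X;Y) in nats for two 1-D measurement series."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(1)
a = rng.standard_normal(5000)
b = 0.8 * a + 0.2 * rng.standard_normal(5000)   # strongly coupled "neighboring" sensor
c = rng.standard_normal(5000)                   # unrelated sensor
print(mutual_information(a, b), ">", mutual_information(a, c))
```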
Nuutinen, Mikko; Virtanen, Toni; Rummukainen, Olli; Häkkinen, Jukka
2016-03-01
This article presents VQone, a graphical experiment builder, written as a MATLAB toolbox, developed for image and video quality ratings. VQone contains the main elements needed for the subjective image and video quality rating process. This includes building and conducting experiments and data analysis. All functions can be controlled through graphical user interfaces. The experiment builder includes many standardized image and video quality rating methods. Moreover, it enables the creation of new methods or modified versions from standard methods. VQone is distributed free of charge under the terms of the GNU general public license and allows code modifications to be made so that the program's functions can be adjusted according to a user's requirements. VQone is available for download from the project page (http://www.helsinki.fi/psychology/groups/visualcognition/).
NASA Astrophysics Data System (ADS)
Le, Minh Tuan; Nguyen, Congdu; Yoon, Dae-Il; Jung, Eun Ku; Jia, Jie; Kim, Hae-Kwang
2007-12-01
In this paper, we propose a method for 3D-graphics-to-video encoding and streaming that is embedded into a remote interactive 3D visualization system for rapidly presenting a 3D scene on mobile devices without having to download it from the server. In particular, a 3D-graphics-to-video framework is presented that increases the visual quality of regions of interest (ROI) of the video by allocating more bits to the ROI during H.264 video encoding. The ROI are identified by projecting 3D objects onto a 2D plane during rasterization. The system allows users to navigate the 3D scene and interact with objects of interest to query their descriptions. We developed an adaptive media streaming server that can provide an adaptive video stream, in terms of object-based quality, to the client according to the user's preferences and the variation of network bandwidth. Results show that with ROI mode selection, the overall PSNR of the test sample changes only slightly while the visual quality of the objects of interest increases noticeably.
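The encoder internals are not described in the abstract; as a hedged sketch of ROI-weighted bit allocation, the code below turns a projected 2-D bounding box into a per-macroblock quantization-parameter offset map (negative offsets, i.e. more bits, inside the ROI). The box coordinates and offset value are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def roi_qp_offsets(frame_w, frame_h, roi_boxes, mb=16, roi_delta=-6):
    """Per-macroblock QP offsets: lower QP (more bits) for macroblocks that
    overlap any projected region of interest, 0 elsewhere."""
    cols, rows = frame_w // mb, frame_h // mb
    offsets = np.zeros((rows, cols), dtype=int)
    for x0, y0, x1, y1 in roi_boxes:                 # box in pixel coordinates
        c0, r0 = x0 // mb, y0 // mb
        c1, r1 = (x1 - 1) // mb + 1, (y1 - 1) // mb + 1
        offsets[r0:r1, c0:c1] = roi_delta
    return offsets

# Hypothetical ROI: one object projected to a 96x64 box near the frame center.
qp_map = roi_qp_offsets(320, 240, [(112, 88, 208, 152)])
print(qp_map.shape, int((qp_map < 0).sum()), "macroblocks boosted")
```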
The Generative Effects of Instructional Organizers with Computer-Based Interactive Video.
ERIC Educational Resources Information Center
Kenny, Richard F.
This study compared the use of three instructional organizers--the advance organizer (AO), the participatory pictorial graphic organizer (PGO), and the final form pictorial graphic organizer (FGO)--in the design and use of computer-based interactive video (CBIV) programs. That is, it attempted to determine whether a less generative or more…
Combining 3D structure of real video and synthetic objects
NASA Astrophysics Data System (ADS)
Kim, Man-Bae; Song, Mun-Sup; Kim, Do-Kyoon
1998-04-01
This paper presents a new approach to combining real video and synthetic objects. The purpose of this work is to use the proposed technology in fields such as advanced animation, virtual reality, and games. Computer graphics has long been used in these fields. Recently, some applications have added real video to graphic scenes to augment the realism that computer graphics lacks. This approach, called augmented or mixed reality, can produce a more realistic environment than the use of computer graphics alone. Our approach differs from virtual reality and augmented reality in that computer-generated graphic objects are combined with a 3D structure extracted from monocular image sequences. The extraction of the 3D structure requires the estimation of 3D depth followed by the construction of a height map. Graphic objects are then combined with the height map. The realization of our proposed approach is carried out in the following steps: (1) We derive a 3D structure from test image sequences; this requires the estimation of depth and the construction of a height map. Due to the contents of the test sequence, the height map represents the 3D structure. (2) The height map is modeled by Delaunay triangulation or a Bezier surface, and each planar surface is texture-mapped. (3) Finally, graphic objects are combined with the height map. Because the 3D structure of the height map is already known, Step (3) is easily accomplished. Following this procedure, we produced an animation video demonstrating the combination of the 3D structure and graphic models. Users can navigate the realistic 3D world whose associated image is rendered on the display monitor.
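Step (2) above triangulates the height map; a minimal sketch using SciPy's Delaunay triangulation over a sparse grid of height samples is shown below. The height map and sampling step are assumptions, and the paper's own meshing and Bezier-surface fitting are not reproduced.

```python
import numpy as np
from scipy.spatial import Delaunay

# Hypothetical height map (standing in for depth estimated from a monocular sequence).
h, w = 48, 64
yy, xx = np.mgrid[0:h, 0:w]
height = np.sin(xx / 10.0) * np.cos(yy / 12.0)

# Sample the map on a coarse grid and triangulate the 2-D sample locations;
# each vertex keeps its height so the triangles form a 3-D surface mesh.
step = 8
ys, xs = np.mgrid[0:h:step, 0:w:step]
points2d = np.column_stack([xs.ravel(), ys.ravel()])
tri = Delaunay(points2d)
vertices3d = np.column_stack([points2d, height[ys.ravel(), xs.ravel()]])
print(len(vertices3d), "vertices,", len(tri.simplices), "triangles")
```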
Design of a system based on DSP and FPGA for video recording and replaying
NASA Astrophysics Data System (ADS)
Kang, Yan; Wang, Heng
2013-08-01
This paper presents a video recording and replaying system built on a Digital Signal Processor (DSP) and Field Programmable Gate Array (FPGA) architecture. The system performs encoding, recording, decoding, and replaying of the Video Graphics Array (VGA) signals displayed on a monitor during aircraft and ship navigation. In this architecture, the DSP is the main processor, used for the large amount of complex computation involved in digital signal processing. The FPGA is a coprocessor for preprocessing video signals and implementing logic control in the system. In the hardware design, the Peripheral Device Transfer (PDT) function of the External Memory Interface (EMIF) is utilized to implement a seamless interface among the DSP, the synchronous dynamic RAM (SDRAM), and the First-In-First-Out (FIFO) buffer. This transfer mode avoids a data-transfer bottleneck and simplifies the circuitry between the DSP and its peripheral chips. The DSP's EMIF and two level-matching chips implement the Advanced Technology Attachment (ATA) protocol on the physical layer of the interface to an Integrated Drive Electronics (IDE) hard disk, which offers high-speed data access and does not rely on a computer. The main functions of the logic on the FPGA are described, and screenshots of the behavioral simulation are provided. In the DSP software design, Enhanced Direct Memory Access (EDMA) channels transfer data between the FIFO and the SDRAM without CPU intervention, preserving the CPU's computing performance and saving processing time. JPEG2000 is implemented to obtain high fidelity in video recording and replaying. Ways of achieving high-performance code are briefly presented. The data processing capability of the system meets requirements, and the smoothness of the replayed video is acceptable. Owing to its design flexibility and reliable operation, the DSP- and FPGA-based video recording and replaying system has considerable potential for post-event analysis, simulated training, and similar applications.
Choe, Sun; Lim, Rod Seung-Hwan; Clark, Karen; Wang, Regina; Branz, Patricia; Sadler, Georgia Robins
2009-01-01
Deaf women encounter barriers to accessing cancer information. In this study, we evaluated whether deaf women's knowledge could be increased by viewing a graphically enriched, American Sign Language (ASL) cervical cancer education video. A blind, randomized trial evaluated knowledge gain and retention. Deaf women (n = 130) completed questionnaires before, after, and 2 months after viewing the video. With only a single viewing of the in-depth video, the experimental group gained and retained significantly more cancer knowledge than the control group. Giving deaf women access to the ASL cervical cancer education video (http://cancer.ucsd.edu/deafinfo) significantly increased their knowledge of cervical cancer.
Fast image interpolation for motion estimation using graphics hardware
NASA Astrophysics Data System (ADS)
Kelly, Francis; Kokaram, Anil
2004-05-01
Motion estimation and compensation is the key to high quality video coding. Block matching motion estimation is used in most video codecs, including MPEG-2, MPEG-4, H.263 and H.26L. Motion estimation is also a key component in the digital restoration of archived video and for post-production and special effects in the movie industry. Sub-pixel accurate motion vectors can improve the quality of the vector field and lead to more efficient video coding. However, sub-pixel accuracy requires interpolation of the image data. Image interpolation is a key requirement of many image processing algorithms. Often interpolation can be a bottleneck in these applications, especially in motion estimation due to the large number of pixels involved. In this paper we propose using commodity computer graphics hardware for fast image interpolation. We use the full search block matching algorithm to illustrate the problems and limitations of using graphics hardware in this way.
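Since the bottleneck the paper targets is interpolating image data at sub-pixel motion-vector positions, the sketch below shows a plain bilinear sampler producing the half-pel values a block matcher would compare against. It is a CPU reference for the operation under assumed parameters, not the graphics-hardware version.

```python
import numpy as np

def bilinear_sample(image, ys, xs):
    """Sample a gray image at fractional (y, x) positions with bilinear weights."""
    y0 = np.clip(np.floor(ys).astype(int), 0, image.shape[0] - 2)
    x0 = np.clip(np.floor(xs).astype(int), 0, image.shape[1] - 2)
    y1, x1 = y0 + 1, x0 + 1
    wy, wx = ys - y0, xs - x0
    top = (1 - wx) * image[y0, x0] + wx * image[y0, x1]
    bottom = (1 - wx) * image[y1, x0] + wx * image[y1, x1]
    return (1 - wy) * top + wy * bottom

# Evaluate a candidate block at a half-pel motion vector (dy, dx) = (2.5, -1.5).
img = np.random.rand(64, 64)
by, bx, block = 20, 30, 8
yy, xx = np.mgrid[by:by + block, bx:bx + block].astype(float)
candidate = bilinear_sample(img, yy + 2.5, xx - 1.5)
print(candidate.shape)   # (8, 8) block interpolated at sub-pixel positions
```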
Choe, Sun; Lim, Rod Seung-Hwan; Clark, Karen; Wang, Regina; Branz, Patricia; Sadler, Georgia Robins
2013-01-01
Background: Deaf women encounter barriers to accessing cancer information. In this study, we evaluated whether deaf women's knowledge could be increased by viewing a graphically enriched, American Sign Language (ASL) cervical cancer education video. Methods: A blind, randomized trial evaluated knowledge gain and retention. Deaf women (n = 130) completed questionnaires before, after, and 2 months after viewing the video. Results: With only a single viewing of the in-depth video, the experimental group gained and retained significantly more cancer knowledge than the control group. Conclusions: Giving deaf women access to the ASL cervical cancer education video (http://cancer.ucsd.edu/deafinfo) significantly increased their knowledge of cervical cancer. PMID:19259859
ERIC Educational Resources Information Center
Casey, Carl
1992-01-01
Discussion of transactions in computer-based instruction for ill-structured and visual domains focuses on two transactions developed for meteorology training that provide the capability to interact with video and graphic images at a very detailed level. Potential applications for the transactions are suggested, and early evaluation reports are…
Graphic overlays in high-precision teleoperation: Current and future work at JPL
NASA Technical Reports Server (NTRS)
Diner, Daniel B.; Venema, Steven C.
1989-01-01
In space teleoperation additional problems arise, including signal transmission time delays. These can greatly reduce operator performance. Recent advances in graphics open new possibilities for addressing these and other problems. Currently a multi-camera system with normal 3-D TV and video graphics capabilities is being developed. Trained and untrained operators will be tested for high precision performance using two force reflecting hand controllers and a voice recognition system to control two robot arms and up to 5 movable stereo or non-stereo TV cameras. A number of new techniques of integrating TV and video graphics displays to improve operator training and performance in teleoperation and supervised automation are evaluated.
NASA Astrophysics Data System (ADS)
de Villiers, Jason P.; Bachoo, Asheer K.; Nicolls, Fred C.; le Roux, Francois P. J.
2011-05-01
Tracking targets in a panoramic image is in many senses the inverse of the problem of tracking targets with a narrow field of view camera on a pan-tilt pedestal. In a narrow field of view camera tracking a moving target, the object is constant and the background is changing. A panoramic camera is able to model the entire scene, or background; those areas it cannot model well are the potential targets, which typically subtend far fewer pixels in the panoramic view than in the narrow field of view. The outputs of an outward-staring array of calibrated machine vision cameras are stitched into a single omnidirectional panorama and used to observe False Bay near Simon's Town, South Africa. A ground-truth data set was created by geo-aligning the camera array and placing a differential global positioning system receiver on a small target boat, thus allowing its position in the array's field of view to be determined. Common tracking techniques including level sets, Kalman filters and particle filters were implemented to run on the central processing unit of the tracking computer. Image enhancement techniques including multi-scale tone mapping, interpolated local histogram equalisation and several sharpening techniques were implemented on the graphics processing unit. An objective measurement of each tracking algorithm's robustness in the presence of sea glint, low-contrast visibility and sea clutter such as whitecaps is performed on the raw recorded video data. These results are then compared to those obtained with the enhanced video data.
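Kalman filtering is one of the trackers compared; a minimal constant-velocity Kalman filter over 2-D image positions is sketched below as a reference for that baseline. The noise parameters are assumptions, and the paper's level-set and particle-filter variants are not shown.

```python
import numpy as np

class ConstantVelocityKalman:
    """Track (x, y, vx, vy) from noisy 2-D position measurements."""
    def __init__(self, dt=1.0, meas_var=4.0, accel_var=0.01):
        self.F = np.array([[1, 0, dt, 0], [0, 1, 0, dt],
                           [0, 0, 1, 0], [0, 0, 0, 1]], float)
        self.H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)
        self.Q = accel_var * np.eye(4)
        self.R = meas_var * np.eye(2)
        self.x = np.zeros(4)
        self.P = np.eye(4) * 100.0

    def step(self, z):
        # Predict.
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # Update with the measured pixel position z = (x, y).
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (np.asarray(z, float) - self.H @ self.x)
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[:2]

kf = ConstantVelocityKalman()
for t in range(20):
    truth = (100 + 2 * t, 50 + 1 * t)                    # boat drifting across the panorama
    noisy = (truth[0] + np.random.randn(), truth[1] + np.random.randn())
    est = kf.step(noisy)
print(np.round(est, 1), "vs true", truth)
```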
Multimedia Instruction Puts Teachers in the Director's Chair.
ERIC Educational Resources Information Center
Trotter, Andrew
1990-01-01
Teachers can produce and direct their own instructional videos using computer-driven multimedia. Outlines the basics in combining audio and video technologies to produce videotapes that mix animated and still graphics, sound, and full-motion video. (MLF)
NASA Astrophysics Data System (ADS)
Jackson, Christopher Robert
"Lucky-region" fusion (LRF) is a synthetic imaging technique that has proven successful in enhancing the quality of images distorted by atmospheric turbulence. The LRF algorithm selects sharp regions of an image obtained from a series of short exposure frames, and fuses the sharp regions into a final, improved image. In previous research, the LRF algorithm had been implemented on a PC using the C programming language. However, the PC did not have sufficient sequential processing power to handle real-time extraction, processing and reduction required when the LRF algorithm was applied to real-time video from fast, high-resolution image sensors. This thesis describes two hardware implementations of the LRF algorithm to achieve real-time image processing. The first was created with a VIRTEX-7 field programmable gate array (FPGA). The other developed using the graphics processing unit (GPU) of a NVIDIA GeForce GTX 690 video card. The novelty in the FPGA approach is the creation of a "black box" LRF video processing system with a general camera link input, a user controller interface, and a camera link video output. We also describe a custom hardware simulation environment we have built to test the FPGA LRF implementation. The advantage of the GPU approach is significantly improved development time, integration of image stabilization into the system, and comparable atmospheric turbulence mitigation.
Bandwidth characteristics of multimedia data traffic on a local area network
NASA Technical Reports Server (NTRS)
Chuang, Shery L.; Doubek, Sharon; Haines, Richard F.
1993-01-01
Limited spacecraft communication links call for users to investigate the potential use of video compression and multimedia technologies to optimize bandwidth allocations. The objective was to determine the transmission characteristics of multimedia data - motion video, text or bitmap graphics, and files transmitted independently and simultaneously over an ethernet local area network. Commercial desktop video teleconferencing hardware and software and Intel's proprietary Digital Video Interactive (DVI) video compression algorithm were used, and typical task scenarios were selected. The transmission time, packet size, number of packets, and network utilization of the data were recorded. Each data type - compressed motion video, text and/or bitmapped graphics, and a compressed image file - was first transmitted independently and its characteristics recorded. The results showed that an average bandwidth of 7.4 kilobits per second (kbps) was used to transmit graphics; an average bandwidth of 86.8 kbps was used to transmit an 18.9-kilobyte (kB) image file; a bandwidth of 728.9 kbps was used to transmit compressed motion video at 15 frames per second (fps); and a bandwidth of 75.9 kbps was used to transmit compressed motion video at 1.5 fps. Average packet sizes were 933 bytes for graphics, 498.5 bytes for the image file, 345.8 bytes for motion video at 15 fps, and 341.9 bytes for motion video at 1.5 fps. Simultaneous transmission of multimedia data types was also characterized. The multimedia packets used transmission bandwidths of 341.4 kbps and 105.8 kbps. Bandwidth utilization varied according to the frame rate (frames per second) setting for the transmission of motion video. Packet size did not vary significantly between the data types. When these characteristics are applied to Space Station Freedom (SSF), the packet sizes fall within the maximum specified by the Consultative Committee for Space Data Systems (CCSDS). The uplink of imagery to SSF may be performed at minimal frame rates and/or within seconds of delay, depending on the user's allocated bandwidth. Further research to identify the acceptable delay interval and its impact on human performance is required. Additional studies in network performance using various video compression algorithms and integrated multimedia techniques are needed to determine the optimal design approach for utilizing SSF's data communications system.
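The bandwidth figures above follow from packet counts, packet sizes, and transmission time; a small helper reproducing that arithmetic for hypothetical capture numbers is sketched below.

```python
def bandwidth_kbps(num_packets, avg_packet_bytes, seconds):
    """Average bandwidth in kilobits per second for a recorded transfer."""
    return num_packets * avg_packet_bytes * 8 / seconds / 1000.0

# Hypothetical capture: 10,000 packets averaging 345.8 bytes (the packet size
# reported for 15 fps compressed motion video) sent over 38 seconds.
print(round(bandwidth_kbps(10_000, 345.8, 38), 1), "kbps")   # ~728 kbps
```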
Digital video technology, today and tomorrow
NASA Astrophysics Data System (ADS)
Liberman, J.
1994-10-01
Digital video is probably computing's fastest moving technology today. Just three years ago, the zenith of digital video technology on the PC was the successful marriage of digital text and graphics with analog audio and video by means of expensive analog laser disc players and video overlay boards. The state of the art involves two different approaches to fully digital video on computers: hardware-assisted and software-only solutions.
NASA Astrophysics Data System (ADS)
Chen, Charlene; Abe, Katsumi; Fung, Tze-Ching; Kumomi, Hideya; Kanicki, Jerzy
2009-03-01
In this paper, we analyze the application of amorphous In-Ga-Zn-O thin film transistors (a-InGaZnO TFTs) to a current-scaling pixel electrode circuit that could be used for 3-in. quarter video graphics array (QVGA) full color active-matrix organic light-emitting displays (AM-OLEDs). Simulation results, based on a-InGaZnO TFT and OLED experimental data, show that both device sizes and operational voltages can be reduced when compared to the same circuit using hydrogenated amorphous silicon (a-Si:H) TFTs. Moreover, the a-InGaZnO TFT pixel circuit can compensate for the drive TFT threshold voltage variation (ΔVT) within an acceptable operating error range.
Real-time lens distortion correction: speed, accuracy and efficiency
NASA Astrophysics Data System (ADS)
Bax, Michael R.; Shahidi, Ramin
2014-11-01
Optical lens systems suffer from nonlinear geometrical distortion. Optical imaging applications such as image-enhanced endoscopy and image-based bronchoscope tracking require correction of this distortion for accurate localization, tracking, registration, and measurement of image features. Real-time capability is desirable for interactive systems and live video. The use of a texture-mapping graphics accelerator, which is standard hardware on current motherboard chipsets and add-in video graphics cards, to perform distortion correction is proposed. Mesh generation for image tessellation, an error analysis, and performance results are presented. It is shown that distortion correction using commodity graphics hardware is substantially faster than using the main processor and can be performed at video frame rates (faster than 30 frames per second), and that the polar-based method of mesh generation proposed here is more accurate than a conventional grid-based approach. Using graphics hardware to perform distortion correction is not only fast and accurate but also efficient as it frees the main processor for other tasks, which is an important issue in some real-time applications.
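A minimal sketch of the inverse-mapping idea behind mesh-based correction: for every undistorted output pixel, sample the captured (distorted) frame at the location predicted by a simple radial model, with bilinear interpolation standing in for the texture-mapping hardware. The distortion coefficient is an assumption, and the paper's polar mesh is replaced by a dense per-pixel map for brevity.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def undistort(image, k1=-2e-7):
    """Correct radial distortion modeled as x_d = x_u * (1 + k1 * r_u^2):
    for each undistorted output pixel, sample the distorted input image."""
    h, w = image.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    r2 = (yy - cy) ** 2 + (xx - cx) ** 2
    scale = 1.0 + k1 * r2
    src_y = cy + (yy - cy) * scale
    src_x = cx + (xx - cx) * scale
    return map_coordinates(image, [src_y, src_x], order=1, mode='nearest')

frame = np.random.rand(480, 640)        # stand-in for a distorted endoscopic frame
corrected = undistort(frame)
print(corrected.shape)
```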
X-window-based 2K display workstation
NASA Astrophysics Data System (ADS)
Weinberg, Wolfram S.; Hayrapetian, Alek S.; Cho, Paul S.; Valentino, Daniel J.; Taira, Ricky K.; Huang, H. K.
1991-07-01
A high-definition, high-performance display station for reading and review of digital radiological images is introduced. The station is based on a Sun SPARC Station 4 and employs X window system for display and manipulation of images. A mouse-operated graphic user interface is implemented utilizing Motif-style tools. The system supports up to four MegaScan gray-scale 2560 X 2048 monitors. A special configuration of frame and video buffer yields a data transfer of 50 M pixels/s. A magnetic disk array supplies a storage capacity of 2 GB with a data transfer rate of 4-6 MB/s. The system has access to the central archive through an ultrahigh-speed fiber-optic network and patient studies are automatically transferred to the local disk. The available image processing functions include change of lookup table, zoom and pan, and cine. Future enhancements will provide for manual contour tracing, length, area, and density measurements, text and graphic overlay, as well as composition of selected images. Additional preprocessing procedures under development will optimize the initial lookup table and adjust the images to a standard orientation.
Making Art Connections with Graphic Organizers
ERIC Educational Resources Information Center
Stephens, Pam; Hermus, Cindy
2007-01-01
Posters, slide shows, videos, diagrams, charts, written or illustrated class notes, daily logs, to do lists, and written instructions are all helpful modes of teaching for visual learners. Another form of instruction that is helpful for visual learners is the graphic organizers. Sometimes called "mind maps", graphic organizers are illustrative…
Augmenting reality in Direct View Optical (DVO) overlay applications
NASA Astrophysics Data System (ADS)
Hogan, Tim; Edwards, Tim
2014-06-01
The integration of overlay displays into rifle scopes can transform precision Direct View Optical (DVO) sights into intelligent interactive fire-control systems. Overlay displays can provide ballistic solutions within the sight for dramatically improved targeting, can fuse sensor video to extend targeting into nighttime or dirty battlefield conditions, and can overlay complex situational awareness information over the real-world scene. High brightness overlay solutions for dismounted soldier applications have previously been hindered by excessive power consumption, weight and bulk making them unsuitable for man-portable, battery powered applications. This paper describes the advancements and capabilities of a high brightness, ultra-low power text and graphics overlay display module developed specifically for integration into DVO weapon sight applications. Central to the overlay display module was the development of a new general purpose low power graphics controller and dual-path display driver electronics. The graphics controller interface is a simple 2-wire RS-232 serial interface compatible with existing weapon systems such as the IBEAM ballistic computer and the RULR and STORM laser rangefinders (LRF). The module features include multiple graphics layers, user configurable fonts and icons, and parameterized vector rendering, making it suitable for general purpose DVO overlay applications. The module is configured for graphics-only operation for daytime use and overlays graphics with video for nighttime applications. The miniature footprint and ultra-low power consumption of the module enables a new generation of intelligent DVO systems and has been implemented for resolutions from VGA to SXGA, in monochrome and color, and in graphics applications with and without sensor video.
This Rock 'n' Roll Video Teaches Math
ERIC Educational Resources Information Center
Niess, Margaret L.; Walker, Janet M.
2009-01-01
Mathematics is a discipline that has significantly advanced through the use of digital technologies with improved computational, graphical, and symbolic capabilities. Digital videos can be used to present challenging mathematical questions for students. Video clips offer instructional possibilities for moving students from a passive mode of…
This document provides graphical arrays and tables of key information on the derivation of human inhalation health effect reference values for specific chemicals, allowing comparisons across durations, populations, and intended use. A number of program offices within the Agency, ...
Data Visualization and Animation Lab (DVAL) overview
NASA Technical Reports Server (NTRS)
Stacy, Kathy; Vonofenheim, Bill
1994-01-01
The general capabilities of the Langley Research Center Data Visualization and Animation Laboratory are described. These capabilities include digital image processing, 3-D interactive computer graphics, data visualization and analysis, video-rate acquisition and processing of video images, photo-realistic modeling and animation, video report generation, and color hardcopies. A specialized video image processing system is also discussed.
Development and Assessment of a Chemistry-Based Computer Video Game as a Learning Tool
ERIC Educational Resources Information Center
Martinez-Hernandez, Kermin Joel
2010-01-01
The chemistry-based computer video game is a multidisciplinary collaboration between chemistry and computer graphics and technology fields developed to explore the use of video games as a possible learning tool. This innovative approach aims to integrate elements of commercial video game and authentic chemistry context environments into a learning…
Using Globe Browsing Systems in Planetariums to Take Audiences to Other Worlds.
NASA Astrophysics Data System (ADS)
Emmart, C. B.
2014-12-01
For the last decade planetariums have been adding the capability of "full dome video" systems for both movie playback and interactive display. True scientific data visualization has now come to planetarium audiences as a means to display the actual three-dimensional layout of the universe, the time-based array of planets, minor bodies, and spacecraft across the solar system, and now globe-browsing systems to examine planetary bodies at the limits of acquired resolution. Additionally, such planetarium facilities can be networked for simultaneous display across the world, extending reach to wider audiences and to authoritative scientist description and commentary. Data repositories such as NASA's Lunar Mapping and Modeling Project (LMMP), NASA GSFC's LANCE-MODIS, and others conforming to the Open Geospatial Consortium (OGC) standard Web Map Service (WMS) protocol make geospatial data available to a growing number of dome-capable globe visualization systems. The immersive surround graphics of full dome video replicate our visual system, creating authentic virtual scenes that effectively place audiences on location, in some cases on other worlds mapped only robotically.
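The globe-browsing systems mentioned pull imagery through the OGC Web Map Service interface; a minimal GetMap request built against a hypothetical WMS endpoint is sketched below. The endpoint URL and layer name are placeholders (real LMMP and LANCE-MODIS layers differ), while the parameter set follows the standard WMS 1.3.0 query.

```python
from urllib.parse import urlencode

def wms_getmap_url(endpoint, layer, bbox, width, height,
                   crs="EPSG:4326", fmt="image/jpeg"):
    """Build a standard WMS 1.3.0 GetMap request URL for one map image."""
    params = {
        "SERVICE": "WMS",
        "VERSION": "1.3.0",
        "REQUEST": "GetMap",
        "LAYERS": layer,
        "STYLES": "",
        "CRS": crs,
        "BBOX": ",".join(str(v) for v in bbox),   # min/max in the CRS axis order
        "WIDTH": width,
        "HEIGHT": height,
        "FORMAT": fmt,
    }
    return endpoint + "?" + urlencode(params)

# Placeholder endpoint and layer name for illustration only.
print(wms_getmap_url("https://example.org/wms", "global_mosaic",
                     bbox=(-90, -180, 90, 180), width=2048, height=1024))
```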
Real-time mid-infrared imaging of living microorganisms.
Haase, Katharina; Kröger-Lui, Niels; Pucci, Annemarie; Schönhals, Arthur; Petrich, Wolfgang
2016-01-01
The speed and efficiency of quantum cascade laser-based mid-infrared microspectroscopy are demonstrated using two different model organisms as examples. For the slowly moving Amoeba proteus, a quantum cascade laser is tuned over the wavelength range of 7.6 µm to 8.6 µm (wavenumbers 1320 cm(-1) and 1160 cm(-1) , respectively). The recording of a hyperspectral image takes 11.3 s whereby an average signal-to-noise ratio of 29 is achieved. The limits of time resolution are tested by imaging the fast moving Caenorhabditis elegans at a discrete wavenumber of 1265 cm(-1) . Mid-infrared imaging is performed with the 640 × 480 pixel video graphics array (VGA) standard and at a full-frame time resolution of 0.02 s (i.e. well above the most common frame rate standards). An average signal-to-noise ratio of 16 is obtained. To the best of our knowledge, these findings constitute the first mid-infrared imaging of living organisms at VGA standard and video frame rate. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Mesoscale and severe storms (Mass) data management and analysis system
NASA Technical Reports Server (NTRS)
Hickey, J. S.; Karitani, S.; Dickerson, M.
1984-01-01
Progress on the Mesoscale and Severe Storms (MASS) data management and analysis system is described. An interactive atmospheric database management software package to convert four types of data (Sounding, Single Level, Grid, Image) into standard random access formats is implemented and integrated with the MASS AVE80 Series general purpose plotting and graphics display data analysis software package. An interactive analysis and display graphics software package (AVE80) to analyze large volumes of conventional and satellite-derived meteorological data is enhanced to provide imaging/color graphics display utilizing color video hardware integrated into the MASS computer system. Local and remote smart-terminal capability is provided by installing APPLE III computer systems within individual scientists' offices and integrating them with the MASS system, thus providing color video display, graphics, and character display of the four data types.
Teaching Graphics in Technical Communication Classes.
ERIC Educational Resources Information Center
Spurgeon, Kristene C.
Perhaps because the United States is undergoing a video revolution, perhaps because of its increasing sales of goods to non-English speaking markets where graphics can help explain the products, perhaps because of the decreasing communication skills of the work force, graphic aids are becoming more and more widely used and more and more important.…
ERIC Educational Resources Information Center
Chen, Ching-chih
1991-01-01
Describes compact disc interactive (CD-I) as a multimedia home entertainment system that combines audio, visual, text, graphic, and interactive capabilities. Full-screen video and full-motion video (FMV) are explained, hardware for FMV decoding is described, software is briefly discussed, and CD-I titles planned for future production are listed.…
13 point video tape quality guidelines
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gaunt, R.
1997-05-01
Until high definition television (ATV) arrives, in the U.S. we must still contend with the National Television Systems Committee (NTSC) video standard (or PAL or SECAM, depending on your country). NTSC, a 40-year-old standard designed for transmission of color video camera images over a small bandwidth, is not well suited for the sharp, full-color images that today's computers are capable of producing. PAL and SECAM also suffer from many of NTSC's problems, but to varying degrees. Video professionals, when working with computer graphic (CG) images, use two monitors: a computer monitor for producing CGs and an NTSC monitor to view how a CG will look on video. More often than not, the NTSC image will differ significantly from the CG image, and outputting it to NTSC as an artist works enables him or her to see the images as others will see them. Below are thirteen guidelines designed to increase the quality of computer graphics recorded onto video tape. Viewing your work in NTSC and attempting to follow the tips below will enable you to create higher quality videos. No video is perfect, so don't expect to abide by every guideline every time.
Multi-mode sensor processing on a dynamically reconfigurable massively parallel processor array
NASA Astrophysics Data System (ADS)
Chen, Paul; Butts, Mike; Budlong, Brad; Wasson, Paul
2008-04-01
This paper introduces a novel computing architecture that can be reconfigured in real time to adapt on demand to multi-mode sensor platforms' dynamic computational and functional requirements. This 1 teraOPS reconfigurable Massively Parallel Processor Array (MPPA) has 336 32-bit processors. The programmable 32-bit communication fabric provides streamlined inter-processor connections with deterministically high performance. Software programmability, scalability, ease of use, and fast reconfiguration time (ranging from microseconds to milliseconds) are the most significant advantages over FPGAs and DSPs. This paper introduces the MPPA architecture, its programming model, and methods of reconfigurability. An MPPA platform for reconfigurable computing is based on a structural object programming model. Objects are software programs running concurrently on hundreds of 32-bit RISC processors and memories. They exchange data and control through a network of self-synchronizing channels. A common application design pattern on this platform, called a work farm, is a parallel set of worker objects, with one input and one output stream. Statically configured work farms with homogeneous and heterogeneous sets of workers have been used in video compression and decompression, network processing, and graphics applications.
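A tiny software analogue of the "work farm" pattern described above, using a pool of identical worker processes with one input stream and one output stream. This is a generic Python sketch of the design pattern, not the MPPA toolchain or its channel model.

```python
from multiprocessing import Pool

def worker(block):
    """Stand-in for a per-object compute kernel (e.g. processing one video block)."""
    return sum(v * v for v in block)

if __name__ == "__main__":
    # One input stream of data blocks, a farm of identical workers, one output stream.
    input_stream = [list(range(i, i + 64)) for i in range(0, 6400, 64)]
    with Pool(processes=8) as farm:
        output_stream = farm.map(worker, input_stream)
    print(len(output_stream), "results, first =", output_stream[0])
```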
NASA Astrophysics Data System (ADS)
Speyerer, E. J.; Ferrari, K. A.; Lowes, L. L.; Raad, P. E.; Cuevas, T.; Purdy, J. A.
2006-03-01
With advances in computers, graphics, and especially video games, manned space exploration can become real by creating a safe, fun learning environment that allows players to explore the solar system from the comfort of their personal computers.
Strategies for combining physics videos and virtual laboratories in the training of physics teachers
NASA Astrophysics Data System (ADS)
Dickman, Adriana; Vertchenko, Lev; Martins, Maria Inés
2007-03-01
Among the multimedia resources used in physics education, the most prominent are virtual laboratories and videos. On one hand, computer simulations and applets have very attractive graphic interfaces, showing an incredible amount of detail and movement. On the other hand, videos offer the possibility of displaying high quality images and are becoming more feasible with the increasing availability of digital resources. We believe it is important to discuss, throughout the teacher training program, both the functionality of information and communication technology (ICT) in physics education and the varied applications of these resources. In our work we suggest introducing ICT resources in a sequence that integrates these important tools in the teacher training program, as opposed to the traditional approach, in which virtual laboratories and videos are introduced separately. In this perspective, when we introduce and utilize virtual laboratory techniques we also provide for their use in videos, taking advantage of graphic interfaces. Thus the students in our program learn to use instructional software in the production of videos for classroom use.
Kwon, Min-Woo; Kim, Seung-Cheol; Kim, Eun-Soo
2016-01-20
A three-directional motion-compensation mask-based novel look-up table method is proposed and implemented on graphics processing units (GPUs) for video-rate generation of digital holographic videos of three-dimensional (3D) scenes. Since the proposed method is designed to be well matched with the software and memory structures of GPUs, the number of compute-unified-device-architecture kernel function calls can be significantly reduced. This results in a great increase of the computational speed of the proposed method, allowing video-rate generation of the computer-generated hologram (CGH) patterns of 3D scenes. Experimental results reveal that the proposed method can generate 39.8 frames of Fresnel CGH patterns with 1920×1080 pixels per second for the test 3D video scenario with 12,088 object points on dual GPU boards of NVIDIA GTX TITANs, and they confirm the feasibility of the proposed method in the practical application fields of electroholographic 3D displays.
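The proposed look-up-table method itself is not reproduced here; for orientation, the sketch below computes a brute-force Fresnel CGH by accumulating the spherical-wave contribution of each object point on the hologram plane, which is the computation the LUT and GPU mapping accelerate. Wavelength, pixel pitch, and geometry are illustrative assumptions.

```python
import numpy as np

def fresnel_cgh(points, holo_shape=(256, 256), pitch=8e-6, wavelength=532e-9):
    """Brute-force CGH: sum spherical-wave phases from each (x, y, z, amplitude)
    object point over the hologram plane, using the Fresnel distance approximation."""
    rows, cols = holo_shape
    k = 2 * np.pi / wavelength
    ys = (np.arange(rows) - rows / 2) * pitch
    xs = (np.arange(cols) - cols / 2) * pitch
    Y, X = np.meshgrid(ys, xs, indexing="ij")
    field = np.zeros(holo_shape, dtype=complex)
    for x, y, z, amp in points:
        # Fresnel approximation: r ~ z + ((X - x)^2 + (Y - y)^2) / (2 z)
        r = z + ((X - x) ** 2 + (Y - y) ** 2) / (2 * z)
        field += amp * np.exp(1j * k * r)
    return np.angle(field)          # phase-only hologram pattern

object_points = [(0.0, 0.0, 0.1, 1.0), (2e-4, -1e-4, 0.12, 0.8)]  # toy 3-D scene
print(fresnel_cgh(object_points).shape)
```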
Development of a Low Cost Graphics Terminal.
ERIC Educational Resources Information Center
Lehr, Ted
1985-01-01
Describes modifications made to expand the capabilities of a display unit (Lear Siegler ADM-3A) to include medium resolution graphics. The modifying circuitry is detailed along with software subroutines written in Z-80 machine language for controlling the video display. (JN)
Evaluation of Digital Technology and Software Use among Business Education Teachers
ERIC Educational Resources Information Center
Ellis, Richard S.; Okpala, Comfort O.
2004-01-01
Digital video cameras are part of the evolution of multimedia digital products that have positive applications for educators, students, and industry. Multimedia digital video can be utilized by any personal computer and it allows the user to control, combine, and manipulate different types of media, such as text, sound, video, computer graphics,…
The New Film Technologies: Computerized Video-Assisted Film Production.
ERIC Educational Resources Information Center
Mott, Donald R.
Over the past few years, video technology has been used to assist film directors after they have shot a scene, to control costs, and to create special effects, especially computer assisted graphics. At present, a computer based editing system called "Film 5" combines computer technology and video tape with film to save as much as 50% of…
ERIC Educational Resources Information Center
Embregts, Petri J. C. M.
2002-01-01
A study evaluated effects of a multifaceted training procedure on the inappropriate and appropriate social behavior of five adolescents with mild intellectual disability and on staff responses. The training included video feedback and self-management procedures and staff training with video and graphic feedback. Results indicated increases in…
Video Vectorization via Tetrahedral Remeshing.
Wang, Chuan; Zhu, Jie; Guo, Yanwen; Wang, Wenping
2017-02-09
We present a video vectorization method that generates a video in vector representation from an input video in raster representation. A vector-based video representation offers the benefits of vector graphics, such as compactness and scalability. The vector video we generate is represented by a simplified tetrahedral control mesh over the spatial-temporal video volume, with color attributes defined at the mesh vertices. We present novel techniques for simplification and subdivision of a tetrahedral mesh that achieve a high simplification ratio while preserving features and ensuring color fidelity. From an input raster video, our method is capable of generating a compact video in vector representation that allows a faithful reconstruction with low reconstruction errors.
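Because color attributes are stored only at tetrahedron vertices, reconstruction has to evaluate color at arbitrary points inside each spatio-temporal (x, y, t) cell. One standard way to do this, shown below purely as an assumed illustration rather than the authors' exact formulation, is barycentric interpolation from the four vertex colors.

```cpp
// Barycentric color interpolation inside one tetrahedron, a plausible building
// block of vector-video reconstruction (sketch only; the paper's scheme may differ).
#include <array>
#include <iostream>

struct Vec3 { double x, y, z; };   // here: (x, y, t) coordinates in the video volume

static Vec3 sub(const Vec3& a, const Vec3& b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }

static double det3(const Vec3& u, const Vec3& v, const Vec3& w) {
    return u.x * (v.y * w.z - v.z * w.y)
         - u.y * (v.x * w.z - v.z * w.x)
         + u.z * (v.x * w.y - v.y * w.x);
}

// Signed volume of tetrahedron (a, b, c, d) times 6.
static double signedVol6(const Vec3& a, const Vec3& b, const Vec3& c, const Vec3& d) {
    return det3(sub(b, a), sub(c, a), sub(d, a));
}

// Interpolate per-vertex RGB colors at point p inside tetrahedron {v[0]..v[3]}.
std::array<double, 3> interpColor(const Vec3& p,
                                  const std::array<Vec3, 4>& v,
                                  const std::array<std::array<double, 3>, 4>& rgb) {
    double vol = signedVol6(v[0], v[1], v[2], v[3]);
    // Barycentric weight of vertex i = volume of the sub-tetrahedron where p replaces v[i].
    double w0 = signedVol6(p, v[1], v[2], v[3]) / vol;
    double w1 = signedVol6(v[0], p, v[2], v[3]) / vol;
    double w2 = signedVol6(v[0], v[1], p, v[3]) / vol;
    double w3 = 1.0 - w0 - w1 - w2;
    std::array<double, 3> out{};
    for (int c = 0; c < 3; ++c)
        out[c] = w0 * rgb[0][c] + w1 * rgb[1][c] + w2 * rgb[2][c] + w3 * rgb[3][c];
    return out;
}

int main() {
    std::array<Vec3, 4> tet = {Vec3{0, 0, 0}, Vec3{1, 0, 0}, Vec3{0, 1, 0}, Vec3{0, 0, 1}};
    std::array<std::array<double, 3>, 4> rgb = {{{255, 0, 0}, {0, 255, 0}, {0, 0, 255}, {255, 255, 255}}};
    auto c = interpColor({0.25, 0.25, 0.25}, tet, rgb);   // centroid-like sample point
    std::cout << c[0] << ' ' << c[1] << ' ' << c[2] << '\n';
}
```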
Broadening the interface bandwidth in simulation based training
NASA Technical Reports Server (NTRS)
Somers, Larry E.
1989-01-01
Currently most computer based simulations rely exclusively on computer generated graphics to create the simulation. When training is involved, the method almost exclusively used to display information to the learner is text displayed on the cathode ray tube. MICROEXPERT Systems is concentrating on broadening the communications bandwidth between the computer and user by employing a novel approach to video image storage combined with sound and voice output. An expert system is used to combine and control the presentation of analog video, sound, and voice output with computer based graphics and text. Researchers are currently involved in the development of several graphics based user interfaces for NASA, the U.S. Army, and the U.S. Navy. Here, the focus is on the human factors considerations, software modules, and hardware components being used to develop these interfaces.
Multimedia category preferences of working engineers
NASA Astrophysics Data System (ADS)
Baukal, Charles E.; Ausburn, Lynna J.
2016-09-01
Many have argued for the importance of continuing engineering education (CEE), but relatively few recommendations were found in the literature for how to use multimedia technologies to deliver it most effectively. The study reported here addressed this gap by investigating the multimedia category preferences of working engineers. Four categories of multimedia, with two types in each category, were studied: verbal (text and narration), static graphics (drawing and photograph), dynamic non-interactive graphics (animation and video), and dynamic interactive graphics (simulated virtual reality (VR) and photo-real VR). The results showed that working engineers strongly preferred text over narration and somewhat preferred drawing over photograph, animation over video, and simulated VR over photo-real VR. These results suggest that a variety of multimedia types should be used in the instructional design of CEE content.
Multilocation Video Conference By Optical Fiber
NASA Astrophysics Data System (ADS)
Gray, Donald J.
1982-10-01
An experimental system that permits interconnection of many offices in a single video conference is described. Video images transmitted to conference participants are selected by the conference chairman and switched by a microprocessor-controlled video switch. Speakers can, at their choice, transmit their own images or images of graphics they wish to display. Users are connected to the Switching Center by optical fiber subscriber loops that carry analog video, digitized telephone, data and signaling. The same system also provides user-selectable distribution of video program and video library material. Experience in the operation of the conference system is discussed.
Real-time interactive simulation: using touch panels, graphics tablets, and video-terminal keyboards
DOE Office of Scientific and Technical Information (OSTI.GOV)
Venhuizen, J.R.
1983-01-01
A simulation laboratory that relies entirely on digital computers for interactive computing must use CRT-based graphics devices for output and keyboards, graphics tablets, touch panels, and similar devices for input. All of these devices work well; the combination of a CRT with a touch panel mounted on it proved to be the most flexible input/output arrangement for interactive simulation.
Flow visualization of CFD using graphics workstations
NASA Technical Reports Server (NTRS)
Lasinski, Thomas; Buning, Pieter; Choi, Diana; Rogers, Stuart; Bancroft, Gordon
1987-01-01
High performance graphics workstations are used to visualize the fluid flow dynamics obtained from supercomputer solutions of computational fluid dynamic programs. The visualizations can be done independently on the workstation or while the workstation is connected to the supercomputer in a distributed computing mode. In the distributed mode, the supercomputer interactively performs the computationally intensive graphics rendering tasks while the workstation performs the viewing tasks. A major advantage of the workstations is that the viewers can interactively change their viewing position while watching the dynamics of the flow fields. An overview of the computer hardware and software required to create these displays is presented. For complex scenes the workstation cannot create the displays fast enough for good motion analysis. For these cases, the animation sequences are recorded on video tape or 16 mm film a frame at a time and played back at the desired speed. The additional software and hardware required to create these video tapes or 16 mm movies are also described. Photographs illustrating current visualization techniques are discussed. Examples of the use of the workstations for flow visualization through animation are available on video tape.
Virtual Reality Calibration for Telerobotic Servicing
NASA Technical Reports Server (NTRS)
Kim, W.
1994-01-01
A virtual reality calibration technique has been developed that matches a virtual environment of simulated graphics models, in 3-D geometry and perspective, with actual camera views of the remote-site task environment. The technique enables high-fidelity preview/predictive displays with calibrated graphics overlaid on live video.
Li, Xianchun; Cheng, Xiaojun; Li, Jiaying; Pan, Yafeng; Hu, Yi; Ku, Yixuan
2015-01-01
Previous studies have shown enhanced memory performance resulting from extensive action video game playing. The mechanisms underlying this cognitive benefit were investigated in the current study. We presented two types of retro-cues, with variable intervals to the memory array (Task 1) or the test array (Task 2), during the retention interval of a change detection task. In Task 1, action video game players demonstrated steady performance while non-action video game players showed decreased performance as cues occurred later, indicating that the performance difference between the groups increased as the cue-to-memory-array intervals became longer. In Task 2, both participant groups increased their performance at similar rates as cues were presented later, implying that the performance difference between the two groups was irrespective of the test-array-to-cue intervals. These findings suggest that the memory benefit from game play is not attributable to a higher ability to overcome interference from the test array, but to interactions between the two processes of protection from decay and resistance to interference, or to alternative mechanisms. Implications for future studies are discussed. PMID:26136720
Classroom Writing Activities to Support the Curriculum.
ERIC Educational Resources Information Center
Piper, Judy
1990-01-01
Offers writing activities related to the reading of E. B. White's "Charlotte's Web," including showing the movie, using HyperCard, showing a video about a webspinning spider as a prewriting activity, and using computer graphics and video cameras to create related visual projects. (SR)
NASA Technical Reports Server (NTRS)
Liu, Hua-Kuang (Inventor)
1992-01-01
A relatively small and low-cost system is provided for projecting a large and bright television image onto a screen. A miniature liquid crystal array is driven by video circuitry to produce a pattern of transparencies in the array corresponding to a television image. Light is directed against the rear surface of the array to illuminate it, while a projection lens lies in front of the array to project the image of the array onto a large screen. Grid lines in the liquid crystal array are eliminated by a spatial filter which comprises a negative of the Fourier transform of the grid.
Predictable Programming on a Precision Timed Architecture
2008-04-18
Application: a video game. [Figure 6: Structure of the Video Game Example.] Inspired by an example game supplied with the Hydra development board [17]… we implemented a simple video game in C targeted to our PRET architecture. Our example centers on rendering graphics and is otherwise fairly simple… background image. [Figure 10: A Screen Dump From Our Video Game.] Ultimately, each displayed pixel is one of only four colors, but the pixels in…
Interactive graphical computer-aided design system
NASA Technical Reports Server (NTRS)
Edge, T. M.
1975-01-01
System is used for design, layout, and modification of large-scale-integrated (LSI) metal-oxide semiconductor (MOS) arrays. System is structured around small computer which provides real-time support for graphics storage display unit with keyboard, slave display unit, hard copy unit, and graphics tablet for designer/computer interface.
State of the art in video system performance
NASA Technical Reports Server (NTRS)
Lewis, Michael J.
1990-01-01
The closed circuit television (CCTV) system onboard the Space Shuttle includes the following components: a camera, a video signal switching and routing unit (VSU), and a Space Shuttle video tape recorder. However, this system is inadequate for use with many experiments that require video imaging. In order to assess the state of the art in video technology and data storage systems, a survey was conducted of High Resolution, High Frame Rate Video Technology (HHVT) products. The performance of state-of-the-art solid state cameras and image sensors, video recording systems, data transmission devices, and data storage systems versus users' requirements is shown graphically.
Preplanning and Evaluating Video Documentaries and Features.
ERIC Educational Resources Information Center
Maynard, Riley
1997-01-01
This article presents a ten-part pre-production outline and post-production evaluation that helps communications students more effectively improve video skills. Examines camera movement and motion, camera angle and perspective, lighting, audio, graphics, backgrounds and color, special effects, editing, transitions, and music. Provides a glossary…
Efficient processing of two-dimensional arrays with C or C++
Donato, David I.
2017-07-20
Because fast and efficient serial processing of raster-graphic images and other two-dimensional arrays is a requirement in land-change modeling and other applications, the effects of 10 factors on the runtimes for processing two-dimensional arrays with C and C++ are evaluated in a comparative factorial study. This study’s factors include the choice among three C or C++ source-code techniques for array processing; the choice of Microsoft Windows 7 or a Linux operating system; the choice of 4-byte or 8-byte array elements and indexes; and the choice of 32-bit or 64-bit memory addressing. This study demonstrates how programmer choices can reduce runtimes by 75 percent or more, even after compiler optimizations. Ten points of practical advice for faster processing of two-dimensional arrays are offered to C and C++ programmers. Further study and the development of a C and C++ software test suite are recommended. Key words: array processing, C, C++, compiler, computational speed, land-change modeling, raster-graphic image, two-dimensional array, software efficiency
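The abstract does not name the three source-code techniques compared, so the sketch below contrasts two commonly benchmarked alternatives purely for illustration: nested row/column indexing over a flattened row-major buffer versus a single pointer sweep over the same memory. Timing numbers depend entirely on compiler, flags, and hardware; the point is only to show the kind of programmer choice the study evaluates.

```cpp
// Two illustrative traversal techniques for a two-dimensional raster
// (not necessarily the three techniques compared in the report).
#include <chrono>
#include <cstdint>
#include <cstdio>
#include <vector>

int main() {
    const std::size_t rows = 8000, cols = 8000;
    std::vector<std::uint32_t> raster(rows * cols, 1);   // row-major, flattened

    using clock = std::chrono::steady_clock;

    // Technique A: nested row/column indexing with explicit 2D offset arithmetic.
    auto t0 = clock::now();
    std::uint64_t sumA = 0;
    for (std::size_t r = 0; r < rows; ++r)
        for (std::size_t c = 0; c < cols; ++c)
            sumA += raster[r * cols + c];
    auto t1 = clock::now();

    // Technique B: one pointer sweep over the same memory, no index arithmetic.
    std::uint64_t sumB = 0;
    for (const std::uint32_t *p = raster.data(), *end = p + rows * cols; p != end; ++p)
        sumB += *p;
    auto t2 = clock::now();

    auto ms = [](auto a, auto b) {
        return std::chrono::duration_cast<std::chrono::milliseconds>(b - a).count();
    };
    std::printf("nested indexing: sum=%llu in %lld ms\n",
                (unsigned long long)sumA, (long long)ms(t0, t1));
    std::printf("pointer sweep:   sum=%llu in %lld ms\n",
                (unsigned long long)sumB, (long long)ms(t1, t2));
}
```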
NASA Astrophysics Data System (ADS)
Levanon, Assaf; Yitzhaky, Yitzhak; Kopeika, Natan S.; Rozban, Daniel; Abramovich, Amir
2014-10-01
In recent years, much effort has been invested in developing inexpensive but sensitive Millimeter Wave (MMW) detectors that can be used in focal plane arrays (FPAs) in order to implement real-time MMW imaging. Real-time MMW imaging systems are required for many applications in fields such as homeland security, medicine, communications, military products, and space technology, mainly because this radiation has high penetration and good navigability through dust storms, fog, heavy rain, dielectric materials, biological tissue, and diverse materials. Moreover, the atmospheric attenuation in this range of the spectrum is relatively low and the scattering is also low compared to NIR and VIS. The lack of inexpensive room-temperature imaging systems makes it difficult to provide a suitable MMW system for many of the above applications. In the last few years we have advanced the research and development of sensors using very inexpensive (30-50 cents) Glow Discharge Detector (GDD) plasma indicator lamps as MMW detectors. This paper presents three kinds of GDD-lamp-based focal plane array (FPA) sensors. The three cameras differ in the number of detectors, the scanning operation, and the detection method. The first and second generations are an 8 × 8 pixel array and an 18 × 2 mono-rail scanner array, respectively, both for direct detection and limited to fixed imaging. The latest sensor is a multiplexed 16 × 16 GDD FPA. It permits real-time imaging at a video rate of 30 frames/sec and comprehensive 3D MMW imaging. The principle of detection in this sensor is a frequency-modulated continuous wave (FMCW) system in which each of the 16 GDD pixel lines is sampled simultaneously. Direct detection is also possible and can be performed through a user-friendly interface. This FPA sensor is built from 256 commercial GDD lamps (International Light, Inc., Peabody, MA, model 527 Ne indicator lamps, 3 mm diameter) as pixel detectors. All three sensors are fully supported by graphical user interface (GUI) software. They were tested and characterized with different kinds of optical systems for imaging applications, super-resolution, and calibration methods. The 16 × 16 sensor can employ a chirp-radar-like method to produce depth and reflectance information in the image. This enables 3-D MMW imaging in real time at video frame rates. In this work we demonstrate different kinds of optical imaging systems capable of 3-D imaging at short range and at longer distances of at least 10-20 meters.
Bringing Graphic Novels into a School's Curriculum
ERIC Educational Resources Information Center
Bucher, Katherine T.; Manning, M. Lee
2004-01-01
Many young adults enjoy graphic novels because the genre differs so dramatically from the books that educators traditionally have encouraged adolescents to read. Growing up with television and video games, contemporary young adults look for print media that contain the same visual impact and pared-down writing style and contribute to their…
HyperGLOB/Freedom: Preparing Student Designers for a New Media.
ERIC Educational Resources Information Center
Slawson, Brian
The HyperGLOB project introduced university-level graphic design students to interactive multimedia. This technology involves using the personal computer to display and manipulate a variety of electronic media simultaneously (combining elements of text and speech, music and sound, still images, motion video, and animated graphics) and allows…
Hyperspectral processing in graphical processing units
NASA Astrophysics Data System (ADS)
Winter, Michael E.; Winter, Edwin M.
2011-06-01
With the advent of the commercial 3D video card in the mid 1990s, we have seen an order of magnitude performance increase with each generation of new video cards. While these cards were designed primarily for visualization and video games, it became apparent after a short while that they could be used for scientific purposes. These Graphical Processing Units (GPUs) are rapidly being incorporated into data processing tasks usually reserved for general-purpose computers. It has been found that many image processing problems scale well to modern GPU systems. We have implemented four popular hyperspectral processing algorithms (N-FINDR, linear unmixing, Principal Components, and the RX anomaly detection algorithm). These algorithms show an across-the-board speedup of at least a factor of 10, with some special cases showing extreme speedups of a hundred times or more.
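Of the four algorithms named, the RX anomaly detector is the simplest to state: each pixel spectrum x is scored by its Mahalanobis distance from the scene mean, r(x) = (x - mu)^T Sigma^{-1} (x - mu), and high scores flag anomalies. The sketch below is a CPU version with a diagonal-covariance simplification so it stays self-contained (full RX inverts the complete covariance matrix); it is illustrative only and not the paper's GPU implementation, although the per-pixel independence of the score is exactly what makes the algorithm map well onto GPUs.

```cpp
// Simplified RX anomaly detector (diagonal covariance approximation); sketch only.
#include <cstdio>
#include <vector>

// cube[p * bands + b] holds band b of pixel p.
std::vector<double> rxScores(const std::vector<double>& cube,
                             std::size_t pixels, std::size_t bands) {
    std::vector<double> mean(bands, 0.0), var(bands, 0.0);
    for (std::size_t p = 0; p < pixels; ++p)
        for (std::size_t b = 0; b < bands; ++b)
            mean[b] += cube[p * bands + b];
    for (auto& m : mean) m /= double(pixels);

    for (std::size_t p = 0; p < pixels; ++p)
        for (std::size_t b = 0; b < bands; ++b) {
            double d = cube[p * bands + b] - mean[b];
            var[b] += d * d;
        }
    for (auto& v : var) v = v / double(pixels) + 1e-12;   // guard against divide-by-zero

    std::vector<double> score(pixels, 0.0);                // Mahalanobis-like distance per pixel
    for (std::size_t p = 0; p < pixels; ++p)
        for (std::size_t b = 0; b < bands; ++b) {
            double d = cube[p * bands + b] - mean[b];
            score[p] += d * d / var[b];
        }
    return score;
}

int main() {
    // Tiny synthetic cube: 4 pixels x 3 bands, with the last pixel made anomalous.
    std::vector<double> cube = {1, 2, 3,  1, 2, 3,  1, 2, 3,  9, 9, 9};
    for (double s : rxScores(cube, 4, 3)) std::printf("%.2f ", s);
    std::printf("\n");
}
```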
The use of hypermedia to increase the productivity of software development teams
NASA Technical Reports Server (NTRS)
Coles, L. Stephen
1991-01-01
Rapid progress in low-cost commercial PC-class multimedia workstation technology will potentially have a dramatic impact on the productivity of distributed work groups of 50-100 software developers. Hypermedia/multimedia involves the seamless integration in a graphical user interface (GUI) of a wide variety of data structures, including high-resolution graphics, maps, images, voice, and full-motion video. Hypermedia will normally require the manipulation of large dynamic files for which relational data base technology and SQL servers are essential. Basic machine architecture, special-purpose video boards, video equipment, optical memory, software needed for animation, network technology, and the anticipated increase in productivity that will result from the introduction of hypermedia technology are covered. It is suggested that the cost of the hardware and software to support an individual multimedia workstation will be on the order of $10,000.
Development of an all-in-one gamma camera/CCD system for safeguard verification
NASA Astrophysics Data System (ADS)
Kim, Hyun-Il; An, Su Jung; Chung, Yong Hyun; Kwak, Sung-Woo
2014-12-01
For the purpose of monitoring and verifying efforts at safeguarding radioactive materials in various fields, a new all-in-one gamma camera/charged coupled device (CCD) system was developed. This combined system consists of a gamma camera, which gathers energy and position information on gamma-ray sources, and a CCD camera, which identifies the specific location in a monitored area. Therefore, 2-D image information and quantitative information regarding gamma-ray sources can be obtained using fused images. A gamma camera consists of a diverging collimator, a 22 × 22 array CsI(Na) pixelated scintillation crystal with a pixel size of 2 × 2 × 6 mm3 and Hamamatsu H8500 position-sensitive photomultiplier tube (PSPMT). The Basler scA640-70gc CCD camera, which delivers 70 frames per second at video graphics array (VGA) resolution, was employed. Performance testing was performed using a Co-57 point source 30 cm from the detector. The measured spatial resolution and sensitivity were 4.77 mm full width at half maximum (FWHM) and 7.78 cps/MBq, respectively. The energy resolution was 18% at 122 keV. These results demonstrate that the combined system has considerable potential for radiation monitoring.
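The "fused images" step pairs the gamma camera's coarse activity map with the CCD's optical view of the monitored area. One common way to present such a pair, assumed here purely for illustration rather than taken from the paper, is to upsample the gamma image to the optical resolution and alpha-blend it over the CCD frame, as in the sketch below.

```cpp
// Illustrative fusion of a coarse gamma-activity map with an optical frame:
// nearest-neighbor upsampling followed by per-pixel alpha blending (sketch only).
#include <cstdio>
#include <vector>

struct Image {
    int w, h;
    std::vector<float> px;   // grayscale intensity in [0, 1], row-major
    float at(int x, int y) const { return px[y * w + x]; }
};

Image fuse(const Image& optical, const Image& activity, float alpha) {
    Image out{optical.w, optical.h, std::vector<float>(optical.px.size())};
    for (int y = 0; y < optical.h; ++y)
        for (int x = 0; x < optical.w; ++x) {
            // Map each optical pixel to the nearest activity-map pixel (upsampling).
            int gx = x * activity.w / optical.w;
            int gy = y * activity.h / optical.h;
            out.px[y * out.w + x] =
                (1.0f - alpha) * optical.at(x, y) + alpha * activity.at(gx, gy);
        }
    return out;
}

int main() {
    // 4x4 optical frame, 2x2 gamma map with one "hot" quadrant.
    Image optical{4, 4, std::vector<float>(16, 0.5f)};
    Image gammaMap{2, 2, {0.0f, 1.0f, 0.0f, 0.0f}};
    Image fused = fuse(optical, gammaMap, 0.4f);
    for (int y = 0; y < 4; ++y) {
        for (int x = 0; x < 4; ++x) std::printf("%.2f ", fused.at(x, y));
        std::printf("\n");
    }
}
```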
OpenSesame: an open-source, graphical experiment builder for the social sciences.
Mathôt, Sebastiaan; Schreij, Daniel; Theeuwes, Jan
2012-06-01
In the present article, we introduce OpenSesame, a graphical experiment builder for the social sciences. OpenSesame is free, open-source, and cross-platform. It features a comprehensive and intuitive graphical user interface and supports Python scripting for complex tasks. Additional functionality, such as support for eyetrackers, input devices, and video playback, is available through plug-ins. OpenSesame can be used in combination with existing software for creating experiments.
Prototyping the graphical user interface for the operator of the Cherenkov Telescope Array
NASA Astrophysics Data System (ADS)
Sadeh, I.; Oya, I.; Schwarz, J.; Pietriga, E.
2016-07-01
The Cherenkov Telescope Array (CTA) is a planned gamma-ray observatory. CTA will incorporate about 100 imaging atmospheric Cherenkov telescopes (IACTs) at a Southern site, and about 20 in the North. Previous IACT experiments have used up to five telescopes. Consequently, the design of a graphical user interface (GUI) for the operator of CTA involves new challenges. We present a GUI prototype, the concept for which is being developed in collaboration with experts from the field of Human-Computer Interaction (HCI). The prototype is based on Web technology; it incorporates a Python web server, Web Sockets, and graphics generated with the d3.js Javascript library.
New Adventures in Screencasting
ERIC Educational Resources Information Center
Flynn, Stephen X.
2013-01-01
There are universal elements to great videos and games that drive large audiences: beautiful cinematography and graphics, engaging and purposeful screenplays and storylines, unforgettable soundtracks, and a brand name that makes viewers and players want to return for more. Libraries can, and should, employ these elements in their own videos to…
Laserprinter applications in a medical graphics department.
Lynch, P J
1987-01-01
Our experience with the Apple Macintosh and LaserWriter equipment has convinced us that lasergraphics holds much current and future promise in the creation of line graphics and typography for the biomedical community. Although we continue to use other computer graphics equipment to produce color slides and an occasional pen-plotter graphic, the most rapidly growing segment of our graphics workload is in material well-suited to production on the Macintosh/LaserWriter system. At present our goal is to integrate all of our computer graphics production (color slides, video paint graphics and monochrome print graphics) into a single Macintosh-based system within the next two years. The software and hardware currently available are capable of producing a wide range of science graphics very quickly and inexpensively. The cost-effectiveness, versatility and relatively low initial investment required to install this equipment make it an attractive alternative for cost-recovery departments just entering the field of computer graphics.
Graphical Environment Tools for Application to Gamma-Ray Energy Tracking Arrays
DOE Office of Scientific and Technical Information (OSTI.GOV)
Todd, Richard A.; Radford, David C.
2013-12-30
Highly segmented, position-sensitive germanium detector systems are being developed for nuclear physics research where traditional electronic signal processing with mixed analog and digital function blocks would be enormously complex and costly. Future systems will be constructed using pipelined processing of high-speed digitized signals as is done in the telecommunications industry. Techniques which provide rapid algorithm and system development for future systems are desirable. This project has used digital signal processing concepts and existing graphical system design tools to develop a set of re-usable modular functions and libraries targeted for the nuclear physics community. Researchers working with complex nuclear detector arrays such as the Gamma-Ray Energy Tracking Array (GRETA) have been able to construct advanced data processing algorithms for implementation in field programmable gate arrays (FPGAs) through application of these library functions using intuitive graphical interfaces.
Causal Video Object Segmentation From Persistence of Occlusions
2015-05-01
Precision, recall, and F-measure are reported on the ground truth anno - tations converted to binary masks. Note we cannot evaluate “number of...to lack of occlusions. References [1] P. Arbelaez, M. Maire, C. Fowlkes, and J . Malik. Con- tour detection and hierarchical image segmentation. TPAMI...X. Bai, J . Wang, D. Simons, and G. Sapiro. Video snapcut: robust video object cutout using localized classifiers. In ACM Transactions on Graphics
The Video PATSEARCH System: An Interview with Peter Urbach.
ERIC Educational Resources Information Center
Videodisc/Videotext, 1982
1982-01-01
The Video PATSEARCH system consists of a microcomputer with a special keyboard and two display screens which accesses the PATSEARCH database of United States government patents on the Bibliographic Retrieval Services (BRS) search system. The microcomputer retrieves text from BRS and matching graphics from an analog optical videodisc. (Author/JJD)
Apparatus and method for imaging metallic objects using an array of giant magnetoresistive sensors
Chaiken, Alison
2000-01-01
A portable, low-power metallic object detector and a method for providing an image of a detected metallic object. In one embodiment, the present portable low-power metallic object detector comprises an array of giant magnetoresistive (GMR) sensors. The array of GMR sensors is adapted for detecting the presence of, and compiling image data of, a metallic object. In this embodiment, the array of GMR sensors is arranged in a checkerboard configuration such that the axes of sensitivity of alternate GMR sensors are orthogonally oriented. An electronics portion is coupled to the array of GMR sensors. The electronics portion is adapted to receive and process the image data of the metallic object compiled by the array of GMR sensors. The embodiment also includes a display unit which is coupled to the electronics portion. The display unit is adapted to display a graphical representation of the metallic object detected by the array of GMR sensors. In so doing, a graphical representation of the detected metallic object is provided.
Video quality assessment using M-SVD
NASA Astrophysics Data System (ADS)
Tao, Peining; Eskicioglu, Ahmet M.
2007-01-01
Objective video quality measurement is a challenging problem in a variety of video processing applications ranging from lossy compression to printing. An ideal video quality measure should be able to mimic the human observer. We present a new video quality measure, M-SVD, to evaluate distorted video sequences based on singular value decomposition. A computationally efficient approach is developed for full-reference (FR) video quality assessment. This measure is tested on the Video Quality Experts Group (VQEG) phase I FR-TV test data set. Our experiments show that the graphical measure displays the amount of distortion as well as the distribution of error in all frames of the video sequence, while the numerical measure correlates well with perceived video quality and outperforms PSNR and other objective measures by a clear margin.
Model-Driven Development of Interactive Multimedia Applications with MML
NASA Astrophysics Data System (ADS)
Pleuss, Andreas; Hussmann, Heinrich
There is an increasing demand for high-quality interactive applications which combine complex application logic with a sophisticated user interface, making use of individual media objects like graphics, animations, 3D graphics, audio or video. Their development is still challenging as it requires the integration of software design, user interface design, and media design.
Computer Graphics in Research: Some State -of-the-Art Systems
ERIC Educational Resources Information Center
Reddy, R.; And Others
1975-01-01
A description is given of the structure and functional characteristics of three types of interactive computer graphics systems developed by the Department of Computer Science at Carnegie-Mellon: a high-speed programmable display capable of displaying 50,000 short vectors, flicker free; a shaded-color video display for the display of gray-scale…
10-Second Demos: Boiling Asynchronous Online Instruction down to the Essentials with GIF Graphics
ERIC Educational Resources Information Center
Aleman, Karla J.; Porter, Toccara D.
2016-01-01
Connecting with text-weary students can be a challenge in the online instructional environment. Librarians have often developed screencast videos and integrated screenshots into online learning objects to teach students basic research skills. An alternative technology, graphical interchange format (GIF), may prove to be an excellent blend of the…
The Visual Side to Numeracy: Students' Sensemaking with Graphics
ERIC Educational Resources Information Center
Diezmann, Carmel; Lowrie, Tom; Sugars, Lindy; Logan, Tracy
2009-01-01
The 21st century has placed increasing demands on individuals' proficiency with a wide array of visual representations, that is, graphics. Hence, proficiency with visual tasks needs to be embedded across the curriculum. In mathematics, various graphics (e.g., maps, charts, number lines, graphs) are used as means of communication of mathematical…
Linear array of photodiodes to track a human speaker for video recording
NASA Astrophysics Data System (ADS)
DeTone, D.; Neal, H.; Lougheed, R.
2012-12-01
Communication and collaboration using stored digital media have garnered more interest from many areas of business, government, and education in recent years. This is due primarily to improvements in the quality of cameras and the speed of computers. An advantage of digital media is that it can serve as an effective alternative when physical interaction is not possible. Video recordings that allow viewers to discern a presenter's facial features, lips, and hand motions are more effective than videos that do not. To attain this, one must maintain a video capture in which the speaker occupies a significant portion of the captured pixels. However, camera operators are costly and often do an imperfect job of tracking presenters in unrehearsed situations. This creates motivation for a robust, automated system that directs a video camera to follow a presenter as he or she walks anywhere in the front of a lecture hall or large conference room. Such a system is presented. The system consists of a commercial, off-the-shelf pan/tilt/zoom (PTZ) color video camera, a necklace of infrared LEDs, and a linear photodiode array detector. Electronic output from the photodiode array is processed to generate the location of the LED necklace, which is worn by a human speaker. The computer controls the video camera movements to record video of the speaker. The speaker's vertical position and depth are assumed to remain relatively constant; the video camera is sent only panning (horizontal) movement commands. The LED necklace is flashed at 70 Hz with a 50% duty cycle to provide noise-filtering capability. The benefit of using a photodiode array versus a standard video camera is its higher frame rate (4 kHz vs. 60 Hz). The higher frame rate allows for the filtering of infrared noise such as sunlight and indoor lighting, a capability absent from other tracking technologies. The system has been tested in a large lecture hall and is shown to be effective.
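The noise filtering exploits the 70 Hz, 50% duty-cycle flashing: because the photodiode array is read out far faster than the LEDs flash, readouts taken while the LEDs are lit can be differenced against readouts taken while they are dark, cancelling steady infrared sources such as sunlight and room lighting. The sketch below shows only that differencing step for one on/off pair; the sampling, synchronization, and PTZ control details are assumptions, not taken from the paper.

```cpp
// On/off readout differencing for a flashing-LED tracker (illustrative sketch).
#include <algorithm>
#include <cstdio>
#include <vector>

// Given one photodiode-array readout captured while the LED necklace is lit and
// one captured while it is dark, return the element with the largest on-minus-off
// difference, i.e. the estimated horizontal position of the speaker.
std::size_t estimateSpeakerIndex(const std::vector<double>& ledOn,
                                 const std::vector<double>& ledOff) {
    std::vector<double> diff(ledOn.size());
    for (std::size_t i = 0; i < ledOn.size(); ++i)
        diff[i] = ledOn[i] - ledOff[i];        // constant light (sun, lamps) cancels out
    return std::distance(diff.begin(), std::max_element(diff.begin(), diff.end()));
}

int main() {
    // Synthetic 16-element line scan: ambient gradient plus an LED bump at index 5.
    std::vector<double> off(16), on(16);
    for (std::size_t i = 0; i < 16; ++i) { off[i] = 10.0 + 0.5 * i; on[i] = off[i]; }
    on[5] += 40.0;                             // LED contribution while the necklace is lit

    std::size_t idx = estimateSpeakerIndex(on, off);
    // A pan command proportional to the offset from the array center could then be
    // sent to the PTZ camera (illustrative only; no real camera API is used here).
    double panError = double(idx) - 7.5;
    std::printf("speaker at element %zu, pan error %.1f elements\n", idx, panError);
}
```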
The Graphics Tablet - A Valuable Tool for the Digital STEM Teacher
NASA Astrophysics Data System (ADS)
Stephens, Jeff
2018-04-01
I am inspired to write this article after coming across some publications in The Physics Teacher that all hit on topics of personal interest and experience. Similarly to Christensen, my goal in writing this is to encourage other physics educators to take advantage of modern technology in delivering content to students and to feel comfortable doing so. There are numerous ways to create screencasts and lecture videos, some of which have been addressed in other articles. I invite those interested in learning how to create these videos to contact their educational technology staff or perform some internet searches on the topic. I will focus this article on the technology that enhanced the content I was delivering to my students. I will share a bit of my journey toward creating video materials and introduce a vital piece of technology, the graphics tablet, which changed the way I communicate with my students.
Intelligent viewing control for robotic and automation systems
NASA Astrophysics Data System (ADS)
Schenker, Paul S.; Peters, Stephen F.; Paljug, Eric D.; Kim, Won S.
1994-10-01
We present a new system for supervisory automated control of multiple remote cameras. Our primary purpose in developing this system has been to provide the capability for knowledge-based, `hands-off' viewing during execution of teleoperation/telerobotic tasks. The reported technology has broader applicability to remote surveillance, telescience observation, automated manufacturing workcells, etc. We refer to this new capability as `Intelligent Viewing Control (IVC),' distinguishing it from simple programmed camera motion control. In the IVC system, camera viewing assignment, sequencing, positioning, panning, and parameter adjustment (zoom, focus, aperture, etc.) are invoked and interactively executed in real time by a knowledge-based controller, drawing on a priori known task models and constraints, including operator preferences. This multi-camera control is integrated with a real-time, high-fidelity 3D graphics simulation, which is correctly calibrated in perspective to the actual cameras and their platform kinematics (translation/pan-tilt). Such a merged graphics-with-video design allows the system user to preview and modify the planned (`choreographed') viewing sequences. Further, during actual task execution, the system operator has available both the resulting optimized video sequence and supplementary graphics views from arbitrary perspectives. IVC, including operator-interactive designation of robot task actions, is presented to the user as a well-integrated, single-screen video-graphic user interface allowing easy access to all relevant telerobot communication/command/control resources. We describe and show pictorial results of a preliminary IVC system implementation for telerobotic servicing of a satellite.
MAP3D: a media processor approach for high-end 3D graphics
NASA Astrophysics Data System (ADS)
Darsa, Lucia; Stadnicki, Steven; Basoglu, Chris
1999-12-01
Equator Technologies, Inc. has used a software-first approach to produce several programmable and advanced VLIW processor architectures that have the flexibility to run both traditional systems tasks and an array of media-rich applications. For example, Equator's MAP1000A is the world's fastest single-chip programmable signal and image processor targeted for digital consumer and office automation markets. The Equator MAP3D is a proposal for the architecture of the next generation of the Equator MAP family. The MAP3D is designed to achieve high-end 3D performance and a variety of customizable special effects by combining special graphics features with a high-performance floating-point and media processor architecture. As a programmable media processor, it offers the advantages of a completely configurable 3D pipeline--allowing developers to experiment with different algorithms and to tailor their pipeline to achieve the highest performance for a particular application. With the support of Equator's advanced C compiler and toolkit, MAP3D programs can be written in a high-level language. This allows the compiler to successfully find and exploit any parallelism in a programmer's code, thus decreasing the time to market of a given application. The ability to run an operating system makes it possible to run concurrent applications in the MAP3D chip, such as video decoding while executing the 3D pipelines, so that integration of applications is easily achieved--using real-time decoded imagery for texturing 3D objects, for instance. This novel architecture enables an affordable, integrated solution for high performance 3D graphics.
ARCUS Internet Media Archive (IMA): A Resource for Outreach and Education
NASA Astrophysics Data System (ADS)
Polly, Z.; Warnick, W. K.; Polly, J.
2008-12-01
The ARCUS Internet Media Archive (IMA) is a collection of photos, graphics, videos, and presentations about the Arctic that are shared through the Internet. It provides the arctic research community and the public at large with a centralized location where images and video pertaining to polar research can be browsed and retrieved for a variety of uses. The IMA currently contains almost 6,500 publicly accessible photos, including 4,000 photos from the National Science Foundation funded Teachers and Researchers Exploring and Collaborating (TREC, now PolarTREC) program, an educational research experience in which K-12 teachers participate in arctic research as a pathway to improving science education. The IMA also includes 450 video files, 270 audio files, nearly 100 graphics and logos, 28 presentations, and approximately 10,000 additional resources that are being prepared for public access. The contents of this archive are organized by file type, contributor's name, event, or by organization, with each photo or file accompanied by information on content, contributor source, and usage requirements. All the files are key-worded and all information, including file name and description, is completely searchable. ARCUS plans to continue to improve and expand the IMA with a particular focus on providing graphics depicting key arctic research results and findings as well as edited video archives of relevant scientific community meetings. To submit files or for more information and to view the ARCUS Internet Media Archive, please go to: http://media.arcus.org or email photo@arcus.org.
Architectures for single-chip image computing
NASA Astrophysics Data System (ADS)
Gove, Robert J.
1992-04-01
This paper will focus on the architectures of VLSI programmable processing components for image computing applications. TI, the maker of industry-leading RISC, DSP, and graphics components, has developed an architecture for a new generation of image processors capable of implementing a plurality of image, graphics, video, and audio computing functions. We will show that the use of a single-chip heterogeneous MIMD parallel architecture best suits this class of processors--those which will dominate the desktop multimedia, document imaging, computer graphics, and visualization systems of this decade.
Cervical cancer control: deaf and hearing women's response to an educational video.
Yao, Catherine S; Merz, Erin L; Nakaji, Melanie; Harry, Kadie M; Malcarne, Vanessa L; Sadler, Georgia Robins
2012-03-01
Deaf people encounter barriers to accessing cancer information. In this study, a graphically enriched educational video about cervical cancer was created in American Sign Language, with English open captioning and voice overlay. Deaf (n = 127) and hearing (n = 106) women completed cancer knowledge surveys before and after viewing the video. Hearing women yielded higher scores before the intervention. Both groups demonstrated a significant increase in general and cervical cancer knowledge after viewing the video, rendering posttest knowledge scores nearly equal between the groups. These findings indicate that this video is an effective strategy for increasing cervical cancer knowledge among deaf women.
Antal, Holly; Bunnell, H Timothy; McCahan, Suzanne M; Pennington, Chris; Wysocki, Tim; Blake, Kathryn V
2017-02-01
Poor participant comprehension of research procedures following the conventional face-to-face consent process for biomedical research is common. We describe the development of a multimedia informed consent video and website that incorporate cognitive strategies to enhance comprehension of study-related material directed to parents and adolescents. A multidisciplinary team was assembled for development of the video and website that included human subjects professionals; psychologist researchers; institutional video and web developers; bioinformaticians and programmers; and parent and adolescent stakeholders. Five learning strategies (Sensory-Modality view, Coherence, Signaling, Redundancy, and Personalization) were integrated into a 15-min video and website material that describes a clinical research trial. A diverse team collaborated extensively over 15 months to design and build a multimedia platform for obtaining parental permission and adolescent assent for participation in an asthma clinical trial. Examples of the learning principles included having a narrator describe what was being viewed in the video (sensory modality); eliminating unnecessary text and graphics (coherence); having the initial portion of the video explain the sections of the video to be viewed (signaling); avoiding simultaneous presentation of text and graphics (redundancy); and having a consistent narrator throughout the video (personalization). Existing conventional and multimedia processes for obtaining research informed consent have not actively incorporated basic principles of human cognition and learning in the design and implementation of these processes. The present paper illustrates how this can be achieved, setting the stage for rigorous evaluation of potential benefits such as improved comprehension, satisfaction with the consent process, and completion of research objectives. New consent strategies that have an integrated cognitive approach need to be developed and tested in controlled trials. Copyright © 2017 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Knuth, F.; Vardaro, M.; Belabbassi, L.; Smith, M. J.; Garzio, L. M.; Crowley, M. F.; Kerfoot, J.; Kawka, O. E.
2016-02-01
The National Science Foundation's Ocean Observatories Initiative (OOI) is a broad-scale, multidisciplinary facility that will transform oceanographic research by providing users with unprecedented access to long-term datasets from a variety of deployed physical, chemical, biological, and geological sensors. The Cabled Array component of the OOI, installed and operated by the University of Washington, is located on the Juan de Fuca tectonic plate off the coast of Oregon. It is a unique network of >100 cabled instruments and instrumented moorings transmitting data to shore in real-time via fiber optic technology. Instruments now installed include HD video and digital still cameras, mass spectrometers, a resistivity-temperature probe inside the orifice of a high-temperature hydrothermal vent, upward-looking ADCPs, pH and pCO2 sensors, Horizontal Electrometer Pressure Inverted Echosounders, and many others. Here, we present the technical aspects of data streaming from the Cabled Array through the OOI Cyberinfrastructure. We illustrate the types of instruments and data products available, data volume and density, processing levels and algorithms used, data delivery methods, file formats, and access methods through the graphical user interface. Our goal is to facilitate the use of and access to these unprecedented, co-registered oceanographic datasets. We encourage researchers to collaborate through the use of these simultaneous, interdisciplinary measurements, in the exploration of short-lived events (tectonic, volcanic, biological, severe storms), as well as long-term trends in ocean systems (circulation patterns, climate change, ocean acidity, ecosystem shifts).
Geometric database maintenance using CCTV cameras and overlay graphics
NASA Astrophysics Data System (ADS)
Oxenberg, Sheldon C.; Landell, B. Patrick; Kan, Edwin
1988-01-01
An interactive graphics system using closed circuit television (CCTV) cameras for remote verification and maintenance of a geometric world model database has been demonstrated in GE's telerobotics testbed. The database provides geometric models and locations of objects viewed by CCTV cameras and manipulated by telerobots. To update the database, an operator uses the interactive graphics system to superimpose a wireframe line drawing of an object with known dimensions on a live video scene containing that object. The methodology used is multipoint positioning to easily superimpose a wireframe graphic on the CCTV image of an object in the work scene. An enhanced version of GE's interactive graphics system will provide the object designation function for the operator control station of the Jet Propulsion Laboratory's telerobot demonstration system.
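Superimposing a wireframe on the live CCTV image amounts to projecting each model vertex through the camera's calibrated pose and intrinsics onto the image plane. The sketch below shows only that basic pinhole projection with illustrative numbers; the testbed's actual calibration and multipoint-positioning procedure is not reproduced here.

```cpp
// Pinhole projection of wireframe vertices into image coordinates, the basic
// operation behind overlaying a geometric model on live video (sketch only).
#include <array>
#include <cstdio>

struct Vec3 { double x, y, z; };

// Camera pose as a row-major rotation matrix plus translation: camera point = R*p + t.
struct Pose { std::array<double, 9> R; Vec3 t; };

struct Intrinsics { double fx, fy, cx, cy; };   // focal lengths and principal point (pixels)

bool project(const Vec3& p, const Pose& pose, const Intrinsics& K, double& u, double& v) {
    Vec3 c{pose.R[0]*p.x + pose.R[1]*p.y + pose.R[2]*p.z + pose.t.x,
           pose.R[3]*p.x + pose.R[4]*p.y + pose.R[5]*p.z + pose.t.y,
           pose.R[6]*p.x + pose.R[7]*p.y + pose.R[8]*p.z + pose.t.z};
    if (c.z <= 0.0) return false;               // behind the camera, nothing to overlay
    u = K.fx * c.x / c.z + K.cx;                // perspective divide plus pixel mapping
    v = K.fy * c.y / c.z + K.cy;
    return true;
}

int main() {
    Pose pose{{1, 0, 0,  0, 1, 0,  0, 0, 1}, {0.0, 0.0, 2.0}};  // camera 2 m from the model
    Intrinsics K{800.0, 800.0, 320.0, 240.0};                   // nominal CCTV-like intrinsics
    Vec3 corners[] = {{-0.1, -0.1, 0}, {0.1, -0.1, 0}, {0.1, 0.1, 0}, {-0.1, 0.1, 0}};
    for (const Vec3& p : corners) {
        double u, v;
        if (project(p, pose, K, u, v))
            std::printf("vertex -> pixel (%.1f, %.1f)\n", u, v);
    }
}
```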
Design and implementation of H.264 based embedded video coding technology
NASA Astrophysics Data System (ADS)
Mao, Jian; Liu, Jinming; Zhang, Jiemin
2016-03-01
In this paper, an embedded system for remote online video monitoring was designed and developed to capture and record the real-time circumstances in an elevator. For the purpose of improving the efficiency of video acquisition and processing, the system uses the Samsung S5PV210 chip, which integrates a graphics processing unit, as the core processor. The video is encoded in H.264 format for efficient storage and transmission. Based on the S5PV210 chip, hardware video coding technology was investigated, which is more efficient than software coding. Running tests proved that hardware video coding can obviously reduce the cost of the system and produce a smoother video display. It can be widely applied in security supervision [1].
Internet Television News in the Classroom TF1: Improved Features Make Sites More Useful
ERIC Educational Resources Information Center
LeLoup, Jean W.; Ponterio, Robert
2004-01-01
Truly current, up-to-the-minute video by native speakers using the language for real communication can make the language and culture come alive for students. With the explosion of broadband Internet access through cable and DSL, better connections in schools, faster low cost computers, and better graphics adapters, access to authentic video on the…
Learning Projectile Motion with the Computer Game "Scorched 3D"
ERIC Educational Resources Information Center
Jurcevic, John S.
2008-01-01
For most of our students, video games are a normal part of their lives. We should take advantage of this medium to teach physics in a manner that is engrossing for our students. In particular, modern video games incorporate accurate physics in their game engines, and they allow us to visualize the physics through flashy and captivating graphics. I…
ERIC Educational Resources Information Center
Eisenstadt, Marc; Brayshaw, Mike
This paper describes a Prolog execution model which serves as the uniform basis of textbook material, video-based teaching material, and an advanced graphical user interface for Prolog programmers. The model, based upon an augmented AND/OR tree representation of Prolog programs, uses an enriched "status box" in place of the traditional…
Submillimeter video imaging with a superconducting bolometer array
NASA Astrophysics Data System (ADS)
Becker, Daniel Thomas
Millimeter wavelength radiation holds promise for detection of security threats at a distance, including suicide bombers and maritime threats in poor weather. The high sensitivity of superconducting Transition Edge Sensor (TES) bolometers makes them ideal for passive imaging of thermal signals at millimeter and submillimeter wavelengths. I have built a 350 GHz video-rate imaging system using an array of feedhorn-coupled TES bolometers. The system operates at standoff distances of 16 m to 28 m with a measured spatial resolution of 1.4 cm (at 17 m). It currently contains one 251-detector sub-array, and can be expanded to contain four sub-arrays for a total of 1004 detectors. The system has been used to take video images that reveal the presence of weapons concealed beneath a shirt in an indoor setting. This dissertation describes the design, implementation and characterization of this system. It presents an overview of the challenges associated with standoff passive imaging and how these problems can be overcome through the use of large-format TES bolometer arrays. I describe the design of the system and cover the results of detector and optical characterization. I explain the procedure used to generate video images using the system, and present a noise analysis of those images. This analysis indicates that the Noise Equivalent Temperature Difference (NETD) of the video images is currently limited by artifacts of the scanning process. More sophisticated image processing algorithms can eliminate these artifacts and reduce the NETD to 100 mK, which is the target value for the most demanding passive imaging scenarios. I finish with an overview of future directions for this system.
Climate Science Communications - Video Visualization Techniques
NASA Astrophysics Data System (ADS)
Reisman, J. P.; Mann, M. E.
2010-12-01
Communicating climate science is challenging due to its complexity. But as they say, a picture is worth a thousand words. Visualization techniques can be merely graphical or can combine multimedia so as to make graphs come alive in context with other visual and auditory cues. This can also make the information come alive in a way that better communicates what the science is all about. What types of graphics to use depends on your audience: some graphs are great for scientists, but if you are trying to communicate to a less sophisticated audience, certain visuals translate information in a more easily perceptible manner. Hollywood techniques and style can be applied to these graphs to give them even more impact. Video is one of the most powerful communication tools in its ability to combine visual and audio through time. Adding music and visual cues such as pans and zooms can greatly enhance the ability to communicate your concepts. Video software ranges from relatively simple to very sophisticated. In reality, you don't need the best tools to get your point across. In fact, with relatively inexpensive software, you can put together powerful videos that more effectively convey the science you are working on with greater sophistication, and in an entertaining way. We will examine some basic techniques to increase the quality of video visualization to make it more effective in communicating complexity. If a picture is worth a thousand words, a decent video with music and a bit of narration is priceless.
SEURAT: visual analytics for the integrated analysis of microarray data.
Gribov, Alexander; Sill, Martin; Lück, Sonja; Rücker, Frank; Döhner, Konstanze; Bullinger, Lars; Benner, Axel; Unwin, Antony
2010-06-03
In translational cancer research, gene expression data is collected together with clinical data and genomic data arising from other chip-based high-throughput technologies. Software tools for the joint analysis of such high-dimensional data sets together with clinical data are required. We have developed an open source software tool which provides interactive visualization capability for the integrated analysis of high-dimensional gene expression data together with associated clinical data, array CGH data, and SNP array data. The different data types are organized by a comprehensive data manager. Interactive tools are provided for all graphics: heatmaps, dendrograms, barcharts, histograms, eventcharts, and a chromosome browser, which displays genetic variations along the genome. All graphics are dynamic and fully linked so that any object selected in a graphic will be highlighted in all other graphics. For exploratory data analysis the software provides unsupervised data analytics like clustering, seriation algorithms, and biclustering algorithms. The SEURAT software meets the growing needs of researchers to perform joint analysis of gene expression, genomic, and clinical data.
2010-07-01
…imagery, persistent sensor array. New device fabrication technologies and heterogeneous embedded processors have led to the emergence of a… geometric occlusions between target and sensor, motion blur, urban scene complexity, and high data volumes. In practical terms the targets are small… distributed airborne narrow-field-of-view video sensor networks. Airborne camera arrays combined with computational photography techniques enable the…
NASA Astrophysics Data System (ADS)
Kwak, Bong-Choon; Lim, Han-Sin; Kwon, Oh-Kyong
2011-03-01
In this paper, we propose a pixel circuit immune to the electrical characteristic variation of organic light-emitting diodes (OLEDs) for organic light-emitting diode-on-silicon (OLEDoS) microdisplays with a 0.4 inch video graphics array (VGA) resolution and a 6-bit gray scale. The proposed pixel circuit is implemented using five p-channel metal oxide semiconductor field-effect transistors (MOSFETs) and one storage capacitor. The proposed pixel circuit has a source follower with a diode-connected transistor as an active load for improving the immunity against the electrical characteristic variation of OLEDs. The deviation in the measured emission current ranges from -0.165 to 0.212 least significant bit (LSB) among 11 samples while the anode voltage of OLED is 0 V. Also, the deviation in the measured emission current ranges from -0.262 to 0.272 LSB in pixel samples, while the anode voltage of OLED varies from 0 to 2.5 V owing to the electrical characteristic variation of OLEDs.
Video games and aggressive thoughts, feelings, and behavior in the laboratory and in life.
Anderson, C A; Dill, K E
2000-04-01
Two studies examined violent video game effects on aggression-related variables. Study 1 found that real-life violent video game play was positively related to aggressive behavior and delinquency. The relation was stronger for individuals who are characteristically aggressive and for men. Academic achievement was negatively related to overall amount of time spent playing video games. In Study 2, laboratory exposure to a graphically violent video game increased aggressive thoughts and behavior. In both studies, men had a more hostile view of the world than did women. The results from both studies are consistent with the General Affective Aggression Model, which predicts that exposure to violent video games will increase aggressive behavior in both the short term (e.g., laboratory aggression) and the long term (e.g., delinquency).
A 500 megabyte/second disk array
NASA Technical Reports Server (NTRS)
Ruwart, Thomas M.; Okeefe, Matthew T.
1994-01-01
Applications at the Army High Performance Computing Research Center's (AHPCRC) Graphic and Visualization Laboratory (GVL) at the University of Minnesota require a tremendous amount of I/O bandwidth, and this appetite for data is growing. Silicon Graphics workstations are used to perform the post-processing, visualization, and animation of multi-terabyte datasets produced by scientific simulations performed on AHPCRC supercomputers. The M.A.X. (Maximum Achievable Xfer) was designed to find the maximum achievable I/O performance of the Silicon Graphics CHALLENGE/Onyx-class machines that run these applications. Running a fully configured Onyx machine with twelve 150 MHz R4400 processors, 512 MB of 8-way interleaved memory, and 31 fast/wide SCSI-2 channels, each with a Ciprico disk array controller, we were able to achieve a maximum sustained transfer rate of 509.8 megabytes per second. However, after analyzing the results it became clear that the true maximum transfer rate is somewhat beyond this figure, and we will need to do further testing with more disk array controllers in order to find the true maximum.
NASA Technical Reports Server (NTRS)
Bogart, Edward H. (Inventor); Pope, Alan T. (Inventor)
2000-01-01
A system for displaying multiple physiological measurements on a single video display terminal is provided. A subject is monitored by a plurality of instruments which feed data to a computer programmed to receive the data, calculate data products such as an index of engagement and heart rate, and display the data simultaneously in a graphical format on a single video display terminal. In addition, live video showing the subject and the experimental setup may be integrated into the same display. The display may be recorded on a standard video tape recorder for retrospective analysis.
ERIC Educational Resources Information Center
Shriver, Edgar L.; And Others
This volume reports an effort to use the video medium to prepare a battery of symbolic tests that would be empirically valid substitutes for criterion-referenced Job Task Performance Tests. The graphic symbolic tests require the storage of a large amount of pictorial information which must be searched rapidly for display.…
1989-11-27
drive against pornography, and it has also achieved new breakthroughs and progress in eradicating pornographic materials in certain localities ... September, more than 45,000 law enforcement personnel in the province made more than 5,900 inspections of bookstores and audio and video shops and stalls ... on 3 October. Second, the sources of Shishi City's illegal and pornographic videotapes have been ascertained. Third, the channels through which
Design of a video teleconference facility for a synchronous satellite communications link
NASA Technical Reports Server (NTRS)
Richardson, M. D.
1979-01-01
The system requirements, design tradeoffs, and final design of a video teleconference facility are discussed, including proper lighting, graphics transmission, and picture aesthetics. Methods currently accepted in the television broadcast industry are used in the design. The unique problems associated with using an audio channel with a synchronous satellite communications link are discussed, and a final audio system design is presented.
Information Sharing for Medical Triage Tasking During Mass Casualty/Humanitarian Operations
2009-12-01
military patrol units or surreptitious "cloak and dagger" fact-gathering missions to gain photographic/videographic data for dissemination to the ... fractured command and control organization and retarded deployment of resources. Tragedies such as Hurricane Katrina in 2005, the September 11 attacks of ... with PKI certificates and HMAC protection from replay attacks and UDP flooding [17]. 3. Triage Graphical User Interface (GUI) Currently the GUI for
Practical system for generating digital mixed reality video holograms.
Song, Joongseok; Kim, Changseob; Park, Hanhoon; Park, Jong-Il
2016-07-10
We propose a practical system that can effectively mix the depth data of real and virtual objects by using a Z-buffer and can quickly generate digital mixed reality video holograms by using multiple graphics processing units (GPUs). In an experiment, we verify that real objects and virtual objects can be merged naturally over free viewing angles and that the occlusion problem is handled well. Furthermore, we demonstrate that the proposed system can generate mixed reality video holograms at 7.6 frames per second. Finally, the system performance is further assessed through users' subjective evaluations.
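The Z-buffer mixing step described above can be illustrated as a per-pixel depth comparison. The sketch below is not the authors' implementation; the array names, shapes, and NumPy-based formulation are assumptions used only to show how a depth test decides whether the real or the virtual surface wins at each pixel.

```python
import numpy as np

def composite_by_depth(real_rgb, real_depth, virt_rgb, virt_depth):
    """Per-pixel Z-buffer merge: keep whichever surface is closer to the camera.

    real_rgb, virt_rgb     : (H, W, 3) color images
    real_depth, virt_depth : (H, W) depth maps in the same metric units
    """
    # Boolean mask where the real object occludes the virtual one
    real_in_front = real_depth < virt_depth
    out_rgb = np.where(real_in_front[..., None], real_rgb, virt_rgb)
    out_depth = np.minimum(real_depth, virt_depth)
    return out_rgb, out_depth
```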
STS-74/MIR Photogrammetric Appendage Structural Dynamics Experiment Preliminary Data Analysis
NASA Technical Reports Server (NTRS)
Gilbert, Michael G.; Welch, Sharon S.; Pappa, Richard S.; Demeo, Martha E.
1997-01-01
The Photogrammetric Appendage Structural Dynamics Experiment was designed, developed, and flown to demonstrate and prove measurement of the structural vibration response of a Russian Space Station Mir solar array using photogrammetric methods. The experiment flew on the STS-74 Space Shuttle mission to Mir in November 1995 and obtained video imagery of solar array structural response to various excitation events. The video imagery has been digitized and triangulated to obtain response time history data at discrete points on the solar array. This data has been further processed using the Eigensystem Realization Algorithm modal identification technique to determine the natural vibration frequencies, damping, and mode shapes of the solar array. The results demonstrate that photogrammetric measurement of articulating, nonoptically targeted, flexible solar arrays and appendages is a viable, low-cost measurement option for the International Space Station.
NASA Astrophysics Data System (ADS)
Lazar, Aurel A.; White, John S.
1987-07-01
Theoretical analysis of an integrated local area network model of MAGNET, an integrated network testbed developed at Columbia University, shows that the bandwidth freed up during video and voice calls in periods of little movement in the images and periods of silence in the speech signals could be utilized efficiently for graphics and data transmission. Based on these investigations, an architecture supporting adaptive protocols that are dynamically controlled by the requirements of a fluctuating load and a changing user environment has been advanced. To further analyze the behavior of the network, a real-time packetized video system has been implemented. This system is embedded in the real-time multimedia workstation EDDY, which integrates video, voice, and data traffic flows. Protocols supporting variable-bandwidth, fixed-quality packetized video transport are described in detail.
NASA Astrophysics Data System (ADS)
Lazar, Aurel A.; White, John S.
1986-11-01
Theoretical analysis of an ILAN model of MAGNET, an integrated network testbed developed at Columbia University, shows that the bandwidth freed up by video and voice calls during periods of little movement in the images and silence periods in the speech signals could be utilized efficiently for graphics and data transmission. Based on these investigations, an architecture supporting adaptive protocols that are dynamically controlled by the requirements of a fluctuating load and a changing user environment has been advanced. To further analyze the behavior of the network, a real-time packetized video system has been implemented. This system is embedded in the real-time multimedia workstation EDDY, which integrates video, voice, and data traffic flows. Protocols supporting variable-bandwidth, constant-quality packetized video transport are described in detail.
A Graphical Operator Interface for a Telerobotic Inspection System
NASA Technical Reports Server (NTRS)
Kim, W. S.; Tso, K. S.; Hayati, S.
1993-01-01
The operator interface has recently emerged as an important element for efficient and safe operator interactions with a telerobotic system. Recent advances in graphical user interface (GUI) and graphics/video merging technologies enable the development of more efficient, flexible operator interfaces. This paper describes an advanced graphical operator interface newly developed for a remote surface inspection system at the Jet Propulsion Laboratory. The interface has been designed so that remote surface inspection can be performed by a single operator with integrated robot control and image inspection capability. It supports three inspection strategies: teleoperated human visual inspection, human visual inspection with automated scanning, and machine-vision-based automated inspection.
NASA Astrophysics Data System (ADS)
Buxbaum, T. M.; Warnick, W. K.; Polly, B.; Breen, K. J.
2007-12-01
The ARCUS Internet Media Archive (IMA) is a collection of photos, graphics, videos, and presentations about the Arctic and Antarctic that are shared through the Internet. It provides the polar research community and the public at large with a centralized location where images and video pertaining to polar research can be browsed and retrieved for a variety of uses. The IMA currently contains almost 6,500 publicly accessible photos, including 4,000 photos from the National Science Foundation (NSF) funded Teachers and Researchers Exploring and Collaborating (TREC) program, an educational research experience in which K-12 teachers participate in arctic research as a pathway to improving science education. The IMA is also the future home of all electronic media from the NSF funded PolarTREC program, a continuation of TREC that now takes place in both the Arctic and Antarctic. The IMA includes 450 video files, 270 audio files, nearly 100 graphics and logos, 28 presentations, and approximately 10,000 additional resources that are being prepared for public access. The contents of this archive are organized by file type, photographer's name, event, or by organization, with each photo or file accompanied by information on content, contributor source, and usage requirements. All the files are keyworded and all information, including file name and description, is completely searchable. ARCUS plans to continue to improve and expand the IMA with a particular focus on providing graphics depicting key arctic research results and findings as well as edited video archives of relevant scientific community meetings. To submit files or for more information and to view the ARCUS Internet Media Archive, please go to: http://media.arcus.org or email photo@arcus.org.
Network and user interface for PAT DOME virtual motion environment system
NASA Technical Reports Server (NTRS)
Worthington, J. W.; Duncan, K. M.; Crosier, W. G.
1993-01-01
The Device for Orientation and Motion Environments Preflight Adaptation Trainer (DOME PAT) provides astronauts a virtual microgravity sensory environment designed to help alleviate the symptoms of space motion sickness (SMS). The system consists of four microcomputers networked to provide real-time control, and an image generator (IG) driving a wide-angle video display inside a dome structure. The spherical display demands distortion correction. The system is currently being modified with a new graphical user interface (GUI) and a new Silicon Graphics IG. This paper concentrates on the new GUI and the networking scheme. The new GUI eliminates proprietary graphics hardware and software, and instead makes use of standard, low-cost PC video (CGA) and off-the-shelf software (Microsoft's Quick C). Mouse selection for user input is supported. The new Silicon Graphics IG requires an Ethernet interface. The microcomputer known as the Real Time Controller (RTC), which has overall control of the system and is written in Ada, was modified to use the free public-domain NCSA Telnet software for Ethernet communications with the Silicon Graphics IG. The RTC also maintains the original ARCNET communications through Novell Netware IPX with the rest of the system. The Telnet TCP/IP protocol was first used for real-time communication, but because of buffering problems the Telnet datagram (UDP) protocol needed to be implemented. Since the Telnet modules are written in C, the Ada pragma 'Interface' was used to interface with the network calls.
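The switch from a TCP stream to UDP datagrams noted above is a common fix for real-time control loops: a lossless, buffered stream can queue stale updates behind newer ones, while each datagram stands alone. The sketch below illustrates the pattern with Python sockets; the address, port, payload format, and callback are hypothetical and are not taken from the DOME PAT system.

```python
import socket

# Hypothetical address of the image generator (IG); not from the original paper.
IG_ADDR = ("192.168.1.20", 5005)

def send_state(sock, payload: bytes):
    # Each state update is one self-contained datagram, so a stale update is
    # never queued behind a newer one as it can be in a buffered TCP stream.
    sock.sendto(payload, IG_ADDR)

def handle(data: bytes):
    # Hypothetical application callback: parse and apply the latest state.
    print("received", data)

def run_receiver(port=5005, bufsize=2048):
    rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    rx.bind(("", port))
    while True:
        data, _addr = rx.recvfrom(bufsize)  # dropped packets are simply skipped
        handle(data)

if __name__ == "__main__":
    tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    send_state(tx, b"viewpoint 1.00 0.50 0.20")
```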
Computer-aided light sheet flow visualization using photogrammetry
NASA Technical Reports Server (NTRS)
Stacy, Kathryn; Severance, Kurt; Childers, Brooks A.
1994-01-01
A computer-aided flow visualization process has been developed to analyze video images acquired from rotating and translating light sheet visualization systems. The computer process integrates a mathematical model for image reconstruction, advanced computer graphics concepts, and digital image processing to provide a quantitative and a visual analysis capability. The image reconstruction model, based on photogrammetry, uses knowledge of the camera and light sheet locations and orientations to project two-dimensional light sheet video images into three-dimensional space. A sophisticated computer visualization package, commonly used to analyze computational fluid dynamics (CFD) results, was chosen to interactively display the reconstructed light sheet images with the numerical surface geometry for the model or aircraft under study. The photogrammetric reconstruction technique and the image processing and computer graphics techniques and equipment are described. Results of the computer-aided process applied to both a wind tunnel translating light sheet experiment and an in-flight rotating light sheet experiment are presented. The capability to compare reconstructed experimental light sheet images with CFD solutions in the same graphics environment is also demonstrated.
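The photogrammetric reconstruction step described above amounts to back-projecting each 2D light sheet pixel along its camera ray until it meets the known light sheet plane. The sketch below shows that ray-plane intersection under a standard pinhole camera model; the function name, argument conventions, and use of NumPy are assumptions, not details of the NASA software.

```python
import numpy as np

def pixel_to_lightsheet_point(pixel, K, R, t, plane_point, plane_normal):
    """Back-project an image pixel onto the known light-sheet plane.

    pixel        : (u, v) image coordinates
    K            : 3x3 camera intrinsic matrix
    R, t         : camera rotation (3x3) and translation (3,) mapping world -> camera
    plane_point  : a 3D point on the light-sheet plane (world frame)
    plane_normal : the plane's unit normal (world frame)
    """
    # Camera center and viewing ray expressed in world coordinates
    cam_center = -R.T @ t
    ray_cam = np.linalg.inv(K) @ np.array([pixel[0], pixel[1], 1.0])
    ray_world = R.T @ ray_cam

    # Intersect the ray with the plane: cam_center + s * ray_world lies on the plane
    s = np.dot(plane_normal, plane_point - cam_center) / np.dot(plane_normal, ray_world)
    return cam_center + s * ray_world
```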
Computer-Aided Light Sheet Flow Visualization
NASA Technical Reports Server (NTRS)
Stacy, Kathryn; Severance, Kurt; Childers, Brooks A.
1993-01-01
A computer-aided flow visualization process has been developed to analyze video images acquired from rotating and translating light sheet visualization systems. The computer process integrates a mathematical model for image reconstruction, advanced computer graphics concepts, and digital image processing to provide a quantitative and visual analysis capability. The image reconstruction model, based on photogrammetry, uses knowledge of the camera and light sheet locations and orientations to project two-dimensional light sheet video images into three-dimensional space. A sophisticated computer visualization package, commonly used to analyze computational fluid dynamics (CFD) data sets, was chosen to interactively display the reconstructed light sheet images, along with the numerical surface geometry for the model or aircraft under study. A description is provided of the photogrammetric reconstruction technique, and the image processing and computer graphics techniques and equipment. Results of the computer aided process applied to both a wind tunnel translating light sheet experiment and an in-flight rotating light sheet experiment are presented. The capability to compare reconstructed experimental light sheet images and CFD solutions in the same graphics environment is also demonstrated.
A Standard-Compliant Virtual Meeting System with Active Video Object Tracking
NASA Astrophysics Data System (ADS)
Lin, Chia-Wen; Chang, Yao-Jen; Wang, Chih-Ming; Chen, Yung-Chang; Sun, Ming-Ting
2002-12-01
This paper presents an H.323 standard-compliant virtual video conferencing system. The proposed system not only serves as a multipoint control unit (MCU) for multipoint connection but also provides a gateway function between H.323 LAN (local-area network) and H.324 WAN (wide-area network) users. The proposed virtual video conferencing system provides user-friendly object compositing and manipulation features including 2D video object scaling, repositioning, rotation, and dynamic bit allocation in a 3D virtual environment. A reliable and accurate scheme based on background image mosaics is proposed for extracting and tracking foreground video objects in real time from video captured with an active camera. Chroma-key insertion is used to facilitate video object extraction and manipulation. We have implemented a prototype of the virtual conference system with an integrated graphical user interface to demonstrate the feasibility of the proposed methods.
2010-09-01
[List of figures excerpt: Figure 26, image of the phased array antenna; Figure 38, computation of correction angle from array factor and sum/difference beams; Figure 39, front panel of the tracking algorithm.]
Li, Xiangrui; Lu, Zhong-Lin
2012-02-29
Display systems based on conventional computer graphics cards are capable of generating images with 8-bit gray-level resolution. However, most experiments in vision research require displays with more than 12 bits of luminance resolution. Several solutions are available. Bit++ (1) and DataPixx (2) use the Digital Visual Interface (DVI) output from graphics cards and high-resolution (14- or 16-bit) digital-to-analog converters to drive analog display devices. The VideoSwitcher (3) described here combines analog video signals from the red and blue channels of graphics cards with different weights using a passive resistor network (4) and an active circuit to deliver identical video signals to the three channels of color monitors. The method provides an inexpensive way to enable high-resolution monochromatic displays using conventional graphics cards and analog monitors. It can also provide trigger signals that can be used to mark stimulus onsets, making it easy to synchronize visual displays with physiological recordings or response time measurements. Although computer keyboards and mice are frequently used in measuring response times (RT), the accuracy of these measurements is quite low. The RTbox is a specialized hardware and software solution for accurate RT measurements. Connected to the host computer through a USB connection, the driver of the RTbox is compatible with all conventional operating systems. It uses a microprocessor and a high-resolution clock to record the identities and timing of button events, which are buffered until the host computer retrieves them. The recorded button events are not affected by potential timing uncertainties or biases associated with data transmission and processing in the host computer. The asynchronous storage greatly simplifies the design of user programs. Several methods are available to synchronize the clocks of the RTbox and the host computer. The RTbox can also receive external triggers and be used to measure RT with respect to external events. Both the VideoSwitcher and the RTbox are available for users to purchase. The relevant information and many demonstration programs can be found at http://lobes.usc.edu/.
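To make the channel-weighting idea concrete, here is a minimal sketch of how a high-resolution luminance value could be split across two 8-bit channels that are later summed with unequal analog weights. The weight value, function name, and scaling are illustrative assumptions; the actual VideoSwitcher resistor ratios and calibration are documented by its authors.

```python
import numpy as np

def split_luminance(lum, w=128.0):
    """Encode a high-resolution luminance value into two 8-bit channel values.

    lum : desired luminance on a 0..(255 + 255/w) scale (illustrative units)
    w   : assumed attenuation factor applied to the second channel by the
          passive resistor network (the real device's weights may differ)
    """
    coarse = np.clip(np.floor(lum), 0, 255)                # driven at full weight
    fine = np.clip(np.round((lum - coarse) * w), 0, 255)   # attenuated channel
    return coarse.astype(np.uint8), fine.astype(np.uint8)

# The combined analog output is then approximately coarse + fine / w,
# giving luminance steps of 1/w instead of the single-channel step of 1.
```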
The Photovoltaic Array Space Power plus Diagnostics (PASP Plus) Flight Experiment
NASA Technical Reports Server (NTRS)
Piszczor, Michael F.; Curtis, Henry B.; Guidice, Donald A.; Severance, Paul S.
1992-01-01
An overview of the Photovoltaic Array Space Power Plus Diagnostics (PASP Plus) flight experiment is presented in outline and graphic form. The goal of the experiment is to test a variety of photovoltaic cell and array technologies under various space environmental conditions. Experiment objectives, flight hardware, experiment control and diagnostic instrumentation, and illuminated thermal vacuum testing are addressed.
The SCEC/UseIT Intern Program: Creating Open-Source Visualization Software Using Diverse Resources
NASA Astrophysics Data System (ADS)
Francoeur, H.; Callaghan, S.; Perry, S.; Jordan, T.
2004-12-01
The Southern California Earthquake Center undergraduate IT intern program (SCEC UseIT) conducts IT research to benefit collaborative earth science research. Through this program, interns have developed real-time, interactive, 3D visualization software using open-source tools. Dubbed LA3D, a distribution of this software is now in use by the seismic community. LA3D enables the user to interactively view Southern California datasets and models of importance to earthquake scientists, such as faults, earthquakes, fault blocks, digital elevation models, and seismic hazard maps. LA3D is now being extended to support visualizations anywhere on the planet. The new software, called SCEC-VIDEO (Virtual Interactive Display of Earth Objects), makes use of a modular, plugin-based software architecture which supports easy development and integration of new data sets. Currently SCEC-VIDEO is in beta testing, with a full open-source release slated for the future. Both LA3D and SCEC-VIDEO were developed using a wide variety of software technologies, including relational databases, web services, software management tools, and 3D graphics in Java, which were necessary to integrate the heterogeneous array of data sources the software draws on. Currently the interns are working to integrate new technologies and larger data sets to increase the software's functionality and value. In addition, both LA3D and SCEC-VIDEO allow the user to script and create movies. Program interns with computer science backgrounds have therefore been writing software, while interns with other interests, such as cinema, geology, and education, have been making movies that have proved of great use in scientific talks, media interviews, and education. SCEC UseIT thus draws on a wide variety of scientific and human resources to create products of value to the scientific and outreach communities. The program plans to continue its interdisciplinary approach, increasing the relevance of the software and expanding its use in the scientific community.
2016-04-27
Name/Title of Video: Marshall Space Flight Center Media Resource Reel 2016. Description: Edited b-roll video of NASA's Marshall Space Flight Center in Huntsville, Ala., and of various projects and programs located at or associated with the center. For more information and more detailed footage, please contact the center's Public & Employee Communications Office. Graphic Information: PAO Name: Jennifer Stanfield. Phone Number: 256-544-0034. Email Address: jennifer.stanfield@nasa.gov
Watermarking 3D Objects for Verification
1999-01-01
signal (audio/image/video) processing and steganography fields, and even newer to the computer graphics community. Inherently, digital watermarking of ... quality images, and digital video. The field of digital watermarking is relatively new, and many of its terms have not been well defined. Among the different media types, watermarking of 2D still images is comparatively better studied. Inherently, digital watermarking of 3D objects remains a
On Target: Organizing and Executing the Strategic Air Campaign Against Iraq
2002-01-01
possession, use, sale, creation or display of any pornographic photograph, videotape, movie, drawing, book, or magazine or similar representations. This ... forward-looking infrared (FLIR) sensor to create daylight-quality video images of terrain and utilized terrain-following radar to enable the aircraft to ... The Black Hole Planners had pleaded with CENTAF Intel to provide them with photos of targets, provide additional personnel to analyze PGM video
Visualization of fluid dynamics at NASA Ames
NASA Technical Reports Server (NTRS)
Watson, Val
1989-01-01
The hardware and software currently used for visualization of fluid dynamics at NASA Ames is described. The software includes programs to create scenes (for example particle traces representing the flow over an aircraft), programs to interactively view the scenes, and programs to control the creation of video tapes and 16mm movies. The hardware includes high performance graphics workstations, a high speed network, digital video equipment, and film recorders.
NASA Astrophysics Data System (ADS)
Buxbaum, T. M.; Warnick, W. K.; Polly, B.; Hueffer, L. J.; Behr, S. A.
2006-12-01
The ARCUS Internet Media Archive (IMA) is a collection of photos, graphics, videos, and presentations about the Arctic that are shared through the Internet. It provides the arctic research community and the public at large with a centralized location where images and video pertaining to polar research can be browsed and retrieved for a variety of uses. The IMA currently contains almost 5,000 publicly accessible photos, including 3,000 photos from the National Science Foundation funded Teachers and Researchers Exploring and Collaborating (TREC) program, an educational research experience in which K-12 teachers participate in arctic research as a pathway to improving science education. The IMA also includes 360 video files, 260 audio files, and approximately 8,000 additional resources that are being prepared for public access. The contents of this archive are organized by file type, contributor's name, event, or by organization, with each photo or file accompanied by information on content, contributor source, and usage requirements. All the files are keyworded and all information, including file name and description, is completely searchable. ARCUS plans to continue to improve and expand the IMA with a particular focus on providing graphics depicting key arctic research results and findings as well as edited video archives of relevant scientific community meetings.
Design of multi-mode compatible image acquisition system for HD area array CCD
NASA Astrophysics Data System (ADS)
Wang, Chen; Sui, Xiubao
2014-11-01
In line with the current trends in video surveillance toward digitization and high definition, a multi-mode-compatible image acquisition system for an HD area array CCD is designed. The hardware and software designs of a color video capture system for the KAI-02150 HD area array CCD from Truesense Imaging are analyzed, and the structural parameters of the HD area array CCD and the color video acquisition principle of the system are introduced. The CCD control sequence and the timing logic of the whole capture system are then realized. Video signal noise (kTC noise and 1/f noise) is filtered using the correlated double sampling (CDS) technique to enhance the signal-to-noise ratio of the system. Compatible hardware and software designs are also presented for two other image sensors of the same series, the KAI-04050 and KAI-08050, which provide four million and eight million effective pixels, respectively. A field programmable gate array (FPGA) is adopted as the key controller of the system to perform a top-down modular design, which implements the hardware design in software-describable form and improves development efficiency. Finally, the required timing drive signals are simulated accurately using the Quartus II 12.1 development platform with VHDL. The simulation results indicate that the drive circuit has a simple structure, low power consumption, and strong immunity to interference, meeting current demands for miniaturization and high definition.
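Correlated double sampling, as used above to suppress kTC noise, is conceptually a per-pixel subtraction of the reset (reference) level from the signal level. The sketch below shows that arithmetic on digitized samples; in the actual system this step is performed in analog circuitry or FPGA logic, and the array names here are illustrative assumptions.

```python
import numpy as np

def correlated_double_sampling(reset_frame, signal_frame):
    """Correlated double sampling on digitized CCD output.

    reset_frame, signal_frame : 2D arrays of the same pixels, sampled first at
    the reset (reference) level and then after charge transfer. Subtracting the
    two cancels offset noise, such as kTC noise, that is common to both samples.
    """
    return signal_frame.astype(np.int32) - reset_frame.astype(np.int32)
```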
SEURAT: Visual analytics for the integrated analysis of microarray data
2010-01-01
Background: In translational cancer research, gene expression data is collected together with clinical data and genomic data arising from other chip-based high-throughput technologies. Software tools for the joint analysis of such high-dimensional data sets together with clinical data are required. Results: We have developed an open-source software tool which provides interactive visualization capability for the integrated analysis of high-dimensional gene expression data together with associated clinical data, array CGH data, and SNP array data. The different data types are organized by a comprehensive data manager. Interactive tools are provided for all graphics: heatmaps, dendrograms, barcharts, histograms, eventcharts, and a chromosome browser, which displays genetic variations along the genome. All graphics are dynamic and fully linked, so that any object selected in one graphic is highlighted in all other graphics. For exploratory data analysis the software provides unsupervised analytics such as clustering, seriation algorithms, and biclustering algorithms. Conclusions: The SEURAT software meets the growing need of researchers to perform joint analysis of gene expression, genomic, and clinical data. PMID:20525257
A voxel visualization and analysis system based on AutoCAD
NASA Astrophysics Data System (ADS)
Marschallinger, Robert
1996-05-01
A collection of AutoLISP programs is presented which enables the visualization and analysis of voxel models with AutoCAD rel. 12/rel. 13. The programs serve as an interactive, graphical front end for manipulating the results of three-dimensional modeling software that produces block estimation data. ASCII data files describing the geometry and attributes of each estimation block are imported and stored as a voxel array. Each voxel may contain multiple attributes, so different parameters may be incorporated in one voxel array. Voxel classification is implemented on a layer basis, providing flexible treatment of voxel classes such as recoloring, peeling, or volumetry. A versatile clipping tool enables slicing voxel arrays according to combinations of three perpendicular clipping planes. The programs feature an up-to-date graphical user interface for user-friendly operation by non-AutoCAD specialists.
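The clipping operation described above, slicing a voxel array with up to three perpendicular planes, can be sketched as simple axis-aligned masking. The NumPy-based example below is only an illustration of the idea; the original programs operate on AutoCAD entities via AutoLISP, and the function and parameter names here are assumptions.

```python
import numpy as np

def clip_voxels(values, x_cut=None, y_cut=None, z_cut=None):
    """Hide voxels on one side of up to three axis-aligned clipping planes.

    values : 3D array of voxel attribute values, indexed by (i, j, k)
    x_cut, y_cut, z_cut : integer plane positions; voxels at indices >= the cut
    are masked out (returned as NaN) so the interior of the model is exposed.
    """
    clipped = values.astype(float)
    if x_cut is not None:
        clipped[x_cut:, :, :] = np.nan
    if y_cut is not None:
        clipped[:, y_cut:, :] = np.nan
    if z_cut is not None:
        clipped[:, :, z_cut:] = np.nan
    return clipped
```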
Take-home video for adult literacy
NASA Astrophysics Data System (ADS)
Yule, Valerie
1996-01-01
In the past, it has not been possible to "teach oneself to read" at home, because learners could not read the books to teach them. Videos and interactive compact discs have changed that situation and challenge current assumptions of the pedagogy of literacy. This article describes an experimental adult literacy project using video technology. The language used is English, but the basic concepts apply to any alphabetic or syllabic writing system. A half-hour cartoon video can help adults and adolescents with learning difficulties. Computer-animated cartoon graphics are attractive to look at, and simplify complex material in a clear, lively way. This video technique is also proving useful for distance learners, children, and learners of English as a second language. Methods and principles are to be extended using interactive compact discs.
Evaluating video digitizer errors
NASA Astrophysics Data System (ADS)
Peterson, C.
2016-01-01
Analog-output video cameras remain popular for recording meteor data. Although these cameras uniformly employ electronic detectors with fixed pixel arrays, the digitization process requires resampling the horizontal lines as they are output in order to reconstruct the pixel data, usually resulting in a new data array whose horizontal dimensions differ from those of the native sensor. Pixel timing is not provided by the camera and must be reconstructed from line sync information embedded in the analog video signal. Using a technique based on hot pixels, I present evidence that jitter, sync detection, and other timing errors introduce both position and intensity errors which are not present in cameras that internally digitize their sensors and output the digital data directly.
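One way to expose such timing errors is to track the apparent position of a known hot pixel across digitized frames: because a hot pixel is fixed on the detector, any frame-to-frame scatter in its measured column reflects sampling or sync error rather than real motion. The sketch below is a hypothetical illustration of that idea, not the author's procedure; the function, window size, and background handling are assumptions.

```python
import numpy as np

def hot_pixel_jitter(frames, row, col, window=3):
    """Estimate horizontal digitizer jitter from a known hot pixel.

    frames : sequence of 2D grayscale frames from the digitized analog video
    row, col : nominal location of a hot pixel on the sensor
    window : half-width of the column window searched around the nominal column

    Returns the per-frame centroid column of the hot pixel; scatter in this
    value indicates line-sampling (sync/jitter) errors rather than real motion.
    """
    cols = np.arange(col - window, col + window + 1)
    centroids = []
    for f in frames:
        strip = f[row, col - window: col + window + 1].astype(float)
        strip -= strip.min()  # remove the local background level
        centroids.append(np.nan if strip.sum() == 0
                         else np.sum(cols * strip) / strip.sum())
    return np.array(centroids)
```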
Real-time radar signal processing using GPGPU (general-purpose graphic processing unit)
NASA Astrophysics Data System (ADS)
Kong, Fanxing; Zhang, Yan Rockee; Cai, Jingxiao; Palmer, Robert D.
2016-05-01
This study introduces a practical approach to developing a real-time signal processing chain for a general phased array radar on NVIDIA GPUs (graphics processing units) using CUDA (Compute Unified Device Architecture) libraries such as cuBLAS and cuFFT, which are adopted from open-source libraries and optimized for NVIDIA GPUs. The processed results are rigorously verified against those from CPUs. Performance, benchmarked as computation time for various input data cube sizes, is compared across GPUs and CPUs. The analysis demonstrates that GPGPU (general-purpose GPU) real-time processing of array radar data is possible with relatively low-cost commercial GPUs.
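The GPU-versus-CPU verification pattern described above can be sketched with CuPy, whose FFT and linear algebra routines are backed by cuFFT and cuBLAS. This is not the authors' processing chain: the pulse-compression step, array sizes, and tolerance below are illustrative assumptions, and running it requires a CUDA-capable GPU with CuPy installed.

```python
import numpy as np
import cupy as cp

def pulse_compress_gpu(data_cube, ref_waveform):
    """FFT-based matched filtering along the fast-time (last) axis on the GPU."""
    d = cp.asarray(data_cube)
    h = cp.conj(cp.fft.fft(cp.asarray(ref_waveform)))
    out = cp.fft.ifft(cp.fft.fft(d, axis=-1) * h, axis=-1)
    return cp.asnumpy(out)  # copy back to the host for comparison

def pulse_compress_cpu(data_cube, ref_waveform):
    """Reference implementation on the CPU with NumPy."""
    h = np.conj(np.fft.fft(ref_waveform))
    return np.fft.ifft(np.fft.fft(data_cube, axis=-1) * h, axis=-1)

# Verify the GPU result against the CPU reference on a small synthetic
# data cube (channels x pulses x fast-time samples).
rng = np.random.default_rng(0)
cube = rng.standard_normal((4, 64, 1024)) + 1j * rng.standard_normal((4, 64, 1024))
chirp = np.exp(1j * np.pi * np.linspace(0.0, 1.0, 1024) ** 2)
assert np.allclose(pulse_compress_gpu(cube, chirp), pulse_compress_cpu(cube, chirp))
```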
A system for the real-time display of radar and video images of targets
NASA Technical Reports Server (NTRS)
Allen, W. W.; Burnside, W. D.
1990-01-01
Described here is a software and hardware system for the real-time display of radar and video images for use in a measurement range. The main purpose is to give the reader a clear idea of the software and hardware design and its functions. The system is designed around a Tektronix XD88-30 graphics workstation, used to display radar images superimposed on video images of the actual target. It provides a platform for the analysis and documentation of radar images and their associated targets in a menu-driven, user-oriented environment.
Graphics performance in rich Internet applications.
Hoetzlein, Rama C
2012-01-01
Rendering performance for rich Internet applications (RIAs) has recently focused on the debate between using Flash and HTML5 for streaming video and gaming on mobile devices. A key area not widely explored, however, is the scalability of raw bitmap graphics performance for RIAs. Does Flash render animated sprites faster than HTML5? How much faster is WebGL than Flash? Answers to these questions are essential for developing large-scale data visualizations, online games, and truly dynamic websites. A new test methodology analyzes graphics performance across RIA frameworks and browsers, revealing specific performance outliers in existing frameworks. The results point toward a future in which all online experiences might be GPU accelerated.
Program Helps Generate And Manage Graphics
NASA Technical Reports Server (NTRS)
Truong, L. V.
1994-01-01
Living Color Frame Maker (LCFM) computer program generates computer-graphics frames. Graphical frames are saved as text files in a readable, disclosed format and are easily retrieved and manipulated by user programs for a wide range of real-time visual information applications. LCFM is implemented in a frame-based expert system for visual aids in the management of systems. For monitoring, diagnosis, and/or control, diagrams of circuits or systems are brought to "life" by use of designated video colors and intensities to symbolize the status of hardware components (via real-time feedback from sensors). Status of systems can be displayed. Written in C++ using the Borland C++ 2.0 compiler for IBM PC-series and compatible computers running MS-DOS.
ISS Expedition 42 Time Lapse Video of Earth
2015-05-18
This time lapse video taken during ISS Expedition 42 is assembled from JSC still photo collection (still photos iss042e190769 - iss042e191096). Shows Earth views. Solar Array Wing (SAW) in foreground.
ISS Expedition 42 Time Lapse Video of Earth
2015-05-18
This time lapse video taken during ISS Expedition 42 is assembled from JSC still photo collection (still photos iss042e330173 - iss042e331530). Shows Earth views. Solar Array Wing (SAW) in foreground.
ISS Expedition 42 Time Lapse Video of Earth
2015-05-18
This time lapse video taken during ISS Expedition 42 is assembled from JSC still photo collection (still photos iss042e238532 - iss042e239150). Shows Earth views. Solar Array Wing (SAW) in foreground.
ISS Expedition 42 Time Lapse Video of Earth
2015-05-18
This time lapse video taken during ISS Expedition 42 is assembled from JSC still photo collection (still photos iss042e177446 - iss042e178444). Shows Earth views. Solar Array Wing (SAW) in foreground.
ISS Expedition 42 Time Lapse Video of Earth
2015-05-18
This time lapse video taken during ISS Expedition 42 is assembled from JSC still photo collection (still photos iss042e110489 - iss042e111902). Shows Earth views. Solar Array Wing (SAW) in foreground.
ISS Expedition 42 Time Lapse Video of Earth
2015-05-18
This time lapse video taken during ISS Expedition 42 is assembled from JSC still photo collection (still photos iss042e212874 - iss042e213080). Shows Earth views. Solar Array Wing (SAW) in foreground.
ISS Expedition 42 Time Lapse Video of Earth
2015-05-18
This time lapse video taken during ISS Expedition 42 is assembled from JSC still photo collection (still photos iss042e285752 - iss042e286830). Shows Earth views. Solar Array Wing (SAW) in foreground.
ISS Expedition 42 Time Lapse Video of Earth
2015-05-18
This time lapse video taken during ISS Expedition 42 is assembled from JSC still photo collection (still photos iss042e116561 - iss042e117265). Shows Earth views. Solar Array Wing (SAW) in foreground.
An Intuitive Graphical User Interface for Small UAS
2013-05-01
reduced from two to one. The stock displays, including video with text overlay on one and FalconView on the other, are replaced with a single, graphics ... INTRODUCTION: Tactical UAVs such as the Raven, Puma and Wasp are often used by dismounted warfighters on missions that require a high degree of mobility by ... the operators on the ground. The current ground control stations (GCS) for the Wasp, Raven and Puma tactical UAVs require two people and two user
Learning Projectile Motion with the Computer Game ``Scorched 3D``
NASA Astrophysics Data System (ADS)
Jurcevic, John S.
2008-01-01
For most of our students, video games are a normal part of their lives. We should take advantage of this medium to teach physics in a manner that is engrossing for our students. In particular, modern video games incorporate accurate physics in their game engines, and they allow us to visualize the physics through flashy and captivating graphics. I recently used the game "Scorched 3D" to help my students understand projectile motion.
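For readers who want to connect the game's artillery shots to the underlying kinematics, a minimal drag-free projectile calculation looks like the sketch below; the parameter values are arbitrary examples, not settings taken from Scorched 3D, which also models effects such as wind.

```python
import math

def projectile_trajectory(speed, angle_deg, g=9.81, dt=0.01):
    """Return (x, y) points of an ideal drag-free projectile until it lands."""
    angle = math.radians(angle_deg)
    vx, vy = speed * math.cos(angle), speed * math.sin(angle)
    x = y = t = 0.0
    points = [(x, y)]
    while y >= 0.0:
        t += dt
        x = vx * t
        y = vy * t - 0.5 * g * t * t
        points.append((x, y))
    return points

# Example: a 45-degree shot at 50 m/s lands at roughly speed**2 / g, about 255 m.
path = projectile_trajectory(50.0, 45.0)
print(f"range ~ {path[-1][0]:.1f} m, flight time ~ {len(path) * 0.01:.2f} s")
```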
Multi-tasking computer control of video related equipment
NASA Technical Reports Server (NTRS)
Molina, Rod; Gilbert, Bob
1989-01-01
The flexibility, cost-effectiveness and widespread availability of personal computers now makes it possible to completely integrate the previously separate elements of video post-production into a single device. Specifically, a personal computer, such as the Commodore-Amiga, can perform multiple and simultaneous tasks from an individual unit. Relatively low cost, minimal space requirements and user-friendliness, provides the most favorable environment for the many phases of video post-production. Computers are well known for their basic abilities to process numbers, text and graphics and to reliably perform repetitive and tedious functions efficiently. These capabilities can now apply as either additions or alternatives to existing video post-production methods. A present example of computer-based video post-production technology is the RGB CVC (Computer and Video Creations) WorkSystem. A wide variety of integrated functions are made possible with an Amiga computer existing at the heart of the system.
Photos & Graphics: Urticaria (Hives) and Angioedema
47 CFR 51.5 - Terms and definitions.
Code of Federal Regulations, 2013 CFR
2013-10-01
... voice, data, graphics or video telecommunications using any technology. Arbitration, final offer. Final... 1344, Jan. 10, 2000; 65 FR 2550, Jan. 18, 2000; 65 FR 54438, Sept. 8, 2000; 66 FR 43521, Aug. 20, 2001...
ISS Expedition 42 Time Lapse Video of Earth
2015-05-18
This time lapse video taken during ISS Expedition 42 is assembled from JSC still photo collection (still photos iss042e334978 - iss042e335976). Shows Earth views. Solar Array Wing (SAW) comes into view.
International Space Station (ISS)
2000-12-04
This video still depicts the recently deployed starboard and port solar arrays towering over the International Space Station (ISS). The video was recorded on STS-97's 65th orbit. Delivery, assembly, and activation of the solar arrays were the main mission objective of STS-97. The electrical power system, which is built into a 73-meter (240-foot) long solar array structure, consists of solar arrays, radiators, batteries, and electronics, and will provide the power necessary for the first ISS crews to live and work in the U.S. segment. The entire 15.4-metric ton (17-ton) package is called the P6 Integrated Truss Segment, and is the heaviest and largest element yet delivered to the station aboard a space shuttle. The STS-97 crew of five launched aboard the Space Shuttle Orbiter Endeavour on November 30, 2000 for an 11-day mission.
(abstract) Synthesis of Speaker Facial Movements to Match Selected Speech Sequences
NASA Technical Reports Server (NTRS)
Scott, Kenneth C.
1994-01-01
We are developing a system for synthesizing image sequences that simulate the facial motion of a speaker. To perform this synthesis, we are pursuing two major areas of effort. We are developing the necessary computer graphics technology to synthesize a realistic image sequence of a person speaking selected speech sequences. Next, we are developing a model that expresses the relation between spoken phonemes and face/mouth shape. A subject is videotaped speaking an arbitrary text that contains expressions of the full list of desired database phonemes. The subject is videotaped from the front speaking normally, recording both audio and video detail simultaneously. Using the audio track, we identify the specific video frames on the tape relating to each spoken phoneme. From this range we digitize the video frame which represents the extreme of mouth motion/shape. Thus, we construct a database of images of face/mouth shape related to spoken phonemes. A selected audio speech sequence is recorded as the basis for synthesizing a matching video sequence; the speaker need not be the same as the one used for constructing the database. The audio sequence is analyzed to determine the spoken phoneme sequence and the relative timing of the enunciation of those phonemes. Synthesizing an image sequence corresponding to the spoken phoneme sequence is accomplished using a graphics technique known as morphing. Image sequence keyframes necessary for this processing are based on the spoken phoneme sequence and timing. We have been successful in synthesizing the facial motion of a native English speaker for a small set of arbitrary speech segments. Our future work will focus on advancement of the face shape/phoneme model and independent control of facial features.
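As a rough illustration of generating in-between frames from phoneme keyframes, the sketch below cross-dissolves two mouth-shape images. This is a simplification of the morphing described above, which would also warp corresponding feature points (lip corners, jaw line) between the keyframes; the function and array names are assumptions.

```python
import numpy as np

def dissolve_sequence(key_a, key_b, n_frames):
    """Generate in-between frames from two keyframe images by linear blending.

    key_a, key_b : uint8 images of the same shape (e.g., mouth-region crops)
    n_frames     : number of output frames, including both endpoints
    """
    a = key_a.astype(float)
    b = key_b.astype(float)
    frames = []
    for t in np.linspace(0.0, 1.0, n_frames):
        # Pure cross-dissolve; a full morph would also interpolate geometry.
        frames.append(((1.0 - t) * a + t * b).astype(np.uint8))
    return frames
```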
Software Graphics Processing Unit (sGPU) for Deep Space Applications
NASA Technical Reports Server (NTRS)
McCabe, Mary; Salazar, George; Steele, Glen
2015-01-01
A graphics processing capability will be required for deep space missions and must cover a range of applications, from safety-critical vehicle health status to telemedicine for crew health. However, preliminary radiation testing of commercial graphics processing cards suggests they cannot operate in the deep space radiation environment. Investigation into a software graphics processing unit (sGPU) composed of commercial-equivalent radiation-hardened/tolerant single-board computers, field programmable gate arrays, and safety-critical display software shows promising results. Preliminary performance of approximately 30 frames per second (FPS) has been achieved. Use of multi-core processors may provide a significant increase in performance.
ISS Expedition 42 Time Lapse Video of Earth
2015-05-18
This time lapse video taken during ISS Expedition 42 is assembled from JSC still photo collection (still photos iss042e210380 - iss042e211441). Shows Earth views. Solar Array Wing (SAW) in and out of view.
Storyboard Development for Interactive Multimedia Training.
ERIC Educational Resources Information Center
Orr, Kay L.; And Others
1994-01-01
Discusses procedures for storyboard development and provides guidelines for designing interactive multimedia courseware, including interactivity, learner control, feedback, visual elements, motion video, graphics/animation, text, audio, and programming. A topical bibliography that lists 98 items is included. (LRW)
Computer Vision Assisted Virtual Reality Calibration
NASA Technical Reports Server (NTRS)
Kim, W.
1999-01-01
A computer vision assisted semi-automatic virtual reality (VR) calibration technology has been developed that can accurately match a virtual environment of graphically simulated three-dimensional (3-D) models to the video images of the real task environment.
"Tuberculosis Case Management" Training.
ERIC Educational Resources Information Center
Knebel, Elisa; Kolodner, Jennifer
2001-01-01
The need to provide isolated health providers with critical knowledge in tuberculosis (TB) case management prompted the development of the "Tuberculosis Case Management" CD-ROM. Features include a "Learning Center," "Examination Room," and "Library." The combination of audio, video, and graphics allows participants to…
Fort Hood: Home of the Third Corps
1989-01-01
[Table of contents excerpt: Special Effects Design; Post ...; Digital Special Effects.] ... geographic sphere of responsibility. The use of aerial video is effective in showing the vast size and varying terrain of the ... graphics depict
HVS: an image-based approach for constructing virtual environments
NASA Astrophysics Data System (ADS)
Zhang, Maojun; Zhong, Li; Sun, Lifeng; Li, Yunhao
1998-09-01
Virtual reality systems can construct virtual environments which provide an interactive walkthrough experience. Traditionally, walkthrough is performed by modeling and rendering 3D computer graphics in real time. Despite the rapid advance of computer graphics techniques, the rendering engine usually places a limit on scene complexity and rendering quality. This paper presents an approach which uses real-world or synthesized images to build a virtual environment. The real-world or synthesized images can be recorded by camera, or synthesized by off-line multispectral image processing of Landsat TM (Thematic Mapper) and SPOT HRV imagery. They are digitally warped on the fly to simulate walking forward/backward, moving left/right, and 360-degree panning. We have developed a system, HVS (Hyper Video System), based on these principles. HVS improves upon QuickTime VR and Surround Video in walking forward/backward.
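A minimal way to see the warp-on-the-fly idea is to synthesize panning by sliding a wrapped window across a 360-degree panorama and to approximate forward/backward motion by cropping toward the image centre and resampling. The sketch below is a simplified stand-in for the warps used in such systems; the function names and nearest-neighbour resampling are assumptions.

```python
import numpy as np

def pan_view(panorama, start_col, width):
    """Extract a viewing window from a 360-degree panorama, wrapping at the seam."""
    cols = np.arange(start_col, start_col + width) % panorama.shape[1]
    return panorama[:, cols]

def zoom_view(image, factor):
    """Approximate walking forward (factor > 1) by cropping toward the centre
    and resampling back to the original size (nearest-neighbour for brevity)."""
    h, w = image.shape[:2]
    ch, cw = int(h / factor), int(w / factor)
    top, left = (h - ch) // 2, (w - cw) // 2
    crop = image[top:top + ch, left:left + cw]
    rows = (np.arange(h) * ch / h).astype(int)
    cols = (np.arange(w) * cw / w).astype(int)
    return crop[rows][:, cols]
```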
Optimized Two-Party Video Chat with Restored Eye Contact Using Graphics Hardware
NASA Astrophysics Data System (ADS)
Dumont, Maarten; Rogmans, Sammy; Maesen, Steven; Bekaert, Philippe
We present a practical system prototype to convincingly restore eye contact between two video chat participants, with a minimal amount of constraints. The proposed six-fold camera setup is easily integrated into the monitor frame and is used to interpolate an image as if a virtual camera had captured it through a transparent screen. The peer user has a large freedom of movement, resulting in system specifications that enable genuine practical usage. Our software framework harnesses the powerful computational resources inside graphics hardware and maximizes arithmetic intensity to achieve better than real-time performance, up to 42 frames per second for 800×600 resolution images. Furthermore, an optimal set of fine-tuned parameters is presented that optimizes the end-to-end performance of the application to achieve high subjective visual quality, while still allowing further algorithmic advancement without losing real-time capability.
Military Review: The Professional Journal of the U.S. Army. January-February 2002
2002-02-01
Internet.”9 He accuses bin Laden of hiding maps and photos of targets and of posting instructions on sports chat rooms, pornographic bulletin boards ... anything unusual. Messages can be hidden in audio, video, or still image files, with information stored in the least significant bits of a digitized file ... steganography, embedding secret messages in other messages to prevent observers from suspecting anything unusual. Messages can be hidden in audio, video, or
Computer-based desktop system for surgical videotape editing.
Vincent-Hamelin, E; Sarmiento, J M; de la Puente, J M; Vicente, M
1997-05-01
The educational role of surgical video presentations should be optimized by linking surgical images to a graphic evaluation of indications, techniques, and results. We describe a PC-based video production system for personal editing of surgical tapes according to the objectives of each presentation. The hardware requirement is a personal computer (100-MHz processor, 1-GB hard disk, 16 MB RAM) with a PC-to-TV/video transfer card plugged into a slot. Computer-generated numerical data, texts, and graphics are transformed into analog signals displayed on TV/video. A Genlock interface (a special interface card) synchronizes the digital and analog signals to overlay the surgical images with electronic illustrations. The presentation is stored as digital information or recorded on a tape. The proliferation of multimedia tools is leading us to adapt presentations to the objectives of lectures and to integrate conceptual analyses with dynamic image-based information. We describe a system that handles both digital and analog signals, with the production being recorded on tape. Movies may be managed in a digital environment, with either an "on-line" or "off-line" approach. System requirements are high, but handling a single device optimizes editing without incurring such complexity that management becomes impractical for surgeons. Our experience suggests that computerized editing allows linking surgical scientific and didactic messages on a single communication medium, either a videotape or a CD-ROM.
STS-74/Mir photogrammetric appendage structural dynamics experiment
NASA Technical Reports Server (NTRS)
Welch, Sharon S.; Gilbert, Michael G.
1996-01-01
The Photogrammetric Appendage Structural Dynamics Experiment (PASDE) is an International Space Station (ISS) Phase-1 risk mitigation experiment. Phase-1 experiments are performed during docking missions of the U.S. Space Shuttle to the Russian Space Station Mir. The purpose of the experiment is to demonstrate the use of photogrammetric techniques for determination of structural dynamic mode parameters of solar arrays and other spacecraft appendages. Photogrammetric techniques are a low cost alternative to appendage mounted accelerometers for the ISS program. The objective of the first flight of PASDE, on STS-74 in November 1995, was to obtain video images of Mir Kvant-2 solar array response to various structural dynamic excitation events. More than 113 minutes of high quality structural response video data was collected during the mission. The PASDE experiment hardware consisted of three instruments each containing two video cameras, two video tape recorders, a modified video signal time inserter, and associated avionics boxes. The instruments were designed, fabricated, and tested at the NASA Langley Research Center in eight months. The flight hardware was integrated into standard Hitchhiker canisters at the NASA Goddard Space Flight Center and then installed into the Space Shuttle cargo bay in locations selected to achieve good video coverage and photogrammetric geometry.
Sanges, Remo; Cordero, Francesca; Calogero, Raffaele A
2007-12-15
OneChannelGUI is an add-on Bioconductor package providing a new set of functions that extend the capability of the affylmGUI package. This library provides a graphical user interface (GUI) to Bioconductor libraries for quality control, normalization, filtering, statistical validation and data mining of single-channel microarrays. Affymetrix 3' expression (IVT) arrays as well as the new whole-transcript expression arrays, i.e. gene/exon 1.0 ST, are currently supported. oneChannelGUI is available for most platforms on which R runs, i.e. Windows and Unix-like machines. http://www.bioconductor.org/packages/2.0/bioc/html/oneChannelGUI.html
Development of High-speed Visualization System of Hypocenter Data Using CUDA-based GPU computing
NASA Astrophysics Data System (ADS)
Kumagai, T.; Okubo, K.; Uchida, N.; Matsuzawa, T.; Kawada, N.; Takeuchi, N.
2014-12-01
After the Great East Japan Earthquake on March 11, 2011, intelligent visualization of seismic information has become important for understanding earthquake phenomena. Meanwhile, the quantity of seismic data has become enormous with the progress of high-accuracy observation networks, and many parameters (e.g., position, origin time, magnitude) must be handled to display seismic information efficiently. Therefore, high-speed processing of data and image information is necessary to handle enormous amounts of seismic data. Recently, GPUs (graphics processing units) have been used as acceleration tools for data processing and calculation in various fields of study. This movement is called GPGPU (general-purpose computing on GPUs). In the last few years the performance of GPUs has kept improving rapidly, and GPU computing now provides a high-performance computing environment at a lower cost than before. Moreover, GPU computing offers an advantage for visualization of the processed data, because the GPU is originally an architecture for graphics processing and the processed data are already stored in video memory. Therefore, we can write drawing information directly to the VRAM on the video card by combining CUDA with a graphics API. In this study, we employ CUDA together with OpenGL and/or DirectX to realize a full-GPU implementation. This method makes it possible to write drawing information to the VRAM on the video card without PCIe bus data transfer, enabling high-speed processing of seismic data. The present study examines GPU computing-based high-speed visualization and its feasibility for a high-speed visualization system of hypocenter data.
In-Space Structural Validation Plan for a Stretched-Lens Solar Array Flight Experiment
NASA Technical Reports Server (NTRS)
Pappa, Richard S.; Woods-Vedeler, Jessica A.; Jones, Thomas W.
2001-01-01
This paper summarizes in-space structural validation plans for a proposed Space Shuttle-based flight experiment. The test article is an innovative, lightweight solar array concept that uses pop-up, refractive stretched-lens concentrators to achieve a power/mass density of at least 175 W/kg, which is more than three times greater than current capabilities. The flight experiment will validate this new technology to retire the risk associated with its first use in space. The experiment includes structural diagnostic instrumentation to measure the deployment dynamics, static shape, and modes of vibration of the 8-meter-long solar array and several of its lenses. These data will be obtained by photogrammetry using the Shuttle payload-bay video cameras and miniature video cameras on the array. Six accelerometers are also included in the experiment to measure base excitations and small-amplitude tip motions.
STS-109 Mission Highlights Resource Tape
NASA Astrophysics Data System (ADS)
2002-05-01
This video, Part 2 of 4, shows the activities of the STS-109 crew (Scott Altman, Commander; Duane Carey, Pilot; John Grunsfeld, Payload Commander; Nancy Currie, James Newman, Richard Linnehan, Michael Massimino, Mission Specialists) during flight days 4 and 5. The activities from other flight days can be seen on 'STS-109 Mission Highlights Resource Tape' Part 1 of 4 (internal ID 2002139471), 'STS-109 Mission Highlights Resource Tape' Part 3 of 4 (internal ID 2002139476), and 'STS-109 Mission Highlights Resource Tape' Part 4 of 4 (internal ID 2002137577). The primary activities during these days were EVAs (extravehicular activities) to replace two solar arrays on the HST (Hubble Space Telescope). Footage from flight day 4 records an EVA by Grunsfeld and Linnehan, including their exit from Columbia's payload bay airlock, their stowing of the old HST starboard rigid array on the rigid array carrier in Columbia's payload bay, their attachment of the new array on HST, the installation of a new starboard diode box, and the unfolding of the new array. The pistol grip space tool used to fasten the old array in its new location is shown in use. The video also includes several shots of the HST with Earth in the background. On flight day 5, Newman and Massimino conduct an EVA to change the port side array and diode box on HST. This EVA is very similar to the one on flight day 4 and is covered similarly in the video. A hand-operated ratchet is shown in use. In addition to repeating the previous tasks, the astronauts change HST's reaction wheel assembly and, because they are ahead of schedule, install insulation on and lubricate an instrument door on the telescope. The Earth views include a view of Egypt and Israel, with the Nile River, Red Sea, and Mediterranean Sea.
Two-Way Pattern Design for Distributed Subarray Antennas
2012-09-01
GUI: Graphical User Interface; HPBW: Half-power Beamwidth; MFR: Multifunction Radar; RCS: Radar Cross Section; RRE: Radar Range Equation ... The Aegis ships in the US Navy use phased arrays for the AN/SPY-1 multifunction radar (MFR) [2]. The phased array for the AN/SPY-1 radar is shown in ... arrays. This is a challenge for the design of antenna apertures for shipboard radar systems. One design approach is to use multi-function subarray
NASA Astrophysics Data System (ADS)
Niblack, Carlton W.; Zhu, Xiaoming; Hafner, James L.; Breuel, Tom; Ponceleon, Dulce B.; Petkovic, Dragutin; Flickner, Myron D.; Upfal, Eli; Nin, Sigfredo I.; Sull, Sanghoon; Dom, Byron E.; Yeo, Boon-Lock; Srinivasan, Savitha; Zivkovic, Dan; Penner, Mike
1997-12-01
QBIC™ (Query By Image Content) is a set of technologies and associated software that allows a user to search, browse, and retrieve image, graphic, and video data from large on-line collections. This paper discusses current research directions of the QBIC project such as indexing for high-dimensional multimedia data, retrieval of gray level images, and storyboard generation suitable for video. It describes aspects of QBIC software including scripting tools, application interfaces, and available GUIs, and gives examples of applications and demonstration systems using it.
2013-12-11
Name/Title of Video: Marshall Space Flight Center Historic Resource Reel. Description: A brief collection of film and video b-roll of historic events and programs associated with NASA's Marshall Space Flight Center in Huntsville, Ala. For more information and/or more footage of these events, please contact the Marshall Center Public & Employee Communications Office. Graphic Information: file footage. PAO Name: News Chief Jennifer Stanfield or MSFC Historian Mike Wright. Phone Number: 256-544-0034. Email Address: jennifer.stanfield@nasa.gov or mike.d.wright@nasa.gov
Annual Report of the Secretary of Defense to the President and the Congress, January 1994
1994-01-01
platform. This includes such basic services as voice, video, data, imagery, and graphics transmission, as well as organizational and individual messaging, video-teleconferencing, and electronic data interchange. Overarching the system are standardization, security, and technology insertion ... torpedo shallow-water upgrading, and improvements to the Standoff Land Attack Missile for strike ... aircraft. NAVAL AVIATION
2016-04-01
International Space Station Resource Reel. This video shows International Space Station components, such as the Destiny laboratory and the Quest Airlock, being manufactured at NASA's Marshall Space Flight Center in Huntsville, Ala. It provides manufacturing and ground testing video and in-flight video of key space station components: the Microgravity Science Glovebox, the Materials Science Research Facility, the Window Observational Research Facility, the Environmental Control Life Support System, and basic research racks. There is video of people working in Marshall's Payload Operations Integration Center, where controllers operate experiments 24/7, 365 days a year. Various crews are shown conducting experiments on board the station. PAO Name: Jennifer Stanfield. Phone Number: 256-544-0034. Email Address: JENNIFER.STANFIELD@NASA.GOV. Name/Title of Video: ISS Resource Reel. Description: ISS Resource Reel. Graphic Information: NASA. PAO Name: Tracy McMahan. Phone Number: 256-544-1634. Email Address: tracy.mcmahan@nasa.gov
Video capture virtual reality as a flexible and effective rehabilitation tool
Weiss, Patrice L; Rand, Debbie; Katz, Noomi; Kizony, Rachel
2004-01-01
Video capture virtual reality (VR) uses a video camera and software to track movement in a single plane without the need to place markers on specific bodily locations. The user's image is thereby embedded within a simulated environment such that it is possible to interact with animated graphics in a completely natural manner. Although this technology first became available more than 25 years ago, it is only within the past five years that it has been applied in rehabilitation. The objective of this article is to describe the way this technology works, to review its assets relative to other VR platforms, and to provide an overview of some of the major studies that have evaluated the use of video capture technologies for rehabilitation. PMID:15679949
JPRS Report, Science & Technology, Japan
1987-12-02
casting method, atmospheric sintering method, reaction sintering method, self-sintering method, recrystallizing sintering method, hot forming ... was completed by photographically recalling even such things as how bolts were fastened to achieve a complete copy similar to a video replay of
Eye gaze correction with stereovision for video-teleconferencing.
Yang, Ruigang; Zhang, Zhengyou
2004-07-01
The lack of eye contact in desktop video teleconferencing substantially reduces the effectiveness of video contents. While expensive and bulky hardware is available on the market to correct eye gaze, researchers have been trying to provide a practical software-based solution to bring video-teleconferencing one step closer to the mass market. This paper presents a novel approach: Based on stereo analysis combined with rich domain knowledge (a personalized face model), we synthesize, using graphics hardware, a virtual video that maintains eye contact. A 3D stereo head tracker with a personalized face model is used to compute initial correspondences across two views. More correspondences are then added through template and feature matching. Finally, all the correspondence information is fused together for view synthesis using view morphing techniques. The combined methods greatly enhance the accuracy and robustness of the synthesized views. Our current system is able to generate an eye-gaze corrected video stream at five frames per second on a commodity 1 GHz PC.
Compact discs as versatile cost-effective substrates for releasable nanopatterned aluminium films
NASA Astrophysics Data System (ADS)
Barrios, Carlos Angulo; Canalejas-Tejero, Víctor
2015-02-01
We demonstrate that standard polycarbonate compact disk surfaces can provide unique adhesion to Al films that is both strong enough to permit Al film nanopatterning and weak enough to allow easy nanopatterned Al film detachment using Scotch tape. Transferred Al nanohole arrays on Scotch tape exhibit excellent optical and plasmonic performance. Electronic supplementary information (ESI) available: 1. Optical simulations (Fig. SI.1); 2. Optical coupling via an Al NHA on the Scotch tape (Fig. SI.2); 3. Electrostatics-based opto-mechanical cantilever (Fig. SI.3). Video 1. Transfer of the Al film nanostructured with a nanohole array from a polycarbonate CD surface onto a Scotch tape; Video 2. Opto-mechanical electrostatics-based sensor: electrical attraction. Video 3. Opto-mechanical electrostatics-based sensor: electrical repulsion. See DOI: 10.1039/c4nr06271j
Progress in passive submillimeter-wave video imaging
NASA Astrophysics Data System (ADS)
Heinz, Erik; May, Torsten; Born, Detlef; Zieger, Gabriel; Peiselt, Katja; Zakosarenko, Vyacheslav; Krause, Torsten; Krüger, André; Schulz, Marco; Bauer, Frank; Meyer, Hans-Georg
2014-06-01
Since 2007 we have been developing passive submillimeter-wave video cameras for personal security screening. In contrast to established portal-based millimeter-wave scanning techniques, these are suitable for stand-off or stealth operation. The cameras operate in the 350 GHz band and use arrays of superconducting transition-edge sensors (TES), reflector optics, and opto-mechanical scanners. Whereas the basic principle of these devices remains unchanged, there has been continuous development of the technical details, such as the detector array, the scanning scheme, and the readout, as well as system integration and performance. The latest prototype of this camera development features a linear array of 128 detectors and a linear scanner capable of a 25 Hz frame rate. Using different types of reflector optics, a field of view of 1 × 2 m² and a spatial resolution of 1-2 cm are provided at object distances of about 5-25 m. We present the concept of this camera and give details on system design and performance. Demonstration videos show its capability for hidden-threat detection and illustrate possible application scenarios.
Processing And Display Of Medical Three Dimensional Arrays Of Numerical Data Using Octree Encoding
NASA Astrophysics Data System (ADS)
Amans, Jean-Louis; Darier, Pierre
1986-05-01
Imaging modalities such as X-ray computerized tomography (CT), nuclear medicine, and nuclear magnetic resonance can produce three-dimensional (3-D) arrays of numerical data describing the internal structures of medical objects. The analysis of 3-D data by synthetic generation of realistic images is an important area of computer graphics and imaging.
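As a sketch of the octree encoding idea named in the title (not the authors' implementation), the following Python function recursively subdivides a cubic 3-D array until blocks are homogeneous; the tolerance parameter and leaf representation are illustrative assumptions.

```python
import numpy as np

def octree_encode(vol, x0, y0, z0, size, tol=0.0):
    """Recursively encode a cubic block of a 3-D array as an octree node.
    Returns ('leaf', value) for homogeneous blocks, else ('node', [8 children])."""
    block = vol[x0:x0+size, y0:y0+size, z0:z0+size]
    if size == 1 or block.max() - block.min() <= tol:
        return ('leaf', float(block.mean()))
    h = size // 2
    children = [octree_encode(vol, x0+dx, y0+dy, z0+dz, h, tol)
                for dx in (0, h) for dy in (0, h) for dz in (0, h)]
    return ('node', children)

# usage: volume side length must be a power of two
vol = np.zeros((64, 64, 64), dtype=np.float32)
vol[16:48, 16:48, 16:48] = 1.0          # a simple synthetic "organ"
tree = octree_encode(vol, 0, 0, 0, 64)
```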
1978-11-28
Noise was sponsored by CNO (OP-95) and supported by the Chief of Naval Research (CNR) and held at the Woods Hole Oceanographic Institution (WHOI) in October ... [figure residue: surface, Array 1, Array 2, bottom] (C) Calculate standard deviation of phase-difference fluctuations as a function of integration time, Calculate
Keyhole imaging method for dynamic objects behind the occlusion area
NASA Astrophysics Data System (ADS)
Hao, Conghui; Chen, Xi; Dong, Liquan; Zhao, Yuejin; Liu, Ming; Kong, Lingqin; Hui, Mei; Liu, Xiaohua; Wu, Hong
2018-01-01
A keyhole imaging method based on a camera array is realized to obtain video images from behind a keyhole in a shielded space at a relatively long distance. We obtain multi-angle video images by using a 2×2 CCD camera array to capture the scene behind the keyhole from four directions. The multi-angle video images are saved as frame sequences, and this paper presents a method of video frame alignment. In order to remove the non-target area outside the aperture, we use the Canny operator and morphological methods to detect image edges and fill the images. The stitching of the four images is built on a two-image stitching algorithm: the SIFT method accomplishes the initial matching of images, and the RANSAC algorithm is then applied to eliminate wrong matching points and to obtain a homography matrix. A method of optimizing the transformation matrix is also proposed. Finally, a video image with a larger field of view behind the keyhole is synthesized from the frame sequence in which every single frame has been stitched. The results show that the video is clear and natural and the brightness transition is smooth. There are no obvious stitching artifacts in the video, and the method can be applied in different engineering environments.
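The pairwise stitching step described above (SIFT matching, RANSAC homography estimation, warping) can be sketched with OpenCV as follows. This is a generic illustration under assumed parameters (Lowe ratio 0.75, 5-pixel RANSAC threshold), not the authors' optimized pipeline, and it omits the frame alignment, Canny/morphology masking, and transform-optimization steps.

```python
import cv2
import numpy as np

def stitch_pair(img_ref, img_new):
    """Estimate a homography mapping img_new onto img_ref with SIFT + RANSAC, then warp."""
    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(img_ref, None)
    k2, d2 = sift.detectAndCompute(img_new, None)
    matches = cv2.BFMatcher().knnMatch(d2, d1, k=2)
    good = []
    for pair in matches:                                   # Lowe ratio test
        if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
            good.append(pair[0])
    src = np.float32([k2[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([k1[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)   # RANSAC rejects outliers
    h, w = img_ref.shape[:2]
    canvas = cv2.warpPerspective(img_new, H, (2 * w, h))   # warp onto a wider canvas
    canvas[0:h, 0:w] = img_ref                             # naive overlay blend
    return canvas
```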
New space sensor and mesoscale data analysis
NASA Technical Reports Server (NTRS)
Hickey, John S.
1987-01-01
The Earth Science and Application Division (ESAD) system/software developed here provides the research scientist with the following capabilities: an extensive database management capability to convert various experiment data types into a standard format; an interactive analysis and display package (AVE80); an interactive imaging/color graphics capability utilizing Apple III and IBM PC workstations integrated into the ESAD computer system; and a local and remote smart-terminal capability that provides color video, graphics, and LaserJet output. Recommendations for updating and enhancing the performance of the ESAD computer system are listed.
Concerning the Video Drift Method to Measure Double Stars
NASA Astrophysics Data System (ADS)
Nugent, Richard L.; Iverson, Ernest W.
2015-05-01
Classical methods of measuring the position angles and separations of double stars rely on just a few measurements from either visual observations or photographic means. Visual and photographic CCD observations are subject to errors from the following sources: misalignments of the eyepiece/camera/Barlow lens/micrometer/focal reducers, systematic errors from uncorrected optical distortions, aberrations of the telescope system, camera tilt, and magnitude and color effects. Conventional video methods rely on calibration doubles and graphical determination of the east-west direction, plus careful choice of selected video frames stacked for measurement. Atmospheric motion, on the order of 0.5-1.5 arcseconds, is one of the larger sources of error in any exposure/measurement method. Ideally, if a data set from a short video can be used to derive position angle and separation, with each data set self-calibrating independently of any calibration doubles or star catalogues, this would provide measurements of high systematic accuracy. These aims are achieved by the video drift method first proposed by the authors in 2011. This self-calibrating video method automatically analyzes thousands of measurements from a short video clip.
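A minimal sketch of the self-calibration idea behind a drift measurement: a star trailing at the sidereal rate yields the plate scale, and the drift direction defines the east-west axis for position angles. The function and example numbers below are illustrative assumptions, not the authors' software.

```python
import math

SIDEREAL_RATE = 15.0411  # arcseconds of right ascension per second of time

def drift_plate_scale(dec_deg, pixels_drifted, elapsed_s):
    """Plate scale (arcsec/pixel) from a star drift of known declination and duration."""
    drift_arcsec = SIDEREAL_RATE * math.cos(math.radians(dec_deg)) * elapsed_s
    return drift_arcsec / pixels_drifted

# e.g. a star at dec = +20 deg drifting 450 pixels in 30 s of video:
scale = drift_plate_scale(20.0, 450.0, 30.0)   # ~0.94 arcsec/pixel
```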
Authoring Data-Driven Videos with DataClips.
Amini, Fereshteh; Riche, Nathalie Henry; Lee, Bongshin; Monroy-Hernandez, Andres; Irani, Pourang
2017-01-01
Data videos, or short data-driven motion graphics, are an increasingly popular medium for storytelling. However, creating data videos is difficult as it involves pulling together a unique combination of skills. We introduce DataClips, an authoring tool aimed at lowering the barriers to crafting data videos. DataClips allows non-experts to assemble data-driven "clips" together to form longer sequences. We constructed the library of data clips by analyzing the composition of over 70 data videos produced by reputable sources such as The New York Times and The Guardian. We demonstrate that DataClips can reproduce over 90% of our data videos corpus. We also report on a qualitative study comparing the authoring process and outcome achieved by (1) non-experts using DataClips, and (2) experts using Adobe Illustrator and After Effects to create data-driven clips. Results indicated that non-experts are able to learn and use DataClips with a short training period. In the span of one hour, they were able to produce more videos than experts using a professional editing tool, and their clips were rated similarly by an independent audience.
Parallel Key Frame Extraction for Surveillance Video Service in a Smart City.
Zheng, Ran; Yao, Chuanwei; Jin, Hai; Zhu, Lei; Zhang, Qin; Deng, Wei
2015-01-01
Surveillance video service (SVS) is one of the most important services provided in a smart city. Efficient surveillance video analysis techniques are essential for making full use of SVS, and key frame extraction is a simple yet effective technique for achieving this goal. In surveillance video applications, key frames are typically used to summarize important video content, so it is essential to extract them accurately and efficiently. A novel approach is proposed to extract key frames from traffic surveillance videos based on the GPU (graphics processing unit) to ensure high efficiency and accuracy. For determining key frames, motion is the more salient feature for representing actions or events, especially in surveillance videos. The motion feature is extracted on the GPU to reduce running time; it is then smoothed to reduce noise, and the frames with local maxima of motion information are selected as the final key frames. The experimental results show that this approach extracts key frames more accurately and efficiently than several other methods.
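A minimal CPU-side sketch of the selection logic (smooth a per-frame motion signal, then keep local maxima); simple frame differencing stands in here for the GPU-extracted motion feature, and the smoothing window size is an assumed parameter.

```python
import cv2
import numpy as np

def key_frames(video_path, win=15):
    """Return indices of frames at local maxima of smoothed inter-frame motion."""
    cap = cv2.VideoCapture(video_path)
    prev, motion = None, []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if prev is not None:
            motion.append(float(cv2.absdiff(gray, prev).mean()))  # crude motion measure
        prev = gray
    cap.release()
    m = np.convolve(motion, np.ones(win) / win, mode='same')      # smooth to reduce noise
    # frames whose smoothed motion exceeds both neighbours are key-frame candidates
    return [i + 1 for i in range(1, len(m) - 1) if m[i] > m[i - 1] and m[i] > m[i + 1]]
```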
ISS Expedition 42 Time Lapse Video of Earth
2015-05-18
This time lapse video taken during ISS Expedition 42 is assembled from JSC still photo collection (still photos iss042e103580 - iss042e104044). Shows night time Earth views. Solar Array Wing (SAW) and Space Station Remote Manipulator System (SSRMS) or Canadarm in foreground.
The Impact of Developing Technology on Media Communications.
ERIC Educational Resources Information Center
MacDonald, Lindsay W.
1997-01-01
Examines changes in media communications resulting from new information technologies: communications technologies (networks, World Wide Web, digital set-top box); graphic arts (digital photography, CD and digital archives, desktop design and publishing, printing technology); television and video (digital editing, interactive television, news and…
WORKSHOP ON MINING IMPACTED NATIVE AMERICAN LANDS CD
Multimedia Technology is an exciting mix of cutting-edge Information Technologies that utilize a variety of interactive structures, digital video and audio technologies, 3-D animation, high-end graphics, and peer-reviewed content that are then combined in a variety of user-friend...
17 CFR 232.104 - Unofficial PDF copies included in an electronic submission.
Code of Federal Regulations, 2010 CFR
2010-04-01
... graphics, or audio or video material), notwithstanding the fact that its HTML or ASCII document counterpart... copy is not filed for purposes of section 11 of the Securities Act (15 U.S.C. 77k), section 18 of the...
17 CFR 232.104 - Unofficial PDF copies included in an electronic submission.
Code of Federal Regulations, 2014 CFR
2014-04-01
... graphics, or audio or video material), notwithstanding the fact that its HTML or ASCII document counterpart... copy is not filed for purposes of section 11 of the Securities Act (15 U.S.C. 77k), section 18 of the...
17 CFR 232.104 - Unofficial PDF copies included in an electronic submission.
Code of Federal Regulations, 2012 CFR
2012-04-01
... graphics, or audio or video material), notwithstanding the fact that its HTML or ASCII document counterpart... copy is not filed for purposes of section 11 of the Securities Act (15 U.S.C. 77k), section 18 of the...
17 CFR 232.104 - Unofficial PDF copies included in an electronic submission.
Code of Federal Regulations, 2013 CFR
2013-04-01
... graphics, or audio or video material), notwithstanding the fact that its HTML or ASCII document counterpart... copy is not filed for purposes of section 11 of the Securities Act (15 U.S.C. 77k), section 18 of the...
17 CFR 232.104 - Unofficial PDF copies included in an electronic submission.
Code of Federal Regulations, 2011 CFR
2011-04-01
... graphics, or audio or video material), notwithstanding the fact that its HTML or ASCII document counterpart... copy is not filed for purposes of section 11 of the Securities Act (15 U.S.C. 77k), section 18 of the...
The graphics and data acquisition software package
NASA Technical Reports Server (NTRS)
Crosier, W. G.
1981-01-01
A software package was developed for use with micro- and minicomputers, particularly the LSI-11/PDP-11 series. The package has a number of Fortran-callable subroutines which perform a variety of frequently needed tasks for biomedical applications. All routines are well documented, flexible, easy to use and modify, and require minimal programmer knowledge of peripheral hardware. The package is also economical of memory and CPU time. A single subroutine call can perform any one of the following functions: (1) plot an array of integer values from sampled A/D data; (2) plot an array of Y values versus an array of X values; (3) draw horizontal and/or vertical grid lines of selectable type; (4) annotate grid lines with user units; (5) get coordinates of user-controlled crosshairs from the terminal for interactive graphics; (6) sample any analog channel with program-selectable gain; (7) wait a specified time interval; and (8) perform random-access I/O of one or more blocks of a sequential disk file. Several miscellaneous functions are also provided.
Public Education and Outreach Through Full-Dome Video Technology
NASA Astrophysics Data System (ADS)
Pollock, John
2009-03-01
My long-term goal is to enhance public understanding of complex systems that are best demonstrated through richly detailed computer graphic animation displayed with full-dome video technology. My current focus is on health science advances in regenerative medicine, which helps the body heal itself. Such topics facilitate science learning and health literacy. My team develops multimedia presentations that bring scientific and medical advances to the public through immersive high-definition video animation. Implicit in treating the topics of regenerative medicine is the need to address stem cell biology. The topics are clarified and presented from a platform of facts and balanced ethical consideration. The production process includes communicating scientific information about the excitement and importance of stem cell research. Principles of function are emphasized over specific facts or terminology by focusing on a limited but fundamental set of concepts. To achieve this, visually rich, biologically accurate 3D computer graphic environments are created to illustrate the cells, tissues, and organs of interest. A suite of films is produced and evaluated in pre- and post-surveys assessing attitudes, knowledge, and learning. Each film uses engaging interactive demonstrations to illustrate biological functions, the things that go wrong due to disease and disability, and the remedy provided by regenerative medicine. While the images are rich and detailed, the language is accessible and appropriate to the audience. The digital high-definition video is also re-edited for presentation in other "flat screen" formats, increasing our distribution potential. Show content is also presented in an interactive web space (www.sepa.duq.edu) with complementary teacher resource guides, student workbooks, and companion video games.
Two Dimensional Array Based Overlay Network for Balancing Load of Peer-to-Peer Live Video Streaming
NASA Astrophysics Data System (ADS)
Faruq Ibn Ibrahimy, Abdullah; Rafiqul, Islam Md; Anwar, Farhat; Ibn Ibrahimy, Muhammad
2013-12-01
Live video data is usually streamed over a tree-based or a mesh-based overlay network. When a peer with additional upload bandwidth departs, the overlay network becomes very vulnerable to churn. In this paper, a two-dimensional array-based overlay network is proposed for streaming live video data. As there is always a peer or a live video streaming server available to upload the live video stream data, the overlay network is very stable and robust to churn. Peers are placed according to their upload and download bandwidth, which improves load balance and performance. The overlay network utilizes the additional upload bandwidth of peers to minimize chunk delivery delay and to maximize load balance. The procedure used for distributing the additional upload bandwidth of the peers assigns it to heterogeneous-strength peers with a fair-treatment distribution approach and to homogeneous-strength peers with a uniform distribution approach. The proposed overlay network has been simulated with QualNet from Scalable Network Technologies, and the results are presented in this paper.
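As a rough illustration only, the sketch below places peers into a 2-D array ordered by upload (then download) bandwidth, so stronger uploaders sit nearer the streaming source; the actual placement and bandwidth-distribution rules of the proposed overlay are more elaborate and are not reproduced here, and the function and field names are hypothetical.

```python
def place_peers(peers, columns):
    """peers: list of (peer_id, upload_kbps, download_kbps); returns rows of the 2-D array,
    with row 0 holding the strongest uploaders (served directly by the streaming server)."""
    ordered = sorted(peers, key=lambda p: (p[1], p[2]), reverse=True)
    return [ordered[i:i + columns] for i in range(0, len(ordered), columns)]

grid = place_peers(
    [("a", 900, 1800), ("b", 300, 1200), ("c", 1500, 2400), ("d", 600, 900)],
    columns=2,
)
# grid[0] -> strongest uploaders; later rows receive chunks relayed from earlier rows
```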
A Macintosh based data system for array spectrometers (Poster)
NASA Astrophysics Data System (ADS)
Bregman, J.; Moss, N.
An interactive data acquisition and reduction system has been assembled by combining a Macintosh computer with an instrument controller (an Apple II computer) via an RS-232 interface. The data system provides flexibility for operating different linear-array spectrometers. The standard Macintosh interface is used to provide ease of operation and to allow transfer of the reduced data to commercial graphics software.
ERIC Educational Resources Information Center
Luskin, Bernard J.; Krinsky, Ira W.
1994-01-01
The growing number of mergers, takeovers, and joint ventures has dramatic implications for communications, publishing, and education worldwide. Apple Computer and IBM have created Kaleida, a standard for software used in devices that combine text, sound, video, and graphics. Apple, AT&T, Matsushita, Motorola, Philips, and Sony are developing a…
University of Arizona: College and University Systems Environment.
ERIC Educational Resources Information Center
CAUSE/EFFECT, 1985
1985-01-01
The University of Arizona has begun to reorganize campus computing. Six working groups were formed to address six areas of computing: academic computing, library automation, administrative data processing and information systems, writing and graphics, video and audio services, and outreach and public service. (MLW)
Multimedia Materials for Language and Literacy Learning.
ERIC Educational Resources Information Center
Hallett, Terry L.
1999-01-01
Introduces educators to inexpensive, commercially-available CD-ROM software that combines speech, text, graphics, sound, video, animation, and special effects that may be incorporated into classroom activities for both normally developing and language learning disabled children. Discusses three types of multimedia CD-ROM products: (1) virtual…
ERIC Educational Resources Information Center
Cannon, Glenn; Jobe, Holly
Proper cleaning and storage of audiovisual aids is outlined in this brief guide. Materials and equipment needed for first line maintenance are listed, as well as maintenance procedures for records, audio and video tape, film, filmstrips, slides, realia, models, prints, graphics, maps, and overhead transparencies. A 15-item quiz on software…
Suzuki, Naoki; Hattori, Asaki; Hayashibe, Mitsuhiro; Suzuki, Shigeyuki; Otake, Yoshito
2003-01-01
We have developed an imaging system for free and quantitative observation of human locomotion in a time-spatial domain by way of real-time imaging. The system is equipped with 60 computer-controlled video cameras to film human locomotion from all angles simultaneously. Images are loaded into the main graphics workstation and arranged into a 2D image matrix. The subject can be observed from arbitrary directions by selecting the viewpoint from the optimum image sequence in this matrix. The system can also reconstruct 4D models of the subject's moving body by using the 60 images taken from all directions at one particular time, and it can visualize inner structures such as the skeletal or muscular systems of the subject by compositing computer graphics reconstructed from an MRI data set. We are planning to apply this imaging system to clinical observation in the areas of orthopedics, rehabilitation, and sports science.
NASA Technical Reports Server (NTRS)
Diner, Daniel B. (Inventor); Venema, Steven C. (Inventor)
1991-01-01
A system for real-time video image display for robotics or remote-vehicle teleoperation is described that has at least one robot arm or remotely operated vehicle controlled by an operator through hand-controllers, and one or more television cameras and optional lighting element. The system has at least one television monitor for display of a television image from a selected camera and the ability to select one of the cameras for image display. Graphics are generated with icons of cameras and lighting elements for display surrounding the television image to provide the operator information on: the location and orientation of each camera and lighting element; the region of illumination of each lighting element; the viewed region and range of focus of each camera; which camera is currently selected for image display for each monitor; and when the controller coordinate for said robot arms or remotely operated vehicles have been transformed to correspond to coordinates of a selected or nonselected camera.
Composite video and graphics display for camera viewing systems in robotics and teleoperation
NASA Technical Reports Server (NTRS)
Diner, Daniel B. (Inventor); Venema, Steven C. (Inventor)
1993-01-01
A system for real-time video image display for robotics or remote-vehicle teleoperation is described that has at least one robot arm or remotely operated vehicle controlled by an operator through hand-controllers, and one or more television cameras and optional lighting element. The system has at least one television monitor for display of a television image from a selected camera and the ability to select one of the cameras for image display. Graphics are generated with icons of cameras and lighting elements for display surrounding the television image to provide the operator information on: the location and orientation of each camera and lighting element; the region of illumination of each lighting element; the viewed region and range of focus of each camera; which camera is currently selected for image display for each monitor; and when the controller coordinate for said robot arms or remotely operated vehicles have been transformed to correspond to coordinates of a selected or nonselected camera.
The recovery and utilization of space suit range-of-motion data
NASA Technical Reports Server (NTRS)
Reinhardt, AL; Walton, James S.
1988-01-01
A technique for recovering data for the range of motion of a subject wearing a space suit is described along with the validation of this technique on an EVA space suit. Digitized data are automatically acquired from video images of the subject; three-dimensional trajectories are recovered from these data, and can be displayed using three-dimensional computer graphics. Target locations are recovered using a unique video processor and close-range photogrammetry. It is concluded that such data can be used in such applications as the animation of anthropometric computer models.
MMIC Phased Array Demonstrations with ACTS
NASA Technical Reports Server (NTRS)
Raquet, Charles A. (Compiler); Martzaklis, Konstantinos (Compiler); Zakrajsek, Robert J. (Compiler); Andro, Monty (Compiler); Turtle, John P.
1996-01-01
Over a one year period from May 1994 to May 1995, a number of demonstrations were conducted by the NASA Lewis Research Center (LeRC) in which voice, data, and/or video links were established via NASA's advanced communications technology satellite (ACTS) between the ACTS link evaluation terminal (LET) in Cleveland, OH, and aeronautical and mobile or fixed Earth terminals having monolithic microwave integrated circuit (MMIC) phased array antenna systems. This paper describes four of these. In one, a duplex voice link between an aeronautical terminal on the LeRC Learjet and the ACTS was achieved. Two others demonstrated duplex voice (and in one case video as well) links between the ACTS and an Army vehicle. The fourth demonstrated a high data rate downlink from ACTS to a fixed terminal. Array antenna systems used in these demonstrations were developed by LeRC and featured LeRC and Air Force experimental arrays using gallium arsenide MMIC devices at each radiating element for electronic beam steering and distributed power amplification. The single 30 GHz transmit array was developed by NASA/LeRC and Texas Instruments. The three 20 GHz receive arrays were developed in a cooperative effort with the Air Force Rome Laboratory, taking advantage of existing Air Force array development contracts with Boeing and Lockheed Martin. The paper describes the four proof-of-concept arrays and the array control system. The system configured for each of the demonstrations is described, and results are discussed.
Graphic gambling warnings: how they affect emotions, cognitive responses and attitude change.
Muñoz, Yaromir; Chebat, Jean-Charles; Borges, Adilson
2013-09-01
The present study focuses on the effects of graphic warnings related to excessive gambling. It is based upon a theoretical model derived from both the Protection Motivation Theory (PMT) and the Elaboration Likelihood Model (ELM). We focus on the video lottery terminal (VLT), one of the most hazardous formats in the gaming industry. Our cohort consisted of 103 actual gamblers who reported previous regular gambling activity on VLTs. We assess the effectiveness of graphic warnings vs. text-only warnings and the effectiveness of two major arguments (i.e., family vs. financial disruption). A 2 × 2 factorial design was used to test the direct and combined effects of two variables (i.e., warning content and presence vs. absence of a graphic). It was found that the presence of a graphic enhances both cognitive appraisal and fear, and has positive effects on the Depth of Information Processing. In addition, graphic content combined with the family-disruption argument is more effective for changing attitudes and complying with the warning than other combinations of the manipulated variables. It is proposed that ELM and PMT complement each other in explaining the effects of warnings. Theoretical and practical implications are discussed.
Graphical user interface for a dual-module EMCCD x-ray detector array
NASA Astrophysics Data System (ADS)
Wang, Weiyuan; Ionita, Ciprian; Kuhls-Gilcrist, Andrew; Huang, Ying; Qu, Bin; Gupta, Sandesh K.; Bednarek, Daniel R.; Rudin, Stephen
2011-03-01
A new Graphical User Interface (GUI) was developed using Laboratory Virtual Instrumentation Engineering Workbench (LabVIEW) for a high-resolution, high-sensitivity Solid State X-ray Image Intensifier (SSXII), which is a new x-ray detector for radiographic and fluoroscopic imaging, consisting of an array of Electron-Multiplying CCDs (EMCCDs) each having a variable on-chip electron-multiplication gain of up to 2000x to reduce the effect of readout noise. To enlarge the field-of-view (FOV), each EMCCD sensor is coupled to an x-ray phosphor through a fiberoptic taper. Two EMCCD camera modules are used in our prototype to form a computer-controlled array; however, larger arrays are under development. The new GUI provides patient registration, EMCCD module control, image acquisition, and patient image review. Images from the array are stitched into a 2kx1k pixel image that can be acquired and saved at a rate of 17 Hz (faster with pixel binning). When reviewing the patient's data, the operator can select images from the patient's directory tree listed by the GUI and cycle through the images using a slider bar. Commonly used camera parameters including exposure time, trigger mode, and individual EMCCD gain can be easily adjusted using the GUI. The GUI is designed to accommodate expansion of the EMCCD array to even larger FOVs with more modules. The high-resolution, high-sensitivity EMCCD modular-array SSXII imager with the new user-friendly GUI should enable angiographers and interventionalists to visualize smaller vessels and endovascular devices, helping them to make more accurate diagnoses and to perform more precise image-guided interventions.
ERIC Educational Resources Information Center
Rogness, Jonathan
2011-01-01
Advances in computer graphics have provided mathematicians with the ability to create stunning visualizations, both to gain insight and to help demonstrate the beauty of mathematics to others. As educators these tools can be particularly important as we search for ways to work with students raised with constant visual stimulation, from video games…
Drawing on Text Features for Reading Comprehension and Composing
ERIC Educational Resources Information Center
Risko, Victoria J.; Walker-Dalhouse, Doris
2011-01-01
Students read multiple-genre texts such as graphic novels, poetry, brochures, digitized texts with videos, and informational and narrative texts. Features such as overlapping illustrations and implied cause-and-effect relationships can affect students' comprehension. Teaching with these texts and drawing attention to organizational features hold…
Multimedia CALLware: The Developer's Responsibility.
ERIC Educational Resources Information Center
Dodigovic, Marina
The early computer-assisted-language-learning (CALL) programs were silent and mostly limited to screen or printer supported written text as the prevailing communication resource. The advent of powerful graphics, sound and video combined with AI-based parsers and sound recognition devices gradually turned the computer into a rather anthropomorphic…
What's New in Software? Hot New Tool: The Hypertext.
ERIC Educational Resources Information Center
Hedley, Carolyn N.
1989-01-01
This article surveys recent developments in hypertext software, a highly interactive nonsequential reading/writing/database approach to research and teaching that allows paths to be created through related materials including text, graphics, video, and animation sources. Described are uses, advantages, and problems of hypertext. (PB)
Message Design Guidelines For Screen-Based Programs.
ERIC Educational Resources Information Center
Rimar, G. I.
1996-01-01
Effective message design for screen-based computer or video instructional programs requires knowledge from many disciplines. Evaluates current conventions and suggests a new set of guidelines for screen-based designers. Discusses screen layout, highlighting and cueing, text font and style, text positioning, color, and graphical user interfaces for…
Weather Fundamentals: Hurricanes & Tornadoes. [Videotape].
ERIC Educational Resources Information Center
1998
The videos in this educational series, for grades 4-7, help students understand the science behind weather phenomena through dramatic live-action footage, vivid animated graphics, detailed weather maps, and hands-on experiments. This episode (23 minutes) features information on the deadliest and most destructive storms on Earth. Through satellite…
Job Language Performance Requirements for MOS 15E PERSHING Missile Crewman.
1977-04-12
DATA GATHERING: TASK OBSERVATION, STRUCTURAL PRIORITIZATION, FORM, INVENTORY, CHECKLIST (FIGURE 2). In order to establish Job Language Performance Requirements ... Profanity; F. Shop talk/slang; G. Non-standard English. Media of Instruction: I. Other. Comments: A. Films; B. Video cassettes; C. Graphic Training Aids
Weather Fundamentals: Meteorology. [Videotape].
ERIC Educational Resources Information Center
1998
The videos in this educational series, for grades 4-7, help students understand the science behind weather phenomena through dramatic live-action footage, vivid animated graphics, detailed weather maps, and hands-on experiments. This episode (23 minutes) looks at how meteorologists gather and interpret current weather data collected from sources…
Development of MPEG standards for 3D and free viewpoint video
NASA Astrophysics Data System (ADS)
Smolic, Aljoscha; Kimata, Hideaki; Vetro, Anthony
2005-11-01
An overview of 3D and free-viewpoint video is given in this paper, with special focus on related standardization activities in MPEG. Free-viewpoint video allows the user to freely navigate within real-world visual scenes, as known from virtual worlds in computer graphics. Suitable 3D scene representation formats are classified and the processing chain is explained. Examples are shown for image-based and model-based free-viewpoint video systems, highlighting standards-conforming realization using MPEG-4. Then the principles of 3D video are introduced, providing the user with a 3D depth impression of the observed scene. Example systems are described, again focusing on their realization based on MPEG-4. Finally, multi-view video coding is described as a key component for 3D and free-viewpoint video systems. MPEG is currently working on a new standard for multi-view video coding. The conclusion is that the necessary technology, including standard media formats for 3D and free-viewpoint video, is available or will be available in the near future, and that there is clear demand from industry and users for such applications. 3DTV at home and free-viewpoint video on DVD will be available soon and will create huge new markets.
Headlines: Planet Earth: Improving Climate Literacy with Short Format News Videos
NASA Astrophysics Data System (ADS)
Tenenbaum, L. F.; Kulikov, A.; Jackson, R.
2012-12-01
One of the challenges of communicating climate science is the sense that climate change is remote and unconnected to daily life--something that's happening to someone else or in the future. To help face this challenge, NASA's Global Climate Change website http://climate.nasa.gov has launched a new video series, "Headlines: Planet Earth," which focuses on current climate news events. This rapid-response video series uses 3D video visualization technology combined with real-time satellite data and images to throw a spotlight on real-world events. The "Headlines: Planet Earth" news video products will be deployed frequently, ensuring timeliness. NASA's Global Climate Change website makes extensive use of interactive media, immersive visualizations, ground-based and remote images, narrated and time-lapse videos, time-series animations, and real-time scientific data, plus maps and user-friendly graphics that make the scientific content both accessible and engaging to the public. The site has also won two consecutive Webby Awards for Best Science Website. Connecting climate science to current real-world events will contribute to improving climate literacy by making climate science relevant to everyday life.
The Biology and Space Exploration Video Series
NASA Technical Reports Server (NTRS)
William, Jacqueline M.; Murthy, Gita; Rapa, Steve; Hargens, Alan R.
1995-01-01
The Biology and Space Exploration video series illustrates NASA's commitment to increasing the public awareness and understanding of life sciences in space. The video series collection, which was initiated by Dr. Joan Vernikos at NASA headquarters and Dr. Alan Hargens at NASA Ames Research Center, will be distributed to universities and other institutions around the United States. The video series parallels the "Biology and Space Exploration" course taught by NASA Ames scientists at Stanford University, Palo Alto, California. In the past, students have shown considerable enthusiasm for this course and have gained a much better appreciation and understanding of space life sciences and exploration. However, due to the unique nature of the topics and the scarcity of available educational materials, most students in other universities around the country are unable to benefit from this educational experience. Therefore, with the assistance of Ames experts, we are producing a video series on selected aspects of life sciences in space to expose undergraduate students to the effects of gravity on living systems. Additionally, the video series collection contains space flight footage, graphics, charts, pictures, and interviews to make the materials interesting and intelligible to viewers.
InSight Lander Solar Array Test
2018-01-23
The solar arrays on NASA's InSight Mars lander were deployed as part of testing conducted Jan. 23, 2018, at Lockheed Martin Space in Littleton, Colorado. Engineers and technicians evaluated the solar arrays and performed an illumination test to confirm that the solar cells were collecting power. The launch window for InSight opens May 5, 2018. A video is available at https://photojournal.jpl.nasa.gov/catalog/PIA22205
WCE video segmentation using textons
NASA Astrophysics Data System (ADS)
Gallo, Giovanni; Granata, Eliana
2010-03-01
Wireless Capsule Endoscopy (WCE) integrates wireless transmission with image and video technology. It has been used to examine the small intestine noninvasively. Medical specialists look for significant events in a WCE video by direct visual inspection, manually labelling clinically relevant frames in tiring sessions of up to one hour; this limits WCE usage. Automatically discriminating digestive organs such as the esophagus, stomach, small intestine, and colon would therefore be of great advantage. In this paper we propose to use textons for the automatic discrimination of abrupt changes within a video. In particular, we consider as features, for each frame, hue, saturation, value, high-frequency energy content, and the responses to a bank of Gabor filters. The experiments were conducted on ten video segments extracted from WCE videos in which the significant events had previously been labelled by experts. Results show that the proposed method may eliminate up to 70% of the frames from further investigation, so the doctors' direct analysis can be concentrated on eventful frames only. A graphical tool showing sudden changes in the texton frequencies for each frame is also proposed as a visual aid for finding clinically relevant segments of the video.
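A minimal sketch of the per-frame feature extraction described above (HSV statistics, a high-frequency energy term, and Gabor-bank responses); the kernel sizes and filter parameters below are illustrative assumptions rather than the authors' settings.

```python
import cv2
import numpy as np

def frame_features(frame_bgr, n_orientations=4):
    """Return a small feature vector for one WCE frame."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32) / 255.0
    feats = [hsv[..., 0].mean(), hsv[..., 1].mean(), hsv[..., 2].mean(),
             float(cv2.Laplacian(gray, cv2.CV_32F).var())]      # high-frequency energy
    for k in range(n_orientations):                             # small Gabor filter bank
        theta = k * np.pi / n_orientations
        kern = cv2.getGaborKernel((21, 21), 4.0, theta, 10.0, 0.5, 0, ktype=cv2.CV_32F)
        feats.append(float(cv2.filter2D(gray, cv2.CV_32F, kern).mean()))
    return np.array(feats)
```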
Syracuse University's Center for Instructional Development; Its Role, Organization, and Procedures.
ERIC Educational Resources Information Center
Diamond, Robert M.
A brief report on the Syracuse University Center for Instructional Development is presented which describes the Center's organizational structure and operational procedures. The center combines support services for video, audio, graphics and photographic preparation of materials for instructional use; a research and evaluation unit to assess…
From Icons to iPods: Visual Electronic Media Use and Worship Satisfaction
ERIC Educational Resources Information Center
Gilbert, Ronald
2010-01-01
A steady transition has been taking place in church services with the employment of visual electronic media intended to enhance the worship experience for congregants. Electronically assisted worship utilizes presentational software and hardware to incorporate video, film clips, texts, graphics, lyrics, TV broadcasts, Internet, Twitter, and even…
Combining Traditional and New Literacies in a 21st-Century Writing Workshop
ERIC Educational Resources Information Center
Bogard, Jennifer M.; McMackin, Mary C.
2012-01-01
This article describes how third graders combine traditional literacy practices, including writer's notebooks and graphic organizers, with new literacies, such as video editing software, to create digital personal narratives. The authors emphasize the role of planning in the recursive writing process and describe how technology-based audio…
Schools Gear Up for "Hypermedia"--A Quantum Leap in Electronic Learning.
ERIC Educational Resources Information Center
Trotter, Andrew
1989-01-01
A new technological phenomenon known as "hypermedia" or "interactive multimedia" allows the learner to be in control and to access a variety of media with a computer. Advances in information storage technology have placed libraries of documents, sounds, and video and graphic images on laser discs. (MLF)
Using Scratch: An Integrated Problem-Solving Approach to Mathematical Thinking
ERIC Educational Resources Information Center
Calder, Nigel
2010-01-01
"Scratch" is a media-rich digital environment that utilises a building block command structure to manipulate graphic, audio, and video aspects. It incorporates elements of Logo including "tinkerability" in the programming process. In "Scratch" students use geometric and measurement concepts such as coordinates, angle, and length measurements. It…
Weather Fundamentals: Rain & Snow. [Videotape].
ERIC Educational Resources Information Center
1998
The videos in this educational series, for grades 4-7, help students understand the science behind weather phenomena through dramatic live-action footage, vivid animated graphics, detailed weather maps, and hands-on experiments. This episode (23 minutes) gives concise explanations of the various types of precipitation and describes how the water…
Weather Fundamentals: Wind. [Videotape].
ERIC Educational Resources Information Center
1998
The videos in this educational series, for grades 4-7, help students understand the science behind weather phenomena through dramatic live-action footage, vivid animated graphics, detailed weather maps, and hands-on experiments. This episode (23 minutes) describes the roles of the sun, temperature, and air pressure in creating the incredible power…
Notions of Technology and Visual Literacy
ERIC Educational Resources Information Center
Stankiewicz, Mary Ann
2004-01-01
For many art educators, the word "technology" conjures up visions of overhead projectors and VCRs, video and digital cameras, computers equipped with graphic programs and presentation software, digital labs where images rendered in pixels replace the debris of charcoal dust and puddled paints. One forgets that visual literacy and technology have…
Getting Started in Multimedia Training: Cutting or Bleeding Edge?
ERIC Educational Resources Information Center
Anderson, Vicki; Sleezer, Catherine M.
1995-01-01
Defines multimedia, explores uses of multimedia training, and discusses the effects and challenges of adding multimedia such as graphics, photographs, full motion video, sound effects, or CD-ROMs to existing training methods. Offers planning tips, and suggests software and hardware tools to help set up multimedia training programs. (JMV)
50 CFR 660.15 - Equipment requirements.
Code of Federal Regulations, 2011 CFR
2011-10-01
... perceived weight of water, slime, mud, debris, or other materials. Scale printouts must show: (A) The vessel... with Pentium 75-MHz or higher. Random Access Memory (RAM) must have sufficient megabyte (MB) space to... space of 217 MB or greater. A CD-ROM drive with a Video Graphics Adapter (VGA) or higher resolution...
Quo Vadimus? The 21st Century and Multimedia.
ERIC Educational Resources Information Center
Kuhn, Allan D.
This paper relates the concept of computer-driven multimedia to the National Aeronautics and Space Administration (NASA) Scientific and Technical Information Program (STIP). Multimedia is defined here as computer integration and output of text, animation, audio, video, and graphics. Multimedia is the stage of computer-based information that allows…
Full-Featured Web Conferencing Systems
ERIC Educational Resources Information Center
Foreman, Joel; Jenkins, Roy
2005-01-01
In order to match the customary strengths of the still dominant face-to-face instructional mode, a high-performance online learning system must employ synchronous as well as asynchronous communications; buttress graphics, animation, and text with live audio and video; and provide many of the features and processes associated with course management…
The Integration of Teleconferencing in Distance Teaching.
ERIC Educational Resources Information Center
Murphy, Karen L.
Teleconferencing is a discussion by telephone among individuals and groups of people in two or more locations for the purposes of instruction and conducting meetings. Audio conferencing connects people by voice alone, audiographic conferencing allows people to speak and to exchange text and graphics over the telephone line, and video conferencing…
Weather Fundamentals: Climate & Seasons. [Videotape].
ERIC Educational Resources Information Center
1998
The videos in this educational series for grades 4-7, help students understand the science behind weather phenomena through dramatic live-action footage, vivid animated graphics, detailed weather maps, and hands-on experiments. This episode (23 minutes), describes weather patterns and cycles around the globe. The various types of climates around…
Software Aids Visualization Of Mars Pathfinder Mission
NASA Technical Reports Server (NTRS)
Weidner, Richard J.
1996-01-01
Report describes Simulator for Imager for Mars Pathfinder (SIMP) computer program. SIMP generates "virtual reality" display of view through video camera on Mars lander spacecraft of Mars Pathfinder mission, along with display of pertinent textual and graphical data, for use by scientific investigators in planning sequences of activities for mission.
Weather Fundamentals: Clouds. [Videotape].
ERIC Educational Resources Information Center
1998
The videos in this educational series, for grades 4-7, help students understand the science behind weather phenomena through dramatic live-action footage, vivid animated graphics, detailed weather maps, and hands-on experiments. This episode (23 minutes) discusses how clouds form, the different types of clouds, and the important role they play in…
Restored Moonwalk Footage Release
2009-07-15
Graphics showing how TV signals were sent from the Apollo 11 mission back to Earth are shown on a large video monitor above panelists at NASA's briefing where restored Apollo 11 moonwalk footage was revealed for the first time at the Newseum, Thursday, July 16, 2009, in Washington, DC. Photo Credit: (NASA/Carla Cioffi)
Generation of animation sequences of three dimensional models
NASA Technical Reports Server (NTRS)
Poi, Sharon (Inventor); Bell, Brad N. (Inventor)
1990-01-01
The invention is directed toward a method and apparatus for generating an animated sequence through the movement of three-dimensional graphical models. A plurality of pre-defined graphical models are stored and manipulated in response to interactive commands or by means of a pre-defined command file. The models may be combined as part of a hierarchical structure to represent physical systems without need to create a separate model which represents the combined system. System motion is simulated through the introduction of translation, rotation and scaling parameters upon a model within the system. The motion is then transmitted down through the system hierarchy of models in accordance with hierarchical definitions and joint movement limitations. The present invention also calls for a method of editing hierarchical structure in response to interactive commands or a command file such that a model may be included, deleted, copied or moved within multiple system model hierarchies. The present invention also calls for the definition of multiple viewpoints or cameras which may exist as part of a system hierarchy or as an independent camera. The simulated movement of the models and systems is graphically displayed on a monitor and a frame is recorded by means of a video controller. Multiple movement and hierarchy manipulations are then recorded as a sequence of frames which may be played back as an animation sequence on a video cassette recorder.
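A minimal sketch (not the patented implementation) of how motion introduced at one model propagates down a hierarchy of three-dimensional models: each node stores a local 4×4 transform, and world transforms are composed parent-to-child when a frame is generated. Class and function names are illustrative.

```python
import numpy as np

class Node:
    def __init__(self, name, local=np.eye(4)):
        self.name, self.local, self.children = name, np.array(local), []

    def add(self, child):
        self.children.append(child)
        return child

    def world_transforms(self, parent=np.eye(4), out=None):
        """Compose transforms down the hierarchy; parent motion carries to descendants."""
        out = {} if out is None else out
        world = parent @ self.local
        out[self.name] = world
        for c in self.children:
            c.world_transforms(world, out)
        return out

def translate(dx, dy, dz):
    t = np.eye(4); t[:3, 3] = [dx, dy, dz]; return t

arm = Node("arm")
hand = arm.add(Node("hand", translate(0.0, 1.0, 0.0)))
arm.local = translate(2.0, 0.0, 0.0)        # moving the arm also moves the hand
frame_transforms = arm.world_transforms()   # per-frame poses to render and record
```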
Social Properties of Mobile Video
NASA Astrophysics Data System (ADS)
Mitchell, April Slayden; O'Hara, Kenton; Vorbau, Alex
Mobile video is now an everyday possibility with a wide array of commercially available devices, services, and content. These new technologies have created dramatic shifts in the way video-based media can be produced, consumed, and delivered by people beyond the familiar behaviors associated with fixed TV and video technologies. Such technology revolutions change the way users behave and change their expectations regarding their mobile video experiences. Building upon earlier studies of mobile video, this paper reports on a study using diary techniques and ethnographic interviews to better understand how people are using commercially available mobile video technologies in their everyday lives. Drawing on reported episodes of mobile video behavior, the study identifies the social motivations and values underpinning these behaviors, which help characterize mobile video consumption beyond the simplistic notion of viewing video only to kill time. This paper also discusses the significance of user-generated content and the usage of video in social communities through the description of two mobile video technology services that allow users to create and share content. Implications for the adoption and design of mobile video technologies and services are discussed as well.
APRON: A Cellular Processor Array Simulation and Hardware Design Tool
NASA Astrophysics Data System (ADS)
Barr, David R. W.; Dudek, Piotr
2009-12-01
We present a software environment for the efficient simulation of cellular processor arrays (CPAs). This software (APRON) is used to explore algorithms that are designed for massively parallel fine-grained processor arrays, topographic multilayer neural networks, vision chips with SIMD processor arrays, and related architectures. The software uses a highly optimised core combined with a flexible compiler to provide the user with tools for the design of new processor array hardware architectures and the emulation of existing devices. We present performance benchmarks for the software processor array implemented on standard commodity microprocessors. APRON can be configured to use additional processing hardware if necessary and can be used as a complete graphical user interface and development environment for new or existing CPA systems, allowing more users to develop algorithms for CPA systems.
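A minimal sketch of emulating a fine-grained cellular processor array on a commodity processor, in the spirit of (but independent of) APRON: one register plane per processing element, with the same instruction applied to all elements and neighbour access via array shifts. Names and the example operation are illustrative.

```python
import numpy as np

class CPA:
    """Toy cellular processor array: each grid cell is one processing element."""
    def __init__(self, rows, cols):
        self.A = np.zeros((rows, cols), dtype=np.float32)   # one register plane

    def load(self, image):
        self.A = image.astype(np.float32)

    def neighbour_mean(self):
        """One SIMD step: every PE averages its 4-connected neighbours (a blur)."""
        n = (np.roll(self.A, 1, 0) + np.roll(self.A, -1, 0)
             + np.roll(self.A, 1, 1) + np.roll(self.A, -1, 1))
        self.A = n / 4.0
        return self.A

cpa = CPA(64, 64)
cpa.load(np.random.rand(64, 64))
for _ in range(3):          # three array-wide instructions
    cpa.neighbour_mean()
```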
NASA Technical Reports Server (NTRS)
Mcanulty, M. A.
1986-01-01
The Orbital Maneuvering Vehicle (OMV) is intended to close with orbiting targets for relocation or servicing. It will be controlled via video signals and thruster activation based upon Earth or space station directives. A human operator is squarely in the middle of the control loop for close work. Without directly addressing future, more autonomous versions of a remote servicer, several techniques that will doubtless be important in a future increase of autonomy also have some direct application to the current situation, particularly in the areas of image enhancement and predictive analysis. Several techniques are presented, and a few have been implemented, which support a machine vision capability proposed to be adequate for detection, recognition, and tracking. Once feasibly implemented, they must then be further modified to operate together in real time. This may be achieved by two courses of action: the use of an array processor and some initial steps toward data reduction. The methodology for adapting to a vector architecture is discussed in preliminary form, and a highly tentative rationale for data reduction at the front end is also discussed. As a by-product, a working implementation of the most advanced graphic display technique, ray-casting, is described.
MultiElec: A MATLAB Based Application for MEA Data Analysis.
Georgiadis, Vassilis; Stephanou, Anastasis; Townsend, Paul A; Jackson, Thomas R
2015-01-01
We present MultiElec, an open source MATLAB based application for data analysis of microelectrode array (MEA) recordings. MultiElec provides an extremely user-friendly graphical user interface (GUI) that allows the simultaneous display and analysis of voltage traces for 60 electrodes and includes functions for activation-time determination and the production of activation-time heat maps with isoline display. Furthermore, local conduction velocities are semi-automatically calculated along with their corresponding vector plots. MultiElec allows ad hoc signal suppression, enabling the user to easily and efficiently handle signal artefacts and analyse incomplete data sets. Voltage traces and heat maps can be simply exported for figure production and presentation. In addition, our platform is able to produce 3D videos of signal progression over all 60 electrodes. Functions are controlled entirely by a single GUI with no need for command line input or any understanding of MATLAB code. MultiElec is open source under the terms of the GNU General Public License as published by the Free Software Foundation, version 3. Both the program and source code are available to download from http://www.cancer.manchester.ac.uk/MultiElec/.
NASA Astrophysics Data System (ADS)
Wang, Jun; Min, Kyeong-Yuk; Chong, Jong-Wha
2010-11-01
Overdrive is commonly used to reduce the liquid-crystal response time and motion blur in liquid-crystal displays (LCDs). However, overdrive requires a large frame memory in order to store the previous frame for reference. In this paper, a high-compression-ratio codec is presented to compress the image data stored in the on-chip frame memory so that only 1 Mbit of on-chip memory is required in the LCD overdrive of mobile devices. The proposed algorithm further compresses the color bitmaps and representative values (RVs) resulting from block truncation coding (BTC). The color bitmaps are represented by a luminance bitmap, which is further reduced and reconstructed using median filter interpolation in the decoder, while the RVs are compressed using adaptive quantization coding (AQC). Interpolation and AQC can provide three-level compression, which leads to 16 combinations. Using a rate-distortion analysis, we select the three optimal schemes to compress the image data for video graphics array (VGA), wide-VGA LCD, and standard-definition TV applications. Our simulation results demonstrate that the proposed schemes outperform interpolation BTC both in PSNR (by 1.479 to 2.205 dB) and in subjective visual quality.
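The codec builds on block truncation coding, which reduces each block to a bitmap plus two representative values, the quantities the paper then compresses further with interpolation and AQC. A minimal BTC sketch of just that first step is below; the block values are illustrative and none of the paper's later stages are reproduced.

```python
# A minimal block-truncation-coding sketch: one 4x4 block becomes a bitmap and two
# representative values (RVs). The median-filter interpolation and AQC stages are omitted.
import numpy as np

def btc_encode(block):
    """Encode one 4x4 luminance block as (bitmap, low_RV, high_RV)."""
    mean = block.mean()
    bitmap = block >= mean
    hi = block[bitmap].mean() if bitmap.any() else mean
    lo = block[~bitmap].mean() if (~bitmap).any() else mean
    return bitmap, lo, hi

def btc_decode(bitmap, lo, hi):
    return np.where(bitmap, hi, lo)

block = np.array([[ 12,  15, 200, 210],
                  [ 10,  18, 205, 212],
                  [ 11,  16, 198, 208],
                  [ 13,  14, 202, 209]], dtype=float)
bitmap, lo, hi = btc_encode(block)
print(np.round(btc_decode(bitmap, lo, hi)))
```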
Gray, A J; Beecher, D E; Olson, M V
1984-01-01
A stand-alone, interactive computer system has been developed that automates the analysis of ethidium bromide-stained agarose and acrylamide gels on which DNA restriction fragments have been separated by size. High-resolution digital images of the gels are obtained using a camera that contains a one-dimensional, 2048-pixel photodiode array that is mechanically translated through 2048 discrete steps in a direction perpendicular to the gel lanes. An automatic band-detection algorithm is used to establish the positions of the gel bands. A color-video graphics system, on which both the gel image and a variety of operator-controlled overlays are displayed, allows the operator to visualize and interact with critical stages of the analysis. The principal interactive steps involve defining the regions of the image that are to be analyzed and editing the results of the band-detection process. The system produces a machine-readable output file that contains the positions, intensities, and descriptive classifications of all the bands, as well as documentary information about the experiment. This file is normally further processed on a larger computer to obtain fragment-size assignments. PMID:6320097
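As a hedged sketch of the kind of automatic band detection described (not the authors' algorithm), one can locate intensity peaks along a digitized lane profile; the synthetic profile and the prominence threshold below are purely illustrative.

```python
# Locate candidate gel bands as intensity peaks along one 2048-pixel lane profile.
# The profile here is synthetic: three Gaussian "bands" over a sloping background.
import numpy as np
from scipy.signal import find_peaks

x = np.arange(2048)
profile = (0.02 * x / 2048
           + np.exp(-((x - 400) / 12.0) ** 2)
           + 0.6 * np.exp(-((x - 900) / 15.0) ** 2)
           + 0.3 * np.exp(-((x - 1500) / 10.0) ** 2))

peaks, props = find_peaks(profile, prominence=0.1)
for pos, prom in zip(peaks, props["prominences"]):
    print(f"band at pixel {pos}, prominence {prom:.2f}")
```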
Video transmission on ATM networks. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Chen, Yun-Chung
1993-01-01
The broadband integrated services digital network (B-ISDN) is expected to provide high-speed and flexible multimedia applications. Multimedia includes data, graphics, image, voice, and video. Asynchronous transfer mode (ATM) is the adopted transport technique for B-ISDN and has the potential for providing a more efficient and integrated environment for multimedia. It is believed that most broadband applications will make heavy use of visual information. The prospect of widespread use of image and video communication has led to interest in coding algorithms for reducing bandwidth requirements and improving image quality. The major results of a study on the bridging of network transmission performance and video coding are: Using two representative video sequences, several video source models are developed. The fitness of these models is validated through the use of statistical tests and network queuing performance. A dual leaky bucket algorithm is proposed as an effective network policing function. The concept of the dual leaky bucket algorithm can be applied to a prioritized coding approach to achieve transmission efficiency. A mapping of the performance/control parameters at the network level into equivalent parameters at the video coding level is developed. Based on that, a complete set of principles for the design of video codecs for network transmission is proposed.
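As a rough sketch of the dual-leaky-bucket idea (class names and parameters are illustrative, not the thesis' policer), each arriving cell must conform to both a peak-rate bucket and a sustained-rate bucket before it is admitted.

```python
# A minimal dual-leaky-bucket policer: one bucket polices the peak cell rate, the
# other the sustained rate; a cell conforms only if both buckets accept it.
class LeakyBucket:
    def __init__(self, rate, depth):
        self.rate, self.depth = rate, depth   # leak rate (cells/s), bucket size (cells)
        self.level, self.last_t = 0.0, 0.0

    def conforms(self, t, size=1.0):
        self.level = max(0.0, self.level - self.rate * (t - self.last_t))
        self.last_t = t
        if self.level + size <= self.depth:
            self.level += size
            return True
        return False          # non-conforming cell: tag or drop, bucket not charged

class DualLeakyBucket:
    def __init__(self, peak_rate, peak_depth, sustained_rate, burst_depth):
        self.peak = LeakyBucket(peak_rate, peak_depth)
        self.sustained = LeakyBucket(sustained_rate, burst_depth)

    def police(self, t):
        # A cell dropped by the peak-rate bucket is not charged to the sustained bucket.
        return self.peak.conforms(t) and self.sustained.conforms(t)

policer = DualLeakyBucket(peak_rate=1000.0, peak_depth=2.0,
                          sustained_rate=300.0, burst_depth=50.0)
arrivals = [i * 0.0005 for i in range(100)]          # a 2000 cell/s burst
print(sum(policer.police(t) for t in arrivals), "of", len(arrivals), "cells conform")
```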
Adaptive smart simulator for characterization and MPPT construction of PV array
NASA Astrophysics Data System (ADS)
Ouada, Mehdi; Meridjet, Mohamed Salah; Dib, Djalel
2016-07-01
Partial shading is among the most important problems in large photovoltaic arrays. Much of the literature addresses the modeling, control, and optimization of photovoltaic conversion of solar energy under partial shading conditions. The aim of this study is to build a software simulator, analogous to a hardware simulator, that produces a shading pattern for the proposed photovoltaic array so that the delivered information can be used to obtain an optimal configuration of the PV array and to construct an MPPT algorithm. Graphical user interfaces (Matlab GUI) are built using a developed script; the tool is simple, easy to use, and responsive, and the simulator supports large-array simulations that can be interfaced with MPPT and power electronic converters.
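As a hedged illustration of the two pieces such a simulator couples, a multi-peak P-V curve under partial shading and an MPPT routine, the sketch below uses a toy curve and a plain perturb-and-observe step; the curve shape, voltages, and function names are assumptions, not the authors' model.

```python
# Toy P-V characteristic with two peaks (as seen under partial shading) plus a
# plain perturb-and-observe MPPT step. All parameters are illustrative.
import numpy as np

def pv_power(v):
    """Assumed two-peak P-V curve for a partially shaded string (watts vs volts)."""
    return 40 * np.exp(-((v - 12) / 4.0) ** 2) + 100 * np.exp(-((v - 28) / 5.0) ** 2)

def perturb_and_observe(v0=20.0, step=0.5, iters=60):
    v, p = v0, pv_power(v0)
    for _ in range(iters):
        v_new = v + step
        p_new = pv_power(v_new)
        if p_new < p:             # power dropped: reverse the perturbation direction
            step = -step
        v, p = v_new, p_new
    return v, p

v_mpp, p_mpp = perturb_and_observe()
print(f"operating point ~{v_mpp:.1f} V, {p_mpp:.1f} W")
```

Note that plain perturb-and-observe can lock onto the lower local peak depending on the starting voltage, which is exactly why shading-aware characterization of the array is of interest.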
Besse, Nadine; Rosset, Samuel; Zarate, Juan Jose; Ferrari, Elisabetta; Brayda, Luca; Shea, Herbert
2018-01-01
We present a fully latching and scalable 4 × 4 haptic display with 4 mm pitch, 5 s refresh time, 400 mN holding force, and 650 μm displacement per taxel. The display serves to convey dynamic graphical information to blind and visually impaired users. Combining significant holding force with high taxel density and large amplitude motion in a very compact overall form factor was made possible by exploiting the reversible, fast, hundred-fold change in the stiffness of a thin shape memory polymer (SMP) membrane when heated above its glass transition temperature. Local heating is produced using an addressable array of 3-mm-diameter stretchable microheaters patterned on the SMP. Each taxel is selectively and independently actuated by synchronizing the local Joule heating with a single pressure supply. Switching off the heating locks each taxel into its position (up or down), enabling any array configuration to be held with zero power consumption. A 3D-printed pin array is mounted over the SMP membrane, providing the user with a smooth, room-temperature array of movable pins to explore by touch. Perception tests were carried out with 24 blind users, resulting in 70 percent correct pattern recognition over a 12-word tactile dictionary.
Graphical User Interface for a Dual-Module EMCCD X-ray Detector Array.
Wang, Weiyuan; Ionita, Ciprian; Kuhls-Gilcrist, Andrew; Huang, Ying; Qu, Bin; Gupta, Sandesh K; Bednarek, Daniel R; Rudin, Stephen
2011-03-16
A new Graphical User Interface (GUI) was developed using Laboratory Virtual Instrumentation Engineering Workbench (LabVIEW) for a high-resolution, high-sensitivity Solid State X-ray Image Intensifier (SSXII), which is a new x-ray detector for radiographic and fluoroscopic imaging, consisting of an array of Electron-Multiplying CCDs (EMCCDs) each having a variable on-chip electron-multiplication gain of up to 2000× to reduce the effect of readout noise. To enlarge the field-of-view (FOV), each EMCCD sensor is coupled to an x-ray phosphor through a fiberoptic taper. Two EMCCD camera modules are used in our prototype to form a computer-controlled array; however, larger arrays are under development. The new GUI provides patient registration, EMCCD module control, image acquisition, and patient image review. Images from the array are stitched into a 2k×1k pixel image that can be acquired and saved at a rate of 17 Hz (faster with pixel binning). When reviewing the patient's data, the operator can select images from the patient's directory tree listed by the GUI and cycle through the images using a slider bar. Commonly used camera parameters including exposure time, trigger mode, and individual EMCCD gain can be easily adjusted using the GUI. The GUI is designed to accommodate expansion of the EMCCD array to even larger FOVs with more modules. The high-resolution, high-sensitivity EMCCD modular-array SSXII imager with the new user-friendly GUI should enable angiographers and interventionalists to visualize smaller vessels and endovascular devices, helping them to make more accurate diagnoses and to perform more precise image-guided interventions.
Chi, Bryan; DeLeeuw, Ronald J; Coe, Bradley P; MacAulay, Calum; Lam, Wan L
2004-02-09
Array comparative genomic hybridization (CGH) is a technique which detects copy number differences in DNA segments. Complete sequencing of the human genome and the development of an array representing a tiling set of tens of thousands of DNA segments spanning the entire human genome have made high resolution copy number analysis throughout the genome possible. Since array CGH provides a signal ratio for each DNA segment, visualization requires the reassembly of individual data points into chromosome profiles. We have developed a visualization tool for displaying whole genome array CGH data in the context of chromosomal location. SeeGH is an application that translates spot signal ratio data from array CGH experiments to displays of high resolution chromosome profiles. Data is imported from a simple tab delimited text file obtained from standard microarray image analysis software. SeeGH processes the signal ratio data and graphically displays it in a conventional CGH karyotype diagram with the added features of magnification and DNA segment annotation. In this process, SeeGH imports the data into a database, calculates the average ratio and standard deviation for each replicate spot, and links them to chromosome regions for graphical display. Once the data is displayed, users have the option of hiding or flagging DNA segments based on user-defined criteria, and of retrieving annotation information such as clone name, NCBI sequence accession number, ratio, base pair position on the chromosome, and standard deviation. SeeGH represents a novel software tool used to view and analyze array CGH data. The software gives users the ability to view the data in an overall genomic view as well as to magnify specific chromosomal regions, facilitating the precise localization of genetic alterations. SeeGH is easily installed and runs on Microsoft Windows 2000 or later environments.
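As a minimal sketch of the replicate-averaging step described above (not SeeGH itself, which is a compiled Windows application), the per-clone mean ratio and standard deviation could be computed as follows; the column names and values are assumptions for illustration.

```python
# Average the log2 signal ratio over replicate spots for each clone and keep the
# standard deviation for later flagging. Clone names and ratios are illustrative.
import pandas as pd

spots = pd.DataFrame({
    "clone":      ["RP11-1A1", "RP11-1A1", "RP11-1A1", "RP11-2B2", "RP11-2B2"],
    "chromosome": ["1", "1", "1", "1", "1"],
    "start_bp":   [150000, 150000, 150000, 480000, 480000],
    "log2_ratio": [0.42, 0.38, 0.45, -0.05, 0.02],
})

summary = (spots.groupby(["clone", "chromosome", "start_bp"])["log2_ratio"]
                .agg(mean_ratio="mean", std_ratio="std", n_spots="count")
                .reset_index()
                .sort_values(["chromosome", "start_bp"]))
print(summary)
```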
Techniques for animation of CFD results. [computational fluid dynamics
NASA Technical Reports Server (NTRS)
Horowitz, Jay; Hanson, Jeffery C.
1992-01-01
Video animation is becoming increasingly vital to the computational fluid dynamics researcher, not just for presentation, but for recording and comparing dynamic visualizations that are beyond the current capabilities of even the most powerful graphic workstation. To meet these needs, Lewis Research Center has recently established a facility to provide users with easy access to advanced video animation capabilities. However, producing animation that is both visually effective and scientifically accurate involves various technological and aesthetic considerations that must be understood both by the researcher and those supporting the visualization process. These considerations include: scan conversion, color conversion, and spatial ambiguities.
SIRSALE: integrated video database management tools
NASA Astrophysics Data System (ADS)
Brunie, Lionel; Favory, Loic; Gelas, J. P.; Lefevre, Laurent; Mostefaoui, Ahmed; Nait-Abdesselam, F.
2002-07-01
Video databases became an active field of research during the last decade. The main objective of such systems is to provide users with user-friendly capabilities to search, access, and play back distributed stored video data in the same way as they do for traditional distributed databases. Hence, such systems need to deal with hard issues: (a) video documents generate huge volumes of data and are time sensitive (streams must be delivered at a specific bitrate), and (b) the content of video data is very hard to extract automatically and must be annotated by humans. To cope with these issues, many approaches have been proposed in the literature, including data models, query languages, and video indexing. In this paper, we present SIRSALE: a set of video database management tools that allow users to manipulate video documents and streams stored in large distributed repositories. All the proposed tools are based on generic models that can be customized for specific applications using ad-hoc adaptation modules. More precisely, SIRSALE allows users to: (a) browse video documents by structure (sequences, scenes, shots) and (b) query the video database content by using a graphical tool adapted to the nature of the target video documents. This paper also presents an annotation interface which allows archivists to describe the content of video documents. All these tools are coupled to a video player integrating remote VCR functionalities and are based on active network technology. We then present how dedicated active services allow optimized transport for video streams (with Tamanoir active nodes). We describe experiments using SIRSALE on an archive of news video and soccer matches. The system has been demonstrated to professionals with positive feedback. Finally, we discuss open issues and present some perspectives.
Potential digitization/compression techniques for Shuttle video
NASA Technical Reports Server (NTRS)
Habibi, A.; Batson, B. H.
1978-01-01
The Space Shuttle initially will be using a field-sequential color television system but it is possible that an NTSC color TV system may be used for future missions. In addition to downlink color TV transmission via analog FM links, the Shuttle will use a high resolution slow-scan monochrome system for uplink transmission of text and graphics information. This paper discusses the characteristics of the Shuttle video systems, and evaluates digitization and/or bandwidth compression techniques for the various links. The more attractive techniques for the downlink video are based on a two-dimensional DPCM encoder that utilizes temporal and spectral as well as the spatial correlation of the color TV imagery. An appropriate technique for distortion-free coding of the uplink system utilizes two-dimensional HCK codes.
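As a hedged, spatial-only illustration of DPCM (the Shuttle study's encoder also exploits temporal and spectral correlation, which is omitted here), each pixel is predicted from its reconstructed left neighbour and only the quantized prediction error is coded; the step size and sample values are illustrative.

```python
# One-dimensional DPCM along an image row: predict from the reconstructed left
# neighbour, quantize the prediction error, and track the decoder's reconstruction.
import numpy as np

def dpcm_row(row, step=8):
    recon = np.empty_like(row, dtype=float)
    codes = np.empty_like(row, dtype=int)
    pred = 128.0                                    # fixed predictor for the first pixel
    for i, pixel in enumerate(row.astype(float)):
        err = pixel - pred
        codes[i] = int(np.round(err / step))        # quantized prediction error
        recon[i] = np.clip(pred + codes[i] * step, 0, 255)
        pred = recon[i]                             # predictor tracks the decoder
    return codes, recon

row = np.array([120, 122, 125, 130, 200, 205, 207, 90], dtype=float)
codes, recon = dpcm_row(row)
print(codes, recon)
```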
Hammoud, Riad I.; Sahin, Cem S.; Blasch, Erik P.; Rhodes, Bradley J.; Wang, Tao
2014-01-01
We describe two advanced video analysis techniques, including video-indexed by voice annotations (VIVA) and multi-media indexing and explorer (MINER). VIVA utilizes analyst call-outs (ACOs) in the form of chat messages (voice-to-text) to associate labels with video target tracks, to designate spatial-temporal activity boundaries and to augment video tracking in challenging scenarios. Challenging scenarios include low-resolution sensors, moving targets and target trajectories obscured by natural and man-made clutter. MINER includes: (1) a fusion of graphical track and text data using probabilistic methods; (2) an activity pattern learning framework to support querying an index of activities of interest (AOIs) and targets of interest (TOIs) by movement type and geolocation; and (3) a user interface to support streaming multi-intelligence data processing. We also present an activity pattern learning framework that uses the multi-source associated data as training to index a large archive of full-motion videos (FMV). VIVA and MINER examples are demonstrated for wide aerial/overhead imagery over common data sets affording an improvement in tracking from video data alone, leading to 84% detection with modest misdetection/false alarm results due to the complexity of the scenario. The novel use of ACOs and chat messages in video tracking paves the way for user interaction, correction and preparation of situation awareness reports. PMID:25340453
General characteristics of preliminary data processing in the Copernicus experiment
NASA Technical Reports Server (NTRS)
Ziolkovski, K.; Kossatski, K.
1975-01-01
Data from the 'Copernicus' experiment is processed in four stages: setting up of basic arrays, data calibration, graphical display of results, and assignment of results to navigation parameters. Each stage is briefly discussed.
ERIC Educational Resources Information Center
Forrest, Charles
1988-01-01
Reviews technological developments centered around microcomputers that have led to the design of integrated workstations. Topics discussed include methods of information storage, information retrieval, telecommunications networks, word processing, data management, graphics, interactive video, sound, interfaces, artificial intelligence, hypermedia,…
Using Presentation Software to Flip an Undergraduate Analytical Chemistry Course
ERIC Educational Resources Information Center
Fitzgerald, Neil; Li, Luisa
2015-01-01
An undergraduate analytical chemistry course has been adapted to a flipped course format. Course content was provided by video clips, text, graphics, audio, and simple animations organized as concept maps using the cloud-based presentation platform, Prezi. The advantages of using Prezi to present course content in a flipped course format are…
Note-Taking Habits of Online Students: Value, Quality, and Support
ERIC Educational Resources Information Center
Watkins, Ryan; Corry, Michael; Dardick, William; Stella, Julie
2015-01-01
Do online students take notes when reading lecture content or watching video lectures? Can they benefit from note-taking supports, such as graphic organizers, to improve their study skills? These are among the questions explored in a pilot study with student participants enrolled in a 100% online graduate program. Students were provided academic…
Computer Generated Graphics in Television Advertising.
ERIC Educational Resources Information Center
Ulloth, Dana
An organization that has led the way in opening new frontiers in using advanced technology to create innovative commercials is Charlex, a New York-based company in business since 1977. Charlex has produced music videos, and also commercials for Diet Pepsi, Cherry Coke, Crest Toothpaste, and White Mountain Cooler. It has used a wide range of…
Visual Basic Applications to Physics Teaching
ERIC Educational Resources Information Center
Chitu, Catalin; Inpuscatu, Razvan Constantin; Viziru, Marilena
2011-01-01
Derived from the BASIC language, VB (Visual Basic) is a programming language focused on the video interface component. With graphics and functional components implemented, the programmer is able to bring and use these components to achieve the desired application in a relatively short time. The VB language is a useful tool in physics teaching by creating…
Be Smart! Don't Start! Just Say No!
ERIC Educational Resources Information Center
Children's Television Workshop, New York, NY.
This magazine-style publication was developed to help children and young adolescents say "No" to alcohol. Profusely illustrated with color photographs and other graphics, the guide includes a preview of the Jets' new music video; a question-and-answer section about drinking and alcoholic beverages; a set of reasons for not drinking; and a section…
The Effects of Captions on EFL Learners' Comprehension of English-Language Television Programs
ERIC Educational Resources Information Center
Rodgers, Michael P. H.; Webb, Stuart
2017-01-01
The Multimedia Principle (Fletcher & Tobias, 2005) states that people learn better and comprehend more when words and pictures are presented together. The potential for English language learners to increase their comprehension of video through the use of captions, which graphically display the same language as the spoken dialogue, has been…
ERIC Educational Resources Information Center
Center for Renewable Energy and Sustainable Tech., Washington, DC.
An educational tool concerning renewable energy and the environment, this CD-ROM provides nearly 1,000 screens of text, graphics, videos, and interactive exercises. It also provides a detailed index, charts of U.S. energy consumption by state, an energy glossary, and a list of related Web sites. This CD-ROM, additionally, offers "The School…
ERIC Educational Resources Information Center
Stalker, Sandra
"Journey Home," an interactive CD-ROM program about Homer's "Odyssey," was produced at North Shore Community College (Massachusetts) to create an innovative method for teaching literature. Based on a prototype developed on an Apple II, the program incorporates video, text, graphics, music, and artwork related to the Odyssey and…
Constructing Liminal Blends in a Collaborative Augmented-Reality Learning Environment
ERIC Educational Resources Information Center
Enyedy, Noel; Danish, Joshua A.; DeLiema, David
2015-01-01
In vision-based augmented-reality (AR) environments, users view the physical world through a video feed or device that "augments" the display with a graphical or informational overlay. Our goal in this manuscript is to ask "how" and "why" these new technologies create opportunities for learning. We suggest that AR is…
The State of Simulations: Soft-Skill Simulations Emerge as a Powerful New Form of E-Learning.
ERIC Educational Resources Information Center
Aldrich, Clark
2001-01-01
Presents responses of leaders from six simulation companies about challenges and opportunities of soft-skills simulations in e-learning. Discussion includes: evaluation metrics; role of subject matter experts in developing simulations; video versus computer graphics; technology needed to run simulations; technology breakthroughs; pricing;…
ERIC Educational Resources Information Center
Faulkner, Thomas P.; Sprague, Jon E.
1996-01-01
A multimedia approach to drug therapy for Parkinson's Disease, part of a pharmacy school central nervous system course, integrated use of lecture, textbook, video/graphic technology, the movie "Awakenings," Internet and World Wide Web, and an interactive animated movie. A followup questionnaire found generally positive student attitudes…
Application of Multimedia Technologies to Enhance Distance Learning
ERIC Educational Resources Information Center
Buckley, Wendy; Smith, Alexandra
2008-01-01
Educators' use of multimedia enhances the online learning experience by presenting content in a combination of audio, video, graphics, and text in various formats to address a range of student learning styles. Many personnel preparation programs in visual impairments have turned to online education to serve students over a larger geographic area.…
The Use of Digitized Images in Developing Software for Young Children.
ERIC Educational Resources Information Center
Wright, June L.
1992-01-01
Parents and children interacted with computer-based representations of a park, one with animated picture graphics and one with digitized full motion video. Children who interacted with the digitized representation replayed the program more and showed a stronger cognitive focus on the representation than did the other children. (LB)
78 FR 41084 - Solicitation for a Cooperative Agreement-Video Production: Direct Supervision Jails
Federal Register 2010, 2011, 2012, 2013, 2014
2013-07-09
... narration, interviews, graphics, and footage shot in jails. This project will be a collaborative venture... solicitation, one award will be made. Funds awarded under this solicitation may only be used for activities... the complete production. Project Director The production company will assign one staff to oversee the...
NASA Astrophysics Data System (ADS)
Thoma, George R.
1996-03-01
The virtual digital library, a concept that is quickly becoming a reality, offers rapid and geography-independent access to stores of text, images, graphics, motion video and other datatypes. Furthermore, a user may move from one information source to another through hypertext linkages. The projects described here further the notion of such an information paradigm from an end user viewpoint.
Trends in Vocational Education in the Arts 1980. Fast Response Survey System.
ERIC Educational Resources Information Center
Wright, Douglas A.; Farris, Elizabeth
A study examined the nature and extent of vocational arts education programs throughout the 50 states. During the study, data were sought concerning those programs designed to prepare students for occupations in 14 arts areas: dance; vocal music; instrumental music; theater; radio, television, and video; cinematography; photography; graphic arts;…
Comprehension and Recall of Television's Computerized Image: An Exploratory Study.
ERIC Educational Resources Information Center
Metallinos, Nikos; Chartrand, Sylvie
This exploratory study of the effects of the new visual communications media imagery (e.g., video games, digital television, and computer graphics) on the visual perception process is designed to provide a theoretical framework for research, introduce appropriate research instruments for such study, and experiment with the application of biometric…
ERIC Educational Resources Information Center
Ivy, Diana K.; And Others
Continuous Attitudinal Response Technology (CART) is an alternative approach to testing students' instantaneous response to teacher behaviors in the classroom. The system uses a microcomputer and video technology device that allows researchers to measure subjects' instantaneous responses to static and continuous stimuli, graphic or verbal. A…
Producing Videotape Programs for Computer Training: An Example with AMA/NET
Novey, Donald W.
1990-01-01
To facilitate user proficiency with AMA/Net, an 80-minute training videotape has been produced. The production was designed to use videotape's advantages, where information and emotion are combined; and to accommodate its chief disadvantage, lack of resolution for fine text, with close-ups and graphics. Content of the videotape was conceived, outlined, demonstrated with simultaneous text capture, edited into script form, narration added, and scripts marked for videotaping and narrating. Videotaping was performed with actual keyboard sounds for realism. The recording was divided into four areas: office mock-up, keyboard close-ups, scan-conversion and screen close-ups. Once the footage was recorded, it was logged and rough-edited. Care was taken to balance the pace of the program with visual stimulation and amount of narration. The final edit was performed as a culmination of all scripts, video materials and rough edit, with graphics and steady change of visual information offsetting the static nature of the screen display. Carefully planned video programs can be a useful and economical adjunct in the training process for online services.
ERIC Educational Resources Information Center
Groeneveld, Marleen G.; Vermeer, Harriet J.; van IJzendoorn, Marinus H.; Linting, Mariëlle
2016-01-01
Background: The childcare environment offers a wide array of developmental opportunities for children. Providing children with a feeling of security to explore this environment is one of the most fundamental goals of childcare. Objective: In the current study the effectiveness of Video-feedback Intervention to promote Positive Parenting-Child Care…
Coal-seismic computer programs in BASIC: Part I; Store, plot, and edit array data
Hasbrouck, Wilfred P.
1979-01-01
Processing of geophysical data taken with the U.S. Geological Survey's coal-seismic system is done with a desk-top, stand-alone computer. Programs for this computer are written in an extended BASIC language specially augmented for acceptance by the Tektronix 4051 Graphic System. This report presents five computer programs used to store, plot, and edit array data for the line, cross, and triangle arrays commonly employed in our coal-seismic investigations. * Use of brand names in this report is for descriptive purposes only and does not constitute endorsement by the U.S. Geological Survey.
Luo, Xiongbiao; Mori, Kensaku
2014-06-01
Endoscope 3-D motion tracking, which seeks to synchronize pre- and intra-operative images in endoscopic interventions, is usually performed as video-volume registration that optimizes the similarity between endoscopic video and pre-operative images. The tracking performance, in turn, depends significantly on whether a similarity measure can successfully characterize the difference between video sequences and volume rendering images driven by pre-operative images. The paper proposes a discriminative structural similarity measure, which uses the degradation of structural information and takes image correlation or structure, luminance, and contrast into consideration, to boost video-volume registration. By applying the proposed similarity measure to endoscope tracking, it was demonstrated to be more accurate and robust than several available similarity measures, e.g., local normalized cross correlation, normalized mutual information, modified mean square error, and normalized sum squared difference. Based on clinical data evaluation, the tracking error was reduced significantly from at least 14.6 mm to 4.5 mm. Processing was accelerated to more than 30 frames per second using a graphics processing unit.
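As a hedged baseline for the kind of luminance, contrast, and structure comparison the paper builds on (this is a conventional global SSIM-style score, not the proposed discriminative measure), one could compute the following; the stabilizing constants and test images are the usual illustrative defaults.

```python
# Global SSIM-style similarity between two gray images, e.g. a video frame and a
# volume-rendered image, combining luminance, contrast, and structure terms.
import numpy as np

def ssim_global(x, y, data_range=255.0):
    x = x.astype(float).ravel()
    y = y.astype(float).ravel()
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / ((mx**2 + my**2 + c1) * (vx + vy + c2))

rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(64, 64))
rendering = np.clip(frame + rng.normal(0, 10, size=(64, 64)), 0, 255)
print(f"SSIM-style similarity: {ssim_global(frame, rendering):.3f}")
```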
Presence for design: conveying atmosphere through video collages.
Keller, I; Stappers, P J
2001-04-01
Product designers use imagery for inspiration in their creative design process. To support creativity, designers apply many tools and techniques, which often rely on their ability to be inspired by found and previously made visual material and to experience the atmosphere of the user environment. Computer tools and developments in VR offer perspectives to support this kind of imagery and presence in the design process. But currently these possibilities come at too high a technological overhead and price to be usable in the design practice. This article proposes an expressive and technically lightweight approach using the possibilities of VR and computer tools, by creating a sketchy environment using video collages. Instead of relying on highly realistic or even "hyperreal" graphics, these video collages use lessons learned from theater and cinema to get a sense of atmosphere across. Product designers can use these video collages to reexperience their observations in the environment in which a product is to be used, and to communicate this atmosphere to their colleagues and clients. For user-centered design, video collages can also provide an environmental context for concept testing with prospective user groups.
Game On, Science - How Video Game Technology May Help Biologists Tackle Visualization Challenges
Da Silva, Franck; Empereur-mot, Charly; Chavent, Matthieu; Baaden, Marc
2013-01-01
The video games industry develops ever more advanced technologies to improve rendering, image quality, ergonomics and user experience of their creations providing very simple to use tools to design new games. In the molecular sciences, only a small number of experts with specialized know-how are able to design interactive visualization applications, typically static computer programs that cannot easily be modified. Are there lessons to be learned from video games? Could their technology help us explore new molecular graphics ideas and render graphics developments accessible to non-specialists? This approach points to an extension of open computer programs, not only providing access to the source code, but also delivering an easily modifiable and extensible scientific research tool. In this work, we will explore these questions using the Unity3D game engine to develop and prototype a biological network and molecular visualization application for subsequent use in research or education. We have compared several routines to represent spheres and links between them, using either built-in Unity3D features or our own implementation. These developments resulted in a stand-alone viewer capable of displaying molecular structures, surfaces, animated electrostatic field lines and biological networks with powerful, artistic and illustrative rendering methods. We consider this work as a proof of principle demonstrating that the functionalities of classical viewers and more advanced novel features could be implemented in substantially less time and with less development effort. Our prototype is easily modifiable and extensible and may serve others as starting point and platform for their developments. A webserver example, standalone versions for MacOS X, Linux and Windows, source code, screen shots, videos and documentation are available at the address: http://unitymol.sourceforge.net/. PMID:23483961
The scope of nonsuicidal self-injury on YouTube.
Lewis, Stephen P; Heath, Nancy L; St Denis, Jill M; Noble, Rick
2011-03-01
Nonsuicidal self-injury, the deliberate destruction of one's body tissue (e.g., self-cutting, burning) without suicidal intent, has consistent rates ranging from 14% to 24% among youth and young adults. With more youth using video-sharing Web sites (e.g., YouTube), this study examined the accessibility and scope of nonsuicidal self-injury videos online. Using YouTube's search engine (and the key words "self-injury" and "self-harm"), the 50 most viewed character (i.e., featuring a live individual) and noncharacter videos (100 total) were selected and examined across key quantitative and qualitative variables. The top 100 videos analyzed were viewed over 2 million times, and most (80%) were accessible to a general audience. Viewers rated the videos positively (M = 4.61, SD = 0.61, out of 5.0) and selected videos as a favorite over 12,000 times. The videos' tones were largely factual or educational (53%) or melancholic (51%). Explicit imagery of self-injury was common. Specifically, 90% of noncharacter videos had nonsuicidal self-injury photographs, whereas 28% of character videos had in-action nonsuicidal self-injury. For both, cutting was the most common method. Many videos (58%) do not warn about this content. The nature of nonsuicidal self-injury videos on YouTube may foster normalization of nonsuicidal self-injury and may reinforce the behavior through regular viewing of nonsuicidal self-injury-themed videos. Graphic videos showing nonsuicidal self-injury are frequently accessed and received positively by viewers. These videos largely provide nonsuicidal self-injury information and/or express a hopeless or melancholic message. Professionals working with youth and young adults who enact nonsuicidal self-injury need to be aware of the scope and nature of nonsuicidal self-injury on YouTube.
Knowledge representation in space flight operations
NASA Technical Reports Server (NTRS)
Busse, Carl
1989-01-01
In space flight operations rapid understanding of the state of the space vehicle is essential. Representation of knowledge depicting space vehicle status in a dynamic environment presents a difficult challenge. The NASA Jet Propulsion Laboratory has pursued areas of technology associated with the advancement of spacecraft operations environment. This has led to the development of several advanced mission systems which incorporate enhanced graphics capabilities. These systems include: (1) Spacecraft Health Automated Reasoning Prototype (SHARP); (2) Spacecraft Monitoring Environment (SME); (3) Electrical Power Data Monitor (EPDM); (4) Generic Payload Operations Control Center (GPOCC); and (5) Telemetry System Monitor Prototype (TSM). Knowledge representation in these systems provides a direct representation of the intrinsic images associated with the instrument and satellite telemetry and telecommunications systems. The man-machine interface includes easily interpreted contextual graphic displays. These interactive video displays contain multiple display screens with pop-up windows and intelligent, high resolution graphics linked through context and mouse-sensitive icons and text.
Blasting, graphical interfaces and Unix
DOE Office of Scientific and Technical Information (OSTI.GOV)
Knudsen, S.; Preece, D.S.
1993-11-01
A discrete element computer program, DMC (Distinct Motion Code) was developed to simulate blast-induced rock motion. To simplify the complex task of entering material and explosive design parameters as well as bench configuration, a full-featured graphical interface has been developed. DMC is currently executed on both Sun SPARCstation 2 and Sun SPARCstation 10 platforms and routinely used to model bench and crater blasting problems. This paper will document the design and development of the full-featured interface to DMC. The development of the interface will be tracked through the various stages, highlighting the adjustments made to allow the necessary parameters to be entered in terms and units that field blasters understand. The paper also discusses a novel way of entering non-integer numbers and the techniques necessary to display blasting parameters in an understandable visual manner. A video presentation will demonstrate the graphics interface and explain its use.
Graphic Depictions: Portrayals of Mental Illness in Video Games.
Shapiro, Samuel; Rotter, Merrill
2016-11-01
Although studies have examined portrayals of mental illness in the mass media, little attention has been paid to such portrayals in video games. In this descriptive study, the fifty highest-selling video games in each year from 2011 to 2013 were surveyed through application of search terms to the Wikia search engine, with subsequent review of relevant footage on YouTube. Depiction categories were then assigned based on the extent of portrayal and qualitative characteristics compared against mental illness stereotypes in cinema. Twenty-three of the 96 surveyed games depicted at least one character with mental illness. Forty-two characters were identified as portraying mental illness, with most characters classified under a "homicidal maniac" stereotype, although many characters did not clearly reflect cinema stereotypes and were subcategorized based on the shared traits. Video games contain frequent and varied portrayals of mental illness, with depictions most commonly linking mental illness to dangerous and violent behaviors. © 2016 American Academy of Forensic Sciences.
SCD's uncooled detectors and video engines for a wide-range of applications
NASA Astrophysics Data System (ADS)
Fraenkel, A.; Mizrahi, U.; Bikov, L.; Giladi, A.; Shiloah, N.; Elkind, S.; Kogan, I.; Maayani, S.; Amsterdam, A.; Vaserman, I.; Duman, O.; Hirsh, Y.; Schapiro, F.; Tuito, A.; Ben-Ezra, M.
2011-06-01
Over the last decade SCD has established a state-of-the-art VOx μ-bolometer product line. Due to its overall advantages, this technology is penetrating a large range of systems. In addition to a large variety of detectors, SCD has also recently introduced modular video engines with an open architecture. In this paper we describe the versatile applications supported by the products based on 17 μm pitch: low-SWaP short-range systems, mid-range systems based on VGA arrays, and high-end systems that will utilize the XGA format. These latter systems have the potential to compete with cooled 2nd Gen scanning LWIR arrays, as will be demonstrated by TRM3 system-level calculations.
Development of a ground signal processor for digital synthetic array radar data
NASA Technical Reports Server (NTRS)
Griffin, C. R.; Estes, J. M.
1981-01-01
A modified APQ-102 sidelooking array radar (SLAR) in a B-57 aircraft test bed is used, with other optical and infrared sensors, in remote sensing of Earth surface features for various users at NASA Johnson Space Center. The video from the radar is normally recorded on photographic film and subsequently processed photographically into high resolution radar images. Using a high speed sampling (digitizing) system, the two receiver channels of cross- and co-polarized video are recorded on wideband magnetic tape along with radar and platform parameters. These data are subsequently reformatted and processed into digital synthetic aperture radar images, with the image data available on magnetic tape for subsequent analysis by investigators. The system design and results obtained are described.
Real-time unmanned aircraft systems surveillance video mosaicking using GPU
NASA Astrophysics Data System (ADS)
Camargo, Aldo; Anderson, Kyle; Wang, Yi; Schultz, Richard R.; Fevig, Ronald A.
2010-04-01
Digital video mosaicking from Unmanned Aircraft Systems (UAS) is being used for many military and civilian applications, including surveillance, target recognition, border protection, forest fire monitoring, traffic control on highways, and monitoring of transmission lines, among others. Additionally, NASA is using digital video mosaicking to explore the moon and planets such as Mars. In order to compute a "good" mosaic from video captured by a UAS, the algorithm must deal with motion blur, frame-to-frame jitter associated with an imperfectly stabilized platform, perspective changes as the camera tilts in flight, as well as a number of other factors. The most suitable algorithms use SIFT (Scale-Invariant Feature Transform) to detect the features consistent between video frames. Utilizing these features, the next step is to estimate the homography between two consecutive video frames, perform warping to properly register the image data, and finally blend the video frames, resulting in a seamless video mosaic. All this processing takes a great deal of resources from the CPU, so it is almost impossible to compute a real-time video mosaic on a single processor. Modern graphics processing units (GPUs) offer computational performance that far exceeds current CPU technology, allowing for real-time operation. This paper presents the development of a GPU-accelerated digital video mosaicking implementation and compares it with CPU performance. Our tests are based on two sets of real video captured by a small UAS aircraft; the videos come from Infrared (IR) and Electro-Optical (EO) cameras. Our results show that we can obtain a speed-up of more than 50 times using GPU technology, so real-time operation at a video capture rate of 30 frames per second is feasible.
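A hedged, minimal two-frame CPU version of that pipeline is sketched below using OpenCV (a SIFT-enabled build is assumed); the file names in the commented usage line are placeholders, and the paper's actual contribution, the GPU acceleration and the blending stage, is not reproduced here.

```python
# Two-frame mosaicking sketch: SIFT features, ratio-test matching, RANSAC homography,
# and a simple warp-and-paste composite (no blending).
import cv2
import numpy as np

def mosaic_pair(frame_a, frame_b):
    sift = cv2.SIFT_create()
    kp_a, des_a = sift.detectAndCompute(frame_a, None)
    kp_b, des_b = sift.detectAndCompute(frame_b, None)

    # Ratio-test matching of SIFT descriptors between consecutive frames
    matcher = cv2.BFMatcher()
    good = [m for m, n in matcher.knnMatch(des_a, des_b, k=2)
            if m.distance < 0.75 * n.distance]

    src = np.float32([kp_a[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_b[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    # Warp frame_a into frame_b's coordinates and paste frame_b on top
    h, w = frame_b.shape[:2]
    canvas = cv2.warpPerspective(frame_a, H, (2 * w, h))
    canvas[:h, :w] = frame_b
    return canvas

# Hypothetical usage with placeholder file names (grayscale frames):
# mosaic = mosaic_pair(cv2.imread("uas_frame_000.png", 0), cv2.imread("uas_frame_001.png", 0))
```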
Video integrated measurement system. [Diagnostic display devices
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spector, B.; Eilbert, L.; Finando, S.
A Video Integrated Measurement (VIM) System is described which incorporates the use of various noninvasive diagnostic procedures (moire contourography, electromyography, posturometry, infrared thermography, etc.), used individually or in combination, for the evaluation of neuromusculoskeletal and other disorders and their management with biofeedback and other therapeutic procedures. The system provides for measuring individual diagnostic and therapeutic modes, or multiple modes by split screen superimposition, of real time (actual) images of the patient and idealized (ideal-normal) models on a video monitor, along with analog and digital data, graphics, color, and other transduced symbolic information. It is concluded that this system provides an innovative and efficient method by which the therapist and patient can interact in biofeedback training/learning processes and holds considerable promise for more effective measurement and treatment of a wide variety of physical and behavioral disorders.
Eyewitness lineups: is the appearance-change instruction a good idea?
Charman, Steve D; Wells, Gary L
2007-02-01
The Department of Justice's Guide for lineups recommends warning eyewitnesses that the culprit's appearance might have changed since the time of the crime. This appearance-change instruction (ACI) has never been empirically tested. A video of a crime involving four culprits was viewed by 289 participants, who then attempted to identify the culprits from four 6-person arrays that either included or did not include the culprit. Participants either received the ACI or not, and all were warned that the culprit might or might not be in the arrays. The culprits varied in how much their appearance changed from the video to their lineup arrays, but the ACI did not improve identification decisions for any of the lineups. Collapsed over the four culprits, the ACI increased false alarms and filler identifications but did not increase culprit identifications. The ACI reduced confidence and increased response latency. Two processes that could account for these results are discussed, namely a decision criterion shift and a general increase in ecphoric similarity.
MethLAB: a graphical user interface package for the analysis of array-based DNA methylation data.
Kilaru, Varun; Barfield, Richard T; Schroeder, James W; Smith, Alicia K; Conneely, Karen N
2012-03-01
Recent evidence suggests that DNA methylation changes may underlie numerous complex traits and diseases. The advent of commercial, array-based methods to interrogate DNA methylation has led to a profusion of epigenetic studies in the literature. Array-based methods, such as the popular Illumina GoldenGate and Infinium platforms, estimate the proportion of DNA methylated at single-base resolution for thousands of CpG sites across the genome. These arrays generate enormous amounts of data, but few software resources exist for efficient and flexible analysis of these data. We developed a software package called MethLAB (http://genetics.emory.edu/conneely/MethLAB) using R, an open source statistical language that can be edited to suit the needs of the user. MethLAB features a graphical user interface (GUI) with a menu-driven format designed to efficiently read in and manipulate array-based methylation data in a user-friendly manner. MethLAB tests for association between methylation and relevant phenotypes by fitting a separate linear model for each CpG site. These models can incorporate both continuous and categorical phenotypes and covariates, as well as fixed or random batch or chip effects. MethLAB accounts for multiple testing by controlling the false discovery rate (FDR) at a user-specified level. Standard output includes a spreadsheet-ready text file and an array of publication-quality figures. Considering the growing interest in and availability of DNA methylation data, there is a great need for user-friendly open source analytical tools. With MethLAB, we present a timely resource that will allow users with no programming experience to implement flexible and powerful analyses of DNA methylation data.
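MethLAB itself is written in R; as a hedged Python illustration of the analysis it performs, a separate linear model per CpG site with covariates followed by Benjamini-Hochberg FDR control, the sketch below uses simulated data and invented variable names.

```python
# Per-CpG association testing: fit one OLS model per site (phenotype + covariate),
# collect the phenotype p-values, and control the FDR across all sites.
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(1)
n_samples, n_sites = 50, 200
phenotype = rng.normal(size=n_samples)
age = rng.uniform(20, 70, size=n_samples)              # example covariate
meth = rng.normal(0.5, 0.1, size=(n_samples, n_sites))
meth[:, 0] += 0.05 * phenotype                          # one truly associated site

X = sm.add_constant(np.column_stack([phenotype, age]))
pvals = []
for j in range(n_sites):
    fit = sm.OLS(meth[:, j], X).fit()
    pvals.append(fit.pvalues[1])                        # p-value for the phenotype term

reject, qvals, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
print("sites significant at FDR 0.05:", np.flatnonzero(reject))
```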
Short-range/Long-range Integrated Target (SLIT) for Video Guidance Sensor Rendezvous and Docking
NASA Technical Reports Server (NTRS)
Roe, Fred D. (Inventor); Bryan, Thomas C. (Inventor)
2009-01-01
A laser target reflector assembly for mounting upon spacecraft having a long-range reflector array formed from a plurality of unfiltered light reflectors embedded in an array pattern upon a hemispherical reflector disposed upon a mounting plate. The reflector assembly also includes a short-range reflector array positioned upon the mounting body proximate to the long-range reflector array. The short-range reflector array includes three filtered light reflectors positioned upon extensions from the mounting body. The three filtered light reflectors retro-reflect substantially all incident light rays that are transmissive by their monochromatic filters and received by the three filtered light reflectors. In one embodiment, the short-range reflector array is embedded within the hemispherical reflector.
From Geocentrism to Allocentrism: Teaching the Phases of the Moon in a Digital Full-Dome Planetarium
ERIC Educational Resources Information Center
Chastenay, Pierre
2016-01-01
An increasing number of planetariums worldwide are turning digital, using ultra-fast computers, powerful graphics cards, and high-resolution video projectors to create highly realistic astronomical imagery in real time. This modern technology enables the audience to observe astronomical phenomena from a geocentric as well as an…
Bullying Prevention for School Safety (with Related Video)
ERIC Educational Resources Information Center
Johnson, Katie
2012-01-01
The nation watched in shock recently as four middle school boys barraged 68-year-old bus monitor Karen Klein with jabs about her weight and attacks on her family, chuckling as they made violent and graphic threats. Klein remained quiet, taking the abuse and not responding to the students. This incident might have gone unnoticed and unreported,…
Graphic Novels, Web Comics, and Creator Blogs: Examining Product and Process
ERIC Educational Resources Information Center
Carter, James Bucky
2011-01-01
Young adult literature (YAL) of the late 20th and early 21st century is exploring hybrid forms with growing regularity by embracing textual conventions from sequential art, video games, film, and more. As well, Web-based technologies have given those who consume YAL more immediate access to authors, their metacognitive creative processes, and…
Reinventing the Book Club: Graphic Novels as Educational Heavyweights
ERIC Educational Resources Information Center
Seyfried, Jonathan
2008-01-01
Teachers often find themselves lamenting the loss of emergent readers to video games, television, and, most recently, the TTYL (talk/type to you later) culture of text messaging and Internet social networking. Trying to impart the joy of a good read to middle school students feels like pushing religion onto the perfectly content worshipers of…
Motivating Middle School Readers: The Graphic Novel Link
ERIC Educational Resources Information Center
Edwards, Buffy
2009-01-01
Middle school students are not reading for pleasure as frequently as they formerly did, due to the influx of video games, cell phones, MP3 players, and other electronic devices, not to mention the common stresses of the average middle school student. Current research on reading motivation finds that as children move from upper…
Developing CD-ROMs: Pitfalls and Detours on the Road to the Digital Village.
ERIC Educational Resources Information Center
Texas State Technical Coll., Waco.
This guide provides information on many aspects of CD-ROM development. Storage requirements of multimedia applications such as graphic images, audio, video, and animation are provided in section one. Storage capacity, transfer rate, and access time are the three criteria used to judge various storage media. In section two, specifications for these…
Introduction to Multimedia in Instruction. An IAT Technology Primer.
ERIC Educational Resources Information Center
Oblinger, Diana
Multimedia allows computing to move from text and data into the realm of graphics, sound, images, and full-motion video, thus allowing both students and teachers to use the power of computers in new ways. Key elements of multimedia are natural presentation of information and non-linear navigation through applications for access to information on…
ERIC Educational Resources Information Center
Lawless-Reljic, Sabine Karine
2010-01-01
Growing interest of educational institutions in desktop 3D graphic virtual environments for hybrid and distance education prompts questions on the efficacy of such tools. Virtual worlds, such as Second Life[R], enable computer-mediated immersion and interactions encompassing multimodal communication channels including audio, video, and text…
ERIC Educational Resources Information Center
Sung, K.; Hillyard, C.; Angotti, R. L.; Panitz, M. W.; Goldstein, D. S.; Nordlinger, J.
2011-01-01
Despite the proven success of using computer video games as a context for teaching introductory programming (CS1/2) courses, barriers including the lack of adoptable materials, required background expertise (in graphics/games), and institutional acceptance still prevent interested faculty members from experimenting with this approach. Game-themed…
Bring It to Class: Unpacking Pop Culture in Literacy Learning (Grades 4 through 12)
ERIC Educational Resources Information Center
Hagood, Margaret C.; Alvermann, Donna E.; Heron-Hruby, Alison
2010-01-01
Students' backpacks bulge not just with oversize textbooks, but with paperbacks, graphic novels, street lit, and electronics such as iPods and handheld video games. This book is about unpacking those texts to explore previously unexamined assumptions regarding their usefulness to classroom learning. With a strong theoretical grounding and many…
1991-05-08
Fragment from JPRS-CAR-91-025 (Economic, 8 May 1991): a production-statistics table covering items such as color televisions, lithopone, and video recorders, followed by a passage on enforcement actions against the making and selling of pornographic publications.
Job Language Performance Requirements for MOS 19D.
1982-10-01
Fragment of survey and training materials from the report: modes of instruction listed include films, video cassettes, and graphic training aids, followed by grammar-exercise items covering noun classes, possessives, collectives, and predicative adjectives.
ERIC Educational Resources Information Center
Zapata-Rivera, Diego; Zwick, Rebecca; Vezzu, Margaret
2016-01-01
The goal of this study was to explore the effectiveness of a short web-based tutorial in helping teachers to better understand the portrayal of measurement error in test score reports. The short video tutorial included both verbal and graphical representations of measurement error. Results showed a significant difference in comprehension scores…
Learning in LAMS: Lesson from a Student Teacher Exploring Gene Ethics
ERIC Educational Resources Information Center
Dennis, Carina
2012-01-01
Due to its complex and microscopic nature, genetics is a difficult subject for many learners to conceptually grasp. Graphics, animation and video material can be extremely helpful to their understanding. A wealth of educational online content about genetics has been created over the past decade in the wake of the human genome being sequenced.…
Mapping Students' Ideas to Understand Learning in a Collaborative Programming Environment
ERIC Educational Resources Information Center
Harlow, Danielle Boyd; Leak, Anne Emerson
2014-01-01
Recent studies in learning programming have largely focused on high school and college students; less is known about how young children learn to program. From video data of 20 students using a graphical programming interface, we identified ideas that were shared and evolved through an elementary school classroom. In mapping these ideas and their…
Using Graphic Organizers to Improve Reading Comprehension Skills for the Middle School ESL Students
ERIC Educational Resources Information Center
Praveen, Sam D.; Rajan, Premalatha
2013-01-01
"A picture is worth a thousand words." In a modern-day classroom, students are surrounded by visual imagery through textbooks, notice boards, television, videos, or computers. Many middle school classrooms are filled with colorful pictures and photographs. However, it is unclear how--or if --these images impact the middle school ESL…
Microphone Array Phased Processing System (MAPPS): Version 4.0 Manual
NASA Technical Reports Server (NTRS)
Watts, Michael E.; Mosher, Marianne; Barnes, Michael; Bardina, Jorge
1999-01-01
A processing system has been developed to meet increasing demands for detailed noise measurement of individual model components. The Microphone Array Phased Processing System (MAPPS) uses graphical user interfaces to control all aspects of data processing and visualization. The system uses networked parallel computers to provide noise maps at selected frequencies in a near real-time testing environment. The system has been successfully used in the NASA Ames 7- by 10-Foot Wind Tunnel.
Loudspeaker line array educational demonstration.
Anderson, Brian E; Moser, Brad; Gee, Kent L
2012-03-01
This paper presents a physical demonstration of an audio-range line array used to teach interference of multiple sources in a classroom or laboratory exercise setting. Software has been developed that permits real-time control and steering of the array. The graphical interface permits a user to vary the frequency, steer the angular response by phase shading, and reduce sidelobes through amplitude shading. An inexpensive, eight-element loudspeaker array has been constructed to test the control program. Directivity measurements of this array in an anechoic chamber and in a large classroom are presented. These measurements are in good agreement with theoretical directivity predictions, thereby allowing its use as a quantitative learning tool for advanced students as well as a qualitative demonstration of arrays in other settings. Portions of this paper are directed toward educators who may wish to implement a similar demonstration for their advanced undergraduate or graduate level course in acoustics. © 2012 Acoustical Society of America
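The steering and shading controls described above follow directly from the array-factor formula for a discrete line source. The sketch below (Python, not the authors' control software) computes a normalized directivity pattern for an eight-element array given amplitude weights and a steering angle; all parameter values are illustrative.

```python
# Far-field directivity of an N-element line array with amplitude and phase shading.
import numpy as np

def line_array_directivity(freq, n_elements=8, spacing=0.1, steer_deg=0.0,
                           weights=None, c=343.0, angles_deg=None):
    """Return angles (deg) and normalized level (dB) of the array factor."""
    if angles_deg is None:
        angles_deg = np.linspace(-90.0, 90.0, 721)
    if weights is None:
        weights = np.ones(n_elements)          # uniform amplitude shading
    k = 2.0 * np.pi * freq / c                 # wavenumber
    theta = np.radians(angles_deg)
    positions = (np.arange(n_elements) - (n_elements - 1) / 2.0) * spacing
    # phase shading steers the main lobe toward steer_deg
    phases = -k * positions * np.sin(np.radians(steer_deg))
    af = np.array([np.sum(weights * np.exp(1j * (k * positions * np.sin(t) + phases)))
                   for t in theta])
    level_db = 20.0 * np.log10(np.abs(af) / np.max(np.abs(af)))
    return angles_deg, level_db

# Example: 8 elements, 10 cm spacing, steered to 20 degrees, Hann amplitude shading
angles, level = line_array_directivity(2000.0, steer_deg=20.0, weights=np.hanning(8))
```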
ICL: The Image Composition Language
NASA Technical Reports Server (NTRS)
Foley, James D.; Kim, Won Chul
1986-01-01
The Image Composition Language (ICL) provides a convenient way for programmers of interactive graphics application programs to define how the video look-up table of a raster display system is to be loaded. The ICL allows one or several images stored in the frame buffer to be combined in a variety of ways. The ICL treats these images as variables, and provides arithmetic, relational, and conditional operators to combine the images, scalar variables, and constants in image composition expressions. The objective of ICL is to provide programmers with a simple way to compose images, and to relieve the tedium usually associated with loading the video look-up table to obtain desired results.
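A minimal sketch of the kind of composition expression ICL supports, written in Python with NumPy rather than in ICL itself (whose syntax is not given in the abstract): two frame-buffer images combined with a relational test and a conditional operator, with a hypothetical look-up table applied afterwards.

```python
# Conditional image composition, in the spirit of an ICL expression.
import numpy as np

def compose(image_a, image_b, threshold=128):
    """Where image_a exceeds the threshold show image_a, otherwise show the
    average of the two images -- a conditional composition expression."""
    a = image_a.astype(np.int32)
    b = image_b.astype(np.int32)
    result = np.where(a > threshold, a, (a + b) // 2)
    return result.astype(np.uint8)

# A video look-up table could then map composed pixel values to display colors,
# e.g. a simple 256-entry gray ramp (hypothetical):
lut = np.stack([np.arange(256)] * 3, axis=1).astype(np.uint8)
```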
Comparing Subscription-Based Anatomy E-Resources for Collections Development.
McClurg, Caitlin; Stieda, Vivian; Talsma, Nicole
2015-01-01
This article describes a chart-based approach for health sciences libraries to compare anatomy e-resources. The features, functionalities, and user experiences of seven leading subscription-based e-resources were assessed using a chart that was iteratively developed by the investigators. Acland's Video Atlas of Human Anatomy, Thieme Winking Skull, and Visible Body were the preferred products as they respectively excel in cadaver-based videos, self-assessment, and 3D graphical manipulation. Moreover, each product affords a pleasant user experience. The investigative team found that resources specializing in one aspect of anatomy teaching are superior to those that contain a wealth of content for diverse audiences.
Parallax visualization of full motion video using the Pursuer GUI
NASA Astrophysics Data System (ADS)
Mayhew, Christopher A.; Forgues, Mark B.
2014-06-01
In 2013, the authors reported to the SPIE on the Phase 1 development of a Parallax Visualization (PV) plug-in toolset for Wide Area Motion Imaging (WAMI) data using the Pursuer Graphical User Interface (GUI).1 In addition to the ability to PV WAMI data, the Phase 1 plug-in toolset also featured a limited ability to visualize Full Motion Video (FMV) data. The ability to visualize both WAMI and FMV data is a highly advantageous capability for an Electronic Light Table (ELT) toolset. This paper reports on the Phase 2 development and addition of a full-featured FMV capability to the Pursuer WAMI PV plug-in.
NASA Astrophysics Data System (ADS)
Crone, T. J.; Knuth, F.; Marburg, A.
2016-12-01
A broad array of Earth science problems can be investigated using high-definition video imagery from the seafloor, ranging from those that are geological and geophysical in nature, to those that are biological and water-column related. A high-definition video camera was installed as part of the Ocean Observatory Initiative's core instrument suite on the Cabled Array, a real-time fiber optic data and power system that stretches from the Oregon Coast to Axial Seamount on the Juan de Fuca Ridge. This camera runs a 14-minute pan-tilt-zoom routine 8 times per day, focusing on locations of scientific interest on and near the Mushroom vent in the ASHES hydrothermal field inside the Axial caldera. The system produces 13 GB of lossless HD video every 3 hours, and at the time of this writing it has generated 2100 recordings totaling 28.5 TB since it began streaming data into the OOI archive in August of 2015. Because of the large size of this dataset, downloading the entirety of the video for long timescale investigations is not practical. We are developing a set of user-side tools for downloading single frames and frame ranges from the OOI HD camera raw data archive to aid users interested in using these data for their research. We use these tools to download about one year's worth of partial frame sets to investigate several questions regarding the hydrothermal system at ASHES, including the variability of bacterial "floc" in the water-column, and changes in high temperature fluid fluxes using optical flow techniques. We show that while these user-side tools can facilitate rudimentary scientific investigations using the HD camera data, a server-side computing environment that allows users to explore this dataset without downloading any raw video will be required for more advanced investigations to flourish.
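A minimal user-side frame grabber in the spirit described, assuming a locally available or URL-accessible CAMHD recording; the file name is hypothetical and this is not the project's actual toolset.

```python
# Grab selected frames from a large video without downloading/decoding everything else.
import cv2  # OpenCV

def grab_frames(video_source, frame_indices):
    """Return a dict of {frame_index: image} for the requested frames."""
    cap = cv2.VideoCapture(video_source)
    frames = {}
    for idx in sorted(frame_indices):
        cap.set(cv2.CAP_PROP_POS_FRAMES, idx)   # seek to the requested frame
        ok, frame = cap.read()
        if ok:
            frames[idx] = frame
    cap.release()
    return frames

# Example: pull three frames from one 14-minute recording (hypothetical file name)
frames = grab_frames("CAMHDA301-20160101T000000Z.mov", [0, 5000, 10000])
```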
Donati, Maria Anna; Chiesi, Francesca; Ammannato, Giulio; Primi, Caterina
2015-02-01
This study tested the predictive power of gaming versatility (i.e., the number of video game genres engaged in) on gaming addiction in male adolescents, controlling for time spent on gaming. Participants were 701 male adolescents attending high school (mean age = 15.6 years). Analyses showed that pathological gaming was predicted not only by more time spent on gaming, but also by participation in a greater number of video game genres. Specifically, the wider the array of video game genres played, the greater the negative consequences caused by gaming. Findings show that versatility can be considered one of the behavioral risk factors related to gaming addiction, which may be characterized by a composite and diversified experience with video games. This study suggests that educational efforts designed to prevent gaming addiction among youth may also focus on adolescents' engagement with different video games.
Computer Drawing Method for Operating Characteristic Curve of PV Power Plant Array Unit
NASA Astrophysics Data System (ADS)
Tan, Jianbin
2018-02-01
The engineering design of large-scale grid-connected photovoltaic power stations, and the many simulation and analysis systems developed for them, require accurate computer plotting of the operating characteristic curves of photovoltaic array units; a segmented non-linear interpolation algorithm is proposed for this purpose. Taking module performance parameters as the main design basis, the method derives five characteristic performance points for a PV module and, combined with the series-parallel configuration of the array, produces a computer-drawn performance curve for the PV array unit. The calculated data can also be supplied to PV development software modules, improving the practical operation of PV array units.
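Since the paper's exact algorithm is not reproduced here, the following hedged sketch only illustrates the general idea: a segmented (piecewise) non-linear interpolant through a handful of characteristic module points, scaled to an array unit by its series-parallel configuration. The sample points and configuration are invented.

```python
# Piecewise non-linear interpolation of a module I-V curve, scaled to an array unit.
import numpy as np
from scipy.interpolate import PchipInterpolator

# Characteristic module points (V, I): short circuit, intermediate points,
# near the maximum power point, open circuit -- values are hypothetical.
v_pts = np.array([0.0, 20.0, 28.0, 30.5, 37.0])
i_pts = np.array([8.7, 8.6, 8.3, 8.1, 0.0])
module_iv = PchipInterpolator(v_pts, i_pts)   # shape-preserving segmented interpolant

def array_unit_curve(n_series, n_parallel, points=200):
    """I-V curve of an array unit built from n_series x n_parallel modules."""
    v = np.linspace(0.0, v_pts[-1] * n_series, points)
    i = n_parallel * module_iv(v / n_series)   # voltage scales with series count,
    return v, i                                # current with parallel count

v_arr, i_arr = array_unit_curve(n_series=20, n_parallel=4)
p_arr = v_arr * i_arr   # power curve, useful for locating the maximum power point
```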
NASA Astrophysics Data System (ADS)
Baukal, Charles Edward, Jr.
A literature search revealed very little information on how to teach working engineers, which became the motivation for this research. Effective training is important for many reasons such as preventing accidents, maximizing fuel efficiency, minimizing pollution emissions, and reducing equipment downtime. The conceptual framework for this study included the development of a new instructional design framework called the Multimedia Cone of Abstraction (MCoA). This was developed by combining Dale's Cone of Experience and Mayer's Cognitive Theory of Multimedia Learning. An anonymous survey of 118 engineers from a single Midwestern manufacturer was conducted to determine their demographics, learning strategy preferences, verbal-visual cognitive styles, and multimedia preferences. The learning strategy preference profile and verbal-visual cognitive styles of the sample were statistically significantly different from those of the general population. The working engineers included more Problem Solvers and were much more visually-oriented than the general population. To study multimedia preferences, five of the seven levels in the MCoA were used. Eight types of multimedia were compared in four categories (types in parentheses): text (text and narration), static graphics (drawing and photograph), non-interactive dynamic graphics (animation and video), and interactive dynamic graphics (simulated virtual reality and real virtual reality). The first phase of the study examined multimedia preferences within a category. Participants compared multimedia types in pairs on dual screens using relative preference, rating, and ranking. Surprisingly, the more abstract multimedia (text, drawing, animation, and simulated virtual reality) were preferred in every category to the more concrete multimedia (narration, photograph, video, and real virtual reality), despite the fact that most participants had relatively little prior subject knowledge. However, the more abstract graphics were only slightly preferred to the more concrete graphics. In the second phase, the more preferred multimedia types in each category from the first phase were compared against each other using relative preference, rating, and ranking, as well as overall rating and ranking. Drawing was the most preferred multimedia type overall, although only slightly more than animation and simulated virtual reality. Text was a distant fourth. These results suggest that instructional content for continuing engineering education should include problem solving and should be highly visual.
Electronic data generation and display system
NASA Technical Reports Server (NTRS)
Wetekamm, Jules
1988-01-01
The Electronic Data Generation and Display System (EDGADS) is a field-tested paperless technical manual system. The authoring system gives subject matter experts the option of developing procedureware from digital or hardcopy inputs of technical information from text, graphics, pictures, and recorded media (video, audio, etc.). The display system provides multi-window presentations of graphics, pictures, animations, and action sequences with text and audio overlays on high-resolution color CRT and monochrome portable displays. The database management system allows direct access via hierarchical menus, keyword name, ID number, voice command, or touch of a screen pictorial of the item (ICON). It contains operations and maintenance technical information at three levels of intelligence for a total system.
Solid State Television Camera (CID)
NASA Technical Reports Server (NTRS)
Steele, D. W.; Green, W. T.
1976-01-01
The design, development, and test are described of a charge injection device (CID) camera using a 244x248 element array. A number of video signal processing functions are included which maximize the output video dynamic range while retaining the inherently good resolution response of the CID. Some of the unique features of the camera are: low-light-level performance, high S/N ratio, antiblooming, low geometric distortion, sequential scanning, and AGC.
Video System Highlights Hydrogen Fires
NASA Technical Reports Server (NTRS)
Youngquist, Robert C.; Gleman, Stuart M.; Moerk, John S.
1992-01-01
Video system combines images from visible spectrum and from three bands in infrared spectrum to produce color-coded display in which hydrogen fires distinguished from other sources of heat. Includes linear array of 64 discrete lead selenide mid-infrared detectors operating at room temperature. Images overlaid on black and white image of same scene from standard commercial video camera. In final image, hydrogen fires appear red; carbon-based fires, blue; and other hot objects, mainly green and combinations of green and red. Where no thermal source present, image remains in black and white. System enables high degree of discrimination between hydrogen flames and other thermal emitters.
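A conceptual sketch of the colour-coding idea described above (not the flight system's actual processing): infrared band responses are mapped onto the red, green, and blue channels of a grayscale visible image so that hydrogen-like emission shows red and carbon-based fires show blue. The band names, scaling, and threshold are assumptions.

```python
# Color-coded overlay of infrared band data on a black-and-white visible image.
import numpy as np

def colorize(visible, ir_hydrogen, ir_carbon, ir_other, threshold=0.2):
    """All inputs are 2-D arrays scaled to [0, 1]; returns an RGB image in [0, 1]."""
    gray = np.stack([visible] * 3, axis=-1)                  # start from the B&W scene
    rgb = gray.copy()
    hot = np.maximum.reduce([ir_hydrogen, ir_carbon, ir_other]) > threshold
    rgb[..., 0] = np.where(hot, ir_hydrogen, gray[..., 0])   # red channel: hydrogen
    rgb[..., 2] = np.where(hot, ir_carbon, gray[..., 2])     # blue channel: carbon-based
    rgb[..., 1] = np.where(hot, ir_other, gray[..., 1])      # green channel: other heat
    return np.clip(rgb, 0.0, 1.0)
```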
Ultra High Definition Video from the International Space Station (Reel 1)
2015-06-15
The view of life in space is getting a major boost with the introduction of 4K Ultra High-Definition (UHD) video, providing an unprecedented look at what it's like to live and work aboard the International Space Station. This important new capability will allow researchers to acquire high-resolution, high-frame-rate video to provide new insight into the vast array of experiments taking place every day. It will also bestow the most breathtaking views of planet Earth and space station activities ever acquired for consumption by those still dreaming of making the trip to outer space.
DOT National Transportation Integrated Search
2003-06-10
Geographic information systems (GIS), which manipulate, analyze, and graphically present an array of information associated with geographic locations, have been invaluable to all levels of government. The federal government has long been attempting to devel...
ERIC Educational Resources Information Center
Bester, Susanna Jacoba
2016-01-01
Today's learners are born into a multimedia world and feel quite comfortable in an electronic learning environment. The high-quality sound, realistic colour images, graphics, narrations, real-time recordings and full motion videos from multimedia, which are integrated in History lessons, are what the learners of today want and need in their…
International Disability Educational Alliance (IDEAnet)
2011-03-23
Excerpts from the report: a glossary listing FELTP (Field Epidemiology Training Program), GUI (Graphical User Interface), and GIS (Global Implementation Solutions); guidance on how to participate in IDEAnet programs and background information; and a note that "Materials for Download" denotes written/pictorial instructions, articles, videos, images, and other materials provided for download and use by interested parties.
Human Dimensions in Future Battle Command Systems: A Workshop Report
2008-04-01
Excerpt from the report: the human dimensions of battle command are described anecdotally and metaphorically (e.g., battle command is human-centric), with a recommendation to enhance information visualization techniques in the decision tools, including multimodal platforms such as video, graphics, and symbols; each dimension can metaphorically represent the spatial location of individuals and group thinking in a trajectory of social norms.
ERIC Educational Resources Information Center
Sabatini, John P.
An analysis was conducted of the results of a formative evaluation of the LiteracyLink "Workplace Essential Skills" (WES) learning system conducted in the fall of 1998. (The WES learning system is a multimedia learning system integrating text, sound, graphics, animation, video, and images in a computer system and includes a videotape series, a…
ERIC Educational Resources Information Center
Hill, Rebecca
2012-01-01
When it comes to reading formats, readers can take their pick from a variety of choices, and the new face at this party is the enhanced eBook. Videos are embedded. Music plays at certain intervals. Artwork and graphics fade in and out. They are like someone all dressed up for a party; enhanced eBooks are, in essence, prettier and more enticing…
Uses of Integrated Media Instruction in a Self-Contained Class for Children with Mild Disabilities.
ERIC Educational Resources Information Center
Narita, Shigeru
This conference paper describes the use of integrated media-oriented instruction in a self-contained class at Yokohama Municipal Elementary School in Japan. Three students with mild disabilities, in grades 5 and 6, participated in the project. Integrated media (IM) is defined as the linkage of text, sound, video, graphics, and the computer in such…
ERIC Educational Resources Information Center
Palmer, Loretta
A basic algebra unit was developed at Utah Valley State College to emphasize applications of mathematical concepts in the work world, using video and computer-generated graphics to integrate textual material. The course was implemented in three introductory algebra sections involving 80 students and taught algebraic concepts using such areas as…
Modern Display Technologies for Airborne Applications.
1983-04-01
the case of LED head-down direct view displays, this requires that special attention be paid to the optical filtering, the electrical drive/address... effectively attenuates the LED specular reflectance component, the colour and neutral density filtering attenuate the diffuse component and the... filter techniques are planned for use with video, multi-colour and advanced versions of numeric, alphanumeric and graphic displays; this technique
Payload specialist station study. Part 2: CEI specifications (part 1). [space shuttles
NASA Technical Reports Server (NTRS)
1976-01-01
The performance, design, and verification specifications are established for the multifunction display system (MFDS) to be located at the payload station in the shuttle orbiter aft flight deck. The system provides the display units (with video, alphanumerics, and graphics capabilities), the associated electronic units, and the keyboards in support of the payload dedicated controls and displays concept.
InSight Lander Solar Array Test
2018-01-23
While in the landed configuration for the last time before arriving on Mars, NASA's InSight lander was commanded to deploy its solar arrays to test and verify the exact process that it will use on the surface of the Red Planet. During the test on Jan. 23, 2018 from the Lockheed Martin clean room in Littleton, Colorado, engineers and technicians evaluated that the solar arrays fully deployed and conducted an illumination test to confirm that the solar cells were collecting power. A video is available at https://photojournal.jpl.nasa.gov/catalog/PIA22200
InSight Lander Solar Array Test
2018-01-23
While in the landed configuration for the last time before arriving on Mars, NASA's InSight lander was commanded to deploy its solar arrays to test and verify the exact process that it will use on the surface of the Red Planet. During the test on Jan. 23, 2018 from the Lockheed Martin clean room in Littleton, Colorado, engineers and technicians evaluated that the solar arrays fully deployed and conducted an illumination test to confirm that the solar cells were collecting power. A video is available at https://photojournal.jpl.nasa.gov/catalog/PIA22203
InSight Lander Solar Array Test
2018-01-23
While in the landed configuration for the last time before arriving on Mars, NASA's InSight lander was commanded to deploy its solar arrays to test and verify the exact process that it will use on the surface of the Red Planet. During the test on Jan. 23, 2018 from the Lockheed Martin clean room in Littleton, Colorado, engineers and technicians evaluated that the solar arrays fully deployed and conducted an illumination test to confirm that the solar cells were collecting power. A video is available at https://photojournal.jpl.nasa.gov/catalog/PIA22202
InSight Lander Solar Array Test
2018-01-23
While in the landed configuration for the last time before arriving on Mars, NASA's InSight lander was commanded to deploy its solar arrays to test and verify the exact process that it will use on the surface of the Red Planet. During the test on Jan. 23, 2018 from the Lockheed Martin clean room in Littleton, Colorado, engineers and technicians evaluated that the solar arrays fully deployed and conducted an illumination test to confirm that the solar cells were collecting power. A video is available at https://photojournal.jpl.nasa.gov/catalog/PIA22201
InSight Lander Solar Array Test
2018-01-23
While in the landed configuration for the last time before arriving on Mars, NASA's InSight lander was commanded to deploy its solar arrays to test and verify the exact process that it will use on the surface of the Red Planet. During the test on Jan. 23, 2018 from the Lockheed Martin clean room in Littleton, Colorado, engineers and technicians evaluated that the solar arrays fully deployed and conducted an illumination test to confirm that the solar cells were collecting power. A video is available at https://photojournal.jpl.nasa.gov/catalog/PIA22204
NASA Astrophysics Data System (ADS)
Knuth, F.; Crone, T. J.; Marburg, A.
2017-12-01
The Ocean Observatories Initiative's (OOI) Cabled Array is delivering real-time high-definition video data from an HD video camera (CAMHD), installed at the Mushroom hydrothermal vent in the ASHES hydrothermal vent field within the caldera of Axial Seamount, an active submarine volcano located approximately 450 kilometers off the coast of Washington at a depth of 1,542 m. Every three hours the camera pans, zooms and focuses in on nine distinct scenes of scientific interest across the vent, producing 14-minute-long videos during each run. This standardized video sampling routine enables scientists to programmatically analyze the content of the video using automated image analysis techniques. Each scene-specific time series dataset can service a wide range of scientific investigations, including the estimation of bacterial flux into the system by quantifying chemosynthetic bacterial clusters (floc) present in the water column, relating periodicity in hydrothermal vent fluid flow to earth tides, measuring vent chimney growth in response to changing hydrothermal fluid flow rates, or mapping the patterns of fauna colonization, distribution and composition across the vent over time. We are currently investigating the seventh scene in the sampling routine, focused on the bacterial mat covering the seafloor at the base of the vent. We quantify the change in bacterial mat coverage over time using image analysis techniques, and examine the relationship between mat coverage, fluid flow processes, episodic chimney collapse events, and other processes observed by Cabled Array instrumentation. This analysis is being conducted using cloud-enabled computer vision processing techniques, programmatic image analysis, and time-lapse video data collected over the course of the first CAMHD deployment, from November 2015 to July 2016.
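One simple way to turn each scene-7 frame into a coverage number, sketched here under the assumption of a bright mat against a darker seafloor (the project's actual pipeline may differ): threshold a fixed region of interest and report the covered fraction.

```python
# Estimate bacterial mat coverage as the bright-pixel fraction within a region of interest.
import cv2
import numpy as np

def mat_coverage_fraction(frame_bgr, roi, threshold=200):
    """roi = (x, y, width, height) in pixels; returns a fraction in [0, 1]."""
    x, y, w, h = roi
    patch = frame_bgr[y:y + h, x:x + w]
    gray = cv2.cvtColor(patch, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)       # suppress sensor noise
    mask = blurred > threshold                        # bright pixels taken as mat
    return float(np.count_nonzero(mask)) / mask.size

# Applying this to one frame from each 3-hour run yields a coverage time series
# that can be compared against fluid-flow and chimney-collapse events.
```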
Automated UAV-based video exploitation using service oriented architecture framework
NASA Astrophysics Data System (ADS)
Se, Stephen; Nadeau, Christian; Wood, Scott
2011-05-01
Airborne surveillance and reconnaissance are essential for successful military missions. Such capabilities are critical for troop protection, situational awareness, mission planning, damage assessment, and others. Unmanned Aerial Vehicles (UAVs) gather huge amounts of video data but it is extremely labour-intensive for operators to analyze hours and hours of received data. At MDA, we have developed a suite of tools that can process the UAV video data automatically, including mosaicking, change detection and 3D reconstruction, which have been integrated within a standard GIS framework. In addition, the mosaicking and 3D reconstruction tools have also been integrated in a Service Oriented Architecture (SOA) framework. The Visualization and Exploitation Workstation (VIEW) integrates 2D and 3D visualization, processing, and analysis capabilities developed for UAV video exploitation. Visualization capabilities are supported through a thick-client Graphical User Interface (GUI), which allows visualization of 2D imagery, video, and 3D models. The GUI interacts with the VIEW server, which provides video mosaicking and 3D reconstruction exploitation services through the SOA framework. The SOA framework allows multiple users to perform video exploitation by running a GUI client on the operator's computer and invoking the video exploitation functionalities residing on the server. This allows the exploitation services to be upgraded easily and allows the intensive video processing to run on powerful workstations. MDA provides UAV services to the Canadian and Australian forces in Afghanistan with the Heron, a Medium Altitude Long Endurance (MALE) UAV system. On-going flight operations service provides important intelligence, surveillance, and reconnaissance information to commanders and front-line soldiers.
Khan, Haseeb Ahmad
2004-01-01
The massive surge in the production of microarray data poses a great challenge for proper analysis and interpretation. In recent years numerous computational tools have been developed to extract meaningful interpretation of microarray gene expression data. However, a convenient tool for two-group comparison of microarray data is still lacking, and users have to rely on commercial statistical packages that might be costly and require special skills, in addition to extra time and effort for transferring data from one platform to another. Various statistical methods, including the t-test, analysis of variance, Pearson test and Mann-Whitney U test, have been reported for comparing microarray data, whereas the utilization of the Wilcoxon signed-rank test, which is an appropriate test for two-group comparison of gene expression data, has largely been neglected in microarray studies. The aim of this investigation was to build an integrated tool, ArraySolver, for colour-coded graphical display and comparison of gene expression data using the Wilcoxon signed-rank test. The results of software validation showed similar outputs with ArraySolver and SPSS for large datasets, whereas ArraySolver appeared to be more accurate for 25 or fewer pairs (n ≤ 25), suggesting its potential application in analysing molecular signatures that usually contain small numbers of genes. The main advantages of ArraySolver are easy data selection, a convenient report format, accurate statistics and the familiar Excel platform.
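The statistical core of the comparison is the paired Wilcoxon signed-rank test; a minimal sketch with hypothetical expression values (ArraySolver itself runs on the Excel platform, so this Python snippet is only illustrative):

```python
# Wilcoxon signed-rank test on paired gene expression values.
import numpy as np
from scipy.stats import wilcoxon

group_a = np.array([2.1, 3.4, 1.8, 2.9, 3.1, 2.5])   # e.g. treated samples
group_b = np.array([1.9, 2.8, 1.7, 2.2, 2.6, 2.4])   # matched controls

statistic, p_value = wilcoxon(group_a, group_b)
print(f"W = {statistic:.1f}, p = {p_value:.3f}")
```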
Aghdasi, Hadi S; Abbaspour, Maghsoud; Moghadam, Mohsen Ebrahimi; Samei, Yasaman
2008-08-04
Technological progress in the fields of Micro Electro-Mechanical Systems (MEMS) and wireless communications, together with the availability of CMOS cameras, microphones and small-scale array sensors that can ubiquitously capture multimedia content from the field, has fostered the development of low-cost, resource-limited Wireless Video-based Sensor Networks (WVSN). Given the constraints of video-based sensor nodes and wireless sensor networks, supporting a video stream is not easy to implement with present sensor network protocols. In this paper, a thorough architecture is presented for video transmission over WVSN, called the Energy-efficient and high-Quality Video transmission Architecture (EQV-Architecture). This architecture influences three layers of the communication protocol stack and considers the limited processing and energy resources of wireless video sensor nodes while preserving video quality at the receiver side. The compression protocol, transport protocol, and routing protocol are proposed in the application, transport, and network layers respectively; a dropping scheme is also presented in the network layer. Simulation results over various environments with dissimilar conditions revealed the effectiveness of the architecture in improving the lifetime of the network as well as preserving the video quality.
Overlapping sphincteroplasty and posterior repair.
Crane, Andrea K; Myers, Erinn M; Lippmann, Quinn K; Matthews, Catherine A
2014-12-01
Knowledge of how to anatomically reconstruct extensive posterior-compartment defects is variable among gynecologists. The objective of this video is to demonstrate an effective technique of overlapping sphincteroplasty and posterior repair. In this video, a scripted storyboard was constructed that outlines the key surgical steps of a comprehensive posterior compartment repair: (1) surgical incision that permits access to posterior compartment and perineal body, (2) dissection of the rectovaginal space up to the level of the cervix, (3) plication of the rectovaginal muscularis, (4) repair of internal and external anal sphincters, and (5) reconstruction of the perineal body. Using a combination of graphic illustrations and live video footage, tips on repair are highlighted. The goals at the end of repair are to: (1) have improved vaginal caliber, (2) increase rectal tone along the entire posterior vaginal wall, (3) have the posterior vaginal wall at a perpendicular plane to the perineal body, (4) reform the hymenal ring, and (5) not have an overly elongated perineal body. This video provides a step-by-step guide on how to perform an overlapping sphincteroplasty and posterior repair.
A web-based system for home monitoring of patients with Parkinson's disease using wearable sensors.
Chen, Bor-Rong; Patel, Shyamal; Buckley, Thomas; Rednic, Ramona; McClure, Douglas J; Shih, Ludy; Tarsy, Daniel; Welsh, Matt; Bonato, Paolo
2011-03-01
This letter introduces MercuryLive, a platform to enable home monitoring of patients with Parkinson's disease (PD) using wearable sensors. MercuryLive contains three tiers: a resource-aware data collection engine that relies upon wearable sensors, web services for live streaming and storage of sensor data, and a web-based graphical user interface client with video conferencing capability. In addition, the platform can analyze sensor (i.e., accelerometer) data to reliably estimate clinical scores capturing the severity of tremor, bradykinesia, and dyskinesia. Testing showed an average data latency of less than 400 ms and a video latency of about 200 ms at a video frame rate of about 13 frames/s when 800 kb/s of bandwidth was available and 40% video compression was used, with data feature upload requiring 1 min of extra time following a 10-min interactive session. These results indicate that the proposed platform is suitable for monitoring patients with PD to facilitate the titration of medications in the late stages of the disease.
Echolocation signals of wild Atlantic spotted dolphin (Stenella frontalis)
NASA Astrophysics Data System (ADS)
Au, Whitlow W. L.; Herzing, Denise L.
2003-01-01
An array of four hydrophones arranged in a symmetrical star configuration was used to measure the echolocation signals of the Atlantic spotted dolphin (Stenella frontalis) in the Bahamas. The spacing between the center hydrophone and the other hydrophones was 45.7 cm. A video camera was attached to the array, and a video tape recorder was time-synchronized with the computer used to digitize the acoustic signals. The echolocation signals had bi-modal frequency spectra with a low-frequency peak between 40 and 50 kHz and a high-frequency peak between 110 and 130 kHz. The low-frequency peak was dominant when the source level was low, and the high-frequency peak dominated when the source level was high. Peak-to-peak source levels as high as 210 dB re 1 μPa were measured. The source level varied in amplitude approximately as a function of the one-way transmission loss for signals traveling from the animals to the array. The characteristics of the signals were similar to those of captive Tursiops truncatus, Delphinapterus leucas and Pseudorca crassidens measured in open waters under controlled conditions.
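The transmission-loss relationship mentioned above can be made concrete with a back-of-the-envelope calculation, assuming one-way spherical spreading plus a frequency-dependent absorption term; the numbers below are illustrative, not measurements from the study.

```python
# Back-calculate a source level from a received level using one-way transmission loss.
import math

def source_level(received_level_db, range_m, absorption_db_per_m=0.03):
    """SL = RL + TL, with TL = 20*log10(r) + alpha*r (one-way spherical spreading)."""
    transmission_loss = 20.0 * math.log10(range_m) + absorption_db_per_m * range_m
    return received_level_db + transmission_loss

# Example: a click received at 170 dB re 1 uPa from 10 m away
print(source_level(170.0, 10.0))   # about 190 dB re 1 uPa plus a small absorption term
```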
Real-time strategy game training: emergence of a cognitive flexibility trait.
Glass, Brian D; Maddox, W Todd; Love, Bradley C
2013-01-01
Training in action video games can increase the speed of perceptual processing. However, it is unknown whether video-game training can lead to broad-based changes in higher-level competencies such as cognitive flexibility, a core and neurally distributed component of cognition. To determine whether video gaming can enhance cognitive flexibility and, if so, why these changes occur, the current study compares two versions of a real-time strategy (RTS) game. Using a meta-analytic Bayes factor approach, we found that the gaming condition that emphasized maintenance and rapid switching between multiple information and action sources led to a large increase in cognitive flexibility as measured by a wide array of non-video gaming tasks. Theoretically, the results suggest that the distributed brain networks supporting cognitive flexibility can be tuned by engrossing video game experience that stresses maintenance and rapid manipulation of multiple information sources. Practically, these results suggest avenues for increasing cognitive function.
Bi, Sheng; Zeng, Xiao; Tang, Xin; Qin, Shujia; Lai, King Wai Chiu
2016-01-01
Compressive sensing (CS) theory has opened up new paths for the development of signal processing applications. Based on this theory, a novel single pixel camera architecture has been introduced to overcome the current limitations and challenges of traditional focal plane arrays. However, video quality based on this method is limited by existing acquisition and recovery methods, and the method also suffers from being time-consuming. In this paper, a multi-frame motion estimation algorithm is proposed in CS video to enhance the video quality. The proposed algorithm uses multiple frames to implement motion estimation. Experimental results show that using multi-frame motion estimation can improve the quality of recovered videos. To further reduce the motion estimation time, a block match algorithm is used to process motion estimation. Experiments demonstrate that using the block match algorithm can reduce motion estimation time by 30%. PMID:26950127
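A rough sketch of block-match motion estimation of the kind referred to above (not the authors' implementation): each block of the current frame is compared, by sum of absolute differences, against candidate displacements within a small search window in the reference frame.

```python
# Exhaustive block-matching motion estimation between two grayscale frames.
import numpy as np

def block_match(ref, cur, block=8, search=4):
    """ref, cur: 2-D grayscale arrays; returns a (rows, cols, 2) motion field (dy, dx)."""
    h, w = cur.shape
    field = np.zeros((h // block, w // block, 2), dtype=np.int32)
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            target = cur[by:by + block, bx:bx + block].astype(np.float64)
            best, best_dxy = np.inf, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = by + dy, bx + dx
                    if y < 0 or x < 0 or y + block > h or x + block > w:
                        continue
                    cand = ref[y:y + block, x:x + block].astype(np.float64)
                    cost = np.sum(np.abs(cand - target))   # sum of absolute differences
                    if cost < best:
                        best, best_dxy = cost, (dy, dx)
            field[by // block, bx // block] = best_dxy
    return field
```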
NASA Astrophysics Data System (ADS)
Tanci, Claudio; Tosti, Gino; Antolini, Elisa; Gambini, Giorgio F.; Bruno, Pietro; Canestrari, Rodolfo; Conforti, Vito; Lombardi, Saverio; Russo, Federico; Sangiorgi, Pierluca; Scuderi, Salvatore
2016-08-01
ASTRI is an on-going project developed in the framework of the Cherenkov Telescope Array (CTA). An end-to-end prototype of a dual-mirror small-size telescope (SST-2M) has been installed at the INAF observing station on Mt. Etna, Italy. The next step is the development of the ASTRI mini-array composed of nine ASTRI SST-2M telescopes proposed to be installed at the CTA southern site. The ASTRI mini-array is a collaborative and international effort carried out by Italy, Brazil, and South Africa and led by the Italian National Institute of Astrophysics, INAF. To control the ASTRI telescopes, a dedicated ASTRI Mini-Array Software System (MASS) was designed using a scalable and distributed architecture to monitor all the hardware devices of the telescopes. Using code generation, we built automatically from the ASTRI Interface Control Documents a set of communication libraries and extensive Graphical User Interfaces that provide full access to the capabilities offered by the telescope hardware subsystems for testing and maintenance. Leveraging these generated libraries and components, we then implemented a human-designed, integrated Engineering GUI for MASS to perform the verification of the whole prototype and test shared services such as the alarms, configurations, control systems, and scientific on-line outcomes. In our experience, the use of code generation dramatically reduced the amount of effort in development, integration, and testing of the more basic software components and resulted in a fast software release life cycle. This approach could be valuable for the whole CTA project, which is characterized by a large diversity of hardware components.
An Imaging And Graphics Workstation For Image Sequence Analysis
NASA Astrophysics Data System (ADS)
Mostafavi, Hassan
1990-01-01
This paper describes an application-specific engineering workstation designed and developed to analyze imagery sequences from a variety of sources. The system combines the software and hardware environment of modern graphic-oriented workstations with digital image acquisition, processing, and display techniques. The objective is to achieve automation and high throughput for many data reduction tasks involving metric studies of image sequences. The applications of such an automated data reduction tool include analysis of the trajectory and attitude of aircraft, missiles, stores, and other flying objects in various flight regimes including launch and separation as well as regular flight maneuvers. The workstation can also be used in an on-line or off-line mode to study three-dimensional motion of aircraft models in simulated flight conditions such as wind tunnels. The system's key features are: 1) Acquisition and storage of image sequences by digitizing real-time video or frames from a film strip; 2) computer-controlled movie loop playback, slow motion and freeze frame display combined with digital image sharpening, noise reduction, contrast enhancement and interactive image magnification; 3) multiple leading edge tracking in addition to object centroids at up to 60 fields per second from either live input video or a stored image sequence; 4) automatic and manual field-of-view and spatial calibration; 5) image sequence data base generation and management, including the measurement data products; 6) off-line analysis software for trajectory plotting and statistical analysis; 7) model-based estimation and tracking of object attitude angles; and 8) interface to a variety of video players and film transport sub-systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Springmeyer, R R; Brugger, E; Cook, R
The Data group provides data analysis and visualization support to its customers. This consists primarily of the development and support of VisIt, a data analysis and visualization tool. Support ranges from answering questions about the tool, providing classes on how to use the tool, and performing data analysis and visualization for customers. The Information Management and Graphics Group supports and develops tools that enhance our ability to access, display, and understand large, complex data sets. Activities include applying visualization software for large scale data exploration; running video production labs on two networks; supporting graphics libraries and tools for end users; maintaining PowerWalls and assorted other displays; and developing software for searching and managing scientific data. Researchers in the Center for Applied Scientific Computing (CASC) work on various projects including the development of visualization techniques for large scale data exploration that are funded by the ASC program, among others. The researchers also have LDRD projects and collaborations with other lab researchers, academia, and industry. The IMG group is located in the Terascale Simulation Facility, home to Dawn, Atlas, BGL, and others, which includes both classified and unclassified visualization theaters, a visualization computer floor and deployment workshop, and video production labs.

We continued to provide the traditional graphics group consulting and video production support. We maintained five PowerWalls and many other displays. We deployed a 576-node Opteron/IB cluster with 72 TB of memory providing a visualization production server on our classified network. We continue to support a 128-node Opteron/IB cluster providing a visualization production server for our unclassified systems and an older 256-node Opteron/IB cluster for the classified systems, as well as several smaller clusters to drive the PowerWalls. The visualization production systems include NFS servers to provide dedicated storage for data analysis and visualization. The ASC projects have delivered new versions of visualization and scientific data management tools to end users and continue to refine them. VisIt had 4 releases during the past year, ending with VisIt 2.0. We released version 2.4 of Hopper, a Java application for managing and transferring files. This release included a graphical disk usage view which works on all types of connections and an aggregated copy feature for transferring massive datasets quickly and efficiently to HPSS. We continue to use and develop Blockbuster and Telepath. Both the VisIt and IMG teams were engaged in a variety of movie production efforts during the past year in addition to the development tasks.
A System for Video Surveillance and Monitoring CMU VSAM Final Report
1999-11-30
Excerpts from the report: keywords include motion-based skeletonization, neural networks, spatio-temporal salience patterns inside image chips, spurious motion rejection, and model-based methods; other fragments refer to a network of sensors defined with respect to the model coordinate system, computation of 3D geolocation estimates, and graphical display of object hypotheses, and note that classification algorithms have been developed, the first of which uses view-dependent visual properties to train a neural network classifier to recognize four classes, beginning with single...
An introduction to scriptwriting for video and multimedia.
Guth, J
1995-06-01
The elements of audiovisual productions are explained and illustrated, including words, moving images, still images, graphics, narration, music, landscape sounds, pacing, and titling and font styles. Three different production styles are analysed, and examples of those styles are discussed. Rules for writing spoken words, composing blocks of information, and explaining technical information to a lay audience are also provided. Storyboard and scripting forms and examples are included.
Software Accelerates Computing Time for Complex Math
NASA Technical Reports Server (NTRS)
2014-01-01
Ames Research Center awarded Newark, Delaware-based EM Photonics Inc. SBIR funding to utilize graphics processing unit (GPU) technology, traditionally used for computer video games, to develop high-performance computing software called CULA. The software gives users the ability to run complex algorithms on personal computers with greater speed. As a result of the NASA collaboration, the number of employees at the company has increased 10 percent.
ERIC Educational Resources Information Center
Halsall, Jane
2004-01-01
What is the appeal of anime? For one thing, the graphic storytelling is uniquely compelling and spans multiple genres. It tends to be targeted to different audiences: young children and families, males or females aged 10-18, or strictly adults for the mature genre called hentai. In America, almost all animation is produced for and watched by…
Muscle forces analysis in the shoulder mechanism during wheelchair propulsion.
Lin, Hwai-Ting; Su, Fong-Chin; Wu, Hong-Wen; An, Kai-Nan
2004-01-01
This study combines an ergometric wheelchair, a six-camera video motion capture system and a prototype computer graphics based musculoskeletal model (CGMM) to predict shoulder joint loading, muscle contraction force per muscle and the sequence of muscular actions during wheelchair propulsion, and also to provide an animated computer graphics model of the relative interactions. Five healthy male subjects with no history of upper extremity injury participated. A conventional manual wheelchair was equipped with a six-component load cell to collect three-dimensional forces and moments experienced by the wheel, allowing real-time measurement of hand/rim force applied by subjects during normal wheelchair operation. An ExpertVision six-camera video motion capture system collected trajectory data of markers attached on anatomical positions. The CGMM was used to simulate and animate muscle action by using an optimization technique combining observed muscular motions with physiological constraints to estimate muscle contraction forces during wheelchair propulsion. The CGMM provides results that satisfactorily match the predictions of previous work, disregarding minor differences which presumably result from differing experimental conditions, measurement technologies and subjects. Specifically, the CGMM shows that the supraspinatus, infraspinatus, anterior deltoid, pectoralis major and biceps long head are the prime movers during the propulsion phase. The middle and posterior deltoid and supraspinatus muscles are responsible for arm return during the recovery phase. CGMM modelling shows that the rotator cuff and pectoralis major play an important role during wheelchair propulsion, confirming the known risk of injury for these muscles during wheelchair propulsion. The CGMM successfully transforms six-camera video motion capture data into a technically useful and visually interesting animated video model of the shoulder musculoskeletal system. The CGMM further yields accurate estimates of muscular forces during motion, indicating that this prototype modelling and analysis technique will aid in study, analysis and therapy of the mechanics and underlying pathomechanics involved in various musculoskeletal overuse syndromes.
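Static optimization of the sort such musculoskeletal models rely on can be sketched as a small constrained minimization: distribute a required joint moment over redundant muscles while minimizing squared relative activations. The moment arms, maximum forces, and required moment below are hypothetical, and this is not the CGMM code.

```python
# Static-optimization step: solve for muscle forces given a required joint moment.
import numpy as np
from scipy.optimize import minimize

moment_arms = np.array([0.030, 0.025, 0.040])   # m, per muscle (hypothetical)
f_max = np.array([800.0, 600.0, 1000.0])        # N, maximum isometric forces (hypothetical)
required_moment = 25.0                          # N*m at the joint for this frame

def cost(forces):
    return np.sum((forces / f_max) ** 2)        # minimize sum of squared activations

constraints = [{"type": "eq",
                "fun": lambda f: np.dot(moment_arms, f) - required_moment}]
bounds = [(0.0, fm) for fm in f_max]            # physiological force limits
result = minimize(cost, x0=f_max * 0.1, bounds=bounds, constraints=constraints)
muscle_forces = result.x                        # estimated contraction forces (N)
```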
Delivery of video-on-demand services using local storages within passive optical networks.
Abeywickrama, Sandu; Wong, Elaine
2013-01-28
At present, distributed storage systems have been widely studied to alleviate Internet traffic build-up caused by high-bandwidth, on-demand applications. Distributed storage arrays located locally within the passive optical network were previously proposed to deliver Video-on-Demand services. As an added feature, a popularity-aware caching algorithm was also proposed to dynamically maintain the most popular videos in the storage arrays of such local storages. In this paper, we present a new dynamic bandwidth allocation algorithm to improve Video-on-Demand services over passive optical networks using local storages. The algorithm exploits the use of standard control packets to reduce the time taken for the initial request communication between the customer and the central office, and to maintain the set of popular movies in the local storage. We conduct packet level simulations to perform a comparative analysis of the Quality-of-Service attributes between two passive optical networks, namely the conventional passive optical network and one that is equipped with a local storage. Results from our analysis highlight that strategic placement of a local storage inside the network enables the services to be delivered with improved Quality-of-Service to the customer. We further formulate power consumption models of both architectures to examine the trade-off between enhanced Quality-of-Service performance versus the increased power requirement from implementing a local storage within the network.
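The popularity-aware caching idea can be sketched as a counter-based cache that keeps the most requested titles in the local storage; this is a simplification and not the paper's algorithm, which also relies on standard control packets for request signalling.

```python
# Popularity-aware local video storage: keep the most requested titles, evict the coldest.
from collections import Counter

class PopularityCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.requests = Counter()   # request counts per video id
        self.stored = set()         # video ids currently held in the local storage

    def request(self, video_id):
        self.requests[video_id] += 1
        if video_id in self.stored:
            return "local"                           # served from the local storage
        if len(self.stored) < self.capacity:
            self.stored.add(video_id)
        else:
            coldest = min(self.stored, key=lambda v: self.requests[v])
            if self.requests[video_id] > self.requests[coldest]:
                self.stored.remove(coldest)          # evict the least popular video
                self.stored.add(video_id)
        return "central office"                      # first copy still comes upstream

# cache = PopularityCache(capacity=100)
```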
Schlosser, Ralf W; Koul, Rajinder; Shane, Howard; Sorce, James; Brock, Kristofer; Harmon, Ashley; Moerlein, Dorothy; Hearn, Emilia
2014-10-01
The effects of animation on naming and identification of graphic symbols for verbs and prepositions were studied in 2 graphic symbol sets in preschoolers. Using a 2 × 2 × 2 × 3 completely randomized block design, preschoolers across three age groups were randomly assigned to combinations of symbol set (Autism Language Program [ALP] Animated Graphics or Picture Communication Symbols [PCS]), symbol format (animated or static), and word class (verbs or prepositions). Children were asked to name symbols and to identify a target symbol from an array given the spoken label. Animated symbols were more readily named than static symbols, although this was more pronounced for verbs than for prepositions. ALP symbols were named more accurately than PCS in particular with prepositions. Animation did not facilitate identification. ALP symbols for prepositions were identified better than PCS, but there was no difference for verbs. Finally, older children guessed and identified symbols more effectively than younger children. Animation improves the naming of graphic symbols for verbs. For prepositions, ALP symbols are named more accurately and are more readily identifiable than PCS. Naming and identifying symbols are learned skills that develop over time. Limitations and future research directions are discussed.
Demirkus, Meltem; Precup, Doina; Clark, James J; Arbel, Tal
2016-06-01
Recent literature shows that facial attributes, i.e., contextual facial information, can be beneficial for improving the performance of real-world applications, such as face verification, face recognition, and image search. Examples of face attributes include gender, skin color, facial hair, etc. How to robustly obtain these facial attributes (traits) is still an open problem, especially in the presence of the challenges of real-world environments: non-uniform illumination conditions, arbitrary occlusions, motion blur and background clutter. What makes this problem even more difficult is the enormous variability presented by the same subject, due to arbitrary face scales, head poses, and facial expressions. In this paper, we focus on the problem of facial trait classification in real-world face videos. We have developed a fully automatic hierarchical and probabilistic framework that models the collective set of frame class distributions and feature spatial information over a video sequence. The experiments are conducted on a large real-world face video database that we have collected, labelled and made publicly available. The proposed method is flexible enough to be applied to any facial classification problem. Experiments on a large, real-world video database McGillFaces [1] of 18,000 video frames reveal that the proposed framework outperforms alternative approaches, by up to 16.96 and 10.13%, for the facial attributes of gender and facial hair, respectively.
Transducer with a sense of touch
NASA Technical Reports Server (NTRS)
Bejczy, A. K.; Paine, G.
1979-01-01
Matrix of pressure sensors determines shape and pressure distribution of object in contact with its surface. Output can be used to develop pressure map of object's surface and displayed as array of alphanumeric symbols on video monitor.
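A minimal sketch of how such a pressure map might be rendered as alphanumeric symbols; the symbol ramp and the synthetic contact patch are assumptions for illustration.

```python
# Minimal sketch of turning a matrix of pressure readings into an
# alphanumeric display. The symbol ramp and sample readings are illustrative.
import numpy as np

SYMBOLS = " .:-=+*#%@"          # low pressure -> high pressure

def pressure_to_text(pressure, p_max=1.0):
    rows = []
    for row in pressure:
        idx = np.clip((row / p_max) * (len(SYMBOLS) - 1), 0, len(SYMBOLS) - 1)
        rows.append("".join(SYMBOLS[int(i)] for i in idx))
    return "\n".join(rows)

# A toy 8x8 "contact patch": higher pressure near the centre.
y, x = np.mgrid[0:8, 0:8]
patch = np.exp(-((x - 3.5) ** 2 + (y - 3.5) ** 2) / 6.0)
print(pressure_to_text(patch))
```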
Orbital thermal analysis of lattice structured spacecraft using color video display techniques
NASA Technical Reports Server (NTRS)
Wright, R. L.; Deryder, D. D.; Palmer, M. T.
1983-01-01
A color video display technique is demonstrated as a tool for rapid determination of thermal problems during the preliminary design of complex space systems. A thermal analysis is presented for the lattice-structured Earth Observation Satellite (EOS) spacecraft at 32 points in a baseline non Sun-synchronous (60 deg inclination) orbit. Large temperature variations (on the order of 150 K) were observed on the majority of the members. A gradual decrease in temperature was observed as the spacecraft traversed the Earth's shadow, followed by a sudden rise in temperature (100 K) as the spacecraft exited the shadow. Heating rate and temperature histories of selected members and color graphic displays of temperatures on the spacecraft are presented.
Recent progress of flexible AMOLED displays
NASA Astrophysics Data System (ADS)
Pang, Huiqing; Rajan, Kamala; Silvernail, Jeff; Mandlik, Prashant; Ma, Ruiqing; Hack, Mike; Brown, Julie J.; Yoo, Juhn S.; Jung, Sang-Hoon; Kim, Yong-Cheol; Byun, Seung-Chan; Kim, Jong-Moo; Yoon, Soo-Young; Kim, Chang-Dong; Hwang, Yong-Kee; Chung, In-Jae; Fletcher, Mark; Green, Derek; Pangle, Mike; McIntyre, Jim; Smith, Randal D.
2011-03-01
Significant progress has been made in recent years in flexible AMOLED displays and numerous prototypes have been demonstrated. Replacing rigid glass with flexible substrates and thin-film encapsulation makes displays thinner, lighter, and non-breakable - all attractive features for portable applications. Flexible AMOLEDs equipped with phosphorescent OLEDs are considered one of the best candidates for low-power, rugged, full-color video applications. Recently, we have demonstrated a portable communication display device, built upon a full-color 4.3-inch HVGA foil display with a resolution of 134 dpi using an all-phosphorescent OLED frontplane. The prototype is shaped into a thin and rugged housing that will fit over a user's wrist, providing situational awareness and enabling the wearer to see real-time video and graphics information.
NFL Films audio, video, and film production facilities
NASA Astrophysics Data System (ADS)
Berger, Russ; Schrag, Richard C.; Ridings, Jason J.
2003-04-01
The new NFL Films 200,000 sq. ft. headquarters is home for the critically acclaimed film production that preserves the NFL's visual legacy week-to-week during the football season, and is also the technical plant that processes and archives football footage from the earliest recorded media to the current network broadcasts. No other company in the country shoots more film than NFL Films, and the inclusion of cutting-edge video and audio formats demands that their technical spaces continually integrate the latest in the ever-changing world of technology. This facility houses a staggering array of acoustically sensitive spaces where music and sound are equal partners with the visual medium. Over 90,000 sq. ft. of sound-critical technical space comprises an array of sound stages, music scoring stages, audio control rooms, music writing rooms, recording studios, mixing theaters, video production control rooms, editing suites, and a screening theater. Every production control space in the building is designed to monitor and produce multichannel surround-sound audio. An overview of the architectural and acoustical design challenges encountered for each sophisticated listening, recording, viewing, editing, and sound-critical environment will be discussed.
OpenMP Parallelization and Optimization of Graph-based Machine Learning Algorithms
2016-05-01
composed of hyperspectral video sequences recording the release of chemical plumes at the Dugway Proving Ground. We use the 329 frames of the... video. Each frame is a hyperspectral image with dimension 128 × 320 × 129, where 129 is the dimension of the channel of each pixel. The total number of... j=1. Then we use the nested for-loop to calculate the values of WXY by the formula (1). We then put the corresponding value in an array which
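Formula (1) is not reproduced in this excerpt, so the following sketch assumes a Gaussian similarity kernel, a common weight choice in graph-based learning, applied to a handful of hyperspectral pixel vectors; it is only a stand-in for the report's actual weight computation.

```python
# The report's formula (1) is not shown above, so this sketch assumes a
# Gaussian similarity kernel applied to a small batch of pixel spectra.
import numpy as np

def weight_matrix(spectra, sigma=1.0):
    """spectra: (n_pixels, n_channels) array of spectral vectors."""
    sq_dists = np.sum((spectra[:, None, :] - spectra[None, :, :]) ** 2, axis=-1)
    return np.exp(-sq_dists / (2.0 * sigma ** 2))

# Toy stand-in for pixels drawn from a 128 x 320 x 129 hyperspectral frame.
rng = np.random.default_rng(0)
pixels = rng.normal(size=(5, 129))
W = weight_matrix(pixels, sigma=5.0)
print(W.round(3))
```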
Effect of Arrangement of Stick Figures on Estimates of Proportion in Risk Graphics
Ancker, Jessica S.; Weber, Elke U.; Kukafka, Rita
2017-01-01
Background Health risks are sometimes illustrated with stick figures, with a certain proportion colored to indicate they are affected by the disease. Perception of these graphics may be affected by whether the affected stick figures are scattered randomly throughout the group or arranged in a block. Objective To assess the effects of stick-figure arrangement on first impressions of estimates of proportion, under a 10-s deadline. Design Questionnaire. Participants and Setting Respondents recruited online (n = 100) or in waiting rooms at an urban hospital (n = 65). Intervention Participants were asked to estimate the proportion represented in 6 unlabeled graphics, half randomly arranged and half sequentially arranged. Measurements Estimated proportions. Results Although average estimates were fairly good, the variability of estimates was high. Overestimates of random graphics were larger than overestimates of sequential ones, except when the proportion was near 50%; variability was also higher with random graphics. Although the average inaccuracy was modest, it was large enough that more than one quarter of respondents confused 2 graphics depicting proportions that differed by 11 percentage points. Low numeracy and educational level were associated with inaccuracy. Limitations Participants estimated proportions but did not report perceived risk. Conclusions Randomly arranged arrays of stick figures should be used with care because viewers’ ability to estimate the proportion in these graphics is so poor that moderate differences between risks may not be visible. In addition, random arrangements may create an initial impression that proportions, especially large ones, are larger than they are. PMID:20671209
Specialized Computer Systems for Environment Visualization
NASA Astrophysics Data System (ADS)
Al-Oraiqat, Anas M.; Bashkov, Evgeniy A.; Zori, Sergii A.
2018-06-01
The need for real-time image generation of landscapes arises in various fields as part of tasks solved by virtual and augmented reality systems, as well as geographic information systems. Such systems provide opportunities for collecting, storing, analyzing and graphically visualizing geographic data. Algorithmic and hardware/software tools for increasing the realism and efficiency of environment visualization in 3D visualization systems are proposed. This paper discusses a modified path-tracing algorithm with a two-level hierarchy of bounding volumes and intersection tests against axis-aligned bounding boxes. The proposed algorithm eliminates branching and is therefore better suited to implementation on multi-threaded CPUs and GPUs. A modified ROAM algorithm is used for high-quality visualization of reliefs and landscapes. The algorithm is implemented on parallel systems: clusters and Compute Unified Device Architecture networks. Results show that the implementation on MPI clusters is more efficient than on Graphics Processing Units/Graphics Processing Clusters and allows real-time synthesis. The organization and algorithms of a parallel GPU system for 3D pseudo-stereo image/video synthesis are proposed, based on an analysis of how each stage of the synthesis can be realized on a parallel GPU architecture. An experimental prototype of a specialized hardware/software system for 3D pseudo-stereo imaging and video was developed on the CPU/GPU. The experimental results show that the proposed adaptation of 3D pseudo-stereo imaging to the architecture of GPU systems is efficient, accelerating the computational procedures of 3D pseudo-stereo synthesis for the anaglyph and anamorphic formats of the 3D stereo frame without additional optimization procedures. The acceleration averages 11 and 54 times for the test GPUs.
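The axis-aligned bounding box intersection mentioned above is commonly implemented with the branch-light "slab test"; the sketch below is a generic illustration of that test, not the paper's implementation.

```python
# Minimal "slab test" sketch for ray / axis-aligned bounding box intersection.
# The min/max formulation avoids per-axis branching, which is what makes it
# friendly to SIMT GPUs; this is a generic illustration only.
import numpy as np

def ray_aabb_hit(origin, direction, box_min, box_max):
    inv_d = 1.0 / direction                    # assumes no zero components
    t0 = (box_min - origin) * inv_d
    t1 = (box_max - origin) * inv_d
    t_near = np.max(np.minimum(t0, t1))
    t_far  = np.min(np.maximum(t0, t1))
    return t_far >= max(t_near, 0.0)

origin    = np.array([0.0, 0.0, -5.0])
direction = np.array([0.05, 0.05, 1.0])
print(ray_aabb_hit(origin, direction,
                   np.array([-1.0, -1.0, -1.0]),
                   np.array([1.0, 1.0, 1.0])))   # True: the ray hits the unit box
```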
Remote console for virtual telerehabilitation.
Lewis, Jeffrey A; Boian, Rares F; Burdea, Grigore; Deutsch, Judith E
2005-01-01
The Remote Console (ReCon) telerehabilitation system provides a platform for therapists to guide rehabilitation sessions from a remote location. The ReCon system integrates real-time graphics, audio/video communication, private therapist chat, post-test data graphs, extendable patient and exercise performance monitoring, exercise pre-configuration and modification under a single application. These tools give therapists the ability to conduct training, monitoring/assessment, and therapeutic intervention remotely and in real-time.
1988-03-01
Kernel System (GKS). This combination of hardware and software allows real-time generation of maps using DMA digitized data. [Ref. 4: p. 44, 46] Though... releases are in MST*.BOO: MSV55X.BOO (Sanyo MBC-550 with IBM-compatible video board), MSVAP3.BOO (NEC APC3), MSVAPC.BOO (NEC APC), MSVAPR.BOO (ACT Apricot), MSVDM2...
2015-06-01
GEOINT (geospatial intelligence), GFC (ground force commander), GPS (global positioning system), GUI (graphical user interface), HA/DR (humanitarian...), ... transport stream, UAS (unmanned aerial system; see UAV), UAV (unmanned aerial vehicle; see UAS), VM (virtual machine), VMU (Marine Unmanned Aerial Vehicle)... Unmanned Air Systems (UASs). Current programs promise to dramatically increase the number of FMV feeds in the near future. However, there are too
Application of Optical Disc Databases and Related Technology to Public Access Settings
1992-03-01
users to download and retain data. A Video Graphics Adapter (VGA) monitor was included. No printer was provided. 2. CD-ROM Product: Computer Select, a... download facilities, without printer support, satisfy user needs? A secondary, but significant, objective was avoidance of unnecessary Reader... design of User Log sheets and mitigated against attachment of a printer to the workstation. F. DATA COLLECTION: This section describes the methodology
Advanced Extravehicular Mobility Unit Informatics Software Design
NASA Technical Reports Server (NTRS)
Wright, Theodore
2014-01-01
This is a description of the software design for the 2013 edition of the Advanced Extravehicular Mobility Unit (AEMU) Informatics computer assembly. The Informatics system is an optional part of the space suit assembly. It adds a graphical interface for displaying suit status, timelines, procedures, and caution and warning information. In the future it will display maps with GPS position data, and video and still images captured by the astronaut.
Real-time terrain storage generation from multiple sensors towards mobile robot operation interface.
Song, Wei; Cho, Seoungjae; Xi, Yulong; Cho, Kyungeun; Um, Kyhyun
2014-01-01
A mobile robot mounted with multiple sensors is used to rapidly collect 3D point clouds and video images so as to allow accurate terrain modeling. In this study, we develop a real-time terrain storage generation and representation system including a nonground point database (PDB), ground mesh database (MDB), and texture database (TDB). A voxel-based flag map is proposed for incrementally registering large-scale point clouds in a terrain model in real time. We quantize the 3D point clouds into 3D grids of the flag map as a comparative table in order to remove the redundant points. We integrate the large-scale 3D point clouds into a nonground PDB and a node-based terrain mesh using the CPU. Subsequently, we program a graphics processing unit (GPU) to generate the TDB by mapping the triangles in the terrain mesh onto the captured video images. Finally, we produce a nonground voxel map and a ground textured mesh as a terrain reconstruction result. Our proposed methods were tested in an outdoor environment. Our results show that the proposed system was able to rapidly generate terrain storage and provide high resolution terrain representation for mobile mapping services and a graphical user interface between remote operators and mobile robots.
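A minimal sketch of the voxel-based flag map idea: points are quantized to voxel indices and only the first point per voxel is kept. The voxel size and sample points are assumptions.

```python
# Minimal sketch of the voxel-based flag map described above: incoming points
# are quantised to voxel indices and only the first point per voxel is kept,
# discarding redundant returns. The voxel size is an assumed parameter.
import numpy as np

def register_points(points, flag_map, voxel_size=0.1):
    """points: (n, 3) array; flag_map: set of occupied voxel indices (mutated)."""
    kept = []
    for p in points:
        key = tuple(np.floor(p / voxel_size).astype(int))
        if key not in flag_map:          # voxel not yet flagged
            flag_map.add(key)
            kept.append(p)
    return np.array(kept)

flag_map = set()
scan = np.array([[0.01, 0.02, 0.00],
                 [0.03, 0.04, 0.05],     # falls in the same voxel as the first point
                 [1.00, 1.00, 1.00]])
print(register_points(scan, flag_map))   # two points survive
```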
A streaming-based solution for remote visualization of 3D graphics on mobile devices.
Lamberti, Fabrizio; Sanna, Andrea
2007-01-01
Mobile devices such as Personal Digital Assistants, Tablet PCs, and cellular phones have greatly enhanced user capability to connect to remote resources. Although a large set of applications is now available bridging the gap between desktop and mobile devices, visualization of complex 3D models is still a hard task to accomplish without specialized hardware. This paper proposes a system where a cluster of PCs, equipped with accelerated graphics cards managed by the Chromium software, is able to handle remote visualization sessions based on MPEG video streaming involving complex 3D models. The proposed framework allows mobile devices such as smart phones, Personal Digital Assistants (PDAs), and Tablet PCs to visualize objects consisting of millions of textured polygons and voxels at a frame rate of 30 fps or more depending on hardware resources at the server side and on multimedia capabilities at the client side. The server is able to concurrently manage multiple clients, computing a video stream for each one; the resolution and quality of each stream are tailored according to the screen resolution and bandwidth of the client. The paper investigates in depth issues related to latency time, bit rate and quality of the generated stream, screen resolutions, as well as frames per second displayed.
PCI-based WILDFIRE reconfigurable computing engines
NASA Astrophysics Data System (ADS)
Fross, Bradley K.; Donaldson, Robert L.; Palmer, Douglas J.
1996-10-01
WILDFORCE is the first PCI-based custom reconfigurable computer that is based on the Splash 2 technology transferred from the National Security Agency and the Institute for Defense Analyses, Supercomputing Research Center (SRC). The WILDFORCE architecture has many of the features of the WILDFIRE computer, such as field-programmable gate array (FPGA) based processing elements, linear array and crossbar interconnection, and high-performance memory and I/O subsystems. New features introduced in the PCI-based WILDFIRE systems include memory/processor options that can be added to any processing element. These options include static and dynamic memory, digital signal processors (DSPs), FPGAs, and microprocessors. In addition to memory/processor options, many different application-specific connectors can be used to extend the I/O capabilities of the system, including systolic I/O, camera input and video display output. This paper also discusses how this new PCI-based reconfigurable computing engine is used for rapid-prototyping, real-time video processing and other DSP applications.
NASA Astrophysics Data System (ADS)
Selker, Ted
1983-05-01
A lens-focusing system was built using a hardware model of a retina (a Reticon RL256 light-sensitive array) with a low-cost processor (an 8085 with 512 bytes of ROM and 512 bytes of RAM). This system was developed and tested on a variety of visual stimuli to demonstrate that: a) an algorithm which moves a lens to maximize the sum of the differences of light level on adjacent light sensors will converge to best focus in all but contrived situations; this is a simpler algorithm than any previously suggested; b) it is feasible to use unmodified video sensor arrays with inexpensive processors to aid video camera use; in the future, software could be developed to extend the processor's usefulness, possibly to track an actor by panning and zooming to give a camera operator increased ease of framing; c) lateral inhibition is an adequate basis for determining best focus, which supports a simple anatomically motivated model of how our brain focuses our eyes.
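A minimal simulation of the focusing strategy described: step the lens in the direction that increases the sum of differences between adjacent sensor readings and stop when the metric no longer improves. The simulated sensor line and blur model are stand-ins for the Reticon hardware.

```python
# Minimal sketch of the focus strategy described above: hill-climb the lens
# position on the sum of absolute differences between adjacent sensors.
# The simulated sensor line and blur model are illustrative stand-ins.
import numpy as np

rng = np.random.default_rng(1)
scene = rng.random(256)                      # sharp 1-D "scene" on the sensor line
BEST_POSITION = 40                           # assumed lens position of true focus

def read_sensors(lens_pos):
    """Return the sensor line blurred in proportion to defocus."""
    blur = 1 + abs(lens_pos - BEST_POSITION) // 5
    kernel = np.ones(blur) / blur
    return np.convolve(scene, kernel, mode="same")

def focus_metric(samples):
    return np.sum(np.abs(np.diff(samples)))  # sum of adjacent-sensor differences

pos, step = 0, 4
score = focus_metric(read_sensors(pos))
while True:
    trial = focus_metric(read_sensors(pos + step))
    if trial > score:
        pos, score = pos + step, trial       # keep climbing in this direction
    elif step != -4:
        step = -4                            # try the other direction once
    else:
        break
print("lens stopped near position", pos)
```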
An ultrahigh-speed color video camera operating at 1,000,000 fps with 288 frame memories
NASA Astrophysics Data System (ADS)
Kitamura, K.; Arai, T.; Yonai, J.; Hayashida, T.; Kurita, T.; Maruyama, H.; Namiki, J.; Yanagi, T.; Yoshida, T.; van Kuijk, H.; Bosiers, Jan T.; Saita, A.; Kanayama, S.; Hatade, K.; Kitagawa, S.; Etoh, T. Goji
2008-11-01
We developed an ultrahigh-speed color video camera that operates at 1,000,000 fps (frames per second) and has the capacity to store 288 frame memories. In 2005, we developed an ultrahigh-speed, high-sensitivity portable color camera with a 300,000-pixel single CCD (ISIS-V4: In-situ Storage Image Sensor, Version 4). Its ultrahigh-speed shooting capability of 1,000,000 fps was made possible by directly connecting CCD storages, which record video images, to the photodiodes of individual pixels. The number of consecutive frames was 144. However, longer capture times were demanded when the camera was used during imaging experiments and for some television programs. To increase ultrahigh-speed capture times, we used a beam splitter and two ultrahigh-speed 300,000-pixel CCDs. The beam splitter was placed behind the pickup lens. One CCD was located at each of the two outputs of the beam splitter. The CCD driving unit was developed to separately drive two CCDs, and the recording period of the two CCDs was sequentially switched. This increased the recording capacity to 288 images, an increase of a factor of two over that of the conventional ultrahigh-speed camera. A problem with the camera was that the incident light on each CCD was reduced by a factor of two by the beam splitter. To improve the light sensitivity, we developed a microlens array for use with the ultrahigh-speed CCDs. We simulated the operation of the microlens array in order to optimize its shape and then fabricated it using stamping technology. Using this microlens array increased the light sensitivity of the CCDs by an approximate factor of two. By using a beam splitter in conjunction with the microlens array, it was possible to make an ultrahigh-speed color video camera that has 288 frame memories without decreasing the camera's light sensitivity.
Kocna, P
1995-01-01
GastroBase, a clinical information system, incorporates patient identification, medical records, images, laboratory data, patient history, physical examination, and other patient-related information. Program modules are written in C; all data are processed using the Novell Btrieve data manager. The patient identification database represents the core of this information system. A graphics library developed in the past year and graphics modules with a special video card enable the storing, archiving, and linking of different images to the electronic patient medical record. GastroBase has been running in daily routine for more than four years, and the database contains more than 25,000 medical records and 1,500 images. This new version of GastroBase is now incorporated into the clinical information system of the University Clinic in Prague.
2010-04-29
magnitude greater than today’s high-definition video coding standards. Moreover, the micromirror devices of maskless lithography are smaller than those...be found in the literature [33]. In this architecture, the optical source flashes on a writer system, which consists of a micromirror array and a...the writer system. Due to the physical dimension constraints of the micromirror array and writer system, an entire wafer can be written in a few
Two-dimensional systolic-array architecture for pixel-level vision tasks
NASA Astrophysics Data System (ADS)
Vijverberg, Julien A.; de With, Peter H. N.
2010-05-01
This paper presents ongoing work on the design of a two-dimensional (2D) systolic array for image processing. This component is designed to operate on a multi-processor system-on-chip. In contrast with other 2D systolic-array architectures and many other hardware accelerators, we investigate the applicability of executing multiple tasks in a time-interleaved fashion on the Systolic Array (SA). This leads to a lower external memory bandwidth and better load balancing of the tasks on the different processing tiles. To enable the interleaving of tasks, we add a shadow-state register for fast task switching. To reduce the number of accesses to the external memory, we propose to share the communication assist between consecutive tasks. A preliminary, non-functional version of the SA has been synthesized for an XV4S25 FPGA device and yields a maximum clock frequency of 150 MHz requiring 1,447 slices and 5 memory blocks. Mapping tasks from video content-analysis applications from literature on the SA yields reductions in the execution time of 1-2 orders of magnitude compared to the software implementation. We conclude that the choice for an SA architecture is useful, but a scaled version of the SA featuring less logic with fewer processing and pipeline stages yielding a lower clock frequency, would be sufficient for a video analysis system-on-chip.
The interactive digital video interface
NASA Technical Reports Server (NTRS)
Doyle, Michael D.
1989-01-01
A frequent complaint in the computer-oriented trade journals is that current hardware technology is progressing so quickly that software developers cannot keep up. An example of this phenomenon can be seen in the field of microcomputer graphics. To exploit the advantages of new mechanisms of information storage and retrieval, new approaches must be made towards incorporating existing programs as well as developing entirely new applications. A particular area of need is the correlation of discrete image elements to textual information. The interactive digital video (IDV) interface embodies a new concept in software design which addresses these needs. The IDV interface is a patented, device- and language-independent process for identifying image features on a digital video display, which allows a number of different processes to be keyed to that identification. Its capabilities include the correlation of discrete image elements to relevant text information and the correlation of these image features to other images as well as to program control mechanisms. Sophisticated interrelationships can be set up between images, text, and program control mechanisms.
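The patented IDV process itself is not detailed here, so the following is only a schematic sketch of the general idea of keying display regions to text and program actions; the regions, labels and callbacks are hypothetical.

```python
# Minimal sketch of the general idea described above: image features on a
# display are keyed to text and to program actions. The regions, labels and
# callbacks here are hypothetical, not the patented IDV implementation.
regions = [
    # (name, bounding box as (x0, y0, x1, y1), linked text, action)
    ("left_ventricle", (120, 80, 200, 160), "Text: left ventricle anatomy",
     lambda: print("-> loading related image set")),
    ("aorta",          (210, 40, 260, 120), "Text: aortic arch notes",
     lambda: print("-> jumping to aorta chapter")),
]

def click(x, y):
    for name, (x0, y0, x1, y1), text, action in regions:
        if x0 <= x <= x1 and y0 <= y <= y1:
            print(f"feature: {name}")
            print(text)
            action()
            return
    print("no feature at this position")

click(150, 100)   # inside the first region
click(10, 10)     # background
```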
Cognitive chrono-ethnography lite.
Nakajima, Masato; Yamada, Kosuke C; Kitajima, Muneo
2012-01-01
Conducting field research facilitates understanding of human daily activities. Cognitive Chrono-Ethnography (CCE) is a study methodology used to understand how people select actions in daily life by conducting ethnographical field research. CCE consists of measuring monitors' daily activities in a specified field and in-depth interviews using the recorded videos afterward. However, privacy issues may arise when conducting standard CCE with video recordings in a daily field. To resolve these issues, we developed a new study methodology, CCE Lite. To replace video recordings, we created pseudo-first-person-view (PFPV) movies using a computer graphics technique. The PFPV movies were used to remind the monitors of their activities. These movies replicated the monitors' activities (e.g., locomotion and changes in physical direction), with no human images or voices. We applied CCE Lite in a case study that involved female employees of hotels at a spa resort. In-depth interviews while showing the PFPV movies determined the service schema (i.e., hospitality) of the employees. Results indicated that using PFPV movies helped the employees to remember and reconstruct the situation of the recorded activities.
NASA Technical Reports Server (NTRS)
Truong, L. V.
1994-01-01
Computer graphics are often applied for better understanding and interpretation of data under observation. These graphics become more complicated when animation is required during "run-time", as found in many typical modern artificial intelligence and expert systems. Living Color Frame Maker is a solution to many of these real-time graphics problems. Living Color Frame Maker (LCFM) is a graphics generation and management tool for IBM or IBM compatible personal computers. To eliminate graphics programming, the graphic designer can use LCFM to generate computer graphics frames. The graphical frames are then saved as text files, in a readable and disclosed format, which can be easily accessed and manipulated by user programs for a wide range of "real-time" visual information applications. For example, LCFM can be implemented in a frame-based expert system for visual aids in management of systems. For monitoring, diagnosis, and/or controlling purposes, circuit or systems diagrams can be brought to "life" by using designated video colors and intensities to symbolize the status of hardware components (via real-time feedback from sensors). Thus status of the system itself can be displayed. The Living Color Frame Maker is user friendly with graphical interfaces, and provides on-line help instructions. All options are executed using mouse commands and are displayed on a single menu for fast and easy operation. LCFM is written in C++ using the Borland C++ 2.0 compiler for IBM PC series computers and compatible computers running MS-DOS. The program requires a mouse and an EGA/VGA display. A minimum of 77K of RAM is also required for execution. The documentation is provided in electronic form on the distribution medium in WordPerfect format. A sample MS-DOS executable is provided on the distribution medium. The standard distribution medium for this program is one 5.25 inch 360K MS-DOS format diskette. The contents of the diskette are compressed using the PKWARE archiving tools. The utility to unarchive the files, PKUNZIP.EXE, is included. The Living Color Frame Maker tool was developed in 1992.
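The LCFM frame-file format is described only as readable text, so the format, component names and status threshold in this sketch are invented for illustration of the general idea: parse a frame description and recolor components from sensor feedback.

```python
# Minimal sketch of the LCFM idea described above: a frame stored as readable
# text is parsed by a user program, and component colours are switched
# according to sensor feedback. The file format and thresholds are invented;
# the real LCFM format is not reproduced here.
FRAME_TEXT = """\
pump_1    x=10 y=20 color=GREEN
valve_3   x=40 y=20 color=GREEN
heater_2  x=70 y=20 color=GREEN
"""

def parse_frame(text):
    frame = {}
    for line in text.splitlines():
        name, *fields = line.split()
        frame[name] = dict(f.split("=") for f in fields)
    return frame

def update_colors(frame, sensor_readings, limit=100.0):
    for name, value in sensor_readings.items():
        if name in frame:
            frame[name]["color"] = "RED" if value > limit else "GREEN"
    return frame

frame = update_colors(parse_frame(FRAME_TEXT), {"heater_2": 140.0, "pump_1": 60.0})
for name, attrs in frame.items():
    print(name, attrs)
```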
Real-time 3D visualization of volumetric video motion sensor data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carlson, J.; Stansfield, S.; Shawver, D.
1996-11-01
This paper addresses the problem of improving detection, assessment, and response capabilities of security systems. Our approach combines two state-of-the-art technologies: volumetric video motion detection (VVMD) and virtual reality (VR). This work capitalizes on the ability of VVMD technology to provide three-dimensional (3D) information about the position, shape, and size of intruders within a protected volume. The 3D information is obtained by fusing motion detection data from multiple video sensors. The second component involves the application of VR technology to display information relating to the sensors and the sensor environment. VR technology enables an operator, or security guard, to be immersed in a 3D graphical representation of the remote site. VVMD data is transmitted from the remote site via ordinary telephone lines. There are several benefits to displaying VVMD information in this way. Because the VVMD system provides 3D information and because the sensor environment is a physical 3D space, it seems natural to display this information in 3D. Also, the 3D graphical representation depicts essential details within and around the protected volume in a natural way for human perception. Sensor information can also be more easily interpreted when the operator can `move` through the virtual environment and explore the relationships between the sensor data, objects and other visual cues present in the virtual environment. By exploiting the powerful ability of humans to understand and interpret 3D information, we expect to improve the means for visualizing and interpreting sensor information, allow a human operator to assess a potential threat more quickly and accurately, and enable a more effective response. This paper will detail both the VVMD and VR technologies and will discuss a prototype system based upon their integration.
Design and implementation of highly parallel pipelined VLSI systems
NASA Astrophysics Data System (ADS)
Delange, Alphonsus Anthonius Jozef
A methodology and its realization as a prototype CAD (Computer Aided Design) system for the design and analysis of complex multiprocessor systems is presented. The design is an iterative process in which the behavioral specifications of the system components are refined into structural descriptions consisting of interconnections, lower-level components, etc. A model for the representation and analysis of multiprocessor systems at several levels of abstraction and an implementation of a CAD system based on this model are described. A high-level design language, an object-oriented development kit for tool design, a design data management system, and design and analysis tools, such as a high-level simulator and a graphics design interface, integrated into the prototype system are described. Procedures are described for the synthesis of semiregular processor arrays, for computing the switching of input/output signals, for memory management and control of the processor array, and for the sequencing and segmentation of input/output data streams arising from partitioning and clustering of the processor array during the subsequent synthesis steps. The architecture and control of a parallel system are designed and each component mapped to a module or module generator in a symbolic layout library, compacted for the design rules of VLSI (Very Large Scale Integration) technology. An example is given of the design of a processor that is a useful building block for highly parallel pipelined systems in the signal/image processing domains.
Teaching New Literacies in Grades 4-6: Resources for 21st-Century Classrooms
ERIC Educational Resources Information Center
Moss, Barbara, Ed.; Lapp, Diane, Ed.
2009-01-01
Upper-elementary students encounter a sometimes dizzying array of traditional and nontraditional texts both in and outside of the classroom. This practical handbook helps teachers in grades 4-6 harness the instructional potential of fiction, poetry, and plays; informational texts; graphic novels; digital storytelling; Web-based and multimodal…
Tiede, Dirk; Baraldi, Andrea; Sudmanns, Martin; Belgiu, Mariana; Lang, Stefan
2017-01-01
Spatiotemporal analytics of multi-source Earth observation (EO) big data is a pre-condition for semantic content-based image retrieval (SCBIR). As a proof of concept, an innovative EO semantic querying (EO-SQ) subsystem was designed and prototypically implemented in series with an EO image understanding (EO-IU) subsystem. The EO-IU subsystem is automatically generating ESA Level 2 products (scene classification map, up to basic land cover units) from optical satellite data. The EO-SQ subsystem comprises a graphical user interface (GUI) and an array database embedded in a client server model. In the array database, all EO images are stored as a space-time data cube together with their Level 2 products generated by the EO-IU subsystem. The GUI allows users to (a) develop a conceptual world model based on a graphically supported query pipeline as a combination of spatial and temporal operators and/or standard algorithms and (b) create, save and share within the client-server architecture complex semantic queries/decision rules, suitable for SCBIR and/or spatiotemporal EO image analytics, consistent with the conceptual world model. PMID:29098143
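A minimal sketch of the kind of space-time cube query such a GUI composes (combining a classification layer with spatial and temporal conditions); the cube, class codes and query are synthetic and do not reflect the EO-SQ implementation.

```python
# Minimal sketch of a space-time cube query: combine a Level 2 classification
# layer with a temporal condition. The cube, class codes and query are
# synthetic illustrations, not the EO-SQ interface.
import numpy as np

T, H, W = 6, 50, 50                    # time steps x rows x cols
rng = np.random.default_rng(42)
scene_class = rng.integers(0, 4, size=(T, H, W))   # 0=water, 1=vegetation, 2=bare, 3=cloud

VEGETATION = 1
veg_mask = scene_class == VEGETATION

# "Pixels that were vegetation in at least 4 of the 6 acquisitions"
persistent_veg = veg_mask.sum(axis=0) >= 4
print("persistently vegetated pixels:", int(persistent_veg.sum()))

# "Vegetated fraction per acquisition" as a simple temporal profile
print("vegetated fraction per time step:", veg_mask.mean(axis=(1, 2)).round(3))
```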
Video-based measurements for wireless capsule endoscope tracking
NASA Astrophysics Data System (ADS)
Spyrou, Evaggelos; Iakovidis, Dimitris K.
2014-01-01
The wireless capsule endoscope is a swallowable medical device equipped with a miniature camera enabling the visual examination of the gastrointestinal (GI) tract. It wirelessly transmits thousands of images to an external video recording system, while its location and orientation are being tracked approximately by external sensor arrays. In this paper we investigate a video-based approach to tracking the capsule endoscope without requiring any external equipment. The proposed method involves extraction of speeded up robust features from video frames, registration of consecutive frames based on the random sample consensus algorithm, and estimation of the displacement and rotation of interest points within these frames. The results obtained by the application of this method on wireless capsule endoscopy videos indicate its effectiveness and improved performance over the state of the art. The findings of this research pave the way for a cost-effective localization and travel distance measurement of capsule endoscopes in the GI tract, which could contribute in the planning of more accurate surgical interventions.
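A minimal sketch of this style of frame-to-frame registration, using ORB in place of the speeded up robust features (SURF) named above (SURF requires the opencv-contrib package) and a RANSAC-estimated similarity transform; the input file names are placeholders.

```python
# Minimal sketch of frame-to-frame registration of the kind described above.
# ORB is used instead of SURF, and "frame1.png"/"frame2.png" are placeholder
# file names, not data from the paper.
import cv2
import numpy as np

frame1 = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)
frame2 = cv2.imread("frame2.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=1000)
kp1, des1 = orb.detectAndCompute(frame1, None)
kp2, des2 = orb.detectAndCompute(frame2, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

# RANSAC rejects outlier correspondences before estimating the motion.
M, inliers = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC,
                                         ransacReprojThreshold=3.0)
dx, dy = M[0, 2], M[1, 2]
rotation = np.degrees(np.arctan2(M[1, 0], M[0, 0]))
print(f"displacement: ({dx:.1f}, {dy:.1f}) px, rotation: {rotation:.1f} deg")
```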
3D graphics, virtual reality, and motion-onset visual evoked potentials in neurogaming.
Beveridge, R; Wilson, S; Coyle, D
2016-01-01
A brain-computer interface (BCI) offers movement-free control of a computer application and is achieved by reading and translating the cortical activity of the brain into semantic control signals. Motion-onset visual evoked potentials (mVEP) are neural potentials employed in BCIs and occur when motion-related stimuli are attended visually. mVEP dynamics are correlated with the position and timing of the moving stimuli. To investigate the feasibility of utilizing the mVEP paradigm with video games of various graphical complexities including those of commercial quality, we conducted three studies over four separate sessions comparing the performance of classifying five mVEP responses with variations in graphical complexity and style, in-game distractions, and display parameters surrounding mVEP stimuli. To investigate the feasibility of utilizing contemporary presentation modalities in neurogaming, one of the studies compared mVEP classification performance when stimuli were presented using the Oculus Rift virtual reality headset. Results from 31 independent subjects were analyzed offline. The results show classification performances ranging up to 90%, with variations in graphical complexity having limited effect on mVEP performance, thus demonstrating the feasibility of using the mVEP paradigm within BCI-based neurogaming. © 2016 Elsevier B.V. All rights reserved.
Visualization of instationary flows by particle traces
NASA Astrophysics Data System (ADS)
Raasch, S.
A study is presented in which the output of an atmospheric flow model is visualized by computer movies. The structure and evolution of the flow is visualized by starting weightless particles at the locations of the model grid points at distinct, equally spaced times. These particles are then advected by the flow alone. In order to avoid useless accumulation of particles, they can be given a limited lifetime. Scalar quantities can be shown in addition, using color-shaded contours as background information. A movie with several examples of atmospheric flows, for example convection in the atmospheric boundary layer, slope winds, land-sea breeze and Kelvin-Helmholtz waves, is presented. The simulations are performed by two-dimensional and three-dimensional nonhydrostatic finite-difference models. Graphics are produced using the UNIRAS software, and the graphic output is in the form of CGM metafiles. The single frames are stored on an ABEKAS real-time video disc and then transferred to a BETACAM-SP tape recorder. The graphics software produces only two-dimensional pictures; for example, only cross sections of three-dimensional simulations can be shown. To produce a movie of typically 90 seconds duration, the graphics software and the particle model need about 10 hours of CPU time on a CDC CYBER 990, and the CGM metafile has a size of about 1.4 GByte.
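A minimal sketch of the particle-trace technique described: seed weightless particles at grid points at regular intervals, advect them with the flow, and retire them after a limited lifetime. The rotating 2-D flow field is a toy stand-in for the atmospheric model output.

```python
# Minimal sketch of the particle-trace technique: particles are seeded at grid
# points at regular intervals, advected by the flow, and retired after a
# limited lifetime. The rotating 2-D flow is a toy stand-in for model output.
import numpy as np

def velocity(p):                       # simple solid-body rotation about the origin
    return np.array([-p[1], p[0]])

DT, LIFETIME, SEED_EVERY = 0.05, 40, 10
grid = [np.array([x, y], float) for x in range(-2, 3) for y in range(-2, 3)]
particles = []                         # each entry: [position, age]

for step in range(100):
    if step % SEED_EVERY == 0:         # release a fresh set at the grid points
        particles.extend([[p.copy(), 0] for p in grid])
    for particle in particles:
        particle[0] += DT * velocity(particle[0])   # forward-Euler advection
        particle[1] += 1
    particles = [p for p in particles if p[1] < LIFETIME]  # limited lifetime

print(f"{len(particles)} particles alive after {100 * DT:.1f} time units")
```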
Brunner, J; Krummenauer, F; Lehr, H A
2000-04-01
Study end-points in microcirculation research are usually video-taped images rather than numeric computer print-outs. Analysis of these video-taped images for the quantification of microcirculatory parameters usually requires computer-based image analysis systems. Most software programs for image analysis are custom-made, expensive, and limited in their applicability to selected parameters and study end-points. We demonstrate herein that an inexpensive, commercially available computer software (Adobe Photoshop), run on a Macintosh G3 computer with inbuilt graphic capture board provides versatile, easy to use tools for the quantification of digitized video images. Using images obtained by intravital fluorescence microscopy from the pre- and postischemic muscle microcirculation in the skinfold chamber model in hamsters, Photoshop allows simple and rapid quantification (i) of microvessel diameters, (ii) of the functional capillary density and (iii) of postischemic leakage of FITC-labeled high molecular weight dextran from postcapillary venules. We present evidence of the technical accuracy of the software tools and of a high degree of interobserver reliability. Inexpensive commercially available imaging programs (i.e., Adobe Photoshop) provide versatile tools for image analysis with a wide range of potential applications in microcirculation research.
A practical implementation of free viewpoint video system for soccer games
NASA Astrophysics Data System (ADS)
Suenaga, Ryo; Suzuki, Kazuyoshi; Tezuka, Tomoyuki; Panahpour Tehrani, Mehrdad; Takahashi, Keita; Fujii, Toshiaki
2015-03-01
In this paper, we present a free viewpoint video generation system with billboard representation for soccer games. Free viewpoint video generation is a technology that enables users to watch 3-D objects from their desired viewpoints. Practical implementation of free viewpoint video for sports events is highly demanded. However, a commercially acceptable system has not yet been developed. The main obstacles are insufficient user-end quality of the synthesized images and highly complex procedures that sometimes require manual operations. In this work, we aim to develop a commercially acceptable free viewpoint video system with a billboard representation. A supposed scenario is that soccer games during the day can be broadcast in 3-D, even in the evening of the same day. Our work is still ongoing. However, we have already developed several techniques to support our goal. First, we captured an actual soccer game at an official stadium using 20 full-HD professional cameras. Second, we have implemented several tools for free viewpoint video generation as follows. In order to facilitate free viewpoint video generation, all cameras should be calibrated. We calibrated all cameras using checkerboard images and feature points on the field (cross points of the soccer field lines). We extract each player region from the captured images manually. The background region is estimated automatically by observing chrominance changes of each pixel in the temporal domain. Additionally, we have developed a user interface for visualizing free viewpoint video generation using a graphics library (OpenGL), which is suitable not only for commercial TV sets but also for devices such as smartphones. However, a practical system has not yet been completed and our study is still ongoing.
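The background estimation step described above (observing chrominance changes of each pixel over time) can be sketched with a per-pixel temporal median; the synthetic frames and the threshold are illustrative assumptions, not the authors' pipeline.

```python
# Minimal sketch of background estimation from temporal chrominance: take a
# temporally stable value (here the median) per pixel as background and flag
# large deviations as players. Frames and threshold are synthetic.
import numpy as np

T, H, W = 30, 40, 60
rng = np.random.default_rng(0)
frames_uv = np.full((T, H, W, 2), 0.3) + rng.normal(0, 0.01, (T, H, W, 2))
frames_uv[10:20, 15:25, 20:30, :] = 0.8        # a "player" passing through

background_uv = np.median(frames_uv, axis=0)   # per-pixel temporal median
deviation = np.linalg.norm(frames_uv[15] - background_uv, axis=-1)
foreground_mask = deviation > 0.2              # assumed threshold

print("foreground pixels in frame 15:", int(foreground_mask.sum()))  # ~100
```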
3D Integration for Wireless Multimedia
NASA Astrophysics Data System (ADS)
Kimmich, Georg
The convergence of mobile phone, internet, mapping, gaming and office automation tools with high quality video and still imaging capture capability is becoming a strong market trend for portable devices. High-density video encode and decode, 3D graphics for gaming, increased application-software complexity and ultra-high-bandwidth 4G modem technologies are driving the CPU performance and memory bandwidth requirements close to the PC segment. These portable multimedia devices are battery operated, which requires the deployment of new low-power-optimized silicon process technologies and ultra-low-power design techniques at system, architecture and device level. Mobile devices also need to comply with stringent silicon-area and package-volume constraints. As for all consumer devices, low production cost and fast time-to-volume production is key for success. This chapter shows how 3D architectures can bring a possible breakthrough to meet the conflicting power, performance and area constraints. Multiple 3D die-stacking partitioning strategies are described and analyzed on their potential to improve the overall system power, performance and cost for specific application scenarios. Requirements and maturity of the basic process-technology bricks including through-silicon via (TSV) and die-to-die attachment techniques are reviewed. Finally, we highlight new challenges which will arise with 3D stacking and an outlook on how they may be addressed: Higher power density will require thermal design considerations, new EDA tools will need to be developed to cope with the integration of heterogeneous technologies and to guarantee signal and power integrity across the die stack. The silicon/wafer test strategies have to be adapted to handle high-density IO arrays, ultra-thin wafers and provide built-in self-test of attached memories. New standards and business models have to be developed to allow cost-efficient assembly and testing of devices from different silicon and technology providers.
Public service user terminus study compendium of terminus equipment
NASA Technical Reports Server (NTRS)
1979-01-01
General descriptions and specifications are given for equipment which facilitates satellite and terrestrial communications delivery by acting as interfaces between a human, mechanical, or electrical information generator (or source) and the communication system. Manufacturers and suppliers are given as well as the purchase, service, or lease costs of various products listed under the following categories: voice/telephony/facsimile equipment; data/graphics terminals; full-motion and processed video equipment; and multiple access equipment.
A Trusted Path Design and Implementation for Security Enhanced Linux
2004-09-01
functionality by a member of the team? Witten, et al., [21] provides an excellent discussion of some aspects of the subject. Ultimately, open vs... terminal window is a program like gnome-terminal that provides a TTY-like environment as a window inside an X Windows session. The phrase computer... Editors selected; No sound or video; No graphics; Check all development boxes except KDE; Administrative tools; System tools; No printing support
GRAPE: a graphical pipeline environment for image analysis in adaptive magnetic resonance imaging.
Gabr, Refaat E; Tefera, Getaneh B; Allen, William J; Pednekar, Amol S; Narayana, Ponnada A
2017-03-01
We present a platform, GRAphical Pipeline Environment (GRAPE), to facilitate the development of patient-adaptive magnetic resonance imaging (MRI) protocols. GRAPE is an open-source project implemented in the Qt C++ framework to enable graphical creation, execution, and debugging of real-time image analysis algorithms integrated with the MRI scanner. The platform provides the tools and infrastructure to design new algorithms, and build and execute an array of image analysis routines, and provides a mechanism to include existing analysis libraries, all within a graphical environment. The application of GRAPE is demonstrated in multiple MRI applications, and the software is described in detail for both the user and the developer. GRAPE was successfully used to implement and execute three applications in MRI of the brain, performed on a 3.0-T MRI scanner: (i) a multi-parametric pipeline for segmenting the brain tissue and detecting lesions in multiple sclerosis (MS), (ii) patient-specific optimization of the 3D fluid-attenuated inversion recovery MRI scan parameters to enhance the contrast of brain lesions in MS, and (iii) an algebraic image method for combining two MR images for improved lesion contrast. GRAPE allows graphical development and execution of image analysis algorithms for inline, real-time, and adaptive MRI applications.
The Ocean Observatories Initiative: Data Access and Visualization via the Graphical User Interface
NASA Astrophysics Data System (ADS)
Garzio, L. M.; Belabbassi, L.; Knuth, F.; Smith, M. J.; Crowley, M. F.; Vardaro, M.; Kerfoot, J.
2016-02-01
The Ocean Observatories Initiative (OOI), funded by the National Science Foundation, is a broad-scale, multidisciplinary effort to transform oceanographic research by providing users with real-time access to long-term datasets from a variety of deployed physical, chemical, biological, and geological sensors. The global array component of the OOI includes four high latitude sites: Irminger Sea off Greenland, Station Papa in the Gulf of Alaska, Argentine Basin off the coast of Argentina, and Southern Ocean near coordinates 55°S and 90°W. Each site is composed of fixed moorings, hybrid profiler moorings and mobile assets, with a total of approximately 110 instruments at each site. Near real-time (telemetered) and recovered data from these instruments can be visualized and downloaded via the OOI Graphical User Interface. In this Interface, the user can visualize scientific parameters via six different plotting functions with options to specify time ranges and apply various QA/QC tests. Data streams from all instruments can also be downloaded in different formats (CSV, JSON, and NetCDF) for further data processing, visualization, and comparison to supplementary datasets. In addition, users can view alerts and alarms in the system, access relevant metadata and deployment information for specific instruments, and find infrastructure specifics for each array including location, sampling strategies, deployment schedules, and technical drawings. These datasets from the OOI provide an unprecedented opportunity to transform oceanographic research and education, and will be readily accessible to the general public via the OOI's Graphical User Interface.
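A minimal sketch of working with a NetCDF file downloaded from such an interface, using xarray; the file name and variable names are assumptions for illustration, since actual OOI stream variables differ by instrument.

```python
# Minimal sketch of reading a NetCDF download with xarray. The file name and
# the variable names ("sea_water_temperature", "time") are assumptions; actual
# OOI stream variables differ by instrument.
import xarray as xr

ds = xr.open_dataset("ooi_irminger_ctd.nc")          # hypothetical download
temp = ds["sea_water_temperature"]

# Subset a time range and compute a daily mean for a quick look at the record.
subset = temp.sel(time=slice("2015-01-01", "2015-03-31"))
daily = subset.resample(time="1D").mean()
print(daily.head())
```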
NASA Astrophysics Data System (ADS)
Klopfer, Eric; Scheintaub, Hal; Huang, Wendy; Wendel, Daniel
Computational approaches to science are radically altering the nature of scientific investigation. Yet these computer programs and simulations are sparsely used in science education, and when they are used, they are typically “canned” simulations which are black boxes to students. StarLogo The Next Generation (TNG) was developed to make programming of simulations more accessible for students and teachers. StarLogo TNG builds on the StarLogo tradition of agent-based modeling for students and teachers, with the added features of a graphical programming environment and a three-dimensional (3D) world. The graphical programming environment reduces the learning curve of programming, especially syntax. The 3D graphics make for a more immersive and engaging experience for students, including making it easy for them to design and program their own video games. Another change to StarLogo TNG is a fundamental restructuring of the virtual machine to make it more transparent. As a result of these changes, classroom use of TNG is expanding to new areas. The chapter concludes with a description of field tests conducted in middle and high school science classes.
2012-01-01
Background It is known from recent studies that more than 90% of human multi-exon genes are subject to Alternative Splicing (AS), a key molecular mechanism in which multiple transcripts may be generated from a single gene. It is widely recognized that a breakdown in AS mechanisms plays an important role in cellular differentiation and pathologies. Polymerase Chain Reactions, microarrays and sequencing technologies have been applied to the study of transcript diversity arising from alternative expression. Last generation Affymetrix GeneChip Human Exon 1.0 ST Arrays offer a more detailed view of the gene expression profile providing information on the AS patterns. The exon array technology, with more than five million data points, can detect approximately one million exons, and it allows performing analyses at both gene and exon level. In this paper we describe BEAT, an integrated user-friendly bioinformatics framework to store, analyze and visualize exon arrays datasets. It combines a data warehouse approach with some rigorous statistical methods for assessing the AS of genes involved in diseases. Meta statistics are proposed as a novel approach to explore the analysis results. BEAT is available at http://beat.ba.itb.cnr.it. Results BEAT is a web tool which allows uploading and analyzing exon array datasets using standard statistical methods and an easy-to-use graphical web front-end. BEAT has been tested on a dataset with 173 samples and tuned using new datasets of exon array experiments from 28 colorectal cancer and 26 renal cell cancer samples produced at the Medical Genetics Unit of IRCCS Casa Sollievo della Sofferenza. To highlight all possible AS events, alternative names, accession Ids, Gene Ontology terms and biochemical pathways annotations are integrated with exon and gene level expression plots. The user can customize the results choosing custom thresholds for the statistical parameters and exploiting the available clinical data of the samples for a multivariate AS analysis. Conclusions Despite exon array chips being widely used for transcriptomics studies, there is a lack of analysis tools offering advanced statistical features and requiring no programming knowledge. BEAT provides a user-friendly platform for a comprehensive study of AS events in human diseases, displaying the analysis results with easily interpretable and interactive tables and graphics. PMID:22536968
Communicating Science on YouTube and Beyond: OSIRIS-REx Presents 321Science!
NASA Astrophysics Data System (ADS)
Spitz, Anna H.; Dykhuis, Melissa; Platts, Symeon; Keane, James T.; Tanquary, Hannah E.; Zellem, Robert; Hawley, Tiffany; Lauretta, Dante; Beshore, Ed; Bottke, Bill; Hergenrother, Carl; Dworkin, Jason P.; Patchell, Rose; Spitz, Sarah E.; Bentley, Zoe
2014-11-01
NASA’s OSIRIS-REx asteroid sample return mission launched OSIRIS-REx Presents 321Science!, a series of short videos, in December 2013 at youtube.com/osirisrex. A multi-disciplinary team of communicators, film and graphic arts students, teens, scientists, and engineers produces one video per month on a science and engineering topic related to the OSIRIS-REx mission. The format is designed to engage all members of the public, but especially younger audiences with the science and engineering of the mission. The videos serve as a resource for team members and others, complementing more traditional formats such as formal video interviews, mission animations, and hands-on activities. In creating this new form of OSIRIS-REx engagement, we developed 321Science! as an umbrella program to encourage expansion of the concept and topics beyond the OSIRIS-REx mission through partnerships. Such an expansion strengthens and magnifies the reach of the OSIRIS-REx efforts.321Science! has a detailed proposed schedule of video production through launch in 2016. Production plans are categorized to coincide with the course of the mission beginning with Learning the basics - about asteroids and the mission - and proceeding to Building the spacecraft, Run up to launch, Cruising to Bennu, Run up to rendezvous, Mapping Bennu, Sampling, Analyzing data, Cruising home and Returning and analyzing the sample. The video library will host a combination of videos on broad science topics and short specialized concepts with an average length of 2-3 minutes. Video production also takes into account external events, such as other missions’ milestones, to draw attention to our videos. Production will remain flexible and responsive to audience interests and needs and to developments in the mission, science, and external events. As of August 2014, 321Science! videos have over 22,000 views. We use YouTube analytics to evaluate our success and we are investigating additional and more rigorous evaluation methods for future analysis.
Pathways to Renewable Hydrogen Video (Text Version) | Hydrogen and Fuel
array of abundant, sugar-rich plant-based material. A fermentation process in the lab breaks down the... The photobiological process in a way is a parallel of the fermentation. The only difference is now the
Examination of YouTube videos related to synthetic cannabinoids.
Fullwood, M Dottington; Kecojevic, Aleksandar; Basch, Corey H
2016-08-17
The growing popularity of synthetic cannabinoids (SCBs) is increasing the chance of adverse health issues in the United States. Moreover, social media platforms such as YouTube, which provide a platform for user-generated content, can convey misinformation or glorify use of SCBs. The aim of this study was to describe the content of the most popular YouTube videos related to SCBs. Videos with at least 1000 views found under the search terms "K2" and "spice" were included in the analysis. The collective number of views was over 7.5 million. Nearly half of the videos were consumer produced (n=42). The most common content was a description of K2 (n=69), followed by mention of the dangers of using K2 (n=47), mention of side effects (n=38) and showing a person using K2 (n=37). One-third of the videos (n=34) promoted use of K2, while 22 videos mentioned the risk of dying as a consequence of using K2. YouTube could be used as a surveillance tool to combat this epidemic, but instead the most widely viewed videos related to SCBs are uploaded by consumers. These consumer videos often give the viewer access to a wide array of uploaders describing, encouraging, participating in and promoting use.
NASA Astrophysics Data System (ADS)
Yamaguchi, Masahiro; Haneishi, Hideaki; Fukuda, Hiroyuki; Kishimoto, Junko; Kanazawa, Hiroshi; Tsuchida, Masaru; Iwama, Ryo; Ohyama, Nagaaki
2006-01-01
In addition to the great advances in high-resolution and large-screen imaging technology, the issue of color is now receiving considerable attention as an aspect distinct from image resolution. It is difficult to reproduce the original color of a subject in conventional imaging systems, and this obstructs the application of visual communication systems in telemedicine, electronic commerce, and digital museums. To break through the limitations of conventional RGB three-primary systems, the "Natural Vision" project aims at an innovative video and still-image communication technology with high-fidelity color reproduction capability, based on spectral information. This paper summarizes the results of the NV project, including the development of multispectral and multiprimary imaging technologies and experimental investigations of applications to medicine, digital archives, electronic commerce, and computer graphics.
Quality metric for spherical panoramic video
NASA Astrophysics Data System (ADS)
Zakharchenko, Vladyslav; Choi, Kwang Pyo; Park, Jeong Hoon
2016-09-01
Virtual reality (VR) and augmented reality (AR) applications allow users to view artificial content of a surrounding space, simulating a presence effect with the help of special applications or devices. Synthetic content production is a well-known process from the computer graphics domain, and its pipeline is already well established in the industry. However, emerging multimedia formats for immersive entertainment applications, such as free-viewpoint television (FTV) or spherical panoramic video, require different approaches to content management and quality assessment. International standardization of FTV has been promoted by MPEG. This paper is dedicated to a discussion of the immersive media distribution format and the quality estimation process. The accuracy and reliability of the proposed objective quality estimation method have been verified with spherical panoramic images, demonstrating good correlation with subjective quality estimation carried out by a group of experts.
Using Computer Simulation for Neurolab 2 Mission Planning
NASA Technical Reports Server (NTRS)
Sanders, Betty M.
1997-01-01
This paper presents an overview of the procedure used in the creation of a computer simulation video generated by the Graphics Research and Analysis Facility at NASA/Johnson Space Center. The simulation was preceded by an analysis of anthropometric characteristics of crew members and workspace requirements for 13 experiments to be conducted on Neurolab 2, which is dedicated to neuroscience and behavioral research. Neurolab 2 is being carried out as a partnership among national domestic research institutes and international space agencies. The video is a tour of the Spacelab module as it will be configured for STS-90, scheduled for launch in the spring of 1998, and identifies experiments that can be conducted in parallel during that mission. Finally, this paper also addresses methods for using computer modeling to facilitate the mission planning activity.
A passive terahertz video camera based on lumped element kinetic inductance detectors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rowe, Sam, E-mail: sam.rowe@astro.cf.ac.uk; Pascale, Enzo; Doyle, Simon
We have developed a passive 350 GHz (850 μm) video-camera to demonstrate lumped element kinetic inductance detectors (LEKIDs)—designed originally for far-infrared astronomy—as an option for general purpose terrestrial terahertz imaging applications. The camera currently operates at a quasi-video frame rate of 2 Hz with a noise equivalent temperature difference per frame of ∼0.1 K, which is close to the background limit. The 152 element superconducting LEKID array is fabricated from a simple 40 nm aluminum film on a silicon dielectric substrate and is read out through a single microwave feedline with a cryogenic low noise amplifier and room temperature frequency-domain multiplexing electronics.
Flat-panel video resolution LED display system
NASA Astrophysics Data System (ADS)
Wareberg, P. G.; Kennedy, D. I.
The system consists of a 128 x 128 element X-Y addressable LED array fabricated from green-emitting gallium phosphide. The LED array is interfaced with a 128 x 128 matrix TV camera. Associated electronics provides for seven levels of grey scale above zero with a grey scale ratio of square root of 2. Picture elements are on 0.008 inch centers resulting in a resolution of 125 lines-per-inch and a display area of approximately 1 sq. in. The LED array concept lends itself to modular construction, permitting assembly of a flat panel screen of any desired size from 1 x 1 inch building blocks without loss of resolution. A wide range of prospective aerospace applications exist extending from helmet-mounted systems involving small dedicated arrays to multimode cockpit displays constructed as modular screens. High-resolution LED arrays are already used as CRT replacements in military film-marking reconnaissance applications.
Qin, Caidie; Bai, Xue; Zhang, Yue; Gao, Kai
2018-05-03
A photoelectrochemical wire microelectrode was constructed based on the use of a TiO2 nanotube array with electrochemically deposited CdSe semiconductor. A strongly amplified photocurrent is generated on the sensor surface. The microsensor has a response in the 0.05-20 μM dopamine (DA) concentration range and a 16.7 μM detection limit at a signal-to-noise ratio of 3. Sensitivity, recovery and reproducibility of the sensor were validated by detecting DA in spiked human urine, and satisfactory results were obtained. Graphical abstract: Schematic of a sensitive photoelectrochemical microsensor based on a CdSe-modified TiO2 nanotube array. The photoelectrochemical microsensor was successfully applied to the determination of dopamine in urine samples.
Algorithmic commonalities in the parallel environment
NASA Technical Reports Server (NTRS)
Mcanulty, Michael A.; Wainer, Michael S.
1987-01-01
The ultimate aim of this project was to analyze procedures from substantially different application areas to discover what is either common or peculiar in the process of conversion to the Massively Parallel Processor (MPP). Three areas were identified: molecular dynamic simulation, production systems (rule systems), and various graphics and vision algorithms. To date, only selected graphics procedures have been investigated. They are the most readily available, and produce the most visible results. These include simple polygon patch rendering, raycasting against a constructive solid geometric model, and stochastic or fractal based textured surface algorithms. Only the simplest of conversion strategies, mapping a major loop to the array, has been investigated so far. It is not entirely satisfactory.
HealthTrust: a social network approach for retrieving online health videos.
Fernandez-Luque, Luis; Karlsen, Randi; Melton, Genevieve B
2012-01-31
Social media are becoming mainstream in the health domain. Despite the large volume of accurate and trustworthy health information available on social media platforms, finding good-quality health information can be difficult. Misleading health information can often be popular (eg, antivaccination videos) and therefore highly rated by general search engines. We believe that community wisdom about the quality of health information can be harnessed to help create tools for retrieving good-quality social media content. Our objective was to explore approaches for extracting metrics of authoritativeness in online health communities and to examine how these metrics positively correlate with the quality of the content. We designed a metric, called HealthTrust, that estimates the trustworthiness of social media content (eg, blog posts or videos) in a health community. The HealthTrust metric calculates reputation in an online health community based on link analysis. We used the metric to retrieve YouTube videos and channels about diabetes. In two different experiments, health consumers provided 427 ratings of 17 videos and professionals gave 162 ratings of 23 videos. In addition, two professionals reviewed 30 diabetes channels. HealthTrust may be used for retrieving online videos on diabetes, since it performed better than YouTube Search in most cases. Overall, of 20 potential channels, HealthTrust's filtering allowed only 3 bad channels (15%) versus 8 (40%) on the YouTube list. Misleading and graphic videos (eg, featuring amputations) were more commonly found by YouTube Search than by searches based on HealthTrust. However, some videos from trusted sources had low HealthTrust scores, mostly from general health content providers that were therefore not highly connected in the diabetes community. When comparing video ratings from our reviewers, we found that HealthTrust achieved a positive and statistically significant correlation with professionals (Pearson r₁₀ = .65, P = .02) and a trend toward significance with health consumers (r₇ = .65, P = .06) with videos on hemoglobin A1c, but it did not perform as well with diabetic foot videos. The trust-based metric HealthTrust showed promising results when used to retrieve diabetes content from YouTube. Our research indicates that social network analysis may be used to identify trustworthy social media in health communities.
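The abstract describes HealthTrust only as a link-analysis reputation metric, without giving the formula. As a hedged illustration of the general family of methods referred to, the sketch below runs a PageRank-style iteration over a small hypothetical graph of channels; the damping factor, graph, and function name are illustrative assumptions, not the published HealthTrust definition.

```python
# A minimal PageRank-style link-analysis sketch over a hypothetical channel graph.
# This is NOT the published HealthTrust formula; it only illustrates the family
# of reputation metrics the abstract refers to.
def link_reputation(links, iterations=50, damping=0.85):
    nodes = sorted(set(links) | {t for targets in links.values() for t in targets})
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iterations):
        new_rank = {n: (1.0 - damping) / len(nodes) for n in nodes}
        for src, targets in links.items():
            if targets:
                share = damping * rank[src] / len(targets)
                for t in targets:
                    new_rank[t] += share
        rank = new_rank
    return rank

# Hypothetical diabetes channels; an edge means "links to / endorses".
graph = {"clinicA": ["eduB", "patientC"], "eduB": ["clinicA"], "patientC": ["clinicA"], "spamD": []}
print(link_reputation(graph))
```

In practice such scores would be combined with search relevance to rank the retrieved videos and channels, which is the retrieval step the study evaluates against YouTube Search.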
The Brain Database: A Multimedia Neuroscience Database for Research and Teaching
Wertheim, Steven L.
1989-01-01
The Brain Database is an information tool designed to aid in the integration of clinical and research results in neuroanatomy and regional biochemistry. It can handle a wide range of data types including natural images, 2 and 3-dimensional graphics, video, numeric data and text. It is organized around three main entities: structures, substances and processes. The database will support a wide variety of graphical interfaces. Two sample interfaces have been made. This tool is intended to serve as one component of a system that would allow neuroscientists and clinicians 1) to represent clinical and experimental data within a common framework 2) to compare results precisely between experiments and among laboratories, 3) to use computing tools as an aid in collaborative work and 4) to contribute to a shared and accessible body of knowledge about the nervous system.
Area-delay trade-offs of texture decompressors for a graphics processing unit
NASA Astrophysics Data System (ADS)
Novoa Súñer, Emilio; Ituero, Pablo; López-Vallejo, Marisa
2011-05-01
Graphics Processing Units have become a booster for the microelectronics industry. However, due to intellectual property issues, there is a serious lack of information on implementation details of the hardware architecture that is behind GPUs. For instance, the way texture is handled and decompressed in a GPU to reduce bandwidth usage has never been dealt with in depth from a hardware point of view. This work addresses a comparative study on the hardware implementation of different texture decompression algorithms for both conventional (PCs and video game consoles) and mobile platforms. Circuit synthesis is performed targeting both a reconfigurable hardware platform and a 90nm standard cell library. Area-delay trade-offs have been extensively analyzed, which allows us to compare the complexity of decompressors and thus determine suitability of algorithms for systems with limited hardware resources.
NASA Technical Reports Server (NTRS)
Sapp, C. A.; Dragg, J. L.; Snyder, M. W.; Gaunce, M. T.; Decker, J. E.
1998-01-01
This report documents the photogrammetric assessment of the Hubble Space Telescope (HST) solar arrays conducted by the NASA Johnson Space Center Image Science and Analysis Group during the Second Servicing Mission (SM-2) on STS-82 in February 1997. Two types of solar array analyses were conducted during the mission using Space Shuttle payload bay video: (1) measurement of solar array motion due to induced loads, and (2) measurement of the solar array static or geometric twist caused by the cumulative array loading. The report describes pre-mission planning and analysis technique development activities conducted to acquire and analyze solar array imagery data during SM-2. This includes analysis of array motion obtained during SM-1 as a proof-of-concept of the SM-2 measurement techniques. The report documents the results of real-time analysis conducted during the mission and subsequent analysis conducted post-flight. This report also provides a summary of lessons learned on solar array imagery analysis from SM-2 and recommendations for future on-orbit measurements applicable to HST SM-3 and to the International Space Station. This work was performed under the direction of the Goddard Space Flight Center HST Flight Systems and Servicing Project.
From Antarctica to space: Use of telepresence and virtual reality in control of remote vehicles
NASA Technical Reports Server (NTRS)
Stoker, Carol; Hine, Butler P., III; Sims, Michael; Rasmussen, Daryl; Hontalas, Phil; Fong, Terrence W.; Steele, Jay; Barch, Don; Andersen, Dale; Miles, Eric
1994-01-01
In the Fall of 1993, NASA Ames deployed a modified Phantom S2 Remotely-Operated underwater Vehicle (ROV) into an ice-covered sea environment near McMurdo Science Station, Antarctica. This deployment was part of the Antarctic Space Analog Program, a joint program between NASA and the National Science Foundation to demonstrate technologies relevant for space exploration in a realistic field setting in the Antarctic. The goal of the mission was to operationally test the use of telepresence and virtual reality technology in the operator interface to a remote vehicle, while performing a benthic ecology study. The vehicle was operated both locally, from above a dive hole in the ice through which it was launched, and remotely over a satellite communications link from a control room at NASA's Ames Research Center. Local control of the vehicle was accomplished using the standard Phantom control box containing joysticks and switches, with the operator viewing stereo video camera images on a stereo display monitor. Remote control of the vehicle over the satellite link was accomplished using the Virtual Environment Vehicle Interface (VEVI) control software developed at NASA Ames. The remote operator interface included either a stereo display monitor similar to that used locally or a stereo head-mounted head-tracked display. The compressed video signal from the vehicle was transmitted to NASA Ames over a 768 Kbps satellite channel. Another channel was used to provide a bi-directional Internet link to the vehicle control computer through which the command and telemetry signals traveled, along with a bi-directional telephone service. In addition to the live stereo video from the satellite link, the operator could view a computer-generated graphic representation of the underwater terrain, modeled from the vehicle's sensors. The virtual environment contained an animated graphic model of the vehicle which reflected the state of the actual vehicle, along with ancillary information such as the vehicle track, science markers, and locations of video snapshots. The actual vehicle was driven either from within the virtual environment or through a telepresence interface. All vehicle functions could be controlled remotely over the satellite link.
2010-03-01
to a graphics card, and not the redesign of XML. The justification is that if XML is going to be prevalent, special optimized hardware is...the answer, similar to the specialized functions of a video card. Given Moore's law that processing power doubles every few years, let the...and numerous multimedia players such as iTunes from Apple. These applications are free to use, but the source is restricted by software licenses
JPRS Report, Soviet Union, Political Affairs
1989-08-04
34hits" occupy the leisure time of children, teenagers and adults. What is alarming are the numerous letters to the editor: " porno " is flooding the...alone took out of circulation more than 1500 video cassettes of films with ideologically harmful and porno - graphic contents. And we know the...Criminalists have many such examples. " Porno " has bred many abcesses. And one usually wants to shout, "Where are the police looking?" Konishev answers
Advanced Spacesuit Informatics Software Design for Power, Avionics and Software Version 2.0
NASA Technical Reports Server (NTRS)
Wright, Theodore W.
2016-01-01
A description of the software design for the 2016 edition of the Informatics computer assembly of NASA's Advanced Extravehicular Mobility Unit (AEMU), also called the Advanced Spacesuit. The Informatics system is an optional part of the spacesuit assembly. It adds a graphical interface for displaying suit status, timelines, procedures, and warning information. It also provides an interface to the suit-mounted camera for recording still images, video, and audio field notes.
Performance and Preference with Various VDT (Video Display Terminal) Phosphors
1987-04-24
Unit M\\100.001-1302 was submitted for review on 13 March 1987, approved for publication on 24 April 1987, and has been designated as Naval Submarine... Designed to investigate reading fatigue, Nordqvist et al. (1986) had their subjects read texts for 15 minutes, followed by 5 minutes of performance tests... Doc. Ophthalmol. 3: 138-163. Tullis, T.S. (1981). An evaluation of alphanumeric, graphic, and color information displays. Human Factors 23: 541-550
Space Astronomy Update: Stars Under Construction
NASA Technical Reports Server (NTRS)
1995-01-01
A discussion of the images obtained by NASA's Hubble Space Telescope (HST) is featured on this video. The discussion panel consists of Dr. Jeff Hester (Arizona State Univ.), Dr. Jon Morse (Space Telescope Science Inst.), Dr. Chris Burrows (European Space Agency), Dr. Bruce Margon (Univ. of Washington), and host Don Savage (Goddard Space Flight Center). A variety of graphics and explanations are provided for the images of star formations and other astronomical features that were viewed by the HST.
Assessing the impact of telestration on surgical telementoring: A randomized controlled trial.
Budrionis, Andrius; Hasvold, Per; Hartvigsen, Gunnar; Bellika, Johan Gustav
2016-01-01
Using graphical annotations in surgical telementoring promises vast improvements in both clinical and educational outcomes. However, these assumptions do not consider the potential patient safety risks resulting from this feature. Major differences in regulations regarding the implementation of telestration encourage an assessment of the utility of this feature on the outcomes of telementoring sessions. Eight students participated in a randomized controlled trial comparing verbal with annotation-supplemented telementoring via video conferencing. A remote mentor guided the participants through four localization exercises, identifying features in a still laparoscopic surgery scene using a laparoscopic simulator. Clinical and educational outcomes were assessed; the time consumption and quality of mentoring were determined. The study revealed no significant difference between the studied methods in localizing the intervention, while educational outcomes favoured verbal mentoring. Telestration-supplemented guidance was considerably faster and resulted in fewer miscommunications between the mentor and mentee. The initial hypothesis of major clinical and educational benefits of telestration in telementoring was not supported. A potential 33% decrease in the duration of the mentored episodes is expected due to the ability to annotate live video content. However, the impact of this time saving on the outcome of the procedure remains unclear. Regardless of the quantitative measures, most of the participants and the mentor agreed that graphical annotations provide advantages over verbal guidance. © The Author(s) 2015.
Vision-based gait impairment analysis for aided diagnosis.
Ortells, Javier; Herrero-Ezquerro, María Trinidad; Mollineda, Ramón A
2018-02-12
Gait is a firsthand reflection of health condition. This belief has inspired recent research efforts to automate the analysis of pathological gait, in order to assist physicians in decision-making. However, most of these efforts rely on gait descriptions which are difficult to understand by humans, or on sensing technologies hardly available in ambulatory services. This paper proposes a number of semantic and normalized gait features computed from a single video acquired by a low-cost sensor. Far from being conventional spatio-temporal descriptors, features are aimed at quantifying gait impairment, such as gait asymmetry from several perspectives or falling risk. They were designed to be invariant to frame rate and image size, allowing cross-platform comparisons. Experiments were formulated in terms of two databases. A well-known general-purpose gait dataset is used to establish normal references for features, while a new database, introduced in this work, provides samples under eight different walking styles: one normal and seven impaired patterns. A number of statistical studies were carried out to prove the sensitivity of features at measuring the expected pathologies, providing enough evidence about their accuracy. Graphical Abstract Graphical abstract reflecting main contributions of the manuscript: at the top, a robust, semantic and easy-to-interpret feature set to describe impaired gait patterns; at the bottom, a new dataset consisting of video-recordings of a number of volunteers simulating different patterns of pathological gait, where features were statistically assessed.
Trudeau, Natacha; Sutton, Ann; Dagenais, Emmanuelle; de Broeck, Sophie; Morford, Jill
2007-10-01
This study investigated the impact of syntactic complexity and task demands on construction of utterances using picture communication symbols by participants from 3 age groups with no communication disorders. Participants were 30 children (7;0 [years;months] to 8;11), 30 teenagers (12;0 to 13;11), and 30 adults (18 years and above). All participants constructed graphic symbol utterances to describe photographs presented with spoken French stimuli. Stimuli included simple and complex (object relative and subject relative) utterances describing the photographs, which were presented either 1 at a time (neutral condition) or in an array of 4 (contrast condition). Simple utterances led to more uniform response patterns than complex utterances. Among complex utterances, subject relative sentences appeared more difficult to convey. Increasing the need for message clarity (i.e., contrast condition) elicited changes in the production of graphic symbol sequences for complex propositions. The effects of syntactic complexity and task demands were more pronounced for children. Graphic symbol utterance construction appears to involve more than simply transferring spoken language skills. One possible explanation is that this type of task requires higher levels of metalinguistic ability. Clinical implications and directions for further research are discussed.
Developing a gate-array capability at a research and development laboratory
NASA Astrophysics Data System (ADS)
Balch, J. W.; Current, K. W.; Magnuson, W. G., Jr.; Pocha, M. D.
1983-03-01
Experiences in developing a gate array capability for low volume applications in a research and development (R and D) laboratory are described. By purchasing unfinished wafers and doing the customization steps in-house, turnaround time was shortened to as little as one week and the direct costs were reduced to as low as $5K per design. Designs generally require fast turnaround (a few weeks to a few months) and very low volumes (1 to 25). Design costs must be kept at a minimum. After reviewing available commercial gate array design and fabrication services, it was determined that objectives would best be met by using existing internal integrated circuit fabrication facilities, the COMPUTERVISION interactive graphics layout system, and extensive computational capabilities. The reasons for and the approach taken to selecting a particular gate array wafer, adapting a particular logic simulation program, and enhancing the layout aids are discussed. Testing of the customized chips is described. The content, schedule, and results of the internal gate array course recently completed are discussed. Finally, problem areas and near term plans are presented.
ARCGRAPH SYSTEM - AMES RESEARCH GRAPHICS SYSTEM
NASA Technical Reports Server (NTRS)
Hibbard, E. A.
1994-01-01
Ames Research Graphics System, ARCGRAPH, is a collection of libraries and utilities which assist researchers in generating, manipulating, and visualizing graphical data. In addition, ARCGRAPH defines a metafile format that contains device independent graphical data. This file format is used with various computer graphics manipulation and animation packages at Ames, including SURF (COSMIC Program ARC-12381) and GAS (COSMIC Program ARC-12379). In its full configuration, the ARCGRAPH system consists of a two stage pipeline which may be used to output graphical primitives. Stage one is associated with the graphical primitives (i.e. moves, draws, color, etc.) along with the creation and manipulation of the metafiles. Five distinct data filters make up stage one. They are: 1) PLO which handles all 2D vector primitives, 2) POL which handles all 3D polygonal primitives, 3) RAS which handles all 2D raster primitives, 4) VEC which handles all 3D raster primitives, and 5) PO2 which handles all 2D polygonal primitives. Stage two is associated with the process of displaying graphical primitives on a device. To generate the various graphical primitives, create and reprocess ARCGRAPH metafiles, and access the device drivers in the VDI (Video Device Interface) library, users link their applications to ARCGRAPH's GRAFIX library routines. Both FORTRAN and C language versions of the GRAFIX and VDI libraries exist for enhanced portability within these respective programming environments. The ARCGRAPH libraries were developed on a VAX running VMS. Minor documented modification of various routines, however, allows the system to run on the following computers: Cray X-MP running COS (no C version); Cray 2 running UNICOS; DEC VAX running BSD 4.3 UNIX, or Ultrix; SGI IRIS Turbo running GL2-W3.5 and GL2-W3.6; Convex C1 running UNIX; Amhdahl 5840 running UTS; Alliant FX8 running UNIX; Sun 3/160 running UNIX (no native device driver); Stellar GS1000 running Stellex (no native device driver); and an SGI IRIS 4D running IRIX (no native device driver). Currently with version 7.0 of ARCGRAPH, the VDI library supports the following output devices: A VT100 terminal with a RETRO-GRAPHICS board installed, a VT240 using the Tektronix 4010 emulation capability, an SGI IRIS turbo using the native GL2 library, a Tektronix 4010, a Tektronix 4105, and the Tektronix 4014. ARCGRAPH version 7.0 was developed in 1988.
Camera array based light field microscopy
Lin, Xing; Wu, Jiamin; Zheng, Guoan; Dai, Qionghai
2015-01-01
This paper proposes a novel approach for high-resolution light field microscopy imaging by using a camera array. In this approach, we apply a two-stage relay system for expanding the aperture plane of the microscope into the size of an imaging lens array, and utilize a sensor array for acquiring different sub-apertures images formed by corresponding imaging lenses. By combining the rectified and synchronized images from 5 × 5 viewpoints with our prototype system, we successfully recovered color light field videos for various fast-moving microscopic specimens with a spatial resolution of 0.79 megapixels at 30 frames per second, corresponding to an unprecedented data throughput of 562.5 MB/s for light field microscopy. We also demonstrated the use of the reported platform for different applications, including post-capture refocusing, phase reconstruction, 3D imaging, and optical metrology. PMID:26417490
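One plausible accounting for the 562.5 MB/s figure quoted above, under the assumption (not stated in the abstract) that each of the 5 × 5 sensors delivers about 0.79 million 8-bit raw pixels per frame at 30 frames per second:

```python
# Back-of-envelope check of the reported light-field data rate (assumptions noted inline).
views = 5 * 5                     # camera array size from the abstract
pixels_per_view = 0.75 * 2**20    # ~0.79 million pixels per sensor (assumed)
bytes_per_pixel = 1               # assumed 8-bit raw (Bayer) samples
fps = 30
rate_mib_s = views * pixels_per_view * bytes_per_pixel * fps / 2**20
print(rate_mib_s)                 # 562.5 -> consistent with the reported 562.5 MB/s
```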
Thermal Protection System Imagery Inspection Management System -TIIMS
NASA Technical Reports Server (NTRS)
Goza, Sharon; Melendrez, David L.; Henningan, Marsha; LaBasse, Daniel; Smith, Daniel J.
2011-01-01
TIIMS is used during the inspection phases of every mission to provide quick visual feedback, detailed inspection data, and determination to the mission management team. This system consists of a visual Web page interface, an SQL database, and a graphical image generator. These combine to allow a user to ascertain quickly the status of the inspection process and the current determination of any problem zones. The TIIMS system allows inspection engineers to enter their determinations into a database and to link pertinent images and video to those database entries. The database then assigns criteria to each zone and tile, and via query, sends the information to a graphical image generation program. Using the official TIPS database tile positions and sizes, the graphical image generation program creates images of the current status of the orbiter, coloring zones and tiles based on a predefined key code. These images are then displayed on a Web page using customized JAVA scripts to display the appropriate zone of the orbiter based on the location of the user's cursor. The close-up graphic and database entry for that particular zone can then be seen by selecting the zone. This page contains links into the database to access the images used by the inspection engineer when they make the determination entered into the database.
Development and test of video systems for airborne surveillance of oil spills
NASA Technical Reports Server (NTRS)
Millard, J. P.; Arvesen, J. C.; Lewis, P. L.
1975-01-01
Five video systems - potentially useful for airborne surveillance of oil spills - were developed, flight tested, and evaluated. The systems are: (1) conventional black and white TV, (2) conventional TV with false color, (3) differential TV, (4) prototype Lunar Surface TV, and (5) field sequential TV. Wavelength and polarization filtering were utilized in all systems. Greatly enhanced detection of oil spills, relative to that possible with the unaided eye, was achieved. The most practical video system is a conventional TV camera with silicon-diode-array image tube, filtered with a Corning 7-54 filter and a polarizer oriented with its principal axis in the horizontal direction. Best contrast between oil and water was achieved when winds and sea states were low. The minimum detectable oil film thickness was about 0.1 micrometer.
Caryoscope: An Open Source Java application for viewing microarray data in a genomic context
Awad, Ihab AB; Rees, Christian A; Hernandez-Boussard, Tina; Ball, Catherine A; Sherlock, Gavin
2004-01-01
Background Microarray-based comparative genome hybridization experiments generate data that can be mapped onto the genome. These data are interpreted more easily when represented graphically in a genomic context. Results We have developed Caryoscope, which is an open source Java application for visualizing microarray data from array comparative genome hybridization experiments in a genomic context. Caryoscope can read General Feature Format files (GFF files), as well as comma- and tab-delimited files, that define the genomic positions of the microarray reporters for which data are obtained. The microarray data can be browsed using an interactive, zoomable interface, which helps users identify regions of chromosomal deletion or amplification. The graphical representation of the data can be exported in a number of graphic formats, including publication-quality formats such as PostScript. Conclusion Caryoscope is a useful tool that can aid in the visualization, exploration and interpretation of microarray data in a genomic context. PMID:15488149
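Caryoscope itself is a Java application; as a language-neutral illustration of the kind of input described above (reporter genomic positions plus ratios in a tab-delimited file), the Python sketch below groups reporters by chromosome and sorts them into genome order for plotting. The column names and file layout are assumed for illustration and are not Caryoscope's actual schema.

```python
import csv
from collections import defaultdict

def load_reporters(path):
    """Read a hypothetical tab-delimited file: reporter, chromosome, start, end, log2ratio."""
    by_chrom = defaultdict(list)
    with open(path, newline="") as fh:
        for row in csv.DictReader(fh, delimiter="\t"):
            by_chrom[row["chromosome"]].append(
                (int(row["start"]), int(row["end"]), float(row["log2ratio"]), row["reporter"])
            )
    for spots in by_chrom.values():
        spots.sort()                      # genome order for drawing
    return by_chrom

# Usage (file contents are hypothetical):
# data = load_reporters("array_cgh.tsv")
# amplified = [s for s in data["chr8"] if s[2] > 1.0]   # candidate amplifications
```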
Virtual hand: a 3D tactile interface to virtual environments
NASA Astrophysics Data System (ADS)
Rogowitz, Bernice E.; Borrel, Paul
2008-02-01
We introduce a novel system that allows users to experience the sensation of touch in a computer graphics environment. In this system, the user places his/her hand on an array of pins, which is moved about space on a 6 degree-of-freedom robot arm. The surface of the pins defines a surface in the virtual world. This "virtual hand" can move about the virtual world. When the virtual hand encounters an object in the virtual world, the heights of the pins are adjusted so that they represent the object's shape, surface, and texture. A control system integrates pin and robot arm motions to transmit information about objects in the computer graphics world to the user. It also allows the user to edit, change and move the virtual objects, shapes and textures. This system provides a general framework for touching, manipulating, and modifying objects in a 3-D computer graphics environment, which may be useful in a wide range of applications, including computer games, computer aided design systems, and immersive virtual worlds.
Two schemes for rapid generation of digital video holograms using PC cluster
NASA Astrophysics Data System (ADS)
Park, Hanhoon; Song, Joongseok; Kim, Changseob; Park, Jong-Il
2017-12-01
Computer-generated holography (CGH), which is a process of generating digital holograms, is computationally expensive. Recently, several methods/systems of parallelizing the process using graphic processing units (GPUs) have been proposed. Indeed, use of multiple GPUs or a personal computer (PC) cluster (each PC with GPUs) enabled great improvements in the process speed. However, extant literature has less often explored systems involving rapid generation of multiple digital holograms and specialized systems for rapid generation of a digital video hologram. This study proposes a system that uses a PC cluster and is able to more efficiently generate a video hologram. The proposed system is designed to simultaneously generate multiple frames and accelerate the generation by parallelizing the CGH computations across a number of frames, as opposed to separately generating each individual frame while parallelizing the CGH computations within each frame. The proposed system also enables the subprocesses for generating each frame to execute in parallel through multithreading. With these two schemes, the proposed system significantly reduced the data communication time for generating a digital hologram when compared with that of the state-of-the-art system.
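The key design choice above is to parallelize the CGH workload across whole frames rather than only within each frame. The sketch below illustrates that scheduling idea with Python's multiprocessing standing in for the paper's PC cluster and GPUs; compute_frame_hologram is a hypothetical placeholder for the per-frame CGH kernel.

```python
from multiprocessing import Pool

def compute_frame_hologram(frame_points):
    """Placeholder for per-frame CGH computation (on the real system this runs on a GPU node)."""
    # ... Fresnel accumulation over frame_points would go here ...
    return len(frame_points)  # stand-in result

def generate_video_hologram(frames, workers=4):
    # Frames are independent, so whole frames are farmed out to different workers,
    # instead of parallelizing only the inner per-frame loop.
    with Pool(processes=workers) as pool:
        return pool.map(compute_frame_hologram, frames)

if __name__ == "__main__":
    toy_video = [[(0.0, 0.0, 1.0)] * 100 for _ in range(8)]   # 8 hypothetical point-cloud frames
    print(generate_video_hologram(toy_video))
```

Because frames are independent, this outer level of parallelism scales with the number of nodes without per-frame synchronization, which is the effect the proposed system exploits.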
Kwon, M-W; Kim, S-C; Yoon, S-E; Ho, Y-S; Kim, E-S
2015-02-09
A new object tracking mask-based novel-look-up-table (OTM-NLUT) method is proposed and implemented on graphics-processing-units (GPUs) for real-time generation of holographic videos of three-dimensional (3-D) scenes. Since the proposed method is designed to be matched with software and memory structures of the GPU, the number of compute-unified-device-architecture (CUDA) kernel function calls and the computer-generated hologram (CGH) buffer size of the proposed method have been significantly reduced. It therefore results in a great increase of the computational speed of the proposed method and enables real-time generation of CGH patterns of 3-D scenes. Experimental results show that the proposed method can generate 31.1 frames of Fresnel CGH patterns with 1,920 × 1,080 pixels per second, on average, for three test 3-D video scenarios with 12,666 object points on three GPU boards of NVIDIA GTX TITAN, and confirm the feasibility of the proposed method in the practical application of electro-holographic 3-D displays.
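For context, the sketch below is the brute-force point-source Fresnel accumulation that look-up-table methods such as NLUT accelerate; it uses the usual paraxial zone-plate phase and is a reference baseline, not the paper's OTM-NLUT algorithm or its GPU kernels. The pixel pitch and wavelength are illustrative assumptions.

```python
import numpy as np

def fresnel_cgh(points, width=1920, height=1080, pitch=8e-6, wavelength=532e-9):
    """Direct point-source Fresnel hologram (paraxial approximation), brute force."""
    x = (np.arange(width) - width / 2) * pitch
    y = (np.arange(height) - height / 2) * pitch
    X, Y = np.meshgrid(x, y)
    H = np.zeros((height, width))
    for px, py, pz, amp in points:                          # pz: distance from hologram plane
        r2 = (X - px) ** 2 + (Y - py) ** 2
        H += amp * np.cos(np.pi * r2 / (wavelength * pz))   # Fresnel zone-plate term
    return H

# Hypothetical 3-point object, about 0.5 m from the hologram plane
pattern = fresnel_cgh([(0.0, 0.0, 0.5, 1.0), (1e-3, 0.0, 0.5, 1.0), (0.0, 1e-3, 0.6, 0.5)])
```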
Integrated multisensor perimeter detection systems
NASA Astrophysics Data System (ADS)
Kent, P. J.; Fretwell, P.; Barrett, D. J.; Faulkner, D. A.
2007-10-01
The report describes the results of a multi-year programme of research aimed at the development of an integrated multi-sensor perimeter detection system capable of being deployed at an operational site. The research was driven by end user requirements in protective security, particularly in threat detection and assessment, where effective capability was either not available or prohibitively expensive. Novel video analytics have been designed to provide robust detection of pedestrians in clutter while new radar detection and tracking algorithms provide wide area day/night surveillance. A modular integrated architecture based on commercially available components has been developed. A graphical user interface allows intuitive interaction and visualisation with the sensors. The fusion of video, radar and other sensor data provides the basis of a threat detection capability for real life conditions. The system was designed to be modular and extendable in order to accommodate future and legacy surveillance sensors. The current sensor mix includes stereoscopic video cameras, mmWave ground movement radar, CCTV and a commercially available perimeter detection cable. The paper outlines the development of the system and describes the lessons learnt after deployment in a pilot trial.
Digital Signal Processing For Low Bit Rate TV Image Codecs
NASA Astrophysics Data System (ADS)
Rao, K. R.
1987-06-01
In view of the 56 KBPS digital switched network services and the ISDN, low bit rate codecs for providing real time full motion color video are at various stages of development. Some companies have already brought such codecs to the market. They are being used by industry and some Federal Agencies for video teleconferencing. In general, these codecs have various features such as multiplexing audio and data, high resolution graphics, encryption, error detection and correction, self diagnostics, freeze-frame, split video, text overlay, etc. To transmit the original color video on a 56 KBPS network requires bit rate reduction of the order of 1400:1. Such a large scale bandwidth compression can be realized only by implementing a number of sophisticated digital signal processing techniques. This paper provides an overview of such techniques and outlines the newer concepts that are being investigated. Before resorting to the data compression techniques, various preprocessing operations such as noise filtering, composite-component transformation and horizontal and vertical blanking interval removal are to be implemented. Invariably, spatio-temporal subsampling is achieved by appropriate filtering. Transform and/or prediction coupled with motion estimation and strengthened by adaptive features are some of the tools in the arsenal of the data reduction methods. Other essential blocks in the system are the quantizer, bit allocation, buffer, multiplexer, channel coding, etc.
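To see where a reduction "of the order of 1400:1" comes from, the arithmetic below works backwards from the 56 kb/s channel and compares it with one assumed digitization of the source; the frame size, chroma subsampling, and bit depth are illustrative assumptions, not taken from the paper.

```python
# What source rate does a 1400:1 reduction onto 56 kb/s imply?
channel_bps = 56_000
implied_source_bps = 1400 * channel_bps
print(implied_source_bps / 1e6)           # 78.4 Mb/s

# For comparison, one assumed digitization: 512x480 luma, ~50% extra for subsampled chroma,
# 8 bits per sample, 30 frames/s
assumed_source_bps = 512 * 480 * 1.5 * 8 * 30
print(assumed_source_bps / 1e6, assumed_source_bps / channel_bps)  # ~88.5 Mb/s, ~1580:1
```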
NASA Astrophysics Data System (ADS)
Ismail M., E.; Mahazir I., Irwan; Othman, H.; Amiruddin M., H.; Ariffin, A.
2017-05-01
The rapid development of information technology has given new impetus to the use of computers in education. One increasingly popular approach is multimedia technology, which merges a variety of media such as text, graphics, animation, video and audio controlled by a computer. With this technology, a wide range of multimedia elements can be developed to improve the quality of education. For that reason, this study aims to investigate the use of a multimedia element, in the form of an animated video, developed for the Engineering Drawing subject according to the syllabus of the Vocational College of Malaysia. The study used a survey method with a quantitative approach and involved 30 respondents from among Industrial Machining students. The instrument used was a questionnaire with a reliability coefficient (Cronbach's alpha) of 0.83. Data were collected and analyzed descriptively using SPSS. The study found that the animated-video multimedia element was able to significantly increase students' imagination and visualization. The implications of this study provide information on how the use of multimedia elements affects students' imagination and visualization. In general, these findings contribute to the development of multimedia elements appropriate for enhancing the quality of learning materials for engineering drawing.
Wrist display concept demonstration based on 2-in. color AMOLED
NASA Astrophysics Data System (ADS)
Meyer, Frederick M.; Longo, Sam J.; Hopper, Darrel G.
2004-09-01
The wrist watch needs an upgrade. Recent advances in optoelectronics, microelectronics, and communication theory have established a technology base that now makes the multimedia Dick Tracy watch attainable during the next decade. As a first step towards stuffing the functionality of an entire personal computer (PC) and television receiver under a watch face, we have set a goal of providing wrist video capability to warfighters. Commercial sector work on the wrist form factor already includes all the functionality of a personal digital assistant (PDA) and a full PC operating system. Our strategy is to leverage these commercial developments. In this paper we describe our use of a 2.2 in. diagonal color active-matrix organic light-emitting diode (AMOLED) device as a wrist-mounted display (WMD) to present either full motion video or computer generated graphical image formats.
Video-rate imaging of microcirculation with single-exposure oblique back-illumination microscopy
NASA Astrophysics Data System (ADS)
Ford, Tim N.; Mertz, Jerome
2013-06-01
Oblique back-illumination microscopy (OBM) is a new technique for simultaneous, independent measurements of phase gradients and absorption in thick scattering tissues based on widefield imaging. To date, OBM has been used with sequential camera exposures, which reduces temporal resolution, and can produce motion artifacts in dynamic samples. Here, a variation of OBM that allows single-exposure operation with wavelength multiplexing and image splitting with a Wollaston prism is introduced. Asymmetric anamorphic distortion induced by the prism is characterized and corrected in real time using a graphics-processing unit. To demonstrate the capacity of single-exposure OBM to perform artifact-free imaging of blood flow, video-rate movies of microcirculation in ovo in the chorioallantoic membrane of the developing chick are presented. Imaging is performed with a high-resolution rigid Hopkins lens suitable for endoscopy.
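The abstract does not restate how the two oblique-illumination images are combined. In the original OBM work the phase-gradient image is obtained essentially as their normalized difference, with absorption from their sum; the NumPy sketch below shows that combination under that assumption, with scaling constants omitted and without the prism-distortion correction the paper performs on the GPU.

```python
import numpy as np

def obm_combine(img_left, img_right, eps=1e-6):
    """Combine two oblique-illumination images of the same scene (opposite tilts).

    Returns (phase_gradient, absorption). Scaling to physical units is omitted;
    this is a sketch of the standard OBM combination, not the paper's full pipeline.
    """
    img_left = img_left.astype(float)
    img_right = img_right.astype(float)
    total = img_left + img_right
    phase_gradient = (img_left - img_right) / (total + eps)   # differential (DIC-like) contrast
    absorption = total / 2.0                                   # bright-field-like image
    return phase_gradient, absorption
```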
Study of information transfer optimization for communication satellites
NASA Technical Reports Server (NTRS)
Odenwalder, J. P.; Viterbi, A. J.; Jacobs, I. M.; Heller, J. A.
1973-01-01
The results are presented of a study of source coding, modulation/channel coding, and systems techniques for application to teleconferencing over high data rate digital communication satellite links. Simultaneous transmission of video, voice, data, and/or graphics is possible in various teleconferencing modes and one-way, two-way, and broadcast modes are considered. A satellite channel model including filters, limiter, a TWT, detectors, and an optimized equalizer is treated in detail. A complete analysis is presented for one set of system assumptions which exclude nonlinear gain and phase distortion in the TWT. Modulation, demodulation, and channel coding are considered, based on an additive white Gaussian noise channel model which is an idealization of an equalized channel. Source coding with emphasis on video data compression is reviewed, and the experimental facility utilized to test promising techniques is fully described.
Solid state optical microscope
Young, I.T.
1983-08-09
A solid state optical microscope wherein wide-field and high-resolution images of an object are produced at a rapid rate by utilizing conventional optics with a charge-coupled photodiode array. A galvanometer scanning mirror, for scanning in one of two orthogonal directions is provided, while the charge-coupled photodiode array scans in the other orthogonal direction. Illumination light from the object is incident upon the photodiodes, creating packets of electrons (signals) which are representative of the illuminated object. The signals are then processed, stored in a memory, and finally displayed as a video signal. 2 figs.
Solid-state optical microscope
Young, I.T.
1981-01-07
A solid state optical microscope is described wherein wide-field and high-resolution images of an object are produced at a rapid rate by utilizing conventional optics with a charge-coupled photodiode array. Means for scanning in one of two orthogonal directions are provided, while the charge-coupled photodiode array scans in the other orthogonal direction. Illumination light from the object is incident upon the photodiodes, creating packets of electrons (signals) which are representative of the illuminated object. The signals are then processed, stored in a memory, and finally displayed as a video signal.
Spectral Gaps from Ordered to Disordered Systems.
NASA Astrophysics Data System (ADS)
Lindner, John Florian
As is well known, the allowed energies of periodic electronic systems and the allowed frequencies of periodic elastic systems form banded sets (at least for certain idealized models). Recent work, by Werner Kirsch and others, demonstrates that this band-gap structure persists in disordered versions of these periodic systems. Here, I extend this result by showing that for specific "point" interactions, the spectrum of a generic disordered system is the union of the spectra of all possible pure systems formed from it. This permits the explicit construction of these spectral sets. This result is the outgrowth of a perspective I call "growing disorder." The idea is to evolve, or "grow," an ordered array (whose spectrum is known) into a disordered array (whose spectrum is sought). The trick is to evolve the spectrum along with it. The approach is very visual, lends itself readily to graphical presentation, and accounts in part for the unconventional but appropriate look of this thesis. The unconventional style also reflects an attempt to make the material easily accessible to a physics audience. It is inspired by the way in which physicists informally communicate ideas, namely, with words and pictures in front of a blackboard. Each page, or set of facing pages, of text and graphics is a unit to be assimilated before proceeding onto the next unit. There is, thus, no unique path through the thesis. An intuitive and straightforward approach, constructive proofs, an informal style, and some ingenuity simply communicate the ideas herein. However, the condensation inherent in the graphical presentation demands significant reader engagement!.
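Stated symbolically, one plausible formalization of the result described above (not a verbatim equation from the thesis) is that the disordered spectrum is the closure of the union of the pure-system spectra:

```latex
% One plausible formalization of the statement above, for the specific point interactions considered:
\sigma\!\left(H_{\mathrm{disordered}}\right)
  \;=\;
  \overline{\bigcup_{p \,\in\, \{\text{pure systems formed from it}\}} \sigma\!\left(H_{p}\right)}
```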
GRAPHIC REANALYSIS OF THE TWO NINDS-TPA TRIALS CONFIRMS SUBSTANTIAL TREATMENT BENEFIT
Saver, Jeffrey L.; Gornbein, Jeffrey; Starkman, Sidney
2010-01-01
Background of Comment/Review Multiple statistical analyses of the two NINDS-TPA Trials have confirmed study findings of benefit of fibrinolytic therapy. A recent graphic analysis departed from best practices in the visual display of quantitative information by failing to take into account the skewed functional importance of NIH Stroke Scale raw scores and by scaling change axes at up to twenty times the range achievable by individual patients. Methods Using the publicly available datasets of the 2 NINDS-TPA Trials, we generated a variety of figures appropriate to the characteristics of acute stroke trial data. Results A diverse array of figures all visually delineated substantial benefits of fibrinolytic therapy, including: bar charts of normalized gain and loss; stacked bar, bar, and matrix plots of clinically relevant ordinal ranks; a time series stacked line plot of continuous scale disability weights; and line plot, bubble chart, and person icon array graphs of joint outcome table analysis. The achievable change figure showed substantially greater improvement among TPA than placebo patients, median 66.7% (IQR 0–92.0) vs 50.0% (IQR −7.1 to 80.0), p=0.003. Conclusions On average, patients treated with TPA under 3 hours recovered two-thirds of the way towards fully normal, while placebo patients improved only half of the way. Graphical analyses of the two NINDS-TPA trials, when performed according to best practices, are a useful means of conveying details about patient response to therapy not fully delineated by summary statistics, and confirm a valuable treatment benefit of under-3-hour fibrinolytic therapy in acute stroke. PMID:20829518
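One plausible reading of the "achievable change" measure behind the quoted medians (an interpretation consistent with "recovered two-thirds of the way towards fully normal", not an equation taken from the paper), using the NIH Stroke Scale where 0 is fully normal:

```latex
% Percent of achievable improvement per patient (negative values indicate worsening):
\text{percent of achievable improvement}
  \;=\;
  \frac{\mathrm{NIHSS}_{\text{baseline}} - \mathrm{NIHSS}_{\text{final}}}
       {\mathrm{NIHSS}_{\text{baseline}}} \times 100\%
```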
Using a Model of Analysts' Judgments to Augment an Item Calibration Process
ERIC Educational Resources Information Center
Hauser, Carl; Thum, Yeow Meng; He, Wei; Ma, Lingling
2015-01-01
When conducting item reviews, analysts evaluate an array of statistical and graphical information to assess the fit of a field test (FT) item to an item response theory model. The process can be tedious, particularly when the number of human reviews (HR) to be completed is large. Furthermore, such a process leads to decisions that are susceptible…
GPU-Based Real-Time Volumetric Ultrasound Image Reconstruction for a Ring Array
Choe, Jung Woo; Nikoozadeh, Amin; Oralkan, Ömer; Khuri-Yakub, Butrus T.
2014-01-01
Synthetic phased array (SPA) beamforming with Hadamard coding and aperture weighting is an optimal option for real-time volumetric imaging with a ring array, a particularly attractive geometry in intracardiac and intravascular applications. However, the imaging frame rate of this method is limited by the immense computational load required in synthetic beamforming. For fast imaging with a ring array, we developed graphics processing unit (GPU)-based, real-time image reconstruction software that exploits massive data-level parallelism in beamforming operations. The GPU-based software reconstructs and displays three cross-sectional images at 45 frames per second (fps). This frame rate is 4.5 times higher than that for our previously-developed multi-core CPU-based software. In an alternative imaging mode, it shows one B-mode image rotating about the axis and its maximum intensity projection (MIP), processed at a rate of 104 fps. This paper describes the image reconstruction procedure on the GPU platform and presents the experimental images obtained using this software. PMID:23529080
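As context for the computational load being offloaded to the GPU, the sketch below is a brute-force synthetic-aperture delay-and-sum beamformer for a single image point; it omits the Hadamard decoding, aperture weighting, and interpolation used in the actual system and is not the authors' implementation. The array geometry, sampling rate, and data are hypothetical.

```python
import numpy as np

def das_pixel(rf, elem_xyz, pixel_xyz, fs, c=1540.0):
    """Delay-and-sum value of one pixel from full synthetic-aperture data.

    rf[i, j, :] is the received waveform for transmit element i and receive element j,
    sampled at fs; elem_xyz[k] is the position of element k; c is the speed of sound.
    """
    n_elem = elem_xyz.shape[0]
    dist = np.linalg.norm(elem_xyz - pixel_xyz, axis=1)       # element-to-pixel distances
    value = 0.0
    for i in range(n_elem):
        for j in range(n_elem):
            t = (dist[i] + dist[j]) / c                        # round-trip delay
            k = int(round(t * fs))
            if k < rf.shape[2]:
                value += rf[i, j, k]
    return value

# Toy usage with random data (hypothetical 16-element ring, 40 MHz sampling)
rng = np.random.default_rng(0)
angles = np.linspace(0, 2 * np.pi, 16, endpoint=False)
elems = np.stack([5e-3 * np.cos(angles), 5e-3 * np.sin(angles), np.zeros(16)], axis=1)
rf = rng.standard_normal((16, 16, 2048))
print(das_pixel(rf, elems, np.array([0.0, 0.0, 10e-3]), fs=40e6))
```

Every pixel repeats this double loop over transmit and receive elements, which is why the per-frame cost is so high and why mapping it onto many GPU threads pays off.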
Improving Performance and Predictability of Storage Arrays
ERIC Educational Resources Information Center
Altiparmak, Nihat
2013-01-01
Massive amount of data is generated everyday through sensors, Internet transactions, social networks, video, and all other digital sources available. Many organizations store this data to enable breakthrough discoveries and innovation in science, engineering, medicine, and commerce. Such massive scale of data poses new research problems called big…
Risk Management Collaboration through Sharing Interactive Graphics
NASA Astrophysics Data System (ADS)
Slingsby, Aidan; Dykes, Jason; Wood, Jo; Foote, Matthew
2010-05-01
Risk management involves the cooperation of scientists, underwriters and actuaries, all of whom analyse data to support decision-making. Results are often disseminated through static documents with graphics that convey the message the analyst wishes to communicate. Interactive graphics are an increasingly popular means of communicating the results of data analyses because they enable other parties to explore and visually analyse some of the data themselves prior to and during discussion. Discussion around interactive graphics can occur synchronously in face-to-face meetings or with video-conferencing and screen sharing, or it can occur asynchronously through web sites such as ManyEyes, web-based fora, blogs, wikis and email. A limitation of approaches that do not involve screen sharing is the difficulty of sharing the insights gained from interacting with the graphic. Static images can be shared, but these cannot themselves be interacted with, producing a discussion bottleneck (Baker, 2008). We address this limitation by allowing the state and configuration of graphics to be shared (rather than static images) so that a user can reproduce someone else's graphic, interact with it, and then share the results of this accompanied by some commentary. HiVE (Slingsby et al, 2009) is a compact and intuitive text-based language that has been designed for this purpose. We will describe the vizTweets project (a 9-month project funded by JISC) in which we are applying these principles to insurance risk management in the context of the Willis Research Network, the world's largest collaboration between the insurance industry and academia. The project aims to extend HiVE to meet the needs of the sector, to design and implement freely available web services and tools, and to provide case studies. We will present a case study that demonstrates the potential of this approach for collaboration within the Willis Research Network. Baker, D. Towards Transparency in Visualisation Based Research. AHRC ICT Methods Network Expert Workshop. Available at http://www.viznet.ac.uk/documents Slingsby, A., Dykes, J. and Wood, J. 2009. Configuring Hierarchical Layouts to Address Research Questions. IEEE Transactions on Visualization and Computer Graphics 15 (6), Nov-Dec 2009, pp. 977-984.
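As a loose illustration of sharing graphic state rather than static images, the sketch below serialises a view configuration to compact text and rebuilds it on the recipient's side; the JSON format and field names are invented for illustration and are not HiVE syntax.

```python
# Illustrative sketch (not HiVE syntax): sharing the *state* of a graphic as a
# compact text description so a collaborator can reproduce and keep exploring it.
import json

def export_state(chart_type, layout, variables, filters, comment=""):
    """Serialise a view configuration instead of a static image."""
    return json.dumps({
        "chart": chart_type, "layout": layout,
        "variables": variables, "filters": filters, "comment": comment,
    }, sort_keys=True)

def import_state(text):
    """Rebuild the configuration on the recipient's side for further interaction."""
    return json.loads(text)

shared = export_state("treemap", layout=["region", "peril"],
                      variables=["expected_loss"], filters={"year": 2009},
                      comment="Losses concentrate in two regions; drill into peril next.")
print(import_state(shared)["comment"])
```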
An Analysis of Mimosa pudica Leaves Movement by Using LoggerPro Software
NASA Astrophysics Data System (ADS)
Sugito; Susilo; Handayani, L.; Marwoto, P.
2016-08-01
The unique phenomena of Mimosa pudica are the closing and opening movements of its leaves when they receive a stimulus. Using suitable software, these movements can be plotted as a graph that can be analysed. LoggerPro provides the facilities needed to analyse recorded videos of the plant's reaction to a stimulus. Then, through the resulting graph, analysis of some variables can be carried out. The result showed that the plant's movement fits an equation of the form y = mx + c.
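A minimal sketch of the final fitting step is shown below: a straight-line fit y = mx + c to position-versus-time points of the kind exported from video tracking; the numbers are made up for illustration.

```python
# Minimal sketch of the fitting step: a straight-line fit y = mx + c to
# position-versus-time points exported from video tracking (values are made up).
import numpy as np

t = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5])        # time (s)
y = np.array([2.10, 1.85, 1.62, 1.37, 1.11, 0.88])  # leaflet tip position (cm)

m, c = np.polyfit(t, y, deg=1)                      # slope and intercept
print(f"y = {m:.3f} t + {c:.3f}")                   # closing speed ~ |m| cm/s
```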
NASA Technical Reports Server (NTRS)
1995-01-01
In this educational video series, 'Liftoff to Learning', astronauts from the STS-37 Space Shuttle Mission (Jay Apt, Jerry Ross, Ken Cameron, Steve Nagel, and Linda Godwin) show what EVA (extravehicular activity) means, talk about the history and design of the space suits and why they are designed the way they are, describe different ways they are used (payload work, testing and maintenance of equipment, space environment experiments) in EVA work, and briefly discuss the future applications of the space suits. Computer graphics and animation are included.
A prototype expert/information system for examining environmental risks of KSC activities
NASA Technical Reports Server (NTRS)
Engel, Bernard A.
1993-01-01
Protection of the environment and natural resources at the Kennedy Space Center (KSC) is of great concern. An expert/information system to replace the paper-based KSC Environmental Checklist was developed. The computer-based system requests information only as required and supplies assistance as needed. The most comprehensive portion of the system provides information about endangered species habitat at KSC. This module uses geographic information system (GIS) data and tools, expert rules, color graphics, computer-based video, and hypertext to provide information.
Integrating Thematic Web Portal Capabilities into the NASA Earthdata Web Infrastructure
NASA Technical Reports Server (NTRS)
Wong, Minnie; Baynes, Kathleen E.; Huang, Thomas; McLaughlin, Brett
2015-01-01
This poster will present the process of integrating thematic web portal capabilities into the NASA Earthdata web infrastructure, with examples from the Sea Level Change Portal. The Sea Level Change Portal will be a source of current NASA research, data and information regarding sea level change. The portal will provide sea level change information through articles, graphics, videos and animations, an interactive tool to view and access sea level change data, and a dashboard showing sea level change indicators.
Mars Observer Mission: Mapping the Martian World
NASA Technical Reports Server (NTRS)
1992-01-01
The 1992 Mars Observer Mission is highlighted in this video overview of the mission objectives and planning. Using previous photography and computer graphics and simulation, the main objectives of the 687 day (one Martian year) consecutive orbit by the Mars Observer Satellite around Mars are explained. Dr. Arden Albee, the project scientist, speaks about the pole-to-pole mapping of the Martian surface topography, the planned relief maps, the chemical and mineral composition analysis, the gravity fields analysis, and the proposed search for any Mars magnetic fields.
Computer Program Development Specification for Tactical Interface System.
1981-07-31
The Tactical Interface System (TIS) connects a card reader, twelve VT100 video terminals, six LA120 hardcopy terminals, and a vector graphics terminal, with disk storage. The ICEHANDL routine loads data into the TIS parameter tables in the TISGBL common area and sends test interface data (ICE) to PSS in one of two modes, one of them periodic. Authorized TCL commands control the TIS software; a STOP command causes the TIS software to exit.
Augmented Computer Mouse Would Measure Applied Force
NASA Technical Reports Server (NTRS)
Li, Larry C. H.
1993-01-01
Proposed computer mouse measures force of contact applied by user. Adds another dimension to two-dimensional-position-measuring capability of conventional computer mouse; force measurement designated to represent any desired continuously variable function of time and position, such as control force, acceleration, velocity, or position along axis perpendicular to computer video display. Proposed mouse enhances sense of realism and intuition in interaction between operator and computer. Useful in such applications as three-dimensional computer graphics, computer games, and mathematical modeling of dynamics.
Information visualization: Beyond traditional engineering
NASA Technical Reports Server (NTRS)
Thomas, James J.
1995-01-01
This presentation addresses a different aspect of the human-computer interface; specifically, the human-information interface. This interface will be dominated by an emerging technology called Information Visualization (IV). IV goes beyond the traditional views of computer graphics and CAD, and enables new approaches for engineering. IV specifically must visualize text, documents, sound, images, and video in such a way that the human can rapidly interact with and understand the content structure of information entities. IV is the interactive visual interface between humans and their information resources.
USGS Scientific Visualization Laboratory
1995-01-01
The U.S. Geological Survey's (USGS) Scientific Visualization Laboratory at the National Center in Reston, Va., provides a central facility where USGS employees can use state-of-the-art equipment for projects ranging from presentation graphics preparation to complex visual representations of scientific data. Equipment including color printers, black-and-white and color scanners, film recorders, video equipment, and DOS, Apple Macintosh, and UNIX platforms with software are available for both technical and nontechnical users. The laboratory staff provides assistance and demonstrations in the use of the hardware and software products.
Tactical 3D Model Generation using Structure-From-Motion on Video from Unmanned Systems
2015-04-01
The system uses an available SfM application known as VisualSFM, an end-user, "off-the-shelf" implementation of SfM that is easy to configure and is used for most 3D model generation applications from imagery. While the usual interface to VisualSFM is its graphical user interface (GUI), it is used here as a component of the authors' system. Two types of 3D model generation are available within VisualSFM: sparse and dense reconstruction.
DOE Office of Scientific and Technical Information (OSTI.GOV)
This software is an iOS (Apple) Augmented Reality (AR) application that runs on the iPhone and iPad. It is designed to scan in a photograph or graphic and "play" an associated video. This release, SNLSimMagic, was built using Wikitude Augmented Reality (AR) software development kit (SDK) integrated into Apple iOS SDK application and the Cordova libraries. These codes enable the generation of runtime targets using cloud recognition and developer-defined target features which are then accessed by means of a custom application.
NASA Astrophysics Data System (ADS)
Russkova, Tatiana V.
2017-11-01
One tool to improve the performance of Monte Carlo methods for numerical simulation of light transport in the Earth's atmosphere is parallel computing technology. A new algorithm oriented to parallel execution on the CUDA-enabled NVIDIA graphics processor is discussed. The efficiency of parallelization is analyzed on the basis of calculating the upward and downward fluxes of solar radiation in both vertically homogeneous and inhomogeneous models of the atmosphere. The results of testing the new code under various atmospheric conditions, including continuous single-layered and multilayered clouds and selective molecular absorption, are presented. The results of testing the code using video cards with different compute capability are analyzed. It is shown that the changeover of computing from conventional PCs to the architecture of graphics processors gives more than a hundredfold increase in performance and fully reveals the capabilities of the technology used.
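For readers unfamiliar with the underlying method, the toy sketch below estimates upward and downward fluxes for a single scattering layer with a photon random walk; the optical depth, single-scattering albedo, and isotropic phase function are illustrative assumptions, and the CUDA parallelisation itself is not reproduced.

```python
# Toy Monte Carlo estimate of upward/downward flux through one scattering layer.
# Optical depth, single-scattering albedo, and isotropic scattering are
# illustrative assumptions; the paper's CUDA code and atmospheric model are not
# reproduced here.
import numpy as np

def mc_fluxes(n_photons=100_000, tau_star=1.0, omega0=0.9, seed=0):
    rng = np.random.default_rng(seed)
    up = down = 0
    for _ in range(n_photons):
        tau, mu = 0.0, 1.0                        # start at top, heading down (tau increases downward)
        while True:
            tau += mu * -np.log(rng.random())     # optical path to next interaction
            if tau < 0.0:
                up += 1; break                    # escaped through the top (reflected)
            if tau > tau_star:
                down += 1; break                  # escaped through the bottom (transmitted)
            if rng.random() > omega0:
                break                             # absorbed
            mu = rng.uniform(-1.0, 1.0)           # isotropic scatter: new direction cosine
    return up / n_photons, down / n_photons

print(mc_fluxes(20_000))
```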
Advanced Architectures for Astrophysical Supercomputing
NASA Astrophysics Data System (ADS)
Barsdell, B. R.; Barnes, D. G.; Fluke, C. J.
2010-12-01
Astronomers have come to rely on the increasing performance of computers to reduce, analyze, simulate and visualize their data. In this environment, faster computation can mean more science outcomes or the opening up of new parameter spaces for investigation. If we are to avoid major issues when implementing codes on advanced architectures, it is important that we have a solid understanding of our algorithms. A recent addition to the high-performance computing scene that highlights this point is the graphics processing unit (GPU). The hardware originally designed for speeding-up graphics rendering in video games is now achieving speed-ups of O(100×) in general-purpose computation - performance that cannot be ignored. We are using a generalized approach, based on the analysis of astronomy algorithms, to identify the optimal problem-types and techniques for taking advantage of both current GPU hardware and future developments in computing architectures.
Forman, Bruce H.; Eccles, Randy; Piggins, Judith; Raila, Wayne; Estey, Greg; Barnett, G. Octo
1990-01-01
We have developed a visually oriented, computer-controlled learning environment designed for use by students of gross anatomy. The goals of this module are to reinforce the concepts of organ relationships and topography by using computed axial tomographic (CAT) images accessed from a videodisc integrated with color graphics and to introduce students to cross-sectional radiographic anatomy. We chose to build the program around CAT scan images because they not only provide excellent structural detail but also offer an anatomic orientation (transverse) that complements that used in the dissection laboratory (basically a layer-by-layer, anterior-to-posterior, or coronal approach). Our system, built using a Microsoft Windows-386 based authoring environment which we designed and implemented, integrates text, video images, and graphics into a single screen display. The program allows both user browsing of information, facilitated by hypertext links, and didactic sessions including mini-quizzes for self-assessment.
Animated graphics for comparing two risks: a cautionary tale.
Zikmund-Fisher, Brian J; Witteman, Holly O; Fuhrel-Forbis, Andrea; Exe, Nicole L; Kahn, Valerie C; Dickson, Mark
2012-07-25
The increasing use of computer-administered risk communications affords the potential to replace static risk graphics with animations that use motion cues to reinforce key risk messages. Research on the use of animated graphics, however, has yielded mixed findings, and little research exists to identify the specific animations that might improve risk knowledge and patients' decision making. To test whether viewing animated forms of standard pictograph (icon array) risk graphics displaying risks of side effects would improve people's ability to select the treatment with the lowest risk profile, as compared with viewing static images of the same risks. A total of 4198 members of a demographically diverse Internet panel read a scenario about two hypothetical treatments for thyroid cancer. Each treatment was described as equally effective but varied in side effects (with one option slightly better than the other). Participants were randomly assigned to receive all risk information in 1 of 10 pictograph formats in a quasi-factorial design. We compared a control condition of static grouped icons with a static scattered icon display and with 8 Flash-based animated versions that incorporated different combinations of (1) building the risk 1 icon at a time, (2) having scattered risk icons settle into a group, or (3) having scattered risk icons shuffle themselves (either automatically or by user control). We assessed participants' ability to choose the better treatment (choice accuracy), their gist knowledge of side effects (knowledge accuracy), and their graph evaluation ratings, controlling for subjective numeracy and need for cognition. When compared against static grouped-icon arrays, no animations significantly improved any outcomes, and most showed significant performance degradations. However, participants who received animations of grouped icons in which at-risk icons appeared 1 at a time performed as well on all outcomes as the static grouped-icon control group. Displays with scattered icons (static or animated) performed particularly poorly unless they included the settle animation that allowed users to view event icons grouped. Many combinations of animation, especially those with scattered icons that shuffle randomly, appear to inhibit knowledge accuracy in this context. Static pictographs that group risk icons, however, perform very well on measures of knowledge and choice accuracy. These findings parallel recent evidence in other data communication contexts that less can be more-that is, that simpler, more focused information presentation can result in improved understanding. Decision aid designers and health educators should proceed with caution when considering the use of animated risk graphics to compare two risks, given that evidence-based, static risk graphics appear optimal.
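A sketch of the display type that performed best in this study, a static grouped icon array, is given below; grid size, colours, and wording are arbitrary choices for illustration, not the study's stimuli.

```python
# Hedged sketch of a static grouped icon array (pictograph) in which at-risk
# icons are clustered together rather than scattered.
import matplotlib.pyplot as plt

def icon_array(n_at_risk, n_total=100, cols=10, ax=None):
    ax = ax or plt.gca()
    for i in range(n_total):
        row, col = divmod(i, cols)
        color = "crimson" if i < n_at_risk else "lightgray"   # grouped, not scattered
        ax.add_patch(plt.Circle((col, -row), 0.4, color=color))
    ax.set_xlim(-1, cols); ax.set_ylim(-n_total // cols, 1)
    ax.set_aspect("equal"); ax.axis("off")
    ax.set_title(f"{n_at_risk} in {n_total} experience the side effect")

icon_array(17)
plt.show()
```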
Paper-based immune-affinity arrays for detection of multiple mycotoxins in cereals.
Li, Li; Chen, Hongpu; Lv, Xiaolan; Wang, Min; Jiang, Xizhi; Jiang, Yifei; Wang, Heye; Zhao, Yongfu; Xia, Liru
2018-03-01
Mycotoxins produced by different species of fungi may coexist in cereals and feedstuffs, and could be highly toxic for humans and animals. For quantification of multiple mycotoxins in cereals, we developed a paper-based mycotoxin immune-affinity array. First, paper-based microzone arrays were fabricated by photolithography. Then, monoclonal mycotoxin antibodies were added in a copolymerization reaction with a cross-linker to form an immune-affinity monolith on the paper-based microzone array. With use of a competitive immune-response format, paper-based mycotoxin immune-affinity arrays were successfully applied to detect mycotoxins in samples. The detection limits for deoxynivalenol, zearalenone, T-2 toxin, and HT-2 toxin were 62.7, 10.8, 0.36, and 0.23 μg·kg⁻¹, respectively, which meet relevant requirements for these compounds in food. The recovery rates were 81-86% for deoxynivalenol, 89-117% for zearalenone, 79-86% for T-2 toxin, and 78-83% for HT-2 toxin, and showed that the paper-based immune-affinity arrays had good reproducibility. In summary, the paper-based mycotoxin immune-affinity array provides a sensitive, rapid, accurate, stable, and convenient platform for detection of multiple mycotoxins in agro-foods. Graphical abstract Paper-based immune-affinity monolithic array. DON deoxynivalenol, HT-2 HT-2 toxin, T-2 T-2 toxin, PEGDA polyethylene glycol diacrylate, ZEN zearalenone.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jinlong, Lv, E-mail: ljltsinghua@126.com; State Key Lab of New Ceramic and Fine Processing, Tsinghua University, Beijing 100084; Tongxiang, Liang, E-mail: txliang@mail.tsinghua.edu.cn
The effects of urea concentration on the microstructure of Ni₃S₂ formed on nickel foam, and on its hydrogen evolution reaction (HER) activity, were investigated. Porous Ni₃S₂ nanosheets formed on nickel foam during the hydrothermal process at low urea concentration, while high urea concentration facilitated the formation of Ni₃S₂ nanotube arrays. The resulting Ni₃S₂ nanotube arrays exhibited higher catalytic activity than Ni₃S₂ nanosheets for the hydrogen evolution reaction, mainly because the nanotube arrays facilitated diffusion of electrolyte. - Graphical abstract: The resulting Ni₃S₂ nanotube arrays exhibited higher catalytic activity than Ni₃S₂ nanosheets for the hydrogen evolution reaction, mainly attributed to facilitated diffusion of electrolyte and hydrogen evolution. - Highlights: • Urea promoted the formation of more Ni₃S₂ nanotube arrays on nickel foam. • Ni₃S₂ nanotube arrays showed higher catalytic activity in alkaline solution. • Ni₃S₂ nanotube arrays promoted electron transport and reaction during the HER.
Zhong, Xianhua; Li, Dan; Du, Wei; Yan, Mengqiu; Wang, You; Huo, Danqun; Hou, Changjun
2018-06-01
Volatile organic compounds (VOCs) in breath can be used as biomarkers to identify early stages of lung cancer. Herein, we report a disposable colorimetric array that has been constructed from diverse chemo-responsive colorants. Distinguishable difference maps were plotted within 4 min for specifically targeted VOCs. Through the consideration of various chemical interactions with VOCs, the arrays successfully discriminate between 20 different volatile organic compounds in breath that are related to lung cancer. VOCs were identified either with the visualized difference maps or through pattern recognition with an accuracy of at least 90%. No uncertainties or errors were observed in the hierarchical cluster analysis (HCA). Finally, good reproducibility and stability of the array was achieved against changes in humidity. Generally, this work provides fundamental support for construction of simple and rapid VOC sensors. More importantly, this approach provides a hypothesis-free array method for breath testing via VOC profiling. Therefore, this small, rapid, non-invasive, inexpensive, and visualized sensor array is a powerful and promising tool for early screening of lung cancer. Graphical abstract A disposable colorimetric array has been developed with broadly chemo-responsive dyes to incorporate various chemical interactions, through which the arrays successfully discriminate 20 VOCs that are related to lung cancer via difference maps alone or chemometrics within 4 min. The hydrophobic porous matrix provides good stability against changes in humidity.
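The pattern-recognition step can be illustrated with hierarchical cluster analysis on per-spot colour-difference vectors, as sketched below; the data are random stand-ins rather than measurements from the reported array.

```python
# Illustrative sketch of the pattern-recognition step: hierarchical cluster
# analysis (HCA) on color-difference vectors (ΔR, ΔG, ΔB per spot). The data
# here are random stand-ins, not measurements from the reported array.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

rng = np.random.default_rng(1)
n_spots = 36
# Three VOC classes, five replicates each, as (n_samples, 3 * n_spots) difference maps.
profiles = np.vstack([rng.normal(loc=mu, scale=0.05, size=(5, 3 * n_spots))
                      for mu in (0.1, 0.4, 0.8)])

Z = linkage(pdist(profiles, metric="euclidean"), method="ward")
labels = fcluster(Z, t=3, criterion="maxclust")
print(labels)   # replicates of the same VOC should share a cluster label
```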
PQSM-based RR and NR video quality metrics
NASA Astrophysics Data System (ADS)
Lu, Zhongkang; Lin, Weisi; Ong, Eeping; Yang, Xiaokang; Yao, Susu
2003-06-01
This paper presents a new and general concept, PQSM (Perceptual Quality Significance Map), to be used in measuring the visual distortion. It makes use of the selectivity characteristic of HVS (Human Visual System) that it pays more attention to certain area/regions of visual signal due to one or more of the following factors: salient features in image/video, cues from domain knowledge, and association of other media (e.g., speech or audio). PQSM is an array whose elements represent the relative perceptual-quality significance levels for the corresponding area/regions for images or video. Due to its generality, PQSM can be incorporated into any visual distortion metrics: to improve effectiveness or/and efficiency of perceptual metrics; or even to enhance a PSNR-based metric. A three-stage PQSM estimation method is also proposed in this paper, with an implementation of motion, texture, luminance, skin-color and face mapping. Experimental results show the scheme can improve the performance of current image/video distortion metrics.
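The core idea, weighting a per-pixel distortion map by the PQSM before pooling, can be sketched as below; the placeholder PQSM and the PSNR-style pooling are illustrative and do not reproduce the paper's three-stage estimation.

```python
# Minimal sketch: weight a per-pixel distortion map by a perceptual-quality
# significance map (PQSM) before pooling. The PQSM here is a placeholder array.
import numpy as np

def pqsm_weighted_psnr(ref, test, pqsm, peak=255.0):
    """ref, test: images as float arrays; pqsm: nonnegative significance weights."""
    err = (ref.astype(float) - test.astype(float)) ** 2
    wmse = np.sum(pqsm * err) / np.sum(pqsm)
    return 10.0 * np.log10(peak ** 2 / wmse)

ref = np.full((64, 64), 128.0)
test = ref + np.random.default_rng(0).normal(0, 5, ref.shape)
pqsm = np.ones_like(ref); pqsm[16:48, 16:48] = 4.0   # e.g. a face/skin region weighted higher
print(round(pqsm_weighted_psnr(ref, test, pqsm), 2))
```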
Coupled auralization and virtual video for immersive multimedia displays
NASA Astrophysics Data System (ADS)
Henderson, Paul D.; Torres, Rendell R.; Shimizu, Yasushi; Radke, Richard; Lonsway, Brian
2003-04-01
The implementation of maximally-immersive interactive multimedia in exhibit spaces requires not only the presentation of realistic visual imagery but also the creation of a perceptually accurate aural experience. While conventional implementations treat audio and video problems as essentially independent, this research seeks to couple the visual sensory information with dynamic auralization in order to enhance perceptual accuracy. An implemented system has been developed for integrating accurate auralizations with virtual video techniques for both interactive presentation and multi-way communication. The current system utilizes a multi-channel loudspeaker array and real-time signal processing techniques for synthesizing the direct sound, early reflections, and reverberant field excited by a moving sound source whose path may be interactively defined in real-time or derived from coupled video tracking data. In this implementation, any virtual acoustic environment may be synthesized and presented in a perceptually-accurate fashion to many participants over a large listening and viewing area. Subject tests support the hypothesis that the cross-modal coupling of aural and visual displays significantly affects perceptual localization accuracy.
Exploiting graphics processing units for computational biology and bioinformatics.
Payne, Joshua L; Sinnott-Armstrong, Nicholas A; Moore, Jason H
2010-09-01
Advances in the video gaming industry have led to the production of low-cost, high-performance graphics processing units (GPUs) that possess more memory bandwidth and computational capability than central processing units (CPUs), the standard workhorses of scientific computing. With the recent release of general-purpose GPUs and NVIDIA's GPU programming language, CUDA, graphics engines are being adopted widely in scientific computing applications, particularly in the fields of computational biology and bioinformatics. The goal of this article is to concisely present an introduction to GPU hardware and programming, aimed at the computational biologist or bioinformaticist. To this end, we discuss the primary differences between GPU and CPU architecture, introduce the basics of the CUDA programming language, and discuss important CUDA programming practices, such as the proper use of coalesced reads, data types, and memory hierarchies. We highlight each of these topics in the context of computing the all-pairs distance between instances in a dataset, a common procedure in numerous disciplines of scientific computing. We conclude with a runtime analysis of the GPU and CPU implementations of the all-pairs distance calculation. We show our final GPU implementation to outperform the CPU implementation by a factor of 1700.
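The article's worked example, the all-pairs distance matrix, can be written compactly on the CPU as below; the vectorised form mirrors the per-pair parallelism a CUDA kernel would exploit, but this is not the authors' implementation.

```python
# CPU reference for the all-pairs Euclidean distance matrix. Each (i, j) entry is
# independent, which is the parallelism a GPU kernel would assign one thread per pair.
import numpy as np

def all_pairs_distance(X):
    """X: (n_instances, n_features). Returns the (n, n) Euclidean distance matrix."""
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T      # ||xi||^2 + ||xj||^2 - 2 xi.xj
    return np.sqrt(np.maximum(d2, 0.0))                 # clamp tiny negatives from rounding

X = np.random.default_rng(0).normal(size=(1000, 64))
D = all_pairs_distance(X)
print(D.shape, D[0, :3])
```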
Nurses' perceptions and problems in the usability of a medication safety app.
Ankem, Kalyani; Cho, Sookyung; Simpson, Diana
2017-10-16
The majority of medication apps support medication adherence. Equally, if not more important, is medication safety. Few apps report on medication safety, and fewer studies have been conducted with these apps. The usability of a medication safety app was tested with nurses to reveal their perceptions of the graphical user interface and to discover problems they encountered in using the app. Usability testing of the app was conducted with RN-BSN students and informatics students (n = 18). Perceptions of the graphical components were gathered in pretest and posttest questionnaires, and video recordings of the usability testing were transcribed. The significance of the difference in mean performance time for 8 tasks was tested, and qualitative analysis was deployed to identify problems encountered and to rate the severity of each problem. While all participants perceived the graphical user interface as easy to understand, nurses took significantly more time to complete certain tasks. More nurses found the medication app to be lacking in intuitiveness of user interface design, in capability to match real-world data, and in providing optimal information architecture. To successfully integrate mobile devices in healthcare, developers must address the problems that nurses encountered in use of the app.
A prototype of a beam steering assistant tool for accelerator operations
DOE Office of Scientific and Technical Information (OSTI.GOV)
M. Bickley; P. Chevtsov
2006-10-24
The CEBAF accelerator provides nuclear physics experiments at Jefferson Lab with high quality electron beams. Three experimental end stations can simultaneously receive the beams with different energies and intensities. For each operational mode, the accelerator setup procedures are complicated and require very careful checking of beam spot sizes and positions on multiple beam viewers. To simplify these procedures and make them reproducible, a beam steering assistant GUI tool has been created. The tool is implemented as a multi-window control screen. The screen has an interactive graphical object window, which is an overlay on top of a digitized live video image from a beam viewer. It allows a user to easily create and edit any graphical objects consisting of text, ellipses, and lines, right above the live beam viewer image and then save them in a file that is called a beam steering template. The template can show, for example, the area within which the beam must always be on the viewer. Later, this template can be loaded in the interactive graphical object window to help accelerator operators steer the beam to the specified area on the viewer.
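The template mechanism can be pictured as saving the overlay objects to a file and reloading them later, as in the sketch below; the JSON layout and field names are assumptions for illustration, not the tool's actual file format.

```python
# Hedged sketch of the template idea: graphical objects drawn over the live
# viewer image are saved to a file and reloaded later. The JSON format and
# field names are illustrative assumptions, not the tool's actual file layout.
import json

def save_template(path, objects):
    """objects: list of overlay primitives (text, ellipses, lines) in viewer pixels."""
    with open(path, "w") as f:
        json.dump({"viewer": "assumed-viewer-name", "objects": objects}, f, indent=2)

def load_template(path):
    with open(path) as f:
        return json.load(f)["objects"]

save_template("steering_template.json", [
    {"type": "ellipse", "center": [320, 240], "axes": [40, 25],
     "label": "keep beam inside this area"},
    {"type": "text", "position": [20, 20], "text": "setup note for this viewer"},
])
print(load_template("steering_template.json")[0]["label"])
```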
NASA Astrophysics Data System (ADS)
Urias, Adrian R.; Draghic, Nicole; Lui, Janet; Cho, Angie; Curtis, Calvin; Espinosa, Joseluis; Wottawa, Christopher; Wiesmann, William P.; Schwamm, Lee H.
2005-04-01
Stroke remains the third most frequent cause of death in the United States and the leading cause of disability in adults. Long-term effects of ischemic stroke can be mitigated by the opportune administration of Tissue Plasminogen Activator (t-PA); however, the decision regarding the appropriate use of this therapy is dependent on timely, effective neurological assessment by a trained specialist. The lack of available stroke expertise is a key barrier preventing frequent use of t-PA. We report here on the development of a prototype research system capable of performing a semi-automated neurological examination from an offsite location via the Internet and a Computed Tomography (CT) scanner to facilitate the diagnosis and treatment of acute stroke. The Video Stroke Assessment (VSA) System consists of a video camera, a camera mounting frame, and a computer with software and algorithms to collect, interpret, and store patient neurological responses to stimuli. The video camera is mounted on a mobility track in front of the patient; camera direction and zoom are remotely controlled on a graphical user interface (GUI) by the specialist. The VSA System also performs a partially-autonomous examination based on the NIH Stroke Scale (NIHSS). Various response data indicative of stroke are recorded, analyzed and transmitted in real time to the specialist. The VSA provides unbiased, quantitative results for most categories of the NIHSS along with video and audio playback to assist in accurate diagnosis. The system archives the complete exam and results.
Joint modality fusion and temporal context exploitation for semantic video analysis
NASA Astrophysics Data System (ADS)
Papadopoulos, Georgios Th; Mezaris, Vasileios; Kompatsiaris, Ioannis; Strintzis, Michael G.
2011-12-01
In this paper, a multi-modal context-aware approach to semantic video analysis is presented. Overall, the examined video sequence is initially segmented into shots and for every resulting shot appropriate color, motion and audio features are extracted. Then, Hidden Markov Models (HMMs) are employed for performing an initial association of each shot with the semantic classes that are of interest separately for each modality. Subsequently, a graphical modeling-based approach is proposed for jointly performing modality fusion and temporal context exploitation. Novelties of this work include the combined use of contextual information and multi-modal fusion, and the development of a new representation for providing motion distribution information to HMMs. Specifically, an integrated Bayesian Network is introduced for simultaneously performing information fusion of the individual modality analysis results and exploitation of temporal context, contrary to the usual practice of performing each task separately. Contextual information is in the form of temporal relations among the supported classes. Additionally, a new computationally efficient method for providing motion energy distribution-related information to HMMs, which supports the incorporation of motion characteristics from previous frames to the currently examined one, is presented. The final outcome of this overall video analysis framework is the association of a semantic class with every shot. Experimental results as well as comparative evaluation from the application of the proposed approach to four datasets belonging to the domains of tennis, news and volleyball broadcast video are presented.
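A much-simplified stand-in for the fusion step is sketched below: per-shot class scores from each modality are combined with weights and then smoothed with class-transition probabilities; this late-fusion toy is not the paper's integrated Bayesian Network.

```python
# Simplified stand-in for modality fusion plus temporal context: per-shot class
# scores from each modality (e.g. HMM likelihoods for color, motion, audio) are
# combined, then class-transition weights propagate context between shots.
import numpy as np

def fuse_and_smooth(modality_scores, transition, modality_weights=None):
    """modality_scores: (n_modalities, n_shots, n_classes) nonnegative scores.
    transition: (n_classes, n_classes) probability of class j following class i."""
    w = (np.ones(modality_scores.shape[0]) if modality_weights is None
         else np.asarray(modality_weights))
    fused = np.tensordot(w, modality_scores, axes=1)          # weighted sum over modalities
    fused /= fused.sum(axis=1, keepdims=True)
    smoothed = fused.copy()
    for t in range(1, fused.shape[0]):                        # propagate temporal context
        smoothed[t] = fused[t] * (smoothed[t - 1] @ transition)
        smoothed[t] /= smoothed[t].sum()
    return smoothed.argmax(axis=1)                            # one class label per shot

scores = np.random.default_rng(2).random((3, 8, 4))           # 3 modalities, 8 shots, 4 classes
T = np.full((4, 4), 0.1) + np.eye(4) * 0.6                    # classes tend to persist
print(fuse_and_smooth(scores, T))
```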
Analysis of Camera Arrays Applicable to the Internet of Things.
Yang, Jiachen; Xu, Ru; Lv, Zhihan; Song, Houbing
2016-03-22
The Internet of Things is built on various sensors and networks. Sensors for stereo capture are essential for acquiring information and have been applied in different fields. In this paper, we focus on camera modeling and analysis, which is very important for stereo display and viewing. We model two kinds of cameras, a parallel and a converged one, and analyze the difference between them in vertical and horizontal parallax. Even though different kinds of camera arrays are used in various applications and analyzed in the research literature, there are few discussions comparing them. Therefore, we make a detailed analysis of their performance over different shooting distances. From our analysis, we find that the threshold of shooting distance for converged cameras is 7 m. In addition, we design a camera array that can be used as either a parallel or a converged camera array, and take images and videos with it to identify the threshold.
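The parallax difference between the two rigs follows directly from projection geometry, as the sketch below shows for one off-axis point; baseline, focal length, and convergence distance are illustrative values rather than the paper's configuration.

```python
# Geometry sketch comparing parallel and converged (toed-in) stereo rigs: project
# a world point into both cameras and report horizontal and vertical parallax.
import numpy as np

def project(point, cam_pos, yaw, f=0.035):
    """Pinhole projection after rotating the camera by `yaw` about the vertical axis."""
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[c, 0, -s], [0, 1, 0], [s, 0, c]])     # world -> camera rotation
    X, Y, Z = R @ (np.asarray(point) - cam_pos)
    return f * X / Z, f * Y / Z

def parallax(point, baseline=0.1, converge_at=None):
    yaw = 0.0 if converge_at is None else np.arctan((baseline / 2) / converge_at)
    left = project(point, np.array([-baseline / 2, 0, 0]), +yaw)
    right = project(point, np.array([+baseline / 2, 0, 0]), -yaw)
    return left[0] - right[0], left[1] - right[1]        # (horizontal, vertical)

p = [0.5, 0.3, 5.0]                                      # off-axis point 5 m away
print("parallel :", parallax(p))                         # vertical parallax ~ 0
print("converged:", parallax(p, converge_at=7.0))        # toe-in introduces vertical parallax
```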
NASA Technical Reports Server (NTRS)
Miller, James G.
1995-01-01
Development and application of linear array imaging technologies to address specific aging-aircraft inspection issues is described. Real-time video-taped images of specimens constructed to simulate typical types of flaws encountered in the inspection of aircraft structures were obtained from an unmodified commercial linear-array medical scanner. Results suggest that information regarding the characteristics, location, and interface properties of specific types of flaws in materials and structures may be obtained from the images acquired with a linear array. Furthermore, linear array imaging may offer the advantage of being able to compare 'good' regions with 'flawed' regions simultaneously, and in real time. Real-time imaging permits the inspector to obtain image information from various views and provides the opportunity for observing the effects of introducing specific interventions. Observation of an image in real time can offer the operator the ability to 'interact' with the inspection process, thus providing new capabilities, and perhaps, new approaches to nondestructive inspections.
Micromachined Millimeter- and Submillimeter-wave SIS Heterodyne Receivers for Remote Sensing
NASA Technical Reports Server (NTRS)
Hu, Qing
1997-01-01
This is a progress report for the second year of a NASA-sponsored project. The report discusses the design and fabrication of micromachined Superconductor Insulator Superconductor (SIS) heterodyne receivers with integrated tuning elements. These receivers tune out the functional capacitance at desired frequencies, resulting in less noise, lower temperatures and broader bandwidths. The report also discusses the design and fabrication of the first monolithic 3x3 focal-plane arrays for a frequency range of 170-210 GHz. Also addressed is the construction of a 9-channel bias and read-out system, as well as the redesign of the IF connections to reduce cross talk between SIS junctions, which becomes significant at the 1.5 GHz IF frequency. Uniformity of the junction arrays was measured, and antenna beam patterns of several array elements were measured under operating conditions. Finally, video and heterodyne responses of our focal-plane arrays were measured as well. Attached is a paper on: 'Development of a 170-210 GHz 3x3 micromachined SIS imaging array'.
An Art of Resistance: From the Street to the Classroom
ERIC Educational Resources Information Center
Chung, Sheng Kuan
2009-01-01
Rooted in graffiti culture and its attitude toward the world, street art is regarded as a postgraffiti movement. Street art encompasses a wide array of media and techniques, such as traditional spray-painted tags, stickers, stencils, posters, photocopies, murals, paper cutouts, mosaics, street installations, performances, and video projections…
A Macintosh-Based Scientific Images Video Analysis System
NASA Technical Reports Server (NTRS)
Groleau, Nicolas; Friedland, Peter (Technical Monitor)
1994-01-01
A set of experiments was designed at MIT's Man-Vehicle Laboratory in order to evaluate the effects of zero gravity on the human orientation system. During many of these experiments, the movements of the eyes are recorded on high quality video cassettes. The images must be analyzed off-line to calculate the position of the eyes at every moment in time. To this aim, I have implemented a simple inexpensive computerized system which measures the angle of rotation of the eye from digitized video images. The system is implemented on a desktop Macintosh computer, processes one play-back frame per second and exhibits adequate levels of accuracy and precision. The system uses LabVIEW, a digital output board, and a video input board to control a VCR, digitize video images, analyze them, and provide a user friendly interface for the various phases of the process. The system uses the Concept Vi LabVIEW library (Graftek's Image, Meudon la Foret, France) for image grabbing and displaying as well as translation to and from LabVIEW arrays. Graftek's software layer drives an Image Grabber board from Neotech (Eastleigh, United Kingdom). A Colour Adapter box from Neotech provides adequate video signal synchronization. The system also requires a LabVIEW driven digital output board (MacADIOS II from GW Instruments, Cambridge, MA) controlling a slightly modified VCR remote control used mainly to advance the video tape frame by frame.
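One frame-analysis step can be illustrated by locating the dark pupil with a threshold and taking its centroid, as below; this stands in for the LabVIEW processing described above and uses invented threshold and image values.

```python
# Hedged sketch of one frame-analysis step: locate the (dark) pupil by
# thresholding and take its centroid as a proxy for eye position.
import numpy as np

def pupil_centroid(frame, threshold=40):
    """frame: 2-D grayscale array (0-255). Returns (row, col) of the dark-pixel centroid."""
    mask = frame < threshold                      # pupil pixels are darkest
    if not mask.any():
        return None
    rows, cols = np.nonzero(mask)
    return rows.mean(), cols.mean()

# Synthetic frame: bright background with a dark disc standing in for the pupil.
frame = np.full((120, 160), 200, dtype=np.uint8)
rr, cc = np.ogrid[:120, :160]
frame[(rr - 60) ** 2 + (cc - 95) ** 2 < 15 ** 2] = 10
print(pupil_centroid(frame))                      # ~ (60.0, 95.0)
```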
Progress in video immersion using Panospheric imaging
NASA Astrophysics Data System (ADS)
Bogner, Stephen L.; Southwell, David T.; Penzes, Steven G.; Brosinsky, Chris A.; Anderson, Ron; Hanna, Doug M.
1998-09-01
Having demonstrated significant technical and marketplace advantages over other modalities for video immersion, Panospheric™ Imaging (PI) continues to evolve rapidly. This paper reports on progress achieved since AeroSense 97. The first practical field deployment of the technology occurred in June-August 1997 during the NASA-CMU 'Atacama Desert Trek' activity, where the Nomad mobile robot was teleoperated via immersive Panospheric™ imagery from a distance of several thousand kilometers. Research using teleoperated vehicles at DRES has also verified the exceptional utility of the PI technology for achieving high levels of situational awareness, operator confidence, and mission effectiveness. Important performance enhancements have been achieved with the completion of the 4th Generation PI DSP-based array processor system. The system is now able to provide dynamic full video-rate generation of spatial and computational transformations, resulting in a programmable and fully interactive immersive video telepresence. A new multi-CCD camera architecture has been created to exploit the bandwidth of this processor, yielding a well-matched PI system with greatly improved resolution. While the initial commercial application for this technology is expected to be video tele-conferencing, it also appears to have excellent potential for application in the 'Immersive Cockpit' concept. Additional progress is reported in the areas of Long Wave Infrared PI Imaging, Stereo PI concepts, PI-based Video-Servoing concepts, PI-based Video Navigation concepts, and Foveation concepts (to merge localized high-resolution views with immersive views).
HealthTrust: A Social Network Approach for Retrieving Online Health Videos
Karlsen, Randi; Melton, Genevieve B
2012-01-01
Background Social media are becoming mainstream in the health domain. Despite the large volume of accurate and trustworthy health information available on social media platforms, finding good-quality health information can be difficult. Misleading health information can often be popular (eg, antivaccination videos) and therefore highly rated by general search engines. We believe that community wisdom about the quality of health information can be harnessed to help create tools for retrieving good-quality social media content. Objectives To explore approaches for extracting metrics about authoritativeness in online health communities and how these metrics positively correlate with the quality of the content. Methods We designed a metric, called HealthTrust, that estimates the trustworthiness of social media content (eg, blog posts or videos) in a health community. The HealthTrust metric calculates reputation in an online health community based on link analysis. We used the metric to retrieve YouTube videos and channels about diabetes. In two different experiments, health consumers provided 427 ratings of 17 videos and professionals gave 162 ratings of 23 videos. In addition, two professionals reviewed 30 diabetes channels. Results HealthTrust may be used for retrieving online videos on diabetes, since it performed better than YouTube Search in most cases. Overall, of 20 potential channels, HealthTrust's filtering allowed only 3 bad channels (15%) versus 8 (40%) on the YouTube list. Misleading and graphic videos (eg, featuring amputations) were more commonly found by YouTube Search than by searches based on HealthTrust. However, some videos from trusted sources had low HealthTrust scores, mostly from general health content providers, and therefore not highly connected in the diabetes community. When comparing video ratings from our reviewers, we found that HealthTrust achieved a positive and statistically significant correlation with professionals (Pearson r(10) = .65, P = .02) and a trend toward significance with health consumers (r(7) = .65, P = .06) with videos on hemoglobin A1c, but it did not perform as well with diabetic foot videos. Conclusions The trust-based metric HealthTrust showed promising results when used to retrieve diabetes content from YouTube. Our research indicates that social network analysis may be used to identify trustworthy social media in health communities. PMID:22356723
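The link-analysis idea can be illustrated with a PageRank-style score over a small community graph, as sketched below; the node names are fictitious and PageRank is a stand-in rather than the actual HealthTrust formula.

```python
# Illustrative sketch of a link-analysis trust score over a health community
# graph (channels/users as nodes, subscriptions or comments as edges). PageRank
# stands in here for the HealthTrust computation; node names are fictitious.
import networkx as nx

G = nx.DiGraph()
G.add_edges_from([
    ("patient_blog_A", "diabetes_assoc"), ("patient_blog_B", "diabetes_assoc"),
    ("patient_blog_A", "endocrinologist_ch"), ("diabetes_assoc", "endocrinologist_ch"),
    ("spam_channel", "spam_channel_2"), ("spam_channel_2", "spam_channel"),
])
scores = nx.pagerank(G, alpha=0.85)
for node, s in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{node:20s} {s:.3f}")   # well-linked community sources rank higher
```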
Standoff passive video imaging at 350 GHz with 251 superconducting detectors
NASA Astrophysics Data System (ADS)
Becker, Daniel; Gentry, Cale; Smirnov, Ilya; Ade, Peter; Beall, James; Cho, Hsiao-Mei; Dicker, Simon; Duncan, William; Halpern, Mark; Hilton, Gene; Irwin, Kent; Li, Dale; Paulter, Nicholas; Reintsema, Carl; Schwall, Robert; Tucker, Carole
2014-06-01
Millimeter wavelength radiation holds promise for detection of security threats at a distance, including suicide bomb belts and maritime threats in poor weather. The high sensitivity of superconducting Transition Edge Sensor (TES) detectors makes them ideal for passive imaging of thermal signals at these wavelengths. We have built a 350 GHz video-rate imaging system using a large-format array of feedhorn-coupled TES bolometers. The system operates at a standoff distance of 16 m to 28 m with a spatial resolution of 1.4 cm (at 17 m). It currently contains one 251-detector subarray, and will be expanded to contain four subarrays for a total of 1004 detectors. The system has been used to take video images which reveal the presence of weapons concealed beneath a shirt in an indoor setting. We present a summary of this work.
Performance evaluation of a two detector camera for real-time video.
Lochocki, Benjamin; Gambín-Regadera, Adrián; Artal, Pablo
2016-12-20
Single pixel imaging can be the preferred method over traditional 2D-array imaging in spectral ranges where conventional cameras are not available. However, when it comes to real-time video imaging, single pixel imaging cannot compete with the framerates of conventional cameras, especially when high-resolution images are desired. Here we evaluate the performance of an imaging approach using two detectors simultaneously. First, we present theoretical results on how low SNR affects final image quality followed by experimentally determined results. Obtained video framerates were doubled compared to state of the art systems, resulting in a framerate from 22 Hz for a 32×32 resolution to 0.75 Hz for a 128×128 resolution image. Additionally, the two detector imaging technique enables the acquisition of images with a resolution of 256×256 in less than 3 s.
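For context, single-pixel imaging with an orthogonal (Hadamard) pattern basis can be sketched as below: one detector value per pattern, recovered by an inverse transform; the resolution is a toy choice and the two-detector SNR analysis is not reproduced.

```python
# Minimal sketch of single-pixel reconstruction with an orthogonal (Hadamard)
# pattern basis: one detector reading per projected pattern, image recovered by
# the inverse transform.
import numpy as np
from scipy.linalg import hadamard

n = 32                                   # 32x32 image -> 1024 patterns
H = hadamard(n * n)                      # +1/-1 structured illumination patterns
scene = np.zeros((n, n)); scene[8:24, 12:20] = 1.0
x = scene.ravel()

y = H @ x                                # one photodiode reading per pattern
x_rec = (H.T @ y) / (n * n)              # H @ H.T = (n*n) * I for a Hadamard matrix
print(np.allclose(x_rec, x))             # True: exact recovery in the noise-free case
```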