Sample records for video analysis software

  1. Transana Qualitative Video and Audio Analysis Software as a Tool for Teaching Intellectual Assessment Skills to Graduate Psychology Students

    ERIC Educational Resources Information Center

    Rush, S. Craig

    2014-01-01

    This article draws on the author's experience using qualitative video and audio analysis, most notably through use of the Transana qualitative video and audio analysis software program, as an alternative method for teaching IQ administration skills to students in a graduate psychology program. Qualitative video and audio analysis may be useful for…

  2. Software for Real-Time Analysis of Subsonic Test Shot Accuracy

    DTIC Science & Technology

    2014-03-01

    used the C++ programming language, the Open Source Computer Vision (OpenCV®) software library, and Microsoft Windows® Application Programming...video for comparison through OpenCV image analysis tools. Based on the comparison, the software then computed the coordinates of each shot relative to...DWB researchers wanted to use the Open Source Computer Vision (OpenCV) software library for capturing and analyzing frames of video. OpenCV contains

  3. Validation of a Video Analysis Software Package for Quantifying Movement Velocity in Resistance Exercises.

    PubMed

    Sañudo, Borja; Rueda, David; Pozo-Cruz, Borja Del; de Hoyo, Moisés; Carrasco, Luis

    2016-10-01

    Sañudo, B, Rueda, D, del Pozo-Cruz, B, de Hoyo, M, and Carrasco, L. Validation of a video analysis software package for quantifying movement velocity in resistance exercises. J Strength Cond Res 30(10): 2934-2941, 2016. The aim of this study was to establish the validity of a video analysis software package in measuring mean propulsive velocity (MPV) and the maximal velocity during bench press. Twenty-one healthy males (21 ± 1 year) with weight training experience were recruited, and the MPV and the maximal velocity of the concentric phase (Vmax) were compared with a linear position transducer system during a standard bench press exercise. Participants performed a 1 repetition maximum test using the supine bench press exercise. The testing procedures involved the simultaneous assessment of bench press propulsive velocity using 2 kinematic (linear position transducer and semi-automated tracking software) systems. High Pearson's correlation coefficients for MPV and Vmax between both devices (r = 0.473 to 0.993) were observed. The intraclass correlation coefficients for barbell velocity data and the kinematic data obtained from video analysis were high (>0.79). In addition, the low coefficients of variation indicate that measurements had low variability. Finally, Bland-Altman plots with the limits of agreement of the MPV and Vmax with different loads showed a negative trend, which indicated that the video analysis had higher values than the linear transducer. In conclusion, this study has demonstrated that the software used for the video analysis was an easy-to-use and cost-effective tool with a very high degree of concurrent validity. This software can be used to evaluate changes in velocity of training load in resistance training, which may be important for the prescription and monitoring of training programmes.
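    The velocity measures named above reduce to simple kinematics once the barbell position has been digitized frame by frame. A hedged sketch of deriving MPV and Vmax from tracked positions (function names and sample values are illustrative, not the paper's algorithm):

```python
# Sketch: mean propulsive velocity (MPV) and peak velocity (Vmax)
# from frame-by-frame barbell heights, as a semi-automated video
# tracker might compute them. Data and names are illustrative.

def bar_velocities(positions_m, fps):
    """Finite-difference velocity between consecutive frames (m/s)."""
    dt = 1.0 / fps
    return [(b - a) / dt for a, b in zip(positions_m, positions_m[1:])]

def mpv_and_vmax(positions_m, fps):
    """MPV over the upward (concentric) samples, plus the peak velocity."""
    v = bar_velocities(positions_m, fps)
    concentric = [x for x in v if x > 0]
    return sum(concentric) / len(concentric), max(v)
```

    Real tools also smooth the position signal before differentiating, since pixel-level jitter is amplified by finite differences.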

  4. VideoHacking: Automated Tracking and Quantification of Locomotor Behavior with Open Source Software and Off-the-Shelf Video Equipment.

    PubMed

    Conklin, Emily E; Lee, Kathyann L; Schlabach, Sadie A; Woods, Ian G

    2015-01-01

    Differences in nervous system function can result in differences in behavioral output. Measurements of animal locomotion enable the quantification of these differences. Automated tracking of animal movement is less labor-intensive and bias-prone than direct observation, and allows for simultaneous analysis of multiple animals, high spatial and temporal resolution, and data collection over extended periods of time. Here, we present a new video-tracking system built on Python-based software that is free, open source, and cross-platform, and that can analyze video input from widely available video capture devices such as smartphone cameras and webcams. We validated this software through four tests on a variety of animal species, including larval and adult zebrafish (Danio rerio), Siberian dwarf hamsters (Phodopus sungorus), and wild birds. These tests highlight the capacity of our software for long-term data acquisition, parallel analysis of multiple animals, and application to animal species of different sizes and movement patterns. We applied the software to an analysis of the effects of ethanol on thigmotaxis (wall-hugging) behavior on adult zebrafish, and found that acute ethanol treatment decreased thigmotaxis behaviors without affecting overall amounts of motion. The open source nature of our software enables flexibility, customization, and scalability in behavioral analyses. Moreover, our system presents a free alternative to commercial video-tracking systems and is thus broadly applicable to a wide variety of educational settings and research programs.
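    The core of such automated tracking is background subtraction: subtract a reference frame, threshold the difference, and take the centroid of the changed pixels as the animal's position. A toy sketch with plain lists standing in for grayscale frames (the published system uses OpenCV-style camera input; this shows only the underlying idea, with an assumed threshold):

```python
# Sketch: locate a moving animal by differencing the current frame
# against a background frame and taking the centroid of changed pixels.
# Frames are lists of lists of grayscale values; threshold is assumed.

def centroid_of_motion(background, frame, thresh=30):
    """Return (row, col) centroid of pixels that differ from background."""
    moved = [(r, c)
             for r, row in enumerate(frame)
             for c, px in enumerate(row)
             if abs(px - background[r][c]) > thresh]
    if not moved:
        return None  # no motion detected in this frame
    n = len(moved)
    return (sum(r for r, _ in moved) / n, sum(c for _, c in moved) / n)
```

    Per-frame centroids strung together give the trajectory from which distance travelled and thigmotaxis (distance from the arena wall) can be computed.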

  5. Quantification of video-taped images in microcirculation research using inexpensive imaging software (Adobe Photoshop).

    PubMed

    Brunner, J; Krummenauer, F; Lehr, H A

    2000-04-01

    Study end-points in microcirculation research are usually video-taped images rather than numeric computer print-outs. Analysis of these video-taped images for the quantification of microcirculatory parameters usually requires computer-based image analysis systems. Most software programs for image analysis are custom-made, expensive, and limited in their applicability to selected parameters and study end-points. We demonstrate herein that an inexpensive, commercially available computer software (Adobe Photoshop), run on a Macintosh G3 computer with inbuilt graphic capture board provides versatile, easy to use tools for the quantification of digitized video images. Using images obtained by intravital fluorescence microscopy from the pre- and postischemic muscle microcirculation in the skinfold chamber model in hamsters, Photoshop allows simple and rapid quantification (i) of microvessel diameters, (ii) of the functional capillary density and (iii) of postischemic leakage of FITC-labeled high molecular weight dextran from postcapillary venules. We present evidence of the technical accuracy of the software tools and of a high degree of interobserver reliability. Inexpensive commercially available imaging programs (i.e., Adobe Photoshop) provide versatile tools for image analysis with a wide range of potential applications in microcirculation research.

  6. Digital image processing of bone - Problems and potentials

    NASA Technical Reports Server (NTRS)

    Morey, E. R.; Wronski, T. J.

    1980-01-01

    The development of a digital image processing system for bone histomorphometry and fluorescent marker monitoring is discussed. The system in question is capable of making measurements of UV or light microscope features on a video screen with either video or computer-generated images, and comprises a microscope, low-light-level video camera, video digitizer and display terminal, color monitor, and PDP 11/34 computer. Capabilities demonstrated in the analysis of an undecalcified rat tibia include the measurement of perimeter and total bone area, and the generation of microscope images, false color images, digitized images and contoured images for further analysis. Software development will be based on an existing software library, specifically the mini-VICAR system developed at JPL. It is noted that the potentials of the system in terms of speed and reliability far exceed any problems associated with hardware and software development.

  7. Agreement Between Face-to-Face and Free Software Video Analysis for Assessing Hamstring Flexibility in Adolescents.

    PubMed

    Moral-Muñoz, José A; Esteban-Moreno, Bernabé; Arroyo-Morales, Manuel; Cobo, Manuel J; Herrera-Viedma, Enrique

    2015-09-01

    The objective of this study was to determine the level of agreement between face-to-face hamstring flexibility measurements and free software video analysis in adolescents. Reduced hamstring flexibility is common in adolescents (75% of boys and 35% of girls aged 10). The length of the hamstring muscle has an important role in both the effectiveness and the efficiency of basic human movements, and reduced hamstring flexibility is related to various musculoskeletal conditions. There are various approaches to measuring hamstring flexibility with high reliability; the most commonly used approaches in the scientific literature are the sit-and-reach test, hip joint angle (HJA), and active knee extension. The assessment of hamstring flexibility using video analysis could help with adolescent flexibility follow-up. Fifty-four adolescents from a local school participated in a descriptive study of repeated measures using a crossover design. Active knee extension and HJA were measured with an inclinometer and were simultaneously recorded with a video camera. Each video was downloaded to a computer and subsequently analyzed using Kinovea 0.8.15, a free software application for movement analysis. All outcome measures showed reliability estimates with α > 0.90. The lowest reliability was obtained for HJA (α = 0.91). The preliminary findings support the use of a free software tool for assessing hamstring flexibility, offering health professionals a useful tool for adolescent flexibility follow-up.
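    Video goniometry of this kind comes down to measuring the angle between body landmarks digitized on a frame. A sketch of the underlying geometry (landmark coordinates are illustrative; Kinovea exposes this as a built-in angle tool):

```python
# Sketch: joint angle at a vertex landmark from 2-D point coordinates,
# the geometric core of video-based flexibility measurement.

import math

def joint_angle(a, vertex, b):
    """Angle in degrees at `vertex` between rays vertex->a and vertex->b."""
    ax, ay = a[0] - vertex[0], a[1] - vertex[1]
    bx, by = b[0] - vertex[0], b[1] - vertex[1]
    cos_angle = (ax * bx + ay * by) / (math.hypot(ax, ay) * math.hypot(bx, by))
    return math.degrees(math.acos(cos_angle))
```

    For HJA, the vertex would be the hip marker, with the rays running toward the trunk and femur markers.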

  8. Video Analysis of Rolling Cylinders

    ERIC Educational Resources Information Center

    Phommarach, S.; Wattanakasiwich, P.; Johnston, I.

    2012-01-01

    In this work, we studied the rolling motion of solid and hollow cylinders down an inclined plane at different angles. The motions were captured on video at 300 frames s[superscript -1], and the videos were analyzed frame by frame using video analysis software. Data from the real motion were compared with the theory of rolling down an inclined…
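    The theory the video data are compared against follows from energy conservation: a cylinder rolling without slipping down an incline accelerates at a = g sin θ / (1 + c), where c = I/(mr²) is 1/2 for a solid cylinder and 1 for a thin-walled hollow one. A quick numeric check (angle chosen for illustration):

```python
# Sketch: theoretical acceleration of a cylinder rolling without
# slipping, a = g*sin(theta) / (1 + c), where c = I/(m r^2).

import math

def rolling_acceleration(theta_deg, shape_factor, g=9.81):
    """shape_factor c: 0.5 for a solid cylinder, 1.0 for a hollow one."""
    return g * math.sin(math.radians(theta_deg)) / (1.0 + shape_factor)
```

    The solid cylinder always wins the race down the plane, since its smaller c leaves more energy for translation.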

  9. High-Speed Video Analysis of Damped Harmonic Motion

    ERIC Educational Resources Information Center

    Poonyawatpornkul, J.; Wattanakasiwich, P.

    2013-01-01

    In this paper, we acquire and analyse high-speed videos of a spring-mass system oscillating in glycerin at different temperatures. Three cases of damped harmonic oscillation are investigated and analysed by using high-speed video at a rate of 120 frames s[superscript -1] and Tracker Video Analysis (Tracker) software. We present empirical data for…
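    One quantity such high-speed tracking yields directly is the damping: successive peak amplitudes of a damped oscillator decay geometrically, so their logarithmic decrement estimates the damping constant. A minimal sketch (peak values illustrative, not from the paper):

```python
# Sketch: average logarithmic decrement from successive peak
# amplitudes extracted from a tracked oscillation.

import math

def log_decrement(peaks):
    """Mean of ln(A_n / A_{n+1}) over successive peak amplitudes."""
    decs = [math.log(a / b) for a, b in zip(peaks, peaks[1:])]
    return sum(decs) / len(decs)
```

    The damping ratio then follows as ζ = δ / sqrt(4π² + δ²).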

  10. Compression Algorithm Analysis of In-Situ (S)TEM Video: Towards Automatic Event Detection and Characterization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Teuton, Jeremy R.; Griswold, Richard L.; Mehdi, Beata L.

    Precise analysis of both (S)TEM images and video is a time- and labor-intensive process. As an example, determining when crystal growth and shrinkage occurs during the dynamic process of Li dendrite deposition and stripping involves manually scanning through each frame in the video to extract a specific set of frames/images. For large numbers of images, this process can be very time consuming, so a fast and accurate automated method is desirable. Given this need, we developed software that uses analysis of video compression statistics for detecting and characterizing events in large data sets. This software works by converting the data into a series of images which it compresses into an MPEG-2 video using the open source “avconv” utility [1]. The software does not use the video itself, but rather analyzes the video statistics from the first pass of the video encoding that avconv records in the log file. This file contains statistics for each frame of the video including the frame quality, intra-texture and predicted texture bits, forward and backward motion vector resolution, among others. In all, avconv records 15 statistics for each frame. By combining different statistics, we have been able to detect events in various types of data. We have developed an interactive tool for exploring the data and the statistics that aids the analyst in selecting useful statistics for each analysis. Going forward, an algorithm for detecting and possibly describing events automatically can be written based on statistic(s) for each data type.
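    The detection idea generalizes: a frame whose encoding statistics deviate sharply from the rest of the series often marks a physical event. A hedged sketch that flags outlier frames by z-score on a single per-frame statistic (the threshold and the choice of statistic are assumptions; the work above combines several of avconv's 15 per-frame statistics):

```python
# Sketch: flag candidate event frames as z-score outliers in a series
# of per-frame encoding statistics (e.g. predicted-texture bits).

import statistics

def flag_events(frame_stat, k=3.0):
    """Indices of frames whose statistic lies > k std devs from the mean."""
    mu = sum(frame_stat) / len(frame_stat)
    sd = statistics.pstdev(frame_stat)
    if sd == 0:
        return []  # constant series: nothing to flag
    return [i for i, x in enumerate(frame_stat) if abs(x - mu) / sd > k]
```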

  11. Magnetic Braking: A Video Analysis

    ERIC Educational Resources Information Center

    Molina-Bolivar, J. A.; Abella-Palacios, A. J.

    2012-01-01

    This paper presents a laboratory exercise that introduces students to the use of video analysis software and the Lenz's law demonstration. Digital techniques have proved to be very useful for the understanding of physical concepts. In particular, the availability of affordable digital video offers students the opportunity to actively engage in…

  12. High-Speed Video Analysis in a Conceptual Physics Class

    ERIC Educational Resources Information Center

    Desbien, Dwain M.

    2011-01-01

    The use of probeware and computers has become quite common in introductory physics classrooms. Video analysis is also becoming more popular and is available to a wide range of students through commercially available and/or free software. Video analysis allows for the study of motions that cannot be easily measured in the traditional lab setting…

  13. Video bioinformatics analysis of human embryonic stem cell colony growth.

    PubMed

    Lin, Sabrina; Fonteno, Shawn; Satish, Shruthi; Bhanu, Bir; Talbot, Prue

    2010-05-20

    Because video data are complex and are comprised of many images, mining information from video material is difficult to do without the aid of computer software. Video bioinformatics is a powerful quantitative approach for extracting spatio-temporal data from video images using computer software to perform data mining and analysis. In this article, we introduce a video bioinformatics method for quantifying the growth of human embryonic stem cells (hESC) by analyzing time-lapse videos collected in a Nikon BioStation CT incubator equipped with a camera for video imaging. In our experiments, hESC colonies that were attached to Matrigel were filmed for 48 hours in the BioStation CT. To determine the rate of growth of these colonies, recipes were developed using CL-Quant software which enables users to extract various types of data from video images. To accurately evaluate colony growth, three recipes were created. The first segmented the image into the colony and background, the second enhanced the image to define colonies throughout the video sequence accurately, and the third measured the number of pixels in the colony over time. The three recipes were run in sequence on video data collected in a BioStation CT to analyze the rate of growth of individual hESC colonies over 48 hours. To verify the truthfulness of the CL-Quant recipes, the same data were analyzed manually using Adobe Photoshop software. When the data obtained using the CL-Quant recipes and Photoshop were compared, results were virtually identical, indicating the CL-Quant recipes were truthful. The method described here could be applied to any video data to measure growth rates of hESC or other cells that grow in colonies. In addition, other video bioinformatics recipes can be developed in the future for other cell processes such as migration, apoptosis, and cell adhesion.
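    Stripped to essentials, the three recipes are segmentation, enhancement, and per-frame pixel counting. A toy sketch of the counting step with plain lists standing in for grayscale frames (the threshold and data are illustrative, not the CL-Quant recipes themselves):

```python
# Sketch: colony area as a count of above-threshold pixels, and growth
# as the ratio of areas between the first and last frame.

def colony_area(frame, thresh=128):
    """Number of pixels brighter than the segmentation threshold."""
    return sum(1 for row in frame for px in row if px > thresh)

def growth_ratio(first_frame, last_frame, thresh=128):
    """Fold-change in colony area between two segmented frames."""
    return colony_area(last_frame, thresh) / colony_area(first_frame, thresh)
```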

  14. Quantitative assessment of human motion using video motion analysis

    NASA Technical Reports Server (NTRS)

    Probe, John D.

    1993-01-01

    In the study of the dynamics and kinematics of the human body a wide variety of technologies has been developed. Photogrammetric techniques are well documented and are known to provide reliable positional data from recorded images. Often these techniques are used in conjunction with cinematography and videography for analysis of planar motion, and to a lesser degree three-dimensional motion. Cinematography has been the most widely used medium for movement analysis. Excessive operating costs and the lag time required for film development, coupled with recent advances in video technology, have allowed video based motion analysis systems to emerge as a cost effective method of collecting and analyzing human movement. The Anthropometric and Biomechanics Lab at Johnson Space Center utilizes the video based Ariel Performance Analysis System (APAS) to develop data on shirtsleeved and space-suited human performance in order to plan efficient on-orbit intravehicular and extravehicular activities. APAS is a fully integrated system of hardware and software for biomechanics and the analysis of human performance and generalized motion measurement. Major components of the complete system include the video system, the AT compatible computer, and the proprietary software.

  15. The Simple Video Coder: A free tool for efficiently coding social video data.

    PubMed

    Barto, Daniel; Bird, Clark W; Hamilton, Derek A; Fink, Brandi C

    2017-08-01

    Videotaping of experimental sessions is a common practice across many disciplines of psychology, ranging from clinical therapy, to developmental science, to animal research. Audio-visual data are a rich source of information that can be easily recorded; however, analysis of the recordings presents a major obstacle to project completion. Coding behavior is time-consuming and often requires ad-hoc training of a student coder. In addition, existing software is either prohibitively expensive or cumbersome, which leaves researchers with inadequate tools to quickly process video data. We offer the Simple Video Coder: free, open-source software for behavior coding that is flexible in accommodating different experimental designs, is intuitive for students to use, and produces outcome measures of event timing, frequency, and duration. Finally, the software also offers extraction tools to splice video into coded segments suitable for training future human coders or for use as input for pattern classification algorithms.
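    The outcome measures such coders produce (event timing, frequency, duration) are simple aggregates over coded onset/offset pairs. A sketch under that assumption (the Simple Video Coder's actual output format is not shown here):

```python
# Sketch: summary statistics over coded behavior events, each given
# as an (onset_s, offset_s) pair from a video coding session.

def event_stats(events):
    """Return (count, total duration, mean duration) in seconds."""
    durations = [off - on for on, off in events]
    n = len(durations)
    return n, sum(durations), sum(durations) / n
```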

  16. Making Sure What You See Is What You Get: Digital Video Technology and the Preparation of Teachers of Elementary Science

    ERIC Educational Resources Information Center

    Bueno de Mesquita, Paul; Dean, Ross F.; Young, Betty J.

    2010-01-01

    Advances in digital video technology create opportunities for more detailed qualitative analyses of actual teaching practice in science and other subject areas. User-friendly digital cameras and highly developed, flexible video-analysis software programs have made the tasks of video capture, editing, transcription, and subsequent data analysis…

  17. Quantitative fluorescence angiography for neurosurgical interventions.

    PubMed

    Weichelt, Claudia; Duscha, Philipp; Steinmeier, Ralf; Meyer, Tobias; Kuß, Julia; Cimalla, Peter; Kirsch, Matthias; Sobottka, Stephan B; Koch, Edmund; Schackert, Gabriele; Morgenstern, Ute

    2013-06-01

    Present methods for quantitative measurement of cerebral perfusion during neurosurgical operations require additional technology for measurement, data acquisition, and processing. This study used conventional fluorescence video angiography, an established method to visualize blood flow in brain vessels, enhanced by a quantifying perfusion software tool. For these purposes, the fluorescence dye indocyanine green is given intravenously, and after activation by a near-infrared light source the fluorescence signal is recorded. Video data are analyzed by software algorithms to allow quantification of the blood flow. Additionally, perfusion is measured intraoperatively by a reference system. Furthermore, comparative reference measurements using a flow phantom were performed to verify the quantitative blood flow results of the software and to validate the software algorithm. Analysis of intraoperative video data provides characteristic biological parameters. These parameters were implemented in the special flow phantom for experimental validation of the developed software algorithms. Furthermore, various factors that influence the determination of perfusion parameters were analyzed by means of mathematical simulation. Comparing patient measurement, phantom experiment, and computer simulation under certain conditions (variable frame rate, vessel diameter, etc.), the results of the software algorithms are within the range of parameter accuracy of the reference methods. Therefore, the software algorithm for calculating cortical perfusion parameters from video data presents a helpful intraoperative tool without complex additional measurement technology.

  18. SpotMetrics: An Open-Source Image-Analysis Software Plugin for Automatic Chromatophore Detection and Measurement.

    PubMed

    Hadjisolomou, Stavros P; El-Haddad, George

    2017-01-01

    Coleoid cephalopods (squid, octopus, and sepia) are renowned for their elaborate body patterning capabilities, which are employed for camouflage or communication. The specific chromatic appearance of a cephalopod, at any given moment, is a direct result of the combined action of their intradermal pigmented chromatophore organs and reflecting cells. Therefore, a lot can be learned about the cephalopod coloration system by video recording and analyzing the activation of individual chromatophores in time. The fact that adult cephalopods have small chromatophores, up to several hundred thousand in number, makes measurement and analysis over several seconds a difficult task. However, current advancements in videography enable high-resolution and high framerate recording, which can be used to record chromatophore activity in more detail and accuracy in both space and time domains. In turn, the additional pixel information and extra frames per video from such recordings result in large video files of several gigabytes, even when the recording spans only a few minutes. We created a software plugin, "SpotMetrics," that can automatically analyze high resolution, high framerate video of chromatophore organ activation in time. This image analysis software can track hundreds of individual chromatophores over several hundred frames to provide measurements of size and color. This software may also be used to measure differences in chromatophore activation during different behaviors which will contribute to our understanding of the cephalopod sensorimotor integration system. In addition, this software can potentially be utilized to detect numbers of round objects and size changes in time, such as eye pupil size or number of bacteria in a sample. Thus, we are making this software plugin freely available as open-source because we believe it will be of benefit to other colleagues both in the cephalopod biology field and also within other disciplines.

  19. Mesoscale and severe storms (Mass) data management and analysis system

    NASA Technical Reports Server (NTRS)

    Hickey, J. S.; Karitani, S.; Dickerson, M.

    1984-01-01

    Progress on the Mesoscale and Severe Storms (MASS) data management and analysis system is described. An interactive atmospheric data base management software package to convert four types of data (Sounding, Single Level, Grid, Image) into standard random access formats is implemented and integrated with the MASS AVE80 Series general purpose plotting and graphics display data analysis software package. An interactive analysis and display graphics software package (AVE80) to analyze large volumes of conventional and satellite derived meteorological data is enhanced to provide imaging/color graphics display utilizing color video hardware integrated into the MASS computer system. Local and remote smart-terminal capability is provided by installing APPLE III computer systems within individual scientist offices and integrated with the MASS system, thus providing color video, graphics, and character display of the four data types.

  20. "Diagnosis by behavioral observation" home-videosomnography - a rigorous ethnographic approach to sleep of children with neurodevelopmental conditions.

    PubMed

    Ipsiroglu, Osman S; Hung, Yi-Hsuan Amy; Chan, Forson; Ross, Michelle L; Veer, Dorothee; Soo, Sonja; Ho, Gloria; Berger, Mai; McAllister, Graham; Garn, Heinrich; Kloesch, Gerhard; Barbosa, Adriano Vilela; Stockler, Sylvia; McKellin, William; Vatikiotis-Bateson, Eric

    2015-01-01

    Advanced video technology is available for sleep-laboratories. However, low-cost equipment for screening in the home setting has not been identified and tested, nor has a methodology for analysis of video recordings been suggested. We investigated different combinations of hardware/software for home-videosomnography (HVS) and established a process for qualitative and quantitative analysis of HVS-recordings. A case vignette (HVS analysis for a 5.5-year-old girl with major insomnia and several co-morbidities) demonstrates how methodological considerations were addressed and how HVS added value to clinical assessment. We suggest an "ideal set of hardware/software" that is reliable, affordable (∼$500) and portable (∼2.8 kg) to conduct non-invasive HVS, which allows time-lapse analyses. The equipment consists of a net-book, a camera with infrared optics, and a video capture device. (1) We present an HVS-analysis protocol consisting of three steps of analysis at varying replay speeds: (a) basic overview and classification at 16× normal speed; (b) second viewing and detailed descriptions at 4-8× normal speed, and (c) viewing, listening, and in-depth descriptions at real-time speed. (2) We also present a custom software program that facilitates video analysis and note-taking (Annotator©), and Optical Flow software that automatically quantifies movement for internal quality control of the HVS-recording. The case vignette demonstrates how the HVS-recordings revealed the dimension of insomnia caused by restless legs syndrome, and illustrated the cascade of symptoms, challenging behaviors, and resulting medications. The strategy of using HVS, although requiring validation and reliability testing, opens the floor for a new "observational sleep medicine," which has been useful in describing discomfort-related behavioral movement patterns in patients with communication difficulties presenting with challenging/disruptive sleep/wake behaviors.

  1. 77 FR 75659 - Certain Video Analytics Software, Systems, Components Thereof, and Products Containing Same...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-12-21

    ... INTERNATIONAL TRADE COMMISSION [Investigation No. 337-TA-852] Certain Video Analytics Software..., 2012, based on a complaint filed by ObjectVideo, Inc. (``ObjectVideo'') of Reston, Virginia. 77 FR... United States after importation of certain video analytics software systems, components thereof, and...

  2. Recoding low-level simulator data into a record of meaningful task performance: the integrated task modeling environment (ITME).

    PubMed

    King, Robert; Parker, Simon; Mouzakis, Kon; Fletcher, Winston; Fitzgerald, Patrick

    2007-11-01

    The Integrated Task Modeling Environment (ITME) is a user-friendly software tool that has been developed to automatically recode low-level data into an empirical record of meaningful task performance. The present research investigated and validated the performance of the ITME software package by conducting complex simulation missions and comparing the task analyses produced by ITME with task analyses produced by experienced video analysts. A very high interrater reliability (≥ .94) existed between experienced video analysts and the ITME for the task analyses produced for each mission. The mean session time:analysis time ratio was 1:24 using video analysis techniques and 1:5 using the ITME. It was concluded that the ITME produced task analyses that were as reliable as those produced by experienced video analysts, and significantly reduced the time cost associated with these analyses.

  3. A system for the real-time display of radar and video images of targets

    NASA Technical Reports Server (NTRS)

    Allen, W. W.; Burnside, W. D.

    1990-01-01

    Described here is a software and hardware system for the real-time display of radar and video images for use in a measurement range. The main purpose is to give the reader a clear idea of the software and hardware design and its functions. This system is designed around a Tektronix XD88-30 graphics workstation, used to display radar images superimposed on video images of the actual target. The system's purpose is to provide a platform for the analysis and documentation of radar images and their associated targets in a menu-driven, user oriented environment.

  4. Preliminary clinical evaluation of automated analysis of the sublingual microcirculation in the assessment of patients with septic shock: Comparison of automated versus semi-automated software.

    PubMed

    Sharawy, Nivin; Mukhtar, Ahmed; Islam, Sufia; Mahrous, Reham; Mohamed, Hassan; Ali, Mohamed; Hakeem, Amr A; Hossny, Osama; Refaa, Amera; Saka, Ahmed; Cerny, Vladimir; Whynot, Sara; George, Ronald B; Lehmann, Christian

    2017-01-01

    The outcome of patients in septic shock has been shown to be related to changes within the microcirculation. Modern imaging technologies are available to generate high resolution video recordings of the microcirculation in humans. However, evaluation of the microcirculation is not yet implemented in the routine clinical monitoring of critically ill patients. This is mainly due to the large amount of time and user interaction required by the current video analysis software. The aim of this study was to validate a newly developed automated method (CCTools®) for microcirculatory analysis of sublingual capillary perfusion in septic patients in comparison to standard semi-automated software (AVA3®). 204 videos from 47 patients were recorded using incident dark field (IDF) imaging. Total vessel density (TVD), proportion of perfused vessels (PPV), perfused vessel density (PVD), microvascular flow index (MFI) and heterogeneity index (HI) were measured using AVA3® and CCTools®. Significant differences between the numeric results obtained by the two different software packages were observed. The values for TVD, PVD and MFI were statistically related though. The automated software technique succeeds in showing septic shock-induced microcirculation alterations in near real time. However, we found wide limits of agreement between AVA3® and CCTools® values due to several technical factors that should be considered in future studies.
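    The reported indices follow consensus definitions: total vessel density is vessel length per image area, proportion of perfused vessels is perfused length over total length, and perfused vessel density is perfused length per area. A sketch computing TVD, PPV, and PVD from per-vessel measurements (input values illustrative; neither package's internals are shown here):

```python
# Sketch: standard sublingual microcirculation indices from per-vessel
# lengths, per-vessel perfusion flags, and the analyzed image area.

def perfusion_metrics(vessel_lengths_mm, perfused_flags, area_mm2):
    """Return (TVD mm/mm^2, PPV %, PVD mm/mm^2)."""
    total = sum(vessel_lengths_mm)
    perfused = sum(l for l, f in zip(vessel_lengths_mm, perfused_flags) if f)
    return total / area_mm2, 100.0 * perfused / total, perfused / area_mm2
```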

  5. The Study on Neuro-IE Management Software in Manufacturing Enterprises. -The Application of Video Analysis Technology

    NASA Astrophysics Data System (ADS)

    Bian, Jun; Fu, Huijian; Shang, Qian; Zhou, Xiangyang; Ma, Qingguo

    This paper analyzes the outstanding problems in current industrial production by reviewing the three stages of the Industrial Engineering Development. Based on investigations and interviews in enterprises, we propose the new idea of applying "computer video analysis technology" to new industrial engineering management software, and add a "loose coefficient" for each workstation to this software in order to arrange scientific and humanistic production. Meanwhile, we suggest utilizing Biofeedback Technology to promote further research on "the rules of workers' physiological, psychological and emotional changes in production". This new kind of combination will push forward industrial engineering theories and benefit enterprises in progressing towards flexible social production, and will thus have great theoretical, social, and practical value.

  6. Feasibility of video codec algorithms for software-only playback

    NASA Astrophysics Data System (ADS)

    Rodriguez, Arturo A.; Morse, Ken

    1994-05-01

    Software-only video codecs can provide good playback performance in desktop computers with a 486 or 68040 CPU running at 33 MHz without special hardware assistance. Typically, playback of compressed video can be categorized into three tasks: the actual decoding of the video stream, color conversion, and the transfer of decoded video data from system RAM to video RAM. By current standards, good playback performance is the decoding and display of video streams of 320 by 240 (or larger) compressed frames at 15 (or greater) frames per second. Software-only video codecs have evolved by modifying and tailoring existing compression methodologies to suit video playback in desktop computers. In this paper we examine the characteristics used to evaluate software-only video codec algorithms, namely: image fidelity (i.e., image quality), bandwidth (i.e., compression), ease of decoding (i.e., playback performance), memory consumption, compression-to-decompression asymmetry, scalability, and delay. We discuss the tradeoffs among these variables and the compromises that can be made to achieve low numerical complexity for software-only playback. Frame-differencing approaches are described, since software-only video codecs typically employ them to enhance playback performance. To complement other papers that appear in this session of the Proceedings, we review methods derived from binary pattern image coding, since these methods are amenable to software-only playback. In particular, we introduce a novel approach called pixel distribution image coding.
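    A frame-differencing scheme of the kind these codecs employ can be sketched as follows; the 8x8 block size and the change threshold are illustrative assumptions, not parameters from the paper:

```python
import numpy as np

def changed_blocks(prev, curr, block=8, threshold=10):
    """Indices of blocks whose mean absolute change exceeds the threshold.

    Only these blocks would need to be re-encoded for the next frame.
    """
    h, w = curr.shape
    hits = []
    for y in range(0, h, block):
        for x in range(0, w, block):
            d = np.abs(curr[y:y+block, x:x+block].astype(int)
                       - prev[y:y+block, x:x+block].astype(int))
            if d.mean() > threshold:
                hits.append((y // block, x // block))
    return hits

prev = np.zeros((16, 16), dtype=np.uint8)
curr = prev.copy()
curr[0:8, 8:16] = 50      # only the top-right block changes
moved = changed_blocks(prev, curr)
```

    Skipping unchanged blocks is what lets a software decoder keep up with the display loop on a modest CPU.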

  7. Software-codec-based full motion video conferencing on the PC using visual pattern image sequence coding

    NASA Astrophysics Data System (ADS)

    Barnett, Barry S.; Bovik, Alan C.

    1995-04-01

    This paper presents a real time full motion video conferencing system based on the Visual Pattern Image Sequence Coding (VPISC) software codec. The prototype system hardware is comprised of two personal computers, two camcorders, two frame grabbers, and an ethernet connection. The prototype system software has a simple structure. It runs under the Disk Operating System, and includes a user interface, a video I/O interface, an event driven network interface, and a free running or frame synchronous video codec that also acts as the controller for the video and network interfaces. Two video coders have been tested in this system. Simple implementations of Visual Pattern Image Coding and VPISC have both proven to support full motion video conferencing with good visual quality. Future work will concentrate on expanding this prototype to support the motion compensated version of VPISC, as well as encompassing point-to-point modem I/O and multiple network protocols. The application will be ported to multiple hardware platforms and operating systems. The motivation for developing this prototype system is to demonstrate the practicality of software based real time video codecs. Furthermore, software video codecs are not only cheaper, but are more flexible system solutions because they enable different computer platforms to exchange encoded video information without requiring on-board protocol compatible video codec hardware. Software based solutions enable true low cost video conferencing that fits the 'open systems' model of interoperability that is so important for building portable hardware and software applications.

  8. 77 FR 808 - Certain Video Analytics Software, Systems, Components Thereof, and Products Containing Same...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-01-06

    ... INTERNATIONAL TRADE COMMISSION [Investigation No. 337-TA-795] Certain Video Analytics Software... filed by ObjectVideo, Inc. of Reston, Virginia. 76 FR 45859 (Aug. 1, 2011). The complaint, as amended... certain video analytics software, systems, components thereof, and products containing same by reason of...

  9. Development of Students' Conceptual Thinking by Means of Video Analysis and Interactive Simulations at Technical Universities

    ERIC Educational Resources Information Center

    Hockicko, Peter; Krišták, Luboš; Nemec, Miroslav

    2015-01-01

    Video analysis, using the program Tracker (Open Source Physics), in the educational process introduces a new creative method of teaching physics and makes natural sciences more interesting for students. This way of exploring the laws of nature can amaze students because this illustrative and interactive educational software inspires them to think…

  10. Secure Video Surveillance System Acquisition Software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2009-12-04

    The SVSS Acquisition Software collects and displays video images from two cameras through a VPN and stores the images on a collection controller. The software is configured to allow a user to enter a time window to display up to 2 1/2 hours of video for review. The software collects images from the cameras at a rate of 1 image per second and automatically deletes images older than 3 hours. The software code operates in a Linux environment and can be run in a virtual machine on Windows XP. The Sandia software integrates the different COTS software packages together to build the video review system.
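    The retention policy described above (capture at 1 image per second, purge anything older than 3 hours) reduces to a timestamp filter. In this sketch the function name and the in-memory list of capture times are hypothetical stand-ins for the actual file store:

```python
import time

RETENTION_SECONDS = 3 * 3600  # delete images older than 3 hours

def expired(image_times, now):
    """Return the capture timestamps that fall outside the retention window."""
    return [t for t in image_times if now - t > RETENTION_SECONDS]

now = time.time()
times = [now - 4 * 3600, now - 2 * 3600, now - 10]
old = expired(times, now)   # only the 4-hour-old image is purged
```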

  11. SpotMetrics: An Open-Source Image-Analysis Software Plugin for Automatic Chromatophore Detection and Measurement

    PubMed Central

    Hadjisolomou, Stavros P.; El-Haddad, George

    2017-01-01

    Coleoid cephalopods (squid, octopus, and sepia) are renowned for their elaborate body patterning capabilities, which are employed for camouflage or communication. The specific chromatic appearance of a cephalopod, at any given moment, is a direct result of the combined action of their intradermal pigmented chromatophore organs and reflecting cells. Therefore, a lot can be learned about the cephalopod coloration system by video recording and analyzing the activation of individual chromatophores in time. The fact that adult cephalopods have small chromatophores, up to several hundred thousand in number, makes measurement and analysis over several seconds a difficult task. However, current advancements in videography enable high-resolution and high-framerate recording, which can be used to record chromatophore activity in greater detail and accuracy in both the space and time domains. In turn, the additional pixel information and extra frames per video from such recordings result in large video files of several gigabytes, even when the recording spans only a few minutes. We created a software plugin, “SpotMetrics,” that can automatically analyze high-resolution, high-framerate video of chromatophore organ activation in time. This image analysis software can track hundreds of individual chromatophores over several hundred frames to provide measurements of size and color. This software may also be used to measure differences in chromatophore activation during different behaviors, which will contribute to our understanding of the cephalopod sensorimotor integration system. In addition, this software can potentially be utilized to detect numbers of round objects and size changes in time, such as eye pupil size or number of bacteria in a sample. Thus, we are making this software plugin freely available as open-source because we believe it will be of benefit to other colleagues both in the cephalopod biology field and also within other disciplines. PMID:28298896

  12. 77 FR 45376 - Certain Video Analytics Software, Systems, Components Thereof, and Products Containing Same...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-07-31

    ... INTERNATIONAL TRADE COMMISSION [Investigation No. 337-TA-852] Certain Video Analytics Software... 337 of the Tariff Act of 1930, as amended, 19 U.S.C. 1337, on behalf of ObjectVideo, Inc. of Reston... sale within the United States after importation of certain video analytics software, systems...

  13. Representation of the Physiological Factors Contributing to Postflight Changes in Functional Performance Using Motion Analysis Software

    NASA Technical Reports Server (NTRS)

    Parks, Kelsey

    2010-01-01

    Astronauts experience changes in multiple physiological systems due to exposure to the microgravity conditions of space flight. To understand how changes in physiological function influence functional performance, a testing procedure has been developed that evaluates both astronaut postflight functional performance and related physiological changes. Astronauts complete seven functional and physiological tests. The objective of this project is to use motion tracking and digitizing software to visually display the postflight decrement in the functional performance of the astronauts. The motion analysis software will be used to digitize astronaut data videos into stick figure videos to represent the astronauts as they perform the Functional Tasks Tests. This project will benefit NASA by allowing NASA scientists to present data of their neurological studies without revealing the identities of the astronauts.

  14. Magnetic Braking: A Video Analysis

    NASA Astrophysics Data System (ADS)

    Molina-Bolívar, J. A.; Abella-Palacios, A. J.

    2012-10-01

    This paper presents a laboratory exercise that introduces students to the use of video analysis software through a demonstration of Lenz's law. Digital techniques have proved to be very useful for the understanding of physical concepts. In particular, the availability of affordable digital video offers students the opportunity to actively engage in kinematics in introductory-level physics.1,2 By using a digital video's frame-advance features and "marking" the position of a moving object in each frame, students are able to more precisely determine the position of an object at much smaller time increments than would be possible with common timing devices. Once the student collects data consisting of positions and times, these values may be manipulated to determine velocity and acceleration. There are a variety of commercial and free applications that can be used for video analysis. Because the relevant technology has become inexpensive, video analysis has become a prevalent tool in introductory physics courses.
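    The positions-and-times workflow described above reduces to finite differences. A minimal sketch, assuming a 30 fps clip of a freely falling object rather than the magnetic-braking setup itself:

```python
import numpy as np

fps = 30.0                      # standard video frame rate
t = np.arange(10) / fps         # timestamps of the marked frames (s)
y = 0.5 * 9.81 * t**2           # marked positions of a falling object (m)

v = np.gradient(y, t)           # central-difference velocity (m/s)
a = np.gradient(v, t)           # central-difference acceleration (m/s^2)
```

    At interior frames the recovered acceleration matches g; the endpoints use one-sided differences and are less accurate, which is worth pointing out to students.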

  15. Investigating the Magnetic Interaction with Geomag and Tracker Video Analysis: Static Equilibrium and Anharmonic Dynamics

    ERIC Educational Resources Information Center

    Onorato, P.; Mascheretti, P.; DeAmbrosis, A.

    2012-01-01

    In this paper, we describe how simple experiments realizable by using easily found and low-cost materials allow students to explore quantitatively the magnetic interaction thanks to the help of an Open Source Physics tool, the Tracker Video Analysis software. The static equilibrium of a "column" of permanents magnets is carefully investigated by…

  16. Science on TeacherTube: A Mixed Methods Analysis of Teacher Produced Video

    NASA Astrophysics Data System (ADS)

    Chmiel, Margaret (Marjee)

    Increased bandwidth, inexpensive video cameras and easy-to-use video editing software have made social media sites featuring user generated video (UGV) an increasingly popular vehicle for online communication. As such, UGV have come to play a role in education, both formal and informal, but there has been little research on this topic in scholarly literature. In this mixed-methods study, a content and discourse analysis are used to describe the most successful UGV in the science channel of an education-focused site called TeacherTube. The analysis finds that state achievement tests, and their focus on vocabulary and recall-level knowledge, drive much of the content found on TeacherTube.

  17. Picturing Video

    NASA Technical Reports Server (NTRS)

    2000-01-01

    Video Pics is a software program that generates high-quality photos from video. The software was developed under an SBIR contract with Marshall Space Flight Center by Redhawk Vision, Inc.--a subsidiary of Irvine Sensors Corporation. Video Pics takes information content from multiple frames of video and enhances the resolution of a selected frame. The resulting image has enhanced sharpness and clarity like that of a 35 mm photo. The images are generated as digital files and are compatible with image editing software.

  18. An analysis of functional shoulder movements during task performance using Dartfish movement analysis software.

    PubMed

    Khadilkar, Leenesh; MacDermid, Joy C; Sinden, Kathryn E; Jenkyn, Thomas R; Birmingham, Trevor B; Athwal, George S

    2014-01-01

    Video-based movement analysis software (Dartfish) has potential for clinical applications for understanding shoulder motion if functional measures can be reliably obtained. The primary purpose of this study was to describe the functional range of motion (ROM) of the shoulder used to perform a subset of functional tasks. A second purpose was to assess the reliability of functional ROM measurements obtained by different raters using Dartfish software. Ten healthy participants, mean age 29 ± 5 years, were videotaped while performing five tasks selected from the Disabilities of the Arm, Shoulder and Hand (DASH). Video cameras and markers were used to obtain video images suitable for analysis in Dartfish software. Three repetitions of each task were performed. Shoulder movements from all three repetitions were analyzed using Dartfish software. The tracking tool of the Dartfish software was used to obtain shoulder joint angles and arcs of motion. Test-retest and inter-rater reliability of the measurements were evaluated using intraclass correlation coefficients (ICC). Maximum (coronal plane) abduction (118° ± 16°) and (sagittal plane) flexion (111° ± 15°) was observed during 'washing one's hair;' maximum extension (-68° ± 9°) was identified during 'washing one's own back.' Minimum shoulder ROM was observed during 'opening a tight jar' (33° ± 13° abduction and 13° ± 19° flexion). Test-retest reliability (ICC = 0.45 to 0.94) suggests high inter-individual task variability, and inter-rater reliability (ICC = 0.68 to 1.00) showed moderate to excellent agreement. KEY FINDINGS INCLUDE: 1) functional shoulder ROM identified in this study was comparable to that reported in similar studies; 2) healthy individuals require less than full ROM when performing five common ADL tasks; 3) high participant variability was observed during performance of the five ADL tasks; and 4) Dartfish software provides a clinically relevant tool to analyze shoulder function.
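    A joint angle of the kind the tracking tool reports can be computed from three tracked marker positions. This sketch uses hypothetical 2D marker coordinates, not Dartfish's actual API:

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle at vertex b (degrees) formed by markers a-b-c."""
    u = np.asarray(a, float) - np.asarray(b, float)
    v = np.asarray(c, float) - np.asarray(b, float)
    cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    # clip guards against rounding just outside [-1, 1]
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

# arm marker level with the shoulder, trunk marker below it: 90 deg abduction
angle = joint_angle((1, 0), (0, 0), (0, -1))
```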

  19. HITCal: a software tool for analysis of video head impulse test responses.

    PubMed

    Rey-Martinez, Jorge; Batuecas-Caletrio, Angel; Matiño, Eusebi; Perez Fernandez, Nicolás

    2015-09-01

    The developed software (HITCal) may be a useful tool in the analysis and measurement of saccadic video head impulse test (vHIT) responses, and based on the experience obtained during its use the authors suggest that HITCal is an excellent method for enhanced exploration of vHIT outputs. The objective was to develop a software method to analyze and explore the vHIT responses, mainly saccades. HITCal was written using a computational development program; the function to access a vHIT file was programmed; extended head impulse exploration and measurement tools were created; and an automated saccade analysis was developed using an experimental algorithm. For pre-release HITCal laboratory tests, a database of head impulse tests (HITs) was created with data collected retrospectively in three reference centers. This HITs database was evaluated by humans and was also computed with HITCal. The authors have successfully built HITCal, and it has been released as open-source software; the developed software was fully operative and all the proposed characteristics were incorporated in the released version. The automated saccades algorithm implemented in HITCal has good concordance with the assessment by human observers (Cohen's kappa coefficient = 0.7).
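    A Cohen's kappa of the kind quoted above measures rater agreement corrected for chance. The sketch below uses invented saccade labels for illustration, not data from the HITs database:

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters' categorical labels on the same items."""
    n = len(rater_a)
    po = sum(x == y for x, y in zip(rater_a, rater_b)) / n   # observed agreement
    cats = set(rater_a) | set(rater_b)
    pe = sum((rater_a.count(c) / n) * (rater_b.count(c) / n)
             for c in cats)                                   # chance agreement
    return (po - pe) / (1 - pe)

human = ['saccade', 'saccade', 'none', 'none', 'saccade', 'none']
auto  = ['saccade', 'none',    'none', 'none', 'saccade', 'none']
kappa = cohens_kappa(human, auto)
```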

  20. High-Speed Video Analysis in a Conceptual Physics Class

    NASA Astrophysics Data System (ADS)

    Desbien, Dwain M.

    2011-09-01

    The use of probeware and computers has become quite common in introductory physics classrooms. Video analysis is also becoming more popular and is available to a wide range of students through commercially available and/or free software.2,3 Video analysis allows for the study of motions that cannot be easily measured in the traditional lab setting and also allows real-world situations to be analyzed. Many motions are too fast to be easily captured at the standard video frame rate of 30 frames per second (fps) employed by most video cameras. This paper will discuss using a consumer camera that can record high-frame-rate video in a college-level conceptual physics class. In particular, this will involve the use of model rockets to determine the acceleration during the boost period right at launch and compare it to a simple model of the expected acceleration.

  1. Automated software for analysis of ciliary beat frequency and metachronal wave orientation in primary ciliary dyskinesia.

    PubMed

    Mantovani, Giulia; Pifferi, Massimo; Vozzi, Giovanni

    2010-06-01

    Patients with primary ciliary dyskinesia (PCD) have structural and/or functional alterations of cilia that imply deficits in mucociliary clearance and different respiratory pathologies. A useful indicator for the difficult diagnosis is the ciliary beat frequency (CBF), which is significantly lower in pathological cases than in physiological ones. The CBF computation is not rapid; therefore, the aim of this study is to propose an automated method to evaluate it directly from videos of ciliated cells. The cells are taken from the inferior nasal turbinates, and videos of ciliary movements are registered and then processed by the developed software. The software consists of the extraction of features from videos (written in C++) and the computation of the frequency (written in Matlab). This system was tested both on samples from the nasal cavity and on software models, and the results were promising: in a few seconds it can compute a frequency comparable with that measured by visual methods. It should be noted that the reliability of the computation increases with the quality of the acquisition system and especially with the sampling frequency. It is concluded that the developed software could be a useful means for PCD diagnosis.
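    A beat-frequency estimate of this kind is typically the dominant peak of the Fourier spectrum of a pixel-intensity trace. A minimal sketch with a synthetic 7 Hz signal; the paper's actual feature extraction is not reproduced here:

```python
import numpy as np

def beat_frequency(signal, fs):
    """Dominant frequency (Hz) of a pixel-intensity time series via FFT."""
    signal = np.asarray(signal, float) - np.mean(signal)  # remove DC component
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return freqs[np.argmax(spectrum)]

fs = 120.0                            # camera frame rate, Hz
t = np.arange(0, 2, 1 / fs)           # 2 s of samples
trace = np.sin(2 * np.pi * 7.0 * t)   # synthetic 7 Hz ciliary beat
cbf = beat_frequency(trace, fs)
```

    The frequency resolution is 1/duration (here 0.5 Hz), which is one reason the abstract notes that reliability improves with the sampling setup.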

  2. SwarmSight: Real-time Tracking of Insect Antenna Movements and Proboscis Extension Reflex Using a Common Preparation and Conventional Hardware

    PubMed Central

    Birgiolas, Justas; Jernigan, Christopher M.; Gerkin, Richard C.; Smith, Brian H.; Crook, Sharon M.

    2017-01-01

    Many scientifically and agriculturally important insects use antennae to detect the presence of volatile chemical compounds and extend their proboscis during feeding. The ability to rapidly obtain high-resolution measurements of natural antenna and proboscis movements and assess how they change in response to chemical, developmental, and genetic manipulations can aid the understanding of insect behavior. By extending our previous work on assessing aggregate insect swarm or animal group movements from natural and laboratory videos using the video analysis software SwarmSight, we developed a novel, free, and open-source software module, SwarmSight Appendage Tracking (SwarmSight.org) for frame-by-frame tracking of insect antenna and proboscis positions from conventional web camera videos using conventional computers. The software processes frames about 120 times faster than humans, performs at better than human accuracy, and, using 30 frames per second (fps) videos, can capture antennal dynamics up to 15 Hz. The software was used to track the antennal response of honey bees to two odors and found significant mean antennal retractions away from the odor source about 1 s after odor presentation. We observed antenna position density heat map cluster formation and cluster and mean angle dependence on odor concentration. PMID:29364251

  3. Engineering visualization utilizing advanced animation

    NASA Technical Reports Server (NTRS)

    Sabionski, Gunter R.; Robinson, Thomas L., Jr.

    1989-01-01

    Engineering visualization is the use of computer graphics to depict engineering analysis and simulation in visual form from project planning through documentation. Graphics displays let engineers see data represented dynamically, which permits the quick evaluation of results. The current state of graphics hardware and software generally allows the creation of two types of 3D graphics. The use of animated video as an engineering visualization tool is presented. The engineering, animation, and videography aspects of animated video production are each discussed. Specific issues include the integration of staffing expertise, hardware, software, and the various production processes. A detailed explanation of the animation process reveals the capabilities of this unique engineering visualization method. Automation of animation and video production processes is covered and future directions are proposed.

  4. Automated tracking of whiskers in videos of head fixed rodents.

    PubMed

    Clack, Nathan G; O'Connor, Daniel H; Huber, Daniel; Petreanu, Leopoldo; Hires, Andrew; Peron, Simon; Svoboda, Karel; Myers, Eugene W

    2012-01-01

    We have developed software for fully automated tracking of vibrissae (whiskers) in high-speed videos (>500 Hz) of head-fixed, behaving rodents trimmed to a single row of whiskers. Performance was assessed against a manually curated dataset consisting of 1.32 million video frames comprising 4.5 million whisker traces. The current implementation detects whiskers with a recall of 99.998% and identifies individual whiskers with 99.997% accuracy. The average processing rate for these images was 8 Mpx/s/cpu (2.6 GHz Intel Core2, 2 GB RAM). This translates to 35 processed frames per second for a 640 px×352 px video of 4 whiskers. The speed and accuracy achieved enables quantitative behavioral studies where the analysis of millions of video frames is required. We used the software to analyze the evolving whisking strategies as mice learned a whisker-based detection task over the course of 6 days (8148 trials, 25 million frames) and measure the forces at the sensory follicle that most underlie haptic perception.
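    The quoted throughput figures are mutually consistent, as a quick check shows:

```python
# Throughput check: 8 Mpx/s per CPU on a 640 px x 352 px frame
pixels_per_frame = 640 * 352          # 225,280 px per frame
fps = 8_000_000 / pixels_per_frame    # ~35.5 processed frames per second
```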

  5. Automated Tracking of Whiskers in Videos of Head Fixed Rodents

    PubMed Central

    Clack, Nathan G.; O'Connor, Daniel H.; Huber, Daniel; Petreanu, Leopoldo; Hires, Andrew; Peron, Simon; Svoboda, Karel; Myers, Eugene W.

    2012-01-01

    We have developed software for fully automated tracking of vibrissae (whiskers) in high-speed videos (>500 Hz) of head-fixed, behaving rodents trimmed to a single row of whiskers. Performance was assessed against a manually curated dataset consisting of 1.32 million video frames comprising 4.5 million whisker traces. The current implementation detects whiskers with a recall of 99.998% and identifies individual whiskers with 99.997% accuracy. The average processing rate for these images was 8 Mpx/s/cpu (2.6 GHz Intel Core2, 2 GB RAM). This translates to 35 processed frames per second for a 640 px×352 px video of 4 whiskers. The speed and accuracy achieved enables quantitative behavioral studies where the analysis of millions of video frames is required. We used the software to analyze the evolving whisking strategies as mice learned a whisker-based detection task over the course of 6 days (8148 trials, 25 million frames) and measure the forces at the sensory follicle that most underlie haptic perception. PMID:22792058

  6. The Use Of Videography For Three-Dimensional Motion Analysis

    NASA Astrophysics Data System (ADS)

    Hawkins, D. A.; Hawthorne, D. L.; DeLozier, G. S.; Campbell, K. R.; Grabiner, M. D.

    1988-02-01

    Special video path editing capabilities with custom hardware and software have been developed for use in conjunction with existing video acquisition hardware and firmware. This system has simplified the task of quantifying the kinematics of human movement. A set of retro-reflective markers are secured to a subject performing a given task (i.e. walking, throwing, swinging a golf club, etc.). Multiple cameras, a video processor, and a computer work station collect video data while the task is performed. Software has been developed to edit video files, create centroid data, and identify marker paths. Multi-camera path files are combined to form a 3D path file using the DLT method of cinematography. A separate program converts the 3D path file into kinematic data by creating a set of local coordinate axes and performing a series of coordinate transformations from one local system to the next. The kinematic data is then displayed for appropriate review and/or comparison.

  7. Processing Ocean Images to Detect Large Drift Nets

    NASA Technical Reports Server (NTRS)

    Veenstra, Tim

    2009-01-01

    A computer program processes the digitized outputs of a set of downward-looking video cameras aboard an aircraft flying over the ocean. The purpose served by this software is to facilitate the detection of large drift nets that have been lost, abandoned, or jettisoned. The development of this software and of the associated imaging hardware is part of a larger effort to develop means of detecting and removing large drift nets before they cause further environmental damage to the ocean and to shores on which they sometimes impinge. The software is capable of near-real-time processing of as many as three video feeds at a rate of 30 frames per second. After a user sets the parameters of an adjustable algorithm, the software analyzes each video stream, detects any anomaly, issues a command to point a high-resolution camera toward the location of the anomaly, and, once the camera has been so aimed, issues a command to trigger the camera shutter. The resulting high-resolution image is digitized, and the resulting data are automatically uploaded to the operator's computer for analysis.
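    The adjustable anomaly-detection step can be sketched as differencing each frame against a background model of the ocean surface. The threshold, array sizes, and function name below are illustrative assumptions, not the program's actual parameters:

```python
import numpy as np

def anomaly_pixels(frame, background, threshold=30):
    """Flag pixels that deviate from the ocean background by > threshold."""
    diff = np.abs(frame.astype(int) - background.astype(int))
    return np.argwhere(diff > threshold)   # (row, col) of each anomalous pixel

bg = np.full((4, 4), 100, dtype=np.uint8)  # uniform ocean background
fr = bg.copy()
fr[2, 3] = 200                             # bright anomaly, e.g. a net float
hits = anomaly_pixels(fr, bg)
```

    A cluster of flagged pixels would then drive the pointing command for the high-resolution camera.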

  8. Analyzing Virtual Physics Simulations with Tracker

    ERIC Educational Resources Information Center

    Claessens, Tom

    2017-01-01

    In the physics teaching community, Tracker is well known as user-friendly, open-source video analysis software, authored by Douglas Brown. With this tool, the user can trace markers indicated on a video or on stroboscopic photos and perform kinematic analyses. Tracker also includes a data modeling tool that allows one to fit some theoretical…

  9. Accuracy and Feasibility of Video Analysis for Assessing Hamstring Flexibility and Validity of the Sit-and-Reach Test

    ERIC Educational Resources Information Center

    Mier, Constance M.

    2011-01-01

    The accuracy of video analysis of the passive straight-leg raise test (PSLR) and the validity of the sit-and-reach test (SR) were tested in 60 men and women. Computer software measured static hip-joint flexion accurately. High within-session reliability of the PSLR was demonstrated (R greater than 0.97). Test-retest (separate days) reliability for…

  10. Creating Math Videos: Comparing Platforms and Software

    ERIC Educational Resources Information Center

    Abbasian, Reza O.; Sieben, John T.

    2016-01-01

    In this paper we present a short tutorial on creating mini-videos using two platforms--PCs and tablets such as iPads--and software packages that work with these devices. Specifically, we describe the step-by-step process of creating and editing videos using a Wacom Intuos pen-tablet plus Camtasia software on a PC platform and using the software…

  11. Software for Photometric and Astrometric Reduction of Video Meteors

    NASA Astrophysics Data System (ADS)

    Atreya, Prakash; Christou, Apostolos

    2007-12-01

    SPARVM is a Software for Photometric and Astrometric Reduction of Video Meteors being developed at Armagh Observatory. It is written in Interactive Data Language (IDL) and is designed to run primarily under the Linux platform. The basic features of the software will be the derivation of light curves and the estimation of angular velocity and radiant position for single-station data; for double-station data, the calculation of 3D coordinates of meteors, velocity and brightness, and the estimation of the meteoroid's orbit, including uncertainties. Currently, the software supports extraction of time and date from video frames, estimation of camera pointing (azimuth, altitude), finding stellar sources in video frames, and transformation of coordinates from video frames to the horizontal coordinate system (azimuth, altitude) and the equatorial coordinate system (RA, Dec).
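    The horizontal-to-equatorial transformation mentioned above rests on standard spherical astronomy. A minimal sketch of the declination step, assuming azimuth is measured from north; the hour-angle-to-RA step is omitted:

```python
import math

def altaz_to_dec(alt_deg, az_deg, lat_deg):
    """Declination (deg) of a point at (alt, az) for a site at lat_deg.

    Uses sin(dec) = sin(alt)sin(lat) + cos(alt)cos(lat)cos(az),
    with azimuth measured from north.
    """
    alt, az, lat = map(math.radians, (alt_deg, az_deg, lat_deg))
    s = (math.sin(alt) * math.sin(lat)
         + math.cos(alt) * math.cos(lat) * math.cos(az))
    return math.degrees(math.asin(s))

# A meteor recorded at the zenith has declination equal to the site latitude
dec = altaz_to_dec(90.0, 0.0, 54.35)   # ~54.35 N, roughly Armagh's latitude
```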

  12. Vision-sensing image analysis for GTAW process control

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Long, D.D.

    1994-11-01

    Image analysis of a gas tungsten arc welding (GTAW) process was completed using video images from a charge coupled device (CCD) camera inside a specially designed coaxial (GTAW) electrode holder. Video data was obtained from filtered and unfiltered images, with and without the GTAW arc present, showing weld joint features and locations. Data Translation image processing boards, installed in an IBM PC AT 386 compatible computer, and Media Cybernetics image processing software were used to investigate edge flange weld joint geometry for image analysis.

  13. Video analysis of the flight of a model aircraft

    NASA Astrophysics Data System (ADS)

    Tarantino, Giovanni; Fazio, Claudio

    2011-11-01

    A video-analysis software tool has been employed in order to measure the steady-state values of the kinematics variables describing the longitudinal behaviour of a radio-controlled model aircraft during take-off, climbing and gliding. These experimental results have been compared with the theoretical steady-state configurations predicted by the phugoid model for longitudinal flight. A comparison with the parameters and performance of the full-size aircraft has also been outlined.

  14. Earthscape, a Multi-Purpose Interactive 3d Globe Viewer for Hybrid Data Visualization and Analysis

    NASA Astrophysics Data System (ADS)

    Sarthou, A.; Mas, S.; Jacquin, M.; Moreno, N.; Salamon, A.

    2015-08-01

    The hybrid visualization and interaction tool EarthScape is presented here. The software is able to display simultaneously LiDAR point clouds, draped videos with moving footprint, volume scientific data (using volume rendering, isosurface and slice plane), raster data such as still satellite images, vector data and 3D models such as buildings or vehicles. The application runs on touch-screen devices such as tablets. The software is based on open source libraries, such as OpenSceneGraph, osgEarth and OpenCV, and shader programming is used to implement volume rendering of scientific data. The next goal of EarthScape is to perform data analysis using ENVI Services Engine, a cloud data analysis solution. EarthScape is also designed to be a client of Jagwire, which provides multisource geo-referenced video fluxes. When all these components are included, EarthScape will be a multi-purpose platform that will provide at the same time data analysis, hybrid visualization and complex interactions. The software is available on demand for free at france@exelisvis.com.

  15. Integrating Time-Synchronized Video with Other Geospatial and Temporal Data for Remote Science Operations

    NASA Technical Reports Server (NTRS)

    Cohen, Tamar E.; Lees, David S.; Deans, Matthew C.; Lim, Darlene S. S.; Lee, Yeon Jin Grace

    2018-01-01

    Exploration Ground Data Systems (xGDS) supports rapid scientific decision making by synchronizing video in context with map, instrument data visualization, geo-located notes and any other collected data. xGDS is an open source web-based software suite developed at NASA Ames Research Center to support remote science operations in analog missions and prototype solutions for remote planetary exploration. (See Appendix B) Typical video systems are designed to play or stream video only, independent of other data collected in the context of the video. Providing customizable displays for monitoring live video and data as well as replaying recorded video and data helps end users build up a rich situational awareness. xGDS was designed to support remote field exploration with unreliable networks. Commercial digital recording systems operate under the assumption that there is a stable and reliable network between the source of the video and the recording system. In many field deployments and space exploration scenarios, this is not the case - there are both anticipated and unexpected network losses. xGDS' Video Module handles these interruptions, storing the available video, organizing and characterizing the dropouts, and presenting the video for streaming or replay to the end user including visualization of the dropouts. Scientific instruments often require custom or expensive software to analyze and visualize collected data. This limits the speed at which the data can be visualized and limits access to the data to those users with the software. xGDS' Instrument Module integrates with instruments that collect and broadcast data in a single snapshot or that continually collect and broadcast a stream of data. While seeing a visualization of collected instrument data is informative, showing the context for the collected data, other data collected nearby along with events indicating current status helps remote science teams build a better understanding of the environment. 
Further, sharing geo-located, tagged notes recorded by the scientists and others on the team spurs deeper analysis of the data.
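    The dropout bookkeeping the abstract describes, storing available segments and characterizing the gaps between them, can be sketched as a small interval computation. This is an illustrative reconstruction, not xGDS code; the `Segment` type and `find_dropouts` helper are hypothetical names:

```python
from dataclasses import dataclass

@dataclass
class Segment:
    start: float  # seconds from start of the recording episode
    end: float

def find_dropouts(segments, episode_start, episode_end):
    """Return (gap_start, gap_end) intervals not covered by any recorded segment."""
    gaps = []
    cursor = episode_start
    for seg in sorted(segments, key=lambda s: s.start):
        if seg.start > cursor:
            gaps.append((cursor, seg.start))
        cursor = max(cursor, seg.end)
    if cursor < episode_end:
        gaps.append((cursor, episode_end))
    return gaps
```

    A player can then overlay the returned gap intervals on the timeline so viewers see exactly where network losses occurred.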

  16. Quantitative Analysis of Color Differences within High Contrast, Low Power Reversible Electrophoretic Displays

    DOE PAGES

    Giera, Brian; Bukosky, Scott; Lee, Elaine; ...

    2018-01-23

    Here, quantitative color analysis is performed on videos of high contrast, low power reversible electrophoretic deposition (EPD)-based displays operated under different applied voltages. This analysis is implemented in open-source software, relies on a color differentiation metric, ΔE*00, derived from digital video, and provides an intuitive relationship between the operating conditions of the devices and their performance. Time-dependent ΔE*00 color analysis reveals color relaxation behavior, recoverability for different voltage sequences, and operating conditions that can lead to optimal performance.
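    The paper's metric, ΔE*00 (CIEDE2000), has a lengthy formula; as a simplified stand-in, the pipeline from video frame colors to a color-difference number can be sketched with the older Euclidean ΔE*76 metric. The function names are hypothetical and this is not the authors' code:

```python
import math

def srgb_to_lab(r, g, b):
    """Convert an 8-bit sRGB color to CIE L*a*b* (D65 white point)."""
    def lin(c):
        c /= 255.0
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    rl, gl, bl = lin(r), lin(g), lin(b)
    # sRGB-to-XYZ matrix, D65 reference white
    x = 0.4124 * rl + 0.3576 * gl + 0.1805 * bl
    y = 0.2126 * rl + 0.7152 * gl + 0.0722 * bl
    z = 0.0193 * rl + 0.1192 * gl + 0.9505 * bl
    def f(t):
        return t ** (1 / 3) if t > (6 / 29) ** 3 else t / (3 * (6 / 29) ** 2) + 4 / 29
    fx, fy, fz = f(x / 0.95047), f(y / 1.0), f(z / 1.08883)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

def delta_e76(rgb1, rgb2):
    """Euclidean distance in L*a*b* space -- the simple ΔE*76 metric.
    The paper uses the more elaborate ΔE*00, which adds weighting terms."""
    return math.dist(srgb_to_lab(*rgb1), srgb_to_lab(*rgb2))
```

    Applying such a function to the mean color of each video frame against a reference state yields the kind of time-dependent color-difference trace the abstract describes.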

  17. Quantitative Analysis of Color Differences within High Contrast, Low Power Reversible Electrophoretic Displays

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Giera, Brian; Bukosky, Scott; Lee, Elaine

    Here, quantitative color analysis is performed on videos of high contrast, low power reversible electrophoretic deposition (EPD)-based displays operated under different applied voltages. This analysis is implemented in open-source software, relies on a color differentiation metric, ΔE*00, derived from digital video, and provides an intuitive relationship between the operating conditions of the devices and their performance. Time-dependent ΔE*00 color analysis reveals color relaxation behavior, recoverability for different voltage sequences, and operating conditions that can lead to optimal performance.

  18. Helmet-Cam: tool for assessing miners’ respirable dust exposure

    PubMed Central

    Cecala, A.B.; Reed, W.R.; Joy, G.J.; Westmoreland, S.C.; O’Brien, A.D.

    2015-01-01

    Video technology coupled with datalogging exposure monitors has been used to evaluate worker exposure to different types of contaminants. However, previous applications of this technology used a stationary video camera to record the worker’s activity while the worker wore some type of contaminant monitor. These techniques are not applicable to mobile workers in the mining industry, who need to move around the operation while performing their duties. The Helmet-Cam is a recently developed exposure assessment tool that integrates a person-wearable video recorder with a datalogging dust monitor. These are worn by the miner in a backpack, safety belt or safety vest to identify areas or job tasks of elevated exposure. After a miner performs his or her job while wearing the unit, the video and dust exposure data files are downloaded to a computer and then merged through a NIOSH-developed computer software program called Enhanced Video Analysis of Dust Exposure (EVADE). By providing synchronized playback of the merged video footage and dust exposure data, the EVADE software allows for the assessment and identification of key work areas and processes, as well as work tasks, that significantly impact a worker’s personal respirable dust exposure. The Helmet-Cam technology has been tested at a number of metal/nonmetal mining operations and has proven to be a valuable assessment tool. Mining companies wishing to use this technique can purchase a commercially available video camera and an instantaneous dust monitor to obtain the necessary data, and the NIOSH-developed EVADE software will be available for download at no cost on the NIOSH website. PMID:26380529
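    The core of the merge step is aligning two time series recorded by independent devices. A minimal sketch, with a hypothetical `merge_exposure` helper (not the EVADE implementation), attaches the most recent dust reading to each video frame timestamp:

```python
import bisect

def merge_exposure(frame_times, dust_times, dust_values):
    """For each video frame timestamp, attach the most recent dust reading.
    dust_times must be sorted ascending; frames before the first reading get None."""
    merged = []
    for t in frame_times:
        i = bisect.bisect_right(dust_times, t) - 1
        merged.append((t, dust_values[i] if i >= 0 else None))
    return merged
```

    Synchronized playback then amounts to stepping through the merged list while rendering the corresponding frame and exposure value together.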

  19. A system for beach video-monitoring: Beachkeeper plus

    NASA Astrophysics Data System (ADS)

    Brignone, Massimo; Schiaffino, Chiara F.; Isla, Federico I.; Ferrari, Marco

    2012-12-01

    Suitable knowledge of coastal systems, of their morphodynamic characteristics and of their response to storm events and man-made structures is essential for littoral conservation and management. Nowadays, webcams represent a useful means of obtaining information from beaches. Video-monitoring techniques are generally site specific, and software packages that work with any image acquisition system are rare. Therefore, this work presents the theory and applications of an experimental video-monitoring software package: Beachkeeper plus, a freeware non-profit program that can be employed and redistributed without modification. A license file is provided inside the software package and in the user guide. Beachkeeper plus is based on Matlab® and can be used to analyze images and photos coming from any kind of acquisition system (webcams, digital cameras or images downloaded from the internet), without any a priori information or laboratory study of the acquisition system itself. Therefore, it could become a useful tool for beach planning. Through a simple guided interface, images can be analyzed by performing georeferencing, rectification, averaging and variance computation. The software was first operated in Pietra Ligure (Italy), using images from a tourist webcam, and in Mar del Plata (Argentina), using images from a digital camera. In both cases its reliability under different geomorphologic and morphodynamic conditions was confirmed by the good quality of the images obtained after georeferencing, rectification and averaging.
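    The averaging and variance products mentioned above are pixel-wise statistics over a stack of co-registered frames, a standard construction in coastal video monitoring. Beachkeeper plus is Matlab-based; the sketch below is a hypothetical Python rendering of the idea, not the program's code:

```python
def time_average_and_variance(frames):
    """Pixel-wise mean and variance across co-registered grayscale frames.
    frames: list of equally sized 2-D lists (rows of pixel intensities).
    The mean image smooths wave breaking into bright bands; the variance
    image highlights regions of persistent change such as the swash zone."""
    n = len(frames)
    h, w = len(frames[0]), len(frames[0][0])
    mean = [[sum(f[y][x] for f in frames) / n for x in range(w)] for y in range(h)]
    var = [[sum((f[y][x] - mean[y][x]) ** 2 for f in frames) / n for x in range(w)]
           for y in range(h)]
    return mean, var
```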

  20. Image Processing Software

    NASA Technical Reports Server (NTRS)

    1990-01-01

    The Ames digital image velocimetry technology has been incorporated in a commercially available image processing software package that allows motion measurement of images on a PC alone. The software, manufactured by Werner Frei Associates, is IMAGELAB FFT. IMAGELAB FFT is a general purpose image processing system with a variety of other applications, among them image enhancement of fingerprints and use by banks and law enforcement agencies for analysis of videos run during robberies.

  1. Using video-annotation software to identify interactions in group therapies for schizophrenia: assessing reliability and associations with outcomes.

    PubMed

    Orfanos, Stavros; Akther, Syeda Ferhana; Abdul-Basit, Muhammad; McCabe, Rosemarie; Priebe, Stefan

    2017-02-10

    Research has shown that interactions in group therapies for people with schizophrenia are associated with a reduction in negative symptoms. However, it is unclear which specific interactions in groups are linked with these improvements. The aims of this exploratory study were to i) develop and test the reliability of using video-annotation software to measure interactions in group therapies in schizophrenia and ii) explore the relationship between interactions in group therapies for schizophrenia with clinically relevant changes in negative symptoms. Video-annotation software was used to annotate interactions from participants selected across nine video-recorded out-patient therapy groups (N = 81). Using the Individual Group Member Interpersonal Process Scale, interactions were coded from participants who demonstrated either a clinically significant improvement (N = 9) or no change (N = 8) in negative symptoms at the end of therapy. Interactions were measured from the first and last sessions of attendance (>25 h of therapy). Inter-rater reliability between two independent raters was measured. Binary logistic regression analysis was used to explore the association between the frequency of interactive behaviors and changes in negative symptoms, assessed using the Positive and Negative Syndrome Scale. Of the 1275 statements that were annotated using ELAN, 1191 (93%) had sufficient audio and visual quality to be coded using the Individual Group Member Interpersonal Process Scale. Rater-agreement was high across all interaction categories (>95% average agreement). A higher frequency of self-initiated statements measured in the first session was associated with improvements in negative symptoms. The frequency of questions and giving advice measured in the first session of attendance was associated with improvements in negative symptoms; although this was only a trend. 
Video-annotation software can be used to reliably identify interactive behaviors in group therapies for schizophrenia. The results suggest that proactive communicative gestures, as assessed by the video analysis, predict outcomes. Future research should use this novel method in larger and clinically different samples to explore which aspects of therapy facilitate such proactive communication early in therapy.

  2. TLM-Tracker: software for cell segmentation, tracking and lineage analysis in time-lapse microscopy movies.

    PubMed

    Klein, Johannes; Leupold, Stefan; Biegler, Ilona; Biedendieck, Rebekka; Münch, Richard; Jahn, Dieter

    2012-09-01

    Time-lapse imaging in combination with fluorescence microscopy techniques enables the investigation of gene regulatory circuits and has uncovered phenomena such as culture heterogeneity. In this context, computational image processing for the analysis of single-cell behaviour plays an increasing role in systems biology and mathematical modelling approaches. Consequently, we developed a software package with a graphical user interface for the analysis of single bacterial cell behaviour. The new software, TLM-Tracker, allows for flexible and user-friendly segmentation, tracking and lineage analysis of microbial cells in time-lapse movies. The software package, including manual, tutorial video and examples, is available as Matlab code or executable binaries at http://www.tlmtracker.tu-bs.de.
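    The tracking step that links segmented cells between consecutive frames can be sketched as greedy nearest-neighbour matching of centroids. Production tools like TLM-Tracker handle division, appearance and disappearance far more carefully; the helper below is a hypothetical illustration:

```python
import math

def link_cells(prev_centroids, next_centroids, max_dist=20.0):
    """Greedily link each cell centroid in the previous frame to its nearest
    unclaimed centroid in the next frame, within max_dist pixels.
    Returns {prev_index: next_index}; unmatched cells are simply absent."""
    links = {}
    used = set()
    for i, p in enumerate(prev_centroids):
        best, best_d = None, max_dist
        for j, q in enumerate(next_centroids):
            if j in used:
                continue
            d = math.dist(p, q)
            if d < best_d:
                best, best_d = j, d
        if best is not None:
            links[i] = best
            used.add(best)
    return links
```

    Chaining such frame-to-frame links across a movie yields the trajectories from which a lineage tree can be assembled.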

  3. Teasing Apart Complex Motions using VideoPoint

    NASA Astrophysics Data System (ADS)

    Fischer, Mark

    2002-10-01

    Using video analysis software such as VideoPoint, it is possible to explore the physics of any phenomenon that can be captured on videotape. The good news is that complex motions can be filmed and analyzed. The bad news is that the motions can become very complex very quickly. An example of such a complicated motion, the 2-dimensional motion of an object as filmed by a camera that is moving and rotating in the same plane will be discussed. Methods for extracting the desired object motion will be given as well as suggestions for shooting more easily analyzable video clips.
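    Extracting the object's true motion from footage shot by a moving, rotating camera amounts to a rigid frame transformation: rotate the measured coordinates by the camera's orientation, then translate by the camera's position. A minimal sketch (hypothetical helper, assuming planar motion and known camera pose per frame):

```python
import math

def to_lab_frame(obj_xy_cam, cam_xy, cam_angle):
    """Convert object coordinates measured in a translating, rotating camera
    frame back into the fixed lab frame (2-D).
    obj_xy_cam: object position as read off the video frame (camera frame)
    cam_xy:     camera position in the lab frame at that instant
    cam_angle:  camera rotation in radians at that instant"""
    x, y = obj_xy_cam
    c, s = math.cos(cam_angle), math.sin(cam_angle)
    # rotate by the camera angle, then translate by the camera position
    return (cam_xy[0] + c * x - s * y, cam_xy[1] + s * x + c * y)
```

    Applying this per frame, with the camera pose itself measured from fixed background reference points, recovers the object's lab-frame trajectory.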

  4. Quick and Easy: Use Screen Capture Software to Train and Communicate

    ERIC Educational Resources Information Center

    Schuster, Ellen

    2011-01-01

    Screen capture (screen cast) software can be used to develop short videos for training purposes. Developing videos is quick and easy. This article describes how these videos are used as tools to reinforce face-to-face and interactive TV curriculum training in a nutrition education program. Advantages of developing these videos are shared.…

  5. [Epilepsy and videogame: which physiopathological mechanisms to expect?].

    PubMed

    Masnou, P; Nahum-Moscovoci, L

    1999-04-01

    Video games may induce epileptic seizures in some subjects, most of whom have photosensitive epilepsy. The triggering factors are multiple: characteristics of the software, effects of the electronic screen, and interactivity. The wide diffusion of video games explains the large number of descriptions of videogame-induced seizures. Historical aspects and an analysis of the underlying mechanisms of videogame-induced seizures are presented.

  6. Exploring Adolescents' Multimodal Responses to "The Kite Runner": Understanding How Students Use Digital Media for Academic Purposes

    ERIC Educational Resources Information Center

    Jocius, Robin

    2013-01-01

    This qualitative study explores how adolescent high school students in an AP English class used multiple forms of media (the internet, digital video, slide show software, video editing tools, literary texts, and writing) to respond to and analyze a contemporary novel, "The Kite Runner". Using a multimodal analysis framework, the author explores…

  7. “Diagnosis by Behavioral Observation” Home-Videosomnography – A Rigorous Ethnographic Approach to Sleep of Children with Neurodevelopmental Conditions

    PubMed Central

    Ipsiroglu, Osman S.; Hung, Yi-Hsuan Amy; Chan, Forson; Ross, Michelle L.; Veer, Dorothee; Soo, Sonja; Ho, Gloria; Berger, Mai; McAllister, Graham; Garn, Heinrich; Kloesch, Gerhard; Barbosa, Adriano Vilela; Stockler, Sylvia; McKellin, William; Vatikiotis-Bateson, Eric

    2015-01-01

    Introduction: Advanced video technology is available for sleep laboratories. However, low-cost equipment for screening in the home setting has not been identified and tested, nor has a methodology for analysis of video recordings been suggested. Methods: We investigated different combinations of hardware/software for home-videosomnography (HVS) and established a process for qualitative and quantitative analysis of HVS recordings. A case vignette (HVS analysis for a 5.5-year-old girl with major insomnia and several co-morbidities) demonstrates how methodological considerations were addressed and how HVS added value to clinical assessment. Results: We suggest an “ideal set of hardware/software” that is reliable, affordable (∼$500) and portable (∼2.8 kg) to conduct non-invasive HVS, which allows time-lapse analyses. The equipment consists of a netbook, a camera with infrared optics, and a video capture device. (1) We present an HVS-analysis protocol consisting of three steps of analysis at varying replay speeds: (a) basic overview and classification at 16× normal speed; (b) second viewing and detailed descriptions at 4–8× normal speed; and (c) viewing, listening, and in-depth descriptions at real-time speed. (2) We also present a custom software program that facilitates video analysis and note-taking (Annotator©), and Optical Flow software that automatically quantifies movement for internal quality control of the HVS recording. The case vignette demonstrates how the HVS recordings revealed the dimension of insomnia caused by restless legs syndrome, and illustrated the cascade of symptoms, challenging behaviors, and resulting medications.
Conclusion: The strategy of using HVS, although requiring validation and reliability testing, opens the floor for a new “observational sleep medicine,” which has been useful in describing discomfort-related behavioral movement patterns in patients with communication difficulties presenting with challenging/disruptive sleep/wake behaviors. PMID:25852578
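    The automatic movement quantification used here for quality control can be approximated, far more crudely than optical flow, by frame differencing: counting how many pixels change appreciably between consecutive frames. The helper below is a hypothetical illustration, not the Optical Flow software the authors used:

```python
def movement_index(frame_a, frame_b, threshold=10):
    """Fraction of pixels whose grayscale value changed by more than
    `threshold` between two consecutive frames -- a crude stand-in for
    optical-flow magnitude. Frames are equally sized 2-D lists."""
    changed = total = 0
    for row_a, row_b in zip(frame_a, frame_b):
        for a, b in zip(row_a, row_b):
            total += 1
            if abs(a - b) > threshold:
                changed += 1
    return changed / total
```

    Plotting this index over a night's recording flags epochs of high movement for the reviewer, and a long run of zeros can indicate a camera or recording fault.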

  8. Sexual content in video games: an analysis of the Entertainment Software Rating Board classification from 1994 to 2013.

    PubMed

    Vidaña-Pérez, Dèsirée; Braverman-Bronstein, Ariela; Basto-Abreu, Ana; Barrientos-Gutierrez, Inti; Hilscher, Rainer; Barrientos-Gutierrez, Tonatiuh

    2018-01-11

    Background: Video games are widely used by children and adolescents and have become a significant source of exposure to sexual content. Despite evidence of the important role of media in the development of sexual attitudes and behaviours, little attention has been paid to monitoring sexual content in video games. Methods: Data were obtained on sexual content and ratings for 23,722 video games from 1994 to 2013 from the Entertainment Software Rating Board database; release dates and information on the top 100 selling video games were also obtained. The yearly prevalence of sexual content according to rating category was calculated. Trends and comparisons were estimated using joinpoint regression. Results: Sexual content was present in 13% of the video games. Games rated 'Mature' had the highest prevalence of sexual content (34.5%), followed by 'Teen' (30.7%) and 'E10+' (21.3%). Over time, sexual content decreased in the 'Everyone' category, 'E10+' maintained a low prevalence, and 'Teen' and 'Mature' showed a marked increase. Both top and non-top video games showed constant increases, with top-selling video games having 10.1% more sexual content across the study period. Conclusion: Over the last 20 years, the prevalence of sexual content has increased in video games with a 'Teen' or 'Mature' rating. Further studies are needed to quantify the potential association between sexual content in video games and sexual behaviour in children and adolescents.

  9. ETHOWATCHER: validation of a tool for behavioral and video-tracking analysis in laboratory animals.

    PubMed

    Crispim Junior, Carlos Fernando; Pederiva, Cesar Nonato; Bose, Ricardo Chessini; Garcia, Vitor Augusto; Lino-de-Oliveira, Cilene; Marino-Neto, José

    2012-02-01

    We present ETHOWATCHER®, a software package developed to support ethography, object tracking and extraction of kinematic variables from digital video files of laboratory animals. The tracking module allows controlled segmentation of the target from the background, extracting image attributes used to calculate the distance traveled, orientation, length, area and a path graph of the experimental animal. The ethography module allows recording of catalog-based behaviors from the environment or from video files, continuously or frame-by-frame. The output reports the duration, frequency and latency of each behavior and the sequence of events in a user-defined, time-segmented format. Validation tests were conducted on kinematic measurements and on the detection of known behavioral effects of drugs. The software is freely available at www.ethowatcher.ufsc.br. Copyright © 2011 Elsevier Ltd. All rights reserved.
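    Two of the outputs described, distance traveled from the tracking module and per-behavior durations from the ethography module, reduce to simple computations once centroids and event records exist. A hypothetical sketch (these helpers are illustrative, not ETHOWATCHER code):

```python
import math

def path_length(centroids):
    """Total distance traveled along a sequence of (x, y) centroids."""
    return sum(math.dist(p, q) for p, q in zip(centroids, centroids[1:]))

def bout_durations(events):
    """Total duration per behavior from (timestamp, behavior) records,
    where each record marks the onset of a behavior bout. Each record
    closes the preceding bout; the final, open-ended bout is ignored."""
    durations = {}
    for (t0, beh), (t1, _) in zip(events, events[1:]):
        durations[beh] = durations.get(beh, 0.0) + (t1 - t0)
    return durations
```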

  10. All Source Sensor Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

Trease, Harold (PNNL)

    2012-10-10

    ASSA is a software application that processes binary data into summarized index tables that can be used to organize features contained within the data. ASSA's index tables can also be used to search for user specified features. ASSA is designed to organize and search for patterns in unstructured binary data streams or archives, such as video, images, audio, and network traffic. ASSA is basically a very general search engine used to search for any pattern in any binary data stream. It has uses in video analytics, image analysis, audio analysis, searching hard-drives, monitoring network traffic, etc.

  11. Use of Video Analysis System for Working Posture Evaluations

    NASA Technical Reports Server (NTRS)

    McKay, Timothy D.; Whitmore, Mihriban

    1994-01-01

    In a work environment, it is important to identify and quantify the relationship among work activities, working posture, and workplace design. Working posture may impact the physical comfort and well-being of individuals, as well as performance. The Posture Video Analysis Tool (PVAT) is an interactive menu and button driven software prototype written in Supercard (trademark). Human Factors analysts are provided with a predefined set of options typically associated with postural assessments and human performance issues. Once options have been selected, the program is used to evaluate working posture and dynamic tasks from video footage. PVAT has been used to evaluate postures from Orbiter missions, as well as from experimental testing of prototype glove box designs. PVAT can be used for video analysis in a number of industries, with little or no modification. It can contribute to various aspects of workplace design such as training, task allocations, procedural analyses, and hardware usability evaluations. The major advantage of the video analysis approach is the ability to gather data, non-intrusively, in restricted-access environments, such as emergency and operation rooms, contaminated areas, and control rooms. Video analysis also provides the opportunity to conduct preliminary evaluations of existing work areas.

  12. Using Deep Learning Algorithm to Enhance Image-review Software for Surveillance Cameras

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cui, Yonggang; Thomas, Maikael A.

    We propose the development of proven deep learning algorithms to flag objects and events of interest in Next Generation Surveillance System (NGSS) surveillance to make IAEA image review more efficient. Video surveillance is one of the core monitoring technologies used by the IAEA Department of Safeguards when implementing safeguards at nuclear facilities worldwide. The current image review software, GARS, has limited automated functions, such as scene-change detection, black-image detection and missing-scene analysis, but struggles with highly cluttered backgrounds. A cutting-edge algorithm to be developed in this project will enable efficient and effective searches in images and video streams by identifying and tracking safeguards-relevant objects and detecting anomalies in their vicinity. In this project, we will develop the algorithm, test it with the IAEA surveillance cameras and data sets collected at simulated nuclear facilities at BNL and SNL, and implement it in a software program for potential integration into the IAEA’s IRAP (Integrated Review and Analysis Program).

  13. MIXING QUANTIFICATION BY VISUAL IMAGING ANALYSIS

    EPA Science Inventory

    This paper reports on development of a method for quantifying two measures of mixing, the scale and intensity of segregation, through flow visualization, video recording, and software analysis. This non-intrusive method analyzes a planar cross section of a flowing system from an ...
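    One of the two measures named, the intensity of segregation, is commonly defined (following Danckwerts) as the variance of the local concentration normalized by its maximum possible value; the paper's exact formulation may differ. A minimal sketch under that assumption, with concentrations taken from pixel values along the imaged cross section:

```python
def intensity_of_segregation(concentrations):
    """Danckwerts-style intensity of segregation for a binary mixture:
    variance of local concentration (0..1) divided by its maximum possible
    value c_mean * (1 - c_mean). Returns 1 for a fully segregated field and
    0 for a perfectly mixed one."""
    n = len(concentrations)
    mean = sum(concentrations) / n
    var = sum((c - mean) ** 2 for c in concentrations) / n
    return var / (mean * (1 - mean))
```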

  14. Quantitative analysis of tympanic membrane perforation: a simple and reliable method.

    PubMed

    Ibekwe, T S; Adeosun, A A; Nwaorgu, O G

    2009-01-01

    Accurate assessment of the features of a tympanic membrane perforation, especially its size, site, duration and aetiology, is important, as it enables optimum management. To describe a simple, cheap and effective method of quantitatively analysing tympanic membrane perforations. The system described comprises a video-otoscope (capable of generating still and video images of the tympanic membrane), adapted via a universal serial bus box to a computer screen, with images analysed using the Image J geometrical analysis software package. The reproducibility of results and their correlation with conventional otoscopic methods of estimation were tested statistically with the paired t-test and correlational tests, using the Statistical Package for the Social Sciences version 11 software. The following equation was generated: (P/T) × 100 = percentage perforation, where P is the area (in pixels²) of the tympanic membrane perforation and T is the total area (in pixels²) of the entire tympanic membrane (including the perforation). Illustrations are shown. Comparison of blinded data on tympanic membrane perforation area obtained independently from assessments by two trained otologists, of comparable years of experience, using the video-otoscopy system described, showed similar findings, with strong correlations devoid of inter-observer error (p = 0.000, r = 1). Comparison with conventional otoscopic assessment also indicated significant correlation, comparing results for the two trained otologists, but some inter-observer variation was present (p = 0.000, r = 0.896). Correlation between the two methods for each of the otologists was also highly significant (p = 0.000). A computer-adapted video-otoscope, with images analysed by Image J software, represents a cheap, reliable, technology-driven, clinical method of quantitative analysis of tympanic membrane perforations and injuries.
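    The stated equation is a straightforward pixel-area ratio once ImageJ has measured the two regions; a direct sketch (hypothetical helper name):

```python
def perforation_percentage(perforation_area_px, total_membrane_area_px):
    """Percentage perforation = (P / T) * 100, with both areas in pixels^2,
    where the total membrane area T includes the perforation itself."""
    if not 0 < perforation_area_px <= total_membrane_area_px:
        raise ValueError("perforation area must be positive and <= total area")
    return perforation_area_px / total_membrane_area_px * 100.0
```

    Because the result is a ratio of areas from the same image, it is independent of magnification and plate scale, which is what makes the method reproducible across observers.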

  15. Hardware accelerator design for tracking in smart camera

    NASA Astrophysics Data System (ADS)

    Singh, Sanjay; Dunga, Srinivasa Murali; Saini, Ravi; Mandal, A. S.; Shekhar, Chandra; Vohra, Anil

    2011-10-01

    Smart cameras are important components in video analysis. For video analysis, a smart camera needs to detect interesting moving objects, track such objects from frame to frame, and perform analysis of object tracks in real time. Therefore, real-time tracking is prominent in smart cameras. A software implementation of a tracking algorithm on a general-purpose processor (such as a PowerPC) achieves a low frame rate, far from real-time requirements. This paper presents a SIMD-based hardware accelerator designed for real-time tracking of objects in a scene. The system was designed and simulated using VHDL and implemented on a Xilinx XUP Virtex-IIPro FPGA. The resulting frame rate is 30 frames per second for 250×200-resolution grayscale video.

  16. A prototype to automate the video subsystem routing for the video distribution subsystem of Space Station Freedom

    NASA Astrophysics Data System (ADS)

    Betz, Jessie M. Bethly

    1993-12-01

    The Video Distribution Subsystem (VDS) for Space Station Freedom provides onboard video communications. The VDS includes three major functions: external video switching; internal video switching; and sync and control generation. The Video Subsystem Routing (VSR) is a part of the VDS Manager Computer Software Configuration Item (VSM/CSCI). The VSM/CSCI is the software which controls and monitors the VDS equipment. VSR activates, terminates, and modifies video services in response to Tier-1 commands to connect video sources to video destinations. VSR selects connection paths based on availability of resources and updates the video routing lookup tables. This project involves investigating the current methodology to automate the Video Subsystem Routing and developing and testing a prototype as 'proof of concept' for designers.

  17. A clinically viable capsule endoscopy video analysis platform for automatic bleeding detection

    NASA Astrophysics Data System (ADS)

    Yi, Steven; Jiao, Heng; Xie, Jean; Mui, Peter; Leighton, Jonathan A.; Pasha, Shabana; Rentz, Lauri; Abedi, Mahmood

    2013-02-01

    In this paper, we present a novel and clinically valuable software platform for automatic bleeding detection in the gastrointestinal (GI) tract from Capsule Endoscopy (CE) videos. Typical CE videos of the GI tract run about 8 hours and are manually reviewed by physicians to locate diseases such as bleeding and polyps. As a result, the process is time consuming and prone to missed findings. While researchers have made efforts to automate this process, no clinically acceptable software is available on the marketplace today. Working with our collaborators, we have developed a clinically viable software platform called GISentinel for fully automated GI tract bleeding detection and classification. Its major functional modules include: an innovative graph-based NCut segmentation algorithm; a unique feature selection and validation method (e.g. illumination-invariant features, color-independent features, and symmetrical texture features); and cascade SVM classification for handling various GI tract scenes (e.g. normal tissue, food particles, bubbles, fluid, and specular reflection). Initial evaluation of the software has shown no missed bleeding instances and a 4.03% false alarm rate. This work is part of our 2D/3D-based GI tract disease detection software platform. While the overall framework is designed for intelligent detection and classification of major GI tract diseases such as bleeding, ulcers, and polyps from CE videos, this paper focuses on the automatic bleeding detection module.

  18. Viewing the viewers: how adults with attentional deficits watch educational videos.

    PubMed

    Hassner, Tal; Wolf, Lior; Lerner, Anat; Leitner, Yael

    2014-10-01

    Knowing how adults with ADHD interact with prerecorded video lessons at home may provide a novel means of early screening and long-term monitoring for ADHD. Viewing patterns of 484 students with known ADHD were compared with 484 age, gender, and academically matched controls chosen from 8,699 non-ADHD students. Transcripts generated by their video playback software were analyzed using t tests and regression analysis. ADHD students displayed significant tendencies (p ≤ .05) to watch videos with more pauses and more reviews of previously watched parts. Other parameters showed similar tendencies. Regression analysis indicated that attentional deficits remained constant for age and gender but varied for learning experience. There were measurable and significant differences between the video-viewing habits of the ADHD and non-ADHD students. This provides a new perspective on how adults cope with attention deficits and suggests a novel means of early screening for ADHD. © 2011 SAGE Publications.

  19. Development of students' conceptual thinking by means of video analysis and interactive simulations at technical universities

    NASA Astrophysics Data System (ADS)

    Hockicko, Peter; Krišťák, Ľuboš; Němec, Miroslav

    2015-03-01

    Video analysis using the program Tracker (Open Source Physics) introduces a new creative method of teaching physics into the educational process and makes the natural sciences more interesting for students. This way of exploring the laws of nature can amaze students, because this illustrative and interactive educational software inspires them to think creatively, improves their performance and helps them in studying physics. This paper deals with increasing key competencies in engineering by analysing videos of real-life situations - physical problems - by means of video analysis and the modelling tools of the program Tracker, together with simulations of physical phenomena from the Physics Education Technology (PhET™) Project (the VAS method of problem tasks). Statistical testing using the t-test confirmed the significance of the differences in knowledge between the experimental and control groups, which resulted from application of the interactive method.

  20. Astrometric and Photometric Analysis of the September 2008 ATV-1 Re-Entry Event

    NASA Technical Reports Server (NTRS)

    Mulrooney, Mark K.; Barker, Edwin S.; Maley, Paul D.; Beaulieu, Kevin R.; Stokely, Christopher L.

    2008-01-01

    NASA utilized image-intensified video cameras for ATV data acquisition from a jet flying at 12.8 km altitude. Afterwards the video was digitized and then analyzed with a modified commercial software package, Image Systems TrackEye. Astrometric results were limited by saturation, plate scale, and the imposed linear plate solution based on field reference stars. Time-dependent fragment angular trajectories, velocities, accelerations, and luminosities were derived in each video segment. It was evident that individual fragments behaved differently. Photometric accuracy was insufficient to confidently assess correlations between luminosity and fragment spatial behavior (velocity, deceleration). Use of high-resolution digital video cameras in the future should remedy this shortcoming.

  1. Linear momentum, angular momentum and energy in the linear collision between two balls

    NASA Astrophysics Data System (ADS)

    Hanisch, C.; Hofmann, F.; Ziese, M.

    2018-01-01

    In an experiment in the basic physics laboratory, kinematic motion processes were analysed. The motion was recorded with a standard video camera at frame rates from 30 to 240 fps, and the videos were processed using video analysis software. Video detection was used to analyse the symmetric one-dimensional collision between two balls. Conservation of linear and angular momentum leads to a crossover from rolling to sliding directly after the collision. By variation of the rolling radius the system could be tuned from a regime in which the balls move away from each other after the collision to a situation in which they re-collide.
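    The crossover from rolling to sliding arises because the collision reverses each ball's linear velocity while leaving its spin momentarily unchanged. A minimal sketch of this bookkeeping, assuming an elastic, symmetric head-on collision of identical balls rolling without slipping at rolling radius r (hypothetical helper, not the authors' analysis code):

```python
def post_collision_state(v, r):
    """State of one ball just after a symmetric elastic head-on collision.
    Before impact the ball rolls without slipping: omega = v / r.
    The impulsive contact force reverses the linear velocity but exerts
    (ideally) no torque, so omega is initially unchanged; the mismatch
    between v_after and omega * r is the contact-point slip that friction
    must remove before rolling resumes."""
    omega = v / r          # spin before (and just after) the collision
    v_after = -v           # elastic reversal of the linear velocity
    slip = v_after - omega * r
    return v_after, omega, slip
```

    A nonzero slip immediately after impact is exactly the rolling-to-sliding crossover the abstract describes; friction then decelerates the spin and the translation toward a new rolling state, which for suitable rolling radii can bring the balls back together.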

  2. The National Capital Region closed circuit television video interoperability project.

    PubMed

    Contestabile, John; Patrone, David; Babin, Steven

    2016-01-01

    The National Capital Region (NCR) includes many government jurisdictions and agencies using different closed circuit TV (CCTV) cameras and video management software. Because these agencies often must work together to respond to emergencies and events, a means of providing interoperability for CCTV video is critically needed. Video data from different CCTV systems that are not inherently interoperable is represented in the "data layer." An "integration layer" ingests the data layer source video and normalizes the different video formats. It then aggregates and distributes this video to a "presentation layer" where it can be viewed by almost any application used by other agencies and without any proprietary software. A native mobile video viewing application is also developed that uses the presentation layer to provide video to different kinds of smartphones. The NCR includes Washington, DC, and surrounding counties in Maryland and Virginia. The video sharing architecture allows one agency to see another agency's video in their native viewing application without the need to purchase new CCTV software or systems. A native smartphone application was also developed to enable them to share video via mobile devices even when they use different video management systems. A video sharing architecture has been developed for the NCR that creates an interoperable environment for sharing CCTV video in an efficient and cost effective manner. In addition, it provides the desired capability of sharing video via a native mobile application.

  3. Analysis and segmentation of images in case of solving problems of detecting and tracing objects on real-time video

    NASA Astrophysics Data System (ADS)

    Ezhova, Kseniia; Fedorenko, Dmitriy; Chuhlamov, Anton

    2016-04-01

    The article deals with methods of image segmentation based on colour-space conversion, which allow efficient detection of a single colour against a complex background and under varying lighting, as well as detection of objects on a homogeneous background. The results of an analysis of segmentation algorithms of this type are presented, together with the possibilities for implementing them in software. The implemented algorithm is computationally expensive, which limits its application to video analysis; however, it solves the problem of analysing objects in an image when no image dictionary or knowledge base is available, as well as the problem of choosing optimal frame quantization parameters for video analysis.
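    The benefit of colour-space conversion for single-colour detection can be illustrated with a minimal standard-library sketch: converting RGB to HSV separates hue from intensity, so a bright and a dark sample of the same colour pass the same test. The thresholds and pixel values below are invented for the example, not taken from the article:

    ```python
    import colorsys

    def hue_mask(pixels, hue_lo, hue_hi, min_sat=0.3, min_val=0.2):
        """Flag pixels whose hue falls in [hue_lo, hue_hi] (hue in [0, 1)).

        Testing hue after an RGB -> HSV conversion makes the colour check
        largely independent of lighting intensity, which is the point of
        the colour-space conversion discussed above.
        """
        mask = []
        for (r, g, b) in pixels:
            h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
            mask.append(hue_lo <= h <= hue_hi and s >= min_sat and v >= min_val)
        return mask

    # A bright red and a dark red pixel both match a narrow "red" hue
    # window, while a blue pixel does not.
    pixels = [(220, 30, 30), (120, 15, 15), (20, 30, 200)]
    m = hue_mask(pixels, 0.0, 0.05)
    ```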

  4. STAPP: Spatiotemporal analysis of plantar pressure measurements using statistical parametric mapping.

    PubMed

    Booth, Brian G; Keijsers, Noël L W; Sijbers, Jan; Huysmans, Toon

    2018-05-03

    Pedobarography produces large sets of plantar pressure samples that are routinely subsampled (e.g. using regions of interest) or aggregated (e.g. center of pressure trajectories, peak pressure images) in order to simplify statistical analysis and provide intuitive clinical measures. We hypothesize that these data reductions discard gait information that can be used to differentiate between groups or conditions. To test this hypothesis, we created an implementation of statistical parametric mapping (SPM) for dynamic plantar pressure datasets (i.e. plantar pressure videos). Our SPM software framework brings all plantar pressure videos into anatomical and temporal correspondence, then performs statistical tests at each sampling location in space and time. As a novel contribution, we introduce non-linear temporal registration into the framework in order to normalize for timing differences within the stance phase. We refer to our software framework as STAPP: spatiotemporal analysis of plantar pressure measurements. Using STAPP, we tested our hypothesis on plantar pressure videos from 33 healthy subjects walking at different speeds. As walking speed increased, STAPP was able to identify significant decreases in plantar pressure at mid-stance from the heel through the lateral forefoot. The extent of these plantar pressure decreases has not previously been observed using existing plantar pressure analysis techniques. We therefore conclude that the subsampling of plantar pressure videos - a step which would have discarded gait information in our study - can be avoided using STAPP. Copyright © 2018 Elsevier B.V. All rights reserved.
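    The per-location testing step can be sketched as a paired t statistic computed independently at every spatial and temporal sample of the registered pressure videos. This is a simplified stand-in for full SPM (no smoothness-based multiple-comparison correction), and the array shapes and data are invented for the example:

    ```python
    import numpy as np

    def paired_t_map(a, b):
        """Paired t statistic at every (x, y, t) sample.

        a, b: arrays of shape (n_subjects, X, Y, T) holding pressure videos
        for two conditions, assumed already brought into anatomical and
        temporal correspondence (the registration step is not shown).
        """
        d = a - b
        n = d.shape[0]
        mean = d.mean(axis=0)
        sd = d.std(axis=0, ddof=1)
        return mean / (sd / np.sqrt(n) + 1e-12)   # small guard against sd == 0

    rng = np.random.default_rng(0)
    a = rng.normal(size=(8, 4, 4, 5))
    b = a.copy()
    b[:, 1, 1, :] += 2.0          # condition B systematically higher at one site
    t_map = paired_t_map(a, b)    # strongly negative along (1, 1, :), ~0 elsewhere
    ```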

  5. Video Altimeter and Obstruction Detector for an Aircraft

    NASA Technical Reports Server (NTRS)

    Delgado, Frank J.; Abernathy, Michael F.; White, Janis; Dolson, William R.

    2013-01-01

    Video-based altimetric and obstruction-detection systems for aircraft have been partially developed. The hardware of a system of this type includes a downward-looking video camera, a video digitizer, a Global Positioning System receiver or other means of measuring the aircraft velocity relative to the ground, a gyroscope-based or other attitude-determination subsystem, and a computer running altimetric and/or obstruction-detection software. From the digitized video data, the altimetric software computes the pixel velocity in an appropriate part of the video image and the corresponding angular relative motion of the ground within the field of view of the camera. Then, by use of trigonometric relationships among the aircraft velocity, the attitude of the camera, the angular relative motion, and the altitude, the software computes the altitude. The obstruction-detection software performs somewhat similar calculations as part of a larger task in which it uses the pixel velocity data from the entire video image to compute a depth map, which can be correlated with a terrain map showing locations of potential obstructions. The depth map can be used as a real-time hazard display and/or to update an obstruction database.
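    For a nadir-pointing camera the altimetric relationship reduces to a simple ratio: ground features sweep past at angular rate w = v / h, and the measured pixel speed p relates to w through the focal length in pixels, w ≈ p / f, giving h ≈ v·f / p. The sketch below implements only this simplified geometry (the real software's full trigonometric treatment of camera attitude is not reproduced; the tilt factor and all numbers are illustrative):

    ```python
    import math

    def altitude_from_flow(ground_speed, pixel_speed, focal_px, off_nadir_deg=0.0):
        """Altitude above ground from the optical flow of a downward camera.

        ground_speed: aircraft speed over ground (m/s), e.g. from GPS
        pixel_speed:  measured image flow (pixels/s)
        focal_px:     camera focal length expressed in pixels
        A small off-nadir tilt is approximated here by a cosine correction.
        """
        angular_rate = pixel_speed / focal_px                  # rad/s
        return ground_speed / angular_rate * math.cos(math.radians(off_nadir_deg))

    # 50 m/s ground speed and 400 px/s flow with an 800 px focal length -> 100 m
    h = altitude_from_flow(50.0, 400.0, 800.0)
    ```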

  6. The Interaction between Multimedia Data Analysis and Theory Development in Design Research

    ERIC Educational Resources Information Center

    van Nes, Fenna; Doorman, Michiel

    2010-01-01

    Mathematics education researchers conducting instruction experiments using a design research methodology are challenged with the analysis of often complex and large amounts of qualitative data. In this paper, we present two case studies that show how multimedia analysis software can greatly support video data analysis and theory development in…

  7. 4DCAPTURE: a general purpose software package for capturing and analyzing two- and three-dimensional motion data acquired from video sequences

    NASA Astrophysics Data System (ADS)

    Walton, James S.; Hodgson, Peter; Hallamasek, Karen; Palmer, Jake

    2003-07-01

    4DVideo is creating a general-purpose capability for capturing and analyzing kinematic data from video sequences in near real-time. The core element of this capability is a software package designed for the PC platform. The software ("4DCapture") is designed to capture and manipulate customized AVI files that can contain a variety of synchronized data streams -- including audio, video, centroid locations -- and signals acquired from more traditional sources (such as accelerometers and strain gauges). The code includes simultaneous capture or playback of multiple video streams, and linear editing of the images (together with the ancillary data embedded in the files). Corresponding landmarks seen from two or more views are matched automatically, and photogrammetric algorithms permit multiple landmarks to be tracked in two and three dimensions -- with or without lens calibrations. Trajectory data can be processed within the main application or exported to a spreadsheet, where they can be processed further or passed along to a more sophisticated, stand-alone data analysis application. Previous attempts to develop such applications for high-speed imaging have been limited in their scope or by the complexity of the application itself. 4DVideo has devised a friendly ("FlowStack") user interface that assists the end user in capturing and treating image sequences in a natural progression. 4DCapture employs the AVI 2.0 standard and DirectX technology, which effectively eliminates the file-size limitations found in older applications. In early tests, 4DVideo has streamed three RS-170 video sources to disk for more than an hour without loss of data. At this time, the software can acquire video sequences in three ways: (1) directly, from up to three hard-wired cameras supplying RS-170 (monochrome) signals; (2) directly, from a single camera or video recorder supplying an NTSC (color) signal; and (3) by importing existing video streams in the AVI 1.0 or AVI 2.0 formats. The latter is particularly useful for high-speed applications, where the raw images are often captured and stored by the camera before being downloaded. Provision has been made to synchronize data acquired from any combination of these video sources using audio and visual "tags." Additional "front ends," designed for digital cameras, are anticipated.

  8. Digital Video and the Internet: A Powerful Combination.

    ERIC Educational Resources Information Center

    Barron, Ann E.; Orwig, Gary W.

    1995-01-01

    Provides an overview of digital video and outlines hardware and software necessary for interactive training on the World Wide Web and for videoconferences via the Internet. Lists sites providing additional information on digital video, on CU-SeeMe software, and on MBONE (Multicast BackBONE), a technology that permits real-time transmission of…

  9. 78 FR 57648 - Notice of Issuance of Final Determination Concerning Video Teleconferencing Server

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-09-19

    ... the Chinese- origin Video Board and the Filter Board, impart the essential character to the video... includes the codec; a network filter electronic circuit board (``Filter Board''); a housing case; a power... (``Linux software''). The Linux software allows the Filter Board to inspect each Ethernet packet of...

  10. Identification of fidgety movements and prediction of CP by the use of computer-based video analysis is more accurate when based on two video recordings.

    PubMed

    Adde, Lars; Helbostad, Jorunn; Jensenius, Alexander R; Langaas, Mette; Støen, Ragnhild

    2013-08-01

    This study evaluates the role of postterm age at assessment and the use of one or two video recordings for the detection of fidgety movements (FMs) and prediction of cerebral palsy (CP) using computer vision software. Recordings between 9 and 17 weeks postterm age from 52 preterm and term infants (24 boys, 28 girls; 26 born preterm) were used. Recordings were analyzed using computer vision software. Movement variables, derived from differences between subsequent video frames, were used for quantitative analysis. Sensitivities, specificities, and areas under the curve were estimated for the first recording, the second recording, and the mean of both. FMs were classified based on the Prechtl approach of general movement assessment. CP status was reported at 2 years. Nine children developed CP; FMs were absent in all of their recordings. The mean variability of the centroid of motion (CSD) from two recordings was more accurate than that from only one recording, and identified all children who were diagnosed with CP at 2 years. Age at assessment did not influence the detection of FMs or prediction of CP. The accuracy of computer vision techniques in identifying FMs and predicting CP based on two recordings should be confirmed in future studies.
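    The movement variables described above are derived from differences between subsequent video frames; the basic building block is a motion centroid per frame pair, whose variability over a recording yields a CSD-type measure. The sketch below shows only that building block, with an invented threshold and toy frames:

    ```python
    import numpy as np

    def motion_centroid(prev_frame, frame, threshold=10):
        """Centroid (cx, cy) of pixels that changed between two frames,
        or None if nothing moved beyond the threshold."""
        diff = np.abs(frame.astype(int) - prev_frame.astype(int))
        moving = diff > threshold
        if not moving.any():
            return None
        ys, xs = np.nonzero(moving)
        return xs.mean(), ys.mean()

    f0 = np.zeros((6, 6), dtype=np.uint8)
    f1 = f0.copy()
    f1[2, 3] = 200                  # one pixel brightens between frames
    cx, cy = motion_centroid(f0, f1)
    ```

    Tracking (cx, cy) frame by frame and taking its variability over the whole recording gives a scalar per recording, which is the kind of quantity averaged over the two recordings in the study.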

  11. Data Reduction and Control Software for Meteor Observing Stations Based on CCD Video Systems

    NASA Technical Reports Server (NTRS)

    Madiedo, J. M.; Trigo-Rodriguez, J. M.; Lyytinen, E.

    2011-01-01

    The SPanish Meteor Network (SPMN) is performing a continuous monitoring of meteor activity over Spain and neighbouring countries. The huge amount of data obtained by the 25 video observing stations that this network is currently operating made it necessary to develop new software packages to accomplish some tasks, such as data reduction and remote operation of autonomous systems based on high-sensitivity CCD video devices. The main characteristics of this software are described here.

  12. How to study the Doppler effect with Audacity software

    NASA Astrophysics Data System (ADS)

    Adriano Dias, Marco; Simeão Carvalho, Paulo; Rodrigues Ventura, Daniel

    2016-05-01

    The Doppler effect is a recurring topic in college and high school classes. In order to contextualize the topic and engage the students in their own learning process, we propose a simple and easily accessible activity: the students analyse videos available on the internet. The sound of the engine of a vehicle passing the camera is recorded on the video; it is then analysed with the free software Audacity by measuring the frequency of the sound as the vehicle approaches and recedes from the observer. The speed of the vehicle is determined by applying the Doppler effect equations for acoustic waves.
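    The two measured frequencies suffice because the unknown emitted frequency cancels: with f_approach = f0·c/(c − v) and f_recede = f0·c/(c + v), the source speed is v = c·(f_approach − f_recede)/(f_approach + f_recede). A short sketch with made-up numbers (500 Hz tone, 20 m/s vehicle):

    ```python
    def vehicle_speed(f_approach, f_recede, c=343.0):
        """Speed of a sound source from the frequencies heard while it
        approaches and recedes; the emitted frequency f0 cancels out."""
        return c * (f_approach - f_recede) / (f_approach + f_recede)

    # Round-trip check: synthesize the two shifted frequencies, recover v.
    c, f0, v_true = 343.0, 500.0, 20.0
    fa = f0 * c / (c - v_true)      # heard on approach
    fr = f0 * c / (c + v_true)      # heard on recession
    v = vehicle_speed(fa, fr)       # recovers 20.0 m/s
    ```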

  13. Automated Visual Event Detection, Tracking, and Data Management System for Cabled- Observatory Video

    NASA Astrophysics Data System (ADS)

    Edgington, D. R.; Cline, D. E.; Schlining, B.; Raymond, E.

    2008-12-01

    Ocean observatories and underwater video surveys have the potential to unlock important discoveries with new and existing camera systems. Yet the burden of video management and analysis often requires reducing the amount of video recorded, through time-lapse video or similar methods. It is unknown how many digitized video data sets exist in the oceanographic community, but we suspect that many remain under-analyzed due to a lack of good tools or of human resources to analyze the video. To help address this problem, the Automated Visual Event Detection (AVED) software and the Video Annotation and Reference System (VARS) have been under development at MBARI. The AVED software, for detecting interesting events in the video, has been developed over the last 5 years. AVED is based on a neuromorphic selective-attention algorithm, modeled on the human vision system. Frames are decomposed into specific feature maps that are combined into a unique saliency map. This saliency map is then scanned to determine the most salient locations. The candidate salient locations are then segmented from the scene using algorithms suitable for the low, non-uniform light and marine snow typical of deep underwater video. For managing the AVED descriptions of the video, the VARS system provides an interface and database for describing, viewing, and cataloging the video. VARS was developed by MBARI for annotating deep-sea video data and is currently being used to describe over 3000 dives by our remotely operated vehicles (ROVs), making it well suited to this deepwater observatory application with only a few modifications. To meet the compute- and data-intensive job of video processing, a distributed heterogeneous network of computers is managed using the Condor workload management system. This system manages data storage, video transcoding, and AVED processing. Looking to the future, we see high-speed networks and Grid technology as an important element in addressing the problem of processing and accessing large video data sets.
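    AVED's saliency map combines many feature maps; as a rough illustration of the center-surround idea behind such maps, the sketch below scores each pixel by the difference between a small local mean and a wider surround mean, so isolated bright objects stand out. The window sizes and image are invented, and this is far simpler than the neuromorphic algorithm described above:

    ```python
    import numpy as np

    def center_surround_saliency(img, c=1, s=4):
        """Crude intensity center-surround map: |mean over a small box
        minus mean over a larger box| at every pixel."""
        def box_mean(a, r):
            h, w = a.shape
            out = np.empty((h, w), dtype=float)
            for i in range(h):
                for j in range(w):
                    out[i, j] = a[max(0, i - r):i + r + 1,
                                  max(0, j - r):j + r + 1].mean()
            return out
        img = img.astype(float)
        return np.abs(box_mean(img, c) - box_mean(img, s))

    img = np.zeros((16, 16))
    img[10, 12] = 1.0                       # a single bright "event"
    sal = center_surround_saliency(img)
    peak = np.unravel_index(np.argmax(sal), sal.shape)   # lands near (10, 12)
    ```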

  14. A complexity-scalable software-based MPEG-2 video encoder.

    PubMed

    Chen, Guo-bin; Lu, Xin-ning; Wang, Xing-guo; Liu, Ji-lin

    2004-05-01

    With the development of general-purpose processors (GPPs) and video signal processing algorithms, it is possible to implement a software-based real-time video encoder on a GPP, and its low cost and easy upgradability are attracting developers to move video encoding from specialized hardware to more flexible software. In this paper, an encoding structure is first set up to support complexity scalability; then high-performance algorithms are applied to the key time-consuming modules in the coding process; finally, at the programming level, processor characteristics are exploited to improve data-access efficiency and processing parallelism. Other programming techniques, such as lookup tables, are adopted to reduce the computational complexity. Simulation results showed that these ideas not only improve the overall performance of video coding, but also provide great flexibility in complexity regulation.

  15. Real-time synchronization of kinematic and video data for the comprehensive assessment of surgical skills.

    PubMed

    Dosis, Aristotelis; Bello, Fernando; Moorthy, Krishna; Munz, Yaron; Gillies, Duncan; Darzi, Ara

    2004-01-01

    Surgical dexterity in operating theatres has traditionally been assessed subjectively. Electromagnetic (EM) motion tracking systems such as the Imperial College Surgical Assessment Device (ICSAD) have been shown to produce valid and accurate objective measures of surgical skill. To allow for video integration we have modified the data acquisition and built it within the ROVIMAS analysis software. We then used DirectX 9.0 DirectShow video capture and the system clock as a time stamp for the synchronized concurrent acquisition of kinematic data and video frames. Interactive video/motion data browsing was implemented to allow the user to concentrate on frames exhibiting certain kinematic properties that could result in operative errors. We exploited video-data synchronization to calculate the camera visual hull by identifying all 3D vertices using the ICSAD electromagnetic sensors. We also concentrated on high velocity peaks as a means of identifying potentially erroneous movements to be confirmed by studying the corresponding video frames. The outcome of the study clearly shows that the kinematic data are precisely synchronized with the video frames and that the velocity peaks correspond to large and sudden excursions of the instrument tip. We validated the camera visual hull by both video and geometrical kinematic analysis, and we observed that trials whose graphs contain fewer sudden velocity peaks are less likely to contain erroneous movements. This work presented further developments to the well-established ICSAD dexterity analysis system. Synchronized real-time motion and video acquisition provides a comprehensive assessment solution by combining quantitative motion analysis tools and qualitative targeted video scoring.
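    When both streams carry timestamps from the same system clock, mapping a kinematic sample to its video frame is a nearest-timestamp lookup. A minimal sketch (the 25 fps rate and sample times are invented, not the ICSAD hardware's actual rates):

    ```python
    import bisect

    def nearest_frame(frame_times, t):
        """Index of the video frame whose timestamp is closest to a
        kinematic sample time t; both use the same system clock."""
        i = bisect.bisect_left(frame_times, t)
        if i == 0:
            return 0
        if i == len(frame_times):
            return len(frame_times) - 1
        # Choose whichever neighbour is closer in time.
        return i if frame_times[i] - t < t - frame_times[i - 1] else i - 1

    frame_times = [k / 25 for k in range(100)]   # 25 fps: 0.00, 0.04, 0.08, ...
    idx = nearest_frame(frame_times, 0.103)      # closest frame is at 0.12 s
    ```

    Flagging the samples where instrument-tip speed exceeds a threshold, then jumping to `nearest_frame` for each, is one way to implement the velocity-peak-to-video review described above.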

  16. The Effectiveness of Classroom-Based Supplementary Video Presentations in Supporting Emergent Literacy Development in Early Childhood Education

    ERIC Educational Resources Information Center

    Sadik, Alaa M.; Badr, Khadeja

    2012-01-01

    This study investigated the impact of supplementary video presentations in supporting young children's emergent literacy development. Videos were produced by teachers using prototype software developed specifically for the purpose of this study. The software obtains media content from a variety of resources and devices, including webcam,…

  17. Enhancements to the Sentinel Fireball Network Video Software

    NASA Astrophysics Data System (ADS)

    Watson, Wayne

    2009-05-01

    The Sentinel Fireball Network, which supports imaging of bright meteors (fireballs), has been in existence for over ten years. Nearly five years ago it moved from gathering meteor data with a camera and VCR video tape to a fisheye lens attached to a hardware device, the Sentinel box, which allowed meteor data to be recorded on a PC running real-time Linux. In 2006 that software, sentuser, was made available on the Apple, Linux, and Windows operating systems using the Python programming language. It provides basic video and management functionality and a small amount of analytic capability. This paper describes new and planned features of the software and, additionally, reviews some past and present research efforts and networks that use video equipment to collect and analyze fireball data, with applicability to sentuser.

  18. An Analysis of Mimosa pudica Leaves Movement by Using LoggerPro Software

    NASA Astrophysics Data System (ADS)

    Sugito; Susilo; Handayani, L.; Marwoto, P.

    2016-08-01

    The unique phenomena of Mimosa pudica are the closing and opening movements of its leaves when they receive a stimulus. Using suitable software, these movements can be plotted as graphs that can be analysed. LoggerPro provides the facilities needed to analyse recorded videos of the plant's reaction to a stimulus. From the resulting graph, several variables can then be analysed. The results showed that the plant's movement fits an equation of the form y = mx + c.
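    A straight-line fit of the reported form y = mx + c is a one-line least-squares problem once positions have been read off the video. The sample values below are invented for illustration, not the paper's data:

    ```python
    import numpy as np

    # Hypothetical (time, position) samples traced from a video, lying on
    # the line y = 1.5 * x + 0.2.
    x = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
    y = 1.5 * x + 0.2

    m, c = np.polyfit(x, y, 1)   # degree-1 least-squares fit: slope, intercept
    ```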

  19. Control Software for Advanced Video Guidance Sensor

    NASA Technical Reports Server (NTRS)

    Howard, Richard T.; Book, Michael L.; Bryan, Thomas C.

    2006-01-01

    Embedded software has been developed specifically for controlling an Advanced Video Guidance Sensor (AVGS). A Video Guidance Sensor is an optoelectronic system that provides guidance for automated docking of two vehicles. Such a system includes pulsed laser diodes and a video camera, the output of which is digitized. From the positions of digitized target images and known geometric relationships, the relative position and orientation of the vehicles are computed. The present software consists of two subprograms running in two processors that are parts of the AVGS. The subprogram in the first processor receives commands from an external source, checks the commands for correctness, performs commanded non-image-data-processing control functions, and sends image data processing parts of commands to the second processor. The subprogram in the second processor processes image data as commanded. Upon power-up, the software performs basic tests of functionality, then effects a transition to a standby mode. When a command is received, the software goes into one of several operational modes (e.g. acquisition or tracking). The software then returns, to the external source, the data appropriate to the command.

  20. Current and future trends in marine image annotation software

    NASA Astrophysics Data System (ADS)

    Gomes-Pereira, Jose Nuno; Auger, Vincent; Beisiegel, Kolja; Benjamin, Robert; Bergmann, Melanie; Bowden, David; Buhl-Mortensen, Pal; De Leo, Fabio C.; Dionísio, Gisela; Durden, Jennifer M.; Edwards, Luke; Friedman, Ariell; Greinert, Jens; Jacobsen-Stout, Nancy; Lerner, Steve; Leslie, Murray; Nattkemper, Tim W.; Sameoto, Jessica A.; Schoening, Timm; Schouten, Ronald; Seager, James; Singh, Hanumant; Soubigou, Olivier; Tojeira, Inês; van den Beld, Inge; Dias, Frederico; Tempera, Fernando; Santos, Ricardo S.

    2016-12-01

    Given the need to describe, analyze and index large quantities of marine imagery data for exploration and monitoring activities, a range of specialized image annotation tools have been developed worldwide. Image annotation - the process of transposing objects or events represented in a video or still image to the semantic level - may involve human interaction and computer-assisted solutions. Marine image annotation software (MIAS) has enabled over 500 publications to date. We review functioning, application trends and developments by comparing general and advanced features of 23 different tools used in underwater image analysis. MIAS requiring human input are essentially graphical user interfaces, with a video player or image browser that recognizes a specific time code or image code, allowing users to log events in a time-stamped (and/or geo-referenced) manner. MIAS differ from similar software in their capability to integrate data associated with the video collection, the simplest being the position coordinates of the video recording platform. MIAS have three main characteristics: annotating events in real time, annotating after acquisition, and interacting with a database. The tools range from simple annotation interfaces to full onboard data management systems with a variety of toolboxes. Advanced packages allow users to input and display data from multiple sensors or multiple annotators via intranet or internet. Tools for posterior human-mediated annotation often include data display and image analysis, e.g. length, area, image segmentation and point counts; in a few cases they also allow browsing and editing of previous dive logs or analysis of the annotations themselves. Interaction with a database allows the automatic integration of annotations from different surveys, repeated and collaborative annotation of shared datasets, and browsing and querying of data. Progress in the field of automated annotation is mostly in post-processing, for stable platforms or still images. Integration into available MIAS is currently limited to semi-automated processes of pixel recognition through computer-vision modules that compile expert-based knowledge. Important topics that aid the choice of a specific software package are outlined, the ideal software is discussed, and future trends are presented.

  1. A Test of the Design of a Video Tutorial for Software Training

    ERIC Educational Resources Information Center

    van der Meij, J.; van der Meij, H.

    2015-01-01

    The effectiveness of a video tutorial versus a paper-based tutorial for software training has yet to be established. Mixed outcomes from the empirical studies to date suggest that for a video tutorial to outperform its paper-based counterpart, the former should be crafted so that it addresses the strengths of both designs. This was attempted in…

  2. The Use of Video Technology for the Fast-Prototyping of Artificially Intelligent Software.

    ERIC Educational Resources Information Center

    Klein, Gary L.

    This paper describes the use of video to provide a screenplay depiction of a proposed artificial intelligence software system. Advantages of such use are identified: (1) the video can be used to provide a clear conceptualization of the proposed system; (2) it can illustrate abstract technical concepts; (3) it can simulate the functions of the…

  3. Video Image Stabilization and Registration (VISAR) Software

    NASA Technical Reports Server (NTRS)

    1999-01-01

    Two scientists at NASA's Marshall Space Flight Center, atmospheric scientist Paul Meyer and solar physicist Dr. David Hathaway, developed promising new software, called Video Image Stabilization and Registration (VISAR). VISAR may help law enforcement agencies catch criminals by improving the quality of video recorded at crime scenes. In this photograph, the single frame at left, taken at night, was brightened in order to enhance details and reduce noise or snow. To further overcome the video defects in one frame, law enforcement officials can use VISAR software to add information from multiple frames to reveal a person. Images from less than a second of videotape were added together to create the clarified image at right. VISAR stabilizes camera motion in the horizontal and vertical directions, as well as rotation and zoom effects, producing clearer images of moving objects; it also smoothes jagged edges, enhances still images, and reduces video noise or snow. VISAR could also have applications in medical and meteorological imaging. It could steady images of ultrasounds, which are infamous for their grainy, blurred quality. The software can also serve defense applications by improving reconnaissance video imagery made by military vehicles, aircraft, and ships traveling in harsh, rugged environments.
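    VISAR's registration compensates for translation, rotation and zoom before combining frames; the sketch below shows only the final step of adding information from multiple frames - averaging an already-aligned sequence - which reduces zero-mean noise roughly by the square root of the number of frames. The scene and noise levels are synthetic:

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    truth = np.full((32, 32), 100.0)                         # static scene
    frames = truth + rng.normal(0, 20, size=(25, 32, 32))    # 25 noisy frames

    single_noise = frames[0].std()        # noise level of one frame (~20)
    stacked = frames.mean(axis=0)         # combine the aligned frames
    stacked_noise = stacked.std()         # ~ single_noise / sqrt(25)
    ```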

  4. Software for enhanced video capsule endoscopy: challenges for essential progress.

    PubMed

    Iakovidis, Dimitris K; Koulaouzidis, Anastasios

    2015-03-01

    Video capsule endoscopy (VCE) has revolutionized the diagnostic work-up in the field of small bowel diseases. Furthermore, VCE has the potential to become the leading screening technique for the entire gastrointestinal tract. Computational methods that can be implemented in software can enhance the diagnostic yield of VCE both in terms of efficiency and diagnostic accuracy. Since the appearance of the first capsule endoscope in clinical practice in 2001, information technology (IT) research groups have proposed a variety of such methods, including algorithms for detecting haemorrhage and lesions, reducing the reviewing time, localizing the capsule or lesion, assessing intestinal motility, enhancing the video quality and managing the data. Even though research is prolific (as measured by publication activity), the progress made during the past 5 years can only be considered marginal with respect to clinically significant outcomes. One thing is clear: parallel pathways of medical and IT scientists exist, each publishing in its own area - but where do these research pathways meet? Could the proposed IT methods have any clinical effect, and do clinicians really understand the limitations of VCE software? In this Review, we present an in-depth critical analysis that aims to inspire and align the agendas of the two scientific groups.

  5. [Development of an original computer program FISHMet: use for molecular cytogenetic diagnosis and genome mapping by fluorescent in situ hybridization (FISH)].

    PubMed

    Iurov, Iu B; Khazatskiĭ, I A; Akindinov, V A; Dovgilov, L V; Kobrinskiĭ, B A; Vorsanova, S G

    2000-08-01

    The original software FISHMet has been developed and tested to improve the efficiency of diagnosing hereditary diseases caused by chromosome aberrations and of chromosome mapping by the fluorescent in situ hybridization (FISH) method. The program supports the creation and analysis of pseudocolor images of chromosomes and hybridization signals under Windows 95, computer-assisted analysis and editing of pseudocolor in situ hybridization results, including the successive overlay of initial black-and-white images acquired through fluorescence filters (blue, green, and red), and editing of each image individually, or of the combined pseudocolor image, in the BMP, TIFF, and JPEG formats. Components of a computer image analysis system (LOMO, Leitz Ortoplan, and Axioplan fluorescence microscopes; COHU 4910 and Sanyo VCB-3512P CCD cameras; Miro-Video, Scion LG-3, and VG-5 image capture boards; and Pentium 100 and Pentium 200 computers) and specialized image capture and visualization software (Scion Image PC and Video-Cup) were used with good results in the study.

  6. Long-term video surveillance and automated analyses reveal arousal patterns in groups of hibernating bats

    USGS Publications Warehouse

    Hayman, David T.S.; Cryan, Paul; Fricker, Paul D.; Dannemiller, Nicholas G.

    2017-01-01

    Understanding natural behaviours is essential to determining how animals deal with new threats (e.g. emerging diseases). However, the natural behaviours of animals with cryptic lifestyles, like hibernating bats, are often poorly characterized. White-nose syndrome (WNS) is an unprecedented disease threatening multiple species of hibernating bats, and pathogen-induced changes to host behaviour may contribute to mortality. To better understand the behaviours of hibernating bats and how they might relate to WNS, we developed new ways of studying hibernation across entire seasons. We used thermal-imaging video surveillance cameras to observe little brown bats (Myotis lucifugus) and Indiana bats (M. sodalis) in two caves over multiple winters. We developed new, sharable software to test for autocorrelation and periodicity of arousal signals in recorded video. We processed 740 days (17,760 hr) of video at a rate of >1,000 hr of video imagery in less than 1 hr using a desktop computer, with sufficient resolution to detect increases in arousals during midwinter in both species and clear signals of daily arousal periodicity in infected M. sodalis. Our unexpected finding of periodic synchronous group arousals in hibernating bats demonstrates the potential of video methods and suggests that some bats may have innate behavioural strategies for coping with WNS. Surveillance video and accessible analysis software now make it practical to investigate the long-term behaviours of hibernating bats and other hard-to-study animals.
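    Testing an arousal-count signal for daily periodicity, as described above, can be done with a simple autocorrelation peak search: the lag of the strongest (non-trivial) autocorrelation peak is the dominant period. The synthetic hourly signal below is illustrative only, not the study's data:

    ```python
    import numpy as np

    def dominant_period(signal, min_lag=1):
        """Lag (in samples) at which the autocorrelation of a zero-mean
        version of the signal peaks; min_lag skips the trivial short-lag
        peak around lag 0."""
        x = np.asarray(signal, dtype=float)
        x = x - x.mean()
        ac = np.correlate(x, x, mode="full")[len(x) - 1:]   # lags 0 .. N-1
        return int(np.argmax(ac[min_lag:]) + min_lag)

    # Synthetic "arousals per hour" over 10 days with a 24-hour cycle.
    t = np.arange(24 * 10)
    activity = 5 + 3 * np.cos(2 * np.pi * t / 24)
    period = dominant_period(activity, min_lag=6)   # recovers the 24 h cycle
    ```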

  7. Real-time color image processing for forensic fiber investigations

    NASA Astrophysics Data System (ADS)

    Paulsson, Nils

    1995-09-01

    This paper describes a system for automatic fiber debris detection based on color identification. The properties of the system are fast analysis and high selectivity, a necessity when analyzing forensic fiber samples: an ordinary investigation separates the material into well above 100,000 video images to analyze. The system is based on standard techniques, with a CCD camera, a motorized sample table, and an IBM-compatible PC/AT with add-on boards for video frame digitization and stepping-motor control as the main parts. It is possible to operate the instrument at full video rate (25 images/s) with the aid of the HSI (hue-saturation-intensity) color system and software optimization. High selectivity is achieved by separating the analysis into several steps. The first step is fast, direct color identification of objects in the analyzed video images; the second step analyzes the detected objects in a more complex and time-consuming stage of the investigation, identifying single fiber fragments for subsequent analysis with more selective techniques.

  8. Computer-assisted 3D kinematic analysis of all leg joints in walking insects.

    PubMed

    Bender, John A; Simpson, Elaine M; Ritzmann, Roy E

    2010-10-26

    High-speed video can provide fine-scaled analysis of animal behavior. However, extracting behavioral data from video sequences is a time-consuming, tedious, subjective task. These issues are exacerbated where accurate behavioral descriptions require analysis of multiple points in three dimensions. We describe a new computer program written to assist a user in simultaneously extracting three-dimensional kinematics of multiple points on each of an insect's six legs. Digital video of a walking cockroach was collected in grayscale at 500 fps from two synchronized, calibrated cameras. We improved the legs' visibility by painting white dots on the joints, similar to techniques used for digitizing human motion. Compared to manual digitization of 26 points on the legs over a single, 8-second bout of walking (or 106,496 individual 3D points), our software achieved approximately 90% of the accuracy with 10% of the labor. Our experimental design reduced the complexity of the tracking problem by tethering the insect and allowing it to walk in place on a lightly oiled glass surface, but in principle the algorithms implemented are extensible to free walking. Our software is free and open-source, written in Python, and includes a graphical user interface for configuration and control. We encourage collaborative enhancements to make this tool both better and widely utilized.
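
    The 3D reconstruction underlying such two-camera kinematics can be sketched with standard linear (DLT) triangulation: given each camera's 3x4 projection matrix and the joint's pixel coordinates in both views, the 3D point is the null vector of a small linear system. This is a generic sketch, not the authors' code; the two cameras below are hypothetical.

```python
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Linear (DLT) triangulation: recover one 3D point from its pixel
    coordinates uv in two calibrated views with 3x4 projection matrices."""
    A = np.vstack([
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)   # the null vector of A is the solution
    X = vt[-1]
    return X[:3] / X[3]           # back to inhomogeneous coordinates

# Two hypothetical cameras: identity intrinsics, second camera shifted in x
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([1.0, 2.0, 5.0])
uv1 = (X_true[0] / X_true[2], X_true[1] / X_true[2])
uv2 = ((X_true[0] - 1.0) / X_true[2], X_true[1] / X_true[2])
```

    With 26 tracked points per frame at 500 fps, this per-point solve is run once per marker per frame.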

  9. Calypso: a user-friendly web-server for mining and visualizing microbiome-environment interactions.

    PubMed

    Zakrzewski, Martha; Proietti, Carla; Ellis, Jonathan J; Hasan, Shihab; Brion, Marie-Jo; Berger, Bernard; Krause, Lutz

    2017-03-01

    Calypso is an easy-to-use online software suite that allows non-expert users to mine, interpret and compare taxonomic information from metagenomic or 16S rDNA datasets. Calypso has a focus on multivariate statistical approaches that can identify complex environment-microbiome associations. The software enables quantitative visualizations, statistical testing, multivariate analysis, supervised learning, factor analysis, multivariable regression, network analysis and diversity estimates. Comprehensive help pages, tutorials and videos are provided via a wiki page. The web interface is accessible via http://cgenome.net/calypso/ . The software is programmed in Java, PERL and R, and the source code is available from Zenodo ( https://zenodo.org/record/50931 ). The software is freely available for non-commercial users (contact: l.krause@uq.edu.au). Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.

  10. Video Image Stabilization and Registration (VISAR) Software

    NASA Technical Reports Server (NTRS)

    1999-01-01

    Two scientists at NASA's Marshall Space Flight Center, atmospheric scientist Paul Meyer and solar physicist Dr. David Hathaway, developed promising new software, called Video Image Stabilization and Registration (VISAR), which is illustrated in this QuickTime movie. VISAR is a computer algorithm that stabilizes camera motion in the horizontal and vertical directions, as well as rotation and zoom effects, producing clearer images of moving objects; it also smooths jagged edges, enhances still images, and reduces video noise, or "snow". It could steady images of ultrasounds, which are infamous for their grainy, blurred quality. VISAR could also have applications in law enforcement, medical, and meteorological imaging. The software can be used for defense applications by improving reconnaissance video imagery made by military vehicles, aircraft, and ships traveling in harsh, rugged environments.

  12. Video sensor architecture for surveillance applications.

    PubMed

    Sánchez, Jordi; Benet, Ginés; Simó, José E

    2012-01-01

    This paper introduces a flexible hardware and software architecture for a smart video sensor. This sensor has been applied in a video surveillance application where some of these video sensors are deployed, constituting the sensory nodes of a distributed surveillance system. In this system, a video sensor node processes images locally in order to extract objects of interest, and classify them. The sensor node reports the processing results to other nodes in the cloud (a user or higher level software) in the form of an XML description. The hardware architecture of each sensor node has been developed using two DSP processors and an FPGA that controls, in a flexible way, the interconnection among processors and the image data flow. The developed node software is based on pluggable components and runs on a provided execution run-time. Some basic and application-specific software components have been developed, in particular: acquisition, segmentation, labeling, tracking, classification and feature extraction. Preliminary results demonstrate that the system can achieve up to 7.5 frames per second in the worst case, and the true positive rates in the classification of objects are better than 80%.

  13. Video Sensor Architecture for Surveillance Applications

    PubMed Central

    Sánchez, Jordi; Benet, Ginés; Simó, José E.

    2012-01-01

    This paper introduces a flexible hardware and software architecture for a smart video sensor. This sensor has been applied in a video surveillance application where some of these video sensors are deployed, constituting the sensory nodes of a distributed surveillance system. In this system, a video sensor node processes images locally in order to extract objects of interest, and classify them. The sensor node reports the processing results to other nodes in the cloud (a user or higher level software) in the form of an XML description. The hardware architecture of each sensor node has been developed using two DSP processors and an FPGA that controls, in a flexible way, the interconnection among processors and the image data flow. The developed node software is based on pluggable components and runs on a provided execution run-time. Some basic and application-specific software components have been developed, in particular: acquisition, segmentation, labeling, tracking, classification and feature extraction. Preliminary results demonstrate that the system can achieve up to 7.5 frames per second in the worst case, and the true positive rates in the classification of objects are better than 80%. PMID:22438723

  14. Scene Analysis: Non-Linear Spatial Filtering for Automatic Target Detection.

    DTIC Science & Technology

    1982-12-01

    In this thesis, a method for two-dimensional pattern recognition was developed and tested. The method included a global search scheme for candidate... purpose was to develop a base of image processing software for the AFIT Digital Signal Processing Laboratory NOVA-ECLIPSE minicomputer system, for

  15. Droplet morphometry and velocimetry (DMV): a video processing software for time-resolved, label-free tracking of droplet parameters.

    PubMed

    Basu, Amar S

    2013-05-21

    Emerging assays in droplet microfluidics require the measurement of parameters such as drop size, velocity, trajectory, shape deformation, fluorescence intensity, and others. While micro particle image velocimetry (μPIV) and related techniques are suitable for measuring flow using tracer particles, no tool exists for tracking droplets at the granularity of a single entity. This paper presents droplet morphometry and velocimetry (DMV), digital video-processing software for time-resolved droplet analysis. Droplets are identified through a series of image processing steps which operate on transparent, translucent, fluorescent, or opaque droplets. The steps include background image generation, background subtraction, edge detection, small object removal, morphological close and fill, and shape discrimination. A frame correlation step then links droplets spanning multiple frames via a nearest neighbor search with user-defined matching criteria. Each step can be individually tuned for maximum compatibility. For each droplet found, DMV provides a time-history of 20 different parameters, including trajectory, velocity, area, dimensions, shape deformation, orientation, nearest-neighbor spacing, and pixel statistics. The data can be reported via scatter plots, histograms, and tables at the granularity of individual droplets or by statistics accrued over the population. We present several case studies from industry and academic labs, including the measurement of 1) size distributions and flow perturbations in a drop generator, 2) size distributions and mixing rates in drop splitting/merging devices, 3) efficiency of single cell encapsulation devices, 4) position tracking in electrowetting operations, 5) chemical concentrations in a serial drop dilutor, 6) drop sorting efficiency of a tensiophoresis device, 7) plug length and orientation of nonspherical plugs in a serpentine channel, and 8) high throughput tracking of >250 drops in a reinjection system. Performance metrics show that the highest accuracy and precision are obtained when the video resolution is >300 pixels per drop. Analysis time increases proportionally with video resolution. The current version of the software provides throughputs of 2-30 fps, suggesting the potential for real-time analysis.
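
    The frame-correlation step (nearest-neighbour matching with user-defined criteria) can be sketched as below. The greedy conflict handling here is my own simplification, not DMV's actual implementation.

```python
import numpy as np

def link_droplets(prev_centroids, curr_centroids, max_dist):
    """Greedy nearest-neighbour linking of droplet centroids across two
    consecutive frames; returns a list of (prev_index, curr_index) matches."""
    prev = np.asarray(prev_centroids, dtype=float)
    curr = np.asarray(curr_centroids, dtype=float)
    if prev.size == 0 or curr.size == 0:
        return []
    # Pairwise distances between all previous and current centroids
    d = np.linalg.norm(prev[:, None, :] - curr[None, :, :], axis=2)
    matches, used = [], set()
    for i in np.argsort(d.min(axis=1)):     # most confident droplets first
        j = int(np.argmin(d[i]))
        if d[i, j] <= max_dist and j not in used:
            matches.append((int(i), j))     # drop the match on a conflict
            used.add(j)
    return matches
```

    Centroids left unmatched would be treated as droplets entering or leaving the field of view; a real tracker would also apply the user-defined matching criteria (size, shape) mentioned in the abstract.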

  16. Applications of Phase-Based Motion Processing

    NASA Technical Reports Server (NTRS)

    Branch, Nicholas A.; Stewart, Eric C.

    2018-01-01

    Image pyramids provide useful information for determining structural response at low cost using commercially available cameras. The current effort applies previous work on the complex steerable pyramid to analyze and identify imperceptible linear motions in video. Instead of implicitly computing motion spectra through phase analysis of the complex steerable pyramid and magnifying the associated motions, we present a visual technique and the necessary software to display the phase changes of high-frequency signals within video. The present technique quickly identifies the regions of largest motion within a video with a single phase visualization and without the artifacts of motion magnification, but requires use of the computationally intensive Fourier transform. While Riesz pyramids present an alternative to the computationally intensive complex steerable pyramid for motion magnification, the Riesz formulation contains significant noise, and motion magnification still produces large amounts of data that cannot be quickly assessed by the human eye. Thus, user-friendly software for quickly identifying structural response through optical flow and phase visualization is presented in both Python and MATLAB.
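
    The core idea that motion appears as phase change in the Fourier domain is the shift theorem. The 1-D sketch below illustrates it with phase correlation; it is a didactic reduction, not the steerable-pyramid software the abstract describes.

```python
import numpy as np

def phase_shift_1d(a, b):
    """Estimate the integer translation between two 1-D signals from the
    phase of their cross-power spectrum (the Fourier shift theorem)."""
    fa, fb = np.fft.fft(a), np.fft.fft(b)
    cross = np.conj(fa) * fb
    cross /= np.abs(cross) + 1e-12      # keep only the phase
    corr = np.fft.ifft(cross).real      # a delta at the displacement
    shift = int(np.argmax(corr))
    if shift > len(a) // 2:             # wrap large lags to negative shifts
        shift -= len(a)
    return shift

# Hypothetical demo: a random signal and a copy displaced by 5 samples
rng = np.random.default_rng(1)
a = rng.normal(size=128)
b = np.roll(a, 5)
```

    In the phase-visualization technique, the same phase differences are displayed per pyramid sub-band rather than collapsed into a single displacement number.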

  17. Future-saving audiovisual content for Data Science: Preservation of geoinformatics video heritage with the TIB|AV-Portal

    NASA Astrophysics Data System (ADS)

    Löwe, Peter; Plank, Margret; Ziedorn, Frauke

    2015-04-01

    In data-driven research, access to, citation of, and preservation of the full triad consisting of journal article, research data and research software has started to become good scientific practice. To foster the adoption of this practice, the significance of software tools that enable scientists to harness auxiliary audiovisual content in their research work has to be acknowledged. The advent of ubiquitous computer-based audiovisual recording and corresponding Web 2.0 hosting platforms like YouTube, Slideshare and GitHub has created new ecosystems for contextual information related to scientific software and data, which continue to grow both in size and variety of content. The current Web 2.0 platforms lack capabilities for long-term archiving and scientific citation, such as persistent identifiers that allow referencing specific intervals of the overall content. The audiovisual content currently shared by scientists ranges from commented how-to demonstrations on software handling, installation and data processing, to aggregated visual analytics of the evolution of software projects over time. Such content is a crucial addition to the scientific message, as it ensures that software-based data-processing workflows can be assessed, understood and reused in the future. In the context of data-driven research, such content needs to be accessible through effective search capabilities, enabling the content to be retrieved and ensuring that the content producers receive credit for their efforts within the scientific community. Improved multimedia archiving and retrieval services for scientific audiovisual content which meet these requirements are currently being implemented by the scientific library community.
    This paper exemplifies the existing challenges, requirements, benefits and potential of the preservation, accessibility and citability of such audiovisual content for the Open Source communities, based on the new audiovisual web service TIB|AV-Portal of the German National Library of Science and Technology. The web-based portal allows for extended search capabilities based on enhanced metadata derived by automated video analysis. By combining state-of-the-art multimedia retrieval techniques such as speech, text and image recognition with semantic analysis, content-based access to videos at the segment level is provided. Further, by using the open standard Media Fragment Identifier (MFID), a citable Digital Object Identifier is displayed for each video segment. In addition to the continuously growing footprint of contemporary content, the importance of vintage audiovisual information needs to be considered: this paper showcases the successful application of the TIB|AV-Portal in the preservation and provision of a newly discovered version of a GRASS GIS promotional video produced by the US Army Construction Engineering Research Laboratory (US-CERL) in 1987. The video provides insight into the constraints of the very early days of the GRASS GIS project, the oldest active Free and Open Source Software (FOSS) GIS project, with a history of over thirty years. GRASS itself has turned into a collaborative scientific platform, a repository of scientific peer-reviewed code, and an algorithm/knowledge hub for future generations of scientists [1]. This is a reference case for future preservation activities regarding semantic-enhanced Web 2.0 content from geospatial software projects within Academia and beyond. References: [1] Chemin, Y., Petras V., Petrasova, A., Landa, M., Gebbert, S., Zambelli, P., Neteler, M., Löwe, P.: GRASS GIS: a peer-reviewed scientific platform and future research repository, Geophysical Research Abstracts, Vol. 17, EGU2015-8314-1, 2015 (submitted)

  18. Equipment issues regarding the collection of video data for research

    NASA Astrophysics Data System (ADS)

    Kung, Rebecca Lippmann; Kung, Peter; Linder, Cedric

    2005-12-01

    Physics education research increasingly makes use of video data for analysis of student learning and teaching practice. Collection of these data is conceptually simple, but execution is often fraught with costly and time-consuming complications. This pragmatic paper discusses the development of systems to record and permanently archive audio and video data in real time. We focus on a system based upon consumer video DVD recorders, but also give an overview of other technologies and detail issues common to all systems. We detail common yet unexpected complications, particularly with regard to sound quality and compatibility with transcription software. Information specific to fixed and transportable systems, other technology options, and generic and specific equipment recommendations are given in supplemental appendices.

  19. Studying Upper-Limb Amputee Prosthesis Use to Inform Device Design

    DTIC Science & Technology

    2016-10-01

    study of the resulting videos led to a new prosthetics-use taxonomy that is generalizable to various levels of amputation and terminal devices. The...taxonomy was applied to classification of the recorded videos via custom tagging software with a MIDI controller interface. The software creates...a motion capture studio and video cameras to record accurate and detailed upper body motion during a series of standardized tasks. These tasks are

  20. Authoritative Authoring: Software That Makes Multimedia Happen.

    ERIC Educational Resources Information Center

    Florio, Chris; Murie, Michael

    1996-01-01

    Compares seven mid- to high-end multimedia authoring software systems that combine graphics, sound, animation, video, and text for Windows and Macintosh platforms. A run-time project was created with each program using video, animation, graphics, sound, formatted text, hypertext, and buttons. (LRW)

  1. PC-based high-speed video-oculography for measuring rapid eye movements in mice.

    PubMed

    Sakatani, Tomoya; Isa, Tadashi

    2004-05-01

    We developed a new infrared video-oculographic system for on-line tracking of eye position in awake, head-fixed mice at high temporal resolution (240 Hz). The system consists of a commercially available high-speed CCD camera and image-processing software written in LabVIEW, running on an IBM PC with a plug-in video grabber board. The software calculates the center and area of the pupil by fitting a circular function to the pupil boundary, and allows robust and stable tracking of eye position in small animals such as mice. On-line calculation obtains a reasonable circular fit of the pupil boundary even if part of the pupil is covered by shadows or occluded by eyelids or corneal reflections. The pupil position in the 2-D video plane is converted to the rotation angle of the eyeball by estimating its rotation center from an anatomical eyeball model. With this recording system, it is possible to perform quantitative analysis of rapid eye movements such as saccades in mice, providing a powerful tool for analyzing the molecular basis of oculomotor and cognitive functions using various lines of mutant mice.
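
    Fitting a circle to boundary points, including a partial arc as when the pupil is clipped by the eyelid, can be done with the standard algebraic (Kasa) least-squares fit. This generic sketch illustrates the idea; it is not the LabVIEW implementation described above.

```python
import numpy as np

def fit_circle(x, y):
    """Algebraic (Kasa) least-squares circle fit; returns (cx, cy, r).
    Works on a partial arc, e.g. a pupil boundary clipped by the eyelid."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    # Circle model: x^2 + y^2 = 2*cx*x + 2*cy*y + (r^2 - cx^2 - cy^2)
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    b = x ** 2 + y ** 2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    r = np.sqrt(c + cx ** 2 + cy ** 2)
    return cx, cy, r
```

    Because the fit is linear least squares, it stays stable when only part of the boundary is visible, which matches the occlusion robustness the abstract describes.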

  2. Fast and predictable video compression in software design and implementation of an H.261 codec

    NASA Astrophysics Data System (ADS)

    Geske, Dagmar; Hess, Robert

    1998-09-01

    The use of software codecs for video compression is becoming commonplace in videoconferencing applications. In order to reduce conflicts with other applications running at the same time, mechanisms for resource reservation on end systems need to determine an upper bound for the computing time used by the codec. This leads to the demand for predictable execution times of compression/decompression. Since compression schemes such as H.261 inherently depend on the motion contained in the video, an adaptive admission control is required. This paper presents a data-driven approach based on dynamically reducing the number of processed macroblocks in peak situations. The absolute speed is also of interest: we examine whether and how software compression of high-quality video is feasible on today's desktop computers.

  3. Video-Based Systems Research, Analysis, and Applications Opportunities

    DTIC Science & Technology

    1981-07-30

    as a COM software consultant, marketing its own COMTREVE software; * DatagraphiX Inc., San Diego, offers several versions of its COM recorders. AutoCOM...Metropolitan Microforms Ltd. in New York markets its MCAR system, which satisfies the need for a one- or multiple-user information retrieval and input...targeted to the market for high-speed data communications within a single facility, such as a university campus. The first commercial installations were set

  4. Image sequence analysis workstation for multipoint motion analysis

    NASA Astrophysics Data System (ADS)

    Mostafavi, Hassan

    1990-08-01

    This paper describes an application-specific engineering workstation designed and developed to analyze the motion of objects from video sequences. The system combines the software and hardware environment of a modern graphics-oriented workstation with digital image acquisition, processing and display techniques. In addition to automating and increasing the throughput of data-reduction tasks, the objective of the system is to provide less invasive methods of measurement by offering the ability to track objects that are more complex than reflective markers. Grey-level image processing and spatial/temporal adaptation of the processing parameters are used for location and tracking of more complex features of objects under uncontrolled lighting and background conditions. The applications of such an automated and noninvasive measurement tool include analysis of the trajectory and attitude of rigid bodies such as human limbs, robots, and aircraft in flight. The system's key features are: 1) acquisition and storage of image sequences by digitizing and storing real-time video; 2) computer-controlled movie-loop playback, freeze-frame display, and digital image enhancement; 3) multiple leading-edge tracking, in addition to object centroids, at up to 60 fields per second from either live input video or a stored image sequence; 4) model-based estimation and tracking of the six degrees of freedom of a rigid body; 5) field-of-view and spatial calibration; 6) image sequence and measurement database management; and 7) offline analysis software for trajectory plotting and statistical analysis.

  5. Sit Up Straight

    NASA Technical Reports Server (NTRS)

    1998-01-01

    BioMetric Systems has an exclusive license to the Posture Video Analysis Tool (PVAT) developed at Johnson Space Center. PVAT uses videos from Space Shuttle flights to identify postures and other human factors in the workplace that could be limiting. The software also provides data recommending appropriate postures for certain tasks and safe durations for potentially harmful positions. BioMetric Systems has further developed PVAT for use by hospitals, physical rehabilitation facilities, insurance companies, sports medicine clinics, oil companies, manufacturers, and the military.

  6. Capsule endoscopy

    MedlinePlus

    Capsule enteroscopy; Wireless capsule endoscopy; Video capsule endoscopy (VCE); Small bowel capsule endoscopy (SBCE) ... a computer and software turns them into a video. Your provider watches the video to look for ...

  7. Development of a ROV Deployed Video Analysis Tool for Rapid Measurement of Submerged Oil/Gas Leaks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Savas, Omer

    Expanded deep-sea drilling around the globe makes it necessary to have readily available tools to quickly and accurately measure discharge rates from accidental submerged oil/gas leak jets, so that first responders can deploy adequate resources for containment. We have developed and tested a field-deployable video analysis software package which is able to provide sufficiently accurate flow-rate estimates in the field for initial responders to accidental oil discharges in submarine operations. The essence of our approach is tracking coherent features at the interface in the near field of immiscible turbulent jets. The software package, UCB_Plume, is ready to be used by first responders for field implementation. We have tested the tool on submerged water and oil jets made visible using fluorescent dyes, and have been able to estimate the discharge rate within 20% accuracy. A high-end Windows laptop computer is suggested as the operating platform, and a USB-connected high-speed, high-resolution monochrome camera as the imaging device is sufficient for acquiring flow images under continuous unidirectional illumination and running the software in the field. Results are obtained in a matter of minutes.

  8. High-performance software-only H.261 video compression on PC

    NASA Astrophysics Data System (ADS)

    Kasperovich, Leonid

    1996-03-01

    This paper describes an implementation of a software H.261 codec for the PC that takes advantage of fast computational algorithms for DCT-based video compression, presented by the author at the February 1995 SPIE/IS&T meeting. The motivation for developing the H.261 prototype system is to demonstrate the feasibility of a real-time, software-only videoconferencing solution operating across a wide range of network bandwidths, frame rates, and input video resolutions. As the bandwidth of network technology increases, higher frame rates and resolutions of transmitted video are allowed, which in turn requires a software codec able to compress pictures of CIF (352 x 288) resolution at up to 30 frames/sec. Running on a 133 MHz Pentium PC, the codec presented can compress video in CIF format at 21-23 frames/sec. This result is comparable to known hardware-based H.261 solutions, but it doesn't require any specific hardware. The methods used to achieve high performance and the program optimization techniques for the Pentium microprocessor are presented, along with a performance profile showing the actual contribution of the different encoding/decoding stages to the overall computational process.

  9. Constructing spherical panoramas of a bladder phantom from endoscopic video using bundle adjustment

    NASA Astrophysics Data System (ADS)

    Soper, Timothy D.; Chandler, John E.; Porter, Michael P.; Seibel, Eric J.

    2011-03-01

    The high recurrence rate of bladder cancer requires patients to undergo frequent surveillance screenings over their lifetime following initial diagnosis and resection. Our laboratory is developing panoramic stitching software that would compile several minutes of cystoscopic video into a single panoramic image, covering the entire bladder, for review by a urologist at a later time or remote location. Global alignment of video frames is achieved by using a bundle adjuster that simultaneously recovers both the 3D structure of the bladder and the scope motion using only the video frames as input. The result of the algorithm is a complete 360° spherical panorama of the outer surface. The details of the software algorithms are presented here, along with results from both a virtual cystoscopy and real endoscopic imaging of a bladder phantom. The software successfully stitched several hundred video frames into a single panorama with subpixel accuracy and with no knowledge of the intrinsic camera properties, such as focal length and radial distortion. In the discussion, we outline future work in the development of the software and identify factors pertinent to clinical translation of this technology.

  10. Novel Sessile Drop Software for Quantitative Estimation of Slag Foaming in Carbon/Slag Interactions

    NASA Astrophysics Data System (ADS)

    Khanna, Rita; Rahman, Mahfuzur; Leow, Richard; Sahajwalla, Veena

    2007-08-01

    Novel video-processing software has been developed for the sessile drop technique, enabling rapid, quantitative estimation of slag foaming. The data processing was carried out in two stages: the first stage involved the initial transformation of digital video/audio signals into a format compatible with computing software, and the second stage involved the computation of slag droplet volume and area of contact in a chosen video frame. Experimental results are presented on slag foaming from a synthetic graphite/slag system at 1550 °C. This technique can be used for determining the extent and stability of foam as a function of time.
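
    For an axisymmetric droplet, the volume in a chosen video frame can be computed from the silhouette as a solid of revolution: a stack of thin discs, one per image row. This is a generic sketch of that idea, with a hypothetical hemispherical profile; the paper's actual computation is not published in the abstract.

```python
import numpy as np

def drop_volume(radii, dy):
    """Volume of an axisymmetric drop from its silhouette: the profile is
    treated as a stack of thin discs of radius r and thickness dy."""
    r = np.asarray(radii, dtype=float)
    return float(np.pi * np.sum(r ** 2) * dy)

# Hypothetical silhouette: a hemisphere of unit radius resting on a plate
dy = 1e-4
y = np.arange(0.0, 1.0, dy) + dy / 2   # midpoint height of each slice
radii = np.sqrt(1.0 - y ** 2)          # circular profile r(y)
```

    In practice `radii` would come from the detected left/right droplet edges in each row of the frame, with `dy` set by the pixel-to-millimetre calibration.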

  11. Unmanned Vehicle Guidance Using Video Camera/Vehicle Model

    NASA Technical Reports Server (NTRS)

    Sutherland, T.

    1999-01-01

    A video guidance sensor (VGS) system has flown on both STS-87 and STS-95 to validate a single camera/target concept for vehicle navigation. The main part of the image algorithm was the subtraction of two consecutive images in software. For a nominal image size of 256 x 256 pixels, this subtraction can take a large portion of the time between successive frames of standard-rate video, leaving very little time for other computations. The purpose of this project was to move the subtraction from software into hardware to speed up the process and allow more complex algorithms to be performed, both in hardware and software.
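
    The software subtraction step being accelerated is, in essence, a per-pixel absolute difference with a threshold. A minimal sketch, assuming 8-bit grayscale frames (the threshold value is illustrative):

```python
import numpy as np

def frame_difference(curr, prev, threshold=25):
    """Threshold the absolute difference of two 8-bit grayscale frames,
    yielding a binary mask of pixels that changed between frames."""
    # Widen to int16 first so the subtraction cannot wrap around
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
    return (diff > threshold).astype(np.uint8)
```

    At 256 x 256 pixels this is 65,536 subtract/compare operations per frame pair, which is exactly the kind of regular, data-parallel work that maps well onto hardware.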

  12. Educational Video Recording and Editing for The Hand Surgeon

    PubMed Central

    Rehim, Shady A.; Chung, Kevin C.

    2016-01-01

    Digital video recordings are increasingly used across various medical and surgical disciplines including hand surgery for documentation of patient care, resident education, scientific presentations and publications. In recent years, the introduction of sophisticated computer hardware and software technology has simplified the process of digital video production and improved means of disseminating large digital data files. However, the creation of high quality surgical video footage requires basic understanding of key technical considerations, together with creativity and sound aesthetic judgment of the videographer. In this article we outline the practical steps involved with equipment preparation, video recording, editing and archiving as well as guidance for the choice of suitable hardware and software equipment. PMID:25911212

  13. Biomechanical analysis using Kinovea for sports application

    NASA Astrophysics Data System (ADS)

    Muaza Nor Adnan, Nor; Patar, Mohd Nor Azmi Ab; Lee, Hokyoo; Yamamoto, Shin-Ichiroh; Jong-Young, Lee; Mahmud, Jamaluddin

    2018-04-01

    This paper assesses the reliability of HD VideoCam–Kinovea as an alternative tool for conducting motion analysis and measuring the knee relative angle during drop jump movement. The motion capture and analysis procedure was conducted in the Biomechanics Lab, Shibaura Institute of Technology, Omiya Campus, Japan. A healthy subject without any gait disorder (BMI of 28.60 ± 1.40) was recruited. The subject was asked to perform the drop jump movement on a preset platform, and the motion was simultaneously recorded in the sagittal plane using an established infrared motion capture system (Hawk–Cortex) and an HD VideoCam. The capture was repeated 5 times. The outputs (video recordings) from the HD VideoCam were input into Kinovea (an open-source software package) and the drop jump pattern was tracked and analysed. These data were compared with the drop jump pattern tracked and analysed earlier using the Hawk–Cortex system. In general, the results obtained (drop jump pattern) using HD VideoCam–Kinovea are close to those obtained using the established motion capture system. Basic statistical analyses show that most average variances are less than 10%, supporting the repeatability of the protocol and the reliability of the results. It can be concluded that the integration of HD VideoCam–Kinovea has the potential to become a reliable motion capture and analysis system; moreover, it is low cost, portable and easy to use. The current study and its findings contribute useful knowledge pertaining to motion capture and analysis, drop jump movement and HD VideoCam–Kinovea integration.
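
    The knee relative angle measured from sagittal-plane markers is the angle at the knee between the thigh and shank segments. A minimal sketch of that geometry (marker names are generic, not the study's marker set):

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle at point b (degrees) formed by segments b->a and b->c,
    e.g. hip-knee-ankle marker coordinates in the sagittal plane."""
    a, b, c = (np.asarray(p, dtype=float) for p in (a, b, c))
    u, v = a - b, c - b
    cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
```

    Applying this to the hip, knee and ankle positions tracked in Kinovea frame by frame yields the knee-angle curve that is then compared against the Hawk–Cortex output.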

  14. Effectiveness of an automatic tracking software in underwater motion analysis.

    PubMed

    Magalhaes, Fabrício A; Sawacha, Zimi; Di Michele, Rocco; Cortesi, Matteo; Gatta, Giorgio; Fantozzi, Silvia

    2013-01-01

    Tracking of markers placed on anatomical landmarks is a common practice in sports science to perform the kinematic analysis that interests both athletes and coaches. Although different software programs have been developed to automatically track markers and/or features, none of them was specifically designed to analyze underwater motion. Hence, this study aimed to evaluate the effectiveness of a software program developed for automatic tracking of underwater movements (DVP), based on the Kanade-Lucas-Tomasi feature tracker. Twenty-one video recordings of different aquatic exercises (n = 2940 markers' positions) were manually tracked to determine the markers' center coordinates. Then, the videos were automatically tracked using DVP and a commercially available software package (COM). Since tracking techniques may produce false targets, an operator was instructed to stop the automatic procedure and to correct the position of the cursor whenever the distance between the calculated marker coordinate and the reference one was greater than 4 pixels. The proportion of manual interventions required by the software was used as a measure of the degree of automation. Overall, the proportion of manual interventions was 10.4 percentage points lower for DVP (7.4%) than for COM (17.8%). Moreover, when examining the different exercise modes separately, the percentage of manual interventions was 5.6 to 29.3 percentage points lower for DVP than for COM. Similar results were observed when analyzing by type of marker rather than type of exercise, with 9.9 percentage points fewer manual interventions for DVP than for COM. In conclusion, based on these results, the automatic tracking software presented here can be used as a valid and useful tool for underwater motion analysis.
    Key points:
    - The availability of effective software for automatic tracking would represent a significant advance for the practical use of kinematic analysis in swimming and other aquatic sports.
    - An important feature of automatic tracking software is that it requires limited human intervention and supervision, thus allowing short processing times.
    - When tracking underwater movements, the degree of automation of the tracking procedure is influenced by the capability of the algorithm to overcome difficulties linked to the small target size, the low image quality and the presence of background clutter.
    - The newly developed feature-tracking algorithm has shown good automatic tracking effectiveness in underwater motion analysis, with a significantly smaller percentage of required manual interventions compared with a commercial software package.
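    The 4-pixel correction criterion described above is straightforward to sketch: flag each automatically tracked position whose distance from the manually tracked reference exceeds the tolerance, and report the proportion flagged. This is an illustrative reconstruction of the evaluation metric, not the authors' code:

```python
import math

def intervention_rate(auto_xy, ref_xy, tol=4.0):
    """Fraction of tracked positions deviating more than `tol` pixels
    from the reference (manually tracked) coordinates; each such frame
    would require a manual correction by the operator."""
    flagged = sum(
        1 for (ax, ay), (rx, ry) in zip(auto_xy, ref_xy)
        if math.hypot(ax - rx, ay - ry) > tol
    )
    return flagged / len(auto_xy)

auto = [(10.0, 10.0), (12.0, 10.0), (18.0, 10.0), (10.5, 11.0)]
ref  = [(10.0, 10.0), (10.0, 10.0), (10.0, 10.0), (10.0, 10.0)]
print(intervention_rate(auto, ref))  # 0.25: only the 8-pixel error exceeds 4 px
```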

  15. Cognitive, Social, and Literacy Competencies: The Chelsea Bank Simulation Project. Year One: Final Report. [Volume 2]: Appendices.

    ERIC Educational Resources Information Center

    Duffy, Thomas; And Others

    This supplementary volume presents appendixes A-E associated with a 1-year study which determined what secondary school students were doing as they engaged in the Chelsea Bank computer software simulation activities. Appendixes present the SCANS Analysis Coding Sheet; coding problem analysis of 50 video segments; student and teacher interview…

  16. IntraFace

    PubMed Central

    De la Torre, Fernando; Chu, Wen-Sheng; Xiong, Xuehan; Vicente, Francisco; Ding, Xiaoyu; Cohn, Jeffrey

    2016-01-01

    Within the last 20 years, there has been an increasing interest in the computer vision community in automated facial image analysis algorithms. This has been driven by applications in animation, market research, autonomous driving, surveillance, and facial editing, among others. To date, there exist several commercial packages for specific facial image analysis tasks such as facial expression recognition, facial attribute analysis or face tracking. However, free and easy-to-use software that incorporates all these functionalities has been unavailable. This paper presents IntraFace (IF), a publicly available software package for automated facial feature tracking, head pose estimation, facial attribute recognition, and facial expression analysis from video. In addition, IF includes a newly developed technique for unsupervised synchrony detection to discover correlated facial behavior between two or more persons, a relatively unexplored problem in facial image analysis. In tests, IF achieved state-of-the-art results for emotion expression and action unit detection in three databases, FERA, CK+ and RU-FACS; measured audience reaction to a talk given by one of the authors; and discovered synchrony for smiling in videos of parent-infant interaction. IF is free of charge for academic use at http://www.humansensing.cs.cmu.edu/intraface/. PMID:27346987
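    IntraFace's synchrony detection is unsupervised and considerably more sophisticated; purely as an illustrative baseline, synchrony between two expression-intensity time series can be scored by the best Pearson correlation over small temporal lags. All data here are invented:

```python
import math

def pearson(x, y):
    """Pearson correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def max_lagged_correlation(a, b, max_lag=5):
    """Best Pearson correlation between two expression-intensity
    series over small temporal lags, used as a synchrony score."""
    best = -1.0
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            xs, ys = a[lag:], b[:len(b) - lag]
        else:
            xs, ys = a[:len(a) + lag], b[-lag:]
        best = max(best, pearson(xs, ys))
    return best

parent = [0.1, 0.2, 0.6, 0.9, 0.7, 0.3, 0.1, 0.1]
infant = [0.1, 0.1, 0.2, 0.6, 0.9, 0.7, 0.3, 0.1]  # same smile, delayed one frame
print(round(max_lagged_correlation(parent, infant), 2))  # 1.0: perfectly synchronous at lag 1
```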

  17. IntraFace.

    PubMed

    De la Torre, Fernando; Chu, Wen-Sheng; Xiong, Xuehan; Vicente, Francisco; Ding, Xiaoyu; Cohn, Jeffrey

    2015-05-01

    Within the last 20 years, there has been an increasing interest in the computer vision community in automated facial image analysis algorithms. This has been driven by applications in animation, market research, autonomous driving, surveillance, and facial editing, among others. To date, there exist several commercial packages for specific facial image analysis tasks such as facial expression recognition, facial attribute analysis or face tracking. However, free and easy-to-use software that incorporates all these functionalities has been unavailable. This paper presents IntraFace (IF), a publicly available software package for automated facial feature tracking, head pose estimation, facial attribute recognition, and facial expression analysis from video. In addition, IF includes a newly developed technique for unsupervised synchrony detection to discover correlated facial behavior between two or more persons, a relatively unexplored problem in facial image analysis. In tests, IF achieved state-of-the-art results for emotion expression and action unit detection in three databases, FERA, CK+ and RU-FACS; measured audience reaction to a talk given by one of the authors; and discovered synchrony for smiling in videos of parent-infant interaction. IF is free of charge for academic use at http://www.humansensing.cs.cmu.edu/intraface/.

  18. Administrative/Office Technology. A Guide to Resources.

    ERIC Educational Resources Information Center

    Ohio State Univ., Columbus. Vocational Instructional Materials Lab.

    This guide, which was written for general marketing instructors in Ohio, lists nearly 450 resources for use in conjunction with the Administrative/Office Technology Occupational Competency Analysis Profile. The texts, workbooks, modules, software, videos, and learning activities packets listed are categorized by the following topics:…

  19. General Marketing. A Guide to Resources.

    ERIC Educational Resources Information Center

    Ohio State Univ., Columbus. Vocational Instructional Materials Lab.

    This guide, which was written for general marketing instructors in Ohio, lists more than 600 resources for use in conjunction with the General Marketing Occupational Competency Analysis Profile. The texts, workbooks, modules, software, videos, and learning activities packets listed are categorized by the following topics: human resource…

  20. Video library for video imaging detection at intersection stop lines.

    DOT National Transportation Integrated Search

    2010-04-01

    The objective of this activity was to record video that could be used for controlled : evaluation of video image vehicle detection system (VIVDS) products and software upgrades to : existing products based on a list of conditions that might be diffic...

  1. WPSS: watching people security services

    NASA Astrophysics Data System (ADS)

    Bouma, Henri; Baan, Jan; Borsboom, Sander; van Zon, Kasper; Luo, Xinghan; Loke, Ben; Stoeller, Bram; van Kuilenburg, Hans; Dijk, Judith

    2013-10-01

    To improve security, the number of surveillance cameras is rapidly increasing. However, the number of human operators remains limited, and only a selection of the video streams is observed. Intelligent software services can help to find people quickly, evaluate their behavior and show the most relevant and deviant patterns. We present a software platform that contributes to the retrieval and observation of humans and to the analysis of their behavior. The platform consists of mono- and stereo-camera tracking, re-identification, behavioral feature computation, track analysis, behavior interpretation and visualization. The system is demonstrated in a busy shopping mall with multiple cameras and different lighting conditions.

  2. Network Analysis of an Emergent Massively Collaborative Creation on Video Sharing Website

    NASA Astrophysics Data System (ADS)

    Hamasaki, Masahiro; Takeda, Hideaki; Nishimura, Takuichi

    Web technology enables numerous people to collaborate in creation; we designate this massively collaborative creation via the Web. As an example, we examine video development on Nico Nico Douga, a video sharing website popular in Japan. We specifically examine videos on Hatsune Miku, a singing synthesizer application that has inspired not only song creation but also songwriting, illustration, and video editing. Creators interact through their social network to create new content. In this paper, we analyzed the process of developing thousands of videos based on creators' social networks and investigated the relationships between creation activity and social networks. The social network reveals interesting features: creators generate large, sparse social networks that include some centralized communities, and members of such centralized communities share special tags. Different categories of creators play different roles in evolving the network; e.g., songwriters gather more links than other categories, implying that they are triggers of network evolution.
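    The observation that songwriters gather more links than other creator categories amounts to summing in-degrees per category over the creator network. The miniature edge list below is hypothetical, not the Nico Nico Douga data:

```python
from collections import Counter

# Hypothetical creator network: an edge points from a video's creator
# to the creator whose earlier work it reuses. Categories are illustrative.
category = {"s1": "songwriter", "s2": "songwriter",
            "i1": "illustrator", "v1": "video editor"}
edges = [("i1", "s1"), ("v1", "s1"), ("v1", "s2"), ("i1", "s2"), ("s2", "s1")]

# In-degree per creator, then aggregated per category
in_degree = Counter(dst for _, dst in edges)
links_per_category = Counter()
for creator, deg in in_degree.items():
    links_per_category[category[creator]] += deg

print(links_per_category.most_common(1))  # songwriters gather the most incoming links
```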

  3. Eye gaze correction with stereovision for video-teleconferencing.

    PubMed

    Yang, Ruigang; Zhang, Zhengyou

    2004-07-01

    The lack of eye contact in desktop video teleconferencing substantially reduces the effectiveness of video contents. While expensive and bulky hardware is available on the market to correct eye gaze, researchers have been trying to provide a practical software-based solution to bring video-teleconferencing one step closer to the mass market. This paper presents a novel approach: Based on stereo analysis combined with rich domain knowledge (a personalized face model), we synthesize, using graphics hardware, a virtual video that maintains eye contact. A 3D stereo head tracker with a personalized face model is used to compute initial correspondences across two views. More correspondences are then added through template and feature matching. Finally, all the correspondence information is fused together for view synthesis using view morphing techniques. The combined methods greatly enhance the accuracy and robustness of the synthesized views. Our current system is able to generate an eye-gaze corrected video stream at five frames per second on a commodity 1 GHz PC.

  4. The use of hypermedia to increase the productivity of software development teams

    NASA Technical Reports Server (NTRS)

    Coles, L. Stephen

    1991-01-01

    Rapid progress in low-cost commercial PC-class multimedia workstation technology will potentially have a dramatic impact on the productivity of distributed work groups of 50-100 software developers. Hypermedia/multimedia involves the seamless integration, in a graphical user interface (GUI), of a wide variety of data structures, including high-resolution graphics, maps, images, voice, and full-motion video. Hypermedia will normally require the manipulation of large dynamic files, for which relational database technology and SQL servers are essential. Basic machine architecture, special-purpose video boards, video equipment, optical memory, software needed for animation, network technology, and the anticipated increase in productivity that will result from the introduction of hypermedia technology are covered. It is suggested that the cost of the hardware and software to support an individual multimedia workstation will be on the order of $10,000.

  5. ciliaFA: a research tool for automated, high-throughput measurement of ciliary beat frequency using freely available software

    PubMed Central

    2012-01-01

    Background Analysis of ciliary function for assessment of patients suspected of primary ciliary dyskinesia (PCD), and for research studies of respiratory and ependymal cilia, requires assessment of both ciliary beat pattern and beat frequency. While direct measurement of beat frequency from high-speed video recordings is the most accurate and reproducible technique, it is extremely time consuming. The aim of this study was to develop a freely available automated method of ciliary beat frequency analysis from digital video (AVI) files that runs on open-source software (ImageJ) coupled to Microsoft Excel, and to validate it against direct measurement from high-speed video recordings of respiratory and ependymal cilia. These models allowed comparison of cilia beating between 3 and 52 Hz. Methods Digital video files of motile ciliated ependymal (frequency range 34 to 52 Hz) and respiratory epithelial cells (frequency 3 to 18 Hz) were captured using a high-speed digital video recorder. To cover the range between 18 and 37 Hz, the beat frequency of ependymal cilia was slowed by the addition of the pneumococcal toxin pneumolysin. Measurements made directly by timing a given number of individual ciliary beat cycles were compared with those obtained using the automated ciliaFA system. Results The overall mean difference (± SD) between the ciliaFA and direct measurement high-speed digital imaging methods was −0.05 ± 1.25 Hz, the correlation coefficient was 0.991 and the Bland-Altman limits of agreement were from −1.99 to 1.49 Hz for respiratory and from −2.55 to 3.25 Hz for ependymal cilia. Conclusions A plugin for ImageJ was developed that extracts pixel intensities and performs fast Fourier transformation (FFT) using Microsoft Excel. The ciliaFA software allowed automated, high-throughput measurement of respiratory and ependymal ciliary beat frequency (range 3 to 52 Hz) and avoids operator error due to selection bias.
    Free access to the ciliaFA plugin and installation instructions is provided in Additional file 1 accompanying this manuscript, for other researchers to use. PMID:23351276
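    The core of the ciliaFA approach, taking the FFT of a pixel-intensity time series and reading off the dominant frequency, can be sketched with NumPy. The original plugin runs in ImageJ coupled to Excel; this standalone sketch only mirrors the idea on synthetic data:

```python
import numpy as np

def beat_frequency(intensity, fps):
    """Dominant frequency (Hz) of a pixel-intensity time series,
    taken as the peak of the FFT magnitude after removing the
    DC component (the mean intensity)."""
    intensity = np.asarray(intensity, dtype=float)
    spectrum = np.abs(np.fft.rfft(intensity - intensity.mean()))
    freqs = np.fft.rfftfreq(intensity.size, d=1.0 / fps)
    return freqs[np.argmax(spectrum)]

# Synthetic check: 14 Hz beating sampled at 500 frames per second, with noise
rng = np.random.default_rng(0)
fps = 500
t = np.arange(0, 2.0, 1.0 / fps)
trace = 100 + 20 * np.sin(2 * np.pi * 14 * t) + rng.normal(0, 2, t.size)
print(beat_frequency(trace, fps))  # recovers ~14 Hz
```

With a 2 s recording the frequency resolution is 0.5 Hz, which is why high-speed capture over a sufficient duration matters for the 3 to 52 Hz range.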

  6. Automated video surveillance: teaching an old dog new tricks

    NASA Astrophysics Data System (ADS)

    McLeod, Alastair

    1993-12-01

    The automated video surveillance market is booming with new players, new systems, new hardware and software, and an extended range of applications. This paper reviews available technology and describes the features required for a good automated surveillance system. Both hardware and software are discussed. An overview of typical applications is also given. A shift towards PC-based hybrid systems, the use of parallel processing, neural networks, and the exploitation of modern telecommunications are introduced, highlighting the evolution of modern video surveillance systems.

  7. Using Video Analysis and Biomechanics to Engage Life Science Majors in Introductory Physics

    NASA Astrophysics Data System (ADS)

    Stephens, Jeff

    There is an interest in Introductory Physics for the Life Sciences (IPLS) as a way to better engage students in what may be their only physical science course. In this talk I will present some low-cost and readily available technologies for video analysis and how they have been implemented in classes and in student research projects. The technologies include software like Tracker and LoggerPro for video analysis and low-cost, high-speed cameras for capturing real-world events. The focus of the talk will be on content created by students, including two biomechanics research projects performed over the summer by pre-physical therapy majors. One project involved assessing medial knee displacement (MKD), a situation where the subject's knee becomes misaligned during a squatting motion, which is a contributing factor in ACL and other knee injuries. The other project looks at the difference in landing forces experienced by gymnasts and cheerleaders while performing on foam mats versus spring floors. The goal of this talk is to demonstrate how easy it can be to engage life science majors through the use of video analysis and topics like biomechanics, and to encourage others to try it for themselves.
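    Video analysis tools like Tracker and LoggerPro derive velocities from frame-by-frame positions; a central-difference sketch over free-fall data (generated here for illustration) shows the idea students apply to their own footage:

```python
def velocities(positions, fps):
    """Central-difference velocity estimates (units/s) from
    frame-by-frame positions sampled at `fps` frames per second,
    as video-analysis tools compute from tracked points."""
    dt = 1.0 / fps
    return [(positions[i + 1] - positions[i - 1]) / (2 * dt)
            for i in range(1, len(positions) - 1)]

# Free fall sampled at 120 fps: y(t) = -4.9 t^2, so v(t) = -9.8 t
fps = 120
ys = [-4.9 * (i / fps) ** 2 for i in range(6)]
print([round(v, 3) for v in velocities(ys, fps)])
```

Differentiating again (or differencing the velocities) gives accelerations, from which landing forces on the different floor types can be estimated via F = ma.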

  8. Video Screen Capture Basics

    ERIC Educational Resources Information Center

    Dunbar, Laura

    2014-01-01

    This article is an introduction to video screen capture. Basic information of two software programs, QuickTime for Mac and BlueBerry Flashback Express for PC, are also discussed. Practical applications for video screen capture are given.

  9. Demonstration-Based Training (DBT) in the Design of a Video Tutorial for Software Training

    ERIC Educational Resources Information Center

    van der Meij, Hans; van der Meij, Jan

    2016-01-01

    This study investigates the design and effectiveness of a video tutorial for software training. In accordance with demonstration-based training, the tutorial consisted of a series of task demonstrations, with instructional features added to enhance learning. An experiment is reported in which a demonstration-only control condition was compared…

  10. ALSC 2011 Notable Videos, Recordings & Interactive Software

    ERIC Educational Resources Information Center

    School Library Journal, 2011

    2011-01-01

    This article presents the Notable Children's Videos, Recordings, and Interactive Software for Kids lists which are compiled annually by committees of the Association for Library Service to children (ALSC), a division of the American Library Association (ALA). These lists were released in January 2011 at the ALA Midwinter meeting in San Diego,…

  11. Flow visualization of CFD using graphics workstations

    NASA Technical Reports Server (NTRS)

    Lasinski, Thomas; Buning, Pieter; Choi, Diana; Rogers, Stuart; Bancroft, Gordon

    1987-01-01

    High performance graphics workstations are used to visualize the fluid flow dynamics obtained from supercomputer solutions of computational fluid dynamic programs. The visualizations can be done independently on the workstation or while the workstation is connected to the supercomputer in a distributed computing mode. In the distributed mode, the supercomputer interactively performs the computationally intensive graphics rendering tasks while the workstation performs the viewing tasks. A major advantage of the workstations is that the viewers can interactively change their viewing position while watching the dynamics of the flow fields. An overview of the computer hardware and software required to create these displays is presented. For complex scenes the workstation cannot create the displays fast enough for good motion analysis. For these cases, the animation sequences are recorded on video tape or 16 mm film a frame at a time and played back at the desired speed. The additional software and hardware required to create these video tapes or 16 mm movies are also described. Photographs illustrating current visualization techniques are discussed. Examples of the use of the workstations for flow visualization through animation are available on video tape.

  12. Particle detection, number estimation, and feature measurement in gene transfer studies: optical fractionator stereology integrated with digital image processing and analysis.

    PubMed

    King, Michael A; Scotty, Nicole; Klein, Ronald L; Meyer, Edwin M

    2002-10-01

    Assessing the efficacy of in vivo gene transfer often requires a quantitative determination of the number, size, shape, or histological visualization characteristics of biological objects. The optical fractionator has become a choice stereological method for estimating the number of objects, such as neurons, in a structure, such as a brain subregion. Digital image processing and analytic methods can increase detection sensitivity and quantify structural and/or spectral features located in histological specimens. We describe a hardware and software system that we have developed for conducting the optical fractionator process. A microscope equipped with a video camera and motorized stage and focus controls is interfaced with a desktop computer. The computer contains a combination live video/computer graphics adapter with a video frame grabber and controls the stage, focus, and video via a commercial imaging software package. Specialized macro programs have been constructed with this software to execute command sequences requisite to the optical fractionator method: defining regions of interest, positioning specimens in a systematic uniform random manner, and stepping through known volumes of tissue for interactive object identification (optical dissectors). The system affords the flexibility to work with count regions that exceed the microscope image field size at low magnifications and to adjust the parameters of the fractionator sampling to best match the demands of particular specimens and object types. Digital image processing can be used to facilitate object detection and identification, and objects that meet criteria for counting can be analyzed for a variety of morphometric and optical properties. Copyright 2002 Elsevier Science (USA)
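    The systematic uniform random positioning that the macro programs implement can be sketched as a regular grid of dissector sites whose origin gets a single random offset per axis. The region size and step lengths below are illustrative, not parameters of the described system:

```python
import random

def surs_positions(region_w, region_h, step_x, step_y, rng=None):
    """Systematic uniform random sampling: dissector sites on a
    regular grid whose origin is offset by one uniformly random
    amount per axis, so every location has equal sampling probability."""
    rng = rng or random.Random()
    x0 = rng.uniform(0, step_x)
    y0 = rng.uniform(0, step_y)
    xs = [x0 + i * step_x for i in range(int((region_w - x0) // step_x) + 1)]
    ys = [y0 + j * step_y for j in range(int((region_h - y0) // step_y) + 1)]
    return [(x, y) for y in ys for x in xs]

sites = surs_positions(100, 100, 25, 25, random.Random(42))
print(len(sites))  # 16 dissector sites for a 100 x 100 region at 25-unit steps
```

Stepping the motorized stage to each site and counting objects in optical dissectors there yields the raw counts the fractionator scales up by the inverse sampling fractions.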

  13. How to Determine the Centre of Mass of Bodies from Image Modelling

    ERIC Educational Resources Information Center

    Dias, Marco Adriano; Carvalho, Paulo Simeão; Rodrigues, Marcelo

    2016-01-01

    Image modelling is a recent technique in physics education that includes digital tools for image treatment and analysis, such as digital stroboscopic photography (DSP) and video analysis software. It is commonly used to analyse the motion of objects. In this work we show how to determine the position of the centre of mass (CM) of objects with…

  14. A microcomputer interface for a digital audio processor-based data recording system.

    PubMed

    Croxton, T L; Stump, S J; Armstrong, W M

    1987-10-01

    An inexpensive interface is described that performs direct transfer of digitized data from the digital audio processor and video cassette recorder based data acquisition system designed by Bezanilla (1985, Biophys. J., 47:437-441) to an IBM PC/XT microcomputer. The FORTRAN callable software that drives this interface is capable of controlling the video cassette recorder and starting data collection immediately after recognition of a segment of previously collected data. This permits piecewise analysis of long intervals of data that would otherwise exceed the memory capability of the microcomputer.

  15. A microcomputer interface for a digital audio processor-based data recording system.

    PubMed Central

    Croxton, T L; Stump, S J; Armstrong, W M

    1987-01-01

    An inexpensive interface is described that performs direct transfer of digitized data from the digital audio processor and video cassette recorder based data acquisition system designed by Bezanilla (1985, Biophys. J., 47:437-441) to an IBM PC/XT microcomputer. The FORTRAN callable software that drives this interface is capable of controlling the video cassette recorder and starting data collection immediately after recognition of a segment of previously collected data. This permits piecewise analysis of long intervals of data that would otherwise exceed the memory capability of the microcomputer. PMID:3676444

  16. Digital video technology, today and tomorrow

    NASA Astrophysics Data System (ADS)

    Liberman, J.

    1994-10-01

    Digital video is probably computing's fastest moving technology today. Just three years ago, the zenith of digital video technology on the PC was the successful marriage of digital text and graphics with analog audio and video by means of expensive analog laser disc players and video overlay boards. The state of the art involves two different approaches to fully digital video on computers: hardware-assisted and software-only solutions.

  17. Empirical Data Collection and Analysis Using Camtasia and Transana

    ERIC Educational Resources Information Center

    Thorsteinsson, Gisli; Page, Tom

    2009-01-01

    One of the possible techniques for collecting empirical data is video recordings of a computer screen with specific screen capture software. This method for collecting empirical data shows how students use the BSCWII (Be Smart Cooperate Worldwide--a web based collaboration/groupware environment) to coordinate their work and collaborate in…

  18. The SCEC/UseIT Intern Program: Creating Open-Source Visualization Software Using Diverse Resources

    NASA Astrophysics Data System (ADS)

    Francoeur, H.; Callaghan, S.; Perry, S.; Jordan, T.

    2004-12-01

    The Southern California Earthquake Center undergraduate IT intern program (SCEC UseIT) conducts IT research to benefit collaborative earth science research. Through this program, interns have developed real-time, interactive, 3D visualization software using open-source tools. Dubbed LA3D, a distribution of this software is now in use by the seismic community. LA3D enables the user to interactively view Southern California datasets and models of importance to earthquake scientists, such as faults, earthquakes, fault blocks, digital elevation models, and seismic hazard maps. LA3D is now being extended to support visualizations anywhere on the planet. The new software, called SCEC-VIDEO (Virtual Interactive Display of Earth Objects), makes use of a modular, plugin-based software architecture which supports easy development and integration of new data sets. Currently SCEC-VIDEO is in beta testing, with a full open-source release slated for the future. Both LA3D and SCEC-VIDEO were developed using a wide variety of software technologies. These, which included relational databases, web services, software management technologies, and 3-D graphics in Java, were necessary to integrate the heterogeneous array of data sources which comprise our software. Currently the interns are working to integrate new technologies and larger data sets to increase software functionality and value. In addition, both LA3D and SCEC-VIDEO allow the user to script and create movies. Thus program interns with computer science backgrounds have been writing software while interns with other interests, such as cinema, geology, and education, have been making movies that have proved of great use in scientific talks, media interviews, and education. Thus, SCEC UseIT incorporates a wide variety of scientific and human resources to create products of value to the scientific and outreach communities. 
The program plans to continue with its interdisciplinary approach, increasing the relevance of the software and expanding its use in the scientific community.

  19. PIZZARO: Forensic analysis and restoration of image and video data.

    PubMed

    Kamenicky, Jan; Bartos, Michal; Flusser, Jan; Mahdian, Babak; Kotera, Jan; Novozamsky, Adam; Saic, Stanislav; Sroubek, Filip; Sorel, Michal; Zita, Ales; Zitova, Barbara; Sima, Zdenek; Svarc, Petr; Horinek, Jan

    2016-07-01

    This paper introduces a set of methods for image and video forensic analysis. They were designed to help assess image and video credibility and origin and to restore and increase image quality by diminishing unwanted blur, noise, and other possible artifacts. The motivation came from the best practices used in criminal investigations utilizing images and/or videos. The determination of the image source, the verification of the image content, and image restoration were identified as the most important issues whose automation can facilitate criminalists' work. Novel theoretical results complemented with existing approaches (LCD re-capture detection and denoising) were implemented in the PIZZARO software tool, which consists of the image processing functionality as well as reporting and archiving functions to ensure the repeatability of image analysis procedures and thus fulfil the formal aspects of image/video analysis work. A comparison of the newly proposed methods with state-of-the-art approaches is shown. Real use cases are presented, which illustrate the functionality of the developed methods and demonstrate their applicability in different situations. The use cases as well as the method design were developed in tight cooperation between scientists from the Institute of Criminalistics, the National Drug Headquarters of the Criminal Police and Investigation Service of the Police of the Czech Republic, and image processing experts from the Czech Academy of Sciences. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  20. Automatic content-based analysis of georeferenced image data: Detection of Beggiatoa mats in seafloor video mosaics from the Håkon Mosby Mud Volcano

    NASA Astrophysics Data System (ADS)

    Jerosch, K.; Lüdtke, A.; Schlüter, M.; Ioannidis, G. T.

    2007-02-01

    The combination of new underwater technology such as remotely operated vehicles (ROVs), high-resolution video imagery, and software to compute georeferenced mosaics of the seafloor provides new opportunities for marine geological or biological studies and applications in the offshore industry. Even during single surveys by ROVs or towed systems, large amounts of images are compiled. While these underwater techniques are now well-engineered, there is still a lack of methods for the automatic analysis of the acquired image data. During ROV dives, more than 4200 georeferenced video mosaics were compiled for the Håkon Mosby Mud Volcano (HMMV). Mud volcanoes such as HMMV are considered significant source locations for methane and are characterised by unique chemoautotrophic communities such as Beggiatoa mats. For the detection and quantification of the spatial distribution of Beggiatoa mats, an automated image analysis technique was developed, which applies watershed transformation and relaxation-based labelling of pre-segmented regions. Comparison of the data derived by visual inspection of 2840 video images with the automated image analysis revealed agreement with a precision better than 90%. We consider this a step towards a time-efficient and accurate analysis of seafloor images for the computation of geochemical budgets and the identification of habitats at the seafloor.
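    The published method segments mats with watershed transformation and relaxation-based labelling; purely to illustrate the quantification step that follows segmentation, the areal coverage of bright mat pixels in a mosaic tile can be estimated with a simple intensity threshold. The tile below is synthetic, not HMMV data:

```python
import numpy as np

def mat_coverage(image, threshold):
    """Fraction of pixels classified as bacterial mat by a simple
    intensity threshold. (The published pipeline uses watershed
    segmentation; only the coverage computation is sketched here.)"""
    return float(np.mean(image >= threshold))

# Synthetic 8-bit mosaic tile: a bright Beggiatoa-like patch on dark sediment
tile = np.full((100, 100), 40, dtype=np.uint8)
tile[20:40, 30:70] = 220          # 20 x 40 pixel bright patch
print(mat_coverage(tile, 128))    # 0.08 -> 8% areal coverage
```

Summing such per-tile coverages over georeferenced mosaics is what turns segmentation output into spatial-distribution estimates for geochemical budgets.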

  1. Video tracking analysis of behavioral patterns during estrus in goats

    PubMed Central

    ENDO, Natsumi; RAHAYU, Larasati Puji; ARAKAWA, Toshiya; TANAKA, Tomomi

    2015-01-01

    Here, we report a new method for measuring behavioral patterns during estrus in goats based on video tracking analysis. Data were collected from cycling goats that were in estrus (n = 8) or not in estrus (n = 8). An observation pen (2.5 m × 2.5 m) was set up in the corner of the female paddock with one side adjacent to a male paddock. The positions and movements of goats were tracked every 0.5 sec for 10 min using video tracking software, and the trajectory data were used for the analysis. There were no significant differences in the durations of standing and walking or the total length of movement. However, the number of approaches to a male and the duration of staying near the male were higher in goats in estrus than in goats not in estrus. The proposed evaluation method may be suitable for detailed monitoring of behavioral changes during estrus in goats. PMID:26560676
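    From trajectory points sampled every 0.5 s, the two reported measures, the number of approaches to the male and the time spent near him, reduce to simple zone bookkeeping. The zone boundary and coordinates below are hypothetical:

```python
def zone_stats(track, zone_x, sample_interval=0.5):
    """Count entries into the zone nearer the male pen (x >= zone_x)
    and total time spent there, from (x, y) positions sampled at a
    fixed interval (seconds). Returns (approaches, seconds_in_zone)."""
    approaches, time_in = 0, 0.0
    inside = False
    for x, y in track:
        now_inside = x >= zone_x
        if now_inside and not inside:
            approaches += 1           # a new approach begins
        if now_inside:
            time_in += sample_interval
        inside = now_inside
    return approaches, time_in

track = [(0.5, 1.0), (1.8, 1.2), (2.2, 1.2), (2.3, 1.0),
         (1.0, 0.8), (2.1, 0.9), (0.4, 0.5)]
print(zone_stats(track, zone_x=2.0))  # (2, 1.5): two approaches, 1.5 s near the male
```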

  2. In-network adaptation of SHVC video in software-defined networks

    NASA Astrophysics Data System (ADS)

    Awobuluyi, Olatunde; Nightingale, James; Wang, Qi; Alcaraz Calero, Jose Maria; Grecos, Christos

    2016-04-01

    Software Defined Networks (SDNs), when combined with Network Function Virtualization (NFV), represent a paradigm shift in how future networks will behave and be managed. SDNs are expected to provide the underpinning technologies for future innovations such as 5G mobile networks and the Internet of Everything. The SDN architecture offers features that facilitate an abstracted and centralized global network view in which packet forwarding or dropping decisions are based on application flows. Software Defined Networks facilitate a wide range of network management tasks, including the adaptation of real-time video streams as they traverse the network. SHVC, the scalable extension to the recent H.265 standard, is a new video encoding standard that supports ultra-high definition (UHD) video streams with spatial resolutions of up to 7680×4320 and frame rates of 60 fps or more. The massive increase in bandwidth required to deliver these UHD video streams dwarfs the bandwidth requirements of current high definition (HD) video. Such large bandwidth increases pose very significant challenges for network operators. In this paper we go substantially beyond the limited number of existing implementations and proposals for video streaming in SDNs, all of which have primarily focused on traffic engineering solutions such as load balancing. By implementing and empirically evaluating an SDN-enabled Media Adaptation Network Entity (MANE) we provide valuable empirical insight into the benefits and limitations of SDN-enabled video adaptation for real-time video applications. The SDN-MANE is the video adaptation component of our Video Quality Assurance Manager (VQAM) SDN control plane application, which also includes an SDN monitoring component to acquire network metrics and a decision-making engine that uses algorithms to determine the optimum adaptation strategy for any real-time video application flow given the current network conditions. Our proposed VQAM application has been implemented and evaluated on an SDN, allowing us to provide important benchmarks for video streaming over SDN and for SDN control plane latency.
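    The paper does not reproduce its adaptation algorithm here. As a sketch of the general idea only: with a scalable codec such as SHVC, a decision engine can drop enhancement layers until the stream fits the measured available bandwidth. The cumulative layer bitrates and the headroom factor below are assumed values for illustration:

```python
def select_layer(cumulative_bitrates_kbps, available_kbps, headroom=0.9):
    """Pick the highest scalable layer whose cumulative bitrate fits the
    measured available bandwidth, keeping a safety headroom.

    Returns the layer index; 0 means base layer only (the base layer is
    always sent, even when it exceeds the budget).
    """
    budget = available_kbps * headroom
    best = 0
    for i, cumulative in enumerate(cumulative_bitrates_kbps):
        if cumulative <= budget:
            best = i
    return best
```

    A controller would re-run this selection whenever the monitoring component reports new per-flow bandwidth estimates.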

  3. Did You Just See that? Online Video Sites Can Jumpstart Lessons

    ERIC Educational Resources Information Center

    Oishi, Lindsay

    2007-01-01

    Free video-sharing Web sites like www.youtube.com allow millions of people to watch videos that are free and can be viewed immediately, without having to download any software. These videos do not provide content, but they can stimulate the interest that makes curriculum relevant or "jumpstart" lessons. The YouTube video blog of World War II…

  4. Cost/benefit analysis for video security systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1997-01-01

    Dr. Don Hush and Scott Chapman, in conjunction with the Electrical and Computer Engineering Department of the University of New Mexico (UNM), have been contracted by Los Alamos National Laboratories to perform research in the area of high security video analysis. The first phase of this research, presented in this report, is a cost/benefit analysis of various approaches to the problem in question. This discussion begins with a description of three architectures that have been used as solutions to the problem of high security surveillance. An overview of the relative merits and weaknesses of each of the proposed systems is included. These descriptions are followed directly by a discussion of the criteria chosen in evaluating the systems and the techniques used to perform the comparisons. The results are then given in graphical and tabular form, and their implications discussed. The project to this point has involved assessing hardware and software issues in image acquisition, processing and change detection. Future work is to leave these questions behind to consider the issues of change analysis - particularly the detection of human motion - and alarm decision criteria. The criteria for analysis in this report include: cost; speed; tradeoff issues in moving primitive operations from software to hardware; real time operation considerations; change image resolution; and computational requirements.

  5. An Imaging And Graphics Workstation For Image Sequence Analysis

    NASA Astrophysics Data System (ADS)

    Mostafavi, Hassan

    1990-01-01

    This paper describes an application-specific engineering workstation designed and developed to analyze imagery sequences from a variety of sources. The system combines the software and hardware environment of modern graphics-oriented workstations with digital image acquisition, processing and display techniques. The objective is to achieve automation and high throughput for many data reduction tasks involving metric studies of image sequences. The applications of such an automated data reduction tool include analysis of the trajectory and attitude of aircraft, missiles, stores and other flying objects in various flight regimes including launch and separation as well as regular flight maneuvers. The workstation can also be used in an on-line or off-line mode to study three-dimensional motion of aircraft models in simulated flight conditions such as wind tunnels. The system's key features are: 1) Acquisition and storage of image sequences by digitizing real-time video or frames from a film strip; 2) computer-controlled movie loop playback, slow motion and freeze frame display combined with digital image sharpening, noise reduction, contrast enhancement and interactive image magnification; 3) multiple leading edge tracking in addition to object centroids at up to 60 fields per second from either live input video or a stored image sequence; 4) automatic and manual field-of-view and spatial calibration; 5) image sequence data base generation and management, including the measurement data products; 6) off-line analysis software for trajectory plotting and statistical analysis; 7) model-based estimation and tracking of object attitude angles; and 8) interface to a variety of video players and film transport sub-systems.

  6. Developing a Promotional Video

    ERIC Educational Resources Information Center

    Epley, Hannah K.

    2014-01-01

    There is a need for Extension professionals to show clientele the benefits of their programs. This article shares how promotional videos are one way of reaching audiences online. An example is given of how a promotional video was developed and used with iMovie software. Tips are offered for how professionals can create a promotional video and…

  7. Uncovering Student Learning Profiles with a Video Annotation Tool: Reflective Learning with and without Instructional Norms

    ERIC Educational Resources Information Center

    Mirriahi, Negin; Liaqat, Daniyal; Dawson, Shane; Gaševic, Dragan

    2016-01-01

    This study explores the types of learning profiles that evolve from student use of video annotation software for reflective learning. The data traces from student use of the software were analysed across four undergraduate courses with differing instructional conditions. That is, the use of graded or non-graded self-reflective annotations. Using…

  8. Ability and efficiency of an automatic analysis software to measure microvascular parameters.

    PubMed

    Carsetti, Andrea; Aya, Hollmann D; Pierantozzi, Silvia; Bazurro, Simone; Donati, Abele; Rhodes, Andrew; Cecconi, Maurizio

    2017-08-01

    Analysis of the microcirculation is currently performed offline; it is time consuming and operator dependent. The aim of this study was to assess the ability and efficiency of the automatic analysis software CytoCamTools 1.7.12 (CC) to measure microvascular parameters in comparison with Automated Vascular Analysis (AVA) software 3.2. Twenty-two patients admitted to the cardiothoracic intensive care unit following cardiac surgery were prospectively enrolled. Sublingual microcirculatory videos were analysed using AVA and CC software. The total vessel density (TVD) for small vessels, perfused vessel density (PVD) and proportion of perfused vessels (PPV) were calculated. Blood flow was assessed using the microvascular flow index (MFI) for the AVA software and the averaged perfused speed indicator (APSI) for the CC software. The duration of the analysis was also recorded. Eighty-four videos from 22 patients were analysed. The bias between TVD-CC and TVD-AVA was 2.20 mm/mm² (95 % CI 1.37-3.03) with limits of agreement (LOA) of -4.39 (95 % CI -5.66 to -3.16) and 8.79 (95 % CI 7.50-10.01) mm/mm². The percentage error (PE) for TVD was ±32.2 %. TVD was positively correlated between CC and AVA (r = 0.74, p < 0.001). The bias between PVD-CC and PVD-AVA was 6.54 mm/mm² (95 % CI 5.60-7.48) with LOA of -4.25 (95 % CI -8.48 to -0.02) and 17.34 (95 % CI 13.11-21.57) mm/mm². The PE for PVD was ±61.2 %. PVD was positively correlated between CC and AVA (r = 0.66, p < 0.001). The median PPV-AVA was significantly higher than the median PPV-CC [97.39 % (95.25, 100 %) vs. 81.65 % (61.97, 88.99), p < 0.0001]. MFI categories cannot estimate or predict APSI values (p = 0.45). The time required for the analysis was shorter with CC than with the AVA system [2'42″ (2'12″, 3'31″) vs. 16'12″ (13'38″, 17'57″), p < 0.001]. TVD is comparable between the two software packages, and analysis is faster with CC. The values for PVD and PPV are not interchangeable given the different approaches to assessing microcirculatory flow.
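    The agreement statistics quoted above (bias, 95 % limits of agreement, percentage error) follow the standard Bland-Altman construction and are straightforward to reproduce. A minimal sketch; the sample values in the test are invented, not the study data:

```python
import statistics

def bland_altman(a, b):
    """Bias, 95% limits of agreement, and percentage error for paired
    measurements of the same quantity from two devices."""
    diffs = [x - y for x, y in zip(a, b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)          # needs >= 2 pairs
    loa = (bias - 1.96 * sd, bias + 1.96 * sd)
    grand_mean = statistics.mean(a + b)   # mean of all measurements
    percentage_error = 100 * 1.96 * sd / grand_mean
    return bias, loa, percentage_error
```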

  9. Digital Video Editing

    ERIC Educational Resources Information Center

    McConnell, Terry

    2004-01-01

    Monica Adams, head librarian at Robinson Secondary in Fairfax County, Virginia, states that librarians should have the technical knowledge to support projects related to digital video editing. The process of digital video editing is described, along with the cables, storage issues, and the computer system and software involved.

  10. Accelerating a MPEG-4 video decoder through custom software/hardware co-design

    NASA Astrophysics Data System (ADS)

    Díaz, Jorge L.; Barreto, Dacil; García, Luz; Marrero, Gustavo; Carballo, Pedro P.; Núñez, Antonio

    2007-05-01

    In this paper we present a novel methodology to accelerate an MPEG-4 video decoder using software/hardware co-design for wireless DAB/DMB networks. Software support includes the services provided by the embedded kernel μC/OS-II and the application tasks mapped to software. Hardware support includes several custom co-processors and a communication architecture with bridges to the main system bus and to a dual-port SRAM. Synchronization among tasks is achieved at two levels, by a hardware protocol and by kernel-level scheduling services. Our reference application is an MPEG-4 video decoder composed of several software functions and written using a special C++ library named CASSE. Profiling and design space exploration techniques were previously applied to the Advanced Simple Profile (ASP) MPEG-4 decoder to determine the best HW/SW partition, which is developed here. This research is part of the ARTEMI project, whose main goals are the establishment of methodologies for the design of real-time complex digital systems using programmable logic devices with embedded microprocessors as the target technology, and the design of multimedia systems for broadcasting networks as the reference application.

  11. Video Analysis Software and the Investigation of the Conservation of Mechanical Energy

    ERIC Educational Resources Information Center

    Bryan, Joel

    2004-01-01

    National science and mathematics standards stress the importance of integrating technology use into those fields of study at all levels of education. In order to fulfill these directives, it is necessary to introduce both in-service and preservice teachers to various forms of technology while modeling its appropriate use in investigating…

  12. Steel Spheres and Skydiver--Terminal Velocity

    ERIC Educational Resources Information Center

    Costa Leme, J.; Moura, C.; Costa, Cintia

    2009-01-01

    This paper describes the use of open source video analysis software in the study of the relationship between the velocity of falling objects and time. We discuss an experiment in which a steel sphere falls in a container filled with two immiscible liquids. The motion is similar to that of a skydiver falling through air.

  13. Using computer-based video analysis in the study of fidgety movements.

    PubMed

    Adde, Lars; Helbostad, Jorunn L; Jensenius, Alexander Refsum; Taraldsen, Gunnar; Støen, Ragnhild

    2009-09-01

    Absence of fidgety movements (FMs) in high-risk infants is a strong marker for later cerebral palsy (CP). FMs can be classified by the General Movement Assessment (GMA), based on Gestalt perception of the infant's movement pattern. More objective movement analysis may be provided by computer-based technology. The aim of this study was to explore the feasibility of computer-based video analysis of infants' spontaneous movements in classifying non-fidgety versus fidgety movements. GMA was performed from video material of the fidgety period in 82 term and preterm infants at low and high risk of developing CP. The same videos were analysed using software developed for this purpose, the General Movement Toolbox (GMT), with visualisation of the infant's movements for qualitative analyses. Variables derived from the calculation of the displacement of pixels from one video frame to the next were used for quantitative analyses. Visual representations from GMT showed easily recognisable patterns of FMs. Of the eight quantitative variables derived, the variability in the displacement of the spatial centre of active pixels in the image had the highest sensitivity (81.5%) and specificity (70.0%) in classifying FMs. By setting triage thresholds at 90% sensitivity and specificity for FMs, the need for further referral was reduced by 70%. Video recordings can be used for qualitative and quantitative analyses of FMs with GMT. GMT is easy to implement in clinical practice and may provide assistance in detecting infants without FMs.
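    GMT itself is not reproduced here, but the quantity the abstract describes (variability of the position of the spatial centre of "active" pixels between frames) can be approximated as follows. Frames are 2-D intensity lists and the change threshold is an assumed value; this is an illustrative sketch, not the GMT implementation:

```python
import statistics

def active_centroids(frames, threshold):
    """For each pair of consecutive frames, find pixels whose intensity
    changed by more than `threshold` and return the centroid of those
    active pixels as (x, y); pairs with no motion are skipped."""
    centroids = []
    for prev, cur in zip(frames, frames[1:]):
        pts = [(x, y)
               for y, (rp, rc) in enumerate(zip(prev, cur))
               for x, (p, c) in enumerate(zip(rp, rc))
               if abs(c - p) > threshold]
        if pts:
            cx = sum(p[0] for p in pts) / len(pts)
            cy = sum(p[1] for p in pts) / len(pts)
            centroids.append((cx, cy))
    return centroids

def centroid_variability(centroids):
    """Population standard deviation of centroid x and y positions across
    frames - the kind of scalar used to separate fidgety from non-fidgety
    movement patterns."""
    xs = [c[0] for c in centroids]
    ys = [c[1] for c in centroids]
    return statistics.pstdev(xs), statistics.pstdev(ys)
```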

  14. UROKIN: A Software to Enhance Our Understanding of Urogenital Motion.

    PubMed

    Czyrnyj, Catriona S; Labrosse, Michel R; Graham, Ryan B; McLean, Linda

    2018-05-01

    Transperineal ultrasound (TPUS) allows for objective quantification of mid-sagittal urogenital mechanics, yet current practice omits dynamic motion information in favor of analyzing only a rest frame and a peak-motion frame. This work details the development of UROKIN, a semi-automated software package that calculates kinematic curves of urogenital landmark motion. A proof-of-concept analysis was performed using UROKIN on TPUS videos recorded from 20 women with and 10 women without stress urinary incontinence (SUI) during maximum voluntary contraction of the pelvic floor muscles. The anorectal angle and bladder neck were tracked, while the motion of the pubic symphysis was used to compensate for the error incurred by TPUS probe motion during imaging. Kinematic curves of landmark motion were generated for each video; curves were smoothed, time-normalized, and averaged within groups. Kinematic data yielded by the UROKIN software showed statistically significant differences between women with and without SUI in the magnitude and timing characteristics of the kinematic curves depicting landmark motion. The results provide insight into the ways in which UROKIN may be used to study differences in pelvic floor muscle contraction mechanics between women with and without SUI and other pelvic floor disorders. The UROKIN software improves on methods described in the literature and provides unique capacity to further our understanding of urogenital biomechanics.
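    Time normalization and within-group averaging of kinematic curves, as described above, can be sketched with linear resampling to a fixed number of points (0-100 % of the contraction). This is a simplified illustration of the general technique, not the UROKIN code:

```python
def time_normalize(curve, n_points=101):
    """Linearly resample a kinematic curve to a fixed number of points so
    that curves of different durations can be compared and averaged."""
    m = len(curve)
    out = []
    for i in range(n_points):
        t = i * (m - 1) / (n_points - 1)   # fractional index into curve
        lo = int(t)
        hi = min(lo + 1, m - 1)
        frac = t - lo
        out.append(curve[lo] * (1 - frac) + curve[hi] * frac)
    return out

def group_mean(curves, n_points=101):
    """Point-by-point mean of time-normalized curves (group-average curve)."""
    resampled = [time_normalize(c, n_points) for c in curves]
    return [sum(vals) / len(vals) for vals in zip(*resampled)]
```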

  15. Benefit from NASA

    NASA Image and Video Library

    1999-06-01

    Two scientists at NASA's Marshall Space Flight Center, atmospheric scientist Paul Meyer and solar physicist Dr. David Hathaway, developed promising new software, called Video Image Stabilization and Registration (VISAR). VISAR may help law enforcement agencies catch criminals by improving the quality of video recorded at crime scenes. In this photograph, the single frame at left, taken at night, was brightened in order to enhance details and reduce noise or snow. To further overcome the video defects in one frame, law enforcement officials can use VISAR software to add information from multiple frames to reveal a person. Images from less than a second of videotape were added together to create the clarified image at right. VISAR stabilizes camera motion in the horizontal and vertical directions as well as rotation and zoom effects, producing clearer images of moving objects; it also smoothes jagged edges, enhances still images, and reduces video noise or snow. VISAR could also have applications in medical and meteorological imaging. It could steady images of ultrasounds, which are infamous for their grainy, blurred quality. The software can also be used in defense applications by improving reconnaissance video imagery made by military vehicles, aircraft, and ships traveling in harsh, rugged environments.
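    Once frames are aligned, "adding information from multiple frames" is simple pixel averaging: zero-mean noise falls roughly as 1/√N while the static scene is preserved. The stabilization and registration step is VISAR's actual contribution and is not shown here; this is a toy sketch of the stacking step only:

```python
def stack_frames(frames):
    """Average co-registered frames pixel-by-pixel.

    `frames` is a list of equally sized 2-D intensity lists that are
    assumed to be already stabilized/registered to each other."""
    n = len(frames)
    h, w = len(frames[0]), len(frames[0][0])
    return [[sum(f[y][x] for f in frames) / n for x in range(w)]
            for y in range(h)]
```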

  16. Management of a patient's gait abnormality using smartphone technology in-clinic for improved qualitative analysis: A case report.

    PubMed

    VanWye, William R; Hoover, Donald L

    2018-05-01

    Qualitative analysis has its limitations, as human movement often occurs more quickly than can be comprehended. Digital video allows for frame-by-frame analysis, and therefore likely more effective interventions for gait dysfunction. Although the use of digital video outside laboratory settings was challenging just a decade ago due to cost and time constraints, the rapid adoption of smartphones and software applications has made this technology much more practical for clinical use. A 35-year-old man presented for evaluation with the chief complaint of knee pain 24 months status-post triple arthrodesis following a work-related crush injury. In-clinic qualitative gait analysis revealed gait dysfunction, which was augmented by using a standard iPhone® 3GS camera. After video capture, an iPhone® application (Speed Up TV®, https://itunes.apple.com/us/app/speeduptv/id386986953?mt=8 ) allowed for frame-by-frame analysis. Corrective techniques were employed using in-clinic equipment to develop and apply a temporary heel-to-toe rocker sole (HTRS) to the patient's shoe. Post-intervention video revealed significantly improved gait efficiency with a decrease in pain. The patient was promptly fitted with a permanent HTRS orthosis. This intervention enabled the patient to successfully complete a work conditioning program and progress to job retraining. Video allows for multiple views, which can be further enhanced by using applications for frame-by-frame analysis and zoom capabilities. This is especially useful for less experienced observers of human motion, as well as for establishing comparative signs prior to implementation of training and/or permanent devices.

  17. Geoscience Videos and Their Role in Supporting Student Learning

    ERIC Educational Resources Information Center

    Wiggen, Jennifer; McDonnell, David

    2017-01-01

    A series of short (5 to 7 minutes long) geoscience videos were created to support student learning in a flipped class setting for an introductory geology class at North Carolina State University. Videos were made using a stylus, tablet, microphone, and video editing software. Essentially, we narrate a slide, sketch a diagram, or explain a figure…

  18. Meteor44 Video Meteor Photometry

    NASA Technical Reports Server (NTRS)

    Swift, Wesley R.; Suggs, Robert M.; Cooke, William J.

    2004-01-01

    Meteor44 is a software system developed at MSFC for the calibration and analysis of video meteor data. The dynamic range of the (8-bit) video data is extended by approximately 4 magnitudes for both meteors and stellar images using saturation compensation. Camera- and lens-specific saturation compensation coefficients are derived from artificial variable star laboratory measurements. Saturation compensation significantly increases the number of meteors with measured intensity and improves the estimation of meteoroid mass distribution. Astrometry is automated to determine each image's plate coefficients using appropriate star catalogs. The images are simultaneously intensity calibrated from the contained stars to determine the photon sensitivity and the saturation level referenced above the atmosphere. The camera's spectral response is used to compensate for stellar color index and typical meteor spectra in order to report meteor light curves in traditional visual magnitude units. Recent efforts include improved camera calibration procedures, long focal length "streak" meteor photometry, and two-station track determination. Meteor44 has been used to analyze data from the 2001, 2002, and 2003 MSFC Leonid observational campaigns as well as several lesser showers. The software is interactive and can be demonstrated using data from recent Leonid campaigns.
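    The intensity calibration described (deriving sensitivity from the stars in the frame, then reporting meteors in magnitude units) follows the standard photometric zero-point construction. A simplified sketch that ignores saturation and color-index compensation; fluxes are assumed background-subtracted:

```python
import math
import statistics

def zero_point(star_fluxes, star_catalog_mags):
    """Photometric zero point from field stars:
    ZP = m_catalog + 2.5*log10(flux), averaged over calibration stars."""
    return statistics.mean(m + 2.5 * math.log10(f)
                           for f, m in zip(star_fluxes, star_catalog_mags))

def calibrated_magnitude(flux, zp):
    """Convert a background-subtracted flux to a calibrated magnitude."""
    return zp - 2.5 * math.log10(flux)
```

    Applying `calibrated_magnitude` to each video field of a meteor yields the light curve in magnitude units.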

  19. Holodeck: Telepresence Dome Visualization System Simulations

    NASA Technical Reports Server (NTRS)

    Hite, Nicolas

    2012-01-01

    This paper explores the simulation and consideration of different image-projection strategies for the Holodeck, a dome that will be used for highly immersive telepresence operations in future endeavors of the National Aeronautics and Space Administration (NASA). Its visualization system will include a full 360-degree projection onto the dome's interior walls in order to display video streams from both simulations and recorded video. Because humans innately trust their vision to precisely report their surroundings, the Holodeck's visualization system is crucial to its realism. This system will be rigged with an integrated hardware and software infrastructure: a system of projectors that will communicate with a Graphics Processing Unit (GPU) and computer to both project images onto the dome and correct warping in those projections in real time. Using both Computer-Aided Design (CAD) and ray-tracing software, virtual models of various dome/projector geometries were created and simulated via tracking and analysis of virtual light sources, leading to the selection of two possible configurations for installation. Research into image warping and the generation of dome-ready video content was also conducted, including generation of fisheye images, distortion correction, and the development of a reliable content-generation pipeline.

  20. Prospectus 2000

    NASA Astrophysics Data System (ADS)

    Holmes, Jon L.; Gettys, Nancy S.

    2000-01-01

    We begin 2000 with a message about our plans for JCE Software and what you will be seeing in this column as the year progresses.

    Floppy Disk --> CD-ROM. Most software today is distributed on CD-ROM or by downloading from the Internet. Several new computers no longer include a floppy disk drive as "standard equipment". Today's software no longer fits on one or two floppies (the installation software alone can require two disks) and the cost of reproducing and distributing several disks is prohibitive. In short, distribution of software on floppy disks is no longer practical. Therefore, JCE Software will distribute all new software publications on CD-ROM rather than on disks.

    Regular Issues --> Collections. Distribution of all our software on CD-ROM allows us to extend our concept of software collections that we started with the General Chemistry Collection. Such collections will contain all the previously published software that is still "in print" (i.e., is compatible with current operating systems and hardware) and any new programs that fall under the topic of the collection. Proposed topics in addition to General Chemistry currently include Advanced Chemistry, Instrument and Laboratory Simulations, and Spectroscopy. Eventually, all regular issues will be replaced by these collections, which will be updated annually or semiannually with new programs and updates to existing programs. Abstracts for all new programs will continue to appear in this column when a collection or its update is ready for publication. We will continue to offer special issues of single larger programs (e.g. Periodic Table Live!, Chemistry Comes Alive! volumes) on CD-ROM and video on videotape.

    Connect with Your Students outside Class. JCE Software has always offered network licenses to allow instructors to make our software available to students in computer labs, but that model no longer fits the way many instructors and students work with computers. Many students (or their families) own a personal computer, allowing them much more flexibility than a campus computer lab. Many instructors utilize the World Wide Web, creating HTML pages for students to use. JCE Software has options available to take advantage of both of these developments.

    Software Adoption. To provide students who own computers access to JCE Software programs, consider adopting one or more of our CD-ROMs as you would a textbook. The General Chemistry Collection has been adopted by several general chemistry courses. We can arrange to bundle CDs with laboratory manuals or to be sold separately to students through the campus bookstore. The cost per CD can be quite low (as little as $5) when large numbers are ordered, making this a cost-effective method of allowing students access to the software they need whenever and wherever they desire.

    Web-Ready Publications. Several JCE Software programs use HTML to present the material. Viewed with the ubiquitous Internet browser, HTML is compatible with both Mac OS and Windows (as well as most other current operating systems) and provides a flexible hypermedia interface that is familiar to an increasing number of instructors and students. HTML-based publications are also ready for use on local intranets, with appropriate licensing, and can be readily incorporated into other HTML-based materials. Already published in this format are: Chemistry Comes Alive!, Volumes 1 and 2 (Special Issues 18 and 21), Flying over Atoms (Special Issue 19), and Periodic Table Live! Second Edition (Special Issue 17). Solid State Resources Second Edition (Special Issue 12) and Chemistry Comes Alive!, Volume 3 (Special Issue 23) will be available soon. Other submissions being developed in HTML format include ChemPages Laboratory and Multimedia General Chemistry Problems. Contact the JCE Software office to learn about licensing alternatives that take advantage of the World Wide Web. Periodic Table Live! 2nd ed. is one of JCE Software's "Web-ready" publications.

    Publication Plans for 2000. We have several exciting new issues planned for publication in the coming year.

    Chemistry Comes Alive! The Chemistry Comes Alive! (CCA!) series continues with additional CD-ROMs for Mac OS and Windows. Each volume in this series contains video and animations of chemical reactions that can be easily incorporated into your own computer-based presentations. Our digital video now uses state-of-the-art compression that yields higher quality video with smaller file sizes and data rates more suited for WWW delivery. Video for Periodic Table Live! 2nd edition, Chemistry Comes Alive! Volume 3, ChemPages Laboratory, and Multimedia General Chemistry Problems uses this new format. We will be releasing updates of CCA! Volumes 1 and 2 to take advantage of this new technology. We are very pleased with the results and think you will be also. The reaction of aluminum with chlorine is included in Chemistry Comes Alive! Volume 3.

    ChemPages Laboratory. ChemPages Laboratory, developed by the New Traditions Curriculum Project at the University of Wisconsin-Madison, is an HTML-based CD-ROM for Mac OS and Windows that contains lessons and tutorials to prepare introductory chemistry students to work in the laboratory. It includes text, photographs, computer graphics, animations, digital video, and voice narration to introduce students to laboratory equipment and procedures. ChemPages Laboratory teaches introductory chemistry students about laboratory instruments, equipment, and procedures.

    Versatile Video. Video demonstrating the "drinking bird" is included in the Chemistry Comes Alive! video collection. Video from this collection can be incorporated into many other projects. As an example, David Whisnant has used the drinking bird in his Multimedia General Chemistry Problems, where students view the video and are asked to explain why the bird bobs up and down. JCE Software anticipates publication of Multimedia General Chemistry Problems on CD-ROM for Mac OS and Windows in 2000. It will be "Web-ready".

    General Chemistry Collection, 4th Edition. The General Chemistry Collection will be revised early in the summer and CDs will be shipped in time for fall adoptions. The 4th edition will include JCE Software publications for general chemistry published in 1999, as well as any programs for general chemistry accepted in 2000.

    Regular Issues. We have had many recent submissions and submissions of work in progress. In 2000 we will work with the authors and our peer-reviewers to complete and publish these submissions individually or as part of a software collection on CD-ROM.

    An Invitation. In collaboration with JCE Online we plan to make available in 2000 more support files for JCE Software. These will include not only troubleshooting tips and technical support notes, but also supporting information submitted by users such as lessons, specific assignments, and activities using JCE Software publications. All JCE Software users are invited to contribute to this area. Get in touch with JCE Software and let us know how you are using our materials so that we can share your ideas with others! Although the word software is in our name, many of our publications are not traditional software. We also publish video on videotape, videodisc, and CD-ROM, and electronic documents (Mathcad and Mathematica, spreadsheet files and macros, HTML documents, and PowerPoint presentations). Most chemistry instructors who use a computer in their teaching have created or considered creating one or more of these for their classes. If you have an original computer presentation, electronic document, animation, video, or any other item that is not printed text, it is probably an appropriate submission for JCE Software. By publishing your work in any branch of the Journal of Chemical Education, you will share your efforts with chemistry instructors and students all over the world and get professional recognition for your achievements. All JCE Software publications are Y2K compliant.

  1. Development and application of traffic flow information collecting and analysis system based on multi-type video

    NASA Astrophysics Data System (ADS)

    Lu, Mujie; Shang, Wenjie; Ji, Xinkai; Hua, Mingzhuang; Cheng, Kuo

    2015-12-01

    Nowadays, the intelligent transportation system (ITS) has become the new direction of transportation development. Traffic data, as a fundamental part of intelligent transportation systems, is playing an increasingly crucial role. In recent years, video observation technology has been widely used in the field of traffic information collection. Traffic flow information contained in video data has many advantages: it is comprehensive and can be stored for a long time. However, there are still many problems, such as low precision and high cost, in the process of collecting information. Aiming at these problems, this paper proposes a broadly applicable traffic target detection method. Based on three different ways of obtaining video data, namely aerial photography, fixed cameras, and handheld cameras, we develop intelligent analysis software that can be used to extract macroscopic and microscopic traffic flow information from video, and the information can be used for traffic analysis and transportation planning. For road intersections, the system uses the frame difference method to extract traffic information; for freeway sections, the system uses the optical flow method to track vehicles. The system was applied in Nanjing, Jiangsu province, and the application shows that the system extracts different types of traffic flow information with high accuracy; it can meet the needs of traffic engineering observations and has good application prospects.
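    The frame difference method used at intersections can be sketched in a few lines: subtract consecutive frames and flag motion when enough pixels change. The thresholds below are assumed values, and real systems add background modelling and morphological cleanup:

```python
def motion_fraction(prev, cur, threshold):
    """Fraction of pixels whose intensity changed by more than `threshold`
    between two consecutive frames (simple frame differencing)."""
    changed = sum(1 for rp, rc in zip(prev, cur)
                  for p, c in zip(rp, rc) if abs(c - p) > threshold)
    total = len(prev) * len(prev[0])
    return changed / total

def detect_vehicle(prev, cur, threshold=20, min_fraction=0.05):
    """Flag a vehicle in the detection zone when enough pixels changed."""
    return motion_fraction(prev, cur, threshold) >= min_fraction
```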

  2. Content and ratings of mature-rated video games.

    PubMed

    Thompson, Kimberly M; Tepichin, Karen; Haninger, Kevin

    2006-04-01

To quantify the depiction of violence, blood, sexual themes, profanity, substances, and gambling in video games rated M (for "mature") and to measure agreement between the content observed and the rating information provided to consumers on the game box by the Entertainment Software Rating Board. We created a database of M-rated video game titles, selected a random sample, recorded at least 1 hour of game play, quantitatively assessed the content, performed statistical analyses to describe the content, and compared our observations with the Entertainment Software Rating Board content descriptors and results of our prior studies. The study was conducted at Harvard University, Boston, Mass; the participants were the authors and 1 hired game player, and the study units were M-rated video games. The main outcome measures were the percentages of game play depicting violence, blood, sexual themes, gambling, alcohol, tobacco, or other drugs, and the use of profanity in dialogue, song lyrics, or gestures. Although the Entertainment Software Rating Board content descriptors for violence and blood provide a good indication of such content in the game, we identified 45 observations of content that could warrant a content descriptor in 29 games (81%) that lacked these content descriptors. M-rated video games are significantly more likely to contain blood, profanity, and substances; depict more severe injuries to human and nonhuman characters; and have a higher rate of human deaths than video games rated T (for "teen"). Parents and physicians should recognize that popular M-rated video games contain a wide range of unlabeled content and may expose children and adolescents to messages that may negatively influence their perceptions, attitudes, and behaviors.

  3. An optimized video system for augmented reality in endodontics: a feasibility study.

    PubMed

    Bruellmann, D D; Tjaden, H; Schwanecke, U; Barth, P

    2013-03-01

We propose an augmented reality system for the reliable detection of root canals in video sequences based on a k-nearest neighbor color classification, and introduce a simple geometric criterion for teeth. The new software was implemented using C++, Qt, and the image processing library OpenCV. Teeth are detected in video images to restrict the segmentation of the root canal orifices by using a k-nearest neighbor algorithm. The locations of the root canal orifices were determined using Euclidean distance-based image segmentation. A set of 126 human teeth with known and verified locations of the root canal orifices was used for evaluation. The software detects root canal orifices for automatic classification of the teeth in video images and stores the location and size of the found structures. Overall, 287 of 305 root canals were correctly detected, for an overall sensitivity of about 94%. Classification accuracy ranged from 65.0 to 81.2% for molars and from 85.7 to 96.7% for premolars. The software shows that observations made in anatomical studies can be exploited to automate real-time detection of root canal orifices and tooth classification. Automatic storage of the location, size, and orientation of the found structures can be used for future anatomical studies. Thus, statistical tables with canal locations can be derived, which may improve anatomical knowledge of the teeth and alleviate root canal detection in the future. For this purpose the software is freely available at: http://www.dental-imaging.zahnmedizin.uni-mainz.de/.
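A k-nearest neighbor color classification of the kind described can be sketched as follows. This is a simplified illustration, not the published C++/OpenCV code; the training colors, labels, and the choice of k are assumptions made for the example:

```python
import numpy as np

def knn_classify(pixel, training_pixels, training_labels, k=3):
    """Label one RGB pixel by majority vote among its k nearest
    training pixels (Euclidean distance in RGB space)."""
    d = np.linalg.norm(training_pixels - pixel, axis=1)
    nearest = training_labels[np.argsort(d)[:k]]
    values, counts = np.unique(nearest, return_counts=True)
    return values[np.argmax(counts)]

# Hypothetical training colors: "tooth" (bright) vs "canal" (dark orifice).
train = np.array([[230, 220, 200], [240, 235, 210], [225, 215, 205],
                  [40, 20, 15], [55, 30, 25], [35, 25, 20]], dtype=float)
labels = np.array(["tooth", "tooth", "tooth", "canal", "canal", "canal"])

dark_label = knn_classify(np.array([50.0, 28.0, 22.0]), train, labels)
```

Classifying every pixel of a frame this way restricts the subsequent segmentation to candidate orifice regions, mirroring the two-stage approach in the abstract.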

  4. Video Image Stabilization and Registration (VISAR) Software

    NASA Technical Reports Server (NTRS)

    1999-01-01

Two scientists at NASA Marshall Space Flight Center, atmospheric scientist Paul Meyer (left) and solar physicist Dr. David Hathaway, have developed promising new software, called Video Image Stabilization and Registration (VISAR), that may help law enforcement agencies catch criminals by improving the quality of video recorded at crime scenes. VISAR stabilizes camera motion in the horizontal and vertical directions as well as rotation and zoom effects; produces clearer images of moving objects; smoothes jagged edges; enhances still images; and reduces video noise such as snow. VISAR could also have applications in medical and meteorological imaging: it could steady ultrasound images, which are infamous for their grainy, blurred quality, and would be especially useful for tracking tornadoes and other whirling objects, helping to determine a tornado's wind speed. This image shows the two scientists reviewing an enhanced video image of a license plate taken from a moving automobile.
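The record does not disclose VISAR's internal algorithm. As a generic illustration of how the translational component of camera motion can be estimated between two frames, here is a standard phase-correlation sketch; the frame sizes, random image, and shift values are all invented:

```python
import numpy as np

def estimate_shift(ref, moved):
    """Estimate the integer (row, col) translation taking `ref` to `moved`
    via phase correlation: the normalized cross-power spectrum's inverse
    FFT peaks at the shift."""
    F1 = np.fft.fft2(ref)
    F2 = np.fft.fft2(moved)
    cross = F2 * np.conj(F1)
    cross /= np.abs(cross) + 1e-12      # keep phase only
    corr = np.fft.ifft2(cross).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Map peak indices to signed shifts (wrap-around convention).
    return tuple(int(p) if p <= s // 2 else int(p - s)
                 for p, s in zip(peak, corr.shape))

rng = np.random.default_rng(0)
ref = rng.random((64, 64))
moved = np.roll(ref, shift=(3, -5), axis=(0, 1))  # simulated camera jitter
shift = estimate_shift(ref, moved)
```

A stabilizer would apply the negated shift to each frame; handling rotation and zoom, as VISAR does, requires a richer motion model than this translation-only sketch.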

  5. Highly efficient simulation environment for HDTV video decoder in VLSI design

    NASA Astrophysics Data System (ADS)

    Mao, Xun; Wang, Wei; Gong, Huimin; He, Yan L.; Lou, Jian; Yu, Lu; Yao, Qingdong; Pirsch, Peter

    2002-01-01

As the complexity of VLSI designs increases, particularly for a system-on-chip (SoC) MPEG-2 video decoder with HDTV scalability, simulation and verification of the full design, even at the behavioral level in HDL, often proves very slow and costly, and full verification is difficult to perform until late in the design process. These tasks therefore become the bottleneck of the HDTV video decoder design procedure and strongly influence its time to market. In this paper, the architecture of the hardware/software interface of an HDTV video decoder is studied, and a Hardware-Software Mixed Simulation (HSMS) platform, based on the MPEG-2 video decoding algorithm, is proposed to detect and correct errors in the early design stage. The application of HSMS to the target system is achieved through several introduced approaches that speed up the simulation and verification task without decreasing performance.

  6. Determination of the static friction coefficient from circular motion

    NASA Astrophysics Data System (ADS)

    Molina-Bolívar, J. A.; Cabrerizo-Vílchez, M. A.

    2014-07-01

This paper describes a physics laboratory exercise for determining the coefficient of static friction between two surfaces. The circular motion of a coin placed on the surface of a rotating turntable has been studied. For this purpose, the motion is recorded with a high-speed digital video camera recording at 240 frames per second, and the videos are analyzed using Tracker video-analysis software, allowing the students to dynamically model the motion of the coin. The students have to obtain the static friction coefficient by comparing the centripetal and maximum static friction forces. The experiment only requires simple and inexpensive materials. The dynamics of circular motion and static friction forces are difficult for many students to understand. The proposed laboratory exercise addresses these topics, which are relevant to the physics curriculum.
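The comparison of forces described above reduces to equating the maximum static friction force with the required centripetal force at the moment of slipping, mu_s * m * g = m * omega^2 * r, so mu_s = omega^2 * r / g. A minimal sketch with invented slip values (not data from the paper):

```python
import math

def static_friction_coefficient(freq_hz, radius_m, g=9.81):
    """At the moment the coin slips, static friction equals the required
    centripetal force: mu_s * m * g = m * omega**2 * r (mass cancels)."""
    omega = 2 * math.pi * freq_hz  # angular velocity, rad/s
    return omega**2 * radius_m / g

# Illustrative values (not from the paper): the coin slips at 0.8 rev/s
# when placed 10 cm from the turntable axis.
mu_s = static_friction_coefficient(0.8, 0.10)
```

In the exercise, the slip frequency would come from the Tracker-measured angular position of the coin just before it slides off.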

  7. Software manual for operating particle displacement tracking data acquisition and reduction system

    NASA Technical Reports Server (NTRS)

    Wernet, Mark P.

    1991-01-01

The software manual is presented. The necessary steps required to record, analyze, and reduce Particle Image Velocimetry (PIV) data using the Particle Displacement Tracking (PDT) technique are described. The new PDT system is an all-electronic technique employing a CCD video camera and a large memory buffer frame-grabber board to record low-velocity (less than or equal to 20 cm/s) flows. Using a simple encoding scheme, a time sequence of single-exposure images is time coded into a single image and then processed to track particle displacements and determine 2-D velocity vectors. All the PDT data acquisition, analysis, and data reduction software is written to run on an 80386 PC.
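The final reduction step, turning matched particle displacements into 2-D velocity vectors, amounts to dividing displacements by the inter-exposure time. A minimal sketch with hypothetical particle positions and timing (not the PDT system's code):

```python
import numpy as np

def velocity_vectors(pos_t1, pos_t2, dt):
    """2-D velocity vectors (cm/s) from matched particle positions (cm)
    in two successive coded exposures separated by dt seconds."""
    return (pos_t2 - pos_t1) / dt

# Hypothetical matched particle positions, in cm.
p1 = np.array([[1.00, 2.00], [4.00, 4.00]])
p2 = np.array([[1.05, 2.10], [3.95, 4.10]])
v = velocity_vectors(p1, p2, dt=0.01)  # 10 ms between exposures
speeds = np.linalg.norm(v, axis=1)     # magnitudes, cm/s
```

The hard part of PDT, matching each particle between exposures via the encoding scheme, is assumed already done here.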

  8. Association between facial expression and PTSD symptoms among young children exposed to the Great East Japan Earthquake: a pilot study.

    PubMed

    Fujiwara, Takeo; Mizuki, Rie; Miki, Takahiro; Chemtob, Claude

    2015-01-01

    "Emotional numbing" is a symptom of post-traumatic stress disorder (PTSD) characterized by a loss of interest in usually enjoyable activities, feeling detached from others, and an inability to express a full range of emotions. Emotional numbing is usually assessed through self-report, and is particularly difficult to ascertain among young children. We conducted a pilot study to explore the use of facial expression ratings in response to a comedy video clip to assess emotional reactivity among preschool children directly exposed to the Great East Japan Earthquake. This study included 23 child participants. Child PTSD symptoms were measured using a modified version of the Parent's Report of the Child's Reaction to Stress scale. Children were filmed while watching a 2-min video compilation of natural scenes ('baseline video') followed by a 2-min video clip from a television comedy ('comedy video'). Children's facial expressions were processed the using Noldus FaceReader software, which implements the Facial Action Coding System (FACS). We investigated the association between PTSD symptom scores and facial emotion reactivity using linear regression analysis. Children with higher PTSD symptom scores showed a significantly greater proportion of neutral facial expressions, controlling for sex, age, and baseline facial expression (p < 0.05). This pilot study suggests that facial emotion reactivity, measured using facial expression recognition software, has the potential to index emotional numbing in young children. This pilot study adds to the emerging literature on using experimental psychopathology methods to characterize children's reactions to disasters.
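The core statistical step, regressing the proportion of neutral expressions on PTSD symptom scores, can be sketched as an ordinary least-squares fit. The data below are hypothetical and the covariates (sex, age, baseline expression) are omitted for brevity; this is not the study's dataset or analysis code:

```python
import numpy as np

# Hypothetical data: PTSD symptom score and the proportion of neutral
# facial expressions during the comedy clip for 8 children.
ptsd = np.array([2, 5, 8, 3, 9, 1, 7, 6], dtype=float)
neutral = np.array([0.30, 0.42, 0.55, 0.33, 0.60, 0.25, 0.50, 0.47])

# Ordinary least squares: neutral = b0 + b1 * ptsd
X = np.column_stack([np.ones_like(ptsd), ptsd])
b0, b1 = np.linalg.lstsq(X, neutral, rcond=None)[0]
# A positive b1 would mirror the study's direction of association:
# more symptoms, a greater proportion of neutral expressions.
```

The actual analysis would add covariate columns to `X` and report a p-value for b1 rather than just its sign.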

  9. Going Pro: Schools Embrace Video Production and Videoconferencing

    ERIC Educational Resources Information Center

    Stearns, Jared

    2006-01-01

    K-12 schools are broadening their curriculum offerings to include audio, video, and other multimodal styles of communication. A combination of savvy digital natives, affordable software, and online tutoring has created a perfect opportunity to integrate professional level video and videoconferencing into curricula. Educators are also finding…

  10. Video Conferencing: The Next Wave for International Business Communication.

    ERIC Educational Resources Information Center

    Sondak, Norman E.; Sondak, Eileen M.

This paper suggests that desktop computer-based video conferencing, with high-fidelity sound and group software support, is emerging as a major communications option. Briefly addressed are the following critical factors propelling the computer-based video conferencing revolution: (1) widespread availability of desktop computers…

  11. To Spice Up Course Work, Professors Make Their Own Videos

    ERIC Educational Resources Information Center

    Young, Jeffrey R.

    2008-01-01

    College faculty members have recently begun creating homemade videos to supplement their lectures, using free or low-cost software. These are the same technologies that make it easy for students to post spoof videos on YouTube, but the educators are putting the tools to educational use. Students tune in to the short videos more often than they…

  12. BEMOVI, software for extracting behavior and morphology from videos, illustrated with analyses of microbes.

    PubMed

    Pennekamp, Frank; Schtickzelle, Nicolas; Petchey, Owen L

    2015-07-01

Microbes are critical components of ecosystems and provide vital services (e.g., photosynthesis, decomposition, nutrient recycling). The diverse roles microbes play in natural ecosystems give rise to high levels of functional diversity. Quantifying this diversity is challenging, because it is only weakly associated with morphological differentiation. In addition, the small size of microbes hinders morphological and behavioral measurements at the individual level, as well as observation of interactions between individuals. Advances in microbial community genetics and genomics, flow cytometry, and digital analysis of still images are promising approaches, but they miss a very important aspect of populations and communities: the behavior of individuals. Video analysis complements these methods by providing, in addition to abundance and trait measurements, detailed behavioral information, capturing dynamic processes such as movement, and hence has the potential to describe interactions between individuals. We introduce BEMOVI, a package using the R and ImageJ software, to extract abundance, morphology, and movement data for tens to thousands of individuals in a video. Through a set of functions, BEMOVI identifies the individuals present in a video, reconstructs their movement trajectories through space and time, and merges this information into a single database. BEMOVI is a modular set of functions that can be customized to the peculiarities of the videos to be analyzed, in terms of the organisms' features (e.g., morphology or movement) and how they can be distinguished from the background. We illustrate the validity and accuracy of the method with an example on experimental multispecies communities of aquatic protists. We show high correspondence between manual and automatic counts and illustrate how simultaneous time series of abundance, morphology, and behavior are obtained from BEMOVI. We further demonstrate how the trait data can be used with machine learning to automatically classify individuals into species, and that information on movement behavior improves the predictive ability.
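The trajectory-reconstruction step BEMOVI performs can be illustrated with a greedy nearest-neighbour frame-to-frame linking sketch. This is a simplified stand-in for BEMOVI's R/ImageJ tracking, with invented coordinates and a hypothetical `max_dist` gating parameter:

```python
import numpy as np

def link_frames(prev_pts, curr_pts, max_dist=5.0):
    """Greedy nearest-neighbour linking: for each detection in the previous
    frame, claim the closest unclaimed detection in the current frame,
    rejecting matches farther than max_dist (in pixels)."""
    links, claimed = {}, set()
    for i, p in enumerate(prev_pts):
        d = np.linalg.norm(curr_pts - p, axis=1)
        for j in np.argsort(d):
            if d[j] > max_dist:
                break                 # no plausible match for this particle
            if int(j) not in claimed:
                links[i] = int(j)
                claimed.add(int(j))
                break
    return links

# Two detections per frame; each moves by ~1-2 pixels between frames.
prev_pts = np.array([[10.0, 10.0], [50.0, 50.0]])
curr_pts = np.array([[51.0, 52.0], [11.0, 9.0]])
links = link_frames(prev_pts, curr_pts)
```

Chaining such links across all frames yields the per-individual trajectories from which movement behavior is then summarized.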

  13. Visualization of fluid dynamics at NASA Ames

    NASA Technical Reports Server (NTRS)

    Watson, Val

    1989-01-01

    The hardware and software currently used for visualization of fluid dynamics at NASA Ames is described. The software includes programs to create scenes (for example particle traces representing the flow over an aircraft), programs to interactively view the scenes, and programs to control the creation of video tapes and 16mm movies. The hardware includes high performance graphics workstations, a high speed network, digital video equipment, and film recorders.

  14. Software Support during a Control Room Upgrade

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Michele Joyce; Michael Spata; Thomas Oren

    2005-09-21

In 2004, after 14 years of accelerator operations and commissioning, Jefferson Lab renovated its main control room. Changes in technology and lessons learned during those 14 years drove the control room redesign in a new direction, one that optimizes workflow and makes critical information and controls available to everyone in the control room. Fundamental changes in a variety of software applications were required to facilitate the new operating paradigm. A critical component of the new control room design is a large-format video wall that is used to make a variety of operating information available to everyone in the room. Analog devices such as oscilloscopes and function generators are now displayed on the video wall through two crosspoint switchers: one for analog signals and another for video signals. A new software GUI replaces manual configuration of the oscilloscopes and function generators and helps automate setup. Monitoring screens, customized for the video wall, now make important operating information visible to everyone, not just a single operator. New alarm handler software gives any operator, on any workstation, access to all alarm handler functionality, and multiple users can now contribute to a single electronic logbook entry. To further support the shift to distributed access and control, many applications have been redesigned to run on servers instead of on individual workstations.

  15. Natural 3D content on glasses-free light-field 3D cinema

    NASA Astrophysics Data System (ADS)

    Balogh, Tibor; Nagy, Zsolt; Kovács, Péter Tamás.; Adhikarla, Vamsi K.

    2013-03-01

This paper presents a complete framework for capturing, processing, and displaying free-viewpoint video on a large-scale immersive light-field display. We present a combined hardware-software solution to visualize free-viewpoint 3D video on a cinema-sized screen. The new glasses-free 3D projection technology can support a larger audience than existing autostereoscopic displays. We introduce and describe our new display system, including optical and mechanical design considerations, the capturing system and render cluster for producing the 3D content, and the various software modules driving the system. The indigenous display is the first of its kind, equipped with front-projection light-field HoloVizio technology controlling up to 63 MP. It has all the advantages of previous light-field displays and, in addition, allows a more flexible arrangement with a larger screen size, matching cinema or meeting-room geometries, yet is simpler to set up. The software system makes it possible to show 3D applications in real time, besides natural content captured from dense camera arrangements as well as from sparse cameras covering a wider baseline. Our software system, on the GPU-accelerated render cluster, can also visualize pre-recorded Multi-view Video plus Depth (MVD4) videos on this glasses-free light-field cinema system, interpolating and extrapolating missing views.

  16. MIDAS: Software for the detection and analysis of lunar impact flashes

    NASA Astrophysics Data System (ADS)

    Madiedo, José M.; Ortiz, José L.; Morales, Nicolás; Cabrera-Caño, Jesús

    2015-06-01

Since 2009 we have been running a project to identify flashes produced by the impact of meteoroids on the surface of the Moon. For this purpose we employ small telescopes and high-sensitivity CCD video cameras. To automatically identify these events, a software package called MIDAS was developed and tested. The package can also perform the photometric analysis of these flashes and estimate the value of the luminous efficiency. In addition, we have implemented in MIDAS a new method to establish the likely source of the meteoroids (a known meteoroid stream or the sporadic background). The main features of this computer program are analyzed here, and some examples of lunar impact events are presented.
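Luminous efficiency relates the energy radiated by a flash to the impactor's kinetic energy, eta = E_rad / (0.5 * m * v**2). A minimal sketch with purely illustrative numbers (not values or methods from the paper, which derives the radiated energy photometrically):

```python
def luminous_efficiency(radiated_energy_j, mass_kg, speed_m_s):
    """Fraction of the impactor's kinetic energy emitted as light:
    eta = E_radiated / (0.5 * m * v**2)."""
    kinetic = 0.5 * mass_kg * speed_m_s**2
    return radiated_energy_j / kinetic

# Illustrative numbers: a 50 g meteoroid hitting the Moon at 17 km/s,
# with 1.4e4 J inferred as radiated in the flash.
eta = luminous_efficiency(1.4e4, 0.05, 17e3)
```

In practice the logic runs the other way: MIDAS measures the flash brightness, and an assumed luminous efficiency converts it into an estimate of the impactor's mass.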

  17. On-line 3-dimensional confocal imaging in vivo.

    PubMed

    Li, J; Jester, J V; Cavanagh, H D; Black, T D; Petroll, W M

    2000-09-01

In vivo confocal microscopy through focusing (CMTF) can provide a 3-D stack of high-resolution corneal images and allows objective measurements of corneal sublayer thickness and backscattering. However, current systems require time-consuming off-line image processing and analysis on multiple software platforms. Furthermore, there is a trade-off between CMTF speed and measurement precision. The purpose of this study was to develop a novel on-line system for in vivo corneal imaging and analysis that overcomes these limitations. A tandem scanning confocal microscope (TSCM) was used for corneal imaging. The TSCM video camera was interfaced directly to a PC image acquisition board to implement real-time digitization. Software was developed to allow in vivo 2-D imaging, CMTF image acquisition, interactive 3-D reconstruction, and analysis of CMTF data to be performed on line in a single user-friendly environment. A procedure was also incorporated to separate the odd/even video fields, thereby doubling the CMTF sampling rate and theoretically improving the precision of CMTF thickness measurements by a factor of two. In vivo corneal examinations of a normal human and a photorefractive keratectomy patient are presented to demonstrate the capabilities of the new system. Improvements in the convenience, speed, and functionality of in vivo CMTF image acquisition, display, and analysis are demonstrated. This is the first full-featured software package designed for in vivo TSCM imaging of the cornea, which performs both 2-D and 3-D image acquisition, display, and processing as well as CMTF analysis. The use of a PC platform and incorporation of easy-to-use, on-line, and interactive features should help to improve the clinical utility of this technology.
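The odd/even field separation that doubles the CMTF sampling rate is, in essence, splitting an interlaced frame into its two sets of alternate scan lines, each captured at a slightly different instant. A minimal sketch (illustrative only; the original system operates on TSCM video hardware, not NumPy arrays):

```python
import numpy as np

def split_fields(frame):
    """Split one interlaced video frame into its even and odd fields
    (alternate scan lines). Treating the fields as separate samples
    doubles the temporal sampling rate at half the vertical resolution."""
    return frame[0::2, :], frame[1::2, :]

frame = np.arange(24).reshape(6, 4)   # toy 6-line interlaced frame
even, odd = split_fields(frame)       # each field has half the lines
```

For CMTF, each field becomes an extra depth sample along the focus sweep, which is what improves the thickness-measurement precision.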

  18. [Utility of Smartphone in Home Care Medicine - First Trial].

    PubMed

    Takeshige, Toshiyuki; Hirano, Chiho; Nakagawa, Midori; Yoshioka, Rentaro

    2015-12-01

The use of video calls for home care can reduce anxiety and offer patients peace of mind. The most suitable terminals at facilities supporting home care have been the iPad Air and iPhone with FaceTime software; however, usage has been limited to these specific terminals. To eliminate the need for special terminals and software, we have developed a program customized to the needs of facilities using Web Real-Time Communication (WebRTC), in cooperation with the University of Aizu. With this software, video calls can accommodate the large number of home care patients.

  19. ACQ4: an open-source software platform for data acquisition and analysis in neurophysiology research.

    PubMed

    Campagnola, Luke; Kratz, Megan B; Manis, Paul B

    2014-01-01

    The complexity of modern neurophysiology experiments requires specialized software to coordinate multiple acquisition devices and analyze the collected data. We have developed ACQ4, an open-source software platform for performing data acquisition and analysis in experimental neurophysiology. This software integrates the tasks of acquiring, managing, and analyzing experimental data. ACQ4 has been used primarily for standard patch-clamp electrophysiology, laser scanning photostimulation, multiphoton microscopy, intrinsic imaging, and calcium imaging. The system is highly modular, which facilitates the addition of new devices and functionality. The modules included with ACQ4 provide for rapid construction of acquisition protocols, live video display, and customizable analysis tools. Position-aware data collection allows automated construction of image mosaics and registration of images with 3-dimensional anatomical atlases. ACQ4 uses free and open-source tools including Python, NumPy/SciPy for numerical computation, PyQt for the user interface, and PyQtGraph for scientific graphics. Supported hardware includes cameras, patch clamp amplifiers, scanning mirrors, lasers, shutters, Pockels cells, motorized stages, and more. ACQ4 is available for download at http://www.acq4.org.

  20. GISentinel: a software platform for automatic ulcer detection on capsule endoscopy videos

    NASA Astrophysics Data System (ADS)

    Yi, Steven; Jiao, Heng; Meng, Fan; Leighton, Jonathon A.; Shabana, Pasha; Rentz, Lauri

    2014-03-01

In this paper, we present a novel and clinically valuable software platform for automatic ulcer detection in the gastrointestinal (GI) tract from capsule endoscopy (CE) videos. Typical CE videos run about 8 hours and must be reviewed manually by physicians to detect and locate diseases such as ulcers and bleeding. The process is time consuming, and the long manual review makes missed findings likely. Working with our collaborators, we focused on developing a software platform called GISentinel for fully automated GI tract ulcer detection and classification. The software comprises 3 parts: frequency-based Log-Gabor filter region-of-interest (ROI) extraction; a unique feature selection and validation method (e.g., illumination-invariant features, color-independent features, and symmetrical texture features); and cascade SVM classification for handling "ulcer vs. non-ulcer" cases. In our experiments, the software gave decent results: the frame-wise ulcer detection rate was 69.65% (319/458), the instance-wise ulcer detection rate was 82.35% (28/34), and the false alarm rate was 16.43% (34/207). This work is part of our innovative 2D/3D-based GI tract disease detection software platform, whose final goal is the intelligent detection and classification of major GI tract diseases, such as bleeding, ulcers, and polyps, from CE videos. This paper mainly describes the automatic ulcer detection functional module.
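The radial part of a Log-Gabor filter of the kind used for ROI extraction has the standard transfer function G(f) = exp(-(ln(f/f0))^2 / (2 * ln(sigma/f0)^2)), which has no DC component. The sketch below uses assumed parameters; the paper's actual filter bank, orientations, and parameter values are not given in the abstract:

```python
import numpy as np

def log_gabor_radial(size, f0=0.1, sigma_ratio=0.55):
    """Radial log-Gabor transfer function on a size x size frequency grid:
    G(f) = exp(-(ln(f/f0))^2 / (2 * ln(sigma_ratio)^2)), with G(0) = 0."""
    fy = np.fft.fftfreq(size)[:, None]
    fx = np.fft.fftfreq(size)[None, :]
    f = np.sqrt(fx**2 + fy**2)
    f[0, 0] = 1.0  # placeholder to avoid log(0); DC is zeroed below
    G = np.exp(-(np.log(f / f0))**2 / (2 * np.log(sigma_ratio)**2))
    G[0, 0] = 0.0  # log-Gabor filters have no DC response
    return G

G = log_gabor_radial(64)
# Multiplying an image's FFT by G and inverting isolates the band-pass
# structure around the center frequency f0, from which ROIs are picked.
```

A full bank would combine several center frequencies with angular (orientation) components; this sketch shows only one radial band.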

  1. Motmot, an open-source toolkit for realtime video acquisition and analysis.

    PubMed

    Straw, Andrew D; Dickinson, Michael H

    2009-07-22

Video cameras sense passively from a distance, offer a rich information stream, and provide intuitively meaningful raw data. Camera-based imaging has thus proven critical for many advances in neuroscience and biology, with applications ranging from cellular imaging of fluorescent dyes to tracking of whole-animal behavior at ecologically relevant spatial scales. Here we present 'Motmot': an open-source software suite for acquiring, displaying, saving, and analyzing digital video in real-time. At the highest level, Motmot is written in the Python computer language. The large amounts of data produced by digital cameras are handled by low-level, optimized functions, usually written in C. This high-level/low-level partitioning and use of select external libraries allow Motmot, with only modest complexity, to perform well as a core technology for many high-performance imaging tasks. In its current form, Motmot allows for: (1) image acquisition from a variety of camera interfaces (package motmot.cam_iface), (2) the display of these images with minimal latency and computer resources using wxPython and OpenGL (package motmot.wxglvideo), (3) saving images with no compression in a single-pass, low-CPU-use format (package motmot.FlyMovieFormat), (4) a pluggable framework for custom analysis of images in realtime and (5) firmware for an inexpensive USB device to synchronize image acquisition across multiple cameras, with analog input, or with other hardware devices (package motmot.fview_ext_trig). These capabilities are brought together in a graphical user interface, called 'FView', allowing an end user to easily view and save digital video without writing any code. One plugin for FView, 'FlyTrax', which tracks the movement of fruit flies in real-time, is included with Motmot, and is described to illustrate the capabilities of FView. Motmot enables realtime image processing and display using the Python computer language. In addition to the provided complete applications, the architecture allows the user to write relatively simple plugins, which can accomplish a variety of computer vision tasks and be integrated within larger software systems. The software is available at http://code.astraw.com/projects/motmot.

  2. Interactive Video and Informal Learning Environments.

    ERIC Educational Resources Information Center

    Morrissey, Kristine A.

    The Michigan State University Museum used an interactive videodisc (IVD) as an introduction to a special exhibit, "Birds in Trouble in Michigan." The hardware components included a videodisc player, a microcomputer, a video monitor, and a mouse. Software included a HyperCard program and the videodisc "Audubon Society's VideoGuide to…

  3. Video Measurements: Quantity or Quality

    ERIC Educational Resources Information Center

    Zajkov, Oliver; Mitrevski, Boce

    2012-01-01

    Students have problems with understanding, using and interpreting graphs. In order to improve the students' skills for working with graphs, we propose Manual Video Measurement (MVM). In this paper, the MVM method is explained and its accuracy is tested. The comparison with the standardized video data software shows that its accuracy is comparable…

  4. CD-I and Full Motion Video.

    ERIC Educational Resources Information Center

    Chen, Ching-chih

    1991-01-01

    Describes compact disc interactive (CD-I) as a multimedia home entertainment system that combines audio, visual, text, graphic, and interactive capabilities. Full-screen video and full-motion video (FMV) are explained, hardware for FMV decoding is described, software is briefly discussed, and CD-I titles planned for future production are listed.…

  5. An Overview of Video Description: History, Benefits, and Guidelines

    ERIC Educational Resources Information Center

    Packer, Jaclyn; Vizenor, Katie; Miele, Joshua A.

    2015-01-01

    This article provides an overview of the historical context in which video description services have evolved in the United States, a summary of research demonstrating benefits to people with vision loss, an overview of current video description guidelines, and information about current software programs that are available to produce video…

  6. ISO-IEC MPEG-2 software video codec

    NASA Astrophysics Data System (ADS)

    Eckart, Stefan; Fogg, Chad E.

    1995-04-01

    Part 5 of the International Standard ISO/IEC 13818 `Generic Coding of Moving Pictures and Associated Audio' (MPEG-2) is a Technical Report, a sample software implementation of the procedures in parts 1, 2 and 3 of the standard (systems, video, and audio). This paper focuses on the video software, which gives an example of a fully compliant implementation of the standard and of a good video quality encoder, and serves as a tool for compliance testing. The implementation and some of the development aspects of the codec are described. The encoder is based on Test Model 5 (TM5), one of the best, published, non-proprietary coding models, which was used during MPEG-2 collaborative stage to evaluate proposed algorithms and to verify the syntax. The most important part of the Test Model is controlling the quantization parameter based on the image content and bit rate constraints under both signal-to-noise and psycho-optical aspects. The decoder has been successfully tested for compliance with the MPEG-2 standard, using the ISO/IEC MPEG verification and compliance bitstream test suites as stimuli.

  7. Modeling and analysis of selected space station communications and tracking subsystems

    NASA Technical Reports Server (NTRS)

    Richmond, Elmer Raydean

    1993-01-01

The Communications and Tracking System on board Space Station Freedom (SSF) provides space-to-ground, space-to-space, audio, and video communications, as well as tracking data reception and processing services. Each major category of service is provided by a communications subsystem which is controlled and monitored by software. Among these subsystems, the Assembly/Contingency Subsystem (ACS) and the Space-to-Ground Subsystem (SGS) provide communications with the ground via the Tracking and Data Relay Satellite (TDRS) System. The ACS is effectively SSF's command link, while the SGS is primarily intended as the data link for SSF payloads. The research activities of this project focused on the ACS and SGS antenna management algorithms identified in the Flight System Software Requirements (FSSR) documentation, including: (1) software modeling and evaluation of antenna management (positioning) algorithms; and (2) analysis and investigation of selected variables and parameters of these antenna management algorithms, i.e., descriptions and definitions of ranges, scopes, and dimensions. In a related activity, to assist those responsible for monitoring the development of this flight system software, a brief summary of software metrics concepts, terms, measures, and uses was prepared.

  8. Information System Engineering Supporting Observation, Orientation, Decision, and Compliant Action

    NASA Astrophysics Data System (ADS)

    Georgakopoulos, Dimitrios

The majority of today's software systems and organizational/business structures have been built on the foundation of solving problems via long-term data collection, analysis, and solution design. This traditional approach to solving problems and building the corresponding software systems and business processes falls short of providing the solutions needed for problems that require agility as the main ingredient of their solution. For example, such agility is needed in responding to an emergency, in military command and control, physical security, price-based competition in business, investing in the stock market, video gaming, network monitoring and self-healing, diagnosis in emergency health care, and many other areas too numerous to list here. The concept of Observe, Orient, Decide, and Act (OODA) loops is a guiding principle that captures the fundamental issues and approach for engineering information systems that deal with many of these problem areas. However, there are currently few software systems capable of supporting OODA. In this talk, we provide a tour of the research issues and state-of-the-art solutions for supporting OODA. In addition, we provide specific examples of OODA solutions we have developed for the video surveillance and emergency response domains.

  9. Hardware and software improvements to a low-cost horizontal parallax holographic video monitor.

    PubMed

    Henrie, Andrew; Codling, Jesse R; Gneiting, Scott; Christensen, Justin B; Awerkamp, Parker; Burdette, Mark J; Smalley, Daniel E

    2018-01-01

    Displays capable of true holographic video have been prohibitively expensive and difficult to build. With this paper, we present a suite of modularized hardware components and software tools needed to build a HoloMonitor with basic "hacker-space" equipment, highlighting improvements that have enabled the total materials cost to fall to $820, well below that of other holographic displays. It is our hope that the current level of simplicity, development, design flexibility, and documentation will enable the lay engineer, programmer, and scientist to relatively easily replicate, modify, and build upon our designs, bringing true holographic video to the masses.

  10. Optical Science: Deploying Technical Concepts and Engaging Participation through Digital Storytelling

    NASA Astrophysics Data System (ADS)

    Thomas, R. G.; Berry, K.; Arrigo, J.; Hooper, R. P.

    2013-12-01

    Technical 'hands-on' training workshops are designed to bring together scientists, technicians, and program managers from universities, government agencies, and the private sector to discuss methods used and advances made in instrumentation and data analysis. Through classroom lectures and discussions combined with a field-day component, hands-on workshop participants get a 'full life cycle' perspective, from instrumentation concepts and deployment to data analysis. Using film to document this process is becoming increasingly popular, allowing scientists to add a story-telling component to their research. With the availability of high-quality, low-priced professional video equipment and editing software, scientists are becoming digital storytellers. The science video developed from the 'hands-on' workshop, Optical Water Quality Sensors for Nutrients: Concepts, Deployment, and Analysis, encapsulates the objectives of technical training workshops for participants. Through the use of still photography, video, interviews, and sound, the short video, An Introduction to CUAHSI's Hands-on Workshops, produced by a co-instructor of the workshop, acts as a multi-purpose tool. The 10-minute piece provides an overview of workshop field-day activities and works to bridge the gap between classroom learning, instrumentation application, and data analysis. CUAHSI 'hands-on' technical workshops have been executed collaboratively with faculty from several universities and with the U.S. Geological Survey. The video was designed to attract new participants to these professional development workshops, to stimulate a connection with the environment, to act as a workshop legacy resource, and to serve as a guide for prospective hands-on workshop organizers.
The effective use of film and short videos in marketing scientific programs, such as technical trainings, allows scientists to visually demonstrate the technologies currently being employed and to provide a more intriguing perspective on scientific research.

  11. Audiovisual heritage preservation in Earth and Space Science Informatics: Videos from Free and Open Source Software for Geospatial (FOSS4G) conferences in the TIB|AV-Portal.

    NASA Astrophysics Data System (ADS)

    Löwe, Peter; Marín Arraiza, Paloma; Plank, Margret

    2016-04-01

    The influence of Free and Open Source Software (FOSS) projects on Earth and Space Science Informatics (ESSI) continues to grow, particularly in the emerging context of Data Science and Open Science. The scientific significance and heritage of FOSS projects is covered only to a limited extent by traditional scientific journal articles: audiovisual conference recordings contain significant information for analysis, reference, and citation. In the context of data-driven research, this audiovisual content needs to be accessible through effective search capabilities, enabling the content to be searched in depth and retrieved. This also ensures that content producers receive credit for their efforts within their respective communities. For Geoinformatics and ESSI, one distinguished driver is the OSGeo Foundation (OSGeo), founded in 2006 to support and promote the interdisciplinary collaborative development of open geospatial technologies and data. Its organisational structure is based on software projects that have successfully passed the OSGeo incubation process, proving their compliance with FOSS licence models. This quality assurance is crucial for transparent and unhindered application in (Open) Science. The main communication channels for face-to-face meetings within and between the OSGeo-hosted community projects are conferences on national, regional, and global scales. Video recordings have complemented the scientific proceedings since 2006. During the last decade, the growing body of OSGeo videos has been negatively affected by content loss, obsolescence of video technology, and dependence on commercial video portals. Even worse, the distributed storage and lack of metadata do not guarantee concise and efficient access to the content. This limits the retrospective analysis of video content from past conferences.
This also indicates a need for reliable, standardized, comparable audiovisual repositories for the future, as the number of OSGeo projects continues to grow, and so does the number of topics to be addressed at conferences. Up to now, commercial Web 2.0 platforms such as YouTube and Vimeo have been used. However, these platforms lack capabilities for long-term archiving and scientific citation, such as persistent identifiers that permit the citation of specific intervals of the overall content. To address these issues, the scientific library community has started to implement improved multimedia archiving and retrieval services for scientific audiovisual content which fulfil these requirements. Using the reference case of the OSGeo conference video recordings, this paper gives an overview of the new and growing collection activities of the German National Library of Science and Technology (TIB) for audiovisual content in Geoinformatics/ESSI in the TIB|AV-Portal. Following a successful start in 2014 and a positive response from the OSGeo community, the TIB acquisition strategy for OSGeo video material was extended to include German, European, North American, and global conference content. The collection grows steadily through new conference content and through the harvesting of past conference videos from commercial Web 2.0 platforms such as YouTube and Vimeo. This positions the TIB|AV-Portal as a reliable and concise long-term resource for innovation mining, education, and scholarly research within the ESSI context, both in academia and industry.

  12. Early prediction of cerebral palsy by computer-based video analysis of general movements: a feasibility study.

    PubMed

    Adde, Lars; Helbostad, Jorunn L; Jensenius, Alexander R; Taraldsen, Gunnar; Grunewaldt, Kristine H; Støen, Ragnhild

    2010-08-01

    The aim of this study was to investigate the predictive value of a computer-based video analysis of the development of cerebral palsy (CP) in young infants. A prospective study of general movements used recordings from 30 high-risk infants (13 males, 17 females; mean gestational age 31wks, SD 6wks; range 23-42wks) between 10 and 15 weeks post term when fidgety movements should be present. Recordings were analysed using computer vision software. Movement variables, derived from differences between subsequent video frames, were used for quantitative analyses. CP status was reported at 5 years. Thirteen infants developed CP (eight hemiparetic, four quadriparetic, one dyskinetic; seven ambulatory, three non-ambulatory, and three unknown function), of whom one had fidgety movements. Variability of the centroid of motion had a sensitivity of 85% and a specificity of 71% in identifying CP. By combining this with variables reflecting the amount of motion, specificity increased to 88%. Nine out of 10 children with CP, and for whom information about functional level was available, were correctly predicted with regard to ambulatory and non-ambulatory function. Prediction of CP can be provided by computer-based video analysis in young infants. The method may serve as an objective and feasible tool for early prediction of CP in high-risk infants.
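
    The frame-differencing approach described above lends itself to a compact sketch. The following is an illustrative reconstruction, not the authors' code: the function names and the fixed intensity threshold are assumptions, and real recordings would first need preprocessing (grayscale conversion, noise filtering).

```python
import numpy as np

def motion_centroids(frames, threshold=10):
    """For each consecutive frame pair, compute the centroid of pixels
    whose absolute intensity change exceeds a threshold, i.e. movement
    variables derived from differences between subsequent frames."""
    centroids = []
    for prev, curr in zip(frames[:-1], frames[1:]):
        diff = np.abs(curr.astype(int) - prev.astype(int))
        ys, xs = np.nonzero(diff > threshold)
        if len(xs):
            centroids.append((xs.mean(), ys.mean()))
    return np.array(centroids)

def centroid_variability(centroids):
    """Standard deviation of the centroid of motion over time, the kind
    of variability measure reported above as a candidate CP predictor."""
    return centroids.std(axis=0)
```

    A cutoff on this variability, possibly combined with amount-of-motion variables as in the study, would then separate recordings for further clinical review.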

  13. Clinical values of intraoperative indocyanine green fluorescence video angiography with Flow 800 software in cerebrovascular surgery.

    PubMed

    Ye, Xun; Liu, Xing-Ju; Ma, Li; Liu, Ling-Tong; Wang, Wen-Lei; Wang, Shuo; Cao, Yong; Zhang, Dong; Wang, Rong; Zhao, Ji-Zong; Zhao, Yuan-Li

    2013-11-01

    Microscope-integrated near-infrared indocyanine green video angiography (ICG-VA) has been used in neurosurgery for a decade. This study aimed to assess the value of intraoperative indocyanine green (ICG) video angiography with Flow 800 software in cerebrovascular surgery and to characterize the hemodynamic features and changes of cerebrovascular diseases during surgery. A total of 87 patients who received ICG-VA during various surgical procedures were enrolled in this study. Among them, 45 cases were cerebral aneurysms, 25 were cerebral arteriovenous malformations (AVMs), and 17 were moyamoya disease (MMD). A surgical microscope integrating an infrared fluorescence module was used to confirm residual aneurysms and the blocking of perforating arteries in aneurysm cases. Feeder arteries, draining veins, and normal cortical vessels were identified by the time-delay color mode of the Flow 800 software. Hemodynamic parameters were recorded. All data were analyzed with SPSS version 18.0 (SPSS Inc., USA). The t-test was used to analyze the hemodynamic features of AVMs and MMD, the influence on the peripheral cortex after resection in AVMs, and superficial temporal artery to middle cerebral artery (STA-MCA) bypass in MMD. The visual delay map obtained by the Flow 800 software had more advantages than the traditional playback mode in identifying the feeder arteries, draining veins, and their relation to normal cortical vessels. The maximum fluorescence intensity (MFI) and the slope of the ICG fluorescence curve of feeder arteries and draining veins were higher than those of normal peripheral vessels (MFI: 584.24 ± 85.86 vs. 382.94 ± 91.50; slope: 144.95 ± 38.08 vs. 69.20 ± 13.08, P < 0.05). The arteriovenous transit time in AVMs was significantly shorter than in normal cortical vessels (0.60 ± 0.27 vs. 2.08 ± 1.42 seconds, P < 0.05). After resection of an AVM, the slope of the arterial fluorescence curve in the cortex increased, reflecting increased cerebral blood flow.
In patients with MMD, after STA-MCA bypass, cortical perfusion in the corresponding branch region increased and the local circulation time shortened. Intraoperative ICG video angiography combined with the hemodynamic parameter analysis provided by Flow 800 software appears to be useful for intraoperative monitoring of regional cerebral blood flow in cerebrovascular disease.
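
    The hemodynamic parameters named above (MFI, curve slope, arteriovenous transit time) can be illustrated on a sampled intensity-time curve. This is a hedged sketch: the half-peak criterion for transit time and the function names are assumptions for illustration, not the Flow 800 algorithms.

```python
import numpy as np

def mfi_and_slope(t, intensity):
    """Maximum fluorescence intensity (MFI) and the steepest rising
    slope of an ICG intensity-time curve, via finite differences."""
    mfi = float(np.max(intensity))
    slope = float(np.max(np.diff(intensity) / np.diff(t)))
    return mfi, slope

def transit_time(t, arterial, venous, frac=0.5):
    """Arteriovenous transit time: the delay between the arterial and
    venous curves first reaching a fraction of their own peaks."""
    ta = t[np.argmax(arterial >= frac * np.max(arterial))]
    tv = t[np.argmax(venous >= frac * np.max(venous))]
    return float(tv - ta)
```

    A shortened transit time relative to normal cortex would then flag arteriovenous shunting, consistent with the AVM findings reported above.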

  14. Automated Gait Analysis Through Hues and Areas (AGATHA): a method to characterize the spatiotemporal pattern of rat gait

    PubMed Central

    Kloefkorn, Heidi E.; Pettengill, Travis R.; Turner, Sara M. F.; Streeter, Kristi A.; Gonzalez-Rothi, Elisa J.; Fuller, David D.; Allen, Kyle D.

    2016-01-01

    While rodent gait analysis can quantify the behavioral consequences of disease, significant methodological differences exist between analysis platforms and little validation has been performed to understand or mitigate these sources of variance. By providing the algorithms used to quantify gait, open-source gait analysis software can be validated and used to explore methodological differences. Our group is introducing, for the first time, a fully-automated, open-source method for the characterization of rodent spatiotemporal gait patterns, termed Automated Gait Analysis Through Hues and Areas (AGATHA). This study describes how AGATHA identifies gait events, validates AGATHA relative to manual digitization methods, and utilizes AGATHA to detect gait compensations in orthopaedic and spinal cord injury models. To validate AGATHA against manual digitization, results from videos of rodent gait, recorded at 1000 frames per second (fps), were compared. To assess one common source of variance (the effects of video frame rate), these 1000 fps videos were re-sampled to mimic several lower fps and compared again. While spatial variables were indistinguishable between AGATHA and manual digitization, low video frame rates resulted in temporal errors for both methods. At frame rates over 125 fps, AGATHA achieved a comparable accuracy and precision to manual digitization for all gait variables. Moreover, AGATHA detected unique gait changes in each injury model. These data demonstrate AGATHA is an accurate and precise platform for the analysis of rodent spatiotemporal gait patterns. PMID:27554674
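
    The frame-rate effect reported above (temporal errors at low fps, convergence above 125 fps) follows directly from event-time quantization: an event can only be located to the nearest frame. A minimal illustration, using hypothetical gait event times rather than the study's data:

```python
def downsample_event_time(event_time_s, fps):
    """Quantize a true event time to the nearest frame boundary at a
    given capture rate, mimicking re-sampling a 1000 fps recording."""
    return round(event_time_s * fps) / fps

# Hypothetical foot-strike and toe-off times (seconds):
strike, toe_off = 0.110, 0.349
true_duration = toe_off - strike
errors = {}
for fps in (1000, 125, 30):
    duration = (downsample_event_time(toe_off, fps)
                - downsample_event_time(strike, fps))
    errors[fps] = abs(duration - true_duration)
# Worst-case error per event is half a frame period, 1 / (2 * fps),
# so temporal gait variables degrade as the frame rate drops.
```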

  15. Automated Gait Analysis Through Hues and Areas (AGATHA): A Method to Characterize the Spatiotemporal Pattern of Rat Gait.

    PubMed

    Kloefkorn, Heidi E; Pettengill, Travis R; Turner, Sara M F; Streeter, Kristi A; Gonzalez-Rothi, Elisa J; Fuller, David D; Allen, Kyle D

    2017-03-01

    While rodent gait analysis can quantify the behavioral consequences of disease, significant methodological differences exist between analysis platforms and little validation has been performed to understand or mitigate these sources of variance. By providing the algorithms used to quantify gait, open-source gait analysis software can be validated and used to explore methodological differences. Our group is introducing, for the first time, a fully-automated, open-source method for the characterization of rodent spatiotemporal gait patterns, termed Automated Gait Analysis Through Hues and Areas (AGATHA). This study describes how AGATHA identifies gait events, validates AGATHA relative to manual digitization methods, and utilizes AGATHA to detect gait compensations in orthopaedic and spinal cord injury models. To validate AGATHA against manual digitization, results from videos of rodent gait, recorded at 1000 frames per second (fps), were compared. To assess one common source of variance (the effects of video frame rate), these 1000 fps videos were re-sampled to mimic several lower fps and compared again. While spatial variables were indistinguishable between AGATHA and manual digitization, low video frame rates resulted in temporal errors for both methods. At frame rates over 125 fps, AGATHA achieved a comparable accuracy and precision to manual digitization for all gait variables. Moreover, AGATHA detected unique gait changes in each injury model. These data demonstrate AGATHA is an accurate and precise platform for the analysis of rodent spatiotemporal gait patterns.

  16. Guidelines for Applying Video Simulation Technology to Training Land Design

    DTIC Science & Technology

    1993-02-01

    Training Land Design for Realism." The technical monitor was Dr. Victor Diersing, CEHSC-FN. This study was performed by the Environmental Resources...technology to their land management activities. Objective: The objective of this study was to provide a general overview of the use of video simulation...4). A market study of currently available hardware and software provided the basis for descriptions of hardware and software systems, and their

  17. Measuring zebrafish turning rate.

    PubMed

    Mwaffo, Violet; Butail, Sachit; di Bernardo, Mario; Porfiri, Maurizio

    2015-06-01

    Zebrafish is becoming a popular animal model in preclinical research, and zebrafish turning rate has been proposed for the analysis of activity in several domains. The turning rate is often estimated from the trajectory of the fish centroid that is output by commercial or custom-made target tracking software run on overhead videos of fish swimming. However, the accuracy of such indirect methods with respect to the turning rate associated with changes in heading during zebrafish locomotion is largely untested. Here, we compare two indirect methods for the turning rate estimation using the centroid velocity or position data, with full shape tracking for three different video sampling rates. We use tracking data from the overhead video recorded at 60, 30, and 15 frames per second of zebrafish swimming in a shallow water tank. Statistical comparisons of absolute turning rate across methods and sampling rates indicate that, while indirect methods are indistinguishable from full shape tracking, the video sampling rate significantly influences the turning rate measurement. The results of this study can aid in the selection of the video capture frame rate, an experimental design parameter in zebrafish behavioral experiments where activity is an important measure.
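
    The centroid-position method compared above can be sketched as follows; this is an illustrative reconstruction (the function name and the angle-unwrapping convention are assumptions, not the authors' implementation):

```python
import numpy as np

def turning_rate_from_positions(x, y, fps):
    """Turning rate (rad/s) from centroid positions: heading is the
    direction of the frame-to-frame velocity vector, and the turning
    rate is the wrapped change in heading scaled by the frame rate."""
    vx, vy = np.diff(x), np.diff(y)
    heading = np.arctan2(vy, vx)
    dtheta = np.diff(heading)
    dtheta = (dtheta + np.pi) % (2 * np.pi) - np.pi  # wrap to [-pi, pi)
    return dtheta * fps
```

    On a noiseless circular trajectory this recovers the true angular speed; at lower sampling rates the same formula averages over larger heading changes, one source of the frame-rate sensitivity reported above.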

  18. Image enhancement software for underwater recovery operations: User's manual

    NASA Astrophysics Data System (ADS)

    Partridge, William J.; Therrien, Charles W.

    1989-06-01

    This report describes software for performing image enhancement on live or recorded video images. The software was developed for operational use during underwater recovery operations at the Naval Undersea Warfare Engineering Station. The image processing is performed on an IBM-PC/AT compatible computer equipped with hardware to digitize and display video images. The software provides contrast enhancement and similar functions in real time through hardware lookup tables, performs histogram equalization automatically, and can capture one or more frames and either average them or apply one of several processing algorithms to a captured frame. The report is in the form of a user manual for the software and includes guided tutorial and reference sections. A Digital Image Processing Primer in the appendix explains the principal concepts used in the image processing.
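
    The lookup-table mechanism described above is easy to sketch: a 256-entry table is computed once (here, for histogram equalization) and then applied to every pixel, which is what makes real-time operation feasible. The function names are illustrative, not taken from the original software.

```python
import numpy as np

def equalization_lut(image):
    """Build a 256-entry lookup table implementing histogram
    equalization for an 8-bit image: map each gray level to its
    cumulative distribution value, rescaled to 0-255."""
    hist = np.bincount(image.ravel(), minlength=256)
    cdf = np.cumsum(hist) / image.size
    return np.round(cdf * 255).astype(np.uint8)

def apply_lut(image, lut):
    """Enhancement is then a single table lookup per pixel."""
    return lut[image]
```

    Contrast stretching, gamma correction, and similar point operations fit the same pattern: only the table contents change.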

  19. Monitoring microcirculation.

    PubMed

    Ocak, Işık; Kara, Atila; Ince, Can

    2016-12-01

    The clinical relevance of microcirculation and its bedside observation began gaining importance in the 1990s with the introduction of hand-held video microscopes. Since then, this technology has been continuously developed, and its clinical relevance has been established in more than 400 studies. In this paper, we review the different types of video microscopes, their application techniques, the microcirculation of different organ systems, the analysis methods, and the software and scoring systems. The main focus of this review is the state-of-the-art technique, CytoCam incident dark-field imaging, and the most recent technological and technical updates concerning microcirculation monitoring. Copyright © 2016 Elsevier Ltd. All rights reserved.

  20. The problem of the driverless vehicle specified path stability control

    NASA Astrophysics Data System (ADS)

    Buznikov, S. E.; Endachev, D. V.; Elkin, D. S.; Strukov, V. O.

    2018-02-01

    Currently, the effort of many leading foreign companies is focused on the creation of driverless transport for cargo and passengers. Among the many practical problems arising in the creation of driverless vehicles, the problem of specified-path stability control occupies a central place. The purpose of this paper is the formalization of the problem in question in terms of a quadratic functional of control quality, a comparative analysis of the possible solutions, and justification of the choice of the optimum technical solution. The squared value of the integral of the deviation from the specified path is proposed as the quadratic functional of the control quality. For generation of the set of software and hardware solution variants, the Zwicky "morphological box" method is used within the hardware and software environments. The heading control algorithms use the wheel steering angle data and the deviation from the lane centerline (the specified path) calculated from the navigation data and the data from the video system. Where the video system does not detect the road marking, control is carried out based on the wheel navigation system data, and where recognizable road marking exists, based on the video system data. The analysis of the test results supports the conclusion that the combined navigation system algorithms provide a quasi-optimum solution of the problem while meeting strict functional limits on the technical and economic indicators of the driverless vehicle control system under development.
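
    As a worked illustration of a quadratic control-quality functional, the sketch below evaluates one common form, the time integral of the squared deviation from the specified path, with the trapezoidal rule. This form is assumed for illustration; the paper's exact functional may differ.

```python
import numpy as np

def path_cost(t, deviation):
    """Quadratic control-quality functional: the integral over time of
    the squared deviation e(t) from the specified path, approximated
    with the trapezoidal rule on sampled data."""
    t = np.asarray(t, dtype=float)
    e2 = np.asarray(deviation, dtype=float) ** 2
    return float(np.sum((e2[1:] + e2[:-1]) / 2.0 * np.diff(t)))
```

    Candidate control variants (e.g., video-based versus navigation-based heading control) can then be ranked by this cost over the same test path.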

  1. Modification of the Miyake-Apple technique for simultaneous anterior and posterior video imaging of wet laboratory-based corneal surgery.

    PubMed

    Tan, Johnson C H; Meadows, Howard; Gupta, Aanchal; Yeung, Sonia N; Moloney, Gregory

    2014-03-01

    The aim of this study was to describe a modification of the Miyake-Apple posterior video analysis for the simultaneous visualization of the anterior and posterior corneal surfaces during wet laboratory-based deep anterior lamellar keratoplasty (DALK). A human donor corneoscleral button was affixed to a microscope slide and placed onto a custom-made mounting box. A big bubble DALK was performed on the cornea in the wet laboratory. An 11-diopter intraocular lens was positioned over the aperture of the back camera of an iPhone. This served to video record the posterior view of the corneoscleral button during the big bubble formation. An overhead operating microscope with an attached video camcorder recorded the anterior view during the surgery. The anterior and posterior views of the wet laboratory-based DALK surgery were simultaneously captured and edited using video editing software. The formation of the big bubble can be studied. This video recording camera system has the potential to act as a valuable research and teaching tool in corneal lamellar surgery, especially in the behavior of the big bubble formation in DALK.

  2. Circumnutation Tracker: novel software for investigation of circumnutation

    PubMed Central

    2014-01-01

    Background An endogenous, helical plant organ movement named circumnutation is ubiquitous in the plant kingdom. Plant shoots, stems, tendrils, leaves, and roots commonly circumnutate, but these movements are still poorly characterized. To support such investigations, novel software, Circumnutation Tracker (CT), for spatial-temporal analysis of circumnutation has been developed. Results CT works on time-lapse video and collects circumnutation parameters: period, length, rate, shape, angle, and clockwise and counterclockwise directions. CT combines a filtering algorithm with a graph-based method to describe the parameters of circumnutation. The parameters of circumnutation of Helianthus annuus hypocotyls and the relationship between cotyledon arrangement and circumnutation geometry are presented here to demonstrate the capabilities of CT. Conclusions We have established that CT facilitates and accelerates the analysis of circumnutation. In combination with physiological, molecular, and genetic methods, this software may also be a powerful tool for investigations of gravitropism, the biological clock, and membrane transport, i.e. processes involved in the mechanism of circumnutation.
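
    As an illustration of period estimation from a circumnutation trajectory, the sketch below takes uniformly sampled organ-tip coordinates and reads the dominant period off the FFT of the complex trajectory. This is not the CT algorithm (which combines filtering with a graph-based method); it is a hedged, minimal alternative for the same parameter.

```python
import numpy as np

def circumnutation_period(t, x, y):
    """Dominant circumnutation period from tip coordinates, via the
    peak of the FFT amplitude spectrum of the complex trajectory
    x + iy (assumes uniform sampling). The sign of the peak frequency
    would distinguish counterclockwise (+) from clockwise (-) motion."""
    z = (x - np.mean(x)) + 1j * (y - np.mean(y))
    spec = np.abs(np.fft.fft(z))
    freqs = np.fft.fftfreq(len(t), d=t[1] - t[0])
    k = np.argmax(spec[1:]) + 1   # skip the DC bin
    return 1.0 / abs(freqs[k])
```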

  3. CCD Camera Detection of HIV Infection.

    PubMed

    Day, John R

    2017-01-01

    Rapid and precise quantification of the infectivity of HIV is important for molecular virologic studies, as well as for measuring the activities of antiviral drugs and neutralizing antibodies. An indicator cell line, a CCD camera, and image-analysis software are used to quantify HIV infectivity. The cells of the P4R5 line, which express the receptors for HIV infection as well as β-galactosidase under the control of the HIV-1 long terminal repeat, are infected with HIV and then incubated 2 days later with X-gal to stain the infected cells blue. Digital images of monolayers of the infected cells are captured using a high-resolution CCD video camera and a macro video zoom lens. A software program is developed to process the images and to count the blue-stained foci of infection. The described method allows for the rapid quantification of infected cells over a wide range of viral inocula with reproducibility and accuracy at relatively low cost.
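
    The foci-counting step can be sketched as thresholding followed by connected-component labeling: each connected group of stained pixels is one focus of infection. This is an illustrative stand-in for the image-analysis program described above; a real pipeline would first isolate the blue-stain signal and filter noise.

```python
import numpy as np

def count_foci(image, threshold):
    """Count connected components (4-connectivity) of pixels above a
    threshold, using an iterative flood fill."""
    mask = image > threshold
    seen = np.zeros(mask.shape, dtype=bool)
    h, w = mask.shape
    count = 0
    for i in range(h):
        for j in range(w):
            if mask[i, j] and not seen[i, j]:
                count += 1                      # new focus found
                stack = [(i, j)]
                seen[i, j] = True
                while stack:                    # flood-fill this focus
                    y, x = stack.pop()
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            stack.append((ny, nx))
    return count
```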

  4. Automated measurement of zebrafish larval movement

    PubMed Central

    Cario, Clinton L; Farrell, Thomas C; Milanese, Chiara; Burton, Edward A

    2011-01-01

    The zebrafish is a powerful vertebrate model that is readily amenable to genetic, pharmacological and environmental manipulations to elucidate the molecular and cellular basis of movement and behaviour. We report software enabling automated analysis of zebrafish movement from video recordings captured with cameras ranging from a basic camcorder to more specialized equipment. The software, which is provided as open-source MATLAB functions, can be freely modified and distributed, and is compatible with multiwell plates under a wide range of experimental conditions. Automated measurement of zebrafish movement using this technique will be useful for multiple applications in neuroscience, pharmacology and neuropsychiatry. PMID:21646414

  5. Automated measurement of zebrafish larval movement.

    PubMed

    Cario, Clinton L; Farrell, Thomas C; Milanese, Chiara; Burton, Edward A

    2011-08-01

    The zebrafish is a powerful vertebrate model that is readily amenable to genetic, pharmacological and environmental manipulations to elucidate the molecular and cellular basis of movement and behaviour. We report software enabling automated analysis of zebrafish movement from video recordings captured with cameras ranging from a basic camcorder to more specialized equipment. The software, which is provided as open-source MATLAB functions, can be freely modified and distributed, and is compatible with multiwell plates under a wide range of experimental conditions. Automated measurement of zebrafish movement using this technique will be useful for multiple applications in neuroscience, pharmacology and neuropsychiatry.
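
    A multiwell frame-differencing scheme of the kind described above can be sketched as follows. This is a hedged illustration in Python rather than the authors' open-source MATLAB functions; the well geometry, threshold, and names are assumptions.

```python
import numpy as np

def well_activity(frames, wells, threshold=10):
    """Per-well movement score: for each well (given as row/column
    slices of the video frame), accumulate the suprathreshold absolute
    intensity change between consecutive frames."""
    activity = {name: 0.0 for name in wells}
    for prev, curr in zip(frames[:-1], frames[1:]):
        diff = np.abs(curr.astype(int) - prev.astype(int))
        for name, (rows, cols) in wells.items():
            region = diff[rows, cols]
            activity[name] += float(region[region > threshold].sum())
    return activity
```

    Because each well is just a slice of the same difference image, the approach scales directly from a single arena to a full multiwell plate.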

  6. Astronomy Data Visualization with Blender

    NASA Astrophysics Data System (ADS)

    Kent, Brian R.

    2015-08-01

    We present innovative methods and techniques for using Blender, a 3D software package, in the visualization of astronomical data. N-body simulations, data cubes, galaxy and stellar catalogs, and planetary surface maps can be rendered in high-quality videos for exploratory data analysis. Blender's API is Python based, making it advantageous for use in astronomy with flexible libraries such as astroPy. Examples that showcase the features of the software in astronomical visualization paradigms will be exhibited. 2D and 3D voxel texture applications, animations, camera movement, and composite renders are introduced to the astronomer's toolkit, along with how they mesh with different forms of data.

  7. Got Rhythm? Get RockBand[TM]!

    ERIC Educational Resources Information Center

    Nardo, Rachel

    2010-01-01

    According to a 2009 report by the Entertainment Software Association (ESA), at least 68% of American households now play video games, and half the parents in America now play video games with their children. Grandparents are in on the action, as well. In addition, nursing homes and senior centers are now incorporating video games into their…

  8. Health Education Video Games for Children and Adolescents: Theory, Design, and Research Findings.

    ERIC Educational Resources Information Center

    Lieberman, Debra A.

    This study examined whether video games could be effective health education and therapeutic interventions for children and adolescents with diabetes. KIDZ Health Software developed a game about diabetes self-management, and tested its effectiveness for children with diabetes. The Packy and Marlon Super Nintendo video game promotes fun,…

  9. How to Study the Doppler Effect with Audacity Software

    ERIC Educational Resources Information Center

    Dias, Marco Adriano; Carvalho, Paulo Simeão; Ventura, Daniel Rodrigues

    2016-01-01

    The Doppler effect is one of the recurring themes in college and high school classes. In order to contextualize the topic and engage the students in their own learning process, we propose a simple and easily accessible activity, i.e. the analysis of the videos available on the internet by the students. The sound of the engine of the vehicle…

  10. Towards a Local Integration of Theories: Codes and Praxeologies in the Case of Computer-Based Instruction

    ERIC Educational Resources Information Center

    Gellert, Uwe; Barbe, Joaquim; Espinoza, Lorena

    2013-01-01

    We report on the development of a "language of description" that facilitates an integrated analysis of classroom video data in terms of the quality of the teaching-learning process and the students' access to valued forms of mathematical knowledge. Our research setting is the introduction of software for teachers for improving the mathematical…

  11. Quantifying technical skills during open operations using video-based motion analysis.

    PubMed

    Glarner, Carly E; Hu, Yue-Yung; Chen, Chia-Hsiung; Radwin, Robert G; Zhao, Qianqian; Craven, Mark W; Wiegmann, Douglas A; Pugh, Carla M; Carty, Matthew J; Greenberg, Caprice C

    2014-09-01

    Objective quantification of technical operative skills in surgery remains poorly defined, although the delivery of and training in these skills is essential to the profession of surgery. Attempts to measure hand kinematics to quantify operative performance have relied primarily on electromagnetic sensors attached to the surgeon's hand or instrument. We sought to determine whether a similar motion analysis could be performed with a marker-less, video-based review, allowing for a scalable approach to performance evaluation. We recorded six reduction mammoplasty operations, a plastic surgery procedure in which the attending and resident surgeons operate in parallel. Segments representative of surgical tasks were identified with Multimedia Video Task Analysis software. Video digital processing was used to extract and analyze the spatiotemporal characteristics of hand movement. Attending plastic surgeons appear to use their nondominant hand more than residents when cutting with the scalpel, suggesting more use of countertraction. While suturing, attendings were more ambidextrous, with smaller differences in movement between their dominant and nondominant hands than residents. Attendings also seem to have more conservation of movement when performing instrument tying than residents, as demonstrated by less nondominant hand displacement. These observations were consistent within procedures and between the different attending plastic surgeons evaluated in this fashion. Video motion analysis can be used to provide objective measurement of technical skills without the need for sensors or markers. Such data could be valuable in better understanding the acquisition and degradation of operative skills, providing enhanced feedback to shorten the learning curve. Copyright © 2014 Mosby, Inc. All rights reserved.
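
    Once hand positions have been extracted from video, the displacement and ambidexterity comparisons above reduce to simple path-length arithmetic. The following is an illustrative sketch; the function names and the ratio definition are assumptions, not the study's metrics.

```python
import numpy as np

def total_displacement(xs, ys):
    """Total path length of a tracked hand (e.g., in pixels): the sum
    of Euclidean distances between consecutive tracked positions."""
    return float(np.sum(np.hypot(np.diff(xs), np.diff(ys))))

def ambidexterity_ratio(dominant_path, nondominant_path):
    """Ratio of nondominant- to dominant-hand movement; values closer
    to 1.0 indicate more symmetric, ambidextrous hand use."""
    return nondominant_path / dominant_path
```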

  12. The Effect of Interactive Simulations on Exercise Adherence with Overweight and Obese Adults

    DTIC Science & Technology

    2009-12-01

    integrated video game play capabilities was developed. Unique software was written and further modified to integrate the exercise equipment/ video game ...exercise bicycle with video gaming console ... video game play on exercise adherence, exercise motivation, and self-efficacy in overweight and obese Army personnel. Despite being younger, less

  13. Mobile Vehicle Teleoperated Over Wireless IP

    DTIC Science & Technology

    2007-06-13

    VideoLAN software suite. The VLC media player portion of this suite handles network streaming of video, as well as the receipt and display of the video... is found in appendix C.7. The video feed is displayed for the operator using VLC, opened independently from the control-sending program. This gives the operator the most choice in how to configure the display. To connect VLC to the feed, all that is needed is the IP address from the Java...

  14. Hazardous Environment Robotics

    NASA Technical Reports Server (NTRS)

    1996-01-01

    Jet Propulsion Laboratory (JPL) developed video overlay calibration and demonstration techniques for ground-based telerobotics. Through a technology sharing agreement with JPL, Deneb Robotics added this as an option to its robotics software, TELEGRIP. The software is used for remotely operating robots in nuclear and hazardous environments in industries including automotive and medical. The option allows the operator to utilize video to calibrate 3-D computer models with the actual environment, and thus plan and optimize robot trajectories before the program is automatically generated.

  15. Automatic lesion detection in capsule endoscopy based on color saliency: closer to an essential adjunct for reviewing software.

    PubMed

    Iakovidis, Dimitris K; Koulaouzidis, Anastasios

    2014-11-01

    The advent of wireless capsule endoscopy (WCE) has revolutionized the diagnostic approach to small-bowel disease. However, the task of reviewing WCE video sequences is laborious and time-consuming; software tools offering automated video analysis would enable a timelier and potentially a more accurate diagnosis. To assess the validity of innovative, automatic lesion-detection software in WCE. A color feature-based pattern recognition methodology was devised and applied to the aforementioned image group. This study was performed at the Royal Infirmary of Edinburgh, United Kingdom, and the Technological Educational Institute of Central Greece, Lamia, Greece. A total of 137 deidentified WCE single images, 77 showing pathology and 60 normal images. The proposed methodology, unlike state-of-the-art approaches, is capable of detecting several different types of lesions. The average performance, in terms of the area under the receiver-operating characteristic curve, reached 89.2 ± 0.9%. The best average performance was obtained for angiectasias (97.5 ± 2.4%) and nodular lymphangiectasias (96.3 ± 3.6%). Single expert for annotation of pathologies, single type of WCE model, use of single images instead of entire WCE videos. A simple, yet effective, approach allowing automatic detection of all types of abnormalities in capsule endoscopy is presented. Based on color pattern recognition, it outperforms previous state-of-the-art approaches. Moreover, it is robust in the presence of luminal contents and is capable of detecting even very small lesions. Crown Copyright © 2014. Published by Elsevier Inc. All rights reserved.
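
    The color-saliency principle described in this abstract can be illustrated with a toy sketch: score each pixel by how far its color lies from the image's dominant mucosal tone, then flag outliers. The pixel representation and threshold below are invented for illustration and are not the authors' actual feature set or classifier.

    ```python
    # Toy color-saliency scoring in the spirit of the abstract above.
    # Pixels whose color deviates strongly from the mean mucosal tone
    # score high; the threshold is hypothetical.

    def saliency_map(pixels):
        """pixels: list of (r, g, b) tuples. Returns per-pixel saliency as
        Euclidean distance from the mean image color."""
        n = len(pixels)
        mr = sum(p[0] for p in pixels) / n
        mg = sum(p[1] for p in pixels) / n
        mb = sum(p[2] for p in pixels) / n
        return [((p[0] - mr) ** 2 + (p[1] - mg) ** 2 + (p[2] - mb) ** 2) ** 0.5
                for p in pixels]

    def flag_salient(pixels, thresh):
        """Indices of pixels whose saliency exceeds the threshold."""
        scores = saliency_map(pixels)
        return [i for i, s in enumerate(scores) if s > thresh]
    ```

    A lesion-like red outlier in a field of uniform mucosa-colored pixels is the only index flagged.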

  16. A Sensor-Aided H.264/AVC Video Encoder for Aerial Video Sequences with In-the-Loop Metadata Correction

    NASA Astrophysics Data System (ADS)

    Cicala, L.; Angelino, C. V.; Ruatta, G.; Baccaglini, E.; Raimondo, N.

    2015-08-01

    Unmanned Aerial Vehicles (UAVs) are often employed to collect high resolution images in order to perform image mosaicking and/or 3D reconstruction. Images are usually stored on board and then processed with on-ground desktop software. In such a way the computational load, and hence the power consumption, is moved on ground, leaving on board only the task of storing data. Such an approach is important in the case of small multi-rotorcraft UAVs because of their low endurance due to the short battery life. Images can be stored on board with either still image or video data compression. Still-image systems are preferred when low frame rates are involved, because video coding systems are based on motion estimation and compensation algorithms which fail when the motion vectors are significantly long and when the overlap between subsequent frames is very small. In this scenario, UAV attitude and position metadata from the Inertial Navigation System (INS) can be employed to estimate global motion parameters without video analysis. A low-complexity image analysis can still be performed in order to refine the motion field estimated using only the metadata. In this work, we propose to use this refinement step to improve the position and attitude estimates produced by the navigation system, thereby maximizing encoder performance. Experiments are performed on both simulated and real-world video sequences.
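
    The metadata-driven global motion idea can be sketched under a simplifying assumption: for a nadir-looking pinhole camera, a ground translation d at altitude h maps to an image shift of roughly f·d/h pixels, which could seed the encoder's motion search before any image analysis. The function and parameter names are illustrative, not the authors' notation.

    ```python
    # Back-of-the-envelope global-motion prediction from INS metadata,
    # assuming a nadir-looking pinhole camera (small-angle model).
    # focal_px and altitude_m are hypothetical parameters.

    def predicted_pixel_shift(dx_m, dy_m, altitude_m, focal_px):
        """Ground translation (dx, dy) in metres between two frames, seen
        from the given altitude, maps to an image shift of about
        f * d / h pixels along each axis."""
        return (focal_px * dx_m / altitude_m, focal_px * dy_m / altitude_m)
    ```

    For example, a 2 m forward displacement seen through a 1000-pixel focal length at 100 m altitude predicts a 20-pixel shift, a motion vector far too long for a standard search window, which is why the metadata seed helps.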

  17. Improving Video Game Development: Facilitating Heterogeneous Team Collaboration through Flexible Software Processes

    NASA Astrophysics Data System (ADS)

    Musil, Juergen; Schweda, Angelika; Winkler, Dietmar; Biffl, Stefan

    Based on our observations of Austrian video game software development (VGSD) practices we identified a lack of systematic process/method support and inefficient collaboration between the various disciplines involved, i.e. engineers and artists. VGSD includes heterogeneous disciplines, e.g. creative arts, game/content design, and software. Nevertheless, improving team collaboration and process support is an ongoing challenge to enable a comprehensive view on game development projects. Lessons learned from software engineering practices can help game developers improve game development processes within a heterogeneous environment. Based on a state-of-the-practice survey in the Austrian games industry, this paper (a) presents first results with a focus on process/method support and (b) suggests a candidate flexible process approach based on Scrum to improve VGSD and team collaboration. Results (a) showed a trend toward highly flexible software processes involving various disciplines and (b) identified the suggested flexible process approach as feasible and useful for project application.

  18. Spacelab, Spacehab, and Space Station Freedom payload interface projects

    NASA Technical Reports Server (NTRS)

    Smith, Dean Lance

    1992-01-01

    Contributions were made to several projects. Howard Nguyen was assisted in developing the Space Station RPS (Rack Power Supply). The RPS is a computer controlled power supply that helps test equipment used for experiments before the equipment is installed on Space Station Freedom. Ron Bennett of General Electric Government Services was assisted in the design and analysis of the Standard Interface Rack Controller hardware and software. An analysis was made of the GPIB (General Purpose Interface Bus), looking for any potential problems while transmitting data across the bus, such as the interaction of the bus controller with a data talker and its listeners. An analysis was made of GPIB bus communications in general, including any negative impact the bus may have on transmitting data back to Earth. A study was made of transmitting digital data back to Earth over a video channel. A report was written about the study and a revised version of the report will be submitted for publication. Work was started on the design of a PC/AT compatible circuit board that will combine digital data with a video signal. Another PC/AT compatible circuit board is being designed to recover the digital data from the video signal. A proposal was submitted to support the continued development of the interface boards after the author returns to Memphis State University in the fall. A study was also made of storing circuit board design software and data on the hard disk server of a LAN (Local Area Network) that connects several IBM style PCs. A report was written that makes several recommendations. A preliminary design review was started of the AIVS (Automatic Interface Verification System). The summer was over before any significant contribution could be made to this project.

  19. Evaluation of Digital Technology and Software Use among Business Education Teachers

    ERIC Educational Resources Information Center

    Ellis, Richard S.; Okpala, Comfort O.

    2004-01-01

    Digital video cameras are part of the evolution of multimedia digital products that have positive applications for educators, students, and industry. Multimedia digital video can be utilized by any personal computer and it allows the user to control, combine, and manipulate different types of media, such as text, sound, video, computer graphics,…

  20. Manual versus Automated Rodent Behavioral Assessment: Comparing Efficacy and Ease of Bederson and Garcia Neurological Deficit Scores to an Open Field Video-Tracking System.

    PubMed

    Desland, Fiona A; Afzal, Aqeela; Warraich, Zuha; Mocco, J

    2014-01-01

    Animal models of stroke have been crucial in advancing our understanding of the pathophysiology of cerebral ischemia. Currently, the standards for determining neurological deficit in rodents are the Bederson and Garcia scales, manual assessments that score animals based on parameters ranked on a narrow scale of severity. Automated open-field analysis using a live-video tracking system that analyzes animal behavior may provide a more sensitive test. Results obtained from the manual Bederson and Garcia scales did not show significant differences between pre- and post-stroke animals in a small cohort. When using the same cohort, however, post-stroke data obtained from automated open-field analysis showed significant differences in several parameters. Furthermore, large-cohort analysis also demonstrated increased sensitivity with automated open-field analysis versus the Bederson and Garcia scales. These early data indicate that use of automated open-field analysis software may provide a more sensitive assessment than the traditional Bederson and Garcia scales.
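
    A minimal sketch of why automated tracking can outperform a coarse ordinal scale: continuous kinematic parameters are computed from frame-by-frame centroid positions rather than binned into a few severity grades. The field names and units below are hypothetical.

    ```python
    # Continuous open-field metrics from a tracked centroid trajectory,
    # illustrating the kind of parameters a video-tracking system yields
    # in place of a narrow ordinal deficit score.
    import math

    def path_metrics(track, fps):
        """track: list of (x, y) centroids in cm, one per video frame.
        Returns total path length and mean speed."""
        dist = sum(math.hypot(x2 - x1, y2 - y1)
                   for (x1, y1), (x2, y2) in zip(track, track[1:]))
        duration_s = (len(track) - 1) / fps
        return {"path_cm": dist, "mean_speed_cm_s": dist / duration_s}
    ```

    Such continuous measures can separate pre- and post-stroke groups even when both fall into the same bin of a manual scale.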

  1. Measuring fish and their physical habitats: Versatile 2D and 3D video techniques with user-friendly software

    USGS Publications Warehouse

    Neuswanger, Jason R.; Wipfli, Mark S.; Rosenberger, Amanda E.; Hughes, Nicholas F.

    2017-01-01

    Applications of video in fisheries research range from simple biodiversity surveys to three-dimensional (3D) measurement of complex swimming, schooling, feeding, and territorial behaviors. However, researchers lack a transparently developed, easy-to-use, general purpose tool for 3D video measurement and event logging. Thus, we developed a new measurement system, with freely available, user-friendly software, easily obtained hardware, and flexible underlying mathematical methods capable of high precision and accuracy. The software, VidSync, allows users to efficiently record, organize, and navigate complex 2D or 3D measurements of fish and their physical habitats. Laboratory tests showed submillimetre accuracy in length measurements of 50.8 mm targets at close range, with increasing errors (mostly <1%) at longer range and for longer targets. A field test on juvenile Chinook salmon (Oncorhynchus tshawytscha) feeding behavior in Alaska streams found that individuals within aggregations avoided the immediate proximity of their competitors, out to a distance of 1.0 to 2.9 body lengths. This system makes 3D video measurement a practical tool for laboratory and field studies of aquatic or terrestrial animal behavior and ecology.
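
    The core stereo idea behind 3D video measurement of this kind can be sketched with the textbook rectified two-camera model, where depth follows from disparity as Z = f·B/d. This is a simplification for illustration, not VidSync's actual calibration pipeline.

    ```python
    # Textbook stereo triangulation for a rectified two-camera rig:
    # depth Z = focal_px * baseline_m / disparity_px. Parameter values
    # in the example are invented for illustration.

    def triangulate(x_left_px, x_right_px, focal_px, baseline_m):
        """Depth in metres of a point matched in both rectified views."""
        disparity = x_left_px - x_right_px
        if disparity <= 0:
            raise ValueError("matched point must have positive disparity")
        return focal_px * baseline_m / disparity
    ```

    With an 800-pixel focal length and a 0.5 m baseline, a 40-pixel disparity places the fish 10 m from the cameras; the shrinking disparity at range is why the paper reports errors growing with distance.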

  2. A portable platform to collect and review behavioral data simultaneously with neurophysiological signals.

    PubMed

    Tianxiao Jiang; Siddiqui, Hasan; Ray, Shruti; Asman, Priscella; Ozturk, Musa; Ince, Nuri F

    2017-07-01

    This paper presents a portable platform to collect and review behavioral data simultaneously with neurophysiological signals. The whole system comprises four parts: a sensor data acquisition interface, a socket server for real-time data streaming, a Simulink system for real-time processing, and an offline data review and analysis toolbox. A low-cost microcontroller is used to acquire data from external sensors such as an accelerometer and hand dynamometer. The microcontroller transfers the data either directly through USB or wirelessly through a Bluetooth module to a data server written in C++ for MS Windows OS. The data server also interfaces with the digital glove and captures HD video from a webcam. The acquired sensor data are streamed under the User Datagram Protocol (UDP) to other applications such as Simulink/Matlab for real-time analysis and recording. Neurophysiological signals such as electroencephalography (EEG), electrocorticography (ECoG), and local field potential (LFP) recordings can be collected simultaneously in Simulink and fused with behavioral data. In addition, we developed customized Matlab Graphical User Interface (GUI) software to review, annotate, and analyze the data offline. The software provides a fast, user-friendly data visualization environment with a synchronized video playback feature, and is also capable of reviewing long-term neural recordings. Other featured functions such as fast preprocessing with multithreaded filters, annotation, montage selection, power spectral density (PSD) estimation, time-frequency maps, and spatial spectral maps are also implemented.
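
    The UDP sensor-streaming link described above can be sketched minimally as follows; the packet layout (a sample index plus three float axes) is invented for illustration, as the abstract does not specify the platform's wire format.

    ```python
    # Minimal UDP sensor-sample link: a sender packs accelerometer-style
    # samples into datagrams, a receiver unpacks them. The struct layout
    # is hypothetical.
    import socket
    import struct

    FMT = "<Ifff"  # little-endian: sample index (uint32) + 3 float32 axes

    def send_sample(sock, addr, idx, ax, ay, az):
        """Pack one sample and send it as a single datagram."""
        sock.sendto(struct.pack(FMT, idx, ax, ay, az), addr)

    def recv_sample(sock):
        """Receive and unpack one datagram into (idx, ax, ay, az)."""
        data, _ = sock.recvfrom(struct.calcsize(FMT))
        return struct.unpack(FMT, data)
    ```

    A receiver bound to a local port can feed such datagrams straight into a real-time consumer (the paper's Simulink model plays that role).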

  3. OpenControl: a free opensource software for video tracking and automated control of behavioral mazes.

    PubMed

    Aguiar, Paulo; Mendonça, Luís; Galhardo, Vasco

    2007-10-15

    Operant animal behavioral tests require the interaction of the subject with sensors and actuators distributed in the experimental environment of the arena. In order to provide user-independent, reliable results and versatile control of these devices it is vital to use an automated control system. Commercial systems for control of animal mazes are usually based on software implementations that restrict their application to the proprietary hardware of the vendor. In this paper we present OpenControl: open-source Visual Basic software that permits a Windows-based computer to function as a system to run fully automated behavioral experiments. OpenControl integrates video tracking of the animal, definition of zones from the video signal for real-time assignment of animal position in the maze, control of the maze actuators from either hardware sensors or the online video tracking, and recording of experimental data. Bidirectional communication with the maze hardware is achieved through the parallel-port interface, without the need for expensive AD-DA cards, while video tracking is attained using an inexpensive FireWire digital camera. The OpenControl Visual Basic code is structurally general and versatile, allowing it to be easily modified or extended to fulfill specific experimental protocols and custom hardware configurations. The Visual Basic environment was chosen in order to allow experimenters to easily adapt the code and expand it to their own needs.
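
    The zone-assignment step, in which each tracked position is mapped to a named region so actuators can be triggered by location, can be sketched as a lookup against rectangular zone definitions. The zone names and coordinates below are hypothetical, and the sketch is in Python rather than the paper's Visual Basic.

    ```python
    # Map a tracked (x, y) centroid to a named maze zone, the step that
    # lets location drive actuators (doors, feeders) in real time.
    # Zone rectangles are hypothetical.

    ZONES = {
        "start_box":   (0, 0, 20, 20),      # (x_min, y_min, x_max, y_max)
        "open_arm":    (20, 0, 100, 20),
        "reward_port": (100, 0, 120, 20),
    }

    def zone_of(x, y):
        """Name of the zone containing (x, y), or None if outside all."""
        for name, (x0, y0, x1, y1) in ZONES.items():
            if x0 <= x < x1 and y0 <= y < y1:
                return name
        return None
    ```

    Each video frame's tracked position is passed through `zone_of`, and a zone transition (e.g. entering `reward_port`) fires the corresponding actuator.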

  4. Updates to the QBIC system

    NASA Astrophysics Data System (ADS)

    Niblack, Carlton W.; Zhu, Xiaoming; Hafner, James L.; Breuel, Tom; Ponceleon, Dulce B.; Petkovic, Dragutin; Flickner, Myron D.; Upfal, Eli; Nin, Sigfredo I.; Sull, Sanghoon; Dom, Byron E.; Yeo, Boon-Lock; Srinivasan, Savitha; Zivkovic, Dan; Penner, Mike

    1997-12-01

    QBICTM (Query By Image Content) is a set of technologies and associated software that allows a user to search, browse, and retrieve image, graphic, and video data from large on-line collections. This paper discusses current research directions of the QBIC project such as indexing for high-dimensional multimedia data, retrieval of gray level images, and storyboard generation suitable for video. It describes aspects of QBIC software including scripting tools, application interfaces, and available GUIs, and gives examples of applications and demonstration systems using it.

  5. Video Discs in Libraries.

    ERIC Educational Resources Information Center

    Barker, Philip

    1986-01-01

    Discussion of developments in information storage technology likely to have significant impact upon library utilization focuses on hardware (videodisc technology) and software developments (knowledge databases; computer networks; database management systems; interactive video, computer, and multimedia user interfaces). Three generic computer-based…

  6. Blade counting tool with a 3D borescope for turbine applications

    NASA Astrophysics Data System (ADS)

    Harding, Kevin G.; Gu, Jiajun; Tao, Li; Song, Guiju; Han, Jie

    2014-07-01

    Video borescopes are widely used for turbine and aviation engine inspection to guarantee the health of blades and prevent blade failure during running. When the moving components of a turbine engine are inspected with a video borescope, the operator must view every blade in a given stage. The blade counting tool is video interpretation software that runs simultaneously in the background during inspection. It identifies moving turbine blades in a video stream, tracks and counts the blades as they move across the screen. This approach includes blade detection to identify blades in different inspection scenarios and blade tracking to perceive blade movement even in hand-turning engine inspections. The software is able to label each blade by comparing counting results to a known blade count for the engine type and stage. On-screen indications show the borescope user labels for each blade and how many blades have been viewed as the turbine is rotated.
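
    The counting step can be illustrated with a toy model: as the turbine rotates, a detected blade edge sweeps across the frame, and counting rising crossings of a fixed trigger column gives the number of blades viewed. The real software must also handle reversals and hand-turning jitter; this sketch assumes monotonic motion, and the trigger-column scheme is an illustrative stand-in for the actual tracking logic.

    ```python
    # Toy blade counter: count rising crossings of a trigger column by
    # the per-frame x-position of the nearest detected blade edge.
    # Assumes monotonic rotation (no reversals).

    def count_blades(edge_x_per_frame, trigger_x):
        """edge_x_per_frame: x-position of the tracked blade edge in each
        frame; the value drops back to a small x when a new blade enters
        the field of view."""
        count, prev = 0, None
        for x in edge_x_per_frame:
            if prev is not None and prev < trigger_x <= x:
                count += 1  # edge crossed the trigger column this frame
            prev = x
        return count
    ```

    Comparing the running count against the known blade total for the engine type and stage is what lets the tool label each blade on screen.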

  7. ATLAS Live: Collaborative Information Streams

    NASA Astrophysics Data System (ADS)

    Goldfarb, Steven; ATLAS Collaboration

    2011-12-01

    I report on a pilot project launched in 2010 focusing on facilitating communication and information exchange within the ATLAS Collaboration, through the combination of digital signage software and webcasting. The project, called ATLAS Live, implements video streams of information, ranging from detailed detector and data status to educational and outreach material. The content, including text, images, video and audio, is collected, visualised and scheduled using digital signage software. The system is robust and flexible, utilizing scripts to input data from remote sources, such as the CERN Document Server, Indico, or any available URL, and to integrate these sources into professional-quality streams, including text scrolling, transition effects, and inter- and intra-screen divisibility. Information is published via the encoding and webcasting of standard video streams, viewable on all common platforms, using a web browser or other common video tool. Authorisation is enforced at the level of the streaming and at the web portals, using the CERN SSO system.

  8. Data simulation for the Lightning Imaging Sensor (LIS)

    NASA Technical Reports Server (NTRS)

    Boeck, William L.

    1991-01-01

    This project aims to build a data analysis system that will utilize existing video tape scenes of lightning as viewed from space. The resultant data will be used for the design and development of the Lightning Imaging Sensor (LIS) software and algorithm analysis. The desire for statistically significant metrics implies that a large data set needs to be analyzed. Before 1990 the quality and quantity of video was insufficient to build a usable data set. At this point in time, there is usable data from missions STS-34, STS-32, STS-31, STS-41, STS-37, and STS-39. During the summer of 1990, a manual analysis system was developed to demonstrate that the video analysis is feasible and to identify techniques to deduce information that was not directly available. Because the closed-circuit television system used on the space shuttle was intended for documentary TV, the current values of the camera focal length and pointing orientation, which are needed for photoanalysis, are not included in the system data. A large effort was needed to discover ancillary data sources as well as develop indirect methods to estimate the necessary parameters. Any data system coping with full motion video faces an enormous bottleneck produced by the large data production rate and the need to move and store the digitized images. The manual system bypassed the video digitizing bottleneck by using a genlock to superimpose pixel coordinates on full motion video. Because the data set had to be obtained point by point by a human operating a computer mouse, the data output rate was small. The loan and subsequent acquisition of an Abekas digital frame store with a real-time digitizer moved the bottleneck from data acquisition to a problem of data transfer and storage. The semi-automated analysis procedure was developed using existing equipment and is described. A fully automated system is described in the hope that the components may come on the market at reasonable prices in the next few years.

  9. A method of mobile video transmission based on J2ee

    NASA Astrophysics Data System (ADS)

    Guo, Jian-xin; Zhao, Ji-chun; Gong, Jing; Chun, Yang

    2013-03-01

    As 3G (3rd-generation) networks evolve worldwide, the rising demand for mobile video services and the enormous growth of video on the internet are creating major new revenue opportunities for mobile network operators and application developers. This paper introduces a method of mobile video transmission based on J2ME, presenting the video compression method, describing the video compression standard, and then describing the software design. The proposed mobile video method based on J2EE is a typical mobile multimedia application, with high availability and a wide range of applications. Users can access the video through terminal devices such as phones.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shoaf, S.; APS Engineering Support Division

    A real-time image analysis system was developed for beam imaging diagnostics. An Apple Power Mac G5 with an Active Silicon LFG frame grabber was used to capture video images that were processed and analyzed. Software routines were created to utilize vector-processing hardware to reduce the time to process images as compared to conventional methods. These improvements allow for more advanced image processing diagnostics to be performed in real time.

  11. The Children of the Computer Generation: An Analysis of the Family Computer Fad in Japan.

    ERIC Educational Resources Information Center

    Ishigaki, Emiko Hannah

    Results of a survey of grade school and junior high school students suggest that Japan is now caught up in a TV game fad called Family Computer (Fami-Com). Fami-Com is a household electric machine for video games that allows players to use more than 100 currently marketed software products. Since its introduction in 1983, the popularity of the…

  12. Realization and optimization of AES algorithm on the TMS320DM6446 based on DaVinci technology

    NASA Astrophysics Data System (ADS)

    Jia, Wen-bin; Xiao, Fu-hai

    2013-03-01

    The application of the AES algorithm in digital cinema systems protects video data from illegal theft and malicious tampering, solving its security problems. At the same time, in order to meet the requirements for real-time, transparent encryption of high-speed audio and video data streams in the information security field, through in-depth analysis of the AES algorithm principle, based on the TMS320DM6446 hardware platform and the DaVinci software framework, this paper proposes specific realization methods of the AES algorithm in a digital video system and its optimization solutions. The test results show that digital movies encrypted with AES128 cannot be played normally, which ensures the security of the digital movies. Comparing the performance of the AES128 algorithm before and after optimization verifies the correctness and validity of the improved algorithm.
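
    The structure of transparent stream encryption of this kind can be sketched with counter (CTR) mode, which turns a block cipher into a keystream so video bytes can be encrypted in place at line rate. Python's standard library has no AES, so HMAC-SHA256 stands in for the AES-128 block cipher here purely to keep the sketch self-contained and runnable; a real system would use AES via a library such as PyCryptodome, and the key/nonce handling below is illustrative only.

    ```python
    # CTR-mode stream encryption structure. NOTE: HMAC-SHA256 is a
    # stand-in keyed PRF replacing the AES block cipher, used only so
    # this sketch runs on the standard library.
    import hashlib
    import hmac

    def ctr_keystream(key, nonce, length):
        """Generate `length` keystream bytes by encrypting a counter."""
        out = b""
        counter = 0
        while len(out) < length:
            block = hmac.new(key, nonce + counter.to_bytes(8, "big"),
                             hashlib.sha256).digest()
            out += block
            counter += 1
        return out[:length]

    def ctr_crypt(key, nonce, data):
        """XOR data with the keystream; the same call encrypts and decrypts."""
        ks = ctr_keystream(key, nonce, len(data))
        return bytes(a ^ b for a, b in zip(data, ks))
    ```

    Because encryption and decryption are the same XOR, the player can decrypt each chunk on the fly without buffering the whole stream, which is what makes the "transparent" real-time requirement feasible.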

  13. ACQ4: an open-source software platform for data acquisition and analysis in neurophysiology research

    PubMed Central

    Campagnola, Luke; Kratz, Megan B.; Manis, Paul B.

    2014-01-01

    The complexity of modern neurophysiology experiments requires specialized software to coordinate multiple acquisition devices and analyze the collected data. We have developed ACQ4, an open-source software platform for performing data acquisition and analysis in experimental neurophysiology. This software integrates the tasks of acquiring, managing, and analyzing experimental data. ACQ4 has been used primarily for standard patch-clamp electrophysiology, laser scanning photostimulation, multiphoton microscopy, intrinsic imaging, and calcium imaging. The system is highly modular, which facilitates the addition of new devices and functionality. The modules included with ACQ4 provide for rapid construction of acquisition protocols, live video display, and customizable analysis tools. Position-aware data collection allows automated construction of image mosaics and registration of images with 3-dimensional anatomical atlases. ACQ4 uses free and open-source tools including Python, NumPy/SciPy for numerical computation, PyQt for the user interface, and PyQtGraph for scientific graphics. Supported hardware includes cameras, patch clamp amplifiers, scanning mirrors, lasers, shutters, Pockels cells, motorized stages, and more. ACQ4 is available for download at http://www.acq4.org. PMID:24523692

  14. Pegasus5 is Co-Winner of NASA's 2016 Software of the Year Award

    NASA Image and Video Library

    2016-11-04

    Shareable video highlighting the Pegasus5 software, which was the co-winner of the NASA's 2016 Software of the Year award. Developed at NASA Ames, it helps in the simulation of air flow around space vehicles during launch and re-entry.

  15. Video Compression

    NASA Technical Reports Server (NTRS)

    1996-01-01

    Optivision developed two PC-compatible boards and associated software under a Goddard Space Flight Center Small Business Innovation Research grant for NASA applications in areas such as telerobotics, telesciences and spaceborne experimentation. From this technology, the company used its own funds to develop commercial products, the OPTIVideo MPEG Encoder and Decoder, which are used for realtime video compression and decompression. They are used in commercial applications including interactive video databases and video transmission. The encoder converts video source material to a compressed digital form that can be stored or transmitted, and the decoder decompresses bit streams to provide high quality playback.

  16. Multimedia in 1992.

    ERIC Educational Resources Information Center

    Desmarais, Norman

    1991-01-01

    Reviews current developments in multimedia computing for both the business and consumer markets, including interactive multimedia players; compact disc-interactive (CD-I), including levels of audio quality, various video specifications and visual effects, and software; digital video interactive (DVI); and multimedia personal computers. (LRW)

  17. An application framework for computer-aided patient positioning in radiation therapy.

    PubMed

    Liebler, T; Hub, M; Sanner, C; Schlegel, W

    2003-09-01

    The importance of exact patient positioning in radiation therapy increases with the ongoing improvements in irradiation planning and treatment. Therefore, new ways to overcome precision limitations of current positioning methods in fractionated treatment have to be found. The Department of Medical Physics at the German Cancer Research Centre (DKFZ) follows different video-based approaches to increase repositioning precision. In this context, the modular software framework FIVE (Fast Integrated Video-based Environment) has been designed and implemented. It is both hardware- and platform-independent and supports merging position data by integrating various computer-aided patient positioning methods. A highly precise optical tracking system and several subtraction imaging techniques have been realized as modules to supply basic video-based repositioning techniques. This paper describes the common framework architecture, the main software modules and their interfaces. An object-oriented software engineering process has been applied using the UML, C++ and the Qt library. The significance of the current framework prototype for the application in patient positioning as well as the extension to further application areas will be discussed. Particularly in experimental research, where special system adjustments are often necessary, the open design of the software allows problem-oriented extensions and adaptations.

  18. Film School: To Spice Up Course Work, Professors Make Their Own Videos

    ERIC Educational Resources Information Center

    Young, Jeffrey R.

    2008-01-01

    College faculty members have recently begun creating homemade videos to supplement their lectures, using free or low-cost software. These are the same technologies that make it easy for students to post spoof videos on YouTube, but the scholars are putting the tools to educational use. The professors say that students tune in to the short videos…

  19. Creating a YouTube-Like Collaborative Environment in Mathematics: Integrating Animated Geogebra Constructions and Student-Generated Screencast Videos

    ERIC Educational Resources Information Center

    Lazarus, Jill; Roulet, Geoffrey

    2013-01-01

    This article discusses the integration of student-generated GeoGebra applets and Jing screencast videos to create a YouTube-like medium for sharing in mathematics. The value of combining dynamic mathematics software and screencast videos for facilitating communication and representations in a digital era is demonstrated herein. We share our…

  20. Astro Academy: Principia--Using Tracker to Analyse Experiments Undertaken by Tim Peake on the International Space Station

    ERIC Educational Resources Information Center

    Mobbs, Robin

    2016-01-01

    While on the International Space Station, Tim Peake undertook and recorded video files of experiments suitable for physics teaching coordinated by the National Space Academy. This article describes how the video of these experiments was prepared for use with tracking software. The tracking files of the videos are suitable for use by teachers or…

  1. Software Used to Generate Cancer Statistics - SEER Cancer Statistics

    Cancer.gov

    Videos that highlight topics and trends in cancer statistics and definitions of statistical terms. Also software tools for analyzing and reporting cancer statistics, which are used to compile SEER's annual reports.

  2. Automated Video-Based Analysis of Contractility and Calcium Flux in Human-Induced Pluripotent Stem Cell-Derived Cardiomyocytes Cultured over Different Spatial Scales.

    PubMed

    Huebsch, Nathaniel; Loskill, Peter; Mandegar, Mohammad A; Marks, Natalie C; Sheehan, Alice S; Ma, Zhen; Mathur, Anurag; Nguyen, Trieu N; Yoo, Jennie C; Judge, Luke M; Spencer, C Ian; Chukka, Anand C; Russell, Caitlin R; So, Po-Lin; Conklin, Bruce R; Healy, Kevin E

    2015-05-01

    Contractile motion is the simplest metric of cardiomyocyte health in vitro, but unbiased quantification is challenging. We describe a rapid automated method, requiring only standard video microscopy, to analyze the contractility of human-induced pluripotent stem cell-derived cardiomyocytes (iPS-CM). New algorithms for generating and filtering motion vectors combined with a newly developed isogenic iPSC line harboring genetically encoded calcium indicator, GCaMP6f, allow simultaneous user-independent measurement and analysis of the coupling between calcium flux and contractility. The relative performance of these algorithms, in terms of improving signal to noise, was tested. Applying these algorithms allowed analysis of contractility in iPS-CM cultured over multiple spatial scales from single cells to three-dimensional constructs. This open source software was validated with analysis of isoproterenol response in these cells, and can be applied in future studies comparing the drug responsiveness of iPS-CM cultured in different microenvironments in the context of tissue engineering.
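
    The principle behind video-based contractility measurement can be reduced to a toy signal: the mean absolute intensity change between consecutive frames peaks during motion, and thresholding that signal counts motion events. The published method uses proper block-matching motion vectors and filtering; this sketch (with a hypothetical threshold) only illustrates the idea.

    ```python
    # Reduced sketch of video-based contractility analysis: a per-frame
    # motion signal from frame differencing, plus threshold-crossing
    # event counting. Real pipelines use block-matching motion vectors.

    def motion_signal(frames):
        """frames: list of equal-length lists of pixel intensities.
        Returns mean absolute inter-frame change, one value per frame pair."""
        sig = []
        for a, b in zip(frames, frames[1:]):
            sig.append(sum(abs(x - y) for x, y in zip(a, b)) / len(a))
        return sig

    def count_events(sig, thresh):
        """Count rising crossings of the threshold (motion bursts)."""
        events, above = 0, False
        for v in sig:
            if v > thresh and not above:
                events += 1
            above = v > thresh
        return events
    ```

    On real recordings each contraction and relaxation produces a motion burst, so event counts are paired with the calcium-indicator trace to resolve the coupling the abstract describes.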

  3. Monitoring system for phreatic eruptions and thermal behavior on Poás volcano hyperacidic lake, with permanent IR and HD cameras

    NASA Astrophysics Data System (ADS)

    Ramirez, C. J.; Mora-Amador, R. A., Sr.; Alpizar Segura, Y.; González, G.

    2015-12-01

Volcano monitoring has expanded considerably over the past decades, and one of the emerging techniques involving new technology is digital video surveillance together with the automated software that comes with it. Given the budget and some on-site facilities, it is now possible to set up a real-time network of high-definition video cameras, some with special capabilities such as infrared, thermal or ultraviolet imaging, which can ease (or complicate) the analysis of volcanic phenomena such as lava eruptions, phreatic eruptions, plume speed, lava flows, and the closing or opening of vents, to mention only some of the many applications of these cameras. We present the methodology used at Poás volcano to install a real-time system for processing and storing HD and thermal images and video, the process of installing and commissioning the HD and IR cameras, towers, solar panels and data-transmission radios on a volcano located in the tropics, and the volcanic areas we targeted and why. We also describe the hardware and software we consider necessary to carry out the project. Finally, we show early examples of the data: upwelling areas on the Poás hyperacidic lake and their relation to lake phreatic eruptions, rising temperatures on an old dome wall followed by sudden wall explosions, and the use of IR video to measure plume speed and contour for combination with DOAS or FTIR measurements.

  4. A Macintosh-Based Scientific Images Video Analysis System

    NASA Technical Reports Server (NTRS)

    Groleau, Nicolas; Friedland, Peter (Technical Monitor)

    1994-01-01

    A set of experiments was designed at MIT's Man-Vehicle Laboratory in order to evaluate the effects of zero gravity on the human orientation system. During many of these experiments, the movements of the eyes are recorded on high quality video cassettes. The images must be analyzed off-line to calculate the position of the eyes at every moment in time. To this aim, I have implemented a simple inexpensive computerized system which measures the angle of rotation of the eye from digitized video images. The system is implemented on a desktop Macintosh computer, processes one play-back frame per second and exhibits adequate levels of accuracy and precision. The system uses LabVIEW, a digital output board, and a video input board to control a VCR, digitize video images, analyze them, and provide a user friendly interface for the various phases of the process. The system uses the Concept Vi LabVIEW library (Graftek's Image, Meudon la Foret, France) for image grabbing and displaying as well as translation to and from LabVIEW arrays. Graftek's software layer drives an Image Grabber board from Neotech (Eastleigh, United Kingdom). A Colour Adapter box from Neotech provides adequate video signal synchronization. The system also requires a LabVIEW driven digital output board (MacADIOS II from GW Instruments, Cambridge, MA) controlling a slightly modified VCR remote control used mainly to advance the video tape frame by frame.

  5. Scalable Adaptive Graphics Environment (SAGE) Software for the Visualization of Large Data Sets on a Video Wall

    NASA Technical Reports Server (NTRS)

    Jedlovec, Gary; Srikishen, Jayanthi; Edwards, Rita; Cross, David; Welch, Jon; Smith, Matt

    2013-01-01

The use of collaborative scientific visualization systems for the analysis, visualization, and sharing of "big data" available from new high resolution remote sensing satellite sensors or four-dimensional numerical model simulations is propelling the wider adoption of ultra-resolution tiled display walls interconnected by high speed networks. These systems require a globally connected and well-integrated operating environment that provides persistent visualization and collaboration services. This abstract and subsequent presentation describes a new collaborative visualization system installed for NASA's Short-term Prediction Research and Transition (SPoRT) program at Marshall Space Flight Center and its use for Earth science applications. The system consists of a 3 x 4 array of 1920 x 1080 pixel thin bezel video monitors mounted on a wall in a scientific collaboration lab. The monitors are physically and virtually integrated into a 14' x 7' video display. The display of scientific data on the video wall is controlled by a single Alienware Aurora PC with a 2nd Generation Intel Core 4.1 GHz processor, 32 GB memory, and an AMD FirePro W600 video card with 6 Mini DisplayPort connections. Six Mini DisplayPort-to-dual-DVI cables are used to connect the 12 individual video monitors. The open source Scalable Adaptive Graphics Environment (SAGE) windowing and media control framework, running on top of the Ubuntu 12 Linux operating system, allows several users to simultaneously control the display and storage of high resolution still and moving graphics in a variety of formats, on tiled display walls of any size. SAGE provides a common environment, or framework, enabling its users to access, display and share a variety of data-intensive information. 
This information can be digital-cinema animations, high-resolution images, high-definition video-teleconferences, presentation slides, documents, spreadsheets or laptop screens. SAGE is cross-platform, community-driven, open-source visualization and collaboration middleware that utilizes shared national and international cyberinfrastructure for the advancement of scientific research and education.

  6. Scalable Adaptive Graphics Environment (SAGE) Software for the Visualization of Large Data Sets on a Video Wall

    NASA Astrophysics Data System (ADS)

    Jedlovec, G.; Srikishen, J.; Edwards, R.; Cross, D.; Welch, J. D.; Smith, M. R.

    2013-12-01

The use of collaborative scientific visualization systems for the analysis, visualization, and sharing of 'big data' available from new high resolution remote sensing satellite sensors or four-dimensional numerical model simulations is propelling the wider adoption of ultra-resolution tiled display walls interconnected by high speed networks. These systems require a globally connected and well-integrated operating environment that provides persistent visualization and collaboration services. This abstract and subsequent presentation describes a new collaborative visualization system installed for NASA's Short-term Prediction Research and Transition (SPoRT) program at Marshall Space Flight Center and its use for Earth science applications. The system consists of a 3 x 4 array of 1920 x 1080 pixel thin bezel video monitors mounted on a wall in a scientific collaboration lab. The monitors are physically and virtually integrated into a 14' x 7' video display. The display of scientific data on the video wall is controlled by a single Alienware Aurora PC with a 2nd Generation Intel Core 4.1 GHz processor, 32 GB memory, and an AMD FirePro W600 video card with 6 Mini DisplayPort connections. Six Mini DisplayPort-to-dual-DVI cables are used to connect the 12 individual video monitors. The open source Scalable Adaptive Graphics Environment (SAGE) windowing and media control framework, running on top of the Ubuntu 12 Linux operating system, allows several users to simultaneously control the display and storage of high resolution still and moving graphics in a variety of formats, on tiled display walls of any size. SAGE provides a common environment, or framework, enabling its users to access, display and share a variety of data-intensive information. 
This information can be digital-cinema animations, high-resolution images, high-definition video-teleconferences, presentation slides, documents, spreadsheets or laptop screens. SAGE is cross-platform, community-driven, open-source visualization and collaboration middleware that utilizes shared national and international cyberinfrastructure for the advancement of scientific research and education.

  7. Performance comparison of AV1, HEVC, and JVET video codecs on 360 (spherical) video

    NASA Astrophysics Data System (ADS)

    Topiwala, Pankaj; Dai, Wei; Krishnan, Madhu; Abbas, Adeel; Doshi, Sandeep; Newman, David

    2017-09-01

This paper compares the coding efficiency performance on 360° video of three software codecs: (a) the AV1 video codec from the Alliance for Open Media (AOM); (b) the HEVC reference software HM; and (c) the JVET JEM reference software. Note that 360° video is especially challenging content, in that one codes at full resolution globally but typically views locally (in a viewport), which magnifies errors. The codecs are tested in two different projection formats, ERP and RSP, to check consistency. Performance is tabulated for 1-pass encoding on two fronts: (1) objective performance based on end-to-end (E2E) metrics such as SPSNR-NN and WS-PSNR, currently developed in the JVET committee; and (2) informal subjective assessment of static viewports. Constant-quality encoding is performed with all three codecs for an unbiased comparison of the core coding tools. Our general conclusion is that under constant-quality coding, AV1 underperforms HEVC, which underperforms JVET. We also test with rate control, where AV1 currently underperforms the open source x265 HEVC codec. Objective and visual evidence is provided.
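
Of the objective metrics named above, WS-PSNR is straightforward to state: each row of an equirectangular (ERP) frame is weighted by the cosine of its latitude so that over-sampled polar regions do not dominate the error. A minimal sketch for one grayscale frame, following the JVET definition (the reference software additionally handles bit depth, chroma, and other projections):

```python
import numpy as np

def ws_psnr_erp(ref, test, peak=255.0):
    """WS-PSNR between two equirectangular (ERP) grayscale frames.

    Row j of an H-row ERP frame gets weight cos((j + 0.5 - H/2) * pi / H),
    i.e. the cosine of its latitude; the weighted MSE then feeds the
    usual PSNR formula.
    """
    ref = np.asarray(ref, dtype=float)
    test = np.asarray(test, dtype=float)
    h, w = ref.shape
    j = np.arange(h)
    weights = np.cos((j + 0.5 - h / 2) * np.pi / h)[:, None] * np.ones((1, w))
    wmse = np.sum(weights * (ref - test) ** 2) / np.sum(weights)
    return 10.0 * np.log10(peak ** 2 / wmse)
```

A spatially uniform error gives the same value as plain PSNR, since the weights then cancel; the metrics diverge only when errors concentrate toward the poles or the equator.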

  8. Representing videos in tangible products

    NASA Astrophysics Data System (ADS)

    Fageth, Reiner; Weiting, Ralf

    2014-03-01

Videos can be taken with nearly every camera: digital point-and-shoot cameras, DSLRs, smartphones and, increasingly, so-called action cameras mounted on sports devices. The implementation of videos in printed products, by generating QR codes and representative pictures out of the video stream in software, was the subject of last year's paper. This year we present first data about what content is displayed and how users represent their videos in printed products, e.g. CEWE PHOTOBOOKS and greeting cards. We report the share of the different video formats used, the number of images extracted from each video in order to represent it, the positions in the book, and the design strategies compared to regular books.

  9. Database for the collection and analysis of clinical data and images of neoplasms of the sinonasal tract.

    PubMed

    Trimarchi, Matteo; Lund, Valerie J; Nicolai, Piero; Pini, Massimiliano; Senna, Massimo; Howard, David J

    2004-04-01

The Neoplasms of the Sinonasal Tract software package (NSNT v 1.0) implements a complete visual database for patients with sinonasal neoplasia, facilitating standardization of data and statistical analysis. The software, which is compatible with the Macintosh and Windows platforms, provides a multiuser application with a dedicated server (on Windows NT or 2000 or Macintosh OS 9 or X and a network of clients) together with web access, if required. The system hardware consists of an Apple Power Macintosh G4 500 MHz computer with PCI bus, 256 MB of RAM and a 60 GB hard disk, or any IBM-compatible computer with a Pentium 2 processor. Image acquisition may be performed with different frame-grabber cards for analog or digital video input of different standards (PAL, SECAM, or NTSC) and levels of quality (VHS, S-VHS, Betacam, Mini DV, DV). The visual database is based on 4th Dimension by 4D Inc, and video compression is performed in real-time MPEG format. Six sections have been developed: demographics, symptoms, extent of disease, radiology, treatment, and follow-up. Acquisition of data includes computed tomography and magnetic resonance imaging, histology, and endoscopy images, allowing sequential comparison. Statistical analysis integral to the program provides Kaplan-Meier survival curves. The development of a dedicated, user-friendly database for sinonasal neoplasia facilitates a multicenter network and has obvious clinical and research benefits.
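
The Kaplan-Meier curves mentioned above follow from a short product-limit calculation: at each event time, survival is multiplied by (1 − events/number at risk), with censored patients leaving the risk set without contributing an event. A minimal sketch (hypothetical function name, not the package's own code):

```python
def kaplan_meier(times, events):
    """Kaplan-Meier product-limit survival estimate.

    times  : follow-up time for each patient
    events : 1 if the event (e.g. death/recurrence) occurred, 0 if censored
    Returns (event_times, survival_probabilities).
    """
    pairs = sorted(zip(times, events))
    n_at_risk = len(pairs)
    s, out_t, out_s = 1.0, [], []
    i = 0
    while i < len(pairs):
        t = pairs[i][0]
        d = sum(e for tt, e in pairs if tt == t)   # events at time t
        n = sum(1 for tt, _ in pairs if tt == t)   # patients leaving risk set at t
        if d > 0:
            s *= 1.0 - d / n_at_risk
            out_t.append(t)
            out_s.append(s)
        n_at_risk -= n
        i += n
    return out_t, out_s
```

For example, times [1, 2, 3, 4, 5] with event flags [1, 1, 0, 1, 0] (the third and fifth patients censored) yield survival 0.8, 0.6 and 0.3 at times 1, 2 and 4.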

  10. High-speed video analysis of forward and backward spattered blood droplets

    NASA Astrophysics Data System (ADS)

    Comiskey, Patrick; Yarin, Alexander; Attinger, Daniel

    2017-11-01

    High-speed videos of blood spatter due to a gunshot taken by the Ames Laboratory Midwest Forensics Resource Center are analyzed. The videos used in this analysis were focused on a variety of targets hit by a bullet which caused either forward, backward, or both types of blood spatter. The analysis process utilized particle image velocimetry and particle analysis software to measure drop velocities as well as the distributions of the number of droplets and their respective side view area. This analysis revealed that forward spatter results in drops travelling twice as fast compared to backward spatter, while both types of spatter contain drops of approximately the same size. Moreover, the close-to-cone domain in which drops are issued is larger in forward spatter than in the backward one. The inclination angle of the bullet as it penetrates the target is seen to play a significant role in the directional preference of the spattered blood. Also, the aerodynamic drop-drop interaction, muzzle gases, bullet impact angle, as well as the aerodynamic wake of the bullet are seen to greatly influence the flight of the drops. The aim of this study is to provide a quantitative basis for current and future research on bloodstain pattern analysis. This work was financially supported by the United States National Institute of Justice (award NIJ 2014-DN-BXK036).

  11. Automated Production of Movies on a Cluster of Computers

    NASA Technical Reports Server (NTRS)

    Nail, Jasper; Le, Duong; Nail, William L.; Nail, William

    2008-01-01

    A method of accelerating and facilitating production of video and film motion-picture products, and software and generic designs of computer hardware to implement the method, are undergoing development. The method provides for automation of most of the tedious and repetitive tasks involved in editing and otherwise processing raw digitized imagery into final motion-picture products. The method was conceived to satisfy requirements, in industrial and scientific testing, for rapid processing of multiple streams of simultaneously captured raw video imagery into documentation in the form of edited video imagery and video derived data products for technical review and analysis. In the production of such video technical documentation, unlike in production of motion-picture products for entertainment, (1) it is often necessary to produce multiple video derived data products, (2) there are usually no second chances to repeat acquisition of raw imagery, (3) it is often desired to produce final products within minutes rather than hours, days, or months, and (4) consistency and quality, rather than aesthetics, are the primary criteria for judging the products. In the present method, the workflow has both serial and parallel aspects: processing can begin before all the raw imagery has been acquired, each video stream can be subjected to different stages of processing simultaneously on different computers that may be grouped into one or more cluster(s), and the final product may consist of multiple video streams. Results of processing on different computers are shared, so that workers can collaborate effectively.

  12. Three-dimensional image reconstruction with free open-source OsiriX software in video-assisted thoracoscopic lobectomy and segmentectomy.

    PubMed

    Yao, Fei; Wang, Jian; Yao, Ju; Hang, Fangrong; Lei, Xu; Cao, Yongke

    2017-03-01

The aim of this retrospective study was to evaluate the practice and the feasibility of OsiriX, a free and open-source medical imaging software, in performing accurate video-assisted thoracoscopic lobectomy and segmentectomy. From July 2014 to April 2016, 63 patients received anatomical video-assisted thoracoscopic surgery (VATS), either lobectomy or segmentectomy, in our department. Three-dimensional (3D) reconstruction images of 61 (96.8%) patients were preoperatively obtained with contrast-enhanced computed tomography (CT). Preoperative resection simulations were accomplished with patient-individual reconstructed 3D images. For lobectomy, pulmonary lobar veins, arteries and bronchi were identified meticulously by carefully reviewing the 3D images on the display. For segmentectomy, the intrasegmental veins in the affected segment for division and the intersegmental veins to be preserved were identified on the 3D images. Patient preoperative characteristics, surgical outcomes and postoperative data were reviewed from a prospective database. The study cohort of 63 patients included 33 (52.4%) men and 30 (47.6%) women, of whom 46 (73.0%) underwent VATS lobectomy and 17 (27.0%) underwent VATS segmentectomy. There was 1 conversion from VATS lobectomy to open thoracotomy because of fibrocalcified lymph nodes. A VATS lobectomy was performed in 1 case after completing the segmentectomy because invasive adenocarcinoma was detected by intraoperative frozen-section analysis. There were no 30-day or 90-day operative mortalities. In conclusion, the free, simple, and user-friendly software program OsiriX can provide a 3D anatomic structure of pulmonary vessels and a clear vision into the space between the lesion and adjacent tissues, which allows surgeons to make preoperative simulations and improve the accuracy and safety of actual surgery. Copyright © 2017 IJS Publishing Group Ltd. Published by Elsevier Ltd. All rights reserved.

  13. Video Annotation Software Application for Thorough Collaborative Assessment of and Feedback on Microteaching Lessons in Geography Education

    ERIC Educational Resources Information Center

    van der Westhuizen, Christo P.; Golightly, Aubrey

    2015-01-01

    This article discusses the process and findings of a study in which video annotation (VideoANT) and a learning management system (LMS) were implemented together in the microteaching lessons of fourth-year geography student teachers. The aim was to ensure adequate assessment of and feedback for each student, since these aspects are, in general, a…

  14. Using a digital video camera to examine coupled oscillations

    NASA Astrophysics Data System (ADS)

    Greczylo, T.; Debowska, E.

    2002-07-01

In our previous paper (Debowska E, Jakubowicz S and Mazur Z 1999 Eur. J. Phys. 20 89-95), thanks to the use of an ultrasound distance sensor, experimental verification of the solution of Lagrange equations for longitudinal oscillations of the Wilberforce pendulum was shown. In this paper the sensor and a digital video camera were used to monitor and measure the changes of both the pendulum's coordinates (vertical displacement and angle of rotation) simultaneously. The experiments were performed with the aid of the integrated software package COACH 5. Fourier analysis in Microsoft® Excel 97 was used to find normal modes in each case of the measured oscillations. Comparison of the results with those presented in our previous paper (as given above) leads to the conclusion that a digital video camera is a powerful tool for measuring coupled oscillations of a Wilberforce pendulum. The most important conclusion is that a video camera is able to do something more than merely register interesting physical phenomena - it can be used to perform measurements of physical quantities at an advanced level.
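
Extracting normal-mode frequencies from a measured coordinate, as done above with Fourier analysis in Excel, amounts to locating the dominant peaks of the signal's spectrum. A minimal sketch (hypothetical function name; windowing and zero-padding are omitted):

```python
import numpy as np

def dominant_frequencies(signal, dt, n_peaks=2):
    """Return the n_peaks strongest frequencies (Hz) in a real signal.

    For a Wilberforce pendulum coordinate, the two strongest spectral
    peaks approximate the two normal-mode frequencies.
    """
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), dt)
    spectrum[0] = 0.0                      # ignore the DC component
    idx = np.argsort(spectrum)[-n_peaks:]  # indices of the strongest bins
    return sorted(freqs[idx])
```

With 20 s of data sampled at 100 Hz, the frequency resolution is 0.05 Hz, adequate to separate typical Wilberforce normal modes.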

  15. Different source image fusion based on FPGA

    NASA Astrophysics Data System (ADS)

    Luo, Xiao; Piao, Yan

    2016-03-01

Video image fusion combines, by technical means, the video obtained from different image sensors so that the sources complement each other, yielding video that is rich in information and suited to the human visual system. Infrared cameras penetrate harsh environments such as smoke, fog and low light, but capture poor image detail that does not match the human visual system; visible-light imaging alone provides detailed, high-resolution images well suited to vision, but is easily affected by the external environment. The algorithms involved in fusing infrared and visible video are complex and computationally demanding, occupy considerable memory and require high clock rates; they are mostly implemented in software (C++, C, etc.), with few implementations on hardware platforms. In this paper, based on the imaging characteristics of infrared and visible-light images, software and hardware are combined: the registration parameters are obtained in MATLAB, and gray-level weighted averaging is implemented on the hardware platform to perform the information fusion. The resulting fused image effectively improves information acquisition, increasing the amount of information in the image.
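
The gray-level weighted-average fusion named above is the simplest fusion rule: each output pixel is a convex combination of the registered IR and visible pixels. A minimal sketch of the arithmetic (hypothetical function name; registration is assumed to have been applied already, and the paper implements this step on the FPGA rather than in Python):

```python
import numpy as np

def fuse_weighted(ir, visible, alpha=0.5):
    """Gray-level weighted-average fusion of registered IR/visible frames.

    Each output pixel is alpha*IR + (1 - alpha)*visible, clipped back to
    the 8-bit range.
    """
    ir = np.asarray(ir, dtype=float)
    visible = np.asarray(visible, dtype=float)
    fused = alpha * ir + (1.0 - alpha) * visible
    return np.clip(fused, 0, 255).astype(np.uint8)
```

On an FPGA the same rule reduces to one multiply-accumulate (or, for alpha = 0.5, an add and a one-bit shift) per pixel, which is why it suits a hardware pipeline.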

  16. SolTrack: an automatic video processing software for in situ interface tracking.

    PubMed

    Griesser, S; Pierer, R; Reid, M; Dippenaar, R

    2012-10-01

    High-Resolution in situ observation of solidification experiments has become a powerful technique to improve the fundamental understanding of solidification processes of metals and alloys. In the present study, high-temperature laser-scanning confocal microscopy (HTLSCM) was utilized to observe and capture in situ solidification and phase transformations of alloys for subsequent post processing and analysis. Until now, this analysis has been very time consuming as frame-by-frame manual evaluation of propagating interfaces was used to determine the interface velocities. SolTrack has been developed using the commercial software package MATLAB and is designed to automatically detect, locate and track propagating interfaces during solidification and phase transformations as well as to calculate interfacial velocities. Different solidification phenomena have been recorded to demonstrate a wider spectrum of applications of this software. A validation, through comparison with manual evaluation, is included where the accuracy is shown to be very high. © 2012 The Authors Journal of Microscopy © 2012 Royal Microscopical Society.
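
The frame-by-frame work SolTrack automates — locate the interface in each frame, then differentiate position over time to get velocity — can be sketched as follows (hypothetical helper names; SolTrack's own MATLAB edge detection is more robust than this simple threshold scan):

```python
import numpy as np

def interface_positions(frames, threshold):
    """Locate a bright/dark interface in each frame of a video stack.

    For every frame, returns the mean row index at which intensity first
    exceeds `threshold`, scanning down each column.
    """
    positions = []
    for frame in np.asarray(frames, dtype=float):
        rows = (frame > threshold).argmax(axis=0)  # first bright row per column
        positions.append(rows.mean())
    return np.array(positions)

def interface_velocity(positions, dt, scale=1.0):
    """Frame-to-frame interface velocity; `scale` converts pixels to microns."""
    return np.gradient(positions, dt) * scale
```

In practice the pixel-to-micron scale and frame interval come from the microscope calibration and the video frame rate.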

  17. A Video-Tracking Analysis-Based Behavioral Assay for Larvae of Anopheles pseudopunctipennis and Aedes aegypti (Diptera: Culicidae).

    PubMed

    Gonzalez, Paula V; Alvarez Costa, Agustín; Masuh, Héctor M

    2017-05-01

    Aedes aegypti (L.) is the primary vector of dengue, yellow fever, Zika, and chikungunya viruses, whereas Anopheles pseudopunctipennis (Theobald) is the principal vector for malaria in Latin America. The larval stage of these mosquitoes occurs in very different development habitats, and the study of their respective behaviors could give us valuable information to improve larval control. The aim of this study was to set up a bioassay to study basic larval behaviors using a video-tracking software. Larvae of An. pseudopunctipennis came from two localities in Salta Province, Argentina, while Ae. aegypti larvae were of the Rockefeller laboratory strain. Behaviors of individual fourth-instar larvae were documented in an experimental petri dish arena using EthoVision XT10.1 video-tracking software. The overall level of movement of larval An. pseudopunctipennis was lower than that for Ae. aegypti, and, while moving, larval An. pseudopunctipennis spent significantly more time swimming near the wall of the arena (thigmotaxis). This is the first study that analyzes the behavior of An. pseudopunctipennis larvae. The experimental system described here may be useful for future studies on the effect of physiological, toxicological, and chemosensory stimuli on larval behaviors. © The Authors 2017. Published by Oxford University Press on behalf of Entomological Society of America. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
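
The thigmotaxis measure described above reduces to a simple statistic over the tracked trajectory: the fraction of positions whose radial distance from the arena center falls within an outer band near the wall. A minimal sketch (hypothetical function name; EthoVision computes zone occupancy internally):

```python
import numpy as np

def thigmotaxis_fraction(xy, center, radius, wall_band=0.2):
    """Fraction of tracked positions within the outer wall zone.

    xy        : (n, 2) larval positions from the tracking software
    center    : (cx, cy) of the circular arena
    radius    : arena radius (same units as xy)
    wall_band : outer fraction of the radius counted as "near the wall"
    """
    r = np.linalg.norm(np.asarray(xy, dtype=float) - np.asarray(center), axis=1)
    return float(np.mean(r >= (1.0 - wall_band) * radius))
```

Comparing this fraction between species (or treatments) gives the kind of wall-following statistic the study reports.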

  18. The Use of Video Cases in a Multimedia Learning Environment for Facilitating High School Students' Inquiry into a Problem from Varying Perspectives

    NASA Astrophysics Data System (ADS)

    Zydney, Janet Mannheimer; Grincewicz, Amy

    2011-12-01

    This study investigated the connection between the use of video cases within a multimedia learning environment and students' inquiry into a socio-scientific problem. The software program was designed based on principles from the Cognitive Flexibility Theory (CFT) and incorporated video cases of experts with differing perspectives. Seventy-nine 10th-grade students in an urban high school participated in this study. After watching the expert videos, students generated investigative questions and reflected on how their ideas changed over time. This study found a significant correlation between the time students spent watching the expert videos and their ability to consider the problem's perspectives as well as their ability to integrate these perspectives within their questions. Moreover, problem-solving ability and time watching the videos were detected as possible influential predictors of students' consideration of the problem's perspectives within their questions. Although students watched all video cases in equivalent ways, one of the video cases, which incorporated multiple perspectives as opposed to just presenting one perspective, appeared most influential in helping students integrate the various perspectives into their own thinking. A qualitative analysis of students' reflections indicated that many students appreciated the complexity, authenticity, and ethical dimensions of the problem. It also revealed that while the majority of students thought critically about the problem, some students still had naïve or simplistic ways of thinking. This study provided some preliminary evidence that offering students the opportunity to watch videos of different perspectives may influence them to think in alternative ways about a complex problem.

  19. The Coming of Digital Desktop Media.

    ERIC Educational Resources Information Center

    Galbreath, Jeremy

    1992-01-01

    Discusses the movement toward digital-based platforms including full-motion video for multimedia products. Hardware- and software-based compression techniques for digital data storage are considered, and a chart summarizes features of Digital Video Interactive, Moving Pictures Experts Group, P x 64, Joint Photographic Experts Group, Apple…

  20. A real-time inverse quantised transform for multi-standard with dynamic resolution support

    NASA Astrophysics Data System (ADS)

    Sun, Chi-Chia; Lin, Chun-Ying; Zhang, Ce

    2016-06-01

In this paper, a real-time configurable intellectual property (IP) core is presented for the image/video decoding process, compatible with both the MPEG-4 Visual and H.264/AVC standards. The core performs the inverse quantised discrete cosine transform and the inverse quantised integer transform using only shift and add operations. The number of COordinate Rotation DIgital Computer (CORDIC) iterations and compensation steps is adjustable, trading video compression quality against data throughput. The implementations are embedded in the publicly available XviD codec 1.2.2 for MPEG-4 Visual and in the H.264/AVC reference software JM 16.1, and the experimental results show that the balance between computational complexity and video compression quality is retained. Finally, FPGA synthesis results show that the proposed IP core offers low hardware cost and provides real-time performance for Full HD and 4K-2K video decoding.
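
The "only shift and add operations" property can be made concrete with the 4x4 inverse core transform of H.264/AVC, whose one-dimensional butterfly needs nothing beyond additions and one-bit arithmetic shifts (a sketch of that butterfly only; the quantisation, scaling and CORDIC stages of the paper's IP core are omitted):

```python
def inverse_core_1d(x):
    """One-dimensional H.264/AVC 4x4 inverse core transform butterfly.

    Uses only additions, subtractions and arithmetic one-bit right
    shifts, as specified in the standard.
    """
    x0, x1, x2, x3 = x
    p0 = x0 + x2            # even part
    p1 = x0 - x2
    p2 = (x1 >> 1) - x3     # odd part: the shifts stand in for *0.5
    p3 = x1 + (x3 >> 1)
    return [p0 + p3, p1 + p2, p1 - p2, p0 - p3]
```

Applying this butterfly first to the rows and then to the columns of a 4x4 coefficient block (followed by the standard's rounding shift) completes the 2-D inverse transform; a DC-only input, for instance, yields a flat block.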

  1. Design and Implementation of a Software for Teaching Health Related Topics to Deaf Students: the First Experience in Iran

    PubMed Central

    Ahmadi, Maryam; Abbasi, Masoomeh; Bahaadinbeigy, Kambiz

    2015-01-01

Introduction: Deaf people are often unable to communicate with other community members because of their hearing impairment, and providing health care for the deaf is made more complex by these communication problems. Multimedia tools can present concepts in multiple tangible forms (video, subtitles, and sign language) for the deaf and hard of hearing. In this study, the priority health needs of deaf students in primary schools were identified and health-education software was created. Method: Priority health needs and software requirements were identified through interviews with teachers at primary schools for the deaf in Tehran. After the training videos were recorded and edited, the software was created in stages. Results: The identified health-care needs included dental, ear, nail, and hair care, care of hearing aids, washing the hands and face, and bathroom hygiene. The expected features of the software included the use of sign language, lip reading, pictures, animations, and simple, short subtitles. Discussion: Given the results of the interviews and the interest of educators and students in using educational software addressing the health problems of the deaf, this software can help teachers and students’ families educate deaf students effectively and promote their health. PMID:26005271

  2. Design and implementation of a software for teaching health related topics to deaf students: the first experience in Iran.

    PubMed

    Ahmadi, Maryam; Abbasi, Masoomeh; Bahaadinbeigy, Kambiz

    2015-04-01

Deaf people are often unable to communicate with other community members because of their hearing impairment, and providing health care for the deaf is made more complex by these communication problems. Multimedia tools can present concepts in multiple tangible forms (video, subtitles, and sign language) for the deaf and hard of hearing. In this study, the priority health needs of deaf students in primary schools were identified and health-education software was created. Priority health needs and software requirements were identified through interviews with teachers at primary schools for the deaf in Tehran. After the training videos were recorded and edited, the software was created in stages. The identified health-care needs included dental, ear, nail, and hair care, care of hearing aids, washing the hands and face, and bathroom hygiene. The expected features of the software included the use of sign language, lip reading, pictures, animations, and simple, short subtitles. Given the results of the interviews and the interest of educators and students in using educational software addressing the health problems of the deaf, this software can help teachers and students' families educate deaf students effectively and promote their health.

  3. 75 FR 25185 - Broadband Initiatives Program

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-05-07

    ..., excluding desktop or laptop computers, computer hardware and software (including anti-virus, anti-spyware, and other security software), audio or video equipment, computer network components... 10 desktop or laptop computers and individual workstations to be located within the rural library...

  4. "SmartMonitor"--an intelligent security system for the protection of individuals and small properties with the possibility of home automation.

    PubMed

    Frejlichowski, Dariusz; Gościewska, Katarzyna; Forczmański, Paweł; Hofman, Radosław

    2014-06-05

"SmartMonitor" is an intelligent security system based on image analysis that combines the advantages of alarm, video surveillance and home automation systems. The system is a complete solution that automatically reacts to every learned situation in a pre-specified way and has various applications, e.g., home and surrounding protection against unauthorized intrusion, crime detection or supervision over ill persons. The software is based on well-known and proven methods and algorithms for visual content analysis (VCA) that were appropriately modified and adapted to fit specific needs and create a video processing model which consists of foreground region detection and localization, candidate object extraction, object classification and tracking. In this paper, the "SmartMonitor" system is presented along with its architecture, employed methods and algorithms, and object analysis approach. Some experimental results on system operation are also provided. In the paper, focus is put on one of the aforementioned functionalities of the system, namely supervision over ill persons.

  5. Error analysis and algorithm implementation for an improved optical-electric tracking device based on MEMS

    NASA Astrophysics Data System (ADS)

    Sun, Hong; Wu, Qian-zhong

    2013-09-01

    To improve the precision of an optical-electric tracking device, an improved MEMS-based design is proposed that addresses the tracking error and random drift of the gyroscope sensor. Following the principles of time series analysis of random sequences, an AR model of the gyro random error is established and the gyro output signals are repeatedly filtered with a Kalman filter. An ARM microcontroller drives the servo motor under a fuzzy PID full closed-loop control algorithm, with lead-compensation and feed-forward links added to reduce the response lag to angle inputs: feed-forward makes the output follow the input closely, while the lead-compensation link shortens the response to input signals and thereby reduces errors. A wireless video monitor module and remote monitoring software (Visual Basic 6.0) track the servo motor state in real time; the module gathers video signals and sends them to the host computer, which displays the motor's running state in a Visual Basic 6.0 window. The main error sources are also analyzed in detail: quantitative analysis of the errors contributed by bandwidth and the gyro sensor makes the share of each error in the total more intuitive and consequently helps decrease the system error. Simulation and experimental results show that the system has good tracking characteristics and is valuable for engineering applications.
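The paper gives neither code nor parameters for its AR-model Kalman filtering of the gyro error. As a hedged sketch of the technique named above, the drift can be modeled as an AR(1) process and smoothed with a scalar Kalman filter; the coefficient and noise variances below are invented for illustration.

```python
import random

# Assumed model (not the paper's values): gyro drift x_k = a*x_{k-1} + w_k,
# noisy reading z_k = x_k + v_k. A scalar Kalman filter estimates x_k.

def kalman_ar1(measurements, a=0.95, q=0.01, r=0.5):
    """Filter noisy AR(1) observations; return the state estimates."""
    x, p = 0.0, 1.0               # initial state estimate and covariance
    estimates = []
    for z in measurements:
        x, p = a * x, a * a * p + q   # predict with the AR(1) model
        k = p / (p + r)               # Kalman gain
        x = x + k * (z - x)           # update with measurement z
        p = (1 - k) * p
        estimates.append(x)
    return estimates

# Simulate a slow drift buried in measurement noise.
random.seed(0)
truths, zs, x = [], [], 0.0
for _ in range(200):
    x = 0.95 * x + random.gauss(0, 0.1)
    truths.append(x)
    zs.append(x + random.gauss(0, 0.7))

est = kalman_ar1(zs)
mse_raw = sum((z - t) ** 2 for z, t in zip(zs, truths)) / len(truths)
mse_kf = sum((e - t) ** 2 for e, t in zip(est, truths)) / len(truths)
print(mse_kf < mse_raw)  # the filtered estimate tracks the drift more closely
```

The paper's "multiple filtering" presumably iterates this update over the gyro output stream in the same way.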

  6. New space sensor and mesoscale data analysis

    NASA Technical Reports Server (NTRS)

    Hickey, John S.

    1987-01-01

    The developed Earth Science and Application Division (ESAD) system/software provides the research scientist with the following capabilities: an extensive database management capability to convert various experiment data types into a standard format; an interactive analysis and display package (AVE80); an interactive imaging/color graphics capability utilizing Apple III and IBM PC workstations integrated into the ESAD computer system; and a local and remote smart-terminal capability providing color video, graphics, and LaserJet output. Recommendations for updating and enhancing the performance of the ESAD computer system are listed.

  7. A model-based approach for automated in vitro cell tracking and chemotaxis analyses.

    PubMed

    Debeir, Olivier; Camby, Isabelle; Kiss, Robert; Van Ham, Philippe; Decaestecker, Christine

    2004-07-01

    Chemotaxis may be studied in two main ways: 1) counting cells passing through an insert (e.g., using Boyden chambers), and 2) directly observing cell cultures (e.g., using Dunn chambers), both in response to stationary concentration gradients. This article promotes the use of Dunn chambers and in vitro cell tracking, achieved by video microscopy coupled with automatic image analysis software, to extract quantitative and qualitative measurements characterizing the response of cells to a diffusible chemical agent. Previously, we set up a videomicroscopy system coupled with image analysis software able to compute cell trajectories from in vitro cell cultures. In the present study, we introduce new software that extends the application field of this system to chemotaxis studies. The software is based on an adapted version of the active contour methodology, enabling each cell to be tracked efficiently for hours and resulting in detailed descriptions of individual cell trajectories. The major advantages of this method are its improved robustness with respect to variability in cell morphology between different cell lines and to dynamic changes in cell shape during migration. Moreover, the software has very few parameters, none of which requires overly sensitive tuning. Finally, its running time is very short, permitting higher acquisition frequencies and, consequently, better descriptions of complex cell trajectories, i.e., trajectories including cell division and cell crossing. We validated the software on several artificial and real cell culture experiments in Dunn chambers, including comparisons with manual (human-controlled) analyses. We have thus developed new software and data analysis tools for automated cell tracking that enable cell chemotaxis to be analyzed efficiently. Copyright 2004 Wiley-Liss, Inc.
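The authors' active-contour tracker is not reproduced in this record. As an illustration of the trajectory-linking step only, the simplest alternative (an assumption, not the paper's method) joins each cell centroid to its nearest neighbour in the next frame:

```python
import math

# Naive nearest-neighbour centroid linker (illustrative stand-in for the
# paper's active-contour tracking): build one trajectory per cell seen in
# the first frame by greedily linking to the closest centroid in each
# subsequent frame, rejecting implausibly large jumps.

def link_trajectories(frames, max_jump=10.0):
    """frames: list of lists of (x, y) centroids, one list per time point.
    Returns one trajectory (list of points) per cell detected in frame 0."""
    trajectories = [[p] for p in frames[0]]
    for points in frames[1:]:
        for traj in trajectories:
            last = traj[-1]
            best = min(points, key=lambda p: math.dist(last, p))
            if math.dist(last, best) <= max_jump:  # skip implausible jumps
                traj.append(best)
    return trajectories

# Two cells drifting rightwards; the linker keeps their identities apart.
frames = [[(0, 0), (0, 20)],
          [(2, 1), (2, 21)],
          [(4, 1), (4, 22)]]
for traj in link_trajectories(frames):
    print(traj)
```

This greedy scheme fails exactly where the paper's method shines, namely cell crossing and division, which is why a shape-aware tracker is needed there.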

  8. QuickView video preview software of colon capsule endoscopy: reliability in presenting colorectal polyps as compared to normal mode reading.

    PubMed

    Farnbacher, Michael J; Krause, Horst H; Hagel, Alexander F; Raithel, Martin; Neurath, Markus F; Schneider, Thomas

    2014-03-01

    OBJECTIVE. Colon capsule endoscopy (CCE) has proved highly sensitive in the detection of colorectal polyps (CP). Its major limitation is time-consuming video reading. The aim of this prospective, double-center study was to assess the theoretical time-saving potential of "QuickView" (QV) and its possible impact on reliability in the presentation of CP as compared to normal mode (NM). METHODS. During NM reading of 65 CCE videos (mean patient age 56 years), all frames showing CPs were collected and compared to the number of frames presented by QV at increasing QV settings (10, 20, ... 80%). The reliability of QV in presenting polyps <6 mm and ≥6 mm (significant polyps) and in identifying patients for subsequent therapeutic colonoscopy, the capsule egestion rate, the cleansing level, and the estimated time-saving potential were assessed. RESULTS. At a 30% QV setting, the QV video presented 89% of the significant polyps and 86% of all polyps with ≥1 frame (per-polyp analysis) identified in NM before. At a 10% QV setting, 98% of the 52 patients with significant polyps could be identified (per-patient analysis) by QV video analysis. The capsule excretion rate was 74% and colon cleanliness was adequate in 85%. QV's presentation rate correlates with the QV setting, the polyp size, and the number of frames per finding. CONCLUSIONS. Depending on its setting, the reliability of QV in presenting CP as compared to NM reading is notable. However, if no significant polyp is presented by QV, NM reading must be performed afterwards. The reduction of frames to be analyzed in QV might speed up identification of candidates for therapeutic colonoscopy.
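QuickView's actual frame-selection algorithm is proprietary and not described in the abstract. A deliberately naive stand-in, keeping an evenly spaced fraction of frames, is enough to illustrate the per-polyp criterion used above (a polyp counts as presented if at least one of its frames survives):

```python
# Hedged sketch: NOT QuickView's algorithm, just an evenly spaced
# subsampler used to illustrate the ">= 1 frame per polyp" criterion.

def quickview_frames(n_frames, setting):
    """Indices kept at a QuickView-like `setting` (fraction of frames)."""
    keep = max(1, round(n_frames * setting))
    step = n_frames / keep
    return {int(i * step) for i in range(keep)}

def presented(polyp_frames, kept):
    """A polyp counts as presented if >= 1 of its frames survives."""
    return any(f in kept for f in polyp_frames)

kept = quickview_frames(1000, 0.30)       # 30% setting on a 1000-frame video
print(len(kept))                          # 300 frames retained
print(presented({100, 101}, kept))        # polyp overlapping a kept frame
print(presented({417, 418, 419}, kept))   # short polyp falling between kept frames
```

Even this toy version reproduces the abstract's trade-off: a polyp visible in only a few consecutive frames can fall entirely between the retained frames, which is why lower QV settings miss more findings.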

  9. VOIP for Telerehabilitation: A Risk Analysis for Privacy, Security and HIPAA Compliance: Part II

    PubMed Central

    Watzlaf, Valerie J.M.; Moeini, Sohrab; Matusow, Laura; Firouzan, Patti

    2011-01-01

    In a previous publication the authors developed a privacy and security checklist to evaluate Voice over Internet Protocol (VoIP) videoconferencing software used between patients and therapists to provide telerehabilitation (TR) therapy. In this paper, the privacy and security checklist that was previously developed is used to perform a risk analysis of the top ten VoIP videoconferencing software to determine if their policies provide answers to the privacy and security checklist. Sixty percent of the companies claimed they do not listen into video-therapy calls unless maintenance is needed. Only 50% of the companies assessed use some form of encryption, and some did not specify what type of encryption was used. Seventy percent of the companies assessed did not specify any form of auditing on their servers. Statistically significant differences across company websites were found for sharing information outside of the country (p=0.010), encryption (p=0.006), and security evaluation (p=0.005). Healthcare providers considering use of VoIP software for TR services may consider using this privacy and security checklist before deciding to incorporate a VoIP software system for TR. Other videoconferencing software that is specific for TR with strong encryption, good access controls, and hardware that meets privacy and security standards should be considered for use with TR. PMID:25945177

  10. VOIP for Telerehabilitation: A Risk Analysis for Privacy, Security and HIPAA Compliance: Part II.

    PubMed

    Watzlaf, Valerie J M; Moeini, Sohrab; Matusow, Laura; Firouzan, Patti

    2011-01-01

    In a previous publication the authors developed a privacy and security checklist to evaluate Voice over Internet Protocol (VoIP) videoconferencing software used between patients and therapists to provide telerehabilitation (TR) therapy. In this paper, the privacy and security checklist that was previously developed is used to perform a risk analysis of the top ten VoIP videoconferencing software to determine if their policies provide answers to the privacy and security checklist. Sixty percent of the companies claimed they do not listen into video-therapy calls unless maintenance is needed. Only 50% of the companies assessed use some form of encryption, and some did not specify what type of encryption was used. Seventy percent of the companies assessed did not specify any form of auditing on their servers. Statistically significant differences across company websites were found for sharing information outside of the country (p=0.010), encryption (p=0.006), and security evaluation (p=0.005). Healthcare providers considering use of VoIP software for TR services may consider using this privacy and security checklist before deciding to incorporate a VoIP software system for TR. Other videoconferencing software that is specific for TR with strong encryption, good access controls, and hardware that meets privacy and security standards should be considered for use with TR.

  11. Keeping a Competitive U.S. Military Aircraft Industry Aloft: Findings from an Analysis of the Industrial Base

    DTIC Science & Technology

    2011-01-01

    industries as credit cards, health maintenance organizations (HMOs), travel reservations, video games, container shipping, music, and (remarkably) cement... iTunes. It is a software platform that is replacing the (nonplatform) strategy of selling CDs in retail stores. In other words, a platform-mediated...boundaries. Brokerage, mortgage, checking, savings, and credit cards are integrated around a major platform. This banking platform is even pen

  12. A Comparison of Face to Face and Video-Based Self Care Education on Quality of Life of Hemodialysis Patients

    PubMed Central

    Hemmati Maslakpak, Masumeh; Shams, Shadi

    2015-01-01

    Background End stage renal disease negatively affects patients' quality of life. There are different educational methods to help these patients. This study was performed to compare the effectiveness of self-care education delivered by two methods, face to face and by video, on the quality of life of patients under hemodialysis treatment in education-medical centers in Urmia. Methods In this quasi-experimental study, 120 hemodialysis patients were selected randomly and then randomly allocated to three groups: control, face to face education and video education. For the face to face group, education was given individually in two sessions of 35 to 45 minutes. For the video education group, a CD was shown. The Kidney Disease Quality of Life-Short Form (KDQOL-SF) questionnaire was filled out before and two months after the intervention. Data analysis was performed in SPSS software using one-way ANOVA. Results The ANOVA test showed a statistically significant difference in quality of life scores among the three groups after the intervention (P=0.024). After the intervention, Tukey's post-hoc test showed no statistically significant difference between the video and face to face education groups regarding quality of life (P>0.05). Conclusion Implementation of face to face and video education methods improves the quality of life of hemodialysis patients, so it is suggested that video education be used along with face to face education. PMID:26171412
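The study ran its one-way ANOVA in SPSS; the arithmetic behind the F statistic can be sketched in a few lines. The group scores below are invented illustrative data, not the study's measurements.

```python
# One-way ANOVA F statistic from first principles (illustrative data only,
# not the KDQOL-SF scores from the study).

def one_way_anova_f(groups):
    """Return the F statistic for a list of groups of observations."""
    all_vals = [v for g in groups for v in g]
    n, k = len(all_vals), len(groups)
    grand = sum(all_vals) / n
    # Between-group and within-group sums of squares.
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum((v - sum(g) / len(g)) ** 2 for g in groups for v in g)
    # F = mean square between / mean square within.
    return (ss_between / (k - 1)) / (ss_within / (n - k))

control = [50, 52, 48, 51]        # hypothetical quality-of-life scores
face_to_face = [60, 62, 58, 61]
video = [59, 63, 57, 60]
f = one_way_anova_f([control, face_to_face, video])
print(round(f, 2))  # 31.53
```

A large F here reflects group means that differ far more than the within-group scatter, which is then followed up (as in the study) by a post-hoc test such as Tukey's.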

  13. VOIP for Telerehabilitation: A Risk Analysis for Privacy, Security, and HIPAA Compliance

    PubMed Central

    Watzlaf, Valerie J.M.; Moeini, Sohrab; Firouzan, Patti

    2010-01-01

    Voice over the Internet Protocol (VoIP) systems such as Adobe ConnectNow, Skype, ooVoo, etc. may include the use of software applications for telerehabilitation (TR) therapy that can provide voice and video teleconferencing between patients and therapists. Privacy and security applications as well as HIPAA compliance within these protocols have been questioned by information technologists, providers of care and other health care entities. This paper develops a privacy and security checklist that can be used within a VoIP system to determine if it meets privacy and security procedures and whether it is HIPAA compliant. Based on this analysis, specific HIPAA criteria that therapists and health care facilities should follow are outlined and discussed, and therapists must weigh the risks and benefits when deciding to use VoIP software for TR. PMID:25945172

  14. VOIP for Telerehabilitation: A Risk Analysis for Privacy, Security, and HIPAA Compliance.

    PubMed

    Watzlaf, Valerie J M; Moeini, Sohrab; Firouzan, Patti

    2010-01-01

    Voice over the Internet Protocol (VoIP) systems such as Adobe ConnectNow, Skype, ooVoo, etc. may include the use of software applications for telerehabilitation (TR) therapy that can provide voice and video teleconferencing between patients and therapists. Privacy and security applications as well as HIPAA compliance within these protocols have been questioned by information technologists, providers of care and other health care entities. This paper develops a privacy and security checklist that can be used within a VoIP system to determine if it meets privacy and security procedures and whether it is HIPAA compliant. Based on this analysis, specific HIPAA criteria that therapists and health care facilities should follow are outlined and discussed, and therapists must weigh the risks and benefits when deciding to use VoIP software for TR.

  15. A formative evaluation of CU-SeeMe

    NASA Astrophysics Data System (ADS)

    Bibeau, Michael

    1995-02-01

    CU-SeeMe is a video conferencing software package that was designed and programmed at Cornell University. The program works with the TCP/IP network protocol and allows two or more parties to conduct a real-time video conference with full audio support. In this paper we evaluate CU-SeeMe through the process of Formative Evaluation. We first perform a Critical Review of the software using a subset of the Smith and Mosier Guidelines for Human-Computer Interaction. Next, we empirically review the software interface through a series of benchmark tests derived directly from a set of scenarios. The scenarios attempt to model real-world situations that might be encountered by an individual in the target user class; designing benchmark tasks becomes a natural and straightforward process when they are derived from the scenario set. Empirical measures are taken for each task, including completion times and error counts. These measures are accompanied by critical incident analysis [2, 7, 13], which serves to identify problems with the interface and the cognitive roots of those problems. The critical incidents reported by participants are accompanied by explanations of what caused the problem and why; this helps in the process of formulating solutions for observed usability problems. All the testing results are combined in the Appendix in an illustrated partial redesign of the CU-SeeMe interface.

  16. Validity of the Acti4 method for detection of physical activity types in free-living settings: comparison with video analysis.

    PubMed

    Stemland, Ingunn; Ingebrigtsen, Jørgen; Christiansen, Caroline S; Jensen, Bente R; Hanisch, Christiana; Skotte, Jørgen; Holtermann, Andreas

    2015-01-01

    This study examined the ability of the Acti4 software for identifying physical activity types from accelerometers during free-living with different levels of movement complexity compared with video observations. Nineteen aircraft cabin cleaners with ActiGraph GT3X+ accelerometer at the thigh and hip performed one semi-standardised and two non-standardised sessions (outside and inside aircraft) with different levels of movement complexity during working hours. The sensitivity for identifying different activity types was 75.4-99.4% for the semi-standardised session, 54.6-98.5% outside the aircraft and 49.9-90.2% inside the aircraft. The specificity was above 90% for all activities, except 'moving' inside the aircraft. These findings indicate that Acti4 provides good estimates of time spent in different activity types during semi-standardised conditions, and for sitting, standing and walking during non-standardised conditions with normal level of movement complexity. The Acti4 software may be a useful tool for researchers and practitioners in the field of ergonomics, occupational and public health. Practitioner Summary: Being inexpensive, small, water-resistant and without wires, the ActiGraph GT3X+ by applying the Acti4 software may be a useful tool for long-term field measurements of physical activity types for researchers and practitioners in the field of ergonomics, occupational and public health.
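The validation arithmetic behind the sensitivity and specificity figures reported above is straightforward to sketch. The label sequences below are invented toy data, not the study's accelerometer or video annotations.

```python
# Per-activity sensitivity and specificity of software-assigned labels
# against reference (video) labels. Toy sequences for illustration only.

def sens_spec(predicted, observed, activity):
    """Sensitivity and specificity for one activity class."""
    tp = sum(p == activity == o for p, o in zip(predicted, observed))
    fn = sum(o == activity != p for p, o in zip(predicted, observed))
    tn = sum(p != activity and o != activity
             for p, o in zip(predicted, observed))
    fp = sum(p == activity != o for p, o in zip(predicted, observed))
    return tp / (tp + fn), tn / (tn + fp)

observed  = ["sit", "sit", "walk", "walk", "stand", "sit"]   # video labels
predicted = ["sit", "walk", "walk", "walk", "stand", "sit"]  # software labels
sens, spec = sens_spec(predicted, observed, "sit")
print(round(sens, 2), spec)  # 0.67 1.0
```

In the study this computation is done per activity type and per session, which is how sensitivity can drop inside the aircraft while specificity stays above 90%.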

  17. Noninvasive Test Detects Cardiovascular Disease

    NASA Technical Reports Server (NTRS)

    2007-01-01

    At NASA's Jet Propulsion Laboratory (JPL), NASA-developed Video Imaging Communication and Retrieval (VICAR) software laid the groundwork for analyzing images of all kinds. A project seeking to use imaging technology for health care diagnosis began when the imaging team considered using the VICAR software to analyze X-ray images of soft tissue. With marginal success using X-rays, the team applied the same methodology to ultrasound imagery, which was already digitally formatted. The new approach proved successful for assessing amounts of plaque build-up and arterial wall thickness, direct predictors of heart disease, and the result was a noninvasive diagnostic system with the ability to accurately predict heart health. Medical Technologies International Inc. (MTI) further developed and then submitted the technology to a vigorous review process at the FDA, which cleared the software for public use. The software, patented under the name Prowin, is being used in MTI's patented ArterioVision, a carotid intima-media thickness (CIMT) test that uses ultrasound image-capturing and analysis software to noninvasively identify the risk for the major cause of heart attack and strokes: atherosclerosis. ArterioVision provides a direct measurement of atherosclerosis by safely and painlessly measuring the thickness of the first two layers of the carotid artery wall using an ultrasound procedure and advanced image-analysis software. The technology is now in use in all 50 states and in many countries throughout the world.

  18. Comparison of ASGARD and UFOCapture

    NASA Technical Reports Server (NTRS)

    Blaauw, Rhiannon C.; Cruse, Katherine S.

    2011-01-01

    The Meteoroid Environment Office is undertaking a comparison between UFOCapture/Analyzer and ASGARD (All Sky and Guided Automatic Realtime Detection). To accomplish this, the video output from a Watec video camera with a 17 mm Schneider lens (25 degree field of view) was split and fed into the two meteor detection software packages. The purpose of this study is to compare the sensitivity, false alarm rates, and trajectory information of the two systems, among other quantities. The important components of each software package are highlighted, with comments on the detection/rejection algorithms and the amount of user labor required for each system.

  19. The Effect of Interactive Simulations on Exercise Adherence with Overweight and Obese Adults

    DTIC Science & Technology

    2011-03-01

    bicycle: one while watching television and the other one while playing video games. Related variables tested were exercise motivation and self-efficacy in...overweight and obese adults. Unique software was written to integrate the exercise equipment/video game components, and to capture and transfer...Start Letter was received on Dec 20, 2010 and recruitment of participants commenced in Feb 2011. Prototype exercise bicycle with video gaming console

  20. The Effects of Reviews in Video Tutorials

    ERIC Educational Resources Information Center

    van der Meij, H.; van der Meij, J.

    2016-01-01

    This study investigates how well a video tutorial for software training that is based on Demonstration-Based Teaching supports user motivation and performance. In addition, it is studied whether reviews significantly contribute to these measures. The Control condition employs a tutorial with instructional features added to a dynamic task…

  1. Interactive Video: A Cross Curriculum Computer Project.

    ERIC Educational Resources Information Center

    Grimm, Floyd M., III; And Others

    Responding to the rapid development and often prohibitive costs of new classroom instruction technology, a group of interested faculty at Harford Community College (HCC), in Maryland, formed three Interactive Video (IV) Teams to explore the possibilities of using existing computer hardware and software at the college for interactive video…

  2. Migrant Families: Moving Up with Technology.

    ERIC Educational Resources Information Center

    Winograd, Kathryn

    2001-01-01

    Under the direction of the Pennsylvania Department of Migrant Education, an educational software company has adapted educational curricula to a video game format for use in video game consoles that hook into television sets. Migrant children using these at home have made significant gains in math, reading, English fluency, and critical thinking…

  3. Strategic Design of an Interactive Video Learning Lab (IVL).

    ERIC Educational Resources Information Center

    Switzer, Ralph V., Jr.; Switzer, Jamie S.

    1993-01-01

    Describes a study that researched elements necessary for the design of an interactive video learning (IVL) lab for business courses. Highlights include a review of pertinent literature; guidelines for the use of an IVL lab; IVL systems integration; system specifications; hardware costs; and system software. (five references) (LRW)

  4. On-line content creation for photo products: understanding what the user wants

    NASA Astrophysics Data System (ADS)

    Fageth, Reiner

    2015-03-01

    This paper describes how videos can be incorporated into printed photo books and greeting cards. We show that, surprisingly or not, pictures taken from videos are used much like classical images to tell compelling stories. Videos can be taken with nearly every camera: digital point-and-shoot cameras, DSLRs, smartphones and, increasingly, so-called action cameras mounted on sports devices. The software implementation of videos, generating QR codes and extracting relevant pictures from the video stream, was the content of last year's paper. This year we present first data about what content is displayed and how users represent their videos in printed products, e.g. CEWE PHOTOBOOKS and greeting cards, and we report the share of the different video formats used.

  5. Design and implementation of H.264 based embedded video coding technology

    NASA Astrophysics Data System (ADS)

    Mao, Jian; Liu, Jinming; Zhang, Jiemin

    2016-03-01

    In this paper, an embedded system for remote online video monitoring was designed and developed to capture and record real-time conditions in an elevator. To improve the efficiency of video acquisition and processing, the Samsung S5PV210 chip, which integrates a graphics processing unit, was selected as the core processor, and the video was encoded in H.264 format for efficient storage and transmission. Based on the S5PV210 chip, hardware video coding, which is more efficient than software coding, was investigated. Running tests proved that hardware video coding can markedly reduce system cost and produce smoother video display. It can be widely applied in security supervision [1].

  6. Video camera system for locating bullet holes in targets at a ballistics tunnel

    NASA Technical Reports Server (NTRS)

    Burner, A. W.; Rummler, D. R.; Goad, W. K.

    1990-01-01

    A system consisting of a single charge coupled device (CCD) video camera, a computer-controlled video digitizer, and software to automate the measurement was developed to measure the location of bullet holes in targets at the International Shooters Development Fund (ISDF)/NASA Ballistics Tunnel. The camera/digitizer system is a crucial component of a highly instrumented indoor 50 meter rifle range being constructed to support the development of wind-resistant, ultra-match ammunition. The system was designed to take data rapidly (10 sec between shots) and automatically, with little operator intervention. The system description, measurement concept, and procedure are presented along with laboratory tests of repeatability and bias error. The long-term (1 hour) repeatability of the system was found to be 4 microns (one standard deviation) at the target, and the bias error was found to be less than 50 microns. An analysis of potential errors and a technique for calibration of the system are presented.

  7. Loop-the-Loop: An Easy Experiment, A Challenging Explanation

    NASA Astrophysics Data System (ADS)

    Asavapibhop, B.; Suwonjandee, N.

    2010-07-01

    A loop-the-loop built by the Institute for the Promotion of Teaching Science and Technology (IPST) was used in a training program for Thai high school teachers to demonstrate circular motion and investigate the concept of conservation of mechanical energy. We took videos with a high-speed camera to record the motion of a spherical steel ball moving down an aluminum inclined track from different release positions. The ball then moved through the circular loop and underwent projectile motion upon leaving the track. We asked the teachers to predict the landing position of the ball if the height of the whole loop-the-loop system were changed, and we analyzed the videos using Tracker, a video analysis software package. It turned out that most teachers did not account for the friction between the ball and the track and could not obtain the correct relationship; hence their predictions were inconsistent with the actual landing positions of the ball.
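The frictionless textbook analysis the teachers were expected to apply has simple closed-form numbers, which is also why their predictions overshot on the real apparatus, where friction removes energy. For a point mass sliding without friction, energy conservation gives v_top² = 2g(h − 2R), and the loop condition v_top² ≥ gR yields h_min = 2.5R; a solid sphere rolling without slipping (I = 2/5 m r²) needs h_min = 2.7R. The loop radius below is an illustrative value, not IPST's.

```python
# Minimum release heights for completing a vertical loop of radius R
# (standard textbook results; ball radius neglected relative to R).

def h_min_sliding(R):
    """Frictionless point mass: h = 2R + 0.5*v_top**2/g with v_top**2 = g*R."""
    return 2.5 * R

def h_min_rolling_sphere(R):
    """Rolling solid sphere: the extra 0.2*R is stored as rotational energy."""
    return 2.7 * R

R = 0.15  # loop radius in metres (illustrative, not the IPST apparatus)
print(h_min_sliding(R), round(h_min_rolling_sphere(R), 3))
```

With track friction on top of rolling, the true required height is higher still, which matches the teachers' observation that the ideal-case predictions undershot the energy actually needed.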

  8. Teaching optical phenomena with Tracker

    NASA Astrophysics Data System (ADS)

    Rodrigues, M.; Simeão Carvalho, P.

    2014-11-01

    Since the invention and dissemination of domestic laser pointers, observing optical phenomena is a relatively easy task. Any student can buy a laser and experience at home, in a qualitative way, the reflection, refraction and even diffraction phenomena of light. However, quantitative experiments need instruments of high precision that have a relatively complex setup. Fortunately, nowadays it is possible to analyse optical phenomena in a simple and quantitative way using the freeware video analysis software ‘Tracker’. In this paper, we show the advantages of video-based experimental activities for teaching concepts in optics. We intend to show: (a) how easy the study of such phenomena can be, even at home, because only simple materials are needed, and Tracker provides the necessary measuring instruments; and (b) how we can use Tracker to improve students’ understanding of some optical concepts. We give examples using video modelling to study the laws of reflection, Snell’s laws, focal distances in lenses and mirrors, and diffraction phenomena, which we hope will motivate teachers to implement it in their own classes and schools.
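The Snell's-law analysis the article proposes doing in Tracker reduces to simple arithmetic. The refractive indices below are standard textbook values (n ≈ 1.33 for water), not quantities measured in the paper.

```python
import math

# Snell's law n1*sin(t1) = n2*sin(t2): refraction angle for a ray crossing
# an interface, with total internal reflection handled explicitly.

def refraction_angle(theta_incidence_deg, n1=1.0, n2=1.33):
    """Refraction angle in degrees, or None on total internal reflection."""
    s = n1 * math.sin(math.radians(theta_incidence_deg)) / n2
    if abs(s) > 1:
        return None  # beyond the critical angle: no refracted ray
    return math.degrees(math.asin(s))

print(round(refraction_angle(30), 1))         # air -> water: 22.1
print(refraction_angle(60, n1=1.33, n2=1.0))  # water -> air, past critical: None
```

In a Tracker experiment one would measure both angles on the video frame and verify that sin(t1)/sin(t2) stays constant, recovering the index ratio.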

  9. Validation of a digital audio recording method for the objective assessment of cough in the horse.

    PubMed

    Duz, M; Whittaker, A G; Love, S; Parkin, T D H; Hughes, K J

    2010-10-01

    To validate the use of digital audio recording and analysis for quantification of coughing in horses. Part A: Nine simultaneous digital audio and video recordings were collected individually from seven stabled horses over a 1 h period using a digital audio recorder attached to the halter. Audio files were analysed using audio analysis software. Video and audio recordings were analysed for cough count and timing by two blinded operators on two occasions using a randomised study design for determination of intra-operator and inter-operator agreement. Part B: Seventy-eight hours of audio recordings obtained from nine horses were analysed once by two blinded operators to assess inter-operator repeatability on a larger sample. Part A: There was complete agreement between audio and video analyses and inter- and intra-operator analyses. Part B: There was >97% agreement between operators on number and timing of 727 coughs recorded over 78 h. The results of this study suggest that the cough monitor methodology used has excellent sensitivity and specificity for the objective assessment of cough in horses and intra- and inter-operator variability of recorded coughs is minimal. Crown Copyright 2010. Published by Elsevier India Pvt Ltd. All rights reserved.

  10. [Video-assisted thoracoscopic surgery as an alternative to urgent thoracotomy following open chest trauma in selected cases].

    PubMed

    Samiatina, Diana; Rubikas, Romaldas

    2004-01-01

    To prove that video-assisted thoracoscopic surgery is, in selected cases, an alternative to urgent thoracotomy following open chest trauma. Retrospective analysis of case reports of patients operated on for open chest trauma during 1997-2002, comparing two methods of surgical treatment: urgent video-assisted thoracoscopy and urgent thoracotomy. Duration of drain presence in the pleural cavity, duration of postoperative treatment, pain intensity and cosmetic effect were evaluated. Data analysis was performed using SPSS statistical software; differences between groups were evaluated using the Mann-Whitney U test and considered statistically significant at p<0.05. During 1997-2002, 121 patients with open chest trauma were operated on: 33 underwent urgent video-assisted thoracoscopy and 88 were operated on through a thoracotomy incision (69 for isolated open chest trauma, 17 for thoracoabdominal injury and 2 for abdominothoracic injury). Of the patients who underwent urgent thoracotomy, 12.5% also required urgent laparotomy because of damage to the diaphragm and other organs of the peritoneal cavity. Mean duration of drain presence in the pleural cavity was 4.57 days after video-assisted thoracoscopy versus 6.88 days after urgent thoracotomy (p<0.05); mean duration of postoperative treatment was 8.21 versus 14.89 days (p<0.05); and mean consumption of non-narcotic analgesics was 1056.98 mg versus 1966.70 mg (p<0.05). Video-assisted thoracoscopy is a minimally invasive method of thoracic surgery that allows evaluation of pathological changes in the lung, pericardium, diaphragm, mediastinum, thoracic wall and pleura, including their localization and the type and severity of the injury. The number of early postoperative complications following video-assisted thoracoscopy is lower, and compared with operations through a thoracotomy incision it shortens the duration of drain presence in the pleural cavity and of postoperative treatment. Video-assisted thoracoscopy should be performed in all patients with open chest trauma who have stable hemodynamics and respiratory function; it is an informative diagnostic and treatment method that also allows selection of patients for urgent thoracotomy.
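The study computed its between-group comparisons in SPSS with the Mann-Whitney U test; the statistic itself is a simple pairwise count. The sample values below are invented for illustration, not the study's measurements.

```python
# Mann-Whitney U statistic from its pairwise definition (illustrative
# samples only; ties counted as half, no normal approximation or p-value).

def mann_whitney_u(a, b):
    """U statistic for sample a versus sample b."""
    u = 0.0
    for x in a:
        for y in b:
            if x > y:
                u += 1.0
            elif x == y:
                u += 0.5
    return u

vats_days = [4, 5, 4, 6, 5]          # hypothetical drain days after thoracoscopy
thoracotomy_days = [7, 6, 8, 7, 9]   # hypothetical drain days after thoracotomy
print(mann_whitney_u(vats_days, thoracotomy_days))  # 0.5
```

A U near zero (as here) means nearly every thoracoscopy value sits below every thoracotomy value; statistical packages then convert U to a p-value via its null distribution.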

  11. Intergraph video and images exploitation capabilities

    NASA Astrophysics Data System (ADS)

    Colla, Simone; Manesis, Charalampos

    2013-08-01

    The current paper focuses on the capture, fusion and processing of aerial imagery in order to leverage full motion video, giving analysts the ability to collect, analyze, and maximize the value of video assets. Unmanned aerial vehicles (UAVs) have provided critical real-time surveillance and operational support to military organizations, and are a key source of intelligence, particularly when integrated with other geospatial data. In the current workflow, the UAV operators first plan the flight using flight planning software. During the flight, the UAV sends a live video stream directly to the field, where it is processed by Intergraph software to generate and disseminate georeferenced images through a service-oriented architecture based on the ERDAS Apollo suite. The raw video-based data sources provide the most recent view of a situation and can augment other forms of geospatial intelligence - such as satellite imagery and aerial photos - to provide a richer, more detailed view of the area of interest. To effectively use video as a source of intelligence, however, the analyst needs to seamlessly fuse the video with these other types of intelligence, such as map features and annotations. Intergraph has developed an application that automatically generates mosaicked, georeferenced images and tags along the video route, which can then be seamlessly integrated with other forms of static data, such as aerial photos, satellite imagery, or geospatial layers and features. Consumers will finally have the ability to use a single, streamlined system to complete the entire geospatial information lifecycle: capturing geospatial data using sensor technology; processing vector, raster and terrain data into actionable information; managing, fusing, and sharing geospatial data and video together; and finally, rapidly and securely delivering integrated information products, ensuring individuals can make timely decisions.

  12. Low Cost Efficient Deliverying Video Surveillance Service to Moving Guard for Smart Home.

    PubMed

    Gualotuña, Tatiana; Macías, Elsa; Suárez, Álvaro; C, Efraín R Fonseca; Rivadeneira, Andrés

    2018-03-01

    Low-cost video surveillance systems are attractive for Smart Home applications (especially in emerging economies). Those systems use the flexibility of the Internet of Things to operate the video camera only when an intrusion is detected. We are the only ones who focus on the design of protocols based on intelligent agents to communicate video of an intrusion in real time to guards over wireless or mobile networks. The goal is to communicate the video, in real time, to guards who may be moving towards the smart home. However, this communication suffers from sporadic disruptions that hinder control and drastically reduce user satisfaction and system operability. In a novel way, we have designed a generic software architecture, based on design patterns, that can be adapted to any hardware in a simple way. The hardware deployed is of very low cost, and the software frameworks are free. In experimental tests we have shown that it is possible to deliver intrusion notifications (by e-mail and by instant messaging) and the first video frames to the moving guard in less than 20 s. In addition, we automatically recovered the video frames lost during disruptions transparently to the user, supported vertical handover processes, and saved energy on the smartphone's battery. Most importantly, the people who used the system reported high satisfaction.

  13. Low Cost Efficient Deliverying Video Surveillance Service to Moving Guard for Smart Home

    PubMed Central

    Gualotuña, Tatiana; Fonseca C., Efraín R.; Rivadeneira, Andrés

    2018-01-01

    Low-cost video surveillance systems are attractive for Smart Home applications (especially in emerging economies). Those systems use the flexibility of the Internet of Things to operate the video camera only when an intrusion is detected. We are the only ones who focus on the design of protocols based on intelligent agents to communicate video of an intrusion in real time to guards over wireless or mobile networks. The goal is to communicate the video, in real time, to guards who may be moving towards the smart home. However, this communication suffers from sporadic disruptions that hinder control and drastically reduce user satisfaction and system operability. In a novel way, we have designed a generic software architecture, based on design patterns, that can be adapted to any hardware in a simple way. The hardware deployed is of very low cost, and the software frameworks are free. In experimental tests we have shown that it is possible to deliver intrusion notifications (by e-mail and by instant messaging) and the first video frames to the moving guard in less than 20 s. In addition, we automatically recovered the video frames lost during disruptions transparently to the user, supported vertical handover processes, and saved energy on the smartphone's battery. Most importantly, the people who used the system reported high satisfaction. PMID:29494551

  14. Scalable software architecture for on-line multi-camera video processing

    NASA Astrophysics Data System (ADS)

    Camplani, Massimo; Salgado, Luis

    2011-03-01

    In this paper we present a scalable software architecture for on-line multi-camera video processing that guarantees a good trade-off between computational power, scalability and flexibility. The software system is modular; its main blocks are the Processing Units (PUs) and the Central Unit. The Central Unit works as a supervisor of the running PUs, and each PU manages the acquisition phase and the processing phase. Furthermore, an approach to easily parallelize the desired processing application is presented. As a case study, we apply the proposed software architecture to a multi-camera system in order to efficiently manage multiple 2D object detection modules in a real-time scenario. System performance has been evaluated under different load conditions, such as number of cameras and image sizes. The results show that the software architecture scales well with the number of cameras and easily accommodates different image formats while respecting the real-time constraints. Moreover, the parallelization approach can be used to speed up the processing tasks with a low level of overhead.
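    The supervisor / processing-unit pattern described above can be sketched with one worker thread per camera feeding a shared result queue. This is a generic sketch, not the paper's actual API: the class names, the string "frames", and the stub processing step are all illustrative.

```python
import queue
import threading

# One ProcessingUnit per camera: pulls frames from its own queue (acquisition
# phase) and pushes processed results to a shared queue (processing phase).
class ProcessingUnit(threading.Thread):
    def __init__(self, camera_id, frames, results):
        super().__init__()
        self.camera_id, self.frames, self.results = camera_id, frames, results

    def run(self):
        while True:
            frame = self.frames.get()
            if frame is None:  # sentinel from the supervisor: shut down
                break
            # Stub "processing": real PUs would run e.g. 2D object detection.
            self.results.put((self.camera_id, frame.upper()))

# The CentralUnit supervises the PUs: dispatches work and coordinates shutdown.
class CentralUnit:
    def __init__(self, n_cameras):
        self.frames = [queue.Queue() for _ in range(n_cameras)]
        self.results = queue.Queue()
        self.units = [ProcessingUnit(i, q, self.results)
                      for i, q in enumerate(self.frames)]
        for u in self.units:
            u.start()

    def dispatch(self, camera_id, frame):
        self.frames[camera_id].put(frame)

    def shutdown(self):
        for q in self.frames:
            q.put(None)
        for u in self.units:
            u.join()

cu = CentralUnit(2)
cu.dispatch(0, "frame-a")
cu.dispatch(1, "frame-b")
cu.shutdown()
out = []
while not cu.results.empty():
    out.append(cu.results.get())
print(sorted(out))  # → [(0, 'FRAME-A'), (1, 'FRAME-B')]
```

    Because each PU owns its input queue, adding a camera only adds a queue and a thread, which is one simple way such an architecture can scale with the number of cameras.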

  15. The Consumer Juggernaut: Web-Based and Mobile Applications as Innovation Pioneer

    NASA Astrophysics Data System (ADS)

    Messerschmitt, David G.

    As happened previously in electronics, software targeted at consumers is increasingly the focus of investment and innovation. Some of the areas where it is leading are animated interfaces, treating users as a community, audio and video information, software as a service, agile software development, and the integration of business models with software design. As a risk-taking and experimental market, and as a source of ideas, consumer software can benefit other areas of applications software. The influence of consumer software can be magnified by research into the internal organizations and processes of the innovative firms at its foundation.

  16. Technical Note: Validation of two methods to determine contact area between breast and compression paddle in mammography.

    PubMed

    Branderhorst, Woutjan; de Groot, Jerry E; van Lier, Monique G J T B; Highnam, Ralph P; den Heeten, Gerard J; Grimbergen, Cornelis A

    2017-08-01

    To assess the accuracy of two methods of determining the contact area between the compression paddle and the breast in mammography. An accurate method to determine the contact area is essential to accurately calculate the average compression pressure applied by the paddle. For a set of 300 breast compressions, we measured the contact areas between breast and paddle, both capacitively using a transparent foil with indium-tin-oxide (ITO) coating attached to the paddle, and retrospectively from the obtained mammograms using image processing software (Volpara Enterprise, algorithm version 1.5.2). A gold standard was obtained from video images of the compressed breast. During each compression, the breast was illuminated from the sides in order to create a dark shadow on the video image where the breast was in contact with the compression paddle. We manually segmented the shadows captured at the time of x-ray exposure and measured their areas. We found a strong correlation between the manual segmentations and the capacitive measurements [r = 0.989, 95% CI (0.987, 0.992)] and between the manual segmentations and the image processing software [r = 0.978, 95% CI (0.972, 0.982)]. Bland-Altman analysis showed a bias of -0.0038 dm² for the capacitive measurement [SD 0.0658, 95% limits of agreement (-0.1329, 0.1252)] and -0.0035 dm² for the image processing software [SD 0.0962, 95% limits of agreement (-0.1921, 0.1850)]. The size of the contact area between the paddle and the breast can be determined accurately and precisely, both in real-time using the capacitive method, and retrospectively using image processing software. This result is beneficial for scientific research, data analysis and quality control systems that depend on one of these two methods for determining the average pressure on the breast during mammographic compression. © 2017 Sigmascreening B.V. Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
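    The two agreement measures used above, Pearson correlation and Bland-Altman bias with 95% limits of agreement, are straightforward to compute from paired measurements. The sketch below uses hypothetical paired areas, not the study data:

```python
import math

# Pearson correlation and Bland-Altman bias / 95% limits of agreement for two
# paired measurement methods. The sample values below are hypothetical.
def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def bland_altman(x, y):
    diffs = [a - b for a, b in zip(x, y)]
    n = len(diffs)
    bias = sum(diffs) / n                                  # mean difference
    sd = math.sqrt(sum((d - bias) ** 2 for d in diffs) / (n - 1))
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)      # 95% limits

method_a = [1.10, 1.52, 0.98, 1.75, 1.33]  # hypothetical contact areas, dm²
method_b = [1.08, 1.55, 0.95, 1.70, 1.36]
print(round(pearson_r(method_a, method_b), 3))
bias, limits = bland_altman(method_a, method_b)
print(round(bias, 4), tuple(round(v, 4) for v in limits))
```

    Bland-Altman limits complement the correlation: two methods can correlate strongly yet disagree systematically, which is exactly what the bias term exposes.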

  17. Automated Video-Based Analysis of Contractility and Calcium Flux in Human-Induced Pluripotent Stem Cell-Derived Cardiomyocytes Cultured over Different Spatial Scales

    PubMed Central

    Huebsch, Nathaniel; Loskill, Peter; Mandegar, Mohammad A.; Marks, Natalie C.; Sheehan, Alice S.; Ma, Zhen; Mathur, Anurag; Nguyen, Trieu N.; Yoo, Jennie C.; Judge, Luke M.; Spencer, C. Ian; Chukka, Anand C.; Russell, Caitlin R.; So, Po-Lin

    2015-01-01

    Contractile motion is the simplest metric of cardiomyocyte health in vitro, but unbiased quantification is challenging. We describe a rapid automated method, requiring only standard video microscopy, to analyze the contractility of human-induced pluripotent stem cell-derived cardiomyocytes (iPS-CM). New algorithms for generating and filtering motion vectors, combined with a newly developed isogenic iPSC line harboring the genetically encoded calcium indicator GCaMP6f, allow simultaneous, user-independent measurement and analysis of the coupling between calcium flux and contractility. The relative performance of these algorithms, in terms of improving signal-to-noise ratio, was tested. Applying these algorithms allowed analysis of contractility in iPS-CM cultured over multiple spatial scales, from single cells to three-dimensional constructs. This open source software was validated with analysis of isoproterenol response in these cells, and can be applied in future studies comparing the drug responsiveness of iPS-CM cultured in different microenvironments in the context of tissue engineering. PMID:25333967
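    The motion-vector idea underlying video-based contractility analysis can be illustrated with a toy block-matching search that minimizes the sum of absolute differences (SAD) between frames. This is only a sketch of the general technique, not the paper's algorithm: the frames below are synthetic grids, and real pipelines add vector filtering and subpixel refinement.

```python
# Toy block-matching motion estimation: find the displacement of a block between
# two frames by minimizing the sum of absolute differences (SAD).
def block(frame, y, x, size):
    return [frame[y + dy][x + dx] for dy in range(size) for dx in range(size)]

def motion_vector(f1, f2, y, x, size=2, search=1):
    ref = block(f1, y, x, size)
    best = None
    for sy in range(-search, search + 1):
        for sx in range(-search, search + 1):
            cand = block(f2, y + sy, x + sx, size)
            sad = sum(abs(a - b) for a, b in zip(ref, cand))
            if best is None or sad < best[0]:
                best = (sad, (sy, sx))
    return best[1]  # (dy, dx) with the lowest SAD

f1 = [[0] * 6 for _ in range(6)]
f2 = [[0] * 6 for _ in range(6)]
for dy in range(2):
    for dx in range(2):
        f1[2 + dy][2 + dx] = 9  # bright 2x2 block in frame 1
        f2[2 + dy][3 + dx] = 9  # same block shifted one pixel right in frame 2
print(motion_vector(f1, f2, 2, 2))  # → (0, 1)
```

    Aggregating such per-block vectors over a beating cell yields a motion magnitude trace whose peaks correspond to contraction and relaxation events.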

  18. Use of a computerized decision support system for primary and secondary prevention of work-related MSD disability.

    PubMed

    Womack, Sarah K; Armstrong, Thomas J

    2005-09-01

    The present study evaluates the effectiveness of a decision support system used to evaluate and control physical job stresses and prevent re-injury of workers who have experienced or are concerned about work-related musculoskeletal disorders. The software program is a database that stores detailed job information such as standardized work data, videos, and upper-extremity physical stress ratings for over 400 jobs in the plant. Additionally, the database users were able to record comments about the jobs and related control issues. The researchers investigated the utility and effectiveness of the software by analyzing its use over a 20-month period. Of the 197 comments entered by the users, 25% pertained to primary prevention, 75% pertained to secondary prevention, and 94 comments (47.7%) described ergonomic interventions. Use of the software tool improved primary and secondary prevention by improving the quality and efficiency of the ergonomic job analysis process.

  19. Colonoscopy tutorial software made with a cadaver's sectioned images.

    PubMed

    Chung, Beom Sun; Chung, Min Suk; Park, Hyung Seon; Shin, Byeong-Seok; Kwon, Koojoo

    2016-11-01

    Novice doctors may watch tutorial videos when training for actual or computed tomographic (CT) colonoscopy. The conventional learning videos can be complemented by virtual colonoscopy software made with a cadaver's sectioned images (SIs). The objective of this study was to assist colonoscopy trainees with the new interactive software. Submucosal segmentation on the SIs was carried out through the whole length of the large intestine. From the SIs and segmented images, a three-dimensional model was reconstructed. Six hundred seventy-one proximal colonoscopic views (conventional views) and corresponding distal colonoscopic views (simulating the retroflexion of a colonoscope) were produced. Navigation views showing the current location of the colonoscope tip and its course, as well as supplementary description views, were elaborated. The four corresponding views were put into convenient browsing software that can be downloaded free from the homepage (anatomy.co.kr). The SI colonoscopy software, with its realistic images and supportive tools, is available to anybody. Users can readily notice the position and direction of the virtual colonoscope tip and recognize meaningful structures in colonoscopic views. The software is expected to be an auxiliary learning tool to improve technique and related knowledge in actual and CT colonoscopies. Hopefully, the software will be updated using raw images from the Visible Korean project. Copyright © 2016 Elsevier GmbH. All rights reserved.

  20. Access NASA Satellite Global Precipitation Data Visualization on YouTube

    NASA Astrophysics Data System (ADS)

    Liu, Z.; Su, J.; Acker, J. G.; Huffman, G. J.; Vollmer, B.; Wei, J.; Meyer, D. J.

    2017-12-01

    Since the satellite era began, NASA has collected a large volume of Earth science observations for research and applications around the world. Satellite data at 12 NASA data centers can also be used for STEM activities concerning disaster events, climate change, etc. However, accessing satellite data can be a daunting task for non-professional users such as teachers and students because of unfamiliar terminology, disciplines, data formats, data structures, computing resources, processing software, programming languages, etc. Over the years, many efforts have been made to improve satellite data access, but barriers still exist for non-professionals. In this presentation, we will present our latest activity, which uses the popular online video sharing web site YouTube to access visualizations of global precipitation datasets at the NASA Goddard Earth Sciences (GES) Data and Information Services Center (DISC). With YouTube, users can access and visualize a large volume of satellite data without needing to learn new software or download data. The dataset in this activity is the 3-hourly TRMM (Tropical Rainfall Measuring Mission) Multi-satellite Precipitation Analysis (TMPA). The video is built from over 50,000 data files collected from 1998 onwards, covering the zone between 50°N and 50°S. The YouTube video lasts 36 minutes for the entire data record (over 19 years). Since the time stamp is on each frame of the video, users can begin at any time by dragging the time progress bar. This precipitation animation allows viewing precipitation events and processes (e.g., hurricanes, fronts, atmospheric rivers, etc.) on a global scale. The next plan is to develop a similar animation for the GPM (Global Precipitation Measurement) Integrated Multi-satellitE Retrievals for GPM (IMERG). IMERG provides precipitation at a half-hourly interval with near-global (60°N-S) coverage, showing more detail on precipitation processes and development compared to the 3-hourly TMPA product. The entire video will contain more than 330,000 files and will last 3.6 hours. Future plans include development of fly-over videos of orbital data for an entire satellite mission or project. All videos will be uploaded and available at the GES DISC site on YouTube (https://www.youtube.com/user/NASAGESDISC).

  1. Advances of FishNet towards a fully automatic monitoring system for fish migration

    NASA Astrophysics Data System (ADS)

    Kratzert, Frederik; Mader, Helmut

    2017-04-01

    Restoring the continuum of river networks affected by anthropogenic constructions is one of the main objectives of the Water Framework Directive. Regarding fish migration, fish passes are a widely used measure, and their functionality often needs to be assessed by monitoring. Over the last years, we developed a new semi-automatic monitoring system (FishCam) which allows contact-free observation of fish migration in fish passes through videos. The system consists of a detection tunnel, equipped with a camera, a motion sensor and artificial light sources, as well as software (FishNet) that helps to analyze the video data. In its latest version, the software is capable of detecting and tracking objects in the videos as well as classifying them into "fish" and "no-fish" objects. This allows filtering out the videos containing at least one fish (approx. 5% of all grabbed videos) and reduces the manual labor to the analysis of these videos. In this state, the system has already been used in over 20 different fish passes across Austria, for a total of over 140 months of monitoring, resulting in more than 1.4 million analyzed videos. As a next step towards a fully automatic monitoring system, a key feature is the automatic classification of the detected fish into their species, which is still an unsolved task in a fully automatic monitoring environment. Recent advances in the field of machine learning, especially image classification with deep convolutional neural networks, seem promising for solving this problem. In this study, different approaches to fish species classification are tested. Besides an image-only classification approach using deep convolutional neural networks, various methods that combine the power of convolutional neural networks as image descriptors with additional features, such as fish length and time of appearance, are explored. To facilitate the development and testing phase of this approach, a subset of six fish species of Austrian rivers and streams is considered in this study. All scripts and the data to reproduce the results of this study will be made publicly available on GitHub* at the beginning of the EGU2017 General Assembly. * https://github.com/kratzert/EGU2017_public/

  2. Can You See Me Now Visualizing Battlefield Facial Recognition Technology in 2035

    DTIC Science & Technology

    2010-04-01

    County Sheriff’s Department, use certain measurements such as the distance between eyes, the length of the nose, or the shape of the ears. 8 However...captures multiple frames of video and composites them into an appropriately high-resolution image that can be processed by the facial recognition software...stream of data. High resolution video systems, such as those described below will be able to capture orders of magnitude more data in one video frame

  3. Data Mining and Information Technology: Its Impact on Intelligence Collection and Privacy Rights

    DTIC Science & Technology

    2007-11-26

    sources include: Cameras - Digital cameras (still and video ) have been improving in capability while simultaneously dropping in cost at a rate...citizen is caught on camera 300 times each day.5 The power of extensive video coverage is magnified greatly by the nascent capability for voice and...software on security videos and tracking cell phone usage in the local area. However, it would only return the names and data of those who

  4. Developing high-quality educational software.

    PubMed

    Johnson, Lynn A; Schleyer, Titus K L

    2003-11-01

    The development of effective educational software requires a systematic process executed by a skilled development team. This article describes the core skills required of the development team members for the six phases of successful educational software development. During analysis, the foundation of product development is laid, including defining the audience and program goals, determining hardware and software constraints, identifying content resources, and developing management tools. The design phase creates the specifications that describe the user interface, the sequence of events, and the details of the content to be displayed. During development, the pieces of the educational program are assembled: graphics and other media are created, video and audio scripts are written and recorded, the program code is created, and support documentation is produced. Extensive testing by the development team (alpha testing) and with students (beta testing) is conducted. Carefully planned implementation is most likely to result in a flawless delivery of the educational software, and maintenance ensures up-to-date content and software. Due to the importance of the sixth phase, evaluation, we have written a companion article on it that follows this one. The development of a CD-ROM product is described, including the development team, a detailed description of the development phases, and the lessons learned from the project.

  5. Calibration of Action Cameras for Photogrammetric Purposes

    PubMed Central

    Balletti, Caterina; Guerra, Francesco; Tsioukas, Vassilios; Vernier, Paolo

    2014-01-01

    The use of action cameras for photogrammetry purposes is not widespread because, until recently, the images provided by the sensors, in either still or video capture mode, were not large enough to support analysis with the necessary photogrammetric accuracy. However, several manufacturers have recently produced and released new lightweight devices which are: (a) easy to handle, (b) capable of performing under extreme conditions and, more importantly, (c) able to provide both still images and video sequences of high resolution. In order to use the sensor of an action camera, a careful and reliable self-calibration must be applied prior to any photogrammetric procedure - a relatively difficult scenario because of the camera's short focal length and the wide-angle lens used to obtain the maximum possible image resolution. Special software, using functions of the OpenCV library, has been created to perform both the calibration and the production of undistorted scenes for each of the still and video image capture modes of a novel action camera, the GoPro Hero 3, which can provide still images at up to 12 Mp and video at up to 8 Mp resolution. PMID:25237898

  6. Calibration of action cameras for photogrammetric purposes.

    PubMed

    Balletti, Caterina; Guerra, Francesco; Tsioukas, Vassilios; Vernier, Paolo

    2014-09-18

    The use of action cameras for photogrammetry purposes is not widespread because, until recently, the images provided by the sensors, in either still or video capture mode, were not large enough to support analysis with the necessary photogrammetric accuracy. However, several manufacturers have recently produced and released new lightweight devices which are: (a) easy to handle, (b) capable of performing under extreme conditions and, more importantly, (c) able to provide both still images and video sequences of high resolution. In order to use the sensor of an action camera, a careful and reliable self-calibration must be applied prior to any photogrammetric procedure - a relatively difficult scenario because of the camera's short focal length and the wide-angle lens used to obtain the maximum possible image resolution. Special software, using functions of the OpenCV library, has been created to perform both the calibration and the production of undistorted scenes for each of the still and video image capture modes of a novel action camera, the GoPro Hero 3, which can provide still images at up to 12 Mp and video at up to 8 Mp resolution.
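    The wide-angle lens distortion that such a self-calibration estimates is commonly described by the first radial terms of the Brown distortion model (the same family of coefficients OpenCV's calibration routines fit). The sketch below applies that model to a point in normalized image coordinates; the coefficient values are hypothetical, not calibrated values for any real camera.

```python
# First two radial terms of the Brown distortion model applied to a point
# (x, y) in normalized image coordinates. k1 and k2 here are hypothetical
# example coefficients, not values fitted for any real camera.
def distort(x, y, k1, k2):
    r2 = x * x + y * y                     # squared distance from the center
    scale = 1 + k1 * r2 + k2 * r2 * r2     # radial scaling factor
    return x * scale, y * scale

print(distort(0.0, 0.0, -0.25, 0.0))  # → (0.0, 0.0)   center is unaffected
print(distort(1.0, 0.0, -0.25, 0.0))  # → (0.75, 0.0)  off-axis points move more
```

    Calibration runs this model in reverse: given many observed points of a known target, it solves for k1, k2 (and the intrinsics), after which each frame can be resampled into an undistorted scene.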

  7. Facial Video-Based Photoplethysmography to Detect HRV at Rest.

    PubMed

    Moreno, J; Ramos-Castro, J; Movellan, J; Parrado, E; Rodas, G; Capdevila, L

    2015-06-01

    Our aim is to demonstrate the usefulness of photoplethysmography (PPG) for analyzing heart rate variability (HRV) using a standard 5-min test at rest with paced breathing, comparing the results with real RR intervals and testing supine and sitting positions. Simultaneous recordings of R-R intervals were conducted on 20 individuals with a Polar system and a non-contact PPG based on facial video recording. Data analysis and editing were performed with the software designated for each instrument. Agreement on HRV parameters was assessed with concordance correlations, effect sizes from ANOVA, and Bland-Altman plots. For the supine position, differences between the video and Polar systems showed a small effect size in most HRV parameters. For the sitting position, these differences showed a moderate effect size in most HRV parameters. A new procedure, based on the pixels that contain the most heart-beat information, is proposed for improving the signal-to-noise ratio in the PPG video signal. Results were acceptable in both positions but better in the supine position. Our approach could be relevant for applications that require monitoring of stress or cardio-respiratory health, such as effort/recuperation states in sports. © Georg Thieme Verlag KG Stuttgart · New York.

  8. Enhanced Video-Oculography System

    NASA Technical Reports Server (NTRS)

    Moore, Steven T.; MacDougall, Hamish G.

    2009-01-01

    A previously developed video-oculography system has been enhanced for use in measuring vestibulo-ocular reflexes of a human subject in a centrifuge, motor vehicle, or other setting. The system as previously developed included a lightweight digital video camera mounted on goggles. The left eye was illuminated by an infrared light-emitting diode via a dichroic mirror, and the camera captured images of the left eye in infrared light. To extract eye-movement data, the digitized video images were processed by software running in a laptop computer. Eye movements were calibrated by having the subject view a target pattern, fixed with respect to the subject's head, generated by a goggle-mounted laser with a diffraction grating. The system as enhanced includes a second camera for imaging the scene from the subject's perspective, and two inertial measurement units (IMUs) for measuring linear accelerations and rates of rotation for computing head movements. One IMU is mounted on the goggles, the other on the centrifuge or vehicle frame. All eye-movement and head-motion data are time-stamped. In addition, the subject's point of regard is superimposed on each scene image to enable analysis of patterns of gaze in real time.

  9. Online Videoconferencing Products: Update

    ERIC Educational Resources Information Center

    Burton, Douglas; Kitchen, Tim

    2011-01-01

    Software allowing real-time online video connectivity is rapidly evolving. The ability to connect students, staff, and guest speakers instantaneously carries great benefits for the online distance education classroom. This evaluation report compares four software applications at opposite ends of the cost spectrum: "DimDim", "Elluminate VCS",…

  10. Using a Digital Video Camera to Study Motion

    ERIC Educational Resources Information Center

    Abisdris, Gil; Phaneuf, Alain

    2007-01-01

    To illustrate how a digital video camera can be used to analyze various types of motion, this simple activity analyzes the motion and measures the acceleration due to gravity of a basketball in free fall. Although many excellent commercially available data loggers and software can accomplish this task, this activity requires almost no financial…
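    The kind of analysis such an activity involves can be sketched numerically: once the ball's position has been read off frame by frame, the acceleration is the second finite difference of position over the frame interval. The frame rate and positions below are synthetic stand-ins for digitized video coordinates.

```python
# Estimate acceleration from per-frame positions via second central differences.
# The positions are synthetic (generated from g = 9.81 m/s²), standing in for
# coordinates digitized from video frames of a ball in free fall.
fps = 30                  # assumed camera frame rate
dt = 1 / fps
g_true = 9.81
ys = [0.5 * g_true * (i * dt) ** 2 for i in range(12)]  # fall distance per frame (m)

# a(t_i) ≈ (y[i-1] - 2*y[i] + y[i+1]) / dt², averaged over the interior frames
accels = [(ys[i - 1] - 2 * ys[i] + ys[i + 1]) / dt ** 2
          for i in range(1, len(ys) - 1)]
g_est = sum(accels) / len(accels)
print(round(g_est, 2))  # → 9.81
```

    With real video data the per-frame estimates would scatter around g because of digitization noise, which is why averaging (or fitting a parabola to the whole trajectory) is the usual classroom approach.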

  11. Participatory Culture as Professional Development: Preparing Teachers to Use "Minecraft" in the Classroom

    ERIC Educational Resources Information Center

    Kuhn, Jeff; Stevens, Vance

    2017-01-01

    As computer-based game use grows in classrooms, teachers need more opportunities for professional development aimed at helping them to appropriately incorporate games into their classrooms. Teachers need opportunities not only to learn about video games as software but also about video games as culture. This requires professional development that…

  12. Audio and Video Reflections to Promote Social Justice

    ERIC Educational Resources Information Center

    Boske, Christa

    2011-01-01

    Purpose: The purpose of this paper is to examine how 15 graduate students enrolled in a US school leadership preparation program understand issues of social justice and equity through a reflective process utilizing audio and/or video software. Design/methodology/approach: The study is based on the tradition of grounded theory. The researcher…

  13. Mutually Beneficial Foreign Language Learning: Creating Meaningful Interactions through Video-Synchronous Computer-Mediated Communication

    ERIC Educational Resources Information Center

    Kato, Fumie; Spring, Ryan; Mori, Chikako

    2016-01-01

    Providing learners of a foreign language with meaningful opportunities for interactions, specifically with native speakers, is especially challenging for instructors. One way to overcome this obstacle is through video-synchronous computer-mediated communication tools such as Skype software. This study reports quantitative and qualitative data from…

  14. A Software Defined Integrated T1 Digital Network for Voice, Data and Video.

    ERIC Educational Resources Information Center

    Hill, James R.

    The Dallas County Community College District developed and implemented a strategic plan for communications that utilizes a county-wide integrated network to carry voice, data, and video information to nine locations within the district. The network, which was installed and operational by March 1987, utilizes microwave, fiber optics, digital cross…

  15. Applying Video Game Interaction Design to Business Performance, Round 2.

    ERIC Educational Resources Information Center

    Shirinian, Ara; Dickelman, Erik

    2002-01-01

    Discusses software design for enterprise systems and for video games, and describes difficulties with enterprise tools, including interface complexity, training costs, and user frustration. Examines the world of tools and games from the human perspective and suggests ways in which game design can be successfully transferred to the enterprise tool…

  16. Student Perceptions of Online Tutoring Videos

    ERIC Educational Resources Information Center

    Sligar, Steven R.; Pelletier, Christopher D.; Bonner, Heidi Stone; Coghill, Elizabeth; Guberman, Daniel; Zeng, Xiaoming; Newman, Joyce J.; Muller, Dorothy; Dennis, Allen

    2017-01-01

    Online tutoring is made possible by using videos to replace or supplement face to face services. The purpose of this research was to examine student reactions to the use of lecture capture technology in a university tutoring setting and to assess student knowledge of some features of Tegrity lecture capture software. A survey was administered to…

  17. Video Use in Sweden, 1982. Summary of SR/Pub Report No. 16-1982.

    ERIC Educational Resources Information Center

    Hulten, Olof

    Swedish consumer use of video recording equipment and software was surveyed through interviews with 10,700 people; the interviews were conducted by the field research staff of the Swedish Broadcasting Corporation's Audience and Programme Research Department between December 1981 and April 1982. The study focused on possession (ownership, leasing,…

  18. A Real-Time Image Acquisition And Processing System For A RISC-Based Microcomputer

    NASA Astrophysics Data System (ADS)

    Luckman, Adrian J.; Allinson, Nigel M.

    1989-03-01

    A low cost image acquisition and processing system has been developed for the Acorn Archimedes microcomputer. Using a Reduced Instruction Set Computer (RISC) architecture, the ARM (Acorn Risc Machine) processor provides instruction speeds suitable for image processing applications. The associated improvement in data transfer rate has allowed real-time video image acquisition without the need for frame-store memory external to the microcomputer. The system comprises real-time video digitising hardware which interfaces directly to the Archimedes memory, and software to provide an integrated image acquisition and processing environment. The hardware can digitise a video signal at up to 640 samples per video line with programmable parameters such as sampling rate and gain. Software support includes a work environment for image capture and processing with pixel, neighbourhood and global operators. A friendly user interface is provided with the help of the Archimedes Operating System WIMP (Windows, Icons, Mouse and Pointer) Manager. Windows provide a convenient way of handling images on the screen and program control is directed mostly by pop-up menus.
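    The pixel, neighbourhood and global operator classes mentioned above can be illustrated with a short sketch. The Python below is a minimal toy illustration, not code from the Archimedes system; all function names and the sample image are invented.

```python
# Toy sketches of the three operator classes: pixel, neighbourhood, global.
# The image is a small list of rows of 8-bit grey levels.

def pixel_op_threshold(img, t):
    """Pixel operator: each output pixel depends only on one input pixel."""
    return [[255 if p >= t else 0 for p in row] for row in img]

def neighbourhood_op_mean3x3(img):
    """Neighbourhood operator: 3x3 mean filter (border pixels left unchanged)."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            s = sum(img[y + dy][x + dx] for dy in (-1, 0, 1) for dx in (-1, 0, 1))
            out[y][x] = s // 9
    return out

def global_op_mean(img):
    """Global operator: one value computed from the whole image."""
    flat = [p for row in img for p in row]
    return sum(flat) / len(flat)

img = [[10, 10, 10, 10],
       [10, 90, 90, 10],
       [10, 90, 90, 10],
       [10, 10, 10, 10]]

print(pixel_op_threshold(img, 50)[1][1])    # 255
print(neighbourhood_op_mean3x3(img)[1][1])  # (5*10 + 4*90) // 9 = 45
print(global_op_mean(img))                  # 30.0
```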

  19. Image processing and computer controls for video profile diagnostic system in the ground test accelerator (GTA)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wright, R.M.; Zander, M.E.; Brown, S.K.

    1992-09-01

    This paper describes the application of video image processing to beam profile measurements on the Ground Test Accelerator (GTA). A diagnostic was needed to measure beam profiles in the intermediate matching section (IMS) between the radio-frequency quadrupole (RFQ) and the drift tube linac (DTL). Beam profiles are measured by injecting puffs of gas into the beam. The light emitted from the beam-gas interaction is captured and processed by a video image processing system, generating the beam profile data. A general purpose, modular and flexible video image processing system, imagetool, was used for the GTA image profile measurement. The development of both software and hardware for imagetool and its integration with the GTA control system (GTACS) will be discussed. The software includes specialized algorithms for analyzing data and calibrating the system. The underlying design philosophy of imagetool was tested by the experience of building and using the system, pointing the way for future improvements. The current status of the system will be illustrated by samples of experimental data.
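    The profile extraction described above reduces a 2-D frame of beam-gas light to a 1-D intensity distribution. Below is a minimal sketch of that reduction on a plain grayscale frame, with centroid and RMS width as summary statistics; it illustrates the principle only and is not the imagetool code.

```python
# Integrate beam-gas light down each column to get a horizontal beam profile,
# then summarise it by centroid position and RMS width. Frame values invented.

def horizontal_profile(frame):
    """Sum pixel intensities down each column -> 1-D beam profile."""
    return [sum(frame[y][x] for y in range(len(frame)))
            for x in range(len(frame[0]))]

def centroid_and_rms(profile):
    """Intensity-weighted mean position and RMS width of a 1-D profile."""
    total = sum(profile)
    centroid = sum(x * v for x, v in enumerate(profile)) / total
    var = sum(v * (x - centroid) ** 2 for x, v in enumerate(profile)) / total
    return centroid, var ** 0.5

frame = [[0, 1, 4, 1, 0],
         [0, 2, 8, 2, 0],
         [0, 1, 4, 1, 0]]

profile = horizontal_profile(frame)  # [0, 4, 16, 4, 0]
c, w = centroid_and_rms(profile)
print(profile, round(c, 3), round(w, 3))
```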

  1. Low-cost Tools for Aerial Video Geolocation and Air Traffic Analysis for Delay Reduction Using Google Earth

    NASA Astrophysics Data System (ADS)

    Zetterlind, V.; Pledgie, S.

    2009-12-01

    Low-cost, low-latency, robust geolocation and display of aerial video is a common need for a wide range of earth observing as well as emergency response and security applications. While hardware costs for aerial video collection systems, GPS, and inertial sensors have been decreasing, software costs for geolocation algorithms and reference imagery/DTED remain expensive and highly proprietary. As part of a Federal Small Business Innovative Research project, MosaicATM and EarthNC, Inc have developed a simple geolocation system based on the Google Earth API and Google's 'built-in' DTED and reference imagery libraries. This system geolocates aerial video based on platform and camera position, attitude, and field-of-view metadata using geometric photogrammetric principles of ray-intersection with DTED. Geolocated video can be directly rectified and viewed in the Google Earth API during processing. Work is underway to extend our geolocation code to NASA World Wind for additional flexibility and a fully open-source platform. In addition to our airborne remote sensing work, MosaicATM has developed the Surface Operations Data Analysis and Adaptation (SODAA) tool, funded by NASA Ames, which supports analysis of airport surface operations to optimize aircraft movements and reduce fuel burn and delays. As part of SODAA, MosaicATM and EarthNC, Inc have developed powerful tools to display national airspace data and time-animated 3D flight tracks in Google Earth for 4D analysis. The SODAA tool can convert raw format flight track data, FAA National Flight Data (NFD), and FAA 'Adaptation' airport surface data to a spatial database representation and then to Google Earth KML. The SODAA client provides users with a simple graphical interface through which to generate queries with a wide range of predefined and custom filters, plot results, and export for playback in Google Earth in conjunction with NFD and Adaptation overlays.
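    The ray-intersection geolocation principle described above can be sketched in a few lines. The version below is heavily simplified: it assumes flat terrain at a constant elevation (real DTED is a height grid), a local east/north/up frame in metres, and a single boresight ray defined by heading and depression angles; the function and parameter names are illustrative, not from the MosaicATM/EarthNC system.

```python
import math

def geolocate(pos, heading_deg, depression_deg, ground_elev=0.0):
    """Intersect a camera boresight ray with flat terrain.

    pos = (east, north, up) in metres; heading is clockwise from north;
    depression is the angle below the horizon. Returns (east, north).
    """
    h = math.radians(heading_deg)
    d = math.radians(depression_deg)
    # Unit ray in the east/north/up frame.
    ray = (math.sin(h) * math.cos(d), math.cos(h) * math.cos(d), -math.sin(d))
    if ray[2] >= 0:
        raise ValueError("ray does not intersect the ground")
    t = (ground_elev - pos[2]) / ray[2]  # parametric distance to the terrain plane
    return (pos[0] + t * ray[0], pos[1] + t * ray[1])

# Aircraft 1000 m up, looking due north, 45 degrees below the horizon:
e, n = geolocate((0.0, 0.0, 1000.0), heading_deg=0.0, depression_deg=45.0)
print(round(e, 1), round(n, 1))  # 0.0 1000.0
```

Replacing the constant-elevation plane with iterative stepping along the ray through a DTED height grid gives the full method.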

  2. Novel Uses of Video to Accelerate the Surgical Learning Curve.

    PubMed

    Ibrahim, Andrew M; Varban, Oliver A; Dimick, Justin B

    2016-04-01

    Surgeons are under enormous pressure to continually improve and learn new surgical skills. Novel uses of surgical video in the preoperative, intraoperative, and postoperative setting are emerging to accelerate the learning curve of surgical skill and minimize harm to patients. In the preoperative setting, social media outlets provide a valuable platform for surgeons to collaborate and plan for difficult operative cases. Live streaming of video has allowed for intraoperative telementoring. Finally, postoperative use of video has provided structure for peer coaching to evaluate and improve surgical skill. Applying these approaches into practice is becoming easier as most of our surgical platforms (e.g., laparoscopic, and endoscopy) now have video recording technology built in and video editing software has become more user friendly. Future applications of video technology are being developed, including possible integration into accreditation and board certification.

  3. Direct measurement of lateral transport in membranes by using time-resolved spatial photometry.

    PubMed Central

    Kapitza, H G; McGregor, G; Jacobson, K A

    1985-01-01

    Spatially resolving light detectors allow, with proper calibration, quantitative analysis of the variations in two-dimensional intensity distributions over time. An ultrasensitive microfluorometer was assembled by using as a detector a microchannel plate-intensified video camera. The camera was interfaced with a software-based digital video analysis system to digitize, average, and process images and to directly control the timing of the experiments to minimize exposure of the specimen to light. The detector system has been characterized to allow its use as a photometer. A major application has been to perform fluorescence recovery after photobleaching measurements by using the camera in place of a photomultiplier tube (video-FRAP) with the goal of detecting possible anisotropic diffusion or convective flow. Analysis of the data on macromolecular diffusion in homogenous aqueous glycol solutions yielded diffusion constants in agreement with previous measurements. Results on lipid probe diffusion in dimyristoylphosphatidylcholine multibilayers indicated that at temperatures above the gel-to-liquid crystalline phase transition diffusion is isotropic, and analysis of video-FRAP data yielded diffusion coefficients consistent with those measured previously by using spot photobleaching. However, lipid probes in these multibilayers held just below the main phase transition temperature exhibited markedly anisotropic diffusive fluxes when the bleaching beam was positioned proximate to domain boundaries in the P beta' phase. Lipid probes and lectin receptor complexes diffused isotropically in fibroblast surface membranes with little evidence for diffusion channeled parallel to stress fibers. A second application was to trace the time evolution of cell surface reactions such as patching. The feasibility of following, on the optical scale, the growth of individual receptor clusters induced by the ligand wheat germ agglutinin was demonstrated. PMID:3858869

  5. Software Accelerates Computing Time for Complex Math

    NASA Technical Reports Server (NTRS)

    2014-01-01

    Ames Research Center awarded Newark, Delaware-based EM Photonics Inc. SBIR funding to utilize graphic processing unit (GPU) technology, traditionally used for computer video games, to develop high-performance computing software called CULA. The software gives users the ability to run complex algorithms on personal computers with greater speed. As a result of the NASA collaboration, the number of employees at the company has increased 10 percent.

  6. Near real-time, on-the-move software PED using VPEF

    NASA Astrophysics Data System (ADS)

    Green, Kevin; Geyer, Chris; Burnette, Chris; Agarwal, Sanjeev; Swett, Bruce; Phan, Chung; Deterline, Diane

    2015-05-01

    The scope of the Micro-Cloud for Operational, Vehicle-Based EO-IR Reconnaissance System (MOVERS) development effort, managed by the Night Vision and Electronic Sensors Directorate (NVESD), is to develop, integrate, and demonstrate new sensor technologies and algorithms that improve improvised explosive device/mine detection using efficient and effective exploitation and fusion of sensor data and target cues from existing and future Route Clearance Package (RCP) sensor systems. Unfortunately, the majority of forward looking Full Motion Video (FMV) and computer vision processing, exploitation, and dissemination (PED) algorithms are developed using proprietary, incompatible software. This makes the insertion of new algorithms difficult due to the lack of standardized processing chains. In order to overcome these limitations, EOIR developed the Government off-the-shelf (GOTS) Video Processing and Exploitation Framework (VPEF) to provide standardized interfaces (e.g., input/output video formats, sensor metadata, and detected objects) for exploitation software and to rapidly integrate and test computer vision algorithms. EOIR developed a vehicle-based computing framework within the MOVERS and integrated it with VPEF. VPEF was further enhanced for automated processing, detection, and publishing of detections in near real-time, thus improving the efficiency and effectiveness of RCP sensor systems.

  7. A software-based tool for video motion tracking in the surgical skills assessment landscape.

    PubMed

    Ganni, Sandeep; Botden, Sanne M B I; Chmarra, Magdalena; Goossens, Richard H M; Jakimowicz, Jack J

    2018-01-16

    The use of motion tracking has been shown to provide an objective assessment in surgical skills training. Current systems, however, require the use of additional equipment or specialised laparoscopic instruments and cameras to extract the data. The aim of this study was to determine the feasibility of using a software-based solution to extract the data. Six expert and 23 novice participants performed a basic laparoscopic cholecystectomy procedure in the operating room. The recorded videos were analysed using Kinovea 0.8.15, and the following parameters were calculated: path length, average instrument movement, and number of sudden or extreme movements. The analysed data showed that experts had significantly shorter path length (median 127 cm vs. 187 cm, p = 0.01), smaller average movements (median 0.40 cm vs. 0.32 cm, p = 0.002) and fewer sudden movements (median 14.00 vs. 21.61, p = 0.001) than their novice counterparts. The use of software-based video motion tracking of laparoscopic cholecystectomy is a simple and viable method enabling objective assessment of surgical performance. It provides clear discrimination between expert and novice performance.
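    The three tracking-derived metrics can be reproduced from any tracked instrument-tip trajectory. The sketch below is an illustration with made-up coordinates and an arbitrary sudden-movement threshold, not the study's actual computation or Kinovea output.

```python
# Compute path length, average per-frame movement, and count of "sudden"
# movements from a tracked (x, y) trajectory sampled at a fixed frame rate.

def motion_metrics(points, sudden_threshold):
    """points: [(x_cm, y_cm), ...]; sudden_threshold in cm per frame."""
    steps = [((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
             for (x1, y1), (x2, y2) in zip(points, points[1:])]
    path_length = sum(steps)
    average_movement = path_length / len(steps)
    sudden_movements = sum(1 for s in steps if s > sudden_threshold)
    return path_length, average_movement, sudden_movements

track = [(0.0, 0.0), (3.0, 4.0), (3.0, 4.0), (6.0, 8.0)]
print(motion_metrics(track, sudden_threshold=4.5))  # path length 10.0, 2 sudden steps
```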

  8. COMPUTATIONAL ANALYSIS OF SWALLOWING MECHANICS UNDERLYING IMPAIRED EPIGLOTTIC INVERSION

    PubMed Central

    Pearson, William G.; Taylor, Brandon K; Blair, Julie; Martin-Harris, Bonnie

    2015-01-01

    Objective Determine swallowing mechanics associated with the first and second epiglottic movements, that is, movement to horizontal and full inversion respectively, in order to provide a clinical interpretation of impaired epiglottic function. Study Design Retrospective cohort study. Methods A heterogeneous cohort of patients with swallowing difficulties was identified (n=92). Two speech-language pathologists reviewed 5ml thin and 5ml pudding videofluoroscopic swallow studies per subject, and assigned epiglottic component scores of 0=complete inversion, 1=partial inversion, and 2=no inversion, forming three groups of videos for comparison. Coordinates mapping minimum and maximum excursion of the hyoid, pharynx, larynx, and tongue base during pharyngeal swallowing were recorded using ImageJ software. A canonical variate analysis with post-hoc discriminant function analysis of coordinates was performed using MorphoJ software to evaluate mechanical differences between groups. Eigenvectors characterizing swallowing mechanics underlying impaired epiglottic movements were visualized. Results Nineteen of 184 video-swallows were rejected for poor quality (n=165). A Goodman-Kruskal index of predictive association showed no correlation between epiglottic component scores and etiologies of dysphagia (λ=.04). A two-way analysis of variance by epiglottic component scores showed no significant interaction effects between sex and age (f=1.4, p=.25). Discriminant function analysis demonstrated statistically significant mechanical differences between epiglottic component scores 1 and 2, representing the first epiglottic movement (Mahalanobis distance=1.13, p=.0007), and between scores 0 and 1, representing the second epiglottic movement (Mahalanobis distance=0.83, p=.003). Eigenvectors indicate that laryngeal elevation and tongue base retraction underlie both epiglottic movements. Conclusion Results suggest that reduced tongue base retraction and laryngeal elevation underlie impaired first and second epiglottic movements. The styloglossus, hyoglossus and long pharyngeal muscles are implicated as targets for rehabilitation in dysphagic patients with impaired epiglottic inversion. PMID:27426940
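    The Mahalanobis distances reported above measure the separation between group means in units of within-group variation. Below is a minimal sketch reduced to two variables with a diagonal (independent-variable) pooled covariance so the arithmetic stays visible; it is illustrative, not the MorphoJ computation, and the data are invented.

```python
# Mahalanobis distance between two group means, assuming independent variables
# (diagonal covariance) with pooled within-group variances.

def mean(xs):
    return sum(xs) / len(xs)

def mahalanobis_diag(group_a, group_b):
    """group_*: lists of equal-length observation tuples."""
    dims = len(group_a[0])
    d2 = 0.0
    for i in range(dims):
        a = [obs[i] for obs in group_a]
        b = [obs[i] for obs in group_b]
        ma, mb = mean(a), mean(b)
        ss = sum((x - ma) ** 2 for x in a) + sum((x - mb) ** 2 for x in b)
        pooled_var = ss / (len(a) + len(b) - 2)  # pooled within-group variance
        d2 += (ma - mb) ** 2 / pooled_var
    return d2 ** 0.5

a = [(1.0, 2.0), (1.2, 2.1), (0.8, 1.9)]
b = [(2.0, 2.6), (2.2, 2.5), (1.8, 2.7)]
print(round(mahalanobis_diag(a, b), 3))  # sqrt(61) ≈ 7.81
```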

  9. What's New in Software? Hot New Tool: The Hypertext.

    ERIC Educational Resources Information Center

    Hedley, Carolyn N.

    1989-01-01

    This article surveys recent developments in hypertext software, a highly interactive nonsequential reading/writing/database approach to research and teaching that allows paths to be created through related materials including text, graphics, video, and animation sources. Described are uses, advantages, and problems of hypertext. (PB)

  10. Tele-EnREDando.com: A Multimedia WEB-CALL Software for Mobile Phones.

    ERIC Educational Resources Information Center

    Garcia, Jose Carlos

    2002-01-01

    Presents one of the world's first prototypes of language learning software for smart-phones. Tele-EnREDando.com is an Internet based multimedia application designed for 3G mobile phones with audio, video, and interactive exercises for learning Spanish for business. (Author/VWL)

  11. An Interactive, Physics-Based Unmanned Ground Vehicle Simulator Leveraging Open Source Gaming Technology: Progress in the Development and Application of the Virtual Autonomous Navigation Environment (VANE) Desktop

    DTIC Science & Technology

    2009-01-01

    interface, mechatronics, video games 1. INTRODUCTION Engineering methods have substantially and continuously evolved over the past 40 years. In the past...1970s, video games have pioneered interactive simulation and laid the groundwork for inexpensive computing that individuals, corporations, and...purposes. This has not gone unnoticed, and software technology and techniques evolved for video games are beginning to have extraordinary impact in

  12. Games for Training: Leveraging Commercial Off the Shelf Multiplayer Gaming Software for Infantry Squad Collective Training

    DTIC Science & Technology

    2005-09-01

    squad training, team training, dismounted training, video games , computer games, multiplayer games. 16. PRICE CODE 17. SECURITY CLASSIFICATION OF...Multiplayer - mode of play for computer and video games in which multiple people can play the same game at the same time (Wikipedia, 2005) D...that “improvements in 3-D image generation on the PC and the speed of the internet” have increased the military’s interest in the use of video games as

  13. Quantifying cell mono-layer cultures by video imaging.

    PubMed

    Miller, K S; Hook, L A

    1996-04-01

    A method is described in which the relative number of adherent cells in multi-well tissue-culture plates is assayed by staining the cells with Giemsa and capturing the image of the stained cells with a video camera and charge-coupled device. The resultant image is quantified using the associated video imaging software. The method is shown to be sensitive and reproducible and should be useful for studies where quantifying relative cell numbers and/or proliferation in vitro is required.
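    The quantification step can be sketched as a simple stained-area fraction: Giemsa-stained cells are darker than the background, so the fraction of pixels below an intensity threshold tracks relative cell number. The frames and threshold below are invented for illustration and are not from the published method.

```python
# Relative quantification of adherent cells from a grayscale frame:
# count the fraction of pixels darker than a stain threshold.

def stained_fraction(frame, threshold):
    """frame: 2-D list of 8-bit grey levels; stained pixels are darker (lower)."""
    pixels = [p for row in frame for p in row]
    return sum(1 for p in pixels if p < threshold) / len(pixels)

well_sparse = [[200, 200, 60, 200],
               [200, 200, 200, 200]]
well_dense  = [[60, 50, 200, 40],
               [55, 200, 45, 200]]

print(stained_fraction(well_sparse, 100))  # 0.125
print(stained_fraction(well_dense, 100))   # 0.625
```

Comparing these fractions across wells gives the relative cell numbers the assay reports.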

  14. Establishing a gold standard for manual cough counting: video versus digital audio recordings

    PubMed Central

    Smith, Jaclyn A; Earis, John E; Woodcock, Ashley A

    2006-01-01

    Background Manual cough counting is time-consuming and laborious; however it is the standard to which automated cough monitoring devices must be compared. We have compared manual cough counting from video recordings with manual cough counting from digital audio recordings. Methods We studied 8 patients with chronic cough, overnight in laboratory conditions (diagnoses were 5 asthma, 1 rhinitis, 1 gastro-oesophageal reflux disease and 1 idiopathic cough). Coughs were recorded simultaneously using a video camera with infrared lighting and digital sound recording. The numbers of coughs in each 8 hour recording were counted manually, by a trained observer, in real time from the video recordings and using audio-editing software from the digital sound recordings. Results The median cough frequency was 17.8 (IQR 5.9–28.7) cough sounds per hour in the video recordings and 17.7 (6.0–29.4) coughs per hour in the digital sound recordings. There was excellent agreement between the video and digital audio cough rates; mean difference of -0.3 coughs per hour (SD ± 0.6), 95% limits of agreement -1.5 to +0.9 coughs per hour. Video recordings had poorer sound quality even in controlled conditions and can only be analysed in real time (8 hours per recording). Digital sound recordings required 2–4 hours of analysis per recording. Conclusion Manual counting of cough sounds from digital audio recordings has excellent agreement with simultaneous video recordings in laboratory conditions. We suggest that ambulatory digital audio recording is therefore ideal for validating future cough monitoring devices, as this can be performed in the patients' own environment. PMID:16887019
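    The agreement statistics quoted above (mean difference and Bland-Altman 95% limits of agreement) follow directly from paired hourly cough rates. A minimal sketch with made-up paired data, not the study's recordings:

```python
# Bland-Altman agreement: mean difference (bias) and 95% limits of agreement
# (bias ± 1.96 * SD of the paired differences).

def limits_of_agreement(video, audio):
    diffs = [v - a for v, a in zip(video, audio)]
    n = len(diffs)
    mean_diff = sum(diffs) / n
    sd = (sum((d - mean_diff) ** 2 for d in diffs) / (n - 1)) ** 0.5
    return mean_diff, mean_diff - 1.96 * sd, mean_diff + 1.96 * sd

video_rates = [17.8, 5.9, 28.7, 12.0, 22.4]  # coughs per hour (invented)
audio_rates = [17.7, 6.0, 29.4, 12.1, 22.9]
bias, lo, hi = limits_of_agreement(video_rates, audio_rates)
print(round(bias, 2), round(lo, 2), round(hi, 2))
```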

  15. A comparison of face to face and video-based education on attitude related to diet and fluids: Adherence in hemodialysis patients.

    PubMed

    Karimi Moonaghi, Hossein; Hasanzadeh, Farzaneh; Shamsoddini, Somayyeh; Emamimoghadam, Zahra; Ebrahimzadeh, Saeed

    2012-07-01

    Adherence to diet and fluid restrictions is the cornerstone of care for patients undergoing hemodialysis. By educating hemodialysis patients we can help them maintain a proper diet and reduce the mortality and complications caused by toxins. Face-to-face education is one of the most common training methods in the health care system, but video-based education has the advantages of being simple and cost-effective, although the method is virtual. Seventy-five hemodialysis patients were divided randomly into face-to-face and video-based education groups. A training manual was designed based on Orem's self-care model; its content was the same in both groups. In the face-to-face group, 2 educational sessions were delivered during dialysis with a 1-week interval. In the video-based education group, a produced film, divided into 2 episodes, was presented during dialysis with a 1-week interval. An attitude questionnaire was completed as a pretest and at the end of weeks 2 and 4. SPSS software version 11.5 was used for analysis. Attitudes about fluid and diet adherence at the end of weeks 2 and 4 were not significantly different between the face-to-face and video-based education groups. The patients' attitudes differed significantly in the face-to-face group across the 3 study phases (pre-, 2, and 4 weeks postintervention), and the same results were obtained across the 3 phases of the video-based education group. Our findings showed that video-based education can be as effective as the face-to-face method. It is recommended that more investment be devoted to video-based education.

  16. Hierarchical video surveillance architecture: a chassis for video big data analytics and exploration

    NASA Astrophysics Data System (ADS)

    Ajiboye, Sola O.; Birch, Philip; Chatwin, Christopher; Young, Rupert

    2015-03-01

    There is increasing reliance on video surveillance systems for systematic derivation, analysis and interpretation of the data needed for predicting, planning, evaluating and implementing public safety. This is evident from the massive number of surveillance cameras deployed across public locations. For example, in July 2013, the British Security Industry Association (BSIA) reported that over 4 million CCTV cameras had been installed in Britain alone. The BSIA also revealed that only 1.5% of these are state owned. In this paper, we propose a framework that allows access to data from privately owned cameras, with the aim of increasing the efficiency and accuracy of public safety planning, security activities, and decision support systems that are based on video integrated surveillance systems. The accuracy of results obtained from government-owned public safety infrastructure would improve greatly if privately owned surveillance systems `expose' relevant video-generated metadata events, such as triggered alerts, and also permit query of a metadata repository. Subsequently, a police officer, for example, with an appropriate level of system permission can query unified video systems across a large geographical area such as a city or a country to predict the location of an interesting entity, such as a pedestrian or a vehicle. This becomes possible with our proposed novel hierarchical architecture, the Fused Video Surveillance Architecture (FVSA). At the high level, FVSA comprises a hardware framework that is supported by a multi-layer abstraction software interface. It presents video surveillance systems as an adapted computational grid of intelligent services, which is integration-enabled to communicate with other compatible systems in the Internet of Things (IoT).
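    The federated metadata query idea described above can be illustrated with a toy example: each surveillance system exposes alert events as metadata records, and an authorised query filters them by entity type, area and time window. All record fields and values below are invented, not part of the FVSA specification.

```python
# Toy federated-metadata query: filter exposed alert events by entity type,
# spatial cell, and time window.

events = [
    {"camera": "private-17", "entity": "vehicle",    "cell": "B4", "t": 1010},
    {"camera": "city-03",    "entity": "pedestrian", "cell": "B4", "t": 1025},
    {"camera": "private-17", "entity": "pedestrian", "cell": "C1", "t": 1030},
]

def query(events, entity, cells, t_start, t_end):
    """Return events matching an entity type, a set of map cells, and a time window."""
    return [e for e in events
            if e["entity"] == entity
            and e["cell"] in cells
            and t_start <= e["t"] <= t_end]

hits = query(events, "pedestrian", {"B4", "C1"}, 1000, 1027)
print([e["camera"] for e in hits])  # ['city-03']
```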

  17. No-Fail Software Gifts for Kids.

    ERIC Educational Resources Information Center

    Buckleitner, Warren

    1996-01-01

    Reviews children's software packages: (1) "Fun 'N Games"--nonviolent games and activities; (2) "Putt-Putt Saves the Zoo"--matching, logic games, and animal facts; (3) "Big Job"--12 logic games with video from job sites; (4) "JumpStart First Grade"--15 activities introducing typical school lessons; and (5) "Read, Write, & Type!"--progressively…

  18. Recording Computer-Based Demonstrations and Board Work

    ERIC Educational Resources Information Center

    Spencer, Neil H.

    2010-01-01

    This article describes how a demonstration of statistical (or other) software can be recorded without expensive video equipment and saved as a presentation to be displayed with software such as Microsoft PowerPoint. Work carried out on a tablet PC, for example, can also be recorded in this fashion.

  19. Prezi: A Different Way to Present

    ERIC Educational Resources Information Center

    Yee, Kevin; Hargis, Jace

    2010-01-01

    In this article, the author discusses Prezi and compares it to other forms of presentation software. Taking a completely different approach to the entire concept of software for presentations, Prezi stands alone as a unique and wholly viable competitor to PowerPoint. With a "Prezi", users display words, images, and videos without using…

  20. The Effects of Multiple Linked Representations on Student Learning in Mathematics.

    ERIC Educational Resources Information Center

    Ozgun-Koca, S. Asli

    This study investigated the effects on student understanding of linear relationships using the linked representation software VideoPoint as compared to using semi-linked representation software. It investigated students' attitudes towards and preferences for mathematical representations--equations, tables, or graphs. An Algebra I class was divided…

  1. Interaction Patterns in Synchronous Online Calculus and Linear Algebra Recitations

    ERIC Educational Resources Information Center

    Mayer, Greg; Hendricks, Cher

    2014-01-01

    This study describes interaction patterns observed during a pilot project that explored the use of web-conferencing (WC) software in two undergraduate distance education courses offered to advanced high-school students. The pilot program replaced video-conferencing technology with WC software during recitations, so as to increase participation in…

  2. Creating History Documentaries: A Step-by-Step Guide to Video Projects in the Classroom.

    ERIC Educational Resources Information Center

    Escobar, Deborah

    This guide offers social studies teachers an easy introduction to challenging their students with creative media by bringing the past to life. The 14-step guide shows teachers and students the techniques needed for researching, scripting, and editing a historical documentary. Using a video camera and computer software, students can…

  3. Polarimeter based on video matrix

    NASA Astrophysics Data System (ADS)

    Pavlov, Andrey; Kontantinov, Oleg; Shmirko, Konstantin; Zubko, Evgenij

    2017-11-01

    In this paper we present a new measurement tool: a polarimeter based on a video matrix. Polarimetric measurements are useful, for example, when monitoring pollution in water areas and atmospheric constituents. The new device is small enough to mount on unmanned aerial vehicles (quadrocopters) and on stationary platforms. Together with the corresponding software, the device becomes a real-time monitoring system that helps to solve a range of research problems.

  4. Games as an Artistic Medium: Investigating Complexity Thinking in Game-Based Art Pedagogy

    ERIC Educational Resources Information Center

    Patton, Ryan M.

    2013-01-01

    This action research study examines the making of video games, using an integrated development environment software program called GameMaker, as art education curriculum for students between the ages of 8-13. Through a method I designed, students created video games using the concepts of move, avoid, release, and contact (MARC) to explore their…

  5. The California All-sky Meteor Surveillance (CAMS) System

    NASA Astrophysics Data System (ADS)

    Gural, P. S.

    2011-01-01

    A unique next generation multi-camera, multi-site video meteor system is being developed and deployed in California to provide high accuracy orbits of simultaneously captured meteors. Included herein is a description of the goals, concept of operations, hardware, and software development progress. An appendix contains a meteor camera performance trade study made for video systems circa 2010.

  6. Space lab system analysis

    NASA Technical Reports Server (NTRS)

    Ingels, F. M.; Rives, T. B.

    1987-01-01

    An analytical analysis of the HOSC Generic Peripheral processing system was conducted. The results are summarized and indicate that the maximum delay in performing screen change requests should be less than 2.5 sec., occurring for a slow VAX host to video screen I/O rate of 50 KBps. This delay is due to the average I/O rate from the video terminals to their host computer. The software structure of the main computers and the host computers will have a greater impact on screen change or refresh response times. The HOSC data system model was updated by a newly coded Pascal-based simulation program which was installed on the HOSC VAX system. This model is described and documented. Suggestions are offered to fine-tune the performance of the Ethernet interconnection network. Suggestions for using the Nutcracker by Excelan to trace stray packets which appear on the network from time to time were offered in discussions with the HOSC personnel. Several visits to the HOSC facility were made to install and demonstrate the simulation model.

  7. Using WorldWide Telescope in Observing, Research and Presentation

    NASA Astrophysics Data System (ADS)

    Roberts, Douglas A.; Fay, J.

    2014-01-01

    WorldWide Telescope (WWT) is free software that enables researchers to interactively explore observational data using a user-friendly interface. Reference all-sky datasets and pointed observations are available as layers, along with the ability to easily overlay additional FITS images and catalog data. Connections to the Astrophysics Data System (ADS) are included, which enable visual investigation using WWT to drive document searches in ADS. WWT can be used to capture and share visual exploration with colleagues during observational planning and analysis. Finally, researchers can use WorldWide Telescope to create videos for professional, education and outreach presentations. I will conclude with an example of how I have used WWT in a research project. Specifically, I will discuss how WorldWide Telescope helped our group prepare for radio observations and, following them, analyze multi-wavelength data taken in the inner parsec of the Galaxy. A concluding video will show how WWT brought together disparate datasets in a unified interactive visualization environment.

  8. Estimation of skeletal movement of human locomotion from body surface shapes using dynamic spatial video camera (DSVC) and 4D human model.

    PubMed

    Saito, Toshikuni; Suzuki, Naoki; Hattori, Asaki; Suzuki, Shigeyuki; Hayashibe, Mitsuhiro; Otake, Yoshito

    2006-01-01

    We have been developing a DSVC (Dynamic Spatial Video Camera) system to measure and observe human locomotion quantitatively and freely. A 4D (four-dimensional) human model with a detailed skeletal structure and joint, muscle, and motor functionality has been built. The purpose of our research was to estimate skeletal movements from body surface shapes using DSVC and the 4D human model. For this purpose, we constructed a body surface model of a subject and resized the standard 4D human model to match the geometrical features of the subject's body surface model. Software that integrates the DSVC system and the 4D human model, and allows dynamic skeletal state analysis from body surface movement data, was also developed. We applied the developed system to the dynamic skeletal state analysis of a lower limb in motion and were able to visualize the motion using the geometrically resized standard 4D human model.

  9. Development of a video image-based QA system for the positional accuracy of dynamic tumor tracking irradiation in the Vero4DRT system.

    PubMed

    Ebe, Kazuyu; Sugimoto, Satoru; Utsunomiya, Satoru; Kagamu, Hiroshi; Aoyama, Hidefumi; Court, Laurence; Tokuyama, Katsuichi; Baba, Ryuta; Ogihara, Yoshisada; Ichikawa, Kosuke; Toyama, Joji

    2015-08-01

    To develop and evaluate a new video image-based QA system, including in-house software, that can display the tracking state visually and quantify the positional accuracy of dynamic tumor tracking irradiation in the Vero4DRT system. Sixteen trajectories in six patients with pulmonary cancer were obtained with the ExacTrac in the Vero4DRT system. Motion data in the cranio-caudal direction (Y direction) were used as the input for a programmable motion table (Quasar). A target phantom was placed on the motion table, which was placed on the 2D ionization chamber array (MatriXX). Then, the 4D modeling procedure was performed on the target phantom during a reproduction of the patient's tumor motion. A substitute target with the patient's tumor motion was irradiated with 6-MV x-rays under the surrogate infrared system. The 2D dose images obtained from the MatriXX (33 frames/s; 40 s) were exported to in-house video-image analyzing software. The absolute differences in the Y direction between the center of the exposed target and the center of the exposed field were calculated as positional errors. The authors' QA results were compared to 4D modeling function errors and gimbal motion errors obtained from log analyses in the ExacTrac to verify the accuracy of their QA system. The patients' tumor motions were evaluated in the wave forms, and the peak-to-peak distances were also measured to verify their reproducibility. Thirteen of the sixteen trajectories (81.3%) were successfully reproduced with the Quasar; the peak-to-peak distances ranged from 2.7 to 29.0 mm. The remaining three trajectories (18.7%) could not be reproduced due to the limited motions of the Quasar, so 13 of 16 trajectories were included in the analysis. The mean number of video images used for analysis was 1156. The positional errors (absolute mean difference + 2 standard deviations) ranged from 0.54 to 1.55 mm. 
The error values differed by less than 1 mm from 4D modeling function errors and gimbal motion errors in the ExacTrac log analyses (n = 13). The newly developed video image-based QA system, including in-house software, can analyze more than a thousand images (33 frames/s). Positional errors are approximately equivalent to those in ExacTrac log analyses. This system is useful for the visual illustration of the progress of the tracking state and for the quantification of positional accuracy during dynamic tumor tracking irradiation in the Vero4DRT system.
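The error metric quoted in the abstract above (absolute mean difference plus two standard deviations of the per-frame Y offsets) can be sketched as follows. The frame values here are illustrative, not data from the paper:

```python
import math

def positional_error(target_y, field_y):
    """Absolute mean difference + 2 standard deviations (sample SD), the
    QA metric described above, for per-frame Y centers in mm."""
    diffs = [abs(t - f) for t, f in zip(target_y, field_y)]
    n = len(diffs)
    mean = sum(diffs) / n
    sd = math.sqrt(sum((d - mean) ** 2 for d in diffs) / (n - 1))
    return mean + 2 * sd

# Hypothetical per-frame centers (mm) of the exposed target and field;
# in the real system more than a thousand frames would be analyzed.
target = [0.0, 0.4, 0.8, 0.5, 0.1]
field  = [0.2, 0.3, 0.6, 0.9, 0.0]
err = positional_error(target, field)
```

In this toy case the mean absolute difference is 0.2 mm and the result is roughly 0.44 mm, the same order as the 0.54-1.55 mm range the authors report.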

  10. Analysis of United States’ Broadband Policy

    DTIC Science & Technology

    2007-03-01

    compared with the minimum speed the FCC uses in its definition of broadband access. For example, using a 56K modem connection to download a 10...transmission rates multiple times faster than a 56K modem, users can view video or download software and other data-intensive files in a matter of seconds...boast download speeds from 144Kbps (roughly three times faster than a 56K dial-up modem connection) to 2.4Mbps (close to cable-modem speed). Although

  11. Spectrally And Temporally Resolved Low-Light Level Video Microscopy

    NASA Astrophysics Data System (ADS)

    Wampler, John E.; Furukawa, Ruth; Fechheimer, Marcus

    1989-12-01

    The IDG low-light video microscope system was designed to aid studies of localization of subcellular luminescence sources and stimulus/response coupling in single living cells using luminescent probes. Much of the motivation for the design of this instrument system came from the pioneering efforts of Dr. Reynolds (Reynolds, Q. Rev. Biophys. 5, 295-347; Reynolds and Taylor, Bioscience 30, 586-592), who showed the value of intensified video camera systems for the detection and localization of fluorescence and bioluminescence signals from biological tissues. Our instrument system has essentially two roles: 1) localization and quantitation of very weak bioluminescence signals and 2) quantitation of intracellular environmental characteristics such as pH and calcium ion concentrations using fluorescent and bioluminescent probes. The instrument system exhibits an operating range of over one million fold, allowing visualization and enhancement of quantum-limited images with quantum-limited response, spectral analysis of fluorescence signals, and transmitted-light imaging. The computer control of the system implements rapid switching between light regimes, spatially resolved spectral scanning, and digital data processing for spectral shape analysis and for detailed analysis of the statistical distribution of single cell measurements. The system design and software algorithms used by the system are summarized. These design criteria are illustrated with examples taken from studies of bioluminescence, applications of bioluminescence to study developmental processes and gene expression in single living cells, and applications of fluorescent probes to study stimulus/response coupling in living cells.

  12. An innovative experiment on superconductivity, based on video analysis and non-expensive data acquisition

    NASA Astrophysics Data System (ADS)

    Bonanno, A.; Bozzo, G.; Camarca, M.; Sapia, P.

    2015-07-01

    In this paper we present a new experiment on superconductivity, designed for university undergraduate students, based on the high-speed video analysis of a magnet falling through a ceramic superconducting cylinder (Tc = 110 K). The use of an Atwood’s machine allows us to vary the magnet’s speed and acceleration during its interaction with the superconductor. In this way, we highlight the existence of two interaction regimes: for low crossing energy, the magnet is levitated by the superconductor after a transient oscillatory damping; for higher crossing energy, the magnet passes through the superconducting cylinder. The use of a commercial-grade high-speed imaging system, together with video analysis performed using the Tracker software, allows us to attain good precision in space and time measurements. Four sensing coils, mounted inside and outside the superconducting cylinder, allow us to study the magnetic flux variations associated with the magnet’s passage through the superconductor, shedding light on a didactically relevant topic: the behaviour of magnetic field lines in the presence of a superconductor. The critical discussion of the experimental data allows undergraduate university students to gain useful insights into the basic phenomenology of superconductivity as well as into relevant conceptual topics such as the difference between the Meissner effect and the Faraday-like ‘perfect’ induction.

  13. Bringing Javanese Traditional Dance into Basic Physics Class: Exemplifying Projectile Motion through Video Analysis

    NASA Astrophysics Data System (ADS)

    Handayani, Langlang; Prasetya Aji, Mahardika; Susilo; Marwoto, Putut

    2016-08-01

    An alternative, arts-based approach to instruction in a Basic Physics class has been developed through video analysis of a Javanese traditional dance, Bambangan Cakil. A particular movement of the dance (weapon throwing) was analyzed using the LoggerPro software package to exemplify projectile motion. The results of the analysis indicated that the movement of the thrown weapon in the Bambangan Cakil dance helps explain several physics concepts of projectile motion, namely the object's path, velocity, and acceleration, in the form of pictures, graphs, and tables. The weapon's path and velocity can be shown in a picture or graph, while the decreasing velocity in the y direction (the weapon moving downward and upward) due to the acceleration g can be represented in a table. It was concluded that a Javanese traditional dance contains many physics concepts that can be explored. The study recommends bringing traditional dance into the science class, which will enable students to gain a better understanding of both physics concepts and Indonesia's cultural heritage.
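The frame-by-frame analysis described above can be sketched as follows: given tracked (t, x, y) samples of a thrown object, central finite differences recover the velocity components and the roughly constant downward acceleration. The data here are synthetic, not taken from the dance video:

```python
def finite_differences(ts, ys):
    """Central-difference derivative of y with respect to t."""
    return [(ys[i + 1] - ys[i - 1]) / (ts[i + 1] - ts[i - 1])
            for i in range(1, len(ys) - 1)]

g = 9.8              # m/s^2, assumed
dt = 1 / 30          # 30 frames per second, typical for video analysis
ts = [i * dt for i in range(10)]
# Ideal projectile: constant horizontal velocity, uniform vertical acceleration
xs = [2.0 * t for t in ts]
ys = [3.0 * t - 0.5 * g * t * t for t in ts]

vx = finite_differences(ts, xs)        # ~2.0 m/s throughout
vy = finite_differences(ts, ys)        # decreases linearly with time
ay = finite_differences(ts[1:-1], vy)  # ~ -9.8 m/s^2, recovering g
```

With real tracked data the recovered acceleration scatters around -g, which is exactly the comparison a table of per-frame velocities makes visible to students.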

  14. Standing Waves in an Elastic Spring: A Systematic Study by Video Analysis

    NASA Astrophysics Data System (ADS)

    Ventura, Daniel Rodrigues; de Carvalho, Paulo Simeão; Dias, Marco Adriano

    2017-04-01

    The word "wave" is part of the daily language of every student. However, the physical understanding of the concept demands a high level of abstract thought. In physics, waves are oscillating variations of a physical quantity that involve the transfer of energy from one point to another, without displacement of matter. A wave can be formed by an elastic deformation, a variation of pressure, changes in the intensity of electric or magnetic fields, the propagation of a temperature variation, or other disturbances. Moreover, a wave can be categorized as pulsed or periodic. Most importantly, conditions can be set such that waves interfere with one another, resulting in standing waves. These have many applications in technology, although they are not always readily identified and/or understood by all students. In this work, we use a simple setup including a low-cost spring, such as a Slinky, and the free software Tracker for video analysis. We show that they can be very useful for teaching mechanical wave propagation and for analyzing harmonics in standing waves.
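For a spring of length L fixed at both ends, the harmonic frequencies measured from the video can be checked against the standard relation f_n = n·v/(2L). A minimal sketch with illustrative numbers (not measurements from the paper):

```python
def harmonic_frequencies(wave_speed, length, n_max):
    """Standing-wave frequencies f_n = n * v / (2L) for a medium
    fixed at both ends (nodes at both ends)."""
    return [n * wave_speed / (2 * length) for n in range(1, n_max + 1)]

# Illustrative values: a 2 m stretched spring carrying pulses at 6 m/s
freqs = harmonic_frequencies(wave_speed=6.0, length=2.0, n_max=4)
# Fundamental f_1 = 1.5 Hz; each harmonic is an integer multiple of f_1
```

Comparing video-measured oscillation frequencies against this integer ladder is what lets students identify which harmonic the spring is sustaining.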

  15. “SmartMonitor” — An Intelligent Security System for the Protection of Individuals and Small Properties with the Possibility of Home Automation

    PubMed Central

    Frejlichowski, Dariusz; Gościewska, Katarzyna; Forczmański, Paweł; Hofman, Radosław

    2014-01-01

    “SmartMonitor” is an intelligent security system based on image analysis that combines the advantages of alarm, video surveillance and home automation systems. The system is a complete solution that automatically reacts to every learned situation in a pre-specified way and has various applications, e.g., home and surrounding protection against unauthorized intrusion, crime detection or supervision over ill persons. The software is based on well-known and proven methods and algorithms for visual content analysis (VCA) that were appropriately modified and adapted to fit specific needs, creating a video processing model that consists of foreground region detection and localization, candidate object extraction, object classification and tracking. In this paper, the “SmartMonitor” system is presented along with its architecture, employed methods and algorithms, and object analysis approach. Some experimental results on system operation are also provided. The focus is on one of the aforementioned functionalities of the system, namely supervision over ill persons. PMID:24905854
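The first stage of a VCA pipeline like the one described, foreground region detection, can be sketched in a deliberately simplified form as per-pixel differencing against a background model. This is a generic illustration, not SmartMonitor's actual algorithm; the threshold and frame values are hypothetical:

```python
def foreground_mask(frame, background, threshold=25):
    """Mark pixels whose absolute difference from the background model
    exceeds a threshold -- a minimal foreground-detection stage."""
    return [[1 if abs(p - b) > threshold else 0
             for p, b in zip(frow, brow)]
            for frow, brow in zip(frame, background)]

# Toy grayscale frames (rows of pixel intensities)
background = [[10, 10, 10],
              [10, 10, 10]]
frame      = [[10, 90, 10],   # a bright object enters the scene
              [10, 95, 12]]
mask = foreground_mask(frame, background)
# mask marks the middle column as foreground: [[0, 1, 0], [0, 1, 0]]
```

Real systems replace the static background with an adaptive model and feed the resulting regions to the extraction, classification and tracking stages named in the abstract.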

  16. Training Children in Pedestrian Safety: Distinguishing Gains in Knowledge from Gains in Safe Behavior

    PubMed Central

    McClure, Leslie A.

    2014-01-01

    Pedestrian injuries contribute greatly to child morbidity and mortality. Recent evidence suggests that training within virtual pedestrian environments may improve children’s street crossing skills, but may not convey knowledge about safety in street environments. We hypothesized that (a) children will gain pedestrian safety knowledge via videos/software/internet websites, but not when trained by virtual pedestrian environment or other strategies; (b) pedestrian safety knowledge will be associated with safe pedestrian behavior both before and after training; and (c) increases in knowledge will be associated with increases in safe behavior among children trained individually at streetside locations, but not those trained by means of other strategies. We analyzed data from a randomized controlled trial evaluating pedestrian safety training. We randomly assigned 240 children ages 7–8 to one of four training conditions: videos/software/internet, virtual reality (VR), individualized streetside instruction, or a no-contact control. Both virtual and field simulations of street crossing at 2-lane bi-directional mid-block locations assessed pedestrian behavior at baseline, post-training, and 6-month follow-up. Pedestrian knowledge was assessed orally on all three occasions. Children trained by videos/software/internet, and those trained individually, showed increased knowledge following training relative to children in the other groups (ps < 0.01). Correlations between pedestrian safety knowledge and pedestrian behavior were mostly non-significant. Correlations between change in knowledge and change in behavior from pre- to post-intervention also were non-significant, both for the full sample and within conditions. Children trained using videos/software/internet gained knowledge but did not change their behavior. Children trained individually gained in both knowledge and safer behavior. Children trained virtually gained in safer behavior but not knowledge. 
If VR is used for training, tools like videos/internet might effectively supplement training. We discovered few associations between knowledge and behavior, and none between changes in knowledge and behavior. Pedestrian safety knowledge and safe pedestrian behavior may be orthogonal constructs that should be considered independently for research and training purposes. PMID:24573688

  17. Architecture of web services in the enhancement of real-time 3D video virtualization in cloud environment

    NASA Astrophysics Data System (ADS)

    Bada, Adedayo; Wang, Qi; Alcaraz-Calero, Jose M.; Grecos, Christos

    2016-04-01

    This paper proposes a new approach to improving 3D video rendering and streaming by jointly exploring and optimizing both cloud-based virtualization and web-based delivery. The proposed web service architecture first establishes a software virtualization layer based on QEMU (Quick Emulator), an open-source virtualization package that can virtualize most system components except 3D rendering, which is still in its infancy. The architecture then exploits the cloud environment to boost the speed of rendering at the QEMU software virtualization layer. The capabilities and inherent limitations of Virgil 3D, one of the most advanced virtual Graphics Processing Units (GPUs) available, are analyzed through benchmarking experiments and integrated into the architecture to further speed up the rendering. Experimental results are reported and analyzed to demonstrate the benefits of the proposed approach.

  18. Evaluation of Hands-On Clinical Exam Performance Using Marker-less Video Tracking.

    PubMed

    Azari, David; Pugh, Carla; Laufer, Shlomi; Cohen, Elaine; Kwan, Calvin; Chen, Chia-Hsiung Eric; Yen, Thomas Y; Hu, Yu Hen; Radwin, Robert

    2014-09-01

    This study investigates the potential of using marker-less video tracking of the hands for evaluating hands-on clinical skills. Experienced family practitioners attending a national conference were recruited and asked to conduct a breast examination on a simulator that reproduces different clinical presentations. Videos of the clinicians' hands were recorded during the exam, and video processing software was used to track hand motion and quantify its kinematics. Practitioner motion patterns indicated consistent behavior of participants across multiple pathologies. Different pathologies exhibited characteristic motion patterns in the aggregate at specific parts of an exam, indicating consistent inter-participant behavior. Marker-less video kinematic tracking therefore shows promise in discriminating between different examination procedures, clinicians, and pathologies.
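Hand-motion kinematics of the kind quantified above can be reduced to simple summary measures once per-frame positions are available. A sketch under assumed data (the track coordinates and frame rate are hypothetical, not from the study):

```python
import math

def path_length(points):
    """Total distance travelled by a tracked hand (pixel coordinates)."""
    return sum(math.dist(points[i], points[i + 1])
               for i in range(len(points) - 1))

def mean_speed(points, fps):
    """Average speed in pixels per second for a track sampled at fps."""
    duration = (len(points) - 1) / fps
    return path_length(points) / duration

track = [(0, 0), (3, 4), (6, 8)]   # hypothetical per-frame hand positions
speed = mean_speed(track, fps=30)  # 5 px per frame at 30 fps -> 150 px/s
```

Aggregating such measures over exam segments is one way characteristic motion patterns per pathology could be compared.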

  19. Toward Dietary Assessment via Mobile Phone Video Cameras.

    PubMed

    Chen, Nicholas; Lee, Yun Young; Rabb, Maurice; Schatz, Bruce

    2010-11-13

    Reliable dietary assessment is a challenging yet essential task for determining general health. Existing efforts are manual, require considerable effort, and are prone to underestimation and misrepresentation of food intake. We propose leveraging mobile phones to make this process faster, easier and automatic. Using mobile phones with built-in video cameras, individuals capture short videos of their meals; our software then automatically analyzes the videos to recognize dishes and estimate calories. Preliminary experiments on 20 typical dishes from a local cafeteria show promising results. Our approach complements existing dietary assessment methods to help individuals better manage their diet to prevent obesity and other diet-related diseases.

  20. Adapting astronomical source detection software to help detect animals in thermal images obtained by unmanned aerial systems

    NASA Astrophysics Data System (ADS)

    Longmore, S. N.; Collins, R. P.; Pfeifer, S.; Fox, S. E.; Mulero-Pazmany, M.; Bezombes, F.; Goodwin, A.; de Juan Ovelar, M.; Knapen, J. H.; Wich, S. A.

    2017-02-01

    In this paper we describe an unmanned aerial system equipped with a thermal-infrared camera and software pipeline that we have developed to monitor animal populations for conservation purposes. Taking a multi-disciplinary approach to tackle this problem, we use freely available astronomical source detection software and the associated expertise of astronomers, to efficiently and reliably detect humans and animals in aerial thermal-infrared footage. Combining this astronomical detection software with existing machine learning algorithms into a single, automated, end-to-end pipeline, we test the software using aerial video footage taken in a controlled, field-like environment. We demonstrate that the pipeline works reliably and describe how it can be used to estimate the completeness of different observational datasets to objects of a given type as a function of height, observing conditions etc. - a crucial step in converting video footage to scientifically useful information such as the spatial distribution and density of different animal species. Finally, having demonstrated the potential utility of the system, we describe the steps we are taking to adapt the system for work in the field, in particular systematic monitoring of endangered species at National Parks around the world.
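The core operation astronomical source-detection codes perform on such footage is thresholding followed by connected-component grouping. A self-contained, pure-Python sketch of that idea on a toy "thermal frame" (the actual pipeline uses dedicated astronomical software, not this code):

```python
def detect_sources(image, threshold):
    """Return the 4-connected components of pixels above threshold,
    i.e. candidate warm 'sources' in a thermal frame."""
    rows, cols = len(image), len(image[0])
    seen, sources = set(), []
    for r0 in range(rows):
        for c0 in range(cols):
            if image[r0][c0] <= threshold or (r0, c0) in seen:
                continue
            stack, blob = [(r0, c0)], []   # flood fill from this hot pixel
            seen.add((r0, c0))
            while stack:
                r, c = stack.pop()
                blob.append((r, c))
                for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                    if (0 <= nr < rows and 0 <= nc < cols
                            and image[nr][nc] > threshold
                            and (nr, nc) not in seen):
                        seen.add((nr, nc))
                        stack.append((nr, nc))
            sources.append(blob)
    return sources

# Toy thermal frame: two warm animals and one small warm spot
frame = [[0, 0, 9, 9, 0],
         [0, 0, 9, 0, 0],
         [0, 0, 0, 0, 8],
         [7, 0, 0, 8, 8]]
blobs = detect_sources(frame, threshold=5)   # finds 3 warm regions
```

Source sizes and brightnesses from this step are what downstream machine-learning classifiers and completeness estimates consume.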

  1. Climate Science Communications - Video Visualization Techniques

    NASA Astrophysics Data System (ADS)

    Reisman, J. P.; Mann, M. E.

    2010-12-01

    Communicating climate science is challenging due to its complexity, but, as they say, a picture is worth a thousand words. Visualization techniques can be merely graphical or can combine multimedia so as to make graphs come alive in context with other visual and auditory cues. This can also make the information come alive in a way that better communicates what the science is all about. What types of graphics to use depends on your audience: some graphs are great for scientists, but if you are trying to reach a less specialized audience, certain visuals convey information in a more easily perceptible manner. Hollywood techniques and style can be applied to these graphs to give them even more impact. Video is one of the most powerful communication tools in its ability to combine visuals and audio through time. Adding music and visual cues such as pans and zooms can greatly enhance the ability to communicate your concepts. Video software ranges from relatively simple to very sophisticated. In reality, you don't need the best tools to get your point across. In fact, with relatively inexpensive software, you can put together powerful videos that effectively convey the science you are working on, with greater sophistication and in an entertaining way. We will examine some basic techniques to increase the quality of video visualization and make it more effective in communicating complexity. If a picture is worth a thousand words, a decent video with music and a bit of narration is priceless.

  2. Exposure to Poverty and Productivity.

    PubMed

    Dalton, Patricio S; Gonzalez Jimenez, Victor H; Noussair, Charles N

    2017-01-01

    We study whether exposure to poverty can induce affective states that decrease productivity. In a controlled laboratory setting, we find that subjects randomly assigned to a treatment, in which they view a video featuring individuals that live in extreme poverty, exhibit lower subsequent productivity compared to subjects assigned to a control treatment. Questionnaire responses, as well as facial recognition software, provide quantitative measures of the affective state evoked by the two treatments. Subjects exposed to images of poverty experience a more negative affective state than those in the control treatment. Further analysis shows that individuals in a more positive emotional state exhibit less of a treatment effect. Also, those who exhibit greater attentiveness upon viewing the poverty video are less productive. The results are consistent with the notion that exposure to poverty can induce a psychological state in individuals that adversely affects productivity.

  3. GLOBECOM '88 - IEEE Global Telecommunications Conference and Exhibition, Hollywood, FL, Nov. 28-Dec. 1, 1988, Conference Record. Volumes 1, 2, & 3

    NASA Astrophysics Data System (ADS)

    Various papers on communications for the information age are presented. Among the general topics considered are: telematic services and terminals, satellite communications, the telecommunications management network, control of integrated broadband networks, advances in digital radio systems, the intelligent network, broadband networks and services deployment, future switch architectures, performance analysis of computer networks, advances in spread spectrum, optical high-speed LANs, and broadband switching and networks. Also addressed are: multiple access protocols, video coding techniques, modulation and coding, photonic switching, SONET terminals and applications, standards for video coding, digital switching, progress in MANs, mobile and portable radio, software design for improved maintainability, multipath propagation and advanced countermeasures, data communication, network control and management, fiber in the loop, network algorithms and protocols, and advances in computer communications.

  4. Exposure to Poverty and Productivity

    PubMed Central

    2017-01-01

    We study whether exposure to poverty can induce affective states that decrease productivity. In a controlled laboratory setting, we find that subjects randomly assigned to a treatment, in which they view a video featuring individuals that live in extreme poverty, exhibit lower subsequent productivity compared to subjects assigned to a control treatment. Questionnaire responses, as well as facial recognition software, provide quantitative measures of the affective state evoked by the two treatments. Subjects exposed to images of poverty experience a more negative affective state than those in the control treatment. Further analysis shows that individuals in a more positive emotional state exhibit less of a treatment effect. Also, those who exhibit greater attentiveness upon viewing the poverty video are less productive. The results are consistent with the notion that exposure to poverty can induce a psychological state in individuals that adversely affects productivity. PMID:28125621

  5. Weapons, Body Postures, and the Quest for Dominance in Robberies

    PubMed Central

    Mosselman, Floris; Lindegaard, Marie Rosenkrantz

    2018-01-01

    Objective: A small-scale exploration of the use of video analysis to study robberies. We analyze the use of weapons as part of the body posturing of robbers as they attempt to attain dominance. Methods: Qualitative analyses of video footage of 23 shop robberies. We used Observer XT software (version 12) for fine-grained multimodal coding, capturing diverse bodily behavior by various actors simultaneously. We also constructed story lines to understand the robberies as hermeneutic whole cases. Results: Robbers attain dominance by using weapons that afford aggrandizing posturing and forward movements. Guns rather than knives seemed to fit more easily with such posturing. Also, victims were more likely to show minimizing postures when confronted with guns. Thus, guns, as part of aggrandizing posturing, offer more support to robbers’ claims to dominance in addition to their more lethal power. In the cases where resistance occurred, robbers either expressed insecure body movements or minimizing postures and related weapon usage or they failed to impose a robbery frame as the victims did not seem to comprehend the situation initially. Conclusions: Video analysis opens up a new perspective of how violent crime unfolds as sequences of bodily movements. We provide methodological recommendations and suggest a larger scale comparative project. PMID:29416178

  6. Bringing "Scientific Expeditions" Into the Schools

    NASA Technical Reports Server (NTRS)

    Watson, Val; Lasinski, T. A. (Technical Monitor)

    1995-01-01

    Two new technologies, the FASTexpedition and Remote FAST, have been developed that provide remote, 3D, high resolution, dynamic, interactive viewing of scientific data (such as simulations or measurements of fluid dynamics). The FASTexpedition permits one to access scientific data from the World Wide Web, take guided expeditions through the data, and continue with self controlled expeditions through the data. Remote FAST permits collaborators at remote sites to simultaneously view an analysis of scientific data being controlled by one of the collaborators. Control can be transferred between sites. These technologies are now being used for remote collaboration in joint university, industry, and NASA projects in computational fluid dynamics (CFD) and wind tunnel testing. Also, NASA Ames Research Center has initiated a project to make scientific data and guided expeditions through the data available as FASTexpeditions on the World Wide Web for educational purposes. Previously, remote visualization of dynamic data was done using video format (transmitting pixel information) such as video conferencing or MPEG movies on the Internet. The concept for this new technology is to send the raw data (e.g., grids, vectors, and scalars) along with viewing scripts over the Internet and have the pixels generated by a visualization tool running on the viewer's local workstation. The visualization tool that is currently used is FAST (Flow Analysis Software Toolkit). The advantages of this new technology over using video format are: 1. The visual is much higher in resolution (1280x1024 pixels with 24 bits of color) than typical video format transmitted over the network. 2. The form of the visualization can be controlled interactively (because the viewer is interactively controlling the visualization tool running on his workstation). 3. A rich variety of guided expeditions through the data can be included easily. 4. A capability is provided for other sites to see a visual analysis of one site as the analysis is interactively performed. Control of the analysis can be passed from site to site. 5. The scenes can be viewed in 3D using stereo vision. 6. The network bandwidth used for the visualization using this new technology is much smaller than when using video format. (The measured peak bandwidth used was 1 Kbit/sec whereas the measured bandwidth for a small video picture was 500 Kbits/sec.)

  7. Fast 3D Net Expeditions: Tools for Effective Scientific Collaboration on the World Wide Web

    NASA Technical Reports Server (NTRS)

    Watson, Val; Chancellor, Marisa K. (Technical Monitor)

    1996-01-01

    Two new technologies, the FASTexpedition and Remote FAST, have been developed that provide remote, 3D (three dimensional), high resolution, dynamic, interactive viewing of scientific data. The FASTexpedition permits one to access scientific data from the World Wide Web, take guided expeditions through the data, and continue with self controlled expeditions through the data. Remote FAST permits collaborators at remote sites to simultaneously view an analysis of scientific data being controlled by one of the collaborators. Control can be transferred between sites. These technologies are now being used for remote collaboration in joint university, industry, and NASA projects. Also, NASA Ames Research Center has initiated a project to make scientific data and guided expeditions through the data available as FASTexpeditions on the World Wide Web for educational purposes. Previously, remote visualization of dynamic data was done using video format (transmitting pixel information) such as video conferencing or MPEG (Motion Picture Expert Group) movies on the Internet. The concept for this new technology is to send the raw data (e.g., grids, vectors, and scalars) along with viewing scripts over the Internet and have the pixels generated by a visualization tool running on the viewer's local workstation. The visualization tool that is currently used is FAST (Flow Analysis Software Toolkit). The advantages of this new technology over using video format are: (1) The visual is much higher in resolution (1280x1024 pixels with 24 bits of color) than typical video format transmitted over the network. (2) The form of the visualization can be controlled interactively (because the viewer is interactively controlling the visualization tool running on his workstation). (3) A rich variety of guided expeditions through the data can be included easily. (4) A capability is provided for other sites to see a visual analysis of one site as the analysis is interactively performed. 
Control of the analysis can be passed from site to site. (5) The scenes can be viewed in 3D using stereo vision. (6) The network bandwidth for the visualization using this new technology is much smaller than when using video format. (The measured peak bandwidth used was 1 Kbit/sec whereas the measured bandwidth for a small video picture was 500 Kbits/sec.) This talk will illustrate the use of these new technologies and present a proposal for using these technologies to improve science education.
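
    The bandwidth claim above follows directly from the design: raw field data plus a short viewing script is tiny compared with streamed pixels. A minimal sketch of the idea (the data layout, script format, and sizes below are hypothetical illustrations, not FAST's actual protocol):

    ```python
    import json
    import struct

    # Hypothetical illustration only: pack a toy grid, a scalar field, and a
    # small viewing script, then compare with one rendered frame's size.

    def encode_field(points, scalars):
        """Pack raw data (grid points plus scalar values) as 32-bit floats."""
        flat = [c for p in points for c in p] + scalars
        return struct.pack(f"{len(flat)}f", *flat)

    def encode_script(commands):
        """A viewing script is a tiny command list for the local viewer."""
        return json.dumps(commands).encode()

    grid = [(float(i), float(j), 0.0) for i in range(10) for j in range(10)]
    scalars = [float(i) for i in range(100)]
    payload = encode_field(grid, scalars) + encode_script(
        [{"cmd": "isosurface", "level": 0.5}, {"cmd": "rotate", "axis": "y"}])

    frame_bytes = 1280 * 1024 * 3   # one uncompressed 24-bit 1280x1024 frame
    print(len(payload), frame_bytes)
    ```

    For this toy 10x10 grid the payload is under 2 KB, while a single uncompressed frame is nearly 4 MB; that asymmetry is what the measured 1 Kbit/sec versus 500 Kbits/sec comparison reflects.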

  8. Data Analysis of a Space Experiment: Common Software Tackles Uncommon Task

    NASA Technical Reports Server (NTRS)

    Wilkinson, R. Allen

    1998-01-01

    Presented here are the software adaptations developed by laboratory scientists to process the space experiment data products from three experiments on two International Microgravity Laboratory Missions (IML-1 and IML-2). The challenge was to accommodate interacting with many types of hardware and software developed by both European Space Agency (ESA) and NASA aerospace contractors, where data formats were neither commercial nor familiar to scientists. Some of the data had been corrupted by bit shifting of byte boundaries. Least-significant/most-significant byte swapping also occurred, as might be expected for the various hardware platforms involved. The data consisted of 20 GBytes per experiment of both numerical and image data. A significant percentage of the bytes were consumed in NASA formatting with extra layers of packetizing structure. It was provided in various pieces to the scientists on magnetic tapes, Syquest cartridges, DAT tapes, CD-ROMs, analog video tapes, and by network FTP. In this paper I will provide some science background and present the software processing used to make the data useful in the months after the missions.
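
    The byte-swapping corruption described above is an endianness mismatch between the writing and reading platforms. A minimal sketch of the repair (the record layout and values are hypothetical, not the actual IML telemetry format):

    ```python
    import struct

    # Hypothetical record layout: the instrument wrote 32-bit unsigned
    # integers big-endian; a naive little-endian read garbles them, and the
    # repair is simply to reread with the correct byte order.

    def reread_u32(raw):
        """Return (correct big-endian read, garbled little-endian read)."""
        n = len(raw) // 4
        garbled = struct.unpack(f"<{n}I", raw)   # wrong byte order
        correct = struct.unpack(f">{n}I", raw)   # instrument's byte order
        return correct, garbled

    raw = struct.pack(">3I", 1, 2, 3)
    correct, garbled = reread_u32(raw)
    print(garbled)   # (16777216, 33554432, 50331648)
    print(correct)   # (1, 2, 3)
    ```

    The bit-shifted byte boundaries mentioned in the abstract are a harder repair: the stream must first be realigned, typically by searching for known sync or header patterns, before any such reread.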

  9. Automated Thermal Image Processing for Detection and Classification of Birds and Bats - FY2012 Annual Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duberstein, Corey A.; Matzner, Shari; Cullinan, Valerie I.

    Surveying wildlife at risk from offshore wind energy development is difficult and expensive. Infrared video can be used to record birds and bats that pass through the camera view, but it is also time consuming and expensive to review video and determine what was recorded. We proposed to conduct algorithm and software development to identify and to differentiate thermally detected targets of interest that would allow automated processing of thermal image data to enumerate birds, bats, and insects. During FY2012 we developed computer code within MATLAB to identify objects recorded in video and extract attribute information that describes the objects recorded. We tested the efficiency of track identification using observer-based counts of tracks within segments of sample video. We examined object attributes, modeled the effects of random variability on attributes, and produced data smoothing techniques to limit random variation within attribute data. We also began drafting and testing methodology to identify objects recorded on video. In addition, we recorded approximately 10 hours of infrared video of various marine birds, passerine birds, and bats near the Pacific Northwest National Laboratory (PNNL) Marine Sciences Laboratory (MSL) at Sequim, Washington. A total of 6 hours of bird video was captured overlooking Sequim Bay over a series of weeks. An additional 2 hours of video of birds was captured during two weeks overlooking Dungeness Bay within the Strait of Juan de Fuca. Bats and passerine birds (swallows) were also recorded at dusk on the MSL campus during nine evenings. An observer noted the identity of objects viewed through the camera concurrently with recording. These video files will provide the information necessary to produce and test software developed during FY2013. The annotation will also form the basis for creation of a method to reliably identify recorded objects.
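
    The object-identification step described (finding thermally detected objects in a frame and extracting attributes such as size and position) can be sketched as thresholding plus connected-component labeling. This is an illustrative stand-in, not PNNL's MATLAB code; the frame and threshold are toy values:

    ```python
    # Toy thermal frame; a 4-neighbour flood fill labels each warm blob and
    # reports its pixel count and centroid as object attributes.

    def extract_objects(frame, threshold):
        """Return (pixel_count, (row_centroid, col_centroid)) per warm blob."""
        h, w = len(frame), len(frame[0])
        seen = [[False] * w for _ in range(h)]
        objects = []
        for y in range(h):
            for x in range(w):
                if frame[y][x] > threshold and not seen[y][x]:
                    stack, pixels = [(y, x)], []
                    seen[y][x] = True
                    while stack:
                        cy, cx = stack.pop()
                        pixels.append((cy, cx))
                        for ny, nx in ((cy + 1, cx), (cy - 1, cx),
                                       (cy, cx + 1), (cy, cx - 1)):
                            if (0 <= ny < h and 0 <= nx < w
                                    and frame[ny][nx] > threshold
                                    and not seen[ny][nx]):
                                seen[ny][nx] = True
                                stack.append((ny, nx))
                    ybar = sum(p[0] for p in pixels) / len(pixels)
                    xbar = sum(p[1] for p in pixels) / len(pixels)
                    objects.append((len(pixels), (ybar, xbar)))
        return objects

    frame = [[0] * 8 for _ in range(6)]
    frame[1][1] = frame[1][2] = 9                  # small warm target
    frame[3][5] = frame[4][5] = frame[4][6] = 9    # larger warm target
    print(extract_objects(frame, 5))
    ```

    Linking such per-frame objects across frames then yields the tracks whose identification efficiency the report describes testing against observer counts.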

  10. Association between facial expression and PTSD symptoms among young children exposed to the Great East Japan Earthquake: a pilot study

    PubMed Central

    Fujiwara, Takeo; Mizuki, Rie; Miki, Takahiro; Chemtob, Claude

    2015-01-01

    “Emotional numbing” is a symptom of post-traumatic stress disorder (PTSD) characterized by a loss of interest in usually enjoyable activities, feeling detached from others, and an inability to express a full range of emotions. Emotional numbing is usually assessed through self-report, and is particularly difficult to ascertain among young children. We conducted a pilot study to explore the use of facial expression ratings in response to a comedy video clip to assess emotional reactivity among preschool children directly exposed to the Great East Japan Earthquake. This study included 23 child participants. Child PTSD symptoms were measured using a modified version of the Parent’s Report of the Child’s Reaction to Stress scale. Children were filmed while watching a 2-min video compilation of natural scenes (‘baseline video’) followed by a 2-min video clip from a television comedy (‘comedy video’). Children’s facial expressions were processed using the Noldus FaceReader software, which implements the Facial Action Coding System (FACS). We investigated the association between PTSD symptom scores and facial emotion reactivity using linear regression analysis. Children with higher PTSD symptom scores showed a significantly greater proportion of neutral facial expressions, controlling for sex, age, and baseline facial expression (p < 0.05). This pilot study suggests that facial emotion reactivity, measured using facial expression recognition software, has the potential to index emotional numbing in young children. This pilot study adds to the emerging literature on using experimental psychopathology methods to characterize children’s reactions to disasters. PMID:26528206
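
    The reported association can be illustrated with an ordinary least-squares fit of neutral-expression proportion on symptom score. The numbers below are hypothetical, not the study's data, and the actual analysis additionally controlled for sex, age, and baseline expression:

    ```python
    # Simple one-predictor OLS: proportion of neutral frames regressed on
    # PTSD symptom score (toy values for illustration only).

    def ols_fit(x, y):
        """Least-squares slope and intercept for one predictor."""
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sxx = sum((a - mx) ** 2 for a in x)
        slope = sxy / sxx
        return slope, my - slope * mx

    ptsd = [2, 5, 8, 11, 14]                     # hypothetical symptom scores
    neutral = [0.30, 0.38, 0.47, 0.55, 0.62]     # proportion of neutral frames
    slope, intercept = ols_fit(ptsd, neutral)
    print(round(slope, 3))   # a positive slope: more symptoms, more neutral faces
    ```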

  11. Using DVI To Teach Physics: Making the Abstract More Concrete.

    ERIC Educational Resources Information Center

    Knupfer, Nancy Nelson; Zollman, Dean

    The ways in which Digital Video Interactive (DVI), a new video technology, can help students learn concepts of physics were studied in a project that included software design and production as well as formative and summative evaluation. DVI provides real-time motion, with the full-motion image confined to a window on part of the screen so that…

  12. Evaluating the Use of Streaming Video To Support Student Learning in a First-Year Life Sciences Course for Student Nurses.

    ERIC Educational Resources Information Center

    Green, Sue M.; Voegeli, David; Harrison, Maureen; Phillips, Jackie; Knowles, Jess; Weaver, Mike; Shepard, Kerry

    2003-01-01

    Nursing students (n=656) used streaming videos on immune, endocrine, and neurological systems using Blackboard software. Of students who viewed all three, 32% found access easy, 59% enjoyed them, and 25% felt very confident in their learning. Results were consistent across three different types and embedding methods. Technical and access problems…

  13. Opencast Matterhorn: A Community-Driven Open Source Software Project for Producing, Managing, and Distributing Academic Video

    ERIC Educational Resources Information Center

    Ketterl, Markus; Schulte, Olaf A.; Hochman, Adam

    2010-01-01

    Purpose: The purpose of this paper is to introduce the Opencast Community, a global community of individuals, institutions, and commercial stakeholders exchanging knowledge about all matters relevant in the context of academic video and promoting projects in this context. It also gives an overview of the most prominent of these projects, Opencast…

  14. Computer- and Video-Based Instruction of Food-Preparation Skills: Acquisition, Generalization, and Maintenance

    ERIC Educational Resources Information Center

    Ayres, Kevin; Cihak, David

    2010-01-01

    The purpose of this study was to evaluate the effects of a computer-based video instruction (CBVI) program to teach life skills. Three middle school-aged students with intellectual disabilities were taught how to make a sandwich, use a microwave, and set the table with a CBVI software package. A multiple probe across behaviors design was used to…

  15. An Integrated Textbook, Video, and Software Environment for Novice and Expert Prolog Programmers. Technical Report No. 23.

    ERIC Educational Resources Information Center

    Eisenstadt, Marc; Brayshaw, Mike

    This paper describes a Prolog execution model which serves as the uniform basis of textbook material, video-based teaching material, and an advanced graphical user interface for Prolog programmers. The model, based upon an augmented AND/OR tree representation of Prolog programs, uses an enriched "status box" in place of the traditional…

  16. Technology survey on video face tracking

    NASA Astrophysics Data System (ADS)

    Zhang, Tong; Gomes, Herman Martins

    2014-03-01

    With the pervasiveness of monitoring cameras installed in public areas, schools, hospitals, workplaces and homes, video analytics technologies for interpreting these video contents are becoming increasingly relevant to people's lives. Among such technologies, human face detection and tracking (and face identification in many cases) are particularly useful in various application scenarios. While plenty of research has been conducted on face tracking and many promising approaches have been proposed, there are still significant challenges in recognizing and tracking people in videos with uncontrolled capturing conditions, largely due to pose and illumination variations, as well as occlusions and cluttered background. It is especially complex to track and identify multiple people simultaneously in real time due to the large amount of computation involved. In this paper, we present a survey on literature and software that are published or developed during recent years on the face tracking topic. The survey covers the following topics: 1) mainstream and state-of-the-art face tracking methods, including features used to model the targets and metrics used for tracking; 2) face identification and face clustering from face sequences; and 3) software packages or demonstrations that are available for algorithm development or trial. A number of publicly available databases for face tracking are also introduced.
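
    A building block common to the surveyed trackers is associating detections between consecutive frames, often scored by bounding-box intersection over union (IoU). A minimal greedy sketch, with hypothetical `(x, y, w, h)` boxes standing in for face detections:

    ```python
    # IoU-based frame-to-frame association: each new detection is matched to
    # the best-overlapping previous box, if the overlap is good enough.

    def iou(a, b):
        """Intersection over union of two (x, y, w, h) boxes."""
        ax, ay, aw, ah = a
        bx, by, bw, bh = b
        ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))
        iy = max(0, min(ay + ah, by + bh) - max(ay, by))
        inter = ix * iy
        union = aw * ah + bw * bh - inter
        return inter / union if union else 0.0

    def associate(prev_boxes, new_boxes, min_iou=0.3):
        """Greedy match: new box index -> previous box index."""
        matches = {}
        for j, nb in enumerate(new_boxes):
            best = max(range(len(prev_boxes)), key=lambda i: iou(prev_boxes[i], nb))
            if iou(prev_boxes[best], nb) >= min_iou:
                matches[j] = best
        return matches

    prev = [(10, 10, 40, 40), (100, 50, 30, 30)]   # faces in frame t
    new = [(12, 11, 40, 40), (180, 90, 30, 30)]    # frame t+1: one moved slightly
    print(associate(prev, new))   # only the first face is re-identified
    ```

    Real trackers replace this greedy step with appearance features and motion models, which is where most of the computational cost the survey mentions comes from.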

  17. Observations on online educational materials for powder diffraction crystallography software.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Toby, B. H.

    2010-10-01

    This article presents a series of approaches used to educate potential users of crystallographic software for powder diffraction. The approach that has been most successful in the author's opinion is the web lecture, where an audio presentation is coupled to a video-like record of the contents of the presenter's computer screen.

  18. 319 Current Videos and Software for K-12 Law-Related Education.

    ERIC Educational Resources Information Center

    American Bar Association, Chicago, IL. Special Committee on Youth Education for Citizenship.

    This publication assembles into one volume a comprehensive listing of more than 300 electronic media sources on the subject of law-related education (including the Bill of Rights, Constitution, the Courts, Congress, etc.) for grades kindergarten through 12. Items include laser disks, computer software, videotapes, and CD-ROMs (compact…

  19. Designing and Using Videos in Undergraduate Geoscience Education - a workshop and resource website review

    NASA Astrophysics Data System (ADS)

    Wiese, K.; Mcconnell, D. A.

    2014-12-01

    Do you use video in your teaching? Do you make your own video? Interested in joining our growing community of geoscience educators designing and using video inside and outside the classroom? Over four months in Spring 2014, 22 educators of varying video design and development expertise participated in an NSF-funded On the Cutting Edge virtual workshop to review the best educational research on video design and use; to share video-development/use strategies and experiences; and to develop a website of resources for a growing community of geoscience educators who use video: http://serc.carleton.edu/NAGTWorkshops/video/workshop2014/index.html. The site includes links to workshop presentations, teaching activity collections, and a growing collection of online video resources, including "How-To" videos for various video editing or video-making software and hardware options. Additional web resources support several topical themes including: using videos to flip classes, handling ADA access and copyright issues, assessing the effectiveness of videos inside and outside the classroom, best design principles for video learning, and lists and links of the best videos publicly available for use. The workshop represents an initial step in the creation of an informal team of collaborators devoted to the development and support of an ongoing network of geoscience educators designing and using video. Instructors who are interested in joining this effort are encouraged to contact the lead author.

  20. Near-real-time biplanar fluoroscopic tracking system for the video tumor fighter

    NASA Astrophysics Data System (ADS)

    Lawson, Michael A.; Wika, Kevin G.; Gilles, George T.; Ritter, Rogers C.

    1991-06-01

    We have developed software capable of the three-dimensional tracking of objects in the brain volume, and the subsequent overlaying of an image of the object onto previously obtained MR or CT scans. This software has been developed for use with the Magnetic Stereotaxis System (MSS), also called the 'Video Tumor Fighter' (VTF). The software was written for a Sun 4/110 SPARC workstation with an ANDROX ICS-400 image processing card installed to manage this task. At present, the system uses input from two orthogonally-oriented, visible-light cameras and a simulated scene to determine the three-dimensional position of the object of interest. The coordinates are then transformed into MR or CT coordinates and an image of the object is displayed in the appropriate intersecting MR slice on a computer screen. This paper describes the tracking algorithm and discusses how it was implemented in software. The system's hardware is also described. The limitations of the present system are discussed and plans for incorporating biplanar x-ray fluoroscopy are presented.

  1. QuantWorm: a comprehensive software package for Caenorhabditis elegans phenotypic assays.

    PubMed

    Jung, Sang-Kyu; Aleman-Meza, Boanerges; Riepe, Celeste; Zhong, Weiwei

    2014-01-01

    Phenotypic assays are crucial in genetics; however, traditional methods that rely on human observation are unsuitable for quantitative, large-scale experiments. Furthermore, there is an increasing need for comprehensive analyses of multiple phenotypes to provide multidimensional information. Here we developed an automated, high-throughput computer imaging system for quantifying multiple Caenorhabditis elegans phenotypes. Our imaging system is composed of a microscope equipped with a digital camera and a motorized stage connected to a computer running the QuantWorm software package. Currently, the software package contains one data acquisition module and four image analysis programs: WormLifespan, WormLocomotion, WormLength, and WormEgg. The data acquisition module collects images and videos. The WormLifespan software counts the number of moving worms by using two time-lapse images; the WormLocomotion software computes the velocity of moving worms; the WormLength software measures worm body size; and the WormEgg software counts the number of eggs. To evaluate the performance of our software, we compared the results of our software with manual measurements. We then demonstrated the application of the QuantWorm software in a drug assay and a genetic assay. Overall, the QuantWorm software provided accurate measurements at a high speed. Software source code, executable programs, and sample images are available at www.quantworm.org. Our software package has several advantages over current imaging systems for C. elegans. It is an all-in-one package for quantifying multiple phenotypes. The QuantWorm software is written in Java and its source code is freely available, so it does not require use of commercial software or libraries. It can be run on multiple platforms and easily customized to cope with new methods and requirements.
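
    The WormLocomotion measurement described above (velocity of moving worms) reduces to centroid displacement over time. A sketch of that arithmetic, with a hypothetical track, frame rate, and calibration (this is not QuantWorm's actual Java implementation):

    ```python
    import math

    # Mean speed of one worm from its per-frame centroid positions.
    # Track coordinates, fps, and micron calibration are invented values.

    def velocity(track, fps, um_per_pixel):
        """Mean speed (um/s) from a list of per-frame (x, y) centroids."""
        dist = sum(math.dist(track[i], track[i + 1])
                   for i in range(len(track) - 1))
        seconds = (len(track) - 1) / fps
        return dist * um_per_pixel / seconds

    track = [(0, 0), (3, 4), (6, 8)]   # two steps of 5 px each
    print(velocity(track, fps=2, um_per_pixel=10.0))   # 100.0 um/s
    ```

    WormLifespan's two-image moving-worm count and WormEgg's egg count are likewise per-frame image measurements; the package's value is bundling them behind one acquisition pipeline.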

  2. CARMA: Software for continuous affect rating and media annotation

    PubMed Central

    Girard, Jeffrey M

    2017-01-01

    CARMA is a media annotation program that collects continuous ratings while displaying audio and video files. It is designed to be highly user-friendly and easily customizable. Based on Gottman and Levenson's affect rating dial, CARMA enables researchers and study participants to provide moment-by-moment ratings of multimedia files using a computer mouse or keyboard. The rating scale can be configured on a number of parameters including the labels for its upper and lower bounds, its numerical range, and its visual representation. Annotations can be displayed alongside the multimedia file and saved for easy import into statistical analysis software. CARMA provides a tool for researchers in affective computing, human-computer interaction, and the social sciences who need to capture the unfolding of subjective experience and observable behavior over time. PMID:29308198
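
    A tool like CARMA must reduce its moment-by-moment dial samples to a regular time base for export to statistical software. A hedged sketch of that resampling step (the sampling rate, values, and CSV layout are hypothetical, not CARMA's actual file format):

    ```python
    # Toy ratings sampled at 2 Hz; averaging gives one value per second,
    # ready to write out as CSV rows for a statistics package.

    def per_second_means(samples, rate_hz):
        """Average a stream of ratings (one per tick) into per-second values."""
        return [sum(samples[i:i + rate_hz]) / len(samples[i:i + rate_hz])
                for i in range(0, len(samples), rate_hz)]

    samples = [10, 20, 30, 40, 50, 60]   # 3 s of dial positions at 2 Hz
    rows = ["second,rating"] + [f"{s},{m}" for s, m in
                                enumerate(per_second_means(samples, 2))]
    print("\n".join(rows))
    ```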

  3. An automated form of video image analysis applied to classification of movement disorders.

    PubMed

    Chang, R; Guan, L; Burne, J A

    Video image analysis is able to provide quantitative data on postural and movement abnormalities and thus has an important application in neurological diagnosis and management. The conventional techniques require patients to be videotaped while wearing markers in a highly structured laboratory environment. This restricts the utility of video in routine clinical practice. We have begun development of intelligent software which aims to provide a more flexible system able to quantify human posture and movement directly from whole-body images without markers and in an unstructured environment. The steps involved are to extract complete human profiles from video frames, to fit skeletal frameworks to the profiles and derive joint angles and swing distances. By this means a given posture is reduced to a set of basic parameters that can provide input to a neural network classifier. To test the system's performance we videotaped patients with dopa-responsive Parkinsonism and age-matched normals during several gait cycles, to yield 61 patient and 49 normal postures. These postures were reduced to their basic parameters and fed to the neural network classifier in various combinations. The optimal parameter sets (consisting of both swing distances and joint angles) yielded successful classification of normals and patients with an accuracy above 90%. This result demonstrated the feasibility of the approach. The technique has the potential to guide clinicians on the relative sensitivity of specific postural/gait features in diagnosis. Future studies will aim to improve the robustness of the system in providing accurate parameter estimates from subjects wearing a range of clothing, and to further improve discrimination by incorporating more stages of the gait cycle into the analysis.
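
    The basic parameters mentioned above include joint angles derived from the fitted skeleton. A minimal sketch of one such parameter, the angle at a joint given three landmark points (the landmark names and coordinates are hypothetical, not the authors' parameterization):

    ```python
    import math

    # Angle at a joint from three 2-D landmarks, e.g. hip-knee-ankle.

    def joint_angle(a, joint, b):
        """Angle (degrees) at `joint` between rays joint->a and joint->b."""
        v1 = (a[0] - joint[0], a[1] - joint[1])
        v2 = (b[0] - joint[0], b[1] - joint[1])
        dot = v1[0] * v2[0] + v1[1] * v2[1]
        n1 = math.hypot(*v1)
        n2 = math.hypot(*v2)
        return math.degrees(math.acos(dot / (n1 * n2)))

    hip, knee, ankle = (0, 0), (0, -1), (1, -2)
    print(round(joint_angle(hip, knee, ankle)))   # 135: a flexed knee
    ```

    A posture then becomes a fixed-length vector of such angles and swing distances, which is exactly the form a neural network classifier expects as input.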

  4. Development of a Video-Microscopic Tool To Evaluate the Precipitation Kinetics of Poorly Water Soluble Drugs: A Case Study with Tadalafil and HPMC.

    PubMed

    Christfort, Juliane Fjelrad; Plum, Jakob; Madsen, Cecilie Maria; Nielsen, Line Hagner; Sandau, Martin; Andersen, Klaus; Müllertz, Anette; Rades, Thomas

    2017-12-04

    Many drug candidates today have a low aqueous solubility and, hence, may show a low oral bioavailability, presenting a major formulation and drug delivery challenge. One way to increase the bioavailability of these drugs is to use a supersaturating drug delivery strategy. The aim of this study was to develop a video-microscopic method, to evaluate the effect of a precipitation inhibitor on supersaturated solutions of the poorly soluble drug tadalafil, using a novel video-microscopic small scale setup. Based on preliminary studies, a degree of supersaturation of 29 was chosen for the supersaturation studies with tadalafil in FaSSIF. Different amounts of hydroxypropyl methyl cellulose (HPMC) were predissolved in FaSSIF to give four different concentrations, and the supersaturated system was then created using a solvent shift method. Precipitation of tadalafil from the supersaturated solutions was monitored by video-microscopy as a function of time. Single-particle analysis was possible using commercially available software; however, to investigate the entire population of precipitating particles (i.e., their number and area covered in the field of view), an image analysis algorithm was developed (multiparticle analysis). The induction time for precipitation of tadalafil in FaSSIF was significantly prolonged by adding 0.01% (w/v) HPMC to FaSSIF, and the maximum inhibition was reached at 0.1% (w/v) HPMC, after which additional HPMC did not further increase the induction time. The single-particle and multiparticle analyses yielded the same ranking of the HPMC concentrations, regarding the inhibitory effect on precipitation. The developed small scale method to assess the effect of precipitation inhibitors can speed up the process of choosing the right precipitation inhibitor and the concentration to be used.
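
    The multiparticle analysis described reports the number of precipitate pixels and the area they cover in the field of view. A toy sketch of that readout (the frame and threshold are invented and this is not the authors' algorithm):

    ```python
    # Count dark (precipitated) pixels in a thresholded frame and report
    # the fraction of the field of view they cover.

    def coverage(frame, threshold):
        """(dark_pixel_count, covered_fraction) for one frame."""
        dark = sum(1 for row in frame for px in row if px < threshold)
        total = len(frame) * len(frame[0])
        return dark, dark / total

    frame = [
        [200, 200, 40, 200],
        [200, 40, 40, 200],
        [200, 200, 200, 200],
    ]
    print(coverage(frame, 100))   # (3, 0.25)
    ```

    An induction time can then be estimated as the first frame whose covered fraction exceeds a small cutoff, which is the quantity the HPMC concentrations were ranked by.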

  5. Development of a Kinect Software Tool to Classify Movements during Active Video Gaming.

    PubMed

    Rosenberg, Michael; Thornton, Ashleigh L; Lay, Brendan S; Ward, Brodie; Nathan, David; Hunt, Daniel; Braham, Rebecca

    2016-01-01

    While it has been established that using full body motion to play active video games results in increased levels of energy expenditure, there is little information on the classification of human movement during active video game play in relationship to fundamental movement skills. The aim of this study was to validate software utilising Kinect sensor motion capture technology to recognise fundamental movement skills (FMS), during active video game play. Two human assessors rated jumping and side-stepping and these assessments were compared to the Kinect Action Recognition Tool (KART), to establish a level of agreement and determine the number of movements completed during five minutes of active video game play, for 43 children (m = 12 years 7 months ± 1 year 6 months). During five minutes of active video game play, inter-rater reliability, when examining the two human raters, was found to be higher for the jump (r = 0.94, p < .01) than the sidestep (r = 0.87, p < .01), although both were excellent. Excellent reliability was also found between human raters and the KART system for the jump (r = 0.84, p < .01) and moderate reliability for the sidestep (r = 0.6983, p < .01) during game play, demonstrating that both humans and KART had higher agreement for jumps than sidesteps in the game play condition. The results of the study provide confidence that the Kinect sensor can be used to count the number of jumps and sidesteps during five minutes of active video game play with a similar level of accuracy as human raters. However, in contrast to humans, the KART system required a fraction of the time to analyse and tabulate the results.
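
    The reliability figures quoted are Pearson correlations between two raters' movement counts. A self-contained sketch of that computation, with invented counts rather than the study's data:

    ```python
    import math

    # Pearson correlation between two raters' per-child movement counts.

    def pearson_r(x, y):
        """Pearson product-moment correlation of two equal-length lists."""
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        num = sum((a - mx) * (b - my) for a, b in zip(x, y))
        den = math.sqrt(sum((a - mx) ** 2 for a in x)
                        * sum((b - my) ** 2 for b in y))
        return num / den

    rater_a = [12, 15, 9, 20, 11]    # jumps counted by a human rater
    rater_b = [12, 14, 10, 19, 11]   # jumps counted by the automated tool
    print(round(pearson_r(rater_a, rater_b), 2))
    ```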

  6. Development of a Kinect Software Tool to Classify Movements during Active Video Gaming

    PubMed Central

    Rosenberg, Michael; Lay, Brendan S.; Ward, Brodie; Nathan, David; Hunt, Daniel; Braham, Rebecca

    2016-01-01

    While it has been established that using full body motion to play active video games results in increased levels of energy expenditure, there is little information on the classification of human movement during active video game play in relationship to fundamental movement skills. The aim of this study was to validate software utilising Kinect sensor motion capture technology to recognise fundamental movement skills (FMS), during active video game play. Two human assessors rated jumping and side-stepping and these assessments were compared to the Kinect Action Recognition Tool (KART), to establish a level of agreement and determine the number of movements completed during five minutes of active video game play, for 43 children (m = 12 years 7 months ± 1 year 6 months). During five minutes of active video game play, inter-rater reliability, when examining the two human raters, was found to be higher for the jump (r = 0.94, p < .01) than the sidestep (r = 0.87, p < .01), although both were excellent. Excellent reliability was also found between human raters and the KART system for the jump (r = 0.84, p < .01) and moderate reliability for the sidestep (r = 0.6983, p < .01) during game play, demonstrating that both humans and KART had higher agreement for jumps than sidesteps in the game play condition. The results of the study provide confidence that the Kinect sensor can be used to count the number of jumps and sidesteps during five minutes of active video game play with a similar level of accuracy as human raters. However, in contrast to humans, the KART system required a fraction of the time to analyse and tabulate the results. PMID:27442437

  7. An iPad™-based picture and video activity schedule increases community shopping skills of a young adult with autism spectrum disorder and intellectual disability.

    PubMed

    Burckley, Elizabeth; Tincani, Matt; Guld Fisher, Amanda

    2015-04-01

    To evaluate the iPad 2™ with Book Creator™ software to provide visual cues and video prompting to teach shopping skills in the community to a young adult with an autism spectrum disorder and intellectual disability. A multiple probe across settings design was used to assess effects of the intervention on the participant's independence with following a shopping list in a grocery store across three community locations. Visual cues and video prompting substantially increased the participant's shopping skills within two of the three community locations, skill increases maintained after the intervention was withdrawn, and shopping skills generalized to two untaught shopping items. Social validity surveys suggested that the participant's parent and staff favorably viewed the goals, procedures, and outcomes of intervention. The iPad 2™ with Book Creator™ software may be an effective way to teach independent shopping skills in the community; additional replications are needed.

  8. The interactive digital video interface

    NASA Technical Reports Server (NTRS)

    Doyle, Michael D.

    1989-01-01

    A frequent complaint in the computer-oriented trade journals is that current hardware technology is progressing so quickly that software developers cannot keep up. An example of this phenomenon can be seen in the field of microcomputer graphics. To exploit the advantages of new mechanisms of information storage and retrieval, new approaches must be made towards incorporating existing programs as well as developing entirely new applications. A particular area of need is the correlation of discrete image elements to textual information. The interactive digital video (IDV) interface embodies a new concept in software design which addresses these needs. The IDV interface is a patented, device- and language-independent process for identifying image features on a digital video display which allows a number of different processes to be keyed to that identification. Its capabilities include the correlation of discrete image elements to relevant text information and the correlation of these image features to other images as well as to program control mechanisms. Sophisticated interrelationships can be set up between images, text, and program control mechanisms.

  9. Accuracy of complete-arch model using an intraoral video scanner: An in vitro study.

    PubMed

    Jeong, Il-Do; Lee, Jae-Jun; Jeon, Jin-Hun; Kim, Ji-Hwan; Kim, Hae-Young; Kim, Woong-Chul

    2016-06-01

    Information on the accuracy of intraoral video scanners for long-span areas is limited. The purpose of this in vitro study was to evaluate and compare the trueness and precision of an intraoral video scanner, an intraoral still image scanner, and a blue-light scanner for the production of digital impressions. Reference scan data were obtained by scanning a complete-arch model. An identical model was scanned 8 times using an intraoral video scanner (CEREC Omnicam; Sirona) and an intraoral still image scanner (CEREC Bluecam; Sirona), and stone casts made from conventional impressions of the same model were scanned 8 times with a blue-light scanner as a control (Identica Blue; Medit). Accuracy consists of trueness (the extent to which the scan data differ from the reference scan) and precision (the similarity of the data from multiple scans). To evaluate precision, 8 scans were superimposed using 3-dimensional analysis software; the reference scan data were then superimposed to determine the trueness. Differences were analyzed using 1-way ANOVA and post hoc Tukey HSD tests (α=.05). Trueness in the video scanner group was not significantly different from that in the control group. However, the video scanner group showed significantly lower values than those of the still image scanner group for all variables (P<.05), except in tolerance range. The root mean square, standard deviations, and mean negative precision values for the video scanner group were significantly higher than those for the other groups (P<.05). Digital impressions obtained by the intraoral video scanner showed better accuracy for long-span areas than those captured by the still image scanner. However, the video scanner was less accurate than the laboratory scanner. Copyright © 2016 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.
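
    Trueness and precision as used above both come down to deviations between superimposed point sets, typically summarized as a root mean square (RMS) value. A sketch of that metric with hypothetical, already-aligned 3-D points (the superimposition step itself is done by the analysis software):

    ```python
    import math

    # RMS deviation between corresponding points of a test scan and a
    # reference scan, after superimposition. Point values are invented.

    def rms_deviation(scan, reference):
        """Root mean square of point-to-point distances between two scans."""
        sq = [sum((s - r) ** 2 for s, r in zip(p, q))
              for p, q in zip(scan, reference)]
        return math.sqrt(sum(sq) / len(sq))

    reference = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (1.0, 1.0, 0.0)]
    scan = [(0.0, 0.0, 0.1), (1.0, 0.0, -0.1), (1.0, 1.0, 0.1)]
    print(round(rms_deviation(scan, reference), 3))
    ```

    Comparing each scan against the reference gives trueness; comparing repeated scans against each other gives precision.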

  10. Virtual interactive presence for real-time, long-distance surgical collaboration during complex microsurgical procedures.

    PubMed

    Shenai, Mahesh B; Tubbs, R Shane; Guthrie, Barton L; Cohen-Gadol, Aaron A

    2014-08-01

    The shortage of surgeons compels the development of novel technologies that geographically extend the capabilities of individual surgeons and enhance surgical skills. The authors have developed "Virtual Interactive Presence" (VIP), a platform that allows remote participants to simultaneously view each other's visual field, creating a shared field of view for real-time surgical telecollaboration. The authors demonstrate the capability of VIP to facilitate long-distance telecollaboration during cadaveric dissection. Virtual Interactive Presence consists of local and remote workstations with integrated video capture devices and video displays. Each workstation mutually connects via commercial teleconferencing devices, allowing worldwide point-to-point communication. Software composites the local and remote video feeds, displaying a hybrid perspective to each participant. For demonstration, local and remote VIP stations were situated in Indianapolis, Indiana, and Birmingham, Alabama, respectively. A suboccipital craniotomy and microsurgical dissection of the pineal region was performed in a cadaveric specimen using VIP. Task and system performance were subjectively evaluated, while additional video analysis was used for objective assessment of delay and resolution. Participants at both stations were able to visually and verbally interact while identifying anatomical structures, guiding surgical maneuvers, and discussing overall surgical strategy. Video analysis of 3 separate video clips yielded a mean compositing delay of 760 ± 606 msec (when compared with the audio signal). Image resolution was adequate to visualize complex intracranial anatomy and provide interactive guidance. Virtual Interactive Presence is a feasible paradigm for real-time, long-distance surgical telecollaboration. Delay, resolution, scaling, and registration are parameters that require further optimization, but are within the realm of current technology. 
The paradigm potentially enables remotely located experts to mentor less experienced personnel at the surgical site, with applications in surgical training programs, remote proctoring for proficiency, and expert support for rural settings and across different countries.

  11. Design of multi-mode compatible image acquisition system for HD area array CCD

    NASA Astrophysics Data System (ADS)

    Wang, Chen; Sui, Xiubao

    2014-11-01

In line with the current trends in video surveillance toward digitization and high definition, a multimode-compatible image acquisition system for an HD area array CCD is designed. The hardware and software designs of a color video capture system for the KAI-02150 HD area array CCD produced by Truesense Imaging are analyzed, and the structural parameters of the HD area array CCD and the color video acquisition principle of the system are introduced. The CCD control sequence and the timing logic of the whole capture system are then realized. Video-signal noise (kTC noise and 1/f noise) is filtered using the Correlated Double Sampling (CDS) technique to enhance the signal-to-noise ratio of the system. Compatible hardware and software designs are put forward for two other image sensors of the same series, the KAI-04050 and KAI-08050, which offer four million and eight million effective pixels, respectively. A Field Programmable Gate Array (FPGA) is adopted as the key controller of the system, enabling a top-down modular design that implements the hardware logic in software and improves development efficiency. Finally, the required timing drive signals are simulated accurately in the Quartus II 12.1 development platform using VHDL. The simulation results indicate that the driving circuit is characterized by a simple framework, low power consumption, and strong anti-interference ability, meeting current demands for miniaturization and high definition.
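The kTC-noise suppression by Correlated Double Sampling described above can be sketched numerically: each pixel is sampled once at its reset level and once at its signal level, and the subtraction cancels the reset noise common to both samples. A minimal illustration (the pixel count, offsets, and noise level below are assumptions for demonstration, not parameters of the described system):

```python
import numpy as np

def correlated_double_sampling(reset_samples, signal_samples):
    """Subtract each pixel's reset level from its signal level.

    The kTC (reset) noise is identical in both samples of a given
    pixel readout, so the subtraction cancels it while keeping the
    photo-generated signal.
    """
    return signal_samples - reset_samples

# Illustrative readout: a true signal of 100 DN plus a per-pixel
# reset offset (the kTC noise component) shared by both samples.
rng = np.random.default_rng(0)
ktc_noise = rng.normal(0.0, 5.0, size=1000)   # shared between samples
reset = 500.0 + ktc_noise                      # reset-level sample
signal = 500.0 + ktc_noise + 100.0             # signal-level sample

corrected = correlated_double_sampling(reset, signal)
print(corrected.mean())  # ~100 DN, with the kTC component removed
```

Note that 1/f noise is only partially suppressed by CDS (it benefits from the short interval between the two samples), whereas the kTC component cancels exactly in this idealized model.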

  12. Measurement of interfacial tension by use of pendant drop video techniques

    NASA Astrophysics Data System (ADS)

    Herd, Melvin D.; Thomas, Charles P.; Bala, Gregory A.; Lassahn, Gordon D.

    1993-09-01

This report describes an instrument to measure the interfacial tension (IFT) of aqueous surfactant solutions and crude oil. The method involves injection of a drop of fluid (such as crude oil) into a second, immiscible phase to determine the IFT between the two phases. The instrument is composed of an AT-class computer, optical cell, illumination, video camera and lens, video frame digitizer board, monitor, and software. The camera displays an image of the pendant drop on the monitor, which is then processed by the frame digitizer board and non-proprietary software to determine the IFT. Several binary and ternary phase systems were taken from the literature and used to measure the precision and accuracy of the instrument in determining IFTs. A copy of the software program is included in the report. A copy of the program on diskette can be obtained from the Energy Science and Technology Software Center, P.O. Box 1020, Oak Ridge, TN 37831-1020. The accuracy and precision of the technique and apparatus presented are very good for measurement of IFTs in the range from 72 to 10^-2 mN/m, which is adequate for many EOR applications. With modifications to the equipment and the numerical techniques, measurements of ultralow IFTs (less than 10^-3 mN/m) should be possible, as well as measurements at reservoir temperature and pressure conditions. The instrument has been used at the Idaho National Engineering Laboratory to support the research program on microbial enhanced oil recovery. Measurements of IFTs against medium to heavy crude oils are reported for several bacterial supernatants and unfractionated acid precipitates of microbial cultures containing biosurfactants. These experiments demonstrate that automated video imaging of pendant drops is a simple and fast method to reliably determine the interfacial tension between two immiscible liquid phases, or between a gas and a liquid phase.
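The record does not reproduce the numerical method, but a widely used drop-shape relation for pendant-drop instruments of this kind is γ = Δρ·g·dₑ²/H, where Δρ is the density difference, dₑ the equatorial drop diameter, and 1/H a shape factor obtained from published tables via the ratio S = dₛ/dₑ (dₛ measured at a height dₑ above the drop apex). A hedged sketch, with 1/H supplied from such a table rather than computed (the numbers below are illustrative, not the report's data):

```python
def interfacial_tension(delta_rho, d_e, one_over_H, g=9.81):
    """Pendant-drop IFT from the classical drop-shape relation.

    delta_rho  : density difference between the phases [kg/m^3]
    d_e        : equatorial (maximum) drop diameter [m]
    one_over_H : shape factor 1/H looked up from S = d_s / d_e
                 in standard pendant-drop tables (assumed given)
    Returns the IFT in N/m (multiply by 1000 for mN/m).
    """
    return delta_rho * g * d_e**2 * one_over_H

# Illustrative values only: a ~2.7 mm drop with an assumed shape
# factor of 1.0, giving a value near water's ~72 mN/m in air.
gamma = interfacial_tension(delta_rho=998.0, d_e=2.7e-3, one_over_H=1.0)
print(round(gamma * 1000, 1), "mN/m")  # → 71.4 mN/m
```

In an automated instrument, dₑ and dₛ would come from edge detection on the digitized video frame; the relation above then turns the two measured diameters into an IFT.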

  13. Access NASA Satellite Global Precipitation Data Visualization on YouTube

    NASA Technical Reports Server (NTRS)

    Liu, Z.; Su, J.; Acker, J.; Huffman, G.; Vollmer, B.; Wei, J.; Meyer, D.

    2017-01-01

Since the satellite era began, NASA has collected a large volume of Earth science observations for research and applications around the world. The satellite data collected and archived at 12 NASA data centers can also be used for STEM education and activities involving, for example, disaster events and climate change. However, accessing satellite data can be a daunting task for non-professional users such as teachers and students because of unfamiliar terminology, disciplines, data formats, data structures, computing resources, processing software, programming languages, etc. Over the years, many efforts, including tools, training classes, and tutorials, have been developed to improve satellite data access, but barriers still exist for non-professionals. In this presentation, we describe our latest activity, which uses the very popular online video sharing Web site YouTube (https://www.youtube.com/) to provide access to visualizations of our global precipitation datasets at the NASA Goddard Earth Sciences (GES) Data and Information Services Center (DISC). With YouTube, users can access and visualize a large volume of satellite data without needing to learn new software or download data. The dataset in this activity is a one-month animation of the GPM (Global Precipitation Measurement) Integrated Multi-satellite Retrievals for GPM (IMERG). IMERG provides precipitation at half-hourly intervals with near-global (60 deg. N-S) coverage, capturing more detail of precipitation processes and development than the 3-hourly TRMM (Tropical Rainfall Measuring Mission) Multisatellite Precipitation Analysis (TMPA, 3B42) product. When the retro-processing of IMERG back through the TRMM era is finished in 2018, the entire video will comprise more than 330,000 files and will last 3.6 hours. Future plans include development of flyover videos of orbital data for an entire satellite mission or project.
All videos, including the one-month animation, will be uploaded and available at the GES DISC site on YouTube (https://www.youtube.com/user/NASAGESDISC).
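The quoted figures are mutually consistent if each half-hourly IMERG file becomes one video frame and the animation plays at roughly 25 frames per second; the frame rate and the ~19-year span are our assumptions for this back-of-the-envelope check, not values stated in the record:

```python
# Half-hourly files over an assumed ~19-year TRMM-era record.
files = 19 * 365 * 48          # 48 half-hourly files per day
fps = 25                       # assumed playback frame rate
hours = files / fps / 3600     # one frame per file
print(files, round(hours, 1))  # ~333,000 files, ~3.7 hours
```

This matches the abstract's "more than 330,000 files" and roughly 3.6-hour running time.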

  14. Quantitative contrast-enhanced ultrasound for monitoring vedolizumab therapy in inflammatory bowel disease patients: a pilot study.

    PubMed

    Goertz, Ruediger S; Klett, Daniel; Wildner, Dane; Atreya, Raja; Neurath, Markus F; Strobel, Deike

    2018-01-01

Background Microvascularization of the bowel wall can be visualized and quantified non-invasively by software-assisted analysis of derived time-intensity curves. Purpose To perform software-based quantification of bowel wall perfusion using quantitative contrast-enhanced ultrasound (CEUS) according to clinical response in patients with inflammatory bowel disease treated with vedolizumab. Material and Methods In a prospective study, 18 of 34 patients underwent high-frequency ultrasound assessment of bowel wall thickness with color Doppler flow combined with CEUS at baseline and after 14 weeks of treatment with vedolizumab. Clinical activity scores at week 14 were used to differentiate between responders and non-responders. CEUS parameters were calculated by software analysis of the video loops. Results Of the 18 patients (11 with Crohn's disease and seven with ulcerative colitis), nine showed response to treatment with vedolizumab. Overall, the responder group showed a significant decrease in the semi-quantitative color Doppler vascularization score. Amplitude-derived CEUS parameters of mural microvascularization, such as peak enhancement and wash-in rate, decreased in responders, in contrast with non-responders. Time-derived parameters remained stable or increased during treatment in all patients. Conclusion Analysis of bowel microvascularization by CEUS shows statistically significant changes in the wash-in rate related to response to vedolizumab therapy.
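Amplitude- and time-derived perfusion parameters of the kind analyzed here are read off a time-intensity curve (TIC). A minimal sketch on synthetic data (not the study's software or data), taking peak enhancement as the curve maximum, time to peak as its abscissa, and the wash-in rate as the steepest slope on the rising limb:

```python
import numpy as np

def tic_parameters(t, intensity):
    """Basic perfusion parameters from a contrast time-intensity curve."""
    peak_idx = int(np.argmax(intensity))
    peak_enhancement = float(intensity[peak_idx])   # amplitude-derived
    time_to_peak = float(t[peak_idx])               # time-derived
    # Wash-in rate: maximum slope on the rising limb of the curve.
    rise_slopes = np.diff(intensity[: peak_idx + 1]) / np.diff(t[: peak_idx + 1])
    wash_in_rate = float(rise_slopes.max())
    return peak_enhancement, time_to_peak, wash_in_rate

# Synthetic bolus curve with a gamma-variate-like rise and decay,
# peaking at t = 10 s with amplitude 40 (arbitrary units).
t = np.linspace(0.0, 60.0, 241)
intensity = 40.0 * (t / 10.0) * np.exp(1.0 - t / 10.0)

pe, ttp, wir = tic_parameters(t, intensity)
print(pe, ttp, round(wir, 2))
```

In practice the TIC would come from a region of interest in the bowel wall across the CEUS video loop, usually after curve fitting to suppress frame-to-frame noise.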

  15. Interrater Reliability and Diagnostic Performance of Subjective Evaluation of Sublingual Microcirculation Images by Physicians and Nurses: A Multicenter Observational Study.

    PubMed

    Lima, Alexandre; López, Alejandra; van Genderen, Michel E; Hurtado, Francisco Javier; Angulo, Martin; Grignola, Juan C; Shono, Atsuko; van Bommel, Jasper

    2015-09-01

This was a cross-sectional multicenter study to investigate the ability of physicians and nurses from three different countries to subjectively evaluate sublingual microcirculation images and thereby discriminate normal from abnormal sublingual microcirculation based on flow and density abnormalities. Forty-five physicians and 61 nurses (mean age, 36 ± 10 years; 44 males) from three different centers in The Netherlands (n = 61), Uruguay (n = 12), and Japan (n = 33) were asked to subjectively evaluate a sample of 15 microcirculation videos randomly selected from an experimental model of endotoxic shock in pigs. All videos were first analyzed offline using the A.V.A. software by an independent, experienced investigator and were categorized as good, bad, or very bad microcirculation based on the microvascular flow index, perfused capillary density, and proportion of perfused capillaries. Then, the videos were randomly assigned to the examiners, who were instructed to subjectively categorize each image as good, bad, or very bad. An interrater analysis was performed, and sensitivity and specificity were calculated to evaluate the proportion of A.V.A. score abnormalities that the examiners correctly identified. The κ statistics indicated moderate agreement in the evaluation of microcirculation abnormalities using three categories, i.e., good, bad, or very bad (κ = 0.48), and substantial agreement using two categories, i.e., normal (good) and abnormal (bad or very bad) (κ = 0.66). There was no significant difference between the three-category and two-category κ statistics. We found that the examiners' subjective evaluations had good diagnostic performance and were highly sensitive (84%; 95% confidence interval, 81%-86%) and specific (87%; 95% confidence interval, 84%-90%) for sublingual microcirculatory abnormalities as assessed using the A.V.A. software.
The subjective evaluations of sublingual microcirculation by physicians and nurses agreed well with a conventional offline analysis and were highly sensitive and specific for sublingual microcirculatory abnormalities.
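The agreement and diagnostic statistics reported above follow standard definitions and can be reproduced from a rater-versus-reference confusion table. A minimal sketch with made-up counts (not the study's data), for the two-category normal/abnormal case:

```python
def cohens_kappa(table):
    """Cohen's kappa for a square inter-rater confusion table."""
    k = len(table)
    n = sum(sum(row) for row in table)
    p_obs = sum(table[i][i] for i in range(k)) / n
    # Chance agreement from the row and column marginals.
    p_exp = sum(
        sum(table[i]) * sum(row[i] for row in table) for i in range(k)
    ) / n**2
    return (p_obs - p_exp) / (1 - p_exp)

def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity and specificity from detection counts."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical ratings of 100 videos against the reference analysis.
table = [[40, 10],   # reference normal:   rated normal / abnormal
         [ 5, 45]]   # reference abnormal: rated normal / abnormal
print(round(cohens_kappa(table), 2))           # 0.7 (substantial agreement)
print(sensitivity_specificity(45, 5, 40, 10))  # (0.9, 0.8)
```

On the conventional scale, κ between 0.41 and 0.60 is "moderate" and 0.61 to 0.80 "substantial", which is how the study's κ = 0.48 and κ = 0.66 are labeled.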

  16. Lawrence Livermore National Laboratory's Computer Security Short Subjects Videos: Hidden Password, The Incident, Dangerous Games and The Mess; Computer Security Awareness Guide

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

A video on computer security is described. Lonnie Moore, the Computer Security Manager, CSSM/CPPM at Lawrence Livermore National Laboratory (LLNL), and Gale Warshawsky, the Coordinator for Computer Security Education and Awareness at LLNL, wanted to share topics such as computer ethics, software piracy, privacy issues, and protecting information in a format that would capture and hold an audience's attention. Four Computer Security Short Subject videos were produced, each running 1 to 3 minutes. These videos are very effective education and awareness tools that can be used to generate discussions about computer security concerns and good computing practices.

  17. HEVC real-time decoding

    NASA Astrophysics Data System (ADS)

    Bross, Benjamin; Alvarez-Mesa, Mauricio; George, Valeri; Chi, Chi Ching; Mayer, Tobias; Juurlink, Ben; Schierl, Thomas

    2013-09-01

The new High Efficiency Video Coding standard (HEVC) was finalized in January 2013. Compared to its predecessor H.264/MPEG-4 AVC, this new international standard is able to reduce the bitrate by 50% for the same subjective video quality. This paper investigates decoder optimizations that are needed to achieve HEVC real-time software decoding on a mobile processor. It is shown that HEVC real-time decoding up to high-definition video is feasible using instruction extensions of the processor, while decoding 4K ultra-high-definition video in real time requires additional parallel processing. For parallel processing, a picture-level parallel approach has been chosen because it is generic and does not require special indication in the bitstream.
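Picture-level parallelism of the kind chosen here decodes several pictures concurrently, scheduling each picture only once all of its reference pictures are available. The scheduling sketch below is a generic illustration of that idea (the dependency graph and decode stub are hypothetical, not the paper's decoder):

```python
from concurrent.futures import ThreadPoolExecutor

def decode_picture(pic_id):
    # Placeholder for the actual HEVC picture decode.
    return f"decoded {pic_id}"

# Each picture lists the reference pictures it depends on
# (a hypothetical small GOP structure for illustration).
refs = {0: [], 1: [0], 2: [0], 3: [1, 2], 4: [1, 2]}

decoded = {}
with ThreadPoolExecutor(max_workers=4) as pool:
    pending = dict(refs)
    while pending:
        # All pictures whose references are decoded can run in parallel.
        ready = [p for p, deps in pending.items()
                 if all(d in decoded for d in deps)]
        futures = {p: pool.submit(decode_picture, p) for p in ready}
        for p, fut in futures.items():
            decoded[p] = fut.result()
        for p in ready:
            del pending[p]

print(sorted(decoded))  # [0, 1, 2, 3, 4]
```

The approach is "generic" in the sense stated in the abstract: unlike wavefront or tile parallelism, it needs no special signalling in the bitstream, only knowledge of the inter-picture reference structure.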

  18. Vulnerabilities in GSM technology and feasibility of selected attacks

    NASA Astrophysics Data System (ADS)

    Voznak, M.; Prokes, M.; Sevcik, L.; Frnda, J.; Toral-Cruz, Homer; Jakovlev, Sergej; Fazio, Peppino; Mehic, M.; Mikulec, M.

    2015-05-01

The Global System for Mobile Communications (GSM) is the most widespread mobile communication technology in the world, serving over 7 billion users. Potential security problems have been noted since the system documentation was first published. Selected types of attacks, chosen based on an analysis of their technical feasibility and the degree of risk posed by these weaknesses, were implemented and demonstrated in the laboratory of the VSB-Technical University of Ostrava, Czech Republic. The vulnerabilities were analyzed and the possible attacks then described. The attacks were implemented using open-source tools, the USRP (Universal Software Radio Peripheral) software-programmable radio, and a DVB-T (Digital Video Broadcasting - Terrestrial) receiver. The GSM security architecture has been scrutinized since the first public releases of its specification, mainly pointing out weaknesses in its authentication and ciphering mechanisms. This contribution also summarizes practically proven scenarios performed using open-source software tools and a variety of scripts, mostly written in Python. The main goal of this paper is to analyze security issues in GSM networks and to practically demonstrate selected attacks.

  19. A Pressure Plate-Based Method for the Automatic Assessment of Foot Strike Patterns During Running.

    PubMed

    Santuz, Alessandro; Ekizos, Antonis; Arampatzis, Adamantios

    2016-05-01

The foot strike pattern (FSP, a description of how the foot touches the ground at impact) is recognized as a predictor of both performance and injury risk. The objective of the current investigation was to validate an original foot strike pattern assessment technique based on numerical analysis of the foot pressure distribution. We analyzed the strike patterns of 145 healthy men and women (85 male, 60 female) during running. The participants ran on a treadmill with an integrated pressure plate at three different speeds: preferred (shod and barefoot, 2.8 ± 0.4 m/s), faster (shod, 3.5 ± 0.6 m/s), and slower (shod, 2.3 ± 0.3 m/s). A custom-designed algorithm allowed automatic footprint recognition and FSP evaluation. Incomplete footprints were identified and corrected by the software itself. The widely used technique of analyzing high-speed video recordings was checked for reliability and used to validate the numerical technique. The automatic numerical approach showed good conformity with the reference video-based technique (ICC = 0.93, p < 0.01). The great improvement in data throughput and the increased completeness of results allow the use of this software as a powerful feedback tool in a simple experimental setup.
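Pressure-based foot strike classification is commonly expressed through the strike index: the position of the initial centre of pressure along the foot's long axis, divided into rearfoot, midfoot, and forefoot thirds. The helper below illustrates that conventional scheme only; it is not the authors' algorithm, which works on full footprint pressure distributions:

```python
def classify_fsp(strike_index):
    """Classify the foot strike pattern from the strike index.

    strike_index: position of the initial centre of pressure as a
    fraction of foot length, with 0.0 at the heel and 1.0 at the toes.
    Uses the conventional thirds: rearfoot < 1/3 <= midfoot < 2/3 <= forefoot.
    """
    if strike_index < 1 / 3:
        return "rearfoot"
    if strike_index < 2 / 3:
        return "midfoot"
    return "forefoot"

print(classify_fsp(0.15), classify_fsp(0.50), classify_fsp(0.80))
# rearfoot midfoot forefoot
```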

  20. Combining High-Speed Cameras and Stop-Motion Animation Software to Support Students' Modeling of Human Body Movement

    NASA Astrophysics Data System (ADS)

    Lee, Victor R.

    2015-04-01

Biomechanics, and specifically the biomechanics associated with human movement, is a potentially rich backdrop against which educators can design innovative science teaching and learning activities. Moreover, the technologies associated with biomechanics research, such as high-speed cameras that can produce high-quality slow-motion video, can be deployed to support students' participation in practices of scientific modeling. As participants in a classroom design experiment, fifteen fifth-grade students worked with high-speed cameras and stop-motion animation software (SAM Animation) over several days to produce dynamic models of motion and body movement. The designed series of learning activities involved iterative cycles of animation creation and critique and the use of various depictive materials. Subsequent analysis of flipbooks of human jumping movements created by the students at the beginning and end of the unit revealed a significant improvement in the epistemic fidelity of students' representations. Excerpts from classroom observations highlight the role the teacher plays in supporting students' thoughtful reflection on and attention to slow-motion video. In total, this design and research intervention demonstrates that the combination of technologies, activities, and teacher support can lead to improvements in some of the foundations associated with students' modeling.

  1. Enabling MPEG-2 video playback in embedded systems through improved data cache efficiency

    NASA Astrophysics Data System (ADS)

    Soderquist, Peter; Leeser, Miriam E.

    1999-01-01

Digital video decoding, enabled by the MPEG-2 Video standard, is an important future application for embedded systems, particularly PDAs and other information appliances. Many such systems require portability and wireless communication capabilities, and thus face severe limitations in size and power consumption. This places a premium on integration and efficiency, and favors software solutions for video functionality over specialized hardware. The processors in most embedded systems currently lack the computational power needed to perform video decoding, but a related and equally important problem is the required data bandwidth, and the need to cost-effectively ensure adequate data supply. MPEG data sets are very large, and generate significant amounts of excess memory traffic for standard data caches, up to 100 times the amount required for decoding. Meanwhile, cost and power limitations restrict cache sizes in embedded systems. Some systems, including many media processors, eliminate caches in favor of memories under direct, painstaking software control in the manner of digital signal processors. Yet MPEG data has locality which caches can exploit if properly optimized, providing fast, flexible, and automatic data supply. We propose a set of enhancements which target the specific needs of the heterogeneous data types within the MPEG decoder working set. These optimizations significantly improve the efficiency of small caches, reducing cache-memory traffic by almost 70 percent, and can make an enhanced 4 KB cache perform better than a standard 1 MB cache. This performance improvement can enable high-resolution, full frame rate video playback in cheaper, smaller systems than would otherwise be possible.

  2. Video Recording With a GoPro in Hand and Upper Extremity Surgery.

    PubMed

    Vara, Alexander D; Wu, John; Shin, Alexander Y; Sobol, Gregory; Wiater, Brett

    2016-10-01

Video recordings of surgical procedures are an excellent tool for presentations, analyzing self-performance, illustrating publications, and educating surgeons and patients. Recording the surgeon's perspective with high-resolution video in the operating room or clinic has become readily available, and advances in software have made editing these videos easier. A GoPro HERO 4 Silver or Black was mounted on a head strap and worn over the surgical scrub cap, above the loupes of the operating surgeon. Five live surgical cases were recorded with the camera. The videos were uploaded to a computer and subsequently edited with iMovie or the GoPro software. The optimal settings for both the Silver and Black editions, when operating room lights are used, were determined to be a narrow view, 1080p, 60 frames per second (fps), spot meter on, protune on with auto white balance, exposure compensation at -0.5, and no polarizing lens. When the operating room lights were not used, the standard settings for a GoPro camera (4K, 15 fps, spot meter and protune off) were ideal for positioning and editing. The GoPro HERO 4 provides a high-quality, cost-effective recording of upper extremity surgical procedures from the surgeon's perspective. Challenges include finding the optimal settings for each surgical procedure and the length of recording possible given battery life limitations. Copyright © 2016 American Society for Surgery of the Hand. Published by Elsevier Inc. All rights reserved.

  3. Advanced Spacesuit Informatics Software Design for Power, Avionics and Software Version 2.0

    NASA Technical Reports Server (NTRS)

    Wright, Theodore W.

    2016-01-01

This report describes the software design for the 2016 edition of the Informatics computer assembly of NASA's Advanced Extravehicular Mobility Unit (AEMU), also called the Advanced Spacesuit. The Informatics system is an optional part of the spacesuit assembly. It adds a graphical interface for displaying suit status, timelines, procedures, and warning information. It also provides an interface to the suit-mounted camera for recording still images, video, and audio field notes.

  4. Design and implementation of the mobility assessment tool: software description.

    PubMed

    Barnard, Ryan T; Marsh, Anthony P; Rejeski, Walter Jack; Pecorella, Anthony; Ip, Edward H

    2013-07-23

In previous work, we described the development of an 81-item video-animated tool for assessing mobility. In response to criticism levied during a pilot study of this tool, we sought to develop a new version built upon a flexible framework for designing and administering the instrument. Rather than constructing a self-contained software application with a hard-coded instrument, we designed an XML schema capable of describing a variety of psychometric instruments. The new version of our video-animated assessment tool was then defined fully within the context of a compliant XML document. Two software applications, one built in Java and the other in Objective-C for the Apple iPad, were then developed to present the instrument described in the XML document and collect participants' responses. Separating the instrument's definition from the software application implementing it allowed for rapid iteration and easy, reliable definition of variations. Defining instruments in a software-independent XML document simplifies the process of defining instruments and variations, and allows a single instrument to be deployed on as many platforms as there are software applications capable of interpreting the instrument, thereby broadening the potential target audience for the instrument. Continued work will be done to further specify and refine this type of instrument specification, with a focus on spurring adoption by researchers in gerontology and geriatric medicine.
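Separating the instrument definition from the application, as described above, means any compliant player can load the same XML document. The element and attribute names below are hypothetical (the record does not reproduce the authors' published schema); the sketch shows only the general pattern of parsing such a definition:

```python
import xml.etree.ElementTree as ET

# Hypothetical instrument definition; element and attribute names
# are illustrative, not the authors' schema.
document = """
<instrument name="mobility-assessment" version="2">
  <item id="1" type="video">
    <prompt>Can you perform the movement shown?</prompt>
    <response kind="likert" levels="5"/>
  </item>
  <item id="2" type="video">
    <prompt>Can you climb the stairs shown?</prompt>
    <response kind="likert" levels="5"/>
  </item>
</instrument>
"""

root = ET.fromstring(document)
items = root.findall("item")
print(root.get("name"), len(items))  # mobility-assessment 2
```

A Java or Objective-C player would walk the same tree, render each item's video and prompt, and record the response, so a new instrument variant is just a new XML document, not a new build.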

  5. Design and implementation of the mobility assessment tool: software description

    PubMed Central

    2013-01-01

Background In previous work, we described the development of an 81-item video-animated tool for assessing mobility. In response to criticism levied during a pilot study of this tool, we sought to develop a new version built upon a flexible framework for designing and administering the instrument. Results Rather than constructing a self-contained software application with a hard-coded instrument, we designed an XML schema capable of describing a variety of psychometric instruments. The new version of our video-animated assessment tool was then defined fully within the context of a compliant XML document. Two software applications, one built in Java and the other in Objective-C for the Apple iPad, were then developed to present the instrument described in the XML document and collect participants' responses. Separating the instrument's definition from the software application implementing it allowed for rapid iteration and easy, reliable definition of variations. Conclusions Defining instruments in a software-independent XML document simplifies the process of defining instruments and variations, and allows a single instrument to be deployed on as many platforms as there are software applications capable of interpreting the instrument, thereby broadening the potential target audience for the instrument. Continued work will be done to further specify and refine this type of instrument specification, with a focus on spurring adoption by researchers in gerontology and geriatric medicine. PMID:23879716

  6. That's Infotainment!: How to Create Your Own Screencasts

    ERIC Educational Resources Information Center

    Kroski, Ellyssa

    2009-01-01

    Screencasts are videos that record the actions that take place on the computer screen, most often including a narrative audio track, in order to demonstrate various computer-related tasks, such as how to use a software program or navigate a certain Web site. All that is needed is a standard microphone and screen recording software, which can be…

  7. Development of Science Simulations for Mildly Mentally Retarded or Learning Disabled Students. Final Report.

    ERIC Educational Resources Information Center

    Macro Systems, Inc., Silver Spring, MD.

    This final report describes the development of eight computer based science simulations designed for use with middle school mainstreamed students having learning disabilities or mild mental retardation. The total program includes software, a teacher's manual, 3 videos, and a set of 30 activity worksheets. Special features of the software for…

  8. Hammond Workforce 2000: A Three-Year Project. October 1989 to September 1992.

    ERIC Educational Resources Information Center

    Meyers, Arthur S.; Somerville, Deborah J.

    A 3-year Library Services and Construction Act grant project from 1989-1992 provided for adult learning centers, equipped with Apple IIGS computers and software at each location of the Hammond Public Library (Indiana). User-friendly, job-based software to strengthen reading, writing, mathematics, spelling, and grammar skills, as well as video and…

  9. A new software tool for 3D motion analyses of the musculo-skeletal system.

    PubMed

    Leardini, A; Belvedere, C; Astolfi, L; Fantozzi, S; Viceconti, M; Taddei, F; Ensini, A; Benedetti, M G; Catani, F

    2006-10-01

Many clinical and biomechanical research studies, particularly in orthopaedics, nowadays involve forms of movement analysis. Gait analysis, video-fluoroscopy of joint replacement, pre-operative planning, surgical navigation, and standard radiostereometry all require tools for easy access to three-dimensional graphical representations of rigid segment motion. Relevant data from this variety of sources need to be organised in structured forms. Registration, integration, and synchronisation of segment position data are additional necessities. With this aim, the present work exploits, in a series of different research studies, the features of a software tool recently developed within an EU-funded project ('Multimod'). Standard and advanced gait analysis of a normal subject, in vivo fluoroscopy-based three-dimensional motion of a replaced knee joint, patellar and ligament tracking on a knee specimen by a surgical navigation system, and the stem-to-femur migration pattern in a patient who underwent total hip replacement were analysed with standard techniques and all represented with this innovative software tool. Segment pose data were eventually obtained from these different techniques, and were successfully imported and organised in a hierarchical tree within the tool. Skeletal bony segments, prosthesis component models, and ligament links were registered successfully to corresponding marker position data for effective three-dimensional animations. These were shown in various combinations, in different views, and from different perspectives, according to specific research interests. This software tool greatly facilitates the interpretation of the movement analysis measurements that bioengineering and medical professionals require in their research fields.

  10. Automated face detection for occurrence and occupancy estimation in chimpanzees.

    PubMed

    Crunchant, Anne-Sophie; Egerer, Monika; Loos, Alexander; Burghardt, Tilo; Zuberbühler, Klaus; Corogenes, Katherine; Leinert, Vera; Kulik, Lars; Kühl, Hjalmar S

    2017-03-01

Surveying endangered species is necessary to evaluate conservation effectiveness. Camera trapping and biometric computer vision are recent technological advances. They have impacted the methods applicable to field surveys, and these methods have gained significant momentum over the last decade. Yet, most researchers inspect footage manually and few studies have used automated semantic processing of video trap data from the field. The particular aim of this study is to evaluate methods that incorporate automated face detection technology as an aid to estimate site use of two chimpanzee communities based on camera trapping. As a comparative baseline we employ traditional manual inspection of footage. Our analysis focuses specifically on the basic parameter of occurrence, where we assess the performance and practical value of chimpanzee face detection software. We found that the semi-automated data processing required only 2-4% of the time compared to the purely manual analysis. This is a non-negligible increase in efficiency that is critical when assessing the feasibility of camera trap occupancy surveys. Our evaluations suggest that our methodology estimates the proportion of sites used relatively reliably. Chimpanzees are mostly detected when they are present and when videos are filmed in high resolution: the highest recall rate was 77%, for a false alarm rate of 2.8% for videos containing only chimpanzee frontal face views. Certainly, our study is only a first step for transferring face detection software from the lab into field application. Our results are promising and indicate that the current limitation of detecting chimpanzees in camera trap footage due to lack of suitable face views can be easily overcome on the level of field data collection, that is, by the combined placement of multiple high-resolution cameras facing opposite directions.
This will enable researchers to routinely conduct chimpanzee occupancy surveys based on camera trapping and semi-automated processing of footage. Using semi-automated ape face detection technology to process camera trap footage requires only 2-4% of the time needed for manual analysis and allows site use by chimpanzees to be estimated relatively reliably. © 2017 Wiley Periodicals, Inc.
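The detection figures quoted above (77% recall at a 2.8% false alarm rate) follow the usual definitions over videos with and without chimpanzees present. A minimal sketch with hypothetical counts chosen only to reproduce those rates (not the study's raw data):

```python
def recall(tp, fn):
    """Fraction of chimpanzee-containing videos the detector flags."""
    return tp / (tp + fn)

def false_alarm_rate(fp, tn):
    """Fraction of chimpanzee-free videos incorrectly flagged."""
    return fp / (fp + tn)

# Hypothetical counts: 100 positive videos, 1000 negative videos.
print(round(recall(77, 23), 2))             # 0.77
print(round(false_alarm_rate(28, 972), 3))  # 0.028
```

For occupancy estimation, high recall keeps missed detections (false absences) rare, while a low false alarm rate keeps spurious presences from inflating the estimated proportion of sites used.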

  11. VASIR: An Open-Source Research Platform for Advanced Iris Recognition Technologies.

    PubMed

    Lee, Yooyoung; Micheals, Ross J; Filliben, James J; Phillips, P Jonathon

    2013-01-01

The performance of iris recognition systems is frequently affected by input image quality, which in turn is vulnerable to less-than-optimal conditions due to illuminations, environments, and subject characteristics (e.g., distance, movement, face/body visibility, blinking, etc.). VASIR (Video-based Automatic System for Iris Recognition) is a state-of-the-art NIST-developed iris recognition software platform designed to systematically address these vulnerabilities. We developed VASIR as a research tool that will not only provide a reference (to assess the relative performance of alternative algorithms) for the biometrics community, but will also advance (via this new emerging iris recognition paradigm) NIST's measurement mission. VASIR is designed to accommodate both ideal (e.g., classical still images) and less-than-ideal images (e.g., face-visible videos). VASIR has three primary modules: 1) Image Acquisition, 2) Video Processing, and 3) Iris Recognition. Each module consists of several sub-components that have been optimized by use of rigorous orthogonal experiment design and analysis techniques. We evaluated VASIR performance using the MBGC (Multiple Biometric Grand Challenge) NIR (Near-Infrared) face-visible video dataset and the ICE (Iris Challenge Evaluation) 2005 still-based dataset. The results showed that even though VASIR was primarily developed and optimized for the less-constrained video case, it still achieved high verification rates for the traditional still-image case. For this reason, VASIR may be used as an effective baseline for the biometrics community to evaluate their algorithm performance, and thus serves as a valuable research platform.

  12. VASIR: An Open-Source Research Platform for Advanced Iris Recognition Technologies

    PubMed Central

    Lee, Yooyoung; Micheals, Ross J; Filliben, James J; Phillips, P Jonathon

    2013-01-01

The performance of iris recognition systems is frequently affected by input image quality, which in turn is vulnerable to less-than-optimal conditions due to illumination, environment, and subject characteristics (e.g., distance, movement, face/body visibility, blinking, etc.). VASIR (Video-based Automatic System for Iris Recognition) is a state-of-the-art NIST-developed iris recognition software platform designed to systematically address these vulnerabilities. We developed VASIR as a research tool that will not only provide a reference (to assess the relative performance of alternative algorithms) for the biometrics community, but will also advance (via this new emerging iris recognition paradigm) NIST’s measurement mission. VASIR is designed to accommodate both ideal (e.g., classical still images) and less-than-ideal images (e.g., face-visible videos). VASIR has three primary modules: 1) Image Acquisition, 2) Video Processing, and 3) Iris Recognition. Each module consists of several sub-components that have been optimized by use of rigorous orthogonal experiment design and analysis techniques. We evaluated VASIR performance using the MBGC (Multiple Biometric Grand Challenge) NIR (Near-Infrared) face-visible video dataset and the ICE (Iris Challenge Evaluation) 2005 still-based dataset. The results showed that even though VASIR was primarily developed and optimized for the less-constrained video case, it still achieved high verification rates for the traditional still-image case. For this reason, VASIR may be used as an effective baseline for the biometrics community to evaluate their algorithm performance, and thus serves as a valuable research platform. PMID:26401431

  13. High-speed video analysis of forward and backward spattered blood droplets.

    PubMed

    Comiskey, P M; Yarin, A L; Attinger, D

    2017-07-01

High-speed videos of blood spatter due to a gunshot, taken by the Ames Laboratory Midwest Forensics Resource Center (MFRC) [1], are analyzed. The videos used in this analysis were focused on a variety of targets hit by a bullet, which caused forward spatter, backward spatter, or both. The analysis utilized particle image velocimetry (PIV) and particle analysis software to measure drop velocities as well as the distributions of the number of droplets and their respective side-view areas. The results revealed that the maximal velocity can be about 47±5 m/s in the forward spatter and about 24±8 m/s in the backward spatter. Moreover, our measurements indicate that the number of droplets produced is larger in forward spatter than in backward spatter. In the forward and backward spatter, the droplet area in the side-view images is approximately the same. The upper angles of the close-to-cone domain in which droplets are issued in forward and backward spatter are 27±9° and 57±7°, respectively, whereas the lower angles of the close-to-cone domain are 28±12° and 30±18°, respectively. The inclination angle of the bullet as it penetrates the target is seen to play a large role in the directional preference of the spattered blood. Muzzle gases, bullet impact angle, and the aerodynamic wake of the bullet are also seen to greatly influence the flight of the droplets. The intent of this investigation is to provide a quantitative basis for current and future research on bloodstain pattern analysis (BPA) of forward or backward blood spatter due to a gunshot. Published by Elsevier B.V.
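The velocimetry underlying these measurements reduces, per droplet, to displacement between consecutive frames times spatial calibration times frame rate. A minimal sketch with assumed numbers (the frame rate, calibration, and centroid positions below are illustrative, not the MFRC data):

```python
# Estimate a droplet's speed from its centroid in two consecutive
# high-speed video frames (hypothetical numbers, not the MFRC data).
frame_rate = 10000.0            # frames per second, assumed
mm_per_pixel = 0.2              # spatial calibration, assumed

# Droplet centroid positions (pixels) in frame n and frame n+1.
x0, y0 = 120.0, 340.0
x1, y1 = 143.0, 338.0

# Convert the pixel displacement to metres using the calibration.
dx_mm = (x1 - x0) * mm_per_pixel
dy_mm = (y1 - y0) * mm_per_pixel
displacement_m = ((dx_mm**2 + dy_mm**2) ** 0.5) / 1000.0

# Speed is displacement per inter-frame interval.
speed = displacement_m * frame_rate   # m/s
print(round(speed, 1))                # → 46.2
```

With these assumed numbers the droplet travels roughly 46 m/s, the same order as the forward-spatter velocities reported in the record.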

  14. Virtual Ultrasound Guidance for Inexperienced Operators

    NASA Technical Reports Server (NTRS)

    Caine, Timothy; Martin, David

    2012-01-01

    Medical ultrasound or echocardiographic studies are highly operator-dependent and generally require lengthy training and internship to perfect. To obtain quality echocardiographic images in remote environments, such as on-orbit, remote guidance of studies has been employed. This technique involves minimal training for the user, coupled with remote guidance from an expert. When real-time communication or expert guidance is not available, a more autonomous system of guiding an inexperienced operator through an ultrasound study is needed. One example would be missions beyond low Earth orbit in which the time delay inherent with communication will make remote guidance impractical. The Virtual Ultrasound Guidance system is a combination of hardware and software. The hardware portion includes, but is not limited to, video glasses that allow hands-free, full-screen viewing. The glasses also allow the operator a substantial field of view below the glasses to view and operate the ultrasound system. The software is a comprehensive video program designed to guide an inexperienced operator through a detailed ultrasound or echocardiographic study without extensive training or guidance from the ground. The program contains a detailed description using video and audio to demonstrate equipment controls, ergonomics of scanning, study protocol, and scanning guidance, including recovery from sub-optimal images. The components used in the initial validation of the system include an Apple iPod Classic third-generation as the video source, and Myvue video glasses. Initially, the program prompts the operator to power-up the ultrasound and position the patient. The operator would put on the video glasses and attach them to the video source. After turning on both devices and the ultrasound system, the audio-video guidance would then instruct on patient positioning and scanning techniques. 
A detailed scanning protocol follows with descriptions and reference video of each view along with advice on technique. The program also instructs the operator regarding the types of images to store and how to overcome pitfalls in scanning. Images can be forwarded to the ground or other site when convenient. Following study completion, the video glasses, video source, and ultrasound system are powered down and stored. Virtually any equipment that can play back video can be used to play back the program. This includes a DVD player, personal computer, and some MP3 players.

  15. Exploring physical exposures and identifying high-risk work tasks within the floor layer trade

    PubMed Central

    McGaha, Jamie; Miller, Kim; Descatha, Alexis; Welch, Laurie; Buchholz, Bryan; Evanoff, Bradley; Dale, Ann Marie

    2014-01-01

Introduction: Floor layers have high rates of musculoskeletal disorders, yet few studies have examined their work exposures. This study used observational methods to describe physical exposures within floor laying tasks. Methods: We analyzed 45 videos from 32 floor layers using Multimedia-Video Task Analysis software to determine the time in task, forces, postures, and repetitive hand movements for installation of four common flooring materials. We used the WISHA checklists to define exposure thresholds. Results: Most workers (91%) met the caution threshold for one or more exposures. Workers showed high exposures in multiple body parts, with variability in exposures across tasks and for different materials. Prolonged exposures were seen for kneeling, poor neck and low back postures, and intermittent but frequent hand grip forces. Conclusions: Floor layers experience prolonged awkward postures and high-force physical exposures in multiple body parts, which probably contribute to their high rates of musculoskeletal disorders. PMID:24274895

  16. Remote Visualization and Remote Collaboration On Computational Fluid Dynamics

    NASA Technical Reports Server (NTRS)

    Watson, Val; Lasinski, T. A. (Technical Monitor)

    1995-01-01

    A new technology has been developed for remote visualization that provides remote, 3D, high resolution, dynamic, interactive viewing of scientific data (such as fluid dynamics simulations or measurements). Based on this technology, some World Wide Web sites on the Internet are providing fluid dynamics data for educational or testing purposes. This technology is also being used for remote collaboration in joint university, industry, and NASA projects in computational fluid dynamics and wind tunnel testing. Previously, remote visualization of dynamic data was done using video format (transmitting pixel information) such as video conferencing or MPEG movies on the Internet. The concept for this new technology is to send the raw data (e.g., grids, vectors, and scalars) along with viewing scripts over the Internet and have the pixels generated by a visualization tool running on the viewer's local workstation. The visualization tool that is currently used is FAST (Flow Analysis Software Toolkit).

  17. Intersegmental Eye-Head-Body Interactions during Complex Whole Body Movements

    PubMed Central

    von Laßberg, Christoph; Beykirch, Karl A.; Mohler, Betty J.; Bülthoff, Heinrich H.

    2014-01-01

    Using state-of-the-art technology, interactions of eye, head and intersegmental body movements were analyzed for the first time during multiple twisting somersaults of high-level gymnasts. With this aim, we used a unique combination of a 16-channel infrared kinemetric system; a three-dimensional video kinemetric system; wireless electromyography; and a specialized wireless sport-video-oculography system, which was able to capture and calculate precise oculomotor data under conditions of rapid multiaxial acceleration. All data were synchronized and integrated in a multimodal software tool for three-dimensional analysis. During specific phases of the recorded movements, a previously unknown eye-head-body interaction was observed. The phenomenon was marked by a prolonged and complete suppression of gaze-stabilizing eye movements, in favor of a tight coupling with the head, spine and joint movements of the gymnasts. Potential reasons for these observations are discussed with regard to earlier findings and integrated within a functional model. PMID:24763143

  18. Multiple Frequency Audio Signal Communication as a Mechanism for Neurophysiology and Video Data Synchronization

    PubMed Central

    Topper, Nicholas C.; Burke, S.N.; Maurer, A.P.

    2014-01-01

BACKGROUND: Current methods for aligning neurophysiology and video data are either prepackaged, requiring the additional purchase of a software suite, or use a blinking LED with a stationary pulse-width and frequency. These methods lack a significant user interface for adaptation, are expensive, or risk a misalignment of the two data streams. NEW METHOD: A cost-effective means to obtain high-precision alignment of behavioral and neurophysiological data is obtained by generating an audio pulse embedded with two domains of information, a low-frequency binary-counting signal and a high, randomly changing frequency. This enabled the derivation of temporal information while maintaining enough entropy in the system for algorithmic alignment. RESULTS: The sample-to-frame index constructed using the audio input correlation method described in this paper enables video and data acquisition to be aligned at a sub-frame level of precision. COMPARISONS WITH EXISTING METHOD: Traditionally, a synchrony pulse is recorded on-screen via a flashing diode. The higher sampling rate of the audio input of the camcorder enables the timing of an event to be detected with greater precision. CONCLUSIONS: While on-line analysis and synchronization using specialized equipment may be the ideal situation in some cases, the method presented in the current paper is a viable, low-cost alternative and gives the flexibility to interface with custom off-line analysis tools. Moreover, the ease of constructing and implementing the set-up presented in the current paper makes it applicable to a wide variety of applications that require video recording. PMID:25256648

  19. Multiple frequency audio signal communication as a mechanism for neurophysiology and video data synchronization.

    PubMed

    Topper, Nicholas C; Burke, Sara N; Maurer, Andrew Porter

    2014-12-30

Current methods for aligning neurophysiology and video data are either prepackaged, requiring the additional purchase of a software suite, or use a blinking LED with a stationary pulse-width and frequency. These methods lack a significant user interface for adaptation, are expensive, or risk a misalignment of the two data streams. A cost-effective means to obtain high-precision alignment of behavioral and neurophysiological data is obtained by generating an audio pulse embedded with two domains of information, a low-frequency binary-counting signal and a high, randomly changing frequency. This enabled the derivation of temporal information while maintaining enough entropy in the system for algorithmic alignment. The sample-to-frame index constructed using the audio input correlation method described in this paper enables video and data acquisition to be aligned at a sub-frame level of precision. Traditionally, a synchrony pulse is recorded on-screen via a flashing diode. The higher sampling rate of the audio input of the camcorder enables the timing of an event to be detected with greater precision. While on-line analysis and synchronization using specialized equipment may be the ideal situation in some cases, the method presented in the current paper is a viable, low-cost alternative and gives the flexibility to interface with custom off-line analysis tools. Moreover, the ease of constructing and implementing the set-up presented in the current paper makes it applicable to a wide variety of applications that require video recording. Copyright © 2014 Elsevier B.V. All rights reserved.
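The offset-recovery step these two records describe can be illustrated with a cross-correlation sketch. All signals, lengths, and the sampling rate below are hypothetical stand-ins; the published method additionally embeds a binary-counting low-frequency component for coarse timing, which is omitted here:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the randomly changing high-frequency component:
# a pseudorandom pulse train shared by both recording systems.
pulse = rng.standard_normal(2000)

true_offset = 3217  # unknown lag between the two recordings (samples)

# The camcorder audio track contains the pulse at an unknown delay, plus noise;
# the neurophysiology system records the same pulse from time zero.
audio_track = (np.concatenate([np.zeros(true_offset), pulse, np.zeros(500)])
               + 0.05 * rng.standard_normal(true_offset + 2500))
ephys_track = np.concatenate([pulse, np.zeros(3000)])

# Cross-correlate to recover the lag; the entropy of the random pulse makes
# the correlation peak unambiguous, enabling sub-frame alignment.
xcorr = np.correlate(audio_track, ephys_track, mode="full")
lag = int(np.argmax(xcorr)) - (len(ephys_track) - 1)
print(lag)  # → 3217, the recovered offset in samples
```

Once the lag in audio samples is known, dividing by the audio sampling rate and multiplying by the video frame rate maps it to a fractional frame index, which is what gives the sub-frame precision the records claim.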

  20. Process of videotape making: presentation design, software, and hardware

    NASA Astrophysics Data System (ADS)

    Dickinson, Robert R.; Brady, Dan R.; Bennison, Tim; Burns, Thomas; Pines, Sheldon

    1991-06-01

The use of technical videotape presentations for communicating abstractions of complex data is now becoming commonplace. While the use of videotapes in the day-to-day work of scientists and engineers is still in its infancy, their use at applications-oriented conferences is now growing rapidly. Despite these advancements, there is still very little written down about the process of making technical videotapes. For printed media, distinct presentation styles are well known for categories such as results reports, executive summary reports, and technical papers and articles. In this paper, the authors present ideas on technical videotape presentation design in a form that can serve as a reference. They have started to document the ways in which the experience of media specialists, teaching professionals, and character animators can be applied to scientific animation. Software and hardware considerations are also discussed. For this portion, distinctions are drawn between the software and hardware required for computer animation (frame-at-a-time) productions and live recorded interaction with a computer graphics display.

  1. Surgical Videos with Synchronised Vertical 2-Split Screens Recording the Surgeons' Hand Movement.

    PubMed

    Kaneko, Hiroki; Ra, Eimei; Kawano, Kenichi; Yasukawa, Tsutomu; Takayama, Kei; Iwase, Takeshi; Terasaki, Hiroko

    2015-01-01

    To improve the state-of-the-art teaching system by creating surgical videos with synchronised vertical 2-split screens. An ultra-compact, wide-angle point-of-view camcorder (HX-A1, Panasonic) was mounted on the surgical microscope focusing mostly on the surgeons' hand movements. In combination with the regular surgical videos obtained from the CCD camera in the surgical microscope, synchronised vertical 2-split-screen surgical videos were generated with the video-editing software. Using synchronised vertical 2-split-screen videos, residents of the ophthalmology department could watch and learn how assistant surgeons controlled the eyeball, while the main surgeons performed scleral buckling surgery. In vitrectomy, the synchronised vertical 2-split-screen videos showed the surgeons' hands holding the instruments and moving roughly and boldly, in contrast to the very delicate movements of the vitrectomy instruments inside the eye. Synchronised vertical 2-split-screen surgical videos are beneficial for the education of young surgical trainees when learning surgical skills including the surgeons' hand movements. © 2015 S. Karger AG, Basel.

  2. Software development for airborne radar

    NASA Astrophysics Data System (ADS)

    Sundstrom, Ingvar G.

Some aspects of software development for a modern multimode airborne nose radar are described. First, an overview of where software is used in the radar units is presented. The development phases (system design, functional design, detailed design, function verification, and system verification) are then used as the starting point for the discussion. Methods, tools, and the most important documents are described. The importance of video flight recording in the early stages and the use of digital signal generators for performance verification are emphasized. Some future trends are discussed.

  3. The Effects of Music on Microsurgical Technique and Performance: A Motion Analysis Study.

    PubMed

    Shakir, Afaaf; Chattopadhyay, Arhana; Paek, Laurence S; McGoldrick, Rory B; Chetta, Matthew D; Hui, Kenneth; Lee, Gordon K

    2017-05-01

Music is commonly played in operating rooms (ORs) throughout the country. If a preferred genre of music is played, surgeons have been shown to perform surgical tasks more quickly and with greater accuracy. However, there are currently no studies investigating the effects of music on microsurgical technique. Motion analysis technology has recently been validated in the objective assessment of plastic surgery trainees' performance of microanastomoses. Here, we aimed to examine the effects of music on microsurgical skills using motion analysis technology as the primary objective assessment tool. Residents and fellows in the Plastic and Reconstructive Surgery program were recruited to complete a demographic survey and participate in microsurgical tasks. Each participant completed 2 arterial microanastomoses on a chicken foot model, one with music playing and the other without. Participants were blinded to the study objectives and encouraged to perform their best. The order of music and no music was randomized. Microanastomoses were video recorded using a digitalized S-video system and deidentified. Video segments were analyzed using ProAnalyst motion analysis software for automatic, noncontact, markerless video tracking of the needle driver tip. Nine residents and 3 plastic surgery fellows were tested. Reported microsurgical experience ranged from 1 to 10 arterial anastomoses performed (n = 2), 11 to 100 anastomoses (n = 9), and 101 to 500 anastomoses (n = 1). Mean age was 33 years (range, 29-36 years), with 11 participants right-handed and 1 ambidextrous. Of the 12 subjects tested, 11 (92%) preferred music in the OR. Composite instrument motion analysis scores significantly improved with playing preferred music during testing versus no music (paired t-test, P < 0.001).
Improvement with music was significant even after stratifying scores by order in which variables were tested (music first vs no music first), postgraduate year, and number of anastomoses (analysis of variance, P < 0.01). Preferred music in the OR may have a positive effect on trainees' microsurgical performance; as such, trainees should be encouraged to participate in setting the conditions of the OR to optimize their comfort and, possibly, performance. Moreover, motion analysis technology is a useful tool with a wide range of applications for surgical education and outcomes optimization.

  4. Development of a video image-based QA system for the positional accuracy of dynamic tumor tracking irradiation in the Vero4DRT system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ebe, Kazuyu, E-mail: nrr24490@nifty.com; Tokuyama, Katsuichi; Baba, Ryuta

Purpose: To develop and evaluate a new video image-based QA system, including in-house software, that can display a tracking state visually and quantify the positional accuracy of dynamic tumor tracking irradiation in the Vero4DRT system. Methods: Sixteen trajectories in six patients with pulmonary cancer were obtained with the ExacTrac in the Vero4DRT system. Motion data in the cranio–caudal direction (Y direction) were used as the input for a programmable motion table (Quasar). A target phantom was placed on the motion table, which was placed on the 2D ionization chamber array (MatriXX). Then, the 4D modeling procedure was performed on the target phantom during a reproduction of the patient’s tumor motion. A substitute target with the patient’s tumor motion was irradiated with 6-MV x-rays under the surrogate infrared system. The 2D dose images obtained from the MatriXX (33 frames/s; 40 s) were exported to in-house video-image analyzing software. The absolute differences in the Y direction between the center of the exposed target and the center of the exposed field were calculated. Positional errors were observed. The authors’ QA results were compared to 4D modeling function errors and gimbal motion errors obtained from log analyses in the ExacTrac to verify the accuracy of their QA system. The patients’ tumor motions were evaluated in the wave forms, and the peak-to-peak distances were also measured to verify their reproducibility. Results: Thirteen of sixteen trajectories (81.3%) were successfully reproduced with Quasar. The peak-to-peak distances ranged from 2.7 to 29.0 mm. Three trajectories (18.7%) were not successfully reproduced due to the limited motions of the Quasar. Thus, 13 of 16 trajectories were included in the analysis. The mean number of video images used for analysis was 1156. The positional errors (absolute mean difference + 2 standard deviations) ranged from 0.54 to 1.55 mm.
The error values differed by less than 1 mm from the 4D modeling function errors and gimbal motion errors in the ExacTrac log analyses (n = 13). Conclusions: The newly developed video image-based QA system, including in-house software, can analyze more than a thousand images (33 frames/s). Positional errors are approximately equivalent to those in ExacTrac log analyses. This system is useful for the visual illustration of the progress of the tracking state and for the quantification of positional accuracy during dynamic tumor tracking irradiation in the Vero4DRT system.
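The summary metric this record reports (absolute mean difference plus 2 standard deviations of the per-frame target-to-field offset) can be sketched as follows. The per-frame centroids below are simulated stand-ins, not actual MatriXX output:

```python
import numpy as np

# Hypothetical per-frame centroids (mm) in the cranio-caudal (Y) direction,
# standing in for the centers extracted from the 2D dose images.
rng = np.random.default_rng(1)
n_frames = 1156                     # mean frame count reported in the record
target_y = 10.0 * np.sin(np.linspace(0, 40 * np.pi, n_frames))  # moving target
field_y = target_y + rng.normal(0.0, 0.3, n_frames)             # tracked field

# Per-frame absolute tracking error, then the record's summary metric.
abs_diff = np.abs(field_y - target_y)
positional_error = abs_diff.mean() + 2 * abs_diff.std()
print(round(positional_error, 2))   # mm
```

With a 0.3 mm simulated tracking noise this lands around 0.6 mm, within the 0.54-1.55 mm range the study reports.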

  5. Clustering and Flow Conservation Monitoring Tool for Software Defined Networks.

    PubMed

    Puente Fernández, Jesús Antonio; García Villalba, Luis Javier; Kim, Tai-Hoon

    2018-04-03

Prediction systems face challenges on two fronts: the relation between video quality and observed session features, and the dynamic changes in video quality over time. Software Defined Networks (SDN) is a new concept of network architecture that separates the control plane (controller) from the data plane (switches) in network devices. Because of the southbound interface, it is possible to deploy monitoring tools that obtain the network status and retrieve a collection of statistics. Achieving the most accurate statistics therefore depends on the strategy for monitoring and requesting information from network devices. In this paper, we propose an enhanced algorithm for requesting statistics to measure traffic flow in SDN networks. The algorithm groups network switches into clusters based on their number of ports and applies different monitoring techniques to each cluster. Grouping avoids monitoring queries to network switches with common characteristics and thus omits redundant information. In this way, the present proposal decreases the number of monitoring queries to switches, improving network traffic and preventing switch overload. We tested our optimization in a video streaming simulation using different types of videos. The experiments and comparison with traditional monitoring techniques demonstrate the feasibility of our proposal, which maintains similar accuracy while decreasing the number of queries to the switches.
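The clustering idea, grouping switches by port count so that redundant statistics requests are omitted, can be sketched as follows. The switch inventory is hypothetical, and the paper's algorithm additionally varies the monitoring technique per cluster, which this sketch does not model:

```python
from collections import defaultdict

# Hypothetical switch inventory: (switch_id, number_of_ports).
switches = [("s1", 4), ("s2", 4), ("s3", 8), ("s4", 8), ("s5", 8), ("s6", 24)]

# Group switches into clusters by their port count, the common
# characteristic used for clustering in the record.
clusters = defaultdict(list)
for sw_id, ports in switches:
    clusters[ports].append(sw_id)

# Query only one representative per cluster instead of every switch,
# omitting the redundant statistics requests.
queries = [members[0] for members in clusters.values()]
print(len(switches), "switches ->", len(queries), "monitoring queries")
# → 6 switches -> 3 monitoring queries
```

Here six switches collapse into three clusters (4-port, 8-port, 24-port), so the monitoring load halves while each class of switch is still observed.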

  6. Digital readout for image converter cameras

    NASA Astrophysics Data System (ADS)

    Honour, Joseph

    1991-04-01

There is an increasing need for fast and reliable analysis of recorded sequences from image converter cameras so that experimental information can be readily evaluated without recourse to more time-consuming photographic procedures. A digital readout system has been developed using a randomly triggerable high-resolution CCD camera, the output of which is suitable for use with an IBM AT-compatible PC. Within half a second of receipt of the trigger pulse, the frame reformatter displays the image, and transfer to storage media can be readily achieved via the PC and dedicated software. Two software programmes offer different levels of image manipulation, which include enhancement routines and parameter calculations with accuracy down to the pixel level. Hard-copy prints can be acquired using a specially adapted Polaroid printer, while outputs for laser and video printers extend the overall versatility of the system.

  7. Video conferencing made easy

    NASA Astrophysics Data System (ADS)

    Larsen, D. G.; Schwieder, P. R.

Network video conferencing is advancing rapidly throughout the nation, and the Idaho National Engineering Laboratory (INEL), a Department of Energy (DOE) facility, is at the forefront of the development. Engineers at INEL/EG&G designed and installed a unique DOE video conferencing system offering many outstanding features, including true multipoint conferencing, user-friendly design and operation with no full-time operators required, and the potential for cost-effective expansion of the system. One area where INEL/EG&G engineers made a significant contribution to video conferencing was in the development of effective, user-friendly, end-station-driven scheduling software. A PC at each user site is used to schedule conferences via a windows package. This software interface provides information to users concerning conference availability, scheduling, initiation, and termination. The menus are mouse-controlled. Once a conference is scheduled, a workstation at the hub monitors the network to initiate all scheduled conferences. No active operator participation is required once a user schedules a conference through the local PC; the workstation automatically initiates and terminates the conference as scheduled. As each conference is scheduled, hard-copy notification is also printed at each participating site. Video conferencing is the wave of the future. The use of these user-friendly systems will save millions in lost productivity and travel costs throughout the nation. The ease of operation and conference scheduling will play a key role in the extent to which industry uses this new technology. INEL/EG&G has developed a prototype scheduling system for both commercial and federal government use.

  8. Understanding pharmacokinetics: are YouTube videos a useful learning resource?

    PubMed

    Azer, S A

    2014-07-01

To investigate whether YouTube videos on pharmacokinetics can be a useful learning resource for medical students. YouTube was searched from 01 November to 15 November 2013 for the search terms "Pharmacokinetics", "Drug absorption", "Drug distribution", "Drug metabolism", "Drug elimination", "Biliary excretion of drugs", and "Renal excretion of drugs". Only videos in English that matched the inclusion criteria were included. For each video, the following characteristic data were collected: title, URL, duration, number of viewers, date uploaded, viewership per day, likes, dislikes, number of comments, number of shares, and the uploader/creator. Using standardized criteria comprising technical, content, authority, and pedagogy parameters, three evaluators independently assessed the videos for educational usefulness. Data were analyzed using SPSS software, and the agreement between the evaluators was calculated using Cohen's kappa analysis. The search identified 1460 videos. Of these, only 48 fulfilled the inclusion criteria. Only 30 were classified as educationally useful videos (62.5%), scoring 13.83±0.45 (mean±SD), while the remaining 18 videos were not educationally useful (37.5%), scoring 6.48±1.64 (mean±SD), p = 0.000. The educationally useful videos were created by pharmacologists/educators 83.3% (25/30), professors from two universities 13.3% (04/30), and a private tutoring body 3.3% (01/30). The useful videos were viewed by 12,096 (65.4%) viewers and had a total of 433,332 days on YouTube, while the non-educationally useful videos were viewed by 6,378 (34.6%) viewers and had 20,684 days on YouTube. No correlation was found between video total score and number of likes (R² 0.258), dislikes (R² 0.103), viewers (R² 0.186), viewership/day (R² 0.256), comments (R² 0.250), or shares (R² 0.174). The agreement between the three evaluators had an overall Cohen's kappa score in the range of 0.582-0.949.
YouTube videos on pharmacokinetics and drug elimination varied considerably in their educational usefulness. Medical educators should be aware of the potential influence YouTube videos may have on students' understanding of pharmacokinetics and drug elimination. Users who rely on viewers' comments, or on approval expressed as the number of likes given by viewers, should be aware that these indicators are not accurate and do not correlate with the scores given to the videos.

  9. NASA Technology Transfer - Human Robot Teaming

    NASA Image and Video Library

    2016-12-23

Produced for the Intelligent Robotics Group to show at the January 2017 Consumer Electronics Show (CES). Highlights development of the VERVE (Visual Environment for Remote Virtual Exploration) software used on the K-10, K-REX, SPHERES, and AstroBee projects for 3D awareness. Also mentions the transfer of the software to Nissan for use in their autonomous vehicle project. The video includes footage of Nissan's self-driving car at NASA Ames.

  10. The making of the mechanical universe

    NASA Technical Reports Server (NTRS)

    Blinn, James

    1989-01-01

The Mechanical Universe project required the production of over 550 different animated scenes, totaling about 7 1/2 hours of screen time. The project required a wide range of techniques and motivated the development of several different software packages. Many aspects of the project are documented, encompassing artistic design issues, scientific simulations, software engineering, and video engineering.

  11. Automated management for pavement inspection system (AMPIS)

    NASA Astrophysics Data System (ADS)

    Chung, Hung Chi; Girardello, Roberto; Soeller, Tony; Shinozuka, Masanobu

    2003-08-01

An automated in-situ road surface distress surveying and management system, AMPIS, has been developed on the basis of video images within the framework of GIS software. Video image processing techniques are introduced to acquire, process and analyze the road surface images obtained from a moving vehicle. ArcGIS platform is used to integrate the routines of image processing and spatial analysis in handling the full-scale metropolitan highway surface distress detection and data fusion/management. This makes it possible to present user-friendly interfaces in GIS and to provide efficient visualizations of surveyed results not only for the use of transportation engineers to manage road surveying documentations, data acquisition, analysis and management, but also for financial officials to plan maintenance and repair programs and further evaluate the socio-economic impacts of highway degradation and deterioration. A review performed in this study of the fundamental principles of Pavement Management Systems (PMS) and their implementation indicates that the proposed approach of using GIS concepts and tools for PMS applications will reshape PMS into a new information technology-based system providing convenient and efficient pavement inspection and management.

  12. GIS-based automated management of highway surface crack inspection system

    NASA Astrophysics Data System (ADS)

    Chung, Hung-Chi; Shinozuka, Masanobu; Soeller, Tony; Girardello, Roberto

    2004-07-01

    An automated in-situ road surface distress surveying and management system, AMPIS, has been developed on the basis of video images within the framework of GIS software. Video image processing techniques are introduced to acquire, process, and analyze the road surface images obtained from a moving vehicle. The ArcGIS platform is used to integrate the routines of image processing and spatial analysis in handling full-scale metropolitan highway surface distress detection and data fusion/management. This makes it possible to present user-friendly interfaces in GIS and to provide efficient visualizations of surveyed results, not only for transportation engineers to manage road surveying documentation, data acquisition, analysis, and management, but also for financial officials to plan maintenance and repair programs and further evaluate the socio-economic impacts of highway degradation and deterioration. A review performed in this study of the fundamental principles of Pavement Management Systems (PMS) and their implementation indicates that the proposed approach of using the GIS concept and its tools for PMS applications will reshape PMS into a new information-technology-based system that can provide convenient and efficient pavement inspection and management.

  13. Internet teleconferencing as a clinical tool for anesthesiologists.

    PubMed

    Ruskin, K J; Palmer, T E; Hagenouw, R R; Lack, A; Dunnill, R

    1998-04-01

    Internet teleconferencing software can be used to hold "virtual" meetings, during which participants around the world can share ideas. A core group of anesthesia practitioners, largely drawn from the Society for Advanced Telecommunications in Anesthesia (SATA), has begun to hold regularly scheduled "virtual grand rounds." This paper examines currently available software and offers impressions of our own early experiences with this technology. Two teleconferencing systems have been used: White Pine Software CU-SeeMe and Microsoft NetMeeting. While both provided acceptable results, each had specific advantages and disadvantages. CU-SeeMe is easier to use when conferences include more than two participants. NetMeeting provides higher quality audio and video signals under crowded network conditions, and is better for conferences with only two participants. While some effort is necessary to get these teleconferencing systems to work well, we have been using desktop conferencing for six months to hold virtual Internet meetings. The sound and video images produced by Internet teleconferencing software are inferior to those of dedicated point-to-point teleconferencing systems. However, low cost, wide availability, and ease of use make this technology a potentially valuable tool for clinicians and researchers.

  14. Physics and Video Analysis

    NASA Astrophysics Data System (ADS)

    Allain, Rhett

    2016-05-01

    We currently live in a world filled with videos. There are videos on YouTube, feature movies and even videos recorded with our own cameras and smartphones. These videos present an excellent opportunity to not only explore physical concepts, but also inspire others to investigate physics ideas. With video analysis, we can explore the fantasy world in science-fiction films. We can also look at online videos to determine if they are genuine or fake. Video analysis can be used in the introductory physics lab and it can even be used to explore the make-believe physics embedded in video games. This book covers the basic ideas behind video analysis along with the fundamental physics principles used in video analysis. The book also includes several examples of the unique situations in which video analysis can be used.

  15. Open-source telemedicine platform for wireless medical video communication.

    PubMed

    Panayides, A; Eleftheriou, I; Pantziaris, M

    2013-01-01

    An m-health system for real-time wireless communication of medical video based on open-source software is presented. The objective is to deliver a low-cost telemedicine platform which will allow for reliable remote diagnosis m-health applications such as emergency incidents, mass population screening, and medical education purposes. The performance of the proposed system is demonstrated using five atherosclerotic plaque ultrasound videos. The videos are encoded at the clinically acquired resolution, in addition to lower, QCIF, and CIF resolutions, at different bitrates, and four different encoding structures. Commercially available wireless local area network (WLAN) and 3.5G high-speed packet access (HSPA) wireless channels are used to validate the developed platform. Objective video quality assessment is based on PSNR ratings, following calibration using the variable frame delay (VFD) algorithm that removes temporal mismatch between original and received videos. Clinical evaluation is based on atherosclerotic plaque ultrasound video assessment protocol. Experimental results show that adequate diagnostic quality wireless medical video communications are realized using the designed telemedicine platform. HSPA cellular networks provide for ultrasound video transmission at the acquired resolution, while VFD algorithm utilization bridges objective and subjective ratings.
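    The objective ratings mentioned above are PSNR values computed against the original video after VFD temporal alignment. As a rough illustration of the metric itself (the 4-sample "frames" below are hypothetical toy data, not the paper's ultrasound videos), PSNR follows directly from the mean squared error:

```python
import math

def psnr(original, received, max_value=255):
    """Peak signal-to-noise ratio (dB) between two equally sized frames,
    represented here as flat lists of 8-bit samples."""
    mse = sum((o - r) ** 2 for o, r in zip(original, received)) / len(original)
    if mse == 0:
        return float("inf")  # identical frames: no distortion
    return 10 * math.log10(max_value ** 2 / mse)

# Toy 4-pixel "frames": a +-1 distortion on every sample gives MSE = 1.
orig = [100, 120, 130, 140]
recv = [101, 119, 131, 139]
print(round(psnr(orig, recv), 1))  # 48.1 dB
```

Real assessments average PSNR over all frames of the received sequence, which is why the temporal alignment step matters: a dropped or delayed frame would otherwise be compared against the wrong original.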

  16. Open-Source Telemedicine Platform for Wireless Medical Video Communication

    PubMed Central

    Panayides, A.; Eleftheriou, I.; Pantziaris, M.

    2013-01-01

    An m-health system for real-time wireless communication of medical video based on open-source software is presented. The objective is to deliver a low-cost telemedicine platform which will allow for reliable remote diagnosis m-health applications such as emergency incidents, mass population screening, and medical education purposes. The performance of the proposed system is demonstrated using five atherosclerotic plaque ultrasound videos. The videos are encoded at the clinically acquired resolution, in addition to lower, QCIF, and CIF resolutions, at different bitrates, and four different encoding structures. Commercially available wireless local area network (WLAN) and 3.5G high-speed packet access (HSPA) wireless channels are used to validate the developed platform. Objective video quality assessment is based on PSNR ratings, following calibration using the variable frame delay (VFD) algorithm that removes temporal mismatch between original and received videos. Clinical evaluation is based on atherosclerotic plaque ultrasound video assessment protocol. Experimental results show that adequate diagnostic quality wireless medical video communications are realized using the designed telemedicine platform. HSPA cellular networks provide for ultrasound video transmission at the acquired resolution, while VFD algorithm utilization bridges objective and subjective ratings. PMID:23573082

  17. iSDS: a self-configurable software-defined storage system for enterprise

    NASA Astrophysics Data System (ADS)

    Chen, Wen-Shyen Eric; Huang, Chun-Fang; Huang, Ming-Jen

    2018-01-01

    Storage is one of the most important aspects of IT infrastructure for various enterprises. But enterprises are interested in more than just data storage; they are interested in such things as more reliable data protection, higher performance, and reduced resource consumption. Traditional enterprise-grade storage satisfies these requirements at high cost, because it is usually designed and constructed around customised field-programmable gate arrays to achieve high-end functionality. However, in this ever-changing environment, enterprises request storage with more flexible deployment and at lower cost. Moreover, the rise of new application fields, such as social media, big data, and video streaming services, makes operational tasks for administrators more complex. In this article, a new storage system called intelligent software-defined storage (iSDS), based on software-defined storage, is described. More specifically, this approach advocates using software to replace features provided by traditional customised chips. To alleviate the management burden, it also advocates applying machine learning to automatically configure storage to meet the dynamic requirements of workloads running on the storage. This article focuses on the analysis feature of the iSDS cluster by detailing its architecture and design.

  18. Prospectus 1999

    NASA Astrophysics Data System (ADS)

    Holmes, Jon L.; Gettys, Nancy S.

    1999-01-01

    We begin 1999 with a message to all Journal subscribers about our plans for JCE Software and what you will be seeing in this column as the year progresses.

    Series News. JCE Software will continue to publish individual programs, one to an issue as they become ready for distribution. The old Series B, C, and D designations no longer exist. Regular Issue numbers for 1999 will start with 99, and end with M for Mac OS, W for Windows, or MW for programs that will run under both the Mac OS and Windows. Windows programs will be compatible with Windows 95/98 and may or may not be compatible with Windows 3.1. Special Issues, such as CD-ROMs and videotapes, will continue to be designated with SP followed by a number.

    Publication Plans for 1999. Periodic Table Live! Second Edition is a new version of one of JCE Software's most popular publications. The best features of Illustrated Periodic Table (1) for Windows and Chemistry Navigator (2) for Mac OS are combined in a new HTML-based, multimedia presentation format. Together with the video from Periodic Table Videodisc (3), digitized to take advantage of new features available in QuickTime 3 (4), the new Periodic Table Live! will be easy to use, with complete features available to both Windows and Mac OS users.

    Chemistry Comes Alive! The Chemistry Comes Alive! (CCA!) series continues in 1999 with CD-ROMs for Mac OS and Windows. Like the first two volumes (5, 6), new CDs will contain video and animations of chemical reactions, including clips from our videodiscs ChemDemos (7), ChemDemos II (8), and Titration Techniques (9). Other clips are new, available for the first time in Chemistry Comes Alive! New CCA! CDs will be made available in two varieties for individual users: one that takes advantage of the high-quality video that can be displayed by new, faster computers, and another that will play well on older, slower models. In addition, a third variation for network licensing will include video optimized for delivery via the World Wide Web. If all goes according to plan, two new CCA! volumes will be announced in 1999, and CCA! 1 and CCA! 2 will be updated to take advantage of the latest digital video technology.

    Chem Pages. Chem Pages, Laboratory Techniques, was developed by the New Traditions Curriculum Project at the University of Wisconsin-Madison. It is an HTML-based CD-ROM for Mac OS and Windows that contains lessons and tutorials to prepare introductory chemistry students to work in the laboratory. It includes text, photographs, computer graphics, animations, digital video, and voice narration to introduce students to laboratory equipment and procedures.

    Regular Issues. Programs that have been accepted for publication as Regular Issues in 1999 include a gas chromatography simulation for Windows 95 by Bruce Armitage, a collection of lessons on torsional rotation for organic chemistry students by Ronald Starkey, and a tutorial on pericyclic reactions, also for organic chemistry, by Albert Lee, C. T. So, and C. L. Chan. We have had many recent submissions and submissions of work in progress. In 1999 we will work with the authors and our peer-reviewers to complete and publish these submissions. Submissions include Multimedia Problems for General Chemistry by David Whisnant, lessons on point groups and crystallography by Margaret Kastner et al., a mass spectrum simulator by Stephen W. Bigger and Robert A. Craig, a tutorial for introductory chemistry on determining the pH of very dilute acid and base solutions by Paul Mihas and George Papageorgiou, and many others. Also under development by the JCE Software staff are The General Chemistry Collection (instructor's edition) CD-ROM along with an updated student edition.

    An Invitation. In collaboration with JCE Online we plan to make available in 1999 support files for JCE Software. These will include not only troubleshooting tips and technical support notes, but also supporting information such as lessons, specific assignments, and activities using JCE Software publications submitted by users. All JCE Software users are invited to contribute to this area. Get in touch with JCE Software and let us know how you are using our materials so that we can share your ideas with others! Although the word software is in our name, many of our publications are not traditional software. We also publish video on videotape, videodisc, and CD-ROM, and electronic documents (Mathcad and Mathematica, spreadsheet files and macros, HTML documents, and PowerPoint presentations). Most chemistry instructors who use a computer in their teaching have created or considered creating one or more of these for their classes. If you have an original computer presentation, electronic document, animation, video, or any other item that is not printed text, it is probably an appropriate submission for JCE Software. By publishing your work in any branch of the Journal of Chemical Education, you will share your efforts with chemistry instructors and students all over the world and get professional recognition for your achievements.

    Literature Cited. 1. Schatz, P. F.; Moore, J. W.; Holmes, J. L. Illustrated Periodic Table; J. Chem. Educ. Software 1995, 2D2. 2. Kotz, J. C.; Young, S. Chemistry Navigator; J. Chem. Educ. Software 1995, 6C2. 3. Banks, A. Periodic Table Videodisc, 2nd ed.; J. Chem. Educ. Software 1996, SP 1. 4. QuickTime 3.0; Apple Computer, Inc.: 1 Infinite Loop, Cupertino, CA 95014-2084. 5. Jacobsen, J. J.; Moore, J. W. Chemistry Comes Alive!, Volume 1; J. Chem. Educ. Software 1997, SP 18. 6. Jacobsen, J. J.; Moore, J. W. Chemistry Comes Alive!, Volume 2; J. Chem. Educ. Software 1998, SP 21. 7. Moore, J. W.; Jacobsen, J. J.; Hunsberger, L. R.; Gammon, S. D.; Jetzer, K. H.; Zimmerman, J. ChemDemos Videodisc; J. Chem. Educ. Software 1994, SP 8. 8. Moore, J. W.; Jacobsen, J. J.; Jetzer, K. H.; Gilbert, G.; Mattes, F.; Phillips, D.; Lisensky, G.; Zweerink, G. ChemDemos II; J. Chem. Educ. Software 1996, SP 14. 9. Jacobsen, J. J.; Jetzer, K. H.; Patani, N.; Zimmerman, J. Titration Techniques Videodisc; J. Chem. Educ. Software 1995, SP 9.

    JCE Software CD-ROMs. In addition to more than 100 traditional computer programs and videodiscs, JCE Software has published nine CD-ROMs and four videotapes. Recently published CDs now available include:

    • JCE CD 98
    • Solid State Resources, 2nd Edition
    • General Chemistry Collection, 2nd Edition (Student Edition)
    • Chemistry Comes Alive!, Volumes 1 and 2
    • Flying over Atoms
    Below are some images from JCE Software CD-ROMs. Information for all CDs can be found on our WWW site.

    Ordering and Information. JCE Software is a publication of the Journal of Chemical Education. There is an order form inserted in this issue that provides prices and other ordering information. If this card is not available or if you need additional information, contact: JCE Software, University of Wisconsin-Madison, 1101 University Avenue, Madison, WI 53706-1396; phone: 608/262-5153 or 800/991-5534; fax: 608/265-8094; email: jcesoft@chem.wisc.edu. Information about all of our publications (including abstracts, descriptions, and updates) is available from our World Wide Web site: http://jchemed.chem.wisc.edu/JCESoft/

  19. Development of a video-guided real-time patient motion monitoring system.

    PubMed

    Ju, Sang Gyu; Huh, Woong; Hong, Chae-Seon; Kim, Jin Sung; Shin, Jung Suk; Shin, Eunhyuk; Han, Youngyih; Ahn, Yong Chan; Park, Hee Chul; Choi, Doo Ho

    2012-05-01

    The authors developed a video image-guided real-time patient motion monitoring (VGRPM) system using PC-cams, and its clinical utility was evaluated using a motion phantom. The VGRPM system has three components: (1) an image acquisition device consisting of two PC-cams, (2) a main control computer with a radiation signal controller and warning system, and (3) patient motion analysis software developed in-house. The intelligent patient motion monitoring system was designed for synchronization with a beam on/off trigger signal in order to limit operation to during treatment time only and to enable system automation. During each treatment session, an initial image of the patient is acquired as soon as radiation starts and is compared with subsequent live images, which can be acquired at up to 30 fps by the real-time frame difference-based analysis software. When the error range exceeds the set criteria (δ(movement)) due to patient movement, a warning message is generated in the form of light and sound. The described procedure repeats automatically for each patient. A motion phantom, which operates by moving a distance of 0.1, 0.2, 0.3, 0.5, and 1.0 cm for 1 and 2 s, respectively, was used to evaluate the system performance. The authors measured optimal δ(movement) for clinical use, the minimum distance that can be detected with this system, and the response time of the whole system using a video analysis technique. The stability of the system in a linear accelerator unit was evaluated for a period of 6 months. As a result of the moving phantom test, the δ(movement) for detection of all simulated phantom motion except the 0.1 cm movement was determined to be 0.2% of total number of pixels in the initial image. The system can detect phantom motion as small as 0.2 cm. The measured response time from the detection of phantom movement to generation of the warning signal was 0.1 s. No significant functional disorder of the system was observed during the testing period. 
The VGRPM system has a convenient design, which synchronizes initiation of the analysis with a beam on/off signal from the treatment machine and may contribute to a reduction in treatment error due to patient motion and increase the accuracy of treatment dose delivery.
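    The detection rule described above (flag motion when the fraction of changed pixels exceeds δ(movement), set here to 0.2% of the pixels in the initial image) can be sketched as follows. The function name, flat grayscale frame representation, and per-pixel difference threshold are illustrative assumptions, not the authors' implementation:

```python
def motion_detected(ref_frame, live_frame, delta_movement=0.002, pixel_threshold=10):
    """Frame-difference test: flag motion when the fraction of pixels whose
    absolute difference from the reference frame exceeds pixel_threshold is
    larger than delta_movement. Frames are flat lists of grayscale values."""
    changed = sum(1 for a, b in zip(ref_frame, live_frame) if abs(a - b) > pixel_threshold)
    return changed / len(ref_frame) > delta_movement

ref = [50] * 1000
live = [50] * 997 + [200] * 3  # 3 of 1000 pixels changed -> 0.3% > 0.2%
print(motion_detected(ref, live))  # True at the paper's 0.2% criterion
```

In the real system this comparison would run on each live camera frame (up to 30 fps) against the initial image captured when the radiation beam turns on, triggering the light-and-sound warning when it returns true.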

  20. Digital video timing analyzer for the evaluation of PC-based real-time simulation systems

    NASA Astrophysics Data System (ADS)

    Jones, Shawn R.; Crosby, Jay L.; Terry, John E., Jr.

    2009-05-01

    Due to the rapid acceleration in technology and the drop in costs, the use of commercial off-the-shelf (COTS) PC-based hardware and software components for digital and hardware-in-the-loop (HWIL) simulations has increased. However, the increase in PC-based components creates new challenges for HWIL test facilities such as cost-effective hardware and software selection, system configuration and integration, performance testing, and simulation verification/validation. This paper will discuss how the Digital Video Timing Analyzer (DiViTA) installed in the Aviation and Missile Research, Development and Engineering Center (AMRDEC) provides quantitative characterization data for PC-based real-time scene generation systems. An overview of the DiViTA is provided followed by details on measurement techniques, applications, and real-world examples of system benefits.

  1. A semi-automated software tool to study treadmill locomotion in the rat: from experiment videos to statistical gait analysis.

    PubMed

    Gravel, P; Tremblay, M; Leblond, H; Rossignol, S; de Guise, J A

    2010-07-15

    A computer-aided method for the tracking of morphological markers in fluoroscopic images of a rat walking on a treadmill is presented and validated. The markers correspond to bone articulations in a hind leg and are used to define the hip, knee, ankle and metatarsophalangeal joints. The method allows a user to identify, using a computer mouse, about 20% of the marker positions in a video and interpolate their trajectories from frame-to-frame. This results in a seven-fold speed improvement in detecting markers. This also eliminates confusion problems due to legs crossing and blurred images. The video images are corrected for geometric distortions from the X-ray camera, wavelet denoised, to preserve the sharpness of minute bone structures, and contrast enhanced. From those images, the marker positions across video frames are extracted, corrected for rat "solid body" motions on the treadmill, and used to compute the positional and angular gait patterns. Robust Bootstrap estimates of those gait patterns and their prediction and confidence bands are finally generated. The gait patterns are invaluable tools to study the locomotion of healthy animals or the complex process of locomotion recovery in animals with injuries. The method could, in principle, be adapted to analyze the locomotion of other animals as long as a fluoroscopic imager and a treadmill are available. Copyright 2010 Elsevier B.V. All rights reserved.
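    The frame-to-frame interpolation of sparsely clicked marker positions can be illustrated with a simple linear scheme. This is a hypothetical sketch, not the authors' code; real gait trajectories would likely call for smoother (e.g. spline) interpolation:

```python
def interpolate_track(keyframes, n_frames):
    """Linearly interpolate a marker trajectory from sparse user-clicked keyframes.
    keyframes: {frame_index: (x, y)}; returns an (x, y) position for every frame."""
    frames = sorted(keyframes)
    track = []
    for f in range(n_frames):
        if f <= frames[0]:
            track.append(keyframes[frames[0]])       # before first click: hold
        elif f >= frames[-1]:
            track.append(keyframes[frames[-1]])      # after last click: hold
        else:
            lo = max(k for k in frames if k <= f)    # surrounding keyframes
            hi = min(k for k in frames if k >= f)
            if lo == hi:
                track.append(keyframes[lo])          # frame was clicked directly
            else:
                t = (f - lo) / (hi - lo)
                x0, y0 = keyframes[lo]
                x1, y1 = keyframes[hi]
                track.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
    return track

# Ankle marker clicked on frames 0, 4, and 8; the frames in between are filled in.
track = interpolate_track({0: (10, 20), 4: (18, 28), 8: (10, 20)}, 9)
print(track[2])  # midway between frames 0 and 4 -> (14.0, 24.0)
```

Clicking roughly one frame in five and interpolating the rest is what yields the seven-fold speed-up reported above, while keeping a human in the loop to resolve leg crossings and blurred frames.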

  2. Analyzing Virtual Physics Simulations with Tracker

    NASA Astrophysics Data System (ADS)

    Claessens, Tom

    2017-12-01

    In the physics teaching community, Tracker is well known as user-friendly, open-source video analysis software, authored by Douglas Brown. With this tool, the user can trace markers indicated on a video or on stroboscopic photos and perform kinematic analyses. Tracker also includes a data modeling tool that allows one to fit theoretical equations of motion to experimentally obtained data. In the field of particle mechanics, Tracker has been effectively used for learning and teaching about projectile motion, "toss-up" and free-fall vertical motion, and to explain the principle of mechanical energy conservation. Tracker has also been used successfully in rigid-body mechanics to interpret the results of experiments with rolling/slipping cylinders and moving rods. In this work, I propose an original method in which Tracker is used to analyze virtual computer simulations created with a physics-based motion solver, instead of video recordings or stroboscopic photos. This could be an interesting approach to studying kinematics and dynamics problems in physics education, in particular when there is no or limited access to physical labs. I demonstrate the working method with a typical (but quite challenging) problem in classical mechanics: a slipping/rolling cylinder on a rough surface.
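    The kind of model fitting Tracker performs on video-derived position data can be approximated in a few lines. The finite-difference estimator below is an illustrative stand-in for Tracker's data modeling tool, not its actual fitting routine, applied to synthetic free-fall samples at a typical 30 fps video rate:

```python
def estimate_g(times, heights):
    """Estimate gravitational acceleration from evenly spaced (t, y) samples of
    free fall, using second finite differences:
    y'' ~ (y[i-1] - 2*y[i] + y[i+1]) / dt**2, and y'' = -g for free fall."""
    dt = times[1] - times[0]
    accs = [(heights[i - 1] - 2 * heights[i] + heights[i + 1]) / dt ** 2
            for i in range(1, len(heights) - 1)]
    return -sum(accs) / len(accs)

# Synthetic "video" data: y = 2.0 + 0.5 t - 4.905 t^2, sampled at 30 fps.
dt = 1 / 30
ts = [i * dt for i in range(10)]
ys = [2.0 + 0.5 * t - 4.905 * t ** 2 for t in ts]
print(round(estimate_g(ts, ys), 2))  # 9.81
```

With simulator output instead of pixel coordinates, as the paper proposes, the same workflow applies; noisy real video data would favor a least-squares quadratic fit over raw second differences.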

  3. Real-time video compressing under DSP/BIOS

    NASA Astrophysics Data System (ADS)

    Chen, Qiu-ping; Li, Gui-ju

    2009-10-01

    This paper presents real-time MPEG-4 Simple Profile video compression based on a DSP processor. The video compression framework is built around a TMS320C6416 microprocessor, a TDS510 simulator, and a PC. It uses the embedded real-time operating system DSP/BIOS and its API functions to create periodic functions, tasks, interrupts, etc., realizing real-time video compression. To address data transfer within the system, and based on the architecture of the C64x DSP, double-buffer switching and the EDMA data transfer controller are used to move data from external to internal memory, so that data transfer and processing occur at the same time; architecture-level optimizations are used to improve the software pipeline. The system uses DSP/BIOS for multi-thread scheduling, and the whole system sustains high-speed transfer of large volumes of data. Experimental results show the encoder achieves real-time encoding of 768×576, 25 frame/s video images.
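    The double-buffer ("ping-pong") scheme used above can be sketched as follows. On the real C64x the buffer fill is an asynchronous EDMA transfer that overlaps the processing of the other buffer; this sequential sketch models only the buffer alternation, and the buffer size and "encode" step are illustrative:

```python
def ping_pong_process(samples, buffer_size=4):
    """Simulate double ("ping-pong") buffering: while one buffer is being
    processed, the other would be filled with the next block of data.
    Here the fill (standing in for an EDMA transfer) and the processing
    (standing in for encoding) run sequentially for clarity."""
    buffers = [[], []]
    active = 0
    processed = []
    for start in range(0, len(samples), buffer_size):
        buffers[active] = samples[start:start + buffer_size]  # "EDMA" fill
        processed.extend(x * 2 for x in buffers[active])      # stand-in "encode"
        active ^= 1                                           # swap ping <-> pong
    return processed

print(ping_pong_process(list(range(6)), buffer_size=4))  # [0, 2, 4, 6, 8, 10]
```

The point of the pattern is that internal memory on the DSP is small and fast: the CPU only ever touches on-chip buffers, while the DMA engine streams frame data in and out of external memory behind its back.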

  4. Improving the occupational skills of students with intellectual disability by applying video prompting combined with dance pads.

    PubMed

    Lin, Mei-Lan; Chiang, Ming-Shan; Shih, Ching-Hsiang; Li, Meng-Fang

    2018-01-01

    Individuals with intellectual disability (ID) are prone to inattention, are slow in learning and reaction, and have deficits in memory skills. Providing proper vocational education and training for individuals with intellectual disability is able to enhance their occupational skills. This study applied video prompting to provide instructional prompts to help participants accurately perform an assigned occupational activity. A control system installed with developed software was used to turn a standard dance pad into a sensor to detect the participants' standing position and to automatically trigger video prompting. The results show that the participants' correct performance of the target behaviour improved significantly after their exposure to the video prompting intervention, and this positive outcome remained consistent during the maintenance phase. Video prompting combined with dance pads was a feasible approach to improving the occupational skills of the three students with intellectual disability. © 2017 John Wiley & Sons Ltd.

  5. Laws of reflection and Snell's law revisited by video modeling

    NASA Astrophysics Data System (ADS)

    Rodrigues, M.; Simeão Carvalho, P.

    2014-07-01

    Video modelling is nowadays used as a tool for teaching and learning several topics in physics, most of them related to kinematics. In this work we show how video modelling can be used for demonstrations and experimental teaching in optics, namely the laws of reflection and the well-known Snell's law. Videos were recorded with a photo camera at 30 frames/s and analysed with the open-source software Tracker. Data collected from several frames were treated with the Data Tool module, and graphs were built to obtain relations between the incident, reflected, and refracted angles, as well as to determine the refractive index of Perspex. These videos can be freely distributed on the web and explored with students within the classroom, or as homework assignments to improve students' understanding of specific contents. They offer large didactic potential for teaching basic optics in high school with an interactive methodology.
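    Once the incident and refracted angles have been read off the video frames, the refractive-index calculation is a one-liner. The angle values below are illustrative, not the paper's measurements; they are chosen to give a value near the typical n ≈ 1.49 of Perspex:

```python
import math

def refractive_index(theta_i_deg, theta_r_deg):
    """Snell's law for light entering a medium from air (n_air ~ 1):
    n = sin(theta_incident) / sin(theta_refracted), angles measured from the normal."""
    return math.sin(math.radians(theta_i_deg)) / math.sin(math.radians(theta_r_deg))

# Example angle pair of the kind read from Tracker frames.
print(round(refractive_index(45.0, 28.3), 2))  # 1.49
```

Plotting sin(θi) against sin(θr) for several frames, as the Data Tool module allows, gives n as the slope of a straight line, which is more robust than any single angle pair.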

  6. A review of video security training and assessment-systems and their applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cellucci, J.; Hall, R.J.

    1991-01-01

    This paper reports that during the last 10 years computer-aided video data collection and playback systems have been used as nuclear facility security training and assessment tools with varying degrees of success. These mobile systems have been used by trained security personnel for response-force training, vulnerability assessment, force-on-force exercises, and crisis management. Typically, synchronous recordings from multiple video cameras, communications audio, and digital sensor inputs are played back to the exercise participants and then edited for training and briefing. Factors that influence user acceptance include frequency of use, the demands placed on security personnel, fear of punishment, user training requirements, and equipment cost. The introduction of S-VHS video and new software for scenario planning, video editing, and data reduction should bring about a wider range of security applications and supply the opportunity for significant cost sharing with other user groups.

  7. ONR Far East Scientific Information Bulletin. Volume 15, Number 3, July- September 1990

    DTIC Science & Technology

    1990-09-01

    [Extraction-garbled excerpt; column text interleaved. Recoverable fragments mention the IMSL commercial software library, programming and video tapes produced in Mimura's laboratory at Hiroshima, experiments visualized via video tape, composite applications such as torpedo tubes and aerospace radomes, and Japanese scientific databases.]

  8. Evaluation of automated decisionmaking methodologies and development of an integrated robotic system simulation

    NASA Technical Reports Server (NTRS)

    Haley, D. C.; Almand, B. J.; Thomas, M. M.; Krauze, L. D.; Gremban, K. D.; Sanborn, J. C.; Kelly, J. H.; Depkovich, T. M.

    1984-01-01

    A generic computer simulation for manipulator systems (ROBSIM) was implemented and the specific technologies necessary to increase the role of automation in various missions were developed. The specific items developed are: (1) capability for definition of a manipulator system consisting of multiple arms, load objects, and an environment; (2) capability for kinematic analysis, requirements analysis, and response simulation of manipulator motion; (3) postprocessing options such as graphic replay of simulated motion and manipulator parameter plotting; (4) investigation and simulation of various control methods including manual force/torque and active compliance control; (5) evaluation and implementation of three obstacle avoidance methods; (6) video simulation and edge detection; and (7) software simulation validation.

  9. Design and develop a video conferencing framework for real-time telemedicine applications using secure group-based communication architecture.

    PubMed

    Mat Kiah, M L; Al-Bakri, S H; Zaidan, A A; Zaidan, B B; Hussain, Muzammil

    2014-10-01

    One of the applications of modern technology in telemedicine is video conferencing. An alternative to traveling to attend a conference or meeting, video conferencing is becoming increasingly popular among hospitals. By using this technology, doctors can help patients who are unable to physically visit hospitals. Video conferencing particularly benefits patients from rural areas, where good doctors are not always available. Telemedicine has proven to be a blessing to patients who have no access to the best treatment. A telemedicine system consists of customized hardware and software at two locations, namely, at the patient's and the doctor's end. In such cases, the video streams of the conferencing parties may contain highly sensitive information. Thus, real-time data security is one of the most important requirements when designing video conferencing systems. This study proposes a secure framework for video conferencing systems and a complete management solution for secure video conferencing groups. Java Media Framework Application Programming Interface classes are used to design and test the proposed secure framework. Real-time Transport Protocol over User Datagram Protocol is used to transmit the encrypted audio and video streams, and RSA and AES algorithms are used to provide the required security services. Results show that the encryption algorithm insignificantly increases the video conferencing computation time.
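    The security design above (AES for the audio/video streams, RSA to exchange the session key) follows a standard hybrid pattern: an asymmetric cipher protects a small symmetric key, and the symmetric cipher protects the bulky media packets. The sketch below shows only that structure, using a stdlib stand-in stream cipher; it is NOT real AES or RSA and must not be used for actual security:

```python
import hashlib
import secrets

def keystream_xor(key, data):
    """Stand-in stream cipher: XOR data with a SHA-256-based keystream in
    counter mode. Illustrates the structure of symmetric packet encryption
    only; a real system would use AES as in the paper."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(b ^ k for b, k in zip(data, out))

# 1. Sender generates a random session key (in the paper: an AES key).
session_key = secrets.token_bytes(32)
# 2. The session key would be sent to each receiver RSA-encrypted (omitted here).
# 3. Each audio/video packet is encrypted with the session key before RTP transport.
packet = b"frame 0001: video conferencing payload"
ciphertext = keystream_xor(session_key, packet)
# 4. Receivers decrypt with the same session key (the stream cipher is symmetric).
assert keystream_xor(session_key, ciphertext) == packet
print("round trip ok")
```

The finding that encryption adds little computation time is consistent with this split: the expensive asymmetric operation happens once per session (or per group-membership change), not once per packet.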

  10. System Synchronizes Recordings from Separated Video Cameras

    NASA Technical Reports Server (NTRS)

    Nail, William; Nail, William L.; Nail, Jasper M.; Le, Doung T.

    2009-01-01

    A system of electronic hardware and software for synchronizing recordings from multiple, physically separated video cameras is being developed, primarily for use in multiple-look-angle video production. The system, the time code used in the system, and the underlying method of synchronization upon which the design of the system is based are denoted generally by the term "Geo-TimeCode(TradeMark)." The system is embodied mostly in compact, lightweight, portable units (see figure) denoted video time-code units (VTUs) - one VTU for each video camera. The system is scalable in that any number of camera recordings can be synchronized. The estimated retail price per unit would be about $350 (in 2006 dollars). The need for this or another synchronization system external to video cameras arises because most video cameras do not include internal means for maintaining synchronization with other video cameras. Unlike prior video-camera-synchronization systems, this system does not depend on continuous cable or radio links between cameras (however, it does depend on occasional cable links lasting a few seconds). Also, whereas the time codes used in prior video-camera-synchronization systems typically repeat after 24 hours, the time code used in this system does not repeat for slightly more than 136 years; hence, this system is much better suited for long-term deployment of multiple cameras.
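    The quoted repeat period of slightly more than 136 years is consistent with a seconds counter 32 bits wide. The abstract does not specify the Geo-TimeCode format, so the arithmetic below is only a plausibility check, not a description of the actual encoding:

```python
SECONDS_PER_YEAR = 365.25 * 24 * 3600  # Julian year in seconds

# An unsigned 32-bit seconds counter wraps after 2**32 seconds:
span_years = 2 ** 32 / SECONDS_PER_YEAR
print(round(span_years, 1))  # 136.1
```

By contrast, conventional SMPTE-style timecodes encode hours:minutes:seconds:frames and wrap every 24 hours, which is the limitation the abstract contrasts against.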

  11. Curriculum Counselor.

    ERIC Educational Resources Information Center

    Doty, Robert

    1995-01-01

    Features Internet sites that are sources for lesson plans, materials, group discussion topics, activities, test questions, computer software, and videos for K-12 education. Resources highlighted include CNN Newsroom, KidLink, and AskERIC. (AEF)

  12. 15 CFR 1180.2 - Definitions.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... as that term is defined in Section 4 of the Stevenson-Wydler Technology Innovation Act of 1980, as..., software, audio/video production, technology application assessment generated pursuant to Section 11(c) of...

  13. 360° Film Brings Bombed Church to Life

    NASA Astrophysics Data System (ADS)

    Kwiatek, K.

    2011-09-01

    This paper explores how a computer-generated reconstruction of a church can be adapted to create a panoramic film that is presented in a panoramic viewer and also on a wrap-around projection system. It focuses on the fundamental principles of creating 360° films, not only in 3D modelling software, but also presents how to record 360° video using panoramic cameras inside the heritage site. These issues are explored in a case study of Charles Church in Plymouth, UK, which was bombed in 1941 and has never been rebuilt. The generation of a 3D model of the bombed church started from the creation of five spherical panoramas, processed using Autodesk ImageModeler software. The processed files were imported and merged together in Autodesk 3ds Max, where a visualisation of the ruin was produced. A number of historical images were found, and this collection enabled a virtual reconstruction of the site. The aspect of merging two still or two video panoramas (one from 3D modelling software, the other recorded on site) from the same locations or with the same trajectories is also discussed. The prototype 360° non-linear film tells the narrative of a wartime wedding that occurred in this church. The film was presented on two 360° screens, where members of the audience could decide whether to continue the ceremony or to run away when the bombing of the church starts. 3D modelling software made it possible to render a number of different alternatives (360° images and 360° video). Immersive environments empower the visitor to imagine the building before it was destroyed.
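A 360° film of this kind is typically stored as an equirectangular image, in which pixel position maps directly to viewing direction. A minimal sketch of that mapping (the convention below is illustrative, not taken from the paper):

```python
def equirect_to_angles(u, v, width, height):
    """Map a pixel (u, v) in an equirectangular 360-degree frame to
    (yaw, pitch) in degrees: yaw spans -180..180, pitch 90 (up) to -90
    (down). Illustrative convention only."""
    yaw = (u / width) * 360.0 - 180.0
    pitch = 90.0 - (v / height) * 180.0
    return yaw, pitch

# The image centre looks straight ahead at the horizon:
print(equirect_to_angles(1920, 960, 3840, 1920))  # → (0.0, 0.0)
```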

  14. Videos for Science Communication and Nature Interpretation: The TIB|AV-Portal as Resource.

    NASA Astrophysics Data System (ADS)

    Marín Arraiza, Paloma; Plank, Margret; Löwe, Peter

    2016-04-01

    Scientific audiovisual media such as videos of research, interactive displays or computer animations have become an important part of scientific communication and education. Dynamic phenomena can be described better by audiovisual media than by words and pictures. For this reason, scientific videos help us to understand and discuss environmental phenomena more efficiently. Moreover, the creation of scientific videos is easier than ever, thanks to mobile devices and open-source editing software. Video clips, webinars, and even the interactive part of a PICO are formats of scientific audiovisual media used in the geosciences. This type of media translates location-referenced science communication, such as environmental interpretation, into computer-based science communication. A new form of science communication is video abstracting. A video abstract is a three- to five-minute video statement that provides background information about a research paper. It also gives authors the opportunity to present their research activities to a wider audience. Since this kind of media has become an important part of scientific communication, there is a need for reliable infrastructures capable of managing the digital assets researchers generate. Using the use case of video abstracts as a reference, this paper gives an overview of the activities of the German National Library of Science and Technology (TIB) regarding publishing and linking audiovisual media in a scientifically sound way. The TIB, in cooperation with the Hasso Plattner Institute (HPI), developed a web-based portal (av.tib.eu) that optimises access to scientific videos in the fields of science and technology. Videos from the realms of science and technology can easily be uploaded onto the TIB|AV Portal. Within a short period of time the videos are assigned a digital object identifier (DOI). This enables them to be referenced, cited, and linked (e.g. to the relevant article or further supplementary materials). By using media fragment identifiers, not only the whole video but also individual parts of it can be cited. Doing so, users are also likely to find high-quality related content (for instance, a video abstract and the corresponding article, or an expedition documentary and its field notebook). Based on automatic analysis of speech, images and text within the videos, a large amount of metadata associated with the segments of the video is generated automatically. These metadata enhance the searchability of the video and make it easier to retrieve and interlink meaningful parts of it. This new and reliable library-driven infrastructure allows all types of data to be discoverable, accessible, citable, freely reusable, and interlinked. Therefore, it simplifies science communication.
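The media fragment identifiers mentioned above follow the W3C Media Fragments URI syntax, in which a temporal range is appended to the URL as `#t=start,end`. A minimal parser for the plain-seconds form (the URL below is a made-up example, not a real portal address; the full spec also allows npt/clock formats):

```python
def parse_temporal_fragment(url: str):
    """Parse a W3C Media Fragments temporal identifier, e.g.
    'https://example.org/video#t=90,120' -> (90.0, 120.0) seconds.
    Handles only the plain 't=start,end' form."""
    _, _, frag = url.partition("#")
    if not frag.startswith("t="):
        return None
    start, _, end = frag[2:].partition(",")
    return (float(start) if start else 0.0,
            float(end) if end else None)

print(parse_temporal_fragment("https://example.org/video#t=90,120"))  # → (90.0, 120.0)
```

A URI like this lets a citation point at one meaningful segment (say, a single experiment in an expedition documentary) rather than the whole video.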

  15. A professional and cost effective digital video editing and image storage system for the operating room.

    PubMed

    Scollato, A; Perrini, P; Benedetto, N; Di Lorenzo, N

    2007-06-01

    We propose an easy-to-construct digital video editing system that is ideal for producing video documentation and still images. A digital video editing system applicable to many video sources in the operating room is described in detail. The proposed system has proved easy to use and permits one to obtain videography quickly and easily. By mixing the different streams of video input from all the devices in use in the operating room and applying filters and effects, it produces a final, professional end-product. Recording on a DVD provides an inexpensive, portable and easy-to-use medium on which to store the footage or re-edit it at a later time. From stored videography it is easy to extract high-quality still images useful for teaching, presentations and publications. In conclusion, digital videography and still photography can easily be recorded by the proposed system, producing high-quality video recordings. The use of FireWire ports provides good compatibility with next-generation hardware and software. The high standard of quality makes the proposed system one of the lowest-priced products available today.

  16. VAP/VAT: video analytics platform and test bed for testing and deploying video analytics

    NASA Astrophysics Data System (ADS)

    Gorodnichy, Dmitry O.; Dubrofsky, Elan

    2010-04-01

    Deploying video analytics (VA) in operational environments is extremely challenging. This paper presents a methodological approach developed by the Video Surveillance and Biometrics Section (VSB) of the Science and Engineering Directorate (S&E) of the Canada Border Services Agency (CBSA) to resolve these problems. A three-phase approach to enable VA deployment within an operational agency is presented, and the Video Analytics Platform and Testbed (VAP/VAT) developed by the VSB section is introduced. In addition to allowing the integration of third-party and in-house-built VA codes into an existing video surveillance infrastructure, VAP/VAT also allows the agency to conduct an unbiased performance evaluation of the cameras and VA software available on the market. VAP/VAT consists of two components: EventCapture, which serves to automatically detect a "visual event," and EventBrowser, which serves to display and peruse the "visual details" captured at that event. To deal with both open-architecture and closed-architecture cameras, two video-feed capture mechanisms have been developed within the EventCapture component: IPCamCapture and ScreenCapture.
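The kind of trigger a component like EventCapture applies can be illustrated with simple frame differencing. This is hypothetical logic for illustration, not the CBSA implementation: flag an event whenever the mean absolute difference between consecutive frames exceeds a threshold.

```python
def mean_abs_diff(frame_a, frame_b):
    """Mean absolute pixel difference between two equal-length frames."""
    return sum(abs(a - b) for a, b in zip(frame_a, frame_b)) / len(frame_a)

def detect_events(frames, threshold=10.0):
    """Return indices of frames where a 'visual event' (large change
    from the previous frame) occurred."""
    events = []
    for i in range(1, len(frames)):
        if mean_abs_diff(frames[i - 1], frames[i]) > threshold:
            events.append(i)
    return events

static = [100] * 64                  # toy 8x8 grey frame, flattened
moved = [100] * 32 + [180] * 32     # half the pixels changed
print(detect_events([static, static, moved, moved]))  # → [2]
```

A production detector would add background modelling and noise suppression, but the capture/browse split above (detect first, review details later) follows the same pattern.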

  17. mMass 3: a cross-platform software environment for precise analysis of mass spectrometric data.

    PubMed

    Strohalm, Martin; Kavan, Daniel; Novák, Petr; Volný, Michael; Havlícek, Vladimír

    2010-06-01

    While tools for the automated analysis of MS and LC-MS/MS data are continuously improving, it is still often the case that at the end of an experiment, the mass spectrometrist will spend time carefully examining individual spectra. Current software support is mostly provided only by the instrument vendors, and the available software tools are often instrument-dependent. Here we present a new generation of mMass, a cross-platform environment for the precise analysis of individual mass spectra. The software covers a wide range of processing tasks such as import from various data formats, smoothing, baseline correction, peak picking, deisotoping, charge determination, and recalibration. Functions presented in the earlier versions such as in silico digestion and fragmentation were redesigned and improved. In addition to Mascot, an interface for ProFound has been implemented. A specific tool is available for isotopic pattern modeling to enable precise data validation. The largest available lipid database (from the LIPID MAPS Consortium) has been incorporated and together with the new compound search tool lipids can be rapidly identified. In addition, the user can define custom libraries of compounds and use them analogously. The new version of mMass is based on a stand-alone Python library, which provides the basic functionality for data processing and interpretation. This library can serve as a good starting point for other developers in their projects. Binary distributions of mMass, its source code, a detailed user's guide, and video tutorials are freely available from www.mmass.org .
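The peak-picking step mentioned above can be illustrated with a toy local-maxima detector; mMass's own algorithm is more elaborate (baseline correction, signal-to-noise handling), so treat this purely as a sketch of the idea:

```python
def pick_peaks(mz, intensity, min_intensity=50.0):
    """Return (m/z, intensity) pairs at local maxima above a simple
    intensity threshold. Toy stand-in for a real peak picker."""
    peaks = []
    for i in range(1, len(intensity) - 1):
        if (intensity[i] > intensity[i - 1]
                and intensity[i] >= intensity[i + 1]
                and intensity[i] >= min_intensity):
            peaks.append((mz[i], intensity[i]))
    return peaks

mz = [100.0, 100.1, 100.2, 100.3, 100.4]
inten = [5.0, 80.0, 12.0, 60.0, 4.0]
print(pick_peaks(mz, inten))  # → [(100.1, 80.0), (100.3, 60.0)]
```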

  18. Power, Avionics and Software Communication Network Architecture

    NASA Technical Reports Server (NTRS)

    Ivancic, William D.; Sands, Obed S.; Bakula, Casey J.; Oldham, Daniel R.; Wright, Ted; Bradish, Martin A.; Klebau, Joseph M.

    2014-01-01

    This document describes the communication architecture for the Power, Avionics and Software (PAS) 2.0 subsystem for the Advanced Extravehicular Mobile Unit (AEMU). The following systems are described in detail: Caution Warn- ing and Control System, Informatics, Storage, Video, Audio, Communication, and Monitoring Test and Validation. This document also provides some background as well as the purpose and goals of the PAS project at Glenn Research Center (GRC).

  19. Open Source Subtitle Editor Software Study for Section 508 Close Caption Applications

    NASA Technical Reports Server (NTRS)

    Murphy, F. Brandon

    2013-01-01

    This paper will focus on a specific item within the NASA Electronic Information Accessibility Policy - multimedia presentations shall have synchronized captions - thus making information accessible to persons with a hearing impairment. A synchronized caption assists a person with a hearing or cognitive disability in accessing the same information as everyone else. This paper focuses on the research and implementation of closed-caption (CC, or subtitle) support for video multimedia. The goal of this research is to identify the best available open-source (free) software for meeting the synchronized-caption requirement and achieving savings, while also meeting the security requirements for Government information integrity and assurance. CC and subtitling are processes that display text within a video to provide additional or interpretive information for those who may need it or those who choose it. Closed captions typically show the transcription of the audio portion of a program (video) as it occurs (either verbatim or in edited form), sometimes including non-speech elements (such as sound effects). The transcript can be provided by a third-party source or can be extracted word for word from the video. This feature can be made available for videos in two forms: Soft-Coded or Hard-Coded. Soft-Coded is the optional version of CC, which the viewer can choose to turn on or off. Most of the time, when the Soft-Coded option is used, the transcript is also provided to the viewer alongside the video. This option is subject to compromise, as the transcript is merely a text file that can be changed by anyone who has access to it; with this option, the integrity of the CC is at the mercy of the user. Hard-Coded CC is a more permanent form of CC: a Hard-Coded transcript is embedded within the video itself, without the option of removal.
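Soft-coded captions are typically shipped as a sidecar file such as SubRip (.srt), one of the formats a subtitle editor in a study like this would emit. A minimal writer for that format (an illustrative helper, not part of the surveyed software):

```python
def srt_timestamp(ms: int) -> str:
    """Format milliseconds as the SubRip timestamp 'HH:MM:SS,mmm'."""
    h, rem = divmod(ms, 3_600_000)
    m, rem = divmod(rem, 60_000)
    s, ms = divmod(rem, 1_000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def to_srt(cues):
    """cues: list of (start_ms, end_ms, text) tuples -> .srt file body."""
    blocks = []
    for i, (start, end, text) in enumerate(cues, 1):
        blocks.append(f"{i}\n{srt_timestamp(start)} --> {srt_timestamp(end)}\n{text}\n")
    return "\n".join(blocks)

print(to_srt([(0, 2500, "Hello, world."), (2600, 5000, "[door slams]")]))
```

The `[door slams]` cue shows the non-speech elements the abstract mentions; because the file is plain text, it also illustrates why soft-coded captions need integrity protection.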

  20. Hardware/Software Issues for Video Guidance Systems: The Coreco Frame Grabber

    NASA Technical Reports Server (NTRS)

    Bales, John W.

    1996-01-01

    The F64 frame grabber is a high performance video image acquisition and processing board utilizing the TMS320C40 and TMS34020 processors. The hardware is designed for the ISA 16 bit bus and supports multiple digital or analog cameras. It has an acquisition rate of 40 million pixels per second, with a variable sampling frequency of 510 kHz to MO MHz. The board has a 4MB frame buffer memory expandable to 32 MB, and has a simultaneous acquisition and processing capability. It supports both VGA and RGB displays, and accepts all analog and digital video input standards.

  1. Science Teacher Efficacy and Extrinsic Factors Toward Professional Development Using Video Games in a Design-Based Research Model: The Next Generation of STEM Learning

    NASA Astrophysics Data System (ADS)

    Annetta, Leonard A.; Frazier, Wendy M.; Folta, Elizabeth; Holmes, Shawn; Lamb, Richard; Cheng, Meng-Tzu

    2013-02-01

    Design-based research principles guided the study of 51 secondary science teachers in the second year of a 3-year professional development project. The project entailed the creation of student-centered, inquiry-based science video games. A professional development model appropriate for infusing innovative technologies into standards-based curricula was employed to determine how science teachers' attitudes and efficacy were impacted while designing science-based video games. The study's mixed-method design ascertained teacher efficacy on five factors related to technology and gaming (general computer use, science learning, inquiry teaching and learning, synchronous chat/text, and playing video games) using a web-based survey. Qualitative data in the form of online blog posts were gathered during the project to assist in the triangulation and assessment of teacher efficacy. Data analyses consisted of an analysis of variance and serial coding of teacher reflective responses. Results indicated that participants who used computers daily had higher efficacy for inquiry-based teaching methods and for science teaching and learning. Additional emergent findings revealed possible motivating factors for efficacy. This professional development project focused on inquiry as a pedagogical strategy, standards-based science learning as a means to develop content knowledge, and creating video games as technological knowledge. The project was consistent with the Technological Pedagogical Content Knowledge (TPCK) framework, in which the overlapping circles of the three components indicate the development of an integrated understanding of the suggested relationships. Findings provide suggestions for the development of standards-based science education software, its integration into the curriculum, and strategies for implementing technology into teaching practices.

  2. Video capture virtual reality as a flexible and effective rehabilitation tool

    PubMed Central

    Weiss, Patrice L; Rand, Debbie; Katz, Noomi; Kizony, Rachel

    2004-01-01

    Video capture virtual reality (VR) uses a video camera and software to track movement in a single plane without the need to place markers on specific bodily locations. The user's image is thereby embedded within a simulated environment such that it is possible to interact with animated graphics in a completely natural manner. Although this technology first became available more than 25 years ago, it is only within the past five years that it has been applied in rehabilitation. The objective of this article is to describe the way this technology works, to review its assets relative to other VR platforms, and to provide an overview of some of the major studies that have evaluated the use of video capture technologies for rehabilitation. PMID:15679949

  3. Our experiences with development of digitised video streams and their use in animal-free medical education.

    PubMed

    Cervinka, Miroslav; Cervinková, Zuzana; Novák, Jan; Spicák, Jan; Rudolf, Emil; Peychl, Jan

    2004-06-01

    Alternatives and their teaching are an essential part of the curricula at the Faculty of Medicine. Dynamic screen-based video recordings are the most important type of alternative models employed for teaching purposes. Currently, the majority of teaching materials for this purpose are based on PowerPoint presentations, which are very popular because of their high versatility and visual impact. Furthermore, current developments in the field of image capturing devices and software enable the use of digitised video streams, tailored precisely to the specific situation. Here, we demonstrate that with reasonable financial resources, it is possible to prepare video sequences and to introduce them into the PowerPoint presentation, thereby shaping the teaching process according to individual students' needs and specificities.

  4. A method for the automated processing and analysis of images of ULVWF-platelet strings.

    PubMed

    Reeve, Scott R; Abbitt, Katherine B; Cruise, Thomas D; Hose, D Rodney; Lawford, Patricia V

    2013-01-01

    We present a method for identifying and analysing unusually large von Willebrand factor (ULVWF)-platelet strings in noisy low-quality images. The method requires relatively inexpensive, non-specialist equipment and allows multiple users to be employed in the capture of images. Images are subsequently enhanced and analysed, using custom-written software to perform the processing tasks. The formation and properties of ULVWF-platelet strings released in in vitro flow-based assays have recently become a popular research area. Endothelial cells are incorporated into a flow chamber, chemically stimulated to induce ULVWF release and perfused with isolated platelets which are able to bind to the ULVWF to form strings. The numbers and lengths of the strings released are related to characteristics of the flow. ULVWF-platelet strings are routinely identified by eye from video recordings captured during experiments and analysed manually using basic NIH image software to determine the number of strings and their lengths. This is a laborious, time-consuming task and a single experiment, often consisting of data from four to six dishes of endothelial cells, can take 2 or more days to analyse. The method described here allows analysis of the strings to provide data such as the number and length of strings, number of platelets per string and the distance between each platelet to be found. The software reduces analysis time, and more importantly removes user subjectivity, producing highly reproducible results with an error of less than 2% when compared with detailed manual analysis.
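Once platelet centroids along a string have been located, the reported quantities (string length, platelet count, inter-platelet distance) reduce to simple geometry. A hypothetical sketch of that final step, not the authors' software:

```python
import math

def string_metrics(centroids):
    """centroids: (x, y) pixel coordinates, ordered along the string."""
    gaps = [math.dist(a, b) for a, b in zip(centroids, centroids[1:])]
    return {
        "platelets": len(centroids),   # number of platelets on the string
        "length": sum(gaps),           # total string length in pixels
        "gaps": gaps,                  # distance between successive platelets
    }

print(string_metrics([(0, 0), (3, 4), (6, 8)]))
# → {'platelets': 3, 'length': 10.0, 'gaps': [5.0, 5.0]}
```

Automating this step is what removes the user subjectivity the abstract highlights: every analyst gets the same numbers from the same centroids.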

  5. Colometer: a real-time quality feedback system for screening colonoscopy.

    PubMed

    Filip, Dobromir; Gao, Xuexin; Angulo-Rodríguez, Leticia; Mintchev, Martin P; Devlin, Shane M; Rostom, Alaa; Rosen, Wayne; Andrews, Christopher N

    2012-08-28

    To investigate the performance of a new software-based colonoscopy quality assessment system. The software-based system employs a novel image processing algorithm which detects the levels of image clarity, withdrawal velocity, and level of the bowel preparation in a real-time fashion from live video signal. Threshold levels of image blurriness and the withdrawal velocity below which the visualization could be considered adequate have initially been determined arbitrarily by review of sample colonoscopy videos by two experienced endoscopists. Subsequently, an overall colonoscopy quality rating was computed based on the percentage of the withdrawal time with adequate visualization (scored 1-5; 1, when the percentage was 1%-20%; 2, when the percentage was 21%-40%, etc.). In order to test the proposed velocity and blurriness thresholds, screening colonoscopy withdrawal videos from a specialized ambulatory colon cancer screening center were collected, automatically processed and rated. Quality ratings on the withdrawal were compared to the insertion in the same patients. Then, 3 experienced endoscopists reviewed the collected videos in a blinded fashion and rated the overall quality of each withdrawal (scored 1-5; 1, poor; 3, average; 5, excellent) based on 3 major aspects: image quality, colon preparation, and withdrawal velocity. The automated quality ratings were compared to the averaged endoscopist quality ratings using Spearman correlation coefficient. Fourteen screening colonoscopies were assessed. Adenomatous polyps were detected in 4/14 (29%) of the collected colonoscopy video samples. As a proof of concept, the Colometer software rated colonoscope withdrawal as having better visualization than the insertion in the 10 videos which did not have any polyps (average percent time with adequate visualization: 79% ± 5% for withdrawal and 50% ± 14% for insertion, P < 0.01). Withdrawal times during which no polyps were removed ranged from 4-12 min. 
The median quality rating from the automated system and the reviewers was 3.45 [interquartile range (IQR), 3.1-3.68] and 3.00 (IQR, 2.33-3.67), respectively, for all colonoscopy video samples. The automated rating revealed a strong correlation with the reviewers' ratings (ρ = 0.65, P = 0.01). There was good correlation between the automated overall quality rating and the mean endoscopist withdrawal-speed rating (Spearman r = 0.59, P = 0.03). There was no correlation between the automated overall quality rating and the mean endoscopist image-quality rating (Spearman r = 0.41, P = 0.15). The results from a novel automated real-time colonoscopy quality feedback system agreed strongly with the endoscopists' quality assessments. Further study is required to validate this approach.
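The 1-5 rating rule quoted above (1%-20% → 1, 21%-40% → 2, and so on up to 81%-100% → 5) can be written directly as a 20-point bucketing of the percentage of withdrawal time with adequate visualization:

```python
import math

def quality_score(pct_adequate: float) -> int:
    """Map % of withdrawal time with adequate visualization to a 1-5
    score in 20-point bands, clamped to the valid range."""
    return max(1, min(5, math.ceil(pct_adequate / 20)))

# e.g. the 79% average adequate-visualization figure reported above → 4
print([quality_score(p) for p in (10, 20, 21, 79, 100)])  # → [1, 1, 2, 4, 5]
```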

  6. Seafloor video footage and still-frame grabs from U.S. Geological Survey cruises in Hawaiian nearshore waters

    USGS Publications Warehouse

    Gibbs, Ann E.; Cochran, Susan A.; Tierney, Peter W.

    2013-01-01

    Underwater video footage was collected in nearshore waters (<60-meter depth) off the Hawaiian Islands from 2002 to 2011 as part of the U.S. Geological Survey (USGS) Coastal and Marine Geology Program's Pacific Coral Reef Project, to improve seafloor characterization and for the development and ground-truthing of benthic-habitat maps. This report includes nearly 53 hours of digital underwater video footage collected during four USGS cruises and more than 10,200 still images extracted from the videos, including still frames from every 10 seconds along transect lines, and still frames showing both an overview and a near-bottom view from fixed stations. Environmental Systems Research Institute (ESRI) shapefiles of individual video and still-image locations, and Google Earth kml files with explanatory text and links to the video and still images, are included. This report documents the various camera systems and methods used to collect the videos, and the techniques and software used to convert the analog video tapes into digital data in order to process the images for optimum viewing and to extract the still images, along with a brief summary of each survey cruise.
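Extracting a still every 10 seconds along a transect amounts to sampling frame indices at a fixed interval. A sketch assuming NTSC-rate footage (~29.97 fps); the report's actual tooling and frame rate are not reproduced here:

```python
def still_frame_indices(duration_s: float, fps: float, interval_s: float = 10.0):
    """Frame numbers to grab for one still every interval_s seconds."""
    t, indices = 0.0, []
    while t <= duration_s:
        indices.append(round(t * fps))
        t += interval_s
    return indices

# A 35-second clip at ~29.97 fps yields four stills:
print(still_frame_indices(35, 29.97))  # → [0, 300, 599, 899]
```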

  7. Normalized Temperature Contrast Processing in Flash Infrared Thermography

    NASA Technical Reports Server (NTRS)

    Koshti, Ajay M.

    2016-01-01

    The paper presents further development of the normalized contrast processing method for flash infrared thermography given by the author in US 8,577,120 B1. Methods of computing normalized image (pixel intensity) contrast and normalized temperature contrast are provided, including converting one to the other. Methods of assessing the emissivity of the object, afterglow heat flux, reflection temperature change, and temperature video imaging during flash thermography are also provided. Temperature imaging and normalized temperature contrast imaging provide certain advantages over normalized pixel intensity contrast processing by reducing the effect of reflected energy in images and measurements, thus providing better quantitative data. The subject matter for this paper comes mostly from US 9,066,028 B1 by the author. Examples of normalized-intensity and normalized-temperature processed video images are provided, as are examples of surface temperature, surface temperature rise, and simple contrast video images. Temperature video imaging in flash infrared thermography allows better comparison with flash thermography simulations using commercial software, which provide temperature video as output. Temperature imaging also allows easy comparison of the surface temperature change with the camera temperature sensitivity, or noise-equivalent temperature difference (NETD), to assess the probability of detection (POD) of anomalies.
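One commonly used normalized-temperature-contrast definition in flash thermography divides a pixel's post-flash temperature rise by the rise of a reference (sound) region at the same time step. The exact formulas are defined in the cited patents; the function below is a generic illustration with made-up temperatures, not the patented method:

```python
def normalized_temperature_contrast(T_pixel, T_pixel_pre, T_ref, T_ref_pre):
    """Pixel temperature rise over the reference-region rise at the
    same time step (generic illustration; units cancel)."""
    return (T_pixel - T_pixel_pre) / (T_ref - T_ref_pre)

# A pixel that heats 1.1x as much as the sound reference gives 1.1;
# a defect-free pixel would give 1.0.
print(normalized_temperature_contrast(31.0, 20.0, 30.0, 20.0))
```

Working in temperature rather than raw pixel intensity is what suppresses the reflected-energy component the abstract describes, since the pre-flash reading is subtracted out of both numerator and denominator.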

  8. Analysis of a severe head injury in World Cup alpine skiing.

    PubMed

    Yamazaki, Junya; Gilgien, Matthias; Kleiven, Svein; McIntosh, Andrew S; Nachbauer, Werner; Müller, Erich; Bere, Tone; Bahr, Roald; Krosshaug, Tron

    2015-06-01

    Traumatic brain injury (TBI) is the leading cause of death in alpine skiing. It has been found that helmet use can reduce the incidence of head injuries by between 15% and 60%. However, knowledge on optimal helmet performance criteria in World Cup alpine skiing is currently limited owing to the lack of biomechanical data from real crash situations. This study aimed to estimate impact velocities in a severe TBI case in World Cup alpine skiing. Video sequences from a TBI case in World Cup alpine skiing were analyzed using a model-based image matching technique. Video sequences from four camera views were obtained in full high-definition (1080p) format. A three-dimensional model of the course was built based on accurate measurements of piste landmarks and matched to the background video footage using the animation software Poser 4. A trunk-neck-head model was used for tracking the skier's trajectory. Immediately before head impact, the downward velocity component was estimated to be 8 m·s⁻¹. After impact, the upward velocity was 3 m·s⁻¹, whereas the velocity parallel to the slope surface was reduced from 33 m·s⁻¹ to 22 m·s⁻¹. The frontal plane angular velocity of the head changed from 80 rad·s⁻¹ left tilt immediately before impact to 20 rad·s⁻¹ right tilt immediately after impact. A unique combination of high-definition video footage and accurate measurements of landmarks in the slope made possible a high-quality analysis of head impact velocity in a severe TBI case. The estimates can provide crucial information on how to prevent TBI through helmet performance criteria and design.
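Model-based image matching yields a time series of head positions, from which impact velocities follow by finite differences. The track below uses illustrative numbers chosen to echo the 33 m·s⁻¹ slope-parallel and 8 m·s⁻¹ downward estimates above; it is not the study's data:

```python
def finite_diff_velocity(positions, dt):
    """positions: list of (x, y, z) coordinates in metres, sampled
    every dt seconds; returns per-interval velocity vectors in m/s."""
    vels = []
    for p0, p1 in zip(positions, positions[1:]):
        vels.append(tuple((b - a) / dt for a, b in zip(p0, p1)))
    return vels

# Hypothetical 50 fps footage (dt = 0.02 s): 0.66 m forward and
# 0.16 m downward per frame ≈ 33 m/s parallel, 8 m/s downward.
track = [(0.0, 0.0, 2.00), (0.66, 0.0, 1.84), (1.32, 0.0, 1.68)]
print(finite_diff_velocity(track, 0.02))
```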

  9. Open-source meteor detection software for low-cost single-board computers

    NASA Astrophysics Data System (ADS)

    Vida, D.; Zubović, D.; Šegon, D.; Gural, P.; Cupec, R.

    2016-01-01

    This work aims to overcome the current price threshold of meteor stations which can sometimes deter meteor enthusiasts from owning one. In recent years small card-sized computers became widely available and are used for numerous applications. To utilize such computers for meteor work, software which can run on them is needed. In this paper we present a detailed description of newly-developed open-source software for fireball and meteor detection optimized for running on low-cost single board computers. Furthermore, an update on the development of automated open-source software which will handle video capture, fireball and meteor detection, astrometry and photometry is given.

  10. FoilSim: Basic Aerodynamics Software Created

    NASA Technical Reports Server (NTRS)

    Peterson, Ruth A.

    1999-01-01

    FoilSim is interactive software that simulates the airflow around various shapes of airfoils. The graphical user interface, which looks more like a video game than a learning tool, captures and holds students' interest. The software is a product of NASA Lewis Research Center's Learning Technologies Project, an educational outreach initiative within the High Performance Computing and Communications Program (HPCCP). The airfoil view panel is a simulated view of a wing being tested in a wind tunnel. As students create new wing shapes by moving slider controls that change parameters, the software calculates the resulting lift. FoilSim also displays plots of pressure or airspeed above and below the airfoil surface.
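A stripped-down stand-in for the kind of lift calculation FoilSim performs is thin-airfoil theory (Cl = 2πα) combined with the standard lift equation; this is a textbook sketch, not the actual FoilSim model:

```python
import math

def lift_newtons(alpha_deg, airspeed, wing_area, rho=1.225):
    """Lift in newtons for a thin airfoil at small angle of attack:
    Cl = 2*pi*alpha (alpha in radians), L = 0.5*rho*v^2*S*Cl."""
    cl = 2 * math.pi * math.radians(alpha_deg)
    return 0.5 * rho * airspeed**2 * wing_area * cl

# Illustrative numbers: 5 degrees of incidence, 40 m/s, 1 m^2 of wing.
print(round(lift_newtons(5.0, 40.0, 1.0), 1))
```

Moving a slider in FoilSim corresponds to changing one of these parameters and watching the computed lift respond.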

  11. Development of an ICT-Based Air Column Resonance Learning Media

    NASA Astrophysics Data System (ADS)

    Purjiyanta, Eka; Handayani, Langlang; Marwoto, Putut

    2016-08-01

    Commonly, the sound source used in air column resonance experiments is a tuning fork, which has the disadvantage of suboptimal resonance results because the sound it produces grows steadily weaker. In this study we generated tones of varying frequency using the Audacity software, which were then stored on a mobile phone to serve as the sound source. One advantage of this sound source is its stability, enabling it to produce a consistently strong sound. The movement of water in a glass tube mounted on the resonance apparatus and the tone sounding from the mobile phone were recorded using a video camera. The first, second, and third resonances were recorded for each tone frequency. Because the resulting sound persists, it can be used for the first, second, third, and subsequent resonance experiments. This study aimed to (1) explain how to create tones that can substitute for the tuning-fork sound used in air column resonance experiments, (2) illustrate the sound wave that occurred at the first, second, and third resonances in the experiment, and (3) determine the speed of sound in air. This study used an experimental method. It was concluded that: (1) substitute tones for a tuning-fork sound can be made by using the Audacity software; (2) the form of the sound waves that occurred at the first, second, and third resonances in the air column can be drawn based on the video recordings of the air column resonance; and (3) based on the experimental results, the speed of sound in air is 346.5 m/s, while based on chart analysis with the Logger Pro software, it is 343.9 ± 0.3171 m/s.
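For a closed air column, successive resonances occur half a wavelength apart, so two measured resonance lengths give the speed of sound directly via v = fλ. The lengths below are illustrative values chosen to reproduce the 346.5 m/s result reported above, not the study's raw measurements:

```python
def speed_of_sound(freq_hz, first_len_m, second_len_m):
    """v = f * lambda, with lambda = 2 * (L2 - L1) for consecutive
    resonances of a closed (one-end-open) air column."""
    wavelength = 2 * (second_len_m - first_len_m)
    return freq_hz * wavelength

# e.g. a 500 Hz tone resonating at 0.17 m and 0.5165 m:
print(speed_of_sound(500.0, 0.17, 0.5165))  # ≈ 346.5 m/s
```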

  12. Spatial data software integration - Merging CAD/CAM/mapping with GIS and image processing

    NASA Technical Reports Server (NTRS)

    Logan, Thomas L.; Bryant, Nevin A.

    1987-01-01

    The integration of CAD/CAM/mapping with image processing using geographic information systems (GISs) as the interface is examined. Particular emphasis is given to the development of software interfaces between JPL's Video Image Communication and Retrieval (VICAR)/Imaged Based Information System (IBIS) raster-based GIS and the CAD/CAM/mapping system. The design and functions of the VICAR and IBIS are described. Vector data capture and editing are studied. Various software programs for interfacing between the VICAR/IBIS and CAD/CAM/mapping are presented and analyzed.

  13. PC Scene Generation

    NASA Astrophysics Data System (ADS)

    Buford, James A., Jr.; Cosby, David; Bunfield, Dennis H.; Mayhall, Anthony J.; Trimble, Darian E.

    2007-04-01

    AMRDEC has successfully tested hardware and software for Real-Time Scene Generation for IR and SAL Sensors on COTS PC-based hardware and video cards. AMRDEC personnel worked with nVidia and Concurrent Computer Corporation to develop a Scene Generation system capable of frame rates of at least 120 Hz while frame-locked to an external source (such as a missile seeker) with no dropped frames. Latency measurements and image validation were performed using COTS and in-house developed hardware and software. Software for the Scene Generation system was developed using OpenSceneGraph.

  14. Employee and family assistance video counseling program: a post launch retrospective comparison with in-person counseling outcomes.

    PubMed

    Veder, Barbara; Pope, Stan; Mani, Michèle; Beaudoin, Kelly; Ritchie, Janice

    2014-01-01

    Access to technologically mediated information and services under the umbrella of mental and physical health has become increasingly available to clients via Internet modalities. In May 2010, video counseling was added to the counseling services offered through the Employee and Family Assistance Program at Shepell·fgi as a pilot project, with a full operational launch in September 2011. The objective of this study was to conduct a retrospective post-launch examination of the video counseling service through an analysis of the reported clinical outcomes of the video and in-person counseling modalities. A chronological sample of 68 video counseling (VC) cases and 68 in-person (IP) cases was collected from a pool of client clinical files closed in 2012. To minimize confounding variables and maintain as much clinical continuity as possible, the IP and VC clients were required to have attended clinical sessions with one of six counselors who provided both the VC and the IP services. The study compared the two counseling modalities along the following data points (see glossary of terms): (1) client demographic profiles (eg, age, gender, whether the sessions involved individuals or conjoint sessions with couples or families, etc), (2) presenting issue, (3) average session hours, (4) client rating of session helpfulness, (5) rates of goal completion, (6) client withdrawal rates, (7) no-show and late-cancellation rates, and (8) pre/post client self-assessment. Specific to VC, we also examined client geographic location. Data analysis demonstrated that the VC and the IP modalities showed a similar representation of presenting issues, with nearly identical outcomes for client ratings of session helpfulness, rates of goal completion, pre/post client self-assessment, average session duration, and client geographic location.
There were no statistically significant differences between the VC and the IP counseling in the rates of withdrawal from counseling, no-shows, and late cancellations. The statistical analysis of the data was performed in SPSS using 2-sample and pairwise-comparison t tests at a 95% level of significance. Based on this study, VC and IP show similar outcomes in terms of client ratings of sessions and goal attainment.
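
    The statistical comparison can be sketched as follows; this is a pooled-variance t statistic of the kind used in an independent-samples t test (the SPSS analysis mentioned above), and the rating values are invented for illustration, not data from the study:

```python
# Pooled-variance two-sample t statistic, as in an independent-samples
# t test. The sample values are invented for illustration only.
from math import sqrt
from statistics import mean, variance

def pooled_t(a, b):
    """Two-sample t statistic under the equal-variance assumption."""
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    return (mean(a) - mean(b)) / sqrt(sp2 * (1 / na + 1 / nb))

vc = [4.1, 4.3, 3.9, 4.4, 4.2, 4.0]  # e.g. session-helpfulness ratings
ip = [4.0, 4.2, 4.1, 4.3, 3.8, 4.1]
t = pooled_t(vc, ip)
# |t| is far below the two-tailed critical value of about 2.23 for
# 10 degrees of freedom at the 95% level: no significant difference.
print(abs(t) < 2.23)  # True
```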

  15. A novel control software that improves the experimental workflow of scanning photostimulation experiments.

    PubMed

    Bendels, Michael H K; Beed, Prateep; Leibold, Christian; Schmitz, Dietmar; Johenning, Friedrich W

    2008-10-30

    Optical uncaging of caged compounds is a well-established method for studying the functional anatomy of a brain region at the circuit level. We present an alternative approach to existing experimental setups. Using a low-magnification objective, we acquire images for planning the spatial patterns of stimulation; high-magnification objectives are then used during laser stimulation, providing a laser spot between 2 µm and 20 µm in size. The core of this system is a video-based control software that monitors and controls the connected devices, allows for planning of the experiment, coordinates the stimulation process, and manages automatic data storage. This combines high-resolution analysis of neuronal circuits with flexible and efficient online planning and execution of a grid of spatial stimulation patterns on a larger scale. The software offers special optical features that enable the system to achieve a maximum degree of spatial reliability. The hardware is built mainly from standard laboratory devices and is thus ideally suited to cost-effectively complement existing electrophysiological setups with a minimal amount of additional equipment. Finally, we demonstrate the performance of the system by mapping the excitatory and inhibitory connections of entorhinal cortex layer II stellate neurons and present an approach for the analysis of photo-induced synaptic responses in the presence of high spontaneous activity.
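
    The planning of a grid of stimulation points can be illustrated with a minimal sketch; the ROI dimensions, spacing, and function name are assumed example values for illustration, not parameters or APIs from the described software:

```python
# Laying out a rectangular grid of stimulation sites, the kind of spatial
# pattern planned on the low-magnification image. The ROI size and
# spacing below are invented example values (in micrometers).

def stimulation_grid(x0, y0, width, height, spacing):
    """Return (x, y) stimulation sites covering a rectangular ROI."""
    points = []
    y = y0
    while y <= y0 + height:
        x = x0
        while x <= x0 + width:
            points.append((x, y))
            x += spacing
        y += spacing
    return points

grid = stimulation_grid(0, 0, 100, 60, 20)  # 20 um spacing, 100 x 60 um ROI
print(len(grid))  # 6 columns x 4 rows = 24 sites
```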

  16. NASA Tech Briefs, April 1998. Volume 22, No. 4

    NASA Technical Reports Server (NTRS)

    1998-01-01

    Topics include: special coverage on video and imaging, electronic components and circuits, electronic systems, physical sciences, materials, computer software, mechanics, machinery/automation, and a special section of Photonics Tech Briefs.

  17. Evaluation of H.264 and H.265 full motion video encoding for small UAS platforms

    NASA Astrophysics Data System (ADS)

    McGuinness, Christopher D.; Walker, David; Taylor, Clark; Hill, Kerry; Hoffman, Marc

    2016-05-01

    Of all the steps in the image acquisition and formation pipeline, compression is the only process that degrades image quality. A selected compression algorithm succeeds or fails to provide sufficient quality at the requested compression rate depending on how well the algorithm is suited to the input data; applying an algorithm designed for one type of data to a different type often results in poor compression performance. This is the case, for example, when comparing the performance of H.264, designed for standard-definition data, with HEVC (High Efficiency Video Coding), which the Joint Collaborative Team on Video Coding (JCT-VC) designed for high-definition data. This study evaluates how HEVC compares to H.264 when compressing data from small UAS platforms. To compare the standards directly, we assess two open-source software implementations, x264 and x265; these software-only comparisons allow us to establish a baseline for how much improvement can generally be expected of HEVC over H.264. Specific solutions leveraging different types of hardware are then selected to understand the limitations of commercial-off-the-shelf (COTS) options. Algorithmically, regardless of implementation, HEVC is found to provide similar video quality to H.264 at 40% lower data rates for video resolutions greater than 1280x720, roughly 1 megapixel (MPx). For resolutions below 1 MPx, H.264 is an adequate solution, though a modest (roughly 20%) compression boost is gained by employing HEVC. New low cost, size, weight, and power (CSWAP) HEVC implementations are being developed and will be ideal for small UAS systems.
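
    As a back-of-the-envelope planning aid, the reported savings can be applied to a bitrate target; the 1 MPx threshold and the 40%/20% savings ratios below are the study's ballpark findings, not a general rate model, and the function name is illustrative:

```python
# Turning the reported savings into a rough bitrate estimate. Above
# roughly 1 megapixel, HEVC matched H.264 quality at about 40% lower
# rates; below that, the reported gain was about 20%.

def hevc_rate_estimate(h264_kbps, width, height):
    """Estimate the HEVC bitrate matching H.264 quality at a resolution."""
    megapixels = width * height / 1e6
    savings = 0.40 if megapixels >= 1.0 else 0.20
    return h264_kbps * (1.0 - savings)

print(hevc_rate_estimate(4000, 1920, 1080))  # 1080p: 2400.0 kbps
print(hevc_rate_estimate(1500, 960, 540))    # sub-1 MPx: 1200.0 kbps
```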

  18. Design and management of public health outreach using interoperable mobile multimedia: an analysis of a national winter weather preparedness campaign.

    PubMed

    Bandera, Cesar

    2016-05-25

    The Office of Public Health Preparedness and Response (OPHPR) in the Centers for Disease Control and Prevention conducts outreach for public preparedness for natural and manmade incidents. In 2011, OPHPR conducted a nationwide mobile public health (m-Health) campaign that pushed brief videos on preparing for severe winter weather onto cell phones, with the objective of evaluating the interoperability of multimedia m-Health outreach with diverse cell phones (including handsets without Internet capability), carriers, and user preferences. Existing OPHPR outreach material on winter weather preparedness was converted into mobile-ready multimedia using mobile marketing best practices to improve audiovisual quality and relevance. Middleware complying with opt-in requirements was developed to push nine bi-weekly multimedia broadcasts onto subscribers' cell phones, and OPHPR promoted the campaign on its web site and to subscribers on its govdelivery.com notification platform. Multimedia, text, and voice messaging activity to and from the middleware was logged and analyzed. Adapting existing media, including web pages, PDF documents, and public service announcements, into mobile video was straightforward using open-source and commercial software. The middleware successfully delivered all outreach videos to all participants (a total of 504 videos) regardless of the participant's device. Of the videos, 54% were viewed on cell phones, 32% on computers, and 14% were retrieved by search-engine web crawlers. Of the participating cell phones, 21% did not have Internet access, yet they still received and displayed all videos. The time from media push to media viewing on cell phones was half that from push to viewing on computers. Video delivered through multimedia messaging can be as interoperable as text messages while providing much richer information. This may be the only multimedia mechanism available to outreach campaigns targeting vulnerable populations affected by the digital divide.
Anti-spam laws preserve the integrity of mobile messaging, but complicate campaign promotion. Person-to-person messages may boost enrollment.

  19. A Visualization-Based Tutoring Tool for Engineering Education

    NASA Astrophysics Data System (ADS)

    Nguyen, Tang-Hung; Khoo, I.-Hung

    2010-06-01

    In engineering disciplines, students usually have a hard time visualizing different aspects of engineering analysis and design, which are inherently too complex or abstract to understand fully without the aid of visual explanations or visualizations. For example, when learning the materials and sequences of a construction process, students need to visualize how all components of a constructed facility are assembled. Such visualization cannot be achieved in a textbook or a traditional lecture environment. In this paper, the authors present the development of computer tutoring software in which different visualization tools, including video clips, 3-dimensional models, drawings, pictures/photos, and complementary texts, are used to help students deeply understand and effectively master the material. The paper also discusses the implementation and effectiveness evaluation of the proposed tutoring software, which was used to teach a construction engineering management course offered at California State University, Long Beach.

  20. Fluorescent screens and image processing for the APS linac test stand

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berg, W.; Ko, K.

    A fluorescent screen was used to monitor the relative beam position and spot size of a 56-MeV electron beam in the linac test stand. A chromium-doped alumina ceramic screen inserted into the beam was monitored by a video camera. The resulting image was captured with a frame grabber and stored in memory. Reconstruction and analysis of the stored image were performed using PV-WAVE. This paper discusses the hardware and software implementation of the fluorescent screen and imaging system. Proposed improvements to the APS linac fluorescent screens and image processing are also discussed.
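
    The reconstruction-and-analysis step can be sketched as an intensity-weighted centroid and RMS spot-size computation on the captured frame; the original work used PV-WAVE, so this pure-Python version on a synthetic Gaussian spot is only an illustration of the computation, not the authors' code:

```python
# Intensity-weighted centroid and RMS spot size of a screen image,
# computed here on a synthetic Gaussian beam spot.
import math

def beam_centroid_and_size(image):
    """image: 2-D list of intensities; returns (cx, cy, x_rms, y_rms) in px."""
    total = cx = cy = 0.0
    for y, row in enumerate(image):
        for x, v in enumerate(row):
            total += v
            cx += x * v
            cy += y * v
    cx /= total
    cy /= total
    sx = sy = 0.0
    for y, row in enumerate(image):
        for x, v in enumerate(row):
            sx += (x - cx) ** 2 * v
            sy += (y - cy) ** 2 * v
    return cx, cy, math.sqrt(sx / total), math.sqrt(sy / total)

# Synthetic spot: Gaussian centered at (120, 80) with sigma = 6 px.
img = [[math.exp(-((x - 120) ** 2 + (y - 80) ** 2) / (2 * 6.0 ** 2))
        for x in range(240)] for y in range(160)]
cx, cy, sx, sy = beam_centroid_and_size(img)
print(round(cx), round(cy), round(sx, 1), round(sy, 1))  # 120 80 6.0 6.0
```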

Top