Sample records for NASA video segment

  1. Gamifying Video Object Segmentation.

    PubMed

    Spampinato, Concetto; Palazzo, Simone; Giordano, Daniela

    2017-10-01

    Video object segmentation is one of the most challenging computer vision problems. Indeed, no existing solution effectively handles the peculiarities of real-world videos, especially articulated motion and object occlusions; these limitations become even more evident when the performance of automated methods is compared with that of humans. However, manually segmenting objects in videos is largely impractical, as it requires considerable time and concentration. To address this problem, we propose an interactive video object segmentation method that exploits, on one hand, the human capability to correctly identify objects in visual scenes and, on the other, collective human brainpower for solving challenging, large-scale tasks. In particular, our method relies on a game with a purpose to collect human inputs on object locations, followed by an accurate segmentation phase that optimizes an energy function encoding spatial and temporal constraints between object regions as well as the human-provided location priors. Performance analysis on complex video benchmarks, using data provided by over 60 users, demonstrated that our method achieves a better trade-off between annotation time and segmentation accuracy than both interactive video annotation and automated video object segmentation approaches.
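The energy minimization mentioned in this abstract can be illustrated with a toy discrete energy: unary costs derived from location priors plus a Potts smoothness penalty over spatial/temporal neighbor pairs. This is a hedged sketch of the general idea, not the paper's actual formulation; the cost matrix, edge list, and `smoothness` weight are invented for illustration.

```python
import numpy as np

def labeling_energy(labels, prior_cost, edges, smoothness=1.0):
    """Energy of a labeling: unary costs (e.g. from human-provided
    location priors) plus a Potts penalty for each neighboring pair
    (spatial or temporal) whose labels disagree. Illustrative only."""
    labels = np.asarray(labels)
    # unary term: cost of assigning each site its chosen label
    unary = prior_cost[np.arange(labels.size), labels].sum()
    # pairwise Potts term: constant penalty per disagreeing neighbor pair
    pairwise = sum(smoothness for i, j in edges if labels[i] != labels[j])
    return unary + pairwise
```

For example, with priors favoring foreground only at the third site, the labeling that respects the prior and pays one smoothness penalty has lower energy than the all-background labeling that violates the prior.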

  2. NASA Video Catalog

    NASA Technical Reports Server (NTRS)

    2006-01-01

    This issue of the NASA Video Catalog cites video productions listed in the NASA STI database. The videos listed have been developed by the NASA centers, covering Shuttle mission press conferences; fly-bys of planets; aircraft design, testing, and performance; environmental pollution; lunar and planetary exploration; and many other categories related to manned and unmanned space exploration. Each entry in the publication consists of a standard bibliographic citation accompanied by an abstract. The Table of Contents shows how the entries are arranged by divisions and categories according to the NASA Scope and Subject Category Guide. For users seeking specific titles, a Title Index is available. A Subject Term Index, based on the NASA Thesaurus, is also included. Guidelines for the use of NASA audio/visual material, ordering information, and order forms are also provided.

  3. NASA Video Catalog. Supplement 12

    NASA Technical Reports Server (NTRS)

    2002-01-01

    This report lists 1878 video productions from the NASA STI Database. The videos listed have been developed by the NASA centers, covering Shuttle mission press conferences; fly-bys of planets; aircraft design, testing, and performance; environmental pollution; lunar and planetary exploration; and many other categories related to manned and unmanned space exploration. Each entry in the publication consists of a standard bibliographic citation accompanied by an abstract. The listing of the entries is arranged by STAR categories, and a complete Table of Contents describes the scope of each category. For users seeking specific titles, a Title Index is available. A Subject Term Index, based on the NASA Thesaurus, is also included. Guidelines for the use of NASA audio/visual material, ordering information, and order forms are also provided.

  4. NASA's Myriad Uses of Digital Video

    NASA Technical Reports Server (NTRS)

    Grubbs, Rodney; Lindblom, Walt; George, Sandy

    1999-01-01

    Since its inception, NASA has created many of the most memorable images seen this century. From the fuzzy video of Neil Armstrong taking that first step on the Moon, to images of the Mars surface available to all on the Internet, NASA has provided images to inspire a generation, all because a scientist or researcher had a requirement to see something unusual. Digital television technology will give NASA unprecedented new tools for acquiring, analyzing, and distributing video. This paper will explore NASA's DTV future. The agency has a requirement to move video from one NASA center to another in real time. Specifics will be provided relating to the NASA video infrastructure, including video from the Space Shuttle and from the various centers. A comparison of the pros and cons of interlaced and progressive scanned images will be presented. Film is a major component of NASA's image acquisition for analysis usage. The future of film within the context of DTV will be explored.

  5. NASA Video Catalog. Supplement 15

    NASA Technical Reports Server (NTRS)

    2005-01-01

    This issue of the NASA Video Catalog cites video productions listed in the NASA STI Database. The videos listed have been developed by the NASA centers, covering Shuttle mission press conferences; fly-bys of planets; aircraft design, testing, and performance; environmental pollution; lunar and planetary exploration; and many other categories related to manned and unmanned space exploration. Each entry in the publication consists of a standard bibliographic citation accompanied by an abstract. The Table of Contents shows how the entries are arranged by divisions and categories according to the NASA Scope and Subject Category Guide. For users seeking specific titles, a Title Index is available. A Subject Term Index, based on the NASA Thesaurus, is also included. Guidelines for the use of NASA audio/visual material, ordering information, and order forms are also provided.

  6. NASA Video Catalog. Supplement 13

    NASA Technical Reports Server (NTRS)

    2003-01-01

    This issue of the NASA Video Catalog cites video productions listed in the NASA STI Database. The videos listed have been developed by the NASA centers, covering Shuttle mission press conferences; fly-bys of planets; aircraft design, testing, and performance; environmental pollution; lunar and planetary exploration; and many other categories related to manned and unmanned space exploration. Each entry in the publication consists of a standard bibliographic citation accompanied by an abstract. The Table of Contents shows how the entries are arranged by divisions and categories according to the NASA Scope and Subject Category Guide. For users seeking specific titles, a Title Index is available. A Subject Term Index, based on the NASA Thesaurus, is also included. Guidelines for the use of NASA audio/visual material, ordering information, and order forms are also provided.

  7. NASA Video Catalog. Supplement 14

    NASA Technical Reports Server (NTRS)

    2004-01-01

    This issue of the NASA Video Catalog cites video productions listed in the NASA STI Database. The videos listed have been developed by the NASA centers, covering Shuttle mission press conferences; fly-bys of planets; aircraft design, testing, and performance; environmental pollution; lunar and planetary exploration; and many other categories related to manned and unmanned space exploration. Each entry in the publication consists of a standard bibliographic citation accompanied by an abstract. The Table of Contents shows how the entries are arranged by divisions and categories according to the NASA Scope and Subject Category Guide. For users seeking specific titles, a Title Index is available. A Subject Term Index, based on the NASA Thesaurus, is also included. Guidelines for the use of NASA audio/visual material, ordering information, and order forms are also provided.

  8. Automatic video segmentation and indexing

    NASA Astrophysics Data System (ADS)

    Chahir, Youssef; Chen, Liming

    1999-08-01

    Indexing is an important aspect of video database management. Video indexing involves the analysis of video sequences, which is a computationally intensive process, yet effective management of digital video requires robust indexing techniques. The purpose of our proposed video segmentation is twofold. First, we develop an algorithm that identifies camera shot boundaries, based on a combination of color histograms and a block-based technique. Next, each temporal segment is represented by a color reference frame, which captures shot similarity and is used to group shots into scenes. Experimental results on a variety of videos selected from the corpus of the French Audiovisual National Institute demonstrate the effectiveness of the shot detection, the content characterization of shots, and the scene construction.
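A simplified version of histogram-based shot boundary detection can be sketched as follows. The bin count, threshold, and use of gray-level (rather than color, block-based) histograms are illustrative choices, not the authors' settings.

```python
import numpy as np

def shot_boundaries(frames, bins=8, threshold=0.5):
    """Detect cuts by thresholding the L1 distance between normalized
    gray-level histograms of consecutive frames. A frame index i in the
    result marks the first frame of a new shot."""
    hists = []
    for f in frames:
        h, _ = np.histogram(f, bins=bins, range=(0, 256))
        hists.append(h / h.sum())
    cuts = []
    for i in range(1, len(hists)):
        # L1 distance between normalized histograms lies in [0, 2]
        if np.abs(hists[i] - hists[i - 1]).sum() > threshold:
            cuts.append(i)
    return cuts
```

A real detector would use per-channel color histograms and block-wise comparison to tolerate gradual lighting changes, as the abstract suggests.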

  9. Video segmentation using keywords

    NASA Astrophysics Data System (ADS)

    Ton-That, Vinh; Vong, Chi-Tai; Nguyen-Dao, Xuan-Truong; Tran, Minh-Triet

    2018-04-01

    In the DAVIS-2016 Challenge, many state-of-the-art video segmentation methods achieved promising results, but they still depend heavily on annotated frames to distinguish between background and foreground, and creating these frames accurately takes considerable time and effort. In this paper, we introduce a method to segment objects in video based on keywords given by the user. First, we use a real-time object detection system, YOLOv2, to identify regions in the first frame that contain objects whose labels match the given keywords. Then, for each region identified in the previous step, we use the Pyramid Scene Parsing Network to assign each pixel to foreground or background. The resulting frames can serve as input to the Object Flow algorithm to segment the entire video. We conduct experiments on a subset of the DAVIS-2016 dataset at half its original size, which show that our method handles many popular classes from the PASCAL VOC 2012 dataset with acceptable accuracy, about 75.03%. We suggest wider testing and combination with other methods to improve this result in the future.

  10. Video-assisted segmentation of speech and audio track

    NASA Astrophysics Data System (ADS)

    Pandit, Medha; Yusoff, Yusseri; Kittler, Josef; Christmas, William J.; Chilton, E. H. S.

    1999-08-01

    Video database research is commonly concerned with the storage and retrieval of visual information, involving sequence segmentation, shot representation, and video clip retrieval. In multimedia applications, video sequences are usually accompanied by a sound track, which contains potential cues to aid shot segmentation, such as different speakers, background music, singing, and distinctive sounds. These acoustic categories can be modeled to allow effective database retrieval. In this paper, we address the problem of automatic segmentation of the audio track of multimedia material. This audio-based segmentation can be combined with video scene shot detection to partition the multimedia material into semantically significant segments.

  11. Selecting salient frames for spatiotemporal video modeling and segmentation.

    PubMed

    Song, Xiaomu; Fan, Guoliang

    2007-12-01

    We propose a new statistical generative model for spatiotemporal video segmentation. The objective is to partition a video sequence into homogeneous segments that can be used as "building blocks" for semantic video segmentation. The baseline framework is a Gaussian mixture model (GMM)-based video modeling approach involving a six-dimensional spatiotemporal feature space. Specifically, we introduce the concept of frame saliency to quantify the relevance of a video frame to the GMM-based spatiotemporal video modeling. This lets us use a small set of salient frames to facilitate model training by reducing data redundancy and irrelevance. A modified expectation-maximization algorithm is developed for simultaneous GMM training and frame saliency estimation, and the frames with the highest saliency values are extracted to refine the GMM estimate for video segmentation. Interestingly, frame saliency can also reveal certain object behaviors, which makes the proposed method applicable to other frame-related video analysis tasks, such as key-frame extraction and video skimming. Experiments on real videos demonstrate the effectiveness and efficiency of the proposed method.
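The scoring idea behind frame saliency can be sketched very roughly: fit a model to the pooled features and rate each frame by how well its features fit. The sketch below substitutes a single Gaussian for the paper's GMM and omits the modified EM entirely; the feature layout and function name are assumptions for illustration.

```python
import numpy as np

def frame_saliency(frame_features):
    """Toy stand-in for GMM-based frame saliency: fit one diagonal
    Gaussian to all feature vectors pooled across frames, then score
    each frame by the mean log-likelihood of its own feature vectors.
    Frames whose features fit the global model poorly score low."""
    X = np.vstack(frame_features)
    mu, var = X.mean(axis=0), X.var(axis=0) + 1e-6  # avoid div by zero

    def loglik(x):
        return -0.5 * (((x - mu) ** 2) / var + np.log(2 * np.pi * var)).sum(axis=1)

    return np.array([loglik(F).mean() for F in frame_features])
```

Under this toy scoring, a frame whose features are far from the pooled distribution receives the lowest saliency; the paper's actual method estimates saliency jointly with the GMM parameters.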

  12. Blurry-frame detection and shot segmentation in colonoscopy videos

    NASA Astrophysics Data System (ADS)

    Oh, JungHwan; Hwang, Sae; Tavanapong, Wallapak; de Groen, Piet C.; Wong, Johnny

    2003-12-01

    Colonoscopy is an important screening procedure for colorectal cancer. During this procedure, the endoscopist visually inspects the colon. Human inspection, however, is not without error. We hypothesize that colonoscopy videos may contain additional valuable information missed by the endoscopist. Video segmentation is the first necessary step for the content-based video analysis and retrieval to provide efficient access to the important images and video segments from a large colonoscopy video database. Based on the unique characteristics of colonoscopy videos, we introduce a new scheme to detect and remove blurry frames, and segment the videos into shots based on the contents. Our experimental results show that the average precision and recall of the proposed scheme are over 90% for the detection of non-blurry images. The proposed method of blurry frame detection and shot segmentation is extensible to the videos captured from other endoscopic procedures such as upper gastrointestinal endoscopy, enteroscopy, cystoscopy, and laparoscopy.
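Blurry-frame detection in general can be done with a variance-of-Laplacian test: sharp frames produce high-variance edge responses, blurry ones do not. This is a common generic blur measure, not the colonoscopy-specific scheme proposed above, and the threshold is an arbitrary placeholder.

```python
import numpy as np

def is_blurry(gray, threshold=100.0):
    """Flag a frame as blurry when the variance of its 4-neighbor
    Laplacian response is below a threshold; sharp edges drive the
    variance up. Generic illustration, threshold is arbitrary."""
    g = np.asarray(gray, dtype=float)
    # discrete Laplacian over interior pixels
    lap = (-4 * g[1:-1, 1:-1]
           + g[:-2, 1:-1] + g[2:, 1:-1]
           + g[1:-1, :-2] + g[1:-1, 2:])
    return bool(lap.var() < threshold)
```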

  13. A novel sub-shot segmentation method for user-generated video

    NASA Astrophysics Data System (ADS)

    Lei, Zhuo; Zhang, Qian; Zheng, Chi; Qiu, Guoping

    2018-04-01

    With the proliferation of user-generated videos, temporal segmentation is becoming a challenging problem. Traditional temporal segmentation methods such as shot detection cannot handle unedited user-generated videos, since these often contain only a single long shot. We propose a novel temporal segmentation framework for user-generated video. It finds similar frames with a tree-partitioning min-Hash technique, constructs sparse temporally constrained affinity sub-graphs, and finally divides the video into sub-shot-level segments with a dense-neighbor-based clustering method. Experimental results show that our approach outperforms related work and indicate that it can segment user-generated videos at an average human level.

  14. Video Segmentation Descriptors for Event Recognition

    DTIC Science & Technology

    2014-12-08

    Velastin, 3D Extended Histogram of Oriented Gradients (3DHOG) for Classification of Road Users in Urban Scenes, BMVC, 2009. [3] M.-Y. Chen and A. Hauptmann...computed on the 3D volume output by the hierarchical segmentation. Each video is described as follows. Each supertube is temporally divided in n-frame...the strength of these descriptors is their adaptability to scene variations since they are grounded on a video segmentation. This makes them naturally robust

  15. Shot boundary detection and label propagation for spatio-temporal video segmentation

    NASA Astrophysics Data System (ADS)

    Piramanayagam, Sankaranaryanan; Saber, Eli; Cahill, Nathan D.; Messinger, David

    2015-02-01

    This paper proposes a two-stage algorithm for streaming video segmentation. In the first stage, shot boundaries are detected within a window of frames by comparing the dissimilarity between 2-D segmentations of each frame. In the second stage, the 2-D segments are propagated across the window of frames in both the spatial and temporal directions. The window is moved across the video to find all shot transitions and obtain spatio-temporal segments simultaneously. As opposed to techniques that operate on the entire video, the proposed approach consumes significantly less memory and enables segmentation of lengthy videos. We tested our segmentation-based shot detection method on the TRECVID 2007 video dataset and compared it with a block-based technique. Cut detection results on the TRECVID 2007 dataset indicate that our algorithm is comparable to the best of the block-based methods. The streaming video segmentation routine also achieves promising results on a challenging video segmentation benchmark database.

  16. WCE video segmentation using textons

    NASA Astrophysics Data System (ADS)

    Gallo, Giovanni; Granata, Eliana

    2010-03-01

    Wireless Capsule Endoscopy (WCE) integrates wireless transmission with image and video technology and has been used to examine the small intestine noninvasively. Medical specialists look for significant events in a WCE video by direct visual inspection, manually labelling clinically relevant frames in tiring sessions of up to one hour, which limits WCE usage. Automatically discriminating digestive organs such as the esophagus, stomach, small intestine, and colon would therefore be of great advantage. In this paper, we propose using textons for the automatic detection of abrupt changes within a video. In particular, we consider as features, for each frame, hue, saturation, value, high-frequency energy content, and the responses to a bank of Gabor filters. The experiments were conducted on ten video segments extracted from WCE videos in which the significant events had been previously labelled by experts. Results show that the proposed method may eliminate up to 70% of the frames from further investigation, so that the doctors' direct analysis can be concentrated on eventful frames only. A graphical tool showing sudden changes in texton frequencies for each frame is also proposed as a visual aid for finding clinically relevant segments of the video.
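Part of the per-frame feature vector described above can be sketched with plain NumPy. This sketch computes only mean saturation, mean value, and a simple high-frequency energy term (mean squared gradient of the value channel); hue and the Gabor filter bank are omitted, and the exact feature definitions are assumptions, not the authors'.

```python
import numpy as np

def frame_features(rgb):
    """Coarse per-frame descriptor: [mean saturation, mean value,
    high-frequency energy]. Value and saturation follow the usual
    HSV definitions; energy is the mean squared gradient of value."""
    v = rgb.max(axis=-1).astype(float)                  # HSV value
    c = v - rgb.min(axis=-1).astype(float)              # chroma
    s = np.divide(c, v, out=np.zeros_like(v), where=v > 0)  # saturation
    gy, gx = np.gradient(v)
    energy = (gx ** 2 + gy ** 2).mean()                 # high-freq proxy
    return np.array([s.mean(), v.mean(), energy])
```

Abrupt changes between consecutive frames' feature vectors would then flag candidate organ boundaries, in the spirit of the texton-frequency tool the abstract describes.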

  17. Video-based noncooperative iris image segmentation.

    PubMed

    Du, Yingzi; Arslanturk, Emrah; Zhou, Zhi; Belcher, Craig

    2011-02-01

    In this paper, we propose a video-based noncooperative iris image segmentation scheme that incorporates a quality filter to quickly eliminate images without an eye, employs a coarse-to-fine segmentation scheme to improve the overall efficiency, uses a direct least squares fitting of ellipses method to model the deformed pupil and limbic boundaries, and develops a window gradient-based method to remove noise in the iris region. A remote iris acquisition system is set up to collect noncooperative iris video images. An objective method is used to quantitatively evaluate the accuracy of the segmentation results. The experimental results demonstrate the effectiveness of this method. The proposed method would make noncooperative iris recognition or iris surveillance possible.

  18. Segment scheduling method for reducing 360° video streaming latency

    NASA Astrophysics Data System (ADS)

    Gudumasu, Srinivas; Asbun, Eduardo; He, Yong; Ye, Yan

    2017-09-01

    360° video is an emerging format in the media industry, enabled by the growing availability of virtual reality devices, that gives the viewer a new sense of presence and immersion. Compared to conventional rectilinear video (2D or 3D), 360° video poses a new and difficult set of engineering challenges for video processing and delivery. Enabling a comfortable and immersive user experience requires very high video quality and very low latency, while the large video file size makes delivering 360° video at scale in a quality manner a challenge. Conventionally, 360° video represented in equirectangular or other projection formats can be encoded as a single standards-compliant bitstream using existing video codecs such as H.264/AVC or H.265/HEVC. Such an approach usually needs very high bandwidth to provide an immersive user experience, yet at the client side much of that bandwidth, and the computational power used to decode the video, is wasted because the user only watches a small portion (i.e., the viewport) of the entire picture. Viewport-dependent 360° video processing and delivery approaches spend more bandwidth on the viewport than on non-viewport regions and are therefore able to reduce the overall transmission bandwidth. This paper proposes a dual-buffer segment scheduling algorithm for viewport-adaptive streaming methods to reduce latency when switching between high-quality viewports in 360° video streaming. The approach decouples the scheduling of viewport and non-viewport segments to ensure that the viewport segment requested matches the latest user head orientation. A base-layer buffer stores all lower-quality segments, and a viewport buffer stores high-quality viewport segments corresponding to the viewer's most recent head orientation. The scheduling scheme determines the viewport request time based on buffer status and head orientation. The paper also discusses how to deploy the proposed scheduling design with various viewport-adaptive video streaming methods.
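The dual-buffer idea can be sketched as a small decision routine: keep the low-quality base layer buffered at all times, and request a high-quality viewport segment as late as possible so it matches the newest head orientation. This is an illustrative reconstruction, not the authors' algorithm; the `viewport_buffer_target` threshold and request labels are hypothetical.

```python
class DualBufferScheduler:
    """Toy dual-buffer segment scheduler. base_level and viewport_level
    are the number of segments currently buffered in each buffer."""

    def __init__(self, viewport_buffer_target=2):
        self.target = viewport_buffer_target

    def next_request(self, base_level, viewport_level, head_orientation):
        if base_level <= viewport_level:
            # keep the fallback base layer at least as full as the viewport
            return ("base", None)
        if viewport_level < self.target:
            # fetch a high-quality segment for the latest head pose
            return ("viewport", head_orientation)
        return ("wait", None)
```

Because the viewport request carries the current head orientation at request time, deferring it until the buffer actually needs refilling is what reduces the mismatch latency when the viewer turns their head.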

  19. User-assisted video segmentation system for visual communication

    NASA Astrophysics Data System (ADS)

    Wu, Zhengping; Chen, Chun

    2002-01-01

    Video segmentation plays an important role in efficient storage and transmission for visual communication. In this paper, we introduce a novel video segmentation system using point tracking and contour formation techniques. Inspired by results from the study of the human visual system, we divide the video segmentation problem into three separate phases: user-assisted feature point selection, automatic feature point tracking, and contour formation. This splitting relieves the computer of ill-posed automatic segmentation problems and allows greater flexibility in the method. First, precise feature points are found using a combination of user assistance and an eigenvalue-based adjustment. Second, the feature points in the remaining frames are obtained using motion estimation and point refinement. Finally, contour formation is used to extract the object, along with a point insertion process that provides the feature points for the next frame's tracking.

  20. Model-based video segmentation for vision-augmented interactive games

    NASA Astrophysics Data System (ADS)

    Liu, Lurng-Kuo

    2000-04-01

    This paper presents an architecture and algorithms for model-based video object segmentation and its application to vision-augmented interactive games. We are especially interested in real-time, low-cost, vision-based applications that can be implemented in software on a PC. We use different models for the background and the player object. The segmentation algorithm operates at two levels: pixel level and object level. At the pixel level, segmentation is formulated as a maximum a posteriori (MAP) problem, and the statistical likelihood of each pixel is calculated and used in solving it. Object-level segmentation improves segmentation quality by using information about the spatial and temporal extent of the object. The concept of an active region, defined from a motion histogram and trajectory prediction, is introduced to indicate the likelihood that a region belongs to a video object, for both background and foreground modeling; it also reduces the overall computational complexity. In contrast with other applications, the proposed system can create background and foreground models on the fly, even without introductory background frames. Furthermore, we apply different rates of self-tuning to the scene model so that the system can adapt to the environment when the scene changes. We applied the proposed algorithms to several prototype virtual interactive games, in which a player can immerse himself or herself in a game and virtually interact with other animated characters in real time without being constrained by helmets, gloves, special sensing devices, or the background environment. Potential applications of the proposed algorithms include human-computer gesture interfaces and object-based video coding such as MPEG-4.

  21. Activity recognition using Video Event Segmentation with Text (VEST)

    NASA Astrophysics Data System (ADS)

    Holloway, Hillary; Jones, Eric K.; Kaluzniacki, Andrew; Blasch, Erik; Tierno, Jorge

    2014-06-01

    Multi-Intelligence (multi-INT) data includes video, text, and signals that require analysis by operators. Analysis methods include information fusion approaches such as filtering, correlation, and association. In this paper, we discuss the Video Event Segmentation with Text (VEST) method, which provides event boundaries of an activity to compile related messages and video clips of future interest. VEST infers meaningful activities by clustering multiple streams of time-sequenced multi-INT intelligence data and derived fusion products. We discuss exemplar results that segment raw full-motion video (FMV) data by using extracted commentary message timestamps, FMV metadata, and user-defined queries.

  22. Fast Appearance Modeling for Automatic Primary Video Object Segmentation.

    PubMed

    Yang, Jiong; Price, Brian; Shen, Xiaohui; Lin, Zhe; Yuan, Junsong

    2016-02-01

    Automatic segmentation of the primary object in a video clip is a challenging problem, as there is no prior knowledge of the primary object. Most existing techniques thus adopt an iterative approach to foreground and background appearance modeling, i.e., fix the appearance model while optimizing the segmentation, and fix the segmentation while optimizing the appearance model. However, these approaches rely on good initialization, can easily be trapped in local optima, and are usually time-consuming for analyzing videos. To address these limitations, we propose a novel and efficient appearance modeling technique for automatic primary video object segmentation in the Markov random field (MRF) framework. It embeds the appearance constraint as auxiliary nodes and edges in the MRF structure and can optimize both the segmentation and the appearance model parameters simultaneously in one graph cut. Extensive experimental evaluations validate the superiority of the proposed approach over state-of-the-art methods in both efficiency and effectiveness.

  23. Multilevel wireless capsule endoscopy video segmentation

    NASA Astrophysics Data System (ADS)

    Hwang, Sae; Celebi, M. Emre

    2010-03-01

    Wireless Capsule Endoscopy (WCE) is a relatively new technology (FDA approved in 2002) that allows doctors to view most of the small intestine. WCE transmits more than 50,000 video frames per examination, and visual inspection of the resulting video is a highly time-consuming task even for an experienced gastroenterologist; typically, a medical clinician spends one to two hours analyzing a WCE video. To reduce the assessment time, it is critical to develop techniques that automatically discriminate the digestive organs and group frames into shots of the same or similar content. In this paper, a multi-level WCE video segmentation methodology is presented to reduce the examination time.

  24. Temporally coherent 4D video segmentation for teleconferencing

    NASA Astrophysics Data System (ADS)

    Ehmann, Jana; Guleryuz, Onur G.

    2013-09-01

    We develop an algorithm for 4-D (RGB+Depth) video segmentation targeting immersive teleconferencing applications on emerging mobile devices. Our algorithm extracts users from their environments and places them onto virtual backgrounds, similar to green-screening. The virtual backgrounds increase immersion and interactivity, relieving the users of the system from distractions caused by disparate environments. Commodity depth sensors, while providing useful information for segmentation, produce noisy depth maps with a large number of missing depth values. By combining depth and RGB information, our work significantly improves the otherwise very coarse segmentation. Further imposing temporal coherence yields compositions where the foregrounds seamlessly blend with the virtual backgrounds with minimal flicker and other artifacts. We achieve said improvements by correcting the missing information in depth maps before fast RGB-based segmentation, which operates in conjunction with temporal coherence. Simulation results indicate the efficacy of the proposed system in video conferencing scenarios.

  25. Crowdsourcing for identification of polyp-free segments in virtual colonoscopy videos

    NASA Astrophysics Data System (ADS)

    Park, Ji Hwan; Mirhosseini, Seyedkoosha; Nadeem, Saad; Marino, Joseph; Kaufman, Arie; Baker, Kevin; Barish, Matthew

    2017-03-01

    Virtual colonoscopy (VC) allows a physician to virtually navigate within a reconstructed 3D colon model searching for colorectal polyps. Though VC is widely recognized as a highly sensitive and specific test for identifying polyps, one limitation is the reading time, which can take over 30 minutes per patient. Large amounts of the colon are often devoid of polyps, and a way of identifying these polyp-free segments could be of valuable use in reducing the required reading time for the interrogating radiologist. To this end, we have tested the ability of the collective crowd intelligence of non-expert workers to identify polyp candidates and polyp-free regions. We presented twenty short videos flying through a segment of a virtual colon to each worker, and the crowd was asked to determine whether or not a possible polyp was observed within that video segment. We evaluated our framework on Amazon Mechanical Turk and found that the crowd was able to achieve a sensitivity of 80.0% and specificity of 86.5% in identifying video segments which contained a clinically proven polyp. Since each polyp appeared in multiple consecutive segments, all polyps were in fact identified. Using the crowd results as a first pass, 80% of the video segments could in theory be skipped by the radiologist, equating to a significant time savings and enabling more VC examinations to be performed.
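The crowd evaluation above reports standard sensitivity and specificity. A minimal sketch of majority-vote aggregation of worker answers and of the two metrics follows; the vote format and function names are assumptions, not from the paper.

```python
from collections import Counter

def majority_vote(votes):
    """Aggregate per-segment worker answers (True = 'possible polyp
    seen') by simple majority; ties count as positive, erring toward
    sending the segment to the radiologist."""
    c = Counter(votes)
    return c[True] >= c[False]

def sensitivity_specificity(predictions, truth):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP), computed
    over per-segment predictions against ground-truth polyp labels."""
    tp = sum(p and t for p, t in zip(predictions, truth))
    fn = sum((not p) and t for p, t in zip(predictions, truth))
    tn = sum((not p) and (not t) for p, t in zip(predictions, truth))
    fp = sum(p and (not t) for p, t in zip(predictions, truth))
    return tp / (tp + fn), tn / (tn + fp)
```

Breaking ties toward the positive class matches the use case: a false positive only costs the radiologist a short review, while a false negative could skip a segment containing a polyp.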

  26. NASA Today - Mars Observer Segment (Part 4 of 6)

    NASA Technical Reports Server (NTRS)

    1993-01-01

    This videotape consists of eight segments from the NASA Today News program. The first segment is an announcement that there was no date set for the launch of STS-51, which had been postponed due to mechanical problems. The second segment describes the MidDeck Dynamic Experiment Facility. The third segment is about the scheduled arrival of the Mars Observer at Mars, it shows an image of Mars as seen from the approaching Observer spacecraft, and features an animation of the approach to Mars, including the maneuvers that are planned to put the spacecraft in the desired orbit. The fourth segment describes a discovery from an infrared spectrometer that there is nitrogen ice on Pluto. The fifth segment discusses the Aerospace for Kids (ASK) program at the Goddard Space Flight Center (GSFC). The sixth segment is about the high school and college summer internship programs at GSFC. The seventh segment announces a science symposium being held at Johnson Space Center. The last segment describes the National Air and Space Museum and NASA's cooperation with the Smithsonian Institution.

  27. Global-constrained hidden Markov model applied on wireless capsule endoscopy video segmentation

    NASA Astrophysics Data System (ADS)

    Wan, Yiwen; Duraisamy, Prakash; Alam, Mohammad S.; Buckles, Bill

    2012-06-01

    Accurate analysis of wireless capsule endoscopy (WCE) videos is vital but tedious, and automatic image analysis can expedite this task. Segmenting a WCE video into the four parts of the gastrointestinal tract is one way to assist a physician. The segmentation approach described in this paper integrates pattern recognition with statistical analysis. Initially, a support vector machine classifies video frames into four classes using a combination of multiple color and texture features as the feature vector. A Poisson cumulative distribution, whose parameter depends on segment length, models prior knowledge. This prior knowledge, together with inter-frame differences, serves as the global constraint driven by the underlying observations of each WCE video, which are fitted by a Gaussian distribution to constrain the transition probabilities of the hidden Markov model. Experimental results demonstrate the effectiveness of the approach.

  8. Causal Video Object Segmentation From Persistence of Occlusions

    DTIC Science & Technology

    2015-05-01

    Precision, recall, and F-measure are reported on the ground truth annotations converted to binary masks. Note we cannot evaluate "number of...to lack of occlusions. References: [1] P. Arbelaez, M. Maire, C. Fowlkes, and J. Malik. Contour detection and hierarchical image segmentation. TPAMI...X. Bai, J. Wang, D. Simons, and G. Sapiro. Video snapcut: robust video object cutout using localized classifiers. In ACM Transactions on Graphics

  9. Video segmentation and camera motion characterization using compressed data

    NASA Astrophysics Data System (ADS)

    Milanese, Ruggero; Deguillaume, Frederic; Jacot-Descombes, Alain

    1997-10-01

    We address the problem of automatically extracting visual indexes from videos, in order to provide sophisticated access methods to the contents of a video server. We focus on two tasks, namely the decomposition of a video clip into uniform segments, and the characterization of each shot by camera motion parameters. For the first task we use a Bayesian classification approach to detecting scene cuts by analyzing motion vectors. For the second task a least-squares fitting procedure determines the pan/tilt/zoom camera parameters. In order to guarantee the highest processing speed, all techniques operate directly on MPEG-1 motion vectors, without the need for video decompression. Experimental results are reported for a database of news video clips.
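
    As a rough illustration of working directly on compressed-domain motion vectors, the sketch below flags a cut wherever the mean motion-vector magnitude jumps between consecutive frames. The simple threshold stands in for the paper's Bayesian classifier, and the `jump` value is an assumed tuning parameter.

```python
import math

def mean_magnitude(vectors):
    """Mean magnitude of a frame's (dx, dy) motion vectors."""
    if not vectors:
        return 0.0
    return sum(math.hypot(dx, dy) for dx, dy in vectors) / len(vectors)

def detect_cuts(mv_frames, jump=5.0):
    """Flag frame indices where the mean motion-vector magnitude jumps
    relative to the previous frame (a threshold stand-in for a proper
    Bayesian decision on the motion-vector statistics)."""
    mags = [mean_magnitude(f) for f in mv_frames]
    return [t for t in range(1, len(mags)) if abs(mags[t] - mags[t - 1]) > jump]
```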

  10. Deep residual networks for automatic segmentation of laparoscopic videos of the liver

    NASA Astrophysics Data System (ADS)

    Gibson, Eli; Robu, Maria R.; Thompson, Stephen; Edwards, P. Eddie; Schneider, Crispin; Gurusamy, Kurinchi; Davidson, Brian; Hawkes, David J.; Barratt, Dean C.; Clarkson, Matthew J.

    2017-03-01

    Motivation: For primary and metastatic liver cancer patients undergoing liver resection, a laparoscopic approach can reduce recovery times and morbidity while offering equivalent curative results; however, only about 10% of tumours reside in anatomical locations that are currently accessible for laparoscopic resection. Augmenting laparoscopic video with registered vascular anatomical models from pre-procedure imaging could support using laparoscopy in a wider population. Segmentation of liver tissue on laparoscopic video supports the robust registration of anatomical liver models by filtering out false anatomical correspondences between pre-procedure and intra-procedure images. In this paper, we present a convolutional neural network (CNN) approach to liver segmentation in laparoscopic liver procedure videos. Method: We defined a CNN architecture comprising fully-convolutional deep residual networks with multi-resolution loss functions. The CNN was trained in a leave-one-patient-out cross-validation on 2050 video frames from 6 liver resections and 7 laparoscopic staging procedures, and evaluated using the Dice score. Results: The CNN yielded segmentations with Dice scores >=0.95 for the majority of images; however, the inter-patient variability in median Dice score was substantial. Four failure modes were identified from low scoring segmentations: minimal visible liver tissue, inter-patient variability in liver appearance, automatic exposure correction, and pathological liver tissue that mimics non-liver tissue appearance. Conclusion: CNNs offer a feasible approach for accurately segmenting liver from other anatomy on laparoscopic video, but additional data or computational advances are necessary to address challenges due to the high inter-patient variability in liver appearance.
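
    The Dice score used for evaluation above is a standard overlap measure, 2|A∩B| / (|A| + |B|); a minimal version over flat binary masks:

```python
def dice(mask_a, mask_b):
    """Dice score between two binary masks given as flat 0/1 lists:
    2 * |intersection| / (|A| + |B|), with empty-vs-empty scored 1."""
    inter = sum(a and b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    return 2.0 * inter / total if total else 1.0
```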

  11. News video story segmentation method using fusion of audio-visual features

    NASA Astrophysics Data System (ADS)

    Wen, Jun; Wu, Ling-da; Zeng, Pu; Luan, Xi-dao; Xie, Yu-xiang

    2007-11-01

    News story segmentation is an important aspect of news video analysis. This paper presents a method for news video story segmentation. Unlike prior works, which are based on visual features, the proposed technique uses audio features as a baseline and fuses visual features with them to refine the results. First, it selects silence clips as audio candidate points, and selects shot boundaries and anchor shots as two kinds of visual candidate points. It then uses the audio candidates as cues and develops a fusion method that effectively uses the diverse visual candidates to refine the audio candidates into story boundaries. Experimental results show that this method has high efficiency and adaptability to different kinds of news video.
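
    The fusion idea of refining audio candidates with visual ones can be sketched as keeping only the silence points that are confirmed by a nearby visual candidate. The window size and the flat-list representation of candidate times are assumptions for illustration, not the authors' exact method.

```python
def fuse(audio_pts, visual_pts, window=1.0):
    """Keep audio (silence) candidate times that are confirmed by at
    least one visual candidate (shot boundary or anchor shot) within
    `window` seconds; the survivors are the story boundaries."""
    return [a for a in audio_pts if any(abs(a - v) <= window for v in visual_pts)]
```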

  12. An improvement analysis on video compression using file segmentation

    NASA Astrophysics Data System (ADS)

    Sharma, Shubhankar; Singh, K. John; Priya, M.

    2017-11-01

    Over the past two decades the rapid evolution of the Internet has led to a massive rise in video technology and, especially, in video consumption over the Internet, which accounts for the bulk of data traffic in general. Because video consumes so much data on the World Wide Web, reducing the bandwidth it consumes eases the burden on the Internet and lets users access video data more easily. For this, many video codecs have been developed, such as HEVC/H.265 and VP9, which raises the dilemma of which is the better technology in terms of rate distortion and coding standard. This paper offers a solution to the difficulty of achieving low delay in video compression and in video applications, e.g. ad-hoc video conferencing/streaming or surveillance. It also benchmarks the HEVC and VP9 video compression techniques with subjective evaluations of High Definition video content played back in web browsers. Moreover, it presents the experimental idea of dividing a video file into several segments for compression and putting them back together, to improve the efficiency of video compression on the web as well as in offline mode.

  13. Effects of Segmenting, Signalling, and Weeding on Learning from Educational Video

    ERIC Educational Resources Information Center

    Ibrahim, Mohamed; Antonenko, Pavlo D.; Greenwood, Carmen M.; Wheeler, Denna

    2012-01-01

    Informed by the cognitive theory of multimedia learning, this study examined the effects of three multimedia design principles on undergraduate students' learning outcomes and perceived learning difficulty in the context of learning entomology from an educational video. These principles included segmenting the video into smaller units, signalling…

  14. A new user-assisted segmentation and tracking technique for an object-based video editing system

    NASA Astrophysics Data System (ADS)

    Yu, Hong Y.; Hong, Sung-Hoon; Lee, Mike M.; Choi, Jae-Gark

    2004-03-01

    This paper presents a semi-automatic segmentation method which can be used to generate a video object plane (VOP) for object-based coding schemes and multimedia authoring environments. Semi-automatic segmentation can be considered a user-assisted segmentation technique. A user can initially mark objects of interest around the object boundaries, and the user-guided and selected objects are then continuously separated from the unselected areas through time evolution in the image sequences. The proposed segmentation method consists of two processing steps: partially manual intra-frame segmentation and fully automatic inter-frame segmentation. The intra-frame segmentation incorporates user assistance to define the meaningful complete visual object of interest to be segmented and decides the precise object boundary. The inter-frame segmentation involves boundary and region tracking to obtain temporal coherence of the moving object based on the object boundary information of the previous frame. The proposed method shows stable, efficient results that could be suitable for many digital video applications such as multimedia content authoring, content-based coding, and indexing. Based on these results, we have developed an object-based video editing system with several convenient editing functions.

  15. Huntsville Area Students Appear in Episode of NASA CONNECT

    NASA Technical Reports Server (NTRS)

    2003-01-01

    Students at Williams Technology Middle School in Huntsville were featured in a new segment of NASA CONNECT, a video series aimed at enhancing the teaching of math, science, and technology to middle school students. The segment premiered nationwide May 15, 2003, and helped viewers understand Sir Isaac Newton's first, second, and third laws of motion and how they relate to NASA's efforts in developing the next generation of space transportation.

  16. Stochastic modeling of soundtrack for efficient segmentation and indexing of video

    NASA Astrophysics Data System (ADS)

    Naphade, Milind R.; Huang, Thomas S.

    1999-12-01

    Tools for efficient and intelligent management of digital content are essential for digital video data management. An extremely challenging research area in this context is that of multimedia analysis and understanding. The capabilities of audio analysis in particular for video data management are yet to be fully exploited. We present a novel scheme for indexing and segmentation of video by analyzing the audio track. This analysis is then applied to the segmentation and indexing of movies. We build models for some interesting events in the motion picture soundtrack, including music, human speech, and silence. We propose the use of hidden Markov models to model the dynamics of the soundtrack and detect audio events. Using these models we segment and index the soundtrack. A practical problem in motion picture soundtracks is that the audio in the track is of a composite nature, corresponding to the mixing of sounds from different sources; speech in the foreground with music in the background is a common example. The coexistence of multiple individual audio sources forces us to model such events explicitly. Experiments reveal that explicit modeling gives better results than modeling individual audio events separately.

  17. Improved segmentation of occluded and adjoining vehicles in traffic surveillance videos

    NASA Astrophysics Data System (ADS)

    Juneja, Medha; Grover, Priyanka

    2013-12-01

    Occlusion in image processing refers to concealment of any part of an object, or the whole object, from the view of an observer. Real-time videos captured by static cameras on roads often encounter overlapping and, hence, occlusion of vehicles. Occlusion in traffic surveillance videos usually occurs when an object being tracked is hidden by another object, which makes it difficult for object detection algorithms to distinguish all the vehicles efficiently. Morphological operations also tend to join vehicles in close proximity, resulting in the formation of a single bounding box around more than one vehicle. Such problems lead to errors in further video processing, such as counting the vehicles in a video. The proposed system brings forward an efficient moving object detection and tracking approach to reduce such errors. The paper uses a successive frame subtraction technique for detection of moving objects. Further, it implements the watershed algorithm to segment the overlapped and adjoining vehicles. The segmentation results have been improved by the use of noise and morphological operations.
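
    The successive-frame-subtraction step can be sketched with NumPy as a simple thresholded absolute difference; the watershed stage that the paper applies afterwards is omitted here, and the threshold value is an assumed parameter.

```python
import numpy as np

def moving_mask(prev, curr, thresh=25):
    """Successive-frame subtraction: pixels whose absolute intensity
    change exceeds `thresh` are marked moving (1), else background (0).
    Frames are cast to a signed dtype so the subtraction cannot wrap."""
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
    return (diff > thresh).astype(np.uint8)
```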

  18. An unsupervised video foreground co-localization and segmentation process by incorporating motion cues and frame features

    NASA Astrophysics Data System (ADS)

    Zhang, Chao; Zhang, Qian; Zheng, Chi; Qiu, Guoping

    2018-04-01

    Video foreground segmentation is one of the key problems in video processing. In this paper, we propose a novel, fully unsupervised approach for foreground object co-localization and segmentation of unconstrained videos. We first compute both the actual edges and the motion boundaries of the video frames, and then align them by their HOG feature maps. Then, by filling the occlusions generated by the aligned edges, we obtain a more precise mask of the foreground object. This motion-based mask serves as the motion-based likelihood, and a color-based likelihood is also adopted for the segmentation process. Experimental results show that our approach outperforms most state-of-the-art algorithms.

  19. NASA's mobile satellite communications program; ground and space segment technologies

    NASA Technical Reports Server (NTRS)

    Naderi, F.; Weber, W. J.; Knouse, G. H.

    1984-01-01

    This paper describes the Mobile Satellite Communications Program of the United States National Aeronautics and Space Administration (NASA). The program's objectives are to facilitate the deployment of the first generation commercial mobile satellite by the private sector, and to technologically enable future generations by developing advanced and high risk ground and space segment technologies. These technologies are aimed at mitigating severe shortages of spectrum, orbital slots, and spacecraft EIRP which are expected to plague the high capacity mobile satellite systems of the future. After a brief introduction of the concept of mobile satellite systems and their expected evolution, this paper outlines the critical ground and space segment technologies. Next, the Mobile Satellite Experiment (MSAT-X) is described. MSAT-X is the framework through which NASA will develop advanced ground segment technologies. An approach is outlined for the development of conformal vehicle antennas, spectrum- and power-efficient speech codecs, modulation techniques for use in nonlinear fading channels, and efficient multiple-access schemes. Finally, the paper concludes with a description of the current and planned NASA activities aimed at developing the complex large multibeam spacecraft antennas needed for future generation mobile satellite systems.

  20. NASA's Kepler Reveals Potential New Worlds - Raw Video New File

    NASA Image and Video Library

    2017-06-19

    This is a video file, or a collection of unedited video clips for media usage, in support of the Kepler mission's latest discovery announcement. Launched in 2009, the Kepler space telescope is our first mission capable of identifying Earth-size planets around other stars. On Monday, June 19, 2017, scientists announced the results from the latest Kepler candidate catalog of the mission at a press conference at NASA's Ames Research Center.

  1. Bilayer segmentation of webcam videos using tree-based classifiers.

    PubMed

    Yin, Pei; Criminisi, Antonio; Winn, John; Essa, Irfan

    2011-01-01

    This paper presents an automatic segmentation algorithm for video frames captured by a (monocular) webcam that closely approximates depth segmentation from a stereo camera. The frames are segmented into foreground and background layers that comprise a subject (participant) and other objects and individuals. The algorithm produces correct segmentations even in the presence of large background motion with a nearly stationary foreground. This research makes three key contributions: First, we introduce a novel motion representation, referred to as "motons," inspired by research in object recognition. Second, we propose estimating the segmentation likelihood from the spatial context of motion. The estimation is efficiently learned by random forests. Third, we introduce a general taxonomy of tree-based classifiers that facilitates both theoretical and experimental comparisons of several known classification algorithms and generates new ones. In our bilayer segmentation algorithm, diverse visual cues such as motion, motion context, color, contrast, and spatial priors are fused by means of a conditional random field (CRF) model. Segmentation is then achieved by binary min-cut. Experiments on many sequences of our videochat application demonstrate that our algorithm, which requires no initialization, is effective in a variety of scenes, and the segmentation results are comparable to those obtained by stereo systems.

  2. Music video shot segmentation using independent component analysis and keyframe extraction based on image complexity

    NASA Astrophysics Data System (ADS)

    Li, Wei; Chen, Ting; Zhang, Wenjun; Shi, Yunyu; Li, Jun

    2012-04-01

    In recent years, music video data has been increasing at an astonishing speed. Shot segmentation and keyframe extraction constitute a fundamental unit in organizing, indexing, and retrieving video content. In this paper a unified framework is proposed to detect shot boundaries and extract the keyframe of a shot. The music video is first segmented into shots by an illumination-invariant chromaticity histogram in the independent component (IC) analysis feature space. Then we present a new metric, image complexity, computed from the ICs, to extract the keyframe of a shot. Experimental results show the framework is effective and has good performance.
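
    As a hedged illustration of an image-complexity metric, the sketch below uses the Shannon entropy of the intensity histogram to pick the keyframe of a shot. The paper's actual metric is computed from the independent components, so this is a stand-in, not the authors' formula.

```python
import math
from collections import Counter

def complexity(frame):
    """Shannon entropy of the intensity histogram of a frame given as
    a flat list of pixel intensities (a simple complexity proxy)."""
    counts = Counter(frame)
    n = len(frame)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def keyframe(shot):
    """Index of the most complex frame within a shot."""
    return max(range(len(shot)), key=lambda i: complexity(shot[i]))
```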

  3. Video rate color region segmentation for mobile robotic applications

    NASA Astrophysics Data System (ADS)

    de Cabrol, Aymeric; Bonnin, Patrick J.; Hugel, Vincent; Blazevic, Pierre; Chetto, Maryline

    2005-08-01

    Color regions can be an interesting image feature to extract for visual tasks in robotics, such as navigation and obstacle avoidance. But, whereas numerous methods are used for vision systems embedded on robots, only a few use this segmentation, mainly because of the processing time. In this paper, we propose a new real-time (i.e., video-rate) color region segmentation followed by a robust color classification and a merging of regions, dedicated to various applications such as the RoboCup four-legged league or an industrial conveyor wheeled robot. The performance of this algorithm and a comparison with other methods, in terms of result quality and speed, are provided. For better-quality results, the obtained speed-up is between 2 and 4; for same-quality results, it is up to 10. We also present the outlines of the Dynamic Vision System of the CLEOPATRE Project, for which this segmentation has been developed, and the Clear Box Methodology, which allowed us to create the new color region segmentation from the evaluation and knowledge of other well-known segmentations.

  4. Joint Multi-Leaf Segmentation, Alignment, and Tracking for Fluorescence Plant Videos.

    PubMed

    Yin, Xi; Liu, Xiaoming; Chen, Jin; Kramer, David M

    2018-06-01

    This paper proposes a novel framework for fluorescence plant video processing. The plant research community is interested in the leaf-level photosynthetic analysis within a plant. A prerequisite for such analysis is to segment all leaves, estimate their structures, and track them over time. We identify this as a joint multi-leaf segmentation, alignment, and tracking problem. First, leaf segmentation and alignment are applied on the last frame of a plant video to find a number of well-aligned leaf candidates. Second, leaf tracking is applied on the remaining frames with leaf candidate transformation from the previous frame. We form two optimization problems with shared terms in their objective functions for leaf alignment and tracking respectively. A quantitative evaluation framework is formulated to evaluate the performance of our algorithm with four metrics. Two models are learned to predict the alignment accuracy and detect tracking failure respectively in order to provide guidance for subsequent plant biology analysis. The limitation of our algorithm is also studied. Experimental results show the effectiveness, efficiency, and robustness of the proposed method.

  5. NASA Report to Education, Volume 9

    NASA Technical Reports Server (NTRS)

    1991-01-01

    This is an edition of 'NASA Report to Education' covering NASA's Educational Workshop, Lewis Research Center's T-34, and the Space Exploration Initiative. The first segment shows the NASA Educational Workshops for Elementary School Teachers (NEWEST) program, with highlights of the 14 days of intense training, lectures, field trips, and simple projects that the educators went through. Participants are shown working on various projects such as the electromagnetic spectrum, living in Space Station Freedom, experience in the T-34, a tour of the tower at the Federal Aviation Administration facilities, conducting an egg survival system, and an interactive video conference with astronaut Story Musgrave. Participants share impressions of the workshop. The second segment tells how Lewis Research Center's T-34 aircraft is used to promote aerospace education in several Cleveland schools and excite students.

  6. Segmentation of Pollen Tube Growth Videos Using Dynamic Bi-Modal Fusion and Seam Carving.

    PubMed

    Tambo, Asongu L; Bhanu, Bir

    2016-05-01

    The growth of pollen tubes is of significant interest in plant cell biology, as it provides an understanding of internal cell dynamics that affect observable structural characteristics such as cell diameter, length, and growth rate. However, these parameters can only be measured in experimental videos if the complete shape of the cell is known. The challenge is to accurately obtain the cell boundary in noisy video images. Usually, these measurements are performed by a scientist who manually draws regions-of-interest on the images displayed on a computer screen. In this paper, a new automated technique is presented for boundary detection by fusing fluorescence and brightfield images, and a new efficient method of obtaining the final cell boundary through the process of Seam Carving is proposed. This approach takes advantage of the nature of the fusion process and also the shape of the pollen tube to efficiently search for the optimal cell boundary. In video segmentation, the first two frames are used to initialize the segmentation process by creating a search space based on a parametric model of the cell shape. Updates to the search space are performed based on the location of past segmentations and a prediction of the next segmentation. Experimental results show comparable accuracy to a previous method, but a significant decrease in processing time. This has the potential for real-time applications in pollen tube microscopy.

  7. Self Occlusion and Disocclusion in Causal Video Object Segmentation

    DTIC Science & Technology

    2015-12-18

    computation is parameter-free in contrast to [4, 32, 10]. Taylor et al. [30] perform layer segmentation in longer video sequences leveraging occlusion cues...shows that our method recovers from errors in the first frame (short of failed detection). Figure 7 (Sample Visual Results on FBMS-59) compares various state-of-the-art methods: image, ground truth, Lee et al. [19], Grundman et al. [14], Ochs et al. [23], Taylor et al. [30], and ours. Only a single

  8. Applicability of NASA (ARC) two-segment approach procedures to Boeing Aircraft

    NASA Technical Reports Server (NTRS)

    Allison, R. L.

    1974-01-01

    An engineering study to determine the feasibility of applying the NASA (ARC) two-segment approach procedures and avionics to the Boeing fleet of commercial jet transports is presented. This feasibility study is concerned with the speed/path control and system compatibility aspects of the procedures. Path performance data are provided for representative Boeing 707/727/737/747 passenger models. Thrust margin requirements for speed/path control are analyzed for still air and shearing tailwind conditions. Certification of the two-segment equipment and possible effects on existing airplane certification are discussed. Operational restrictions on the use of the procedures with current autothrottles and in icing or reported tailwind conditions are recommended. Using the NASA/UAL 727 procedures as a baseline, maximum upper glide slopes for representative 707/727/737/747 models are defined as a starting point for further study and/or flight evaluation programs.

  9. Video of Tissue Grown in Space in NASA Bioreactor

    NASA Technical Reports Server (NTRS)

    2003-01-01

    Principal investigator Leland Chung grew prostate cancer and bone stromal cells aboard the Space Shuttle Columbia during the STS-107 mission. Although the experiment samples were lost along with the ill-fated spacecraft and crew, he did obtain downlinked video of the experiment that indicates the enormous potential of growing tissues in microgravity. Cells grown aboard Columbia had grown far larger tissue aggregates at day 5 than did the cells grown in a NASA bioreactor on the ground.

  10. Video Object Segmentation through Spatially Accurate and Temporally Dense Extraction of Primary Object Regions (Open Access)

    DTIC Science & Technology

    2013-10-03

    follow the setup in the literature ([13, 14]), and use 5 of the videos (birdfall, cheetah, girl, monkeydog and parachute) for evaluation (since the...segmentation labeling results of the method, GT is the ground-truth labeling of the video, and F is the... Panels: (a) Birdfall, (b) Cheetah, (c) Girl, (d) Monkeydog. Results table (columns: Video, Ours, [14], [13], [20], [6]): birdfall 155, 189, 288, 252, 454; cheetah 633, 806, 905, 1142, 1217; girl 1488, 1698, 1785, 1304, 1755; monkeydog 365, 472, 521, 563, 683

  11. Computer-mediated instructional video: a randomised controlled trial comparing a sequential and a segmented instructional video in surgical hand wash.

    PubMed

    Schittek Janda, M; Tani Botticelli, A; Mattheos, N; Nebel, D; Wagner, A; Nattestad, A; Attström, R

    2005-05-01

    Video-based instructions for clinical procedures have been used frequently during the preceding decades. The aim was to investigate, in a randomised controlled trial, the learning effectiveness of a fragmented video vs. the complete sequential video, and to analyse the attitudes of users towards video as a learning aid. An instructional video on surgical hand wash was produced. The video was available in two different forms on two separate web pages: one as a sequential video and one fragmented into eight short clips. Twenty-eight dental students in the second semester were randomised into an experimental (n = 15) and a control group (n = 13). The experimental group used the fragmented form of the video and the control group watched the complete one. The use of the videos was logged and the students were videotaped whilst undertaking a test hand wash. The recordings were analysed systematically and blindly by two independent clinicians. The students also performed a written test concerning the learning outcome from the videos and answered an attitude questionnaire. The students in the experimental group watched the video significantly longer than the control group. There were no significant differences between the groups with regard to the ratings and scores when performing the hand wash. The experimental group had significantly better results in the written test compared with those of the control group. There was no significant difference between the groups with regard to attitudes towards the use of video for learning, as measured by the Visual Analogue Scales. Most students in both groups expressed satisfaction with the use of video for learning. The students demonstrated positive attitudes and acceptable learning outcomes from viewing CAL videos as a part of their pre-clinical training.
Videos that are part of computer-based learning settings would ideally be presented to the students both as a segmented and as a whole video to give the students the option to choose the

  12. Object class segmentation of RGB-D video using recurrent convolutional neural networks.

    PubMed

    Pavel, Mircea Serban; Schulz, Hannes; Behnke, Sven

    2017-04-01

    Object class segmentation is a computer vision task which requires labeling each pixel of an image with the class of the object it belongs to. Deep convolutional neural networks (DNN) are able to learn and take advantage of local spatial correlations required for this task. They are, however, restricted by their small, fixed-sized filters, which limits their ability to learn long-range dependencies. Recurrent Neural Networks (RNN), on the other hand, do not suffer from this restriction. Their iterative interpretation allows them to model long-range dependencies by propagating activity. This property is especially useful when labeling video sequences, where both spatial and temporal long-range dependencies occur. In this work, a novel RNN architecture for object class segmentation is presented. We investigate several ways to train such a network. We evaluate our models on the challenging NYU Depth v2 dataset for object class segmentation and obtain competitive results. Copyright © 2017 Elsevier Ltd. All rights reserved.

  13. Hybrid video-assisted thoracic surgery with segmental-main bronchial sleeve resection for non-small cell lung cancer.

    PubMed

    Li, Shuben; Chai, Huiping; Huang, Jun; Zeng, Guangqiao; Shao, Wenlong; He, Jianxing

    2014-04-01

    The purpose of the current study is to present the clinical and surgical results in patients who underwent hybrid video-assisted thoracic surgery with segmental-main bronchial sleeve resection. Thirty-one patients, 27 men and 4 women, underwent segmental-main bronchial sleeve anastomoses for non-small cell lung cancer between May 2004 and May 2011. Twenty-six (83.9%) patients had squamous cell carcinoma, and 5 patients had adenocarcinoma. Six patients were at stage IIB, 24 patients at stage IIIA, and 1 patient at stage IIIB. Secondary sleeve anastomosis was performed in 18 patients, and Y-shaped multiple sleeve anastomosis was performed in 8 patients. Single segmental bronchiole anastomosis was performed in 5 cases. The average time for chest tube removal was 5.6 days. The average length of hospital stay was 11.8 days. No anastomosis fistula developed in any of the patients. The 1-, 2-, and 3-year survival rates were 83.9%, 71.0%, and 41.9%, respectively. Hybrid video-assisted thoracic surgery with segmental-main bronchial sleeve resection is a complex technique that requires training and experience, but it is an effective and safe operation for selected patients.

  14. Image Segmentation Analysis for NASA Earth Science Applications

    NASA Technical Reports Server (NTRS)

    Tilton, James C.

    2010-01-01

    NASA collects large volumes of imagery data from satellite-based Earth remote sensing sensors. Nearly all of the computerized image analysis of this data is performed pixel-by-pixel, in which an algorithm is applied directly to individual image pixels. While this analysis approach is satisfactory in many cases, it is usually not fully effective in extracting the full information content from the high spatial resolution image data that is now becoming increasingly available from these sensors. The field of object-based image analysis (OBIA) has arisen in recent years to address the need to move beyond pixel-based analysis. The Recursive Hierarchical Segmentation (RHSEG) software developed by the author is being used to facilitate moving from pixel-based image analysis to OBIA. The key unique aspect of RHSEG is that it tightly intertwines region growing segmentation, which produces spatially connected region objects, with region object classification, which groups sets of region objects together into region classes. No other practical, operational image segmentation approach has this tight integration of region growing object finding with region classification. This integration is made possible by the recursive, divide-and-conquer implementation utilized by RHSEG, in which the input image data is recursively subdivided until the image data sections are small enough to successfully mitigate the combinatorial explosion caused by the need to compute the dissimilarity between each pair of image pixels. RHSEG's tight integration of region growing object finding and region classification is what enables the high spatial fidelity of the image segmentations produced by RHSEG. This presentation will provide an overview of the RHSEG algorithm and describe how it is currently being used to support OBIA for Earth Science applications such as snow/ice mapping and finding archaeological sites from remotely sensed data.
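
    The region-growing idea behind RHSEG can be illustrated with a toy merger: each pixel starts as its own region and the most similar adjacent pair is merged until the requested number of regions remains. This 1-D sketch deliberately omits RHSEG's 2-D adjacency, recursive subdivision, and region classification.

```python
def hierarchical_merge(pixels, n_regions):
    """Toy 1-D region-growing segmentation: every pixel begins as its
    own region; the adjacent pair with the smallest difference in mean
    intensity is merged repeatedly until n_regions remain."""
    regions = [[p] for p in pixels]
    while len(regions) > n_regions:
        means = [sum(r) / len(r) for r in regions]
        i = min(range(len(regions) - 1), key=lambda k: abs(means[k] - means[k + 1]))
        regions[i:i + 2] = [regions[i] + regions[i + 1]]
    return regions
```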

  15. An efficient fully unsupervised video object segmentation scheme using an adaptive neural-network classifier architecture.

    PubMed

    Doulamis, A; Doulamis, N; Ntalianis, K; Kollias, S

    2003-01-01

    In this paper, an unsupervised video object (VO) segmentation and tracking algorithm is proposed based on an adaptable neural-network architecture. The proposed scheme comprises: 1) a VO tracking module and 2) an initial VO estimation module. Object tracking is handled as a classification problem and implemented through an adaptive network classifier, which provides better results compared to conventional motion-based tracking algorithms. Network adaptation is accomplished through an efficient and cost effective weight updating algorithm, providing a minimum degradation of the previous network knowledge and taking into account the current content conditions. A retraining set is constructed and used for this purpose based on initial VO estimation results. Two different scenarios are investigated. The first concerns extraction of human entities in video conferencing applications, while the second exploits depth information to identify generic VOs in stereoscopic video sequences. Human face/ body detection based on Gaussian distributions is accomplished in the first scenario, while segmentation fusion is obtained using color and depth information in the second scenario. A decision mechanism is also incorporated to detect time instances for weight updating. Experimental results and comparisons indicate the good performance of the proposed scheme even in sequences with complicated content (object bending, occlusion).

  16. Traffic Video Image Segmentation Model Based on Bayesian and Spatio-Temporal Markov Random Field

    NASA Astrophysics Data System (ADS)

    Zhou, Jun; Bao, Xu; Li, Dawei; Yin, Yongwen

    2017-10-01

    Traffic video images are dynamic: background and foreground change continuously, which gives rise to occlusion. In such cases it is difficult to obtain an accurate segmentation with general-purpose methods. A segmentation algorithm based on Bayesian inference and a spatio-temporal Markov random field (ST-MRF) is put forward. It builds energy-function models of the observation field and the label field for a motion image sequence with the Markov property; then, following Bayes' rule, it uses the interaction of the label field and the observation field, that is, the relationship between the label field's prior probability and the observation field's likelihood probability, to obtain the maximum a posteriori estimate of the label field, and applies the ICM algorithm to extract the moving object, completing the segmentation. Finally, segmentation by ST-MRF alone and by Bayesian inference combined with ST-MRF were compared. Experimental results show that the Bayesian combined with ST-MRF algorithm segments faster than ST-MRF alone with a smaller computational workload, and achieves a better segmentation effect even in heavy-traffic dynamic scenes.
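
    The ICM step mentioned in this abstract can be sketched minimally as follows (a hedged illustration, not the paper's method: the energy here uses a 1-D binary labeling with a squared-error likelihood term and a simple neighbor-disagreement prior, whereas the paper's energy couples observation and label fields over space and time):

```python
def icm_segment(obs, means=(0.0, 1.0), beta=0.5, n_iter=10):
    """Iterated conditional modes for a 1-D binary MRF labeling.

    Per-site energy = (obs - means[label])**2          (likelihood term)
                    + beta * #neighbors with a different label  (prior).
    Each site greedily takes the label minimizing its local energy.
    """
    labels = [0 if abs(o - means[0]) < abs(o - means[1]) else 1 for o in obs]
    for _ in range(n_iter):
        changed = False
        for i, o in enumerate(obs):
            costs = []
            for lab in (0, 1):
                data = (o - means[lab]) ** 2
                smooth = 0
                if i > 0 and labels[i - 1] != lab:
                    smooth += 1
                if i < len(obs) - 1 and labels[i + 1] != lab:
                    smooth += 1
                costs.append(data + beta * smooth)
            new = 0 if costs[0] <= costs[1] else 1
            changed |= (new != labels[i])
            labels[i] = new
        if not changed:
            break
    return labels
```

    The smoothness term is what suppresses isolated mislabeled pixels: a lone "foreground" site surrounded by background pays the neighbor penalty and flips back.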

  17. From image captioning to video summary using deep recurrent networks and unsupervised segmentation

    NASA Astrophysics Data System (ADS)

    Morosanu, Bogdan-Andrei; Lemnaru, Camelia

    2018-04-01

    Automatic captioning systems based on recurrent neural networks have been tremendously successful at providing realistic natural language captions for complex and varied image data. We explore methods for adapting existing models trained on large image caption data sets to a similar problem, that of summarising videos using natural language descriptions and frame selection. These architectures create internal high level representations of the input image that can be used to define probability distributions and distance metrics on these distributions. Specifically, we interpret each hidden unit inside a layer of the caption model as representing the un-normalised log probability of some unknown image feature of interest for the caption generation process. We can then apply well understood statistical divergence measures to express the difference between images and create an unsupervised segmentation of video frames, classifying consecutive images of low divergence as belonging to the same context, and those of high divergence as belonging to different contexts. To provide a final summary of the video, we provide a group of selected frames and a text description accompanying them, allowing a user to perform a quick exploration of large unlabeled video databases.
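
    The divergence-based grouping this abstract describes can be sketched as follows (an illustration under assumptions: a per-frame hidden-activation vector is taken as given, normalized into a distribution with a softmax, and a symmetric Kullback-Leibler divergence with a fixed threshold decides the cuts; the authors' exact divergence and thresholding may differ):

```python
import math

def softmax(v):
    # Turn un-normalised log probabilities into a distribution.
    m = max(v)
    e = [math.exp(x - m) for x in v]
    s = sum(e)
    return [x / s for x in e]

def sym_kl(p, q):
    # Symmetric Kullback-Leibler divergence between two distributions.
    kl = lambda a, b: sum(x * math.log(x / y) for x, y in zip(a, b))
    return kl(p, q) + kl(q, p)

def segment_frames(activations, threshold):
    """Group consecutive frames into contexts: cut wherever the
    divergence between neighbouring frames exceeds `threshold`.
    Returns (start, end) frame-index pairs, inclusive."""
    dists = [softmax(a) for a in activations]
    segments, start = [], 0
    for i in range(1, len(dists)):
        if sym_kl(dists[i - 1], dists[i]) > threshold:
            segments.append((start, i - 1))
            start = i
    segments.append((start, len(dists) - 1))
    return segments
```

    Low-divergence runs become one context; a large jump in divergence starts a new segment, exactly the consecutive-frame rule the abstract describes.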

  18. Performance Evaluation of the NASA/KSC Transmission System

    NASA Technical Reports Server (NTRS)

    Christensen, Kenneth J.

    2000-01-01

    NASA-KSC currently uses three bridged 100-Mbps FDDI segments as its backbone for data traffic. The FDDI Transmission System (FTXS) connects the KSC industrial area, KSC launch complex 39 area, and the Cape Canaveral Air Force Station. The report presents a performance modeling study of the FTXS and the proposed ATM Transmission System (ATXS). The focus of the study is on performance of MPEG video transmission on these networks. Commercial modeling tools - the CACI Predictor and Comnet tools - were used. In addition, custom software tools were developed to characterize conversation pairs in Sniffer trace (capture) files to use as input to these tools. A baseline study of both non-launch and launch day data traffic on the FTXS is presented. MPEG-1 and MPEG-2 video traffic was characterized and the shaping of it evaluated. It is shown that the characteristics of a video stream have a direct effect on its performance in a network. It is also shown that shaping of video streams is necessary to prevent overflow losses and resulting poor video quality. The developed models can be used to predict when the existing FTXS will 'run out of room' and for optimizing the parameters of ATM links used for transmission of MPEG video. Future work with these models can provide useful input and validation to set-top box projects within the Advanced Networks Development group in NASA-KSC Development Engineering.

  19. NASA's K/Ka-Band Broadband Aeronautical Terminal for Duplex Satellite Video Communications

    NASA Technical Reports Server (NTRS)

    Densmore, A.; Agan, M.

    1994-01-01

    JPL has recently begun the development of a Broadband Aeronautical Terminal (BAT) for duplex video satellite communications on commercial or business class aircraft. The BAT is designed for use with NASA's K/Ka-band Advanced Communications Technology Satellite (ACTS). The BAT system will provide the systems and technology groundwork for an eventual commercial K/Ka-band aeronautical satellite communication system. With industry/government partnerships, three main goals will be addressed by the BAT task: 1) develop, characterize and demonstrate the performance of an ACTS based high data rate aeronautical communications system; 2) assess the performance of current video compression algorithms in an aeronautical satellite communication link; and 3) characterize the propagation effects of the K/Ka-band channel for aeronautical communications.

  20. Automated segmentation and tracking of non-rigid objects in time-lapse microscopy videos of polymorphonuclear neutrophils.

    PubMed

    Brandes, Susanne; Mokhtari, Zeinab; Essig, Fabian; Hünniger, Kerstin; Kurzai, Oliver; Figge, Marc Thilo

    2015-02-01

    Time-lapse microscopy is an important technique to study the dynamics of various biological processes. The labor-intensive manual analysis of microscopy videos is increasingly replaced by automated segmentation and tracking methods. These methods are often limited to certain cell morphologies and/or cell stainings. In this paper, we present an automated segmentation and tracking framework that does not have these restrictions. In particular, our framework handles highly variable cell shapes and does not rely on any cell stainings. Our segmentation approach is based on a combination of spatial and temporal image variations to detect moving cells in microscopy videos. This method yields a sensitivity of 99% and a precision of 95% in object detection. The tracking of cells consists of different steps, starting from single-cell tracking based on a nearest-neighbor-approach, detection of cell-cell interactions and splitting of cell clusters, and finally combining tracklets using methods from graph theory. The segmentation and tracking framework was applied to synthetic as well as experimental datasets with varying cell densities implying different numbers of cell-cell interactions. We established a validation framework to measure the performance of our tracking technique. The cell tracking accuracy was found to be >99% for all datasets indicating a high accuracy for connecting the detected cells between different time points. Copyright © 2014 Elsevier B.V. All rights reserved.
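
    The single-cell tracking step described above (nearest-neighbor association between consecutive frames) can be sketched as follows; the gating distance and the (x, y)-centroid data layout are assumptions for illustration, not the paper's parameters:

```python
def link_nearest(prev, curr, max_dist):
    """Greedily link each detection in `prev` (list of (x, y) centroids)
    to its nearest unmatched detection in `curr`, skipping links longer
    than `max_dist` (the gating distance). Returns (prev_idx, curr_idx)
    pairs; unmatched detections start or end tracklets."""
    links, used = [], set()
    for i, (px, py) in enumerate(prev):
        best_j, best_d2 = None, max_dist ** 2
        for j, (cx, cy) in enumerate(curr):
            if j in used:
                continue
            d2 = (px - cx) ** 2 + (py - cy) ** 2
            if d2 <= best_d2:
                best_j, best_d2 = j, d2
        if best_j is not None:
            links.append((i, best_j))
            used.add(best_j)
    return links
```

    In the full framework such frame-to-frame links are only the first stage; cell-cell interactions, cluster splitting, and graph-theoretic tracklet joining handle the cases this greedy step cannot.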

  1. Video of Miscible Fluid Experiment Conducted on NASA Low Gravity Airplane

    NASA Technical Reports Server (NTRS)

    2003-01-01

    This is a video of dyed water being injected into glycerin in a 2.2 centimeter (cm) diameter test tube. The experiment was conducted on the KC-135 aircraft, a NASA plane that creates microgravity and 2g conditions as it maneuvers through multiple parabolas. The water is less dense and so it rises to the top of the glycerin. The goal of the experiment was to determine if a blob of a miscible fluid would spontaneously become spherical in a microgravity environment.

  2. Surgical gesture segmentation and recognition.

    PubMed

    Tao, Lingling; Zappella, Luca; Hager, Gregory D; Vidal, René

    2013-01-01

    Automatic surgical gesture segmentation and recognition can provide useful feedback for surgical training in robotic surgery. Most prior work in this field relies on the robot's kinematic data. Although recent work [1,2] shows that the robot's video data can be equally effective for surgical gesture recognition, the segmentation of the video into gestures is assumed to be known. In this paper, we propose a framework for joint segmentation and recognition of surgical gestures from kinematic and video data. Unlike prior work that relies on either frame-level kinematic cues, or segment-level kinematic or video cues, our approach exploits both cues by using a combined Markov/semi-Markov conditional random field (MsM-CRF) model. Our experiments show that the proposed model improves over a Markov or semi-Markov CRF when using video data alone, gives results that are comparable to state-of-the-art methods on kinematic data alone, and improves over state-of-the-art methods when combining kinematic and video data.

  3. Social media is all about video these days: tips communicating science from NASA's Earth Right Now

    NASA Astrophysics Data System (ADS)

    Bell, S.

    2016-12-01

    If you're not producing video to communicate your science findings, you're missing the boat navigating the ever-evolving currents of social media. NASA's Earth Right Now communications team made video a priority the past year as we engaged a massive online audience on social media. We will share best practices on social media, lessons learned, what's on the horizon and storytelling techniques to try. PBS documentary-style is passé. Welcome to the world of ten-second Snaps, text-on-picture CNN stories, Facebook Live events and 360° video experiences. Your audience is out there; you just need to catch their attention.

  4. Video segmentation for post-production

    NASA Astrophysics Data System (ADS)

    Wills, Ciaran

    2001-12-01

    Specialist post-production is an industry that has much to gain from the application of content-based video analysis techniques. However the types of material handled in specialist post-production, such as television commercials, pop music videos and special effects are quite different in nature from the typical broadcast material which many video analysis techniques are designed to work with; shots are short and highly dynamic, and the transitions are often novel or ambiguous. We address the problem of scene change detection and develop a new algorithm which tackles some of the common aspects of post-production material that cause difficulties for past algorithms, such as illumination changes and jump cuts. Operating in the compressed domain on Motion JPEG compressed video, our algorithm detects cuts and fades by analyzing each JPEG macroblock in the context of its temporal and spatial neighbors. Analyzing the DCT coefficients directly we can extract the mean color of a block and an approximate detail level. We can also perform an approximated cross-correlation between two blocks. The algorithm is part of a set of tools being developed to work with an automated asset management system designed specifically for use in post-production facilities.
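
    The mean-color trick this abstract relies on follows directly from the DCT definition: the DC coefficient of an 8x8 JPEG block is, up to a known scale, the block's mean value, so no inverse transform is needed. A hedged sketch using the orthonormal 2-D DCT-II convention (under which DC = 8 * mean for an 8x8 block; JPEG's actual quantized coefficients also carry a scale factor, omitted here):

```python
def dct_dc(block):
    """DC coefficient of an 8x8 block under the orthonormal 2-D DCT-II:
    X(0,0) = (1/8) * (sum of all 64 samples) = 8 * block mean."""
    return sum(sum(row) for row in block) / 8.0

def block_mean_from_dc(dc):
    # Recover the block's mean value straight from the DC coefficient,
    # with no inverse DCT: mean = DC / 8 for an 8x8 block.
    return dc / 8.0
```

    Working per-macroblock on these means (and on the low-frequency AC coefficients as a detail estimate) is what lets the algorithm operate in the compressed domain without decoding the Motion JPEG stream.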

  5. Affective video retrieval: violence detection in Hollywood movies by large-scale segmental feature extraction.

    PubMed

    Eyben, Florian; Weninger, Felix; Lehment, Nicolas; Schuller, Björn; Rigoll, Gerhard

    2013-01-01

    Without doubt general video and sound, as found in large multimedia archives, carry emotional information. Thus, audio and video retrieval by certain emotional categories or dimensions could play a central role for tomorrow's intelligent systems, enabling search for movies with a particular mood, computer aided scene and sound design in order to elicit certain emotions in the audience, etc. Yet, the lion's share of research in affective computing is exclusively focusing on signals conveyed by humans, such as affective speech. Uniting the fields of multimedia retrieval and affective computing is believed to lend to a multiplicity of interesting retrieval applications, and at the same time to benefit affective computing research, by moving its methodology "out of the lab" to real-world, diverse data. In this contribution, we address the problem of finding "disturbing" scenes in movies, a scenario that is highly relevant for computer-aided parental guidance. We apply large-scale segmental feature extraction combined with audio-visual classification to the particular task of detecting violence. Our system performs fully data-driven analysis including automatic segmentation. We evaluate the system in terms of mean average precision (MAP) on the official data set of the MediaEval 2012 evaluation campaign's Affect Task, which consists of 18 original Hollywood movies, achieving up to .398 MAP on unseen test data in full realism. An in-depth analysis of the worth of individual features with respect to the target class and the system errors is carried out and reveals the importance of peak-related audio feature extraction and low-level histogram-based video analysis.

  6. Analysis and segmentation of images in case of solving problems of detecting and tracing objects on real-time video

    NASA Astrophysics Data System (ADS)

    Ezhova, Kseniia; Fedorenko, Dmitriy; Chuhlamov, Anton

    2016-04-01

    The article deals with methods of image segmentation based on color space conversion, which provide an efficient way to detect a single color under complex background and lighting conditions, as well as to detect objects on a homogeneous background. It presents the results of an analysis of segmentation algorithms of this type and discusses the possibility of implementing them in software. The implemented algorithm is computationally expensive, which limits its application to video analysis; however, it makes it possible to analyze objects in an image when no image dictionary or knowledge base is available, and to solve the problem of choosing optimal frame quantization parameters for video analysis.

  7. Affective Video Retrieval: Violence Detection in Hollywood Movies by Large-Scale Segmental Feature Extraction

    PubMed Central

    Eyben, Florian; Weninger, Felix; Lehment, Nicolas; Schuller, Björn; Rigoll, Gerhard

    2013-01-01

    Without doubt general video and sound, as found in large multimedia archives, carry emotional information. Thus, audio and video retrieval by certain emotional categories or dimensions could play a central role for tomorrow's intelligent systems, enabling search for movies with a particular mood, computer aided scene and sound design in order to elicit certain emotions in the audience, etc. Yet, the lion's share of research in affective computing is exclusively focusing on signals conveyed by humans, such as affective speech. Uniting the fields of multimedia retrieval and affective computing is believed to lend to a multiplicity of interesting retrieval applications, and at the same time to benefit affective computing research, by moving its methodology “out of the lab” to real-world, diverse data. In this contribution, we address the problem of finding “disturbing” scenes in movies, a scenario that is highly relevant for computer-aided parental guidance. We apply large-scale segmental feature extraction combined with audio-visual classification to the particular task of detecting violence. Our system performs fully data-driven analysis including automatic segmentation. We evaluate the system in terms of mean average precision (MAP) on the official data set of the MediaEval 2012 evaluation campaign's Affect Task, which consists of 18 original Hollywood movies, achieving up to .398 MAP on unseen test data in full realism. An in-depth analysis of the worth of individual features with respect to the target class and the system errors is carried out and reveals the importance of peak-related audio feature extraction and low-level histogram-based video analysis. PMID:24391704

  8. The NASA eClips 4D Program: Impacts from the First Year Quasi-Experimental Study on Video Development and Viewing on Students.

    NASA Astrophysics Data System (ADS)

    Davey, B.; Davis, H. B.; Harper-Neely, J.; Bowers, S.

    2017-12-01

    NASA eClips™ is a multi-media educational program providing educational resources relevant to the formal K-12 classroom. Science content for the NASA eClips™ 4D elements is drawn from all four divisions of the Science Mission Directorate (SMD) as well as cross-divisional topics. The suite of elements fulfills the following SMD education objectives: Enable STEM education, Improve U.S. scientific literacy, Advance national education goals (CoSTEM), and Leverage efforts through partnerships. A component of eClips™ was the development of NASA Spotlite videos (student-developed videos designed to increase student literacy and address misconceptions of other students) by digital media students. While developing the Spotlite videos, the students gained skills in teamwork, working in groups to accomplish a task, and conveying specific concepts in a video. The teachers felt the video project was a good fit for their courses and enhanced what the students were already learning. Teachers also reported that the students learned knowledge and skills that would help them in future careers, including how to gain a better understanding of a project and the importance of being knowledgeable about the topic. The student-developed eClips videos were then used as part of interactive lessons to help other students learn about key science concepts. As part of our research, we established a quasi-experimental design where one group of students received the intervention including the Spotlite videos (intervention group) and one group did not receive the intervention (comparison group). An overall comparison of post scores between intervention group and comparison group students showed intervention groups had significantly higher scores in three of the four content areas - Ozone, Clouds, and Phase Change.

  9. Echocardiogram video summarization

    NASA Astrophysics Data System (ADS)

    Ebadollahi, Shahram; Chang, Shih-Fu; Wu, Henry D.; Takoma, Shin

    2001-05-01

    This work aims at developing innovative algorithms and tools for summarizing echocardiogram videos. Specifically, we summarize the digital echocardiogram videos by temporally segmenting them into the constituent views and representing each view by the most informative frame. For the segmentation we take advantage of the well-defined spatio-temporal structure of the echocardiogram videos. Two different criteria are used: presence/absence of color and the shape of the region of interest (ROI) in each frame of the video. The change in the ROI is due to different modes of echocardiograms present in one study. The representative frame is defined to be the frame corresponding to the end-diastole of the heart cycle. To locate the end-diastole we track the ECG of each frame to find the exact time the time-marker on the ECG crosses the peak of the R-wave. The corresponding frame is chosen to be the key-frame. The entire echocardiogram video can be summarized into either a static summary, which is a storyboard type of summary, or a dynamic summary, which is a concatenation of the selected segments of the echocardiogram video. To the best of our knowledge, this is the first automated system for summarizing echocardiogram videos based on visual content.
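
    The key-frame rule above can be sketched minimally as follows (a hedged illustration, assuming one ECG sample per video frame and a simple local-maximum peak test; the paper instead tracks the time-marker crossing the R-wave peak on the rendered ECG trace):

```python
def keyframe_indices(ecg, min_height):
    """Return frame indices at R-wave peaks: ECG samples strictly greater
    than both neighbours and at least `min_height`. Each such index marks
    the frame chosen as a key-frame for its heart cycle."""
    peaks = []
    for i in range(1, len(ecg) - 1):
        if ecg[i] > ecg[i - 1] and ecg[i] > ecg[i + 1] and ecg[i] >= min_height:
            peaks.append(i)
    return peaks
```

    A storyboard-style static summary would then be the frames at these indices, one per detected heart cycle.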

  10. Hierarchical video summarization

    NASA Astrophysics Data System (ADS)

    Ratakonda, Krishna; Sezan, M. Ibrahim; Crinon, Regis J.

    1998-12-01

    We address the problem of key-frame summarization of video in the absence of any a priori information about its content. This is a common problem that is encountered in home videos. We propose a hierarchical key-frame summarization algorithm where a coarse-to-fine key-frame summary is generated. A hierarchical key-frame summary facilitates multi-level browsing where the user can quickly discover the content of the video by accessing its coarsest but most compact summary and then view a desired segment of the video with increasingly more detail. At the finest level, the summary is generated on the basis of color features of video frames, using an extension of a recently proposed key-frame extraction algorithm. The finest level key-frames are recursively clustered using a novel pairwise K-means clustering approach with temporal consecutiveness constraint. We also address summarization of MPEG-2 compressed video without fully decoding the bitstream. We also propose efficient mechanisms that facilitate decoding the video when the hierarchical summary is utilized in browsing and playback of video segments starting at selected key-frames.
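
    The coarse-to-fine structure described above can be sketched in a toy form (illustrative only; the paper builds its hierarchy by pairwise K-means clustering of color features with a temporal-consecutiveness constraint, whereas this sketch simply decimates the finest-level key-frame list by a factor of two per level):

```python
def hierarchical_summary(keyframes, levels):
    """Build a coarse-to-fine summary: each level keeps every 2nd
    key-frame of the level below, so the first (coarsest) level is the
    most compact. `keyframes` is the finest-level list of frame indices."""
    summary = [list(keyframes)]
    for _ in range(levels - 1):
        summary.append(summary[-1][::2])
    return summary[::-1]  # coarsest level first, finest last
```

    Browsing then starts at the coarsest list and drills down to finer lists only around the segment of interest, which is the multi-level browsing pattern the abstract describes.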

  11. NASA Releases 'NASA App HD' for iPad

    NASA Image and Video Library

    2012-07-06

    The NASA App HD invites you to discover a wealth of NASA information right on your iPad. The application collects, customizes and delivers an extensive selection of dynamically updated mission information, images, videos and Twitter feeds from various online NASA sources in a convenient mobile package. Come explore with NASA, now on your iPad. 2012 Updated Version - HD Resolution and new features. Original version published on Sept. 1, 2010.

  12. Video Modeling by Experts with Video Feedback to Enhance Gymnastics Skills

    ERIC Educational Resources Information Center

    Boyer, Eva; Miltenberger, Raymond G.; Batsche, Catherine; Fogel, Victoria

    2009-01-01

    The effects of combining video modeling by experts with video feedback were analyzed with 4 female competitive gymnasts (7 to 10 years old) in a multiple baseline design across behaviors. During the intervention, after the gymnast performed a specific gymnastics skill, she viewed a video segment showing an expert gymnast performing the same skill…

  13. Automatic segmentation of the optic nerve head for deformation measurements in video rate optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Hidalgo-Aguirre, Maribel; Gitelman, Julian; Lesk, Mark Richard; Costantino, Santiago

    2015-11-01

    Optical coherence tomography (OCT) imaging has become a standard diagnostic tool in ophthalmology, providing essential information associated with various eye diseases. In order to investigate the dynamics of the ocular fundus, we present a simple and accurate automated algorithm to segment the inner limiting membrane in video-rate optic nerve head spectral domain (SD) OCT images. The method is based on morphological operations including a two-step contrast enhancement technique, proving to be very robust when dealing with low signal-to-noise ratio images and pathological eyes. An analysis algorithm was also developed to measure neuroretinal tissue deformation from the segmented retinal profiles. The performance of the algorithm is demonstrated, and deformation results are presented for healthy and glaucomatous eyes.

  14. New robust algorithm for tracking cells in videos of Drosophila morphogenesis based on finding an ideal path in segmented spatio-temporal cellular structures.

    PubMed

    Bellaïche, Yohanns; Bosveld, Floris; Graner, François; Mikula, Karol; Remesíková, Mariana; Smísek, Michal

    2011-01-01

    In this paper, we present a novel algorithm for tracking cells in a time-lapse confocal microscopy movie of a Drosophila epithelial tissue during pupal morphogenesis. We consider a 2D + time video as a 3D static image, where frames are stacked atop each other, and using a spatio-temporal segmentation algorithm we obtain information about spatio-temporal 3D tubes representing the evolution of cells. The main idea for tracking is the use of two distance functions--the first one from the cells in the initial frame and the second one from segmented boundaries. We track the cells backwards in time. The first distance function attracts the subsequently constructed cell trajectories to the cells in the initial frame and the second one forces them to be close to the centerlines of the segmented tubular structures. This makes our tracking algorithm robust against noise and missing spatio-temporal boundaries. This approach can be generalized to 3D + time video analysis, where spatio-temporal tubes are 4D objects.

  15. Automatic topics segmentation for TV news video

    NASA Astrophysics Data System (ADS)

    Hmayda, Mounira; Ejbali, Ridha; Zaied, Mourad

    2017-03-01

    Automatic identification of television programs in a TV stream is an important task for operating archives. This article proposes a new spatio-temporal approach to identifying the programs in a TV stream, in two main steps. First, a reference catalogue of video features for visual jingles is built. We exploit the features that characterize instances of the same program type to identify the different types of programs in the television stream; the role of the video features is to represent the visual invariants of each visual jingle, using automatic descriptors appropriate to each television program. Second, programs in the television stream are identified by examining the similarity of the video signal to the visual jingles in the catalogue. The main idea of the identification process is to compare the video signal features in the television stream to the catalogue. After presenting the proposed approach, the paper reports encouraging experimental results on several streams extracted from different channels and composed of several programs.

  16. NASA Dryden's Lori Losey was named NASA's 2004 Videographer of the Year in part for her camera work during NASA's AirSAR 2004 science mission in Chile.

    NASA Image and Video Library

    2004-03-11

    Lori Losey, an employee of Arcata Associates at Dryden, was honored with NASA's 2004 Videographer of the Year award for her work in two of the three categories in the NASA video competition, public affairs and documentation. In the public affairs category, Losey received a first-place citation for her footage of an Earth Science mission that was flown aboard NASA's DC-8 Flying Laboratory in South America last year. Her footage not only depicted the work of the scientists aboard the aircraft and on the ground, but she also obtained spectacular footage of flora and fauna in the mission's target area that helped communicate the environmental research goals of the project. Losey also took first place in the documentation category for her acquisition of technical videography of the X-45A Unmanned Combat Air Vehicle flight tests. The video, shot with a hand-held camera from the rear seat of a NASA F/A-18 mission support aircraft, demonstrated her capabilities in recording precise technical visual data in a very challenging airborne environment. The award was presented to Losey during a NASA reception at the National Association of Broadcasters convention in Las Vegas April 19. A three-judge panel evaluated entries for public affairs, documentation and production videography on professional excellence, technical quality, originality, creativity within restrictions of the project, and applicability to NASA and its mission. Entries consisted of a continuous video sequence or three views of the same subject for a maximum of three minutes duration. Linda Peters, Arcata Associates' Video Systems Supervisor at NASA Dryden, noted, "Lori is a talented videographer who has demonstrated extraordinary abilities with the many opportunities she has received in her career at NASA." Losey's award was the second major NASA video award won by members of the Dryden video team in two years. Steve Parcel took first place in the documentation category last year for his camera and editing

  17. Multi-Aircraft Video - Human/Automation Target Recognition Studies: Video Display Size in Unaided Target Acquisition Involving Multiple Videos

    DTIC Science & Technology

    2008-04-01

    Index (NASA-TLX: Hart & Staveland, 1988), and a Post-Test Questionnaire. Demographic data/Background Questionnaire. This questionnaire was used...very confident). NASA-TLX. The NASA-TLX (Hart & Staveland, 1988) is a subjective workload assessment tool. A multidimensional weighting...completed the NASA-TLX. The test trials were randomized across participants and occurred in a counterbalanced order that took into account video display

  18. A probabilistic approach to joint cell tracking and segmentation in high-throughput microscopy videos.

    PubMed

    Arbelle, Assaf; Reyes, Jose; Chen, Jia-Yun; Lahav, Galit; Riklin Raviv, Tammy

    2018-04-22

    We present a novel computational framework for the analysis of high-throughput microscopy videos of living cells. The proposed framework is generally useful and can be applied to different datasets acquired in a variety of laboratory settings. This is accomplished by tying together two fundamental aspects of cell lineage construction, namely cell segmentation and tracking, via a Bayesian inference of dynamic models. In contrast to most existing approaches, which aim to be general, no assumption of cell shape is made. Spatial, temporal, and cross-sectional variation of the analysed data are accommodated by two key contributions. First, time series analysis is exploited to estimate the temporal cell shape uncertainty in addition to cell trajectory. Second, a fast marching (FM) algorithm is used to integrate the inferred cell properties with the observed image measurements in order to obtain the image likelihood for cell segmentation and association. The proposed approach has been tested on eight different time-lapse microscopy data sets, some of which are high-throughput, demonstrating promising results for the detection, segmentation and association of planar cells. Our results surpass the state of the art for the Fluo-C2DL-MSC data set of the Cell Tracking Challenge (Maška et al., 2014). Copyright © 2018 Elsevier B.V. All rights reserved.

  19. The NASA "Why?" Files: The Case of the Phenomenal Weather. Program 7 in 2001-2002 Video Series. [Videotape].

    ERIC Educational Resources Information Center

    National Aeronautics and Space Administration, Hampton, VA. Langley Research Center.

    The National Aeronautics and Space Administration (NASA) has produced a distance learning series of four 60-minute video programs with an accompanying Web site and companion teacher guides designed for students in grades 3-5. The story lines of each program or episode involve six inquisitive school children who meet in a treehouse. They seek the…

  20. VIDEO MODELING BY EXPERTS WITH VIDEO FEEDBACK TO ENHANCE GYMNASTICS SKILLS

    PubMed Central

    Boyer, Eva; Miltenberger, Raymond G; Batsche, Catherine; Fogel, Victoria

    2009-01-01

    The effects of combining video modeling by experts with video feedback were analyzed with 4 female competitive gymnasts (7 to 10 years old) in a multiple baseline design across behaviors. During the intervention, after the gymnast performed a specific gymnastics skill, she viewed a video segment showing an expert gymnast performing the same skill and then viewed a video replay of her own performance of the skill. The results showed that all gymnasts demonstrated improved performance across three gymnastics skills following exposure to the intervention. PMID:20514194

  2. Vice President Meets with NASA Leadership

    NASA Image and Video Library

    2018-04-23

    NASA Administrator Jim Bridenstine speaks with NASA leadership by video conference, Monday, April 23, 2018, at NASA Headquarters in Washington. Bridenstine was just sworn in by Vice President Mike Pence as NASA's 13th Administrator. Photo Credit: (NASA/Aubrey Gemignani)

  3. Burbank uses video camera during installation and routing of HRCS Video Cables

    NASA Image and Video Library

    2012-02-01

    ISS030-E-060104 (1 Feb. 2012) --- NASA astronaut Dan Burbank, Expedition 30 commander, uses a video camera in the Destiny laboratory of the International Space Station during installation and routing of video cable for the High Rate Communication System (HRCS). HRCS will allow for two additional space-to-ground audio channels and two additional downlink video channels.

  4. Resolving occlusion and segmentation errors in multiple video object tracking

    NASA Astrophysics Data System (ADS)

    Cheng, Hsu-Yung; Hwang, Jenq-Neng

    2009-02-01

    In this work, we propose a method that integrates the Kalman filter and adaptive particle sampling for multiple video object tracking. The proposed framework is able to detect occlusion and segmentation error cases and perform adaptive particle sampling for accurate measurement selection. Compared with traditional particle-filter-based tracking methods, the proposed method generates particles only when necessary. With the concept of adaptive particle sampling, we can avoid the degeneracy problem because the sampling position and range are dynamically determined by parameters that are updated by Kalman filters, and no time is spent processing particles with very small weights. The adaptive appearance model for an occluded object uses the Kalman filter predictions to determine the region that should be updated, avoiding the problem of updating the appearance with inadequate information under occlusion. The experimental results have shown that a small number of particles is sufficient to achieve high positioning and scaling accuracy, and that the adaptive appearance model substantially improves the positioning and scaling accuracy of the tracking results.
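    The Kalman prediction/update cycle this framework leans on can be sketched in miniature. The scalar filter below is illustrative only (hypothetical noise parameters; the paper's actual state vector tracks object position and scale and couples the filter to particle sampling):

```python
def kalman_step(x, p, z, q=1e-3, r=0.1):
    """One predict/update cycle of a scalar Kalman filter.
    x: state estimate, p: estimate variance,
    z: new measurement, q: process noise, r: measurement noise."""
    # Predict under a constant-state motion model
    x_pred, p_pred = x, p + q
    # Update: blend prediction and measurement via the Kalman gain
    k = p_pred / (p_pred + r)
    x_new = x_pred + k * (z - x_pred)
    p_new = (1 - k) * p_pred
    return x_new, p_new

# Track a stationary object repeatedly measured at 5.0: the estimate
# converges toward 5 and the variance shrinks.
x, p = 0.0, 1.0
for _ in range(50):
    x, p = kalman_step(x, p, 5.0)
```

    In the paper's setting it is the shrinking variance that determines where and how widely to sample particles, which is why particles are generated only when the filter is uncertain.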

  5. Web Audio/Video Streaming Tool

    NASA Technical Reports Server (NTRS)

    Guruvadoo, Eranna K.

    2003-01-01

    In order to promote a NASA-wide educational outreach program to educate and inform the public about space exploration, NASA at Kennedy Space Center is seeking efficient ways to add more content to the web by streaming audio/video files. This project proposes a high-level overview of a framework for the creation, management, and scheduling of audio/video assets over the web. To support short-term goals, the prototype of a web-based tool is designed and demonstrated to automate the process of streaming audio/video files. The tool provides web-enabled user interfaces to manage video assets, create publishable schedules of video assets for streaming, and schedule the streaming events. These operations are performed on user-defined and system-derived metadata of audio/video assets stored in a relational database, while the assets reside in a separate repository. The prototype tool is designed using ColdFusion 5.0.

  6. NASA Missions Inspire Online Video Games

    NASA Technical Reports Server (NTRS)

    2012-01-01

    Fast forward to 2035. Imagine being part of a community of astronauts living and working on the Moon. Suddenly, in the middle of just another day in space, a meteorite crashes into the surface of the Moon, threatening life as you know it. The support equipment that provides oxygen for the entire community has been compromised. What would you do? While this situation is one that most people will never encounter, NASA hopes to place students in such situations - virtually - to inspire, engage, and educate about NASA technologies, job opportunities, and the future of space exploration. Specifically, NASA's Learning Technologies program, part of the Agency's Office of Education, aims to inspire and motivate students to pursue careers in the science, technology, engineering, and math (STEM) disciplines through interactive technologies. The ultimate goal of these educational programs is to support the growth of a pool of qualified scientific and technical candidates for future careers at places like NASA. STEM education has been an area of concern in the United States; according to the results of the 2009 Program for International Student Assessment, 23 countries had higher average scores in mathematics literacy than the United States. On the science literacy scale, 18 countries had higher average scores. "This is part of a much bigger picture of trying to grow skilled graduates for places like NASA that will want that technical expertise," says Daniel Laughlin, the Learning Technologies project manager at Goddard Space Flight Center. "NASA is trying to increase the number of students going into those fields, and so are other government agencies."

  7. Video Compression

    NASA Technical Reports Server (NTRS)

    1996-01-01

    Optivision developed two PC-compatible boards and associated software under a Goddard Space Flight Center Small Business Innovation Research grant for NASA applications in areas such as telerobotics, telesciences and spaceborne experimentation. From this technology, the company used its own funds to develop commercial products, the OPTIVideo MPEG Encoder and Decoder, which are used for realtime video compression and decompression. They are used in commercial applications including interactive video databases and video transmission. The encoder converts video source material to a compressed digital form that can be stored or transmitted, and the decoder decompresses bit streams to provide high quality playback.

  8. Creating and Using Video Segments for Rural Teacher Education.

    ERIC Educational Resources Information Center

    Ludlow, Barbara L.; Duff, Michael C.

    This paper provides guidelines for using video presentations in teacher education programs in special education. The simplest use of video is to provide students with illustrations of basic concepts, demonstrations of specific skills, or examples of model programs and practices. Video can also deliver contextually rich case studies to stimulate…

  9. Video Salient Object Detection via Fully Convolutional Networks.

    PubMed

    Wang, Wenguan; Shen, Jianbing; Shao, Ling

    This paper proposes a deep learning model to efficiently detect salient regions in videos. It addresses two important issues: 1) deep video saliency model training with the absence of sufficiently large and pixel-wise annotated video data and 2) fast video saliency training and detection. The proposed deep video saliency network consists of two modules, for capturing the spatial and temporal saliency information, respectively. The dynamic saliency model, explicitly incorporating saliency estimates from the static saliency model, directly produces spatiotemporal saliency inference without time-consuming optical flow computation. We further propose a novel data augmentation technique that simulates video training data from existing annotated image data sets, which enables our network to learn diverse saliency information and prevents overfitting with the limited number of training videos. Leveraging our synthetic video data (150K video sequences) and real videos, our deep video saliency model successfully learns both spatial and temporal saliency cues, thus producing accurate spatiotemporal saliency estimates. We advance the state-of-the-art on the densely annotated video segmentation data set (MAE of .06) and the Freiburg-Berkeley Motion Segmentation data set (MAE of .07), and do so with much improved speed (2 fps with all steps).

  10. Automatic generation of pictorial transcripts of video programs

    NASA Astrophysics Data System (ADS)

    Shahraray, Behzad; Gibbon, David C.

    1995-03-01

    An automatic authoring system for the generation of pictorial transcripts of video programs which are accompanied by closed caption information is presented. A number of key frames, each of which represents the visual information in a segment of the video (i.e., a scene), are selected automatically by performing a content-based sampling of the video program. The textual information is recovered from the closed caption signal and is initially segmented based on its implied temporal relationship with the video segments. The text segmentation boundaries are then adjusted, based on lexical analysis and/or caption control information, to account for synchronization errors due to possible delays in the detection of scene boundaries or the transmission of the caption information. The closed caption text is further refined through linguistic processing for conversion to lowercase with correct capitalization. The key frames and the related text generate a compact multimedia presentation of the contents of the video program which lends itself to efficient storage and transmission. This compact representation can be viewed on a computer screen, or used to generate the input to a commercial text processing package to generate a printed version of the program.

  11. Real-time image sequence segmentation using curve evolution

    NASA Astrophysics Data System (ADS)

    Zhang, Jun; Liu, Weisong

    2001-04-01

    In this paper, we describe a novel approach to image sequence segmentation and its real-time implementation. This approach uses the 3D structure tensor to produce a more robust frame difference signal and uses curve evolution to extract whole objects. Our algorithm is implemented on a standard PC running the Windows operating system with video capture from a USB camera that is a standard Windows video capture device. Using the Windows standard video I/O functionalities, our segmentation software is highly portable and easy to maintain and upgrade. In its current implementation on a Pentium 400, the system can perform segmentation at 5 frames/sec with a frame resolution of 160 by 120.
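    As a rough illustration of the frame-difference signal such a system starts from, the sketch below uses plain absolute temporal differencing with a fixed threshold (a simplified stand-in for the paper's 3D structure tensor, which is more robust to noise; all values are hypothetical):

```python
def frame_difference_mask(prev, curr, threshold=30):
    """Binary change mask from absolute temporal differences.
    Frames are 2-D lists of grayscale intensities (0-255)."""
    return [[1 if abs(c - p) > threshold else 0
             for p, c in zip(prev_row, curr_row)]
            for prev_row, curr_row in zip(prev, curr)]

# A 4x4 static background; a bright object appears in the right half
prev = [[10] * 4 for _ in range(4)]
curr = [row[:2] + [200, 200] for row in prev]
mask = frame_difference_mask(prev, curr)
moving_pixels = sum(map(sum, mask))  # count of pixels flagged as changed
```

    Curve evolution would then grow a contour around this change mask to recover whole objects rather than isolated changed pixels.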

  12. The Biology and Space Exploration Video Series

    NASA Technical Reports Server (NTRS)

    William, Jacqueline M.; Murthy, Gita; Rapa, Steve; Hargens, Alan R.

    1995-01-01

    The Biology and Space Exploration video series illustrates NASA's commitment to increasing the public awareness and understanding of life sciences in space. The video series collection, which was initiated by Dr. Joan Vernikos at NASA headquarters and Dr. Alan Hargens at NASA Ames Research Center, will be distributed to universities and other institutions around the United States. The video series parallels the "Biology and Space Exploration" course taught by NASA Ames scientists at Stanford University, Palo Alto, California. In the past, students have shown considerable enthusiasm for this course and have gained a much better appreciation and understanding of space life sciences and exploration. However, due to the unique nature of the topics and the scarcity of available educational materials, most students in other universities around the country are unable to benefit from this educational experience. Therefore, with the assistance of Ames experts, we are producing a video series on selected aspects of life sciences in space to expose undergraduate students to the effects of gravity on living systems. Additionally, the video series collection contains space flight footage, graphics, charts, pictures, and interviews to make the materials interesting and intelligible to viewers.

  13. NASA Johnson Style: Gangnam Style Parody

    NASA Image and Video Library

    2012-12-14

    NASA Johnson Style is a volunteer outreach video project created by the students of NASA's Johnson Space Center. It was created as an educational parody of Psy's Gangnam Style. The lyrics and scenes in the video have been re-imagined in order to inform the public about the amazing work going on at NASA and the Johnson Space Center. Special thanks to astronauts Tracy Caldwell Dyson, Mike Massimino, and Clay Anderson, and to Mr. Mike Coats, Dr. Ellen Ochoa, and all supporting senior staff members.

  14. Automated Music Video Generation Using Multi-level Feature-based Segmentation

    NASA Astrophysics Data System (ADS)

    Yoon, Jong-Chul; Lee, In-Kwon; Byun, Siwoo

    The expansion of the home video market has created a requirement for video editing tools to allow ordinary people to assemble videos from short clips. However, professional skills are still necessary to create a music video, which requires a stream to be synchronized with pre-composed music. Because the music and the video are pre-generated in separate environments, even a professional producer usually requires a number of trials to obtain a satisfactory synchronization, which is something that most amateurs are unable to achieve.

  15. NASA Team Collaboration Pilot: Enabling NASA's Virtual Teams

    NASA Technical Reports Server (NTRS)

    Prahst, Steve

    2003-01-01

    Most NASA projects and work activities are accomplished by teams of people. These teams are often geographically distributed - across NASA centers and NASA external partners, both domestic and international. NASA "virtual" teams are stressed by the challenge of getting team work done - across geographic boundaries and time zones. To get distributed work done, teams rely on established methods - travel, telephones, Video Teleconferencing (NASA VITS), and email. Time is our most critical resource - and team members are hindered by the overhead of travel and the difficulties of coordinating work across their virtual teams. Modern, Internet based team collaboration tools offer the potential to dramatically improve the ability of virtual teams to get distributed work done.

  16. NASA Chief Technologist Hosts Town Hall

    NASA Image and Video Library

    2010-05-24

    Bobby Braun, NASA's Chief Technologist, is seen on a video monitor during a Town Hall meeting to discuss agency-wide technology policy and programs at NASA Headquarters on Tuesday, May 25, 2010, in Washington. Photo Credit: (NASA/Carla Cioffi)

  17. Video content parsing based on combined audio and visual information

    NASA Astrophysics Data System (ADS)

    Zhang, Tong; Kuo, C.-C. Jay

    1999-08-01

    While previous research on audiovisual data segmentation and indexing primarily focuses on the pictorial part, significant clues contained in the accompanying audio flow are often ignored. A fully functional system for video content parsing can be achieved more successfully through a proper combination of audio and visual information. By investigating the data structure of different video types, we present tools for both audio and visual content analysis and a scheme for video segmentation and annotation in this research. In the proposed system, video data are segmented into audio scenes and visual shots by detecting abrupt changes in audio and visual features, respectively. Then, the audio scene is categorized and indexed as one of the basic audio types while a visual shot is presented by keyframes and associate image features. An index table is then generated automatically for each video clip based on the integration of outputs from audio and visual analysis. It is shown that the proposed system provides satisfying video indexing results.
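    The visual half of such a scheme is typically bootstrapped from abrupt-change detection between consecutive frames. A minimal histogram-distance shot-cut detector is sketched below (hypothetical bin count and threshold; the paper combines cues like this with audio scene changes):

```python
def histogram(frame, bins=8):
    """Coarse grayscale histogram of a flat list of pixel values (0-255)."""
    h = [0] * bins
    for v in frame:
        h[min(v * bins // 256, bins - 1)] += 1
    return h

def detect_shot_cuts(frames, threshold=0.5):
    """Flag a cut wherever the normalized histogram distance between
    consecutive frames exceeds a threshold (a common baseline for
    abrupt visual change detection)."""
    cuts = []
    for i in range(1, len(frames)):
        h1, h2 = histogram(frames[i - 1]), histogram(frames[i])
        n = len(frames[i])
        dist = sum(abs(a - b) for a, b in zip(h1, h2)) / (2 * n)
        if dist > threshold:
            cuts.append(i)
    return cuts

# Three dark frames followed by three bright frames: one cut, at index 3
frames = [[20] * 100] * 3 + [[220] * 100] * 3
cuts = detect_shot_cuts(frames)
```

    In the full system each detected shot would then be indexed by keyframes and fused with the independently segmented audio scenes.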

  18. Science documentary video slides to enhance education and communication

    NASA Astrophysics Data System (ADS)

    Byrne, J. M.; Little, L. J.; Dodgson, K.

    2010-12-01

    Documentary production can convey powerful messages using a combination of authentic science and reinforcing video imagery. Conventional documentary production contains too much information for many viewers to follow; hence many powerful points may be lost. But documentary productions that are re-edited into short video sequences and made available through web-based video servers allow the teacher/viewer to access the material as video slides. Each video slide contains one critical discussion segment of the larger documentary. A teacher/viewer can review the documentary one segment at a time in a classroom, public forum, or in the comfort of home. The sequential presentation of the video slides allows the viewer to best absorb the documentary's message. The website environment provides space for additional questions and discussion to enhance the video message.

  19. Multi-modal highlight generation for sports videos using an information-theoretic excitability measure

    NASA Astrophysics Data System (ADS)

    Hasan, Taufiq; Bořil, Hynek; Sangwan, Abhijeet; Hansen, John H. L.

    2013-12-01

    The ability to detect and organize `hot spots' representing areas of excitement within video streams is a challenging research problem when techniques rely exclusively on video content. A generic method for sports video highlight selection is presented in this study which leverages both video/image structure as well as audio/speech properties. Processing begins where the video is partitioned into small segments and several multi-modal features are extracted from each segment. Excitability is computed based on the likelihood of the segmental features residing in certain regions of their joint probability density function space which are considered both exciting and rare. The proposed measure is used to rank order the partitioned segments to compress the overall video sequence and produce a contiguous set of highlights. Experiments are performed on baseball videos based on signal processing advancements for excitement assessment in the commentators' speech, audio energy, slow motion replay, scene cut density, and motion activity as features. Detailed analysis on correlation between user excitability and various speech production parameters is conducted and an effective scheme is designed to estimate the excitement level of commentator's speech from the sports videos. Subjective evaluation of excitability and ranking of video segments demonstrate a higher correlation with the proposed measure compared to well-established techniques indicating the effectiveness of the overall approach.
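    One way to make "exciting and rare" concrete is to score each segment by how unlikely its features are under a density fitted to all segments. The sketch below uses a single Gaussian over one scalar feature per segment (a deliberately simplified stand-in for the paper's joint-pdf excitability measure; the names and values are hypothetical):

```python
import math

def excitability_rank(features):
    """Rank segments so the rarest (most 'exciting') come first, scoring
    each scalar feature by its negative log-likelihood under a Gaussian
    fitted to all segments."""
    n = len(features)
    mean = sum(features) / n
    var = sum((f - mean) ** 2 for f in features) / n or 1e-12
    def nll(f):
        return 0.5 * math.log(2 * math.pi * var) + (f - mean) ** 2 / (2 * var)
    return sorted(range(n), key=lambda i: nll(features[i]), reverse=True)

# Audio-energy feature per segment: segment 4 is the outlier highlight
energy = [0.2, 0.25, 0.22, 0.21, 0.95, 0.24]
ranking = excitability_rank(energy)
```

    The paper's measure works over a joint density of several multi-modal features, so only regions that are both exciting and rare score highly; this one-feature Gaussian only conveys the ranking idea.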

  20. Detection and tracking of gas plumes in LWIR hyperspectral video sequence data

    NASA Astrophysics Data System (ADS)

    Gerhart, Torin; Sunu, Justin; Lieu, Lauren; Merkurjev, Ekaterina; Chang, Jen-Mei; Gilles, Jérôme; Bertozzi, Andrea L.

    2013-05-01

    Automated detection of chemical plumes presents a segmentation challenge. The segmentation problem for gas plumes is difficult due to the diffusive nature of the cloud. The advantage of considering hyperspectral images in the gas plume detection problem over the conventional RGB imagery is the presence of non-visual data, allowing for a richer representation of information. In this paper we present an effective method of visualizing hyperspectral video sequences containing chemical plumes and investigate the effectiveness of segmentation techniques on these post-processed videos. Our approach uses a combination of dimension reduction and histogram equalization to prepare the hyperspectral videos for segmentation. First, Principal Components Analysis (PCA) is used to reduce the dimension of the entire video sequence. This is done by projecting each pixel onto the first few Principal Components resulting in a type of spectral filter. Next, a Midway method for histogram equalization is used. These methods redistribute the intensity values in order to reduce flicker between frames. This properly prepares these high-dimensional video sequences for more traditional segmentation techniques. We compare the ability of various clustering techniques to properly segment the chemical plume. These include K-means, spectral clustering, and the Ginzburg-Landau functional.
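    The clustering comparison can be illustrated with plain k-means. The sketch below runs k-means on scalar values standing in for hyperspectral pixels already projected onto a leading principal component (the seeds, data, and k are hypothetical):

```python
def kmeans_1d(values, k=2, iters=20, seeds=None):
    """Plain k-means on scalar values; seeds are initial centroids."""
    centroids = list(seeds) if seeds else values[:k]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            # Assign each value to its nearest centroid
            j = min(range(k), key=lambda c: abs(v - centroids[c]))
            clusters[j].append(v)
        # Move each centroid to the mean of its cluster
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    labels = [min(range(k), key=lambda c: abs(v - centroids[c]))
              for v in values]
    return centroids, labels

# Background pixels project near 0.1, plume pixels near 0.9 (synthetic)
vals = [0.1, 0.12, 0.09, 0.11, 0.9, 0.88, 0.92]
centroids, labels = kmeans_1d(vals, seeds=[0.0, 1.0])
```

    In the paper the clustering runs over the PCA-reduced, histogram-equalized pixels of the full video cube, and is compared against spectral clustering and the Ginzburg-Landau functional.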

  1. The ESA/NASA Multi-Aircraft ATV-1 Re-Entry Campaign: Analysis of Airborne Intensified Video Observations from the NASA/JSC Experiment

    NASA Technical Reports Server (NTRS)

    Barker, Ed; Maley, Paul; Mulrooney, Mark; Beaulieu, Kevin

    2009-01-01

    In September 2008, a joint ESA/NASA multi-instrument airborne observing campaign was conducted over the southern Pacific Ocean. The objective was the acquisition of data to support detailed atmospheric re-entry analysis for the first flight of the European Automated Transfer Vehicle (ATV)-1. Skilled observers were deployed aboard two aircraft which were flown at 12.8 km altitude within visible range of the ATV-1 re-entry zone. The observers operated a suite of instruments with low-light-level detection sensitivity including still cameras, high speed and 30 fps video cameras, and spectrographs. The collected data has provided valuable information regarding the dynamic time evolution of the ATV-1 re-entry fragmentation. Specifically, the data has satisfied the primary mission objective of recording the explosion of ATV-1's primary fuel tank and thereby validating predictions regarding the tank's demise and the altitude of its occurrence. Furthermore, the data contains the brightness and trajectories of several hundred ATV-1 fragments. It is the analysis of these properties, as recorded by the particular instrument set sponsored by NASA/Johnson Space Center, which we present here.

  2. Learning Outcomes Afforded by Self-Assessed, Segmented Video-Print Combinations

    ERIC Educational Resources Information Center

    Koumi, Jack

    2015-01-01

    Learning affordances of video and print are examined in order to assess the learning outcomes afforded by hybrid video-print learning packages. The affordances discussed for print are: navigability, surveyability and legibility. Those discussed for video are: design for constructive reflection, provision of realistic experiences, presentational…

  3. Special-effect edit detection using VideoTrails: a comparison with existing techniques

    NASA Astrophysics Data System (ADS)

    Kobla, Vikrant; DeMenthon, Daniel; Doermann, David S.

    1998-12-01

    Video segmentation plays an integral role in many multimedia applications, such as digital libraries, content management systems, and various other video browsing, indexing, and retrieval systems. Many algorithms for the segmentation of video have appeared within the past few years. Most of these algorithms perform well on cuts but yield poor performance on gradual transitions or special-effect edits. A complete video segmentation system must also achieve good performance on special-effect edit detection. In this paper, we compare the performance of our VideoTrails-based algorithms with other special-effect edit-detection algorithms in the literature, presenting results from experiments testing the ability to detect edits in TV programs ranging from commercials to news magazine programs, including diverse special-effect edits.

  4. Goddard In The Galaxy [Music Video]

    NASA Image and Video Library

    2014-07-14

    This video highlights the many ways NASA Goddard Space Flight Center explores the universe. So crank up your speakers and let the music be your guide. "My Songs Know What You Did In The Dark (Light Em Up)" performed by Fall Out Boy, courtesy of Island Def Jam Music Group under license from Universal Music Enterprises. Download the video here: svs.gsfc.nasa.gov/goto?11378 NASA Goddard Space Flight Center enables NASA's mission through four scientific endeavors: Earth Science, Heliophysics, Solar System Exploration, and Astrophysics. Goddard plays a leading role in NASA's accomplishments by contributing compelling scientific knowledge to advance the Agency's mission.

  5. NASA's What's Up Astronomy and Mission video series celebrates the Year of the Solar System: Fall 2010 - late summer 2012

    NASA Astrophysics Data System (ADS)

    Houston Jones, J.; Alice Wessen, Manager of Solar System Education; Public Engagement

    2010-12-01

    NASA's What's Up video podcast supports the Year of the Solar System (YSS), October 2010 - August 2012. During YSS, each podcast pairs a popular night-sky viewing target (Moon, comet, planets, solar system features) with a mission event (launch, flyby, orbit insertion, landing). This product has proven popular with public, formal, and informal audiences and will complement and augment other programming material.

  6. Lessons from the Hot Seat: NASA Scientists in Live Broadcast and Documentary Television (Invited)

    NASA Astrophysics Data System (ADS)

    Thaller, M.

    2013-12-01

    NASA sends hundreds of scientists a year to media training, where they are taught to stick to their talking points, resist off-topic questions, and stand up to bullying. In over 15 years of television work representing NASA, I have yet to put any of the practices I learned in these sessions into action. Honestly, in over 99% of cases, reporters and documentarians are looking for totally different things from scientists on their programs. For most TV interviews, there are two or three minutes to get a few points across (and it is *amazing* how fast that time goes), show an animation, and smile engagingly to give the impression that NASA scientists are not arrogant jerks and might even be worth some tax money. But we are never trained to do this! In this session, I'll talk about some of my television experiences (good, bad, and totally embarrassing), show some examples of the short video segments we film, and discuss why most science organizations, including NASA, aren't training their scientists to give the media what they really want.

  7. LADEE NASA Social

    NASA Image and Video Library

    2013-09-05

    NASA Associate Administrator for the Science Mission Directorate John Grunsfeld is seen in a video monitor during a NASA Social about the Lunar Atmosphere and Dust Environment Explorer (LADEE) mission at the NASA Wallops Flight Facility, Thursday, Sept. 5, 2013 on Wallops Island, VA. Fifty of NASA's social media followers are attending a two-day event in support of the LADEE launch. Data from LADEE will provide unprecedented information about the environment around the moon and give scientists a better understanding of other planetary bodies in our solar system and beyond. LADEE is scheduled to launch at 11:27 p.m. Friday, Sept. 6, from NASA's Wallops Flight Facility. Photo Credit: (NASA/Carla Cioffi)

  8. Common and Innovative Visuals: A sparsity modeling framework for video.

    PubMed

    Abdolhosseini Moghadam, Abdolreza; Kumar, Mrityunjay; Radha, Hayder

    2014-05-02

    Efficient video representation models are critical for many video analysis and processing tasks. In this paper, we present a framework based on the concept of finding the sparsest solution to model video frames. To model the spatio-temporal information, frames from one scene are decomposed into two components: (i) a common frame, which describes the visual information common to all the frames in the scene/segment, and (ii) a set of innovative frames, which depicts the dynamic behaviour of the scene. The proposed approach exploits and builds on recent results in the field of compressed sensing to jointly estimate the common frame and the innovative frames for each video segment. We refer to the proposed modeling framework by CIV (Common and Innovative Visuals). We show how the proposed model can be utilized to find scene change boundaries and extend CIV to videos from multiple scenes. Furthermore, the proposed model is robust to noise and can be used for various video processing applications without relying on motion estimation and detection or image segmentation. Results for object tracking, video editing (object removal, inpainting) and scene change detection are presented to demonstrate the efficiency and the performance of the proposed model.
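    The common/innovative split can be approximated very crudely by taking the common frame to be the pixel-wise median across a scene and the innovative frames to be the per-frame residuals. This is a simplified stand-in for the paper's joint compressed-sensing estimate, shown only to make the decomposition concrete (frames here are flat pixel lists):

```python
def decompose(frames):
    """Split a scene into a common frame (pixel-wise median across
    frames) and per-frame innovative residuals holding the dynamics."""
    n_pix = len(frames[0])
    def median(xs):
        s = sorted(xs)
        m = len(s) // 2
        return s[m] if len(s) % 2 else (s[m - 1] + s[m]) / 2
    common = [median([f[p] for f in frames]) for p in range(n_pix)]
    innovative = [[f[p] - common[p] for p in range(n_pix)] for f in frames]
    return common, innovative

# Static background [5, 5, 5, 5]; a moving object bumps one pixel per frame
frames = [[5, 5, 5, 5], [9, 5, 5, 5], [5, 9, 5, 5], [5, 5, 9, 5], [5, 5, 5, 9]]
common, innovative = decompose(frames)
```

    Because the innovative residuals are sparse whenever the scene is mostly static, the sparsest-solution formulation in CIV can recover both components jointly; a scene change then shows up as a sudden loss of sparsity.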

  9. Automated detection of videotaped neonatal seizures based on motion segmentation methods.

    PubMed

    Karayiannis, Nicolaos B; Tao, Guozhi; Frost, James D; Wise, Merrill S; Hrachovy, Richard A; Mizrahi, Eli M

    2006-07-01

    This study was aimed at the development of a seizure detection system by training neural networks using quantitative motion information extracted by motion segmentation methods from short video recordings of infants monitored for seizures. The motion of the infants' body parts was quantified by temporal motion strength signals extracted from video recordings by motion segmentation methods based on optical flow computation. The area of each frame occupied by the infants' moving body parts was segmented by direct thresholding, by clustering of the pixel velocities, and by clustering the motion parameters obtained by fitting an affine model to the pixel velocities. The computational tools and procedures developed for automated seizure detection were tested and evaluated on 240 short video segments selected and labeled by physicians from a set of video recordings of 54 patients exhibiting myoclonic seizures (80 segments), focal clonic seizures (80 segments), and random infant movements (80 segments). The experimental study described in this paper provided the basis for selecting the most effective strategy for training neural networks to detect neonatal seizures as well as the decision scheme used for interpreting the responses of the trained neural networks. Depending on the decision scheme used for interpreting the responses of the trained neural networks, the best neural networks exhibited sensitivity above 90% or specificity above 90%. The best among the motion segmentation methods developed in this study produced quantitative features that constitute a reliable basis for detecting myoclonic and focal clonic neonatal seizures. The performance targets of this phase of the project may be achieved by combining the quantitative features described in this paper with those obtained by analyzing motion trajectory signals produced by motion tracking methods. A video system based upon automated analysis potentially offers a number of advantages. 
Infants who are at risk for
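As a rough illustration of such temporal motion-strength signals, the sketch below uses simple frame differencing in place of the optical-flow computation the study actually employs; the function name, threshold, and array sizes are illustrative, not the authors' code.

```python
import numpy as np

def motion_strength(frames, thresh=25):
    """Fraction of pixels per consecutive frame pair whose intensity change
    exceeds `thresh` -- a crude stand-in for the optical-flow-based
    motion-strength signals described in the abstract."""
    diffs = np.abs(frames[1:].astype(int) - frames[:-1].astype(int))
    return (diffs > thresh).mean(axis=(1, 2))

# Synthetic clip: two static frames, a sudden bright block, then static again.
clip = np.zeros((4, 32, 32), dtype=np.uint8)
clip[2, 8:16, 8:16] = 200
signal = motion_strength(clip)  # one value per consecutive frame pair
```

Thresholding such a signal over time is the simplest form of the "direct thresholding" segmentation strategy the study compares against clustering-based ones.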

  10. A content-based news video retrieval system: NVRS

    NASA Astrophysics Data System (ADS)

    Liu, Huayong; He, Tingting

    2009-10-01

This paper focuses on TV news programs and presents a content-based news video browsing and retrieval system, NVRS, which makes it convenient for users to quickly browse and retrieve news video by categories such as politics, finance, and entertainment. Combining audiovisual features and caption text information, the system automatically segments a complete news program into separate news stories. NVRS supports keyword-based news story retrieval and category-based news story browsing, and generates a key-frame-based video abstract for each story. Experiments show that the story segmentation method is effective and the retrieval is efficient.

  11. NASA's NPOESS Preparatory Project Science Data Segment: A Framework for Measurement-based Earth Science Data Systems

    NASA Technical Reports Server (NTRS)

    Schwaller, Mathew R.; Schweiss, Robert J.

    2007-01-01

The NPOESS Preparatory Project (NPP) Science Data Segment (SDS) provides a framework for the future of NASA's distributed Earth science data systems. The NPP SDS performs research and data product assessment while using a fully distributed architecture. The components of this architecture are organized around key environmental data disciplines: land, ocean, ozone, atmospheric sounding, and atmospheric composition. The SDS thus establishes a set of concepts and working prototypes. This paper describes the framework used by the NPP Project as it enabled Measurement-Based Earth Science Data Systems for the assessment of NPP products.

  12. Specification for wide channel bandwidth one-inch video tape

    NASA Technical Reports Server (NTRS)

    Perry, Jimmy L.

    1988-01-01

Standards and controls are established for the procurement of wide channel bandwidth one-inch video magnetic recording tapes for Very Long Baseline Interferometry (VLBI) system applications. The Magnetic Tape Certification Facility (MTCF) currently maintains three specifications for the Quality Products List (QPL) and acceptance testing of magnetic tapes. NASA-TM-79724 is used for the QPL and acceptance testing of new analog tapes; NASA-TM-80599 is used for QPL and acceptance testing of new digital tapes; and NASA-TM-100702 is used for the QPL and acceptance testing of new IBM/IBM-compatible 3480 magnetic tape cartridges. This specification will be used for the QPL and acceptance testing of new wide channel bandwidth one-inch video magnetic recording tapes. The one-inch video tapes used by the Jet Propulsion Laboratory, the Deep Space Network, and the Haystack Observatory will be covered by this specification. These NASA stations will use the video tapes for their VLBI system applications. The VLBI system is used for the tracking of quasars and the support of interplanetary exploration.

  13. Astronomical Video Suites

    NASA Astrophysics Data System (ADS)

    Francisco Salgado, Jose

    2010-01-01

    Astronomer and visual artist Jose Francisco Salgado has directed two astronomical video suites to accompany live performances of classical music works. The suites feature awe-inspiring images, historical illustrations, and visualizations produced by NASA, ESA, and the Adler Planetarium. By the end of 2009, his video suites Gustav Holst's The Planets and Astronomical Pictures at an Exhibition will have been presented more than 40 times in over 10 countries. Lately Salgado, an avid photographer, has been experimenting with high dynamic range imaging, time-lapse, infrared, and fisheye photography, as well as with stereoscopic photography and video to enhance his multimedia works.

  14. Segments on Western Rim of Endeavour Crater, Mars

    NASA Image and Video Library

    2017-04-19

    This orbital image of the western rim of Mars' Endeavour Crater covers an area about 5 miles (8 kilometers) east-west by about 9 miles (14 kilometers) north-south and indicates the names of some of the raised segments of the rim. NASA's Mars Exploration Rover Opportunity arrived at Endeavour in 2011 after exploring smaller craters to the northwest during its first six years on Mars. It initially explored the "Cape York" segment, then headed south. It reached the northern end of "Cape Tribulation" in late 2014 and the southern tip of that segment in April 2017. A key destination in the "Cape Byron" segment is "Perseverance Valley," where the rover team plans to investigate whether the valley was carved by water, wind or a debris flow initiated by water. This image is from the Context Camera on NASA's Mars Reconnaissance Orbiter. Malin Space Science Systems, San Diego, California, built and operates that camera. NASA's Jet Propulsion Laboratory, a division of Caltech in Pasadena, California, built and operates Opportunity. https://photojournal.jpl.nasa.gov/catalog/PIA21490

  15. Video-Guidance Design for the DART Rendezvous Mission

    NASA Technical Reports Server (NTRS)

    Ruth, Michael; Tracy, Chisholm

    2004-01-01

NASA's Demonstration of Autonomous Rendezvous Technology (DART) mission will validate a number of different guidance technologies, including state-differenced GPS transfers and close-approach video guidance. The video guidance for DART will employ NASA/Marshall's Advanced Video Guidance Sensor (AVGS). This paper focuses on the terminal phase of the DART mission, which includes close-approach maneuvers under AVGS guidance. The closed-loop video guidance design for DART is driven by a number of competing requirements, including the need to maximize tracking bandwidths while coping with measurement noise and the need to minimize RCS firings. A range of different strategies for attitude control and docking guidance have been considered for the DART mission, and design decisions are driven by a goal of minimizing both the design complexity and the effects of video guidance lags. The DART design employs an indirect docking approach, in which the guidance position targets are defined using relative attitude information. Flight simulation results have proven the effectiveness of the video guidance design.

  16. Optimizing Educational Video through Comparative Trials in Clinical Environments

    ERIC Educational Resources Information Center

    Aronson, Ian David; Plass, Jan L.; Bania, Theodore C.

    2012-01-01

    Although video is increasingly used in public health education, studies generally do not implement randomized trials of multiple video segments in clinical environments. Therefore, the specific configurations of educational videos that will have the greatest impact on outcome measures ranging from increased knowledge of important public health…

  17. From computer images to video presentation: Enhancing technology transfer

    NASA Technical Reports Server (NTRS)

    Beam, Sherilee F.

    1994-01-01

With NASA placing increased emphasis on transferring technology to outside industry, NASA researchers need to evaluate many aspects of their efforts in this regard. Often it may seem like too much self-promotion to many researchers. However, industry's use of video presentations in sales, advertising, public relations, and training should be considered. Today, the most typical presentation at NASA is through the use of vu-graphs (overhead transparencies), which can be effective for text or static presentations. For full-blown color and sound presentations, however, the best method is videotape. In fact, it is frequently more convenient due to its portability and the availability of viewing equipment. This talk describes techniques for creating a video presentation through the use of a combined researcher and video professional team.

  18. 'How To' Clean Room Video

    NASA Technical Reports Server (NTRS)

    McCarty, Kaley Corinne

    2013-01-01

One of the projects that I am completing this summer is a Launch Services Program intern 'How To' clean room setup informational video. The purpose of this video is to accompany a clean room kit that can be checked out by employees at the Kennedy Space Center and taken to classrooms to help educate students and intrigue them about NASA. The video will show how to set up and operate a clean room at NASA. This is a group project, so we will be acting as a team and contributing our own input and ideas. We will include various activities for children in classrooms to complete, while learning and having fun. Activities that we will explain and film include helping children understand the proper way to wear a bunny suit, a brief background on clean rooms, and the importance of maintaining the cleanliness of a spacecraft. This project will be shown to LSP management and co-workers; we will be presenting the video once it is completed.

  19. The video watermarking container: efficient real-time transaction watermarking

    NASA Astrophysics Data System (ADS)

    Wolf, Patrick; Hauer, Enrico; Steinebach, Martin

    2008-02-01

When transaction watermarking is used to secure sales in online shops by embedding transaction-specific watermarks, the major challenge is embedding efficiency: maximum speed with minimal workload. This is true for all types of media. Video transaction watermarking presents a double challenge. Video files are not only larger than, for example, music files of the same playback time; in addition, video watermarking algorithms have a higher complexity than algorithms for other types of media. Therefore, online shops that want to protect their videos by transaction watermarking are faced with the problem that their servers need to work harder and longer for every sold medium in comparison to audio sales. In the past, many algorithms responded to this challenge by reducing their complexity, but this usually results in a loss of either robustness or transparency. This paper presents a different approach. The container technology separates watermark embedding into two stages: a preparation stage and a finalization stage. In the preparation stage, the video is divided into embedding segments. For each segment, one copy marked with "0" and another marked with "1" is created. This stage is computationally expensive but only needs to be done once. In the finalization stage, the watermarked video is assembled from the embedding segments according to the watermark message. This stage is very fast and involves no complex computations. It thus allows efficient creation of individually watermarked video files.
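The two-stage container idea can be illustrated with a toy sketch. The tag strings and function names below are hypothetical; a real implementation would embed watermarks into the compressed video data rather than append labels.

```python
def prepare(video_segments):
    """Preparation stage (expensive, run once): for each embedding segment,
    pre-compute one copy carrying bit '0' and one carrying bit '1'."""
    return [{"0": seg + "|wm0", "1": seg + "|wm1"} for seg in video_segments]

def finalize(container, message_bits):
    """Finalization stage (cheap, per transaction): assemble the watermarked
    video by picking the pre-marked copy of each segment per message bit."""
    assert len(container) == len(message_bits)
    return [pair[bit] for pair, bit in zip(container, message_bits)]

container = prepare(["seg0", "seg1", "seg2"])   # done once per video
video_a = finalize(container, "010")            # one copy per transaction
video_b = finalize(container, "110")
```

The design choice is a classic space-for-time trade: the shop stores two marked copies of each segment, and every individually watermarked sale then costs only a selection-and-concatenation pass.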

  20. Multi-view video segmentation and tracking for video surveillance

    NASA Astrophysics Data System (ADS)

    Mohammadi, Gelareh; Dufaux, Frederic; Minh, Thien Ha; Ebrahimi, Touradj

    2009-05-01

Tracking moving objects is a critical step for smart video surveillance systems. Despite the complexity increase, multiple camera systems exhibit the undoubted advantages of covering wide areas and handling the occurrence of occlusions by exploiting the different viewpoints. The technical problems in multiple camera systems are several: installation, calibration, object matching, switching, data fusion, and occlusion handling. In this paper, we address the issue of tracking moving objects in an environment covered by multiple un-calibrated cameras with overlapping fields of view, typical of most surveillance setups. Our main objective is to create a framework that can be used to integrate object-tracking information from multiple video sources. Basically, the proposed technique consists of the following steps. We first perform a single-view tracking algorithm on each camera view, and then apply a consistent object labeling algorithm on all views. In the next step, we verify objects in each view separately for inconsistencies. Corresponding objects are extracted through a homography transform from one view to the other and vice versa. Having found the corresponding objects of different views, we partition each object into homogeneous regions. In the last step, we apply the homography transform to find the region map of the first view in the second view and vice versa. For each region (in the main frame and mapped frame) a set of descriptors is extracted to find the best match between two views based on region descriptor similarity. This method is able to deal with multiple objects. Track management issues such as occlusion, appearance, and disappearance of objects are resolved using information from all views. This method is capable of tracking rigid and deformable objects, and this versatility makes it suitable for different application scenarios.
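The view-to-view correspondence step rests on a standard homography mapping of image points in homogeneous coordinates. A minimal NumPy sketch, with a hypothetical translation-only homography standing in for one estimated from the overlapping views:

```python
import numpy as np

def apply_homography(H, pts):
    """Map 2-D points between camera views with a 3x3 homography:
    lift to homogeneous coordinates, multiply, then perspective-divide."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]

# Hypothetical homography: pure translation by (10, 5) between the views.
H = np.array([[1.0, 0.0, 10.0],
              [0.0, 1.0,  5.0],
              [0.0, 0.0,  1.0]])
centroids_view1 = np.array([[100.0, 50.0], [20.0, 30.0]])
centroids_view2 = apply_homography(H, centroids_view1)
```

In the paper's pipeline, object centroids or region maps mapped this way are then compared by descriptor similarity to establish cross-view matches.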

1. NASA finds Shrimp Under Antarctic Ice [Video]

    NASA Image and Video Library

    2017-12-08

    At a depth of 600 feet beneath the West Antarctic ice sheet, a small shrimp-like creature managed to brighten up an otherwise gray polar day in late November 2009. This critter is a three-inch long Lyssianasid amphipod found beneath the Ross Ice Shelf, about 12.5 miles away from open water. NASA scientists were using a borehole camera to look back up towards the ice surface when they spotted this pinkish-orange creature swimming beneath the ice. Credit: NASA

  2. Video Skimming and Characterization through the Combination of Image and Language Understanding Techniques

    NASA Technical Reports Server (NTRS)

    Smith, Michael A.; Kanade, Takeo

    1997-01-01

    Digital video is rapidly becoming important for education, entertainment, and a host of multimedia applications. With the size of the video collections growing to thousands of hours, technology is needed to effectively browse segments in a short time without losing the content of the video. We propose a method to extract the significant audio and video information and create a "skim" video which represents a very short synopsis of the original. The goal of this work is to show the utility of integrating language and image understanding techniques for video skimming by extraction of significant information, such as specific objects, audio keywords and relevant video structure. The resulting skim video is much shorter, where compaction is as high as 20:1, and yet retains the essential content of the original segment.
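The keyword-driven side of such skimming can be sketched as follows; the segment data, keyword set, and `make_skim` helper are illustrative inventions, not the authors' system, and the real method also integrates image-understanding cues alongside the audio keywords.

```python
def make_skim(segments, keywords):
    """Toy skim: keep only segments whose transcript mentions a significant
    keyword, and report the resulting compaction ratio."""
    kept = [s for s in segments if any(k in s["transcript"] for k in keywords)]
    total = sum(s["secs"] for s in segments)
    skim_len = sum(s["secs"] for s in kept)
    return kept, total / skim_len if skim_len else float("inf")

segments = [
    {"transcript": "opening music", "secs": 30},
    {"transcript": "the mars rover landed", "secs": 10},
    {"transcript": "crowd shots", "secs": 60},
    {"transcript": "rover sends first images", "secs": 10},
]
skim, compaction = make_skim(segments, keywords={"rover", "images"})
```

Here 110 seconds shrink to 20 (a 5.5:1 compaction); the paper reports compaction as high as 20:1 while retaining the essential content.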

3. NASA CONNECT: Special World Space Congress. [Videotape].

    ERIC Educational Resources Information Center

    National Aeronautics and Space Administration, Hampton, VA. Langley Research Center.

    NASA CONNECT is an annual series of free integrated mathematics, science, and technology instructional distance learning programs for students in grades 5-8. This video presents the World Space Congress 2002, the meeting of the decade for space professionals. Topics discussed range from the discovery of distant planets to medical advancements,…

  4. Segmentation of the Speaker's Face Region with Audiovisual Correlation

    NASA Astrophysics Data System (ADS)

    Liu, Yuyu; Sato, Yoichi

The ability to find the speaker's face region in a video is useful for various applications. In this work, we develop a novel technique to find this region within different time windows, which is robust against changes of view, scale, and background. The main thrust of our technique is to integrate audiovisual correlation analysis into a video segmentation framework. We analyze the audiovisual correlation locally by computing quadratic mutual information between our audiovisual features. The computation of quadratic mutual information is based on the probability density functions estimated by kernel density estimation with adaptive kernel bandwidth. The results of this audiovisual correlation analysis are incorporated into graph-cut-based video segmentation to achieve a globally optimal extraction of the speaker's face region. The setting of any heuristic threshold in this segmentation is avoided by learning the correlation distributions of speaker and background by expectation maximization. Experimental results demonstrate that our method can detect the speaker's face region accurately and robustly for different views, scales, and backgrounds.

  5. NASA Research Being Shared Through Live, Interactive Video Tours

    NASA Technical Reports Server (NTRS)

    Petersen, Ruth A.; Zona, Kathleen A.

    2001-01-01

    On June 2, 2000, the NASA Glenn Research Center Learning Technologies Project (LTP) coordinated the first live remote videoconferencing broadcast from a Glenn facility. The historic event from Glenn's Icing Research Tunnel featured wind tunnel technicians and researchers performing an icing experiment, obtaining results, and discussing the relevance to everyday flight operations and safety. After a brief overview of its history, students were able to "walk through" the tunnel, stand in the control room, and observe a live icing experiment that demonstrated how ice would grow on an airplane wing in flight through an icing cloud. The tour was interactive, with a spirited exchange of questions and explanations between the students and presenters. The virtual tour of the oldest and largest refrigerated icing research tunnel in the world was the second of a series of videoconferencing connections with the AP Physics students at Bay Village High School, Bay Village, Ohio. The first connection, called Aircraft Safety and Icing Research, introduced the Tailplane Icing Program. In an effort to improve aircraft safety by reducing the number of in-flight icing events, Glenn's Icing Branch uses its icing research aircraft to conduct flight tests. The presenter engaged the students in discussions of basic aircraft flight mechanics and the function of the horizontal tailplane, as well as the effect of ice on airfoil (wing or tail) surfaces. A brief video of actual flight footage provided a view of the pilot's actions and reactions and of the horizon during tailplane icing conditions.

  6. Smoke regions extraction based on two steps segmentation and motion detection in early fire

    NASA Astrophysics Data System (ADS)

    Jian, Wenlin; Wu, Kaizhi; Yu, Zirong; Chen, Lijuan

    2018-03-01

Aiming at the problems of video-based smoke detection in early fire, this paper proposes a method to extract suspected smoke regions by combining two-step segmentation with motion characteristics. Early smoldering smoke appears as gray or gray-white regions. In the first stage, regions of interest (ROIs) containing smoke are obtained using a two-step segmentation method. Then, suspected smoke regions are detected by combining the two-step segmentation with motion detection. Finally, morphological processing is used to extract the smoke regions. The Otsu algorithm is used as the segmentation method and the ViBe algorithm is used to detect the motion of smoke. The proposed method was tested on 6 test videos with smoke. The experimental results show the effectiveness of the proposed method compared with visual observation.
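The first-stage segmentation relies on Otsu's method, which picks the gray level maximizing the between-class variance of the resulting split. A self-contained NumPy sketch on a synthetic bimodal image (the image and variable names are illustrative; the ViBe motion-detection stage the paper pairs with this is omitted here):

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's method: choose the gray level that maximizes the
    between-class variance of the foreground/background split."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist / hist.sum()
    omega = np.cumsum(p)                   # class-0 probability up to level k
    mu = np.cumsum(p * np.arange(256))     # cumulative mean up to level k
    mu_t = mu[-1]                          # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1 - omega))
    return int(np.nanargmax(sigma_b))

# Bimodal test image: dark background with a brighter smoke-like blob.
img = np.full((64, 64), 30, dtype=np.uint8)
img[16:48, 16:48] = 180
t = otsu_threshold(img)
mask = img > t   # candidate (gray/gray-white) region
```

On real frames the resulting mask would feed the motion-detection and morphological-processing stages described above.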

  7. Flyover Video of Phoenix Work Area

    NASA Technical Reports Server (NTRS)

    2008-01-01

[figure removed for brevity, see original site]

    This video shows an overhead view of NASA's Phoenix Mars Lander and the work area of the Robotic Arm.

    The Phoenix Mission is led by the University of Arizona, Tucson, on behalf of NASA. Project management of the mission is by NASA's Jet Propulsion Laboratory, Pasadena, Calif. Spacecraft development is by Lockheed Martin Space Systems, Denver.

  8. Innovative Solution to Video Enhancement

    NASA Technical Reports Server (NTRS)

    2001-01-01

Through a licensing agreement, Intergraph Government Solutions adapted a technology originally developed at NASA's Marshall Space Flight Center for enhanced video imaging by developing its Video Analyst(TM) System. Marshall's scientists developed the Video Image Stabilization and Registration (VISAR) technology to help FBI agents analyze video footage of the deadly 1996 Olympic Summer Games bombing in Atlanta, Georgia. VISAR technology enhanced nighttime videotapes made with hand-held camcorders, revealing important details about the explosion. Intergraph's Video Analyst System is a simple, effective, and affordable tool for video enhancement and analysis. The benefits associated with the Video Analyst System include support of full-resolution digital video, frame-by-frame analysis, and the ability to store analog video in digital format. Up to 12 hours of digital video can be stored and maintained for reliable footage analysis. The system also includes state-of-the-art features such as stabilization, image enhancement, and convolution to help improve the visibility of subjects in the video without altering underlying footage. Adaptable to many uses, Intergraph's Video Analyst System meets the stringent demands of the law enforcement industry in the areas of surveillance, crime scene footage, sting operations, and dash-mounted video cameras.

  9. Evolving discriminators for querying video sequences

    NASA Astrophysics Data System (ADS)

    Iyengar, Giridharan; Lippman, Andrew B.

    1997-01-01

In this paper we present a framework for content-based query and retrieval of information from large video databases. This framework enables content-based retrieval of video sequences by characterizing the sequences using motion, texture, and colorimetry cues. This characterization is biologically inspired and results in a compact parameter space where every segment of video is represented by an 8-dimensional vector. Searching and retrieval are done accurately in real time in this parameter space. Using this characterization, we then evolve a set of discriminators using Genetic Programming. Experiments indicate that these discriminators are capable of analyzing and characterizing video. The VideoBook is able to search and retrieve video sequences with 92% accuracy in real time. Experiments thus demonstrate that the characterization is capable of extracting higher-level structure from raw pixel values.
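Retrieval in such a compact parameter space amounts to nearest-neighbor search over per-segment descriptors. A minimal sketch with synthetic 8-D vectors (the database, probe, and `query` helper are hypothetical stand-ins for the VideoBook's index, not its actual code):

```python
import numpy as np

def query(database, probe, k=3):
    """Return indices of the k database segments whose 8-D descriptors
    are closest (Euclidean distance) to the probe descriptor."""
    dists = np.linalg.norm(database - probe, axis=1)
    return np.argsort(dists)[:k]

rng = np.random.default_rng(0)
database = rng.random((100, 8))    # one 8-D vector per video segment
probe = database[42] + 0.001       # near-duplicate of segment 42
hits = query(database, probe)
```

Because every segment collapses to just eight numbers, a linear scan like this stays fast even for large collections, which is what makes the real-time claim plausible.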

  10. Space Shuttle Five-Segment Booster (Short Course)

    NASA Technical Reports Server (NTRS)

    Graves, Stanley R.; Rudolphi, Michael (Technical Monitor)

    2002-01-01

NASA is considering upgrading the Space Shuttle by adding a fifth segment (FSB) to the current four-segment solid rocket booster. Course materials cover design and engineering issues related to the Reusable Solid Rocket Motor (RSRM) raised by the addition of a fifth segment to the rocket booster. Topics covered include: four-segment vs. five-segment booster, abort modes, FSB grain design, erosive burning, enhanced propellant burn rate, FSB erosive burning model development, and hardware configuration.

  11. Benefit from NASA

    NASA Image and Video Library

    1999-06-01

Two scientists at NASA Marshall Space Flight Center, atmospheric scientist Paul Meyer (left) and solar physicist Dr. David Hathaway, have developed promising new software, called Video Image Stabilization and Registration (VISAR), that may help law enforcement agencies catch criminals by improving the quality of video recorded at crime scenes. VISAR stabilizes camera motion in the horizontal and vertical directions as well as rotation and zoom effects; produces clearer images of moving objects; smoothes jagged edges; enhances still images; and reduces video noise such as snow. VISAR could also have applications in medical and meteorological imaging. It could steady ultrasound images, which are infamous for their grainy, blurred quality, and it would be especially useful for tornado research, tracking whirling objects and helping to determine a tornado's wind speed. This image shows two scientists reviewing an enhanced video image of a license plate taken from a moving automobile.

  12. IBES: a tool for creating instructions based on event segmentation

    PubMed Central

    Mura, Katharina; Petersen, Nils; Huff, Markus; Ghose, Tandra

    2013-01-01

    Receiving informative, well-structured, and well-designed instructions supports performance and memory in assembly tasks. We describe IBES, a tool with which users can quickly and easily create multimedia, step-by-step instructions by segmenting a video of a task into segments. In a validation study we demonstrate that the step-by-step structure of the visual instructions created by the tool corresponds to the natural event boundaries, which are assessed by event segmentation and are known to play an important role in memory processes. In one part of the study, 20 participants created instructions based on videos of two different scenarios by using the proposed tool. In the other part of the study, 10 and 12 participants respectively segmented videos of the same scenarios yielding event boundaries for coarse and fine events. We found that the visual steps chosen by the participants for creating the instruction manual had corresponding events in the event segmentation. The number of instructional steps was a compromise between the number of fine and coarse events. Our interpretation of results is that the tool picks up on natural human event perception processes of segmenting an ongoing activity into events and enables the convenient transfer into meaningful multimedia instructions for assembly tasks. We discuss the practical application of IBES, for example, creating manuals for differing expertise levels, and give suggestions for research on user-oriented instructional design based on this tool. PMID:24454296

  13. IBES: a tool for creating instructions based on event segmentation.

    PubMed

    Mura, Katharina; Petersen, Nils; Huff, Markus; Ghose, Tandra

    2013-12-26

    Receiving informative, well-structured, and well-designed instructions supports performance and memory in assembly tasks. We describe IBES, a tool with which users can quickly and easily create multimedia, step-by-step instructions by segmenting a video of a task into segments. In a validation study we demonstrate that the step-by-step structure of the visual instructions created by the tool corresponds to the natural event boundaries, which are assessed by event segmentation and are known to play an important role in memory processes. In one part of the study, 20 participants created instructions based on videos of two different scenarios by using the proposed tool. In the other part of the study, 10 and 12 participants respectively segmented videos of the same scenarios yielding event boundaries for coarse and fine events. We found that the visual steps chosen by the participants for creating the instruction manual had corresponding events in the event segmentation. The number of instructional steps was a compromise between the number of fine and coarse events. Our interpretation of results is that the tool picks up on natural human event perception processes of segmenting an ongoing activity into events and enables the convenient transfer into meaningful multimedia instructions for assembly tasks. We discuss the practical application of IBES, for example, creating manuals for differing expertise levels, and give suggestions for research on user-oriented instructional design based on this tool.

  14. SLS Pathfinder Segments Car Train Departure

    NASA Image and Video Library

    2016-03-02

    An Iowa Northern locomotive, contracted by Goodloe Transportation of Chicago, travels along the NASA railroad bridge over the Indian River north of Kennedy Space Center, carrying one of two containers on a railcar for transport to the NASA Jay Jay railroad yard. The containers held two pathfinders, or test versions, of solid rocket booster segments for NASA’s Space Launch System rocket that were delivered to the Rotation, Processing and Surge Facility (RPSF). Inside the RPSF, the Ground Systems Development and Operations Program and Jacobs Engineering, on the Test and Operations Support Contract, will conduct a series of lifts, moves and stacking operations using the booster segments, which are inert, to prepare for Exploration Mission-1, deep-space missions and the journey to Mars. The pathfinder booster segments are from Orbital ATK in Utah.

  15. SLS Pathfinder Segments Car Train Departure

    NASA Image and Video Library

    2016-03-02

An Iowa Northern locomotive, contracted by Goodloe Transportation of Chicago, travels along the NASA railroad bridge over the Indian River north of Kennedy Space Center, with two containers on railcars for transport to the NASA Jay Jay railroad yard. The containers held two pathfinders, or test versions, of solid rocket booster segments for NASA’s Space Launch System rocket that were delivered to the Rotation, Processing and Surge Facility (RPSF). Inside the RPSF, the Ground Systems Development and Operations Program and Jacobs Engineering, on the Test and Operations Support Contract, will conduct a series of lifts, moves and stacking operations using the booster segments, which are inert, to prepare for Exploration Mission-1, deep-space missions and the journey to Mars. The pathfinder booster segments are from Orbital ATK in Utah.

  16. SLS Pathfinder Segments Car Train Departure

    NASA Image and Video Library

    2016-03-02

    An Iowa Northern locomotive, contracted by Goodloe Transportation of Chicago, continues along the NASA railroad bridge over the Indian River north of Kennedy Space Center, carrying one of two containers on a railcar for transport to the NASA Jay Jay railroad yard. The containers held two pathfinders, or test versions, of solid rocket booster segments for NASA’s Space Launch System rocket that were delivered to the Rotation, Processing and Surge Facility (RPSF). Inside the RPSF, the Ground Systems Development and Operations Program and Jacobs Engineering, on the Test and Operations Support Contract, will conduct a series of lifts, moves and stacking operations using the booster segments, which are inert, to prepare for Exploration Mission-1, deep-space missions and the journey to Mars. The pathfinder booster segments are from Orbital ATK in Utah.

  17. Got a Minute? Tune Your iPad to NASA's Best

    NASA Astrophysics Data System (ADS)

    Leon, N.; Fitzpatrick, A. J.; Fisher, D. K.; Netting, R. A.

    2012-12-01

Space Place Prime is a content presentation app for the iPad. It gathers some of the best and most recent web offerings from NASA. A spinoff of NASA's popular kids' website The Space Place (spaceplace.nasa.gov or science.nasa.gov/kids), Space Place Prime taps timely educational and easy-to-read articles from the website, as well as daily updates of NASA space and Earth images and the latest informative videos, including Science Casts and the monthly "What's up in the Sky." Space Place Prime targets a multigenerational audience, including anyone with an interest in NASA and science in general. Features are offered for kids, teachers, parents, space enthusiasts, and everyone in between. The app can be the user's own NASA news source. Like a newspaper or magazine app, Space Place Prime downloads new content daily via wireless connection. In addition to the Space Place website, several NASA RSS feeds are tapped to provide new content. Content is retained for the previous several days or some number of editions of each feed. All content is controlled on the server side, so we can push features about the latest news or change any content without updating the app in the Apple Store. The Space Place Prime interface is a virtual endless grid of small images with short titles, each image a link to an image, video, article, or hands-on activity for kids. The grid can be dragged in any direction with no boundaries. (Image links repeat to fill in the grid "infinitely.") For a more focused search, a list mode presents menus of images, videos, and articles (including activity articles) separately. If the user tags a page (image, video, or article) as a Favorite, the content is downloaded and maintained on the device, and remains permanently available regardless of connectivity. (Very large video files are permanently retained on the server side, however, rather than taking up the limited storage on the iPad.) Facebook, Twitter, and e-mail connections make any feature easy to

  18. NASA Bioreactor

    NASA Technical Reports Server (NTRS)

    1998-01-01

    Astronaut John Blaha replaces an exhausted media bag and filled waste bag with fresh bags to continue a bioreactor experiment aboard space station Mir in 1996. NASA-sponsored bioreactor research has been instrumental in helping scientists to better understand normal and cancerous tissue development. In cooperation with the medical community, the bioreactor design is being used to prepare better models of human colon, prostate, breast and ovarian tumors. Cartilage, bone marrow, heart muscle, skeletal muscle, pancreatic islet cells, liver and kidney are just a few of the normal tissues being cultured in rotating bioreactors by investigators. This image is from a video downlink. The work is sponsored by NASA's Office of Biological and Physical Research. The bioreactor is managed by the Biotechnology Cell Science Program at NASA's Johnson Space Center (JSC).

  19. Audio-guided audiovisual data segmentation, indexing, and retrieval

    NASA Astrophysics Data System (ADS)

    Zhang, Tong; Kuo, C.-C. Jay

    1998-12-01

    While current approaches for video segmentation and indexing are mostly focused on visual information, audio signals may actually play a primary role in video content parsing. In this paper, we present an approach for automatic segmentation, indexing, and retrieval of audiovisual data, based on audio content analysis. The accompanying audio signal of audiovisual data is first segmented and classified into basic types, i.e., speech, music, environmental sound, and silence. This coarse-level segmentation and indexing step is based upon morphological and statistical analysis of several short-term features of the audio signals. Then, environmental sounds are classified into finer classes, such as applause, explosions, bird sounds, etc. This fine-level classification and indexing step is based upon time-frequency analysis of audio signals and the use of the hidden Markov model as the classifier. On top of this archiving scheme, an audiovisual data retrieval system is proposed. Experimental results show that the proposed approach has an accuracy rate higher than 90 percent for the coarse-level classification, and higher than 85 percent for the fine-level classification. Examples of audiovisual data segmentation and retrieval are also provided.
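
    The coarse-level step can be approximated with simple short-term features. A minimal sketch, assuming illustrative thresholds (e_sil, z_speech) that are not taken from the paper:

```python
import numpy as np

def short_term_features(signal, frame_len=400, hop=200):
    """Per-frame short-term energy and zero-crossing rate."""
    frames = [signal[s:s + frame_len]
              for s in range(0, len(signal) - frame_len + 1, hop)]
    frames = np.array(frames)
    energy = np.mean(frames ** 2, axis=1)
    # fraction of adjacent sample pairs whose sign differs
    zcr = np.mean(np.abs(np.diff(np.sign(frames), axis=1)) > 0, axis=1)
    return energy, zcr

def classify_frame(energy, zcr, e_sil=1e-4, z_speech=0.1):
    """Assumed thresholds: low energy -> silence; high ZCR -> speech-like;
    everything else -> music/environmental sound."""
    if energy < e_sil:
        return "silence"
    if zcr > z_speech:
        return "speech"
    return "music/env"
```

    Real systems would use many more features (and, as in the paper, HMMs for the fine-level classes); this only illustrates how short-term statistics separate the coarse classes.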

  20. Expedition 32 Video Message Recording

    NASA Image and Video Library

    2012-07-25

    ISS032-E-009061 (25 July 2012) --- NASA astronauts Joe Acaba and Sunita Williams, both Expedition 32 flight engineers, perform video message recording in the Destiny laboratory of the International Space Station.

  1. Precise determination of anthropometric dimensions by means of image processing methods for estimating human body segment parameter values.

    PubMed

    Baca, A

    1996-04-01

    A method has been developed for the precise determination of anthropometric dimensions from the video images of four different body configurations. High precision is achieved by incorporating techniques for finding the location of object boundaries with sub-pixel accuracy, the implementation of calibration algorithms, and by taking into account the varying distances of the body segments from the recording camera. The system allows automatic segment boundary identification from the video image, if the boundaries are marked on the subject by black ribbons. In connection with the mathematical finite-mass-element segment model of Hatze, body segment parameters (volumes, masses, the three principal moments of inertia, the three local coordinates of the segmental mass centers etc.) can be computed by using the anthropometric data determined videometrically as input data. Compared to other, recently published video-based systems for the estimation of the inertial properties of body segments, the present algorithms reduce errors originating from optical distortions, inaccurate edge-detection procedures, and user-specified upper and lower segment boundaries or threshold levels for the edge-detection. The video-based estimation of human body segment parameters is especially useful in situations where ease of application and rapid availability of comparatively precise parameter values are of importance.
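
    Sub-pixel boundary location of the kind described above is commonly done by fitting a parabola through three samples around a gradient-magnitude peak. A sketch of that standard refinement (not the paper's exact algorithm):

```python
def subpixel_peak(g_left, g_center, g_right):
    """Parabolic (three-point) interpolation of a gradient-magnitude peak:
    returns the sub-pixel offset of the true peak relative to the centre
    sample, in the range (-0.5, 0.5) for a genuine local maximum."""
    denom = g_left - 2.0 * g_center + g_right
    if denom == 0:
        return 0.0
    return 0.5 * (g_left - g_right) / denom
```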

  2. Spatio-Temporal Video Segmentation with Shape Growth or Shrinkage Constraint

    NASA Technical Reports Server (NTRS)

    Tarabalka, Yuliya; Charpiat, Guillaume; Brucker, Ludovic; Menze, Bjoern H.

    2014-01-01

    We propose a new method for joint segmentation of monotonously growing or shrinking shapes in a time sequence of noisy images. The task of segmenting the image time series is expressed as an optimization problem using the spatio-temporal graph of pixels, in which we are able to impose the constraint of shape growth or of shrinkage by introducing monodirectional infinite links connecting pixels at the same spatial locations in successive image frames. The globally optimal solution is computed with a graph cut. The performance of the proposed method is validated on three applications: segmentation of melting sea ice floes and of growing burned areas from time series of 2D satellite images, and segmentation of a growing brain tumor from sequences of 3D medical scans. In the latter application, we impose an additional inter-sequence inclusion constraint by adding directed infinite links between pixels of dependent image structures.
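
    The effect of the monodirectional infinite links can be illustrated with a toy energy minimizer: any labeling in which a foreground pixel reverts to background in the next frame costs infinity, so the optimum must grow monotonically. This exhaustive sketch stands in for the graph-cut solver (the unary costs are illustrative):

```python
from itertools import product
import math

def growth_constrained_labels(unary, inf=math.inf):
    """Toy exhaustive minimizer over binary labelings of a T x P pixel
    grid. unary[t][p] = (cost_if_background, cost_if_foreground). The
    monodirectional infinite link is modelled as an infinite penalty
    whenever a foreground pixel (1) reverts to background (0) in the
    next frame. (The paper solves this energy globally with a graph cut.)"""
    T, P = len(unary), len(unary[0])
    best, best_cost = None, inf
    for flat in product((0, 1), repeat=T * P):
        lab = [list(flat[t * P:(t + 1) * P]) for t in range(T)]
        cost = 0.0
        for t in range(T):
            for p in range(P):
                cost += unary[t][p][lab[t][p]]
                if t + 1 < T and lab[t][p] == 1 and lab[t + 1][p] == 0:
                    cost = inf  # growth constraint violated
        if cost < best_cost:
            best, best_cost = lab, cost
    return best, best_cost
```

    Note how a pixel with strong foreground evidence at t=0 stays foreground at t=1 even under weak background evidence, because shrinking is infinitely penalized.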

  3. Do the Depictions of Sexual Attire and Sexual Behavior in Music Videos Differ Based on Video Network and Character Gender?

    ERIC Educational Resources Information Center

    King, Keith; Laake, Rebecca A.; Bernard, Amy

    2006-01-01

    This study examined the sexual messages depicted in music videos aired on MTV, MTV2, BET, and GAC from August 2, 2004 to August 15, 2004. One-hour segments of music videos were taped daily for two weeks. Depictions of sexual attire and sexual behavior were analyzed via a four-page coding sheet (interrater-reliability = 0.93). Results indicated…

  4. Headlines: Planet Earth: Improving Climate Literacy with Short Format News Videos

    NASA Astrophysics Data System (ADS)

    Tenenbaum, L. F.; Kulikov, A.; Jackson, R.

    2012-12-01

    One of the challenges of communicating climate science is the sense that climate change is remote and unconnected to daily life--something that's happening to someone else or in the future. To help face this challenge, NASA's Global Climate Change website http://climate.nasa.gov has launched a new video series, "Headlines: Planet Earth," which focuses on current climate news events. This rapid-response video series uses 3D video visualization technology combined with real-time satellite data and images to throw a spotlight on real-world events. The "Headlines: Planet Earth" news video products will be deployed frequently, ensuring timeliness. NASA's Global Climate Change website makes extensive use of interactive media, immersive visualizations, ground-based and remote images, narrated and time-lapse videos, time-series animations, and real-time scientific data, plus maps and user-friendly graphics that make the scientific content both accessible and engaging to the public. The site has also won two consecutive Webby Awards for Best Science Website. Connecting climate science to current real-world events will contribute to improving climate literacy by making climate science relevant to everyday life.

  5. SLS Pathfinder Segments Car Train Departure

    NASA Image and Video Library

    2016-03-02

    An Iowa Northern locomotive, contracted by Goodloe Transportation of Chicago, approaches the raised span of the NASA railroad bridge to continue over the Indian River north of Kennedy Space Center with two containers on railcars for storage at the NASA Jay Jay railroad yard. The containers held two pathfinders, or test versions, of solid rocket booster segments for NASA’s Space Launch System rocket that were delivered to the Rotation, Processing and Surge Facility (RPSF). Inside the RPSF, the Ground Systems Development and Operations Program and Jacobs Engineering, on the Test and Operations Support Contract, will conduct a series of lifts, moves and stacking operations using the booster segments, which are inert, to prepare for Exploration Mission-1, deep-space missions and the journey to Mars. The pathfinder booster segments are from Orbital ATK in Utah.

  6. SLS Pathfinder Segments Car Train Departure

    NASA Image and Video Library

    2016-03-02

    An Iowa Northern locomotive, contracted by Goodloe Transportation of Chicago, travels along the NASA railroad bridge over the Indian River north of Kennedy Space Center, carrying one of two containers on a railcar for transport to the NASA Jay Jay railroad yard near the center. The containers held two pathfinders, or test versions, of solid rocket booster segments for NASA’s Space Launch System rocket that were delivered to the Rotation, Processing and Surge Facility (RPSF). Inside the RPSF, the Ground Systems Development and Operations Program and Jacobs Engineering, on the Test and Operations Support Contract, will conduct a series of lifts, moves and stacking operations using the booster segments, which are inert, to prepare for Exploration Mission-1, deep-space missions and the journey to Mars. The pathfinder booster segments are from Orbital ATK in Utah.

  7. Hierarchical video summarization based on context clustering

    NASA Astrophysics Data System (ADS)

    Tseng, Belle L.; Smith, John R.

    2003-11-01

    A personalized video summary is dynamically generated in our video personalization and summarization system based on user preference and usage environment. The three-tier personalization system adopts the server-middleware-client architecture in order to maintain, select, adapt, and deliver rich media content to the user. The server stores the content sources along with their corresponding MPEG-7 metadata descriptions. In this paper, the metadata includes visual semantic annotations and automatic speech transcriptions. Our personalization and summarization engine in the middleware selects the optimal set of desired video segments by matching shot annotations and sentence transcripts with user preferences. Besides finding the desired contents, the objective is to present a coherent summary. There are diverse methods for creating summaries, and we focus on the challenges of generating a hierarchical video summary based on context information. In our summarization algorithm, three inputs are used to generate the hierarchical video summary output. These inputs are (1) MPEG-7 metadata descriptions of the contents in the server, (2) user preference and usage environment declarations from the user client, and (3) context information including MPEG-7 controlled term list and classification scheme. In a video sequence, descriptions and relevance scores are assigned to each shot. Based on these shot descriptions, context clustering is performed to collect consecutively similar shots to correspond to hierarchical scene representations. The context clustering is based on the available context information, and may be derived from domain knowledge or rules engines. Finally, the selection of structured video segments to generate the hierarchical summary efficiently balances between scene representation and shot selection.
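
    The context-clustering step described above, in which consecutively similar shots are merged into scene-level units, can be sketched as follows. Here the similarity values and threshold are illustrative stand-ins for the MPEG-7-derived context information in the paper:

```python
def cluster_consecutive_shots(sims, threshold=0.5):
    """Group consecutive shots into scenes: shots i and i+1 fall into the
    same scene when their context similarity sims[i] reaches the
    threshold. Returns a list of scenes, each a list of shot indices."""
    scenes, current = [], [0]
    for i, s in enumerate(sims):
        if s >= threshold:
            current.append(i + 1)
        else:
            scenes.append(current)
            current = [i + 1]
    scenes.append(current)
    return scenes
```

    A hierarchical summary would then pick representative shots per scene, balancing scene coverage against the per-shot relevance scores.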

  8. KENNEDY SPACE CENTER, FLA. - The red NASA engine hauls its cargo toward Titusville, Fla. The containers enclose segments of a solid rocket booster being returned to Utah for testing. The segments were part of the STS-114 stack. It is the first time actual flight segments that had been stacked for flight in the VAB are being returned for testing. They will undergo firing, which will enable inspectors to check the viability of the solid and verify the life expectancy for stacked segments.

    NASA Image and Video Library

    2004-01-30

    KENNEDY SPACE CENTER, FLA. - The red NASA engine hauls its cargo toward Titusville, Fla. The containers enclose segments of a solid rocket booster being returned to Utah for testing. The segments were part of the STS-114 stack. It is the first time actual flight segments that had been stacked for flight in the VAB are being returned for testing. They will undergo firing, which will enable inspectors to check the viability of the solid and verify the life expectancy for stacked segments.

  9. Agency Video, Audio and Imagery Library

    NASA Technical Reports Server (NTRS)

    Grubbs, Rodney

    2015-01-01

    The purpose of this presentation was to inform the ISS International Partners of the new NASA Agency Video, Audio and Imagery Library (AVAIL) website. AVAIL is a new resource for the public to search for and download NASA-related imagery, and is not intended to replace the current process by which the International Partners receive their Space Station imagery products.

  10. Using NASA's Reference Architecture: Comparing Polar and Geostationary Data Processing Systems

    NASA Technical Reports Server (NTRS)

    Ullman, Richard; Burnett, Michael

    2013-01-01

    The JPSS and GOES-R programs are housed at NASA GSFC and jointly implemented by NASA and NOAA to NOAA requirements. NASA's role in the JPSS Ground System is to develop and deploy the system according to NOAA requirements. NASA's role in the GOES-R ground segment is to provide Systems Engineering expertise and oversight for NOAA's development and deployment of the system. NASA's Earth Science Data Systems Reference Architecture is a document developed by NASA's Earth Science Data Systems Standards Process Group that describes a NASA Earth Observing Mission Ground system as a generic abstraction. The authors work within the respective ground segment projects and are also separately contributors to the Reference Architecture document. Opinions expressed are the author's only and are not NOAA, NASA or the Ground Projects' official positions.

  11. Content-based management service for medical videos.

    PubMed

    Mendi, Engin; Bayrak, Coskun; Cecen, Songul; Ermisoglu, Emre

    2013-01-01

    Development of health information technology has had a dramatic impact to improve the efficiency and quality of medical care. Developing interoperable health information systems for healthcare providers has the potential to improve the quality and equitability of patient-centered healthcare. In this article, we describe an automated content-based medical video analysis and management service that provides convenience and ease in accessing the relevant medical video content without sequential scanning. The system facilitates effective temporal video segmentation and content-based visual information retrieval that enable a more reliable understanding of medical video content. The system is implemented as a Web- and mobile-based service and has the potential to offer a knowledge-sharing platform for the purpose of efficient medical video content access.

  12. Benefit from NASA

    NASA Image and Video Library

    1999-06-01

    Two scientists at NASA's Marshall Space Flight Center, atmospheric scientist Paul Meyer and solar physicist Dr. David Hathaway, developed promising new software, called Video Image Stabilization and Registration (VISAR). VISAR may help law enforcement agencies catch criminals by improving the quality of video recorded at crime scenes. In this photograph, the single frame at left, taken at night, was brightened in order to enhance details and reduce noise or snow. To further overcome the video defects in one frame, law enforcement officials can use VISAR software to add information from multiple frames to reveal a person. Images from less than a second of videotape were added together to create the clarified image at right. VISAR stabilizes camera motion in the horizontal and vertical directions, as well as rotation and zoom effects, producing clearer images of moving objects; it also smoothes jagged edges, enhances still images, and reduces video noise or snow. VISAR could also have applications in medical and meteorological imaging. It could steady images of ultrasounds, which are infamous for their grainy, blurred quality. The software can be used for defense applications by improving reconnaissance video imagery made by military vehicles, aircraft, and ships traveling in harsh, rugged environments.

  13. Segmenting Images for a Better Diagnosis

    NASA Technical Reports Server (NTRS)

    2004-01-01

    NASA's Hierarchical Segmentation (HSEG) software has been adapted by Bartron Medical Imaging, LLC, for use in segmentation feature extraction, pattern recognition, and classification of medical images. Bartron acquired licenses from NASA Goddard Space Flight Center for application of the HSEG concept to medical imaging, from the California Institute of Technology/Jet Propulsion Laboratory to incorporate pattern-matching software, and from Kennedy Space Center for data-mining and edge-detection programs. The Med-Seg[TM] unit developed by Bartron provides improved diagnoses for a wide range of medical images, including computed tomography scans, positron emission tomography scans, magnetic resonance imaging, ultrasound, digitized X-ray, digitized mammography, dental X-ray, soft tissue analysis, and moving object analysis. It also can be used in analysis of soft-tissue slides. Bartron's future plans include the application of HSEG technology to drug development. NASA is advancing its HSEG software to learn more about the Earth's magnetosphere.

  14. Automated fall detection on privacy-enhanced video.

    PubMed

    Edgcomb, Alex; Vahid, Frank

    2012-01-01

    A privacy-enhanced video obscures the appearance of a person in the video. We consider four privacy enhancements: blurring of the person, silhouetting of the person, covering the person with a graphical box, and covering the person with a graphical oval. We demonstrate that an automated video-based fall detection algorithm can be as accurate on privacy-enhanced video as on raw video. The algorithm operated on video from a stationary in-home camera, using a foreground-background segmentation algorithm to extract a minimum bounding rectangle (MBR) around the motion in the video, and using time series shapelet analysis on the height and width of the rectangle to detect falls. We report accuracy applying fall detection on 23 scenarios depicted as raw video and privacy-enhanced videos involving a sole actor portraying normal activities and various falls. We found that fall detection on privacy-enhanced video, except for the common approach of blurring of the person, was competitive with raw video, and in particular that the graphical oval privacy enhancement yielded the same accuracy as raw video, namely 0.91 sensitivity and 0.92 specificity.
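
    The minimum-bounding-rectangle step lends itself to a short sketch. The fall heuristic below is a crude stand-in for the paper's shapelet analysis, using only the aspect ratio of the MBR at the start and end of a clip:

```python
import numpy as np

def mbr(mask):
    """Minimum bounding rectangle (top, left, height, width) of a binary
    foreground mask, or None if the mask is empty."""
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None
    return (int(ys.min()), int(xs.min()),
            int(ys.max() - ys.min() + 1), int(xs.max() - xs.min() + 1))

def looks_like_fall(heights, widths, ratio=1.0):
    """Crude stand-in for the shapelet analysis: flag a fall when the MBR
    goes from taller-than-wide to wider-than-tall over the clip."""
    r = np.asarray(heights, dtype=float) / np.asarray(widths, dtype=float)
    return bool(r[0] > ratio and r[-1] < ratio)
```

    Because both functions depend only on the foreground mask, not on appearance, they work unchanged on blurred, silhouetted, or box-covered video, which is the point of the paper's result.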

  15. Towards computer-assisted TTTS: Laser ablation detection for workflow segmentation from fetoscopic video.

    PubMed

    Vasconcelos, Francisco; Brandão, Patrick; Vercauteren, Tom; Ourselin, Sebastien; Deprest, Jan; Peebles, Donald; Stoyanov, Danail

    2018-06-27

    Intrauterine foetal surgery is the treatment option for several congenital malformations. For twin-to-twin transfusion syndrome (TTTS), interventions involve the use of laser fibre to ablate vessels in a shared placenta. The procedure presents a number of challenges for the surgeon, and computer-assisted technologies can potentially be a significant support. Vision-based sensing is the primary source of information from the intrauterine environment, and hence, vision approaches present an appealing approach for extracting higher level information from the surgical site. In this paper, we propose a framework to detect one of the key steps during TTTS interventions-ablation. We adopt a deep learning approach, specifically the ResNet101 architecture, for classification of different surgical actions performed during laser ablation therapy. We perform a two-fold cross-validation using almost 50 k frames from five different TTTS ablation procedures. Our results show that deep learning methods are a promising approach for ablation detection. To our knowledge, this is the first attempt at automating photocoagulation detection using video and our technique can be an important component of a larger assistive framework for enhanced foetal therapies. The current implementation does not include semantic segmentation or localisation of the ablation site, and this would be a natural extension in future work.

  16. The CYGNSS flight segment; A major NASA science mission enabled by micro-satellite technology

    NASA Astrophysics Data System (ADS)

    Rose, R.; Ruf, C.; Rose, D.; Brummitt, M.; Ridley, A.

    While hurricane track forecasts have improved in accuracy by ~50% since 1990, there has been essentially no improvement in the accuracy of intensity prediction. This lack of progress is thought to be caused by inadequate observations and modeling of the inner core due to two causes: 1) much of the inner core ocean surface is obscured from conventional remote sensing instruments by intense precipitation in the inner rain bands and 2) the rapidly evolving stages of the tropical cyclone (TC) life cycle are poorly sampled in time by conventional polar-orbiting, wide-swath surface wind imagers. NASA's most recently awarded Earth science mission, the NASA EV-2 Cyclone Global Navigation Satellite System (CYGNSS) has been designed to address these deficiencies by combining the all-weather performance of GNSS bistatic ocean surface scatterometry with the sampling properties of a satellite constellation. This paper provides an overview of the CYGNSS flight segment requirements, implementation, and concept of operations for the CYGNSS constellation, consisting of 8 microsatellite-class spacecraft (<100 kg) each hosting a GNSS receiver, operating in a 500 km orbit, inclined at 35° to provide 70% coverage of the historical TC track. The CYGNSS mission is enabled by modern electronic technology; it is an example of how nanosatellite technology can be applied to replace traditional "old school" solutions at significantly reduced cost while providing an increase in performance. This paper provides an overview of how we combined a reliable space-flight proven avionics design with selected microsatellite components to create an innovative, low-cost solution for a mainstream science investigation.

  17. NASA HUNCH Hardware

    NASA Technical Reports Server (NTRS)

    Hall, Nancy R.; Wagner, James; Phelps, Amanda

    2014-01-01

    What is NASA HUNCH? High School Students United with NASA to Create Hardware-HUNCH is an instructional partnership between NASA and educational institutions. This partnership benefits both NASA and students. NASA receives cost-effective hardware and soft goods, while students receive real-world hands-on experiences. The 2014-2015 school year was the 12th year of the HUNCH Program. NASA Glenn Research Center joined the program that already included NASA's Johnson Space Center, Marshall Space Flight Center, Langley Research Center and Goddard Space Flight Center. The program included 76 schools in 24 states and NASA Glenn worked with the following five schools in the HUNCH Build to Print Hardware Program: Medina Career Center, Medina, OH; Cattaraugus Allegheny-BOCES, Olean, NY; Orleans Niagara-BOCES, Medina, NY; Apollo Career Center, Lima, OH; Romeo Engineering and Tech Center, Washington, MI. The schools built various parts of an International Space Station (ISS) middeck stowage locker and learned about manufacturing processes and how best to build these components to NASA specifications. For the 2015-2016 school year the schools will be part of a larger group of schools building flight hardware consisting of 20 ISS middeck stowage lockers for the ISS Program. The HUNCH Program consists of: Build to Print Hardware; Build to Print Soft Goods; Design and Prototyping; Culinary Challenge; Implementation: Web Page and Video Production.

  18. FY18 State Of NASA Budget

    NASA Image and Video Library

    2017-05-22

    On May 23, the Acting Administrator Robert Lightfoot gave a State of NASA address at Headquarters to rollout the Fiscal Year 2018 Budget proposal. This video highlights the future-facing vision of those plans.

  19. Segmented cold cathode display panel

    NASA Technical Reports Server (NTRS)

    Payne, Leslie (Inventor)

    1998-01-01

    The present invention is a video display device that utilizes the novel concept of generating an electronically controlled pattern of electron emission at the output of a segmented photocathode. This pattern of electron emission is amplified via a channel plate. The result is that an intense electronic image can be accelerated toward a phosphor thus creating a bright video image. This novel arrangement allows for one to provide a full color flat video display capable of implementation in large formats. In an alternate arrangement, the present invention is provided without the channel plate and a porous conducting surface is provided instead. In this alternate arrangement, the brightness of the image is reduced but the cost of the overall device is significantly lowered because fabrication complexity is significantly decreased.

  20. Educator Resource Center for NASA Langley Research Center

    NASA Technical Reports Server (NTRS)

    Bridgford, Todd; Koltun, Nick R.

    2003-01-01

    The goal of the ERCN is to provide expertise and facilities to help educators access and utilize science, mathematics, and technology instructional products aligned with national standards and appropriate state frameworks and based on NASA's unique mission and results. NASA Langley's Office of Education has established the service area for this ERC to be the five states of Kentucky, North Carolina, South Carolina, Virginia and West Virginia. This educational grant activity is associated with NASA's mission to inspire the next generation of explorers...as only NASA can. The communication of NASA's knowledge is the prime role of this ERC. Functioning as a dissemination system of instructional materials and support for pre-college education programs we have met the NASA Education ERCN Program's goal. The following ERCN objectives have been accomplished: Demonstrate and facilitate the use of NASA educational products and technologies in print, video and web based formats. Examples include but are not limited to NASA-approved Educator's Guides with Activities based on national standards for appropriate subjects and grade levels. We have demonstrated the use of videotape series in analogue format and of new digital video instructional systems, along with the use of NASA TV. The promotion of web-based resources, such as the new NASA Portal website and the ability to download print resources, is continuously facilitated in workshops. This objective has been completed through educator contacts that include on-site visits, phone requests, postal mail requests, e-mail requests, fax requests and workshops offered.

  1. Extraction of Blebs in Human Embryonic Stem Cell Videos.

    PubMed

    Guan, Benjamin X; Bhanu, Bir; Talbot, Prue; Weng, Nikki Jo-Hao

    2016-01-01

    Blebbing is an important biological indicator in determining the health of human embryonic stem cells (hESC). In particular, the areas of a bleb sequence in a video are often used to distinguish two cell blebbing behaviors in hESC: dynamic and apoptotic blebbings. This paper analyzes various segmentation methods for bleb extraction in hESC videos and introduces a bio-inspired score function to improve the performance in bleb extraction. Full bleb formation consists of bleb expansion and retraction. Blebs change their size and image properties dynamically in both processes and between frames. Therefore, adaptive parameters are needed for each segmentation method. A score function derived from the change of bleb area and orientation between consecutive frames is proposed which provides adaptive parameters for bleb extraction in videos. In comparison to manual analysis, the proposed method provides an automated fast and accurate approach for bleb sequence extraction.
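
    A score function of the kind described, driven by frame-to-frame changes in bleb area and orientation, might look like the following. The weights and normalizations here are illustrative assumptions, not the paper's actual formulation:

```python
import math

def bleb_score(area_prev, area_cur, theta_prev, theta_cur,
               w_area=0.5, w_orient=0.5):
    """Hypothetical score from the change of bleb area and orientation
    between consecutive frames (lower = more consistent bleb track).
    Orientation difference is wrapped to [0, pi/2] and normalized."""
    d_area = abs(area_cur - area_prev) / max(area_prev, 1e-9)
    d_theta = abs(theta_cur - theta_prev) % math.pi
    d_theta = min(d_theta, math.pi - d_theta) / (math.pi / 2)
    return w_area * d_area + w_orient * d_theta
```

    Such a score can then drive the adaptive parameter choice: candidate segmentations whose blebs change smoothly between frames score low and are preferred.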

  2. KENNEDY SPACE CENTER, FLA. - The red NASA engine moves forward past the Vehicle Assembly Building with its cargo of containers enclosing segments of a solid rocket booster being returned to Utah for testing. The segments were part of the STS-114 stack. It is the first time actual flight segments that had been stacked for flight in the VAB are being returned for testing. They will undergo firing, which will enable inspectors to check the viability of the solid and verify the life expectancy for stacked segments.

    NASA Image and Video Library

    2004-01-30

    KENNEDY SPACE CENTER, FLA. - The red NASA engine moves forward past the Vehicle Assembly Building with its cargo of containers enclosing segments of a solid rocket booster being returned to Utah for testing. The segments were part of the STS-114 stack. It is the first time actual flight segments that had been stacked for flight in the VAB are being returned for testing. They will undergo firing, which will enable inspectors to check the viability of the solid and verify the life expectancy for stacked segments.

  3. Segmentation of stereo terrain images

    NASA Astrophysics Data System (ADS)

    George, Debra A.; Privitera, Claudio M.; Blackmon, Theodore T.; Zbinden, Eric; Stark, Lawrence W.

    2000-06-01

    We have studied four approaches to segmentation of images: three automatic ones using image processing algorithms and a fourth approach, human manual segmentation. We were motivated toward helping with an important NASA Mars rover mission task -- replacing laborious manual path planning with automatic navigation of the rover on the Mars terrain. The goal of the automatic segmentations was to identify an obstacle map on the Mars terrain to enable automatic path planning for the rover. The automatic segmentation was first explored with two different segmentation methods: one based on pixel luminance, and the other based on pixel altitude generated through stereo image processing. The third automatic segmentation was achieved by combining these two types of image segmentation. Human manual segmentation of Martian terrain images was used for evaluating the effectiveness of the combined automatic segmentation as well as for determining how different humans segment the same images. Comparisons between two different segmentations, manual or automatic, were measured using a similarity metric, S_AB. Based on this metric, the combined automatic segmentation did fairly well in agreeing with the manual segmentation. This was a demonstration of a positive step towards automatically creating the accurate obstacle maps necessary for automatic path planning and rover navigation.
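
    A simple way to compare two segmentations of the same terrain image is region overlap. The intersection-over-union measure below is a stand-in for the paper's S_AB metric, whose exact definition is not reproduced here:

```python
import numpy as np

def similarity(seg_a, seg_b):
    """Region-overlap similarity (intersection over union) between two
    binary obstacle maps; 1.0 = identical, 0.0 = disjoint."""
    a = np.asarray(seg_a, dtype=bool)
    b = np.asarray(seg_b, dtype=bool)
    union = np.logical_or(a, b).sum()
    if union == 0:
        return 1.0          # two empty maps agree perfectly
    return float(np.logical_and(a, b).sum() / union)
```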

  4. The Status of the NASA All Sky Fireball Network

    NASA Technical Reports Server (NTRS)

    Cooke, William J.; Moser, Danielle E.

    2011-01-01

    Established by the NASA Meteoroid Environment Office, the NASA All Sky Fireball Network consists of 6 meteor video cameras in the southern United States, with plans to expand to 15 cameras by 2013. As of mid-2011, the network had detected 1796 multi-station meteors, including meteors from 43 different meteor showers. The current status of the NASA All Sky Fireball Network is described, alongside preliminary results.

  5. Mixed Reality Technology at NASA JPL

    NASA Image and Video Library

    2016-05-16

    NASA's JPL is a center of innovation in virtual and augmented reality, producing groundbreaking applications of these technologies to support a variety of missions. This video is a collection of unedited scenes released to the media.

  6. Race and Emotion in Computer-Based HIV Prevention Videos for Emergency Department Patients

    ERIC Educational Resources Information Center

    Aronson, Ian David; Bania, Theodore C.

    2011-01-01

    Computer-based video provides a valuable tool for HIV prevention in hospital emergency departments. However, the type of video content and protocol that will be most effective remain underexplored and the subject of debate. This study employs a new and highly replicable methodology that enables comparisons of multiple video segments, each based on…

  7. Adventure Racing and Organizational Behavior: Using Eco Challenge Video Clips to Stimulate Learning

    ERIC Educational Resources Information Center

    Kenworthy-U'Ren, Amy; Erickson, Anthony

    2009-01-01

    In this article, the Eco Challenge race video is presented as a teaching tool for facilitating theory-based discussion and application in organizational behavior (OB) courses. Before discussing the intricacies of the video series itself, the authors present a pedagogically based rationale for using reality TV-based video segments in a classroom…

  8. Unsupervised motion-based object segmentation refined by color

    NASA Astrophysics Data System (ADS)

    Piek, Matthijs C.; Braspenning, Ralph; Varekamp, Chris

    2003-06-01

    For various applications, such as data compression, structure from motion, medical imaging and video enhancement, there is a need for an algorithm that divides video sequences into independently moving objects. Because our focus is on video enhancement and structure from motion for consumer electronics, we strive for a low complexity solution. For still images, several colour-based approaches exist, but these fall short in both speed and segmentation quality. For instance, colour-based watershed algorithms produce a so-called oversegmentation, with many segments covering each physical object. Other colour segmentation approaches limit the number of segments to reduce this oversegmentation problem. However, this often results in inaccurate edges or even missed objects. Most likely, colour is an inherently insufficient cue for real world object segmentation, because real world objects can display complex combinations of colours. For video sequences, however, an additional cue is available, namely the motion of objects. When different objects in a scene have different motion, the motion cue alone is often enough to reliably distinguish objects from one another and the background. However, because of the limited resolution of efficient motion estimators, like the 3DRS block matcher, the resulting segmentation is not at pixel resolution, but at block resolution. Existing pixel-resolution motion estimators are more sensitive to noise, suffer more from aperture problems, correspond less well to the true motion of objects than block-based approaches, or are too computationally expensive. From its tendency to oversegmentation it is apparent that colour segmentation is particularly effective near edges of homogeneously coloured areas. On the other hand, block-based true motion estimation is particularly effective in heterogeneous areas, because heterogeneous areas improve the chance a block is unique and thus decrease the

  9. NASA Imaging for Safety, Science, and History

    NASA Technical Reports Server (NTRS)

    Grubbs, Rodney; Lindblom, Walt; Bowerman, Deborah S. (Technical Monitor)

    2002-01-01

    Since its creation in 1958, NASA has been making and documenting history, both on Earth and in space. To complete its missions, NASA has long relied on still and motion imagery to document spacecraft performance, see what can't be seen by the naked eye, and enhance the safety of astronauts and expensive equipment. Today, NASA is working to take advantage of new digital imagery technologies and techniques to make its missions safer and more efficient. An HDTV camera was on board the International Space Station from early August to mid-December 2001. HDTV cameras previously flown have had degradation in the CCD during the short duration of a Space Shuttle flight. Initial performance assessment of the CCD during the first-ever long duration space flight of an HDTV camera, compared with earlier flights, is discussed. Recent Space Shuttle launches have been documented with HDTV cameras and new long lenses giving clarity never before seen with video. Examples and comparisons will be illustrated between HD, high-speed film, and analog video of these launches and other NASA tests. Other uses of HDTV where image quality is of crucial importance will also be featured.

  10. Visualization of fluid dynamics at NASA Ames

    NASA Technical Reports Server (NTRS)

    Watson, Val

    1989-01-01

    The hardware and software currently used for visualization of fluid dynamics at NASA Ames is described. The software includes programs to create scenes (for example particle traces representing the flow over an aircraft), programs to interactively view the scenes, and programs to control the creation of video tapes and 16mm movies. The hardware includes high performance graphics workstations, a high speed network, digital video equipment, and film recorders.

  11. Science is Cool with NASA's "Space School Musical"

    NASA Astrophysics Data System (ADS)

    Asplund, S.

    2011-10-01

    To help young learners understand basic solar system science concepts and retain what they learn, NASA's Discovery and New Frontiers Programs have collaborated with KidTribe to create "Space School Musical," an innovative approach for teaching about the solar system. It's an educational "hip-hopera" that raps, rhymes, moves and grooves its way into the minds and memories of students and educators alike. The solar system comes alive, combining science content with music, fun lyrics, and choreography. Kids can watch the videos, learn the songs, do the cross-curricular activities, and perform the show themselves. The videos, songs, lyrics, and guides are available to all with free downloads at http://discovery.nasa.gov/

  12. The Quest for Contact: NASA's Search for Extraterrestrial Intelligence

    NASA Technical Reports Server (NTRS)

    1992-01-01

    This video details the history and current efforts of NASA's Search for Extraterrestrial Intelligence program. The video explains the use of radiotelescopes to monitor electromagnetic frequencies reaching the Earth, and the analysis of this data for patterns or signals that have no natural origin. The video presents an overview of Frank Drake's 1960 'Ozma' experiment, the current META experiment, and planned efforts incorporating an international Deep Space Network of radiotelescopes that will be trained on over 800 stars.

  13. Tracking cells in Life Cell Imaging videos using topological alignments.

    PubMed

    Mosig, Axel; Jäger, Stefan; Wang, Chaofeng; Nath, Sumit; Ersoy, Ilker; Palaniappan, Kannappan; Chen, Su-Shing

    2009-07-16

    With the increasing availability of live cell imaging technology, tracking cells and other moving objects in live cell videos has become a major challenge for bioimage informatics. An inherent problem for most cell tracking algorithms is over- or under-segmentation of cells - many algorithms tend to recognize one cell as several cells or vice versa. We propose to approach this problem through so-called topological alignments, which we apply to address the problem of linking segmentations of two consecutive frames in the video sequence. Starting from the output of a conventional segmentation procedure, we align pairs of consecutive frames through assigning sets of segments in one frame to sets of segments in the next frame. We achieve this through finding maximum weighted solutions to a generalized "bipartite matching" between two hierarchies of segments, where we derive weights from relative overlap scores of convex hulls of sets of segments. For solving the matching task, we rely on an integer linear program. Practical experiments demonstrate that the matching task can be solved efficiently in practice, and that our method is both effective and useful for tracking cells in data sets derived from a so-called Large Scale Digital Cell Analysis System (LSDCAS). The source code of the implementation is available for download from http://www.picb.ac.cn/patterns/Software/topaln.
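    The paper solves a generalized bipartite matching over segment hierarchies with an integer linear program; as a much-simplified sketch, the toy function below brute-forces a maximum-weight one-to-one matching between segments of two consecutive frames, with hypothetical overlap scores as weights.

```python
# Sketch only: the paper's ILP handles sets of segments in two hierarchies;
# this toy version brute-forces a maximum-weight one-to-one matching between
# n segments of frame t and n segments of frame t+1.
from itertools import permutations

def best_matching(weights):
    """weights[i][j] = overlap score of segment i (frame t) vs j (frame t+1).
    Returns (score, assignment) maximizing total overlap."""
    n = len(weights)
    best = (-1.0, None)
    for perm in permutations(range(n)):
        score = sum(weights[i][perm[i]] for i in range(n))
        if score > best[0]:
            best = (score, perm)
    return best

# Hypothetical overlap scores between 3 segments in consecutive frames
w = [[0.9, 0.1, 0.0],
     [0.2, 0.8, 0.1],
     [0.0, 0.3, 0.7]]
score, assignment = best_matching(w)
print(assignment, round(score, 2))  # (0, 1, 2) 2.4
```

    Brute force is exponential in the number of segments; the ILP formulation in the paper is what makes the matching tractable on real segment hierarchies.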

  14. Use of videos for Distribution Construction and Maintenance (DC&M) training

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Long, G.M.

    This paper presents the results of a survey taken among members of the American Gas Association (AGA)'s Distribution Construction and Maintenance (DC&M) committee to gauge the extent, sources, mode of use, and degree of satisfaction with videos as a training aid in distribution construction and maintenance skills. It also cites AGA Engineering Technical Note DCM-88-3-1 as a catalog of the videos listed by survey respondents, and comments on the various sources of training videos and the characteristics of videos from each. The conference presentation included a sampling of video segments from these various sources. 1 fig.

  15. Bayesian Modeling of Temporal Coherence in Videos for Entity Discovery and Summarization.

    PubMed

    Mitra, Adway; Biswas, Soma; Bhattacharyya, Chiranjib

    2017-03-01

    A video is understood by users in terms of entities present in it. Entity Discovery is the task of building appearance model for each entity (e.g., a person), and finding all its occurrences in the video. We represent a video as a sequence of tracklets, each spanning 10-20 frames, and associated with one entity. We pose Entity Discovery as tracklet clustering, and approach it by leveraging Temporal Coherence (TC): the property that temporally neighboring tracklets are likely to be associated with the same entity. Our major contributions are the first Bayesian nonparametric models for TC at tracklet-level. We extend Chinese Restaurant Process (CRP) to TC-CRP, and further to Temporally Coherent Chinese Restaurant Franchise (TC-CRF) to jointly model entities and temporal segments using mixture components and sparse distributions. For discovering persons in TV serial videos without meta-data like scripts, these methods show considerable improvement over state-of-the-art approaches to tracklet clustering in terms of clustering accuracy, cluster purity and entity coverage. The proposed methods can perform online tracklet clustering on streaming videos unlike existing approaches, and can automatically reject false tracklets. Finally we discuss entity-driven video summarization- where temporal segments of the video are selected based on the discovered entities, to create a semantically meaningful summary.

  16. The Center/TRACON Automation System (CTAS): A video presentation

    NASA Technical Reports Server (NTRS)

    Green, Steven M.; Freeman, Jeannine

    1992-01-01

    NASA Ames, working with the FAA, has developed a highly effective set of automation tools for aiding the air traffic controller in traffic management within the terminal area. To effectively demonstrate these tools, the video AAV-1372, entitled 'Center/TRACON Automation System,' was produced. The script to the video is provided along with instructions for its acquisition.

  17. NASA's F-15B testbed aircraft in flight during the first evaluation flight of the joint NASA/Gulfstream Quiet Spike project

    NASA Image and Video Library

    2006-08-10

    NASA's F-15B testbed aircraft in flight during the first evaluation flight of the joint NASA/Gulfstream Quiet Spike project. The project seeks to verify the structural integrity of the multi-segmented, articulating spike attachment designed to reduce and control a sonic boom.

  18. BCAT5 Video Setup In JEM

    NASA Image and Video Library

    2011-09-21

    ISS029-E-010998 (21 Sept. 2011) --- NASA astronaut Mike Fossum, Expedition 29 commander, prepares a camcorder for recording documentary video of the Binary Colloidal Alloy Test-5 (BCAT-5) payload operations in the Kibo laboratory of the International Space Station.

  19. BCAT5 Video Setup In JEM

    NASA Image and Video Library

    2011-09-21

    ISS029-E-010999 (21 Sept. 2011) --- NASA astronaut Mike Fossum, Expedition 29 commander, prepares a camcorder for recording documentary video of the Binary Colloidal Alloy Test-5 (BCAT-5) payload operations in the Kibo laboratory of the International Space Station.

  20. The Simple Video Coder: A free tool for efficiently coding social video data.

    PubMed

    Barto, Daniel; Bird, Clark W; Hamilton, Derek A; Fink, Brandi C

    2017-08-01

    Videotaping of experimental sessions is a common practice across many disciplines of psychology, ranging from clinical therapy, to developmental science, to animal research. Audio-visual data are a rich source of information that can be easily recorded; however, analysis of the recordings presents a major obstacle to project completion. Coding behavior is time-consuming and often requires ad-hoc training of a student coder. In addition, existing software is either prohibitively expensive or cumbersome, which leaves researchers with inadequate tools to quickly process video data. We offer the Simple Video Coder: free, open-source software for behavior coding that is flexible in accommodating different experimental designs, is intuitive for students to use, and produces outcome measures of event timing, frequency, and duration. Finally, the software also offers extraction tools to splice video into coded segments suitable for training future human coders or for use as input for pattern classification algorithms.

  1. Wiseman and Suraev in Russian segment

    NASA Image and Video Library

    2014-06-06

    ISS040-E-008030 (6 June 2014) --- NASA astronaut Reid Wiseman and Russian cosmonaut Maxim Suraev (background), both Expedition 40 flight engineers, are pictured in the Russian segment of the International Space Station.

  2. NASA Missions Monitor a Waking Black Hole

    NASA Image and Video Library

    2015-06-30

    On June 15, NASA's Swift caught the onset of a rare X-ray outburst from a stellar-mass black hole in the binary system V404 Cygni. Astronomers around the world are watching the event. In this system, a stream of gas from a star much like the sun flows toward a 10 solar mass black hole. Instead of spiraling toward the black hole, the gas accumulates in an accretion disk around it. Every couple of decades, the disk switches into a state that sends the gas rushing inward, starting a new outburst. Read more: www.nasa.gov/feature/goddard/nasa-missions-monitor-a-waki... Credits: NASA's Goddard Space Flight Center Download this video in HD formats from NASA Goddard's Scientific Visualization Studio svs.gsfc.nasa.gov/cgi-bin/details.cgi?aid=11110

  3. NASA Cribs: Human Exploration Research Analog

    NASA Image and Video Library

    2017-07-20

    Follow along as interns at NASA’s Johnson Space Center show you around the Human Exploration Research Analog (HERA), a mission simulation environment located onsite at the Johnson Space Center in Houston. HERA is a unique three-story habitat designed to serve as an analog for isolation, confinement, and remote conditions in exploration scenarios. This video gives a tour of where crew members live, work, sleep, and eat during the analog missions. Find out more about HERA mission activities: https://www.nasa.gov/analogs/hera Find out how to be a HERA crew member: https://www.nasa.gov/analogs/hera/want-to-participate For more on NASA internships: https://intern.nasa.gov/ For Johnson Space Center specific internships: https://pathways.jsc.nasa.gov/ https://www.nasa.gov/centers/johnson/education/interns/index.html HD download link: https://archive.org/details/jsc2017m000730_NASA-Cribs-Human-Exploration-Research-Analog --------------------------------- FOLLOW JOHNSON SPACE CENTER INTERNS! Facebook: @NASA.JSC.Students https://www.facebook.com/NASA.JSC.Students/ Instagram: @nasajscstudents https://www.instagram.com/nasajscstudents/ Twitter: @NASAJSCStudents https://twitter.com/nasajscstudents

  4. Highlight summarization in golf videos using audio signals

    NASA Astrophysics Data System (ADS)

    Kim, Hyoung-Gook; Kim, Jin Young

    2008-01-01

    In this paper, we present an automatic summarization of highlights in golf videos based on audio information alone, without video information. The proposed highlight summarization system is based on semantic audio segmentation and detection of action units from audio signals. Studio speech, field speech, music, and applause are segmented by means of sound classification. Swings are detected by impulse onset detection. Sounds such as a swing followed by applause form a complete action unit, while studio speech and music parts are used to anchor the program structure. With highly precise detection of applause, highlights are extracted effectively. Our experiments achieve high classification precision on 18 golf games, demonstrating that the proposed system is effective and computationally efficient enough for embedded consumer electronics devices.
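    The abstract gives no algorithmic details for impulse onset detection; a minimal, hypothetical sketch is to flag audio frames whose short-term energy jumps sharply relative to the previous frame, as a golf swing's impact sound would.

```python
# Toy sketch (not the paper's method): detect impulse onsets by flagging
# frames whose short-term energy exceeds a multiple of the previous frame's.
def onset_frames(samples, frame=4, ratio=4.0):
    energies = [sum(x * x for x in samples[i:i + frame])
                for i in range(0, len(samples) - frame + 1, frame)]
    return [k for k in range(1, len(energies))
            if energies[k] > ratio * max(energies[k - 1], 1e-9)]

quiet = [0.01] * 8          # hypothetical low-level background audio
impact = [0.9, -0.8, 0.7, -0.6]  # hypothetical club-impact transient
print(onset_frames(quiet + impact + quiet))  # [2]
```

    Real systems would work on windowed, overlapping frames of sampled audio and combine the onset cue with the applause classifier described above.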

  5. Video-to-film color-image recorder.

    NASA Technical Reports Server (NTRS)

    Montuori, J. S.; Carnes, W. R.; Shim, I. H.

    1973-01-01

    A precision video-to-film recorder for use in image data processing systems, being developed for NASA, will convert three video input signals (red, blue, green) into a single full-color light beam for image recording on color film. Argon ion and krypton lasers are used to produce three spectral lines which are independently modulated by the appropriate video signals, combined into a single full-color light beam, and swept over the recording film in a raster format for image recording. A rotating multi-faceted spinner mounted on a translating carriage generates the raster, and an annotation head is used to record up to 512 alphanumeric characters in a designated area outside the image area.

  6. Parting Moon Shots from NASA's GRAIL Mission

    NASA Image and Video Library

    2013-01-10

    Video of the moon taken by the NASA GRAIL mission's MoonKam (Moon Knowledge Acquired by Middle School Students) camera aboard the Ebb spacecraft on Dec. 14, 2012. Features forward-facing and rear-facing views.

  7. NASA Space Safety Standards and Procedures for Human Rating Requirements

    NASA Technical Reports Server (NTRS)

    Shivers, C. Herbert

    2009-01-01

    The National Aeronautics and Space Administration of the United States of America (NASA) has arguably led this planet in space exploration and certainly has been one of two major leaders in those endeavors. NASA governance is institutionalized and managed in a series of documents arranged in a hierarchy and flowing down to the work levels. A document tree of NASA's documentation in its totality would likely overwhelm rather than inform. Taken in segments related to the various business topics, and focusing on those segments, however, it provides a logical and understandable relationship and flow of requirements and processes. That is the nature of this chapter: a selection of NASA documentation pertaining to space exploration and a description of how those documents together form the plan by which NASA business for space exploration is conducted. Information presented herein is taken from NASA publications and is publicly available, and no information herein is protected by copyright or security regulations. While NASA documents are the source of information presented herein, any and all views expressed herein, and any misrepresentations of NASA data that may occur herein, are those of the author and should not be considered NASA official positions or statements, nor should NASA endorsement of anything presented in this work be assumed.

  8. KENNEDY SPACE CENTER, FLA. - The red NASA engine backs up with its cargo of containers in order to change tracks. The containers enclose segments of a solid rocket booster being returned to Utah for testing. The segments were part of the STS-114 stack. It is the first time actual flight segments that had been stacked for flight in the VAB are being returned for testing. They will undergo firing, which will enable inspectors to check the viability of the solid and verify the life expectancy for stacked segments.

    NASA Image and Video Library

    2004-01-30

    KENNEDY SPACE CENTER, FLA. - The red NASA engine backs up with its cargo of containers in order to change tracks. The containers enclose segments of a solid rocket booster being returned to Utah for testing. The segments were part of the STS-114 stack. It is the first time actual flight segments that had been stacked for flight in the VAB are being returned for testing. They will undergo firing, which will enable inspectors to check the viability of the solid and verify the life expectancy for stacked segments.

  9. The control panel for the joint NASA/Gulfstream Quiet Spike project, located in the backseat of NASA's F-15B testbed aircraft

    NASA Image and Video Library

    2006-08-16

    The control panel for the joint NASA/Gulfstream Quiet Spike project, located in the backseat of NASA's F-15B testbed aircraft. The project seeks to verify the structural integrity of the multi-segmented, articulating spike attachment designed to reduce and control a sonic boom.

  10. NASA mobile satellite program

    NASA Technical Reports Server (NTRS)

    Knouse, G.; Weber, W.

    1985-01-01

    A three phase development program for ground and space segment technologies which will enhance and enable the second and third generation mobile satellite systems (MSS) is outlined. Phase 1, called the Mobile Satellite Experiment (MSAT-X), is directed toward the development of ground segment technology needed for future MSS generations. Technology validation and preoperational experiments with other government agencies will be carried out during the two year period following launch. The satellite channel capacity needed to carry out these experiments will be obtained from industry under a barter type agreement in exchange for NASA provided launch services. Phase 2 will develop and flight test the multibeam spacecraft antenna technology needed to obtain substantial frequency reuse for second generation commercial systems. Industry will provide the antenna, and NASA will fly it on the Shuttle and test it in orbit. Phase 3 is similar to Phase 2 but will develop an even larger multibeam antenna and test it on the space station.

  11. NASA mobile satellite program

    NASA Astrophysics Data System (ADS)

    Knouse, G.; Weber, W.

    1985-04-01

    A three phase development program for ground and space segment technologies which will enhance and enable the second and third generation mobile satellite systems (MSS) is outlined. Phase 1, called the Mobile Satellite Experiment (MSAT-X), is directed toward the development of ground segment technology needed for future MSS generations. Technology validation and preoperational experiments with other government agencies will be carried out during the two year period following launch. The satellite channel capacity needed to carry out these experiments will be obtained from industry under a barter type agreement in exchange for NASA provided launch services. Phase 2 will develop and flight test the multibeam spacecraft antenna technology needed to obtain substantial frequency reuse for second generation commercial systems. Industry will provide the antenna, and NASA will fly it on the Shuttle and test it in orbit. Phase 3 is similar to Phase 2 but will develop an even larger multibeam antenna and test it on the space station.

  12. Video repairing under variable illumination using cyclic motions.

    PubMed

    Jia, Jiaya; Tai, Yu-Wing; Wu, Tai-Pang; Tang, Chi-Keung

    2006-05-01

    This paper presents a complete system capable of synthesizing a large number of pixels that are missing due to occlusion or damage in an uncalibrated input video. These missing pixels may correspond to the static background or cyclic motions of the captured scene. Our system employs user-assisted video layer segmentation, while the main processing in video repair is fully automatic. The input video is first decomposed into the color and illumination videos. The necessary temporal consistency is maintained by tensor voting in the spatio-temporal domain. Missing colors and illumination of the background are synthesized by applying image repairing. Finally, the occluded motions are inferred by spatio-temporal alignment of collected samples at multiple scales. We experimented on our system with some difficult examples with variable illumination, where the capturing camera can be stationary or in motion.

  13. VLSI-based video event triggering for image data compression

    NASA Astrophysics Data System (ADS)

    Williams, Glenn L.

    1994-02-01

    Long-duration, on-orbit microgravity experiments require a combination of high resolution and high frame rate video data acquisition. The digitized high-rate video stream presents a difficult data storage problem. Data produced at rates of several hundred million bytes per second may require a total mission video data storage requirement exceeding one terabyte. A NASA-designed, VLSI-based, highly parallel digital state machine generates a digital trigger signal at the onset of a video event. High capacity random access memory storage coupled with newly available fuzzy logic devices permits the monitoring of a video image stream for long term (DC-like) or short term (AC-like) changes caused by spatial translation, dilation, appearance, disappearance, or color change in a video object. Pre-trigger and post-trigger storage techniques are then adaptable to archiving only the significant video images.

  14. VLSI-based Video Event Triggering for Image Data Compression

    NASA Technical Reports Server (NTRS)

    Williams, Glenn L.

    1994-01-01

    Long-duration, on-orbit microgravity experiments require a combination of high resolution and high frame rate video data acquisition. The digitized high-rate video stream presents a difficult data storage problem. Data produced at rates of several hundred million bytes per second may require a total mission video data storage requirement exceeding one terabyte. A NASA-designed, VLSI-based, highly parallel digital state machine generates a digital trigger signal at the onset of a video event. High capacity random access memory storage coupled with newly available fuzzy logic devices permits the monitoring of a video image stream for long term (DC-like) or short term (AC-like) changes caused by spatial translation, dilation, appearance, disappearance, or color change in a video object. Pre-trigger and post-trigger storage techniques are then adaptable to archiving only the significant video images.

  15. Motion video analysis using planar parallax

    NASA Astrophysics Data System (ADS)

    Sawhney, Harpreet S.

    1994-04-01

    Motion and structure analysis in video sequences can lead to efficient descriptions of objects and their motions. Interesting events in videos can be detected using such an analysis: for instance, independent object motion when the camera itself is moving, or figure-ground segregation based on the saliency of a structure compared to its surroundings. In this paper we present a method for 3D motion and structure analysis that uses a planar surface in the environment as a reference coordinate system to describe a video sequence. The motion in the video sequence is described as the motion of the reference plane and the parallax motion of all the non-planar components of the scene. It is shown how this method simplifies the otherwise hard general 3D motion analysis problem. In addition, a natural coordinate system in the environment is used to describe the scene, which can simplify motion-based segmentation. This work is part of an ongoing effort in our group toward video annotation and analysis for indexing and retrieval. Results from a demonstration system being developed are presented.

  16. Research and Development at NASA

    NASA Technical Reports Server (NTRS)

    2004-01-01

    The Vision for Space Exploration marks the next segment of NASA's continuing journey to find answers to compelling questions about the origins of the solar system, the existence of life beyond Earth, and the ability of humankind to live on other worlds. The success of the Vision relies upon the ongoing research and development activities conducted at each of NASA's 10 field centers. In an effort to promote synergy across NASA as it works to meet its long-term goals, the Agency restructured its Strategic Enterprises into four Mission Directorates that align with the Vision. Consisting of Exploration Systems, Space Operations, Science, and Aeronautics Research, these directorates provide NASA Headquarters and the field centers with a streamlined approach to continue exploration both in space and on Earth.

  17. Indexed Captioned Searchable Videos: A Learning Companion for STEM Coursework

    NASA Astrophysics Data System (ADS)

    Tuna, Tayfun; Subhlok, Jaspal; Barker, Lecia; Shah, Shishir; Johnson, Olin; Hovey, Christopher

    2017-02-01

    Videos of classroom lectures have proven to be a popular and versatile learning resource. A key shortcoming of the lecture video format is accessing the content of interest hidden in a video. This work meets this challenge with an advanced video framework featuring topical indexing, search, and captioning (ICS videos). Standard optical character recognition (OCR) technology was enhanced with image transformations for extraction of text from video frames to support indexing and search. The images and text on video frames are analyzed to divide lecture videos into topical segments. The ICS video player integrates indexing, search, and captioning in video playback, providing instant access to the content of interest. This video framework has been used by more than 70 courses in a variety of STEM disciplines and assessed by more than 4000 students. Results presented from the surveys demonstrate the value of the videos as a learning resource and the role played by videos in a student's learning process. Survey results also establish the value of indexing and search features in a video platform for education. This paper reports on the development and evaluation of the ICS video framework and over 5 years of usage experience in several STEM courses.

  18. DIY Video Abstracts: Lessons from an ultimately successful experience

    NASA Astrophysics Data System (ADS)

    Brauman, K. A.

    2013-12-01

    A great video abstract can come together in as little as two days with only a laptop and a sense of adventure. From script to setup, here are tips to make the process practically pain-free. The content of every abstract is unique, but some pointers for writing a video script are universal. Keeping it short and distilling the message into 4 or 5 single-issue segments makes any video better. Making the video itself can be intimidating, but it doesn't have to be! Practical ideas to be discussed include setting up the script as a narrow column to avoid the appearance of reading, and hunting for a colored backdrop. A lot goes into just two minutes of video, but for not too much effort the payoff is tremendous.

  19. Activity-based exploitation of Full Motion Video (FMV)

    NASA Astrophysics Data System (ADS)

    Kant, Shashi

    2012-06-01

    Video has been a game-changer in how US forces are able to find, track and defeat their adversaries. With millions of minutes of video being generated from an increasing number of sensor platforms, the DOD has stated that the rapid increase in video is overwhelming its analysts. The manpower required to view and garner useable information from the flood of video is unaffordable, especially in light of current fiscal restraints. "Search" within full-motion video has traditionally relied on human tagging of content, and on video metadata, to provide filtering and locate segments of interest in the context of an analyst's query. Our approach utilizes a novel machine-vision-based approach to index FMV, using object recognition and tracking, and events and activities detection. This approach enables FMV exploitation in real time, as well as a forensic look-back within archives. It can help get the most information out of video sensor collection, help focus the attention of overburdened analysts, help them form connections in activity over time, and conserve national fiscal resources in exploiting FMV.

  20. Classification and Weakly Supervised Pain Localization using Multiple Segment Representation.

    PubMed

    Sikka, Karan; Dhall, Abhinav; Bartlett, Marian Stewart

    2014-10-01

    Automatic pain recognition from videos is a vital clinical application and, owing to its spontaneous nature, poses interesting challenges to automatic facial expression recognition (AFER) research. Previous pain vs. no-pain systems have highlighted two major challenges: (1) ground truth is provided for the sequence, but the presence or absence of the target expression in a given frame is unknown, and (2) the time point and the duration of the pain expression event(s) in each video are unknown. To address these issues we propose a novel framework (referred to as MS-MIL) in which each sequence is represented as a bag containing multiple segments, and multiple instance learning (MIL) is employed to handle this weakly labeled data in the form of sequence-level ground truth. These segments are generated via multiple clusterings of a sequence or by running a multi-scale temporal scanning window, and are represented using a state-of-the-art Bag of Words (BoW) representation. This work extends the idea of detecting facial expressions through 'concept frames' to 'concept segments' and argues through extensive experiments that algorithms such as MIL are needed to reap the benefits of such a representation. The key advantages of our approach are: (1) joint detection and localization of painful frames using only sequence-level ground truth, (2) incorporation of temporal dynamics by representing the data not as individual frames but as segments, and (3) extraction of multiple segments, which is well suited to signals with uncertain temporal location and duration in the video. Extensive experiments on the UNBC-McMaster Shoulder Pain dataset highlight the effectiveness of the approach, achieving competitive results on both tasks of pain classification and localization in videos. We also empirically evaluate the contributions of different components of MS-MIL. The paper also includes the visualization of discriminative facial patches, important for pain detection, as discovered by our framework.
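    The bag-of-segments intuition behind MS-MIL can be sketched in a few lines. This is a deliberately simplified illustration, not the paper's BoW+MIL model: the per-segment pain scores are assumed given, and max-pooling over segment scores stands in for the learned MIL classifier; localization falls out of the best-scoring segment.

```python
# Sketch of the multiple-instance idea: a video (bag) is labeled positive if at
# least one segment (instance) scores above threshold; the argmax segment
# localizes the expression event. The segment scorer is hypothetical.

def classify_and_localize(segment_scores, threshold=0.5):
    """Return (bag_label, index_of_highest_scoring_segment)."""
    best_idx = max(range(len(segment_scores)), key=lambda i: segment_scores[i])
    bag_label = int(segment_scores[best_idx] >= threshold)
    return bag_label, best_idx

# A video represented by four overlapping segments and their pain scores:
label, where = classify_and_localize([0.1, 0.2, 0.8, 0.3])
# label == 1 (pain present), where == 2 (third segment localizes the event)
```

    Note how sequence-level supervision suffices: only the bag label is ever needed for training, yet the argmax gives frame-range localization for free.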

  1. A unified framework for gesture recognition and spatiotemporal gesture segmentation.

    PubMed

    Alon, Jonathan; Athitsos, Vassilis; Yuan, Quan; Sclaroff, Stan

    2009-09-01

    Within the context of hand gesture recognition, spatiotemporal gesture segmentation is the task of determining, in a video sequence, where the gesturing hand is located and when the gesture starts and ends. Existing gesture recognition methods typically assume either known spatial segmentation or known temporal segmentation, or both. This paper introduces a unified framework for simultaneously performing spatial segmentation, temporal segmentation, and recognition. In the proposed framework, information flows both bottom-up and top-down. A gesture can be recognized even when the hand location is highly ambiguous and when information about when the gesture begins and ends is unavailable. Thus, the method can be applied to continuous image streams where gestures are performed in front of moving, cluttered backgrounds. The proposed method consists of three novel contributions: a spatiotemporal matching algorithm that can accommodate multiple candidate hand detections in every frame, a classifier-based pruning framework that enables accurate and early rejection of poor matches to gesture models, and a subgesture reasoning algorithm that learns which gesture models can falsely match parts of other longer gestures. The performance of the approach is evaluated on two challenging applications: recognition of hand-signed digits gestured by users wearing short-sleeved shirts, in front of a cluttered background, and retrieval of occurrences of signs of interest in a video database containing continuous, unsegmented signing in American Sign Language (ASL).
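    The temporal side of such matching is rooted in dynamic-programming alignment. A minimal sketch of the classic dynamic time warping (DTW) cost, which the paper's spatiotemporal matcher generalizes to multiple candidate hand detections per frame, is shown below on 1-D trajectories (illustrative only; the actual matcher works over candidate detections and includes pruning and subgesture reasoning):

```python
# Classic DTW between a gesture model and a query trajectory: each cell holds
# the cheapest alignment cost of the prefixes, allowing skips in either series.

def dtw_cost(model, query):
    """Dynamic time warping cost between two 1-D trajectories."""
    n, m = len(model), len(query)
    INF = float("inf")
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(model[i - 1] - query[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # advance model only
                                 cost[i][j - 1],      # advance query only
                                 cost[i - 1][j - 1])  # advance both (match)
    return cost[n][m]

# The query matches the rising model despite a timing difference:
c1 = dtw_cost([0, 1, 2, 3], [0, 1, 1, 2, 3])
c2 = dtw_cost([3, 2, 1, 0], [0, 1, 1, 2, 3])
# c1 == 0.0, and c1 < c2
```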

  2. Gradual cut detection using low-level vision for digital video

    NASA Astrophysics Data System (ADS)

    Lee, Jae-Hyun; Choi, Yeun-Sung; Jang, Ok-bae

    1996-09-01

    Digital video computing and organization is one of the important issues in multimedia systems, signal compression, and databases. Video should be segmented into shots to be used for identification and indexing. This requires a suitable method to automatically locate cut points in order to separate the shots in a video. Automatic cut detection to isolate shots has received considerable attention due to its many practical applications: video databases, browsing, authoring systems, retrieval, and film. Previous studies are based on a set of difference mechanisms that measure the content changes between video frames, but they cannot detect gradual special effects such as dissolves, wipes, fade-ins, fade-outs, and structured flashes. In this paper, a new cut detection method for gradual transitions based on computer vision techniques is proposed. Experimental results applied to commercial video are then presented and evaluated.
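    The baseline that such work builds on, and that fails on gradual transitions, is the classic histogram-difference test between consecutive frames: a hard cut produces one large inter-frame change, while a dissolve spreads a comparable change over many small steps. A toy sketch (frames as flat lists of grayscale pixel values; not the paper's method):

```python
# Hard-cut detection by thresholding the inter-frame histogram difference.

def histogram(frame, bins=4, max_val=256):
    h = [0] * bins
    for p in frame:
        h[p * bins // max_val] += 1
    return h

def hist_diff(f1, f2):
    h1, h2 = histogram(f1), histogram(f2)
    return sum(abs(a - b) for a, b in zip(h1, h2))

def detect_cuts(frames, threshold):
    return [i for i in range(1, len(frames))
            if hist_diff(frames[i - 1], frames[i]) > threshold]

dark, bright = [10] * 8, [200] * 8
cuts = detect_cuts([dark, dark, bright, bright], threshold=8)
# cuts == [2]: the only change above threshold is between frames 1 and 2
```

    A dissolve between `dark` and `bright` would split the same total difference across many frames, each below the threshold, which is exactly why gradual-transition detection needs a different mechanism.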

  3. Helping Video Games Rewire "Our Minds"

    NASA Technical Reports Server (NTRS)

    Pope, Alan T.; Palsson, Olafur S.

    2001-01-01

    Biofeedback-modulated video games are games that respond to physiological signals as well as mouse, joystick or game controller input; they embody the concept of improving physiological functioning by rewarding specific healthy body signals with success at playing a video game. The NASA patented biofeedback-modulated game method blends biofeedback into popular off-the-shelf video games in such a way that the games do not lose their entertainment value. This method uses physiological signals (e.g., electroencephalogram frequency band ratio) not simply to drive a biofeedback display directly, or periodically modify a task as in other systems, but to continuously modulate parameters (e.g., game character speed and mobility) of a game task in real time while the game task is being performed by other means (e.g., a game controller). Biofeedback-modulated video games represent a new generation of computer and video game environments that train valuable mental skills beyond eye-hand coordination. These psychophysiological training technologies are poised to exploit the revolution in interactive multimedia home entertainment for the personal improvement, not just the diversion, of the user.

  4. Retinal slit lamp video mosaicking.

    PubMed

    De Zanet, Sandro; Rudolph, Tobias; Richa, Rogerio; Tappeiner, Christoph; Sznitman, Raphael

    2016-06-01

    To this day, the slit lamp remains the first tool used by an ophthalmologist to examine patient eyes. Imaging of the retina poses, however, a variety of problems, namely a shallow depth of focus, reflections from the optical system, a small field of view, and non-uniform illumination. The use of slit lamp images for documentation and analysis purposes thus remains extremely challenging for ophthalmologists due to large image artifacts. For this reason, we propose an automatic retinal slit lamp video mosaicking method, which enlarges the field of view and reduces the amount of noise and reflections, thus enhancing image quality. Our method is composed of three parts: (i) viable content segmentation, (ii) global registration, and (iii) image blending. Frame content is segmented using gradient boosting with custom pixel-wise features. Speeded-up robust features are used for finding pair-wise translations between frames, with robust random sample consensus estimation and graph-based simultaneous localization and mapping for global bundle adjustment. Foreground-aware blending based on feathering merges video frames into comprehensive mosaics. Foreground is segmented successfully with an area under the receiver operating characteristic curve of 0.9557. Mosaicking results from our method and from state-of-the-art methods were compared and rated by ophthalmologists, who showed a strong preference for the large field of view provided by our method. The proposed method for global registration of retinal slit lamp images into comprehensive mosaics improves over state-of-the-art methods and is preferred qualitatively.
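    The feathering step of the blending stage can be sketched in one dimension. This is a simplified illustration, not the paper's foreground-aware implementation: 1-D grayscale strips stand in for frames, registration (SURF + RANSAC) is assumed done, and `offset` is the second strip's position in mosaic coordinates.

```python
# Feathered blending: in the overlap, pixel weights ramp linearly from one
# frame to the other, hiding the seam between registered frames.

def feather_blend(a, b, offset):
    """Blend strip b into strip a starting at 'offset'; returns the mosaic."""
    overlap = len(a) - offset
    mosaic = list(a[:offset])                     # a-only region
    for i in range(overlap):                      # weighted overlap region
        w = (i + 1) / (overlap + 1)               # ramps toward b
        mosaic.append((1 - w) * a[offset + i] + w * b[i])
    mosaic.extend(b[overlap:])                    # b-only region
    return mosaic

m = feather_blend([100, 100, 100, 100], [50, 50, 50, 50], offset=2)
# len(m) == 6; the two overlap samples step smoothly from 100 toward 50
```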

  5. We Remember 2015 - A Video Memorial

    NASA Image and Video Library

    2015-06-10

    Video tribute to 12 members of the NASA Astrobiology community who passed away since the 2012 AbSciCon meeting. Tributes to: Dick Holland, Bob Wharton, Carl Woese, David McKay, Tom Wdowiak, John Billingham, Bishun Khare, Tom Pierson, Colin Pillinger, Katrina Edwards, Martin Brasier and Alberto Behar.

  6. NASA Conducts "Out of Sight" Drone Tests in Nevada

    NASA Image and Video Library

    2016-10-27

    Shareable video highlighting NASA's work with the Federal Aviation Administration (FAA) to develop an air traffic management platform for drones, called the Unmanned Aircraft Systems Traffic Management system or UTM.

  7. The NASA Fireball Network Database

    NASA Technical Reports Server (NTRS)

    Moser, Danielle E.

    2011-01-01

    The NASA Meteoroid Environment Office (MEO) has been operating an automated video fireball network since late-2008. Since that time, over 1,700 multi-station fireballs have been observed. A database containing orbital data and trajectory information on all these events has recently been compiled and is currently being mined for information. Preliminary results are presented here.

  8. NASA Lewis' Telescience Support Center Supports Orbiting Microgravity Experiments

    NASA Technical Reports Server (NTRS)

    Hawersaat, Bob W.

    1998-01-01

    The Telescience Support Center (TSC) at the NASA Lewis Research Center was developed to enable Lewis-based science teams and principal investigators to monitor and control experimental and operational payloads onboard the International Space Station. The TSC is a remote operations hub that can interface with other remote facilities, such as universities and industrial laboratories. As a pathfinder for International Space Station telescience operations, the TSC has incrementally developed an operational capability by supporting space shuttle missions. The TSC has evolved into an environment where experimenters and scientists can control and monitor the health and status of their experiments in near real time. Remote operations (or telescience) allow local scientists and their experiment teams to minimize their travel and maintain a local complement of expertise for hardware and software troubleshooting and data analysis. The TSC was designed, developed, and is operated by Lewis' Engineering and Technical Services Directorate and its support contractors, Analex Corporation and White's Information System, Inc. It is managed by Lewis' Microgravity Science Division. The TSC provides operational support in conjunction with the NASA Marshall Space Flight Center and NASA Johnson Space Center. It enables its customers to command, receive, and view telemetry; monitor the science video from their on-orbit experiments; and communicate over mission-support voice loops. Data can be received and routed to experimenter-supplied ground support equipment and/or to the TSC data system for display. Video teleconferencing capability and other video sources, such as NASA TV, are also available. The TSC has a full complement of standard services to aid experimenters in telemetry operations.

  9. Around Marshall

    NASA Image and Video Library

    2003-05-01

    Students at Williams Technology Middle School in Huntsville were featured in a new segment of NASA CONNECT, a video series aimed at enhancing the teaching of math, science, and technology to middle school students. The segment premiered nationwide May 15, 2003, and helped viewers understand Sir Isaac Newton's first, second, and third laws of motion and how they relate to NASA's efforts in developing the next generation of space transportation.

  10. NASA work unit system users manual

    NASA Technical Reports Server (NTRS)

    1972-01-01

    The NASA Work Unit System is a management information system for research tasks (i.e., work units) performed under NASA grants and contracts. It supplies profiles indicating how much effort is being expended on what types of research, where the effort is being expended, and how funds are being distributed. The user obtains information by entering requests on the keyboard of a time-sharing terminal. Responses are received as video displays or typed messages at the terminal, or as lists printed in the computer room for subsequent delivery by messenger.

  11. Interactive segmentation of tongue contours in ultrasound video sequences using quality maps

    NASA Astrophysics Data System (ADS)

    Ghrenassia, Sarah; Ménard, Lucie; Laporte, Catherine

    2014-03-01

    Ultrasound (US) imaging is an effective and non-invasive way of studying the tongue motions involved in normal and pathological speech, and the results of US studies are of interest for the development of new strategies in speech therapy. State-of-the-art tongue shape analysis techniques based on US images depend on semi-automated tongue segmentation and tracking techniques. Recent work has mostly focused on improving the accuracy of the tracking techniques themselves. However, occasional errors remain inevitable, regardless of the technique used, and the tongue tracking process must thus be supervised by a speech scientist who corrects these errors manually or semi-automatically. This paper proposes an interactive framework to facilitate this process. In this framework, the user is guided towards potentially problematic portions of the US image sequence by a segmentation quality map that is based on the normalized energy of an active contour model and is automatically produced during tracking. When a problematic segmentation is identified, corrections to the segmented contour can be made on one image and propagated both forward and backward through the problematic subsequence, thereby improving the user experience. The interactive tools were tested in combination with two different tracking algorithms. Preliminary results illustrate the potential of the framework, suggesting that it generally reduces user interaction time with little change in segmentation repeatability.
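    The guidance idea can be sketched as follows. This is a hypothetical simplification: each frame's quality score is a stand-in for the normalized active-contour energy, and flagged frames are grouped into the subsequences the user would be directed to review and correct.

```python
# Turn per-frame quality scores into review intervals for the user.

def flag_problem_frames(energies, threshold):
    """Indices of frames whose (normalized) contour energy exceeds threshold."""
    return [i for i, e in enumerate(energies) if e > threshold]

def problem_subsequences(flagged):
    """Group consecutive flagged frames into (start, end) review intervals."""
    runs = []
    for i in flagged:
        if runs and i == runs[-1][1] + 1:
            runs[-1] = (runs[-1][0], i)           # extend the current run
        else:
            runs.append((i, i))                   # start a new run
    return runs

energies = [0.1, 0.2, 0.9, 0.8, 0.1, 0.7]
runs = problem_subsequences(flag_problem_frames(energies, threshold=0.5))
# runs == [(2, 3), (5, 5)]: two subsequences to review; a correction made on
# one frame of a run would then be propagated forward and backward within it.
```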

  12. Design Effectiveness Analysis of a Media Literacy Intervention to Reduce Violent Video Games Consumption Among Adolescents: The Relevance of Lifestyles Segmentation.

    PubMed

    Rivera, Reynaldo; Santos, David; Brändle, Gaspar; Cárdaba, Miguel Ángel M

    2016-04-01

    Exposure to media violence might have detrimental effects on psychological adjustment and is associated with aggression-related attitudes and behaviors. As a result, many media literacy programs were implemented to tackle that major public health issue. However, there is little evidence about their effectiveness. Evaluating design effectiveness, particularly regarding targeting process, would prevent adverse effects and improve the evaluation of evidence-based media literacy programs. The present research examined whether or not different relational lifestyles may explain the different effects of an antiviolence intervention program. Based on relational and lifestyles theory, the authors designed a randomized controlled trial and applied an analysis of variance 2 (treatment: experimental vs. control) × 4 (lifestyle classes emerged from data using latent class analysis: communicative vs. autonomous vs. meta-reflexive vs. fractured). Seven hundred and thirty-five Italian students distributed in 47 classes participated anonymously in the research (51.3% females). Participants completed a lifestyle questionnaire as well as their attitudes and behavioral intentions as the dependent measures. The results indicated that the program was effective in changing adolescents' attitudes toward violence. However, behavioral intentions toward consumption of violent video games were moderated by lifestyles. Those with communicative relational lifestyles showed fewer intentions to consume violent video games, while a boomerang effect was found among participants with problematic lifestyles. Adolescents' lifestyles played an important role in influencing the effectiveness of an intervention aimed at changing behavioral intentions toward the consumption of violent video games. For that reason, audience lifestyle segmentation analysis should be considered an essential technique for designing, evaluating, and improving media literacy programs. © The Author(s) 2016.

  13. Pegasus5 is Co-Winner of NASA's 2016 Software of the Year Award

    NASA Image and Video Library

    2016-11-04

    Shareable video highlighting the Pegasus5 software, co-winner of NASA's 2016 Software of the Year award. Developed at NASA Ames, it helps simulate air flow around space vehicles during launch and re-entry.

  14. Video Comprehensibility and Attention in Very Young Children

    PubMed Central

    Pempek, Tiffany A.; Kirkorian, Heather L.; Richards, John E.; Anderson, Daniel R.; Lund, Anne F.; Stevens, Michael

    2010-01-01

    Earlier research established that preschool children pay less attention to television that is sequentially or linguistically incomprehensible. This study determines the youngest age for which this effect can be found. One-hundred and three 6-, 12-, 18-, and 24-month-olds’ looking and heart rate were recorded while they watched Teletubbies, a television program designed for very young children. Comprehensibility was manipulated by either randomly ordering shots or reversing dialogue to become backward speech. Infants watched one normal segment and one distorted version of the same segment. Only 24-month-olds, and to some extent 18-month-olds, distinguished between normal and distorted video by looking for longer durations towards the normal stimuli. The results suggest that it may not be until the middle of the second year that children demonstrate the earliest beginnings of comprehension of video as it is currently produced. PMID:20822238

  15. Texture-adaptive hyperspectral video acquisition system with a spatial light modulator

    NASA Astrophysics Data System (ADS)

    Fang, Xiaojing; Feng, Jiao; Wang, Yongjin

    2014-10-01

    We present a new hybrid camera system based on a spatial light modulator (SLM) to capture texture-adaptive high-resolution hyperspectral video. The hybrid camera system records a hyperspectral video with low spatial resolution using a grayscale camera and a high-spatial-resolution video using an RGB camera. The hyperspectral video is subsampled by the SLM. The subsampled points can be adaptively selected according to the texture characteristics of the scene by combining digital image analysis and computational processing. In this paper, we propose an adaptive sampling method utilizing texture segmentation and the wavelet transform (WT). We also demonstrate the effectiveness of the sampling pattern produced on the SLM with the proposed method.
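    The texture-adaptive idea can be illustrated with a one-level 1-D Haar transform: detail-coefficient magnitude measures local texture, and sample density is allocated to the most textured regions. This is a hypothetical 1-D sketch; the actual system drives a 2-D sampling pattern on the SLM.

```python
# Rank signal regions by Haar wavelet detail energy and sample the busiest.

def haar_details(signal):
    """One-level Haar detail coefficients for an even-length signal."""
    return [(signal[2 * i] - signal[2 * i + 1]) / 2
            for i in range(len(signal) // 2)]

def pick_sample_blocks(signal, n_blocks):
    """Indices of the n most textured pairs (highest |detail|)."""
    d = haar_details(signal)
    ranked = sorted(range(len(d)), key=lambda i: abs(d[i]), reverse=True)
    return sorted(ranked[:n_blocks])

flat_then_edge = [10, 10, 10, 10, 10, 90, 90, 90]
blocks = pick_sample_blocks(flat_then_edge, n_blocks=1)
# blocks == [2]: the pair spanning the edge (10, 90) carries the most detail
```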

  16. Eclipse Shadow from NASA's G-III Research Aircraft

    NASA Image and Video Library

    2017-08-21

    From aboard NASA's Armstrong Flight Research Center G-III aircraft, this wide-angle video of the moon's umbra was captured as the aircraft flew over the coast of Oregon, near Lincoln City, at 35,000 feet during the eclipse.

  17. Depth Extraction from Videos Using Geometric Context and Occlusion Boundaries (Open Access)

    DTIC Science & Technology

    2014-09-05

    S. Hussain Raza et al., "Depth Extraction from Videos Using Geometric Context and Occlusion Boundaries" (arXiv:1510.07317v1 [cs.CV], 25 Oct 2015). The approach computes a temporal segmentation using the method proposed by Grundmann et al. [4], and uses estimation and triangulation to estimate depth maps [17, 27] (see Figure 1).

  18. Automatic multiple zebrafish larvae tracking in unconstrained microscopic video conditions.

    PubMed

    Wang, Xiaoying; Cheng, Eva; Burnett, Ian S; Huang, Yushi; Wlodkowic, Donald

    2017-12-14

    The accurate tracking of zebrafish larvae movement is fundamental to research in many biomedical, pharmaceutical, and behavioral science applications. However, the locomotive characteristics of zebrafish larvae are significantly different from those of adult zebrafish, so existing adult zebrafish tracking systems cannot reliably track larvae. Further, larvae are far smaller relative to the container, making the detection of water impurities inevitable; this further degrades larvae tracking, or demands very strict video imaging conditions that typically yield unreliable tracking results under realistic experimental conditions. This paper investigates the adaptation of advanced computer vision segmentation techniques and multiple object tracking algorithms to develop an accurate, efficient, and reliable multiple zebrafish larvae tracking system. The proposed system has been tested on a set of single and multiple adult and larval zebrafish videos in a wide variety of complex video conditions, including shadowing, labels, water bubbles, and background artifacts. Compared with existing state-of-the-art and commercial multiple organism tracking systems, the proposed system improves tracking accuracy by up to 31.57% in unconstrained video imaging conditions. To facilitate evaluation of zebrafish segmentation and tracking research, a dataset with annotated ground truth is also presented. The software is publicly accessible.
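    The data-association core of multiple-object tracking can be sketched with greedy nearest-neighbour matching, the simplest way to link per-frame detections into tracks when the targets are small, featureless blobs. This is an illustrative baseline, not the paper's algorithm; track positions and detections are hypothetical (x, y) coordinates.

```python
# Greedy nearest-neighbour data association between track endpoints and
# new detections, gated by a maximum allowed displacement.

def associate(tracks, detections, max_dist=5.0):
    """Map each track index to the index of its nearest unclaimed detection."""
    assignments = {}
    free = set(range(len(detections)))
    for t, (tx, ty) in enumerate(tracks):
        best, best_d = None, max_dist
        for d in free:
            dx, dy = detections[d][0] - tx, detections[d][1] - ty
            dist = (dx * dx + dy * dy) ** 0.5
            if dist < best_d:
                best, best_d = d, dist
        if best is not None:
            assignments[t] = best
            free.discard(best)                    # each detection used once
    return assignments

# Two larvae moved slightly between frames:
links = associate([(0.0, 0.0), (10.0, 10.0)], [(9.0, 10.0), (1.0, 0.0)])
# links == {0: 1, 1: 0}: each track picks up the nearby detection
```

    The gate (`max_dist`) is what lets impurities and bubbles be rejected when they appear far from any existing track.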

  19. Small Moving Vehicle Detection in a Satellite Video of an Urban Area

    PubMed Central

    Yang, Tao; Wang, Xiwen; Yao, Bowei; Li, Jing; Zhang, Yanning; He, Zhannan; Duan, Wencheng

    2016-01-01

    Vehicle surveillance of a wide area allows us to learn much about daily activities and traffic. With the rapid development of remote sensing, satellite video has become an important data source for vehicle detection, providing a broader field of surveillance. Existing work generally focuses on aerial video with moderately sized objects, based on feature extraction. However, the moving vehicles in satellite video imagery range from just a few pixels to dozens of pixels and exhibit low contrast with respect to the background, which makes it hard to obtain usable appearance or shape information. In this paper, we look into the problem of moving vehicle detection in satellite imagery. To the best of our knowledge, this is the first work to deal with moving vehicle detection from satellite videos. Our approach consists of two stages: first, through foreground motion segmentation and trajectory accumulation, a scene motion heat map is dynamically built. Following this, a novel saliency-based background model, which intensifies moving objects, is presented to segment the vehicles in the hot regions. Qualitative and quantitative experiments on sequences from a recent Skybox satellite video dataset demonstrate that our approach achieves a high detection rate and a low false-alarm rate simultaneously. PMID:27657091
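    The first stage can be sketched as follows. This is a hypothetical simplification: per-pixel frame differences are accumulated over a clip into a motion "heat map", and hot pixels mark the regions where the subsequent saliency-based segmentation would be focused. Frames here are tiny grayscale grids (lists of rows).

```python
# Accumulate inter-frame differences into a heat map, then threshold it.

def motion_heat_map(frames):
    h, w = len(frames[0]), len(frames[0][0])
    heat = [[0] * w for _ in range(h)]
    for prev, cur in zip(frames, frames[1:]):
        for y in range(h):
            for x in range(w):
                heat[y][x] += abs(cur[y][x] - prev[y][x])
    return heat

def hot_pixels(heat, threshold):
    return {(y, x) for y, row in enumerate(heat)
            for x, v in enumerate(row) if v >= threshold}

# A 3-frame clip where only pixel (0, 1) changes (a small moving vehicle):
clip = [[[0, 0], [0, 0]],
        [[0, 9], [0, 0]],
        [[0, 0], [0, 0]]]
hot = hot_pixels(motion_heat_map(clip), threshold=10)
# hot == {(0, 1)}: accumulated change 9 + 9 = 18 crosses the threshold
```

    Accumulation over time is what separates a few-pixel moving vehicle from static low-contrast clutter, which never builds up heat.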

  20. NASA's SDO Captures Mercury Transit Time-lapse

    NASA Image and Video Library

    2017-12-08

    Less than once per decade, Mercury passes between the Earth and the sun in a rare astronomical event known as a planetary transit. The 2016 Mercury transit occurred on May 9th, between roughly 7:12 a.m. and 2:42 p.m. EDT. The images in this video are from NASA's Solar Dynamics Observatory. Music: "Encompass" by Mark Petrie. For more info on the Mercury transit go to: www.nasa.gov/transit This video is public domain and may be downloaded at: svs.gsfc.nasa.gov/12235

  1. Brain activity and desire for internet video game play

    PubMed Central

    Han, Doug Hyun; Bolo, Nicolas; Daniels, Melissa A.; Arenella, Lynn; Lyoo, In Kyoon; Renshaw, Perry F.

    2010-01-01

    Objective: Recent studies have suggested that the brain circuitry mediating cue-induced desire for video games is similar to that elicited by cues related to drugs and alcohol. We hypothesized that desire for internet video games during cue presentation would activate brain regions similar to those which have been linked with craving for drugs or pathological gambling. Methods: This study involved the acquisition of diagnostic MRI and fMRI data from 19 healthy male adults (ages 18–23 years) following training and a standardized 10-day period of game play with a specified novel internet video game, "War Rock" (K-network®). Using a videotape consisting of five contiguous 90-second segments of alternating resting, matched control, and video game-related scenes, desire to play the game was assessed using a seven-point visual analogue scale before and after presentation of the videotape. Results: In response to internet video game stimuli, compared to neutral control stimuli, significantly greater activity was identified in left inferior frontal gyrus, left parahippocampal gyrus, right and left parietal lobe, right and left thalamus, and right cerebellum (FDR <0.05, p<0.009243). Self-reported desire was positively correlated with the beta values of left inferior frontal gyrus, left parahippocampal gyrus, and right and left thalamus. Compared to general players, members of the more internet video game play (MIGP) cohort showed significantly greater activity in right medial frontal lobe, right and left frontal pre-central gyrus, right parietal post-central gyrus, right parahippocampal gyrus, and left parietal precuneus gyrus. Controlling for total game time, reported desire for the internet video game in the MIGP cohort was positively correlated with activation in right medial frontal lobe and right parahippocampal gyrus. Discussion: The present findings suggest that cue-induced activation to internet video game stimuli may be similar to that observed during cue-induced craving for drugs.

  2. Video Image Stabilization and Registration (VISAR) Software

    NASA Technical Reports Server (NTRS)

    1999-01-01

    Two scientists at NASA's Marshall Space Flight Center, atmospheric scientist Paul Meyer and solar physicist Dr. David Hathaway, developed promising new software, called Video Image Stabilization and Registration (VISAR), which is illustrated in this Quick Time movie. VISAR is a computer algorithm that stabilizes camera motion in the horizontal and vertical as well as rotation and zoom effects producing clearer images of moving objects, smoothes jagged edges, enhances still images, and reduces video noise or snow. It could steady images of ultrasounds, which are infamous for their grainy, blurred quality. VISAR could also have applications in law enforcement, medical, and meteorological imaging. The software can be used for defense application by improving reconnaissance video imagery made by military vehicles, aircraft, and ships traveling in harsh, rugged environments.

  4. Video Image Stabilization and Registration (VISAR) Software

    NASA Technical Reports Server (NTRS)

    1999-01-01

    Two scientists at NASA Marshall Space Flight Center, atmospheric scientist Paul Meyer (left) and solar physicist Dr. David Hathaway, have developed promising new software, called Video Image Stabilization and Registration (VISAR), that may help law enforcement agencies catch criminals by improving the quality of video recorded at crime scenes. VISAR stabilizes camera motion in the horizontal and vertical as well as rotation and zoom effects; produces clearer images of moving objects; smoothes jagged edges; enhances still images; and reduces video noise or snow. VISAR could also have applications in medical and meteorological imaging. It could steady images of ultrasounds, which are infamous for their grainy, blurred quality. It would be especially useful for tornado research, tracking whirling objects and helping to determine a tornado's wind speed. This image shows the two scientists reviewing an enhanced video image of a license plate taken from a moving automobile.

  5. Shuttle Upgrade Using 5-Segment Booster (FSB)

    NASA Technical Reports Server (NTRS)

    Sauvageau, Donald R.; Huppi, Hal D.; McCool, A. A. (Technical Monitor)

    2000-01-01

    In support of NASA's continuing effort to improve the overall safety and reliability of the Shuttle system, a 5-segment booster (FSB) has been identified as an approach to satisfy that objective. To assess its feasibility, NASA issued a feasibility study contract to evaluate the potential of a 5-segment booster to improve the overall capability of the Shuttle system, especially its potential to increase system reliability and safety. In order to effectively evaluate the feasibility of the 5-segment concept, a four-member contractor team was established under the direction of NASA Marshall Space Flight Center (MSFC). MSFC provided the overall program oversight and integration as well as program contractual management. The contractor team consisted of Thiokol, Boeing North American Huntington Beach (BNA), Lockheed Martin Michoud Space Systems (LMMSS), and United Space Alliance (USA), with their subcontractor bd Systems (Control Dynamics Division, Huntsville, AL). United Space Alliance included the former members of United Space Booster Incorporated (USBI), who managed the booster element portion of the current Shuttle solid rocket boosters. Thiokol was responsible for the overall integration and coordination of the contractor team across all of the booster elements, as well as for all of the motor modification evaluations. BNA was responsible for all systems integration analyses, generation of loads and environments, and performance and abort mode capabilities. LMMSS was responsible for evaluating the impacts of any booster changes on the external tank (ET), and for evaluating any external tank design changes necessary to accommodate the FSB. USA, including the former USBI contingent, was responsible for evaluating any modifications to facilities at the launch site as well as any booster component design modifications.

  6. Development and Evaluation of a Culturally Tailored Educational Video: Changing Breast Cancer-Related Behaviors in Chinese Women

    ERIC Educational Resources Information Center

    Wang, Judy H.; Liang, Wenchi; Schwartz, Marc D.; Lee, Marion M.; Kreling, Barbara; Mandelblatt, Jeanne S.

    2008-01-01

    This study developed and evaluated a culturally tailored video guided by the health belief model to improve Chinese women's low rate of mammography use. Focus-group discussions and an advisory board meeting guided the video development. A 17-min video, including a soap opera and physician-recommendation segment, was made in Chinese languages. A…

  7. Access NASA Satellite Global Precipitation Data Visualization on YouTube

    NASA Astrophysics Data System (ADS)

    Liu, Z.; Su, J.; Acker, J. G.; Huffman, G. J.; Vollmer, B.; Wei, J.; Meyer, D. J.

    2017-12-01

    Since the satellite era began, NASA has collected a large volume of Earth science observations for research and applications around the world. Satellite data at 12 NASA data centers can also be used for STEM activities such as studying disaster events, climate change, etc. However, accessing satellite data can be a daunting task for non-professional users such as teachers and students because of unfamiliarity with terminology, disciplines, data formats, data structures, computing resources, processing software, programming languages, etc. Over the years, many efforts have been made to improve satellite data access, but barriers still exist for non-professionals. In this presentation, we describe our latest activity, which uses the popular online video sharing web site YouTube to access visualizations of global precipitation datasets at the NASA Goddard Earth Sciences (GES) Data and Information Services Center (DISC). With YouTube, users can access and visualize a large volume of satellite data without needing to learn new software or download data. The dataset in this activity is the 3-hourly TRMM (Tropical Rainfall Measuring Mission) Multi-satellite Precipitation Analysis (TMPA). The video is built from over 50,000 data files collected from 1998 onwards, covering a zone between 50°N and 50°S. The YouTube video lasts 36 minutes for the entire dataset record (over 19 years). Since the time stamp is on each frame of the video, users can begin at any time by dragging the time progress bar. This precipitation animation allows viewing precipitation events and processes (e.g., hurricanes, fronts, atmospheric rivers) on a global scale. The next plan is to develop a similar animation for the GPM (Global Precipitation Measurement) Integrated Multi-satellitE Retrievals for GPM (IMERG). IMERG provides precipitation with near-global (60°N-S) coverage at a half-hourly time interval, showing more detail on precipitation processes and development, compared to the 3
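
Finding a date on the video's progress bar is a simple linear mapping from the data record to playback time. A minimal sketch, assuming a 1998-01-01 record start, the stated 36-minute running time, and a hypothetical record end date (the abstract does not give the exact end date or frame rate):

```python
from datetime import datetime

# Assumptions (not stated exactly in the abstract): record start/end dates.
RECORD_START = datetime(1998, 1, 1)
RECORD_END = datetime(2017, 1, 1)   # hypothetical end of the ~19-year record
VIDEO_SECONDS = 36 * 60             # 36-minute video, per the abstract

def video_position(when: datetime) -> float:
    """Playback offset (seconds) at which a given observation time appears,
    assuming the record is rendered at a uniform rate."""
    total = (RECORD_END - RECORD_START).total_seconds()
    elapsed = (when - RECORD_START).total_seconds()
    return VIDEO_SECONDS * elapsed / total

print(video_position(RECORD_START))  # 0.0 -- start of the record, start of video
```

Dragging the progress bar to `video_position(when)` then lands on the frame time-stamped near `when`.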

  8. Snaking Filament Eruption [video]

    NASA Image and Video Library

    2014-11-14

    A filament (which at one point had an eerie similarity to a snake) broke away from the sun and out into space (Nov. 1, 2014). The video covers just over three hours of activity. This kind of eruptive event is called a Hyder flare. These are filaments (elongated clouds of gases above the sun's surface) that erupt and cause a brightening at the sun's surface, although no active regions are in that area. It did thrust out a cloud of particles but not towards Earth. The images were taken in the 304 Angstrom wavelength of extreme UV light. Credit: NASA/Solar Dynamics Observatory. NASA Goddard Space Flight Center enables NASA’s mission through four scientific endeavors: Earth Science, Heliophysics, Solar System Exploration, and Astrophysics. Goddard plays a leading role in NASA’s accomplishments by contributing compelling scientific knowledge to advance the Agency’s mission.

  9. By the Dozen: NASA's James Webb Space Telescope Mirrors

    NASA Image and Video Library

    2017-12-08

    A view of the dozen (out of 18) flight mirror segments that make up the primary mirror on NASA's James Webb Space Telescope, installed at NASA's Goddard Space Flight Center. Credits: NASA/Chris Gunn More: Since December 2015, the team of scientists and engineers has been working tirelessly to install all the primary mirror segments onto the telescope structure in the large clean room at NASA's Goddard Space Flight Center in Greenbelt, Maryland. The twelfth mirror was installed on January 2, 2016. "This milestone signifies that all of the hexagonal shaped mirrors on the fixed central section of the telescope structure are installed and only the 3 mirrors on each wing are left for installation," said Lee Feinberg, NASA's Optical Telescope Element Manager at NASA Goddard. "The incredibly skilled and dedicated team assembling the telescope continues to find ways to do things faster and more efficiently." Each hexagonal-shaped segment measures just over 4.2 feet (1.3 meters) across and weighs approximately 88 pounds (40 kilograms). After being pieced together, the 18 primary mirror segments will work together as one large 21.3-foot (6.5-meter) mirror. The primary mirror will unfold and adjust to shape after launch. The mirrors are made of ultra-lightweight beryllium. The mirrors are placed on the telescope's backplane using a robotic arm, guided by engineers. The full installation is expected to be completed in a few months. The mirrors were built by Ball Aerospace & Technologies Corp., Boulder, Colorado. Ball is the principal subcontractor to Northrop Grumman for the optical technology and lightweight mirror system. The installation of the mirrors onto the telescope structure is performed by Harris Corporation of Rochester, New York. Harris Corporation leads integration and testing for the telescope. While the mirror assembly is a very significant milestone, there are many more steps involved in assembling the Webb telescope. The primary mirror and the

  10. By the Dozen: NASA's James Webb Space Telescope Mirrors

    NASA Image and Video Library

    2016-01-07

    Caption: One dozen (out of 18) flight mirror segments that make up the primary mirror on NASA's James Webb Space Telescope have been installed at NASA's Goddard Space Flight Center. Credits: NASA/Chris Gunn More: Since December 2015, the team of scientists and engineers have been working tirelessly to install all the primary mirror segments onto the telescope structure in the large clean room at NASA's Goddard Space Flight Center in Greenbelt, Maryland. The twelfth mirror was installed on January 2, 2016. "This milestone signifies that all of the hexagonal shaped mirrors on the fixed central section of the telescope structure are installed and only the 3 mirrors on each wing are left for installation," said Lee Feinberg, NASA's Optical Telescope Element Manager at NASA Goddard. "The incredibly skilled and dedicated team assembling the telescope continues to find ways to do things faster and more efficiently." Each hexagonal-shaped segment measures just over 4.2 feet (1.3 meters) across and weighs approximately 88 pounds (40 kilograms). After being pieced together, the 18 primary mirror segments will work together as one large 21.3-foot (6.5-meter) mirror. The primary mirror will unfold and adjust to shape after launch. The mirrors are made of ultra-lightweight beryllium. The mirrors are placed on the telescope's backplane using a robotic arm, guided by engineers. The full installation is expected to be completed in a few months. The mirrors were built by Ball Aerospace & Technologies Corp., Boulder, Colorado. Ball is the principal subcontractor to Northrop Grumman for the optical technology and lightweight mirror system. The installation of the mirrors onto the telescope structure is performed by Harris Corporation of Rochester, New York. Harris Corporation leads integration and testing for the telescope. While the mirror assembly is a very significant milestone, there are many more steps involved in assembling the Webb telescope. 
The primary mirror and the tennis

  11. Workload assessment of surgeons: correlation between NASA TLX and blinks.

    PubMed

    Zheng, Bin; Jiang, Xianta; Tien, Geoffrey; Meneghetti, Adam; Panton, O Neely M; Atkins, M Stella

    2012-10-01

    Blinks are known as an indicator of visual attention and mental stress. In this study, surgeons' mental workload was evaluated utilizing a paper assessment instrument (National Aeronautics and Space Administration Task Load Index, NASA TLX) and by examining their eye blinks. Correlation between these two assessments was reported. Surgeons' eye motions were video-recorded using a head-mounted eye-tracker while the surgeons performed a laparoscopic procedure on a virtual reality trainer. Blink frequency and duration were computed using computer vision technology. The level of workload experienced during the procedure was reported by surgeons using the NASA TLX. A total of 42 valid videos were recorded from 23 surgeons. After blinks were computed, videos were divided into two groups based on the blink frequency: infrequent group (≤ 6 blinks/min) and frequent group (more than 6 blinks/min). Surgical performance (measured by task time and trajectories of tool tips) was not significantly different between these two groups, but NASA TLX scores were significantly different. Surgeons who blinked infrequently reported a higher level of frustration (46 vs. 34, P = 0.047) and higher overall level of workload (57 vs. 47, P = 0.045) than those who blinked more frequently. The correlation coefficients (Pearson test) between NASA TLX and the blink frequency and duration were -0.17 and 0.446. Reduction of blink frequency and shorter blink duration matched the increasing level of mental workload reported by surgeons. The value of using eye-tracking technology for assessment of surgeon mental workload was shown.
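
The two computations the abstract reports, the ≤6 vs. >6 blinks/min split and the Pearson correlation with NASA TLX scores, can be sketched as follows. The numeric values below are fabricated stand-ins purely to demonstrate the arithmetic; they are not the study's data.

```python
import statistics

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Illustrative (fabricated) values only -- not the study's measurements.
blink_rate = [3, 5, 8, 12, 4, 9]       # blinks/min per recording
tlx_score  = [60, 55, 48, 40, 58, 45]  # overall NASA TLX workload rating

# The study's grouping: infrequent (<= 6 blinks/min) vs. frequent (> 6)
infrequent = [t for b, t in zip(blink_rate, tlx_score) if b <= 6]
frequent   = [t for b, t in zip(blink_rate, tlx_score) if b > 6]

r = pearson(blink_rate, tlx_score)  # strongly negative for this toy data,
print(round(r, 3))                  # matching the sign reported in the study
```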

  12. Jessica Watkins/NASA 2017 Astronaut Candidate

    NASA Image and Video Library

    2017-08-22

    The ranks of America’s Astronaut Corps grew by a dozen today! The twelve new NASA Astronaut Candidates have reported for duty at the Johnson Space Center in Houston to begin two years of training. Before they got to Houston we video-chatted with them all; Caltech postdoctoral fellow Jessica Watkins talks about how she became interested in science, technology, engineering and math, why she wanted to become an astronaut and where she was when she got the news that she’d achieved her dream. Learn more about the new space heroes right here: nasa.gov/2017astronauts

  13. Warren Hoburg/NASA 2017 Astronaut Candidate

    NASA Image and Video Library

    2017-08-22

    The ranks of America’s Astronaut Corps grew by a dozen today! The twelve new NASA Astronaut Candidates have reported for duty at the Johnson Space Center in Houston to begin two years of training. Before they got to Houston we video-chatted with them all; MIT assistant professor Warren Hoburg talks about how he became interested in science, technology, engineering and math, why he wanted to become an astronaut and where he was when he got the news that he’d achieved his dream. Learn more about the new space heroes right here: nasa.gov/2017astronauts

  14. Frank Rubio/NASA 2017 Astronaut Candidate

    NASA Image and Video Library

    2017-08-22

    The ranks of America’s Astronaut Corps grew by a dozen today! The twelve new NASA Astronaut Candidates have reported for duty at the Johnson Space Center in Houston to begin two years of training. Before they got to Houston we video-chatted with them all; U.S. Army Major Frank Rubio talks about how he became interested in science, technology, engineering and math, why he wanted to become an astronaut and where he was when he got the news that he’d achieved his dream. Learn more about the new space heroes right here: nasa.gov/2017astronauts

  15. Jasmin Moghbeli/NASA 2017 Astronaut Candidate

    NASA Image and Video Library

    2017-08-22

    The ranks of America’s Astronaut Corps grew by a dozen today! The twelve new NASA Astronaut Candidates have reported for duty at the Johnson Space Center in Houston to begin two years of training. Before they got to Houston we video-chatted with them all; U.S. Marine Corps Major Jasmin Moghbeli talks about how she became interested in science, technology, engineering and math, why she wanted to become an astronaut and where she was when she got the news that she’d achieved her dream. Learn more about the new space heroes right here: nasa.gov/2017astronauts

  16. Robb Kulin/NASA 2017 Astronaut Candidate

    NASA Image and Video Library

    2017-08-22

    The ranks of America’s Astronaut Corps grew by a dozen today! The twelve new NASA Astronaut Candidates have reported for duty at the Johnson Space Center in Houston to begin two years of training. Before they got to Houston we video-chatted with them all; SpaceX senior manager for flight reliability Robb Kulin talks about how he became interested in science, technology, engineering and math, why he wanted to become an astronaut and where he was when he got the news that he’d achieved his dream. Learn more about the new space heroes right here: nasa.gov/2017astronauts

  17. Zena Cardman/NASA 2017 Astronaut Candidate

    NASA Image and Video Library

    2017-08-21

    The ranks of America’s Astronaut Corps grew by a dozen today! The twelve new NASA Astronaut Candidates have reported for duty at the Johnson Space Center in Houston to begin two years of training. Before they got to Houston we video-chatted with them all; National Science Foundation graduate research fellow Zena Cardman talks about how she became interested in science, technology, engineering and math, why she wanted to become an astronaut and where she was when she got the news that she’d achieved her dream. Learn more about the new space heroes right here: nasa.gov/2017astronauts

  18. Raja Chari/NASA 2017 Astronaut Candidate

    NASA Image and Video Library

    2017-08-21

    The ranks of America’s Astronaut Corps grew by a dozen today! The twelve new NASA Astronaut Candidates have reported for duty at the Johnson Space Center in Houston to begin two years of training. Before they got to Houston we video-chatted with them all; U.S. Air Force Lieutenant Colonel Raja Chari talks about how he became interested in science, technology, engineering and math, why he wanted to become an astronaut and where he was when he got the news that he’d achieved his dream. Learn more about the new space heroes right here: nasa.gov/2017astronauts

  19. Jonny Kim/NASA 2017 Astronaut Candidate

    NASA Image and Video Library

    2017-08-22

    The ranks of America’s Astronaut Corps grew by a dozen today! The twelve new NASA Astronaut Candidates have reported for duty at the Johnson Space Center in Houston to begin two years of training. Before they got to Houston we video-chatted with them all; Dr. Jonny Kim talks about how he became interested in science, technology, engineering and math, why he wanted to become an astronaut and where he was when he got the news that he’d achieved his dream. Learn more about the new space heroes right here: nasa.gov/2017astronauts

  20. Robotic Arm Comprising Two Bending Segments

    NASA Technical Reports Server (NTRS)

    Mehling, Joshua S.; Diftler, Myron A.; Ambrose, Robert O.; Chu, Mars W.; Valvo, Michael C.

    2010-01-01

    The figure shows several aspects of an experimental robotic manipulator that includes a housing from which protrudes a tendril- or tentacle-like arm 1 cm thick and 1 m long. The arm consists of two collinear segments, each of which can be bent independently of the other, and the two segments can be bent simultaneously in different planes. The arm can be retracted to a minimum length or extended by any desired amount up to its full length. The arm can also be made to rotate about its own longitudinal axis. Some prior experimental robotic manipulators include single-segment bendable arms. Those arms are thicker and shorter than the present one. The present robotic manipulator serves as a prototype of future manipulators that, by virtue of the slenderness and multiple-bending capability of their arms, are expected to have sufficient dexterity for operation within spaces that would otherwise be inaccessible. Such manipulators could be especially well suited as means of minimally invasive inspection during construction and maintenance activities. Each of the two collinear bending arm segments is further subdivided into a series of collinear extension- and compression-type helical springs joined by threaded links. The extension springs occupy the majority of the length of the arm and engage passively in bending. The compression springs are used for actively controlled bending. Bending is effected by means of pairs of antagonistic tendons in the form of Spectra gel-spun polymer lines that are attached at specific threaded links and run the entire length of the arm inside the spring helix from the attachment links to motor-driven pulleys inside the housing. Two pairs of tendons, mounted in orthogonal planes that intersect along the longitudinal axis, are used to effect bending of each segment. The tendons for actuating the distal bending segment are in planes offset by an angle of 45° from those of the proximal bending segment: This configuration makes it possible to

  1. The ASAC Flight Segment and Network Cost Models

    NASA Technical Reports Server (NTRS)

    Kaplan, Bruce J.; Lee, David A.; Retina, Nusrat; Wingrove, Earl R., III; Malone, Brett; Hall, Stephen G.; Houser, Scott A.

    1997-01-01

    To assist NASA in identifying research areas with the greatest potential for improving the air transportation system, two models were developed as part of its Aviation System Analysis Capability (ASAC). The ASAC Flight Segment Cost Model (FSCM) is used to predict aircraft trajectories, resource consumption, and variable operating costs for one or more flight segments. The Network Cost Model can either summarize the costs for a network of flight segments processed by the FSCM or be used to independently estimate the variable operating costs of flying a fleet of equipment given the number of departures and average flight stage lengths.
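
The roll-up the abstract describes, segment-level variable costs from the FSCM aggregated into a network total, might look like the sketch below. The cost categories, field names, and numbers are illustrative assumptions, not the ASAC models' actual interfaces or data.

```python
from dataclasses import dataclass

@dataclass
class SegmentCost:
    """Hypothetical per-segment output in the spirit of the FSCM."""
    origin: str
    dest: str
    fuel_cost: float
    crew_cost: float
    maintenance_cost: float

    @property
    def variable_cost(self) -> float:
        # Variable operating cost for this one flight segment
        return self.fuel_cost + self.crew_cost + self.maintenance_cost

def network_cost(segments) -> float:
    """Network-level summary: sum of segment-level variable costs."""
    return sum(s.variable_cost for s in segments)

legs = [
    SegmentCost("IAD", "ORD", 5200.0, 1800.0, 900.0),
    SegmentCost("ORD", "DEN", 6100.0, 2100.0, 1000.0),
]
print(network_cost(legs))  # 17100.0
```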

  2. Five-Segment Solid Rocket Motor Development Status

    NASA Technical Reports Server (NTRS)

    Priskos, Alex S.

    2012-01-01

    In support of the National Aeronautics and Space Administration (NASA), Marshall Space Flight Center (MSFC) is developing a new, more powerful solid rocket motor for space launch applications. To minimize technical risks and development costs, NASA chose to use the Space Shuttle's solid rocket boosters as a starting point in the design and development. The new five-segment motor provides a greater total impulse with improved, more environmentally friendly materials. To meet the mass and trajectory requirements, the motor incorporates substantial design and system upgrades, including a new propellant grain geometry with an additional segment, a new internal insulation system, and a state-of-the-art avionics system. Significant progress has been made in the design, development, and testing of the propulsion and avionics systems. To date, three development motors (one each in 2009, 2010, and 2011) have been successfully static tested by NASA and ATK's Launch Systems Group in Promontory, UT. These development motor tests have validated much of the engineering, with substantial data collected, analyzed, and utilized to improve the design. This paper provides an overview of the development progress on the first stage propulsion system.

  3. Demonstration of a Segment Alignment Maintenance System on a Seven-Segment Sub-Array of the Hobby-Eberly Telescope

    NASA Technical Reports Server (NTRS)

    Rakoczy, John; Whitaker, Ann F. (Technical Monitor)

    2001-01-01

    NASA's Marshall Space Flight Center, in collaboration with Blue Line Engineering of Colorado Springs, Colorado, is developing a Segment Alignment Maintenance System (SAMS) for McDonald Observatory's Hobby-Eberly Telescope (HET). The SAMS shall sense motions of the 91 primary mirror segments and send corrections to HET's primary mirror controller as the mirror segments misalign due to thermo-elastic deformations of the mirror support structure. The SAMS consists of inductive edge sensors supplemented by inclinometers for global radius of curvature sensing. All measurements are sent to the SAMS computer where mirror motion corrections are calculated. In October 2000, a prototype SAMS was installed on a seven-segment cluster of the HET. Subsequent testing has shown that the SAMS concept and architecture are a viable practical approach to maintaining HET's primary mirror figure, or the figure of any large segmented telescope. This paper gives a functional description of the SAMS sub-array components and presents test data to characterize the performance of the sub-array SAMS.

  4. NASA's mobile satellite development program

    NASA Technical Reports Server (NTRS)

    Rafferty, William; Dessouky, Khaled; Sue, Miles

    1988-01-01

    A Mobile Satellite System (MSS) will provide data and voice communications over a vast geographical area to a large population of mobile users. A technical overview is given of the extensive research and development studies performed under NASA's mobile satellite program (MSAT-X) in support of the introduction of a U.S. MSS. The critical technologies necessary to enable such a system are emphasized: vehicle antennas, modulation and coding, speech coders, networking, and propagation characterization. Also proposed are first- and future-generation MSS architectures based upon realized ground segment equipment and advanced space segment studies.

  5. Distance Mentoring in the NASA/Kennedy Space Center Virtual Science Mentor Program.

    ERIC Educational Resources Information Center

    Buckingham, Gregg

    This study examines the results of a three year video mentoring program, the NASA Virtual Science Mentor (VSM) program, which paired 56 NASA mentor engineers and scientists with 56 middle school science teachers in seven Southwest Florida counties. The study sought to determine the impact on students, mentors, and teachers participating in the…

  6. Evaluation Framework for NASA's Educational Outreach Programs

    NASA Technical Reports Server (NTRS)

    Berg, Rick; Booker, Angela; Linde, Charlotte; Preston, Connie

    1999-01-01

    The objective of the proposed work is to develop an evaluation framework for NASA's educational outreach efforts. We focus on public (rather than technical or scientific) dissemination efforts, specifically on Internet-based outreach sites for children. The outcome of this work is to propose both methods and criteria for evaluation, which would enable NASA to do a more analytic evaluation of its outreach efforts. The proposed framework is based on IRL's ethnographic and video-based observational methods, which allow us to analyze how these sites are actually used.

  7. NASA in Silicon Valley Live - Episode 01 - We're Going Back to the Moon!

    NASA Image and Video Library

    2018-01-12

    We’ve launched a live video show on Twitch called NASA in Silicon Valley Live! This is our premiere episode streamed on Jan. 12. In it, we talk about going back to the Moon with NASA rock stars Jim Green and Greg Schmidt.

  8. Digital Audio/Video for Computer- and Web-Based Instruction for Training Rural Special Education Personnel.

    ERIC Educational Resources Information Center

    Ludlow, Barbara L.; Foshay, John B.; Duff, Michael C.

    Video presentations of teaching episodes in home, school, and community settings and audio recordings of parents' and professionals' views can be important adjuncts to personnel preparation in special education. This paper describes instructional applications of digital media and outlines steps in producing audio and video segments. Digital audio…

  9. Geographic Video 3d Data Model And Retrieval

    NASA Astrophysics Data System (ADS)

    Han, Z.; Cui, C.; Kong, Y.; Wu, H.

    2014-04-01

    Geographic video includes both spatial and temporal geographic features acquired through ground-based or non-ground-based cameras. With the popularity of video capture devices such as smartphones, the volume of user-generated geographic video clips has grown significantly, and this growth is quickly accelerating. Such a massive and increasing volume poses a major challenge to efficient video management and query. Most of today's video management and query techniques are based on signal-level content extraction and are not able to fully utilize the geographic information of the videos. This paper introduces a geographic video 3D data model based on spatial information. The main idea of the model is to utilize the location, trajectory, and azimuth information acquired by sensors such as GPS receivers and 3D electronic compasses in conjunction with video contents. The raw spatial information is synthesized into point, line, polygon, and solid geometries according to camcorder parameters such as focal length and angle of view. For the video segment and video frame, we defined three categories of geometry objects using the geometry model of the OGC Simple Features Specification for SQL. Video can then be queried by computing the spatial relations between query objects and these geometry objects, such as VFLocation, VSTrajectory, VSFOView, and VFFovCone. We designed the query methods using the structured query language (SQL) in detail. The experiments indicate that the model is a multi-objective, integrated, loosely coupled, flexible, and extensible data model for the management of geographic stereo video.
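
The kind of spatial predicate such a model supports, e.g., "which frames could see this point?", comes down to a field-of-view test like the one behind a VFFovCone-style object. A minimal 2-D sketch, assuming camera location, azimuth, angle of view, and visible range as the inputs (the paper's model works on 3-D OGC geometries via SQL; this function and its parameters are illustrative):

```python
import math

def in_fov(cam_xy, azimuth_deg, angle_of_view_deg, max_range, point_xy):
    """2-D test: does point_xy fall inside the camera's field-of-view wedge?

    cam_xy: camera location; azimuth_deg: viewing direction (0 = +y / north,
    clockwise); angle_of_view_deg: full horizontal angle of view;
    max_range: assumed visible depth of the cone.
    """
    dx = point_xy[0] - cam_xy[0]
    dy = point_xy[1] - cam_xy[1]
    dist = math.hypot(dx, dy)
    if dist == 0 or dist > max_range:
        return dist == 0  # camera's own position counts as visible
    bearing = math.degrees(math.atan2(dx, dy)) % 360   # 0 = north, clockwise
    diff = (bearing - azimuth_deg + 180) % 360 - 180   # signed angular offset
    return abs(diff) <= angle_of_view_deg / 2

# A frame looking due north with a 60-degree angle of view sees this point:
print(in_fov((0, 0), 0, 60, 100, (10, 50)))  # True
```

A spatial database would evaluate the same predicate as a polygon-intersection query against the stored FOV geometry rather than recomputing it per point.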

  10. Contact-free determination of human body segment parameters by means of videometric image processing of an anthropomorphic body model

    NASA Astrophysics Data System (ADS)

    Hatze, Herbert; Baca, Arnold

    1993-01-01

    The development of noninvasive techniques for the determination of biomechanical body segment parameters (volumes, masses, the three principal moments of inertia, the three local coordinates of the segmental mass centers, etc.) receives increasing attention from the medical sciences (e.g., orthopaedic gait analysis), bioengineering, sport biomechanics, and the various space programs. In the present paper, a novel method is presented for determining body segment parameters rapidly and accurately. It is based on the video-image processing of four different body configurations and a finite mass-element human body model. The four video images of the subject in question are recorded against a black background, thus permitting the application of shape recognition procedures incorporating edge detection and calibration algorithms. In this way, a total of 181 object space dimensions of the subject's body segments can be reconstructed and used as anthropometric input data for the mathematical finite mass-element body model. The latter comprises 17 segments (abdomino-thoracic, head-neck, shoulders, upper arms, forearms, hands, abdomino-pelvic, thighs, lower legs, feet) and enables the user to compute all the required segment parameters for each of the 17 segments by means of the associated computer program. The hardware requirements are an IBM-compatible PC (1 MB memory) operating under MS-DOS or PC-DOS (Version 3.1 onwards) and incorporating a VGA board with a feature connector for connecting it to a super video windows framegrabber board, for which a 16-bit large slot must be available. In addition, a VGA monitor (50-70 Hz, horizontal scan rate at least 31.5 kHz), a common video camera and recorder, and a simple rectangular calibration frame are required. The advantage of the new method lies in its ease of application, its comparatively high accuracy, and in the rapid availability of the body segment parameters, which is particularly useful in clinical practice.

  11. Monitoring Change Through Hierarchical Segmentation of Remotely Sensed Image Data

    NASA Technical Reports Server (NTRS)

    Tilton, James C.; Lawrence, William T.

    2005-01-01

    NASA's Goddard Space Flight Center has developed a fast and effective method for generating image segmentation hierarchies. These segmentation hierarchies organize image data in a manner that makes their information content more accessible for analysis. Image segmentation enables analysis through the examination of image regions rather than individual image pixels. In addition, the segmentation hierarchy provides additional analysis clues through the tracing of the behavior of image region characteristics at several levels of segmentation detail. The potential for extracting the information content from imagery data based on segmentation hierarchies has not been fully explored for the benefit of the Earth and space science communities. This paper explores the potential of exploiting these segmentation hierarchies for the analysis of multi-date data sets, and for the particular application of change monitoring.
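
The region-merging idea behind a segmentation hierarchy can be illustrated with a toy 1-D example: repeatedly fuse the most similar adjacent regions, recording the region partition at each level so coarser levels can be traced back to finer ones. This is a generic best-merge sketch of the concept, not the specific Goddard algorithm (which also handles, e.g., merging of spatially non-adjacent regions).

```python
def segmentation_hierarchy(values):
    """Toy 1-D hierarchical segmentation: start with one region per pixel,
    then repeatedly merge the adjacent pair with the closest mean values,
    recording every level of the hierarchy (finest first, coarsest last)."""
    regions = [[v] for v in values]               # one region per pixel
    levels = [[list(r) for r in regions]]
    while len(regions) > 1:
        means = [sum(r) / len(r) for r in regions]
        # adjacent pair whose mean values differ least
        i = min(range(len(regions) - 1),
                key=lambda k: abs(means[k] - means[k + 1]))
        regions[i:i + 2] = [regions[i] + regions[i + 1]]
        levels.append([list(r) for r in regions])
    return levels

levels = segmentation_hierarchy([1, 1, 2, 8, 9])
print(len(levels[-1]))  # 1: the coarsest level is a single region
```

Analysis then proceeds by examining how a region's characteristics (here, just its mean) change from level to level, rather than by inspecting individual pixels.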

  12. A soft actuation system for segmented reflector articulation and isolation

    NASA Technical Reports Server (NTRS)

    Agronin, Michael L.; Jandura, Louise

    1990-01-01

    Segmented reflectors have been proposed for space-based applications such as optical communication and large-diameter telescopes. An actuation system for mirrors in a space-based segmented mirror array was developed as part of NASA's Precision Segmented Reflector program. The actuation system, called the Articulated Panel Module (APM), provides three degrees of freedom of mirror articulation, gives isolation from structural motion, and simplifies space assembly of the mirrors to the reflector backup truss. A breadboard of the APM was built and is described.

  13. Intelligent video storage of visual evidences on site in fast deployment

    NASA Astrophysics Data System (ADS)

    Desurmont, Xavier; Bastide, Arnaud; Delaigle, Jean-Francois

    2004-07-01

    In this article we present a generic, flexible, scalable, and robust approach to an intelligent real-time forensic visual system. The proposed implementation can be rapidly deployed and requires minimal logistic support, as it embeds low-complexity devices (PCs and cameras) that communicate over a wireless network. The goal of these advanced tools is to provide intelligent video storage of potential video evidence for fast intervention during deployment around a hazardous sector after a terrorist attack, a disaster, or an air crash, or before an attempted one. Advanced video analysis tools, such as segmentation and tracking, are provided to support intelligent storage and annotation.

  14. NASA Tech Briefs, April 1995. Volume 19, No. 4

    NASA Technical Reports Server (NTRS)

    1995-01-01

    This issue of NASA Tech Briefs has a special focus section on video and imaging, a feature on the NASA invention of the year, and a resource report on the Dryden Flight Research Center. The issue also contains articles on electronic components and circuits, electronic systems, physical sciences, materials, computer programs, mechanics, machinery, manufacturing/fabrication, mathematics and information sciences, and life sciences. In addition to the standard articles, this issue contains a supplement entitled "Laser Tech Briefs," which features an article on the National Ignition Facility and other articles on the use of lasers.

  15. Tight Loops Close-Up [video]

    NASA Image and Video Library

    2014-05-19

    NASA's Solar Dynamics Observatory (SDO) zoomed in almost to its maximum level to watch tight, bright loops and much longer, softer loops shift and sway above an active region on the sun, while a darker blob of plasma in their midst was pulled about every which way (May 13-14, 2014). The video clip covers just over a day beginning at 14:19 UT on May 13. The frames were taken in the 171-angstrom wavelength of extreme ultraviolet light, but colorized red instead of its usual bronze tone. This type of dynamic activity continues almost non-stop on the sun as opposing magnetic forces tangle with each other. Credit: NASA/Solar Dynamics Observatory.

  16. Satellite Video Shows Movement of Major U.S. Winter Storm

    NASA Image and Video Library

    2014-02-12

    A new NASA video of NOAA's GOES satellite imagery shows three days of movement of the massive winter storm that stretches from the southern U.S. to the northeast. Visible and infrared imagery from NOAA's GOES-East or GOES-13 satellite from Feb. 10 at 1815 UTC/1:15 p.m. EST to Feb. 12 at 1845 UTC/1:45 p.m. EST were compiled into a video made by NASA/NOAA's GOES Project at NASA's Goddard Space Flight Center in Greenbelt, Md. In the video, viewers can see the development and movement of the clouds associated with the progression of the frontal system and related low pressure areas that make up the massive storm. The video also shows the snow-covered ground over the Great Lakes region and Ohio Valley that stretches to northern New England. The clouds and fallen snow data from NOAA's GOES-East satellite were overlaid on a true-color image of land and ocean created by data from the Moderate Resolution Imaging Spectroradiometer or MODIS instrument that flies aboard NASA's Aqua and Terra satellites. On February 12 at 10 a.m. EST, NOAA's National Weather Service or NWS continued to issue watches and warnings from Texas to New England. Specifically, NWS noted that Winter Storm Warnings and Winter Weather Advisories were in effect from eastern Texas eastward across the interior section of southeastern U.S. states and across much of the eastern seaboard including the Appalachians. Winter storm watches were in effect for portions of northern New England as well as along the western slopes of the northern and central Appalachians. For updates on local forecasts, watches and warnings, visit NOAA's www.weather.gov webpage. NOAA's Weather Prediction Center or WPC noted the storm is expected to bring "freezing rain spreading into the Carolinas, significant snow accumulations are expected in the interior Mid-Atlantic states tonight into Thursday and ice storm warnings and freezing rain advisories are in effect across much of central Georgia." GOES satellites provide the kind of continuous

  17. Space Network Ground Segment Sustainment (SGSS) Project: Developing a COTS-Intensive Ground System

    NASA Technical Reports Server (NTRS)

    Saylor, Richard; Esker, Linda; Herman, Frank; Jacobsohn, Jeremy; Saylor, Rick; Hoffman, Constance

    2013-01-01

    The purpose of the Space Network Ground Segment Sustainment (SGSS) Project is to implement a new, modern ground segment that will enable the NASA Space Network (SN) to deliver high-quality services to the SN community into the future. The key SGSS goals are to: (1) re-engineer the SN ground segment, and (2) enable cost efficiencies in the operability and maintainability of the broader SN.

  18. NASA's Space Launch System Takes Shape

    NASA Technical Reports Server (NTRS)

    Askins, Bruce R.; Robinson, Kimberly F.

    2017-01-01

    Significant hardware and software for NASA's Space Launch System (SLS) began rolling off assembly lines in 2016, setting the stage for critical testing in 2017 and the launch of new capability for deep-space human exploration. (Figure 1) At NASA's Michoud Assembly Facility (MAF) near New Orleans, LA, full-scale test articles are being joined by flight hardware. Structural test stands are nearing completion at NASA's Marshall Space Flight Center (MSFC), Huntsville, AL. An SLS booster solid rocket motor underwent test firing, while flight motor segments were cast. An RS-25 and Engine Control Unit (ECU) for early SLS flights were tested at NASA's Stennis Space Center (SSC). The upper stage for the first flight was completed, and NASA completed Preliminary Design Review (PDR) for a new, powerful upper stage. The pace of production and testing is expected to increase in 2017. This paper will discuss the technical and programmatic highlights and challenges of 2016 and look ahead to plans for 2017.

  19. SLS Pathfinder Segments Car Train Departure

    NASA Image and Video Library

    2016-03-02

    An Iowa Northern locomotive, contracted by Goodloe Transportation of Chicago, departs from the Rotation, Processing and Surge Facility (RPSF) at NASA’s Kennedy Space Center in Florida, with two containers on railcars for transport to the NASA Jay Jay railroad yard. The containers held two pathfinders, or test versions, of solid rocket booster segments for NASA’s Space Launch System rocket that were delivered to the RPSF. Inside the RPSF, the Ground Systems Development and Operations Program and Jacobs Engineering, on the Test and Operations Support Contract, will conduct a series of lifts, moves and stacking operations using the booster segments, which are inert, to prepare for Exploration Mission-1, deep-space missions and the journey to Mars. The pathfinder booster segments are from Orbital ATK in Utah.

  20. Hierarchical vs non-hierarchical audio indexation and classification for video genres

    NASA Astrophysics Data System (ADS)

    Dammak, Nouha; BenAyed, Yassine

    2018-04-01

    In this paper, Support Vector Machines (SVMs) are used for segmenting and indexing video genres based only on audio features extracted at the block level, which has the prominent asset of capturing local temporal information. The main contribution of our study is to show the significant effect on classification accuracy of using a hierarchical categorization structure based on the Mel Frequency Cepstral Coefficients (MFCC) audio descriptor. The classification covers three common video genres: sports videos, music clips and news scenes. The sub-classification may divide each genre into several multi-speaker and multi-dialect sub-genres. The validation of this approach was carried out on over 360 minutes of video, yielding a classification accuracy of over 99%.
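    The hierarchical-versus-flat comparison described above can be sketched in a few lines. The real block-level MFCC descriptors, dataset, and genre labels are not available here, so synthetic feature vectors and an assumed two-level split (music vs. speech-like, then sports vs. news) stand in, using scikit-learn:

    ```python
    import numpy as np
    from sklearn.svm import SVC

    # Synthetic stand-ins for block-level MFCC feature vectors; the real
    # descriptors, dataset, and genre labels from the paper are not available here.
    rng = np.random.default_rng(0)

    def make_blocks(center, n=60, dim=13):
        return center + rng.normal(scale=0.5, size=(n, dim))

    centers = {"sports": 0.0, "music": 2.0, "news": 4.0}
    X = np.vstack([make_blocks(c) for c in centers.values()])
    y = np.array([g for g in centers for _ in range(60)])

    # Flat classification: one SVM over all three genres.
    flat = SVC(kernel="rbf").fit(X, y)

    # Hierarchical classification: first separate music from the speech-like
    # genres, then discriminate sports vs. news within the speech branch.
    speechlike = np.isin(y, ["sports", "news"])
    top = SVC(kernel="rbf").fit(X, np.where(speechlike, "speech", "music"))
    sub = SVC(kernel="rbf").fit(X[speechlike], y[speechlike])

    def predict_hier(x):
        x = np.atleast_2d(x)
        branch = top.predict(x)
        return np.where(branch == "music", "music", sub.predict(x))
    ```

    On well-separated synthetic data both schemes do equally well; the paper's point is that on real audio the hierarchical decomposition lets each SVM solve an easier sub-problem.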

  1. Video sensor architecture for surveillance applications.

    PubMed

    Sánchez, Jordi; Benet, Ginés; Simó, José E

    2012-01-01

    This paper introduces a flexible hardware and software architecture for a smart video sensor. This sensor has been applied in a video surveillance application where some of these video sensors are deployed, constituting the sensory nodes of a distributed surveillance system. In this system, a video sensor node processes images locally in order to extract objects of interest, and classify them. The sensor node reports the processing results to other nodes in the cloud (a user or higher level software) in the form of an XML description. The hardware architecture of each sensor node has been developed using two DSP processors and an FPGA that controls, in a flexible way, the interconnection among processors and the image data flow. The developed node software is based on pluggable components and runs on a provided execution run-time. Some basic and application-specific software components have been developed, in particular: acquisition, segmentation, labeling, tracking, classification and feature extraction. Preliminary results demonstrate that the system can achieve up to 7.5 frames per second in the worst case, and the true positive rates in the classification of objects are better than 80%.

  2. Video Sensor Architecture for Surveillance Applications

    PubMed Central

    Sánchez, Jordi; Benet, Ginés; Simó, José E.

    2012-01-01

    This paper introduces a flexible hardware and software architecture for a smart video sensor. This sensor has been applied in a video surveillance application where some of these video sensors are deployed, constituting the sensory nodes of a distributed surveillance system. In this system, a video sensor node processes images locally in order to extract objects of interest, and classify them. The sensor node reports the processing results to other nodes in the cloud (a user or higher level software) in the form of an XML description. The hardware architecture of each sensor node has been developed using two DSP processors and an FPGA that controls, in a flexible way, the interconnection among processors and the image data flow. The developed node software is based on pluggable components and runs on a provided execution run-time. Some basic and application-specific software components have been developed, in particular: acquisition, segmentation, labeling, tracking, classification and feature extraction. Preliminary results demonstrate that the system can achieve up to 7.5 frames per second in the worst case, and the true positive rates in the classification of objects are better than 80%. PMID:22438723

  3. A spatiotemporal decomposition strategy for personal home video management

    NASA Astrophysics Data System (ADS)

    Yi, Haoran; Kozintsev, Igor; Polito, Marzia; Wu, Yi; Bouguet, Jean-Yves; Nefian, Ara; Dulong, Carole

    2007-01-01

    With the advent and proliferation of low cost and high performance digital video recorder devices, an increasing number of personal home video clips are recorded and stored by consumers. Compared to image data, video data is larger in size and richer in multimedia content. Efficient access to video content is expected to be more challenging than image mining. Previously, we have developed a content-based image retrieval system and the benchmarking framework for personal images. In this paper, we extend our personal image retrieval system to include personal home video clips. A possible initial solution to video mining is to represent video clips by a set of key frames extracted from them, thus converting the problem into an image search one. Here we report that a careful selection of key frames may improve the retrieval accuracy. However, because video also has a temporal dimension, its key frame representation is inherently limited. The use of temporal information can give us a better representation of video content at the semantic object and concept levels than an image-only based representation. In this paper we propose a bottom-up framework to combine interest point tracking, image segmentation and motion-shape factorization to decompose the video into spatiotemporal regions. We show an example application of activity concept detection using the trajectories extracted from the spatiotemporal regions. The proposed approach shows good potential for concise representation and indexing of objects and their motion in real-life consumer video.

  4. Fully Automatic Segmentation of Fluorescein Leakage in Subjects With Diabetic Macular Edema

    PubMed Central

    Rabbani, Hossein; Allingham, Michael J.; Mettu, Priyatham S.; Cousins, Scott W.; Farsiu, Sina

    2015-01-01

    Purpose. To create and validate software to automatically segment leakage area in real-world clinical fluorescein angiography (FA) images of subjects with diabetic macular edema (DME). Methods. Fluorescein angiography images obtained from 24 eyes of 24 subjects with DME were retrospectively analyzed. Both video and still-frame images were obtained using a Heidelberg Spectralis 6-mode HRA/OCT unit. We aligned early and late FA frames in the video by a two-step nonrigid registration method. To remove background artifacts, we subtracted early and late FA frames. Finally, after postprocessing steps, including detection and inpainting of the vessels, a robust active contour method was utilized to obtain leakage area in a 1500-μm-radius circular region centered at the fovea. Images were captured at different fields of view (FOVs) and were often contaminated with outliers, as is the case in real-world clinical imaging. Our algorithm was applied to these images with no manual input. Separately, all images were manually segmented by two retina specialists. The sensitivity, specificity, and accuracy of manual interobserver, manual intraobserver, and automatic methods were calculated. Results. The mean accuracy was 0.86 ± 0.08 for automatic versus manual, 0.83 ± 0.16 for manual interobserver, and 0.90 ± 0.08 for manual intraobserver segmentation methods. Conclusions. Our fully automated algorithm can reproducibly and accurately quantify the area of leakage of clinical-grade FA video and is congruent with expert manual segmentation. The performance was reliable for different DME subtypes. This approach has the potential to reduce time and labor costs and may yield objective and reproducible quantitative measurements of DME imaging biomarkers. PMID:25634978

  5. Fully automatic segmentation of fluorescein leakage in subjects with diabetic macular edema.

    PubMed

    Rabbani, Hossein; Allingham, Michael J; Mettu, Priyatham S; Cousins, Scott W; Farsiu, Sina

    2015-01-29

    To create and validate software to automatically segment leakage area in real-world clinical fluorescein angiography (FA) images of subjects with diabetic macular edema (DME). Fluorescein angiography images obtained from 24 eyes of 24 subjects with DME were retrospectively analyzed. Both video and still-frame images were obtained using a Heidelberg Spectralis 6-mode HRA/OCT unit. We aligned early and late FA frames in the video by a two-step nonrigid registration method. To remove background artifacts, we subtracted early and late FA frames. Finally, after postprocessing steps, including detection and inpainting of the vessels, a robust active contour method was utilized to obtain leakage area in a 1500-μm-radius circular region centered at the fovea. Images were captured at different fields of view (FOVs) and were often contaminated with outliers, as is the case in real-world clinical imaging. Our algorithm was applied to these images with no manual input. Separately, all images were manually segmented by two retina specialists. The sensitivity, specificity, and accuracy of manual interobserver, manual intraobserver, and automatic methods were calculated. The mean accuracy was 0.86 ± 0.08 for automatic versus manual, 0.83 ± 0.16 for manual interobserver, and 0.90 ± 0.08 for manual intraobserver segmentation methods. Our fully automated algorithm can reproducibly and accurately quantify the area of leakage of clinical-grade FA video and is congruent with expert manual segmentation. The performance was reliable for different DME subtypes. This approach has the potential to reduce time and labor costs and may yield objective and reproducible quantitative measurements of DME imaging biomarkers. Copyright 2015 The Association for Research in Vision and Ophthalmology, Inc.
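    The core of the method in the two records above, removing background artifacts by differencing aligned early and late frames and then extracting the leakage area inside the foveal analysis circle, can be sketched on synthetic data. The registration and active-contour steps of the actual algorithm are omitted here, and the pixel scale of the 1500-μm circle is an illustrative assumption:

    ```python
    import numpy as np

    # Synthetic early/late FA frames: leakage appears as a region that brightens
    # over time. Simple differencing and thresholding stand in for the paper's
    # registration and robust active-contour steps.
    ys, xs = np.mgrid[0:200, 0:200]
    early = 0.2 + 0.05 * np.cos(xs / 30.0)                        # background fluorescence
    late = early.copy()
    late[(ys - 100) ** 2 + (xs - 110) ** 2 < 15 ** 2] += 0.5      # leakage spot

    diff = late - early                                           # remove background artifacts
    fovea = (100, 100)
    roi = (ys - fovea[0]) ** 2 + (xs - fovea[1]) ** 2 < 75 ** 2   # assumed-scale analysis circle
    leak_mask = (diff > 0.25) & roi

    print(leak_mask.sum())  # area (in pixels) of the synthetic leakage inside the ROI
    ```

    In the real pipeline the two frames must first be brought into registration, since even small eye movements between early and late frames would otherwise dominate the difference image.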

  6. Automatic colonic lesion detection and tracking in endoscopic videos

    NASA Astrophysics Data System (ADS)

    Li, Wenjing; Gustafsson, Ulf; A-Rahim, Yoursif

    2011-03-01

    The biology of colorectal cancer offers an opportunity for both early detection and prevention. Compared with other imaging modalities, optical colonoscopy is the procedure of choice for simultaneous detection and removal of colonic polyps. Computer assisted screening makes it possible to assist physicians and potentially improve the accuracy of the diagnostic decision during the exam. This paper presents an unsupervised method to detect and track colonic lesions in endoscopic videos. The aim of the lesion screening and tracking is to facilitate detection of polyps and abnormal mucosa in real time as the physician is performing the procedure. For colonic lesion detection, the conventional marker controlled watershed based segmentation is used to segment the colonic lesions, followed by an adaptive ellipse fitting strategy to further validate the shape. For colonic lesion tracking, a mean shift tracker with background modeling is used to track the target region from the detection phase. The approach has been tested on colonoscopy videos acquired during regular colonoscopic procedures and demonstrated promising results.
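    The mean-shift update at the heart of the tracking phase can be sketched as follows. This is a generic mean-shift iteration on a synthetic likelihood map, not the authors' implementation, and the window size and blob parameters are illustrative:

    ```python
    import numpy as np

    def mean_shift(weights, start, win=10, n_iter=20):
        """Shift a square window to the weighted centroid of `weights`
        until convergence -- the core update behind mean-shift trackers."""
        cy, cx = start
        h, w = weights.shape
        for _ in range(n_iter):
            y0, y1 = max(0, cy - win), min(h, cy + win + 1)
            x0, x1 = max(0, cx - win), min(w, cx + win + 1)
            patch = weights[y0:y1, x0:x1]
            total = patch.sum()
            if total == 0:
                break
            ys, xs = np.mgrid[y0:y1, x0:x1]
            ny = int(round((ys * patch).sum() / total))
            nx = int(round((xs * patch).sum() / total))
            if (ny, nx) == (cy, cx):
                break
            cy, cx = ny, nx
        return cy, cx

    # A bright blob at (40, 60) stands in for the lesion likelihood map that a
    # real tracker would derive from color/background modeling.
    ys, xs = np.mgrid[0:100, 0:100]
    blob = np.exp(-((ys - 40) ** 2 + (xs - 60) ** 2) / (2 * 5.0 ** 2))
    print(mean_shift(blob, start=(30, 50)))  # converges near (40, 60)
    ```

    In the paper's setting the weight map would come from the detection phase plus background modeling; the iteration itself is unchanged.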

  7. Jersey number detection in sports video for athlete identification

    NASA Astrophysics Data System (ADS)

    Ye, Qixiang; Huang, Qingming; Jiang, Shuqiang; Liu, Yang; Gao, Wen

    2005-07-01

    Athlete identification is important for sports video content analysis, since users often care about the video clips featuring their preferred athletes. In this paper, we propose a method for athlete identification by combining segmentation, tracking and recognition procedures into a coarse-to-fine scheme for jersey number (digital characters on sport shirts) detection. Firstly, image segmentation is employed to separate the jersey number regions from their background, and size and pipe-like attributes of digital characters are used to filter out candidates. Then, a K-NN (K nearest neighbor) classifier is employed to classify a candidate as a digit in "0-9" or as negative. In the recognition procedure, we use Zernike moment features, which are invariant to rotation and scale, for digital shape recognition. Synthetic training samples with different fonts are used to represent the pattern of digital characters with non-rigid deformation. Once a character candidate is detected, an SSD (smallest square distance)-based tracking procedure is started. The recognition procedure is performed every several frames in the tracking process. After tracking tens of frames, the overall recognition results are combined by a voting procedure to determine whether a candidate is a true jersey number. Experiments on several types of sports video show encouraging results.
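    As an illustrative stand-in for the K-NN digit classification and frame-voting steps (the Zernike-moment features and synthetic jersey fonts from the paper are not reproduced here), using scikit-learn's small digit images in place of the candidate regions:

    ```python
    import numpy as np
    from sklearn.datasets import load_digits
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsClassifier

    # Raw pixels of scikit-learn's 8x8 digit images stand in for the paper's
    # rotation/scale-invariant Zernike shape features.
    digits = load_digits()
    X_train, X_test, y_train, y_test = train_test_split(
        digits.data, digits.target, test_size=0.25, random_state=0)

    knn = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)
    accuracy = knn.score(X_test, y_test)

    # A candidate region is accepted as a jersey digit only if several tracked
    # frames agree -- here sketched as a simple majority vote.
    def vote(frame_predictions):
        values, counts = np.unique(frame_predictions, return_counts=True)
        return values[np.argmax(counts)]
    ```

    The voting step is what makes the scheme coarse-to-fine: individual per-frame classifications may be noisy, but a candidate tracked over tens of frames only survives if the per-frame digits agree.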

  8. An algorithm for calculi segmentation on ureteroscopic images.

    PubMed

    Rosa, Benoît; Mozer, Pierre; Szewczyk, Jérôme

    2011-03-01

    The purpose of the study is to develop an algorithm for the segmentation of renal calculi on ureteroscopic images. Renal calculi are a common source of urological obstruction, and laser lithotripsy during ureteroscopy is a possible therapy. A laser-based system to sweep the calculus surface and vaporize it was developed to automate a very tedious manual task. The distal tip of the ureteroscope is directed using image guidance, and this operation is not possible without an efficient segmentation of renal calculi on the ureteroscopic images. We proposed and developed a region growing algorithm to segment renal calculi on ureteroscopic images. Using real video images to compute ground truth and compare our segmentation with a reference segmentation, we computed statistics on different image metrics, such as Precision, Recall, and the Yasnoff Measure, for comparison with ground truth. The algorithm and its parameters were established for the most likely clinical scenarios. The segmentation results are encouraging: the developed algorithm was able to correctly detect more than 90% of the surface of the calculi, according to an expert observer. Implementation of an algorithm for the segmentation of calculi on ureteroscopic images is feasible. The next step is the integration of our algorithm into the command scheme of a motorized system to build a complete operating prototype.
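    A minimal region-growing sketch in the spirit of the algorithm described above, run on a synthetic image; the tolerance, 4-connectivity, and seed choice are illustrative assumptions, not the paper's tuned parameters:

    ```python
    from collections import deque

    import numpy as np

    def region_grow(image, seed, tol=0.1):
        """Grow a region from `seed`, absorbing 4-connected neighbours whose
        intensity stays within `tol` of the running region mean."""
        h, w = image.shape
        mask = np.zeros((h, w), dtype=bool)
        mask[seed] = True
        total, count = float(image[seed]), 1
        queue = deque([seed])
        while queue:
            y, x = queue.popleft()
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if 0 <= ny < h and 0 <= nx < w and not mask[ny, nx]:
                    if abs(image[ny, nx] - total / count) <= tol:
                        mask[ny, nx] = True
                        total += float(image[ny, nx])
                        count += 1
                        queue.append((ny, nx))
        return mask

    # A bright square on a dark background stands in for a calculus on the
    # ureteroscopic image; in practice the seed would come from image guidance.
    img = np.zeros((20, 20)) + 0.1
    img[5:12, 5:12] = 0.9
    mask = region_grow(img, seed=(8, 8), tol=0.2)
    print(mask.sum())  # the 7x7 bright square: 49 pixels
    ```

    Using the running region mean (rather than the seed intensity alone) makes the growth tolerant to gradual intensity drift across the calculus surface, at the cost of possible leakage if the drift is monotonic.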

  9. Video Image Stabilization and Registration (VISAR) Software

    NASA Technical Reports Server (NTRS)

    1999-01-01

    Two scientists at NASA's Marshall Space Flight Center, atmospheric scientist Paul Meyer and solar physicist Dr. David Hathaway, developed promising new software, called Video Image Stabilization and Registration (VISAR). VISAR may help law enforcement agencies catch criminals by improving the quality of video recorded at crime scenes. In this photograph, the single frame at left, taken at night, was brightened in order to enhance details and reduce noise or snow. To further overcome the video defects in one frame, law enforcement officials can use VISAR software to add information from multiple frames to reveal a person. Images from less than a second of videotape were added together to create the clarified image at right. VISAR stabilizes camera motion in the horizontal and vertical directions as well as rotation and zoom effects, producing clearer images of moving objects; it also smoothes jagged edges, enhances still images, and reduces video noise or snow. VISAR could also have applications in medical and meteorological imaging. It could steady images of ultrasounds, which are infamous for their grainy, blurred quality. The software can also be used for defense applications by improving reconnaissance video imagery made by military vehicles, aircraft, and ships traveling in harsh, rugged environments.

  10. Remembering NASA Astronaut John Young, 1930-2018

    NASA Image and Video Library

    2018-01-06

    Astronaut John Young, who walked on the Moon during Apollo 16 and commanded the first space shuttle mission, has passed away at the age of 87. This video tribute, which includes music and portions of Young’s own words from previous interviews and events, recounts some of the highlights of his storied career at NASA.

  11. Hinode Satellite Captures Total Solar Eclipse Video Aug. 21

    NASA Image and Video Library

    2017-08-21

    The Japan Aerospace Exploration Agency, the National Astronomical Observatory of Japan and NASA released this video of the Aug. 21 total solar eclipse, taken by the X-ray telescope aboard the Hinode joint solar observation satellite as it orbited high above the Pacific Ocean.

  12. By the Dozen: NASA's James Webb Space Telescope Mirrors

    NASA Image and Video Library

    2016-01-03

    Caption: One dozen (out of 18) flight mirror segments that make up the primary mirror on NASA's James Webb Space Telescope have been installed at NASA's Goddard Space Flight Center. Credits: NASA/Chris Gunn More: Since December 2015, the team of scientists and engineers have been working tirelessly to install all the primary mirror segments onto the telescope structure in the large clean room at NASA's Goddard Space Flight Center in Greenbelt, Maryland. The twelfth mirror was installed on January 2, 2016. "This milestone signifies that all of the hexagonal shaped mirrors on the fixed central section of the telescope structure are installed and only the 3 mirrors on each wing are left for installation," said Lee Feinberg, NASA's Optical Telescope Element Manager at NASA Goddard. "The incredibly skilled and dedicated team assembling the telescope continues to find ways to do things faster and more efficiently." Each hexagonal-shaped segment measures just over 4.2 feet (1.3 meters) across and weighs approximately 88 pounds (40 kilograms). After being pieced together, the 18 primary mirror segments will work together as one large 21.3-foot (6.5-meter) mirror. The primary mirror will unfold and adjust to shape after launch. The mirrors are made of ultra-lightweight beryllium. The mirrors are placed on the telescope's backplane using a robotic arm, guided by engineers. The full installation is expected to be completed in a few months. The mirrors were built by Ball Aerospace & Technologies Corp., Boulder, Colorado. Ball is the principal subcontractor to Northrop Grumman for the optical technology and lightweight mirror system. The installation of the mirrors onto the telescope structure is performed by Harris Corporation of Rochester, New York. Harris Corporation leads integration and testing for the telescope. While the mirror assembly is a very significant milestone, there are many more steps involved in assembling the Webb telescope. 
The primary mirror and the tennis

  13. By the Dozen: NASA's James Webb Space Telescope Mirrors

    NASA Image and Video Library

    2016-01-03

    A view of the one dozen (out of 18) flight mirror segments that make up the primary mirror on NASA's James Webb Space Telescope, now installed at NASA's Goddard Space Flight Center. Credits: NASA/Chris Gunn More: Since December 2015, the team of scientists and engineers have been working tirelessly to install all the primary mirror segments onto the telescope structure in the large clean room at NASA's Goddard Space Flight Center in Greenbelt, Maryland. The twelfth mirror was installed on January 2, 2016. "This milestone signifies that all of the hexagonal shaped mirrors on the fixed central section of the telescope structure are installed and only the 3 mirrors on each wing are left for installation," said Lee Feinberg, NASA's Optical Telescope Element Manager at NASA Goddard. "The incredibly skilled and dedicated team assembling the telescope continues to find ways to do things faster and more efficiently." Each hexagonal-shaped segment measures just over 4.2 feet (1.3 meters) across and weighs approximately 88 pounds (40 kilograms). After being pieced together, the 18 primary mirror segments will work together as one large 21.3-foot (6.5-meter) mirror. The primary mirror will unfold and adjust to shape after launch. The mirrors are made of ultra-lightweight beryllium. The mirrors are placed on the telescope's backplane using a robotic arm, guided by engineers. The full installation is expected to be completed in a few months. The mirrors were built by Ball Aerospace & Technologies Corp., Boulder, Colorado. Ball is the principal subcontractor to Northrop Grumman for the optical technology and lightweight mirror system. The installation of the mirrors onto the telescope structure is performed by Harris Corporation of Rochester, New York. Harris Corporation leads integration and testing for the telescope. While the mirror assembly is a very significant milestone, there are many more steps involved in assembling the Webb telescope. The primary mirror and the

  14. Loral O’Hara/NASA 2017 Astronaut Candidate

    NASA Image and Video Library

    2017-08-22

    The ranks of America’s Astronaut Corps grew by a dozen today! The twelve new NASA Astronaut Candidates have reported for duty at the Johnson Space Center in Houston to begin two years of training. Before they got to Houston we video-chatted with them all; Woods Hole Oceanographic Institution research engineer Loral O’Hara talks about how she became interested in science, technology, engineering and math, why she wanted to become an astronaut and where she was when she got the news that she’d achieved her dream. Learn more about the new space heroes right here: nasa.gov/2017astronauts

  15. NASA in Silicon Valley Live - Episode 02 - Self-driving Robots, Planes and Automobiles

    NASA Image and Video Library

    2018-01-26

    NASA in Silicon Valley Live is a live show streamed on Twitch.tv that features conversations with the various researchers, scientists, engineers and all around cool people who work at NASA to push the boundaries of innovation. In this episode livestreamed on January 26, 2018, we explore autonomy, or “self-driving” technologies with Terry Fong, NASA chief roboticist, and Diana Acosta, technical lead for autonomous systems and robotics. Video credit: NASA/Ames Research Center NASA's Ames Research Center is located in California's Silicon Valley. Follow us on social media to hear about the latest developments in space, science, technology and aeronautics.

  16. Surgical gesture classification from video and kinematic data.

    PubMed

    Zappella, Luca; Béjar, Benjamín; Hager, Gregory; Vidal, René

    2013-10-01

    Much of the existing work on automatic classification of gestures and skill in robotic surgery is based on dynamic cues (e.g., time to completion, speed, forces, torque) or kinematic data (e.g., robot trajectories and velocities). While videos could be equally or more discriminative (e.g., videos contain semantic information not present in kinematic data), they are typically not used because of the difficulties associated with automatic video interpretation. In this paper, we propose several methods for automatic surgical gesture classification from video data. We assume that the video of a surgical task (e.g., suturing) has been segmented into video clips corresponding to a single gesture (e.g., grabbing the needle, passing the needle) and propose three methods to classify the gesture of each video clip. In the first one, we model each video clip as the output of a linear dynamical system (LDS) and use metrics in the space of LDSs to classify new video clips. In the second one, we use spatio-temporal features extracted from each video clip to learn a dictionary of spatio-temporal words, and use a bag-of-features (BoF) approach to classify new video clips. In the third one, we use multiple kernel learning (MKL) to combine the LDS and BoF approaches. Since the LDS approach is also applicable to kinematic data, we also use MKL to combine both types of data in order to exploit their complementarity. Our experiments on a typical surgical training setup show that methods based on video data perform equally well, if not better, than state-of-the-art approaches based on kinematic data. In turn, the combination of both kinematic and video data outperforms any other algorithm based on one type of data alone. Copyright © 2013 Elsevier B.V. All rights reserved.
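    The bag-of-features (BoF) pipeline described in the record above can be sketched with scikit-learn. The spatio-temporal descriptors, gesture labels, and dictionary size below are synthetic stand-ins, not the surgical dataset or the authors' settings:

    ```python
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.svm import SVC

    rng = np.random.default_rng(1)

    # Synthetic stand-ins for spatio-temporal descriptors: each "clip" yields a
    # variable number of local feature vectors.
    def make_clip(center):
        return center + rng.normal(scale=0.3, size=(rng.integers(20, 40), 8))

    clips = [make_clip(c) for c in (0.0, 0.0, 0.0, 3.0, 3.0, 3.0)]
    labels = np.array([0, 0, 0, 1, 1, 1])  # two gesture classes

    # 1. Learn a dictionary of "spatio-temporal words" from all local features.
    kmeans = KMeans(n_clusters=8, n_init=10, random_state=0).fit(np.vstack(clips))

    # 2. Represent each clip as a normalized histogram over the dictionary.
    def bof_histogram(clip):
        words = kmeans.predict(clip)
        hist = np.bincount(words, minlength=kmeans.n_clusters)
        return hist / hist.sum()

    X = np.array([bof_histogram(c) for c in clips])

    # 3. Classify the clip histograms with an SVM.
    clf = SVC(kernel="rbf").fit(X, labels)
    ```

    The histogram representation discards the ordering of local features within a clip, which is exactly why the paper also explores the LDS model (which keeps temporal dynamics) and an MKL combination of the two.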

  17. NASA Webb Mirror is 'CIAF' and Sound

    NASA Image and Video Library

    2017-12-08

    A James Webb Space Telescope flight spare primary mirror segment is loaded onto the CMM (Configuration Measurement Machine) at the CIAF (Calibration, Integration and Alignment Facility) at NASA's Goddard Space Flight Center in Greenbelt, Md. The CMM is used for precision measurements of the mirrors. These precision measurements must be accurate to 0.1 microns, or 1/400th the thickness of a human hair. Image credit: NASA/Goddard/Chris Gunn

  18. Consolidating NASA's Arc Jets

    NASA Technical Reports Server (NTRS)

    Balboni, John A.; Gokcen, Tahir; Hui, Frank C. L.; Graube, Peter; Morrissey, Patricia; Lewis, Ronald

    2015-01-01

    The paper describes the consolidation of NASA's high powered arc-jet testing at a single location. The existing plasma arc-jet wind tunnels located at the Johnson Space Center were relocated to Ames Research Center while maintaining NASA's technical capability to ground-test thermal protection system materials under simulated atmospheric entry convective heating. The testing conditions at JSC were reproduced and successfully demonstrated at ARC through close collaboration between the two centers. New equipment was installed at Ames to provide test gases of pure nitrogen mixed with pure oxygen, and for future nitrogen-carbon dioxide mixtures. A new control system was custom designed, installed and tested. Tests demonstrated that the 10 MW constricted-segmented arc heater at Ames meets the requirements of its major customer, NASA's Orion program. Solutions from an advanced computational fluid dynamics code were used to aid in characterizing the properties of the plasma stream and the surface environment on the calorimeters in the supersonic flow stream produced by the arc heater.

  19. EcAMSat Video News File

    NASA Image and Video Library

    2017-11-03

    A video news file (or a collection of raw video and interview clips) about the EcAMSat mission. Ever wonder what would happen if you got sick in space? NASA is sending samples of bacteria into low-Earth orbit to find out. One of the latest small satellite missions from NASA’s Ames Research Center in California’s Silicon Valley is the E. coli Anti-Microbial Satellite, or EcAMSat for short. The CubeSat – a spacecraft the size of a shoebox built from cube-shaped units – will explore how effectively antibiotics can combat E. coli bacteria in the low gravity of space. This information will help us improve how we fight infections, providing safer journeys for astronauts on their future voyages, and offer benefits for medicine here on Earth.

  20. Development and Testing of Harpoon-Based Approaches for Collecting Comet Samples (Video Supplement)

    NASA Technical Reports Server (NTRS)

    Purves, Lloyd (Compiler); Nuth, Joseph (Compiler); Amatucci, Edward (Compiler); Wegel, Donald; Smith, Walter; Leary, James; Kee, Lake; Hill, Stuart; Grebenstein, Markus; Voelk, Stefan; hide

    2017-01-01

    This video supplement contains a set of videos created during the approximately 10-year-long course of developing and testing the Goddard Space Flight Center (GSFC) harpoon-based approach for collecting comet samples. The purpose of the videos is to illustrate various design concepts used in this method of acquiring samples of comet material, the testing used to verify the concepts, and the evolution of designs and testing. To play the videos, this PDF must be opened in the free Adobe Reader; the videos do not play within a browser. While this supplement can be used as a stand-alone document, it is intended to augment its parent document of the same title, Development and Testing of Harpoon-Based Approaches for Collecting Comet Samples (NASA/CR-2017-219018; this document is accessible from the website: https://ssed.gsfc.nasa.gov/harpoon/SAS_Paper-V1.pdf). The parent document, which contains only text and figures, describes the overall development and testing effort and contains references to each of the videos in this supplement. Thus, the videos are primarily intended to augment the information provided by the text and figures in the parent document. This approach was followed to allow the file size of the parent document to remain small enough to facilitate downloading and storage. Some of the videos were created by other organizations, Johns Hopkins University Applied Physics Laboratory (JHU APL) and the German Aerospace Center (Deutsches Zentrum für Luft- und Raumfahrt, DLR), which are partnering with GSFC on developing this technology. Each video is accompanied by text that provides a summary description of its nature and purpose, as well as the identity of the authors. All videos have been edited to show only key parts of the testing. Although not all videos have sound, the sound has been retained in those that have it. Also, each video has been given one or more title screens to clarify what is going on in different phases of the video.

  1. Perioperative outcomes of video- and robot-assisted segmentectomies.

    PubMed

    Rinieri, Philippe; Peillon, Christophe; Salaün, Mathieu; Mahieu, Julien; Bubenheim, Michael; Baste, Jean-Marc

    2016-02-01

    Video-assisted thoracic surgery appears to be technically difficult for segmentectomy. Conversely, robotic surgery could facilitate the performance of segmentectomy. The aim of this study was to compare the early results of video- and robot-assisted segmentectomies. Data were collected prospectively on videothoracoscopy from 2010 and on robotic procedures from 2013. Fifty-one patients who were candidates for minimally invasive segmentectomy were included in the study. Perioperative outcomes of video-assisted and robotic segmentectomies were compared. The minimally invasive segmentectomies included 32 video- and 16 robot-assisted procedures; 3 segmentectomies (2 video-assisted and 1 robot-assisted) were converted to lobectomies. Four conversions to thoracotomy were necessary for anatomical reasons or arterial injury, with no uncontrolled bleeding in the robotic arm. There were 7 benign or infectious lesions, 9 pre-invasive lesions, 25 lung cancers, and 10 metastatic diseases. Patient characteristics, type of segment, conversion to thoracotomy, conversion to lobectomy, operative time, postoperative complications, chest tube duration, postoperative stay, and histology were similar in the video and robot groups. Estimated blood loss was significantly higher in the video group (100 vs. 50 mL, p = 0.028). The morbidity rate of minimally invasive segmentectomy was low. The short-term results of video-assisted and robot-assisted segmentectomies were similar; more data are required to show whether either technique has an advantage. Long-term oncologic outcomes are necessary to evaluate these new surgical practices. © The Author(s) 2016.

  2. Comet Jacques Approaches the Sun [video]

    NASA Image and Video Library

    2014-07-24

    NASA's Solar TErrestrial RElations Observatory (STEREO) has observed the recently discovered Comet Jacques as it passed through its nearest approach to the Sun (July 1-6, 2014). The wide-field instrument on board STEREO (Ahead) showed the comet with its elongated tail being stretched and pummeled by the gusty solar wind streaming from the Sun. Also visible near the center of the image is the bright planet Venus. The Sun is just out of the field of view to the right. Comet Jacques is traveling through space at about 180,000 km per hour (110,000 mph). It may brighten enough to be seen with the naked eye. Video of this event: www.flickr.com/photos/gsfc/14730658164/ Original file: sohowww.nascom.nasa.gov/pickoftheweek/old/11jul2014/ Credit: NASA/Goddard/STEREO. NASA Goddard Space Flight Center enables NASA’s mission through four scientific endeavors: Earth Science, Heliophysics, Solar System Exploration, and Astrophysics. Goddard plays a leading role in NASA’s accomplishments by contributing compelling scientific knowledge to advance the Agency’s mission.
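A quick arithmetic check confirms that the two quoted speeds agree (a minimal sketch; the km-per-mile conversion factor is the only input not taken from the record):

```python
# Check that 180,000 km/h is consistent with the quoted "about 110,000 mph".
KM_PER_MILE = 1.609344  # exact definition of the international mile in km

kmh = 180_000
mph = kmh / KM_PER_MILE  # roughly 111,847, i.e. "about 110,000 mph"
```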

  3. NASA Technology Transfer - Human Robot Teaming

    NASA Image and Video Library

    2016-12-23

    Produced for the Intelligent Robotics Group to show at the January 2017 Consumer Electronics Show (CES). Highlights development of the VERVE (Visual Environment for Remote Virtual Exploration) software used on the K-10, K-REX, SPHERES and AstroBee projects for 3D awareness. Also mentions the transfer of the software to Nissan for use in their autonomous vehicle project. The video includes Nissan's self-driving car driving around NASA Ames.

  4. Ham records video in the FWD MDDK during STS-132

    NASA Image and Video Library

    2010-05-15

    S132-E-007169 (15 May 2010) --- NASA astronaut Ken Ham, STS-132 mission commander, prepares to record some video on the middeck of space shuttle Atlantis during Flight Day 2 activities. Photo credit: National Aeronautics and Space Administration

  5. Annotations of Mexican bullfighting videos for semantic index

    NASA Astrophysics Data System (ADS)

    Montoya Obeso, Abraham; Oropesa Morales, Lester Arturo; Fernando Vázquez, Luis; Cocolán Almeda, Sara Ivonne; Stoian, Andrei; García Vázquez, Mireya Saraí; Zamudio Fuentes, Luis Miguel; Montiel Perez, Jesús Yalja; de la O Torres, Saul; Ramírez Acosta, Alejandro Alvaro

    2015-09-01

    Video annotation is important for web indexing and browsing systems. Indeed, in order to evaluate the performance of video query and mining techniques, databases with concept annotations are required. It is therefore necessary to generate a database with a semantic indexing that represents the digital content of the Mexican bullfighting atmosphere. This paper proposes a scheme for making complex annotations in a video within the framework of a multimedia search engine project. Each video is partitioned using our segmentation algorithm, which creates shots of different lengths and different numbers of frames. To make complex annotations about the video, we use the ELAN software. The annotations are done in two steps: first, we note the overall content of each shot; second, we describe the actions in terms of camera parameters such as direction, position and depth. As a consequence, we obtain a more complete descriptor of every action. In both cases we use the concepts of the TRECVid 2014 dataset, and we also propose new concepts. This methodology allows us to generate a database with the information necessary to create descriptors and algorithms capable of detecting actions, in order to automatically index and classify new bullfighting multimedia content.

  6. A motion compensation technique using sliced blocks and its application to hybrid video coding

    NASA Astrophysics Data System (ADS)

    Kondo, Satoshi; Sasai, Hisao

    2005-07-01

    This paper proposes a new motion compensation method using "sliced blocks" in DCT-based hybrid video coding. In H.264/MPEG-4 Advanced Video Coding (AVC), a recent international video coding standard, motion compensation can be performed by splitting macroblocks into multiple square or rectangular regions. In the proposed method, on the other hand, macroblocks or sub-macroblocks are divided into two regions (sliced blocks) by an arbitrary line segment. As a result, the shapes of the segmented regions are not limited to squares or rectangles, allowing them to better match the boundaries between moving objects. Thus, the proposed method can improve the performance of motion compensation. In addition, adaptive prediction of the shape according to the region shapes of the surrounding macroblocks can reduce the overhead needed to describe shape information in the bitstream. The proposed method also has the advantage that conventional coding techniques, such as mode decision using rate-distortion optimization, can be utilized, since coding processes such as frequency transform and quantization are performed on a macroblock basis, as in conventional coding methods. The proposed method is implemented in an H.264-based P-picture codec, and a bit-rate improvement of 5% over H.264 is confirmed.
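The geometric idea behind the sliced blocks can be illustrated with a small sketch (purely illustrative, not the authors' implementation): assign each pixel of a macroblock to one of two regions according to which side of an arbitrary line segment it falls on.

```python
def slice_block(size, p0, p1):
    """Split a size x size block into two regions by the line through p0-p1.

    Returns a mask of 0/1 region labels, using the sign of the 2-D cross
    product to decide which side of the line each pixel lies on.
    """
    (x0, y0), (x1, y1) = p0, p1
    mask = [[0] * size for _ in range(size)]
    for y in range(size):
        for x in range(size):
            side = (x1 - x0) * (y - y0) - (y1 - y0) * (x - x0)
            mask[y][x] = 1 if side > 0 else 0
    return mask

# A line from (0, 8) to (15, 8) splits a 16x16 macroblock horizontally;
# a diagonal line would yield the non-rectangular regions the paper allows.
mask = slice_block(16, (0, 8), (15, 8))
```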

  7. The Effects of Video Self-Modeling on the Decoding Skills of Children at Risk for Reading Disabilities

    ERIC Educational Resources Information Center

    Ayala, Sandra M.

    2010-01-01

    Ten first-grade students participating in a Tier II response to intervention (RTI) reading program received an intervention of video self-modeling to improve decoding skills and sight word recognition. The students were video recorded blending and segmenting decodable words, and reading sight words taken directly from their curriculum…

  8. Subjective evaluation of H.265/HEVC based dynamic adaptive video streaming over HTTP (HEVC-DASH)

    NASA Astrophysics Data System (ADS)

    Irondi, Iheanyi; Wang, Qi; Grecos, Christos

    2015-02-01

    The Dynamic Adaptive Streaming over HTTP (DASH) standard is becoming increasingly popular for real-time adaptive HTTP streaming of internet video in response to unstable network conditions. Integration of DASH streaming techniques with the new H.265/HEVC video coding standard is a promising area of research. The performance of HEVC-DASH systems has been previously evaluated by a few researchers using objective metrics; however, subjective evaluation would provide a better measure of the user's Quality of Experience (QoE) and of the overall performance of the system. This paper presents a subjective evaluation of an HEVC-DASH system implemented in a hardware testbed. Previous studies in this area have focused on using the current H.264/AVC (Advanced Video Coding) or H.264/SVC (Scalable Video Coding) codecs and, moreover, there has been no established standard test procedure for the subjective evaluation of DASH adaptive streaming. In this paper, we define a test plan for HEVC-DASH with a carefully justified data set employing longer video sequences that are sufficient to demonstrate the bitrate switching operations in response to various network condition patterns. We evaluate the end user's real-time QoE online by investigating the perceived impact of delay, different packet loss rates, fluctuating bandwidth, and the perceived quality of using different DASH video stream segment sizes on a video streaming session using different video sequences. The Mean Opinion Score (MOS) results give an insight into the performance of the system and the expectations of the users. The results from this study show the impact of different network impairments and different video segment sizes on users' QoE, and further analysis may help in optimizing system performance.
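The Mean Opinion Score analysis mentioned above can be sketched in a few lines (a minimal example; the ratings and the 1-5 absolute category rating scale are illustrative assumptions, not data from the study):

```python
from statistics import mean, stdev

def mos_with_ci(scores):
    """Mean Opinion Score with a normal-approximation 95% confidence interval."""
    m = mean(scores)
    half_width = 1.96 * stdev(scores) / len(scores) ** 0.5
    return m, half_width

# Hypothetical 1-5 ratings from eight viewers for one test condition.
ratings = [4, 3, 5, 4, 4, 3, 4, 5]
m, ci = mos_with_ci(ratings)  # MOS of 4.0 with a CI half-width near 0.52
```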

  9. Logo recognition in video by line profile classification

    NASA Astrophysics Data System (ADS)

    den Hollander, Richard J. M.; Hanjalic, Alan

    2003-12-01

    We present an extension to earlier work on recognizing logos in video stills. The logo instances considered here are rigid planar objects observed at a distance in the scene, so the possible perspective transformation can be approximated by an affine transformation. For this reason we can classify the logos by matching (invariant) line profiles. We enhance our previous method by considering multiple line profiles instead of a single profile of the logo. The positions of the lines are based on maxima in the Hough transform space of the segmented logo foreground image. Experiments are performed on MPEG1 sport video sequences to show the performance of the proposed method.
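The notion of a line profile can be sketched as follows (illustrative only; the paper's invariant profiles and Hough-based line placement are more involved than this nearest-neighbor sampler):

```python
def line_profile(img, p0, p1, n=5):
    """Sample n grayscale values along the segment p0-p1 (nearest neighbor)."""
    (x0, y0), (x1, y1) = p0, p1
    profile = []
    for i in range(n):
        t = i / (n - 1)                      # parameter from 0 to 1 along the line
        x = round(x0 + t * (x1 - x0))
        y = round(y0 + t * (y1 - y0))
        profile.append(img[y][x])
    return profile

# Tiny synthetic image: a bright diagonal stroke on a dark background.
img = [[255 if x == y else 0 for x in range(5)] for y in range(5)]
profile = line_profile(img, (0, 0), (4, 4))
```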

  10. The impact of video technology on learning: A cooking skills experiment.

    PubMed

    Surgenor, Dawn; Hollywood, Lynsey; Furey, Sinéad; Lavelle, Fiona; McGowan, Laura; Spence, Michelle; Raats, Monique; McCloat, Amanda; Mooney, Elaine; Caraher, Martin; Dean, Moira

    2017-07-01

    This study examines the role of video technology in the development of cooking skills. The study explored the views of 141 female participants on whether video technology can promote confidence in learning new cooking skills to assist in meal preparation. Prior to each focus group participants took part in a cooking experiment to assess the most effective method of learning for low-skilled cooks across four experimental conditions (recipe card only; recipe card plus video demonstration; recipe card plus video demonstration conducted in segmented stages; and recipe card plus video demonstration whereby participants freely accessed video demonstrations as and when needed). Focus group findings revealed that video technology was perceived to assist learning in the cooking process in the following ways: (1) improved comprehension of the cooking process; (2) real-time reassurance in the cooking process; (3) assisting the acquisition of new cooking skills; and (4) enhancing the enjoyment of the cooking process. These findings display the potential for video technology to promote motivation and confidence as well as enhancing cooking skills among low-skilled individuals wishing to cook from scratch using fresh ingredients. Copyright © 2017 Elsevier Ltd. All rights reserved.

  11. Video Clip of a Rover Rock-Drilling Demonstration at JPL

    NASA Image and Video Library

    2013-02-20

    This frame from a video clip shows moments during a demonstration of drilling into a rock at NASA JPL, Pasadena, Calif., with a test double of the Mars rover Curiosity. The drill combines hammering and rotation motions of the bit.

  12. NASA's James Webb Space Telescope Primary Mirror Fully Assembled

    NASA Image and Video Library

    2016-02-04

    The 18th and final primary mirror segment is installed on what will be the biggest and most powerful space telescope ever launched. The final mirror installation Wednesday at NASA’s Goddard Space Flight Center in Greenbelt, Maryland, marks an important milestone in the assembly of the agency’s James Webb Space Telescope. “Scientists and engineers have been working tirelessly to install these incredible, nearly perfect mirrors that will focus light from previously hidden realms of planetary atmospheres, star forming regions and the very beginnings of the Universe,” said John Grunsfeld, associate administrator for NASA’s Science Mission Directorate in Washington. “With the mirrors finally complete, we are one step closer to the audacious observations that will unravel the mysteries of the Universe.” Using a robotic arm reminiscent of a claw machine, the team meticulously installed all of Webb's primary mirror segments onto the telescope structure. Each of the hexagonal-shaped mirror segments measures just over 4.2 feet (1.3 meters) across -- about the size of a coffee table -- and weighs approximately 88 pounds (40 kilograms). Once in space and fully deployed, the 18 primary mirror segments will work together as one large 21.3-foot diameter (6.5-meter) mirror. Credit: NASA/Goddard/Chris Gunn
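A back-of-envelope check of the quoted dimensions (assuming the 1.3 m figure is each hexagon's flat-to-flat width, and treating each segment as a regular hexagon):

```python
import math

# Area of a regular hexagon with flat-to-flat width f is (sqrt(3)/2) * f**2.
flat_to_flat = 1.3                                  # meters, per the article
segment_area = (math.sqrt(3) / 2) * flat_to_flat ** 2
total_area = 18 * segment_area                      # all 18 segments combined

# For comparison, a filled 6.5 m disc; the segmented mirror's area is smaller
# because hexagonal segments do not fill the circumscribing circle.
disc_area = math.pi * (6.5 / 2) ** 2
```

The roughly 26 m² segmented total versus the 33 m² filled disc shows why a "6.5-meter mirror" built from 1.3 m hexagons is a nominal diameter, not a solid disc.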

  13. A Secure and Robust Object-Based Video Authentication System

    NASA Astrophysics Data System (ADS)

    He, Dajun; Sun, Qibin; Tian, Qi

    2004-12-01

    An object-based video authentication system, which combines watermarking, error correction coding (ECC), and digital signature techniques, is presented for protecting the authenticity between video objects and their associated backgrounds. In this system, a set of angular radial transformation (ART) coefficients is selected as the feature to represent the video object and the background, respectively. ECC and cryptographic hashing are applied to those selected coefficients to generate the robust authentication watermark. This content-based, semifragile watermark is then embedded into the objects frame by frame before MPEG-4 coding. In watermark embedding and extraction, groups of discrete Fourier transform (DFT) coefficients are randomly selected, and their energy relationships are employed to hide and extract the watermark. The experimental results demonstrate that our system is robust to MPEG-4 compression, object segmentation errors, and some common object-based video processing such as object translation, rotation, and scaling while securely preventing malicious object modifications. The proposed solution can be further incorporated into public key infrastructure (PKI).
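The energy-relationship embedding idea can be illustrated with a deliberately simplified sketch (not the paper's DFT-domain scheme; the coefficient groups are invented): one watermark bit is encoded in the ordering of the energies of two coefficient groups.

```python
def embed_bit(group_a, group_b, bit):
    """Encode one watermark bit via the energy relationship of two
    coefficient groups: bit 1 means group_a carries more energy."""
    ea = sum(c * c for c in group_a)
    eb = sum(c * c for c in group_b)
    if (ea >= eb) != bool(bit):
        group_a, group_b = group_b, group_a   # swap groups to flip the relation
    return group_a, group_b

def extract_bit(group_a, group_b):
    """Recover the bit from the energy relationship."""
    return int(sum(c * c for c in group_a) >= sum(c * c for c in group_b))

a, b = embed_bit([1, 2], [3, 4], 1)   # extraction then yields bit 1
```

Robustness in the real system comes from the fact that mild compression perturbs both groups' energies similarly, so their ordering (and hence the bit) survives.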

  14. Aerial Video Imaging

    NASA Technical Reports Server (NTRS)

    1991-01-01

    When Michael Henry wanted to start an aerial video service, he turned to Johnson Space Center for assistance. Two NASA engineers - one of whom had designed and developed TV systems for the Apollo, Skylab, Apollo-Soyuz and Space Shuttle programs - designed a wing-mounted fiberglass camera pod. Camera head and angles are adjustable, and the pod is shaped to reduce vibration. The controls are located so a solo pilot can operate the system. A microprocessor displays latitude, longitude, and bearing, and a GPS receiver provides position data for possible legal references. The service has been successfully utilized by railroads, oil companies, real estate companies, etc.

  15. MPEG-7 audio-visual indexing test-bed for video retrieval

    NASA Astrophysics Data System (ADS)

    Gagnon, Langis; Foucher, Samuel; Gouaillier, Valerie; Brun, Christelle; Brousseau, Julie; Boulianne, Gilles; Osterrath, Frederic; Chapdelaine, Claude; Dutrisac, Julie; St-Onge, Francis; Champagne, Benoit; Lu, Xiaojian

    2003-12-01

    This paper reports on the development status of a Multimedia Asset Management (MAM) test-bed for content-based indexing and retrieval of audio-visual documents within the MPEG-7 standard. The project, called "MPEG-7 Audio-Visual Document Indexing System" (MADIS), specifically targets the indexing and retrieval of video shots and key frames from documentary film archives, based on audio-visual content such as face recognition, motion activity, speech recognition and semantic clustering. The MPEG-7/XML encoding of the film database is done off-line. The description decomposition is based on a temporal decomposition into visual segments (shots), key frames and audio/speech sub-segments. The visible outcome will be a web site that allows video retrieval using a proprietary XQuery-based search engine and is accessible to members of the Canadian National Film Board (NFB) Cineroute site. For example, end-users will be able to request movie shots in the database that were produced in a specific year, that contain the face of a specific actor speaking a specific word, and in which there is no motion activity. Video streaming is performed over the high-bandwidth CA*net network deployed by CANARIE, a public Canadian Internet development organization.

  16. Preliminary Results from NASA/GSFC Ka-Band High Rate Demonstration for Near-Earth Communications

    NASA Technical Reports Server (NTRS)

    Wong, Yen; Gioannini, Bryan; Bundick, Steven N.; Miller, David T.

    2004-01-01

    In early 2000, the National Aeronautics and Space Administration (NASA) commenced the Ka-Band Transition Project (KaTP) as another step towards satisfying wideband communication requirements of the space research and earth exploration-satellite services. The KaTP team upgraded the ground segment portion of NASA's Space Network (SN) in order to enable high data rate space science and earth science communications services. The SN ground segment is located at the White Sands Complex (WSC) in New Mexico. NASA conducted the SN ground segment upgrades in conjunction with space segment upgrades implemented via the Tracking and Data Relay Satellite (TDRS)-HIJ project. The three new geostationary data relay satellites developed under the TDRS-HIJ project support the use of the inter-satellite service (ISS) allocation in the 25.25-27.5 GHz band (the 26 GHz band) to receive high speed data from low earth-orbiting customer spacecraft. The TDRS H spacecraft (designated TDRS-8) is currently operational at 171 degrees west longitude. On-orbit testing of the TDRS I and J spacecraft has been completed. These spacecraft support 650 MHz-wide Ka-band telemetry links that are referred to as return links. The 650 MHz-wide Ka-band telemetry links have the capability to support data rates of at least 1.2 Gbps. Therefore, the TDRS-HIJ spacecraft will significantly enhance the existing data rate elements of the NASA Space Network that operate at S-band and Ku-band.

  17. Grayscale image segmentation for real-time traffic sign recognition: the hardware point of view

    NASA Astrophysics Data System (ADS)

    Cao, Tam P.; Deng, Guang; Elton, Darrell

    2009-02-01

    In this paper, we study several grayscale-based image segmentation methods for real-time road sign recognition applications on an FPGA hardware platform. The performance of different image segmentation algorithms under different lighting conditions is initially compared using PC simulation. Based on these results and analysis, suitable algorithms are implemented and tested on a real-time FPGA speed sign detection system. Experimental results show that the system using segmented images uses significantly fewer hardware resources on the FPGA while maintaining comparable system performance. The system is capable of processing 60 live video frames per second.
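As a minimal illustration of grayscale segmentation, here is a global fixed-threshold binarizer, the simplest member of the family of methods such a comparison might include (the image and threshold value are invented for the example):

```python
def threshold_segment(img, t):
    """Binarize a 2-D grayscale image: 1 where the pixel is at least t."""
    return [[1 if px >= t else 0 for px in row] for row in img]

# A toy "bright sign on dark road" image, thresholded at 128.
img = [[ 10,  20, 200],
       [ 15, 210, 220],
       [ 12,  18,  25]]
mask = threshold_segment(img, 128)
```

A fixed threshold is cheap in hardware (one comparator per pixel) but fails under varying illumination, which is why adaptive methods are compared in such studies.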

  18. Static hand gesture recognition from a video

    NASA Astrophysics Data System (ADS)

    Rokade, Rajeshree S.; Doye, Dharmpal

    2011-10-01

    A sign language (also signed language) is a language which, instead of acoustically conveyed sound patterns, uses visually transmitted sign patterns to convey meaning, "simultaneously combining hand shapes, orientation and movement of the hands". Sign languages commonly develop in deaf communities, which can include interpreters, friends and families of deaf people as well as people who are deaf or hard of hearing themselves. In this paper, we propose a novel system for recognition of static hand gestures from a video, based on a Kohonen neural network. We propose an algorithm to separate out key frames, which contain correct gestures, from a video sequence. We segment hand images from complex and non-uniform backgrounds. Features are extracted by applying the Kohonen network to the key frames, and recognition is then performed.

  19. An intelligent crowdsourcing system for forensic analysis of surveillance video

    NASA Astrophysics Data System (ADS)

    Tahboub, Khalid; Gadgil, Neeraj; Ribera, Javier; Delgado, Blanca; Delp, Edward J.

    2015-03-01

    Video surveillance systems are of great value for public safety. With an exponential increase in the number of cameras, videos obtained from surveillance systems are often archived for forensic purposes. Many automatic methods have been proposed for video analytics such as anomaly detection and human activity recognition. However, such methods face significant challenges due to object occlusions, shadows and scene illumination changes. In recent years, crowdsourcing has become an effective tool that utilizes human intelligence to perform tasks that are challenging for machines. In this paper, we present an intelligent crowdsourcing system for forensic analysis of surveillance video, including video recorded as part of search and rescue missions and large-scale investigation tasks. We describe a method to enhance crowdsourcing by incorporating human detection, re-identification and tracking. At the core of our system, we use a hierarchical pyramid model to distinguish the crowd members based on their ability, experience and performance record. Our proposed system operates in an autonomous fashion and produces a final output of the crowdsourcing analysis consisting of a set of video segments detailing the events of interest as one storyline.

  20. A Video Game-Based Framework for Analyzing Human-Robot Interaction: Characterizing Interface Design in Real-Time Interactive Multimedia Applications

    DTIC Science & Technology

    2006-01-01

    segments video game interaction into domain-independent components which together form a framework that can be used to characterize real-time interactive multimedia applications in general and HRI in particular. We provide examples of using the components in both the video game and the Unmanned Aerial

  1. Spherical primary optical telescope (SPOT) segments

    NASA Astrophysics Data System (ADS)

    Hall, Christopher; Hagopian, John; DeMarco, Michael

    2012-09-01

    The spherical primary optical telescope (SPOT) project is an internal research and development program at NASA Goddard Space Flight Center. The goals of the program are to develop a robust and cost effective way to manufacture spherical mirror segments and demonstrate a new wavefront sensing approach for continuous phasing across the segmented primary. This paper focuses on the fabrication of the mirror segments. Significant cost savings were achieved through the design, since it allowed the mirror segments to be cast rather than machined from a glass blank. Casting was followed by conventional figuring at Goddard Space Flight Center. After polishing, the mirror segments were mounted to their composite assemblies. QED Technologies used magnetorheological finishing (MRF®) for the final figuring. The MRF process polished the mirrors while they were mounted to their composite assemblies. Each assembly included several magnetic invar plugs that extended to within an inch of the face of the mirror. As part of this project, the interaction between the MRF magnetic field and the invar plugs was evaluated. By properly selecting the polishing conditions, MRF was able to significantly improve the figure of the mounted segments. The final MRF figuring demonstrates that mirrors, in the mounted configuration, can be polished and tested to specification. There are significant process capability advantages due to polishing and testing the optics in their final, end-use assembled state.

  2. Nonscience Majors' Perceptions on the Use of YouTube Video to Support Learning in an Integrated Science Lecture

    ERIC Educational Resources Information Center

    Eick, Charles Joseph; King, David T., Jr.

    2012-01-01

    The instructor of an integrated science course for nonscience majors embedded content-related video segments from YouTube and other similar internet sources into lecture. Through this study, the instructor wanted to know students' perceptions of how video use engaged them and increased their interest and understanding of science. Written survey…

  3. Variable-Depth Liner Evaluation Using Two NASA Flow Ducts

    NASA Technical Reports Server (NTRS)

    Jones, M. G.; Nark, D. M.; Watson, W. R.; Howerton, B. M.

    2017-01-01

    Four liners are investigated experimentally via tests in the NASA Langley Grazing Flow Impedance Tube. These include an axially-segmented liner and three liners that use reordering of the chambers. Chamber reordering is shown to have a strong effect on the axial sound pressure level profiles, but a limited effect on the overall attenuation. It is also shown that bent chambers can be used to reduce the liner depth with minimal effects on the attenuation. A numerical study is also conducted to explore the effects of a planar and three higher-order mode sources based on the NASA Langley Curved Duct Test Rig geometry. A four-segment liner is designed using the NASA Langley CDL code with a Python-based optimizer. Five additional liner designs, four with rearrangements of the first liner segments and one with a redistribution of the individual chambers, are evaluated for each of the four sources. The liner configuration affects the sound pressure level profile much more than the attenuation spectra for the planar and first two higher-order mode sources, but has a much larger effect on the SPL profiles and attenuation spectra for the last higher-order mode source. Overall, axially variable-depth liners offer the potential to provide improved fan noise reduction, regardless of whether the axially variable depths are achieved via a distributed array of chambers (depths vary from chamber to chamber) or a group of zones (groups of chambers for which the depth is constant).

  4. NASA F-15B #836 landing with Quiet Spike attached

    NASA Image and Video Library

    2006-10-03

    NASA F-15B #836 landing with Quiet Spike attached. The project seeks to verify the structural integrity of the multi-segmented, articulating spike attachment designed to reduce and control a sonic boom.

  5. Automatic video shot boundary detection using k-means clustering and improved adaptive dual threshold comparison

    NASA Astrophysics Data System (ADS)

    Sa, Qila; Wang, Zhihui

    2018-03-01

    At present, content-based video retrieval (CBVR) is the most widely used video retrieval method, using a video's own features to perform automatic identification and retrieval. This method involves a key technology: shot segmentation. In this paper, a method of automatic video shot boundary detection using K-means clustering and improved adaptive dual-threshold comparison is proposed. First, the visual features of every frame are extracted and divided into two categories using the K-means clustering algorithm: frames with significant change and frames without. Then, based on the classification results, the improved adaptive dual-threshold comparison method is used to determine both abrupt and gradual shot boundaries. Finally, an automatic video shot boundary detection system is achieved.
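The first-pass clustering step can be sketched as follows (a rough illustration, not the authors' code): a 1-D k-means with k=2 over frame-difference magnitudes separates frames into "significant change" and "no significant change" candidates, with the frame-difference values below invented for the example.

```python
def kmeans2(values, iters=20):
    """1-D k-means with k=2; returns 0/1 labels (1 = high-change cluster)."""
    c = [min(values), max(values)]            # initialize centroids at the extremes
    for _ in range(iters):
        groups = ([], [])
        for v in values:
            # assign each value to its nearest centroid (bool indexes the tuple)
            groups[abs(v - c[0]) > abs(v - c[1])].append(v)
        c = [sum(g) / len(g) if g else c[i] for i, g in enumerate(groups)]
    return [int(abs(v - c[0]) > abs(v - c[1])) for v in values]

# Hypothetical per-frame difference magnitudes; spikes suggest shot boundaries.
diffs = [2, 3, 2, 40, 3, 2, 35, 4]
labels = kmeans2(diffs)
```

The frames labeled 1 would then be passed to the dual-threshold comparison to distinguish abrupt cuts from gradual transitions.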

  6. Knowledge-based understanding of aerial surveillance video

    NASA Astrophysics Data System (ADS)

    Cheng, Hui; Butler, Darren

    2006-05-01

    Aerial surveillance has long been used by the military to locate, monitor and track the enemy. Recently, its scope has expanded to include law enforcement activities, disaster management and commercial applications. With the ever-growing amount of aerial surveillance video acquired daily, there is an urgent need for extracting actionable intelligence in a timely manner. Furthermore, to support high-level video understanding, this analysis needs to go beyond current approaches and consider the relationships, motivations and intentions of the objects in the scene. In this paper we propose a system for interpreting aerial surveillance videos that automatically generates a succinct but meaningful description of the observed regions, objects and events. For a given video, the semantics of important regions and objects, and the relationships between them, are summarised into a semantic concept graph. From this, a textual description is derived that provides new search and indexing options for aerial video and enables the fusion of aerial video with other information modalities, such as human intelligence, reports and signal intelligence. Using a Mixture-of-Experts video segmentation algorithm an aerial video is first decomposed into regions and objects with predefined semantic meanings. The objects are then tracked and coerced into a semantic concept graph and the graph is summarized spatially, temporally and semantically using ontology guided sub-graph matching and re-writing. The system exploits domain specific knowledge and uses a reasoning engine to verify and correct the classes, identities and semantic relationships between the objects. This approach is advantageous because misclassifications lead to knowledge contradictions and hence they can be easily detected and intelligently corrected. In addition, the graph representation highlights events and anomalies that a low-level analysis would overlook.

  7. Progress in video immersion using Panospheric imaging

    NASA Astrophysics Data System (ADS)

    Bogner, Stephen L.; Southwell, David T.; Penzes, Steven G.; Brosinsky, Chris A.; Anderson, Ron; Hanna, Doug M.

    1998-09-01

    Having demonstrated significant technical and marketplace advantages over other modalities for video immersion, PanosphericTM Imaging (PI) continues to evolve rapidly. This paper reports on progress achieved since AeroSense 97. The first practical field deployment of the technology occurred in June-August 1997 during the NASA-CMU 'Atacama Desert Trek' activity, where the Nomad mobile robot was teleoperated via immersive PanosphericTM imagery from a distance of several thousand kilometers. Research using teleoperated vehicles at DRES has also verified the exceptional utility of the PI technology for achieving high levels of situational awareness, operator confidence, and mission effectiveness. Important performance enhancements have been achieved with the completion of the 4th Generation PI DSP-based array processor system. The system is now able to provide dynamic full video-rate generation of spatial and computational transformations, resulting in a programmable and fully interactive immersive video telepresence. A new multi-CCD camera architecture has been created to exploit the bandwidth of this processor, yielding a well-matched PI system with greatly improved resolution. While the initial commercial application for this technology is expected to be video teleconferencing, it also appears to have excellent potential for application in the 'Immersive Cockpit' concept. Additional progress is reported in the areas of Long Wave Infrared PI Imaging, Stereo PI concepts, PI based Video-Servoing concepts, PI based Video Navigation concepts, and Foveation concepts (to merge localized high-resolution views with immersive views).

  8. CTS digital video college curriculum-sharing experiment. [Communications Technology Satellite

    NASA Technical Reports Server (NTRS)

    Lumb, D. R.; Sites, M. J.

    1974-01-01

    NASA-Ames Research Center, Stanford University, and Carleton University, Ottawa, Canada, are participating in a joint experiment to evaluate the feasibility and effectiveness of college curriculum sharing using compressed digital television and the Communications Technology Satellite (CTS). Each university will offer televised courses to the other during the 1976-1977 academic year via CTS, a joint program by NASA and the Canadian Department of Communications. The video compression techniques to be demonstrated will enable economical interconnection of educational institutions using existing and planned domestic satellites.

  9. Segment Alignment Maintenance System for the Hobby-Eberly Telescope

    NASA Technical Reports Server (NTRS)

    Rakoczy, John; Burdine, Robert (Technical Monitor)

    2001-01-01

    NASA's Marshall Space Flight Center, in collaboration with Blue Line Engineering of Colorado Springs, Colorado, is developing a Segment Alignment Maintenance System (SAMS) for McDonald Observatory's Hobby-Eberly Telescope (HET). The SAMS shall sense motions of the 91 primary mirror segments and send corrections to HET's primary mirror controller as the mirror segments misalign due to thermo-elastic deformations of the mirror support structure. The SAMS consists of inductive edge sensors. All measurements are sent to the SAMS computer where mirror motion corrections are calculated. In October 2000, a prototype SAMS was installed on a seven-segment cluster of the HET. Subsequent testing has shown that the SAMS concept and architecture are a viable, practical approach to maintaining HET's primary mirror figure, or the figure of any large segmented telescope. This paper gives a functional description of the SAMS sub-array components and presents test data to characterize the performance of the subarray SAMS.

  10. The NASA "Why?" Files: The Case of the Barking Dogs. Program 2 in the 2000-2001 Series.

    ERIC Educational Resources Information Center

    National Aeronautics and Space Administration, Hampton, VA. Langley Research Center.

    The National Aeronautics and Space Administration (NASA) has produced a distance learning series of four 60-minute video programs with an accompanying Web site and companion teacher guide. This teacher guide accompanies the second video in the series. The story line of each program involves six ethnically diverse, inquisitive schoolchildren who…

  11. The NASA "Why?" Files: The Case of the Challenging Flight. Program 4 in the 2000-2001 Series.

    ERIC Educational Resources Information Center

    National Aeronautics and Space Administration, Hampton, VA. Langley Research Center.

    The National Aeronautics and Space Administration (NASA) has produced a distance learning series of four 60-minute video programs with an accompanying Web site and companion teacher guides. This teacher guide accompanies the fourth video in the series. The story lines of each program involve six ethnically diverse, inquisitive schoolchildren who…

  12. NASA Spacecraft Sees 'Pac-Man' on Saturn Moon

    NASA Image and Video Library

    2017-12-08

    NASA release date March 29, 2010 The highest-resolution-yet temperature map and images of Saturn’s icy moon Mimas obtained by NASA’s Cassini spacecraft reveal surprising patterns on the surface of the small moon, including unexpected hot regions that resemble “Pac-Man” eating a dot, and striking bands of light and dark in crater walls. The left portion of this image shows Mimas in visible light, an image that has drawn comparisons to the "Star Wars" Death Star. The right portion shows the new temperature map, which resembles 1980s video game icon "Pac Man." To learn more about this image go to: www.nasa.gov/centers/goddard/news/features/2010/pac-man-m... Credit: NASA/JPL/Goddard/SWRI/SSI NASA Goddard Space Flight Center is home to the nation's largest organization of combined scientists, engineers and technologists that build spacecraft, instruments and new technology to study the Earth, the sun, our solar system, and the universe.

  13. Precision segmented reflectors for space applications

    NASA Technical Reports Server (NTRS)

    Lehman, David H.; Pawlik, Eugene V.; Meinel, Aden B.; Fichter, W. B.

    1990-01-01

    A project to develop precision segmented reflectors (PSRs) which operate at submillimeter wavelengths is described. The development of a lightweight, efficient means for the construction of large-aperture segmented reflecting space-based telescopes is the primary aim of the project. The 20-m Large Deployable Reflector (LDR) telescope is being developed for a survey mission, and it will make use of the reflector panels and materials, structures, and figure control being elaborated for the PSR. The surface accuracy of a 0.9-m PSR panel is shown to be 1.74-micron RMS, and the goal of 100-micron RMS positioning accuracy has been achieved for a 4-m erectable structure. A voice-coil actuator for the figure control system architecture demonstrated 1-micron panel control accuracy in a 3-axis evaluation. The PSR technology is demonstrated to be of value for several NASA projects involving optical communications and interferometers as well as missions which make use of large-diameter segmented reflectors.

  14. Precision segmented reflectors for space applications

    NASA Astrophysics Data System (ADS)

    Lehman, David H.; Pawlik, Eugene V.; Meinel, Aden B.; Fichter, W. B.

    1990-08-01

    A project to develop precision segmented reflectors (PSRs) which operate at submillimeter wavelengths is described. The development of a lightweight, efficient means for the construction of large-aperture segmented reflecting space-based telescopes is the primary aim of the project. The 20-m Large Deployable Reflector (LDR) telescope is being developed for a survey mission, and it will make use of the reflector panels and materials, structures, and figure control being elaborated for the PSR. The surface accuracy of a 0.9-m PSR panel is shown to be 1.74-micron RMS, and the goal of 100-micron RMS positioning accuracy has been achieved for a 4-m erectable structure. A voice-coil actuator for the figure control system architecture demonstrated 1-micron panel control accuracy in a 3-axis evaluation. The PSR technology is demonstrated to be of value for several NASA projects involving optical communications and interferometers as well as missions which make use of large-diameter segmented reflectors.

  15. Extraction and analysis of neuron firing signals from deep cortical video microscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kerekes, Ryan A; Blundon, Jay

    We introduce a method for extracting and analyzing neuronal activity time signals from video of the cortex of a live animal. The signals correspond to the firing activity of individual cortical neurons. Activity signals are based on the changing fluorescence of calcium indicators in the cells over time. We propose a cell segmentation method that relies on a user-specified center point, from which the signal extraction method proceeds. A stabilization approach is used to reduce tissue motion in the video. The extracted signal is then processed to flatten the baseline and detect action potentials. We show results from applying the method to a cortical video of a live mouse.
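
    The baseline-flattening and event-detection steps described above can be sketched in NumPy. This is an illustrative stand-in, not the authors' pipeline: the running-minimum baseline estimate, the window size, and the median-plus-k-sigma threshold are all assumptions.

    ```python
    import numpy as np

    def flatten_baseline(trace, win=50):
        """Remove slow baseline drift by subtracting a running-minimum
        estimate of the baseline over a trailing window (window size
        is an assumption, not taken from the paper)."""
        n = len(trace)
        baseline = np.array([trace[max(0, i - win):i + 1].min() for i in range(n)])
        return trace - baseline

    def detect_events(flat, k=3.0):
        """Flag samples more than k standard deviations above the median
        of the flattened trace as candidate firing events (an
        illustrative threshold rule)."""
        thresh = np.median(flat) + k * flat.std()
        return np.where(flat > thresh)[0]

    # Synthetic fluorescence trace: linear baseline drift plus two
    # calcium transients at frames 120 and 340.
    t = np.arange(500)
    trace = 0.01 * t + np.where((t == 120) | (t == 340), 5.0, 0.0)
    events = detect_events(flatten_baseline(trace))
    ```

    On this synthetic trace the two transients are recovered despite the drift, which is the point of flattening before thresholding.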

  16. Assessment of Fall Characteristics From Depth Sensor Videos.

    PubMed

    O'Connor, Jennifer J; Phillips, Lorraine J; Folarinde, Bunmi; Alexander, Gregory L; Rantz, Marilyn

    2017-07-01

    Falls are a major source of death and disability in older adults; little data, however, are available about the etiology of falls in community-dwelling older adults. Sensor systems installed in independent and assisted living residences of 105 older adults participating in an ongoing technology study were programmed to record live videos of probable fall events. Sixty-four fall video segments from 19 individuals were viewed and rated using the Falls Video Assessment Questionnaire. Raters identified that 56% (n = 36) of falls were due to an incorrect shift of body weight and 27% (n = 17) from losing support of an external object, such as an unlocked wheelchair or rolling walker. In 60% of falls, mobility aids were in the room or in use at the time of the fall. Use of environmentally embedded sensors provides a mechanism for real-time fall detection and, ultimately, may supply information to clinicians for fall prevention interventions. [Journal of Gerontological Nursing, 43(7), 13-19.]. Copyright 2017, SLACK Incorporated.

  17. ETHOWATCHER: validation of a tool for behavioral and video-tracking analysis in laboratory animals.

    PubMed

    Crispim Junior, Carlos Fernando; Pederiva, Cesar Nonato; Bose, Ricardo Chessini; Garcia, Vitor Augusto; Lino-de-Oliveira, Cilene; Marino-Neto, José

    2012-02-01

    We present a software (ETHOWATCHER(®)) developed to support ethography, object tracking and extraction of kinematic variables from digital video files of laboratory animals. The tracking module allows controlled segmentation of the target from the background, extracting image attributes used to calculate the distance traveled, orientation, length, area and a path graph of the experimental animal. The ethography module allows recording of catalog-based behaviors from environment or from video files continuously or frame-by-frame. The output reports duration, frequency and latency of each behavior and the sequence of events in a time-segmented format, set by the user. Validation tests were conducted on kinematic measurements and on the detection of known behavioral effects of drugs. This software is freely available at www.ethowatcher.ufsc.br. Copyright © 2011 Elsevier Ltd. All rights reserved.
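
    The distance-traveled variable reported by the tracking module reduces to summing centroid displacements across successive frames. A minimal sketch (function and track names are illustrative, not from the ETHOWATCHER codebase):

    ```python
    import math

    def path_length(centroids):
        """Total distance traveled by the animal, summed over the
        centroid positions of successive video frames."""
        return sum(math.dist(a, b) for a, b in zip(centroids, centroids[1:]))

    # A square path of side 10 px should give a total distance of 40 px.
    track = [(0, 0), (10, 0), (10, 10), (0, 10), (0, 0)]
    ```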

  18. Video indexing based on image and sound

    NASA Astrophysics Data System (ADS)

    Faudemay, Pascal; Montacie, Claude; Caraty, Marie-Jose

    1997-10-01

    Video indexing is a major challenge for both scientific and economic reasons. Information extraction can sometimes be easier from sound channel than from image channel. We first present a multi-channel and multi-modal query interface, to query sound, image and script through 'pull' and 'push' queries. We then summarize the segmentation phase, which needs information from the image channel. Detection of critical segments is proposed. It should speed-up both automatic and manual indexing. We then present an overview of the information extraction phase. Information can be extracted from the sound channel, through speaker recognition, vocal dictation with unconstrained vocabularies, and script alignment with speech. We present experiment results for these various techniques. Speaker recognition methods were tested on the TIMIT and NTIMIT databases. Vocal dictation was tested on newspaper sentences spoken by several speakers. Script alignment was tested on part of a cartoon movie, 'Ivanhoe'. For good quality sound segments, error rates are low enough for use in indexing applications. Major issues are the processing of sound segments with noise or music, and performance improvement through the use of appropriate, low-cost architectures or networks of workstations.

  19. Real-time people counting system using a single video camera

    NASA Astrophysics Data System (ADS)

    Lefloch, Damien; Cheikh, Faouzi A.; Hardeberg, Jon Y.; Gouton, Pierre; Picot-Clemente, Romain

    2008-02-01

    There is growing interest in video-based solutions for people monitoring and counting in business and security applications. Compared to classic sensor-based solutions, video-based ones allow for more versatile functionalities and improved performance at lower cost. In this paper, we propose a real-time system for people counting based on a single low-end non-calibrated video camera. The two main challenges addressed in this paper are: robust estimation of the scene background and of the number of real persons in merge-split scenarios. The latter is likely to occur whenever multiple persons move closely, e.g. in shopping centers. Several persons may be considered to be a single person by automatic segmentation algorithms, due to occlusions or shadows, leading to under-counting. Therefore, to account for noise, illumination changes and changes in static objects, background subtraction is performed using an adaptive background model (updated over time based on motion information) and automatic thresholding. Furthermore, post-processing of the segmentation results is performed, in the HSV color space, to remove shadows. Moving objects are tracked using an adaptive Kalman filter, allowing a robust estimation of the objects' future positions even under heavy occlusion. The system is implemented in Matlab, and gives encouraging results even at high frame rates. Experimental results obtained based on the PETS2006 datasets are presented at the end of the paper.
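
    The adaptive-background-plus-thresholding idea can be sketched in a few lines of NumPy. This is a simplified illustration, not the paper's method: the exponential running-average update, the learning rate, and the fixed difference threshold are assumptions standing in for the motion-informed model update and automatic thresholding described above.

    ```python
    import numpy as np

    def update_background(bg, frame, alpha=0.05):
        """Adaptive background model: exponential running average of
        incoming frames (alpha is an illustrative learning rate)."""
        return (1 - alpha) * bg + alpha * frame

    def foreground_mask(bg, frame, thresh=25):
        """Mark pixels whose absolute difference from the background
        exceeds `thresh` as foreground (fixed threshold stands in for
        the paper's automatic thresholding step)."""
        return np.abs(frame.astype(float) - bg) > thresh

    # Toy example: empty background, then a frame with a bright 2x2 blob.
    bg = np.zeros((4, 4))
    frame = np.zeros((4, 4))
    frame[1:3, 1:3] = 200.0
    mask = foreground_mask(bg, frame)
    ```

    In a real loop, `update_background` would be applied only where `mask` is false, so that moving objects are not absorbed into the background.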

  20. Access NASA Satellite Global Precipitation Data Visualization on YouTube

    NASA Technical Reports Server (NTRS)

    Liu, Z.; Su, J.; Acker, J.; Huffman, G.; Vollmer, B.; Wei, J.; Meyer, D.

    2017-01-01

    Since the satellite era began, NASA has collected a large volume of Earth science observations for research and applications around the world. The collected and archived satellite data at 12 NASA data centers can also be used for STEM education and activities such as disaster events, climate change, etc. However, accessing satellite data can be a daunting task for non-professional users such as teachers and students because of unfamiliarity of terminology, disciplines, data formats, data structures, computing resources, processing software, programming languages, etc. Over the years, many efforts including tools, training classes, and tutorials have been developed to improve satellite data access for users, but barriers still exist for non-professionals. In this presentation, we will present our latest activity that uses a very popular online video sharing Web site, YouTube (https://www.youtube.com/), for accessing visualizations of our global precipitation datasets at the NASA Goddard Earth Sciences (GES) Data and Information Services Center (DISC). With YouTube, users can access and visualize a large volume of satellite data without the necessity to learn new software or download data. The dataset in this activity is a one-month animation for the GPM (Global Precipitation Measurement) Integrated Multi-satellite Retrievals for GPM (IMERG). IMERG provides precipitation on a near-global (60 deg. N-S) coverage at half-hourly time interval, providing more details on precipitation processes and development compared to the 3-hourly TRMM (Tropical Rainfall Measuring Mission) Multisatellite Precipitation Analysis (TMPA, 3B42) product. When the retro-processing of IMERG during the TRMM era is finished in 2018, the entire video will contain more than 330,000 files and will last 3.6 hours. Future plans include development of flyover videos for orbital data for an entire satellite mission or project. All videos, including the one-month animation, will be uploaded and

  1. Personalized Video Feedback and Repeated Task Practice Improve Laparoscopic Knot-Tying Skills: Two Controlled Trials.

    PubMed

    Abbott, Eduardo F; Thompson, Whitney; Pandian, T K; Zendejas, Benjamin; Farley, David R; Cook, David A

    2017-11-01

    Compare the effect of personalized feedback (PF) vs. task demonstration (TD), both delivered via video, on laparoscopic knot-tying skills and perceived workload; and evaluate the effect of repeated practice. General surgery interns and research fellows completed four repetitions of a simulated laparoscopic knot-tying task at one-month intervals. Midway between repetitions, participants received via e-mail either a TD video (demonstration by an expert) or a PF video (video of their own performance with voiceover from a blinded senior surgeon). Each participant received at least one video per format, with sequence randomly assigned. Outcomes included performance scores and NASA Task Load Index (NASA-TLX) scores. To evaluate the effectiveness of repeated practice, scores from these trainees on a separate delayed retention test were compared against historical controls who did not have scheduled repetitions. Twenty-one trainees completed the randomized study. Mean change in performance scores was significantly greater for those receiving PF (difference = 23.1 of 150 [95% confidence interval (CI): 0, 46.2], P = .05). Perceived workload was also significantly reduced (difference = -3.0 of 20 [95% CI: -5.8, -0.3], P = .04). Compared with historical controls (N = 93), the 21 with scheduled repeated practice had higher scores on the laparoscopic knot-tying assessment two weeks after the final repetition (difference = 1.5 of 10 [95% CI: 0.2, 2.8], P = .02). Personalized video feedback improves trainees' procedural performance and perceived workload compared with a task demonstration video. Brief monthly practice sessions support skill acquisition and retention.

  2. Audio-video feature correlation: faces and speech

    NASA Astrophysics Data System (ADS)

    Durand, Gwenael; Montacie, Claude; Caraty, Marie-Jose; Faudemay, Pascal

    1999-08-01

    This paper presents a study of the correlation of features automatically extracted from the audio stream and the video stream of audiovisual documents. In particular, we were interested in finding out whether speech analysis tools could be combined with face detection methods, and to what extent they should be combined. A generic audio signal partitioning algorithm was first used to detect Silence/Noise/Music/Speech segments in a full length movie. A generic object detection method was applied to the keyframes extracted from the movie in order to detect the presence or absence of faces. The correlation between the presence of a face in the keyframes and of the corresponding voice in the audio stream was studied. A third stream, which is the script of the movie, is warped on the speech channel in order to automatically label faces appearing in the keyframes with the name of the corresponding character. We naturally found that extracted audio and video features were related in many cases, and that significant benefits can be obtained from the joint use of audio and video analysis methods.

  3. The NASA Fireball Network

    NASA Technical Reports Server (NTRS)

    Cooke, William J.

    2013-01-01

    In the summer of 2008, the NASA Meteoroid Environments Office (MEO) began to establish a video fireball network, based on the following objectives: (1) determine the speed distribution of cm size meteoroids, (2) determine the major sources of cm size meteoroids (showers/sporadic sources), (3) characterize meteor showers (numbers, magnitudes, trajectories, orbits), (4) determine the size at which showers dominate the meteor flux, (5) discriminate between re-entering space debris and meteors, and (6) locate meteorite falls. In order to achieve the above with the limited resources available to the MEO, it was necessary that the network function almost fully autonomously, with very little required from humans in the areas of upkeep or analysis. With this in mind, the camera design and, most importantly, the ASGARD meteor detection software were adopted from the University of Western Ontario's Southern Ontario Meteor Network (SOMN), as NASA has a cooperative agreement with Western's Meteor Physics Group. 15 cameras have been built, and the network now consists of 8 operational cameras, with at least 4 more slated for deployment in calendar year 2013. The goal is to have 15 systems, distributed in two or more groups east of automatic analysis; every morning, this server also automatically generates an email and a web page (http://fireballs.ndc.nasa.gov) containing an automated analysis of the previous night's events. This analysis provides the following for each meteor: UTC date and time, speed, start and end locations (longitude, latitude, altitude), radiant, shower identification, light curve (meteor absolute magnitude as a function of time), photometric mass, orbital elements, and Tisserand parameter. Radiant/orbital plots and various histograms (number versus speed, time, etc) are also produced. After more than four years of operation, over 5,000 multi-station fireballs have been observed, 3 of which potentially dropped meteorites. A database containing data on all
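
    Among the per-meteor quantities the nightly analysis reports is the Tisserand parameter. Its standard form with respect to Jupiter can be computed directly from the fitted orbital elements; the constant and the sample orbit below are illustrative, not taken from the network's data.

    ```python
    import math

    A_JUPITER = 5.204  # semi-major axis of Jupiter, in AU

    def tisserand(a, e, incl_deg):
        """Tisserand parameter with respect to Jupiter for an orbit with
        semi-major axis a (AU), eccentricity e, and inclination in
        degrees. Values above ~3 suggest an asteroidal orbit; values
        below ~3 are typical of Jupiter-family-comet-like orbits."""
        i = math.radians(incl_deg)
        return (A_JUPITER / a
                + 2.0 * math.cos(i) * math.sqrt((a / A_JUPITER) * (1.0 - e**2)))

    # A roughly asteroidal sample orbit: a = 2.5 AU, e = 0.2, i = 5 deg.
    tj = tisserand(2.5, 0.2, 5.0)
    ```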

  4. The Role of the NASA Global Hawk Link Module as an Information Nexus For Atmospheric Mapping Missions

    NASA Technical Reports Server (NTRS)

    Sullivan, D. V.

    2015-01-01

    The Link Module described in this paper was developed for the NASA Uninhabited Aerial System (UAS) Global Hawk Pacific Mission (GloPAC) Airborne Science Campaign; four flights of 30 hour duration, supporting the Aura Validation Experiment (AVE). It was used again during the Genesis and Rapid Intensification Processes (GRIP) experiment, a NASA Earth Science field experiment to better understand how tropical storms form and develop into major hurricanes. In these missions, the Link Module negotiated all communication over the high bandwidth Ku satellite link, archived all the science data from onboard experiments in a spatially enabled database, routed command and control of the instruments from the Global Hawk Operations Center, and re-transmitted select data sets directly to experimenters' control and analysis systems. The availability of aggregated information from collections of sensors, and remote control capabilities, in real-time, is revolutionizing the way Airborne Science is being conducted. The Link Module NG now being flown in support of the NASA Earth Venture missions, the Hurricane and Severe Storm Sentinel (HS3) mission, and Airborne Tropical Tropopause Experiment (ATTREX) mission, has advanced data fusion technologies that are further advancing the scientific productivity, flexibility and robustness of these systems. On-the-fly traffic shaping has been developed to allow the high definition video, used for critical flight control segments, to dynamically allocate variable bandwidth on demand. Historically, the Link Module evolved from the instrument and communication interface controller used by NASA's Pathfinder and Pathfinder Plus solar powered UASs in the late 1990s. It later was expanded for use in the AIRDAS four channel scanner flown on the NASA Altus UAS, and then again to a module in the AMS twelve channel multispectral scanner flying on the NASA (Predator-B) Ikhana UAS.
The current system is the answer to the challenges imposed by extremely

  5. Integrating Thematic Web Portal Capabilities into the NASA Earthdata Web Infrastructure

    NASA Technical Reports Server (NTRS)

    Wong, Minnie; Baynes, Kathleen E.; Huang, Thomas; McLaughlin, Brett

    2015-01-01

    This poster will present the process of integrating thematic web portal capabilities into the NASA Earthdata web infrastructure, with examples from the Sea Level Change Portal. The Sea Level Change Portal will be a source of current NASA research, data and information regarding sea level change. The portal will provide sea level change information through articles, graphics, videos and animations, an interactive tool to view and access sea level change data and a dashboard showing sea level change indicators.

  6. No damage to rail cars or SRB segments in derailment

    NASA Technical Reports Server (NTRS)

    2000-01-01

    After being involved in a minor derailment incident during a routine movement on the tracks, rail cars carrying solid rocket booster segments sit idle. The rail cars were being moved as part of a standard operation to 'order' the cars, placing them into a proper sequence for upcoming segment processing activities. The rear wheels of one car and the front wheels of the car behind it slid off the tracks while passing through a railway switch onto a siding. They were traveling approximately 3 miles per hour at the time, about normal walking speed. No damage occurred to the SRB segments, or to the devices that secure the segments to the rail cars. The incident occurred on KSC property, just north of the NASA Causeway in the KSC Industrial Area.

  7. Two novel motion-based algorithms for surveillance video analysis on embedded platforms

    NASA Astrophysics Data System (ADS)

    Vijverberg, Julien A.; Loomans, Marijn J. H.; Koeleman, Cornelis J.; de With, Peter H. N.

    2010-05-01

    This paper proposes two novel motion-vector based techniques for target detection and target tracking in surveillance videos. The algorithms are designed to operate on a resource-constrained device, such as a surveillance camera, and to reuse the motion vectors generated by the video encoder. The first novel algorithm for target detection uses motion vectors to construct a consistent motion mask, which is combined with a simple background segmentation technique to obtain a segmentation mask. The second proposed algorithm aims at multi-target tracking and uses motion vectors to assign blocks to targets employing five features. The weights of these features are adapted based on the interaction between targets. These algorithms are combined in one complete analysis application. The performance of this application for target detection has been evaluated for the i-LIDS sterile zone dataset and achieves an F1-score of 0.40-0.69. The performance of the analysis algorithm for multi-target tracking has been evaluated using the CAVIAR dataset and achieves an MOTP of around 9.7 and MOTA of 0.17-0.25. On a selection of targets in videos from other datasets, the achieved MOTP and MOTA are 8.8-10.5 and 0.32-0.49 respectively. The execution time on a PC-based platform is 36 ms. This includes the 20 ms for generating motion vectors, which are also required by the video encoder.
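
    The key resource-saving idea above is reusing the encoder's motion vectors instead of recomputing optical flow. A minimal magnitude-threshold motion mask illustrates the starting point; the paper's temporal consistency accumulation and the five tracking features are omitted, and the threshold value is an assumption.

    ```python
    import numpy as np

    def motion_mask(mv, mag_thresh=1.0):
        """Mark macroblocks whose encoder motion vector magnitude
        exceeds a threshold. mv has shape (rows, cols, 2) holding the
        per-block (dx, dy) vectors already produced by the encoder."""
        mag = np.linalg.norm(mv, axis=-1)
        return mag > mag_thresh

    # 4x4 grid of per-macroblock motion vectors; one block is moving.
    mv = np.zeros((4, 4, 2))
    mv[2, 1] = (3.0, 4.0)   # block moving with magnitude 5
    mask = motion_mask(mv)
    ```

    Because the vectors come for free from encoding, this mask costs only a norm and a comparison per macroblock, which is what makes the approach viable on an embedded camera.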

  8. Lights, Camera: Learning! Findings from studies of video in formal and informal science education

    NASA Astrophysics Data System (ADS)

    Borland, J.

    2013-12-01

    As part of the panel, media researcher Jennifer Borland will highlight findings from a variety of studies of videos across the spectrum of formal to informal learning, including schools, museums, and viewers' homes. In her presentation, Borland will assert that the viewing context matters a great deal, but there are some general take-aways that can be extrapolated to the use of educational video in a variety of settings. Borland has served as an evaluator on several video-related projects funded by NASA and the National Science Foundation, including: Data Visualization videos and Space Shows developed by the American Museum of Natural History, DragonflyTV, Earth the Operators Manual, The Music Instinct and Time Team America.

  9. NASA's "Eyes On The Solar System:" A Real-time, 3D-Interactive Tool to Teach the Wonder of Planetary Science

    NASA Astrophysics Data System (ADS)

    Hussey, K.

    2014-12-01

    NASA's Jet Propulsion Laboratory is using video game technology to immerse students, the general public and mission personnel in our solar system and beyond. "Eyes on the Solar System," a cross-platform, real-time, 3D-interactive application that can run on-line or as a stand-alone "video game," is of particular interest to educators looking for inviting tools to capture students' interest in a format they like and understand. (eyes.nasa.gov). It gives users an extraordinary view of our solar system by virtually transporting them across space and time to make first-person observations of spacecraft, planetary bodies and NASA/ESA missions in action. Key scientific results illustrated with video presentations, supporting imagery and web links are imbedded contextually into the solar system. Educators who want an interactive, game-based approach to engage students in learning Planetary Science will see how "Eyes" can be effectively used to teach its principles to grades 3 through 14. The presentation will include a detailed demonstration of the software along with a description/demonstration of how this technology is being adapted for education. There will also be a preview of coming attractions. This work is being conducted by the Visualization Technology Applications and Development Group at NASA's Jet Propulsion Laboratory, the same team responsible for "Eyes on the Earth 3D," and "Eyes on Exoplanets," which can be viewed at eyes.nasa.gov/earth and eyes.nasa.gov/exoplanets.

  10. Image segmentation by iterative parallel region growing with application to data compression and image analysis

    NASA Technical Reports Server (NTRS)

    Tilton, James C.

    1988-01-01

    Image segmentation can be a key step in data compression and image analysis. However, the segmentation results produced by most previous approaches to region growing are suspect because they depend on the order in which portions of the image are processed. An iterative parallel segmentation algorithm avoids this problem by performing globally best merges first. Such a segmentation approach, and two implementations of the approach on NASA's Massively Parallel Processor (MPP) are described. Application of the segmentation approach to data compression and image analysis is then described, and results of such application are given for a LANDSAT Thematic Mapper image.
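
    The "globally best merges first" idea can be sketched as one merge iteration over a region list. This is a drastically simplified 1-D illustration, not the MPP implementation: regions are (mean, pixel_count) pairs, adjacency is reduced to list order, and the merge criterion is a scalar mean difference.

    ```python
    import numpy as np

    def best_merge_step(regions):
        """One iteration of best-merge region growing: among all
        adjacent region pairs, merge the pair with the smallest
        mean-intensity difference, replacing them with their pooled
        mean and combined size."""
        diffs = [abs(regions[i][0] - regions[i + 1][0])
                 for i in range(len(regions) - 1)]
        i = int(np.argmin(diffs))                     # globally best merge
        (m1, n1), (m2, n2) = regions[i], regions[i + 1]
        merged = ((m1 * n1 + m2 * n2) / (n1 + n2), n1 + n2)
        return regions[:i] + [merged] + regions[i + 2:]

    # Regions as (mean, pixel_count); means 10 and 11 are the closest
    # pair, so they are merged first regardless of scan order.
    regions = [(10.0, 4), (11.0, 4), (50.0, 4), (90.0, 4)]
    merged = best_merge_step(regions)
    ```

    Picking the global minimum each iteration is what removes the order dependence the abstract criticizes in conventional region growing; on a parallel machine the pairwise differences are computed simultaneously.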

  11. Automatic and quantitative measurement of laryngeal video stroboscopic images.

    PubMed

    Kuo, Chung-Feng Jeffrey; Kuo, Joseph; Hsiao, Shang-Wun; Lee, Chi-Lung; Lee, Jih-Chin; Ke, Bo-Han

    2017-01-01

    The laryngeal video stroboscope is an important instrument for physicians to analyze abnormalities and diseases in the glottal area. The stroboscope has been widely used around the world. However, without quantized indices, physicians can only make subjective judgment on glottal images. We designed a new laser projection marking module and applied it onto the laryngeal video stroboscope to provide scale conversion reference parameters for glottal imaging and to convert the physiological parameters of glottis. Image processing technology was used to segment the important image regions of interest. Information of the glottis was quantified, and the vocal fold image segmentation system was completed to assist clinical diagnosis and increase accuracy. Regarding image processing, histogram equalization was used to enhance glottis image contrast. The center-weighted median filter removes image noise while retaining the texture of the glottal image. Statistical threshold determination was used for automatic segmentation of a glottal image. As the glottis image contains saliva and light spots, which are classified as image noise, the noise was eliminated by erosion, dilation, opening, and closing operations to highlight the vocal area. We also used image processing to automatically identify an image of vocal fold region in order to quantify information from the glottal image, such as glottal area, vocal fold perimeter, vocal fold length, glottal width, and vocal fold angle. The quantized glottis image database was created to assist physicians in diagnosing glottis diseases more objectively.
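
    The statistical thresholding and area quantification steps can be sketched as follows. This is an illustrative stand-in for the paper's pipeline: the mean-plus-k-sigma rule, the value of k, and the toy image are assumptions, and the morphological cleanup is omitted.

    ```python
    import numpy as np

    def statistical_threshold(img, k=1.0):
        """Automatic global threshold at mean + k*std of the pixel
        intensities (a simple form of statistical threshold
        determination; k is an assumption)."""
        return img.mean() + k * img.std()

    def glottal_area(img, thresh):
        """Count pixels above threshold as the segmented region; a real
        pipeline would follow with morphological noise removal before
        measuring area, perimeter, width and angle."""
        return int((img > thresh).sum())

    # Toy grayscale frame: dark background with a bright 2x4 region.
    img = np.zeros((8, 8))
    img[3:5, 2:6] = 255.0
    area = glottal_area(img, statistical_threshold(img))
    ```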

  12. Layer-based buffer aware rate adaptation design for SHVC video streaming

    NASA Astrophysics Data System (ADS)

    Gudumasu, Srinivas; Hamza, Ahmed; Asbun, Eduardo; He, Yong; Ye, Yan

    2016-09-01

    This paper proposes a layer-based, buffer-aware rate adaptation design that avoids abrupt video quality fluctuation, reduces re-buffering latency, and improves bandwidth utilization compared to a conventional simulcast-based adaptive streaming system. The proposed design schedules DASH segment requests based on the estimated bandwidth, the dependencies among video layers, and layer buffer fullness. Scalable HEVC (SHVC) is the latest state-of-the-art video coding technique and can alleviate various issues caused by simulcast-based adaptive video streaming. With scalable coded video streams, the video is encoded once into a number of layers representing different qualities and/or resolutions: a base layer (BL) and one or more enhancement layers (EL), each incrementally enhancing the quality of the lower layers. Such a layered coding structure allows fine-granularity rate adaptation for video streaming applications. Two video streaming use cases are presented in this paper. The first streams HD SHVC video over a wireless network with varying available bandwidth, and the performance of the proposed layer-based streaming approach is compared with the conventional simulcast approach. The second streams 4K/UHD SHVC video over a hybrid access network that consists of a 5G millimeter-wave high-speed wireless link and a conventional wired or WiFi network. The simulation results verify that the proposed layer-based rate adaptation approach utilizes the bandwidth more efficiently, presenting the user with a more consistent viewing experience: higher-quality video content with minimal quality fluctuations.
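
    As an illustration of the layer-and-buffer-driven scheduling idea above, the following sketch picks the next segment request from base/enhancement layer buffer levels and an estimated bandwidth. The decision rule, thresholds, and names are hypothetical; the paper's actual adaptation logic is more elaborate.

```python
def next_request(layer_buffers, layer_rates, est_bw, min_buffer=4.0):
    """Pick which layer's segment to request next.
    layer_buffers: seconds buffered per layer (index 0 = base layer).
    layer_rates:   per-layer bitrate increments (same units as est_bw).
    """
    if layer_buffers[0] < min_buffer:
        return 0  # refill the base layer first: rebuffering hurts most
    cum_rate = 0.0
    choice = 0
    for layer, rate in enumerate(layer_rates):
        cum_rate += rate  # ELs depend on BL, so rates accumulate
        # request the highest layer the bandwidth sustains and that
        # still has room in its buffer
        if cum_rate <= est_bw and layer_buffers[layer] < min_buffer * 2:
            choice = layer
    return choice
```

    With rates (1.0, 1.5, 2.0) and 3.0 units of estimated bandwidth, the scheduler tops up the first enhancement layer once the base layer is safe, but never requests the second enhancement layer it cannot sustain.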

  13. Query by example video based on fuzzy c-means initialized by fixed clustering center

    NASA Astrophysics Data System (ADS)

    Hou, Sujuan; Zhou, Shangbo; Siddique, Muhammad Abubakar

    2012-04-01

    The high complexity of current video content poses two major challenges for fast retrieval: (1) efficient similarity measurement and (2) efficient indexing of compact representations. A video-retrieval strategy based on fuzzy c-means (FCM) is presented for query by example. Initially, the query video is segmented into a set of shots, each shot is represented by a key frame, and video processing techniques are used to extract visual cues representing each key frame. Because the FCM algorithm is sensitive to its initialization, we initialize the cluster centers with the shots of the query video so that the clustering converges appropriately. After the FCM cluster is initialized by the query video, each shot of the query video is treated as a benchmark point in its cluster, and each shot in the database receives a class label. The similarity between a database shot and the benchmark point with the same class label is then expressed as the distance between them, and the similarity between the query video and a video in the database is expressed as the number of similar shots. Our experimental results demonstrate the performance of the proposed approach.
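
    A minimal NumPy sketch of FCM with the cluster centers fixed to the query video's shot features, which is the key initialization idea above. The fuzzifier m = 2 and the update loop are textbook FCM, not the paper's exact configuration, and the shot features here are invented 1-D toys.

```python
import numpy as np

def fcm(data, centers, m=2.0, iters=20, eps=1e-9):
    """Fuzzy c-means. `centers` is initialized from the query video's
    shot features rather than at random (the paper's key idea).
    Returns final centers and the (n_samples, n_clusters) memberships."""
    c = centers.astype(float).copy()
    for _ in range(iters):
        # distance of every sample to every center
        d = np.linalg.norm(data[:, None, :] - c[None, :, :], axis=2) + eps
        # standard FCM membership update
        u = 1.0 / (d ** (2.0 / (m - 1.0)))
        u /= u.sum(axis=1, keepdims=True)   # memberships sum to 1 per sample
        # center update: membership-weighted means
        w = u ** m
        c = (w.T @ data) / w.sum(axis=0)[:, None]
    return c, u
```

    Seeding the centers with query-shot features (instead of random points) makes the converged clusters line up with the query's shots, which is what lets database shots inherit meaningful class labels.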

  14. Using learning analytics to evaluate a video-based lecture series.

    PubMed

    Lau, K H Vincent; Farooque, Pue; Leydon, Gary; Schwartz, Michael L; Sadler, R Mark; Moeller, Jeremy J

    2018-01-01

    The video-based lecture (VBL), an important component of the flipped classroom (FC) and massive open online course (MOOC) approaches to medical education, has primarily been evaluated through direct learner feedback. Evaluation may be enhanced through learning analytics (LA): analysis of quantitative audience usage data generated by video-sharing platforms. We applied LA to an experimental series of ten VBLs on electroencephalography (EEG) interpretation, uploaded to YouTube in the model of a publicly accessible MOOC. Trends in view count, total percentage of video viewed, and audience retention (AR; the percentage of viewers still watching at a given time point relative to the initial total) were examined. The pattern of average AR decline was characterized using regression analysis, revealing a uniform linear decline in viewership for each video, with no evidence of an optimal VBL length. Segments with transient increases in AR corresponded to those focused on core concepts, indicative of content requiring more detailed evaluation. We propose a model for applying LA at four levels: global, series, video, and feedback. LA may be a useful tool in evaluating a VBL series. Our proposed model combines analytics data and learner self-report for comprehensive evaluation.
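
    The AR-trend analysis above amounts to fitting a line to the retention curve and checking the fit quality. A sketch, assuming retention is sampled as a percentage per time bin (the variable names are ours, not the study's):

```python
import numpy as np

def retention_trend(retention):
    """Fit AR(t) = a*t + b. A uniform linear decline (as the study
    observed) shows up as a negative slope with R^2 close to 1."""
    t = np.arange(len(retention), dtype=float)
    a, b = np.polyfit(t, retention, 1)       # slope, intercept
    pred = a * t + b
    ss_res = np.sum((retention - pred) ** 2)
    ss_tot = np.sum((retention - retention.mean()) ** 2)
    return a, b, 1.0 - ss_res / ss_tot       # slope, intercept, R^2
```

    Time bins where the residual (actual AR minus the fitted line) spikes upward would flag the core-concept segments the abstract mentions.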

  15. Robust and efficient fiducial tracking for augmented reality in HD-laparoscopic video streams

    NASA Astrophysics Data System (ADS)

    Mueller, M.; Groch, A.; Baumhauer, M.; Maier-Hein, L.; Teber, D.; Rassweiler, J.; Meinzer, H.-P.; Wegner, I.

    2012-02-01

    Augmented Reality (AR) is a convenient way of porting information from medical images into the surgical field of view and can deliver valuable assistance to the surgeon, especially in laparoscopic procedures. In addition, high definition (HD) laparoscopic video devices are a great improvement over the previously used low resolution equipment. However, in AR applications that rely on real-time detection of fiducials from video streams, the demand for efficient image processing has increased due to the introduction of HD devices. We present an algorithm based on the well-known Conditional Density Propagation (CONDENSATION) algorithm which can satisfy these new demands. By incorporating a prediction around an already existing and robust segmentation algorithm, we can speed up the whole procedure while leaving the robustness of the fiducial segmentation untouched. For evaluation purposes we tested the algorithm on recordings from real interventions, allowing for a meaningful interpretation of the results. Our results show that we can accelerate the segmentation by a factor of 3.5 on average. Moreover, the prediction information can be used to compensate for fiducials that are temporarily occluded or out of scope, providing greater stability.
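
    For illustration, one iteration of a CONDENSATION-style tracker in 1-D can be sketched as resample, diffuse, reweight. The Gaussian likelihood, motion sigma, and particle count below are placeholder assumptions, not the parameters used by the authors, whose prediction wraps a 2-D fiducial segmentation.

```python
import numpy as np

def condensation_step(particles, weights, likelihood, motion_sigma=2.0, rng=None):
    """One CONDENSATION iteration: resample particles by weight, diffuse
    them with a random-walk motion model, then re-weight them by the
    measurement likelihood around the detected fiducial."""
    if rng is None:
        rng = np.random.default_rng(0)
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    particles = particles[idx] + rng.normal(0.0, motion_sigma, len(particles))
    weights = likelihood(particles)
    weights = weights / weights.sum()
    return particles, weights
```

    The weighted particle mean predicts where the fiducial will be, so the expensive segmentation only needs to search a small window around it, which is where the reported 3.5x speed-up comes from.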

  16. Video attention deviation estimation using inter-frame visual saliency map analysis

    NASA Astrophysics Data System (ADS)

    Feng, Yunlong; Cheung, Gene; Le Callet, Patrick; Ji, Yusheng

    2012-01-01

    A viewer's visual attention during video playback is the matching of his eye gaze movement to the changing video content over time. If the gaze movement matches the video content (e.g., following a rolling soccer ball), then the viewer keeps his visual attention. If the gaze location moves from one video object to another, then the viewer shifts his visual attention. A video that causes a viewer to shift his attention often is a "busy" video. Determining which video content is busy is an important practical problem: a busy video makes it difficult for an encoder to deploy region-of-interest (ROI)-based bit allocation, and hard for a content provider to insert additional overlays like advertisements, which make the video even busier. One way to determine the busyness of video content is to conduct eye gaze experiments with a sizable group of test subjects, but this is time-consuming and cost-ineffective. In this paper, we propose an alternative method to determine the busyness of video, formally called video attention deviation (VAD): analyzing the spatial visual saliency maps of the video frames across time. We first derive transition probabilities of a Markov model for eye gaze using saliency maps of a number of consecutive frames. We then compute the steady-state probability of the saccade state in the model, which is our estimate of VAD. We demonstrate that the steady-state saccade probability computed using saliency map analysis matches that computed using actual gaze traces for a range of videos with different degrees of busyness. Further, our analysis can also be used to segment video into shorter clips of different degrees of busyness by computing the Kullback-Leibler divergence between consecutive motion-compensated saliency maps.
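
    The core computation described above, deriving a steady-state saccade probability from a gaze-state Markov model, can be sketched as follows. The 2-state transition matrix is an invented toy example; the paper derives its transition probabilities from saliency maps.

```python
import numpy as np

def steady_state(P, iters=200):
    """Stationary distribution of a row-stochastic transition matrix,
    found by power iteration: pi <- pi @ P until convergence."""
    pi = np.full(P.shape[0], 1.0 / P.shape[0])
    for _ in range(iters):
        pi = pi @ P
    return pi

# Toy gaze model. States: 0 = fixation (gaze tracks the content),
# 1 = saccade (attention shifts to another object).
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])
vad = steady_state(P)[1]   # steady-state saccade probability, the VAD estimate
```

    For this toy chain the stationary distribution is (5/6, 1/6), so the VAD estimate is 1/6; a "busier" video would raise the fixation-to-saccade probability and with it the steady-state saccade mass.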

  17. Remote Video Monitor of Vehicles in Cooperative Information Platform

    NASA Astrophysics Data System (ADS)

    Qin, Guofeng; Wang, Xiaoguo; Wang, Li; Li, Yang; Li, Qiyan

    Detection of vehicles plays an important role in modern intelligent traffic management, and pattern recognition is a hot issue in computer vision. An auto-recognition system in a cooperative information platform is studied. In the cooperative platform, 3G wireless networks, including GPS, GPRS (CDMA), Internet (Intranet), remote video monitoring, and M-DMB networks, are integrated. Remote video information can be taken from the terminals and sent to the cooperative platform, then detected by the auto-recognition system. The images are pretreated and segmented, including feature extraction, template matching, and pattern recognition. The system identifies different vehicle models and gathers vehicular traffic statistics. Finally, the implementation of the system is introduced.

  18. A clinical pilot study of a modular video-CT augmentation system for image-guided skull base surgery

    NASA Astrophysics Data System (ADS)

    Liu, Wen P.; Mirota, Daniel J.; Uneri, Ali; Otake, Yoshito; Hager, Gregory; Reh, Douglas D.; Ishii, Masaru; Gallia, Gary L.; Siewerdsen, Jeffrey H.

    2012-02-01

    Augmentation of endoscopic video with preoperative or intraoperative image data [e.g., planning data and/or anatomical segmentations defined in computed tomography (CT) and magnetic resonance (MR)] can improve navigation, spatial orientation, confidence, and tissue resection in skull base surgery, especially with respect to critical neurovascular structures that may be difficult to visualize in the video scene. This paper presents the engineering and evaluation of a video augmentation system for endoscopic skull base surgery translated to use in a clinical study. Extension of previous research yielded a practical system with a modular design that can be applied to other endoscopic surgeries, including orthopedic, abdominal, and thoracic procedures. A clinical pilot study is underway to assess feasibility and benefit to surgical performance by overlaying CT or MR planning data in real-time, high-definition endoscopic video. Preoperative planning included segmentation of the carotid arteries, optic nerves, and surgical target volume (e.g., tumor). An automated camera calibration process was developed that demonstrates a mean re-projection accuracy of (0.7+/-0.3) pixels and a mean target registration error of (2.3+/-1.5) mm. An IRB-approved clinical study involving fifteen patients undergoing skull base tumor surgery is underway in which each surgery includes the experimental video-CT system deployed in parallel to the standard-of-care (unaugmented) video display. Questionnaires distributed to one neurosurgeon and two otolaryngologists are used to assess primary outcome measures regarding the benefit to surgical confidence in localizing critical structures and targets by means of video overlay during surgical approach, resection, and reconstruction.

  19. NASA's Role in Aeronautics: A Workshop. Volume 3: Transport aircraft

    NASA Technical Reports Server (NTRS)

    1981-01-01

    Segments of the spectrum of research and development activities that clearly must be within the purview of NASA in order for U.S. transport aircraft manufacturing and operating industries to succeed, and to continue to make important contributions to the nation's well-being, were examined. National facilities and expertise, basic research, and the evolution of generic and vehicle-class technologies were determined to be the areas in which NASA has an essential role in transport aircraft aeronautics.

  20. NASA F-15B #836 in flight with Quiet Spike attached

    NASA Image and Video Library

    2006-09-27

    NASA F-15B #836 in flight with Quiet Spike attached. The project seeks to verify the structural integrity of the multi-segmented, articulating spike attachment designed to reduce and control a sonic boom.

  1. NASA F-15B #836 in flight with Quiet Spike attached

    NASA Image and Video Library

    2006-10-03

    NASA F-15B #836 in flight with Quiet Spike attached. The project seeks to verify the structural integrity of the multi-segmented, articulating spike attachment designed to reduce and control a sonic boom.

  2. NASA F-15B #836 in flight with Quiet Spike attached

    NASA Image and Video Library

    2006-09-25

    NASA F-15B #836 in flight with Quiet Spike attached. The project seeks to verify the structural integrity of the multi-segmented, articulating spike attachment designed to reduce and control a sonic boom.

  3. Video bioinformatics analysis of human embryonic stem cell colony growth.

    PubMed

    Lin, Sabrina; Fonteno, Shawn; Satish, Shruthi; Bhanu, Bir; Talbot, Prue

    2010-05-20

    Because video data are complex and comprise many images, mining information from video material is difficult to do without the aid of computer software. Video bioinformatics is a powerful quantitative approach for extracting spatio-temporal data from video images using computer software to perform data mining and analysis. In this article, we introduce a video bioinformatics method for quantifying the growth of human embryonic stem cells (hESC) by analyzing time-lapse videos collected in a Nikon BioStation CT incubator equipped with a camera for video imaging. In our experiments, hESC colonies that were attached to Matrigel were filmed for 48 hours in the BioStation CT. To determine the rate of growth of these colonies, recipes were developed using CL-Quant software, which enables users to extract various types of data from video images. To accurately evaluate colony growth, three recipes were created. The first segmented the image into colony and background, the second enhanced the image to define colonies accurately throughout the video sequence, and the third measured the number of pixels in the colony over time. The three recipes were run in sequence on video data collected in a BioStation CT to analyze the growth rate of individual hESC colonies over 48 hours. To verify the truthfulness of the CL-Quant recipes, the same data were analyzed manually using Adobe Photoshop software. When the data obtained using the CL-Quant recipes and Photoshop were compared, the results were virtually identical, indicating that the CL-Quant recipes were truthful. The method described here could be applied to any video data to measure growth rates of hESC or other cells that grow in colonies. In addition, other video bioinformatics recipes can be developed in the future for other cell processes such as migration, apoptosis, and cell adhesion.

  4. Efficient depth intraprediction method for H.264/AVC-based three-dimensional video coding

    NASA Astrophysics Data System (ADS)

    Oh, Kwan-Jung; Oh, Byung Tae

    2015-04-01

    We present an intracoding method that is applicable to depth map coding in multiview-plus-depth systems. Our approach combines skip prediction and plane segmentation-based prediction. The proposed depth intraskip prediction uses the estimated direction at both the encoder and decoder, and does not need to encode residual data. Our plane segmentation-based intraprediction divides the current block into two regions and applies a different prediction scheme to each segmented region. This avoids incorrect estimations across different regions, resulting in higher prediction accuracy. Simulation results demonstrate that the proposed scheme is superior to H.264/advanced video coding intraprediction and is able to improve subjective rendering quality.
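
    A toy version of the plane segmentation-based prediction idea: split the depth block into two regions and predict each with its own DC value. Real depth intraprediction operates on coded syntax inside the codec; this NumPy sketch (our own simplification) only illustrates why per-region prediction avoids smearing across depth edges.

```python
import numpy as np

def biregion_predict(block):
    """Split a depth block into two regions by its mean depth and
    predict each region with that region's own DC (mean) value.
    Assumes the block actually contains two depth levels; a uniform
    block would be handled by ordinary single-DC prediction."""
    t = block.mean()
    hi = block >= t                                  # far/near segmentation
    pred = np.where(hi, block[hi].mean(), block[~hi].mean())
    return pred, hi
```

    On a block that is flat on each side of a depth edge, the two per-region DC values reproduce the block exactly (zero residual), whereas a single DC predictor would blur the edge and leave a large residual.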

  5. NASA Launches Five Rockets in Five Minutes

    NASA Image and Video Library

    2017-12-08

    NASA image captured March 27, 2012 NASA successfully launched five suborbital sounding rockets this morning from its Wallops Flight Facility in Virginia as part of a study of the upper level jet stream. The first rocket was launched at 4:58 a.m. EDT and each subsequent rocket was launched 80 seconds apart. Each rocket released a chemical tracer that created milky, white clouds at the edge of space. Tracking the way the clouds move can help scientists understand the movement of the winds some 65 miles up in the sky, which in turn will help create better models of the electromagnetic regions of space that can damage man-made satellites and disrupt communications systems. The launches and clouds were reported to be seen from as far south as Wilmington, N.C.; west to Charlestown, W. Va.; and north to Buffalo, N.Y. Credit: NASA/Wallops To watch a video of the launch and to read more go to: www.nasa.gov/mission_pages/sunearth/missions/atrex-launch... NASA Goddard Space Flight Center enables NASA’s mission through four scientific endeavors: Earth Science, Heliophysics, Solar System Exploration, and Astrophysics. Goddard plays a leading role in NASA’s accomplishments by contributing compelling scientific knowledge to advance the Agency’s mission.

  6. NASA Launches Five Rockets in Five Minutes

    NASA Image and Video Library

    2012-03-27

    NASA image captured March 27, 2012 NASA successfully launched five suborbital sounding rockets this morning from its Wallops Flight Facility in Virginia as part of a study of the upper level jet stream. The first rocket was launched at 4:58 a.m. EDT and each subsequent rocket was launched 80 seconds apart. Each rocket released a chemical tracer that created milky, white clouds at the edge of space. Tracking the way the clouds move can help scientists understand the movement of the winds some 65 miles up in the sky, which in turn will help create better models of the electromagnetic regions of space that can damage man-made satellites and disrupt communications systems. The launches and clouds were reported to be seen from as far south as Wilmington, N.C.; west to Charlestown, W. Va.; and north to Buffalo, N.Y. Credit: NASA/Wallops To watch a video of the launch and to read more go to: www.nasa.gov/mission_pages/sunearth/missions/atrex-launch...

  7. Six characteristics of nutrition education videos that support learning and motivation to learn.

    PubMed

    Ramsay, Samantha A; Holyoke, Laura; Branen, Laurel J; Fletcher, Janice

    2012-01-01

    To identify characteristics of nutrition education video vignettes that support learning and motivation to learn about feeding children. Nine focus group interviews were conducted with child care providers in child care settings from 4 states in the western United States: California, Idaho, Oregon, and Washington. At each focus group interview, 3-8 participants (n = 37) viewed video vignettes and participated in a facilitated focus group discussion that was audio-recorded, transcribed, and analyzed. Six primary characteristics of video vignettes that child care providers perceived as supporting learning and motivation to learn about feeding young children were identified: (1) use real scenarios; (2) provide short segments; (3) present simple, single messages; (4) convey a skill in action; (5) develop the videos so participants can relate to the settings; and (6) support participants' ability to conceptualize the information. These 6 characteristics can be used by nutrition educators in selecting and developing videos for nutrition education. Copyright © 2012 Society for Nutrition Education and Behavior. Published by Elsevier Inc. All rights reserved.

  8. Study of Temporal Effects on Subjective Video Quality of Experience.

    PubMed

    Bampis, Christos George; Li, Zhi; Moorthy, Anush Krishna; Katsavounidis, Ioannis; Aaron, Anne; Bovik, Alan Conrad

    2017-11-01

    HTTP adaptive streaming is being increasingly deployed by network content providers, such as Netflix and YouTube. By dividing video content into data chunks encoded at different bitrates, a client is able to request the appropriate bitrate for the segment to be played next based on the estimated network conditions. However, this can introduce a number of impairments, including compression artifacts and rebuffering events, which can severely impact an end-user's quality of experience (QoE). We have recently created a new video quality database, which simulates a typical video streaming application, using long video sequences and interesting Netflix content. Going beyond previous efforts, the new database contains highly diverse and contemporary content, and it includes the subjective opinions of a sizable number of human subjects regarding the effects on QoE of both rebuffering and compression distortions. We observed that rebuffering is always obvious and unpleasant to subjects, while bitrate changes may be less obvious due to content-related dependencies. Transient bitrate drops were preferable over rebuffering only on low complexity video content, while consistently low bitrates were poorly tolerated. We evaluated different objective video quality assessment algorithms on our database and found that objective video quality models are unreliable for QoE prediction on videos suffering from both rebuffering events and bitrate changes. This implies the need for more general QoE models that take into account objective quality models, rebuffering-aware information, and memory. The publicly available video content as well as metadata for all of the videos in the new database can be found at http://live.ece.utexas.edu/research/LIVE_NFLXStudy/nflx_index.html.

  9. Sun-Earth Day WEBCAST - NASA TV; Host Paul Mortfield, Astronomer Stanford Solar Center and visiting

    NASA Technical Reports Server (NTRS)

    2002-01-01

    Sun-Earth Day webcast on NASA TV. Host: Paul Mortfield, astronomer, Stanford Solar Center, with visiting students from San Francisco Bay Area schools. Documentation Technology Branch video communications van (code-JIT).

  10. High Resolution, High Frame Rate Video Technology

    NASA Technical Reports Server (NTRS)

    1990-01-01

    Papers and working group summaries presented at the High Resolution, High Frame Rate Video (HHV) Workshop are compiled. The HHV system is intended for future use on the Space Shuttle and Space Station Freedom. The workshop was held for the dual purpose of: (1) allowing potential scientific users to assess the utility of the proposed system for monitoring microgravity science experiments; and (2) letting technical experts from industry recommend improvements to the proposed near-term HHV system. The following topics are covered: (1) state of the art in video system performance; (2) development plan for the HHV system; (3) advanced technology for image gathering, coding, and processing; (4) data compression applied to HHV; (5) data transmission networks; and (6) results of the users' requirements survey conducted by NASA.

  11. Optimizing Instructional Video for Preservice Teachers in an Online Technology Integration Course

    ERIC Educational Resources Information Center

    Ibrahim, Mohamed; Callaway, Rebecca; Bell, David

    2014-01-01

    This study assessed the effect of instructional video designed according to the Cognitive Theory of Multimedia Learning, applying segmentation and signaling, on the learning outcomes of students in an online technology integration course. The study assessed the correlation between students' personal preferences (preferred learning styles and area…

  12. Science Education Supporting Weather Broadcasters On-Air and in the Classroom with NASA "Mini-Education Supplements"

    NASA Technical Reports Server (NTRS)

    Shepherd, J. Marshall; Starr, David OC. (Technical Monitor)

    2001-01-01

    NASA-Goddard Space Flight Center has initiated a new project designed to expand on existing news services and add value to classrooms through the development and distribution of two-minute 'mini-supplements' that give context to and teach about current weather and Earth research phenomena. The innovative mini-supplements provide raw materials for weather forecasters to build news stories around NASA-related missions without having to edit the more traditional and cumbersome long-form video format. The supplements cover different weather and climate topics and include NASA data, animations, video footage, and interviews with scientists. The supplements also include a curriculum package with educational lessons, an educator guide, and hands-on activities. One goal is to give on-air broadcasters, who are the primary science educators for the general public, what they need to 'teach' the science related to NASA research behind weather and climate news. This goal increases public literacy and assures more accurate, higher-quality science reporting by the media. The other goal is to enable on-air broadcasters to serve as distributors of high-quality, standards-based educational curricula and supplemental material when they visit grade 8-12 classrooms. The pilot effort centers on the success of NASA's Tropical Rainfall Measuring Mission (TRMM) but is likely expandable to other NASA Earth or space science missions.

  13. General view of a Solid Rocket Motor Forward Segment in ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    General view of a Solid Rocket Motor Forward Segment in the process of being offloaded from its railcar inside the Rotation Processing and Surge Facility at Kennedy Space Center. - Space Transportation System, Solid Rocket Boosters, Lyndon B. Johnson Space Center, 2101 NASA Parkway, Houston, Harris County, TX

  14. Scalable gastroscopic video summarization via similar-inhibition dictionary selection.

    PubMed

    Wang, Shuai; Cong, Yang; Cao, Jun; Yang, Yunsheng; Tang, Yandong; Zhao, Huaici; Yu, Haibin

    2016-01-01

    This paper aims at developing an automated gastroscopic video summarization algorithm to help clinicians go through the abnormal contents of a video more effectively. To select the most representative frames from the original video sequence, we formulate gastroscopic video summarization as a dictionary selection problem. Unlike traditional dictionary selection methods, which consider only the number and reconstruction ability of the selected key frames, our model introduces a similar-inhibition constraint to reinforce the diversity of the selected key frames. We calculate an attention cost by merging both gaze and content change into a prior cue to help select frames with more high-level semantic information. Moreover, we adopt an image quality evaluation process to eliminate the interference of poor-quality images and a segmentation process to reduce computational complexity. For the experiments, we built a new gastroscopic video dataset captured from 30 volunteers with more than 400k images and compared our method with state-of-the-art methods using content consistency, index consistency, and content-index consistency against the ground truth. Compared with all competitors, our method obtains the best results in 23 of 30 videos evaluated on content consistency, 24 of 30 videos evaluated on index consistency, and all videos evaluated on content-index consistency. For gastroscopic video summarization, we propose an automated annotation method via similar-inhibition dictionary selection. Our model achieves better performance than other state-of-the-art models and supplies more suitable key frames for diagnosis. The developed algorithm can be automatically adapted to various real applications, such as the training of young clinicians, computer-aided diagnosis, or medical report generation. Copyright © 2015 Elsevier B.V. All rights reserved.
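
    The similar-inhibition idea, penalizing key frames that resemble ones already selected, can be approximated by greedy farthest-point selection over frame features. This is a simplification of the paper's dictionary selection formulation; the seeding choice and Euclidean distance below are our own assumptions.

```python
import numpy as np

def select_keyframes(feats, k):
    """Greedy stand-in for similar-inhibition selection: repeatedly add
    the frame farthest from every frame already chosen, so the selected
    key frames stay diverse and near-duplicates are inhibited."""
    chosen = [0]                        # seed with the first frame
    while len(chosen) < k:
        # each frame's distance to its nearest already-chosen frame
        dist = np.array([min(np.linalg.norm(f - feats[c]) for c in chosen)
                         for f in feats])
        dist[chosen] = -1.0             # never re-pick a chosen frame
        chosen.append(int(dist.argmax()))
    return sorted(chosen)
```

    On features forming three tight clusters, the greedy rule picks one representative per cluster instead of three near-identical frames, which is the diversity behavior the constraint is meant to enforce.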

  15. Real-time unmanned aircraft systems surveillance video mosaicking using GPU

    NASA Astrophysics Data System (ADS)

    Camargo, Aldo; Anderson, Kyle; Wang, Yi; Schultz, Richard R.; Fevig, Ronald A.

    2010-04-01

    Digital video mosaicking from Unmanned Aircraft Systems (UAS) is used for many military and civilian applications, including surveillance, target recognition, border protection, forest fire monitoring, traffic control on highways, and monitoring of transmission lines, among others. Additionally, NASA uses digital video mosaicking to explore the Moon and planets such as Mars. In order to compute a "good" mosaic from video captured by a UAS, the algorithm must deal with motion blur, frame-to-frame jitter associated with an imperfectly stabilized platform, and perspective changes as the camera tilts in flight, as well as a number of other factors. The most suitable algorithms use SIFT (Scale-Invariant Feature Transform) to detect features consistent between video frames. Utilizing these features, the next step is to estimate the homography between two consecutive video frames, perform warping to properly register the image data, and finally blend the video frames, resulting in a seamless video mosaic. All this processing takes a great deal of resources from the CPU, so it is almost impossible to compute a real-time video mosaic on a single processor. Modern graphics processing units (GPUs) offer computational performance that far exceeds current CPU technology, allowing for real-time operation. This paper presents the development of a GPU-accelerated digital video mosaicking implementation and compares it with CPU performance. Our tests are based on two sets of real video captured by a small UAS aircraft, from infrared (IR) and electro-optical (EO) cameras. Our results show a speed-up of more than 50 times using GPU technology, so real-time operation at a video capture rate of 30 frames per second is feasible.
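
    The homography-estimation step in the pipeline above (after SIFT matching) is classically solved with the Direct Linear Transform. A self-contained NumPy sketch, omitting the RANSAC outlier rejection and GPU acceleration a real mosaicker would add:

```python
import numpy as np

def estimate_homography(src, dst):
    """Direct Linear Transform: recover H (3x3, defined up to scale)
    from >= 4 point correspondences src[i] -> dst[i]."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        # each correspondence contributes two linear constraints on H
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # the null-space vector of A (smallest singular vector) holds H
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    h = vt[-1].reshape(3, 3)
    return h / h[2, 2]                 # fix the arbitrary scale

def warp_point(h, p):
    """Apply homography h to a 2-D point (homogeneous normalization)."""
    q = h @ np.array([p[0], p[1], 1.0])
    return q[:2] / q[2]
```

    With noise-free correspondences from a known transform, the estimate reproduces the transform exactly, so warping a fifth, unseen point lands where the true homography sends it.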

  16. Developing assessment system for wireless capsule endoscopy videos based on event detection

    NASA Astrophysics Data System (ADS)

    Chen, Ying-ju; Yasen, Wisam; Lee, Jeongkyu; Lee, Dongha; Kim, Yongho

    2009-02-01

    Wireless Capsule Endoscopy (WCE), which combines advances in wireless technology and miniature cameras, enables a physician to examine a patient's digestive system without performing a surgical procedure. Although WCE is a technical breakthrough that allows physicians to visualize the entire small bowel noninvasively, viewing the video takes 1-2 hours. This is very time consuming for the gastroenterologist; not only does it limit wide application of the technology, but it also incurs considerable cost. It is therefore important to automate the process so that clinicians can focus only on events of interest. As an extension of our previous work characterizing the motility of the digestive tract in WCE videos, we propose a new assessment system for energy-based event detection (EG-EBD) to classify the events in WCE videos. The system first extracts general features of a WCE video that can characterize the intestinal contractions in digestive organs. Event boundaries are then identified using a High Frequency Content (HFC) function, and the segments are classified into WCE events using specialized features. In this system, we focus on entering the duodenum, entering the cecum, and active bleeding. The assessment system can be easily extended to discover more WCE events, such as detailed organ segmentation and more diseases, by introducing new specialized features. In addition, the system provides a score for every WCE image for each event; using the event scores, the system helps a specialist speed up the diagnosis process.
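
    A minimal sketch of boundary detection with a High Frequency Content function: HFC weights spectral energy by frequency bin, so frames whose feature vectors change abruptly score high. The per-frame feature vectors, the FFT-based HFC, and the mean-based threshold are illustrative assumptions, not the EG-EBD system's exact definitions.

```python
import numpy as np

def hfc(feature):
    """High Frequency Content: spectral energy weighted by frequency bin,
    so rapid variation contributes far more than a flat (DC) signal."""
    mag = np.abs(np.fft.rfft(feature))
    return float(np.sum(np.arange(len(mag)) * mag ** 2))

def event_boundaries(frame_features, factor=3.0):
    """Flag frames whose HFC jumps well above the sequence average."""
    scores = np.array([hfc(f) for f in frame_features])
    return np.where(scores > factor * scores.mean())[0]
```

    In the test below, ten flat feature vectors score essentially zero HFC, while the single rapidly alternating vector concentrates its energy at the Nyquist bin and is flagged as the event boundary.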

  17. Vehicle counting system using real-time video processing

    NASA Astrophysics Data System (ADS)

    Crisóstomo-Romero, Pedro M.

    2006-02-01

    Transit studies are important for planning a road network with optimal vehicular flow, and a vehicular count is essential. This article presents a vehicle counting system based on video processing. An advantage of such a system is the greater detail it can obtain, such as the shape, size, and speed of vehicles. The system uses a video camera placed above the street to image traffic in real time. The video camera must be placed at least 6 meters above street level to achieve proper acquisition quality. Fast image processing algorithms and small image dimensions are used to allow real-time processing. Digital filters, mathematical morphology, segmentation, and other techniques allow identifying and counting all vehicles in the image sequences. The system was implemented under Linux on a 1.8 GHz Pentium 4 computer. A successful count was obtained at frame rates of 15 frames per second for images of size 240x180 pixels and 24 frames per second for images of size 180x120 pixels, thus being able to count vehicles whose speeds do not exceed 150 km/h.
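
    A minimal sketch of the counting pipeline is shown below: difference the frame against a background image, threshold, and count connected foreground blobs above a minimum area. The grids, threshold, and minimum area are illustrative assumptions; the actual system also applies digital filters, morphological operations, and per-frame tracking.

```python
# Toy sketch of the counting pipeline: background differencing,
# thresholding, then counting 4-connected foreground blobs above a
# minimum area. Frames are small grayscale grids (lists of lists).

def count_vehicles(frame, background, thresh=50, min_area=3):
    h, w = len(frame), len(frame[0])
    # Foreground mask: pixels that differ strongly from the background.
    fg = [[abs(frame[y][x] - background[y][x]) > thresh for x in range(w)]
          for y in range(h)]
    seen = [[False] * w for _ in range(h)]
    count = 0
    for y in range(h):
        for x in range(w):
            if fg[y][x] and not seen[y][x]:
                # Flood-fill one 4-connected component, measuring its area.
                stack, area = [(y, x)], 0
                seen[y][x] = True
                while stack:
                    cy, cx = stack.pop()
                    area += 1
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w and fg[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                if area >= min_area:  # reject small noise blobs
                    count += 1
    return count

background = [[10] * 8 for _ in range(6)]
frame = [row[:] for row in background]
for y, x in [(1, 1), (1, 2), (2, 1), (2, 2),   # vehicle 1
             (4, 5), (4, 6), (5, 5), (5, 6)]:  # vehicle 2
    frame[y][x] = 200
print(count_vehicles(frame, background))  # two blobs of area 4 -> 2
```

    The minimum-area filter plays the role the article assigns to morphological cleanup: isolated noise pixels never become counted vehicles.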

  18. Twenty-Five Years of Progress. Part 1: Birth of NASA. Part 2: The Moon-A Goal

    NASA Technical Reports Server (NTRS)

    1984-01-01

    Historical footage (1958 - 1983) concerning NASA's space program is reviewed in this two-part video. Host Lynn Bondurant describes the birth of NASA and its accomplishments through the years. Part one contains: the launch of the Russian satellite Sputnik on October 4, 1957; the first (Soviet) dog in space; NACA space research; Explorer-6; and still photographs of various space projects. The Tiros 1 experimental weather satellite, microgravity simulators, the Echo 1 passive communications satellite, and the first U.S. manned spaceflight program, Mercury, are included in part two. The seven Mercury astronauts are: Captain Donald Slayton, Lt. Commander Alan Shepard, Lt. Commander Walter Schirra, Captain Virgil Grissom, Lt. Col. John Glenn Jr., Captain Leroy Cooper Jr., and Lt. Malcolm Scott Carpenter. Also included are an ongoing interview (throughout the video) with NASA's first Administrator, Keith Glennan; the 1961 flight of Enos, a chimpanzee; President Kennedy's speech in Washington about the space program; Project Gemini, the two-man space flights; and the recovery of Virgil Grissom after splashdown.

  19. MIT-NASA/KSC space life science experiments - A telescience testbed

    NASA Technical Reports Server (NTRS)

    Oman, Charles M.; Lichtenberg, Byron K.; Fiser, Richard L.; Vordermark, Deborah S.

    1990-01-01

    Experiments performed at MIT to better define Space Station information system telescience requirements for effective remote coaching of astronauts by principal investigators (PIs) on the ground are described. The experiments were conducted via satellite video, data, and voice links to surrogate crewmembers working in a laboratory at NASA's Kennedy Space Center. Teams of two PIs and two crewmembers performed two different space life sciences experiments. During 19 three-hour interactive sessions, a variety of test conditions was explored. Since bit rate limits are necessarily imposed on Space Station video experiments, surveillance video was varied down to 50 Kb/s, and the effectiveness of PI-controlled frame rate, resolution, grey scale, and color decimation was investigated. It is concluded that remote coaching by voice works and that dedicated crew-PI voice loops would be of great value on the Space Station.

  20. NASA's Hyperwall Revealing the Big Picture

    NASA Technical Reports Server (NTRS)

    Sellers, Piers

    2011-01-01

    NASA's hyperwall is a sophisticated visualization tool used to display large datasets. The hyperwall, or video wall, is capable of displaying multiple high-definition data visualizations and/or images simultaneously across an arrangement of screens. Functioning as a key component at many NASA exhibits, the hyperwall is used to help explain phenomena, ideas, or examples of world change. The traveling version of the hyperwall typically comprises nine 42-50" flat-screen monitors arranged in a 3x3 array. However, it is not limited in monitor size or number; screen sizes can be as large as 52" and the arrangement can include more than nine monitors. Generally, NASA satellite and model data are used to highlight particular themes in atmospheric, land, and ocean science. Many of the existing hyperwall stories reveal change across space and time, while others display large-scale still images accompanied by descriptive, story-telling captions. Hyperwall content on a variety of Earth science topics already exists and is made available to the public at: eospso.gsfc.nasa.gov/hyperwall. Keynote and PowerPoint presentations as well as Summary of Story files are available for download for each existing topic. New hyperwall content and accompanying files will continue to be developed to promote scientific literacy across a diverse group of audience members. NASA invites the use of content accessible through this website but requests that the user acknowledge any and all data sources referenced in the content being used.

  1. The IXV Ground Segment design, implementation and operations

    NASA Astrophysics Data System (ADS)

    Martucci di Scarfizzi, Giovanni; Bellomo, Alessandro; Musso, Ivano; Bussi, Diego; Rabaioli, Massimo; Santoro, Gianfranco; Billig, Gerhard; Gallego Sanz, José María

    2016-07-01

    The Intermediate eXperimental Vehicle (IXV) is an ESA re-entry demonstrator that performed, on 11 February 2015, a successful re-entry demonstration mission. The project objectives were the design, development, manufacturing, and on-ground and in-flight verification of an autonomous European lifting and aerodynamically controlled re-entry system. For the IXV mission a dedicated Ground Segment was provided. The main subsystems of the IXV Ground Segment were: the IXV Mission Control Center (MCC), from which the vehicle was monitored and support was provided during the pre-launch and recovery phases; the IXV Ground Stations, used to cover the IXV mission by receiving spacecraft telemetry and forwarding it to the MCC; and the IXV Communication Network, deployed to support the operations of the IXV mission by interconnecting all remote sites with the MCC, supporting data, voice, and video exchange. This paper describes the concept, architecture, development, implementation, and operations of the ESA Intermediate eXperimental Vehicle (IXV) Ground Segment and outlines the main operations and lessons learned during the preparation and successful execution of the IXV mission.

  2. Gigantic Rolling Wave Captured on the Sun [hd video

    NASA Image and Video Library

    2017-12-08

    A coronal mass ejection (CME) erupted from just around the edge of the sun on May 1, 2013, in a gigantic rolling wave. CMEs can shoot over a billion tons of particles into space at over a million miles per hour. This CME occurred on the sun’s limb and is not headed toward Earth. The video, taken in extreme ultraviolet light by NASA’s Solar Dynamics Observatory (SDO), covers about two and a half hours. Credit: NASA/Goddard/SDO NASA image use policy. NASA Goddard Space Flight Center enables NASA’s mission through four scientific endeavors: Earth Science, Heliophysics, Solar System Exploration, and Astrophysics. Goddard plays a leading role in NASA’s accomplishments by contributing compelling scientific knowledge to advance the Agency’s mission.

  3. Early Synthetic Prototyping: The Use of Video After-Action Reports for Harvesting Useful Feedback In Early Design

    DTIC Science & Technology

    2016-06-01

    and material developers use an online game to crowdsource ideas from online players in order to increase viable synthetic prototypes. In entertainment... games, players often create videos of their game play to share with other players to demonstrate how to complete a segment of a game. This thesis... explores similar self-recorded videos of ESP game play and determines if they provide useful data to capability and material developers that can

  4. TRW Video News: Chandra X-ray Observatory

    NASA Technical Reports Server (NTRS)

    1999-01-01

    This NASA Kennedy Space Center sponsored video release presents live footage of the Chandra X-ray Observatory prior to STS-93 as well as several short animations recreating some of its activities in space. These animations include a Space Shuttle fly-by with Chandra, two perspectives of Chandra's deployment from the Shuttle, the Chandra deployment orbit sequence, the Inertial Upper Stage (IUS) first-stage burn, and finally a "beauty shot," which presents another animated view of Chandra in space.

  5. Physical activity patterns across time-segmented youth sport flag football practice.

    PubMed

    Schlechter, Chelsey R; Guagliano, Justin M; Rosenkranz, Richard R; Milliken, George A; Dzewaltowski, David A

    2018-02-08

    Youth sport (YS) reaches a large number of children worldwide and contributes substantially to children's daily physical activity (PA), yet less than half of YS time has been shown to be spent in moderate-to-vigorous physical activity (MVPA). Physical activity during practice is likely to vary depending on practice structure, which changes across YS time; therefore, the purposes of this study were 1) to describe the type and frequency of segments of time, defined by contextual characteristics of practice structure, during YS practices and 2) to determine the influence of these segments on PA. Research assistants video-recorded the full duration of 28 practices from 14 boys' flag football teams (2 practices/team) while children concurrently (N = 111, aged 5-11 years, mean 7.9 ± 1.2 years) wore ActiGraph GT1M accelerometers to measure PA. Observers divided videos of each practice into continuous context time segments (N = 204; mean segments per practice = 7.3, SD = 2.5) using start/stop points defined by changes in context characteristics, and assigned a value for task (e.g., management, gameplay, etc.), member arrangement (e.g., small group, whole group, etc.), and setting demand (i.e., fosters participation, fosters exclusion). Segments were then paired with accelerometer data. Data were analyzed using a multilevel model with segment as the unit of analysis. Whole practices averaged 34 ± 2.4% of time spent in MVPA. Free-play (51.5 ± 5.5%), gameplay (53.6 ± 3.7%), and warm-up (53.9 ± 3.6%) segments had a greater percentage of time (%time) in MVPA than fitness (36.8 ± 4.4%) segments (p ≤ .01). Greater %time was spent in MVPA during free-play segments than scrimmage (30.2 ± 4.6%), strategy (30.6 ± 3.2%), and sport-skill (31.6 ± 3.1%) segments (p ≤ .01), and in segments that fostered participation (36.1 ± 2.7%) than segments that fostered exclusion (29.1 ± 3.0%; p ≤ .01).
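
    The segment-level outcome can be sketched as follows: pair each context segment with its accelerometer epochs and compute the percentage of epochs at or above an MVPA cut-point. The cut-point and epoch counts below are illustrative assumptions, not the study's calibration values.

```python
# Back-of-envelope sketch of the segment-level %time-in-MVPA outcome:
# classify each accelerometer epoch against an intensity cut-point and
# report the percentage of MVPA epochs per context segment. The
# cut-point and counts are hypothetical, not the study's calibration.

MVPA_CUTPOINT = 574  # hypothetical counts-per-epoch threshold for MVPA

def percent_time_mvpa(epoch_counts, cutpoint=MVPA_CUTPOINT):
    """Percent of epochs at or above the MVPA cut-point."""
    if not epoch_counts:
        return 0.0
    mvpa = sum(1 for c in epoch_counts if c >= cutpoint)
    return 100.0 * mvpa / len(epoch_counts)

# Hypothetical epoch counts for two context segments of one practice.
segments = {
    "free-play": [900, 1200, 300, 800, 1500, 200],
    "fitness":   [600, 700, 100, 200, 150, 90],
}
for name, counts in segments.items():
    print(name, round(percent_time_mvpa(counts), 1))
```

    With each segment scored this way, the study's multilevel model then compares %time in MVPA across segment types while accounting for segments nested within practices.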

  6. NASA Planetary Visualization Tool

    NASA Astrophysics Data System (ADS)

    Hogan, P.; Kim, R.

    2004-12-01

    NASA World Wind allows one to zoom from satellite altitude into any place on Earth, leveraging the combination of high-resolution LandSat imagery and SRTM elevation data to experience Earth in visually rich 3D, just as if one were really there. NASA World Wind combines LandSat 7 imagery with Shuttle Radar Topography Mission (SRTM) elevation data for a dramatic view of the Earth at eye level. Users can literally fly across the world's terrain from any location in any direction. Particular focus was put on ease of use so that people of all ages can enjoy World Wind. All one needs to control World Wind is a two-button mouse. Additional guides and features can be accessed through a simplified menu. Navigation is automated with single clicks of a mouse, as well as the ability to type in any location and automatically zoom to it. NASA World Wind was designed to run on recent PC hardware with the same technology used by today's 3D video games. NASA World Wind delivers the NASA Blue Marble, spectacular true-color imagery of the entire Earth at 1 kilometer per pixel. Using NASA World Wind, you can continue to zoom past Blue Marble resolution to seamlessly experience the extremely detailed mosaic of LandSat 7 data at an impressive 15-meter-per-pixel resolution. NASA World Wind also delivers other color bands such as the infrared spectrum. The NASA Scientific Visualization Studio at Goddard Space Flight Center (GSFC) has produced a set of visually intense animations that demonstrate a variety of subjects such as hurricane dynamics and seasonal changes across the globe. NASA World Wind takes these animations and plays them directly on the world. The NASA Moderate Resolution Imaging Spectroradiometer (MODIS) produces a set of time-relevant planetary imagery that is updated every day. MODIS catalogs fires, floods, dust, smoke, storms, and volcanic activity. NASA World Wind produces an easily customized view of this information and marks it directly on the globe.

  7. Preliminary experience with a stereoscopic video system in a remotely piloted aircraft application

    NASA Technical Reports Server (NTRS)

    Rezek, T. W.

    1983-01-01

    Remote piloting video display development at the Dryden Flight Research Facility of NASA's Ames Research Center is summarized, and the reasons for considering stereo television are presented. Pertinent equipment is described. Limited flight experience is also discussed, along with recommendations for further study.

  8. National security and national competitiveness: Open source solutions; NASA requirements and capabilities

    NASA Technical Reports Server (NTRS)

    Cotter, Gladys A.

    1993-01-01

    Foreign competitors are challenging the world leadership of the U.S. aerospace industry, and increasingly tight budgets everywhere make international cooperation in aerospace science necessary. The NASA STI Program has as part of its mission to support NASA R&D, and to that end has developed a knowledge base of aerospace-related information known as the NASA Aerospace Database. The NASA STI Program is already involved in international cooperation with NATO/AGARD/TIP, CENDI, ICSU/ICSTI, and the U.S. Japan Committee on STI. With the new more open political climate, the perceived dearth of foreign information in the NASA Aerospace Database, and the development of the ESA database and DELURA, the German databases, the NASA STI Program is responding by sponsoring workshops on foreign acquisitions and by increasing its cooperation with international partners and with other U.S. agencies. The STI Program looks to the future of improved database access through networking and a GUI; new media; optical disk, video, and full text; and a Technology Focus Group that will keep the NASA STI Program current with technology.

  9. Science@NASA: Direct to People!

    NASA Technical Reports Server (NTRS)

    Koczor, Ronald J.; Adams, Mitzi; Gallagher, Dennis; Whitaker, Ann (Technical Monitor)

    2002-01-01

    Science@NASA is a science communication effort sponsored by NASA's Marshall Space Flight Center. It is the result of a four-year research project between Marshall, the University of Florida College of Journalism and Communications, and the internet communications company Bishop Web Works. The goals of Science@NASA are to inform, inspire, and involve people in the excitement of NASA science by bringing that science directly to them. We stress not only the reporting of the facts of a particular topic, but also the context and importance of the research. Science@NASA involves several levels of activity, from academic communications research to production of content for 6 websites, in an integrated process involving all phases of production. A Science Communications Roundtable Process is in place that includes scientists, managers, writers, editors, and Web technical experts. The close connection between the scientists and the writers/editors assures a high level of scientific accuracy in the finished products. The websites each have unique characters and are aimed at different audience segments: 1. http://science.nasa.gov (SNG) carries stories featuring various aspects of NASA science activity. The site carries 2 or 3 new stories each week in written and audio formats for science-attentive adults. 2. http://liftoff.msfc.nasa.gov features stories from SNG that are recast for a high school level audience. J-Track and J-Pass applets for tracking satellites are our most popular products. 3. http://kids.msfc.nasa.gov is the NASA Kids site, aimed at a middle school audience. The NASA Kids Club is a new feature at the site. 4. http://www.thursdaysclassroom.com features lesson plans and classroom activities for educators centered around one of the science stories carried on SNG. 5. http://www.spaceweather.com gives the status of solar activity and its interactions with the Earth's ionosphere and magnetosphere.

  10. NASA 360 - Talks Alien Ocean

    NASA Image and Video Library

    2015-11-13

    Could life exist on Europa? It may sound farfetched, but this Jovian moon is the most likely place to find life in our solar system thanks to an enormous underground ocean positioned just beneath its icy surface. Watch as Robert Pappalardo, Europa Project Scientist at NASA Jet Propulsion Laboratory, discusses Europa, its potential for life, and the upcoming mission that is being planned to visit this compelling moon. This video was developed from a live recording at the AIAA SPACE 2015 conference in September 2015. To watch the full talk given at the conference please visit: http://bit.ly/1LPWZwV

  11. Free-viewpoint video of human actors using multiple handheld Kinects.

    PubMed

    Ye, Genzhi; Liu, Yebin; Deng, Yue; Hasler, Nils; Ji, Xiangyang; Dai, Qionghai; Theobalt, Christian

    2013-10-01

    We present an algorithm for creating free-viewpoint video of interacting humans using three handheld Kinect cameras. Our method reconstructs deforming surface geometry and temporally varying texture of humans through estimation of human poses and camera poses for every time step of the RGBZ video. Skeletal configurations and camera poses are found by solving a joint energy minimization problem, which optimizes the alignment of RGBZ data from all cameras, as well as the alignment of human shape templates to the Kinect data. The energy function is based on a combination of geometric correspondence finding, implicit scene segmentation, and correspondence finding using image features. Finally, texture recovery is achieved through joint optimization on spatio-temporal RGB data using matrix completion. As opposed to previous methods, our algorithm succeeds on free-viewpoint video of human actors in general uncontrolled indoor scenes with potentially dynamic background, and it succeeds even if the cameras are moving.

  12. JPL-20180620-ECOSTRf-0001-NASAs ECOSTRESS on Space Station video file

    NASA Image and Video Library

    2018-06-25

    NASA's ECOsystem Spaceborne Thermal Radiometer Experiment on Space Station (ECOSTRESS) is a new instrument that will provide a unique, space-based measurement of how plants respond to changes in water availability. ECOSTRESS will launch from Cape Canaveral Air Force Station in Florida no earlier than June 29, 2018, and will be installed on the International Space Station.

  13. Modernization of B-2 Data, Video, and Control Systems Infrastructure

    NASA Technical Reports Server (NTRS)

    Cmar, Mark D.; Maloney, Christian T.; Butala, Vishal D.

    2012-01-01

    The National Aeronautics and Space Administration (NASA) Glenn Research Center (GRC) Plum Brook Station (PBS) Spacecraft Propulsion Research Facility, commonly referred to as B-2, is NASA's third largest thermal-vacuum facility with propellant systems capability. B-2 has completed a modernization effort of its facility legacy data, video, and control systems infrastructure to accommodate modern integrated testing and Information Technology (IT) Security requirements. Integrated systems tests have been conducted to demonstrate the new data, video, and control systems functionality and capability. Discrete analog signal conditioners have been replaced by new programmable signal processing hardware that is integrated with the data system. This integration supports automated calibration and verification of the analog subsystem. Modern measurement systems analysis (MSA) tools are being developed to help verify system health and measurement integrity. Legacy hard-wired digital data systems have been replaced by distributed Fibre Channel (FC) network-connected digitizers where high-speed sampling rates have increased to 256,000 samples per second. Several analog video cameras have been replaced by digital image and storage systems. Hard-wired analog control systems have been replaced by Programmable Logic Controllers (PLC), fiber optic network (FON) infrastructure, and human machine interface (HMI) operator screens. New modern IT Security procedures and schemes have been employed to control data access and process control flows. Due to the nature of testing possible at B-2, flexibility and configurability of systems have been central to the architecture during modernization.

  15. The Case of the Great Space Exploration: An Educator Guide with Activities in Mathematics, Science, and Technology. The NASA SCI Files. EG-2004-09-12-LARC

    ERIC Educational Resources Information Center

    Ricles, Shannon; Jaramillo, Becky; Fargo, Michelle

    2004-01-01

    In this companion to the "NASA SCI Files" episode "The Case of the Great Space Exploration," the tree house detectives learn about NASA's new vision for exploring space. In four segments aimed at grades 3-5, students learn about a variety of aspects of space exploration. Each segment of the guide includes an overview, a set of objectives,…

  16. Optical mass memory system (AMM-13). AMM-13 system segment specification

    NASA Technical Reports Server (NTRS)

    Bailey, G. A.

    1980-01-01

    The performance, design, development, and test requirements for an optical mass data storage and retrieval system prototype (AMM-13) are established. This system interfaces to other system segments of the NASA End-to-End Data System via the Data Base Management System segment and is designed to have a storage capacity of 10 to the 13th power bits (10 to the 12th power bits on line). The major functions of the system include control, input and output, recording of ingested data, fiche processing/replication and storage and retrieval.

  17. NASA's Webb "Pathfinder Telescope" Successfully Completes First Super-Cold Optical Test

    NASA Image and Video Library

    2017-12-08

    Testing is a crucial part of NASA's success on Earth and in space. So, as the actual flight components of NASA's James Webb Space Telescope come together, engineers are testing the non-flight equipment to ensure that later tests on the real Webb telescope go safely and according to plan. Recently, the "pathfinder telescope," or just “Pathfinder,” completed its first super-cold optical test, which resulted in many first-of-a-kind demonstrations. "This test is the first dry-run of the equipment and procedures we will use to conduct an end-to-end optical test of the flight telescope and instruments," said Mark Clampin, Webb telescope Observatory Project Scientist at NASA's Goddard Space Flight Center in Greenbelt, Maryland. "It provides confidence that once the flight telescope is ready, we are fully prepared for a successful test of the flight hardware." The Pathfinder is a non-flight replica of the Webb telescope’s center section backplane, or “backbone,” that includes mirrors. The flight backplane comes in three segments, a center section and two wing-like parts, all of which will support large hexagonal mirrors on the Webb telescope. The Pathfinder consists only of the center part of the backplane. However, during the test, it held two full-size spare primary mirror segments and a full-size spare secondary mirror to demonstrate the ability to optically test and align the telescope at the planned operating temperature of -400 degrees Fahrenheit (-240 Celsius). Read more: www.nasa.gov/feature/goddard/nasas-webb-pathfinder-telesc... Credit: NASA/Goddard/Chris Gunn NASA image use policy. NASA Goddard Space Flight Center enables NASA’s mission through four scientific endeavors: Earth Science, Heliophysics, Solar System Exploration, and Astrophysics. Goddard plays a leading role in NASA’s accomplishments by contributing compelling scientific knowledge to advance the Agency’s mission.

  18. Leveraging Automatic Speech Recognition Errors to Detect Challenging Speech Segments in TED Talks

    ERIC Educational Resources Information Center

    Mirzaei, Maryam Sadat; Meshgi, Kourosh; Kawahara, Tatsuya

    2016-01-01

    This study investigates the use of Automatic Speech Recognition (ASR) systems to epitomize second language (L2) listeners' problems in perception of TED talks. ASR-generated transcripts of videos often involve recognition errors, which may indicate difficult segments for L2 listeners. This paper aims to discover the root-causes of the ASR errors…

  19. No damage to rail cars or SRB segments in derailment

    NASA Technical Reports Server (NTRS)

    2000-01-01

    One of two solid rocket booster rail cars is off the track after being involved in a minor derailment incident during a routine movement on the tracks. The rail cars were being moved as part of a standard operation to "order" the cars, placing them into the proper sequence for upcoming segment processing activities. The rear wheels of one car and the front wheels of the car behind it slid off the tracks while passing through a railway switch onto a siding. They were traveling at approximately 3 miles per hour at the time, about normal walking speed. No damage occurred to the SRB segments or to the devices that secure the segments to the rail cars. The incident occurred on KSC property, just north of the NASA Causeway in the KSC Industrial Area.

  20. Achieving Space Shuttle Abort-to-Orbit Using the Five-Segment Booster

    NASA Technical Reports Server (NTRS)

    Craft, Joe; Ess, Robert; Sauvageau, Don

    2003-01-01

    The Five-Segment Booster design concept was evaluated by a team that determined the concept to be feasible and capable of achieving the desired abort-to-orbit capability when used in conjunction with increased Space Shuttle main engine throttle capability. The team (NASA Johnson Space Center, NASA Marshall Space Flight Center, ATK Thiokol Propulsion, United Space Alliance, Lockheed-Martin Space Systems, and Boeing) selected the concept that provided abort-to-orbit capability while: 1) minimizing Shuttle system impacts by maintaining the current interface requirements with the orbiter, external tank, and ground operation systems; 2) minimizing changes to the flight-proven design, materials, and processes of the current four-segment Shuttle booster; 3) maximizing use of existing booster hardware; and 4) taking advantage of demonstrated Shuttle main engine throttle capability. The added capability can also provide Shuttle mission planning flexibility. Additional performance could be used to: enable implementation of more desirable Shuttle safety improvements, such as crew escape, while maintaining current payload capability; compensate for off-nominal performance in no-fail missions; and support missions to high altitudes and inclinations. This concept is a low-cost, low-risk approach to meeting Shuttle safety upgrade objectives. The Five-Segment Booster also has the potential to support future heavy-lift missions.

  1. Development of a video-delivered relaxation treatment of late-life anxiety for veterans.

    PubMed

    Gould, Christine E; Zapata, Aimee Marie L; Bruce, Janine; Bereknyei Merrell, Sylvia; Wetherell, Julie Loebach; O'Hara, Ruth; Kuhn, Eric; Goldstein, Mary K; Beaudreau, Sherry A

    2017-10-01

    Behavioral treatments reduce anxiety, yet many older adults may not have access to these efficacious treatments. To address this need, we developed and evaluated the feasibility and acceptability of a video-delivered anxiety treatment for older Veterans. This treatment program, BREATHE (Breathing, Relaxation, and Education for Anxiety Treatment in the Home Environment), combines psychoeducation, diaphragmatic breathing, and progressive muscle relaxation training with engagement in activities. A mixed methods concurrent study design was used to examine the clarity of the treatment videos. We conducted semi-structured interviews with 20 Veterans (M age = 69.5, SD = 7.3 years; 55% White, Non-Hispanic) and collected ratings of video clarity. Quantitative ratings revealed that 100% of participants generally or definitely could follow breathing and relaxation video instructions. Qualitative findings, however, demonstrated more variability in the extent to which each video segment was clear. Participants identified both immediate benefits and motivation challenges associated with a video-delivered treatment. Participants suggested that some patients may need encouragement, whereas others need face-to-face therapy. Quantitative ratings of video clarity and qualitative findings highlight the feasibility of a video-delivered treatment for older Veterans with anxiety. Our findings demonstrate the importance of ensuring patients can follow instructions provided in self-directed treatments and the role that an iterative testing process has in addressing these issues. Next steps include testing the treatment videos with older Veterans with anxiety disorders.

  2. The Analysis of Image Segmentation Hierarchies with a Graph-based Knowledge Discovery System

    NASA Technical Reports Server (NTRS)

    Tilton, James C.; Cook, Diane J.; Ketkar, Nikhil; Aksoy, Selim

    2008-01-01

    Currently available pixel-based analysis techniques do not effectively extract the information content from the increasingly available high spatial resolution remotely sensed imagery data. A general consensus is that object-based image analysis (OBIA) is required to effectively analyze this type of data. OBIA is usually a two-stage process: image segmentation followed by an analysis of the segmented objects. We are exploring an approach to OBIA in which hierarchical image segmentations provided by the Recursive Hierarchical Segmentation (RHSEG) software developed at NASA GSFC are analyzed by the Subdue graph-based knowledge discovery system developed by a team at Washington State University. In this paper we discuss our initial approach to representing the RHSEG-produced hierarchical image segmentations in a graphical form understandable by Subdue, and provide results on real and simulated data. We also discuss planned improvements designed to more effectively and completely convey the hierarchical segmentation information to Subdue and to improve processing efficiency.

  3. Space Adaptation of Active Mirror Segment Concepts

    NASA Technical Reports Server (NTRS)

    Ames, Gregory H.

    1999-01-01

    This report summarizes the results of a three-year effort by Blue Line Engineering Co. to advance the state of segmented mirror systems in several separate but related areas. The initial set of tasks was designed to address the issues of system level architecture, digital processing system, cluster level support structures, and advanced mirror fabrication concepts. Later in the project new tasks were added to provide support to the existing segmented mirror testbed at Marshall Space Flight Center (MSFC) in the form of upgrades to the 36 subaperture wavefront sensor. Still later, tasks were added to build and install a new system processor based on the results of the new system architecture. The project was successful in achieving a number of important results. These include the following most notable accomplishments: 1) The creation of a new modular digital processing system that is extremely capable and may be applied to a wide range of segmented mirror systems as well as many classes of Multiple Input Multiple Output (MIMO) control systems such as active structures or industrial automation. 2) A new graphical user interface was created for operation of segmented mirror systems. 3) The development of a high bit rate serial data loop that permits bi-directional flow of data to and from as many as 39 segments daisy-chained to form a single cluster of segments. 4) Upgrade of the 36 subaperture Hartmann type Wave Front Sensor (WFS) of the Phased Array Mirror, Extendible Large Aperture (PAMELA) testbed at MSFC resulting in a 40 to 50X improvement in SNR which in turn enabled NASA personnel to achieve many significant strides in improved closed-loop system operation in 1998. 5) A new system level processor was built and delivered to MSFC for use with the PAMELA testbed. This new system featured a new graphical user interface to replace the obsolete and non-supported menu system originally delivered with the PAMELA system.
The hardware featured Blue Line's new stackable

  4. Multiple Vehicle Detection and Segmentation in Malaysia Traffic Flow

    NASA Astrophysics Data System (ADS)

    Fariz Hasan, Ahmad; Fikri Che Husin, Mohd; Affendi Rosli, Khairul; Norhafiz Hashim, Mohd; Faiz Zainal Abidin, Amar

    2018-03-01

    Vision-based systems are widely used in the field of Intelligent Transportation Systems (ITS) to extract large amounts of information for analyzing traffic scenes. The rapid growth in the number of vehicles on the road, together with the significant increase in cameras, has dictated the need for traffic surveillance systems, which can take over some of the burdensome tasks performed by human operators in traffic monitoring centres. This paper concentrates on developing multiple vehicle detection and segmentation for monitoring through Closed Circuit Television (CCTV) video. The system is able to automatically segment vehicles extracted from heavy traffic scenes by optical flow estimation alongside a blob analysis technique in order to detect the moving vehicles. Prior to segmentation, the blob analysis technique computes the region of interest corresponding to each moving vehicle, which is then used to create a bounding box on that particular vehicle. Experimental validation of the proposed system was performed, and the algorithm is demonstrated on various sets of traffic scenes.
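The blob-analysis step described in the abstract — grouping foreground pixels into connected regions and computing a bounding box for each — can be sketched in plain Python. This is an illustrative sketch, not the authors' implementation; the function name, the choice of 4-connectivity, and the `min_area` filter are assumptions:

```python
from collections import deque

def blob_bounding_boxes(mask, min_area=1):
    """4-connected component labeling on a binary foreground mask.

    Returns a list of (row_min, col_min, row_max, col_max) bounding
    boxes, one per blob whose pixel count is at least min_area.
    """
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    boxes = []
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                # flood-fill one blob with BFS
                q = deque([(r, c)])
                seen[r][c] = True
                rmin = rmax = r
                cmin = cmax = c
                area = 0
                while q:
                    y, x = q.popleft()
                    area += 1
                    rmin, rmax = min(rmin, y), max(rmax, y)
                    cmin, cmax = min(cmin, x), max(cmax, x)
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < rows and 0 <= nx < cols \
                                and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((ny, nx))
                if area >= min_area:
                    boxes.append((rmin, cmin, rmax, cmax))
    return boxes
```

In a full pipeline the mask would come from optical-flow magnitude thresholding; here any binary 2-D array works.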

  5. NASA Virtual Conferences and Instruction Over the Internet

    NASA Technical Reports Server (NTRS)

    Leon, Mark; McCurdy, Andrea; Wood, Charles

    1997-01-01

    Distance learning is not new. Since radio first embellished our culture, distance learning has taken on many forms. With the advent of television, video tape, and satellite link-ups, multimedia has taken a presence in our remote learning environment. Now, in the information age, new models for bringing the best education to people throughout the world are in their early stages. Recent "Information Age" technological developments have made key advancements to distance learning through the greater bandwidths now available over the Internet and a broader communications infrastructure that extends to classrooms throughout the country and the world. Further, new software compression technology allows audio and video to be communicated over the Internet much more efficiently; larger amounts of data can be transferred to remote sites at less cost. The purpose of this paper is to demonstrate the use of state-of-the-art technology in the educational community. The focus will be on virtual conferences, virtual instruction, and remote education. The techniques herein have been developed by NASA and the University of North Dakota (UND) through the use of existing software and hardware purchased in the United States. NASA has awarded UND a grant for continued research in this area based on their pioneering effort to date. NASA has been conducting "Virtual Conferences" from Ames Research Center in order to make unique educational opportunities available to participants across the country and internationally. Through the use of this technical approach, hundreds of teachers have been able to attend events where physical or financial barriers traditionally prevented their attendance. This technique is currently being adopted by industry due to its scalable merit.

  6. NASA's "Webb-cam" Captures Engineers at Work on Webb at Johnson Space Center

    NASA Image and Video Library

    2017-05-30

    Now that NASA's James Webb Space Telescope has moved to NASA's Johnson Space Center in Houston, Texas, a special Webb camera was installed there to continue providing daily video feeds on the telescope's progress. Space enthusiasts, who are fascinated to see how this next generation space telescope has come together and how it is being tested, are able to see the telescope’s progress as it happens by watching the Webb-cam feed online. The Web camera at NASA’s Johnson Space Center can be seen online at: jwst.nasa.gov/, with larger views of the cams available at: jwst.nasa.gov/webcam.html. Read more: go.nasa.gov/2rQYpT2 NASA image use policy. NASA Goddard Space Flight Center enables NASA’s mission through four scientific endeavors: Earth Science, Heliophysics, Solar System Exploration, and Astrophysics. Goddard plays a leading role in NASA’s accomplishments by contributing compelling scientific knowledge to advance the Agency’s mission.

  7. ASSESSMENT OF YOUTUBE VIDEOS AS A SOURCE OF INFORMATION ON MEDICATION USE IN PREGNANCY

    PubMed Central

    Hansen, Craig; Interrante, Julia D; Ailes, Elizabeth C; Frey, Meghan T; Broussard, Cheryl S; Godoshian, Valerie J; Lewis, Courtney; Polen, Kara ND; Garcia, Amanda P; Gilboa, Suzanne M

    2015-01-01

    Background When making decisions about medication use in pregnancy, women consult many information sources, including the Internet. The aim of this study was to assess the content of publicly accessible YouTube videos that discuss medication use in pregnancy. Methods Using 2,023 distinct combinations of search terms related to medications and pregnancy, we extracted metadata from YouTube videos using a YouTube video Application Programming Interface. Relevant videos were defined as those with a medication search term and a pregnancy-related search term in either the video title or description. We viewed relevant videos and abstracted content from each video into a database. We documented whether videos implied each medication to be ‘safe’ or ‘unsafe’ in pregnancy and compared that assessment with the medication’s Teratogen Information System (TERIS) rating. Results After viewing 651 videos, 314 videos with information about medication use in pregnancy were available for the final analyses. The majority of videos were from law firms (67%), television segments (10%), or physicians (8%). Selective serotonin reuptake inhibitors (SSRIs) were the most common medication class named (225 videos, 72%), and 88% of videos about SSRIs indicated they were ‘unsafe’ for use in pregnancy. However, the TERIS ratings for medication products in this class range from ‘unlikely’ to ‘minimal’ teratogenic risk. Conclusion For the majority of medications, current YouTube video content does not adequately reflect what is known about the safety of their use in pregnancy and should be interpreted cautiously. However, YouTube could serve as a valuable platform for communicating evidence-based medication safety information. PMID:26541372
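The study's relevance rule — a video counts as relevant if a medication search term and a pregnancy-related search term both appear in its title or description — is straightforward to express in code. A minimal sketch (the function and field names are assumptions, not the authors' code):

```python
def is_relevant(video, medication_terms, pregnancy_terms):
    """Apply the relevance rule from the study: a video qualifies if a
    medication term AND a pregnancy-related term both appear in its
    title or description (case-insensitive substring match)."""
    text = (video.get("title", "") + " " + video.get("description", "")).lower()
    has_med = any(term.lower() in text for term in medication_terms)
    has_preg = any(term.lower() in text for term in pregnancy_terms)
    return has_med and has_preg
```

Applied to metadata pulled from a video API, this filter would reduce the candidate set to the videos worth viewing and abstracting.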

  8. Multilevel analysis of sports video sequences

    NASA Astrophysics Data System (ADS)

    Han, Jungong; Farin, Dirk; de With, Peter H. N.

    2006-01-01

    We propose a fully automatic and flexible framework for analysis and summarization of tennis broadcast video sequences, using visual features and specific game-context knowledge. Our framework can analyze a tennis video sequence at three levels, which provides a broad range of different analysis results. The proposed framework includes novel pixel-level and object-level tennis video processing algorithms, such as a moving-player detection taking both the color and the court (playing-field) information into account, and a player-position tracking algorithm based on a 3-D camera model. Additionally, we employ scene-level models for detecting events, like service, base-line rally and net-approach, based on a number of real-world visual features. The system can summarize three forms of information: (1) all court-view playing frames in a game, (2) the moving trajectory and real-speed of each player, as well as relative position between the player and the court, (3) the semantic event segments in a game. The proposed framework is flexible in choosing the level of analysis that is desired. It is effective because the framework makes use of several visual cues obtained from the real-world domain to model important events like service, thereby increasing the accuracy of the scene-level analysis. The paper presents attractive experimental results highlighting the system efficiency and analysis capabilities.

  9. Assessment of YouTube videos as a source of information on medication use in pregnancy.

    PubMed

    Hansen, Craig; Interrante, Julia D; Ailes, Elizabeth C; Frey, Meghan T; Broussard, Cheryl S; Godoshian, Valerie J; Lewis, Courtney; Polen, Kara N D; Garcia, Amanda P; Gilboa, Suzanne M

    2016-01-01

    When making decisions about medication use in pregnancy, women consult many information sources, including the Internet. The aim of this study was to assess the content of publicly accessible YouTube videos that discuss medication use in pregnancy. Using 2023 distinct combinations of search terms related to medications and pregnancy, we extracted metadata from YouTube videos using a YouTube video Application Programming Interface. Relevant videos were defined as those with a medication search term and a pregnancy-related search term in either the video title or description. We viewed relevant videos and abstracted content from each video into a database. We documented whether videos implied each medication to be "safe" or "unsafe" in pregnancy and compared that assessment with the medication's Teratogen Information System (TERIS) rating. After viewing 651 videos, 314 videos with information about medication use in pregnancy were available for the final analyses. The majority of videos were from law firms (67%), television segments (10%), or physicians (8%). Selective serotonin reuptake inhibitors (SSRIs) were the most common medication class named (225 videos, 72%), and 88% of videos about SSRIs indicated that they were unsafe for use in pregnancy. However, the TERIS ratings for medication products in this class range from "unlikely" to "minimal" teratogenic risk. For the majority of medications, current YouTube video content does not adequately reflect what is known about the safety of their use in pregnancy and should be interpreted cautiously. However, YouTube could serve as a platform for communicating evidence-based medication safety information. Copyright © 2015 John Wiley & Sons, Ltd.

  10. NASA's Hubble Shows Milky Way is Destined for Head-On Collision

    NASA Image and Video Library

    2017-12-08

    NASA image release Thursday, May 31, 2012 To view a video from this Hubble release go to: www.flickr.com/photos/gsfc/7309212940 Caption: This illustration shows a stage in the predicted merger between our Milky Way galaxy and the neighboring Andromeda galaxy, as it will unfold over the next several billion years. In this image, representing Earth's night sky in 3.75 billion years, Andromeda (left) fills the field of view and begins to distort the Milky Way with tidal pull. Credit: NASA; ESA; Z. Levay and R. van der Marel, STScI; T. Hallas; and A. Mellinger To read more go to: www.nasa.gov/mission_pages/hubble/science/milky-way-colli...

  11. Accuracy of sign interpreting and real-time captioning of science videos for the delivery of instruction to deaf students

    NASA Astrophysics Data System (ADS)

    Sadler, Karen L.

    2009-04-01

    The purpose of this study was to quantitatively examine the impact of third-party support service providers on the quality of science information available to deaf students in regular science classrooms. Three different videotapes that were developed by NASA for high school science classrooms were selected for the study, allowing for different concepts and vocabulary to be examined. The focus was on the accuracy of translation as measured by the number of key science words included in the transcripts (captions) or videos (interpreted). Data were collected via transcripts completed by CART (computer assisted real-time captionists) or through videos of sign language interpreters. All participants were required to listen to and translate these NASA educational videos with no prior experience with this information so as not to influence their delivery. CART personnel using captions were found to be significantly more accurate in the delivery of science words as compared to the sign language interpreters in this study.

  12. Audio-based queries for video retrieval over Java enabled mobile devices

    NASA Astrophysics Data System (ADS)

    Ahmad, Iftikhar; Cheikh, Faouzi Alaya; Kiranyaz, Serkan; Gabbouj, Moncef

    2006-02-01

    In this paper we propose a generic framework for efficient retrieval of audiovisual media based on its audio content. This framework is implemented in a client-server architecture where the client application is developed in Java to be platform independent whereas the server application is implemented for the PC platform. The client application adapts to the characteristics of the mobile device where it runs, such as screen size and commands. The entire framework is designed to take advantage of the high-level segmentation and classification of audio content to improve speed and accuracy of audio-based media retrieval. Therefore, the primary objective of this framework is to provide an adaptive basis for performing efficient video retrieval operations based on the audio content and types (i.e. speech, music, fuzzy and silence). Experimental results confirm that such an audio-based video retrieval scheme can be used from mobile devices to search and retrieve video clips efficiently over wireless networks.

  13. NASA Science Data Processing for SNPP

    NASA Astrophysics Data System (ADS)

    Hall, A.; Behnke, J.; Lowe, D. R.; Ho, E. L.

    2014-12-01

    NASA's ESDIS Project has been operating the Suomi National Polar-Orbiting Partnership (SNPP) Science Data Segment (SDS) since the launch in October 2011. The science data processing system includes a Science Data Depository and Distribution Element (SD3E) and five Product Evaluation and Analysis Tool Elements (PEATEs): Land, Ocean, Atmosphere, Ozone, and Sounder. The SDS has been responsible for assessing Environmental Data Records (EDRs) for climate quality, providing and demonstrating algorithm improvements/enhancements and supporting the calibration/validation activities as well as instrument calibration and sensor table uploads for mission planning. The SNPP also flies two NASA instruments: OMPS Limb and CERES. The SNPP SDS has been responsible for producing, archiving and distributing the standard products for those instruments in close association with their NASA science teams. The PEATEs leveraged existing science data processing techniques developed under the EOSDIS Program. This enabled the PEATEs to do an excellent job in supporting Science Team analysis for SNPP. The SDS acquires data from three sources: NESDIS IDPS (Raw Data Records (RDRs)), GRAVITE (Retained Intermediate Products (RIPs)), and the NOAA/CLASS (higher level products). The SD3E component aggregates the RDRs, and distributes them to each of the PEATEs for further analysis and processing. It provides a ~32 day rolling storage of data, available for pickup by the PEATEs. The current system used by NASA will be presented along with plans for streamlining the system in support of continuing NASA's EOS measurements.

  14. Astrometric and Photometric Analysis of the September 2008 ATV-1 Re-Entry Event

    NASA Technical Reports Server (NTRS)

    Mulrooney, Mark K.; Barker, Edwin S.; Maley, Paul D.; Beaulieu, Kevin R.; Stokely, Christopher L.

    2008-01-01

    NASA utilized Image Intensified Video Cameras for ATV data acquisition from a jet flying at 12.8 km. Afterwards the video was digitized and then analyzed with a modified commercial software package, Image Systems Trackeye. Astrometric results were limited by saturation, plate scale, and imposed linear plate solution based on field reference stars. Time-dependent fragment angular trajectories, velocities, accelerations, and luminosities were derived in each video segment. It was evident that individual fragments behave differently. Photometric accuracy was insufficient to confidently assess correlations between luminosity and fragment spatial behavior (velocity, deceleration). Use of high resolution digital video cameras in future should remedy this shortcoming.

  15. A generic flexible and robust approach for intelligent real-time video-surveillance systems

    NASA Astrophysics Data System (ADS)

    Desurmont, Xavier; Delaigle, Jean-Francois; Bastide, Arnaud; Macq, Benoit

    2004-05-01

    In this article we present a generic, flexible and robust approach for an intelligent real-time video-surveillance system. A previous version of the system was presented in [1]. The goal of these advanced tools is to provide help to operators by detecting events of interest in visual scenes, highlighting alarms and computing statistics. The proposed system is a multi-camera platform able to handle different standards of video inputs (composite, IP, IEEE1394) and which can basically compress (MPEG4), store and display them. This platform also integrates advanced video analysis tools, such as motion detection, segmentation, tracking and interpretation. The design of the architecture is optimised to playback, display, and process video flows in an efficient way for video-surveillance applications. The implementation is distributed on a scalable computer cluster based on Linux and IP network. It relies on POSIX threads for multitasking scheduling. Data flows are transmitted between the different modules using multicast technology and under control of a TCP-based command network (e.g. for bandwidth occupation control). We report here some results and we show the potential use of such a flexible system in third generation video surveillance systems. We illustrate the interest of the system in a real case study: indoor surveillance.
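As a simple illustration of the motion-detection component such platforms integrate, running-average background subtraction can be sketched in a few lines of Python. This is a generic textbook technique, not the system described in the paper; the function name and parameters are assumptions:

```python
def detect_motion(frames, alpha=0.5, threshold=10):
    """Frame-by-frame motion detection against a running-average background.

    frames: list of grayscale frames, each a list of pixel rows.
    Returns, for each frame after the first, the count of pixels whose
    absolute difference from the background model exceeds `threshold`.
    The background is updated as B = (1 - alpha) * B + alpha * frame.
    """
    background = [[float(p) for p in row] for row in frames[0]]
    motion_counts = []
    for frame in frames[1:]:
        changed = 0
        for r, row in enumerate(frame):
            for c, pixel in enumerate(row):
                if abs(pixel - background[r][c]) > threshold:
                    changed += 1
                # blend the new frame into the background model
                background[r][c] = (1 - alpha) * background[r][c] + alpha * pixel
        motion_counts.append(changed)
    return motion_counts
```

In a real system the per-pixel change map, rather than just its count, would feed the segmentation and tracking stages.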

  16. Model Deformation Measurements at NASA Langley Research Center

    NASA Technical Reports Server (NTRS)

    Burner, A. W.

    1998-01-01

    Only recently have large amounts of model deformation data been acquired in NASA wind tunnels. This acquisition of model deformation data was made possible by the development of an automated video photogrammetric system to measure the changes in wing twist and bending under aerodynamic load. The measurement technique is based upon a single view photogrammetric determination of two dimensional coordinates of wing targets with a fixed third dimensional coordinate, namely the spanwise location. A major consideration in the development of the measurement system was that use of the technique must not appreciably reduce wind tunnel productivity. The measurement technique has been used successfully for a number of tests at four large production wind tunnels at NASA and a dedicated system is nearing completion for a fifth facility. These facilities are the National Transonic Facility, the Transonic Dynamics Tunnel, and the Unitary Plan Wind Tunnel at NASA Langley, and the 12-FT Pressure Tunnel at NASA Ames. A dedicated system for the Langley 16-Foot Transonic Tunnel is scheduled to be used for the first time for a test in September. The advantages, limitations, and strategy of the technique as currently used in NASA wind tunnels are presented. Model deformation data are presented which illustrate the value of these measurements. Plans for further enhancements to the technique are presented.

  17. Video event classification and image segmentation based on noncausal multidimensional hidden Markov models.

    PubMed

    Ma, Xiang; Schonfeld, Dan; Khokhar, Ashfaq A

    2009-06-01

    In this paper, we propose a novel solution to an arbitrary noncausal, multidimensional hidden Markov model (HMM) for image and video classification. First, we show that the noncausal model can be solved by splitting it into multiple causal HMMs and simultaneously solving each causal HMM using a fully synchronous distributed computing framework, therefore referred to as distributed HMMs. Next we present an approximate solution to the multiple causal HMMs that is based on an alternating updating scheme and assumes a realistic sequential computing framework. The parameters of the distributed causal HMMs are estimated by extending the classical 1-D training and classification algorithms to multiple dimensions. The proposed extension to arbitrary causal, multidimensional HMMs allows state transitions that are dependent on all causal neighbors. We, thus, extend three fundamental algorithms to multidimensional causal systems, i.e., 1) expectation-maximization (EM), 2) general forward-backward (GFB), and 3) Viterbi algorithms. In the simulations, we choose to limit ourselves to a noncausal 2-D model whose noncausality is along a single dimension, in order to significantly reduce the computational complexity. Simulation results demonstrate the superior performance, higher accuracy rate, and applicability of the proposed noncausal HMM framework to image and video classification.
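The classical 1-D forward algorithm that the paper generalizes to multiple dimensions can be sketched as follows. This is a minimal illustration of the standard textbook algorithm, not the authors' distributed multidimensional extension:

```python
def forward(obs, pi, A, B):
    """Classical 1-D HMM forward pass.

    obs: sequence of observation indices; pi[i]: initial probability of
    state i; A[i][j]: transition probability i -> j; B[i][o]: emission
    probability of observation o in state i.
    Returns the likelihood P(obs | model).
    """
    n = len(pi)
    # initialize with the first observation
    alpha = [pi[i] * B[i][obs[0]] for i in range(n)]
    # inductively fold in each subsequent observation
    for o in obs[1:]:
        alpha = [
            sum(alpha[i] * A[i][j] for i in range(n)) * B[j][o]
            for j in range(n)
        ]
    return sum(alpha)
```

The backward pass, Viterbi decoding, and EM re-estimation all share this trellis structure, which is what the paper extends to state transitions dependent on all causal neighbors.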

  18. Probabilistic fusion of stereo with color and contrast for bilayer segmentation.

    PubMed

    Kolmogorov, Vladimir; Criminisi, Antonio; Blake, Andrew; Cross, Geoffrey; Rother, Carsten

    2006-09-01

    This paper describes models and algorithms for the real-time segmentation of foreground from background layers in stereo video sequences. Automatic separation of layers from color/contrast or from stereo alone is known to be error-prone. Here, color, contrast, and stereo matching information are fused to infer layers accurately and efficiently. The first algorithm, Layered Dynamic Programming (LDP), solves stereo in an extended six-state space that represents both foreground/background layers and occluded regions. The stereo-match likelihood is then fused with a contrast-sensitive color model that is learned on-the-fly and stereo disparities are obtained by dynamic programming. The second algorithm, Layered Graph Cut (LGC), does not directly solve stereo. Instead, the stereo match likelihood is marginalized over disparities to evaluate foreground and background hypotheses and then fused with a contrast-sensitive color model like the one used in LDP. Segmentation is solved efficiently by ternary graph cut. Both algorithms are evaluated with respect to ground truth data and found to have similar performance, substantially better than either stereo or color/contrast alone. However, their characteristics with respect to computational efficiency are rather different. The algorithms are demonstrated in the application of background substitution and shown to give good quality composite video output.
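The LGC idea of marginalizing the stereo-match likelihood over disparities, rather than committing to a single disparity, can be sketched for one pixel as follows. This is an illustrative toy with hypothetical function and variable names; normalizing the sums into pseudo-probabilities is an added assumption, not the paper's formulation:

```python
def layer_evidence(match_likelihood, fg_disparities, bg_disparities):
    """Marginalize a per-pixel stereo-match likelihood over disparities.

    match_likelihood: dict mapping disparity -> likelihood of the match.
    fg_disparities / bg_disparities: disparity ranges associated with
    the foreground and background layers.
    Returns normalized (foreground, background) evidence for the pixel.
    """
    fg = sum(match_likelihood[d] for d in fg_disparities)
    bg = sum(match_likelihood[d] for d in bg_disparities)
    total = fg + bg
    return fg / total, bg / total
```

In the full algorithm this per-pixel evidence is fused with the contrast-sensitive color model and the layer labels are then solved jointly by graph cut.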

  19. An Orion/Ares I Launch and Ascent Simulation: One Segment of the Distributed Space Exploration Simulation (DSES)

    NASA Technical Reports Server (NTRS)

    Chung, Victoria I.; Crues, Edwin Z.; Blum, Mike G.; Alofs, Cathy; Busto, Juan

    2007-01-01

    This paper describes the architecture and implementation of a distributed launch and ascent simulation of NASA's Orion spacecraft and Ares I launch vehicle. This simulation is one segment of the Distributed Space Exploration Simulation (DSES) Project. The DSES project is a research and development collaboration between NASA centers which investigates technologies and processes for distributed simulation of complex space systems in support of NASA's Exploration Initiative. DSES is developing an integrated end-to-end simulation capability to support NASA development and deployment of new exploration spacecraft and missions. This paper describes the first in a collection of simulation capabilities that DSES will support.

  20. Hybrid vision activities at NASA Johnson Space Center

    NASA Technical Reports Server (NTRS)

    Juday, Richard D.

    1990-01-01

    NASA's Johnson Space Center in Houston, Texas, is active in several aspects of hybrid image processing. (The term hybrid image processing refers to a system that combines digital and photonic processing). The major thrusts are autonomous space operations such as planetary landing, servicing, and rendezvous and docking. By processing images in non-Cartesian geometries to achieve shift invariance to canonical distortions, researchers use certain aspects of the human visual system for machine vision. That technology flow is bidirectional; researchers are investigating the possible utility of video-rate coordinate transformations for human low-vision patients. Man-in-the-loop teleoperations are also supported by the use of video-rate image-coordinate transformations, as researchers plan to use bandwidth compression tailored to the varying spatial acuity of the human operator. Technological elements being developed in the program include upgraded spatial light modulators, real-time coordinate transformations in video imagery, synthetic filters that robustly allow estimation of object pose parameters, convolutionally blurred filters that have continuously selectable invariance to such image changes as magnification and rotation, and optimization of optical correlation done with spatial light modulators that have limited range and couple both phase and amplitude in their response.

  1. NASA-Ames three-dimensional potential flow analysis system (POTFAN) equation solver code (SOLN) version 1

    NASA Technical Reports Server (NTRS)

    Davis, J. E.; Bonnett, W. S.; Medan, R. T.

    1976-01-01

    A computer program known as SOLN was developed as an independent segment of the NASA-Ames three-dimensional potential flow analysis system (POTFAN) to solve systems of linear algebraic equations. Methods used include: LU decomposition, Householder's method, a partitioning scheme, and a block successive relaxation method. Due to the independent modular nature of the program, it may be used by itself and not necessarily in conjunction with other segments of the POTFAN system.
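One of the methods the abstract lists, LU decomposition, can be illustrated with a minimal Doolittle factorization followed by triangular solves. This is a generic sketch without pivoting, not the SOLN code itself (a production solver would pivot for numerical stability):

```python
def lu_solve(A, b):
    """Solve A x = b via Doolittle LU decomposition (no pivoting),
    then forward substitution (L y = b) and back substitution (U x = y)."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for i in range(n):
        L[i][i] = 1.0
        for j in range(i, n):        # row i of U
            U[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(i))
        for j in range(i + 1, n):    # column i of L
            L[j][i] = (A[j][i] - sum(L[j][k] * U[k][i] for k in range(i))) / U[i][i]
    # forward substitution: L y = b
    y = [0.0] * n
    for i in range(n):
        y[i] = b[i] - sum(L[i][k] * y[k] for k in range(i))
    # back substitution: U x = y
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (y[i] - sum(U[i][k] * x[k] for k in range(i + 1, n))) / U[i][i]
    return x
```

Once A is factored, additional right-hand sides cost only the two triangular solves, which is why LU is a natural choice for repeated panel-method solves.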

  2. Automated multiple target detection and tracking in UAV videos

    NASA Astrophysics Data System (ADS)

    Mao, Hongwei; Yang, Chenhui; Abousleman, Glen P.; Si, Jennie

    2010-04-01

    In this paper, a novel system is presented to detect and track multiple targets in Unmanned Air Vehicles (UAV) video sequences. Since the output of the system is based on target motion, we first segment foreground moving areas from the background in each video frame using background subtraction. To stabilize the video, a multi-point-descriptor-based image registration method is performed where a projective model is employed to describe the global transformation between frames. For each detected foreground blob, an object model is used to describe its appearance and motion information. Rather than immediately classifying the detected objects as targets, we track them for a certain period of time and only those with qualified motion patterns are labeled as targets. In the subsequent tracking process, a Kalman filter is assigned to each tracked target to dynamically estimate its position in each frame. Blobs detected at a later time are used as observations to update the state of the tracked targets to which they are associated. The proposed overlap-rate-based data association method considers the splitting and merging of the observations, and therefore is able to maintain tracks more consistently. Experimental results demonstrate that the system performs well on real-world UAV video sequences. Moreover, careful consideration given to each component in the system has made the proposed system feasible for real-time applications.
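The per-target Kalman filtering step described above can be illustrated with a minimal 1-D constant-velocity filter. This is a generic sketch; the state layout, noise parameters, and function name are assumptions, not the authors' implementation:

```python
def kalman_track_1d(measurements, q=0.01, r=1.0):
    """Minimal 1-D constant-velocity Kalman filter.

    State is (position, velocity); q is process noise, r is measurement
    noise. Returns the filtered position estimate after each measurement.
    """
    x = [measurements[0], 0.0]           # state estimate
    P = [[1.0, 0.0], [0.0, 1.0]]         # estimate covariance
    estimates = []
    for z in measurements:
        # predict: x' = F x with F = [[1, 1], [0, 1]], P' = F P F^T + Q
        x = [x[0] + x[1], x[1]]
        P = [[P[0][0] + P[1][0] + P[0][1] + P[1][1] + q, P[0][1] + P[1][1]],
             [P[1][0] + P[1][1], P[1][1] + q]]
        # update with position measurement z (H = [1, 0])
        S = P[0][0] + r
        K = [P[0][0] / S, P[1][0] / S]
        innov = z - x[0]
        x = [x[0] + K[0] * innov, x[1] + K[1] * innov]
        P = [[(1 - K[0]) * P[0][0], (1 - K[0]) * P[0][1]],
             [P[1][0] - K[1] * P[0][0], P[1][1] - K[1] * P[0][1]]]
        estimates.append(x[0])
    return estimates
```

In the tracking system, the predict step provides each target's expected position for data association, and associated blob observations drive the update step.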

  3. Automated Visual Event Detection, Tracking, and Data Management System for Cabled- Observatory Video

    NASA Astrophysics Data System (ADS)

    Edgington, D. R.; Cline, D. E.; Schlining, B.; Raymond, E.

    2008-12-01

    Ocean observatories and underwater video surveys have the potential to unlock important discoveries with new and existing camera systems. Yet the burden of video management and analysis often requires reducing the amount of video recorded through time-lapse video or similar methods. It is unknown how many digitized video data sets exist in the oceanographic community, but we suspect that many remain under-analyzed due to a lack of good tools or human resources to analyze the video. To help address this problem, the Automated Visual Event Detection (AVED) software and the Video Annotation and Reference System (VARS) have been under development at MBARI. For detecting interesting events in the video, the AVED software has been developed over the last 5 years. AVED is based on a neuromorphic-selective attention algorithm, modeled on the human vision system. Frames are decomposed into specific feature maps that are combined into a unique saliency map. This saliency map is then scanned to determine the most salient locations. The candidate salient locations are then segmented from the scene using algorithms suitable for the low, non-uniform light and marine snow typical of deep underwater video. For managing the AVED descriptions of the video, the VARS system provides an interface and database for describing, viewing, and cataloging the video. VARS was developed by MBARI for annotating deep-sea video data and is currently being used to describe over 3000 dives by our remotely operated vehicles (ROV), making it well suited to this deepwater observatory application with only a few modifications. To meet the compute- and data-intensive job of video processing, a distributed heterogeneous network of computers is managed using the Condor workload management system. This system manages data storage, video transcoding, and AVED processing.
Looking to the future, we see high-speed networks and Grid technology as an important element in addressing the problem of processing and
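
The saliency pipeline described in this record (feature maps combined into a single saliency map, which is then scanned for the most salient locations) can be sketched roughly as follows. This is a toy NumPy illustration with made-up intensity and local-contrast features; MBARI's actual AVED feature maps, normalization, and segmentation stages are far more elaborate:

```python
import numpy as np

def intensity_map(frame):
    # Intensity feature: grayscale of the RGB frame.
    return frame.mean(axis=2)

def contrast_map(gray, k=3):
    # Local contrast: absolute difference from a box-blurred surround.
    pad = np.pad(gray, k, mode="edge")
    surround = np.zeros_like(gray)
    n = (2 * k + 1) ** 2
    for dy in range(-k, k + 1):
        for dx in range(-k, k + 1):
            surround += pad[k + dy : k + dy + gray.shape[0],
                            k + dx : k + dx + gray.shape[1]]
    return np.abs(gray - surround / n)

def saliency(frame):
    # Normalize each feature map, average them into one saliency map,
    # then report the most salient location (a candidate event).
    gray = intensity_map(frame)
    maps = [gray, contrast_map(gray)]
    norm = [(m - m.min()) / (np.ptp(m) + 1e-9) for m in maps]
    sal = sum(norm) / len(norm)
    return sal, np.unravel_index(np.argmax(sal), sal.shape)

# A dark frame with one bright blob: the blob should win.
frame = np.zeros((32, 32, 3))
frame[10:14, 20:24] = 255.0
sal, (y, x) = saliency(frame)
```

In the real system the winning locations would then be handed to the segmentation stage; here the argmax simply lands inside the bright blob.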

  4. Segment of Challenger's right wing unloaded at KSC Logistics Facility

    NASA Image and Video Library

    1986-04-18

    51L-10187 (18 April 1986) --- A 9'7" x 16' segment of Challenger's right wing is unloaded at the Logistics Facility after being off-loaded from the rescue and salvage ship USS Opportune. It was located and recovered by Navy divers from the Opportune about 12 nautical miles northeast of Cape Canaveral in 70 feet of water. Photo credit: NASA

  5. Body Segment Kinematics and Energy Expenditure in Active Videogames.

    PubMed

    Böhm, Birgit; Hartmann, Michael; Böhm, Harald

    2016-06-01

    Energy expenditure (EE) in active videogames (AVGs) is a component for assessing their benefit for cardiovascular health. Existing evidence suggests that AVGs are able to increase EE above rest and above that of playing passive videogames. However, the association between body movement and EE remains unclear. Furthermore, for goal-directed game design, it is important to know the contribution of body segments to EE. This knowledge will help to achieve a targeted level of exercise intensity during active gaming. Therefore, the purpose of this study was to determine the best predictors of EE from body segment energies, acceleration, and heart rate during different game situations. EE and body segment movement of 17 subjects, aged 22.1 ± 2.5 years, were measured in two different AVGs. In randomized order, the subjects played a handheld-controlled Nintendo® Wii™ tennis (NWT) game and a whole-body-controlled Sony EyeToy® waterfall (ETW) game. Body segment movement was analyzed using a three-dimensional motion capture system. From the video data, mean values of mechanical energy change and acceleration of 10 body segments were analyzed. Measured EE was significantly higher in ETW (7.8 ± 1.4 metabolic equivalents [METs]) than in NWT (3.4 ± 1.0 METs). The best prediction parameter for the more intense ETW game was the energy change of the right thigh; for the less intense hand-controlled NWT game it was the energy change of the upper torso. Segment acceleration was less accurate in predicting EE. The best predictors of metabolic EE were thus the thighs in whole-body-controlled games and the upper torso in handheld-controlled games. Increasing movement of these body segments would lead to higher physical activity intensity during gaming, reducing sedentary behavior.

  6. STS-27 Atlantis, OV-104, crewmembers repair 3/4 inch video reel on middeck

    NASA Image and Video Library

    1988-12-06

    STS027-05-020 (2-6 Dec. 1988) --- In the foreground, astronauts Robert L. Gibson (left) and Guy S. Gardner, commander and pilot, respectively, for the STS-27 mission, repair a 3/4-inch video reel on the middeck of the Earth-orbiting space shuttle Atlantis. Photo credit: NASA

  7. TRECVID: the utility of a content-based video retrieval evaluation

    NASA Astrophysics Data System (ADS)

    Hauptmann, Alexander G.

    2006-01-01

    TRECVID, an annual retrieval evaluation benchmark organized by NIST, encourages research in information retrieval from digital video. TRECVID benchmarking covers both interactive and manual searching by end users, as well as the benchmarking of some supporting technologies including shot boundary detection, extraction of semantic features, and the automatic segmentation of TV news broadcasts. Evaluations done in the context of the TRECVID benchmarks show that, generally, speech transcripts and annotations provide the single most important clue for successful retrieval. However, automatically finding the individual images is still a tremendous and unsolved challenge. The evaluations repeatedly found that none of the multimedia analysis and retrieval techniques provide a significant benefit over retrieval using only textual information such as automatic speech recognition transcripts or closed captions. In interactive systems, we do find significant differences among the top systems, indicating that interfaces can make a huge difference for effective video/image search. For interactive tasks, efficient interfaces require few key clicks but display large numbers of images for visual inspection by the user. Text search generally finds the right context region in the video, but to select specific relevant images we need good interfaces for easily browsing the storyboard pictures. In general, TRECVID has motivated the video retrieval community to be honest about what we don't know how to do well (sometimes through painful failures), and has focused the community on the actual task of video retrieval, as opposed to flashy demos based on technological capabilities.

  8. Variable Coding and Modulation Experiment Using NASA's Space Communication and Navigation Testbed

    NASA Technical Reports Server (NTRS)

    Downey, Joseph A.; Mortensen, Dale J.; Evans, Michael A.; Tollis, Nicholas S.

    2016-01-01

    National Aeronautics and Space Administration (NASA)'s Space Communication and Navigation Testbed on the International Space Station provides a unique opportunity to evaluate advanced communication techniques in an operational system. The experimental nature of the Testbed allows for rapid demonstrations while using flight hardware in a deployed system within NASA's networks. One example is variable coding and modulation, which is a method to increase data-throughput in a communication link. This paper describes recent flight testing with variable coding and modulation over S-band using a direct-to-earth link between the SCaN Testbed and the Glenn Research Center. The testing leverages the established Digital Video Broadcasting Second Generation (DVB-S2) standard to provide various modulation and coding options. The experiment was conducted in a challenging environment due to the multipath and shadowing caused by the International Space Station structure. Performance of the variable coding and modulation system is evaluated and compared to the capacity of the link, as well as standard NASA waveforms.

  9. Inferring segmented dense motion layers using 5D tensor voting.

    PubMed

    Min, Changki; Medioni, Gérard

    2008-09-01

    We present a novel local spatiotemporal approach to produce motion segmentation and dense temporal trajectories from an image sequence. A common representation of image sequences is a 3D spatiotemporal volume, (x,y,t), and its corresponding mathematical formalism is the fiber bundle. However, directly enforcing the spatiotemporal smoothness constraint is difficult in the fiber bundle representation. Thus, we convert the representation into a new 5D space (x,y,t,vx,vy) with an additional velocity domain, where each moving object produces a separate 3D smooth layer. The smoothness constraint is now enforced by extracting 3D layers using the tensor voting framework in a single step that solves both correspondence and segmentation simultaneously. Motion segmentation is achieved by identifying those layers, and the dense temporal trajectories are obtained by converting the layers back into the fiber bundle representation. We proceed to address three applications (tracking, mosaic, and 3D reconstruction) that are hard to solve from the video stream directly because of the segmentation and dense matching steps, but become straightforward with our framework. The approach does not make restrictive assumptions about the observed scene or camera motion and is therefore generally applicable. We present results on a number of data sets.

  10. Automated content and quality assessment of full-motion-video for the generation of meta data

    NASA Astrophysics Data System (ADS)

    Harguess, Josh

    2015-05-01

    Virtually all of the video data (including full-motion video (FMV)) currently collected and stored in support of missions has been corrupted to various extents by image acquisition and compression artifacts. Additionally, video collected by wide-area motion imagery (WAMI) surveillance systems, unmanned aerial vehicles (UAVs), and similar sources is often of low quality or otherwise corrupted, so that it is not worth storing or analyzing. To make progress on automatic video analysis, the first problem that should be solved is deciding whether the content of the video is even worth analyzing to begin with. We present a work in progress addressing three types of scenes typically found in real-world data stored in support of Department of Defense (DoD) missions: no or very little motion in the scene, large occlusions in the scene, and fast camera motion. Each of these produces video that is generally not usable to an analyst or an automated algorithm for mission support and therefore should be removed or flagged to the user as such. We utilize recent computer vision advances in motion detection and optical flow to automatically assess FMV for the identification and generation of meta-data (or tagging) of video segments exhibiting the unwanted scenarios described above. Results are shown on representative real-world video data.
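
The idea of flagging segments with too little motion (or implausibly fast camera motion) can be illustrated with a crude frame-differencing score. The paper uses proper motion detection and optical flow; the score, thresholds, and segment length below are invented for this sketch:

```python
import numpy as np

def motion_score(frames):
    # Mean absolute inter-frame difference: a crude stand-in for the
    # motion-detection / optical-flow measures used in the paper.
    diffs = [np.abs(b - a).mean() for a, b in zip(frames, frames[1:])]
    return float(np.mean(diffs))

def tag_segments(frames, seg_len=4, low=1.0, high=40.0):
    # Tag each fixed-length segment: 'static' (not worth analyzing),
    # 'fast' (likely fast camera motion), or 'ok'. Thresholds are
    # illustrative, not taken from the paper.
    tags = []
    for i in range(0, len(frames) - seg_len + 1, seg_len):
        s = motion_score(frames[i : i + seg_len])
        tags.append("static" if s < low else "fast" if s > high else "ok")
    return tags

rng = np.random.default_rng(0)
static = [np.full((8, 8), 100.0) for _ in range(4)]     # no motion at all
busy = [rng.uniform(0, 255, (8, 8)) for _ in range(4)]  # wild frame changes
tags = tag_segments(static + busy)
```

The resulting tags form exactly the kind of per-segment meta-data the record describes generating for FMV archives.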

  11. Measurement, Ratios, and Graphing: Who Added the "Micro" to Gravity? An Educator Guide with Activities in Mathematics, Science, and Technology. NASA CONNECT[TM].

    ERIC Educational Resources Information Center

    National Aeronautics and Space Administration, Hampton, VA. Langley Research Center.

    The NASA CONNECT series features 30-minute, instructional videos for students in grades 5-8 and teacher's guides that use aeronautics and space technology as the organizing theme. In this guide and videotape, National Aeronautics and Space Administration (NASA) researchers and scientists use measurement, ratios, and graphing to demonstrate the…

  12. A new visual navigation system for exploring biomedical Open Educational Resource (OER) videos

    PubMed Central

    Zhao, Baoquan; Xu, Songhua; Lin, Shujin; Luo, Xiaonan; Duan, Lian

    2016-01-01

    video segments delivering personally valuable information, as well as intuitively and conveniently preview essential content of a single or a collection of videos. PMID:26335986

  13. A new visual navigation system for exploring biomedical Open Educational Resource (OER) videos.

    PubMed

    Zhao, Baoquan; Xu, Songhua; Lin, Shujin; Luo, Xiaonan; Duan, Lian

    2016-04-01

    Biomedical videos as open educational resources (OERs) are increasingly proliferating on the Internet. Unfortunately, seeking personally valuable content from among the vast corpus of quality yet diverse OER videos is nontrivial due to limitations of today's keyword- and content-based video retrieval techniques. To address this need, this study introduces a novel visual navigation system that facilitates users' information seeking from biomedical OER videos in mass quantity by interactively offering visual and textual navigational clues that are both semantically revealing and user-friendly. The authors collected and processed around 25 000 YouTube videos, which collectively last for a total length of about 4000 h, in the broad field of biomedical sciences for our experiment. For each video, its semantic clues are first extracted automatically through computationally analyzing audio and visual signals, as well as text either accompanying or embedded in the video. These extracted clues are subsequently stored in a metadata database and indexed by a high-performance text search engine. During the online retrieval stage, the system renders video search results as dynamic web pages using a JavaScript library that allows users to interactively and intuitively explore video content both efficiently and effectively.ResultsThe authors produced a prototype implementation of the proposed system, which is publicly accessible athttps://patentq.njit.edu/oer To examine the overall advantage of the proposed system for exploring biomedical OER videos, the authors further conducted a user study of a modest scale. The study results encouragingly demonstrate the functional effectiveness and user-friendliness of the new system for facilitating information seeking from and content exploration among massive biomedical OER videos. Using the proposed tool, users can efficiently and effectively find videos of interest, precisely locate video segments delivering personally valuable

  14. Investigation of wing upper surface flow-field disturbance due to NASA DC-8-72 in-flight inboard thrust-reverser deployment

    NASA Technical Reports Server (NTRS)

    Hamid, Hedayat U.; Margason, Richard J.; Hardy, Gordon

    1995-01-01

    An investigation of the wing upper surface flow-field disturbance due to in-flight inboard thrust reverser deployment on the NASA DC-8-72, which was conducted cooperatively by NASA Ames, the Federal Aviation Administration (FAA), McDonnell Douglas, and the Aerospace Industry Association (AIA), is outlined and discussed in detail. The purpose of this flight test was to obtain tufted flow visualization data which demonstrates the effect of thrust reverser deployment on the wing upper surface flow field to determine if the disturbed flow regions could be modeled by computational methods. A total of six symmetric thrust reversals of the two inboard engines were performed to monitor tuft and flow cone patterns as well as the character of their movement at the nominal Mach numbers of 0.55, 0.70, and 0.85. The tufts and flow cones were photographed and video-taped to determine the type of flow field that occurs with and without the thrust reversers deployed. In addition, the normal NASA DC-8 onboard Data Acquisition Distribution System (DADS) was used to synchronize the cameras. Results of this flight test will be presented in two parts. First, three distinct flow patterns associated with the above Mach numbers were sketched from the motion videos and discussed in detail. Second, other relevant aircraft parameters, such as aircraft's angular orientation, altitude, Mach number, and vertical descent, are discussed. The flight test participants' comments were recorded on the videos and the interested reader is referred to the video supplement section of this report for that information.

  15. Ultrasonic Phased Array Simulations of Welded Components at NASA

    NASA Technical Reports Server (NTRS)

    Roth, D. J.; Tokars, R. P.; Martin, R. E.; Rauser, R. W.; Aldrin, J. C.

    2009-01-01

    Comprehensive and accurate inspections of welded components have become of increasing importance as NASA develops new hardware such as Ares rocket segments for future exploration missions. Simulation and modeling will play an increasing role in the future for nondestructive evaluation in order to better understand the physics of the inspection process, to prove or disprove the feasibility for an inspection method or inspection scenario, for inspection optimization, for better understanding of experimental results, and for assessment of probability of detection. This study presents simulation and experimental results for an ultrasonic phased array inspection of a critical welded structure important for NASA future exploration vehicles. Keywords: nondestructive evaluation, computational simulation, ultrasonics, weld, modeling, phased array

  16. Detection of illegal transfer of videos over the Internet

    NASA Astrophysics Data System (ADS)

    Chaisorn, Lekha; Sainui, Janya; Manders, Corey

    2010-07-01

    In this paper, a method for detecting infringements or modifications of a video in real time is proposed. The method first segments a video stream into shots, after which it extracts some reference frames as keyframes. This process is performed using a Singular Value Decomposition (SVD) technique developed in this work. Next, for each input video (represented by its keyframes), an ordinal-based signature and SIFT (Scale Invariant Feature Transform) descriptors are generated. The ordinal-based method employs a two-level bitmap indexing scheme to construct the index for each video signature. The first level clusters all input keyframes into k clusters, while the second level converts the ordinal-based signatures into bitmap vectors. The SIFT-based method, on the other hand, directly uses the descriptors as the index. Given a suspect video (being streamed or transferred on the Internet), we generate its signature (ordinal and SIFT descriptors) and then compute the similarity between that signature and the signatures in the database, based on the ordinal signature and the SIFT descriptors separately. For the similarity measure, Boolean operators are utilized alongside the Euclidean distance during the matching process. We tested our system through several experiments on 50 videos (each about half an hour in duration) from the TRECVID 2006 data set. The experimental setup follows the conditions of the TRECVID 2009 "Content-based copy detection" task; we also refer to the requirements issued in the call for proposals by the MPEG standard on a similar task. Initial results show that our framework is effective and robust. Compared to our previous work, on top of the reductions in storage space and processing time achieved in the ordinal-based method, introducing SIFT features brings overall accuracy to an F1 measure of about 96% (an improvement of about 8%).
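
The ordinal-signature idea, ranking block intensities so the signature survives the brightness and contrast changes introduced by re-encoding, can be sketched as follows. The block grid size and the L1 rank distance are illustrative choices, not the paper's exact parameters:

```python
import numpy as np

def ordinal_signature(frame, grid=3):
    # Partition the keyframe into grid x grid blocks, average each block,
    # and record the rank of each block's mean intensity. Ranks are
    # invariant to monotone brightness/contrast changes, which is why
    # ordinal measures are popular for copy detection.
    h, w = frame.shape
    means = [frame[i * h // grid:(i + 1) * h // grid,
                   j * w // grid:(j + 1) * w // grid].mean()
             for i in range(grid) for j in range(grid)]
    return np.argsort(np.argsort(means))

def ordinal_distance(sig_a, sig_b):
    # L1 distance between rank vectors; 0 means identical block ordering.
    return int(np.abs(sig_a - sig_b).sum())

frame = np.arange(81, dtype=float).reshape(9, 9)
brighter = frame * 2 + 30   # simulated re-encoding brightness/contrast shift
d = ordinal_distance(ordinal_signature(frame), ordinal_signature(brighter))
```

Because the shift is monotone, the block ordering is unchanged and the distance is zero, which is exactly the robustness a copy detector needs.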

  17. Ultrasonic Phased Array Inspection Simulations of Welded Components at NASA

    NASA Technical Reports Server (NTRS)

    Roth, D. J.; Tokars, R. P.; Martin, R. E.; Rauser, R. W.; Aldrin, J. C.; Schumacher, E. J.

    2009-01-01

    Comprehensive and accurate inspections of welded components have become of increasing importance as NASA develops new hardware, such as Ares rocket segments, for future exploration missions. Simulation and modeling will play an increased role in nondestructive evaluation in the future, helping to better understand the physics of the inspection process and to explain experimental results. It will also help to prove or disprove the feasibility of an inspection method or inspection scenario, to optimize inspections, and to estimate limits of detectability to a first approximation. This study presents simulation and experimental results for an ultrasonic phased array inspection of a critical welded structure important for NASA's future exploration vehicles.

  18. An optimized video system for augmented reality in endodontics: a feasibility study.

    PubMed

    Bruellmann, D D; Tjaden, H; Schwanecke, U; Barth, P

    2013-03-01

    We propose an augmented reality system for the reliable detection of root canals in video sequences based on k-nearest neighbor color classification, and introduce a simple geometric criterion for teeth. The new software was implemented using C++, Qt, and the image processing library OpenCV. Teeth are detected in video images to restrict the segmentation of the root canal orifices by using a k-nearest neighbor algorithm. The locations of the root canal orifices were determined using Euclidean distance-based image segmentation. A set of 126 human teeth with known and verified locations of the root canal orifices was used for evaluation. The software detects root canal orifices for automatic classification of the teeth in video images and stores the location and size of the found structures. Overall, 287 of 305 root canals were correctly detected. The overall sensitivity was about 94%. Classification accuracy ranged from 65.0 to 81.2% for molars and from 85.7 to 96.7% for premolars. The software shows that observations made in anatomical studies can be exploited to automate real-time detection of root canal orifices and tooth classification. Automatic storage of the location, size, and orientation of the found structures can support future anatomical studies. Thus, statistical tables of canal locations can be derived, improving anatomical knowledge of the teeth and alleviating root canal detection in the future. For this purpose the software is freely available at: http://www.dental-imaging.zahnmedizin.uni-mainz.de/.
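
A minimal sketch of the k-nearest-neighbor color classification underlying the tooth detection is shown below. The training colors (whitish "tooth" vs. reddish "tissue") are hypothetical; the paper's actual color space and training data are not reproduced here:

```python
import numpy as np

def knn_classify(pixel, samples, labels, k=3):
    # Classify one RGB pixel by majority vote of its k nearest training
    # samples in color space (Euclidean distance), the general scheme
    # used to restrict segmentation to tooth-colored regions.
    d = np.linalg.norm(samples - pixel, axis=1)
    nearest = np.argsort(d)[:k]
    votes = [labels[i] for i in nearest]
    return max(set(votes), key=votes.count)

# Hypothetical training colors: whitish "tooth" vs. reddish "tissue".
samples = np.array([[220, 215, 200], [235, 230, 210], [200, 195, 185],
                    [150, 60, 60], [170, 70, 80], [140, 50, 55]], float)
labels = ["tooth", "tooth", "tooth", "tissue", "tissue", "tissue"]

label = knn_classify(np.array([225.0, 220.0, 205.0]), samples, labels)
```

Applied per pixel over a video frame, this vote yields the tooth mask inside which the orifice segmentation is then run.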

  19. Louisiana Governor John Bel Edwards Tours NASA Michoud Assembly Facility

    NASA Image and Video Library

    2017-11-01

    This B-roll video shows Louisiana Gov. John Bel Edwards during his Nov. 1, 2017, visit to NASA's Michoud Assembly Facility in New Orleans. He spoke about the state's partnerships with NASA and the 20 companies and government agencies located at the facility. He toured Michoud with Todd May, the director of NASA's Marshall Space Flight Center, which manages Michoud. NASA is building its new deep space rocket, the Space Launch System (SLS), and the Orion spacecraft at Michoud. New Orleans Mayor Mitch Landrieu and Michoud Director Keith Hefner, along with members of Louisiana Economic Development, accompanied Edwards and May on the tour. They saw the Vertical Assembly Center, where large structures of the SLS core stage are welded.

  20. NASA SLS Booster Nozzle Plug Pieces Fly During Test

    NASA Image and Video Library

    2016-06-28

    On June 28, a test version of the booster that will help power NASA's new rocket, the Space Launch System, fired up at nearly 6,000 degrees Fahrenheit for a successful, two-minute qualification test at Orbital ATK's test facilities in Promontory, Utah. This video shows the booster's nozzle plug intentionally breaking apart. The smoky ring coming off the booster is condensed water vapor created by a pressure difference between the motor gas and the ambient air. The nozzle plug is an environmental barrier that prevents heat, dust and moisture from getting inside the booster before it ignites. The plug isn't always part of a static test but was included on this one due to changes made to the hardware. The foam on the plug is denser than on previous NASA launch vehicles, as the engines are now in the same plane as the boosters. A numbered grid was placed on the exterior of the plug before the test so the retrieved pieces could support plug breakup assessment and reconstruction. Along with the video, the collected pieces help determine their size and speed at breakup. Nozzle plug pieces were found as far as 1,500 to 2,000 feet away from the booster. This was the last full-scale qualification test for the booster before the first, uncrewed flight of SLS with the Orion spacecraft in 2018.

  1. General view of the Aft Solid Rocket Motor Segment mated ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    General view of the Aft Solid Rocket Motor Segment mated with the Aft Skirt Assembly and External Tank Attach Ring in the Rotation Processing and Surge Facility at Kennedy Space Center and awaiting transfer to the Vehicle Assembly Building where it will be mounted onto the Mobile Launch Platform. - Space Transportation System, Solid Rocket Boosters, Lyndon B. Johnson Space Center, 2101 NASA Parkway, Houston, Harris County, TX

  2. Methods for 2-D and 3-D Endobronchial Ultrasound Image Segmentation.

    PubMed

    Zang, Xiaonan; Bascom, Rebecca; Gilbert, Christopher; Toth, Jennifer; Higgins, William

    2016-07-01

    Endobronchial ultrasound (EBUS) is now commonly used for cancer-staging bronchoscopy. Unfortunately, EBUS is challenging to use and interpreting EBUS video sequences is difficult. Other ultrasound imaging domains, hampered by related difficulties, have benefited from computer-based image-segmentation methods. Yet, so far, no such methods have been proposed for EBUS. We propose image-segmentation methods for 2-D EBUS frames and 3-D EBUS sequences. Our 2-D method adapts the fast-marching level-set process, anisotropic diffusion, and region growing to the problem of segmenting 2-D EBUS frames. Our 3-D method builds upon the 2-D method while also incorporating the geodesic level-set process for segmenting EBUS sequences. Tests with lung-cancer patient data showed that the methods ran fully automatically for nearly 80% of test cases. For the remaining cases, the only user interaction required was the selection of a seed point. When compared to ground-truth segmentations, the 2-D method achieved an overall Dice index of 90.0% ± 4.9%, while the 3-D method achieved an overall Dice index of 83.9% ± 6.0%. In addition, the computation time (2-D, 0.070 s/frame; 3-D, 0.088 s/frame) was two orders of magnitude faster than interactive contour definition. Finally, we demonstrate the potential of the methods for EBUS localization in a multimodal image-guided bronchoscopy system.
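
Of the ingredients above, region growing is the simplest to illustrate. The sketch below grows a region from a user-selected seed point by admitting 4-connected neighbors close to the running region mean; it is a toy stand-in for the fast-marching and geodesic level-set machinery actually used, and the tolerance is an invented parameter:

```python
from collections import deque
import numpy as np

def region_grow(img, seed, tol=10.0):
    # Grow a region from a seed pixel, adding 4-connected neighbors
    # whose intensity is within `tol` of the running region mean.
    h, w = img.shape
    mask = np.zeros((h, w), bool)
    mask[seed] = True
    total, count = float(img[seed]), 1
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and not mask[ny, nx]:
                if abs(img[ny, nx] - total / count) <= tol:
                    mask[ny, nx] = True
                    total += float(img[ny, nx])
                    count += 1
                    queue.append((ny, nx))
    return mask

img = np.full((16, 16), 200.0)   # bright background
img[4:10, 5:11] = 40.0           # dark lesion-like blob
mask = region_grow(img, (6, 7))  # seed inside the blob
```

With the seed inside the dark blob, the region stops exactly at the intensity boundary, mirroring the seed-point fallback the paper describes for its non-automatic cases.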

  3. Real-time demonstration hardware for enhanced DPCM video compression algorithm

    NASA Technical Reports Server (NTRS)

    Bizon, Thomas P.; Whyte, Wayne A., Jr.; Marcopoli, Vincent R.

    1992-01-01

    The lack of available wideband digital links, as well as the complexity of implementing bandwidth-efficient digital video CODECs (encoders/decoders), has kept the cost of digital television transmission too high to compete with analog methods. Terrestrial and satellite video service providers, however, are now recognizing the potential gains that digital video compression offers and are proposing to incorporate compression systems to increase the number of available program channels. NASA is similarly recognizing the benefits of, and trend toward, digital video compression techniques for transmission of high quality video from space and, therefore, has developed a digital television bandwidth compression algorithm to process standard National Television Systems Committee (NTSC) composite color television signals. The algorithm is based on differential pulse code modulation (DPCM), but additionally utilizes a non-adaptive predictor, non-uniform quantizer and multilevel Huffman coder to reduce the data rate substantially below that achievable with straight DPCM. The non-adaptive predictor and multilevel Huffman coder combine to set this technique apart from other DPCM encoding algorithms. All processing is done on an intra-field basis to prevent motion degradation and minimize hardware complexity. Computer simulations have shown that the algorithm produces broadcast quality reconstructed video at an average transmission rate of 1.8 bits/pixel. Hardware implementation of the DPCM circuit, non-adaptive predictor and non-uniform quantizer has been completed, providing real-time demonstration of the image quality at full video rates. Video sampling/reconstruction circuits have also been constructed to accomplish the analog video processing necessary for the real-time demonstration. Performance results for the completed hardware compare favorably with simulation results. Hardware implementation of the multilevel Huffman encoder/decoder is currently under development.
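
The DPCM core described above, a non-adaptive predictor plus a non-uniform quantizer with fine steps near zero where prediction errors cluster, can be sketched as follows. The previous-pixel predictor and the quantizer table are illustrative choices for the sketch, not the values used in the NASA hardware:

```python
import numpy as np

# Non-uniform quantizer levels (illustrative only): fine steps near
# zero where prediction errors cluster, coarse steps farther out.
LEVELS = np.array([-48, -24, -12, -6, -2, 0, 2, 6, 12, 24, 48], dtype=float)

def dpcm_encode(line):
    # Previous-pixel predictor (a simple non-adaptive predictor);
    # each prediction error is quantized to the nearest level, and the
    # encoder tracks the same reconstruction the decoder will build.
    pred, codes = 0.0, []
    for x in line:
        err = x - pred
        q = int(np.argmin(np.abs(LEVELS - err)))
        codes.append(q)
        pred += LEVELS[q]
    return codes

def dpcm_decode(codes):
    # Rebuild the signal by accumulating the quantized errors.
    pred, out = 0.0, []
    for q in codes:
        pred += LEVELS[q]
        out.append(pred)
    return out

line = [100, 102, 101, 130, 131]
recon = dpcm_decode(dpcm_encode(line))
```

After the initial ramp-up from the zero prediction, the reconstruction tracks the input to within the quantizer's fine step sizes; in the real system the code indices would then be entropy-coded by the multilevel Huffman coder.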

  4. Video document

    NASA Astrophysics Data System (ADS)

    Davies, Bob; Lienhart, Rainer W.; Yeo, Boon-Lock

    1999-08-01

    The metaphor of film and TV permeates the design of software to support video on the PC. Simply transplanting the non-interactive, sequential experience of film to the PC fails to exploit the virtues of the new context. Video on the PC should be interactive and non-sequential. This paper experiments with a variety of tools for using video on the PC that exploit this new context. Some features are more successful than others. Applications that use these tools are explored, primarily the home video archive but also streaming video servers on the Internet. The ability to browse, edit, abstract and index large volumes of video content such as home video and corporate video is a problem without an appropriate solution in today's market. The currently available tools are complex, unfriendly video editors, requiring hours of work to prepare a short home video, far more work than a typical home user can be expected to invest. Our proposed solution treats video like a text document, providing functionality similar to a text editor. Users can browse, interact, edit and compose one or more video sequences with the same ease and convenience as handling text documents. With this level of text-like composition, we call what is normally a sequential medium a 'video document'. An important component of the proposed solution is shot detection, the ability to detect when a shot starts or stops. When combined with a spreadsheet of key frames, the video becomes a grid of pictures that can be manipulated and viewed in the same way that a spreadsheet can be edited. Multiple video documents may be viewed, joined, manipulated, and seamlessly played back. Abstracts of unedited video content can be produced automatically to create novel video content for export to other venues. Edited and raw video content can be published to the net or burned to a CD-ROM with a self-installing viewer for Windows 98 and Windows NT 4.0.

  5. LDR segmented mirror technology assessment study

    NASA Technical Reports Server (NTRS)

    Krim, M.; Russo, J.

    1983-01-01

    In the mid-1990s, NASA plans to orbit a giant telescope, whose aperture may be as great as 30 meters, for infrared and sub-millimeter astronomy. Its primary mirror will be deployed or assembled in orbit from a mosaic of possibly hundreds of mirror segments. Each segment must be shaped to precise curvature tolerances so that diffraction-limited performance will be achieved at 30 microns (nominal operating wavelength). All panels must lie within 1 micron of a theoretical surface described by the optical prescription of the telescope's primary mirror. To attain diffraction-limited performance, the issues of alignment and/or position sensing, position control to micron tolerances, and structural, thermal, and mechanical considerations for stowing, deploying, and erecting the reflector must be resolved. Radius of curvature precision influences panel size, shape, material, and type of construction. Two superior material choices emerged: fused quartz (sufficiently homogeneous with respect to thermal expansivity to permit a thin shell substrate to be drape molded between graphite dies to an off-axis asphere precise enough for optical finishing of the as-received segment) and Pyrex or Duran (less expensive than quartz and formable at lower temperatures). The optimal reflector panel size is between 1-1/2 and 2 meters. Making one two-meter mirror every two weeks requires new approaches to manufacturing off-axis parabolic or aspheric segments (drape molding on precision dies and subsequent finishing on a machine for non-rotationally symmetric surfaces). Proof-of-concept development programs were identified to prove the feasibility of the materials and manufacturing ideas.

  6. Content-based video retrieval by example video clip

    NASA Astrophysics Data System (ADS)

    Dimitrova, Nevenka; Abdel-Mottaleb, Mohamed

    1997-01-01

    This paper presents a novel approach for video retrieval from a large archive of MPEG or Motion JPEG compressed video clips. We introduce a retrieval algorithm that takes a video clip as a query and searches the database for clips with similar contents. Video clips are characterized by a sequence of representative frame signatures, which are constructed from DC coefficients and motion information (`DC+M' signatures). The similarity between two video clips is determined by using their respective signatures. This method facilitates retrieval of clips for the purpose of video editing, broadcast news retrieval, or copyright violation detection.

  7. Suicide Comet HD Video

    NASA Image and Video Library

    2010-03-16

    Captured March 12, 2010. The SOHO spacecraft captured a very bright, sungrazing comet as it rocketed towards the Sun (Mar. 12, 2010) and was vaporized. This comet is arguably the brightest comet that SOHO has observed since Comet McNaught in early 2007. The comet is believed to belong to the Kreutz family of comets that broke up from a much larger comet many hundreds of years ago. They are known to orbit close to the Sun. A coronal mass ejection (CME) burst away from the Sun during the bright comet's approach. Interestingly, a much smaller comet that preceded this one can be seen about half a day earlier on just about the identical route. And another pair of small comets followed the same track into the Sun after the bright one. Such a string of comets has never been witnessed before by SOHO. SOHO's C3 coronagraph instrument blocks out the Sun with an occulting disk; the white circle represents the size of the Sun. The planet Mercury can also be seen moving from left to right just beneath the Sun. To learn more and to download the video and still images go here: sohowww.nascom.nasa.gov/pickoftheweek/old/15mar2010/ Credit: NASA/GSFC/SOHO

  8. Segmented X-Ray Optics for Future Space Telescopes

    NASA Technical Reports Server (NTRS)

    McClelland, Ryan S.

    2013-01-01

    Lightweight and high resolution mirrors are needed for future space-based X-ray telescopes to achieve advances in high-energy astrophysics. The slumped glass mirror technology in development at NASA GSFC aims to build X-ray mirror modules with an area-to-mass ratio of approx. 17 sq cm/kg at 1 keV and a resolution of 10 arc-sec Half Power Diameter (HPD) or better at an affordable cost. As the technology nears the performance requirements, additional engineering effort is needed to ensure the modules are compatible with space-flight. This paper describes Flight Mirror Assembly (FMA) designs for several X-ray astrophysics missions studied by NASA and defines generic driving requirements and subsequent verification tests necessary to advance technology readiness for mission implementation. The requirement to perform X-ray testing in a horizontal beam, based on the orientation of existing facilities, is particularly burdensome on the mirror technology, necessitating mechanical over-constraint of the mirror segments and stiffening of the modules in order to prevent self-weight deformation errors from dominating the measured performance. This requirement, in turn, drives the mass and complexity of the system while limiting the testable angular resolution. Design options for a vertical X-ray test facility alleviating these issues are explored. An alternate mirror and module design using kinematic constraint of the mirror segments, enabled by a vertical test facility, is proposed. The kinematic mounting concept has significant advantages including potential for higher angular resolution, simplified mirror integration, and relaxed thermal requirements. However, it presents new challenges including low vibration modes and imperfections in kinematic constraint. Implementation concepts overcoming these challenges are described along with preliminary test and analysis results demonstrating the feasibility of kinematically mounting slumped glass mirror segments.

  9. Microsurgical Clipping of an Unruptured Carotid Cave Aneurysm: 3-Dimensional Operative Video.

    PubMed

    Tabani, Halima; Yousef, Sonia; Burkhardt, Jan-Karl; Gandhi, Sirin; Benet, Arnau; Lawton, Michael T

    2017-08-01

    Most aneurysms originating from the clinoidal segment of the internal carotid artery (ICA) are nowadays managed conservatively or treated endovascularly with coiling (with or without stenting) or flow diverters. However, microsurgical clip occlusion remains an alternative. This video demonstrates clip occlusion of an unruptured right carotid cave aneurysm measuring 7 mm in a 39-year-old woman. The patient opted for surgery because of concerns about the prolonged antiplatelet use associated with endovascular therapy. After patient consent, a standard pterional craniotomy was performed, followed by extradural anterior clinoidectomy. After dural opening and sylvian fissure split, a clinoidal flap was opened to enter the extradural space around the clinoidal segment. The dural ring was dissected circumferentially, freeing the medial wall of the ICA down to the sellar region and mobilizing the ICA out of its clinoidal canal. With the aneurysm neck in view, the aneurysm was clipped with a 45° angled fenestrated clip over the ICA. Indocyanine green angiography confirmed no further filling of the aneurysm and patency of the ICA. Complete aneurysm occlusion was confirmed with postoperative angiography, and the patient had no neurologic deficits (Video 1). This case demonstrates the importance of anterior clinoidectomy and thorough distal dural ring dissection for effective clipping of carotid cave aneurysms. Control of venous bleeding from the cavernous sinus with fibrin glue injection simplifies the dissection, which should minimize manipulation of the optic nerve. Knowledge of this anatomy and proficiency with these techniques are important in an era of declining open aneurysm cases. Copyright © 2017 Elsevier Inc. All rights reserved.

  10. Video Denoising via Dynamic Video Layering

    NASA Astrophysics Data System (ADS)

    Guo, Han; Vaswani, Namrata

    2018-07-01

    Video denoising refers to the problem of removing "noise" from a video sequence. Here the term "noise" is used in a broad sense to refer to any corruption, outlier, or interference that is not the quantity of interest. In this work, we develop a novel approach to video denoising based on the idea that many noisy or corrupted videos can be split into three parts: a "low-rank layer", a "sparse layer", and a small, bounded residual. We show, using extensive experiments, that our denoising approach outperforms state-of-the-art denoising algorithms.
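
    A toy version of such a three-way split can be sketched with a truncated SVD for the low-rank layer and thresholding for the sparse layer; this illustrates the decomposition only, not the paper's actual denoising algorithm (the rank and threshold are assumed values).

```python
import numpy as np

# Illustrative three-layer split of a frame matrix M:
# low-rank layer (truncated SVD) + sparse layer (large entries) + small residual.

def three_layer_split(M, rank=2, sparse_thresh=1.0):
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    L = (U[:, :rank] * s[:rank]) @ Vt[:rank, :]      # low-rank layer
    R = M - L
    S = np.where(np.abs(R) > sparse_thresh, R, 0.0)  # sparse layer
    E = R - S                                        # small, bounded residual
    return L, S, E
```

    By construction the three layers sum back to the input, and every residual entry is bounded by the sparsity threshold.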

  11. VideoANT: Extending Online Video Annotation beyond Content Delivery

    ERIC Educational Resources Information Center

    Hosack, Bradford

    2010-01-01

    This paper expands the boundaries of video annotation in education by outlining the need for extended interaction in online video use, identifying the challenges faced by existing video annotation tools, and introducing VideoANT, a tool designed to create text-based annotations integrated within the timeline of a video hosted online. Several…

  12. Robust real-time horizon detection in full-motion video

    NASA Astrophysics Data System (ADS)

    Young, Grace B.; Bagnall, Bryan; Lane, Corey; Parameswaran, Shibin

    2014-06-01

    The ability to detect the horizon in full-motion video on a real-time basis is an important capability to aid and facilitate real-time processing of full-motion videos for purposes such as object detection, recognition, and other video/image segmentation applications. In this paper, we propose a method for real-time horizon detection that is designed to be used as a front-end processing unit for a real-time marine object detection system that carries out object detection and tracking on full-motion videos captured by ship/harbor-mounted cameras, Unmanned Aerial Vehicles (UAVs), or any other method of surveillance for Maritime Domain Awareness (MDA). Unlike existing horizon detection work, we cannot assume a priori the angle or nature (e.g., straight line) of the horizon, due to the nature of the application domain and the data. Therefore, the proposed real-time algorithm is designed to identify the horizon at any angle and irrespective of objects appearing close to and/or occluding the horizon line (e.g., trees, vehicles at a distance) by accounting for its non-linear nature. We use a simple two-stage hierarchical methodology, leveraging color-based features, to quickly isolate the region of the image containing the horizon and then perform a more fine-grained horizon detection operation. In this paper, we present real-time horizon detection results using our algorithm on real-world full-motion video data from a variety of surveillance sensors such as UAVs and ship-mounted cameras, confirming the real-time applicability of this method and its ability to detect the horizon with no a priori assumptions.
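
    The two-stage coarse-to-fine idea can be illustrated on a single grayscale channel; the subsampling step, the row-mean statistic, and the function names are assumptions for illustration (the paper uses richer color-based features and handles non-linear horizons).

```python
import numpy as np

# Stage 1: coarsely locate the band of rows containing the horizon.
# Stage 2: refine the boundary row inside that band.

def coarse_band(image, step=8):
    """Row means on a subsampled grid; the largest jump marks the band."""
    rows = image[::step].mean(axis=1)
    jumps = np.abs(np.diff(rows))
    i = int(np.argmax(jumps))
    return i * step, (i + 1) * step  # band of full-resolution rows

def refine_horizon(image, band):
    """Largest row-to-row change inside the coarse band."""
    lo, hi = band
    rows = image[lo:hi + 1].mean(axis=1)
    return lo + int(np.argmax(np.abs(np.diff(rows))))
```

    Stage 1 touches only every `step`-th row, which is what makes a front-end like this cheap enough for real-time use.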

  13. NASA's Asteroid Redirect Mission (ARM)

    NASA Technical Reports Server (NTRS)

    Abell, P. A.; Mazanek, D. D.; Reeves, D. M.; Chodas, P. W.; Gates, M. M.; Johnson, L. N.; Ticker, R. L.

    2017-01-01

    Mission Description and Objectives: NASA's Asteroid Redirect Mission (ARM) consists of two mission segments: 1) the Asteroid Redirect Robotic Mission (ARRM), a robotic mission to visit a large (greater than approximately 100 meters diameter) near-Earth asteroid (NEA), collect a multi-ton boulder from its surface along with regolith samples, and return the asteroidal material to a stable orbit around the Moon; and 2) the Asteroid Redirect Crewed Mission (ARCM), in which astronauts will explore and investigate the boulder and return to Earth with samples. The ARRM is currently planned to launch at the end of 2021 and the ARCM is scheduled for late 2026.

  14. Infrared video based gas leak detection method using modified FAST features

    NASA Astrophysics Data System (ADS)

    Wang, Min; Hong, Hanyu; Huang, Likun

    2018-03-01

    In order to detect, in a timely manner, invisible leaking gas that is usually dangerous and easily leads to fire or explosion, many new technologies have arisen in recent years, among which infrared video based gas leak detection is widely recognized as a viable tool. However, all the moving regions of a video frame can be detected as leaking gas regions by the existing infrared video based gas leak detection methods, without discriminating the property of each detected region; e.g., a walking person in a video frame may also be detected as gas by the current gas leak detection methods. To solve this problem, we propose a novel infrared video based gas leak detection method in this paper, which is able to effectively suppress strong motion disturbances. Firstly, a Gaussian mixture model (GMM) is used to establish the background model. Then, based on the observation that the shapes of gas regions differ from those of most rigid moving objects, we modify the Features from Accelerated Segment Test (FAST) algorithm and use the modified FAST (mFAST) features to describe each connected component. In view of the fact that the statistical property of the mFAST features extracted from gas regions differs from that of other motion regions, we propose the Pixel-Per-Points (PPP) condition to further select candidate connected components. Experimental results show that the algorithm is able to effectively suppress most strong motion disturbances and achieve real-time leaking gas detection.
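
    A rough sketch of how a corner-density test can separate diffuse gas plumes from rigid movers, in the spirit of the PPP condition above. The threshold value and function names are our assumptions, not the paper's parameters.

```python
# Diffuse gas plumes have soft edges, so a corner detector (e.g., mFAST)
# fires far less often per pixel inside a gas region than inside a rigid
# mover such as a walking person.

def points_per_pixel(num_feature_points, region_area):
    """Corner feature count normalized by the connected component's area."""
    return num_feature_points / float(region_area)

def is_gas_candidate(num_feature_points, region_area, ppp_max=0.02):
    """Keep regions whose corner density stays below a threshold (assumed value)."""
    return points_per_pixel(num_feature_points, region_area) < ppp_max
```

    In use, each foreground connected component from the GMM stage would be screened by this condition before being reported as gas.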

  15. Novel dynamic caching for hierarchically distributed video-on-demand systems

    NASA Astrophysics Data System (ADS)

    Ogo, Kenta; Matsuda, Chikashi; Nishimura, Kazutoshi

    1998-02-01

    It is difficult to simultaneously serve the millions of video streams that will be needed in the age of 'Mega-Media' networks by using only one high-performance server. To distribute the service load, caching servers should be located near users. However, in previously proposed caching mechanisms, the grade of service depends on whether the data is already cached at a caching server. To make the caching servers transparent to the users, the ability to randomly access the large volume of data stored in the central server should be supported, and the operational functions of the provided service should not be narrowly restricted. We propose a mechanism for constructing a video-stream-caching server that is transparent to the users and that always supports all special playback functions for all available content with a latency of only 1 or 2 seconds. This mechanism uses a variable-sized-quantum-segment-caching technique derived from an analysis of historical usage log data generated by a line-on-demand-type service experiment and based on the basic techniques used by a time-slot-based multiple-stream video-on-demand server.
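
    One way to picture segment-based caching with bounded startup latency is a lookup from playback position to a variable-sized cached segment; the data layout below is purely illustrative and not the paper's design.

```python
# Purely illustrative: map a playback position to the variable-sized cached
# segment ("quantum") that contains it, so any random access (seek, special
# playback) lands on a segment start the cache can serve quickly.

def quantum_for(position, boundaries):
    """boundaries: list of (start, end) time ranges, in playback order."""
    for i, (start, end) in enumerate(boundaries):
        if start <= position < end:
            return i
    raise ValueError("position outside program")
```

    Variable segment sizes would let hot parts of a program be cached at finer granularity than rarely watched ones.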

  16. Conference Video for Booth at SAE World Congress Experience Conference

    NASA Technical Reports Server (NTRS)

    Harkey, Ann Marie

    2017-01-01

    Contents: Publicly released videos on technology transfer items available for licensing from NASA. Includes: 1. Powder Handling Device for Analytical Instruments (Ames); 2. Fiber Optic Shape Sensing (FOSS) (Armstrong); 3. Robo-Glove (Johnson); 4. Modular Robotic Vehicle (Johnson); 5. Battery Management System (Johnson); 6. Active Response Gravity Offload System (ARGOS) (Johnson); 7. Contaminant Resistant Coatings for Extreme Environments (Langley); 8. Molecular Adsorber Coating (MAC) (Goddard); 9. Ultrasonic Stir Welding (Marshall). Also includes scenes from the International Space Station.

  17. Design and Analysis of Modules for Segmented X-Ray Optics

    NASA Technical Reports Server (NTRS)

    McClelland, Ryan S.; Biskach, Michael P.; Chan, Kai-Wing; Saha, Timo T.; Zhang, William W.

    2012-01-01

    Future X-ray astronomy missions demand thin, light, and closely packed optics which lend themselves to segmentation of the annular mirrors and, in turn, a modular approach to the mirror design. The modular approach to X-ray Flight Mirror Assembly (FMA) design allows excellent scalability of the mirror technology to support a variety of mission sizes and science objectives. This paper describes FMA designs using slumped glass mirror segments for several X-ray astrophysics missions studied by NASA and explores the driving requirements and subsequent verification tests necessary to qualify a slumped glass mirror module for space-flight. A rigorous testing program is outlined allowing Technical Development Modules to reach technical readiness for mission implementation while reducing mission cost and schedule risk.

  18. Survey of contemporary trends in color image segmentation

    NASA Astrophysics Data System (ADS)

    Vantaram, Sreenath Rao; Saber, Eli

    2012-10-01

    In recent years, the acquisition of image and video information for processing, analysis, understanding, and exploitation of the underlying content in various applications, ranging from remote sensing to biomedical imaging, has grown at an unprecedented rate. Analysis by human observers is quite laborious, tiresome, and time consuming, if not infeasible, given the large and continuously rising volume of data. Hence the need for systems capable of automatically and effectively analyzing the aforementioned imagery for a variety of uses that span the spectrum from homeland security to elderly care. In order to achieve the above, tools such as image segmentation provide the appropriate foundation for expediting and improving the effectiveness of subsequent high-level tasks by providing a condensed and pertinent representation of image information. We provide a comprehensive survey of color image segmentation strategies adopted over the last decade, though notable contributions in the gray scale domain will also be discussed. Our taxonomy of segmentation techniques is sampled from a wide spectrum of spatially blind (or feature-based) approaches such as clustering and histogram thresholding as well as spatially guided (or spatial domain-based) methods such as region growing/splitting/merging, energy-driven parametric/geometric active contours, supervised/unsupervised graph cuts, and watersheds, to name a few. In addition, qualitative and quantitative results of prominent algorithms on several images from the Berkeley segmentation dataset are shown in order to furnish a fair indication of the current quality of the state of the art. Finally, we provide a brief discussion on our current perspective of the field as well as its associated future trends.
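
    As a concrete instance of the spatially blind, histogram-thresholding family surveyed above, Otsu's method picks the gray level that maximizes the between-class variance of the two resulting pixel classes:

```python
# Otsu's threshold on a gray-level histogram: scan all candidate thresholds
# and keep the one maximizing between-class variance w0*w1*(m0-m1)^2.

def otsu_threshold(pixels, levels=256):
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * h for i, h in enumerate(hist))
    best_t, best_var, w0, sum0 = 0, -1.0, 0, 0.0
    for t in range(levels):
        w0 += hist[t]              # class 0: pixels <= t
        if w0 == 0:
            continue
        w1 = total - w0            # class 1: pixels > t
        if w1 == 0:
            break
        sum0 += t * hist[t]
        m0 = sum0 / w0
        m1 = (sum_all - sum0) / w1
        var = w0 * w1 * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t
```

    For a bimodal histogram this lands between the two modes; spatially guided methods in the survey refine such results using neighborhood structure.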

  19. Collaborative real-time motion video analysis by human observer and image exploitation algorithms

    NASA Astrophysics Data System (ADS)

    Hild, Jutta; Krüger, Wolfgang; Brüstle, Stefan; Trantelle, Patrick; Unmüßig, Gabriel; Heinze, Norbert; Peinsipp-Byma, Elisabeth; Beyerer, Jürgen

    2015-05-01

    Motion video analysis is a challenging task, especially in real-time applications. In most safety- and security-critical applications, a human observer is an obligatory part of the overall analysis system. Over the last years, substantial progress has been made in the development of automated image exploitation algorithms. Hence, we investigate how the benefits of automated video analysis can be integrated suitably into current video exploitation systems. In this paper, a system design is introduced which strives to combine both the qualities of the human observer's perception and the automated algorithms, thus aiming to improve the overall performance of a real-time video analysis system. The system design builds on prior work where we showed the benefits for the human observer of a user interface which utilizes the human visual focus of attention, revealed by the eye gaze direction, for interaction with the image exploitation system; eye tracker-based interaction allows much faster, more convenient, and equally precise moving-target acquisition in video images than traditional computer mouse selection. The system design also builds on prior work we did on automated target detection, segmentation, and tracking algorithms. Besides the system design, a first pilot study is presented, in which we investigated how the participants (all non-experts in video analysis) performed in initializing an object tracking subsystem by selecting a target for tracking. Preliminary results show that the gaze + key press technique is an effective, efficient, and easy-to-use interaction technique when performing selection operations on moving targets in videos in order to initialize an object tracking function.

  20. Improvement in Recursive Hierarchical Segmentation of Data

    NASA Technical Reports Server (NTRS)

    Tilton, James C.

    2006-01-01

    A further modification has been made in the algorithm and implementing software reported in Modified Recursive Hierarchical Segmentation of Data (GSC-14681-1), NASA Tech Briefs, Vol. 30, No. 6 (June 2006), page 51. That software performs recursive hierarchical segmentation of data having spatial characteristics (e.g., spectral-image data). The output of a prior version of the software contained artifacts, including spurious segmentation-image regions bounded by processing-window edges. The modification for suppressing the artifacts, mentioned in the cited article, was addition of a subroutine that analyzes data in the vicinities of seams to find pairs of regions that tend to lie adjacent to each other on opposite sides of the seams. Within each such pair, pixels in one region that are more similar to pixels in the other region are reassigned to the other region. The present modification provides for a parameter ranging from 0 to 1 for controlling the relative priority of merges between spatially adjacent and spatially non-adjacent regions. At 1, spatially-adjacent-region and spatially-non-adjacent-region merges have equal priority. At 0, only spatially-adjacent-region merges (no spectral clustering) are allowed. Between 0 and 1, spatially-adjacent-region merges have priority over spatially-non-adjacent ones.
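
    The effect of the 0-to-1 parameter can be sketched as a merge-cost rule; the convention below (penalizing non-adjacent merges by dividing by the weight) is our reading of the behavior described above, not the exact implementation.

```python
# Sketch of an adjacency-priority parameter: the cheapest merge wins each
# round, and the weight controls how competitive non-adjacent merges are.

def merge_cost(dissimilarity, adjacent, spclust_wght):
    """Lower cost merges first; spclust_wght lies in [0, 1]."""
    if adjacent:
        return dissimilarity
    if spclust_wght == 0.0:
        return float("inf")  # spectral clustering disabled: non-adjacent never wins
    return dissimilarity / spclust_wght
```

    At weight 1 both merge kinds compete on equal terms; as the weight shrinks toward 0, non-adjacent merges need ever-smaller dissimilarity to be chosen.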

  1. NASA's Lunar Impact Monitoring Program

    NASA Technical Reports Server (NTRS)

    Suggs, Robert M.; Cooke, William; Swift, Wesley; Hollon, Nicholas

    2007-01-01

    NASA's Meteoroid Environment Office has implemented a program to monitor the Moon for meteoroid impacts from the Marshall Space Flight Center. Using off-the-shelf telescopes and video equipment, the Moon is monitored for as many as 10 nights per month, depending on weather. Custom software automatically detects flashes, which are confirmed by a second telescope, photometrically calibrated using background stars, and published on a website for correlation with other observations. Hypervelocity impact tests at the Ames Vertical Gun Facility have been performed to determine the luminous efficiency and ejecta characteristics. The purpose of this research is to define the impact ejecta environment for use by lunar spacecraft designers of the Constellation (manned lunar) Program. The observational techniques and preliminary results will be discussed.

  2. Interacting with target tracking algorithms in a gaze-enhanced motion video analysis system

    NASA Astrophysics Data System (ADS)

    Hild, Jutta; Krüger, Wolfgang; Heinze, Norbert; Peinsipp-Byma, Elisabeth; Beyerer, Jürgen

    2016-05-01

    Motion video analysis is a challenging task, particularly if real-time analysis is required. It is therefore an important issue how to provide suitable assistance for the human operator. Given that the use of customized video analysis systems is more and more established, one supporting measure is to provide system functions which perform subtasks of the analysis. Recent progress in the development of automated image exploitation algorithms allows, e.g., real-time moving target tracking. Another supporting measure is to provide a user interface which strives to reduce the perceptual, cognitive, and motor load of the human operator, for example by incorporating the operator's visual focus of attention. A gaze-enhanced user interface is able to help here. This work extends prior work on automated target recognition, segmentation, and tracking algorithms as well as on the benefits of a gaze-enhanced user interface for interaction with moving targets. We also propose a prototypical system design aiming to combine both the qualities of the human observer's perception and the automated algorithms in order to improve the overall performance of a real-time video analysis system. In this contribution, we address two novel issues in analyzing gaze-based interaction with target tracking algorithms. The first issue extends the gaze-based triggering of a target tracking process, e.g., investigating how best to relaunch in the case of track loss. The second issue addresses the initialization of tracking algorithms without motion segmentation, where the operator has to provide the system with the object's image region in order to start the tracking algorithm.

  3. A holistic image segmentation framework for cloud detection and extraction

    NASA Astrophysics Data System (ADS)

    Shen, Dan; Xu, Haotian; Blasch, Erik; Horvath, Gregory; Pham, Khanh; Zheng, Yufeng; Ling, Haibin; Chen, Genshe

    2013-05-01

    Atmospheric clouds are commonly encountered phenomena affecting visual tracking from air-borne or space-borne sensors. Generally, clouds are difficult to detect and extract because they are complex in shape and interact with sunlight in a complex fashion. In this paper, we propose a clustering game theoretic image segmentation approach to identify, extract, and patch clouds. In our framework, the first step is to decompose a given image containing clouds. The problem of image segmentation is considered as a "clustering game". Within this context, the notion of a cluster is equivalent to a classical equilibrium concept from game theory, as the game equilibrium reflects both the internal and external (e.g., two-player) cluster conditions. To obtain the evolutionary stable strategies, we explore three evolutionary dynamics: fictitious play, replicator dynamics, and infection and immunization dynamics (InImDyn). Secondly, we use the boundary and shape features to refine the cloud segments. This step can lower the false alarm rate. In the third step, we remove the detected clouds and patch the empty spots by performing background recovery. We demonstrate our cloud detection framework on a video clip, with supportive results.
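
    The replicator dynamics mentioned above have a standard discrete-time form; this sketch applies it to a small similarity matrix (the matrix and starting distribution are illustrative, and the authors' InImDyn variant differs).

```python
# One step of discrete replicator dynamics: strategies (pixels/cluster
# members) whose payoff (Ax)_i exceeds the average payoff x'Ax gain mass.

def replicator_step(A, x):
    """A: pairwise similarity matrix; x: current membership distribution."""
    Ax = [sum(a * xi for a, xi in zip(row, x)) for row in A]
    avg = sum(xi * axi for xi, axi in zip(x, Ax))
    return [xi * axi / avg for xi, axi in zip(x, Ax)]
```

    Iterating from a uniform start, mass concentrates on a mutually similar subset, which is exactly the cluster-as-equilibrium reading described in the abstract.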

  4. Video shot boundary detection using region-growing-based watershed method

    NASA Astrophysics Data System (ADS)

    Wang, Jinsong; Patel, Nilesh; Grosky, William

    2004-10-01

    In this paper, a novel shot boundary detection approach is presented, based on the popular region-growing segmentation method, watershed segmentation. In image processing, gray-scale pictures can be considered as topographic reliefs, in which the numerical value of each pixel of a given image represents the elevation at that point. The watershed method segments images by filling up basins with water starting at local minima; at points where water coming from different basins meets, dams are built. In our method, each frame in the video sequence is first transformed from the feature space into a topographic space based on a density function. Low-level features are extracted from frame to frame, and each frame is then treated as a point in the feature space. The density of each point is defined as the sum of the influence functions of all neighboring data points. The height function originally used in watershed segmentation is then replaced by the inverted density at the point, so that all the highest density values are transformed into local minima. Subsequently, watershed segmentation is performed in the topographic space. The intuitive idea behind our method is that frames within a shot are highly agglomerative in the feature space and have a higher possibility of being merged together, while frames between shots, which represent the shot changes, are not: they have lower density values and, with carefully extracted markers and a carefully chosen stopping criterion, are less likely to be clustered.
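
    The density-to-height transformation can be sketched in one dimension; Gaussian influence functions are one common choice here, assumed for illustration (the paper's feature space is multi-dimensional).

```python
import math

# Each frame's density is the sum of Gaussian influence functions of its
# neighbors in feature space; inverting density turns dense shot clusters
# into basins for the watershed step.

def density(point, all_points, sigma=1.0):
    return sum(math.exp(-((point - q) ** 2) / (2 * sigma ** 2))
               for q in all_points)

def height(point, all_points, sigma=1.0):
    """Inverted density: local minima where frames cluster (within a shot)."""
    return -density(point, all_points, sigma)
```

    Frames inside a shot sit in deep basins, while an isolated frame at a shot change sits on a ridge between them.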

  5. Candle Flames in Microgravity Video

    NASA Technical Reports Server (NTRS)

    1997-01-01

    This video of a candle flame burning in space was taken by the Candle Flames in Microgravity (CFM) experiment on the Russian Mir space station. It is actually a composite of still photos from a 35mm camera, since the video images were too dim. The images show a hemispherically shaped flame, primarily blue in color, with some yellow early in the flame's lifetime. The actual flame is quite dim and difficult to see with the naked eye. Nearly 80 candles were burned in this experiment aboard Mir. NASA scientists have also studied how flames spread in space and how to detect fire in microgravity. Researchers hope that what they learn about fire and combustion from the flame ball experiments will help out here on Earth. Their research could help create things such as better engines for cars and airplanes. Since they use very weak flames, flame balls require little fuel. By studying how this works, engineers may be able to design engines that use far less fuel. In addition, microgravity flame research is an important step in creating new safety precautions for astronauts living in space. By understanding how fire works in space, astronauts can be better prepared to fight it.

  6. Video File - NASA Conducts Final RS-25 Rocket Engine Test of 2017

    NASA Image and Video Library

    2017-12-13

    NASA engineers at Stennis Space Center capped a year of Space Launch System testing with a final RS-25 rocket engine hot fire on Dec. 13. The 470-second test on the A-1 Test Stand was a “green run” test of an RS-25 flight controller. The engine tested also included a large 3-D-printed part, a pogo accumulator assembly, scheduled for use on future RS-25 flight engines.

  7. Risk Assessment Update: Russian Segment

    NASA Technical Reports Server (NTRS)

    Christiansen, Eric; Lear, Dana; Hyde, James; Bjorkman, Michael; Hoffman, Kevin

    2012-01-01

    BUMPER-II version 1.95j source code was provided to RSC-E and Khrunichev at the January 2012 MMOD TIM in Moscow. MEMCxP and ORDEM 3.0 environments are implemented as external data files. NASA provided a sample ORDEM 3.0 ".key" & ".daf" environment file set for demonstrating and benchmarking the BUMPER-II v1.95j installation at the Jan-12 TIM. ORDEM 3.0 has been completed and is currently in beta testing. NASA will provide a preliminary set of ORDEM 3.0 ".key" & ".daf" environment files for the years 2012 through 2028. Bumper output files produced using the new ORDEM 3.0 data files are intended for internal use only, not for requirements verification. Output files will contain the words "ORDEM FILE DESCRIPTION = PRELIMINARY VERSION: not for production". The projectile density term in many BUMPER-II ballistic limit equations will need to be updated. Cube demo scripts and output files delivered at the Jan-12 TIM have been updated for the new ORDEM 3.0 data files. Risk assessment results based on ORDEM 3.0 and MEM will be presented for the Russian Segment (RS) of ISS.

  8. Towards a next generation open-source video codec

    NASA Astrophysics Data System (ADS)

    Bankoski, Jim; Bultje, Ronald S.; Grange, Adrian; Gu, Qunshan; Han, Jingning; Koleszar, John; Mukherjee, Debargha; Wilkins, Paul; Xu, Yaowu

    2013-02-01

    Google has recently been developing a next-generation open-source video codec called VP9, as part of the experimental branch of the libvpx repository included in the WebM project (http://www.webmproject.org/). Starting from the VP8 video codec released by Google in 2010 as the baseline, a number of enhancements and new tools have been added to improve the coding efficiency. This paper provides a technical overview of the current status of this project along with comparisons against other state-of-the-art video codecs, H.264/AVC and HEVC. The new tools that have been added so far include: larger prediction block sizes up to 64x64, various forms of compound INTER prediction, more modes for INTRA prediction, 1/8-pel motion vectors and 8-tap switchable sub-pel interpolation filters, improved motion reference generation and motion vector coding, improved entropy coding and frame-level entropy adaptation for various symbols, improved loop filtering, incorporation of Asymmetric Discrete Sine Transforms and larger 16x16 and 32x32 DCTs, frame-level segmentation to group similar areas together, etc. Other tools and various bitstream features are being actively worked on as well. The VP9 bitstream is expected to be finalized by early to mid 2013. Results show VP9 to be quite competitive in performance with mainstream state-of-the-art codecs.

  9. Dashboard Videos

    NASA Astrophysics Data System (ADS)

    Gleue, Alan D.; Depcik, Chris; Peltier, Ted

    2012-11-01

    Last school year, I had a web link emailed to me entitled "A Dashboard Physics Lesson." The link, created and posted by Dale Basier on his Lab Out Loud blog, illustrates video of a car's speedometer synchronized with video of the road. These two separate video streams are compiled into one video that students can watch and analyze. After seeing this website and video, I decided to create my own dashboard videos to show to my high school physics students. I have produced and synchronized 12 separate dashboard videos, each about 10 minutes in length, driving around the city of Lawrence, KS, and Douglas County, and posted them to a website.2 Each video reflects different types of driving: both positive and negative accelerations and constant speeds. As shown in Fig. 1, I was able to capture speed, distance, and miles per gallon from my dashboard instrumentation. By linking this with a stopwatch, each of these quantities can be graphed with respect to time. I anticipate and hope that teachers will find these useful in their own classrooms, i.e., having physics students watch the videos and create their own motion maps (distance-time, speed-time) for study.
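
    For instance, a speed-time table read off a dashboard video converts to accelerations by finite differences; the sample readings below are made up for illustration.

```python
# Finite-difference accelerations (m/s^2) between consecutive
# (time, speed) readings taken from the synchronized dashboard video.

def accelerations(times_s, speeds_mps):
    return [(speeds_mps[i + 1] - speeds_mps[i]) / (times_s[i + 1] - times_s[i])
            for i in range(len(times_s) - 1)]
```

    Students can plot these values against time to build the speed-time motion maps the article describes.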

  10. "Can you see me now?" An objective metric for predicting intelligibility of compressed American Sign Language video

    NASA Astrophysics Data System (ADS)

    Ciaramello, Francis M.; Hemami, Sheila S.

    2007-02-01

    For members of the Deaf Community in the United States, current communication tools include TTY/TDD services, video relay services, and text-based communication. With the growth of cellular technology, mobile sign language conversations are becoming a possibility. Proper coding techniques must be employed to compress American Sign Language (ASL) video for low-rate transmission while maintaining the quality of the conversation. In order to evaluate these techniques, an appropriate quality metric is needed. This paper demonstrates that traditional video quality metrics, such as PSNR, fail to predict subjective intelligibility scores. By considering the unique structure of ASL video, an appropriate objective metric is developed. Face and hand segmentation is performed using skin-color detection techniques. The distortions in the face and hand regions are optimally weighted and pooled across all frames to create an objective intelligibility score for a distorted sequence. The objective intelligibility metric performs significantly better than PSNR in terms of correlation with subjective responses.
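
    The region-weighted pooling idea can be sketched as below; the weights and the linear pooling rule are placeholders, not the paper's optimized values, and frames are flattened to pixel lists for simplicity.

```python
# Distortion in face and hand regions counts more toward predicted
# intelligibility than background distortion.

def region_mse(orig, dist, mask):
    num = sum((o - d) ** 2 for o, d, m in zip(orig, dist, mask) if m)
    den = sum(1 for m in mask if m)
    return num / den if den else 0.0

def intelligibility_score(orig, dist, face_mask, hand_mask,
                          w_face=0.5, w_hand=0.4, w_bg=0.1):
    bg_mask = [not (f or h) for f, h in zip(face_mask, hand_mask)]
    err = (w_face * region_mse(orig, dist, face_mask)
           + w_hand * region_mse(orig, dist, hand_mask)
           + w_bg * region_mse(orig, dist, bg_mask))
    return -err  # higher (less negative) means more intelligible
```

    The same distortion thus hurts the score far more when it falls on the signer's face or hands than on the background, which is what separates this metric from plain PSNR.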

  11. Study of moving object detecting and tracking algorithm for video surveillance system

    NASA Astrophysics Data System (ADS)

    Wang, Tao; Zhang, Rongfu

    2010-10-01

    This paper describes a specific process of moving-target detection and tracking in video surveillance. Obtaining a high-quality background is the key to difference-based target detection in video surveillance. The paper uses a block segmentation method to build a clean background and the background-difference method to detect moving targets; after a series of processing steps, a more complete object can be extracted from the original image and located with its smallest bounding rectangle. In a video surveillance system, camera delay and other factors cause tracking lag, so a Kalman-filter model based on template matching is proposed. Using the predictive and estimation capability of the Kalman filter, the center of the smallest bounding rectangle is taken as the predicted position where the object may appear in the next moment. Template matching is then performed in a region centered on this predicted position: by calculating the cross-correlation similarity between the current image and the reference image, the best matching center can be determined. Narrowing the search scope in this way reduces the search time, so fast tracking is achieved.
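
    The prediction step of such a tracking loop can be sketched with a small constant-velocity Kalman filter for one coordinate of the bounding-box center; the predicted position then seeds the template-matching search window. The class name and noise settings below are illustrative assumptions, not the paper's implementation.

```python
class Kalman1D:
    """Constant-velocity Kalman filter for one coordinate of a bounding-box
    center; the predicted position seeds the template-matching search window."""

    def __init__(self, x0, q=1e-2, r=1.0):
        self.x = [x0, 0.0]                      # [position, velocity]
        self.P = [[1.0, 0.0], [0.0, 1.0]]       # state covariance
        self.q, self.r = q, r                   # process / measurement noise

    def predict(self, dt=1.0):
        """Propagate the state one frame ahead; returns the predicted position."""
        x, v = self.x
        self.x = [x + v * dt, v]
        (p00, p01), (p10, p11) = self.P
        self.P = [
            [p00 + dt * (p01 + p10) + dt * dt * p11 + self.q, p01 + dt * p11],
            [p10 + dt * p11, p11 + self.q],
        ]
        return self.x[0]

    def update(self, z):
        """Correct the state with the measured box center from matching."""
        s = self.P[0][0] + self.r               # innovation variance
        k0, k1 = self.P[0][0] / s, self.P[1][0] / s
        y = z - self.x[0]                       # innovation
        self.x = [self.x[0] + k0 * y, self.x[1] + k1 * y]
        (p00, p01), (p10, p11) = self.P
        self.P = [
            [(1 - k0) * p00, (1 - k0) * p01],
            [p10 - k1 * p00, p11 - k1 * p01],
        ]
```

    Running one filter each for the x and y center coordinates is enough to shrink the template-matching search region around the predicted location.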

  12. NASA Sea Level Change Portal - It's not just another portal site

    NASA Astrophysics Data System (ADS)

    Huang, T.; Quach, N.; Abercrombie, S. P.; Boening, C.; Brennan, H. P.; Gill, K. M.; Greguska, F. R., III; Jackson, R.; Larour, E. Y.; Shaftel, H.; Tenenbaum, L. F.; Zlotnicki, V.; Moore, B.; Moore, J.; Boeck, A.

    2017-12-01

    The NASA Sea Level Change Portal (https://sealevel.nasa.gov) is designed as a "one-stop" source for current sea level change information, including interactive tools for accessing and viewing regional data, a virtual dashboard of sea level indicators, and ongoing updates through a suite of editorial products that include content articles, graphics, videos, and animations. With increasing global temperatures warming the ocean and melting ice sheets and glaciers, there is an immediate need both for accelerating sea level change research and for making this research accessible to scientists in disparate disciplines, to the general public, and to policy makers and businesses. The immersive and innovative NASA portal, which debuted at the 2015 AGU Fall Meeting, attracts thousands of daily visitors and over 30K followers on Facebook®. Behind its intuitive interface is an extensible architecture that integrates site content, data from various sources, visualization, horizontally scalable geospatial data analytic technology (called NEXUS), and an interactive 3D simulation platform (called the Virtual Earth System Laboratory). We will present an overview of our NASA portal and some of our architectural decisions, along with a discussion of our open-source, cloud-based data analytic technology that enables on-the-fly analysis of heterogeneous data.

  13. Digital Learning Network Education Events of NASA's Extreme Environments Mission Operations

    NASA Technical Reports Server (NTRS)

    Paul, Heather; Guillory, Erika

    2007-01-01

    NASA's Digital Learning Network (DLN) reaches out to thousands of students each year through video conferencing and web casting. The DLN has created a series of live education videoconferences connecting NASA's Extreme Environment Mission Operations (NEEMO) team to students across the United States. The programs are also extended to students around the world through live web casting. The primary focus of the events is the vision for space exploration. During the programs, NEEMO crewmembers, including NASA astronauts, engineers, and scientists, inform and inspire students about the importance of exploration and share the impact of the project as it correlates with plans to return to the Moon and explore the planet Mars. These events highlight interactivity. Students talk live with the aquanauts in Aquarius, the National Oceanic and Atmospheric Administration's underwater laboratory. With this program, NASA continues the Agency's tradition of investing in the nation's education programs. It is directly tied to the Agency's major education goal of attracting and retaining students in science, technology, and engineering disciplines. Before connecting with the aquanauts, the students conduct experiments of their own, designed to coincide with mission objectives. This paper describes the events that took place in September 2006.

  14. Snowfall Retrievals Using a Video Disdrometer

    NASA Astrophysics Data System (ADS)

    Newman, A. J.; Kucera, P. A.

    2004-12-01

    A video disdrometer has recently been developed at NASA/Wallops Flight Facility in an effort to improve surface precipitation measurements. One of the goals of the upcoming Global Precipitation Measurement (GPM) mission is to provide improved satellite-based measurements of snowfall in mid-latitudes. Also, with the planned dual-polarization upgrade of US National Weather Service weather radars, there is potential for significant improvements in radar-based estimates of snowfall. The video disdrometer, referred to as the Rain Imaging System (RIS), was deployed in eastern North Dakota during the 2003-2004 winter season to measure size distributions, precipitation rate, and density estimates of snowfall. The RIS uses a CCD grayscale video camera with a zoom lens to observe hydrometeors in a sample volume located 2 meters from the end of the lens and approximately 1.5 meters from an independent light source. The design of the RIS may eliminate sampling errors caused by wind flow around the instrument. The RIS operated almost continuously in the adverse conditions often observed in the Northern Plains. Preliminary analysis of an extended winter snowstorm has shown encouraging results. The RIS was able to provide crystal habit information, the variability of particle size distributions over the lifecycle of the storm, snowfall rates, and estimates of snow density. Comparisons with coincident snow core samples and measurements from the nearby NWS Forecast Office indicate that the RIS provides reasonable snowfall measurements. WSR-88D radar observations over the RIS were used to generate a snowfall-reflectivity relationship for the storm. These results, along with several other cases, will be shown during the presentation.

  15. Gulfstream's Quiet Spike sonic boom mitigator being installed on NASA DFRC's F-15B testbed aircraft

    NASA Image and Video Library

    2006-04-17

    Gulfstream's Quiet Spike sonic boom mitigator being installed on NASA DFRC's F-15B testbed aircraft. The project seeks to verify the structural integrity of the multi-segmented, articulating spike attachment designed to reduce and control a sonic boom.

  16. Digital Video (DV): A Primer for Developing an Enterprise Video Strategy

    NASA Astrophysics Data System (ADS)

    Talovich, Thomas L.

    2002-09-01

    The purpose of this thesis is to provide an overview of digital video production and delivery. The thesis presents independent research demonstrating the educational value of incorporating video and multimedia content in training and education programs. The thesis explains the fundamental concepts associated with the process of planning, preparing, and publishing video content and assists in the development of follow-on strategies for incorporation of video content into distance training and education programs. The thesis provides an overview of the following technologies: Digital Video, Digital Video Editors, Video Compression, Streaming Video, and Optical Storage Media.

  17. Informative frame detection from wireless capsule video endoscopic images

    NASA Astrophysics Data System (ADS)

    Bashar, Md. Khayrul; Mori, Kensaku; Suenaga, Yasuhito; Kitasaka, Takayuki; Mekada, Yoshito

    2008-03-01

    Wireless capsule endoscopy (WCE) is a new clinical technology permitting visualization of the small bowel, the most difficult segment of the digestive tract. The major drawback of this technology is the large amount of time required for video diagnosis. In this study, we propose a method for informative frame detection by isolating useless frames that are substantially covered by turbid fluids or contaminated with other materials, e.g., fecal, semi-processed, or unabsorbed food. Such materials and fluids present a wide range of colors, from brown to yellow, and/or bubble-like texture patterns. The detection scheme therefore consists of two stages: highly contaminated non-bubbled (HCN) frame detection and significantly bubbled (SB) frame detection. Local color moments in the Ohta color space are used to characterize HCN frames, which are isolated by a Support Vector Machine (SVM) classifier in Stage 1. The remaining frames go to Stage 2, where Laguerre-Gauss Circular Harmonic Functions (LG-CHFs) extract the characteristics of the bubble structures in a multi-resolution framework. An automatic segmentation method is designed to extract the bubbled regions based on local absolute energies of the CHF responses, derived from the grayscale version of the original color image. Final detection of the informative frames is obtained by a threshold operation on the extracted regions. An experiment with 20,558 frames from three videos shows excellent average detection accuracy (96.75%) for the proposed method, compared with Gabor-based (74.29%) and discrete-wavelet-based (62.21%) features.
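
    The Stage-1 features can be illustrated with a minimal color-moment extractor (mean, standard deviation, skewness per channel). In the paper these moments are computed locally in the Ohta color space and fed to an SVM; the function below is a generic sketch with assumed names, shown on raw channel tuples.

```python
import math

def color_moments(pixels):
    """First three color moments (mean, std, skewness) per channel for a
    list of pixel tuples; these statistics feed the HCN-frame classifier."""
    n_ch = len(pixels[0])
    feats = []
    for c in range(n_ch):
        vals = [p[c] for p in pixels]
        n = len(vals)
        mean = sum(vals) / n
        var = sum((v - mean) ** 2 for v in vals) / n
        std = math.sqrt(var)
        # skewness is undefined for a flat channel; report 0 by convention
        skew = 0.0 if std == 0 else sum((v - mean) ** 3 for v in vals) / (n * std ** 3)
        feats.extend([mean, std, skew])
    return feats
```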

  18. An improved architecture for video rate image transformations

    NASA Technical Reports Server (NTRS)

    Fisher, Timothy E.; Juday, Richard D.

    1989-01-01

    Geometric image transformations are of interest to pattern recognition algorithms for their use in simplifying some aspects of the pattern recognition process. Examples include reducing sensitivity to the rotation, scale, and perspective of the object being recognized. The NASA Programmable Remapper can perform a wide variety of geometric transforms at full video rate. An architecture is proposed that extends its abilities and alleviates many of the first version's shortcomings. The need for the improvements is discussed in the context of the initial Programmable Remapper and the benefits and limitations it has delivered. The implementation and capabilities of the proposed architecture are discussed.

  19. 4K Video of Colorful Liquid in Space

    NASA Image and Video Library

    2015-10-09

    Once again, astronauts on the International Space Station dissolved an effervescent tablet in a floating ball of water, and captured images using a camera capable of recording four times the resolution of normal high-definition cameras. The higher resolution images and higher frame rate videos can reveal more information when used on science investigations, giving researchers a valuable new tool aboard the space station. This footage is one of the first of its kind. The cameras are being evaluated for capturing science data and vehicle operations by engineers at NASA's Marshall Space Flight Center in Huntsville, Alabama.

  20. NASA's Webb Sunshield Stacks Up to Test

    NASA Image and Video Library

    2014-07-24

    The Sunshield on NASA's James Webb Space Telescope is the largest part of the observatory—five layers of thin membrane that must unfurl reliably in space to precise tolerances. Last week, for the first time, engineers stacked and unfurled a full-sized test unit of the Sunshield and it worked perfectly. The Sunshield is about the length of a tennis court, and will be folded up like an umbrella around the Webb telescope’s mirrors and instruments during launch. Once it reaches its orbit, the Webb telescope will receive a command from Earth to unfold, and separate the Sunshield's five layers into their precisely stacked arrangement with its kite-like shape. The Sunshield test unit was stacked and expanded at a cleanroom in the Northrop Grumman facility in Redondo Beach, California. The Sunshield separates the observatory into a warm sun-facing side and a cold side where the sunshine is blocked from interfering with the sensitive infrared instruments. The infrared instruments need to be kept very cold (under 50 K or -370 degrees F) to operate. The Sunshield protects these sensitive instruments with an effective sun protection factor or SPF of 1,000,000 (suntan lotion generally has an SPF of 8-50). In addition to providing a cold environment, the Sunshield provides a thermally stable environment. This stability is essential to maintaining proper alignment of the primary mirror segments as the telescope changes its orientation to the sun. The James Webb Space Telescope is the successor to NASA's Hubble Space Telescope. It will be the most powerful space telescope ever built. Webb is an international project led by NASA with its partners, the European Space Agency and the Canadian Space Agency. For more information about the Webb telescope, visit: www.jwst.nasa.gov or www.nasa.gov/webb For more information on the Webb Sunshield, visit: jwst.nasa.gov/sunshield.html Credit: NASA/Goddard/Chris Gunn

  1. Feedback from video for virtual reality Navigation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tsap, L V

    2000-10-27

    Important preconditions for wide acceptance of virtual reality (VR) systems include their comfort, ease, and naturalness of use. Most existing trackers suffer from discomfort-related issues. For example, body-based trackers (hand controllers, joysticks, helmet attachments, etc.) restrict spontaneity and naturalness of motion, while ground-based devices (e.g., hand controllers) limit the workspace by literally binding an operator to the ground. There are similar problems with controls. This paper describes using real-time video with registered depth information (from a commercially available camera) for virtual reality navigation. A camera-based setup can replace cumbersome trackers. The method includes selective depth processing for increased speed and a robust skin-color segmentation that accounts for illumination variations.
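
    A rough flavor of skin-color segmentation can be given in normalized rg chromaticity, where brightness is divided out so the classifier is less sensitive to illumination. The threshold ranges below are illustrative guesses, not the paper's trained, illumination-adaptive model.

```python
def skin_mask(pixels, r_range=(0.35, 0.50), g_range=(0.25, 0.37)):
    """Classify RGB pixels as skin/non-skin using normalized rg chromaticity.
    Dividing by the channel sum removes overall brightness, leaving chromaticity."""
    mask = []
    for (R, G, B) in pixels:
        s = R + G + B
        if s == 0:
            mask.append(False)  # pure black carries no chromaticity information
            continue
        r, g = R / s, G / s
        mask.append(r_range[0] <= r <= r_range[1] and g_range[0] <= g <= g_range[1])
    return mask
```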

  2. Dashboard Videos

    ERIC Educational Resources Information Center

    Gleue, Alan D.; Depcik, Chris; Peltier, Ted

    2012-01-01

    Last school year, I had a web link emailed to me entitled "A Dashboard Physics Lesson." The link, created and posted by Dale Basier on his "Lab Out Loud" blog, illustrates video of a car's speedometer synchronized with video of the road. These two separate video streams are compiled into one video that students can watch and analyze. After seeing…

  3. Development of Onboard Computer Complex for Russian Segment of ISS

    NASA Technical Reports Server (NTRS)

    Branets, V.; Brand, G.; Vlasov, R.; Graf, I.; Clubb, J.; Mikrin, E.; Samitov, R.

    1998-01-01

    This report presents a description of the Onboard Computer Complex (CC) that was developed during the period 1994-1998 for the Russian Segment of the ISS. The system was developed in cooperation with NASA and ESA. ESA developed a new computation system under the RSC Energia Technical Assignment, called DMS-R. The CC also includes elements developed by Russian experts and organizations. A general architecture of the computer system and the characteristics of its primary elements are described. The system was integrated at RSC Energia with the participation of American and European specialists. The report contains information on the software simulators and the verification and debugging facilities which were developed for both stand-alone and integrated tests and verification. This CC serves as the basis for the Russian Segment Onboard Control Complex on the ISS.

  4. Improving Photometric Calibration of Meteor Video Camera Systems

    NASA Technical Reports Server (NTRS)

    Ehlert, Steven; Kingery, Aaron; Suggs, Robert

    2016-01-01

    We present the results of new calibration tests performed by the NASA Meteoroid Environment Office (MEO) designed to help quantify and minimize systematic uncertainties in meteor photometry from video camera observations. These systematic uncertainties can be categorized by two main sources: an imperfect understanding of the linearity correction for the MEO's Watec 902H2 Ultimate video cameras and uncertainties in meteor magnitudes arising from transformations between the Watec camera's Sony EX-View HAD bandpass and the bandpasses used to determine reference star magnitudes. To address the first point, we have measured the linearity response of the MEO's standard meteor video cameras using two independent laboratory tests on eight cameras. Our empirically determined linearity correction is critical for performing accurate photometry at low camera intensity levels. With regards to the second point, we have calculated synthetic magnitudes in the EX bandpass for reference stars. These synthetic magnitudes enable direct calculations of the meteor's photometric flux within the camera bandpass without requiring any assumptions of its spectral energy distribution. Systematic uncertainties in the synthetic magnitudes of individual reference stars are estimated at 0.20 mag, and are limited by the available spectral information in the reference catalogs. These two improvements allow for zero-points accurate to 0.05-0.10 mag in both filtered and unfiltered camera observations with no evidence for lingering systematics.
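
    The role of the synthetic reference magnitudes can be illustrated with the standard zero-point calculation: each reference star's catalog (here, synthetic) magnitude plus 2.5 log10 of its measured flux gives one zero-point estimate, and these estimates are averaged. The function below is a minimal sketch with assumed names, not the MEO pipeline.

```python
import math

def zero_point(instr_fluxes, ref_mags):
    """Photometric zero-point: mean offset between reference magnitudes and
    instrumental magnitudes m_instr = -2.5 * log10(flux), so that
    m_ref = zp - 2.5 * log10(flux) for a perfectly linear camera."""
    offsets = [m_ref + 2.5 * math.log10(f)
               for f, m_ref in zip(instr_fluxes, ref_mags)]
    return sum(offsets) / len(offsets)
```

    Note this assumes the linearity correction has already been applied to the measured fluxes; without it, the zero-point would drift with intensity level, which is exactly the systematic the laboratory tests target.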

  5. Precise Alignment and Permanent Mounting of Thin and Lightweight X-ray Segments

    NASA Technical Reports Server (NTRS)

    Biskach, Michael P.; Chan, Kai-Wing; Hong, Melinda N.; Mazzarella, James R.; McClelland, Ryan S.; Norman, Michael J.; Saha, Timo T.; Zhang, William W.

    2012-01-01

    To provide observations to support current research efforts in high energy astrophysics, future X-ray telescope designs must provide matching or better angular resolution while significantly increasing the total collecting area. In such a design the permanent mounting of thin and lightweight segments is critical to the overall performance of the complete X-ray optic assembly. The thin and lightweight segments used in the assembly of the modules are designed to maintain and/or exceed the resolution of existing X-ray telescopes while providing a substantial increase in collecting area. Such thin and delicate X-ray segments are easily distorted and yet must be aligned to the arcsecond level and retain accurate alignment for many years. The Next Generation X-ray Optic (NGXO) group at NASA Goddard Space Flight Center has designed, assembled, and implemented new hardware and procedures with the short-term goal of aligning three pairs of X-ray segments in a technology demonstration module while maintaining 10 arcsec alignment through environmental testing, as part of the eventual design and construction of a full-sized module capable of housing hundreds of X-ray segments. The recent attempts at multiple segment pair alignment and permanent mounting are described along with an overview of the procedure used. A look at what the next year will bring for the alignment and permanent segment mounting effort illustrates some of the challenges left to overcome before an attempt to populate a full-sized module can begin.

  6. Student perceptions of a video-based blended learning approach for improving pediatric physical examination skills.

    PubMed

    Lehmann, Ronny; Seitz, Anke; Bosse, Hans Martin; Lutz, Thomas; Huwendiek, Sören

    2016-11-01

    Physical examination skills are crucial for a medical doctor. The physical examination of children differs significantly from that of adults. Students often have only limited contact with pediatric patients to practice these skills. In order to improve the acquisition of pediatric physical examination skills during bedside teaching, we have developed a combined video-based training concept, subsequently evaluating its use and perception. Fifteen videos were compiled, demonstrating defined physical examination sequences in children of different ages. Students were encouraged to use these videos as preparation for bedside teaching during their pediatric clerkship. After bedside teaching, acceptance of this approach was evaluated using a 10-item survey, asking for the frequency of video use and the benefits to learning, self-confidence, and preparation of bedside teaching as well as the concluding OSCE. N=175 out of 299 students returned survey forms (58.5%). Students most frequently used videos, either illustrating complete examination sequences or corresponding focus examinations frequently assessed in the OSCE. Students perceived the videos as a helpful method of conveying the practical process and preparation for bedside teaching as well as the OSCE, and altogether considered them a worthwhile learning experience. Self-confidence at bedside teaching was enhanced by preparation with the videos. The demonstration of a defined standardized procedural sequence, explanatory comments, and demonstration of infrequent procedures and findings were perceived as particularly supportive. Long video segments, poor alignment with other curricular learning activities, and technical problems were perceived as less helpful. Students prefer an optional individual use of the videos, with easy technical access, thoughtful combination with the bedside teaching, and consecutive standardized practice of demonstrated procedures. 
Preparation with instructional videos combined with bedside

  7. Hybrid Reality Lab Capabilities - Video 2

    NASA Technical Reports Server (NTRS)

    Delgado, Francisco J.; Noyes, Matthew

    2016-01-01

    Our Hybrid Reality and Advanced Operations Lab is developing incredibly realistic and immersive systems that could be used to provide training, support engineering analysis, and augment data collection for various human performance metrics at NASA. To get a better understanding of what Hybrid Reality is, let's go through the two most commonly known types of immersive realities: Virtual Reality and Augmented Reality. Virtual Reality creates immersive scenes that are completely made up of digital information. This technology has been used to train astronauts at NASA, during teleoperation of remote assets (arms, rovers, robots, etc.), and in other activities. One challenge with Virtual Reality is that if you are using it for real-time applications (like landing an airplane), the information used to create the virtual scenes can be old (i.e., visualized long after physical objects moved in the scene) and not accurate enough to land the airplane safely. This is where Augmented Reality comes in. Augmented Reality takes real-time environment information (from a camera or a see-through window) and places digitally created information into the scene so that it matches the video/glass information. Augmented Reality enhances real environment information collected with a live sensor or viewport (e.g., camera, window, etc.) with the information-rich visualization provided by Virtual Reality. Hybrid Reality takes Augmented Reality even further, by creating a higher level of immersion where interactivity can take place. Hybrid Reality takes Virtual Reality objects and a trackable, physical representation of those objects, places them in the same coordinate system, and allows people to interact with both objects' representations (virtual and physical) simultaneously. After a short period of adjustment, the individuals begin to interact with all the objects in the scene as if they were real-life objects. 
The ability to physically touch and interact with digitally created

  8. Stereo-Video Data Reduction of Wake Vortices and Trailing Aircrafts

    NASA Technical Reports Server (NTRS)

    Alter-Gartenberg, Rachel

    1998-01-01

    This report presents stereo image theory and the corresponding image processing software developed to analyze stereo imaging data acquired for the wake-vortex hazard flight experiment conducted at NASA Langley Research Center. In this experiment, a leading Lockheed C-130 was equipped with wing-tip smokers to visualize its wing vortices, while a trailing Boeing 737 flew into the wake vortices of the leading airplane. A Rockwell OV-10A airplane, fitted with video cameras under its wings, flew at 400 to 1000 feet above and parallel to the wakes, and photographed the wake interception process for the purpose of determining the three-dimensional location of the trailing aircraft relative to the wake. The report establishes the image-processing tools developed to analyze the video flight-test data, identifies sources of potential inaccuracies, and assesses the quality of the resultant set of stereo data reduction.

  9. A First for NASA's IRIS: Observing a Gigantic Eruption of Solar Material

    NASA Image and Video Library

    2014-05-30

    Watch a video from this event here: www.flickr.com/photos/gsfc/14118958800/ A coronal mass ejection, or CME, surged off the side of the sun on May 9, 2014, and NASA's newest solar observatory caught it in extraordinary detail. This was the first CME observed by the Interface Region Imaging Spectrograph, or IRIS, which launched in June 2013 to peer into the lowest levels of the sun's atmosphere with better resolution than ever before. Watch the movie to see how a curtain of solar material erupts outward at speeds of 1.5 million miles per hour. Read more: 1.usa.gov/1kp7O4F NASA image use policy. NASA Goddard Space Flight Center enables NASA’s mission through four scientific endeavors: Earth Science, Heliophysics, Solar System Exploration, and Astrophysics. Goddard plays a leading role in NASA’s accomplishments by contributing compelling scientific knowledge to advance the Agency’s mission.

  10. Quantitative characterization of the carbon/carbon composites components based on video of polarized light microscope.

    PubMed

    Li, Yixian; Qi, Lehua; Song, Yongshan; Chao, Xujiang

    2017-06-01

    The components of carbon/carbon (C/C) composites have a significant influence on their thermal and mechanical properties, so a quantitative characterization of the components is necessary to study the microstructure of C/C composites and, further, to improve their macroscopic properties. Considering that the extinction crosses of the pyrocarbon matrix have distinctive motion features, polarized light microscope (PLM) video is used to characterize C/C composites quantitatively, because it contains sufficient dynamic and structural information. The optical flow method is introduced to compute the optical flow field between adjacent frames and to segment the components of C/C composites from the PLM images by image processing. Meanwhile, matrix regions with different textures are re-segmented by the length difference of their motion vectors, and the component fraction of each component and the extinction angle of the pyrocarbon matrix are calculated directly. Finally, the C/C composites are successfully characterized in terms of carbon fiber, pyrocarbon, and pores by a series of image processing operators based on PLM video, and the errors of the component fractions are less than 15%. © 2017 Wiley Periodicals, Inc.
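
    Once every pixel carries a component label (fiber, pyrocarbon, pore), the component-fraction step reduces to counting labels. The sketch below uses assumed label names for illustration; the hard part in the paper is producing the labels via optical flow, not this final tally.

```python
def component_fractions(labels):
    """Area fraction of each labeled component in a 2D label map.
    Fractions sum to 1.0 because every pixel carries exactly one label."""
    counts = {}
    total = 0
    for row in labels:
        for lab in row:
            counts[lab] = counts.get(lab, 0) + 1
            total += 1
    return {lab: c / total for lab, c in counts.items()}
```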

  11. Wicked problems in space technology development at NASA

    NASA Astrophysics Data System (ADS)

    Balint, Tibor S.; Stevens, John

    2016-01-01

    Technological innovation is key to enable future space exploration missions at NASA. Technology development, however, is not only driven by performance and resource considerations, but also by a broad range of directly or loosely interconnected factors. These include, among others, strategy, policy and politics at various levels, tactics and programmatics, interactions between stakeholders, resource requirements, performance goals from component to system level, mission infusion targets, portfolio execution and tracking, and technology push or mission pull. Furthermore, at NASA, these influences occur on varying timescales and at diverse geographic locations. Such a complex and interconnected system could impede space technology innovation in this examined segment of the government environment. Hence, understanding the process through NASA's Planning, Programming, Budget and Execution cycle could benefit strategic thinking, planning and execution. Insights could be gained through suitable models, for example assessing the key drivers against the framework of Wicked Problems. This paper discusses NASA specific space technology innovation and innovation barriers in the government environment through the characteristics of Wicked Problems; that is, they do not have right or wrong solutions, only improved outcomes that can be reached through authoritative, competitive, or collaborative means. We will also augment the Wicked Problems model to account for the temporally and spatially coupled, and cyclical nature of this NASA specific case, and propose how appropriate models could improve understanding of the key influencing factors. In turn, such understanding may subsequently lead to reducing innovation barriers, and stimulating technology innovation at NASA. 
Furthermore, our approach can be adopted for other government-directed environments to gain insights into their structures, hierarchies, operational flow, and interconnections to facilitate circular dialogs towards

  12. Markerless video analysis for movement quantification in pediatric epilepsy monitoring.

    PubMed

    Lu, Haiping; Eng, How-Lung; Mandal, Bappaditya; Chan, Derrick W S; Ng, Yen-Ling

    2011-01-01

    This paper proposes a markerless video analytic system for quantifying body part movements in pediatric epilepsy monitoring. The system utilizes colored pajamas worn by a patient in bed to extract body part movement trajectories, from which various features can be obtained for seizure detection and analysis. Hence, it is non-intrusive and it requires no sensor/marker to be attached to the patient's body. It takes raw video sequences as input and a simple user-initialization indicates the body parts to be examined. In background/foreground modeling, Gaussian mixture models are employed in conjunction with HSV-based modeling. Body part detection follows a coarse-to-fine paradigm with graph-cut-based segmentation. Finally, body part parameters are estimated with domain knowledge guidance. Experimental studies are reported on sequences captured in an Epilepsy Monitoring Unit at a local hospital. The results demonstrate the feasibility of the proposed system in pediatric epilepsy monitoring and seizure detection.
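
    The background/foreground stage can be illustrated with a per-pixel running-Gaussian model, a simplified single-Gaussian stand-in for the Gaussian mixture (combined with HSV-based modeling) used in the paper; the class name and parameter values are assumptions for demonstration.

```python
class RunningGaussianBG:
    """Per-pixel running-Gaussian background model. A pixel is foreground when
    its value lies more than k standard deviations from the learned mean;
    background pixels update the mean and variance exponentially."""

    def __init__(self, first_frame, alpha=0.05, k=2.5, init_var=100.0):
        self.mean = [[float(v) for v in row] for row in first_frame]
        self.var = [[init_var for _ in row] for row in first_frame]
        self.alpha, self.k = alpha, k

    def apply(self, frame):
        """Return a 0/1 foreground mask and adapt the model on background pixels."""
        mask = []
        for i, row in enumerate(frame):
            out = []
            for j, v in enumerate(row):
                m, s2 = self.mean[i][j], self.var[i][j]
                fg = (v - m) ** 2 > (self.k ** 2) * s2
                if not fg:
                    # exponential update keeps the model tracking slow changes
                    self.mean[i][j] = (1 - self.alpha) * m + self.alpha * v
                    self.var[i][j] = (1 - self.alpha) * s2 + self.alpha * (v - m) ** 2
                out.append(1 if fg else 0)
            mask.append(out)
        return mask
```

    A mixture of such Gaussians per pixel, as in the paper, additionally handles multimodal backgrounds (e.g., flickering lights or bedding folds) that a single Gaussian cannot.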

  13. Thematic video indexing to support video database retrieval and query processing

    NASA Astrophysics Data System (ADS)

    Khoja, Shakeel A.; Hall, Wendy

    1999-08-01

    This paper presents a novel video database system, which caters for complex and long videos, such as documentaries, educational videos, etc. As compared to relatively structured format videos like CNN news or commercial advertisements, this database system has the capacity to work with long and unstructured videos.

  14. Video game addiction, ADHD symptomatology, and video game reinforcement.

    PubMed

    Mathews, Christine L; Morrell, Holly E R; Molle, Jon E

    2018-06-06

    Up to 23% of people who play video games report symptoms of addiction. Individuals with attention deficit hyperactivity disorder (ADHD) may be at increased risk for video game addiction, especially when playing games with more reinforcing properties. The current study tested whether level of video game reinforcement (type of game) places individuals with greater ADHD symptom severity at higher risk for developing video game addiction. Adult video game players (N = 2,801; Mean age = 22.43, SD = 4.70; 93.30% male; 82.80% Caucasian) completed an online survey. Hierarchical multiple linear regression analyses were used to test type of game, ADHD symptom severity, and the interaction between type of game and ADHD symptomatology as predictors of video game addiction severity, after controlling for age, gender, and weekly time spent playing video games. ADHD symptom severity was positively associated with increased addiction severity (b = .73 and .68, ps < 0.001). Type of game played or preferred the most was not associated with addiction severity, ps > .05. The relationship between ADHD symptom severity and addiction severity did not depend on the type of video game played or preferred most, ps > .05. Gamers who have greater ADHD symptom severity may be at greater risk for developing symptoms of video game addiction and its negative consequences, regardless of type of video game played or preferred most. Individuals who report ADHD symptomatology and also identify as gamers may benefit from psychoeducation about the potential risk for problematic play.

  15. NASA SNPP SIPS - Following in the Path of EOS

    NASA Technical Reports Server (NTRS)

    Behnke, Jeanne; Hall, Alfreda; Ho, Evelyn

    2016-01-01

    NASA's Earth Science Data Information System (ESDIS) Project has been operating NASA's Suomi National Polar-Orbiting Partnership (SNPP) Science Data Segment (SDS) since the launch in October 2011. At launch, the SDS focused primarily on the evaluation of Sensor Data Records (SDRs) and Environmental Data Records (EDRs) produced by the Joint Polar Satellite System (JPSS), a National Oceanic and Atmospheric Administration (NOAA) Program, as to their suitability for Earth system science. During the summer of 2014, NASA transitioned to the production of standard Earth Observing System (EOS)-like science products for all instruments aboard Suomi NPP. The five Science Investigator-led Processing Systems (SIPS): Land, Ocean, Atmosphere, Ozone, and Sounder were established to produce the NASA SNPP standard Level 1, Level 2, and global Level 3 products developed by the SNPP Science Teams and to provide the products to NASA's Distributed Active Archive Centers (DAACs) for archive and distribution to the user community. The processing, archiving and distribution of data from NASA's Clouds and the Earth's Radiant Energy System (CERES) and Ozone Mapper/Profiler Suite (OMPS) Limb instruments will continue. With the implementation of the JPSS Block 2 architecture and the launch of JPSS-1, the SDS will receive SNPP data in near real-time via the JPSS Stored Mission Data Hub (JSH), as well as JPSS-1 and future JPSS-2 data. The SNPP SIPS will ingest EOS compatible Level 0 data from the EOS Data Operations System (EDOS) element for their data processing, enabling the continuous EOS-SNPP-JPSS Satellite Data Record.

  16. OceanVideoLab: A Tool for Exploring Underwater Video

    NASA Astrophysics Data System (ADS)

    Ferrini, V. L.; Morton, J. J.; Wiener, C.

    2016-02-01

    Video imagery acquired with underwater vehicles is an essential tool for characterizing seafloor ecosystems and seafloor geology. It is a fundamental component of ocean exploration that facilitates real-time operations, augments multidisciplinary scientific research, and holds tremendous potential for public outreach and engagement. Acquiring, documenting, managing, preserving and providing access to large volumes of video acquired with underwater vehicles presents a variety of data stewardship challenges to the oceanographic community. As a result, only a fraction of underwater video content collected with research submersibles is documented, discoverable and/or viewable online. With more than 1 billion users, YouTube offers infrastructure that can be leveraged to help address some of the challenges associated with sharing underwater video with a broad global audience. Anyone can post content to YouTube, and some oceanographic organizations, such as the Schmidt Ocean Institute, have begun live-streaming video directly from underwater vehicles. OceanVideoLab (oceanvideolab.org) was developed to help improve access to underwater video through simple annotation, browse functionality, and integration with related environmental data. Any underwater video that is publicly accessible on YouTube can be registered with OceanVideoLab by simply providing a URL. It is strongly recommended that a navigational file also be supplied to enable geo-referencing of observations. Once a video is registered, it can be viewed and annotated using a simple user interface that integrates observations with vehicle navigation data if provided. This interface includes an interactive map and a list of previous annotations that allows users to jump to times of specific observations in the video. Future enhancements to OceanVideoLab will include the deployment of a search interface, the development of an application program interface (API) that will drive the search and enable querying of
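    Geo-referencing annotations against a supplied navigational file, as described above, amounts to interpolating the vehicle's position at each annotation timestamp. A minimal sketch (the function name and array-based data layout are assumptions; OceanVideoLab's actual implementation is not described in the abstract):

```python
import numpy as np

def georeference(annotation_times, nav_times, nav_lat, nav_lon):
    """Interpolate vehicle navigation at annotation timestamps so each
    observation gets a latitude/longitude (assumes nav_times is sorted)."""
    lat = np.interp(annotation_times, nav_times, nav_lat)
    lon = np.interp(annotation_times, nav_times, nav_lon)
    return lat, lon
```

    With a navigation track sampled once per second, an annotation made at an arbitrary video time simply lands between the two bracketing fixes.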

  17. Videos for Science Communication and Nature Interpretation: The TIB|AV-Portal as Resource.

    NASA Astrophysics Data System (ADS)

    Marín Arraiza, Paloma; Plank, Margret; Löwe, Peter

    2016-04-01

    relevant article or further supplement materials). By using media fragment identifiers, not only the whole video but also individual parts of it can be cited. Users are also likely to find high-quality related content (for instance, a video abstract and the corresponding article, or an expedition documentary and its field notebook). Based on automatic analysis of speech, images, and text within the videos, a large amount of metadata associated with the segments of the video is generated automatically. These metadata enhance the searchability of the video and make it easier to retrieve and interlink meaningful parts of it. This new and reliable library-driven infrastructure allows all types of data to be discoverable, accessible, citable, freely reusable, and interlinked. Therefore, it simplifies Science Communication
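    Media fragment identifiers follow the W3C Media Fragments URI syntax, in which a temporal fragment such as `#t=30,60` addresses seconds 30 to 60 of a video. A small helper for building and parsing such temporal fragments (the example URL is hypothetical, not an actual TIB|AV-Portal address):

```python
from urllib.parse import urlsplit

def temporal_fragment(url, start, end=None):
    """Append a W3C Media Fragments temporal fragment (#t=start,end)."""
    frag = f"t={start:g}" + (f",{end:g}" if end is not None else "")
    return url.split('#')[0] + '#' + frag

def parse_temporal_fragment(url):
    """Return (start, end) seconds from a #t=... fragment, or None."""
    frag = urlsplit(url).fragment
    if not frag.startswith('t='):
        return None
    parts = frag[2:].split(',')
    start = float(parts[0]) if parts[0] else 0.0
    end = float(parts[1]) if len(parts) > 1 else None
    return start, end
```

    A citation of an individual segment is then just the video's persistent URL plus the fragment, so standard web infrastructure resolves it.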

  18. Alignment and Integration of Lightweight Mirror Segments

    NASA Technical Reports Server (NTRS)

    Evans, Tyler; Biskach, Michael; Mazzarella, Jim; McClelland, Ryan; Saha, Timo; Zhang, Will; Chan, Kai-Wing

    2011-01-01

    The optics for the International X-Ray Observatory (IXO) require alignment and integration of about fourteen thousand thin mirror segments to achieve the mission goal of 3.0 square meters of effective area at 1.25 keV with an angular resolution of five arc-seconds. These mirror segments are 0.4 mm thick and 200 to 400 mm in size, which makes it difficult not to impart distortion at the sub-arc-second level. This paper outlines the precise alignment, permanent bonding, and verification testing techniques developed at NASA's Goddard Space Flight Center (GSFC). Improvements in alignment include new hardware and automation software. Improvements in bonding include two new module simulators to bond mirrors into: a glass housing for proving single-pair bonding, and a Kovar module for bonding multiple pairs of mirrors. Three separate bonding trials were x-ray tested, producing results that met the requirement of sub-ten-arc-second alignment. This paper will highlight these recent advances in alignment, testing, and bonding techniques and the exciting developments in thin x-ray optic technology development.

  19. Video File - NASA on a Roll Testing Space Launch System Flight Engines

    NASA Image and Video Library

    2017-08-09

    Just two weeks after conducting another in a series of tests on new RS-25 rocket engine flight controllers for NASA’s Space Launch System (SLS) rocket, engineers at NASA’s Stennis Space Center in Mississippi completed one more hot-fire test of a flight controller on August 9, 2017. With the hot fire, NASA has moved a step closer in completing testing on the four RS-25 engines which will power the first integrated flight of the SLS rocket and Orion capsule known as Exploration Mission 1.

  20. James Webb Space Telescope in NASA's giant thermal vacuum chamber

    NASA Image and Video Library

    2015-04-20

    Inside NASA's giant thermal vacuum chamber, called Chamber A, at NASA's Johnson Space Center in Houston, the James Webb Space Telescope's Pathfinder backplane test model is being prepared for its cryogenic test. Previously used for manned spaceflight missions, this historic chamber is now filled with engineers and technicians preparing for a crucial test. "The optical test equipment was developed and installed in the chamber by Exelis," said Thomas Scorse, Exelis JWST Program Manager. "The Pathfinder telescope gives us our first opportunity for an end-to-end checkout of our equipment." "This will be the first time on the program that we will be aligning two primary mirror segments together," said Lee Feinberg, NASA Optical Telescope Element Manager. "In the past, we have always tested one mirror at a time, but this time we will use a single test system and align both mirrors to it as though they are a single monolithic mirror." The James Webb Space Telescope is the scientific successor to NASA's Hubble Space Telescope. It will be the most powerful space telescope ever built. Webb is an international project led by NASA with its partners, the European Space Agency and the Canadian Space Agency. Image credit: NASA/Chris Gunn Text credit: Laura Betz, NASA's Goddard Space Flight Center, Greenbelt, Maryland. NASA Goddard Space Flight Center enables NASA's mission through four scientific endeavors: Earth Science, Heliophysics, Solar System Exploration, and Astrophysics. Goddard plays a leading role in NASA's accomplishments by contributing compelling scientific knowledge to advance the Agency's mission.

  1. Left Transsylvian Transcisternal and Transinferior Insular Sulcus Approach for Resection of Uncohippocampal Tumor: 3-Dimensional Operative Video.

    PubMed

    Fernandez-Miranda, Juan C

    2018-06-07

    The medial temporal lobe can be divided into anterior, middle, and posterior segments. The anterior segment is formed by the uncus and hippocampal head, and it has extra- and intraventricular structures. There are 2 main approaches to the uncohippocampal region: the anteromedial temporal lobectomy (Spencer's technique) and the transsylvian selective amygdalohippocampectomy (Yasargil's technique). In this video, we present the case of a 29-yr-old man with new onset of generalized seizures and a contrast-enhancing lesion in the left anterior segment of the medial temporal lobe compatible with high-grade glioma. He had a medical history of cervical astrocytoma at age 8 requiring craniospinal radiation therapy and ventriculoperitoneal shunt placement. The tumor was approached using a combined transsylvian transcisternal and transinferior insular sulcus approach to the extra- and intraventricular aspects of the uncohippocampal region. It was resected completely, and the patient was neurologically intact after resection with no further seizures at 6-mo follow-up. The diagnosis was glioblastoma IDH-wild type, for which he underwent adjuvant therapy. Surgical anatomy and technical nuances of this approach are illustrated using a 3-dimensional video and anatomic dissections. The selective approach, when compared to an anteromedial temporal lobectomy, has the advantage of preserving the anterolateral temporal cortex, which is particularly relevant in dominant-hemisphere lesions, and the related fiber tracts, including the inferior fronto-occipital and inferior longitudinal fascicles, and most of the optic radiation fibers. 
The transsylvian approach, however, is technically and anatomically more challenging and potentially carries a higher risk of vascular injury and vasospasm.Page 1 and figures from Fernández-Miranda JC et al, Microvascular Anatomy of the Medial Temporal Region: Part 1: Its Application to Arteriovenous Malformation Surgery, Operative Neurosurgery, 2010, Volume 67

  2. Data Analysis Measurement: Having a Solar Blast! NASA Connect: Program 7 in the 2001-2002 Video Series. [Videotape].

    ERIC Educational Resources Information Center

    National Aeronautics and Space Administration, Hampton, VA. Langley Research Center.

    NASA Connect is an interdisciplinary, instructional distance learning program targeting students in grades 6-8. This videotape explains how engineers and researchers at the National Aeronautics and Space Administration (NASA) use data analysis and measurement to predict solar storms, anticipate how they will affect the Earth, and improve…

  3. Status of NASA's Space Launch System

    NASA Technical Reports Server (NTRS)

    Honeycutt, John; Lyles, Garry

    2016-01-01

    NASA's Space Launch System (SLS) continued to make significant progress in 2015 and 2016, completing hardware and testing that brings NASA closer to a new era of deep space exploration. Programmatically, SLS completed Critical Design Review (CDR) in 2015. A team of independent reviewers concluded that the vehicle design is technically and programmatically ready to move to Design Certification Review (DCR) and launch readiness in 2018. Just five years after program start, every major element has amassed development and flight hardware and completed key tests that will lead to an accelerated pace of manufacturing and testing in 2016 and 2017. Key to SLS' rapid progress has been the use of existing technologies adapted to the new launch vehicle. The existing fleet of RS-25 engines is undergoing adaptation tests to prove it can meet SLS requirements and environments with minimal change. The four-segment shuttle-era booster has been modified and updated with a fifth propellant segment, new insulation, and new avionics. The Interim Cryogenic Upper Stage is a modified version of an existing upper stage. The first Block I SLS configuration will launch a minimum of 70 metric tons (t) of payload to low Earth orbit (LEO). The vehicle architecture has a clear evolutionary path to more than 100t and, ultimately, to 130t. Among the program's major 2015-2016 accomplishments were two booster qualification hotfire tests, a series of RS-25 adaptation hotfire tests, manufacturing of most of the major components for both core stage test articles and first flight tank, delivery of the Pegasus core stage barge, and the upper stage simulator. Renovations to the B-2 test stand for stage green run testing were completed at NASA Stennis Space Center. 
This year will see the completion of welding for all qualification and flight EM-1 core stage components and testing of flight avionics, completion of core stage structural test stands, casting of the EM-1 solid rocket motors, additional testing

  4. Status of NASA's Space Launch System

    NASA Technical Reports Server (NTRS)

    Honeycutt, John; Cook, Jerry; Lyles, Garry

    2016-01-01

    NASA's Space Launch System (SLS) continued to make significant progress in 2015, completing hardware and testing that brings NASA closer to a new era of deep space exploration. The most significant program milestone of the year was completion of Critical Design Review (CDR). A team of independent reviewers concluded that the vehicle design is technically and programmatically ready to move to Design Certification Review (DCR) and launch readiness in 2018. Just four years after program start, every major element has amassed development and flight hardware and completed key tests that will set the stage for a growing schedule of manufacturing and testing in 2016. Key to SLS' rapid progress has been the use of existing technologies adapted to the new launch vehicle. The space shuttle-heritage RS-25 engine is undergoing adaptation tests to prove it can meet SLS requirements and environments with minimal change. The four-segment shuttle-era booster has been modified and updated with an additional propellant segment, new insulation, and new avionics. The Interim Cryogenic Upper Stage is a modified version of an existing upper stage. The first Block I SLS configuration will launch a minimum of 70 metric tons of payload to low Earth orbit (LEO). The vehicle architecture has a clear evolutionary path to more than 100 metric tons and, ultimately, to 130 metric tons. Among the program's major accomplishments in 2015 were the first booster qualification hotfire test, a series of seven RS-25 adaptation hotfire tests, manufacturing of most of the major components for both core stage test articles and first flight tank, delivery of the Pegasus core stage barge, and the upper stage simulator. Renovations to the B-2 test stand for stage green run testing were completed at NASA Stennis Space Center. This year will see the second booster qualification motor hotfire, flight and additional development RS-25 engine tests, and completion of core stage test articles and test stands and

  5. Status of NASA's Space Launch System

    NASA Technical Reports Server (NTRS)

    Lyles, Garry

    2016-01-01

    NASA's Space Launch System (SLS) continued to make significant progress in 2015, completing hardware and testing that brings NASA closer to a new era of deep space exploration. The most significant program milestone of the year was completion of Critical Design Review (CDR). A team of independent reviewers concluded that the vehicle design is technically and programmatically ready to move to Design Certification Review (DCR) and launch readiness in 2018. Just four years after program start, every major element has amassed development and flight hardware and completed key tests that will set the stage for a growing schedule of manufacturing and testing in 2016. Key to SLS' rapid progress has been the use of existing technologies adapted to the new launch vehicle. The space shuttle-heritage RS-25 engine is undergoing adaptation tests to prove it can meet SLS requirements and environments with minimal change. The four-segment shuttle-era booster has been modified and updated with an additional propellant segment, new insulation, and new avionics. The Interim Cryogenic Upper Stage is a modified version of an existing upper stage. The first Block I SLS configuration will launch a minimum of 70 metric tons (t) of payload to low Earth orbit (LEO). The vehicle architecture has a clear evolutionary path to more than 100t and, ultimately, to 130t. Among the program's major accomplishments in 2015 were the first booster qualification hotfire test, a series of seven RS-25 adaptation hotfire tests, manufacturing of most of the major components for both core stage test articles and first flight tank, delivery of the Pegasus core stage barge, and the upper stage simulator. Renovations to the B-2 test stand for stage green run testing were completed at NASA Stennis Space Center. 
This year will see the second booster qualification motor hotfire, flight and additional development RS-25 engine tests, and completion of core stage test articles and test stands and several flight article

  6. An adaptive enhancement algorithm for infrared video based on modified k-means clustering

    NASA Astrophysics Data System (ADS)

    Zhang, Linze; Wang, Jingqi; Wu, Wen

    2016-09-01

    In this paper, we propose a video enhancement algorithm to improve the output of an infrared camera. Video obtained by an infrared camera is sometimes very dark when there is no clear target; in this case, the infrared video is divided into frame images by frame extraction so that image enhancement can be carried out. The first frame image is divided into k sub-images by K-means clustering according to the gray interval each cluster occupies, and the histogram of each sub-image is then equalized according to the amount of information it contains; a method is introduced to prevent the final cluster centers from converging too close to each other, as happens in some cases. For subsequent frame images, the initial cluster centers are determined by the final cluster centers of the previous frame, and histogram equalization of each sub-image is carried out after K-means-based segmentation. Histogram equalization stretches the gray values of the image over the whole gray-level range, and the gray-level range of each sub-image is determined by its pixel ratio within the frame. Experimental results show that the algorithm improves the contrast of infrared video of dim scenes where a night target is not obvious, and adaptively reduces, within a certain range, the negative effect of overexposed pixels.
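    The per-frame scheme the abstract describes, a K-means split of the gray range followed by per-cluster equalization into an output range proportional to each cluster's pixel share, can be sketched as follows. This is a simplified illustration, not the authors' implementation: the 8-bit gray range, the evenly spaced center initialization (a crude stand-in for their center-separation method), and the rank-based equalization are all assumptions.

```python
import numpy as np

def kmeans_1d(gray_values, k, iters=20):
    """K-means on pixel gray levels (1-D), with centers initialized evenly
    over the occupied gray interval to keep them separated."""
    lo, hi = gray_values.min(), gray_values.max()
    centers = np.linspace(lo, hi, k).astype(float)
    for _ in range(iters):
        labels = np.argmin(np.abs(gray_values[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            sel = gray_values[labels == j]
            if sel.size:
                centers[j] = sel.mean()
    return labels, centers

def enhance_frame(frame, k=3):
    """Split the frame into k gray-level clusters, then equalize each
    cluster into an output range proportional to its pixel share."""
    flat = frame.ravel().astype(float)
    labels, centers = kmeans_1d(flat, k)
    out, lo = np.zeros_like(flat), 0.0
    for j in np.argsort(centers):            # darkest cluster first
        sel = labels == j
        hi = lo + (sel.sum() / flat.size) * 255.0   # pixel ratio -> gray range
        v = flat[sel]
        ranks = np.argsort(np.argsort(v))    # CDF-style mapping (equalization)
        out[sel] = lo + (ranks / max(v.size - 1, 1)) * (hi - lo)
        lo = hi
    return out.reshape(frame.shape)
```

    For video, the final centers of one frame would seed `kmeans_1d` for the next frame, as the abstract describes, avoiding re-initialization cost.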

  7. Streaming Video--The Wave of the Video Future!

    ERIC Educational Resources Information Center

    Brown, Laura

    2004-01-01

    Videos and DVDs give teachers more flexibility than slide projectors, filmstrips, and 16mm films, but teachers and students are excited about a new technology called streaming. Streaming allows educators to view videos on demand via the Internet through the transfer of digital media, such as video and voice data, that is received…

  8. Galapagos Islands Flyby [HD Video]

    NASA Image and Video Library

    2010-03-26

    Completed: 07-16-2009 Straddling the equator approximately 1000 kilometers to the west of the South American mainland, the Galapagos Islands lie within the heart of the equatorial current system. Rising from the sea floor, the volcanic islands of the Galapagos are set on top of a large submarine platform. The main portion of the Galapagos platform is relatively flat and less than 1000 meters in depth. The steepest slopes are found along the western and southern flanks of the platform, with a gradual slope towards the east. The interactions of the Galapagos and the oceanic currents create vastly different environmental regimes, which not only isolate one part of the Archipelago from the other but also allow penguins to live along the equator on the western part of the Archipelago and tropical corals to grow around the islands to the north. The islands are relatively new in geologic terms, with the youngest islands in the west still exhibiting periodic eruptions from their massive volcanic craters. Please give credit for this item to: NASA/Goddard Space Flight Center, The SeaWiFS Project and GeoEye, Scientific Visualization Studio. NOTE: All SeaWiFS images and data presented on this web site are for research and educational use only. All commercial use of SeaWiFS data must be coordinated with GeoEye (http://www.geoeye.com). To download this video go to: svs.gsfc.nasa.gov/goto?3628 NASA Goddard Space Flight Center is home to the nation's largest organization of combined scientists, engineers and technologists that build spacecraft, instruments and new technology to study the Earth, the sun, our solar system, and the universe.

  9. Video Guidance Sensor and Time-of-Flight Rangefinder

    NASA Technical Reports Server (NTRS)

    Bryan, Thomas; Howard, Richard; Bell, Joseph L.; Roe, Fred D.; Book, Michael L.

    2007-01-01

    A proposed video guidance sensor (VGS) would be based mostly on the hardware and software of a prior Advanced VGS (AVGS), with some additions to enable it to function as a time-of-flight rangefinder (in contradistinction to a triangulation or image-processing rangefinder). It would typically be used at distances of the order of 2 or 3 kilometers, where a typical target would appear in a video image as a single blob, making it possible to extract the direction to the target (but not the orientation of the target or the distance to the target) from a video image of light reflected from the target. As described in several previous NASA Tech Briefs articles, an AVGS system is an optoelectronic system that provides guidance for automated docking of two vehicles. In the original application, the two vehicles are spacecraft, but the basic principles of design and operation of the system are applicable to aircraft, robots, objects maneuvered by cranes, or other objects that may be required to be aligned and brought together automatically or under remote control. In a prior AVGS system of the type upon which the now-proposed VGS is largely based, the tracked vehicle is equipped with one or more passive targets that reflect light from one or more continuous-wave laser diode(s) on the tracking vehicle, a video camera on the tracking vehicle acquires images of the targets in the reflected laser light, the video images are digitized, and the image data are processed to obtain the direction to the target. The design concept of the proposed VGS does not call for any memory or processor hardware beyond that already present in the prior AVGS, but does call for some additional hardware and some additional software. It also calls for assignment of some additional tasks to two subsystems that are parts of the prior VGS: a field-programmable gate array (FPGA) that generates timing and control signals, and a digital signal processor (DSP) that processes the digitized video images. 
The
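    The time-of-flight principle underlying the proposed rangefinder reduces to a simple relation: the laser pulse travels to the target and back, so the range is the speed of light times half the measured round-trip time. A minimal sketch (the function name is illustrative; the actual VGS timing hardware is described only at the block level above):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def range_from_time_of_flight(round_trip_seconds):
    """Range to target: light covers the distance out and back,
    so divide the round-trip path by two."""
    return C * round_trip_seconds / 2.0
```

    At the quoted operating distances of 2 or 3 km, the round trip is only 13 to 20 microseconds, which is why the FPGA's precise timing signals matter.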

  10. Image Analysis via Fuzzy-Reasoning Approach: Prototype Applications at NASA

    NASA Technical Reports Server (NTRS)

    Dominguez, Jesus A.; Klinko, Steven J.

    2004-01-01

    A set of imaging techniques based on a Fuzzy Reasoning (FR) approach was built for NASA at Kennedy Space Center (KSC) to perform complex real-time visual safety prototype tasks, such as detection and tracking of moving Foreign Object Debris (FOD) during NASA Space Shuttle liftoff and visual anomaly detection on slidewires used in the emergency egress system for the Space Shuttle at the launch pad. The system has also shown promise in enhancing X-ray images used to screen hard-covered items, leading to better visualization. The system's capability was also used during the imaging analysis of the Space Shuttle Columbia accident. These FR-based imaging techniques include novel proprietary adaptive image segmentation, image edge extraction, and image enhancement. A Probabilistic Neural Network (PNN) scheme, available from the NeuroShell(TM) Classifier and optimized via a Genetic Algorithm (GA), was also used along with this set of novel imaging techniques to add powerful learning and image classification capabilities. Prototype applications built using these techniques have received NASA Space Awards, including a Board Action Award, and are currently being filed for patents by NASA; they are being offered for commercialization through the Research Triangle Institute (RTI), an internationally recognized corporation in scientific research and technology development. Companies from different fields, including security, medical, text digitization, and aerospace, are currently in the process of licensing these technologies from NASA.
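    The adaptive segmentation techniques above are proprietary and not disclosed in the abstract. As a generic illustration of fuzzy-reasoning-style segmentation, the sketch below runs fuzzy c-means on gray levels, where each pixel receives a graded membership in every cluster rather than a hard label; all parameter choices and the quantile initialization are assumptions.

```python
import numpy as np

def fuzzy_cmeans_gray(pixels, c=2, m=2.0, iters=30):
    """Fuzzy c-means on 1-D gray values: alternate membership and
    center updates until the soft partition stabilizes."""
    # Initialize centers at spread quantiles of the gray distribution.
    centers = np.quantile(pixels, [(i + 0.5) / c for i in range(c)])
    for _ in range(iters):
        d = np.abs(pixels[None, :] - centers[:, None]) + 1e-9
        w = d ** (-2.0 / (m - 1.0))
        u = w / w.sum(axis=0)         # memberships sum to 1 per pixel
        um = u ** m
        centers = um @ pixels / um.sum(axis=1)
    return u, centers
```

    A hard segmentation, when needed, is just the arg-max over each pixel's membership vector, but the graded memberships are what downstream fuzzy rules would reason over.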

  11. Towards a Video Passive Content Fingerprinting Method for Partial-Copy Detection Robust against Non-Simulated Attacks

    PubMed Central

    2016-01-01

    Passive content fingerprinting is widely used for video content identification and monitoring. However, many challenges remain unsolved, especially for partial-copy detection. The main challenge is to find the right balance between the computational cost of fingerprint extraction and fingerprint dimension, without compromising detection performance against various attacks (robustness). Fast video detection performance is desirable in several modern applications, for instance, in those where video detection involves the use of large video databases or in applications requiring real-time video detection of partial copies, a process whose difficulty increases when videos suffer severe transformations. In this context, conventional fingerprinting methods are not fully suitable to cope with the attacks and transformations mentioned before, either because the robustness of these methods is not enough or because their execution time is very high, where the time bottleneck is commonly found in the fingerprint extraction and matching operations. Motivated by these issues, in this work we propose a content fingerprinting method based on the extraction of a set of independent binary global and local fingerprints. Although these features are robust against common video transformations, their combination is more discriminant against severe video transformations such as signal processing attacks, geometric transformations, and temporal and spatial desynchronization. Additionally, we use an efficient multilevel filtering system that accelerates fingerprint extraction and matching. This multilevel filtering system helps to rapidly identify potentially similar video copies, on which alone the full fingerprint comparison is carried out, thus saving computational time. We tested with datasets of real copied videos, and the results show how our method outperforms state-of-the-art methods regarding detection scores. Furthermore, the granularity of our method makes it suitable for
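    The multilevel filtering idea, a cheap global-fingerprint pass that prunes the database before the costlier local-fingerprint comparison, can be sketched as follows. The dictionary-based database layout, budget, and threshold are assumptions for illustration; the paper's actual fingerprints and filtering levels are more elaborate.

```python
import numpy as np

def hamming(a, b):
    """Hamming distance between equal-length binary fingerprints."""
    return int(np.count_nonzero(a != b))

def search(query_global, query_local, db, coarse_budget=5, tau=4):
    """Two-level filtering: rank database entries by the cheap global
    fingerprint, then verify only the closest coarse_budget candidates
    with the more expensive local fingerprint."""
    coarse = sorted(db, key=lambda e: hamming(query_global, e['global']))
    return [e['id'] for e in coarse[:coarse_budget]
            if hamming(query_local, e['local']) <= tau]
```

    Because the coarse pass touches only short global codes, the expensive local matching runs on a handful of candidates instead of the whole database, which is where the reported speedup comes from.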

  12. Picturing Video

    NASA Technical Reports Server (NTRS)

    2000-01-01

    Video Pics is a software program that generates high-quality photos from video. The software was developed under an SBIR contract with Marshall Space Flight Center by Redhawk Vision, Inc.--a subsidiary of Irvine Sensors Corporation. Video Pics takes information content from multiple frames of video and enhances the resolution of a selected frame. The resulting image has enhanced sharpness and clarity like that of a 35 mm photo. The images are generated as digital files and are compatible with image editing software.
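    Redhawk Vision's actual algorithm is proprietary; as a generic illustration of the principle of combining information from multiple video frames, the naive shift-and-add sketch below accumulates subpixel-shifted frames onto a finer grid. The function name, known-shift assumption, and 2x scale are all illustrative.

```python
import numpy as np

def shift_and_add(frames, shifts, scale=2):
    """Naive multi-frame shift-and-add: frames is a list of HxW arrays,
    shifts a list of (dy, dx) subpixel offsets relative to the first
    frame. Samples are accumulated on a scale-times-finer grid and
    averaged wherever at least one sample lands."""
    h, w = frames[0].shape
    acc = np.zeros((h * scale, w * scale))
    cnt = np.zeros_like(acc)
    ys, xs = np.mgrid[0:h, 0:w]
    for f, (dy, dx) in zip(frames, shifts):
        yy = np.clip(np.round((ys + dy) * scale).astype(int), 0, h * scale - 1)
        xx = np.clip(np.round((xs + dx) * scale).astype(int), 0, w * scale - 1)
        np.add.at(acc, (yy, xx), f)
        np.add.at(cnt, (yy, xx), 1)
    return np.where(cnt > 0, acc / np.maximum(cnt, 1), 0.0)
```

    Real systems must also estimate the inter-frame motion and fill grid cells that receive no samples; the sketch assumes the shifts are known and cover the grid.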

  13. The Video Interaction Guidance approach applied to teaching communication skills in dentistry.

    PubMed

    Quinn, S; Herron, D; Menzies, R; Scott, L; Black, R; Zhou, Y; Waller, A; Humphris, G; Freeman, R

    2016-05-01

    To examine dentists' views of a novel video review technique to improve communication skills in complex clinical situations. Dentists (n = 3) participated in a video review known as Video Interaction Guidance to encourage more attuned interactions with their patients (n = 4). Part of this process is to identify where dentists and patients reacted positively and effectively. Each dentist was presented with short segments of video footage taken during an appointment with a patient with intellectual disabilities and communication difficulties. Having observed their interactions with patients, dentists were asked to reflect on their communication strategies with the assistance of a trained VIG specialist. Dentists reflected that their VIG session had been insightful and considered the review process as beneficial to communication skills training in dentistry. They believed that this technique could significantly improve the way dentists interact and communicate with patients. The VIG sessions increased their awareness of the communication strategies they use with their patients and were perceived as neither uncomfortable nor threatening. The VIG session was beneficial in this exploratory investigation because the dentists could identify when their interactions were most effective. Awareness of their non-verbal communication strategies and the need to adopt these behaviours frequently were identified as key benefits of this training approach. One dentist suggested that the video review method was supportive because it was undertaken by a behavioural scientist rather than a professional counterpart. Some evidence supports the VIG approach in this specialist area of communication skills and dental training. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  14. Joint modality fusion and temporal context exploitation for semantic video analysis

    NASA Astrophysics Data System (ADS)

    Papadopoulos, Georgios Th; Mezaris, Vasileios; Kompatsiaris, Ioannis; Strintzis, Michael G.

    2011-12-01

    In this paper, a multi-modal context-aware approach to semantic video analysis is presented. Overall, the examined video sequence is initially segmented into shots and for every resulting shot appropriate color, motion and audio features are extracted. Then, Hidden Markov Models (HMMs) are employed for performing an initial association of each shot with the semantic classes that are of interest separately for each modality. Subsequently, a graphical modeling-based approach is proposed for jointly performing modality fusion and temporal context exploitation. Novelties of this work include the combined use of contextual information and multi-modal fusion, and the development of a new representation for providing motion distribution information to HMMs. Specifically, an integrated Bayesian Network is introduced for simultaneously performing information fusion of the individual modality analysis results and exploitation of temporal context, contrary to the usual practice of performing each task separately. Contextual information is in the form of temporal relations among the supported classes. Additionally, a new computationally efficient method for providing motion energy distribution-related information to HMMs, which supports the incorporation of motion characteristics from previous frames to the currently examined one, is presented. The final outcome of this overall video analysis framework is the association of a semantic class with every shot. Experimental results as well as comparative evaluation from the application of the proposed approach to four datasets belonging to the domains of tennis, news and volleyball broadcast video are presented.
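    The paper's integrated Bayesian Network is not specified in detail in the abstract. As a crude stand-in that shows the two ingredients being combined rather than handled separately, the sketch below fuses per-modality, per-shot class posteriors by a naive-Bayes-style product and propagates a temporal prior across shots with a class-transition matrix; all array shapes and the greedy forward pass are assumptions.

```python
import numpy as np

def fuse_and_smooth(modality_probs, transition, prior):
    """modality_probs: list of (n_shots x n_classes) posteriors, one per
    modality (e.g. color, motion, audio HMM outputs). transition: class
    transition matrix encoding temporal relations. prior: initial class
    distribution. Returns one class label per shot."""
    n_shots, n_classes = modality_probs[0].shape
    fused = np.ones((n_shots, n_classes))
    for p in modality_probs:              # naive-Bayes style product fusion
        fused *= p
    fused /= fused.sum(axis=1, keepdims=True)
    belief, labels = prior.astype(float).copy(), []
    for t in range(n_shots):
        post = belief * fused[t]          # combine temporal context + evidence
        post /= post.sum()
        labels.append(int(np.argmax(post)))
        belief = transition.T @ post      # propagate context to the next shot
    return labels
```

    A full Bayesian Network would infer all shots jointly (e.g. by belief propagation) instead of this greedy left-to-right pass, but the coupling of fusion and temporal context is the same idea.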

  15. Energy efficient engine pin fin and ceramic composite segmented liner combustor sector rig test report

    NASA Technical Reports Server (NTRS)

    Dubiel, D. J.; Lohmann, R. P.; Tanrikut, S.; Morris, P. M.

    1986-01-01

    Under the NASA-sponsored Energy Efficient Engine program, Pratt & Whitney has successfully completed a comprehensive test program using a 90-degree sector combustor rig that featured an advanced two-stage combustor with a succession of advanced segmented liners. Building on the successful characteristics of the first-generation counter-parallel Finwall cooled segmented liner, design features of an improved-performance metallic segmented liner were substantiated through representative high-pressure and high-temperature testing in a combustor atmosphere. This second-generation liner was substantially lighter and lower in cost than the predecessor configuration. The final test in this series provided an evaluation of ceramic composite liner segments in a representative combustor environment. It was demonstrated that the unique properties of ceramic composites (low density, high fracture toughness, and thermal fatigue resistance) can be advantageously exploited in high-temperature components. Overall, this combustor sector rig test program has provided a firm basis for the design of advanced combustor liners.

  16. The LivePhoto Physics videos and video analysis site

    NASA Astrophysics Data System (ADS)

    Abbott, David

    2009-09-01

    The LivePhoto site is essentially an archive of short films for video analysis. Some videos have Flash analysis tools embedded in the movie. Most of the videos address mechanics topics, with titles like Rolling Pencil (check this one out for pedagogy and content knowledge—nicely done!), Juggler, Yo-yo, and Puck and Bar (an inelastic collision with rotation), but there are a few titles in other areas (E&M, waves, thermo, etc.).

  17. Video consultation use by Australian general practitioners: video vignette study.

    PubMed

    Jiwa, Moyez; Meng, Xingqiong

    2013-06-19

    There is unequal access to health care in Australia, particularly for the one-third of the population living in remote and rural areas. Video consultations delivered via the Internet present an opportunity to provide medical services to those who are underserviced, but this is not currently routine practice in Australia. There are advantages and shortcomings to using video consultations for diagnosis, and general practitioners (GPs) have varying opinions regarding their efficacy. The aim of this Internet-based study was to explore the attitudes of Australian GPs toward video consultation by using a range of patient scenarios presenting different clinical problems. Overall, 102 GPs were invited to view 6 video vignettes featuring patients presenting with acute and chronic illnesses. For each vignette, they were asked to offer a differential diagnosis and to complete a survey based on the theory of planned behavior documenting their views on the value of a video consultation. A total of 47 GPs participated in the study. The participants were younger than Australian GPs based on national data, and more likely to be working in a larger practice. Most participants (72%-100%) agreed on the differential diagnosis in all video scenarios. Approximately one-third of the study participants were positive about video consultations, one-third were ambivalent, and one-third were against them. In all, 91% opposed conducting a video consultation for the patient with symptoms of an acute myocardial infarction. Inability to examine the patient was most frequently cited as the reason for not conducting a video consultation. Australian GPs who were favorably inclined toward video consultations were more likely to work in larger practices, and were more established GPs, especially in rural areas. The survey results also suggest that the deployment of video technology will need to focus on follow-up consultations. Patients with minor self-limiting illnesses and those with medical

  18. Ranking Highlights in Personal Videos by Analyzing Edited Videos.

    PubMed

    Sun, Min; Farhadi, Ali; Chen, Tseng-Hung; Seitz, Steve

    2016-11-01

    We present a fully automatic system for ranking domain-specific highlights in unconstrained personal videos by analyzing online edited videos. A novel latent linear ranking model is proposed to handle noisy training data harvested online. Specifically, given a targeted domain such as "surfing," our system mines the YouTube database to find pairs of raw videos and their corresponding edited videos. Leveraging the assumption that an edited video is more likely to contain highlights than the trimmed parts of the raw video, we obtain pair-wise ranking constraints to train our model. The learning task is challenging due to the amount of noise and variation in the mined data. Hence, a latent loss function is incorporated to mitigate the issues caused by the noise. We efficiently learn the latent model on a large number of videos (about 870 min in total) using a novel EM-like procedure. Our latent ranking model outperforms its classification counterpart and is competitive with a fully supervised ranking system that requires labels from Amazon Mechanical Turk. We further show that a state-of-the-art audio feature (mel-frequency cepstral coefficients) is inferior to a state-of-the-art visual feature. By combining both audio-visual features, we obtain the best performance in the dog activity, surfing, skating, and viral video domains. Finally, we show that impressive highlights can be detected without additional human supervision for seven domains (i.e., skating, surfing, skiing, gymnastics, parkour, dog activity, and viral video) in unconstrained personal videos.
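    The pair-wise ranking constraint described above (segments kept in the edited video should score above segments trimmed from the raw video) can be sketched as a linear hinge-loss ranker. This is a simplified illustration without the paper's latent-variable handling of label noise; the function names and toy data are hypothetical.

```python
import numpy as np

def train_pairwise_ranker(pos, neg, epochs=200, lr=0.1, margin=1.0, seed=0):
    """Learn weights w so that score(pos_i) > score(neg_i) + margin per pair.

    pos, neg: (n_pairs, dim) arrays; row i of `pos` is a feature vector from
    a segment kept in the edited video, row i of `neg` from a segment trimmed
    out of the corresponding raw video.
    """
    rng = np.random.default_rng(seed)
    w = rng.normal(scale=0.01, size=pos.shape[1])
    for _ in range(epochs):
        viol = (pos - neg) @ w < margin   # pairs violating the ranking constraint
        if not viol.any():
            break                         # all margin constraints satisfied
        w += lr * (pos[viol] - neg[viol]).mean(axis=0)  # hinge-loss gradient step
    return w

def score(w, x):
    """Linear highlight score for feature vectors x."""
    return x @ w
```

    At test time, segments of a new raw video are scored with the learned weights and the top-scoring segments are returned as highlight candidates.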

  19. Potential Astrophysics Science Missions Enabled by NASA's Planned Ares V

    NASA Technical Reports Server (NTRS)

    Stahl, H. Philip; Thronson, Harley; Langhoff, Stepheni; Postman, Marc; Lester, Daniel; Lillie, Chuck

    2009-01-01

    NASA's planned Ares V cargo vehicle with its 10 meter diameter fairing and 60,000 kg payload mass to L2 offers the potential to launch entirely new classes of space science missions such as 8-meter monolithic aperture telescopes, 12-meter aperture x-ray telescopes, 16 to 24 meter segmented telescopes and highly capable outer planet missions. The paper will summarize the current Ares V baseline performance capabilities and review potential mission concepts enabled by these capabilities.

  20. Objective video presentation QoE predictor for smart adaptive video streaming

    NASA Astrophysics Data System (ADS)

    Wang, Zhou; Zeng, Kai; Rehman, Abdul; Yeganeh, Hojatollah; Wang, Shiqi

    2015-09-01

    How to deliver videos to consumers over the network for optimal quality-of-experience (QoE) has been the central goal of modern video delivery services. Surprisingly, despite the large volume of videos delivered every day through various systems attempting to improve visual QoE, the actual QoE of end consumers is not properly assessed, let alone used as the key factor in making critical decisions at the video hosting, network, and receiving sites. Real-world video streaming systems typically use bitrate as the main video presentation quality indicator, but using the same bitrate to encode different video content can result in drastically different visual QoE, which is further affected by the display device and viewing conditions of each individual consumer who receives the video. To correct this, we have to put QoE back in the driver's seat and redesign video delivery systems accordingly. To achieve this goal, a major challenge is to find an objective video presentation QoE predictor that is accurate, fast, easy to use, display-device adaptive, and able to provide meaningful QoE predictions across resolutions and content. We propose the newly developed SSIMplus index (https://ece.uwaterloo.ca/~z70wang/research/ssimplus/) for this role. We demonstrate that, based on SSIMplus, one can develop a smart adaptive video streaming strategy that leads to much smoother visual QoE than is possible with existing adaptive bitrate video streaming approaches. Furthermore, SSIMplus finds many more applications: in live and file-based quality monitoring, in benchmarking video encoders and transcoders, and in guiding network resource allocation.
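    The contrast the abstract draws between bitrate-driven and quality-driven streaming can be sketched as a minimal rendition selector: instead of simply picking the highest bitrate that fits the estimated throughput, pick the rendition with the highest predicted per-segment quality score (an SSIMplus-style prediction) among those that fit. The function name, score scale, and safety margin are illustrative assumptions, not the SSIMplus API.

```python
def select_rendition(quality, bitrate_kbps, throughput_kbps, safety=0.8):
    """Pick the rendition with the highest predicted quality score whose
    bitrate fits within a safety margin of the estimated throughput.

    quality: per-rendition quality predictions for the next segment
        (e.g. SSIMplus-style scores on a 0-100 scale);
    bitrate_kbps: encoded bitrate of each rendition;
    Returns the chosen rendition index (lowest bitrate as a fallback).
    """
    budget = throughput_kbps * safety
    feasible = [i for i, b in enumerate(bitrate_kbps) if b <= budget]
    if not feasible:
        # no rendition fits: degrade gracefully to the cheapest one
        return min(range(len(bitrate_kbps)), key=lambda i: bitrate_kbps[i])
    return max(feasible, key=lambda i: quality[i])
```

    Because quality predictions vary with content while bitrate does not, this selector can skip a rendition whose extra bits buy little perceptual gain, which is exactly the smoothing effect the abstract attributes to quality-driven adaptation.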