Sample records for video system operator

  1. 47 CFR 76.1501 - Qualifications to be an open video system operator.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 47 Telecommunication 4 2011-10-01 2011-10-01 false Qualifications to be an open video system... RADIO SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Open Video Systems § 76.1501 Qualifications to be an open video system operator. Any person may obtain a certification to operate an open...

  2. 47 CFR 76.1501 - Qualifications to be an open video system operator.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 47 Telecommunication 4 2014-10-01 2014-10-01 false Qualifications to be an open video system... RADIO SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Open Video Systems § 76.1501 Qualifications to be an open video system operator. Any person may obtain a certification to operate an open...

  3. 47 CFR 76.1501 - Qualifications to be an open video system operator.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 47 Telecommunication 4 2012-10-01 2012-10-01 false Qualifications to be an open video system... RADIO SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Open Video Systems § 76.1501 Qualifications to be an open video system operator. Any person may obtain a certification to operate an open...

  4. 47 CFR 76.1501 - Qualifications to be an open video system operator.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 4 2010-10-01 2010-10-01 false Qualifications to be an open video system... RADIO SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Open Video Systems § 76.1501 Qualifications to be an open video system operator. Any person may obtain a certification to operate an open...

  5. 47 CFR 76.1501 - Qualifications to be an open video system operator.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 47 Telecommunication 4 2013-10-01 2013-10-01 false Qualifications to be an open video system... RADIO SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Open Video Systems § 76.1501 Qualifications to be an open video system operator. Any person may obtain a certification to operate an open...

  6. 47 CFR 76.1511 - Fees.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) BROADCAST RADIO SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Open Video Systems § 76.1511 Fees. An open video system operator may be subject to the... open video system operator or its affiliates, including all revenues received from subscribers and all...

  7. 47 CFR 76.1511 - Fees.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) BROADCAST RADIO SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Open Video Systems § 76.1511 Fees. An open video system operator may be subject to the... open video system operator or its affiliates, including all revenues received from subscribers and all...

  8. 47 CFR 76.1511 - Fees.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) BROADCAST RADIO SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Open Video Systems § 76.1511 Fees. An open video system operator may be subject to the... open video system operator or its affiliates, including all revenues received from subscribers and all...

  9. 47 CFR 76.1511 - Fees.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) BROADCAST RADIO SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Open Video Systems § 76.1511 Fees. An open video system operator may be subject to the... open video system operator or its affiliates, including all revenues received from subscribers and all...

  10. 47 CFR 76.1511 - Fees.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) BROADCAST RADIO SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Open Video Systems § 76.1511 Fees. An open video system operator may be subject to the... open video system operator or its affiliates, including all revenues received from subscribers and all...

  11. 47 CFR 76.1505 - Public, educational and governmental access.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Open Video Systems § 76.1505 Public, educational and governmental access. (a) An open video system operator shall be subject to public, educational and... video system operator must ensure that all subscribers receive any public, educational and governmental...

  12. 47 CFR 76.1505 - Public, educational and governmental access.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Open Video Systems § 76.1505 Public, educational and governmental access. (a) An open video system operator shall be subject to public, educational and... video system operator must ensure that all subscribers receive any public, educational and governmental...

  13. 47 CFR 76.1505 - Public, educational and governmental access.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Open Video Systems § 76.1505 Public, educational and governmental access. (a) An open video system operator shall be subject to public, educational and... video system operator must ensure that all subscribers receive any public, educational and governmental...

  14. 47 CFR 76.1505 - Public, educational and governmental access.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Open Video Systems § 76.1505 Public, educational and governmental access. (a) An open video system operator shall be subject to public, educational and... video system operator must ensure that all subscribers receive any public, educational and governmental...

  15. 47 CFR 76.1505 - Public, educational and governmental access.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Open Video Systems § 76.1505 Public, educational and governmental access. (a) An open video system operator shall be subject to public, educational and... video system operator must ensure that all subscribers receive any public, educational and governmental...

  16. Test Operations Procedure (TOP) 03-2-827 Test Procedures for Video Target Scoring Using Calibration Lights

    DTIC Science & Technology

    2016-04-04

    Final 3. DATES COVERED (From - To) 4. TITLE AND SUBTITLE Test Operations Procedure (TOP) 03-2-827 Test Procedures for Video Target Scoring Using...ABSTRACT This Test Operations Procedure (TOP) describes typical equipment and procedures to setup and operate a Video Target Scoring System (VTSS) to...lights. 15. SUBJECT TERMS Video Target Scoring System, VTSS, witness screens, camera, target screen, light pole 16. SECURITY

  17. 47 CFR 76.1205 - CableCARD support.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) BROADCAST RADIO SERVICES MULTICHANNEL VIDEO AND... operate with multichannel video programming systems shall be provided by the system operator upon request in a timely manner. (b) A multichannel video programming provider that is subject to the requirements...

  18. Virtual Ultrasound Guidance for Inexperienced Operators

    NASA Technical Reports Server (NTRS)

    Caine, Timothy; Martin, David

    2012-01-01

    Medical ultrasound or echocardiographic studies are highly operator-dependent and generally require lengthy training and internship to perfect. To obtain quality echocardiographic images in remote environments, such as on-orbit, remote guidance of studies has been employed. This technique involves minimal training for the user, coupled with remote guidance from an expert. When real-time communication or expert guidance is not available, a more autonomous system for guiding an inexperienced operator through an ultrasound study is needed. One example would be missions beyond low Earth orbit, in which the time delay inherent in communication will make remote guidance impractical. The Virtual Ultrasound Guidance system is a combination of hardware and software. The hardware portion includes, but is not limited to, video glasses that allow hands-free, full-screen viewing. The glasses also allow the operator a substantial field of view below the glasses to view and operate the ultrasound system. The software is a comprehensive video program designed to guide an inexperienced operator through a detailed ultrasound or echocardiographic study without extensive training or guidance from the ground. The program contains a detailed description using video and audio to demonstrate equipment controls, ergonomics of scanning, study protocol, and scanning guidance, including recovery from sub-optimal images. The components used in the initial validation of the system include a third-generation Apple iPod Classic as the video source and Myvue video glasses. Initially, the program prompts the operator to power up the ultrasound and position the patient. The operator puts on the video glasses and attaches them to the video source. Once both devices and the ultrasound system are turned on, the audio-video guidance instructs the operator on patient positioning and scanning techniques. A detailed scanning protocol follows, with descriptions and reference video of each view along with advice on technique. The program also instructs the operator on the types of images to store and how to overcome pitfalls in scanning. Images can be forwarded to the ground or another site when convenient. Following study completion, the video glasses, video source, and ultrasound system are powered down and stored. Virtually any equipment that can play back video can be used to play the program, including a DVD player, a personal computer, and some MP3 players.

  19. Operation quality assessment model for video conference system

    NASA Astrophysics Data System (ADS)

    Du, Bangshi; Qi, Feng; Shao, Sujie; Wang, Ying; Li, Weijian

    2018-01-01

    The video conference system has become an important support platform for smart grid operation and management, and its operation quality is of growing concern to grid enterprises. First, an evaluation indicator system covering network, business, and operation-maintenance aspects was established on the basis of the video conference system's operation statistics. Then, an operation quality assessment model combining a genetic algorithm with a regularized BP neural network was proposed, which outputs the operation quality level of the system within a time period and provides company managers with optimization advice. The simulation results show that the proposed evaluation model offers faster convergence and higher prediction accuracy than a regularized BP neural network alone, and that its generalization ability is superior to that of LM-BP and Bayesian BP neural networks.
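
The record above pairs a genetic algorithm with a regularized BP (backpropagation) neural network; since the paper's data and architecture are not reproduced here, the following is only a minimal, self-contained sketch of the regularized-backpropagation half. The network shape, the three indicator inputs, and the synthetic quality rule are all invented for illustration and do not come from the paper.

```python
import math
import random

# Hedged sketch (NOT the paper's implementation): a one-hidden-layer
# network trained by backpropagation with L2 regularization, mapping
# three synthetic indicator scores (network, business, operation
# maintenance) to an operation-quality score in [0, 1].

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class TinyNet:
    def __init__(self, n_in=3, n_hid=4):
        self.w1 = [[random.uniform(-1, 1) for _ in range(n_in)]
                   for _ in range(n_hid)]
        self.b1 = [0.0] * n_hid
        self.w2 = [random.uniform(-1, 1) for _ in range(n_hid)]
        self.b2 = 0.0

    def forward(self, x):
        self.h = [sigmoid(sum(w * xi for w, xi in zip(row, x)) + b)
                  for row, b in zip(self.w1, self.b1)]
        self.y = sigmoid(sum(w * h for w, h in zip(self.w2, self.h))
                         + self.b2)
        return self.y

    def train_step(self, x, target, lr=0.5, lam=1e-3):
        y = self.forward(x)
        d_out = (y - target) * y * (1 - y)  # squared-error delta at output
        for j, h in enumerate(self.h):
            d_hid = d_out * self.w2[j] * h * (1 - h)
            # The lam terms are the L2 regularization: they shrink every
            # weight toward zero on each update.
            self.w2[j] -= lr * (d_out * h + lam * self.w2[j])
            for i, xi in enumerate(x):
                self.w1[j][i] -= lr * (d_hid * xi + lam * self.w1[j][i])
            self.b1[j] -= lr * d_hid
        self.b2 -= lr * d_out

# Synthetic rule: quality is "good" only when the three indicators
# are jointly high.
data = [([n / 4, b / 4, m / 4], 1.0 if n + b + m >= 9 else 0.0)
        for n in range(5) for b in range(5) for m in range(5)]

net = TinyNet()
for _ in range(1000):
    for x, t in data:
        net.train_step(x, t)

good = net.forward([1.0, 1.0, 1.0])   # all indicators high
poor = net.forward([0.0, 0.2, 0.1])   # degraded indicators
print(round(good, 2), round(poor, 2))
```

In the paper's scheme a genetic algorithm additionally searches for good initial weights before backpropagation refines them; that outer loop is omitted here for brevity.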

  20. 47 CFR 76.1712 - Open video system (OVS) requests for carriage.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 47 Telecommunication 4 2011-10-01 2011-10-01 false Open video system (OVS) requests for carriage... RADIO SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Documents to be Maintained for Inspection § 76.1712 Open video system (OVS) requests for carriage. An open video system operator shall maintain a...

  21. 47 CFR 76.1712 - Open video system (OVS) requests for carriage.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 4 2010-10-01 2010-10-01 false Open video system (OVS) requests for carriage... RADIO SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Documents to be Maintained for Inspection § 76.1712 Open video system (OVS) requests for carriage. An open video system operator shall maintain a...

  22. 47 CFR 76.1712 - Open video system (OVS) requests for carriage.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 47 Telecommunication 4 2014-10-01 2014-10-01 false Open video system (OVS) requests for carriage... RADIO SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Documents to be Maintained for Inspection § 76.1712 Open video system (OVS) requests for carriage. An open video system operator shall maintain a...

  23. 47 CFR 76.1712 - Open video system (OVS) requests for carriage.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 47 Telecommunication 4 2013-10-01 2013-10-01 false Open video system (OVS) requests for carriage... RADIO SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Documents to be Maintained for Inspection § 76.1712 Open video system (OVS) requests for carriage. An open video system operator shall maintain a...

  24. 47 CFR 76.1712 - Open video system (OVS) requests for carriage.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 47 Telecommunication 4 2012-10-01 2012-10-01 false Open video system (OVS) requests for carriage... RADIO SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Documents to be Maintained for Inspection § 76.1712 Open video system (OVS) requests for carriage. An open video system operator shall maintain a...

  25. 47 CFR 76.1508 - Network non-duplication.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Open Video Systems § 76.1508 Network non-duplication. (a) Sections 76.92 through 76.97 shall apply to open video systems in accordance with the provisions contained... unit” shall apply to an open video system or that portion of an open video system that operates or will...

  26. 47 CFR 76.1508 - Network non-duplication.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Open Video Systems § 76.1508 Network non-duplication. (a) Sections 76.92 through 76.97 shall apply to open video systems in accordance with the provisions contained... unit” shall apply to an open video system or that portion of an open video system that operates or will...

  27. 47 CFR 76.1508 - Network non-duplication.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Open Video Systems § 76.1508 Network non-duplication. (a) Sections 76.92 through 76.97 shall apply to open video systems in accordance with the provisions contained... unit” shall apply to an open video system or that portion of an open video system that operates or will...

  28. 47 CFR 76.1508 - Network non-duplication.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Open Video Systems § 76.1508 Network non-duplication. (a) Sections 76.92 through 76.97 shall apply to open video systems in accordance with the provisions contained... unit” shall apply to an open video system or that portion of an open video system that operates or will...

  29. A professional and cost effective digital video editing and image storage system for the operating room.

    PubMed

    Scollato, A; Perrini, P; Benedetto, N; Di Lorenzo, N

    2007-06-01

    We propose an easy-to-construct digital video editing system ideal for producing video documentation and still images. A digital video editing system applicable to many video sources in the operating room is described in detail. The proposed system has proved easy to use and permits videography to be obtained quickly and easily. By mixing different streams of video input from all the devices in use in the operating room and applying filters and effects, it produces a final, professional end-product. Recording on a DVD provides an inexpensive, portable, and easy-to-use medium on which to store the material or re-edit it at a later time. From stored videography it is easy to extract high-quality still images useful for teaching, presentations, and publications. In conclusion, digital videography and still photography can easily be recorded by the proposed system, producing high-quality video recordings. The use of FireWire ports provides good compatibility with next-generation hardware and software. The high standard of quality makes the proposed system one of the lowest-priced products available today.

  30. 47 CFR 76.1502 - Certification.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) BROADCAST RADIO SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Open Video Systems § 76.1502 Certification. (a) An operator of an open video... certification in its cable franchise area, a statement that the applicant is qualified to operate an open video...

  31. 47 CFR 76.1502 - Certification.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) BROADCAST RADIO SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Open Video Systems § 76.1502 Certification. (a) An operator of an open video... certification in its cable franchise area, a statement that the applicant is qualified to operate an open video...

  32. 47 CFR 76.1502 - Certification.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) BROADCAST RADIO SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Open Video Systems § 76.1502 Certification. (a) An operator of an open video... certification in its cable franchise area, a statement that the applicant is qualified to operate an open video...

  33. 47 CFR 76.1502 - Certification.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) BROADCAST RADIO SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Open Video Systems § 76.1502 Certification. (a) An operator of an open video... certification in its cable franchise area, a statement that the applicant is qualified to operate an open video...

  34. 47 CFR 76.1502 - Certification.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) BROADCAST RADIO SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Open Video Systems § 76.1502 Certification. (a) An operator of an open video... certification in its cable franchise area, a statement that the applicant is qualified to operate an open video...

  35. 47 CFR 76.1514 - Bundling of video and local exchange services.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 47 Telecommunication 4 2014-10-01 2014-10-01 false Bundling of video and local exchange services... RADIO SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Open Video Systems § 76.1514 Bundling of video and local exchange services. An open video system operator may offer video and local exchange...

  36. 47 CFR 76.1514 - Bundling of video and local exchange services.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 47 Telecommunication 4 2013-10-01 2013-10-01 false Bundling of video and local exchange services... RADIO SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Open Video Systems § 76.1514 Bundling of video and local exchange services. An open video system operator may offer video and local exchange...

  37. 47 CFR 76.1514 - Bundling of video and local exchange services.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 47 Telecommunication 4 2012-10-01 2012-10-01 false Bundling of video and local exchange services... RADIO SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Open Video Systems § 76.1514 Bundling of video and local exchange services. An open video system operator may offer video and local exchange...

  38. 47 CFR 76.1514 - Bundling of video and local exchange services.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 4 2010-10-01 2010-10-01 false Bundling of video and local exchange services... RADIO SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Open Video Systems § 76.1514 Bundling of video and local exchange services. An open video system operator may offer video and local exchange...

  39. 47 CFR 76.1514 - Bundling of video and local exchange services.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 47 Telecommunication 4 2011-10-01 2011-10-01 false Bundling of video and local exchange services... RADIO SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Open Video Systems § 76.1514 Bundling of video and local exchange services. An open video system operator may offer video and local exchange...

  40. The Systems Engineering Design of a Smart Forward Operating Base Surveillance System for Forward Operating Base Protection

    DTIC Science & Technology

    2013-06-01

    fixed sensors located along the perimeter of the FOB. The video is analyzed for facial recognition to alert the Network Operations Center (NOC...the UAV is processed on board for facial recognition and video for behavior analysis is sent directly to the Network Operations Center (NOC). Video...captured by the fixed sensors are sent directly to the NOC for facial recognition and behavior analysis processing. The multi-directional signal

  41. 47 CFR 76.1507 - Competitive access to satellite cable programming.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... RADIO SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Open Video Systems § 76.1507 Competitive....1000 through 76.1003 shall also apply to an operator of an open video system and its affiliate which provides video programming on its open video system, except as limited by paragraph (a) (1)-(3) of this...

  42. 47 CFR 76.1507 - Competitive access to satellite cable programming.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... RADIO SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Open Video Systems § 76.1507 Competitive....1000 through 76.1003 shall also apply to an operator of an open video system and its affiliate which provides video programming on its open video system, except as limited by paragraph (a) (1)-(3) of this...

  43. 47 CFR 76.1507 - Competitive access to satellite cable programming.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... RADIO SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Open Video Systems § 76.1507 Competitive....1000 through 76.1003 shall also apply to an operator of an open video system and its affiliate which provides video programming on its open video system, except as limited by paragraph (a) (1)-(3) of this...

  44. 47 CFR 76.1507 - Competitive access to satellite cable programming.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... RADIO SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Open Video Systems § 76.1507 Competitive....1000 through 76.1003 shall also apply to an operator of an open video system and its affiliate which provides video programming on its open video system, except as limited by paragraph (a) (1)-(3) of this...

  45. 47 CFR 76.1507 - Competitive access to satellite cable programming.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... RADIO SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Open Video Systems § 76.1507 Competitive....1000 through 76.1003 shall also apply to an operator of an open video system and its affiliate which provides video programming on its open video system, except as limited by paragraph (a) (1)-(3) of this...

  46. Imaging System for Vaginal Surgery.

    PubMed

    Taylor, G Bernard; Myers, Erinn M

    2015-12-01

    The vaginal surgeon is challenged with performing complex procedures within a surgical field of limited light and exposure. The video telescopic operating microscope is an illumination and imaging system that provides visualization during open surgical procedures with a limited field of view. The imaging system is positioned within the surgical field and then secured to the operating room table with a maneuverable holding arm. A high-definition camera and xenon light source allow transmission of the magnified image to a high-definition monitor in the operating room. The monitor screen is positioned above the patient for the surgeon and assistants to view in real time throughout the operation. The video telescopic operating microscope system was used to provide surgical illumination and magnification during total vaginal hysterectomy and salpingectomy, midurethral sling, and release of vaginal scar procedures. All procedures were completed without complications. The video telescopic operating microscope provided illumination of the vaginal operative field and display of the magnified image on high-definition monitors in the operating room for the surgeon and staff to view the procedures simultaneously. The video telescopic operating microscope provides high-definition display, magnification, and illumination during vaginal surgery.

  47. 47 CFR 76.1512 - Programming information.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Open Video Systems § 76.1512 Programming information. (a) An open video system operator shall not unreasonably discriminate in favor of itself or its affiliates... for the purpose of selecting programming on the open video system, or in the way such material or...

  48. 47 CFR 76.1512 - Programming information.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Open Video Systems § 76.1512 Programming information. (a) An open video system operator shall not unreasonably discriminate in favor of itself or its affiliates... for the purpose of selecting programming on the open video system, or in the way such material or...

  49. 47 CFR 76.1512 - Programming information.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Open Video Systems § 76.1512 Programming information. (a) An open video system operator shall not unreasonably discriminate in favor of itself or its affiliates... for the purpose of selecting programming on the open video system, or in the way such material or...

  50. 47 CFR 76.1512 - Programming information.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Open Video Systems § 76.1512 Programming information. (a) An open video system operator shall not unreasonably discriminate in favor of itself or its affiliates... for the purpose of selecting programming on the open video system, or in the way such material or...

  51. Viewers & Players

    MedlinePlus

    ... Player Play video and audio files on Apple operating systems. mov Apple iTunes Download NLM podcasts and applications. ... Player Play video and audio files on PC operating systems. mp3 wav wmz About MedlinePlus Site Map FAQs ...

  52. Learning neuroendoscopy with an exoscope system (video telescopic operating monitor): Early clinical results.

    PubMed

    Parihar, Vijay; Yadav, Y R; Kher, Yatin; Ratre, Shailendra; Sethi, Ashish; Sharma, Dhananjaya

    2016-01-01

    A steep learning curve is encountered initially in pure endoscopic procedures. The video telescopic operating monitor (VITOM), an advance in rigid-lens telescope systems, provides an alternative method for learning the basics of neuroendoscopy with the help of the familiar principles of microneurosurgery. The aim was to evaluate the clinical utility of VITOM as a learning tool for neuroendoscopy. VITOM was used in 39 cranial and spinal procedures, and its utility as a tool for minimally invasive neurosurgery and for the initial neuroendoscopy learning curve was studied. VITOM was used in 25 cranial and 14 spinal procedures. Image quality was comparable to that of the endoscope and microscope, and surgeons' comfort improved with VITOM. Frequent repositioning of the scope holder and the lack of stereopsis were initial limiting factors, which were compensated for with repeated procedures. VITOM was found useful in reducing the initial learning curve of neuroendoscopy.

  53. 47 CFR 63.02 - Exemptions for extensions of lines and for systems for the delivery of video programming.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... systems for the delivery of video programming. 63.02 Section 63.02 Telecommunication FEDERAL... systems for the delivery of video programming. (a) Any common carrier is exempt from the requirements of... with respect to the establishment or operation of a system for the delivery of video programming. [64...

  54. 47 CFR 63.02 - Exemptions for extensions of lines and for systems for the delivery of video programming.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... systems for the delivery of video programming. 63.02 Section 63.02 Telecommunication FEDERAL... systems for the delivery of video programming. (a) Any common carrier is exempt from the requirements of... with respect to the establishment or operation of a system for the delivery of video programming. [64...

  55. 47 CFR 63.02 - Exemptions for extensions of lines and for systems for the delivery of video programming.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... systems for the delivery of video programming. 63.02 Section 63.02 Telecommunication FEDERAL... systems for the delivery of video programming. (a) Any common carrier is exempt from the requirements of... with respect to the establishment or operation of a system for the delivery of video programming. [64...

  56. 47 CFR 63.02 - Exemptions for extensions of lines and for systems for the delivery of video programming.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... systems for the delivery of video programming. 63.02 Section 63.02 Telecommunication FEDERAL... systems for the delivery of video programming. (a) Any common carrier is exempt from the requirements of... with respect to the establishment or operation of a system for the delivery of video programming. [64...

  57. 47 CFR 63.02 - Exemptions for extensions of lines and for systems for the delivery of video programming.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... systems for the delivery of video programming. 63.02 Section 63.02 Telecommunication FEDERAL... systems for the delivery of video programming. (a) Any common carrier is exempt from the requirements of... with respect to the establishment or operation of a system for the delivery of video programming. [64...

  58. Network-aware scalable video monitoring system for emergency situations with operator-managed fidelity control

    NASA Astrophysics Data System (ADS)

    Al Hadhrami, Tawfik; Nightingale, James M.; Wang, Qi; Grecos, Christos

    2014-05-01

    In emergency situations, the ability to remotely monitor unfolding events using high-quality video feeds will significantly improve the incident commander's understanding of the situation and thereby aid effective decision making. This paper presents a novel, adaptive video monitoring system for emergency situations where the normal communications network infrastructure has been severely impaired or is no longer operational. The proposed scheme, operating over a rapidly deployable wireless mesh network, supports real-time video feeds between first responders, forward operating bases, and primary command and control centers. Video feeds captured on portable devices carried by first responders and by static visual sensors are encoded in H.264/SVC, the scalable extension to H.264/AVC, allowing efficient, standards-based temporal, spatial, and quality scalability of the video. A three-tier video delivery system is proposed, which balances the need to avoid overuse of mesh nodes with the operational requirements of the emergency management team. In the first tier, the video feeds are delivered at a low spatial and temporal resolution, employing only the base layer of the H.264/SVC video stream. Routing in this mode is designed to employ all nodes across the entire mesh network. In the second tier, whenever operational considerations require that commanders or operators focus on a particular video feed, a 'fidelity control' mechanism at the monitoring station sends control messages to the routing and scheduling agents in the mesh network, which increase the quality of the received picture using SNR scalability while conserving bandwidth by maintaining a low frame rate. In this mode, routing decisions are based on reliable packet delivery, with the most reliable routes used to deliver the base and lower enhancement layers; as fidelity is increased and more scalable layers are transmitted, they are assigned to routes in descending order of reliability. The third tier of video delivery transmits a high-quality video stream including all available scalable layers over the most reliable routes through the mesh network, ensuring the highest possible video quality. The proposed scheme is implemented in a proven simulator, and the performance of the proposed system is numerically evaluated through extensive simulations. We further present an in-depth analysis of the proposed solutions and of potential approaches to supporting high-quality visual communications in such a demanding context.
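    The second-tier idea of mapping scalable layers onto routes in descending order of reliability can be sketched as follows. This is a minimal illustration under our own assumptions; the route names, reliability scores, and function name are hypothetical, not the paper's implementation:

    ```python
    # Hypothetical sketch: assign H.264/SVC layers to mesh routes so the most
    # reliable route carries the base layer and progressively less reliable
    # routes carry higher enhancement layers.

    def assign_layers_to_routes(layers, routes):
        """layers: layer ids, base layer first (e.g. ["base", "enh1", "enh2"]).
        routes: list of (route_id, delivery_reliability) tuples.
        Returns a mapping of layer -> route id."""
        ranked = sorted(routes, key=lambda r: r[1], reverse=True)
        # If there are more layers than routes, reuse routes cyclically.
        return {layer: ranked[i % len(ranked)][0]
                for i, layer in enumerate(layers)}

    assignment = assign_layers_to_routes(
        ["base", "enh1", "enh2"],
        [("route_a", 0.91), ("route_b", 0.98), ("route_c", 0.85)])
    # The base layer lands on route_b, the most reliable of the three.
    ```

    Dropping an enhancement layer then only degrades fidelity, while the base layer stays on the path least likely to lose packets.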

  19. Software-codec-based full motion video conferencing on the PC using visual pattern image sequence coding

    NASA Astrophysics Data System (ADS)

    Barnett, Barry S.; Bovik, Alan C.

    1995-04-01

    This paper presents a real-time, full-motion video conferencing system based on the Visual Pattern Image Sequence Coding (VPISC) software codec. The prototype system hardware comprises two personal computers, two camcorders, two frame grabbers, and an Ethernet connection. The prototype system software has a simple structure. It runs under the Disk Operating System and includes a user interface, a video I/O interface, an event-driven network interface, and a free-running or frame-synchronous video codec that also acts as the controller for the video and network interfaces. Two video coders have been tested in this system. Simple implementations of Visual Pattern Image Coding and VPISC have both proven to support full-motion video conferencing with good visual quality. Future work will concentrate on expanding this prototype to support the motion-compensated version of VPISC, as well as encompassing point-to-point modem I/O and multiple network protocols. The application will be ported to multiple hardware platforms and operating systems. The motivation for developing this prototype system is to demonstrate the practicality of software-based real-time video codecs. Furthermore, software video codecs are not only cheaper but also more flexible system solutions, because they enable different computer platforms to exchange encoded video information without requiring on-board protocol-compatible video codec hardware. Software-based solutions enable true low-cost video conferencing that fits the 'open systems' model of interoperability that is so important for building portable hardware and software applications.

  20. 47 CFR 76.1301 - Prohibited practices.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... interest. No cable operator or other multichannel video programming distributor shall require a financial... systems. (b) Exclusive rights. No cable operator or other multichannel video programming distributor shall coerce any video programming vendor to provide, or retaliate against such a vendor for failing to provide...

  1. Secure video communications system

    DOEpatents

    Smith, Robert L.

    1991-01-01

    A secure video communications system having at least one command network formed by a combination of subsystems. The combination of subsystems includes a video subsystem, an audio subsystem, a communications subsystem, and a control subsystem. The video communications system is window driven and mouse operated, and allows for secure point-to-point real-time teleconferencing.

  2. 47 CFR 76.503 - National subscriber limits.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Ownership of Cable Systems § 76.503 National subscriber limits. (a) No cable operator shall serve more than 30 percent of all multichannel-video programming subscribers nationwide through multichannel video programming distributors owned by such operator or in which...

  3. 47 CFR 76.503 - National subscriber limits.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Ownership of Cable Systems § 76.503 National subscriber limits. (a) No cable operator shall serve more than 30 percent of all multichannel-video programming subscribers nationwide through multichannel video programming distributors owned by such operator or in which...

  4. 47 CFR 76.503 - National subscriber limits.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Ownership of Cable Systems § 76.503 National subscriber limits. (a) No cable operator shall serve more than 30 percent of all multichannel-video programming subscribers nationwide through multichannel video programming distributors owned by such operator or in which...

  5. 47 CFR 76.503 - National subscriber limits.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Ownership of Cable Systems § 76.503 National subscriber limits. (a) No cable operator shall serve more than 30 percent of all multichannel-video programming subscribers nationwide through multichannel video programming distributors owned by such operator or in which...

  6. Feasibility study of a real-time operating system for a multichannel MPEG-4 encoder

    NASA Astrophysics Data System (ADS)

    Lehtoranta, Olli; Hamalainen, Timo D.

    2005-03-01

    The feasibility of the DSP/BIOS real-time operating system for a multi-channel MPEG-4 encoder is studied. The performance of two MPEG-4 encoder implementations, with and without the operating system, is compared in terms of encoding frame rate and memory requirements. The effects of task-switching frequency and of the number of parallel video channels on the encoding frame rate are measured. The research is carried out on a 200 MHz TMS320C6201 fixed-point DSP using the QCIF (176x144 pixels) video format. Compared to a traditional DSP implementation without an operating system, inclusion of DSP/BIOS reduces total system throughput by only 1 QCIF frame/s. The operating system has a 6 KB data memory overhead and a 15.7 KB program memory requirement. Hence, the overhead is considered low enough for resource-critical mobile video applications.

  7. A VidEo-Based Intelligent Recognition and Decision System for the Phacoemulsification Cataract Surgery.

    PubMed

    Tian, Shu; Yin, Xu-Cheng; Wang, Zhi-Bin; Zhou, Fang; Hao, Hong-Wei

    2015-01-01

    Phacoemulsification is one of the most advanced surgical treatments for cataract. However, conventional procedures involve little automation and rely heavily on the surgeon's skill. One promising alternative is to use video processing and pattern recognition technologies to automatically detect the cataract grade and intelligently control the release of the ultrasonic energy during the operation. Unlike cataract grading in diagnosis systems with static images, dynamic surgical videos introduce complicated backgrounds, unexpected noise, and varied information. Here we develop a Video-Based Intelligent Recognition and Decision (VeBIRD) system, which breaks new ground by providing a generic framework for automatically tracking the operation process and classifying the cataract grade in microscope videos of phacoemulsification cataract surgery. VeBIRD comprises a robust eye (iris) detector using a randomized Hough transform to precisely locate the eye against the noisy background, an effective probe tracker using Tracking-Learning-Detection to then track the operation probe through the dynamic process, and an intelligent decider using discriminative learning to finally recognize the cataract grade in the complicated video. Experiments with a variety of real microscope videos of phacoemulsification verify VeBIRD's effectiveness.
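    The core of the randomized Hough transform used for circle (iris) localization can be sketched in a few lines: repeatedly sample three edge points, compute the circle through them, and vote in an accumulator. This is a generic textbook sketch on synthetic points, not VeBIRD's detector; the function names and parameters are ours:

    ```python
    import math
    import random
    from collections import Counter

    def circle_from_3pts(p1, p2, p3):
        """Return the (rounded) center and radius of the circle through
        three points, or None if the points are (nearly) collinear."""
        ax, ay = p1; bx, by = p2; cx, cy = p3
        d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
        if abs(d) < 1e-9:
            return None
        ux = ((ax*ax + ay*ay) * (by - cy) + (bx*bx + by*by) * (cy - ay)
              + (cx*cx + cy*cy) * (ay - by)) / d
        uy = ((ax*ax + ay*ay) * (cx - bx) + (bx*bx + by*by) * (ax - cx)
              + (cx*cx + cy*cy) * (bx - ax)) / d
        return round(ux), round(uy), round(math.hypot(ax - ux, ay - uy))

    def randomized_hough_circle(points, iterations=2000, seed=1):
        """Vote over circles defined by random point triples; the circle
        actually present in the data accumulates the most votes."""
        rng = random.Random(seed)
        votes = Counter()
        for _ in range(iterations):
            circ = circle_from_3pts(*rng.sample(points, 3))
            if circ:
                votes[circ] += 1
        return votes.most_common(1)[0][0]

    # Synthetic edge points on a circle of radius 20 centered at (50, 40).
    points = [(50 + 20 * math.cos(i * math.pi / 18),
               40 + 20 * math.sin(i * math.pi / 18)) for i in range(36)]
    print(randomized_hough_circle(points))  # (50, 40, 20)
    ```

    In a real microscope frame the point set would come from an edge detector and contain clutter, which is exactly why the voting step is needed.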

  8. A VidEo-Based Intelligent Recognition and Decision System for the Phacoemulsification Cataract Surgery

    PubMed Central

    Yin, Xu-Cheng; Wang, Zhi-Bin; Zhou, Fang; Hao, Hong-Wei

    2015-01-01

    Phacoemulsification is one of the most advanced surgical treatments for cataract. However, conventional procedures involve little automation and rely heavily on the surgeon's skill. One promising alternative is to use video processing and pattern recognition technologies to automatically detect the cataract grade and intelligently control the release of the ultrasonic energy during the operation. Unlike cataract grading in diagnosis systems with static images, dynamic surgical videos introduce complicated backgrounds, unexpected noise, and varied information. Here we develop a Video-Based Intelligent Recognition and Decision (VeBIRD) system, which breaks new ground by providing a generic framework for automatically tracking the operation process and classifying the cataract grade in microscope videos of phacoemulsification cataract surgery. VeBIRD comprises a robust eye (iris) detector using a randomized Hough transform to precisely locate the eye against the noisy background, an effective probe tracker using Tracking-Learning-Detection to then track the operation probe through the dynamic process, and an intelligent decider using discriminative learning to finally recognize the cataract grade in the complicated video. Experiments with a variety of real microscope videos of phacoemulsification verify VeBIRD's effectiveness. PMID:26693249

  9. A Portable Shoulder-Mounted Camera System for Surgical Education in Spine Surgery.

    PubMed

    Pham, Martin H; Ohiorhenuan, Ifije E; Patel, Neil N; Jakoi, Andre M; Hsieh, Patrick C; Acosta, Frank L; Wang, Jeffrey C; Liu, John C

    2017-02-07

    The past several years have demonstrated an increased recognition of operative videos as an important adjunct for resident education. Currently lacking, however, are effective methods to record video for the purposes of illustrating the techniques of minimally invasive (MIS) and complex spine surgery. We describe here our experiences developing and using a shoulder-mounted camera system for recording surgical video. Our requirements for an effective camera system included wireless portability to allow for movement around the operating room, camera mount location for comfort and loupes/headlight usage, battery life for long operative days, and sterile control of on/off recording. With this in mind, we created a shoulder-mounted camera system utilizing a GoPro™ HERO3+, its Smart Remote (GoPro, Inc., San Mateo, California), a high-capacity external battery pack, and a commercially available shoulder-mount harness. This shoulder-mounted system was more comfortable to wear for long periods of time in comparison to existing head-mounted and loupe-mounted systems. Without requiring any wired connections, the surgeon was free to move around the room as needed. Over the past several years, we have recorded numerous MIS and complex spine surgeries for the purposes of surgical video creation for resident education. Surgical videos serve as a platform to distribute important operative nuances in rich multimedia. Effective and practical camera system setups are needed to encourage the continued creation of videos to illustrate the surgical maneuvers in minimally invasive and complex spinal surgery. We describe here a novel portable shoulder-mounted camera system setup specifically designed to be worn and used for long periods of time in the operating room.

  10. 77 FR 75617 - 36(b)(1) Arms Sales Notification

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-12-21

    ... transmittal, policy justification, and Sensitivity of Technology. Dated: December 18, 2012. Aaron Siegel... Processor Cabinets, 2 Video Wall Screen and Projector Systems, 46 Flat Panel Displays, and 2 Distributed Video Systems), 2 ship sets AN/SPQ-15 Digital Video Distribution Systems, 2 ship sets Operational...

  11. Movable Cameras And Monitors For Viewing Telemanipulator

    NASA Technical Reports Server (NTRS)

    Diner, Daniel B.; Venema, Steven C.

    1993-01-01

    Three methods proposed to assist operator viewing telemanipulator on video monitor in control station when video image generated by movable video camera in remote workspace of telemanipulator. Monitors rotated or shifted and/or images in them transformed to adjust coordinate systems of scenes visible to operator according to motions of cameras and/or operator's preferences. Reduces operator's workload and probability of error by obviating need for mental transformations of coordinates during operation. Methods applied in outer space, undersea, in nuclear industry, in surgery, in entertainment, and in manufacturing.
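    The coordinate adjustment described above amounts to counter-rotating displayed coordinates when the camera rotates, so the scene's orientation stays consistent for the operator. A minimal sketch of that idea, under our own formulation (not the NASA implementation):

    ```python
    import math

    # Hypothetical sketch: when the remote camera rolls by theta degrees,
    # counter-rotate monitor-space coordinates by -theta so "up" in the
    # displayed scene stays aligned with the operator's frame, sparing the
    # operator the mental coordinate transformation.

    def compensate_camera_roll(x, y, theta_deg):
        t = math.radians(-theta_deg)
        return (x * math.cos(t) - y * math.sin(t),
                x * math.sin(t) + y * math.cos(t))

    # A point to the operator's right, seen through a camera rolled 90 deg,
    # is displayed rotated back so motions map consistently.
    print(compensate_camera_roll(1.0, 0.0, 90))
    ```

    The same rotation could be applied to the image raster or, equivalently, to the operator's command inputs before they reach the telemanipulator.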

  12. Portable color multimedia training systems based on monochrome laptop computers (CBT-in-a-briefcase), with spinoff implications for video uplink and downlink in spaceflight operations

    NASA Technical Reports Server (NTRS)

    Scott, D. W.

    1994-01-01

    This report describes efforts to use digital motion video compression technology to develop a highly portable device that would convert 1990-91 era IBM-compatible and/or Macintosh notebook computers into full-color, motion-video-capable multimedia training systems. An architecture was conceived that would permit direct conversion of existing laser-disk-based multimedia courses with little or no reauthoring. The project did not physically demonstrate certain critical video keying techniques, but their implementation should be feasible. This investigation of digital motion video has spawned two significant spaceflight projects at MSFC: one to downlink multiple high-quality video signals from Spacelab, and the other to uplink videoconference-quality video in real time and high-quality video off-line, plus investigate interactive, multimedia-based techniques for enhancing onboard science operations. Other airborne or spaceborne spinoffs are possible.

  13. Multilocation Video Conference By Optical Fiber

    NASA Astrophysics Data System (ADS)

    Gray, Donald J.

    1982-10-01

    An experimental system that permits interconnection of many offices in a single video conference is described. Video images transmitted to conference participants are selected by the conference chairman and switched by a microprocessor-controlled video switch. Speakers can, at their choice, transmit their own images or images of graphics they wish to display. Users are connected to the Switching Center by optical fiber subscriber loops that carry analog video, digitized telephone, data and signaling. The same system also provides user-selectable distribution of video program and video library material. Experience in the operation of the conference system is discussed.

  14. 47 CFR 76.504 - Limits on carriage of vertically integrated programming.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... RADIO SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Ownership of Cable Systems § 76.504... national video programming services owned by the cable operator or in which the cable operator has an... up to 45 percent of its channel capacity, whichever is greater, to the carriage of video programming...

  15. 47 CFR 76.504 - Limits on carriage of vertically integrated programming.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... RADIO SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Ownership of Cable Systems § 76.504... national video programming services owned by the cable operator or in which the cable operator has an... up to 45 percent of its channel capacity, whichever is greater, to the carriage of video programming...

  16. 47 CFR 76.504 - Limits on carriage of vertically integrated programming.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... RADIO SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Ownership of Cable Systems § 76.504... national video programming services owned by the cable operator or in which the cable operator has an... up to 45 percent of its channel capacity, whichever is greater, to the carriage of video programming...

  17. Feasibility study of transmission of OTV camera control information in the video vertical blanking interval

    NASA Technical Reports Server (NTRS)

    White, Preston A., III

    1994-01-01

    The Operational Television system at Kennedy Space Center operates hundreds of video cameras, many remotely controllable, in support of the operations at the center. This study was undertaken to determine if commercial NABTS (North American Basic Teletext System) teletext transmission in the vertical blanking interval of the genlock signals distributed to the cameras could be used to send remote control commands to the cameras and the associated pan and tilt platforms. Wavelength division multiplexed fiberoptic links are being installed in the OTV system to obtain RS-250 short-haul quality. It was demonstrated that the NABTS transmission could be sent over the fiberoptic cable plant without excessive video quality degradation and that video cameras could be controlled using NABTS transmissions over multimode fiberoptic paths as long as 1.2 km.

  18. RAPID: A random access picture digitizer, display, and memory system

    NASA Technical Reports Server (NTRS)

    Yakimovsky, Y.; Rayfield, M.; Eskenazi, R.

    1976-01-01

    RAPID is a system capable of providing convenient digital analysis of video data in real time. It has two modes of operation. The first allows for continuous digitization of an EIA RS-170 video signal. Each frame in the video signal is digitized and written in 1/30 of a second into RAPID's internal memory. The second mode leaves the content of the internal memory independent of the current input video. In both modes of operation, the image contained in the memory is used to generate an EIA RS-170 composite video output signal representing the digitized image in the memory, so that it can be displayed on a monitor.
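    The two operating modes amount to a frame memory that either tracks the incoming video continuously or is frozen for analysis while the output keeps being generated from memory. A minimal sketch of that behavior, as our own abstraction rather than the RAPID hardware:

    ```python
    # Illustrative sketch: RAPID-style frame memory with a continuous mode
    # (every incoming frame overwrites memory) and a freeze mode (memory is
    # held independent of the input). The display output reads from memory
    # in both modes, mirroring the RS-170 output path described above.

    class FrameMemory:
        def __init__(self):
            self.memory = None       # last digitized frame
            self.continuous = True   # mode 1: digitize every incoming frame

        def freeze(self):
            self.continuous = False  # mode 2: memory independent of input

        def on_input_frame(self, frame):
            if self.continuous:
                self.memory = frame  # written within one frame time (1/30 s)

        def output_frame(self):
            return self.memory       # drives the display in both modes

    fm = FrameMemory()
    fm.on_input_frame("frame_1")
    fm.freeze()
    fm.on_input_frame("frame_2")     # ignored: memory is frozen
    print(fm.output_frame())         # frame_1
    ```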

  19. Operational prediction of rip currents using numerical model and nearshore bathymetry from video images

    NASA Astrophysics Data System (ADS)

    Sembiring, L.; Van Ormondt, M.; Van Dongeren, A. R.; Roelvink, J. A.

    2017-07-01

    Rip currents are one of the most dangerous coastal hazards for swimmers. In order to minimize the risk, an operational, process-based coastal model system can be utilized to provide forecasts of the nearshore waves and currents that may endanger beachgoers. In this paper, an operational model for rip current prediction that utilizes nearshore bathymetry obtained from a video imaging technique is demonstrated. For the nearshore-scale model, XBeach is used, with which tidal currents and wave-induced currents (including the effect of wave groups) can be simulated simultaneously. Up-to-date bathymetry is obtained using the video imaging technique cBathy. The system is tested for the Egmond aan Zee beach, located in the northern part of the Dutch coastline. This paper tests the applicability of bathymetry obtained from the video technique as input for the numerical modelling system by comparing simulation results using surveyed bathymetry with model results using video-derived bathymetry. Results show that the video technique is able to produce bathymetry converging towards the ground-truth observations. This bathymetry validation is followed by an example of an operational forecast simulation predicting rip currents. Rip current flow fields simulated over measured and modeled bathymetries are compared in order to assess the performance of the proposed forecast system.
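    A convergence check of this kind typically reduces to comparing the video-derived depth grid against surveyed ground truth with an error statistic such as RMSE. The sketch below is our own illustration with made-up depths, not the paper's metric or data:

    ```python
    import math

    # Hedged sketch: RMSE between surveyed depths and video-derived (cBathy-
    # style) estimates along a transect, skipping cells with no estimate.

    def bathymetry_rmse(surveyed, video_derived):
        pairs = [(s, v) for s, v in zip(surveyed, video_derived)
                 if s is not None and v is not None]
        return math.sqrt(sum((s - v) ** 2 for s, v in pairs) / len(pairs))

    surveyed = [2.0, 3.5, 5.0, 6.5]   # depths in m along a cross-shore transect
    video    = [2.2, 3.4, 5.3, None]  # hypothetical video-derived estimates
    print(round(bathymetry_rmse(surveyed, video), 3))  # 0.216
    ```

    A decreasing RMSE over successive cBathy updates is what "converging towards the ground truth" means operationally.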

  20. 47 CFR 76.501 - Cross-ownership.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) BROADCAST RADIO SERVICES MULTICHANNEL VIDEO AND..., cable system, SMATV or multiple video distribution provider subject to § 76.501, § 76.505, or § 76.905(b... station, cable system, SMATV, or multiple video distribution provider that operates in the same market, is...

  1. 47 CFR 76.501 - Cross-ownership.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) BROADCAST RADIO SERVICES MULTICHANNEL VIDEO AND..., cable system, SMATV or multiple video distribution provider subject to § 76.501, § 76.505, or § 76.905(b... station, cable system, SMATV, or multiple video distribution provider that operates in the same market, is...

  2. 47 CFR 76.501 - Cross-ownership.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) BROADCAST RADIO SERVICES MULTICHANNEL VIDEO AND..., cable system, SMATV or multiple video distribution provider subject to § 76.501, § 76.505, or § 76.905(b... station, cable system, SMATV, or multiple video distribution provider that operates in the same market, is...

  3. Data Reduction and Control Software for Meteor Observing Stations Based on CCD Video Systems

    NASA Technical Reports Server (NTRS)

    Madiedo, J. M.; Trigo-Rodriguez, J. M.; Lyytinen, E.

    2011-01-01

    The SPanish Meteor Network (SPMN) is performing a continuous monitoring of meteor activity over Spain and neighbouring countries. The huge amount of data obtained by the 25 video observing stations that this network is currently operating made it necessary to develop new software packages to accomplish some tasks, such as data reduction and remote operation of autonomous systems based on high-sensitivity CCD video devices. The main characteristics of this software are described here.

  4. 47 CFR 76.921 - Buy-through of other tiers prohibited.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Cable Rate Regulation § 76.921 Buy-through of other tiers prohibited. (a) No cable system operator, other than an operator subject to effective competition, may... video programming offered on a per channel or per program charge basis. A cable operator may, however...

  5. 47 CFR 76.921 - Buy-through of other tiers prohibited.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Cable Rate Regulation § 76.921 Buy-through of other tiers prohibited. (a) No cable system operator, other than an operator subject to effective competition, may... video programming offered on a per channel or per program charge basis. A cable operator may, however...

  6. 47 CFR 76.921 - Buy-through of other tiers prohibited.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Cable Rate Regulation § 76.921 Buy-through of other tiers prohibited. (a) No cable system operator, other than an operator subject to effective competition, may... video programming offered on a per channel or per program charge basis. A cable operator may, however...

  7. Video performance for high security applications.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Connell, Jack C.; Norman, Bradley C.

    2010-06-01

    The complexity of physical protection systems has increased to address modern threats to national security and emerging commercial technologies. A key element of modern physical protection systems is the data presented to the human operator, used for rapid determination of the cause of an alarm, whether false (e.g., caused by an animal, debris, etc.) or real (e.g., a human adversary). Alarm assessment, the human validation of a sensor alarm, primarily relies on imaging technologies and video systems. Developing measures of effectiveness (MOE) that drive the design or evaluation of a video system or technology becomes a challenge, given the subjectivity of the application (e.g., alarm assessment). Sandia National Laboratories has conducted empirical analysis using field test data and mathematical models such as the binomial distribution and Johnson target transfer functions to develop MOEs for video system technologies. Depending on the technology, the task of the security operator, and the distance to the target, the probability of assessment (PA) can be determined as a function of a variety of conditions or assumptions. PA used as an MOE allows the systems engineer to conduct trade studies, make informed design decisions, or evaluate new higher-risk technologies. This paper outlines general video system design trade-offs, discusses ways video can be used to increase system performance, and lists MOEs for video systems used in subjective applications such as alarm assessment.
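    One of the models mentioned above, the Johnson-criteria target transfer probability function, has a standard empirical form: the probability of performing a discrimination task grows with the number of resolvable cycles N across the target, reaching 50% at N = N50. A sketch of that function (the function name and example values are ours; whether Sandia used exactly this variant is an assumption):

    ```python
    # Hedged sketch: empirical target transfer probability function commonly
    # used with the Johnson criteria. n_cycles is the number of resolvable
    # cycles across the target; n50 is the cycles needed for a 50% chance of
    # completing the task (detection, recognition, or identification).

    def johnson_probability(n_cycles, n50):
        e = 2.7 + 0.7 * (n_cycles / n50)
        ratio = (n_cycles / n50) ** e
        return ratio / (1.0 + ratio)

    # By construction the probability is exactly 0.5 at n_cycles == n50,
    # and it rises steeply as resolution (cycles on target) increases.
    print(johnson_probability(4, 4))   # 0.5
    ```

    Since cycles on target fall off with distance for a fixed camera and lens, this is one way a PA-versus-distance curve can be derived for a candidate video technology.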

  8. Task-oriented situation recognition

    NASA Astrophysics Data System (ADS)

    Bauer, Alexander; Fischer, Yvonne

    2010-04-01

    From the advances in computer vision methods for the detection, tracking, and recognition of objects in video streams, new opportunities for video surveillance arise: in the future, automated video surveillance systems will be able to detect critical situations early enough to enable an operator to take preventive action, instead of using video material merely for forensic investigations. However, problems such as limited computational resources, privacy regulations, and a constant change in potential threats have to be addressed by a practical automated video surveillance system. In this paper, we show how these problems can be addressed using a task-oriented approach. The system architecture of the task-oriented video surveillance system NEST and an algorithm for the detection of abnormal behavior as part of the system are presented and illustrated for the surveillance of guests inside a video-monitored building.

  9. An Investigation of the Feasibility of a Video Game System for Developing Scanning and Selection Skills.

    ERIC Educational Resources Information Center

    Horn, Eva; And Others

    1991-01-01

    Three nonvocal students (ages 5-8) with severe physical handicaps were trained in scan and selection responses (similar to responses needed for operating augmentative communication systems) using a microcomputer-operated video-game format. Results indicated that all three children showed substantial increases in the number of correct responses and…

  10. Fluorescence endoscopic video system

    NASA Astrophysics Data System (ADS)

    Papayan, G. V.; Kang, Uk

    2006-10-01

    This paper describes a fluorescence endoscopic video system intended for the diagnosis of diseases of the internal organs. The system operates on the basis of two-channel recording of the video fluxes from a fluorescence channel and a reflected-light channel by means of a high-sensitivity monochrome television camera and a color camera, respectively. Examples are given of the application of the device in gastroenterology.

  11. The use of distributed displays of operating room video when real-time occupancy status was available.

    PubMed

    Xiao, Yan; Dexter, Franklin; Hu, Peter; Dutton, Richard P

    2008-02-01

    On the day of surgery, real-time information of both room occupancy and activities within the operating room (OR) is needed for management of staff, equipment, and unexpected events. A status display system showed color OR video with controllable image quality and showed times that patients entered and exited each OR (obtained automatically). The system was installed and its use was studied in a 6-OR trauma suite and at four locations in a 19-OR tertiary suite. Trauma staff were surveyed for their perceptions of the system. Evidence of staff acceptance of distributed OR video included its operational use for >3 yr in the two suites, with no administrative complaints. Individuals of all job categories used the video. Anesthesiologists were the most frequent users for more than half of the days (95% confidence interval [CI] >50%) in the tertiary ORs. The OR charge nurses accessed the video mostly early in the day when the OR occupancy was high. In comparison (P < 0.001), anesthesiologists accessed it mostly at the end of the workday when occupancy was declining and few cases were starting. Of all 30-min periods during which the video was accessed in the trauma suite, many accesses (95% CI >42%) occurred in periods with no cases starting or ending (i.e., the video was used during the middle of cases). The three stated reasons for using video that had median surveyed responses of "very useful" were "to see if cases are finished," "to see if a room is ready," and "to see when cases are about to finish." Our nurses and physicians both accepted and used distributed OR video as it provided useful information, regardless of whether real-time display of milestones was available (e.g., through anesthesia information system data).

  12. Integrating Time-Synchronized Video with Other Geospatial and Temporal Data for Remote Science Operations

    NASA Technical Reports Server (NTRS)

    Cohen, Tamar E.; Lees, David S.; Deans, Matthew C.; Lim, Darlene S. S.; Lee, Yeon Jin Grace

    2018-01-01

    Exploration Ground Data Systems (xGDS) supports rapid scientific decision making by synchronizing video in context with map, instrument data visualization, geo-located notes and any other collected data. xGDS is an open source web-based software suite developed at NASA Ames Research Center to support remote science operations in analog missions and prototype solutions for remote planetary exploration. (See Appendix B) Typical video systems are designed to play or stream video only, independent of other data collected in the context of the video. Providing customizable displays for monitoring live video and data as well as replaying recorded video and data helps end users build up a rich situational awareness. xGDS was designed to support remote field exploration with unreliable networks. Commercial digital recording systems operate under the assumption that there is a stable and reliable network between the source of the video and the recording system. In many field deployments and space exploration scenarios, this is not the case - there are both anticipated and unexpected network losses. xGDS' Video Module handles these interruptions, storing the available video, organizing and characterizing the dropouts, and presenting the video for streaming or replay to the end user including visualization of the dropouts. Scientific instruments often require custom or expensive software to analyze and visualize collected data. This limits the speed at which the data can be visualized and limits access to the data to those users with the software. xGDS' Instrument Module integrates with instruments that collect and broadcast data in a single snapshot or that continually collect and broadcast a stream of data. While seeing a visualization of collected instrument data is informative, showing the context for the collected data, other data collected nearby along with events indicating current status helps remote science teams build a better understanding of the environment. 
Further, sharing geo-located, tagged notes recorded by the scientists and others on the team spurs deeper analysis of the data.
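    Handling network losses as described above reduces to a timeline problem: given the intervals of video that were actually stored, derive the dropout gaps so replay can visualize both. The sketch below is our own simplification, not xGDS code; the function name and example times are hypothetical:

    ```python
    # Illustrative sketch: derive dropout intervals from the (start, end)
    # times of stored video segments, so a replay timeline can show stored
    # video and network-loss gaps side by side.

    def dropouts(segments, session_start, session_end, min_gap=1.0):
        """segments: sorted list of (start, end) times of stored video.
        Returns gaps longer than min_gap seconds, including any gap before
        the first segment and after the last one."""
        gaps, cursor = [], session_start
        for start, end in segments:
            if start - cursor > min_gap:
                gaps.append((cursor, start))
            cursor = max(cursor, end)
        if session_end - cursor > min_gap:
            gaps.append((cursor, session_end))
        return gaps

    # One mid-session dropout (40 s to 55 s) and one trailing gap.
    print(dropouts([(0.0, 40.0), (55.0, 90.0)], 0.0, 100.0))
    # [(40.0, 55.0), (90.0, 100.0)]
    ```

    Characterizing the gaps explicitly, rather than silently skipping them, is what lets the end user distinguish "nothing happened" from "nothing was received".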

  13. 47 CFR 76.1301 - Prohibited practices.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 47 Telecommunication 4 2011-10-01 2011-10-01 false Prohibited practices. 76.1301 Section 76.1301... interest. No cable operator or other multichannel video programming distributor shall require a financial... systems. (b) Exclusive rights. No cable operator or other multichannel video programming distributor shall...

  14. An Analysis of the Cost Effectiveness of Various Electronic Alternatives for Delivering Distance Education Compared to the Travel Costs for Live Instruction.

    ERIC Educational Resources Information Center

    Caffarella, Edward; And Others

    The feasibility and relative costs of four telecommunication systems for delivering university courses to distant locations in Colorado were compared. The four systems were compressed video, vertical blanking interval video, satellite video, and audiographic systems. Actual costs to install and operate each for a 5-year period were determined,…

  15. A new look at deep-sea video

    USGS Publications Warehouse

    Chezar, H.; Lee, J.

    1985-01-01

    A deep-towed photographic system with completely self-contained recording instrumentation and power can obtain color-video and still-photographic transects along rough terrain without the need for a long electrically conducting cable. Both the video and still-camera systems use relatively inexpensive, proven off-the-shelf hardware adapted for deep-water environments. The small instrument frame makes the towed sled an ideal photographic tool for ship or small-boat operations. The system includes a temperature probe and altimeter that relay data acoustically from the sled to the surface ship, enabling the operator to simultaneously monitor water temperature and precise height off the bottom. © 1985.

  16. Innovative Solution to Video Enhancement

    NASA Technical Reports Server (NTRS)

    2001-01-01

    Through a licensing agreement, Intergraph Government Solutions adapted a technology originally developed at NASA's Marshall Space Flight Center for enhanced video imaging by developing its Video Analyst(TM) System. Marshall's scientists developed the Video Image Stabilization and Registration (VISAR) technology to help FBI agents analyze video footage of the deadly 1996 Olympic Summer Games bombing in Atlanta, Georgia. VISAR technology enhanced nighttime videotapes made with hand-held camcorders, revealing important details about the explosion. Intergraph's Video Analyst System is a simple, effective, and affordable tool for video enhancement and analysis. The benefits associated with the Video Analyst System include support of full-resolution digital video, frame-by-frame analysis, and the ability to store analog video in digital format. Up to 12 hours of digital video can be stored and maintained for reliable footage analysis. The system also includes state-of-the-art features such as stabilization, image enhancement, and convolution to help improve the visibility of subjects in the video without altering underlying footage. Adaptable to many uses, Intergraph's Video Analyst System meets the stringent demands of the law enforcement industry in the areas of surveillance, crime scene footage, sting operations, and dash-mounted video cameras.
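    Frame-to-frame registration of the kind VISAR performs can be illustrated with phase correlation, a standard technique; the NumPy sketch below is a generic example, not the VISAR algorithm itself:

```python
import numpy as np

def estimate_shift(ref, frame):
    """Estimate the integer (dy, dx) translation such that
    np.roll(frame, (dy, dx), axis=(0, 1)) realigns `frame` with `ref`."""
    cross = np.fft.fft2(ref) * np.conj(np.fft.fft2(frame))
    cross /= np.abs(cross) + 1e-12          # keep phase only
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # peaks past the midpoint correspond to negative shifts
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return int(dy), int(dx)

rng = np.random.default_rng(0)
ref = rng.random((64, 64))
frame = np.roll(ref, shift=(3, -5), axis=(0, 1))  # simulated camera jitter
print(estimate_shift(ref, frame))  # (-3, 5): the shift that undoes the jitter
```

    A stabilizer applies the estimated shift to each frame before accumulating or averaging, which is what makes dim, jittery camcorder footage legible.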

  17. Test Operations Procedure (TOP) 02-2-546 Teleoperated Unmanned Ground Vehicle (UGV) Latency Measurements

    DTIC Science & Technology

    2017-01-11

    ... discrete system components or measurements of latency in autonomous systems. Subject terms: Unmanned Ground Vehicles, Basic Video Latency, End-to... 1.1 Basic Video Latency. Teleoperation latency, or lag, describes ...
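    A basic end-to-end video latency measurement of the kind this TOP addresses reduces to comparing capture and display timestamps frame by frame; the sketch below assumes synchronized clocks at both ends and uses invented numbers:

```python
import statistics

def latency_stats(captured_ts, displayed_ts):
    """Per-frame end-to-end latency (display time minus capture time),
    returned as (min, mean, max) in seconds."""
    lat = [d - c for c, d in zip(captured_ts, displayed_ts)]
    return min(lat), statistics.mean(lat), max(lat)

captured = [0.000, 0.033, 0.066, 0.100]   # 30 fps capture clock
displayed = [0.180, 0.215, 0.250, 0.279]  # arrival at the operator display
print(latency_stats(captured, displayed))
```

    In practice the clocks are synchronized externally (e.g. GPS time) or the latency is measured optically, camera-to-screen, to avoid clock skew.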

  18. 47 CFR 76.1500 - Definitions.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... consisting of a set of transmission paths and associated signal generation, reception, and control equipment that is designed to provide cable service which includes video programming and which is provided to... complies with this part. (b) Open video system operator (operator). Any person or group of persons who...

  19. 47 CFR 76.1500 - Definitions.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... consisting of a set of transmission paths and associated signal generation, reception, and control equipment that is designed to provide cable service which includes video programming and which is provided to... complies with this part. (b) Open video system operator (operator). Any person or group of persons who...

  20. 47 CFR 76.1500 - Definitions.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... consisting of a set of transmission paths and associated signal generation, reception, and control equipment that is designed to provide cable service which includes video programming and which is provided to... complies with this part. (b) Open video system operator (operator). Any person or group of persons who...

  1. 47 CFR 0.111 - Functions of the Bureau.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... opportunity matters involving broadcasters, cable operators and other multichannel video programming... section 224 of the Communications Act. (13) Resolve complaints regarding multichannel video and cable... devices); subpart Q (regulation of carriage agreements); subpart S (Open Video Systems); and subparts T, U...

  2. 47 CFR 0.111 - Functions of the Bureau.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... opportunity matters involving broadcasters, cable operators and other multichannel video programming... section 224 of the Communications Act. (13) Resolve complaints regarding multichannel video and cable... devices); subpart Q (regulation of carriage agreements); subpart S (Open Video Systems); and subparts T, U...

  3. First results on video meteors from Crete, Greece

    NASA Astrophysics Data System (ADS)

    Maravelias, G.

    2012-01-01

    This work presents the first systematic video meteor observations from a forthcoming permanent station in Crete, Greece, operating as the first official node within the International Meteor Organization's Video Network. It consists of a Watec 902 H2 Ultimate camera equipped with a Panasonic WV-LA1208 lens (focal length 12 mm, f/0.8) running MetRec. The system operated for 42 nights during 2011 (August 19-December 30, 2011), recording 1905 meteors. It significantly outperforms a previous system used by the author during the Perseids 2010 (DMK 21AF04.AS camera by The Imaging Source, CCTV lens of focal length 2.8 mm, UFO Capture v2.22), which operated for 17 nights (August 4-22, 2010), recording 32 meteors. Differences, in the author's experience, between the two software packages (MetRec, UFO Capture) are discussed, along with a small guide to video meteor hardware.

  4. A teledentistry system for the second opinion.

    PubMed

    Gambino, Orazio; Lima, Fausto; Pirrone, Roberto; Ardizzone, Edoardo; Campisi, Giuseppina; di Fede, Olga

    2014-01-01

    In this paper we present a teledentistry system aimed at the second-opinion task. It makes use of a particular camera, called an intra-oral (or dental) camera, to capture photos and real-time video of the inner part of the mouth. The pictures acquired by the Operator with such a device are sent to the Oral Medicine Expert (OME) by means of a standard File Transfer Protocol (FTP) service, and the real-time video is channeled into a video stream with the VideoLan client/server (VLC) application. The system is composed of HTML5 web pages generated by PHP and allows the second opinion to be performed both when the Operator and the OME are logged in and when one of them is offline.
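    The store-and-forward leg of this workflow (photos sent to the OME over FTP) might look like the sketch below, using Python's standard ftplib; the host, credentials, and directory layout are invented for illustration:

```python
from ftplib import FTP
from pathlib import Path

def send_images_to_expert(host, user, password, image_paths, remote_dir="incoming"):
    """Upload intra-oral photos to the Oral Medicine Expert's FTP drop box."""
    with FTP(host) as ftp:
        ftp.login(user=user, passwd=password)
        ftp.cwd(remote_dir)
        for path in image_paths:
            with open(path, "rb") as fh:
                # store each file under its base name on the server
                ftp.storbinary(f"STOR {Path(path).name}", fh)

# Hypothetical usage:
# send_images_to_expert("ftp.example.org", "operator", "secret",
#                       ["mouth_01.jpg", "mouth_02.jpg"])
```

    Real-time video travels separately over the VLC stream; FTP here carries only the still images.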

  5. Advanced visualization platform for surgical operating room coordination: distributed video board system.

    PubMed

    Hu, Peter F; Xiao, Yan; Ho, Danny; Mackenzie, Colin F; Hu, Hao; Voigt, Roger; Martz, Douglas

    2006-06-01

    One of the major challenges for day-of-surgery operating room coordination is accurate and timely situation awareness. Distributed and secure real-time status information is key to addressing these challenges. This article reports on the design and implementation of a passive status monitoring system in a 19-room surgical suite of a major academic medical center. Key design requirements considered included integrated real-time operating room status display, access control, security, and network impact. The system used live operating room video images and patient vital signs obtained through monitors to automatically update events and operating room status. Images were presented on a "need-to-know" basis, and access was controlled by identification badge authorization. The system delivered reliable real-time operating room images and status with acceptable network impact. Operating room status was visualized at 4 separate locations and was used continuously by clinicians and operating room service providers to coordinate operating room activities.

  6. Innovative Video Diagnostic Equipment for Material Science

    NASA Technical Reports Server (NTRS)

    Capuano, G.; Titomanlio, D.; Soellner, W.; Seidel, A.

    2012-01-01

    Materials science experiments under microgravity increasingly rely on advanced optical systems to determine the physical properties of the samples under investigation. This includes video systems with high spatial and temporal resolution. The acquisition, handling, storage and transmission to ground of the resulting video data are very challenging. Since the available downlink data rate is limited, the capability to compress the video data significantly without compromising the data quality is essential. We report on the development of a Digital Video System (DVS) for EML (Electro Magnetic Levitator) which provides real-time video acquisition, high compression using advanced Wavelet algorithms, storage and transmission of a continuous flow of video with different characteristics in terms of image dimensions and frame rates. The DVS is able to operate with the latest generation of high-performance cameras, acquiring high-resolution video images up to 4 Mpixels at 60 fps, or high-frame-rate video images up to about 1000 fps at 512x512 pixels.
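    Wavelet compression of the kind the DVS relies on starts by transforming each frame into subbands so that most of the energy concentrates in a few coefficients; the one-level 2-D Haar transform below is a toy illustration, not the flight algorithm:

```python
import numpy as np

def haar2d(img):
    """One level of a 2-D Haar transform: returns the low-pass band (LL)
    and three detail bands (LH, HL, HH), each half-size in both axes."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # vertical average
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # vertical detail
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

img = np.arange(16, dtype=float).reshape(4, 4)  # a smooth ramp "image"
ll, lh, hl, hh = haar2d(img)
print(ll)  # the 2x2 low-pass band carries almost all of the signal
```

    For a smooth frame the detail bands are small and compress well; repeating the transform on LL yields the multi-level decomposition typical of wavelet codecs.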

  7. Multi-Target Camera Tracking, Hand-off and Display LDRD 158819 Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, Robert J.

    2014-10-01

    Modern security control rooms gather video and sensor feeds from tens to hundreds of cameras. Advanced camera analytics can detect motion from individual video streams and convert unexpected motion into alarms, but the interpretation of these alarms depends heavily upon human operators. Unfortunately, these operators can be overwhelmed when a large number of events happen simultaneously, or lulled into complacency by frequent false alarms. This LDRD project has focused on improving video surveillance-based security systems by changing the fundamental focus from the cameras to the targets being tracked. If properly integrated, more cameras shouldn't lead to more alarms, more monitors, more operators, and increased response latency, but instead to better information and more rapid response times. Over the course of the LDRD we have been developing algorithms that take live video imagery from multiple video cameras, identify individual moving targets from the background imagery, and then display the results in a single 3D interactive video. In this document we summarize the work in developing this multi-camera, multi-target system, including lessons learned, tools developed, technologies explored, and a description of current capability.
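    The first step described above, separating individual moving targets from background imagery, can be sketched with simple background differencing; the project's actual analytics are more sophisticated, so treat this as a minimal stand-in:

```python
import numpy as np

def moving_mask(frame, background, thresh=25):
    """True where a pixel departs from the background model by more
    than `thresh` gray levels, i.e. candidate moving-target pixels."""
    return np.abs(frame.astype(int) - background.astype(int)) > thresh

background = np.full((4, 4), 100, dtype=np.uint8)  # static scene model
frame = background.copy()
frame[1:3, 1:3] = 180                              # a small moving "target"
mask = moving_mask(frame, background)
print(mask.sum())  # 4 pixels flagged
```

    Connected pixels in the mask become target blobs, which can then be tracked across frames and handed off between cameras.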

  9. Operationally Efficient Propulsion System Study (OEPSS): OEPSS Video Script

    NASA Technical Reports Server (NTRS)

    Wong, George S.; Waldrop, Glen S.; Trent, Donnie (Editor)

    1992-01-01

    The OEPSS video film, along with the OEPSS Databooks, provides a database of current launch experience that will be useful for the design of future expendable and reusable launch systems. The focus is on the launch processing of propulsion systems. A brief 15-minute overview of the OEPSS study results is found at the beginning of the film. The remainder of the film discusses in more detail: current ground operations at the Kennedy Space Center; typical operations issues and problems; critical operations technologies; and the efficiency of booster and space propulsion systems. The impact of system architecture on the launch site and its facility infrastructure is emphasized. Finally, a particularly valuable analytical tool, developed during the OEPSS study, is described that will provide for the first time a quantitative measure of operations efficiency for a propulsion system.

  10. Video conferencing made easy

    NASA Astrophysics Data System (ADS)

    Larsen, D. G.; Schwieder, P. R.

    Network video conferencing is advancing rapidly throughout the nation, and the Idaho National Engineering Laboratory (INEL), a Department of Energy (DOE) facility, is at the forefront of the development. Engineers at INEL/EG&G designed and installed a unique DOE video conferencing system offering many outstanding features, including true multipoint conferencing, user-friendly design and operation with no full-time operators required, and the potential for cost-effective expansion of the system. One area where INEL/EG&G engineers made a significant contribution to video conferencing was in the development of effective, user-friendly, end-station-driven scheduling software. A PC at each user site is used to schedule conferences via a Windows package. This software interface provides information to the users concerning conference availability, scheduling, initiation, and termination. The menus are mouse controlled. Once a conference is scheduled, a workstation at the hub monitors the network to initiate all scheduled conferences. No active operator participation is required once a user schedules a conference through the local PC; the workstation automatically initiates and terminates the conference as scheduled. As each conference is scheduled, hard-copy notification is also printed at each participating site. Video conferencing is the wave of the future. The use of these user-friendly systems will save millions in lost productivity and travel costs throughout the nation. The ease of operation and conference scheduling will play a key role in the extent to which industry uses this new technology. INEL/EG&G has developed a prototype scheduling system for both commercial and federal government use.
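    The scheduler's core job, rejecting double-booked conferences, reduces to an interval-overlap check; the data model below is invented for illustration:

```python
def conflicts(booked, start, end):
    """Return booked (start, end) slots that overlap a requested
    half-open [start, end) slot. Times are minutes since midnight."""
    return [slot for slot in booked if not (end <= slot[0] or start >= slot[1])]

booked = [(540, 600), (660, 720)]   # 9:00-10:00 and 11:00-12:00
print(conflicts(booked, 590, 650))  # [(540, 600)]: clashes with 9:00-10:00
print(conflicts(booked, 600, 660))  # []: fits exactly between the two
```

    The hub workstation can apply the same check against every site's calendar before confirming a multipoint conference.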

  11. Affordable multisensor digital video architecture for 360° situational awareness displays

    NASA Astrophysics Data System (ADS)

    Scheiner, Steven P.; Khan, Dina A.; Marecki, Alexander L.; Berman, David A.; Carberry, Dana

    2011-06-01

    One of the major challenges facing today's military ground combat vehicle operations is the ability to achieve and maintain full-spectrum situational awareness while under armor (i.e., closed hatch). Thus, the ability to perform basic tasks such as driving, maintaining local situational awareness, surveillance, and targeting will require a high-density array of real-time information to be processed, distributed, and presented to the vehicle operators and crew in near real time (i.e., low latency). Advances in display and sensor technologies are providing never-before-seen opportunities to supply large amounts of high-fidelity imagery and video to the vehicle operators and crew in real time. To fully realize the advantages of these emerging display and sensor technologies, an underlying digital architecture must be developed that is capable of processing these large amounts of video and data from separate sensor systems and distributing it simultaneously within the vehicle to multiple vehicle operators and crew. This paper will examine the systems and software engineering efforts required to overcome these challenges and will address development of an affordable, integrated digital video architecture. The approaches evaluated will give both current and future ground combat vehicle systems the flexibility to readily adopt emerging display and sensor technologies, while optimizing the Warfighter Machine Interface (WMI), minimizing lifecycle costs, and improving the survivability of the vehicle crew working in closed-hatch systems during complex ground combat operations.

  12. Video Modeling and Prompting: A Comparison of Two Strategies for Teaching Cooking Skills to Students with Mild Intellectual Disabilities

    ERIC Educational Resources Information Center

    Taber-Doughty, Teresa; Bouck, Emily C.; Tom, Kinsey; Jasper, Andrea D.; Flanagan, Sara M.; Bassette, Laura

    2011-01-01

    Self-operated video prompting and video modeling were compared when used by three secondary students with mild intellectual disabilities as they completed novel recipes during cooking activities. Alternating between video systems, students completed twelve recipes within their classroom kitchen. An alternating treatment design with a follow-up and…

  13. Paving the Way for Greener Highways : Extending Concrete's Service Life Through Multiscale Crack Control

    DOT National Transportation Integrated Search

    2013-10-21

    Today many intersections are operated based on data input from nonintrusive video detection systems. With those systems the video detectors can be easily deployed/modified for different application requirements. This research project is initiated to ...

  14. Head Mounted Alerting for Urban Operations via Tactical Information Management System

    DTIC Science & Technology

    2006-03-01

    Excerpts (contents): MOUT Area Based Experiments; Video Game Based Experiments ... associated with the video game task; Figure 20: The learning rate for truth sets defined...; Table 6: Results of experiments from Breakthrough Mission for our Video Game Configuration

  15. ASTP video tape recorder ground support equipment (audio/CTE splitter/interleaver). Operations manual

    NASA Technical Reports Server (NTRS)

    1974-01-01

    A descriptive handbook for the audio/CTE splitter/interleaver (RCA part No. 8673734-502) was presented. This unit is designed to perform two major functions: extract audio and time data from an interleaved video/audio signal (splitter section), and provide a test interleaved video/audio/CTE signal for the system (interleaver section). It is a rack mounting unit 7 inches high, 19 inches wide, 20 inches deep, mounted on slides for retracting from the rack, and weighs approximately 40 pounds. The following information is provided: installation, operation, principles of operation, maintenance, schematics and parts lists.

  16. Web-video-mining-supported workflow modeling for laparoscopic surgeries.

    PubMed

    Liu, Rui; Zhang, Xiaoli; Zhang, Hao

    2016-11-01

    As quality assurance is of strong concern in advanced surgeries, intelligent surgical systems are expected to have knowledge, such as the surgical workflow model (SWM), to support intuitive cooperation with surgeons. Generating a robust and reliable SWM requires a large amount of training data. However, training data collected by physically recording surgical operations is often limited, and data collection is time-consuming and labor-intensive, severely limiting the knowledge scalability of surgical systems. The objective of this research is to solve the knowledge-scalability problem in surgical workflow modeling in a low-cost, labor-efficient way. A novel web-video-mining-supported surgical workflow modeling (webSWM) method is developed. A novel video quality analysis method based on topic analysis and sentiment analysis techniques selects high-quality videos from abundant and noisy web videos; a statistical learning method then builds the workflow model from the selected videos. To test the effectiveness of the webSWM method, 250 web videos were mined to generate a surgical workflow for robotic cholecystectomy. The generated workflow was evaluated against 4 web-retrieved videos and 4 operating-room-recorded videos. The evaluation results (video selection consistency n-index ≥0.60; surgical workflow matching degree ≥0.84) demonstrated the effectiveness of the webSWM method in generating robust and reliable SWM knowledge by mining web videos. With the webSWM method, abundant web videos were selected and a reliable SWM was modeled in a short time at low labor cost. Satisfactory performance in mining web videos and learning surgery-related knowledge shows that the webSWM method is promising for scaling knowledge for intelligent surgical systems. Copyright © 2016 Elsevier B.V. All rights reserved.

  17. Evaluation of Human Research Facility Ultrasound With the ISS Video System

    NASA Technical Reports Server (NTRS)

    Melton, Shannon; Sargsyan, Ashot

    2003-01-01

    Most medical equipment on the International Space Station (ISS) is manifested as part of the U.S. or the Russian medical hardware systems. However, certain medical hardware is also available as part of the Human Research Facility (HRF). The HRF and the JSC Medical Operations Branch established a Memorandum of Agreement for joint use of certain medical hardware, including the HRF ultrasound system, the only diagnostic imaging device currently manifested to fly on ISS. The outcome of a medical contingency may be changed drastically, or an unnecessary evacuation may be prevented, if clinical decisions are supported by timely and objective diagnostic information. In many higher-probability medical scenarios, diagnostic ultrasound is a first-choice modality or provides significant diagnostic information. Accordingly, the Clinical Care Capability Development Project is evaluating the HRF ultrasound system for its utility in relevant clinical situations on board ISS. For effective management of these ultrasound-supported ISS medical scenarios, the resulting data should be available for viewing and interpretation on the ground, and bidirectional voice communication should be readily available to allow ground experts (sonographers, physicians) to provide guidance to the Crew Medical Officer (CMO). It may also be vitally important to have the capability of real-time guidance via video uplink to the CMO-operator during an exam to facilitate a timely diagnosis. In this document, we strove to verify that the HRF ultrasound video output is compatible with the ISS video system, identify ISS video system field rates and resolutions that are acceptable for varying clinical scenarios, and evaluate the HRF ultrasound video with a commercial off-the-shelf video converter, comparing it with the ISS video system.

  18. Real-time synchronization of kinematic and video data for the comprehensive assessment of surgical skills.

    PubMed

    Dosis, Aristotelis; Bello, Fernando; Moorthy, Krishna; Munz, Yaron; Gillies, Duncan; Darzi, Ara

    2004-01-01

    Surgical dexterity in operating theatres has traditionally been assessed subjectively. Electromagnetic (EM) motion tracking systems such as the Imperial College Surgical Assessment Device (ICSAD) have been shown to produce valid and accurate objective measures of surgical skill. To allow for video integration we have modified the data acquisition and built it within the ROVIMAS analysis software. We then used ActiveX 9.0 DirectShow video capturing and the system clock as a time stamp for the synchronized concurrent acquisition of kinematic data and video frames. Interactive video/motion data browsing was implemented to allow the user to concentrate on frames exhibiting certain kinematic properties that could result in operative errors. We exploited video-data synchronization to calculate the camera visual hull by identifying all 3D vertices using the ICSAD electromagnetic sensors. We also concentrated on high velocity peaks as a means of identifying potential erroneous movements to be confirmed by studying the corresponding video frames. The outcome of the study clearly shows that the kinematic data are precisely synchronized with the video frames and that the velocity peaks correspond to large and sudden excursions of the instrument tip. We validated the camera visual hull by both video and geometrical kinematic analysis and we observed that graphs containing fewer sudden velocity peaks are less likely to have erroneous movements. This work presented further developments to the well-established ICSAD dexterity analysis system. Synchronized real-time motion and video acquisition provides a comprehensive assessment solution by combining quantitative motion analysis tools and qualitative targeted video scoring.
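    Flagging high-velocity peaks in tracked tip positions for targeted video review can be sketched as finite-difference speed estimation; the threshold, units, and data below are illustrative, not ICSAD values:

```python
import numpy as np

def velocity_peaks(t, pos, thresh):
    """Speed of the instrument tip between samples (finite differences);
    returns the speeds and the indices where they exceed `thresh`,
    which identify video frames worth reviewing."""
    speed = np.linalg.norm(np.diff(pos, axis=0), axis=1) / np.diff(t)
    return speed, np.where(speed > thresh)[0]

t = np.array([0.0, 0.1, 0.2, 0.3, 0.4])               # seconds
pos = np.array([[0, 0, 0], [1, 0, 0], [2, 0, 0],
                [9, 0, 0], [10, 0, 0]], dtype=float)  # mm
speed, peaks = velocity_peaks(t, pos, thresh=50.0)
print(peaks)  # index 2: the sudden 7 mm excursion in 0.1 s
```

    Because motion samples and video frames share one clock, each flagged index maps directly to the frame captured at that time.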

  19. Development of a web-based video management and application processing system

    NASA Astrophysics Data System (ADS)

    Chan, Shermann S.; Wu, Yi; Li, Qing; Zhuang, Yueting

    2001-07-01

    How to facilitate efficient video manipulation and access in a web-based environment is becoming a popular trend for video applications. In this paper, we present a web-oriented video management and application processing system, based on our previous work on multimedia databases and content-based retrieval. In particular, we extend the VideoMAP architecture with specific web-oriented mechanisms, which include: (1) concurrency control facilities for the editing of video data among different types of users, such as Video Administrator, Video Producer, Video Editor, and Video Query Client; different users are assigned various priority levels for different operations on the database. (2) A versatile video retrieval mechanism which employs a hybrid approach by integrating a query-based (database) mechanism with content-based retrieval (CBR) functions; its specific language (CAROL/ST with CBR) supports spatio-temporal semantics of video objects, and also offers an improved mechanism to describe the visual content of videos by content-based analysis. (3) A query profiling database which records the 'histories' of various clients' query activities; such profiles can be used to provide a default query template when a similar query is encountered by the same kind of user. An experimental prototype system is being developed based on the existing VideoMAP prototype system, using Java and VC++ on the PC platform.

  20. 47 CFR 76.7 - General special relief, waiver, enforcement, complaint, show cause, forfeiture, and declaratory...

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) BROADCAST RADIO SERVICES MULTICHANNEL VIDEO AND CABLE... interested party, cable television system operator, a multichannel video programming distributor, local...

  1. 47 CFR 76.7 - General special relief, waiver, enforcement, complaint, show cause, forfeiture, and declaratory...

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) BROADCAST RADIO SERVICES MULTICHANNEL VIDEO AND CABLE... interested party, cable television system operator, a multichannel video programming distributor, local...

  2. 47 CFR 76.7 - General special relief, waiver, enforcement, complaint, show cause, forfeiture, and declaratory...

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) BROADCAST RADIO SERVICES MULTICHANNEL VIDEO AND CABLE... interested party, cable television system operator, a multichannel video programming distributor, local...

  3. 47 CFR 76.7 - General special relief, waiver, enforcement, complaint, show cause, forfeiture, and declaratory...

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) BROADCAST RADIO SERVICES MULTICHANNEL VIDEO AND CABLE... interested party, cable television system operator, a multichannel video programming distributor, local...

  4. 47 CFR 76.7 - General special relief, waiver, enforcement, complaint, show cause, forfeiture, and declaratory...

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) BROADCAST RADIO SERVICES MULTICHANNEL VIDEO AND CABLE... interested party, cable television system operator, a multichannel video programming distributor, local...

  5. Free Space Optical Communication in the Military Environment

    DTIC Science & Technology

    2014-09-01

    Communications Commission FDA Food and Drug Administration FMV Full Motion Video FOB Forward Operating Base FOENEX Free-Space Optical Experimental Network...from radio and voice to chat message and email. Data-rich multimedia content, such as high-definition pictures, video chat, video files, and...introduction of full-motion video (FMV) via numerous different Intelligence Surveillance and Reconnaissance (ISR) systems, such as targeting pods on

  6. The Successful Development of an Automated Rendezvous and Capture (AR&C) System for the National Aeronautics and Space Administration

    NASA Technical Reports Server (NTRS)

    Roe, Fred D.; Howard, Richard T.

    2003-01-01

    During the 1990's, the Marshall Space Flight Center (MSFC) conducted pioneering research in the development of an automated rendezvous and capture/docking (AR&C) system for U.S. space vehicles. Development and demonstration of a rendezvous sensor was identified early in the AR&C Program as the critical enabling technology that allows automated proximity operations and docking. A first-generation rendezvous sensor, the Video Guidance Sensor (VGS), was developed and successfully flown on STS-87 and STS-95, proving the concept of a video-based sensor. A ground demonstration of the entire system and software was successfully tested. Advances in both video and signal processing technologies and the lessons learned from the two successful flight experiments provided a baseline for the development, by MSFC, of a new generation of video-based rendezvous sensor. The Advanced Video Guidance Sensor (AGS) has greatly increased performance and additional capability for longer-range operation, with a new target designed as a direct replacement for existing ISS hemispherical reflectors.

  7. Video bandwidth compression system

    NASA Astrophysics Data System (ADS)

    Ludington, D.

    1980-08-01

    The objective of this program was the development of a Video Bandwidth Compression brassboard model for use by the Air Force Avionics Laboratory, Wright-Patterson Air Force Base, in evaluation of bandwidth compression techniques for use in tactical weapons and to aid in the selection of particular operational modes to be implemented in an advanced flyable model. The bandwidth compression system is partitioned into two major divisions: the encoder, which processes the input video with a compression algorithm and transmits the most significant information; and the decoder where the compressed data is reconstructed into a video image for display.
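    The encoder's "transmit the most significant information" step can be illustrated by keeping only the largest-magnitude transform coefficients and dropping the rest; this is a generic sketch of the idea, not the brassboard's algorithm:

```python
import numpy as np

def keep_most_significant(coeffs, k):
    """Zero all but the k largest-magnitude coefficients so only the
    most significant values need to be transmitted."""
    flat = coeffs.flatten()                   # flatten() copies, so the
    drop = np.argsort(np.abs(flat))[:-k]      # input array is untouched
    flat[drop] = 0.0
    return flat.reshape(coeffs.shape)

coeffs = np.array([[9.0, 0.2], [-4.0, 1.1]])
print(keep_most_significant(coeffs, 2))  # only 9.0 and -4.0 survive
```

    The decoder then inverse-transforms the sparse coefficient set to reconstruct a displayable image, trading fidelity for bandwidth.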

  8. [Assessment of learning activities using streaming video for laboratory practice education: aiming for development of E-learning system that promotes self-learning].

    PubMed

    Takeda, Naohito; Takeuchi, Isao; Haruna, Mitsumasa

    2007-12-01

    In order to develop an e-learning system that promotes self-learning, lectures and basic operations in the laboratory practice of chemistry were recorded and edited on DVD media, consisting of 8 streaming videos as learning materials. Twenty-six students wanted to watch the DVD and answered the following question after they had watched it: "Do you think the video would serve to encourage you to study independently in the laboratory practice?" Almost all students (95%) approved of its usefulness, and more than 60% of them watched the videos repeatedly in order to acquire deeper knowledge and skill in the experimental operations. More than 60% answered that the demonstration experiment should be continued in the laboratory practice, in spite of the distribution of the DVD media.

  9. 47 CFR 76.1200 - Definitions.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... CABLE TELEVISION SERVICE Competitive Availability of Navigation Devices § 76.1200 Definitions. As used... open video system as defined by § 76.1500(a). Such systems include, but are not limited to, cable...) Multichannel video programming distributor. A person such as, but not limited to, a cable operator, a BRS/EBS...

  10. 47 CFR 76.1200 - Definitions.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... CABLE TELEVISION SERVICE Competitive Availability of Navigation Devices § 76.1200 Definitions. As used... open video system as defined by § 76.1500(a). Such systems include, but are not limited to, cable...) Multichannel video programming distributor. A person such as, but not limited to, a cable operator, a BRS/EBS...

  11. 47 CFR 76.1200 - Definitions.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... CABLE TELEVISION SERVICE Competitive Availability of Navigation Devices § 76.1200 Definitions. As used... open video system as defined by § 76.1500(a). Such systems include, but are not limited to, cable...) Multichannel video programming distributor. A person such as, but not limited to, a cable operator, a BRS/EBS...

  12. Distance Learning as a Training and Education Tool.

    ERIC Educational Resources Information Center

    Hosley, David L.; Randolph, Sherry L.

    Lockheed Space Operations Company's Technical Training Department provides certification classes to personnel at other National Aeronautics and Space Administration (NASA) Centers. Courses are delivered over the Kennedy Space Center's Video Teleconferencing System (ViTS). The ViTS system uses two-way compressed video and two-way audio between…

  13. The California All-sky Meteor Surveillance (CAMS) System

    NASA Astrophysics Data System (ADS)

    Gural, P. S.

    2011-01-01

    A unique next generation multi-camera, multi-site video meteor system is being developed and deployed in California to provide high accuracy orbits of simultaneously captured meteors. Included herein is a description of the goals, concept of operations, hardware, and software development progress. An appendix contains a meteor camera performance trade study made for video systems circa 2010.

  14. Low Cost Efficient Deliverying Video Surveillance Service to Moving Guard for Smart Home.

    PubMed

    Gualotuña, Tatiana; Macías, Elsa; Suárez, Álvaro; C, Efraín R Fonseca; Rivadeneira, Andrés

    2018-03-01

    Low-cost video surveillance systems are attractive for Smart Home applications (especially in emerging economies). Those systems use the flexibility of the Internet of Things to operate the video camera only when an intrusion is detected. We are the only ones who focus on the design of protocols based on intelligent agents to communicate the video of an intrusion in real time to the guards over wireless or mobile networks. The goal is to communicate, in real time, the video to guards who may be moving towards the smart home. However, this communication suffers from sporadic disruptions that hamper control and drastically reduce user satisfaction and the operability of the system. In a novel way, we have designed a generic software architecture based on design patterns that can be adapted to any hardware in a simple way. The deployed hardware is of very low economic cost; the software frameworks are free. In the experimental tests we have shown that it is possible to communicate intrusion notifications (by e-mail and by instant messaging) and the first video frames to the moving guard in less than 20 s. In addition, we automatically recovered the video frames lost in the disruptions in a way transparent to the user, supported vertical handover processes, and saved energy in the smartphone's battery. Most importantly, however, the people who have used the system reported high satisfaction with it.

  15. Low Cost Efficient Deliverying Video Surveillance Service to Moving Guard for Smart Home

    PubMed Central

    Gualotuña, Tatiana; Fonseca C., Efraín R.; Rivadeneira, Andrés

    2018-01-01

    Low-cost video surveillance systems are attractive for Smart Home applications (especially in emerging economies). Those systems use the flexibility of the Internet of Things to operate the video camera only when an intrusion is detected. We are the only ones who focus on the design of protocols based on intelligent agents to communicate the video of an intrusion in real time to the guards over wireless or mobile networks. The goal is to communicate, in real time, the video to guards who may be moving towards the smart home. However, this communication suffers from sporadic disruptions that hamper control and drastically reduce user satisfaction and the operability of the system. In a novel way, we have designed a generic software architecture based on design patterns that can be adapted to any hardware in a simple way. The deployed hardware is of very low economic cost; the software frameworks are free. In the experimental tests we have shown that it is possible to communicate intrusion notifications (by e-mail and by instant messaging) and the first video frames to the moving guard in less than 20 s. In addition, we automatically recovered the video frames lost in the disruptions in a way transparent to the user, supported vertical handover processes, and saved energy in the smartphone's battery. Most importantly, however, the people who have used the system reported high satisfaction with it. PMID:29494551

  16. Optoelectronic Sensor System for Guidance in Docking

    NASA Technical Reports Server (NTRS)

    Howard, Richard T.; Bryan, Thomas C.; Book, Michael L.; Jackson, John L.

    2004-01-01

    The Video Guidance Sensor (VGS) system is an optoelectronic sensor that provides automated guidance between two vehicles. In the original intended application, the two vehicles would be spacecraft docking together, but the basic principles of design and operation of the sensor are applicable to aircraft, robots, vehicles, or other objects that may be required to be aligned for docking, assembly, resupply, or precise separation. The system includes a sensor head containing a monochrome charge-coupled-device video camera and pulsed laser diodes mounted on the tracking vehicle, and passive reflective targets on the tracked vehicle. The lasers illuminate the targets, and the resulting video images of the targets are digitized. Then, from the positions of the digitized target images and known geometric relationships among the targets, the relative position and orientation of the vehicles are computed. As described thus far, the VGS system is based on the same principles as those of the system described in "Improved Video Sensor System for Guidance in Docking" (MFS-31150), NASA Tech Briefs, Vol. 21, No. 4 (April 1997), page 9a. However, the two systems differ in the details of design and operation. The VGS system is designed to operate with the target completely visible within a relative-azimuth range of ±10.5° and a relative-elevation range of ±8°. The VGS acquires and tracks the target within that field of view at any distance from 1.0 to 110 m and at any relative roll, pitch, and/or yaw angle within ±10°. The VGS produces sets of distance and relative-orientation data at a repetition rate of 5 Hz. The software of this system also accommodates the simultaneous operation of two sensors for redundancy.
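
    The abstract's core computation, recovering relative position from the image positions of targets with known geometry, can be illustrated with a minimal pinhole-camera sketch. The function name, focal length, and reflector spacing below are hypothetical; the real VGS solves a full six-degree-of-freedom pose from several reflectors.

```python
import numpy as np

def range_and_bearing(u1, u2, focal_px, target_sep_m):
    """Toy pinhole-model estimate: range from the image-plane separation
    of two reflectors with known physical spacing, and bearing from the
    midpoint's offset from the boresight (single image axis)."""
    pix_sep = abs(u2 - u1)
    rng_m = focal_px * target_sep_m / pix_sep
    bearing_rad = np.arctan2((u1 + u2) / 2.0, focal_px)
    return rng_m, bearing_rad

# Hypothetical numbers: 1000 px focal length, reflectors 0.3 m apart,
# imaged 30 px apart and centered on the boresight -> 10 m range, 0 rad.
r, b = range_and_bearing(-15.0, 15.0, focal_px=1000.0, target_sep_m=0.3)
```

    As the target recedes, the pixel separation shrinks, which is why the sensor's usable range is bounded by camera resolution.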

  17. Increased ISR operator capability utilizing a centralized 360° full motion video display

    NASA Astrophysics Data System (ADS)

    Andryc, K.; Chamberlain, J.; Eagleson, T.; Gottschalk, G.; Kowal, B.; Kuzdeba, P.; LaValley, D.; Myers, E.; Quinn, S.; Rose, M.; Rusiecki, B.

    2012-06-01

    In many situations, the difference between success and failure comes down to taking the right actions quickly. While the myriad electronic sensors available today can provide data quickly, they may overload the operator; only a contextualized, centralized display of information with an intuitive human interface can support the quick and effective decisions needed. If these decisions are to result in quick actions, then the operator must be able to understand all of the data describing the environment. In this paper we present a novel approach to contextualizing multi-sensor data on a full-motion-video, real-time 360-degree imaging display. The system described could function as a primary display system for command and control in security, military, and observation posts. It has the ability to process, and enable interactive control of, multiple other sensor systems. It enhances the value of these other sensors by overlaying their information on a panorama of the surroundings. It can also be used to interface to other systems, including auxiliary electro-optical systems, aerial video, contact management, Hostile Fire Indicators (HFI), and Remote Weapon Stations (RWS).

  18. Video Information Communication and Retrieval/Image Based Information System (VICAR/IBIS)

    NASA Technical Reports Server (NTRS)

    Wherry, D. B.

    1981-01-01

    The acquisition, operation, and planning stages of installing a VICAR/IBIS system are described. The system operates in an IBM mainframe environment, and provides image processing of raster data. System support problems with software and documentation are discussed.

  19. Very low cost real time histogram-based contrast enhancer utilizing fixed-point DSP processing

    NASA Astrophysics Data System (ADS)

    McCaffrey, Nathaniel J.; Pantuso, Francis P.

    1998-03-01

    A real-time contrast enhancement system utilizing histogram-based algorithms has been developed to operate on standard composite video signals. This low-cost DSP-based system is designed with fixed-point algorithms and an off-chip look-up table (LUT) to reduce the cost considerably over other contemporary approaches. This paper describes several real-time contrast enhancing systems advanced at the Sarnoff Corporation for high-speed visible and infrared cameras. The fixed-point enhancer was derived from these high performance cameras. The enhancer digitizes analog video and spatially subsamples the stream to qualify the scene's luminance. Simultaneously, the video is streamed through a LUT that has been programmed with the previous calculation. Reducing division operations by subsampling reduces calculation cycles and also allows the processor to be used with cameras of nominal resolutions. All values are written to the LUT during blanking so no frames are lost. The enhancer measures 13 cm × 6.4 cm × 3.2 cm, operates off 9 VAC, and consumes 12 W. This processor is small and inexpensive enough to be mounted with field-deployed security cameras and can be used for surveillance, video forensics, and real-time medical imaging.
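
    The processing chain described above, subsample the frame to estimate the luminance histogram and stream the video through a LUT programmed from the previous result, can be sketched with a generic histogram-equalization LUT. This is an assumed NumPy variant for illustration, not the Sarnoff fixed-point implementation; the subsample factor and equalization rule are stand-ins.

```python
import numpy as np

def build_equalization_lut(frame, subsample=4):
    """Build a 256-entry look-up table that equalizes the luminance
    histogram of an 8-bit grayscale frame. Subsampling mirrors the
    paper's trick of cutting the per-frame arithmetic."""
    samples = frame[::subsample, ::subsample]
    hist = np.bincount(samples.ravel(), minlength=256)
    cdf = np.cumsum(hist).astype(np.float64)
    cdf /= cdf[-1]                       # normalize to [0, 1]
    return np.round(cdf * 255).astype(np.uint8)

def enhance(frame, lut):
    # Streaming the video through the LUT is a single indexed lookup.
    return lut[frame]

# Low-contrast synthetic frame: values clustered in [100, 140].
rng = np.random.default_rng(0)
frame = rng.integers(100, 141, size=(240, 320), dtype=np.uint8)
out = enhance(frame, build_equalization_lut(frame))
```

    The lookup itself involves no per-pixel arithmetic, which is what lets the hardware update the LUT during vertical blanking without dropping frames.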

  20. Ranging Apparatus and Method Implementing Stereo Vision System

    NASA Technical Reports Server (NTRS)

    Li, Larry C. (Inventor); Cox, Brian J. (Inventor)

    1997-01-01

    A laser-directed ranging system for use in telerobotics applications and other applications involving physically handicapped individuals. The ranging system includes a left and right video camera mounted on a camera platform, and a remotely positioned operator. The position of the camera platform is controlled by three servo motors to orient the roll axis, pitch axis and yaw axis of the video cameras, based upon an operator input such as head motion. A laser is provided between the left and right video camera and is directed by the user to point to a target device. The images produced by the left and right video cameras are processed to eliminate all background images except for the spot created by the laser. This processing is performed by creating a digital image of the target prior to illumination by the laser, and then eliminating common pixels from the subsequent digital image which includes the laser spot. The horizontal disparity between the two processed images is calculated for use in a stereometric ranging analysis from which range is determined.
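
    The two processing steps the abstract describes, eliminating pixels common to the pre- and post-illumination images to isolate the laser spot, then converting horizontal disparity to range, can be sketched as follows. The focal length, baseline, and threshold are hypothetical; a real system would also calibrate and rectify the cameras.

```python
import numpy as np

def laser_spot(pre, post, threshold=50):
    """Isolate the laser spot by subtracting the image taken before
    laser illumination from the one taken with the laser on; pixels
    common to both frames (the background) cancel out."""
    diff = post.astype(np.int16) - pre.astype(np.int16)
    ys, xs = np.nonzero(diff > threshold)
    return xs.mean(), ys.mean()          # spot centroid, in pixels

def range_from_disparity(x_left, x_right, focal_px, baseline_m):
    """Standard pinhole stereo: range Z = f * B / d, where d is the
    horizontal disparity between the left and right spot positions."""
    return focal_px * baseline_m / (x_left - x_right)

# Synthetic frames with a single bright spot shifted 20 px between views.
pre = np.zeros((100, 100), dtype=np.uint8)
post_left = pre.copy();  post_left[50, 60] = 255
post_right = pre.copy(); post_right[50, 40] = 255
xl, _ = laser_spot(pre, post_left)
xr, _ = laser_spot(pre, post_right)
z = range_from_disparity(xl, xr, focal_px=800.0, baseline_m=0.12)  # 4.8 m
```

    Because only the laser spot survives the differencing, the correspondence problem that plagues general stereo matching disappears.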

  1. System on a chip with MPEG-4 capability

    NASA Astrophysics Data System (ADS)

    Yassa, Fathy; Schonfeld, Dan

    2002-12-01

    Current products supporting video communication applications rely on existing computer architectures. RISC processors have been used successfully in numerous applications over several decades. DSP processors have become ubiquitous in signal processing and communication applications. Real-time applications such as speech processing in cellular telephony rely extensively on the computational power of these processors. Video processors designed to implement the computationally intensive codec operations have also been used to address the high demands of video communication applications (e.g., cable set-top boxes and DVDs). This paper presents an overview of a system-on-chip (SOC) architecture used for real-time video in wireless communication applications. The SOC specifications address the system requirements imposed by the application environment. A CAM-based video processor is used to accelerate data-intensive video compression tasks such as motion estimation and filtering. Other components are dedicated to system-level data processing and audio processing. A rich set of I/Os allows the SOC to communicate with other system components such as baseband and memory subsystems.

  2. A hybrid thermal video and FTIR spectrometer system for rapidly locating and characterizing gas leaks

    NASA Astrophysics Data System (ADS)

    Williams, David J.; Wadsworth, Winthrop; Salvaggio, Carl; Messinger, David W.

    2006-08-01

    Undiscovered gas leaks, known as fugitive emissions, in chemical plants and refinery operations can impact regional air quality and present a loss of product for industry. Surveying a facility for potential gas leaks can be a daunting task. Industrial leak detection and repair programs can be expensive to administer. An efficient, accurate and cost effective method for detecting and quantifying gas leaks would both save industries money by identifying production losses and improve regional air quality. Specialized thermal video systems have proven effective in rapidly locating gas leaks. These systems, however, do not have the spectral resolution for compound identification. Passive FTIR spectrometers can be used for gas compound identification, but using these systems for facility surveys is problematic due to their small field of view. A hybrid approach has been developed that utilizes the thermal video system to locate gas plumes using real-time visualization of the leaks, coupled with the high-spectral-resolution FTIR spectrometer for compound identification and quantification. The prototype hybrid video/spectrometer system uses a Stirling-cooled thermal camera operating in the MWIR (3-5 μm) with an additional notch filter at around 3.4 μm, which allows for the visualization of gas compounds that absorb in this narrow spectral range, such as alkane hydrocarbons. This camera is positioned alongside a portable, high-speed passive FTIR spectrometer, which has a spectral range of 2-25 μm and operates at 4 cm⁻¹ resolution. This system uses a 10 cm telescope foreoptic with an onboard blackbody for calibration. The two units are optically aligned using a turning mirror on the spectrometer's telescope with the video camera's output.

  3. Interacting with target tracking algorithms in a gaze-enhanced motion video analysis system

    NASA Astrophysics Data System (ADS)

    Hild, Jutta; Krüger, Wolfgang; Heinze, Norbert; Peinsipp-Byma, Elisabeth; Beyerer, Jürgen

    2016-05-01

    Motion video analysis is a challenging task, particularly if real-time analysis is required. It is therefore an important issue how to provide suitable assistance for the human operator. Given that the use of customized video analysis systems is more and more established, one supporting measure is to provide system functions which perform subtasks of the analysis. Recent progress in the development of automated image exploitation algorithms allow, e.g., real-time moving target tracking. Another supporting measure is to provide a user interface which strives to reduce the perceptual, cognitive and motor load of the human operator for example by incorporating the operator's visual focus of attention. A gaze-enhanced user interface is able to help here. This work extends prior work on automated target recognition, segmentation, and tracking algorithms as well as about the benefits of a gaze-enhanced user interface for interaction with moving targets. We also propose a prototypical system design aiming to combine both the qualities of the human observer's perception and the automated algorithms in order to improve the overall performance of a real-time video analysis system. In this contribution, we address two novel issues analyzing gaze-based interaction with target tracking algorithms. The first issue extends the gaze-based triggering of a target tracking process, e.g., investigating how to best relaunch in the case of track loss. The second issue addresses the initialization of tracking algorithms without motion segmentation where the operator has to provide the system with the object's image region in order to start the tracking algorithm.

  4. A Novel Method for Real-Time Audio Recording With Intraoperative Video.

    PubMed

    Sugamoto, Yuji; Hamamoto, Yasuyoshi; Kimura, Masayuki; Fukunaga, Toru; Tasaki, Kentaro; Asai, Yo; Takeshita, Nobuyoshi; Maruyama, Tetsuro; Hosokawa, Takashi; Tamachi, Tomohide; Aoyama, Hiromichi; Matsubara, Hisahiro

    2015-01-01

    Although laparoscopic surgery has become widespread, effective and efficient education in laparoscopic surgery is difficult. Instructive laparoscopy videos with appropriate annotations are ideal for initial training in laparoscopic surgery; however, the method we use at our institution for creating laparoscopy videos with audio is not generalized, and there have been no detailed explanations of any such method. Our objectives were to demonstrate the feasibility of low-cost, simple methods for recording surgical videos with audio and to perform a preliminary safety evaluation when obtaining these recordings during operations. We devised a method for the synchronous recording of surgical video with real-time audio in which we connected an amplifier and a wireless microphone to an existing endoscopy system and its equipped video-recording device. We tested this system in 209 cases of laparoscopic surgery in operating rooms between August 2010 and July 2011, prospectively investigated the results of the audiovisual recording method, and examined intraoperative problems. The setting was Numazu City Hospital in Numazu city, Japan; the participants were surgeons, instrument nurses, and medical engineers. In all cases, the synchronous input of audio and video was possible. The recording system did not cause any inconvenience to the surgeon, assistants, instrument nurse, sterilized equipment, or electrical medical equipment. Statistically significant differences were not observed between the audiovisual group and the control group regarding the operating time, which had been divided into two slots, performed by the instructors or by trainees (p > 0.05). This recording method is feasible and considerably safe while posing minimal difficulty in terms of technology, time, and expense. We recommend this method both for surgical trainees who wish to acquire surgical skills effectively and for medical instructors who wish to teach surgical skills effectively.

  5. Reliable motion detection of small targets in video with low signal-to-clutter ratios

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nichols, S.A.; Naylor, R.B.

    1995-07-01

    Studies show that vigilance decreases rapidly after several minutes when human operators are required to search live video for infrequent intrusion detections. Therefore, there is a need for systems which can automatically detect targets in live video and reserve the operator's attention for assessment only. Thus far, automated systems have not simultaneously provided adequate detection sensitivity, false alarm suppression, and ease of setup when used in external, unconstrained environments. This unsatisfactory performance can be exacerbated by poor video imagery with low contrast, high noise, dynamic clutter, image misregistration, and/or the presence of small, slow, or erratically moving targets. This paper describes a highly adaptive video motion detection and tracking algorithm which has been developed as part of Sandia's Advanced Exterior Sensor (AES) program. The AES is a wide-area detection and assessment system for use in unconstrained exterior security applications. The AES detection and tracking algorithm provides good performance under stressing data and environmental conditions. Features of the algorithm include: reliable detection, with negligible false alarm rate, of variable-velocity targets having low signal-to-clutter ratios; reliable tracking of targets that exhibit motion that is non-inertial, i.e., varies in direction and velocity; automatic adaptation to both infrared and visible imagery of variable quality; and suppression of false alarms caused by sensor flaws and/or cutouts.
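
    The kind of adaptive detection the abstract alludes to can be illustrated, in greatly simplified form, by a running-average background model with a k-sigma deviation test. The actual AES algorithm is not disclosed in the abstract; the model, parameters, and synthetic scene below are illustrative only.

```python
import numpy as np

def update_background(bg, frame, alpha=0.05):
    """Exponential running average: a simple stand-in for an adaptive
    background model that tracks slow scene changes."""
    return (1 - alpha) * bg + alpha * frame

def detect_motion(bg, frame, k=4.0, noise_sigma=2.0):
    """Flag pixels whose deviation from the background exceeds k sigma,
    suppressing false alarms from sensor noise."""
    return np.abs(frame - bg) > k * noise_sigma

rng = np.random.default_rng(1)
bg = np.full((64, 64), 100.0)
frame = bg + rng.normal(0, 2.0, bg.shape)   # sensor noise everywhere
frame[30:34, 30:34] += 40                   # small low-contrast target
mask = detect_motion(bg, frame)
bg = update_background(bg, frame)           # adapt for the next frame
```

    Raising k trades missed detections for fewer noise-induced false alarms; an adaptive system would estimate noise_sigma per pixel rather than assume it.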

  6. High definition in minimally invasive surgery: a review of methods for recording, editing, and distributing video.

    PubMed

    Kelly, Christopher R; Hogle, Nancy J; Landman, Jaime; Fowler, Dennis L

    2008-09-01

    The use of high-definition cameras and monitors during minimally invasive procedures can provide the surgeon and operating team with more than twice the resolution of standard definition systems. Although this dramatic improvement in visualization offers numerous advantages, the adoption of high definition cameras in the operating room can be challenging because new recording equipment must be purchased, and several new technologies are required to edit and distribute video. The purpose of this review article is to provide an overview of the popular methods for recording, editing, and distributing high-definition video. This article discusses the essential technical concepts of high-definition video, reviews the different kinds of equipment and methods most often used for recording, and describes several options for video distribution.

  7. Analysis and Selection of a Remote Docking Simulation Visual Display System

    NASA Technical Reports Server (NTRS)

    Shields, N., Jr.; Fagg, M. F.

    1984-01-01

    The development of a remote docking simulation visual display system is examined. Video system and operator performance are discussed as well as operator command and control requirements and a design analysis of the reconfigurable work station.

  8. Intersection video detection field handbook : an update.

    DOT National Transportation Integrated Search

    2010-12-01

    This handbook is intended to assist engineers and technicians with the design, layout, and operation of a video imaging vehicle detection system (VIVDS). This assistance is provided in three ways. First, the handbook identifies the optimal detect...

  9. Non-Cooperative Facial Recognition Video Dataset Collection Plan

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kimura, Marcia L.; Erikson, Rebecca L.; Lombardo, Nicholas J.

    The Pacific Northwest National Laboratory (PNNL) will produce a non-cooperative (i.e., not posing for the camera) facial recognition video data set for research purposes to evaluate and enhance facial recognition systems technology. The aggregate data set consists of 1) videos capturing PNNL role players and public volunteers in three key operational settings, 2) photographs of the role players for enrolling in an evaluation database, and 3) ground truth data that documents when the role player is within various camera fields of view. PNNL will deliver the aggregate data set to DHS, who may then choose to make it available to other government agencies interested in evaluating and enhancing facial recognition systems. The three operational settings that will be the focus of the video collection effort include: 1) unidirectional crowd flow, 2) bi-directional crowd flow, and 3) linear and/or serpentine queues.

  10. Study of design and control of remote manipulators. Part 4: Experiments in video camera positioning with regard to remote manipulation

    NASA Technical Reports Server (NTRS)

    Mackro, J.

    1973-01-01

    The results are presented of a study involving closed-circuit television as the means of providing the necessary task-to-operator feedback for efficient performance of the remote manipulation system. Experiments were performed to determine the remote video configuration that will result in the best overall system. Two categories of tests were conducted: those which involved remote position (rate) control of just the video system, and those in which closed-circuit TV was used along with manipulation of the objects themselves.

  11. Orbiter CCTV video signal noise analysis

    NASA Technical Reports Server (NTRS)

    Lawton, R. M.; Blanke, L. R.; Pannett, R. F.

    1977-01-01

    The amount of steady state and transient noise which will couple to orbiter CCTV video signal wiring is predicted. The primary emphasis is on the interim system, however, some predictions are made concerning the operational system wiring in the cabin area. Noise sources considered are RF fields from on board transmitters, precipitation static, induced lightning currents, and induced noise from adjacent wiring. The most significant source is noise coupled to video circuits from associated circuits in common connectors. Video signal crosstalk is the primary cause of steady state interference, and mechanically switched control functions cause the largest induced transients.

  12. 47 CFR 76.1504 - Rates, terms and conditions for carriage on open video systems.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ....1504 Rates, terms and conditions for carriage on open video systems. (a) Reasonable rate principle. An... operator will bear the burden of proof to demonstrate, using the principles set forth below, that the...; (2) Packaging, including marketing and other fees; (3) Talent fees; and (4) A reasonable overhead...

  13. Multi-star processing and gyro filtering for the video inertial pointing system

    NASA Technical Reports Server (NTRS)

    Murphy, J. P.

    1976-01-01

    The video inertial pointing (VIP) system is being developed to satisfy the acquisition and pointing requirements of astronomical telescopes. The VIP system uses a single video sensor to provide star position information that can be used to generate three-axis pointing error signals (multi-star processing) and for input to a cathode ray tube (CRT) display of the star field. The pointing error signals are used to update the telescope's gyro stabilization system (gyro filtering). The CRT display facilitates target acquisition and positioning of the telescope by a remote operator. Linearized small angle equations are used for the multistar processing and a consideration of error performance and singularities lead to star pair location restrictions and equation selection criteria. A discrete steady-state Kalman filter which uses the integration of the gyros is developed and analyzed. The filter includes unit time delays representing asynchronous operations of the VIP microprocessor and video sensor. A digital simulation of a typical gyro stabilized gimbal is developed and used to validate the approach to the gyro filtering.
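
    The propagate-and-correct cycle described above, gyro integration updated by star-position error signals, can be reduced to a scalar sketch. The gain here is an assumed constant rather than the steady-state value the paper derives from the gyro and sensor noise statistics, and the single-axis model ignores the multi-star geometry.

```python
def vip_update(theta_est, gyro_rate, dt, star_meas, gain=0.2):
    """One filter cycle: propagate the attitude estimate with the
    integrated gyro rate, then correct toward the star-sensor
    measurement. With a constant gain this is the fixed-gain
    (steady-state) form of the discrete Kalman filter."""
    theta_pred = theta_est + gyro_rate * dt      # gyro integration
    return theta_pred + gain * (star_meas - theta_pred)

# A biased gyro (0.01 deg/s drift) corrected by a noiseless star sensor:
theta = 0.0
for _ in range(100):
    theta = vip_update(theta, gyro_rate=0.01, dt=0.2, star_meas=0.0)
# theta settles near (1 - gain) * bias_per_step / gain = 0.008 deg
```

    The residual offset shows why such filters often also estimate the gyro bias as a second state instead of a single attitude scalar.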

  14. A Real-Time Image Acquisition And Processing System For A RISC-Based Microcomputer

    NASA Astrophysics Data System (ADS)

    Luckman, Adrian J.; Allinson, Nigel M.

    1989-03-01

    A low-cost image acquisition and processing system has been developed for the Acorn Archimedes microcomputer. Using a Reduced Instruction Set Computer (RISC) architecture, the ARM (Acorn Risc Machine) processor provides instruction speeds suitable for image processing applications. The associated improvement in data transfer rate has allowed real-time video image acquisition without the need for frame-store memory external to the microcomputer. The system comprises real-time video digitising hardware which interfaces directly to the Archimedes memory, and software to provide an integrated image acquisition and processing environment. The hardware can digitise a video signal at up to 640 samples per video line with programmable parameters such as sampling rate and gain. Software support includes a work environment for image capture and processing with pixel, neighbourhood and global operators. A friendly user interface is provided with the help of the Archimedes Operating System WIMP (Windows, Icons, Mouse and Pointer) Manager. Windows provide a convenient way of handling images on the screen, and program control is directed mostly by pop-up menus.

  15. Remote video assessment for missile launch facilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wagner, G.G.; Stewart, W.A.

    1995-07-01

    The widely dispersed, unmanned launch facilities (LFs) for land-based ICBMs (intercontinental ballistic missiles) currently do not have visual assessment capability for existing intrusion alarms. The security response force currently must assess each alarm on-site. Remote assessment will enhance manpower, safety, and security efforts. Sandia National Laboratories was tasked by the USAF Electronic Systems Center to research, recommend, and demonstrate a cost-effective remote video assessment capability at missile LFs. The project's charter was to provide: system concepts; market survey analysis; technology search recommendations; and operational hardware demonstrations for remote video assessment from a missile LF to a remote security center via a cost-effective transmission medium and without using visible, on-site lighting. The technical challenges of this project were to: analyze various video transmission media, emphasizing use of the existing missile system copper line, which can be as long as 30 miles; accentuate an extremely low-cost system because of the many sites requiring system installation; integrate the video assessment system with the current LF alarm system; and provide video assessment at the remote sites with non-visible lighting.

  16. Video conferencing made easy

    NASA Technical Reports Server (NTRS)

    Larsen, D. Gail; Schwieder, Paul R.

    1993-01-01

    Network video conferencing is advancing rapidly throughout the nation, and the Idaho National Engineering Laboratory (INEL), a Department of Energy (DOE) facility, is at the forefront of the development. Engineers at INEL/EG&G designed and installed a unique DOE videoconferencing system offering many outstanding features, including true multipoint conferencing, user-friendly design and operation with no full-time operators required, and the potential for cost-effective expansion of the system. One area where INEL/EG&G engineers made a significant contribution to video conferencing was in the development of effective, user-friendly, end-station-driven scheduling software. A PC at each user site is used to schedule conferences via a windows package. This software interface provides information to the users concerning conference availability, scheduling, initiation, and termination. The menus are 'mouse' controlled. Once a conference is scheduled, a workstation at the hubs monitors the network to initiate all scheduled conferences. No active operator participation is required once a user schedules a conference through the local PC; the workstation automatically initiates and terminates the conference as scheduled. As each conference is scheduled, hard-copy notification is also printed at each participating site. Video conferencing is the wave of the future. The use of these user-friendly systems will save millions in lost productivity and travel costs throughout the nation. The ease of operation and conference scheduling will play a key role in the extent to which industry uses this new technology. INEL/EG&G has developed a prototype scheduling system for both commercial and federal government use.

  17. Video conferencing made easy

    NASA Astrophysics Data System (ADS)

    Larsen, D. Gail; Schwieder, Paul R.

    1993-02-01

    Network video conferencing is advancing rapidly throughout the nation, and the Idaho National Engineering Laboratory (INEL), a Department of Energy (DOE) facility, is at the forefront of the development. Engineers at INEL/EG&G designed and installed a unique DOE videoconferencing system offering many outstanding features, including true multipoint conferencing, user-friendly design and operation with no full-time operators required, and the potential for cost-effective expansion of the system. One area where INEL/EG&G engineers made a significant contribution to video conferencing was in the development of effective, user-friendly, end-station-driven scheduling software. A PC at each user site is used to schedule conferences via a windows package. This software interface provides information to the users concerning conference availability, scheduling, initiation, and termination. The menus are mouse-controlled. Once a conference is scheduled, a workstation at the hub monitors the network to initiate all scheduled conferences. No active operator participation is required once a user schedules a conference through the local PC; the workstation automatically initiates and terminates the conference as scheduled. As each conference is scheduled, hard-copy notification is also printed at each participating site. Video conferencing is the wave of the future. The use of these user-friendly systems will save millions in lost productivity and travel costs throughout the nation. The ease of operation and conference scheduling will play a key role in the extent to which industry uses this new technology. INEL/EG&G has developed a prototype scheduling system for both commercial and federal government use.

  18. Using GoPro to Give Video-Assisted Operative Feedback for Surgery Residents: A Feasibility and Utility Assessment.

    PubMed

    Moore, Maureen D; Abelson, Jonathan S; O'Mahoney, Paul; Bagautdinov, Iskander; Yeo, Heather; Watkins, Anthony C

    As an adjunct to simulation-based teaching, laparoscopic video-based surgical coaching has been an effective tool to augment surgical education. However, the wide use of video review in open surgery has been limited primarily due to technological and logistical challenges. The aims of our study were to (1) evaluate perceptions of general surgery (GS) residents on video-assisted operative instruction and (2) conduct a pilot study using a head-mounted GoPro in conjunction with the operative performance rating system to assess feasibility of providing video review to enhance operative feedback during open procedures. GS residents were anonymously surveyed to evaluate their perceptions of oral and written operative feedback and use of video-based operative resources. We then conducted a pilot study of 10 GS residents to assess the utility and feasibility of using a GoPro to record resident performance of an arteriovenous fistula creation with an attending surgeon. Categorical variables were analyzed using the chi-square test. Academic, tertiary medical center. GS residents and faculty. A total of 59 GS residents were anonymously surveyed (response rate = 65.5%). A total of 40% (n = 24) of residents reported that structured evaluations rarely or never provided meaningful feedback. When feedback was received, 55% (n = 32) of residents reported that it was only rarely or sometimes in regard to their operative skills. There was no significant difference in surveyed responses among junior (PGY 1-2), senior (PGY 3-4), or chief (PGY-5) residents. A total of 80% (n = 8) of residents found the use of GoPro video review very or extremely useful for education; they also deemed video review more useful for operative feedback than written or verbal feedback. 
An overwhelming majority (90%, n = 9) felt that video review would lead to improved technical skills, wanted to review the video with the attending surgeon for further feedback, and desired expansion of this tool to include additional procedures. Although there has been progress toward improving operative feedback, room for further improvement remains. The use of a head-mounted GoPro is a dynamic tool that provides high-quality video for operative review and has the potential to augment the training experience of GS residents. Future studies exploring a wide array of open procedures involving a greater number of trainees will be needed to further define the use of this resource. Copyright © 2017. Published by Elsevier Inc.

  19. Evaluation of the use of live aerial video for traffic management.

    DOT National Transportation Integrated Search

    1995-01-01

    This report describes the evaluation of an intelligent transportation system (ITS) demonstration project in which live aerial video of traffic conditions was captured by a rotary wing aircraft operated by the Fairfax County (Virginia) Police Departme...

  20. Intelligent vision system for autonomous vehicle operations

    NASA Technical Reports Server (NTRS)

    Scholl, Marija S.

    1991-01-01

    A complex optical system consisting of a 4f optical correlator with programmatic filters under the control of a digital on-board computer that operates at video rates for filter generation, storage, and management is described.

  1. Real-time unmanned aircraft systems surveillance video mosaicking using GPU

    NASA Astrophysics Data System (ADS)

    Camargo, Aldo; Anderson, Kyle; Wang, Yi; Schultz, Richard R.; Fevig, Ronald A.

    2010-04-01

    Digital video mosaicking from Unmanned Aircraft Systems (UAS) is being used for many military and civilian applications, including surveillance, target recognition, border protection, forest fire monitoring, traffic control on highways, and monitoring of transmission lines, among others. Additionally, NASA is using digital video mosaicking to explore the moon and planets such as Mars. In order to compute a "good" mosaic from video captured by a UAS, the algorithm must deal with motion blur, frame-to-frame jitter associated with an imperfectly stabilized platform, perspective changes as the camera tilts in flight, as well as a number of other factors. The most suitable algorithms use SIFT (Scale-Invariant Feature Transform) to detect the features consistent between video frames. Utilizing these features, the next step is to estimate the homography between two consecutive video frames, perform warping to properly register the image data, and finally blend the video frames, resulting in a seamless video mosaic. All this processing takes a great deal of resources from the CPU, so it is almost impossible to compute a real-time video mosaic on a single processor. Modern graphics processing units (GPUs) offer computational performance that far exceeds current CPU technology, allowing for real-time operation. This paper presents the development of a GPU-accelerated digital video mosaicking implementation and compares it with CPU performance. Our tests are based on two sets of real video captured by a small UAS aircraft: one from an infrared (IR) camera and one from an electro-optical (EO) camera. Our results show that we can obtain a speed-up of more than 50 times using GPU technology, so real-time operation at a video capture rate of 30 frames per second is feasible.
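The registration step the abstract describes hinges on applying the estimated homography to image coordinates. A minimal, dependency-free sketch of that mapping (the matrix here is a toy translation, not the output of a real SIFT pipeline):

```python
def apply_homography(H, x, y):
    """Map pixel (x, y) through a 3x3 homography H (row-major nested lists),
    including the perspective divide by the homogeneous coordinate w."""
    xh = H[0][0] * x + H[0][1] * y + H[0][2]
    yh = H[1][0] * x + H[1][1] * y + H[1][2]
    w  = H[2][0] * x + H[2][1] * y + H[2][2]
    return xh / w, yh / w

# Pure translation by (5, -3): a degenerate homography, handy as a sanity check.
H = [[1, 0, 5], [0, 1, -3], [0, 0, 1]]
print(apply_homography(H, 10, 10))  # (15.0, 7.0)
```

Warping a whole frame applies this mapping per pixel, which is exactly the embarrassingly parallel workload that suits a GPU.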

  2. An intelligent crowdsourcing system for forensic analysis of surveillance video

    NASA Astrophysics Data System (ADS)

    Tahboub, Khalid; Gadgil, Neeraj; Ribera, Javier; Delgado, Blanca; Delp, Edward J.

    2015-03-01

    Video surveillance systems are of great value for public safety. With an exponential increase in the number of cameras, videos obtained from surveillance systems are often archived for forensic purposes. Many automatic methods have been proposed for video analytics such as anomaly detection and human activity recognition. However, such methods face significant challenges due to object occlusions, shadows and scene illumination changes. In recent years, crowdsourcing has become an effective tool that utilizes human intelligence to perform tasks that are challenging for machines. In this paper, we present an intelligent crowdsourcing system for forensic analysis of surveillance video, including video recorded as part of search and rescue missions and large-scale investigation tasks. We describe a method to enhance crowdsourcing by incorporating human detection, re-identification and tracking. At the core of our system, we use a hierarchical pyramid model to distinguish the crowd members based on their ability, experience and performance record. Our proposed system operates in an autonomous fashion and produces a final output of the crowdsourcing analysis consisting of a set of video segments detailing the events of interest as one storyline.
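The hierarchical weighting of crowd members can be illustrated with a simple ability-weighted vote; the labels and scores below are invented, and this is a sketch of the general idea rather than the paper's actual pyramid model:

```python
def weighted_vote(labels, weights):
    """Aggregate crowd labels, weighting each worker by an ability score."""
    scores = {}
    for label, w in zip(labels, weights):
        scores[label] = scores.get(label, 0.0) + w
    return max(scores, key=scores.get)

# Three workers label the same video segment: the two higher-rated
# workers outvote the novice.
print(weighted_vote(["person", "shadow", "person"], [0.9, 0.4, 0.8]))  # person
```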

  3. Real-time processing of dual band HD video for maintaining operational effectiveness in degraded visual environments

    NASA Astrophysics Data System (ADS)

    Parker, Steve C. J.; Hickman, Duncan L.; Smith, Moira I.

    2015-05-01

    Effective reconnaissance, surveillance and situational awareness, using dual band sensor systems, require the extraction, enhancement and fusion of salient features, with the processed video being presented to the user in an ergonomic and interpretable manner. HALO™ is designed to meet these requirements and provides an affordable, real-time, and low-latency image fusion solution on a low size, weight and power (SWAP) platform. The system has been progressively refined through field trials to increase its operating envelope and robustness. The result is a video processor that improves detection, recognition and identification (DRI) performance, whilst lowering operator fatigue and reaction times in complex and highly dynamic situations. This paper compares the performance of HALO™, both qualitatively and quantitatively, with conventional blended fusion for operation in degraded visual environments (DVEs), such as those experienced during ground and air-based operations. Although image blending provides a simple fusion solution, which explains its common adoption, the results presented demonstrate that its performance is poor compared to the HALO™ fusion scheme in DVE scenarios.
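The contrast the paper draws between blended fusion and salience-driven fusion can be seen in miniature with two tiny co-registered "bands"; this sketch is a generic illustration, not the HALO™ algorithm:

```python
def blend_fusion(a, b, alpha=0.5):
    """Naive blended fusion: per-pixel weighted average of two co-registered bands."""
    return [[alpha * pa + (1 - alpha) * pb for pa, pb in zip(ra, rb)]
            for ra, rb in zip(a, b)]

def max_fusion(a, b):
    """Salience-style selection: keep whichever band is brighter at each pixel."""
    return [[max(pa, pb) for pa, pb in zip(ra, rb)] for ra, rb in zip(a, b)]

vis = [[10, 200], [30, 40]]   # visible band: one bright feature
ir  = [[250, 20], [30, 90]]   # IR band: a different bright feature
print(blend_fusion(vis, ir))  # [[130.0, 110.0], [30.0, 65.0]] -- both features diluted
print(max_fusion(vis, ir))    # [[250, 200], [30, 90]] -- both features preserved
```

Blending halves the contrast of features present in only one band, which is one reason simple blended fusion underperforms feature-selective schemes in degraded visual environments.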

  4. Scalable Adaptive Graphics Environment (SAGE) Software for the Visualization of Large Data Sets on a Video Wall

    NASA Technical Reports Server (NTRS)

    Jedlovec, Gary; Srikishen, Jayanthi; Edwards, Rita; Cross, David; Welch, Jon; Smith, Matt

    2013-01-01

    The use of collaborative scientific visualization systems for the analysis, visualization, and sharing of "big data" available from new high resolution remote sensing satellite sensors or four-dimensional numerical model simulations is propelling the wider adoption of ultra-resolution tiled display walls interconnected by high speed networks. These systems require a globally connected and well-integrated operating environment that provides persistent visualization and collaboration services. This abstract and subsequent presentation describe a new collaborative visualization system installed for NASA's Short-term Prediction Research and Transition (SPoRT) program at Marshall Space Flight Center and its use for Earth science applications. The system consists of a 3 x 4 array of 1920 x 1080 pixel thin-bezel video monitors mounted on a wall in a scientific collaboration lab. The monitors are physically and virtually integrated into a 14' x 7' video display. The display of scientific data on the video wall is controlled by a single Alienware Aurora PC with a 2nd Generation Intel Core 4.1 GHz processor, 32 GB memory, and an AMD FirePro W600 video card with 6 Mini DisplayPort connections. Six Mini DisplayPort-to-dual DVI cables are used to connect the 12 individual video monitors. The open source Scalable Adaptive Graphics Environment (SAGE) windowing and media control framework, running on top of the Ubuntu 12 Linux operating system, allows several users to simultaneously control the display and storage of high resolution still and moving graphics in a variety of formats, on tiled display walls of any size. SAGE provides a common environment, or framework, enabling its users to access, display, and share a variety of data-intensive information. 
This information can be digital-cinema animations, high-resolution images, high-definition video-teleconferences, presentation slides, documents, spreadsheets or laptop screens. SAGE is cross-platform, community-driven, open-source visualization and collaboration middleware that utilizes shared national and international cyberinfrastructure for the advancement of scientific research and education.
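The tiled-wall geometry in the abstract is simple arithmetic: twelve 1920 x 1080 panels driven as one surface (the 4-column by 3-row assignment below is our reading of the "3 x 4 array" on a wide 14' x 7' wall). A quick check, ignoring bezels:

```python
def wall_resolution(cols, rows, panel_w, panel_h):
    """Total pixel resolution of a tiled display wall (bezels ignored)."""
    return cols * panel_w, rows * panel_h

w, h = wall_resolution(4, 3, 1920, 1080)
print(w, h, w * h / 1e6)  # 7680 3240 24.8832 -- roughly 25 megapixels
```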

  5. Scalable Adaptive Graphics Environment (SAGE) Software for the Visualization of Large Data Sets on a Video Wall

    NASA Astrophysics Data System (ADS)

    Jedlovec, G.; Srikishen, J.; Edwards, R.; Cross, D.; Welch, J. D.; Smith, M. R.

    2013-12-01

    The use of collaborative scientific visualization systems for the analysis, visualization, and sharing of 'big data' available from new high resolution remote sensing satellite sensors or four-dimensional numerical model simulations is propelling the wider adoption of ultra-resolution tiled display walls interconnected by high speed networks. These systems require a globally connected and well-integrated operating environment that provides persistent visualization and collaboration services. This abstract and subsequent presentation describe a new collaborative visualization system installed for NASA's Short-term Prediction Research and Transition (SPoRT) program at Marshall Space Flight Center and its use for Earth science applications. The system consists of a 3 x 4 array of 1920 x 1080 pixel thin-bezel video monitors mounted on a wall in a scientific collaboration lab. The monitors are physically and virtually integrated into a 14' x 7' video display. The display of scientific data on the video wall is controlled by a single Alienware Aurora PC with a 2nd Generation Intel Core 4.1 GHz processor, 32 GB memory, and an AMD FirePro W600 video card with 6 Mini DisplayPort connections. Six Mini DisplayPort-to-dual DVI cables are used to connect the 12 individual video monitors. The open source Scalable Adaptive Graphics Environment (SAGE) windowing and media control framework, running on top of the Ubuntu 12 Linux operating system, allows several users to simultaneously control the display and storage of high resolution still and moving graphics in a variety of formats, on tiled display walls of any size. SAGE provides a common environment, or framework, enabling its users to access, display, and share a variety of data-intensive information. 
This information can be digital-cinema animations, high-resolution images, high-definition video-teleconferences, presentation slides, documents, spreadsheets or laptop screens. SAGE is cross-platform, community-driven, open-source visualization and collaboration middleware that utilizes shared national and international cyberinfrastructure for the advancement of scientific research and education.

  6. A near infra-red video system as a protective diagnostic for electron cyclotron resonance heating operation in the Wendelstein 7-X stellarator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Preynas, M.; Laqua, H. P.; Marsen, S.

    The Wendelstein 7-X stellarator is a large nuclear fusion device based at the Max-Planck-Institut für Plasmaphysik in Greifswald, Germany. The main plasma heating system for steady state operation in W7-X is electron cyclotron resonance heating (ECRH). During operation, part of the plasma-facing components will be directly heated by the non-absorbed power of the 1 MW rf beams of the ECRH system. In order to avoid damage to such components, which are made of graphite tiles, during the first operational phase, a near infra-red video system has been developed as a protective diagnostic for safe and secure ECRH operation. Both the mechanical design housing the cameras and the optical system are very flexible and respect the requirements of steady state operation. The full system, including data acquisition and control, has been successfully tested in the vacuum vessel, including on-line visualization and data storage for the four cameras equipping the ECRH equatorial launchers of W7-X.

  7. Real-time video compressing under DSP/BIOS

    NASA Astrophysics Data System (ADS)

    Chen, Qiu-ping; Li, Gui-ju

    2009-10-01

    This paper presents real-time MPEG-4 Simple Profile video compression on a DSP processor. The programming framework is built around a TMS320C6416 microprocessor, a TDS510 simulator, and a PC. It uses the embedded real-time operating system DSP/BIOS and its API functions to build periodic functions, tasks, and interrupts, realizing real-time video compression. To address data transfer within the system, and based on the architecture of the C64x DSP, double buffering and the EDMA data transfer controller are used to move data from external to internal memory, so that data transfer and processing proceed simultaneously; architecture-level optimizations are used to improve the software pipeline. The system uses DSP/BIOS for multi-thread scheduling and achieves high-speed transfer of large volumes of data. Experimental results show the encoder can realize real-time encoding of 768x576, 25 frame/s video images.
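The double-buffering scheme the abstract describes (EDMA fills one buffer while the CPU processes the other) can be sketched in a few lines; this is an illustrative simulation of the overlap, not C6416 code:

```python
def double_buffered(frames, process):
    """Simulate EDMA-style double buffering: frame n is processed while
    frame n+1 is being transferred into the other buffer (a sketch of the
    pipeline overlap, not cycle-accurate)."""
    results = []
    pending = None  # buffer holding a transferred-but-unprocessed frame
    for frame in frames:
        if pending is not None:
            results.append(process(pending))  # CPU works on the previous frame...
        pending = frame                        # ...while this one is "transferred"
    if pending is not None:
        results.append(process(pending))       # drain the last buffer
    return results

print(double_buffered([1, 2, 3], lambda f: f * 2))  # [2, 4, 6]
```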

  8. Naval Research Laboratory 1984 Review.

    DTIC Science & Technology

    1985-07-16

    ...pulsed infrared sources and electronics for video signal processing... comprehensive characterization of ultrahigh transparency fluoride glasses and...operates a video system, consisting of visible and infrared television cameras, a high-quality video cassette recorder and display, and a digitizer to convert..., through this port if desired. The optical bench in the trailer holds a high-resolution Fourier transform spectrometer to use in the receiving...

  9. Power-rate-distortion analysis for wireless video communication under energy constraint

    NASA Astrophysics Data System (ADS)

    He, Zhihai; Liang, Yongfang; Ahmad, Ishfaq

    2004-01-01

    In video coding and streaming over wireless communication networks, the power-demanding video encoding operates on mobile devices with a limited energy supply. To analyze, control, and optimize the rate-distortion (R-D) behavior of the wireless video communication system under the energy constraint, we need to develop a power-rate-distortion (P-R-D) analysis framework, which extends the traditional R-D analysis by including another dimension, the power consumption. Specifically, in this paper, we analyze the encoding mechanism of typical video encoding systems and develop a parametric video encoding architecture which is fully scalable in computational complexity. Using dynamic voltage scaling (DVS), a hardware technology recently developed in CMOS circuit design, the complexity scalability can be translated into power consumption scalability of the video encoder. We investigate the rate-distortion behaviors of the complexity control parameters and establish an analytic framework to explore the P-R-D behavior of the video encoding system. Both theoretically and experimentally, we show that, using this P-R-D model, the encoding system is able to automatically adjust its complexity control parameters to match the available energy supply of the mobile device while maximizing the picture quality. The P-R-D model provides a theoretical guideline for system design and performance optimization in wireless video communication under energy constraint, especially over wireless video sensor networks.
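The P-R-D idea can be illustrated with a toy model: distortion falls with rate, and more encoding power makes each bit more effective, so a controller picks the best power level the energy budget allows. The functional form below is invented purely for illustration and is not the paper's fitted model:

```python
import math

def distortion(rate_bpp, power, sigma2=1.0, gamma=1.2):
    """Illustrative P-R-D surface (not the paper's model): more encoding
    power buys more efficient compression, lowering distortion at a given rate."""
    efficiency = power / (1.0 + power)  # saturating benefit of extra complexity
    return sigma2 * math.exp(-gamma * rate_bpp * efficiency)

def best_power(rate_bpp, power_levels, budget):
    """Pick the feasible power level that minimizes distortion under the budget."""
    feasible = [p for p in power_levels if p <= budget]
    return min(feasible, key=lambda p: distortion(rate_bpp, p))

print(best_power(1.0, [0.25, 0.5, 1.0, 2.0], budget=1.0))  # 1.0
```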

  10. Automated Visual Event Detection, Tracking, and Data Management System for Cabled- Observatory Video

    NASA Astrophysics Data System (ADS)

    Edgington, D. R.; Cline, D. E.; Schlining, B.; Raymond, E.

    2008-12-01

    Ocean observatories and underwater video surveys have the potential to unlock important discoveries with new and existing camera systems. Yet the burden of video management and analysis often requires reducing the amount of video recorded through time-lapse video or similar methods. It is unknown how many digitized video data sets exist in the oceanographic community, but we suspect that many remain under-analyzed due to a lack of good tools or human resources to analyze the video. To help address this problem, the Automated Visual Event Detection (AVED) software and the Video Annotation and Reference System (VARS) have been under development at MBARI. For detecting interesting events in the video, the AVED software has been developed over the last 5 years. AVED is based on a neuromorphic selective-attention algorithm, modeled on the human vision system. Frames are decomposed into specific feature maps that are combined into a unique saliency map. This saliency map is then scanned to determine the most salient locations. The candidate salient locations are then segmented from the scene using algorithms suitable for the low, non-uniform light and marine snow typical of deep underwater video. For managing the AVED descriptions of the video, the VARS system provides an interface and database for describing, viewing, and cataloging the video. VARS was developed by MBARI for annotating deep-sea video data and is currently being used to describe over 3000 dives by our remotely operated vehicles (ROVs), making it well suited to this deepwater observatory application with only a few modifications. To meet the compute- and data-intensive job of video processing, a distributed heterogeneous network of computers is managed using the Condor workload management system. This system manages data storage, video transcoding, and AVED processing. 
Looking to the future, we see high-speed networks and Grid technology as an important element in addressing the problem of processing and accessing large video data sets.
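The center-surround computation at the heart of such saliency maps can be sketched as each pixel's deviation from its neighborhood mean; this toy version works on a single intensity channel and ignores the multiple feature maps of the real algorithm:

```python
def center_surround(img, r=1):
    """Toy saliency map: each pixel's absolute difference from its local
    neighborhood mean (the core of center-surround attention models)."""
    h, w = len(img), len(img[0])
    sal = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            nbrs = [img[j][i]
                    for j in range(max(0, y - r), min(h, y + r + 1))
                    for i in range(max(0, x - r), min(w, x + r + 1))]
            sal[y][x] = abs(img[y][x] - sum(nbrs) / len(nbrs))
    return sal

flat = [[10] * 4 for _ in range(4)]
flat[1][2] = 90  # one bright "event" in an otherwise uniform frame
sal = center_surround(flat)
peak = max((v, (y, x)) for y, row in enumerate(sal) for x, v in enumerate(row))
print(peak[1])  # (1, 2): the anomaly is the most salient location
```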

  11. Transmission of live laparoscopic surgery over the Internet2.

    PubMed

    Damore, L J; Johnson, J A; Dixon, R S; Iverson, M A; Ellison, E C; Melvin, W S

    1999-11-01

    Video broadcasting of surgical procedures is an important tool for education, training, and consultation. Current video conferencing systems are expensive and time-consuming and require preplanning. Real-time Internet video is known for its poor quality and relies on the equipment and the speed of the connection. The Internet2, a new high-speed (up to 2,048 Mbps), large-bandwidth data network, presently connects more than 100 universities and corporations. We have successfully used the Internet2 to broadcast the first real-time, high-quality audio/video program from a live laparoscopic operation to distant points. Video output of the laparoscopic camera and audio from a wireless microphone were broadcast to distant sites using a proprietary, PC-based implementation of H.320 video conferencing over a TCP/IP network connected to the Internet2. The receiving sites participated in two-way, real-time video and audio communications and graded the quality of the signal they received. On August 25, 1998, a laparoscopic Nissen fundoplication was transmitted to Internet2 stations in Colorado, Pennsylvania, and to an Internet station in New York. On September 28 and 29, 1998, we broadcast laparoscopic operations throughout both days to the Internet2 Fall Conference in San Francisco, California. Most recently, on February 24, 1999, we transmitted a laparoscopic Heller myotomy to the Abilene Network Launch Event in Washington, DC. The Internet2 is currently able to provide the bandwidth needed for a turn-key video conferencing system with high-resolution, real-time transmission. The system could be used for a variety of teaching and educational programs for experienced surgeons, residents, and medical students.

  12. Visualizing the history of living spaces.

    PubMed

    Ivanov, Yuri; Wren, Christopher; Sorokin, Alexander; Kaur, Ishwinder

    2007-01-01

    The technology available to building designers now makes it possible to monitor buildings on a very large scale. Video cameras and motion sensors are commonplace in practically every office space, and are slowly making their way into living spaces. The application of such technologies, in particular video cameras, while improving security, also violates privacy. On the other hand, motion sensors, while being privacy-conscious, typically do not provide enough information for a human operator to maintain the same degree of awareness about the space that can be achieved by using video cameras. We propose a novel approach in which we use a large number of simple motion sensors and a small set of video cameras to monitor a large office space. In our system we deployed 215 motion sensors and six video cameras to monitor the 3,000-square-meter office space occupied by 80 people for a period of about one year. The main problem in operating such systems is finding a way to present this highly multidimensional data, which includes both spatial and temporal components, to a human operator to allow browsing and searching recorded data in an efficient and intuitive way. In this paper we present our experiences and the solutions that we have developed in the course of our work on the system. We consider this work to be the first step in helping designers and managers of building systems gain access to information about occupants' behavior in the context of an entire building in a way that is only minimally intrusive to the occupants' privacy.

  13. Secure Video Surveillance System Acquisition Software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2009-12-04

    The SVSS Acquisition Software collects and displays video images from two cameras through a VPN and stores the images on a collection controller. The software is configured to allow a user to enter a time window to display up to 2 1/2 hours of video for review. The software collects images from the cameras at a rate of 1 image per second and automatically deletes images older than 3 hours. The software runs in a Linux environment and can be run in a virtual machine on Windows XP. The Sandia software integrates the different COTS software packages to build the video review system.
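The 3-hour retention rule amounts to filtering an image list by age; a minimal sketch (timestamps in seconds, all names invented for illustration, not from the SVSS code):

```python
def prune_old_images(timestamps, now, max_age_s=3 * 3600):
    """Retention sketch: keep only images newer than the 3-hour window."""
    return [t for t in timestamps if now - t <= max_age_s]

now = 100_000
stamps = [now - 4 * 3600, now - 2 * 3600, now - 60]  # 4 h, 2 h, and 1 min old
print(prune_old_images(stamps, now))  # the 4-hour-old image is dropped
```

At 1 image per second per camera, the 3-hour window bounds the controller's storage at 10,800 images per camera.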

  14. Video System Highlights Hydrogen Fires

    NASA Technical Reports Server (NTRS)

    Youngquist, Robert C.; Gleman, Stuart M.; Moerk, John S.

    1992-01-01

    This video system combines images from the visible spectrum and from three bands in the infrared spectrum to produce a color-coded display in which hydrogen fires are distinguished from other sources of heat. It includes a linear array of 64 discrete lead selenide mid-infrared detectors operating at room temperature. Images are overlaid on a black-and-white image of the same scene from a standard commercial video camera. In the final image, hydrogen fires appear red; carbon-based fires, blue; and other hot objects, mainly green and combinations of green and red. Where no thermal source is present, the image remains in black and white. The system enables a high degree of discrimination between hydrogen flames and other thermal emitters.
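The color coding amounts to a per-pixel classification on the visible and mid-IR bands: hydrogen flames radiate strongly in the mid-IR while emitting little visible light. The thresholds and band values below are invented placeholders, purely to show the shape of such a rule:

```python
def classify_pixel(visible, mid_ir_bands):
    """Toy color-coding rule in the spirit of the system above (thresholds
    are hypothetical): strong mid-IR with little visible light suggests a
    hydrogen flame; strong mid-IR plus visible light suggests a carbon fire."""
    hot = max(mid_ir_bands) > 200
    if not hot:
        return "background"  # stays black and white in the display
    return "hydrogen" if visible < 50 else "carbon"

print(classify_pixel(visible=20, mid_ir_bands=[220, 210, 230]))   # hydrogen -> red
print(classify_pixel(visible=180, mid_ir_bands=[250, 240, 235]))  # carbon -> blue
print(classify_pixel(visible=30, mid_ir_bands=[40, 35, 20]))      # background
```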

  15. Development and application of remote video monitoring system for combine harvester based on embedded Linux

    NASA Astrophysics Data System (ADS)

    Chen, Jin; Wang, Yifan; Wang, Xuelei; Wang, Yuehong; Hu, Rui

    2017-01-01

    Combine harvesters usually work in sparsely populated areas with harsh environments. In order to achieve remote real-time video monitoring of the working state of a combine harvester, a remote video monitoring system based on ARM11 and embedded Linux was developed. The system uses a USB camera to capture video of the working state of the main parts of the combine harvester, including the granary, threshing drum, cab, and cutting table. Video data are compressed with the JPEG image compression standard, and the monitoring stream is transferred over the network to a remote monitoring center for long-range monitoring and management. The paper first describes the necessity of the system, then briefly introduces the hardware and software implementation, and then details the configuration and compilation of the embedded Linux operating system and the compiling and porting of the video server program. Finally, the system was installed and commissioned on a combine harvester and tested, and the test results are presented. In testing, the remote video monitoring system achieved 30 fps at a resolution of 800x600, with a response delay on the public network of about 40 ms.

  16. A generic flexible and robust approach for intelligent real-time video-surveillance systems

    NASA Astrophysics Data System (ADS)

    Desurmont, Xavier; Delaigle, Jean-Francois; Bastide, Arnaud; Macq, Benoit

    2004-05-01

    In this article we present a generic, flexible and robust approach for an intelligent real-time video-surveillance system. A previous version of the system was presented in [1]. The goal of these advanced tools is to provide help to operators by detecting events of interest in visual scenes, highlighting alarms, and computing statistics. The proposed system is a multi-camera platform able to handle different standards of video inputs (composite, IP, IEEE1394) and which can compress (MPEG4), store and display them. This platform also integrates advanced video analysis tools, such as motion detection, segmentation, tracking and interpretation. The design of the architecture is optimised for playback, display, and processing of video flows in an efficient way for video-surveillance applications. The implementation is distributed on a scalable computer cluster based on Linux and IP networking. It relies on POSIX threads for multitasking scheduling. Data flows are transmitted between the different modules using multicast technology and under control of a TCP-based command network (e.g. for bandwidth occupation control). We report some results and show the potential use of such a flexible system in third-generation video surveillance systems. We illustrate the interest of the system in a real case study: indoor surveillance.

  17. Training with video imaging improves the initial intubation success rates of paramedic trainees in an operating room setting.

    PubMed

    Levitan, R M; Goldman, T S; Bryan, D A; Shofer, F; Herlich, A

    2001-01-01

    Video imaging of intubation as seen by the laryngoscopist has not been a part of traditional instruction methods, and its potential impact on novice intubation success rates has not been evaluated. We prospectively tracked the success rates of novice intubators in paramedic classes who were required to watch a 26-minute instructional videotape made with a direct laryngoscopy imaging system (video group). We compared the prospectively obtained intubation success rate of the video group against retrospectively collected data from prior classes of paramedic students (traditional group) in the same training program. All classes received the same didactic airway instruction, same mannequin practice time, same paramedic textbook, and were trained in the same operating room with the same teaching staff. The traditional group (n=113, total attempts 783) had a mean individual intubation success rate of 46.7% (95% confidence interval 42.2% to 51.3%). The video group (n=36, total attempts 102) had a mean individual intubation success rate of 88.1% (95% confidence interval 79.6% to 96.5%). The difference in mean intubation success rates between the 2 groups was 41.4% (95% confidence interval 31.1% to 50.7%, P <.0001). The 2 groups did not differ in respect to age, male sex, or level of education. An instructional videotape made with the direct laryngoscopy video system significantly improved the initial success rates of novice intubators in an operating room setting.
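The reported 41.4% difference in mean success rates can be sanity-checked with a standard two-proportion comparison. The abstract does not state the authors' exact method, so the Wald interval below is only a textbook approximation and is not expected to reproduce their confidence interval exactly:

```python
import math

def diff_ci(p1, n1, p2, n2, z=1.96):
    """Wald 95% CI for the difference of two independent proportions
    (a standard approximation; the study's exact method is not stated)."""
    d = p1 - p2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return d, d - z * se, d + z * se

# Mean per-student success rates from the abstract, using the group
# sizes (36 video, 113 traditional) as the sample sizes.
d, lo, hi = diff_ci(0.881, 36, 0.467, 113)
print(round(d, 3), round(lo, 3), round(hi, 3))  # 0.414 0.274 0.554
```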

  18. Operator selection for unmanned aerial systems: comparing video game players and pilots.

    PubMed

    McKinley, R Andy; McIntire, Lindsey K; Funke, Margaret A

    2011-06-01

    Popular unmanned aerial system (UAS) platforms such as the MQ-1 Predator and MQ-9 Reaper have experienced accelerated operations tempos that have outpaced current operator training regimens, leading to a shortage of qualified UAS operators. To find a surrogate to replace pilots of manned aircraft as UAS operators, this study evaluated video game players (VGPs), pilots, and a control group on a set of UAS operation relevant cognitive tasks. Thirty participants volunteered for this study and were divided into 3 groups: experienced pilots (P), experienced VGPs, and a control group (C). Each group was trained on eight cognitive performance tasks relevant to unmanned flight tasks. The results indicated that pilots significantly outperformed the VGP and control groups on multi-attribute cognitive tasks (Tank mean: VGP = 465 +/- 1.046 vs. P = 203 +/- 0.237 vs. C = 351 +/- 0.601). However, the VGPs outperformed pilots on cognitive tests related to visually acquiring, identifying, and tracking targets (final score: VGP = 594.28 +/- 8.708 vs. P = 563.33 +/- 8.787 vs. C = 568.21 +/- 8.224). Likewise, both VGPs and pilots performed similarly on the UAS landing task, but outperformed the control group (glide slope: VGP = 40.982 +/- 3.244 vs. P = 30.461 +/- 2.251 vs. C = 57.060 +/- 4.407). Cognitive skills learned in video game play may transfer to novel environments and improve performance in UAS tasks over individuals with no video game experience.

  19. 47 CFR 15.252 - Operation of wideband vehicular radar systems within the bands 16.2-17.7 GHz and 23.12-29.0 GHz.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... fundamental frequency following the provisions of § 15.31(m). (3) For systems operating in the 23.12-29.0 GHz... with the transmitter operating continuously at a fundamental frequency. The video bandwidth of the... 47 Telecommunication 1 2010-10-01 2010-10-01 false Operation of wideband vehicular radar systems...

  20. A system for endobronchial video analysis

    NASA Astrophysics Data System (ADS)

    Byrnes, Patrick D.; Higgins, William E.

    2017-03-01

    Image-guided bronchoscopy is a critical component in the treatment of lung cancer and other pulmonary disorders. During bronchoscopy, a high-resolution endobronchial video stream facilitates guidance through the lungs and allows for visual inspection of a patient's airway mucosal surfaces. Despite the detailed information it contains, little effort has been made to incorporate recorded video into the clinical workflow. Follow-up procedures often required in cancer assessment or asthma treatment could significantly benefit from effectively parsed and summarized video. Tracking diagnostic regions of interest (ROIs) could potentially better equip physicians to detect early airway-wall cancer or improve asthma treatments, such as bronchial thermoplasty. To address this need, we have developed a system for the postoperative analysis of recorded endobronchial video. The system first parses an input video stream into endoscopic shots, derives motion information, and selects salient representative key frames. Next, a semi-automatic method for CT-video registration creates data linkages between a CT-derived airway-tree model and the input video. These data linkages then enable the construction of a CT-video chest model comprised of a bronchoscopy path history (BPH) - defining all airway locations visited during a procedure - and texture-mapping information for rendering registered video frames onto the airway-tree model. A suite of analysis tools is included to visualize and manipulate the extracted data. Video browsing and retrieval are facilitated through a video table of contents (TOC) and a search query interface. The system provides a variety of operational modes and additional functionality, including the ability to define regions of interest. We demonstrate the potential of our system using two human case study examples.
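    The first parsing stage (splitting the stream into shots and picking key frames) can be sketched with a simple histogram-difference rule. `parse_shots`, the bin count, and the cut threshold are illustrative assumptions, not the paper's actual method:

```python
import numpy as np

def parse_shots(frames, bins=16, threshold=0.5):
    """Split a frame sequence into shots by gray-level histogram distance.

    frames: list of 2-D uint8 arrays. Returns (shots, key_frames), where each
    shot is a half-open (start, end) index range and each key frame is the
    middle frame index of its shot.
    """
    hists = []
    for f in frames:
        h, _ = np.histogram(f, bins=bins, range=(0, 256))
        hists.append(h / h.sum())             # normalize so distance is scale-free
    cuts = [0]
    for i in range(1, len(frames)):
        # L1 distance between consecutive normalized histograms; a large
        # jump is taken as a shot boundary.
        if np.abs(hists[i] - hists[i - 1]).sum() > threshold:
            cuts.append(i)
    cuts.append(len(frames))
    shots = [(cuts[i], cuts[i + 1]) for i in range(len(cuts) - 1)]
    keys = [(a + b - 1) // 2 for a, b in shots]
    return shots, keys
```

    A production parser would add motion analysis and saliency scoring, as the abstract describes, but the shot/key-frame data structure stays the same.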

  1. MSFC ISS Resource Reel 2016

    NASA Image and Video Library

    2016-04-01

    International Space Station Resource Reel. This video shows International Space Station components, such as the Destiny laboratory and the Quest Airlock, being manufactured at NASA's Marshall Space Flight Center in Huntsville, Ala. It provides manufacturing and ground-testing video and in-flight video of key space station components: the Microgravity Science Glovebox, the Materials Science Research Facility, the Window Observational Research Facility, the Environmental Control and Life Support System, and basic research racks. There is video of people working in Marshall's Payload Operations Integration Center, where controllers operate experiments 24 hours a day, 365 days a year. Various crews are shown conducting experiments on board the station. PAO Name: Jennifer Stanfield Phone Number: 256-544-0034 Email Address: JENNIFER.STANFIELD@NASA.GOV Name/Title of Video: ISS Resource Reel Description: ISS Resource Reel Graphic Information: NASA PAO Name: Tracy McMahan Phone Number: 256-544-1634 Email Address: tracy.mcmahan@nasa.gov

  2. Graphic overlays in high-precision teleoperation: Current and future work at JPL

    NASA Technical Reports Server (NTRS)

    Diner, Daniel B.; Venema, Steven C.

    1989-01-01

    In space teleoperation, additional problems arise, including signal transmission time delays, which can greatly reduce operator performance. Recent advances in graphics open new possibilities for addressing these and other problems. Currently, a multi-camera system with normal 3-D TV and video graphics capabilities is being developed. Trained and untrained operators will be tested for high-precision performance using two force-reflecting hand controllers and a voice recognition system to control two robot arms and up to 5 movable stereo or non-stereo TV cameras. A number of new techniques for integrating TV and video graphics displays to improve operator training and performance in teleoperation and supervised automation are evaluated.

  3. 47 CFR 76.1512 - Programming information.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 47 Telecommunication 4 2011-10-01 2011-10-01 false Programming information. 76.1512 Section 76... MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Open Video Systems § 76.1512 Programming information. (a) An... with regard to material or information (including advertising) provided by the operator to subscribers...

  4. Computer-aided video exposure monitoring.

    PubMed

    Walsh, P T; Clark, R D; Flaherty, S; Gentry, S J

    2000-01-01

    A computer-aided video exposure monitoring system was used to record exposure information. The system comprised a handheld camcorder, portable video cassette recorder, radio-telemetry transmitter/receiver, and handheld or notebook computers for remote data logging, photoionization gas/vapor detectors (PIDs), and a personal aerosol monitor. The following workplaces were surveyed using the system: dry cleaning establishments--monitoring tetrachloroethylene in the air and in breath; printing works--monitoring white spirit type solvent; tire manufacturing factory--monitoring rubber fume; and a slate quarry--monitoring respirable dust and quartz. The system based on the handheld computer, in particular, simplified the data acquisition process compared with earlier systems in use by our laboratory. The equipment is more compact and easier to operate, and allows more accurate calibration of the instrument reading on the video image. Although a variety of data display formats are possible, the best format for videos intended for educational and training purposes was the review-preview chart superimposed on the video image of the work process. Recommendations for reducing exposure by engineering or by modifying work practice were possible through use of the video exposure system in the dry cleaning and tire manufacturing applications. The slate quarry work illustrated how the technique can be used to test ventilation configurations quickly to see their effect on the worker's personal exposure.

  5. Backwards compatible high dynamic range video compression

    NASA Astrophysics Data System (ADS)

    Dolzhenko, Vladimir; Chesnokov, Vyacheslav; Edirisinghe, Eran A.

    2014-02-01

    This paper presents a two-layer CODEC architecture for high dynamic range video compression. The base layer contains the tone-mapped video stream encoded with 8 bits per component, which can be decoded using conventional equipment; its content is optimized for rendering on low dynamic range displays. The enhancement layer contains the image difference, in a perceptually uniform color space, between the result of inverse tone mapping the base layer content and the original video stream. Prediction of the high dynamic range content reduces redundancy in the transmitted data while still preserving highlights and out-of-gamut colors. The perceptually uniform color space enables the use of standard rate-distortion optimization algorithms. We present techniques for efficient implementation and encoding of non-uniform tone mapping operators with low overhead in terms of bitstream size and number of operations. The transform representation is based on a human visual system model and is suitable for global and local tone mapping operators. The compression techniques include predicting the transform parameters from previously decoded frames and from already decoded data for the current frame. Different video compression configurations are compared: backwards compatible and non-backwards compatible, using AVC and HEVC codecs.
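    The two-layer idea can be illustrated numerically: the base layer is an 8-bit tone-mapped signal, and the enhancement layer is the difference between the original and the inverse-tone-mapped base. The global Reinhard-style curve x/(1+x) used here is an assumed stand-in for the paper's non-uniform operators:

```python
import numpy as np

def encode(hdr):
    """Split an HDR signal (positive floats) into base + enhancement layers."""
    tone = hdr / (1.0 + hdr)                      # tone-map into [0, 1)
    base = np.round(tone * 255).astype(np.uint8)  # 8-bit backwards-compatible layer
    inv = base / 255.0
    predicted = inv / (1.0 - inv + 1e-9)          # inverse tone mapping of the base
    residual = hdr - predicted                    # enhancement layer
    return base, residual

def decode(base, residual):
    """Reconstruct the HDR signal from the two layers."""
    inv = base / 255.0
    predicted = inv / (1.0 - inv + 1e-9)
    return predicted + residual
```

    Because the residual is formed against the already-quantized base layer, reconstruction is exact up to whatever loss the enhancement-layer codec later introduces; a legacy decoder simply displays the base layer alone.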

  6. A randomized controlled study to evaluate the role of video-based coaching in training laparoscopic skills.

    PubMed

    Singh, Pritam; Aggarwal, Rajesh; Tahir, Muaaz; Pucher, Philip H; Darzi, Ara

    2015-05-01

    This study evaluates whether video-based coaching can enhance laparoscopic surgical skills performance. Many professions utilize coaching to improve performance; the sports industry employs video analysis to maximize improvement from every performance. Laparoscopic novices were baseline tested and then trained on a validated virtual reality (VR) laparoscopic cholecystectomy (LC) curriculum. After competence, subjects were randomized on a 1:1 ratio and each performed 5 VRLCs. After each LC, intervention group subjects received video-based coaching by a surgeon, utilizing an adaptation of the GROW (Goals, Reality, Options, Wrap-up) coaching model. Control subjects viewed online surgical lectures. All subjects then performed 2 porcine LCs. Performance was assessed by blinded video review using validated global rating scales. Twenty subjects were recruited. No significant differences were observed between groups in baseline performance and in VRLC1. For each subsequent repetition, intervention subjects significantly outperformed controls on all global rating scales. Interventions outperformed controls in porcine LC1 [Global Operative Assessment of Laparoscopic Skills: (20.5 vs 15.5; P = 0.011), Objective Structured Assessment of Technical Skills: (21.5 vs 14.5; P = 0.001), and Operative Performance Rating System: (26 vs 19.5; P = 0.001)] and porcine LC2 [Global Operative Assessment of Laparoscopic Skills: (28 vs 17.5; P = 0.005), Objective Structured Assessment of Technical Skills: (30 vs 16.5; P < 0.001), and Operative Performance Rating System: (36 vs 21; P = 0.004)]. Intervention subjects took significantly longer than controls in porcine LC1 (2920 vs 2004 seconds; P = 0.009) and LC2 (2297 vs 1683; P = 0.003). Despite equivalent exposure to practical laparoscopic skills training, video-based coaching enhanced the quality of laparoscopic surgical performance on both VR and porcine LCs, although at the expense of increased time. Video-based coaching is a feasible method of maximizing performance enhancement from every clinical exposure.

  7. Recognition and localization of relevant human behavior in videos

    NASA Astrophysics Data System (ADS)

    Bouma, Henri; Burghouts, Gertjan; de Penning, Leo; Hanckmann, Patrick; ten Hove, Johan-Martijn; Korzec, Sanne; Kruithof, Maarten; Landsmeer, Sander; van Leeuwen, Coen; van den Broek, Sebastiaan; Halma, Arvid; den Hollander, Richard; Schutte, Klamer

    2013-06-01

    Ground surveillance is normally performed by human assets, since it requires visual intelligence. However, especially for military operations, this can be dangerous and is very resource intensive. Therefore, unmanned autonomous visual-intelligence systems are desired. In this paper, we present an improved system that can recognize actions of a human and interactions between multiple humans. Central to the new system is our agent-based architecture. The system is trained on thousands of videos and evaluated on realistic persistent surveillance data in the DARPA Mind's Eye program, with hours of videos of challenging scenes. The results show that our system is able to track people, detect and localize events, and discriminate between different behaviors, and it performs 3.4 times better than our previous system.

  8. 47 CFR 76.403 - Cable television system reports.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 4 2010-10-01 2010-10-01 false Cable television system reports. 76.403 Section 76.403 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) BROADCAST RADIO SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Forms and Reports § 76.403 Cable television system reports. The operator of every operational cable...

  9. 47 CFR 76.403 - Cable television system reports.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 47 Telecommunication 4 2014-10-01 2014-10-01 false Cable television system reports. 76.403 Section 76.403 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) BROADCAST RADIO SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Forms and Reports § 76.403 Cable television system reports. The operator of every operational cable...

  10. Video-tracker trajectory analysis: who meets whom, when and where

    NASA Astrophysics Data System (ADS)

    Jäger, U.; Willersinn, D.

    2010-04-01

    Unveiling unusual or hostile events by observing manifold moving persons in a crowd is a challenging task for human operators, especially when sitting in front of monitor walls for hours. Typically, hostile events are rare; thus, due to tiredness and negligence, the operator may miss important events. In such situations, an automatic alarming system is able to support the human operator. The system incorporates a processing chain consisting of (1) people tracking, (2) event detection, (3) data retrieval, and (4) display of the relevant video sequence overlaid by highlighted regions of interest. In this paper we focus on the event detection stage of the processing chain mentioned above. In our case, the selected event of interest is the encounter of people. Although based on a rather simple trajectory analysis, this kind of event has great practical importance because it paves the way to answering the question "who meets whom, when and where". This, in turn, forms the basis for detecting potential situations where, e.g., money, weapons or drugs are handed over from one person to another in crowded environments like railway stations, airports, or busy streets and squares. The input to the trajectory analysis comes from a multi-object video-based tracking system developed at IOSB, which is able to track multiple individuals within a crowd in real time [1]. From this we calculate the inter-distances between all persons on a frame-to-frame basis. We use a sequence of simple rules based on the individuals' kinematics to detect the event mentioned above and to output the frame number, the persons' IDs from the tracker, and the pixel coordinates of the meeting position. Using this information, a data retrieval system may extract the corresponding part of the recorded video image sequence and finally allows for replaying the selected video clip with a highlighted region of interest to attract the operator's attention for further visual inspection.
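    The frame-by-frame rule described above can be sketched as follows. The distance threshold, the minimum dwell time, and the function name are hypothetical stand-ins for the kinematic rule set used at IOSB:

```python
import itertools
import math

def detect_encounters(tracks, max_dist=2.0, min_frames=3):
    """Report (first_frame, id_a, id_b, meeting_point) for each encounter.

    tracks: {person_id: [(x, y), ...]} with one position per frame.
    An encounter is declared when two people stay within max_dist of each
    other for min_frames consecutive frames.
    """
    events = []
    for a, b in itertools.combinations(sorted(tracks), 2):
        run = 0  # length of the current close-proximity streak
        for frame, (pa, pb) in enumerate(zip(tracks[a], tracks[b])):
            if math.dist(pa, pb) <= max_dist:
                run += 1
            else:
                run = 0
            if run == min_frames:
                # Midpoint of the pair serves as the meeting position.
                mid = ((pa[0] + pb[0]) / 2, (pa[1] + pb[1]) / 2)
                events.append((frame - min_frames + 1, a, b, mid))
    return events
```

    The output tuple mirrors what the abstract says the stage emits: frame number, tracker IDs, and meeting coordinates, which a retrieval system can use to cue the corresponding clip.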

  11. 47 CFR 76.1710 - Operator interests in video programming.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 47 Telecommunication 4 2014-10-01 2014-10-01 false Operator interests in video programming. 76... SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Documents to be Maintained for Inspection § 76.1710 Operator interests in video programming. (a) Cable operators are required to maintain records in...

  12. 47 CFR 76.1710 - Operator interests in video programming.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 47 Telecommunication 4 2013-10-01 2013-10-01 false Operator interests in video programming. 76... SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Documents to be Maintained for Inspection § 76.1710 Operator interests in video programming. (a) Cable operators are required to maintain records in...

  13. 47 CFR 76.1710 - Operator interests in video programming.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 4 2010-10-01 2010-10-01 false Operator interests in video programming. 76... SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Documents to be Maintained for Inspection § 76.1710 Operator interests in video programming. (a) Cable operators are required to maintain records in...

  14. 47 CFR 76.1710 - Operator interests in video programming.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 47 Telecommunication 4 2011-10-01 2011-10-01 false Operator interests in video programming. 76... SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Documents to be Maintained for Inspection § 76.1710 Operator interests in video programming. (a) Cable operators are required to maintain records in...

  15. 47 CFR 76.1710 - Operator interests in video programming.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 47 Telecommunication 4 2012-10-01 2012-10-01 false Operator interests in video programming. 76... SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Documents to be Maintained for Inspection § 76.1710 Operator interests in video programming. (a) Cable operators are required to maintain records in...

  16. The effect of interference on delta modulation encoded video signals

    NASA Technical Reports Server (NTRS)

    Schilling, D. L.

    1979-01-01

    An adaptive delta modulator which encodes composite color video signals was shown to provide a good response when operating at 16 Mb/s and near-commercial quality at 23 Mb/s. The ADM was relatively immune to channel errors. The system design is discussed and circuit diagrams are included.

  17. Compressed-domain video indexing techniques using DCT and motion vector information in MPEG video

    NASA Astrophysics Data System (ADS)

    Kobla, Vikrant; Doermann, David S.; Lin, King-Ip; Faloutsos, Christos

    1997-01-01

    Development of various multimedia applications hinges on the availability of fast and efficient storage, browsing, indexing, and retrieval techniques. Given that video is typically stored efficiently in a compressed format, if we can analyze the compressed representation directly, we can avoid the costly overhead of decompressing and operating at the pixel level. Compressed domain parsing of video has been presented in earlier work where a video clip is divided into shots, subshots, and scenes. In this paper, we describe key frame selection, feature extraction, and indexing and retrieval techniques that are directly applicable to MPEG compressed video. We develop a frame-type independent representation of the various types of frames present in an MPEG video in which all frames can be considered equivalent. Features are derived from the available DCT, macroblock, and motion vector information and mapped to a low-dimensional space where they can be accessed with standard database techniques. The spatial information is used as the primary index, while the temporal information is used to enhance the robustness of the system during the retrieval process. The techniques presented enable fast archiving, indexing, and retrieval of video. Our operational prototype typically takes a fraction of a second to retrieve similar video scenes from our database, with over 95% success.
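    Building a low-dimensional signature from block DC information and retrieving by nearest neighbour can be sketched like this. The block-mean feature (proportional to the DCT DC coefficient, hence readable from I-frame data without full decoding) and the plain Euclidean search are simplified assumptions, not the paper's exact feature set:

```python
import numpy as np

def dc_features(frame, block=8):
    """Low-dimensional frame signature from 8x8 block means.

    frame: 2-D array of luminance values. The mean of each block is
    proportional to its DCT DC coefficient.
    """
    h, w = frame.shape
    f = frame[: h - h % block, : w - w % block].astype(float)
    # Reshape into (rows of blocks, block, cols of blocks, block) and
    # average within each block.
    return f.reshape(h // block, block, w // block, block).mean(axis=(1, 3)).ravel()

def retrieve(query, database, k=1):
    """Return indices of the k database frames closest to the query frame."""
    q = dc_features(query)
    dists = [np.linalg.norm(q - dc_features(f)) for f in database]
    return list(np.argsort(dists)[:k])
```

    A real index would store the precomputed signatures in a spatial-access database structure rather than scanning linearly, which is what makes the sub-second retrieval the abstract reports possible.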

  18. Development of Targeting UAVs Using Electric Helicopters and Yamaha RMAX

    DTIC Science & Technology

    2007-05-17

    ... including the QNX real-time operating system. The video overlay board is useful to display the onboard camera’s image with important information such as... real-time operating system. Fully utilizing the built-in multi-processing architecture with inter-process synchronization and communication

  19. 3-D video techniques in endoscopic surgery.

    PubMed

    Becker, H; Melzer, A; Schurr, M O; Buess, G

    1993-02-01

    Three-dimensional visualisation of the operative field is an important requisite for precise and fast handling of open surgical operations. Up to now, it has only been possible to display a two-dimensional image on the monitor during endoscopic procedures. The increasing complexity of minimally invasive interventions requires endoscopic suturing and ligature of larger vessels, which are difficult to perform without a sense of depth. Three-dimensional vision may therefore decrease the operative risk, accelerate interventions and widen the operative spectrum. In April 1992, a 3-D video system developed at the Nuclear Research Center Karlsruhe, Germany (IAI Institute) was applied in various animal experimental procedures and clinically in laparoscopic cholecystectomy. The system works with a single monitor and active high-speed shutter glasses. Our first trials with this new 3-D imaging system clearly showed a facilitation of complex surgical manoeuvres like mobilisation of organs, preparation in the deep space and suture techniques. The 3-D system introduced in this article will enter the market in 1993 (Opticon Co., Karlsruhe, Germany).

  20. Automated UAV-based video exploitation using service oriented architecture framework

    NASA Astrophysics Data System (ADS)

    Se, Stephen; Nadeau, Christian; Wood, Scott

    2011-05-01

    Airborne surveillance and reconnaissance are essential for successful military missions. Such capabilities are critical for troop protection, situational awareness, mission planning, damage assessment, and others. Unmanned Aerial Vehicles (UAVs) gather huge amounts of video data but it is extremely labour-intensive for operators to analyze hours and hours of received data. At MDA, we have developed a suite of tools that can process the UAV video data automatically, including mosaicking, change detection and 3D reconstruction, which have been integrated within a standard GIS framework. In addition, the mosaicking and 3D reconstruction tools have also been integrated in a Service Oriented Architecture (SOA) framework. The Visualization and Exploitation Workstation (VIEW) integrates 2D and 3D visualization, processing, and analysis capabilities developed for UAV video exploitation. Visualization capabilities are supported through a thick-client Graphical User Interface (GUI), which allows visualization of 2D imagery, video, and 3D models. The GUI interacts with the VIEW server, which provides video mosaicking and 3D reconstruction exploitation services through the SOA framework. The SOA framework allows multiple users to perform video exploitation by running a GUI client on the operator's computer and invoking the video exploitation functionalities residing on the server. This allows the exploitation services to be upgraded easily and allows the intensive video processing to run on powerful workstations. MDA provides UAV services to the Canadian and Australian forces in Afghanistan with the Heron, a Medium Altitude Long Endurance (MALE) UAV system. On-going flight operations service provides important intelligence, surveillance, and reconnaissance information to commanders and front-line soldiers.

  1. Quantitative assessment of human motion using video motion analysis

    NASA Technical Reports Server (NTRS)

    Probe, John D.

    1993-01-01

    In the study of the dynamics and kinematics of the human body a wide variety of technologies has been developed. Photogrammetric techniques are well documented and are known to provide reliable positional data from recorded images. Often these techniques are used in conjunction with cinematography and videography for analysis of planar motion, and to a lesser degree three-dimensional motion. Cinematography has been the most widely used medium for movement analysis. Excessive operating costs and the lag time required for film development, coupled with recent advances in video technology, have allowed video-based motion analysis systems to emerge as a cost-effective method of collecting and analyzing human movement. The Anthropometric and Biomechanics Lab at Johnson Space Center utilizes the video-based Ariel Performance Analysis System (APAS) to develop data on shirtsleeved and space-suited human performance in order to plan efficient on-orbit intravehicular and extravehicular activities. APAS is a fully integrated system of hardware and software for biomechanics and the analysis of human performance and generalized motion measurement. Major components of the complete system include the video system, the AT-compatible computer, and the proprietary software.

  2. Collaborative real-time motion video analysis by human observer and image exploitation algorithms

    NASA Astrophysics Data System (ADS)

    Hild, Jutta; Krüger, Wolfgang; Brüstle, Stefan; Trantelle, Patrick; Unmüßig, Gabriel; Heinze, Norbert; Peinsipp-Byma, Elisabeth; Beyerer, Jürgen

    2015-05-01

    Motion video analysis is a challenging task, especially in real-time applications. In most safety and security critical applications, a human observer is an obligatory part of the overall analysis system. Over the last years, substantial progress has been made in the development of automated image exploitation algorithms. Hence, we investigate how the benefits of automated video analysis can be integrated suitably into current video exploitation systems. In this paper, a system design is introduced which strives to combine the strengths of the human observer's perception with those of the automated algorithms, thus aiming to improve the overall performance of a real-time video analysis system. The system design builds on prior work where we showed the benefits for the human observer of a user interface which utilizes the human visual focus of attention, revealed by the eye gaze direction, for interaction with the image exploitation system; eye tracker-based interaction allows much faster, more convenient, and equally precise moving target acquisition in video images than traditional computer mouse selection. The system design also builds on our prior work on automated target detection, segmentation, and tracking algorithms. Beside the system design, a first pilot study is presented, in which we investigated how the participants (all non-experts in video analysis) performed in initializing an object tracking subsystem by selecting a target for tracking. Preliminary results show that the gaze + key press technique is an effective, efficient, and easy-to-use interaction technique when performing selection operations on moving targets in videos in order to initialize an object tracking function.

  3. Enhancements to the Sentinel Fireball Network Video Software

    NASA Astrophysics Data System (ADS)

    Watson, Wayne

    2009-05-01

    The Sentinel Fireball Network, which supports imaging of bright meteors (fireballs), has been in existence for over ten years. Nearly five years ago it moved from gathering meteor data with a camera and VCR video tape to a fisheye lens attached to a hardware device, the Sentinel box, which allowed meteor data to be recorded on a PC operating under real-time Linux. In 2006, that software, sentuser, was made available on the Apple, Linux, and Windows operating systems using the Python computer language. It provides basic video and management functionality and a small amount of analytic software capability. This paper describes the new features planned for the software and, additionally, reviews some of the past and present research and networks that use video equipment to collect and analyze fireball data, with applicability to sentuser.

  4. Operator Selection for Unmanned Aerial Vehicle Operators: A Comparison of Video Game Players and Manned Aircraft Pilots

    DTIC Science & Technology

    2009-11-01

    AFRL-RH-WP-TR-2010-0057: Operator Selection for Unmanned Aerial Vehicle Operators: A Comparison of Video Game Players and Manned Aircraft Pilots (Oct 2008 - 30 Nov 2009). ...training regimens leading to a potential shortage of qualified UAS pilots. This study attempted to discover whether video game players (VGPs) possess

  5. Endoscopic techniques in aesthetic plastic surgery.

    PubMed

    McCain, L A; Jones, G

    1995-01-01

    There has been an explosive interest in endoscopic techniques by plastic surgeons over the past two years. Procedures such as facial rejuvenation, breast augmentation and abdominoplasty are being performed with endoscopic assistance. Endoscopic operations require a complex setup with components such as video camera, light sources, cables and hard instruments. The Hopkins Rod Lens system consists of optical fibers for illumination, an objective lens, an image retrieval system, a series of rods and lenses, and an eyepiece for image collection. Good illumination of the body cavity is essential for endoscopic procedures. Placement of the video camera on the eyepiece of the endoscope gives a clear, brightly illuminated large image on the monitor. The video monitor provides the surgical team with the endoscopic image. It is important to become familiar with the equipment before actually doing cases. Several options exist for staff education. In the operating room the endoscopic cart needs to be positioned to allow a clear unrestricted view of the video monitor by the surgeon and the operating team. Fogging of the endoscope may be prevented during induction by using FREDD (a fog reduction/elimination device) or a warm bath. The camera needs to be white balanced. During the procedure, the nurse monitors the level of dissection and assesses for clogging of the suction.

  6. Using Videos Derived from Simulations to Support the Analysis of Spatial Awareness in Synthetic Vision Displays

    NASA Technical Reports Server (NTRS)

    Boton, Matthew L.; Bass, Ellen J.; Comstock, James R., Jr.

    2006-01-01

    The evaluation of human-centered systems can be performed using a variety of different methodologies. This paper describes a human-centered systems evaluation methodology where participants watch 5-second non-interactive videos of a system in operation before supplying judgments and subjective measures based on the information conveyed in the videos. This methodology was used to evaluate the ability of different textures and fields of view to convey spatial awareness in synthetic vision system (SVS) displays. It produced significant results for both judgment-based and subjective measures. This method is compared to other methods commonly used to evaluate SVS displays based on cost, the amount of experimental time required, experimental flexibility, and the type of data provided.

  7. [Development of a video image system for wireless capsule endoscopes based on DSP].

    PubMed

    Yang, Li; Peng, Chenglin; Wu, Huafeng; Zhao, Dechun; Zhang, Jinhua

    2008-02-01

    A video image recorder was designed to record video from wireless capsule endoscopes. The TMS320C6211 DSP from Texas Instruments Inc. is the core processor of this system. Images are periodically acquired from a Composite Video Baseband Signal (CVBS) source and scaled by a video decoder (SAA7114H). Video data are transported from a high-speed First-In First-Out (FIFO) buffer to the Digital Signal Processor (DSP) under the control of a Complex Programmable Logic Device (CPLD). The JPEG algorithm is adopted for image coding, and the compressed data are stored from the DSP to a Compact Flash (CF) card. The TMS320C6211 DSP is mainly used for image compression and data transport. A fast Discrete Cosine Transform (DCT) algorithm and a fast coefficient quantization algorithm are used to increase the operating speed of the DSP and to shrink the executable code. At the same time, a proper address range is assigned to each memory, which operate at different speeds, and the memory structure is optimized. In addition, the system makes extensive use of Enhanced Direct Memory Access (EDMA) to transport and process image data, which results in stable, high performance.
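The DCT-plus-quantization stage described above can be sketched as a naive reference implementation (a toy illustration in Python, not the authors' optimized DSP code; function names are ours, and a single quantization step stands in for JPEG's 8x8 quantization table):

```python
import math

def dct2_8x8(block):
    """Naive 2-D DCT-II on an 8x8 block (the transform JPEG applies
    before quantization); O(N^4), fine for illustration."""
    N = 8
    def c(k):
        return math.sqrt(1.0 / N) if k == 0 else math.sqrt(2.0 / N)
    out = [[0.0] * N for _ in range(N)]
    for u in range(N):
        for v in range(N):
            s = 0.0
            for x in range(N):
                for y in range(N):
                    s += (block[x][y]
                          * math.cos((2 * x + 1) * u * math.pi / (2 * N))
                          * math.cos((2 * y + 1) * v * math.pi / (2 * N)))
            out[u][v] = c(u) * c(v) * s
    return out

def quantize(coeffs, step=16):
    """Uniform quantization of DCT coefficients (real JPEG uses an
    8x8 table of step sizes; one step keeps the sketch short)."""
    return [[round(c / step) for c in row] for row in coeffs]

flat = [[128] * 8 for _ in range(8)]       # a uniform gray block
qc = quantize(dct2_8x8(flat))
# Only the DC coefficient survives: 128 * 8 / 16 = 64
```

A real JPEG pipeline would follow this with zig-zag scanning and entropy coding of the quantized coefficients.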

  8. Construction and Operation of a High-Speed, High-Precision Eye Tracker for Tight Stimulus Synchronization and Real-Time Gaze Monitoring in Human and Animal Subjects.

    PubMed

    Farivar, Reza; Michaud-Landry, Danny

    2016-01-01

    Measurements of the fast and precise movements of the eye (critical to many vision, oculomotor, and animal behavior studies) can be made non-invasively by video oculography. The protocol here describes the construction and operation of a research-grade video oculography system with ~0.1° precision over the full typical viewing range, at over 450 Hz, with tight synchronization to stimulus onset. The protocol consists of three stages: (1) system assembly, (2) calibration for both cooperative and minimally cooperative subjects (e.g., animals or infants), and (3) gaze monitoring and recording.

  9. A joint signal processing and cryptographic approach to multimedia encryption.

    PubMed

    Mao, Yinian; Wu, Min

    2006-07-01

    In recent years, there has been an increasing trend for multimedia applications to use delegate service providers for content distribution, archiving, search, and retrieval. These delegate services have brought new challenges to the protection of multimedia content confidentiality. This paper discusses the importance and feasibility of applying a joint signal processing and cryptographic approach to multimedia encryption, in order to address the access control issues unique to multimedia applications. We propose two atomic encryption operations that can preserve standard compliance and are friendly to delegate processing. Quantitative analysis for these operations is presented to demonstrate that a good tradeoff can be made between security and bitrate overhead. In assisting the design and evaluation of media security systems, we also propose a set of multimedia-oriented security scores to quantify the security against approximation attacks and to complement the existing notion of generic data security. Using video as an example, we present a systematic study on how to strategically integrate different atomic operations to build a video encryption system. The resulting system can provide superior performance over both generic encryption and its simple adaptation to video in terms of a joint consideration of security, bitrate overhead, and friendliness to delegate processing.
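One atomic operation in the spirit of this joint signal-processing-and-cryptographic approach is sign encryption of quantized coefficients: the ciphertext remains a syntactically valid coefficient array, so a standard-compliant decoder or delegate processor can still parse the stream. The sketch below is our own illustration under assumed names (a SHA-256-derived keystream stands in for a real cipher), not the paper's specific construction:

```python
import hashlib

def keystream_bits(key, n):
    """Toy keystream: n pseudorandom bits derived from `key` via
    SHA-256 in counter mode (illustration only, not a vetted cipher)."""
    bits = []
    counter = 0
    while len(bits) < n:
        digest = hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        for byte in digest:
            for k in range(8):
                bits.append((byte >> k) & 1)
        counter += 1
    return bits[:n]

def encrypt_signs(coeffs, key):
    """Format-friendly atomic operation: flip the sign of each nonzero
    coefficient according to a keystream.  The output is still a valid
    coefficient array, and the operation is its own inverse."""
    ks = keystream_bits(key, len(coeffs))
    return [-c if (c != 0 and b) else c for c, b in zip(coeffs, ks)]
```

Because sign flipping is an involution, applying `encrypt_signs` twice with the same key recovers the original coefficients; magnitudes (and hence bitrate) are unchanged, which echoes the paper's security-versus-overhead tradeoff.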

  10. Improved segmentation of occluded and adjoining vehicles in traffic surveillance videos

    NASA Astrophysics Data System (ADS)

    Juneja, Medha; Grover, Priyanka

    2013-12-01

    Occlusion in image processing refers to concealment of any part of an object, or the whole object, from the view of an observer. Real-time videos captured by static cameras on roads often contain overlapping and, hence, occluded vehicles. Occlusion in traffic surveillance videos usually occurs when an object being tracked is hidden by another object, which makes it difficult for object detection algorithms to distinguish all the vehicles efficiently. Morphological operations also tend to join vehicles in close proximity, resulting in the formation of a single bounding box around more than one vehicle. Such problems lead to errors in further video processing, such as counting the vehicles in a video. The proposed system brings forward an efficient moving-object detection and tracking approach to reduce such errors. The paper uses a successive frame subtraction technique for detection of moving objects. Further, it implements the watershed algorithm to segment overlapped and adjoining vehicles. The segmentation results have been improved by the use of noise removal and morphological operations.
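The successive-frame-subtraction step, and the morphological dilation that can merge the masks of vehicles in close proximity, can be sketched in pure Python (an illustration with assumed names, not the authors' implementation; the watershed stage is omitted):

```python
def moving_object_mask(prev, curr, thresh=25):
    """Successive-frame subtraction: mark pixels whose intensity
    changed by more than `thresh` between consecutive frames."""
    return [[1 if abs(c - p) > thresh else 0
             for p, c in zip(prow, crow)]
            for prow, crow in zip(prev, curr)]

def dilate3x3(mask):
    """3x3 binary dilation: the morphological step that grows blobs
    and can join the bounding boxes of nearby vehicles."""
    h, w = len(mask), len(mask[0])
    return [[max(mask[a][b]
                 for a in range(max(0, i - 1), min(h, i + 2))
                 for b in range(max(0, j - 1), min(w, j + 2)))
             for j in range(w)]
            for i in range(h)]

# A single changed pixel grows to a 3x3 blob after dilation.
prev = [[10] * 5 for _ in range(5)]
curr = [row[:] for row in prev]
curr[2][2] = 100
blob = dilate3x3(moving_object_mask(prev, curr))
```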

  11. 76 FR 40263 - Implementation of Section 304 of the Telecommunications Act of 1996: Commercial Availability of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-07-08

    ... video programming and other services offered over multichannel video programming systems.'' Congress, in... services offered by a cable operator. The Commission anticipated that the parties to the MOU would... request specific channels from the cable head-end. SDV allows cable providers to offer their services more...

  12. High-resolution streaming video integrated with UGS systems

    NASA Astrophysics Data System (ADS)

    Rohrer, Matthew

    2010-04-01

    Imagery has proven to be a valuable complement to Unattended Ground Sensor (UGS) systems: it provides ultimate verification of the nature of detected targets. However, due to the power, bandwidth, and technological limitations inherent to UGS, sacrifices have been made in the imagery portion of such systems. The result is that these systems produce lower-resolution images in small quantities. Currently, a high-resolution wireless imaging system is being developed to bring megapixel streaming video to remote locations and operate in concert with UGS. This paper provides an overview of how Wi-Fi radios, new image-based Digital Signal Processors (DSPs) running advanced target detection algorithms, and high-resolution cameras give the user an opportunity to take high-powered video imagers to areas where power conservation is a necessity.

  13. The design of red-blue 3D video fusion system based on DM642

    NASA Astrophysics Data System (ADS)

    Fu, Rongguo; Luo, Hao; Lv, Jin; Feng, Shu; Wei, Yifang; Zhang, Hao

    2016-10-01

    Aiming at the uncertainty in traditional 3D video capture of parameters such as camera focal length and the distance and angle between the two cameras, a red-blue 3D video fusion system based on the DM642 hardware processing platform is designed with parallel optical axes. To counter the brightness reduction typical of traditional 3D video, a brightness enhancement algorithm based on human visual characteristics is proposed, together with a luminance-component processing method based on the YCbCr color space. The DSP/BIOS real-time operating system is used to improve real-time performance. The video processing circuit built around the DM642 enhances image brightness, converts the YCbCr video signals to RGB, extracts the R component from one camera and the G and B components from the other synchronously, and finally outputs the fused 3D images. Real-time adjustments such as translation and scaling of the two color components are realized through serial communication between the VC software and the BIOS. By adding the red and blue components, the system reduces the loss of chrominance and keeps the picture's color saturation above 95% of the original. The optimized enhancement algorithm reduces the amount of data processed during fusion, shortening fusion time and improving the viewing experience. Experimental results show that the system can capture images at near distance, output red-blue 3D video, and provide a pleasant experience to audiences wearing red-blue glasses.
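The per-pixel color handling described above can be sketched as follows (BT.601 full-range conversion constants; `anaglyph_pixel` is our illustrative name, not the DM642 firmware):

```python
def ycbcr_to_rgb(y, cb, cr):
    """BT.601 YCbCr -> RGB conversion for one full-range 8-bit pixel
    (chroma components carry an offset of 128)."""
    r = y + 1.402 * (cr - 128)
    g = y - 0.344136 * (cb - 128) - 0.714136 * (cr - 128)
    b = y + 1.772 * (cb - 128)
    clamp = lambda v: max(0, min(255, round(v)))
    return clamp(r), clamp(g), clamp(b)

def anaglyph_pixel(left_ycbcr, right_ycbcr):
    """Red-blue fusion: take the R channel from the left camera and
    the G and B channels from the right camera, as in the system
    described above."""
    r, _, _ = ycbcr_to_rgb(*left_ycbcr)
    _, g, b = ycbcr_to_rgb(*right_ycbcr)
    return r, g, b
```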

  14. Remote driving with reduced bandwidth communication

    NASA Technical Reports Server (NTRS)

    Depiero, Frederick W.; Noell, Timothy E.; Gee, Timothy F.

    1993-01-01

    Oak Ridge National Laboratory has developed a real-time video transmission system for low-bandwidth remote operations. The system supports both continuous transmission of video for remote driving and progressive transmission of still images. Inherent in the system design is a spatiotemporal limitation of the effects of channel errors. The average data rate of the system is 64,000 bits/s, a compression of approximately 1000:1 for black-and-white National Television System Committee (NTSC) video. The image quality of the transmissions is maintained at a level that supports teleoperation of a high-mobility multipurpose wheeled vehicle at speeds up to 15 mph on a moguled dirt track. Video compression is achieved by using Laplacian image pyramids and a combination of classical techniques. Certain subbands of the image pyramid are transmitted using interframe differencing with a periodic refresh to aid in bandwidth reduction. Images are also foveated to concentrate image detail in a steerable region. The system supports dynamic video quality adjustments between frame rate, image detail, and foveation rate. A typical configuration used during driving has a frame rate of 4 Hz, a compression per frame of 125:1, and a resulting latency of less than 1 s.
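The Laplacian-pyramid idea behind the codec can be shown on a 1-D signal (a simplified sketch, with pair-averaging standing in for the Gaussian blur-and-decimate of a real pyramid; not ORNL's implementation). Each level stores only the detail lost by downsampling, and those sparse detail bands are what interframe differencing then compresses:

```python
def downsample(sig):
    """Halve a 1-D signal by averaging adjacent pairs."""
    return [(sig[i] + sig[i + 1]) / 2 for i in range(0, len(sig) - 1, 2)]

def upsample(sig):
    """Double the length by sample repetition."""
    return [v for s in sig for v in (s, s)]

def laplacian_pyramid(sig, levels):
    """Each level stores the residual lost by down/upsampling; the
    final entry stores the remaining low-pass signal."""
    pyr = []
    for _ in range(levels):
        small = downsample(sig)
        pyr.append([a - b for a, b in zip(sig, upsample(small))])
        sig = small
    pyr.append(sig)
    return pyr

def reconstruct(pyr):
    """Invert the pyramid: upsample and add back each detail band."""
    sig = pyr[-1]
    for detail in reversed(pyr[:-1]):
        sig = [a + b for a, b in zip(upsample(sig), detail)]
    return sig
```

With these definitions the pyramid is exactly invertible, which is why quality control can be exercised per subband without losing the ability to rebuild the frame.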

  15. Control Method for Video Guidance Sensor System

    NASA Technical Reports Server (NTRS)

    Howard, Richard T. (Inventor); Book, Michael L. (Inventor); Bryan, Thomas C. (Inventor)

    2005-01-01

    A method is provided for controlling operations in a video guidance sensor system wherein images of laser output signals transmitted by the system and returned from a target are captured and processed by the system to produce data used in tracking of the target. Six modes of operation are provided as follows: (i) a reset mode; (ii) a diagnostic mode; (iii) a standby mode; (iv) an acquisition mode; (v) a tracking mode; and (vi) a spot mode wherein captured images of returned laser signals are processed to produce data for all spots found in the image. The method provides for automatic transition to the standby mode from the reset mode after integrity checks are performed and from the diagnostic mode to the reset mode after diagnostic operations are carried out. Further, acceptance of reset and diagnostic commands is permitted only when the system is in the standby mode. The method also provides for automatic transition from the acquisition mode to the tracking mode when an acceptable target is found.

  16. Control method for video guidance sensor system

    NASA Technical Reports Server (NTRS)

    Howard, Richard T. (Inventor); Book, Michael L. (Inventor); Bryan, Thomas C. (Inventor)

    2005-01-01

    A method is provided for controlling operations in a video guidance sensor system wherein images of laser output signals transmitted by the system and returned from a target are captured and processed by the system to produce data used in tracking of the target. Six modes of operation are provided as follows: (i) a reset mode; (ii) a diagnostic mode; (iii) a standby mode; (iv) an acquisition mode; (v) a tracking mode; and (vi) a spot mode wherein captured images of returned laser signals are processed to produce data for all spots found in the image. The method provides for automatic transition to the standby mode from the reset mode after integrity checks are performed and from the diagnostic mode to the reset mode after diagnostic operations are carried out. Further, acceptance of reset and diagnostic commands is permitted only when the system is in the standby mode. The method also provides for automatic transition from the acquisition mode to the tracking mode when an acceptable target is found.
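The mode-transition rules of this control method can be sketched as a small table-driven state machine (the command table below is our reading of the abstract, shown for illustration; the patent's claims are more detailed):

```python
# Automatic transitions and their triggering events.
AUTO = {
    "reset": "standby",         # after integrity checks pass
    "diagnostic": "reset",      # after diagnostic operations complete
    "acquisition": "tracking",  # when an acceptable target is found
}

# Operator commands accepted in each mode (illustrative assumption,
# except that reset and diagnostic commands are only accepted from
# standby, which the abstract states explicitly).
ALLOWED_COMMANDS = {
    "standby": {"reset", "diagnostic", "acquisition", "spot"},
    "acquisition": {"standby"},
    "tracking": {"standby"},
    "spot": {"standby"},
    "reset": set(),        # leaves only via the automatic transition
    "diagnostic": set(),
}

def command(mode, requested):
    """Apply an operator command; ignored unless allowed in `mode`."""
    return requested if requested in ALLOWED_COMMANDS[mode] else mode

def auto_step(mode, event_done):
    """Take the automatic transition for `mode` once its triggering
    event (integrity check, diagnostics, target acquisition) fires."""
    return AUTO[mode] if event_done and mode in AUTO else mode
```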

  17. Video feedforward for rapid learning of a picture-based communication system.

    PubMed

    Smith, Jemma; Hand, Linda; Dowrick, Peter W

    2014-04-01

    This study examined the efficacy of video self modeling (VSM) using feedforward, to teach various goals of a picture exchange communication system (PECS). The participants were two boys with autism and one man with Down syndrome. All three participants were non-verbal with no current functional system of communication; the two children had long histories of PECS failure. A series of replications, with different length baselines, was used to examine whether video self modeling could replace the PECS method of teaching to achieve the same goals. All three participants showed rapid learning of their target behavior when introduced to their self modeling videos, and effects generalized without the need for further intervention. We conclude that VSM, using feedforward, can provide a fast, simple way of teaching the use of a picture-based communication system without the need for prompts or intensive operant conditioning. VSM may provide an accessible, easy-to-use alternative to common methods of teaching augmentative and alternative communication systems.

  18. Recent advances in nondestructive evaluation made possible by novel uses of video systems

    NASA Technical Reports Server (NTRS)

    Generazio, Edward R.; Roth, Don J.

    1990-01-01

    Complex materials are being developed for use in future advanced aerospace systems. High temperature materials have been targeted as a major area of materials development. The development of composites consisting of ceramic matrix and ceramic fibers or whiskers is currently being aggressively pursued internationally. These new advanced materials are difficult and costly to produce; however, their low density and high operating temperature range are needed for the next generation of advanced aerospace systems. These materials represent a challenge to the nondestructive evaluation community. Video imaging techniques not only enhance the nondestructive evaluation, but they are also required for proper evaluation of these advanced materials. Specific research examples are given, highlighting the impact that video systems have had on the nondestructive evaluation of ceramics. An image processing technique for computerized determination of grain and pore size distribution functions from microstructural images is discussed. The uses of video and computer systems for displaying, evaluating, and interpreting ultrasonic image data are presented.

  19. SU-E-J-196: Implementation of An In-House Visual Feedback System for Motion Management During Radiation Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nguyen, V; James, J; Wang, B

    Purpose: To describe an in-house video goggle feedback system for motion management during simulation and treatment of radiation therapy patients. Methods: This video goggle system works by splitting and amplifying the video output signal directly from the Varian Real-Time Position Management (RPM) workstation or TrueBeam imaging workstation into two signals using a Distribution Amplifier. The first signal S[1] gets reconnected back to the monitor. The second signal S[2] gets connected to the input of a Video Scaler. The S[2] signal can be scaled, cropped and panned in real time to display only the relevant information to the patient. The output signal from the Video Scaler gets connected to an HDMI Extender Transmitter via a DVI-D to HDMI converter cable. The S[2] signal can be transported from the HDMI Extender Transmitter to the HDMI Extender Receiver located inside the treatment room via a Cat5e/6 cable. Inside the treatment room, the HDMI Extender Receiver is permanently mounted on the wall near the conduit where the Cat5e/6 cable is located. An HDMI cable is used to connect from the output of the HDMI Receiver to the video goggles. Results: This video goggle feedback system is currently being used at two institutions. At one institution, the system was just recently implemented for simulation and treatments on two breath-hold gated patients with 8+ total fractions over a two month period. At the other institution, the system was used to treat 100+ breath-hold gated patients on three Varian TrueBeam linacs and has been operational for twelve months. The average time to prepare the video goggle system for treatment is less than 1 minute. Conclusion: The video goggle system provides an efficient and reliable method to set up a video feedback signal for radiotherapy patients with motion management.

  20. Subjective evaluation of H.265/HEVC based dynamic adaptive video streaming over HTTP (HEVC-DASH)

    NASA Astrophysics Data System (ADS)

    Irondi, Iheanyi; Wang, Qi; Grecos, Christos

    2015-02-01

    The Dynamic Adaptive Streaming over HTTP (DASH) standard is becoming increasingly popular for real-time adaptive HTTP streaming of Internet video in response to unstable network conditions. Integration of DASH streaming techniques with the new H.265/HEVC video coding standard is a promising area of research. The performance of HEVC-DASH systems has previously been evaluated by a few researchers using objective metrics; however, subjective evaluation would provide a better measure of the user's Quality of Experience (QoE) and of overall system performance. This paper presents a subjective evaluation of an HEVC-DASH system implemented in a hardware testbed. Previous studies in this area have focused on the current H.264/AVC (Advanced Video Coding) or H.264/SVC (Scalable Video Coding) codecs and, moreover, there has been no established standard test procedure for the subjective evaluation of DASH adaptive streaming. In this paper, we define a test plan for HEVC-DASH with a carefully justified data set, employing video sequences long enough to demonstrate the bitrate switching operations in response to various network condition patterns. We evaluate the end user's real-time QoE online by investigating the perceived impact of delay, different packet loss rates, fluctuating bandwidth, and different DASH video stream segment sizes on a streaming session using different video sequences. The Mean Opinion Score (MOS) results give insight into the performance of the system and the expectations of the users. The results show the impact of different network impairments and different video segment sizes on users' QoE; further analysis and study may help in optimizing system performance.

  1. Composite video and graphics display for multiple camera viewing system in robotics and teleoperation

    NASA Technical Reports Server (NTRS)

    Diner, Daniel B. (Inventor); Venema, Steven C. (Inventor)

    1991-01-01

    A system for real-time video image display for robotics or remote-vehicle teleoperation is described that has at least one robot arm or remotely operated vehicle controlled by an operator through hand-controllers, and one or more television cameras and optional lighting element. The system has at least one television monitor for display of a television image from a selected camera and the ability to select one of the cameras for image display. Graphics are generated with icons of cameras and lighting elements for display surrounding the television image to provide the operator information on: the location and orientation of each camera and lighting element; the region of illumination of each lighting element; the viewed region and range of focus of each camera; which camera is currently selected for image display for each monitor; and when the controller coordinates for said robot arms or remotely operated vehicles have been transformed to correspond to coordinates of a selected or nonselected camera.

  2. Composite video and graphics display for camera viewing systems in robotics and teleoperation

    NASA Technical Reports Server (NTRS)

    Diner, Daniel B. (Inventor); Venema, Steven C. (Inventor)

    1993-01-01

    A system for real-time video image display for robotics or remote-vehicle teleoperation is described that has at least one robot arm or remotely operated vehicle controlled by an operator through hand-controllers, and one or more television cameras and optional lighting element. The system has at least one television monitor for display of a television image from a selected camera and the ability to select one of the cameras for image display. Graphics are generated with icons of cameras and lighting elements for display surrounding the television image to provide the operator information on: the location and orientation of each camera and lighting element; the region of illumination of each lighting element; the viewed region and range of focus of each camera; which camera is currently selected for image display for each monitor; and when the controller coordinates for said robot arms or remotely operated vehicles have been transformed to correspond to coordinates of a selected or nonselected camera.

  3. The Surgeon's View: Comparison of Two Digital Video Recording Systems in Veterinary Surgery.

    PubMed

    Giusto, Gessica; Caramello, Vittorio; Comino, Francesco; Gandini, Marco

    2015-01-01

    Video recording and photography during surgical procedures are useful in veterinary medicine for several reasons, including legal, educational, and archival purposes. Many systems are available, such as hand cameras, light-mounted cameras, and head cameras. We chose a reasonably priced head camera that is among the smallest video cameras available. To best describe its possible uses and advantages, we recorded video and images of eight different surgical cases and procedures, both in hospital and field settings. All procedures were recorded both with a head-mounted camera and a commercial hand-held photo camera. Then sixteen volunteers (eight senior clinicians and eight final-year students) completed an evaluation questionnaire. Both cameras produced high-quality photographs and videos, but observers rated the head camera significantly better regarding point of view and their understanding of the surgical operation. The head camera was considered significantly more useful in teaching surgical procedures. Interestingly, senior clinicians tended to assign generally lower scores compared to students. The head camera we tested is an effective, easy-to-use tool for recording surgeries and various veterinary procedures in all situations, with no need for assistance from a dedicated operator. It can be a valuable aid for veterinarians working in all fields of the profession and a useful tool for veterinary surgical education.

  4. Video enhancement workbench: an operational real-time video image processing system

    NASA Astrophysics Data System (ADS)

    Yool, Stephen R.; Van Vactor, David L.; Smedley, Kirk G.

    1993-01-01

    Video image sequences can be exploited in real time, giving analysts rapid access to information for military or criminal investigations. Video-rate dynamic range adjustment subdues fluctuations in image intensity, thereby assisting discrimination of small or low-contrast objects. Contrast-regulated unsharp masking enhances differentially shadowed or otherwise low-contrast image regions. Real-time removal of localized hotspots, when combined with automatic histogram equalization, may enhance resolution of directly adjacent objects. In video imagery corrupted by zero-mean noise, real-time frame averaging can assist resolution and location of small or low-contrast objects. To maximize analyst efficiency, lengthy video sequences can be screened automatically for low-frequency, high-magnitude events. Combined zoom, roam, and automatic dynamic range adjustment permit rapid analysis of facial features captured by video cameras recording crimes in progress. When trying to resolve small objects in murky seawater, stereo video places the moving imagery in an optimal setting for human interpretation.
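The frame-averaging technique for zero-mean noise mentioned above is straightforward to sketch (pixel-wise mean; for N averaged frames the noise standard deviation falls by a factor of sqrt(N)). This is an illustration with assumed names, not the workbench's real-time code:

```python
def average_frames(frames):
    """Pixel-wise mean of a list of equally sized frames; zero-mean
    noise cancels while the static scene content is preserved."""
    n = len(frames)
    h, w = len(frames[0]), len(frames[0][0])
    return [[sum(f[i][j] for f in frames) / n for j in range(w)]
            for i in range(h)]

# Two frames of the same scene with opposite noise average back to
# the true value: (105 + 95) / 2 == 100.
clean = average_frames([[[105.0]], [[95.0]]])
```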

  5. A design of real time image capturing and processing system using Texas Instrument's processor

    NASA Astrophysics Data System (ADS)

    Wee, Toon-Joo; Chaisorn, Lekha; Rahardja, Susanto; Gan, Woon-Seng

    2007-09-01

    In this work, we developed and implemented an image capturing and processing system capable of capturing images from an input video in real time. The input video can come from a PC, a video camcorder, or a DVD player. The system has two modes of operation. In the first mode, an input image from the PC is processed on the processing board (a development platform with a digital signal processor) and displayed on the PC. In the second mode, the currently captured image from the video camcorder (or DVD player) is processed on the board and displayed on an LCD monitor. The major difference between our system and existing conventional systems is that the image-processing functions are performed on the board instead of the PC, so that the functions can be used for further development on the board. The user controls the operation of the board through a Graphical User Interface (GUI) on the PC. To achieve smooth image data transfer between the PC and the board, we employed Real-Time Data Exchange (RTDX) technology to create a link between them. For image processing, we developed three main groups of functions: (1) Point Processing, (2) Filtering, and (3) 'Others'. Point Processing includes rotation, negation, and mirroring. The Filtering category provides median, adaptive, smoothing, and sharpening filters in the time domain. The 'Others' category provides auto-contrast adjustment, edge detection, segmentation, and sepia color; these functions either add an effect to the image or enhance it. We developed and implemented our system in C/C# on the TMS320DM642 (DM642) board from Texas Instruments (TI). The system was showcased at the College of Engineering (CoE) exhibition 2006 at Nanyang Technological University (NTU), where more than 40 users tried it. The results demonstrate that our system is adequate for real-time image capturing, and it can be applied to areas such as medical imaging and video surveillance.
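Two of the listed functions, median filtering and negation, can be sketched as follows (the showcased system was written in C/C# on the DM642; this pure-Python version is an illustration only):

```python
def median3x3(img):
    """3x3 median filter (the Filtering group's median operation);
    borders are left unfiltered for brevity."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            window = sorted(img[a][b]
                            for a in range(i - 1, i + 2)
                            for b in range(j - 1, j + 2))
            out[i][j] = window[4]   # middle of 9 samples
    return out

def negate(img):
    """Point-processing negation of an 8-bit image."""
    return [[255 - v for v in row] for row in img]
```

A single impulse pixel (e.g., salt noise) is removed by the median filter because it never reaches the middle of the sorted 3x3 window.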

  6. A highly sensitive underwater video system for use in turbid aquaculture ponds.

    PubMed

    Hung, Chin-Chang; Tsao, Shih-Chieh; Huang, Kuo-Hao; Jang, Jia-Pu; Chang, Hsu-Kuang; Dobbs, Fred C

    2016-08-24

    The turbid, low-light waters characteristic of aquaculture ponds have made it difficult or impossible for previous video cameras to provide clear imagery of the ponds' benthic habitat. We developed a highly sensitive, underwater video system (UVS) for this particular application and tested it in shrimp ponds having turbidities typical of those in southern Taiwan. The system's high-quality video stream and images, together with its camera capacity (up to nine cameras), permit in situ observations of shrimp feeding behavior, shrimp size and internal anatomy, and organic matter residues on pond sediments. The UVS can operate continuously and be focused remotely, a convenience to shrimp farmers. The observations possible with the UVS provide aquaculturists with information critical to provision of feed with minimal waste; determining whether the accumulation of organic-matter residues dictates exchange of pond water; and management decisions concerning shrimp health.

  7. Quantitative assessment of human motion using video motion analysis

    NASA Technical Reports Server (NTRS)

    Probe, John D.

    1990-01-01

    In the study of the dynamics and kinematics of the human body, a wide variety of technologies has been developed. Photogrammetric techniques are well documented and are known to provide reliable positional data from recorded images. Often these techniques are used in conjunction with cinematography and videography for analysis of planar motion, and to a lesser degree three-dimensional motion. Cinematography has been the most widely used medium for movement analysis. Excessive operating costs and the lag time required for film development, coupled with recent advances in video technology, have allowed video-based motion analysis systems to emerge as a cost-effective method of collecting and analyzing human movement. The Anthropometric and Biomechanics Lab at Johnson Space Center utilizes the video-based Ariel Performance Analysis System to develop data on shirt-sleeved and space-suited human performance in order to plan efficient on-orbit intravehicular and extravehicular activities. The system is described.

  8. Three-dimensional, automated, real-time video system for tracking limb motion in brain-machine interface studies.

    PubMed

    Peikon, Ian D; Fitzsimmons, Nathan A; Lebedev, Mikhail A; Nicolelis, Miguel A L

    2009-06-15

    Collection and analysis of limb kinematic data are essential components of the study of biological motion, including research into biomechanics, kinesiology, neurophysiology and brain-machine interfaces (BMIs). In particular, BMI research requires advanced, real-time systems capable of sampling limb kinematics with minimal contact to the subject's body. To meet this demand, we have developed an automated video tracking system for real-time tracking of multiple body parts in freely behaving primates. The system employs high-contrast markers painted on the animal's joints to continuously track the three-dimensional positions of their limbs during activity. Two-dimensional coordinates captured by each video camera are combined and converted to three-dimensional coordinates using a quadratic fitting algorithm. Real-time operation of the system is accomplished using direct memory access (DMA). The system tracks the markers at a rate of 52 frames per second (fps) in real time, and up to 100 fps if video recordings are captured for later off-line analysis. The system has been tested in several BMI primate experiments, in which limb position was sampled simultaneously with chronic recordings of the extracellular activity of hundreds of cortical cells. During these recordings, multiple computational models were employed to extract a series of kinematic parameters from neuronal ensemble activity in real time. The system operated reliably under these experimental conditions and was able to compensate for marker occlusions that occurred during natural movements. We propose that this system could also be extended to applications that include other classes of biological motion.
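For intuition about recovering a 3-D marker position from two cameras, here is the textbook parallel-axis stereo model (the paper itself uses a quadratic fitting algorithm calibrated to its rig; this simpler pinhole model is shown only to illustrate the geometry, and all names are ours):

```python
def triangulate(xl, yl, xr, focal, baseline):
    """Recover a 3-D position from matched marker coordinates in two
    parallel, horizontally offset pinhole cameras.  Pixel coordinates
    are relative to each image center; `focal` is in pixels and
    `baseline` (the camera separation) in meters."""
    disparity = xl - xr
    if disparity == 0:
        raise ValueError("zero disparity: point at infinity")
    z = focal * baseline / disparity   # depth from disparity
    x = xl * z / focal                 # back-project to world X
    y = yl * z / focal                 # back-project to world Y
    return x, y, z
```

For example, with a 1000-pixel focal length and a 0.1 m baseline, a marker seen at x = 100 px in the left image and 50 px in the right (disparity 50 px) lies 2 m from the cameras.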

  9. Compression of stereoscopic video using MPEG-2

    NASA Astrophysics Data System (ADS)

    Puri, A.; Kollarits, Richard V.; Haskell, Barry G.

    1995-10-01

    Many current as well as emerging applications in areas of entertainment, remote operations, manufacturing, and medicine can benefit from the depth perception offered by stereoscopic video systems, which employ two views of a scene imaged under the constraints imposed by the human visual system. Among the many challenges to be overcome for practical realization and widespread use of 3D/stereoscopic systems are good 3D displays and efficient techniques for digital compression of enormous amounts of data while maintaining compatibility with normal video decoding and display systems. After a brief introduction to the basics of 3D/stereo, including issues of depth perception, stereoscopic 3D displays, and terminology in stereoscopic imaging and display, we present an overview of tools in the MPEG-2 video standard that are relevant to our discussion on compression of stereoscopic video, which is the main topic of this paper. Next, we outline the various approaches for compression of stereoscopic video and then focus on compatible stereoscopic video coding using MPEG-2 Temporal scalability concepts. Compatible coding employing two different types of prediction structures becomes potentially possible: disparity-compensated prediction, and combined disparity- and motion-compensated prediction. To further improve coding performance and display quality, preprocessing for reducing mismatch between the two views forming stereoscopic video is considered. Results of simulations performed on stereoscopic video of normal TV resolution are then reported, comparing the performance of the two prediction structures with the simulcast solution. It is found that combined disparity- and motion-compensated prediction offers the best performance. Results indicate that compression of both views of stereoscopic video of normal TV resolution appears feasible in a total of 6 to 8 Mbit/s. We then discuss multi-viewpoint video, a generalization of stereoscopic video. Finally, we describe ongoing efforts within MPEG-2 to define a profile for stereoscopic video coding, as well as the promise of MPEG-4 in addressing coding of multi-viewpoint video.

  10. Compression of stereoscopic video using MPEG-2

    NASA Astrophysics Data System (ADS)

    Puri, Atul; Kollarits, Richard V.; Haskell, Barry G.

    1995-12-01

    Many current as well as emerging applications in areas of entertainment, remote operations, manufacturing industry and medicine can benefit from the depth perception offered by stereoscopic video systems, which employ two views of a scene imaged under the constraints imposed by the human visual system. Among the many challenges to be overcome for practical realization and widespread use of 3D/stereoscopic systems are good 3D displays and efficient techniques for digital compression of enormous amounts of data while maintaining compatibility with normal video decoding and display systems. After a brief introduction to the basics of 3D/stereo, including issues of depth perception, stereoscopic 3D displays and terminology in stereoscopic imaging and display, we present an overview of tools in the MPEG-2 video standard that are relevant to our discussion on compression of stereoscopic video, which is the main topic of this paper. Next, we outline the various approaches for compression of stereoscopic video and then focus on compatible stereoscopic video coding using MPEG-2 Temporal scalability concepts. Compatible coding becomes possible with two different types of prediction structures: disparity-compensated prediction, and combined disparity- and motion-compensated prediction. To further improve coding performance and display quality, preprocessing for reducing mismatch between the two views forming stereoscopic video is considered. Results of simulations performed on stereoscopic video of normal TV resolution are then reported, comparing the performance of the two prediction structures with the simulcast solution. It is found that combined disparity- and motion-compensated prediction offers the best performance. Results indicate that compression of both views of stereoscopic video of normal TV resolution appears feasible in a total of 6 to 8 Mbit/s. We then discuss multi-viewpoint video, a generalization of stereoscopic video. Finally, we describe ongoing efforts within MPEG-2 to define a profile for stereoscopic video coding, as well as the promise of MPEG-4 in addressing coding of multi-viewpoint video.
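    The encoder-side choice between the two prediction structures can be illustrated with a small block-matching sketch (a toy model, not the MPEG-2 Temporal scalability encoder; the exhaustive search, block size and SAD criterion are illustrative assumptions): for each block of the right view, compare the best disparity-compensated candidate from the left view with the best motion-compensated candidate from the previous right-view frame, and keep whichever predicts with lower error.

```python
import numpy as np

def sad(a, b):
    """Sum of absolute differences between two equally sized blocks."""
    return np.abs(a.astype(int) - b.astype(int)).sum()

def best_block_prediction(target, left_view, prev_right, y, x, bs=8, search=4):
    """For one bs x bs block of the right view, compare disparity-compensated
    prediction (from the left view) against motion-compensated prediction
    (from the previous right-view frame) and return the lower-error choice."""
    block = target[y:y+bs, x:x+bs]
    best = (float("inf"), None, (0, 0))
    for ref_name, ref in (("disparity", left_view), ("motion", prev_right)):
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                yy, xx = y + dy, x + dx
                if 0 <= yy and 0 <= xx and yy + bs <= ref.shape[0] and xx + bs <= ref.shape[1]:
                    err = sad(block, ref[yy:yy+bs, xx:xx+bs])
                    if err < best[0]:
                        best = (err, ref_name, (dy, dx))
    return best  # (SAD, chosen reference, displacement vector)
```

    A real encoder would additionally code the prediction residual and signal the chosen reference per macroblock; here the function merely reports which reference wins.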

  11. 25 CFR 542.43 - What are the minimum internal control standards for surveillance for a Tier C gaming operation?

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... surveillance system that enable surveillance personnel to observe the table games remaining open for play and... recordings and/or digital records shall be provided to the Commission upon request. (x) Video library log. A video library log, or comparable alternative procedure approved by the Tribal gaming regulatory...

  12. 25 CFR 542.43 - What are the minimum internal control standards for surveillance for a Tier C gaming operation?

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... surveillance system that enable surveillance personnel to observe the table games remaining open for play and... recordings and/or digital records shall be provided to the Commission upon request. (x) Video library log. A video library log, or comparable alternative procedure approved by the Tribal gaming regulatory...

  13. Low-delay predictive audio coding for the HIVITS HDTV codec

    NASA Astrophysics Data System (ADS)

    McParland, A. K.; Gilchrist, N. H. C.

    1995-01-01

    The status of work relating to predictive audio coding, as part of the European project on High Quality Video Telephone and HD(TV) Systems (HIVITS), is reported. The predictive coding algorithm is developed, along with six-channel audio coding and decoding hardware. Demonstrations of the audio codec operating in conjunction with the video codec, are given.

  14. Intelligent viewing control for robotic and automation systems

    NASA Astrophysics Data System (ADS)

    Schenker, Paul S.; Peters, Stephen F.; Paljug, Eric D.; Kim, Won S.

    1994-10-01

    We present a new system for supervisory automated control of multiple remote cameras. Our primary purpose in developing this system has been to provide capability for knowledge-based, `hands-off' viewing during execution of teleoperation/telerobotic tasks. The reported technology has broader applicability to remote surveillance, telescience observation, automated manufacturing workcells, etc. We refer to this new capability as `Intelligent Viewing Control (IVC),' distinguishing it from simple programmed camera motion control. In the IVC system, camera viewing assignment, sequencing, positioning, panning, and parameter adjustment (zoom, focus, aperture, etc.) are invoked and interactively executed in real time by a knowledge-based controller, drawing on a priori known task models and constraints, including operator preferences. This multi-camera control is integrated with a real-time, high-fidelity 3D graphics simulation, which is correctly calibrated in perspective to the actual cameras and their platform kinematics (translation/pan-tilt). Such a merged graphics-with-video design allows the system user to preview and modify the planned (`choreographed') viewing sequences. Further, during actual task execution, the system operator has available both the resulting optimized video sequence and supplementary graphics views from arbitrary perspectives. IVC, including operator-interactive designation of robot task actions, is presented to the user as a well-integrated, video-graphic, single-screen user interface allowing easy access to all relevant telerobot communication/command/control resources. We describe and show pictorial results of a preliminary IVC system implementation for telerobotic servicing of a satellite.
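    A minimal sketch of the knowledge-based viewing-assignment idea, assuming each camera advertises which task regions it can cover plus its field of view, and that operator preferences are expressed as weights; the data model, names and selection rule are hypothetical, not the reported controller:

```python
# Hypothetical camera assignment: among cameras that cover the task region,
# prefer the operator's weighted choice, then the tightest field of view.
def assign_camera(cameras, task_region, preferences):
    """Return the best camera dict for viewing `task_region`, or None."""
    candidates = [c for c in cameras if task_region in c["covers"]]
    if not candidates:
        return None
    # Sort key: higher preference weight first, then smaller FOV (tighter view).
    return min(candidates,
               key=lambda c: (-preferences.get(c["name"], 0), c["fov_deg"]))

# Illustrative camera set for a telerobotic servicing task.
cameras = [
    {"name": "wrist", "covers": {"grasp-site"}, "fov_deg": 20},
    {"name": "overview", "covers": {"grasp-site", "workspace"}, "fov_deg": 60},
]
```

    A knowledge-based controller would re-run such a decision at each task step, then command pan/tilt/zoom for the winning camera.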

  15. Manufacturing Methods and Technology Program Automatic In-Process Microcircuit Evaluation.

    DTIC Science & Technology

    1980-10-01

    methods of controlling the AIME system are with the computer and associated interface (CPU control), and with controls located on the front panels...Sync and Blanking signals When the AIME system is being operated by the front panel controls, the computer does not influence the system operation. SU...the color video monitor display. The operator controls these parameters by 1) depressing the appropriate key on the keyboard, 2) observing on the

  16. Management by Trajectory

    NASA Image and Video Library

    2018-05-05

    This video provides an overview of the Management by Trajectory (MBT) concept of operations developed as part of a NASA Research Announcement (NRA) sponsored by NASA’s Aviation Operations and Safety Program (AOSP). Possible changes in roles and responsibilities among various agents in the air traffic system are identified, and the concept’s potential impact on system safety, in a way that brings the National Airspace System (NAS) closer to a full Trajectory-Based Operations (TBO) environment, is described.

  17. An integrated multispectral video and environmental monitoring system for the study of coastal processes and the support of beach management operations

    NASA Astrophysics Data System (ADS)

    Ghionis, George; Trygonis, Vassilis; Karydis, Antonis; Vousdoukas, Michalis; Alexandrakis, George; Drakopoulos, Panos; Amdreadis, Olympos; Psarros, Fotis; Velegrakis, Antonis; Poulos, Serafim

    2016-04-01

    Effective beach management requires environmental assessments that are based on sound science, are cost-effective and are available to beach users and managers in an accessible, timely and transparent manner. The most common problems are: 1) the available field data are scarce and of sub-optimal spatio-temporal resolution and coverage, 2) our understanding of local beach processes needs to be improved in order to accurately model/forecast beach dynamics under a changing climate, and 3) the information provided by coastal scientists/engineers in the form of data, models and scientific interpretation is often too complicated to be of direct use by coastal managers/decision makers. A multispectral video system has been developed, consisting of one or more video cameras operating in the visible part of the spectrum, a passive near-infrared (NIR) camera, an active NIR camera system, a thermal infrared camera and a spherical video camera, coupled with innovative image processing algorithms and a telemetric system for the monitoring of coastal environmental parameters. The complete system has the capability to record, process and communicate (in quasi-real time) high frequency information on shoreline position, wave breaking zones, wave run-up, erosion hot spots along the shoreline, nearshore wave height, turbidity, underwater visibility, wind speed and direction, air and sea temperature, solar radiation, UV radiation, relative humidity, barometric pressure and rainfall. An innovative, remotely-controlled interactive visual monitoring system, based on the spherical video camera (with a 360° field of view), combines the video streams from all cameras and can be used by beach managers to monitor (in real time) beach user numbers, flow activities and safety at beaches of high touristic value.
The high resolution near infrared cameras permit 24-hour monitoring of beach processes, while the thermal camera provides information on beach sediment temperature and moisture, can detect upwelling in the nearshore zone, and enhances the safety of beach users. All data can be presented in real- or quasi-real time and are stored for future analysis and training/validation of coastal processes models. Acknowledgements: This work was supported by the project BEACHTOUR (11SYN-8-1466) of the Operational Program "Cooperation 2011, Competitiveness and Entrepreneurship", co-funded by the European Regional Development Fund and the Greek Ministry of Education and Religious Affairs.

  18. ViCoMo: visual context modeling for scene understanding in video surveillance

    NASA Astrophysics Data System (ADS)

    Creusen, Ivo M.; Javanbakhti, Solmaz; Loomans, Marijn J. H.; Hazelhoff, Lykele B.; Roubtsova, Nadejda; Zinger, Svitlana; de With, Peter H. N.

    2013-10-01

    The use of contextual information can significantly aid scene understanding of surveillance video. Just detecting people and tracking them does not provide sufficient information to detect situations that require operator attention. We propose a proof-of-concept system that uses several sources of contextual information to improve scene understanding in surveillance video. The focus is on two scenarios that represent common video surveillance situations: parking lot surveillance and crowd monitoring. In the first scenario, a pan-tilt-zoom (PTZ) camera tracking system is developed for parking lot surveillance. Context is provided by a traffic sign recognition system that localizes regular and handicapped parking spot signs as well as license plates. The PTZ algorithm has the ability to selectively detect and track persons based on scene context. In the second scenario, a group analysis algorithm is introduced to detect groups of people. Contextual information is provided by traffic sign recognition and region labeling algorithms and exploited for behavior understanding. In both scenarios, decision engines are used to interpret and classify the output of the subsystems and, if necessary, raise operator alerts. We show that using context information enables the automated analysis of complicated scenarios that were previously not possible using conventional moving-object classification techniques.

  19. Optically phase-locked electronic speckle pattern interferometer

    NASA Astrophysics Data System (ADS)

    Moran, Steven E.; Law, Robert; Craig, Peter N.; Goldberg, Warren M.

    1987-02-01

    The design, theory, operation, and characteristics of an optically phase-locked electronic speckle pattern interferometer (OPL-ESPI) are described. The OPL-ESPI system couples an optical phase-locked loop with an ESPI system to generate real-time equal Doppler speckle contours of moving objects from unstable sensor platforms. In addition, the optical phase-locked loop provides the basis for a new ESPI video signal processing technique which incorporates local oscillator phase shifting coupled with video sequential frame subtraction.

  20. Integrating TV/digital data spectrograph system

    NASA Technical Reports Server (NTRS)

    Duncan, B. J.; Fay, T. D.; Miller, E. R.; Wamsteker, W.; Brown, R. M.; Neely, P. L.

    1975-01-01

    A 25-mm vidicon camera was previously modified to allow operation in an integration mode for low-light-level astronomical work. The camera was then mated to a low-dispersion spectrograph for obtaining spectral information in the 400 to 750 nm range. A high speed digital video image system was utilized to digitize the analog video signal, place the information directly into computer-type memory, and record data on digital magnetic tape for permanent storage and subsequent analysis.

  1. Light-Directed Ranging System Implementing Single Camera System for Telerobotics Applications

    NASA Technical Reports Server (NTRS)

    Wells, Dennis L. (Inventor); Li, Larry C. (Inventor); Cox, Brian J. (Inventor)

    1997-01-01

    A laser-directed ranging system has utility for use in various fields, such as telerobotics applications and other applications involving physically handicapped individuals. The ranging system includes a single video camera and a directional light source such as a laser mounted on a camera platform, and a remotely positioned operator. In one embodiment, the position of the camera platform is controlled by three servo motors to orient the roll axis, pitch axis and yaw axis of the video cameras, based upon an operator input such as head motion. The laser is offset vertically and horizontally from the camera, and the laser/camera platform is directed by the user to point the laser and the camera toward a target device. The image produced by the video camera is processed to eliminate all background images except for the spot created by the laser. This processing is performed by creating a digital image of the target prior to illumination by the laser, and then eliminating common pixels from the subsequent digital image which includes the laser spot. A reference point is defined at a point in the video frame, which may be located outside of the image area of the camera. The disparity between the digital image of the laser spot and the reference point is calculated for use in a ranging analysis to determine range to the target.
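    The spot-isolation and disparity step described above can be sketched in a few lines (an illustrative approximation: the patent eliminates common pixels between the pre- and post-illumination frames, approximated here by thresholded frame differencing; the function name and threshold are assumptions):

```python
import numpy as np

def laser_spot_disparity(before, after, reference_xy, threshold=30):
    """Isolate the laser spot by differencing the pre- and post-illumination
    frames (eliminating common pixels), take the centroid of the remaining
    bright pixels, and return its (dx, dy) disparity from the reference point.
    Returns None if no spot is found."""
    diff = np.abs(after.astype(int) - before.astype(int))
    ys, xs = np.nonzero(diff > threshold)
    if len(xs) == 0:
        return None
    cx, cy = xs.mean(), ys.mean()   # centroid of the laser-illuminated spot
    rx, ry = reference_xy
    return (cx - rx, cy - ry)
```

    The resulting disparity would then feed the ranging analysis, e.g. triangulation from the known vertical and horizontal offset between laser and camera.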

  2. The utility of live video capture to enhance debriefing following transcatheter aortic valve replacement.

    PubMed

    Seamans, David P; Louka, Boshra F; Fortuin, F David; Patel, Bhavesh M; Sweeney, John P; Lanza, Louis A; DeValeria, Patrick A; Ezrre, Kim M; Ramakrishna, Harish

    2016-10-01

    The surgical and procedural specialties are continually evolving their methods to include more complex and technically difficult cases. These cases can be longer and incorporate multiple teams in a different model of operating room synergy. Patients are frequently older, with comorbidities adding to the complexity of these cases. Recording of this environment has become more feasible recently with advancement in video and audio capture systems often used in the simulation realm. We began using live capture to record a new procedure shortly after starting these cases in our institution. This has provided continued assessment and evaluation of live procedures. The goal was to improve human factors and address situational challenges through review and debriefing. B-Line Medical's LiveCapture video system was used to record successive transcatheter aortic valve replacement (TAVR) procedures in our cardiac catheterization laboratory. An illustrative case is used to discuss analysis and debriefing of the case using this system. An illustrative case is presented that resulted in long-term changes to our approach to these cases. The video capture documented rare events during one of our TAVR procedures. Analysis and debriefing led to definitive changes in our practice. While there are hurdles to the use of this technology in every institution, the ongoing use of video capture, analysis, and debriefing may play an important role in the future of patient safety and human factors analysis in the operating environment.

  3. The utility of live video capture to enhance debriefing following transcatheter aortic valve replacement

    PubMed Central

    Seamans, David P.; Louka, Boshra F.; Fortuin, F. David; Patel, Bhavesh M.; Sweeney, John P.; Lanza, Louis A.; DeValeria, Patrick A.; Ezrre, Kim M.; Ramakrishna, Harish

    2016-01-01

    Background: The surgical and procedural specialties are continually evolving their methods to include more complex and technically difficult cases. These cases can be longer and incorporate multiple teams in a different model of operating room synergy. Patients are frequently older, with comorbidities adding to the complexity of these cases. Recording of this environment has become more feasible recently with advancement in video and audio capture systems often used in the simulation realm. Aims: We began using live capture to record a new procedure shortly after starting these cases in our institution. This has provided continued assessment and evaluation of live procedures. The goal was to improve human factors and address situational challenges through review and debriefing. Setting and Design: B-Line Medical's LiveCapture video system was used to record successive transcatheter aortic valve replacement (TAVR) procedures in our cardiac catheterization laboratory. An illustrative case is used to discuss analysis and debriefing of the case using this system. Results and Conclusions: An illustrative case is presented that resulted in long-term changes to our approach to these cases. The video capture documented rare events during one of our TAVR procedures. Analysis and debriefing led to definitive changes in our practice. While there are hurdles to the use of this technology in every institution, the ongoing use of video capture, analysis, and debriefing may play an important role in the future of patient safety and human factors analysis in the operating environment. PMID:27762242

  4. Sensing system for detection and control of deposition on pendant tubes in recovery and power boilers

    DOEpatents

    Kychakoff, George; Afromowitz, Martin A; Hugle, Richard E

    2005-06-21

    A system for detection and control of deposition on pendant tubes in recovery and power boilers includes one or more deposit monitoring sensors operating in infrared regions at about 4 or 8.7 microns and directly producing images of the interior of the boiler. An image pre-processing circuit (95) captures the 2-D image formed by the video data input and includes a low-pass filter for noise filtering of said video input. An image segmentation module (105) separates the image of the recovery boiler interior into background, pendant tubes, and deposition. An image-understanding unit (115) matches the derived regions to a 3-D model of said boiler, derives the 3-D structure of the deposition on the pendant tubes, and provides the information about deposits to the plant distributed control system (130) for more efficient operation of the plant pendant tube cleaning and operating systems.
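    The pre-processing and segmentation stages can be sketched as follows (a toy version under assumptions: a 3x3 mean filter stands in for the low-pass filter, and two fixed intensity thresholds stand in for the segmentation module; the patent does not specify these details):

```python
import numpy as np

def mean_filter3(img):
    """3x3 low-pass (mean) noise filter via shifted sums; edges handled by padding."""
    p = np.pad(img.astype(float), 1, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / 9.0

def segment(img, t_tube, t_deposit):
    """Label each pixel: 0 = background, 1 = pendant tube, 2 = deposition.
    The two thresholds (and the assumption that deposits image hotter than
    tubes) are illustrative stand-ins for the patent's segmentation module."""
    labels = np.zeros(img.shape, dtype=int)
    labels[img >= t_tube] = 1
    labels[img >= t_deposit] = 2
    return labels
```

    The labeled regions would then be matched against the 3-D boiler model by the image-understanding stage.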

  5. Real-time color image processing for forensic fiber investigations

    NASA Astrophysics Data System (ADS)

    Paulsson, Nils

    1995-09-01

    This paper describes a system for automatic fiber debris detection based on color identification. The properties of the system are fast analysis and high selectivity, a necessity when analyzing forensic fiber samples: an ordinary investigation separates the material into well above 100,000 video images to analyze. The system is based on standard techniques, with a CCD camera, a motorized sample table, and an IBM-compatible PC/AT with add-on boards for video frame digitization and stepping motor control as the main parts. It is possible to operate the instrument at full video rate (25 images/s) with the aid of the HSI (hue-saturation-intensity) color system and software optimization. High selectivity is achieved by separating the analysis into several steps. The first step is fast, direct color identification of objects in the analyzed video images; the second step analyzes detected objects in a more complex and time-consuming stage of the investigation, identifying single fiber fragments for subsequent analysis with more selective techniques.
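    The first-pass color identification in hue-saturation space can be sketched with a simple hue test (a minimal sketch using Python's HSV conversion as a stand-in for HSI; the tolerance and saturation floor are assumed parameters, not the paper's):

```python
import colorsys

def matches_target_hue(r, g, b, target_hue_deg, tol_deg=15, min_sat=0.2):
    """First-pass color test: a pixel is a candidate fiber pixel if it is
    saturated enough and its hue lies within tol_deg of the target hue
    (circular distance). RGB inputs are 0-255."""
    h, s, _ = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    if s < min_sat:
        return False  # near-grey pixels carry no reliable hue
    d = abs(h * 360.0 - target_hue_deg) % 360.0
    return min(d, 360.0 - d) <= tol_deg
```

    Working in hue-saturation space makes the first-pass test cheap and largely insensitive to brightness, which is what allows screening at full video rate.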

  6. Registration of multiple video images to preoperative CT for image-guided surgery

    NASA Astrophysics Data System (ADS)

    Clarkson, Matthew J.; Rueckert, Daniel; Hill, Derek L.; Hawkes, David J.

    1999-05-01

    In this paper we propose a method which uses multiple video images to establish the pose of a CT volume with respect to video camera coordinates for use in image guided surgery. The majority of neurosurgical procedures require the neurosurgeon to relate the pre-operative MR/CT data to the intra-operative scene. Registration of 2D video images to the pre-operative 3D image enables a perspective projection of the pre-operative data to be overlaid onto the video image. Our registration method is based on image intensity and uses a simple iterative optimization scheme to maximize the mutual information between a video image and a rendering from the pre-operative data. Video images are obtained from a stereo operating microscope, with a field of view of approximately 110 X 80 mm. We have extended an existing information theoretical framework for 2D-3D registration, so that multiple video images can be registered simultaneously to the pre-operative data. Experiments were performed on video and CT images of a skull phantom. We took three video images, and our algorithm registered these individually to the 3D image. The mean projection error varied between 4.33 and 9.81 millimeters (mm), and the mean 3D error varied between 4.47 and 11.92 mm. Using our novel techniques we then registered five video views simultaneously to the 3D model. This produced an accurate and robust registration with a mean projection error of 0.68 mm and a mean 3D error of 1.05 mm.
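    The similarity measure being maximized, mutual information, can be estimated from a joint intensity histogram; a minimal sketch (the bin count is an assumed parameter, and the paper's iterative optimization over camera pose is not shown):

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information between two equally shaped images, estimated from
    their joint intensity histogram: I(A;B) = H(A) + H(B) - H(A,B)."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1)
    py = pxy.sum(axis=0)
    nz = pxy > 0
    h_xy = -(pxy[nz] * np.log(pxy[nz])).sum()
    h_x = -(px[px > 0] * np.log(px[px > 0])).sum()
    h_y = -(py[py > 0] * np.log(py[py > 0])).sum()
    return h_x + h_y - h_xy
```

    An iterative optimizer would render the pre-operative data at a candidate pose and adjust the pose to maximize this value against the video image (or, in the multi-view extension, against several video images simultaneously).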

  7. Laser-Directed Ranging System Implementing Single Camera System for Telerobotics Applications

    NASA Technical Reports Server (NTRS)

    Wells, Dennis L. (Inventor); Li, Larry C. (Inventor); Cox, Brian J. (Inventor)

    1995-01-01

    The invention relates generally to systems for determining the range of an object from a reference point and, in one embodiment, to laser-directed ranging systems useful in telerobotics applications. Digital processing techniques are employed which minimize the complexity and cost of the hardware and software for processing range calculations, thereby enhancing the commercial attractiveness of the system for use in relatively low-cost robotic systems. The system includes a video camera for generating images of the target, image digitizing circuitry, and an associated frame grabber circuit. The circuit first captures one of the pairs of stereo video images of the target, and then captures a second video image of the target as it is partly illuminated by the light beam, suitably generated by a laser. The two video images, taken sufficiently close together in time to minimize camera and scene motion, are converted to digital images and then compared. Common pixels are eliminated, leaving only a digital image of the laser-illuminated spot on the target. The centroid of the laser-illuminated spot is then obtained and compared with a reference point, predetermined by design or calibration, which represents the coordinate at the focal plane of the laser illumination at infinite range. Preferably, the laser and camera are mounted on a servo-driven platform which can be oriented to direct the camera and the laser toward the target. In one embodiment the platform is positioned in response to movement of the operator's head. Position and orientation sensors are used to monitor head movement. The disparity between the digital image of the laser spot and the reference point is calculated for determining range to the target. Commercial applications for the system relate to active range-determination systems, such as those used with robotic systems in which it is necessary to determine the range to a workpiece or object to be grasped or acted upon by a robot arm end-effector in response to commands generated by an operator. In one embodiment, the system provides a real-time image of the target for the operator as the robot approaches the object. The system is also adapted for use in virtual reality systems in which a remote object or workpiece is to be acted upon by a remote robot arm or other mechanism controlled by an operator.

  8. Staff acceptance of video monitoring for coordination: a video system to support perioperative situation awareness.

    PubMed

    Kim, Young Ju; Xiao, Yan; Hu, Peter; Dutton, Richard

    2009-08-01

    To understand staff acceptance of a remote video monitoring system for operating room (OR) coordination. Improved real-time remote visual access to the OR may enhance situational awareness but also raises privacy concerns for patients and staff. Survey. A system was implemented in a six-room surgical suite to display OR monitoring video in an access-restricted control desk area. Image quality was manipulated to improve staff acceptance. Two months after installation, interviews and a survey were conducted on staff acceptance of video monitoring. About half of all OR personnel responded (n = 63). Overall levels of concern were low, with 53% reporting no concern and 42% little concern. The top two reported uses of the video were to see if cases are finished and to see if a room is ready. Viewing the video monitoring system as useful did not reduce levels of concern. Staff in supervisory positions perceived less concern about the system's impact on privacy than did those supervised (p < 0.03). Concerns for patient privacy correlated with concerns for staff privacy and performance monitoring. Technical means such as manipulating image quality helped staff acceptance. Manipulation of image quality resulted in overall acceptance of the monitoring video, with residual levels of concern. OR nurses may express staff privacy concerns in the form of concerns over patient privacy. This study provided suggestions for technological and implementation strategies for video monitoring for coordination use in the OR. Deployment of communication technology and integration of clinical information will likely raise concerns over staff privacy and performance monitoring. The potential gain of increased information access may be offset by the negative impact of a sense of loss of autonomy.

  9. An efficient HW and SW design of H.264 video compression, storage and playback on FPGA devices for handheld thermal imaging systems

    NASA Astrophysics Data System (ADS)

    Gunay, Omer; Ozsarac, Ismail; Kamisli, Fatih

    2017-05-01

    Video recording is an essential property of new generation military imaging systems. Playback of the stored video on the same device is also desirable, as it provides several operational benefits to end users. Two very important constraints for many military imaging systems, especially for hand-held devices and thermal weapon sights, are power consumption and size. To meet these constraints, it is essential to perform most of the processing applied to the video signal, such as preprocessing, compression, storage, decoding, playback and other system functions, on a single programmable chip, such as an FPGA, DSP, GPU or ASIC. In this work, H.264/AVC (Advanced Video Coding) compatible video compression, storage, decoding and playback blocks are efficiently designed and implemented on FPGA platforms using FPGA fabric and the Altera NIOS II soft processor. Many subblocks that are used in video encoding are also used during video decoding in order to save FPGA resources and power. Computationally complex blocks are designed using FPGA fabric, while blocks such as SD card write/read, H.264 syntax decoding and CAVLC decoding are done using the NIOS processor to benefit from software flexibility. In addition, to keep power consumption low, the system was designed to require limited external memory access. The design was tested using a 640x480, 25 fps thermal camera on a Cyclone V FPGA, Altera's lowest-power FPGA family, and uses less than 40% of Cyclone V 5CEFA7 FPGA resources on average.

  10. Collaborative web-based annotation of video footage of deep-sea life, ecosystems and geological processes

    NASA Astrophysics Data System (ADS)

    Kottmann, R.; Ratmeyer, V.; Pop Ristov, A.; Boetius, A.

    2012-04-01

    More and more seagoing scientific expeditions use video-controlled research platforms such as Remotely Operated Vehicles (ROVs), Autonomous Underwater Vehicles (AUVs), and towed camera systems. These produce many hours of video material which contains detailed and scientifically highly valuable footage of the biological, chemical, geological, and physical aspects of the oceans. Many of the videos contain unique observations of unknown life-forms which are rare, and which cannot be sampled and studied otherwise. To make such video material accessible online and to create a collaborative annotation environment, the "Video Annotation and processing platform" (V-App) was developed. A first, solely web-based installation for ROV videos is set up at the German Center for Marine Environmental Sciences (available at http://videolib.marum.de). It allows users to search and watch videos with a standard web browser based on the HTML5 standard. Moreover, V-App implements social web technologies allowing a distributed, world-wide scientific community to collaboratively annotate videos anywhere at any time. Several features are fully implemented, among which are: • a user login system for fine-grained permission and access control • video watching • video search using keywords, geographic position, depth and time range, and any combination thereof • video annotation organized in themes (tracks) such as biology and geology, among others, in standard or full-screen mode • annotation keyword management: administrative users can add, delete, and update single keywords for annotation, or upload sets of keywords from Excel sheets • download of products for scientific use. This unique web application system helps make costly ROV videos available online (estimated costs range between 5,000 and 10,000 Euros per hour, depending on the combination of ship and ROV). Moreover, with this system each expert annotation adds instantly available and valuable knowledge to otherwise uncharted material.
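    The combinable search filters can be sketched with a toy annotation model (field and function names are hypothetical, not V-App's actual schema or API):

```python
from dataclasses import dataclass

# Illustrative data model: each annotation carries a keyword, a theme track,
# and a timestamped depth. All names here are assumptions for the sketch.
@dataclass
class Annotation:
    keyword: str
    track: str
    depth_m: float
    t_sec: float

def search(annotations, keyword=None, track=None, depth=None, t_range=None):
    """Apply any subset of filters; omitted filters match everything,
    so filters combine freely, as in the platform's search."""
    hits = annotations
    if keyword is not None:
        hits = [a for a in hits if a.keyword == keyword]
    if track is not None:
        hits = [a for a in hits if a.track == track]
    if depth is not None:
        lo, hi = depth
        hits = [a for a in hits if lo <= a.depth_m <= hi]
    if t_range is not None:
        lo, hi = t_range
        hits = [a for a in hits if lo <= a.t_sec <= hi]
    return hits
```

    A production system would back this with a database query rather than list filtering, but the combinable-filter semantics are the same.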

  11. Standardized access, display, and retrieval of medical video

    NASA Astrophysics Data System (ADS)

    Bellaire, Gunter; Steines, Daniel; Graschew, Georgi; Thiel, Andreas; Bernarding, Johannes; Tolxdorff, Thomas; Schlag, Peter M.

    1999-05-01

    The system presented here enhances documentation and data-secured, second-opinion facilities by integrating video sequences into DICOM 3.0. We present an implementation of a medical video server extended by a DICOM interface. Security mechanisms conforming with DICOM are integrated to enable secure internet access. Digital video documents of diagnostic and therapeutic procedures should be examined regarding the clip length and size necessary for second opinion and manageable with today's hardware. Image sources relevant for this paper include the 3D laparoscope, 3D surgical microscope, 3D open surgery camera, synthetic video, and monoscopic endoscopes, etc. The global DICOM video concept and three special workplaces for distinct applications are described. Additionally, an approach is presented to analyze the motion of the endoscopic camera for future automatic video-cutting. Digital stereoscopic video sequences (DSVS) are especially in demand for surgery. Therefore, DSVS are also integrated into the DICOM video concept. Results are presented describing the suitability of stereoscopic display techniques for the operating room.

  12. Web Audio/Video Streaming Tool

    NASA Technical Reports Server (NTRS)

    Guruvadoo, Eranna K.

    2003-01-01

    To promote a NASA-wide educational outreach program that educates and informs the public about space exploration, NASA, at Kennedy Space Center, is seeking efficient ways to add more content to the web by streaming audio/video files. This project proposes a high-level overview of a framework for the creation, management, and scheduling of audio/video assets over the web. To support short-term goals, a prototype of a web-based tool was designed and demonstrated to automate the process of streaming audio/video files. The tool provides web-enabled user interfaces to manage video assets, create publishable schedules of video assets for streaming, and schedule the streaming events. These operations are performed on user-defined and system-derived metadata of audio/video assets stored in a relational database, while the assets themselves reside in a separate repository. The prototype tool was designed using ColdFusion 5.0.
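    The core design point above (metadata in a relational database, media files in a separate repository, schedules built by joining the two) can be sketched with a minimal schema. The table and column names below are hypothetical, not the tool's actual ColdFusion-era schema.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE asset (
        id INTEGER PRIMARY KEY,
        title TEXT,
        repository_path TEXT,    -- the media file stays outside the DB
        duration_s INTEGER       -- system-derived metadata
    )""")
conn.execute("""
    CREATE TABLE schedule (
        asset_id INTEGER REFERENCES asset(id),
        stream_at TEXT           -- ISO-8601 start time of the streaming event
    )""")
conn.execute("INSERT INTO asset VALUES (1, 'Launch highlights', '/repo/launch.mpg', 120)")
conn.execute("INSERT INTO schedule VALUES (1, '2003-06-01T14:00:00')")

# A publishable schedule: join asset metadata with streaming events.
rows = conn.execute("""
    SELECT a.title, s.stream_at FROM schedule s
    JOIN asset a ON a.id = s.asset_id
    ORDER BY s.stream_at""").fetchall()
```

Keeping only a `repository_path` in the database mirrors the separation the abstract describes: the scheduler never touches the media bytes, only their metadata.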

  13. Advanced Video Analysis Needs for Human Performance Evaluation

    NASA Technical Reports Server (NTRS)

    Campbell, Paul D.

    1994-01-01

    Evaluators of human task performance in space missions make use of video as a primary source of data. Extraction of relevant human performance information from video is often a labor-intensive process requiring a large amount of time on the part of the evaluator. Based on the experiences of several human performance evaluators, needs were defined for advanced tools which could aid in the analysis of video data from space missions. Such tools should increase the efficiency with which useful information is retrieved from large quantities of raw video. They should also provide the evaluator with new analytical functions which are not present in currently used methods. Video analysis tools based on the needs defined by this study would also have uses in U.S. industry and education. Evaluation of human performance from video data can be a valuable technique in many industrial and institutional settings where humans are involved in operational systems and processes.

  14. RealityFlythrough: Enhancing Situational Awareness for Medical Response to Disasters Using Ubiquitous Video

    PubMed Central

    McCurdy, Neil J.; Griswold, William G.; Lenert, Leslie A.

    2005-01-01

    The first moments at a disaster scene are chaotic. The command center initially operates with little knowledge of hazards, geography, and casualties, building up knowledge of the event slowly as information trickles in over voice radio channels. RealityFlythrough is a telepresence system that stitches together live video feeds in real time, using the principle of visual closure, to give command center personnel the illusion of being able to explore the scene interactively by moving smoothly between the video feeds. Using RealityFlythrough, medical, fire, law enforcement, hazardous materials, and engineering experts may be able to achieve situational awareness earlier and better manage scarce resources. The RealityFlythrough system is composed of camera units with off-the-shelf GPS and orientation systems and a server/viewing station that offers access to images collected by the camera units in real time by position/orientation. In initial field testing using an experimental mesh 802.11 wireless network, two camera unit operators were able to create an interactive image of a simulated disaster scene in about five minutes. PMID:16779092

  15. NASA Lewis' Telescience Support Center Supports Orbiting Microgravity Experiments

    NASA Technical Reports Server (NTRS)

    Hawersaat, Bob W.

    1998-01-01

    The Telescience Support Center (TSC) at the NASA Lewis Research Center was developed to enable Lewis-based science teams and principal investigators to monitor and control experimental and operational payloads onboard the International Space Station. The TSC is a remote operations hub that can interface with other remote facilities, such as universities and industrial laboratories. As a pathfinder for International Space Station telescience operations, the TSC has incrementally developed an operational capability by supporting space shuttle missions. The TSC has evolved into an environment where experimenters and scientists can control and monitor the health and status of their experiments in near real time. Remote operations (or telescience) allow local scientists and their experiment teams to minimize their travel and maintain a local complement of expertise for hardware and software troubleshooting and data analysis. The TSC was designed, developed, and is operated by Lewis' Engineering and Technical Services Directorate and its support contractors, Analex Corporation and White's Information System, Inc. It is managed by Lewis' Microgravity Science Division. The TSC provides operational support in conjunction with the NASA Marshall Space Flight Center and NASA Johnson Space Center. It enables its customers to command, receive, and view telemetry; monitor the science video from their on-orbit experiments; and communicate over mission-support voice loops. Data can be received and routed to experimenter-supplied ground support equipment and/or to the TSC data system for display. Video teleconferencing capability and other video sources, such as NASA TV, are also available. The TSC has a full complement of standard services to aid experimenters in telemetry operations.

  16. Data compression/error correction digital test system. Appendix 2: Theory of operation

    NASA Technical Reports Server (NTRS)

    1972-01-01

    An overall block diagram of the DC/EC digital system test is shown. The system is divided into two major units: the transmitter and the receiver. In operation, the transmitter and receiver are connected only by a real or simulated transmission link. The system inputs consist of: (1) standard format TV video, (2) two channels of analog voice, and (3) one serial PCM bit stream.

  17. Design considerations to improve cognitive ergonomic issues of unmanned vehicle interfaces utilizing video game controllers.

    PubMed

    Oppold, P; Rupp, M; Mouloua, M; Hancock, P A; Martin, J

    2012-01-01

    Unmanned systems (UAVs, UCAVs, and UGVs) still present major human factors and ergonomic challenges related to the effective design of their control interfaces, which are crucial to their efficient operation, maintenance, and safety. Unmanned-system interfaces designed with a human-centered approach are intuitive and easier to learn, and they reduce human errors and other cognitive ergonomic problems. Automation has shifted workload from physical to cognitive, so control interfaces for unmanned systems need to reduce the mental workload on operators and facilitate interaction between vehicle and operator. Two-handed video game controllers offer wide usability within the overall population, prior exposure for new operators, and a variety of interface complexity levels to match the complexity of the task and reduce cognitive load. This paper categorizes and provides a taxonomy for 121 haptic interfaces from the entertainment industry that can be utilized as control interfaces for unmanned systems. Controllers were grouped into five categories based on the complexity of their buttons, control pads, joysticks, and switches. This allows selection of the level of complexity needed for a specific task without creating an entirely new design or using an overly complex one.

  18. Real-time image sequence segmentation using curve evolution

    NASA Astrophysics Data System (ADS)

    Zhang, Jun; Liu, Weisong

    2001-04-01

    In this paper, we describe a novel approach to image sequence segmentation and its real-time implementation. This approach uses the 3D structure tensor to produce a more robust frame difference signal and uses curve evolution to extract whole objects. Our algorithm is implemented on a standard PC running the Windows operating system with video capture from a USB camera that is a standard Windows video capture device. Using the Windows standard video I/O functionalities, our segmentation software is highly portable and easy to maintain and upgrade. In its current implementation on a Pentium 400, the system can perform segmentation at 5 frames/sec with a frame resolution of 160 by 120.
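    A much-simplified sketch of where the frame-difference signal enters such a pipeline is shown below. The paper's 3D structure tensor and curve evolution are replaced here by a plain temporal difference with thresholding, so this illustrates only the general idea, not the authors' algorithm; all values are synthetic.

```python
import numpy as np

def frame_difference_mask(frames, threshold=20.0):
    """frames: (T, H, W) grayscale stack; returns a boolean motion mask."""
    stack = np.asarray(frames, dtype=np.float32)
    # Temporal gradient, averaged over the stack for some noise robustness;
    # the paper's structure tensor plays this role far more robustly.
    diff = np.abs(np.diff(stack, axis=0)).mean(axis=0)
    return diff > threshold

# Synthetic 3-frame sequence: a bright block moves one pixel right per frame.
frames = np.zeros((3, 8, 8), dtype=np.float32)
for t in range(3):
    frames[t, 2:5, 2 + t:5 + t] = 255.0
mask = frame_difference_mask(frames)
```

In the paper, the mask would then seed a curve-evolution step that grows the difference regions into whole-object boundaries rather than leaving only the changed pixels.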

  19. A low-cost video-oculography system for vestibular function testing.

    PubMed

    Park, Jihwan; Kong, Youngsun; Nam, Yunyoung

    2017-07-01

    To keep vision in focus during head movements, the vestibulo-ocular reflex causes the eyes to move in the direction opposite to head movement. Disorders of the vestibular system degrade vision and cause abnormal nystagmus and dizziness. To diagnose abnormal nystagmus, various approaches have been reported, including rotatory chair tests and videonystagmography. However, these tests are unsuitable for home use due to their high cost. Thus, a low-cost video-oculography system is needed to obtain clinical features at home. In this paper, we present a low-cost video-oculography system that uses an infrared camera and a Raspberry Pi board to track the pupils and evaluate the vestibular system. Horizontal eye movement is derived from video data obtained with an infrared camera and infrared light-emitting diodes, and the velocity of head rotation is obtained from a gyroscope sensor. Each pupil was extracted using a morphology operation and a contour detection method. Rotatory chair tests were conducted with our developed device. To evaluate our system, gain, asymmetry, and phase were measured and compared with System 2000. The average IQR errors of gain, phase, and asymmetry were 0.81, 2.74, and 17.35, respectively. We showed that our system is able to measure these clinical features.
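    A hedged sketch of how the three reported measures might be computed from rotatory chair traces: gain as the eye/head peak-velocity ratio, phase from the lag of the cross-correlation peak, and asymmetry comparing leftward vs. rightward peak slow-phase velocity. The exact definitions used by the paper and by System 2000 may differ; the traces below are synthetic and the eye signal is assumed already sign-inverted.

```python
import numpy as np

def vor_metrics(head_vel, eye_vel):
    """head_vel, eye_vel: velocity traces (deg/s) covering one stimulus cycle."""
    head = np.asarray(head_vel, float)
    eye = np.asarray(eye_vel, float)
    gain = np.abs(eye).max() / np.abs(head).max()
    # Phase from the cross-correlation peak, expressed in degrees of the cycle.
    lag = np.argmax(np.correlate(eye, head, "full")) - (len(head) - 1)
    phase_deg = 360.0 * lag / len(head)
    # Asymmetry: imbalance of peak rightward vs. leftward eye velocity (%).
    pos, neg = eye[eye > 0], eye[eye < 0]
    asym = 100.0 * (pos.max() + neg.min()) / (pos.max() - neg.min())
    return gain, phase_deg, asym

t = np.arange(0, 1, 0.01)                # one cycle of a 1 Hz rotation
head = 60.0 * np.sin(2 * np.pi * t)      # chair velocity, deg/s
eye = 0.8 * head                         # ideal compensatory response, gain 0.8
gain, phase, asym = vor_metrics(head, eye)
```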

  20. A semantic autonomous video surveillance system for dense camera networks in Smart Cities.

    PubMed

    Calavia, Lorena; Baladrón, Carlos; Aguiar, Javier M; Carro, Belén; Sánchez-Esguevillas, Antonio

    2012-01-01

    This paper proposes an intelligent video surveillance system able to detect and identify abnormal and alarming situations by analyzing object movement. The system is designed to minimize video processing and transmission, thus allowing a large number of cameras to be deployed and making it suitable for use as an integrated safety and security solution in Smart Cities. Alarm detection is based on parameters of the moving objects and their trajectories, and is performed using semantic reasoning and ontologies. This means the system employs a high-level conceptual language that is easy for human operators to understand, can raise enriched alarms with descriptions of what is happening in the image, and can automate reactions to them, such as alerting the appropriate emergency services through the Smart City safety network.
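    The idea of raising human-readable, "enriched" alarms from trajectory parameters can be illustrated with a toy rule table. The real system uses ontologies and semantic reasoning rather than hard-coded predicates; every rule, field name, and threshold below is hypothetical.

```python
RULES = [
    # (predicate on tracked-object parameters, human-readable alarm description)
    (lambda o: o["speed"] > 15.0 and o["zone"] == "pedestrian",
     "vehicle moving fast in a pedestrian zone"),
    (lambda o: o["loitering_s"] > 120 and o["zone"] == "restricted",
     "person loitering in a restricted area"),
]

def detect_alarms(tracked_objects):
    """Map object/trajectory parameters to enriched alarm descriptions."""
    alarms = []
    for obj in tracked_objects:
        for predicate, description in RULES:
            if predicate(obj):
                alarms.append({"object_id": obj["id"], "alarm": description})
    return alarms

objects = [
    {"id": 7, "speed": 22.0, "zone": "pedestrian", "loitering_s": 0},
    {"id": 9, "speed": 0.1, "zone": "restricted", "loitering_s": 300},
]
alarms = detect_alarms(objects)
```

An ontology-based reasoner generalizes this: instead of enumerating predicates, it infers alarm classes from relationships among concepts such as vehicle, zone, and trajectory.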

  1. A Semantic Autonomous Video Surveillance System for Dense Camera Networks in Smart Cities

    PubMed Central

    Calavia, Lorena; Baladrón, Carlos; Aguiar, Javier M.; Carro, Belén; Sánchez-Esguevillas, Antonio

    2012-01-01

    This paper proposes an intelligent video surveillance system able to detect and identify abnormal and alarming situations by analyzing object movement. The system is designed to minimize video processing and transmission, thus allowing a large number of cameras to be deployed and making it suitable for use as an integrated safety and security solution in Smart Cities. Alarm detection is based on parameters of the moving objects and their trajectories, and is performed using semantic reasoning and ontologies. This means the system employs a high-level conceptual language that is easy for human operators to understand, can raise enriched alarms with descriptions of what is happening in the image, and can automate reactions to them, such as alerting the appropriate emergency services through the Smart City safety network. PMID:23112607

  2. The Limited Duty/Chief Warrant Officer Professional Guidebook

    DTIC Science & Technology

    1985-01-01

    ...subsurface imaging. They plan and manage the operation of imaging commands and activities, combat camera groups and aerial reconnaissance imaging... picture and video systems used in aerial, surface and subsurface imaging. They supervise the operation of imaging commands and activities, combat camera...

  3. Tonopah Test Range - Index

    Science.gov Websites

    Range Capabilities: Test Operations Center, Test Director, Range Control, Track Control, Communications, Tracking Radars, Optical Systems (Cinetheodolites, Telescopes, R&D Telescopes), Range Videos/Photos

  4. A practical implementation of free viewpoint video system for soccer games

    NASA Astrophysics Data System (ADS)

    Suenaga, Ryo; Suzuki, Kazuyoshi; Tezuka, Tomoyuki; Panahpour Tehrani, Mehrdad; Takahashi, Keita; Fujii, Toshiaki

    2015-03-01

    In this paper, we present a free viewpoint video generation system with a billboard representation for soccer games. Free viewpoint video generation is a technology that enables users to watch 3-D objects from their desired viewpoints. Practical implementation of free viewpoint video for sports events is in high demand, but a commercially acceptable system has not yet been developed. The main obstacles are insufficient user-end quality of the synthesized images and highly complex procedures that sometimes require manual operations. In this work, we aim to develop a commercially acceptable free viewpoint video system with a billboard representation. The envisioned scenario is that soccer games played during the day can be broadcast in 3-D the same evening. Our work is still ongoing, but we have already developed several techniques in support of this goal. First, we captured an actual soccer game at an official stadium using 20 full-HD professional cameras. Second, we implemented several tools for free viewpoint video generation, as follows. To facilitate free viewpoint video generation, all cameras must be calibrated; we calibrated all cameras using checkerboard images and feature points on the field (cross points of the soccer field lines). We extract each player region from the captured images manually. The background region is estimated automatically by observing chrominance changes of each pixel in the temporal domain. Additionally, we have developed a user interface for visualizing free viewpoint video generation using a graphics library (OpenGL), which is suitable not only for commercial TV sets but also for devices such as smartphones. However, the practical system is not yet complete and our study is ongoing.
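    The billboard representation places each extracted player as a flat quad at the player's 3-D field position and renders it with the calibrated cameras. The geometry of that rendering step is just a pinhole projection, sketched below with hypothetical calibration matrices; the paper obtains the real ones from checkerboards and field-line crossings.

```python
import numpy as np

def project(points_3d, K, R, t):
    """Pinhole projection of Nx3 world points to Nx2 pixel coordinates."""
    cam = R @ points_3d.T + t[:, None]   # world frame -> camera frame
    uvw = K @ cam                        # camera frame -> homogeneous pixels
    return (uvw[:2] / uvw[2]).T          # perspective divide

# Hypothetical calibration: 1000 px focal length, 1920x1080 principal point,
# camera 50 m from the field looking straight at it.
K = np.array([[1000.0, 0.0, 960.0],
              [0.0, 1000.0, 540.0],
              [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.array([0.0, 0.0, 50.0])

# Billboard corners for a roughly 2 m tall, 1 m wide player at midfield
# (y is down in this toy convention, so the head is at y = -2).
billboard = np.array([[-0.5, 0.0, 0.0], [0.5, 0.0, 0.0],
                      [0.5, -2.0, 0.0], [-0.5, -2.0, 0.0]])
pixels = project(billboard, K, R, t)
```

Rendering then texture-maps the manually extracted player region onto the projected quad, which is why per-camera calibration quality directly limits the synthesized-view quality the abstract mentions.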

  5. Technical Assessment: Autonomy

    DTIC Science & Technology

    2015-02-01

    ...and video games. If DoD develops CONOPS for lower-performance systems, there is an opportunity to leverage a large amount of private investment, as the... originally designed for the Xbox video game platform, it is now being used or developed for retail environments, operating rooms, and physical therapy... approaches that render artificial intelligence less susceptible to intelligent influence. One area worthy of consideration is applied game theory, which

  6. The Next Generation Advanced Video Guidance Sensor: Flight Heritage and Current Development

    NASA Technical Reports Server (NTRS)

    Howard, Richard T.; Bryan, Thomas C.

    2009-01-01

    The Next Generation Advanced Video Guidance Sensor (NGAVGS) is the latest in a line of sensors that have flown four times in the last 10 years. The NGAVGS has been under development for the last two years as a long-range proximity operations and docking sensor for use in an Automated Rendezvous and Docking (AR&D) system. The first autonomous rendezvous and docking in the history of the U.S. Space Program was successfully accomplished by Orbital Express, using the Advanced Video Guidance Sensor (AVGS) as the primary docking sensor. That flight proved that the United States now has a mature and flight proven sensor technology for supporting Crew Exploration Vehicles (CEV) and Commercial Orbital Transport Systems (COTS) Automated Rendezvous and Docking (AR&D). NASA video sensors have worked well in the past: the AVGS used on the Demonstration of Autonomous Rendezvous Technology (DART) mission operated successfully in "spot mode" out to 2 km, and the first generation rendezvous and docking sensor, the Video Guidance Sensor (VGS), was developed and successfully flown on Space Shuttle flights in 1997 and 1998. This paper presents the flight heritage and results of the sensor technology, some hardware trades for the current sensor, and discusses the needs of future vehicles that may rendezvous and dock with the International Space Station (ISS) and other Constellation vehicles. It also discusses approaches for upgrading AVGS to address parts obsolescence, and concepts for minimizing the sensor footprint, weight, and power requirements. In addition, the testing of the various NGAVGS development units will be discussed along with the use of the NGAVGS as a proximity operations and docking sensor.

  7. Proximity Operations and Docking Sensor Development

    NASA Technical Reports Server (NTRS)

    Howard, Richard T.; Bryan, Thomas C.; Brewster, Linda L.; Lee, James E.

    2009-01-01

    The Next Generation Advanced Video Guidance Sensor (NGAVGS) has been under development for the last three years as a long-range proximity operations and docking sensor for use in an Automated Rendezvous and Docking (AR&D) system. The first autonomous rendezvous and docking in the history of the U.S. Space Program was successfully accomplished by Orbital Express, using the Advanced Video Guidance Sensor (AVGS) as the primary docking sensor. That flight proved that the United States now has a mature, flight-proven sensor technology for supporting Crew Exploration Vehicles (CEV) and Commercial Orbital Transport Systems (COTS) Automated Rendezvous and Docking (AR&D). NASA video sensors have worked well in the past: the AVGS used on the Demonstration of Autonomous Rendezvous Technology (DART) mission operated successfully in spot mode out to 2 km, and the first-generation rendezvous and docking sensor, the Video Guidance Sensor (VGS), was developed and successfully flown on Space Shuttle flights in 1997 and 1998. Parts obsolescence issues prevent the construction of more AVGS units, and the next-generation sensor was updated to allow it to support the CEV and COTS programs. The flight-proven AR&D sensor has been redesigned to update parts and add capabilities for CEV and COTS with the development of the Next Generation AVGS at the Marshall Space Flight Center. The obsolete imager and processor are being replaced with new radiation-tolerant parts. In addition, new capabilities include greater sensor range, an auto-ranging capability, and real-time video output. This paper presents some sensor hardware trades and the use of highly integrated laser components, and addresses the needs of future vehicles that may rendezvous and dock with the International Space Station (ISS) and other Constellation vehicles. It also discusses approaches for upgrading the AVGS to address parts obsolescence, and concepts for minimizing the sensor footprint, weight, and power requirements. In addition, the testing of the brassboard and prototype NGAVGS units is discussed, along with the use of the NGAVGS as a proximity operations and docking sensor.

  8. Progress in video immersion using Panospheric imaging

    NASA Astrophysics Data System (ADS)

    Bogner, Stephen L.; Southwell, David T.; Penzes, Steven G.; Brosinsky, Chris A.; Anderson, Ron; Hanna, Doug M.

    1998-09-01

    Having demonstrated significant technical and marketplace advantages over other modalities for video immersion, Panospheric™ Imaging (PI) continues to evolve rapidly. This paper reports on progress achieved since AeroSense 97. The first practical field deployment of the technology occurred in June-August 1997 during the NASA-CMU 'Atacama Desert Trek' activity, where the Nomad mobile robot was teleoperated via immersive Panospheric™ imagery from a distance of several thousand kilometers. Research using teleoperated vehicles at DRES has also verified the exceptional utility of the PI technology for achieving high levels of situational awareness, operator confidence, and mission effectiveness. Important performance enhancements have been achieved with the completion of the 4th-generation PI DSP-based array processor system. The system is now able to provide dynamic full-video-rate generation of spatial and computational transformations, resulting in a programmable and fully interactive immersive video telepresence. A new multi-CCD camera architecture has been created to exploit the bandwidth of this processor, yielding a well-matched PI system with greatly improved resolution. While the initial commercial application for this technology is expected to be video teleconferencing, it also appears to have excellent potential for application in the 'Immersive Cockpit' concept. Additional progress is reported in the areas of long-wave infrared PI imaging, stereo PI concepts, PI-based video servoing concepts, PI-based video navigation concepts, and foveation concepts (merging localized high-resolution views with immersive views).

  9. Brief report: learning via the electronic interactive whiteboard for two students with autism and a student with moderate intellectual disability.

    PubMed

    Yakubova, Gulnoza; Taber-Doughty, Teresa

    2013-06-01

    The effects of a multicomponent intervention (self-operated video modeling and self-monitoring delivered via an electronic interactive whiteboard (IWB), combined with a system of least prompts) on the skill acquisition and interaction behavior of two students with autism and one student with moderate intellectual disability were examined using a multiple-probe-across-students design. Students were taught to operate and view video modeling clips, perform a chain of novel tasks, and self-monitor task performance using a SMART Board IWB. Results support the effectiveness of the multicomponent intervention in improving students' skill acquisition. Results also highlight the use of this technology as a self-operated, interactive device rather than a traditional teacher-operated device to enhance students' active participation in learning.

  10. Synchronized voltage contrast display analysis system

    NASA Technical Reports Server (NTRS)

    Johnston, M. F.; Shumka, A.; Miller, E.; Evans, K. C. (Inventor)

    1982-01-01

    An apparatus and method for comparing internal voltage potentials of first and second operating electronic components such as large scale integrated circuits (LSI's) in which voltage differentials are visually identified via an appropriate display means are described. More particularly, in a first embodiment of the invention a first and second scanning electron microscope (SEM) are configured to scan a first and second operating electronic component respectively. The scan pattern of the second SEM is synchronized to that of the first SEM so that both simultaneously scan corresponding portions of the two operating electronic components. Video signals from each SEM corresponding to secondary electron signals generated as a result of a primary electron beam intersecting each operating electronic component in accordance with a predetermined scan pattern are provided to a video mixer and color encoder.

  11. Stereoscopic augmented reality for laparoscopic surgery.

    PubMed

    Kang, Xin; Azizian, Mahdi; Wilson, Emmanuel; Wu, Kyle; Martin, Aaron D; Kane, Timothy D; Peters, Craig A; Cleary, Kevin; Shekhar, Raj

    2014-07-01

    Conventional laparoscopes provide a flat representation of the three-dimensional (3D) operating field and are incapable of visualizing internal structures located beneath visible organ surfaces. Computed tomography (CT) and magnetic resonance (MR) images are difficult to fuse in real time with laparoscopic views due to the deformable nature of soft-tissue organs. Utilizing emerging camera technology, we have developed a real-time stereoscopic augmented-reality (AR) system for laparoscopic surgery by merging live laparoscopic ultrasound (LUS) with stereoscopic video. The system creates two new visual cues: (1) perception of true depth with improved understanding of 3D spatial relationships among anatomical structures, and (2) visualization of critical internal structures along with a more comprehensive visualization of the operating field. The stereoscopic AR system has been designed for near-term clinical translation with seamless integration into the existing surgical workflow. It is composed of a stereoscopic vision system, a LUS system, and an optical tracker. Specialized software processes streams of imaging data from the tracked devices and registers those in real time. The resulting two ultrasound-augmented video streams (one for the left and one for the right eye) give a live stereoscopic AR view of the operating field. The team conducted a series of stereoscopic AR interrogations of the liver, gallbladder, biliary tree, and kidneys in two swine. The preclinical studies demonstrated the feasibility of the stereoscopic AR system during in vivo procedures. Major internal structures could be easily identified. The system exhibited unobservable latency with acceptable image-to-video registration accuracy. We presented the first in vivo use of a complete system with stereoscopic AR visualization capability. This new capability introduces new visual cues and enhances visualization of the surgical anatomy. 
The system shows promise to improve the precision and expand the capacity of minimally invasive laparoscopic surgeries.

  12. A highly sensitive underwater video system for use in turbid aquaculture ponds

    NASA Astrophysics Data System (ADS)

    Hung, Chin-Chang; Tsao, Shih-Chieh; Huang, Kuo-Hao; Jang, Jia-Pu; Chang, Hsu-Kuang; Dobbs, Fred C.

    2016-08-01

    The turbid, low-light waters characteristic of aquaculture ponds have made it difficult or impossible for previous video cameras to provide clear imagery of the ponds’ benthic habitat. We developed a highly sensitive, underwater video system (UVS) for this particular application and tested it in shrimp ponds having turbidities typical of those in southern Taiwan. The system’s high-quality video stream and images, together with its camera capacity (up to nine cameras), permit in situ observations of shrimp feeding behavior, shrimp size and internal anatomy, and organic matter residues on pond sediments. The UVS can operate continuously and be focused remotely, a convenience to shrimp farmers. The observations possible with the UVS provide aquaculturists with information critical to provision of feed with minimal waste; determining whether the accumulation of organic-matter residues dictates exchange of pond water; and management decisions concerning shrimp health.

  13. A highly sensitive underwater video system for use in turbid aquaculture ponds

    PubMed Central

    Hung, Chin-Chang; Tsao, Shih-Chieh; Huang, Kuo-Hao; Jang, Jia-Pu; Chang, Hsu-Kuang; Dobbs, Fred C.

    2016-01-01

    The turbid, low-light waters characteristic of aquaculture ponds have made it difficult or impossible for previous video cameras to provide clear imagery of the ponds’ benthic habitat. We developed a highly sensitive, underwater video system (UVS) for this particular application and tested it in shrimp ponds having turbidities typical of those in southern Taiwan. The system’s high-quality video stream and images, together with its camera capacity (up to nine cameras), permit in situ observations of shrimp feeding behavior, shrimp size and internal anatomy, and organic matter residues on pond sediments. The UVS can operate continuously and be focused remotely, a convenience to shrimp farmers. The observations possible with the UVS provide aquaculturists with information critical to provision of feed with minimal waste; determining whether the accumulation of organic-matter residues dictates exchange of pond water; and management decisions concerning shrimp health. PMID:27554201

  14. Video camera system for locating bullet holes in targets at a ballistics tunnel

    NASA Technical Reports Server (NTRS)

    Burner, A. W.; Rummler, D. R.; Goad, W. K.

    1990-01-01

    A system consisting of a single charge-coupled device (CCD) video camera, a computer-controlled video digitizer, and software to automate the measurement was developed to measure the location of bullet holes in targets at the International Shooters Development Fund (ISDF)/NASA Ballistics Tunnel. The camera/digitizer system is a crucial component of a highly instrumented indoor 50 meter rifle range which is being constructed to support development of wind-resistant, ultra-match ammunition. The system was designed to take data rapidly (10 sec between shots) and automatically, with little operator intervention. The system description, measurement concept, and procedure are presented, along with laboratory tests of repeatability and bias error. The long-term (1 hour) repeatability of the system was found to be 4 microns (one standard deviation) at the target, and the bias error was found to be less than 50 microns. An analysis of potential errors and a technique for calibration of the system are presented.
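    The measurement concept reduces to locating a dark blob in the digitized target image and converting its centroid from pixels to physical units. The sketch below is a simplified illustration with synthetic data; the threshold, scale factor, and image are hypothetical, and the paper's calibration technique determines the real pixel-to-micron scale.

```python
import numpy as np

def hole_location_um(image, threshold, um_per_pixel):
    """Centroid of the above-threshold blob, scaled from pixels to microns."""
    ys, xs = np.nonzero(image > threshold)   # pixels inside the bullet hole
    return xs.mean() * um_per_pixel, ys.mean() * um_per_pixel

# Synthetic digitized target: a 4x4-pixel 'hole' on a dark background.
target = np.zeros((100, 100))
target[40:44, 60:64] = 255.0
x_um, y_um = hole_location_um(target, threshold=128, um_per_pixel=25.0)
```

Averaging over all blob pixels is what allows sub-pixel repeatability of the kind the abstract reports (microns at the target, far below one pixel).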

  15. TELE-X and its role in a future operational Nordic satellite system

    NASA Astrophysics Data System (ADS)

    Anderson, Lars

    In the middle of 1987 it is planned to launch TELE-X, the first Nordic telecommunications satellite. The Swedish-Norwegian company NOTELSAT (Nordic Telecommunications Satellite Corporation) will be responsible for the operation of the TELE-X system. Via the experimental TELE-X satellite, the Nordic countries will get access to direct broadcasting of two TV programs and at least four digital sound programs in stereo, using two transponders in the 12.2 to 12.5 GHz band. The programs are planned to be composed of nationally produced programs from Norway, Sweden, and Finland. By distributing these programs via satellite, they will reach up to 4 times as many viewers and listeners as at present in the terrestrial national systems. The basic motivations for exchanging programs are to strengthen the cultural ties between the Nordic countries and to give individuals more freedom in their choice of programs. Another goal is to give the public better sound and picture quality than can be achieved today. These quality improvements are to be met by using small receiver parabolas of less than 1 m in diameter. Contributing to the improved quality is the choice of the C-MAC (Multiplexed Analogue Components) modulation system. TELE-X is a multipurpose satellite which, besides the two TV transponders, will have two transponders for data/video communication in the frequency band 12.5 to 12.75 GHz. The choice of system for data and video is based on a philosophy of thin-route traffic between small, low-cost earth stations (1.8 to 2.5 m) placed directly at the subscribers' premises. The system includes an advanced Data/Video Control Station which automatically connects the traffic stations, with standardized transmission speeds up to 2 Mbps. The system, which is based on the SCPC/DAMA method, can be expanded up to 5000 traffic stations. Numerous data/video applications will be investigated in the initial experimental phase of the project, which will also be used for market development of the services. The following text describes the dimensioning criteria for the TELE-X experimental system and a possible specification for an operational system.

  16. A 3-D terrain visualization database for highway information management

    DOT National Transportation Integrated Search

    1999-07-26

    A Multimedia-based Highway Information System (MMHIS) is described in the paper to improve the existing photologging system for various operation and management needs. The fully digital, computer-based MMHIS uses technologies of video, multimedia data...

  17. Smart sensing surveillance video system

    NASA Astrophysics Data System (ADS)

    Hsu, Charles; Szu, Harold

    2016-05-01

    An intelligent video surveillance system is able to detect and identify abnormal and alarming situations by analyzing object movement. The Smart Sensing Surveillance Video (S3V) System is proposed to minimize video processing and transmission, thus allowing a fixed number of cameras to be connected to the system and making it suitable for remote battlefield, tactical, and civilian applications, including border surveillance, special-force operations, airfield protection, and perimeter and building protection. The S3V System would be more effective if equipped with visual understanding capabilities to detect, analyze, and recognize objects, track motions, and predict intentions. In addition, alarm detection is performed on the basis of parameters of the moving objects and their trajectories, using semantic reasoning and ontologies. The S3V System capabilities and technologies have great potential for both military and civilian applications, enabling highly effective security support tools for improving surveillance activities in densely crowded environments. It would be directly applicable to solutions for emergency response personnel, law enforcement, and other homeland security missions, as well as in applications requiring the interoperation of sensor networks with handheld or body-worn interface devices.

  18. SmartPark Truck Parking Availability System: Video Technology Field Operational Test Results

    DOT National Transportation Integrated Search

    2011-01-01

    The purpose of FMCSA's SmartPark initiative is to determine the feasibility of a technology for providing truck parking space availability in real time to truckers on the road. SmartPark consists of two phases. Phase I was a field operational test ...

  19. Video-based real-time on-street parking occupancy detection system

    NASA Astrophysics Data System (ADS)

    Bulan, Orhan; Loce, Robert P.; Wu, Wencheng; Wang, YaoRong; Bernal, Edgar A.; Fan, Zhigang

    2013-10-01

    Urban parking management is receiving significant attention due to its potential to reduce traffic congestion, fuel consumption, and emissions. Real-time parking occupancy detection is a critical component of on-street parking management systems, where occupancy information is relayed to drivers via smartphone apps, radio, Internet, on-road signs, or global positioning system auxiliary signals. Video-based parking occupancy detection systems can provide a cost-effective solution to the sensing task while providing additional functionality for traffic law enforcement and surveillance. We present a video-based on-street parking occupancy detection system that can operate in real time. Our system accounts for the inherent challenges that exist in on-street parking settings, including illumination changes, rain, shadows, occlusions, and camera motion. Our method utilizes several components from video processing and computer vision for motion detection, background subtraction, and vehicle detection. We also present three traffic law enforcement applications: parking angle violation detection, parking boundary violation detection, and exclusion zone violation detection, which can be integrated into the parking occupancy cameras as a value-added option. Our experimental results show that the proposed parking occupancy detection method operates in real time at 5 frames/s and achieves better than 90% detection accuracy across several days of video captured in a busy street block under various weather conditions, including sunny, cloudy, and rainy.
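The background-subtraction component the abstract mentions can be sketched very simply: maintain a running-average background and declare a parking region occupied when enough pixels depart from it. This is a minimal illustration, not the authors' pipeline; the thresholds and region coordinates are assumptions.

```python
import numpy as np

def update_background(bg, frame, alpha=0.05):
    """Exponential running-average background model."""
    return (1 - alpha) * bg + alpha * frame

def roi_occupied(bg, frame, roi, diff_thresh=25.0, fill_thresh=0.3):
    """Declare a parking ROI occupied when enough of its pixels differ
    from the background model. roi = (row0, row1, col0, col1);
    thresholds are illustrative, not taken from the paper."""
    r0, r1, c0, c1 = roi
    diff = np.abs(frame[r0:r1, c0:c1] - bg[r0:r1, c0:c1])
    return np.mean(diff > diff_thresh) > fill_thresh

# Synthetic example: an empty street scene, then a bright "vehicle"
# block entering the stall region.
street = np.full((120, 160), 90.0)
bg = street.copy()
stall = (40, 80, 60, 120)
occupied_frame = street.copy()
occupied_frame[40:80, 60:120] = 200.0  # vehicle pixels
print(roi_occupied(bg, street, stall), roi_occupied(bg, occupied_frame, stall))
```

A real deployment would refresh the background only on frames where no motion is detected, which is one way such systems cope with illumination drift.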

  20. 3rd-generation MW/LWIR sensor engine for advanced tactical systems

    NASA Astrophysics Data System (ADS)

    King, Donald F.; Graham, Jason S.; Kennedy, Adam M.; Mullins, Richard N.; McQuitty, Jeffrey C.; Radford, William A.; Kostrzewa, Thomas J.; Patten, Elizabeth A.; McEwan, Thomas F.; Vodicka, James G.; Wootan, John J.

    2008-04-01

    Raytheon has developed a 3rd-Generation FLIR Sensor Engine (3GFSE) for advanced U.S. Army systems. The sensor engine is based around a compact, productized detector-dewar assembly incorporating a 640 x 480 staring dual-band (MW/LWIR) focal plane array (FPA) and a dual-aperture coldshield mechanism. The capability to switch the coldshield aperture and operate at either of two widely-varying f/#s will enable future multi-mode tactical systems to more fully exploit the many operational advantages offered by dual-band FPAs. RVS has previously demonstrated high-performance dual-band MW/LWIR FPAs in 640 x 480 and 1280 x 720 formats with 20 μm pitch. The 3GFSE includes compact electronics that operate the dual-band FPA and variable-aperture mechanism, and perform 14-bit analog-to-digital conversion of the FPA output video. Digital signal processing electronics perform "fixed" two-point non-uniformity correction (NUC) of the video from both bands and optional dynamic scene-based NUC; advanced enhancement processing of the output video is also supported. The dewar-electronics assembly measures approximately 4.75 x 2.25 x 1.75 inches. A compact, high-performance linear cooler and cooler electronics module provide the necessary FPA cooling over a military environmental temperature range. 3GFSE units are currently being assembled and integrated at RVS, with the first units planned for delivery to the US Army.
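The "fixed" two-point non-uniformity correction mentioned above is a standard technique: per-pixel gain and offset maps are derived from flat-field frames taken at two known blackbody temperatures, then applied to every raw frame. The sketch below assumes an idealized linear detector response; the temperatures and array size are illustrative, not 3GFSE parameters.

```python
import numpy as np

def two_point_nuc(raw, resp_low, resp_high, t_low=300.0, t_high=320.0):
    """Two-point NUC: per-pixel gain/offset computed from flat-field
    responses at two blackbody temperatures, applied to a raw frame."""
    gain = (t_high - t_low) / (resp_high - resp_low)   # per-pixel gain map
    offset = t_low - gain * resp_low                   # per-pixel offset map
    return gain * raw + offset

# Synthetic FPA with per-pixel gain/offset non-uniformity (values illustrative).
rng = np.random.default_rng(0)
g = rng.uniform(0.8, 1.2, size=(4, 4))   # true pixel gains
o = rng.uniform(-50, 50, size=(4, 4))    # true pixel offsets

def scene(t):
    return g * t + o  # idealized linear detector response

corrected = two_point_nuc(scene(310.0), scene(300.0), scene(320.0))
print(np.allclose(corrected, 310.0))
```

Because real detector response drifts and is not perfectly linear, fixed two-point NUC is typically supplemented by the dynamic scene-based NUC the abstract also mentions.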

  1. Standoff passive video imaging at 350 GHz with 251 superconducting detectors

    NASA Astrophysics Data System (ADS)

    Becker, Daniel; Gentry, Cale; Smirnov, Ilya; Ade, Peter; Beall, James; Cho, Hsiao-Mei; Dicker, Simon; Duncan, William; Halpern, Mark; Hilton, Gene; Irwin, Kent; Li, Dale; Paulter, Nicholas; Reintsema, Carl; Schwall, Robert; Tucker, Carole

    2014-06-01

    Millimeter wavelength radiation holds promise for detection of security threats at a distance, including suicide bomb belts and maritime threats in poor weather. The high sensitivity of superconducting Transition Edge Sensor (TES) detectors makes them ideal for passive imaging of thermal signals at these wavelengths. We have built a 350 GHz video-rate imaging system using a large-format array of feedhorn-coupled TES bolometers. The system operates at a standoff distance of 16 m to 28 m with a spatial resolution of 1.4 cm (at 17 m). It currently contains one 251-detector subarray, and will be expanded to contain four subarrays for a total of 1004 detectors. The system has been used to take video images which reveal the presence of weapons concealed beneath a shirt in an indoor setting. We present a summary of this work.

  2. Applications of superconducting bolometers in security imaging

    NASA Astrophysics Data System (ADS)

    Luukanen, A.; Leivo, M. M.; Rautiainen, A.; Grönholm, M.; Toivanen, H.; Grönberg, L.; Helistö, P.; Mäyrä, A.; Aikio, M.; Grossman, E. N.

    2012-12-01

    Millimeter-wave (MMW) imaging systems are currently undergoing deployment worldwide for airport security screening applications. Security screening through MMW imaging is facilitated by the relatively good transmission of these wavelengths through common clothing materials. Given the long wavelength of operation (frequencies from 20 GHz to ~100 GHz, corresponding to wavelengths between 1.5 cm and 3 mm), existing systems are suited for close-range imaging only, due to the substantial diffraction effects associated with practical aperture diameters. Present and arising security challenges call for systems that are capable of imaging concealed threat items at stand-off ranges beyond 5 meters at near-video frame rates, requiring a substantial increase in operating frequency in order to achieve useful spatial resolution. The construction of such imaging systems operating at several hundred GHz has been hindered by the lack of submm-wave low-noise amplifiers. In this paper we summarize our efforts in developing a submm-wave video camera which utilizes cryogenic antenna-coupled microbolometers as detectors. Whilst superconducting detectors impose the use of a cryogenic system, we argue that the resulting back-end complexity increase is a favorable trade-off compared to complex and expensive room-temperature submm-wave LNAs, both in performance and in system cost.

  3. Can we see photosynthesis? Magnifying the tiny color changes of plant green leaves using Eulerian video magnification

    NASA Astrophysics Data System (ADS)

    Taj-Eddin, Islam A. T. F.; Afifi, Mahmoud; Korashy, Mostafa; Ahmed, Ali H.; Cheng, Ng Yoke; Hernandez, Evelyng; Abdel-Latif, Salma M.

    2017-11-01

    Plant aliveness is proven through laboratory experiments and special scientific instruments. We aim to detect the degree of animation of plants based on the magnification of the small color changes in the plant's green leaves using Eulerian video magnification. Capturing the video under a controlled environment, e.g., using a tripod and direct-current light sources, reduces camera movements and minimizes light fluctuations; we aim to reduce the external factors as much as possible. The acquired video is then stabilized, and a proposed algorithm is used to reduce the illumination variations. Finally, Eulerian magnification is utilized to magnify the color changes in the light-invariant video. The proposed system does not require any special-purpose instruments, as it uses a digital camera with a regular frame rate. The results of magnified color changes on both natural and plastic leaves show that the live green leaves have color changes, in contrast to the plastic leaves. Hence, we can argue that the color changes of the leaves are due to biological operations, such as photosynthesis. To date, this is possibly the first work that focuses on interpreting visually some biological operations of plants without any special-purpose instruments.
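The core of Eulerian magnification is temporal band-pass amplification: filter each pixel's time series to a frequency band of interest, scale the band, and add it back. The sketch below applies this to a single pixel trace; it is a simplification of the full spatial-pyramid method, and the oscillation amplitude, frequency band, and gain are assumed values for illustration.

```python
import numpy as np

def magnify_temporal(signal, fps, f_lo, f_hi, alpha):
    """Band-pass one pixel's temporal signal in the frequency domain,
    amplify the passband by alpha, and add it back: a single-pixel
    sketch of Eulerian magnification."""
    n = len(signal)
    spec = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(n, d=1.0 / fps)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return signal + alpha * np.fft.irfft(np.where(band, spec, 0), n)

# Hypothetical green-channel trace: a tiny 0.2-count oscillation at 1 Hz
# (standing in for a subtle leaf-color change) on a constant level.
fps = 30.0
t = np.arange(90) / fps
pixel = 120.0 + 0.2 * np.sin(2 * np.pi * 1.0 * t)
magnified = magnify_temporal(pixel, fps, f_lo=0.5, f_hi=2.0, alpha=50.0)
```

The DC level stays in place because frequency 0 lies outside the passband; only the oscillation in the chosen band is amplified, which is why plastic leaves (no oscillation) remain visually unchanged.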

  4. Surveillance of ground vehicles for airport security

    NASA Astrophysics Data System (ADS)

    Blasch, Erik; Wang, Zhonghai; Shen, Dan; Ling, Haibin; Chen, Genshe

    2014-06-01

    Future surveillance systems will work in complex and cluttered environments, which requires systems-engineering solutions for applications such as airport ground surface management. In this paper, we highlight the use of an L1 video tracker for monitoring activities at an airport. We present methods of information fusion, entity detection, and activity analysis using airport videos for runway detection and airport terminal events. For coordinated airport security, automated ground surveillance enhances efficient and safe maneuvers for aircraft, unmanned air vehicles (UAVs), and unmanned ground vehicles (UGVs) operating within airport environments.

  5. Compilation of Abstracts of Theses Submitted by Candidates for Degrees.

    DTIC Science & Technology

    1986-09-30

    Musitano, J.R., Fin-line Horn Antennas, 118, LCDR, USNR; Muth, L.R., VLSI Tutorials Through the Video-Computer Courseware Implementation System, 119, LT, USN; ...Engineer Allocation Model, 432, CPT, USA; Kiziltan, M., Cognitive Performance Degradation on Sonar Operator and Torpedo Data..., 433, LTJG, Turkish Navy; ...and Computer Engineering, 118: VLSI TUTORIALS THROUGH THE VIDEO-COMPUTER COURSEWARE IMPLEMENTATION SYSTEM, Liesel R. Muth, Lieutenant, United States Navy

  6. Bar-Chart-Monitor System For Wind Tunnels

    NASA Technical Reports Server (NTRS)

    Jung, Oscar

    1993-01-01

    A real-time monitor system developed for the National Full-Scale Aerodynamics Complex at Ames Research Center provides bar-chart displays of significant operating parameters. It is designed to gather and process sensor data on the operating conditions of wind tunnels and models, and to display the data for test engineers and technicians concerned with safety and validation of operating conditions. The bar-chart video monitor displays data in as many as 50 channels at a maximum update rate of 2 Hz, in a format that facilitates quick interpretation.

  7. The Role of Spatial Ability in the Relationship Between Video Game Experience and Route Effectiveness Among Unmanned Vehicle Operators

    DTIC Science & Technology

    2008-12-01

    THE ROLE OF SPATIAL ABILITY IN THE RELATIONSHIP BETWEEN VIDEO GAME EXPERIENCE AND ROUTE EFFECTIVENESS AMONG UNMANNED VEHICLE OPERATORS...ABSTRACT: Effective route planning is essential to the successful operation of unmanned vehicles. Video game experience has been shown to affect...route planning and execution, but why video game experience helps has not been addressed. One answer may be that spatial skills, necessary for route

  8. A Graphical Operator Interface for a Telerobotic Inspection System

    NASA Technical Reports Server (NTRS)

    Kim, W. S.; Tso, K. S.; Hayati, S.

    1993-01-01

    The operator interface has recently emerged as an important element for efficient and safe operator interactions with a telerobotic system. Recent advances in graphical user interface (GUI) and graphics/video merging technologies enable the development of more efficient, flexible operator interfaces. This paper describes an advanced graphical operator interface newly developed for a remote surface inspection system at the Jet Propulsion Laboratory. The interface has been designed so that remote surface inspection can be performed by a single operator with integrated robot control and image inspection capability. It supports three inspection strategies: teleoperated human visual inspection, human visual inspection with automated scanning, and machine-vision-based automated inspection.

  9. A low cost, high performance remotely controlled backhoe/excavator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rizzo, J.

    1995-12-31

    This paper addresses a state-of-the-art, low-cost, remotely controlled backhoe/excavator system for remediation use at hazardous waste sites. The all-weather, all-terrain Remote Dig-It is based on a simple, proven construction platform and incorporates state-of-the-art sensors, control, telemetry, and other subsystems derived from advanced underwater remotely operated vehicle systems. The system can be towed to a site without the use of a trailer, manually operated by an on-board operator, or operated via a fiber-optic or optional RF communications link by a remotely positioned operator. A proportional control system is piggy-backed onto the standard manual control system. The control system improves manual operation, allows rapid manual/remote mode selection, and provides fine manual or remote control of all functions. The system incorporates up to 4 separate video links, acoustic obstacle proximity sensors, stereo audio pickups, and optional differential GPS navigation. Video system options include electronic panning and tilting within a distortion-corrected wide-angle field of view. The backhoe/excavator subsystem has a quick-disconnect interface which allows its use as a manipulator with a wide variety of end effectors and tools. The Remote Dig-It was developed to respond to the need for a low-cost, effective remediation system for use at sites containing hazardous materials. The prototype system was independently evaluated for this purpose by the Army at the Jefferson Proving Ground, where it surpassed all performance goals. At the time of this writing, the Remote Dig-It system is the only backhoe/excavator which has met the Army's goals for remediation systems for use at hazardous waste sites, and it costs a fraction of any known competing offerings.

  10. A system for automatic analysis of blood pressure data for digital computer entry

    NASA Technical Reports Server (NTRS)

    Miller, R. L.

    1972-01-01

    The operation of an automatic blood pressure data system is described. The analog blood pressure signal is analyzed by three separate circuits: systolic, diastolic, and cycle defect. The digital computer output is displayed on a teletype paper tape punch and a video screen. An illustration of the system is included.

  11. 47 CFR 1.1703 - Definitions.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... (CARS). All services authorized under part 78 of this title. (e) Filings. Any application, notification... conveyed by operation of rule upon filing notification of aeronautical frequency usage by MVPDs or... database, application filing system, and processing system for Multichannel Video and Cable Television...

  12. Final report : mobile surveillance and wireless communication systems field operational test. Volume 1, Executive summary

    DOT National Transportation Integrated Search

    1999-03-01

    This study focused on assessing the application of traffic monitoring and management systems which use transportable surveillance and ramp meter trailers, video image processors, and wireless communications. The mobile surveillance and wireless commu...

  13. Linear array of photodiodes to track a human speaker for video recording

    NASA Astrophysics Data System (ADS)

    DeTone, D.; Neal, H.; Lougheed, R.

    2012-12-01

    Communication and collaboration using stored digital media has garnered more interest in many areas of business, government, and education in recent years, due primarily to improvements in the quality of cameras and the speed of computers. An advantage of digital media is that it can serve as an effective alternative when physical interaction is not possible. Video recordings that allow viewers to discern a presenter's facial features, lips, and hand motions are more effective than videos that do not. To attain this, one must maintain a video capture in which the speaker occupies a significant portion of the captured pixels. However, camera operators are costly, and often do an imperfect job of tracking presenters in unrehearsed situations. This creates motivation for a robust, automated system that directs a video camera to follow a presenter as he or she walks anywhere in the front of a lecture hall or large conference room. Such a system is presented. The system consists of a commercial, off-the-shelf pan/tilt/zoom (PTZ) color video camera, a necklace of infrared LEDs, and a linear photodiode array detector. Electronic output from the photodiode array is processed to generate the location of the LED necklace, which is worn by a human speaker. The computer controls the video camera movements to record video of the speaker. The speaker's vertical position and depth are assumed to remain relatively constant; the video camera is sent only panning (horizontal) movement commands. The LED necklace is flashed at 70 Hz at a 50% duty cycle to provide noise-filtering capability. The benefit of using a photodiode array versus a standard video camera is its higher frame rate (4 kHz vs. 60 Hz). The higher frame rate allows for the filtering of infrared noise such as sunlight and indoor lighting, a capability absent from other tracking technologies. The system has been tested in a large lecture hall and is shown to be effective.
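The noise filtering that the 70 Hz flash enables can be illustrated with a digital lock-in: correlating each photodiode channel against quadrature references at the flash frequency rejects DC sunlight and mains-related lamp flicker. This is a sketch of the principle, not the paper's circuit; the signal levels and flicker frequency are assumed.

```python
import numpy as np

def lockin_amplitude(samples, fs, f_ref):
    """Digital lock-in: correlate the photodiode signal with quadrature
    references at the LED flash frequency, rejecting ambient light."""
    t = np.arange(len(samples)) / fs
    i = np.mean(samples * np.cos(2 * np.pi * f_ref * t))
    q = np.mean(samples * np.sin(2 * np.pi * f_ref * t))
    return 2 * np.hypot(i, q)  # amplitude of the f_ref component

# Simulated diode: DC sunlight + 120 Hz lamp flicker + 70 Hz LED square wave.
fs, f_led = 4000.0, 70.0
t = np.arange(int(fs)) / fs  # one second of 4 kHz samples
led = 0.5 * (np.sign(np.sin(2 * np.pi * f_led * t)) + 1)  # 50% duty flash
diode = 10.0 + 3.0 * np.sin(2 * np.pi * 120.0 * t) + led
print(lockin_amplitude(diode, fs, f_led))
```

Over an integer number of flash cycles the DC and 120 Hz terms integrate to essentially zero, so the recovered amplitude tracks only the LED; applying this per photodiode element yields the necklace position along the array.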

  14. Portable Airborne Laser System Measures Forest-Canopy Height

    NASA Technical Reports Server (NTRS)

    Nelson, Ross

    2005-01-01

    The Portable Airborne Laser System (PALS) is a combination of laser ranging, video imaging, positioning, and data-processing subsystems designed for measuring the heights of forest canopies along linear transects tens to thousands of kilometers long. Unlike prior laser ranging systems designed to serve the same purpose, the PALS is not restricted to use aboard a single aircraft of a specific type: the PALS fits into two large suitcases that can be carried to any convenient location, and the PALS can be installed in almost any local aircraft for hire, thereby making it possible to sample remote forests at relatively low cost. The initial cost and the cost of repairing the PALS are also lower because the PALS hardware consists mostly of commercial off-the-shelf (COTS) units that can easily be replaced in the field. The COTS units include a laser ranging transceiver, a charge-coupled-device (CCD) camera that images the laser-illuminated targets, a differential Global Positioning System (dGPS) receiver capable of operation within the Wide Area Augmentation System, a video titler, a video cassette recorder (VCR), and a laptop computer equipped with two serial ports. The VCR and computer are powered by batteries; the other units are powered at 12 VDC from the 28-VDC aircraft power system via a low-pass filter and a voltage converter. The dGPS receiver feeds location and time data, at an update rate of 0.5 Hz, to the video titler and the computer. The laser ranging transceiver, operating at a sampling rate of 2 kHz, feeds its serial range and amplitude data stream to the computer. The analog video signal from the CCD camera is fed into the video titler, wherein the signal is annotated with position and time information. The titler then forwards the annotated signal to the VCR for recording on 8-mm tapes.
The dGPS and laser range and amplitude serial data streams are processed by software that displays the laser trace and the dGPS information as they are fed into the computer, subsamples the laser range and amplitude data, interleaves the subsampled data with the dGPS information, and records the resulting interleaved data stream.

  15. Video-microscopy for use in microsurgical aspects of complex hepatobiliary and pancreatic surgery: a preliminary report

    PubMed Central

    Nissen, Nicholas N; Menon, Vijay; Williams, James; Berci, George

    2011-01-01

    Background The use of loupe magnification during complex hepatobiliary and pancreatic (HPB) surgery has become routine. Unfortunately, loupe magnification has several disadvantages, including limited magnification, a fixed field, and non-variable magnification parameters. The aim of this report is to describe a simple system of video-microscopy for use in open surgery as an alternative to loupe magnification. Methods In video-microscopy, the operative field is displayed on a TV monitor using a high-definition (HD) camera with a special optic mounted on an adjustable mechanical arm. The set-up and application of this system are described and illustrated using examples drawn from pancreaticoduodenectomy, bile duct repair, and liver transplantation. Results This system is easy to use and can provide variable magnification of ×4–12 at a camera distance of 25–35 cm from the operative field and a depth of field of 15 mm. This system allows the surgeon and assistant to work from an HD TV screen during critical phases of microsurgery. Conclusions The system described here provides better magnification than loupe lenses and thus may be beneficial during complex HPB procedures. Other benefits of this system include the fact that its use decreases neck strain and postural fatigue in the surgeon, and it can be used as a tool for documentation and teaching. PMID:21929677

  16. Artists concept of the salvage operations offshore of KSC after STS 51-L

    NASA Image and Video Library

    1986-04-01

    S86-30088 (March 1986) --- Salvage operations offshore of Kennedy Space Center are depicted in this artist's concept, showing a grapple and recovery fixture (left) being directed through the use of a remote video system suspended from the recovery ship. Photo credit: NASA

  17. Development and Evaluation of Sensor Concepts for Ageless Aerospace Vehicles: Report 6 - Development and Demonstration of a Self-Organizing Diagnostic System for Structural Health Monitoring

    NASA Technical Reports Server (NTRS)

    Batten, Adam; Edwards, Graeme; Gerasimov, Vadim; Hoschke, Nigel; Isaacs, Peter; Lewis, Chris; Moore, Richard; Oppolzer, Florien; Price, Don; Prokopenko, Mikhail

    2010-01-01

    This report describes a significant advance in the capability of the CSIRO/NASA structural health monitoring Concept Demonstrator (CD). The main thrust of the work has been the development of a mobile robotic agent, and the hardware and software modifications and developments required to enable the demonstrator to operate as a single, self-organizing, multi-agent system. This single-robot system is seen as the forerunner of a system in which larger numbers of small robots perform inspection and repair tasks cooperatively, by self-organization. While the goal of demonstrating self-organized damage diagnosis was not fully achieved in the time available, much of the work required for the final element that enables the robot to point the video camera and transmit an image has been completed. A demonstration video of the CD and robotic systems operating will be made and forwarded to NASA.

  18. Progress in passive submillimeter-wave video imaging

    NASA Astrophysics Data System (ADS)

    Heinz, Erik; May, Torsten; Born, Detlef; Zieger, Gabriel; Peiselt, Katja; Zakosarenko, Vyacheslav; Krause, Torsten; Krüger, André; Schulz, Marco; Bauer, Frank; Meyer, Hans-Georg

    2014-06-01

    Since 2007 we have been developing passive submillimeter-wave video cameras for personal security screening. In contrast to established portal-based millimeter-wave scanning techniques, these are suitable for stand-off or stealth operation. The cameras operate in the 350 GHz band and use arrays of superconducting transition-edge sensors (TES), reflector optics, and opto-mechanical scanners. Whereas the basic principle of these devices remains unchanged, there has been continuous development of the technical details, such as the detector array, the scanning scheme, and the readout, as well as system integration and performance. The latest prototype of this camera development features a linear array of 128 detectors and a linear scanner capable of a 25 Hz frame rate. Using different types of reflector optics, a field of view of 1 × 2 m² and a spatial resolution of 1-2 cm are provided at object distances of about 5-25 m. We present the concept of this camera and give details on system design and performance. Demonstration videos show its capability for hidden-threat detection and illustrate possible application scenarios.

  19. Ethernet direct display: a new dimension for in-vehicle video connectivity solutions

    NASA Astrophysics Data System (ADS)

    Rowley, Vincent

    2009-05-01

    To improve the local situational awareness (LSA) of personnel in light or heavily armored vehicles, most military organizations recognize the need to equip their fleets with high-resolution digital video systems. Several related upgrade programs are already in progress and, almost invariably, COTS IP/Ethernet is specified as the underlying transport mechanism. The high bandwidth, long reach, networking flexibility, scalability, and affordability of IP/Ethernet make it an attractive choice. There are significant technical challenges, however, in achieving high-performance, real-time video connectivity over the IP/Ethernet platform. As an early pioneer in performance-oriented video systems based on IP/Ethernet, Pleora Technologies has developed core expertise in meeting these challenges and applied a singular focus to innovating within the required framework. The company's field-proven iPORT(TM) Video Connectivity Solution is deployed successfully in thousands of real-world applications for medical, military, and manufacturing operations. Pleora's latest innovation is eDisplay(TM), a small-footprint, low-power, highly efficient IP engine that acquires video from an Ethernet connection and sends it directly to a standard HDMI/DVI monitor for real-time viewing; more costly PCs are not required. This paper describes Pleora's eDisplay IP Engine in more detail. It demonstrates how, in concert with other elements of the end-to-end iPORT Video Connectivity Solution, the engine can be used to build standards-based, in-vehicle video systems that increase the safety and effectiveness of military personnel while fully leveraging the advantages of the low-cost COTS IP/Ethernet platform.

  20. Imaging-guided thoracoscopic resection of a ground-glass opacity lesion in a hybrid operating room equipped with a robotic C-arm CT system.

    PubMed

    Hsieh, Chen-Ping; Hsieh, Ming-Ju; Fang, Hsin-Yueh; Chao, Yin-Kai

    2017-05-01

    The intraoperative identification of small pulmonary nodules through video-assisted thoracoscopic surgery (VATS) remains challenging. Although preoperative CT-guided nodule localization is commonly used to detect tumors during VATS, this approach carries inherent risks. We report the case of a patient with stage I lung cancer presenting as an area of ground-glass opacity (GGO) in the right upper pulmonary lobe. He successfully underwent single-stage CT-guided localization and removal of the pulmonary nodule within a hybrid operating room (OR) equipped with a robotic C-arm.

  1. Summarizing Audiovisual Contents of a Video Program

    NASA Astrophysics Data System (ADS)

    Gong, Yihong

    2003-12-01

    In this paper, we focus on video programs that are intended to disseminate information and knowledge, such as news, documentaries, and seminars, and present an audiovisual summarization system that summarizes the audio and visual contents of the given video separately, and then integrates the two summaries with a partial alignment. The audio summary is created by selecting spoken sentences that best present the main content of the audio speech, while the visual summary is created by eliminating duplicates/redundancies and preserving visually rich contents in the image stream. The alignment operation aims to synchronize each spoken sentence in the audio summary with its corresponding speaker's face and to preserve the rich content in the visual summary. A bipartite-graph-based audiovisual alignment algorithm is developed to efficiently find the best alignment solution that satisfies these requirements. With the proposed system, we strive to produce a video summary that: (1) provides a natural visual and audio content overview, and (2) maximizes the coverage of both the audio and visual contents of the original video without sacrificing either of them.
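The bipartite alignment step can be pictured as assigning each summary sentence to a distinct shot so that the total match score is maximized. The toy sketch below solves this exhaustively for a small instance; the score matrix is hypothetical, and the paper's graph algorithm scales far better than brute force.

```python
from itertools import permutations

def best_alignment(score):
    """Exhaustive bipartite alignment for a small example: pair each audio
    sentence with a distinct shot, maximizing the total face/speaker
    match score. score[s][v] = match of sentence s with shot v."""
    n_sent = len(score)
    n_shot = len(score[0])
    best, best_pairs = float("-inf"), None
    for shots in permutations(range(n_shot), n_sent):
        total = sum(score[s][shots[s]] for s in range(n_sent))
        if total > best:
            best, best_pairs = total, list(enumerate(shots))
    return best, best_pairs

# Rows: spoken sentences; columns: candidate shots (hypothetical scores).
score = [
    [0.9, 0.1, 0.2],   # sentence 0 clearly matches shot 0
    [0.4, 0.8, 0.3],   # sentence 1 matches shot 1
]
print(best_alignment(score))
```

For realistic problem sizes, a maximum-weight bipartite matching algorithm (e.g. the Hungarian method) replaces the permutation search while returning the same optimum.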

  2. An attentive multi-camera system

    NASA Astrophysics Data System (ADS)

    Napoletano, Paolo; Tisato, Francesco

    2014-03-01

    Intelligent multi-camera systems that integrate computer vision algorithms are not error free, and thus both false positive and false negative detections need to be reviewed by a specialized human operator. Traditional multi-camera systems usually include a control center with a wall of monitors displaying the video from each camera in the network. Nevertheless, as the number of cameras increases, switching from one camera to another becomes hard for a human operator. In this work we propose a new method that dynamically selects and displays the content of one video camera from all the available contents in the multi-camera system. The proposed method is based on a computational model of human visual attention that integrates top-down and bottom-up cues. We believe that this is the first work that tries to use a model of human visual attention for the dynamic selection of the camera view in a multi-camera system. The proposed method has been evaluated in a given scenario and has demonstrated its effectiveness with respect to other methods and manually generated ground truth. The effectiveness has been evaluated in terms of the number of correct best views generated by the method with respect to the camera views manually selected by a human operator.
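The combination of bottom-up and top-down cues can be sketched as a weighted scoring of each camera's current frame. The toy below uses local contrast as a crude stand-in for bottom-up saliency and a per-camera weight for top-down priority; both choices are assumptions for illustration, not the authors' attention model.

```python
import numpy as np

def select_camera(frames, top_down_weight=None):
    """Pick the camera whose frame scores highest: a bottom-up term
    (frame contrast, as a crude saliency proxy) optionally modulated
    by a per-camera top-down weight."""
    n = len(frames)
    w = np.ones(n) if top_down_weight is None else np.asarray(top_down_weight)
    bottom_up = np.array([np.std(f) for f in frames])
    return int(np.argmax(w * bottom_up))

# Three synthetic views: two quiet scenes and one with a high-contrast event.
rng = np.random.default_rng(1)
views = [rng.normal(100, 1, (60, 80)) for _ in range(3)]
views[2][20:40, 30:50] += 80.0  # bright object raises contrast in camera 2
print(select_camera(views))
```

Raising a camera's top-down weight (e.g. for a zone the operator flagged as sensitive) lets it win the display even when its bottom-up saliency is lower.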

  3. Meniscus Imaging for Crystal-Growth Control

    NASA Technical Reports Server (NTRS)

    Sachs, E. M.

    1983-01-01

    Silicon crystal growth monitored by a new video system reduces operator stress and improves conditions for observation and control of the growing process. The system optics produce greater magnification vertically than horizontally, so the entire meniscus and melt are viewed with high resolution in both the width and height dimensions.

  4. Fighting in a Contested Space Environment: Training Marines for Operations with Degraded or Denied Space-Enabled Capabilities

    DTIC Science & Technology

    2015-06-01

    System; UFG, Ulchi Freedom Guardian; UFO, UHF Follow-On System; UHF, Ultra-High Frequency; URE, User Range Error; VTC, Video Teleconference; WGS, Wideband...in the UHF band; two legacy systems, Fleet Satellite Communication System (FLTSATCOM) and UHF Follow-On (UFO), and the new constellation being

  5. Apparatus for monitoring crystal growth

    DOEpatents

    Sachs, Emanual M.

    1981-01-01

    A system and method are disclosed for monitoring the growth of a crystalline body from a liquid meniscus in a furnace. The system provides an improved human/machine interface so as to reduce operator stress, strain and fatigue while improving the conditions for observation and control of the growing process. The system comprises suitable optics for forming an image of the meniscus and body, wherein the image is anamorphic so that the entire meniscus can be viewed with good resolution in both the width and height dimensions. The system also comprises a video display for displaying the anamorphic image. The video display includes means for enhancing the contrast between any two contrasting points in the image. The video display also comprises a signal averager for averaging the intensity of at least one preselected portion of the image. The value of the average intensity can in turn be utilized to control the growth of the body. The system and method are also capable of observing and monitoring multiple processes.

  6. Method of monitoring crystal growth

    DOEpatents

    Sachs, Emanual M.

    1982-01-01

    A system and method are disclosed for monitoring the growth of a crystalline body from a liquid meniscus in a furnace. The system provides an improved human/machine interface so as to reduce operator stress, strain and fatigue while improving the conditions for observation and control of the growing process. The system comprises suitable optics for forming an image of the meniscus and body, wherein the image is anamorphic so that the entire meniscus can be viewed with good resolution in both the width and height dimensions. The system also comprises a video display for displaying the anamorphic image. The video display includes means for enhancing the contrast between any two contrasting points in the image. The video display also comprises a signal averager for averaging the intensity of at least one preselected portion of the image. The value of the average intensity can in turn be utilized to control the growth of the body. The system and method are also capable of observing and monitoring multiple processes.
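    The signal-averager stage and its use for growth control can be sketched as follows; the region of interest, setpoint, and gain are hypothetical, and the patent does not specify a control law (a simple proportional rule stands in for illustration):

```python
def region_mean(frame, roi):
    """Average intensity over a preselected region of interest.
    frame: 2-D list of pixel intensities; roi: (row0, row1, col0, col1)."""
    r0, r1, c0, c1 = roi
    vals = [frame[r][c] for r in range(r0, r1) for c in range(c0, c1)]
    return sum(vals) / len(vals)

def heater_adjust(mean_intensity, setpoint=128.0, gain=0.05):
    """Proportional control sketch: a brighter meniscus nudges heater power
    down, a dimmer one nudges it up (sign convention is illustrative)."""
    return gain * (setpoint - mean_intensity)

frame = [[100, 110], [120, 130]]
m = region_mean(frame, (0, 2, 0, 2))  # 115.0
print(heater_adjust(m))               # ≈ 0.65
```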

  7. Intergraph video and images exploitation capabilities

    NASA Astrophysics Data System (ADS)

    Colla, Simone; Manesis, Charalampos

    2013-08-01

    The current paper focuses on the capture, fusion and processing of aerial imagery in order to leverage full motion video, giving analysts the ability to collect, analyze, and maximize the value of video assets. Unmanned aerial vehicles (UAVs) have provided critical real-time surveillance and operational support to military organizations, and are a key source of intelligence, particularly when integrated with other geospatial data. In the current workflow, the UAV operators first plan the flight using flight planning software. During the flight the UAV sends a live video stream directly to the field to be processed by Intergraph software, which generates and disseminates georeferenced images through a service-oriented architecture based on the ERDAS Apollo suite. The raw video-based data sources provide the most recent view of a situation and can augment other forms of geospatial intelligence - such as satellite imagery and aerial photos - to provide a richer, more detailed view of the area of interest. To effectively use video as a source of intelligence, however, the analyst needs to seamlessly fuse the video with these other types of intelligence, such as map features and annotations. Intergraph has developed an application that automatically generates mosaicked, georeferenced image tags along the video route, which can then be seamlessly integrated with other forms of static data, such as aerial photos, satellite imagery, or geospatial layers and features. Consumers will finally have the ability to use a single, streamlined system to complete the entire geospatial information lifecycle: capturing geospatial data using sensor technology; processing vector, raster, and terrain data into actionable information; managing, fusing, and sharing geospatial data and video together; and finally, rapidly and securely delivering integrated information products, ensuring individuals can make timely decisions.

  8. Unmanned ground vehicles for integrated force protection

    NASA Astrophysics Data System (ADS)

    Carroll, Daniel M.; Mikell, Kenneth; Denewiler, Thomas

    2004-09-01

    The combination of Command and Control (C2) systems with Unmanned Ground Vehicles (UGVs) provides Integrated Force Protection from the Robotic Operations Command Center. Autonomous UGVs are directed as Force Projection units. UGV payloads and fixed sensors provide situational awareness while unattended munitions provide a less-than-lethal response capability. Remote resources serve as automated interfaces to legacy physical devices such as manned response vehicles, barrier gates, fence openings, garage doors, and remote power on/off capability for unmanned systems. The Robotic Operations Command Center executes the Multiple Resource Host Architecture (MRHA) to simultaneously control heterogeneous unmanned systems. The MRHA graphically displays video, map, and status for each resource using wireless digital communications for integrated data, video, and audio. Events are prioritized and the user is prompted with audio alerts and text instructions for alarms and warnings. A control hierarchy of missions and duty rosters supports autonomous operations. This paper provides an overview of the key technology enablers for Integrated Force Protection, with details on a force-on-force scenario to test and demonstrate the concept of operations using Unmanned Ground Vehicles. Special attention is given to development and applications for the Remote Detection Challenge and Response (REDCAR) initiative for Integrated Base Defense.

  9. Detailed design package for design of a video system providing optimal visual information for controlling payload and experiment operations with television

    NASA Technical Reports Server (NTRS)

    1975-01-01

    A detailed description of a video system for controlling space shuttle payloads and experiments is presented in the preliminary design review and critical design review (the first and second engineering design reports, respectively) and in the final report submitted jointly with the design package. The four subsequent sections of the package contain system descriptions, design data, and specifications for the recommended 2-view system. Section 2 contains diagrams relating to the simulation test configuration of the 2-view system. Section 3 contains descriptions and drawings of the deliverable breadboard equipment. A description of the recommended system is contained in Section 4, with equipment specifications in Section 5.

  10. Achieving an Optimal Medium Altitude UAV Force Balance in Support of COIN Operations

    DTIC Science & Technology

    2009-02-02

    and execute operations. UAS with common data links and remote video terminals (RVTs) provide input to the common operational picture (COP) and...full-motion video (FMV) is intuitive to many tactical warfighters who have used similar sensors in manned aircraft. Modern data links allow the video ...Document (AFDD) 2-9. Intelligence, Surveillance, and Reconnaissance Operations, 17 July 2007. Baldor, Lolita C. “Increased UAV reliance evident in

  11. Recording stereoscopic 3D neurosurgery with a head-mounted 3D camera system.

    PubMed

    Lee, Brian; Chen, Brian R; Chen, Beverly B; Lu, James Y; Giannotta, Steven L

    2015-06-01

    Stereoscopic three-dimensional (3D) imaging can present more information to the viewer and further enhance the learning experience over traditional two-dimensional (2D) video. Most 3D surgical videos are recorded from the operating microscope and only feature the crux, or the most important part of the surgery, leaving out other crucial parts of surgery including the opening, approach, and closing of the surgical site. In addition, many other surgeries including complex spine, trauma, and intensive care unit procedures are also rarely recorded. We describe and share our experience with a commercially available head-mounted stereoscopic 3D camera system to obtain stereoscopic 3D recordings of these seldom recorded aspects of neurosurgery. The strengths and limitations of using the GoPro® 3D system as a head-mounted stereoscopic 3D camera system in the operating room are reviewed in detail. Over the past several years, we have recorded in stereoscopic 3D over 50 cranial and spinal surgeries and created a library for education purposes. We have found the head-mounted stereoscopic 3D camera system to be a valuable asset to supplement 3D footage from a 3D microscope. We expect that these comprehensive 3D surgical videos will become an important facet of resident education and ultimately lead to improved patient care.

  12. 47 CFR 1.1704 - Station files.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... Operations and Licensing System (COALS) § 1.1704 Station files. Applications, notifications, correspondence... 47 Telecommunication 1 2014-10-01 2014-10-01 false Station files. 1.1704 Section 1.1704... administrative data relating to each system in the Multichannel Video and Cable Television Services (MVCTS) and...

  13. Sensing system for detection and control of deposition on pendant tubes in recovery and power boilers

    DOEpatents

    Kychakoff, George [Maple Valley, WA; Afromowitz, Martin A [Mercer Island, WA; Hogle, Richard E [Olympia, WA

    2008-10-14

    A system for detection and control of deposition on pendant tubes in recovery and power boilers includes one or more deposit monitoring sensors operating in infrared regions of about 4 or 8.7 microns, either directly producing images of the interior of the boiler or feeding signals to a data processing system that provides information enabling the distributed control system by which the boilers are operated to run them more efficiently. The data processing system includes an image pre-processing circuit in which a 2-D image formed by the video data input is captured, and includes a low-pass filter for noise filtering of the video input. It also includes an image compensation system for array compensation to correct for pixel variation, dead cells, etc., and for correcting geometric distortion. An image segmentation module receives a cleaned image from the image pre-processing circuit and separates the image of the recovery boiler interior into background, pendant tubes, and deposition. It also performs thresholding/clustering on gray scale/texture, applies morphological transforms to smooth regions, and identifies regions by connected components. An image-understanding unit receives the segmented image from the image segmentation module and matches the derived regions to a 3-D model of the boiler. It derives a 3-D structure of the deposition on the pendant tubes and provides the information about deposits to the plant distributed control system for more efficient operation of the plant's pendant tube cleaning and operating systems.
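    The segmentation module's "identifies regions by connected components" step can be sketched with plain thresholding and a breadth-first flood fill; the gray-scale values and threshold below are hypothetical:

```python
from collections import deque

def label_regions(mask):
    """Label connected components (4-connectivity) in a binary mask,
    as in the segmentation stage described above."""
    rows, cols = len(mask), len(mask[0])
    labels = [[0] * cols for _ in range(rows)]
    current = 0
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not labels[r][c]:
                current += 1
                q = deque([(r, c)])
                labels[r][c] = current
                while q:  # flood-fill this component
                    y, x = q.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < rows and 0 <= nx < cols \
                                and mask[ny][nx] and not labels[ny][nx]:
                            labels[ny][nx] = current
                            q.append((ny, nx))
    return current, labels

# thresholding on gray scale: 1 = candidate deposit pixel
gray = [[10, 200, 15], [12, 210, 220], [230, 11, 215]]
mask = [[1 if v > 128 else 0 for v in row] for row in gray]
n, _ = label_regions(mask)
print(n)  # 2 candidate deposit regions
```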

  14. Mobile Vehicle Teleoperated Over Wireless IP

    DTIC Science & Technology

    2007-06-13

    VideoLAN software suite. The VLC media player portion of this suite handles network streaming of video, as well as the receipt and display of the video...is found in appendix C.7. Video Display The video feed is displayed for the operator using VLC opened independently from the control sending program...This gives the operator the most choice in how to configure the display. To connect VLC to the feed all you need is the IP address from the Java

  15. Towards a Video Passive Content Fingerprinting Method for Partial-Copy Detection Robust against Non-Simulated Attacks

    PubMed Central

    2016-01-01

    Passive content fingerprinting is widely used for video content identification and monitoring. However, many challenges remain unsolved especially for partial-copies detection. The main challenge is to find the right balance between the computational cost of fingerprint extraction and fingerprint dimension, without compromising detection performance against various attacks (robustness). Fast video detection performance is desirable in several modern applications, for instance, in those where video detection involves the use of large video databases or in applications requiring real-time video detection of partial copies, a process whose difficulty increases when videos suffer severe transformations. In this context, conventional fingerprinting methods are not fully suitable to cope with the attacks and transformations mentioned before, either because the robustness of these methods is not enough or because their execution time is very high, where the time bottleneck is commonly found in the fingerprint extraction and matching operations. Motivated by these issues, in this work we propose a content fingerprinting method based on the extraction of a set of independent binary global and local fingerprints. Although these features are robust against common video transformations, their combination is more discriminant against severe video transformations such as signal processing attacks, geometric transformations and temporal and spatial desynchronization. Additionally, we use an efficient multilevel filtering system accelerating the processes of fingerprint extraction and matching. This multilevel filtering system helps to rapidly identify potential similar video copies upon which the fingerprint process is carried out only, thus saving computational time. We tested with datasets of real copied videos, and the results show how our method outperforms state-of-the-art methods regarding detection scores. 
Furthermore, the granularity of our method makes it suitable for partial-copy detection, that is, processing only short segments of 1-second length. PMID:27861492
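    The multilevel filtering idea, a cheap global fingerprint pruning the database before the costlier local match, can be sketched with binary fingerprints compared by Hamming distance; all fingerprints and the pruning budget below are hypothetical:

```python
def hamming(a, b):
    """Hamming distance between two integer-encoded binary fingerprints."""
    return bin(a ^ b).count("1")

def query(db, global_fp, local_fp, coarse_budget=2):
    """Two-level filtering: the cheap global fingerprint prunes the database,
    and the costlier local fingerprint is matched only against survivors."""
    candidates = [v for v in db if hamming(v["global"], global_fp) <= coarse_budget]
    return min(candidates, key=lambda v: hamming(v["local"], local_fp), default=None)

db = [
    {"id": "clip-a", "global": 0b10110010, "local": 0b1111000011110000},
    {"id": "clip-b", "global": 0b10110000, "local": 0b1111000011111111},
    {"id": "clip-c", "global": 0b01001101, "local": 0b0000111100001111},
]
hit = query(db, 0b10110011, 0b1111000011110001)
print(hit["id"])  # clip-a  (clip-c is pruned at the coarse level)
```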

  16. SOA approach to battle command: simulation interoperability

    NASA Astrophysics Data System (ADS)

    Mayott, Gregory; Self, Mid; Miller, Gordon J.; McDonnell, Joseph S.

    2010-04-01

    NVESD is developing a Sensor Data and Management Services (SDMS) Service Oriented Architecture (SOA) that provides an innovative approach to achieve seamless application functionality across simulation and battle command systems. In 2010, CERDEC will conduct a SDMS Battle Command demonstration that will highlight the SDMS SOA capability to couple simulation applications to existing Battle Command systems. The demonstration will leverage RDECOM MATREX simulation tools and TRADOC Maneuver Support Battle Laboratory Virtual Base Defense Operations Center facilities. The battle command systems are those specific to the operation of a base defense operations center in support of force protection missions. The SDMS SOA consists of four components that will be discussed. An Asset Management Service (AMS) will automatically discover the existence, state, and interface definition required to interact with a named asset (sensor or a sensor platform, a process such as level-1 fusion, or an interface to a sensor or other network endpoint). A Streaming Video Service (SVS) will automatically discover the existence, state, and interfaces required to interact with a named video stream, and abstract the consumers of the video stream from the originating device. A Task Manager Service (TMS) will be used to automatically discover the existence of a named mission task, and will interpret, translate and transmit a mission command for the blue force unit(s) described in a mission order. JC3IEDM data objects, and software development kit (SDK), will be utilized as the basic data object definition for implemented web services.

  17. Real time simulation using position sensing

    NASA Technical Reports Server (NTRS)

    Isbell, William B. (Inventor); Taylor, Jason A. (Inventor); Studor, George F. (Inventor); Womack, Robert W. (Inventor); Hilferty, Michael F. (Inventor); Bacon, Bruce R. (Inventor)

    2000-01-01

    An interactive exercise system including exercise equipment having a resistance system, a speed sensor, a controller that varies the resistance setting of the exercise equipment, and a playback device for playing pre-recorded video and audio. The controller, operating in conjunction with speed information from the speed sensor and terrain information from media table files, dynamically varies the resistance setting of the exercise equipment in order to simulate varying degrees of difficulty while the playback device concurrently plays back the video and audio to create the simulation that the user is exercising in a natural setting such as a real-world exercise course.
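    The controller's behavior, varying resistance from speed-sensor readings and the terrain information in the media table files, can be sketched as a clamped linear rule; all constants and units below are hypothetical:

```python
def resistance_setting(speed_mps, grade_pct, base=20.0, k_speed=2.0, k_grade=1.5):
    """Compute a resistance setting from current speed and terrain grade,
    clamped to the equipment's 0-100 range (constants are illustrative)."""
    r = base + k_speed * speed_mps + k_grade * grade_pct
    return max(0.0, min(100.0, r))

print(resistance_setting(3.0, 4.0))   # 32.0 (uphill: harder)
print(resistance_setting(3.0, -8.0))  # 14.0 (downhill: easier)
```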

  18. Aerial Video Imaging

    NASA Technical Reports Server (NTRS)

    1991-01-01

    When Michael Henry wanted to start an aerial video service, he turned to Johnson Space Center for assistance. Two NASA engineers - one of whom had designed and developed TV systems in the Apollo, Skylab, Apollo-Soyuz and Space Shuttle programs - designed a wing-mounted fiberglass camera pod. The camera head and angles are adjustable, and the pod is shaped to reduce vibration. The controls are located so a solo pilot can operate the system. A microprocessor displays latitude, longitude, and bearing, and a GPS receiver provides position data for possible legal references. The service has been successfully utilized by railroads, oil companies, real estate companies, etc.

  19. [Video-assisted thoracoscopic surgery as an alternative to urgent thoracotomy following open chest trauma in selected cases].

    PubMed

    Samiatina, Diana; Rubikas, Romaldas

    2004-01-01

    To show that video-assisted thoracoscopic surgery in selected cases is an alternative to urgent thoracotomy following open chest trauma. Retrospective analysis of case reports of patients operated on for open chest trauma during 1997-2002. Two methods of surgical treatment were compared: urgent video-assisted thoracoscopy and urgent thoracotomy. Duration of drain presence in the pleural cavity, duration of postoperative treatment, pain intensity and cosmetic effect were evaluated. Data analysis was performed using SPSS statistical software. Statistical evaluation of differences between groups was performed using the Mann-Whitney U test; differences were considered statistically significant at p<0.05. During 1997-2002, 121 patients with open chest trauma were operated on. Thirty-three patients underwent urgent video-assisted thoracoscopy, and 88 patients were operated on through a thoracotomy incision: 69 due to isolated open chest trauma, 17 due to thoracoabdominal injury and 2 due to abdominothoracic injury. After urgent thoracotomy, 12.5% of patients underwent urgent laparotomy due to damage to the diaphragm and other organs of the peritoneal cavity. Duration of drain presence in the pleural cavity was 4.57 days after video-assisted thoracoscopy and 6.88 days after urgent thoracotomy (p<0.05). Duration of postoperative treatment was 8.21 days after video-assisted thoracoscopy and 14.89 days after urgent thoracotomy (p<0.05). The amount of consumed non-narcotic analgesics was 1056.98 mg after video-assisted thoracoscopy and 1966.70 mg after urgent thoracotomy (p<0.05). Video-assisted thoracoscopy is a minimally invasive method of thoracic surgery allowing for the evaluation of pathological changes in the lung, pericardium, diaphragm, mediastinum, thoracic wall and pleura, including the localization of these changes and the type and severity of the injury. The number of early postoperative complications following video-assisted thoracoscopy is lower. Compared to operations through a thoracotomy incision, video-assisted thoracoscopies shorten the duration of drain presence in the pleural cavity and the duration of postoperative treatment. Video-assisted thoracoscopy should be performed in all patients with open chest trauma, stable hemodynamics and stable respiratory function. Video-assisted thoracoscopy is an informative diagnostic and treatment method allowing for the selection of patients for urgent thoracotomy.

  20. Visual analysis of trash bin processing on garbage trucks in low resolution video

    NASA Astrophysics Data System (ADS)

    Sidla, Oliver; Loibner, Gernot

    2015-03-01

    We present a system for trash can detection and counting from a camera mounted on a garbage collection truck. A working prototype has been successfully implemented and tested with several hours of real-world video. The detection pipeline consists of HOG detectors for two trash can sizes, plus mean-shift tracking and low-level image processing for the analysis of the garbage disposal process. Considering the harsh environment and unfavorable imaging conditions, the process already works well enough that very useful measurements can be extracted from the video data. The false positive/false negative rate of the full processing pipeline is about 5-6% in fully automatic operation. Video data of a full day (about 8 hrs) can be processed in about 30 minutes on a standard PC.
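    The disposal-process analysis on top of detection and tracking can be sketched as counting lift-line crossings of a tracked bin's vertical image position; the lift line and track values below are hypothetical:

```python
def count_lifts(track, lift_y=100):
    """Count disposal events: a tracked bin is 'lifted' when its vertical
    position crosses the lift line going up and later returns below it.
    (Image y-coordinates decrease toward the top of the frame.)"""
    lifts, up = 0, False
    for y in track:
        if not up and y < lift_y:
            up = True
        elif up and y >= lift_y:
            lifts += 1
            up = False
    return lifts

# one bin lifted twice; per-frame tracker y-coordinates (hypothetical)
track = [140, 120, 90, 60, 95, 130, 125, 80, 70, 110, 150]
print(count_lifts(track))  # 2
```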

  1. Surgeon point-of-view recording: Using a high-definition head-mounted video camera in the operating room.

    PubMed

    Nair, Akshay Gopinathan; Kamal, Saurabh; Dave, Tarjani Vivek; Mishra, Kapil; Reddy, Harsha S; Della Rocca, David; Della Rocca, Robert C; Andron, Aleza; Jain, Vandana

    2015-10-01

    To study the utility of a commercially available small, portable ultra-high definition (HD) camera (GoPro Hero 4) for intraoperative recording. A head mount was used to fix the camera on the operating surgeon's head. Due care was taken to protect the patient's identity. The recorded video was subsequently edited and used as a teaching tool. This retrospective, noncomparative study was conducted at three tertiary eye care centers. The surgeries recorded were ptosis correction, ectropion correction, dacryocystorhinostomy, angular dermoid excision, enucleation, blepharoplasty and lid tear repair surgery (one each). The recorded videos were reviewed, edited, and checked for clarity, resolution, and reproducibility. The recorded videos were found to be high quality, which allowed for zooming and visualization of the surgical anatomy clearly. Minimal distortion is a drawback that can be effectively addressed during postproduction. The camera, owing to its lightweight and small size, can be mounted on the surgeon's head, thus offering a unique surgeon point-of-view. In our experience, the results were of good quality and reproducible. A head-mounted ultra-HD video recording system is a cheap, high quality, and unobtrusive technique to record surgery and can be a useful teaching tool in external facial and ophthalmic plastic surgery.

  2. Surgeon point-of-view recording: Using a high-definition head-mounted video camera in the operating room

    PubMed Central

    Nair, Akshay Gopinathan; Kamal, Saurabh; Dave, Tarjani Vivek; Mishra, Kapil; Reddy, Harsha S; Rocca, David Della; Rocca, Robert C Della; Andron, Aleza; Jain, Vandana

    2015-01-01

    Objective: To study the utility of a commercially available small, portable ultra-high definition (HD) camera (GoPro Hero 4) for intraoperative recording. Methods: A head mount was used to fix the camera on the operating surgeon's head. Due care was taken to protect the patient's identity. The recorded video was subsequently edited and used as a teaching tool. This retrospective, noncomparative study was conducted at three tertiary eye care centers. The surgeries recorded were ptosis correction, ectropion correction, dacryocystorhinostomy, angular dermoid excision, enucleation, blepharoplasty and lid tear repair surgery (one each). The recorded videos were reviewed, edited, and checked for clarity, resolution, and reproducibility. Results: The recorded videos were found to be high quality, which allowed for zooming and visualization of the surgical anatomy clearly. Minimal distortion is a drawback that can be effectively addressed during postproduction. The camera, owing to its lightweight and small size, can be mounted on the surgeon's head, thus offering a unique surgeon point-of-view. In our experience, the results were of good quality and reproducible. Conclusions: A head-mounted ultra-HD video recording system is a cheap, high quality, and unobtrusive technique to record surgery and can be a useful teaching tool in external facial and ophthalmic plastic surgery. PMID:26655001

  3. Video Guidance Sensor and Time-of-Flight Rangefinder

    NASA Technical Reports Server (NTRS)

    Bryan, Thomas; Howard, Richard; Bell, Joseph L.; Roe, Fred D.; Book, Michael L.

    2007-01-01

    A proposed video guidance sensor (VGS) would be based mostly on the hardware and software of a prior Advanced VGS (AVGS), with some additions to enable it to function as a time-of-flight rangefinder (in contradistinction to a triangulation or image-processing rangefinder). It would typically be used at distances of the order of 2 or 3 kilometers, where a typical target would appear in a video image as a single blob, making it possible to extract the direction to the target (but not the orientation of the target or the distance to the target) from a video image of light reflected from the target. As described in several previous NASA Tech Briefs articles, an AVGS system is an optoelectronic system that provides guidance for automated docking of two vehicles. In the original application, the two vehicles are spacecraft, but the basic principles of design and operation of the system are applicable to aircraft, robots, objects maneuvered by cranes, or other objects that may be required to be aligned and brought together automatically or under remote control. In a prior AVGS system of the type upon which the now-proposed VGS is largely based, the tracked vehicle is equipped with one or more passive targets that reflect light from one or more continuous-wave laser diode(s) on the tracking vehicle, a video camera on the tracking vehicle acquires images of the targets in the reflected laser light, the video images are digitized, and the image data are processed to obtain the direction to the target. The design concept of the proposed VGS does not call for any memory or processor hardware beyond that already present in the prior AVGS, but does call for some additional hardware and some additional software. It also calls for assignment of some additional tasks to two subsystems that are parts of the prior VGS: a field-programmable gate array (FPGA) that generates timing and control signals, and a digital signal processor (DSP) that processes the digitized video images. 
The additional timing and control signals generated by the FPGA would cause the VGS to alternate between an imaging (direction-finding) mode and a time-of-flight (range-finding) mode and would govern operation in the range-finding mode.

  4. OceanVideoLab: A Tool for Exploring Underwater Video

    NASA Astrophysics Data System (ADS)

    Ferrini, V. L.; Morton, J. J.; Wiener, C.

    2016-02-01

    Video imagery acquired with underwater vehicles is an essential tool for characterizing seafloor ecosystems and seafloor geology. It is a fundamental component of ocean exploration that facilitates real-time operations, augments multidisciplinary scientific research, and holds tremendous potential for public outreach and engagement. Acquiring, documenting, managing, preserving and providing access to large volumes of video acquired with underwater vehicles presents a variety of data stewardship challenges to the oceanographic community. As a result, only a fraction of underwater video content collected with research submersibles is documented, discoverable and/or viewable online. With more than 1 billion users, YouTube offers infrastructure that can be leveraged to help address some of the challenges associated with sharing underwater video with a broad global audience. Anyone can post content to YouTube, and some oceanographic organizations, such as the Schmidt Ocean Institute, have begun live-streaming video directly from underwater vehicles. OceanVideoLab (oceanvideolab.org) was developed to help improve access to underwater video through simple annotation, browse functionality, and integration with related environmental data. Any underwater video that is publicly accessible on YouTube can be registered with OceanVideoLab by simply providing a URL. It is strongly recommended that a navigational file also be supplied to enable geo-referencing of observations. Once a video is registered, it can be viewed and annotated using a simple user interface that integrates observations with vehicle navigation data if provided. This interface includes an interactive map and a list of previous annotations that allows users to jump to times of specific observations in the video. 
Future enhancements to OceanVideoLab will include the deployment of a search interface, the development of an application program interface (API) that will drive the search and enable querying of content by other systems/tools, the integration of related environmental data from complementary data systems (e.g. temperature, bathymetry), and the expansion of infrastructure to enable broad crowdsourcing of annotations.

  5. A perioperative echocardiographic reporting and recording system.

    PubMed

    Pybus, David A

    2004-11-01

    Advances in video capture, compression, and streaming technology, coupled with improvements in central processing unit design and the inclusion of a database engine in the Windows operating system, have simplified the task of implementing a digital echocardiographic recording system. I describe an application that uses these technologies and runs on a notebook computer.

  6. Camera Control and Geo-Registration for Video Sensor Networks

    NASA Astrophysics Data System (ADS)

    Davis, James W.

    With the use of large video networks, there is a need to coordinate and interpret the video imagery for decision support systems with the goal of reducing the cognitive and perceptual overload of human operators. We present computer vision strategies that enable efficient control and management of cameras to effectively monitor wide-coverage areas, and examine the framework within an actual multi-camera outdoor urban video surveillance network. First, we construct a robust and precise camera control model for commercial pan-tilt-zoom (PTZ) video cameras. In addition to providing a complete functional control mapping for PTZ repositioning, the model can be used to generate wide-view spherical panoramic viewspaces for the cameras. Using the individual camera control models, we next individually map the spherical panoramic viewspace of each camera to a large aerial orthophotograph of the scene. The result provides a unified geo-referenced map representation to permit automatic (and manual) video control and exploitation of cameras in a coordinated manner. The combined framework provides new capabilities for video sensor networks that are of significance and benefit to the broad surveillance/security community.
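    The camera control model's mapping between (pan, tilt) poses and directions in the spherical panoramic viewspace can be sketched with standard spherical-coordinate conversions (the paper's calibrated PTZ model is more involved; zoom is omitted here for illustration):

```python
import math

def ptz_to_unit_vector(pan_deg, tilt_deg):
    """Map a camera (pan, tilt) pose to a unit direction vector:
    pan rotates about the vertical axis, tilt is elevation from horizontal."""
    p, t = math.radians(pan_deg), math.radians(tilt_deg)
    return (math.cos(t) * math.cos(p), math.cos(t) * math.sin(p), math.sin(t))

def vector_to_ptz(v):
    """Inverse mapping, usable to command the camera toward a
    geo-referenced target direction."""
    x, y, z = v
    return (math.degrees(math.atan2(y, x)), math.degrees(math.asin(z)))

v = ptz_to_unit_vector(45.0, 30.0)
pan, tilt = vector_to_ptz(v)
print(round(pan, 6), round(tilt, 6))  # 45.0 30.0
```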

  7. Pipe Crawler® internal piping characterization system - deactivation and decommissioning focus area. Innovative Technology Summary Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1998-02-01

    Pipe Crawler® is a pipe surveying system for performing radiological characterization and/or free-release surveys of piping systems. The technology employs a family of manually advanced, wheeled platforms, or crawlers, fitted with one or more arrays of thin Geiger-Mueller (GM) detectors operated from an external power supply and data processing unit. Survey readings are taken in a step-wise fashion. A video camera and tape recording system are used for video surveys of pipe interiors prior to and during radiological surveys. Pipe Crawler® has potential advantages over the baseline and other technologies in the areas of cost, durability, waste minimization, and intrusiveness. Advantages include potentially reduced cost, potential reuse of the pipe system, reduced waste volume, and the ability to manage pipes in place with minimal disturbance to facility operations. Advantages over competing technologies include potentially reduced costs and the ability to perform beta-gamma surveys capable of passing regulatory scrutiny for free release of piping systems.

  8. Systems and methods for improved telepresence

    DOEpatents

    Anderson, Matthew O.; Willis, W. David; Kinoshita, Robert A.

    2005-10-25

    The present invention provides a modular, flexible system for deploying multiple video perception technologies. The telepresence system is capable of allowing an operator to control multiple mono and stereo video inputs in a hands-free manner. The raw data generated by the input devices is processed into a common zone structure that corresponds to the commands of the user, and the commands represented by the zone structure are transmitted to the appropriate device. This modularized approach permits input devices to be easily interfaced with various telepresence devices. Additionally, new input devices and telepresence devices are easily added to the system and are frequently interchangeable. The present invention also provides a modular configuration component that allows an operator to define a plurality of views, each of which defines the telepresence devices to be controlled by a particular input device. The modularization of the software components, combined with the generalized zone concept, allows the systems and methods of the present invention to be easily expanded to encompass new devices and new uses for telepresence across a wide range of applications.

  9. A portable wireless power transmission system for video capsule endoscopes.

    PubMed

    Shi, Yu; Yan, Guozheng; Zhu, Bingquan; Liu, Gang

    2015-01-01

    Wireless power transmission (WPT) technology can solve the energy shortage problem of the video capsule endoscope (VCE), which is otherwise powered by button batteries, but fixed transmission platforms have limited its clinical application. This paper presents a portable WPT system for the VCE. Besides portability, power transfer efficiency and stability are the main design criteria for the system, which comprises the transmitting coil structure, portable control box, operating frequency, magnetic core, and winding of the receiving coil. Based on these principles, the relevant parameters are measured, compared, and chosen. Finally, the methods are tested and evaluated through experiments on the platform. In the gastrointestinal tract of a small pig, the VCE is supplied with sufficient energy by the WPT system, with an energy conversion efficiency of 2.8%. The video obtained is clear, with a resolution of 320×240 and a frame rate of 30 frames per second. The experiments verify the feasibility of the design scheme, and directions for further improvement are discussed.

  10. SEDHI: a new generation of detection electronics for earth observation satellites

    NASA Astrophysics Data System (ADS)

    Dantes, Didier; Neveu, Claude; Biffi, Jean-Marc; Devilliers, Christophe; Andre, Serge

    2017-11-01

    Future earth observation optical systems will be increasingly demanding in terms of ground sampling distance, swath width, number of spectral bands, and duty cycle. Existing architectures of focal planes and video processing electronics are hardly compatible with these new requirements: electronic functions are split across several units, and video processing is limited to frequencies around 5 MHz in order to fulfil the radiometric requirements expected of high-performance image quality systems. This frequency limitation requires a large number of video chains operated in parallel to process the huge number of pixels at the focal plane output, and leads to unacceptable mass and power consumption budgets. Furthermore, splitting the detection electronics functions into several units (at least one for the focal plane and proximity electronics, and one for the video processing functions) does not optimise production costs: specific development efforts must be made on critical analogue electronics at each equipment level, and operations of assembly, integration and test are duplicated at the equipment and subsystem levels. Alcatel Space Industries has proposed to CNES a new concept of highly integrated detection electronics (SEDHI) and is developing for CNES a breadboard that will allow its potential to be confirmed. This paper presents the trade-off study performed before selection of this new concept and summarises the main advantages and drawbacks of each candidate architecture. The electrical, mechanical and thermal aspects of the SEDHI concept are described, including the basic technologies: an ASIC for phase shifting of detector clocks, an ASIC for video processing, hybrids, and a microchip module. The adaptability to a wide range of missions and optical instruments is also discussed.

  11. The Surgeons' Leadership Inventory (SLI): a taxonomy and rating system for surgeons' intraoperative leadership skills.

    PubMed

    Henrickson Parker, Sarah; Flin, Rhona; McKinley, Aileen; Yule, Steven

    2013-06-01

    Surgeons must demonstrate leadership to optimize performance and maximize patient safety in the operating room, but no behavior rating tool is available to measure leadership. Ten focus groups with members of the operating room team discussed surgeons' intraoperative leadership. Surgeons' leadership behaviors were extracted and used to finalize the Surgeons' Leadership Inventory (SLI), which was checked by surgeons (n = 6) for accuracy and face validity. The SLI was used to code video recordings (n = 5) of operations to test reliability. Eight elements of surgeons' leadership were included in the SLI: (1) maintaining standards, (2) managing resources, (3) making decisions, (4) directing, (5) training, (6) supporting others, (7) communicating, and (8) coping with pressure. Interrater reliability to code videos of surgeons' behaviors while operating using this tool was acceptable (κ = .70). The SLI is empirically grounded in focus group data and both the leadership and surgical literature. The interrater reliability of the system was acceptable. The inventory could be used for rating surgeons' leadership in the operating room for research or as a basis for postoperative feedback on performance. Copyright © 2013 Elsevier Inc. All rights reserved.
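
    The interrater reliability reported above (κ = .70) is Cohen's kappa, which corrects raw agreement for the agreement expected by chance. A minimal sketch of the computation (not the SLI authors' code; the rating data here is illustrative):

    ```python
    from collections import Counter

    def cohens_kappa(rater_a, rater_b):
        """Chance-corrected agreement: kappa = (p_o - p_e) / (1 - p_e)."""
        assert len(rater_a) == len(rater_b)
        n = len(rater_a)
        # Observed agreement: fraction of items both raters labeled identically.
        p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
        # Expected agreement if the two raters labeled independently.
        counts_a, counts_b = Counter(rater_a), Counter(rater_b)
        p_e = sum(counts_a[k] * counts_b[k] for k in counts_a) / (n * n)
        return (p_o - p_e) / (1 - p_e)

    # Hypothetical behavior codes from two observers of the same video.
    kappa = cohens_kappa([1, 1, 1, 0], [1, 1, 0, 0])  # -> 0.5
    ```

    A kappa of .70, as reported for the SLI, is conventionally interpreted as substantial agreement.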

  12. Modeling operators' emergency response time for chemical processing operations.

    PubMed

    Murray, Susan L; Harputlu, Emrah; Mentzer, Ray A; Mannan, M Sam

    2014-01-01

    Operators have a crucial role during emergencies at a variety of facilities such as chemical processing plants. When an abnormality occurs in the production process, the operator often has limited time to either take corrective actions or evacuate before the situation becomes deadly. It is crucial that system designers and safety professionals can estimate the time required for a response before procedures and facilities are designed and operations are initiated. There are existing industrial engineering techniques to establish time standards for tasks performed at a normal working pace. However, it is reasonable to expect that the time required to take action in emergency situations will differ from that at a normal production pace; operators will likely act faster than normal. It would be useful for system designers to be able to establish a time range for operators' response times in emergency situations. This article develops a modeling approach to estimate the time standard range for operators taking corrective actions or following evacuation procedures in emergency situations, which will aid engineers and managers in establishing time requirements for operators in emergencies. The methodology combines a well-established industrial engineering technique for determining time requirements (a predetermined time standard system) with adjustment coefficients for emergency situations developed by the authors. Numerous videos of workers performing well-established tasks at a maximum pace were studied; for example, one of the tasks analyzed was pit crew workers changing tires as quickly as they could during a race. The operations in these videos were decomposed into basic, fundamental motions (such as walking, reaching for a tool, and bending over) by studying the videos frame by frame. A comparison analysis was then performed between the emergency-pace and normal-pace operations to determine performance coefficients. These coefficients represent the decrease in time required for various basic motions in emergency situations and were used to model an emergency response. This approach will make hazardous operations requiring operator response, alarm management, and evacuation processes easier to design and predict. An application of this methodology is included in the article. The time required for an emergency response was roughly one-third faster than the normal response time.
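
    The approach described above can be sketched as scaling each normal-pace motion time by an emergency performance coefficient and summing over the motion sequence. The element times and coefficients below are illustrative placeholders, not the authors' measured values:

    ```python
    # Normal-pace element times (seconds) from a predetermined time standard
    # system, and hypothetical emergency coefficients (< 1.0 means faster
    # than normal pace). Both tables are assumptions for illustration.
    NORMAL_TIMES = {"walk_to_valve": 6.0, "reach_tool": 1.2, "bend_over": 1.5}
    EMERGENCY_COEFF = {"walk_to_valve": 0.7, "reach_tool": 0.6, "bend_over": 0.65}

    def emergency_response_time(motions):
        """Total time for a motion sequence performed at emergency pace."""
        return sum(NORMAL_TIMES[m] * EMERGENCY_COEFF[m] for m in motions)

    sequence = ["walk_to_valve", "reach_tool", "bend_over"]
    normal = sum(NORMAL_TIMES[m] for m in sequence)        # 8.7 s
    emergency = emergency_response_time(sequence)          # 5.895 s
    ```

    With these placeholder coefficients the emergency pace comes out about one-third faster than the normal pace, matching the order of magnitude the article reports.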

  13. NASDA's Advanced On-Line System (ADOLIS)

    NASA Technical Reports Server (NTRS)

    Yamamoto, Yoshikatsu; Hara, Hideo; Yamada, Shigeo; Hirata, Nobuyuki; Komatsu, Shigenori; Nishihata, Seiji; Oniyama, Akio

    1993-01-01

    Spacecraft operations, including ground system operations, are generally carried out by work groups of various sizes, whose members (operators, engineers, managers, users, and so on) are in many cases geographically distributed. In face-to-face work environments it is easy for them to understand each other. In distributed work environments that depend on communication media, however, if only audio is used, they become estranged from each other and lose interest in and continuity of the work, which is an obstacle to smooth spacecraft operation. NASDA has developed an experimental model of a new real-time operation control system called ADOLIS (ADvanced On-Line System), adapted to such a distributed environment, that uses a multi-media system handling character, figure, image, handwriting, video and audio information and accommodates a wide range of operation systems, including spacecraft and ground systems. This paper describes the results of the development of the experimental model.

  14. Utilization of a postoperative adenotonsillectomy teaching video: A pilot study.

    PubMed

    Khan, Sarah; Tumin, Dmitry; King, Adele; Rice, Julie; Jatana, Kris R; Tobias, Joseph D; Raman, Vidya T

    2017-11-01

    Pediatric tonsillectomies are increasingly being performed as outpatient procedures, thereby increasing the parental role in post-operative pain management. However, it is unclear whether parents receive adequate teaching regarding pain management. We introduced a video teaching tool and compared its efficacy alone and in combination with the standard verbal instruction. A prospective study randomized parents or caregivers of children undergoing tonsillectomy ± adenoidectomy into three groups: 1) standard verbal post-operative instructions; 2) the video teaching tool along with standard verbal instructions; or 3) the video teaching tool only. Parents completed pre- and post-instruction assessments of their knowledge of post-operative pain management, with responses scored from 0 to 8. Telephone assessments were conducted within 48 post-operative hours, with a subjective rating of the helpfulness of the video teaching tool. The study cohort included 99 patients and their families. The median pre-instruction score was 5 of 8 points (interquartile range [IQR]: 4, 6) and remained 5 following instruction (IQR: 4, 6; p = 0.702 for difference from baseline). Baseline scores did not vary across the groups (p = 0.156), and there was no increase in the knowledge score from pre- to post-test in any of the three groups. Groups 2 and 3 rated the helpfulness of the video teaching tool with a median score of 4 of 5 (IQR: 4, 5). A baseline deficit exists in parental understanding of post-operative pain management that did not statistically improve regardless of the form of post-operative instruction used (verbal vs. video-based). However, the high helpfulness scores in both video groups support the use of video instruction as an alternative or complement to verbal instruction. Further identification of knowledge deficits is required to optimize post-operative educational materials. Copyright © 2017 Elsevier B.V. All rights reserved.

  15. Video content analysis of surgical procedures.

    PubMed

    Loukas, Constantinos

    2018-02-01

    In addition to its therapeutic benefits, minimally invasive surgery offers the potential for video recording of the operation. The videos may be archived and used later for purposes such as cognitive training, skills assessment, and workflow analysis. Methods from the major field of video content analysis and representation are increasingly applied in the surgical domain. In this paper, we review recent developments and analyze future directions in the field of content-based video analysis of surgical operations. The reviewed articles were obtained from PubMed and Google Scholar searches on combinations of the following keywords: 'surgery', 'video', 'phase', 'task', 'skills', 'event', 'shot', 'analysis', 'retrieval', 'detection', 'classification', and 'recognition'. The collected articles were categorized and reviewed based on the technical goal sought, type of surgery performed, and structure of the operation. A total of 81 articles were included. The publication activity is constantly increasing; more than 50% of these articles were published in the last 3 years. Significant research has been performed on video task detection and retrieval in eye surgery. In endoscopic surgery, the research activity is more diverse: gesture/task classification, skills assessment, tool type recognition, and shot/event detection and retrieval. Recent works employ deep neural networks for phase and tool recognition as well as shot detection. Content-based video analysis of surgical operations is a rapidly expanding field. Several future prospects for research exist including, inter alia, shot boundary detection, keyframe extraction, video summarization, pattern discovery, and video annotation. The development of publicly available benchmark datasets to evaluate and compare task-specific algorithms is essential.

  16. System of launchable mesoscale robots for distributed sensing

    NASA Astrophysics Data System (ADS)

    Yesin, Kemal B.; Nelson, Bradley J.; Papanikolopoulos, Nikolaos P.; Voyles, Richard M.; Krantz, Donald G.

    1999-08-01

    A system of launchable miniature mobile robots with various sensors as payload is used for distributed sensing. The robots are projected to areas of interest either by a robot launcher or by a human operator using standard equipment. A wireless communication network is used to exchange information with the robots. Payloads such as a MEMS sensor for vibration detection, a microphone and an active video module are used mainly to detect humans. The video camera provides live images through a wireless video transmitter and a pan-tilt mechanism expands the effective field of view. There are strict restrictions on total volume and power consumption of the payloads due to the small size of the robot. Emerging technologies are used to address these restrictions. In this paper, we describe the use of microrobotic technologies to develop active vision modules for the mesoscale robot. A single chip CMOS video sensor is used along with a miniature lens that is approximately the size of a sugar cube. The device consumes 100 mW; about 5 times less than the power consumption of a comparable CCD camera. Miniature gearmotors 3 mm in diameter are used to drive the pan-tilt mechanism. A miniature video transmitter is used to transmit analog video signals from the camera.

  17. Comparison of Inter-Observer Variability and Diagnostic Performance of the Fifth Edition of BI-RADS for Breast Ultrasound of Static versus Video Images.

    PubMed

    Youk, Ji Hyun; Jung, Inkyung; Yoon, Jung Hyun; Kim, Sung Hun; Kim, You Me; Lee, Eun Hye; Jeong, Sun Hye; Kim, Min Jung

    2016-09-01

    Our aim was to compare the inter-observer variability and diagnostic performance of the Breast Imaging Reporting and Data System (BI-RADS) lexicon for breast ultrasound between static and video images. Ninety-nine breast masses visible on ultrasound examination from 95 women 19-81 y of age at five institutions were enrolled in this study. The masses were scheduled to undergo biopsy or surgery, or had been stable for at least 2 y of ultrasound follow-up after benign biopsy results or typically benign findings. For each mass, representative long- and short-axis static ultrasound images were acquired; real-time long- and short-axis B-mode video images through the mass area were separately saved as cine clips. Each image was reviewed independently by five radiologists who were asked to classify ultrasound features according to the fifth edition of the BI-RADS lexicon. Inter-observer variability was assessed using kappa (κ) statistics. Diagnostic performance on static and video images was compared using the area under the receiver operating characteristic curve. No significant difference was found in κ values between static and video images for any descriptor, although κ values of video images were higher than those of static images for shape, orientation, margin and calcifications. On receiver operating characteristic curve analysis, the video images (0.83, range: 0.77-0.87) had a higher area under the curve than the static images (0.80, range: 0.75-0.83; p = 0.08). Inter-observer variability and diagnostic performance of video images were similar to those of static images on breast ultrasonography according to the new edition of BI-RADS. Copyright © 2016 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.
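
    The area under the ROC curve used above can be computed directly as the probability that a randomly chosen malignant case receives a higher suspicion score than a randomly chosen benign case (the Mann-Whitney interpretation). A minimal sketch with made-up reader scores, not the study's data:

    ```python
    def auc(scores_pos, scores_neg):
        """AUC as the fraction of (positive, negative) pairs in which the
        positive case is scored higher; ties count one half."""
        wins = sum((p > n) + 0.5 * (p == n)
                   for p in scores_pos for n in scores_neg)
        return wins / (len(scores_pos) * len(scores_neg))

    # Hypothetical BI-RADS-derived suspicion scores for four masses.
    malignant_scores = [0.8, 0.6]
    benign_scores = [0.6, 0.4]
    print(auc(malignant_scores, benign_scores))  # 0.875
    ```

    An AUC of 0.83 for video versus 0.80 for static images therefore means video reading ranked malignant above benign masses slightly more often, though the difference did not reach significance (p = 0.08).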

  18. Initial utilization of the CVIRB video production facility

    NASA Technical Reports Server (NTRS)

    Parrish, Russell V.; Busquets, Anthony M.; Hogge, Thomas W.

    1987-01-01

    Video disk technology is one of the central themes of a technology demonstrator workstation being assembled as a man/machine interface for the Space Station Data Management Test Bed at Johnson Space Center. Langley Research Center personnel involved in the conception and implementation of this workstation have assembled a video production facility to allow production of video disk material for this purpose. This paper documents the initial familiarization efforts in the field of video production for those personnel and that facility. Although the entire video disk production cycle was not operational for this initial effort, producing a simulated disk on video tape acquainted the personnel with the processes involved and with the operation of the hardware. Invaluable experience in storyboarding, script writing, audio and video recording, and audio and video editing was gained in the production process.

  19. Modernization of B-2 Data, Video, and Control Systems Infrastructure

    NASA Technical Reports Server (NTRS)

    Cmar, Mark D.; Maloney, Christian T.; Butala, Vishal D.

    2012-01-01

    The National Aeronautics and Space Administration (NASA) Glenn Research Center (GRC) Plum Brook Station (PBS) Spacecraft Propulsion Research Facility, commonly referred to as B-2, is NASA's third largest thermal-vacuum facility with propellant systems capability. B-2 has completed a modernization effort of its facility legacy data, video and control systems infrastructure to accommodate modern integrated testing and Information Technology (IT) Security requirements. Integrated systems tests have been conducted to demonstrate the new data, video and control systems functionality and capability. Discrete analog signal conditioners have been replaced by new programmable, signal processing hardware that is integrated with the data system. This integration supports automated calibration and verification of the analog subsystem. Modern measurement systems analysis (MSA) tools are being developed to help verify system health and measurement integrity. Legacy hard-wired digital data systems have been replaced by distributed Fibre Channel (FC) network-connected digitizers whose sampling rates have increased to 256,000 samples per second. Several analog video cameras have been replaced by digital image and storage systems. Hard-wired analog control systems have been replaced by Programmable Logic Controllers (PLC), fiber optic network (FON) infrastructure and human machine interface (HMI) operator screens. New IT Security procedures and schemes have been employed to control data access and process control flows. Due to the nature of testing possible at B-2, flexibility and configurability of systems have been central to the architecture during modernization.

  1. Design and Implementation of a Video-Zoom Driven Digital Audio-Zoom System for Portable Digital Imaging Devices

    NASA Astrophysics Data System (ADS)

    Park, Nam In; Kim, Seon Man; Kim, Hong Kook; Kim, Ji Woon; Kim, Myeong Bo; Yun, Su Won

    In this paper, we propose a video-zoom driven audio-zoom algorithm that provides an audio zooming effect matched to the degree of video zoom. The proposed algorithm is based on a super-directive beamformer operating with a 4-channel microphone system, in conjunction with a soft masking process that considers the phase differences between microphones. The audio-zoom processed signal is then obtained by multiplying the masked signal by an audio gain derived from the video-zoom level. Finally, a real-time audio-zoom system is implemented on an ARM Cortex-A8 with a clock speed of 600 MHz, after several levels of optimization are performed: algorithmic, C-code, and memory optimizations. To evaluate the complexity of the proposed real-time audio-zoom system, 21.3 seconds of test data sampled at 48 kHz are processed. The experiments show that the processing time for the proposed audio-zoom system occupies 14.6% or less of the ARM clock cycles. Experiments performed in a semi-anechoic chamber also show that the signal from the front direction can be amplified by approximately 10 dB relative to the other directions.
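
    The final gain stage described above can be sketched as a mapping from video-zoom level to a linear gain factor that multiplies the masked signal. The interpolation below (0 dB at 1x zoom, rising to +10 dB at maximum zoom, matching the amplification the paper reports) is an assumed curve for illustration, not the authors' exact mapping:

    ```python
    MAX_GAIN_DB = 10.0  # front-direction amplification reported in the paper

    def zoom_gain(zoom_level, max_zoom):
        """Linear gain factor, interpolated in dB from 1x (0 dB) to max zoom."""
        frac = (zoom_level - 1.0) / (max_zoom - 1.0)
        return 10 ** (MAX_GAIN_DB * frac / 20.0)

    def apply_audio_zoom(masked_samples, zoom_level, max_zoom=4.0):
        """Scale the beamformed/masked samples by the video-zoom-derived gain."""
        g = zoom_gain(zoom_level, max_zoom)
        return [g * s for s in masked_samples]

    # At 1x zoom the gain is unity; at full zoom it is +10 dB (~3.16x).
    unity = apply_audio_zoom([1.0, -0.5], 1.0)   # [1.0, -0.5]
    ```

    In the paper's implementation this per-sample multiply is the cheap final step; the beamforming and soft masking dominate the 14.6% CPU load.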

  2. Vision systems for manned and robotic ground vehicles

    NASA Astrophysics Data System (ADS)

    Sanders-Reed, John N.; Koon, Phillip L.

    2010-04-01

    A Distributed Aperture Vision System for ground vehicles is described. An overview of the hardware including sensor pod, processor, video compression, and displays is provided. This includes a discussion of the choice between an integrated sensor pod and individually mounted sensors, open architecture design, and latency issues as well as flat panel versus head mounted displays. This technology is applied to various ground vehicle scenarios, including closed-hatch operations (operator in the vehicle), remote operator tele-operation, and supervised autonomy for multi-vehicle unmanned convoys. In addition, remote vision for automatic perimeter surveillance using autonomous vehicles and automatic detection algorithms is demonstrated.

  3. 47 CFR 15.250 - Operation of wideband systems within the band 5925-7250 MHz.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 50 MHz. The video bandwidth of the measurement instrument shall not be less than RBW. If RBW is...) Emissions from digital circuitry used to enable the operation of the transmitter may comply with the limits... from digital circuitry contained within the transmitter and the emissions are not intended to be...

  4. Observational Learning in Mice Can Be Prevented by Medial Prefrontal Cortex Stimulation and Enhanced by Nucleus Accumbens Stimulation

    ERIC Educational Resources Information Center

    Jurado-Parras, M. Teresa; Gruart, Agnes; Delgado-Garcia, Jose M.

    2012-01-01

    The neural structures involved in ongoing appetitive and/or observational learning behaviors remain largely unknown. Operant conditioning and observational learning were evoked and recorded in a modified Skinner box provided with an on-line video recording system. Mice improved their acquisition of a simple operant conditioning task by…

  5. 75 FR 43825 - Exemption to Prohibition on Circumvention of Copyright Protection Systems for Access Control...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-07-27

    ... works such as video games and slide presentations). B. Computer programs that enable wireless telephone... enabling interoperability of such applications, when they have been lawfully obtained, with computer... new printer driver to a computer constitutes a `modification' of the operating system already...

  6. 77 FR 53184 - 36(b)(1) Arms Sales Notification

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-08-31

    ..., multi-field of view EO/IR system. The system provides color daylight TV and night time IR video with a... along with ground moving target indicator (GMTI) modes. It will also have two onboard workstations that...-locate, collect, and display the relevant information to two operators for analysis and recording...

  7. Telecommunications in Higher Education: Creating New Information Sources.

    ERIC Educational Resources Information Center

    Brown, Fred D.

    1986-01-01

    Discusses the telecommunications systems in operation at Buena Vista College in Iowa. Describes the systems' uses in linking all offices and classrooms on the campus, downlinking satellite communications through a dish, transmitting audio and video information to any set of defined studio or classroom space, and teleconferencing. (TW)

  8. PNNL’s Building Operations Control Center

    ScienceCinema

    Belew, Shan

    2018-01-16

    PNNL's Building Operations Control Center (BOCC) video provides an overview of the center, its capabilities, and its objectives. The BOCC was relocated to PNNL's new 3820 Systems Engineering Building in 2015. Although a key focus of the BOCC is on monitoring and improving the operations of PNNL buildings, the center's state-of-the-art computational, software and visualization resources also have provided a platform for PNNL buildings-related research projects.

  9. Real-time visual communication to aid disaster recovery in a multi-segment hybrid wireless networking system

    NASA Astrophysics Data System (ADS)

    Al Hadhrami, Tawfik; Wang, Qi; Grecos, Christos

    2012-06-01

    When natural disasters or other large-scale incidents occur, obtaining accurate and timely information on the developing situation is vital to effective disaster recovery operations. High-quality video streams and high-resolution images, if available in real time, would provide an invaluable source of current situation reports to the incident management team. Meanwhile, a disaster often causes significant damage to the communications infrastructure. Therefore, another essential requirement for disaster management is the ability to rapidly deploy a flexible incident area communication network. Such a network would facilitate the transmission of real-time video streams and still images from the disrupted area to remote command and control locations. In this paper, a comprehensive end-to-end video/image transmission system between an incident area and a remote control centre is proposed and implemented, and its performance is experimentally investigated. In this study a hybrid multi-segment communication network is designed that seamlessly integrates terrestrial wireless mesh networks (WMNs), distributed wireless visual sensor networks (WSNs), an airborne platform with video camera balloons, and a Digital Video Broadcasting-Satellite (DVB-S) system. By carefully integrating all of these rapidly deployable, interworking and collaborative networking technologies, we can fully exploit the joint benefits provided by WMNs, WSNs, balloon camera networks and DVB-S for real-time video streaming and image delivery in emergency situations among the disaster-hit area, the remote control centre and the rescue teams in the field. The whole proposed system is implemented in a well-established network simulator. Through extensive simulations, the real-time visual communication performance of this integrated system has been numerically evaluated, providing a more in-depth understanding of how to support high-quality visual communications in such a demanding context.

  10. 78 FR 40421 - Inquiry Regarding Video Description in Video Programming Distributed on Television and on the...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-07-05

    ... the status, benefits, and costs of video description on television and Internet-provided video... operational issues, costs, and benefits of providing video descriptions for video programming that is... document, the Federal Communications Commission (Commission) solicits public comment on issues related to...

  11. Hybrid vision activities at NASA Johnson Space Center

    NASA Technical Reports Server (NTRS)

    Juday, Richard D.

    1990-01-01

    NASA's Johnson Space Center in Houston, Texas, is active in several aspects of hybrid image processing. (The term hybrid image processing refers to a system that combines digital and photonic processing). The major thrusts are autonomous space operations such as planetary landing, servicing, and rendezvous and docking. By processing images in non-Cartesian geometries to achieve shift invariance to canonical distortions, researchers use certain aspects of the human visual system for machine vision. That technology flow is bidirectional; researchers are investigating the possible utility of video-rate coordinate transformations for human low-vision patients. Man-in-the-loop teleoperations are also supported by the use of video-rate image-coordinate transformations, as researchers plan to use bandwidth compression tailored to the varying spatial acuity of the human operator. Technological elements being developed in the program include upgraded spatial light modulators, real-time coordinate transformations in video imagery, synthetic filters that robustly allow estimation of object pose parameters, convolutionally blurred filters that have continuously selectable invariance to such image changes as magnification and rotation, and optimization of optical correlation done with spatial light modulators that have limited range and couple both phase and amplitude in their response.
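    The non-Cartesian geometry idea mentioned above, remapping an image so that canonical distortions (magnification, rotation) become simple shifts, can be sketched with a log-polar resampling. This is an illustrative toy, not JSC's implementation: the grid sizes, the synthetic ring image, and the nearest-neighbour sampling are all assumptions made here.

```python
import math

def log_polar_sample(img, cx, cy, n_r=4, n_theta=8):
    """Resample an image onto a log-polar grid centred at (cx, cy).

    In log-polar coordinates, scaling and rotation of the input become
    translations along the radius and angle axes, which is one way to
    obtain shift invariance to those canonical distortions.
    Nearest-neighbour sampling only; grid sizes are illustrative.
    """
    h, w = len(img), len(img[0])
    r_max = math.log(min(h, w) / 2)
    out = []
    for i in range(n_r):
        row = []
        r = math.exp(r_max * (i + 1) / n_r)  # exponentially spaced radii
        for j in range(n_theta):
            t = 2 * math.pi * j / n_theta
            x = int(round(cx + r * math.cos(t)))
            y = int(round(cy + r * math.sin(t)))
            row.append(img[y][x] if 0 <= x < w and 0 <= y < h else 0)
        out.append(row)
    return out

# Tiny synthetic image: a bright ring of radius ~3 around the centre pixel.
img = [[1 if abs(math.hypot(x - 4, y - 4) - 3) < 0.6 else 0
        for x in range(9)] for y in range(9)]
lp = log_polar_sample(img, 4, 4)
```

    In the log-polar output, the ring collapses onto a single radius row regardless of its orientation, which is the property such systems exploit for distortion-invariant matching.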

  12. VICAR image processing system guide to system use

    NASA Technical Reports Server (NTRS)

    Seidman, J. B.

    1977-01-01

    The functional characteristics and operating requirements of the VICAR (Video Image Communication and Retrieval) system are described. An introduction to the system describes the functional characteristics and the basic theory of operation. A brief description of the data flow as well as tape and disk formats is also presented. A formal presentation of the control statement formats is given along with a guide to usage of the system. The guide provides a step-by-step reference to the creation of a VICAR control card deck. Simple examples are employed to illustrate the various options and the system response thereto.

  13. Defense Small Business Innovation Research Program (SBIR). Volume 2. Navy Projects, Abstracts of Phase 1 Awards from FY 1989 SBIR Solicitation

    DTIC Science & Technology

    1990-04-01

    DECISION AIDS HAVE CREATED A VAST NEW POTENTIAL FOR SUPPORT OF STRATEGIC AND TACTICAL OPERATIONS. THE NON-MONOTONIC PROBABILIST (NMP), DEVELOPED BY...QUALITY OF THE NEW DESIGN WILL BE EVALUATED BY CREATING A VIDEO TAPE USING A VIDEO ANIMATION SYSTEM, AND A SOFTWARE SIMULATION OF THE NEW DESIGN. THE...FAULT TOLERANT, SECURE SHIPBOARD COMMUNICATIONS. THE LAN WILL UTILIZE PHOENIX DIGITAL’S FAULT TOLERANT, "SELF-HEALING" SMALL BUSINESS INNOVATION RESEARCH

  14. 81. THREE ADDITIONAL BLACK AND WHITE VIDEO MONITORS LOCATED IMMEDIATELY ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    81. THREE ADDITIONAL BLACK AND WHITE VIDEO MONITORS LOCATED IMMEDIATELY WEST OF THOSE IN CA-133-1-A-80. COMPLEX SAFETY WARNING LIGHTS FOR SLC-3E (PAD 2) AND BLDG. 763 (LOB) LOCATED ABOVE MONITOR 3; GREEN LIGHTS ON BOTTOM OF EACH STACK ILLUMINATED. LEFT TO RIGHT BELOW MONITORS: ACCIDENT REPORTING EMERGENCY NOTIFICATION SYSTEM TELEPHONE, ATLAS H FUEL COUNTER, AND DIGITAL COUNTDOWN CLOCK. - Vandenberg Air Force Base, Space Launch Complex 3, Launch Operations Building, Napa & Alden Roads, Lompoc, Santa Barbara County, CA

  15. Distributed video data fusion and mining

    NASA Astrophysics Data System (ADS)

    Chang, Edward Y.; Wang, Yuan-Fang; Rodoplu, Volkan

    2004-09-01

    This paper presents an event sensing paradigm for intelligent event-analysis in a wireless, ad hoc, multi-camera, video surveillance system. In particular, we present statistical methods that we have developed to support three aspects of event sensing: 1) energy-efficient, resource-conserving, and robust sensor data fusion and analysis, 2) intelligent event modeling and recognition, and 3) rapid deployment, dynamic configuration, and continuous operation of the camera networks. We outline our preliminary results, and discuss future directions that research might take.

  16. Microprocessor-Controlled Laser Balancing System

    NASA Technical Reports Server (NTRS)

    Demuth, R. S.

    1985-01-01

    Material removed by laser action as part tested for balance. Directed by microprocessor, laser fires appropriate amount of pulses in correct locations to remove necessary amount of material. Operator and microprocessor software interact through video screen and keypad; no programming skills or unprompted system-control decisions required. System provides complete and accurate balancing in single load-and-spinup cycle.

  17. A novel rotational matrix and translation vector algorithm: geometric accuracy for augmented reality in oral and maxillofacial surgeries.

    PubMed

    Murugesan, Yahini Prabha; Alsadoon, Abeer; Manoranjan, Paul; Prasad, P W C

    2018-06-01

    Augmented reality-based surgeries have not been successfully implemented in oral and maxillofacial areas due to limitations in geometric accuracy and image registration. This paper aims to improve the accuracy and depth perception of the augmented video. The proposed system consists of a rotational matrix and translation vector algorithm to reduce the geometric error and improve the depth perception by including 2 stereo cameras and a translucent mirror in the operating room. The results on the mandible/maxilla area show that the new algorithm improves the video accuracy by 0.30-0.40 mm (in terms of overlay error) and the processing rate to 10-13 frames/s compared to 7-10 frames/s in existing systems. The depth perception increased by 90-100 mm. The proposed system concentrates on reducing the geometric error. Thus, this study provides an acceptable range of accuracy with a shorter operating time, which provides surgeons with a smooth surgical flow. Copyright © 2018 John Wiley & Sons, Ltd.
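    The core operation the abstract names, applying a rotational matrix and translation vector to register virtual content against patient anatomy, can be illustrated with a minimal sketch. The paper's actual algorithm and calibration data are not reproduced here; the 2D transform, angle, and landmark coordinates below are invented, and overlay error is measured as mean landmark distance (one common convention).

```python
import math

def rigid_transform(points, angle_deg, t):
    """Apply a rigid-body transform p' = R p + t in 2D (rotation about origin).

    R is the rotational matrix, t the translation vector; 3D works the same
    way with a 3x3 R. Illustrative only.
    """
    a = math.radians(angle_deg)
    c, s = math.cos(a), math.sin(a)
    return [(c * x - s * y + t[0], s * x + c * y + t[1]) for x, y in points]

def overlay_error(projected, reference):
    """Mean Euclidean distance between projected and reference landmarks."""
    d = [math.dist(p, q) for p, q in zip(projected, reference)]
    return sum(d) / len(d)

# Hypothetical landmarks on the mandible, in millimetres.
landmarks = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
moved = rigid_transform(landmarks, 90.0, (5.0, -2.0))

# Undo the pose with the inverse transform p = R^-1 p' - R^-1 t;
# here R^-1 is a -90 degree rotation and -R^-1 t = (2, 5).
back = rigid_transform(moved, -90.0, (2.0, 5.0))
print(round(overlay_error(back, landmarks), 6))
```

    A registration pipeline would estimate R and t from tracked fiducials; the residual `overlay_error` is then the quantity reported (in mm) as overlay accuracy.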

  18. Assessing neurosurgical non-technical skills: an exploratory study of a new behavioural marker system.

    PubMed

    Michinov, Estelle; Jamet, Eric; Dodeler, Virginie; Haegelen, Claire; Jannin, Pierre

    2014-10-01

    The management of non-technical skills is a major factor affecting teamwork quality and patient safety. This article presents a behavioural marker system for assessing neurosurgical non-technical skills (BMS-NNTS). We tested the BMS during deep brain stimulation surgery. We developed the BMS in three stages. First, we drew up a provisional assessment tool based on the literature and observation tools developed for other surgical specialties. We then analysed videos made in an operating room (OR) during deep brain stimulation operations in order to ensure there were no significant omissions from the skills list. Finally, we used five videos of operations to identify the behavioural markers of non-technical skills in verbal communications. Analyses of more than six hours of observations revealed 3515 behaviours from which we determined the neurosurgeon's non-technical skills behaviour pattern. The neurosurgeon frequently engaged in explicit coordination, situation awareness and leadership behaviours. In addition, the neurosurgeon's behaviours differed according to the stage of the operation and the OR staff members with whom she was communicating. Our behavioural marker system provides a structured approach to assessing non-technical skills in the field of neurosurgery. It can also be transferred to other surgical specialties and used in surgeon training curricula. © 2014 John Wiley & Sons, Ltd.

  19. Microsurgical Clipping of an Anterior Communicating Artery Aneurysm Using a Novel Robotic Visualization Tool in Lieu of the Binocular Operating Microscope: Operative Video.

    PubMed

    Klinger, Daniel R; Reinard, Kevin A; Ajayi, Olaide O; Delashaw, Johnny B

    2018-01-01

    The binocular operating microscope has been the visualization instrument of choice for microsurgical clipping of intracranial aneurysms for many decades. To discuss recent technological advances that have provided novel visualization tools, which may prove to be superior to the binocular operating microscope in many regards. We present an operative video and our operative experience with the BrightMatter™ Servo System (Synaptive Medical, Toronto, Ontario, Canada) during the microsurgical clipping of an anterior communicating artery aneurysm. To the best of our knowledge, the use of this device for the microsurgical clipping of an intracranial aneurysm has never been described in the literature. The BrightMatter™ Servo System (Synaptive Medical) is a surgical exoscope which avoids many of the ergonomic constraints of the binocular operating microscope, but is associated with a steep learning curve. The BrightMatter™ Servo System (Synaptive Medical) is a maneuverable surgical exoscope that is positioned with a directional aiming device and a surgeon-controlled foot pedal. While utilizing this device comes with a steep learning curve typical of any new technology, the BrightMatter™ Servo System (Synaptive Medical) has several advantages over the conventional surgical microscope, which include a relatively unobstructed surgical field, provision of high-definition images, and visualization of difficult angles/trajectories. This device can easily be utilized as a visualization tool for a variety of cranial and spinal procedures in lieu of the binocular operating microscope. We anticipate that this technology will soon become an integral part of the neurosurgeon's armamentarium. Copyright © 2017 by the Congress of Neurological Surgeons

  20. Design of a system based on DSP and FPGA for video recording and replaying

    NASA Astrophysics Data System (ADS)

    Kang, Yan; Wang, Heng

    2013-08-01

    This paper brings forward a video recording and replaying system with the architecture of Digital Signal Processor (DSP) and Field Programmable Gate Array (FPGA). The system achieved encoding, recording, decoding and replaying of Video Graphics Array (VGA) signals which are displayed on a monitor during airplanes' and ships' navigation. In the architecture, the DSP is a main processor which is used for a large amount of complicated calculation during digital signal processing. The FPGA is a coprocessor for preprocessing video signals and implementing logic control in the system. In the hardware design of the system, the Peripheral Device Transfer (PDT) function of the External Memory Interface (EMIF) is utilized to implement a seamless interface among the DSP, the synchronous dynamic RAM (SDRAM) and the First-In-First-Out (FIFO) buffer in the system. This transfer mode avoids a data-transfer bottleneck and simplifies the circuit between the DSP and its peripheral chips. The DSP's EMIF and two level-matching chips are used to implement the Advanced Technology Attachment (ATA) protocol on the physical layer of the interface to an Integrated Drive Electronics (IDE) Hard Disk (HD), which provides fast data access without relying on a host computer. Main functions of the logic on the FPGA are described and screenshots of the behavioral simulation are provided in this paper. In the design of the program on the DSP, Enhanced Direct Memory Access (EDMA) channels are used to transfer data between the FIFO and the SDRAM, so that data movement proceeds without CPU intervention and the CPU remains free for computation. JPEG2000 is implemented to obtain high fidelity in video recording and replaying. Techniques for achieving high code performance are briefly presented. The data-processing capability of the system is satisfactory, and the smoothness of the replayed video is acceptable. By virtue of its design flexibility and reliable operation, the system based on DSP and FPGA for video recording and replaying holds considerable promise for after-the-event analysis, simulated training exercises and similar applications.

  1. Augmented-reality visualization of brain structures with stereo and kinetic depth cues: system description and initial evaluation with head phantom

    NASA Astrophysics Data System (ADS)

    Maurer, Calvin R., Jr.; Sauer, Frank; Hu, Bo; Bascle, Benedicte; Geiger, Bernhard; Wenzel, Fabian; Recchi, Filippo; Rohlfing, Torsten; Brown, Christopher R.; Bakos, Robert J.; Maciunas, Robert J.; Bani-Hashemi, Ali R.

    2001-05-01

    We are developing a video see-through head-mounted display (HMD) augmented reality (AR) system for image-guided neurosurgical planning and navigation. The surgeon wears a HMD that presents him with the augmented stereo view. The HMD is custom fitted with two miniature color video cameras that capture a stereo view of the real-world scene. We are concentrating specifically at this point on cranial neurosurgery, so the images will be of the patient's head. A third video camera, operating in the near infrared, is also attached to the HMD and is used for head tracking. The pose (i.e., position and orientation) of the HMD is used to determine where to overlay anatomic structures segmented from preoperative tomographic images (e.g., CT, MR) on the intraoperative video images. Two SGI 540 Visual Workstation computers process the three video streams and render the augmented stereo views for display on the HMD. The AR system operates in real time at 30 frames/sec with a temporal latency of about three frames (100 ms) and zero relative lag between the virtual objects and the real-world scene. For an initial evaluation of the system, we created AR images using a head phantom with actual internal anatomic structures (segmented from CT and MR scans of a patient) realistically positioned inside the phantom. When using shaded renderings, many users had difficulty appreciating overlaid brain structures as being inside the head. When using wire frames and texture-mapped dot patterns, most users correctly visualized brain anatomy as being internal and could generally appreciate spatial relationships among various objects. The 3D perception of these structures is based on both stereoscopic depth cues and kinetic depth cues, with the user looking at the head phantom from varying positions. The perception of the augmented visualization is natural and convincing. The brain structures appear rigidly anchored in the head, manifesting little or no apparent swimming or jitter. The initial evaluation of the system is encouraging, and we believe that AR visualization might become an important tool for image-guided neurosurgical planning and navigation.

  2. Computational imaging with a single-pixel detector and a consumer video projector

    NASA Astrophysics Data System (ADS)

    Sych, D.; Aksenov, M.

    2018-02-01

    Single-pixel imaging is a novel rapidly developing imaging technique that employs spatially structured illumination and a single-pixel detector. In this work, we experimentally demonstrate a fully operating modular single-pixel imaging system. Light patterns in our setup are created with help of a computer-controlled digital micromirror device from a consumer video projector. We investigate how different working modes and settings of the projector affect the quality of reconstructed images. We develop several image reconstruction algorithms and compare their performance for real imaging. Also, we discuss the potential use of the single-pixel imaging system for quantum applications.
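    The measurement-and-reconstruction principle behind single-pixel imaging can be sketched with orthogonal Hadamard patterns on a 4-pixel "scene": each single-pixel measurement is the total light collected under one illumination pattern, and the scene is recovered from the pattern/measurement inner products. This is a minimal sketch, not the authors' system: a real DMD cannot display negative pattern values directly (differential measurements are used in practice), and the scene values here are invented.

```python
def hadamard(n):
    """Sylvester construction of an n x n Hadamard matrix (n a power of 2)."""
    H = [[1]]
    while len(H) < n:
        H = [row + row for row in H] + [row + [-x for x in row] for row in H]
    return H

# Hypothetical 4-pixel flattened scene the bucket detector cannot resolve.
scene = [3.0, 7.0, 1.0, 5.0]
H = hadamard(4)

# Single-pixel measurement: total light under each structured pattern.
measurements = [sum(p * s for p, s in zip(pattern, scene)) for pattern in H]

# Reconstruction: Hadamard rows are orthogonal with H H^T = n I,
# so the scene is x = (1/n) H^T y.
n = len(scene)
recovered = [sum(H[i][j] * measurements[i] for i in range(n)) / n
             for j in range(n)]
print(recovered)
```

    Compressive variants use fewer patterns than pixels and solve a sparse-recovery problem instead of this exact inverse; the orthogonal case above is the simplest complete-basis reconstruction.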

  3. Effectiveness of YouTube as a Source of Medical Information on Heart Transplantation.

    PubMed

    Chen, He-Ming; Hu, Zhong-Kai; Zheng, Xiao-Lin; Yuan, Zhao-Shun; Xu, Zhao-Bin; Yuan, Ling-Qing; Perez, Vinicio A De Jesus; Yuan, Ke; Orcholski, Mark; Liao, Xiao-Bo

    2013-11-21

    In this digital era, there is a growing tendency to use the popular Internet site YouTube as a new electronic-learning (e-learning) means for continuing medical education. Heart transplantation (HTx) remains the most viable option for patients with end-stage heart failure or severe coronary artery disease. There are plenty of freely accessible YouTube videos providing medical information about HTx. The aim of the present study is to determine the effectiveness of YouTube as an e-learning source on HTx. In order to carry out this study, YouTube was searched for uploaded videos containing surgery-related information using the four keywords: (1) "heart transplantation", (2) "cardiac transplantation", (3) "heart transplantation operation", and (4) "cardiac transplantation operation". Only videos in English (with comments or subtitles in English language) were included. Two experienced cardiac surgeons watched each video (N=1800) and classified them as useful, misleading, or recipients videos based on the HTx-relevant information. The kappa statistic was used to measure interobserver variability. Data was analyzed according to six types of YouTube characteristics including "total viewership", "duration", "source", "days since upload", "scores" given by the viewers, and specialized information contents of the videos. A total of 342/1800 (19.00%) videos had relevant information about HTx. Of these 342 videos, 215 (62.8%) videos had useful information about specialized knowledge, 7/342 (2.0%) were found to be misleading, and 120/342 (35.1%) only concerned recipients' individual issues. Useful videos had 56.09% of total viewership share (2,175,845/3,878,890), whereas misleading had 35.47% (1,375,673/3,878,890). Independent user channel videos accounted for a smaller proportion (19% in total numbers) but might have a wider impact on Web viewers, with the highest mean views/day (mean 39, SD 107) among four kinds of channels to distribute HTx-related information. YouTube videos on HTx benefit medical professionals by providing a substantial amount of information. However, finding high-quality videos is time-consuming. More authoritative videos by trusted sources should be posted for dissemination of reliable information. With improvements to the ranking system and content providers in the future, YouTube, as a freely accessible outlet, will help to meet the huge informational needs of medical staff and promote medical education on HTx.
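    The interobserver measure named above, Cohen's kappa, has a simple closed form: kappa = (p_o - p_e) / (1 - p_e), observed agreement corrected for the agreement expected by chance from each rater's marginal label frequencies. A small sketch with invented ratings (the study's actual classification data are not reproduced here):

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters labelling the same items.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement and
    p_e is chance agreement from the raters' marginal label frequencies.
    """
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    labels = set(rater_a) | set(rater_b)
    # Observed agreement: fraction of items labelled identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement from marginal frequencies of each label.
    p_e = sum((rater_a.count(l) / n) * (rater_b.count(l) / n)
              for l in labels)
    return (p_o - p_e) / (1 - p_e)

# Two hypothetical surgeons classifying 8 videos:
a = ["useful", "useful", "misleading", "recipient",
     "useful", "recipient", "useful", "misleading"]
b = ["useful", "useful", "misleading", "recipient",
     "recipient", "recipient", "useful", "useful"]
print(round(cohens_kappa(a, b), 3))
```

    Kappa of 1 means perfect agreement and 0 means no better than chance; values around 0.6-0.8 are conventionally read as substantial agreement.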

  4. Marshall Space Flight Center Ground Systems Development and Integration

    NASA Technical Reports Server (NTRS)

    Wade, Gina

    2016-01-01

    Ground Systems Development and Integration performs a variety of tasks in support of the Mission Operations Laboratory (MOL) and other Center and Agency projects. These tasks include various systems engineering processes such as performing system requirements development, system architecture design, integration, verification and validation, software development, and sustaining engineering of mission operations systems that have evolved the Huntsville Operations Support Center (HOSC) into a leader in remote operations for current and future NASA space projects. The group is also responsible for developing and managing telemetry and command configuration and calibration databases. Personnel are responsible for maintaining and enhancing their disciplinary skills in the areas of project management, software engineering, software development, software process improvement, telecommunications, networking, and systems management. Domain expertise in the ground systems area is also maintained and includes detailed proficiency in the areas of real-time telemetry systems, command systems, voice, video, data networks, and mission planning systems.

  5. NV-CMOS HD camera for day/night imaging

    NASA Astrophysics Data System (ADS)

    Vogelsong, T.; Tower, J.; Sudol, Thomas; Senko, T.; Chodelka, D.

    2014-06-01

    SRI International (SRI) has developed a new multi-purpose day/night video camera with low-light imaging performance comparable to an image intensifier, while offering the size, weight, ruggedness, and cost advantages enabled by the use of SRI's NV-CMOS HD digital image sensor chip. The digital video output is ideal for image enhancement, sharing with others through networking, video capture for data analysis, or fusion with thermal cameras. The camera provides Camera Link output with HD/WUXGA resolution of 1920 x 1200 pixels operating at 60 Hz. Windowing to smaller sizes enables operation at higher frame rates. High sensitivity is achieved through use of backside illumination, providing high Quantum Efficiency (QE) across the visible and near infrared (NIR) bands (peak QE >90%), as well as projected low-noise (<2 e-) readout. Power consumption is minimized in the camera, which operates from a single 5V supply. The NV-CMOS HD camera provides a substantial reduction in size, weight, and power (SWaP), ideal for SWaP-constrained day/night imaging platforms such as UAVs, ground vehicles, and fixed-mount surveillance, and may be reconfigured for mobile soldier operations such as night vision goggles and weapon sights. In addition, the camera with the NV-CMOS HD imager is suitable for high-performance digital cinematography/broadcast systems, biofluorescence/microscopy imaging, day/night security and surveillance, and other high-end applications which require HD video imaging with high sensitivity and wide dynamic range. The camera comes with an array of lens mounts including C-mount and F-mount. The latest test data from the NV-CMOS HD camera will be presented.

  6. Remotely accessible laboratory for MEMS testing

    NASA Astrophysics Data System (ADS)

    Sivakumar, Ganapathy; Mulsow, Matthew; Melinger, Aaron; Lacouture, Shelby; Dallas, Tim E.

    2010-02-01

    We report on the construction of a remotely accessible and interactive laboratory for testing microdevices (aka: MicroElectroMechanical Systems - MEMS). Enabling expanded utilization of microdevices for research, commercial, and educational purposes is very important for driving the creation of future MEMS devices and applications. Unfortunately, the relatively high costs associated with MEMS devices and testing infrastructure makes widespread access to the world of MEMS difficult. The creation of a virtual lab to control and actuate MEMS devices over the internet helps spread knowledge to a larger audience. A host laboratory has been established that contains a digital microscope, microdevices, controllers, and computers that can be logged into through the internet. The overall layout of the tele-operated MEMS laboratory system can be divided into two major parts: the server side and the client side. The server side is present at Texas Tech University, and hosts a server machine that runs the Linux operating system and is used for interfacing the MEMS lab with the outside world via the internet. The controls from the clients are transferred to the lab side through the server interface. The server interacts with the electronics required to drive the MEMS devices using a range of National Instruments hardware and LabView Virtual Instruments. An optical microscope (100 ×) with a CCD video camera is used to capture images of the operating MEMS. The server broadcasts the live video stream over the internet to the clients through the website. When the button is pressed on the website, the MEMS device responds and the video stream shows the movement in close to real time.

  7. Video multiple watermarking technique based on image interlacing using DWT.

    PubMed

    Ibrahim, Mohamed M; Abdel Kader, Neamat S; Zorkany, M

    2014-01-01

    Digital watermarking is one of the important techniques to secure digital media files in the domains of data authentication and copyright protection. In nonblind watermarking systems, the need for the original host file in the watermark recovery operation imposes an overhead on system resources, doubling memory capacity and communications bandwidth. In this paper, a robust video multiple watermarking technique is proposed to solve this problem. This technique is based on image interlacing. In this technique, three-level discrete wavelet transform (DWT) is used as a watermark embedding/extracting domain, Arnold transform is used as a watermark encryption/decryption method, and different types of media (gray image, color image, and video) are used as watermarks. The robustness of this technique is tested by applying different types of attacks such as geometric, noising, format-compression, and image-processing attacks. The simulation results show the effectiveness and good performance of the proposed technique in saving system resources, memory capacity, and communications bandwidth.
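    As a minimal illustration of the embedding domain, the sketch below does additive watermarking in the detail band of a one-level 1-D Haar DWT, with nonblind extraction. It is not the paper's scheme (which uses a three-level 2-D DWT, Arnold-transform encryption, and image/video watermarks precisely to avoid the nonblind overhead); the signal values and the strength constant ALPHA are invented.

```python
def haar_fwd(x):
    """One-level Haar DWT: approximation and detail coefficients."""
    a = [(x[2 * i] + x[2 * i + 1]) / 2 for i in range(len(x) // 2)]
    d = [(x[2 * i] - x[2 * i + 1]) / 2 for i in range(len(x) // 2)]
    return a, d

def haar_inv(a, d):
    """Inverse one-level Haar DWT (exact reconstruction)."""
    x = []
    for ai, di in zip(a, d):
        x += [ai + di, ai - di]
    return x

ALPHA = 4.0  # embedding strength; assumed value for illustration

def embed(host, bits):
    """Add a bipolar watermark into the detail band of the host signal."""
    a, d = haar_fwd(host)
    d = [di + ALPHA * (1 if b else -1) for di, b in zip(d, bits)]
    return haar_inv(a, d)

def extract(marked, host):
    """Nonblind extraction: compare detail bands of marked vs. original."""
    _, dm = haar_fwd(marked)
    _, dh = haar_fwd(host)
    return [1 if m - h > 0 else 0 for m, h in zip(dm, dh)]

host = [100.0, 104.0, 98.0, 96.0, 120.0, 118.0, 90.0, 94.0]
bits = [1, 0, 1, 1]
marked = embed(host, bits)
print(extract(marked, host))
```

    The dependence of `extract` on `host` is exactly the nonblind overhead the paper targets; blind schemes instead quantize coefficients so the watermark is recoverable from `marked` alone.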

  8. Temporally coherent 4D video segmentation for teleconferencing

    NASA Astrophysics Data System (ADS)

    Ehmann, Jana; Guleryuz, Onur G.

    2013-09-01

    We develop an algorithm for 4-D (RGB+Depth) video segmentation targeting immersive teleconferencing applications on emerging mobile devices. Our algorithm extracts users from their environments and places them onto virtual backgrounds similar to green-screening. The virtual backgrounds increase immersion and interactivity, relieving the users of the system from distractions caused by disparate environments. Commodity depth sensors, while providing useful information for segmentation, result in noisy depth maps with a large number of missing depth values. By combining depth and RGB information, our work significantly improves the otherwise very coarse segmentation. Further imposing temporal coherence yields compositions where the foregrounds seamlessly blend with the virtual backgrounds with minimal flicker and other artifacts. We achieve said improvements by correcting the missing information in depth maps before fast RGB-based segmentation, which operates in conjunction with temporal coherence. Simulation results indicate the efficacy of the proposed system in video conferencing scenarios.
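    The depth-repair-before-segmentation step can be illustrated with a toy sketch: fill sensor dropouts from the nearest valid neighbour, then threshold the repaired depth into a foreground mask. The row-wise fill, the 4x4 depth map, and the 1500 mm threshold are invented for illustration; the authors' RGB-assisted, temporally coherent method is considerably more involved.

```python
MISSING = 0  # depth sensors commonly report 0 where no depth was measured

def fill_missing(row):
    """Fill missing depth values with the nearest valid value in the row."""
    out = row[:]
    for i, v in enumerate(out):
        if v == MISSING:
            # Scan outwards for the closest measured neighbour.
            for off in range(1, len(out)):
                left, right = i - off, i + off
                if left >= 0 and row[left] != MISSING:
                    out[i] = row[left]
                    break
                if right < len(out) and row[right] != MISSING:
                    out[i] = row[right]
                    break
    return out

def segment(depth, threshold):
    """Foreground mask: 1 where repaired depth is closer than threshold."""
    return [[1 if v != MISSING and v < threshold else 0
             for v in fill_missing(row)] for row in depth]

# Hypothetical 4x4 depth map in millimetres; 0 marks sensor dropouts.
depth = [
    [900, 0, 950, 2400],
    [880, 910, 0, 2500],
    [0, 930, 940, 2450],
    [2300, 2350, 0, 2600],
]
print(segment(depth, 1500))
```

    Without the fill step, every dropout pixel would fall out of the mask and produce exactly the flicker the paper sets out to suppress.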

  9. OpenControl: a free opensource software for video tracking and automated control of behavioral mazes.

    PubMed

    Aguiar, Paulo; Mendonça, Luís; Galhardo, Vasco

    2007-10-15

    Operant animal behavioral tests require the interaction of the subject with sensors and actuators distributed in the experimental environment of the arena. In order to provide user-independent, reliable results and versatile control of these devices it is vital to use an automated control system. Commercial systems for control of animal mazes are usually based on software implementations that restrict their application to the proprietary hardware of the vendor. In this paper we present OpenControl: an opensource Visual Basic software package that permits a Windows-based computer to function as a system to run fully automated behavioral experiments. OpenControl integrates video-tracking of the animal, definition of zones from the video signal for real-time assignment of animal position in the maze, control of the maze actuators from either hardware sensors or from the online video tracking, and recording of experimental data. Bidirectional communication with the maze hardware is achieved through the parallel-port interface, without the need for expensive AD-DA cards, while video tracking is attained using an inexpensive Firewire digital camera. The OpenControl Visual Basic code is structurally general and versatile, allowing it to be easily modified or extended to fulfill specific experimental protocols and custom hardware configurations. The Visual Basic environment was chosen in order to allow experimenters to easily adapt the code and extend it to their own needs.
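    A minimal sketch of the tracking-to-zone step such a system performs: compute the animal's centroid from a binary (thresholded) video frame and map it to a named maze zone, which would then drive the actuators. The zone layout and frame are hypothetical, and OpenControl itself is Visual Basic; this Python sketch only illustrates the logic.

```python
def centroid(frame):
    """Centroid (row, col) of nonzero pixels in a binary frame, or None."""
    pts = [(r, c) for r, row in enumerate(frame)
           for c, v in enumerate(row) if v]
    if not pts:
        return None
    n = len(pts)
    return (sum(r for r, _ in pts) / n, sum(c for _, c in pts) / n)

# Zones as named rectangles (row_min, row_max, col_min, col_max);
# hypothetical two-arm maze layout.
ZONES = {"arm_a": (0, 3, 0, 7), "arm_b": (4, 7, 0, 7)}

def assign_zone(frame):
    """Map the tracked position to a maze zone name, or None."""
    pos = centroid(frame)
    if pos is None:
        return None
    for name, (r0, r1, c0, c1) in ZONES.items():
        if r0 <= pos[0] <= r1 and c0 <= pos[1] <= c1:
            return name
    return None

frame = [[0] * 8 for _ in range(8)]
frame[5][2] = frame[5][3] = frame[6][2] = frame[6][3] = 1  # animal blob
print(assign_zone(frame))
```

    In a real session this runs per frame, and zone transitions (rather than raw positions) trigger doors, feeders, or event logging.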

  10. Data simulation for the Lightning Imaging Sensor (LIS)

    NASA Technical Reports Server (NTRS)

    Boeck, William L.

    1991-01-01

    This project aims to build a data analysis system that will utilize existing video tape scenes of lightning as viewed from space. The resultant data will be used for the design and development of the Lightning Imaging Sensor (LIS) software and algorithm analysis. The desire for statistically significant metrics implies that a large data set needs to be analyzed. Before 1990 the quality and quantity of video was insufficient to build a usable data set. At this point in time, there is usable data from missions STS-34, STS-32, STS-31, STS-41, STS-37, and STS-39. During the summer of 1990, a manual analysis system was developed to demonstrate that the video analysis is feasible and to identify techniques to deduce information that was not directly available. Because the closed circuit television system used on the space shuttle was intended for documentary TV, the current values of the camera focal length and pointing orientation, which are needed for photoanalysis, are not included in the system data. A large effort was needed to discover ancillary data sources as well as develop indirect methods to estimate the necessary parameters. Any data system coping with full motion video faces an enormous bottleneck produced by the large data production rate and the need to move and store the digitized images. The manual system bypassed the video digitizing bottleneck by using a genlock to superimpose pixel coordinates on full motion video. Because the data set had to be obtained point by point by a human operating a computer mouse, the data output rate was small. The loan and subsequent acquisition of an Abekas digital frame store with a real time digitizer moved the bottleneck from data acquisition to a problem of data transfer and storage. The semi-automated analysis procedure was developed using existing equipment and is described. A fully automated system is described in the hope that the components may come on the market at reasonable prices in the next few years.

  11. Use of an intuitive telemanipulator system for remote trauma surgery: an experimental study.

    PubMed

    Bowersox, J C; Cordts, P R; LaPorta, A J

    1998-06-01

    Death from battlefield trauma occurs rapidly. Potentially salvageable casualties generally exsanguinate from truncal hemorrhage before operative intervention is possible. An intuitive telemanipulator system that would allow distant surgeons to remotely treat injured patients could improve the outcome from severe injuries. We evaluated a prototype, four-degree-of-freedom, telesurgery system that provides a surgeon with a stereoscopic video display of a remote operative field. Using dexterous robotic manipulators, surgical instruments at the remote site can be precisely controlled, enabling operative procedures to be performed remotely. Surgeons (n = 3) used the telesurgery system to perform organ excision, hemorrhage control, suturing, and knot tying on anesthetized swine. The ability to complete tasks, times required, technical quality, and subjective impressions were recorded. Surgeons using the telesurgery system were able to close gastrotomies remotely, although times required were 2.7 times as long as those performed by conventional techniques (451 +/- 83 versus 1,235 +/- 165 seconds, p < 0.002). Cholecystectomies, hemorrhage control from liver lacerations, and enterotomy closures were successfully completed in all attempts. Force feedback and stereoscopic video display were important for achieving intuitive performance with the telesurgery system, although tasks were completed adequately in the absence of these sensory cues. We demonstrated the feasibility of performing standard surgical procedures remotely, with the operating surgeon linked to the distant field only by electronic cabling. Complex manipulations were possible, although the times required were much longer. The capabilities of the system used would not support resuscitative surgery. Telesurgery is unlikely to play a role in early trauma management, but may be a unique research tool for acquiring basic knowledge of operative surgery.

  12. Sharing from Scratch: How To Network CD-ROM.

    ERIC Educational Resources Information Center

    Doering, David

    1998-01-01

    Examines common CD-ROM networking architectures: via existing operating systems (OS), thin server towers, and dedicated servers. Discusses digital video disc (DVD) and non-CD/DVD optical storage solutions and presents case studies of networks that work. (PEN)

  13. 47 CFR 76.213 - Lotteries.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 47 Telecommunication 4 2011-10-01 2011-10-01 false Lotteries. 76.213 Section 76.213 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) BROADCAST RADIO SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Cablecasting § 76.213 Lotteries. (a) No cable television system operator...

  14. 47 CFR 76.213 - Lotteries.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 47 Telecommunication 4 2014-10-01 2014-10-01 false Lotteries. 76.213 Section 76.213 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) BROADCAST RADIO SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Cablecasting § 76.213 Lotteries. (a) No cable television system operator...

  15. 47 CFR 76.213 - Lotteries.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 4 2010-10-01 2010-10-01 false Lotteries. 76.213 Section 76.213 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) BROADCAST RADIO SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Cablecasting § 76.213 Lotteries. (a) No cable television system operator...

  16. 47 CFR 76.213 - Lotteries.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 47 Telecommunication 4 2013-10-01 2013-10-01 false Lotteries. 76.213 Section 76.213 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) BROADCAST RADIO SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Cablecasting § 76.213 Lotteries. (a) No cable television system operator...

  17. 47 CFR 76.213 - Lotteries.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 47 Telecommunication 4 2012-10-01 2012-10-01 false Lotteries. 76.213 Section 76.213 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) BROADCAST RADIO SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Cablecasting § 76.213 Lotteries. (a) No cable television system operator...

  18. Submillimeter video imaging with a superconducting bolometer array

    NASA Astrophysics Data System (ADS)

    Becker, Daniel Thomas

    Millimeter wavelength radiation holds promise for detection of security threats at a distance, including suicide bombers and maritime threats in poor weather. The high sensitivity of superconducting Transition Edge Sensor (TES) bolometers makes them ideal for passive imaging of thermal signals at millimeter and submillimeter wavelengths. I have built a 350 GHz video-rate imaging system using an array of feedhorn-coupled TES bolometers. The system operates at standoff distances of 16 m to 28 m with a measured spatial resolution of 1.4 cm (at 17 m). It currently contains one 251-detector sub-array, and can be expanded to contain four sub-arrays for a total of 1004 detectors. The system has been used to take video images that reveal the presence of weapons concealed beneath a shirt in an indoor setting. This dissertation describes the design, implementation and characterization of this system. It presents an overview of the challenges associated with standoff passive imaging and how these problems can be overcome through the use of large-format TES bolometer arrays. I describe the design of the system and cover the results of detector and optical characterization. I explain the procedure used to generate video images using the system, and present a noise analysis of those images. This analysis indicates that the Noise Equivalent Temperature Difference (NETD) of the video images is currently limited by artifacts of the scanning process. More sophisticated image processing algorithms can eliminate these artifacts and reduce the NETD to 100 mK, which is the target value for the most demanding passive imaging scenarios. I finish with an overview of future directions for this system.
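
    The NETD figure of merit discussed above can be illustrated with a toy computation: the per-pixel temporal noise of a video cube staring at a uniform scene, converted to kelvin through a radiometric gain. This is a hedged sketch, not the dissertation's actual pipeline; the function name and the calibration constant are invented for illustration.

```python
import numpy as np

def netd_from_video(frames, gain_k_per_count):
    """Estimate Noise Equivalent Temperature Difference from a video cube.

    frames: array of shape (n_frames, rows, cols) of detector readings
    taken while viewing a uniform blackbody scene; gain_k_per_count is
    an assumed radiometric calibration in kelvin per count. NETD is the
    per-pixel temporal standard deviation converted to temperature,
    summarized here by its median over the array. Illustrative only.
    """
    temporal_std = np.std(np.asarray(frames, dtype=float), axis=0)
    return float(np.median(temporal_std) * gain_k_per_count)
```

    With 5 counts of temporal noise and a gain of 0.02 K per count, this estimator returns roughly the 0.1 K (100 mK) target value quoted in the abstract.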

  19. Robust media processing on programmable power-constrained systems

    NASA Astrophysics Data System (ADS)

    McVeigh, Jeff

    2005-03-01

    To achieve consumer-level quality, media systems must process continuous streams of audio and video data while maintaining exacting tolerances on sampling rate, jitter, synchronization, and latency. While it is relatively straightforward to design fixed-function hardware implementations to satisfy worst-case conditions, there is a growing trend to utilize programmable multi-tasking solutions for media applications. The flexibility of these systems enables support for multiple current and future media formats, which can reduce design costs and time-to-market. This paper provides practical engineering solutions to achieve robust media processing on such systems, with specific attention given to power-constrained platforms. The techniques covered in this article utilize the fundamental concepts of algorithm and software optimization, software/hardware partitioning, stream buffering, hierarchical prioritization, and system resource and power management. A novel enhancement to dynamically adjust processor voltage and frequency based on buffer fullness to reduce system power consumption is examined in detail. The application of these techniques is provided in a case study of a portable video player implementation based on a general-purpose processor running a non real-time operating system that achieves robust playback of synchronized H.264 video and MP3 audio from local storage and streaming over 802.11.
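
    The buffer-fullness-driven voltage/frequency adjustment described above can be sketched as a simple policy: a full decoder buffer means decoding is ahead of playback and licenses a slower, lower-power state, while a draining buffer forces the processor back to full speed. The frequency table and thresholds below are hypothetical, not the paper's tuned values.

```python
# Assumed available frequency steps (MHz), lowest-power first.
LEVELS_MHZ = [200, 400, 600, 800]

def pick_frequency(fullness: float) -> int:
    """Map decoder buffer fullness in [0, 1] to a frequency level.

    A full buffer means we can slow down (and drop voltage) to save
    power; a nearly empty buffer means we must speed up to avoid
    underflow and a playback glitch.
    """
    if not 0.0 <= fullness <= 1.0:
        raise ValueError("fullness must be in [0, 1]")
    if fullness > 0.75:      # comfortably ahead: lowest power state
        return LEVELS_MHZ[0]
    if fullness > 0.50:
        return LEVELS_MHZ[1]
    if fullness > 0.25:
        return LEVELS_MHZ[2]
    return LEVELS_MHZ[3]     # near underflow: maximum speed
```

    In a real player the level change would also drop the core voltage, which is where most of the power saving comes from.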

  20. Video image processor on the Spacelab 2 Solar Optical Universal Polarimeter /SL2 SOUP/

    NASA Technical Reports Server (NTRS)

    Lindgren, R. W.; Tarbell, T. D.

    1981-01-01

    The SOUP instrument is designed to obtain diffraction-limited digital images of the sun with high photometric accuracy. The Video Processor originated from the requirement to provide onboard real-time image processing, both to reduce the telemetry rate and to provide meaningful video displays of scientific data to the payload crew. This original concept has evolved into a versatile digital processing system with a multitude of other uses in the SOUP program. The central element in the Video Processor design is a 16-bit central processing unit based on 2900 family bipolar bit-slice devices. All arithmetic, logical and I/O operations are under control of microprograms, stored in programmable read-only memory and initiated by commands from the LSI-11. Several functions of the Video Processor are described, including interface to the High Rate Multiplexer downlink, cosmetic and scientific data processing, scan conversion for crew displays, focus and exposure testing, and use as ground support equipment.

  1. Low-cost Tools for Aerial Video Geolocation and Air Traffic Analysis for Delay Reduction Using Google Earth

    NASA Astrophysics Data System (ADS)

    Zetterlind, V.; Pledgie, S.

    2009-12-01

    Low-cost, low-latency, robust geolocation and display of aerial video is a common need for a wide range of earth observing as well as emergency response and security applications. While hardware costs for aerial video collection systems, GPS, and inertial sensors have been decreasing, software costs for geolocation algorithms and reference imagery/DTED remain expensive and highly proprietary. As part of a Federal Small Business Innovative Research project, MosaicATM and EarthNC, Inc have developed a simple geolocation system based on the Google Earth API and Google's 'built-in' DTED and reference imagery libraries. This system geolocates aerial video based on platform and camera position, attitude, and field-of-view metadata using geometric photogrammetric principles of ray-intersection with DTED. Geolocated video can be directly rectified and viewed in the Google Earth API during processing. Work is underway to extend our geolocation code to NASA World Wind for additional flexibility and a fully open-source platform. In addition to our airborne remote sensing work, MosaicATM has developed the Surface Operations Data Analysis and Adaptation (SODAA) tool, funded by NASA Ames, which supports analysis of airport surface operations to optimize aircraft movements and reduce fuel burn and delays. As part of SODAA, MosaicATM and EarthNC, Inc have developed powerful tools to display national airspace data and time-animated 3D flight tracks in Google Earth for 4D analysis. The SODAA tool can convert raw format flight track data, FAA National Flight Data (NFD), and FAA 'Adaptation' airport surface data to a spatial database representation and then to Google Earth KML. The SODAA client provides users with a simple graphical interface through which to generate queries with a wide range of predefined and custom filters, plot results, and export for playback in Google Earth in conjunction with NFD and Adaptation overlays.
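
    The geometric idea named in the abstract, ray intersection with DTED, can be sketched as a fixed-step ray marcher: walk the pixel's look ray away from the camera until its height drops below the terrain grid. The function name, the stepping scheme, and the grid layout are illustrative assumptions, not MosaicATM/EarthNC code.

```python
import numpy as np

def geolocate_pixel(cam_pos, ray_dir, dted, cell_size,
                    step=1.0, max_range=5000.0):
    """March a camera ray forward until it drops below the terrain.

    cam_pos: (x, y, z) camera position in a local metric frame.
    ray_dir: look direction for the pixel being geolocated.
    dted:    2-D array of terrain heights; dted[i, j] covers the cell
             whose lower-left corner is (j*cell_size, i*cell_size).
    Returns the (x, y, z) ground intersection, or None if the ray
    leaves the grid or exceeds max_range.
    """
    pos = np.asarray(cam_pos, dtype=float)
    d = np.asarray(ray_dir, dtype=float)
    d = d / np.linalg.norm(d)
    for _ in range(int(max_range / step)):
        pos = pos + d * step
        i = int(pos[1] // cell_size)
        j = int(pos[0] // cell_size)
        if not (0 <= i < dted.shape[0] and 0 <= j < dted.shape[1]):
            return None                  # left the elevation grid
        if pos[2] <= dted[i, j]:         # crossed below terrain
            return tuple(pos)
    return None
```

    Production geolocation code would interpolate within cells and solve the crossing point exactly; the fixed step here just makes the ray/terrain idea concrete.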

  2. Audiovisual signal compression: the 64/P codecs

    NASA Astrophysics Data System (ADS)

    Jayant, Nikil S.

    1996-02-01

    Video codecs operating at integral multiples of 64 kbps are well-known in visual communications technology as p * 64 systems (p equals 1 to 24). Originally developed as a class of ITU standards, these codecs have served as core technology for videoconferencing, and they have also influenced the MPEG standards for addressable video. Video compression in the above systems is provided by motion compensation followed by discrete cosine transform -- quantization of the residual signal. Notwithstanding the promise of higher bit rates in emerging generations of networks and storage devices, there is a continuing need for facile audiovisual communications over voice band and wireless modems. Consequently, video compression at bit rates lower than 64 kbps is a widely-sought capability. In particular, video codecs operating at rates in the neighborhood of 64, 32, 16, and 8 kbps seem to have great practical value, being matched respectively to the transmission capacities of basic rate ISDN (64 kbps), and voiceband modems that represent high (32 kbps), medium (16 kbps) and low-end (8 kbps) grades in current modem technology. The purpose of this talk is to describe the state of video technology at these transmission rates, without getting too literal about the specific speeds mentioned above. In other words, we expect codecs designed for non-submultiples of 64 kbps, such as 56 kbps or 19.2 kbps, as well as for sub-multiples of 64 kbps, depending on varying constraints on modem rate and the transmission rate needed for the voice-coding part of the audiovisual communications link. The MPEG-4 video standards process is a natural platform on which to examine current capabilities in sub-ISDN rate video coding, and we shall draw appropriately from this process in describing video codec performance.
Inherent in this summary is a reinforcement of motion compensation and DCT as viable building blocks of video compression systems, although there is a need for improving signal quality even in the very best of these systems. In a related part of our talk, we discuss the role of preprocessing and postprocessing subsystems which serve to enhance the performance of an otherwise standard codec. Examples of these (sometimes proprietary) subsystems are automatic face-tracking prior to the coding of a head-and-shoulders scene, and adaptive postfiltering after conventional decoding, to reduce generic classes of artifacts in low bit rate video. The talk concludes with a summary of technology targets and research directions. We discuss targets in terms of four fundamental parameters of coder performance: quality, bit rate, delay and complexity; and we emphasize the need for measuring and maximizing the composite quality of the audiovisual signal. In discussing research directions, we examine progress and opportunities in two fundamental approaches for bit rate reduction: removal of statistical redundancy and reduction of perceptual irrelevancy; we speculate on the value of techniques such as analysis-by-synthesis that have proved to be quite valuable in speech coding, and we examine the prospect of integrating speech and image processing for developing next-generation technology for audiovisual communications.
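
    The motion-compensation-plus-DCT structure that this summary treats as the core building block can be illustrated for the transform-coding half: an 8x8 residual block is transformed by an orthonormal 2-D DCT, uniformly quantized (the lossy step that buys bit rate), then dequantized and inverse transformed. A minimal numpy sketch with an invented scalar quantizer step rather than a standard quantization matrix:

```python
import numpy as np

N = 8

def dct_matrix(n: int = N) -> np.ndarray:
    """Orthonormal DCT-II basis matrix: rows are cosine basis vectors."""
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C *= np.sqrt(2.0 / n)
    C[0, :] = np.sqrt(1.0 / n)   # DC row gets its own normalization
    return C

def code_block(block: np.ndarray, q: float = 16.0) -> np.ndarray:
    """Transform-code one 8x8 residual block: forward 2-D DCT, uniform
    quantization, dequantization, inverse 2-D DCT. Returns the
    reconstructed block; larger q means coarser (cheaper) coding."""
    C = dct_matrix()
    coeffs = C @ block @ C.T              # forward 2-D DCT
    quantized = np.round(coeffs / q)      # uniform quantization (lossy)
    return C.T @ (quantized * q) @ C      # dequantize + inverse DCT
```

    Because the transform is orthonormal, the reconstruction error is bounded by the quantization error of the coefficients, which is the basic trade the codecs above make between bit rate and quality.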

  3. Deep Sea Gazing: Making Ship-Based Research Aboard RV Falkor Relevant and Accessible

    NASA Astrophysics Data System (ADS)

    Wiener, C.; Zykov, V.; Miller, A.; Pace, L. J.; Ferrini, V. L.; Friedman, A.

    2016-02-01

    Schmidt Ocean Institute (SOI) is a private, non-profit operating foundation established to advance the understanding of the world's oceans through technological advancement, intelligent observation, and open sharing of information. Our research vessel Falkor provides ship time to selected scientists and supports a wide range of scientific functions, including ROV operations with live streaming capabilities. Since 2013, SOI has live streamed 55 ROV dives in high definition and recorded them onto YouTube. This totals over 327 hours of video, which received 1,450,461 views in 2014. SOI is one of the only research programs that makes its entire dive series available online, creating a rich collection of video data sets. In doing this, we provide an opportunity for scientists to make new discoveries in the video data that may have been missed earlier. These data sets are also available to students, allowing them to engage with real data in the classroom. SOI's video collection is also being used in a newly developed video management system, Ocean Video Lab. Telepresence-enabled research is an important component of Falkor cruises, as exemplified by several conducted in 2015. This presentation will share a few case studies, including an image-tagging citizen science project conducted through the Squidle interface in partnership with the Australian Centre for Field Robotics. Using real-time image data collected in the Timor Sea, numerous shore-based citizens created seafloor image tags that could be used by machine learning algorithms on Falkor's high performance computer (HPC) to accomplish habitat characterization. The HPC system made real-time robot tracking, image tagging, and other outreach connections possible, allowing scientists on board to engage with the public and build their knowledge base. 
The above mentioned examples will be used to demonstrate the benefits of remote data analysis and participatory engagement in science-based telepresence.

  4. Control Software for Advanced Video Guidance Sensor

    NASA Technical Reports Server (NTRS)

    Howard, Richard T.; Book, Michael L.; Bryan, Thomas C.

    2006-01-01

    Embedded software has been developed specifically for controlling an Advanced Video Guidance Sensor (AVGS). A Video Guidance Sensor is an optoelectronic system that provides guidance for automated docking of two vehicles. Such a system includes pulsed laser diodes and a video camera, the output of which is digitized. From the positions of digitized target images and known geometric relationships, the relative position and orientation of the vehicles are computed. The present software consists of two subprograms running in two processors that are parts of the AVGS. The subprogram in the first processor receives commands from an external source, checks the commands for correctness, performs commanded non-image-data-processing control functions, and sends image data processing parts of commands to the second processor. The subprogram in the second processor processes image data as commanded. Upon power-up, the software performs basic tests of functionality, then effects a transition to a standby mode. When a command is received, the software goes into one of several operational modes (e.g. acquisition or tracking). The software then returns, to the external source, the data appropriate to the command.
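
    The command flow the abstract describes (power-up self-test, transition to standby, then commanded operational modes such as acquisition and tracking, with each command checked for correctness before it is acted on) can be sketched as a small state machine. The command names and transition table below are invented for illustration; the real AVGS command set is not given in the abstract.

```python
# Hypothetical mode transitions, loosely following the abstract.
VALID_TRANSITIONS = {
    "standby": {"acquire": "acquisition", "track": "tracking"},
    "acquisition": {"track": "tracking", "stop": "standby"},
    "tracking": {"stop": "standby"},
}

class AvgsController:
    """Toy controller: self-test at power-up, then commanded modes."""

    def __init__(self):
        self.mode = "self_test"
        self.run_self_test()

    def run_self_test(self):
        # Stand-in for the basic functionality tests at power-up,
        # after which the software effects a transition to standby.
        self.mode = "standby"

    def command(self, name):
        """Check a command for correctness in the current mode, apply
        it, and return the data appropriate to the command (here,
        simply the new mode)."""
        moves = VALID_TRANSITIONS.get(self.mode, {})
        if name not in moves:
            raise ValueError(f"command {name!r} invalid in mode {self.mode!r}")
        self.mode = moves[name]
        return self.mode
```

    Rejecting a command that is invalid for the current mode, rather than acting on it, mirrors the command-checking role the first processor plays in the flight software.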

  5. Earth orbital teleoperator manipulator system evaluation program

    NASA Technical Reports Server (NTRS)

    Brye, R. G.; Frederick, P. N.; Kirkpatrick, M., III; Shields, N. L., Jr.

    1977-01-01

    The operator's ability to perform five manipulator tip movements while using monoptic and stereoptic video systems was assessed. Test data obtained were compared with previous results to determine the impact of camera placement and stereoptic viewing on manipulator system performance. The tests were performed using the NASA MSFC extendible stiff arm manipulator and an analog joystick controller. Two basic manipulator tasks were utilized: the minimum position change test required the operator to move the manipulator arm to touch a target contact, and the dexterity test required removal and replacement of pegs.

  6. [The prevalence and influencing factors of eye diseases for IT industry video operation workers].

    PubMed

    Zhao, Liang-liang; Yu, Yan-yan; Yu, Wen-lan; Xu, Ming; Cao, Wen-dong; Zhang, Hong-bing; Han, Lei; Zhang, Heng-dong

    2013-05-01

    To investigate video display exposure and eye disease among IT industry video operation workers, to analyze the influencing factors, and to provide scientific evidence for making health strategy for these workers, we used random cluster sampling to choose 190 IT industry video operation workers in a city of Jiangsu province and analyzed the relations between video exposure and eye disease. Daily video exposure was 6.0-16.0 hours, with a mean of (10.1 ± 1.8) hours. In this survey, 79.5% of the workers wore myopic lenses, 35.8% took rests during work, and 14.2% used protective products when their eyes felt unwell. In the tear break-up time (BUT) test, 54.7% of the workers had normal results in both eyes, while 45.3% had abnormal results in at least one eye. Similarly, in the Schirmer I test (SIT), 54.7% of the workers were normal in both eyes, while 42.1% were abnormal. According to a generalized linear model, six factors had a statistically significant influence on vision: mean daily video time, eye-to-display distance, frequency of rests, use of protective products when the eyes felt unwell, display type, and daily television-watching time. Six factors had a statistically significant influence on the BUT results: taking regular rests, sex, corneal transparency, pupil shape, family history, and use of protective products when the eyes felt unwell. Seven factors had a statistically significant influence on the SIT results: computer type, sex, pupil shape, corneal transparency, the angle between the display and the worker's line of sight, display type, and the height of the work surface. The eye health of IT industry video operation workers is not optimistic, and most workers lack awareness of protection; education targeting the influencing factors should be strengthened, and the level of medical prevention and control of eye disease in the relevant industries should be improved.

  7. System for delivery of broadcast digital video as an overlay to baseband switched services on a fiber-to-the-home access network

    NASA Astrophysics Data System (ADS)

    Chand, Naresh; Magill, Peter D.; Swaminathan, Venkat S.; Yadvish, R. D.

    1999-04-01

    For low cost fiber-to-the-home (FTTH) passive optical networks (PON), we have studied the delivery of broadcast digital video as an overlay to baseband switched digital services on the same fiber using a single transmitter and a single receiver. We have multiplexed the baseband data at 155.52 Mbps with digital video QPSK channels in the 270 - 1450 MHz range with minimal degradation. We used an additional 860 MHz carrier modulated with 8 Mbps QPSK as a test signal. An optical-to-electrical (O/E) receiver using an APD satisfies the power budget needs of ITU-T document G983.x for both class B and C operations (i.e., receiver sensitivity less than -33 dBm for a 10^-10 bit error rate) without any FEC for both data and video. The PIN diode O/E receiver nearly satisfies the need for class B operation (-30 dBm receiver sensitivity) of G983 with FEC in QPSK FDM video. For 155.52 Mbps baseband data transmission and a given bit error rate, there is approximately 6 dBo of optical power penalty due to the video overlay. Of this, 1 dBo of penalty is due to biasing the laser with an extinction ratio reduced from 10 dBo to approximately 6 dBo, and approximately 5 dBo of penalty is due to the receiver bandwidth increasing from approximately 100 MHz to approximately 1 GHz. The receiver penalty remains after optimizing the filter for baseband data, and is caused by the reduced value of the feedback resistor of the first-stage transimpedance amplifier. The optical power penalty for video transmission is about 2 dBo due to the reduced optical modulation index.

  8. Motion-seeded object-based attention for dynamic visual imagery

    NASA Astrophysics Data System (ADS)

    Huber, David J.; Khosla, Deepak; Kim, Kyungnam

    2017-05-01

    This paper describes a novel system that finds and segments "objects of interest" from dynamic imagery (video). It (1) processes each frame using an advanced motion algorithm that pulls out regions exhibiting anomalous motion, and (2) extracts the boundary of each object of interest using a biologically-inspired segmentation algorithm based on feature contours. The system uses a series of modular, parallel algorithms, which allows many complicated operations to be carried out in a very short time, and it can be used as a front-end to a larger system that includes object recognition and scene understanding modules. Using this method, we show 90% accuracy with fewer than 0.1 false positives per frame of video, a significant improvement over detection using a baseline attention algorithm.
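
    The "anomalous motion" seeding step can be caricatured with plain frame differencing: flag pixels whose change between consecutive frames is far above the typical change. The paper's motion algorithm is more advanced; this numpy stand-in only shows where seed regions for the segmentation stage would come from, and the threshold rule is an assumption.

```python
import numpy as np

def motion_seed_mask(prev_frame, frame, k=3.0):
    """Flag pixels whose frame-to-frame change is anomalously large.

    Absolute frame difference, thresholded at k standard deviations
    above the mean difference, gives candidate seed regions that a
    contour-based segmentation stage could then refine.
    """
    diff = np.abs(frame.astype(float) - prev_frame.astype(float))
    thresh = diff.mean() + k * diff.std()
    return diff > thresh
```

    On a mostly static scene, only the pixels covered by a moving object clear the k-sigma threshold, which is the sense in which their motion is "anomalous".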

  9. Video display terminal operation: a potential risk in the etiology and maintenance of temporomandibular disorders.

    PubMed

    Horowitz, L; Sarkin, J M

    1992-01-01

    Surveys indicate over 50 million Americans, mostly women, currently operate video display terminals (VDTs) at home or in the workplace. Recent epidemiological studies reveal more than 75% of approximately 30 million American temporomandibular disorder (TMD) sufferers are women. What do VDTs and TMD have in common besides an affinity for the female gender? TMD is associated with numerous risk factors that commonly initiate sympathetic nervous system and stress hormone response mechanisms, resulting in muscle spasms, trigger point formation, and pain in the head and neck. Likewise, VDT operation may be linked to three additional sympathetic nervous system irritants: (1) electrostatic ambient air negative ion depletion, (2) electromagnetic radiation, and (3) eyestrain and postural stress associated with poor work habits and improper work station design. Additional research considering the roles these three factors may play in the etiology of TMD and other myofascial pain problems is indicated. Furthermore, dentists are advised to educate patients as to these possible risks, encourage preventive behaviors on the part of employers and employees, and recommend workplace health, safety, and ergonomic upgrades when indicated.

  10. Fiber optic TV direct

    NASA Technical Reports Server (NTRS)

    Kassak, John E.

    1991-01-01

    The objective of the operational television (OTV) technology was to develop a multiple camera system (up to 256 cameras) for NASA Kennedy installations where camera video, synchronization, control, and status data are transmitted bidirectionally via a single fiber cable at distances in excess of five miles. It is shown that the benefits (such as improved video performance, immunity from electromagnetic interference and radio frequency interference, elimination of repeater stations, and more system configuration flexibility) can be realized if application of the proven fiber optic transmission concept is used. The control system will marry the lens, pan and tilt, and camera control functions into a modular based Local Area Network (LAN) control network. Such a system does not exist commercially at present since the Television Broadcast Industry's current practice is to divorce the positional controls from the camera control system. The application software developed for this system will have direct applicability to similar systems in industry using LAN based control systems.

  11. Efficient management and promotion of utilization of the video information acquired by observation

    NASA Astrophysics Data System (ADS)

    Kitayama, T.; Tanaka, K.; Shimabukuro, R.; Hase, H.; Ogido, M.; Nakamura, M.; Saito, H.; Hanafusa, Y.; Sonoda, A.

    2012-12-01

    The Japan Agency for Marine-Earth Science and Technology (JAMSTEC) has recorded deep-sea video during research dives by JAMSTEC submersibles since 1982; this huge archive, now exceeding 4,000 dives (ca. 24,700 tapes), has been open to the public via the Internet since 2002. The deep-sea video is important because it captures the time variation of a deep-sea environment that is difficult to investigate and sample, as well as the growth of organisms in extreme environments. Moreover, with the development of video technology, advanced analysis of survey footage has become possible, so the value of the imagery for understanding the deep-sea environment is especially high. JAMSTEC's Data Research Center for Marine-Earth Sciences (DrC) collects the video obtained during JAMSTEC dive surveys and carries out its preservation, quality control, and public release. Managing this huge volume of video information efficiently and promoting its use are our major tasks, and this presentation introduces our current measures. Videos recorded onboard on tape or other media are collected, then backed up and encoded to prevent loss and degradation. Because the video files are large, we use the Linear Tape File System (LTFS), which has recently attracted attention in image management: it costs less than conventional disk backup, can preserve video data for many years, and handles files much as a disk does. Video transcoded for distribution is archived on disk storage, so it can be delivered in a form suited to each use. 
    To promote use of the video, the public video system was completely renewed in November 2011 as the "JAMSTEC E-library of Deep Sea Images" (http://www.godac.jamstec.go.jp/jedi/). The new system provides various searches (e.g., by map, tree, icon, and keyword). Video annotation is possible through the same interface, improving usability for both users and managers. Moreover, the "Biological Information System for Marine Life: BISMaL" (http://www.godac.jamstec.go.jp/bismal/e/index.html), a data system for biodiversity information, particularly biogeographic data on marine organisms, uses the deep-sea video together with its recorded positions to visualize the distribution of organisms and to compile species lists of deep-sea life, aiming to contribute to the understanding of biodiversity. In the future, we aim to improve the accuracy of the information attached to the video, by supporting comment registration through automatic image recognition and by developing an onboard comment-registration tool, and thereby to offer higher-quality information.

  12. Automatic vehicle counting using background subtraction method on gray scale images and morphology operation

    NASA Astrophysics Data System (ADS)

    Adi, K.; Widodo, A. P.; Widodo, C. E.; Pamungkas, A.; Putranto, A. B.

    2018-05-01

    Traffic monitoring requires counting the number of vehicles passing a road, particularly for highway transportation management. It is therefore necessary to develop a system that can count vehicles automatically, and video processing methods make such automatic counting possible. This research developed a vehicle counting system for a toll road. The system includes video acquisition, frame extraction, and image processing of each frame. Video was acquired in the morning, at noon, in the afternoon, and in the evening. The system employs background subtraction and morphology methods on gray scale images for vehicle counting. The best results were obtained in the morning, with a counting accuracy of 86.36%, whereas the lowest accuracy, 21.43%, occurred in the evening. The difference between the morning and evening results is caused by the different illumination, which changes the pixel values in the images.
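
    The pipeline the abstract names (background subtraction on grayscale images, a morphological opening to suppress noise, then counting connected foreground blobs) can be sketched in pure numpy. The threshold, the 3x3 structuring element, and the minimum blob area are invented parameters, not the authors' values.

```python
import numpy as np

def count_vehicles(frame, background, thresh=40, min_area=20):
    """Count foreground blobs in a grayscale frame.

    Steps: subtract the background model, threshold the absolute
    difference, apply a morphological opening (3x3 erosion then
    dilation) to remove isolated noise pixels, and count connected
    foreground components whose area exceeds min_area.
    """
    fg = np.abs(frame.astype(int) - background.astype(int)) > thresh

    def erode(m):
        p = np.pad(m, 1)
        out = np.ones_like(m)
        for di in (0, 1, 2):
            for dj in (0, 1, 2):
                out &= p[di:di + m.shape[0], dj:dj + m.shape[1]]
        return out

    def dilate(m):
        p = np.pad(m, 1)
        out = np.zeros_like(m)
        for di in (0, 1, 2):
            for dj in (0, 1, 2):
                out |= p[di:di + m.shape[0], dj:dj + m.shape[1]]
        return out

    mask = dilate(erode(fg))            # morphological opening
    seen = np.zeros_like(mask)
    count = 0
    for si, sj in zip(*np.nonzero(mask)):   # flood-fill components
        if seen[si, sj]:
            continue
        stack, area = [(si, sj)], 0
        seen[si, sj] = True
        while stack:
            i, j = stack.pop()
            area += 1
            for ni, nj in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
                if (0 <= ni < mask.shape[0] and 0 <= nj < mask.shape[1]
                        and mask[ni, nj] and not seen[ni, nj]):
                    seen[ni, nj] = True
                    stack.append((ni, nj))
        if area >= min_area:
            count += 1
    return count
```

    The illumination sensitivity reported in the abstract shows up here directly: if lighting shifts the whole frame, the fixed threshold misclassifies pixels, which is why adaptive background models are often used instead.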

  13. Video-based teleradiology for intraosseous lesions. A receiver operating characteristic analysis.

    PubMed

    Tyndall, D A; Boyd, K S; Matteson, S R; Dove, S B

    1995-11-01

    Private dental practitioners may lack immediate access to off-site expert diagnostic consultants regarding unusual radiographic findings or radiographic quality assurance issues. Teleradiology, a system for transmitting radiographic images, offers a potential solution to this problem. Although much research has evaluated the feasibility and utilization of teleradiology systems in medical imaging, little research on dental applications has been performed. In this investigation, 47 panoramic films, with an equal distribution of images showing intraosseous jaw lesions and no disease, were viewed by a panel of observers using both teleradiology and conventional viewing methods. The teleradiology system was an analog video-based system simulating remote radiographic consultation between a general dentist and a dental imaging specialist; conventional viewing used traditional viewbox methods. Observers were asked to identify the presence or absence of 24 intraosseous lesions and to determine their locations. No statistically significant differences between methods or among observers were identified at the 0.05 level. The results indicate that viewing video-based panoramic images for intraosseous lesions is equivalent to conventional light box viewing.
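
    The receiver operating characteristic analysis used in observer studies like this one can be illustrated with the standard nonparametric estimate of the area under the ROC curve: the Wilcoxon-Mann-Whitney probability that a diseased case receives a higher observer rating than a disease-free case. The ratings below are hypothetical, not the study's data.

```python
def roc_auc(scores_pos, scores_neg):
    """Area under the ROC curve via the Wilcoxon-Mann-Whitney statistic.

    scores_pos: observer ratings for cases with disease.
    scores_neg: observer ratings for disease-free cases.
    Counts the fraction of (diseased, disease-free) pairs where the
    diseased case is rated higher; ties count one half.
    """
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))
```

    An AUC of 0.5 is chance performance and 1.0 is perfect discrimination; comparing AUCs between viewing modalities is the kind of test the study's "no significant difference at the 0.05 level" conclusion rests on.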

  14. Integrated multisensor perimeter detection systems

    NASA Astrophysics Data System (ADS)

    Kent, P. J.; Fretwell, P.; Barrett, D. J.; Faulkner, D. A.

    2007-10-01

    The report describes the results of a multi-year programme of research aimed at the development of an integrated multi-sensor perimeter detection system capable of being deployed at an operational site. The research was driven by end user requirements in protective security, particularly in threat detection and assessment, where effective capability was either not available or prohibitively expensive. Novel video analytics have been designed to provide robust detection of pedestrians in clutter while new radar detection and tracking algorithms provide wide area day/night surveillance. A modular integrated architecture based on commercially available components has been developed. A graphical user interface allows intuitive interaction and visualisation with the sensors. The fusion of video, radar and other sensor data provides the basis of a threat detection capability for real life conditions. The system was designed to be modular and extendable in order to accommodate future and legacy surveillance sensors. The current sensor mix includes stereoscopic video cameras, mmWave ground movement radar, CCTV and a commercially available perimeter detection cable. The paper outlines the development of the system and describes the lessons learnt after deployment in a pilot trial.

  15. Computer-Aided Remote Driving

    NASA Technical Reports Server (NTRS)

    Wilcox, Brian H.

    1994-01-01

    System for remote control of robotic land vehicle requires only small radio-communication bandwidth. Twin video cameras on vehicle create stereoscopic images. Operator views cross-polarized images on two cathode-ray tubes through correspondingly polarized spectacles. By use of cursor on frozen image, remote operator designates path. Vehicle proceeds to follow path, by use of limited degree of autonomous control to cope with unexpected conditions. System concept, called "computer-aided remote driving" (CARD), potentially useful in exploration of other planets, military surveillance, firefighting, and clean-up of hazardous materials.

  16. Interactive video audio system: communication server for INDECT portal

    NASA Astrophysics Data System (ADS)

    Mikulec, Martin; Voznak, Miroslav; Safarik, Jakub; Partila, Pavol; Rozhon, Jan; Mehic, Miralem

    2014-05-01

    The paper presents the IVAS system developed within the EU FP7 INDECT project. The INDECT project aims at developing tools for enhancing the security of citizens while protecting the confidentiality of recorded and stored information; it is part of the Seventh Framework Programme of the European Union. We participate in the INDECT portal and the Interactive Video Audio System (IVAS). IVAS provides a communication gateway between police officers working in a dispatching centre and police officers in the field. The officers in the dispatching centre can obtain information about all online police officers in the field, command them via text messages, voice or video calls, and manage multimedia files from CCTV cameras or other sources that may be of interest to officers in the field. The police officers in the field are equipped with smartphones or tablets. Besides common communication, they can view pictures or videos sent by the commander and respond to commands via text or multimedia messages taken by their devices. Our IVAS system is unique because we are developing it according to special requirements from the Police of the Czech Republic. The IVAS communication system is designed to use modern Voice over Internet Protocol (VoIP) services. The whole solution is based on open-source software, including the Linux and Android operating systems. The technical details of our solution are presented in the paper.

  17. A digital combining-weight estimation algorithm for broadband sources with the array feed compensation system

    NASA Technical Reports Server (NTRS)

    Vilnrotter, V. A.; Rodemich, E. R.

    1994-01-01

    An algorithm for estimating the optimum combining weights for the Ka-band (33.7-GHz) array feed compensation system was developed and analyzed. The input signal is assumed to be broadband radiation of thermal origin, generated by a distant radio source. Currently, seven video converters operating in conjunction with the real-time correlator are used to obtain these weight estimates. The algorithm described here requires only simple operations that can be implemented on a PC-based combining system, greatly reducing the amount of hardware. Therefore, system reliability and portability will be improved.
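    The abstract does not spell out the estimator, but a standard way to obtain optimum (maximal-ratio) combining weights for a broadband, noise-like source from channel correlations is to take the principal eigenvector of the sample covariance matrix across feed channels. The sketch below illustrates that idea only; the function name, the five-channel setup, and the channel gains are assumptions for illustration, not details from the paper.

```python
import numpy as np

def estimate_combining_weights(samples: np.ndarray) -> np.ndarray:
    """Estimate combining weights for an N-element array feed.

    samples: complex array of shape (N, T) -- T time samples per channel.
    Returns a unit-norm weight vector (the principal eigenvector of the
    sample covariance), which maximizes combined SNR for a single
    correlated source in uncorrelated channel noise.
    """
    R = samples @ samples.conj().T / samples.shape[1]  # sample covariance matrix
    eigvals, eigvecs = np.linalg.eigh(R)               # Hermitian eigendecomposition
    w = eigvecs[:, -1]                                 # eigenvector of the largest eigenvalue
    return w / np.linalg.norm(w)

# Simulated broadband source: a common noise-like signal plus independent channel noise
rng = np.random.default_rng(0)
T = 20000
steering = np.array([1.0, 0.8, 0.6, 0.4, 0.2])        # hypothetical channel gains
source = rng.normal(size=T) + 1j * rng.normal(size=T)
noise = 0.5 * (rng.normal(size=(5, T)) + 1j * rng.normal(size=(5, T)))
x = np.outer(steering, source) + noise
w = estimate_combining_weights(x)
# w should align (up to a phase factor) with the normalized steering vector
```

Because the computation is a single covariance accumulation plus a small eigendecomposition, it is the kind of operation that fits comfortably on a PC-class combining system, consistent with the hardware reduction the abstract describes.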

  18. Video-based convolutional neural networks for activity recognition from robot-centric videos

    NASA Astrophysics Data System (ADS)

    Ryoo, M. S.; Matthies, Larry

    2016-05-01

    In this evaluation paper, we discuss convolutional neural network (CNN)-based approaches for human activity recognition. In particular, we investigate CNN architectures designed to capture temporal information in videos and their applications to the human activity recognition problem. There have been multiple previous works using CNN features for videos. These include CNNs using 3-D XYT convolutional filters, CNNs using pooling operations on top of per-frame image-based CNN descriptors, and recurrent neural networks that learn temporal changes in per-frame CNN descriptors. We experimentally compare several of these representative CNNs using first-person human activity videos. We especially focus on videos from a robot's viewpoint, captured during its operations and human-robot interactions.
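    Of the three CNN families the abstract lists, the simplest to illustrate is pooling on top of per-frame descriptors. The minimal numpy sketch below (the function name and the toy dimensions are illustrative, not from the paper) collapses a sequence of per-frame feature vectors into a single fixed-length clip descriptor with an order-invariant pooling operation:

```python
import numpy as np

def temporal_pool(frame_features: np.ndarray, mode: str = "max") -> np.ndarray:
    """Collapse per-frame CNN descriptors of shape (T, D) into one
    clip-level descriptor of shape (D,).

    This mirrors the 'pooling on top of per-frame descriptors' family of
    video CNNs: the temporal dimension is reduced with max or mean pooling,
    so clips of any length map to the same descriptor size.
    """
    if mode == "max":
        return frame_features.max(axis=0)
    if mode == "mean":
        return frame_features.mean(axis=0)
    raise ValueError(f"unknown pooling mode: {mode}")

# Toy clip: 8 frames, each with a 4-dimensional descriptor
clip = np.arange(32, dtype=float).reshape(8, 4)
v_max = temporal_pool(clip, "max")    # -> [28. 29. 30. 31.]
v_mean = temporal_pool(clip, "mean")  # -> [14. 15. 16. 17.]
```

The fixed output size is what lets a standard classifier sit on top of clips of varying duration; the trade-off, relative to 3-D XYT filters or recurrent models, is that pooling discards the ordering of frames.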

  19. Quantitative evaluation for accumulative calibration error and video-CT registration errors in electromagnetic-tracked endoscopy.

    PubMed

    Liu, Sheena Xin; Gutiérrez, Luis F; Stanton, Doug

    2011-05-01

    Electromagnetic (EM)-guided endoscopy has demonstrated its value in minimally invasive interventions. Accuracy evaluation of the system is of paramount importance to clinical applications. Previously, a number of researchers have reported the results of calibrating the EM-guided endoscope; however, the accumulated errors of an integrated system, which ultimately reflect intra-operative performance, have not been characterized. To fill this gap, we propose a novel system to perform this evaluation and use a 3D metric to reflect the intra-operative procedural accuracy. This paper first presents a portable design and a method for calibration of an electromagnetic (EM)-tracked endoscopy system. An evaluation scheme is then described that uses the calibration results and EM-CT registration to enable real-time data fusion between CT and endoscopic video images. We present quantitative evaluation results for estimating the accuracy of this system using eight internal fiducials as the targets on an anatomical phantom: the error is obtained by comparing the positions of these targets in the CT space, EM space and endoscopy image space. To obtain 3D error estimation, the 3D locations of the targets in the endoscopy image space are reconstructed from stereo views of the EM-tracked monocular endoscope. Thus, the accumulated errors are evaluated in a controlled environment, where the ground truth information is present and systematic performance (including the calibration error) can be assessed. The mean in-plane error was found to be on the order of 2 pixels. To evaluate the data integration performance for virtual navigation, the target video-CT registration error (TRE) is measured as the 3D Euclidean distance between the 3D-reconstructed targets of endoscopy video images and the targets identified in CT. The 3D error (TRE) encapsulates EM-CT registration error, EM-tracking error, fiducial localization error, and optical-EM calibration error. 
We present in this paper our calibration method and a virtual navigation evaluation system for quantifying the overall errors of the intra-operative data integration. We believe this phantom not only offers us good insights to understand the systematic errors encountered in all phases of an EM-tracked endoscopy procedure but also can provide quality control of laboratory experiments for endoscopic procedures before the experiments are transferred from the laboratory to human subjects.
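    The 3D TRE metric described above reduces to a per-target Euclidean distance once both point sets are expressed in the same registered coordinate frame. A minimal sketch of that computation, using hypothetical fiducial coordinates (the function name and the numbers are illustrative, not the phantom data from the paper):

```python
import numpy as np

def target_registration_error(p_video: np.ndarray, p_ct: np.ndarray) -> np.ndarray:
    """Per-target 3D TRE: the Euclidean distance between each 3D-reconstructed
    endoscopy target and the corresponding target identified in CT.
    Both arrays have shape (N, 3) and must already be in a common
    (registered) coordinate frame."""
    return np.linalg.norm(p_video - p_ct, axis=1)

# Hypothetical fiducial positions (mm) after EM-CT registration
ct = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0], [0.0, 10.0, 5.0]])
video = ct + np.array([[1.0, 0.0, 0.0], [0.0, 2.0, 0.0], [0.0, 0.0, 2.0]])
tre = target_registration_error(video, ct)   # per-target distances -> [1. 2. 2.]
mean_tre = tre.mean()
```

As the abstract notes, each of these distances lumps together EM-CT registration error, EM-tracking error, fiducial localization error, and optical-EM calibration error; the metric quantifies their combined effect rather than attributing error to any one stage.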

  20. Altitude deviations : breakdowns of an error tolerant system

    DOT National Transportation Integrated Search

    1991-12-01

    This project was a demonstration of the use of live aerial video recorded from a rotary wing aircraft operated by the Fairfax County, Virginia Police Department and transmitted to ground stations for re-transmission and use by Fairfax County and Virg...

  1. Enumeration of Salmonids in the Okanogan Basin Using Underwater Video, Performance Period: October 2005 (Project Inception) - 31 December 2006.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, Peter N.; Rayton, Michael D.; Nass, Bryan L.

    2007-06-01

    The Confederated Tribes of the Colville Reservation (Colville Tribes) identified the need for collecting baseline census data on the timing and abundance of adult salmonids in the Okanogan River Basin in order to determine basin and tributary-specific spawner distributions, evaluate the status and trends of natural salmonid production in the basin, document local fish populations, and augment existing fishery data. This report documents the design, installation, operation and evaluation of mainstem and tributary video systems in the Okanogan River Basin. The species-specific data collected by these fish enumeration systems are presented along with an evaluation of the operation of a facility that provides a count of fish using an automated method. Information collected by the Colville Tribes Fish & Wildlife Department, specifically the Okanogan Basin Monitoring and Evaluation Program (OBMEP), is intended to provide a relative abundance indicator for anadromous fish runs migrating past Zosel Dam and is not intended as an absolute census count. Okanogan Basin Monitoring and Evaluation Program collected fish passage data between October 2005 and December 2006. Video counting stations were deployed and data were collected at two locations in the basin: on the mainstem Okanogan River at Zosel Dam near Oroville, Washington, and on Bonaparte Creek, a tributary to the Okanogan River, in the town of Tonasket, Washington. Counts at Zosel Dam between 10 October 2005 and 28 February 2006 are considered partial, pilot year data as they were obtained from the operation of a single video array on the west bank fishway, and covered only a portion of the steelhead migration. A complete description of the apparatus and methodology can be found in 'Fish Enumeration Using Underwater Video Imagery - Operational Protocol' (Nass 2007). At Zosel Dam, totals of 57 and 481 adult Chinook salmon were observed with the video monitoring system in 2005 and 2006, respectively. 
Run timing for Chinook in 2006 indicated that peak passage occurred in early October and daily peak passage was noted on 5 October when 52 fish passed the dam. Hourly passage estimates of Chinook salmon counts for 2005 and 2006 at Zosel Dam revealed a slight diel pattern as Chinook passage events tended to remain low from 1900 hours to 0600 hours relative to other hours of the day. Chinook salmon showed a slight preference for passing the dam through the video chutes on the east bank (52%) relative to the west bank (48%). A total of 48 adult sockeye salmon in 2005 and 19,245 in 2006 were counted passing through the video chutes at Zosel Dam. The 2006 run timing pattern was characterized by a large peak in passage from 3 August through 10 August when 17,698 fish (92% of total run observed for the year) were observed passing through the video chutes. The daily peak of 5,853 fish occurred on 4 August. Hourly passage estimates of sockeye salmon counts for 2005 and 2006 at the dam showed a strong diel pattern with increased passage during nighttime hours relative to daytime hours. Sockeye showed a strong preference for passing Zosel Dam on the east bank (72%) relative to the west bank (28%). A total of 298 adult upstream-migrating steelhead were counted at Zosel Dam in 2005 and 2006, representing the 2006 cohort based on passage data from 5 October 2005 through 15 July 2006. Eighty-seven percent (87%) of the total steelhead observed passed the dam between 23 March and 25 April with a peak passage occurring on 6 April when 31 fish were observed. Steelhead passage at Zosel Dam exhibited no diel pattern. In contrast to both Chinook and sockeye salmon, steelhead were shown to have a preference for passing the dam on the west bank (71%) relative to the east bank (29%). Both Chinook and sockeye passage at Zosel Dam were influenced by Okanogan River water temperature. 
When water temperatures peaked in late July (daily mean exceeded 24 °C and daily maximum exceeded 26.5 °C), Chinook and sockeye counts went to zero. A subsequent decrease in water temperature resulted in sharp increases in both Chinook and sockeye passage. A total of six steelhead were observed with the video monitoring system at Bonaparte Creek in 2006, with three passage events occurring on 29 March and one each on 20, 21, and 23 April. This system was operational for only a portion of the migration.

  2. A Framework of Simple Event Detection in Surveillance Video

    NASA Astrophysics Data System (ADS)

    Xu, Weiguang; Zhang, Yafei; Lu, Jianjiang; Tian, Yulong; Wang, Jiabao

    Video surveillance is playing an increasingly important role in social life. Real-time alerting of threatening events and searching for interesting content in large-scale stored video footage require a human operator to pay full attention to the monitor for long periods. This labor-intensive mode has limited the effectiveness and efficiency of such systems. A framework for simple event detection is presented to advance the automation of video surveillance. An improved inner key-point matching approach is used to compensate for background motion in real time; frame differencing is used to detect the foreground; HOG-based classifiers are used to classify foreground objects into people and cars; and mean-shift is used to track the recognized objects. Events are detected based on predefined rules. The maturity of the algorithms guarantees the robustness of the framework, and the improved approach and easily checked rules enable the framework to work in real time. Future work is also discussed.
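    As a concrete illustration of the frame-differencing step in the pipeline above, the toy sketch below flags pixels whose intensity change between consecutive frames exceeds a threshold. In the actual framework this would follow key-point-based motion compensation and precede HOG classification; the function name, threshold, and synthetic frames here are assumptions for illustration.

```python
import numpy as np

def foreground_mask(prev: np.ndarray, curr: np.ndarray, thresh: float = 25.0) -> np.ndarray:
    """Frame-difference foreground detection: mark pixels whose absolute
    intensity change between two consecutive (motion-compensated) frames
    exceeds `thresh`. Real systems would additionally clean the mask with
    morphological filtering before classifying the blobs."""
    diff = np.abs(curr.astype(float) - prev.astype(float))
    return diff > thresh

# Synthetic grayscale frames: a bright 3x3 "object" appears in the second frame
prev = np.zeros((10, 10), dtype=np.uint8)
curr = prev.copy()
curr[4:7, 4:7] = 200
mask = foreground_mask(prev, curr)
# mask is True exactly on the 9 changed pixels
```

The connected regions of such a mask are the candidate foreground objects handed to the person/car classifiers and the mean-shift tracker.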

  3. Design of a video system providing optimal visual information for controlling payload and experiment operations with television

    NASA Technical Reports Server (NTRS)

    1975-01-01

    A program was conducted which included the design of a set of simplified simulation tasks, design of apparatus and breadboard TV equipment for task performance, and the implementation of a number of simulation tests. Performance measurements were made under controlled conditions and the results analyzed to permit evaluation of the relative merits (effectiveness) of various TV systems. Burden factors were subsequently generated for each TV system to permit tradeoff evaluation of system characteristics against performance. For the general remote operation mission, the 2-view system is recommended. This system is characterized, and the corresponding equipment specifications were generated.

  4. Introduction: Intradural Spinal Surgery video supplement.

    PubMed

    McCormick, Paul C

    2014-09-01

    This Neurosurgical Focus video supplement contains detailed narrated videos of a broad range of intradural pathology such as neoplasms, including intramedullary, extramedullary, and dumbbell tumors, vascular malformations, functional disorders, and rare conditions that are often overlooked or misdiagnosed such as arachnoid cysts, ventral spinal cord herniation, and dorsal arachnoid web. The intent of this supplement is to provide meaningful educational and instructional content at all levels of training and practice. As such, the selected video submissions each provide a comprehensive detailed narrative description and coordinated video that contains the entire spectrum of relevant information including imaging, operative setup and positioning, and exposure, as well as surgical strategies, techniques, and sequencing toward the safe and effective achievement of the operative objective. This level of detail often necessitated a more lengthy video duration than is typically presented in oral presentations or standard video clips from peer-reviewed publications. Unfortunately, space limitations precluded the inclusion of several other excellent video submissions in this supplement. While most videos in this supplement reflect standard operative approaches and techniques, there are also submissions that describe innovative exposures and techniques that have expanded surgical options such as ventral approaches, stereotactic guidance, and minimally invasive exposures. There is some redundancy in both the topics and techniques, both to underscore fundamental surgical principles and to provide complementary perspective from different surgeons. It has been my privilege to serve as guest editor for this video supplement and I would like to extend my appreciation to Mark Bilsky, Bill Krauss, and Sander Connolly for reviewing the large number of submitted videos. 
Most of all, I would like to thank the authors for their skill and effort in the preparation of the outstanding videos that constitute this video supplement.

  5. On the development of new SPMN diurnal video systems for daylight fireball monitoring

    NASA Astrophysics Data System (ADS)

    Madiedo, J. M.; Trigo-Rodríguez, J. M.; Castro-Tirado, A. J.

    2008-09-01

    Daylight fireball video monitoring. High-sensitivity video devices are commonly used for the study of the activity of meteor streams during the night. These provide useful data for the determination, for instance, of radiant, orbital and photometric parameters ([1] to [7]). With this aim, during 2006 three automated video stations supported by Universidad de Huelva were set up in Andalusia within the framework of the SPanish Meteor Network (SPMN). These are endowed with 8-9 high-sensitivity wide-field video cameras that achieve a meteor limiting magnitude of about +3. These stations have increased the coverage performed by the low-scan all-sky CCD systems operated by the SPMN and, besides, achieve a time accuracy of about 0.01 s for determining the appearance of meteor and fireball events. Despite these nocturnal monitoring efforts, we realised the need to set up stations for daylight fireball detection. Such effort was also motivated by the two recent meteorite-dropping events of Villalbeto de la Peña [8,9] and Puerto Lápice [10]. Although the Villalbeto de la Peña event was casually videotaped and photographed, no direct pictures or videos were obtained for the Puerto Lápice event. Consequently, in order to perform a continuous recording of daylight fireball events, we set up new automated systems based on CCD video cameras. However, the development of these video stations implies several issues with respect to nocturnal systems that must be properly solved in order to achieve optimal operation. The first of these video stations, also supported by the University of Huelva, was set up in Sevilla (Andalusia) during May 2007. But, of course, fireball association is unequivocal only in those cases when two or more stations recorded the fireball, and when consequently the geocentric radiant is accurately determined. 
With this aim, a second diurnal video station is being set up in Andalusia in the facilities of Centro Internacional de Estudios y Convenciones Ecológicas y Medioambientales (CIECEM, University of Huelva), in the environment of Doñana Natural Park (Huelva province). In this way, both stations, which are separated by a distance of 75 km, will work as a double video station system in order to provide trajectory and orbit information of major bolides and, thus, increase the chance of meteorite recovery in the Iberian Peninsula. The new diurnal SPMN video stations are endowed with different models of Mintron cameras (Mintron Enterprise Co., LTD). These are high-sensitivity devices that employ a colour 1/2" Sony interline transfer CCD image sensor. Aspherical lenses are attached to the video cameras in order to maximize image quality. However, the use of fast lenses is not a priority here: while most of our nocturnal cameras use f0.8 or f1.0 lenses in order to detect meteors as faint as magnitude +3, diurnal systems employ in most cases f1.4 to f2.0 lenses. Their focal length ranges from 3.8 to 12 mm to cover different atmospheric volumes. The cameras are arranged in such a way that the whole sky is monitored from every observing station. (Figure 1: a daylight event recorded from Sevilla on May 26, 2008 at 4h30m05.4 ±0.1s UT.) The way our diurnal video cameras work is similar to the operation of our nocturnal systems [1]. Thus, diurnal stations are automatically switched on and off at sunrise and sunset, respectively. The images, taken at 25 fps with a resolution of 720x576 pixels, are continuously sent to PC computers through a video capture device. The computers run software (UFOCapture, by SonotaCo, Japan) that automatically registers meteor trails and stores the corresponding video frames on hard disk. 
Besides, before the signal from the cameras reaches the computers, a video time inserter that employs a GPS device (KIWI-OSD, by PFD Systems) inserts time information on every video frame. This allows us to measure time in a precise way (about 0.01 sec.) along the whole fireball path. (EPSC Abstracts, Vol. 3, EPSC2008-A-00319, European Planetary Science Congress 2008.) However, one of the issues with respect to nocturnal observing stations is the high number of false detections as a consequence of several factors: higher activity of birds and insects, reflection of sunlight on planes and helicopters, etc. Sometimes these false events follow a pattern very similar to fireball trails, which makes the use of a second station absolutely necessary in order to discriminate between them. Another key issue is related to the passage of the Sun across the field of view of some of the cameras. In fact, special care is necessary here to avoid any damage to the CCD sensor. Besides, depending on atmospheric conditions (dust or moisture, for instance), the Sun may saturate most of the video frame. To solve this, our automated system determines which camera is pointing towards the Sun at a given moment and disconnects it. As the cameras are endowed with autoiris lenses, disconnecting a camera means that the optics is fully closed and, so, the CCD sensor is protected. This, of course, means that when this happens the atmospheric volume covered by the corresponding camera is not monitored. It must also be taken into account that, in general, operating temperatures are higher for diurnal cameras. This results in higher thermal noise and, so, poses some difficulties for the detection software. To minimize this effect, it is necessary to employ CCD video cameras with a proper signal-to-noise ratio. Refrigeration of the CCD sensor with, for instance, a Peltier system can also be considered. 
The astrometric reduction procedure is also somewhat different for daytime events: it requires that reference objects are located within the field of view of every camera in order to calibrate the corresponding images. This is done by allowing every camera to capture distant buildings that, by means of this calibration, allow us to obtain the equatorial coordinates of the fireball along its path by measuring its corresponding X and Y positions on every video frame. Such calibration can be performed from star positions measured on nocturnal images taken with the same cameras. Once made, if the cameras are not moved it is possible to estimate the equatorial coordinates of any future fireball event. We don't use any software for automatic astrometry of the images. This crucial step is made via direct measurements of the pixel position, as in all our previous work. Then, from these astrometric measurements, our software estimates the atmospheric trajectory and radiant for each fireball ([10] to [13]). During 2007 and 2008 the SPMN has also set up other diurnal stations based on 1/3" progressive-scan CMOS sensors attached to modified wide-field lenses covering a 120x80 degree FOV. They are placed in Andalusia: El Arenosillo (Huelva), La Mayora (Málaga) and Murtas (Granada). They also have night sensitivity thanks to an infrared cut filter (ICR), which enables the camera to perform well in both high- and low-light conditions in colour as well as provide IR-sensitive black/white video at night. Conclusions. First detections of daylight fireballs by CCD video camera are being achieved in the SPMN framework. Future expansion and setup of new observing stations is currently being planned. The future establishment of additional diurnal SPMN stations will allow an increase in the number of daytime fireballs detected. This will also increase our chance of meteorite recovery.
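    The sun-avoidance logic described above amounts to testing whether the Sun falls inside a camera's field of view and closing the autoiris of any camera that fails the test. A minimal sketch of that geometric check, with hypothetical camera pointings and an externally supplied solar position (in practice the solar azimuth/elevation would come from an ephemeris for the station's location and time):

```python
import numpy as np

def angular_separation_deg(az1, el1, az2, el2):
    """Great-circle angle (degrees) between two sky directions given as
    azimuth/elevation pairs in degrees."""
    az1, el1, az2, el2 = map(np.radians, (az1, el1, az2, el2))
    cosang = (np.sin(el1) * np.sin(el2)
              + np.cos(el1) * np.cos(el2) * np.cos(az1 - az2))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

def cameras_to_disable(cameras, sun_az, sun_el, half_fov_deg):
    """Return the names of cameras whose field of view contains the Sun.
    `cameras` maps name -> (azimuth, elevation) of the camera's optical axis;
    a camera is disabled when the Sun lies within `half_fov_deg` of its axis."""
    return [name for name, (az, el) in cameras.items()
            if angular_separation_deg(az, el, sun_az, sun_el) < half_fov_deg]

# Hypothetical 4-camera station, each camera with a ~30 degree half field of view
cams = {"N": (0, 45), "E": (90, 45), "S": (180, 45), "W": (270, 45)}
disabled = cameras_to_disable(cams, sun_az=85.0, sun_el=40.0, half_fov_deg=30.0)
# only the east-facing camera sees the Sun here
```

With autoiris lenses, "disabling" the flagged camera simply means cutting its power so the iris closes fully, as the abstract describes; the remaining cameras keep monitoring their share of the sky.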

  6. SITHON: An Airborne Fire Detection System Compliant with Operational Tactical Requirements

    PubMed Central

    Kontoes, Charalabos; Keramitsoglou, Iphigenia; Sifakis, Nicolaos; Konstantinidis, Pavlos

    2009-01-01

    In response to the urgent need of fire managers for timely information on fire location and extent, the SITHON system was developed. SITHON is a fully digital thermal imaging system, integrating INS/GPS and a digital camera, designed to provide timely positioned and projected thermal images and video data streams rapidly integrated in the GIS operated by Crisis Control Centres. This article presents in detail the hardware and software components of SITHON, and demonstrates the first encouraging results of test flights over the Sithonia Peninsula in Northern Greece. It is envisaged that the SITHON system will soon be operated onboard various airborne platforms, including fire brigade airplanes and helicopters as well as UAV platforms owned and operated by the Greek Air Forces. PMID:22399963

  7. FIRRE command and control station (C2)

    NASA Astrophysics Data System (ADS)

    Laird, R. T.; Kramer, T. A.; Cruickshanks, J. R.; Curd, K. M.; Thomas, K. M.; Moneyhun, J.

    2006-05-01

    The Family of Integrated Rapid Response Equipment (FIRRE) is an advanced technology demonstration program intended to develop a family of affordable, scalable, modular, and logistically supportable unmanned systems to meet urgent operational force protection needs and requirements worldwide. The near-term goal is to provide the best available unmanned ground systems to the warfighter in Iraq and Afghanistan. The overarching long-term goal is to develop a fully-integrated, layered force protection system of systems for our forward deployed forces that is networked with the future force C4ISR systems architecture. The intent of the FIRRE program is to reduce manpower requirements, enhance force protection capabilities, and reduce casualties through the use of unmanned systems. FIRRE is sponsored by the Office of the Under Secretary of Defense, Acquisitions, Technology and Logistics (OUSD AT&L), and is managed by the Product Manager, Force Protection Systems (PM-FPS). The FIRRE Command and Control (C2) Station supports two operators, hosts the Joint Battlespace Command and Control Software for Manned and Unmanned Assets (JBC2S), and will be able to host Mission Planning and Rehearsal (MPR) software. The C2 Station consists of an M1152 HMMWV fitted with an S-788 TYPE I shelter. The C2 Station employs five 24" LCD monitors for display of JBC2S software [1], MPR software, and live video feeds from unmanned systems. An audio distribution system allows each operator to select between various audio sources including: AN/PRC-117F tactical radio (SINCGARS compatible), audio prompts from JBC2S software, audio from unmanned systems, audio from other operators, and audio from external sources such as an intercom in an adjacent Tactical Operations Center (TOC). A power distribution system provides battery backup for momentary outages. The Ethernet network, audio distribution system, and audio/video feeds are available for use outside the C2 Station.

  8. Getting the Bigger Picture With Digital Surveillance

    NASA Technical Reports Server (NTRS)

    2002-01-01

    Through a Space Act Agreement, Diebold, Inc., acquired the exclusive rights to Glenn Research Center's patented video observation technology, originally designed to accelerate video image analysis for various ongoing and future space applications. Diebold implemented the technology into its AccuTrack digital, color video recorder, a state-of- the-art surveillance product that uses motion detection for around-the- clock monitoring. AccuTrack captures digitally signed images and transaction data in real-time. This process replaces the onerous tasks involved in operating a VCR-based surveillance system, and subsequently eliminates the need for central viewing and tape archiving locations altogether. AccuTrack can monitor an entire bank facility, including four automated teller machines, multiple teller lines, and new account areas, all from one central location.

  9. Operator Performance Support System (OPSS)

    NASA Technical Reports Server (NTRS)

    Conklin, Marlen Z.

    1993-01-01

    In the complex and fast-reaction world of military operations, present technologies, combined with tactical situations, have flooded the operator with assorted information that he is expected to process instantly. As technologies progress, this flow of data and information has both guided and overwhelmed the operator. However, the technologies that have confounded many operators today can also be used to assist them -- thus the Operator Performance Support System. In this paper we propose an operator support station that incorporates the elements of video and image databases, productivity software, interactive computer-based training, hypertext/hypermedia databases, expert programs, and human factors engineering. The Operator Performance Support System will provide the operator with an integrated on-line information/knowledge system that will guide expert or novice to correct system operations. Although the OPSS is being developed for the Navy, the performance of the workforce in today's competitive industry is also of major concern. The concepts presented in this paper, which address ASW systems software design issues, are directly applicable to industry. The OPSS will propose practical applications for more closely aligning the relationships between technical knowledge and equipment operator performance.

  10. Complete thoracoscopic lobectomy for cancer: comparative study of three-dimensional high-definition with two-dimensional high-definition video systems †.

    PubMed

    Bagan, Patrick; De Dominicis, Florence; Hernigou, Jacques; Dakhil, Bassel; Zaimi, Rym; Pricopi, Ciprian; Le Pimpec Barthes, Françoise; Berna, Pascal

    2015-06-01

    Common video systems for video-assisted thoracic surgery (VATS) provide the surgeon with a two-dimensional (2D) image. This study aimed to evaluate the performance of a new three-dimensional high-definition (3D-HD) system in comparison with a two-dimensional high-definition (2D-HD) system when conducting a complete thoracoscopic lobectomy (CTL). This multi-institutional comparative study trialled two video systems, 2D-HD and 3D-HD, used to conduct the same type of CTL. The inclusion criteria were T1N0M0 non-small-cell lung carcinoma (NSCLC) in the left lower lobe, suitable for thoracoscopic resection. The CTL was performed by the same surgeon using either a 3D-HD or 2D-HD system. Eighteen patients with NSCLC were included in the study between January and December 2013: 14 males, 4 females, with a median age of 65.6 years (range: 49-81). The patients were randomized before inclusion into two groups: to undergo surgery with the use of a 2D-HD or 3D-HD system. We compared operating time, drainage duration, hospital stay and the N upstaging rate from the definitive histology. The use of the 3D-HD system significantly reduced the surgical time (by 17%). However, chest-tube drainage, hospital stay, the number of lymph-node stations and upstaging were similar in both groups. The main finding was that the 3D-HD system significantly reduced the surgical time needed to complete the lobectomy. Thus, future integration of 3D-HD systems should improve thoracoscopic surgery and enable more complex resections to be performed. It will also help advance the field of endoscopically assisted surgery. © The Author 2015. Published by Oxford University Press on behalf of the European Association for Cardio-Thoracic Surgery. All rights reserved.

  11. Implementation issues in source coding

    NASA Technical Reports Server (NTRS)

    Sayood, Khalid; Chen, Yun-Chung; Hadenfeldt, A. C.

    1989-01-01

    An edge-preserving image coding scheme that can be operated in both a lossy and a lossless manner was developed. The technique is an extension of the lossless encoding algorithm developed for the Mars Observer spectral data. It can also be viewed as a modification of the DPCM algorithm. A packet video simulator was also developed from an existing modified packet network simulator. The coding scheme for this system is a modification of the mixture block coding (MBC) scheme described in the last report. Coding algorithms for packet video were also investigated.
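
    The DPCM connection can be illustrated with a minimal predictive-coding loop. This is a generic sketch, not the report's actual algorithm: the predictor here is simply the previous reconstructed sample, and the quantizer choice is an assumption.

```python
def dpcm_encode(samples, quantize=None):
    """DPCM sketch: transmit the difference between each sample and a
    prediction (here, the previous reconstructed sample). With
    quantize=None the loop is lossless; a quantizer makes it lossy."""
    residuals = []
    prediction = 0
    for s in samples:
        d = s - prediction
        if quantize is not None:
            d = quantize(d)
        residuals.append(d)
        prediction = prediction + d  # decoder tracks the same prediction
    return residuals

def dpcm_decode(residuals):
    out, prediction = [], 0
    for d in residuals:
        prediction = prediction + d
        out.append(prediction)
    return out
```

    Because the encoder predicts from *reconstructed* samples, encoder and decoder stay in lockstep even when a quantizer is inserted, which is what lets one scheme serve both the lossy and lossless modes.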

  12. Evolution of the Mobile Information SysTem (MIST)

    NASA Technical Reports Server (NTRS)

    Litaker, Harry L., Jr.; Thompson, Shelby; Archer, Ronald D.

    2008-01-01

    The Mobile Information SysTem (MIST) had its origins in the need to determine whether commercial off-the-shelf (COTS) technologies could improve intravehicular activity (IVA) maintenance productivity for International Space Station (ISS) crews. It began with an exploration of head-mounted displays (HMDs), but quickly evolved to include voice recognition, mobile personal computing, and data collection. The unique characteristic of the MIST lies in its mobility: the user wears a vest that contains a mini-computer and supporting equipment, and a headband with attachments for an HMD, lipstick camera, and microphone. Data are then captured directly by the computer running Morae(TM) or similar software for analysis. To date, the MIST system has been tested in numerous environments, including two parabolic flights on NASA's C-9 microgravity aircraft and several mockup facilities ranging from ISS to the Altair Lunar Sortie Lander. Functional strengths include its lightweight and compact design, commonality across systems and environments, and usefulness in remote collaboration. Human factors evaluations have demonstrated the MIST's ability to be worn for long durations (approximately four continuous hours) with no adverse physical deficits, moderate operator compensation, and low workload, as measured by the Corlett-Bishop Discomfort Scale, Cooper-Harper Ratings, and the NASA Task Load Index (TLX), respectively. Additionally, development of the system has spawned several new applications useful in research. For example, by employing only the lipstick camera, microphone, and a compact digital video recorder (DVR), we created a portable, lightweight data collection device. Video is recorded from the participant's point of view (POV) through the use of the camera mounted on the side of the head. Both video and audio are recorded directly into the DVR located on a belt around the waist. These data are then transferred to another computer for video editing and analysis. Another application has been discovered using simulated flight, in which a kneeboard is replaced with the mini-computer and the HMD to project flight paths and glide slopes for lunar ascent. As technologies evolve, so will the system and its applications for research and space system operations.

  13. Automated Generation of Geo-Referenced Mosaics From Video Data Collected by Deep-Submergence Vehicles: Preliminary Results

    NASA Astrophysics Data System (ADS)

    Rzhanov, Y.; Beaulieu, S.; Soule, S. A.; Shank, T.; Fornari, D.; Mayer, L. A.

    2005-12-01

    Many advances in understanding geologic, tectonic, biologic, and sedimentologic processes in the deep ocean are facilitated by direct observation of the seafloor. However, making such observations is both difficult and expensive. Optical systems (e.g., video, still camera, or direct observation) will always be constrained by the severe attenuation of light in the deep ocean, limiting the field of view to distances that are typically less than 10 meters. Acoustic systems can 'see' much larger areas, but at the cost of spatial resolution. Ultimately, scientists want to study and observe deep-sea processes in the same way we do land-based phenomena so that the spatial distribution and juxtaposition of processes and features can be resolved. We have begun development of algorithms that will, in near real-time, generate mosaics from video collected by deep-submergence vehicles. Mosaics consist of >>10 video frames and can cover 100's of square-meters. This work builds on a publicly available still and video mosaicking software package developed by Rzhanov and Mayer. Here we present the results of initial tests of data collection methodologies (e.g., transects across the seafloor and panoramas across features of interest), algorithm application, and GIS integration conducted during a recent cruise to the Eastern Galapagos Spreading Center (0 deg N, 86 deg W). We have developed a GIS database for the region that will act as a means to access and display mosaics within a geospatially-referenced framework. We have constructed numerous mosaics using both video and still imagery and assessed the quality of the mosaics (including registration errors) under different lighting conditions and with different navigation procedures. We have begun to develop algorithms for efficient and timely mosaicking of collected video as well as integration with navigation data for georeferencing the mosaics. 
Initial results indicate that operators must be properly versed in the control of the video systems as well as maintaining vehicle attitude and altitude in order to achieve the best results possible.
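
    The core step of such mosaicking is pairwise registration of overlapping frames. As a translation-only simplification (the actual Rzhanov-Mayer software additionally handles rotation, scaling, and blending), frame-to-frame shifts can be sketched with phase correlation:

```python
import numpy as np

def phase_correlation_shift(a, b):
    """Estimate the integer translation between two overlapping frames
    via phase correlation. Returns (dy, dx) such that
    np.roll(b, (dy, dx), axis=(0, 1)) best aligns b with a."""
    F = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    r = np.fft.ifft2(F / (np.abs(F) + 1e-9)).real  # normalized cross-power
    dy, dx = np.unravel_index(np.argmax(r), r.shape)
    # wrap the circular shifts into a signed range
    if dy > a.shape[0] // 2:
        dy -= a.shape[0]
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    return dy, dx
```

    Chaining such pairwise offsets (and anchoring them to vehicle navigation data) is what places each frame into a georeferenced mosaic.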

  14. The Operating Technician's Role in Video Distance Learning.

    ERIC Educational Resources Information Center

    Olesinski, Raymond L.; And Others

    Operating technicians play a number of roles in video, or televised, distance learning programs, the most obvious being the operation and support of the technology itself. Very little information exists, however, about the non-technical activities of technicians that may influence the instruction process. This paper describes these activities…

  15. Snapshot hyperspectral fovea vision system (HyperVideo)

    NASA Astrophysics Data System (ADS)

    Kriesel, Jason; Scriven, Gordon; Gat, Nahum; Nagaraj, Sheela; Willson, Paul; Swaminathan, V.

    2012-06-01

    The development and demonstration of a new snapshot hyperspectral sensor is described. The system is a significant extension of the four-dimensional imaging spectrometer (4DIS) concept, which resolves all four dimensions of hyperspectral imaging data (2D spatial, spectral, and temporal) in real time. The new sensor, dubbed "4×4DIS", uses a single fiber-optic reformatter that feeds into four separate, miniature visible to near-infrared (VNIR) imaging spectrometers, providing significantly better spatial resolution than previous systems. Full data cubes are captured in each frame period without scanning, i.e., "HyperVideo". The current system operates up to 30 Hz (i.e., 30 cubes/s), has 300 spectral bands from 400 to 1100 nm (~2.4 nm resolution), and a spatial resolution of 44×40 pixels. An additional 1.4 Megapixel video camera provides scene context and effectively sharpens the spatial resolution of the hyperspectral data. Essentially, the 4×4DIS provides a 2D spatially resolved grid of 44×40 = 1760 separate spectral measurements every 33 ms, which is overlaid on the detailed spatial information provided by the context camera. The system can use a wide range of off-the-shelf lenses and can either be operated so that the fields of view match, or in a "spectral fovea" mode, in which the 4×4DIS system uses narrow field-of-view optics and is cued by a wider field-of-view context camera. Unlike other hyperspectral snapshot schemes, which require intensive computations to deconvolve the data (e.g., the Computed Tomographic Imaging Spectrometer), the 4×4DIS requires only a linear remapping, enabling real-time display and analysis. The system concept has a range of applications including biomedical imaging, missile defense, infrared countermeasure (IRCM) threat characterization, and ground-based remote sensing.
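
    The "linear remapping" step can be sketched as a fixed lookup table that routes each detector pixel to its (row, column, band) cube location. The table layout and names below are assumptions for illustration; in the real instrument the map would come from optical calibration of the fiber reformatter.

```python
import numpy as np

def build_cube(frame, pixel_map, cube_shape):
    """Remap one detector frame into a hyperspectral cube.
    pixel_map holds one (row, col, band) index triple per detector
    pixel, computed once during calibration; per frame, the remap is
    a single fancy-indexed assignment (hence real-time capable)."""
    cube = np.zeros(cube_shape, dtype=frame.dtype)
    ys, xs, bands = pixel_map
    cube[ys, xs, bands] = frame.ravel()
    return cube
```

    Because the map never changes between frames, each 33 ms frame costs only one gather/scatter pass, unlike deconvolution-based snapshot schemes.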

  16. A method of operation scheduling based on video transcoding for cluster equipment

    NASA Astrophysics Data System (ADS)

    Zhou, Haojie; Yan, Chun

    2018-04-01

    Real-time video transcoding clusters face massive growth in the number of video jobs and wide diversity in resolution and bit rate. This paper analyzes the characteristics of the current mainstream task-scheduling algorithms for real-time video transcoding clusters and, combining them with the characteristics of the cluster equipment, proposes a task delay scheduling algorithm. By briefly delaying dispatch when an operation instruction is received, the algorithm lets the cluster generate and issue the job queue more effectively. Finally, a small real-time video transcoding cluster is constructed to analyze the computational cost, running time, resource occupation and other aspects of the various algorithms in operation scheduling. The experimental results show that, compared with traditional cluster task-scheduling algorithms, the task delay scheduling algorithm is more flexible and efficient.
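
    The abstract does not spell out the delay policy, so the following is only a plausible sketch of the general idea behind delay scheduling: hold arriving transcode jobs briefly so a whole batch can be ordered and placed on the least-loaded node, instead of dispatching each job on arrival. The batching window, longest-job-first order, and all names are assumptions.

```python
import heapq

def delayed_schedule(jobs, nodes, window=4):
    """Assign (name, cost) transcode jobs to `nodes` worker nodes.
    Jobs are held in batches of up to `window`; each batch is sorted
    longest-first, then greedily placed on the least-loaded node."""
    load = [(0.0, n) for n in range(nodes)]  # (accumulated cost, node id)
    heapq.heapify(load)
    placement = {}
    for i in range(0, len(jobs), window):
        batch = sorted(jobs[i:i + window], key=lambda j: -j[1])
        for name, cost in batch:
            l, n = heapq.heappop(load)       # least-loaded node
            placement[name] = n
            heapq.heappush(load, (l + cost, n))
    return placement
```

    Delaying dispatch trades a small added latency for a better-balanced queue, which is the flexibility/efficiency trade the paper's experiments examine.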

  17. Integrating critical interface elements for intuitive single-display aviation control of UAVs

    NASA Astrophysics Data System (ADS)

    Cooper, Joseph L.; Goodrich, Michael A.

    2006-05-01

    Although advancing levels of technology allow UAV operators to give increasingly complex commands with expanding temporal scope, it is unlikely that the need for immediate situation awareness and local, short-term flight adjustment will ever be completely superseded. Local awareness and control are particularly important when the operator uses the UAV to perform a search or inspection task. There are many different tasks which would be facilitated by search and inspection capabilities of a camera-equipped UAV. These tasks range from bridge inspection and news reporting to wilderness search and rescue. The system should be simple, inexpensive, and intuitive for non-pilots. An appropriately designed interface should (a) provide a context for interpreting video and (b) support UAV tasking and control, all within a single display screen. In this paper, we present and analyze an interface that attempts to accomplish this goal. The interface utilizes a georeferenced terrain map rendered from publicly available altitude data and terrain imagery to create a context in which the location of the UAV and the source of the video are communicated to the operator. Rotated and transformed imagery from the UAV provides a stable frame of reference for the operator and integrates cleanly into the terrain model. Simple icons overlaid onto the main display provide intuitive control and feedback when necessary but fade to a semi-transparent state when not in use to avoid distracting the operator's attention from the video signal. With various interface elements integrated into a single display, the interface runs nicely on a small, portable, inexpensive system with a single display screen and simple input device, but is powerful enough to allow a single operator to deploy, control, and recover a small UAV when coupled with appropriate autonomy. As we present elements of the interface design, we will identify concepts that can be leveraged into a large class of UAV applications.

  18. Use of a Proximity Sensor Switch for "Hands Free" Operation of Computer-Based Video Prompting by Young Adults with Moderate Intellectual Disability

    ERIC Educational Resources Information Center

    Ivey, Alexandria N.; Mechling, Linda C.; Spencer, Galen P.

    2015-01-01

    In this study, the effectiveness of a "hands free" approach for operating video prompts to complete multi-step tasks was measured. Students advanced the video prompts by using a motion (hand wave) over a proximity sensor switch. Three young adult females with a diagnosis of moderate intellectual disability participated in the study.…

  19. Frequency of distracting tasks people do while driving : an analysis of the ACAS FOT data.

    DOT National Transportation Integrated Search

    2007-06-01

    This report describes further analysis of data from the advanced collision avoidance system (ACAS) field operational test, a naturalistic driving study. To determine how distracted and nondistracted driving differ, a stratified sample of 2,914 video ...

  20. Video Vehicle Detector Verification System (V2DVS) operators manual and project final report.

    DOT National Transportation Integrated Search

    2012-03-01

    The accurate detection of the presence, speed and/or length of vehicles on roadways is recognized as critical for : effective roadway congestion management and safety. Vehicle presence sensors are commonly used for traffic : volume measurement and co...

  1. 75 FR 36390 - Notice of Public Information Collection(s) Being Submitted for Review and Approval to the Office...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-06-25

    ... the local franchising authority may file a complaint with the Commission, pursuant to our dispute resolution procedures set forth in Sec. 76.1514, if the open video system operator and the local franchising...

  2. A demonstrator for an integrated subway protection system

    NASA Astrophysics Data System (ADS)

    Detoma, E.; Capetti, P.; Casati, G.; Billington, S.

    2008-04-01

    In 2006 SEPA carried out the installation and testing of a demonstrator for an integrated subway protection system at a new subway station in the Naples (Italy) metropolitan area. Protection of a subway system is a difficult task given the number of passengers transported every day. The demonstrator was limited to non-intrusive detection techniques so as not to impair the passenger flow into the station. It integrates several technologies and products that have been developed by SEPA or are already available on the market (MKS Instruments,...). The main purpose is to provide detection capabilities for attempts to introduce radioactive substances into the subway station, in order to foil possible attempts to place a dirty bomb, and threat detection and identification following the release of chemical agents. The system integrates additional sensors, such as video surveillance cameras and air-flow sensing, to complement the basic sensor suite. The need to protect sensitive installations such as subway stations has been highlighted by the series of terrorist attacks carried out in recent years on the London subway. However, given the number of passengers of a metro system, it is impossible to propose security techniques operating in ways similar to the screening of passengers in airports. Passenger screening and threat detection and identification must be quick, non-intrusive and capable of handling a large number of passengers to be applicable to mass transit systems. In 2005 SEPA, a small company operating in the field of train video-surveillance systems and radiation detectors, started developing an integrated system to provide comprehensive protection to subway stations, based on readily available or off-the-shelf components, in order to quickly develop a reliable system with available technology. We ruled out at the beginning any new development in order to speed up the fielding of the system in less than one year.
    The system was developed with commercial sensors and deployed in a new station of the Naples metropolitan transit system in Mugnano. The station was particularly suitable for the demonstration since it is a new station that includes air-venting control, water barriers (for fire and smoke containment) and a complete SCADA system to integrate technical and video-surveillance operations. In order to protect the subway, we tackled four basic technologies, all readily available in-house or on the market:
    - radiation detection, to detect the introduction into the station of radionuclides that may be dispersed by a conventional explosive (a "dirty" bomb);
    - chemical-agent detection and identification (after release), complemented with air speed and direction sensors to estimate, track and predict the contamination plume;
    - video surveillance, integrated with the SCADA system and already available in the station.

  3. Utilizing Simulation-Based Training of Video Clip Instruction for the Store Service Operations Practice Course

    ERIC Educational Resources Information Center

    Lin, Che-Hung; Yen, Yu-Ren; Wu, Pai-Lu

    2015-01-01

    The aim of this study was to develop a store service operations practice course based on simulation-based training of video clip instruction. The action research of problem-solving strategies employed for teaching are by simulated store operations. The counter operations course unit used as an example, this study developed 4 weeks of subunits for…

  4. Test Operations Procedure (TOP) 5-2-521 Pyrotechnic Shock Test Procedures

    DTIC Science & Technology

    2007-11-20

    Clipping will produce a signal that resembles a square wave . (2) Filters are used to limit the frequency bandwidth of the signal . Low pass filters...video systems permit observation of explosive items under test. c. Facilities to perform non-destructive inspections such as x-ray, ultrasonic , magna...test. (1) Accelerometers (2) Signal Conditioners (3) Digital Recording System (4) Data Processing System with hardcopy output

  5. Wrap-Around Out-the-Window Sensor Fusion System

    NASA Technical Reports Server (NTRS)

    Fox, Jeffrey; Boe, Eric A.; Delgado, Francisco; Secor, James B.; Clark, Michael R.; Ehlinger, Kevin D.; Abernathy, Michael F.

    2009-01-01

    The Advanced Cockpit Evaluation System (ACES) includes communication, computing, and display subsystems, mounted in a van, that synthesize out-the-window views to approximate the views of the outside world as they would be seen from the cockpit of a crewed spacecraft or aircraft, or from the remote-control station of a ground vehicle or UAV (unmanned aerial vehicle). The system includes five flat-panel display units arranged approximately in a semicircle around an operator, like cockpit windows. The scene displayed on each panel represents the view through the corresponding cockpit window. Each display unit is driven by a personal computer equipped with a video-capture card that accepts live input from any of a variety of sensors (typically, visible and/or infrared video cameras). Software running in the computers blends the live video images with synthetic images that could be generated, for example, from heads-up-display outputs, waypoints, corridors, or from satellite photographs of the same geographic region. Data from a Global Positioning System receiver and an inertial navigation system aboard the remote vehicle are used by the ACES software to keep the synthetic and live views in registration. If the live image were to fail, the synthetic scenes could still be displayed to maintain situational awareness.
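
    At its core, keeping synthetic overlays registered with live video reduces to projecting georeferenced points through the vehicle pose reported by GPS/INS. A minimal pinhole-camera sketch follows; all symbols and the flat local-coordinate simplification are assumptions, not ACES internals:

```python
import numpy as np

def project_point(p_world, cam_pos, R_cam, f, cx, cy):
    """Project a georeferenced 3D point into image coordinates.
    cam_pos comes from GPS, R_cam (world-to-camera rotation) from the
    inertial navigation system; f, cx, cy are camera intrinsics.
    Returns (u, v) pixel coordinates, or None if behind the camera."""
    pc = R_cam @ (p_world - cam_pos)  # point in camera coordinates
    if pc[2] <= 0:
        return None
    u = f * pc[0] / pc[2] + cx
    v = f * pc[1] / pc[2] + cy
    return u, v
```

    Re-running this projection for every waypoint or corridor vertex each frame is what keeps the synthetic layer locked to the live imagery as the vehicle moves.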

  6. Holodeck: Telepresence Dome Visualization System Simulations

    NASA Technical Reports Server (NTRS)

    Hite, Nicolas

    2012-01-01

    This paper explores the simulation and consideration of different image-projection strategies for the Holodeck, a dome that will be used for highly immersive telepresence operations in future endeavors of the National Aeronautics and Space Administration (NASA). Its visualization system will include a full 360-degree projection onto the dome's interior walls in order to display video streams from both simulations and recorded video. Because humans innately trust their vision to precisely report their surroundings, the Holodeck's visualization system is crucial to its realism. This system will be rigged with an integrated hardware and software infrastructure, namely a system of projectors that will relay to a Graphics Processing Unit (GPU) and computer to both project images onto the dome and correct warping in those projections in real time. Using both Computer-Aided Design (CAD) and ray-tracing software, virtual models of various dome/projector geometries were created and simulated via tracking and analysis of virtual light sources, leading to the selection of two possible configurations for installation. Research into image warping and the generation of dome-ready video content was also conducted, including generation of fisheye images, distortion correction, and the development of a reliable content-generation pipeline.

  7. Piloting Telepresence-Enabled Education and Outreach Programs from a UNOLS Ship - Live Interactive Broadcasts from the R/V Endeavor

    NASA Astrophysics Data System (ADS)

    Pereira, M.; Coleman, D.; Donovan, S.; Sanders, R.; Gingras, A.; DeCiccio, A.; Bilbo, E.

    2016-02-01

    The University of Rhode Island's R/V Endeavor was recently equipped with a new satellite telecommunication system and a telepresence system to enable live ship-to-shore broadcasts and remote user participation through the Inner Space Center. The Rhode Island Endeavor Program, which provides state-funded ship time to support local oceanographic research and education, funded a 5-day cruise off the Rhode Island coast that involved a multidisciplinary team of scientists, engineers, students, educators and video producers. Using two remotely operated vehicle (ROV) systems, several dives were conducted to explore various shipwrecks including the German WWII submarine U-853. During the cruise, a team of URI ocean engineers supported ROV operations and performed engineering tests of a new manipulator. Colleagues from the United States Coast Guard Academy operated a small ROV to collect imagery and environmental data around the wreck sites. Additionally, a team of engineers and oceanographers from URI tested a new acoustic sound source and small acoustic receivers developed for a fish tracking experiment. The video producers worked closely with the participating scientists, students and two high school science teachers to communicate the oceanographic research during live educational broadcasts streamed into Rhode Island classrooms, to the public Internet, and directly to Rhode Island Public Television. This work contributed to increasing awareness of possible career pathways for the Rhode Island K-12 population, taught about active oceanographic research projects, and engaged the public in scientific adventures at sea. The interactive nature of the broadcasts included live responses to questions submitted online and live updates and feedback using social media tools. This project characterizes the power of telepresence and video broadcasting to engage diverse learners and exemplifies innovative ways to utilize social media and the Internet to draw a varied audience.

  8. Real-time 3D visualization of volumetric video motion sensor data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carlson, J.; Stansfield, S.; Shawver, D.

    1996-11-01

    This paper addresses the problem of improving detection, assessment, and response capabilities of security systems. Our approach combines two state-of-the-art technologies: volumetric video motion detection (VVMD) and virtual reality (VR). This work capitalizes on the ability of VVMD technology to provide three-dimensional (3D) information about the position, shape, and size of intruders within a protected volume. The 3D information is obtained by fusing motion detection data from multiple video sensors. The second component involves the application of VR technology to display information relating to the sensors and the sensor environment. VR technology enables an operator, or security guard, to be immersed in a 3D graphical representation of the remote site. VVMD data is transmitted from the remote site via ordinary telephone lines. There are several benefits to displaying VVMD information in this way. Because the VVMD system provides 3D information and because the sensor environment is a physical 3D space, it seems natural to display this information in 3D. Also, the 3D graphical representation depicts essential details within and around the protected volume in a natural way for human perception. Sensor information can also be more easily interpreted when the operator can 'move' through the virtual environment and explore the relationships between the sensor data, objects and other visual cues present in the virtual environment. By exploiting the powerful ability of humans to understand and interpret 3D information, we expect to improve the means for visualizing and interpreting sensor information, allow a human operator to assess a potential threat more quickly and accurately, and enable a more effective response. This paper will detail both the VVMD and VR technologies and will discuss a prototype system based upon their integration.
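
    Fusing 2D motion masks from multiple cameras into a 3D volume can be sketched visual-hull style: a voxel counts as occupied only if it projects into the motion region of every sensor. The toy below assumes two orthogonal orthographic views; a real VVMD system would use calibrated perspective cameras.

```python
import numpy as np

def fuse_motion(mask_xy, mask_xz):
    """Intersect two orthographic motion masks into a boolean voxel
    volume. mask_xy is the view along z (shape nx*ny); mask_xz is the
    view along y (shape nx*nz). A voxel (x, y, z) is occupied only if
    both sensors saw motion at its projection."""
    nx, ny = mask_xy.shape
    _, nz = mask_xz.shape
    vol = np.zeros((nx, ny, nz), dtype=bool)
    for z in range(nz):
        vol[:, :, z] = mask_xy & mask_xz[:, z][:, None]
    return vol
```

    The intersection is what turns flat per-camera detections into the position, shape, and size estimates the abstract describes.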

  9. Complementing Operating Room Teaching With Video-Based Coaching.

    PubMed

    Hu, Yue-Yung; Mazer, Laura M; Yule, Steven J; Arriaga, Alexander F; Greenberg, Caprice C; Lipsitz, Stuart R; Gawande, Atul A; Smink, Douglas S

    2017-04-01

    Surgical expertise demands technical and nontechnical skills. Traditionally, surgical trainees acquired these skills in the operating room; however, operative time for residents has decreased with duty hour restrictions. As in other professions, video analysis may help maximize the learning experience. To develop and evaluate a postoperative video-based coaching intervention for residents. In this mixed methods analysis, 10 senior (postgraduate year 4 and 5) residents were videorecorded operating with an attending surgeon at an academic tertiary care hospital. Each video formed the basis of a 1-hour one-on-one coaching session conducted by the operative attending; although a coaching framework was provided, participants determined the specific content collaboratively. Teaching points were identified in the operating room and the video-based coaching sessions; iterative inductive coding, followed by thematic analysis, was performed. Teaching points made in the operating room were compared with those in the video-based coaching sessions with respect to initiator, content, and teaching technique, adjusting for time. Among 10 cases, surgeons made more teaching points per unit time (63.0 vs 102.7 per hour) while coaching. Teaching in the video-based coaching sessions was more resident centered; attendings were more inquisitive about residents' learning needs (3.30 vs 0.28, P = .04), and residents took more initiative to direct their education (27% [198 of 729 teaching points] vs 17% [331 of 1977 teaching points], P < .001). Surgeons also more frequently validated residents' experiences (8.40 vs 1.81, P < .01), and they tended to ask more questions to promote critical thinking (9.30 vs 3.32, P = .07) and set more learning goals (2.90 vs 0.28, P = .11). 
More complex topics, including intraoperative decision making (mean, 9.70 vs 2.77 instances per hour, P = .03) and failure to progress (mean, 1.20 vs 0.13 instances per hour, P = .04) were addressed, and they were more thoroughly developed and explored. Excerpts of dialogue are presented to illustrate these findings. Video-based coaching is a novel and feasible modality for supplementing intraoperative learning. Objective evaluation demonstrates that video-based coaching may be particularly useful for teaching higher-level concepts, such as decision making, and for individualizing instruction and feedback to each resident.

  10. Expedition Atacama - project AMOS in Chile

    NASA Astrophysics Data System (ADS)

    Tóth, J.; Kaniansky, S.

    2016-01-01

    The Slovak Video Meteor Network has operated since 2009 (Tóth et al., 2011). It currently consists of four semi-automated all-sky video cameras, developed at the Astronomical Observatory in Modra, Comenius University in Bratislava, Slovakia. Two new-generation AMOS (All-sky Meteor Orbit System) cameras have operated fully automatically on the Canary Islands, Tenerife and La Palma, since March 2015 (Tóth et al., 2015). As a logical next step, we plan to cover the southern hemisphere from Chile. We present observational experiences in meteor astronomy from the Atacama Desert and other astronomical sites in Chile. This summary of the observations lists 26 meteor spectra recorded between Nov. 5 and 13, 2015, mostly Taurid meteors, along with single- and double-station meteors as well as the first light from the permanent AMOS stations in Chile.

  11. American Carrier Air Power at the Dawn of a New Century

    DTIC Science & Technology

    2005-01-01

    Systems, Office of the Secretary of Defense (Operational Test and Evaluation); then–Commander Calvin Craig, OPNAV N81; Captain Kenneth Neubauer and...TACP Tactical Air Control Party TARPS Tactical Air Reconnaissance Pod System TCS Television Camera System TLAM Tomahawk Land-Attack Missile TST Time...store any video imagery acquired by the aircraft’s systems, including the TARPS pod, the pilot’s head-up display (HUD), the Television Camera System (TCS

  12. Direct endoscopic video registration for sinus surgery

    NASA Astrophysics Data System (ADS)

    Mirota, Daniel; Taylor, Russell H.; Ishii, Masaru; Hager, Gregory D.

    2009-02-01

    Advances in computer vision have made possible robust 3D reconstruction of monocular endoscopic video. These reconstructions accurately represent the visible anatomy and, once registered to pre-operative CT data, enable a navigation system to track directly through video, eliminating the need for an external tracking system. Video registration provides the means for a direct interface between an endoscope and a navigation system and allows a shorter chain of rigid-body transformations to be used to solve the patient/navigation-system registration. To solve this registration step we propose a new 3D-3D registration algorithm based on Trimmed Iterative Closest Point (TrICP) and the z-buffer algorithm. The algorithm takes as input a 3D point cloud of relative scale with the origin at the camera center, an isosurface from the CT, and an initial guess of the scale and location. Our algorithm utilizes only the visible polygons of the isosurface from the current camera location during each iteration to minimize the search area of the target region and robustly reject outliers of the reconstruction. We present example registrations in the sinus passage applicable to both sinus surgery and transnasal surgery. To evaluate our algorithm's performance we compare it to registration via Optotrak and report the closest point-to-surface distance error. We show our algorithm has a mean closest-distance error of 0.2268 mm.
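
    The trimmed-ICP core can be sketched compactly: match each source point to its nearest target point, keep only the best fraction of pairs to reject outliers, and solve the rigid update in closed form (Kabsch/SVD). The scale estimation and z-buffer visibility culling that the paper's algorithm adds are omitted here.

```python
import numpy as np

def trimmed_icp(src, dst, trim=0.8, iters=20):
    """Minimal Trimmed ICP sketch. Returns (R, t) such that
    src @ R.T + t approximates dst. Each iteration: nearest-neighbour
    matching, keep the best `trim` fraction of pairs, closed-form
    rigid update via SVD, compose with the running estimate."""
    R, t = np.eye(3), np.zeros(3)
    for _ in range(iters):
        moved = src @ R.T + t
        # brute-force nearest neighbours (fine at sketch scale)
        d2 = ((moved[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        nn = d2.argmin(1)
        err = d2[np.arange(len(src)), nn]
        keep = np.argsort(err)[: int(trim * len(src))]
        p, q = moved[keep], dst[nn[keep]]
        # Kabsch: rigid transform minimizing ||Rk p + tk - q||
        pc, qc = p - p.mean(0), q - q.mean(0)
        U, _, Vt = np.linalg.svd(pc.T @ qc)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        Rk = Vt.T @ D @ U.T
        tk = q.mean(0) - Rk @ p.mean(0)
        R, t = Rk @ R, Rk @ t + tk  # compose with running estimate
    return R, t
```

    The trimming step is what makes the method tolerant of reconstruction outliers; the paper additionally restricts `dst` to the isosurface polygons visible from the current camera pose.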

  13. A Video Game Platform for Exploring Satellite and In-Situ Data Streams

    NASA Astrophysics Data System (ADS)

    Cai, Y.

    2014-12-01

    Exploring the spatiotemporal patterns of moving objects is essential to Earth Observation missions, such as tracking, modeling and predicting the movement of clouds, dust, plumes and harmful algal blooms. Those missions involve high-volume, multi-source, and multi-modal imagery data analysis. Analytical models aim to reveal the inner structure, dynamics, and relationships of things, but they are not necessarily intuitive to humans. Conventional scientific visualization methods are intuitive but limited by manual operations, such as area marking, measurement and alignment of multi-source data, which are expensive and time-consuming. A new video analytics platform has been under development that integrates a video game engine with satellite and in-situ data streams. The system converts Earth Observation data into articulated objects that are mapped from a high-dimensional space to a 3D space. The object-tracking and augmented-reality algorithms highlight the objects' features in colors, shapes and trajectories, creating visual cues for observing dynamic patterns. The head and gesture tracker enables users to navigate the data space interactively. To validate our design, we have used NASA SeaWiFS satellite images of oceanographic remote sensing data and NOAA's in-situ cell count data. Our study demonstrates that the video game system can reduce the size and cost of traditional CAVE systems by two to three orders of magnitude. This system can also be used for satellite mission planning and public outreach.

  14. Internet Voice Distribution System (IVoDS) Utilization in Remote Payload Operations

    NASA Technical Reports Server (NTRS)

    Best, Susan; Bradford, Bob; Chamberlain, Jim; Nichols, Kelvin; Bailey, Darrell (Technical Monitor)

    2002-01-01

    Due to limited crew availability to support science and the large number of experiments to be operated simultaneously, telescience is key to a successful International Space Station (ISS) science program. Crew, operations personnel at NASA centers, and researchers at universities and companies around the world must work closely together to perform scientific experiments on-board ISS. NASA has initiated use of Voice over Internet Protocol (VoIP) to supplement the existing HVoDS mission voice communications system used by researchers. The Internet Voice Distribution System (IVoDS) connects researchers to mission support "loops" or conferences via Internet Protocol networks such as the high-speed Internet2. Researchers use IVoDS software on personal computers to talk with operations personnel at NASA centers. IVoDS also has the capability, if authorized, to allow researchers to communicate with the ISS crew during experiment operations. IVoDS was developed by Marshall Space Flight Center with contractors A2 Technology, Inc., FVC, Lockheed-Martin, and VoIP Group. IVoDS is currently undergoing field-testing, with full deployment for up to 50 simultaneous users expected in 2002. Research is currently being performed to take full advantage of the digital world of personal computers and Internet Protocol networks to qualitatively enhance communications among ISS operations personnel. In addition to the current voice capability, video and data-sharing capabilities are being investigated. Major obstacles being addressed include network bandwidth capacity and strict security requirements. Techniques being investigated to reduce and overcome these obstacles include emerging audio-video protocols and network technologies, including multicast and quality-of-service.

  15. Assessing Caribbean Shallow and Mesophotic Reef Fish Communities Using Baited-Remote Underwater Video (BRUV) and Diver-Operated Video (DOV) Survey Techniques.

    PubMed

    Andradi-Brown, Dominic A; Macaya-Solis, Consuelo; Exton, Dan A; Gress, Erika; Wright, Georgina; Rogers, Alex D

    2016-01-01

    Fish surveys form the backbone of reef monitoring and management initiatives throughout the tropics, and understanding patterns in biases between techniques is crucial if outputs are to address key objectives optimally. Often biases are not consistent across natural environmental gradients such as depth, leading to uncertainty in interpretation of results. Recently there has been much interest in mesophotic reefs (reefs from 30-150 m depth) as refuge habitats from fishing pressure, leading to many comparisons of reef fish communities over depth gradients. Here we compare fish communities using stereo-video footage recorded via baited remote underwater video (BRUV) and diver-operated video (DOV) systems on shallow and mesophotic reefs in the Mesoamerican Barrier Reef, Caribbean. We show inconsistent responses across families, species and trophic groups between methods across the depth gradient. Fish species and family richness were higher using BRUV at both depth ranges, suggesting that BRUV is more appropriate for recording all components of the fish community. Fish length distributions were not different between methods on shallow reefs, yet BRUV recorded more small fish on mesophotic reefs. However, DOV consistently recorded greater relative fish community biomass of herbivores, suggesting that studies focusing on herbivores should consider using DOV. Our results highlight the importance of considering which component of the reef fish community researchers and managers are most interested in surveying when deciding which survey technique to use across natural gradients such as depth.

  16. Assessing Caribbean Shallow and Mesophotic Reef Fish Communities Using Baited-Remote Underwater Video (BRUV) and Diver-Operated Video (DOV) Survey Techniques

    PubMed Central

    Macaya-Solis, Consuelo; Exton, Dan A.; Gress, Erika; Wright, Georgina; Rogers, Alex D.

    2016-01-01

    Fish surveys form the backbone of reef monitoring and management initiatives throughout the tropics, and understanding patterns in biases between techniques is crucial if outputs are to address key objectives optimally. Often biases are not consistent across natural environmental gradients such as depth, leading to uncertainty in interpretation of results. Recently there has been much interest in mesophotic reefs (reefs from 30–150 m depth) as refuge habitats from fishing pressure, leading to many comparisons of reef fish communities over depth gradients. Here we compare fish communities using stereo-video footage recorded via baited remote underwater video (BRUV) and diver-operated video (DOV) systems on shallow and mesophotic reefs in the Mesoamerican Barrier Reef, Caribbean. We show inconsistent responses across families, species and trophic groups between methods across the depth gradient. Fish species and family richness were higher using BRUV at both depth ranges, suggesting that BRUV is more appropriate for recording all components of the fish community. Fish length distributions were not different between methods on shallow reefs, yet BRUV recorded more small fish on mesophotic reefs. However, DOV consistently recorded greater relative fish community biomass of herbivores, suggesting that studies focusing on herbivores should consider using DOV. Our results highlight the importance of considering which component of the reef fish community researchers and managers are most interested in surveying when deciding which survey technique to use across natural gradients such as depth. PMID:27959907

  17. Use of Video Analysis System for Working Posture Evaluations

    NASA Technical Reports Server (NTRS)

    McKay, Timothy D.; Whitmore, Mihriban

    1994-01-01

    In a work environment, it is important to identify and quantify the relationship among work activities, working posture, and workplace design. Working posture may impact the physical comfort and well-being of individuals, as well as performance. The Posture Video Analysis Tool (PVAT) is an interactive menu and button driven software prototype written in Supercard (trademark). Human Factors analysts are provided with a predefined set of options typically associated with postural assessments and human performance issues. Once options have been selected, the program is used to evaluate working posture and dynamic tasks from video footage. PVAT has been used to evaluate postures from Orbiter missions, as well as from experimental testing of prototype glove box designs. PVAT can be used for video analysis in a number of industries, with little or no modification. It can contribute to various aspects of workplace design such as training, task allocations, procedural analyses, and hardware usability evaluations. The major advantage of the video analysis approach is the ability to gather data, non-intrusively, in restricted-access environments, such as emergency and operation rooms, contaminated areas, and control rooms. Video analysis also provides the opportunity to conduct preliminary evaluations of existing work areas.

  18. Telepathology. Long-distance diagnosis.

    PubMed

    Weinstein, R S; Bloom, K J; Rozek, L S

    1989-04-01

    Telepathology is defined as the practice of pathology at a distance, by visualizing an image on a video monitor rather than viewing a specimen directly through a microscope. Components of a telepathology system include the following: (1) a workstation equipped with a high-resolution video camera attached to a remote-controlled light microscope; (2) a pathologist workstation incorporating controls for manipulating the robotic microscope as well as a high-resolution video monitor; and (3) a telecommunications link. Progress has been made in designing and constructing telepathology workstations and fully motorized, computer-controlled light microscopes suitable for telepathology. In addition, components such as video signal digital encoders and decoders that produce remarkably stable, high-resolution images with high color fidelity have been incorporated into the workstations. Resolution requirements for the video microscopy component of telepathology have been formally examined in receiver operator characteristic (ROC) curve analyses. Test-of-concept demonstrations have been completed with the use of geostationary satellites as the broadband communication linkages for 750-line resolution video. Potential benefits of telepathology include providing a means of conveniently delivering pathology services in real-time to remote sites or underserviced areas, time-sharing of pathologists' services by multiple institutions, and increasing accessibility to specialty pathologists.

  19. Modification of the Miyake-Apple technique for simultaneous anterior and posterior video imaging of wet laboratory-based corneal surgery.

    PubMed

    Tan, Johnson C H; Meadows, Howard; Gupta, Aanchal; Yeung, Sonia N; Moloney, Gregory

    2014-03-01

    The aim of this study was to describe a modification of the Miyake-Apple posterior video analysis for the simultaneous visualization of the anterior and posterior corneal surfaces during wet laboratory-based deep anterior lamellar keratoplasty (DALK). A human donor corneoscleral button was affixed to a microscope slide and placed onto a custom-made mounting box. A big bubble DALK was performed on the cornea in the wet laboratory. An 11-diopter intraocular lens was positioned over the aperture of the back camera of an iPhone. This served to video record the posterior view of the corneoscleral button during the big bubble formation. An overhead operating microscope with an attached video camcorder recorded the anterior view during the surgery. The anterior and posterior views of the wet laboratory-based DALK surgery were simultaneously captured and edited using video editing software. The formation of the big bubble can be studied. This video recording camera system has the potential to act as a valuable research and teaching tool in corneal lamellar surgery, especially in the behavior of the big bubble formation in DALK.

  20. Analysis of Preoperative Airway Examination with the CMOS Video Rhino-laryngoscope.

    PubMed

    Tsukamoto, Masanori; Hitosugi, Takashi; Yokoyama, Takeshi

    2017-05-01

    Endoscopy is one of the most useful clinical techniques in difficult airway management. Compared with the fiberoptic endoscope, this compact device is easy to operate and provides a clear image. In this study, we investigated its usefulness in preoperative endoscopic airway examination. Patients undergoing oral maxillofacial surgery were enrolled in this study. We performed preoperative airway examination with an electronic endoscope (the CMOS video rhino-laryngoscope, KARL STORZ Endoscopy Japan, Tokyo). The system is composed of a videoendoscope, a compact video processor, and a video recorder. In addition, the endoscope has a small color complementary metal-oxide-semiconductor (CMOS) chip built into its tip. The outer diameter of the tip of this scope is 3.7 mm. In this study, the electronic endoscope was used for preoperative airway examination in 7 patients. The preoperative airway examination with the electronic endoscope was performed successfully in all patients except one, who had symptoms such as nausea and vomiting during the examination. We could perform preoperative airway examination with excellent visualization and convenient recording of video sequence images with the CMOS video rhino-laryngoscope. It might be an especially useful device for patients with difficult airways.

  1. Automated Rendezvous and Capture System Development and Simulation for NASA

    NASA Technical Reports Server (NTRS)

    Roe, Fred D.; Howard, Richard T.; Murphy, Leslie

    2004-01-01

    The United States does not have an Automated Rendezvous and Capture/Docking (AR&C) capability and is reliant on manned control for rendezvous and docking of orbiting spacecraft. This reliance on the labor-intensive manned interface for control of rendezvous and docking vehicles has a significant impact on the cost of the operation of the International Space Station (ISS) and precludes the use of any U.S. expendable launch capabilities for Space Station resupply. The Soviets have the capability to autonomously dock in space, but their system produces a hard docking with excessive force and contact velocity. Automated Rendezvous and Capture/Docking has been identified as a key enabling technology for the Space Launch Initiative (SLI) Program, DARPA Orbital Express, and other DOD programs. The development and implementation of an AR&C capability can significantly enhance system flexibility, improve safety, and lower the cost of maintaining, supplying, and operating the International Space Station. The Marshall Space Flight Center (MSFC) has conducted pioneering research in the development of an automated rendezvous and capture (or docking) (AR&C) system for U.S. space vehicles. This AR&C system was tested extensively using hardware-in-the-loop simulations in the Flight Robotics Laboratory, and a rendezvous sensor, the Video Guidance Sensor, was developed and successfully flown on the Space Shuttle on flights STS-87 and STS-95, proving the concept of a video-based sensor. Further developments in sensor technology and vehicle and target configuration have led to continued improvements and changes in AR&C system development and simulation. A new Advanced Video Guidance Sensor (AVGS) with target will be utilized on the Demonstration of Autonomous Rendezvous Technologies (DART) flight experiment in 2004.

  2. Microgravity Science Glovebox (MSG), Space Science's Past, Present and Future Aboard the International Space Station (ISS)

    NASA Technical Reports Server (NTRS)

    Spivey, Reggie; Spearing, Scott; Jordan, Lee

    2012-01-01

    The Microgravity Science Glovebox (MSG) is a double rack facility aboard the International Space Station (ISS), which accommodates science and technology investigations in a "workbench"-type environment. The MSG has been operating on the ISS since July 2002 and is currently located in the US Laboratory Module. In fact, the MSG has been used for over 10,000 hours of scientific payload operations and plans to continue for the life of ISS. The facility has an enclosed working volume that is held at a negative pressure with respect to the crew living area. This allows the facility to provide two levels of containment for small parts, particulates, fluids, and gases. This containment approach protects the crew from possible hazardous operations that take place inside the MSG work volume and allows researchers a controlled pristine environment for their needs. Research investigations operating inside the MSG are provided a large 255 liter enclosed work space, 1000 watts of dc power via a versatile supply interface (120, 28, +12, and 5 Vdc), 1000 watts of cooling capability, video and data recording and real-time downlink, ground commanding capabilities, access to ISS Vacuum Exhaust and Vacuum Resource Systems, and gaseous nitrogen supply. These capabilities make the MSG one of the most utilized facilities on ISS. MSG investigations have involved research in cryogenic fluid management, fluid physics, spacecraft fire safety, materials science, combustion, and plant growth technologies. Modifications to the MSG facility are currently under way to expand the capabilities and provide for investigations involving Life Science and Biological research. In addition, the MSG video system is being replaced with a state-of-the-art digital video system with high definition/high speed capabilities and with near real-time downlink capabilities.
This paper will provide an overview of the MSG facility, a synopsis of the research that has already been accomplished in the MSG, and an overview of the facility enhancements that will shortly be available for use by future investigators.

  3. The effect of video review of resident laparoscopic surgical skills measured by self- and external assessment.

    PubMed

    Herrera-Almario, Gabriel E; Kirk, Katherine; Guerrero, Veronica T; Jeong, Kwonho; Kim, Sara; Hamad, Giselle G

    2016-02-01

    Video review of surgical skills is an educational modality that allows trainees to reflect on self-performance. The purpose of this study was to determine whether resident and attending assessments of a resident's laparoscopic performance differ and whether video review changes assessments. Third-year surgery residents were invited to participate. Elective laparoscopic procedures were video recorded. The Global Operative Assessment of Laparoscopic Skills evaluation was completed immediately after the procedure and again 7 to 10 days later by both resident and attending. Scores were compared using t tests. Nine residents participated and 76 video reviews were completed. Residents scored themselves significantly lower than the faculty scores both before and after video review. Resident scores did not change significantly after video review. Attending and resident self-assessment of laparoscopic skills differs and subsequent video review does not significantly affect Global Operative Assessment of Laparoscopic Skills scores. Further studies should evaluate the impact of video review combined with verbal feedback on skill acquisition and assessment. Copyright © 2016 Elsevier Inc. All rights reserved.

  4. Digital Audio: A Sound Design Element.

    ERIC Educational Resources Information Center

    Barron, Ann; Varnadoe, Susan

    1992-01-01

    Discussion of incorporating audio into videodiscs for multimedia educational applications highlights a project developed for the Navy that used digital audio in an interactive video delivery system (IVDS) for training sonar operators. Storage constraints with videodiscs are explained, design requirements for the IVDS are described, and production…

  5. The Global Systems Analysis and Simulation (GLOSAS) Project, and the Global Lecture Hall (GLH) Operating at the Orlando ICEM Conference.

    ERIC Educational Resources Information Center

    Bell, John

    1993-01-01

    Introduces two articles which describe an interactive video conference between North and South America, the Caribbean, Europe, and Scandinavia as part of the International Council for Educational Media (ICEM) 1992 conference. (EAM)

  6. Head-mounted display for use in functional endoscopic sinus surgery

    NASA Astrophysics Data System (ADS)

    Wong, Brian J.; Lee, Jon P.; Dugan, F. Markoe; MacArthur, Carol J.

    1995-05-01

    Since the introduction of functional endoscopic sinus surgery (FESS), the procedure has undergone rapid change, with its evolution keeping pace with technological advances. The advent of low-cost charge-coupled device (CCD) cameras revolutionized the practice and instruction of FESS. Video-based FESS has allowed for documentation of the surgical procedure as well as interactive instruction during surgery. Presently, the technical requirements of video-based FESS include the addition of one or more television monitors positioned strategically in the operating room. Though video monitors have greatly enhanced surgical endoscopy by re-involving nurses and assistants in the actual mechanics of surgery, they require the operating surgeon to be focused on the screen instead of the patient. In this study, we describe the use of a new low-cost liquid crystal display (LCD) based device that functions as a monitor but is mounted on the head on a visor (PT-O1, O1 Products, Westlake Village, CA). This study illustrates the application of these HMD devices to FESS operations. The same surgeon performed the operation in each patient. In one nasal fossa, surgery was performed using conventional video FESS methods. The contralateral side was operated on while wearing the head-mounted video display. The device had adequate resolution for the purposes of FESS. No adverse effects were noted intraoperatively. The results on the patients' ipsilateral and contralateral sides were similar. The visor eliminated significant torsion of the surgeon's neck during the operation while permitting simultaneous viewing of both the patient and the intranasal surgical field.

  7. Da Vinci robot-assisted system for thymectomy: experience of 55 patients in China.

    PubMed

    Jun, Yi; Hao, Li; Demin, Li; Guohua, Dong; Hua, Jing; Yi, Shen

    2014-09-01

    Da Vinci robot-assisted thymectomy has been used in China in the past several years; however, practical experience in performing this approach remains limited. Thus, this study aimed to evaluate the experience of da Vinci robot-assisted thymectomy in China. From June 2010 to December 2012, 55 patients with diseases of the thymus underwent thymectomy using the da Vinci surgical HD robotic system. The clinical data of the da Vinci robot-assisted thymectomies were compared with the data of video-assisted thoracoscopic thymectomies performed in the same period. All da Vinci robot operations were successful. This retrospective analysis demonstrated that the clinical outcomes of da Vinci robot-assisted thymectomy were not significantly different from those of video-assisted thoracoscopic thymectomy in the same period. The da Vinci robot-assisted thymectomy is a safe, minimally invasive, and convenient operation, and shows promise for general thoracic surgery in China. Copyright © 2014 John Wiley & Sons, Ltd.

  8. Wheelchair securement and occupant restraint system (WTORS) practices in public transit buses.

    PubMed

    Frost, Karen L; Bertocci, Gina; Salipur, Zdravko

    2013-01-01

    The purpose of this study was to characterize wheelchair tiedown and occupant restraint system (WTORS) usage in public transit buses based on observations of wheelchair and scooter (wheeled mobility device: WhMD) passenger trips. A retrospective review of on-board video surveillance recordings of WhMD trips on fixed-route, large accessible transit vehicles (LATVs) was performed. Two hundred ninety-five video recordings were collected for review and analysis during the period June 2007-February 2009. Results showed that 73.6% of WhMDs were unsecured during transit. Complete use of all four tiedowns was observed more frequently for manual wheelchairs (14.9%) and power wheelchairs (5.5%), compared to scooters (0.0%), and this difference was significant (p=0.013). Nonuse or misuse (lap belt use only) of the occupant restraint system occurred during 47.5% of WhMD trips. The most frequently observed (52.5%) use of the lap belt consisted of bus operators routing the lap belt around the WhMD seatback in an attempt to secure the WhMD. These findings support the need for development and implementation of WTORS with improved usability and/or WTORS that can be operated independently by WhMD passengers and improved WTORS training for bus operators.

  9. Modification and Validation of an Automotive Data Processing Unit, Compressed Video System, and Communications Equipment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carter, R.J.

    1997-04-01

    The primary purpose of the "modification and validation of an automotive data processing unit (DPU), compressed video system, and communications equipment" cooperative research and development agreement (CRADA) was to modify and validate both hardware and software, developed by Scientific Atlanta, Incorporated (S-A) for defense applications (e.g., rotary-wing airplanes), for the commercial sector surface transportation domain (i.e., automobiles and trucks). S-A also furnished a state-of-the-art compressed video digital storage and retrieval system (CVDSRS), and off-the-shelf data storage and transmission equipment to support the data acquisition system for crash avoidance research (DASCAR) project conducted by Oak Ridge National Laboratory (ORNL). In turn, S-A received access to hardware and technology related to DASCAR. DASCAR was subsequently removed completely and installation was repeated a number of times to gain an accurate idea of complete installation, operation, and removal of DASCAR. Upon satisfactory completion of the DASCAR construction and preliminary shakedown, ORNL provided NHTSA with an operational demonstration of DASCAR at their East Liberty, OH test facility. The demonstration included an on-the-road demonstration of the entire data acquisition system using NHTSA's test track. In addition, the demonstration also consisted of a briefing, containing the following: ORNL generated a plan for validating the prototype data acquisition system with regard to: removal of DASCAR from an existing vehicle, and installation and calibration in other vehicles; reliability of the sensors and systems; data collection and transmission process (data integrity); impact on the drivability of the vehicle and obtrusiveness of the system to the driver; data analysis procedures; conspicuousness of the vehicle to other drivers; and DASCAR installation and removal training and documentation.
In order to identify any operational problems not captured by the systems testing and evaluation, the validation plan also addressed a short-term pilot research program to manipulate DASCAR under operational conditions using "naive" drivers. The effort exercised the full capabilities of the data acquisition system. ORNL subsequently evaluated and pilot tested the data acquisition system using the validation plan. The plan was implemented in full at the NHTSA East Liberty, OH test facility, and was carried out as a cooperative effort with the Vehicle Research and Test Center staff. ORNL determined the reliability of the sensors and systems by exercising DASCAR. For one vehicle type, ORNL evaluated systems reliability over a continuous period of 30 days, with particular attention paid to maintenance of calibration and data integrity.

  10. Live video monitoring robot controlled by web over internet

    NASA Astrophysics Data System (ADS)

    Lokanath, M.; Akhil Sai, Guruju

    2017-11-01

    The future is all about robots: robots can perform tasks where humans cannot, and they have wide application in military and industrial settings for lifting heavy weights, for accurate placement, and for repeating the same task many times where humans are not efficient. Generally, a robot is a mix of electronic, electrical, and mechanical engineering and can perform tasks autonomously or under human supervision. The camera acts as the robot's eye, called robovision; it supports security monitoring and can reach places the human eye cannot. This paper presents the development of a live video streaming robot controlled from a website. We designed the web interface to move the robot left, right, forward, and back while streaming video. In keeping with the move toward smart environments and the Internet of Things (IoT), the system connects over the internet and can be operated from a smartphone using a web browser. A Raspberry Pi Model B acts as the heart of the robot; the motors and the surveillance camera (R Pi camera 2) are connected to the Raspberry Pi.
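
    The web-to-motor control described above reduces to a mapping from a browser command to motor-driver states. The sketch below isolates that logic as a pure function; the two-motor differential layout and the four-bit H-bridge interface are assumptions for illustration, and on real hardware the returned states would be written to GPIO pins from a small web handler.

```python
# Differential-drive command table:
# (left_forward, left_back, right_forward, right_back)
# The four commands mirror the web buttons described in the paper; the
# two-motor layout and bit semantics are illustrative assumptions.
COMMANDS = {
    "forward": (1, 0, 1, 0),
    "back":    (0, 1, 0, 1),
    "left":    (0, 1, 1, 0),   # spin left: left wheel back, right wheel forward
    "right":   (1, 0, 0, 1),
    "stop":    (0, 0, 0, 0),
}

def drive(command):
    """Return motor-driver input states for a web command (unknown -> stop)."""
    return COMMANDS.get(command, COMMANDS["stop"])
```

    Defaulting unknown commands to "stop" is a deliberate safety choice for a remotely operated vehicle.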

  11. Wavelet library for constrained devices

    NASA Astrophysics Data System (ADS)

    Ehlers, Johan Hendrik; Jassim, Sabah A.

    2007-04-01

    The wavelet transform is a powerful tool for image and video processing, useful in a range of applications. This paper is concerned with the efficiency of a certain fast-wavelet-transform (FWT) implementation and several wavelet filters suited to constrained devices. Such constraints are typically found on mobile (cell) phones or personal digital assistants (PDAs), and can be a combination of limited memory, slow floating-point operations (compared to integer operations, most often as a result of missing hardware support), and limited local storage. Yet these devices are burdened with demanding tasks such as processing a live video or audio signal through on-board capture sensors. In this paper we present a new wavelet software library, HeatWave, that can be used efficiently for image/video processing and analysis tasks on mobile phones and PDAs. We demonstrate that HeatWave is suitable for real-time applications, with fine control and range to suit transform demands, and we present experimental results to substantiate these claims. Finally, since this library is intended for practical use, we accounted for several well-known differences among embedded operating system platforms, such as a lack of common routines or functions, stack limitations, etc. This makes HeatWave suitable for a range of applications and research projects.
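
    The floating-point constraint the authors highlight is commonly addressed with integer lifting, which computes a wavelet step using only integer adds and shifts yet remains perfectly invertible. The sketch below shows one level of the integer Haar (S-transform) lifting scheme; it illustrates the general technique, not the HeatWave API.

```python
def haar_forward(x):
    """One integer lifting level of the Haar/S-transform (even-length input).
    Integer-only and perfectly invertible, which is why lifting suits
    devices with slow floating-point hardware."""
    approx, detail = [], []
    for a, b in zip(x[0::2], x[1::2]):
        d = a - b            # predict step: difference
        s = b + (d >> 1)     # update step: floor((a + b) / 2) via shift
        approx.append(s)
        detail.append(d)
    return approx, detail

def haar_inverse(approx, detail):
    """Exact inverse of haar_forward, again using only integer arithmetic."""
    out = []
    for s, d in zip(approx, detail):
        b = s - (d >> 1)
        a = d + b
        out += [a, b]
    return out
```

    Because `>>` is an arithmetic (floor) shift, the round-trip is exact even for negative samples, so lossless coding is possible without any floating-point operations.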

  12. Multiple video sequences synchronization during minimally invasive surgery

    NASA Astrophysics Data System (ADS)

    Belhaoua, Abdelkrim; Moreau, Johan; Krebs, Alexandre; Waechter, Julien; Radoux, Jean-Pierre; Marescaux, Jacques

    2016-03-01

    Hybrid operating rooms are an important development in the medical ecosystem. They allow integrating, in the same procedure, the advantages of radiological imaging and surgical tools. However, one of the challenges faced by clinical engineers is to support the connectivity and interoperability of medical-electrical point-of-care devices. A system that could enable plug-and-play connectivity and interoperability for medical devices would improve patient safety, save hospitals time and money, and provide data for electronic medical records. In this paper, we propose a hardware platform dedicated to collecting and synchronizing, in real time, multiple videos captured from medical equipment. The final objective is to integrate augmented reality technology into the operating room (OR) in order to assist the surgeon during a minimally invasive operation. To the best of our knowledge, there is no prior work dealing with hardware-based video synchronization for augmented reality applications in the OR. Whilst hardware synchronization methods can embed a temporal value, a so-called timestamp, into each sequence on-the-fly and require no post-processing, they do require specialized hardware; the design of our hardware, however, is simple and generic. This approach was adopted and implemented in this work, and its performance is evaluated by comparison to state-of-the-art methods.
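
    Once each frame carries an embedded timestamp, aligning two streams reduces to nearest-timestamp matching. The sketch below illustrates that downstream step in Python; it is an assumption about how the timestamps would be consumed, not the paper's FPGA design, and the millisecond units and tolerance are arbitrary.

```python
import bisect

def align_frames(ts_a, ts_b, tolerance_ms=20):
    """Pair each frame of stream A with the nearest-in-time frame of stream B
    using embedded timestamps (in ms); pairs further apart than `tolerance_ms`
    are dropped. `ts_b` must be sorted (timestamps are monotonic per stream)."""
    pairs = []
    for i, t in enumerate(ts_a):
        j = bisect.bisect_left(ts_b, t)
        # candidate neighbours: the insertion point and the one before it
        best = min((k for k in (j - 1, j) if 0 <= k < len(ts_b)),
                   key=lambda k: abs(ts_b[k] - t))
        if abs(ts_b[best] - t) <= tolerance_ms:
            pairs.append((i, best))
    return pairs
```

    Dropping pairs outside the tolerance prevents an augmented-reality overlay from being rendered against a frame captured too far from the reference instant.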

  13. Evaluation of stereoscopic video cameras synchronized with the movement of an operator's head on the teleoperation of the actual backhoe shovel

    NASA Astrophysics Data System (ADS)

    Minamoto, Masahiko; Matsunaga, Katsuya

    1999-05-01

    Operator performance while using a remote-controlled backhoe shovel is described for three different stereoscopic viewing conditions: direct view, fixed stereoscopic cameras connected to a helmet-mounted display (HMD), and a rotating stereo camera slaved to the head orientation of a freely moving stereo HMD. Results showed that the head-slaved system provided the best performance.

  14. IMIS: An intelligent microscope imaging system

    NASA Technical Reports Server (NTRS)

    Caputo, Michael; Hunter, Norwood; Taylor, Gerald

    1994-01-01

    Until recently, microscope users in space relied on traditional microscopy techniques that required manual operation of the microscope and recording of observations in the form of written notes, drawings, or photographs. This method was time consuming and required the return of film and drawings from space for analysis. No real-time data analysis was possible. Advances in digital and video technologies, along with recent developments in artificial intelligence, will allow future space microscopists a choice of three additional modes of microscopy: remote coaching, remote control, and automation. Remote coaching requires manual operation of the microscope with instructions given by two-way audio/video transmission during critical phases of the experiment. When using the remote-control mode of microscopy, the Principal Investigator controls the microscope from the ground. The automated mode employs artificial intelligence to control microscope functions and is the only mode that can also operate in the other three modes. The purpose of this presentation is to discuss the advantages and disadvantages of the four modes of microscopy and how IMIS, a proposed intelligent microscope imaging system, can be used as a model for developing and testing the concepts, operating procedures, and equipment design specifications required to provide a comprehensive microscopy/imaging capability onboard Space Station Freedom.

  15. Applications of ENF criterion in forensic audio, video, computer and telecommunication analysis.

    PubMed

    Grigoras, Catalin

    2007-04-11

    This article reports on the electric network frequency (ENF) criterion as a means of assessing the integrity of digital audio/video evidence and supporting forensic IT and telecommunication analysis. A brief description is given of the different ENF types and the phenomena that determine ENF variations. In most situations, visual inspection of spectrograms and comparison with an ENF database are enough to reach a non-authenticity opinion. A more detailed investigation, in the time domain, requires short-time window measurements and analyses. The stability of the ENF over geographical distances has been established by comparing synchronized recordings made at different locations on the same network. Real cases are presented in which the ENF criterion was used to investigate audio and video files created with covert surveillance systems, a digitized audio/video recording, and a TV broadcast report. By applying the ENF criterion in forensic audio/video analysis, one can determine whether and where a digital recording has been edited, establish whether it was made at the time claimed, and identify the time and date of the recording operation.
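The spectral workflow this abstract describes, extracting the mains-frequency trace from a recording so it can be compared against a reference database, can be sketched in a few lines of Python. This is an illustrative sketch, not the author's forensic tooling; the window length, zero-padding, and the synthetic 50.05 Hz hum are all assumptions:

```python
import numpy as np

def estimate_enf(signal, fs, nominal=50.0, band=1.0, win_s=8.0, nfft=1 << 16):
    """Return one mains-frequency estimate per analysis window.

    Each window is Hann-weighted, zero-padded to nfft points, and the
    spectral peak inside nominal +/- band Hz is taken as the ENF value.
    Forensic tools additionally refine the peak estimate and compare
    the resulting trace against an ENF reference database.
    """
    win = int(win_s * fs)
    freqs = np.fft.rfftfreq(nfft, d=1.0 / fs)
    lo, hi = np.searchsorted(freqs, [nominal - band, nominal + band])
    estimates = []
    for start in range(0, len(signal) - win + 1, win):
        chunk = signal[start:start + win] * np.hanning(win)
        spectrum = np.abs(np.fft.rfft(chunk, n=nfft))
        estimates.append(freqs[lo + np.argmax(spectrum[lo:hi])])
    return np.array(estimates)

# Synthetic recording: mains hum drifting slightly above the 50 Hz nominal.
fs = 400                          # Hz, after band-limiting/downsampling
t = np.arange(0, 32, 1.0 / fs)    # 32 s of signal -> four 8 s windows
hum = np.sin(2 * np.pi * 50.05 * t)
enf_trace = estimate_enf(hum, fs)
```

In casework the recovered trace would be matched against logged grid-frequency data to date the recording or to expose splices as discontinuities in the trace.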

  16. Digital Motion Imagery, Interoperability Challenges for Space Operations

    NASA Technical Reports Server (NTRS)

    Grubbs, Rodney

    2012-01-01

    With advances in available bandwidth from spacecraft and between terrestrial control centers, digital motion imagery and video are becoming more practical as a data-gathering tool for science and engineering, as well as for sharing missions with the public. The digital motion imagery and video industry has done a good job of creating standards for compression, distribution, and physical interfaces. Compressed data streams can easily be transmitted or distributed over radio frequency, Internet Protocol, and other data networks. All of these standards, however, can make sharing video between spacecraft and terrestrial control centers a frustrating and complicated task when different agencies use different standards and protocols. This paper will explore the challenges presented by the abundance of motion imagery and video standards, interfaces, and protocols, with suggestions for common formats that could simplify interoperability between spacecraft and ground support systems. Real-world examples from the International Space Station will be examined. The paper will also discuss recent trends in the development of new video compression algorithms, as well as the likely expanded use of Delay (or Disruption) Tolerant Networking nodes.

  17. Improving stop line detection using video imaging detectors.

    DOT National Transportation Integrated Search

    2010-11-01

    The Texas Department of Transportation, other state departments of transportation, and cities nationwide are using video detection successfully at signalized intersections. However, operational issues with video imaging vehicle detection...

  18. 47 CFR 74.870 - Wireless video assist devices.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 47 Telecommunication 4 2012-10-01 2012-10-01 false Wireless video assist devices. 74.870 Section... Stations § 74.870 Wireless video assist devices. Television broadcast auxiliary licensees and motion picture and television producers, as defined in § 74.801 may operate wireless video assist devices on a...

  19. 47 CFR 74.870 - Wireless video assist devices.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 47 Telecommunication 4 2011-10-01 2011-10-01 false Wireless video assist devices. 74.870 Section... Stations § 74.870 Wireless video assist devices. Television broadcast auxiliary licensees and motion picture and television producers, as defined in § 74.801 may operate wireless video assist devices on a...

  20. 47 CFR 74.870 - Wireless video assist devices.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 47 Telecommunication 4 2014-10-01 2014-10-01 false Wireless video assist devices. 74.870 Section... Stations § 74.870 Wireless video assist devices. Television broadcast auxiliary licensees and motion picture and television producers, as defined in § 74.801 may operate wireless video assist devices on a...

  1. 47 CFR 74.870 - Wireless video assist devices.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 47 Telecommunication 4 2013-10-01 2013-10-01 false Wireless video assist devices. 74.870 Section... Stations § 74.870 Wireless video assist devices. Television broadcast auxiliary licensees and motion picture and television producers, as defined in § 74.801 may operate wireless video assist devices on a...

  2. Optimisation Issues of High Throughput Medical Data and Video Streaming Traffic in 3G Wireless Environments.

    PubMed

    Istepanian, R S H; Philip, N

    2005-01-01

    In this paper we describe some of the optimisation issues relevant to the requirements of high-throughput medical data and video streaming traffic in 3G wireless environments. In particular, we present a challenging 3G mobile healthcare application that demands high 3G medical data throughput. We also describe the 3G QoS requirements of the mObile Tele-Echography ultra-Light rObot system (OTELO), which is designed to provide seamless 3G connectivity for real-time ultrasound medical video streams and diagnosis: a remote site (the robotic and patient station) is manipulated by an expert side (specialists) that controls the robotic scanning operation and presents real-time feedback diagnosis over 3G wireless communication links.

  3. Minimally invasive video-assisted thyroidectomy: Ascending the learning curve

    PubMed Central

    Capponi, Michela Giulii; Bellotti, Carlo; Lotti, Marco; Ansaloni, Luca

    2015-01-01

    BACKGROUND: Minimally invasive video-assisted thyroidectomy (MIVAT) is a technically demanding procedure and requires a surgical team skilled in both endocrine and endoscopic surgery. The aim of this report is to point out some aspects of the learning curve of video-assisted thyroid surgery through the analysis of our preliminary series of procedures. PATIENTS AND METHODS: Over a period of 8 months, we selected 36 patients for minimally invasive video-assisted surgery of the thyroid. The patients were considered eligible if they presented with a nodule not exceeding 35 mm and a total thyroid volume <20 ml; the presence of biochemical and ultrasound signs of thyroiditis and a pre-operative diagnosis of cancer were exclusion criteria. We analysed the surgical results, conversion rate, operating time, post-operative complications, hospital stay and cosmetic outcomes of the series. RESULTS: We performed 36 total thyroidectomies, and in one case we performed a concomitant parathyroidectomy. The procedure was successfully carried out in 33 out of 36 cases (conversion rate 8.3%). The mean operating time was 109 min (range: 80-241 min) and reached a plateau after 29 MIVAT. Post-operative complications included three transient recurrent nerve palsies and two transient hypocalcemias; no definitive hypoparathyroidism was registered. The cosmetic result was considered excellent by most patients. CONCLUSIONS: Advances in skills and technology allow surgeons to easily reproduce the standard open total thyroidectomy with video assistance. Although the learning curve represents a time-consuming step, training remains a crucial point in gaining reasonable confidence with the video-assisted surgical technique. PMID:25883451

  4. Knowledge representation in space flight operations

    NASA Technical Reports Server (NTRS)

    Busse, Carl

    1989-01-01

    In space flight operations, rapid understanding of the state of the space vehicle is essential. Representation of knowledge depicting space vehicle status in a dynamic environment presents a difficult challenge. The NASA Jet Propulsion Laboratory has pursued areas of technology associated with the advancement of the spacecraft operations environment. This has led to the development of several advanced mission systems which incorporate enhanced graphics capabilities. These systems include: (1) Spacecraft Health Automated Reasoning Prototype (SHARP); (2) Spacecraft Monitoring Environment (SME); (3) Electrical Power Data Monitor (EPDM); (4) Generic Payload Operations Control Center (GPOCC); and (5) Telemetry System Monitor Prototype (TSM). Knowledge representation in these systems provides a direct representation of the intrinsic images associated with the instrument and satellite telemetry and telecommunications systems. The man-machine interface includes easily interpreted contextual graphic displays. These interactive video displays contain multiple display screens with pop-up windows and intelligent, high-resolution graphics linked through context- and mouse-sensitive icons and text.

  5. Comparison of UNL laser imaging and sizing system and a phase Doppler system for analyzing sprays from a NASA nozzle

    NASA Technical Reports Server (NTRS)

    Alexander, Dennis R.

    1990-01-01

    Research was conducted on the characteristics of aerosol sprays using a P/DPA and a laser imaging/video processing system on a NASA MOD-1 air-assist nozzle being evaluated for use in aircraft icing research. Benchmark tests were performed on monodispersed particles and on the NASA MOD-1 nozzle under identical laboratory operating conditions. The laser imaging/video processing system and the P/DPA showed agreement on calibration tests in monodispersed aerosol sprays of + or - 2.6 micron with a standard deviation of + or - 2.6 micron. Benchmark tests were performed on the NASA MOD-1 nozzle on the centerline and radially at 0.5-inch increments to the outer edge of the spray plume at a distance of 2 ft downstream from the nozzle exit. Comparative results at two operating conditions of the nozzle are presented for the two instruments. For the first case studied, the deviation in arithmetic mean diameters determined by the two instruments was in a range of 0.1 to 2.8 micron, and the deviation in Sauter mean diameters varied from 0 to 2.2 micron. Severe operating conditions in the second case resulted in the arithmetic mean diameter deviating from 1.4 to 7.1 micron and the deviation in the Sauter mean diameters ranging from 0.4 to 6.7 micron.

  6. Comparison of UNL laser imaging and sizing system and a phase/Doppler system for analyzing sprays from a NASA nozzle

    NASA Technical Reports Server (NTRS)

    Alexander, Dennis R.

    1988-01-01

    Aerosol spray characterization was done using a P/DPA and a laser imaging/video processing system on a NASA MOD-1 air-assist nozzle being evaluated for use in aircraft icing research. Benchmark tests were performed on monodispersed particles and on the NASA MOD-1 nozzle under identical laboratory operating conditions. The laser imaging/video processing system and the P/DPA showed agreement on calibration tests in monodispersed aerosol sprays of + or - 2.6 microns with a standard deviation of + or - 2.6 microns. Tests were performed on the NASA MOD-1 nozzle on the centerline and radially at one-half inch increments to the outer edge of the spray plume at a distance two feet (0.61 m) downstream from the exit of the nozzle. Comparative results at two operating conditions of the nozzle are presented for the two instruments. For the first case, the deviation in arithmetic mean diameters determined by the two instruments was in a range of 0.1 to 2.8 microns, and the deviation in Sauter mean diameters varied from 0 to 2.2 microns. Operating conditions in the second case were more severe, which resulted in the arithmetic mean diameter deviating from 1.4 to 7.1 microns and the deviation in the Sauter mean diameters ranging from 0.4 to 6.7 microns.

  7. The Cam Shell: An Innovative Design With Materials and Manufacturing

    NASA Technical Reports Server (NTRS)

    Chung, W. Richard; Larsen, Frank M.; Kornienko, Rob

    2003-01-01

    Most of the personal audio and video recording devices currently sold on the open market require hands to operate. Little consideration was given to designing a hands-free unit. Such a system, once designed and made available to the public, could greatly benefit mobile police officers, bicyclists, adventurers, street and dirt motorcyclists, horseback riders and many others. With a few design changes, water sports and skiing activities could be another large area of application. The cam shell is an innovative design in which an audio and video recording device (such as a palm camcorder) is housed in a body-mounted protection system. This system is based on the concept of viewing and recording at the same time. A view cam is attached to a helmet wired to a recording unit encased in a transparent body-mounted protection system. The helmet can also be controlled by remote. The operator will have full control in recording everything. However, the recording unit will be operated completely hands-free. This project will address the design considerations and their effects on material selection and manufacturing. It will enhance the understanding of the structure of materials, how the structure affects the behavior of the material, and the role that processing plays in linking the relationship between structure and properties. A systematic approach to design feasibility study, cost analysis and problem solving will also be discussed.

  8. Development and testing of a photometric method to identify non-operating solar hot water systems in field settings.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    He, Hongbo; Vorobieff, Peter V.; Menicucci, David

    2012-06-01

    This report presents the results of experimental tests of a concept for using infrared (IR) photos to identify non-operational systems based on their glazing temperatures; operating systems have lower glazing temperatures than those in stagnation. In recent years, thousands of new solar hot water (SHW) systems have been installed in some utility districts. As these numbers increase, concern is growing about the systems' dependability, because installation rebates are often based on the assumption that all of the SHW systems will perform flawlessly for a 20-year period. If SHW systems routinely fail prematurely, then the utilities will have overpaid for grid-energy reduction performance that is unrealized. Moreover, utilities are responsible for replacing energy for the loads that failed SHW systems were supplying. Thus, utilities are seeking data to quantify the reliability of SHW systems. The work described herein is intended to help meet this need. The details of the experiment are presented, including a description of the SHW collectors that were examined, the testbed that was used to control the system and record data, the IR camera that was employed, and the conditions in which testing was completed. The details of the associated analysis are presented, including direct examination of the video records of operational and stagnant collectors, as well as the development of a model to predict glazing temperatures and an analysis of the temporal intermittency of the images, both of which are critical to properly adjusting the IR camera for optimal performance. Many IR images and a video are presented to show the contrast between operating and stagnant collectors. The major conclusion is that the technique has the potential to be applied using an aircraft fitted with an IR camera that flies over an area with installed SHW systems, recording the images; subsequent analysis of the images can then determine the operational condition of the fielded collectors. Specific recommendations are presented relative to the application of the technique, including ways to mitigate and manage potential sources of error.
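The report's detection principle (operating collectors have cooler glazing than stagnant ones) amounts to a threshold classifier on IR-derived temperatures. A minimal sketch, assuming a hypothetical 15 degree margin that is not a value taken from the report:

```python
import numpy as np

def classify_collectors(glazing_temps_c, ambient_c, margin_c=15.0):
    """Label each collector 'stagnant' or 'operating' from IR glazing temps.

    glazing_temps_c: shape (n_collectors, n_pixels), pixel temperatures
    extracted from an IR image of each collector's glazing.  A collector
    whose mean glazing temperature exceeds ambient by more than margin_c
    is flagged as stagnant (non-operating).  The 15 C margin is a
    hypothetical placeholder, not a value from the report.
    """
    means = np.asarray(glazing_temps_c, dtype=float).mean(axis=1)
    return np.where(means - ambient_c > margin_c, "stagnant", "operating")

# Two simulated collectors on a 25 C day: one circulating fluid (warm),
# one stagnating (hot).
ambient = 25.0
ir_readings = [
    [31.0, 32.5, 30.8, 33.1],   # operating: glazing a few degrees above ambient
    [62.0, 65.4, 60.9, 63.7],   # stagnant: glazing far above ambient
]
flags = classify_collectors(ir_readings, ambient)
```

In an aerial survey, each collector's pixel block would first be segmented out of the IR frame; the report's glazing-temperature model and intermittency analysis would then inform the margin actually used.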

  9. Final Report: Non-Visible, Automated Target Acquisition and Tracking

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ziock, Klaus-Peter; Fabris, Lorenzo; Goddard, James K.

    The Roadside Tracker (RST) represents a new approach to radiation portal monitors. It uses a combination of gamma-ray and visible-light imaging to localize gamma-ray radiation sources to individual vehicles in free-flowing, multi-lane traffic. Deployed as two trailers parked on either side of the roadway (Fig. 1), the RST scans passing traffic with two large gamma-ray imagers, one mounted in each trailer. The system compensates for vehicle motion through the imagers' fields of view by applying automated target acquisition and tracking (TAT) software to a stream of video images. Once a vehicle has left the field of view, the radiation image of that vehicle is analyzed for the presence of a source, and if one is found, an alarm is sounded. The gamma-ray image is presented to the operator together with the video image of the traffic stream at the moment the vehicle was approximately closest to the system (Fig. 2). The offending vehicle is identified with a bounding box to distinguish it from other vehicles that might be present at the same time. The system was developed under a previous grant from the Department of Homeland Security's (DHS's) Domestic Nuclear Detection Office (DNDO). This report documents work performed with follow-on funding from DNDO to further advance the development of the RST. Specifically, the primary thrust was to extend the performance envelope of the system by replacing the visible-light video cameras used by the TAT software with sensors that would allow operation at night and during inclement weather. In particular, it was desired to allow operation after dark without requiring external lighting. As part of this work, the system software was also upgraded to use 64-bit computers, the current-generation operating system (OS) and software development environment (Windows 7 vs. Windows XP, and current Visual Studio .NET), and improved software version control (Git vs. SourceSafe). With the upgraded performance allowed by the new computers, and the additional memory available in a 64-bit OS, the system was able to handle greater traffic densities, which also allowed the addition of the ability to handle stop-and-go traffic.

  10. Army Networks: Opportunities Exist to Better Utilize Results from Network Integration Evaluations

    DTIC Science & Technology

    2013-08-01

    monitor operations; a touch screen-based mission command planning tool; and an antenna mast. The Army will field only one of these systems in capability... Office JTRS Joint Tactical Radio System NIE Network Integration Evaluation OSD Office of the Secretary of Defense SUE System under Evaluation... command systems. A robust transport layer capable of delivering voice, data, imagery, and video to the tactical edge (i.e., the forward battle lines

  11. Viewer discretion advised: is YouTube a friend or foe in surgical education?

    PubMed

    Rodriguez, H Alejandro; Young, Monica T; Jackson, Hope T; Oelschlager, Brant K; Wright, Andrew S

    2018-04-01

    In the current era, trainees frequently use unvetted online resources for their own education, including viewing surgical videos on YouTube. While operative videos are an important resource in surgical education, YouTube content is not selected or organized by quality but is instead ranked by popularity and other factors. This creates the potential for videos that feature poor technique or critical safety violations to become the most viewed for a given procedure. A YouTube search for "Laparoscopic cholecystectomy" was performed. Search results were screened to exclude animations and lectures; the top ten operative videos were evaluated. Three reviewers independently analyzed each of the 10 videos. Technical skill was rated using the GOALS score. Establishment of a critical view of safety (CVS) was scored according to the CVS "doublet view" score, where a score of ≥5 points (out of 6) is considered satisfactory. Videos were also screened for safety concerns not covered by the previous tools. The median competence score was 8 (±1.76) and the median difficulty was 2 (±1.8). The median GOALS score was 18 (±3.4). Only one video achieved an adequate critical view of safety; the median CVS score was 2 (range 0-6). Five videos were noted to have other potentially dangerous safety violations, including placing hot ultrasonic shears on the duodenum, non-clipping of the cystic artery, blind dissection in the hepatocystic triangle, and damage to the liver capsule. Top-ranked laparoscopic cholecystectomy videos on YouTube show suboptimal technique, with half of the videos demonstrating concerning maneuvers and only one in ten having an adequate critical view of safety. While observing operative videos can be an important learning tool, surgical educators should be aware of the low quality of popular videos on YouTube. Dissemination of high-quality content on video-sharing platforms should be a priority for surgical societies.

  12. Application of a Multimedia Service and Resource Management Architecture for Fault Diagnosis

    PubMed Central

    Castro, Alfonso; Sedano, Andrés A.; García, Fco. Javier; Villoslada, Eduardo

    2017-01-01

    Nowadays, the complexity of global video products has substantially increased. They are composed of several associated services whose functionalities need to adapt across heterogeneous networks with different technologies and administrative domains. Each of these domains has different operational procedures; therefore, the comprehensive management of multi-domain services presents serious challenges. This paper discusses an approach to service management linking the fault diagnosis system and Business Processes for Telefónica’s global video service. The main contribution of this paper is the proposal of an extended service management architecture based on Multi Agent Systems able to integrate the fault diagnosis with other service management functionalities. This architecture includes a distributed set of agents able to coordinate their actions under the umbrella of a Shared Knowledge Plane, inferring and sharing their knowledge with semantic techniques and three types of automatic reasoning: heterogeneous, ontology-based and Bayesian reasoning. This proposal has been deployed and validated in a real scenario in the video service offered by Telefónica Latam. PMID:29283398

  13. Application of a Multimedia Service and Resource Management Architecture for Fault Diagnosis.

    PubMed

    Castro, Alfonso; Sedano, Andrés A; García, Fco Javier; Villoslada, Eduardo; Villagrá, Víctor A

    2017-12-28

    Nowadays, the complexity of global video products has substantially increased. They are composed of several associated services whose functionalities need to adapt across heterogeneous networks with different technologies and administrative domains. Each of these domains has different operational procedures; therefore, the comprehensive management of multi-domain services presents serious challenges. This paper discusses an approach to service management linking the fault diagnosis system and Business Processes for Telefónica's global video service. The main contribution of this paper is the proposal of an extended service management architecture based on Multi Agent Systems able to integrate the fault diagnosis with other service management functionalities. This architecture includes a distributed set of agents able to coordinate their actions under the umbrella of a Shared Knowledge Plane, inferring and sharing their knowledge with semantic techniques and three types of automatic reasoning: heterogeneous, ontology-based and Bayesian reasoning. This proposal has been deployed and validated in a real scenario in the video service offered by Telefónica Latam.

  14. Training to Operate a Simulated Micro-Unmanned Aerial Vehicle With Continuous or Discrete Manual Control

    DTIC Science & Technology

    2008-05-01

    people use this type of device to navigate through 3-D virtual environments while playing video games, and this is essentially what the operator of an... How many days in the past week have you played video games? (response options: 0 through 7) 16. Estimate how many hours per day you play video games on

  15. COBRA ATD minefield detection results for the Joint Countermine ACTD Demonstrations

    NASA Astrophysics Data System (ADS)

    Stetson, Suzanne P.; Witherspoon, Ned H.; Holloway, John H., Jr.; Suiter, Harold R.; Crosby, Frank J.; Hilton, Russell J.; McCarley, Karen A.

    2000-08-01

    The Coastal Battlefield Reconnaissance and Analysis (COBRA) system described here was a Marine Corps Advanced Technology Demonstration (ATD) development consisting of an unmanned aerial vehicle (UAV) airborne multispectral video sensor system and a ground station that processes the multispectral video data to automatically detect minefields along the flight path. After successful completion of the ATD, the residual COBRA ATD system participated in the Joint Countermine (JCM) Advanced Concept Technology Demonstration (ACTD) Demo I, held at Camp Lejeune, North Carolina in conjunction with JTFX97, and Demo II, held in Stephenville, Newfoundland in conjunction with MARCOT98. These exercises demonstrated the COBRA ATD system in an operational environment, detecting minefields that included several different mine types in widely varying backgrounds. The COBRA system performed superbly during these demonstrations, detecting mines under water, in the surf zone, on the beach, and inland, and has transitioned to an acquisition program. This paper describes COBRA operation and performance results for these demonstrations, which represent the first demonstrated capability for remote tactical minefield detection from a UAV. The successful COBRA technologies and techniques demonstrated for tactical UAV minefield detection in the Joint Countermine Advanced Concept Technology Demonstrations have formed the technical foundation for future developments in Marine Corps, Navy, and Army tactical remote airborne mine detection systems.

  16. Cost-Benefit Performance of Robotic Surgery Compared with Video-Assisted Thoracoscopic Surgery under the Japanese National Health Insurance System.

    PubMed

    Kajiwara, Naohiro; Patrick Barron, James; Kato, Yasufumi; Kakihana, Masatoshi; Ohira, Tatsuo; Kawate, Norihiko; Ikeda, Norihiko

    2015-01-01

    Medical economics have a significant impact on the entire country. The explosion in surgical techniques has been accompanied by questions regarding actual improvements in outcome and cost-effectiveness, for example of the da Vinci® Surgical System (dVS) compared with conventional video-assisted thoracic surgery (VATS). The aim was to establish a medical fee system for robot-assisted thoracic surgery (RATS), which is not yet firmly established in Japan. This study examines the cost-benefit performance (CBP) of RATS compared with VATS, based on medical fees under the Japanese National Health Insurance System (JNHIS) introduced in 2012. The projected (but as yet undecided) price in the JNHIS would be insufficient for institutions with fewer than 200 dVS cases per year. Only institutions that perform more than 300 dVS operations per year would obtain a positive CBP with the projected JNHIS reimbursement. Thus, under the present conditions, it is necessary to perform at least 300 dVS operations per year in each institution with a dVS system to avoid a financial deficit under current robotic surgical management. This may encourage a downward price revision of the dVS equipment by the manufacturer, which would result in a decrease in the cost per procedure.
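The abstract's break-even reasoning, the annual fixed cost of the robot divided by the per-case margin, can be made explicit in a short sketch. The figures below are hypothetical placeholders, not the paper's actual costs or fees:

```python
import math

def break_even_cases(annual_fixed_cost, fee_per_case, variable_cost_per_case):
    """Smallest annual case volume at which the per-case margin covers
    the fixed cost of owning the robot (lease, depreciation, service
    contract).  Raises ValueError if each case loses money, since no
    volume can then reach break-even.
    """
    margin = fee_per_case - variable_cost_per_case
    if margin <= 0:
        raise ValueError("per-case margin must be positive")
    return math.ceil(annual_fixed_cost / margin)

# Hypothetical figures (not from the paper): a 300,000-unit annual fixed
# cost and a 1,000-unit margin per procedure imply 300 cases per year.
cases_needed = break_even_cases(300_000, 2_500, 1_500)
```

The same function shows why a lower equipment price helps: cutting the fixed cost or raising the reimbursement fee both shrink the required volume.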

  17. 37 CFR 201.40 - Exemption to prohibition against circumvention.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... security of the owner or operator of a computer, computer system, or computer network; and (ii) The... film and media studies students; (ii) Documentary filmmaking; (iii) Noncommercial videos. (2) Computer... lawfully obtained, with computer programs on the telephone handset. (3) Computer programs, in the form of...

  18. Augmented reality system for CT-guided interventions: system description and initial phantom trials

    NASA Astrophysics Data System (ADS)

    Sauer, Frank; Schoepf, Uwe J.; Khamene, Ali; Vogt, Sebastian; Das, Marco; Silverman, Stuart G.

    2003-05-01

    We are developing an augmented reality (AR) image guidance system in which information derived from medical images is overlaid onto a video view of the patient. The interventionalist wears a head-mounted display (HMD) that presents the augmented stereo view. The HMD is custom fitted with two miniature color video cameras that capture the stereo view of the scene. A third video camera, operating in the near IR, is also attached to the HMD and is used for head tracking. The system achieves real-time performance of 30 frames per second. The graphics appear firmly anchored in the scene, without any noticeable swimming, jitter, or time lag. For the application of CT-guided interventions, we extended our original prototype system to include tracking of a biopsy needle to which we attached a set of optical markers. The AR visualization provides very intuitive guidance for planning and placement of the needle and reduces radiation to patient and radiologist. We used an interventional abdominal phantom with simulated liver lesions to perform an initial set of experiments. The users were consistently able to locate the target lesion with the first needle pass. These results provide encouragement to move the system towards clinical trials.

  19. An integrated port camera and display system for laparoscopy.

    PubMed

    Terry, Benjamin S; Ruppert, Austin D; Steinhaus, Kristen R; Schoen, Jonathan A; Rentschler, Mark E

    2010-05-01

    In this paper, we built and tested the port camera, a novel, inexpensive, portable, and battery-powered laparoscopic tool that integrates the components of a vision system with a cannula port. This new device 1) minimizes the invasiveness of laparoscopic surgery by combining a camera port and tool port; 2) reduces the cost of laparoscopic vision systems by integrating an inexpensive CMOS sensor and LED light source; and 3) enhances laparoscopic surgical procedures by mechanically coupling the camera, tool port, and liquid crystal display (LCD) screen to provide an on-patient visual display. The port camera video system was compared to two laparoscopic video systems: a standard resolution unit from Karl Storz (model 22220130) and a high definition unit from Stryker (model 1188HD). Brightness, contrast, hue, colorfulness, and sharpness were compared. The port camera video is superior to the Storz scope and approximately equivalent to the Stryker scope. An ex vivo study was conducted to measure the operative performance of the port camera. The results suggest that simulated tissue identification and biopsy acquisition with the port camera is as efficient as with a traditional laparoscopic system. The port camera was successfully used by a laparoscopic surgeon for exploratory surgery and liver biopsy during a porcine surgery, demonstrating initial surgical feasibility.

  20. Lameness detection in dairy cattle: single predictor v. multivariate analysis of image-based posture processing and behaviour and performance sensing.

    PubMed

    Van Hertem, T; Bahr, C; Schlageter Tello, A; Viazzi, S; Steensels, M; Romanini, C E B; Lokhorst, C; Maltz, E; Halachmi, I; Berckmans, D

    2016-09-01

    The objective of this study was to evaluate whether a multi-sensor system (milk, activity, body posture) was a better classifier for lameness than single-sensor-based detection models. Between September 2013 and August 2014, 3629 cow observations were collected on a commercial dairy farm in Belgium. Human locomotion scoring was used as the reference for model development and evaluation. Cow behaviour and performance were measured with sensors already present at the farm. A prototype three-dimensional video recording system was used to automatically quantify the back posture of a cow. For the single-predictor comparisons, a receiver operating characteristic (ROC) curve was made. For the multivariate detection models, logistic regression and generalized linear mixed models (GLMM) were developed. The best lameness classification model was obtained by the multi-sensor analysis (area under the ROC curve (AUC) = 0.757±0.029), containing a combination of milk and milking variables, activity, and gait and posture variables from videos. Second, the multivariate video-based system (AUC = 0.732±0.011) performed better than the multivariate milk sensors (AUC = 0.604±0.026) and the multivariate behaviour sensors (AUC = 0.633±0.018). The video-based system also performed better than the combined behaviour- and performance-based detection model (AUC = 0.669±0.028), indicating that it is worthwhile to consider a video-based lameness detection system regardless of the other sensors already present on the farm. The results suggest that Θ2, the feature variable for the back curvature around the hip joints, with an AUC of 0.719, is the best single predictor variable for lameness detection based on locomotion scoring. In general, this study showed that the video-based back posture monitoring system outperforms the behaviour and performance sensing techniques for locomotion scoring-based lameness detection. 
A GLMM with seven specific variables (walking speed, back posture measurement, daytime activity, milk yield, lactation stage, milk peak flow rate and milk peak conductivity) gave the best combination of variables for lameness classification. The accuracy of four-level lameness classification was 60.3%; for binary lameness classification it improved to 79.8%. The binary GLMM obtained a sensitivity of 68.5% and a specificity of 87.6%, both exceeding the sensitivity (52.1%±4.7%) and specificity (83.2%±2.3%) of the multi-sensor logistic regression model. This shows that the repeated-measures analysis in the GLMM, which takes the individual history of each animal into account, outperforms classification with thresholds set at herd level (a statistical population).
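    The AUC figures above compare binary classifiers by ranking quality. As a minimal illustration of how such a value is computed, the sketch below uses the rank-sum (Mann-Whitney) identity on made-up scores and labels; the data are hypothetical, not from the study.

```python
def auc(labels, scores):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) identity:
    the probability that a random positive outscores a random negative."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    # Count pairwise wins, with ties worth half a win.
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 1, 0, 0, 0, 0]                 # 1 = lame per locomotion score
scores = [0.9, 0.7, 0.4, 0.6, 0.3, 0.2, 0.1]   # illustrative model outputs
print(round(auc(labels, scores), 3))           # → 0.917
```

An AUC of 0.5 corresponds to chance-level ranking, which is why the multi-sensor model's 0.757 is a meaningful improvement over the milk-sensor model's 0.604.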

  1. 360° Operative Videos: A Randomised Cross-Over Study Evaluating Attentiveness and Information Retention.

    PubMed

    Harrington, Cuan M; Kavanagh, Dara O; Wright Ballester, Gemma; Wright Ballester, Athena; Dicker, Patrick; Traynor, Oscar; Hill, Arnold; Tierney, Sean

    2017-11-06

    Although two-dimensional (2D) and three-dimensional videos have traditionally provided the foundation for reviewing operative procedures, the recent 360° format may add new dimensions to surgical education. This study sought to describe the production of a high-quality 360° video of an index operation (augmented with educational material), while evaluating variances in attentiveness, information retention, and appraisal compared with 2D. A six-camera synchronised array (GoPro Omni, California, United States) was suspended inverted and recorded an elective laparoscopic cholecystectomy in 2016. A single-blinded randomised cross-over study was performed to evaluate this video in 360° vs 2D formats. Group A experienced the 360° video using Samsung (Suwon, South Korea) GearVR virtual-reality headsets, followed by the 2D experience on a 75-inch television; Group B experienced the formats in the reverse order. Each video was probed at designated time points for engagement levels and task-unrelated images or thoughts. Alternating question banks were administered following each video experience. Feedback was obtained via a short survey at study completion. The setting was the New Academic and Education Building (NAEB) of the Royal College of Surgeons in Ireland, Dublin, July 2017. Participants were preclinical undergraduate students from a medical university in Ireland. Forty students participated, with a mean age of 23.2 ± 4.5 years and an equal sex distribution. The 360° video demonstrated significantly higher engagement (p < 0.01) throughout the experience and fewer task-unrelated images or thoughts (p < 0.01). Significant variances in information retention between the two groups were absent (p = 0.143), but most students (65%) reported the 360° video as their learning platform of choice. Mean appraisal levels for the 360° platform were positive, with mean responses of >8/10 for learning, immersion, and entertainment. This study describes the successful development and evaluation of a 360° operative video. 
This new video format demonstrated significant engagement and attentiveness benefits compared with traditional 2D formats and requires further evaluation in the field of technology-enhanced learning. Copyright © 2017 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.

  2. Convergence of broadband optical and wireless access networks

    NASA Astrophysics Data System (ADS)

    Chang, Gee-Kung; Jia, Zhensheng; Chien, Hung-Chang; Chowdhury, Arshad; Hsueh, Yu-Ting; Yu, Jianjun

    2009-01-01

    This paper describes the convergence of optical and wireless access networks for delivering high-bandwidth integrated services over optical fiber and air links. Several key system technologies are proposed and experimentally demonstrated. We report here, for the first time ever, a campus-wide field trial demonstration of a radio-over-fiber (RoF) system transmitting uncompressed standard-definition (SD) and high-definition (HD) real-time video content, carried by 2.4-GHz radio and 60-GHz millimeter-wave signals, respectively, over 2.5 km of standard single-mode fiber (SMF-28) through the campus fiber network at the Georgia Institute of Technology (GT). In addition, subsystem technologies of base stations and wireless transceivers operating at 60 GHz for real-time video distribution have been developed and tested.

  3. Portable Video/Digital Retinal Funduscope

    NASA Technical Reports Server (NTRS)

    Taylor, Gerald R.; Meehan, Richard; Hunter, Norwood; Caputo, Michael; Gibson, C. Robert

    1991-01-01

    Lightweight, inexpensive electronic and photographic instrument developed for detection, monitoring, and objective quantification of ocular/systemic disease or physiological alterations of retina, blood vessels, or other structures in anterior and posterior chambers of eye. Operated with little training. Functions with human or animal subject seated, recumbent, inverted, or in almost any other orientation; and in hospital, laboratory, field, or other environment. Produces video images viewed directly and/or digitized for simultaneous or subsequent analysis. Also equipped to produce photographs and/or fitted with adaptors to produce stereoscopic or magnified images of skin, nose, ear, throat, or mouth to detect lesions or diseases.

  4. Virtual reality for spherical images

    NASA Astrophysics Data System (ADS)

    Pilarczyk, Rafal; Skarbek, Władysław

    2017-08-01

    This paper presents a virtual reality application framework and an application concept for mobile devices. The framework uses the Google Cardboard library for the Android operating system and allows the creation of a 360° virtual-reality video player using standard OpenGL ES rendering methods. It provides network methods for connecting to a web server acting as the application's resource provider; resources are delivered as JSON responses to HTTP requests. The web server also uses the Socket.IO library for synchronous communication between the application and the server. The framework implements methods for an event-driven process that renders additional content based on the video timestamp and the virtual-reality head point of view.
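    To illustrate the resource-delivery pattern the abstract describes (a JSON response listing videos and timestamp-driven events), here is a minimal parsing sketch. The field names (`videos`, `url`, `timestamp_events`) are hypothetical, invented for illustration; the paper's actual schema is not given.

```python
import json

# A JSON body of the kind a resource-provider web server might return.
# Schema is assumed, not taken from the paper.
response_body = """
{
  "videos": [
    {"title": "Campus tour",
     "url": "https://example.com/tour360.mp4",
     "timestamp_events": [{"t": 12.5, "overlay": "info_panel"}]}
  ]
}
"""

manifest = json.loads(response_body)
for video in manifest["videos"]:
    # The player would load video["url"] and schedule each event
    # at its timestamp "t" during playback.
    print(video["title"], len(video["timestamp_events"]))
```

In the described framework, such a manifest would drive the event process: when playback reaches an event's timestamp, the corresponding additional content is rendered into the scene.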

  5. Using a high-definition stereoscopic video system to teach microscopic surgery

    NASA Astrophysics Data System (ADS)

    Ilgner, Justus; Park, Jonas Jae-Hyun; Labbé, Daniel; Westhofen, Martin

    2007-02-01

    Introduction: While there is an increasing demand for minimally invasive operative techniques in ear, nose and throat surgery, these operations are difficult for junior doctors to learn and demanding for experienced surgeons to supervise. The motivation for this study was to integrate high-definition (HD) stereoscopic video monitoring into microscopic surgery in order to facilitate teaching interaction between senior and junior surgeons. Material and methods: We attached a 1280x1024 HD stereo camera (TrueVision Systems, Inc., Santa Barbara, CA, USA) to an operating microscope (Zeiss ProMagis, Zeiss Co., Oberkochen, Germany), whose images were processed online by a PC workstation with dual Intel® Xeon® CPUs (Intel Co., Santa Clara, CA). The live image was displayed through polarized filters by two LCD projectors at 1280x768 pixels on a 1.25 m rear-projection screen. While the junior surgeon performed the surgical procedure based on the displayed stereoscopic image, all other participants (senior surgeon, nurse and medical students) shared the same stereoscopic image from the screen. Results: With the basic setup performed only once, on the day before surgery, fine adjustments required about 10 minutes extra during the operating schedule, which fitted into the interval between patients and thus did not prolong operation times. As all relevant features of the operative field were demonstrated on one large screen, four major effects were obtained: A) Stereoscopy facilitated orientation for the junior surgeon as well as for medical students. B) The stereoscopic image served as an unequivocal guide for the senior surgeon to demonstrate the next surgical steps to the junior colleague. C) The theatre nurse shared the same image, anticipating the next instruments that were needed. D) Medical students instantly shared the information given by all staff and the image, avoiding the need for an extra teaching session. 
Conclusion: High-definition stereoscopy has the potential to compress the learning curve for undergraduate as well as postgraduate medical professionals in minimally invasive surgery. Further studies will focus on the long-term effect on operative training as well as on post-processing of HD stereoscopic video content for off-line interactive medical education.

  6. Internet Protocol Display Sharing Solution for Mission Control Center Video System

    NASA Technical Reports Server (NTRS)

    Brown, Michael A.

    2009-01-01

    With the advent of broadcast television as a constant source of information throughout the NASA manned space flight Mission Control Center (MCC) at the Johnson Space Center (JSC), the current Video Transport System (VTS) provides the ability to visually enhance real-time applications as a broadcast channel that decision-making flight controllers have come to rely on, but it can be difficult to maintain and costly. The Operations Technology Facility (OTF) of the Mission Operations Facility Division (MOFD) has been tasked to provide insight into new, innovative technological solutions for the MCC environment, focusing on alternative architectures for a VTS. New technology will enable the sharing of all imagery from one specific computer display, better known as Display Sharing (DS), to other computer displays and display systems such as large projector systems, flight control rooms, and back supporting rooms throughout the facilities and other offsite centers, using IP networks. It has been stated that Internet Protocol (IP) applications are ready substitutes for the current visual architecture, but quality and speed may need to be forfeited to reduce cost and improve maintainability. Although the IP infrastructure can support many technologies, the simple task of sharing one's computer display can be rather clumsy and difficult to configure and manage across the many operators and products. 
The DS process shall collectively automate the sharing of images while focusing on such characteristics as managing bandwidth, encryption and security measures, synchronizing disconnections from loss of signal / loss of acquisition, and performance latency, and shall provide functions such as scalability, multi-sharing, ease of initial integration and sustained configuration, integration with video adjustment packages, collaborative tools, host/recipient controllability, and, as the paramount priority, an enterprise solution that provides ownership of the whole process while maintaining the integrity of the latest displayed-image devices. This study will provide insight into the many possibilities that can be filtered down to a harmoniously responsive product for use in today's MCC environment.

  7. Satellite-aided coastal zone monitoring and vessel traffic system

    NASA Technical Reports Server (NTRS)

    Baker, J. L.

    1981-01-01

    The development and demonstration of a coastal zone monitoring and vessel traffic system is described. The technique uses a LORAN-C navigational system and relays signals via the ATS-3 satellite to a computer-driven color video display for real-time control. Multi-use applications of the system to search-and-rescue operations, coastal zone management, and marine safety are described. Among the advantages of the system are its unlimited range, its compatibility with existing navigation systems, and its relatively low cost.

  8. Artificial Intelligence and Spacecraft Power Systems

    NASA Technical Reports Server (NTRS)

    Dugel-Whitehead, Norma R.

    1997-01-01

    This talk will present the work done at NASA Marshall Space Flight Center on using artificial intelligence to control the power system in a spacecraft. The presentation will include a brief history of power system automation and some basic definitions of the types of artificial intelligence that have been investigated at MSFC for power system automation. A videotape of one of our autonomous power systems, which uses cooperating expert systems and advanced hardware, will be presented.

  9. Video. A Guide to the Use of Portable Video Equipment.

    ERIC Educational Resources Information Center

    Rowatt, Robert W.

    This guide describes the portable equipment necessary for preparing a video production, and recommends ways of using that equipment to create a video program. Step by step instructions are provided for setting up the equipment for battery operation or with a mains electricity supply. Information is also given on procedures for recording, playing…

  10. 47 CFR 79.101 - Closed caption decoder requirements for analog television receivers.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ...) BROADCAST RADIO SERVICES CLOSED CAPTIONING AND VIDEO DESCRIPTION OF VIDEO PROGRAMMING § 79.101 Closed... display the captioning for whichever channel the user selects. The TV Mode of operation allows the video... and rows. The characters must be displayed clearly separated from the video over which they are placed...

  11. Alternative Fuels Data Center: Idaho Transportation Data for Alternative

    Science.gov Websites

    Videos from the National Renewable Energy Laboratory case studies: Idaho National Laboratory Operating Costs and Emissions (May 16, 2014); Republic Services Reduces Waste with CNG; Idaho Surges Ahead with Electric Vehicle Charging.

  12. ICC '86; Proceedings of the International Conference on Communications, Toronto, Canada, June 22-25, 1986, Conference Record. Volumes 1, 2, & 3

    NASA Astrophysics Data System (ADS)

    Papers are presented on ISDN, mobile radio systems and techniques for digital connectivity, centralized and distributed algorithms in computer networks, communications networks, quality assurance and impact on cost, adaptive filters in communications, the spread spectrum, signal processing, video communication techniques, and digital satellite services. Topics discussed include performance evaluation issues for integrated protocols, packet network operations, the computer network theory and multiple-access, microwave single sideband systems, switching architectures, fiber optic systems, wireless local communications, modulation, coding, and synchronization, remote switching, software quality, transmission, and expert systems in network operations. Consideration is given to wide area networks, image and speech processing, office communications application protocols, multimedia systems, customer-controlled network operations, digital radio systems, channel modeling and signal processing in digital communications, earth station/on-board modems, computer communications system performance evaluation, source encoding, compression, and quantization, and adaptive communications systems.

  13. Video games and surgical ability: a literature review.

    PubMed

    Lynch, Jeremy; Aughwane, Paul; Hammond, Toby M

    2010-01-01

    Surgical training is rapidly evolving because of reduced training hours and the reduction of training opportunities due to patient safety concerns. There is a popular conception that video game usage might be linked to improved operative ability, especially in techniques involving endoscopic modalities. If true, this might suggest future directions for training. A search was made of the MEDLINE databases for the MeSH term "Video Games," combined with the terms "Surgical Procedures, Operative," "Endoscopy," "Robotics," "Education," "Learning," "Simulators," "Computer Simulation," "Psychomotor Performance," and "Surgery, Computer-Assisted," encompassing all journal articles before November 2009. The references of these articles were searched for further studies. Twelve relevant journal articles were discovered. Video game usage has been studied in relation to laparoscopic, gastrointestinal endoscopic, endovascular, and robotic surgery. Video game users acquire endoscopic, but not robotic, techniques more quickly, and training on video games appears to improve performance. Copyright (c) 2010 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.

  14. Update on POCIT portable optical communicators: VideoBeam and EtherBeam

    NASA Astrophysics Data System (ADS)

    Mecherle, G. Stephen; Holcomb, Terry L.

    2000-05-01

    LDSC is developing the POCIT™ (Portable Optical Communication Integrated Transceiver) family of products, which includes VideoBeam™ and the latest addition, EtherBeam™. Each is a full-duplex portable laser communicator: VideoBeam™ providing near-broadcast-quality analog video and stereo audio, and EtherBeam™ providing standard Ethernet connectivity. Each POCIT™ transceiver is a 3.5-pound unit with a binocular-type form factor, which can be manually pointed, tripod-mounted or gyro-stabilized. Both units have an operational range of over two miles (clear air) with excellent jam-resistance and low-probability-of-interception characteristics. The transmission wavelength of 1550 nm enables Class 1 eyesafe operation (ANSI, IEC). The POCIT™ units are ideally suited for numerous military scenarios, surveillance/espionage, industrial precious mineral exploration, and campus video teleconferencing applications. VideoBeam will be available in the second quarter of 2000, followed by EtherBeam in the third quarter of 2000.

  15. The Ocean in Depth - Ideas for Using Marine Technology in Science Communication

    NASA Astrophysics Data System (ADS)

    Gerdes, A.

    2009-04-01

    By deploying camera and video systems on remotely operated vehicles (ROVs), new and fascinating insights into the functioning of deep-ocean ecosystems, such as cold-water coral reef communities, can be gained. Moreover, mapping hot vents at mid-ocean ridge locations and exploring asphalt and mud volcanoes in the Gulf of Mexico and the Mediterranean Sea with the aid of video camera systems have illustrated the scientific value of state-of-the-art diving tools. In principle, the deployment of sophisticated marine technology on seagoing expeditions and its results - video tapes and photographs of fascinating submarine environments, publication of new scientific findings - offer unique opportunities for communicating marine science. Experience shows that an interest in marine technology can easily be stirred in laypersons if the deployment of underwater vehicles such as ROVs during seagoing expeditions is presented using catchwords like "discovery", "new frontier", "groundbreaking mission", etc. On the other hand, a number of restrictions and challenges have to be kept in mind. Communicating marine science in general, and the achievements of marine technology in particular, can only succeed with a well-defined target-audience concept. While national and international TV stations and production companies are very interested in using high-quality underwater video footage, the involvement of journalists and camera teams in seagoing expeditions entails a number of challenges: berths on board research vessels are limited; safety aspects have to be considered; and copyright and utilisation questions concerning digitized video and photo material have to be handled with special care. To cite one example: on-board video material produced by professional TV teams cannot be used by the research institute that operated the expedition. 
This presentation aims at (1) informing members of the scientific community about new opportunities related to marine technology, (2) discussing challenges and limitations in cooperative projects with the media, (3) presenting new ways of marketing scientific findings, and (4) promoting the interest of the media present at the EGU09 conference in cooperating with research institutes.

  16. The da Vinci telerobotic surgical system: the virtual operative field and telepresence surgery.

    PubMed

    Ballantyne, Garth H; Moll, Fred

    2003-12-01

    The United States Department of Defense developed the telepresence surgery concept to meet battlefield demands. The da Vinci telerobotic surgery system evolved from these efforts. In this article, the authors describe the components of the da Vinci system and explain how the surgeon sits at a computer console, views a three-dimensional virtual operative field, and performs the operation by controlling robotic arms that hold the stereoscopic video telescope and surgical instruments that simulate hand motions with seven degrees of freedom. The three-dimensional imaging and handlike motions of the system facilitate advanced minimally invasive thoracic, cardiac, and abdominal procedures. da Vinci has recently released a second generation of telerobots with four arms and will continue to meet the evolving challenges of surgery.

  17. Feasibility of dynamic cardiac ultrasound transmission via mobile phone for basic emergency teleconsultation.

    PubMed

    Lim, Tae Ho; Choi, Hyuk Joong; Kang, Bo Seung

    2010-01-01

    We assessed the feasibility of using a camcorder mobile phone for teleconsultation in cardiac echocardiography. The diagnostic performance of evaluating left ventricular (LV) systolic function was measured by three emergency medicine physicians. A total of 138 short echocardiography video sequences (from 70 subjects) was selected from previous emergency room ultrasound examinations. Measurement of the LV ejection fraction based on the transmitted video displayed on a mobile phone was compared with the original video displayed on the LCD monitor of the ultrasound machine. Image quality was evaluated using the double stimulus impairment scale (DSIS). All observers showed high sensitivity, and specificity improved with the observer's increasing experience of cardiac ultrasound. Although the image quality of the video on the mobile phone was lower than that of the original, a receiver operating characteristic (ROC) analysis indicated no significant difference in diagnostic performance. Immediate basic teleconsultation on echocardiography movies is possible using current commercially available mobile phone systems.

  18. High-throughput sample adaptive offset hardware architecture for high-efficiency video coding

    NASA Astrophysics Data System (ADS)

    Zhou, Wei; Yan, Chang; Zhang, Jingzhi; Zhou, Xin

    2018-03-01

    A high-throughput hardware architecture for a sample adaptive offset (SAO) filter in the High Efficiency Video Coding (HEVC) standard is presented. First, an implementation-friendly and simplified bitrate estimation method for the rate-distortion cost calculation is proposed to reduce the computational complexity of the SAO mode decision. Then, a high-throughput VLSI architecture for SAO is presented based on the proposed bitrate estimation method. Furthermore, a multiparallel VLSI architecture for in-loop filters, which integrates both the deblocking filter and the SAO filter, is proposed. Six parallel strategies are applied in the proposed in-loop filter architecture to improve system throughput and filtering speed. Experimental results show that the proposed architecture can achieve up to 48% higher throughput in comparison with prior work. It reaches an operating clock frequency of 297 MHz with a TSMC 65-nm library and meets the real-time requirement of the in-loop filters for the 8K × 4K video format at 132 fps.
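    For readers unfamiliar with SAO, the filter adds small signed offsets to reconstructed samples after deblocking, classified either by edge shape or by intensity band. The sketch below illustrates the band-offset mode in plain software (the hardware above implements this plus the mode decision); the sample values and offsets are illustrative, since real SAO parameters are signalled in the bitstream per coding tree block.

```python
def sao_band_offset(samples, band_position, offsets, bit_depth=8):
    """Band-offset SAO: samples are classified into 32 equal intensity
    bands; the 4 consecutive bands starting at band_position each get a
    signed offset, and results are clipped to the valid sample range."""
    shift = bit_depth - 5                  # 8-bit: band index = sample >> 3
    max_val = (1 << bit_depth) - 1
    out = []
    for s in samples:
        band = s >> shift
        if band_position <= band < band_position + 4:
            s = min(max(s + offsets[band - band_position], 0), max_val)
        out.append(s)
    return out

recon = [10, 60, 66, 130, 250]             # reconstructed 8-bit samples
print(sao_band_offset(recon, band_position=7, offsets=[2, -1, 3, 0]))
# → [10, 62, 65, 130, 250]  (only bands 7 and 8 fall in the window)
```

The encoder-side mode decision, whose bitrate estimation the paper simplifies, compares the rate-distortion cost of such offset choices against edge-offset modes and SAO-off.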

  19. Improvement of Hungarian Joint Terminal Attack Program

    DTIC Science & Technology

    2013-06-13

    Acronyms: LST - Laser Spot Tracker; NVG - Night Vision Goggle; ROMAD - Radio Operator Maintainer and Driver; ROVER - Remotely Operated Video Enhanced Receiver; TACP... One component provides visual target designation. The other component consists of a laser spot tracker (LST), which identifies targets by tracking reflected laser energy... night-vision capability for every type of night-time mission, a laser spot tracker for laser spot search missions, and a remotely operated video enhanced receiver

  20. Consumer-based technology for distribution of surgical videos for objective evaluation.

    PubMed

    Gonzalez, Ray; Martinez, Jose M; Lo Menzo, Emanuele; Iglesias, Alberto R; Ro, Charles Y; Madan, Atul K

    2012-08-01

    The Global Operative Assessment of Laparoscopic Skill (GOALS) is a validated metric used to grade laparoscopic skills and has been used to score recorded operative videos. To facilitate easier viewing of these recordings, we are developing novel techniques that enable surgeons to view them. The objective of this study was to determine the feasibility of using widespread, current consumer-based technology to distribute appropriate videos for objective evaluation. Videos from residents were recorded through a direct connection from the camera processor's S-video output, via a cable and hub, to a standard laptop computer's universal serial bus (USB) port. A standard consumer-based video editing program was used to capture the video and save it in an appropriate format. We used the mp4 format and, depending on the size of the file, the videos were scaled down (compressed), converted to another format (using a standard video editing program), or sliced into multiple videos. Standard consumer-based programs were used to convert the video into a format more suitable for handheld personal digital assistants. In addition, the videos were uploaded to a social networking website and to video sharing websites. Recorded cases of laparoscopic cholecystectomy in a porcine model were used. Compression was required for all formats. All formats were accessed from home computers, work computers, and iPhones without difficulty. Qualitative analyses by four surgeons demonstrated quality appropriate for grading in all these formats. Our preliminary results show promise that, using consumer-based technology, videos can easily be distributed to surgeons for GOALS grading via various methods. Easy accessibility may make the evaluation of resident videos less complicated and cumbersome.
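    The compress/rescale/convert workflow described above is the kind of task now commonly scripted with a command-line encoder. As a hedged sketch, the function below assembles an ffmpeg command of that kind; the file names and the specific settings (480-line downscale, CRF 28, H.264) are illustrative choices, not the authors' parameters, and ffmpeg must be installed separately to actually run the command.

```python
import shlex

def build_compress_cmd(src, dst, height=480, crf=28):
    """Assemble (but do not run) an ffmpeg command that downscales and
    recompresses a surgical video for easy distribution."""
    return [
        "ffmpeg", "-i", src,
        "-vf", f"scale=-2:{height}",   # downscale, preserving aspect ratio
        "-c:v", "libx264", "-crf", str(crf),  # H.264, constant-quality mode
        "-movflags", "+faststart",     # moov atom up front for web playback
        dst,
    ]

# Hypothetical file names for illustration only.
cmd = build_compress_cmd("lap_chole_raw.avi", "lap_chole_small.mp4")
print(shlex.join(cmd))
```

To execute it one would pass `cmd` to `subprocess.run(cmd, check=True)`; keeping command construction separate from execution makes the settings easy to test and vary per target device.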
