Sample records for mounted video camera

  1. A Portable Shoulder-Mounted Camera System for Surgical Education in Spine Surgery.

    PubMed

    Pham, Martin H; Ohiorhenuan, Ifije E; Patel, Neil N; Jakoi, Andre M; Hsieh, Patrick C; Acosta, Frank L; Wang, Jeffrey C; Liu, John C

    2017-02-07

    The past several years have demonstrated an increased recognition of operative videos as an important adjunct for resident education. Currently lacking, however, are effective methods to record video for the purposes of illustrating the techniques of minimally invasive (MIS) and complex spine surgery. We describe here our experiences developing and using a shoulder-mounted camera system for recording surgical video. Our requirements for an effective camera system included wireless portability to allow for movement around the operating room, camera mount location for comfort and loupes/headlight usage, battery life for long operative days, and sterile control of on/off recording. With this in mind, we created a shoulder-mounted camera system utilizing a GoPro™ HERO3+, its Smart Remote (GoPro, Inc., San Mateo, California), a high-capacity external battery pack, and a commercially available shoulder-mount harness. This shoulder-mounted system was more comfortable to wear for long periods of time in comparison to existing head-mounted and loupe-mounted systems. Without requiring any wired connections, the surgeon was free to move around the room as needed. Over the past several years, we have recorded numerous MIS and complex spine surgeries for the purposes of surgical video creation for resident education. Surgical videos serve as a platform to distribute important operative nuances in rich multimedia. Effective and practical camera system setups are needed to encourage the continued creation of videos to illustrate the surgical maneuvers in minimally invasive and complex spinal surgery. We describe here a novel portable shoulder-mounted camera system setup specifically designed to be worn and used for long periods of time in the operating room.

  2. Video monitoring system for car seat

    NASA Technical Reports Server (NTRS)

    Elrod, Susan Vinz (Inventor); Dabney, Richard W. (Inventor)

    2004-01-01

    A video monitoring system for use with a child car seat has video camera(s) mounted in the car seat. The video images are wirelessly transmitted to a remote receiver/display encased in a portable housing that can be removably mounted in the vehicle in which the car seat is installed.

  3. Assessment of the DoD Embedded Media Program

    DTIC Science & Technology

    2004-09-01

    Excerpt on weapons systems video, gun camera video, and lipstick cameras: a SECDEF and CJCS message to commanders stated, "Put in place mechanisms and processes ... of public communication activities." The 10 February 2003 PAG stated, "Use of lipstick and helmet-mounted cameras on combat sorties is approved."

  4. NV-CMOS HD camera for day/night imaging

    NASA Astrophysics Data System (ADS)

    Vogelsong, T.; Tower, J.; Sudol, Thomas; Senko, T.; Chodelka, D.

    2014-06-01

    SRI International (SRI) has developed a new multi-purpose day/night video camera with low-light imaging performance comparable to an image intensifier, while offering the size, weight, ruggedness, and cost advantages enabled by the use of SRI's NV-CMOS HD digital image sensor chip. The digital video output is ideal for image enhancement, sharing with others through networking, video capture for data analysis, or fusion with thermal cameras. The camera provides Camera Link output with HD/WUXGA resolution of 1920 x 1200 pixels operating at 60 Hz. Windowing to smaller sizes enables operation at higher frame rates. High sensitivity is achieved through use of backside illumination, providing high Quantum Efficiency (QE) across the visible and near infrared (NIR) bands (peak QE >90%), as well as projected low-noise (<2 e-) readout. Power consumption is minimized in the camera, which operates from a single 5V supply. The NV-CMOS HD camera provides a substantial reduction in size, weight, and power (SWaP), ideal for SWaP-constrained day/night imaging platforms such as UAVs, ground vehicles, and fixed-mount surveillance, and may be reconfigured for mobile soldier operations such as night vision goggles and weapon sights. In addition, the camera with the NV-CMOS HD imager is suitable for high-performance digital cinematography/broadcast systems, biofluorescence/microscopy imaging, day/night security and surveillance, and other high-end applications which require HD video imaging with high sensitivity and wide dynamic range. The camera comes with an array of lens mounts including C-mount and F-mount. The latest test data from the NV-CMOS HD camera will be presented.
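    The abstract's note that windowing the sensor enables higher frame rates follows from a roughly constant pixel throughput; the sketch below illustrates that trade-off using the full-frame figures given above (the constant-throughput assumption and the example window sizes are ours, not SRI's).

    ```python
    # Illustrative sketch (not vendor code): windowing the NV-CMOS HD sensor to
    # smaller sizes enables higher frame rates. Assuming the sensor's pixel
    # throughput stays roughly constant, the achievable frame rate scales
    # inversely with the window size.

    FULL_W, FULL_H, FULL_FPS = 1920, 1200, 60   # full-frame mode from the abstract
    PIXEL_RATE = FULL_W * FULL_H * FULL_FPS     # ~138 Mpixel/s throughput

    def windowed_fps(win_w: int, win_h: int) -> float:
        """Estimate frame rate for a windowed readout at constant pixel rate."""
        return PIXEL_RATE / (win_w * win_h)

    for w, h in [(1920, 1200), (1920, 600), (960, 600), (640, 480)]:
        print(f"{w}x{h}: ~{windowed_fps(w, h):.0f} fps")
    ```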

  5. Surgeon point-of-view recording: Using a high-definition head-mounted video camera in the operating room.

    PubMed

    Nair, Akshay Gopinathan; Kamal, Saurabh; Dave, Tarjani Vivek; Mishra, Kapil; Reddy, Harsha S; Della Rocca, David; Della Rocca, Robert C; Andron, Aleza; Jain, Vandana

    2015-10-01

    To study the utility of a commercially available small, portable ultra-high definition (HD) camera (GoPro Hero 4) for intraoperative recording. A head mount was used to fix the camera on the operating surgeon's head. Due care was taken to protect the patient's identity. The recorded video was subsequently edited and used as a teaching tool. This retrospective, noncomparative study was conducted at three tertiary eye care centers. The surgeries recorded were ptosis correction, ectropion correction, dacryocystorhinostomy, angular dermoid excision, enucleation, blepharoplasty and lid tear repair surgery (one each). The recorded videos were reviewed, edited, and checked for clarity, resolution, and reproducibility. The recorded videos were found to be of high quality, allowing for zooming and clear visualization of the surgical anatomy. Minimal distortion is a drawback that can be effectively addressed during postproduction. The camera, owing to its light weight and small size, can be mounted on the surgeon's head, thus offering a unique surgeon point-of-view. In our experience, the results were of good quality and reproducible. A head-mounted ultra-HD video recording system is a cheap, high-quality, and unobtrusive technique to record surgery and can be a useful teaching tool in external facial and ophthalmic plastic surgery.

  6. Surgeon point-of-view recording: Using a high-definition head-mounted video camera in the operating room

    PubMed Central

    Nair, Akshay Gopinathan; Kamal, Saurabh; Dave, Tarjani Vivek; Mishra, Kapil; Reddy, Harsha S; Rocca, David Della; Rocca, Robert C Della; Andron, Aleza; Jain, Vandana

    2015-01-01

    Objective: To study the utility of a commercially available small, portable ultra-high definition (HD) camera (GoPro Hero 4) for intraoperative recording. Methods: A head mount was used to fix the camera on the operating surgeon's head. Due care was taken to protect the patient's identity. The recorded video was subsequently edited and used as a teaching tool. This retrospective, noncomparative study was conducted at three tertiary eye care centers. The surgeries recorded were ptosis correction, ectropion correction, dacryocystorhinostomy, angular dermoid excision, enucleation, blepharoplasty and lid tear repair surgery (one each). The recorded videos were reviewed, edited, and checked for clarity, resolution, and reproducibility. Results: The recorded videos were found to be of high quality, allowing for zooming and clear visualization of the surgical anatomy. Minimal distortion is a drawback that can be effectively addressed during postproduction. The camera, owing to its light weight and small size, can be mounted on the surgeon's head, thus offering a unique surgeon point-of-view. In our experience, the results were of good quality and reproducible. Conclusions: A head-mounted ultra-HD video recording system is a cheap, high-quality, and unobtrusive technique to record surgery and can be a useful teaching tool in external facial and ophthalmic plastic surgery. PMID:26655001

  7. The Surgeon's View: Comparison of Two Digital Video Recording Systems in Veterinary Surgery.

    PubMed

    Giusto, Gessica; Caramello, Vittorio; Comino, Francesco; Gandini, Marco

    2015-01-01

    Video recording and photography during surgical procedures are useful in veterinary medicine for several reasons, including legal, educational, and archival purposes. Many systems are available, such as hand cameras, light-mounted cameras, and head cameras. We chose a reasonably priced head camera that is among the smallest video cameras available. To best describe its possible uses and advantages, we recorded video and images of eight different surgical cases and procedures, both in hospital and field settings. All procedures were recorded both with a head-mounted camera and a commercial hand-held photo camera. Then sixteen volunteers (eight senior clinicians and eight final-year students) completed an evaluation questionnaire. Both cameras produced high-quality photographs and videos, but observers rated the head camera significantly better regarding point of view and their understanding of the surgical operation. The head camera was considered significantly more useful in teaching surgical procedures. Interestingly, senior clinicians tended to assign generally lower scores compared to students. The head camera we tested is an effective, easy-to-use tool for recording surgeries and various veterinary procedures in all situations, with no need for assistance from a dedicated operator. It can be a valuable aid for veterinarians working in all fields of the profession and a useful tool for veterinary surgical education.

  8. Voss with video camera in Service Module

    NASA Image and Video Library

    2001-04-08

    ISS002-E-5329 (08 April 2001) --- Astronaut James S. Voss, Expedition Two flight engineer, sets up a video camera on a mounting bracket in the Zvezda / Service Module of the International Space Station (ISS). A 35mm camera and a digital still camera are also visible nearby. This image was recorded with a digital still camera.

  9. Patterned Video Sensors For Low Vision

    NASA Technical Reports Server (NTRS)

    Juday, Richard D.

    1996-01-01

    Miniature video cameras containing photoreceptors arranged in prescribed non-Cartesian patterns have been proposed to compensate partly for some visual defects. The cameras, accompanied by (and possibly integrated with) miniature head-mounted video display units, would restore some visual function in humans whose visual fields are reduced by defects like retinitis pigmentosa.

  10. Contributions of Head-Mounted Cameras to Studying the Visual Environments of Infants and Young Children

    ERIC Educational Resources Information Center

    Smith, Linda B.; Yu, Chen; Yoshida, Hanako; Fausey, Caitlin M.

    2015-01-01

    Head-mounted video cameras (with and without an eye camera to track gaze direction) are being increasingly used to study infants' and young children's visual environments and provide new and often unexpected insights about the visual world from a child's point of view. The challenge in using head cameras is principally conceptual and concerns the…

  11. Surgical video recording with a modified GoPro Hero 4 camera

    PubMed Central

    Lin, Lily Koo

    2016-01-01

    Background Surgical videography can provide analytical self-examination for the surgeon, teaching opportunities for trainees, and allow for surgical case presentations. This study examined whether a modified GoPro Hero 4 camera with a 25 mm lens could prove to be a cost-effective method of surgical videography with enough detail for oculoplastic and strabismus surgery. Method The stock lens mount and lens were removed from a GoPro Hero 4 camera, and the camera was refitted with a Peau Productions SuperMount and 25 mm lens. The modified GoPro Hero 4 camera was then fixed to an overhead surgical light. Results Camera settings were set to 1080p video resolution. The 25 mm lens allowed for nine times the magnification of the stock GoPro lens. There was no noticeable video distortion. The entire cost was less than 600 USD. Conclusion The adapted GoPro Hero 4 with a 25 mm lens allows for high-definition, cost-effective, portable video capture of oculoplastic and strabismus surgery. The 25 mm lens allows for detailed videography that can enhance surgical teaching and self-examination. PMID:26834455
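    The reported nine-times magnification is consistent with the ratio of focal lengths for a fixed sensor size; the quick check below assumes a nominal stock-lens focal length of about 2.8 mm, which is not stated in the record.

    ```python
    # Back-of-the-envelope check of the reported 9x magnification (a sketch, not
    # the author's calculation). For a fixed sensor size, angular magnification
    # scales roughly with the ratio of focal lengths; the stock GoPro Hero 4
    # lens focal length below (~2.8 mm) is an assumed nominal value.

    STOCK_FOCAL_MM = 2.8         # assumed stock GoPro lens focal length
    REPLACEMENT_FOCAL_MM = 25.0  # Peau Productions lens used in the study

    magnification = REPLACEMENT_FOCAL_MM / STOCK_FOCAL_MM
    print(f"Approximate magnification: {magnification:.1f}x")  # ~8.9x, consistent with 9x
    ```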

  12. Surgical video recording with a modified GoPro Hero 4 camera.

    PubMed

    Lin, Lily Koo

    2016-01-01

    Surgical videography can provide analytical self-examination for the surgeon, teaching opportunities for trainees, and allow for surgical case presentations. This study examined whether a modified GoPro Hero 4 camera with a 25 mm lens could prove to be a cost-effective method of surgical videography with enough detail for oculoplastic and strabismus surgery. The stock lens mount and lens were removed from a GoPro Hero 4 camera, and the camera was refitted with a Peau Productions SuperMount and 25 mm lens. The modified GoPro Hero 4 camera was then fixed to an overhead surgical light. Camera settings were set to 1080p video resolution. The 25 mm lens allowed for nine times the magnification of the stock GoPro lens. There was no noticeable video distortion. The entire cost was less than 600 USD. The adapted GoPro Hero 4 with a 25 mm lens allows for high-definition, cost-effective, portable video capture of oculoplastic and strabismus surgery. The 25 mm lens allows for detailed videography that can enhance surgical teaching and self-examination.

  13. Help for the Visually Impaired

    NASA Technical Reports Server (NTRS)

    1995-01-01

    The Low Vision Enhancement System (LVES) is a video headset that offers people with low vision a view of their surroundings equivalent to the image on a five-foot television screen four feet from the viewer. It will not make the blind see, but for many people with low vision it eases everyday activities such as reading, watching TV, and shopping. LVES was developed over almost a decade of cooperation between Stennis Space Center, the Wilmer Eye Institute of the Johns Hopkins Medical Institutions, the Department of Veterans Affairs, and Visionics Corporation. With the aid of Stennis scientists, Wilmer researchers used NASA technology for computer processing of satellite images and head-mounted vision enhancement systems originally intended for the space station. The unit consists of a head-mounted video display, three video cameras, and a control unit for the cameras. The cameras feed images to the video display in the headset.

  14. Representing videos in tangible products

    NASA Astrophysics Data System (ADS)

    Fageth, Reiner; Weiting, Ralf

    2014-03-01

    Videos can be taken with nearly every camera: digital point-and-shoot cameras, DSLRs, as well as smartphones, and increasingly with so-called action cameras mounted on sports devices. The implementation of videos by generating QR codes and extracting relevant pictures from the video stream via software was the subject of last year's paper. This year we present first data about what content is displayed and how users represent their videos in printed products, e.g. CEWE PHOTOBOOKS and greeting cards. We report the share of the different video formats used, the number of images extracted from the video in order to represent it, the positions in the book, and different design strategies compared to regular books.
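    As a rough illustration of the workflow described here (not the authors' software), the sketch below extracts representative stills from a clip and generates a QR code linking the printed page back to the hosted video; the file name, URL, and sampling interval are hypothetical.

    ```python
    # A minimal sketch of the described workflow: pull representative still
    # frames out of a video for the printed page and generate a QR code that
    # links back to the hosted clip. Requires opencv-python and the qrcode
    # package; the input file and URL are hypothetical.
    import cv2
    import qrcode

    def extract_frames(video_path: str, every_n_seconds: float = 5.0) -> list:
        """Grab one frame every few seconds to represent the video in print."""
        cap = cv2.VideoCapture(video_path)
        fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
        step = int(fps * every_n_seconds)
        frames, idx = [], 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            if idx % step == 0:
                frames.append(frame)
            idx += 1
        cap.release()
        return frames

    frames = extract_frames("holiday_clip.mp4")       # hypothetical input file
    for i, f in enumerate(frames):
        cv2.imwrite(f"print_frame_{i:03d}.jpg", f)    # stills for the photo book page
    qrcode.make("https://example.com/videos/holiday_clip").save("clip_qr.png")
    ```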

  15. Enhanced Video-Oculography System

    NASA Technical Reports Server (NTRS)

    Moore, Steven T.; MacDougall, Hamish G.

    2009-01-01

    A previously developed video-oculography system has been enhanced for use in measuring vestibulo-ocular reflexes of a human subject in a centrifuge, motor vehicle, or other setting. The system as previously developed included a lightweight digital video camera mounted on goggles. The left eye was illuminated by an infrared light-emitting diode via a dichroic mirror, and the camera captured images of the left eye in infrared light. To extract eye-movement data, the digitized video images were processed by software running in a laptop computer. Eye movements were calibrated by having the subject view a target pattern, fixed with respect to the subject's head, generated by a goggle-mounted laser with a diffraction grating. The system as enhanced includes a second camera for imaging the scene from the subject's perspective, and two inertial measurement units (IMUs) for measuring linear accelerations and rates of rotation for computing head movements. One IMU is mounted on the goggles, the other on the centrifuge or vehicle frame. All eye-movement and head-motion data are time-stamped. In addition, the subject's point of regard is superimposed on each scene image to enable analysis of patterns of gaze in real time.
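    Because the eye-movement and head-motion streams are recorded independently and only share time stamps, combining them requires resampling one stream onto the other's timeline. The sketch below shows one plausible way to do this with linear interpolation; the sample rates and signal are illustrative assumptions, not details from the record.

    ```python
    # A minimal sketch of one step the abstract implies: the time-stamped IMU
    # stream must be resampled onto the eye camera's frame timestamps before
    # gaze and head motion can be combined. Rates and signals are illustrative.
    import numpy as np

    # Hypothetical recordings: 100 Hz eye tracker, 200 Hz IMU, shared clock.
    eye_t = np.arange(0.0, 10.0, 0.01)      # eye-frame timestamps (s)
    imu_t = np.arange(0.0, 10.0, 0.005)     # IMU sample timestamps (s)
    imu_yaw_rate = np.sin(imu_t)            # placeholder yaw-rate signal (rad/s)

    # Linear interpolation aligns each IMU channel to the eye-data timeline.
    yaw_rate_at_eye_frames = np.interp(eye_t, imu_t, imu_yaw_rate)
    print(yaw_rate_at_eye_frames.shape)     # one IMU value per eye frame
    ```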

  16. Recording stereoscopic 3D neurosurgery with a head-mounted 3D camera system.

    PubMed

    Lee, Brian; Chen, Brian R; Chen, Beverly B; Lu, James Y; Giannotta, Steven L

    2015-06-01

    Stereoscopic three-dimensional (3D) imaging can present more information to the viewer and further enhance the learning experience over traditional two-dimensional (2D) video. Most 3D surgical videos are recorded from the operating microscope and only feature the crux, or the most important part of the surgery, leaving out other crucial parts of surgery including the opening, approach, and closing of the surgical site. In addition, many other surgeries including complex spine, trauma, and intensive care unit procedures are also rarely recorded. We describe and share our experience with a commercially available head-mounted stereoscopic 3D camera system to obtain stereoscopic 3D recordings of these seldom recorded aspects of neurosurgery. The strengths and limitations of using the GoPro® 3D system as a head-mounted stereoscopic 3D camera system in the operating room are reviewed in detail. Over the past several years, we have recorded in stereoscopic 3D over 50 cranial and spinal surgeries and created a library for education purposes. We have found the head-mounted stereoscopic 3D camera system to be a valuable asset to supplement 3D footage from a 3D microscope. We expect that these comprehensive 3D surgical videos will become an important facet of resident education and ultimately lead to improved patient care.

  17. Advancement of thyroid surgery video recording: A comparison between two full HD head mounted video cameras.

    PubMed

    Ortensi, Andrea; Panunzi, Andrea; Trombetta, Silvia; Cattaneo, Alberto; Sorrenti, Salvatore; D'Orazi, Valerio

    2017-05-01

    The aim of this study was to test two different video cameras and recording systems used in thyroid surgery in our Department, in an attempt to record the surgeon's real, magnified point of view and make the viewer aware of how it differs from naked-eye vision. In this retrospective study, we recorded and compared twenty thyroidectomies performed using loupe magnification and microsurgical technique: ten were recorded with the commercially available GoPro® 4 Session action cam and ten with our new prototype head-mounted video camera. Settings were selected before surgery for both cameras. Recording time ranges from about 1 to 2 h for the GoPro® and from 3 to 5 h for our prototype. The average preparation time to fit the camera on the surgeon's head and set up its functions is about 5 min for the GoPro® and 7-8 min for the prototype, mostly owing to the HDMI cable. Videos recorded with the prototype require no further editing, whereas editing is mandatory for videos recorded with the GoPro® to highlight the surgical details. The present study showed that our prototype video camera, compared with the GoPro® 4 Session, guarantees the best results in terms of surgical video recording quality, provides the viewer with the exact perspective of the microsurgeon, and accurately shows the surgeon's magnified view through the loupes in thyroid surgery. These recordings are surgical aids for teaching and education and might serve as a method of self-analysis of surgical technique.

  18. Installing Snowplow Cameras and Integrating Images into MnDOT's Traveler Information System

    DOT National Transportation Integrated Search

    2017-10-01

    In 2015 and 2016, the Minnesota Department of Transportation (MnDOT) installed network video dash- and ceiling-mounted cameras on 226 snowplows, approximately one-quarter of MnDOT's total snowplow fleet. The cameras were integrated with the onboard m...

  19. Helms in FGB/Zarya with cameras

    NASA Image and Video Library

    2001-06-08

    ISS002-E-6526 (8 June 2001) --- Astronaut Susan J. Helms, Expedition Two flight engineer, mounts a video camera onto a bracket in the Zarya or Functional Cargo Block (FGB) of the International Space Station (ISS). The image was recorded with a digital still camera.

  20. Very High-Speed Digital Video Capability for In-Flight Use

    NASA Technical Reports Server (NTRS)

    Corda, Stephen; Tseng, Ting; Reaves, Matthew; Mauldin, Kendall; Whiteman, Donald

    2006-01-01

    A digital video camera system has been qualified for use in flight on the NASA supersonic F-15B Research Testbed aircraft. This system is capable of very-high-speed color digital imaging at flight speeds up to Mach 2. The components of this system have been ruggedized and shock-mounted in the aircraft to survive the severe pressure, temperature, and vibration of the flight environment. The system includes two synchronized camera subsystems installed in fuselage-mounted camera pods (see Figure 1). Each camera subsystem comprises a camera controller/recorder unit and a camera head. The two camera subsystems are synchronized by use of an M-Hub™ synchronization unit. Each camera subsystem is capable of recording at a rate up to 10,000 pictures per second (pps). A state-of-the-art complementary metal oxide/semiconductor (CMOS) sensor in the camera head has a maximum resolution of 1,280 × 1,024 pixels at 1,000 pps. Exposure times of the electronic shutter of the camera range from 1/200,000 of a second to full open. The recorded images are captured in a dynamic random-access memory (DRAM) and can be downloaded directly to a personal computer or saved on a compact flash memory card. In addition to the high-rate recording of images, the system can display images in real time at 30 pps. Inter Range Instrumentation Group (IRIG) time code can be inserted into the individual camera controllers or into the M-Hub unit. The video data could also be used to obtain quantitative, three-dimensional trajectory information. The first use of this system was in support of the Space Shuttle Return to Flight effort. Data were needed to help in understanding how thermally insulating foam is shed from a space shuttle external fuel tank during launch. The cameras captured images of simulated external tank debris ejected from a fixture mounted under the centerline of the F-15B aircraft. Digital video was obtained at subsonic and supersonic flight conditions, including speeds up to Mach 2 and altitudes up to 50,000 ft (15.24 km). The digital video was used to determine the structural survivability of the debris in a real flight environment and quantify the aerodynamic trajectories of the debris.
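    Capturing to DRAM rather than writing directly to disk makes sense given the raw data rate; the back-of-the-envelope figures below use the resolution and frame rate quoted above, with an assumed 8-bit pixel depth and an assumed 8 GB buffer (neither is stated in the record).

    ```python
    # Illustrative arithmetic (not from the article): at the camera's maximum
    # resolution and rate, the raw pixel stream is far too fast for disk, which
    # is why frames go to DRAM first. Buffer size and pixel depth are assumed.

    W, H, FPS = 1280, 1024, 1000   # maximum-resolution mode from the article
    BYTES_PER_PIXEL = 1            # assumed 8-bit-per-pixel equivalent
    DRAM_BYTES = 8 * 1024**3       # assumed 8 GB on-camera buffer

    rate = W * H * FPS * BYTES_PER_PIXEL              # bytes per second
    print(f"Data rate: {rate / 1e9:.2f} GB/s")        # ~1.31 GB/s
    print(f"Record time: {DRAM_BYTES / rate:.1f} s")  # ~6.6 s before the buffer fills
    ```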

  1. On-line content creation for photo products: understanding what the user wants

    NASA Astrophysics Data System (ADS)

    Fageth, Reiner

    2015-03-01

    This paper describes how videos can be implemented into printed photo books and greeting cards. We will show that, surprisingly or not, pictures from videos are used similarly to classical images to tell compelling stories. Videos can be taken with nearly every camera: digital point-and-shoot cameras, DSLRs, as well as smartphones, and increasingly with so-called action cameras mounted on sports devices. The implementation of videos by generating QR codes and extracting relevant pictures from the video stream via software was the subject of last year's paper. This year we present first data about what content is displayed and how users represent their videos in printed products, e.g. CEWE PHOTOBOOKS and greeting cards. We report the share of the different video formats used.

  2. KSC-04pd1226

    NASA Image and Video Library

    2004-05-19

    KENNEDY SPACE CENTER, FLA. -- Johnson Controls operator Rick Wetherington checks out one of the recently acquired Contraves-Goerz Kineto Tracking Mounts (KTM). There are 10 KTMs certified for use on the Eastern Range. The KTM, which is trailer-mounted with an electric drive tracking mount, includes a two-camera, camera control unit that will be used during launches. The KTM is designed for remotely controlled operations and offers a combination of film, shuttered and high-speed digital video, and FLIR cameras configured with 20-inch to 150-inch focal length lenses. The KTMs are generally placed in the field and checked out the day before a launch and manned 3 hours prior to liftoff.

  3. KSC-04pd1220

    NASA Image and Video Library

    2004-05-19

    KENNEDY SPACE CENTER, FLA. -- Johnson Controls operator Kenny Allen works on the recently acquired Contraves-Goerz Kineto Tracking Mount (KTM). Trailer-mounted with a center console/seat and electric drive tracking mount, the KTM includes a two-camera, camera control unit that will be used during launches. The KTM is designed for remotely controlled operations and offers a combination of film, shuttered and high-speed digital video, and FLIR cameras configured with 20-inch to 150-inch focal length lenses. The KTMs are generally placed in the field and checked out the day before a launch and manned 3 hours prior to liftoff. There are 10 KTMs certified for use on the Eastern Range.

  4. KSC-04pd1219

    NASA Image and Video Library

    2004-05-19

    KENNEDY SPACE CENTER, FLA. -- Johnson Controls operator Kenny Allen works on the recently acquired Contraves-Goerz Kineto Tracking Mount (KTM). Trailer-mounted with a center console/seat and electric drive tracking mount, the KTM includes a two-camera, camera control unit that will be used during launches. The KTM is designed for remotely controlled operations and offers a combination of film, shuttered and high-speed digital video, and FLIR cameras configured with 20-inch to 150-inch focal length lenses. The KTMs are generally placed in the field and checked out the day before a launch and manned 3 hours prior to liftoff. There are 10 KTMs certified for use on the Eastern Range.

  5. KSC-04pd1227

    NASA Image and Video Library

    2004-05-19

    KENNEDY SPACE CENTER, FLA. -- Johnson Controls operator Kenny Allen checks out one of the recently acquired Contraves-Goerz Kineto Tracking Mounts (KTM). There are 10 KTMs certified for use on the Eastern Range. The KTM, which is trailer-mounted with an electric drive tracking mount, includes a two-camera, camera control unit that will be used during launches. The KTM is designed for remotely controlled operations and offers a combination of film, shuttered and high-speed digital video, and FLIR cameras configured with 20-inch to 150-inch focal length lenses. The KTMs are generally placed in the field and checked out the day before a launch and manned 3 hours prior to liftoff.

  6. Obstacles encountered in the development of the low vision enhancement system.

    PubMed

    Massof, R W; Rickman, D L

    1992-01-01

    The Johns Hopkins Wilmer Eye Institute and the NASA Stennis Space Center are collaborating on the development of a new high technology low vision aid called the Low Vision Enhancement System (LVES). The LVES consists of a binocular head-mounted video display system, video cameras mounted on the head-mounted display, and real-time video image processing in a system package that is battery powered and portable. Through a phased development approach, several generations of the LVES can be made available to the patient in a timely fashion. This paper describes the LVES project with major emphasis on technical problems encountered or anticipated during the development process.

  7. Astronaut Susan J. Helms Mounts a Video Camera in Zarya

    NASA Technical Reports Server (NTRS)

    2001-01-01

    Astronaut Susan J. Helms, Expedition Two flight engineer, mounts a video camera onto a bracket in the Russian Zarya or Functional Cargo Block (FGB) of the International Space Station (ISS). Launched by a Russian Proton rocket from the Baikonur Cosmodrome on November 20, 1998, the United States-funded and Russian-built Zarya was the first element of the ISS, followed by the U.S. Unity Node.

  8. Mounted Video Camera Captures Launch of STS-112, Shuttle Orbiter Atlantis

    NASA Technical Reports Server (NTRS)

    2002-01-01

    A color video camera mounted to the top of the External Tank (ET) provided this spectacular never-before-seen view of the STS-112 mission as the Space Shuttle Orbiter Atlantis lifted off in the afternoon of October 7, 2002. The camera provided views as the orbiter began its ascent until it reached near-orbital speed, about 56 miles above the Earth, including a view of the front and belly of the orbiter, a portion of the Solid Rocket Booster, and ET. The video was downlinked during flight to several NASA data-receiving sites, offering the STS-112 team an opportunity to monitor the shuttle's performance from a new angle. Atlantis carried the S1 Integrated Truss Structure and the Crew and Equipment Translation Aid (CETA) Cart. The CETA is the first of two human-powered carts that will ride along the International Space Station's railway providing a mobile work platform for future extravehicular activities by astronauts. Landing on October 18, 2002, the Orbiter Atlantis ended its 11-day mission.

  9. Mounted Video Camera Captures Launch of STS-112, Shuttle Orbiter Atlantis

    NASA Technical Reports Server (NTRS)

    2002-01-01

    A color video camera mounted to the top of the External Tank (ET) provided this spectacular never-before-seen view of the STS-112 mission as the Space Shuttle Orbiter Atlantis lifted off in the afternoon of October 7, 2002. The camera provided views as the orbiter began its ascent until it reached near-orbital speed, about 56 miles above the Earth, including a view of the front and belly of the orbiter, a portion of the Solid Rocket Booster, and ET. The video was downlinked during flight to several NASA data-receiving sites, offering the STS-112 team an opportunity to monitor the shuttle's performance from a new angle. Atlantis carried the S1 Integrated Truss Structure and the Crew and Equipment Translation Aid (CETA) Cart. The CETA is the first of two human-powered carts that will ride along the International Space Station's railway providing a mobile work platform for future extravehicular activities by astronauts. Landing on October 18, 2002, the Orbiter Atlantis ended its 11-day mission.

  10. Semantic Information Extraction of Lanes Based on Onboard Camera Videos

    NASA Astrophysics Data System (ADS)

    Tang, L.; Deng, T.; Ren, C.

    2018-04-01

    In the field of autonomous driving, semantic information about lanes is very important. This paper proposes a method for automatic detection of lanes and extraction of semantic information from onboard camera videos. The proposed method first detects the edges of lanes using the grayscale gradient direction and fits them with an improved Probabilistic Hough transform; it then uses the vanishing-point principle to calculate the geometric position of the lanes, and extracts lane semantic information from lane characteristics via decision-tree classification. In the experiment, 216 road video images captured by a camera mounted onboard a moving vehicle were used to detect lanes and extract lane semantic information. The results show that the proposed method can accurately identify lane semantics from video images.
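    For readers unfamiliar with the pipeline, the sketch below shows the first two stages (edge detection and probabilistic Hough line fitting) using stock OpenCV; the thresholds are illustrative, and the paper's gradient-direction refinement, vanishing-point geometry, and decision-tree classification are not reproduced here.

    ```python
    # A minimal sketch of the first two stages the paper describes, using stock
    # OpenCV. Thresholds are illustrative; the input frame is hypothetical.
    import cv2
    import numpy as np

    frame = cv2.imread("road_frame.jpg")              # hypothetical onboard-camera frame
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)                  # gradient-based edge map

    # Probabilistic Hough transform fits candidate lane-line segments to the edges.
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=50,
                            minLineLength=40, maxLineGap=20)

    if lines is not None:
        for x1, y1, x2, y2 in lines[:, 0]:
            cv2.line(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
    cv2.imwrite("lanes_overlay.jpg", frame)
    ```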

  11. KSC-04pd1223

    NASA Image and Video Library

    2004-05-19

    KENNEDY SPACE CENTER, FLA. -- Johnson Controls operator Kenny Allen makes adjustments on one of the recently acquired Contraves-Goerz Kineto Tracking Mounts (KTM). There are 10 KTMs certified for use on the Eastern Range. The KTM, which is trailer-mounted with a center console/seat and electric drive tracking mount, includes a two-camera, camera control unit that will be used during launches. The KTM is designed for remotely controlled operations and offers a combination of film, shuttered and high-speed digital video, and FLIR cameras configured with 20-inch to 150-inch focal length lenses. The KTMs are generally placed in the field and checked out the day before a launch and manned 3 hours prior to liftoff.

  12. KSC-04pd1221

    NASA Image and Video Library

    2004-05-19

    KENNEDY SPACE CENTER, FLA. -- Johnson Controls operators Rick Wetherington (left) and Kenny Allen work on one of the recently acquired Contraves-Goerz Kineto Tracking Mounts (KTM). There are 10 KTMs certified for use on the Eastern Range. The KTM, which is trailer-mounted with a center console/seat and electric drive tracking mount, includes a two-camera, camera control unit that will be used during launches. The KTM is designed for remotely controlled operations and offers a combination of film, shuttered and high-speed digital video, and FLIR cameras configured with 20-inch to 150-inch focal length lenses. The KTMs are generally placed in the field and checked out the day before a launch and manned 3 hours prior to liftoff.

  13. KSC-04pd1225

    NASA Image and Video Library

    2004-05-19

    KENNEDY SPACE CENTER, FLA. -- Johnson Controls operator Kenny Allen stands in the center console area of one of the recently acquired Contraves-Goerz Kineto Tracking Mounts (KTM). There are 10 KTMs certified for use on the Eastern Range. The KTM, which is trailer-mounted with an electric-drive tracking mount, includes a two-camera, camera control unit that will be used during launches. The KTM is designed for remotely controlled operations and offers a combination of film, shuttered and high-speed digital video, and FLIR cameras configured with 20-inch to 150-inch focal length lenses. The KTMs are generally placed in the field and checked out the day before a launch and manned 3 hours prior to liftoff.

  14. KSC-04pd1224

    NASA Image and Video Library

    2004-05-19

    KENNEDY SPACE CENTER, FLA. -- Johnson Controls operator Rick Wetherington sits in the center console seat of one of the recently acquired Contraves-Goerz Kineto Tracking Mounts (KTM). There are 10 KTMs certified for use on the Eastern Range. The KTM, which is trailer-mounted with an electric drive tracking mount, includes a two-camera, camera control unit that will be used during launches. The KTM is designed for remotely controlled operations and offers a combination of film, shuttered and high-speed digital video, and FLIR cameras configured with 20-inch to 150-inch focal length lenses. The KTMs are generally placed in the field and checked out the day before a launch and manned 3 hours prior to liftoff.

  15. KSC-04pd1222

    NASA Image and Video Library

    2004-05-19

    KENNEDY SPACE CENTER, FLA. -- Johnson Controls operators Rick Wetherington (left) and Kenny Allen work on two of the recently acquired Contraves-Goerz Kineto Tracking Mounts (KTM). There are 10 KTMs certified for use on the Eastern Range. The KTM, which is trailer-mounted with a center console/seat and electric drive tracking mount, includes a two-camera, camera control unit that will be used during launches. The KTM is designed for remotely controlled operations and offers a combination of film, shuttered and high-speed digital video, and FLIR cameras configured with 20-inch to 150-inch focal length lenses. The KTMs are generally placed in the field and checked out the day before a launch and manned 3 hours prior to liftoff.

  16. Ranging Apparatus and Method Implementing Stereo Vision System

    NASA Technical Reports Server (NTRS)

    Li, Larry C. (Inventor); Cox, Brian J. (Inventor)

    1997-01-01

    A laser-directed ranging system for use in telerobotics applications and other applications involving physically handicapped individuals. The ranging system includes a left and right video camera mounted on a camera platform, and a remotely positioned operator. The position of the camera platform is controlled by three servo motors to orient the roll axis, pitch axis and yaw axis of the video cameras, based upon an operator input such as head motion. A laser is provided between the left and right video camera and is directed by the user to point to a target device. The images produced by the left and right video cameras are processed to eliminate all background images except for the spot created by the laser. This processing is performed by creating a digital image of the target prior to illumination by the laser, and then eliminating common pixels from the subsequent digital image which includes the laser spot. The horizontal disparity between the two processed images is calculated for use in a stereometric ranging analysis from which range is determined.
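    The ranging logic lends itself to a short sketch: difference the pre- and post-laser frames to isolate the spot, locate the spot in each view, and convert the horizontal disparity to range with the standard stereo relation Z = fB/d. The calibration values below are hypothetical, and this is a simplification of the patented system, not its implementation.

    ```python
    # A sketch of the ranging logic under assumed calibration values: subtract
    # the pre-laser image to isolate the laser spot in each view, locate the
    # spot, then convert horizontal disparity to range via Z = f * B / d.
    import numpy as np

    FOCAL_PX = 800.0    # assumed focal length in pixels
    BASELINE_M = 0.12   # assumed camera separation in meters

    def spot_centroid(before: np.ndarray, after: np.ndarray, thresh: int = 40):
        """Isolate the laser spot by frame differencing; return its centroid (x, y)."""
        diff = after.astype(np.int16) - before.astype(np.int16)
        ys, xs = np.nonzero(diff > thresh)   # pixels that brightened when the laser fired
        return xs.mean(), ys.mean()

    def stereo_range(left_before, left_after, right_before, right_after) -> float:
        xl, _ = spot_centroid(left_before, left_after)
        xr, _ = spot_centroid(right_before, right_after)
        disparity = xl - xr                  # horizontal disparity in pixels
        return FOCAL_PX * BASELINE_M / disparity
    ```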

  17. Video stroke assessment (VSA) project: design and production of a prototype system for the remote diagnosis of stroke

    NASA Astrophysics Data System (ADS)

    Urias, Adrian R.; Draghic, Nicole; Lui, Janet; Cho, Angie; Curtis, Calvin; Espinosa, Joseluis; Wottawa, Christopher; Wiesmann, William P.; Schwamm, Lee H.

    2005-04-01

    Stroke remains the third most frequent cause of death in the United States and the leading cause of disability in adults. Long-term effects of ischemic stroke can be mitigated by the timely administration of Tissue Plasminogen Activator (t-PA); however, the decision regarding the appropriate use of this therapy is dependent on timely, effective neurological assessment by a trained specialist. The lack of available stroke expertise is a key barrier preventing frequent use of t-PA. We report here on the development of a prototype research system capable of performing a semi-automated neurological examination from an offsite location via the Internet and a Computed Tomography (CT) scanner to facilitate the diagnosis and treatment of acute stroke. The Video Stroke Assessment (VSA) System consists of a video camera, a camera mounting frame, and a computer with software and algorithms to collect, interpret, and store patient neurological responses to stimuli. The video camera is mounted on a mobility track in front of the patient; camera direction and zoom are remotely controlled on a graphical user interface (GUI) by the specialist. The VSA System also performs a partially-autonomous examination based on the NIH Stroke Scale (NIHSS). Various response data indicative of stroke are recorded, analyzed and transmitted in real time to the specialist. The VSA provides unbiased, quantitative results for most categories of the NIHSS along with video and audio playback to assist in accurate diagnosis. The system archives the complete exam and results.

  18. Researching Literacy in Context: Using Video Analysis to Explore School Literacies

    ERIC Educational Resources Information Center

    Blikstad-Balas, Marte; Sørvik, Gard Ove

    2015-01-01

    This article addresses how methodological approaches relying on video can be included in literacy research to capture changing literacies. In addition to arguing why literacy is best studied in context, we provide empirical examples of how small, head-mounted video cameras have been used in two different research projects that share a common aim:…

  19. A multiscale video system for studying optical phenomena during active experiments in the upper atmosphere

    NASA Astrophysics Data System (ADS)

    Nikolashkin, S. V.; Reshetnikov, A. A.

    2017-11-01

    A video surveillance system for active rocket experiments at the Polar Geophysical Observatory "Tixie" and for studying the effects of "Soyuz" vehicle launches from the "Vostochny" cosmodrome over the territory of the Republic of Sakha (Yakutia) is presented. The system consists of three AHD video cameras with different angles of view mounted on a common platform on a tripod that allows manual guiding. The main camera, built around a high-sensitivity black-and-white SONY EXview HAD II CCD matrix, is fitted, depending on the task, with an "MTO-1000" (F = 1000 mm) or "Jupiter-21M" (F = 300 mm) lens and is designed for detailed imaging of luminous formations. The second camera, of the same type but with a 30-degree angle of view, is intended for wide shots of large objects and for referencing object coordinates to the stars. The third camera, a color wide-angle unit (120 degrees) whose optical axis is directed 60 degrees downward, is used for referencing to landmarks in the daytime. The data are recorded on the hard disk of a four-channel digital video recorder. Tests of the original two-channel version of the system were conducted during the launch of a geophysical rocket at Tixie in September 2015 and showed its effectiveness.

  20. USE OF VIDEO TO ASSESS JUVENILE WINTER FLOUNDER DENSITIES AND HABITATS

    EPA Science Inventory

    We used a digital video camera mounted to a 1-m beam trawl together with an attached continuous recording YSI sonde and a GPS unit to quantify juvenile winter flounder (Pseudopleuronectes americanus) densities and fish habitat in Narragansett Bay, RI. The YSI sonde measured te...

  1. A view of the ET camera on STS-112

    NASA Technical Reports Server (NTRS)

    2002-01-01

    KENNEDY SPACE CENTER, FLA. - A view of the camera mounted on the external tank of Space Shuttle Atlantis. The color video camera mounted to the top of Atlantis' external tank will provide a view of the front and belly of the orbiter and a portion of the solid rocket boosters (SRBs) and external tank during the launch of Atlantis on mission STS-112. It will offer the STS-112 team an opportunity to monitor the shuttle's performance from a new angle. The camera will be turned on fifteen minutes prior to launch and will show the orbiter and solid rocket boosters on the launch pad. The video will be downlinked from the external tank during flight to several NASA data-receiving sites and then relayed to the live television broadcast. The camera is expected to operate for about 15 minutes following liftoff. At liftoff, viewers will see the shuttle clearing the launch tower and, at two minutes after liftoff, see the right SRB separate from the external tank. When the external tank separates from Atlantis about eight minutes into the flight, the camera is expected to continue its live feed for about six more minutes although NASA may be unable to pick up the camera's signal because the tank may have moved out of range.

  2. A view of the ET camera on STS-112

    NASA Technical Reports Server (NTRS)

    2002-01-01

    KENNEDY SPACE CENTER, FLA. - A closeup view of the camera mounted on the external tank of Space Shuttle Atlantis. The color video camera mounted to the top of Atlantis' external tank will provide a view of the front and belly of the orbiter and a portion of the solid rocket boosters (SRBs) and external tank during the launch of Atlantis on mission STS-112. It will offer the STS-112 team an opportunity to monitor the shuttle's performance from a new angle. The camera will be turned on fifteen minutes prior to launch and will show the orbiter and solid rocket boosters on the launch pad. The video will be downlinked from the external tank during flight to several NASA data-receiving sites and then relayed to the live television broadcast. The camera is expected to operate for about 15 minutes following liftoff. At liftoff, viewers will see the shuttle clearing the launch tower and, at two minutes after liftoff, see the right SRB separate from the external tank. When the external tank separates from Atlantis about eight minutes into the flight, the camera is expected to continue its live feed for about six more minutes although NASA may be unable to pick up the camera's signal because the tank may have moved out of range.

  3. The High Definition Earth Viewing (HDEV) Payload

    NASA Technical Reports Server (NTRS)

    Muri, Paul; Runco, Susan; Fontanot, Carlos; Getteau, Chris

    2017-01-01

    The High Definition Earth Viewing (HDEV) payload enables long-term experimentation with four commercial-off-the-shelf (COTS) high-definition video cameras mounted on the exterior of the International Space Station. The payload enables testing of cameras in the space environment. The HDEV cameras transmit imagery continuously to an encoder that then sends the video signal via Ethernet through the space station for downlink. The encoder, cameras, and other electronics are enclosed in a box pressurized to approximately one atmosphere, containing dry nitrogen, to provide a level of protection to the electronics from the space environment. The encoded video format supports streaming live video of Earth for viewing online. Camera sensor types include charge-coupled device and complementary metal-oxide semiconductor. Received imagery data are analyzed on the ground to evaluate camera sensor performance. Since payload deployment, minimal degradation in imagery quality has been observed. The HDEV payload continues to operate by live streaming and analyzing imagery. Results from the experiment reduce risk in the selection of cameras that could be considered for future use on the International Space Station and other spacecraft. This paper discusses the payload development, end-to-end architecture, experiment operation, resulting image analysis, and future work.

  4. The Lancashire telemedicine ambulance.

    PubMed

    Curry, G R; Harrop, N

    1998-01-01

    An emergency ambulance was equipped with three video-cameras and a system for transmitting slow-scan video-pictures through a cellular telephone link to a hospital accident and emergency department. Video-pictures were transmitted at a resolution of 320 x 240 pixels and a frame rate of 15 pictures/min. In addition, a helmet-mounted camera was used with a wireless transmission link to the ambulance and thence the hospital. Speech was transmitted by a second hand-held cellular telephone. The equipment was installed in 1996-7 and video-recordings of actual ambulance journeys were made in July 1997. The technical feasibility of the telemedicine ambulance has been demonstrated and further clinical assessment is now in progress.

  5. Wide-Field-of-View, High-Resolution, Stereoscopic Imager

    NASA Technical Reports Server (NTRS)

    Prechtl, Eric F.; Sedwick, Raymond J.

    2010-01-01

    A device combines video feeds from multiple cameras to provide wide-field-of-view, high-resolution, stereoscopic video to the user. The prototype under development consists of two camera assemblies, one for each eye. One of these assemblies incorporates a mounting structure with multiple cameras attached at offset angles. The video signals from the cameras are fed to a central processing platform where each frame is color processed and mapped into a single contiguous wide-field-of-view image. Because the resolution of most display devices is typically smaller than the processed map, a cropped portion of the video feed is output to the display device. The positioning of the cropped window will likely be controlled through the use of a head tracking device, allowing the user to turn his or her head side-to-side or up and down to view different portions of the captured image. There are multiple options for the display of the stereoscopic image. The use of head-mounted displays is one likely implementation. However, the use of 3D projection technologies is another potential approach under consideration. The technology can be adapted in a multitude of ways. The computing platform is scalable, such that the number, resolution, and sensitivity of the cameras can be leveraged to improve image resolution and field of view. Miniaturization efforts can be pursued to shrink the package down for better mobility. Power savings studies can be performed to enable unattended, remote sensing packages. Image compression and transmission technologies can be incorporated to enable an improved telepresence experience.
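    The crop-and-pan display step can be made concrete with a small sketch: given a pre-stitched panorama, a display-sized window is positioned from the user's head yaw. The panorama dimensions and field of view below are illustrative assumptions, not details from the record.

    ```python
    # A simplified sketch of the display step: once per-camera feeds have been
    # mapped into one wide panorama, only a display-sized crop is shown,
    # positioned by head orientation. The panorama is assumed pre-stitched.
    import numpy as np

    PANO_W, PANO_H = 7680, 2160    # assumed stitched panorama size
    VIEW_W, VIEW_H = 1920, 1080    # display resolution
    PANO_HFOV_DEG = 180.0          # assumed horizontal field of view of the panorama

    def crop_for_head_yaw(panorama: np.ndarray, yaw_deg: float) -> np.ndarray:
        """Return the display-sized window centered on the user's head yaw."""
        px_per_deg = PANO_W / PANO_HFOV_DEG
        center_x = int(PANO_W / 2 + yaw_deg * px_per_deg)
        x0 = int(np.clip(center_x - VIEW_W // 2, 0, PANO_W - VIEW_W))
        y0 = (PANO_H - VIEW_H) // 2
        return panorama[y0:y0 + VIEW_H, x0:x0 + VIEW_W]

    pano = np.zeros((PANO_H, PANO_W, 3), dtype=np.uint8)  # placeholder stitched frame
    view = crop_for_head_yaw(pano, yaw_deg=25.0)          # head turned 25 degrees right
    print(view.shape)                                     # (1080, 1920, 3)
    ```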

  6. The Use of Smart Glasses for Surgical Video Streaming.

    PubMed

    Hiranaka, Takafumi; Nakanishi, Yuta; Fujishiro, Takaaki; Hida, Yuichi; Tsubosaka, Masanori; Shibata, Yosaku; Okimura, Kenjiro; Uemoto, Harunobu

    2017-04-01

    Observation of surgical procedures performed by experts is extremely important for acquisition and improvement of surgical skills. Smart glasses are small computers, which comprise a head-mounted monitor and video camera, and can be connected to the internet. They can be used for remote observation of surgeries by video streaming. Although Google Glass is the most commonly used smart glasses for medical purposes, it is still unavailable commercially and has some limitations. This article reports the use of a different type of smart glasses, InfoLinker, for surgical video streaming. InfoLinker has been commercially available in Japan for industrial purposes for more than 2 years. It is connected to a video server via wireless internet directly, and streaming video can be seen anywhere an internet connection is available. We have attempted live video streaming of knee arthroplasty operations that were viewed at several different locations, including foreign countries, on a common web browser. Although the quality of video images depended on the resolution and dynamic range of the video camera, speed of internet connection, and the wearer's attention to minimize image shaking, video streaming could be easily performed throughout the procedure. The wearer could confirm the quality of the video as the video was being shot by the head-mounted display. The time and cost for observation of surgical procedures can be reduced by InfoLinker, and further improvement of hardware as well as the wearer's video shooting technique is expected. We believe that this can be used in other medical settings.

  7. Thermoelastic Analysis of Hyper-X Camera Windows Suddenly Exposed to Mach 7 Stagnation Aerothermal Shock

    NASA Technical Reports Server (NTRS)

    Ko, William L.; Gong, Leslie

    2000-01-01

    To visually record the initial free flight event of the Hyper-X research flight vehicle immediately after separation from the Pegasus® booster rocket, a video camera was mounted on the bulkhead of the adapter through which Hyper-X rides on Pegasus. The video camera was shielded by a protecting camera window made of heat-resistant quartz material. When Hyper-X separates from Pegasus, this camera window will be suddenly exposed to Mach 7 stagnation thermal shock and dynamic pressure loading (aerothermal loading). To examine the structural integrity, thermoelastic analysis was performed, and the stress distributions in the camera windows were calculated. The critical stress point where the tensile stress reaches a maximum value for each camera window was identified, and the maximum tensile stress level at that critical point was found to be considerably lower than the tensile failure stress of the camera window material.

  8. The Art of Astrophotography

    NASA Astrophysics Data System (ADS)

    Morison, Ian

    2017-02-01

    1. Imaging star trails; 2. Imaging a constellation with a DSLR and tripod; 3. Imaging the Milky Way with a DSLR and tracking mount; 4. Imaging the Moon with a compact camera or smartphone; 5. Imaging the Moon with a DSLR; 6. Imaging the Pleiades Cluster with a DSLR and small refractor; 7. Imaging the Orion Nebula, M42, with a modified Canon DSLR; 8. Telescopes and their accessories for use in astroimaging; 9. Towards stellar excellence; 10. Cooling a DSLR camera to reduce sensor noise; 11. Imaging the North American and Pelican Nebulae; 12. Combating light pollution - the bane of astrophotographers; 13. Imaging planets with an astronomical video camera or Canon DSLR; 14. Video imaging the Moon with a webcam or DSLR; 15. Imaging the Sun in white light; 16. Imaging the Sun in the light of its H-alpha emission; 17. Imaging meteors; 18. Imaging comets; 19. Using a cooled 'one shot colour' camera; 20. Using a cooled monochrome CCD camera; 21. LRGB colour imaging; 22. Narrow band colour imaging; Appendix A. Telescopes for imaging; Appendix B. Telescope mounts; Appendix C. The effects of the atmosphere; Appendix D. Auto guiding; Appendix E. Image calibration; Appendix F. Practical aspects of astroimaging.

  9. Video Capture of Plastic Surgery Procedures Using the GoPro HERO 3+.

    PubMed

    Graves, Steven Nicholas; Shenaq, Deana Saleh; Langerman, Alexander J; Song, David H

    2015-02-01

    Significant improvements can be made in recording surgical procedures, particularly in capturing high-quality video recordings from the surgeons' point of view. This study examined the utility of the GoPro HERO 3+ Black Edition camera for high-definition, point-of-view recordings of plastic and reconstructive surgery. The GoPro HERO 3+ Black Edition camera was head-mounted on the surgeon and oriented to the surgeon's perspective using the GoPro App. The camera was used to record 4 cases: 2 fat graft procedures and 2 breast reconstructions. During cases 1-3, an assistant remotely controlled the GoPro via the GoPro App. For case 4, the GoPro was linked to a WiFi remote and controlled by the surgeon. Camera settings for case 1 were as follows: 1080p video resolution; 48 fps; Protune mode on; wide field of view; 16:9 aspect ratio. The lighting contrast due to the overhead lights resulted in limited washout of the video image. Camera settings were adjusted for cases 2-4 to a narrow field of view, which enabled the camera's automatic white balance to better compensate for bright lights focused on the surgical field. Cases 2-4 captured video sufficient for teaching or presentation purposes. The GoPro HERO 3+ Black Edition camera enables high-quality, cost-effective video recording of plastic and reconstructive surgery procedures. When set to a narrow field of view and automatic white balance, the camera is able to sufficiently compensate for the contrasting light environment of the operating room and capture high-resolution, detailed video.

  10. Video photographic considerations for measuring the proximity of a probe aircraft with a smoke seeded trailing vortex

    NASA Technical Reports Server (NTRS)

    Childers, Brooks A.; Snow, Walter L.

    1990-01-01

    Considerations for acquiring and analyzing 30 Hz video frames from charge coupled device (CCD) cameras mounted in the wing tips of a Beech T-34 aircraft are described. Particular attention is given to the characterization and correction of optical distortions inherent in the data.

  11. Use of a UAV-mounted video camera to assess feeding behavior of Raramuri Criollo cows

    USDA-ARS?s Scientific Manuscript database

    Interest in use of unmanned aerial vehicles in science has increased in recent years. It is predicted that they will be a preferred remote sensing platform for applications that inform sustainable rangeland management in the future. The objective of this study was to determine whether UAV video moni...

  12. 2. CHANNEL DIMENSIONS AND ALIGNMENT RESEARCH INSTRUMENTATION. HYDRAULIC ENGINEER PILOTING ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    2. CHANNEL DIMENSIONS AND ALIGNMENT RESEARCH INSTRUMENTATION. HYDRAULIC ENGINEER PILOTING VIDEO-CONTROLLED BOAT MODEL FROM CONTROL TRAILER. NOTE VIEW FROM BOAT-MOUNTED VIDEO CAMERA SHOWN ON MONITOR, AND MODEL WATERWAY VISIBLE THROUGH WINDOW AT LEFT. - Waterways Experiment Station, Hydraulics Laboratory, Halls Ferry Road, 2 miles south of I-20, Vicksburg, Warren County, MS

  13. Evaluation of stereoscopic video cameras synchronized with the movement of an operator's head on the teleoperation of the actual backhoe shovel

    NASA Astrophysics Data System (ADS)

    Minamoto, Masahiko; Matsunaga, Katsuya

    1999-05-01

    Operator performance while using a remote-controlled backhoe shovel is described for three different stereoscopic viewing conditions: direct view, fixed stereoscopic cameras connected to a helmet-mounted display (HMD), and a rotating stereo camera slaved to the head orientation of a free-moving stereo HMD. Results showed that the head-slaved system provided the best performance.

  14. KSC-02pd1374

    NASA Image and Video Library

    2002-09-26

    KENNEDY SPACE CENTER, FLA. - A view of the camera mounted on the external tank of Space Shuttle Atlantis. The color video camera mounted to the top of Atlantis' external tank will provide a view of the front and belly of the orbiter and a portion of the solid rocket boosters (SRBs) and external tank during the launch of Atlantis on mission STS-112. It will offer the STS-112 team an opportunity to monitor the shuttle's performance from a new angle. The camera will be turned on fifteen minutes prior to launch and will show the orbiter and solid rocket boosters on the launch pad. The video will be downlinked from the external tank during flight to several NASA data-receiving sites and then relayed to the live television broadcast. The camera is expected to operate for about 15 minutes following liftoff. At liftoff, viewers will see the shuttle clearing the launch tower and, at two minutes after liftoff, see the right SRB separate from the external tank. When the external tank separates from Atlantis about eight minutes into the flight, the camera is expected to continue its live feed for about six more minutes although NASA may be unable to pick up the camera's signal because the tank may have moved out of range.

  15. KSC-02pd1376

    NASA Image and Video Library

    2002-09-26

    KENNEDY SPACE CENTER, FLA. - A closeup view of the camera mounted on the external tank of Space Shuttle Atlantis. The color video camera mounted to the top of Atlantis' external tank will provide a view of the front and belly of the orbiter and a portion of the solid rocket boosters (SRBs) and external tank during the launch of Atlantis on mission STS-112. It will offer the STS-112 team an opportunity to monitor the shuttle's performance from a new angle. The camera will be turned on fifteen minutes prior to launch and will show the orbiter and solid rocket boosters on the launch pad. The video will be downlinked from the external tank during flight to several NASA data-receiving sites and then relayed to the live television broadcast. The camera is expected to operate for about 15 minutes following liftoff. At liftoff, viewers will see the shuttle clearing the launch tower and, at two minutes after liftoff, see the right SRB separate from the external tank. When the external tank separates from Atlantis about eight minutes into the flight, the camera is expected to continue its live feed for about six more minutes although NASA may be unable to pick up the camera's signal because the tank may have moved out of range.

  16. KSC-02pd1375

    NASA Image and Video Library

    2002-09-26

    KENNEDY SPACE CENTER, FLA. - A closeup view of the camera mounted on the external tank of Space Shuttle Atlantis. The color video camera mounted to the top of Atlantis' external tank will provide a view of the front and belly of the orbiter and a portion of the solid rocket boosters (SRBs) and external tank during the launch of Atlantis on mission STS-112. It will offer the STS-112 team an opportunity to monitor the shuttle's performance from a new angle. The camera will be turned on fifteen minutes prior to launch and will show the orbiter and solid rocket boosters on the launch pad. The video will be downlinked from the external tank during flight to several NASA data-receiving sites and then relayed to the live television broadcast. The camera is expected to operate for about 15 minutes following liftoff. At liftoff, viewers will see the shuttle clearing the launch tower and, at two minutes after liftoff, see the right SRB separate from the external tank. When the external tank separates from Atlantis about eight minutes into the flight, the camera is expected to continue its live feed for about six more minutes although NASA may be unable to pick up the camera's signal because the tank may have moved out of range.

  17. Light-Directed Ranging System Implementing Single Camera System for Telerobotics Applications

    NASA Technical Reports Server (NTRS)

    Wells, Dennis L. (Inventor); Li, Larry C. (Inventor); Cox, Brian J. (Inventor)

    1997-01-01

    A laser-directed ranging system has utility in various fields, such as telerobotics applications and other applications involving physically handicapped individuals. The ranging system includes a single video camera and a directional light source, such as a laser, mounted on a camera platform, and a remotely positioned operator. In one embodiment, the position of the camera platform is controlled by three servo motors to orient the roll, pitch, and yaw axes of the video camera, based upon an operator input such as head motion. The laser is offset vertically and horizontally from the camera, and the laser/camera platform is directed by the user to point the laser and the camera toward a target device. The image produced by the video camera is processed to eliminate all background images except for the spot created by the laser. This processing is performed by creating a digital image of the target prior to illumination by the laser, and then eliminating common pixels from the subsequent digital image that includes the laser spot. A reference point is defined at a point in the video frame, which may be located outside of the image area of the camera. The disparity between the digital image of the laser spot and the reference point is calculated for use in a ranging analysis to determine range to the target.
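
    The described processing reduces to frame differencing plus disparity-based triangulation. The sketch below assumes single-channel NumPy frames; all names, the threshold, and the calibration constants are illustrative, not the patent's implementation.

    ```python
    import numpy as np

    def find_laser_spot(frame_before, frame_with_laser, threshold=40):
        """Difference a frame taken before laser illumination against one
        taken with the laser on, keeping only changed pixels; the centroid
        of the surviving pixels locates the laser spot (single-channel
        frames assumed; threshold is illustrative)."""
        diff = np.abs(frame_with_laser.astype(int) - frame_before.astype(int))
        ys, xs = np.nonzero(diff > threshold)
        if xs.size == 0:
            return None
        return xs.mean(), ys.mean()

    def range_from_disparity(spot_x, reference_x, baseline_m, focal_px):
        """Standard triangulation: range is inversely proportional to the
        disparity between the imaged spot and the reference point."""
        disparity = abs(spot_x - reference_x)
        return baseline_m * focal_px / disparity if disparity else float("inf")
    ```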

  18. Markerless client-server augmented reality system with natural features

    NASA Astrophysics Data System (ADS)

    Ning, Shuangning; Sang, Xinzhu; Chen, Duo

    2017-10-01

    A markerless client-server augmented reality system is presented. In this research, the more extensive and mature virtual reality head-mounted display is adopted to assist the implementation of augmented reality. The head-mounted display provides the viewer an image in front of their eyes. The front-facing camera captures video signals into the workstation, where the generated virtual scene is merged with the outside-world information received from the camera. The integrated video is then sent to the helmet display system. The distinguishing feature and novelty is realizing augmented reality with natural features instead of markers, which addresses the limitations of markers: they are restricted to black and white, are unsuited to varying environmental conditions, and in particular fail when the marker is partially occluded. Further, 3D stereoscopic perception of the virtual animation model is achieved. A high-speed, stable native-socket communication method is adopted for transmission of the key video stream data, which reduces the computational burden on the system.

  19. SAFER Under Vehicle Inspection Through Video Mosaic Building

    DTIC Science & Technology

    2004-01-01

    this work were taken using a Polaris Wp-300c lipstick video camera mounted on a mobile platform. Infrared video was taken using a Raytheon PalmIR PRO... Tank-Automotive Research, Development and Engineering Center, US Army RDECOM, Warren, Michigan, USA. Keywords: Inspection, Road vehicles, State security, Robotics. Abstract: The current threats to US security, both military and civilian, have led to an increased interest in the development of

  20. Use of an unmanned aerial vehicle-mounted video camera to assess feeding behavior of Raramuri Criollo cows

    USDA-ARS's Scientific Manuscript database

    We determined the feasibility of using unmanned aerial vehicle (UAV) video monitoring to predict intake of discrete food items of rangeland-raised Raramuri Criollo non-nursing beef cows. Thirty-five cows were released into a 405-m2 rectangular dry lot, either in pairs (pilot tests) or individually (...

  1. Through the Eyes of the Participant: Making Connections between Researcher and Subject with Participant Viewpoint Ethnography

    ERIC Educational Resources Information Center

    Wilhoit, Elizabeth D.; Kisselburgh, Lorraine G.

    2016-01-01

    In this article, we introduce participant viewpoint ethnography (PVE), a phenomenological video research method that combines reflexive, interview-based data with video capture of actual experiences. In PVE, participants wear a head-mounted camera to record the phenomena of study from their point of view. The researcher and participant then review…

  2. Photogrammetric Applications of Immersive Video Cameras

    NASA Astrophysics Data System (ADS)

    Kwiatek, K.; Tokarczyk, R.

    2014-05-01

    The paper investigates immersive videography and its application in close-range photogrammetry. Immersive video involves the capture of a live-action scene with a 360° field of view. It is recorded simultaneously by multiple cameras or microlenses, where the principal point of each camera is offset from the rotating axis of the device. This offset causes problems when stitching together individual frames separated from particular cameras; however, there are ways to overcome it, and applying immersive cameras in photogrammetry offers new potential. The paper presents two applications of immersive video in photogrammetry. First, the creation of a low-cost mobile mapping system based on a Ladybug®3 and a GPS device is discussed. The number of panoramas is much higher than needed for photogrammetric purposes, as the baseline between spherical panoramas is around 1 metre. More than 92,000 panoramas were recorded in the Polish region of Czarny Dunajec, and measurements from the panoramas enable the user to measure outdoor advertising structures and billboards. A new law is being created to limit the number of illegal advertising structures in the Polish landscape, and immersive video recorded in a short period of time is a candidate for economical and flexible off-site measurement. The second application is the generation of 3D video-based reconstructions of heritage sites from immersive video (structure from immersive video). A mobile camera mounted on a tripod dolly was used to record an interior scene, and the immersive video, separated into thousands of still panoramas, was converted into 3D objects using Agisoft PhotoScan Professional. The findings from these experiments demonstrate that immersive photogrammetry is a flexible and prompt method of 3D modelling and offers promising features for mobile mapping systems.

  3. Spherical visual system for real-time virtual reality and surveillance

    NASA Astrophysics Data System (ADS)

    Chen, Su-Shing

    1998-12-01

    A spherical visual system has been developed for full-field, web-based surveillance, virtual reality, and roundtable video conferencing. The hardware is a CycloVision parabolic lens mounted on a video camera. The software was developed at the University of Missouri-Columbia. The mathematical model was developed by Su-Shing Chen and Michael Penna in the 1980s. The parabolic image, capturing the full (360-degree) hemispherical field of view (except the north pole), is transformed into the spherical model of Chen and Penna. In the spherical model, images are invariant under the rotation group and are easily mapped to the image plane tangent to any point on the sphere. The projected image is exactly what a conventional camera would produce at that angle. Thus a real-time, full-spherical-field video camera is obtained by using two parabolic lenses.
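
    As a rough illustration of how such a parabolic image can be remapped, the sketch below performs a simple polar unwrapping of the donut-shaped mirror image into a panoramic strip. This is a simplification of the Chen-Penna spherical model, and the calibration inputs cx, cy, r_min, and r_max are assumed to come from a prior calibration.

    ```python
    import numpy as np

    def unwrap_parabolic(img, cx, cy, r_min, r_max, out_w=1024, out_h=256):
        """Unwrap the donut-shaped parabolic-mirror image: polar angle maps
        to azimuth, radius maps to elevation (nearest-neighbor sampling,
        kept short for clarity)."""
        theta = np.linspace(0, 2 * np.pi, out_w, endpoint=False)  # azimuth
        r = np.linspace(r_min, r_max, out_h)                      # elevation
        rr, tt = np.meshgrid(r, theta, indexing="ij")
        xs = np.rint(cx + rr * np.cos(tt)).astype(int)
        ys = np.rint(cy + rr * np.sin(tt)).astype(int)
        return img[ys.clip(0, img.shape[0] - 1), xs.clip(0, img.shape[1] - 1)]
    ```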

  4. In-camera video-stream processing for bandwidth reduction in web inspection

    NASA Astrophysics Data System (ADS)

    Jullien, Graham A.; Li, QiuPing; Hajimowlana, S. Hossain; Morvay, J.; Conflitti, D.; Roberts, James W.; Doody, Brian C.

    1996-02-01

    Automated machine vision systems are now widely used for industrial inspection tasks in which video-stream data are taken in by the camera and then sent to the inspection system for further processing. In this paper we describe a prototype system for on-line programming of arbitrary real-time video data stream bandwidth reduction algorithms; the output of the camera contains only information that has to be further processed by a host computer. The processing system is built into a DALSA CCD camera and uses a microcontroller interface to download bit-stream data to a Xilinx™ FPGA. The FPGA is directly connected to the video data-stream and outputs data to a low-bandwidth output bus. The camera communicates with a host computer via an RS-232 link to the microcontroller. Static memory is used both to provide a FIFO interface for buffering defect burst data and to allow off-line examination of defect detection data. In addition to providing arbitrary FPGA architectures, the internal program of the microcontroller can also be changed via the host computer and a ROM monitor. This paper describes a prototype system board, mounted inside a DALSA camera, and discusses some of the algorithms currently being implemented for web inspection applications.
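
    In software terms, the bandwidth-reduction idea is to pass every pixel through the programmable logic but emit only defect-candidate events to the host. The toy model below sketches that behavior; the threshold and the (position, value) packet format are assumptions, not the camera's actual protocol.

    ```python
    def reduce_bandwidth(scanline, lut, threshold=200):
        """Emit (position, value) events only for pixels flagged as
        potential defects, so the host sees sparse events instead of the
        raw video stream."""
        events = []
        for x, pixel in enumerate(scanline):
            mapped = lut[pixel]              # mapping applied to every pixel
            if mapped >= threshold:          # defect candidate only
                events.append((x, mapped))
        return events
    ```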

  5. Video Capture of Plastic Surgery Procedures Using the GoPro HERO 3+

    PubMed Central

    Graves, Steven Nicholas; Shenaq, Deana Saleh; Langerman, Alexander J.

    2015-01-01

    Background: Significant improvements can be made in recording surgical procedures, particularly in capturing high-quality video recordings from the surgeons’ point of view. This study examined the utility of the GoPro HERO 3+ Black Edition camera for high-definition, point-of-view recordings of plastic and reconstructive surgery. Methods: The GoPro HERO 3+ Black Edition camera was head-mounted on the surgeon and oriented to the surgeon’s perspective using the GoPro App. The camera was used to record 4 cases: 2 fat graft procedures and 2 breast reconstructions. During cases 1-3, an assistant remotely controlled the GoPro via the GoPro App. For case 4, the GoPro was linked to a WiFi remote and controlled by the surgeon. Results: Camera settings for case 1 were as follows: 1080p video resolution; 48 fps; Protune mode on; wide field of view; 16:9 aspect ratio. The lighting contrast due to the overhead lights resulted in limited washout of the video image. Camera settings were adjusted for cases 2-4 to a narrow field of view, which enabled the camera’s automatic white balance to better compensate for bright lights focused on the surgical field. Cases 2-4 captured video sufficient for teaching or presentation purposes. Conclusions: The GoPro HERO 3+ Black Edition camera enables high-quality, cost-effective video recording of plastic and reconstructive surgery procedures. When set to a narrow field of view and automatic white balance, the camera is able to sufficiently compensate for the contrasting light environment of the operating room and capture high-resolution, detailed video. PMID:25750851

  6. Aerial Video Imaging

    NASA Technical Reports Server (NTRS)

    1991-01-01

    When Michael Henry wanted to start an aerial video service, he turned to Johnson Space Center for assistance. Two NASA engineers - one had designed and developed TV systems in Apollo, Skylab, Apollo-Soyuz and Space Shuttle programs - designed a wing-mounted fiberglass camera pod. Camera head and angles are adjustable, and the pod is shaped to reduce vibration. The controls are located so a solo pilot can operate the system. A microprocessor displays latitude, longitude, and bearing, and a GPS receiver provides position data for possible legal references. The service has been successfully utilized by railroads, oil companies, real estate companies, etc.

  7. Student-Built Underwater Video and Data Capturing Device

    NASA Astrophysics Data System (ADS)

    Whitt, F.

    2016-12-01

    The Stockbridge High School Robotics Team's invention is a low-cost underwater video and data capturing device. The system is capable of shooting time-lapse photography and/or video for up to 3 days at a time. It can be used in remote locations without having to change batteries or add external hard drives for data storage. The video capturing device has a unique base and mounting system that houses a PiDrive and a programmable Raspberry Pi with a camera module. The system is powered by two 12-volt batteries, which makes it easy for users to recharge after use. The data capturing device has the same unique base and mounting system as the underwater camera. It consists of an Arduino and an SD-card shield capable of collecting continuous temperature and pH readings underwater; the data are logged to the SD card for easy access and recording. The low-cost underwater video and data capturing device can reach depths up to 100 meters while recording 36 hours of video on 1 terabyte of storage. It also features night-vision infrared lighting. The cost to build the device is $500. The goal was to provide a device that marine biologists, teachers, researchers, and citizen scientists can easily use to capture photographic and water-quality data in marine environments over extended periods of time.
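
    A Raspberry Pi time-lapse loop of the kind such a device could run takes only a few lines with the standard picamera package; the interval, resolution, and storage path below are invented for the example, not the team's actual values.

    ```python
    import time
    from picamera import PiCamera  # standard Raspberry Pi camera package

    camera = PiCamera(resolution=(1920, 1080))
    time.sleep(2)  # let exposure and gain settle before the first shot
    # capture_continuous() yields one numbered still per iteration; the
    # sleep sets the time-lapse interval (60 s here, purely illustrative).
    for filename in camera.capture_continuous("/mnt/storage/img{counter:06d}.jpg"):
        time.sleep(60)
    ```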

  8. Enhanced video indirect ophthalmoscopy (VIO) via robust mosaicing.

    PubMed

    Estrada, Rolando; Tomasi, Carlo; Cabrera, Michelle T; Wallace, David K; Freedman, Sharon F; Farsiu, Sina

    2011-10-01

    Indirect ophthalmoscopy (IO) is the standard of care for evaluation of the neonatal retina. When recorded on video from a head-mounted camera, IO images have low quality and narrow Field of View (FOV). We present an image fusion methodology for converting a video IO recording into a single, high quality, wide-FOV mosaic that seamlessly blends the best frames in the video. To this end, we have developed fast and robust algorithms for automatic evaluation of video quality, artifact detection and removal, vessel mapping, registration, and multi-frame image fusion. Our experiments show the effectiveness of the proposed methods.
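
    The registration step can be illustrated with a generic feature-based sketch; the authors' own vessel-map registration is more specialized, so the ORB-plus-RANSAC-homography approach below is a stand-in, not their method.

    ```python
    import cv2
    import numpy as np

    def register_pair(frame_a, frame_b):
        """Estimate the homography mapping frame_b onto frame_a, one
        pairwise step of building a mosaic from video frames."""
        orb = cv2.ORB_create(1000)
        ka, da = orb.detectAndCompute(frame_a, None)
        kb, db = orb.detectAndCompute(frame_b, None)
        matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(da, db)
        src = np.float32([kb[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
        dst = np.float32([ka[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)  # robust to outliers
        return H
    ```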

  9. Very low cost real time histogram-based contrast enhancer utilizing fixed-point DSP processing

    NASA Astrophysics Data System (ADS)

    McCaffrey, Nathaniel J.; Pantuso, Francis P.

    1998-03-01

    A real-time contrast enhancement system utilizing histogram-based algorithms has been developed to operate on standard composite video signals. This low-cost DSP-based system is designed with fixed-point algorithms and an off-chip look-up table (LUT) to reduce the cost considerably over other contemporary approaches. This paper describes several real-time contrast enhancing systems advanced at the Sarnoff Corporation for high-speed visible and infrared cameras. The fixed-point enhancer was derived from these high-performance cameras. The enhancer digitizes analog video and spatially subsamples the stream to qualify the scene's luminance. Simultaneously, the video is streamed through a LUT that has been programmed with the previous calculation. Reducing division operations by subsampling reduces calculation cycles and also allows the processor to be used with cameras of nominal resolutions. All values are written to the LUT during blanking so no frames are lost. The enhancer measures 13 cm X 6.4 cm X 3.2 cm, operates off 9 VAC and consumes 12 W. This processor is small and inexpensive enough to be mounted with field-deployed security cameras and can be used for surveillance, video forensics and real-time medical imaging.
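
    A software model of the histogram-LUT scheme makes the design concrete: the LUT for the next frame is built from a spatially subsampled histogram using integer-only arithmetic, mirroring the fixed-point DSP, and would be swapped in during blanking. The subsample factor below is an assumption.

    ```python
    import numpy as np

    def build_equalization_lut(frame, subsample=4):
        """Build a histogram-equalization LUT from a subsampled frame,
        cutting per-frame arithmetic as in the fixed-point design."""
        sampled = frame[::subsample, ::subsample]
        hist = np.bincount(sampled.ravel(), minlength=256)
        cdf = hist.cumsum()
        cdf = (cdf - cdf.min()) * 255 // max(cdf.max() - cdf.min(), 1)
        return cdf.astype(np.uint8)

    def enhance(frame, lut):
        return lut[frame]  # the streaming LUT pass applied to every pixel
    ```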

  10. RELATIONSHIPS BETWEEN HABITAT QUALITY AND DENSITY OF JUVENILE WINTER FLOUNDER

    EPA Science Inventory

    We used a digital video camera mounted to a 1-m beam trawl together with an attached continuous recording YSI sonde and GPS unit to quantify juvenile winter flounder (Pseudopleuronectes americanus) densities and fish habitat. The YSI sonde measured temperature, salinity, dissolve...

  11. Video System for Viewing From a Remote or Windowless Cockpit

    NASA Technical Reports Server (NTRS)

    Banerjee, Amamath

    2009-01-01

    A system of electronic hardware and software synthesizes, in nearly real time, an image of a portion of a scene surveyed by as many as eight video cameras aimed, in different directions, at portions of the scene. This is a prototype of systems that would enable a pilot to view the scene outside a remote or windowless cockpit. The outputs of the cameras are digitized. Direct memory addressing is used to store the data of a few captured images in sequence, and the sequence is repeated in cycles. Cylindrical warping is used in merging adjacent images at their borders to construct a mosaic image of the scene. The mosaic-image data are written to a memory block from which they can be rendered on a head-mounted display (HMD) device. A subsystem in the HMD device tracks the direction of gaze of the wearer, providing data that are used to select, for display, the portion of the mosaic image corresponding to the direction of gaze. The basic functionality of the system has been demonstrated by mounting the cameras on the roof of a van and steering the van by use of the images presented on the HMD device.
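
    The cylindrical warping used to merge adjacent views is a standard mapping; below is a minimal sketch of the inverse warp (output cylinder coordinates back to source image coordinates), where the focal length f in pixels is an assumed calibration input.

    ```python
    import numpy as np

    def cylindrical_warp_coords(w, h, f):
        """For each output pixel on the cylinder, compute where to sample
        the flat source image; interpolating at (x_img, y_img) produces
        the warped view used for border blending."""
        xc, yc = w / 2.0, h / 2.0
        xs, ys = np.meshgrid(np.arange(w), np.arange(h))
        theta = (xs - xc) / f                 # angle around the cylinder
        hgt = (ys - yc) / f                   # height on the cylinder
        x_img = f * np.tan(theta) + xc        # project back to the flat image
        y_img = f * hgt / np.cos(theta) + yc
        return x_img, y_img
    ```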

  12. Project Morpheus testing

    NASA Image and Video Library

    2012-06-25

    A frame grab from a mounted video camera on the E-3 Test Stand at Stennis Space Center documents testing of the new Project Morpheus engine. The new liquid methane, liquid oxygen engine will power the Morpheus prototype lander, which could one day evolve to carry cargo safely to the moon, asteroids or Mars surfaces.

  13. Machine vision based teleoperation aid

    NASA Technical Reports Server (NTRS)

    Hoff, William A.; Gatrell, Lance B.; Spofford, John R.

    1991-01-01

    When teleoperating a robot using video from a remote camera, it is difficult for the operator to gauge depth and orientation from a single view. In addition, there are situations where a camera mounted for viewing by the teleoperator during a teleoperation task may not be able to see the tool tip, or the viewing angle may not be intuitive (requiring extensive training to reduce the risk of incorrect or dangerous moves by the teleoperator). A machine vision based teleoperator aid is presented which uses the operator's camera view to compute an object's pose (position and orientation), and then overlays onto the operator's screen information on the object's current and desired positions. The operator can choose to display orientation and translation information as graphics and/or text. This aid provides easily assimilated depth and relative orientation information to the teleoperator. The camera may be mounted at any known orientation relative to the tool tip. A preliminary experiment with human operators was conducted and showed that task accuracies were significantly greater with the aid than without it.

  14. Robust real-time horizon detection in full-motion video

    NASA Astrophysics Data System (ADS)

    Young, Grace B.; Bagnall, Bryan; Lane, Corey; Parameswaran, Shibin

    2014-06-01

    The ability to detect the horizon in real time in full-motion video is an important capability to aid and facilitate real-time processing of full-motion videos for purposes such as object detection, recognition, and other video/image segmentation applications. In this paper, we propose a method for real-time horizon detection that is designed to be used as a front-end processing unit for a real-time marine object detection system that carries out object detection and tracking on full-motion videos captured by ship/harbor-mounted cameras, Unmanned Aerial Vehicles (UAVs), or any other method of surveillance for Maritime Domain Awareness (MDA). Unlike existing horizon detection work, we cannot assume a priori the angle or nature (e.g., a straight line) of the horizon, due to the nature of the application domain and the data. Therefore, the proposed real-time algorithm is designed to identify the horizon at any angle and irrespective of objects appearing close to and/or occluding the horizon line (e.g., trees, vehicles at a distance) by accounting for its non-linear nature. We use a simple two-stage hierarchical methodology, leveraging color-based features, to quickly isolate the region of the image containing the horizon and then perform a more fine-grained horizon detection operation. In this paper, we present real-time horizon detection results using our algorithm on real-world full-motion video data from a variety of surveillance sensors such as UAVs and ship-mounted cameras, confirming the real-time applicability of this method and its ability to detect the horizon with no a priori assumptions.
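
    The coarse stage of such a two-stage hierarchy can be caricatured in a few lines: score each row of a downsampled frame by how sharply the average color changes across it, and hand the surrounding band to the fine stage. The per-row score below is a simple stand-in for the paper's color-based features.

    ```python
    import numpy as np

    def coarse_horizon_band(frame_rgb, band=32, step=4):
        """Return the full-resolution row range most likely to contain the
        horizon, based on the largest row-to-row color change in a
        downsampled copy of the frame."""
        small = frame_rgb[::step, ::step].astype(float)
        row_means = small.mean(axis=1)                     # mean color per row
        change = np.abs(np.diff(row_means, axis=0)).sum(axis=1)
        center = (int(change.argmax()) + 1) * step         # back to full res
        return max(center - band, 0), center + band        # rows for fine stage
    ```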

  15. 1200737

    NASA Image and Video Library

    2012-08-21

    FINAL DEMONSTRATION OF A WIRELESS DATA TASK SUPPORTED BY SLS ADVANCED DEVELOPMENT USED TO DEMONSTRATE REAL-TIME VIDEO OVER WIRELESS CONNECTIONS ALONG WITH DATA AND COMMANDS AS DEMONSTRATED VIA THE ROBOTIC ARMS. THE ARMS AND VIDEO CAMERAS WERE MOUNTED ON FREE-FLOATING AIR-BEARING VEHICLES TO SIMULATE CONDITIONS IN SPACE. THEY WERE USED TO SHOW HOW A CHASE VEHICLE COULD MOVE UP TO AND CAPTURE A SATELLITE, SUCH AS THE FASTSAT MOCKUP, DEMONSTRATING HOW ROBOTIC TECHNOLOGY AND SMALL SPACECRAFT COULD ASSIST WITH ORBITAL DEBRIS MITIGATION.

  16. 1200739

    NASA Image and Video Library

    2012-08-21

    FINAL DEMONSTRATION OF A WIRELESS DATA TASK SUPPORTED BY SLS ADVANCED DEVELOPMENT USED TO DEMONSTRATE REAL-TIME VIDEO OVER WIRELESS CONNECTIONS ALONG WITH DATA AND COMMANDS AS DEMONSTRATED VIA THE ROBOTIC ARMS. THE ARMS AND VIDEO CAMERAS WERE MOUNTED ON FREE-FLOATING AIR-BEARING VEHICLES TO SIMULATE CONDITIONS IN SPACE. THEY WERE USED TO SHOW HOW A CHASE VEHICLE COULD MOVE UP TO AND CAPTURE A SATELLITE, SUCH AS THE FASTSAT MOCKUP, DEMONSTRATING HOW ROBOTIC TECHNOLOGY AND SMALL SPACECRAFT COULD ASSIST WITH ORBITAL DEBRIS MITIGATION.

  17. 1200738

    NASA Image and Video Library

    2012-08-21

    FINAL DEMONSTRATION OF A WIRELESS DATA TASK SUPPORTED BY SLS ADVANCED DEVELOPMENT USED TO DEMONSTRATE REAL-TIME VIDEO OVER WIRELESS CONNECTIONS ALONG WITH DATA AND COMMANDS AS DEMONSTRATED VIA THE ROBOTIC ARMS. THE ARMS AND VIDEO CAMERAS WERE MOUNTED ON FREE-FLOATING AIR-BEARING VEHICLES TO SIMULATE CONDITIONS IN SPACE. THEY WERE USED TO SHOW HOW A CHASE VEHICLE COULD MOVE UP TO AND CAPTURE A SATELLITE, SUCH AS THE FASTSAT MOCKUP, DEMONSTRATING HOW ROBOTIC TECHNOLOGY AND SMALL SPACECRAFT COULD ASSIST WITH ORBITAL DEBRIS MITIGATION.

  18. SailSpy: a vision system for yacht sail shape measurement

    NASA Astrophysics Data System (ADS)

    Olsson, Olof J.; Power, P. Wayne; Bowman, Chris C.; Palmer, G. Terry; Clist, Roger S.

    1992-11-01

    SailSpy is a real-time vision system which we have developed for automatically measuring sail shapes and masthead rotation on racing yachts. Versions have been used by the New Zealand team in two America's Cup challenges in 1988 and 1992. SailSpy uses four miniature video cameras mounted at the top of the mast to provide views of the headsail and mainsail on either tack. The cameras are connected to the SailSpy computer below deck using lightweight cables mounted inside the mast. Images received from the cameras are automatically analyzed by the SailSpy computer, and sail shape and mast rotation parameters are calculated. The sail shape parameters are calculated by recognizing sail markers (ellipses) that have been attached to the sails, and the mast rotation parameters by recognizing deck markers painted on the deck. This paper describes the SailSpy system and some of the vision algorithms used.
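
    Recognition of elliptical sail markers is commonly done by binarizing the image and fitting ellipses to contours; the OpenCV sketch below illustrates that general approach and is not SailSpy's actual algorithm (thresholds are illustrative).

    ```python
    import cv2

    def find_sail_markers(gray, min_area=100):
        """Fit ellipses to sufficiently large binarized contours; each
        result is ((cx, cy), (major, minor), angle)."""
        _, binary = cv2.threshold(gray, 0, 255,
                                  cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        contours, _ = cv2.findContours(binary, cv2.RETR_LIST,
                                       cv2.CHAIN_APPROX_SIMPLE)
        return [cv2.fitEllipse(c) for c in contours
                if len(c) >= 5 and cv2.contourArea(c) > min_area]
    ```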

  19. New design environment for defect detection in web inspection systems

    NASA Astrophysics Data System (ADS)

    Hajimowlana, S. Hossain; Muscedere, Roberto; Jullien, Graham A.; Roberts, James W.

    1997-09-01

    One of the aims of industrial machine vision is to develop computer and electronic systems destined to replace human vision in the process of quality control of industrial production. In this paper we discuss the development of a new design environment developed for real-time defect detection using reconfigurable FPGA and DSP processor mounted inside a DALSA programmable CCD camera. The FPGA is directly connected to the video data-stream and outputs data to a low bandwidth output bus. The system is targeted for web inspection but has the potential for broader application areas. We describe and show test results of the prototype system board, mounted inside a DALSA camera and discuss some of the algorithms currently simulated and implemented for web inspection applications.

  20. A Structured Light Sensor System for Tree Inventory

    NASA Technical Reports Server (NTRS)

    Chien, Chiun-Hong; Zemek, Michael C.

    2000-01-01

    Tree inventory refers to the measurement and estimation of marketable wood volume in a piece of land or forest for purposes such as investment or loan applications. Existing techniques rely on trained surveyors conducting measurements manually using simple optical or mechanical devices, and hence are time-consuming, subjective, and error-prone. The advance of computer vision techniques makes it possible to conduct automatic measurements that are more efficient, objective, and reliable. This paper describes 3D measurement of tree diameters using a uniquely designed ensemble of two line-laser emitters rigidly mounted on a video camera. The proposed laser-camera system relies on a fixed distance between two parallel laser planes and the projections of the laser lines to calculate tree diameters. Performance of the laser-camera system is further enhanced by fusing information induced from the structured lighting with that contained in the video images. A comparison is made between the laser-camera sensor system and a stereo vision system previously developed for measuring tree diameters.
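
    The role of the fixed laser-plane separation is simple similar-triangles scaling: the imaged distance between the two laser lines gives the metres-per-pixel scale at the trunk, which converts the trunk's imaged width to a diameter. A sketch with invented numbers:

    ```python
    def tree_diameter(px_between_lasers, px_tree_width, laser_separation_m):
        """Scale from structured light: the parallel laser planes are a
        known distance apart, so their imaged separation calibrates the
        pixel scale at the tree (assumes a roughly fronto-parallel trunk)."""
        metres_per_pixel = laser_separation_m / px_between_lasers
        return px_tree_width * metres_per_pixel

    # Lasers 0.30 m apart imaged 60 px apart give 5 mm/px, so a trunk
    # imaged 90 px wide is about 0.45 m in diameter.
    print(tree_diameter(60, 90, 0.30))  # 0.45
    ```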

  1. Multiple Target Tracking in a Wide-Field-of-View Camera System

    DTIC Science & Technology

    1990-01-01

    assembly is mounted on a Contraves alt-azi axis table with a pointing accuracy of < 2 µrad. * Work performed under the auspices of the U.S. Department of... [block-diagram residue: Contraves table, SUN 3 workstations, CCD camera, DR11W/VME/Ethernet links, RS170 video amplifier, WWV clock, VCR, Datacube image processor, monitors] ...displaying processed images with overlay from the Datacube. We control the Contraves table using a GPIB interface on the SUN. GPIB also interfaces a

  2. Microgravity

    NASA Image and Video Library

    1994-07-10

    TEMPUS, an electromagnetic levitation facility that allows containerless processing of metallic samples in microgravity, first flew on the IML-2 Spacelab mission. The principle of electromagnetic levitation is commonly used in ground-based experiments to melt metallic samples and then cool the melts below their freezing points without solidification occurring. TEMPUS operation is controlled by its own microprocessor system, although commands may be sent remotely from the ground and real-time adjustments may be made by the crew. Two video cameras, a two-color pyrometer for measuring sample temperatures, and a fast infrared detector for monitoring solidification spikes are mounted on the process chamber to facilitate observation and analysis. In addition, a dedicated high-resolution video camera can be attached to TEMPUS to measure the sample volume precisely.

  3. Taking the High Road: Privacy in the Age of Drones

    ERIC Educational Resources Information Center

    Hamilton, Lucas; Harrington, Michael; Lawrence, Cameron; Perrot, Remy; Studer, Severin

    2017-01-01

    This case examines the technological, ethical and legal issues surrounding the use of drones in business. Mary McKay, a recent Management Information Systems (MIS) graduate sets up a professional photography and videography business. She gains a leg up on the competition with drone-mounted cameras and live video streaming through the free…

  4. Architecture of PAU survey camera readout electronics

    NASA Astrophysics Data System (ADS)

    Castilla, Javier; Cardiel-Sas, Laia; De Vicente, Juan; Illa, Joseph; Jimenez, Jorge; Maiorino, Marino; Martinez, Gustavo

    2012-07-01

    PAUCam is a new camera for studying the physics of the accelerating universe. The camera will consist of eighteen 2K x 4K HPK CCDs: sixteen for science and two for guiding. The camera will be installed at the prime focus of the WHT (William Herschel Telescope). In this contribution, the architecture of the readout electronics system is presented. Back-End and Front-End electronics are described. The Back-End consists of clock, bias, and video processing boards, mounted in Monsoon crates. The Front-End is based on patch-panel boards. These boards are plugged outside the camera feed-through panel for signal distribution. Inside the camera, individual preamplifier boards plus kapton cables complete the path to each CCD. The overall signal distribution and grounding scheme is shown in this paper.

  5. GoPro HERO 4 Black recording of scleral buckle placement during retinal detachment repair.

    PubMed

    Ho, Vincent Y; Shah, Vaishali G; Yates, David M; Shah, Gaurav K

    2017-08-01

    GoPro and Google Glass technology have previously been used to record procedures in ophthalmology and other medical fields. In this manuscript, GoPro's latest HERO 4 Black edition camera (GoPro Inc, San Mateo, Calif.) will be used to record the placement of a scleral buckle during retinal detachment surgery. GoPro HERO 4 Black edition camera, which records 4K-quality video with a resolution of 3840 (pixels) x 2160 (lines), was mounted on a head strap to record placement of a scleral buckle for a retinal detachment. Excellent video quality was achieved with the 4K SuperView setting. Bluetooth connection with an Apple iPad (Apple Inc, Cupertino, Calif.) provided live streaming and use of the GoPro App. Zoom, horizontal/vertical alignment, exposure, and contrast adjustments were made with postproduction editing on GoPro Studio software. Video recording with the GoPro HERO 4 Black edition camera is an excellent way to document extraocular procedures to improve medical education, self-training, or medicolegal documentation. Copyright © 2017 Canadian Ophthalmological Society. Published by Elsevier Inc. All rights reserved.

  6. In-flight Video Captured by External Tank Camera System

    NASA Technical Reports Server (NTRS)

    2005-01-01

    In this July 26, 2005 video, Earth slowly fades into the background as the STS-114 Space Shuttle Discovery climbs into space until the External Tank (ET) separates from the orbiter. An ET Camera System featuring a Sony XC-999 model camera provided never-before-seen footage of the launch and tank separation. The camera was installed in the ET LO2 Feedline Fairing. From this position, the camera had a 40° field of view with a 3.5 mm lens. The field of view showed some of the Bipod area, a portion of the LH2 tank and Intertank flange area, and some of the bottom of the shuttle orbiter. Contained in an electronic box, the battery pack and transmitter were mounted on top of the Solid Rocket Booster (SRB) crossbeam inside the ET. The battery pack included 20 Nickel-Metal Hydride batteries (similar to cordless phone battery packs) totaling 28 volts DC and could supply about 70 minutes of video. Located 95 degrees apart on the exterior of the Intertank opposite the orbiter side, there were 2 blade S-Band antennas about 2 1/2 inches long that transmitted a 10 watt signal to the ground stations. The camera turned on approximately 10 minutes prior to launch and operated for 15 minutes following liftoff. The complete camera system weighs about 32 pounds. Marshall Space Flight Center (MSFC), Johnson Space Center (JSC), Goddard Space Flight Center (GSFC), and Kennedy Space Center (KSC) participated in the design, development, and testing of the ET camera system.

  7. Modification of the Miyake-Apple technique for simultaneous anterior and posterior video imaging of wet laboratory-based corneal surgery.

    PubMed

    Tan, Johnson C H; Meadows, Howard; Gupta, Aanchal; Yeung, Sonia N; Moloney, Gregory

    2014-03-01

    The aim of this study was to describe a modification of the Miyake-Apple posterior video analysis for the simultaneous visualization of the anterior and posterior corneal surfaces during wet laboratory-based deep anterior lamellar keratoplasty (DALK). A human donor corneoscleral button was affixed to a microscope slide and placed onto a custom-made mounting box. A big bubble DALK was performed on the cornea in the wet laboratory. An 11-diopter intraocular lens was positioned over the aperture of the back camera of an iPhone. This served to video record the posterior view of the corneoscleral button during the big bubble formation. An overhead operating microscope with an attached video camcorder recorded the anterior view during the surgery. The anterior and posterior views of the wet laboratory-based DALK surgery were simultaneously captured and edited using video editing software. The formation of the big bubble can be studied. This video recording camera system has the potential to act as a valuable research and teaching tool in corneal lamellar surgery, especially in the behavior of the big bubble formation in DALK.

  8. National Register Testing of 42 Prehistoric Archeological Sites on Fort Hood, Texas: The 1996 Season

    DTIC Science & Technology

    1999-12-01

    alluvium ca. 1100-1000 B.P. The fact that flood stabilization, pedogenesis, and accompanying increases in C4... JVC TK-1070U Color Video Camera mounted on a Zeiss petrographic microscope using a 10X objective lens. The images were saved and imported into Adobe

  9. Head-coupled remote stereoscopic camera system for telepresence applications

    NASA Astrophysics Data System (ADS)

    Bolas, Mark T.; Fisher, Scott S.

    1990-09-01

    The Virtual Environment Workstation Project (VIEW) at NASA's Ames Research Center has developed a remotely controlled stereoscopic camera system that can be used for telepresence research and as a tool to develop and evaluate configurations for head-coupled visual systems associated with space station telerobots and remote manipulation robotic arms. The prototype camera system consists of two lightweight CCD video cameras mounted on a computer controlled platform that provides real-time pan, tilt, and roll control of the camera system in coordination with head position transmitted from the user. This paper provides an overall system description focused on the design and implementation of the camera and platform hardware configuration and the development of control software. Results of preliminary performance evaluations are reported with emphasis on engineering and mechanical design issues and discussion of related psychophysiological effects and objectives.

  10. Use of an UROV to develop 3-D optical models of submarine environments

    NASA Astrophysics Data System (ADS)

    Null, W. D.; Landry, B. J.

    2017-12-01

    The ability to rapidly obtain high-fidelity bathymetry is crucial for a broad range of engineering, scientific, and defense applications ranging from bridge scour, bedform morphodynamics, and coral reef health to unexploded ordnance detection and monitoring. The present work introduces the use of an Underwater Remotely Operated Vehicle (UROV) to develop 3-D optical models of submarine environments. The UROV used a Raspberry Pi camera mounted to a small servo which allowed for pitch control. Prior to video data collection, in situ camera calibration was conducted with the system. Multiple image frames were extracted from the underwater video for 3D reconstruction using Structure from Motion (SFM). This system provides a simple and cost effective solution to obtaining detailed bathymetry in optically clear submarine environments.

  11. Marshall Grazing Incidence X-ray Spectrometer (MaGIXS) Slit-Jaw Imaging System

    NASA Astrophysics Data System (ADS)

    Wilkerson, P.; Champey, P. R.; Winebarger, A. R.; Kobayashi, K.; Savage, S. L.

    2017-12-01

    The Marshall Grazing Incidence X-ray Spectrometer is a NASA sounding rocket payload providing a 0.6 - 2.5 nm spectrum with unprecedented spatial and spectral resolution. The instrument comprises a novel optical design featuring a Wolter Type-1 grazing incidence telescope, which produces a focused solar image on a slit plate, an identical pair of stigmatic optics, a planar diffraction grating, and a low-noise detector. When MaGIXS flies on a suborbital launch in 2019, a slit-jaw camera system will reimage the focal plane of the telescope, providing a reference for pointing the telescope on the solar disk and aligning the data to supporting observations from satellites and other rockets. The telescope focuses the X-ray and EUV image of the sun onto a plate covered with a phosphor coating that absorbs EUV photons and fluoresces in visible light. This 10-week REU project was aimed at optimizing an off-axis mounted camera with 600-line resolution NTSC video for extremely low-light imaging of the slit plate. Radiometric calculations indicate an intensity of less than 1 lux at the slit-jaw plane, which set the requirement for camera sensitivity. We selected a Watec 910DB EIA charge-coupled device (CCD) monochrome camera, which has a manufacturer-quoted sensitivity of 0.0001 lux at F1.2. A high-magnification, low-distortion lens was then identified to image the slit-jaw plane from a distance of approximately 10 cm. With the selected CCD camera, tests show that at extreme low-light levels we achieve a higher resolution than expected, with only a moderate drop in frame rate. Based on sounding rocket flight heritage, the launch vehicle attitude control system is known to stabilize the instrument pointing such that jitter does not degrade video quality for context imaging. Future steps toward implementation of the imaging system will include ruggedizing the flight camera housing and mounting the selected camera and lens combination to the instrument structure.

  12. Development of an autonomous video rendezvous and docking system, phase 2

    NASA Technical Reports Server (NTRS)

    Tietz, J. C.; Richardson, T. E.

    1983-01-01

    The critical elements of an autonomous video rendezvous and docking system were built and used successfully in a physical laboratory simulation. The laboratory system demonstrated that a small, inexpensive electronic package and a flight computer of modest size can analyze television images to derive guidance information for spacecraft. In the ultimate application, the system would use a docking aid consisting of three flashing lights mounted on a passive target spacecraft. Television imagery of the docking aid would be processed aboard an active chase vehicle to derive relative positions and attitudes of the two spacecraft. The demonstration system used scale models of the target spacecraft with working docking aids. A television camera mounted on a 6 degree of freedom (DOF) simulator provided imagery of the target to simulate observations from the chase vehicle. A hardware video processor extracted statistics from the imagery, from which a computer quickly computed position and attitude. Computer software known as a Kalman filter derived velocity information from position measurements.
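
    The abstract notes that a Kalman filter derived velocity information from position measurements; a minimal constant-velocity sketch of that idea follows (the state layout and noise parameters q, r are illustrative, not the flight software's values).

    ```python
    import numpy as np

    def kalman_velocity(positions, dt, q=1e-3, r=1e-2):
        """Constant-velocity Kalman filter over position-only
        measurements; returns the final velocity estimate."""
        F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition for (x, v)
        H = np.array([[1.0, 0.0]])              # we observe position only
        Q, R = q * np.eye(2), np.array([[r]])
        x, P = np.array([[positions[0]], [0.0]]), np.eye(2)
        for z in positions[1:]:
            x, P = F @ x, F @ P @ F.T + Q                   # predict
            K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)    # Kalman gain
            x = x + K @ (np.array([[z]]) - H @ x)           # update
            P = (np.eye(2) - K @ H) @ P
        return float(x[1, 0])
    ```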

  13. International Space Station (ISS)

    NASA Image and Video Library

    2001-06-08

    Astronaut Susan J. Helms, Expedition Two flight engineer, mounts a video camera onto a bracket in the Russian Zarya or Functional Cargo Block (FGB) of the International Space Station (ISS). Launched by a Russian Proton rocket from the Baikonur Cosmodrome on November 20, 1998, the United States-funded and Russian-built Zarya was the first element of the ISS, followed by the U.S. Unity Node.

  14. An economical wireless cavity-nest viewer

    Treesearch

    Daniel P. Huebner; Sarah R. Hurteau

    2007-01-01

    Inspection of cavity nests and nest boxes is often required during studies of cavity-nesting birds, and fiberscopes and pole-mounted video cameras are sometimes used for such inspection. However, the cost of these systems may be prohibitive for some potential users. We describe a user-built, wireless cavity viewer that can be used to access cavities as high as 15 m and...

  15. Utilization of a Terrestrial Laser Scanner for the Calibration of Mobile Mapping Systems

    PubMed Central

    Hong, Seunghwan; Park, Ilsuk; Lee, Jisang; Lim, Kwangyong; Choi, Yoonjo; Sohn, Hong-Gyoo

    2017-01-01

    This paper proposes a practical calibration solution for estimating the boresight and lever-arm parameters of the sensors mounted on a Mobile Mapping System (MMS). On our MMS, devised for conducting the calibration experiment, three network video cameras, one mobile laser scanner, and one Global Navigation Satellite System (GNSS)/Inertial Navigation System (INS) were mounted. The geometric relationships between the three sensors were solved by the proposed calibration, considering the GNSS/INS as one unit sensor. Our solution uses the point cloud generated by a 3-dimensional (3D) terrestrial laser scanner rather than conventionally obtained 3D ground control features. With the terrestrial laser scanner, accurate and precise reference data could be produced, and the plane features corresponding with the sparse mobile laser scanning data could be determined with high precision. Furthermore, corresponding point features could be extracted from the dense terrestrial laser scanning data and the images captured by the video cameras. The boresight and lever-arm parameters were calculated based on the least-squares approach, achieving precisions of 0.1 degrees for the boresight and 10 mm for the lever-arm, respectively. PMID:28264457
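
    The least-squares core of such a boresight/lever-arm estimate can be shown with the classical SVD (Kabsch) rigid-transform fit between corresponding points in the sensor frame and the terrestrial-scanner reference frame. This generic point-based sketch stands in for the paper's estimator, which also exploits plane features.

    ```python
    import numpy as np

    def rigid_fit(sensor_pts, reference_pts):
        """Least-squares rotation (boresight) and translation (lever-arm)
        between two N x 3 corresponding point sets via SVD."""
        a = sensor_pts - sensor_pts.mean(axis=0)
        b = reference_pts - reference_pts.mean(axis=0)
        U, _, Vt = np.linalg.svd(a.T @ b)
        d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflection
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        t = reference_pts.mean(axis=0) - R @ sensor_pts.mean(axis=0)
        return R, t
    ```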

  16. Video Toroid Cavity Imager

    DOEpatents

    Gerald, II, Rex E.; Sanchez, Jairo; Rathke, Jerome W.

    2004-08-10

    A video toroid cavity imager for in situ measurement of electrochemical properties of an electrolytic material sample includes a cylindrical toroid cavity resonator containing the sample and employs NMR and video imaging for providing high-resolution spectral and visual information of molecular characteristics of the sample on a real-time basis. A large magnetic field is applied to the sample under controlled temperature and pressure conditions to simultaneously provide NMR spectroscopy and video imaging capabilities for investigating electrochemical transformations of materials or the evolution of long-range molecular aggregation during cooling of hydrocarbon melts. The video toroid cavity imager includes a miniature commercial video camera with an adjustable lens, a modified compression coin cell imager with a flat circular principal detector element, and a sample mounted on a transparent circular glass disk, and provides NMR information as well as a video image of a sample, such as a polymer film, with micrometer resolution.

  17. A combined stereo-photogrammetry and underwater-video system to study group composition of dolphins

    NASA Astrophysics Data System (ADS)

    Bräger, S.; Chong, A.; Dawson, S.; Slooten, E.; Würsig, B.

    1999-11-01

    One reason for the paucity of knowledge of dolphin social structure is the difficulty of measuring individual dolphins. In Hector's dolphins, Cephalorhynchus hectori, total body length is a function of age, and sex can be determined by individual colouration pattern. We developed a novel system combining stereo-photogrammetry and underwater video to record dolphin group composition. The system consists of two downward-looking single-lens-reflex (SLR) cameras and a Hi8 video camera in an underwater housing mounted on a small boat. Bow-riding Hector's dolphins were photographed and video-taped at close range in coastal waters around the South Island of New Zealand. Three-dimensional, stereoscopic measurements of the distance between the blowhole and the anterior margin of the dorsal fin (BH-DF) were calibrated by a suspended frame with reference points. Growth functions derived from measurements of 53 dead Hector's dolphins (29 female : 24 male) provided the necessary reference data. For the analysis, the measurements were synchronised with corresponding underwater video of the genital area. A total of 27 successful measurements (8 with corresponding sex) were obtained, showing that this new system is potentially useful for cetacean studies.

  18. Creating cinematic wide gamut HDR-video for the evaluation of tone mapping operators and HDR-displays

    NASA Astrophysics Data System (ADS)

    Froehlich, Jan; Grandinetti, Stefan; Eberhardt, Bernd; Walter, Simon; Schilling, Andreas; Brendel, Harald

    2014-03-01

    High quality video sequences are required for the evaluation of tone mapping operators and high dynamic range (HDR) displays. We provide scenic and documentary scenes with a dynamic range of up to 18 stops. The scenes are staged using professional film lighting, make-up and set design to enable the evaluation of image and material appearance. To address challenges for HDR-displays and temporal tone mapping operators, the sequences include highlights entering and leaving the image, brightness changing over time, high contrast skin tones, specular highlights and bright, saturated colors. HDR-capture is carried out using two cameras mounted on a mirror-rig. To achieve a cinematic depth of field, digital motion picture cameras with Super-35mm size sensors are used. We provide HDR-video sequences to serve as a common ground for the evaluation of temporal tone mapping operators and HDR-displays. They are available to the scientific community for further research.

  19. Head-mounted display for use in functional endoscopic sinus surgery

    NASA Astrophysics Data System (ADS)

    Wong, Brian J.; Lee, Jon P.; Dugan, F. Markoe; MacArthur, Carol J.

    1995-05-01

    Since the introduction of functional endoscopic sinus surgery (FESS), the procedure has undergone rapid change, with its evolution keeping pace with technological advances. The advent of low-cost charge-coupled device (CCD) cameras revolutionized the practice and instruction of FESS. Video-based FESS has allowed for documentation of the surgical procedure as well as interactive instruction during surgery. Presently, the technical requirements of video-based FESS include the addition of one or more television monitors positioned strategically in the operating room. Though video monitors have greatly enhanced surgical endoscopy by re-involving nurses and assistants in the actual mechanics of surgery, they require the operating surgeon to be focused on the screen instead of the patient. In this study, we describe the use of a new low-cost liquid crystal display (LCD) based device that functions as a monitor but is mounted on the head on a visor (PT-O1, O1 Products, Westlake Village, CA). This study illustrates the application of these HMD devices to FESS operations. The same surgeon performed the operation in each patient. In one nasal fossa, surgery was performed using conventional video FESS methods; the contralateral side was operated on while wearing the head-mounted video display. The device had adequate resolution for the purposes of FESS. No adverse effects were noted intraoperatively. The results on the patients' ipsilateral and contralateral sides were similar. The visor eliminated significant torsion of the surgeon's neck during the operation, while at the same time permitting simultaneous viewing of both the patient and the intranasal surgical field.

  20. Autonomous mobile platform for enhanced situational awareness in Mass Casualty Incidents.

    PubMed

    Yang, Dongyi; Schafer, James; Wang, Sili; Ganz, Aura

    2014-01-01

    To enhance the efficiency of the search and rescue process of a Mass Casualty Incident, we introduce a low cost autonomous mobile platform. The mobile platform motion is controlled by an Android Smartphone mounted on a robot. The pictures and video captured by the Smartphone camera can significantly enhance the situational awareness of the incident commander leading to a more efficient search and rescue process. Moreover, the active RFID readers mounted on the mobile platform can improve the localization accuracy of victims in the disaster site in areas where the paramedics are not present, reducing the triage and evacuation time.

  1. Automation of the targeting and reflective alignment concept

    NASA Technical Reports Server (NTRS)

    Redfield, Robin C.

    1992-01-01

    The automated alignment system, described herein, employs a reflective, passive (requiring no power) target and includes a PC-based imaging system and one camera mounted on a six degree of freedom robot manipulator. The system detects and corrects for manipulator misalignment in three translational and three rotational directions by employing the Targeting and Reflective Alignment Concept (TRAC), which simplifies alignment by decoupling translational and rotational alignment control. The concept uses information on the camera and the target's relative position based on video feedback from the camera. These relative positions are converted into alignment errors and minimized by motions of the robot. The system is robust to exogenous lighting by virtue of a subtraction algorithm which enables the camera to only see the target. These capabilities are realized with relatively minimal complexity and expense.

  2. Video Capture of Perforator Flap Harvesting Procedure with a Full High-definition Wearable Camera

    PubMed Central

    2016-01-01

    Summary: Recent advances in wearable recording technology have enabled high-quality video recording of several surgical procedures from the surgeon’s perspective. However, the available wearable cameras are not optimal for recording the harvesting of perforator flaps because they are too heavy and cannot be attached to the surgical loupe. The Ecous is a small high-resolution camera that was specially developed for recording loupe magnification surgery. This study investigated the use of the Ecous for recording perforator flap harvesting procedures. The Ecous SC MiCron is a high-resolution camera that can be mounted directly on the surgical loupe. The camera is light (30 g) and measures only 28 × 32 × 60 mm. We recorded 23 perforator flap harvesting procedures with the Ecous connected to a laptop through a USB cable. The elevated flaps included 9 deep inferior epigastric artery perforator flaps, 7 thoracodorsal artery perforator flaps, 4 anterolateral thigh flaps, and 3 superficial inferior epigastric artery flaps. All procedures were recorded with no equipment failure. The Ecous recorded the technical details of the perforator dissection at a high-resolution level. The surgeon did not feel any extra stress or interference when wearing the Ecous. The Ecous is an ideal camera for recording perforator flap harvesting procedures. It fits onto the surgical loupe perfectly without creating additional stress on the surgeon. High-quality video from the surgeon’s perspective makes accurate documentation of the procedures possible, thereby enhancing surgical education and allowing critical self-reflection. PMID:27482504

  3. Visual analysis of trash bin processing on garbage trucks in low resolution video

    NASA Astrophysics Data System (ADS)

    Sidla, Oliver; Loibner, Gernot

    2015-03-01

    We present a system for trash can detection and counting from a camera mounted on a garbage collection truck. A working prototype has been successfully implemented and tested with several hours of real-world video. The detection pipeline consists of HOG detectors for two trash can sizes, plus meanshift tracking and low-level image processing for the analysis of the garbage disposal process. Considering the harsh environment and unfavorable imaging conditions, the process already works well enough that very useful measurements can be extracted from the video data. The false positive/false negative rate of the full processing pipeline is about 5-6% in fully automatic operation. Video data from a full day (about 8 hrs) can be processed in about 30 minutes on a standard PC.
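
    The meanshift tracking stage can be sketched with OpenCV; the HOG detection that supplies the initial bounding box is assumed to have already run, and all parameters below are illustrative rather than the prototype's settings.

    ```python
    import cv2

    def track_bin(cap, bbox):
        """Track one detected trash can across frames using hue-histogram
        back-projection and meanshift; yields the window per frame
        (bbox must correspond to the first frame read from cap)."""
        ok, frame = cap.read()
        x, y, w, h = bbox
        hsv_roi = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
        hist = cv2.calcHist([hsv_roi], [0], None, [180], [0, 180])
        cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
        term = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
        window = (x, y, w, h)
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
            back = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
            _, window = cv2.meanShift(back, window, term)
            yield window
    ```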

  4. STS-111 Flight Day 7 Highlights

    NASA Technical Reports Server (NTRS)

    2002-01-01

    On Flight Day 7 of STS-111 (Space Shuttle Endeavour crew includes: Kenneth Cockrell, Commander; Paul Lockhart, Pilot; Franklin Chang-Diaz, Mission Specialist; Philippe Perrin, Mission Specialist; International Space Station (ISS) Expedition 5 crew includes Valery Korzun, Commander; Peggy Whitson, Flight Engineer; Sergei Treschev, Flight Engineer; ISS Expedition 4 crew includes: Yury Onufrienko, Commander; Daniel Bursch, Flight Engineer; Carl Walz, Flight Engineer), this video opens with answers to questions asked by the public via e-mail about the altitude of the space station, the length of its orbit, how astronauts differentiate between up and down in the microgravity environment, and whether they hear wind noise during the shuttle's reentry. In video footage shot from inside the Quest airlock, Perrin is shown exiting the station to perform an extravehicular activity (EVA) with Chang-Diaz. Chang-Diaz is shown, in helmet mounted camera footage, attaching cable protection booties to a fish-stringer device with multiple hooks, and Perrin is seen loosening bolts that hold the replacement unit accommodation in launch position atop the Mobile Base System (MBS). Perrin then mounts a camera atop the mast of the MBS. During this EVA, the astronauts installed the MBS on the Mobile Transporter (MT) to support the Canadarm 2 robotic arm. A camera in the Endeavour's payload bay provides footage of the Pacific Ocean, the Baja Peninsula, and Midwestern United States. Plumes from wildfires in Nevada, Idaho, Yellowstone National Park, Wyoming, and Montana are visible. The station continues over the Great Lakes and the Eastern Provinces of Canada.

  5. STS-111 Flight Day 7 Highlights

    NASA Astrophysics Data System (ADS)

    2002-06-01

On Flight Day 7 of STS-111 (Space Shuttle Endeavour crew includes: Kenneth Cockrell, Commander; Paul Lockhart, Pilot; Franklin Chang-Diaz, Mission Specialist; Philippe Perrin, Mission Specialist; International Space Station (ISS) Expedition 5 crew includes Valery Korzun, Commander; Peggy Whitson, Flight Engineer; Sergei Treschev, Flight Engineer; ISS Expedition 4 crew includes: Yury Onufrienko, Commander; Daniel Bursch, Flight Engineer; Carl Walz, Flight Engineer), this video opens with answers to questions asked by the public via e-mail about the altitude of the space station, the length of its orbit, how astronauts differentiate between up and down in the microgravity environment, and whether they hear wind noise during the shuttle's reentry. In video footage shot from inside the Quest airlock, Perrin is shown exiting the station to perform an extravehicular activity (EVA) with Chang-Diaz. Chang-Diaz is shown, in helmet-mounted camera footage, attaching cable protection booties to a fish-stringer device with multiple hooks, and Perrin is seen loosening bolts that hold the replacement unit accommodation in launch position atop the Mobile Base System (MBS). Perrin then mounts a camera atop the mast of the MBS. During this EVA, the astronauts installed the MBS on the Mobile Transporter (MT) to support the Canadarm 2 robotic arm. A camera in the Endeavour's payload bay provides footage of the Pacific Ocean, the Baja Peninsula, and the Midwestern United States. Plumes from wildfires in Nevada, Idaho, Yellowstone National Park, Wyoming, and Montana are visible. The station continues over the Great Lakes and the Eastern Provinces of Canada.

  6. Design, implementation and accuracy of a prototype for medical augmented reality.

    PubMed

    Pandya, Abhilash; Siadat, Mohammad-Reza; Auner, Greg

    2005-01-01

    This paper is focused on prototype development and accuracy evaluation of a medical Augmented Reality (AR) system. The accuracy of such a system is of critical importance for medical use, and is hence considered in detail. We analyze the individual error contributions and the system accuracy of the prototype. A passive articulated arm is used to track a calibrated end-effector-mounted video camera. The live video view is superimposed in real time with the synchronized graphical view of CT-derived segmented object(s) of interest within a phantom skull. The AR accuracy mostly depends on the accuracy of the tracking technology, the registration procedure, the camera calibration, and the image scanning device (e.g., a CT or MRI scanner). The accuracy of the Microscribe arm was measured to be 0.87 mm. After mounting the camera on the tracking device, the AR accuracy was measured to be 2.74 mm on average (standard deviation = 0.81 mm). After using data from a 2-mm-thick CT scan, the AR error remained essentially the same at an average of 2.75 mm (standard deviation = 1.19 mm). For neurosurgery, the acceptable error is approximately 2-3 mm, and our prototype approaches these accuracy requirements. The accuracy could be increased with a higher-fidelity tracking system and improved calibration and object registration. The design and methods of this prototype device can be extrapolated to current medical robotics (due to the kinematic similarity) and neuronavigation systems.

  7. The Short Wave Aerostat-Mounted Imager (SWAMI): A novel platform for acquiring remotely sensed data from a tethered balloon

    USGS Publications Warehouse

    Vierling, L.A.; Fersdahl, M.; Chen, X.; Li, Z.; Zimmerman, P.

    2006-01-01

    We describe a new remote sensing system called the Short Wave Aerostat-Mounted Imager (SWAMI). The SWAMI is designed to acquire co-located video imagery and hyperspectral data to study basic remote sensing questions and to link landscape level trace gas fluxes with spatially and temporally appropriate spectral observations. The SWAMI can fly at altitudes up to 2 km above ground level to bridge the spatial gap between radiometric measurements collected near the surface and those acquired by other aircraft or satellites. The SWAMI platform consists of a dual channel hyperspectral spectroradiometer, video camera, GPS, thermal infrared sensor, and several meteorological and control sensors. All SWAMI functions (e.g. data acquisition and sensor pointing) can be controlled from the ground via wireless transmission. Sample data from the sampling platform are presented, along with several potential scientific applications of SWAMI data.

  8. Use of Body-Mounted Cameras to Enhance Data Collection: An Evaluation of Two Arthropod Sampling Techniques.

    PubMed

    Hagler, James R; Thompson, Alison L; Stefanek, Melissa A; Machtley, Scott A

    2018-03-01

A study was conducted that compared the effectiveness of a sweepnet versus a vacuum suction device for collecting arthropods in cotton. The study differs from previous research in that body-mounted action cameras (B-MACs) were used to record the activity of the person conducting the arthropod collections. The videos produced by the B-MACs were then analyzed with behavioral event recording software to quantify various aspects of the sampling process. The sampler's speed and the number of sampling sweeps or vacuum suctions taken over a fixed distance (12.2 m) of cotton were two of the more significant sampling characteristics quantified for each method. The arthropod counts obtained, combined with the analyses of the videos, enabled us to estimate arthropod sampling efficiency for each technique based on fixed distance, time, and sample unit measurements. The data revealed that vacuuming was the most precise method for collecting arthropods in the relatively small cotton research plots. However, the data also indicate that the sweepnet method would be more efficient for collecting most of the cotton-dwelling arthropod taxa, especially if the sampler could continuously sweep for at least 1 min or ≥80 m (e.g., in larger research plots). The B-MACs are inexpensive and non-cumbersome, the video images generated are outstanding, and the videos can be archived to provide permanent documentation of a research project. The methods described here could be useful for other types of field-based research to enhance data collection efficiency.

  9. Mapping wide row crops with video sequences acquired from a tractor moving at treatment speed.

    PubMed

    Sainz-Costa, Nadir; Ribeiro, Angela; Burgos-Artizzu, Xavier P; Guijarro, María; Pajares, Gonzalo

    2011-01-01

    This paper presents a mapping method for wide row crop fields. The resulting map shows the crop rows and weeds present in the inter-row spacing. Because field videos are acquired with a camera mounted on top of an agricultural vehicle, a method for image sequence stabilization was needed and consequently designed and developed. The proposed stabilization method uses the centers of some crop rows in the image sequence as features to be tracked, which compensates for the lateral movement (sway) of the camera and leaves the pitch unchanged. A region of interest is selected using the tracked features, and an inverse perspective technique transforms the selected region into a bird's-eye view that is centered on the image and that enables map generation. The algorithm developed has been tested on several video sequences of different fields recorded at different times and under different lighting conditions, with good initial results. Indeed, lateral displacements of up to 66% of the inter-row spacing were suppressed through the stabilization process, and crop rows in the resulting maps appear straight.
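
    The bird's-eye transformation described above is a standard inverse perspective mapping; a minimal OpenCV sketch follows. In practice the four source points would come from the tracked crop-row features; the coordinates and file names here are illustrative assumptions.

    ```python
    # Sketch: inverse perspective mapping of a stabilized frame to a
    # bird's-eye view (OpenCV).
    import cv2
    import numpy as np

    frame = cv2.imread("stabilized_frame.png")  # hypothetical stabilized frame

    # Corners of the ground-plane region of interest in the camera image
    # (a trapezoid) and where they should land in the bird's-eye map.
    src = np.float32([[420, 300], [860, 300], [1180, 700], [100, 700]])
    dst = np.float32([[0, 0], [500, 0], [500, 800], [0, 800]])

    H = cv2.getPerspectiveTransform(src, dst)
    birds_eye = cv2.warpPerspective(frame, H, (500, 800))
    cv2.imwrite("birds_eye_map_tile.png", birds_eye)
    ```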

  10. Underwater image mosaicking and visual odometry

    NASA Astrophysics Data System (ADS)

    Sadjadi, Firooz; Tangirala, Sekhar; Sorber, Scott

    2017-05-01

    This paper summarizes the results of studies in underwater odometery using a video camera for estimating the velocity of an unmanned underwater vehicle (UUV). Underwater vehicles are usually equipped with sonar and Inertial Measurement Unit (IMU) - an integrated sensor package that combines multiple accelerometers and gyros to produce a three dimensional measurement of both specific force and angular rate with respect to an inertial reference frame for navigation. In this study, we investigate the use of odometry information obtainable from a video camera mounted on a UUV to extract vehicle velocity relative to the ocean floor. A key challenge with this process is the seemingly bland (i.e. featureless) nature of video data obtained underwater which could make conventional approaches to image-based motion estimation difficult. To address this problem, we perform image enhancement, followed by frame to frame image transformation, registration and mosaicking/stitching. With this approach the velocity components associated with the moving sensor (vehicle) are readily obtained from (i) the components of the transform matrix at each frame; (ii) information about the height of the vehicle above the seabed; and (iii) the sensor resolution. Preliminary results are presented.
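
    A hedged sketch of the registration-based velocity estimate outlined above, assuming OpenCV: ORB features matched between consecutive frames, a partial affine fit for the frame-to-frame transform, and a pixel-to-metre conversion from vehicle altitude and focal length. The function name and constants are assumptions, and the paper's image enhancement and mosaicking steps are omitted.

    ```python
    # Sketch: frame-to-frame registration -> vehicle velocity (m/s).
    import cv2
    import numpy as np

    def frame_velocity(prev, curr, altitude_m, focal_px, dt_s):
        """Estimate (vx, vy) in m/s from two consecutive grayscale
        seabed images; assumes a nadir-pointing pinhole camera and
        enough feature matches for a stable fit."""
        orb = cv2.ORB_create(1000)
        k1, d1 = orb.detectAndCompute(prev, None)
        k2, d2 = orb.detectAndCompute(curr, None)
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = matcher.match(d1, d2)
        p1 = np.float32([k1[m.queryIdx].pt for m in matches])
        p2 = np.float32([k2[m.trainIdx].pt for m in matches])
        # Translation + rotation + scale between the two frames.
        M, _ = cv2.estimateAffinePartial2D(p1, p2)
        tx_px, ty_px = M[0, 2], M[1, 2]
        # Ground sample distance: metres per pixel at this altitude.
        gsd = altitude_m / focal_px
        return tx_px * gsd / dt_s, ty_px * gsd / dt_s
    ```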

  11. Fluorescence-guided tumor visualization using a custom designed NIR attachment to a surgical microscope for high sensitivity imaging (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Kittle, David S.; Patil, Chirag G.; Mamelak, Adam; Hansen, Stacey; Perry, Jeff; Ishak, Laura; Black, Keith L.; Butte, Pramod V.

    2016-03-01

Current surgical microscopes are limited in sensitivity for NIR fluorescence. Recent developments in tumor markers labeled with NIR dyes require newer, more sensitive imaging systems with high resolution to guide surgical resection. We report on a small, single-camera solution enabling advanced image processing opportunities previously unavailable for ultra-high-sensitivity imaging of these agents. The system captures both visible reflectance and NIR fluorescence at 300 fps while displaying full HD resolution video at 60 fps. The camera head has been designed to mount easily onto the Zeiss Pentero microscope head for seamless integration into surgical procedures.

  12. Testbed for remote telepresence research

    NASA Astrophysics Data System (ADS)

    Adnan, Sarmad; Cheatham, John B., Jr.

    1992-11-01

Teleoperated robots offer solutions to problems associated with operations in remote and unknown environments, such as space. Teleoperated robots can perform tasks related to inspection, maintenance, and retrieval. A video camera can be used to provide some assistance in teleoperations, but for fine manipulation and control, a telepresence system that gives the operator a sense of actually being at the remote location is more desirable. A telepresence system composed of a head-tracking stereo camera system, a kinematically redundant arm, and an omnidirectional mobile robot has been developed in the mechanical engineering department at Rice University. This paper describes the design and implementation of this system, its control hardware, and software. The mobile omnidirectional robot has three independent degrees of freedom that permit independent control of translation and rotation, thereby simulating a free-flying robot in a plane. The kinematically redundant robot arm has eight degrees of freedom that assist in obstacle and singularity avoidance. The on-board control computers permit control of the robot from the dual hand controllers via a radio modem system. A head-mounted display system provides the user with a stereo view from a pair of cameras attached to the mobile robotics system. The head-tracking camera system moves stereo cameras mounted on a three degree of freedom platform to coordinate with the operator's head movements. This telepresence system provides a framework for research in remote telepresence and teleoperations for space.

  13. Curiosity's Mars Hand Lens Imager (MAHLI) Investigation

    USGS Publications Warehouse

    Edgett, Kenneth S.; Yingst, R. Aileen; Ravine, Michael A.; Caplinger, Michael A.; Maki, Justin N.; Ghaemi, F. Tony; Schaffner, Jacob A.; Bell, James F.; Edwards, Laurence J.; Herkenhoff, Kenneth E.; Heydari, Ezat; Kah, Linda C.; Lemmon, Mark T.; Minitti, Michelle E.; Olson, Timothy S.; Parker, Timothy J.; Rowland, Scott K.; Schieber, Juergen; Sullivan, Robert J.; Sumner, Dawn Y.; Thomas, Peter C.; Jensen, Elsa H.; Simmonds, John J.; Sengstacken, Aaron J.; Wilson, Reg G.; Goetz, Walter

    2012-01-01

The Mars Science Laboratory (MSL) Mars Hand Lens Imager (MAHLI) investigation will use a 2-megapixel color camera with a focusable macro lens aboard the rover, Curiosity, to investigate the stratigraphy and grain-scale texture, structure, mineralogy, and morphology of geologic materials in northwestern Gale crater. Of particular interest is the stratigraphic record of a ~5 km thick layered rock sequence exposed on the slopes of Aeolis Mons (also known as Mount Sharp). The instrument consists of three parts: a camera head mounted on the turret at the end of a robotic arm, an electronics and data storage assembly located inside the rover body, and a calibration target mounted on the robotic arm shoulder azimuth actuator housing. MAHLI can acquire in-focus images at working distances from ~2.1 cm to infinity. At the minimum working distance, image pixel scale is ~14 μm per pixel and very coarse silt grains can be resolved. At the working distance of the Mars Exploration Rover Microscopic Imager cameras aboard Spirit and Opportunity, MAHLI's resolution is comparable at ~30 μm per pixel. Onboard capabilities include autofocus, auto-exposure, sub-framing, video imaging, Bayer pattern color interpolation, lossy and lossless compression, focus merging of up to 8 focus stack images, white light and longwave ultraviolet (365 nm) illumination of nearby subjects, and 8 gigabytes of non-volatile memory data storage.

  14. Innovative Solution to Video Enhancement

    NASA Technical Reports Server (NTRS)

    2001-01-01

Through a licensing agreement, Intergraph Government Solutions adapted a technology originally developed at NASA's Marshall Space Flight Center for enhanced video imaging by developing its Video Analyst(TM) System. Marshall's scientists developed the Video Image Stabilization and Registration (VISAR) technology to help FBI agents analyze video footage of the deadly 1996 Olympic Summer Games bombing in Atlanta, Georgia. VISAR technology enhanced nighttime videotapes made with hand-held camcorders, revealing important details about the explosion. Intergraph's Video Analyst System is a simple, effective, and affordable tool for video enhancement and analysis. The benefits associated with the Video Analyst System include support of full-resolution digital video, frame-by-frame analysis, and the ability to store analog video in digital format. Up to 12 hours of digital video can be stored and maintained for reliable footage analysis. The system also includes state-of-the-art features such as stabilization, image enhancement, and convolution to help improve the visibility of subjects in the video without altering underlying footage. Adaptable to many uses, Intergraph's Video Analyst System meets the stringent demands of the law enforcement industry in the areas of surveillance, crime scene footage, sting operations, and dash-mounted video cameras.

  15. Evolution of the Mobile Information SysTem (MIST)

    NASA Technical Reports Server (NTRS)

    Litaker, Harry L., Jr.; Thompson, Shelby; Archer, Ronald D.

    2008-01-01

The Mobile Information SysTem (MIST) had its origins in the need to determine whether commercial off the shelf (COTS) technologies could improve intravehicular activity (IVA) crew maintenance productivity on the International Space Station (ISS). It began with an exploration of head mounted displays (HMDs), but quickly evolved to include voice recognition, mobile personal computing, and data collection. The unique characteristic of the MIST lies within its mobility, in which a vest is worn that contains a mini-computer and supporting equipment, and a headband with attachments for an HMD, lipstick camera, and microphone. Data is then captured directly by the computer running Morae(TM) or similar software for analysis. To date, the MIST system has been tested in numerous environments such as two parabolic flights on NASA's C-9 microgravity aircraft and several mockup facilities ranging from the ISS to the Altair Lunar Sortie Lander. Functional capabilities have included its lightweight and compact design, commonality across systems and environments, and usefulness in remote collaboration. Human factors evaluations of the system have proven the MIST's ability to be worn for long durations of time (approximately four continuous hours) with no adverse physical deficits, moderate operator compensation, and low workload reported, as measured by the Corlett-Bishop Discomfort Scale, Cooper-Harper ratings, and the NASA Task Load Index (TLX), respectively. Additionally, through development of the system, it has spawned several new applications useful in research. For example, by employing only the lipstick camera, microphone, and a compact digital video recorder (DVR), we created a portable, lightweight data collection device. Video is recorded from the participant's point of view (POV) through the use of the camera mounted on the side of the head. Both the video and audio are recorded directly into the DVR located on a belt around the waist. This data is then transferred to another computer for video editing and analysis. Another application has been discovered using simulated flight, in which a kneeboard is replaced with the mini-computer and HMD to project flight paths and glide slopes for lunar ascent. As technologies evolve, so will the system and its application for research and space system operations.

  16. Deep-Sea Video Cameras Without Pressure Housings

    NASA Technical Reports Server (NTRS)

    Cunningham, Thomas

    2004-01-01

Underwater video cameras of a proposed type (and, optionally, their light sources) would not be housed in pressure vessels. Conventional underwater cameras and their light sources are housed in pods that keep the contents dry and maintain interior pressures of about 1 atmosphere (~0.1 MPa). Pods strong enough to withstand the pressures at great ocean depths are bulky, heavy, and expensive. Elimination of the pods would make it possible to build camera/light-source units that would be significantly smaller, lighter, and less expensive. The depth ratings of the proposed camera/light-source units would be essentially unlimited because the strengths of their housings would no longer be an issue. A camera according to the proposal would contain an active-pixel image sensor and readout circuits, all in the form of a single silicon-based complementary metal oxide/semiconductor (CMOS) integrated-circuit chip. As long as none of the circuitry and none of the electrical leads were exposed to seawater, which is electrically conductive, silicon integrated-circuit chips could withstand the hydrostatic pressure of even the deepest ocean. The pressure would change the semiconductor band gap by only a slight amount, not enough to degrade imaging performance significantly. Electrical contact with seawater would be prevented by potting the integrated-circuit chip in a transparent plastic case. The electrical leads for supplying power to the chip and extracting the video signal would also be potted, though not necessarily in the same transparent plastic. The hydrostatic pressure would tend to compress the plastic case and the chip equally on all sides; there would be no need for great strength because there would be no need to hold back high pressure on one side against low pressure on the other side. A light source suitable for use with the camera could consist of light-emitting diodes (LEDs). Like integrated-circuit chips, LEDs can withstand very large hydrostatic pressures. If power-supply regulators or filter capacitors were needed, these could be attached in chip form directly onto the back of, and potted with, the imager chip. Because CMOS imagers dissipate little power, the potting would not result in overheating. To minimize the cost of the camera, a fixed lens could be fabricated as part of the plastic case. For improved optical performance at greater cost, an adjustable glass achromatic lens would be mounted in a reservoir that would be filled with transparent oil and subject to the full hydrostatic pressure, and the reservoir would be mounted on the case to position the lens in front of the image sensor. The lens would be adjusted for focus by use of a motor inside the reservoir (oil-filled motors already exist).

  17. Surgical Videos with Synchronised Vertical 2-Split Screens Recording the Surgeons' Hand Movement.

    PubMed

    Kaneko, Hiroki; Ra, Eimei; Kawano, Kenichi; Yasukawa, Tsutomu; Takayama, Kei; Iwase, Takeshi; Terasaki, Hiroko

    2015-01-01

The aim was to improve the state-of-the-art teaching system by creating surgical videos with synchronised vertical 2-split screens. An ultra-compact, wide-angle point-of-view camcorder (HX-A1, Panasonic) was mounted on the surgical microscope, focusing mostly on the surgeons' hand movements. In combination with the regular surgical videos obtained from the CCD camera in the surgical microscope, synchronised vertical 2-split-screen surgical videos were generated with video-editing software. Using synchronised vertical 2-split-screen videos, residents of the ophthalmology department could watch and learn how assistant surgeons controlled the eyeball while the main surgeons performed scleral buckling surgery. In vitrectomy, the synchronised vertical 2-split-screen videos showed the surgeons' hands holding the instruments and moving roughly and boldly, in contrast to the very delicate movements of the vitrectomy instruments inside the eye. Synchronised vertical 2-split-screen surgical videos are beneficial for the education of young surgical trainees when learning surgical skills, including the surgeons' hand movements. © 2015 S. Karger AG, Basel.

  18. Advanced Spacesuit Informatics Software Design for Power, Avionics and Software Version 2.0

    NASA Technical Reports Server (NTRS)

    Wright, Theodore W.

    2016-01-01

This document describes the software design for the 2016 edition of the Informatics computer assembly of NASA's Advanced Extravehicular Mobility Unit (AEMU), also called the Advanced Spacesuit. The Informatics system is an optional part of the spacesuit assembly. It adds a graphical interface for displaying suit status, timelines, procedures, and warning information. It also provides an interface to the suit-mounted camera for recording still images, video, and audio field notes.

  19. Projection of controlled repeatable real-time moving targets to test and evaluate motion imagery quality

    NASA Astrophysics Data System (ADS)

    Scopatz, Stephen D.; Mendez, Michael; Trent, Randall

    2015-05-01

The projection of controlled moving targets is key to the quantitative testing of video capture and post-processing for Motion Imagery. This presentation will discuss several implementations of target projectors in which moving or apparently moving targets create motion to be captured by the camera under test. The targets presented are broadband (UV-VIS-IR) and move in a predictable, repeatable, and programmable way; several short videos will be included in the presentation. Among the technical approaches will be targets that move independently in the camera's field of view, as well as targets that change size and shape. The development of a rotating IR and VIS 4-bar target projector with programmable rotational velocity and acceleration control for testing hyperspectral cameras is discussed. A related issue for motion imagery is evaluated by simulating a blinding flash, an impulse of broadband photons lasting less than 2 milliseconds, to assess the camera's reaction to a large, fast change in signal. A traditional approach of gimbal-mounting the camera in combination with the moving target projector is discussed as an alternative to high-priced flight simulators. Based on the use of the moving target projector, several standard tests are proposed to provide counterparts to MTF (resolution), SNR, and minimum detectable signal at velocity. Several unique metrics are suggested for Motion Imagery, including Maximum Velocity Resolved (a measure of the greatest velocity that is accurately tracked by the camera system) and Missing Object Tolerance (a measurement of tracking ability when the target is obscured in the images). These metrics are applicable to UV-VIS-IR wavelengths and can be used to assist in camera and algorithm development, as well as to compare various systems by presenting exactly the same scenes to the cameras in a repeatable way.

  20. Research Instruments

    NASA Technical Reports Server (NTRS)

    1992-01-01

The GENETI-SCANNER, newest product of Perceptive Scientific Instruments, Inc. (PSI), rapidly scans slides, locates, digitizes, measures and classifies specific objects and events in research and diagnostic applications. Founded by former NASA employees, PSI's primary product line is based on NASA image processing technology. The instruments perform karyotyping - a process employed in the analysis and classification of chromosomes - using a video camera mounted on a microscope. Images are digitized, enabling chromosome image enhancement. The system enables karyotyping to be done significantly faster, increasing productivity and lowering costs. Product is no longer being manufactured.

  1. Rugged Walking Robot

    NASA Technical Reports Server (NTRS)

    Larimer, Stanley J.; Lisec, Thomas R.; Spiessbach, Andrew J.

    1990-01-01

Proposed walking-beam robot simpler and more rugged than articulated-leg walkers. Requires less data processing, and uses power more efficiently. Includes pair of tripods, one nested in other. Inner tripod holds power supplies, communication equipment, computers, instrumentation, sampling arms, and articulated sensor turrets. Outer tripod holds mast on which antennas for communication with remote control site and video cameras for viewing local and distant terrain are mounted. Propels itself by raising, translating, and lowering tripods in alternation. Steers itself by rotating raised tripod on turntable.

  2. Head-camera video recordings of trauma core competency procedures can evaluate surgical resident's technical performance as well as colocated evaluators.

    PubMed

    Mackenzie, Colin F; Pasley, Jason; Garofalo, Evan; Shackelford, Stacy; Chen, Hegang; Longinaker, Nyaradzo; Granite, Guinevere; Pugh, Kristy; Hagegeorge, George; Tisherman, Samuel A

    2017-07-01

Unbiased evaluation of trauma core competency procedures is necessary to determine if residency and predeployment training courses are useful. We tested whether a previously validated individual procedure score (IPS) for vascular exposure and fasciotomy (FAS) performance skills could discriminate training status, by comparing the IPS from evaluators colocated with surgeons to blind video evaluations. Performance of axillary artery (AA), brachial artery (BA), and femoral artery (FA) vascular exposures and lower extremity FAS on fresh cadavers by 40 PGY-2 to PGY-6 residents was video-recorded from head-mounted cameras. Two colocated trained evaluators assessed IPS before and after training. One surgeon in each pretraining tertile of IPS for each procedure was randomly identified for blind video review. The same 12 surgeons were video-recorded repeating the procedures less than 4 weeks after training. Five evaluators independently reviewed all 96 randomly arranged deidentified videos. Inter-rater reliability/consistency and intraclass correlation coefficients were compared for colocated versus video review of IPS and errors. Study methodology and bias were judged by Medical Education Research Study Quality Instrument and the Quality Assessment of Diagnostic Accuracy Studies criteria. There were no differences (p ≥ 0.5) in IPS for AA, FA, or FAS, whether evaluators were colocated or reviewed video recordings. Evaluator consistency was 0.29 (BA) to 0.77 (FA). Video and colocated evaluators were in total agreement (p = 1.0) for error recognition. The intraclass correlation coefficient was 0.73 to 0.92, depending on the procedure. Correlations between video and colocated evaluations were 0.5 to 0.9. Except for BA, blinded video evaluators discriminated (p < 0.002) whether procedures were performed before training versus after training. Study methodology by Medical Education Research Study Quality Instrument criteria scored 15.5/19; the Quality Assessment of Diagnostic Accuracy Studies 2 showed low bias risk. Video evaluations of AA, FA, and FAS procedures with IPS are unbiased, valid, and have potential for formative assessments of competency. Prognostic study, level II.

  3. System Synchronizes Recordings from Separated Video Cameras

    NASA Technical Reports Server (NTRS)

    Nail, William; Nail, William L.; Nail, Jasper M.; Le, Doung T.

    2009-01-01

A system of electronic hardware and software for synchronizing recordings from multiple, physically separated video cameras is being developed, primarily for use in multiple-look-angle video production. The system, the time code used in the system, and the underlying method of synchronization upon which the design of the system is based are denoted generally by the term "Geo-TimeCode(TradeMark)." The system is embodied mostly in compact, lightweight, portable units denoted video time-code units (VTUs) - one VTU for each video camera. The system is scalable in that any number of camera recordings can be synchronized. The estimated retail price per unit would be about $350 (in 2006 dollars). The need for this or another synchronization system external to video cameras arises because most video cameras do not include internal means for maintaining synchronization with other video cameras. Unlike prior video-camera-synchronization systems, this system does not depend on continuous cable or radio links between cameras (however, it does depend on occasional cable links lasting a few seconds). Also, whereas the time codes used in prior video-camera-synchronization systems typically repeat after 24 hours, the time code used in this system does not repeat for slightly more than 136 years; hence, this system is much better suited for long-term deployment of multiple cameras.
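
    The quoted rollover period is consistent with a 32-bit seconds counter; whether Geo-TimeCode actually uses one is an assumption, but the arithmetic is easy to check:

    ```python
    # Back-of-the-envelope check on the "slightly more than 136 years" figure.
    SECONDS_PER_YEAR = 365.25 * 24 * 3600
    rollover_years = 2**32 / SECONDS_PER_YEAR
    print(f"32-bit seconds counter rolls over after {rollover_years:.1f} years")
    # -> about 136.1 years
    ```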

  4. 7. VAL CAMERA STATION, INTERIOR VIEW OF CAMERA MOUNT, COMMUNICATION ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

7. VAL CAMERA STATION, INTERIOR VIEW OF CAMERA MOUNT, COMMUNICATION EQUIPMENT AND STORAGE CABINET. - Variable Angle Launcher Complex, Camera Stations, CA State Highway 39 at Morris Reservoir, Azusa, Los Angeles County, CA

  5. How much camera separation should be used for the capture and presentation of 3D stereoscopic imagery on binocular HMDs?

    NASA Astrophysics Data System (ADS)

    McIntire, John; Geiselman, Eric; Heft, Eric; Havig, Paul

    2011-06-01

Designers, researchers, and users of binocular stereoscopic head- or helmet-mounted displays (HMDs) face the tricky issue of what imagery to present in their particular displays, and how to do so effectively. Stereoscopic imagery must often be created in-house with a 3D graphics program or from within a 3D virtual environment, or stereoscopic photos/videos must be carefully captured, perhaps for relaying to an operator in a teleoperative system. In such situations, the question arises as to what camera separation (real or virtual) is appropriate or desirable for end-users and operators. We review some of the relevant literature regarding the question of stereo-pair camera separation using desk-mounted or larger-scale stereoscopic displays, and apply our findings to potential HMD applications, including command & control, teleoperation, information and scientific visualization, and entertainment.
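
    For context, the on-screen disparity produced by a given camera separation follows directly from pinhole stereo geometry (disparity = focal length × baseline / depth), so the useful separation depends on the task's working distances. A small illustrative calculation, with all numbers assumed:

    ```python
    # Sketch: how disparity falls off with depth for two camera separations.
    def disparity_px(baseline_m, focal_px, depth_m):
        """Horizontal disparity (pixels) of a point at depth_m for a
        rectified pinhole stereo pair."""
        return focal_px * baseline_m / depth_m

    # Eye-like 65 mm separation vs. a wider 120 mm rig, 1000 px focal length.
    for b in (0.065, 0.120):
        print(b, [round(disparity_px(b, 1000.0, z), 1) for z in (1, 5, 20)])
    ```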

  6. Video Recording With a GoPro in Hand and Upper Extremity Surgery.

    PubMed

    Vara, Alexander D; Wu, John; Shin, Alexander Y; Sobol, Gregory; Wiater, Brett

    2016-10-01

Video recordings of surgical procedures are an excellent tool for presentations, analyzing self-performance, illustrating publications, and educating surgeons and patients. Recording the surgeon's perspective with high-resolution video in the operating room or clinic has become readily available, and advances in software improve the ease of editing these videos. A GoPro HERO 4 Silver or Black was mounted on a head strap and worn over the surgical scrub cap, above the loupes of the operating surgeon. Five live surgical cases were recorded with the camera. The videos were uploaded to a computer and subsequently edited with iMovie or the GoPro software. The optimal settings for both the Silver and Black editions, when operating room lights are used, were determined to be: narrow view, 1080p, 60 frames per second (fps), spot meter on, protune on with auto white balance, exposure compensation at -0.5, and no polarizing lens. When the operating room lights were not used, the standard settings for a GoPro camera were found to be ideal for positioning and editing (4K, 15 fps, spot meter and protune off). The GoPro HERO 4 provides high-quality, cost-effective video recording of upper extremity surgical procedures from the surgeon's perspective. Challenges include finding the optimal settings for each surgical procedure and the length of recording due to battery life limitations. Copyright © 2016 American Society for Surgery of the Hand. Published by Elsevier Inc. All rights reserved.

  7. Easily Accessible Camera Mount

    NASA Technical Reports Server (NTRS)

    Chalson, H. E.

    1986-01-01

Modified mount enables fast alignment of movie cameras in explosionproof housings. Screw is on side and readily reached through side door of housing. Mount includes right-angle drive mechanism containing two miter gears that turn threaded shaft. Shaft drives movable dovetail clamping jaw that engages fixed dovetail plate on camera. Mechanism aligns camera in housing and secures it. Reduces installation time by 80 percent.

  8. Photogrammetric Trajectory Estimation of Foam Debris Ejected From an F-15 Aircraft

    NASA Technical Reports Server (NTRS)

    Smith, Mark S.

    2006-01-01

    Photogrammetric analysis of high-speed digital video data was performed to estimate trajectories of foam debris ejected from an F-15B aircraft. This work was part of a flight test effort to study the transport properties of insulating foam shed by the Space Shuttle external tank during ascent. The conical frustum-shaped pieces of debris, called "divots," were ejected from a flight test fixture mounted underneath the F-15B aircraft. Two onboard cameras gathered digital video data at two thousand frames per second. Time histories of divot positions were determined from the videos post flight using standard photogrammetry techniques. Divot velocities were estimated by differentiating these positions with respect to time. Time histories of divot rotations were estimated using four points on the divot face. Estimated divot position, rotation, and Mach number for selected cases are presented. Uncertainty in the results is discussed.
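
    The differentiation step described above can be reproduced with a simple central-difference estimate; a minimal sketch with placeholder (non-flight) data:

    ```python
    # Sketch: velocity from photogrammetric position time histories.
    import numpy as np

    t = np.linspace(0.0, 0.05, 101)   # 2000 fps -> 0.5 ms frame spacing
    x = 0.5 * 9.81 * t**2             # toy free-fall trajectory (m)
    vx = np.gradient(x, t)            # central-difference velocity (m/s)
    print(vx[:3], vx[-1])
    ```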

  9. Generalized parallel-perspective stereo mosaics from airborne video.

    PubMed

    Zhu, Zhigang; Hanson, Allen R; Riseman, Edward M

    2004-02-01

    In this paper, we present a new method for automatically and efficiently generating stereoscopic mosaics by seamless registration of images collected by a video camera mounted on an airborne platform. Using a parallel-perspective representation, a pair of geometrically registered stereo mosaics can be precisely constructed under quite general motion. A novel parallel ray interpolation for stereo mosaicing (PRISM) approach is proposed to make stereo mosaics seamless in the presence of obvious motion parallax and for rather arbitrary scenes. Parallel-perspective stereo mosaics generated with the PRISM method have better depth resolution than perspective stereo due to the adaptive baseline geometry. Moreover, unlike previous results showing that parallel-perspective stereo has a constant depth error, we conclude that the depth estimation error of stereo mosaics is in fact a linear function of the absolute depths of a scene. Experimental results on long video sequences are given.

  10. Multiple Sensor Camera for Enhanced Video Capturing

    NASA Astrophysics Data System (ADS)

    Nagahara, Hajime; Kanki, Yoshinori; Iwai, Yoshio; Yachida, Masahiko

Camera resolution has improved drastically in response to the demand for high-quality digital images; digital still cameras now offer several megapixels. Although video cameras have higher frame rates, their resolution is lower than that of still cameras, so high resolution and a high frame rate are incompatible in ordinary cameras on the market. This problem is difficult to solve with a single sensor because it stems from the physical limitation of the pixel transfer rate. In this paper, we propose a multi-sensor camera for capturing resolution- and frame-rate-enhanced video. A common multi-CCD camera, such as a 3CCD color camera, uses identical CCDs to capture different spectral information. Our approach is to use sensors of different spatio-temporal resolution in a single camera cabinet, capturing higher-resolution and higher-frame-rate information separately. We built a prototype camera that can capture high-resolution (2588×1958 pixels, 3.75 fps) and high-frame-rate (500×500 pixels, 90 fps) videos. We also propose a calibration method for the camera. As one application of the camera, we demonstrate an enhanced video (2128×1952 pixels, 90 fps) generated from the captured videos to show the utility of the camera.

  11. Integrated remotely sensed datasets for disaster management

    NASA Astrophysics Data System (ADS)

    McCarthy, Timothy; Farrell, Ronan; Curtis, Andrew; Fotheringham, A. Stewart

    2008-10-01

Video imagery can be acquired from aerial, terrestrial and marine based platforms and has been exploited for a range of remote sensing applications over the past two decades. Examples include coastal surveys using aerial video, route-corridor infrastructure surveys using vehicle-mounted video cameras, aerial surveys over forestry and agriculture, underwater habitat mapping and disaster management. Many of these video systems are based on interlaced television standards such as North America's NTSC and the European SECAM and PAL television systems, recorded in various video formats. This technology has recently been employed as a front-line remote sensing technology for damage assessment post-disaster. This paper traces the development of spatial video as a remote sensing tool from the early 1980s to the present day. The background to a new spatial-video research initiative based at the National University of Ireland, Maynooth (NUIM), is described. New improvements are proposed, including low-cost encoders, easy-to-use software decoders, timing issues and interoperability. These developments will enable specialists and non-specialists to collect, process and integrate these datasets with minimal support. This integrated approach will enable decision makers to access relevant remotely sensed datasets quickly and so carry out rapid damage assessment during and post-disaster.

  12. Visual acuity, contrast sensitivity, and range performance with compressed motion video

    NASA Astrophysics Data System (ADS)

    Bijl, Piet; de Vries, Sjoerd C.

    2010-10-01

    Video of visual acuity (VA) and contrast sensitivity (CS) test charts in a complex background was recorded using a CCD color camera mounted on a computer-controlled tripod and was fed into real-time MPEG-2 compression/decompression equipment. The test charts were based on the triangle orientation discrimination (TOD) test method and contained triangle test patterns of different sizes and contrasts in four possible orientations. In a perception experiment, observers judged the orientation of the triangles in order to determine VA and CS thresholds at the 75% correct level. Three camera velocities (0, 1.0, and 2.0 deg/s, or 0, 4.1, and 8.1 pixels/frame) and four compression rates (no compression, 4 Mb/s, 2 Mb/s, and 1 Mb/s) were used. VA is shown to be rather robust to any combination of motion and compression. CS, however, dramatically decreases when motion is combined with high compression ratios. The measured thresholds were fed into the TOD target acquisition model to predict the effect of motion and compression on acquisition ranges for tactical military vehicles. The effect of compression on static performance is limited but strong with motion video. The data suggest that with the MPEG2 algorithm, the emphasis is on the preservation of image detail at the cost of contrast loss.

  13. Augmented reality system for CT-guided interventions: system description and initial phantom trials

    NASA Astrophysics Data System (ADS)

    Sauer, Frank; Schoepf, Uwe J.; Khamene, Ali; Vogt, Sebastian; Das, Marco; Silverman, Stuart G.

    2003-05-01

We are developing an augmented reality (AR) image guidance system, in which information derived from medical images is overlaid onto a video view of the patient. The interventionalist wears a head-mounted display (HMD) that presents him with the augmented stereo view. The HMD is custom fitted with two miniature color video cameras that capture the stereo view of the scene. A third video camera, operating in the near IR, is also attached to the HMD and is used for head tracking. The system achieves real-time performance of 30 frames per second. The graphics appear firmly anchored in the scene, without any noticeable swimming, jitter, or time lag. For the application of CT-guided interventions, we extended our original prototype system to include tracking of a biopsy needle to which we attached a set of optical markers. The AR visualization provides very intuitive guidance for planning and placement of the needle and reduces radiation exposure to the patient and radiologist. We used an interventional abdominal phantom with simulated liver lesions to perform an initial set of experiments. The users were consistently able to locate the target lesion with the first needle pass. These results provide encouragement to move the system towards clinical trials.

  14. An image-tube camera for cometary spectrography

    NASA Astrophysics Data System (ADS)

    Mamadov, O.

The paper discusses the mounting of an image-tube camera. The cathode is of antimony, sodium, potassium, and cesium. The parts used for mounting are of acrylic plastic and a fabric-based laminate. A mounting design that does not include cooling is presented. The aperture ratio of the camera is 1:27. The way the camera is joined to the spectrograph is also discussed.

  15. An algorithm of a real time image tracking system using a camera with pan/tilt motors on an embedded system

    NASA Astrophysics Data System (ADS)

    Kim, Hie-Sik; Nam, Chul; Ha, Kwan-Yong; Ayurzana, Odgeral; Kwon, Jong-Won

    2005-12-01

Embedded systems have been applied to many fields, including households and industrial sites, and user interfaces with simple on-screen displays are increasingly common. Because user demands are growing and the high penetration rate of the Internet opens up more fields of application, demand for embedded systems continues to rise. We implemented an embedded system for image tracking. The system uses a fixed IP address for reliable server operation on TCP/IP networks. Using a USB camera on an embedded Linux system, we developed real-time broadcasting of video images over the Internet. The digital camera is connected to the USB host port of the embedded board, and all input images from the camera are continuously stored as compressed JPEG files in a directory on the Linux web server. Consecutive frames from the camera are compared to measure a displacement vector, using a block matching algorithm and an edge detection algorithm for fast processing. The displacement vector then drives the pan/tilt motors through an RS232 serial cable. The embedded board uses the S3C2410 MPU, built on the ARM920T core from Samsung. The operating system is a ported embedded Linux kernel with a mounted root file system, and the stored images are sent to the client PC through a web browser, using the networking facilities of Linux and the TCP/IP protocol.
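
    A hedged sketch of the displacement-vector/pan-tilt loop described above, assuming OpenCV and pyserial on the host side. The serial port name and the one-byte-per-axis motor command are invented placeholders; the paper's edge detection stage is omitted, and plain template matching stands in for the block matching algorithm.

    ```python
    # Sketch: frame-to-frame displacement vector -> pan/tilt command.
    import cv2
    import serial  # pyserial

    port = serial.Serial("/dev/ttyS0", 9600, timeout=1)  # assumed port
    cap = cv2.VideoCapture(0)

    _, prev = cap.read()
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Match a central patch of the previous frame inside the new frame.
        h, w = prev_gray.shape
        patch = prev_gray[h//4:3*h//4, w//4:3*w//4]
        res = cv2.matchTemplate(gray, patch, cv2.TM_CCOEFF_NORMED)
        _, _, _, max_loc = cv2.minMaxLoc(res)
        dx = max_loc[0] - w//4   # displacement vector components (pixels)
        dy = max_loc[1] - h//4
        # Hypothetical protocol: header byte plus one signed byte per axis,
        # sent as wrapped unsigned values.
        port.write(bytes([0x70, dx & 0xFF, dy & 0xFF]))
        prev_gray = gray
    ```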

  16. OPALS: A COTS-based Tech Demo of Optical Communications

    NASA Technical Reports Server (NTRS)

    Oaida, Bogdan

    2012-01-01

I. Objective: Deliver video from the ISS to an optical ground terminal via an optical communications link. a) JPL Phaeton/Early Career Hire (ECH) training project. b) Implemented as a Class-D payload. c) Downlink at approx. 30 Mb/s. II. Flight System: a) Optical head: beacon acquisition camera, downlink transmitter, 2-axis gimbal. b) Sealed container: laser, avionics, power distribution, digital I/O board. III. Implementation: a) Ground station: Optical Communications Telescope Laboratory at the Table Mountain Facility. b) Flight system mounted to an ISS FRAM as the standard interface, attached externally on an Express Logistics Carrier.

  17. Camera Control and Geo-Registration for Video Sensor Networks

    NASA Astrophysics Data System (ADS)

    Davis, James W.

    With the use of large video networks, there is a need to coordinate and interpret the video imagery for decision support systems with the goal of reducing the cognitive and perceptual overload of human operators. We present computer vision strategies that enable efficient control and management of cameras to effectively monitor wide-coverage areas, and examine the framework within an actual multi-camera outdoor urban video surveillance network. First, we construct a robust and precise camera control model for commercial pan-tilt-zoom (PTZ) video cameras. In addition to providing a complete functional control mapping for PTZ repositioning, the model can be used to generate wide-view spherical panoramic viewspaces for the cameras. Using the individual camera control models, we next individually map the spherical panoramic viewspace of each camera to a large aerial orthophotograph of the scene. The result provides a unified geo-referenced map representation to permit automatic (and manual) video control and exploitation of cameras in a coordinated manner. The combined framework provides new capabilities for video sensor networks that are of significance and benefit to the broad surveillance/security community.
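
    One small piece of such a PTZ control model is the mapping from a pixel offset to relative pan/tilt angles. The linear small-angle mapping below is a simplifying assumption (the paper builds a full calibrated control model), and all constants are illustrative:

    ```python
    # Sketch: pixel offset from image centre -> relative pan/tilt (degrees).
    def pixel_to_pan_tilt(px, py, width, height, hfov_deg, vfov_deg):
        """Relative (pan, tilt) in degrees needed to centre pixel (px, py),
        under a linear small-angle approximation of the camera's FOV."""
        pan = (px - width / 2) / width * hfov_deg
        tilt = -(py - height / 2) / height * vfov_deg  # image y grows downward
        return pan, tilt

    print(pixel_to_pan_tilt(1500, 200, 1920, 1080, 60.0, 34.0))
    ```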

  18. STS-74/Mir photogrammetric appendage structural dynamics experiment

    NASA Technical Reports Server (NTRS)

    Welch, Sharon S.; Gilbert, Michael G.

    1996-01-01

The Photogrammetric Appendage Structural Dynamics Experiment (PASDE) is an International Space Station (ISS) Phase-1 risk mitigation experiment. Phase-1 experiments are performed during docking missions of the U.S. Space Shuttle to the Russian Space Station Mir. The purpose of the experiment is to demonstrate the use of photogrammetric techniques for determination of structural dynamic mode parameters of solar arrays and other spacecraft appendages. Photogrammetric techniques are a low-cost alternative to appendage-mounted accelerometers for the ISS program. The objective of the first flight of PASDE, on STS-74 in November 1995, was to obtain video images of Mir Kvant-2 solar array response to various structural dynamic excitation events. More than 113 minutes of high-quality structural response video data were collected during the mission. The PASDE experiment hardware consisted of three instruments, each containing two video cameras, two video tape recorders, a modified video signal time inserter, and associated avionics boxes. The instruments were designed, fabricated, and tested at the NASA Langley Research Center in eight months. The flight hardware was integrated into standard Hitchhiker canisters at the NASA Goddard Space Flight Center and then installed into the Space Shuttle cargo bay in locations selected to achieve good video coverage and photogrammetric geometry.

  19. Dante's Volcano

    NASA Technical Reports Server (NTRS)

    1994-01-01

This video contains two segments: a 0:01:50 spot and a 0:08:21 feature. Dante 2, an eight-legged walking machine, is shown during field trials as it explores the inner depths of an active volcano at Mount Spurr, Alaska. A NASA-sponsored team at Carnegie Mellon University built Dante to withstand Earth's harshest conditions, to deliver a science payload to the interior of a volcano, and to report on its journey to the floor of a volcano. Remotely controlled from 80 miles away, the robot explored the inner depths of the volcano, and information from onboard video cameras and sensors was relayed via satellite to scientists in Anchorage. There, using a computer-generated image, controllers tracked the robot's movement. Ultimately the robot team hopes to apply the technology to future planetary missions.

  20. Deformable three-dimensional model architecture for interactive augmented reality in minimally invasive surgery.

    PubMed

    Vemuri, Anant S; Wu, Jungle Chi-Hsiang; Liu, Kai-Che; Wu, Hurng-Sheng

    2012-12-01

Surgical procedures have undergone considerable advancement during the last few decades. More recently, the intraoperative availability of some imaging methods has added a new dimension to minimally invasive techniques. Augmented reality in surgery has been a topic of intense interest and research. Augmented reality involves the use of computer vision algorithms on video from endoscopic cameras or cameras mounted in the operating room to provide the surgeon with additional information that he or she otherwise would have to recognize intuitively. One of the techniques combines a virtual preoperative model of the patient with the endoscope camera using natural or artificial landmarks to provide an augmented reality view in the operating room. The authors' approach is to provide this with the fewest possible changes to the operating room. Software architecture is presented to provide interactive adjustment in the registration of a three-dimensional (3D) model and endoscope video. Augmented reality was used to perform 12 surgeries, including adrenalectomy, ureteropelvic junction obstruction, retrocaval ureter, and pancreas procedures. The general feedback from the surgeons has been very positive, not only in terms of deciding the positions for insertion points but also in knowing the least change in anatomy. The approach involves providing a deformable 3D model architecture and its application to the operating room. A 3D model with a deformable structure is needed to show the shape change of soft tissue during the surgery. The software architecture to provide interactive adjustment in registration of the 3D model and endoscope video, with adjustability of every 3D model, is presented.

  1. Two degree of freedom camera mount

    NASA Technical Reports Server (NTRS)

    Ambrose, Robert O. (Inventor)

    2003-01-01

    A two degree of freedom camera mount. The camera mount includes a socket, a ball, a first linkage and a second linkage. The socket includes an interior surface and an opening. The ball is positioned within an interior of the socket. The ball includes a coupling point for rotating the ball relative to the socket and an aperture for mounting a camera. The first and second linkages are rotatably connected to the socket and slidably connected to the coupling point of the ball. Rotation of the linkages with respect to the socket causes the ball to rotate with respect to the socket.

  2. 2011 Tohoku tsunami video and TLS based measurements: hydrographs, currents, inundation flow velocities, and ship tracks

    NASA Astrophysics Data System (ADS)

    Fritz, H. M.; Phillips, D. A.; Okayasu, A.; Shimozono, T.; Liu, H.; Takeda, S.; Mohammed, F.; Skanavis, V.; Synolakis, C. E.; Takahashi, T.

    2012-12-01

The March 11, 2011, magnitude Mw 9.0 earthquake off the coast of the Tohoku region caused catastrophic damage and loss of life in Japan. The mid-afternoon tsunami arrival, combined with survivors equipped with cameras on top of vertical evacuation buildings, provided spontaneous, spatially and temporally resolved inundation recordings. This report focuses on the surveys at 9 tsunami eyewitness video recording locations in Miyako, Kamaishi, Kesennuma and Yoriisohama along Japan's Sanriku coast and the subsequent video image calibration, processing, tsunami hydrograph and flow velocity analysis. Selected tsunami video recording sites were explored, eyewitnesses interviewed and some ground control points recorded during the initial tsunami reconnaissance in April, 2011. A follow-up survey in June, 2011 focused on terrestrial laser scanning (TLS) at locations with high quality eyewitness videos. We acquired precise topographic data using TLS at the video sites, producing a 3-dimensional "point cloud" dataset. A camera mounted on the Riegl VZ-400 scanner yields photorealistic 3D images. Integrated GPS measurements allow accurate georeferencing. The original video recordings were recovered from eyewitnesses and the Japanese Coast Guard (JCG). The analysis of the tsunami videos follows an adapted four-step procedure originally developed for the analysis of 2004 Indian Ocean tsunami videos at Banda Aceh, Indonesia (Fritz et al., 2006). The first step requires the calibration of the sector of view present in the eyewitness video recording based on ground control points measured in the LiDAR data. In a second step, the video image motion induced by the panning of the video camera was determined from subsequent images by particle image velocimetry (PIV) applied to fixed objects. The third step involves the transformation of the raw tsunami video images from image coordinates to world coordinates with a direct linear transformation (DLT) procedure. Finally, the instantaneous tsunami surface current and flooding velocity vector maps are determined by applying the digital PIV analysis method to the rectified tsunami video images with floating debris clusters. Tsunami currents up to 11 m/s were measured in Kesennuma Bay, making navigation impossible. Tsunami hydrographs are derived from the videos based on water surface elevations at surface-piercing objects identified in the acquired topographic TLS data. Apart from a dominant tsunami crest, the hydrograph at Kamaishi also reveals a subsequent drawdown to -10 m, exposing the harbor bottom. In some cases ship moorings resisted the main tsunami crest only to be broken by the extreme drawdown, setting vessels adrift for hours. Further, we discuss the complex effects of coastal structures on inundation and outflow hydrographs and flow velocities.
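
    Step three, the direct linear transformation from image to world coordinates, reduces to a plane-to-plane homography estimated from surveyed ground control points. A minimal sketch, with placeholder (non-survey) control points:

    ```python
    # Sketch: DLT homography from image pixels to world plane coordinates.
    import numpy as np

    def dlt_homography(img_pts, world_pts):
        """Estimate the 3x3 homography H with world ~ H @ image
        (homogeneous coordinates), via the SVD null-space solution."""
        A = []
        for (u, v), (x, y) in zip(img_pts, world_pts):
            A.append([u, v, 1, 0, 0, 0, -x*u, -x*v, -x])
            A.append([0, 0, 0, u, v, 1, -y*u, -y*v, -y])
        _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
        return Vt[-1].reshape(3, 3)

    def image_to_world(H, u, v):
        x, y, w = H @ np.array([u, v, 1.0])
        return x / w, y / w

    # Placeholder control points: pixel positions and world coords (m).
    img = [(100, 800), (1800, 820), (1700, 300), (250, 280)]
    wld = [(0.0, 0.0), (60.0, 0.0), (55.0, 40.0), (5.0, 42.0)]
    H = dlt_homography(img, wld)
    print(image_to_world(H, 960, 540))
    ```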

  3. Acquisition and Analysis of Dynamic Responses of a Historic Pedestrian Bridge using Video Image Processing

    NASA Astrophysics Data System (ADS)

    O'Byrne, Michael; Ghosh, Bidisha; Schoefs, Franck; O'Donnell, Deirdre; Wright, Robert; Pakrashi, Vikram

    2015-07-01

Video based tracking is capable of analysing bridge vibrations that are characterised by large amplitudes and low frequencies. This paper presents the use of video images and associated image processing techniques to obtain the dynamic response of a pedestrian suspension bridge in Cork, Ireland. This historic structure is one of the four suspension bridges in Ireland and is notable for its dynamic nature. A video camera is mounted on the river-bank and the dynamic responses of the bridge have been measured from the video images. The dynamic response is assessed without the need of a reflector on the bridge and in the presence of various forms of luminous complexities in the video image scenes. Vertical deformations of the bridge were measured in this regard. The video image tracking for the measurement of dynamic responses of the bridge were based on correlating patches in time-lagged scenes in video images and utilising a zero mean normalised cross correlation (ZNCC) metric. The bridge was excited by designed pedestrian movement and by individual cyclists traversing the bridge. The time series data of dynamic displacement responses of the bridge were analysed to obtain the frequency domain response. Frequencies obtained from video analysis were checked against accelerometer data from the bridge obtained while carrying out the same set of experiments used for video image based recognition.

  5. Evaluation of Suppression of Hydroprocessed Renewable Jet (HRJ) Fuel Fires with Aqueous Film Forming Foam (AFFF)

    DTIC Science & Technology

    2011-07-01

    cameras were installed around the test pan and an underwater GoPro® video camera recorded the fire from below the layer of fuel. … A GoPro video camera with a wide angle lens recorded the tests. … The still camera and the GoPro® video camera were not used for fire suppression experiments. … Two ¼-in thick stainless steel test pans were

  6. Pettit works with two still cameras mounted together in the U.S. Laboratory

    NASA Image and Video Library

    2012-01-21

    ISS030-E-049636 (21 Jan. 2012) --- NASA astronaut Don Pettit, Expedition 30 flight engineer, works with two still cameras mounted together in the Destiny laboratory of the International Space Station. One camera is an infrared modified still camera.

  7. Pettit works with two still cameras mounted together in the U.S. Laboratory

    NASA Image and Video Library

    2012-01-21

    ISS030-E-049643 (21 Jan. 2012) --- NASA astronaut Don Pettit, Expedition 30 flight engineer, works with two still cameras mounted together in the Destiny laboratory of the International Space Station. One camera is an infrared modified still camera.

  8. Development of high-speed video cameras

    NASA Astrophysics Data System (ADS)

    Etoh, Takeharu G.; Takehara, Kohsei; Okinaka, Tomoo; Takano, Yasuhide; Ruckelshausen, Arno; Poggemann, Dirk

    2001-04-01

    Presented in this paper is an outline of the R&D activities on high-speed video cameras that have been under way at Kinki University for more than ten years and are currently proceeding as an international cooperative project with the University of Applied Sciences Osnabruck and other organizations. Extensive market research has been done, (1) on users' requirements for high-speed multi-framing and video cameras, by questionnaires and hearings, and (2) on the current availability of cameras of this sort, by searches of journals and websites. Both support the necessity of developing a high-speed video camera of more than 1 million fps. A video camera of 4,500 fps with parallel readout was developed in 1991. A video camera with triple sensors was developed in 1996; the sensor is the same as the one developed for the previous camera. The frame rate is 50 million fps for triple-framing and 4,500 fps for triple-light-wave framing, including color image capturing. The idea of a 1-million-fps video camera with an ISIS, an In-situ Storage Image Sensor, was first proposed in 1993 and has been continuously improved. A test sensor was developed in early 2000 and successfully captured images at 62,500 fps. Currently, the design of a prototype ISIS is under way and, hopefully, it will be fabricated in the near future. Epoch-making cameras by others in the history of high-speed video camera development are also briefly reviewed.

  9. Helicopter-based Photography for use in SfM over the West Greenland Ablation Zone

    NASA Astrophysics Data System (ADS)

    Mote, T. L.; Tedesco, M.; Astuti, I.; Cotten, D.; Jordan, T.; Rennermalm, A. K.

    2015-12-01

    Results of low-elevation high-resolution aerial photography from a helicopter are reported for a supraglacial watershed in West Greenland. Data were collected at the end of July 2015 over a supraglacial watershed terminating in the Kangerlussuaq region of Greenland and following the Utrecht University K-Transect of meteorological stations. The aerial photographs reported here were complementary observations used to support hyperspectral measurements of albedo, discussed in the Greenland Ice Sheet hydrology session of this AGU Fall Meeting. A compact digital camera was installed inside a pod mounted on the side of the helicopter together with gyroscopes and accelerometers that were used to estimate the relative orientation. Continuous video was collected on the 19 and 21 July flights, and frames extracted from the videos are used to create a series of aerial photos. Individual geo-located aerial photos were also taken on a 24 July flight. We demonstrate that by maintaining a constant flight elevation and a near constant ground speed, a helicopter with a mounted camera can produce 3-D structure of the ablation zone of the ice sheet at an unprecedented spatial resolution of the order of 5-10 cm. By setting the intervalometer on the camera to 2 seconds, the images obtained provide sufficient overlap (>60%) for digital image alignment, even at a flight elevation of ~170 m. As a result, very accurate point matching between photographs can be achieved and an extremely dense RGB encoded point cloud can be extracted. Overlapping images provide a series of stereopairs that can be used to create point cloud data consisting of 3 position and 3 color variables: X, Y, Z, R, G, and B. This point cloud is then used to create orthophotos or large scale digital elevation models, thus accurately displaying ice structure. The geo-referenced images provide a ground spatial resolution of approximately 6 cm, permitting analysis of detailed features, such as cryoconite holes, evolving small order streams, and cracks from hydrofracturing.
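
    The quoted overlap can be sanity-checked from the flight geometry; in the sketch below the along-track field of view and the ground speed are assumed values, since the abstract reports only the ~170 m flight elevation and the 2 s interval.

      # Back-of-envelope check of forward overlap for the 2 s intervalometer
      # setting; FOV and ground speed are assumed figures, not reported values.
      import math

      h = 170.0            # flight elevation above the ice (m)
      fov_deg = 60.0       # assumed along-track field of view of the camera
      speed = 15.0         # assumed helicopter ground speed (m/s)
      interval = 2.0       # intervalometer setting (s)

      footprint = 2 * h * math.tan(math.radians(fov_deg / 2))   # along-track (m)
      overlap = 1 - speed * interval / footprint
      print(f"footprint {footprint:.0f} m, forward overlap {overlap:.0%}")
      # -> roughly 196 m footprint and ~85% overlap, comfortably above 60%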

  10. Simple and cost-effective hardware and software for functional brain mapping using intrinsic optical signal imaging.

    PubMed

    Harrison, Thomas C; Sigler, Albrecht; Murphy, Timothy H

    2009-09-15

    We describe a simple and low-cost system for intrinsic optical signal (IOS) imaging using stable LED light sources, basic microscopes, and commonly available CCD cameras. IOS imaging measures activity-dependent changes in the light reflectance of brain tissue, and can be performed with a minimum of specialized equipment. Our system uses LED ring lights that can be mounted on standard microscope objectives or video lenses to provide a homogeneous and stable light source, with less than 0.003% fluctuation across images averaged from 40 trials. We describe the equipment and surgical techniques necessary for both acute and chronic mouse preparations, and provide software that can create maps of sensory representations from images captured by inexpensive 8-bit cameras or by 12-bit cameras. The IOS imaging system can be adapted to commercial upright microscopes or custom macroscopes, eliminating the need for dedicated equipment or complex optical paths. This method can be combined with parallel high resolution imaging techniques such as two-photon microscopy.
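
    The core IOS computation is a trial-averaged fractional change in reflectance; a minimal sketch, assuming the data are stacked as (trials, frames, height, width):

      # Minimal sketch of intrinsic optical signal mapping: the response map
      # is the trial-averaged fractional change in reflectance (dR/R).
      # Array layout and frame ranges are illustrative assumptions.
      import numpy as np

      def ios_map(stack, base_frames, stim_frames):
          """stack: array (trials, frames, H, W) of reflectance images."""
          trial_avg = stack.astype(np.float64).mean(axis=0)   # average trials
          r0 = trial_avg[base_frames].mean(axis=0)            # pre-stimulus R0
          r1 = trial_avg[stim_frames].mean(axis=0)            # stimulus R
          return (r1 - r0) / r0                               # dR/R map

      # e.g. drr = ios_map(stack, base_frames=np.arange(0, 10),
      #                    stim_frames=np.arange(10, 25))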

  11. Use of thermal infrared imaging for monitoring renewed dome growth at Mount St. Helens, 2004: Chapter 17 in A volcano rekindled: the renewed eruption of Mount St. Helens, 2004-2006

    USGS Publications Warehouse

    Schneider, David J.; Vallance, James W.; Wessels, Rick L.; Logan, Matthew; Ramsey, Michael S.; Sherrod, David R.; Scott, William E.; Stauffer, Peter H.

    2008-01-01

    A helicopter-mounted thermal imaging radiometer documented the explosive vent-clearing and effusive phases of the eruption of Mount St. Helens in 2004. A gyrostabilized gimbal controlled by a crew member housed the radiometer and an optical video camera attached to the nose of the helicopter. Since October 1, 2004, the system has provided thermal and video observations of dome growth. Flights conducted as frequently as twice daily during the initial month of the eruption monitored rapid changes in the crater and 1980-86 lava dome. Thermal monitoring decreased to several times per week once dome extrusion began. The thermal imaging system provided unique observations, including timely recognition that the early explosive phase was phreatic, location of structures controlling thermal emissions and active faults, detection of increased heat flow prior to the extrusion of lava, and recognition of new lava extrusion. The first spines, 1 and 2, were hotter when they emerged (maximum temperature 700-730°C) than subsequent spines insulated by as much as several meters of fault gouge. Temperature of gouge-covered spines was about 200°C where they emerged from the vent, and it decreased rapidly with distance from the vent. The hottest parts of these spines were as high as 500-730°C in fractured and broken-up regions. Such temperature variation needs to be accounted for in the retrieval of eruption parameters using satellite-based techniques, as such features are smaller than pixels in satellite images.

  12. UrtheCast Second-Generation Earth Observation Sensors

    NASA Astrophysics Data System (ADS)

    Beckett, K.

    2015-04-01

    UrtheCast's Second-Generation state-of-the-art Earth Observation (EO) remote sensing platform will be hosted on the NASA segment of the International Space Station (ISS). This platform comprises a high-resolution dual-mode (pushbroom and video) optical camera and a dual-band (X and L) Synthetic Aperture RADAR (SAR) instrument. These new sensors will complement the first-generation medium-resolution pushbroom and high-definition video cameras that were mounted on the Russian segment of the ISS in early 2014. The new cameras are expected to be launched to the ISS in late 2017 via the Space Exploration Technologies Corporation Dragon spacecraft. The Canadarm will then be used to install the remote sensing platform onto a CBM (Common Berthing Mechanism) hatch on Node 3, allowing the sensor electronics to be accessible from the inside of the station, thus limiting their exposure to the space environment and allowing for future capability upgrades. The UrtheCast second-generation system will be able to take full advantage of the strengths that each of the individual sensors offers, such that the data exploitation capabilities of the combined sensors are significantly greater than from either sensor alone. This represents a truly novel platform that will lead to significant advances in many other Earth Observation applications such as environmental monitoring, energy and natural resources management, and humanitarian response, with data availability anticipated to begin after commissioning is completed in early 2018.

  13. A head-mounted display-based personal integrated-image monitoring system for transurethral resection of the prostate.

    PubMed

    Yoshida, Soichiro; Kihara, Kazunori; Takeshita, Hideki; Fujii, Yasuhisa

    2014-12-01

    The head-mounted display (HMD) is a new image monitoring system. We developed the Personal Integrated-image Monitoring System (PIM System) using the HMD (HMZ-T2, Sony Corporation, Tokyo, Japan) in combination with video splitters and multiplexers as a surgical guide system for transurethral resection of the prostate (TURP). The imaging information obtained from the cystoscope, the transurethral ultrasonography (TRUS), the video camera attached to the HMD, and the patient's vital signs monitor was split and integrated by the PIM System, and a composite image was displayed by the HMD using a four-split screen technique. Wearing the HMD, the lead surgeon and the assistant could simultaneously and continuously monitor the same information displayed by the HMD in an ergonomically efficient posture. Each participant could independently rearrange the images comprising the composite image depending on the current step of the procedure. Two benign prostatic hyperplasia (BPH) patients underwent TURP performed by surgeons guided with this system. In both cases, the TURP procedure was successfully performed, and their postoperative clinical courses showed no remarkable unfavorable events. During the procedure, none of the participants experienced any HMD-wear-related adverse effects or reported any discomfort.
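
    The four-split composition itself amounts to simple frame tiling; a minimal digital sketch, with the four feeds and the output size as hypothetical stand-ins for the PIM System's video splitter/multiplexer chain:

      # Sketch of a four-split screen composition: four video feeds tiled
      # into one frame for an HMD. Sources and output size are hypothetical.
      import cv2
      import numpy as np

      def quad_split(cysto, trus, hmd_cam, vitals, out_w=1280, out_h=720):
          """Tile four BGR frames into a 2x2 composite."""
          w, h = out_w // 2, out_h // 2
          tiles = [cv2.resize(f, (w, h)) for f in (cysto, trus, hmd_cam, vitals)]
          top = np.hstack(tiles[:2])
          bottom = np.hstack(tiles[2:])
          return np.vstack([top, bottom])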

  14. Video-microscopy for use in microsurgical aspects of complex hepatobiliary and pancreatic surgery: a preliminary report

    PubMed Central

    Nissen, Nicholas N; Menon, Vijay; Williams, James; Berci, George

    2011-01-01

    Background The use of loupe magnification during complex hepatobiliary and pancreatic (HPB) surgery has become routine. Unfortunately, loupe magnification has several disadvantages including limited magnification, a fixed field and non-variable magnification parameters. The aim of this report is to describe a simple system of video-microscopy for use in open surgery as an alternative to loupe magnification. Methods In video-microscopy, the operative field is displayed on a TV monitor using a high-definition (HD) camera with a special optic mounted on an adjustable mechanical arm. The set-up and application of this system are described and illustrated using examples drawn from pancreaticoduodenectomy, bile duct repair and liver transplantation. Results This system is easy to use and can provide variable magnification of ×4–12 at a camera distance of 25–35 cm from the operative field and a depth of field of 15 mm. This system allows the surgeon and assistant to work from an HD TV screen during critical phases of microsurgery. Conclusions The system described here provides better magnification than loupe lenses and thus may be beneficial during complex HPB procedures. Other benefits of this system include the fact that its use decreases neck strain and postural fatigue in the surgeon and it can be used as a tool for documentation and teaching. PMID:21929677

  15. Egocentric Temporal Action Proposals.

    PubMed

    Shao Huang; Weiqiang Wang; Shengfeng He; Lau, Rynson W H

    2018-02-01

    We present an approach to localize generic actions in egocentric videos, called temporal action proposals (TAPs), for accelerating the action recognition step. An egocentric TAP refers to a sequence of frames that may contain a generic action performed by the wearer of a head-mounted camera, e.g., taking a knife, spreading jam, pouring milk, or cutting carrots. Inspired by object proposals, this paper aims at generating a small number of TAPs, thereby replacing the popular sliding window strategy, for localizing all action events in the input video. To this end, we first propose to temporally segment the input video into action atoms, which are the smallest units that may contain an action. We then apply a hierarchical clustering algorithm with several egocentric cues to generate TAPs. Finally, we propose two actionness networks to score the likelihood of each TAP containing an action. The top ranked candidates are returned as output TAPs. Experimental results show that the proposed TAP detection framework performs significantly better than relevant approaches for egocentric action detection.

  16. Laser-Directed Ranging System Implementing Single Camera System for Telerobotics Applications

    NASA Technical Reports Server (NTRS)

    Wells, Dennis L. (Inventor); Li, Larry C. (Inventor); Cox, Brian J. (Inventor)

    1995-01-01

    The invention relates generally to systems for determining the range of an object from a reference point and, in one embodiment, to laser-directed ranging systems useful in telerobotics applications. Digital processing techniques are employed which minimize the complexity and cost of the hardware and software for processing range calculations, thereby enhancing the commercial attractiveness of the system for use in relatively low-cost robotic systems. The system includes a video camera for generating images of the target, image digitizing circuitry, and an associated frame grabber circuit. The circuit first captures one of the pairs of stereo video images of the target, and then captures a second video image of the target as it is partly illuminated by the light beam, suitably generated by a laser. The two video images, taken sufficiently close together in time to minimize camera and scene motion, are converted to digital images and then compared. Common pixels are eliminated, leaving only a digital image of the laser-illuminated spot on the target. The centroid of the laser-illuminated spot is then obtained and compared with a predetermined reference point, predetermined by design or calibration, which represents the coordinate at the focal plane of the laser illumination at infinite range. Preferably, the laser and camera are mounted on a servo-driven platform which can be oriented to direct the camera and the laser toward the target. In one embodiment the platform is positioned in response to movement of the operator's head. Position and orientation sensors are used to monitor head movement. The disparity between the digital image of the laser spot and the reference point is calculated for determining range to the target. Commercial applications for the system relate to active range-determination systems, such as those used with robotic systems in which it is necessary to determine the range to a workpiece or object to be grasped or acted upon by a robot arm end-effector in response to commands generated by an operator. In one embodiment, the system provides a real-time image of the target for the operator as the robot approaches the object. The system is also adapted for use in virtual reality systems in which a remote object or workpiece is to be acted upon by a remote robot arm or other mechanism controlled by an operator.
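
    A minimal sketch of the differencing, centroid and triangulation steps described above; the focal length, baseline and reference coordinate are placeholders for values the patented system would obtain by design or calibration:

      # Sketch of the differencing/centroid/disparity steps. The focal length,
      # baseline and reference coordinate below are placeholder values.
      import cv2

      def spot_range(frame_off, frame_on, f_px=800.0, baseline_m=0.1, x_ref=400.0):
          """Range from the disparity between the laser-spot centroid and the
          reference coordinate (the spot position at infinite range)."""
          diff = cv2.absdiff(frame_on, frame_off)          # keep only the spot
          _, mask = cv2.threshold(diff, 40, 255, cv2.THRESH_BINARY)
          m = cv2.moments(mask)
          if m["m00"] == 0:
              return None                                  # spot not found
          cx = m["m10"] / m["m00"]                         # spot centroid (px)
          disparity = abs(cx - x_ref)
          if disparity < 1e-6:
              return float("inf")                          # at/beyond far limit
          return f_px * baseline_m / disparity             # triangulated range (m)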

  17. Novel method based on video tracking system for simultaneous measurement of kinematics and flow in the wake of a freely swimming fish

    NASA Astrophysics Data System (ADS)

    Wu, Guanhao; Yang, Yan; Zeng, Lijiang

    2006-11-01

    A novel method based on a video tracking system for simultaneous measurement of kinematics and flow in the wake of a freely swimming fish is described. Spontaneous and continuous swimming behaviors of a variegated carp (Cyprinus carpio) are recorded by two cameras mounted on a translation stage which is controlled to track the fish. By processing the images recorded during tracking, the detailed kinematics based on calculated midlines and a quantitative analysis of the flow in the wake during a low-speed turn and burst-and-coast swimming are revealed. We also draw the trajectory of the fish during a continuous swimming bout containing several moderate maneuvers. The results prove that our method is effective for studying maneuvers of fish from both kinematic and hydrodynamic viewpoints.

  18. Membrane Vibration Analysis Above the Nyquist Limit with Fluorescence Videogrammetry

    NASA Technical Reports Server (NTRS)

    Dorrington, Adrian A.; Jones, Thomas W.; Danehy, Paul M.; Pappa, Richard S.

    2004-01-01

    A new method for generating photogrammetric targets by projecting an array of laser beams onto a membrane doped with fluorescent laser dye has recently been developed. In this paper we review this new fluorescence based technique, then proceed to show how it can be used for dynamic measurements, and how a short pulsed (10 ns) laser allows the measurement of vibration modes at frequencies several times the sampling frequency. In addition, we present experimental results showing the determination of fundamental and harmonic vibration modes of a drum-style dye-doped polymer membrane tautly mounted on a 12-inch circular hoop and excited with 30 Hz and 62 Hz sinusoidal acoustic waves. The projected laser dot pattern was generated by passing the beam from a pulsed Nd:YAG laser through a diffractive optical element, and the resulting fluorescence was imaged with three digital video cameras, all of which were synchronized with a pulse and delay generator. Although the video cameras are capable of 240 Hz frame rates, the laser's output was limited to 30 Hz and below. Consequently, aliasing techniques were used to allow the measurement of vibration modes up to 186 Hz with a Nyquist limit of less than 15 Hz.
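
    The aliasing arithmetic is easy to verify: a mode sampled below its Nyquist rate folds back into the baseband. A quick check of the frequencies quoted above:

      # Quick check of the aliasing arithmetic: a frequency sampled below the
      # Nyquist rate folds back to |f - k*fs| in the baseband.
      def aliased(f_true, f_sample):
          f = f_true % f_sample
          return min(f, f_sample - f)

      for f in (30.0, 62.0, 186.0):
          print(f, "Hz appears at", aliased(f, 30.0), "Hz when sampled at 30 Hz")
      # 62 Hz folds to 2 Hz and 186 Hz folds to 6 Hz, both under the 15 Hz limit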

  19. Microscope basics.

    PubMed

    Sluder, Greenfield; Nordberg, Joshua J

    2013-01-01

    This chapter provides information on how microscopes work and discusses some of the microscope issues to be considered in using a video camera on the microscope. There are two types of microscopes in use today for research in cell biology: the older finite tube-length (typically 160 mm mechanical tube length) microscopes and the infinity optics microscopes that are now produced. The objective lens forms a magnified, real image of the specimen at a specific distance from the objective known as the intermediate image plane. All objectives are designed to be used with the specimen at a defined distance from the front lens element of the objective (the working distance) so that the image formed is located at a specific location in the microscope. Infinity optics microscopes differ from the finite tube-length microscopes in that the objectives are designed to project the image of the specimen to infinity and do not, on their own, form a real image of the specimen. Three types of objectives are in common use today: plan achromats, plan apochromats, and plan fluorite lenses. The concept of mounting video cameras on the microscope is also presented in the chapter. Copyright © 2003 Elsevier Inc. All rights reserved.

  20. Video auto stitching in multicamera surveillance system

    NASA Astrophysics Data System (ADS)

    He, Bin; Zhao, Gang; Liu, Qifang; Li, Yangyang

    2012-01-01

    This paper concerns the problem of stitching video automatically in a multi-camera surveillance system. Previous approaches have used multiple calibrated cameras for video mosaics in large-scale monitoring applications. In this work, we formulate video stitching as a multi-image registration and blending problem, and not all cameras need to be calibrated except a few selected master cameras. SURF is used to find matched pairs of image key points from different cameras, and then the camera pose is estimated and refined. A homography matrix is employed to calculate overlapping pixels, and finally a boundary resampling algorithm is implemented to blend the images. Simulation results demonstrate the efficiency of our method.
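
    A minimal two-view sketch of this pipeline is given below; ORB is used as a freely available stand-in for SURF, and a crude overwrite replaces the paper's boundary resampling blend:

      # Two-view stitching sketch in the spirit of the paper. ORB stands in
      # for SURF (which sits in OpenCV's non-free contrib module), and the
      # blend is a plain overwrite rather than boundary resampling.
      import cv2
      import numpy as np

      def stitch_pair(img1, img2):
          orb = cv2.ORB_create(2000)
          k1, d1 = orb.detectAndCompute(img1, None)
          k2, d2 = orb.detectAndCompute(img2, None)
          matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
          matches = sorted(matches, key=lambda m: m.distance)[:200]
          src = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
          dst = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
          H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)   # img2 -> img1
          h, w = img1.shape[:2]
          pano = cv2.warpPerspective(img2, H, (w * 2, h))
          pano[0:h, 0:w] = img1                                  # crude blend
          return pano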

  2. Automatic inference of geometric camera parameters and inter-camera topology in uncalibrated disjoint surveillance cameras

    NASA Astrophysics Data System (ADS)

    den Hollander, Richard J. M.; Bouma, Henri; Baan, Jan; Eendebak, Pieter T.; van Rest, Jeroen H. C.

    2015-10-01

    Person tracking across non-overlapping cameras and other types of video analytics benefit from spatial calibration information that allows an estimation of the distance between cameras and a relation between pixel coordinates and world coordinates within a camera. In a large environment with many cameras, or for frequent ad-hoc deployments of cameras, the cost of this calibration is high. This creates a barrier for the use of video analytics. Automating the calibration allows for a short configuration time, and the use of video analytics in a wider range of scenarios, including ad-hoc crisis situations and large scale surveillance systems. We show an autocalibration method entirely based on pedestrian detections in surveillance video in multiple non-overlapping cameras. In this paper, we show the two main components of automatic calibration. The first is intra-camera geometry estimation, which leads to an estimate of the tilt angle, focal length and camera height, and is important for the conversion from pixels to meters and vice versa. The second is inter-camera topology inference, which leads to an estimate of the distance between cameras and is important for spatio-temporal analysis of multi-camera tracking. This paper describes each of these methods and provides results on realistic video data.
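
    One ingredient of such intra-camera estimation can be sketched in closed form: under a simplifying flat-ground, zero-tilt pinhole model, matched head/foot detections give the camera height directly from an assumed mean pedestrian height. The method above also estimates tilt and focal length, which this sketch deliberately omits:

      # Flat-ground, zero-tilt special case: for a pedestrian of height H at
      # ground distance d, foot ordinate y_f = f*h/d and head ordinate
      # y_h = f*(h-H)/d, so h = H * y_f / (y_f - y_h), independent of f.
      import numpy as np

      H_PED = 1.7  # assumed average pedestrian height (m)

      def camera_height(y_feet, y_heads):
          """y_feet, y_heads: image ordinates (px, measured down from the
          principal point) of matched foot/head detections."""
          y_f, y_h = np.asarray(y_feet, float), np.asarray(y_heads, float)
          estimates = H_PED * y_f / (y_f - y_h)   # per-detection closed form
          return float(np.median(estimates))      # robust aggregate

      # e.g. camera_height([220, 180, 260], [150, 121, 178]) -> height in metres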

  3. 3D Surface Generation from Aerial Thermal Imagery

    NASA Astrophysics Data System (ADS)

    Khodaei, B.; Samadzadegan, F.; Dadras Javan, F.; Hasani, H.

    2015-12-01

    Aerial thermal imagery has recently been applied to quantitative analysis of several scenes. For mapping purposes based on aerial thermal imagery, a high accuracy photogrammetric process is necessary. However, due to the low geometric resolution and low contrast of thermal imaging sensors, there are some challenges in precise 3D measurement of objects. In this paper the potential of thermal video in 3D surface generation is evaluated. In the pre-processing step, the thermal camera is geometrically calibrated using a calibration grid, based on emissivity differences between the background and the targets. Then, Digital Surface Model (DSM) generation from thermal video imagery is performed in four steps. Initially, frames are extracted from the video, then tie points are generated by the Scale-Invariant Feature Transform (SIFT) algorithm. Bundle adjustment is then applied and the camera position and orientation parameters are determined. Finally, a multi-resolution dense image matching algorithm is used to create a 3D point cloud of the scene. The potential of the proposed method is evaluated based on thermal imagery covering an industrial area. The thermal camera has a 640×480 Uncooled Focal Plane Array (UFPA) sensor, equipped with a 25 mm lens, and was mounted on an Unmanned Aerial Vehicle (UAV). The obtained results show the comparable accuracy of the 3D model generated from thermal images with respect to the DSM generated from visible images; however, the thermal-based DSM is somewhat smoother with a lower level of texture. Comparing the generated DSM with the 9 measured GCPs in the area shows that the Root Mean Square Error (RMSE) value is smaller than 5 decimetres in the X and Y directions and 1.6 meters in the Z direction.

  4. Social Justice through Literacy: Integrating Digital Video Cameras in Reading Summaries and Responses

    ERIC Educational Resources Information Center

    Liu, Rong; Unger, John A.; Scullion, Vicki A.

    2014-01-01

    Drawing data from an action-oriented research project for integrating digital video cameras into the reading process in pre-college courses, this study proposes using digital video cameras in reading summaries and responses to promote critical thinking and to teach social justice concepts. The digital video research project is founded on…

  5. Leveraging traffic and surveillance video cameras for urban traffic.

    DOT National Transportation Integrated Search

    2014-12-01

    The objective of this project was to investigate the use of existing video resources, such as traffic : cameras, police cameras, red light cameras, and security cameras for the long-term, real-time : collection of traffic statistics. An additional ob...

  6. Helmet-mounted uncooled FPA camera for use in firefighting applications

    NASA Astrophysics Data System (ADS)

    Wu, Cheng; Feng, Shengrong; Li, Kai; Pan, Shunchen; Su, Junhong; Jin, Weiqi

    2000-05-01

    Starting from the operational needs of firefighters for thermal imaging, we discuss how a helmet-mounted camera can be applied in the harsh environment of a conflagration, especially at high temperature, and how better matching between the thermal imager and the helmet can be achieved in terms of weight, size, etc. Finally, we present a practical helmet-mounted IR camera based on an uncooled focal plane array detector for use in firefighting.

  7. Remote camera observations of lava dome growth at Mount St. Helens, Washington, October 2004 to February 2006: Chapter 11 in A volcano rekindled: the renewed eruption of Mount St. Helens, 2004-2006

    USGS Publications Warehouse

    Poland, Michael P.; Dzurisin, Daniel; LaHusen, Richard G.; Major, John J.; Lapcewich, Dennis; Endo, Elliot T.; Gooding, Daniel J.; Schilling, Steve P.; Janda, Christine G.; Sherrod, David R.; Scott, William E.; Stauffer, Peter H.

    2008-01-01

    Images from a Web-based camera (Webcam) located 8 km north of Mount St. Helens and a network of remote, telemetered digital cameras were used to observe eruptive activity at the volcano between October 2004 and February 2006. The cameras offered the advantages of low cost, low power, flexibility in deployment, and high spatial and temporal resolution. Images obtained from the cameras provided important insights into several aspects of dome extrusion, including rockfalls, lava extrusion rates, and explosive activity. Images from the remote, telemetered digital cameras were assembled into time-lapse animations of dome extrusion that supported monitoring, research, and outreach efforts. The wide-ranging utility of remote camera imagery should motivate additional work, especially to develop the three-dimensional quantitative capabilities of terrestrial camera networks.
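
    Assembling such telemetered stills into a time-lapse animation is straightforward; a minimal sketch, with the archive path, frame rate and codec as placeholder choices:

      # Sketch of assembling archived still images into a time-lapse movie;
      # the path, frame rate, codec and output name are placeholders.
      import cv2
      import glob

      files = sorted(glob.glob("camera1/*.jpg"))      # hypothetical image archive
      first = cv2.imread(files[0])
      h, w = first.shape[:2]
      writer = cv2.VideoWriter("dome_timelapse.avi",
                               cv2.VideoWriter_fourcc(*"MJPG"), 15.0, (w, h))
      for path in files:
          frame = cv2.imread(path)
          if frame is not None and frame.shape[:2] == (h, w):
              writer.write(frame)                     # skip corrupt/odd frames
      writer.release()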

  8. Close-range photogrammetry with video cameras

    NASA Technical Reports Server (NTRS)

    Burner, A. W.; Snow, W. L.; Goad, W. K.

    1985-01-01

    Examples of photogrammetric measurements made with video cameras uncorrected for electronic and optical lens distortions are presented. The measurement and correction of electronic distortions of video cameras using both bilinear and polynomial interpolation are discussed. Examples showing the relative stability of electronic distortions over long periods of time are presented. Having corrected for electronic distortion, the data are further corrected for lens distortion using the plumb line method. Examples of close-range photogrammetric data taken with video cameras corrected for both electronic and optical lens distortion are presented.

  10. 2011 Tohoku tsunami hydrographs, currents, flow velocities and ship tracks based on video and TLS measurements

    NASA Astrophysics Data System (ADS)

    Fritz, Hermann M.; Phillips, David A.; Okayasu, Akio; Shimozono, Takenori; Liu, Haijiang; Takeda, Seiichi; Mohammed, Fahad; Skanavis, Vassilis; Synolakis, Costas E.; Takahashi, Tomoyuki

    2013-04-01

    The March 11, 2011, magnitude Mw 9.0 earthquake off the Tohoku coast of Japan caused catastrophic damage and loss of life to a tsunami-aware population. The mid-afternoon tsunami arrival combined with survivors equipped with cameras on top of vertical evacuation buildings provided fragmented, spatially and temporally resolved inundation recordings. This report focuses on the surveys at 9 tsunami eyewitness video recording locations in Miyako, Kamaishi, Kesennuma and Yoriisohama along Japan's Sanriku coast and the subsequent video image calibration, processing, tsunami hydrograph and flow velocity analysis. Selected tsunami video recording sites were explored, eyewitnesses interviewed and some ground control points recorded during the initial tsunami reconnaissance in April, 2011. A follow-up survey in June, 2011 focused on terrestrial laser scanning (TLS) at locations with high quality eyewitness videos. We acquired precise topographic data using TLS at the video sites, producing a 3-dimensional "point cloud" dataset. A camera mounted on the Riegl VZ-400 scanner yields photorealistic 3D images. Integrated GPS measurements allow accurate georeferencing. The original video recordings were recovered from eyewitnesses and the Japanese Coast Guard (JCG). The analysis of the tsunami videos follows an adapted four step procedure originally developed for the analysis of 2004 Indian Ocean tsunami videos at Banda Aceh, Indonesia (Fritz et al., 2006). The first step requires the calibration of the sector of view present in the eyewitness video recording based on ground control points measured in the LiDAR data. In a second step the video image motion induced by the panning of the video camera was determined from subsequent images by particle image velocimetry (PIV) applied to fixed objects. The third step involves the transformation of the raw tsunami video images from image coordinates to world coordinates with a direct linear transformation (DLT) procedure. Finally, the instantaneous tsunami surface current and flooding velocity vector maps are determined by applying the digital PIV analysis method to the rectified tsunami video images with floating debris clusters. Tsunami currents up to 11 m/s were measured in Kesennuma Bay, making navigation impossible (Fritz et al., 2012). Tsunami hydrographs are derived from the videos based on water surface elevations at surface piercing objects identified in the acquired topographic TLS data. Apart from a dominant tsunami crest, the hydrograph at Kamaishi also reveals a subsequent drawdown to minus 10 m, exposing the harbor bottom. In some cases ship moorings resist the main tsunami crest only to be broken by the extreme drawdown, setting vessels adrift for hours. Further, we discuss the complex effects of coastal structures on inundation and outflow hydrographs and flow velocities. Lastly, a perspective on the recovery and reconstruction process is provided based on numerous revisits of identical sites between April 2011 and July 2012.
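
    The DLT step (step 3) amounts to estimating an 11-parameter projection from surveyed control points; a minimal sketch of that calibration, with the point correspondences left as placeholders:

      # Standard direct linear transformation (DLT) calibration: solve the
      # 11 DLT parameters (a 3x4 projection, up to scale) from >= 6 surveyed
      # control points via SVD. Input arrays are placeholders.
      import numpy as np

      def dlt_calibrate(world_pts, image_pts):
          """world_pts: (N,3) TLS control points; image_pts: (N,2) pixels."""
          rows = []
          for (X, Y, Z), (u, v) in zip(world_pts, image_pts):
              rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
              rows.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
          A = np.asarray(rows, float)
          _, _, vt = np.linalg.svd(A)
          return vt[-1].reshape(3, 4)      # null-space vector, up to scale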

  11. Instant Video Revisiting: The Video Camera as a "Tool of the Mind" for Young Children.

    ERIC Educational Resources Information Center

    Forman, George

    1999-01-01

    Once used only to record special events in the classroom, video cameras are now small enough and affordable enough to be used to document everyday events. Video cameras, with foldout screens, allow children to watch their activities immediately after they happen and to discuss them with a teacher. This article coins the term instant video…

  12. PC-based control unit for a head-mounted operating microscope for augmented-reality visualization in surgical navigation

    NASA Astrophysics Data System (ADS)

    Figl, Michael; Birkfellner, Wolfgang; Watzinger, Franz; Wanschitz, Felix; Hummel, Johann; Hanel, Rudolf A.; Ewers, Rolf; Bergmann, Helmar

    2002-05-01

    Two main concepts of Head Mounted Displays (HMDs) for augmented reality (AR) visualization exist: the optical and the video see-through type. Several research groups have pursued both approaches for utilizing HMDs for computer aided surgery. While the hardware requirements for a video see-through HMD to achieve acceptable time delay and frame rate seem to be enormous, the clinical acceptance of such a device is doubtful from a practical point of view. Starting from previous work in displaying additional computer-generated graphics in operating microscopes, we have adapted a miniature head mounted operating microscope for AR by integrating two very small computer displays. To calibrate the projection parameters of this so-called Varioscope AR we have used Tsai's algorithm for camera calibration. Connection to a surgical navigation system was performed by defining an open interface to the control unit of the Varioscope AR. The control unit consists of a standard PC with a dual-head graphics adapter to render and display the desired augmentation of the scene. We connected this control unit to a computer aided surgery (CAS) system via a TCP/IP interface. In this paper we present the control unit for the HMD and its software design. We tested two different optical tracking systems: the Flashpoint (Image Guided Technologies, Boulder, CO), which provided about 10 frames per second, and the Polaris (Northern Digital, Ontario, Canada), which provided at least 30 frames per second, both with a time delay of one frame.
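
    A hypothetical sketch of the kind of open TCP/IP interface described here, with the navigation system pushing tracked poses to the control unit; the message format, host name and port are invented for illustration only:

      # Hypothetical pose-streaming client for an open TCP/IP interface;
      # the JSON message format, host name and port are invented.
      import json
      import socket

      def send_pose(host, pose, port=5005):
          """pose: dict with, e.g., a 4x4 tool-to-world matrix as nested lists."""
          with socket.create_connection((host, port), timeout=1.0) as sock:
              sock.sendall((json.dumps(pose) + "\n").encode("utf-8"))

      # e.g. send_pose("varioscope-ctrl", {"tool": "probe", "matrix": M})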

  13. Firefly: A HOT camera core for thermal imagers with enhanced functionality

    NASA Astrophysics Data System (ADS)

    Pillans, Luke; Harmer, Jack; Edwards, Tim

    2015-06-01

    Raising the operating temperature of mercury cadmium telluride infrared detectors from 80 K to above 160 K creates new applications for high performance infrared imagers by vastly reducing the size, weight and power consumption of the integrated cryogenic cooler. Realizing the benefits of Higher Operating Temperature (HOT) requires a new kind of infrared camera core with the flexibility to address emerging applications in the handheld, weapon-mounted and UAV markets. This paper discusses the Firefly core developed to address these needs by Selex ES in Southampton, UK. Firefly represents a fundamental redesign of the infrared signal chain, reducing power consumption and providing compatibility with low cost, low power Commercial Off-The-Shelf (COTS) computing technology. This paper describes the key innovations in this signal chain: a ROIC purpose-built to minimize power consumption in the proximity electronics, GPU-based image processing of infrared video, and a software-customisable infrared core which can communicate wirelessly with other Battlespace systems.

  14. Instrumentation for Aim Point Determination in the Close-in Battle

    DTIC Science & Technology

    2007-12-01

    Rugged camcorder with remote “lipstick” camera (http://www.samsung.com/Products/Camcorder/DigitalMemory/files/scx210wl.pdf). … One way of making a measurement is to mount a small “lipstick” camera to the rifle with a mount similar to the laser-tag transmitter mount … (…technology.com/contractors/surveillance/viotac-inc/viotac-inc1.html). Figure 4. Rugged camcorder with remote “lipstick” camera (http://www.samsung.com

  15. Nonchronological video synopsis and indexing.

    PubMed

    Pritch, Yael; Rav-Acha, Alex; Peleg, Shmuel

    2008-11-01

    The amount of captured video is growing with the increased numbers of video cameras, especially the increase of millions of surveillance cameras that operate 24 hours a day. Since video browsing and retrieval is time consuming, most captured video is never watched or examined. Video synopsis is an effective tool for browsing and indexing such video. It provides a short video representation, while preserving the essential activities of the original video. The activity in the video is condensed into a shorter period by simultaneously showing multiple activities, even when they originally occurred at different times. The synopsis video is also an index into the original video by pointing to the original time of each activity. Video synopsis can be applied to create a synopsis of endless video streams, as generated by webcams and by surveillance cameras. It can address queries like "Show in one minute the synopsis of this camera broadcast during the past day." This process includes two major phases: (i) an online conversion of the endless video stream into a database of objects and activities (rather than frames), and (ii) a response phase, generating the video synopsis as a response to the user's query.
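
    A minimal sketch of the first phase, reducing a stream to timed activity rather than frames, using background subtraction; the motion threshold is an arbitrary illustrative value:

      # Sketch of converting a stream into timed activity spans with
      # background subtraction; the area threshold is illustrative.
      import cv2

      def activity_intervals(video_path, min_area=500):
          """Yield (start_frame, end_frame) spans in which motion is present."""
          cap = cv2.VideoCapture(video_path)
          bg = cv2.createBackgroundSubtractorMOG2()
          start, idx = None, 0
          while True:
              ok, frame = cap.read()
              if not ok:
                  break
              mask = bg.apply(frame)
              active = cv2.countNonZero(mask) > min_area
              if active and start is None:
                  start = idx                   # activity begins
              elif not active and start is not None:
                  yield (start, idx)            # activity ends
                  start = None
              idx += 1
          cap.release()
          if start is not None:
              yield (start, idx)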

  16. The Mars Science Laboratory (MSL) Mast cameras and Descent imager: Investigation and instrument descriptions

    NASA Astrophysics Data System (ADS)

    Malin, Michal C.; Ravine, Michael A.; Caplinger, Michael A.; Tony Ghaemi, F.; Schaffner, Jacob A.; Maki, Justin N.; Bell, James F.; Cameron, James F.; Dietrich, William E.; Edgett, Kenneth S.; Edwards, Laurence J.; Garvin, James B.; Hallet, Bernard; Herkenhoff, Kenneth E.; Heydari, Ezat; Kah, Linda C.; Lemmon, Mark T.; Minitti, Michelle E.; Olson, Timothy S.; Parker, Timothy J.; Rowland, Scott K.; Schieber, Juergen; Sletten, Ron; Sullivan, Robert J.; Sumner, Dawn Y.; Aileen Yingst, R.; Duston, Brian M.; McNair, Sean; Jensen, Elsa H.

    2017-08-01

    The Mars Science Laboratory Mast camera and Descent Imager investigations were designed, built, and operated by Malin Space Science Systems of San Diego, CA. They share common electronics and focal plane designs but have different optics. There are two Mastcams of dissimilar focal length. The Mastcam-34 has an f/8, 34 mm focal length lens, and the M-100 an f/10, 100 mm focal length lens. The M-34 field of view is about 20° × 15° with an instantaneous field of view (IFOV) of 218 μrad; the M-100 field of view (FOV) is 6.8° × 5.1° with an IFOV of 74 μrad. The M-34 can focus from 0.5 m to infinity, and the M-100 from 1.6 m to infinity. All three cameras can acquire color images through a Bayer color filter array, and the Mastcams can also acquire images through seven science filters. Images are ≤1600 pixels wide by 1200 pixels tall. The Mastcams, mounted on the 2 m tall Remote Sensing Mast, have a 360° azimuth and 180° elevation field of regard. Mars Descent Imager is fixed-mounted to the bottom left front side of the rover at 66 cm above the surface. Its fixed focus lens is in focus from 2 m to infinity, but out of focus at 66 cm. The f/3 lens has a FOV of 70° by 52° across and along the direction of motion, with an IFOV of 0.76 mrad. All cameras can acquire video at 4 frames/second for full frames or 720p HD at 6 fps. Images can be processed using lossy Joint Photographic Experts Group and predictive lossless compression.
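
    The quoted optics are mutually consistent: multiplying each IFOV by the focal length gives the same ~7.4 μm pixel pitch for both Mastcams, and IFOV times pixel count reproduces the quoted fields of view. A quick check:

      # Consistency check of the stated optics: pixel pitch ~= IFOV * focal
      # length, and FOV ~= IFOV * pixel count. Values are from the abstract.
      import math

      for name, f_mm, ifov_urad, npix in [("M-34", 34, 218, 1600),
                                          ("M-100", 100, 74, 1600)]:
          pitch_um = f_mm * 1e3 * ifov_urad * 1e-6     # micrometres
          fov_deg = math.degrees(ifov_urad * 1e-6 * npix)
          print(f"{name}: pixel pitch {pitch_um:.1f} um, "
                f"width FOV {fov_deg:.1f} deg")
      # Both lines give ~7.4 um pitch; widths come out ~20.0 and ~6.8 degrees.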

  17. Robust Video Stabilization Using Particle Keypoint Update and l1-Optimized Camera Path

    PubMed Central

    Jeon, Semi; Yoon, Inhye; Jang, Jinbeum; Yang, Seungji; Kim, Jisung; Paik, Joonki

    2017-01-01

    Acquisition of stabilized video is an important issue for various type of digital cameras. This paper presents an adaptive camera path estimation method using robust feature detection to remove shaky artifacts in a video. The proposed algorithm consists of three steps: (i) robust feature detection using particle keypoints between adjacent frames; (ii) camera path estimation and smoothing; and (iii) rendering to reconstruct a stabilized video. As a result, the proposed algorithm can estimate the optimal homography by redefining important feature points in the flat region using particle keypoints. In addition, stabilized frames with less holes can be generated from the optimal, adaptive camera path that minimizes a temporal total variation (TV). The proposed video stabilization method is suitable for enhancing the visual quality for various portable cameras and can be applied to robot vision, driving assistant systems, and visual surveillance systems. PMID:28208622
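
    The essence of the temporal-total-variation path can be written as a small convex program: stay close to the raw trajectory while penalizing the L1 norm of its frame-to-frame differences. The sketch below (using cvxpy, with an arbitrary weight) is a simplified scalar version of the paper's camera-path model:

      # L1 trend filtering of a camera path: quadratic fidelity to the raw
      # trajectory plus an L1 penalty on first differences (temporal total
      # variation). The weight is an arbitrary illustrative value.
      import cvxpy as cp
      import numpy as np

      def smooth_path(raw_path, lam=5.0):
          """raw_path: 1-D array of one camera-path parameter per frame
          (e.g. x-translation). Returns the smoothed path."""
          p = cp.Variable(len(raw_path))
          cost = cp.sum_squares(p - raw_path) + lam * cp.norm1(cp.diff(p))
          cp.Problem(cp.Minimize(cost)).solve()
          return np.asarray(p.value)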

  18. Feasibility Study of Utilization of Action Camera, GoPro Hero 4, Google Glass, and Panasonic HX-A100 in Spine Surgery.

    PubMed

    Lee, Chang Kyu; Kim, Youngjun; Lee, Nam; Kim, Byeongwoo; Kim, Doyoung; Yi, Seong

    2017-02-15

    Study of the feasibility of commercially available action cameras in recording video of spine surgery. Recent innovation in wearable action cameras with high-definition video recording enables surgeons to use a camera in the operation with ease and without high costs. The purpose of this study is to compare the feasibility, safety, and efficacy of commercially available action cameras in recording video of spine surgery. There are early reports of medical professionals using Google Glass throughout the hospital, the Panasonic HX-A100 action camera, and GoPro; this study is the first report for spine surgery. Three commercially available cameras were tested: GoPro Hero 4 Silver, Google Glass, and the Panasonic HX-A100 action camera. A typical spine surgery was selected for video recording: posterior lumbar laminectomy and fusion. The three cameras were used by one surgeon and video was recorded throughout the operation. The comparison was made from the perspectives of human factors, specifications, and video quality. The most convenient and lightweight device for wearing and holding throughout the long operation time was Google Glass. Regarding image quality, all devices except Google Glass supported an HD format, GoPro has a unique 2.7K or 4K resolution, and the quality of video resolution was best in GoPro. Regarding field of view (FOV), GoPro can adjust the point of interest and FOV according to the surgery, and its narrow FOV option was the best for recording video clips to share. Google Glass has potential through the use of application programs. Connectivity such as Wi-Fi and Bluetooth enables video streaming for an audience, but only Google Glass has a two-way communication feature in the device. Action cameras have the potential to improve patient safety, operator comfort, and procedure efficiency in the field of spinal surgery, as well as the broadcasting of surgery, with further development of the devices and applied programs in the future.

  19. Real-time registration of video with ultrasound using stereo disparity

    NASA Astrophysics Data System (ADS)

    Wang, Jihang; Horvath, Samantha; Stetten, George; Siegel, Mel; Galeotti, John

    2012-02-01

    Medical ultrasound typically deals with the interior of the patient, with the exterior left to the original medical imaging modality, direct human vision. For the human operator scanning the patient, the view of the external anatomy is essential for correctly locating the ultrasound probe on the body and making sense of the resulting ultrasound images in their proper anatomical context. The operator, after all, is not expected to perform the scan with his eyes shut. Over the past decade, our laboratory has developed a method of fusing these two information streams in the mind of the operator, the Sonic Flashlight, which uses a half silvered mirror and miniature display mounted on an ultrasound probe to produce a virtual image within the patient at its proper location. We are now interested in developing a similar data fusion approach within the ultrasound machine itself, by, in effect, giving vision to the transducer. Our embodiment of this concept consists of an ultrasound probe with two small video cameras mounted on it, with software capable of locating the surface of an ultrasound phantom using stereo disparity between the two video images. We report its first successful operation, demonstrating a 3D rendering of the phantom's surface with the ultrasound data superimposed at its correct relative location. Eventually, automated analysis of these registered data sets may permit the scanner and its associated computational apparatus to interpret the ultrasound data within its anatomical context, much as the human operator does today.
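
    A minimal sketch of the stereo-disparity step using OpenCV's semi-global matcher; the focal length and baseline are placeholders for whatever the probe-mounted camera pair provides:

      # Stereo disparity to depth on a rectified camera pair: disparity from
      # semi-global block matching, then depth = f*B/d. The focal length and
      # baseline are placeholder values.
      import cv2
      import numpy as np

      def surface_depth(left_gray, right_gray, f_px=700.0, baseline_m=0.04):
          sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64,
                                       blockSize=7)
          disp = sgbm.compute(left_gray, right_gray).astype(np.float32) / 16.0
          valid = disp > 0
          depth = np.full(disp.shape, np.nan, np.float32)
          depth[valid] = f_px * baseline_m / disp[valid]
          return depth       # metres; NaN where no valid disparity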

  20. Dynamic strain distribution of FRP plate under blast loading

    NASA Astrophysics Data System (ADS)

    Saburi, T.; Yoshida, M.; Kubota, S.

    2017-02-01

    The dynamic strain distribution of a fiber-reinforced plastic (FRP) plate under blast loading was investigated using a Digital Image Correlation (DIC) image analysis method. The FRP test plates were mounted parallel to each other on a steel frame. 50 g of composition C4 explosive was used as the blast loading source and set in the center of the FRP plates. The dynamic behavior of the FRP plate under blast loading was observed by two high-speed video cameras. The set of two high-speed video image sequences was used to analyze the three-dimensional strain distribution of the FRP by means of the DIC method. A point strain profile extracted from the analyzed strain distribution data was compared with a strain profile observed directly using a strain gauge, and it was shown that the strain profile obtained under blast loading by the DIC method is quantitatively accurate.
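
    The DIC idea, per-window displacements between reference and deformed images with strain as the spatial gradient of the displacement field, can be sketched as follows; the window size and grid step are illustrative:

      # Sketch of DIC: subpixel per-window displacements via phase
      # correlation, then strain as the spatial gradient of displacement.
      # Window size and grid step are illustrative values.
      import cv2
      import numpy as np

      def dic_strain_xx(ref, deformed, win=64, step=32):
          """ref, deformed: float32 grayscale images of the speckled plate.
          Returns the horizontal displacement grid and its x-gradient (exx)."""
          h, w = ref.shape
          ys = list(range(0, h - win, step))
          xs = list(range(0, w - win, step))
          ux = np.zeros((len(ys), len(xs)), np.float32)
          for i, y in enumerate(ys):
              for j, x in enumerate(xs):
                  a = np.ascontiguousarray(ref[y:y + win, x:x + win])
                  b = np.ascontiguousarray(deformed[y:y + win, x:x + win])
                  (dx, _dy), _ = cv2.phaseCorrelate(a, b)   # subpixel shift
                  ux[i, j] = dx
          exx = np.gradient(ux, step, axis=1)               # du/dx strain
          return ux, exx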

  1. Development and human factors analysis of neuronavigation vs. augmented reality.

    PubMed

    Pandya, Abhilash; Siadat, Mohammad-Reza; Auner, Greg; Kalash, Mohammad; Ellis, R Darin

    2004-01-01

    This paper is focused on a human factors analysis comparing a standard neuronavigation system with an augmented reality system. We use a passive articulated arm (Microscribe, Immersion Technology) to track a calibrated end-effector-mounted video camera. In real time, we superimpose the live video view with the synchronized graphical view of CT-derived segmented object(s) of interest within a phantom skull. Using the same robotic arm, we have developed a neuronavigation system able to show the end-effector of the arm on orthogonal CT scans. Both the AR and the neuronavigation systems have been shown to be accurate to within 3 mm. A human factors study was conducted in which subjects were asked to draw craniotomies and answer questions to gauge their understanding of the phantom objects. The human factors study included 21 subjects and indicated that the subjects performed faster, with more accuracy and fewer errors, using the augmented reality interface.

  2. Multi-Target Camera Tracking, Hand-off and Display LDRD 158819 Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, Robert J.

    2014-10-01

    Modern security control rooms gather video and sensor feeds from tens to hundreds of cameras. Advanced camera analytics can detect motion from individual video streams and convert unexpected motion into alarms, but the interpretation of these alarms depends heavily upon human operators. Unfortunately, these operators can be overwhelmed when a large number of events happen simultaneously, or lulled into complacency due to frequent false alarms. This LDRD project has focused on improving video surveillance-based security systems by changing the fundamental focus from the cameras to the targets being tracked. If properly integrated, more cameras shouldn't lead to more alarms, more monitors, more operators, and increased response latency but instead should lead to better information and more rapid response times. For the course of the LDRD we have been developing algorithms that take live video imagery from multiple video cameras, identify individual moving targets from the background imagery, and then display the results in a single 3D interactive video. In this document we summarize the work in developing this multi-camera, multi-target system, including lessons learned, tools developed, technologies explored, and a description of current capability.

  4. Advanced High-Definition Video Cameras

    NASA Technical Reports Server (NTRS)

    Glenn, William

    2007-01-01

    A product line of high-definition color video cameras, now under development, offers a superior combination of desirable characteristics, including high frame rates, high resolutions, low power consumption, and compactness. Several of the cameras feature a 3,840 × 2,160-pixel format with progressive scanning at 30 frames per second. The power consumption of one of these cameras is about 25 W. The size of the camera, excluding the lens assembly, is 2 by 5 by 7 in. (about 5.1 by 12.7 by 17.8 cm). The aforementioned desirable characteristics are attained at relatively low cost, largely by utilizing digital processing in advanced field-programmable gate arrays (FPGAs) to perform all of the many functions (for example, color balance and contrast adjustments) of a professional color video camera. The processing is programmed in VHDL so that application-specific integrated circuits (ASICs) can be fabricated directly from the program. ["VHDL" signifies VHSIC Hardware Description Language, a computing language used by the United States Department of Defense for describing, designing, and simulating very-high-speed integrated circuits (VHSICs).] The image-sensor and FPGA clock frequencies in these cameras have generally been much higher than those used in video cameras designed and manufactured elsewhere. Frequently, the outputs of these cameras are converted to other video-camera formats by use of pre- and post-filters.

  5. Studying Upper-Limb Amputee Prosthesis Use to Inform Device Design

    DTIC Science & Technology

    2015-10-01

    … the study. This equipment has included a modified GoPro head-mounted camera and a Vicon 13-camera optical motion capture system, which was not part … also completed for relevant members of the study team. … The head-mounted camera setup has been established (a modified GoPro Hero 3 with external

  6. A comparison of Google Glass and traditional video vantage points for bedside procedural skill assessment.

    PubMed

    Evans, Heather L; O'Shea, Dylan J; Morris, Amy E; Keys, Kari A; Wright, Andrew S; Schaad, Douglas C; Ilgen, Jonathan S

    2016-02-01

    This pilot study assessed the feasibility of using first person (1P) video recording with Google Glass (GG) to assess procedural skills, as compared with traditional third person (3P) video. We hypothesized that raters reviewing 1P videos would visualize more procedural steps with greater inter-rater reliability than 3P rating vantages. Seven subjects performed simulated internal jugular catheter insertions. Procedures were recorded by both Google Glass and an observer's head-mounted camera. Videos were assessed by 3 expert raters using a task-specific checklist (CL) and both an additive- and summative-global rating scale (GRS). Mean scores were compared by t-tests. Inter-rater reliabilities were calculated using intraclass correlation coefficients. The 1P vantage was associated with a significantly higher mean CL score than the 3P vantage (7.9 vs 6.9, P = .02). Mean GRS scores were not significantly different. Mean inter-rater reliabilities for the CL, additive-GRS, and summative-GRS were similar between vantages. 1P vantage recordings may improve visualization of tasks for behaviorally anchored instruments (eg, CLs), while maintaining similar global ratings and inter-rater reliability when compared with conventional 3P vantage recordings. Copyright © 2016 Elsevier Inc. All rights reserved.

  7. Texture-adaptive hyperspectral video acquisition system with a spatial light modulator

    NASA Astrophysics Data System (ADS)

    Fang, Xiaojing; Feng, Jiao; Wang, Yongjin

    2014-10-01

    We present a new hybrid camera system based on a spatial light modulator (SLM) to capture texture-adaptive high-resolution hyperspectral video. The hybrid camera system records a hyperspectral video with low spatial resolution using a gray camera and a high-spatial-resolution video using an RGB camera. The hyperspectral video is subsampled by the SLM. The subsampled points can be adaptively selected according to the texture characteristics of the scene by combining digital image analysis and computational processing. In this paper, we propose an adaptive sampling method utilizing texture segmentation and wavelet transform (WT). We also demonstrate the effectiveness of the sampled pattern on the SLM with the proposed method.
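
    One simple way to realize texture-adaptive subsampling of the kind described (not necessarily the authors' exact scheme) is to weight each image block by its wavelet detail energy, so textured regions receive more SLM sample points:

        import numpy as np
        import pywt

        def adaptive_sample_density(image, block=32):
            """Per-block sampling weights proportional to local wavelet
            detail energy (high-texture blocks get denser sampling)."""
            _, (cH, cV, cD) = pywt.dwt2(image, 'haar')     # one-level 2-D Haar DWT
            detail = cH**2 + cV**2 + cD**2                 # detail energy, half resolution
            b = block // 2                                 # block size in the subband grid
            h, w = detail.shape
            energy = detail[:h - h % b, :w - w % b].reshape(h // b, b, w // b, b).sum(axis=(1, 3))
            return energy / energy.sum()                   # normalized sampling weights

        weights = adaptive_sample_density(np.random.rand(512, 512))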

  8. Reduction in Fall Rate in Dementia Managed Care Through Video Incident Review: Pilot Study

    PubMed Central

    Netscher, George; Agrawal, Pulkit; Tabb Noyce, Lynn; Bayen, Alexandre

    2017-01-01

    Background Falls of individuals with dementia are frequent, dangerous, and costly. Early detection and access to the history of a fall is crucial for efficient care and secondary prevention in cognitively impaired individuals. However, most falls remain unwitnessed events. Furthermore, understanding why and how a fall occurred is a challenge. Video capture and secure transmission of real-world falls thus stands as a promising assistive tool. Objective The objective of this study was to analyze how continuous video monitoring and review of falls of individuals with dementia can support better quality of care. Methods A pilot observational study (July-September 2016) was carried out in a Californian memory care facility. Falls were video-captured (24×7), thanks to 43 wall-mounted cameras (deployed in all common areas and in 10 out of 40 private bedrooms of consenting residents and families). Video review was provided to facility staff, thanks to a customized mobile device app. The outcome measures were the count of residents’ falls happening in the video-covered areas, the acceptability of video recording, the analysis of video review, and video replay possibilities for care practice. Results Over 3 months, 16 falls were video-captured. A drop in fall rate was observed in the last month of the study. Acceptability was good. Video review enabled screening for the severity of falls and fall-related injuries. Video replay enabled identifying cognitive-behavioral deficiencies and environmental circumstances contributing to the fall. This allowed for secondary prevention in high-risk multi-faller individuals and for updated facility care policies regarding a safer living environment for all residents. Conclusions Video monitoring offers high potential to support conventional care in memory care facilities. PMID:29042342

  9. Mars Exploration Rover engineering cameras

    USGS Publications Warehouse

    Maki, J.N.; Bell, J.F.; Herkenhoff, K. E.; Squyres, S. W.; Kiely, A.; Klimesh, M.; Schwochert, M.; Litwin, T.; Willson, R.; Johnson, Aaron H.; Maimone, M.; Baumgartner, E.; Collins, A.; Wadsworth, M.; Elliot, S.T.; Dingizian, A.; Brown, D.; Hagerott, E.C.; Scherr, L.; Deen, R.; Alexander, D.; Lorre, J.

    2003-01-01

    NASA's Mars Exploration Rover (MER) Mission will place a total of 20 cameras (10 per rover) onto the surface of Mars in early 2004. Fourteen of the 20 cameras are designated as engineering cameras and will support the operation of the vehicles on the Martian surface. Images returned from the engineering cameras will also be of significant importance to the scientific community for investigative studies of rock and soil morphology. The Navigation cameras (Navcams, two per rover) are a mast-mounted stereo pair each with a 45° square field of view (FOV) and an angular resolution of 0.82 milliradians per pixel (mrad/pixel). The Hazard Avoidance cameras (Hazcams, four per rover) are a body-mounted, front- and rear-facing set of stereo pairs, each with a 124° square FOV and an angular resolution of 2.1 mrad/pixel. The Descent camera (one per rover), mounted to the lander, has a 45° square FOV and will return images with spatial resolutions of ~4 m/pixel. All of the engineering cameras utilize broadband visible filters and 1024 x 1024 pixel detectors. Copyright 2003 by the American Geophysical Union.
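
    The quoted angular resolutions are roughly the field of view divided by the detector width; the small difference from the stated Navcam value reflects lens distortion and the exact optical design:

        import math

        def ifov_mrad(fov_deg, pixels):
            """Approximate instantaneous field of view in mrad/pixel."""
            return math.radians(fov_deg) / pixels * 1000.0

        print(ifov_mrad(45, 1024))    # Navcam: ~0.77 mrad/pixel (stated: 0.82)
        print(ifov_mrad(124, 1024))   # Hazcam: ~2.1 mrad/pixel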

  10. MuSCoWERT: multi-scale consistence of weighted edge Radon transform for horizon detection in maritime images.

    PubMed

    Prasad, Dilip K; Rajan, Deepu; Rachmawati, Lily; Rajabally, Eshan; Quek, Chai

    2016-12-01

    This paper addresses the problem of horizon detection, a fundamental process in numerous object detection algorithms, in a maritime environment. The maritime environment is characterized by the absence of fixed features, the presence of numerous linear features in dynamically changing objects and background, and constantly varying illumination, rendering the typically simple problem of detecting the horizon a challenging one. We present a novel method called multi-scale consistence of weighted edge Radon transform, abbreviated as MuSCoWERT. It detects the long linear features consistent over multiple scales using multi-scale median filtering of the image, followed by a Radon transform on a weighted edge map and computing the histogram of the detected linear features. We show that MuSCoWERT has excellent performance, better than seven other contemporary methods, for 84 challenging maritime videos, containing over 33,000 frames, and captured using visible range and near-infrared range sensors mounted onboard, onshore, or on floating buoys. It has a median error of about 2 pixels (less than 0.2%) from the center of the actual horizon and a median angular error of less than 0.4 deg. We are also sharing a new challenging horizon detection dataset of 65 videos from visible and infrared cameras, covering onshore and onboard ship camera placements.
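
    The heart of such a pipeline, a Radon transform over an edge map whose peak picks out the dominant line, can be sketched with scikit-image; this omits MuSCoWERT's multi-scale median filtering, edge weighting, and cross-scale histogram consistency steps:

        import numpy as np
        from skimage.filters import sobel
        from skimage.transform import radon

        def dominant_line(gray):
            """Strongest straight line (candidate horizon) in a grayscale
            image, via a Radon transform of its edge magnitude."""
            edges = sobel(gray)                                   # edge-magnitude map
            angles = np.linspace(0.0, 180.0, 181)
            sinogram = radon(edges, theta=angles, circle=False)   # line-integral transform
            offset_idx, angle_idx = np.unravel_index(np.argmax(sinogram), sinogram.shape)
            return angles[angle_idx], offset_idx                  # line angle and radial bin

        angle, offset = dominant_line(np.random.rand(240, 320))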

  11. Hesitation and error: Does product placement in an emergency department influence hand hygiene performance?

    PubMed

    Stackelroth, Jenny; Sinnott, Michael; Shaban, Ramon Z

    2015-09-01

    Existing research has consistently demonstrated poor compliance by health care workers with hand hygiene standards. This study examined the extent to which incorrect hand hygiene occurred as a result of the inability to easily distinguish between different hand hygiene solutions placed at washbasins. A direct observational method was used, with ceiling-mounted, motion-activated video camera surveillance in a tertiary referral emergency department in Australia. Data from a 24-hour period on day 10 of the recordings were collected into the Hand Hygiene-Technique Observation Tool based on Feldman's criteria as modified by Larson and Lusk. A total of 459 episodes of hand hygiene were recorded by 6 video cameras in the 24-hour period. The observed overall rate of error in this study was 6.2% (27 episodes). In addition, an overall hesitation rate of 5.8% (26 episodes) was observed. There was no statistically significant difference in error rates between the 2 hand washbasin configurations. The amelioration of causes of error and hesitation by standardization of the appearance and relative positioning of hand hygiene solutions at washbasins may translate into improved hand hygiene behaviors. Placement of moisturizer at the washbasin may not be essential. Crown Copyright © 2015. Published by Elsevier Inc. All rights reserved.

  12. An innovative experimental setup for Large Scale Particle Image Velocimetry measurements in riverine environments

    NASA Astrophysics Data System (ADS)

    Tauro, Flavia; Olivieri, Giorgio; Porfiri, Maurizio; Grimaldi, Salvatore

    2014-05-01

    Large Scale Particle Image Velocimetry (LSPIV) is a powerful methodology to nonintrusively monitor surface flows. Its use has been beneficial to the development of rating curves in riverine environments and to the mapping of geomorphic features in natural waterways. Typical LSPIV experimental setups rely on the use of mast-mounted cameras for the acquisition of natural stream reaches. Such cameras are installed on stream banks and are angled with respect to the water surface to capture large scale fields of view. Despite its promise and the simplicity of the setup, the practical implementation of LSPIV is affected by several challenges, including the acquisition of ground reference points for image calibration and time-consuming and highly user-assisted procedures to orthorectify images. In this work, we perform LSPIV studies on stream sections in the Aniene and Tiber basins, Italy. To alleviate the limitations of traditional LSPIV implementations, we propose an improved video acquisition setup comprising a telescopic mast, an inexpensive GoPro Hero 3 video camera, and a system of two lasers. The setup allows for maintaining the camera axis perpendicular to the water surface, thus mitigating uncertainties related to image orthorectification. Further, the mast encases a laser system for remote image calibration, thus allowing for nonintrusively calibrating videos without acquiring ground reference points. We conduct measurements on two different water bodies to outline the performance of the methodology under varying flow regimes, illumination conditions, and distributions of surface tracers. Specifically, the Aniene river is characterized by high surface flow velocity, the presence of abundant, homogeneously distributed ripples and water reflections, and a meagre number of buoyant tracers. On the other hand, the Tiber river presents lower surface flows, isolated reflections, and several floating objects. Videos are processed through image-based analyses to correct for lens distortions and analyzed with a commercially available PIV software. Surface flow velocity estimates are compared to supervised measurements performed by visually tracking objects floating on the stream surface and to rating curves developed by the Ufficio Idrografico e Mareografico (UIM) at Regione Lazio, Italy. Experimental findings demonstrate that the presence of tracers is crucial for surface flow velocity estimates. Further, considering surface ripples and patterns may lead to underestimations in LSPIV analyses.
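
    One way such a two-laser system can calibrate images without ground control points: with the camera axis normal to the water surface and the beams parallel a known distance apart, the pixel separation of the two laser dots fixes the image scale. A minimal sketch under those assumptions (the 0.50 m baseline and coordinates are illustrative):

        import numpy as np

        def pixel_scale_m(dot_a_px, dot_b_px, laser_baseline_m=0.50):
            """Meters per pixel from two laser dots a known distance apart.
            Assumes parallel beams and a camera axis normal to the water."""
            dist_px = np.hypot(dot_a_px[0] - dot_b_px[0], dot_a_px[1] - dot_b_px[1])
            return laser_baseline_m / dist_px

        scale = pixel_scale_m((410, 622), (410, 872))    # 0.50 m / 250 px = 0.002 m/px
        velocity_m_s = 37.5 * scale                      # e.g., a 37.5 px/s LSPIV displacement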

  13. High-frame-rate infrared and visible cameras for test range instrumentation

    NASA Astrophysics Data System (ADS)

    Ambrose, Joseph G.; King, B.; Tower, John R.; Hughes, Gary W.; Levine, Peter A.; Villani, Thomas S.; Esposito, Benjamin J.; Davis, Timothy J.; O'Mara, K.; Sjursen, W.; McCaffrey, Nathaniel J.; Pantuso, Francis P.

    1995-09-01

    Field deployable, high frame rate camera systems have been developed to support the test and evaluation activities at the White Sands Missile Range. The infrared cameras employ a 640 by 480 format PtSi focal plane array (FPA). The visible cameras employ a 1024 by 1024 format backside illuminated CCD. The monolithic, MOS architecture of the PtSi FPA supports commandable frame rate, frame size, and integration time. The infrared cameras provide 3 - 5 micron thermal imaging in selectable modes from 30 Hz frame rate, 640 by 480 frame size, 33 ms integration time to 300 Hz frame rate, 133 by 142 frame size, 1 ms integration time. The infrared cameras employ a 500 mm, f/1.7 lens. Video outputs are 12-bit digital video and RS170 analog video with histogram-based contrast enhancement. The 1024 by 1024 format CCD has a 32-port, split-frame transfer architecture. The visible cameras exploit this architecture to provide selectable modes from 30 Hz frame rate, 1024 by 1024 frame size, 32 ms integration time to 300 Hz frame rate, 1024 by 1024 frame size (with 2:1 vertical binning), 0.5 ms integration time. The visible cameras employ a 500 mm, f/4 lens, with integration time controlled by an electro-optical shutter. Video outputs are RS170 analog video (512 by 480 pixels), and 12-bit digital video.
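
    The 12-bit digital outputs quoted here imply substantial raw data rates; a quick back-of-the-envelope check (uncompressed, ignoring readout overhead):

        def raw_rate_gbps(width, height, bits, fps):
            """Uncompressed video data rate in gigabits per second."""
            return width * height * bits * fps / 1e9

        print(raw_rate_gbps(640, 480, 12, 30))      # IR, full frame: ~0.11 Gb/s
        print(raw_rate_gbps(133, 142, 12, 300))     # IR, windowed 300 Hz: ~0.07 Gb/s
        print(raw_rate_gbps(1024, 1024, 12, 300))   # visible 300 Hz: ~3.8 Gb/s before 2:1 binning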

  14. Clinical applications of commercially available video recording and monitoring systems: inexpensive, high-quality video recording and monitoring systems for endoscopy and microsurgery.

    PubMed

    Tsunoda, Koichi; Tsunoda, Atsunobu; Ishimoto, ShinnIchi; Kimura, Satoko

    2006-01-01

    Dedicated charge-coupled device (CCD) camera systems for endoscopes and electronic fiberscopes are in widespread use. However, both are usually stationary in an office or examination room, and a wheeled cart is needed for mobility. The total costs of the CCD camera system and the electronic fiberscopy system are at least US $10,000 and US $30,000, respectively. Recently, the performance of audio and visual instruments has improved dramatically, with a concomitant reduction in their cost. Commercially available CCD video cameras with small monitors have become common. They provide excellent image quality and are much smaller and less expensive than previous models. The authors have developed adaptors for the popular mini-digital video (mini-DV) camera. The camera also provides video and acoustic output signals; therefore, the endoscopic images can be viewed on a large monitor simultaneously. The new system (a mini-DV video camera and an adaptor) costs only US $1,000. Therefore, the system is both cost-effective and useful for the outpatient clinic or casualty setting, or on house calls for the purpose of patient education. In the future, the authors plan to introduce the clinical application of a high-vision camera and an infrared camera as medical instruments for clinical and research situations.

  15. Field Test Data for Detecting Vibrations of a Building Using High-Speed Video Cameras

    DTIC Science & Technology

    2017-10-01

    ARL-TR-8185 ● OCT 2017 ● US Army Research Laboratory. Field Test Data for Detecting Vibrations of a Building Using High-Speed Video Cameras, by Caitlin P Conn and Geoffrey H Goldman, Sensors and... Reporting period: June 2016 – October 2017.

  16. Design of a MATLAB(registered trademark) Image Comparison and Analysis Tool for Augmentation of the Results of the Ann Arbor Distortion Test

    DTIC Science & Technology

    2016-06-25

    The equipment used in this procedure includes: Ann Arbor distortion tester with 50-line grating reticule, IQeye 720 digital video camera with 12...and import them into MATLAB. In order to digitally capture images of the distortion in an optical sample, an IQeye 720 video camera with a 12... video camera and Ann Arbor distortion tester. Figure 8. Computer interface for capturing images seen by IQeye 720 camera. Once an image was

  17. Advantages of computer cameras over video cameras/frame grabbers for high-speed vision applications

    NASA Astrophysics Data System (ADS)

    Olson, Gaylord G.; Walker, Jo N.

    1997-09-01

    Cameras designed to work specifically with computers can have certain advantages in comparison to the use of cameras loosely defined as 'video' cameras. In recent years the camera type distinctions have become somewhat blurred, with a large presence of 'digital cameras' aimed more at the home market. This latter category is not considered here. The term 'computer camera' herein is intended to mean one which has low-level computer (and software) control of the CCD clocking. These can often be used to satisfy some of the more demanding machine vision tasks, and in some cases with a higher rate of measurements than video cameras. Several of these specific applications are described here, including some which use recently designed CCDs which offer good combinations of parameters such as noise, speed, and resolution. Among the considerations for the choice of camera type in any given application would be such effects as 'pixel jitter' and 'anti-aliasing.' Some of these effects may only be relevant if there is a mismatch between the number of pixels per line in the camera CCD and the number of analog-to-digital (A/D) sampling points along a video scan line. For the computer camera case these numbers are guaranteed to match, which alleviates some measurement inaccuracies and leads to higher effective resolution.

  18. Video sensor with range measurement capability

    NASA Technical Reports Server (NTRS)

    Howard, Richard T. (Inventor); Briscoe, Jeri M. (Inventor); Corder, Eric L. (Inventor); Broderick, David J. (Inventor)

    2008-01-01

    A video sensor device is provided which incorporates a rangefinder function. The device includes a single video camera and a fixed laser spaced a predetermined distance from the camera for, when activated, producing a laser beam. A diffractive optic element divides the beam so that multiple light spots are produced on a target object. A processor calculates the range to the object based on the known spacing and angles determined from the light spots on the video images produced by the camera.
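
    The range computation here is plain triangulation: the laser travels parallel to the optical axis at a known offset, and a spot's displacement from the image center gives the viewing angle. A hedged single-spot sketch (the device's diffractive optic produces multiple spots, whose estimates can be averaged; all numbers are illustrative):

        import math

        def range_from_spot(spot_px, cx_px, focal_px, baseline_m=0.10):
            """Range to target from a laser spot's image column. Assumes the
            laser beam is parallel to the optical axis at lateral offset
            baseline_m; focal_px is the focal length in pixels."""
            angle = math.atan2(spot_px - cx_px, focal_px)   # viewing angle to the spot
            return baseline_m / math.tan(angle)             # parallel-beam triangulation

        # Spot imaged 40 px from center, 800 px focal length, 10 cm baseline -> 2.0 m
        print(range_from_spot(spot_px=680, cx_px=640, focal_px=800))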

  19. Effect of Body-Worn Cameras on EMS Documentation Accuracy: A Pilot Study.

    PubMed

    Ho, Jeffrey D; Dawes, Donald M; McKay, Evan M; Taliercio, Jeremy J; White, Scott D; Woodbury, Blair J; Sandefur, Mark A; Miner, James R

    2017-01-01

    Current Emergency Medical Services (EMS) documentation practices usually occur from memory after an event is over. While this practice is fairly standard, it is unclear if it can introduce significant error. Modern technology has seen the increased use of recorded video by society to more objectively document notable events. Stationary mounted cameras, cell-phone cameras, and law enforcement officer Body-Worn Cameras (BWCs) are increasingly used by society for this purpose. Video used in this way can often clarify or contradict recall from memory. BWCs are currently not widely used by EMS. The hypothesis is that current EMS documentation practices are inaccurate and that BWCs will have a positive effect on documentation accuracy. This prospective, observational study used a convenience sample of paramedics in a simulation lab. The paramedics wore a BWC and responded to a simulated call of "One Down" (unresponsive from heroin abuse) involving Role Players (RPs). The paramedics received standardized cues from the RPs during the simulation to keep it on track. The simulation contained many factors of concern (e.g., weapons and drugs in plain view, unattended minors, etc.) and intentional stressors (e.g., distraught family member, uncooperative patient, etc.). Upon completion of the scenario, paramedic documentation occurred from memory on an electronic template. After initial documentation, paramedics viewed their BWC recording and were allowed to make tabulated changes. Changes were categorized by a priori criteria as minor, moderate, or major. Ten paramedics participated with an average age of 33.3 years (range 22-43), 8 males and 2 females. The average length of paramedic career experience was 7.7 years (range 2 months to 20 years). There were 71 total documentation changes (7 minor, 51 moderate, 13 major) made after video review. Linear regression (ANCOVA) indicated that the number of changes made was inversely correlated with years of experience (coefficient 8.27, 95% CI 4.22-12.3, p = 0.002), but all made some changes. Current EMS documentation practices demonstrate significant inaccuracy regardless of years of experience. Use of BWC technology appears to significantly improve EMS documentation accuracy in this pilot study.

  20. Burbank uses video camera during installation and routing of HRCS Video Cables

    NASA Image and Video Library

    2012-02-01

    ISS030-E-060104 (1 Feb. 2012) --- NASA astronaut Dan Burbank, Expedition 30 commander, uses a video camera in the Destiny laboratory of the International Space Station during installation and routing of video cable for the High Rate Communication System (HRCS). HRCS will allow for two additional space-to-ground audio channels and two additional downlink video channels.

  1. Real-time Enhancement, Registration, and Fusion for an Enhanced Vision System

    NASA Technical Reports Server (NTRS)

    Hines, Glenn D.; Rahman, Zia-ur; Jobson, Daniel J.; Woodell, Glenn A.

    2006-01-01

    Over the last few years NASA Langley Research Center (LaRC) has been developing an Enhanced Vision System (EVS) to aid pilots while flying in poor visibility conditions. The EVS captures imagery using two infrared video cameras. The cameras are placed in an enclosure that is mounted and flown forward-looking underneath the NASA LaRC ARIES 757 aircraft. The data streams from the cameras are processed in real-time and displayed on monitors on-board the aircraft. With proper processing the camera system can provide better-than-human-observed imagery particularly during poor visibility conditions. However, to obtain this goal requires several different stages of processing including enhancement, registration, and fusion, and specialized processing hardware for real-time performance. We are using a real-time implementation of the Retinex algorithm for image enhancement, affine transformations for registration, and weighted sums to perform fusion. All of the algorithms are executed on a single TI DM642 digital signal processor (DSP) clocked at 720 MHz. The image processing components were added to the EVS system, tested, and demonstrated during flight tests in August and September of 2005. In this paper we briefly discuss the EVS image processing hardware and algorithms. We then discuss implementation issues and show examples of the results obtained during flight tests.
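
    Of the three processing stages, fusion is the simplest to illustrate: once the two sensor streams are enhanced and registered, they are combined pixel-wise as a weighted sum. A minimal sketch (the weight is illustrative; the flight system implements this on the DSP):

        import numpy as np

        def fuse(frame_a, frame_b, w_a=0.6):
            """Weighted-sum fusion of two registered, enhanced frames in [0, 1]."""
            return np.clip(w_a * frame_a + (1.0 - w_a) * frame_b, 0.0, 1.0)

        fused = fuse(np.random.rand(480, 640), np.random.rand(480, 640))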

  2. Automatic and robust extrinsic camera calibration for high-accuracy mobile mapping

    NASA Astrophysics Data System (ADS)

    Goeman, Werner; Douterloigne, Koen; Bogaert, Peter; Pires, Rui; Gautama, Sidharta

    2012-10-01

    A mobile mapping system (MMS) is the answer of the geoinformation community to the exponentially growing demand for various geospatial data with increasingly higher accuracies, captured by multiple sensors. As the mobile mapping technology is pushed to explore its use for various applications on water, rail, or road, the need emerges for an external sensor calibration procedure which is portable, fast, and easy to perform. This way, sensors can be mounted and demounted depending on the application requirements without the need for time-consuming calibration procedures. A new methodology is presented to provide a high-quality external calibration of cameras which is automatic, robust, and foolproof. The MMS uses an Applanix POSLV420, which is a tightly coupled GPS/INS positioning system. The cameras used are Point Grey color video cameras synchronized with the GPS/INS system. The method uses a portable, standard ranging pole which needs to be positioned on a known ground control point. For calibration, a well-studied absolute orientation problem needs to be solved. Here, a mutual-information-based image registration technique is studied for automatic alignment of the ranging pole. Finally, a few benchmarking tests are done under various lighting conditions, which proves the methodology's robustness by showing high absolute stereo measurement accuracies of a few centimeters.
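
    Mutual information registration scores how well one image's intensities predict the other's via their joint histogram, and the alignment search maximizes that score. A compact sketch of the score itself (the pose search loop is omitted):

        import numpy as np

        def mutual_information(img_a, img_b, bins=32):
            """Mutual information between two equal-size grayscale images,
            estimated from their joint intensity histogram."""
            joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
            pxy = joint / joint.sum()
            px = pxy.sum(axis=1, keepdims=True)
            py = pxy.sum(axis=0, keepdims=True)
            nz = pxy > 0                                   # avoid log(0)
            return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

        mi = mutual_information(np.random.rand(100, 100), np.random.rand(100, 100))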

  3. Crew Field Notes: A New Tool for Planetary Surface Exploration

    NASA Technical Reports Server (NTRS)

    Horz, Friedrich; Evans, Cynthia; Eppler, Dean; Gernhardt, Michael; Bluethmann, William; Graf, Jodi; Bleisath, Scott

    2011-01-01

    The Desert Research and Technology Studies (DRATS) field tests of 2010 focused on the simultaneous operation of two rovers, a historical first. The complexity and data volume of two rovers operating simultaneously presented significant operational challenges for the on-site Mission Control Center, including the real time science support function. The latter was split into two "tactical" back rooms, one for each rover, that supported the real time traverse activities; in addition, a "strategic" science team convened overnight to synthesize the day's findings, and to conduct the strategic forward planning of the next day or days as detailed in [1, 2]. Current DRATS simulations and operations differ dramatically from those of Apollo, including the most evolved Apollo 15-17 missions, due to the advent of digital technologies. Modern digital still and video cameras, combined with the capability for real time transmission of large volumes of data, including multiple video streams, offer the prospect for the ground based science support room(s) in Mission Control to witness all crew activities in unprecedented detail and in real time. It was not uncommon during DRATS 2010 that each tactical science back room simultaneously received some 4-6 video streams from cameras mounted on the rover or the crews' backpacks. Some of the rover cameras are controllable PTZ (pan, tilt, zoom) devices that can be operated by the crews (during extensive drives) or remotely by the back room (during EVAs). Typically, a dedicated "expert" and professional geologist in the tactical back room(s) controls, monitors, and analyzes a single video stream and provides the findings to the team, commonly supported by screen-saved images. It seems obvious that the real time comprehension and synthesis of the verbal descriptions, extensive imagery, and other information (e.g., navigation data, timelines, etc.) flowing into the science support room(s) constitute a fundamental challenge to future mission operations: how can one analyze, comprehend, and synthesize, in real time, the enormous data volume coming to the ground? Real time understanding of all data is needed for constructive interaction with the surface crews, and it becomes critical for the strategic forward planning process.

  4. Distributing digital video to multiple computers

    PubMed Central

    Murray, James A.

    2004-01-01

    Video is an effective teaching tool, and live video microscopy is especially helpful in teaching dissection techniques and the anatomy of small neural structures. Digital video equipment is more affordable now and allows easy conversion from older analog video devices. I here describe a simple technique for bringing digital video from one camera to all of the computers in a single room. This technique allows students to view and record the video from a single camera on a microscope. PMID:23493464

  5. Video segmentation and camera motion characterization using compressed data

    NASA Astrophysics Data System (ADS)

    Milanese, Ruggero; Deguillaume, Frederic; Jacot-Descombes, Alain

    1997-10-01

    We address the problem of automatically extracting visual indexes from videos, in order to provide sophisticated access methods to the contents of a video server. We focus on two tasks, namely the decomposition of a video clip into uniform segments, and the characterization of each shot by camera motion parameters. For the first task we use a Bayesian classification approach to detecting scene cuts by analyzing motion vectors. For the second task a least-squares fitting procedure determines the pan/tilt/zoom camera parameters. In order to guarantee the highest processing speed, all techniques process and analyze MPEG-1 motion vectors directly, without the need for video decompression. Experimental results are reported for a database of news video clips.
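
    Under a simplified pan/tilt/zoom model, each motion vector (u, v) at image position (x, y), measured from the frame center, satisfies u ≈ pan + zoom·x and v ≈ tilt + zoom·y, so the three parameters follow from a linear least-squares fit. A sketch under that assumption (the paper's exact parameterization may differ):

        import numpy as np

        def fit_pan_tilt_zoom(xs, ys, us, vs):
            """Least-squares pan/tilt/zoom from motion vectors (us, vs) sampled
            at positions (xs, ys) relative to the image center.
            Model: u = pan + zoom*x, v = tilt + zoom*y."""
            n = len(xs)
            A = np.zeros((2 * n, 3))
            b = np.concatenate([us, vs])
            A[:n, 0] = 1.0; A[:n, 2] = xs        # u rows: pan and zoom terms
            A[n:, 1] = 1.0; A[n:, 2] = ys        # v rows: tilt and zoom terms
            (pan, tilt, zoom), *_ = np.linalg.lstsq(A, b, rcond=None)
            return pan, tilt, zoom

        xs = np.array([-80.0, 0.0, 80.0]); ys = np.array([-60.0, 0.0, 60.0])
        print(fit_pan_tilt_zoom(xs, ys, us=2.0 + 0.01 * xs, vs=-1.0 + 0.01 * ys))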

  6. Linear array of photodiodes to track a human speaker for video recording

    NASA Astrophysics Data System (ADS)

    DeTone, D.; Neal, H.; Lougheed, R.

    2012-12-01

    Communication and collaboration using stored digital media has garnered increasing interest from many areas of business, government, and education in recent years. This is due primarily to improvements in the quality of cameras and the speed of computers. An advantage of digital media is that it can serve as an effective alternative when physical interaction is not possible. Video recordings that allow viewers to discern a presenter's facial features, lips, and hand motions are more effective than videos that do not. To attain this, one must maintain a video capture in which the speaker occupies a significant portion of the captured pixels. However, camera operators are costly, and often do an imperfect job of tracking presenters in unrehearsed situations. This creates motivation for a robust, automated system that directs a video camera to follow a presenter as he or she walks anywhere in the front of a lecture hall or large conference room. Such a system is presented. The system consists of a commercial, off-the-shelf pan/tilt/zoom (PTZ) color video camera, a necklace of infrared LEDs, and a linear photodiode array detector. Electronic output from the photodiode array is processed to generate the location of the LED necklace, which is worn by a human speaker. The computer controls the video camera movements to record video of the speaker. The speaker's vertical position and depth are assumed to remain relatively constant; the video camera is sent only panning (horizontal) movement commands. The LED necklace is flashed at 70 Hz at a 50% duty cycle to provide noise-filtering capability. The benefit of using a photodiode array versus a standard video camera is its higher frame rate (4 kHz vs. 60 Hz). The higher frame rate allows for the filtering of infrared noise such as sunlight and indoor lighting, a capability absent from other tracking technologies. The system has been tested in a large lecture hall and is shown to be effective.
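
    The 50% duty-cycle flashing makes the detection synchronous: frames captured with the LEDs off measure only ambient infrared, so subtracting them cancels sunlight and room lighting. A rough sketch for one on/off frame pair (array size and positions illustrative):

        import numpy as np

        def necklace_position(frame_on, frame_off):
            """Locate the flashing LED necklace on a linear photodiode array:
            subtracting an LED-off frame cancels static infrared background;
            the centroid of the residual gives the pixel position."""
            signal = np.clip(frame_on - frame_off, 0.0, None)
            idx = np.arange(signal.size)
            return float((idx * signal).sum() / signal.sum())

        frame_off = np.random.rand(1024) * 0.1          # ambient IR only
        frame_on = frame_off.copy()
        frame_on[400:410] += 1.0                        # necklace lit near pixel 405
        print(necklace_position(frame_on, frame_off))   # ~404.5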

  7. An evaluation of video cameras for collecting observational data on sanctuary-housed chimpanzees (Pan troglodytes).

    PubMed

    Hansen, Bethany K; Fultz, Amy L; Hopper, Lydia M; Ross, Stephen R

    2018-05-01

    Video cameras are increasingly being used to monitor captive animals in zoo, laboratory, and agricultural settings. This technology may also be useful in sanctuaries with large and/or complex enclosures. However, the cost of camera equipment and a lack of formal evaluations regarding the use of cameras in sanctuary settings make it challenging for facilities to decide whether and how to implement this technology. To address this, we evaluated the feasibility of using a video camera system to monitor chimpanzees at Chimp Haven. We viewed a group of resident chimpanzees in a large forested enclosure and compared observations collected in person and with remote video cameras. We found that via camera, the observer viewed fewer chimpanzees in some outdoor locations (GLMM post hoc test: est. = 1.4503, SE = 0.1457, Z = 9.951, p < 0.001) and identified a lower proportion of chimpanzees (GLMM post hoc test: est. = -2.17914, SE = 0.08490, Z = -25.666, p < 0.001) compared to in-person observations. However, the observer could view the 2 ha enclosure 15 times faster by camera compared to in person. In addition to these results, we provide recommendations to animal facilities considering the installation of a video camera system. Despite some limitations of remote monitoring, we posit that there are substantial benefits of using camera systems in sanctuaries to facilitate animal care and observational research. © 2018 Wiley Periodicals, Inc.

  8. Camera network video summarization

    NASA Astrophysics Data System (ADS)

    Panda, Rameswar; Roy-Chowdhury, Amit K.

    2017-05-01

    Networks of vision sensors are deployed in many settings, ranging from security needs to disaster response to environmental monitoring. Many of these setups have hundreds of cameras and tens of thousands of hours of video. The difficulty of analyzing such a massive volume of video data is apparent whenever there is an incident that requires foraging through vast video archives to identify events of interest. As a result, video summarization, which automatically extracts a brief yet informative summary of these videos, has attracted intense attention in recent years. Much progress has been made in developing a variety of ways to summarize a single video in the form of a key sequence or video skim. However, generating a summary from a set of videos captured in a multi-camera network still remains a novel and largely under-addressed problem. In this paper, with the aim of summarizing videos in a camera network, we introduce a novel representative selection approach via joint embedding and capped l21-norm minimization. The objective function is two-fold. The first is to capture the structural relationships of data points in a camera network via an embedding, which helps in characterizing the outliers and also in extracting a diverse set of representatives. The second is to use a capped l21-norm to model the sparsity and to suppress the influence of data outliers in representative selection. We propose to jointly optimize both of the objectives, such that the embedding can not only characterize the structure, but also indicate the requirements of sparse representative selection. Extensive experiments on standard multi-camera datasets well demonstrate the efficacy of our method over state-of-the-art methods.
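
    The l21-norm term encourages row sparsity, so that only a few items act as representatives; its proximal operator is row-wise soft thresholding, sketched below (the paper's capped variant and the joint embedding are not reproduced here):

        import numpy as np

        def prox_l21(Z, tau):
            """Proximal operator of tau*||Z||_{2,1}: shrink each row's l2 norm,
            zeroing rows whose norm falls below tau (row sparsity)."""
            norms = np.linalg.norm(Z, axis=1, keepdims=True)
            scale = np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)
            return scale * Z

        Z = np.random.randn(50, 20)
        Z_sparse = prox_l21(Z, tau=3.0)   # rows with small norms become exactly zero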

  9. 6. VAL CAMERA CAR, DETAIL OF COMMUNICATION EQUIPMENT INSIDE CAMERA ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    6. VAL CAMERA CAR, DETAIL OF COMMUNICATION EQUIPMENT INSIDE CAMERA CAR WITH CAMERA MOUNT IN FOREGROUND. - Variable Angle Launcher Complex, Camera Car & Track, CA State Highway 39 at Morris Reservoir, Azusa, Los Angeles County, CA

  10. Three-dimensional visualization and display technologies; Proceedings of the Meeting, Los Angeles, CA, Jan. 18-20, 1989

    NASA Technical Reports Server (NTRS)

    Robbins, Woodrow E. (Editor); Fisher, Scott S. (Editor)

    1989-01-01

    Special attention was given to problems of stereoscopic display devices, such as CAD for enhancement of the design process in visual arts, stereo-TV improvement of remote manipulator performance, a voice-controlled stereographic video camera system, and head-mounted displays and their low-cost design alternatives. Also discussed was a novel approach to chromostereoscopic microscopy, computer-generated barrier-strip autostereography and lenticular stereograms, and parallax barrier three-dimensional TV. Additional topics include processing and user interface issues and visualization applications, including automated analysis of fluid flow topology, optical tomographic measurements of mixing fluids, visualization of complex data, visualization environments, and visualization management systems.

  11. A compact high-definition low-cost digital stereoscopic video camera for rapid robotic surgery development.

    PubMed

    Carlson, Jay; Kowalczuk, Jędrzej; Psota, Eric; Pérez, Lance C

    2012-01-01

    Robotic surgical platforms require vision feedback systems, which often consist of low-resolution, expensive, single-imager analog cameras. These systems are retooled for 3D display by simply doubling the cameras and outboard control units. Here, a fully-integrated digital stereoscopic video camera employing high-definition sensors and a class-compliant USB video interface is presented. This system can be used with low-cost PC hardware and consumer-level 3D displays for tele-medical surgical applications including military medical support, disaster relief, and space exploration.

  12. Feasibility study of transmission of OTV camera control information in the video vertical blanking interval

    NASA Technical Reports Server (NTRS)

    White, Preston A., III

    1994-01-01

    The Operational Television system at Kennedy Space Center operates hundreds of video cameras, many remotely controllable, in support of the operations at the center. This study was undertaken to determine if commercial NABTS (North American Basic Teletext System) teletext transmission in the vertical blanking interval of the genlock signals distributed to the cameras could be used to send remote control commands to the cameras and the associated pan and tilt platforms. Wavelength division multiplexed fiberoptic links are being installed in the OTV system to obtain RS-250 short-haul quality. It was demonstrated that the NABTS transmission could be sent over the fiberoptic cable plant without excessive video quality degradation and that video cameras could be controlled using NABTS transmissions over multimode fiberoptic paths as long as 1.2 km.

  13. Rugged Video System For Inspecting Animal Burrows

    NASA Technical Reports Server (NTRS)

    Triandafils, Dick; Maples, Art; Breininger, Dave

    1992-01-01

    Video system designed for examining interiors of burrows of gopher tortoises, 5 in. (13 cm) in diameter or greater, to depth of 18 ft. (about 5.5 m), includes video camera, video cassette recorder (VCR), television monitor, control unit, and power supply, all carried in backpack. Polyvinyl chloride (PVC) poles used to maneuver camera into (and out of) burrows, stiff enough to push camera into burrow, but flexible enough to bend around curves. Adult tortoises and other burrow inhabitants observable, young tortoises and such small animals as mice obscured by sand or debris.

  14. Using a Video Camera to Measure the Radius of the Earth

    ERIC Educational Resources Information Center

    Carroll, Joshua; Hughes, Stephen

    2013-01-01

    A simple but accurate method for measuring the Earth's radius using a video camera is described. A video camera was used to capture a shadow rising up the wall of a tall building at sunset. A free program called ImageJ was used to measure the time it took the shadow to rise a known distance up the building. The time, distance and length of…
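
    The underlying geometry: after the shadow passes the base of the building, the Earth rotates through θ = ωt while the terminator climbs a height h, with cos θ = R/(R + h), so R ≈ 2h/θ² for small θ. A hedged numerical sketch (values illustrative, not the paper's data; it ignores latitude and solar declination, which a careful measurement must account for):

        import math

        def earth_radius_m(h_m, t_s):
            """Earth radius from a shadow climbing h_m meters in t_s seconds:
            cos(w*t) = R / (R + h)  =>  R ~ 2h / (w*t)**2 for small angles."""
            w = 2.0 * math.pi / 86164.0        # Earth's rotation rate (rad/s)
            return 2.0 * h_m / (w * t_s) ** 2

        # A shadow rising 50 m in ~54 s yields roughly the true 6.4e6 m
        print(earth_radius_m(50.0, 54.0))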

  15. Instrumentation for Infrared Airglow Clutter.

    DTIC Science & Technology

    1987-03-10

    gain, and filter position to the Camera Head, and monitors these parameters as well as preamp video. GAZER is equipped with a Lenzar wide angle, low...Specifications/Parameters VIDEO SENSOR: Camera: LENZAR Intensicon-8 LLLTV using 2nd-gen micro-channel intensifier and proprietary camera tube

  16. Using high-technology to enforce low-technology safety measures: the use of third-party remote video auditing and real-time feedback in healthcare.

    PubMed

    Armellino, Donna; Hussain, Erfan; Schilling, Mary Ellen; Senicola, William; Eichorn, Ann; Dlugacz, Yosef; Farber, Bruce F

    2012-01-01

    Hand hygiene is a key measure in preventing infections. We evaluated healthcare worker (HCW) hand hygiene with the use of remote video auditing with and without feedback. The study was conducted in a 17-bed intensive care unit from June 2008 through June 2010. We placed cameras with views of every sink and hand sanitizer dispenser to record hand hygiene of HCWs. Sensors in doorways identified when an individual(s) entered/exited. When video auditors observed an HCW performing hand hygiene upon entering/exiting, they assigned a pass; if not, a fail was assigned. Hand hygiene was measured during a 16-week period of remote video auditing without feedback and a 91-week period with feedback of data. Performance feedback was continuously displayed on electronic boards mounted within the hallways, and summary reports were delivered to supervisors by electronic mail. During the 16-week prefeedback period, hand hygiene rates were less than 10% (3933/60,542); in the 16-week postfeedback period the rate was 81.6% (59,627/73,080). The increase was maintained through 75 weeks at 87.9% (262,826/298,860). The data suggest that remote video auditing combined with feedback produced a significant and sustained improvement in hand hygiene.

  17. Design of a high-resolution optoelectronic retinal prosthesis.

    PubMed

    Palanker, Daniel; Vankov, Alexander; Huie, Phil; Baccus, Stephen

    2005-03-01

    It has been demonstrated that electrical stimulation of the retina can produce visual percepts in blind patients suffering from macular degeneration and retinitis pigmentosa. However, current retinal implants provide very low resolution (just a few electrodes), whereas at least several thousand pixels would be required for functional restoration of sight. This paper presents the design of an optoelectronic retinal prosthetic system with a stimulating pixel density of up to 2500 pix mm⁻² (corresponding geometrically to a maximum visual acuity of 20/80). Requirements on proximity of neural cells to the stimulation electrodes are described as a function of the desired resolution. Two basic geometries of sub-retinal implants providing the required proximity are presented: perforated membranes and protruding electrode arrays. To provide for natural eye scanning of the scene, rather than scanning with a head-mounted camera, the system operates similarly to 'virtual reality' devices. An image from a video camera is projected by a goggle-mounted collimated infrared LED-LCD display onto the retina, activating an array of powered photodiodes in the retinal implant. The goggles are transparent to visible light, thus allowing for the simultaneous use of remaining natural vision along with prosthetic stimulation. Optical delivery of visual information to the implant allows for real-time image processing adjustable to retinal architecture, as well as flexible control of image processing algorithms and stimulation parameters.
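
    The stated correspondence between pixel density and acuity is simple geometry: 2500 pixels per mm² implies a 20 μm pitch, and with roughly 290 μm of retina per degree of visual angle (a common approximation, assumed here) that pitch subtends about 4 arcmin, i.e., roughly four times coarser than the 1 arcmin of 20/20 vision, hence about 20/80:

        import math

        pitch_um = 1000.0 / math.sqrt(2500.0)       # 2500 pix/mm^2 -> 20 um spacing
        um_per_arcmin = 290.0 / 60.0                # ~290 um of retina per degree (approx.)
        pitch_arcmin = pitch_um / um_per_arcmin     # ~4.1 arcmin per pixel
        print(f"~20/{20.0 * pitch_arcmin:.0f}")     # ~20/83, i.e., about 20/80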

  18. Design of a high-resolution optoelectronic retinal prosthesis

    NASA Astrophysics Data System (ADS)

    Palanker, Daniel; Vankov, Alexander; Huie, Phil; Baccus, Stephen

    2005-03-01

    It has been demonstrated that electrical stimulation of the retina can produce visual percepts in blind patients suffering from macular degeneration and retinitis pigmentosa. However, current retinal implants provide very low resolution (just a few electrodes), whereas at least several thousand pixels would be required for functional restoration of sight. This paper presents the design of an optoelectronic retinal prosthetic system with a stimulating pixel density of up to 2500 pix mm⁻² (corresponding geometrically to a maximum visual acuity of 20/80). Requirements on proximity of neural cells to the stimulation electrodes are described as a function of the desired resolution. Two basic geometries of sub-retinal implants providing the required proximity are presented: perforated membranes and protruding electrode arrays. To provide for natural eye scanning of the scene, rather than scanning with a head-mounted camera, the system operates similarly to 'virtual reality' devices. An image from a video camera is projected by a goggle-mounted collimated infrared LED-LCD display onto the retina, activating an array of powered photodiodes in the retinal implant. The goggles are transparent to visible light, thus allowing for the simultaneous use of remaining natural vision along with prosthetic stimulation. Optical delivery of visual information to the implant allows for real-time image processing adjustable to retinal architecture, as well as flexible control of image processing algorithms and stimulation parameters.

  19. STS-31 MS Sullivan and Pilot Bolden monitor SE 82-16 Ion Arc on OV-103 middeck

    NASA Technical Reports Server (NTRS)

    1990-01-01

    STS-31 Mission Specialist (MS) Kathryn D. Sullivan monitors and advises ground controllers of the activity inside the Student Experiment (SE) 82-16, Ion arc - studies of the effects of microgravity and a magnetic field on an electric arc, mounted in front of the middeck lockers aboard Discovery, Orbiter Vehicle (OV) 103. Pilot Charles F. Bolden uses a video camera and an ARRIFLEX motion picture camera to record the activity inside the special chamber. A sign in front of the experiment reads 'SSIP 82-16 Greg's Experiment Happy Graduation from STS-31.' SSIP stands for Shuttle Student Involvement Program. Gregory S. Peterson, who developed the experiment (Greg's Experiment), is a student at Utah State University and monitored the experiment's operation from JSC's Mission Control Center (MCC) during the flight. Decals displayed in the background on the orbiter galley represent the Hubble Space Telescope (HST), the United States (U.S.) Naval Reserve, Navy Oceanographers, U.S. Navy, and Univer

  20. Broadcast-quality-stereoscopic video in a time-critical entertainment and corporate environment

    NASA Astrophysics Data System (ADS)

    Gay, Jean-Philippe

    1995-03-01

    'reality present: Peter Gabriel and Cirque du Soleil' is a 12-minute original work directed and produced by Doug Brown, Jean-Philippe Gay & A. Coogan, which showcases creative content applications of commercial stereoscopic video equipment. For production, a complete equipment package including a Steadicam mount was used in support of the Ikegami LK-33 camera. Remote production units were fielded in the time-critical, on-stage and off-stage environments of 2 major live concerts: Peter Gabriel's Secret World performance at the San Diego Sports Arena, and Cirque du Soleil's Saltimbanco performance in Chicago. Twin 60 Hz video channels were captured on Beta SP for maximum post production flexibility. Digital post production and field sequential mastering were effected in D-2 format at studio facilities in Los Angeles. The program had its world premiere before a large public at the World of Music, Arts and Dance festivals in Los Angeles and San Francisco in late 1993. It was presented to the artists in Los Angeles, Montreal, and Washington D.C. Additional presentations have been made using a broad range of commercial and experimental stereoscopic video equipment, including projection systems, LCD and passive eyewear, and digital signal processors. Technical packages for live presentation have been fielded on site and off, through to the present.

  1. Electronic cameras for low-light microscopy.

    PubMed

    Rasnik, Ivan; French, Todd; Jacobson, Ken; Berland, Keith

    2013-01-01

    This chapter introduces electronic cameras, discusses the various parameters considered for evaluating their performance, and describes some of the key features of different camera formats. The chapter also presents a basic understanding of how electronic cameras function and how their properties can be exploited to optimize image quality under low-light conditions. Although there are many types of cameras available for microscopy, the most reliable type is the charge-coupled device (CCD) camera, which remains preferred for high-performance systems. If time resolution and frame rate are of no concern, slow-scan CCDs certainly offer the best available performance, both in terms of the signal-to-noise ratio and their spatial resolution. Slow-scan cameras are thus the first choice for experiments using fixed specimens, such as measurements using immunofluorescence and fluorescence in situ hybridization. However, if video-rate imaging is required, one need not evaluate slow-scan CCD cameras. A very basic video CCD may suffice if samples are heavily labeled or are not perturbed by high-intensity illumination. When video-rate imaging is required for very dim specimens, the electron-multiplying CCD camera is probably the most appropriate at this technological stage. Intensified CCDs provide a unique tool for applications in which high-speed gating is required. The variable-integration-time video cameras are very attractive options if one needs to acquire images at video rate as well as with longer integration times for less bright samples. This flexibility can facilitate many diverse applications with highly varied light levels. Copyright © 2007 Elsevier Inc. All rights reserved.

  2. Ground-based remote sensing with long lens video camera for upper-stem diameter and other tree crown measurements

    Treesearch

    Neil A. Clark; Sang-Mook Lee

    2004-01-01

    This paper demonstrates how a digital video camera with a long lens can be used with pulse laser ranging in order to collect very large-scale tree crown measurements. The long focal length of the camera lens provides the magnification required for precise viewing of distant points with the trade-off of spatial coverage. Multiple video frames are mosaicked into a single...

  3. The Camera Is Not a Methodology: Towards a Framework for Understanding Young Children's Use of Video Cameras

    ERIC Educational Resources Information Center

    Bird, Jo; Colliver, Yeshe; Edwards, Susan

    2014-01-01

    Participatory research methods argue that young children should be enabled to contribute their perspectives on research seeking to understand their worldviews. Visual research methods, including the use of still and video cameras with young children have been viewed as particularly suited to this aim because cameras have been considered easy and…

  4. Falcons pursue prey using visual motion cues: new perspectives from animal-borne cameras

    PubMed Central

    Kane, Suzanne Amador; Zamani, Marjon

    2014-01-01

    This study reports on experiments on falcons wearing miniature videocameras mounted on their backs or heads while pursuing flying prey. Videos of hunts by a gyrfalcon (Falco rusticolus), gyrfalcon (F. rusticolus)/Saker falcon (F. cherrug) hybrids and peregrine falcons (F. peregrinus) were analyzed to determine apparent prey positions on their visual fields during pursuits. These video data were then interpreted using computer simulations of pursuit steering laws observed in insects and mammals. A comparison of the empirical and modeling data indicates that falcons use cues due to the apparent motion of prey on the falcon's visual field to track and capture flying prey via a form of motion camouflage. The falcons also were found to maintain their prey's image at visual angles consistent with using their shallow fovea. These results should prove relevant for understanding the co-evolution of pursuit and evasion, as well as the development of computer models of predation and the integration of sensory and locomotion systems in biomimetic robots. PMID:24431144
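
    The steering laws referenced here can be contrasted in a toy simulation: classical pursuit points the velocity straight at the prey, while a constant-bearing form of motion camouflage matches the prey's cross-line-of-sight velocity so the line of sight does not rotate and the pursuer appears stationary on the prey's visual field. A minimal 2-D sketch (all values illustrative):

        import numpy as np

        def steer(strategy, pursuer, prey, prey_v, speed):
            """One steering decision. 'classical' pursuit heads straight at the
            prey; 'camouflage' matches the prey's cross-line-of-sight velocity
            so the line-of-sight direction stays fixed (constant bearing)."""
            u = (prey - pursuer) / np.linalg.norm(prey - pursuer)   # line of sight
            if strategy == "classical":
                return speed * u
            v_perp = prey_v - np.dot(prey_v, u) * u     # prey motion across the LOS
            along = np.sqrt(max(speed**2 - v_perp @ v_perp, 0.0))
            return v_perp + along * u                   # null LOS rotation, then close

        pursuer, prey = np.array([0.0, 0.0]), np.array([0.0, 30.0])
        prey_v, speed, dt = np.array([8.0, 0.0]), 12.0, 0.01
        for step in range(2000):
            pursuer = pursuer + steer("camouflage", pursuer, prey, prey_v, speed) * dt
            prey = prey + prey_v * dt
            if np.linalg.norm(prey - pursuer) < 0.5:
                print(f"intercepted after {step * dt:.2f} s")
                break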

  5. Detecting background changes in environments with dynamic foreground by separating probability distribution function mixtures using Pearson's method of moments

    NASA Astrophysics Data System (ADS)

    Jenkins, Colleen; Jordan, Jay; Carlson, Jeff

    2007-02-01

    This paper presents parameter estimation techniques useful for detecting background changes in a video sequence with extreme foreground activity. A specific application of interest is automated detection of the covert placement of threats (e.g., a briefcase bomb) inside crowded public facilities. We propose that a histogram of pixel intensity acquired from a fixed-mounted camera over time for a series of images will be a mixture of two Gaussian functions: the foreground probability distribution function and the background probability distribution function. We will use Pearson's Method of Moments to separate the two probability distribution functions. The background function can then be "remembered" and changes in the background can be detected. Subsequent comparisons of background estimates are used to detect changes. Changes are flagged to alert security forces to the presence and location of potential threats. Results are presented that indicate the significant potential for robust parameter estimation techniques as applied to video surveillance.
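
    Pearson's original method reduces two-Gaussian moment matching to a polynomial root-finding problem; as a numerical stand-in (not the paper's implementation, and sensitive to initialization), the five mixture parameters can be fit to the first five sample moments directly:

        import numpy as np
        from scipy.optimize import least_squares

        def gauss_raw_moments(m, s):
            """First five raw moments of a normal distribution N(m, s^2)."""
            return np.array([m,
                             m**2 + s**2,
                             m**3 + 3*m*s**2,
                             m**4 + 6*m**2*s**2 + 3*s**4,
                             m**5 + 10*m**3*s**2 + 15*m*s**4])

        def separate_mixture(samples):
            """Fit (weight, mean1, sigma1, mean2, sigma2) of a two-Gaussian
            mixture by matching its first five raw moments to the sample's."""
            target = np.array([np.mean(samples**k) for k in range(1, 6)])

            def residual(p):
                w, m1, s1, m2, s2 = p
                mix = w * gauss_raw_moments(m1, s1) + (1 - w) * gauss_raw_moments(m2, s2)
                return (mix - target) / target          # relative moment mismatch

            x0 = [0.5, np.percentile(samples, 25), samples.std(),
                  np.percentile(samples, 75), samples.std()]
            lb = [0.01, -np.inf, 1e-3, -np.inf, 1e-3]
            ub = [0.99, np.inf, np.inf, np.inf, np.inf]
            return least_squares(residual, x0, bounds=(lb, ub)).x

        rng = np.random.default_rng(0)
        pixels = np.concatenate([rng.normal(60, 10, 7000),     # background intensities
                                 rng.normal(150, 20, 3000)])   # foreground intensities
        print(separate_mixture(pixels))   # ~[0.7, 60, 10, 150, 20]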

  6. Falcons pursue prey using visual motion cues: new perspectives from animal-borne cameras.

    PubMed

    Kane, Suzanne Amador; Zamani, Marjon

    2014-01-15

    This study reports on experiments on falcons wearing miniature videocameras mounted on their backs or heads while pursuing flying prey. Videos of hunts by a gyrfalcon (Falco rusticolus), gyrfalcon (F. rusticolus)/Saker falcon (F. cherrug) hybrids and peregrine falcons (F. peregrinus) were analyzed to determine apparent prey positions on their visual fields during pursuits. These video data were then interpreted using computer simulations of pursuit steering laws observed in insects and mammals. A comparison of the empirical and modeling data indicates that falcons use cues due to the apparent motion of prey on the falcon's visual field to track and capture flying prey via a form of motion camouflage. The falcons also were found to maintain their prey's image at visual angles consistent with using their shallow fovea. These results should prove relevant for understanding the co-evolution of pursuit and evasion, as well as the development of computer models of predation and the integration of sensory and locomotion systems in biomimetic robots.

  7. Video model deformation system for the National Transonic Facility

    NASA Technical Reports Server (NTRS)

    Burner, A. W.; Snow, W. L.; Goad, W. K.

    1983-01-01

    A photogrammetric closed circuit television system to measure model deformation at the National Transonic Facility is described. The photogrammetric approach was chosen because of its inherent rapid data recording of the entire object field. Video cameras are used to acquire data instead of film cameras due to the inaccessibility of cameras which must be housed within the cryogenic, high pressure plenum of this facility. A rudimentary theory section is followed by a description of the video-based system and control measures required to protect cameras from the hostile environment. Preliminary results obtained with the same camera placement as planned for NTF are presented and plans for facility testing with a specially designed test wing are discussed.

  8. Free-viewpoint video of human actors using multiple handheld Kinects.

    PubMed

    Ye, Genzhi; Liu, Yebin; Deng, Yue; Hasler, Nils; Ji, Xiangyang; Dai, Qionghai; Theobalt, Christian

    2013-10-01

    We present an algorithm for creating free-viewpoint video of interacting humans using three handheld Kinect cameras. Our method reconstructs deforming surface geometry and temporally varying texture of humans through estimation of human poses and camera poses for every time step of the RGBZ video. Skeletal configurations and camera poses are found by solving a joint energy minimization problem, which optimizes the alignment of RGBZ data from all cameras, as well as the alignment of human shape templates to the Kinect data. The energy function is based on a combination of geometric correspondence finding, implicit scene segmentation, and correspondence finding using image features. Finally, texture recovery is achieved through joint optimization on spatio-temporal RGB data using matrix completion. As opposed to previous methods, our algorithm succeeds on free-viewpoint video of human actors under general uncontrolled indoor scenes with potentially dynamic background, and it succeeds even if the cameras are moving.

  9. A Remote-Control Airship for Coastal and Environmental Research

    NASA Astrophysics Data System (ADS)

    Puleo, J. A.; O'Neal, M. A.; McKenna, T. E.; White, T.

    2008-12-01

    The University of Delaware recently acquired an 18 m (60 ft) remote-control airship capable of carrying a 36 kg (120 lb) scientific payload for coastal and environmental research. By combining the benefits of tethered balloons (stable dwell time) and powered aircraft (ability to navigate), the platform allows for high-resolution data collection in both time and space. The platform was developed by Galaxy Blimps, LLC of Dallas, TX for collecting high-definition video of sporting events. The airship can fly to altitudes of at least 600 m (2000 ft), reaching speeds between zero and 18 m/s (35 knots) in winds up to 13 m/s (25 knots). Using a hand-held console and radio transmitter, a ground-based operator can manipulate the orientation and throttle of two gasoline engines, and the orientation of four fins. Airship location is delivered to the operator through a data downlink from an onboard altimeter and global positioning system (GPS) receiver. Scientific payloads are easily attached to a rail system on the underside of the blimp. Data collection can be automated (fixed time intervals) or triggered by a second operator using a second hand-held console. Data can be stored onboard or transmitted in real-time to a ground-based computer. The first science mission (Fall 2008) is designed to collect images of tidal inundation of a salt marsh to support numerical modeling of water quality in the Murderkill River Estuary in Kent County, Delaware (a tributary of Delaware Bay in the USA Mid-Atlantic region). Time-sequenced imagery will be collected by a ten-megapixel camera and a thermal-infrared imager mounted in separate remote-control, gyro-stabilized camera mounts on the blimp. Live video feeds will be transmitted to the instrument operator on the ground. Resulting time series data will ultimately be used to compare/update independent estimates of inundation based on LiDAR elevations and a suite of tide and temperature gauges.

  10. The Mars Science Laboratory (MSL) Mast cameras and Descent imager: Investigation and instrument descriptions.

    PubMed

    Malin, Michal C; Ravine, Michael A; Caplinger, Michael A; Tony Ghaemi, F; Schaffner, Jacob A; Maki, Justin N; Bell, James F; Cameron, James F; Dietrich, William E; Edgett, Kenneth S; Edwards, Laurence J; Garvin, James B; Hallet, Bernard; Herkenhoff, Kenneth E; Heydari, Ezat; Kah, Linda C; Lemmon, Mark T; Minitti, Michelle E; Olson, Timothy S; Parker, Timothy J; Rowland, Scott K; Schieber, Juergen; Sletten, Ron; Sullivan, Robert J; Sumner, Dawn Y; Aileen Yingst, R; Duston, Brian M; McNair, Sean; Jensen, Elsa H

    2017-08-01

    The Mars Science Laboratory Mast camera and Descent Imager investigations were designed, built, and operated by Malin Space Science Systems of San Diego, CA. They share common electronics and focal plane designs but have different optics. There are two Mastcams of dissimilar focal length. The Mastcam-34 has an f/8, 34 mm focal length lens, and the M-100 an f/10, 100 mm focal length lens. The M-34 field of view is about 20° × 15° with an instantaneous field of view (IFOV) of 218 μrad; the M-100 field of view (FOV) is 6.8° × 5.1° with an IFOV of 74 μrad. The M-34 can focus from 0.5 m to infinity, and the M-100 from ~1.6 m to infinity. All three cameras can acquire color images through a Bayer color filter array, and the Mastcams can also acquire images through seven science filters. Images are ≤1600 pixels wide by 1200 pixels tall. The Mastcams, mounted on the ~2 m tall Remote Sensing Mast, have a 360° azimuth and ~180° elevation field of regard. Mars Descent Imager is fixed-mounted to the bottom left front side of the rover at ~66 cm above the surface. Its fixed focus lens is in focus from ~2 m to infinity, but out of focus at 66 cm. The f/3 lens has a FOV of ~70° by 52° across and along the direction of motion, with an IFOV of 0.76 mrad. All cameras can acquire video at 4 frames/second for full frames or 720p HD at 6 fps. Images can be processed using lossy Joint Photographic Experts Group and predictive lossless compression.
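
    The quoted optical figures are mutually consistent under a simple pinhole model if one assumes the 7.4 μm pixel pitch of the Kodak KAI-2020 detector family named in the MAHLI record later in this listing (the Mastcams share its detector design); a quick check:

        import math

        pitch = 7.4e-6  # m; assumed pixel pitch (Kodak KAI-2020, per the MAHLI record)
        for f, name in [(0.034, "M-34"), (0.100, "M-100")]:
            ifov = pitch / f                                         # instantaneous FOV, rad
            fov_h = 2 * math.degrees(math.atan(1600 * pitch / (2 * f)))
            fov_v = 2 * math.degrees(math.atan(1200 * pitch / (2 * f)))
            print(name, round(ifov * 1e6), "urad,",
                  round(fov_h, 1), "x", round(fov_v, 1), "deg")
        # prints ~218 urad, 19.7 x 14.9 deg and ~74 urad, 6.8 x 5.1 deg,
        # matching the stated IFOV and FOV values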

  11. The Mars Science Laboratory (MSL) Mast cameras and Descent imager: Investigation and instrument descriptions

    PubMed Central

    Ravine, Michael A.; Caplinger, Michael A.; Tony Ghaemi, F.; Schaffner, Jacob A.; Maki, Justin N.; Bell, James F.; Cameron, James F.; Dietrich, William E.; Edgett, Kenneth S.; Edwards, Laurence J.; Garvin, James B.; Hallet, Bernard; Herkenhoff, Kenneth E.; Heydari, Ezat; Kah, Linda C.; Lemmon, Mark T.; Minitti, Michelle E.; Olson, Timothy S.; Parker, Timothy J.; Rowland, Scott K.; Schieber, Juergen; Sletten, Ron; Sullivan, Robert J.; Sumner, Dawn Y.; Aileen Yingst, R.; Duston, Brian M.; McNair, Sean; Jensen, Elsa H.

    2017-01-01

    The Mars Science Laboratory Mast camera and Descent Imager investigations were designed, built, and operated by Malin Space Science Systems of San Diego, CA. They share common electronics and focal plane designs but have different optics. There are two Mastcams of dissimilar focal length. The Mastcam-34 has an f/8, 34 mm focal length lens, and the M-100 an f/10, 100 mm focal length lens. The M-34 field of view is about 20° × 15° with an instantaneous field of view (IFOV) of 218 μrad; the M-100 field of view (FOV) is 6.8° × 5.1° with an IFOV of 74 μrad. The M-34 can focus from 0.5 m to infinity, and the M-100 from ~1.6 m to infinity. All three cameras can acquire color images through a Bayer color filter array, and the Mastcams can also acquire images through seven science filters. Images are ≤1600 pixels wide by 1200 pixels tall. The Mastcams, mounted on the ~2 m tall Remote Sensing Mast, have a 360° azimuth and ~180° elevation field of regard. Mars Descent Imager is fixed-mounted to the bottom left front side of the rover at ~66 cm above the surface. Its fixed focus lens is in focus from ~2 m to infinity, but out of focus at 66 cm. The f/3 lens has a FOV of ~70° by 52° across and along the direction of motion, with an IFOV of 0.76 mrad. All cameras can acquire video at 4 frames/second for full frames or 720p HD at 6 fps. Images can be processed using lossy Joint Photographic Experts Group and predictive lossless compression. PMID:29098171

  12. Pettit runs a drill while looking through a camera mounted on the Nadir window in the U.S. Lab

    NASA Image and Video Library

    2003-04-05

    ISS006-E-44305 (5 April 2003) --- Astronaut Donald R. Pettit, Expedition Six NASA ISS science officer, runs a drill while looking through a camera mounted on the nadir window in the Destiny laboratory on the International Space Station (ISS). The device is called a “barn door tracker”. The drill turns the screw, which moves the camera and its spotting scope.

  13. A Mars Rover Mission Simulation on Kilauea Volcano

    NASA Technical Reports Server (NTRS)

    Stoker, Carol; Cuzzi, Jeffery N. (Technical Monitor)

    1995-01-01

    A field experiment to simulate a rover mission on Mars was performed using the Russian Marsokhod rover deployed on Kilauea Volcano, HI, in February 1995. The Marsokhod rover chassis was equipped with American avionics equipment, stereo cameras on a pan-and-tilt platform, a digital high-resolution body-mounted camera, and a manipulator arm on which was mounted a camera with a close-up lens. The six-wheeled rover is 2 meters long and has a mass of 120 kg. The imaging system was designed to simulate that planned for the "Mars Together" mission. The rover was operated from NASA Ames by a team of planetary geologists and exobiologists. Two modes of mission operations were simulated for three days each: (1) long time delay with low data bandwidth (simulating a Mars mission), and (2) live video with wide-bandwidth data (allowing active control, simulating a lunar rover mission or a Mars rover mission controlled from on or near the Martian surface). Simulated descent images (aerial photographs) were used to plan traverses addressing a detailed set of science questions. The actual route taken was determined by the science team, and the traverse path was frequently changed in response to the data acquired and to unforeseen operational issues. Traverses were thereby optimized to answer scientific questions efficiently. During the Mars simulation, the rover traversed a distance of 800 m. Based on the time delay between Earth and Mars, we estimate that the same operation would have taken 30 days to perform on Mars. This paper describes the mission simulation and makes recommendations about incorporating rovers into the Mars Surveyor program.

  14. A High-Speed Spectroscopy System for Observing Lightning and Transient Luminous Events

    NASA Astrophysics Data System (ADS)

    Boggs, L.; Liu, N.; Austin, M.; Aguirre, F.; Tilles, J.; Nag, A.; Lazarus, S. M.; Rassoul, H.

    2017-12-01

    Here we present a high-speed spectroscopy system that can be used to record atmospheric electrical discharges, including lightning and transient luminous events. The system consists of a Phantom V1210 high-speed camera, a Volume Phase Holographic (VPH) grism, an optional optical slit, and lenses. The spectrograph can record videos at speeds of 200,000 frames per second and has an effective wavelength band of 550-775 nm for the first-order spectra. When the slit is used, the system has a spectral resolution of about 0.25 nm per pixel. We have constructed a durable enclosure made of heavy-duty aluminum to house the high-speed spectrograph. It has two fans for continuous air flow and a removable tray to mount the spectrograph components. In addition, a Watec video camera (30 frames per second) is attached to the top of the enclosure to provide a scene view. A heavy-duty Pelco pan/tilt motor is used to position the enclosure and can be controlled remotely through a Raspberry Pi computer. An observation campaign was conducted during the summer and fall of 2017 at the Florida Institute of Technology. Several close cloud-to-ground discharges were recorded at 57,000 frames per second. Spectra of a downward stepped negative leader and a positive cloud-to-ground return stroke will be reported.
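
    Assuming the dispersion is approximately linear across the band (a real grism only approximates this), the stated figures imply a simple pixel-to-wavelength mapping spanning roughly 900 pixels:

        # 550-775 nm first-order band at ~0.25 nm/pixel spans ~900 pixels,
        # comfortably within the Phantom V1210's sensor width
        def wavelength_nm(pixel, lam0=550.0, dispersion=0.25):
            """Wavelength at a given spectral pixel under linear dispersion."""
            return lam0 + dispersion * pixel

        print(wavelength_nm(0), wavelength_nm(900))  # 550.0 775.0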

  15. Feasibility of telementoring between Baltimore (USA) and Rome (Italy): the first five cases.

    PubMed

    Micali, S; Virgili, G; Vannozzi, E; Grassi, N; Jarrett, T W; Bauer, J J; Vespasiani, G; Kavoussi, L R

    2000-08-01

    Telemedicine is the use of telecommunication technology to deliver healthcare. Telementoring has been developed to allow a surgeon at a remote site to offer guidance and assistance to a less-experienced surgeon. We report on our experience during laparoscopic urologic procedures with mentoring between Rome, Italy, and Baltimore, USA. Over a period of 3 months, two laparoscopic left spermatic vein ligations, one retroperitoneal renal biopsy, one laparoscopic nephrectomy, and one percutaneous access to the kidney were telementored. Transperitoneal laparoscopic cases were performed with the use of AESOP, a robotic arm for remote manipulation of the endoscopic camera. A second robot, PAKY, was used to perform radiologically guided needle orientation and insertion for percutaneous renal access. In addition to controlling the robotic devices, the system provided real-time video display for either the laparoscope or an externally mounted camera located in the operating room, full duplex audio, telestration over live video, and access to electrocautery for tissue cutting or hemostasis. All procedures were accomplished with an uneventful postoperative course. One technical failure occurred because the robotic device was not properly positioned on the operating table. The round-trip delay of image transmission was less than 1 second. International telementoring is a feasible technique that can enhance surgeon education and decrease the likelihood of complications attributable to inexperience with new operative techniques.

  16. Data Mining and Information Technology: Its Impact on Intelligence Collection and Privacy Rights

    DTIC Science & Technology

    2007-11-26

    sources include: Cameras - Digital cameras (still and video) have been improving in capability while simultaneously dropping in cost at a rate...citizen is caught on camera 300 times each day. The power of extensive video coverage is magnified greatly by the nascent capability for voice and...software on security videos and tracking cell phone usage in the local area. However, it would only return the names and data of those who

  17. Novel Robotic Tools for Piping Inspection and Repair, Phase 1

    DTIC Science & Technology

    2014-02-13

    Snippet from the report's list of figures: Figure 57 - Accowle ODVS cross section and reflective path; Figure 58 - Leopard Imaging HD camera; ...mounted to iPhone; Figure 63 - Kogeto mounted to Leopard Imaging HD; Figure 65 - Leopard Imaging HD camera pipe test (letters); Figure 66 - Leopard Imaging HD camera.

  18. Camera Operator and Videographer

    ERIC Educational Resources Information Center

    Moore, Pam

    2007-01-01

    Television, video, and motion picture camera operators produce images that tell a story, inform or entertain an audience, or record an event. They use various cameras to shoot a wide range of material, including television series, news and sporting events, music videos, motion pictures, documentaries, and training sessions. Those who film or…

  19. The use of digital imaging, video conferencing, and telepathology in histopathology: a national survey.

    PubMed

    Dennis, T; Start, R D; Cross, S S

    2005-03-01

    To undertake a large-scale survey of histopathologists in the UK to determine the current infrastructure, training, and attitudes to digital pathology. A postal questionnaire was sent to 500 consultant histopathologists randomly selected from the membership of the Royal College of Pathologists in the UK. There was a response rate of 47%. Sixty-four per cent of respondents had a digital camera mounted on their microscope, but only 12% had any sort of telepathology equipment. Thirty per cent used digital images in electronic presentations at meetings at least once a year and only 24% had ever used telepathology in a diagnostic situation. Fifty-nine per cent had received no training in digital imaging. Fifty-eight per cent felt that the medicolegal implications of duty of care were a barrier to its use. A large proportion of pathologists (69%) were interested in using video conferencing for remote attendance at multidisciplinary team meetings. There is a reasonable level of equipment and communications infrastructure among histopathologists in the UK but a very low level of training. There is resistance to the use of telepathology in the diagnostic context but enthusiasm for the use of video conferencing in multidisciplinary team meetings.

  20. Wrap-Around Out-the-Window Sensor Fusion System

    NASA Technical Reports Server (NTRS)

    Fox, Jeffrey; Boe, Eric A.; Delgado, Francisco; Secor, James B.; Clark, Michael R.; Ehlinger, Kevin D.; Abernathy, Michael F.

    2009-01-01

    The Advanced Cockpit Evaluation System (ACES) includes communication, computing, and display subsystems, mounted in a van, that synthesize out-the-window views to approximate the outside world as it would be seen from the cockpit of a crewed spacecraft or aircraft, or from the control station of a remotely operated ground vehicle or UAV (unmanned aerial vehicle). The system includes five flat-panel display units arranged approximately in a semicircle around an operator, like cockpit windows. The scene displayed on each panel represents the view through the corresponding cockpit window. Each display unit is driven by a personal computer equipped with a video-capture card that accepts live input from any of a variety of sensors (typically, visible and/or infrared video cameras). Software running in the computers blends the live video images with synthetic images that could be generated, for example, from heads-up-display outputs, waypoints, corridors, or satellite photographs of the same geographic region. Data from a Global Positioning System receiver and an inertial navigation system aboard the remote vehicle are used by the ACES software to keep the synthetic and live views in registration. If the live image were to fail, the synthetic scenes could still be displayed to maintain situational awareness.
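
    A minimal sketch of the blending step described above, reduced to a translational registration and a fixed alpha weight (the actual ACES registration from GPS/INS data would be a full projective correction; all arrays and values here are stand-ins):

        import numpy as np

        def fuse_views(live, synthetic, offset_px, alpha=0.6):
            """Blend a live sensor frame with a synthetic scene.

            offset_px: (dy, dx) registration shift derived from GPS/INS data.
            alpha: weight given to the live imagery.
            """
            registered = np.roll(synthetic, offset_px, axis=(0, 1))
            return alpha * live + (1.0 - alpha) * registered

        live = np.random.rand(480, 640)       # stand-in for a camera frame
        synthetic = np.random.rand(480, 640)  # stand-in for a rendered scene
        out = fuse_views(live, synthetic, (4, -7))
        # If the live feed fails, the registered synthetic scene alone
        # can still be displayed to maintain situational awareness.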

  1. Design of video interface conversion system based on FPGA

    NASA Astrophysics Data System (ADS)

    Zhao, Heng; Wang, Xiang-jun

    2014-11-01

    This paper presents an FPGA-based video interface conversion system that enables inter-conversion between digital and analog video. A Cyclone IV series EP4CE22F17C chip from Altera Corporation is used as the main video-processing chip, and a single-chip microcontroller serves as the information-exchange control unit between the FPGA and the PC. The system is able to encode/decode messages from the PC. Technologies including video decoding/encoding circuits, the bus communication protocol, data stream de-interleaving and de-interlacing, color space conversion, and the Camera Link timing generator module of the FPGA are introduced. The system converts the Composite Video Broadcast Signal (CVBS) from the CCD camera into Low Voltage Differential Signaling (LVDS), which is collected by the video processing unit through its Camera Link interface. The processed video signals are then routed to the system output board and displayed on the monitor. The current experiment shows that the system achieves high-quality video conversion with minimal board size.
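
    As one concrete example of the color space conversion stage such a decoder pipeline performs, here is the standard ITU-R BT.601 YCbCr-to-RGB mapping in floating point (an FPGA implementation would use a fixed-point equivalent; this is not code from the paper):

        # ITU-R BT.601 conversion from video-range YCbCr to full-range RGB
        def ycbcr_to_rgb(y, cb, cr):
            r = 1.164 * (y - 16) + 1.596 * (cr - 128)
            g = 1.164 * (y - 16) - 0.392 * (cb - 128) - 0.813 * (cr - 128)
            b = 1.164 * (y - 16) + 2.017 * (cb - 128)
            clamp = lambda v: max(0, min(255, round(v)))
            return clamp(r), clamp(g), clamp(b)

        print(ycbcr_to_rgb(235, 128, 128))  # video-range white -> (255, 255, 255)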

  2. Airborne thermal infrared imaging of the 2004-2005 eruption of Mount St. Helens

    NASA Astrophysics Data System (ADS)

    Schneider, D. J.; Vallance, J. W.; Logan, M.; Wessels, R.; Ramsey, M.

    2005-12-01

    A helicopter-mounted forward-looking infrared imaging radiometer (FLIR) documented the explosive and effusive activity at Mount St. Helens during the 2004-2005 eruption. A gyro-stabilized gimbal, controlled by a crew member and attached at the lower front of the helicopter, houses the FLIR radiometer and an optical video camera. Since October 1, 2004, the system has provided an unprecedented data set of thermal and video dome-growth observations. Flights were conducted as frequently as twice daily during the initial month of the eruption (when changes in the crater and dome occurred rapidly), and have been continued on a tri-weekly basis during the period of sustained dome growth. As with any new technology, the routine use of FLIR images to aid in volcano monitoring has been a learning experience in terms of observation strategy and data interpretation. Some of the unique information that has been derived from these data to date include: 1) Rapid identification of the phreatic nature of the early explosive phase; 2) Observation of faulting and associated heat flow during times of large scale deformation; 3) Venting of hot gas through a short lived crater lake, indicative of a shallow magma source; 4) Increased heat flow of the crater floor prior to the initial dome extrusion; 5) Confirmation of new magma reaching the surface; 6) Identification of the source of active lava extrusion, dome collapse, and block and ash flows. Temperatures vary from ambient, in areas insulated by fault gouge and talus produced during extrusion, to as high as 500-740 degrees C in regions of active extrusion, collapse, and fracturing. This temperature variation needs to be accounted for in the retrieval of eruption parameters using satellite-based techniques, as such features are sub-pixel size in satellite images.

  3. Coordinated Global Measurements of TLEs from the Space Shuttle and Ground Stations during MEIDEX

    NASA Astrophysics Data System (ADS)

    Yair, Y.; Price, C.; Levin, Z.; Israelevitch, P.; Devir, A.; Ziv, B.; Joseph, J.; Mekler, Y.

    2001-12-01

    The Mediterranean Israeli Dust Experiment (MEIDEX) is scheduled to fly on-board the Columbia in May 2002, in a 39° inclination orbit for 16 days, passing over the major thunderstorm regions on Earth. The primary science instrument is a Xybion IMC-201 image-intensified radiometric camera with 6 narrow band filters (340nm, 380nm, 470nm, 555nm, 665nm, 860nm). A Sekai color video camera serves as a boresighted wide-FOV viewfinder. The cameras are mounted on a single-axis gimbal with a cross-track scan of ±22°, inside a pressurized canister sealed with a coated quartz window that is mounted in the shuttle cargo bay. Data will be recorded on 3 digital VCRs and downlinked to the ground. During the night-side of the orbit there will be dedicated observations toward the Earth's limb above areas of active thunderstorms, in an effort to image TLEs from space. While earlier shuttle flights have succeeded in recording several ionospheric discharges by using cargo bay video cameras, MEIDEX offers a unique opportunity to conduct targeted observations with a calibrated, multispectral instrument. The Xybion camera has a rectangular FOV of 14.04 (H) x 10.76 (V) degrees, which covers a volume of 466 km (H) x 358 km (V) at the Earth's limb, 1900 km away from the shuttle. The spatial resolution is 665 m (H) x 745 m (V) per pixel, enabling some structural features of TLEs to be resolved. Optical observations from space will be conducted with the 665nm filter, which matches the observed wide peak centered at 670nm that typifies red sprites, and also with the 380 and 470nm filters to record blue jets. Observations will consist of a continuous recording of the Earth's limb, from the direction of the dusk terminator towards the night side. Areas of high convective activity will be forecast by using global aviation SIG maps and uplinked to the crew before the observation. The astronaut will direct the camera toward areas with lightning activity, observed visually through the windows and on monitors in the crew cabin. Simultaneously with the optical observations from space, dedicated ground measurements will be conducted on a global scale. Two field sites in the Negev Desert in Israel will be used to collect electromagnetic data in the ELF and VLF frequency range. Additional ground stations in Germany, Hungary, USA, Antarctica, Chile, South Africa, Australia, Taiwan and Japan will also record Schumann Resonance and VLF signals. The coordinated measurements from various locations on Earth and from space will enable us to triangulate the location, and determine the polarity and charge moment, of the parent lightning of the optically observed TLEs. The success of the campaign will further clarify the global picture of TLE occurrence.
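
    The quoted limb coverage and per-pixel footprints follow from simple geometry at the stated 1900 km range, assuming an active image format of roughly 700 x 480 pixels for the intensified camera (the pixel counts are an assumption chosen to approximately reproduce the stated resolution):

        import math

        R = 1900.0  # km, range from the shuttle to the Earth's limb
        for fov_deg, n_px, label in [(14.04, 700, "H"), (10.76, 480, "V")]:
            coverage = 2 * R * math.tan(math.radians(fov_deg / 2))   # km at the limb
            print(label, round(coverage), "km,",
                  round(coverage / n_px * 1000), "m/pixel")
        # prints roughly 468 km / 668 m (H) and 358 km / 746 m (V),
        # close to the 466 x 358 km and 665 x 745 m figures in the abstract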

  4. 50 CFR 216.155 - Requirements for monitoring and reporting.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... place 3 autonomous digital video cameras overlooking chosen haul-out sites located varying distances from the missile launch site. Each video camera will be set to record a focal subgroup within the... presence and activity will be conducted and recorded in a field logbook or recorded on digital video for...

  5. The calibration of video cameras for quantitative measurements

    NASA Technical Reports Server (NTRS)

    Snow, Walter L.; Childers, Brooks A.; Shortis, Mark R.

    1993-01-01

    Several different recent applications of velocimetry at Langley Research Center are described in order to show the need for video camera calibration for quantitative measurements. Problems peculiar to video sensing are discussed, including synchronization and timing, targeting, and lighting. The extension of the measurements to include radiometric estimates is addressed.

  6. Using Google Streetview Panoramic Imagery for Geoscience Education

    NASA Astrophysics Data System (ADS)

    De Paor, D. G.; Dordevic, M. M.

    2014-12-01

    Google Streetview is a feature of Google Maps and Google Earth that allows viewers to switch from map or satellite view to 360° panoramic imagery recorded close to the ground. Most panoramas are recorded by Google engineers using special cameras mounted on the roofs of cars. Bicycles, snowmobiles, and boats have also been used and sometimes the camera has been mounted on a backpack for off-road use by hikers and skiers or attached to scuba-diving gear for "Underwater Streetview (sic)." Streetview panoramas are linked together so that the viewer can change viewpoint by clicking forward and reverse buttons. They therefore create a 4-D touring effect. As part of the GEODE project ("Google Earth for Onsite and Distance Education"), we are experimenting with the use of Streetview imagery for geoscience education. Our web-based test application allows instructors to select locations for students to study. Students are presented with a set of questions or tasks that they must address by studying the panoramic imagery. Questions include identification of rock types, structures such as faults, and general geological setting. The student view is locked into Streetview mode until they submit their answers, whereupon the map and satellite views become available, allowing students to zoom out and verify their location on Earth. Student learning is scaffolded by automatic computerized feedback. There are many existing Streetview panoramas with rich geological content. Additionally, instructors and members of the general public can create panoramas, including 360° Photo Spheres, by stitching images taken with their mobile devices and submitting them to Google for evaluation and hosting. A multi-thousand-dollar, multi-directional camera and mount can be purchased from DIY-streetview.com. This allows power users to generate their own high-resolution panoramas. A cheaper, 360° video camera is soon to be released, according to geonaute.com. Thus there are opportunities for geoscience educators both to use existing Streetview imagery and to generate new imagery for specific locations of geological interest. The GEODE team includes the authors and: H. Almquist, C. Bentley, S. Burgin, C. Cervato, G. Cooper, P. Karabinos, T. Pavlis, J. Piatek, B. Richards, J. Ryan, R. Schott, K. St. John, B. Tewksbury, and S. Whitmeyer.

  7. Constrained space camera assembly

    DOEpatents

    Heckendorn, Frank M.; Anderson, Erin K.; Robinson, Casandra W.; Haynes, Harriet B.

    1999-01-01

    A constrained space camera assembly which is intended to be lowered through a hole into a tank, a borehole or another cavity. The assembly includes a generally cylindrical chamber comprising a head and a body and a wiring-carrying conduit extending from the chamber. Means are included in the chamber for rotating the body about the head without breaking an airtight seal formed therebetween. The assembly may be pressurized and accompanied with a pressure sensing means for sensing if a breach has occurred in the assembly. In one embodiment, two cameras, separated from their respective lenses, are installed on a mounting apparatus disposed in the chamber. The mounting apparatus includes means allowing both longitudinal and lateral movement of the cameras. Moving the cameras longitudinally focuses the cameras, and moving the cameras laterally away from one another effectively converges the cameras so that close objects can be viewed. The assembly further includes means for moving lenses of different magnification forward of the cameras.

  8. On the Complexity of Digital Video Cameras in/as Research: Perspectives and Agencements

    ERIC Educational Resources Information Center

    Bangou, Francis

    2014-01-01

    The goal of this article is to consider the potential for digital video cameras to produce as part of a research agencement. Our reflection will be guided by the current literature on the use of video recordings in research, as well as by the rhizoanalysis of two vignettes. The first of these vignettes is associated with a short video clip shot by…

  9. Virtual viewpoint synthesis in multi-view video system

    NASA Astrophysics Data System (ADS)

    Li, Fang; Yang, Shiqiang

    2005-07-01

    In this paper, we present a virtual viewpoint video synthesis algorithm designed to meet three aims: low computational cost, real-time interpolation, and acceptable video quality. In contrast with previous technologies, this method obtains incomplete 3D structure from neighboring video sources instead of recovering full 3D information from all video sources, so the computation is greatly reduced. We can therefore demonstrate our interactive multi-view video synthesis algorithm on a personal computer. Furthermore, because we build correspondences between frames captured by neighboring cameras from selected feature points, we do not require camera calibration. Finally, our method can be used when the angle between neighboring cameras is 25-30 degrees, much larger than in common computer vision experiments. In this way, our method can be applied in many scenarios such as live sports broadcasting, video conferencing, etc.
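
    A rough sketch of uncalibrated, feature-based view interpolation in the spirit described above, using ORB matches and a linearly blended homography (a crude stand-in for the paper's partial 3D reconstruction; the synthetic images and the known warp stand in for two neighboring cameras):

        import cv2
        import numpy as np

        # Synthetic textured scene and a second view related by a known homography
        rng = np.random.default_rng(1)
        img1 = np.zeros((480, 640), np.uint8)
        for _ in range(60):
            x, y = int(rng.integers(0, 600)), int(rng.integers(0, 440))
            cv2.rectangle(img1, (x, y), (x + 30, y + 30),
                          int(rng.integers(60, 255)), -1)
        H_true = np.array([[1.0, 0.05, 20], [-0.04, 1.0, 10], [0, 0, 1.0]])
        img2 = cv2.warpPerspective(img1, H_true, (640, 480))

        # Match feature points between the two views (no calibration needed)
        orb = cv2.ORB_create(1000)
        k1, d1 = orb.detectAndCompute(img1, None)
        k2, d2 = orb.detectAndCompute(img2, None)
        matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
        pts1 = np.float32([k1[m.queryIdx].pt for m in matches])
        pts2 = np.float32([k2[m.trainIdx].pt for m in matches])
        H, _ = cv2.findHomography(pts1, pts2, cv2.RANSAC, 3.0)

        # Crude virtual view halfway between the cameras: warp each source
        # partway toward the other and blend (linear blending of homographies
        # is only an approximation of true viewpoint interpolation)
        t = 0.5
        H_mid = (1 - t) * np.eye(3) + t * H
        warped1 = cv2.warpPerspective(img1, H_mid, (640, 480))
        warped2 = cv2.warpPerspective(img2, H_mid @ np.linalg.inv(H), (640, 480))
        virtual = cv2.addWeighted(warped1, 0.5, warped2, 0.5, 0)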

  10. Reduction in Fall Rate in Dementia Managed Care Through Video Incident Review: Pilot Study.

    PubMed

    Bayen, Eleonore; Jacquemot, Julien; Netscher, George; Agrawal, Pulkit; Tabb Noyce, Lynn; Bayen, Alexandre

    2017-10-17

    Falls of individuals with dementia are frequent, dangerous, and costly. Early detection and access to the history of a fall are crucial for efficient care and secondary prevention in cognitively impaired individuals. However, most falls remain unwitnessed events. Furthermore, understanding why and how a fall occurred is a challenge. Video capture and secure transmission of real-world falls thus stands as a promising assistive tool. The objective of this study was to analyze how continuous video monitoring and review of falls of individuals with dementia can support better quality of care. A pilot observational study (July-September 2016) was carried out in a Californian memory care facility. Falls were video-captured (24×7) using 43 wall-mounted cameras (deployed in all common areas and in 10 of 40 private bedrooms of consenting residents and families). Video review was provided to facility staff through a customized mobile device app. The outcome measures were the count of residents' falls happening in the video-covered areas, the acceptability of video recording, the analysis of video review, and video replay possibilities for care practice. Over 3 months, 16 falls were video-captured. A drop in fall rate was observed in the last month of the study. Acceptability was good. Video review enabled screening for the severity of falls and fall-related injuries. Video replay enabled identification of cognitive-behavioral deficiencies and environmental circumstances contributing to the fall. This allowed for secondary prevention in high-risk multi-faller individuals and for updated facility care policies regarding a safer living environment for all residents. Video monitoring offers high potential to support conventional care in memory care facilities. ©Eleonore Bayen, Julien Jacquemot, George Netscher, Pulkit Agrawal, Lynn Tabb Noyce, Alexandre Bayen. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 17.10.2017.

  11. Movable Cameras And Monitors For Viewing Telemanipulator

    NASA Technical Reports Server (NTRS)

    Diner, Daniel B.; Venema, Steven C.

    1993-01-01

    Three methods are proposed to assist an operator viewing a telemanipulator on a video monitor in a control station when the video image is generated by a movable video camera in the remote workspace of the telemanipulator. Monitors are rotated or shifted and/or the images in them are transformed to adjust the coordinate systems of the scenes visible to the operator according to the motions of the cameras and/or the operator's preferences. This reduces the operator's workload and the probability of error by obviating the need for mental coordinate transformations during operation. The methods could be applied in outer space, undersea, in the nuclear industry, in surgery, in entertainment, and in manufacturing.

  12. The Mars Hand Lens Imager (MAHLI) aboard the Mars rover, Curiosity

    NASA Astrophysics Data System (ADS)

    Edgett, K. S.; Ravine, M. A.; Caplinger, M. A.; Ghaemi, F. T.; Schaffner, J. A.; Malin, M. C.; Baker, J. M.; Dibiase, D. R.; Laramee, J.; Maki, J. N.; Willson, R. G.; Bell, J. F., III; Cameron, J. F.; Dietrich, W. E.; Edwards, L. J.; Hallet, B.; Herkenhoff, K. E.; Heydari, E.; Kah, L. C.; Lemmon, M. T.; Minitti, M. E.; Olson, T. S.; Parker, T. J.; Rowland, S. K.; Schieber, J.; Sullivan, R. J.; Sumner, D. Y.; Thomas, P. C.; Yingst, R. A.

    2009-08-01

    The Mars Science Laboratory (MSL) rover, Curiosity, is expected to land on Mars in 2012. The Mars Hand Lens Imager (MAHLI) will be used to document martian rocks and regolith with a 2-megapixel RGB color CCD camera with a focusable macro lens mounted on an instrument-bearing turret on the end of Curiosity's robotic arm. The flight MAHLI can focus on targets at working distances of 20.4 mm to infinity. At 20.4 mm, images have a pixel scale of 13.9 μm/pixel. The pixel scale at 66 mm working distance is about the same (31 μm/pixel) as that of the Mars Exploration Rover (MER) Microscopic Imager (MI). MAHLI camera head placement is dependent on the capabilities of the MSL robotic arm, the design for which presently has a placement uncertainty of ~20 mm in 3 dimensions; hence, acquisition of images at the minimum working distance may be challenging. The MAHLI consists of 3 parts: a camera head, a Digital Electronics Assembly (DEA), and a calibration target. The camera head and DEA are connected by a JPL-provided cable which transmits data, commands, and power. JPL is also providing a contact sensor. The camera head will be mounted on the rover's robotic arm turret, the DEA will be inside the rover body, and the calibration target will be mounted on the robotic arm azimuth motor housing. Camera Head. MAHLI uses a Kodak KAI-2020CM interline transfer CCD (1600 x 1200 active 7.4 μm square pixels with RGB filtered microlenses arranged in a Bayer pattern). The optics consist of a group of 6 fixed lens elements, a movable group of 3 elements, and a fixed sapphire window front element. Undesired near-infrared radiation is blocked using a coating deposited on the inside surface of the sapphire window. The lens is protected by a dust cover with a Lexan window through which imaging can be accomplished if necessary, and targets can be illuminated by sunlight or two banks of two white light LEDs. Two 365 nm UV LEDs are included to search for fluorescent materials at night. DEA and Onboard Processing. The DEA incorporates the circuit elements required for data processing, compression, and buffering. It also includes all power conversion and regulation capabilities for both the DEA and the camera head. The DEA has an 8 GB non-volatile flash memory plus 128 MB volatile storage. Images can be commanded as full-frame or sub-frame and the camera has autofocus and autoexposure capabilities. MAHLI can also acquire 720p, ~7 Hz high definition video. Onboard processing includes options for Bayer pattern filter interpolation, JPEG-based compression, and focus stack merging (z-stacking). Malin Space Science Systems (MSSS) built and will operate the MAHLI. Alliance Spacesystems, LLC, designed and built the lens mechanical assembly. MAHLI shares common electronics, detector, and software designs with the MSL Mars Descent Imager (MARDI) and the 2 MSL Mast Cameras (Mastcam). Pre-launch images of geologic materials imaged by MAHLI are online at: http://www.msss.com/msl/mahli/prelaunch_images/.

  13. An Automatic Portable Telecine Camera.

    DTIC Science & Technology

    1978-08-01

    five television frames to achieve synchronous operation, that is about 0.2 second. 6.3 Video recorder noise immunity The synchronisation pulse separator...display is filmed by a modified 16 mm cine camera driven by a control unit in which the camera supply voltage is derived from the field synchronisation...pulses of the video signal. Automatic synchronisation of the camera mechanism is achieved over a wide range of television field frequencies and the

  14. Demonstrations of Optical Spectra with a Video Camera

    ERIC Educational Resources Information Center

    Kraftmakher, Yaakov

    2012-01-01

    The use of a video camera may markedly improve demonstrations of optical spectra. First, the output electrical signal from the camera, which provides full information about a picture to be transmitted, can be used for observing the radiant power spectrum on the screen of a common oscilloscope. Second, increasing the magnification by the camera…

  15. STS-36 Mission Specialist Thuot operates 16mm camera on OV-104's middeck

    NASA Image and Video Library

    1990-03-03

    STS-36 Mission Specialist (MS) Pierre J. Thuot operates 16mm ARRIFLEX motion picture camera mounted on the open airlock hatch via a bracket. Thuot uses the camera to record activity of his fellow STS-36 crewmembers on the middeck of Atlantis, Orbiter Vehicle (OV) 104. Positioned between the airlock hatch and the starboard wall-mounted sleep restraints, Thuot, wearing a FAIRFAX t-shirt, squints into the camera's eyepiece. Thuot and four other astronauts spent four days, 10 hours and 19 minutes aboard OV-104 for the Department of Defense (DOD) devoted mission.

  16. STS-36 Mission Specialist Thuot operates 16mm camera on OV-104's middeck

    NASA Technical Reports Server (NTRS)

    1990-01-01

    STS-36 Mission Specialist (MS) Pierre J. Thuot operates 16mm ARRIFLEX motion picture camera mounted on the open airlock hatch via a bracket. Thuot uses the camera to record activity of his fellow STS-36 crewmembers on the middeck of Atlantis, Orbiter Vehicle (OV) 104. Positioned between the airlock hatch and the starboard wall-mounted sleep restraints, Thuot, wearing a FAIRFAX t-shirt, squints into the camera's eyepiece. Thuot and four other astronauts spent four days, 10 hours and 19 minutes aboard OV-104 for the Department of Defense (DOD) devoted mission.

  17. Fixed mount wavefront sensor

    DOEpatents

    Neal, Daniel R.

    2000-01-01

    A rigid mount and method of mounting for a wavefront sensor. A wavefront dissector, such as a lenslet array, is rigidly mounted at a fixed distance relative to an imager, such as a CCD camera, without need for a relay imaging lens therebetween.

  18. Tracking a Head-Mounted Display in a Room-Sized Environment with Head-Mounted Cameras

    DTIC Science & Technology

    1990-04-01

    poor resolution and a very limited working volume [Wan90]. OPTOTRAK [Nor88] uses one camera with two dual-axis CCD infrared position sensors. Each...[Nor88] Northern Digital. Trade literature on Optotrak - Northern Digital's Three Dimensional Optical Motion Tracking and Analysis System. Northern Digital

  19. An affordable wearable video system for emergency response training

    NASA Astrophysics Data System (ADS)

    King-Smith, Deen; Mikkilineni, Aravind; Ebert, David; Collins, Timothy; Delp, Edward J.

    2009-02-01

    Many emergency response units are currently faced with restrictive budgets that prohibit their use of advanced technology-based training solutions. Our work focuses on creating an affordable, mobile, state-of-the-art emergency response training solution through the integration of low-cost, commercially available products. The system we have developed consists of tracking, audio, and video capability, coupled with other sensors that can all be viewed through a unified visualization system. In this paper we focus on the video subsystem, which provides real-time tracking and video feeds from the training environment through a system of wearable and stationary cameras. These two camera systems interface with a management system that handles storage and indexing of the video during and after training exercises. The wearable systems enable the command center to have live video and tracking information for each trainee in the exercise. The stationary camera systems provide a fixed point of reference for viewing action during the exercise and consist of a small Linux-based portable computer and a mountable camera. The video management system consists of a server and database that work in tandem with a visualization application to provide real-time and after-action review capability to the training system.

  20. Real-time Enhancement, Registration, and Fusion for a Multi-Sensor Enhanced Vision System

    NASA Technical Reports Server (NTRS)

    Hines, Glenn D.; Rahman, Zia-ur; Jobson, Daniel J.; Woodell, Glenn A.

    2006-01-01

    Over the last few years NASA Langley Research Center (LaRC) has been developing an Enhanced Vision System (EVS) to aid pilots while flying in poor visibility conditions. The EVS captures imagery using two infrared video cameras. The cameras are placed in an enclosure that is mounted and flown forward-looking underneath the NASA LaRC ARIES 757 aircraft. The data streams from the cameras are processed in real time and displayed on monitors on-board the aircraft. With proper processing the camera system can provide better-than-human-observed imagery, particularly during poor visibility conditions. However, achieving this goal requires several different stages of processing, including enhancement, registration, and fusion, and specialized processing hardware for real-time performance. We are using a real-time implementation of the Retinex algorithm for image enhancement, affine transformations for registration, and weighted sums to perform fusion. All of the algorithms are executed on a single TI DM642 digital signal processor (DSP) clocked at 720 MHz. The image processing components were added to the EVS system, tested, and demonstrated during flight tests in August and September of 2005. In this paper we briefly discuss the EVS image processing hardware and algorithms. We then discuss implementation issues and show examples of the results obtained during flight tests. Keywords: enhanced vision system, image enhancement, retinex, digital signal processing, sensor fusion
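
    A minimal sketch of the registration-then-fusion stage described above, using an affine resampling followed by a weighted sum (the affine parameters and weights below are illustrative, not the flight system's calibration):

        import numpy as np
        from scipy import ndimage

        def fuse(frames, weights, affines):
            """Register each sensor frame with an affine transform,
            then combine the registered frames by weighted sum."""
            acc = np.zeros_like(frames[0], dtype=float)
            for frame, w, (matrix, offset) in zip(frames, weights, affines):
                registered = ndimage.affine_transform(frame, matrix, offset=offset)
                acc += w * registered
            return acc

        a = np.random.rand(240, 320)        # stand-in for one IR sensor frame
        b = np.random.rand(240, 320)        # stand-in for the other IR sensor frame
        identity = (np.eye(2), (0.0, 0.0))
        shifted = (np.eye(2), (2.5, -1.0))  # sub-pixel offset between the sensors
        fused = fuse([a, b], [0.5, 0.5], [identity, shifted])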

  1. Caught on Camera: Special Education Classrooms and Video Surveillance

    ERIC Educational Resources Information Center

    Heintzelman, Sara C.; Bathon, Justin M.

    2017-01-01

    In Texas, state policy anticipates that installing video cameras in special education classrooms will decrease student abuse inflicted by teachers. Lawmakers assume that collecting video footage will prevent teachers from engaging in malicious actions and prosecute those who choose to harm children. At the request of a parent, Section 29.022 of…

  2. Improving Photometric Calibration of Meteor Video Camera Systems

    NASA Technical Reports Server (NTRS)

    Ehlert, Steven; Kingery, Aaron; Suggs, Robert

    2016-01-01

    We present the results of new calibration tests performed by the NASA Meteoroid Environment Office (MEO) designed to help quantify and minimize systematic uncertainties in meteor photometry from video camera observations. These systematic uncertainties can be categorized by two main sources: an imperfect understanding of the linearity correction for the MEO's Watec 902H2 Ultimate video cameras and uncertainties in meteor magnitudes arising from transformations between the Watec camera's Sony EX-View HAD bandpass and the bandpasses used to determine reference star magnitudes. To address the first point, we have measured the linearity response of the MEO's standard meteor video cameras using two independent laboratory tests on eight cameras. Our empirically determined linearity correction is critical for performing accurate photometry at low camera intensity levels. With regards to the second point, we have calculated synthetic magnitudes in the EX bandpass for reference stars. These synthetic magnitudes enable direct calculations of the meteor's photometric flux within the camera bandpass without requiring any assumptions of its spectral energy distribution. Systematic uncertainties in the synthetic magnitudes of individual reference stars are estimated at 0.20 mag, and are limited by the available spectral information in the reference catalogs. These two improvements allow for zero-points accurate to 0.05-0.10 mag in both filtered and unfiltered camera observations with no evidence for lingering systematics.
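
    A minimal sketch of how a synthetic magnitude can be computed by folding a reference-star spectrum through a camera bandpass (the response curve, spectrum, and zero point below are toy values, not the Watec/EX-View data used in the study):

        import numpy as np

        def synthetic_mag(wl_nm, flux, band_wl, band_resp, zero_point=0.0):
            """Band-averaged magnitude of a spectrum through a camera bandpass.
            wl_nm/flux: reference-star spectrum; band_wl/band_resp: sensor
            response curve; zero_point would be tied to a standard star."""
            resp = np.interp(wl_nm, band_wl, band_resp, left=0.0, right=0.0)
            f_band = np.trapz(flux * resp, wl_nm) / np.trapz(resp, wl_nm)
            return -2.5 * np.log10(f_band) + zero_point

        # Toy case: a flat spectrum observed through a triangular bandpass
        wl = np.linspace(400, 1000, 601)
        flux = np.full_like(wl, 3.0e-12)
        band = np.array([400.0, 700.0, 1000.0]), np.array([0.0, 1.0, 0.0])
        print(synthetic_mag(wl, flux, *band))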

  3. Robotic Vehicle Communications Interoperability

    DTIC Science & Technology

    1988-08-01

    Snippet from the report's control and sensor matrices: starter (cold start), fire suppression, fording control, fuel control, fuel tank selector, garage toggle, gear selector, hazard warning...Electro-optic sensors: sensor switch; video; radar; IR thermal imaging system; image intensifier; laser ranger; video camera selector (forward, stereo, rear); sensor control.

  4. 8. VAL CAMERA CAR, CLOSEUP VIEW OF 'FLARE' OR TRAJECTORY ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    8. VAL CAMERA CAR, CLOSE-UP VIEW OF 'FLARE' OR TRAJECTORY CAMERA ON SLIDING MOUNT. - Variable Angle Launcher Complex, Camera Car & Track, CA State Highway 39 at Morris Reservoir, Azusa, Los Angeles County, CA

  5. Hemispherical Laue camera

    DOEpatents

    Li, James C. M.; Chu, Sungnee G.

    1980-01-01

    A hemispherical Laue camera comprises a crystal sample mount for positioning a sample to be analyzed at the center of the sphere of a hemispherical, X-radiation-sensitive film cassette; a collimator; a stationary or rotating sample mount; and a set of standard spherical projection spheres. X-radiation generated from an external source is directed through the collimator to impinge onto the single crystal sample on the stationary mount. The diffracted beam is recorded on the hemispherical X-radiation sensitive film mounted inside the hemispherical film cassette in either transmission or back-reflection geometry. The distances travelled by X-radiation diffracted from the crystal to the hemispherical film are the same for all crystal planes which satisfy Bragg's Law. The recorded diffraction spots or Laue spots on the film thereby preserve both the symmetry information of the crystal structure and the relative intensities which are directly related to the relative structure factors of the crystal orientations. The diffraction pattern on the exposed film is compared with the known diffraction pattern on one of the standard spherical projection spheres for a specific crystal structure to determine the orientation of the crystal sample. By replacing the stationary sample support with a rotating sample mount, the hemispherical Laue camera can be used for crystal structure determination in a manner previously provided in conventional Debye-Scherrer cameras.

  6. Object tracking using multiple camera video streams

    NASA Astrophysics Data System (ADS)

    Mehrubeoglu, Mehrube; Rojas, Diego; McLauchlan, Lifford

    2010-05-01

    Two synchronized cameras are utilized to obtain independent video streams to detect moving objects from two different viewing angles. The video frames are directly correlated in time. Moving objects in image frames from the two cameras are identified and tagged for tracking. One advantage of such a system involves overcoming effects of occlusions that could result in an object in partial or full view in one camera, when the same object is fully visible in another camera. Object registration is achieved by determining the location of common features in the moving object across simultaneous frames. Perspective differences are adjusted. Combining information from images from multiple cameras increases robustness of the tracking process. Motion tracking is achieved by determining anomalies caused by the objects' movement across frames in time in each and the combined video information. The path of each object is determined heuristically. Accuracy of detection is dependent on the speed of the object as well as variations in direction of motion. Fast cameras increase accuracy but limit the speed and complexity of the algorithm. Such an imaging system has applications in traffic analysis, surveillance and security, as well as object modeling from multi-view images. The system can easily be expanded by increasing the number of cameras such that there is an overlap between the scenes from at least two cameras in proximity. An object can then be tracked long distances or across multiple cameras continuously, applicable, for example, in wireless sensor networks for surveillance or navigation.
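
    A toy sketch of the per-camera detection step and the time-indexed pairing of detections across two synchronized streams (the background subtraction and centroid extraction here are deliberately minimal, and all data are synthetic):

        import numpy as np

        def detect(frame, background, thresh=0.2):
            """Centroid of the change region vs. a static background."""
            mask = np.abs(frame - background) > thresh
            if not mask.any():
                return None
            ys, xs = np.nonzero(mask)
            return xs.mean(), ys.mean()

        # Synthetic synchronized streams: one bright object moving across
        # two views that see it at different offsets (two viewing angles)
        bg = np.zeros((120, 160))
        track_a, track_b = [], []
        for t in range(20):
            fa, fb = bg.copy(), bg.copy()
            fa[60, 20 + 5 * t] = 1.0    # camera A's view of the object
            fb[40, 60 + 5 * t] = 1.0    # camera B sees it elsewhere
            track_a.append(detect(fa, bg))
            track_b.append(detect(fb, bg))

        # Frames are correlated in time, so detections with the same index
        # refer to the same instant; combining the two tracks adds robustness
        # against occlusion in either single view.
        print(track_a[3], track_b[3])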

  7. GoPro Hero Cameras for Creation of a Three-Dimensional, Educational, Neurointerventional Video.

    PubMed

    Park, Min S; Brock, Andrea; Mortimer, Vance; Taussky, Philipp; Couldwell, William T; Quigley, Edward

    2017-10-01

    Neurointerventional education relies on an apprenticeship model, with the trainee observing and participating in procedures with the guidance of a mentor. While educational videos are becoming prevalent in surgical cases, there is a dearth of comparable educational material for trainees in neurointerventional programs. We sought to create a high-quality, three-dimensional video of a routine diagnostic cerebral angiogram for use as an educational tool. A diagnostic cerebral angiogram was recorded using two GoPro HERO 3+ cameras with the Dual HERO System to capture the proceduralist's hands during the case. This video was edited with recordings from the video monitors to create a real-time three-dimensional video of both the actions of the neurointerventionalist and the resulting wire/catheter movements. The final edited video, in either two or three dimensions, can serve as another instructional tool for the training of residents and/or fellows. Additional videos can be created in a similar fashion of more complicated neurointerventional cases. The GoPro HERO 3+ camera and Dual HERO System can be used to create educational videos of neurointerventional procedures.

  8. Still-Video Photography: Tomorrow's Electronic Cameras in the Hands of Today's Photojournalists.

    ERIC Educational Resources Information Center

    Foss, Kurt; Kahan, Robert S.

    This paper examines the still-video camera and its potential impact by looking at recent experiments and by gathering information from some of the few people knowledgeable about the new technology. The paper briefly traces the evolution of the tools and processes of still-video photography, examining how photographers and their work have been…

  9. Review of intelligent video surveillance with single camera

    NASA Astrophysics Data System (ADS)

    Liu, Ying; Fan, Jiu-lun; Wang, DianWei

    2012-01-01

    Intelligent video surveillance has found a wide range of applications in public security. This paper describes the state-of-the-art techniques in video surveillance systems with a single camera. This can serve as a starting point for building practical video surveillance systems in developing regions, leveraging existing ubiquitous infrastructure. In addition, this paper discusses the gap between existing technologies and the requirements of real-world scenarios, and proposes potential solutions to reduce this gap.

  10. Body worn camera

    NASA Astrophysics Data System (ADS)

    Aishwariya, A.; Pallavi Sudhir, Gulavani; Garg, Nemesa; Karthikeyan, B.

    2017-11-01

    A body-worn camera is a small video camera worn on the body, typically used by police officers to record arrests and evidence from crime scenes. It helps prevent and resolve complaints brought by members of the public and strengthens police transparency, performance, and accountability. The main constraints on this type of system are video format, resolution, frame rate, and audio quality. This system records video in .mp4 format at 1080p resolution and 30 frames per second. Another important aspect to consider while designing this system is the amount of power it requires, as battery management becomes very critical. The main design challenges are the size of the video files; audio capture for the video; combining audio and video and saving them in .mp4 format; the battery size required for 8 hours of continuous recording; and security. A prototype of this system is implemented using a Raspberry Pi Model B.
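
    A sketch of the video-capture core on a Raspberry Pi using the picamera library (the abstract does not say which library the authors used; audio capture and .mp4 muxing would be handled separately, e.g., with a USB microphone and ffmpeg):

        import picamera

        with picamera.PiCamera() as cam:
            cam.resolution = (1920, 1080)   # 1080p, as specified in the abstract
            cam.framerate = 30              # 30 frames per second
            cam.start_recording('clip.h264')
            cam.wait_recording(60)          # record for 60 seconds
            cam.stop_recording()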

  11. Video-Camera-Based Position-Measuring System

    NASA Technical Reports Server (NTRS)

    Lane, John; Immer, Christopher; Brink, Jeffrey; Youngquist, Robert

    2005-01-01

    A prototype optoelectronic system measures the three-dimensional relative coordinates of objects of interest or of targets affixed to objects of interest in a workspace. The system includes a charge-coupled-device video camera mounted in a known position and orientation in the workspace, a frame grabber, and a personal computer running image-data-processing software. Relative to conventional optical surveying equipment, this system can be built and operated at much lower cost; however, it is less accurate. It is also much easier to operate than are conventional instrumentation systems. In addition, there is no need to establish a coordinate system through cooperative action by a team of surveyors. The system operates in real time at around 30 frames per second (limited mostly by the frame rate of the camera). It continuously tracks targets as long as they remain in the field of the camera. In this respect, it emulates more expensive, elaborate laser tracking equipment that costs on the order of 100 times as much. Unlike laser tracking equipment, this system does not pose a hazard of laser exposure. Images acquired by the camera are digitized and processed to extract all valid targets in the field of view. The three-dimensional coordinates (x, y, and z) of each target are computed from the pixel coordinates of the targets in the images to an accuracy of the order of millimeters over distances of the order of meters. The system was originally intended specifically for real-time position measurement of payload transfers from payload canisters into the payload bay of the Space Shuttle Orbiters (see Figure 1). The system may be easily adapted to other applications that involve similar coordinate-measuring requirements. Examples of such applications include manufacturing, construction, preliminary approximate land surveying, and aerial surveying. For some applications with rectangular symmetry, it is feasible and desirable to attach a target composed of black and white squares to an object of interest (see Figure 2). For other situations, where circular symmetry is more desirable, circular targets also can be created. Such a target can readily be generated and modified by use of commercially available software and printed by use of a standard office printer. All three relative coordinates (x, y, and z) of each target can be determined by processing the video image of the target. Because of the unique design of corresponding image-processing filters and targets, the vision-based position-measurement system is extremely robust and tolerant of widely varying fields of view, lighting conditions, and varying background imagery.
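
    A sketch of the underlying pinhole-model computation: once a target of known physical size is located in the image, its range follows from its apparent size, and its lateral coordinates from its pixel offsets (all numbers below are illustrative, not the system's calibration):

        import numpy as np

        def target_xyz(u, v, w_px, W_m, f_px, cx, cy):
            """Back-project a detected target of known physical width.
            (u, v): target center in pixels; w_px: apparent width in pixels;
            W_m: true target width in meters; f_px: focal length in pixels;
            (cx, cy): principal point."""
            z = f_px * W_m / w_px      # range from apparent size
            x = (u - cx) * z / f_px    # lateral offsets scale with range
            y = (v - cy) * z / f_px
            return np.array([x, y, z])

        # A 0.10 m black-and-white target imaged 80 px wide at pixel (400, 260),
        # with an assumed 800 px focal length and a (320, 240) principal point:
        print(target_xyz(400, 260, 80, 0.10, 800, 320, 240))  # -> [0.1, 0.025, 1.0]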

  12. Use of body-mounted cameras to enhance data collection: an evaluation of two arthropod sampling techniques

    USDA-ARS?s Scientific Manuscript database

    A study was conducted that compared the effectiveness of a sweepnet versus a vacuum suction device for collecting arthropods in cotton. The study differs from previous research in that body-mounted action cameras (B-MACs) were used to record the activity of the person conducting the collections. The...

  13. Recording medical students' encounters with standardized patients using Google Glass: providing end-of-life clinical education.

    PubMed

    Tully, Jeffrey; Dameff, Christian; Kaib, Susan; Moffitt, Maricela

    2015-03-01

    Medical education today frequently includes standardized patient (SP) encounters to teach history-taking, physical exam, and communication skills. However, traditional wall-mounted cameras, used to record video for faculty and student feedback and evaluation, provide a limited view of key nonverbal communication behaviors during clinical encounters. In 2013, 30 second-year medical students participated in an end-of-life module that included SP encounters in which the SPs used Google Glass to record their first-person perspective. Students reviewed the Google Glass video and traditional videos and then completed a postencounter, self-evaluation survey and a follow-up survey about the experience. Google Glass was used successfully to record 30 student/SP encounters. One temporary Google Glass hardware failure was observed. Of the 30 students, 7 (23%) reported a "positive, nondistracting experience"; 11 (37%) a "positive, initially distracting experience"; 5 (17%) a "neutral experience"; and 3 (10%) a "negative experience." Four students (13%) opted to withhold judgment until they reviewed the videos but reported Google Glass as "distracting." According to follow-up survey responses, 16 students (of 23; 70%) found Google Glass "worth including in the [clinical skills program]," whereas 7 (30%) did not. Google Glass can be used to video record students during SP encounters and provides a novel perspective for the analysis and evaluation of their interpersonal communication skills and nonverbal behaviors. Next steps include a larger, more rigorous comparison of Google Glass versus traditional videos and expanded use of this technology in other aspects of the clinical skills training program.

  14. Evaluating video digitizer errors

    NASA Astrophysics Data System (ADS)

    Peterson, C.

    2016-01-01

    Analog output video cameras remain popular for recording meteor data. Although these cameras uniformly employ electronic detectors with fixed pixel arrays, the digitization process requires resampling the horizontal lines as they are output in order to reconstruct the pixel data, usually resulting in a new data array of different horizontal dimensions than the native sensor. Pixel timing is not provided by the camera, and must be reconstructed based on line sync information embedded in the analog video signal. Using a technique based on hot pixels, I present evidence that jitter, sync detection, and other timing errors introduce both position and intensity errors which are not present in cameras which internally digitize their sensors and output the digital data directly.
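
    A toy illustration of the hot-pixel technique: a defect at a fixed native column appears at slightly different reconstructed columns from frame to frame, and the scatter of its measured position estimates the digitizer's horizontal jitter (synthetic data, not the paper's measurements):

        import numpy as np

        rng = np.random.default_rng(2)
        true_col = 300                      # hot pixel's native sensor column
        frames = np.zeros((50, 480, 640))
        jitter = rng.normal(0, 0.7, 50)     # per-frame horizontal timing error
        for i, f in enumerate(frames):
            # line resampling shifts the hot pixel's apparent column slightly
            col = int(round(true_col + jitter[i]))
            f[240, col] = 1.0

        # Estimated jitter: scatter of the hot pixel's measured column
        cols = [np.unravel_index(np.argmax(f), f.shape)[1] for f in frames]
        print("std of apparent column:", np.std(cols))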

  15. Toward high-resolution optoelectronic retinal prosthesis

    NASA Astrophysics Data System (ADS)

    Palanker, Daniel; Huie, Philip; Vankov, Alexander; Asher, Alon; Baccus, Steven

    2005-04-01

    It has been already demonstrated that electrical stimulation of retina can produce visual percepts in blind patients suffering from macular degeneration and retinitis pigmentosa. Current retinal implants provide very low resolution (just a few electrodes), while several thousand pixels are required for functional restoration of sight. We present a design of the optoelectronic retinal prosthetic system that can activate a retinal stimulating array with pixel density up to 2,500 pix/mm2 (geometrically corresponding to a visual acuity of 20/80), and allows for natural eye scanning rather than scanning with a head-mounted camera. The system operates similarly to "virtual reality" imaging devices used in military and medical applications. An image from a video camera is projected by a goggle-mounted infrared LED-LCD display onto the retina, activating an array of powered photodiodes in the retinal implant. Such a system provides a broad field of vision by allowing for natural eye scanning. The goggles are transparent to visible light, thus allowing for simultaneous utilization of remaining natural vision along with prosthetic stimulation. Optical control of the implant allows for simple adjustment of image processing algorithms and for learning. A major prerequisite for high resolution stimulation is the proximity of neural cells to the stimulation sites. This can be achieved with sub-retinal implants constructed in a manner that directs migration of retinal cells to target areas. Two basic implant geometries are described: perforated membranes and protruding electrode arrays. Possibility of the tactile neural stimulation is also examined.

  16. State of the art in video system performance

    NASA Technical Reports Server (NTRS)

    Lewis, Michael J.

    1990-01-01

    The closed-circuit television (CCTV) system onboard the Space Shuttle comprises a camera, a video signal switching and routing unit (VSU), and a Space Shuttle video tape recorder. However, this system is inadequate for use with many experiments that require video imaging. In order to assess the state of the art in video technology and data storage systems, a survey was conducted of the High Resolution, High Frame Rate Video Technology (HHVT) products. The performance of state-of-the-art solid state cameras and image sensors, video recording systems, data transmission devices, and data storage systems is shown graphically against users' requirements.

  17. Pettit holds cameras in the U.S. Laboratory

    NASA Image and Video Library

    2012-01-15

    ISS030-E-175788 (15 Jan. 2012) --- NASA astronaut Don Pettit, Expedition 30 flight engineer, is pictured with two still cameras mounted together in the Destiny laboratory of the International Space Station. One camera is an infrared modified still camera.

  18. Constrained space camera assembly

    DOEpatents

    Heckendorn, F.M.; Anderson, E.K.; Robinson, C.W.; Haynes, H.B.

    1999-05-11

    A constrained space camera assembly which is intended to be lowered through a hole into a tank, a borehole or another cavity is disclosed. The assembly includes a generally cylindrical chamber comprising a head and a body and a wiring-carrying conduit extending from the chamber. Means are included in the chamber for rotating the body about the head without breaking an airtight seal formed therebetween. The assembly may be pressurized and accompanied with a pressure sensing means for sensing if a breach has occurred in the assembly. In one embodiment, two cameras, separated from their respective lenses, are installed on a mounting apparatus disposed in the chamber. The mounting apparatus includes means allowing both longitudinal and lateral movement of the cameras. Moving the cameras longitudinally focuses the cameras, and moving the cameras laterally away from one another effectively converges the cameras so that close objects can be viewed. The assembly further includes means for moving lenses of different magnification forward of the cameras. 17 figs.

  19. Beats: Video Monitors and Cameras.

    ERIC Educational Resources Information Center

    Worth, Frazier

    1996-01-01

    Presents a method to teach the concept of beats as a generalized phenomenon rather than teaching it only in the context of sound. Involves using a video camera to film a computer terminal, 16-mm projector, or TV monitor. (JRH)

  20. The use of head-mounted display eyeglasses for teaching surgical skills: A prospective randomised study.

    PubMed

    Peden, Robert G; Mercer, Rachel; Tatham, Andrew J

    2016-10-01

    To investigate whether 'surgeon's eye view' videos provided via head-mounted displays can improve skill acquisition and satisfaction in basic surgical training compared with conventional wet-lab teaching. A prospective randomised study of 14 medical students with no prior suturing experience, randomised to 3 groups: 1) conventional teaching; 2) head-mounted display-assisted teaching and 3) head-mounted display self-learning. All were instructed in interrupted suturing followed by 15 minutes' practice. Head-mounted displays provided a 'surgeon's eye view' video demonstrating the technique, available during practice. Subsequently students undertook a practical assessment, where suturing was videoed and graded by masked assessors using a 10-point surgical skill score (1 = very poor technique, 10 = very good technique). Students completed a questionnaire assessing confidence and satisfaction. Suturing ability after teaching was similar between groups (P = 0.229, Kruskal-Wallis test). Median surgical skill scores were 7.5 (range 6-10), 6 (range 3-8) and 7 (range 1-7) following head-mounted display-assisted teaching, conventional teaching, and head-mounted display self-learning respectively. There was good agreement between graders regarding surgical skill scores (ρc = 0.599, r = 0.603), and no difference in number of sutures placed between groups (P = 0.120). The head-mounted display-assisted teaching group reported greater enjoyment than those attending conventional teaching (P = 0.033). Head-mounted display self-learning was regarded as least useful (7.4 vs 9.0 for conventional teaching, P = 0.021), but more enjoyable than conventional teaching (9.6 vs 8.0, P = 0.050). Teaching augmented with head-mounted displays was significantly more enjoyable than conventional teaching. Students undertaking self-directed learning using head-mounted displays with pre-recorded videos had comparable skill acquisition to those attending traditional wet-lab tutorials. Copyright © 2016 IJS Publishing Group Ltd. Published by Elsevier Ltd. All rights reserved.

  1. Improving Photometric Calibration of Meteor Video Camera Systems.

    PubMed

    Ehlert, Steven; Kingery, Aaron; Suggs, Robert

    2017-09-01

    We present the results of new calibration tests performed by the NASA Meteoroid Environment Office (MEO) designed to help quantify and minimize systematic uncertainties in meteor photometry from video camera observations. These systematic uncertainties can be categorized by two main sources: an imperfect understanding of the linearity correction for the MEO's Watec 902H2 Ultimate video cameras and uncertainties in meteor magnitudes arising from transformations between the Watec camera's Sony EX-View HAD bandpass and the bandpasses used to determine reference star magnitudes. To address the first point, we have measured the linearity response of the MEO's standard meteor video cameras using two independent laboratory tests on eight cameras. Our empirically determined linearity correction is critical for performing accurate photometry at low camera intensity levels. With regards to the second point, we have calculated synthetic magnitudes in the EX bandpass for reference stars. These synthetic magnitudes enable direct calculations of the meteor's photometric flux within the camera band pass without requiring any assumptions of its spectral energy distribution. Systematic uncertainties in the synthetic magnitudes of individual reference stars are estimated at ∼ 0.20 mag, and are limited by the available spectral information in the reference catalogs. These two improvements allow for zero-points accurate to ∼ 0.05 - 0.10 mag in both filtered and unfiltered camera observations with no evidence for lingering systematics. These improvements are essential to accurately measuring photometric masses of individual meteors and source mass indexes.
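
    As an illustrative aside (a sketch under stated assumptions, not the MEO pipeline), the zero-point calibration the authors describe amounts to comparing instrumental fluxes of field stars against their synthetic bandpass magnitudes:

        import numpy as np

        def zero_point(star_counts, star_mags):
            """Median photometric zero-point from reference stars.

            star_counts: background-subtracted, linearity-corrected fluxes
            star_mags:   synthetic magnitudes in the camera bandpass
            """
            return np.median(star_mags + 2.5 * np.log10(star_counts))

        def meteor_magnitude(meteor_counts, zp):
            # Standard instrumental-magnitude relation; counts must use the
            # same linearity correction as the reference stars.
            return zp - 2.5 * np.log10(meteor_counts)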

  2. Improving Photometric Calibration of Meteor Video Camera Systems

    NASA Technical Reports Server (NTRS)

    Ehlert, Steven; Kingery, Aaron; Suggs, Robert

    2017-01-01

    We present the results of new calibration tests performed by the NASA Meteoroid Environment Office (MEO) designed to help quantify and minimize systematic uncertainties in meteor photometry from video camera observations. These systematic uncertainties can be categorized by two main sources: an imperfect understanding of the linearity correction for the MEO's Watec 902H2 Ultimate video cameras and uncertainties in meteor magnitudes arising from transformations between the Watec camera's Sony EX-View HAD bandpass and the bandpasses used to determine reference star magnitudes. To address the first point, we have measured the linearity response of the MEO's standard meteor video cameras using two independent laboratory tests on eight cameras. Our empirically determined linearity correction is critical for performing accurate photometry at low camera intensity levels. With regards to the second point, we have calculated synthetic magnitudes in the EX bandpass for reference stars. These synthetic magnitudes enable direct calculations of the meteor's photometric flux within the camera bandpass without requiring any assumptions of its spectral energy distribution. Systematic uncertainties in the synthetic magnitudes of individual reference stars are estimated at approx. 0.20 mag, and are limited by the available spectral information in the reference catalogs. These two improvements allow for zero-points accurate to 0.05 - 0.10 mag in both filtered and unfiltered camera observations with no evidence for lingering systematics. These improvements are essential to accurately measuring photometric masses of individual meteors and source mass indexes.

  3. Architecture and Protocol of a Semantic System Designed for Video Tagging with Sensor Data in Mobile Devices

    PubMed Central

    Macias, Elsa; Lloret, Jaime; Suarez, Alvaro; Garcia, Miguel

    2012-01-01

    Current mobile phones come with several sensors and powerful video cameras. These video cameras can be used to capture good quality scenes, which can be complemented with the information gathered by the sensors also embedded in the phones. For example, the surroundings of a beach recorded by the camera of the mobile phone, together with the temperature at the site, can let users know via the Internet whether the weather is nice enough to swim. In this paper, we present a system that tags the video frames of the video recorded from mobile phones with the data collected by the embedded sensors. The tagged video is uploaded to a video server, which is placed on the Internet and is accessible by any user. The proposed system uses a semantic approach with the stored information in order to make video searches easy and efficient. Our experimental results show that it is possible to tag video frames in real time and send the tagged video to the server with very low packet delay variations. As far as we know, no other application like the one presented in this paper has been developed. PMID:22438753
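
    A minimal sketch of the tagging idea (illustrative only; the field names and sensor-reading mechanism are assumptions, not the paper's protocol): each captured frame index is paired with the most recent sensor readings and serialized for upload alongside the video.

        import json
        import time

        def tag_frame(frame_index, sensors):
            """Build one frame tag from the latest sensor readings.

            sensors: dict of readings taken near the frame's capture time,
            e.g. {"lat": 28.1, "lon": -15.4, "temp_c": 24.5}.
            """
            return {"frame": frame_index, "t": time.time(), **sensors}

        # Tags can be streamed to the video server as JSON lines.
        tags = [tag_frame(i, {"lat": 28.1, "lon": -15.4, "temp_c": 24.5})
                for i in range(3)]
        print("\n".join(json.dumps(t) for t in tags))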

  4. Architecture and protocol of a semantic system designed for video tagging with sensor data in mobile devices.

    PubMed

    Macias, Elsa; Lloret, Jaime; Suarez, Alvaro; Garcia, Miguel

    2012-01-01

    Current mobile phones come with several sensors and powerful video cameras. These video cameras can be used to capture good quality scenes, which can be complemented with the information gathered by the sensors also embedded in the phones. For example, the surroundings of a beach recorded by the camera of the mobile phone, together with the temperature at the site, can let users know via the Internet whether the weather is nice enough to swim. In this paper, we present a system that tags the video frames of the video recorded from mobile phones with the data collected by the embedded sensors. The tagged video is uploaded to a video server, which is placed on the Internet and is accessible by any user. The proposed system uses a semantic approach with the stored information in order to make video searches easy and efficient. Our experimental results show that it is possible to tag video frames in real time and send the tagged video to the server with very low packet delay variations. As far as we know, no other application like the one presented in this paper has been developed.

  5. Augmented-reality visualization of brain structures with stereo and kinetic depth cues: system description and initial evaluation with head phantom

    NASA Astrophysics Data System (ADS)

    Maurer, Calvin R., Jr.; Sauer, Frank; Hu, Bo; Bascle, Benedicte; Geiger, Bernhard; Wenzel, Fabian; Recchi, Filippo; Rohlfing, Torsten; Brown, Christopher R.; Bakos, Robert J.; Maciunas, Robert J.; Bani-Hashemi, Ali R.

    2001-05-01

    We are developing a video see-through head-mounted display (HMD) augmented reality (AR) system for image-guided neurosurgical planning and navigation. The surgeon wears an HMD that presents him with the augmented stereo view. The HMD is custom fitted with two miniature color video cameras that capture a stereo view of the real-world scene. We are concentrating specifically at this point on cranial neurosurgery, so the images will be of the patient's head. A third video camera, operating in the near infrared, is also attached to the HMD and is used for head tracking. The pose (i.e., position and orientation) of the HMD is used to determine where to overlay anatomic structures segmented from preoperative tomographic images (e.g., CT, MR) on the intraoperative video images. Two SGI 540 Visual Workstation computers process the three video streams and render the augmented stereo views for display on the HMD. The AR system operates in real time at 30 frames/sec with a temporal latency of about three frames (100 ms) and zero relative lag between the virtual objects and the real-world scene. For an initial evaluation of the system, we created AR images using a head phantom with actual internal anatomic structures (segmented from CT and MR scans of a patient) realistically positioned inside the phantom. When using shaded renderings, many users had difficulty appreciating overlaid brain structures as being inside the head. When using wire frames and texture-mapped dot patterns, most users correctly visualized brain anatomy as being internal and could generally appreciate spatial relationships among various objects. The 3D perception of these structures is based on both stereoscopic depth cues and kinetic depth cues, with the user looking at the head phantom from varying positions. The perception of the augmented visualization is natural and convincing. The brain structures appear rigidly anchored in the head, manifesting little or no apparent swimming or jitter. The initial evaluation of the system is encouraging, and we believe that AR visualization might become an important tool for image-guided neurosurgical planning and navigation.
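
    The central overlay step, given the tracked pose, is a standard 3D-to-2D projection. The following is a hedged sketch (intrinsics and point format are assumptions; the actual system renders full stereo views on SGI workstations):

        import cv2
        import numpy as np

        def overlay_structure(frame, pts3d, rvec, tvec, K, dist):
            """Draw segmented 3D structure points onto one video frame.

            pts3d: Nx3 points in the patient coordinate frame; rvec/tvec is
            the tracked camera pose; K/dist are the camera intrinsics.
            """
            pts2d, _ = cv2.projectPoints(pts3d.astype(np.float32),
                                         rvec, tvec, K, dist)
            for (u, v) in pts2d.reshape(-1, 2):
                if 0 <= u < frame.shape[1] and 0 <= v < frame.shape[0]:
                    cv2.circle(frame, (int(u), int(v)), 2, (0, 255, 0), -1)
            return frame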

  6. Calibration of Action Cameras for Photogrammetric Purposes

    PubMed Central

    Balletti, Caterina; Guerra, Francesco; Tsioukas, Vassilios; Vernier, Paolo

    2014-01-01

    The use of action cameras for photogrammetry purposes is not widespread due to the fact that until recently the images provided by the sensors, using either still or video capture mode, were not big enough to perform and provide the appropriate analysis with the necessary photogrammetric accuracy. However, several manufacturers have recently produced and released new lightweight devices which are: (a) easy to handle, (b) capable of performing under extreme conditions and more importantly (c) able to provide both still images and video sequences of high resolution. In order to be able to use the sensor of action cameras we must apply a careful and reliable self-calibration prior to the use of any photogrammetric procedure, a relatively difficult scenario because of the short focal length of the camera and its wide angle lens that is used to obtain the maximum possible resolution of images. Special software, using functions of the OpenCV library, has been created to perform both the calibration and the production of undistorted scenes for each one of the still and video image capturing mode of a novel action camera, the GoPro Hero 3 camera that can provide still images up to 12 Mp and video up 8 Mp resolution. PMID:25237898
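
    Since the authors state that their software builds on OpenCV, the calibration and undistortion pipeline can be sketched roughly as below. This is a generic checkerboard example, not their code; the pattern size and file names are placeholders.

        import cv2
        import numpy as np

        PATTERN = (9, 6)  # assumed inner-corner counts of the checkerboard
        objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
        objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2)

        obj_pts, img_pts, size = [], [], None
        for path in ["calib_000.jpg", "calib_001.jpg"]:  # calibration shots
            gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
            size = gray.shape[::-1]
            found, corners = cv2.findChessboardCorners(gray, PATTERN)
            if found:
                obj_pts.append(objp)
                img_pts.append(corners)

        rms, K, dist, _, _ = cv2.calibrateCamera(obj_pts, img_pts, size,
                                                 None, None)

        # Undistorted scenes for stills or decoded video frames:
        undistorted = cv2.undistort(cv2.imread("scene.jpg"), K, dist)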

  7. Calibration of action cameras for photogrammetric purposes.

    PubMed

    Balletti, Caterina; Guerra, Francesco; Tsioukas, Vassilios; Vernier, Paolo

    2014-09-18

    The use of action cameras for photogrammetry purposes is not widespread due to the fact that until recently the images provided by the sensors, using either still or video capture mode, were not big enough to perform and provide the appropriate analysis with the necessary photogrammetric accuracy. However, several manufacturers have recently produced and released new lightweight devices which are: (a) easy to handle, (b) capable of performing under extreme conditions and more importantly (c) able to provide both still images and video sequences of high resolution. In order to be able to use the sensor of action cameras we must apply a careful and reliable self-calibration prior to the use of any photogrammetric procedure, a relatively difficult scenario because of the short focal length of the camera and its wide angle lens that is used to obtain the maximum possible resolution of images. Special software, using functions of the OpenCV library, has been created to perform both the calibration and the production of undistorted scenes for each one of the still and video image capturing mode of a novel action camera, the GoPro Hero 3 camera that can provide still images up to 12 Mp and video up 8 Mp resolution.

  8. Audiovisual quality estimation of mobile phone video cameras with interpretation-based quality approach

    NASA Astrophysics Data System (ADS)

    Radun, Jenni E.; Virtanen, Toni; Olives, Jean-Luc; Vaahteranoksa, Mikko; Vuori, Tero; Nyman, Göte

    2007-01-01

    We present an effective method for comparing subjective audiovisual quality and the features related to the quality changes of different video cameras. Both quantitative estimation of overall quality and qualitative description of critical quality features are achieved by the method. The aim was to combine two image quality evaluation methods, the quantitative Absolute Category Rating (ACR) method with hidden reference removal and the qualitative Interpretation-Based Quality (IBQ) method, in order to see how they complement each other in audiovisual quality estimation tasks. Twenty-six observers estimated the audiovisual quality of six different cameras, mainly mobile phone video cameras. In order to achieve an efficient subjective estimation of audiovisual quality, only two contents with different quality requirements were recorded with each camera. The results show that the subjectively important quality features were more related to the overall estimations of cameras' visual video quality than to the features related to sound. The data demonstrated two significant quality dimensions related to visual quality: darkness and sharpness. We conclude that the qualitative methodology can complement quantitative quality estimations with audiovisual material as well. The IBQ approach is especially valuable when the induced quality changes are multidimensional.
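
    For readers unfamiliar with the quantitative side, ACR with hidden reference removal reduces to a differential score per observer. The sketch below follows the usual ACR-HR formulation (as standardized in ITU-T P.910); it is an illustration, not the authors' analysis code.

        import numpy as np

        def acr_hr_dmos(test_scores, ref_scores):
            """Differential mean opinion score for ACR-HR.

            test_scores / ref_scores: per-observer 5-point ACR ratings of
            the test clip and of the hidden reference from the same
            session; the +5 offset keeps scores on the familiar 1..5 scale.
            """
            dv = np.asarray(test_scores) - np.asarray(ref_scores) + 5
            return dv.mean()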

  9. What do we do with all this video? Better understanding public engagement for image and video annotation

    NASA Astrophysics Data System (ADS)

    Wiener, C.; Miller, A.; Zykov, V.

    2016-12-01

    Advanced robotic vehicles are increasingly being used by oceanographic research vessels to enable more efficient and widespread exploration of the ocean, particularly the deep ocean. With cutting-edge capabilities mounted onto robotic vehicles, data at high resolutions is being generated more than ever before, enabling enhanced data collection and the potential for broader participation. For example, high resolution camera technology not only improves visualization of the ocean environment, but also expands the capacity to engage participants remotely through increased use of telepresence and virtual reality techniques. Schmidt Ocean Institute is a private, non-profit operating foundation established to advance the understanding of the world's oceans through technological advancement, intelligent observation and analysis, and open sharing of information. Telepresence-enabled research is an important component of Schmidt Ocean Institute's science research cruises, which this presentation will highlight. Schmidt Ocean Institute is one of the only research programs that make their entire underwater vehicle dive series available online, creating a collection of video that enables anyone to follow deep sea research in real time. We encourage students, educators and the general public to take advantage of freely available dive videos. Additionally, other SOI-supported internet platforms have engaged the public in image and video annotation activities. Examples of these new online platforms, which utilize citizen scientists to annotate scientific image and video data, will be provided. This presentation will include an introduction to SOI-supported video and image tagging citizen science projects, real-time robot tracking, live ship-to-shore communications, and an array of outreach activities that enable scientists to interact with the public and explore the ocean in fascinating detail.

  10. CVD2014-A Database for Evaluating No-Reference Video Quality Assessment Algorithms.

    PubMed

    Nuutinen, Mikko; Virtanen, Toni; Vaahteranoksa, Mikko; Vuori, Tero; Oittinen, Pirkko; Hakkinen, Jukka

    2016-07-01

    In this paper, we present a new video database: CVD2014-Camera Video Database. In contrast to previous video databases, this database uses real cameras rather than introducing distortions via post-processing, which results in a complex distortion space in regard to the video acquisition process. CVD2014 contains a total of 234 videos that are recorded using 78 different cameras. Moreover, this database contains the observer-specific quality evaluation scores rather than only providing mean opinion scores. We have also collected open-ended quality descriptions that are provided by the observers. These descriptions were used to define the quality dimensions for the videos in CVD2014. The dimensions included sharpness, graininess, color balance, darkness, and jerkiness. At the end of this paper, a performance study of image and video quality algorithms for predicting the subjective video quality is reported. For this performance study, we proposed a new performance measure that accounts for observer variance. The performance study revealed that there is room for improvement regarding the video quality assessment algorithms. The CVD2014 video database has been made publicly available for the research community. All video sequences and corresponding subjective ratings can be obtained from the CVD2014 project page (http://www.helsinki.fi/psychology/groups/visualcognition/).

  11. Joint Video Stitching and Stabilization from Moving Cameras.

    PubMed

    Guo, Heng; Liu, Shuaicheng; He, Tong; Zhu, Shuyuan; Zeng, Bing; Gabbouj, Moncef

    2016-09-08

    In this paper, we extend image stitching to video stitching for videos that are captured for the same scene simultaneously by multiple moving cameras. In practice, videos captured under this circumstance often appear shaky. Directly applying image stitching methods to shaking videos often suffers from strong spatial and temporal artifacts. To solve this problem, we propose a unified framework in which video stitching and stabilization are performed jointly. Specifically, our system takes several overlapping videos as inputs. We estimate both inter motions (between different videos) and intra motions (between neighboring frames within a video). Then, we solve an optimal virtual 2D camera path from all original paths. An enlarged field of view along the virtual path is finally obtained by a space-temporal optimization that takes both inter and intra motions into consideration. Two important components of this optimization are that (1) a grid-based tracking method is designed for improved robustness, which produces features that are distributed evenly within and across multiple views, and (2) a mesh-based motion model is adopted for the handling of the scene parallax. Experimental results demonstrate the effectiveness of our approach on various consumer-level videos, and a plugin named "Video Stitcher" has been developed for Adobe After Effects CC2015 to show the processed videos.
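
    The inter/intra motion estimation the authors describe is typically built from feature matching plus a robust homography fit. A minimal sketch using generic OpenCV components (not the paper's grid-based tracker):

        import cv2
        import numpy as np

        def pairwise_motion(frame_a, frame_b):
            """Homography mapping frame_a onto frame_b.

            The same routine can serve inter motions (between videos) and
            intra motions (between neighboring frames of one video).
            """
            orb = cv2.ORB_create(2000)
            ka, da = orb.detectAndCompute(frame_a, None)
            kb, db = orb.detectAndCompute(frame_b, None)
            matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
            matches = matcher.match(da, db)
            src = np.float32([ka[m.queryIdx].pt for m in matches])
            dst = np.float32([kb[m.trainIdx].pt for m in matches])
            H, _ = cv2.findHomography(src.reshape(-1, 1, 2),
                                      dst.reshape(-1, 1, 2),
                                      cv2.RANSAC, 3.0)
            return H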

  12. A Comparison of Techniques for Camera Selection and Hand-Off in a Video Network

    NASA Astrophysics Data System (ADS)

    Li, Yiming; Bhanu, Bir

    Video networks are becoming increasingly important for solving many real-world problems. Multiple video sensors require collaboration when performing various tasks. One of the most basic tasks is the tracking of objects, which requires mechanisms to select a camera for a certain object and hand-off this object from one camera to another so as to accomplish seamless tracking. In this chapter, we provide a comprehensive comparison of current and emerging camera selection and hand-off techniques. We consider geometry-, statistics-, and game theory-based approaches and provide both theoretical and experimental comparison using centralized and distributed computational models. We provide simulation and experimental results using real data for various scenarios of a large number of cameras and objects for in-depth understanding of strengths and weaknesses of these techniques.

  13. Results of the IMO Video Meteor Network - June 2017, and effective collection area study

    NASA Astrophysics Data System (ADS)

    Molau, Sirko; Crivello, Stefano; Goncalves, Rui; Saraiva, Carlos; Stomeo, Enrico; Kac, Javor

    2017-12-01

    Over 18000 meteors were recorded by the IMO Video Meteor Network cameras during more than 7100 hours of observing time during 2017 June. The June Bootids were not detectable this year. Nearly 50 Daytime Arietids were recorded in 2017, and a first flux density profile for this shower in the optical domain is calculated, using video data from the period 2011-2017. Effective collection area of video cameras is discussed in more detail.

  14. Repurposing video recordings for structure motion estimations

    NASA Astrophysics Data System (ADS)

    Khaloo, Ali; Lattanzi, David

    2016-04-01

    Video monitoring of public spaces is becoming increasingly ubiquitous, particularly near essential structures and facilities. During any hazard event that dynamically excites a structure, such as an earthquake or hurricane, proximal video cameras may inadvertently capture the motion time-history of the structure during the event. If this dynamic time-history could be extracted from the repurposed video recording it would become a valuable forensic analysis tool for engineers performing post-disaster structural evaluations. The difficulty is that almost all potential video cameras are not installed to monitor structure motions, leading to camera perspective distortions and other associated challenges. This paper presents a method for extracting structure motions from videos using a combination of computer vision techniques. Images from a video recording are first reprojected into synthetic images that eliminate perspective distortion, using as-built knowledge of a structure for calibration. The motion of the camera itself during an event is also considered. Optical flow, a technique for tracking per-pixel motion, is then applied to these synthetic images to estimate the building motion. The developed method was validated using the experimental records of the NEESHub earthquake database. The results indicate that the technique is capable of estimating structural motions, particularly the frequency content of the response. Further work will evaluate variants and alternatives to the optical flow algorithm, as well as study the impact of video encoding artifacts on motion estimates.
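
    The per-pixel motion tracking step the authors apply to the perspective-corrected frames can be illustrated with dense optical flow. A hedged sketch (Farneback flow is one common choice; the parameters are illustrative):

        import cv2

        def mean_motion(prev_gray, next_gray):
            """Mean image motion between consecutive synthetic frames."""
            flow = cv2.calcOpticalFlowFarneback(
                prev_gray, next_gray, None, pyr_scale=0.5, levels=3,
                winsize=15, iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
            return flow[..., 0].mean(), flow[..., 1].mean()

        # Applying this over the whole clip yields a motion time-history;
        # an FFT of that series then exposes the frequency content of the
        # structural response.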

  15. Fluorescence endoscopic video system

    NASA Astrophysics Data System (ADS)

    Papayan, G. V.; Kang, Uk

    2006-10-01

    This paper describes a fluorescence endoscopic video system intended for the diagnosis of diseases of the internal organs. The system operates on the basis of two-channel recording of the video fluxes from a fluorescence channel and a reflected-light channel by means of a high-sensitivity monochrome television camera and a color camera, respectively. Examples are given of the application of the device in gastroenterology.

  16. A multiple camera tongue switch for a child with severe spastic quadriplegic cerebral palsy.

    PubMed

    Leung, Brian; Chau, Tom

    2010-01-01

    The present study proposed a video-based access technology that facilitated a non-contact tongue protrusion access modality for a 7-year-old boy with severe spastic quadriplegic cerebral palsy (GMFCS level 5). The proposed system featured a centre camera and two peripheral cameras to extend coverage of the frontal face view of this user for longer durations. The child participated in a descriptive case study. The participant underwent 3 months of tongue protrusion training while the multiple camera tongue switch prototype was being prepared. Later, the participant was brought back for five experiment sessions where he worked on a single-switch picture matching activity, using the multiple camera tongue switch prototype in a controlled environment. The multiple camera tongue switch achieved an average sensitivity of 82% and specificity of 80%. In three of the experiment sessions, the peripheral cameras were associated with most of the true positive switch activations. These activations would have been missed by a centre-camera-only setup. The study demonstrated proof-of-concept of a non-contact tongue access modality implemented by a video-based system involving three cameras and colour video processing.

  17. DETAIL VIEW OF A VIDEO CAMERA POSITIONED ALONG THE PERIMETER ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    DETAIL VIEW OF A VIDEO CAMERA POSITIONED ALONG THE PERIMETER OF THE MLP - Cape Canaveral Air Force Station, Launch Complex 39, Mobile Launcher Platforms, Launcher Road, East of Kennedy Parkway North, Cape Canaveral, Brevard County, FL

  18. Opportunistic traffic sensing using existing video sources (phase II).

    DOT National Transportation Integrated Search

    2017-02-01

    The purpose of the project reported on here was to investigate methods for automatic traffic sensing using traffic surveillance : cameras, red light cameras, and other permanent and pre-existing video sources. Success in this direction would potentia...

  19. First stereo video dataset with ground truth for remote car pose estimation using satellite markers

    NASA Astrophysics Data System (ADS)

    Gil, Gustavo; Savino, Giovanni; Pierini, Marco

    2018-04-01

    Leading causes of PTW (Powered Two-Wheeler) crashes and near misses in urban areas involve failed or delayed prediction of the changing trajectories of other vehicles. Regrettably, misperception by both car drivers and motorcycle riders results in fatal or serious consequences for riders. Intelligent vehicles could provide early warning about possible collisions, helping to avoid the crash. There is evidence that stereo cameras can be used for estimating the heading angle of other vehicles, which is key to anticipating their imminent location, but limited heading ground truth data is available in the public domain. Consequently, we employed a marker-based technique for creating ground truth of car pose and created a dataset for computer vision benchmarking purposes. This dataset of a moving vehicle, collected from a statically mounted stereo camera, is a simplification of a complex and dynamic reality, and serves as a test bed for car pose estimation algorithms. The dataset contains the accurate pose of the moving obstacle and realistic imagery including texture-less and non-Lambertian surfaces (e.g., reflectance and transparency).

  20. Motion sickness, console video games, and head-mounted displays.

    PubMed

    Merhi, Omar; Faugloire, Elise; Flanagan, Moira; Stoffregen, Thomas A

    2007-10-01

    We evaluated the nauseogenic properties of commercial console video games (i.e., games that are sold to the public) when presented through a head-mounted display. Anecdotal reports suggest that motion sickness may occur among players of contemporary commercial console video games. Participants played standard console video games using an Xbox game system. We varied the participants' posture (standing vs. sitting) and the game (two Xbox games). Participants played for up to 50 min and were asked to discontinue if they experienced any symptoms of motion sickness. Sickness occurred in all conditions, but it was more common during standing. During seated play there were significant differences in head motion between sick and well participants before the onset of motion sickness. The results indicate that commercial console video game systems can induce motion sickness when presented via a head-mounted display and support the hypothesis that motion sickness is preceded by instability in the control of seated posture. Potential applications of this research include changes in the design of console video games and recommendations for how such systems should be used.

  1. Hardware accelerator design for tracking in smart camera

    NASA Astrophysics Data System (ADS)

    Singh, Sanjay; Dunga, Srinivasa Murali; Saini, Ravi; Mandal, A. S.; Shekhar, Chandra; Vohra, Anil

    2011-10-01

    Smart cameras are important components in video analysis. For video analysis, a smart camera needs to detect interesting moving objects, track such objects from frame to frame, and analyze object tracks in real time. Real-time tracking is therefore a prominent requirement in smart cameras. A software implementation of the tracking algorithm on a general-purpose processor (such as a PowerPC) achieves only a low frame rate, far from real-time requirements. This paper presents a SIMD-based hardware accelerator designed for real-time tracking of objects in a scene. The system is designed and simulated in VHDL and implemented on a Xilinx XUP Virtex-II Pro FPGA. The resulting frame rate is 30 frames per second for 250x200-resolution grayscale video.
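
    For orientation, the detect-then-track loop that such an accelerator speeds up can be expressed in a few lines of software. This sketch uses generic frame differencing and centroid tracking (an assumption; the paper's exact algorithm runs as VHDL on the FPGA):

        import cv2

        def detect_and_track(prev_gray, cur_gray, thresh=25):
            """One step of a simple detect-then-track loop.

            Frame differencing flags moving pixels; the centroid of the
            largest blob gives the object position for the next frame.
            """
            diff = cv2.absdiff(cur_gray, prev_gray)
            _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
            contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                           cv2.CHAIN_APPROX_SIMPLE)
            if not contours:
                return None
            m = cv2.moments(max(contours, key=cv2.contourArea))
            if m["m00"] == 0:
                return None
            return m["m10"] / m["m00"], m["m01"] / m["m00"]  # (x, y)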

  2. A Robust Mechanical Sensing System for Unmanned Sea Surface Vehicles

    NASA Technical Reports Server (NTRS)

    Kulczycki, Eric A.; Magnone, Lee J.; Huntsberger, Terrance; Aghazarian, Hrand; Padgett, Curtis W.; Trotz, David C.; Garrett, Michael S.

    2009-01-01

    The need for autonomous navigation and intelligent control of unmanned sea surface vehicles requires a mechanically robust sensing architecture that is watertight, durable, and insensitive to vibration and shock loading. The sensing system developed here comprises four black and white cameras and a single color camera. The cameras are rigidly mounted to a camera bar that can be reconfigured to mount multiple vehicles, and act as both navigational cameras and application cameras. The cameras are housed in watertight casings to protect them and their electronics from moisture and wave splashes. Two of the black and white cameras are positioned to provide lateral vision. They are angled away from the front of the vehicle at horizontal angles to provide ideal fields of view for mapping and autonomous navigation. The other two black and white cameras are positioned at an angle into the color camera's field of view to support vehicle applications. These two cameras provide an overlap, as well as a backup to the front camera. The color camera is positioned directly in the middle of the bar, aimed straight ahead. This system is applicable to any sea-going vehicle, both on Earth and in space.

  3. Development of imaging bolometers for magnetic fusion reactors (invited)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peterson, Byron J.; Parchamy, Homaira; Ashikawa, Naoko

    2008-10-15

    Imaging bolometers utilize an infrared (IR) video camera to measure the change in temperature of a thin foil exposed to the plasma radiation, thereby avoiding the risks of conventional resistive bolometers related to electric cabling and vacuum feedthroughs in a reactor environment. A prototype of the IR imaging video bolometer (IRVB) has been installed and operated on the JT-60U tokamak demonstrating its applicability to a reactor environment and its ability to provide two-dimensional measurements of the radiation emissivity in a poloidal cross section. In this paper we review this development and present the first results of an upgraded version of this IRVB on JT-60U. This upgrade utilizes a state-of-the-art IR camera (FLIR/Indigo Phoenix-InSb) (3-5 μm, 256x360 pixels, 345 Hz, 11 mK) mounted in a neutron/gamma/magnetic shield behind a 3.6 m IR periscope consisting of CaF2 optics and an aluminum mirror. The IRVB foil is 7 cm x 9 cm x 5 μm tantalum. A noise equivalent power density of 300 μW/cm2 is achieved with 40x24 channels and a time response of 10 ms, or 23 μW/cm2 for 16x12 channels and a time response of 33 ms, which is 30 times better than the previous version of the IRVB on JT-60U.

  4. Video quality of 3G videophones for telephone cardiopulmonary resuscitation.

    PubMed

    Tränkler, Uwe; Hagen, Oddvar; Horsch, Alexander

    2008-01-01

    We simulated a cardiopulmonary resuscitation (CPR) scene with a manikin and used two 3G videophones on the caller's side to transmit video to a laptop PC. Five observers (two doctors with experience in emergency medicine and three paramedics) evaluated the video. They judged whether the manikin was breathing and whether they would give advice for CPR; they also graded the confidence of their decision-making. Breathing was only visible from certain orientations of the videophones, at distances below 150 cm with good illumination and a still background. Since the phones produced a degradation in colours and shadows, detection of breathing mainly depended on moving contours. Low camera positioning produced better results than having the camera high up. Darkness, shaking of the camera and a moving background made detection of breathing almost impossible. The video from the two 3G videophones that were tested was of sufficient quality for telephone CPR provided that camera orientation, distance, illumination and background were carefully chosen. Thus it seems possible to use 3G videophones for emergency calls involving CPR. However, further studies on the required video quality in different scenarios are necessary.

  5. A teledentistry system for the second opinion.

    PubMed

    Gambino, Orazio; Lima, Fausto; Pirrone, Roberto; Ardizzone, Edoardo; Campisi, Giuseppina; di Fede, Olga

    2014-01-01

    In this paper we present a teledentistry system aimed at the Second Opinion task. It makes use of a particular camera, called an intra-oral (or dental) camera, to capture photographs and real-time video of the inner part of the mouth. The pictures acquired by the Operator with such a device are sent to the Oral Medicine Expert (OME) by means of a standard File Transfer Protocol (FTP) service, and the real-time video is channeled into a video stream using the VideoLAN client/server (VLC) application. The system is composed of HTML5 web pages generated by PHP and allows the Second Opinion to be performed both when the Operator and the OME are logged in and when one of them is offline.

  6. DETAIL VIEW OF VIDEO CAMERA, MAIN FLOOR LEVEL, PLATFORM ESOUTH, ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    DETAIL VIEW OF VIDEO CAMERA, MAIN FLOOR LEVEL, PLATFORM E-SOUTH, HB-3, FACING SOUTHWEST - Cape Canaveral Air Force Station, Launch Complex 39, Vehicle Assembly Building, VAB Road, East of Kennedy Parkway North, Cape Canaveral, Brevard County, FL

  7. Optical stereo video signal processor

    NASA Technical Reports Server (NTRS)

    Craig, G. D. (Inventor)

    1985-01-01

    An optical video signal processor is described which produces a two-dimensional cross-correlation in real time of images received by a stereo camera system. The optical image of each camera is projected onto a respective liquid crystal light valve. The images on the liquid crystal valves modulate light produced by an extended light source. This modulated light output becomes the two-dimensional cross-correlation when focused onto a video detector and is a function of the range of a target with respect to the stereo camera. Alternate embodiments utilize the two-dimensional cross-correlation to determine target movement and target identification.
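
    A digital analogue of this optical correlator is two-dimensional phase correlation, where the correlation peak's horizontal offset approximates the stereo disparity and hence the range. A hedged sketch, assuming rectified, same-size grayscale images:

        import numpy as np

        def dominant_disparity(left, right):
            """Horizontal offset of the 2D cross-correlation peak."""
            F = np.fft.fft2(left) * np.conj(np.fft.fft2(right))
            corr = np.fft.ifft2(F / (np.abs(F) + 1e-9)).real
            dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
            if dx > left.shape[1] // 2:  # unwrap negative shifts
                dx -= left.shape[1]
            return dx

        # For a rectified rig, range follows as Z = f * B / d, with focal
        # length f in pixels, baseline B in meters, and disparity d.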

  8. The Effects of Radiation on Imagery Sensors in Space

    NASA Technical Reports Server (NTRS)

    Mathis, Dylan

    2007-01-01

    Recent experience using high definition video on the International Space Station reveals camera pixel degradation due to particle radiation to be a much more significant problem with high definition cameras than with standard definition video. Although it may at first appear that increased pixel density on the imager is the logical explanation for this, the ISS implementations of high definition suggest a more complex causal and mediating factor mix. The degree of damage seems to vary from one type of camera to another, and this variation prompts a reconsideration of the possible factors in pixel loss, such as imager size, number of pixels, pixel aperture ratio, imager type (CCD or CMOS), method of error correction/concealment, and the method of compression used for recording or transmission. The problem of imager pixel loss due to particle radiation is not limited to out-of-atmosphere applications. Since particle radiation increases with altitude, it is not surprising to find anecdotal evidence that video cameras subject to many hours of airline travel show an increased incidence of pixel loss. This is even evident in some standard definition video applications, and pixel loss due to particle radiation only stands to become a more salient issue considering the continued diffusion of high definition video cameras in the marketplace.

  9. Automated recording of home cage activity and temperature of individual rats housed in social groups: The Rodent Big Brother project

    PubMed Central

    Tse, Karen; Grant, Claire; Keerie, Amy; Simpson, David J.; Pedersen, John C.; Rimmer, Victoria; Leslie, Lauren; Klein, Stephanie K.; Karp, Natasha A.; Sillito, Rowland; Chartsias, Agis; Lukins, Tim; Heward, James; Vickers, Catherine; Chapman, Kathryn; Armstrong, J. Douglas

    2017-01-01

    Measuring the activity and temperature of rats is commonly required in biomedical research. Conventional approaches necessitate single housing, which affects their behavior and wellbeing. We have used a subcutaneous radiofrequency identification (RFID) transponder to measure ambulatory activity and temperature of individual rats when group-housed in conventional, rack-mounted home cages. The transponder location and temperature is detected by a matrix of antennae in a baseplate under the cage. An infrared high-definition camera acquires side-view video of the cage and also enables automated detection of vertical activity. Validation studies showed that baseplate-derived ambulatory activity correlated well with manual tracking and with side-view whole-cage video pixel movement. This technology enables individual behavioral and temperature data to be acquired continuously from group-housed rats in their familiar, home cage environment. We demonstrate its ability to reliably detect naturally occurring behavioral effects, extending beyond the capabilities of routine observational tests and conventional monitoring equipment. It has numerous potential applications including safety pharmacology, toxicology, circadian biology, disease models and drug discovery. PMID:28877172
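
    As a hedged illustration of how ambulatory activity can be derived from the baseplate's position fixes (a sketch, not the system's software): total path length is accumulated over successive transponder locations.

        import numpy as np

        def ambulatory_activity(positions):
            """Total path length from a sequence of (x, y) transponder fixes.

            positions: (n, 2) array of per-sample locations reported by the
            antenna baseplate for one rat's RFID transponder.
            """
            steps = np.diff(np.asarray(positions, dtype=float), axis=0)
            return np.hypot(steps[:, 0], steps[:, 1]).sum()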

  10. Feral Cats Are Better Killers in Open Habitats, Revealed by Animal-Borne Video.

    PubMed

    McGregor, Hugh; Legge, Sarah; Jones, Menna E; Johnson, Christopher N

    2015-01-01

    One of the key gaps in understanding the impacts of predation by small mammalian predators on prey is how habitat structure affects the hunting success of small predators, such as feral cats. These effects are poorly understood due to the difficulty of observing actual hunting behaviours. We attached collar-mounted video cameras to feral cats living in a tropical savanna environment in northern Australia, and measured variation in hunting success among different microhabitats (open areas, dense grass and complex rocks). From 89 hours of footage, we recorded 101 hunting events, of which 32 were successful. Of these kills, 28% were not eaten. Hunting success was highly dependent on microhabitat structure surrounding prey, increasing from 17% in habitats with dense grass or complex rocks to 70% in open areas. This research shows that habitat structure has a profound influence on the impacts of small predators on their prey. This has broad implications for management of vegetation and disturbance processes (like fire and grazing) in areas where feral cats threaten native fauna. Maintaining complex vegetation cover can reduce predation rates of small prey species from feral cat predation.

  11. Automated recording of home cage activity and temperature of individual rats housed in social groups: The Rodent Big Brother project.

    PubMed

    Redfern, William S; Tse, Karen; Grant, Claire; Keerie, Amy; Simpson, David J; Pedersen, John C; Rimmer, Victoria; Leslie, Lauren; Klein, Stephanie K; Karp, Natasha A; Sillito, Rowland; Chartsias, Agis; Lukins, Tim; Heward, James; Vickers, Catherine; Chapman, Kathryn; Armstrong, J Douglas

    2017-01-01

    Measuring the activity and temperature of rats is commonly required in biomedical research. Conventional approaches necessitate single housing, which affects their behavior and wellbeing. We have used a subcutaneous radiofrequency identification (RFID) transponder to measure ambulatory activity and temperature of individual rats when group-housed in conventional, rack-mounted home cages. The transponder location and temperature is detected by a matrix of antennae in a baseplate under the cage. An infrared high-definition camera acquires side-view video of the cage and also enables automated detection of vertical activity. Validation studies showed that baseplate-derived ambulatory activity correlated well with manual tracking and with side-view whole-cage video pixel movement. This technology enables individual behavioral and temperature data to be acquired continuously from group-housed rats in their familiar, home cage environment. We demonstrate its ability to reliably detect naturally occurring behavioral effects, extending beyond the capabilities of routine observational tests and conventional monitoring equipment. It has numerous potential applications including safety pharmacology, toxicology, circadian biology, disease models and drug discovery.

  12. The Automatically Triggered Video or Imaging Station (ATVIS): An Inexpensive Way to Catch Geomorphic Events on Camera

    NASA Astrophysics Data System (ADS)

    Wickert, A. D.

    2010-12-01

    To understand how single events can affect landscape change, we must catch the landscape in the act. Direct observations are rare and often dangerous. While video is a good alternative, commercially-available video systems for field installation cost ~$11,000, weigh ~100 pounds (45 kg), and shoot 640x480 pixel video at 4 frames per second. This is the same resolution as a cheap point-and-shoot camera, with a frame rate that is nearly an order of magnitude worse. To overcome these limitations of resolution, cost, and portability, I designed and built a new observation station. This system, called ATVIS (Automatically Triggered Video or Imaging Station), costs $450-500 and weighs about 15 pounds. It can take roughly 3 hours of 1280x720 pixel video, 6.5 hours of 640x480 video, or 98,000 1600x1200 pixel photos (one photo every 7 seconds for 8 days). The design calls for a simple Canon point-and-shoot camera fitted with custom firmware that allows 5V pulses through its USB cable to trigger it to take a picture or to initiate or stop video recording. These pulses are provided by a programmable microcontroller that can take input from either sensors or a data logger. The design is easily modifiable to a variety of camera and sensor types, and can also be used for continuous time-lapse imagery. We currently have prototypes set up at a gully near West Bijou Creek on the Colorado high plains and at tributaries to Marble Canyon in northern Arizona. Hopefully, a relatively inexpensive and portable system such as this will allow geomorphologists to supplement sensor networks with photo or video monitoring and allow them to see, and better quantify, the fantastic array of processes that modify landscapes as they unfold.
    Figure caption: Camera station set up at Badger Canyon, Arizona. Inset: view into box. Clockwise from bottom right: camera, microcontroller (blue), DC converter (red), solar charge controller, 12V battery. Materials and installation assistance courtesy of Ron Griffiths and the USGS Grand Canyon Monitoring and Research Center.

  13. The use of digital imaging, video conferencing, and telepathology in histopathology: a national survey

    PubMed Central

    Dennis, T; Start, R D; Cross, S S

    2005-01-01

    Aims: To undertake a large scale survey of histopathologists in the UK to determine the current infrastructure, training, and attitudes to digital pathology. Methods: A postal questionnaire was sent to 500 consultant histopathologists randomly selected from the membership of the Royal College of Pathologists in the UK. Results: There was a response rate of 47%. Sixty four per cent of respondents had a digital camera mounted on their microscope, but only 12% had any sort of telepathology equipment. Thirty per cent used digital images in electronic presentations at meetings at least once a year and only 24% had ever used telepathology in a diagnostic situation. Fifty nine per cent had received no training in digital imaging. Fifty eight per cent felt that the medicolegal implications of duty of care were a barrier to its use. A large proportion of pathologists (69%) were interested in using video conferencing for remote attendance at multidisciplinary team meetings. Conclusions: There is a reasonable level of equipment and communications infrastructure among histopathologists in the UK but a very low level of training. There is resistance to the use of telepathology in the diagnostic context but enthusiasm for the use of video conferencing in multidisciplinary team meetings. PMID:15735155

  14. Analyzing at-home prosthesis use in unilateral upper-limb amputees to inform treatment & device design.

    PubMed

    Spiers, Adam J; Resnik, Linda; Dollar, Aaron M

    2017-07-01

    New upper limb prosthetic devices are continuously being developed by a variety of industrial, academic, and hobbyist groups. Yet, little research has evaluated the long term use of currently available prostheses in daily life activities, beyond laboratory or survey studies. We seek to objectively measure how experienced unilateral upper limb prosthesis-users employ their prosthetic devices and unaffected limb for manipulation during everyday activities. In particular, our goal is to create a method for evaluating all types of amputee manipulation, including non-prehensile actions beyond conventional grasp functions, as well as to examine the relative use of both limbs in unilateral and bilateral cases. This study employs a head-mounted video camera to record participants' hands and arms as they complete unstructured domestic tasks within their own homes. A new 'Unilateral Prosthesis-User Manipulation Taxonomy' is presented based on observations from 10 hours of recorded videos. The taxonomy addresses manipulation actions of the intact hand, prostheses, bilateral activities, and environmental feature-use (affordances). Our preliminary results involved tagging 23-minute segments of the full videos from 3 amputee participants using the taxonomy. This resulted in over 2,300 tag instances. One observation was that non-prehensile interactions outnumbered prehensile interactions in the affected limb for users whose more distal amputation allowed arm mobility.

  15. A preliminary study to estimate contact rates between free-roaming domestic dogs using novel miniature cameras.

    PubMed

    Bombara, Courtenay B; Dürr, Salome; Machovsky-Capuska, Gabriel E; Jones, Peter W; Ward, Michael P

    2017-01-01

    Information on contacts between individuals within a population is crucial to inform disease control strategies, via parameterisation of disease spread models. In this study we investigated the use of dog-borne video cameras, in conjunction with global positioning system (GPS) loggers, to both characterise dog-to-dog contacts and estimate contact rates. We customized miniaturised video cameras, enclosed within 3D-printed plastic cases, and attached these to nylon dog collars. Using two 3400 mAh NCR lithium-ion batteries, cameras could record a maximum of 22 hr of continuous video footage. Together with a GPS logger, collars were attached to six free-roaming domestic dogs (FRDDs) in two remote Indigenous communities in northern Australia. We recorded a total of 97 hr of video footage, ranging from 4.5 to 22 hr (mean 19.1) per dog, and observed a wide range of social behaviours. The majority (69%) of all observed interactions between community dogs involved direct physical contact. Direct contact behaviours included sniffing, licking, mouthing and play fighting. No contacts appeared to be aggressive; however, multiple teeth-baring incidents were observed during play fights. We identified a total of 153 contacts (equating to 8 to 147 contacts per dog per 24 hr) from the videos of the five dogs with camera data that could be analysed. These contacts were attributed to 42 unique dogs (range 1 to 19 per video) which could be identified (based on colour patterns and markings). Most dog activity was observed in urban (houses and roads) environments, but contacts were more common in bushland and beach environments. A variety of foraging behaviours were observed, including scavenging through rubbish and rolling on dead animal carcasses. Food identified as consumed included chicken, raw bones, animal carcasses, rubbish, grass and cheese. For characterising contacts between FRDDs, several benefits of analysing videos compared to GPS fixes alone were identified in this study, including visualisation of the nature of the contact between two dogs, and inclusion of a greater number of dogs in the study (which do not need to be wearing video or GPS collars). Some limitations identified included visualisation of contacts only during daylight hours; the camera lens being obscured on occasion by the dog's mandible or the dog resting on the camera; an insufficiently wide viewing angle (36°); battery life and robustness of the deployments; high costs of the deployment; and analysis of large volumes of often unsteady video footage. This study demonstrates that dog-borne video cameras are a feasible technology for estimating and characterising contacts between FRDDs. Modifying camera specifications and developing new analytical methods will improve the applicability of this technology for monitoring FRDD populations, providing insights into dog-to-dog contacts and therefore how disease might spread within these populations.

  16. Use of a microscope-mounted wide-angle point of view camera to record optimal hand position in ocular surgery.

    PubMed

    Gooi, Patrick; Ahmed, Yusuf; Ahmed, Iqbal Ike K

    2014-07-01

    We describe the use of a microscope-mounted wide-angle point-of-view camera to record optimal hand positions in ocular surgery. The camera is mounted close to the objective lens beneath the surgeon's oculars and faces the same direction as the surgeon, providing a surgeon's view. A wide-angle lens enables viewing of both hands simultaneously and does not require repositioning the camera during the case. Proper hand positioning and instrument placement through microincisions are critical for effective and atraumatic handling of tissue within the eye. Our technique has potential in the assessment and training of optimal hand position for surgeons performing intraocular surgery. It is an innovative way to routinely record instrument and operating hand positions in ophthalmic surgery and has minimal requirements in terms of cost, personnel, and operating-room space. No author has a financial or proprietary interest in any material or method mentioned. Copyright © 2014 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.

  17. The Integrated Radiation Mapper Assistant

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carlton, R.E.; Tripp, L.R.

    1995-03-01

    The Integrated Radiation Mapper Assistant (IRMA) system combines state-of-the-art radiation sensors and microprocessor-based analysis techniques to perform radiation surveys. The survey function is controlled from a control station located outside the radiation area, thus reducing the time spent in radiation areas performing surveys. The system consists of a directional radiation sensor, a laser range finder, two area radiation sensors, and a video camera mounted on a pan-and-tilt platform. This sensor package is deployable on a remotely operated vehicle. The outputs of the system are radiation intensity maps identifying both radiation source intensities and radiation levels throughout the room being surveyed. After completion of the survey, the data can be removed from the control station computer for further analysis or archiving.

  18. KSC-97PC1277

    NASA Image and Video Library

    1997-08-22

    In the Payload Hazardous Servicing Facility (PHSF), Dan Maynard, a Jet Propulsion Laboratory technician, inserts the Digital Video Disk (DVD) into a shallow cavity between two pieces of aluminum that will protect it from micrometeoroid impacts. The package will be mounted to the side of the two-story-tall spacecraft beneath a pallet carrying cameras and other space instruments that will be used to study the Saturnian system. A specially designed, multicolored patch of thermal blanket material will be installed over the disk package. Along with the spacecraft, the disk will reside in Saturn's orbit centuries after the primary mission is completed in July 2008. The Cassini mission is managed for NASA's Office of Space Science, Washington, D.C., by the Jet Propulsion Laboratory, a division of the California Institute of Technology.

  19. Feasibility of Using Video Cameras for Automated Enforcement on Red-Light Running and Managed Lanes.

    DOT National Transportation Integrated Search

    2009-12-01

    The overall objective of this study is to evaluate the feasibility, effectiveness, legality, and public acceptance aspects of automated enforcement on red light running and high occupancy vehicle (HOV) occupancy requirement using video cameras in Nev...

  20. Brownian Movement and Avogadro's Number: A Laboratory Experiment.

    ERIC Educational Resources Information Center

    Kruglak, Haym

    1988-01-01

    Reports an experimental procedure for studying Einstein's theory of Brownian movement using commercially available latex microspheres and a video camera. Describes how students can monitor sphere motions and determine Avogadro's number. Uses a black and white video camera, microscope, and TV. (ML)
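    The underlying analysis in such an experiment is Einstein's mean-square-displacement relation for a sphere of radius a in a fluid of viscosity η; a standard form of the working equations (a textbook result, not taken from the article itself) is:

```latex
\langle x^{2} \rangle = 2Dt, \qquad
D = \frac{k_{B}T}{6\pi\eta a} = \frac{RT}{6\pi\eta a N_{A}}
\quad\Longrightarrow\quad
N_{A} = \frac{RT\,t}{3\pi\eta a\,\langle x^{2} \rangle}
```

    Students measure the mean-square displacement of the spheres over time t from the video record and solve for Avogadro's number N_A.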

  1. 67. DETAIL OF VIDEO CAMERA CONTROL PANEL LOCATED IMMEDIATELY WEST ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    67. DETAIL OF VIDEO CAMERA CONTROL PANEL LOCATED IMMEDIATELY WEST OF ASSISTANT LAUNCH CONDUCTOR PANEL SHOWN IN CA-133-1-A-66 - Vandenberg Air Force Base, Space Launch Complex 3, Launch Operations Building, Napa & Alden Roads, Lompoc, Santa Barbara County, CA

  2. The Use of Video-Tacheometric Technology for Documenting and Analysing Geometric Features of Objects

    NASA Astrophysics Data System (ADS)

    Woźniak, Marek; Świerczyńska, Ewa; Jastrzębski, Sławomir

    2015-12-01

    This paper analyzes selected aspects of the use of video-tacheometric technology for inventorying and documenting the geometric features of objects. Data were collected with the video-tacheometer Topcon Image Station IS-3 and the professional camera Canon EOS 5D Mark II. During the fieldwork and subsequent data processing, the following experiments were performed: multiple determination of the camera interior orientation parameters and distortion parameters for five lenses with different focal lengths, and reflectorless measurement of profiles for the elevation and inventory of the decorative surface wall of the Warsaw Ballet School building. During the research, the process of acquiring and integrating video-tacheometric data was analysed, as well as the process of combining the "point cloud" acquired by the video-tacheometer during scanning with independent photographs taken by a digital camera. On the basis of the tests performed, the utility of video-tacheometric technology in geodetic surveys of the geometric features of buildings has been established.

  3. Application of PLZT electro-optical shutter to diaphragm of visible and mid-infrared cameras

    NASA Astrophysics Data System (ADS)

    Fukuyama, Yoshiyuki; Nishioka, Shunji; Chonan, Takao; Sugii, Masakatsu; Shirahata, Hiromichi

    1997-04-01

    Pb0.91La0.09(Zr0.65Ti0.35)0.9775O3 (PLZT 9/65/35), commonly used as an electro-optical shutter, exhibits large phase retardation at low applied voltage. The shutter's features are as follows: (1) high shutter speed, (2) wide optical transmittance range, and (3) high optical density in the 'OFF' state. If the shutter is applied to the diaphragm of a video camera, it can protect the sensor from intense light. We have tested the basic characteristics of the PLZT electro-optical shutter and the resolving power of its imaging. The ratio of optical transmittance between the 'ON' and 'OFF' states was 1.1 × 10³. The response time of the PLZT shutter from the 'ON' state to the 'OFF' state was 10 microseconds. The MTF reduction when placing the PLZT shutter in front of the visible video-camera lens was only 12 percent at a spatial frequency of 38 cycles/mm, the sensor resolution of the video camera. Moreover, we took visible images with the Si-CCD video camera: a He-Ne laser ghost image was observed in the 'ON' state, whereas the ghost image was totally shut out in the 'OFF' state. From these tests, it has been found that the PLZT shutter is useful as the diaphragm of a visible video camera. The measured optical transmittance of a PLZT wafer with no antireflection coating was 78 percent over the range from 2 to 6 microns.
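    For reference, the quoted ON/OFF transmittance ratio corresponds to an optical density computed in the usual way (the definition is standard; only the ratio comes from the paper):

```latex
\mathrm{OD} = \log_{10}\!\left(\frac{T_{\mathrm{ON}}}{T_{\mathrm{OFF}}}\right)
            = \log_{10}\!\left(1.1 \times 10^{3}\right) \approx 3.0
```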

  4. Imaging fall Chinook salmon redds in the Columbia River with a dual-frequency identification sonar

    USGS Publications Warehouse

    Tiffan, K.F.; Rondorf, D.W.; Skalicky, J.J.

    2004-01-01

    We tested the efficacy of a dual-frequency identification sonar (DIDSON) for imaging and enumeration of fall Chinook salmon Oncorhynchus tshawytscha redds in a spawning area below Bonneville Dam on the Columbia River. The DIDSON uses sound to form near-video-quality images and has the advantages of imaging in zero-visibility water and possessing a greater detection range and field of view than underwater video cameras. We suspected that the large size and distinct morphology of a fall Chinook salmon redd would facilitate acoustic imaging if the DIDSON was towed near the river bottom so as to cast an acoustic shadow from the tailspill over the redd pocket. We tested this idea by observing 22 different redds with an underwater video camera, spatially referencing their locations, and then navigating to them while imaging them with the DIDSON. All 22 redds were successfully imaged with the DIDSON. We subsequently conducted redd searches along transects to compare the number of redds imaged by the DIDSON with the number observed using an underwater video camera. We counted 117 redds with the DIDSON and 81 redds with the underwater video camera. Only one of the redds observed with the underwater video camera was not also documented by the DIDSON. In spite of the DIDSON's high cost, it may serve as a useful tool for enumerating fall Chinook salmon redds in conditions that are not conducive to underwater videography.

  5. User interface using a 3D model for video surveillance

    NASA Astrophysics Data System (ADS)

    Hata, Toshihiko; Boh, Satoru; Tsukada, Akihiro; Ozaki, Minoru

    1998-02-01

    Industrial surveillance and monitoring applications such as plant control and building security now require fewer people, who must carry out their tasks quickly and precisely. Utilizing multimedia technology is a good approach to meeting this need, and we previously developed Media Controller, which is designed for such applications and provides real-time recording and retrieval of digital video data in a distributed environment. In this paper, we propose a user interface for such a distributed video surveillance system in which 3D models of buildings and facilities are connected to the surveillance video. A novel method of synchronizing camera field data with each frame of a video stream is considered. This method records and reads the camera field data similarly to the video data and transmits it synchronously with the video stream. This enables the user interface to offer such useful functions as comprehending the camera field immediately and providing clues when visibility is poor, for both live and playback video. We have also implemented and evaluated the display function, which makes the surveillance video and the 3D model work together, using Media Controller with Java and the Virtual Reality Modeling Language for multi-purpose and intranet use of the 3D model.
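    One way to realise the frame-synchronized camera-field idea described above is to carry a small metadata record alongside every frame. The sketch below is a minimal illustration under our own assumptions; the record layout and field names are hypothetical, not the paper's format:

```python
from dataclasses import dataclass

@dataclass
class CameraFieldRecord:
    # Hypothetical per-frame record: the camera pose when the frame was shot.
    frame_index: int
    pan_deg: float
    tilt_deg: float
    zoom: float

def attach_field_data(frames, poses):
    """Pair each video frame with the camera-field record captured with it,
    so playback can drive the 3D model exactly as live video does."""
    return [(frame, CameraFieldRecord(i, *poses[i])) for i, frame in enumerate(frames)]

# Usage: the synchronized list lets a 3D viewer highlight the camera's field
# of view for any frame, live or recorded.
frames = ["frame0", "frame1"]                   # stand-ins for decoded frames
poses = [(10.0, -5.0, 1.0), (12.0, -5.0, 1.0)]  # (pan, tilt, zoom) per frame
synced = attach_field_data(frames, poses)
```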

  6. Human silhouette matching based on moment invariants

    NASA Astrophysics Data System (ADS)

    Sun, Yong-Chao; Qiu, Xian-Jie; Xia, Shi-Hong; Wang, Zhao-Qi

    2005-07-01

    This paper applies silhouette matching based on moment invariants to infer human motion parameters from video sequences taken by a single monocular, uncalibrated camera. Currently, there are two ways of tracking human motion: marker-based and markerless. A hybrid framework is introduced in this paper to recover the input video contents. A standard 3D motion database is built in advance using marker techniques. Given a video sequence, human silhouettes are extracted along with the viewpoint information of the camera, which is used to project the standard 3D motion database onto a 2D one. The video recovery problem is thus formulated as a matching issue: finding the body pose in the standard 2D library most similar to the one in the video image. The framework is applied to trampoline sport, where complicated human motion parameters can be obtained from single-camera video sequences, and extensive experiments demonstrate that this approach is feasible for monocular video-based 3D motion reconstruction.
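    As a concrete illustration of moment-invariant silhouette matching (our sketch, not the authors' code), OpenCV's Hu moments give a seven-value descriptor that is invariant to translation, scale, and rotation; silhouettes can then be compared by distance between descriptors:

```python
import cv2
import numpy as np

def hu_descriptor(silhouette):
    """Compute log-scaled Hu moment invariants of a binary silhouette image."""
    moments = cv2.moments(silhouette, binaryImage=True)
    hu = cv2.HuMoments(moments).flatten()
    # Log scaling tames the large dynamic range of the raw invariants.
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)

def match_pose(query, library):
    """Return the index of the library silhouette closest to the query."""
    q = hu_descriptor(query)
    dists = [np.linalg.norm(q - hu_descriptor(s)) for s in library]
    return int(np.argmin(dists))
```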

  7. Solid State Television Camera (CID)

    NASA Technical Reports Server (NTRS)

    Steele, D. W.; Green, W. T.

    1976-01-01

    The design, development, and testing of a charge injection device (CID) camera using a 244 × 248 element array are described. A number of video signal processing functions are included that maximize the output video dynamic range while retaining the inherently good resolution response of the CID. Among the camera's notable features are low-light-level performance, a high S/N ratio, antiblooming, low geometric distortion, sequential scanning, and AGC.

  8. Feasibility of Using Video Camera for Automated Enforcement on Red-Light Running and Managed Lanes.

    DOT National Transportation Integrated Search

    2009-12-25

    The overall objective of this study is to evaluate the feasibility, effectiveness, legality, and public acceptance aspects of automated enforcement on red light running and HOV occupancy requirement using video cameras in Nevada. This objective was a...

  9. Preplanning and Evaluating Video Documentaries and Features.

    ERIC Educational Resources Information Center

    Maynard, Riley

    1997-01-01

    This article presents a ten-part pre-production outline and post-production evaluation that helps communications students more effectively improve video skills. Examines camera movement and motion, camera angle and perspective, lighting, audio, graphics, backgrounds and color, special effects, editing, transitions, and music. Provides a glossary…

  10. On the development of new SPMN diurnal video systems for daylight fireball monitoring

    NASA Astrophysics Data System (ADS)

    Madiedo, J. M.; Trigo-Rodríguez, J. M.; Castro-Tirado, A. J.

    2008-09-01

    Daylight fireball video monitoring. High-sensitivity video devices are commonly used for the study of the activity of meteor streams during the night. These provide useful data for the determination of, for instance, radiant, orbital and photometric parameters ([1] to [7]). With this aim, during 2006 three automated video stations supported by Universidad de Huelva were set up in Andalusia within the framework of the SPanish Meteor Network (SPMN). These are endowed with 8-9 high-sensitivity wide-field video cameras that achieve a meteor limiting magnitude of about +3. These stations have increased the coverage provided by the low-scan all-sky CCD systems operated by the SPMN and, besides, achieve a time accuracy of about 0.01 s for determining the appearance of meteor and fireball events. Despite these nocturnal monitoring efforts, we realised the need to set up stations for daylight fireball detection. This effort was also motivated by the two recent meteorite-dropping events of Villalbeto de la Peña [8,9] and Puerto Lápice [10]. Although the Villalbeto de la Peña event was casually videotaped and photographed, no direct pictures or videos were obtained for the Puerto Lápice event. Consequently, in order to perform a continuous recording of daylight fireball events, we set up new automated systems based on CCD video cameras. However, the development of these video stations raises several issues, relative to nocturnal systems, that must be properly solved in order to achieve optimal operation. The first of these video stations, also supported by the University of Huelva, was set up in Sevilla (Andalusia) during May 2007. Fireball association is, of course, unequivocal only when two or more stations record the fireball, so that the geocentric radiant can be accurately determined. With this aim, a second diurnal video station is being set up in Andalusia in the facilities of the Centro Internacional de Estudios y Convenciones Ecológicas y Medioambientales (CIECEM, University of Huelva), in the environment of Doñana Natural Park (Huelva province). In this way both stations, which are separated by a distance of 75 km, will work as a double video station system in order to provide trajectory and orbit information for major bolides and, thus, increase the chance of meteorite recovery in the Iberian Peninsula. The new diurnal SPMN video stations are endowed with different models of Mintron cameras (Mintron Enterprise Co., Ltd.). These are high-sensitivity devices that employ a colour 1/2" Sony interline-transfer CCD image sensor. Aspherical lenses are attached to the video cameras in order to maximize image quality. However, the use of fast lenses is not a priority here: while most of our nocturnal cameras use f0.8 or f1.0 lenses in order to detect meteors as faint as magnitude +3, the diurnal systems employ in most cases f1.4 to f2.0 lenses. Their focal lengths range from 3.8 to 12 mm to cover different atmospheric volumes. The cameras are arranged in such a way that the whole sky is monitored from every observing station. Figure 1. A daylight event recorded from Sevilla on May 26, 2008 at 4h30m05.4 ± 0.1s UT. The way our diurnal video cameras work is similar to the operation of our nocturnal systems [1]. Thus, diurnal stations are automatically switched on and off at sunrise and sunset, respectively. The images, taken at 25 fps with a resolution of 720x576 pixels, are continuously sent to PC computers through a video capture device.
The computers run software (UFOCapture, by SonotaCo, Japan) that automatically registers meteor trails and stores the corresponding video frames on hard disk. In addition, before the signal from the cameras reaches the computers, a video time inserter employing a GPS device (KIWI-OSD, by PFD Systems) stamps time information on every video frame. This allows us to measure time precisely (to about 0.01 s) along the whole fireball path. However, one issue relative to nocturnal observing stations is the high number of false detections, a consequence of several factors: higher activity of birds and insects, reflection of sunlight off planes and helicopters, etc. Some of these false events follow a pattern very similar to fireball trails, which makes the use of a second station absolutely necessary in order to discriminate between them. Another key issue is the passage of the Sun across the field of view of some of the cameras; special care is necessary here to avoid any damage to the CCD sensor. Besides, depending on atmospheric conditions (dust or moisture, for instance), the Sun may saturate most of the video frame. To solve this, our automated system determines which camera is pointing towards the Sun at a given moment and disconnects it. As the cameras are fitted with auto-iris lenses, disconnection means that the optics are fully closed and the CCD sensor is protected. This, of course, means that while this happens the atmospheric volume covered by the corresponding camera is not monitored. It must also be taken into account that, in general, operating temperatures are higher for diurnal cameras. This results in higher thermal noise and so poses some difficulties for the detection software. To minimize this effect, it is necessary to employ CCD video cameras with a proper signal-to-noise ratio; refrigeration of the CCD sensor, for instance with a Peltier system, can also be considered. The astrometric reduction procedure is also somewhat different for daytime events: it requires reference objects within the field of view of every camera in order to calibrate the corresponding images. This is done by allowing every camera to capture distant buildings that, by means of said calibration, allow us to obtain the equatorial coordinates of the fireball along its path by measuring its X and Y positions on every video frame. The calibration itself can be performed from star positions measured on nocturnal images taken with the same cameras. Once made, provided the cameras are not moved, it is possible to estimate the equatorial coordinates of any future fireball event. We do not use any software for automatic astrometry of the images; this crucial step is made via direct measurement of pixel positions, as in all our previous work. From these astrometric measurements, our software then estimates the atmospheric trajectory and radiant of each fireball ([10] to [13]). During 2007 and 2008 the SPMN also set up other diurnal stations based on 1/3" progressive-scan CMOS sensors attached to modified wide-field lenses covering a 120 × 80 degree FOV. They are placed in Andalusia: El Arenosillo (Huelva), La Mayora (Málaga) and Murtas (Granada).
They also have night sensitivity thanks to an infrared cut filter (ICR), which enables the cameras to perform well in both high- and low-light conditions, providing colour video by day and IR-sensitive black/white video at night. Conclusions: The first detections of daylight fireballs by CCD video cameras are being achieved within the SPMN framework, and the future expansion and setup of new observing stations is currently being planned. The establishment of additional diurnal SPMN stations will increase the number of daytime fireballs detected, and with it our chances of meteorite recovery.
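    The astrometric calibration step described above (mapping measured pixel positions to sky coordinates using reference objects) is conventionally done with a plate solution. The sketch below fits a simple six-constant linear plate model by least squares; it is our own illustration of the standard technique, not the SPMN pipeline:

```python
import numpy as np

def fit_plate_constants(xy_pix, std_coords):
    """Fit xi = a*x + b*y + c and eta = d*x + e*y + f to reference points.
    xy_pix: (N,2) measured pixel positions; std_coords: (N,2) standard
    (tangent-plane) coordinates of the same reference objects."""
    x, y = xy_pix[:, 0], xy_pix[:, 1]
    A = np.column_stack([x, y, np.ones_like(x)])
    coef_xi, *_ = np.linalg.lstsq(A, std_coords[:, 0], rcond=None)
    coef_eta, *_ = np.linalg.lstsq(A, std_coords[:, 1], rcond=None)
    return coef_xi, coef_eta

def pixel_to_standard(xy, coef_xi, coef_eta):
    """Apply the fitted plate constants to a new fireball measurement."""
    v = np.array([xy[0], xy[1], 1.0])
    return v @ coef_xi, v @ coef_eta
```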

  11. Using remote underwater video to estimate freshwater fish species richness.

    PubMed

    Ebner, B C; Morgan, D L

    2013-05-01

    Species richness records from replicated deployments of baited remote underwater video stations (BRUVS) and unbaited remote underwater video stations (UBRUVS) in shallow (<1 m) and deep (>1 m) water were compared with those obtained using fyke nets, gillnets and beach seines. Maximum species richness (14 species) was achieved through a combination of conventional netting and camera-based techniques. Chanos chanos was the only species not recorded on camera, whereas Lutjanus argentimaculatus, Selenotoca multifasciata and Gerres filamentosus were recorded on camera in all three waterholes but were not detected by netting. BRUVS and UBRUVS provided versatile techniques that were effective at a range of depths and microhabitats. It is concluded that cameras warrant application in aquatic areas of high conservation value with high visibility. Non-extractive video methods are particularly desirable where threatened species are a focus of monitoring or might be encountered as by-catch in net meshes. © 2013 The Authors. Journal of Fish Biology © 2013 The Fisheries Society of the British Isles.

  12. Normalized Metadata Generation for Human Retrieval Using Multiple Video Surveillance Cameras.

    PubMed

    Jung, Jaehoon; Yoon, Inhye; Lee, Seungwon; Paik, Joonki

    2016-06-24

    Since it is impossible for surveillance personnel to keep monitoring videos from a multiple camera-based surveillance system, an efficient technique is needed to help recognize important situations by retrieving the metadata of an object-of-interest. In a multiple camera-based surveillance system, an object detected in a camera has a different shape in another camera, which is a critical issue of wide-range, real-time surveillance systems. In order to address the problem, this paper presents an object retrieval method by extracting the normalized metadata of an object-of-interest from multiple, heterogeneous cameras. The proposed metadata generation algorithm consists of three steps: (i) generation of a three-dimensional (3D) human model; (ii) human object-based automatic scene calibration; and (iii) metadata generation. More specifically, an appropriately-generated 3D human model provides the foot-to-head direction information that is used as the input of the automatic calibration of each camera. The normalized object information is used to retrieve an object-of-interest in a wide-range, multiple-camera surveillance system in the form of metadata. Experimental results show that the 3D human model matches the ground truth, and automatic calibration-based normalization of metadata enables a successful retrieval and tracking of a human object in the multiple-camera video surveillance system.
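    A minimal illustration of the kind of normalized record such a system might retrieve on follows; the field names and types are our assumption for exposition, not the paper's schema:

```python
from dataclasses import dataclass

@dataclass
class NormalizedObjectMetadata:
    # Hypothetical schema: values are expressed in calibrated scene units so
    # that the same person yields comparable records across heterogeneous cameras.
    camera_id: str
    timestamp: float
    height_m: float            # estimated from the foot-to-head direction
    foot_position: tuple       # (x, y) on the calibrated ground plane
    color_histogram: list      # appearance feature for cross-camera retrieval
```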

  13. Normalized Metadata Generation for Human Retrieval Using Multiple Video Surveillance Cameras

    PubMed Central

    Jung, Jaehoon; Yoon, Inhye; Lee, Seungwon; Paik, Joonki

    2016-01-01

    Since it is impossible for surveillance personnel to keep monitoring videos from a multiple camera-based surveillance system, an efficient technique is needed to help recognize important situations by retrieving the metadata of an object-of-interest. In a multiple camera-based surveillance system, an object detected in a camera has a different shape in another camera, which is a critical issue of wide-range, real-time surveillance systems. In order to address the problem, this paper presents an object retrieval method by extracting the normalized metadata of an object-of-interest from multiple, heterogeneous cameras. The proposed metadata generation algorithm consists of three steps: (i) generation of a three-dimensional (3D) human model; (ii) human object-based automatic scene calibration; and (iii) metadata generation. More specifically, an appropriately-generated 3D human model provides the foot-to-head direction information that is used as the input of the automatic calibration of each camera. The normalized object information is used to retrieve an object-of-interest in a wide-range, multiple-camera surveillance system in the form of metadata. Experimental results show that the 3D human model matches the ground truth, and automatic calibration-based normalization of metadata enables a successful retrieval and tracking of a human object in the multiple-camera video surveillance system. PMID:27347961

  14. An integrated port camera and display system for laparoscopy.

    PubMed

    Terry, Benjamin S; Ruppert, Austin D; Steinhaus, Kristen R; Schoen, Jonathan A; Rentschler, Mark E

    2010-05-01

    In this paper, we built and tested the port camera, a novel, inexpensive, portable, and battery-powered laparoscopic tool that integrates the components of a vision system with a cannula port. This new device 1) minimizes the invasiveness of laparoscopic surgery by combining a camera port and tool port; 2) reduces the cost of laparoscopic vision systems by integrating an inexpensive CMOS sensor and LED light source; and 3) enhances laparoscopic surgical procedures by mechanically coupling the camera, tool port, and liquid crystal display (LCD) screen to provide an on-patient visual display. The port camera video system was compared to two laparoscopic video systems: a standard resolution unit from Karl Storz (model 22220130) and a high definition unit from Stryker (model 1188HD). Brightness, contrast, hue, colorfulness, and sharpness were compared. The port camera video is superior to the Storz scope and approximately equivalent to the Stryker scope. An ex vivo study was conducted to measure the operative performance of the port camera. The results suggest that simulated tissue identification and biopsy acquisition with the port camera is as efficient as with a traditional laparoscopic system. The port camera was successfully used by a laparoscopic surgeon for exploratory surgery and liver biopsy during a porcine surgery, demonstrating initial surgical feasibility.

  15. Utilizing ISS Camera Systems for Scientific Analysis of Lightning Characteristics and Comparison with ISS-LIS and GLM

    NASA Technical Reports Server (NTRS)

    Schultz, Christopher J.; Lang, Timothy J.; Leake, Skye; Runco, Mario, Jr.; Blakeslee, Richard J.

    2017-01-01

    Video and still-frame images from cameras aboard the International Space Station (ISS) are used to inspire, educate, and provide a unique vantage point from low-Earth orbit that is second to none; however, these cameras have overlooked capabilities for contributing to scientific analysis of the Earth and near-space environment. The goal of this project is to study how georeferenced video and images from available ISS camera systems can be used for scientific analysis, using lightning properties as a demonstration.

  16. Geometrical and optical calibration of a vehicle-mounted IR imager for land mine localization

    NASA Astrophysics Data System (ADS)

    Aitken, Victor C.; Russell, Kevin L.; McFee, John E.

    2000-08-01

    Many present-day vehicle-mounted landmine detection systems use IR imagers. The information furnished by these imaging systems usually consists of video and the locations of targets within the video. In multisensor systems employing data fusion, there is a need to convert each sensor's information into a common coordinate system that all sensors share.
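    Converting per-sensor target locations into a shared frame is typically done with rigid-body transforms. The snippet below is a generic sketch of that step; the transform values are placeholders, not calibration results from this system:

```python
import numpy as np

def make_transform(R, t):
    """Build a 4x4 homogeneous transform from rotation R (3x3) and translation t (3,)."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def to_common_frame(p_sensor, T_sensor_to_common):
    """Map a 3D point from a sensor frame into the shared vehicle frame."""
    p = np.append(p_sensor, 1.0)
    return (T_sensor_to_common @ p)[:3]

# Placeholder calibration: imager mounted 1.5 m up and 0.8 m forward, no rotation.
T_ir = make_transform(np.eye(3), np.array([0.8, 0.0, 1.5]))
print(to_common_frame(np.array([0.0, 2.0, -1.5]), T_ir))
```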

  17. Video cameras on wild birds.

    PubMed

    Rutz, Christian; Bluff, Lucas A; Weir, Alex A S; Kacelnik, Alex

    2007-11-02

    New Caledonian crows (Corvus moneduloides) are renowned for using tools for extractive foraging, but the ecological context of this unusual behavior is largely unknown. We developed miniaturized, animal-borne video cameras to record the undisturbed behavior and foraging ecology of wild, free-ranging crows. Our video recordings enabled an estimate of the species' natural foraging efficiency and revealed that tool use, and choice of tool materials, are more diverse than previously thought. Video tracking has potential for studying the behavior and ecology of many other bird species that are shy or live in inaccessible habitats.

  18. Testing and Performance Validation of a Sensitive Gamma Ray Camera Designed for Radiation Detection and Decommissioning Measurements in Nuclear Facilities-13044

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mason, John A.; Looman, Marc R.; Poundall, Adam J.

    2013-07-01

    This paper describes the measurements, testing and performance validation of a sensitive gamma ray camera designed for radiation detection and quantification in the environment and for decommissioning and hold-up measurements in nuclear facilities. The instrument, known as RadSearch, combines a sensitive and highly collimated LaBr₃ scintillation detector with an optical (video) camera with controllable zoom and focus and a laser range finder in one detector head. The LaBr₃ detector has a typical energy resolution of between 2.5% and 3% at the 662 keV energy of Cs-137, compared to NaI detectors with a resolution of typically 7% to 8% at the same energy. At this energy the tungsten shielding of the detector provides a shielding ratio of greater than 900:1 in the forward direction and 100:1 on the sides and from the rear. The detector head is mounted on a pan/tilt mechanism with a range of motion of ±180 degrees (pan) and ±90 degrees (tilt), equivalent to 4π steradians of coverage. The detector head with pan/tilt is normally mounted on a tripod or wheeled cart. It can also be mounted on vehicles or a mobile robot for access to high-dose-rate areas and areas with high levels of contamination. Ethernet connects RadSearch to a ruggedized notebook computer from which it is operated and controlled. Power can be supplied either as 24 volts DC from a battery or as 50 volts DC from a small mains (110 or 230 VAC) power supply unit co-located with the controlling notebook computer; in the latter case both power and Ethernet are supplied through a single cable that can be up to 80 metres in length. If a local battery supplies power, the unit can be controlled through wireless Ethernet. Both manual operation and automatic scanning of surfaces and objects are available through the software interface on the notebook computer. For each scan element making up a part of an overall scanned area, the unit measures a gamma ray spectrum. Multiple radionuclides may be selected by the operator and will be identified if present. In scanning operation, the unit scans a designated region and superimposes the distribution of measured radioactivity over a video image. For the total scanned area or object, RadSearch determines the total activity of the operator-selected radionuclides present and the gamma dose-rate measured at the detector head. Results of hold-up measurements made in a nuclear facility are presented, as are test measurements of point sources distributed arbitrarily on surfaces. The latter results are compared with the results of benchmarked MCNP Monte Carlo calculations. The use of the device for hold-up and decommissioning measurements is validated. (authors)
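    The scan-element procedure described above amounts to a raster over pan/tilt positions with one spectrum per element. The loop below is purely illustrative; the device API (move_to, acquire_spectrum, net_counts) is hypothetical, not RadSearch's actual interface:

```python
# Illustrative raster scan for a pan/tilt gamma camera (hypothetical API).
def raster_scan(head, pan_range, tilt_range, step_deg, dwell_s, peaks):
    results = {}
    for tilt in range(tilt_range[0], tilt_range[1] + 1, step_deg):
        for pan in range(pan_range[0], pan_range[1] + 1, step_deg):
            head.move_to(pan, tilt)
            spectrum = head.acquire_spectrum(dwell_s)
            # One gamma spectrum per scan element; selected radionuclides are
            # quantified from photopeak regions (e.g., 662 keV for Cs-137).
            results[(pan, tilt)] = {name: spectrum.net_counts(energy_kev)
                                    for name, energy_kev in peaks.items()}
    return results
```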

  19. Use and validation of mirrorless digital single light reflex camera for recording of vitreoretinal surgeries in high definition

    PubMed Central

    Khanduja, Sumeet; Sampangi, Raju; Hemlatha, B C; Singh, Satvir; Lall, Ashish

    2018-01-01

    Purpose: The purpose of this study is to describe the use of a commercial digital single-lens reflex (DSLR) camera for vitreoretinal surgery recording and to compare it to a standard 3-chip charge-coupled device (CCD) camera. Methods: Simultaneous recording was done using a Sony A7s2 camera and a Sony high-definition 3-chip camera attached to each side of the microscope. The videos recorded from both camera systems were edited, and sequences of similar time frames were selected. The three sequences selected for evaluation were (a) anterior segment surgery, (b) surgery under a direct viewing system, and (c) surgery under an indirect wide-angle viewing system. The videos of each sequence were evaluated and rated on a scale of 0-10 for color, contrast, and overall quality. Results: Most results were rated either 8/10 or 9/10 for both cameras. A noninferiority analysis comparing the mean scores of the DSLR camera versus the CCD camera was performed, and P values were obtained. The mean scores of the two cameras were comparable on all parameters assessed in the different videos, except for color and contrast in the posterior pole view and color in the wide-angle view, which were rated significantly higher (better) for the DSLR camera. Conclusion: Commercial DSLRs are an affordable low-cost alternative for vitreoretinal surgery recording and may be used for documentation and teaching. PMID:29283133

  20. Use and validation of mirrorless digital single light reflex camera for recording of vitreoretinal surgeries in high definition.

    PubMed

    Khanduja, Sumeet; Sampangi, Raju; Hemlatha, B C; Singh, Satvir; Lall, Ashish

    2018-01-01

    The purpose of this study is to describe the use of a commercial digital single-lens reflex (DSLR) camera for vitreoretinal surgery recording and to compare it to a standard 3-chip charge-coupled device (CCD) camera. Simultaneous recording was done using a Sony A7s2 camera and a Sony high-definition 3-chip camera attached to each side of the microscope. The videos recorded from both camera systems were edited, and sequences of similar time frames were selected. The three sequences selected for evaluation were (a) anterior segment surgery, (b) surgery under a direct viewing system, and (c) surgery under an indirect wide-angle viewing system. The videos of each sequence were evaluated and rated on a scale of 0-10 for color, contrast, and overall quality. Most results were rated either 8/10 or 9/10 for both cameras. A noninferiority analysis comparing the mean scores of the DSLR camera versus the CCD camera was performed, and P values were obtained. The mean scores of the two cameras were comparable on all parameters assessed in the different videos, except for color and contrast in the posterior pole view and color in the wide-angle view, which were rated significantly higher (better) for the DSLR camera. Commercial DSLRs are an affordable low-cost alternative for vitreoretinal surgery recording and may be used for documentation and teaching.
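    For readers unfamiliar with the statistics, a noninferiority comparison of paired mean ratings can be run as a one-sided shifted t-test. The sketch below is generic; the margin and rating values are made-up placeholders, not the study's data or analysis code:

```python
import numpy as np
from scipy import stats

dslr = np.array([8, 9, 8, 9, 8, 9])  # hypothetical paired 0-10 ratings
ccd = np.array([8, 8, 9, 8, 8, 9])
margin = 1.0                          # assumed noninferiority margin (points)

# One-sided test of H0: mean(dslr - ccd) <= -margin
diff = dslr - ccd
t, p_two = stats.ttest_1samp(diff, -margin)
p_noninf = p_two / 2 if t > 0 else 1 - p_two / 2
print(f"one-sided noninferiority p = {p_noninf:.3f}")
```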

  1. Collection and Analysis of Crowd Data with Aerial, Rooftop, and Ground Views

    DTIC Science & Technology

    2014-11-10

    These datasets were collected using different aircraft. The Erista 8 HL OctaCopter is a heavy-lift aerial platform capable of carrying high-resolution cinema-grade cameras. The Blackmagic Production Camera is another high-resolution, cinema-grade camera, capable of capturing 4K video at 30 frames per second and used for crowd counting with 4K cameras.

  2. Flat-panel detector, CCD cameras, and electron-beam-tube-based video for use in portal imaging

    NASA Astrophysics Data System (ADS)

    Roehrig, Hans; Tang, Chuankun; Cheng, Chee-Way; Dallas, William J.

    1998-07-01

    This paper provides a comparison of some imaging parameters of four portal imaging systems at 6 MV: a flat-panel detector, two CCD cameras, and an electron-beam-tube-based video camera. Measurements were made of signal and noise, and consequently of signal-to-noise ratio per pixel, as a function of exposure. All systems have a linear response with respect to exposure, and with the exception of the electron-beam-tube-based video camera, the noise is proportional to the square root of the exposure, indicating photon-noise-limited operation. The flat-panel detector has a signal-to-noise ratio higher than that observed with either CCD camera or with the electron-beam-tube-based video camera. This is expected because most portal imaging systems using optical coupling with a lens exhibit severe quantum sinks. The measurements of signal and noise were complemented by images of a Las Vegas-type aluminum contrast-detail phantom located at the isocenter. These images were generated at an exposure of 1 MU. The flat-panel detector permits detection of aluminum holes of 1.2 mm diameter and 1.6 mm depth, indicating the best signal-to-noise ratio. The CCD cameras rank second and third in signal-to-noise ratio, permitting detection of aluminum holes of 1.2 mm diameter and 2.2 mm depth (CCD_1) and of 1.2 mm diameter and 3.2 mm depth (CCD_2), respectively, while the electron-beam-tube-based video camera permits detection only of a hole of 1.2 mm diameter and 4.6 mm depth. Rank-order filtering was applied to the raw images from the CCD-based systems in order to remove direct hits: camera responses to scattered x-ray photons that interact directly with the CCD and generate salt-and-pepper-type noise, which interferes severely with attempts to obtain accurate estimates of the image noise. The paper also presents data on the metal phosphor's photon gain (the number of light photons per interacting x-ray photon).
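    The photon-noise-limited behaviour reported above follows the standard counting-statistics relation (a textbook result, not a formula from the paper): with signal proportional to exposure and noise growing as its square root,

```latex
S \propto E, \qquad \sigma \propto \sqrt{E}
\quad\Longrightarrow\quad
\mathrm{SNR} = \frac{S}{\sigma} \propto \sqrt{E}
```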

  3. The Experimental Study of Rayleigh-Taylor Instability using a Linear Induction Motor Accelerator

    NASA Astrophysics Data System (ADS)

    Yamashita, Nicholas; Jacobs, Jeffrey

    2009-11-01

    The experiments to be presented utilize an incompressible system of two stratified miscible liquids of different densities that are accelerated in order to produce the Rayleigh-Taylor instability. Three liquid combinations are used: isopropyl alcohol with water, a calcium nitrate solution, or a lithium polytungstate solution, giving Atwood numbers of 0.11, 0.22 and 0.57, respectively. The acceleration required to drive the instability is produced by two high-speed linear induction motors mounted to an 8 m tall drop tower. The motors are mounted in parallel, have an effective acceleration length of 1.7 m, and are each capable of producing 15 kN of thrust. The liquid system is contained within a square acrylic tank with inside dimensions of 76 × 76 × 184 mm. The tank is mounted to an aluminum plate, which is driven by the motors to create constant accelerations in the range of 1-20 g, though the potential exists for higher accelerations. Also attached to the plate are a high-speed camera and an LED backlight to provide continuous video of the instability. In addition, an accelerometer is used to provide acceleration measurements during each experiment. Experimental image sequences will be presented which show the development of a random three-dimensional instability from an unforced initial perturbation. Measurements of the mixing zone width will be compared with traditional growth models.
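    The quoted Atwood numbers follow from the standard definition in terms of the heavy and light liquid densities; the definition is textbook material, and the worked value uses nominal densities we supply for illustration:

```latex
A = \frac{\rho_{h} - \rho_{l}}{\rho_{h} + \rho_{l}},
\qquad\text{e.g.}\quad
A \approx \frac{998 - 786}{998 + 786} \approx 0.12
```

    Here 998 and 786 kg/m³ are nominal densities of water and isopropyl alcohol, consistent with the quoted Atwood number of 0.11 for that pair.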

  4. STS-111 Flight Day 5 Highlights

    NASA Astrophysics Data System (ADS)

    2002-06-01

    On Flight Day 5 of STS-111, the crew of Endeavour (Kenneth Cockrell, Commander; Paul Lockhart, Pilot; Franklin Chang-Diaz, Mission Specialist; Philippe Perrin, Mission Specialist) and the Expedition 5 crew (Valery Korzun, Commander; Peggy Whitson, Flight Engineer; Sergei Treschev, Flight Engineer) and Expedition 4 crew (Yury Onufrienko, Commander; Daniel Bursch, Flight Engineer; Carl Walz, Flight Engineer) are aboard the docked Endeavour and International Space Station (ISS). The ISS cameras show the station in orbit above the North African coast and the Mediterranean Sea, as Chang-Diaz and Perrin prepare for an EVA (extravehicular activity). The Canadarm 2 robotic arm is shown in motion in a wide-angle shot. The Quest Airlock is shown as it opens to allow the astronauts to exit the station. As orbital sunrise approaches, the astronauts are shown already engaged in their EVA activities. Chang-Diaz is shown removing the PDGF (Power and Data Grapple Fixture) from Endeavour's payload bay as Perrin prepares its installation position in the ISS's P6 truss structure; The MPLM is also visible. Following the successful detachment of the PDGF, Chang-Diaz carries it to the installation site as he is transported there by the robotic arm. The astronauts are then shown installing the PDGF, with video provided by helmet-mounted cameras. Following this task, the astronauts are shown preparing the MBS (Mobile Base System) for grappling by the robotic arm. It will be mounted to the Mobile Transporter (MT), which will traverse a railroad-like system along the truss structures of the ISS, and support astronaut activities as well as provide an eventual mobile base for the robotic arm.

  5. Evaluation of smart video for transit event detection : final report.

    DOT National Transportation Integrated Search

    2009-06-01

    Transit agencies are increasingly using video cameras to fight crime and terrorism. As the volume of video data increases, the existing digital video surveillance systems provide the infrastructure only to capture, store and distribute video, while l...

  6. Digital Video Cameras for Brainstorming and Outlining: The Process and Potential

    ERIC Educational Resources Information Center

    Unger, John A.; Scullion, Vicki A.

    2013-01-01

    This "Voices from the Field" paper presents methods and participant-exemplar data for integrating digital video cameras into the writing process across postsecondary literacy contexts. The methods and participant data are part of an ongoing action-based research project systematically designed to bring research and theory into practice…

  7. Studying medical communication with video vignettes: a randomized study on how variations in video-vignette introduction format and camera focus influence analogue patients' engagement.

    PubMed

    Visser, Leonie N C; Bol, Nadine; Hillen, Marij A; Verdam, Mathilde G E; de Haes, Hanneke C J M; van Weert, Julia C M; Smets, Ellen M A

    2018-01-19

    Video vignettes are used to test the effects of physicians' communication on patient outcomes. Methodological choices in video-vignette development may have far-reaching consequences for participants' engagement with the video, and thus for the ecological validity of this design. To supplement the scant evidence in this field, this study tested how variations in video-vignette introduction format and camera focus influence participants' engagement with a video vignette showing a bad-news consultation. Introduction format (A = audiovisual vs. B = written) and camera focus (1 = the physician only, 2 = the physician and the patient at neutral moments alternately, 3 = the physician and the patient at emotional moments alternately) were varied in a randomized 2 × 3 between-subjects design. One hundred eighty-one students were randomly assigned to watch one of the six resulting video-vignette conditions as so-called analogue patients, i.e., they were instructed to imagine themselves in the video patient's situation. Four dimensions of self-reported engagement were assessed retrospectively. Emotional engagement was additionally measured by recording participants' electrodermal and cardiovascular activity continuously while watching. Analyses of variance were used to test the effects of introduction format, camera focus, and their interaction. The audiovisual introduction induced a stronger blood pressure response while participants watched the introduction (p = 0.048, ηp² = 0.05) and the consultation part of the vignette (p = 0.051, ηp² = 0.05), when compared to the written introduction. With respect to camera focus, the variant focusing on the patient at emotional moments evoked a higher level of electrodermal activity (p = 0.003, ηp² = 0.06) than the other two variants. Furthermore, an interaction effect was shown for self-reported emotional engagement (p = 0.045, ηp² = 0.04): the physician-only variant resulted in lower emotional engagement if the vignette was preceded by the audiovisual introduction. No effects were shown on the other dimensions of self-reported engagement. Our findings imply that an audiovisual introduction combined with a camera focus that alternates to the patient's emotional moments yields the highest levels of emotional engagement in analogue patients. This evidence can inform methodological decisions during the development of video vignettes and thereby enhance the ecological validity of future video-vignette studies.
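    For readers unfamiliar with the 2 × 3 between-subjects design, the analysis reduces to a two-way ANOVA with introduction format and camera focus as factors. A generic sketch using statsmodels follows; the data frame, column names, and values are our own illustration, not the study's code or data:

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical data: one row per participant.
df = pd.DataFrame({
    "engagement": [3.2, 4.1, 3.8, 4.5, 2.9, 3.7, 4.0, 3.3],
    "intro": ["audiovisual", "written"] * 4,                    # factor 1 (2 levels)
    "focus": ["physician", "neutral", "emotional", "physician",
              "neutral", "emotional", "physician", "neutral"],  # factor 2 (3 levels)
})

# Two-way ANOVA with interaction, mirroring the 2 x 3 design.
model = ols("engagement ~ C(intro) * C(focus)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```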

  8. Image system for three dimensional, 360°, time sequence surface mapping of moving objects

    DOEpatents

    Lu, Shin-Yee

    1998-01-01

    A three-dimensional motion camera system comprises a light projector placed between two synchronous video cameras, all focused on an object-of-interest. The light projector shines a sharp pattern of vertical lines (a Ronchi ruling) on the object-of-interest; the lines appear to be bent differently to each camera by virtue of the surface shape of the object-of-interest and the relative geometry of the cameras, light projector, and object-of-interest. Each video frame is captured in computer memory and analyzed. Since the relative geometry is known and the system pre-calibrated, the unknown three-dimensional shape of the object-of-interest can be solved for by matching the intersections of the projected light lines with orthogonal epipolar lines corresponding to horizontal rows in the video camera frames. A surface reconstruction is made and displayed on a monitor screen. For 360° all-around coverage of the object-of-interest, two additional sets of light projectors and corresponding cameras are distributed about 120° apart from one another.

  9. Image system for three dimensional, 360°, time sequence surface mapping of moving objects

    DOEpatents

    Lu, S.Y.

    1998-12-22

    A three-dimensional motion camera system comprises a light projector placed between two synchronous video cameras, all focused on an object-of-interest. The light projector shines a sharp pattern of vertical lines (a Ronchi ruling) on the object-of-interest; the lines appear to be bent differently to each camera by virtue of the surface shape of the object-of-interest and the relative geometry of the cameras, light projector, and object-of-interest. Each video frame is captured in computer memory and analyzed. Since the relative geometry is known and the system pre-calibrated, the unknown three-dimensional shape of the object-of-interest can be solved for by matching the intersections of the projected light lines with orthogonal epipolar lines corresponding to horizontal rows in the video camera frames. A surface reconstruction is made and displayed on a monitor screen. For 360° all-around coverage of the object-of-interest, two additional sets of light projectors and corresponding cameras are distributed about 120° apart from one another. 20 figs.
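    Both patent records rest on standard structured-light triangulation. In the simplest rectified two-camera geometry (a textbook relation, not language from the patents), the depth Z of a matched line intersection follows from its disparity d between the two views, the focal length f, and the camera baseline b:

```latex
Z = \frac{f\,b}{d}
```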

  10. High-Definition Television (HDTV) Images for Earth Observations and Earth Science Applications

    NASA Technical Reports Server (NTRS)

    Robinson, Julie A.; Holland, S. Douglas; Runco, Susan K.; Pitts, David E.; Whitehead, Victor S.; Andrefouet, Serge M.

    2000-01-01

    As part of Detailed Test Objective 700-17A, astronauts acquired Earth observation images from orbit using a high-definition television (HDTV) camcorder. Here we provide a summary of qualitative findings following completion of tests during missions STS (Space Transportation System)-93 and STS-99. We compared HDTV imagery stills to images taken using payload bay video cameras, a Hasselblad film camera, and an electronic still camera. We also evaluated the potential for motion video observations of changes in sunlight and the use of multi-aspect viewing to image aerosols. Spatial resolution and color quality are far superior in HDTV images compared to National Television System Committee (NTSC) video images. Thus, HDTV provides the first viable option for video-based remote sensing observations of Earth from orbit. Although under ideal conditions HDTV images have less spatial resolution than medium-format film cameras such as the Hasselblad, under some on-orbit conditions the HDTV images acquired compared favorably with the Hasselblad. Of particular note was the quality of color reproduction in the HDTV images. HDTV and electronic still camera (ESC) images were not compared with matched fields of view, and so spatial resolution could not be compared for the two image types; however, the color reproduction of the HDTV stills was truer than the colors in the ESC images. As HDTV becomes the operational video standard for the Space Shuttle and Space Station, it has great potential as a source of Earth-observation data. Planning for the conversion from NTSC to HDTV video standards should include planning for Earth data archiving and distribution.

  11. Curiosity Rover View of Alluring Martian Geology Ahead

    NASA Image and Video Library

    2015-08-05

    A southward-looking panorama combining images from both cameras of the Mast Camera (Mastcam) instrument on NASA's Curiosity Mars Rover shows diverse geological textures on Mount Sharp. Three years after landing on Mars, the mission is investigating this layered mountain for evidence about changes in Martian environmental conditions, from an ancient time when conditions were favorable for microbial life to the much-drier present. Gravel and sand ripples fill the foreground, typical of terrains that Curiosity traversed to reach Mount Sharp from its landing site. Outcrops in the midfield are of two types: dust-covered, smooth bedrock that forms the base of the mountain, and sandstone ridges that shed boulders as they erode. Rounded buttes in the distance contain sulfate minerals, perhaps indicating a change in the availability of water when they formed. Some of the layering patterns on higher levels of Mount Sharp in the background are tilted at different angles than others, evidence of complicated relationships still to be deciphered. The scene spans from southeastward at left to southwestward at right. The component images were taken on April 10 and 11, 2015, the 952nd and 953rd Martian days (or sols) since the rover's landing on Mars on Aug. 6, 2012, UTC (Aug. 5, PDT). Images in the central part of the panorama are from Mastcam's right-eye camera, which is equipped with a 100-millimeter-focal-length telephoto lens. Images used in outer portions, including the most distant portions of the mountain in the scene, were taken with Mastcam's left-eye camera, using a wider-angle, 34-millimeter lens. http://photojournal.jpl.nasa.gov/catalog/PIA19803

  12. Hand-held photomicroscope

    NASA Technical Reports Server (NTRS)

    Zabower, H. R. (Inventor)

    1973-01-01

    A small, lightweight, compact, hand-held photomicroscope provides simultaneous viewing and photographing, with adjustable specimen illumination and an exchangeable camera format. The novel photomicroscope comprises a main housing having a top plate, bottom plate, and side walls. The objective lens is mounted on the top plate in an inverted manner relative to the normal type of mounting. The specimen holder has an adjusting mechanism for moving the specimen vertically along an axis extending through the objective lens, as well as transverse to that axis. The lens system splits the beam of light into two paths, one to the eyepiece and the other to a camera mounting. A light source is mounted on the top plate and directs light onto the specimen. A rheostat device is mounted on the top plate and coupled to the power supply for the light source so that the intensity of the light may be varied.

  13. An integrated multispectral video and environmental monitoring system for the study of coastal processes and the support of beach management operations

    NASA Astrophysics Data System (ADS)

    Ghionis, George; Trygonis, Vassilis; Karydis, Antonis; Vousdoukas, Michalis; Alexandrakis, George; Drakopoulos, Panos; Amdreadis, Olympos; Psarros, Fotis; Velegrakis, Antonis; Poulos, Serafim

    2016-04-01

    Effective beach management requires environmental assessments that are based on sound science, are cost-effective, and are available to beach users and managers in an accessible, timely and transparent manner. The most common problems are: 1) the available field data are scarce and of sub-optimal spatio-temporal resolution and coverage; 2) our understanding of local beach processes needs to be improved in order to accurately model/forecast beach dynamics under a changing climate; and 3) the information provided by coastal scientists/engineers in the form of data, models and scientific interpretation is often too complicated to be of direct use by coastal managers/decision makers. A multispectral video system has been developed, consisting of one or more video cameras operating in the visible part of the spectrum, a passive near-infrared (NIR) camera, an active NIR camera system, a thermal infrared camera and a spherical video camera, coupled with innovative image processing algorithms and a telemetric system for the monitoring of coastal environmental parameters. The complete system has the capability to record, process and communicate (in quasi-real time) high-frequency information on shoreline position, wave breaking zones, wave run-up, erosion hot spots along the shoreline, nearshore wave height, turbidity, underwater visibility, wind speed and direction, air and sea temperature, solar radiation, UV radiation, relative humidity, barometric pressure and rainfall. An innovative, remotely controlled interactive visual monitoring system, based on the spherical video camera (with a 360° field of view), combines the video streams from all cameras and can be used by beach managers to monitor (in real time) beach user numbers, flow activities and safety at beaches of high touristic value. The high-resolution near-infrared cameras permit 24-hour monitoring of beach processes, while the thermal camera provides information on beach sediment temperature and moisture, can detect upwelling in the nearshore zone, and enhances the safety of beach users. All data can be presented in real or quasi-real time and are stored for future analysis and for the training/validation of coastal process models. Acknowledgements: This work was supported by the project BEACHTOUR (11SYN-8-1466) of the Operational Program "Cooperation 2011, Competitiveness and Entrepreneurship", co-funded by the European Regional Development Fund and the Greek Ministry of Education and Religious Affairs.

  14. First results of the multi-purpose real-time processing video camera system on the Wendelstein 7-X stellarator and implications for future devices

    NASA Astrophysics Data System (ADS)

    Zoletnik, S.; Biedermann, C.; Cseh, G.; Kocsis, G.; König, R.; Szabolics, T.; Szepesi, T.; Wendelstein 7-X Team

    2018-01-01

    A special video camera has been developed for the 10-camera overview video system of the Wendelstein 7-X (W7-X) stellarator, taking into account the multiple application needs and the limitations arising from this complex long-pulse superconducting stellarator experiment. The event detection intelligent camera (EDICAM) uses a special 1.3 Mpixel CMOS sensor with non-destructive read capability, which enables fast monitoring of smaller regions of interest (ROIs) even during long exposures. The camera can perform simple data evaluation algorithms (minimum/maximum, comparison of the mean to preset levels) on the ROI data, the results of which can dynamically change the readout process and generate output signals. Multiple EDICAM cameras were operated in the first campaign of W7-X, and their capabilities were explored in the real environment. The data prove that the camera can be used for taking long-exposure (10-100 ms) overview images of the plasma while sub-ms monitoring, and even multi-camera correlated edge-plasma turbulence measurements of smaller areas, are performed in parallel. The latter revealed that filamentary turbulence structures extend between neighboring modules of the stellarator. Considerations emerging for future upgrades of this system, and for similar setups on future long-pulse fusion experiments such as ITER, are discussed.
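    The ROI-level evaluation described above amounts to a simple per-readout decision rule. A minimal sketch of the idea follows; the thresholds, names, and trigger mechanism are our assumptions for illustration, not EDICAM firmware:

```python
import numpy as np

def evaluate_roi(roi_pixels, low_level, high_level):
    """Simple ROI statistics of the kind an intelligent camera can compute
    in-line: min/max and a mean-versus-level comparison."""
    mean = float(np.mean(roi_pixels))
    stats = {"min": int(roi_pixels.min()), "max": int(roi_pixels.max()), "mean": mean}
    # A level crossing could, e.g., raise an output signal or switch the
    # readout to faster monitoring of this ROI.
    stats["trigger"] = mean < low_level or mean > high_level
    return stats

roi = np.random.default_rng(0).integers(100, 200, size=(32, 32))
print(evaluate_roi(roi, low_level=90, high_level=180))
```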

  15. Coordinating High-Resolution Traffic Cameras: Developing Intelligent, Collaborating Cameras for Transportation Security and Communications

    DOT National Transportation Integrated Search

    2015-08-01

    Cameras are used prolifically to monitor transportation incidents, infrastructure, and congestion. Traditional camera systems often require human monitoring and only offer low-resolution video. Researchers for the Exploratory Advanced Research (EAR) ...

  16. "Ipsilateral, high, single-hand, sideways"-Ruijin rule for camera assistant in uniportal video-assisted thoracoscopic surgery.

    PubMed

    Gao, Taotao; Xiang, Jie; Jin, Runsen; Zhang, Yajie; Wu, Han; Li, Hecheng

    2016-10-01

    The camera assistant plays a very important role in uniportal video-assisted thoracoscopic surgery (VATS), acting as the eye of the surgeon and providing the VATS team with a stable and clear operating view. Thus, a good assistant should cooperate with the surgeon and manipulate the camera expertly to ensure eye-hand coordination. We have performed more than 100 uniportal VATS procedures in the Department of Thoracic Surgery at Ruijin Hospital. Based on our experience, we summarize the method of holding the camera, known as "ipsilateral, high, single-hand, sideways", which largely improves the comfort and fluency of surgery.

  17. Video Altimeter and Obstruction Detector for an Aircraft

    NASA Technical Reports Server (NTRS)

    Delgado, Frank J.; Abernathy, Michael F.; White, Janis; Dolson, William R.

    2013-01-01

    Video-based altimetric and obstruction detection systems for aircraft have been partially developed. The hardware of a system of this type includes a downward-looking video camera, a video digitizer, a Global Positioning System receiver or other means of measuring the aircraft velocity relative to the ground, a gyroscope-based or other attitude-determination subsystem, and a computer running altimetric and/or obstruction-detection software. From the digitized video data, the altimetric software computes the pixel velocity in an appropriate part of the video image and the corresponding angular relative motion of the ground within the field of view of the camera. Then by use of trigonometric relationships among the aircraft velocity, the attitude of the camera, the angular relative motion, and the altitude, the software computes the altitude. The obstruction-detection software performs somewhat similar calculations as part of a larger task in which it uses the pixel velocity data from the entire video image to compute a depth map, which can be correlated with a terrain map, showing locations of potential obstructions. The depth map can be used as a real-time hazard display and/or to update an obstruction database.
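
    The trigonometric core is simplest for the nadir-pointing, level-flight case: ground features sweeping across the image at angular rate ω (pixel velocity times the per-pixel angle) while the aircraft moves at ground speed v imply an altitude h = v/ω. A minimal sketch under those simplifying assumptions (the actual software must also fold in the camera attitude):

        def altitude_from_flow(pixel_velocity_px_s, ifov_rad, ground_speed_m_s):
            """Altitude for a nadir-pointing camera in level flight.

            pixel_velocity_px_s -- apparent image motion of the ground, pixels/s
            ifov_rad            -- angle subtended by one pixel, radians
            ground_speed_m_s    -- aircraft velocity over ground (e.g. from GPS)
            """
            angular_rate = pixel_velocity_px_s * ifov_rad  # omega, rad/s
            return ground_speed_m_s / angular_rate         # h = v / omega

        # Example: 80 px/s apparent motion, 0.5 mrad pixels, 60 m/s ground speed
        print(altitude_from_flow(80.0, 0.5e-3, 60.0))      # -> 1500.0 m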

  18. High Speed Digital Camera Technology Review

    NASA Technical Reports Server (NTRS)

    Clements, Sandra D.

    2009-01-01

    A High Speed Digital Camera Technology Review (HSD Review) is being conducted to evaluate the state-of-the-shelf in this rapidly progressing industry. Five HSD cameras supplied by four camera manufacturers participated in a Field Test during the Space Shuttle Discovery STS-128 launch. Each camera was also subjected to Bench Tests in the ASRC Imaging Development Laboratory. Evaluation of the data from the Field and Bench Tests is underway. Representatives from the imaging communities at NASA/KSC and the Optical Systems Group are participating as reviewers. A High Speed Digital Video Camera Draft Specification was updated to address Shuttle engineering imagery requirements based on findings from this HSD Review. This draft specification will serve as the template for a High Speed Digital Video Camera Specification to be developed for the wider OSG imaging community under OSG Task OS-33.

  19. Motion-Blur-Free High-Speed Video Shooting Using a Resonant Mirror

    PubMed Central

    Inoue, Michiaki; Gu, Qingyi; Takaki, Takeshi; Ishii, Idaku; Tajima, Kenji

    2017-01-01

    This study proposes a novel concept of actuator-driven frame-by-frame intermittent tracking for motion-blur-free video shooting of fast-moving objects. The camera frame and shutter timings are synchronized with a free-vibration-type actuator vibrating with a large amplitude at hundreds of hertz, so that motion blur can be significantly reduced in free-viewpoint, high-frame-rate video shooting of fast-moving objects by drawing the maximum performance from the actuator. We develop a prototype of a motion-blur-free video shooting system by implementing our frame-by-frame intermittent tracking algorithm on a high-speed video camera system with a resonant mirror vibrating at 750 Hz. It can capture 1024 × 1024 images of fast-moving objects at 750 fps with an exposure time of 0.33 ms without motion blur. Several experimental results for fast-moving objects verify that our proposed method can reduce image degradation from motion blur without decreasing the camera exposure time. PMID:29109385

  20. A reaction-diffusion-based coding rate control mechanism for camera sensor networks.

    PubMed

    Yamamoto, Hiroshi; Hyodo, Katsuya; Wakamiya, Naoki; Murata, Masayuki

    2010-01-01

    A wireless camera sensor network is useful for surveillance and monitoring, owing to its visibility and easy deployment. However, it suffers from the limited capacity of wireless communication, and a network is easily overwhelmed by the considerable volume of video traffic. In this paper, we propose an autonomous video coding rate control mechanism where each camera sensor node can autonomously determine its coding rate in accordance with the location and velocity of target objects. For this purpose, we adopted a biological model, the reaction-diffusion model, inspired by the similarity between biological spatial patterns and the spatial distribution of video coding rate. Through simulation and practical experiments, we verify the effectiveness of our proposal.
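
    The paper's specific mechanism is not reproduced here, but the general shape of a reaction-diffusion rate controller can be sketched: each node updates an activator/inhibitor pair from its own state plus diffusion from neighboring nodes, then maps the resulting concentration to a coding rate. The Gray-Scott-style update and the rate mapping below are illustrative assumptions.

        import numpy as np

        def rd_step(u, v, du=0.16, dv=0.08, f=0.035, k=0.06, dt=1.0):
            """One explicit Gray-Scott step on a ring of nodes; the neighbor
            coupling (discrete Laplacian) stands in for wireless exchange."""
            lap_u = np.roll(u, 1) + np.roll(u, -1) - 2 * u
            lap_v = np.roll(v, 1) + np.roll(v, -1) - 2 * v
            uvv = u * v * v
            u = u + dt * (du * lap_u - uvv + f * (1 - u))
            v = v + dt * (dv * lap_v + uvv - (f + k) * v)
            return u, v

        n = 32                         # camera nodes arranged on a ring
        u, v = np.ones(n), np.zeros(n)
        v[14:18] = 0.5                 # stimulus: a target detected near nodes 14-17
        for _ in range(2000):
            u, v = rd_step(u, v)
        rate_kbps = 100 + 900 * v / (v.max() + 1e-9)  # concentration -> coding rate
        print(np.round(rate_kbps).astype(int))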

  1. Moving Object Detection on a Vehicle Mounted Back-Up Camera

    PubMed Central

    Kim, Dong-Sun; Kwon, Jinsan

    2015-01-01

    In the detection of moving objects from vision sources one usually assumes that the scene has been captured by stationary cameras. In case of backing up a vehicle, however, the camera mounted on the vehicle moves according to the vehicle’s movement, resulting in ego-motions on the background. This results in mixed motion in the scene and makes it difficult to distinguish between the target objects and background motions. Without further treatment of the mixed motion, traditional fixed-viewpoint object detection methods will produce many false-positive detections. In this paper, we suggest a procedure to be used with traditional moving object detection methods that relaxes the stationary-camera restriction by introducing additional steps before and after the detection. We also describe the implementation on an FPGA platform along with the algorithm. The target application is a road vehicle’s rear-view camera system. PMID:26712761
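
    The pre-detection step, compensating the camera's ego-motion before a conventional frame-difference detector runs, can be sketched with standard OpenCV calls: estimate the dominant global transform between consecutive frames, warp the previous frame onto the current one, and difference the aligned pair. This is a generic illustration of the idea, not the paper's FPGA pipeline.

        import cv2
        import numpy as np

        def moving_object_mask(prev_gray, curr_gray, thresh=25):
            """Detect independent motion under a moving camera.

            1) Estimate the dominant (ego) motion as an affine transform
               from sparse feature correspondences.
            2) Warp the previous frame to cancel that motion.
            3) Difference the aligned frames; the residual is object motion.
            """
            pts_prev = cv2.goodFeaturesToTrack(prev_gray, maxCorners=300,
                                               qualityLevel=0.01, minDistance=8)
            pts_curr, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray,
                                                           pts_prev, None)
            good = status.ravel() == 1
            M, _ = cv2.estimateAffinePartial2D(pts_prev[good], pts_curr[good],
                                               method=cv2.RANSAC)
            h, w = curr_gray.shape
            stabilized_prev = cv2.warpAffine(prev_gray, M, (w, h))
            diff = cv2.absdiff(curr_gray, stabilized_prev)
            _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
            return mask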

  2. The Mars Science Laboratory Curiosity rover Mastcam instruments: Preflight and in-flight calibration, validation, and data archiving

    NASA Astrophysics Data System (ADS)

    Bell, J. F.; Godber, A.; McNair, S.; Caplinger, M. A.; Maki, J. N.; Lemmon, M. T.; Van Beek, J.; Malin, M. C.; Wellington, D.; Kinch, K. M.; Madsen, M. B.; Hardgrove, C.; Ravine, M. A.; Jensen, E.; Harker, D.; Anderson, R. B.; Herkenhoff, K. E.; Morris, R. V.; Cisneros, E.; Deen, R. G.

    2017-07-01

    The NASA Curiosity rover Mast Camera (Mastcam) system is a pair of fixed-focal length, multispectral, color CCD imagers mounted 2 m above the surface on the rover's remote sensing mast, along with associated electronics and an onboard calibration target. The left Mastcam (M-34) has a 34 mm focal length, an instantaneous field of view (IFOV) of 0.22 mrad, and a FOV of 20° × 15° over the full 1648 × 1200 pixel span of its Kodak KAI-2020 CCD. The right Mastcam (M-100) has a 100 mm focal length, an IFOV of 0.074 mrad, and a FOV of 6.8° × 5.1° using the same detector. The cameras are separated by 24.2 cm on the mast, allowing stereo images to be obtained at the resolution of the M-34 camera. Each camera has an eight-position filter wheel, enabling it to take Bayer pattern red, green, and blue (RGB) "true color" images, multispectral images in nine additional bands spanning 400-1100 nm, and images of the Sun in two colors through neutral density-coated filters. An associated Digital Electronics Assembly provides command and data interfaces to the rover, 8 Gb of image storage per camera, 11 bit to 8 bit companding, JPEG compression, and acquisition of high-definition video. Here we describe the preflight and in-flight calibration of Mastcam images, the ways that they are being archived in the NASA Planetary Data System, and the ways that calibration refinements are being developed as the investigation progresses on Mars. We also provide some examples of data sets and analyses that help to validate the accuracy and precision of the calibration.
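
    The quoted optical figures are mutually consistent: the IFOV is the detector pixel pitch divided by the focal length (7.4 μm for the KAI-2020, a datasheet value assumed here), and the FOV is the IFOV times the pixel span, as a quick check shows.

        import math

        pixel_pitch_m = 7.4e-6  # KAI-2020 pixel pitch (assumed datasheet value)
        for name, focal_m in [("M-34", 0.034), ("M-100", 0.100)]:
            ifov_rad = pixel_pitch_m / focal_m
            fov_x = math.degrees(1648 * ifov_rad)
            fov_y = math.degrees(1200 * ifov_rad)
            print(f"{name}: IFOV = {ifov_rad * 1e3:.3f} mrad, "
                  f"FOV = {fov_x:.1f} deg x {fov_y:.1f} deg")
        # M-34:  IFOV = 0.218 mrad, FOV = 20.5 x 15.0 deg (quoted: 0.22 mrad, 20 x 15)
        # M-100: IFOV = 0.074 mrad, FOV = 7.0 x 5.1 deg   (quoted: 0.074 mrad, 6.8 x 5.1)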

  3. Mars Science Laboratory Engineering Cameras

    NASA Technical Reports Server (NTRS)

    Maki, Justin N.; Thiessen, David L.; Pourangi, Ali M.; Kobzeff, Peter A.; Lee, Steven W.; Dingizian, Arsham; Schwochert, Mark A.

    2012-01-01

    NASA's Mars Science Laboratory (MSL) Rover, which launched to Mars in 2011, is equipped with a set of 12 engineering cameras. These cameras are build-to-print copies of the Mars Exploration Rover (MER) cameras, which were sent to Mars in 2003. The engineering cameras weigh less than 300 grams each and use less than 3 W of power. Images returned from the engineering cameras are used to navigate the rover on the Martian surface, deploy the rover robotic arm, and ingest samples into the rover sample processing system. The navigation cameras (Navcams) are mounted to a pan/tilt mast and have a 45-degree square field of view (FOV) with a pixel scale of 0.82 mrad/pixel. The hazard avoidance cameras (Hazcams) are body-mounted to the rover chassis in the front and rear of the vehicle and have a 124-degree square FOV with a pixel scale of 2.1 mrad/pixel. All of the cameras utilize a frame-transfer CCD (charge-coupled device) with a 1024 × 1024 imaging region and red/near-IR bandpass filters centered at 650 nm. The MSL engineering cameras are grouped into two sets of six: one set of cameras is connected to rover computer A and the other set is connected to rover computer B. The MSL rover carries 8 Hazcams and 4 Navcams.

  4. Spatial calibration of an optical see-through head mounted display

    PubMed Central

    Gilson, Stuart J.; Fitzgibbon, Andrew W.; Glennerster, Andrew

    2010-01-01

    We present here a method for calibrating an optical see-through Head Mounted Display (HMD) using techniques usually applied to camera calibration (photogrammetry). Using a camera placed inside the HMD to take pictures simultaneously of a tracked object and features in the HMD display, we could exploit established camera calibration techniques to recover both the intrinsic and extrinsic properties of the HMD (width, height, focal length, optic centre and principal ray of the display). Our method gives low re-projection errors and, unlike existing methods, involves no time-consuming and error-prone human measurements, nor any prior estimates about the HMD geometry. PMID:18599125
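
    The photogrammetric machinery the method borrows is the standard pinhole calibration found in libraries such as OpenCV; a minimal sketch of recovering intrinsics from several views of a known pattern (a chessboard here, standing in for the tracked object and display features the authors image through the HMD; file names are hypothetical):

        import cv2
        import numpy as np

        pattern = (8, 6)   # inner corners of the calibration chessboard
        objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
        objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * 0.025

        obj_points, img_points = [], []
        for fname in ["view0.png", "view1.png", "view2.png"]:  # hypothetical captures
            gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
            found, corners = cv2.findChessboardCorners(gray, pattern)
            if found:
                obj_points.append(objp)
                img_points.append(corners)

        # calibrateCamera recovers the intrinsics (focal length, optic centre,
        # distortion) and a per-view pose (extrinsics), plus re-projection error
        rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
            obj_points, img_points, gray.shape[::-1], None, None)
        print("re-projection RMS (px):", rms)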

  5. Using a Digital Video Camera to Study Motion

    ERIC Educational Resources Information Center

    Abisdris, Gil; Phaneuf, Alain

    2007-01-01

    To illustrate how a digital video camera can be used to analyze various types of motion, this simple activity analyzes the motion and measures the acceleration due to gravity of a basketball in free fall. Although many excellent commercially available data loggers and software can accomplish this task, this activity requires almost no financial…

  6. STS-32 photographic equipment (cameras, lenses, film magazines) on flight deck

    NASA Technical Reports Server (NTRS)

    1990-01-01

    STS-32 photographic equipment is displayed on the aft flight deck of Columbia, Orbiter Vehicle (OV) 102. On the payload station are a dual camera mount with two handheld HASSELBLAD cameras, camera lenses, and film magazines. This array of equipment will be used to record onboard activities and observations of the Earth's surface.

  7. Real-time inspection by submarine images

    NASA Astrophysics Data System (ADS)

    Tascini, Guido; Zingaretti, Primo; Conte, Giuseppe

    1996-10-01

    A real-time application of computer vision concerning tracking and inspection of a submarine pipeline is described. The objective is to develop automatic procedures for supporting human operators in the real-time analysis of images acquired by means of cameras mounted on underwater remotely operated vehicles (ROVs). Implementation of such procedures gives rise to a human-machine system for underwater pipeline inspection that can automatically detect and signal the presence of the pipe, of its structural or accessory elements, and of dangerous or alien objects in its neighborhood. The possibility of modifying the image acquisition rate in the simulations performed on video-recorded images is used to prove that the system performs all necessary processing with acceptable robustness, working in real time up to a speed of about 2.5 kn, well above what actual ROVs and safety constraints allow.

  8. Solid state electro-optic color filter and iris

    NASA Technical Reports Server (NTRS)

    1975-01-01

    A pair of solid state electro-optic filters (SSEF) in a binocular holder were designed and fabricated for evaluation of field sequential stereo TV applications. The electronic circuitry for use with the stereo goggles was designed and fabricated, requiring only an external video input. A polarizing screen suitable for attachment to various size TV monitors for use in conjunction with the stereo goggles was designed and fabricated. An improved engineering model 2 filter was fabricated using the bonded holder technique developed previously and integrated to a GCTA color TV camera. An engineering model color filter was fabricated and assembled using PLZT control elements. In addition, a ruggedized holder assembly was designed, fabricated and tested. This assembly provides electrical contacts, high voltage protection, and support for the fragile PLZT disk, and also permits mounting and optical alignment of the associated polarizers.

  9. Sensor data fusion for automated threat recognition in manned-unmanned infantry platoons

    NASA Astrophysics Data System (ADS)

    Wildt, J.; Varela, M.; Ulmke, M.; Brüggermann, B.

    2017-05-01

    To support a dismounted infantry platoon during deployment, we team it with several unmanned aerial and ground vehicles (UAVs and UGVs, respectively). The unmanned systems integrate seamlessly into the infantry platoon, providing automated reconnaissance during movement while keeping formation, as well as conducting close-range reconnaissance during halts. The sensor data each unmanned system provides is continuously analyzed in real time by specialized algorithms, detecting humans in live video from UAV-mounted infrared cameras and detecting and locating gunshots with acoustic sensors. All recognized threats are fused into a consistent situational picture in real time, available to platoon and squad leaders as well as higher-level command and control (C2) systems. This gives friendly forces local information superiority and increased situational awareness without the need to constantly monitor the unmanned systems and sensor data.

  10. Using a digital video camera to examine coupled oscillations

    NASA Astrophysics Data System (ADS)

    Greczylo, T.; Debowska, E.

    2002-07-01

    In our previous paper (Debowska E, Jakubowicz S and Mazur Z 1999 Eur. J. Phys. 20 89-95), thanks to the use of an ultrasound distance sensor, experimental verification of the solution of the Lagrange equations for longitudinal oscillations of the Wilberforce pendulum was shown. In this paper the sensor and a digital video camera were used to monitor and measure the changes of both of the pendulum's coordinates (vertical displacement and angle of rotation) simultaneously. The experiments were performed with the aid of the integrated software package COACH 5. Fourier analysis in Microsoft® Excel 97 was used to find normal modes in each case of the measured oscillations. Comparison of the results with those presented in our previous paper (as given above) leads to the conclusion that a digital video camera is a powerful tool for measuring coupled oscillations of a Wilberforce pendulum. The most important conclusion is that a video camera is able to do something more than merely register interesting physical phenomena - it can be used to perform measurements of physical quantities at an advanced level.
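
    Extracting the two normal-mode frequencies from the measured displacement trace amounts to a discrete Fourier transform and peak-picking, today a few lines of NumPy rather than an Excel worksheet; a sketch with synthetic Wilberforce-like beating data (frame rate and mode frequencies are hypothetical):

        import numpy as np

        fs = 30.0                                  # video frame rate, Hz (assumed)
        t = np.arange(0, 60, 1 / fs)
        f1, f2 = 0.55, 0.65                        # hypothetical normal-mode frequencies
        z = np.cos(2*np.pi*f1*t) + np.cos(2*np.pi*f2*t)   # beating vertical displacement

        spectrum = np.abs(np.fft.rfft(z * np.hanning(z.size)))
        freqs = np.fft.rfftfreq(z.size, d=1/fs)
        peaks = freqs[np.argsort(spectrum)[-2:]]   # two strongest spectral lines
        print(sorted(np.round(peaks, 2)))          # -> [0.55, 0.65]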

  11. Optical Head-Mounted Computer Display for Education, Research, and Documentation in Hand Surgery.

    PubMed

    Funk, Shawn; Lee, Donald H

    2016-01-01

    Intraoperative photography and video capture are important for the hand surgeon. Recently, optical head-mounted computer displays have been introduced as a means of capturing photographs and videos. In this article, we discuss this new technology and review its potential use in hand surgery. Copyright © 2016 American Society for Surgery of the Hand. Published by Elsevier Inc. All rights reserved.

  12. Remote presence proctoring by using a wireless remote-control videoconferencing system.

    PubMed

    Smith, C Daniel; Skandalakis, John E

    2005-06-01

    Remote presence in an operating room to allow an experienced surgeon to proctor another surgeon has been promised through robotics and telesurgery solutions. Although several such systems have been developed and commercialized, little progress has been made using telesurgery for anything more than live demonstrations of surgery. This pilot project explored the use of a new videoconferencing capability to determine if it offers advantages over existing systems. The videoconferencing system used is a PC-based system with a flat-screen monitor and an attached camera, mounted on a remotely controlled platform. This device is controlled from a remotely placed PC-based videoconferencing computer outfitted with a joystick. Using the public Internet and a wireless router at the client site, a surgeon at the control station can manipulate the videoconferencing system. Controls include navigating the unit around the room and moving the flat-screen/camera portion like a head looking up/down and right/left. This system (InTouch Medical, Santa Barbara, CA) was used to proctor medical students during an anatomy class cadaver dissection. The ability of the remote surgeon to effectively monitor the students' dissections and direct their activities was assessed subjectively by the students and the surgeon. This device was very effective at providing a controllable and interactive presence in the anatomy lab. Students felt they were interacting with a person rather than a video screen and quickly forgot that the surgeon was not in the room. The ability to move the device within the environment rather than just observe the environment from multiple fixed camera angles gave the surgeon a similar feel of true presence. A remote-controlled videoconferencing system provides a more real experience for both student and proctor. Future development of such a device could greatly facilitate progress in the implementation of remote presence proctoring.

  13. ONR Workshop on Magnetohydrodynamic Submarine Propulsion (2nd), Held in San Diego, California on November 16-17, 1989

    DTIC Science & Technology

    1990-07-01

    ... electrolytic dissociation of the electrode material, and to provide a good gas evolution which ... torpedo applications seem to be still somewhat out of the ... rod cathode. A unique feature of this preliminary experiment was the use of a prototype gated, intensified video camera. This camera is based on a microprocessor-controlled microchannel-plate intensifier tube. The intensifier tube image is focused on a standard CCD video camera so that the object ...

  14. Encrypting Digital Camera with Automatic Encryption Key Deletion

    NASA Technical Reports Server (NTRS)

    Oakley, Ernest C. (Inventor)

    2007-01-01

    A digital video camera includes an image sensor capable of producing a frame of video data representing an image viewed by the sensor, an image memory for storing video data such as previously recorded frame data in a video frame location of the image memory, a read circuit for fetching the previously recorded frame data, an encryption circuit having an encryption key input connected to receive the previously recorded frame data from the read circuit as an encryption key, an un-encrypted data input connected to receive the frame of video data from the image sensor and an encrypted data output port, and a write circuit for writing a frame of encrypted video data received from the encrypted data output port of the encryption circuit to the memory and overwriting the video frame location storing the previously recorded frame data.
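
    The claimed data flow (derive the key from the previously recorded frame, encrypt the new frame, and overwrite the key frame with the ciphertext so that the write itself destroys the key material) can be sketched as follows; the hash-to-key derivation and the AES-GCM cipher are illustrative assumptions, not the patented circuit.

        # pip install cryptography  (assumed; any symmetric cipher illustrates the flow)
        import hashlib
        import os
        from cryptography.hazmat.primitives.ciphers.aead import AESGCM

        def record_frame(memory, slot, new_frame: bytes):
            """Encrypt new_frame with a key derived from the frame already in
            `slot`, then overwrite that slot -- deleting the key material."""
            key = hashlib.sha256(memory[slot]).digest()  # previous frame -> 256-bit key
            nonce = os.urandom(12)
            ciphertext = AESGCM(key).encrypt(nonce, new_frame, None)
            memory[slot] = nonce + ciphertext            # key frame is gone after this write
            return memory

        memory = {0: b"\x00" * 64}                       # seed "previously recorded frame"
        memory = record_frame(memory, 0, b"frame 1 pixel data")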

  15. Structure-from-Motion for Calibration of a Vehicle Camera System with Non-Overlapping Fields-of-View in an Urban Environment

    NASA Astrophysics Data System (ADS)

    Hanel, A.; Stilla, U.

    2017-05-01

    Vehicle environment cameras observing traffic participants in the area around a car and interior cameras observing the car driver are important data sources for driver intention recognition algorithms. To combine information from both camera groups, a camera system calibration can be performed. Typically, there is no overlapping field-of-view between environment and interior cameras. Often no marked reference points are available in environments large enough to cover a car for the system calibration. In this contribution, a calibration method for a vehicle camera system with non-overlapping camera groups in an urban environment is described. A-priori images of an urban calibration environment taken with an external camera are processed with the structure-from-motion method to obtain an environment point cloud. Images of the vehicle interior, also taken with an external camera, are processed to obtain an interior point cloud. Both point clouds are tied to each other with images of both image sets showing the same real-world objects. The point clouds are transformed into a self-defined vehicle coordinate system describing the vehicle movement. On demand, videos can be recorded with the vehicle cameras in a calibration drive. Poses of vehicle environment cameras and interior cameras are estimated separately using ground control points from the respective point cloud. All poses of a vehicle camera estimated for different video frames are optimized in a bundle adjustment. In an experiment, a point cloud is created from images of an underground car park, as well as a point cloud of the interior of a Volkswagen test car. Videos from two environment cameras and one interior camera are recorded. Results show that the vehicle camera poses are estimated successfully, especially when the car is not moving. Position standard deviations in the centimeter range can be achieved for all vehicle cameras. Relative distances between the vehicle cameras deviate between one and ten centimeters from tachymeter reference measurements.
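
    The per-frame pose estimation from ground control points is the classic Perspective-n-Point problem; a self-contained OpenCV sketch (intrinsics, point coordinates and the ground-truth pose are made up so the example checks itself):

        import cv2
        import numpy as np

        K = np.array([[1200.0, 0.0, 640.0],     # assumed intrinsic calibration
                      [0.0, 1200.0, 360.0],
                      [0.0, 0.0, 1.0]])
        dist = np.zeros(5)                      # distortion already corrected

        # Six ground control points from the point cloud (metres, non-coplanar,
        # chosen to lie in front of the camera)
        object_pts = np.array([[-1.2, 0.4, 8.0], [1.5, -0.3, 9.5], [0.2, 1.1, 12.0],
                               [-0.8, -1.0, 10.5], [2.0, 0.9, 14.0], [0.5, -1.5, 7.0]])

        # Synthesize their detections by projecting with a known ground-truth pose
        rvec_true = np.array([0.02, -0.03, 0.01])
        tvec_true = np.array([0.1, -0.2, 0.5])
        image_pts, _ = cv2.projectPoints(object_pts, rvec_true, tvec_true, K, dist)

        ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, dist)
        print(ok, tvec.ravel())                 # recovers tvec_true to numerical noise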

  16. A highly sensitive underwater video system for use in turbid aquaculture ponds.

    PubMed

    Hung, Chin-Chang; Tsao, Shih-Chieh; Huang, Kuo-Hao; Jang, Jia-Pu; Chang, Hsu-Kuang; Dobbs, Fred C

    2016-08-24

    The turbid, low-light waters characteristic of aquaculture ponds have made it difficult or impossible for previous video cameras to provide clear imagery of the ponds' benthic habitat. We developed a highly sensitive, underwater video system (UVS) for this particular application and tested it in shrimp ponds having turbidities typical of those in southern Taiwan. The system's high-quality video stream and images, together with its camera capacity (up to nine cameras), permit in situ observations of shrimp feeding behavior, shrimp size and internal anatomy, and organic matter residues on pond sediments. The UVS can operate continuously and be focused remotely, a convenience to shrimp farmers. The observations possible with the UVS provide aquaculturists with information critical to provision of feed with minimal waste; determining whether the accumulation of organic-matter residues dictates exchange of pond water; and management decisions concerning shrimp health.

  17. A highly sensitive underwater video system for use in turbid aquaculture ponds

    NASA Astrophysics Data System (ADS)

    Hung, Chin-Chang; Tsao, Shih-Chieh; Huang, Kuo-Hao; Jang, Jia-Pu; Chang, Hsu-Kuang; Dobbs, Fred C.

    2016-08-01

    The turbid, low-light waters characteristic of aquaculture ponds have made it difficult or impossible for previous video cameras to provide clear imagery of the ponds’ benthic habitat. We developed a highly sensitive, underwater video system (UVS) for this particular application and tested it in shrimp ponds having turbidities typical of those in southern Taiwan. The system’s high-quality video stream and images, together with its camera capacity (up to nine cameras), permit in situ observations of shrimp feeding behavior, shrimp size and internal anatomy, and organic matter residues on pond sediments. The UVS can operate continuously and be focused remotely, a convenience to shrimp farmers. The observations possible with the UVS provide aquaculturists with information critical to provision of feed with minimal waste; determining whether the accumulation of organic-matter residues dictates exchange of pond water; and management decisions concerning shrimp health.

  18. A highly sensitive underwater video system for use in turbid aquaculture ponds

    PubMed Central

    Hung, Chin-Chang; Tsao, Shih-Chieh; Huang, Kuo-Hao; Jang, Jia-Pu; Chang, Hsu-Kuang; Dobbs, Fred C.

    2016-01-01

    The turbid, low-light waters characteristic of aquaculture ponds have made it difficult or impossible for previous video cameras to provide clear imagery of the ponds’ benthic habitat. We developed a highly sensitive, underwater video system (UVS) for this particular application and tested it in shrimp ponds having turbidities typical of those in southern Taiwan. The system’s high-quality video stream and images, together with its camera capacity (up to nine cameras), permit in situ observations of shrimp feeding behavior, shrimp size and internal anatomy, and organic matter residues on pond sediments. The UVS can operate continuously and be focused remotely, a convenience to shrimp farmers. The observations possible with the UVS provide aquaculturists with information critical to provision of feed with minimal waste; determining whether the accumulation of organic-matter residues dictates exchange of pond water; and management decisions concerning shrimp health. PMID:27554201

  19. Thermal imagers: from ancient analog video output to state-of-the-art video streaming

    NASA Astrophysics Data System (ADS)

    Haan, Hubertus; Feuchter, Timo; Münzberg, Mario; Fritze, Jörg; Schlemmer, Harry

    2013-06-01

    The video output of thermal imagers stayed constant over almost two decades. When the famous Common Modules were employed, the thermal image was at first presented to the observer in the eyepiece only. In the early 1990s TV cameras were attached and the standard output was CCIR. In the civil camera market, output standards changed to digital formats a decade ago, with digital video streaming being nowadays state-of-the-art. The reasons why the output technique in the thermal world stayed unchanged over such a long time are: the very conservative view of the military community; long planning and turn-around times of programs; and the slower growth in pixel count of TIs in comparison to consumer cameras. With megapixel detectors the CCIR output format is not sufficient any longer. The paper discusses state-of-the-art compression and streaming solutions for TIs.

  20. Composite video and graphics display for multiple camera viewing system in robotics and teleoperation

    NASA Technical Reports Server (NTRS)

    Diner, Daniel B. (Inventor); Venema, Steven C. (Inventor)

    1991-01-01

    A system for real-time video image display for robotics or remote-vehicle teleoperation is described that has at least one robot arm or remotely operated vehicle controlled by an operator through hand-controllers, and one or more television cameras and optional lighting element. The system has at least one television monitor for display of a television image from a selected camera and the ability to select one of the cameras for image display. Graphics are generated with icons of cameras and lighting elements for display surrounding the television image to provide the operator information on: the location and orientation of each camera and lighting element; the region of illumination of each lighting element; the viewed region and range of focus of each camera; which camera is currently selected for image display for each monitor; and when the controller coordinate for said robot arms or remotely operated vehicles have been transformed to correspond to coordinates of a selected or nonselected camera.

  1. Composite video and graphics display for camera viewing systems in robotics and teleoperation

    NASA Technical Reports Server (NTRS)

    Diner, Daniel B. (Inventor); Venema, Steven C. (Inventor)

    1993-01-01

    A system for real-time video image display for robotics or remote-vehicle teleoperation is described that has at least one robot arm or remotely operated vehicle controlled by an operator through hand-controllers, and one or more television cameras and optional lighting element. The system has at least one television monitor for display of a television image from a selected camera and the ability to select one of the cameras for image display. Graphics are generated with icons of cameras and lighting elements for display surrounding the television image to provide the operator information on: the location and orientation of each camera and lighting element; the region of illumination of each lighting element; the viewed region and range of focus of each camera; which camera is currently selected for image display for each monitor; and when the controller coordinate for said robot arms or remotely operated vehicles have been transformed to correspond to coordinates of a selected or nonselected camera.

  2. Economical Video Monitoring of Traffic

    NASA Technical Reports Server (NTRS)

    Houser, B. C.; Paine, G.; Rubenstein, L. D.; Parham, O. Bruce, Jr.; Graves, W.; Bradley, C.

    1986-01-01

    Data compression allows video signals to be transmitted economically on telephone circuits. Telephone lines transmit television signals to a remote traffic-control center. Lines also carry command signals from the center to the TV camera and compressor at the highway site. A video system with television cameras positioned at critical points on highways allows traffic controllers to determine visually, almost immediately, the exact cause of a traffic-flow disruption, e.g., accidents, breakdowns, or spills. Controllers can then dispatch appropriate emergency services and alert motorists to minimize traffic backups.

  3. Real-Time Acquisition and Display of Data and Video

    NASA Technical Reports Server (NTRS)

    Bachnak, Rafic; Chakinarapu, Ramya; Garcia, Mario; Kar, Dulal; Nguyen, Tien

    2007-01-01

    This paper describes the development of a prototype that takes in an analog National Television System Committee (NTSC) video signal generated by a video camera and data acquired by a microcontroller and displays them in real time on a digital panel. An 8051 microcontroller is used to acquire the power dissipation of the display panel, the room temperature, and the camera zoom level. The paper describes the major hardware components and shows how they are interfaced into a functional prototype. Test data results are presented and discussed.

  4. Explosive Transient Camera (ETC) Program

    DTIC Science & Technology

    1991-10-01

    [Block-diagram residue; recoverable labels: voltages, video out, CCD clocking unit, "upstairs" electronics, analog-to-digital processor, commands, data and status information.] ... and transmits digital video and status information to the "downstairs" system. The clocking unit and regulator/driver board are the only CCD-dependent ...

  5. In-Home Exposure Therapy for Veterans with PTSD

    DTIC Science & Technology

    2017-10-01

    ... telehealth (HBT; Veterans stay at home and meet with the therapist using the computer and video cameras), and (3) PE delivered in home, in person (IHIP; the therapist comes to the Veterans' homes for treatment). We will be checking to see ... when providing treatment in homes and through home-based video technology. BODY: Our focus in the past year (30 Sept 2016 – 10 Oct 2017) has been to ...

  6. STS-112 Flight Day 4 Highlights

    NASA Astrophysics Data System (ADS)

    2002-10-01

    On the fourth day of STS-112, its crew (Jeffrey Ashby, Commander; Pamela Melroy, Pilot; David Wolf, Mission Specialist; Piers Sellers, Mission Specialist; Sandra Magnus, Mission Specialist; Fyodor Yurchikhin, Mission Specialist) onboard Atlantis and the Expedition 5 crew (Valery Korzun, Commander; Peggy Whitson, Flight Engineer; Sergei Treschev, Flight Engineer) onboard the International Space Station (ISS) are seen preparing for the installation of the S1 truss structure. Inside the Destiny Laboratory Module, Korzun and other crewmembers are seen as they busily prepare for the work of the day. Sellers dons an oxygen mask and uses an exercise machine in order to purge the nitrogen from his bloodstream, in preparation for an extravehicular activity (EVA). Whitson uses the ISS's Canadarm 2 robotic arm to grapple the S1 truss and remove it from Atlantis' payload bay, with the assistance of Magnus. Using the robotic arm, Whitson slowly maneuvers the 15-ton truss structure into alignment with its attachment point on the starboard side of the S0 truss structure, where the carefully orchestrated mating procedures take place. There is video footage of the entire truss being rotated and positioned by the arm, and the ammonia tank assembly on the structure is visible, with Earth in the background. Following the completion of the second-stage capture, the robotic arm is ungrappled from the truss. Sellers and Wolf are shown exiting the Quest airlock hatch to begin their EVA. They are shown performing a variety of tasks on the now-attached S1 truss structure, including work on the Crew Equipment Translation Cart (CETA), the S-band Antenna Assembly, and umbilical cables that provide power and remote operation capability to cameras. During their EVA, they are shown using a foot platform on the robotic arm. Significant portions of their activities are shown from the vantage of helmet-mounted video cameras. The video closes with a final shot of the ISS and its new S1 truss.

  7. Real-time moving objects detection and tracking from airborne infrared camera

    NASA Astrophysics Data System (ADS)

    Zingoni, Andrea; Diani, Marco; Corsini, Giovanni

    2017-10-01

    Detecting and tracking moving objects in real time from an airborne infrared (IR) camera offers interesting possibilities in video surveillance, remote sensing and computer vision applications, such as monitoring large areas simultaneously, quickly changing the point of view on the scene and pursuing objects of interest. To fully exploit such a potential, versatile solutions are needed, but, in the literature, the majority of them work only under specific conditions regarding the considered scenario, the characteristics of the moving objects or the aircraft movements. In order to overcome these limitations, we propose a novel approach to the problem, based on the use of a cheap inertial navigation system (INS) mounted on the aircraft. To jointly exploit the information contained in the acquired video sequence and the data provided by the INS, a specific detection and tracking algorithm has been developed. It consists of three main stages performed iteratively on each acquired frame: the detection stage, in which a coarse detection map is computed using a local statistic that is both fast to calculate and robust to noise and to self-deletion of the targeted objects; the registration stage, in which the positions of the detected objects are coherently reported on a common reference frame by exploiting the INS data; and the tracking stage, in which steady objects are rejected, moving objects are tracked, and an estimation of their future position is computed, to be used in the subsequent iteration. The algorithm has been tested on a large dataset of simulated IR video sequences, recreating different environments and different movements of the aircraft. Promising results have been obtained, both in terms of detection and false alarm rate, and in terms of accuracy in the estimation of position and velocity of the objects. In addition, for each frame, the detection and tracking map has been generated by the algorithm before the acquisition of the subsequent frame, proving its capability to work in real time.
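
    The three-stage loop can be sketched at a high level; the local detection statistic, the INS-based registration and the track logic below are generic stand-ins for the specific algorithm of the paper.

        import numpy as np
        from scipy.ndimage import uniform_filter, label, center_of_mass

        def detect(frame, k=3.0, win=15):
            """Coarse detection map: pixels more than k sigma above a local mean."""
            frame = np.asarray(frame, dtype=float)
            m = uniform_filter(frame, win)
            s = np.sqrt(np.maximum(uniform_filter(frame * frame, win) - m * m, 1e-9))
            mask = (frame - m) > k * s
            labels, n = label(mask)
            return np.array(center_of_mass(mask, labels, range(1, n + 1)))

        def register(blobs_px, ins_offset_px):
            """Report detections in a common reference frame. Placeholder: the
            INS pose is reduced to a 2-D image offset; a real system projects
            through the full camera attitude."""
            return blobs_px + ins_offset_px

        def update_tracks(tracks, detections, gate=6.0):
            """Greedy nearest-neighbour association; unmatched detections start
            new tracks. Steady objects are later rejected by thresholding the
            total displacement accumulated along each track."""
            for d in detections:
                dists = [np.linalg.norm(t[-1] - d) for t in tracks]
                if dists and min(dists) < gate:
                    tracks[int(np.argmin(dists))].append(d)
                else:
                    tracks.append([d])
            return tracks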

  8. Airborne infrared video radiometry as a low-cost tool for remote sensing of the environment, two mapping examples from Israel of urban heat islands and mineralogical site

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ben-Dor, E.; Saaroni, H.; Ochana, D.

    1996-10-01

    In this study we examined the capability of a laboratory infrared video camera for use in remote sensing of the environment. The instrument used, an INFRAMETRICS 760, was mounted onboard a Bell 206 helicopter. Under the flight conditions examined, the radiometer proved itself to be very stable and produced high-quality thermal images in a real-time mode. We studied two different environmental aspects: (1) the urban heat island of the most densely populated city in Israel, Tel-Aviv; and (2) the lithological distribution of a well-known mineralogical site in Israel, Makhtesh Ramon. The radiometer used in both studies was able to produce a temperature presentation, rather than a gray scale, from altitudes of 7,000 and 10,000 feet at 70 knots air speed. The instrument produced a high-quality set of data in terms of signal-to-noise, stability, temperature accuracy and spatial resolution. In the Tel-Aviv case, the results showed that the urban heat island of the city can be depicted at very high spatial and thermal resolution and that a significant correlation exists between ground objects and the surrounding air temperature values. Based on the flight results, we could generate an isotherm map of the city that, for the first time, located the urban heat island of the city at both meso- and microscales. In the case of Makhtesh Ramon, we found that under field conditions the radiometer, coupled with a VIS-CCD camera, can provide significant ATI parameters of typical rocks that characterize the study area. Although more study is planned and suggested based on the current data, it was concluded that airborne thermal video radiometry is a promising, inexpensive tool for monitoring the environment on a real-time basis. 10 refs., 5 figs., 1 tab.

  9. Blinded evaluation of the effects of high definition and magnification on perceived image quality in laryngeal imaging.

    PubMed

    Otto, Kristen J; Hapner, Edie R; Baker, Michael; Johns, Michael M

    2006-02-01

    Advances in commercial video technology have improved office-based laryngeal imaging. This study investigates the perceived image quality of a true high-definition (HD) video camera and the effect of magnification on laryngeal videostroboscopy. We performed a prospective, dual-armed, single-blinded analysis of a standard laryngeal videostroboscopic examination comparing 3 separate add-on camera systems: a 1-chip charge-coupled device (CCD) camera, a 3-chip CCD camera, and a true 720p (progressive scan) HD camera. Displayed images were controlled for magnification and image size (20-inch [50-cm] display, red-green-blue, and S-video cable for 1-chip and 3-chip cameras; digital visual interface cable and HD monitor for HD camera). Ten blinded observers were then asked to rate the following 5 items on a 0-to-100 visual analog scale: resolution, color, ability to see vocal fold vibration, sense of depth perception, and clarity of blood vessels. Eight unblinded observers were then asked to rate the difference in perceived resolution and clarity of laryngeal examination images when displayed on a 10-inch (25-cm) monitor versus a 42-inch (105-cm) monitor. A visual analog scale was used. These monitors were controlled for actual resolution capacity. For each item evaluated, randomized block design analysis demonstrated that the 3-chip camera scored significantly better than the 1-chip camera (p < .05). For the categories of color and blood vessel discrimination, the 3-chip camera scored significantly better than the HD camera (p < .05). For magnification alone, observers rated the 42-inch monitor statistically better than the 10-inch monitor. The expense of new medical technology must be judged against its added value. This study suggests that HD laryngeal imaging may not add significant value over currently available video systems, in perceived image quality, when a small monitor is used. Although differences in clarity between standard and HD cameras may not be readily apparent on small displays, a large display size coupled with HD technology may impart improved diagnosis of subtle vocal fold lesions and vibratory anomalies.

  10. Viewpoint Integration for Hand-Based Recognition of Social Interactions from a First-Person View.

    PubMed

    Bambach, Sven; Crandall, David J; Yu, Chen

    2015-11-01

    Wearable devices are becoming part of everyday life, from first-person cameras (GoPro, Google Glass), to smart watches (Apple Watch), to activity trackers (FitBit). These devices are often equipped with advanced sensors that gather data about the wearer and the environment. These sensors enable new ways of recognizing and analyzing the wearer's everyday personal activities, which could be used for intelligent human-computer interfaces and other applications. We explore one possible application by investigating how egocentric video data collected from head-mounted cameras can be used to recognize social activities between two interacting partners (e.g. playing chess or cards). In particular, we demonstrate that just the positions and poses of hands within the first-person view are highly informative for activity recognition, and present a computer vision approach that detects hands to automatically estimate activities. While hand pose detection is imperfect, we show that combining evidence across first-person views from the two social partners significantly improves activity recognition accuracy. This result highlights how integrating weak but complementary sources of evidence from social partners engaged in the same task can help to recognize the nature of their interaction.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leary, T.J.; Lamb, A.

    The Department of Energy's Office of Arms Control and Non-Proliferation (NN-20) has developed a suite of airborne remote sensing systems that simultaneously collect coincident data from a US Navy P-3 aircraft. The primary objective of the Airborne Multisensor Pod System (AMPS) Program is "to collect multisensor data that can be used for data research, both to reduce interpretation problems associated with data overload and to develop information products more complete than can be obtained from any single sensor." The sensors are housed in wing-mounted pods and include: a Ku-Band Synthetic Aperture Radar; a CASI Hyperspectral Imager; a Daedalus 3600 Airborne Multispectral Scanner; a Wild Heerbrugg RC-30 motion-compensated large-format camera; various high-resolution, light-intensified and thermal video cameras; and several experimental sensors (e.g. the Portable Hyperspectral Imager for Low-Light Spectroscopy (PHILLS)). Over the past year or so, the Coastal Marine Resource Assessment (CAMRA) group at the Florida Department of Environmental Protection's Marine Research Institute (FMRI) has been working with the Department of Energy through the Naval Research Laboratory to develop applications and products from existing data. Considerable effort has been spent identifying image formats and integration parameters. 2 refs., 3 figs., 2 tabs.

  12. Viewpoint Integration for Hand-Based Recognition of Social Interactions from a First-Person View

    PubMed Central

    Bambach, Sven; Crandall, David J.; Yu, Chen

    2016-01-01

    Wearable devices are becoming part of everyday life, from first-person cameras (GoPro, Google Glass), to smart watches (Apple Watch), to activity trackers (FitBit). These devices are often equipped with advanced sensors that gather data about the wearer and the environment. These sensors enable new ways of recognizing and analyzing the wearer’s everyday personal activities, which could be used for intelligent human-computer interfaces and other applications. We explore one possible application by investigating how egocentric video data collected from head-mounted cameras can be used to recognize social activities between two interacting partners (e.g. playing chess or cards). In particular, we demonstrate that just the positions and poses of hands within the first-person view are highly informative for activity recognition, and present a computer vision approach that detects hands to automatically estimate activities. While hand pose detection is imperfect, we show that combining evidence across first-person views from the two social partners significantly improves activity recognition accuracy. This result highlights how integrating weak but complementary sources of evidence from social partners engaged in the same task can help to recognize the nature of their interaction. PMID:28966999

  13. “Ipsilateral, high, single-hand, sideways”—Ruijin rule for camera assistant in uniportal video-assisted thoracoscopic surgery

    PubMed Central

    Gao, Taotao; Xiang, Jie; Jin, Runsen; Zhang, Yajie; Wu, Han

    2016-01-01

    The camera assistant plays a very important role in uniportal video-assisted thoracoscopic surgery (VATS), acting as the eye of the surgeon and providing the VATS team with a stable and clear operating view. Thus, a good assistant should cooperate with the surgeon and manipulate the camera expertly to ensure eye-hand coordination. We have performed more than 100 uniportal VATS procedures in the Department of Thoracic Surgery at Ruijin Hospital. Based on our experience, we summarize the method of holding the camera, known as “ipsilateral, high, single-hand, sideways”, which largely improves the comfort and fluency of surgery. PMID:27867573

  14. Cross-Correlation-Based Structural System Identification Using Unmanned Aerial Vehicles

    PubMed Central

    Yoon, Hyungchul; Hoskere, Vedhus; Park, Jong-Woong; Spencer, Billie F.

    2017-01-01

    Computer vision techniques have been employed to characterize dynamic properties of structures, as well as to capture structural motion for system identification purposes. All of these methods leverage image-processing techniques using a stationary camera. This requirement makes finding an effective location for camera installation difficult, because civil infrastructure (e.g., bridges and buildings) is often difficult to access, being constructed over rivers, roads, or other obstacles. This paper seeks to use video from Unmanned Aerial Vehicles (UAVs) to address this problem. As opposed to the traditional way of using stationary cameras, the use of UAVs brings the issue of the camera itself moving; thus, the displacements of the structure obtained by processing UAV video are relative to the UAV camera. Some efforts have been reported to compensate for the camera motion, but they require certain assumptions that may be difficult to satisfy. This paper proposes a new method for structural system identification using the UAV video directly. Several challenges are addressed, including: (1) estimation of an appropriate scale factor; and (2) compensation for the rolling shutter effect. Experimental validation is carried out to validate the proposed approach. The experimental results demonstrate the efficacy and significant potential of the proposed approach. PMID:28891985
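
    The displacement extraction that underlies such vision-based identification is commonly a normalized cross-correlation of a target template against each frame; a minimal OpenCV sketch (the paper's UAV-motion compensation, scale-factor estimation and rolling-shutter correction are not reproduced here):

        import cv2
        import numpy as np

        def track_template(frames, template):
            """Pixel position of `template` in each frame via normalized
            cross-correlation: the raw structural motion signal."""
            positions = []
            for frame in frames:
                score = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
                _, _, _, max_loc = cv2.minMaxLoc(score)
                positions.append(max_loc)
            return np.asarray(positions, dtype=np.float64)

        def to_physical(positions_px, scale_mm_per_px):
            """Convert to displacement; the scale factor (mm/pixel) comes from a
            known physical dimension visible in the image."""
            return (positions_px - positions_px[0]) * scale_mm_per_px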

  15. The Heights of Mount Sharp

    NASA Image and Video Library

    2012-08-20

    With the addition of four high-resolution Navigation Camera, or Navcam, images taken on Aug. 18 (Sol 12), Curiosity's 360-degree landing-site panorama now includes the highest point on Mount Sharp visible from the rover.

  16. ASTP video tape recorder ground support equipment (audio/CTE splitter/interleaver). Operations manual

    NASA Technical Reports Server (NTRS)

    1974-01-01

    A descriptive handbook for the audio/CTE splitter/interleaver (RCA part No. 8673734-502) was presented. This unit is designed to perform two major functions: extract audio and time data from an interleaved video/audio signal (splitter section), and provide a test interleaved video/audio/CTE signal for the system (interleaver section). It is a rack mounting unit 7 inches high, 19 inches wide, 20 inches deep, mounted on slides for retracting from the rack, and weighs approximately 40 pounds. The following information is provided: installation, operation, principles of operation, maintenance, schematics and parts lists.

  17. Using Video Self-Analysis to Improve the "Withitness" of Student Teachers

    ERIC Educational Resources Information Center

    Snoeyink, Rick

    2010-01-01

    Although video self-analysis has been used for years in teacher education, the camera has almost always focused on the preservice teacher. In this study, the researcher videotaped eight preservice teachers four times each during their student-teaching internships. One camera was focused on them while another was focused on their students. Their…

  18. Lights, Camera, Action! Using Video Recordings to Evaluate Teachers

    ERIC Educational Resources Information Center

    Petrilli, Michael J.

    2011-01-01

    Teachers and their unions do not want test scores to count for everything; classroom observations are key, too. But planning a couple of visits from the principal is hardly sufficient. These visits may "change the teacher's behavior"; furthermore, principals may not be the best judges of effective teaching. So why not put video cameras in…

  19. Jack & the Video Camera

    ERIC Educational Resources Information Center

    Charlan, Nathan

    2010-01-01

    This article narrates how the use of video camera has transformed the life of Jack Williams, a 10-year-old boy from Colorado Springs, Colorado, who has autism. The way autism affected Jack was unique. For the first nine years of his life, Jack remained in his world, alone. Functionally non-verbal and with motor skill problems that affected his…

  20. 78 FR 17939 - Announcement of Funding Awards; Capital Fund Safety and Security Grants; Fiscal Year 2012

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-25

    ... publishing the names, addresses, and amounts of the 18 awards made under the set-aside in Appendix A to this ... [Table residue; recoverable entries: security camera surveillance system including digital video recorders, Harrison Street, Oakland, CA 94612; cameras, network video recorders, and lighting, 50 Lincoln Plaza, Wilkes-Barre, PA 18702; Ft. Worth ...]

  1. The California All-sky Meteor Surveillance (CAMS) System

    NASA Astrophysics Data System (ADS)

    Gural, P. S.

    2011-01-01

    A unique next generation multi-camera, multi-site video meteor system is being developed and deployed in California to provide high accuracy orbits of simultaneously captured meteors. Included herein is a description of the goals, concept of operations, hardware, and software development progress. An appendix contains a meteor camera performance trade study made for video systems circa 2010.

  2. Photometric Calibration of Consumer Video Cameras

    NASA Technical Reports Server (NTRS)

    Suggs, Robert; Swift, Wesley, Jr.

    2007-01-01

    Equipment and techniques have been developed to implement a method of photometric calibration of consumer video cameras for imaging of objects that are sufficiently narrow or sufficiently distant to be optically equivalent to point or line sources. Heretofore, it has been difficult to calibrate consumer video cameras, especially in cases of image saturation, because they exhibit nonlinear responses with dynamic ranges much smaller than those of scientific-grade video cameras. The present method not only takes this difficulty in stride but also makes it possible to extend effective dynamic ranges to several powers of ten beyond saturation levels. The method will likely be primarily useful in astronomical photometry. There are also potential commercial applications in medical and industrial imaging of point or line sources in the presence of saturation. This development was prompted by the need to measure brightnesses of debris in amateur video images of the breakup of the Space Shuttle Columbia. The purpose of these measurements is to use the brightness values to estimate relative masses of debris objects. In most of the images, the brightness of the main body of Columbia was found to exceed the dynamic ranges of the cameras. A similar problem arose a few years ago in the analysis of video images of Leonid meteors. The present method is a refined version of the calibration method developed to solve the Leonid calibration problem. In this method, one performs an end-to-end calibration of the entire imaging system, including not only the imaging optics and imaging photodetector array but also analog tape recording and playback equipment (if used) and any frame grabber or other analog-to-digital converter (if used). To automatically incorporate the effects of nonlinearity and any other distortions into the calibration, the calibration images are processed in precisely the same manner as are the images of meteors, space-shuttle debris, or other objects that one seeks to analyze. The light source used to generate the calibration images is an artificial variable star comprising a Newtonian collimator illuminated by a light source modulated by a rotating variable neutral-density filter. This source acts as a point source, the brightness of which varies at a known rate. A video camera to be calibrated is aimed at this source. Fixed neutral-density filters are inserted in or removed from the light path as needed to make the video image of the source appear to fluctuate between dark and saturated bright. The resulting video-image data are analyzed by use of custom software that determines the integrated signal in each video frame and determines the system response curve (measured output signal versus input brightness). These determinations constitute the calibration, which is thereafter used in automatic, frame-by-frame processing of the data from the video images to be analyzed.
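
    At its core, the end-to-end calibration yields a response curve of integrated output signal versus known input brightness, which is then inverted to photometer unknown sources; a schematic sketch in which the variable-star simulator and the processing chain are replaced by illustrative arrays:

        import numpy as np

        # Calibration run: known source brightness (arbitrary linear units) versus
        # the integrated signal the whole chain (camera, tape, frame grabber)
        # produced; the numbers below are illustrative, with a saturating response.
        cal_brightness = np.array([0.5, 1, 2, 4, 8, 16, 32, 64, 128])
        cal_signal = np.array([40, 80, 158, 300, 520, 800, 1020, 1150, 1210])

        def photometer(measured_signal):
            """Invert the measured response curve to recover source brightness.
            A monotonic response is assumed, so interpolation suffices; the
            method's extension beyond saturation is not reproduced here."""
            return np.interp(measured_signal, cal_signal, cal_brightness)

        print(photometer(410))   # brightness of an object producing 410 counts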

  3. A passive terahertz video camera based on lumped element kinetic inductance detectors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rowe, Sam, E-mail: sam.rowe@astro.cf.ac.uk; Pascale, Enzo; Doyle, Simon

    We have developed a passive 350 GHz (850 μm) video-camera to demonstrate lumped element kinetic inductance detectors (LEKIDs)—designed originally for far-infrared astronomy—as an option for general purpose terrestrial terahertz imaging applications. The camera currently operates at a quasi-video frame rate of 2 Hz with a noise equivalent temperature difference per frame of ∼0.1 K, which is close to the background limit. The 152 element superconducting LEKID array is fabricated from a simple 40 nm aluminum film on a silicon dielectric substrate and is read out through a single microwave feedline with a cryogenic low noise amplifier and room temperature frequency-domain multiplexing electronics.

  4. Multimodal Friction Ignition Tester

    NASA Technical Reports Server (NTRS)

    Davis, Eddie; Howard, Bill; Herald, Stephen

    2009-01-01

    The multimodal friction ignition tester (MFIT) is a testbed for experiments on the thermal and mechanical effects of friction on material specimens in pressurized, oxygen-rich atmospheres. In simplest terms, a test involves recording sensory data while rubbing two specimens against each other at a controlled normal force, with either a random stroke or a sinusoidal stroke having controlled amplitude and frequency. The term multimodal in the full name of the apparatus refers to a capability for imposing any combination of widely ranging values of the atmospheric pressure, atmospheric oxygen content, stroke length, stroke frequency, and normal force. The MFIT was designed especially for studying the tendency toward heating and combustion of nonmetallic composite materials and the fretting of metals subjected to dynamic (vibrational) friction forces in the presence of liquid oxygen or pressurized gaseous oxygen, under test conditions approximating those expected to be encountered in proposed composite material oxygen tanks aboard aircraft and spacecraft in flight. The MFIT includes a stainless-steel pressure vessel capable of retaining the required test atmosphere. Mounted atop the vessel is a pneumatic cylinder containing a piston for exerting the specified normal force between the two specimens. Through a shaft seal, the piston shaft extends downward into the vessel. One of the specimens is mounted on a block, denoted the pressure block, at the lower end of the piston shaft. This specimen is pressed down against the other specimen, which is mounted in a recess in another block, denoted the slip block, that can be moved horizontally but not vertically. The slip block is driven in reciprocating horizontal motion by an electrodynamic vibration exciter outside the pressure vessel. The armature of the electrodynamic exciter is connected to the slip block via a horizontal shaft that extends into the pressure vessel via a second shaft seal. The reciprocating horizontal motion can be chosen to be random with a flat spectrum over the frequency range of 10 Hz to 1 kHz, or to be sinusoidal at any peak-to-peak amplitude up to 0.8 in. (2 cm) and fixed or varying frequency up to 1 kHz. The temperatures of the specimen and of the vessel are measured by thermocouples. A digital video camera mounted outside the pressure vessel is aimed into the vessel through a sapphire window, with its focus fixed on the interface between the two specimens. A position transducer monitors the displacement of the pneumatic-cylinder shaft. The pressure in the vessel is also monitored. During a test, the output of the video camera, the temperatures, and the pneumatic-shaft displacement are monitored and recorded. The test is continued for a predetermined amount of time (typically, 10 minutes) or until either (1) the output of the position transducer shows a sudden change indicative of degradation of either or both specimens, (2) ignition or another significant reaction is observed, or (3) pressure in the vessel increases beyond a pre-set level that triggers an automatic shutdown.
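
    The shutdown logic described above can be summarized in a short control sketch. The sensor-reading callbacks and threshold values below are hypothetical stand-ins, not the actual MFIT control code.

    ```python
    # Illustrative polling loop implementing the three stop conditions.
    import time

    MAX_DURATION_S = 600        # typical 10-minute test
    PRESSURE_LIMIT = 3000.0     # assumed pre-set shutdown level
    POSITION_JUMP = 0.05        # assumed threshold for a "sudden change"

    def run_test(read_position, read_pressure, stop_rig):
        start = time.monotonic()
        last_pos = read_position()
        while time.monotonic() - start < MAX_DURATION_S:
            pos, pressure = read_position(), read_pressure()
            if abs(pos - last_pos) > POSITION_JUMP:
                stop_rig("position jump: possible specimen degradation")
                return
            if pressure > PRESSURE_LIMIT:
                stop_rig("vessel pressure exceeded pre-set limit")
                return
            last_pos = pos
            time.sleep(0.01)    # 100 Hz polling, illustrative
        stop_rig("test completed after predetermined duration")
    ```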

  5. Dash Cam videos on YouTube™ offer insights into factors related to moose-vehicle collisions.

    PubMed

    Rea, Roy V; Johnson, Chris J; Aitken, Daniel A; Child, Kenneth N; Hesse, Gayle

    2018-03-26

    To gain a better understanding of the dynamics of moose-vehicle collisions, we analyzed 96 videos of moose-vehicle interactions recorded by vehicle dash-mounted cameras (Dash Cams) that had been posted to the video-sharing website YouTube™. Our objective was to determine the effects of road conditions, season and weather, moose behavior, and driver response on actual collisions compared with near misses in which a collision was avoided. We identified 11 variables that were consistently observable in each video and that we hypothesized would help to explain a collision or near miss. The most parsimonious logistic regression model contained variables for number of moose, sight time, vehicle slows, and vehicle swerves (AICc weight = 0.529). This model had good predictive accuracy (AUC = 0.860, SE = 0.041). The only statistically significant variable from this model that explained the difference between moose-vehicle collisions and near misses was 'Vehicle slows'. Our results provide no evidence that road surface conditions (dry, wet, ice or snow), roadside habitat type (forested or cleared), the extent to which roadside vegetation was cleared, natural light conditions (overcast, clear, twilight, dark), season (winter, spring and summer, fall), the presence of oncoming traffic, or the direction from which the moose entered the roadway had any influence on whether a motorist collided with a moose. Dash Cam videos posted to YouTube™ provide a unique source of data for road safety planners trying to understand what happens in the moments just before a moose-vehicle collision and how those factors may differ from moose-vehicle encounters that do not result in a collision.
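
    To make the modeling step concrete, a hypothetical re-creation of the retained model is sketched below: a logistic regression of collision versus near miss on the four predictors, scored by AUC. The data and column layout are invented, not the authors' dataset.

    ```python
    # Sketch: logistic regression of collision vs. near miss, scored by AUC.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    # columns: n_moose, sight_time_s, vehicle_slows, vehicle_swerves
    X = np.array([
        [1, 0.8, 0, 1], [1, 2.5, 1, 0], [2, 3.0, 1, 1], [1, 0.5, 0, 0],
        [2, 2.0, 1, 0], [1, 0.7, 0, 1], [1, 2.8, 1, 1], [3, 3.5, 1, 0],
    ])
    y = np.array([1, 0, 0, 1, 0, 1, 0, 0])   # 1 = collision, 0 = near miss

    clf = LogisticRegression().fit(X, y)
    print(clf.coef_)                                     # effect directions
    print(roc_auc_score(y, clf.predict_proba(X)[:, 1]))  # in-sample AUC
    ```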

  6. Age-related changes in visual exploratory behavior in a natural scene setting

    PubMed Central

    Hamel, Johanna; De Beukelaer, Sophie; Kraft, Antje; Ohl, Sven; Audebert, Heinrich J.; Brandt, Stephan A.

    2013-01-01

    Diverse cognitive functions decline with increasing age, including the ability to process central and peripheral visual information in a laboratory testing situation (useful visual field of view). To investigate whether and how this influences activities of daily life, we studied age-related changes in visual exploratory behavior in a natural scene setting: a driving simulator paradigm of variable complexity was tested in subjects of varying ages with simultaneous eye- and head-movement recordings via a head-mounted camera. Detection and reaction times were also measured by visual fixation and manual reaction. We considered video computer game experience as a possible influence on performance. Data of 73 participants of varying ages were analyzed, driving two different courses. We analyzed the influence of route difficulty level, age, and eccentricity of test stimuli on oculomotor and driving behavior parameters. No significant age effects were found regarding saccadic parameters. In the older subjects head-movements increasingly contributed to gaze amplitude. More demanding courses and more peripheral stimuli locations induced longer reaction times in all age groups. Deterioration of the functionally useful visual field of view with increasing age was not suggested in our study group. However, video game-experienced subjects revealed larger saccade amplitudes and a broader distribution of fixations on the screen. They reacted faster to peripheral objects suggesting the notion of a general detection task rather than perceiving driving as a central task. As the video game-experienced population consisted of younger subjects, our study indicates that effects due to video game experience can easily be misinterpreted as age effects if not accounted for. We therefore view it as essential to consider video game experience in all testing methods using virtual media. PMID:23801970

  7. Standardized access, display, and retrieval of medical video

    NASA Astrophysics Data System (ADS)

    Bellaire, Gunter; Steines, Daniel; Graschew, Georgi; Thiel, Andreas; Bernarding, Johannes; Tolxdorff, Thomas; Schlag, Peter M.

    1999-05-01

    The system presented here enhances documentation and data-secured, second-opinion facilities by integrating video sequences into DICOM 3.0. We present an implementation of a medical video server extended by a DICOM interface. Security mechanisms conforming with DICOM are integrated to enable secure Internet access. Digital video documents of diagnostic and therapeutic procedures should be examined regarding the clip length and size necessary for second opinion and manageable with today's hardware. Image sources relevant for this paper include the 3D laparoscope, 3D surgical microscope, 3D open-surgery camera, synthetic video, and monoscopic endoscopes. The global DICOM video concept and three special workplaces for distinct applications are described. Additionally, an approach is presented to analyze the motion of the endoscopic camera for future automatic video cutting. Digital stereoscopic video sequences (DSVS) are especially in demand for surgery. Therefore DSVS are also integrated into the DICOM video concept. Results are presented describing the suitability of stereoscopic display techniques for the operating room.

  8. Feral Cats Are Better Killers in Open Habitats, Revealed by Animal-Borne Video

    PubMed Central

    McGregor, Hugh; Legge, Sarah; Jones, Menna E.; Johnson, Christopher N.

    2015-01-01

    One of the key gaps in understanding the impacts of predation by small mammalian predators on prey is how habitat structure affects the hunting success of small predators, such as feral cats. These effects are poorly understood due to the difficulty of observing actual hunting behaviours. We attached collar-mounted video cameras to feral cats living in a tropical savanna environment in northern Australia, and measured variation in hunting success among different microhabitats (open areas, dense grass and complex rocks). From 89 hours of footage, we recorded 101 hunting events, of which 32 were successful. Of these kills, 28% were not eaten. Hunting success was highly dependent on microhabitat structure surrounding prey, increasing from 17% in habitats with dense grass or complex rocks to 70% in open areas. This research shows that habitat structure has a profound influence on the impacts of small predators on their prey. This has broad implications for management of vegetation and disturbance processes (like fire and grazing) in areas where feral cats threaten native fauna. Maintaining complex vegetation cover can reduce predation rates of small prey species from feral cat predation. PMID:26288224

  9. Synchronization of video recording and laser pulses including background light suppression

    NASA Technical Reports Server (NTRS)

    Kalshoven, Jr., James E. (Inventor); Tierney, Jr., Michael (Inventor); Dabney, Philip W. (Inventor)

    2004-01-01

    An apparatus for and a method of triggering a pulsed light source, in particular a laser light source, for predictable capture of the source by video equipment. A frame synchronization signal is derived from the video signal of a camera to trigger the laser and position the resulting laser light pulse in the appropriate field of the video frame and during the opening of the electronic shutter, if such a shutter is included in the camera. Positioning the laser pulse in the proper video field allows, after recording, for viewing of the laser light image on a video monitor using the pause mode of a standard cassette-type VCR. This invention also allows for fine positioning of the laser pulse to fall within the electronic shutter opening. For cameras with externally controllable electronic shutters, the invention provides background light suppression by increasing shutter speed during the frame in which the laser light image is captured. This results in the laser light appearing in one frame in which the background scene is suppressed while the laser light is unaffected; in all other frames the shutter speed is slower, allowing normal recording of the background scene. This invention also allows for arbitrary (manual or external) triggering of the laser with full video synchronization and background light suppression.
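
    The timing arithmetic behind positioning the pulse can be sketched simply. Assuming NTSC-style interlaced video (two fields per ~1/29.97 s frame), the delay from the derived frame sync to a laser trigger that centers the pulse in a chosen field is:

    ```python
    # Back-of-envelope trigger timing; numbers are generic, not from the patent.
    FRAME_RATE = 29.97                    # frames per second (NTSC)
    FIELD_TIME = 1.0 / (2 * FRAME_RATE)   # two interlaced fields per frame

    def trigger_delay(field_index, fraction_into_field=0.5):
        """Delay from the frame sync pulse to the laser trigger so the
        pulse falls a given fraction into field 0 or 1."""
        return (field_index + fraction_into_field) * FIELD_TIME

    print(f"{trigger_delay(0):.6f} s")   # middle of the first field
    print(f"{trigger_delay(1):.6f} s")   # middle of the second field
    ```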

  10. Toward Dietary Assessment via Mobile Phone Video Cameras.

    PubMed

    Chen, Nicholas; Lee, Yun Young; Rabb, Maurice; Schatz, Bruce

    2010-11-13

    Reliable dietary assessment is a challenging yet essential task for determining general health. Existing efforts are manual, require considerable effort, and are prone to underestimation and misrepresentation of food intake. We propose leveraging mobile phones to make this process faster, easier and automatic. Using mobile phones with built-in video cameras, individuals capture short videos of their meals; our software then automatically analyzes the videos to recognize dishes and estimate calories. Preliminary experiments on 20 typical dishes from a local cafeteria show promising results. Our approach complements existing dietary assessment methods to help individuals better manage their diet to prevent obesity and other diet-related diseases.

  11. Astrometric and Photometric Analysis of the September 2008 ATV-1 Re-Entry Event

    NASA Technical Reports Server (NTRS)

    Mulrooney, Mark K.; Barker, Edwin S.; Maley, Paul D.; Beaulieu, Kevin R.; Stokely, Christopher L.

    2008-01-01

    NASA utilized image-intensified video cameras for ATV data acquisition from a jet flying at an altitude of 12.8 km. Afterwards the video was digitized and then analyzed with a modified commercial software package, Image Systems TrackEye. Astrometric results were limited by saturation, plate scale, and the imposed linear plate solution based on field reference stars. Time-dependent fragment angular trajectories, velocities, accelerations, and luminosities were derived in each video segment. It was evident that individual fragments behaved differently. Photometric accuracy was insufficient to confidently assess correlations between luminosity and fragment spatial behavior (velocity, deceleration). Use of high-resolution digital video cameras in the future should remedy this shortcoming.
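
    A linear plate solution of the kind imposed on these frames reduces to a small least-squares problem: solve for an affine mapping from pixel coordinates to celestial coordinates using the field reference stars. The star positions below are fabricated placeholders.

    ```python
    # Minimal linear (affine) plate solution via least squares.
    import numpy as np

    px = np.array([[102.3, 210.1], [340.7, 55.4], [480.2, 300.9], [250.0, 400.5]])
    sky = np.array([[10.001, 20.002], [10.020, 19.990], [10.031, 20.010],
                    [10.012, 20.018]])   # RA/Dec in degrees, illustrative

    A = np.hstack([px, np.ones((len(px), 1))])      # [x, y, 1] design matrix
    coef, *_ = np.linalg.lstsq(A, sky, rcond=None)  # 3x2 plate constants

    def pixel_to_sky(x, y):
        return np.array([x, y, 1.0]) @ coef

    print(pixel_to_sky(300.0, 250.0))   # RA/Dec of a fragment centroid
    ```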

  12. Secure Video Surveillance System Acquisition Software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2009-12-04

    The SVSS Acquisition Software collects and displays video images from two cameras through a VPN and stores the images on a collection controller. The software is configured to allow a user to enter a time window to display up to 2.5 hours of video for review. The software collects images from the cameras at a rate of 1 image per second and automatically deletes images older than 3 hours. The software operates in a Linux environment and can be run in a virtual machine on Windows XP. The Sandia software integrates the different COTS software packages together to build the video review system.
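
    The described retention policy (once-per-second capture, a rolling three-hour store, review windows up to 2.5 hours) can be sketched as follows; the paths and file layout are placeholders, not the Sandia implementation.

    ```python
    # Sketch of a rolling three-hour image store with window selection.
    import os, time, glob

    STORE = "/var/svss/images"   # placeholder storage location
    MAX_AGE_S = 3 * 3600         # keep three hours of images

    def prune_old_images():
        """Delete images older than the retention limit."""
        cutoff = time.time() - MAX_AGE_S
        for path in glob.glob(os.path.join(STORE, "*.jpg")):
            if os.path.getmtime(path) < cutoff:
                os.remove(path)

    def select_window(start_ts, end_ts):
        """Return images inside a user-chosen review window (<= 2.5 h)."""
        return sorted(p for p in glob.glob(os.path.join(STORE, "*.jpg"))
                      if start_ts <= os.path.getmtime(p) <= end_ts)
    ```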

  13. VizieR Online Data Catalog: Young stellar objects in NGC 6823 (Riaz+, 2012)

    NASA Astrophysics Data System (ADS)

    Riaz, B.; Martin, E. L.; Tata, R.; Monin, J.-L.; Phan-Bao, N.; Bouy, H.

    2016-10-01

    The optical V-, R- and I-band images were obtained using the Prime Focus camera [William Herschel Telescope (WHT)/Wide Field Camera (WFC) detector] mounted on the 4-m WHT in La Palma, Canary Islands, Spain. Observations were performed in 2005 May. The NIR J-, H-, Ks-band images were obtained using the Infrared Side Port Imager (ISPI) mounted on the Cerro Tololo Inter-American Observatory (CTIO) 4-m Blanco Telescope in Cerro Tololo, Chile. Observations were performed in 2007 March. (3 data files).

  14. Geometric Calibration and Radiometric Correction of the Maia Multispectral Camera

    NASA Astrophysics Data System (ADS)

    Nocerino, E.; Dubbini, M.; Menna, F.; Remondino, F.; Gattelli, M.; Covi, D.

    2017-10-01

    Multispectral imaging is a widely used remote sensing technique, whose applications range from agriculture to environmental monitoring, from food quality checks to cultural heritage diagnostics. A variety of multispectral imaging sensors are available on the market, many of them designed to be mounted on different platforms, especially small drones. This work focuses on the geometric and radiometric characterization of a brand-new, lightweight, low-cost multispectral camera, called MAIA. The MAIA camera is equipped with nine sensors, allowing for the acquisition of images in the visible and near-infrared parts of the electromagnetic spectrum. Two versions are available, characterised by different sets of band-pass filters, inspired by the sensors mounted on the WorldView-2 and Sentinel-2 satellites, respectively. The camera details and the developed procedures for geometric calibration and radiometric correction are presented in the paper.
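
    For readers who want to reproduce the geometric part with off-the-shelf tools, a generic checkerboard calibration in OpenCV is sketched below; the board size and file pattern are assumptions, and this is not the authors' exact procedure (which characterizes all nine sensors).

    ```python
    # Generic single-sensor geometric calibration with OpenCV.
    import cv2, glob
    import numpy as np

    board = (9, 6)   # inner-corner count of the checkerboard (assumed)
    objp = np.zeros((board[0] * board[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:board[0], 0:board[1]].T.reshape(-1, 2)

    obj_pts, img_pts = [], []
    for fname in glob.glob("band1_*.png"):   # placeholder file pattern
        gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
        ok, corners = cv2.findChessboardCorners(gray, board)
        if ok:
            obj_pts.append(objp)
            img_pts.append(corners)

    rms, K, dist, _, _ = cv2.calibrateCamera(obj_pts, img_pts,
                                             gray.shape[::-1], None, None)
    print(rms, K)    # reprojection error, intrinsic camera matrix
    ```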

  15. Video Diagnostic for W7-X Stellarator

    NASA Astrophysics Data System (ADS)

    Sárközi, J.; Grosser, K.; Kocsis, G.; König, R.; Neuner, U.; Molnár, Á.; Petravich, G.; Por, G.; Porempovics, G.; Récsei, S.; Szabó, V.; Szappanos, A.; Zoletnik, S.

    2008-03-01

    The video diagnostic for W7-X, which is under development, is designed to observe the plasma and first-wall elements during operation, to warn of hot spots and dangerous heat loads, and to give information about the plasma size, position, and edge structure, the geometry and location of magnetic islands, and the distribution of impurities. The diagnostic will be mounted on the tangential AEQ-ports of the torus, which are not straight, are about 2 m long, and have a typical diameter of 0.1 m, all of which complicates its realization. The geometry of the 10 tangential AEQ-port views provides an almost complete overview of the vessel interior, making this diagnostic indispensable for machine operation. Different concepts for the diagnostic were investigated and finally the following design was selected. As a large heat load is expected on the optical window located at the plasma-facing end of the AEQ-port, the port window is protected by a cooled pinhole. An uncooled shutter located behind the pinhole can be closed to prevent window contamination during vessel conditioning discharges (glow discharge cleaning) and from inter-pulse deposition of soft a-C:H layers. The imaging optics and the detection sensor are located behind the port window in the port tube, which will be under atmospheric pressure. To detect the visible radiation distribution, a new camera system called the Event Detection Intelligent Camera (EDICAM) is under development. The system is divided into three major separate components. The Sensor Module contains only the selected CMOS sensor, the analog-to-digital converters and the minimal electronics necessary for communication with the subsequent camera system module, called the Image Processing and Control Unit (IPCU). Its simple structure makes the Sensor Module suitable for operation despite exposure to ionizing (neutron, γ) radiation. The IPCU, which can be located far from the Sensor Module and therefore far from the plasma, is designed to perform real-time evaluation of the images, detecting predefined events, managing the sensor read-out and the input triggers, and producing the output triggers generated by the detected events. The IPCU can also be used to reduce the amount of stored data. A standard 10 Gigabit Ethernet fiber-optic connection links the IPCU module to the PC using the GigE Vision communication protocol.
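
    As a rough illustration of the kind of predefined-event detection the IPCU performs in real time, the sketch below flags frames whose pixel-wise change against the previous frame exceeds a threshold (e.g., a sudden hot spot). The thresholds and array format are invented for illustration.

    ```python
    # Toy frame-differencing event detector.
    import numpy as np

    def detect_events(frames, pixel_delta=50, pixel_count=200):
        """frames: iterable of 2-D integer arrays; yields event frame indices."""
        prev = None
        for i, frame in enumerate(frames):
            cur = frame.astype(np.int32)
            if prev is not None:
                changed = np.abs(cur - prev) > pixel_delta
                if changed.sum() > pixel_count:
                    yield i          # an output trigger would be raised here
            prev = cur
    ```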

  16. Intermediate view synthesis algorithm using mesh clustering for rectangular multiview camera system

    NASA Astrophysics Data System (ADS)

    Choi, Byeongho; Kim, Taewan; Oh, Kwan-Jung; Ho, Yo-Sung; Choi, Jong-Soo

    2010-02-01

    A multiview video-based three-dimensional (3-D) video system offers a realistic impression and free view navigation to the user. Efficient compression and intermediate view synthesis are key technologies, since 3-D video systems deal with multiple views. We propose an intermediate view synthesis using a rectangular multiview camera system that is suitable for realizing 3-D video systems. The rectangular multiview camera system not only offers free view navigation both horizontally and vertically but also can employ three reference views, such as left, right, and bottom, for intermediate view synthesis. The proposed view synthesis method first represents each reference view as a mesh and then finds the best disparity for each mesh element by using stereo matching between reference views. Before stereo matching, we separate the virtual image to be synthesized into several regions to enhance the accuracy of disparities. The mesh is classified into foreground and background groups by disparity values and then affine transformed. Experiments confirm that the proposed method synthesizes a high-quality image and is suitable for 3-D video systems.

  17. Visualizing the history of living spaces.

    PubMed

    Ivanov, Yuri; Wren, Christopher; Sorokin, Alexander; Kaur, Ishwinder

    2007-01-01

    The technology available to building designers now makes it possible to monitor buildings on a very large scale. Video cameras and motion sensors are commonplace in practically every office space, and are slowly making their way into living spaces. The application of such technologies, in particular video cameras, while improving security, also violates privacy. On the other hand, motion sensors, while being privacy-conscious, typically do not provide enough information for a human operator to maintain the same degree of awareness about the space that can be achieved by using video cameras. We propose a novel approach in which we use a large number of simple motion sensors and a small set of video cameras to monitor a large office space. In our system we deployed 215 motion sensors and six video cameras to monitor the 3,000-square-meter office space occupied by 80 people for a period of about one year. The main problem in operating such systems is finding a way to present this highly multidimensional data, which includes both spatial and temporal components, to a human operator to allow browsing and searching recorded data in an efficient and intuitive way. In this paper we present our experiences and the solutions that we have developed in the course of our work on the system. We consider this work to be the first step in helping designers and managers of building systems gain access to information about occupants' behavior in the context of an entire building in a way that is only minimally intrusive to the occupants' privacy.

  18. Nyquist Sampling Theorem: Understanding the Illusion of a Spinning Wheel Captured with a Video Camera

    ERIC Educational Resources Information Center

    Levesque, Luc

    2014-01-01

    Inaccurate measurements occur regularly in data acquisition as a result of improper sampling times. An understanding of proper sampling times when collecting data with an analogue-to-digital converter or video camera is crucial in order to avoid anomalies. A proper choice of sampling times should be based on the Nyquist sampling theorem. If the…
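
    The spinning-wheel illusion the title refers to can be worked out directly from the theorem: any rotation rate is perceived folded into the band [-fs/2, fs/2), where fs is the frame rate. A small illustrative computation (the rates below are invented):

    ```python
    # Worked aliasing example: a wheel at 28 rev/s filmed at 30 frames/s
    # appears to rotate slowly backwards.
    def apparent_rate(true_hz, sample_hz):
        """Aliased frequency folded into [-fs/2, fs/2)."""
        return (true_hz + sample_hz / 2) % sample_hz - sample_hz / 2

    print(apparent_rate(28.0, 30.0))   # -2.0 rev/s: slow reverse spin
    print(apparent_rate(30.0, 30.0))   #  0.0 rev/s: wheel looks stationary
    ```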

  19. Texton-based super-resolution for achieving high spatiotemporal resolution in hybrid camera system

    NASA Astrophysics Data System (ADS)

    Kamimura, Kenji; Tsumura, Norimichi; Nakaguchi, Toshiya; Miyake, Yoichi

    2010-05-01

    Many super-resolution methods have been proposed to enhance the spatial resolution of images by using iteration and multiple input images. In a previous paper, we proposed an example-based super-resolution method that enhances an image through pixel-based texton substitution to reduce the computational cost. In that method, however, we only considered the enhancement of a texture image. In this study, we modified this texton substitution method for a hybrid camera to reduce the required bandwidth of a high-resolution video camera. We applied our algorithm to pairs of high- and low-spatiotemporal-resolution videos, which were synthesized to simulate a hybrid camera. The results showed that, compared with bicubic interpolation, the fine detail of the low-resolution video could be reproduced, and that the required bandwidth of the video camera could be reduced to about 1/5. It was also shown that the peak signal-to-noise ratios (PSNRs) of the images improved by about 6 dB in a trained frame and by 1.0-1.5 dB in a test frame, as determined by comparison with the processed image using bicubic interpolation, and the average PSNRs were higher than those obtained by the well-known Freeman patch-based super-resolution method. Compared with that of the Freeman patch-based super-resolution method, the computational time of our method was reduced to almost 1/10.
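
    For reference, the PSNR figure quoted above is the standard definition; a minimal implementation for 8-bit frames is sketched below, with synthetic arrays standing in for a reference frame and its reconstruction.

    ```python
    # Standard PSNR computation for 8-bit images.
    import numpy as np

    def psnr(reference, reconstructed, peak=255.0):
        mse = np.mean((reference.astype(np.float64) -
                       reconstructed.astype(np.float64)) ** 2)
        return float("inf") if mse == 0 else 10 * np.log10(peak ** 2 / mse)

    a = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
    noisy = np.clip(a + np.random.normal(0, 5, a.shape), 0, 255).astype(np.uint8)
    print(psnr(a, noisy))
    ```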

  20. Mechanical design for the Evryscope: a minute cadence, 10,000-square-degree FoV, gigapixel-scale telescope

    NASA Astrophysics Data System (ADS)

    Ratzloff, Jeff; Law, Nicholas M.; Fors, Octavi; Wulfken, Philip J.

    2015-01-01

    We designed, tested, prototyped and built a compact 27-camera robotic telescope that images 10,000 square degrees in 2-minute exposures. We exploit mass-produced interline CCD cameras with Rokinon consumer lenses to economically build a telescope that covers this large part of the sky simultaneously with pixel sampling good enough to avoid the confusion limit over most of the sky. We developed the initial concept into a 3-D mechanical design with the aid of computer modeling programs. Significant design components include the camera assembly-mounting modules, the hemispherical support structure, and the instrument base structure. We simulated flexure and material stress in each of the three main components, which helped us optimize the rigidity and materials selection while reducing weight. The camera mounts are CNC aluminum and the support shell is reinforced fiberglass. Other significant project components include optimizing camera locations, camera alignment, thermal analysis, environmental sealing, wind protection, and ease of access to internal components. The Evryscope will be assembled at UNC Chapel Hill and deployed to CTIO in 2015.

  1. Watching elderly and disabled person's physical condition by remotely controlled monorail robot

    NASA Astrophysics Data System (ADS)

    Nagasaka, Yasunori; Matsumoto, Yoshinori; Fukaya, Yasutoshi; Takahashi, Tomoichi; Takeshita, Toru

    2001-10-01

    We are developing a nursing system using robots and cameras. The cameras are mounted on a remote-controlled monorail robot which moves inside a room and watches the elderly. Elderly people at home or in nursing homes require attention at all times, which places a constant demand on care staff; the purpose of our system is to help those staff. A host computer directs the monorail robot to a position in front of the elderly person using the images taken by cameras on the ceiling. A CCD camera is mounted on the monorail robot to take pictures of their facial expressions and movements. The robot sends the images to the host computer, which checks whether something unusual has happened. We propose a simple calibration method for positioning the monorail robot to track the movements of the elderly, keeping their faces at the center of the camera view. We built a small experimental system and evaluated our camera calibration method and image processing algorithm.

  2. Learning to Characterize Submarine Lava Flow Morphology at Seamounts and Spreading Centers using High Definition Video and Photomosaics

    NASA Astrophysics Data System (ADS)

    Fundis, A. T.; Sautter, L. R.; Kelley, D. S.; Delaney, J. R.; Kerr-Riess, M.; Denny, A. R.; Elend, M.

    2010-12-01

    In August 2010 the UW ENLIGHTEN ’10 expedition provided ~140 hours of seafloor HD video footage at Axial Seamount, the most magmatically robust submarine volcano on the Juan de Fuca Ridge. During this expedition, direct imagery from an Insite Pacific HD camera mounted on the ROV Jason 2 was used to classify broad expanses of seafloor where high-power (8 kW) and high-bandwidth (10 Gb/s) fiber optic cable will be laid as part of the Regional Scale Nodes (RSN) component of the NSF-funded Ocean Observatories Initiative. The cable will provide power and two-way, real-time communication to an array of >20 sensors deployed at the summit of the volcano and at active sites of hydrothermal venting to investigate how active processes within the volcano and at seafloor hot springs within the caldera are connected. In addition to HD imagery, over 10,000 overlapping photographs from a down-looking still camera were merged and co-registered to create high resolution photomosaics of two areas within Axial’s caldera. Thousands of additional images were taken to characterize the seafloor along proposed cable routes, allowing optimal routes to be planned well in advance of deployment. Lowest-risk areas included those free of large collapse basins, steep flow fronts and fissures. Characterizing the modes of lava distribution across the seafloor is crucial to understanding the construction history of the upper oceanic crust at mid-ocean ridges. In part, crustal development and eruptive histories can be inferred from surface flow morphologies, which provide insights into lava emplacement dynamics and effusion rates of past eruptions. An online resource is under development that will educate students about lava flow morphologies through the use of HD video and still photographs. The objective of the LavaFlow exercise is to map out a proposed cable route across the Axial Seamount caldera. Students are first trained in appropriate terminology and background content to learn to recognize and identify various lava flow morphologies and volcanic features. They then conduct a virtual ROV Jason 2 dive using video and still photographs, and characterize the terrain. Their observations are supplemented by the integration of high resolution (1 m scale) bathymetry collected with a RESON SeaBat 7125 sonar mounted on Jason 2 during ENLIGHTEN ’10. Students visualize the bathymetry in 2D and 3D using CARIS HIPS 7.0 software. COVE (Collaborative Ocean Visualization Environment) geospatial software is then used to plan and map out an optimal cable route. The LavaFlow exercise allows students to employ the same technologies used by the RSN team for designing the Axial Seamount cabled observatory infrastructure. When completed in 2014, HD imagery and geophysical, chemical and biological sensors will provide real-time data from this site to educators throughout the US and the globe via the Internet.

  3. Mount Sharp Panorama in Raw Colors

    NASA Image and Video Library

    2013-03-15

    This mosaic of images from the Mastcam onboard NASA's Mars rover Curiosity shows Mount Sharp in raw color. Raw color shows the scene colors as they would look in a typical smart-phone camera photo, before any adjustment.

  4. First Sampling Hole in Mount Sharp

    NASA Image and Video Library

    2014-09-25

    This image from the Mars Hand Lens Imager (MAHLI) camera on NASA's Curiosity Mars rover shows the first sample-collection hole drilled in Mount Sharp, the layered mountain that is the science destination of the rover's extended mission.

  5. Structural analysis of color video camera installation on tank 241AW101 (2 Volumes)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Strehlow, J.P.

    1994-08-24

    A video camera is planned to be installed on the radioactive storage tank 241AW101 at the DOE's Hanford Site in Richland, Washington. The camera will occupy the 20-inch port of the Multiport Flange riser which is to be installed on riser 5B of the 241AW101 (3,5,10). The objective of the project reported herein was to perform a seismic analysis and evaluation of the structural components of the camera for a postulated Design Basis Earthquake (DBE) per the reference Structural Design Specification (SDS) document (6). The detail of the supporting engineering calculations is documented in URS/Blume Calculation No. 66481-01-CA-03 (1).

  6. Streaming Audio and Video: New Challenges and Opportunities for Museums.

    ERIC Educational Resources Information Center

    Spadaccini, Jim

    Streaming audio and video present new challenges and opportunities for museums. Streaming media is easier to author and deliver to Internet audiences than ever before; digital video editing is commonplace now that the tools--computers, digital video cameras, and hard drives--are so affordable; the cost of serving video files across the Internet…

  7. An attentive multi-camera system

    NASA Astrophysics Data System (ADS)

    Napoletano, Paolo; Tisato, Francesco

    2014-03-01

    Intelligent multi-camera systems that integrate computer vision algorithms are not error free, and thus both false positive and false negative detections need to be reviewed by a specialized human operator. Traditional multi-camera systems usually include a control center with a wall of monitors displaying videos from each camera of the network. Nevertheless, as the number of cameras increases, switching from one camera to another becomes hard for a human operator. In this work we propose a new method that dynamically selects and displays the content of a video camera from all the available contents in the multi-camera system. The proposed method is based on a computational model of human visual attention that integrates top-down and bottom-up cues. We believe that this is the first work that tries to use a model of human visual attention for the dynamic selection of the camera view of a multi-camera system. The proposed method was evaluated in a test scenario and demonstrated its effectiveness with respect to other methods and manually generated ground truth. The effectiveness was evaluated in terms of the number of correct best-views generated by the method with respect to the camera views manually generated by a human operator.

  8. Study of atmospheric discharge characteristics using a standard video camera

    NASA Astrophysics Data System (ADS)

    Ferraz, E. C.; Saba, M. M. F.

    This study presents preliminary statistics on lightning characteristics such as flash multiplicity, number of ground contact points, formation of new and altered channels, and presence of continuous current in the strokes that form the flash. The analysis is based on the images of a standard video camera (30 frames/s). The results obtained for some flashes will be compared to the images of a high-speed CCD camera (1000 frames/s). The camera observing site is located in São José dos Campos (23° S, 46° W) at an altitude of 630 m. This observational site has a nearly 360° field of view at a height of 25 m. It is possible to visualize distant thunderstorms occurring within a radius of 25 km from the site. The room, situated over a metal structure, has water and power supplies, a telephone line and a small crane on the roof. KEY WORDS: Video images, Lightning, Multiplicity, Stroke.

  9. Flat-panel video resolution LED display system

    NASA Astrophysics Data System (ADS)

    Wareberg, P. G.; Kennedy, D. I.

    The system consists of a 128 x 128 element X-Y addressable LED array fabricated from green-emitting gallium phosphide. The LED array is interfaced with a 128 x 128 matrix TV camera. Associated electronics provides for seven levels of grey scale above zero with a grey scale ratio of square root of 2. Picture elements are on 0.008 inch centers resulting in a resolution of 125 lines-per-inch and a display area of approximately 1 sq. in. The LED array concept lends itself to modular construction, permitting assembly of a flat panel screen of any desired size from 1 x 1 inch building blocks without loss of resolution. A wide range of prospective aerospace applications exist extending from helmet-mounted systems involving small dedicated arrays to multimode cockpit displays constructed as modular screens. High-resolution LED arrays are already used as CRT replacements in military film-marking reconnaissance applications.

  10. Ultra-wide Range Gamma Detector System for Search and Locate Operations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Odell, D. Mackenzie; Harpring, Larry J.; Moore, Frank S. Jr.

    2005-10-26

    Collecting debris samples following a nuclear event requires that operations be conducted from a considerable stand-off distance. An ultra-wide range gamma detector system has been constructed to accomplish both long-range radiation search and close-range hot sample collection functions. Constructed and tested on a REMOTEC Andros platform, the system has demonstrated reliable operation over six orders of magnitude of gamma dose rate, from hundreds of μR/hr to over 100 R/hr. Functional elements include a remotely controlled variable collimator assembly, a NaI(Tl)/photomultiplier tube detector, a proprietary digital radiation instrument, a coaxially mounted video camera, a digital compass, and both local and remote control computers with a user interface designed for long-range operations. Long-range sensitivity and target location, as well as close-range sample selection performance, are presented.

  11. In-situ measurement of concentrated solar flux and distribution at the aperture of a central solar receiver

    NASA Astrophysics Data System (ADS)

    Ferriere, Alain; Volut, Mikael; Perez, Antoine; Volut, Yann

    2016-05-01

    A flux-mapping system has been designed, implemented, and tested at the top of the Themis solar tower in France. The system features a moving bar, a CCD video camera, and a flux gauge mounted on the bar that serves as the reference measurement for calibration purposes. Images and the flux signal are acquired separately. The paper describes the equipment and focuses on the data processing used to produce the distribution of flux density and concentration at the aperture of the solar receiver. Finally, the solar power entering the receiver is estimated by integration of the flux density. The processing is largely automated in the form of dedicated software with fast execution. Special attention is paid to the accuracy of the results, the robustness of the algorithm, and the speed of the processing.
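
    The final integration step is straightforward numerically: sum the calibrated flux-density map over the aperture grid. The map values and grid spacing below are placeholders, not Themis data.

    ```python
    # Riemann-sum integration of a flux-density map to estimate incident power.
    import numpy as np

    flux = np.random.uniform(100, 1000, (200, 200))  # kW/m^2, placeholder map
    dx = dy = 0.01                                   # grid spacing in metres

    power_kw = flux.sum() * dx * dy                  # integrate over the aperture
    print(f"estimated incident power: {power_kw:.0f} kW")
    ```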

  12. Multipurpose surgical robot as a laparoscope assistant.

    PubMed

    Nelson, Carl A; Zhang, Xiaoli; Shah, Bhavin C; Goede, Matthew R; Oleynikov, Dmitry

    2010-07-01

    This study demonstrates the effectiveness of a new, compact surgical robot at improving laparoscope guidance. Currently, the assistant guiding the laparoscope camera tends to be less experienced and requires physical and verbal direction from the surgeon. Human guidance has disadvantages of fatigue and shakiness leading to inconsistency in the field of view. This study investigates whether replacing the assistant with a compact robot can improve the stability of the surgeon's field of view and also reduce crowding at the operating table. A compact robot based on a bevel-geared "spherical mechanism" with 4 degrees of freedom and capable of full dexterity through a 15-mm port was designed and built. The robot was mounted on the standard railing of the operating table and used to manipulate a laparoscope through a supraumbilical port in a porcine model via a joystick controlled externally by a surgeon. The process was videotaped externally via digital video recorder and internally via laparoscope. Robot position data were also recorded within the robot's motion control software. The robot effectively manipulated the laparoscope in all directions to provide a clear and consistent view of liver, small intestine, and spleen. Its range of motion was commensurate with typical motions executed by a human assistant and was well controlled with the joystick. Qualitative analysis of the video suggested that this method of laparoscope guidance provides highly stable imaging during laparoscopic surgery, which was confirmed by robot position data. Because the robot was table-mounted and compact in design, it increased standing room around the operation table and did not interfere with the workspace of other surgical instruments. The study results also suggest that this robotic method may be combined with flexible endoscopes for highly dexterous visualization with more degrees of freedom.

  13. ISAAC: A REXUS Student Experiment to Demonstrate an Ejection System with Predefined Direction

    NASA Astrophysics Data System (ADS)

    Balmer, G.; Berquand, A.; Company-Vallet, E.; Granberg, V.; Grigore, V.; Ivchenko, N.; Kevorkov, R.; Lundkvist, E.; Olentsenko, G.; Pacheco-Labrador, J.; Tibert, G.; Yuan, Y.

    2015-09-01

    ISAAC — Infrared Spectroscopy to Analyse the middle Atmosphere Composition — was a student experiment launched from SSC's Esrange Space Centre, Sweden, on 29th May 2014, on board the sounding rocket REXUS 15 in the frame of the REXUS/BEXUS programme. The main focus of the experiment was to implement an ejection system for two large Free Falling Units (FFUs) (240 mm x 80 mm) to be ejected from a spinning rocket in a predefined direction. The system design relied on a spring-based ejection system. Sun and angular rate sensors were used to control and time the ejection. The flight data include telemetry from the Rocket Mounted Unit (RMU), received and saved during flight, as well as video footage from the GoPro camera mounted inside the RMU and recovered after the flight. The FFUs' direction, speed and spin frequency as well as the rocket spin frequency were determined by analyzing the video footage. The FFU-Rocket-Sun angles were 64.3° and 104.3°, within the required margins of 90°±45°. The FFU speeds were 3.98 m/s and 3.74 m/s, lower than the expected 5±1 m/s. The FFUs' spin frequencies were 1.38 Hz and 1.60 Hz, approximately half the rocket's spin frequency. The rocket spin rate changed slightly from 3.163 Hz before the ejection to 3.117 Hz after the ejection of the two FFUs. The angular rate, sun sensor data and temperature on the inside of the rocket module skin were also recorded. The experiment design and results of the data analysis are presented in this paper.

  14. High definition in minimally invasive surgery: a review of methods for recording, editing, and distributing video.

    PubMed

    Kelly, Christopher R; Hogle, Nancy J; Landman, Jaime; Fowler, Dennis L

    2008-09-01

    The use of high-definition cameras and monitors during minimally invasive procedures can provide the surgeon and operating team with more than twice the resolution of standard-definition systems. Although this dramatic improvement in visualization offers numerous advantages, the adoption of high-definition cameras in the operating room can be challenging because new recording equipment must be purchased, and several new technologies are required to edit and distribute video. The purpose of this review article is to provide an overview of the popular methods for recording, editing, and distributing high-definition video. This article discusses the essential technical concepts of high-definition video, reviews the different kinds of equipment and methods most often used for recording, and describes several options for video distribution.

  15. HDR video synthesis for vision systems in dynamic scenes

    NASA Astrophysics Data System (ADS)

    Shopovska, Ivana; Jovanov, Ljubomir; Goossens, Bart; Philips, Wilfried

    2016-09-01

    High dynamic range (HDR) image generation from a number of differently exposed low dynamic range (LDR) images has been extensively explored in the past few decades, and as a result of these efforts a large number of HDR synthesis methods have been proposed. Since HDR images are synthesized by combining well-exposed regions of the input images, one of the main challenges is dealing with camera or object motion. In this paper we propose a method for the synthesis of HDR video from a single camera using multiple, differently exposed video frames, with circularly alternating exposure times. One of the potential applications of the system is in driver assistance systems and autonomous vehicles, involving significant camera and object movement, non-uniform and temporally varying illumination, and the requirement of real-time performance. To achieve these goals simultaneously, we propose a HDR synthesis approach based on weighted averaging of aligned radiance maps. The computational complexity of high-quality optical flow methods for motion compensation is still prohibitively high for real-time applications. Instead, we rely on more efficient global projective transformations to solve camera movement, while moving objects are detected by thresholding the differences between the transformed and brightness adapted images in the set. To attain temporal consistency of the camera motion in the consecutive HDR frames, the parameters of the perspective transformation are stabilized over time by means of computationally efficient temporal filtering. We evaluated our results on several reference HDR videos, on synthetic scenes, and using 14-bit raw images taken with a standard camera.
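
    A minimal weighted-average merge in the spirit of the described approach (motion handling omitted) might look like the following; the hat-shaped weighting and exposure times are illustrative choices, not the authors' exact pipeline.

    ```python
    # Weighted-average radiance merge from differently exposed frames.
    import numpy as np

    def merge_hdr(frames, exposures):
        """frames: list of float arrays in [0,1]; exposures: seconds."""
        num = np.zeros_like(frames[0])
        den = np.zeros_like(frames[0])
        for img, t in zip(frames, exposures):
            w = 1.0 - np.abs(2.0 * img - 1.0)     # favour well-exposed mid-tones
            num += w * img / t                    # per-frame radiance estimate
            den += w
        return num / np.maximum(den, 1e-6)

    f1 = np.clip(np.random.rand(4, 4), 0, 1)          # synthetic short exposure
    f2 = np.clip(f1 * 4, 0, 1)                        # synthetic long exposure
    print(merge_hdr([f1, f2], [0.01, 0.04]).shape)
    ```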

  16. Vehicle-triggered video compression/decompression for fast and efficient searching in large video databases

    NASA Astrophysics Data System (ADS)

    Bulan, Orhan; Bernal, Edgar A.; Loce, Robert P.; Wu, Wencheng

    2013-03-01

    Video cameras are widely deployed along city streets, interstate highways, traffic lights, stop signs and toll booths by entities that perform traffic monitoring and law enforcement. The videos captured by these cameras are typically compressed and stored in large databases. Performing a rapid search for a specific vehicle within a large database of compressed videos is often required and can be time-critical, even a matter of life or death. In this paper, we propose video compression and decompression algorithms that enable fast and efficient vehicle or, more generally, event searches in large video databases. The proposed algorithm selects reference frames (i.e., I-frames) based on a vehicle having been detected at a specified position within the scene being monitored while compressing a video sequence. A search for a specific vehicle in the compressed video stream is performed across the reference frames only, which does not require decompression of the full video sequence as in traditional search algorithms. Our experimental results on videos captured on a local road show that the proposed algorithm significantly reduces the search space (thus reducing time and computational resources) in vehicle search tasks within compressed video streams, particularly those captured in light traffic volume conditions.
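
    The trigger idea can be sketched in a few lines: force a reference frame whenever the detector fires at the designated position (with a fallback maximum GOP length), so that a later search decodes reference frames only. The detector callback and GOP limit are hypothetical.

    ```python
    # Sketch of vehicle-triggered I-frame selection during compression.
    def choose_frame_types(frames, vehicle_at_trigger_position, gop_max=250):
        since_iframe = gop_max
        for i, frame in enumerate(frames):
            if vehicle_at_trigger_position(frame) or since_iframe >= gop_max:
                yield i, "I"          # reference frame: searchable directly
                since_iframe = 0
            else:
                yield i, "P"          # predicted frame: skipped during search
                since_iframe += 1
    ```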

  17. Evaluation of the geometric stability and the accuracy potential of digital cameras — Comparing mechanical stabilisation versus parameterisation

    NASA Astrophysics Data System (ADS)

    Rieke-Zapp, D.; Tecklenburg, W.; Peipe, J.; Hastedt, H.; Haig, Claudia

    Recent tests on the geometric stability of several digital cameras that were not designed for photogrammetric applications have shown that the accomplished accuracies in object space are either limited or that the accuracy potential is not exploited to the fullest extent. A total of 72 calibrations were calculated with four different software products for eleven digital camera models with different hardware setups, some with mechanical fixation of one or more parts. The calibration procedure was chosen in accord to a German guideline for evaluation of optical 3D measuring systems [VDI/VDE, VDI/VDE 2634 Part 1, 2002. Optical 3D Measuring Systems-Imaging Systems with Point-by-point Probing. Beuth Verlag, Berlin]. All images were taken with ringflashes which was considered a standard method for close-range photogrammetry. In cases where the flash was mounted to the lens, the force exerted on the lens tube and the camera mount greatly reduced the accomplished accuracy. Mounting the ringflash to the camera instead resulted in a large improvement of accuracy in object space. For standard calibration best accuracies in object space were accomplished with a Canon EOS 5D and a 35 mm Canon lens where the focusing tube was fixed with epoxy (47 μm maximum absolute length measurement error in object space). The fixation of the Canon lens was fairly easy and inexpensive resulting in a sevenfold increase in accuracy compared with the same lens type without modification. A similar accuracy was accomplished with a Nikon D3 when mounting the ringflash to the camera instead of the lens (52 μm maximum absolute length measurement error in object space). Parameterisation of geometric instabilities by introduction of an image variant interior orientation in the calibration process improved results for most cameras. In this case, a modified Alpa 12 WA yielded the best results (29 μm maximum absolute length measurement error in object space). Extending the parameter model with FiBun software to model not only an image variant interior orientation, but also deformations in the sensor domain of the cameras, showed significant improvements only for a small group of cameras. The Nikon D3 camera yielded the best overall accuracy (25 μm maximum absolute length measurement error in object space) with this calibration procedure indicating at the same time the presence of image invariant error in the sensor domain. Overall, calibration results showed that digital cameras can be applied for an accurate photogrammetric survey and that only a little effort was sufficient to greatly improve the accuracy potential of digital cameras.

  18. Research on inosculation between master of ceremonies or players and virtual scene in virtual studio

    NASA Astrophysics Data System (ADS)

    Li, Zili; Zhu, Guangxi; Zhu, Yaoting

    2003-04-01

    A technical approach to the construction of a virtual studio has been proposed, in which an orientation tracker and telemeter are used to augment a conventional BETACAM pickup camera and connect it with the software module of the host. A virtual camera model named the Camera & Post-camera Coupling Pair has been put forward, which differs from the common model in computer graphics and is bound to the real BETACAM pickup camera for shooting. Formulas have been derived to compute the foreground and background frame-buffer images of the virtual scene, whose boundary is based on the depth of the target point along the real BETACAM pickup camera's projective ray. Real-time consistency is achieved between the video image sequences of the master of ceremonies or players and the CG video image sequences of the virtual scene in spatial position, perspective relationship, and image-object masking. The experimental results show that the technological scheme for constructing a virtual studio submitted in this paper is feasible, and more practical and effective than the existing technology for building a virtual studio based on color-key and image synthesis with the background using non-linear video editing techniques.

  19. Head Mounted Alerting for Urban Operations via Tactical Information Management System

    DTIC Science & Technology

    2006-03-01

    [Abstract unavailable; the indexed excerpt contains only table-of-contents fragments referring to MOUT-area-based and video-game-based experiments, learning rates for truth sets, and results from a Breakthrough Mission video-game configuration.]

  20. Analysis of the color rendition of flexible endoscopes

    NASA Astrophysics Data System (ADS)

    Murphy, Edward M.; Hegarty, Francis J.; McMahon, Barry P.; Boyle, Gerard

    2003-03-01

    Endoscopes are imaging devices routinely used for the diagnosis of disease within the human digestive tract. Light is transmitted into the body cavity via incoherent fibreoptic bundles and is controlled by a light feedback system. Fibreoptic endoscopes use coherent fibreoptic bundles to provide the clinician with an image. It is also possible to couple fibreoptic endoscopes to a clip-on video camera. Video endoscopes consist of a small CCD camera, which is inserted into the gastrointestinal tract, and an associated image processor to convert the signal to analogue RGB video signals. Images from both types of endoscope are displayed on standard video monitors. Diagnosis depends upon being able to detect changes in the structure and colour of tissues and biological fluids, and therefore upon the ability of the endoscope to reproduce the colour of these tissues and fluids with fidelity. This study investigates the colour reproduction of flexible optical and video endoscopes. Fibreoptic and video endoscopes alter image colour characteristics in different ways. The colour rendition of fibreoptic endoscopes was assessed by coupling them to a video camera and applying video colorimetric techniques. These techniques were then used on video endoscopes to assess how the colour rendition of video endoscopes compared with that of optical endoscopes. In both cases results were obtained at fixed illumination settings. Video endoscopes were then assessed with varying levels of illumination. Initial results show that at constant luminance endoscopy systems introduce non-linear shifts in colour. Techniques for examining how this colour shift varies with illumination intensity were developed, and both methodology and results will be presented. We conclude that more rigorous quality assurance is required to reduce colour error, and we are developing calibration procedures applicable to medical endoscopes.
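
    One standard way to quantify such colour shifts is a CIE76 colour difference between a test chart's reference Lab values and the values measured through the endoscope; the sketch below uses invented Lab triples.

    ```python
    # CIE76 colour difference (Euclidean distance in Lab space).
    import numpy as np

    def delta_e_cie76(lab_ref, lab_meas):
        return float(np.linalg.norm(np.asarray(lab_ref) - np.asarray(lab_meas)))

    reference = (52.0, 42.5, 28.1)   # patch Lab under the calibration source
    measured = (49.3, 47.9, 25.4)    # same patch imaged via the endoscope
    print(delta_e_cie76(reference, measured))   # > ~2.3 is a noticeable shift
    ```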

  1. A Taxonomy of Asynchronous Instructional Video Styles

    ERIC Educational Resources Information Center

    Chorianopoulos, Konstantinos

    2018-01-01

    Many educational organizations are employing instructional videos in their pedagogy, but there is a limited understanding of the possible video formats. In practice, the presentation format of instructional videos ranges from direct recording of classroom teaching with a stationary camera, or screencasts with voiceover, to highly elaborate video…

  2. Registration of an on-axis see-through head-mounted display and camera system

    NASA Astrophysics Data System (ADS)

    Luo, Gang; Rensing, Noa M.; Weststrate, Evan; Peli, Eli

    2005-02-01

    An optical see-through head-mounted display (HMD) system integrating a miniature camera that is aligned with the user's pupil is developed and tested. Such an HMD system has potential value in many augmented reality applications, in which registration of the virtual display to the real scene is one of the critical aspects. The camera alignment to the user's pupil results in a simple yet accurate calibration and a low registration error across a wide range of depths. In reality, a small camera-eye misalignment may still occur in such a system due to the inevitable variations of HMD wearing position with respect to the eye. The effects of such errors are measured. Calculation further shows that the registration error as a function of viewing distance behaves nearly the same for different virtual image distances, except for a shift. The impact of the prismatic effect of the display lens on registration is also discussed.

  3. Topview stereo: combining vehicle-mounted wide-angle cameras to a distance sensor array

    NASA Astrophysics Data System (ADS)

    Houben, Sebastian

    2015-03-01

    The variety of vehicle-mounted sensors required to fulfill a growing number of driver-assistance tasks has become a substantial factor in automobile manufacturing cost. We present a stereo distance method exploiting the overlapping field of view of a multi-camera fisheye surround-view system of the kind used for near-range vehicle surveillance tasks, e.g. in parking maneuvers. Hence, we aim at creating a new input signal from sensors that are already installed. Particular properties of wide-angle cameras (e.g. changing resolution across the image) demand an adaptation of the image processing pipeline to several problems that do not arise in classical stereo vision performed with cameras carefully designed for this purpose. We introduce the algorithms for rectification, correspondence analysis, and regularization of the disparity image, discuss reasons for and avoidance of the shown caveats, and present first results on a prototype topview setup.

  4. Choreographing the Frame: A Critical Investigation into How Dance for the Camera Extends the Conceptual and Artistic Boundaries of Dance

    ERIC Educational Resources Information Center

    Preston, Hilary

    2006-01-01

    This essay investigates the collaboration between dance and choreographic practice and film/video medium in a contemporary context. By looking specifically at dance made for the camera and the proliferation of dance-film/video, critical issues will be explored that have surfaced in response to this burgeoning form. Presenting a view of avant-garde…

  5. ASTP video tape recorder ground support equipment, addendum 1 (CTE splitter/buffer). Operations manual

    NASA Technical Reports Server (NTRS)

    1975-01-01

    A descriptive handbook for the CTE splitter (RCA part No. 8673734-503) was presented. This unit is designed to extract time data from an interleaved video/audio signal. It is a rack-mounted unit 7 inches high, 19 inches wide, 20 inches deep, mounted on slides for retracting from the rack, and weighs approximately 40 pounds. The following information is provided: installation, operation, principles of operation, maintenance, schematics and parts lists.

  6. ASTP video tape recorder ground support equipment, addendum 2 (CTE splitter/buffer). Operations manual

    NASA Technical Reports Server (NTRS)

    1975-01-01

    A descriptive handbook for the CTE splitter (RCA part No. 8673734-50A) was presented. This unit is designed to extract time data from an interleaved video/audio signal. It is a rack-mounted unit 7 inches high, 19 inches wide, 20 inches deep, mounted on slides for retracting from the rack, and weighs approximately 40 pounds. The following information is provided: installation, operation, principles of operation, maintenance, schematics and parts lists.

  7. Mount Sharp Panorama in White-Balanced Colors

    NASA Image and Video Library

    2013-03-15

    This mosaic of images from the Mast Camera (Mastcam) on NASA's Mars rover Curiosity shows Mount Sharp in a white-balanced color adjustment that makes the sky look overly blue but shows the terrain as if under Earth-like lighting.

  8. Evaluation of commercial video-based intersection signal actuation systems.

    DOT National Transportation Integrated Search

    2008-12-01

    Video cameras and computer image processors have come into widespread use for the detection of : vehicles for signal actuation at controlled intersections. Video is considered both a cost-saving and : convenient alternative to conventional stop-line ...

  9. An ultrahigh-speed color video camera operating at 1,000,000 fps with 288 frame memories

    NASA Astrophysics Data System (ADS)

    Kitamura, K.; Arai, T.; Yonai, J.; Hayashida, T.; Kurita, T.; Maruyama, H.; Namiki, J.; Yanagi, T.; Yoshida, T.; van Kuijk, H.; Bosiers, Jan T.; Saita, A.; Kanayama, S.; Hatade, K.; Kitagawa, S.; Etoh, T. Goji

    2008-11-01

    We developed an ultrahigh-speed color video camera that operates at 1,000,000 fps (frames per second) and has the capacity to store 288 frame memories. In 2005, we developed an ultrahigh-speed, high-sensitivity portable color camera with a 300,000-pixel single CCD (ISIS-V4: In-situ Storage Image Sensor, Version 4). Its ultrahigh-speed shooting capability of 1,000,000 fps was made possible by directly connecting CCD storages, which record video images, to the photodiodes of individual pixels. The number of consecutive frames was 144. However, longer capture times were demanded when the camera was used during imaging experiments and for some television programs. To increase ultrahigh-speed capture times, we used a beam splitter and two ultrahigh-speed 300,000-pixel CCDs. The beam splitter was placed behind the pickup lens, with one CCD located at each of its two outputs. A CCD driving unit was developed to drive the two CCDs separately, and the recording period of the two CCDs was sequentially switched. This increased the recording capacity to 288 images, a factor-of-two increase over that of the conventional ultrahigh-speed camera. A problem with the camera was that the beam splitter reduced the incident light on each CCD by a factor of two. To improve the light sensitivity, we developed a microlens array for use with the ultrahigh-speed CCDs. We simulated the operation of the microlens array in order to optimize its shape and then fabricated it using stamping technology. Using this microlens array increased the light sensitivity of the CCDs by an approximate factor of two. By using a beam splitter in conjunction with the microlens array, it was possible to make an ultrahigh-speed color video camera that has 288 frame memories without decreasing the camera's light sensitivity.

  10. High-Speed Video Analysis in a Conceptual Physics Class

    NASA Astrophysics Data System (ADS)

    Desbien, Dwain M.

    2011-09-01

    The use of probeware and computers has become quite common in introductory physics classrooms. Video analysis is also becoming more popular and is available to a wide range of students through commercially available and/or free software. Video analysis allows for the study of motions that cannot be easily measured in the traditional lab setting and also allows real-world situations to be analyzed. Many motions are too fast to be easily captured at the standard video frame rate of 30 frames per second (fps) employed by most video cameras. This paper will discuss using a consumer camera that can record high-frame-rate video in a college-level conceptual physics class. In particular this will involve the use of model rockets to determine the acceleration during the boost period right at launch and compare it to a simple model of the expected acceleration.
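
    The boost-phase analysis can be illustrated in a few lines: fit a parabola to the tracked altitude samples and read the acceleration off the quadratic term. The frame rate and altitude values below are invented for the sketch, not data from the paper.

      # Acceleration estimate from position-time samples digitized from video.
      import numpy as np

      fps = 120.0                                   # assumed high-speed frame rate
      y = np.array([0.0000, 0.0017, 0.0069, 0.0156, 0.0278, 0.0434])  # altitude [m]
      t = np.arange(len(y)) / fps

      coeffs = np.polyfit(t, y, 2)                  # y(t) ~ 0.5*a*t^2 + v0*t + y0
      a = 2.0 * coeffs[0]
      print(f"estimated boost acceleration: {a:.1f} m/s^2")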

  11. Low cost thermal camera for use in preclinical detection of diabetic peripheral neuropathy in primary care setting

    NASA Astrophysics Data System (ADS)

    Joshi, V.; Manivannan, N.; Jarry, Z.; Carmichael, J.; Vahtel, M.; Zamora, G.; Calder, C.; Simon, J.; Burge, M.; Soliz, P.

    2018-02-01

    Diabetic peripheral neuropathy (DPN) accounts for around 73,000 lower-limb amputations annually in the US among patients with diabetes. Early detection of DPN is critical. Current clinical methods for diagnosing DPN are subjective and effective only at later stages. Until recently, thermal cameras used for medical imaging have been expensive and hence prohibitive to install in a primary care setting. The objective of this study is to compare results from a low-cost thermal camera with a high-end thermal camera used in screening for DPN. Thermal imaging has demonstrated changes in microvascular function that correlate with nerve function affected by DPN. The limitations of low-cost cameras for DPN imaging are lower resolution (fewer active pixels), lower frame rate, lower thermal sensitivity, etc. We integrated two FLIR Lepton sensors (80x60 active pixels, 50° HFOV, thermal sensitivity < 50 mK) as one unit. The right and left cameras record videos of the right and left foot, respectively. A compact embedded system (Raspberry Pi 3 Model B v1.2) is used to configure the sensors and to capture and stream the video via Ethernet. The resulting video has 160x120 active pixels (8 frames/second). We compared the temperature measurement of feet obtained using the low-cost camera against the gold-standard high-end FLIR SC305. Twelve subjects (aged 35-76) were recruited. The difference in the temperature measurements between cameras was calculated for each subject, and the results show that the difference between the temperature measurements of the two cameras (mean difference=0.4, p-value=0.2) is not statistically significant. We conclude that the low-cost thermal camera system shows potential for use in detecting early signs of DPN in under-served and rural clinics.
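
    The per-subject comparison reads naturally as a paired test; the sketch below shows such an analysis with scipy. The paired t-test is our assumption about the analysis, and the temperatures are invented, so the output will not reproduce the study's exact figures (mean difference 0.4, p = 0.2).

      # Paired comparison of foot temperatures from the two thermal cameras.
      import numpy as np
      from scipy import stats

      flir = np.array([29.8, 29.6, 30.1, 31.0, 28.9, 29.4,
                       30.5, 30.0, 29.2, 30.8, 30.3, 29.7])   # gold standard [deg C]
      low_cost = np.array([30.7, 29.1, 31.3, 30.2, 29.6, 30.9,
                           30.2, 30.6, 30.2, 30.6, 30.7, 30.0])

      t_stat, p_value = stats.ttest_rel(low_cost, flir)
      print(f"mean difference: {np.mean(low_cost - flir):.2f} K, p = {p_value:.3f}")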

  12. Development of the SEASIS instrument for SEDSAT

    NASA Technical Reports Server (NTRS)

    Maier, Mark W.

    1996-01-01

    Two SEASIS experiment objectives are key: take images that allow three-axis attitude determination and take multi-spectral images of the earth. During the tether mission it is also desirable to capture images of the recoiling tether from the endmass perspective (which has never been observed). SEASIS must store all its imagery taken during the tether mission until the earth downlink can be established. SEASIS determines attitude with a panoramic camera and performs earth observation with a telephoto lens camera. Camera video is digitized, compressed, and stored in solid state memory. These objectives are addressed through the following architectural choices: (1) A camera system using a Panoramic Annular Lens (PAL). This lens has a 360 deg. azimuthal field of view by a +45 degree vertical field measured from a plane normal to the lens boresight axis. It has been shown in Mr. Mark Steadham's UAH M.S. thesis that this camera can determine three-axis attitude anytime the earth and one other recognizable celestial object (for example, the sun) is in the field of view. This will be essentially all the time during tether deployment. (2) A second camera system using a telephoto lens and filter wheel. The camera is a black-and-white standard video camera. The filters are chosen to cover the visible spectral bands of remote sensing interest. (3) A processor and mass memory arrangement linked to the cameras. Video signals from the cameras are digitized, compressed in the processor, and stored in a large static RAM bank. The processor is a multi-chip module consisting of a T800 Transputer and three Zoran floating point Digital Signal Processors. This processor module was supplied under ARPA contract by the Space Computer Corporation to demonstrate its use in space.

  13. Packet based serial link realized in FPGA dedicated for high resolution infrared image transmission

    NASA Astrophysics Data System (ADS)

    Bieszczad, Grzegorz

    2015-05-01

    This article describes an external digital interface specially designed for a thermographic camera built at the Military University of Technology. The aim of the article is to illustrate challenges encountered during the design process of a thermal vision camera, especially those related to infrared data processing and transmission. The article explains the main requirements for an interface to transfer infrared or video digital data and describes the solution we elaborated, based on the Low Voltage Differential Signaling (LVDS) physical layer and signaling scheme. The elaborated image-transmission link is built using an FPGA integrated circuit with built-in high-speed serial transceivers achieving up to 2.5 Gb/s throughput. Image transmission is realized using a proprietary packet protocol. The transmission protocol engine was described in VHDL and tested in FPGA hardware. The link is able to transmit 1280x1024@60Hz 24-bit video data using one signal pair, and was tested by transmitting the thermal-vision camera's picture to a remote monitor. Construction of a dedicated video link reduces power consumption compared to solutions with ASIC-based encoders and decoders realizing video links like DVI or packet-based DisplayPort, while simultaneously reducing the wires needed to establish the link to one pair. The article describes the functions of modules integrated in the FPGA design: synchronization to the video source, video stream packetization, interfacing the transceiver module, and dynamic clock generation for video standard conversion.
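
    A quick sanity check of the quoted link budget: the raw pixel stream is about 1.9 Gb/s, which fits a 2.5 Gb/s lane even after line-coding overhead. The 8b/10b coding below is our assumption; the article's proprietary packet overhead is not specified.

      # Does 1280x1024 @ 60 Hz, 24-bit video fit one 2.5 Gb/s serial pair?
      width, height, fps, bpp = 1280, 1024, 60, 24
      payload = width * height * fps * bpp       # ~1.89 Gb/s of raw pixel data
      line_rate = 2.5e9                          # transceiver line rate [b/s]
      usable = line_rate * 8 / 10                # assumed 8b/10b coding -> 2.0 Gb/s
      print(f"payload {payload / 1e9:.2f} Gb/s, usable {usable / 1e9:.2f} Gb/s, "
            f"headroom {(usable - payload) / 1e9:.2f} Gb/s for packet framing")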

  14. INFIBRA: machine vision inspection of acrylic fiber production

    NASA Astrophysics Data System (ADS)

    Davies, Roger; Correia, Bento A. B.; Contreiras, Jose; Carvalho, Fernando D.

    1998-10-01

    This paper describes the implementation of INFIBRA, a machine vision system for the inspection of acrylic fiber production lines. The system was developed by INETI under a contract from Fisipe, Fibras Sinteticas de Portugal, S.A. At Fisipe there are ten production lines in continuous operation, each approximately 40 m in length. A team of operators used to perform periodic manual visual inspection of each line in conditions of high ambient temperature and humidity. It is not surprising that failures in the manual inspection process occurred with some frequency, with consequences that ranged from reduced fiber quality to production stoppages. The INFIBRA system architecture is a specialization of a generic, modular machine vision architecture based on a network of Personal Computers (PCs), each equipped with a low cost frame grabber. Each production line has a dedicated PC that performs automatic inspection, using specially designed metrology algorithms, via four video cameras located at key positions on the line. The cameras are mounted inside custom-built, hermetically sealed water-cooled housings to protect them from the unfriendly environment. The ten PCs, one for each production line, communicate with a central PC via a standard Ethernet connection. The operator controls all aspects of the inspection process, from configuration through to handling alarms, via a simple graphical interface on the central PC. At any time the operator can also view on the central PC's screen the live image from any one of the 40 cameras employed by the system.

  15. System of launchable mesoscale robots for distributed sensing

    NASA Astrophysics Data System (ADS)

    Yesin, Kemal B.; Nelson, Bradley J.; Papanikolopoulos, Nikolaos P.; Voyles, Richard M.; Krantz, Donald G.

    1999-08-01

    A system of launchable miniature mobile robots with various sensors as payload is used for distributed sensing. The robots are projected to areas of interest either by a robot launcher or by a human operator using standard equipment. A wireless communication network is used to exchange information with the robots. Payloads such as a MEMS sensor for vibration detection, a microphone, and an active video module are used mainly to detect humans. The video camera provides live images through a wireless video transmitter, and a pan-tilt mechanism expands the effective field of view. There are strict restrictions on total volume and power consumption of the payloads due to the small size of the robot. Emerging technologies are used to address these restrictions. In this paper, we describe the use of microrobotic technologies to develop active vision modules for the mesoscale robot. A single-chip CMOS video sensor is used along with a miniature lens that is approximately the size of a sugar cube. The device consumes 100 mW, about one-fifth the power consumption of a comparable CCD camera. Miniature gearmotors 3 mm in diameter are used to drive the pan-tilt mechanism. A miniature video transmitter is used to transmit analog video signals from the camera.

  16. Application of robust face recognition in video surveillance systems

    NASA Astrophysics Data System (ADS)

    Zhang, De-xin; An, Peng; Zhang, Hao-xiang

    2018-03-01

    In this paper, we propose a video searching system that utilizes face recognition as its searching/indexing feature. As applications of video cameras have increased greatly in recent years, face recognition makes a perfect fit for searching for targeted individuals within the vast amount of video data. However, the performance of such searching depends on the quality of the face images recorded in the video signals. Since surveillance video cameras record objects without fixed poses, face occlusion is very common in everyday video. The proposed system builds a model for occluded faces using fuzzy principal component analysis (FPCA) and reconstructs the human faces with the available information. Experimental results show that the system is highly efficient in processing real-life videos and is very robust to various kinds of face occlusions. Hence it can relieve human reviewers from constantly watching the monitors and greatly enhances efficiency as well. The proposed system has been installed and applied in various environments and has already demonstrated its power by helping solve real cases.
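
    The reconstruction idea can be sketched with plain PCA as a simplified stand-in for the paper's fuzzy PCA: learn a face subspace from unoccluded training faces, then iteratively impute the occluded pixels from the subspace projection. The inputs `train_faces` and the boolean `occluded` mask are assumptions of the sketch, and the fuzzy weighting of the paper is omitted.

      # Iterative PCA imputation of occluded face pixels (stand-in for FPCA).
      import numpy as np
      from sklearn.decomposition import PCA

      def reconstruct_occluded(face, occluded, train_faces, n_components=50):
          """face: 1-D pixel vector; occluded: boolean mask; train_faces: N x P."""
          pca = PCA(n_components=n_components).fit(train_faces)
          filled = face.astype(float).copy()
          filled[occluded] = pca.mean_[occluded]     # initial guess for hidden pixels
          for _ in range(10):                        # project, then re-impute
              recon = pca.inverse_transform(pca.transform(filled[None, :]))[0]
              filled[occluded] = recon[occluded]
          return filled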

  17. APOLLO 17 - INFLIGHT

    NASA Image and Video Library

    1972-12-14

    The Apollo 17 Lunar Module (LM) "Challenger" ascent stage leaves the Taurus-Littrow landing site as it makes its spectacular liftoff from the lunar surface, as seen in this reproduction taken from a color television transmission made by the color RCA TV camera mounted on the Lunar Roving Vehicle (LRV). The LRV-mounted TV camera, remotely controlled from the Mission Control Center (MCC) in Houston, made it possible for people on Earth to watch the fantastic event. The LM liftoff was at 188:01:36 ground elapsed time, 4:54:36 p.m. (CST), Thursday, December 14, 1972.

  18. KENNEDY SPACE CENTER, FLA. - A camera is installed on the aft skirt of a solid rocket booster in preparation for a vibration test of the Mobile Launcher Platform with SRBs and external tank mounted. The MLP will roll from one bay to another in the Vehicle Assembly Building.

    NASA Image and Video Library

    2003-11-06

    KENNEDY SPACE CENTER, FLA. - A camera is installed on the aft skirt of a solid rocket booster in preparation for a vibration test of the Mobile Launcher Platform with SRBs and external tank mounted. The MLP will roll from one bay to another in the Vehicle Assembly Building.

  19. Patient training in respiratory-gated radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kini, Vijay R.; Vedam, Subrahmanya S.; Keall, Paul J.

    2003-03-31

    Respiratory gating is used to counter the effects of organ motion during radiotherapy for chest tumors. The effects of variations in patient breathing patterns during a single treatment and from day to day are unknown. We evaluated the feasibility of using patient training tools and their effect on the breathing cycle regularity and reproducibility during respiratory-gated radiotherapy. To monitor respiratory patterns, we used a component of a commercially available respiratory-gated radiotherapy system (Real Time Position Management (RPM) System, Varian Oncology Systems, Palo Alto, CA 94304). This passive marker video tracking system consists of reflective markers placed on the patient's chest or abdomen, which are detected by a wall-mounted video camera. Software installed on a PC interfaced to this camera detects the marker motion digitally and records it. The marker position as a function of time serves as the motion signal that may be used to trigger imaging or treatment. The training tools used were audio prompting and visual feedback, with free breathing as a control. The audio prompting method used instructions to 'breathe in' or 'breathe out' at periodic intervals deduced from patients' own breathing patterns. In the visual feedback method, patients were shown a real-time trace of their abdominal wall motion due to breathing. Using this, they were asked to maintain a constant amplitude of motion. Motion traces of the abdominal wall were recorded for each patient for various maneuvers. Free breathing showed a variable amplitude and frequency. Audio prompting resulted in a reproducible frequency; however, the variability and the magnitude of amplitude increased. Visual feedback gave a better control over the amplitude but showed minor variations in frequency. We concluded that training improves the reproducibility of amplitude and frequency of patient breathing cycles. This may increase the accuracy of respiratory-gated radiation therapy.

  20. Performance of a real-time sensor and processing system on a helicopter

    NASA Astrophysics Data System (ADS)

    Kurz, F.; Rosenbaum, D.; Meynberg, O.; Mattyus, G.; Reinartz, P.

    2014-11-01

    A new optical real-time sensor system (4k system) on a helicopter is now ready to use for applications during disasters, mass events, and traffic monitoring scenarios. The sensor was developed to be lightweight and small, with relatively cheap components, in a pylon mounted sideways on a helicopter. The sensor architecture is ultimately a compromise among the required functionality, the development costs, the weight, and the sensor size. Onboard processors are integrated in the 4k sensor system for orthophoto generation, automatic traffic parameter extraction, and data downlinks. It is planned to add real-time processors for person detection and tracking, for DSM generation, and for water detection. Equipped with the newest and most powerful off-the-shelf cameras available, a wide variety of viewing configurations with a frame rate of up to 12 Hz is possible for the different applications. Based on three cameras with 50 mm lenses looking in different directions, a maximal FOV of 104° is reachable; with 100 mm lenses a ground sampling distance of 3.5 cm is possible at a flight height of 500 m above ground. In this paper, we present the first data sets and describe the technical components of the sensor. The effect of vibrations of the helicopter on the GNSS/IMU accuracy and on the 4k video quality is analysed. It can be shown that when the helicopter hovers, the rolling-shutter effect degrades the 4k video quality drastically. The GNSS/IMU error is higher than the specified limit, mainly caused by the vibrations of the helicopter and the insufficient vibration absorbers on the sensor board.
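
    The quoted resolution figures can be checked with the pinhole relation GSD = pixel pitch x altitude / focal length. The 7 µm pixel pitch below is an assumption chosen for the illustration; it reproduces the quoted 3.5 cm at the 100 mm focal length.

      # Ground sampling distance for the two lens options at 500 m above ground.
      pixel_pitch = 7.0e-6     # assumed sensor pixel size [m]
      altitude = 500.0         # flight height above ground [m]
      for focal_mm in (50.0, 100.0):
          gsd = pixel_pitch * altitude / (focal_mm / 1000.0)
          print(f"f = {focal_mm:.0f} mm -> GSD = {gsd * 100:.1f} cm")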

  1. Coendutermes tucum Fontes (Isoptera, Termitidae, Nasutitermitinae): description of the imago caste and additional notes.

    PubMed

    Cuezzo, Carolina

    2016-12-09

    Coendutermes Fontes, 1985 is a monotypic South American termite genus. Coendutermes tucum Fontes, 1985, was described based on morphological characters from soldiers and workers collected in Mato Grosso, Brazil, and Jodensavanne, Suriname. Herein, I describe the imago caste of C. tucum for the first time, with additional notes on soldiers, workers, and new distributional records. The studied material is deposited at the Museu de Zoologia da Universidade de São Paulo, São Paulo, Brazil (MZUSP). I use the terminology of Fontes (1987) to describe worker mandibles, and that of Noirot (2001) for the different parts of the digestive tube of workers. I measured the imagoes' morphometric characters following Roonwal (1970): LH, length of head capsule (9); WH, width of head capsule without eyes (18); OF, occipito-fontanelle distance (23); DE, diameter of eye (48); LO, length of ocellus (55); WO, width of ocellus (56); EOD, eye-ocellus distance (57); LP, length of pronotum (65); WP, width of pronotum (68); LT, length of hind tibia (85). I took photographs of all castes with a stereomicroscope (Leica M205C) attached to a video camera (Leica DFC295) and images of the gizzard and enteric valve under a microscope (Leica DM750B) attached to a video camera (Leica ICC50HD); I then combined the stacks of images with the software Leica LAS EZ 2.0 or Helicon Focus 5.2.11 X64. For the scanning electron micrographs (SEM), one soldier was critical-point dried while mounted directly on a stub with double-faced adhesive tape, then coated with gold and photographed with the SEM (Zeiss LEO 440®).

  2. Reading Visual Braille with a Retinal Prosthesis

    PubMed Central

    Lauritzen, Thomas Z.; Harris, Jordan; Mohand-Said, Saddek; Sahel, Jose A.; Dorn, Jessy D.; McClure, Kelly; Greenberg, Robert J.

    2012-01-01

    Retinal prostheses, which restore partial vision to patients blinded by outer retinal degeneration, are currently in clinical trial. The Argus II retinal prosthesis system was recently awarded CE approval for commercial use in Europe. While retinal prosthesis users have achieved remarkable visual improvement to the point of reading letters and short sentences, the reading process is still fairly cumbersome. This study investigates the possibility of using an epiretinal prosthesis to stimulate visual braille as a sensory substitution for reading written letters and words. The Argus II retinal prosthesis system, used in this study, includes a 10 × 6 electrode array implanted epiretinally, a tiny video camera mounted on a pair of glasses, and a wearable computer that processes the video and determines the stimulation current of each electrode in real time. In the braille reading system, individual letters are created by a subset of dots from a 3 by 2 array of six dots. For the visual braille experiment, a grid of six electrodes was chosen out of the 10 × 6 Argus II array. Groups of these electrodes were then directly stimulated (bypassing the camera) to create visual percepts of individual braille letters. Experiments were performed in a single subject. Single letters were stimulated in an alternative forced choice (AFC) paradigm, and short 2–4-letter words were stimulated (one letter at a time) in an open-choice reading paradigm. The subject correctly identified 89% of single letters, 80% of 2-letter, 60% of 3-letter, and 70% of 4-letter words. This work suggests that text can successfully be stimulated and read as visual braille in retinal prosthesis patients. PMID:23189036
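
    The letter-to-electrode step described above can be illustrated with a small mapping: standard braille dot patterns select dots from the 3 x 2 cell, and each dot is assigned to one electrode of the implanted array. The dot patterns below are standard braille, but the electrode coordinates are invented for the sketch and are not the Argus II assignment.

      # Letters -> braille dots -> a hypothetical 6-electrode subset of a 10x6 array.
      BRAILLE_DOTS = {"a": {1}, "b": {1, 2}, "c": {1, 4}, "d": {1, 4, 5}, "e": {1, 5}}

      # Dots 1-3 run down the left column, dots 4-6 down the right column.
      DOT_TO_ELECTRODE = {1: (2, 1), 2: (3, 1), 3: (4, 1),   # (row, col) on the array
                          4: (2, 2), 5: (3, 2), 6: (4, 2)}

      def electrodes_for_letter(letter):
          """Electrode subset whose percept forms the given braille letter."""
          return sorted(DOT_TO_ELECTRODE[d] for d in BRAILLE_DOTS[letter])

      print(electrodes_for_letter("d"))   # -> [(2, 1), (2, 2), (3, 2)]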

  3. Reading visual braille with a retinal prosthesis.

    PubMed

    Lauritzen, Thomas Z; Harris, Jordan; Mohand-Said, Saddek; Sahel, Jose A; Dorn, Jessy D; McClure, Kelly; Greenberg, Robert J

    2012-01-01

    Retinal prostheses, which restore partial vision to patients blinded by outer retinal degeneration, are currently in clinical trial. The Argus II retinal prosthesis system was recently awarded CE approval for commercial use in Europe. While retinal prosthesis users have achieved remarkable visual improvement to the point of reading letters and short sentences, the reading process is still fairly cumbersome. This study investigates the possibility of using an epiretinal prosthesis to stimulate visual braille as a sensory substitution for reading written letters and words. The Argus II retinal prosthesis system, used in this study, includes a 10 × 6 electrode array implanted epiretinally, a tiny video camera mounted on a pair of glasses, and a wearable computer that processes the video and determines the stimulation current of each electrode in real time. In the braille reading system, individual letters are created by a subset of dots from a 3 by 2 array of six dots. For the visual braille experiment, a grid of six electrodes was chosen out of the 10 × 6 Argus II array. Groups of these electrodes were then directly stimulated (bypassing the camera) to create visual percepts of individual braille letters. Experiments were performed in a single subject. Single letters were stimulated in an alternative forced choice (AFC) paradigm, and short 2-4-letter words were stimulated (one letter at a time) in an open-choice reading paradigm. The subject correctly identified 89% of single letters, 80% of 2-letter, 60% of 3-letter, and 70% of 4-letter words. This work suggests that text can successfully be stimulated and read as visual braille in retinal prosthesis patients.

  4. Using drone-mounted cameras for on-site body documentation: 3D mapping and active survey.

    PubMed

    Urbanová, Petra; Jurda, Mikoláš; Vojtíšek, Tomáš; Krajsa, Jan

    2017-12-01

    Recent advances in unmanned aerial technology have substantially lowered the cost associated with aerial imagery. As a result, forensic practitioners are today presented with easy low-cost access to aerial photographs at remote locations. The present paper aims to explore the boundaries within which low-end drone technology can operate as professional crime scene equipment, and to test the prospects of aerial 3D modeling in the forensic context. The study was based on recent forensic cases of falls from height admitted for postmortem examinations. Three mock outdoor forensic scenes featuring a dummy, skeletal remains, and artificial blood were constructed at an abandoned quarry and subsequently documented using a commercial DJI Phantom 2 drone equipped with a GoPro HERO 4 digital camera. In two of the experiments, the purpose was to conduct aerial and ground-view photography and to process the acquired images with a photogrammetry protocol (using Agisoft PhotoScan® 1.2.6) in order to generate 3D textured models. The third experiment tested the employment of drone-based video recordings in mapping scattered body parts. The results show that drone-based aerial photography is capable of producing high-quality images, which are appropriate for building accurate large-scale 3D models of a forensic scene. If, however, high-resolution top-down three-dimensional scene documentation featuring details on a corpse or other physical evidence is required, we recommend building a multi-resolution model by processing aerial and ground-view imagery separately. The video survey showed that using an overview recording for seeking out scattered body parts was efficient. In contrast, less easy-to-spot evidence, such as bloodstains, was detected only after having been marked properly with crime scene equipment. Copyright © 2017 Elsevier B.V. All rights reserved.

  5. Guerrilla Video: A New Protocol for Producing Classroom Video

    ERIC Educational Resources Information Center

    Fadde, Peter; Rich, Peter

    2010-01-01

    Contemporary changes in pedagogy point to the need for a higher level of video production value in most classroom video, replacing the default video protocol of an unattended camera in the back of the classroom. The rich and complex environment of today's classroom can be captured more fully using the higher level, but still easily manageable,…

  6. High dynamic range adaptive real-time smart camera: an overview of the HDR-ARTiST project

    NASA Astrophysics Data System (ADS)

    Lapray, Pierre-Jean; Heyrman, Barthélémy; Ginhac, Dominique

    2015-04-01

    Standard cameras capture only a fraction of the information that is visible to the human visual system. This is specifically true for natural scenes including areas of low and high illumination due to transitions between sunlit and shaded areas. When capturing such a scene, many cameras are unable to store the full Dynamic Range (DR), resulting in low quality video where details are concealed in shadows or washed out by sunlight. The imaging technique that can overcome this problem is called HDR (High Dynamic Range) imaging. This paper describes a complete smart camera built around a standard off-the-shelf LDR (Low Dynamic Range) sensor and a Virtex-6 FPGA board. This smart camera, called HDR-ARtiSt (High Dynamic Range Adaptive Real-time Smart camera), is able to produce a real-time HDR live color video stream by recording and combining multiple acquisitions of the same scene while varying the exposure time. This technique appears as one of the most appropriate and cheapest solutions to enhance the dynamic range of real-life environments. HDR-ARtiSt embeds real-time multiple captures, HDR processing, data display, and transfer of an HDR color video at full sensor resolution (1280 × 1024 pixels) at 60 frames per second. The main contributions of this work are: (1) Multiple Exposure Control (MEC) dedicated to smart image capture with alternating three exposure times that are dynamically evaluated from frame to frame, (2) Multi-streaming Memory Management Unit (MMMU) dedicated to the memory read/write operations of the three parallel video streams corresponding to the different exposure times, (3) HDR creation by combining the video streams using a specific hardware version of Debevec's technique, and (4) Global Tone Mapping (GTM) of the HDR scene for display on a standard LCD monitor.
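
    A software analogue of the capture-merge-tonemap chain (not the FPGA pipeline itself) can be written with OpenCV's implementation of Debevec's method. `frames` is an assumed list of three 8-bit images of the same scene taken at the given exposure times.

      # Merge three bracketed LDR frames into an HDR image, then tone map it.
      import cv2
      import numpy as np

      def hdr_from_brackets(frames, exposure_times):
          times = np.asarray(exposure_times, dtype=np.float32)
          response = cv2.createCalibrateDebevec().process(frames, times)  # response curve
          hdr = cv2.createMergeDebevec().process(frames, times, response) # radiance map
          ldr = cv2.createTonemap(gamma=2.2).process(hdr)                 # global tone map
          return np.clip(ldr * 255, 0, 255).astype(np.uint8)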

  7. Versatile microsecond movie camera

    NASA Astrophysics Data System (ADS)

    Dreyfus, R. W.

    1980-03-01

    A laboratory-type movie camera is described which satisfies many requirements in the range 1 microsec to 1 sec. The camera consists of a He-Ne laser and compatible state-of-the-art components; the primary components are an acoustooptic modulator, an electromechanical beam deflector, and a video tape system. The present camera is distinct in its operation in that submicrosecond laser flashes freeze the image motion while still allowing the simplicity of electromechanical image deflection in the millisecond range. The gating and pulse delay circuits of an oscilloscope synchronize the modulator and scanner relative to the subject being photographed. The optical table construction and electronic control enhance the camera's versatility and adaptability. The instant replay video tape recording allows for easy synchronization and immediate viewing of the results. Economy is achieved by using off-the-shelf components, optical table construction, and short assembly time.

  8. Collaborative real-time scheduling of multiple PTZ cameras for multiple object tracking in video surveillance

    NASA Astrophysics Data System (ADS)

    Liu, Yu-Che; Huang, Chung-Lin

    2013-03-01

    This paper proposes a multi-PTZ-camera control mechanism to acquire close-up imagery of human objects in a surveillance system. The control algorithm is based on the output of multi-camera, multi-target tracking. Three main concerns of the algorithm are (1) imagery of the human object's face for biometric purposes, (2) optimal video quality of the human objects, and (3) minimum hand-off time. Here, we define an objective function based on the expected capture conditions such as the camera-subject distance, pan-tilt angles of capture, face visibility, and others. Such an objective function serves to effectively balance the number of captures per subject and the quality of captures. In the experiments, we demonstrate the performance of the system, which operates in real time under real-world conditions on three PTZ cameras.

  9. The design of red-blue 3D video fusion system based on DM642

    NASA Astrophysics Data System (ADS)

    Fu, Rongguo; Luo, Hao; Lv, Jin; Feng, Shu; Wei, Yifang; Zhang, Hao

    2016-10-01

    Aiming at the uncertainty of traditional 3D video capture, including camera focal lengths and the distance and angle parameters between two cameras, a red-blue 3D video fusion system based on the DM642 hardware processing platform is designed with parallel optical axes. In view of the brightness reduction of traditional 3D video, a brightness enhancement algorithm based on human visual characteristics is proposed, together with a luminance component processing method based on the YCbCr color space. The BIOS real-time operating system is used to improve real-time performance. The DM642-based video processing circuit enhances the brightness of the images, converts the video signals from YCbCr to RGB, extracts the R component from one camera and the G and B components from the other synchronously, and finally outputs the fused 3D images. Real-time adjustments such as translation and scaling of the two color components are realized through serial communication between the VC software and the BIOS. The method of adding red-blue components reduces the loss of chrominance, keeping the picture's color saturation above 95% of the original. The optimized enhancement algorithm reduces the amount of data processed during fusion, shortening fusion time and improving the viewing experience. Experimental results show that the system can capture images at near distance, output red-blue 3D video, and provide a pleasing experience to audiences wearing red-blue glasses.
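
    The core fusion step reduces to a channel swap, sketched below with numpy: the R component comes from the left camera and the G and B components from the right camera. The brightness enhancement and YCbCr processing of the paper are omitted from this sketch.

      # Red-blue (anaglyph-style) fusion of two parallel-axis camera frames.
      import numpy as np

      def red_blue_fuse(left_rgb, right_rgb):
          """left_rgb, right_rgb: HxWx3 uint8 frames from the two cameras."""
          fused = right_rgb.copy()            # green/blue channels from the right view
          fused[..., 0] = left_rgb[..., 0]    # red channel from the left view
          return fused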

  10. What are we missing? Advantages of more than one viewpoint to estimate fish assemblages using baited video

    PubMed Central

    Huveneers, Charlie; Fairweather, Peter G.

    2018-01-01

    Counting errors can bias assessments of species abundance and richness, which can affect assessments of stock structure, population structure, and monitoring programmes. Many methods for studying ecology use fixed viewpoints (e.g. camera traps, underwater video), but little is known about how this biases the data obtained. In the marine realm, most studies using baited underwater video, a common method for monitoring fish and nekton, have previously assessed fishes using only a single bait-facing viewpoint. To investigate the biases stemming from using fixed viewpoints, we added cameras to cover 360° views around the units. We found similar species richness for all observed viewpoints, but the bait-facing viewpoint recorded the highest fish abundance. Sightings of infrequently seen and shy species increased with the additional cameras, and the extra viewpoints allowed the abundance estimates of highly abundant schooling species to be up to 60% higher. We specifically recommend the use of additional cameras for studies focusing on shyer species or those particularly interested in increasing the sensitivity of the method by avoiding saturation in highly abundant species. Studies may also benefit from using additional cameras to focus observation on the downstream viewpoint.

  11. 4K Video of Colorful Liquid in Space

    NASA Image and Video Library

    2015-10-09

    Once again, astronauts on the International Space Station dissolved an effervescent tablet in a floating ball of water, and captured images using a camera capable of recording four times the resolution of normal high-definition cameras. The higher resolution images and higher frame rate videos can reveal more information when used on science investigations, giving researchers a valuable new tool aboard the space station. This footage is one of the first of its kind. The cameras are being evaluated for capturing science data and vehicle operations by engineers at NASA's Marshall Space Flight Center in Huntsville, Alabama.

  12. Schwarzschild camera

    NASA Technical Reports Server (NTRS)

    1980-01-01

    The fabrication procedures for the primary and secondary mirrors for a Schwarzschild camera are summarized. The achieved wave front for the telescope was 1/2 wave at 0.63 microns. Interferograms of the two mirrors as a system are given and the mounting procedures are outlined.

  13. 2. View from same camera position facing 232 degrees southwest ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    2. View from same camera position facing 232 degrees southwest showing abandoned section of old grade - Oak Creek Administrative Center, One half mile east of Zion-Mount Carmel Highway at Oak Creek, Springdale, Washington County, UT

  14. Virtual Vision

    NASA Astrophysics Data System (ADS)

    Terzopoulos, Demetri; Qureshi, Faisal Z.

    Computer vision and sensor networks researchers are increasingly motivated to investigate complex multi-camera sensing and control issues that arise in the automatic visual surveillance of extensive, highly populated public spaces such as airports and train stations. However, they often encounter serious impediments to deploying and experimenting with large-scale physical camera networks in such real-world environments. We propose an alternative approach called "Virtual Vision", which facilitates this type of research through the virtual reality simulation of populated urban spaces, camera sensor networks, and computer vision on commodity computers. We demonstrate the usefulness of our approach by developing two highly automated surveillance systems comprising passive and active pan/tilt/zoom cameras that are deployed in a virtual train station environment populated by autonomous, lifelike virtual pedestrians. The easily reconfigurable virtual cameras distributed in this environment generate synthetic video feeds that emulate those acquired by real surveillance cameras monitoring public spaces. The novel multi-camera control strategies that we describe enable the cameras to collaborate in persistently observing pedestrians of interest and in acquiring close-up videos of pedestrians in designated areas.

  15. Evaluation of a video image detection system : final report.

    DOT National Transportation Integrated Search

    1994-05-01

    A video image detection system (VIDS) is an advanced wide-area traffic monitoring system : that processes input from a video camera. The Autoscope VIDS coupled with an information : management system was selected as the monitoring device because test...

  16. Performance evaluation of a two detector camera for real-time video.

    PubMed

    Lochocki, Benjamin; Gambín-Regadera, Adrián; Artal, Pablo

    2016-12-20

    Single pixel imaging can be the preferred method over traditional 2D-array imaging in spectral ranges where conventional cameras are not available. However, when it comes to real-time video imaging, single pixel imaging cannot compete with the framerates of conventional cameras, especially when high-resolution images are desired. Here we evaluate the performance of an imaging approach using two detectors simultaneously. First, we present theoretical results on how low SNR affects final image quality, followed by experimentally determined results. The obtained video framerates were doubled compared to state-of-the-art systems, ranging from 22 Hz for a 32×32 resolution to 0.75 Hz for a 128×128 resolution image. Additionally, the two-detector imaging technique enables the acquisition of images with a resolution of 256×256 in less than 3 s.

  17. Cable and Line Inspection Mechanism

    NASA Technical Reports Server (NTRS)

    Ross, Terence J. (Inventor)

    2003-01-01

    An automated cable and line inspection mechanism visually scans the entire surface of a cable as the mechanism travels along the cable's length. The mechanism includes a drive system, a video camera, a mirror assembly for providing the camera with a 360 degree view of the cable, and a laser micrometer for measuring the cable's diameter. The drive system includes an electric motor and a plurality of drive wheels and tension wheels for engaging the cable or line to be inspected, and driving the mechanism along the cable. The mirror assembly includes mirrors that are positioned to project multiple images of the cable on the camera lens, each of which is of a different portion of the cable. A data transceiver and a video transmitter are preferably employed for transmission of video images, data and commands between the mechanism and a remote control station.

  18. Cable and line inspection mechanism

    NASA Technical Reports Server (NTRS)

    Ross, Terence J. (Inventor)

    2003-01-01

    An automated cable and line inspection mechanism visually scans the entire surface of a cable as the mechanism travels along the cable's length. The mechanism includes a drive system, a video camera, a mirror assembly for providing the camera with a 360 degree view of the cable, and a laser micrometer for measuring the cable's diameter. The drive system includes an electric motor and a plurality of drive wheels and tension wheels for engaging the cable or line to be inspected, and driving the mechanism along the cable. The mirror assembly includes mirrors that are positioned to project multiple images of the cable on the camera lens, each of which is of a different portion of the cable. A data transceiver and a video transmitter are preferably employed for transmission of video images, data and commands between the mechanism and a remote control station.

  19. Identifying sports videos using replay, text, and camera motion features

    NASA Astrophysics Data System (ADS)

    Kobla, Vikrant; DeMenthon, Daniel; Doermann, David S.

    1999-12-01

    Automated classification of digital video is emerging as an important piece of the puzzle in the design of content management systems for digital libraries. The ability to classify videos into various classes such as sports, news, movies, or documentaries increases the efficiency of indexing, browsing, and retrieval of video in large databases. In this paper, we discuss the extraction of features that enable identification of sports videos directly from the compressed domain of MPEG video. These features include detecting the presence of action replays, determining the amount of scene text in video, and calculating various statistics on camera and/or object motion. The features are derived from the macroblock, motion, and bit-rate information that is readily accessible from MPEG video with very minimal decoding, leading to substantial gains in processing speed. Full decoding of selected frames is required only for text analysis. A decision tree classifier built using these features is able to identify sports clips with an accuracy of about 93 percent.
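
    The classification step can be sketched with a decision tree over the three named feature groups; the feature values and labels below are invented, and the real system derives them from the MPEG compressed domain as described.

      # Toy decision-tree classifier: sports vs. non-sports clips.
      import numpy as np
      from sklearn.tree import DecisionTreeClassifier

      # columns: replay score, scene-text amount, camera/object-motion statistic
      X = np.array([[0.8, 0.6, 0.9], [0.1, 0.9, 0.2], [0.7, 0.5, 0.8],
                    [0.0, 0.2, 0.1], [0.9, 0.4, 0.7], [0.2, 0.8, 0.3]])
      y = np.array([1, 0, 1, 0, 1, 0])          # 1 = sports, 0 = non-sports

      clf = DecisionTreeClassifier(max_depth=3).fit(X, y)
      print(clf.predict([[0.75, 0.5, 0.85]]))   # -> [1], classified as sports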

  20. Design and evaluation of controls for drift, video gain, and color balance in spaceborne facsimile cameras

    NASA Technical Reports Server (NTRS)

    Katzberg, S. J.; Kelly, W. L., IV; Rowland, C. W.; Burcher, E. E.

    1973-01-01

    The facsimile camera is an optical-mechanical scanning device which has become an attractive candidate as an imaging system for planetary landers and rovers. This paper presents electronic techniques which permit the acquisition and reconstruction of high quality images with this device, even under varying lighting conditions. These techniques include a control for low frequency noise and drift, an automatic gain control, a pulse-duration light modulation scheme, and a relative spectral gain control. Taken together, these techniques allow the reconstruction of radiometrically accurate and properly balanced color images from facsimile camera video data. These techniques have been incorporated into a facsimile camera and reproduction system, and experimental results are presented for each technique and for the complete system.

  1. Low-complexity camera digital signal imaging for video document projection system

    NASA Astrophysics Data System (ADS)

    Hsia, Shih-Chang; Tsai, Po-Shien

    2011-04-01

    We present high-performance and low-complexity algorithms for real-time camera imaging applications. The main functions of the proposed camera digital signal processing (DSP) involve color interpolation, white balance, adaptive binary processing, auto gain control, and edge and color enhancement for video projection systems. A series of simulations demonstrate that the proposed method can achieve good image quality while keeping computation cost and memory requirements low. On the basis of the proposed algorithms, the cost-effective hardware core is developed using Verilog HDL. The prototype chip has been verified with one low-cost programmable device. The real-time camera system can achieve 1270 × 792 resolution with the combination of extra components and can demonstrate each DSP function.
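
    One of the listed DSP stages, white balance, can be illustrated with the classic gray-world algorithm: scale each channel so the frame average becomes neutral. This is a generic textbook method used for illustration, not necessarily the paper's exact algorithm.

      # Gray-world white balance for an RGB frame in [0, 1].
      import numpy as np

      def gray_world_wb(rgb):
          """rgb: HxWx3 float array; returns a white-balanced copy."""
          means = rgb.reshape(-1, 3).mean(axis=0)
          gains = means.mean() / np.maximum(means, 1e-6)   # per-channel gains
          return np.clip(rgb * gains, 0.0, 1.0)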

  2. Object Occlusion Detection Using Automatic Camera Calibration for a Wide-Area Video Surveillance System

    PubMed Central

    Jung, Jaehoon; Yoon, Inhye; Paik, Joonki

    2016-01-01

    This paper presents an object occlusion detection algorithm using object depth information that is estimated by automatic camera calibration. The object occlusion problem is a major factor to degrade the performance of object tracking and recognition. To detect an object occlusion, the proposed algorithm consists of three steps: (i) automatic camera calibration using both moving objects and a background structure; (ii) object depth estimation; and (iii) detection of occluded regions. The proposed algorithm estimates the depth of the object without extra sensors but with a generic red, green and blue (RGB) camera. As a result, the proposed algorithm can be applied to improve the performance of object tracking and object recognition algorithms for video surveillance systems. PMID:27347978

  3. Scene-aware joint global and local homographic video coding

    NASA Astrophysics Data System (ADS)

    Peng, Xiulian; Xu, Jizheng; Sullivan, Gary J.

    2016-09-01

    Perspective motion is commonly represented in video content that is captured and compressed for various applications including cloud gaming, vehicle and aerial monitoring, etc. Existing approaches based on an eight-parameter homography motion model cannot deal with this efficiently, either due to low prediction accuracy or excessive bit rate overhead. In this paper, we consider the camera motion model and scene structure in such video content and propose a joint global and local homography motion coding approach for video with perspective motion. The camera motion is estimated by a computer vision approach, and camera intrinsic and extrinsic parameters are globally coded at the frame level. The scene is modeled as piece-wise planes, and three plane parameters are coded at the block level. Fast gradient-based approaches are employed to search for the plane parameters for each block region. In this way, improved prediction accuracy and low bit costs are achieved. Experimental results based on the HEVC test model show that up to 9.1% bit rate savings can be achieved (with equal PSNR quality) on test video content with perspective motion. Test sequences for the example applications showed a bit rate savings ranging from 3.7 to 9.1%.

  4. Evaluation of lens distortion errors using an underwater camera system for video-based motion analysis

    NASA Technical Reports Server (NTRS)

    Poliner, Jeffrey; Fletcher, Lauren; Klute, Glenn K.

    1994-01-01

    Video-based motion analysis systems are widely employed to study human movement, using computers to capture, store, process, and analyze video data. This data can be collected in any environment where cameras can be located. One of the NASA facilities where human performance research is conducted is the Weightless Environment Training Facility (WETF), a pool of water which simulates zero gravity via neutral buoyancy. Underwater video collection in the WETF poses some unique problems. This project evaluates the error caused by the lens distortion of the WETF cameras. A grid of points of known dimensions was constructed and videotaped using an underwater video vault system. Recorded images were played back on a VCR and a personal computer grabbed and stored the images on disk. These images were then digitized to give calculated coordinates for the grid points. Errors were calculated as the distance from the known coordinates of the points to the calculated coordinates. It was demonstrated that errors from lens distortion could be as high as 8 percent. By avoiding the outermost regions of a wide-angle lens, the error can be kept smaller.

  5. The effects of video compression on acceptability of images for monitoring life sciences' experiments

    NASA Technical Reports Server (NTRS)

    Haines, Richard F.; Chuang, Sherry L.

    1993-01-01

    Current plans indicate that there will be a large number of life science experiments carried out during the thirty-year-long mission of the Biological Flight Research Laboratory (BFRL) on board Space Station Freedom (SSF). Non-human life science experiments will be performed in the BFRL. Two distinct types of activities have already been identified for this facility: (1) collect, store, distribute, analyze, and manage engineering and science data from the Habitats, Glovebox, and Centrifuge; and (2) perform a broad range of remote science activities in the Glovebox and Habitat chambers in conjunction with the remotely located principal investigator (PI). These activities require extensive video coverage, viewing and/or recording, and distribution to video displays on board SSF and to the ground. This paper concentrates mainly on the second type of activity. Each of the two BFRL habitat racks is designed to be configurable for either six rodent habitats per rack, four plant habitats per rack, or a combination of the above. Two video cameras will be installed in each habitat with a spare attachment for a third camera when needed. Therefore, a video system that can accommodate up to 12-18 camera inputs per habitat rack must be considered.

  6. Complete supine percutaneous nephrolithotomy with GoPro®. Ten steps for success.

    PubMed

    Vicentini, Fabio Carvalho; Dos Santos, Hugo Daniel Barone; Batagello, Carlos Alfredo; Amundson, Julia Rothe; Oliveira, Evaristo Peixoto; Marchini, Giovanni Scala; Srougi, Miguel; Nahas, Willian Carlos; Mazzucchi, Eduardo

    2018-03-15

    To show a video of a complete supine percutaneous nephrolithotomy (csPCNL) performed for the treatment of a staghorn calculus, from the surgeon's point of view. The procedure was recorded with a GoPro® camera, demonstrating the ten essential steps for a successful procedure. The patient was a 38-year-old woman with a 2.4 cm lower-pole stone burden in the left kidney who presented with 3 months of lumbar pain and recurrent urinary tract infections. She had a previous diagnosis of polycystic kidney disease and chronic renal failure stage 2. A CT scan showed two 1.2 cm stones in the lower pole (Guy's Stone Score 2). She had a previous ipsilateral double-J insertion due to an obstructive pyelonephritis. The csPCNL was uneventful, with a single access in the lower pole. The surgeon wore a Full HD GoPro Hero 4 Session® camera mounted on his head, controlled by the surgical team with a remote control. All of the main steps were recorded. Informed consent was obtained prior to the procedure. The surgical time was 90 minutes. The hemoglobin drop was 0.5 g/dL. A post-operative CT scan showed a stone-free result. The patient was discharged 36 hours after surgery. The camera worked properly and did not cause pain or muscle discomfort to the surgeon. The quality of the recorded movie was excellent. The GoPro® camera proved to be a very interesting tool for documenting surgeries without interfering with the procedure, and it has great educational potential. More studies should be conducted to evaluate the role of this equipment. Copyright® by the International Brazilian Journal of Urology.

  7. Photodiode Camera Measurement of Surface Strains on Tendons during Multiple Cyclic Tests

    NASA Astrophysics Data System (ADS)

    Chun, Keyoung Jin; Hubbard, Robert Philip

    The objectives of this study are to introduce the use of a photodiode camera for measuring surface strain on soft tissue and to present some representative responses of the tendon. Tendon specimens were obtained from the hindlimbs of canines and frozen to -70°C. After thawing, specimens were mounted in an immersion bath at room temperature (22°C), preloaded to 0.13 N, and then subjected to extensions of 3% of the initial length at a strain rate of 2%/sec. In tendons tested in two blocks of seven repeated extensions to 3% strain with a 120-second rest period between blocks, the surface strains measured with the photodiode camera near the gripped ends were generally greater than the surface strains in the middle segment of the tendon specimens. The recovery of peak load after the rest period was consistent, but the changes in patterns of surface strains after the rest period were not consistent. The advantages of photodiode measurement of surface strains include the following: 1) it is a noncontacting method, which eliminates errors and distortions caused by clip gauges or mechanical/electronic transducers; 2) it is more accurate than previous noncontact methods, e.g. the VDA and the high-speed photographic method; 3) it is fully automatic, reducing the labor of replaying video tapes or films and the potential errors of human judgement that can occur when digitizing data from photographs. The photodiode camera employs a solid-state photodiode array to sense black-and-white images of scan targets (black) on the surface of the tendon specimen against a back-lighting system (white), and automatically stores image data for surface strains of the tendon specimen on the computer during cyclic extensions.

  8. SUBSA and PFMI Transparent Furnace Systems Currently in use in the International Space Station Microgravity Science Glovebox

    NASA Technical Reports Server (NTRS)

    Spivey, Reggie A.; Gilley, Scott; Ostrogorsky, Aleksander; Grugel, Richard; Smith, Guy; Luz, Paul

    2003-01-01

    The Solidification Using a Baffle in Sealed Ampoules (SUBSA) and Pore Formation and Mobility Investigation (PFMI) furnaces were developed for operation in the International Space Station (ISS) Microgravity Science Glovebox (MSG). Both furnaces were launched to the ISS on STS-111, June 4, 2002, and are currently in use on orbit. The SUBSA furnace provides a maximum temperature of 850 °C and can accommodate a metal sample as large as 30 cm long and 12 mm in diameter. SUBSA utilizes a gradient freeze process with a minimum cooldown rate of 0.5 °C per minute and a stability of ±0.15 °C. An 8 cm long transparent gradient zone coupled with a Cohu 3812 camera and quartz ampoule allows for observation and video recording of the solidification process. PFMI is a Bridgman-type furnace that operates at a maximum temperature of 130 °C and can accommodate a sample 23 cm long and 10 mm in diameter. Two Cohu 3812 cameras mounted 90 deg apart move on a separate translation system, which allows for viewing of the sample in the transparent hot zone and gradient zone independent of the furnace translation rate and direction. Translation rates for both the cameras and the furnace can be specified from 0.5 µm/sec to 100 µm/sec with a stability of ±5%. The two furnaces share a Process Control Module (PCM) which controls the furnace hardware, a Data Acquisition Pad (DaqPad) which provides signal conditioning of thermocouple data, and two Cohu 3812 cameras. The hardware and software allow for real-time monitoring and commanding of critical process control parameters. This paper will provide a detailed explanation of the SUBSA and PFMI systems along with performance data and some preliminary results from completed on-orbit processing runs.

  9. Crew Activity Analyzer

    NASA Technical Reports Server (NTRS)

    Murray, James; Kirillov, Alexander

    2008-01-01

    The crew activity analyzer (CAA) is a system of electronic hardware and software for automatically identifying patterns of group activity among crew members working together in an office, cockpit, workshop, laboratory, or other enclosed space. The CAA synchronously records multiple streams of data from digital video cameras, wireless microphones, and position sensors, then plays back and processes the data to identify activity patterns specified by human analysts. The processing greatly reduces the amount of time that the analysts must spend in examining large amounts of data, enabling the analysts to concentrate on subsets of data that represent activities of interest. The CAA has potential for use in a variety of governmental and commercial applications, including planning of crews for future long space flights, designing facilities wherein humans must work in proximity for long times, improving crew training and measuring crew performance in military settings, human-factors and safety assessment, development of team procedures, and behavioral and ethnographic research. The data-acquisition hardware of the CAA (see figure) includes two video cameras: an overhead one aimed upward at a paraboloidal mirror on the ceiling and one mounted on a wall aimed in a downward slant toward the crew area. As many as four wireless microphones can be worn by crew members. The audio signals received from the microphones are digitized, then compressed in preparation for storage. Approximate locations of as many as four crew members are measured by use of a Cricket indoor location system. [The Cricket indoor location system includes ultrasonic/radio beacon and listener units. A Cricket beacon (in this case, worn by a crew member) simultaneously transmits a pulse of ultrasound and a radio signal that contains identifying information. Each Cricket listener unit measures the difference between the times of reception of the ultrasound and radio signals from an identified beacon. Assuming essentially instantaneous propagation of the radio signal, the distance between that beacon and the listener unit is estimated from this time difference and the speed of sound in air.] In this system, six Cricket listener units are mounted in various positions on the ceiling, and as many as four Cricket beacons are attached to crew members. The three-dimensional position of each Cricket beacon can be estimated from the time-difference readings of that beacon obtained by at least three Cricket listener units.
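
    The position estimate described in brackets is ordinary time-difference ranging plus trilateration: each listener converts the ultrasound-versus-radio arrival gap into a distance, and three or more such ranges constrain the beacon position (the sketch uses four to obtain a unique linear least-squares solution). Below is a minimal Python version of the technique, not the CAA's actual code; the listener coordinates and beacon position are made up for the example.

    ```python
    import numpy as np

    SPEED_OF_SOUND = 343.0  # m/s in air at ~20 C

    def range_from_time_gap(dt_sec):
        """Distance to the beacon from one listener's ultrasound-minus-radio
        arrival-time difference (radio propagation treated as instantaneous)."""
        return SPEED_OF_SOUND * dt_sec

    def trilaterate(listeners, ranges):
        """Least-squares beacon position from four or more listeners.

        Subtracting the first range equation from the rest linearizes
        ||x - p_i||^2 = r_i^2 into A x = b.
        """
        p0, r0 = listeners[0], ranges[0]
        A = 2.0 * (listeners[1:] - p0)
        b = (r0**2 - ranges[1:]**2
             + np.sum(listeners[1:]**2, axis=1) - np.sum(p0**2))
        pos, *_ = np.linalg.lstsq(A, b, rcond=None)
        return pos

    # Ceiling-mounted listeners (x, y, z in metres); heights vary slightly,
    # since exactly coplanar listeners make the linearized solve degenerate in z.
    listeners = np.array([[0.0, 0.0, 3.0], [4.0, 0.0, 2.8],
                          [0.0, 4.0, 3.2], [4.0, 3.0, 2.6]])
    beacon = np.array([2.0, 1.0, 1.0])  # true position, for the demo
    dts = np.linalg.norm(listeners - beacon, axis=1) / SPEED_OF_SOUND
    print(trilaterate(listeners, range_from_time_gap(dts)))  # ~[2. 1. 1.]
    ```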

  10. Development of a camera casing suited for cryogenic and vacuum applications

    NASA Astrophysics Data System (ADS)

    Delaquis, S. C.; Gornea, R.; Janos, S.; Lüthi, M.; von Rohr, Ch Rudolf; Schenk, M.; Vuilleumier, J.-L.

    2013-12-01

    We report on the design, construction, and operation of a PID temperature-controlled and vacuum-tight camera casing. The camera casing contains a commercial digital camera and a lighting system. The design of the camera casing and its components is discussed in detail. Pictures taken by this cryo-camera while immersed in argon vapour and liquid nitrogen are presented. The cryo-camera can provide a live view inside cryogenic set-ups and allows video to be recorded.
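
    The temperature regulation named in the abstract is a standard PID loop: a controller drives a heater inside the casing to hold the camera at a workable temperature while the outside sits in cryogenic fluid. The Python sketch below shows the textbook discrete PID update; the gains, setpoint, and heater interface are illustrative assumptions, not values from the paper.

    ```python
    class PID:
        """Discrete PID controller: u = Kp*e + Ki*integral(e) + Kd*de/dt."""

        def __init__(self, kp, ki, kd, setpoint, out_min=0.0, out_max=1.0):
            self.kp, self.ki, self.kd = kp, ki, kd
            self.setpoint = setpoint
            self.out_min, self.out_max = out_min, out_max
            self._integral = 0.0
            self._prev_error = None

        def update(self, measured, dt):
            error = self.setpoint - measured
            self._integral += error * dt
            deriv = 0.0 if self._prev_error is None else (error - self._prev_error) / dt
            self._prev_error = error
            u = self.kp * error + self.ki * self._integral + self.kd * deriv
            # Clamp to the heater's duty-cycle range.
            return min(max(u, self.out_min), self.out_max)

    # Hold the casing interior at 20 C; gains here are placeholders.
    pid = PID(kp=0.08, ki=0.002, kd=0.5, setpoint=20.0)
    duty = pid.update(measured=12.3, dt=1.0)  # heater duty cycle in [0, 1]
    ```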

  11. Calibration Procedures in Mid Format Camera Setups

    NASA Astrophysics Data System (ADS)

    Pivnicka, F.; Kemper, G.; Geissler, S.

    2012-07-01

    A growing number of mid-format cameras are used for aerial surveying projects. To achieve a reliable and geometrically precise result in the photogrammetric workflow, awareness of the sensitive parts of the setup is important. The use of direct referencing systems (GPS/IMU), the mounting on a stabilizing camera platform, and the specific characteristics of the mid-format camera make a professional setup with various calibration and misalignment operations necessary. An important part is a proper camera calibration. Aerial images over a well-designed test field with 3D structures and/or different flight altitudes enable the determination of calibration values in the Bingo software; it will be demonstrated how such a calibration can be performed. The direct referencing device must be mounted to the camera in a solid and reliable way. Besides the mechanical work, especially in mounting the camera next to the IMU, two lever arms have to be measured with millimetre accuracy: the lever arm from the GPS antenna to the IMU's calibrated centre, and the lever arm from the IMU centre to the camera projection centre. The measurement itself, with a total station, is not a difficult task, but the definition of the correct centres and the need to use rotation matrices can cause serious accuracy problems. The benefit of small- and medium-format cameras is that smaller aircraft can be used, and for these a gyro-based stabilized platform is recommended. As a consequence, the IMU must be mounted next to the camera on the stabilizer. The advantage is that the IMU can be used to control the platform; the complication is that the IMU-to-GPS-antenna lever arm is then floating, so an additional data stream, the movement values of the stabilizer, is needed to correct the floating lever-arm distances. If post-processing of the GPS/IMU data, taking the floating lever arms into account, delivers the expected result, the lever arms between IMU and camera can be applied. There remains, however, a misalignment (boresight angle) that must be evaluated in a photogrammetric process using advanced tools, e.g. in Bingo. Once all these parameters have been determined, the system is capable of handling projects without, or with only a few, ground control points. But what effect does directly applying the achieved direct-orientation values have on the photogrammetric process, compared with an aerial triangulation (AT) based on proper tie-point matching? The paper aims to show the steps to be taken by potential users and gives a quality estimate of the importance and influence of the various calibration and adjustment steps.
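
    The lever-arm bookkeeping the abstract warns about reduces to one vector equation: the camera projection centre in the mapping frame is the GNSS antenna position plus the body-frame lever arms rotated by the platform attitude. A minimal Python sketch is below; the rotation convention (yaw-pitch-roll to a local-level frame) and all numeric values are assumptions for illustration.

    ```python
    import numpy as np

    def rotation_body_to_nav(roll, pitch, yaw):
        """Body-to-navigation-frame rotation from roll/pitch/yaw (radians),
        using the common aerospace z-y-x (yaw-pitch-roll) convention."""
        cr, sr = np.cos(roll), np.sin(roll)
        cp, sp = np.cos(pitch), np.sin(pitch)
        cy, sy = np.cos(yaw), np.sin(yaw)
        Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
        Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
        Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
        return Rz @ Ry @ Rx

    def camera_centre(antenna_nav, attitude, ant_to_imu_body, imu_to_cam_body):
        """Chain the two measured lever arms through the IMU attitude to move
        from the GNSS antenna position to the camera projection centre."""
        R = rotation_body_to_nav(*attitude)
        return antenna_nav + R @ (ant_to_imu_body + imu_to_cam_body)

    # Illustrative numbers only (metres / radians):
    antenna = np.array([350000.0, 5710000.0, 1200.0])   # e.g. UTM E, N, h
    attitude = (0.02, -0.01, 1.57)                      # roll, pitch, yaw
    print(camera_centre(antenna, attitude,
                        ant_to_imu_body=np.array([0.0, -0.45, -1.10]),
                        imu_to_cam_body=np.array([0.12, 0.0, -0.25])))
    ```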

  12. Synchronizing A Stroboscope With A Video Camera

    NASA Technical Reports Server (NTRS)

    Rhodes, David B.; Franke, John M.; Jones, Stephen B.; Dismond, Harriet R.

    1993-01-01

    Circuit synchronizes flash of light from stroboscope with frame and field periods of video camera. Sync stripper sends vertical-synchronization signal to delay generator, which generates trigger signal. Flashlamp power supply accepts delayed trigger signal and sends pulse of power to flash lamp. Designed for use in making short-exposure images that "freeze" flow in wind tunnel. Also used for making longer-exposure images obtained by use of continuous intense illumination.
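
    In timing terms, the circuit described here is a programmable delay referenced to each vertical-sync pulse: the flash must land at a chosen point within the field. A toy Python calculation of that delay follows; the NTSC-rate field frequency and the mid-field placement are assumptions for illustration, not values from the brief.

    ```python
    def flash_delay_after_vsync(field_rate_hz=59.94, fraction_of_field=0.5):
        """Delay (seconds) from the stripped vertical-sync pulse to the
        flash trigger, placing the flash a given fraction into the field."""
        field_period = 1.0 / field_rate_hz
        return fraction_of_field * field_period

    # Mid-field flash for NTSC-rate video: ~8.34 ms after vertical sync.
    print(flash_delay_after_vsync())
    ```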

  13. Single-Fiber Optical Link For Video And Control

    NASA Technical Reports Server (NTRS)

    Galloway, F. Houston

    1993-01-01

    Single optical fiber carries control signals to remote television cameras and video signals from cameras. Fiber replaces multiconductor copper cable, with consequent reduction in size. Repeaters not needed. System works with either multimode or single-mode fiber types. Nonmetallic fiber provides immunity to electromagnetic interference at suboptical frequencies and is much less vulnerable to electronic eavesdropping and lightning strikes. Multigigahertz bandwidth more than adequate for high-resolution television signals.

  14. Beach Observations using Quadcopter Imagery

    NASA Astrophysics Data System (ADS)

    Yang, Yi-Chung; Wang, Hsing-Yu; Fang, Hui-Ming; Hsiao, Sung-Shan; Tsai, Cheng-Han

    2017-04-01

    Beaches are places where the interaction of land and sea takes place, and they are under the influence of many environmental factors, both meteorological and oceanic. Understanding the evolution or changes of beaches may require constant monitoring. One way to monitor beach changes is to use optical cameras. With careful placement of ground control points, land-based optical cameras, which are inexpensive compared to other remote sensing apparatuses, can be used to survey a relatively large area in a short time. For example, we have used terrestrial optical cameras together with ground control points to monitor beaches. The images from the cameras were calibrated by applying the direct linear transformation, a projective transformation, and the Sobel edge detector to locate the shoreline. The terrestrial optical cameras can record beach images continuously, and the shorelines can be satisfactorily identified. However, the terrestrial cameras have some limitations. First, the camera system must be set at a sufficiently high elevation to cover the whole area of interest, and such a location may not be available. The second limitation is that objects in the image have different resolutions, depending on their distance from the cameras. To overcome these limitations, the present study tested a quadcopter equipped with a down-looking camera to record video and still images of a beach. The quadcopter can be controlled to hover at one location; however, the hovering is affected by the wind, since the quadcopter is not positively anchored to a structure. Although the quadcopter has a gimbal mechanism to damp out small shakes of the copter, it cannot completely counter movements due to the wind. In our preliminary tests, we flew the quadcopter up to 500 m high to record 10-minute videos. We then took a 10-minute average of the video data. The averaged image of the coast was blurred because of the duration of the video and the small movements caused by the quadcopter returning to its original position against the wind. To solve this problem, the Speeded Up Robust Features (SURF) feature detection method was used to register the video frames, and the resulting image was much sharper than the original. Next, we extracted the maximum and the minimum RGB values of each pixel over the 10-minute videos. The beach breaker zone showed up in the maximum-RGB image as white areas. Moreover, we were also able to remove the breakers from the images and see the bottom features of the breaker zone using the minimum RGB values of the images. From this test, we also identified the location of the coastline. The correlation coefficient between the coastline identified from the copter images and that from the ground survey was as high as 0.98. By repeating the copter flights at different times, we could measure the evolution of the coastline.
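
    The per-pixel extrema described here are easy to reproduce: after the frames are registered, a running maximum highlights everything that is ever bright (foam in the breaker zone), while a running minimum keeps only what is dark in every frame (the water surface with breakers suppressed). Below is a minimal OpenCV/NumPy sketch of that step in Python; the file names are placeholders, and frame registration (e.g. the SURF-based alignment mentioned above) is assumed to have been done already.

    ```python
    import cv2
    import numpy as np

    def pixel_extrema(video_path):
        """Per-pixel maximum and minimum RGB over all frames of a video.

        Assumes the frames are already co-registered; otherwise camera
        drift smears the extrema just as it smears a plain average.
        """
        cap = cv2.VideoCapture(video_path)
        ok, frame = cap.read()
        if not ok:
            raise IOError(f"cannot read {video_path}")
        max_img = frame.copy()
        min_img = frame.copy()
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            np.maximum(max_img, frame, out=max_img)  # breaker foam accumulates
            np.minimum(min_img, frame, out=min_img)  # breakers suppressed
        cap.release()
        return max_img, min_img

    # max_img shows the breaker zone as white; min_img reveals the bottom.
    max_img, min_img = pixel_extrema("beach_10min.mp4")
    cv2.imwrite("max_rgb.png", max_img)
    cv2.imwrite("min_rgb.png", min_img)
    ```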

  15. Utilizing ISS Camera Systems for Scientific Analysis of Lightning Characteristics and comparison with ISS-LIS and GLM

    NASA Astrophysics Data System (ADS)

    Schultz, C. J.; Lang, T. J.; Leake, S.; Runco, M.; Blakeslee, R. J.

    2017-12-01

    Video and still-frame images from cameras aboard the International Space Station (ISS) are used to inspire, educate, and provide a unique vantage point from low-Earth orbit that is second to none; however, these cameras have overlooked capabilities for contributing to scientific analysis of the Earth and near-space environment. The goal of this project is to study how georeferenced video/images from available ISS camera systems can be useful for scientific analysis, using lightning properties as a demonstration. Camera images from the crew cameras and high-definition video from the Chiba University Meteor Camera were combined with lightning data from the National Lightning Detection Network (NLDN), the ISS Lightning Imaging Sensor (ISS-LIS), the Geostationary Lightning Mapper (GLM), and lightning mapping arrays. These cameras provide significant spatial resolution advantages (roughly 10 times or better) over ISS-LIS and GLM, but with lower temporal resolution; they can therefore serve as a complementary analysis tool for studying lightning and thunderstorm processes from space. Lightning sensor data, Visible Infrared Imaging Radiometer Suite (VIIRS) derived city-light maps, and other geographic databases were combined with the ISS attitude and position data to reverse-geolocate each image or frame. An open-source Python toolkit has been developed to assist with this effort. Next, the locations and sizes of all flashes in each frame or image were computed and compared with flash characteristics from all available lightning datasets. This allowed for characterization of cloud features below the 4-km and 8-km resolutions of ISS-LIS and GLM, features which may reduce the light that reaches the ISS-LIS or GLM sensor. In the case of video, consecutive frames were overlaid to determine the rate of change of the light escaping cloud top. The characterization of the rate of change in the geometry, more generally the radius, of light escaping cloud top was integrated with the NLDN, ISS-LIS, and GLM data to understand how the peak rate of change and the peak area of each flash aligned with each lightning system in time. Flash features such as leaders could be inferred from the video frames as well. Testing is being done to see whether leader speeds can be accurately calculated under certain circumstances.
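
    The frame-by-frame flash geometry described above boils down to thresholding the luminance, measuring the bright area per frame, and differencing consecutive frames. A minimal Python sketch of that measurement follows; the threshold, frame rate, ground sampling, and equivalent-radius definition are illustrative assumptions rather than the study's actual toolkit.

    ```python
    import numpy as np

    def flash_area_series(frames, threshold=200):
        """Bright-pixel count per frame for a stack of grayscale frames
        (N x H x W, uint8). A fixed luminance threshold isolates the
        light escaping cloud top."""
        return np.array([(f >= threshold).sum() for f in frames])

    def radius_rate_of_change(areas_px, m_per_px, frame_rate_hz):
        """Equivalent-circle radius per frame and its frame-to-frame rate.

        radius = sqrt(area / pi); the rate is in metres per second given
        the ground sample distance (m_per_px) and the camera frame rate."""
        radii_m = np.sqrt(areas_px / np.pi) * m_per_px
        return radii_m, np.diff(radii_m) * frame_rate_hz

    # Example with synthetic frames (30 fps video, ~500 m ground sampling):
    frames = (np.random.rand(5, 480, 640) * 255).astype(np.uint8)
    areas = flash_area_series(frames)
    radii, rate = radius_rate_of_change(areas, m_per_px=500.0, frame_rate_hz=30.0)
    ```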

  16. What Counts as Educational Video?: Working toward Best Practice Alignment between Video Production Approaches and Outcomes

    ERIC Educational Resources Information Center

    Winslett, Greg

    2014-01-01

    The twenty years since the first digital video camera was made commercially available have seen significant increases in the use of low-cost, amateur video productions for teaching and learning. In the same period, production and consumption of professionally produced video have also increased, as have the distribution platforms used to access it.…

  17. Ready for Their Close-Ups

    ERIC Educational Resources Information Center

    Foster, Andrea L.

    2006-01-01

    American college students are increasingly posting videos of their lives online, owing to Web sites like Vimeo and Google Video that host video material free of charge and to the ubiquity of camera phones and other devices that can take video clips. However, the growing popularity of online socializing has many safety experts worried that students could be…

  18. Mobile Panoramic Video Applications for Learning

    ERIC Educational Resources Information Center

    Multisilta, Jari

    2014-01-01

    The use of videos on the internet has grown significantly in the last few years. For example, Khan Academy has a large collection of educational videos, especially on STEM subjects, available for free on the internet. Professional panoramic video cameras are expensive and usually not easy to carry because of the large size of the equipment.…

  19. Teacher Self-Captured Video: Learning to See

    ERIC Educational Resources Information Center

    Sherin, Miriam Gamoran; Dyer, Elizabeth B.

    2017-01-01

    Videos are often used for demonstration and evaluation, but a more productive approach would be using video to support teachers' ability to notice and interpret classroom interactions. That requires thinking carefully about the physical aspects of shooting video--where the camera is placed and how easily student interactions can be heard--as well…

  20. Mobile Video in Everyday Social Interactions

    NASA Astrophysics Data System (ADS)

    Reponen, Erika; Lehikoinen, Jaakko; Impiö, Jussi

    Video recording has become a spontaneous everyday activity for many people, thanks to the video capabilities of modern mobile phones. The internet connectivity of mobile phones enables fluent sharing of captured material, even in real time, which makes video an up-and-coming everyday interaction medium. In this article we discuss the effect of the video camera on the social environment in everyday life situations, mainly based on a study in which four groups of people used digital video cameras in their normal settings. We also reflect on another study of ours on real-time mobile video communication and discuss future views. The aim of our research is to understand the possibilities of the mobile video domain. Live and delayed sharing seem to have their own special characteristics: live video is used as a virtual window between places, whereas delayed video usage has more scope for good-quality content. While this novel way of interacting via mobile video enables new social patterns, it also raises new concerns about privacy and trust between the participating persons in all roles, largely because of the wide reach of the videos. Video in a social situation affects the cameramen (who record), the targets (who are recorded), passers-by (who are unintentionally in the situation), and the audience (who follow the videos or the recording situations), but the participants also affect the video in turn, through their varying and evolving personal and communicational motivations for recording.
