Sample records for remote video camera

  1. Feasibility study of transmission of OTV camera control information in the video vertical blanking interval

    NASA Technical Reports Server (NTRS)

    White, Preston A., III

    1994-01-01

    The Operational Television system at Kennedy Space Center operates hundreds of video cameras, many remotely controllable, in support of the operations at the center. This study was undertaken to determine if commercial NABTS (North American Basic Teletext System) teletext transmission in the vertical blanking interval of the genlock signals distributed to the cameras could be used to send remote control commands to the cameras and the associated pan and tilt platforms. Wavelength division multiplexed fiberoptic links are being installed in the OTV system to obtain RS-250 short-haul quality. It was demonstrated that the NABTS transmission could be sent over the fiberoptic cable plant without excessive video quality degradation and that video cameras could be controlled using NABTS transmissions over multimode fiberoptic paths as long as 1.2 km.
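
    The command path evaluated here lends itself to a short illustration. The sketch below packs a pan/tilt command into a teletext-style data packet for one VBI scan line; the field layout, sync values, and checksum are simplified assumptions for illustration only, not the actual NABTS (EIA-516) wire format.

```python
# Hypothetical sketch: packing a camera control command into a
# teletext-style VBI data packet. The layout below is a simplified
# stand-in, NOT the actual NABTS (EIA-516) format.
import struct

CLOCK_RUN_IN = b"\x55\x55"  # bit-synchronization pattern
FRAMING_CODE = b"\xe7"      # byte-synchronization marker (assumed value)

def make_command_packet(camera_id: int, pan_deg: float, tilt_deg: float) -> bytes:
    # Angles encoded as signed tenths of a degree in 16-bit fields.
    payload = struct.pack(">Bhh", camera_id, int(pan_deg * 10), int(tilt_deg * 10))
    checksum = sum(payload) & 0xFF  # simple 8-bit checksum, stand-in for real FEC
    return CLOCK_RUN_IN + FRAMING_CODE + payload + bytes([checksum])

packet = make_command_packet(camera_id=12, pan_deg=-45.0, tilt_deg=10.5)
print(packet.hex())  # bytes that would be modulated onto one VBI line
```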

  2. Using remote underwater video to estimate freshwater fish species richness.

    PubMed

    Ebner, B C; Morgan, D L

    2013-05-01

    Species richness records from replicated deployments of baited remote underwater video stations (BRUVS) and unbaited remote underwater video stations (UBRUVS) in shallow (<1 m) and deep (>1 m) water were compared with those obtained from using fyke nets, gillnets and beach seines. Maximum species richness (14 species) was achieved through a combination of conventional netting and camera-based techniques. Chanos chanos was the only species not recorded on camera, whereas Lutjanus argentimaculatus, Selenotoca multifasciata and Gerres filamentosus were recorded on camera in all three waterholes but were not detected by netting. BRUVS and UBRUVS provided versatile techniques that were effective at a range of depths and microhabitats. It is concluded that cameras warrant application in aquatic areas of high conservation value with high visibility. Non-extractive video methods are particularly desirable where threatened species are a focus of monitoring or might be encountered as by-catch in net meshes. © 2013 The Authors. Journal of Fish Biology © 2013 The Fisheries Society of the British Isles.

  3. Video monitoring system for car seat

    NASA Technical Reports Server (NTRS)

    Elrod, Susan Vinz (Inventor); Dabney, Richard W. (Inventor)

    2004-01-01

    A video monitoring system for use with a child car seat has video camera(s) mounted in the car seat. The video images are wirelessly transmitted to a remote receiver/display encased in a portable housing that can be removably mounted in the vehicle in which the car seat is installed.

  4. Composite video and graphics display for multiple camera viewing system in robotics and teleoperation

    NASA Technical Reports Server (NTRS)

    Diner, Daniel B. (Inventor); Venema, Steven C. (Inventor)

    1991-01-01

    A system for real-time video image display for robotics or remote-vehicle teleoperation is described that has at least one robot arm or remotely operated vehicle controlled by an operator through hand-controllers, and one or more television cameras and an optional lighting element. The system has at least one television monitor for display of a television image from a selected camera and the ability to select one of the cameras for image display. Graphics are generated with icons of cameras and lighting elements for display surrounding the television image to provide the operator information on: the location and orientation of each camera and lighting element; the region of illumination of each lighting element; the viewed region and range of focus of each camera; which camera is currently selected for image display for each monitor; and when the controller coordinates for said robot arms or remotely operated vehicles have been transformed to correspond to coordinates of a selected or nonselected camera.

  5. Composite video and graphics display for camera viewing systems in robotics and teleoperation

    NASA Technical Reports Server (NTRS)

    Diner, Daniel B. (Inventor); Venema, Steven C. (Inventor)

    1993-01-01

    A system for real-time video image display for robotics or remote-vehicle teleoperation is described that has at least one robot arm or remotely operated vehicle controlled by an operator through hand-controllers, and one or more television cameras and an optional lighting element. The system has at least one television monitor for display of a television image from a selected camera and the ability to select one of the cameras for image display. Graphics are generated with icons of cameras and lighting elements for display surrounding the television image to provide the operator information on: the location and orientation of each camera and lighting element; the region of illumination of each lighting element; the viewed region and range of focus of each camera; which camera is currently selected for image display for each monitor; and when the controller coordinates for said robot arms or remotely operated vehicles have been transformed to correspond to coordinates of a selected or nonselected camera.

  6. Airport Remote Tower Sensor Systems

    NASA Technical Reports Server (NTRS)

    Maluf, David A.; Gawdiak, Yuri; Leidich, Christopher; Papasin, Richard; Tran, Peter B.; Bass, Kevin

    2006-01-01

    Networks of video cameras, meteorological sensors, and ancillary electronic equipment are under development in collaboration among NASA Ames Research Center, the Federal Aviation Administration (FAA), and the National Oceanic and Atmospheric Administration (NOAA). These networks are to be established at and near airports to provide real-time information on local weather conditions that affect aircraft approaches and landings. The prototype network is an airport-approach-zone camera system (AAZCS), which has been deployed at San Francisco International Airport (SFO) and San Carlos Airport (SQL). The AAZCS includes remotely controlled color video cameras located on top of the SFO and SQL air-traffic control towers. The cameras are controlled by the NOAA Center Weather Service Unit located at the Oakland Air Route Traffic Control Center and are accessible via a secure Web site. The AAZCS cameras can be zoomed, panned, and tilted to cover a field of view 220° wide. The NOAA observer can see the sky condition as it is changing, thereby making possible a real-time evaluation of the conditions along the approach zones of SFO and SQL. The next-generation network, denoted a remote tower sensor system (RTSS), will soon be deployed at the Half Moon Bay Airport and a version of it will eventually be deployed at Los Angeles International Airport. In addition to remote control of video cameras via secure Web links, the RTSS offers real-time weather observations, remote sensing, portability, and a capability for deployment at remote and uninhabited sites. The RTSS can be used at airports that lack control towers, as well as at major airport hubs, to provide synthetic augmentation of vision for both local and remote operations under what would otherwise be conditions of low or even zero visibility.

  7. Ground-based remote sensing with long lens video camera for upper-stem diameter and other tree crown measurements

    Treesearch

    Neil A. Clark; Sang-Mook Lee

    2004-01-01

    This paper demonstrates how a digital video camera with a long lens can be used with pulse laser ranging in order to collect very large-scale tree crown measurements. The long focal length of the camera lens provides the magnification required for precise viewing of distant points with the trade-off of spatial coverage. Multiple video frames are mosaicked into a single...

  8. An evaluation of video cameras for collecting observational data on sanctuary-housed chimpanzees (Pan troglodytes).

    PubMed

    Hansen, Bethany K; Fultz, Amy L; Hopper, Lydia M; Ross, Stephen R

    2018-05-01

    Video cameras are increasingly being used to monitor captive animals in zoo, laboratory, and agricultural settings. This technology may also be useful in sanctuaries with large and/or complex enclosures. However, the cost of camera equipment and a lack of formal evaluations regarding the use of cameras in sanctuary settings make it challenging for facilities to decide whether and how to implement this technology. To address this, we evaluated the feasibility of using a video camera system to monitor chimpanzees at Chimp Haven. We viewed a group of resident chimpanzees in a large forested enclosure and compared observations collected in person and with remote video cameras. We found that via camera, the observer viewed fewer chimpanzees in some outdoor locations (GLMM post hoc test: est. = 1.4503, SE = 0.1457, Z = 9.951, p < 0.001) and identified a lower proportion of chimpanzees (GLMM post hoc test: est. = -2.17914, SE = 0.08490, Z = -25.666, p < 0.001) compared to in-person observations. However, the observer could view the 2 ha enclosure 15 times faster by camera compared to in person. In addition to these results, we provide recommendations to animal facilities considering the installation of a video camera system. Despite some limitations of remote monitoring, we posit that there are substantial benefits of using camera systems in sanctuaries to facilitate animal care and observational research. © 2018 Wiley Periodicals, Inc.

  9. Video Capture of Plastic Surgery Procedures Using the GoPro HERO 3+.

    PubMed

    Graves, Steven Nicholas; Shenaq, Deana Saleh; Langerman, Alexander J; Song, David H

    2015-02-01

    Significant improvements can be made in recording surgical procedures, particularly in capturing high-quality video recordings from the surgeons' point of view. This study examined the utility of the GoPro HERO 3+ Black Edition camera for high-definition, point-of-view recordings of plastic and reconstructive surgery. The GoPro HERO 3+ Black Edition camera was head-mounted on the surgeon and oriented to the surgeon's perspective using the GoPro App. The camera was used to record 4 cases: 2 fat graft procedures and 2 breast reconstructions. During cases 1-3, an assistant remotely controlled the GoPro via the GoPro App. For case 4 the GoPro was linked to a WiFi remote, and controlled by the surgeon. Camera settings for case 1 were as follows: 1080p video resolution; 48 fps; Protune mode on; wide field of view; 16:9 aspect ratio. The lighting contrast due to the overhead lights resulted in limited washout of the video image. Camera settings were adjusted for cases 2-4 to a narrow field of view, which enabled the camera's automatic white balance to better compensate for bright lights focused on the surgical field. Cases 2-4 captured video sufficient for teaching or presentation purposes. The GoPro HERO 3+ Black Edition camera enables high-quality, cost-effective video recording of plastic and reconstructive surgery procedures. When set to a narrow field of view and automatic white balance, the camera is able to sufficiently compensate for the contrasting light environment of the operating room and capture high-resolution, detailed video.

  10. Study of design and control of remote manipulators. Part 4: Experiments in video camera positioning with regard to remote manipulation

    NASA Technical Reports Server (NTRS)

    Mackro, J.

    1973-01-01

    The results are presented of a study involving closed-circuit television as the means of providing the necessary task-to-operator feedback for efficient performance of the remote manipulation system. Experiments were performed to determine the remote video configuration that will result in the best overall system. Two categories of tests were conducted: those that involved remote position (rate) control of just the video system, and those in which closed-circuit TV was used along with manipulation of the objects themselves.

  11. Movable Cameras And Monitors For Viewing Telemanipulator

    NASA Technical Reports Server (NTRS)

    Diner, Daniel B.; Venema, Steven C.

    1993-01-01

    Three methods proposed to assist operator viewing telemanipulator on video monitor in control station when video image generated by movable video camera in remote workspace of telemanipulator. Monitors rotated or shifted and/or images in them transformed to adjust coordinate systems of scenes visible to operator according to motions of cameras and/or operator's preferences. Reduces operator's workload and probability of error by obviating need for mental transformations of coordinates during operation. Methods applied in outer space, undersea, in nuclear industry, in surgery, in entertainment, and in manufacturing.

  12. Video Capture of Plastic Surgery Procedures Using the GoPro HERO 3+

    PubMed Central

    Graves, Steven Nicholas; Shenaq, Deana Saleh; Langerman, Alexander J.

    2015-01-01

    Background: Significant improvements can be made in recording surgical procedures, particularly in capturing high-quality video recordings from the surgeons’ point of view. This study examined the utility of the GoPro HERO 3+ Black Edition camera for high-definition, point-of-view recordings of plastic and reconstructive surgery. Methods: The GoPro HERO 3+ Black Edition camera was head-mounted on the surgeon and oriented to the surgeon’s perspective using the GoPro App. The camera was used to record 4 cases: 2 fat graft procedures and 2 breast reconstructions. During cases 1-3, an assistant remotely controlled the GoPro via the GoPro App. For case 4 the GoPro was linked to a WiFi remote, and controlled by the surgeon. Results: Camera settings for case 1 were as follows: 1080p video resolution; 48 fps; Protune mode on; wide field of view; 16:9 aspect ratio. The lighting contrast due to the overhead lights resulted in limited washout of the video image. Camera settings were adjusted for cases 2-4 to a narrow field of view, which enabled the camera’s automatic white balance to better compensate for bright lights focused on the surgical field. Cases 2-4 captured video sufficient for teaching or presentation purposes. Conclusions: The GoPro HERO 3+ Black Edition camera enables high-quality, cost-effective video recording of plastic and reconstructive surgery procedures. When set to a narrow field of view and automatic white balance, the camera is able to sufficiently compensate for the contrasting light environment of the operating room and capture high-resolution, detailed video. PMID:25750851

  13. Highly Portable Airborne Multispectral Imaging System

    NASA Technical Reports Server (NTRS)

    Lehnemann, Robert; Mcnamee, Todd

    2001-01-01

    A portable instrumentation system is described that includes an airborne and a ground-based subsystem. It can acquire multispectral image data over swaths of terrain ranging in width from about 1.5 to 1 km. The system was developed especially for use in coastal environments and is well suited for performing remote sensing and general environmental monitoring. It includes a small, unpiloted, remotely controlled airplane that carries a forward-looking camera for navigation, three downward-looking monochrome video cameras for imaging terrain in three spectral bands, a video transmitter, and a Global Positioning System (GPS) receiver.

  14. Quantifying Northern Goshawk diets using remote cameras and observations from blinds

    USGS Publications Warehouse

    Rogers, A.S.; DeStefano, S.; Ingraldi, M.F.

    2005-01-01

    Raptor diet is most commonly measured indirectly, by analyzing castings and prey remains, or directly, by observing prey deliveries from blinds. Indirect methods are not only time consuming, but there is evidence to suggest these methods may overestimate certain prey taxa within raptor diet. Remote video surveillance systems have been developed to aid in monitoring and data collection, but their use in field situations can be challenging and is often untested. To investigate diet and prey delivery rates of Northern Goshawks (Accipiter gentilis), we operated 10 remote camera systems at occupied nests during the breeding seasons of 1999 and 2000 in east-central Arizona. We collected 2458 hr of useable video and successfully identified 627 (93%) prey items at least to Class (Aves, Mammalia, or Reptilia). Of prey items identified to genus, we identified 344 (81%) mammals, 62 (15%) birds, and 16 (4%) reptiles. During camera operation, we also conducted observations from blinds at a subset of five nests to compare the relative efficiency and precision of both methods. Limited observations from blinds yielded fewer prey deliveries, and therefore, lower delivery rates (0.16 items/hr) than simultaneous video footage (0.28 items/hr). Observations from blinds resulted in fewer prey identified to the genus and species levels, when compared to data collected by remote cameras. Cameras provided a detailed and close view of nests, allowed for simultaneous recording at multiple nests, decreased observer bias and fatigue, and provided a permanent archive of data. © 2005 The Raptor Research Foundation, Inc.

  15. Depth Perception In Remote Stereoscopic Viewing Systems

    NASA Technical Reports Server (NTRS)

    Diner, Daniel B.; Von Sydow, Marika

    1989-01-01

    Report describes theoretical and experimental studies of perception of depth by human operators through stereoscopic video systems. Purpose of such studies is to optimize dual-camera configurations used to view workspaces of remote manipulators at distances of 1 to 3 m from cameras. According to analysis, static stereoscopic depth distortion is decreased, without decreasing stereoscopic depth resolution, by increasing camera-to-object and intercamera distances and camera focal length. Analysis further predicts dynamic stereoscopic depth distortion is reduced by rotating cameras around center of circle passing through point of convergence of viewing axes and first nodal points of two camera lenses.
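
    As a rough aid to the claim that increasing the intercamera distance and focal length reduces depth distortion, the sketch below evaluates the standard parallel-axis stereo relation Z = fB/d. This is the generic textbook model, not the report's exact analysis; all parameter values are illustrative.

```python
# Minimal numeric sketch of the parallel-axis stereo relation,
# illustrating (not reproducing) the report's claim: depth error at a
# fixed range shrinks as baseline B and focal length f grow.
def depth_error(Z_m, baseline_m, focal_px, disparity_err_px=0.5):
    # Z = f*B/d  =>  |dZ| ~ Z^2 * |dd| / (f*B) for a small disparity error
    return (Z_m ** 2) * disparity_err_px / (focal_px * baseline_m)

for B in (0.1, 0.2, 0.4):      # intercamera distances in metres (assumed)
    for f in (800, 1600):      # focal lengths in pixels (assumed)
        err_mm = depth_error(2.0, B, f) * 1000
        print(f"B={B} m, f={f} px -> depth error at 2 m: {err_mm:.1f} mm")
```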

  16. Economical Video Monitoring of Traffic

    NASA Technical Reports Server (NTRS)

    Houser, B. C.; Paine, G.; Rubenstein, L. D.; Parham, O. Bruce, Jr.; Graves, W.; Bradley, C.

    1986-01-01

    Data compression allows video signals to be transmitted economically on telephone circuits. Telephone lines transmit television signals to a remote traffic-control center. Lines also carry command signals from center to TV camera and compressor at highway site. Video system with television cameras positioned at critical points on highways allows traffic controllers to determine visually, almost immediately, the exact cause of a traffic-flow disruption, e.g., accidents, breakdowns, or spills. Controllers can then dispatch appropriate emergency services and alert motorists to minimize traffic backups.
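
    The bandwidth gap that makes compression necessary can be checked with back-of-envelope arithmetic. Every figure below is an assumption (the brief quotes none): a digitized monochrome frame and a T1-rate telephone circuit.

```python
# Back-of-envelope estimate (all parameters assumed, not from the brief)
# of the compression ratio needed to carry digitized video on a leased
# telephone circuit.
width, height  = 512, 480      # assumed digitized frame size
bits_per_pixel = 8             # monochrome, 8-bit samples
frames_per_sec = 30
raw_bps  = width * height * bits_per_pixel * frames_per_sec
line_bps = 1_544_000           # T1 telephone circuit rate (assumed)

print(f"raw video: {raw_bps / 1e6:.0f} Mb/s")                     # ~59 Mb/s
print(f"required compression ratio: {raw_bps / line_bps:.0f}:1")  # ~38:1
```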

  17. Ranging Apparatus and Method Implementing Stereo Vision System

    NASA Technical Reports Server (NTRS)

    Li, Larry C. (Inventor); Cox, Brian J. (Inventor)

    1997-01-01

    A laser-directed ranging system for use in telerobotics applications and other applications involving physically handicapped individuals. The ranging system includes a left and right video camera mounted on a camera platform, and a remotely positioned operator. The position of the camera platform is controlled by three servo motors to orient the roll axis, pitch axis and yaw axis of the video cameras, based upon an operator input such as head motion. A laser is provided between the left and right video camera and is directed by the user to point to a target device. The images produced by the left and right video cameras are processed to eliminate all background images except for the spot created by the laser. This processing is performed by creating a digital image of the target prior to illumination by the laser, and then eliminating common pixels from the subsequent digital image which includes the laser spot. The horizontal disparity between the two processed images is calculated for use in a stereometric ranging analysis from which range is determined.
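
    The two processing steps described in the abstract, differencing away the static background to isolate the laser spot and triangulating range from horizontal disparity, can be sketched as follows. Rectified parallel cameras are assumed so the textbook relation Z = fB/d applies; the focal length, baseline, and synthetic frames are illustrative.

```python
# Sketch of spot isolation by frame differencing plus stereo
# triangulation, under simplifying assumptions (rectified parallel
# cameras, a single bright laser spot).
import numpy as np

def laser_spot(before: np.ndarray, after: np.ndarray, thresh: int = 40):
    """Return (row, col) of the brightest changed pixel: the laser spot."""
    diff = after.astype(int) - before.astype(int)
    diff[diff < thresh] = 0                    # suppress unchanged background
    return np.unravel_index(np.argmax(diff), diff.shape)

def stereo_range(col_left, col_right, focal_px=800.0, baseline_m=0.12):
    """Standard triangulation Z = f*B/d (parameter values assumed)."""
    d = col_left - col_right                   # horizontal disparity, pixels
    return focal_px * baseline_m / d if d > 0 else float("inf")

# Demo with synthetic 8-bit frames: a spot with 20 px of disparity.
dark = np.zeros((240, 320), np.uint8)
left, right = dark.copy(), dark.copy()
left[120, 200] = right[120, 180] = 255
_, cl = laser_spot(dark, left)
_, cr = laser_spot(dark, right)
print(f"range ~ {stereo_range(cl, cr):.2f} m")  # 800 * 0.12 / 20 = 4.8 m
```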

  18. The effects of video compression on acceptability of images for monitoring life sciences' experiments

    NASA Technical Reports Server (NTRS)

    Haines, Richard F.; Chuang, Sherry L.

    1993-01-01

    Current plans indicate that there will be a large number of life science experiments carried out during the thirty year-long mission of the Biological Flight Research Laboratory (BFRL) on board Space Station Freedom (SSF). Non-human life science experiments will be performed in the BFRL. Two distinct types of activities have already been identified for this facility: (1) collect, store, distribute, analyze and manage engineering and science data from the Habitats, Glovebox and Centrifuge; and (2) perform a broad range of remote science activities in the Glovebox and Habitat chambers in conjunction with the remotely located principal investigator (PI). These activities require extensive video coverage, viewing and/or recording and distribution to video displays on board SSF and to the ground. This paper concentrates mainly on the second type of activity. Each of the two BFRL habitat racks is designed to be configurable for either six rodent habitats per rack, four plant habitats per rack, or a combination of the above. Two video cameras will be installed in each habitat with a spare attachment for a third camera when needed. Therefore, a video system that can accommodate up to 12-18 camera inputs per habitat rack must be considered.

  19. Head-coupled remote stereoscopic camera system for telepresence applications

    NASA Astrophysics Data System (ADS)

    Bolas, Mark T.; Fisher, Scott S.

    1990-09-01

    The Virtual Environment Workstation Project (VIEW) at NASA's Ames Research Center has developed a remotely controlled stereoscopic camera system that can be used for telepresence research and as a tool to develop and evaluate configurations for head-coupled visual systems associated with space station telerobots and remote manipulation robotic arms. The prototype camera system consists of two lightweight CCD video cameras mounted on a computer controlled platform that provides real-time pan, tilt, and roll control of the camera system in coordination with head position transmitted from the user. This paper provides an overall system description focused on the design and implementation of the camera and platform hardware configuration and the development of control software. Results of preliminary performance evaluations are reported with emphasis on engineering and mechanical design issues and discussion of related psychophysiological effects and objectives.
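
    A minimal sketch of the head-coupled mapping described above: operator head angles become platform pan/tilt/roll commands, clamped to the platform's travel limits. The 1:1 mapping and the limit values are assumptions, not the VIEW project's actual parameters.

```python
# Hedged sketch of head-slaved camera-platform control; the limits and
# the direct 1:1 mapping are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class PlatformLimits:
    pan: float = 170.0   # degrees of travel each way (assumed)
    tilt: float = 60.0
    roll: float = 30.0

def head_to_platform(yaw: float, pitch: float, roll: float,
                     lim: PlatformLimits = PlatformLimits()):
    clamp = lambda v, m: max(-m, min(m, v))
    # A real system might add smoothing/prediction to hide transport delay.
    return clamp(yaw, lim.pan), clamp(pitch, lim.tilt), clamp(roll, lim.roll)

print(head_to_platform(35.0, -80.0, 5.0))  # pitch clamped to -60 degrees
```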

  20. Single-Fiber Optical Link For Video And Control

    NASA Technical Reports Server (NTRS)

    Galloway, F. Houston

    1993-01-01

    Single optical fiber carries control signals to remote television cameras and video signals from cameras. Fiber replaces multiconductor copper cable, with consequent reduction in size. Repeaters not needed. System works with either multimode or single-mode fiber types. Nonmetallic fiber provides immunity to electromagnetic interference at suboptical frequencies and is much less vulnerable to electronic eavesdropping and lightning strikes. Multigigahertz bandwidth more than adequate for high-resolution television signals.

  1. General-Purpose Serial Interface For Remote Control

    NASA Technical Reports Server (NTRS)

    Busquets, Anthony M.; Gupton, Lawrence E.

    1990-01-01

    Computer controls remote television camera. General-purpose controller developed to serve as interface between host computer and pan/tilt/zoom/focus functions on series of automated video cameras. Interface port based on 8251 programmable communications-interface circuit configured for tristated outputs, and connects controller system to any host computer with RS-232 input/output (I/O) port. Accepts byte-coded data from host, compares them with prestored codes in read-only memory (ROM), and closes or opens appropriate switches. Six output ports control opening and closing of as many as 48 switches. Operator controls remote television camera by speaking commands, in system including general-purpose controller.
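
    A host-side driver for such a byte-coded interface might look like the sketch below, written against the pyserial library. The command byte values and the address-byte-plus-action-byte framing are hypothetical; the brief's actual codes live in its ROM tables.

```python
# Hedged sketch of a host-side driver for a byte-coded serial camera
# controller; only the RS-232 transport (pyserial) is real, the command
# codes are invented for illustration.
import serial  # pip install pyserial

CMD = {"pan_left": 0x10, "pan_right": 0x11,   # assumed byte codes
       "tilt_up": 0x12, "tilt_down": 0x13,
       "zoom_in": 0x20, "zoom_out": 0x21}

def send_command(port: serial.Serial, camera: int, action: str) -> None:
    # One address byte selecting the camera, one action byte.
    port.write(bytes([camera & 0x3F, CMD[action]]))

if __name__ == "__main__":
    with serial.Serial("/dev/ttyS0", baudrate=9600, timeout=1) as port:
        send_command(port, camera=3, action="pan_left")
```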

  2. Remote Video Auditing in the Surgical Setting.

    PubMed

    Pedersen, Anne; Getty Ritter, Elizabeth; Beaton, Megan; Gibbons, David

    2017-02-01

    Remote video auditing, a method first adopted by the food preparation industry, was later introduced to the health care industry as a novel approach to improving hand hygiene practices. This strategy yielded tremendous and sustained improvement, causing leaders to consider the potential effects of such technology on the complex surgical environment. This article outlines the implementation of remote video auditing and the first year of activity, outcomes, and measurable successes in a busy surgery department in the eastern United States. A team of anesthesia care providers, surgeons, and OR personnel used low-resolution cameras, large-screen displays, and cell phone alerts to make significant progress in three domains: application of the Universal Protocol for preventing wrong site, wrong procedure, wrong person surgery; efficiency metrics; and cleaning compliance. The use of cameras with real-time auditing and results-sharing created an environment of continuous learning, compliance, and synergy, which has resulted in a safer, cleaner, and more efficient OR. Copyright © 2017 AORN, Inc. Published by Elsevier Inc. All rights reserved.

  3. Remote repair appliance

    DOEpatents

    Heumann, Frederick K.; Wilkinson, Jay C.; Wooding, David R.

    1997-01-01

    A remote appliance for supporting a tool for performing work at a worksite on a substantially circular bore of a workpiece and for providing video signals of the worksite to a remote monitor comprising: a baseplate having an inner face and an outer face; a plurality of rollers, wherein each roller is rotatably and adjustably attached to the inner face of the baseplate and positioned to roll against the bore of the workpiece when the baseplate is positioned against the mouth of the bore such that the appliance may be rotated about the bore in a plane substantially parallel to the baseplate; a tool holding means for supporting the tool, the tool holding means being adjustably attached to the outer face of the baseplate such that the working end of the tool is positioned on the inner face side of the baseplate; a camera for providing video signals of the worksite to the remote monitor; and a camera holding means for supporting the camera on the inner face side of the baseplate, the camera holding means being adjustably attached to the outer face of the baseplate. In a preferred embodiment, roller guards are provided to protect the rollers from debris and a bore guard is provided to protect the bore from wear by the rollers and damage from debris.

  4. Video stroke assessment (VSA) project: design and production of a prototype system for the remote diagnosis of stroke

    NASA Astrophysics Data System (ADS)

    Urias, Adrian R.; Draghic, Nicole; Lui, Janet; Cho, Angie; Curtis, Calvin; Espinosa, Joseluis; Wottawa, Christopher; Wiesmann, William P.; Schwamm, Lee H.

    2005-04-01

    Stroke remains the third most frequent cause of death in the United States and the leading cause of disability in adults. Long-term effects of ischemic stroke can be mitigated by the opportune administration of Tissue Plasminogen Activator (t-PA); however, the decision regarding the appropriate use of this therapy is dependent on timely, effective neurological assessment by a trained specialist. The lack of available stroke expertise is a key barrier preventing frequent use of t-PA. We report here on the development of a prototype research system capable of performing a semi-automated neurological examination from an offsite location via the Internet and a Computed Tomography (CT) scanner to facilitate the diagnosis and treatment of acute stroke. The Video Stroke Assessment (VSA) System consists of a video camera, a camera mounting frame, and a computer with software and algorithms to collect, interpret, and store patient neurological responses to stimuli. The video camera is mounted on a mobility track in front of the patient; camera direction and zoom are remotely controlled on a graphical user interface (GUI) by the specialist. The VSA System also performs a partially-autonomous examination based on the NIH Stroke Scale (NIHSS). Various response data indicative of stroke are recorded, analyzed and transmitted in real time to the specialist. The VSA provides unbiased, quantitative results for most categories of the NIHSS along with video and audio playback to assist in accurate diagnosis. The system archives the complete exam and results.

  5. Towards real-time remote processing of laparoscopic video

    NASA Astrophysics Data System (ADS)

    Ronaghi, Zahra; Duffy, Edward B.; Kwartowitz, David M.

    2015-03-01

    Laparoscopic surgery is a minimally invasive surgical technique where surgeons insert a small video camera into the patient's body to visualize internal organs and small tools to perform surgical procedures. However, the benefit of small incisions has a drawback of limited visualization of subsurface tissues, which can lead to navigational challenges in the delivering of therapy. Image-guided surgery (IGS) uses images to map subsurface structures and can reduce the limitations of laparoscopic surgery. One particular laparoscopic camera system of interest is the vision system of the daVinci-Si robotic surgical system (Intuitive Surgical, Sunnyvale, CA, USA). The video streams generate approximately 360 megabytes of data per second, demonstrating a trend towards increased data sizes in medicine, primarily due to higher-resolution video cameras and imaging equipment. Processing this data on a bedside PC has become challenging and a high-performance computing (HPC) environment may not always be available at the point of care. To process this data on remote HPC clusters at the typical 30 frames per second (fps) rate, it is required that each 11.9 MB video frame be processed by a server and returned within 1/30th of a second. The ability to acquire, process and visualize data in real-time is essential for performance of complex tasks as well as minimizing risk to the patient. As a result, utilizing high-speed networks to access computing clusters will lead to real-time medical image processing and improve surgical experiences by providing real-time augmented laparoscopic data. We aim to develop a medical video processing system using an OpenFlow software defined network that is capable of connecting to multiple remote medical facilities and HPC servers.
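
    The timing constraint quoted above reduces to simple arithmetic; restating it makes the network requirement explicit.

```python
# The abstract's own figures, restated: at 30 fps each ~11.9 MB frame
# must reach the cluster, be processed, and return within one frame
# interval.
frame_bytes = 11.9e6
fps = 30
deadline_s = 1 / fps                  # ~33.3 ms per frame
stream_bps = frame_bytes * 8 * fps    # one-way data rate

print(f"deadline per frame: {deadline_s * 1e3:.1f} ms")      # 33.3 ms
print(f"one-way stream rate: {stream_bps / 1e9:.2f} Gb/s")   # ~2.86 Gb/s
# A processed round trip doubles the wire traffic, so the network alone
# must sustain several Gb/s before any compute time is spent.
```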

  6. Security warning system monitors up to fifteen remote areas simultaneously

    NASA Technical Reports Server (NTRS)

    Fusco, R. C.

    1966-01-01

    Security warning system consisting of 15 television cameras is capable of monitoring several remote or unoccupied areas simultaneously. The system uses a commutator and decommutator, allowing time-multiplexed video transmission. This security system could be used in industrial and retail establishments.
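
    The commutator/decommutator pair is essentially round-robin time multiplexing. A minimal sketch, with generators standing in for the 15 cameras and a tag on each frame so a decommutator could route it to the right monitor:

```python
# Toy model of time-multiplexed video: a commutator interleaves tagged
# frames from many sources onto one channel. Details are illustrative.
from itertools import cycle

def commutate(cameras):                      # dict: camera id -> frame iterator
    for cam_id in cycle(sorted(cameras)):    # fixed round-robin scan
        yield cam_id, next(cameras[cam_id])  # tagged multiplexed stream

def frames(tag):                             # stand-in for one video source
    n = 0
    while True:
        yield f"{tag}-frame{n}"
        n += 1

mux = commutate({i: frames(f"cam{i}") for i in range(1, 16)})  # 15 cameras
for _ in range(5):
    print(next(mux))
```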

  7. Integrated remotely sensed datasets for disaster management

    NASA Astrophysics Data System (ADS)

    McCarthy, Timothy; Farrell, Ronan; Curtis, Andrew; Fotheringham, A. Stewart

    2008-10-01

    Video imagery can be acquired from aerial, terrestrial and marine-based platforms and has been exploited for a range of remote sensing applications over the past two decades. Examples include coastal surveys using aerial video, route-corridor infrastructure surveys using vehicle-mounted video cameras, aerial surveys over forestry and agriculture, underwater habitat mapping and disaster management. Many of these video systems are based on interlaced television standards such as North America's NTSC and the European SECAM and PAL systems, recorded in various video formats. This technology has recently been employed as a front-line remote sensing technology for damage assessment post-disaster. This paper traces the development of spatial video as a remote sensing tool from the early 1980s to the present day. The background to a new spatial-video research initiative based at the National University of Ireland, Maynooth (NUIM) is described. New improvements are proposed, including low-cost encoders, easy-to-use software decoders, timing issues and interoperability. These developments will enable specialists and non-specialists to collect, process and integrate these datasets with minimal support. This integrated approach will enable decision makers to access relevant remotely sensed datasets quickly and so carry out rapid damage assessment during and after a disaster.

  8. The integrated design and archive of space-borne signal processing and compression coding

    NASA Astrophysics Data System (ADS)

    He, Qiang-min; Su, Hao-hang; Wu, Wen-bo

    2017-10-01

    With the increasing demand for extracting information from remote sensing images, there is an urgent need to enhance a whole system's imaging quality and imaging capability through integrated design, achieving a compact structure, low mass and high attitude-maneuver ability. At present, a remote sensing camera's video signal processing unit and its image compression and coding unit are distributed across different devices; the combined volume, weight and power consumption of these two units are relatively large, which cannot meet the requirements of a high-mobility remote sensing camera. Based on the technical requirements of a high-mobility remote sensing camera, this paper designs a space-borne integrated signal processing and compression circuit, drawing on several technologies: high-speed, high-density analog-digital mixed PCB design, embedded DSP technology, and image compression based on special-purpose chips. This circuit lays a solid foundation for research on high-mobility remote sensing cameras.

  9. Remote repair appliance

    DOEpatents

    Heumann, F.K.; Wilkinson, J.C.; Wooding, D.R.

    1997-12-16

    A remote appliance for supporting a tool for performing work at a work site on a substantially circular bore of a work piece and for providing video signals of the work site to a remote monitor comprises: a base plate having an inner face and an outer face; a plurality of rollers, wherein each roller is rotatably and adjustably attached to the inner face of the base plate and positioned to roll against the bore of the work piece when the base plate is positioned against the mouth of the bore such that the appliance may be rotated about the bore in a plane substantially parallel to the base plate; a tool holding means for supporting the tool, the tool holding means being adjustably attached to the outer face of the base plate such that the working end of the tool is positioned on the inner face side of the base plate; a camera for providing video signals of the work site to the remote monitor; and a camera holding means for supporting the camera on the inner face side of the base plate, the camera holding means being adjustably attached to the outer face of the base plate. In a preferred embodiment, roller guards are provided to protect the rollers from debris and a bore guard is provided to protect the bore from wear by the rollers and damage from debris. 5 figs.

  10. A real-time remote video streaming platform for ultrasound imaging.

    PubMed

    Ahmadi, Mehdi; Gross, Warren J; Kadoury, Samuel

    2016-08-01

    Ultrasound is a viable imaging technology in remote and resource-limited areas. Ultrasonography is a user-dependent skill that requires a high degree of training and hands-on experience, yet few skilled sonographers are located in remote areas. In this work, we aim to develop a real-time video streaming platform that allows specialist physicians to remotely monitor ultrasound exams. To this end, an ultrasound stream is captured and transmitted through a wireless network to remote computers, smartphones and tablets. In addition, the system is equipped with a camera to track the position of the ultrasound probe. The main advantage of our work is the use of an open-source platform for video streaming, which gives us more control over streaming parameters than the available commercial products. The transmission delays of the system are evaluated for several ultrasound video resolutions; the results show that ultrasound video close to high-definition (HD) resolution can be received and displayed on an Android tablet with a delay of 0.5 seconds, which is acceptable for accurate real-time diagnosis.
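
    One simple way to obtain delay figures like those reported above is to stamp each outgoing frame with its capture time and compare at the receiver. The header format below is an assumption, not the paper's protocol, and it presumes sender and receiver clocks synchronized by, e.g., NTP.

```python
# Hedged sketch of one-way delay measurement via embedded capture
# timestamps; the JSON header format is invented for illustration.
import json
import time

def stamp(frame_no: int) -> bytes:
    """Header prepended to each outgoing frame (format is an assumption)."""
    return json.dumps({"frame": frame_no, "t_capture": time.time()}).encode()

def delay_seconds(header: bytes) -> float:
    """Receiver side: one-way delay, assuming NTP-synchronized clocks."""
    return time.time() - json.loads(header)["t_capture"]

hdr = stamp(0)
time.sleep(0.05)  # stand-in for network transport plus decode latency
print(f"measured delay: {delay_seconds(hdr) * 1000:.0f} ms")  # ~50 ms
```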

  11. Light-Directed Ranging System Implementing Single Camera System for Telerobotics Applications

    NASA Technical Reports Server (NTRS)

    Wells, Dennis L. (Inventor); Li, Larry C. (Inventor); Cox, Brian J. (Inventor)

    1997-01-01

    A laser-directed ranging system has utility for use in various fields, such as telerobotics applications and other applications involving physically handicapped individuals. The ranging system includes a single video camera and a directional light source such as a laser mounted on a camera platform, and a remotely positioned operator. In one embodiment, the position of the camera platform is controlled by three servo motors to orient the roll axis, pitch axis and yaw axis of the video cameras, based upon an operator input such as head motion. The laser is offset vertically and horizontally from the camera, and the laser/camera platform is directed by the user to point the laser and the camera toward a target device. The image produced by the video camera is processed to eliminate all background images except for the spot created by the laser. This processing is performed by creating a digital image of the target prior to illumination by the laser, and then eliminating common pixels from the subsequent digital image which includes the laser spot. A reference point is defined at a point in the video frame, which may be located outside of the image area of the camera. The disparity between the digital image of the laser spot and the reference point is calculated for use in a ranging analysis to determine range to the target.

  12. A highly sensitive underwater video system for use in turbid aquaculture ponds.

    PubMed

    Hung, Chin-Chang; Tsao, Shih-Chieh; Huang, Kuo-Hao; Jang, Jia-Pu; Chang, Hsu-Kuang; Dobbs, Fred C

    2016-08-24

    The turbid, low-light waters characteristic of aquaculture ponds have made it difficult or impossible for previous video cameras to provide clear imagery of the ponds' benthic habitat. We developed a highly sensitive, underwater video system (UVS) for this particular application and tested it in shrimp ponds having turbidities typical of those in southern Taiwan. The system's high-quality video stream and images, together with its camera capacity (up to nine cameras), permit in situ observations of shrimp feeding behavior, shrimp size and internal anatomy, and organic matter residues on pond sediments. The UVS can operate continuously and be focused remotely, a convenience to shrimp farmers. The observations possible with the UVS provide aquaculturists with information critical to provision of feed with minimal waste; determining whether the accumulation of organic-matter residues dictates exchange of pond water; and management decisions concerning shrimp health.

  13. A highly sensitive underwater video system for use in turbid aquaculture ponds

    NASA Astrophysics Data System (ADS)

    Hung, Chin-Chang; Tsao, Shih-Chieh; Huang, Kuo-Hao; Jang, Jia-Pu; Chang, Hsu-Kuang; Dobbs, Fred C.

    2016-08-01

    The turbid, low-light waters characteristic of aquaculture ponds have made it difficult or impossible for previous video cameras to provide clear imagery of the ponds’ benthic habitat. We developed a highly sensitive, underwater video system (UVS) for this particular application and tested it in shrimp ponds having turbidities typical of those in southern Taiwan. The system’s high-quality video stream and images, together with its camera capacity (up to nine cameras), permit in situ observations of shrimp feeding behavior, shrimp size and internal anatomy, and organic matter residues on pond sediments. The UVS can operate continuously and be focused remotely, a convenience to shrimp farmers. The observations possible with the UVS provide aquaculturists with information critical to provision of feed with minimal waste; determining whether the accumulation of organic-matter residues dictates exchange of pond water; and management decisions concerning shrimp health.

  14. A highly sensitive underwater video system for use in turbid aquaculture ponds

    PubMed Central

    Hung, Chin-Chang; Tsao, Shih-Chieh; Huang, Kuo-Hao; Jang, Jia-Pu; Chang, Hsu-Kuang; Dobbs, Fred C.

    2016-01-01

    The turbid, low-light waters characteristic of aquaculture ponds have made it difficult or impossible for previous video cameras to provide clear imagery of the ponds’ benthic habitat. We developed a highly sensitive, underwater video system (UVS) for this particular application and tested it in shrimp ponds having turbidities typical of those in southern Taiwan. The system’s high-quality video stream and images, together with its camera capacity (up to nine cameras), permit in situ observations of shrimp feeding behavior, shrimp size and internal anatomy, and organic matter residues on pond sediments. The UVS can operate continuously and be focused remotely, a convenience to shrimp farmers. The observations possible with the UVS provide aquaculturists with information critical to provision of feed with minimal waste; determining whether the accumulation of organic-matter residues dictates exchange of pond water; and management decisions concerning shrimp health. PMID:27554201

  15. Development and application of remote video monitoring system for combine harvester based on embedded Linux

    NASA Astrophysics Data System (ADS)

    Chen, Jin; Wang, Yifan; Wang, Xuelei; Wang, Yuehong; Hu, Rui

    2017-01-01

    Combine harvesters usually work in sparsely populated areas with harsh environments. To achieve remote real-time video monitoring of the working state of a combine harvester, a remote video monitoring system based on ARM11 and embedded Linux was developed. The system uses USB cameras to capture video of the working state of the main parts of the combine harvester, including the granary, threshing drum, cab and cutting table. Video data are compressed with the JPEG image compression standard and transferred over the network to a remote monitoring center for long-range monitoring and management. The paper first explains the need for the system, then briefly introduces the hardware and software implementations, and then details the configuration and compilation of the embedded Linux operating system and the compilation and porting of the video server program. Finally, the system was installed and commissioned on a combine harvester and tested; in testing, the remote video monitoring system achieved 30 fps at a resolution of 800×600, with a response delay over the public network of about 40 ms.
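
    A commodity re-creation of the capture-compress-send loop is sketched below using OpenCV: grab USB-camera frames at 800×600, JPEG-encode them, and push each frame over TCP with a length prefix. The host, port, JPEG quality, and framing are assumptions; the original system runs a C video server on embedded Linux rather than Python.

```python
# Hedged sketch of the capture -> JPEG -> network loop on commodity
# parts; the resolution mirrors the figures quoted above, everything
# else is assumed.
import socket
import struct
import cv2  # pip install opencv-python

def stream(host: str, port: int) -> None:
    cap = cv2.VideoCapture(0)  # first USB camera
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, 800)
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 600)
    with socket.create_connection((host, port)) as sock:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            ok, jpeg = cv2.imencode(".jpg", frame,
                                    [cv2.IMWRITE_JPEG_QUALITY, 80])
            if ok:  # length-prefixed frame so the receiver can re-frame
                sock.sendall(struct.pack(">I", len(jpeg)) + jpeg.tobytes())

# stream("monitor.example.org", 9000)  # hypothetical monitoring center
```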

  16. Use of a UAV-mounted video camera to assess feeding behavior of Raramuri Criollo cows

    USDA-ARS?s Scientific Manuscript database

    Interest in use of unmanned aerial vehicles in science has increased in recent years. It is predicted that they will be a preferred remote sensing platform for applications that inform sustainable rangeland management in the future. The objective of this study was to determine whether UAV video moni...

  17. Evaluation of stereoscopic video cameras synchronized with the movement of an operator's head on the teleoperation of the actual backhoe shovel

    NASA Astrophysics Data System (ADS)

    Minamoto, Masahiko; Matsunaga, Katsuya

    1999-05-01

    Operator performance while using a remote controlled backhoe shovel is described for three different stereoscopic viewing conditions: direct view, fixed stereoscopic cameras connected to a helmet-mounted display (HMD), and a rotating stereo camera slaved to the head orientation of a free-moving stereo HMD. Results showed that the head-slaved system provided the best performance.

  18. Intelligent viewing control for robotic and automation systems

    NASA Astrophysics Data System (ADS)

    Schenker, Paul S.; Peters, Stephen F.; Paljug, Eric D.; Kim, Won S.

    1994-10-01

    We present a new system for supervisory automated control of multiple remote cameras. Our primary purpose in developing this system has been to provide capability for knowledge-based, 'hands-off' viewing during execution of teleoperation/telerobotic tasks. The reported technology has broader applicability to remote surveillance, telescience observation, automated manufacturing workcells, etc. We refer to this new capability as 'Intelligent Viewing Control (IVC),' distinguishing it from simple programmed camera motion control. In the IVC system, camera viewing assignment, sequencing, positioning, panning, and parameter adjustment (zoom, focus, aperture, etc.) are invoked and interactively executed in real time by a knowledge-based controller, drawing on a priori known task models and constraints, including operator preferences. This multi-camera control is integrated with a real-time, high-fidelity 3D graphics simulation, which is correctly calibrated in perspective to the actual cameras and their platform kinematics (translation/pan-tilt). Such a merged graphics-with-video design allows the system user to preview and modify the planned ('choreographed') viewing sequences. Further, during actual task execution, the system operator has available both the resulting optimized video sequence and supplementary graphics views from arbitrary perspectives. IVC, including operator-interactive designation of robot task actions, is presented to the user as a well-integrated, single-screen video-graphic user interface allowing easy access to all relevant telerobot communication/command/control resources. We describe and show pictorial results of a preliminary IVC system implementation for telerobotic servicing of a satellite.

  19. Cable and Line Inspection Mechanism

    NASA Technical Reports Server (NTRS)

    Ross, Terence J. (Inventor)

    2003-01-01

    An automated cable and line inspection mechanism visually scans the entire surface of a cable as the mechanism travels along the cable's length. The mechanism includes a drive system, a video camera, a mirror assembly for providing the camera with a 360 degree view of the cable, and a laser micrometer for measuring the cable's diameter. The drive system includes an electric motor and a plurality of drive wheels and tension wheels for engaging the cable or line to be inspected, and driving the mechanism along the cable. The mirror assembly includes mirrors that are positioned to project multiple images of the cable on the camera lens, each of which is of a different portion of the cable. A data transceiver and a video transmitter are preferably employed for transmission of video images, data and commands between the mechanism and a remote control station.

  1. [Communication subsystem design of tele-screening system for diabetic retinopathy].

    PubMed

    Chen, Jian; Pan, Lin; Zheng, Shaohua; Yu, Lun

    2013-12-01

    A design scheme for a tele-screening system for diabetic retinopathy (DR) is proposed, with emphasis on the communication subsystem. The scheme uses a serial communication module, consisting of an ARM7 microcontroller and relays, to connect the remote computer and the fundus camera, and uses C++ (based on MFC) to implement communication software consisting of a therapy and diagnostic information module, a video/audio surveillance module and a fundus camera control module. The scheme is broadly applicable to similar remote medical treatment systems.

  2. Advancements in remote physiological measurement and applications in human-computer interaction

    NASA Astrophysics Data System (ADS)

    McDuff, Daniel

    2017-04-01

    Physiological signals are important for tracking health and emotional states. Imaging photoplethysmography (iPPG) is a set of techniques for remotely recovering cardio-pulmonary signals from video of the human body. Advances in iPPG methods over the past decade combined with the ubiquity of digital cameras presents the possibility for many new, low-cost applications of physiological monitoring. This talk will highlight methods for recovering physiological signals, work characterizing the impact of video parameters and hardware on these measurements, and applications of this technology in human-computer interfaces.

  3. A Portable Shoulder-Mounted Camera System for Surgical Education in Spine Surgery.

    PubMed

    Pham, Martin H; Ohiorhenuan, Ifije E; Patel, Neil N; Jakoi, Andre M; Hsieh, Patrick C; Acosta, Frank L; Wang, Jeffrey C; Liu, John C

    2017-02-07

    The past several years have seen increased recognition of operative videos as an important adjunct for resident education. Currently lacking, however, are effective methods to record video for the purposes of illustrating the techniques of minimally invasive (MIS) and complex spine surgery. We describe here our experiences developing and using a shoulder-mounted camera system for recording surgical video. Our requirements for an effective camera system included wireless portability to allow for movement around the operating room, camera mount location for comfort and loupes/headlight usage, battery life for long operative days, and sterile control of on/off recording. With this in mind, we created a shoulder-mounted camera system utilizing a GoPro™ HERO3+, its Smart Remote (GoPro, Inc., San Mateo, California), a high-capacity external battery pack, and a commercially available shoulder-mount harness. This shoulder-mounted system was more comfortable to wear for long periods of time in comparison to existing head-mounted and loupe-mounted systems. Without requiring any wired connections, the surgeon was free to move around the room as needed. Over the past several years, we have recorded numerous MIS and complex spine surgeries for the purposes of surgical video creation for resident education. Surgical videos serve as a platform to distribute important operative nuances in rich multimedia. Effective and practical camera system setups are needed to encourage the continued creation of videos to illustrate the surgical maneuvers in minimally invasive and complex spinal surgery. We describe here a novel portable shoulder-mounted camera system setup specifically designed to be worn and used for long periods of time in the operating room.

  4. Computer-Aided Remote Driving

    NASA Technical Reports Server (NTRS)

    Wilcox, Brian H.

    1994-01-01

    System for remote control of robotic land vehicle requires only small radio-communication bandwidth. Twin video cameras on vehicle create stereoscopic images. Operator views cross-polarized images on two cathode-ray tubes through correspondingly polarized spectacles. By use of cursor on frozen image, remote operator designates path. Vehicle proceeds to follow path, by use of limited degree of autonomous control to cope with unexpected conditions. System concept, called "computer-aided remote driving" (CARD), potentially useful in exploration of other planets, military surveillance, firefighting, and clean-up of hazardous materials.

  5. Virtual displays for 360-degree video

    NASA Astrophysics Data System (ADS)

    Gilbert, Stephen; Boonsuk, Wutthigrai; Kelly, Jonathan W.

    2012-03-01

    In this paper we describe a novel approach for comparing users' spatial cognition when using different depictions of 360-degree video on a traditional 2D display. By using virtual cameras within a game engine and texture mapping of these camera feeds to an arbitrary shape, we were able to offer users a 360-degree interface composed of four 90-degree views, two 180-degree views, or one 360-degree view of the same interactive environment. An example experiment is described using these interfaces. This technique for creating alternative displays of wide-angle video facilitates the exploration of how compressed or fish-eye distortions affect spatial perception of the environment and can benefit the creation of interfaces for surveillance and remote system teleoperation.

  6. Real-time visual communication to aid disaster recovery in a multi-segment hybrid wireless networking system

    NASA Astrophysics Data System (ADS)

    Al Hadhrami, Tawfik; Wang, Qi; Grecos, Christos

    2012-06-01

    When natural disasters or other large-scale incidents occur, obtaining accurate and timely information on the developing situation is vital to effective disaster recovery operations. High-quality video streams and high-resolution images, if available in real time, would provide an invaluable source of current situation reports to the incident management team. Meanwhile, a disaster often causes significant damage to the communications infrastructure. Therefore, another essential requirement for disaster management is the ability to rapidly deploy a flexible incident area communication network. Such a network would facilitate the transmission of real-time video streams and still images from the disrupted area to remote command and control locations. In this paper, a comprehensive end-to-end video/image transmission system between an incident area and a remote control centre is proposed and implemented, and its performance is experimentally investigated. In this study a hybrid multi-segment communication network is designed that seamlessly integrates terrestrial wireless mesh networks (WMNs), distributed wireless visual sensor networks, an airborne platform with video camera balloons, and a Digital Video Broadcasting-Satellite (DVB-S) system. By carefully integrating all of these rapidly deployable, interworking and collaborative networking technologies, we can fully exploit the joint benefits provided by WMNs, WSNs, balloon camera networks and DVB-S for real-time video streaming and image delivery in emergency situations among the disaster hit area, the remote control centre and the rescue teams in the field. The whole proposed system is implemented in a proven simulator. Through extensive simulations, the real-time visual communication performance of this integrated system has been numerically evaluated, towards a more in-depth understanding of how to support high-quality visual communications in such a demanding context.

  7. Heart rate measurement based on face video sequence

    NASA Astrophysics Data System (ADS)

    Xu, Fang; Zhou, Qin-Wu; Wu, Peng; Chen, Xing; Yang, Xiaofeng; Yan, Hong-jian

    2015-03-01

    This paper proposes a new non-contact heart rate measurement method based on photoplethysmography (PPG) theory. With this method we can measure heart rate remotely with a camera and ambient light. We collected video sequences of subjects, and detected remote PPG signals through video sequences. Remote PPG signals were analyzed with two methods, Blind Source Separation Technology (BSST) and Cross Spectral Power Technology (CSPT). BSST is a commonly used method, and CSPT is used for the first time in the study of remote PPG signals in this paper. Both of the methods can acquire heart rate, but compared with BSST, CSPT has clearer physical meaning, and the computational complexity of CSPT is lower than that of BSST. Our work shows that heart rates detected by CSPT method have good consistency with the heart rates measured by a finger clip oximeter. With good accuracy and low computational complexity, the CSPT method has a good prospect for the application in the field of home medical devices and mobile health devices.
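
    A minimal version of camera-based pulse extraction, much simpler than the paper's BSST and CSPT methods: average the green channel over the face region in each frame, then take the spectral peak within plausible heart-rate frequencies. The synthetic 72 bpm trace merely checks the pipeline.

```python
# Minimal remote-PPG sketch under simple assumptions: the per-frame
# green-channel mean over a face region carries the pulse, and its
# spectral peak in the 0.7-4.0 Hz band gives heart rate.
import numpy as np

def heart_rate_bpm(green_means: np.ndarray, fps: float) -> float:
    x = green_means - green_means.mean()      # remove the DC level
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 4.0)    # 42-240 bpm
    return 60.0 * freqs[band][np.argmax(spectrum[band])]

# Synthetic check: a 72 bpm (1.2 Hz) pulse sampled at 30 fps for 20 s.
t = np.arange(0, 20, 1 / 30)
trace = 0.5 * np.sin(2 * np.pi * 1.2 * t) + np.random.normal(0, 0.2, t.size)
print(f"{heart_rate_bpm(trace, fps=30):.0f} bpm")  # ~72
```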

  8. High-Definition Television (HDTV) Images for Earth Observations and Earth Science Applications

    NASA Technical Reports Server (NTRS)

    Robinson, Julie A.; Holland, S. Douglas; Runco, Susan K.; Pitts, David E.; Whitehead, Victor S.; Andrefouet, Serge M.

    2000-01-01

    As part of Detailed Test Objective 700-17A, astronauts acquired Earth observation images from orbit using a high-definition television (HDTV) camcorder. Here we provide a summary of qualitative findings following completion of tests during missions STS (Space Transportation System)-93 and STS-99. We compared HDTV imagery stills to images taken using payload bay video cameras, a Hasselblad film camera, and an electronic still camera. We also evaluated the potential for motion video observations of changes in sunlight and the use of multi-aspect viewing to image aerosols. Spatial resolution and color quality are far superior in HDTV images compared to National Television Systems Committee (NTSC) video images. Thus, HDTV provides the first viable option for video-based remote sensing observations of Earth from orbit. Although HDTV images have less spatial resolution than medium-format film cameras, such as the Hasselblad, under ideal conditions, the HDTV images acquired under some conditions on orbit compared favorably with the Hasselblad. Of particular note was the quality of color reproduction in the HDTV images. HDTV and electronic still camera (ESC) images were not compared with matched fields of view, and so spatial resolution could not be compared for the two image types. However, the color reproduction of the HDTV stills was truer than the colors in the ESC images. As HDTV becomes the operational video standard for Space Shuttle and Space Station, HDTV has great potential as a source of Earth-observation data. Planning for the conversion from NTSC to HDTV video standards should include planning for Earth data archiving and distribution.

  9. Remote presence proctoring by using a wireless remote-control videoconferencing system.

    PubMed

    Smith, C Daniel; Skandalakis, John E

    2005-06-01

    Remote presence in an operating room, allowing an experienced surgeon to proctor another surgeon, has been promised through robotics and telesurgery solutions. Although several such systems have been developed and commercialized, little progress has been made using telesurgery for anything more than live demonstrations of surgery. This pilot project explored the use of a new videoconferencing capability to determine if it offers advantages over existing systems. The videoconferencing system used is a PC-based system with a flat screen monitor and an attached camera, mounted on a remotely controlled platform. This device is controlled from a remotely placed PC-based videoconferencing computer outfitted with a joystick. Using the public Internet and a wireless router at the client site, a surgeon at the control station can manipulate the videoconferencing system. Controls include navigating the unit around the room and moving the flat screen/camera portion like a head looking up/down and right/left. This system (InTouch Medical, Santa Barbara, CA) was used to proctor medical students during an anatomy class cadaver dissection. The ability of the remote surgeon to effectively monitor the students' dissections and direct their activities was assessed subjectively by the students and the surgeon. The device was very effective at providing a controllable and interactive presence in the anatomy lab. Students felt they were interacting with a person rather than a video screen and quickly forgot that the surgeon was not in the room. The ability to move the device within the environment, rather than just observe the environment from multiple fixed camera angles, gave the surgeon a similar feel of true presence. A remote-controlled videoconferencing system provides a more real experience for both student and proctor. Future development of such a device could greatly facilitate progress in the implementation of remote presence proctoring.

  10. New information technology tools for a medical command system for mass decontamination.

    PubMed

    Fuse, Akira; Okumura, Tetsu; Hagiwara, Jun; Tanabe, Tomohide; Fukuda, Reo; Masuno, Tomohiko; Mimura, Seiji; Yamamoto, Kaname; Yokota, Hiroyuki

    2013-06-01

    In a mass decontamination during a nuclear, biological, or chemical (NBC) response, the capability to command, control, and communicate is crucial for the proper flow of casualties at the scene and their subsequent evacuation to definitive medical facilities. Information Technology (IT) tools can be used to strengthen medical control, command, and communication during such a response. Novel IT tools comprise a vehicle-based, remote video camera and communication network systems. During an on-site verification event, an image from a remote video camera system attached to the personal protective garment of a medical responder working in the warm zone was transmitted to the on-site Medical Commander for aid in decision making. Similarly, a communication network system was used for personnel at the following points: (1) the on-site Medical Headquarters; (2) the decontamination hot zone; (3) an on-site coordination office; and (4) a remote medical headquarters of a local government office. A specially equipped, dedicated vehicle was used for the on-site medical headquarters, and facilitated the coordination with other agencies. The use of these IT tools proved effective in assisting with the medical command and control of medical resources and patient transport decisions during a mass-decontamination exercise, but improvements are required to overcome transmission delays and camera direction settings, as well as network limitations in certain areas.

  11. Packet based serial link realized in FPGA dedicated for high resolution infrared image transmission

    NASA Astrophysics Data System (ADS)

    Bieszczad, Grzegorz

    2015-05-01

    In this article, the external digital interface designed for a thermographic camera built at the Military University of Technology is described. The aim of the article is to illustrate challenges encountered during the design of a thermal vision camera, especially those related to infrared data processing and transmission. The article explains the main requirements for an interface to transfer infrared or video digital data and describes the solution we elaborated based on the Low Voltage Differential Signaling (LVDS) physical layer and signaling scheme. The elaborated image transmission link is built using an FPGA integrated circuit with built-in high-speed serial transceivers achieving up to 2.5 Gbps throughput. Image transmission is realized using a proprietary packet protocol. The transmission protocol engine was described in VHDL and tested in FPGA hardware. The link is able to transmit 1280x1024@60Hz 24-bit video data using one signal pair, and was tested by transmitting the thermal camera picture to a remote monitor. Construction of a dedicated video link reduces power consumption compared to solutions with ASIC-based encoders and decoders realizing video links such as DVI or packet-based DisplayPort, while simultaneously reducing the wiring needed to establish the link to a single pair. The article describes the functions of modules integrated in the FPGA design, which realize several functions: synchronization to the video source, video stream packetizing, interfacing with the transceiver module, and dynamic clock generation for video standard conversion.
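
    As a back-of-envelope check (our own arithmetic, assuming 8b/10b line coding, which is common for FPGA serial transceivers of this class), the stated format fits within a single 2.5 Gbps pair:

```python
# 1280x1024 @ 60 Hz, 24-bit video over one serial pair.
pixels_per_frame = 1280 * 1024
payload_bps = pixels_per_frame * 24 * 60       # raw video payload
line_rate_bps = payload_bps * 10 / 8           # 8b/10b coding overhead (assumed)

print(f"payload:   {payload_bps / 1e9:.2f} Gbps")   # ~1.89 Gbps
print(f"line rate: {line_rate_bps / 1e9:.2f} Gbps") # ~2.36 Gbps, under 2.5 Gbps
# The remaining margin must absorb the proprietary packet framing overhead.
```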

  12. Remote autopsy services: A feasibility study on nine cases.

    PubMed

    Vodovnik, Aleksandar; Aghdam, Mohammad Reza F; Espedal, Dan Gøran

    2017-01-01

    Introduction: We have conducted a feasibility study on remote autopsy services in order to increase the flexibility of the service, with benefits for teaching and interdepartmental collaboration. Methods: Three senior staff pathologists, one senior autopsy technician and one junior resident participated in the study. Nine autopsies were performed by the autopsy technician or resident, supervised by the primary pathologist, through a secure, double-encrypted video link using Jabber Video (Cisco) with a high-speed broadband connection. The primary pathologist and autopsy room each connected to the secure virtual meeting room using 14″ laptops with in-built cameras (Hewlett-Packard). A portable high-definition web camera (Cisco) was used in the autopsy room. Primary and secondary pathologists independently interpreted and later compared gross findings for the purpose of quality assurance. The video was streamed live only during consultations and interpretation. A satisfaction survey on technical and professional aspects of the study was conducted. Results: Independent interpretations of gross findings between primary and secondary pathologists yielded full agreement. A definite cause of death in one complex autopsy was determined following discussions between pathologists and reviews of the clinical notes. Our satisfaction level with the technical and professional aspects of the study was 87% and 97%, respectively. Discussion: Remote autopsy services are found to be feasible in the hands of experienced staff, with increased flexibility and interest of autopsy technicians in the service as a result.

  13. Video System for Viewing From a Remote or Windowless Cockpit

    NASA Technical Reports Server (NTRS)

    Banerjee, Amamath

    2009-01-01

    A system of electronic hardware and software synthesizes, in nearly real time, an image of a portion of a scene surveyed by as many as eight video cameras aimed, in different directions, at portions of the scene. This is a prototype of systems that would enable a pilot to view the scene outside a remote or windowless cockpit. The outputs of the cameras are digitized. Direct memory addressing is used to store the data of a few captured images in sequence, and the sequence is repeated in cycles. Cylindrical warping is used in merging adjacent images at their borders to construct a mosaic image of the scene. The mosaic-image data are written to a memory block from which they can be rendered on a head-mounted display (HMD) device. A subsystem in the HMD device tracks the direction of gaze of the wearer, providing data that are used to select, for display, the portion of the mosaic image corresponding to the direction of gaze. The basic functionality of the system has been demonstrated by mounting the cameras on the roof of a van and steering the van by use of the images presented on the HMD device.
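
    A sketch of the cylindrical warping step using its standard formulation, assuming a known focal length f in pixels (a value the article does not give):

```python
import cv2
import numpy as np

def cylindrical_warp(img, f):
    """Project a flat camera image onto a cylinder of radius f so that
    adjacent views can be translated and blended at their borders."""
    h, w = img.shape[:2]
    cx, cy = w / 2.0, h / 2.0
    xs, ys = np.meshgrid(np.arange(w), np.arange(h))
    theta = (xs - cx) / f                  # angle around the cylinder axis
    x_src = np.tan(theta) * f + cx         # inverse map into the flat image
    y_src = (ys - cy) / np.cos(theta) + cy
    return cv2.remap(img, x_src.astype(np.float32),
                     y_src.astype(np.float32), cv2.INTER_LINEAR)

# f should exceed w / pi so theta stays within (-pi/2, pi/2).
```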

  14. Virtual Reality Calibration for Telerobotic Servicing

    NASA Technical Reports Server (NTRS)

    Kim, W.

    1994-01-01

    A virtual reality calibration technique of matching a virtual environment of simulated graphics models in 3-D geometry and perspective with actual camera views of the remote site task environment has been developed to enable high-fidelity preview/predictive displays with calibrated graphics overlay on live video.

  15. Fishery research in the Great Lakes using a low-cost remotely operated vehicle

    USGS Publications Warehouse

    Kennedy, Gregory W.; Brown, Charles L.; Argyle, Ray L.

    1988-01-01

    We used a MiniROVER MK II remotely operated vehicle (ROV) to collect ground-truth information on fish and their habitat in the Great Lakes that has traditionally been collected by divers, static cameras, or submersibles. The ROV, powered by 4 thrusters and controlled by a pilot at the surface, was portable and efficient to operate throughout the Great Lakes in 1987, and collected a total of 30 h of video data recorded for later analysis. We collected 50% more substrate information per unit of effort with the ROV than with static cameras. Fish behavior ranged from no avoidance reaction in ambient light to erratic responses in the vehicle lights. The ROV's field of view depended on the time of day, light levels, and density of zooplankton. Quantification of the data collected with the ROV (either physical samples or video image data) will serve to enhance the use of the ROV as a research tool for fishery research on the Great Lakes.

  16. Testbed for remote telepresence research

    NASA Astrophysics Data System (ADS)

    Adnan, Sarmad; Cheatham, John B., Jr.

    1992-11-01

    Teleoperated robots offer solutions to problems associated with operations in remote and unknown environments, such as space. Teleoperated robots can perform tasks related to inspection, maintenance, and retrieval. A video camera can be used to provide some assistance in teleoperations, but for fine manipulation and control, a telepresence system that gives the operator a sense of actually being at the remote location is more desirable. A telepresence system comprised of a head-tracking stereo camera system, a kinematically redundant arm, and an omnidirectional mobile robot has been developed at the mechanical engineering department at Rice University. This paper describes the design and implementation of this system, its control hardware, and software. The mobile omnidirectional robot has three independent degrees of freedom that permit independent control of translation and rotation, thereby simulating a free flying robot in a plane. The kinematically redundant robot arm has eight degrees of freedom that assist in obstacle and singularity avoidance. The on-board control computers permit control of the robot from the dual hand controllers via a radio modem system. A head-mounted display system provides the user with a stereo view from a pair of cameras attached to the mobile robotics system. The head tracking camera system moves stereo cameras mounted on a three degree of freedom platform to coordinate with the operator's head movements. This telepresence system provides a framework for research in remote telepresence, and teleoperations for space.

  17. Telemedicine in emergency evaluation of acute stroke: interrater agreement in remote video examination with a novel multimedia system.

    PubMed

    Handschu, René; Littmann, Rebekka; Reulbach, Udo; Gaul, Charly; Heckmann, Josef G; Neundörfer, Bernhard; Scibor, Mateusz

    2003-12-01

    In acute stroke care, rapid but careful evaluation of patients is mandatory but requires an experienced stroke neurologist. Telemedicine offers the possibility of bringing such expertise quickly to more patients. This study tested for the first time whether remote video examination is feasible and reliable when applied in emergency stroke care using the National Institutes of Health Stroke Scale (NIHSS). We used a novel multimedia telesupport system for transfer of real-time video sequences and audio data. The remote examiner could direct the set-top camera and zoom from distant overviews to close-ups from the personal computer in his office. Acute stroke patients admitted to our stroke unit were examined on admission in the emergency room. Standardized examination was performed by use of the NIHSS (German version) via telemedicine and compared with bedside application. In this pilot study, 41 patients were examined. Total examination time was 11.4 minutes on average (range, 8 to 18 minutes). None of the examinations had to be stopped or interrupted for technical reasons, although minor problems (brightness, audio quality) with influence on the examination process occurred in 2 sessions. Unweighted kappa coefficients ranged from 0.44 to 0.89; weighted kappa coefficients, from 0.85 to 0.99. Remote examination of acute stroke patients with a computer-based telesupport system is feasible and reliable when applied in the emergency room; interrater agreement was good to excellent in all items. For more widespread use, some problems that emerge from details like brightness, optimal camera position, and audio quality should be solved.
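
    For readers unfamiliar with the statistic, a minimal Python illustration of unweighted and weighted kappa on synthetic item ratings (not the study's data):

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical scores for one NIHSS item, rated bedside and via video.
bedside = [0, 1, 2, 0, 3, 1, 0, 2, 1, 0]
remote  = [0, 1, 2, 1, 3, 1, 0, 2, 2, 0]

print(cohen_kappa_score(bedside, remote))                       # unweighted
print(cohen_kappa_score(bedside, remote, weights="quadratic"))  # weighted
```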

  18. A feasibility study of damage detection in beams using high-speed camera (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Wan, Chao; Yuan, Fuh-Gwo

    2017-04-01

    In this paper a method for damage detection in beam structures using a high-speed camera is presented. Traditional methods of damage detection in structures typically involve contact sensors (e.g., piezoelectric sensors or accelerometers) or non-contact sensors (e.g., laser vibrometers), which can be costly and time consuming when inspecting an entire structure. With the popularity of digital cameras and the development of computer vision technology, video cameras offer viable measurement capabilities, including high spatial resolution, remote sensing, and low cost. In this study, a damage detection method based on a high-speed camera is proposed. The system setup comprises a high-speed camera and a line laser which can capture the out-of-plane displacement of a cantilever beam. The cantilever beam, with an artificial crack, was excited, and the vibration process was recorded by the camera. A methodology called motion magnification, which can amplify subtle motions in a video, is used for modal identification of the beam. A finite element model was used for validation of the proposed method. Suggestions for applications of this methodology and challenges in future work will be discussed.
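
    A minimal intensity-based (Eulerian-style) magnification sketch in Python; the abstract does not specify which variant was used, and phase-based methods are common in this literature, so this is only illustrative:

```python
import numpy as np

def magnify(frames, fps, f_lo, f_hi, alpha):
    """Amplify temporal variation in the [f_lo, f_hi] Hz band by alpha.
    frames: (T, H, W) float grayscale array sampled at fps."""
    F = np.fft.rfft(frames, axis=0)                 # per-pixel temporal FFT
    freqs = np.fft.rfftfreq(frames.shape[0], 1.0 / fps)
    keep = (freqs >= f_lo) & (freqs <= f_hi)
    band = np.zeros_like(F)
    band[keep] = F[keep]                            # temporal bandpass
    motion = np.fft.irfft(band, n=frames.shape[0], axis=0)
    return frames + alpha * motion                  # magnified sequence
```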

  19. What ring tone should be used for patient safety? Early results with a Blackberry-based telementoring safety solution.

    PubMed

    Parker, Alton; Rubinfeld, Ilan; Azuh, Ogochukwu; Blyden, Dionne; Falvo, Anthony; Horst, Mathilda; Velanovich, Vic; Patton, Pat

    2010-03-01

    Technology currently exists for the application of remote guidance in the laparoscopic operating suite. However, these solutions are costly and require extensive preparation and reconfiguration of current hardware. We propose a solution, built from existing technology, to send video of laparoscopic cholecystectomy to the Blackberry Pearl device (RIM, Waterloo, ON, Canada) for remote guidance purposes. This technology is time- and cost-efficient, as well as reliable. After identifying the critical maneuver during a laparoscopic cholecystectomy as the division of the cystic duct, we captured a segment of video before its transection. Video was captured using the laparoscopic camera input sent via a DVI2USB Solo Frame Grabber (Epiphan, Ottawa, Canada) to a video recording application on a laptop. Seven- to 40-second video clips were recorded. Each video clip was then converted to an .mp4 file and uploaded to our server, and a link was sent to the consultant via e-mail. The consultant accessed the file via Blackberry for viewing. After reviewing the video, the consultant was able to confidently comment on the operation. Clips of approximately 7 to 40 seconds from 10 laparoscopic cholecystectomies were recorded and transferred to the consultant using our method. All 10 video clips were reviewed and deemed adequate for decision making. Remote guidance for laparoscopic cholecystectomy with existing technology can be accomplished with relatively low cost and minimal setup. Additional evaluation of our methods will aim to establish reliability, validity, and accuracy. Using our method, other forms of remote guidance may be feasible, such as other laparoscopic procedures, diagnostic ultrasonography, and remote intensive care unit monitoring. In addition, this method of remote guidance may be extended to centers with smaller budgets, allowing ubiquitous use of neighboring consultants and improved safety for our patients. Copyright (c) 2010 Elsevier Inc. All rights reserved.
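
    A rough Python reconstruction of the clip-conversion step, using standard ffmpeg options; the file names and upload mechanism are placeholders, not the authors' exact workflow:

```python
import subprocess

clip_in = "cystic_duct_segment.avi"    # hypothetical captured segment
clip_out = "cystic_duct_segment.mp4"

# Re-encode to H.264 in an .mp4 container, dropping audio (-an).
subprocess.run(
    ["ffmpeg", "-i", clip_in, "-c:v", "libx264", "-an", clip_out],
    check=True,
)
# The .mp4 is then uploaded to the server and its URL e-mailed to the
# consultant for review on the handheld device.
```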

  20. Telepresence in neurosurgery: the integrated remote neurosurgical system.

    PubMed

    Kassell, N F; Downs, J H; Graves, B S

    1997-01-01

    This paper describes the Integrated Remote Neurosurgical System (IRNS), a remotely operated neurosurgical microscope with high-speed communications and a surgeon-accessible user interface. The IRNS will allow high quality bidirectional mentoring in the neurosurgical suite. The research goals of this effort are twofold: to develop a clinical system allowing a remote neurosurgeon to lend expertise to the OR-based neurosurgical team, and to provide an integrated training environment. The IRNS incorporates a generic microscope/transport model, called SuMIT (Surgical Manipulator Interface Translator). Our system is currently under test using the Zeiss MKM surgical transport, and a SuMIT interface is also being constructed for the Robotics Research 1607. The IRNS Remote Planning and Navigation Workstation incorporates surgical planning capabilities and real-time, 30 fps video from the microscope and an overhead video camera. The remote workstation includes a force-reflecting hand controller which gives the remote surgeon an intuitive way to position the microscope head. Bidirectional audio, video whiteboarding, and image archiving are also supported by the remote workstation. A simulation mode permits pre-surgical simulation, post-surgical critique, and training for surgeons without access to an actual microscope transport system. The components of the IRNS are integrated using ATM switching to provide low latency data transfer. This research, along with the more sophisticated systems that will follow, will serve as a foundation and test-bed for extending the surgeon's skills without regard to time zone or geographic boundaries.

  1. Using high-technology to enforce low-technology safety measures: the use of third-party remote video auditing and real-time feedback in healthcare.

    PubMed

    Armellino, Donna; Hussain, Erfan; Schilling, Mary Ellen; Senicola, William; Eichorn, Ann; Dlugacz, Yosef; Farber, Bruce F

    2012-01-01

    Hand hygiene is a key measure in preventing infections. We evaluated healthcare worker (HCW) hand hygiene with the use of remote video auditing with and without feedback. The study was conducted in a 17-bed intensive care unit from June 2008 through June 2010. We placed cameras with views of every sink and hand sanitizer dispenser to record the hand hygiene of HCWs. Sensors in doorways identified when an individual entered or exited. When video auditors observed an HCW performing hand hygiene upon entering/exiting, they assigned a pass; if not, a fail was assigned. Hand hygiene was measured during a 16-week period of remote video auditing without feedback and a 91-week period with feedback of data. Performance feedback was continuously displayed on electronic boards mounted within the hallways, and summary reports were delivered to supervisors by electronic mail. During the 16-week prefeedback period, hand hygiene rates were less than 10% (3,933/60,542); in the 16-week postfeedback period, the rate was 81.6% (59,627/73,080). The increase was maintained through 75 weeks at 87.9% (262,826/298,860). The data suggest that remote video auditing combined with feedback produced a significant and sustained improvement in hand hygiene.

  2. JackIn Head: Immersive Visual Telepresence System with Omnidirectional Wearable Camera.

    PubMed

    Kasahara, Shunichi; Nagai, Shohei; Rekimoto, Jun

    2017-03-01

    Sharing one's own immersive experience over the Internet is one of the ultimate goals of telepresence technology. In this paper, we present JackIn Head, a visual telepresence system featuring an omnidirectional wearable camera with image motion stabilization. Spherical omnidirectional video footage taken around the head of a local user is stabilized and then broadcast to others, allowing remote users to explore the immersive visual environment independently of the local user's head direction. We describe the system design of JackIn Head and report the evaluation results of real-time image stabilization and alleviation of cybersickness. Then, through an exploratory observation study, we investigate how individuals can remotely interact, communicate with, and assist each other with our system. We report our observation and analysis of inter-personal communication, demonstrating the effectiveness of our system in augmenting remote collaboration.

  3. Satellite markers: a simple method for ground truth car pose on stereo video

    NASA Astrophysics Data System (ADS)

    Gil, Gustavo; Savino, Giovanni; Piantini, Simone; Pierini, Marco

    2018-04-01

    Predicting the future location of other cars is a must for advanced safety systems, and the remote estimation of a car's pose, particularly its heading angle, is key to predicting its future location. Stereo vision systems make it possible to recover the 3D information of a scene. Ground truth in this specific context is referential information about the depth, shape and orientation of the objects present in the traffic scene. Creating 3D ground truth is a measurement and data fusion task usually associated with combining different kinds of sensors. The novelty of this paper is a method to generate ground truth car pose from video data alone. When the method is applied to stereo video, it also provides the extrinsic camera parameters for each camera at frame level, which are key to quantifying the performance of a stereo vision system while it is moving, because the system is subjected to undesired vibrations and/or leaning. We developed a video post-processing technique which employs a common camera calibration tool for the 3D ground truth generation. In our case study, we focus on accurate heading angle estimation of a moving car under realistic imagery. As outcomes, our satellite marker method provides accurate car pose at frame level, and the instantaneous spatial orientation of each camera at frame level.
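
    A sketch of how per-frame extrinsics can be recovered with a common calibration-style tool (OpenCV), in the spirit of the described method; the marker coordinates, image points, and intrinsic matrix below are placeholders:

```python
import cv2
import numpy as np

# Hypothetical marker positions on the car, in meters (car frame).
object_pts = np.array([[0, 0, 0], [1.2, 0, 0], [1.2, 0.8, 0], [0, 0.8, 0]],
                      dtype=np.float32)
# Their detected pixel locations in one video frame.
image_pts = np.array([[410, 300], [620, 305], [615, 450], [405, 445]],
                     dtype=np.float32)
# Assumed camera intrinsics (focal length and principal point in pixels).
K = np.array([[800, 0, 640], [0, 800, 360], [0, 0, 1]], dtype=np.float32)

ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None)
R, _ = cv2.Rodrigues(rvec)                 # rotation as a 3x3 matrix
# One conventional yaw-style angle extraction, for illustration only.
heading_deg = np.degrees(np.arctan2(R[1, 0], R[0, 0]))
```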

  4. Unconventional techniques of fundus imaging: A review

    PubMed Central

    Shanmugam, Mahesh P; Mishra, Divyansh Kailash Chandra; Rajesh, R; Madhukumar, R

    2015-01-01

    The methods of fundus examination, which include direct and indirect ophthalmoscopy and imaging with a fundus camera, are an essential part of ophthalmic practice. The use of unconventional equipment such as a hand-held video camera, a smartphone, or a nasal endoscope allows one to image the fundus, with both advantages and some disadvantages. The advantages of these instruments are their cost-effectiveness, ultra-portability, and the ability to obtain images in a remote setting and share them electronically. These instruments are unlikely to replace the fundus camera, but they would always be a useful addition to an ophthalmologist's armamentarium. PMID:26458475

  5. Development of the SEASIS instrument for SEDSAT

    NASA Technical Reports Server (NTRS)

    Maier, Mark W.

    1996-01-01

    Two SEASIS experiment objectives are key: take images that allow three-axis attitude determination, and take multi-spectral images of the earth. During the tether mission it is also desirable to capture images of the recoiling tether from the endmass perspective (which has never been observed). SEASIS must store all imagery taken during the tether mission until the earth downlink can be established. SEASIS determines attitude with a panoramic camera and performs earth observation with a telephoto lens camera. Camera video is digitized, compressed, and stored in solid state memory. These objectives are addressed through the following architectural choices: (1) A camera system using a Panoramic Annular Lens (PAL). This lens has a 360 deg azimuthal field of view by a +45 degree vertical field measured from a plane normal to the lens boresight axis. It has been shown in Mr. Mark Steadham's UAH M.S. thesis that this camera can determine three-axis attitude anytime the earth and one other recognizable celestial object (for example, the sun) are in the field of view. This will be essentially all the time during tether deployment. (2) A second camera system using a telephoto lens and filter wheel. The camera is a standard black-and-white video camera. The filters are chosen to cover the visible spectral bands of remote sensing interest. (3) A processor and mass memory arrangement linked to the cameras. Video signals from the cameras are digitized, compressed in the processor, and stored in a large static RAM bank. The processor is a multi-chip module consisting of a T800 Transputer and three Zoran floating point Digital Signal Processors. This processor module was supplied under ARPA contract by the Space Computer Corporation to demonstrate its use in space.

  6. Microgravity

    NASA Image and Video Library

    1994-07-10

    TEMPUS, an electromagnetic levitation facility that allows containerless processing of metallic samples in microgravity, first flew on the IML-2 Spacelab mission. The principle of electromagnetic levitation is used commonly in ground-based experiments to melt and then cool metallic melts below their freezing points without solidification occurring. The TEMPUS operation is controlled by its own microprocessor system, although commands may be sent remotely from the ground and real-time adjustments may be made by the crew. Two video cameras, a two-color pyrometer for measuring sample temperatures, and a fast infrared detector for monitoring solidification spikes will be mounted to the process chamber to facilitate observation and analysis. In addition, a dedicated high-resolution video camera can be attached to the TEMPUS to measure the sample volume precisely.

  7. An integrated multispectral video and environmental monitoring system for the study of coastal processes and the support of beach management operations

    NASA Astrophysics Data System (ADS)

    Ghionis, George; Trygonis, Vassilis; Karydis, Antonis; Vousdoukas, Michalis; Alexandrakis, George; Drakopoulos, Panos; Amdreadis, Olympos; Psarros, Fotis; Velegrakis, Antonis; Poulos, Serafim

    2016-04-01

    Effective beach management requires environmental assessments that are based on sound science, are cost-effective and are available to beach users and managers in an accessible, timely and transparent manner. The most common problems are: 1) The available field data are scarce and of sub-optimal spatio-temporal resolution and coverage, 2) our understanding of local beach processes needs to be improved in order to accurately model/forecast beach dynamics under a changing climate, and 3) the information provided by coastal scientists/engineers in the form of data, models and scientific interpretation is often too complicated to be of direct use by coastal managers/decision makers. A multispectral video system has been developed, consisting of one or more video cameras operating in the visible part of the spectrum, a passive near-infrared (NIR) camera, an active NIR camera system, a thermal infrared camera and a spherical video camera, coupled with innovative image processing algorithms and a telemetric system for the monitoring of coastal environmental parameters. The complete system has the capability to record, process and communicate (in quasi-real time) high frequency information on shoreline position, wave breaking zones, wave run-up, erosion hot spots along the shoreline, nearshore wave height, turbidity, underwater visibility, wind speed and direction, air and sea temperature, solar radiation, UV radiation, relative humidity, barometric pressure and rainfall. An innovative, remotely-controlled interactive visual monitoring system, based on the spherical video camera (with 360°field of view), combines the video streams from all cameras and can be used by beach managers to monitor (in real time) beach user numbers, flow activities and safety at beaches of high touristic value. The high resolution near infrared cameras permit 24-hour monitoring of beach processes, while the thermal camera provides information on beach sediment temperature and moisture, can detect upwelling in the nearshore zone, and enhances the safety of beach users. All data can be presented in real- or quasi-real time and are stored for future analysis and training/validation of coastal processes models. Acknowledgements: This work was supported by the project BEACHTOUR (11SYN-8-1466) of the Operational Program "Cooperation 2011, Competitiveness and Entrepreneurship", co-funded by the European Regional Development Fund and the Greek Ministry of Education and Religious Affairs.

  8. Efficient subtle motion detection from high-speed video for sound recovery and vibration analysis using singular value decomposition-based approach

    NASA Astrophysics Data System (ADS)

    Zhang, Dashan; Guo, Jie; Jin, Yi; Zhu, Chang'an

    2017-09-01

    High-speed cameras provide full-field measurement of structural motion and have been applied in nondestructive testing and noncontact structure monitoring. Recently, a phase-based method has been proposed to extract sound-induced vibrations from phase variations in videos, and this method provides insights into the study of remote sound surveillance and material analysis. Here, an efficient singular value decomposition (SVD)-based approach is introduced to detect sound-induced subtle motions from pixel intensities in silent high-speed videos. A high-speed camera is first used to capture a video of the vibrating objects stimulated by sound fluctuations. Then, subimages collected from a small region of the captured video are reshaped into vectors and assembled into a matrix. Orthonormal image bases (OIBs) are obtained from the SVD of this matrix; the vibration signal can then be recovered by projecting subsequent subimages onto specific OIBs. A simulation test validates the effectiveness and efficiency of the proposed method, and two experiments demonstrate potential applications in sound recovery and material analysis. Results show that the proposed method efficiently detects subtle motions from the video.
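
    Because the abstract describes the algorithm concretely, a faithful NumPy sketch is straightforward; the ROI size and the choice of basis index remain illustrative assumptions:

```python
import numpy as np

def oib_signal(frames, basis=0):
    """frames: (T, h, w) grayscale ROI cut from the high-speed video.
    Returns a T-sample motion time series from one orthonormal image basis."""
    T = frames.shape[0]
    X = frames.reshape(T, -1).astype(float)  # one vectorized subimage per row
    X -= X.mean(axis=0)                      # remove the static background
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    oib = Vt[basis]                          # one orthonormal image basis
    return X @ oib                           # projection = vibration signal

# In practice the informative basis index is chosen empirically by
# inspecting the spectra of the first few projections.
```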

  9. High-performance dual-speed CCD camera system for scientific imaging

    NASA Astrophysics Data System (ADS)

    Simpson, Raymond W.

    1996-03-01

    Traditionally, scientific camera systems were partitioned with a `camera head' containing the CCD and its support circuitry and a camera controller, which provided analog to digital conversion, timing, control, computer interfacing, and power. A new, unitized high performance scientific CCD camera with dual speed readout at 1 X 106 or 5 X 106 pixels per second, 12 bit digital gray scale, high performance thermoelectric cooling, and built in composite video output is described. This camera provides all digital, analog, and cooling functions in a single compact unit. The new system incorporates the A/C converter, timing, control and computer interfacing in the camera, with the power supply remaining a separate remote unit. A 100 Mbyte/second serial link transfers data over copper or fiber media to a variety of host computers, including Sun, SGI, SCSI, PCI, EISA, and Apple Macintosh. Having all the digital and analog functions in the camera made it possible to modify this system for the Woods Hole Oceanographic Institution for use on a remote controlled submersible vehicle. The oceanographic version achieves 16 bit dynamic range at 1.5 X 105 pixels/second, can be operated at depths of 3 kilometers, and transfers data to the surface via a real time fiber optic link.

  10. Wide-Field-of-View, High-Resolution, Stereoscopic Imager

    NASA Technical Reports Server (NTRS)

    Prechtl, Eric F.; Sedwick, Raymond J.

    2010-01-01

    A device combines video feeds from multiple cameras to provide wide-field-of-view, high-resolution, stereoscopic video to the user. The prototype under development consists of two camera assemblies, one for each eye. One of these assemblies incorporates a mounting structure with multiple cameras attached at offset angles. The video signals from the cameras are fed to a central processing platform where each frame is color processed and mapped into a single contiguous wide-field-of-view image. Because the resolution of most display devices is typically smaller than the processed map, a cropped portion of the video feed is output to the display device. The positioning of the cropped window will likely be controlled through the use of a head tracking device, allowing the user to turn his or her head side-to-side or up and down to view different portions of the captured image. There are multiple options for the display of the stereoscopic image: head-mounted displays are one likely implementation, and 3D projection technologies are another option under consideration. The technology can be adapted in a multitude of ways. The computing platform is scalable, such that the number, resolution, and sensitivity of the cameras can be leveraged to improve image resolution and field of view. Miniaturization efforts can be pursued to shrink the package down for better mobility. Power savings studies can be performed to enable unattended, remote sensing packages. Image compression and transmission technologies can be incorporated to enable an improved telepresence experience.
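
    A minimal sketch of the head-tracked cropping step, assuming an equirectangular-style panoramic map and a yaw angle supplied by the head tracker; the sizes and tracker interface are assumptions:

```python
import numpy as np

def crop_window(panorama, yaw_deg, out_w=1280):
    """Cut a display-sized window from the wide panoramic map, centered
    on the user's yaw, with horizontal wraparound at the seam."""
    h, pano_w = panorama.shape[:2]
    center = int((yaw_deg % 360.0) / 360.0 * pano_w)
    cols = (np.arange(out_w) + center - out_w // 2) % pano_w
    return panorama[:, cols]   # fancy indexing handles the wraparound
```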

  11. Video-based Mobile Mapping System Using Smartphones

    NASA Astrophysics Data System (ADS)

    Al-Hamad, A.; Moussa, A.; El-Sheimy, N.

    2014-11-01

    The last two decades have witnessed a huge growth in the demand for geo-spatial data. This demand has encouraged researchers around the world to develop new algorithms and design new mapping systems in order to obtain reliable sources for geo-spatial data. Mobile Mapping Systems (MMS) are one of the main sources of mapping and Geographic Information Systems (GIS) data. MMS integrate various remote sensing sensors, such as cameras and LiDAR, along with navigation sensors to provide the 3D coordinates of points of interest from a moving platform (e.g., cars, airplanes). Although MMS can provide accurate mapping solutions for different GIS applications, the cost of these systems is not affordable for many users, and only large-scale companies and institutions can benefit from them. The main objective of this paper is to propose a new low-cost MMS with reasonable accuracy using the sensors available in smartphones, including the video camera. Using the smartphone video camera, instead of capturing individual images, makes the system easier for non-professional users, since the system automatically extracts the highly overlapping frames out of the video without user intervention. Results of the proposed system are presented which demonstrate the effect of the number of images used on the mapping solution. In addition, the accuracy of the mapping results obtained from capturing a video is compared to the results obtained from using separately captured images instead of video.
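
    A hedged sketch of the automatic frame-thinning idea; a simple mean-absolute-difference trigger stands in here for whatever overlap criterion the authors actually used:

```python
import cv2

def extract_frames(video_path, diff_threshold=12.0):
    """Keep a frame whenever the scene has changed enough since the
    last kept frame, yielding highly overlapping mapping frames."""
    cap = cv2.VideoCapture(video_path)
    keyframes, last = [], None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if last is None or cv2.absdiff(gray, last).mean() > diff_threshold:
            keyframes.append(frame)    # enough apparent motion: keep it
            last = gray
    cap.release()
    return keyframes
```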

  12. A preliminary study to estimate contact rates between free-roaming domestic dogs using novel miniature cameras.

    PubMed

    Bombara, Courtenay B; Dürr, Salome; Machovsky-Capuska, Gabriel E; Jones, Peter W; Ward, Michael P

    2017-01-01

    Information on contacts between individuals within a population is crucial to inform disease control strategies, via parameterisation of disease spread models. In this study we investigated the use of dog-borne video cameras, in conjunction with global positioning system (GPS) loggers, to both characterise dog-to-dog contacts and estimate contact rates. We customized miniaturised video cameras, enclosed within 3D-printed plastic cases, and attached these to nylon dog collars. Using two 3400 mAh NCR Li-ion batteries, the cameras could record a maximum of 22 hr of continuous video footage. Together with a GPS logger, collars were attached to six free-roaming domestic dogs (FRDDs) in two remote Indigenous communities in northern Australia. We recorded a total of 97 hr of video footage, ranging from 4.5 to 22 hr (mean 19.1) per dog, and observed a wide range of social behaviours. The majority (69%) of all observed interactions between community dogs involved direct physical contact. Direct contact behaviours included sniffing, licking, mouthing and play fighting. No contacts appeared to be aggressive, although multiple teeth-baring incidents were observed during play fights. We identified a total of 153 contacts, equating to 8 to 147 contacts per dog per 24 hr, from the videos of the five dogs with camera data that could be analysed. These contacts were attributed to 42 unique dogs (range 1 to 19 per video) which could be identified (based on colour patterns and markings). Most dog activity was observed in urban (houses and roads) environments, but contacts were more common in bushland and beach environments. A variety of foraging behaviours were observed, including scavenging through rubbish and rolling on dead animal carcasses. Identified food consumed included chicken, raw bones, animal carcasses, rubbish, grass and cheese. For characterising contacts between FRDDs, several benefits of analysing videos compared to GPS fixes alone were identified in this study, including visualisation of the nature of the contact between two dogs and inclusion of a greater number of dogs in the study (which do not need to be wearing video or GPS collars). Limitations identified included visualisation of contacts only during daylight hours; the camera lens being obscured on occasion by the dog's mandible or the dog resting on the camera; an insufficiently wide viewing angle (36°); battery life and robustness of the deployments; high deployment costs; and analysis of large volumes of often unsteady video footage. This study demonstrates that dog-borne video cameras are a feasible technology for estimating and characterising contacts between FRDDs. Modifying camera specifications and developing new analytical methods will improve the applicability of this technology for monitoring FRDD populations, providing insights into dog-to-dog contacts and therefore how disease might spread within these populations.

  13. The Use of Smart Glasses for Surgical Video Streaming.

    PubMed

    Hiranaka, Takafumi; Nakanishi, Yuta; Fujishiro, Takaaki; Hida, Yuichi; Tsubosaka, Masanori; Shibata, Yosaku; Okimura, Kenjiro; Uemoto, Harunobu

    2017-04-01

    Observation of surgical procedures performed by experts is extremely important for acquisition and improvement of surgical skills. Smart glasses are small computers, which comprise a head-mounted monitor and video camera, and can be connected to the internet. They can be used for remote observation of surgeries by video streaming. Although Google Glass is the most commonly used smart glasses for medical purposes, it is still unavailable commercially and has some limitations. This article reports the use of a different type of smart glasses, InfoLinker, for surgical video streaming. InfoLinker has been commercially available in Japan for industrial purposes for more than 2 years. It is connected to a video server via wireless internet directly, and streaming video can be seen anywhere an internet connection is available. We have attempted live video streaming of knee arthroplasty operations that were viewed at several different locations, including foreign countries, on a common web browser. Although the quality of video images depended on the resolution and dynamic range of the video camera, speed of internet connection, and the wearer's attention to minimize image shaking, video streaming could be easily performed throughout the procedure. The wearer could confirm the quality of the video as the video was being shot by the head-mounted display. The time and cost for observation of surgical procedures can be reduced by InfoLinker, and further improvement of hardware as well as the wearer's video shooting technique is expected. We believe that this can be used in other medical settings.

  14. The Short Wave Aerostat-Mounted Imager (SWAMI): A novel platform for acquiring remotely sensed data from a tethered balloon

    USGS Publications Warehouse

    Vierling, L.A.; Fersdahl, M.; Chen, X.; Li, Z.; Zimmerman, P.

    2006-01-01

    We describe a new remote sensing system called the Short Wave Aerostat-Mounted Imager (SWAMI). The SWAMI is designed to acquire co-located video imagery and hyperspectral data to study basic remote sensing questions and to link landscape level trace gas fluxes with spatially and temporally appropriate spectral observations. The SWAMI can fly at altitudes up to 2 km above ground level to bridge the spatial gap between radiometric measurements collected near the surface and those acquired by other aircraft or satellites. The SWAMI platform consists of a dual channel hyperspectral spectroradiometer, video camera, GPS, thermal infrared sensor, and several meteorological and control sensors. All SWAMI functions (e.g. data acquisition and sensor pointing) can be controlled from the ground via wireless transmission. Sample data from the sampling platform are presented, along with several potential scientific applications of SWAMI data.

  15. KSC-04pd1226

    NASA Image and Video Library

    2004-05-19

    KENNEDY SPACE CENTER, FLA. -- Johnson Controls operator Rick Wetherington checks out one of the recently acquired Contraves-Goerz Kineto Tracking Mounts (KTM). There are 10 KTMs certified for use on the Eastern Range. The KTM, which is trailer-mounted with an electric drive tracking mount, includes a two-camera, camera control unit that will be used during launches. The KTM is designed for remotely controlled operations and offers a combination of film, shuttered and high-speed digital video, and FLIR cameras configured with 20-inch to 150-inch focal length lenses. The KTMs are generally placed in the field and checked out the day before a launch and manned 3 hours prior to liftoff.

  16. KSC-04pd1220

    NASA Image and Video Library

    2004-05-19

    KENNEDY SPACE CENTER, FLA. -- Johnson Controls operator Kenny Allen works on the recently acquired Contraves-Goerz Kineto Tracking Mount (KTM). Trailer-mounted with a center console/seat and electric drive tracking mount, the KTM includes a two-camera, camera control unit that will be used during launches. The KTM is designed for remotely controlled operations and offers a combination of film, shuttered and high-speed digital video, and FLIR cameras configured with 20-inch to 150-inch focal length lenses. The KTMs are generally placed in the field and checked out the day before a launch and manned 3 hours prior to liftoff. There are 10 KTMs certified for use on the Eastern Range.

  17. KSC-04pd1219

    NASA Image and Video Library

    2004-05-19

    KENNEDY SPACE CENTER, FLA. -- Johnson Controls operator Kenny Allen works on the recently acquired Contraves-Goerz Kineto Tracking Mount (KTM). Trailer-mounted with a center console/seat and electric drive tracking mount, the KTM includes a two-camera, camera control unit that will be used during launches. The KTM is designed for remotely controlled operations and offers a combination of film, shuttered and high-speed digital video, and FLIR cameras configured with 20-inch to 150-inch focal length lenses. The KTMs are generally placed in the field and checked out the day before a launch and manned 3 hours prior to liftoff. There are 10 KTMs certified for use on the Eastern Range.

  18. KSC-04pd1227

    NASA Image and Video Library

    2004-05-19

    KENNEDY SPACE CENTER, FLA. -- Johnson Controls operator Kenny Allen checks out one of the recently acquired Contraves-Goerz Kineto Tracking Mounts (KTM). There are 10 KTMs certified for use on the Eastern Range. The KTM, which is trailer-mounted with an electric drive tracking mount, includes a two-camera, camera control unit that will be used during launches. The KTM is designed for remotely controlled operations and offers a combination of film, shuttered and high-speed digital video, and FLIR cameras configured with 20-inch to 150-inch focal length lenses. The KTMs are generally placed in the field and checked out the day before a launch and manned 3 hours prior to liftoff.

  19. Use of an UROV to develop 3-D optical models of submarine environments

    NASA Astrophysics Data System (ADS)

    Null, W. D.; Landry, B. J.

    2017-12-01

    The ability to rapidly obtain high-fidelity bathymetry is crucial for a broad range of engineering, scientific, and defense applications, from bridge scour, bedform morphodynamics, and coral reef health to unexploded ordnance detection and monitoring. The present work introduces the use of an Underwater Remotely Operated Vehicle (UROV) to develop 3-D optical models of submarine environments. The UROV used a Raspberry Pi camera mounted on a small servo that allowed pitch control. Prior to video data collection, in situ camera calibration was conducted with the system. Multiple image frames were then extracted from the underwater video for 3D reconstruction using Structure from Motion (SfM). This system provides a simple and cost-effective solution for obtaining detailed bathymetry in optically clear submarine environments.
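
    As an illustration of the SfM step, a minimal two-view reconstruction in OpenCV, assuming grayscale frames and an intrinsic matrix K from the in situ calibration; this stands in for a full multi-view pipeline:

```python
import cv2
import numpy as np

def two_view_points(img1, img2, K):
    """img1, img2: grayscale uint8 frames. Returns Nx3 points up to scale."""
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(img1, None)
    k2, d2 = orb.detectAndCompute(img2, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    p1 = np.float32([k1[m.queryIdx].pt for m in matches])
    p2 = np.float32([k2[m.trainIdx].pt for m in matches])
    E, inliers = cv2.findEssentialMat(p1, p2, K, method=cv2.RANSAC)
    _, R, t, _ = cv2.recoverPose(E, p1, p2, K, mask=inliers)
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])   # first camera
    P2 = K @ np.hstack([R, t])                          # second camera
    X = cv2.triangulatePoints(P1, P2, p1.T, p2.T)       # homogeneous 4xN
    return (X[:3] / X[3]).T
```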

  20. Telepathology. Long-distance diagnosis.

    PubMed

    Weinstein, R S; Bloom, K J; Rozek, L S

    1989-04-01

    Telepathology is defined as the practice of pathology at a distance, by visualizing an image on a video monitor rather than viewing a specimen directly through a microscope. Components of a telepathology system include the following: (1) a workstation equipped with a high-resolution video camera attached to a remote-controlled light microscope; (2) a pathologist workstation incorporating controls for manipulating the robotic microscope as well as a high-resolution video monitor; and (3) a telecommunications link. Progress has been made in designing and constructing telepathology workstations and fully motorized, computer-controlled light microscopes suitable for telepathology. In addition, components such as video signal digital encoders and decoders that produce remarkably stable, high-color fidelity, and high-resolution images have been incorporated into the workstations. Resolution requirements for the video microscopy component of telepathology have been formally examined in receiver operator characteristic (ROC) curve analyses. Test-of-concept demonstrations have been completed with the use of geostationary satellites as the broadband communication linkages for 750-line resolution video. Potential benefits of telepathology include providing a means of conveniently delivering pathology services in real-time to remote sites or underserviced areas, time-sharing of pathologists' services by multiple institutions, and increasing accessibility to specialty pathologists.

  1. Semi-automated camera trap image processing for the detection of ungulate fence crossing events.

    PubMed

    Janzen, Michael; Visser, Kaitlyn; Visscher, Darcy; MacLeod, Ian; Vujnovic, Dragomir; Vujnovic, Ksenija

    2017-09-27

    Remote cameras are an increasingly important tool for ecological research. While remote camera traps collect field data with minimal human attention, the images they collect require post-processing and characterization before they can be ecologically and statistically analyzed, requiring the input of substantial time and money from researchers. The need for post-processing is due, in part, to a high incidence of non-target images. We developed a stand-alone semi-automated computer program to aid in image processing, categorization, and data reduction by employing background subtraction and histogram rules. Unlike previous work that uses video as input, our program uses still camera trap images. The program was developed for an ungulate fence crossing project and tested against an image dataset which had been previously processed by a human operator. Our program placed images into categories representing the confidence that a particular sequence of images contained a fence crossing event. This resulted in a 54.8% reduction in images that required further human operator characterization, while retaining 72.6% of the known fence crossing events. This program can provide researchers using remote camera data with the ability to reduce the time and cost required for image post-processing and characterization. Further, we discuss how this procedure might be generalized to situations not specifically related to animal use of linear features.
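
    A minimal sketch of the background-subtraction idea on a still-image sequence; OpenCV's MOG2 model and the foreground-area threshold are our stand-ins for the program's specific rules:

```python
import cv2

def flag_candidates(image_paths, min_fg_fraction=0.01):
    """Flag images in a camera-trap sequence with enough changed pixels
    to plausibly contain a target animal."""
    subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=False)
    flagged = []
    for path in image_paths:
        img = cv2.imread(path)
        mask = subtractor.apply(img)    # update the model with each still
        if (mask > 0).mean() > min_fg_fraction:
            flagged.append(path)        # candidate for human review
    return flagged
```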

  2. Television image compression and small animal remote monitoring

    NASA Technical Reports Server (NTRS)

    Haines, Richard F.; Jackson, Robert W.

    1990-01-01

    It was shown that subjects can reliably discriminate a difference in video image quality (using a specific commercial product) for image compression levels ranging from 384 kbits per second to 1536 kbits per second. However, their discriminations are significantly influenced by whether or not the TV camera is stable or moving and whether or not the animals are quiescent or active, which is correlated with illumination level (daylight versus night illumination, respectively). The highest video rate used here was 1.54 megabits per second, which is about 18 percent of the so-called normal TV resolution of 8.4 MHz. Since this video rate was judged to be acceptable by 27 of the 34 subjects (79 percent) for monitoring the general health and status of small animals within their illuminated (lights-on) cages, regardless of whether the camera was stable or moved, it suggests that an immediate Space Station Freedom to ground bandwidth reduction of about 80 percent can be tolerated without a significant loss in general monitoring capability. Another general conclusion is that the present methodology appears to be effective in quantifying visual judgments of video image quality.

  3. Group tele-immersion:enabling natural interactions between groups at distant sites.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Christine L.; Stewart, Corbin; Nashel, Andrew

    2005-08-01

    We present techniques and a system for synthesizing views for video teleconferencing between small groups. In place of replicating one-to-one systems for each pair of users, we create a single unified display of the remote group. Instead of performing dense 3D scene computation, we use more cameras and trade off storage and hardware for computation. While it is expensive to directly capture a scene from all possible viewpoints, we have observed that the participants' viewpoints usually remain at a constant height (eye level) during video teleconferencing. Therefore, we can restrict the possible viewpoints to be within a virtual plane without sacrificing much of the realism, and in doing so we significantly reduce the number of required cameras. Based on this observation, we have developed a technique that uses light-field style rendering to guarantee the quality of the synthesized views, using a linear array of cameras with a life-sized, projected display. Our full-duplex prototype system between Sandia National Laboratories, California and the University of North Carolina at Chapel Hill has been able to synthesize photo-realistic views at interactive rates, and has been used to video conference during regular meetings between the sites.
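
    A minimal sketch of light-field-style view synthesis with a linear camera array: blend the two cameras nearest the requested eye position. The camera alignment and spacing here are idealized assumptions, far simpler than the actual rendering:

```python
import numpy as np

def synthesize_view(camera_images, camera_xs, eye_x):
    """camera_images: list of aligned frames from a linear array;
    camera_xs: camera positions along the array; eye_x: viewer position."""
    xs = np.asarray(camera_xs, dtype=float)
    i = np.clip(np.searchsorted(xs, eye_x), 1, len(xs) - 1)
    w = (eye_x - xs[i - 1]) / (xs[i] - xs[i - 1])   # blend weight in [0, 1]
    a = camera_images[i - 1].astype(float)
    b = camera_images[i].astype(float)
    return ((1 - w) * a + w * b).astype(np.uint8)   # interpolated view
```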

  4. Automated Video Quality Assessment for Deep-Sea Video

    NASA Astrophysics Data System (ADS)

    Pirenne, B.; Hoeberechts, M.; Kalmbach, A.; Sadhu, T.; Branzan Albu, A.; Glotin, H.; Jeffries, M. A.; Bui, A. O. V.

    2015-12-01

    Video provides a rich source of data for geophysical analysis, often supplying detailed information about the environment when other instruments may not. This is especially true of deep-sea environments, where direct visual observations cannot be made. As computer vision techniques improve and volumes of video data increase, automated video analysis is emerging as a practical alternative to labor-intensive manual analysis. Automated techniques can be much more sensitive to video quality than their manual counterparts, so performing quality assessment before doing full analysis is critical to producing valid results. Ocean Networks Canada (ONC), an initiative of the University of Victoria, operates cabled ocean observatories that supply continuous power and Internet connectivity to a broad suite of subsea instruments from the coast to the deep sea, including video and still cameras. This network of ocean observatories has produced almost 20,000 hours of video (about 38 hours are recorded each day) and an additional 8,000 hours of logs from remotely operated vehicle (ROV) dives. We begin by surveying some ways in which deep-sea video poses challenges for automated analysis, including: 1. Non-uniform lighting: single, directional light sources produce uneven luminance distributions and shadows, and remotely operated lighting equipment is also susceptible to technical failures. 2. Particulate noise: turbidity and marine snow are often present in underwater video; particles in the water column can have sharper focus and higher contrast than the objects of interest due to their proximity to the light source, and can also influence the camera's autofocus and auto white-balance routines. 3. Color distortion (low contrast): the rate of absorption of light in water varies by wavelength, and is higher overall than in air, altering apparent colors and lowering the contrast of objects at a distance. We also describe measures under development at ONC for detecting and mitigating these effects. These steps include filtering out unusable data, color and luminance balancing, and choosing the most appropriate image descriptors. We apply these techniques to generate automated quality assessment of video data and illustrate their utility with an example application in which we perform vision-based substrate classification.
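
    A minimal sketch of per-frame usability gating of the kind described (filtering out unusable data); the luminance and contrast thresholds are illustrative assumptions:

```python
import numpy as np

def frame_usable(gray, lum_range=(20, 235), min_rms_contrast=10.0):
    """gray: 2D uint8 frame. Reject frames that are too dark, blown out
    by the lights, or too flat (e.g., heavy turbidity) to analyze."""
    mean_lum = gray.mean()
    rms_contrast = gray.std()     # standard deviation as RMS contrast
    return (lum_range[0] < mean_lum < lum_range[1]
            and rms_contrast > min_rms_contrast)
```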

  5. Towards fish-eye camera based in-home activity assessment.

    PubMed

    Bas, Erhan; Erdogmus, Deniz; Ozertem, Umut; Pavel, Misha

    2008-01-01

    Indoor localization, activity classification, and behavioral modeling are increasingly important for surveillance applications including independent living and remote health monitoring. In this paper, we study the suitability of fish-eye cameras (high-resolution CCD sensors with very-wide-angle lenses) for monitoring people in indoor environments. The results indicate that these sensors are very useful for automatic activity monitoring and people tracking. We identify practical and mathematical problems related to information extraction from these video sequences and identify future directions for solving these issues.

  6. Samara Probe For Remote Imaging

    NASA Technical Reports Server (NTRS)

    Burke, James D.

    1989-01-01

    Imaging probe descends through atmosphere of planet, obtaining images of ground surface as it travels. Released from aircraft over Earth or from spacecraft over another planet. Body and single wing shaped like samara - winged seed like those of maple trees. Rotates as it descends, providing panoramic view of terrain below. Radios image obtained by video camera to aircraft or spacecraft overhead.

  7. Remote sensing and implications for variable-rate application using agricultural aircraft

    NASA Astrophysics Data System (ADS)

    Thomson, Steven J.; Smith, Lowrey A.; Ray, Jeffrey D.; Zimba, Paul V.

    2004-01-01

    Aircraft routinely used for agricultural spray application are finding utility for remote sensing. Data obtained from remote sensing can be used for prescription application of pesticides, fertilizers, cotton growth regulators, and water (the latter with the assistance of hyperspectral indices and thermal imaging). Digital video was used to detect weeds in early cotton, and preliminary data were obtained to see if nitrogen status could be detected in early soybeans. Weeds were differentiable from early cotton at very low altitude (65 m) with the aid of supervised classification algorithms in the ENVI image analysis software. The camera was flown at very low altitude to obtain acceptable pixel resolution. Nitrogen status was not detectable by statistical analysis of digital numbers (DNs) obtained from images, but soybean cultivar differences were statistically discernible (F=26, p=0.01). Spectroradiometer data are being analyzed to identify narrow spectral bands that might aid in selecting camera filters for determination of plant nitrogen status. Multiple camera configurations are proposed to allow vegetative indices to be developed more readily. Both remotely sensed field images and ground data are to be used for decision-making in a proposed variable-rate application system for agricultural aircraft. For this system, prescriptions generated from digital imagery and data will be coupled with GPS-based swath guidance and programmable flow control.

  8. Laser-Directed Ranging System Implementing Single Camera System for Telerobotics Applications

    NASA Technical Reports Server (NTRS)

    Wells, Dennis L. (Inventor); Li, Larry C. (Inventor); Cox, Brian J. (Inventor)

    1995-01-01

    The invention relates generally to systems for determining the range of an object from a reference point and, in one embodiment, to laser-directed ranging systems useful in telerobotics applications. Digital processing techniques are employed which minimize the complexity and cost of the hardware and software for processing range calculations, thereby enhancing the commercial attractiveness of the system for use in relatively low-cost robotic systems. The system includes a video camera for generating images of the target, image digitizing circuitry, and an associated frame grabber circuit. The circuit first captures one of the pairs of stereo video images of the target, and then captures a second video image of the target as it is partly illuminated by the light beam, suitably generated by a laser. The two video images, taken sufficiently close together in time to minimize camera and scene motion, are converted to digital images and then compared. Common pixels are eliminated, leaving only a digital image of the laser-illuminated spot on the target. The centroid of the laser-illuminated spot is then obtained and compared with a reference point, predetermined by design or calibration, which represents the coordinate at the focal plane of the laser illumination at infinite range. Preferably, the laser and camera are mounted on a servo-driven platform which can be oriented to direct the camera and the laser toward the target. In one embodiment the platform is positioned in response to movement of the operator's head; position and orientation sensors are used to monitor head movement. The disparity between the digital image of the laser spot and the reference point is calculated for determining range to the target. Commercial applications for the system relate to active range-determination systems, such as those used with robotic systems in which it is necessary to determine the range to a workpiece or object to be grasped or acted upon by a robot arm end-effector in response to commands generated by an operator. In one embodiment, the system provides a real-time image of the target for the operator as the robot approaches the object. The system is also adapted for use in virtual reality systems in which a remote object or workpiece is to be acted upon by a remote robot arm or other mechanism controlled by an operator.

  9. UrtheCast Second-Generation Earth Observation Sensors

    NASA Astrophysics Data System (ADS)

    Beckett, K.

    2015-04-01

    UrtheCast's second-generation state-of-the-art Earth Observation (EO) remote sensing platform will be hosted on the NASA segment of the International Space Station (ISS). This platform comprises a high-resolution dual-mode (pushbroom and video) optical camera and a dual-band (X and L) Synthetic Aperture Radar (SAR) instrument. These new sensors will complement the first-generation medium-resolution pushbroom and high-definition video cameras that were mounted on the Russian segment of the ISS in early 2014. The new cameras are expected to be launched to the ISS in late 2017 via the Space Exploration Technologies Corporation Dragon spacecraft. The Canadarm will then be used to install the remote sensing platform onto a Common Berthing Mechanism (CBM) hatch on Node 3, allowing the sensor electronics to be accessible from inside the station, thus limiting their exposure to the space environment and allowing for future capability upgrades. The UrtheCast second-generation system will be able to take full advantage of the strengths that each of the individual sensors offers, such that the data exploitation capability of the combined sensors is significantly greater than that of either sensor alone. This represents a truly novel platform that will lead to significant advances in many other Earth Observation applications such as environmental monitoring, energy and natural resources management, and humanitarian response, with data availability anticipated to begin after commissioning is completed in early 2018.

  10. Implementation of smart phone video plethysmography and dependence on lighting parameters.

    PubMed

    Fletcher, Richard Ribón; Chamberlain, Daniel; Paggi, Nicholas; Deng, Xinyue

    2015-08-01

    The remote measurement of heart rate (HR) and heart rate variability (HRV) via a digital camera (video plethysmography) has emerged as an area of great interest for biomedical and health applications. While a few implementations of video plethysmography have been demonstrated on smart phones under controlled lighting conditions, it has been challenging to create a general scalable solution due to the large variability in smart phone hardware performance, software architecture, and the variable response to lighting parameters. In this context, we present a self-contained smart phone implementation of video plethysmography for Android OS, which employs both stochastic and deterministic algorithms, and we use this to study the effect of lighting parameters (illuminance, color spectrum) on the accuracy of the remote HR measurement. Using two different phone models, we present the median HR error for five different video plethysmography algorithms under three different types of lighting (natural sunlight, compact fluorescent, and halogen incandescent) and variations in brightness. For most algorithms, we found the optimum light brightness to be in the range 1000-4000 lux and the optimum lighting types to be compact fluorescent and natural light. Moderate errors were found for most algorithms with some devices under conditions of low brightness (<500 lux) and high brightness (>4000 lux). Our analysis also identified camera frame-rate jitter as a major source of variability and error across different phone models, but this can be largely corrected through non-linear resampling. Based on testing with six human subjects, our real-time Android implementation successfully predicted the measured HR with a median error of -0.31 bpm and an inter-quartile range of 2.1 bpm.
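
    The frame-rate jitter correction mentioned above lends itself to a short sketch: per-frame mean green values with jittery timestamps are resampled onto a uniform time grid, and HR is read from the dominant spectral peak in the physiological band. This is an illustrative reconstruction, not the authors' code; linear interpolation stands in here for the paper's non-linear resampling, and all names are hypothetical.

    ```python
    # Sketch: resample a jittery camera PPG signal, then estimate heart rate.
    import numpy as np

    def estimate_hr_bpm(timestamps_s, green_means, fs=30.0, hr_band=(0.7, 3.0)):
        """timestamps_s: jittery frame capture times; green_means: mean green
        value of the skin ROI per frame. Returns heart rate in beats/minute."""
        t = np.asarray(timestamps_s, float)
        g = np.asarray(green_means, float)
        t_uniform = np.arange(t[0], t[-1], 1.0 / fs)
        sig = np.interp(t_uniform, t, g)        # resample onto a uniform grid
        sig -= sig.mean()
        spectrum = np.abs(np.fft.rfft(sig * np.hanning(sig.size)))
        freqs = np.fft.rfftfreq(sig.size, d=1.0 / fs)
        band = (freqs >= hr_band[0]) & (freqs <= hr_band[1])  # 42-180 bpm
        return 60.0 * freqs[band][np.argmax(spectrum[band])]
    ```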

  11. KSC-04pd1223

    NASA Image and Video Library

    2004-05-19

    KENNEDY SPACE CENTER, FLA. -- Johnson Controls operator Kenny Allen makes adjustments on one of the recently acquired Contraves-Goerz Kineto Tracking Mounts (KTM). There are 10 KTMs certified for use on the Eastern Range. The KTM, which is trailer-mounted with a center console/seat and electric drive tracking mount, includes a two-camera, camera control unit that will be used during launches. The KTM is designed for remotely controlled operations and offers a combination of film, shuttered and high-speed digital video, and FLIR cameras configured with 20-inch to 150-inch focal length lenses. The KTMs are generally placed in the field and checked out the day before a launch and manned 3 hours prior to liftoff.

  12. KSC-04pd1221

    NASA Image and Video Library

    2004-05-19

    KENNEDY SPACE CENTER, FLA. -- Johnson Controls operators Rick Worthington (left) and Kenny Allen work on one of the recently acquired Contraves-Goerz Kineto Tracking Mounts (KTM). There are 10 KTMs certified for use on the Eastern Range. The KTM, which is trailer-mounted with a center console/seat and electric drive tracking mount, includes a two-camera, camera control unit that will be used during launches. The KTM is designed for remotely controlled operations and offers a combination of film, shuttered and high-speed digital video, and FLIR cameras configured with 20-inch to 150-inch focal length lenses. The KTMs are generally placed in the field and checked out the day before a launch and manned 3 hours prior to liftoff.

  13. KSC-04pd1225

    NASA Image and Video Library

    2004-05-19

    KENNEDY SPACE CENTER, FLA. -- Johnson Controls operator Kenny Allen stands in the center console area of one of the recently acquired Contraves-Goerz Kineto Tracking Mounts (KTM). There are 10 KTMs certified for use on the Eastern Range. The KTM, which is trailer-mounted with an electric-drive tracking mount, includes a two-camera, camera control unit that will be used during launches. The KTM is designed for remotely controlled operations and offers a combination of film, shuttered and high-speed digital video, and FLIR cameras configured with 20-inch to 150-inch focal length lenses. The KTMs are generally placed in the field and checked out the day before a launch and manned 3 hours prior to liftoff.

  14. KSC-04pd1224

    NASA Image and Video Library

    2004-05-19

    KENNEDY SPACE CENTER, FLA. -- Johnson Controls operator Rick Wetherington sits in the center console seat of one of the recently acquired Contraves-Goerz Kineto Tracking Mounts (KTM). There are 10 KTMs certified for use on the Eastern Range. The KTM, which is trailer-mounted with an electric drive tracking mount, includes a two-camera, camera control unit that will be used during launches. The KTM is designed for remotely controlled operations and offers a combination of film, shuttered and high-speed digital video, and FLIR cameras configured with 20-inch to 150-inch focal length lenses. The KTMs are generally placed in the field and checked out the day before a launch and manned 3 hours prior to liftoff.

  15. KSC-04pd1222

    NASA Image and Video Library

    2004-05-19

    KENNEDY SPACE CENTER, FLA. -- Johnson Controls operators Rick Wetherington (left) and Kenny Allen work on two of the recently acquired Contraves-Goerz Kineto Tracking Mounts (KTM). There are 10 KTMs certified for use on the Eastern Range. The KTM, which is trailer-mounted with a center console/seat and electric drive tracking mount, includes a two-camera, camera control unit that will be used during launches. The KTM is designed for remotely controlled operations and offers a combination of film, shuttered and high-speed digital video, and FLIR cameras configured with 20-inch to 150-inch focal length lenses. The KTMs are generally placed in the field and checked out the day before a launch and manned 3 hours prior to liftoff.

  16. Mini AERCam Inspection Robot for Human Space Missions

    NASA Technical Reports Server (NTRS)

    Fredrickson, Steven E.; Duran, Steve; Mitchell, Jennifer D.

    2004-01-01

    The Engineering Directorate of NASA Johnson Space Center has developed a nanosatellite-class free-flyer intended for future external inspection and remote viewing of human spacecraft. The Miniature Autonomous Extravehicular Robotic Camera (Mini AERCam) technology demonstration unit has been integrated into the approximate form and function of a flight system. The spherical Mini AERCam free flyer is 7.5 inches in diameter and weighs approximately 10 pounds, yet it incorporates significant additional capabilities compared to the 35 pound, 14 inch AERCam Sprint that flew as a Shuttle flight experiment in 1997. Mini AERCam hosts a full suite of miniaturized avionics, instrumentation, communications, navigation, imaging, power, and propulsion subsystems, including digital video cameras and a high resolution still image camera. The vehicle is designed for either remotely piloted operations or supervised autonomous operations including automatic stationkeeping and point-to-point maneuvering. Mini AERCam is designed to fulfill the unique requirements and constraints associated with using a free flyer to perform external inspections and remote viewing of human spacecraft operations. This paper describes the application of Mini AERCam for stand-alone spacecraft inspection, as well as for roles on teams of humans and robots conducting future space exploration missions.

  17. Measuring frequency of one-dimensional vibration with video camera using electronic rolling shutter

    NASA Astrophysics Data System (ADS)

    Zhao, Yipeng; Liu, Jinyue; Guo, Shijie; Li, Tiejun

    2018-04-01

    Cameras offer a unique capability of collecting high-density spatial data from a distant scene of interest. They can be employed as remote monitoring or inspection sensors to measure vibrating objects because of their commonplace availability, simplicity, and potentially low cost. A drawback of camera-based vibration measurement is the massive volume of data the camera generates. To reduce the data collected, a camera with an electronic rolling shutter (ERS) is applied to measure the frequency of one-dimensional vibration, even when that frequency is much higher than the camera's frame rate. Every row in an image captured by an ERS camera records the vibrating displacement at a different time. The displacements that form the vibration can be extracted by local analysis with sliding windows. This methodology is demonstrated on vibrating structures, a cantilever beam and an air compressor, to verify the validity of the proposed algorithm. Suggestions for applications of this methodology and challenges in real-world implementation are given at the end.
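
    To make the row-wise sampling idea concrete, here is a minimal sketch assuming the displacement of the vibrating edge has already been extracted from each image row: the rows form a fast-sampled 1-D time series, and an FFT yields the vibration frequency. Names are illustrative, not the paper's implementation.

    ```python
    # Sketch: one rolling-shutter frame as a fast-sampled displacement series.
    import numpy as np

    def vibration_frequency_hz(row_displacements, row_period_s):
        """row_displacements: object edge position in each row (pixels);
        row_period_s: readout interval between consecutive rows (seconds)."""
        x = np.asarray(row_displacements, float)
        x -= x.mean()
        spectrum = np.abs(np.fft.rfft(x * np.hanning(x.size)))
        freqs = np.fft.rfftfreq(x.size, d=row_period_s)
        return freqs[np.argmax(spectrum[1:]) + 1]  # skip the DC bin
    ```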

  18. A Wide-field Camera and Fully Remote Operations at the Wyoming Infrared Observatory

    NASA Astrophysics Data System (ADS)

    Findlay, Joseph R.; Kobulnicky, Henry A.; Weger, James S.; Bucher, Gerald A.; Perry, Marvin C.; Myers, Adam D.; Pierce, Michael J.; Vogel, Conrad

    2016-11-01

    Upgrades at the 2.3 meter Wyoming Infrared Observatory telescope have provided the capability for fully remote operations by a single operator from the University of Wyoming campus. A line-of-sight 300 Mbit s⁻¹ 11 GHz radio link provides high-speed internet for data transfer and remote operations that include several realtime video feeds. Uninterruptible power is ensured by a 10 kVA battery supply for critical systems and a 55 kW autostart diesel generator capable of running the entire observatory for up to a week. The construction of a new four-element prime-focus corrector with fused-silica elements allows imaging over a 40′ field of view with a new 4096×4096 pixel UV-sensitive prime-focus camera and filter wheel. A new telescope control system facilitates the remote operations model and provides 20″ rms pointing over the usable sky. Taken together, these improvements pave the way for a new generation of sky surveys supporting space-based missions and flexible-cadence observations advancing emerging astrophysical priorities such as planet detection, quasar variability, and long-term time-domain campaigns.

  19. QWIP technology for both military and civilian applications

    NASA Astrophysics Data System (ADS)

    Gunapala, Sarath D.; Kukkonen, Carl A.; Sirangelo, Mark N.; McQuiston, Barbara K.; Chehayeb, Riad; Kaufmann, M.

    2001-10-01

    Advanced thermal imaging infrared cameras have been a cost-effective and reliable method to obtain the temperature of objects. Quantum Well Infrared Photodetector (QWIP) based thermal imaging systems have advanced the state of the art and are the most sensitive commercially available thermal systems. QWIP Technologies LLC, under exclusive agreement with Caltech, is currently manufacturing the QWIP-ChipTM, a 320 × 256 element, bound-to-quasibound QWIP FPA. The camera operates within the long-wave IR band, spectrally peaked at 8.5 μm. The camera is equipped with a 32-bit floating-point digital signal processor combined with multi-tasking software, delivering a digital acquisition resolution of 12 bits at a nominal power consumption of less than 50 Watts. With a variety of video interface options, remote control capability via an RS-232 connection, and an integrated control driver circuit to support motorized zoom- and focus-compatible lenses, this camera design has excellent application in both the military and commercial sectors. In the area of remote sensing, high-performance QWIP systems can be used for high-resolution target recognition as part of a new system of airborne platforms (including UAVs). Such systems also have direct application in law enforcement, surveillance, industrial monitoring, and road hazard detection systems. This presentation will cover the current performance of the commercial QWIP cameras, conceptual platform systems, and advanced image processing for use in both military remote sensing and civilian applications currently being developed in road hazard monitoring.

  20. Telehealth: voice therapy using telecommunications technology.

    PubMed

    Mashima, Pauline A; Birkmire-Peters, Deborah P; Syms, Mark J; Holtel, Michael R; Burgess, Lawrence P A; Peters, Leslie J

    2003-11-01

    Telehealth offers the potential to meet the needs of underserved populations in remote regions. The purpose of this study was a proof-of-concept to determine whether voice therapy can be delivered effectively remotely. Treatment outcomes were evaluated for a vocal rehabilitation protocol delivered under 2 conditions: with the patient and clinician interacting within the same room (conventional group) and with the patient and clinician in separate rooms, interacting in real time via a hard-wired video camera and monitor (video teleconference group). Seventy-two patients with voice disorders served as participants. Based on evaluation by otolaryngologists, 31 participants were diagnosed with vocal nodules, 29 were diagnosed with edema, 9 were diagnosed with unilateral vocal fold paralysis, and 3 presented with vocal hyperfunction with no laryngeal pathology. Fifty-one participants (71%) completed the vocal rehabilitation protocol. Outcome measures included perceptual judgments of voice quality, acoustic analyses of voice, patient satisfaction ratings, and fiber-optic laryngoscopy. There were no differences in outcome measures between the conventional group and the remote video teleconference group. Participants in both groups showed positive changes on all outcome measures after completing the vocal rehabilitation protocol. Reasons for participants discontinuing therapy prematurely provided support for the telehealth model of service delivery.

  1. Lifting Scheme DWT Implementation in a Wireless Vision Sensor Network

    NASA Astrophysics Data System (ADS)

    Ong, Jia Jan; Ang, L.-M.; Seng, K. P.

    This paper presents a practical implementation of a Wireless Visual Sensor Network (WVSN) with DWT processing on the visual nodes. A basic WVSN consists of visual nodes that capture video and transmit it to the base-station without processing; limited network bandwidth, however, restricts real-time video streaming from remote visual nodes over the wireless link. Three layers of DWT filters are implemented to process the image captured from the camera. Once all the wavelet coefficients are produced, it is possible to transmit only the low-frequency band coefficients and reconstruct an approximate image at the base-station, reducing the power required for transmission. When necessary, transmitting all the wavelet coefficients reproduces the full detail of the image captured at the visual node. The visual node combines a CMOS camera, a Xilinx Spartan-3L FPGA, and a wireless ZigBee® network based on the Ember EM250 chip.
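
    As an illustration of the lifting idea (not the authors' FPGA implementation), the sketch below performs one lifting level of the LeGall 5/3 wavelet on a 1-D signal; applying it along the rows and then the columns of an image gives one 2-D DWT layer. Boundary handling is simplified to circular wrap via np.roll.

    ```python
    # Sketch: one LeGall 5/3 lifting level (float version, circular boundary).
    import numpy as np

    def lift_53(signal):
        x = np.asarray(signal, float)
        if x.size % 2:                       # pad to even length
            x = np.append(x, x[-1])
        even, odd = x[0::2], x[1::2]
        # Predict: detail = odd sample minus average of its even neighbours.
        d = odd - 0.5 * (even + np.roll(even, -1))
        # Update: approximation = even sample plus a quarter of nearby details.
        a = even + 0.25 * (d + np.roll(d, 1))
        return a, d   # transmit `a` alone for an approximate image;
                      # send `d` later to restore full detail
    ```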

  2. System Synchronizes Recordings from Separated Video Cameras

    NASA Technical Reports Server (NTRS)

    Nail, William; Nail, William L.; Nail, Jasper M.; Le, Doung T.

    2009-01-01

    A system of electronic hardware and software for synchronizing recordings from multiple, physically separated video cameras is being developed, primarily for use in multiple-look-angle video production. The system, the time code used in the system, and the underlying method of synchronization upon which the design of the system is based are denoted generally by the term "Geo-TimeCode(TradeMark)." The system is embodied mostly in compact, lightweight, portable units (see figure) denoted video time-code units (VTUs) - one VTU for each video camera. The system is scalable in that any number of camera recordings can be synchronized. The estimated retail price per unit would be about $350 (in 2006 dollars). The need for this or another synchronization system external to video cameras arises because most video cameras do not include internal means for maintaining synchronization with other video cameras. Unlike prior video-camera-synchronization systems, this system does not depend on continuous cable or radio links between cameras (however, it does depend on occasional cable links lasting a few seconds). Also, whereas the time codes used in prior video-camera-synchronization systems typically repeat after 24 hours, the time code used in this system does not repeat for slightly more than 136 years; hence, this system is much better suited for long-term deployment of multiple cameras.

  3. Geometric database maintenance using CCTV cameras and overlay graphics

    NASA Astrophysics Data System (ADS)

    Oxenberg, Sheldon C.; Landell, B. Patrick; Kan, Edwin

    1988-01-01

    An interactive graphics system using closed circuit television (CCTV) cameras for remote verification and maintenance of a geometric world model database has been demonstrated in GE's telerobotics testbed. The database provides geometric models and locations of objects viewed by CCTV cameras and manipulated by telerobots. To update the database, an operator uses the interactive graphics system to superimpose a wireframe line drawing of an object with known dimensions on a live video scene containing that object. The methodology used is multipoint positioning to easily superimpose a wireframe graphic on the CCTV image of an object in the work scene. An enhanced version of GE's interactive graphics system will provide the object designation function for the operator control station of the Jet Propulsion Laboratory's telerobot demonstration system.

  4. General theory of remote gaze estimation using the pupil center and corneal reflections.

    PubMed

    Guestrin, Elias Daniel; Eizenman, Moshe

    2006-06-01

    This paper presents a general theory for the remote estimation of the point-of-gaze (POG) from the coordinates of the centers of the pupil and corneal reflections. Corneal reflections are produced by light sources that illuminate the eye and the centers of the pupil and corneal reflections are estimated in video images from one or more cameras. The general theory covers the full range of possible system configurations. Using one camera and one light source, the POG can be estimated only if the head is completely stationary. Using one camera and multiple light sources, the POG can be estimated with free head movements, following the completion of a multiple-point calibration procedure. When multiple cameras and multiple light sources are used, the POG can be estimated following a simple one-point calibration procedure. Experimental and simulation results suggest that the main sources of gaze estimation errors are the discrepancy between the shape of real corneas and the spherical corneal shape assumed in the general theory, and the noise in the estimation of the centers of the pupil and corneal reflections. A detailed example of a system that uses the general theory to estimate the POG on a computer screen is presented.
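
    The paper's treatment is fully geometric, but for illustration a much-simplified single-camera flavor can be sketched: after a multiple-point calibration with a stationary head, the point-of-gaze is approximated by a polynomial function of the pupil-center-minus-corneal-reflection vector. This interpolation shortcut is a common simplification, not the general theory itself, and all names are hypothetical.

    ```python
    # Sketch: polynomial mapping from pupil-glint vectors to screen points.
    import numpy as np

    def fit_gaze_map(pg_vectors, screen_points):
        """pg_vectors: Nx2 pupil-minus-glint vectors from calibration frames;
        screen_points: Nx2 known gaze targets (N >= 6). Returns 6x2 coeffs."""
        pg = np.asarray(pg_vectors, float)
        targets = np.asarray(screen_points, float)
        x, y = pg[:, 0], pg[:, 1]
        A = np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])
        coeffs, *_ = np.linalg.lstsq(A, targets, rcond=None)
        return coeffs   # estimate a new POG with: design_row @ coeffs
    ```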

  5. Thermal Remote Sensing with Uav-Based Workflows

    NASA Astrophysics Data System (ADS)

    Boesch, R.

    2017-08-01

    Climate change will have a significant influence on vegetation health and growth. Predictions of higher mean summer temperatures and prolonged summer droughts may pose a threat to agricultural areas and forest canopies. Rising canopy temperatures can be an indicator of plant stress because of the closure of stomata and a decrease in the transpiration rate. Thermal cameras have been available for decades, but are still often used only for single-image analysis, in an oblique-view manner, or with visual evaluations of video sequences. Remote sensing with a thermal camera can therefore be an important data source for understanding transpiration processes. Photogrammetric workflows allow thermal images to be processed similarly to RGB data, but the low spatial resolution of thermal cameras, significant optical distortion, and typically low contrast require an adapted workflow. Temperature distribution in forest canopies is typically unknown and less distinct than in urban or industrial areas, where metal constructions and surfaces yield high contrast and sharp edge information. The aim of this paper is to investigate the influence of interior camera orientation, tie-point matching, and ground control points on the resulting accuracy of bundle adjustment and dense cloud generation in a typical photogrammetric workflow for UAV-based thermal imagery in natural environments.

  6. Design, Development and Testing of the Miniature Autonomous Extravehicular Robotic Camera (Mini AERCam) Guidance, Navigation and Control System

    NASA Technical Reports Server (NTRS)

    Wagenknecht, J.; Fredrickson, S.; Manning, T.; Jones, B.

    2003-01-01

    Engineers at NASA Johnson Space Center have designed, developed, and tested a nanosatellite-class free-flyer intended for future external inspection and remote viewing of human spaceflight activities. The technology demonstration system, known as the Miniature Autonomous Extravehicular Robotic Camera (Mini AERCam), has been integrated into the approximate form and function of a flight system. The primary focus has been to develop a system capable of providing external views of the International Space Station. The Mini AERCam system is spherical-shaped and less than eight inches in diameter. It has a full suite of guidance, navigation, and control hardware and software, and is equipped with two digital video cameras and a high resolution still image camera. The vehicle is designed for either remotely piloted operations or supervised autonomous operations. Tests have been performed in both a six degree-of-freedom closed-loop orbital simulation and on an air-bearing table. The Mini AERCam system can also be used as a test platform for evaluating algorithms and relative navigation for autonomous proximity operations and docking around the Space Shuttle Orbiter or the ISS.

  7. Complex effusive events at Kilauea as documented by the GOES satellite and remote video cameras

    USGS Publications Warehouse

    Harris, A.J.L.; Thornber, C.R.

    1999-01-01

    GOES provides thermal data for all of the Hawaiian volcanoes once every 15 min. We show how volcanic radiance time series produced from this data stream can be used as a simple measure of effusive activity. Two types of radiance trends in these time series can be used to monitor effusive activity: (a) gradual variations in radiance reveal steady flow-field extension and tube development, and (b) discrete spikes correlate with short bursts of activity, such as lava fountaining or lava-lake overflows. We are confident that any effusive event covering more than 10,000 m² of ground in less than 60 min will be unambiguously detectable using this approach. We demonstrate this capability using GOES, video camera and ground-based observational data for the current eruption of Kilauea volcano (Hawai'i). A GOES radiance time series was constructed from 3987 images between 19 June and 12 August 1997. This time series displayed 24 radiance spikes elevated more than two standard deviations above the mean; 19 of these correlate with video-recorded short-burst effusive events. Less ambiguous events are interpreted, assessed and related to specific volcanic events by simultaneous use of permanently recording video camera data and ground-observer reports. The GOES radiance time series are automatically processed on data reception and made available in near-real-time, so such time series can contribute to three main monitoring functions: (a) automatic alerting of major effusive events; (b) event confirmation and assessment; and (c) establishing effusive event chronology.
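
    The two-standard-deviation spike rule reduces to a few lines; the sketch below is illustrative only and assumes a cleaned 1-D radiance series.

    ```python
    # Sketch: flag radiance samples more than n_sigma above the series mean.
    import numpy as np

    def radiance_spike_indices(radiance, n_sigma=2.0):
        """radiance: 1-D array of GOES radiance values, one per image.
        Returns indices of candidate short-burst effusive events."""
        r = np.asarray(radiance, float)
        return np.flatnonzero(r > np.nanmean(r) + n_sigma * np.nanstd(r))
    ```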

  8. Student-Built Underwater Video and Data Capturing Device

    NASA Astrophysics Data System (ADS)

    Whitt, F.

    2016-12-01

    The Stockbridge High School Robotics Team's invention is a low-cost underwater video and data capturing device. The system is capable of shooting time-lapse photography and/or video for up to 3 days at a time. It can be used in remote locations without changing batteries or adding external hard drives for data storage. The video capturing device has a unique base and mounting system which houses a Pi Drive and a programmable Raspberry Pi with a camera module. The system is powered by two 12-volt batteries, which makes it easy for users to recharge after use. The data capturing device has the same base and mounting system as the underwater camera and consists of an Arduino with an SD-card shield capable of collecting continuous temperature and pH readings underwater; these data are logged onto the SD card for easy access and recording. The device can reach depths of up to 100 meters while recording 36 hours of video on 1 terabyte of storage, and it features infrared night-vision capability. The cost to build the invention is $500. The goal was to provide a device that can easily be used by marine biologists, teachers, researchers, and citizen scientists to capture photographic and water-quality data in marine environments over extended periods of time.

  9. Development of tools and techniques for monitoring underwater artifacts

    NASA Astrophysics Data System (ADS)

    Lazar, Iulian; Ghilezan, Alin; Hnatiuc, Mihaela

    2016-12-01

    The different assessments provide information on the best methods to approach an artifact. The presence and extent of potential threats to the archaeology must also be determined. In this paper we present an underwater robot, built in the laboratory, able to identify an artifact and bring it to the surface. It is an underwater remotely operated vehicle (ROV) which can be controlled from the shore, a boat, or a control station; communication takes place over an Ethernet cable with a maximum length of 100 m. The robot is equipped with an IP camera which sends real-time images that can be accessed anywhere within the network. The camera also has a microSD card to store the video. The methods developed for data communication between the robot and the user are presented, and a client-server communication protocol is developed to control the ROV.

  10. Beach Observations using Quadcopter Imagery

    NASA Astrophysics Data System (ADS)

    Yang, Yi-Chung; Wang, Hsing-Yu; Fang, Hui-Ming; Hsiao, Sung-Shan; Tsai, Cheng-Han

    2017-04-01

    Beaches are places where the interaction of land and sea takes place, under the influence of many environmental factors, both meteorological and oceanic. Understanding the evolution of beaches may require constant monitoring. One way to monitor beach changes is with optical cameras: with careful placement of ground control points, land-based optical cameras, which are inexpensive compared to other remote sensing apparatus, can survey a relatively large area in a short time. For example, we have used terrestrial optical cameras incorporated with ground control points to monitor beaches. The images from the cameras were calibrated by applying the direct linear transformation, projective transformation, and a Sobel edge detector to locate the shoreline. Terrestrial optical cameras can record beach images continuously, and shorelines can be satisfactorily identified. However, terrestrial cameras have some limitations. First, the camera must be set at a sufficiently high vantage point to cover the whole area of interest, and such a location may not be available. Second, objects in the image have different resolutions depending on their distance from the camera. To overcome these limitations, the present study tested a quadcopter equipped with a down-looking camera to record video and still images of a beach. The quadcopter can be controlled to hover at one location, but its hovering is affected by the wind, since it is not positively anchored to a structure; although the quadcopter has a gimbal mechanism to damp out small shaking, it cannot completely counter wind-induced movements. In our preliminary tests, we flew the quadcopter up to 500 m high to record 10-minute videos and then took a 10-minute average of the video data. The averaged image of the coast was blurred because of the duration of the video and the small movements caused by the quadcopter returning to its original position against the wind. To solve this problem, the Speeded Up Robust Features (SURF) feature-detection method was applied to the video frames, and the resulting image was much sharper than the original. Next, we extracted the maximum and minimum RGB values of each pixel, respectively, over the 10-minute videos. The beach breaker zone showed up in the maximum-RGB image as white areas; moreover, we were able to remove the breakers from the images and see the bottom features of the breaker zone using the minimum RGB values. From this test we also identified the location of the coastline: the correlation coefficient between the coastline identified from the copter imagery and that from ground survey was as high as 0.98. By repeating such flights at different times, we could measure the evolution of the coastline.
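
    The per-pixel extrema step described above is straightforward to sketch with OpenCV and NumPy (a minimal illustration assuming registered frames; the file path argument is hypothetical): the maximum-RGB image highlights the breaker zone as white foam, while the minimum-RGB image suppresses the breakers and reveals bottom features.

    ```python
    # Sketch: per-pixel maximum and minimum RGB over a video clip.
    import cv2
    import numpy as np

    def rgb_extrema(video_path):
        cap = cv2.VideoCapture(video_path)
        ok, frame = cap.read()
        if not ok:
            raise IOError("could not read video: " + video_path)
        max_img, min_img = frame.copy(), frame.copy()
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            max_img = np.maximum(max_img, frame)   # foam/breakers stay white
            min_img = np.minimum(min_img, frame)   # breakers drop out
        cap.release()
        return max_img, min_img
    ```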

  11. Multiple Sensor Camera for Enhanced Video Capturing

    NASA Astrophysics Data System (ADS)

    Nagahara, Hajime; Kanki, Yoshinori; Iwai, Yoshio; Yachida, Masahiko

    Camera resolution has been drastically improved in response to demand for high-quality digital images; a digital still camera now has several megapixels. Although a video camera has a higher frame rate, its resolution is lower than that of a still camera. Thus high resolution and high frame rate are incompatible in ordinary cameras on the market. This problem is difficult to solve with a single sensor, since it stems from the physical limitation of the pixel transfer rate. In this paper, we propose a multi-sensor camera for capturing resolution- and frame-rate-enhanced video. A common multi-CCD camera, such as a 3-CCD color camera, uses identical CCDs to capture different spectral information. Our approach is to use sensors of different spatio-temporal resolution in a single camera cabinet to capture higher-resolution and higher-frame-rate information separately. We built a prototype camera which can capture high-resolution (2588×1958 pixels, 3.75 fps) and high-frame-rate (500×500 pixels, 90 fps) videos, and we propose a calibration method for the camera. As one application, we demonstrate an enhanced video (2128×1952 pixels, 90 fps) generated from the captured videos, showing the utility of the camera.

  12. Automated videography for residential communications

    NASA Astrophysics Data System (ADS)

    Kurtz, Andrew F.; Neustaedter, Carman; Blose, Andrew C.

    2010-02-01

    The current widespread use of webcams for personal video communication over the Internet suggests that opportunities exist to develop video communications systems optimized for domestic use. We discuss both prior and existing technologies, and the results of user studies that indicate potential needs and expectations for people relative to personal video communications. In particular, users anticipate an easily used, high image quality video system, which enables multitasking communications during the course of real-world activities and provides appropriate privacy controls. To address these needs, we propose a potential approach premised on automated capture of user activity. We then describe a method that adapts cinematography principles, with a dual-camera videography system, to automatically control image capture relative to user activity, using semantic or activity-based cues to determine user position and motion. In particular, we discuss an approach to automatically manage shot framing, shot selection, and shot transitions, with respect to one or more local users engaged in real-time, unscripted events, while transmitting the resulting video to a remote viewer. The goal is to tightly frame subjects (to provide more detail), while minimizing subject loss and repeated abrupt shot framing changes in the images as perceived by a remote viewer. We also discuss some aspects of the system and related technologies that we have experimented with thus far. In summary, the method enables users to participate in interactive video-mediated communications while engaged in other activities.

  13. STS-114 Flight Day 3 Highlights

    NASA Technical Reports Server (NTRS)

    2005-01-01

    Video coverage of Day 3 includes highlights of STS-114 during the approach and docking of Discovery with the International Space Station (ISS). The Return to Flight continues with space shuttle crew members (Commander Eileen Collins, Pilot James Kelly, Mission Specialists Soichi Noguchi, Stephen Robinson, Andrew Thomas, Wendy Lawrence, and Charles Camarda) seen in onboard activities on the fore and aft portions of the flight deck during the orbiter's approach. Camarda sends a greeting to his family, and Collins maneuvers Discovery as the ISS appears steadily closer in sequential still video from the centerline camera of the Orbiter Docking System. The approach includes video of Discovery from the ISS during the orbiter's Rendezvous Pitch Maneuver, giving the ISS a clear view of the thermal protection systems underneath the orbiter. Discovery docks with the Destiny Laboratory of the ISS, and the shuttle crew greets the Expedition 11 crew (Commander Sergei Krikalev and NASA ISS Science Officer and Flight Engineer John Phillips) of the ISS onboard the station. Finally, the Space Station Remote Manipulator System hands the Orbiter Boom Sensor System to its counterpart, the Shuttle Remote Manipulator System.

  14. Assessment of the DoD Embedded Media Program

    DTIC Science & Technology

    2004-09-01

    Excerpted snippets address Weapons Systems Video, Gun Camera Video, and Lipstick Cameras. A SECDEF and CJCS message to commanders stated, "Put in place mechanisms and processes... of public communication activities." The 10 February 2003 PAG stated, "Use of lipstick and helmet-mounted cameras on combat sorties is approved..."

  15. Camera Control and Geo-Registration for Video Sensor Networks

    NASA Astrophysics Data System (ADS)

    Davis, James W.

    With the use of large video networks, there is a need to coordinate and interpret the video imagery for decision support systems with the goal of reducing the cognitive and perceptual overload of human operators. We present computer vision strategies that enable efficient control and management of cameras to effectively monitor wide-coverage areas, and examine the framework within an actual multi-camera outdoor urban video surveillance network. First, we construct a robust and precise camera control model for commercial pan-tilt-zoom (PTZ) video cameras. In addition to providing a complete functional control mapping for PTZ repositioning, the model can be used to generate wide-view spherical panoramic viewspaces for the cameras. Using the individual camera control models, we next individually map the spherical panoramic viewspace of each camera to a large aerial orthophotograph of the scene. The result provides a unified geo-referenced map representation to permit automatic (and manual) video control and exploitation of cameras in a coordinated manner. The combined framework provides new capabilities for video sensor networks that are of significance and benefit to the broad surveillance/security community.
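
    One small piece of such a framework, mapping a camera's pan/tilt pointing angles to pixel coordinates in an equirectangular spherical panorama, can be sketched as follows. This is an illustrative simplification that ignores lens field of view and the paper's full control model.

    ```python
    # Sketch: pan/tilt angles -> equirectangular panorama pixel coordinates.
    def pantilt_to_panorama(pan_deg, tilt_deg, pano_w, pano_h,
                            pan_range=(-180.0, 180.0), tilt_range=(-90.0, 90.0)):
        u = (pan_deg - pan_range[0]) / (pan_range[1] - pan_range[0]) * (pano_w - 1)
        v = (tilt_range[1] - tilt_deg) / (tilt_range[1] - tilt_range[0]) * (pano_h - 1)
        return int(round(u)), int(round(v))
    ```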

  16. High-resolution streaming video integrated with UGS systems

    NASA Astrophysics Data System (ADS)

    Rohrer, Matthew

    2010-04-01

    Imagery has proven to be a valuable complement to Unattended Ground Sensor (UGS) systems. It provides ultimate verification of the nature of detected targets. However, due to the power, bandwidth, and technological limitations inherent to UGS, sacrifices have been made to the imagery portion of such systems. The result is that these systems produce lower resolution images in small quantities. Currently, a high resolution, wireless imaging system is being developed to bring megapixel, streaming video to remote locations to operate in concert with UGS. This paper will provide an overview of how using Wifi radios, new image based Digital Signal Processors (DSP) running advanced target detection algorithms, and high resolution cameras gives the user an opportunity to take high-powered video imagers to areas where power conservation is a necessity.

  17. Remote high-definition rotating video enables fast spatial survey of marine underwater macrofauna and habitats.

    PubMed

    Pelletier, Dominique; Leleu, Kévin; Mallet, Delphine; Mou-Tham, Gérard; Hervé, Gilles; Boureau, Matthieu; Guilpart, Nicolas

    2012-01-01

    Observing spatial and temporal variations of marine biodiversity with non-destructive techniques is central for understanding ecosystem resilience, and for monitoring and assessing conservation strategies, e.g. Marine Protected Areas. Observations are generally obtained through Underwater Visual Censuses (UVC) conducted by divers. The problems inherent to the presence of divers have been discussed in several papers. Video techniques are increasingly used for observing underwater macrofauna and habitat, and most video techniques that do not need the presence of a diver use baited remote systems. In this paper, we present an original video technique which relies on a remote, unbaited rotating system including a high-definition camera. The system is set on the sea floor to record images, which are then analysed in the office to quantify biotic and abiotic sea-bottom cover and to identify and count fish species and other species such as marine turtles. The technique was extensively tested in a highly diversified coral reef ecosystem in the South Lagoon of New Caledonia, based on a protocol covering both protected and unprotected areas in major lagoon habitats. The technique enabled detection and identification of a large number of species, in particular fished species, which were not disturbed by the system. Habitat could easily be investigated through the images, and a large number of observations could be carried out per day at sea. This study showed the strong potential of this non-obtrusive technique for observing both macrofauna and habitat. It offers a unique spatial coverage and can be implemented at sea at a reasonable cost by non-expert staff. As such, this technique is particularly interesting for investigating and monitoring coastal biodiversity in the light of current conservation challenges and increasing monitoring needs.

  18. Evaluation of Suppression of Hydroprocessed Renewable Jet (HRJ) Fuel Fires with Aqueous Film Forming Foam (AFFF)

    DTIC Science & Technology

    2011-07-01

    Excerpted snippets: cameras were installed around the test pan and an underwater GoPro® video camera recorded the fire from below the layer of fuel... A GoPro® video camera with a wide-angle lens recorded the tests... camera and the GoPro® video camera were not used for fire suppression experiments... Two ¼-in thick stainless steel test pans were...

  19. Passive detection of vehicle loading

    NASA Astrophysics Data System (ADS)

    McKay, Troy R.; Salvaggio, Carl; Faulring, Jason W.; Salvaggio, Philip S.; McKeown, Donald M.; Garrett, Alfred J.; Coleman, David H.; Koffman, Larry D.

    2012-01-01

    The Digital Imaging and Remote Sensing Laboratory (DIRS) at the Rochester Institute of Technology, along with the Savannah River National Laboratory, is investigating passive methods to quantify vehicle loading. The research described in this paper investigates multiple vehicle indicators including brake temperature, tire temperature, engine temperature, acceleration and deceleration rates, engine acoustics, suspension response, tire deformation and vibrational response. Our investigation into these variables includes building and implementing a sensing system for data collection as well as multiple full-scale vehicle tests. The sensing system includes infrared video cameras, triaxial accelerometers, microphones, video cameras and thermocouples. The full-scale testing includes both a medium-size dump truck and a tractor-trailer truck on closed courses with loads spanning the full range of each vehicle's capacity. Statistical analysis of the collected data is used to determine the effectiveness of each of the indicators for characterizing the weight of a vehicle. The final sensing system will monitor multiple load indicators and combine the results to achieve a more accurate measurement than any of the indicators could provide alone.

  20. Development of high-speed video cameras

    NASA Astrophysics Data System (ADS)

    Etoh, Takeharu G.; Takehara, Kohsei; Okinaka, Tomoo; Takano, Yasuhide; Ruckelshausen, Arno; Poggemann, Dirk

    2001-04-01

    Presented in this paper is an outline of the R&D activities on high-speed video cameras, which have been under way at Kinki University for more than ten years and currently proceed as an international cooperative project with the University of Applied Sciences Osnabruck and other organizations. Extensive market research has been done, (1) on users' requirements for high-speed multi-framing and video cameras by questionnaires and hearings, and (2) on the current availability of cameras of this sort by searches of journals and websites. Both support the necessity of developing a high-speed video camera of more than 1 million fps. A video camera of 4,500 fps with parallel readout was developed in 1991. A video camera with triple sensors was developed in 1996; the sensor is the same one developed for the previous camera. The frame rate is 50 million fps for triple-framing and 4,500 fps for triple-light-wave framing, including color image capturing. The idea of a video camera of 1 million fps with an ISIS, In-situ Storage Image Sensor, was first proposed in 1993 and has been continuously improved. A test sensor was developed in early 2000 and successfully captured images at 62,500 fps. Currently, design of a prototype ISIS is under way and it will, hopefully, be fabricated in the near future. Epoch-making cameras developed by others in the history of high-speed video cameras are also briefly reviewed.

  1. From Antarctica to space: Use of telepresence and virtual reality in control of remote vehicles

    NASA Technical Reports Server (NTRS)

    Stoker, Carol; Hine, Butler P., III; Sims, Michael; Rasmussen, Daryl; Hontalas, Phil; Fong, Terrence W.; Steele, Jay; Barch, Don; Andersen, Dale; Miles, Eric

    1994-01-01

    In the fall of 1993, NASA Ames deployed a modified Phantom S2 Remotely Operated underwater Vehicle (ROV) into an ice-covered sea environment near McMurdo Science Station, Antarctica. This deployment was part of the Antarctic Space Analog Program, a joint program between NASA and the National Science Foundation to demonstrate technologies relevant to space exploration in a realistic field setting in the Antarctic. The goal of the mission was to operationally test the use of telepresence and virtual reality technology in the operator interface to a remote vehicle while performing a benthic ecology study. The vehicle was operated both locally, from above a dive hole in the ice through which it was launched, and remotely over a satellite communications link from a control room at NASA's Ames Research Center. Local control of the vehicle was accomplished using the standard Phantom control box containing joysticks and switches, with the operator viewing stereo video camera images on a stereo display monitor. Remote control of the vehicle over the satellite link was accomplished using the Virtual Environment Vehicle Interface (VEVI) control software developed at NASA Ames. The remote operator interface included either a stereo display monitor similar to that used locally or a stereo head-mounted, head-tracked display. The compressed video signal from the vehicle was transmitted to NASA Ames over a 768 kbps satellite channel. Another channel provided a bi-directional Internet link to the vehicle control computer, through which the command and telemetry signals traveled, along with a bi-directional telephone service. In addition to the live stereo video from the satellite link, the operator could view a computer-generated graphic representation of the underwater terrain, modeled from the vehicle's sensors. The virtual environment contained an animated graphic model of the vehicle which reflected the state of the actual vehicle, along with ancillary information such as the vehicle track, science markers, and locations of video snapshots. The actual vehicle was driven either from within the virtual environment or through a telepresence interface. All vehicle functions could be controlled remotely over the satellite link.

  2. A remote camera operation system using a marker attached cap

    NASA Astrophysics Data System (ADS)

    Kawai, Hironori; Hama, Hiromitsu

    2005-12-01

    In this paper, we propose a convenient system to control a remote camera according to the eye-gazing direction of the operator, which is approximately obtained by calculating the face direction by means of image processing. The operator puts a marker-attached cap on his head, and the system takes an image of the operator from above with a single video camera. Three markers are set up on the cap; three is the minimum number needed to calculate the tilt angle of the head. The more markers are used, the more robust the system can be made to occlusion and the wider the tolerated moving range of the head. The markers must not lie on any single three-dimensional straight line. To compensate for changes in the markers' color due to illumination conditions, the threshold for marker extraction is decided adaptively using a k-means clustering method. The system was implemented with MATLAB on a personal computer, and real-time operation was realized. The experimental results confirmed the robustness of the system, and the tilt and pan angles of the head could be calculated with sufficient accuracy for use.
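
    The adaptive-threshold step can be illustrated with a brief sketch: pixel intensities are split into two clusters with a tiny 1-D k-means, and the threshold is placed midway between the cluster centers. This is an illustrative reconstruction, not the authors' MATLAB code, and it assumes the image contains both marker and background pixels.

    ```python
    # Sketch: adaptive marker-extraction threshold via two-cluster k-means.
    import numpy as np

    def adaptive_marker_threshold(gray_image, iters=20):
        v = gray_image.astype(float).ravel()
        c0, c1 = v.min(), v.max()            # initial centres: darkest, brightest
        for _ in range(iters):
            to_c1 = np.abs(v - c0) > np.abs(v - c1)   # assign pixels to clusters
            c0, c1 = v[~to_c1].mean(), v[to_c1].mean()
        return 0.5 * (c0 + c1)               # midpoint between cluster centres
    ```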

  3. Video Guidance Sensors Using Remotely Activated Targets

    NASA Technical Reports Server (NTRS)

    Bryan, Thomas C.; Howard, Richard T.; Book, Michael L.

    2004-01-01

    Four updated video guidance sensor (VGS) systems have been proposed. As described in a previous NASA Tech Briefs article, a VGS system is an optoelectronic system that provides guidance for automated docking of two vehicles. The VGS provides relative position and attitude (6-DOF) information between the VGS and its target. In the original intended application, the two vehicles would be spacecraft, but the basic principles of design and operation of the system are applicable to aircraft, robots, objects maneuvered by cranes, or other objects that may be required to be aligned and brought together automatically or under remote control. In the first two of the four VGS systems as now proposed, the tracked vehicle would include active targets that would light up on command from the tracking vehicle, and a video camera on the tracking vehicle would be synchronized with, and would acquire images of, the active targets. The video camera would also acquire background images during the periods between target illuminations. The images would be digitized and the background images would be subtracted from the illuminated-target images. Then the position and orientation of the tracked vehicle relative to the tracking vehicle would be computed from the known geometric relationships among the positions of the targets in the image, the positions of the targets relative to each other and to the rest of the tracked vehicle, and the position and orientation of the video camera relative to the rest of the tracking vehicle. The major difference between the first two proposed systems and prior active-target VGS systems lies in the techniques for synchronizing the flashing of the active targets with the digitization and processing of image data. In the prior active-target VGS systems, synchronization was effected, variously, by use of either a wire connection or the Global Positioning System (GPS). In three of the proposed VGS systems, the synchronizing signal would be generated on, and transmitted from, the tracking vehicle. In the first proposed VGS system, the tracking vehicle would transmit a pulse of light. Upon reception of the pulse, circuitry on the tracked vehicle would activate the target lights. During the pulse, the target image acquired by the camera would be digitized. When the pulse was turned off, the target lights would be turned off and the background video image would be digitized. The second proposed system would function similarly to the first proposed system, except that the transmitted synchronizing signal would be a radio pulse instead of a light pulse. In this system, the signal receptor would be a rectifying antenna. If the signal contained sufficient power, the output of the rectifying antenna could be used to activate the target lights, making it unnecessary to include a battery or other power supply for the targets on the tracked vehicle.
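
    The synchronized differencing common to these proposals can be sketched compactly (an illustrative reconstruction, not flight software; grayscale frames and a single target blob are assumed): subtracting the unlit frame from the lit frame cancels the background, and the residual blob's centroid gives the target's image coordinates.

    ```python
    # Sketch: background-subtracted detection of an active target's centroid.
    import numpy as np

    def target_centroid(lit_frame, dark_frame, threshold=40):
        """lit_frame/dark_frame: 2-D grayscale arrays captured with the target
        lights on and off. Returns the (x, y) centroid in pixels, or None."""
        diff = lit_frame.astype(int) - dark_frame.astype(int)
        ys, xs = np.nonzero(diff > threshold)   # common pixels cancel out
        if xs.size == 0:
            return None
        return float(xs.mean()), float(ys.mean())
    ```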

  4. Low-cost telepresence for collaborative virtual environments.

    PubMed

    Rhee, Seon-Min; Ziegler, Remo; Park, Jiyoung; Naef, Martin; Gross, Markus; Kim, Myoung-Hee

    2007-01-01

    We present a novel low-cost method for visual communication and telepresence in a CAVE-like environment, relying on 2D stereo-based video avatars. The system combines a selection of proven efficient algorithms and approximations in a unique way, resulting in a convincing stereoscopic real-time representation of a remote user acquired in a spatially immersive display. The system was designed to extend existing projection systems with acquisition capabilities requiring minimal hardware modifications and cost. The system uses infrared-based image segmentation to enable concurrent acquisition and projection in an immersive environment without a static background. The system consists of two color cameras and two additional b/w cameras used for segmentation in the near-IR spectrum. There is no need for special optics as the mask and color image are merged using image-warping based on a depth estimation. The resulting stereo image stream is compressed, streamed across a network, and displayed as a frame-sequential stereo texture on a billboard in the remote virtual environment.

  5. MS Mastracchio operates the RMS on the flight deck of Atlantis during STS-106

    NASA Image and Video Library

    2000-09-11

    STS106-E-5099 (11 September 2000) --- Astronaut Richard A. Mastracchio, mission specialist, stands near viewing windows, video monitors and the controls for the remote manipulator system (RMS) arm (out of frame at left) on the flight deck of the Earth-orbiting Space Shuttle Atlantis during Flight Day 3 activity. Atlantis was docked with the International Space Station (ISS) when this photo was recorded with an electronic still camera (ESC).

  6. Feasibility of telementoring between Baltimore (USA) and Rome (Italy): the first five cases.

    PubMed

    Micali, S; Virgili, G; Vannozzi, E; Grassi, N; Jarrett, T W; Bauer, J J; Vespasiani, G; Kavoussi, L R

    2000-08-01

    Telemedicine is the use of telecommunication technology to deliver healthcare. Telementoring has been developed to allow a surgeon at a remote site to offer guidance and assistance to a less-experienced surgeon. We report on our experience during laparoscopic urologic procedures with mentoring between Rome, Italy, and Baltimore, USA. Over a period of 3 months, two laparoscopic left spermatic vein ligations, one retroperitoneal renal biopsy, one laparoscopic nephrectomy, and one percutaneous access to the kidney were telementored. Transperitoneal laparoscopic cases were performed with the use of AESOP, a robotic arm for remote manipulation of the endoscopic camera. A second robot, PAKY, was used to perform radiologically guided needle orientation and insertion for percutaneous renal access. In addition to controlling the robotic devices, the system provided real-time video display for either the laparoscope or an externally mounted camera located in the operating room, full-duplex audio, telestration over live video, and access to electrocautery for tissue cutting or hemostasis. All procedures were accomplished with an uneventful postoperative course. One technical failure occurred because the robotic device was not properly positioned on the operating table. The round-trip delay of image transmission was less than 1 second. International telementoring is a feasible technique that can enhance surgeon education and decrease the likelihood of complications attributable to inexperience with new operative techniques.

  7. Application of video recording technology to improve husbandry and reproduction in the carmine bee-eater (Merops n. nubicus).

    PubMed

    Ferrie, Gina M; Sky, Christy; Schutz, Paul J; Quinones, Glorieli; Breeding, Shawnlei; Plasse, Chelle; Leighty, Katherine A; Bettinger, Tammie L

    2016-01-01

    Incorporating technology with research is becoming increasingly important to enhance animal welfare in zoological settings. Video technology is used in the management of avian populations to facilitate efficient information collection on aspects of avian reproduction that are impractical or impossible to obtain through direct observation. Disney's Animal Kingdom® maintains a successful breeding colony of Northern carmine bee-eaters. This African species is a cavity nester, making their nesting behavior difficult to study and manage in an ex situ setting. After initial research focused on developing a suitable nesting environment, our goal was to continue developing methods to improve reproductive success and increase the likelihood of chicks fledging. We installed infrared bullet cameras in five nest boxes and connected them to a digital video recording system, with data recorded continuously through the breeding season. We then scored and summarized nesting behaviors. Using remote video methods of observation provided much insight into the behavior of the birds in the colony's nest boxes. We observed aggression between birds during the egg-laying period, and therefore immediately removed all of the eggs for artificial incubation, which completely eliminated egg breakage. We also used observations of adult feeding behavior to refine chick hand-rearing diet and practices. Although many video recording configurations have been summarized and evaluated in various reviews, we found success with the digital video recorder and infrared cameras described here. Applying emerging technologies to cavity-nesting avian species is a necessary addition to improving management in and sustainability of zoo avian populations. © 2015 Wiley Periodicals, Inc.

  8. Video framerate, resolution and grayscale tradeoffs for undersea telemanipulator

    NASA Technical Reports Server (NTRS)

    Ranadive, V.; Sheridan, T. B.

    1981-01-01

    The product of Frame Rate (F) in frames per second, Resolution (R) in total pixels, and Grayscale (G) in bits equals the transmission bit rate in bits per second. Thus for a fixed channel capacity there are tradeoffs between F, R and G in the actual sampling of the picture for a particular manual control task, in the present case remote undersea manipulation. A manipulator was used in the master/slave mode to study these tradeoffs. Images were systematically degraded from 28 frames per second, 128 x 128 pixels and 16 levels (4 bits) of grayscale, with various FRG combinations constructed from a real-time digitized (charge-injection) video camera. It was found that frame rate, resolution and grayscale could be independently reduced without preventing the operator from accomplishing his/her task. Threshold points were found beyond which degradation would prevent any successful performance. A general conclusion is that a well-trained operator can perform familiar remote manipulator tasks with a considerably degraded picture, down to 50 kbit/s.
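
    The stated relation is simply bit rate = F × R × G. A small worked example (the full-quality values echo the study's setup and its roughly 50 kbit/s floor; the specific degraded combinations are illustrative):

        def bit_rate(frames_per_s, pixels, gray_bits):
            # Product of frame rate, pixel count, and grayscale depth.
            return frames_per_s * pixels * gray_bits

        full = bit_rate(28, 128 * 128, 4)          # 1,835,008 bit/s at full quality
        print(f"full quality: {full / 1e6:.2f} Mbit/s")

        # Example combinations that fit under the ~50 kbit/s threshold:
        for f, r, g in [(4, 64 * 64, 3), (2, 128 * 128, 1), (8, 32 * 32, 3)]:
            print(f"{f} fps, {r} px, {g} bit -> {bit_rate(f, r, g)} bit/s")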

  9. Wrap-Around Out-the-Window Sensor Fusion System

    NASA Technical Reports Server (NTRS)

    Fox, Jeffrey; Boe, Eric A.; Delgado, Francisco; Secor, James B.; Clark, Michael R.; Ehlinger, Kevin D.; Abernathy, Michael F.

    2009-01-01

    The Advanced Cockpit Evaluation System (ACES) includes communication, computing, and display subsystems, mounted in a van, that synthesize out-the-window views to approximate the views of the outside world as they would be seen from the cockpit of a crewed spacecraft or aircraft, or from the remote-control station of a ground vehicle or UAV (unmanned aerial vehicle). The system includes five flat-panel display units arranged approximately in a semicircle around an operator, like cockpit windows. The scene displayed on each panel represents the view through the corresponding cockpit window. Each display unit is driven by a personal computer equipped with a video-capture card that accepts live input from any of a variety of sensors (typically, visible and/or infrared video cameras). Software running in the computers blends the live video images with synthetic images that could be generated, for example, from heads-up-display outputs, waypoints, corridors, or from satellite photographs of the same geographic region. Data from a Global Positioning System receiver and an inertial navigation system aboard the remote vehicle are used by the ACES software to keep the synthetic and live views in registration. If the live image were to fail, the synthetic scenes could still be displayed to maintain situational awareness.

  10. Video auto stitching in multicamera surveillance system

    NASA Astrophysics Data System (ADS)

    He, Bin; Zhao, Gang; Liu, Qifang; Li, Yangyang

    2012-01-01

    This paper concerns the problem of video stitching automatically in a multi-camera surveillance system. Previous approaches have used multiple calibrated cameras for video mosaic in large scale monitoring application. In this work, we formulate video stitching as a multi-image registration and blending problem, and not all cameras are needed to be calibrated except a few selected master cameras. SURF is used to find matched pairs of image key points from different cameras, and then camera pose is estimated and refined. Homography matrix is employed to calculate overlapping pixels and finally implement boundary resample algorithm to blend images. The result of simulation demonstrates the efficiency of our method.
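
    A minimal OpenCV sketch of the same match, homography, warp, and blend pipeline; ORB is substituted for SURF here because SURF ships only in the non-free opencv-contrib build, and the crude overwrite at the end stands in for the paper's boundary resampling blend:

        import cv2
        import numpy as np

        def stitch_pair(img_a, img_b):
            # Detect and describe keypoints (ORB in place of the paper's SURF).
            orb = cv2.ORB_create(2000)
            kp_a, des_a = orb.detectAndCompute(img_a, None)
            kp_b, des_b = orb.detectAndCompute(img_b, None)
            # Brute-force Hamming matching with cross-checking.
            matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
            matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)[:200]
            src = np.float32([kp_b[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
            dst = np.float32([kp_a[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
            # Homography maps img_b's pixels into img_a's frame; RANSAC rejects outliers.
            H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
            h, w = img_a.shape[:2]
            canvas = cv2.warpPerspective(img_b, H, (w * 2, h))
            canvas[:, :w] = img_a  # crude overwrite in place of boundary resampling
            return canvas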

  11. Video auto stitching in multicamera surveillance system

    NASA Astrophysics Data System (ADS)

    He, Bin; Zhao, Gang; Liu, Qifang; Li, Yangyang

    2011-12-01

    This paper concerns the problem of video stitching automatically in a multi-camera surveillance system. Previous approaches have used multiple calibrated cameras for video mosaic in large scale monitoring application. In this work, we formulate video stitching as a multi-image registration and blending problem, and not all cameras are needed to be calibrated except a few selected master cameras. SURF is used to find matched pairs of image key points from different cameras, and then camera pose is estimated and refined. Homography matrix is employed to calculate overlapping pixels and finally implement boundary resample algorithm to blend images. The result of simulation demonstrates the efficiency of our method.

  12. Automatic inference of geometric camera parameters and inter-camera topology in uncalibrated disjoint surveillance cameras

    NASA Astrophysics Data System (ADS)

    den Hollander, Richard J. M.; Bouma, Henri; Baan, Jan; Eendebak, Pieter T.; van Rest, Jeroen H. C.

    2015-10-01

    Person tracking across non-overlapping cameras and other types of video analytics benefit from spatial calibration information that allows an estimation of the distance between cameras and a relation between pixel coordinates and world coordinates within a camera. In a large environment with many cameras, or for frequent ad-hoc deployments of cameras, the cost of this calibration is high. This creates a barrier for the use of video analytics. Automating the calibration allows for a short configuration time, and the use of video analytics in a wider range of scenarios, including ad-hoc crisis situations and large scale surveillance systems. We show an autocalibration method entirely based on pedestrian detections in surveillance video in multiple non-overlapping cameras. In this paper, we show the two main components of automatic calibration. The first shows the intra-camera geometry estimation that leads to an estimate of the tilt angle, focal length and camera height, which is important for the conversion from pixels to meters and vice versa. The second component shows the inter-camera topology inference that leads to an estimate of the distance between cameras, which is important for spatio-temporal analysis of multi-camera tracking. This paper describes each of these methods and provides results on realistic video data.
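
    To see why tilt angle, focal length, and camera height suffice for the pixel-to-meter conversion, the pinhole-model sketch below maps an image row to a distance along a flat ground plane. This is a textbook relation rather than the paper's estimator, and all parameter names are ours.

        import math

        def ground_distance(y_pixel, cy, focal_px, tilt_rad, cam_height_m):
            # Pixels below the principal point add atan((y - cy)/f) to the tilt.
            angle_below_horizon = tilt_rad + math.atan2(y_pixel - cy, focal_px)
            if angle_below_horizon <= 0:
                raise ValueError("ray does not hit the ground plane")
            # Flat-ground intersection of the viewing ray.
            return cam_height_m / math.tan(angle_below_horizon)

        # A camera 4 m up, tilted 10 degrees down, with a 1000 px focal length:
        print(ground_distance(600, 540, 1000.0, math.radians(10), 4.0))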

  13. Remotely Piloted Aircraft Systems as a Rhinoceros Anti-Poaching Tool in Africa

    PubMed Central

    Mulero-Pázmány, Margarita; Stolper, Roel; van Essen, L. D.; Negro, Juan J.; Sassen, Tyrell

    2014-01-01

    Over recent years there has been a massive increase in rhinoceros poaching incidents, with more than two individuals killed per day in South Africa in the first months of 2013. Immediate actions are needed to preserve current populations and the agents involved in their protection are demanding new technologies to increase their efficiency in the field. We assessed the use of remotely piloted aircraft systems (RPAS) to monitor for poaching activities. We performed 20 flights with 3 types of cameras: visual photo, HD video and thermal video, to test the ability of the systems to detect (a) rhinoceros, (b) people acting as poachers and (c) to perform fence surveillance. The study area consisted of several large game farms in KwaZulu-Natal province, South Africa. The targets were better detected at the lowest altitudes, but to operate the plane safely and in a discreet way, altitudes between 100 and 180 m were the most convenient. Open areas facilitated target detection, while forest habitats complicated it. Detectability using visual cameras was higher in the morning and at midday, but the thermal camera provided the best images in the morning and at night. Considering not only the technical capabilities of the systems but also the poachers' modus operandi and the current control methods, we propose RPAS usage as a tool for surveillance of sensitive areas, for supporting field anti-poaching operations, as a deterrent tool for poachers and as a complementary method for rhinoceros ecology research. Here, we demonstrate that low-cost RPAS can be useful for rhinoceros stakeholders for field control procedures. There are, however, important practical limitations that should be considered for their successful and realistic integration in the anti-poaching battle. PMID:24416177

  14. Remotely piloted aircraft systems as a rhinoceros anti-poaching tool in Africa.

    PubMed

    Mulero-Pázmány, Margarita; Stolper, Roel; van Essen, L D; Negro, Juan J; Sassen, Tyrell

    2014-01-01

    Over recent years there has been a massive increase in rhinoceros poaching incidents, with more than two individuals killed per day in South Africa in the first months of 2013. Immediate actions are needed to preserve current populations and the agents involved in their protection are demanding new technologies to increase their efficiency in the field. We assessed the use of remotely piloted aircraft systems (RPAS) to monitor for poaching activities. We performed 20 flights with 3 types of cameras: visual photo, HD video and thermal video, to test the ability of the systems to detect (a) rhinoceros, (b) people acting as poachers and (c) to perform fence surveillance. The study area consisted of several large game farms in KwaZulu-Natal province, South Africa. The targets were better detected at the lowest altitudes, but to operate the plane safely and in a discreet way, altitudes between 100 and 180 m were the most convenient. Open areas facilitated target detection, while forest habitats complicated it. Detectability using visual cameras was higher in the morning and at midday, but the thermal camera provided the best images in the morning and at night. Considering not only the technical capabilities of the systems but also the poachers' modus operandi and the current control methods, we propose RPAS usage as a tool for surveillance of sensitive areas, for supporting field anti-poaching operations, as a deterrent tool for poachers and as a complementary method for rhinoceros ecology research. Here, we demonstrate that low-cost RPAS can be useful for rhinoceros stakeholders for field control procedures. There are, however, important practical limitations that should be considered for their successful and realistic integration in the anti-poaching battle.

  15. Heating and cooling gas-gun targets: nuts and bolts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gustavsen, Richard L; Bartram, Brian D; Gehr, Russell J

    The nuts and bolts of a system used to heat and cool gas-gun targets are described. We have now used the system for more than 35 experiments, all of which have used electromagnetic gauging. Features of the system include a cover which is removed (remotely) just prior to projectile impact and the widespread use of metal/polymer insulations. Both the cover and insulation were required to obtain uniform temperatures in samples with low thermal conductivity. The use of inexpensive video cameras to make remote observations of the cover removal was found to be very useful. A brief catalog of useful glue, adhesive tape, insulation, and seal materials is given.

  16. Machine vision based teleoperation aid

    NASA Technical Reports Server (NTRS)

    Hoff, William A.; Gatrell, Lance B.; Spofford, John R.

    1991-01-01

    When teleoperating a robot using video from a remote camera, it is difficult for the operator to gauge depth and orientation from a single view. In addition, there are situations where a camera mounted for viewing by the teleoperator during a teleoperation task may not be able to see the tool tip, or the viewing angle may not be intuitive (requiring extensive training to reduce the risk of incorrect or dangerous moves by the teleoperator). A machine vision based teleoperator aid is presented which uses the operator's camera view to compute an object's pose (position and orientation), and then overlays onto the operator's screen information on the object's current and desired positions. The operator can choose to display orientation and translation information as graphics and/or text. This aid provides easily assimilated depth and relative orientation information to the teleoperator. The camera may be mounted at any known orientation relative to the tool tip. A preliminary experiment with human operators was conducted and showed that task accuracies were significantly greater with than without this aid.

  17. Social Justice through Literacy: Integrating Digital Video Cameras in Reading Summaries and Responses

    ERIC Educational Resources Information Center

    Liu, Rong; Unger, John A.; Scullion, Vicki A.

    2014-01-01

    Drawing data from an action-oriented research project for integrating digital video cameras into the reading process in pre-college courses, this study proposes using digital video cameras in reading summaries and responses to promote critical thinking and to teach social justice concepts. The digital video research project is founded on…

  18. Detection of inter-frame forgeries in digital videos.

    PubMed

    K, Sitara; Mehtre, B M

    2018-05-26

    Videos are acceptable as evidence in a court of law provided their authenticity and integrity are scientifically validated. Videos recorded by surveillance systems are susceptible to malicious alteration of visual content by perpetrators, locally or remotely. Such malicious alterations of video content (called video forgeries) are categorized into inter-frame and intra-frame forgeries. In this paper, we propose inter-frame forgery detection techniques using tamper traces from the spatio-temporal and compressed domains. Pristine videos containing frames recorded during a sudden camera zooming event may be wrongly classified as tampered, leading to an increase in false positives. To address this issue, we propose a method for zooming detection and incorporate it into video tampering detection. Frame-shuffling detection, which had not been explored so far, is also addressed in our work. Our method is capable of differentiating various inter-frame tamper events and localizing them in the temporal domain. The proposed system was tested on 23,586 videos, of which 2,346 are pristine and the rest are candidate inter-frame forged videos. Experimental results show that we have successfully detected frame shuffling with encouraging accuracy rates. We have achieved improved accuracy in forgery detection for frame insertion, frame deletion and frame duplication. Copyright © 2018. Published by Elsevier B.V.
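
    For intuition only (the paper's tamper traces are richer than this), one classic spatio-temporal cue is an outlier in the inter-frame difference signal: frame insertion or deletion produces an abrupt jump between consecutive frames. A toy detector follows, with the caveat the authors themselves raise that camera zooming can trigger false positives:

        import numpy as np

        def suspicious_transitions(frames, z_thresh=4.0):
            # Mean absolute difference between consecutive grayscale frames.
            diffs = np.array([np.mean(np.abs(b.astype(float) - a.astype(float)))
                              for a, b in zip(frames, frames[1:])])
            # Flag transitions whose difference is a statistical outlier.
            z = (diffs - diffs.mean()) / (diffs.std() + 1e-9)
            return np.where(z > z_thresh)[0] + 1   # index of the later frame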

  19. Fast noninvasive eye-tracking and eye-gaze determination for biomedical and remote monitoring applications

    NASA Astrophysics Data System (ADS)

    Talukder, Ashit; Morookian, John M.; Monacos, Steve P.; Lam, Raymond K.; Lebaw, C.; Bond, A.

    2004-04-01

    Eyetracking is one of the latest technologies that has shown potential in several areas, including human-computer interaction for people with and without disabilities, and noninvasive monitoring, detection, and even diagnosis of physiological and neurological problems in individuals. Current non-invasive eyetracking methods achieve a 30 Hz rate with possibly low accuracy in gaze estimation, which is insufficient for many applications. We propose a new non-invasive visual eyetracking system that is capable of operating at speeds as high as 6-12 kHz. A new CCD video camera and hardware architecture is used, and a novel fast image processing algorithm leverages specific features of the input CCD camera to yield a real-time eyetracking system. A field programmable gate array (FPGA) is used to control the CCD camera and execute the image processing operations. Initial results show the excellent performance of our system under severe head motion and low contrast conditions.

  20. Leveraging traffic and surveillance video cameras for urban traffic.

    DOT National Transportation Integrated Search

    2014-12-01

    The objective of this project was to investigate the use of existing video resources, such as traffic : cameras, police cameras, red light cameras, and security cameras for the long-term, real-time : collection of traffic statistics. An additional ob...

  1. Close-range photogrammetry with video cameras

    NASA Technical Reports Server (NTRS)

    Burner, A. W.; Snow, W. L.; Goad, W. K.

    1985-01-01

    Examples of photogrammetric measurements made with video cameras uncorrected for electronic and optical lens distortions are presented. The measurement and correction of electronic distortions of video cameras using both bilinear and polynomial interpolation are discussed. Examples showing the relative stability of electronic distortions over long periods of time are presented. Having corrected for electronic distortion, the data are further corrected for lens distortion using the plumb line method. Examples of close-range photogrammetric data taken with video cameras corrected for both electronic and optical lens distortion are presented.

  2. Close-Range Photogrammetry with Video Cameras

    NASA Technical Reports Server (NTRS)

    Burner, A. W.; Snow, W. L.; Goad, W. K.

    1983-01-01

    Examples of photogrammetric measurements made with video cameras uncorrected for electronic and optical lens distortions are presented. The measurement and correction of electronic distortions of video cameras using both bilinear and polynomial interpolation are discussed. Examples showing the relative stability of electronic distortions over long periods of time are presented. Having corrected for electronic distortion, the data are further corrected for lens distortion using the plumb line method. Examples of close-range photogrammetric data taken with video cameras corrected for both electronic and optical lens distortion are presented.

  3. Rugged Walking Robot

    NASA Technical Reports Server (NTRS)

    Larimer, Stanley J.; Lisec, Thomas R.; Spiessbach, Andrew J.

    1990-01-01

    Proposed walking-beam robot simpler and more rugged than articulated-leg walkers. Requires less data processing, and uses power more efficiently. Includes pair of tripods, one nested in other. Inner tripod holds power supplies, communication equipment, computers, instrumentation, sampling arms, and articulated sensor turrets. Outer tripod holds mast on which antennas for communication with remote control site and video cameras for viewing local and distant terrain are mounted. Propels itself by raising, translating, and lowering tripods in alternation. Steers itself by rotating raised tripod on turntable.

  4. Mobile Situational Awareness Tool: Unattended Ground Sensor-Based Remote Surveillance System

    DTIC Science & Technology

    2014-09-01

    into prototyped WSNs. In 2012, the Raspberry Pi, an SBC with an ARM processor running GNU/Linux, also designed for students and hobbyists, entered the market selling for only $25 each [30]. The Raspberry Pi was the size of a credit card, had the ability to connect to a wide variety of peripherals to include Wi-Fi adapters and cameras, and had enough processing power to play high-definition video [31]. The Raspberry Pi proved to be

  5. View of the HST berthed to the Shuttle Atlantis

    NASA Image and Video Library

    2009-05-13

    S125-E-007257 (14 May 2009) --- A wide view of the Hubble Space Telescope, locked down in the cargo bay of the Earth-orbiting Space Shuttle Atlantis, which will be the site of a great deal of hands-on servicing over the next five days. The Canadian-built remote manipulator system arm (right), with its video cameras documenting activity in the shuttle's cargo bay all week, was instrumental in grappling and subsequently capturing the giant orbital observatory for the final servicing mission.

  6. Nekton Interaction Monitoring System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2017-03-15

    The software provides a real-time processing system for sonar to detect and track animals, and to extract water column biomass statistics in order to facilitate continuous monitoring of an underwater environment. The Nekton Interaction Monitoring System (NIMS) extracts and archives tracking and backscatter statistics data from a real-time stream of data from a sonar device. NIMS also sends real-time tracking messages over the network that can be used by other systems to generate other metrics or to trigger instruments such as an optical video camera. A web-based user interface provides remote monitoring and control. NIMS currently supports three popular sonar devices: M3 multi-beam sonar (Kongsberg), EK60 split-beam echo-sounder (Simrad) and BlueView acoustic camera (Teledyne).

  7. Instant Video Revisiting: The Video Camera as a "Tool of the Mind" for Young Children.

    ERIC Educational Resources Information Center

    Forman, George

    1999-01-01

    Once used only to record special events in the classroom, video cameras are now small enough and affordable enough to be used to document everyday events. Video cameras, with foldout screens, allow children to watch their activities immediately after they happen and to discuss them with a teacher. This article coins the term instant video…

  8. Secure Internet video conferencing for assessing acute medical problems in a nursing facility.

    PubMed Central

    Weiner, M.; Schadow, G.; Lindbergh, D.; Warvel, J.; Abernathy, G.; Dexter, P.; McDonald, C. J.

    2001-01-01

    Although video-based teleconferencing is becoming more widespread in the medical profession, especially for scheduled consultations, applications for rapid assessment of acute medical problems are rare. Use of such a video system in a nursing facility may be especially beneficial, because physicians are often not immediately available to evaluate patients. We have assembled and tested a portable, wireless conferencing system to prepare for a randomized trial of the system's influence on resource utilization and satisfaction. The system includes a rolling cart with video conferencing hardware and software, a remotely controllable digital camera, light, wireless network, and battery. A semi-automated paging system informs physicians of patients' study status and indications for conferencing. Data transmission occurs wirelessly in the nursing home and then through Internet cables to the physician's home. This provides sufficient bandwidth to support quality motion images. IPsec secures communications. Despite human and technical challenges, this system is affordable and functional. PMID:11825286

  9. Image acquisition system for traffic monitoring applications

    NASA Astrophysics Data System (ADS)

    Auty, Glen; Corke, Peter I.; Dunn, Paul; Jensen, Murray; Macintyre, Ian B.; Mills, Dennis C.; Nguyen, Hao; Simons, Ben

    1995-03-01

    An imaging system for monitoring traffic on multilane highways is discussed. The system, named Safe-T-Cam, is capable of operating 24 hours per day in all but extreme weather conditions and can capture still images of vehicles traveling up to 160 km/hr. Systems operating at different remote locations are networked to allow transmission of images and data to a control center. A remote site facility comprises a vehicle detection and classification module (VCDM), an image acquisition module (IAM) and a license plate recognition module (LPRM). The remote site is connected to the central site by an ISDN communications network. The remote site system is discussed in this paper. The VCDM consists of a video camera, a specialized exposure control unit to maintain consistent image characteristics, and a 'real-time' image processing system that processes 50 images per second. The VCDM can detect and classify vehicles (e.g. cars from trucks). The vehicle class is used to determine what data should be recorded. The VCDM uses a vehicle tracking technique to allow optimum triggering of the high resolution camera of the IAM. The IAM camera combines the features necessary to operate consistently in the harsh environment encountered when imaging a vehicle 'head-on' in both day and night conditions. The image clarity obtained is ideally suited for automatic location and recognition of the vehicle license plate. This paper discusses the camera geometry, sensor characteristics and the image processing methods which permit consistent vehicle segmentation from a cluttered background, allowing object-oriented pattern recognition to be used for vehicle classification. The capture of high-resolution images and the image characteristics required for the LPRM's automatic reading of vehicle license plates are also discussed. The results of field tests presented demonstrate that the vision-based Safe-T-Cam system, currently installed on open highways, is capable of producing automatic classification of vehicle class and recording of vehicle numberplates with a success rate around 90 percent over a period of 24 hours.

  10. Stereo vision techniques for telescience

    NASA Astrophysics Data System (ADS)

    Hewett, S.

    1990-02-01

    The Botanic Experiment is one of the pilot experiments in the Telescience Test Bed program at the ESTEC research and technology center of the European Space Agency. The aim of the Telescience Test Bed is to develop the techniques required by an experimenter using a ground based work station for remote control, monitoring, and modification of an experiment operating on a space platform. The purpose of the Botanic Experiment is to examine the growth of seedlings under various illumination conditions with a video camera from a number of viewpoints throughout the duration of the experiment. This paper describes the Botanic Experiment and the points addressed in developing a stereo vision software package to extract quantitative information about the seedlings from the recorded video images.

  11. Nonchronological video synopsis and indexing.

    PubMed

    Pritch, Yael; Rav-Acha, Alex; Peleg, Shmuel

    2008-11-01

    The amount of captured video is growing with the increasing number of video cameras, especially the millions of surveillance cameras that operate 24 hours a day. Since video browsing and retrieval are time consuming, most captured video is never watched or examined. Video synopsis is an effective tool for browsing and indexing such video. It provides a short video representation while preserving the essential activities of the original video. The activity in the video is condensed into a shorter period by simultaneously showing multiple activities, even when they originally occurred at different times. The synopsis video is also an index into the original video, pointing to the original time of each activity. Video synopsis can be applied to create a synopsis of endless video streams, as generated by webcams and by surveillance cameras. It can address queries like "Show in one minute the synopsis of this camera's broadcast during the past day." This process includes two major phases: (i) an online conversion of the endless video stream into a database of objects and activities (rather than frames); (ii) a response phase, generating the video synopsis as a response to the user's query.
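
    The condensation phase can be pictured as packing activity "tubes" into a shorter timeline so that several play at once. The greedy toy below assigns each activity a new start time across a fixed number of concurrent lanes; the actual method minimizes a collision and consistency energy rather than packing greedily, so this is an intuition aid only.

        def pack_activities(durations, max_concurrent=3):
            # Next free time in each of the concurrent "lanes".
            lanes = [0] * max_concurrent
            schedule = []
            for i, d in enumerate(durations):
                lane = min(range(max_concurrent), key=lambda j: lanes[j])
                schedule.append((i, lanes[lane]))   # (activity index, new start time)
                lanes[lane] += d
            return schedule

        # Activities lasting 5, 3, 8, and 2 seconds condensed into 3 lanes:
        print(pack_activities([5, 3, 8, 2]))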

  12. Robust Video Stabilization Using Particle Keypoint Update and l1-Optimized Camera Path

    PubMed Central

    Jeon, Semi; Yoon, Inhye; Jang, Jinbeum; Yang, Seungji; Kim, Jisung; Paik, Joonki

    2017-01-01

    Acquisition of stabilized video is an important issue for various types of digital cameras. This paper presents an adaptive camera-path estimation method using robust feature detection to remove shaky artifacts from a video. The proposed algorithm consists of three steps: (i) robust feature detection using particle keypoints between adjacent frames; (ii) camera path estimation and smoothing; and (iii) rendering to reconstruct a stabilized video. As a result, the proposed algorithm can estimate the optimal homography by redefining important feature points in flat regions using particle keypoints. In addition, stabilized frames with fewer holes can be generated from the optimal, adaptive camera path that minimizes a temporal total variation (TV). The proposed video stabilization method is suitable for enhancing the visual quality of various portable cameras and can be applied to robot vision, driving assistant systems, and visual surveillance systems. PMID:28208622
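
    The smoothing step can be sketched as: accumulate per-frame motion into a camera path, smooth it, and apply the difference as a per-frame correction. A moving average is used below purely as a stand-in for the paper's l1-optimized, total-variation-minimizing path; names are ours.

        import numpy as np

        def stabilizing_corrections(dx, dy, window=15):
            # Accumulate per-frame translations into the camera's path.
            path_x, path_y = np.cumsum(dx), np.cumsum(dy)
            kernel = np.ones(window) / window
            # Moving average stands in for the paper's l1-optimized smoothing.
            smooth_x = np.convolve(path_x, kernel, mode="same")
            smooth_y = np.convolve(path_y, kernel, mode="same")
            # Per-frame shift that moves the jittery path onto the smooth one.
            return smooth_x - path_x, smooth_y - path_y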

  13. The Surgeon's View: Comparison of Two Digital Video Recording Systems in Veterinary Surgery.

    PubMed

    Giusto, Gessica; Caramello, Vittorio; Comino, Francesco; Gandini, Marco

    2015-01-01

    Video recording and photography during surgical procedures are useful in veterinary medicine for several reasons, including legal, educational, and archival purposes. Many systems are available, such as hand cameras, light-mounted cameras, and head cameras. We chose a reasonably priced head camera that is among the smallest video cameras available. To best describe its possible uses and advantages, we recorded video and images of eight different surgical cases and procedures, both in hospital and field settings. All procedures were recorded both with a head-mounted camera and a commercial hand-held photo camera. Then sixteen volunteers (eight senior clinicians and eight final-year students) completed an evaluation questionnaire. Both cameras produced high-quality photographs and videos, but observers rated the head camera significantly better regarding point of view and their understanding of the surgical operation. The head camera was considered significantly more useful in teaching surgical procedures. Interestingly, senior clinicians tended to assign generally lower scores compared to students. The head camera we tested is an effective, easy-to-use tool for recording surgeries and various veterinary procedures in all situations, with no need for assistance from a dedicated operator. It can be a valuable aid for veterinarians working in all fields of the profession and a useful tool for veterinary surgical education.

  14. Feasibility Study of Utilization of Action Camera, GoPro Hero 4, Google Glass, and Panasonic HX-A100 in Spine Surgery.

    PubMed

    Lee, Chang Kyu; Kim, Youngjun; Lee, Nam; Kim, Byeongwoo; Kim, Doyoung; Yi, Seong

    2017-02-15

    A study of the feasibility of commercially available action cameras for recording video of spine surgery. Recent innovations in wearable action cameras with high-definition video recording enable surgeons to record operations easily and without high costs. The purpose of this study is to compare the feasibility, safety, and efficacy of commercially available action cameras in recording video of spine surgery. There are early reports of medical professionals using Google Glass throughout the hospital, the Panasonic HX-A100 action camera, and the GoPro; this study is the first such report for spine surgery. Three commercially available cameras were tested: the GoPro Hero 4 Silver, Google Glass, and the Panasonic HX-A100 action camera. A typical spine surgery, posterior lumbar laminectomy and fusion, was selected for video recording. The three cameras were used by one surgeon and video was recorded throughout the operation. The comparison covered human factors, specifications, and video quality. The most convenient and lightweight device to wear and hold throughout the long operation time was Google Glass. Regarding image quality, all devices except Google Glass supported an HD format, and the GoPro uniquely offered 2.7K or 4K resolution; video resolution was best with the GoPro. Regarding field of view, the GoPro can adjust its point of interest and field of view according to the surgery, and its narrow-FOV option was best for recording video clips to share. Google Glass has further potential through application programs. Connectivity such as Wi-Fi and Bluetooth enables video streaming to an audience, but only Google Glass has a built-in two-way communication feature. Action cameras have the potential to improve patient safety, operator comfort, and procedure efficiency in the field of spinal surgery, and to broadcast surgery as the devices and their application programs develop in the future. N/A.

  15. Lessons from UNSCOM and IAEA regarding remote monitoring and air sampling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dupree, S.A.

    1996-01-01

    In 1991, at the direction of the United Nations Security Council, UNSCOM and IAEA developed plans for On-going Monitoring and Verification (OMV) in Iraq. The plans were accepted by the Security Council, and remote monitoring and atmospheric sampling equipment has been installed at selected sites in Iraq. The remote monitoring equipment consists of video cameras and sensors positioned to observe equipment or activities at sites that could be used to support the development or manufacture of weapons of mass destruction or long-range missiles. The atmospheric sampling equipment provides unattended collection of chemical samples from sites that could be used to support the development or manufacture of chemical weapon agents. To support OMV in Iraq, UNSCOM has established the Baghdad Monitoring and Verification Centre. Imagery from the remote monitoring cameras can be accessed in near-real time from the Centre through RF communication links with the monitored sites. The OMV program in Iraq has implications for international cooperative monitoring in both global and regional contexts. However, monitoring systems such as those used in Iraq are not sufficient, in and of themselves, to guarantee the absence of prohibited activities; such systems cannot replace on-site inspections by competent, trained inspectors. Nevertheless, monitoring similar to that used in Iraq can contribute to openness and confidence building, to the development of mutual trust, and to the improvement of regional stability.

  16. Multi-Target Camera Tracking, Hand-off and Display LDRD 158819 Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, Robert J.

    2014-10-01

    Modern security control rooms gather video and sensor feeds from tens to hundreds of cameras. Advanced camera analytics can detect motion from individual video streams and convert unexpected motion into alarms, but the interpretation of these alarms depends heavily upon human operators. Unfortunately, these operators can be overwhelmed when a large number of events happen simultaneously, or lulled into complacency due to frequent false alarms. This LDRD project has focused on improving video surveillance-based security systems by changing the fundamental focus from the cameras to the targets being tracked. If properly integrated, more cameras shouldn't lead to more alarms, more monitors, more operators, and increased response latency but instead should lead to better information and more rapid response times. For the course of the LDRD we have been developing algorithms that take live video imagery from multiple video cameras, identify individual moving targets from the background imagery, and then display the results in a single 3D interactive video. In this document we summarize the work in developing this multi-camera, multi-target system, including lessons learned, tools developed, technologies explored, and a description of current capability.

  17. Multi-target camera tracking, hand-off and display LDRD 158819 final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, Robert J.

    2014-10-01

    Modern security control rooms gather video and sensor feeds from tens to hundreds of cameras. Advanced camera analytics can detect motion from individual video streams and convert unexpected motion into alarms, but the interpretation of these alarms depends heavily upon human operators. Unfortunately, these operators can be overwhelmed when a large number of events happen simultaneously, or lulled into complacency due to frequent false alarms. This LDRD project has focused on improving video surveillance-based security systems by changing the fundamental focus from the cameras to the targets being tracked. If properly integrated, more cameras shouldn't lead to more alarms, more monitors, more operators, and increased response latency but instead should lead to better information and more rapid response times. For the course of the LDRD we have been developing algorithms that take live video imagery from multiple video cameras, identify individual moving targets from the background imagery, and then display the results in a single 3D interactive video. In this document we summarize the work in developing this multi-camera, multi-target system, including lessons learned, tools developed, technologies explored, and a description of current capability.

  18. [Multimedia (visual collaboration) brings true nature of human life].

    PubMed

    Tomita, N

    2000-03-01

    Videoconferencing systems, high-quality visual collaboration, are bringing Multimedia into society. Multimedia, high-quality media such as TV broadcast, looks expensive because it requires a broadband network with 100-200 Mbps bandwidth, or 3,700 analog telephone lines. However, thanks to the existing digital line called N-ISDN (Narrow Integrated Service Digital Network) and PictureTel's audio/video compression technologies, it becomes far less expensive. N-ISDN provides 128 Kbps bandwidth, more than twice that of an analog line. PictureTel's technology instantly compresses the audio/video signal to 1/1,000 of its size. This means that, with ISDN and PictureTel technology, Multimedia is materialized over even a single ISDN line. This will allow a doctor to remotely meet face-to-face with a medical specialist or with patients to interview them, conduct physical examinations, review records, and prescribe treatments. Bonding multiple ISDN lines will further improve video quality, enabling remote surgery. A surgeon can perform an operation on an internal organ by projecting motion video from an endoscope's CCD camera onto a large display monitor. PictureTel also provides advanced technologies for eliminating background noise generated by surgical knives or scalpels during surgery, allowing the sound of breathing or the heartbeat to be clearly transmitted to the remote site. Thus, Multimedia eliminates the barrier of distance, enabling people who stay at home, anywhere in the world, to undergo up-to-date medical treatment from experts. This will reduce medical costs and allow people to live in the suburbs, with less pollution, closer to nature. People will foster a more open and collaborative environment by participating in local activities. Such a community-oriented lifestyle will make up for the mass-consumption, materialistic economy of the past, and bring true happiness and welfare into our lives after all.

  19. Advanced High-Definition Video Cameras

    NASA Technical Reports Server (NTRS)

    Glenn, William

    2007-01-01

    A product line of high-definition color video cameras, now under development, offers a superior combination of desirable characteristics, including high frame rates, high resolutions, low power consumption, and compactness. Several of the cameras feature a 3,840 × 2,160-pixel format with progressive scanning at 30 frames per second. The power consumption of one of these cameras is about 25 W. The size of the camera, excluding the lens assembly, is 2 by 5 by 7 in. (about 5.1 by 12.7 by 17.8 cm). The aforementioned desirable characteristics are attained at relatively low cost, largely by utilizing digital processing in advanced field-programmable gate arrays (FPGAs) to perform all of the many functions (for example, color balance and contrast adjustments) of a professional color video camera. The processing is programmed in VHDL so that application-specific integrated circuits (ASICs) can be fabricated directly from the program. ["VHDL" signifies VHSIC Hardware Description Language, a computing language used by the United States Department of Defense for describing, designing, and simulating very-high-speed integrated circuits (VHSICs).] The image-sensor and FPGA clock frequencies in these cameras have generally been much higher than those used in video cameras designed and manufactured elsewhere. Frequently, the outputs of these cameras are converted to other video-camera formats by use of pre- and post-filters.

  20. The Ocean in Depth - Ideas for Using Marine Technology in Science Communication

    NASA Astrophysics Data System (ADS)

    Gerdes, A.

    2009-04-01

    By deploying camera and video systems on remotely operated diving vehicles (ROVs), new and fascinating insights concerning the functioning of deep ocean ecosystems such as cold-water coral reef communities can be gained. Moreover, mapping hot vents at mid-ocean ridge locations, and exploring asphalt and mud volcanoes in the Gulf of Mexico and the Mediterranean Sea with the aid of video camera systems, have illustrated the scientific value of state-of-the-art diving tools. In principle, the deployment of sophisticated marine technology on seagoing expeditions and its results - video tapes and photographs of fascinating submarine environments, publication of new scientific findings - offer unique opportunities for communicating marine sciences. Experience shows that an interest in marine technology can easily be stirred in laypersons if the deployment of underwater vehicles such as ROVs during seagoing expeditions is presented using catchwords like "discovery", "new frontier", "groundbreaking mission", etc. On the other hand, however, a number of restrictions and challenges have to be kept in mind. Communicating marine science in general, and the achievements of marine technology in particular, can only succeed with a well-defined target-audience concept. While national and international TV stations and production companies are very interested in using high-quality underwater video footage, the involvement of journalists and camera teams in seagoing expeditions entails a number of challenges: berths onboard research vessels are limited; safety aspects have to be considered; and copyright and utilisation questions concerning digitized video and photo material have to be handled with special care. To cite one example: on-board video material produced by professional TV teams cannot be used by the research institute that operated the expedition. This presentation aims at (1) informing members of the scientific community about new opportunities related to marine technology, (2) discussing challenges and limitations in cooperative projects with the media, (3) presenting new ways of marketing scientific findings, and (4) promoting the interest of the media present at the EGU09 conference in cooperating with research institutes.

  1. Two-dimensional thermal video analysis of offshore bird and bat flight

    DOE PAGES

    Matzner, Shari; Cullinan, Valerie I.; Duberstein, Corey A.

    2015-09-11

    Thermal infrared video can provide essential information about bird and bat presence and activity for risk assessment studies, but the analysis of recorded video can be time-consuming and may not extract all of the available information. Automated processing makes continuous monitoring over extended periods of time feasible, and maximizes the information provided by video. This is especially important for collecting data in remote locations that are difficult for human observers to access, such as proposed offshore wind turbine sites. We present guidelines for selecting an appropriate thermal camera based on environmental conditions and the physical characteristics of the target animals. We developed new video image processing algorithms that automate the extraction of bird and bat flight tracks from thermal video, and that characterize the extracted tracks to support animal identification and behavior inference. The algorithms use a video peak store process followed by background masking and perceptual grouping to extract flight tracks. The extracted tracks are automatically quantified in terms that could then be used to infer animal type and possibly behavior. The developed automated processing generates results that are reproducible and verifiable, and reduces the total amount of video data that must be retained and reviewed by human experts. Finally, we suggest models for interpreting thermal imaging information.
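
    The video-peak-store step reduces to a per-pixel maximum over a window of frames, so a warm moving animal leaves a bright streak for the masking and grouping stages to segment. A minimal sketch (the function name is ours):

        import numpy as np

        def video_peak_store(frames):
            # Running per-pixel maximum; a warm mover leaves a bright streak.
            peak = frames[0].copy()
            for f in frames[1:]:
                np.maximum(peak, f, out=peak)
            return peak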

  2. Two-dimensional thermal video analysis of offshore bird and bat flight

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matzner, Shari; Cullinan, Valerie I.; Duberstein, Corey A.

    Thermal infrared video can provide essential information about bird and bat presence and activity for risk assessment studies, but the analysis of recorded video can be time-consuming and may not extract all of the available information. Automated processing makes continuous monitoring over extended periods of time feasible, and maximizes the information provided by video. This is especially important for collecting data in remote locations that are difficult for human observers to access, such as proposed offshore wind turbine sites. We present guidelines for selecting an appropriate thermal camera based on environmental conditions and the physical characteristics of the target animals. We developed new video image processing algorithms that automate the extraction of bird and bat flight tracks from thermal video, and that characterize the extracted tracks to support animal identification and behavior inference. The algorithms use a video peak store process followed by background masking and perceptual grouping to extract flight tracks. The extracted tracks are automatically quantified in terms that could then be used to infer animal type and possibly behavior. The developed automated processing generates results that are reproducible and verifiable, and reduces the total amount of video data that must be retained and reviewed by human experts. Finally, we suggest models for interpreting thermal imaging information.

  3. Texture-adaptive hyperspectral video acquisition system with a spatial light modulator

    NASA Astrophysics Data System (ADS)

    Fang, Xiaojing; Feng, Jiao; Wang, Yongjin

    2014-10-01

    We present a new hybrid camera system based on a spatial light modulator (SLM) to capture texture-adaptive high-resolution hyperspectral video. The hybrid camera system records a hyperspectral video with low spatial resolution using a gray camera and a high-spatial-resolution video using an RGB camera. The hyperspectral video is subsampled by the SLM. The subsampled points can be adaptively selected according to the texture characteristics of the scene by combining digital image analysis and computational processing. In this paper, we propose an adaptive sampling method utilizing texture segmentation and the wavelet transform (WT). We also demonstrate the effectiveness of the sampling pattern on the SLM with the proposed method.
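
    The adaptive-sampling idea can be sketched as: measure texture per block and spend more of the SLM sample budget on busier blocks. Local variance below stands in for the paper's segmentation-plus-wavelet measure, and every name and parameter is illustrative.

        import numpy as np

        def adaptive_sample_points(gray, block=16, budget=500, seed=0):
            # Texture energy per block; local variance replaces the wavelet measure.
            h, w = gray.shape
            blocks, energy = [], []
            for r in range(0, h - block + 1, block):
                for c in range(0, w - block + 1, block):
                    blocks.append((r, c))
                    energy.append(gray[r:r + block, c:c + block].var())
            weights = np.array(energy, dtype=float)
            weights /= weights.sum() + 1e-9
            counts = np.round(weights * budget).astype(int)
            # Scatter each block's share of samples uniformly inside the block.
            rng = np.random.default_rng(seed)
            points = []
            for (r, c), n in zip(blocks, counts):
                rows = rng.integers(r, r + block, size=n)
                cols = rng.integers(c, c + block, size=n)
                points.extend(zip(rows.tolist(), cols.tolist()))
            return points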

  4. Remote Marker-Based Tracking for UAV Landing Using Visible-Light Camera Sensor.

    PubMed

    Nguyen, Phong Ha; Kim, Ki Wan; Lee, Young Won; Park, Kang Ryoung

    2017-08-30

    Unmanned aerial vehicles (UAVs), commonly known as drones, have proved to be useful not only on battlefields where manned flight is considered too risky or difficult, but also for everyday purposes such as surveillance, monitoring, rescue, unmanned cargo, aerial video, and photography. More advanced drones make use of global positioning system (GPS) receivers in the navigation and control loop, which allows for smart GPS features of drone navigation. However, problems arise if the drones operate in heterogeneous areas with no GPS signal, so it is important to research the development of UAVs with autonomous navigation and landing guidance using computer vision. In this research, we determined how to safely land a drone in the absence of GPS signals using our remote marker-based tracking algorithm based on a visible-light camera sensor. The proposed method uses a unique marker designed as a tracking target during landing procedures. Experimental results show that our method significantly outperforms state-of-the-art object trackers in terms of both accuracy and processing time, and we performed tests on an embedded system in various environments.

  5. Experiences in teleoperation of land vehicles

    NASA Technical Reports Server (NTRS)

    Mcgovern, Douglas E.

    1989-01-01

    Teleoperation of land vehicles allows the removal of the operator from the vehicle to a remote location. This can greatly increase operator safety and comfort in applications such as security patrol or military combat. The cost includes system complexity and reduced system performance. All feedback on vehicle performance and on environmental conditions must pass through sensors, a communications channel, and displays. In particular, this requires vision to be transmitted by closed-circuit television with a consequent degradation of information content. Vehicular teleoperation, as a result, places severe demands on the operator. Teleoperated land vehicles have been built and tested by many organizations, including Sandia National Laboratories (SNL). The SNL fleet presently includes eight vehicles of varying capability. These vehicles have been operated using different types of controls, displays, and visual systems. Experimentation studying the effects of vision system characteristics on off-road, remote driving was performed for conditions of fixed camera versus steering-coupled camera and of color versus black and white video display. Additionally, much experience was gained through system demonstrations and hardware development trials. The preliminary experimental findings and the results of the accumulated operational experience are discussed.

  6. A Simple Approach to Collecting Useful Wildlife Data Using Remote Camera-Traps in Undergraduate Biology Courses

    ERIC Educational Resources Information Center

    Christensen, David R.

    2016-01-01

    Remote camera-traps are commonly used to estimate the abundance, diversity, behavior and habitat use of wildlife in an inexpensive and nonintrusive manner. Because of the increasing use of remote-cameras in wildlife studies, students interested in wildlife biology should be exposed to the use of remote-cameras early in their academic careers.…

  7. Representing videos in tangible products

    NASA Astrophysics Data System (ADS)

    Fageth, Reiner; Weiting, Ralf

    2014-03-01

    Videos can be taken with nearly every camera: digital point-and-shoot cameras, DSLRs, as well as smartphones, and, increasingly, so-called action cameras mounted on sports devices. The implementation of videos, generating QR codes and representative pictures out of the video stream in software, was the subject of last year's paper. This year we present first data about what content is displayed and how users represent their videos in printed products, e.g. CEWE PHOTOBOOKS and greeting cards. We report the share of the different video formats used, the number of images extracted out of the video in order to represent it, the positions in the book, and different design strategies compared to regular books.

  8. High-frame-rate infrared and visible cameras for test range instrumentation

    NASA Astrophysics Data System (ADS)

    Ambrose, Joseph G.; King, B.; Tower, John R.; Hughes, Gary W.; Levine, Peter A.; Villani, Thomas S.; Esposito, Benjamin J.; Davis, Timothy J.; O'Mara, K.; Sjursen, W.; McCaffrey, Nathaniel J.; Pantuso, Francis P.

    1995-09-01

    Field deployable, high frame rate camera systems have been developed to support the test and evaluation activities at the White Sands Missile Range. The infrared cameras employ a 640 by 480 format PtSi focal plane array (FPA). The visible cameras employ a 1024 by 1024 format backside illuminated CCD. The monolithic, MOS architecture of the PtSi FPA supports commandable frame rate, frame size, and integration time. The infrared cameras provide 3 - 5 micron thermal imaging in selectable modes from 30 Hz frame rate, 640 by 480 frame size, 33 ms integration time to 300 Hz frame rate, 133 by 142 frame size, 1 ms integration time. The infrared cameras employ a 500 mm, f/1.7 lens. Video outputs are 12-bit digital video and RS170 analog video with histogram-based contrast enhancement. The 1024 by 1024 format CCD has a 32-port, split-frame transfer architecture. The visible cameras exploit this architecture to provide selectable modes from 30 Hz frame rate, 1024 by 1024 frame size, 32 ms integration time to 300 Hz frame rate, 1024 by 1024 frame size (with 2:1 vertical binning), 0.5 ms integration time. The visible cameras employ a 500 mm, f/4 lens, with integration time controlled by an electro-optical shutter. Video outputs are RS170 analog video (512 by 480 pixels), and 12-bit digital video.

  9. Clinical applications of commercially available video recording and monitoring systems: inexpensive, high-quality video recording and monitoring systems for endoscopy and microsurgery.

    PubMed

    Tsunoda, Koichi; Tsunoda, Atsunobu; Ishimoto, ShinnIchi; Kimura, Satoko

    2006-01-01

    Dedicated charge-coupled device (CCD) camera systems for endoscopes and electronic fiberscopes are in widespread use. However, both are usually stationary in an office or examination room, and a wheeled cart is needed for mobility. The total costs of the CCD camera system and the electronic fiberscopy system are at least US Dollars 10,000 and US Dollars 30,000, respectively. Recently, the performance of audio and visual instruments has improved dramatically, with a concomitant reduction in cost. Commercially available CCD video cameras with small monitors have become common. They provide excellent image quality and are much smaller and less expensive than previous models. The authors have developed adaptors for the popular mini-digital video (mini-DV) camera. The camera also provides video and acoustic output signals; therefore, the endoscopic images can be viewed on a large monitor simultaneously. The new system (a mini-DV video camera and an adaptor) costs only US Dollars 1,000. The system is therefore both cost-effective and useful in the outpatient clinic, in casualty settings, or on house calls for the purpose of patient education. In the future, the authors plan to introduce the clinical application of a high-vision camera and an infrared camera as medical instruments for clinical and research situations.

  10. Voss with video camera in Service Module

    NASA Image and Video Library

    2001-04-08

    ISS002-E-5329 (08 April 2001) --- Astronaut James S. Voss, Expedition Two flight engineer, sets up a video camera on a mounting bracket in the Zvezda / Service Module of the International Space Station (ISS). A 35mm camera and a digital still camera are also visible nearby. This image was recorded with a digital still camera.

  11. Field Test Data for Detecting Vibrations of a Building Using High-Speed Video Cameras

    DTIC Science & Technology

    2017-10-01

    ARL-TR-8185 ● OCT 2017. US Army Research Laboratory. Field Test Data for Detecting Vibrations of a Building Using High-Speed Video Cameras, by Caitlin P Conn and Geoffrey H Goldman. Reporting period: June 2016 – October 2017.

  12. A Comparative Survey of Methods for Remote Heart Rate Detection From Frontal Face Videos

    PubMed Central

    Wang, Chen; Pun, Thierry; Chanel, Guillaume

    2018-01-01

    Remotely measuring physiological activity can provide substantial benefits for both medical and affective computing applications. Recent research has proposed different methodologies for the unobtrusive detection of heart rate (HR) using human face recordings. These methods are based on subtle color changes or motions of the face due to cardiovascular activity, which are invisible to human eyes but can be captured by digital cameras. Several approaches have been proposed, based on signal processing or machine learning. However, these methods have been evaluated on different datasets, and there is consequently no consensus on their performance. In this article, we describe and evaluate several methods defined in the literature, from 2008 until the present day, for the remote detection of HR using human face recordings. The general HR processing pipeline is divided into three stages: face video processing, face blood volume pulse (BVP) signal extraction, and HR computation. Approaches presented in the paper are classified and grouped according to each stage. At each stage, algorithms are analyzed and compared based on their performance using the public database MAHNOB-HCI, to which the results of this article are therefore limited. Results show that the extracted face skin area contains more BVP information, and that blind source separation and peak detection methods are more robust to head motions when estimating HR. PMID:29765940
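
    As a concrete illustration of the last two pipeline stages, a minimal sketch (not any one surveyed method; the pass band and windowing are assumptions): average a colour channel over the detected face skin region to obtain a raw BVP trace, then take the dominant spectral peak in the physiological band as the HR estimate.

    ```python
    import numpy as np

    def estimate_hr_bpm(roi_green_means, fps):
        """roi_green_means: per-frame mean green value of the face skin region.
        Returns the heart rate in beats per minute via an FFT peak search."""
        x = np.asarray(roi_green_means, dtype=float)
        x -= x.mean()                                    # drop the DC illumination level
        spectrum = np.abs(np.fft.rfft(x * np.hanning(x.size)))
        freqs = np.fft.rfftfreq(x.size, d=1.0 / fps)
        band = (freqs >= 0.7) & (freqs <= 4.0)           # ~42-240 bpm plausible range
        return 60.0 * freqs[band][np.argmax(spectrum[band])]
    ```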

  13. International Space Station: Expedition 2000

    NASA Technical Reports Server (NTRS)

    2000-01-01

    Live footage of the International Space Station (ISS) presents an inside look at the groundwork and assembly of the ISS. Footage includes both animation and live shots of a Space Shuttle liftoff. Phil West, Engineer; Dr. Catherine Clark, Chief Scientist ISS; and Joe Edwards, Astronaut, narrate the video. The first topic of discussion is People and Communications. Good communication is a key component in our ISS endeavor. Dr. Catherine Clark uses two soup cans attached by a string to demonstrate communication. Bill Nye the Science Guy talks briefly about science aboard the ISS. Charlie Spencer, Manager of Space Station Simulators, talks about communication aboard the ISS. The second topic of discussion is Engineering. Bonnie Dunbar, Astronaut at the Johnson Space Center, gives a tour of the Japanese Experiment Module (JEM). She takes us inside Node 2 and the U.S. Lab Destiny. She also shows where protein crystal growth experiments are performed. Audio terminal units are used for communication in the JEM. A demonstration of solar arrays and how they are tested is shown. Alan Bell, Project Manager of the Mobile Remote Manipulator Development Facility (MRMDF), describes the robot arm that is used on the ISS and how it maneuvers the Space Station. The third topic of discussion is Science and Technology. Dr. Catherine Clark, using a balloon attached to a weight, drops the apparatus to the ground to demonstrate microgravity; the bursting of the balloon is observed. Sherri Dunnette, Imaging Technologist, describes the various cameras that are used in space: 1) 35 mm still cameras, 2) medium format cameras, 3) large format cameras, 4) video cameras, and 5) the DV camera. Kumar Krishen, Chief Technologist ISS, explains Inframetrics infrared vision cameras and how they perform. The Short Arm Centrifuge is shown by Dr. Millard Reske, Senior Life Scientist, to subject astronauts to forces greater than 1 g; Reske is interested in the physiological effects on the eyes and the muscular system after exposure to forces greater than 1 g.

  14. Dante's Volcano

    NASA Technical Reports Server (NTRS)

    1994-01-01

    This video contains two segments: a 0:01:50 spot and a 0:08:21 feature. Dante 2, an eight-legged walking machine, is shown during field trials as it explores the inner depths of an active volcano at Mount Spurr, Alaska. A NASA-sponsored team at Carnegie Mellon University built Dante to withstand Earth's harshest conditions, to deliver a science payload to the interior of a volcano, and to report on its journey to the floor of a volcano. Remotely controlled from 80 miles away, the robot explored the inner depths of the volcano, and information from onboard video cameras and sensors was relayed via satellite to scientists in Anchorage. There, using a computer-generated image, controllers tracked the robot's movement. Ultimately the robot team hopes to apply the technology to future planetary missions.

  15. Noncontact measurement of heart rate using facial video illuminated under natural light and signal weighted analysis.

    PubMed

    Yan, Yonggang; Ma, Xiang; Yao, Lifeng; Ouyang, Jianfei

    2015-01-01

    Non-contact, remote measurement of vital physical signals is important for reliable and comfortable physiological self-assessment. We present a novel optical imaging-based method to measure such signals. Using a digital camera and ambient light, cardiovascular pulse waves were accurately extracted from color videos of human faces, and vital physiological parameters such as heart rate were measured using a proposed signal-weighted analysis method. The measured HRs were consistent with those measured simultaneously with reference technologies (r=0.94, p<0.001 for HR). The results show that the imaging-based method is suitable for measuring physiological parameters and provides a reliable and comfortable measurement mode. The study lays a physical foundation for noninvasively measuring multiple physiological parameters in humans.

  16. A Robotic arm for optical and gamma radwaste inspection

    NASA Astrophysics Data System (ADS)

    Russo, L.; Cosentino, L.; Pappalardo, A.; Piscopo, M.; Scirè, C.; Scirè, S.; Vecchio, G.; Muscato, G.; Finocchiaro, P.

    2014-12-01

    We propose Radibot, a simple and cheap robotic arm for remote inspection, which interacts with the radwaste environment by means of a scintillation gamma detector and a video camera forming its light (< 1 kg) payload. It moves vertically thanks to a crane, while the other three degrees of freedom are obtained by means of revolute joints. A dedicated algorithm automatically chooses the best kinematics to reach a graphically selected position, while still allowing the operator to fully drive the arm with a standard videogame joypad.

  17. Estimating the gaze of a virtuality human.

    PubMed

    Roberts, David J; Rae, John; Duckworth, Tobias W; Moore, Carl M; Aspin, Rob

    2013-04-01

    The aim of our experiment is to determine whether eye-gaze can be estimated from a virtuality human to within the accuracies that underpin social interaction, and reliably across the gaze poses and camera arrangements likely in everyday settings. The scene is set by explaining why Immersive Virtuality Telepresence has the potential to meet the grand challenge of faithfully communicating both the appearance and the focus of attention of a remote human participant within a shared 3D computer-supported context. Within the experiment, n=22 participants rotated static 3D virtuality humans, reconstructed from surround images, until they felt most looked at. The dependent variable was absolute angular error, which was compared to that underpinning social gaze behaviour in the natural world. Independent variables were 1) the relative orientations of eye, head and body of the captured subject; and 2) the subset of cameras used to texture the form. The analysis looked for statistical and practical significance and qualitative corroborating evidence. The analysed results tell us much about the importance and detail of the relationship between gaze pose, method of video-based reconstruction, and camera arrangement. They tell us that virtuality can reproduce gaze to an accuracy useful in social interaction, but that with the adopted method of Video Based Reconstruction this is highly dependent on the combination of gaze pose and camera arrangement. This suggests changes in the VBR approach to allow more flexible camera arrangements. The work is of interest to those wanting to support expressive meetings that are both socially and spatially situated, and particularly those using or building Immersive Virtuality Telepresence to accomplish this. It is also relevant to the use of virtuality humans in applications ranging from the study of human interactions to gaming and the crossing of the stage line in films and TV.

  18. Systems approach to the design of the CCD sensors and camera electronics for the AIA and HMI instruments on solar dynamics observatory

    NASA Astrophysics Data System (ADS)

    Waltham, N.; Beardsley, S.; Clapp, M.; Lang, J.; Jerram, P.; Pool, P.; Auker, G.; Morris, D.; Duncan, D.

    2017-11-01

    Solar Dynamics Observatory (SDO) is imaging the Sun in many wavelengths near simultaneously and with a resolution ten times higher than the average high-definition television. In this paper we describe our innovative systems approach to the design of the CCD cameras for two of SDO's remote sensing instruments, the Atmospheric Imaging Assembly (AIA) and the Helioseismic and Magnetic Imager (HMI). Both instruments share use of a custom-designed 16 million pixel science-grade CCD and common camera readout electronics. A prime requirement was for the CCD to operate with significantly lower drive voltages than before, motivated by our wish to simplify the design of the camera readout electronics. Here, the challenge lies in the design of circuitry to drive the CCD's highly capacitive electrodes and to digitize its analogue video output signal with low noise and to high precision. The challenge is greatly exacerbated when forced to work with only fully space-qualified, radiation-tolerant components. We describe our systems approach to the design of the AIA and HMI CCD and camera electronics, and the engineering solutions that enabled us to comply with both mission and instrument science requirements.

  19. Design of a MATLAB(registered trademark) Image Comparison and Analysis Tool for Augmentation of the Results of the Ann Arbor Distortion Test

    DTIC Science & Technology

    2016-06-25

    The equipment used in this procedure includes an Ann Arbor distortion tester with a 50-line grating reticule and an IQeye 720 digital video camera with 12... In order to digitally capture images of the distortion in an optical sample and import them into MATLAB, the IQeye 720 video camera was used with the Ann Arbor distortion tester; Figure 8 shows the computer interface for capturing images seen by the IQeye 720 camera. Once an image was

  20. Advantages of computer cameras over video cameras/frame grabbers for high-speed vision applications

    NASA Astrophysics Data System (ADS)

    Olson, Gaylord G.; Walker, Jo N.

    1997-09-01

    Cameras designed to work specifically with computers can have certain advantages over cameras loosely defined as 'video' cameras. In recent years the distinctions between camera types have become somewhat blurred, with a growing presence of 'digital cameras' aimed at the home market; this latter category is not considered here. The term 'computer camera' herein means one which has low-level computer (and software) control of the CCD clocking. These can often satisfy some of the more demanding machine vision tasks, in some cases with a higher measurement rate than video cameras. Several such applications are described here, including some which use recently designed CCDs offering good combinations of parameters such as noise, speed, and resolution. Among the considerations in choosing a camera type for any given application are effects such as 'pixel jitter' and 'anti-aliasing.' Some of these effects may only be relevant if there is a mismatch between the number of pixels per line in the camera CCD and the number of analog-to-digital (A/D) sampling points along a video scan line. For the computer camera these numbers are guaranteed to match, which alleviates some measurement inaccuracies and leads to higher effective resolution.

  1. Video sensor with range measurement capability

    NASA Technical Reports Server (NTRS)

    Howard, Richard T. (Inventor); Briscoe, Jeri M. (Inventor); Corder, Eric L. (Inventor); Broderick, David J. (Inventor)

    2008-01-01

    A video sensor device is provided which incorporates a rangefinder function. The device includes a single video camera and a fixed laser, spaced a predetermined distance from the camera, which produces a laser beam when activated. A diffractive optic element divides the beam so that multiple light spots are produced on a target object. A processor calculates the range to the object based on the known spacing and on angles determined from the light spots in the video images produced by the camera.
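
    The patent abstract does not spell out the geometry, but a standard parallel-axis triangulation makes the idea concrete (the focal length and baseline below are invented for illustration): with the laser offset from the lens by a known baseline and aimed parallel to the optical axis, a spot's pixel displacement from the principal point varies inversely with range.

    ```python
    # Illustrative parallel-axis triangulation; f_px and baseline_m are assumptions.
    def spot_range_m(x_px, f_px=1400.0, baseline_m=0.10):
        """Range to the surface carrying a laser spot imaged x_px pixels from
        the principal point: Z = f * b / x."""
        return f_px * baseline_m / x_px
    ```

    With several spots from the diffractive element, averaging the per-spot estimates would reduce sensitivity to centroiding noise.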

  2. Landsat 3 return beam vidicon response artifacts

    USGS Publications Warehouse

    ,; Clark, B.

    1981-01-01

    The return beam vidicon (RBV) sensing systems employed aboard Landsats 1, 2, and 3 have all been similar in that they have utilized vidicon tube cameras. These are not mirror-sweep scanning devices such as the multispectral scanner (MSS) sensors that have also been carried aboard the Landsat satellites. The vidicons operate more like common television cameras, using an electron gun to read images from a photoconductive faceplate. In the case of Landsats 1 and 2, the RBV system consisted of three such vidicons which collected remote sensing data in three distinct spectral bands. Landsat 3, however, utilizes just two vidicon cameras, both of which sense data in a single broad band. The Landsat 3 RBV system additionally has a unique configuration. As arranged, the two cameras can be shuttered alternately, twice each, in the same time it takes for one MSS scene to be acquired. This shuttering sequence results in four RBV "subscenes" for every MSS scene acquired, similar to the four quadrants of a square. Each subscene represents a ground area of approximately 98 by 98 km. The subscenes are designated A, B, C, and D, for the northwest, northeast, southwest, and southeast quarters of the full scene, respectively. RBV data products are normally ordered, reproduced, and sold on a subscene basis and are in general referred to in this way. Each exposure from the RBV camera system presents an image which is 98 km on a side. When these analog video data are subsequently converted to digital form, the picture element, or pixel, that results is 19 m on a side with an effective resolution element of 30 m. This pixel size is substantially smaller than that obtainable in MSS images (the MSS has an effective resolution element of 73.4 m), and, when RBV images are compared to equivalent MSS images, better resolution in the RBV data is clearly evident. It is for this reason that the RBV system can be a valuable tool for remote sensing of earth resources. Until recently, RBV imagery was processed directly from wideband video tape data onto 70-mm film. This changed in September 1980 when digital production of RBV data at the NASA Goddard Space Flight Center (GSFC) began. The wideband video tape data are now subjected to analog-to-digital preprocessing and corrected both radiometrically and geometrically to produce high-density digital tapes (HDT's). The HDT data are subsequently transmitted via satellite (Domsat) to the EROS Data Center (EDC) where they are used to generate 241-mm photographic images at a scale of 1:500,000. Computer-compatible tapes of the data are also generated as digital products. Of the RBV data acquired since September 1, 1980, approximately 2,800 subscenes per month have been processed at EDC.

  3. Capturing the Initiation and Spatial Variability of Runoff on Soils Affected by Wildfire

    NASA Astrophysics Data System (ADS)

    Martin, D. A.; Wickert, A. D.; Moody, J. A.

    2011-12-01

    Rainfall after wildfire often leads to intense runoff and erosion, since fire removes ground cover that impedes overland flow and water is unable to efficiently infiltrate into the fire-affected soils. In order to understand the relation between rainfall, infiltration, and runoff, we modified a camera to be triggered by a rain gage to take time-lapse photographs of the ground surface every 10 seconds until the rain stops. This camera allows us to observe directly the patterns of ground surface ponding, the initiation of overland flow, and erosion/deposition during single rainfall events. The camera was deployed on a hillslope (average slope = 23 degrees) that was severely burned by the 2010 Fourmile Canyon Fire near Boulder, Colorado. The camera's field of view is approximately 3 m2. We integrate the photographs with rainfall and overland flow measurements to determine thresholds for the initiation of overland flow and erosion. We have recorded the spatial variability of wetted patches of ground and the connection of these patches together to initiate overland flow. To date we have recorded images for rain storms with 30-minute maximum intensities ranging from 5 mm/h (our threshold to trigger continuous photographs) to 32 mm/h. In the near future we will update the camera's control system to 1) include a clock to enable time-lapse photographs at a lower frequency in addition to the event-triggered images, and 2) to add a radio to allow the camera to be triggered remotely. Radio communication will provide a means of starting the camera in response to non-local events, allowing us to capture images or video of flash flood surge fronts and debris flows, and to synchronize the operations of multiple cameras in the field. Schematics and instructions to build this camera station, which can be used to take either photos or video, are open-source licensed and are available online at http://instaar.colorado.edu/~wickert/atvis. It is our hope that this tool can be used by other researchers to better understand processes in burned watersheds and other sensitive areas that are likely to respond rapidly to rainfall.
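
    A minimal sketch of the triggering logic described above (the rain_gage and camera objects are hypothetical device wrappers; only the 10 s cadence and the 5 mm/h threshold come from the text):

    ```python
    import time

    TRIGGER_MM_PER_H = 5.0   # stated 30-minute-intensity trigger threshold
    INTERVAL_S = 10.0        # stated capture cadence while rain continues

    def run_station(rain_gage, camera):
        # rain_gage and camera are hypothetical stand-ins for the real hardware.
        while True:
            if rain_gage.intensity_mm_per_h() >= TRIGGER_MM_PER_H:
                camera.capture()       # one time-lapse frame of the ground surface
                time.sleep(INTERVAL_S)
            else:
                time.sleep(60.0)       # idle poll between storms
    ```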

  4. A Simple Interface for 3D Position Estimation of a Mobile Robot with Single Camera

    PubMed Central

    Chao, Chun-Tang; Chung, Ming-Hsuan; Chiou, Juing-Shian; Wang, Chi-Jo

    2016-01-01

    In recent years, there has been an increase in the number of mobile robots controlled by a smart phone or tablet. This paper proposes a visual control interface for a mobile robot with a single camera to easily control the robot actions and estimate the 3D position of a target. In this proposal, the mobile robot employed an Arduino Yun as the core processor and was remote-controlled by a tablet with an Android operating system. In addition, the robot was fitted with a three-axis robotic arm for grasping. Both the real-time control signal and video transmission are transmitted via Wi-Fi. We show that with a properly calibrated camera and the proposed prototype procedures, the users can click on a desired position or object on the touchscreen and estimate its 3D coordinates in the real world by simple analytic geometry instead of a complicated algorithm. The results of the measurement verification demonstrate that this approach has great potential for mobile robots. PMID:27023556
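
    The "simple analytic geometry" is plausibly a ground-plane intersection; a sketch under that assumption (the pose convention and names are mine, not the paper's): back-project the touched pixel through the camera intrinsics and intersect the resulting ray with the floor plane z = 0.

    ```python
    import numpy as np

    def pixel_to_ground(u, v, K, R, t):
        """World point on the plane z = 0 seen at pixel (u, v), assuming the
        camera model x_cam = R @ X_world + t with intrinsic matrix K."""
        d_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])  # viewing ray, camera frame
        d_world = R.T @ d_cam                             # ray direction, world frame
        c_world = -R.T @ t                                # camera centre, world frame
        s = -c_world[2] / d_world[2]                      # stretch the ray to the floor
        return c_world + s * d_world                      # 3D grasp target for the arm
    ```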

  5. Contactless physiological signals extraction based on skin color magnification

    NASA Astrophysics Data System (ADS)

    Suh, Kun Ha; Lee, Eui Chul

    2017-11-01

    Although the human visual system is not sufficiently sensitive to perceive blood circulation, blood flow caused by cardiac activity makes slight changes on human skin surfaces. With advances in imaging technology, it has become possible to capture these changes with digital cameras. However, it is difficult to obtain clear physiological signals from such changes because of their fineness and noise factors such as motion artifacts and camera sensing disturbances. We propose a method for extracting physiological signals with improved quality from skin-color videos recorded with a remote RGB camera. The results show that our skin color magnification method clearly reveals the hidden physiological components in the time-series signal. A Korea Food and Drug Administration-approved heart rate monitor was used to verify that the resulting signal is synchronized with the actual cardiac pulse, and comparisons of signal peaks showed correlation coefficients of almost 1.0. In particular, our method can serve as an effective preprocessing step before additional postfiltering techniques are applied to improve accuracy in image-based physiological signal extraction.

  6. Using turbulence scintillation to assist object ranging from a single camera viewpoint.

    PubMed

    Wu, Chensheng; Ko, Jonathan; Coffaro, Joseph; Paulson, Daniel A; Rzasa, John R; Andrews, Larry C; Phillips, Ronald L; Crabbs, Robert; Davis, Christopher C

    2018-03-20

    Image distortions caused by atmospheric turbulence are often treated as unwanted noise or errors in many image processing studies. Our study, however, shows that in certain scenarios the turbulence distortion can be very helpful in enhancing image processing results. This paper describes a novel approach that uses the scintillation traits recorded on a video clip to perform object ranging with reasonable accuracy from a single camera viewpoint. Conventionally, a single camera would be confused by the perspective viewing problem, where a large object far away looks the same as a small object close by. When the atmospheric turbulence phenomenon is considered, the edge or texture pixels of an object tend to scintillate and vary more with increased distance. This turbulence induced signature can be quantitatively analyzed to achieve object ranging with reasonable accuracy. Despite the inevitable fact that turbulence will cause random blurring and deformation of imaging results, it also offers convenient solutions to some remote sensing and machine vision problems, which would otherwise be difficult.
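
    One way to reduce that observation to a number, sketched under assumed constants (this is not the authors' estimator): accumulate the temporal variance of edge and texture pixels over a short clip; calibrated against targets at known distances, the index grows with range.

    ```python
    import numpy as np

    def scintillation_index(frames, grad_thresh=30.0):
        """frames: iterable of grayscale images of the same static scene.
        Returns the mean temporal variance over edge pixels (a range proxy)."""
        stack = np.stack([np.asarray(f, dtype=float) for f in frames])  # (T, H, W)
        gy, gx = np.gradient(stack.mean(axis=0))     # edges of the time-averaged image
        edge_mask = np.hypot(gx, gy) > grad_thresh   # threshold is an assumption
        return float(np.var(stack, axis=0)[edge_mask].mean())
    ```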

  7. A Simple Interface for 3D Position Estimation of a Mobile Robot with Single Camera.

    PubMed

    Chao, Chun-Tang; Chung, Ming-Hsuan; Chiou, Juing-Shian; Wang, Chi-Jo

    2016-03-25

    In recent years, there has been an increase in the number of mobile robots controlled by a smart phone or tablet. This paper proposes a visual control interface for a mobile robot with a single camera to easily control the robot actions and estimate the 3D position of a target. In this proposal, the mobile robot employed an Arduino Yun as the core processor and was remote-controlled by a tablet with an Android operating system. In addition, the robot was fitted with a three-axis robotic arm for grasping. Both the real-time control signal and video transmission are transmitted via Wi-Fi. We show that with a properly calibrated camera and the proposed prototype procedures, the users can click on a desired position or object on the touchscreen and estimate its 3D coordinates in the real world by simple analytic geometry instead of a complicated algorithm. The results of the measurement verification demonstrate that this approach has great potential for mobile robots.

  8. Automated vehicle counting using image processing and machine learning

    NASA Astrophysics Data System (ADS)

    Meany, Sean; Eskew, Edward; Martinez-Castro, Rosana; Jang, Shinae

    2017-04-01

    Vehicle counting is used by the government to improve roadways and the flow of traffic, and by private businesses for purposes such as determining the value of locating a new store in an area. A vehicle count can be performed manually or automatically. Manual counting requires an individual to be on-site and tally the traffic electronically or by hand; however, this can lead to miscounts due to factors such as human error. A common form of automatic counting involves pneumatic tubes, but pneumatic tubes disrupt traffic during installation and removal, and can be damaged by passing vehicles. Vehicle counting can also be performed via a camera at the count site recording video of the traffic, with counting performed manually post-recording or using automatic algorithms. This paper presents a low-cost procedure to perform automatic vehicle counting using remote video cameras with an automatic counting algorithm. The procedure utilizes a Raspberry Pi micro-computer to detect when a car is in a lane and to generate an accurate count of vehicle movements. The method utilized in this paper uses background subtraction to process the images and a machine learning algorithm to provide the count. This method avoids the fatigue issues encountered in manual video counting and prevents the disruption of roadways that occurs when installing pneumatic tubes.
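
    A sketch of the counting stage under the stated approach, with background subtraction feeding a simple decision rule (the count-line position, blob-area threshold, and file name are assumptions; the paper's machine learning classifier could replace the area test):

    ```python
    import cv2
    import numpy as np

    subtractor = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=True)
    kernel = np.ones((3, 3), np.uint8)
    count, line_y, zone_busy = 0, 240, False      # count line at y = 240 (assumed)

    cap = cv2.VideoCapture("traffic.mp4")         # hypothetical roadside clip
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        mask = cv2.morphologyEx(subtractor.apply(frame), cv2.MORPH_OPEN, kernel)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        hit = False
        for c in contours:
            x, y, w, h = cv2.boundingRect(c)
            if cv2.contourArea(c) > 800 and y < line_y < y + h:
                hit = True                        # a sizeable blob straddles the line
        if hit and not zone_busy:
            count += 1                            # count once per crossing event
        zone_busy = hit
    cap.release()
    print("vehicles counted:", count)
    ```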

  9. Burbank uses video camera during installation and routing of HRCS Video Cables

    NASA Image and Video Library

    2012-02-01

    ISS030-E-060104 (1 Feb. 2012) --- NASA astronaut Dan Burbank, Expedition 30 commander, uses a video camera in the Destiny laboratory of the International Space Station during installation and routing of video cable for the High Rate Communication System (HRCS). HRCS will allow for two additional space-to-ground audio channels and two additional downlink video channels.

  10. Distributing digital video to multiple computers

    PubMed Central

    Murray, James A.

    2004-01-01

    Video is an effective teaching tool, and live video microscopy is especially helpful in teaching dissection techniques and the anatomy of small neural structures. Digital video equipment is more affordable now and allows easy conversion from older analog video devices. I here describe a simple technique for bringing digital video from one camera to all of the computers in a single room. This technique allows students to view and record the video from a single camera on a microscope. PMID:23493464

  11. Video segmentation and camera motion characterization using compressed data

    NASA Astrophysics Data System (ADS)

    Milanese, Ruggero; Deguillaume, Frederic; Jacot-Descombes, Alain

    1997-10-01

    We address the problem of automatically extracting visual indexes from videos in order to provide sophisticated access methods to the contents of a video server. We focus on two tasks, namely the decomposition of a video clip into uniform segments and the characterization of each shot by camera motion parameters. For the first task we use a Bayesian classification approach to detecting scene cuts by analyzing motion vectors. For the second task a least-squares fitting procedure determines the pan/tilt/zoom camera parameters. In order to guarantee the highest processing speed, all techniques process and analyze MPEG-1 motion vectors directly, without need for video decompression. Experimental results are reported for a database of news video clips.
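
    For the second task, a plausible form of the least-squares fit (the paper's exact parameterisation may differ): model each compressed-domain motion vector as a pan/tilt translation plus a zoom term about the frame centre, then solve the overdetermined system over all macroblocks of a frame.

    ```python
    import numpy as np

    def fit_pan_tilt_zoom(xs, ys, us, vs, cx, cy):
        """xs, ys: macroblock centres; us, vs: MPEG-1 motion vectors.
        Fits u = pan + zoom*(x - cx), v = tilt + zoom*(y - cy)."""
        xs, ys, us, vs = map(np.asarray, (xs, ys, us, vs))
        A = np.zeros((2 * xs.size, 3))
        b = np.empty(2 * xs.size)
        A[0::2, 0] = 1.0; A[0::2, 2] = xs - cx; b[0::2] = us   # horizontal equations
        A[1::2, 1] = 1.0; A[1::2, 2] = ys - cy; b[1::2] = vs   # vertical equations
        (pan, tilt, zoom), *_ = np.linalg.lstsq(A, b, rcond=None)
        return pan, tilt, zoom
    ```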

  12. Linear array of photodiodes to track a human speaker for video recording

    NASA Astrophysics Data System (ADS)

    DeTone, D.; Neal, H.; Lougheed, R.

    2012-12-01

    Communication and collaboration using stored digital media has garnered more interest by many areas of business, government and education in recent years. This is due primarily to improvements in the quality of cameras and speed of computers. An advantage of digital media is that it can serve as an effective alternative when physical interaction is not possible. Video recordings that allow viewers to discern a presenter's facial features, lips and hand motions are more effective than videos that do not. To attain this, one must maintain a video capture in which the speaker occupies a significant portion of the captured pixels. However, camera operators are costly, and often do an imperfect job of tracking presenters in unrehearsed situations. This creates motivation for a robust, automated system that directs a video camera to follow a presenter as he or she walks anywhere in the front of a lecture hall or large conference room. Such a system is presented. The system consists of a commercial, off-the-shelf pan/tilt/zoom (PTZ) color video camera, a necklace of infrared LEDs and a linear photodiode array detector. Electronic output from the photodiode array is processed to generate the location of the LED necklace, which is worn by a human speaker. The computer controls the video camera movements to record video of the speaker. The speaker's vertical position and depth are assumed to remain relatively constant; the video camera is sent only panning (horizontal) movement commands. The LED necklace is flashed at 70 Hz at a 50% duty cycle to provide noise-filtering capability. The benefit of using a photodiode array versus a standard video camera is its higher frame rate (4 kHz vs. 60 Hz). The higher frame rate allows for the filtering of infrared noise such as sunlight and indoor lighting, a capability absent from other tracking technologies. The system has been tested in a large lecture hall and is shown to be effective.
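
    The payoff of the 4 kHz sampling and the 70 Hz flash can be made concrete with a lock-in-style demodulation sketch (array shape and names are assumptions): correlating each pixel's time series against a 70 Hz reference nulls constant ambient light, leaving energy only at the pixels imaging the necklace.

    ```python
    import numpy as np

    def locate_necklace(samples, fs=4000.0, f_led=70.0):
        """samples: (n_samples, n_pixels) block from the linear photodiode array.
        Returns the index of the pixel with the strongest 70 Hz component."""
        t = np.arange(samples.shape[0]) / fs
        ref = np.exp(-2j * np.pi * f_led * t)        # complex reference tone
        amplitude = np.abs(ref @ samples) / t.size   # per-pixel 70 Hz amplitude
        return int(np.argmax(amplitude))             # DC sunlight/room light rejected
    ```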

  13. Photogrammetric Applications of Immersive Video Cameras

    NASA Astrophysics Data System (ADS)

    Kwiatek, K.; Tokarczyk, R.

    2014-05-01

    The paper investigates immersive videography and its application in close-range photogrammetry. Immersive video involves the capture of a live-action scene presenting a 360° field of view. It is recorded simultaneously by multiple cameras or microlenses, where the principal point of each camera is offset from the rotating axis of the device. This causes problems when stitching together individual video frames separated from particular cameras, but there are ways to overcome it, and applying immersive cameras in photogrammetry offers new potential. The paper presents two applications of immersive video in photogrammetry. First, the creation of a low-cost mobile mapping system based on a Ladybug®3 camera and a GPS device is discussed. The number of panoramas is much higher than needed for photogrammetric purposes, as the baseline between spherical panoramas is around 1 metre. More than 92 000 panoramas were recorded in the Polish region of Czarny Dunajec, and measurements from the panoramas enable the user to measure outdoor advertising structures and billboards. A new law is being created to limit the number of illegal advertising structures in the Polish landscape, and immersive video recorded in a short period of time is a candidate for economical and flexible off-site measurement. The second application is the generation of 3D video-based reconstructions of heritage sites from immersive video (structure from immersive video). A mobile camera mounted on a tripod dolly was used to record the interior scene, and the immersive video, separated into thousands of still panoramas, was converted into 3D objects using Agisoft Photoscan Professional. The findings from these experiments demonstrate that immersive photogrammetry is a flexible and prompt method of 3D modelling and provides promising features for mobile mapping systems.

  14. Camera network video summarization

    NASA Astrophysics Data System (ADS)

    Panda, Rameswar; Roy-Chowdhury, Amit K.

    2017-05-01

    Networks of vision sensors are deployed in many settings, ranging from security to disaster response to environmental monitoring. Many of these setups have hundreds of cameras and tens of thousands of hours of video. The difficulty of analyzing such a massive volume of video data is apparent whenever an incident requires foraging through vast video archives to identify events of interest. As a result, video summarization, which automatically extracts a brief yet informative summary of these videos, has attracted intense attention in recent years. Much progress has been made in developing a variety of ways to summarize a single video in the form of a key sequence or video skim. However, generating a summary from a set of videos captured in a multi-camera network remains a novel and largely under-addressed problem. In this paper, with the aim of summarizing videos in a camera network, we introduce a novel representative selection approach via joint embedding and capped l21-norm minimization. The objective function is two-fold. The first part captures the structural relationships of data points in a camera network via an embedding, which helps in characterizing outliers and in extracting a diverse set of representatives. The second uses a capped l21-norm to model sparsity and to suppress the influence of data outliers in representative selection. We propose to jointly optimize both objectives, so that the embedding can not only characterize the structure but also indicate the requirements of sparse representative selection. Extensive experiments on standard multi-camera datasets demonstrate the efficacy of our method over state-of-the-art methods.
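
    The two-part objective plausibly takes a self-expressive form such as the following (notation assumed, not copied from the paper), where rows of $Z$ with nonzero norm select the representative frames and the cap $\theta$ limits how much any single outlier can contribute:

    ```latex
    \min_{Z}\; \|X - XZ\|_F^2 \;+\; \lambda \sum_{i=1}^{n} \min\bigl(\|z^{i}\|_2,\, \theta\bigr)
    ```

    Here $X$ stacks the embedded features of all frames across cameras, $z^{i}$ is the $i$-th row of the selection matrix $Z$, and $\lambda$ trades reconstruction fidelity against sparsity of the selection.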

  15. Coral Reef Surveillance: Infrared-Sensitive Video Surveillance Technology as a New Tool for Diurnal and Nocturnal Long-Term Field Observations.

    PubMed

    Dirnwoeber, Markus; Machan, Rudolf; Herler, Juergen

    2012-10-31

    Direct field observations of fine-scaled biological processes and interactions of the benthic community of corals and associated reef organisms (e.g., feeding, reproduction, mutualistic or agonistic behavior, behavioral responses to changing abiotic factors) usually involve a disturbing intervention. Modern digital camcorders (without inflexible land- or ship-based cable connection) such as the GoPro camera enable undisturbed and unmanned, stationary close-up observations. Such observations, however, are also very time-limited (~3 h), and full 24 h recordings throughout day and night, including nocturnal observations without artificial daylight illumination, are not possible. Herein we introduce the application of modern standard video surveillance technology with the main objective of providing a tool for monitoring coral reef or other sessile and mobile organisms for periods of 24 h and longer. This system includes nocturnal close-up observations with miniature infrared (IR)-sensitive cameras and separate high-power IR-LEDs. Integrating this easy-to-set-up and portable remote-sensing equipment into coral reef research is expected to significantly advance our understanding of fine-scaled biotic processes on coral reefs. Rare events and long-lasting processes can easily be recorded, in situ experiments can be monitored live on land, and nocturnal IR observations reveal undisturbed behavior. The options and equipment choices in IR-sensitive surveillance technology are numerous and subject to steadily increasing technical supply and quality at decreasing prices. Accompanied by short video examples, this report introduces a radio-transmission system for simultaneous recordings and real-time monitoring of multiple cameras with synchronized timestamps, and a surface-independent underwater-recording system.

  16. Snapshot hyperspectral fovea vision system (HyperVideo)

    NASA Astrophysics Data System (ADS)

    Kriesel, Jason; Scriven, Gordon; Gat, Nahum; Nagaraj, Sheela; Willson, Paul; Swaminathan, V.

    2012-06-01

    The development and demonstration of a new snapshot hyperspectral sensor is described. The system is a significant extension of the four dimensional imaging spectrometer (4DIS) concept, which resolves all four dimensions of hyperspectral imaging data (2D spatial, spectral, and temporal) in real-time. The new sensor, dubbed "4×4DIS" uses a single fiber optic reformatter that feeds into four separate, miniature visible to near-infrared (VNIR) imaging spectrometers, providing significantly better spatial resolution than previous systems. Full data cubes are captured in each frame period without scanning, i.e., "HyperVideo". The current system operates up to 30 Hz (i.e., 30 cubes/s), has 300 spectral bands from 400 to 1100 nm (~2.4 nm resolution), and a spatial resolution of 44×40 pixels. An additional 1.4 Megapixel video camera provides scene context and effectively sharpens the spatial resolution of the hyperspectral data. Essentially, the 4×4DIS provides a 2D spatially resolved grid of 44×40 = 1760 separate spectral measurements every 33 ms, which is overlaid on the detailed spatial information provided by the context camera. The system can use a wide range of off-the-shelf lenses and can either be operated so that the fields of view match, or in a "spectral fovea" mode, in which the 4×4DIS system uses narrow field of view optics, and is cued by a wider field of view context camera. Unlike other hyperspectral snapshot schemes, which require intensive computations to deconvolve the data (e.g., Computed Tomographic Imaging Spectrometer), the 4×4DIS requires only a linear remapping, enabling real-time display and analysis. The system concept has a range of applications including biomedical imaging, missile defense, infrared counter measure (IRCM) threat characterization, and ground based remote sensing.

  17. Coral Reef Surveillance: Infrared-Sensitive Video Surveillance Technology as a New Tool for Diurnal and Nocturnal Long-Term Field Observations

    PubMed Central

    Dirnwoeber, Markus; Machan, Rudolf; Herler, Juergen

    2014-01-01

    Direct field observations of fine-scaled biological processes and interactions of the benthic community of corals and associated reef organisms (e.g., feeding, reproduction, mutualistic or agonistic behavior, behavioral responses to changing abiotic factors) usually involve a disturbing intervention. Modern digital camcorders (without inflexible land- or ship-based cable connection) such as the GoPro camera enable undisturbed and unmanned, stationary close-up observations. Such observations, however, are also very time-limited (~3 h), and full 24 h recordings throughout day and night, including nocturnal observations without artificial daylight illumination, are not possible. Herein we introduce the application of modern standard video surveillance technology with the main objective of providing a tool for monitoring coral reef or other sessile and mobile organisms for periods of 24 h and longer. This system includes nocturnal close-up observations with miniature infrared (IR)-sensitive cameras and separate high-power IR-LEDs. Integrating this easy-to-set-up and portable remote-sensing equipment into coral reef research is expected to significantly advance our understanding of fine-scaled biotic processes on coral reefs. Rare events and long-lasting processes can easily be recorded, in situ experiments can be monitored live on land, and nocturnal IR observations reveal undisturbed behavior. The options and equipment choices in IR-sensitive surveillance technology are numerous and subject to steadily increasing technical supply and quality at decreasing prices. Accompanied by short video examples, this report introduces a radio-transmission system for simultaneous recordings and real-time monitoring of multiple cameras with synchronized timestamps, and a surface-independent underwater-recording system. PMID:24829763

  18. Constructing spherical panoramas of a bladder phantom from endoscopic video using bundle adjustment

    NASA Astrophysics Data System (ADS)

    Soper, Timothy D.; Chandler, John E.; Porter, Michael P.; Seibel, Eric J.

    2011-03-01

    The high recurrence rate of bladder cancer requires patients to undergo frequent surveillance screenings over their lifetime following initial diagnosis and resection. Our laboratory is developing panoramic stitching software that compiles several minutes of cystoscopic video into a single panoramic image covering the entire bladder, for review by a urologist at a later time or remote location. Global alignment of video frames is achieved using a bundle adjuster that simultaneously recovers both the 3D structure of the bladder and the scope motion using only the video frames as input. The result of the algorithm is a complete 360° spherical panorama of the outer surface. The details of the software algorithms are presented here, along with results from both a virtual cystoscopy and real endoscopic imaging of a bladder phantom. The software successfully stitched several hundred video frames into a single panorama with subpixel accuracy and with no knowledge of intrinsic camera properties such as focal length and radial distortion. In the discussion, we outline future work on the software and identify factors pertinent to clinical translation of this technology.
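
    The global alignment step is standard bundle adjustment, i.e., a joint minimisation of reprojection error over scope poses and surface points (the textbook formulation, not notation taken from the paper):

    ```latex
    \min_{\{P_j\},\,\{X_i\}} \; \sum_{i,j} v_{ij}\, \bigl\| \mathbf{x}_{ij} - \pi(P_j, X_i) \bigr\|^2
    ```

    where $X_i$ are 3D bladder-surface points, $P_j$ the pose and intrinsics of frame $j$, $\pi$ the projection function, $\mathbf{x}_{ij}$ the observed feature location, and $v_{ij}$ a visibility indicator.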

  19. A compact high-definition low-cost digital stereoscopic video camera for rapid robotic surgery development.

    PubMed

    Carlson, Jay; Kowalczuk, Jędrzej; Psota, Eric; Pérez, Lance C

    2012-01-01

    Robotic surgical platforms require vision feedback systems, which often consist of low-resolution, expensive, single-imager analog cameras. These systems are retooled for 3D display by simply doubling the cameras and outboard control units. Here, a fully-integrated digital stereoscopic video camera employing high-definition sensors and a class-compliant USB video interface is presented. This system can be used with low-cost PC hardware and consumer-level 3D displays for tele-medical surgical applications including military medical support, disaster relief, and space exploration.

  20. Anatomy-driven design of a prototype video laryngoscope for extremely low birth weight infants

    NASA Astrophysics Data System (ADS)

    Baker, Katherine; Tremblay, Eric; Karp, Jason; Ford, Joseph; Finer, Neil; Rich, Wade

    2010-11-01

    Extremely low birth weight (ELBW) infants frequently require endotracheal intubation for assisted ventilation or as a route for administration of drugs or exogenous surfactant. In adults and less premature infants, the risks of this intubation can be greatly reduced using video laryngoscopy, but current products are too large and incorrectly shaped to visualize an ELBW infant's airway anatomy. We design and prototype a video laryngoscope using a miniature camera set in a curved acrylic blade with a 3×6-mm cross section at the tip. The blade provides a mechanical structure for stabilizing the tongue and acts as a light guide for an LED light source, located remotely to avoid excessive local heating at the tip. The prototype is tested on an infant manikin and found to provide sufficient image quality and mechanical properties to facilitate intubation. Finally, we show a design for a neonate laryngoscope incorporating a wafer-level microcamera that further reduces the tip cross section and offers the potential for low cost manufacture.

  1. Design and Fabrication of Nereid-UI: A Remotely Operated Underwater Vehicle for Oceanographic Access Under Ice

    NASA Astrophysics Data System (ADS)

    Whitcomb, L. L.; Bowen, A. D.; Yoerger, D.; German, C. R.; Kinsey, J. C.; Mayer, L. A.; Jakuba, M. V.; Gomez-Ibanez, D.; Taylor, C. L.; Machado, C.; Howland, J. C.; Kaiser, C. L.; Heintz, M.; Pontbriand, C.; Suman, S.; O'hara, L.

    2013-12-01

    The Woods Hole Oceanographic Institution and collaborators from the Johns Hopkins University and the University of New Hampshire are developing for the Polar Science Community a remotely-controlled underwater robotic vehicle capable of being tele-operated under ice under remote real-time human supervision. The Nereid Under-Ice (Nereid-UI) vehicle will enable exploration and detailed examination of biological and physical environments at glacial ice-tongues and ice-shelf margins, delivering high-definition video in addition to survey data from onboard acoustic, chemical, and biological sensors. Preliminary propulsion system testing indicates the vehicle will be able to attain standoff distances of up to 20 km from an ice-edge boundary, as dictated by the current maximum tether length. The goal of the Nereid-UI system is to provide scientific access to under-ice and ice-margin environments that is presently impractical or infeasible. FIBER-OPTIC TETHER: The heart of the Nereid-UI system is its expendable fiber-optic telemetry system. The telemetry system utilizes many of the same components pioneered for the full-ocean-depth-capable HROV Nereus vehicle, with the addition of continuous fiber status monitoring, and new float-pack and depressor designs that enable single-body deployment. POWER SYSTEM: Nereid-UI is powered by a pressure-tolerant lithium-ion battery system composed of 30 Ah prismatic pouch cells, arranged on a 90 volt bus and capable of delivering 15 kW. The cells are contained in modules of 8 cells, and groups of 9 modules are housed together in oil-filled plastic boxes. The power distribution system uses pressure-tolerant components extensively, each of which has been individually qualified to 10 kpsi and operation between -20 C and 40 C. THRUSTERS: Nereid-UI will employ eight identical WHOI-designed thrusters, each with a frameless motor, oil-filled and individually compensated, and designed for low-speed (500 rpm max) direct drive. We expect an end-to-end propulsive efficiency of between 0.3 and 0.4 at a transit speed of 1 m/s based on testing conducted at WHOI. CAMERAS: Video imagery is one of the principal products of Nereid-UI. Two fiber-optic telemetry wavelengths deliver 1.5 Gb/s uncompressed HDSDI video to the support vessel in real time, supporting a Kongsberg OE14-522 hyperspherical pan and tilt HD camera and several utility cameras. PROJECT STATUS: The first shallow-water vehicle trials are scheduled for September 2013. The trials are designed to test core vehicle systems, particularly the power system, main computer and control system, thrusters, and video and telemetry system, and to refine camera, lighting and acoustic sensor placement for piloted and closed-loop control, especially as it pertains to working near the underside of ice. Remaining vehicle design tasks include finalizing the single-body deployment concept and depressor, populating the scientific sensing suite, and the software development necessary to implement the planned autonomous return strategy. Final design and fabrication for these remaining components of the vehicle system will proceed through fall 2013, with trials under lake ice in early 2014, and potential polar trials beginning in 2014-15. SUPPORT: NSF OPP (ANT-1126311), WHOI, James Family Foundation, and George Frederick Jewett Foundation East.

  2. On-line content creation for photo products: understanding what the user wants

    NASA Astrophysics Data System (ADS)

    Fageth, Reiner

    2015-03-01

    This paper describes how videos can be implemented into printed photo books and greeting cards. We show that, surprisingly or not, pictures from videos are used much like classical images to tell compelling stories. Videos can be taken with nearly every camera: digital point-and-shoot cameras, DSLRs, smartphones, and increasingly so-called action cameras mounted on sports devices. The implementation of videos, generating QR codes and representative pictures out of the video stream via a software implementation, was the subject of last year's paper. This year we present first data about what content is displayed and how users represent their videos in printed products, e.g. CEWE PHOTOBOOKS and greeting cards. We report the share of the different video formats used.

  3. Rugged Video System For Inspecting Animal Burrows

    NASA Technical Reports Server (NTRS)

    Triandafils, Dick; Maples, Art; Breininger, Dave

    1992-01-01

    Video system designed for examining interiors of burrows of gopher tortoises, 5 in. (13 cm) in diameter or greater, to depth of 18 ft. (about 5.5 m), includes video camera, video cassette recorder (VCR), television monitor, control unit, and power supply, all carried in backpack. Polyvinyl chloride (PVC) poles used to maneuver camera into (and out of) burrows, stiff enough to push camera into burrow, but flexible enough to bend around curves. Adult tortoises and other burrow inhabitants observable, young tortoises and such small animals as mice obscured by sand or debris.

  4. Using a Video Camera to Measure the Radius of the Earth

    ERIC Educational Resources Information Center

    Carroll, Joshua; Hughes, Stephen

    2013-01-01

    A simple but accurate method for measuring the Earth's radius using a video camera is described. A video camera was used to capture a shadow rising up the wall of a tall building at sunset. A free program called ImageJ was used to measure the time it took the shadow to rise a known distance up the building. The time, distance and length of…

  5. Airborne infrared video radiometry as a low-cost tool for remote sensing of the environment, two mapping examples from Israel of urban heat islands and mineralogical site

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ben-Dor, E.; Saaroni, H.; Ochana, D.

    1996-10-01

    In this study we examined the capability of a laboratory infrared video camera for use in remote sensing of the environment. The instrument used, an INFRAMETRICS 760, was mounted onboard a Bell 206 helicopter. Under the flight conditions examined, the radiometer proved to be very stable and produced high-quality thermal images in real-time mode. We studied two different environmental aspects: (1) the urban heat island of the most densely populated city in Israel, Tel-Aviv; and (2) the lithological distribution of a well-known mineralogical site in Israel, Makhtesh Ramon. In both studies the radiometer was able to produce a temperature presentation, rather than a gray scale, from altitudes of 7,000 and 10,000 feet at 70 knots air speed. The instrument produced a high-quality set of data in terms of signal-to-noise, stability, temperature accuracy and spatial resolution. In the Tel-Aviv case, the results showed that the urban heat island of the city can be depicted at very high spatial and thermal resolution and that a significant correlation exists between ground objects and the surrounding air temperature values. Based on the flight results, we could generate an isotherm map of the city that, for the first time, located the urban heat island of the city at both meso- and microscales. In the case of Makhtesh Ramon, we found that under field conditions the radiometer, coupled with a VIS-CCD camera, can provide significant ATI parameters of typical rocks that characterize the study area. Although more study is planned and suggested based on the current data, it was concluded that airborne thermal video radiometry is a promising, inexpensive tool for monitoring the environment on a real-time basis. 10 refs., 5 figs., 1 tab.

  6. Developing Short Films of Geoscience Research

    NASA Astrophysics Data System (ADS)

    Shipman, J. S.; Webley, P. W.; Dehn, J.; Harrild, M.; Kienenberger, D.; Salganek, M.

    2015-12-01

    In today's prevalence of social media and networking, video products are becoming increasingly useful for communicating research quickly and effectively to a diverse audience, including outreach activities as well as the research community and funding agencies. Due to the observational nature of geoscience, researchers often take photos and video footage to document fieldwork or to record laboratory experiments. Here we present how researchers can become more effective storytellers by collaborating with filmmakers to produce short documentary films of their research. We will focus on the use of traditional high-definition (HD) camcorders and HD DSLR cameras to record the scientific story, while our research topic focuses on remote sensing techniques, specifically the thermal infrared imaging often used to analyze time-varying natural processes such as volcanic hazards. By capturing the story in the thermal infrared wavelength range, in addition to the traditional red-green-blue (RGB) color space, the audience is able to experience the world differently. We will develop a short film, designed specifically around thermal infrared cameras, that illustrates how visual storytellers can use these new tools to capture unique and important aspects of their research, convey their passion for earth systems science, and engage and captivate the viewer.

  7. Selective visual region of interest to enhance medical video conferencing

    NASA Astrophysics Data System (ADS)

    Bonneau, Walt, Jr.; Read, Christopher J.; Shirali, Girish

    1998-06-01

    The continued economic pressure being placed upon the healthcare industry creates both challenges and opportunities to develop cost-effective healthcare tools. Tools that improve the quality of medical care while at the same time improving the distribution of efficient care will create product demand. Video conferencing systems are among the latest product technologies making their way into healthcare applications. Systems that provide quality bidirectional video and imaging at the lowest system and communication cost are creating many possible options for the healthcare industry. A method that uses only 128 kbits/sec of ISDN bandwidth while providing quality video images in selected regions is applied to echocardiograms using a low-cost video conferencing system operating within the bandwidth of a basic-rate ISDN line. Within a given display area (frame), it has been observed that only selected informational areas of the frame are of value when viewing for detail and precision, much in the same manner that a photograph is cropped. If a Region Of Interest (ROI) method were applied to video conferencing using the H.320, H.263 (compression), and H.281 (camera control) international standards, medical image quality could be achieved in a cost-effective manner. For example, the cardiologist could be provided with a selectable three- to eight-end-point viewable ROI polygon that defines the ROI in the image. This is achieved by the video system calculating the selected regional end-points and creating an alpha mask that signifies the importance of the ROI to the compression processor. This region is then applied to the compression algorithm such that the majority of the video conferencing processor cycles are focused on the ROI of the image. An occasional update of the non-ROI area is processed to maintain total image coherence, and the user could control these non-ROI updates. Encoder-side ROI specification is of value; however, the capability becomes more powerful if remote access and selection of the ROI are also provided. Using the H.281 camera standard and proposing an additional option to the standard to allow remote ROI selection would make this possible. When ROI is applied, the equivalent of 384 kbits/sec ISDN rates may be achieved or exceeded using 128 kbits/sec, depending upon the size of the selected ROI. This opens additional opportunities for international calling and reduces call charges by up to sixty-six percent, making recurring communication costs attractive. Rates of twenty to thirty quality ROI updates could be achieved. It is, however, important to understand that this technique is still under development.
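
    The ROI scheme above turns the viewer-selected three-to-eight-point polygon into an alpha mask that steers the encoder's effort. A minimal sketch of the mask-building step, assuming a Pillow raster fill; the coupling to an H.263 quantizer is only indicated in comments, and all names are illustrative:

        import numpy as np
        from PIL import Image, ImageDraw

        def roi_alpha_mask(frame_size, polygon):
            """Rasterize a 3-8 point ROI polygon into an alpha mask
            (1.0 inside the region of interest, 0.0 outside)."""
            mask = Image.new("L", frame_size, 0)
            ImageDraw.Draw(mask).polygon(polygon, fill=255)
            return np.asarray(mask, dtype=float) / 255.0

        # An encoder could map the mask to per-macroblock quantizer steps:
        # fine quantization inside the ROI, coarse outside, with the non-ROI
        # area refreshed only occasionally to maintain image coherence.
        mask = roi_alpha_mask((352, 288),
                              [(60, 40), (300, 50), (280, 240), (40, 200)])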

  8. Instrumentation for Infrared Airglow Clutter.

    DTIC Science & Technology

    1987-03-10

    gain, and filter position to the Camera Head, and monitors these parameters as well as preamp video. GAZER is equipped with a Lenzar wide angle, low...Specifications/Parameters: VIDEO SENSOR: Camera: LENZAR Intensicon-8 LLLTV using 2nd-gen micro-channel intensifier and proprietary camera tube

  9. Fiber optic cable-based high-resolution, long-distance VGA extenders

    NASA Astrophysics Data System (ADS)

    Rhee, Jin-Geun; Lee, Iksoo; Kim, Heejoon; Kim, Sungjoon; Koh, Yeon-Wan; Kim, Hoik; Lim, Jiseok; Kim, Chur; Kim, Jungwon

    2013-02-01

    Remote transfer of high-resolution video information is finding growing application in detached displays for large facilities such as theaters, sports complexes, airports, and security facilities. Active optical cables (AOCs) provide a promising approach for achieving both transmittable resolutions and distances that standard copper-based cables cannot reach. In addition to standard digital formats such as HDMI, the high-resolution, long-distance transfer of VGA-format signals is important for applications where high-resolution analog video ports must also be supported, such as military/defense applications and high-resolution video camera links. Here we present the development of compressionless, high-resolution (up to WUXGA, 1920x1200), long-distance (up to 2 km) VGA extenders based on a serialization technique. We employed asynchronous serial transmission and clock regeneration techniques, which enable lower-cost implementation of VGA extenders by removing the need for clock transmission and large memory at the receiver. Two 3.125-Gbps transceivers are used in parallel to meet the required maximum video data rate of 6.25 Gbps. As the data are transmitted asynchronously, a 24-bit pixel clock time stamp is employed to accurately regenerate the video pixel clock at the receiver side. In parallel with the video information, stereo audio and RS-232 control signals are transmitted as well.
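
    The 6.25-Gbps requirement quoted above can be roughly reproduced from the stated video parameters; the sketch below assumes a 60-Hz refresh, 24-bit color, and an illustrative 35% blanking overhead, since the exact video timing is not given in the abstract:

        # Rough link-budget check for uncompressed WUXGA over the extender.
        active_pixels = 1920 * 1200      # WUXGA visible resolution
        blanking_factor = 1.35           # assumed blanking-interval overhead
        frame_rate = 60                  # Hz, assumed refresh rate
        bits_per_pixel = 24              # RGB, 8 bits per channel

        raw_rate = active_pixels * blanking_factor * frame_rate * bits_per_pixel
        print(f"approx. raw video rate: {raw_rate / 1e9:.2f} Gbps")
        # ~4.5 Gbps of pixel data; serialization framing plus the 24-bit
        # pixel-clock time stamps bring this toward the 6.25 Gbps offered
        # by the two 3.125-Gbps transceivers running in parallel.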

  10. A Remote-Control Airship for Coastal and Environmental Research

    NASA Astrophysics Data System (ADS)

    Puleo, J. A.; O'Neal, M. A.; McKenna, T. E.; White, T.

    2008-12-01

    The University of Delaware recently acquired an 18 m (60 ft) remote-control airship capable of carrying a 36 kg (120 lb) scientific payload for coastal and environmental research. By combining the benefits of tethered balloons (stable dwell time) and powered aircraft (ability to navigate), the platform allows for high-resolution data collection in both time and space. The platform was developed by Galaxy Blimps, LLC of Dallas, TX for collecting high-definition video of sporting events. The airship can fly to altitudes of at least 600 m (2000 ft), reaching speeds between zero and 18 m/s (35 knots) in winds up to 13 m/s (25 knots). Using a hand-held console and radio transmitter, a ground-based operator can manipulate the orientation and throttle of two gasoline engines, and the orientation of four fins. Airship location is delivered to the operator through a data downlink from an onboard altimeter and global positioning system (GPS) receiver. Scientific payloads are easily attached to a rail system on the underside of the blimp. Data collection can be automated (fixed time intervals) or triggered by a second operator using a second hand-held console. Data can be stored onboard or transmitted in real time to a ground-based computer. The first science mission (Fall 2008) is designed to collect images of tidal inundation of a salt marsh to support numerical modeling of water quality in the Murderkill River Estuary in Kent County, Delaware (a tributary of Delaware Bay in the USA Mid-Atlantic region). Time-sequenced imagery will be collected by a ten-megapixel camera and a thermal-infrared imager mounted in separate remote-control, gyro-stabilized camera mounts on the blimp. Live video feeds will be transmitted to the instrument operator on the ground. Resulting time series data will ultimately be used to compare/update independent estimates of inundation based on LiDAR elevations and a suite of tide and temperature gauges.

  11. Electronic cameras for low-light microscopy.

    PubMed

    Rasnik, Ivan; French, Todd; Jacobson, Ken; Berland, Keith

    2013-01-01

    This chapter introduces electronic cameras, discusses the various parameters considered in evaluating their performance, and describes some of the key features of different camera formats. The chapter also presents a basic understanding of how electronic cameras function and how their properties can be exploited to optimize image quality under low-light conditions. Although there are many types of cameras available for microscopy, the most reliable type is the charge-coupled device (CCD) camera, which remains preferred for high-performance systems. If time resolution and frame rate are of no concern, slow-scan CCDs certainly offer the best available performance, both in terms of signal-to-noise ratio and spatial resolution. Slow-scan cameras are thus the first choice for experiments using fixed specimens, such as measurements using immunofluorescence and fluorescence in situ hybridization. However, if video-rate imaging is required, one need not evaluate slow-scan CCD cameras. A very basic video CCD may suffice if samples are heavily labeled or are not perturbed by high-intensity illumination. When video-rate imaging is required for very dim specimens, the electron-multiplying CCD camera is probably the most appropriate at this technological stage. Intensified CCDs provide a unique tool for applications in which high-speed gating is required. Variable-integration-time video cameras are very attractive options if one needs to acquire images at video rate as well as with longer integration times for less bright samples. This flexibility can facilitate many diverse applications with highly varied light levels. Copyright © 2007 Elsevier Inc. All rights reserved.
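
    The trade-offs discussed above (long slow-scan exposures versus video-rate readout) follow from the standard camera noise model; a minimal sketch, assuming shot noise, dark current, and read noise dominate, with purely illustrative parameter values:

        import math

        def ccd_snr(photon_flux, exposure_s, quantum_eff, dark_e_per_s,
                    read_noise_e):
            """Standard CCD noise model: SNR = S / sqrt(S + D + R^2).
            Slow-scan cameras win by integrating long (large S) and reading
            out slowly (small R); video-rate cameras pay a read-noise penalty."""
            signal = photon_flux * exposure_s * quantum_eff
            dark = dark_e_per_s * exposure_s
            return signal / math.sqrt(signal + dark + read_noise_e ** 2)

        print(ccd_snr(200, 10.0, 0.7, 0.1, 5))    # slow scan of a fixed sample
        print(ccd_snr(200, 0.033, 0.7, 0.1, 30))  # video-rate frame, dim sample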

  12. Deployable reconnaissance from a VTOL UAS in urban environments

    NASA Astrophysics Data System (ADS)

    Barnett, Shane; Bird, John; Culhane, Andrew; Sharkasi, Adam; Reinholtz, Charles

    2007-04-01

    Reconnaissance collection in unknown or hostile environments can be a dangerous and life-threatening task. To reduce this risk, the Unmanned Systems Group at Virginia Tech has produced a fully autonomous reconnaissance system able to provide live video reconnaissance from outside and inside unknown structures. This system consists of an autonomous helicopter, which launches a small reconnaissance pod inside a building, and an operator control unit (OCU) on a ground station. The helicopter is a modified Bergen Industrial Twin using a Rotomotion flight controller and can fly missions of up to one-half hour. The mission planning OCU can control the helicopter remotely through teleoperation or fully autonomously via GPS waypoints. A forward-facing camera and template matching aid navigation by identifying the target building. Once the target structure is identified, vision algorithms center the UAS adjacent to open windows or doorways. Tunable parameters in the vision algorithm account for varying launch distances and opening sizes. Launch of the reconnaissance pod may be initiated remotely by a human in the loop or autonomously. Compressed air propels the half-pound stationary pod or the larger mobile pod through the open portals. Once inside the building, the reconnaissance pod transmits live video back to the helicopter. The helicopter acts as a repeater node for increased video range and simplified communication back to the ground station.

  13. The Camera Is Not a Methodology: Towards a Framework for Understanding Young Children's Use of Video Cameras

    ERIC Educational Resources Information Center

    Bird, Jo; Colliver, Yeshe; Edwards, Susan

    2014-01-01

    Participatory research methods argue that young children should be enabled to contribute their perspectives on research seeking to understand their worldviews. Visual research methods, including the use of still and video cameras with young children have been viewed as particularly suited to this aim because cameras have been considered easy and…

  14. Patterned Video Sensors For Low Vision

    NASA Technical Reports Server (NTRS)

    Juday, Richard D.

    1996-01-01

    Miniature video cameras containing photoreceptors arranged in prescribed non-Cartesian patterns are proposed to partly compensate for some visual defects. The cameras, accompanied by (and possibly integrated with) miniature head-mounted video display units, would restore some visual function in humans whose visual fields are reduced by defects like retinitis pigmentosa.

  15. Video model deformation system for the National Transonic Facility

    NASA Technical Reports Server (NTRS)

    Burner, A. W.; Snow, W. L.; Goad, W. K.

    1983-01-01

    A photogrammetric closed-circuit television system to measure model deformation at the National Transonic Facility is described. The photogrammetric approach was chosen because of its inherently rapid data recording of the entire object field. Video cameras are used instead of film cameras to acquire data because the cameras, which must be housed within the facility's cryogenic, high-pressure plenum, are inaccessible. A rudimentary theory section is followed by a description of the video-based system and the control measures required to protect the cameras from the hostile environment. Preliminary results obtained with the same camera placement as planned for NTF are presented, and plans for facility testing with a specially designed test wing are discussed.

  16. Free-viewpoint video of human actors using multiple handheld Kinects.

    PubMed

    Ye, Genzhi; Liu, Yebin; Deng, Yue; Hasler, Nils; Ji, Xiangyang; Dai, Qionghai; Theobalt, Christian

    2013-10-01

    We present an algorithm for creating free-viewpoint video of interacting humans using three handheld Kinect cameras. Our method reconstructs deforming surface geometry and temporally varying texture of humans through estimation of human poses and camera poses for every time step of the RGBZ video. Skeletal configurations and camera poses are found by solving a joint energy minimization problem that optimizes the alignment of the RGBZ data from all cameras as well as the alignment of human shape templates to the Kinect data. The energy function is based on a combination of geometric correspondence finding, implicit scene segmentation, and correspondence finding using image features. Finally, texture recovery is achieved through joint optimization over the spatio-temporal RGB data using matrix completion. As opposed to previous methods, our algorithm succeeds on free-viewpoint video of human actors in general uncontrolled indoor scenes with potentially dynamic backgrounds, and it succeeds even if the cameras are moving.

  17. Remote camera observations of lava dome growth at Mount St. Helens, Washington, October 2004 to February 2006: Chapter 11 in A volcano rekindled: the renewed eruption of Mount St. Helens, 2004-2006

    USGS Publications Warehouse

    Poland, Michael P.; Dzurisin, Daniel; LaHusen, Richard G.; Major, John J.; Lapcewich, Dennis; Endo, Elliot T.; Gooding, Daniel J.; Schilling, Steve P.; Janda, Christine G.; Sherrod, David R.; Scott, William E.; Stauffer, Peter H.

    2008-01-01

    Images from a Web-based camera (Webcam) located 8 km north of Mount St. Helens and a network of remote, telemetered digital cameras were used to observe eruptive activity at the volcano between October 2004 and February 2006. The cameras offered the advantages of low cost, low power, flexibility in deployment, and high spatial and temporal resolution. Images obtained from the cameras provided important insights into several aspects of dome extrusion, including rockfalls, lava extrusion rates, and explosive activity. Images from the remote, telemetered digital cameras were assembled into time-lapse animations of dome extrusion that supported monitoring, research, and outreach efforts. The wide-ranging utility of remote camera imagery should motivate additional work, especially to develop the three-dimensional quantitative capabilities of terrestrial camera networks.

  18. Remote Marker-Based Tracking for UAV Landing Using Visible-Light Camera Sensor

    PubMed Central

    Nguyen, Phong Ha; Kim, Ki Wan; Lee, Young Won; Park, Kang Ryoung

    2017-01-01

    Unmanned aerial vehicles (UAVs), commonly known as drones, have proved to be useful not only on battlefields, where manned flight is considered too risky or difficult, but also in everyday life for purposes such as surveillance, monitoring, rescue, unmanned cargo, aerial video, and photography. More advanced drones make use of global positioning system (GPS) receivers in the navigation and control loop, which enables smart GPS-based navigation features. However, problems arise if drones operate in heterogeneous areas with no GPS signal, so it is important to research the development of UAVs with autonomous navigation and landing guidance using computer vision. In this research, we determined how to safely land a drone in the absence of GPS signals using our remote marker-based tracking algorithm based on a visible-light camera sensor. The proposed method uses a unique marker designed as a tracking target during landing procedures. Experimental results show that our method significantly outperforms state-of-the-art object trackers in terms of both accuracy and processing time, and we performed tests on an embedded system in various environments. PMID:28867775

  19. Transferring Knowledge from a Bird's-Eye View - Earth Observation and Space Travels in Schools

    NASA Astrophysics Data System (ADS)

    Rienow, Andreas; Hodam, Henryk; Menz, Gunter; Voß, Kerstin

    2014-05-01

    In spring 2014, four commercial cameras will be transported by a Dragon spacecraft to the International Space Station (ISS) and mounted on the ESA Columbus laboratory. The cameras will deliver live earth observation data from different angles. The "Columbus-Eye"* project aims at distributing the video and image data produced by those cameras through a web portal. It will primarily serve as a learning portal for pupils, comprising teaching material built around the ISS earth observation imagery. The pupils are to be motivated to work with the images in order to learn about curriculum-relevant topics in the natural sciences. The material will be prepared based on the experiences of the FIS* (German abbreviation for "Remote Sensing in Schools") project and its learning portal. Recognizing that in-depth use of satellite imagery can only be achieved by means of computer-aided learning methods, a sizeable number of e-learning contents in German and English have been created in the 5 years since FIS' kickoff. The talk presents the educational valorization of remote sensing data as well as its interactive implementation for teachers and pupils in both learning portals. It will be shown what possibilities remote sensing offers for teaching the regular curricula of Geography, Biology, Physics, Math, and Informatics. Besides the sequenced implementation in digital and interactive teaching units, examples of a richly illustrated encyclopedia as well as easy-to-use image processing tools are given. The presentation finally addresses the question of how synergies with space travel can be used to enhance the fascination of earth observation imagery in the light of problem-based learning in everyday school lessons.

  20. Three reasons for on-line remote telemonitoring of patients treated with high doses of radionuclide therapy. Our experience.

    PubMed

    Matovic, Milovan; Jeremic, Marija; Urosevic, Vlade; Ravlic, Miroslav; Vlajkovic, Marina

    2015-01-01

    Following radionuclide therapy, patients usually must remain hospitalised in a special "restricted access area" for 2-5 days, until the radiation in their body drops below a certain level. During this period medical personnel can face several challenges. Based on our previous experience, we used a telemedicine approach as a solution. We have developed a comprehensive telemedicine system consisting of three hardware and software modules of our own development, all accessible remotely. Challenge #1: Some patients can experience serious complications related to the radionuclide therapy or to co-morbidities, if any. In such cases, audio-visual contact with patients and follow-up of their vital functions can be of high importance should a patient need urgent intervention. Solution #1: A system for on-line remote monitoring of patients' vital functions registered by a bedside monitor, together with video surveillance of the area used by patients during hospitalisation. This system is built from IP cameras and a bedside patient monitor equipped with an appropriate network card and software. Using a remote connection (LAN or internet), a physician can watch, on a personal computer or mobile phone, the waveforms and vital sign patterns from the bedside monitor, as well as live video from the surveillance cameras. This enables prompt intervention in case of emergency. Challenge #2: Bearing in mind the overall costs of radionuclide therapy and hospital stays on the one hand, and the limited capacity of the hospital premises for radionuclide therapy on the other, it is very important to estimate as early as possible the time after which the radiation in a patient's body will drop below the limit imposed by law. Solution #2: An on-line remote radiation monitoring system, which measures the radiation exposure rate by means of a pancake probe connected to a PTZ (pan-tilt-zoom) device and a DVR (digital video recorder). These devices enable precise positioning of the detector on the target region of the patient's body. The positioning of the detector can be visually controlled by a micro camera placed at the center of the detector's plane. Furthermore, three LASER pointers placed around the detector mark the area at which it is directed, and two ultrasound sensors placed on the edge of the detector holder estimate the exact distance between the probe and the patient's body. All these devices are controlled by the DVR. The data collected by the detector are acquired and processed by a PC, using a customized hardware/software system developed by the Italian Theremino group. Using a remote connection, a physician can watch the on-line radiation exposure rate at any time and can use the commands of the PTZ and DVR devices to position the probe properly during measurement, controlling it via the micro camera, LASER pointers, and ultrasound sensors. The physician asks the patient to take the same position for 5 minutes every hour during the first 10 hours. These data serve as reference points for further processing by our software. Based on a two-exponential mathematical model, our software estimates the whole process of elimination of radioactivity from the patient's body, using the reference points collected during the first day after radionuclide therapy. On that basis, the physician can predict, on the first day after therapy, when the patient will be able to leave the "restricted access area".
    Challenge #3: Despite strict instructions given by the physician and nurse before administration of radionuclide therapy, some patients sometimes try to leave the "restricted access area". Solution #3: We have developed a system which continuously monitors the corridor that a patient must use in any attempt to leave the "restricted access area". The system consists of a survey meter equipped with a pancake probe directed towards the corridor. The survey meter is connected to a trigger circuit which gives a signal when the measured count rate exceeds a previously adjusted value. The trigger circuit is connected to a programmable siren, a blinking light, an alarm device unit with a SIM card, and an IP surveillance camera. A voice alarm is pre-recorded on the siren. When the system is triggered, the patient hears the warning message and sees the blinking light, the alarm device calls the responsible physician and nurse on their mobile phones, and the IP camera simultaneously records the event. The system also sends appropriate data about each event via e-mail as it happens. From the experience gained over the past 4 years, our telemonitoring system dedicated to patients receiving radionuclide therapy ensures a high level of safety for both patients and medical staff.
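
    A minimal sketch of the dose-decay estimation step behind Solution #2 above, assuming SciPy is available; the reference measurements and the legal release limit below are placeholders, not clinical data:

        import numpy as np
        from scipy.optimize import curve_fit

        def two_exp(t, a1, lam1, a2, lam2):
            # Fast (clearance) plus slow (tissue-bound) elimination phases.
            return a1 * np.exp(-lam1 * t) + a2 * np.exp(-lam2 * t)

        # Hourly exposure-rate reference points, first 10 hours (uSv/h).
        t_ref = np.arange(1, 11, dtype=float)
        rate_ref = np.array([784, 631, 519, 439, 380, 337, 303, 279, 259, 243])

        popt, _ = curve_fit(two_exp, t_ref, rate_ref, p0=(600, 0.5, 250, 0.02))

        release_limit = 25.0  # uSv/h, placeholder for the legal limit
        t = np.arange(0.0, 24 * 7, 0.5)
        t_release = t[two_exp(t, *popt) < release_limit][0]
        print(f"predicted release after ~{t_release:.0f} h")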

  1. Monitoring the spatial and temporal evolution of slope instability with Digital Image Correlation

    NASA Astrophysics Data System (ADS)

    Manconi, Andrea; Glueer, Franziska; Loew, Simon

    2017-04-01

    The identification and monitoring of ground deformation is important for an appropriate analysis and interpretation of unstable slopes. Displacements are usually monitored with in-situ techniques (e.g., extensometers, inclinometers, geodetic leveling, tachymeters, and D-GPS) and/or active remote sensing methods (e.g., LiDAR and radar interferometry). In particular situations, however, the choice of the appropriate monitoring system is constrained by site-specific conditions. Slope areas can be very remote and/or affected by rapid surface changes, and thus hardly accessible, and often unsafe, for field installations. In many cases the use of remote sensing approaches might also be hindered by unsuitable acquisition geometries, poor spatial resolution and revisit times, and/or high costs. The increasing availability of digital imagery acquired from terrestrial photo and video cameras nowadays provides an additional source of data. Such imagery can be exploited not only to visually identify changes in the scene over time, but also to quantify the evolution of surface displacements. Image processing analyses, such as Digital Image Correlation (also known as pixel-offset or feature-tracking analysis), have been shown to provide a suitable alternative for detecting and monitoring surface deformation at high spatial and temporal resolution. However, a number of intrinsic limitations have to be considered when dealing with optical imagery acquisition and processing, including the effects of light conditions, shadowing, and/or meteorological variables. Here we propose an algorithm to automatically select and process images acquired from time-lapse cameras. We aim to maximize the results obtainable from large datasets of digital images acquired under different light and meteorological conditions, and to retrieve accurate information on the evolution of surface deformation. We show a successful application of our approach in the Swiss Alps, more specifically in the Great Aletsch area, where slope instability was recently reactivated due to progressive glacier retreat. At this location, time-lapse cameras have been installed during the last two years, ranging from low-cost, low-resolution webcams to more expensive high-resolution reflex cameras. Our results confirm that time-lapse cameras provide quantitative and accurate measurements of the evolution of surface deformation over space and time, especially in situations where other monitoring instruments fail.
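
    A minimal sketch of the correlation step at the heart of the approach, assuming OpenCV and two co-registered gray-scale time-lapse frames; the filenames, patch position, size, and search radius are illustrative:

        import cv2

        def patch_offset(img_ref, img_cur, x, y, half=32, search=20):
            """Track one image patch between two time-lapse frames with
            normalized cross-correlation (a basic DIC / feature-tracking
            step); returns the (dx, dy) displacement in pixels."""
            tmpl = img_ref[y - half:y + half, x - half:x + half]
            win = img_cur[y - half - search:y + half + search,
                          x - half - search:x + half + search]
            scores = cv2.matchTemplate(win, tmpl, cv2.TM_CCOEFF_NORMED)
            _, _, _, (px, py) = cv2.minMaxLoc(scores)
            return px - search, py - search

        # Placeholder filenames for two frames of the same slope scene.
        ref = cv2.imread("frame_t0.png", cv2.IMREAD_GRAYSCALE)
        cur = cv2.imread("frame_t1.png", cv2.IMREAD_GRAYSCALE)
        print(patch_offset(ref, cur, 400, 300))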

  2. Information recovery through image sequence fusion under wavelet transformation

    NASA Astrophysics Data System (ADS)

    He, Qiang

    2010-04-01

    Remote sensing is widely applied to provide information about areas with limited ground access, with applications such as assessing the destruction from natural disasters and planning relief and recovery operations. However, the collection of aerial digital images is constrained by bad weather, atmospheric conditions, and unstable cameras or camcorders. Therefore, how to recover information from low-quality remote sensing images and how to enhance image quality become very important for many visual understanding tasks, such as feature detection, object segmentation, and object recognition. The quality of remote sensing imagery can be improved through the meaningful combination of images captured from different sensors or under different conditions, that is, through information fusion. Here we particularly address information fusion of remote sensing images under multi-resolution analysis of the employed image sequences. Image fusion recovers complete information by integrating multiple images captured from the same scene. Through image fusion, a new image with higher resolution, or one more perceptually useful to humans and machines, is created from a time series of low-quality images based on image registration between the different video frames.
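
    A minimal sketch of a wavelet-domain fusion rule of the kind described, assuming PyWavelets and two co-registered frames of the same scene; averaging the approximation band and keeping the stronger detail coefficients is one common choice among several:

        import numpy as np
        import pywt

        def wavelet_fuse(img_a, img_b, wavelet="db2", level=3):
            """Fuse two co-registered images in the wavelet domain:
            average the coarse approximation, keep the larger-magnitude
            detail coefficient at each position (preserves sharp edges
            contributed by either frame)."""
            ca = pywt.wavedec2(img_a, wavelet, level=level)
            cb = pywt.wavedec2(img_b, wavelet, level=level)
            fused = [(ca[0] + cb[0]) / 2.0]  # approximation band
            for bands_a, bands_b in zip(ca[1:], cb[1:]):
                fused.append(tuple(
                    np.where(np.abs(x) >= np.abs(y), x, y)
                    for x, y in zip(bands_a, bands_b)))
            return pywt.waverec2(fused, wavelet)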

  3. Data Mining and Information Technology: Its Impact on Intelligence Collection and Privacy Rights

    DTIC Science & Technology

    2007-11-26

    sources include: Cameras - Digital cameras (still and video) have been improving in capability while simultaneously dropping in cost at a rate...citizen is caught on camera 300 times each day. The power of extensive video coverage is magnified greatly by the nascent capability for voice and...software on security videos and tracking cell phone usage in the local area. However, it would only return the names and data of those who

  4. Camera Operator and Videographer

    ERIC Educational Resources Information Center

    Moore, Pam

    2007-01-01

    Television, video, and motion picture camera operators produce images that tell a story, inform or entertain an audience, or record an event. They use various cameras to shoot a wide range of material, including television series, news and sporting events, music videos, motion pictures, documentaries, and training sessions. Those who film or…

  5. The effects of spatially displaced visual feedback on remote manipulator performance

    NASA Technical Reports Server (NTRS)

    Smith, Randy L.; Stuart, Mark A.

    1989-01-01

    The effects of spatially displaced visual feedback on the performance of a camera-viewed remote manipulation task are analyzed. A remote manipulation task was performed by operators under the following viewing conditions: direct view of the work site, normal camera view, reversed camera view, inverted/reversed camera view, and inverted camera view. The task completion times were statistically analyzed with a repeated-measures analysis of variance, and a Newman-Keuls pairwise comparison test was applied to the data. The reversed camera view ranked third out of the four camera viewing conditions, while the normal camera viewing condition was found to be significantly slower than the direct viewing condition. It is shown that generalizations to remote manipulation applications based upon the results of direct manipulation studies are quite useful, but they should be made cautiously.

  6. WWWinda Orchestrator: a mechanism for coordinating distributed flocks of Java Applets

    NASA Astrophysics Data System (ADS)

    Gutfreund, Yechezkal-Shimon; Nicol, John R.

    1997-01-01

    The WWWinda Orchestrator is a simple but powerful tool for coordinating distributed Java applets. Loosely derived from the Linda programming language developed by David Gelernter and Nicholas Carriero of Yale, WWWinda implements a distributed shared object space, called TupleSpace, where applets can post, read, or permanently store arbitrary Java objects. In this manner, applets can easily share information without being aware of the underlying communication mechanisms. WWWinda is very useful for orchestrating flocks of distributed Java applets: coordination events can be posted to the WWWinda TupleSpace and used to orchestrate the actions of remote applets. The technology combines several functions in one simple metaphor: distributed web objects, remote messaging between applets, distributed synchronization mechanisms, an object-oriented database, and a distributed event signaling mechanism. WWWinda can be used as a platform for implementing shared VRML environments, shared groupware environments, control of remote devices such as cameras, distributed karaoke, distributed gaming, and shared audio and video experiences.
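
    The TupleSpace idea is compact enough to sketch; the toy below mimics Linda-style post/take with in-process threads rather than WWWinda's distributed transport, and none of the names are the actual WWWinda API:

        import threading

        class TupleSpace:
            """Linda-style shared space: writers post tuples, readers block
            until a tuple matching their pattern (None = wildcard) appears."""

            def __init__(self):
                self._tuples = []
                self._cond = threading.Condition()

            def post(self, tup):
                with self._cond:
                    self._tuples.append(tup)
                    self._cond.notify_all()

            def take(self, pattern):
                """Remove and return the first matching tuple (blocking)."""
                with self._cond:
                    while True:
                        for tup in self._tuples:
                            if len(tup) == len(pattern) and all(
                                    p is None or p == v
                                    for p, v in zip(pattern, tup)):
                                self._tuples.remove(tup)
                                return tup
                        self._cond.wait()

        space = TupleSpace()
        space.post(("camera", "pan", 45))
        print(space.take(("camera", "pan", None)))  # -> ("camera", "pan", 45)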

  7. Design of video interface conversion system based on FPGA

    NASA Astrophysics Data System (ADS)

    Zhao, Heng; Wang, Xiang-jun

    2014-11-01

    This paper presents an FPGA-based video interface conversion system that enables inter-conversion between digital and analog video. A Cyclone IV series EP4CE22F17C chip from Altera Corporation is used as the main video processing chip, and a single-chip microcontroller is used as the information interaction control unit between the FPGA and a PC. The system is able to encode/decode messages from the PC. Technologies including the video decoding/encoding circuits, bus communication protocol, data stream de-interleaving and de-interlacing, color space conversion, and the Camera Link timing generator module of the FPGA are introduced. The system converts the Composite Video Broadcast Signal (CVBS) from a CCD camera into Low-Voltage Differential Signaling (LVDS), which is then collected by a video processing unit with a Camera Link interface. The processed video signals are then input to the system output board and displayed on the monitor. Experiments show that the system achieves high-quality video conversion with a minimal board size.
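
    Of the FPGA modules listed, the color space conversion stage is the easiest to illustrate; a sketch of the standard BT.601 YCbCr-to-RGB mapping such a decoder stage performs, shown in floating point for clarity (the FPGA would use fixed-point arithmetic):

        def ycbcr_to_rgb_bt601(y, cb, cr):
            """ITU-R BT.601 video-range YCbCr to 8-bit RGB, one pixel."""
            c, d, e = y - 16, cb - 128, cr - 128
            r = 1.164 * c + 1.596 * e
            g = 1.164 * c - 0.392 * d - 0.813 * e
            b = 1.164 * c + 2.017 * d
            clamp = lambda v: max(0, min(255, int(round(v))))
            return clamp(r), clamp(g), clamp(b)

        print(ycbcr_to_rgb_bt601(235, 128, 128))  # video white -> (255, 255, 255)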

  8. 50 CFR 216.155 - Requirements for monitoring and reporting.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... place 3 autonomous digital video cameras overlooking chosen haul-out sites located varying distances from the missile launch site. Each video camera will be set to record a focal subgroup within the... presence and activity will be conducted and recorded in a field logbook or recorded on digital video for...

  9. The calibration of video cameras for quantitative measurements

    NASA Technical Reports Server (NTRS)

    Snow, Walter L.; Childers, Brooks A.; Shortis, Mark R.

    1993-01-01

    Several recent applications of velocimetry at Langley Research Center are described to show the need for video camera calibration in quantitative measurements. Problems peculiar to video sensing are discussed, including synchronization and timing, targeting, and lighting. The extension of the measurements to include radiometric estimates is addressed.

  10. OzBot and haptics: remote surveillance to physical presence

    NASA Astrophysics Data System (ADS)

    Mullins, James; Fielding, Mick; Nahavandi, Saeid

    2009-05-01

    This paper reports on robotic and haptic technologies and capabilities developed for the law enforcement and defence community within Australia by the Centre for Intelligent Systems Research (CISR). The OzBot series of small and medium surveillance robots has been designed in Australia and evaluated by law enforcement and defence personnel to determine suitability and ruggedness in a variety of environments. Using custom-developed digital electronics and featuring expandable data busses including RS485, I2C, RS232, video, and Ethernet, the robots can be directly connected to many off-the-shelf payloads such as gas sensors, x-ray sources, and camera systems including thermal and night vision. Differentiating the OzBot platform from its peers is its ability to be integrated directly with haptic technology, or the 'haptic bubble', developed by CISR. Haptic interfaces allow an operator to physically 'feel' remote environments through position-force control and experience realistic force feedback. By adding the capability to remotely grasp an object and feel its weight, texture, and other physical properties in real time from the remote ground control unit, an operator's situational awareness is greatly improved through haptic augmentation in an environment where remote-system feedback is often limited.

  11. Advancement of thyroid surgery video recording: A comparison between two full HD head mounted video cameras.

    PubMed

    Ortensi, Andrea; Panunzi, Andrea; Trombetta, Silvia; Cattaneo, Alberto; Sorrenti, Salvatore; D'Orazi, Valerio

    2017-05-01

    The aim of this study was to test two different video cameras and recording systems used in thyroid surgery in our Department. The intent is to record the real point of view of the surgeon's magnified vision, so as to make the viewer aware of the difference from naked-eye vision. In this retrospective study, we recorded and compared twenty thyroidectomies performed using loupe magnification and microsurgical technique: ten were recorded with a GoPro® 4 Session action cam (commercially available) and ten with our new prototype of a head-mounted video camera. Settings were selected before surgery for both cameras. The recording time is about 1 to 2 h for the GoPro® and 3 to 5 h for our prototype. The average time needed to fit the camera on the surgeon's head and set up its functionality is about 5 min for the GoPro® and 7-8 min for the prototype, mostly due to the HDMI wiring cable. Videos recorded with the prototype require no further editing, which is mandatory for videos recorded with the GoPro® to highlight the surgical details. The present study showed that our prototype video camera, compared with the GoPro® 4 Session, guarantees the best results in terms of surgical video recording quality, provides the viewer with the exact perspective of the microsurgeon, and accurately shows his magnified view through the loupes in thyroid surgery. These recordings are surgical aids for teaching and education and might be a method of self-analysis of surgical technique. Copyright © 2017 IJS Publishing Group Ltd. Published by Elsevier Ltd. All rights reserved.

  12. On the Complexity of Digital Video Cameras in/as Research: Perspectives and Agencements

    ERIC Educational Resources Information Center

    Bangou, Francis

    2014-01-01

    The goal of this article is to consider the potential for digital video cameras to produce as part of a research agencement. Our reflection will be guided by the current literature on the use of video recordings in research, as well as by the rhizoanalysis of two vignettes. The first of these vignettes is associated with a short video clip shot by…

  13. Virtual viewpoint synthesis in multi-view video system

    NASA Astrophysics Data System (ADS)

    Li, Fang; Yang, Shiqiang

    2005-07-01

    In this paper, we present a virtual viewpoint video synthesis algorithm designed to satisfy three aims: low computational cost, real-time interpolation, and acceptable video quality. In contrast with previous approaches, the method obtains incomplete 3D structure from neighboring video sources instead of recovering full 3D information from all video sources, so the computation is greatly reduced. We therefore demonstrate our interactive multi-view video synthesis algorithm on a personal computer. Furthermore, by choosing feature points to build the correspondence between frames captured by neighboring cameras, we do not require camera calibration. Finally, our method can be used when the angle between neighboring cameras is 25-30 degrees, which is much larger than in common computer vision experiments. In this way, our method can be applied to many applications such as live sports broadcasting, video conferencing, etc.
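
    A minimal sketch of the calibration-free correspondence idea, assuming OpenCV: features matched between two neighboring camera frames are linearly blended to place them in a virtual in-between view. A full renderer would also warp and blend pixels, which is omitted here, and ORB matching is used only because a wide 25-30 degree baseline defeats simple optical flow; this is not the paper's exact algorithm:

        import cv2
        import numpy as np

        def virtual_view_points(img_left, img_right, alpha=0.5):
            """Interpolate matched feature positions for a virtual viewpoint
            at fraction `alpha` between two neighboring cameras; no camera
            calibration is required, the correspondences carry the geometry."""
            orb = cv2.ORB_create(500)
            kp_l, des_l = orb.detectAndCompute(img_left, None)
            kp_r, des_r = orb.detectAndCompute(img_right, None)
            matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
            matches = matcher.match(des_l, des_r)
            return np.array([(1 - alpha) * np.array(kp_l[m.queryIdx].pt)
                             + alpha * np.array(kp_r[m.trainIdx].pt)
                             for m in matches])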

  14. The Ottawa telehealth project.

    PubMed

    Cheung, S T; Davies, R F; Smith, K; Marsh, R; Sherrard, H; Keon, W J

    1998-01-01

    To examine the telehealth system as a means of improving access to cardiac consultations and specialized health services in remote areas of Ontario. The University of Ottawa Heart Institute has set up a telehealth test program, Healthcare and Education Access for Remote Residents by Telecommunications (HEARRT), in collaboration with industry and the provincial and federal government, as well as several remote clinical test sites. The program makes off-site cardiology consultations possible. History taking and physical examinations are conducted by video and electronic stethoscope. Laboratory results and echocardiograms are transmitted by document camera and VCR. The technology is being tested in both stable outpatient and emergency situations. Various telecommunications bandwidths and encoding systems are being evaluated, including satellite and terrestrial-based asynchronous transfer-mode circuits. Patient satisfaction and cost-effectiveness are also being assessed. Bandwidths from as low as 384 kbps using H.320 encoders to 40 Mbps using digital transport of NTSC video signals have been evaluated. Although lower bandwidths are sufficient for sending echocardiographic and electrocardiogram data, bandwidths with transport speeds of 4 to 6 Mbps appear necessary to capture the nuances of the cardiac physical examination. A preliminary satisfaction survey of 19 patients noted that all felt that they could communicate effectively with the cardiologist by video, and each had confidence in the advice offered. None reported that he or she would rather have traveled to the doctor in person. Initial and projected examination of the costs suggested that telehealth will effectively reduce overall health care spending while decreasing travel expenses for rural patients. Telehealth technology is sufficiently sophisticated to allow off-site cardiology assessments. Preliminary results suggest there is a sound business case for the implementation of telehealth technology to meet the needs of remote residents in northern Ontario. Working closely with government and industry, we will develop a marketing and commercialization plan to support the use of this technology throughout Ontario and expand application to patient education and continuing medical education.

  15. An innovative experimental setup for Large Scale Particle Image Velocimetry measurements in riverine environments

    NASA Astrophysics Data System (ADS)

    Tauro, Flavia; Olivieri, Giorgio; Porfiri, Maurizio; Grimaldi, Salvatore

    2014-05-01

    Large Scale Particle Image Velocimetry (LSPIV) is a powerful methodology to nonintrusively monitor surface flows. Its use has been beneficial to the development of rating curves in riverine environments and to mapping geomorphic features in natural waterways. Typical LSPIV experimental setups rely on mast-mounted cameras for the acquisition of natural stream reaches. Such cameras are installed on stream banks and are angled with respect to the water surface to capture large-scale fields of view. Despite its promise and the simplicity of the setup, the practical implementation of LSPIV is affected by several challenges, including the acquisition of ground reference points for image calibration and time-consuming, highly user-assisted procedures to orthorectify images. In this work, we perform LSPIV studies on stream sections in the Aniene and Tiber basins, Italy. To alleviate the limitations of traditional LSPIV implementations, we propose an improved video acquisition setup comprising a telescopic mast, an inexpensive GoPro Hero 3 video camera, and a system of two lasers. The setup allows the camera axis to be maintained perpendicular to the water surface, thus mitigating uncertainties related to image orthorectification. Further, the mast encases a laser system for remote image calibration, allowing videos to be calibrated nonintrusively without acquiring ground reference points. We conduct measurements on two different water bodies to outline the performance of the methodology under varying flow regimes, illumination conditions, and distributions of surface tracers. Specifically, the Aniene river is characterized by high surface flow velocity, the presence of abundant, homogeneously distributed ripples and water reflections, and a meagre number of buoyant tracers. On the other hand, the Tiber river presents lower surface flows, isolated reflections, and several floating objects. Videos are processed through image-based analyses to correct for lens distortions and analyzed with commercially available PIV software. Surface flow velocity estimates are compared to supervised measurements performed by visually tracking objects floating on the stream surface and to rating curves developed by the Ufficio Idrografico e Mareografico (UIM) at Regione Lazio, Italy. Experimental findings demonstrate that the presence of tracers is crucial for surface flow velocity estimates. Further, the presence of surface ripples and patterns may lead to underestimations in LSPIV analyses.
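
    The laser pair exists precisely to supply the image scale without ground reference points; a minimal sketch of the resulting pixel-to-velocity conversion, with purely illustrative numbers:

        # With the camera axis held perpendicular to the water surface, the
        # two laser dots give the image scale directly (no orthorectification).
        laser_spacing_m = 0.50       # known distance between the laser dots
        laser_spacing_px = 235.0     # measured dot separation in the image
        scale = laser_spacing_m / laser_spacing_px   # meters per pixel

        dx_px, dy_px = 42.0, 3.0     # tracer displacement between frames
        frame_interval_s = 1.0 / 30  # e.g., GoPro footage at 30 fps

        speed = (dx_px ** 2 + dy_px ** 2) ** 0.5 * scale / frame_interval_s
        print(f"surface velocity ~ {speed:.2f} m/s")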

  16. The High Definition Earth Viewing (HDEV) Payload

    NASA Technical Reports Server (NTRS)

    Muri, Paul; Runco, Susan; Fontanot, Carlos; Getteau, Chris

    2017-01-01

    The High Definition Earth Viewing (HDEV) payload enables long-term experimentation with four commercial-off-the-shelf (COTS) high-definition video cameras mounted on the exterior of the International Space Station. The payload enables testing of the cameras in the space environment. The HDEV cameras transmit imagery continuously to an encoder that sends the video signal via Ethernet through the space station for downlink. The encoder, cameras, and other electronics are enclosed in a box pressurized to approximately one atmosphere of dry nitrogen to provide a level of protection from the space environment. The encoded video format supports streaming live video of Earth for viewing online. Camera sensor types include charge-coupled device and complementary metal-oxide semiconductor. Received imagery is analyzed on the ground to evaluate camera sensor performance. Since payload deployment, minimal degradation of imagery quality has been observed. The HDEV payload continues to operate, live streaming and analyzing imagery. Results from the experiment reduce risk in the selection of cameras that could be considered for future use on the International Space Station and other spacecraft. This paper discusses the payload development, end-to-end architecture, experiment operation, resulting image analysis, and future work.

  17. [Present and prospects of telepathology].

    PubMed

    Takahashi, M; Mernyei, M; Shibuya, C; Toshima, S

    1999-01-01

    Nearly ten years have passed since telepathology was introduced and real-time pathology consultations were conducted. Long-distance consultations in pathology, cytology, computed tomography, and magnetic resonance imaging, which are referred to as telemedicine, clearly enhance the level of medical care in remote hospitals where no full-time specialists are employed. To transmit intraoperative frozen section images, we developed a unique hybrid system, "Hi-SPEED". The imaging view through the CCD camera is controlled by a camera controller that provides NTSC composite video output for low-resolution motion pictures and high-resolution digital output for final interpretation on a computer display. The results of intraoperative frozen section diagnosis at the Gihoku General Hospital, 410 km from SRL, showed a sensitivity of 97.6% for 82 cases of breast carcinoma and a false positive rate of 1.2%. This system can be used for second opinions as well as for consultations between cytologists and cytotechnologists.

  18. An Automatic Portable Telecine Camera.

    DTIC Science & Technology

    1978-08-01

    five television frames to achieve synchronous operation, that is about 0.2 second. 6.3 Video recorder noise immunity The synchronisation pulse separator...display is filmed by a modified 16 mm cine camera driven by a control unit in which the camera supply voltage is derived from the field synchronisation

  19. Demonstrations of Optical Spectra with a Video Camera

    ERIC Educational Resources Information Center

    Kraftmakher, Yaakov

    2012-01-01

    The use of a video camera may markedly improve demonstrations of optical spectra. First, the output electrical signal from the camera, which provides full information about a picture to be transmitted, can be used for observing the radiant power spectrum on the screen of a common oscilloscope. Second, increasing the magnification by the camera…

  20. Evaluation of an airborne remote sensing platform consisting of two consumer-grade cameras for crop identification

    USDA-ARS?s Scientific Manuscript database

    Remote sensing systems based on consumer-grade cameras have been increasingly used in scientific research and remote sensing applications because of their low cost and ease of use. However, the performance of consumer-grade cameras for practical applications have not been well documented in related ...

  1. An affordable wearable video system for emergency response training

    NASA Astrophysics Data System (ADS)

    King-Smith, Deen; Mikkilineni, Aravind; Ebert, David; Collins, Timothy; Delp, Edward J.

    2009-02-01

    Many emergency response units are currently faced with restrictive budgets that prohibit their use of advanced technology-based training solutions. Our work focuses on creating an affordable, mobile, state-of-the-art emergency response training solution through the integration of low-cost, commercially available products. The system we have developed consists of tracking, audio, and video capability, coupled with other sensors that can all be viewed through a unified visualization system. In this paper we focus on the video sub-system which helps provide real time tracking and video feeds from the training environment through a system of wearable and stationary cameras. These two camera systems interface with a management system that handles storage and indexing of the video during and after training exercises. The wearable systems enable the command center to have live video and tracking information for each trainee in the exercise. The stationary camera systems provide a fixed point of reference for viewing action during the exercise and consist of a small Linux based portable computer and mountable camera. The video management system consists of a server and database which work in tandem with a visualization application to provide real-time and after action review capability to the training system.

  2. Caught on Camera: Special Education Classrooms and Video Surveillance

    ERIC Educational Resources Information Center

    Heintzelman, Sara C.; Bathon, Justin M.

    2017-01-01

    In Texas, state policy anticipates that installing video cameras in special education classrooms will decrease student abuse inflicted by teachers. Lawmakers assume that collecting video footage will prevent teachers from engaging in malicious actions and prosecute those who choose to harm children. At the request of a parent, Section 29.022 of…

  3. Improving Photometric Calibration of Meteor Video Camera Systems

    NASA Technical Reports Server (NTRS)

    Ehlert, Steven; Kingery, Aaron; Suggs, Robert

    2016-01-01

    We present the results of new calibration tests performed by the NASA Meteoroid Environment Office (MEO) designed to help quantify and minimize systematic uncertainties in meteor photometry from video camera observations. These systematic uncertainties can be categorized by two main sources: an imperfect understanding of the linearity correction for the MEO's Watec 902H2 Ultimate video cameras and uncertainties in meteor magnitudes arising from transformations between the Watec camera's Sony EX-View HAD bandpass and the bandpasses used to determine reference star magnitudes. To address the first point, we have measured the linearity response of the MEO's standard meteor video cameras using two independent laboratory tests on eight cameras. Our empirically determined linearity correction is critical for performing accurate photometry at low camera intensity levels. With regard to the second point, we have calculated synthetic magnitudes in the EX bandpass for reference stars. These synthetic magnitudes enable direct calculation of the meteor's photometric flux within the camera bandpass without requiring any assumptions about its spectral energy distribution. Systematic uncertainties in the synthetic magnitudes of individual reference stars are estimated at 0.20 mag, and are limited by the available spectral information in the reference catalogs. These two improvements allow for zero-points accurate to 0.05-0.10 mag in both filtered and unfiltered camera observations with no evidence for lingering systematics.
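
    A minimal sketch of the synthetic-magnitude computation described above, assuming a tabulated reference-star spectrum and the camera bandpass response on a shared, uniform wavelength grid; the MEO's actual catalogs and zero-point convention are not reproduced here:

        import numpy as np

        def synthetic_magnitude(wavelength_nm, star_flux, bandpass,
                                zeropoint_flux):
            """Band-averaged star flux through the camera bandpass,
            converted to a magnitude relative to a zero-point spectrum:
            m = -2.5 log10(F_star / F_zeropoint)."""
            # Rectangle-rule integration on a uniform wavelength grid;
            # the grid spacing cancels in the flux ratio.
            f_star = np.sum(star_flux * bandpass)
            f_zero = np.sum(zeropoint_flux * bandpass)
            return -2.5 * np.log10(f_star / f_zero)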

  4. Robotic Vehicle Communications Interoperability

    DTIC Science & Technology

    1988-08-01

    [Table snippet: vehicle controls include starter (cold start), fire suppression, fording control, fuel control, fuel tank selector, garage toggle, gear selector, hazard warning...; electro-optic sensors include a sensor switch, video, radar, IR, thermal imaging system, image intensifier, laser ranger, and a video camera selector (forward, stereo, rear) with sensor control...]

  5. Smart sensing surveillance video system

    NASA Astrophysics Data System (ADS)

    Hsu, Charles; Szu, Harold

    2016-05-01

    An intelligent video surveillance system is able to detect and identify abnormal and alarming situations by analyzing object movement. The Smart Sensing Surveillance Video (S3V) System is proposed to minimize video processing and transmission, thus allowing a fixed number of cameras to be connected to the system and making it suitable for remote battlefield, tactical, and civilian applications including border surveillance, special force operations, airfield protection, and perimeter and building protection. The S3V System would be more effective if equipped with visual understanding capabilities to detect, analyze, and recognize objects, track motion, and predict intentions. In addition, alarm detection is performed on the basis of parameters of the moving objects and their trajectories, using semantic reasoning and ontologies. The S3V System's capabilities and technologies have great potential for both military and civilian applications, enabling highly effective security support tools for improving surveillance activities in densely crowded environments. It would be directly applicable to solutions for emergency response personnel, law enforcement, and other homeland security missions, as well as to applications requiring the interoperation of sensor networks with handheld or body-worn interface devices.
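
    A hedged sketch of the trajectory-parameter alarm idea, assuming object tracks already come from the detection stage; the thresholds are illustrative, and the semantic-reasoning and ontology layer is not modeled:

        import numpy as np

        def trajectory_alarms(track, fps, restricted_zone, speed_limit=3.0):
            """Raise simple alarms from one object trajectory.
            track           -- (x, y) positions in meters, one per frame
            restricted_zone -- (xmin, ymin, xmax, ymax) protected area
            speed_limit     -- m/s above which motion is flagged"""
            alarms = []
            pts = np.asarray(track, dtype=float)
            speeds = np.linalg.norm(np.diff(pts, axis=0), axis=1) * fps
            if speeds.size and speeds.max() > speed_limit:
                alarms.append("fast motion")
            xmin, ymin, xmax, ymax = restricted_zone
            inside = ((pts[:, 0] > xmin) & (pts[:, 0] < xmax)
                      & (pts[:, 1] > ymin) & (pts[:, 1] < ymax))
            if inside.any():
                alarms.append("entered restricted zone")
            return alarms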

  6. Design of a Wireless Sensor Network Platform for Tele-Homecare

    PubMed Central

    Chung, Yu-Fang; Liu, Chia-Hui

    2013-01-01

    The problem of an ageing population has become serious in the past few years, as the degeneration of various physiological functions has resulted in distinct chronic diseases in the elderly. Most elderly people are not willing to leave home for healthcare centers, but caring for patients at home consumes caregiver resources and can overwhelm patients' families. Besides, many chronic disease symptoms cause the elderly to visit hospitals frequently, and repeated examinations not only exhaust medical resources but also waste patients' time and effort. To make matters worse, this model of healthcare does not actually appear to be as effective as expected. In response to these problems, a wireless remote home care system is designed in this study, in which ZigBee is used to set up a wireless network so that users can take measurements anytime and anywhere. Using suitable measuring devices, users' physiological signals are measured, and their daily conditions are monitored by various sensors. Transferred through the ZigBee network, vital signs are analyzed by computers, which deliver distinct alerts to remind the users and their families of possible emergencies. The system could be further combined with electric appliances to remotely control the users' environmental conditions. When emergencies occur, the environmental monitoring function can be activated to transmit dynamic images of the person under care to medical personnel in real time through the video function. Meanwhile, in consideration of privacy, the video camera is turned on only when necessary. The caregiver can adjust the camera angle to a proper position and observe the current situation of the person under care when a sensor on that person or the environmental monitoring system detects an exception. All physiological data are stored in the database for family enquiries or for accurate diagnoses by medical personnel. PMID:24351630

  7. Design of a wireless sensor network platform for tele-homecare.

    PubMed

    Chung, Yu-Fang; Liu, Chia-Hui

    2013-12-12

    The problem of an ageing population has become serious in the past few years, as the degeneration of various physiological functions has resulted in distinct chronic diseases in the elderly. Most elderly people are not willing to leave home for healthcare centers, but caring for patients at home consumes caregiver resources and can overwhelm patients' families. Besides, many chronic disease symptoms cause the elderly to visit hospitals frequently, and repeated examinations not only exhaust medical resources but also waste patients' time and effort. To make matters worse, this model of healthcare does not actually appear to be as effective as expected. In response to these problems, a wireless remote home care system is designed in this study, in which ZigBee is used to set up a wireless network so that users can take measurements anytime and anywhere. Using suitable measuring devices, users' physiological signals are measured, and their daily conditions are monitored by various sensors. Transferred through the ZigBee network, vital signs are analyzed by computers, which deliver distinct alerts to remind the users and their families of possible emergencies. The system could be further combined with electric appliances to remotely control the users' environmental conditions. When emergencies occur, the environmental monitoring function can be activated to transmit dynamic images of the person under care to medical personnel in real time through the video function. Meanwhile, in consideration of privacy, the video camera is turned on only when necessary. The caregiver can adjust the camera angle to a proper position and observe the current situation of the person under care when a sensor on that person or the environmental monitoring system detects an exception. All physiological data are stored in the database for family enquiries or for accurate diagnoses by medical personnel.

  8. University of Virginia suborbital infrared sensing experiment

    NASA Astrophysics Data System (ADS)

    Holland, Stephen; Nunnally, Clayton; Armstrong, Sarah; Laufer, Gabriel

    2002-03-01

    An Orion sounding rocket launched from Wallops Flight Facility carried a University of Virginia payload to an altitude of 47 km and returned infrared measurements of the Earth's upper atmosphere and video images of the ocean. The payload launch was the result of a three-year undergraduate design project by a multi-disciplinary student group from the University of Virginia and James Madison University. As part of a new multi-year design course, undergraduate students designed, built, tested, and participated in the launch of a suborbital platform from which atmospheric remote sensors and other scientific experiments could operate. The first launch included a simplified atmospheric measurement system intended to demonstrate full system operation and remote sensing capabilities during suborbital flight. A thermoelectrically cooled HgCdTe infrared detector, with peak sensitivity at 10 micrometers, measured upwelling radiation, and a small camera and VCR system, aligned with the infrared sensor, provided a ground reference. Additionally, a simple orientation sensor, consisting of three photodiodes equipped with red, green, and blue dichroic filters, was tested. Temperature measurements of the upper atmosphere were successfully obtained during the flight. Video images were successfully recorded on board the payload and proved a valuable tool in the data analysis process. The photodiode system, intended as a replacement for the camera and VCR system, functioned well despite low signal amplification. This fully integrated and flight-tested payload will serve as a platform for future atmospheric sensing experiments. It is currently being modified for a second suborbital flight that will incorporate a gas filter correlation radiometry (GFCR) instrument to measure the distribution of stratospheric methane, and imaging capabilities to record the chlorophyll distribution in Metompkin Bay as an indicator of pollution runoff.

  9. Wireless augmented reality communication system

    NASA Technical Reports Server (NTRS)

    Devereaux, Ann (Inventor); Agan, Martin (Inventor); Jedrey, Thomas (Inventor)

    2006-01-01

    The system of the present invention is a highly integrated radio communication system with a multimedia co-processor which allows true two-way multimedia (video, audio, data) access as well as real-time biomedical monitoring in a pager-sized portable access unit. The system is integrated in a network structure including one or more general purpose nodes for providing a wireless-to-wired interface. The network architecture allows video, audio and data (including biomedical data) streams to be connected directly to external users and devices. The portable access units may also be mated to various non-personal devices such as cameras or environmental sensors for providing a method for setting up wireless sensor nets from which reported data may be accessed through the portable access unit. The reported data may alternatively be automatically logged at a remote computer for access and viewing through a portable access unit, including the user's own.

  10. Wireless Augmented Reality Communication System

    NASA Technical Reports Server (NTRS)

    Jedrey, Thomas (Inventor); Agan, Martin (Inventor); Devereaux, Ann (Inventor)

    2014-01-01

    The system of the present invention is a highly integrated radio communication system with a multimedia co-processor which allows true two-way multimedia (video, audio, data) access as well as real-time biomedical monitoring in a pager-sized portable access unit. The system is integrated in a network structure including one or more general purpose nodes for providing a wireless-to-wired interface. The network architecture allows video, audio and data (including biomedical data) streams to be connected directly to external users and devices. The portable access units may also be mated to various non-personal devices such as cameras or environmental sensors for providing a method for setting up wireless sensor nets from which reported data may be accessed through the portable access unit. The reported data may alternatively be automatically logged at a remote computer for access and viewing through a portable access unit, including the user's own.

  11. Wireless Augmented Reality Communication System

    NASA Technical Reports Server (NTRS)

    Agan, Martin (Inventor); Devereaux, Ann (Inventor); Jedrey, Thomas (Inventor)

    2016-01-01

    The system of the present invention is a highly integrated radio communication system with a multimedia co-processor which allows true two-way multimedia (video, audio, data) access as well as real-time biomedical monitoring in a pager-sized portable access unit. The system is integrated in a network structure including one or more general purpose nodes for providing a wireless-to-wired interface. The network architecture allows video, audio and data (including biomedical data) streams to be connected directly to external users and devices. The portable access units may also be mated to various non-personal devices such as cameras or environmental sensors for providing a method for setting up wireless sensor nets from which reported data may be accessed through the portable access unit. The reported data may alternatively be automatically logged at a remote computer for access and viewing through a portable access unit, including the user's own.

  12. Object tracking using multiple camera video streams

    NASA Astrophysics Data System (ADS)

    Mehrubeoglu, Mehrube; Rojas, Diego; McLauchlan, Lifford

    2010-05-01

    Two synchronized cameras are utilized to obtain independent video streams to detect moving objects from two different viewing angles. The video frames are directly correlated in time. Moving objects in image frames from the two cameras are identified and tagged for tracking. One advantage of such a system is overcoming the effects of occlusion, where an object is in partial or full view in one camera while fully visible in another camera. Object registration is achieved by determining the location of common features on the moving object across simultaneous frames. Perspective differences are adjusted. Combining information from the images of multiple cameras increases the robustness of the tracking process. Motion tracking is achieved by determining anomalies caused by the objects' movement across frames in each video stream and in the combined video information. The path of each object is determined heuristically. Accuracy of detection depends on the speed of the object as well as variations in the direction of motion. Fast cameras increase accuracy but limit the speed and complexity of the algorithm. Such an imaging system has applications in traffic analysis, surveillance and security, as well as object modeling from multi-view images. The system can easily be expanded by increasing the number of cameras such that there is an overlap between the scenes from at least two cameras in proximity. An object can then be tracked over long distances or across multiple cameras continuously, applicable, for example, in wireless sensor networks for surveillance or navigation.
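
    An illustrative sketch of the per-camera detection step: time-correlated frames from two synchronized streams are background-subtracted and moving-object centroids extracted for later registration across views. The file names and the MOG2 background model are assumptions; the paper's exact detector may differ.

    ```python
    import cv2

    def moving_centroids(frame, subtractor, min_area=200):
        """Detect moving blobs in one frame and return their centroids."""
        mask = subtractor.apply(frame)
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, None)  # 3x3 default kernel
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        centroids = []
        for c in contours:
            if cv2.contourArea(c) < min_area:
                continue
            m = cv2.moments(c)
            centroids.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
        return centroids

    cap_a, cap_b = cv2.VideoCapture("cam_a.avi"), cv2.VideoCapture("cam_b.avi")
    sub_a, sub_b = (cv2.createBackgroundSubtractorMOG2() for _ in range(2))
    while True:
        ok_a, frame_a = cap_a.read()
        ok_b, frame_b = cap_b.read()   # frames are correlated in time
        if not (ok_a and ok_b):
            break
        # Centroids from both views would next be registered via common
        # features and perspective correction, then fused for tracking.
        print(moving_centroids(frame_a, sub_a), moving_centroids(frame_b, sub_b))
    ```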

  13. Hardware platform for multiple mobile robots

    NASA Astrophysics Data System (ADS)

    Parzhuber, Otto; Dolinsky, D.

    2004-12-01

    This work is concerned with software and communications architectures that might facilitate the operation of several mobile robots. The vehicles should be remotely piloted or tele-operated via a wireless link between the operator and the vehicles. The wireless link carries control commands from the operator to the vehicle, telemetry data from the vehicle back to the operator, and frequently also a real-time video stream from an onboard camera. For autonomous driving, the link carries commands and data between the vehicles. For this purpose we have developed a hardware platform which consists of a powerful microprocessor, various sensors, a stereo camera, and a Wireless Local Area Network (WLAN) interface for communication. The adoption of the IEEE 802.11 standard for the physical and access layer protocols allows a straightforward integration with the TCP/IP Internet protocols. For inspecting the environment, the robots are equipped with a wide variety of sensors such as ultrasonic and infrared proximity sensors and a small inertial measurement unit. The stereo cameras make it feasible to detect obstacles, measure distances, and create a map of the room.

  14. Flexible mobile robot system for smart optical pipe inspection

    NASA Astrophysics Data System (ADS)

    Kampfer, Wolfram; Bartzke, Ralf; Ziehl, Wolfgang

    1998-03-01

    Pipe damage can be inspected and graded with TV technology available on the market: remotely controlled vehicles carry a TV camera through the pipes. However, diagnosis failures cannot be ruled out, as results depend on the experience and capability of the operator. The classification of damage requires knowledge of its exact geometrical dimensions, such as the width and depth of cracks, fractures, and defective connections. Within the framework of a joint R&D project, a sensor-based pipe inspection system named RODIAS has been developed by two partners from industry and a research institute. It consists of a remotely controlled mobile robot which carries intelligent sensors for on-line sewerage inspection. The sensor head combines a 3D optical sensor with a laser distance sensor. The laser distance sensor is integrated in the optical system of the camera and can measure the distance between camera and object. The angle of view can be determined from the position of the pan-and-tilt unit. With coordinate transformations it is possible to calculate the spatial coordinates of every point in the video image, so the geometry of an object can be described exactly. The company Optimess has developed TriScan32, a special software package for pipe condition classification. The user can start complex measurements of profiles, pipe displacements, or crack widths simply by pressing a push-button. The measuring results are stored together with other data, such as verbal damage descriptions and digitized images, in a database.

  15. Tile-Image Merging and Delivering for Virtual Camera Services on Tiled-Display for Real-Time Remote Collaboration

    NASA Astrophysics Data System (ADS)

    Choe, Giseok; Nang, Jongho

    The tiled-display system has been used as a Computer Supported Cooperative Work (CSCW) environment, in which multiple local (and/or remote) participants cooperate using shared applications whose outputs are displayed on a large-scale, high-resolution tiled display that is controlled by a cluster of PCs, one PC per display. In order to make the collaboration effective, each remote participant should be aware of all CSCW activities on the tiled-display system in real time. This paper presents a mechanism for capturing and delivering all activities on the tiled-display system to remote participants in real time. In the proposed mechanism, the screen images of all PCs are periodically captured and delivered to the Merging Server, which maintains separate buffers to store the captured images from the PCs. The mechanism selects one tile image from each buffer, merges the images to make a screen shot of the whole tiled display, clips a Region of Interest (ROI), compresses it, and streams it to remote participants in real time. A technical challenge in the proposed mechanism is how to select a set of tile images, one from each buffer, for merging so that the tile images displayed at the same time on the tiled display are properly merged together. This paper presents three selection algorithms: a sequential selection algorithm, a capturing-time-based algorithm, and a capturing-time and visual-consistency-based algorithm. It also proposes a mechanism for providing several virtual cameras on the tiled-display system to remote participants by concurrently clipping several different ROIs from the same merged tiled-display images and delivering them after compression with the video encoders requested by the remote participants. By interactively changing and resizing his or her own ROI, a remote participant can check the activities on the tiled display effectively. Experiments on a 3 × 2 tiled-display system show that the proposed merging algorithm can build a tiled-display image stream synchronously, and that the ROI-based clipping and delivering mechanism can provide individual views of the tiled-display system to multiple remote participants in real time.
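
    A minimal sketch of the capturing-time-based selection idea: each tile PC's buffer holds (capture_time, image) pairs, and for every merge step we pick, from each buffer, the frame closest to a common reference time so that tiles shown together on the display are merged together. The buffer contents and the reference-time policy are illustrative assumptions.

    ```python
    def select_tiles(buffers, ref_time):
        """buffers: list of lists of (capture_time, image), one list per tile."""
        selected = []
        for buf in buffers:
            best = min(buf, key=lambda entry: abs(entry[0] - ref_time))
            selected.append(best)
        return selected

    # Example: three tile buffers; use the newest frame of the slowest tile as
    # the reference so no tile has to wait for a frame that does not exist yet.
    buffers = [
        [(0.00, "A0"), (0.04, "A1"), (0.08, "A2")],
        [(0.01, "B0"), (0.05, "B1")],
        [(0.00, "C0"), (0.03, "C1"), (0.07, "C2")],
    ]
    ref = min(buf[-1][0] for buf in buffers)   # 0.05 here
    print(select_tiles(buffers, ref))          # the frames nearest t = 0.05
    ```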

  16. GoPro Hero Cameras for Creation of a Three-Dimensional, Educational, Neurointerventional Video.

    PubMed

    Park, Min S; Brock, Andrea; Mortimer, Vance; Taussky, Philipp; Couldwell, William T; Quigley, Edward

    2017-10-01

    Neurointerventional education relies on an apprenticeship model, with the trainee observing and participating in procedures with the guidance of a mentor. While educational videos are becoming prevalent in surgical cases, there is a dearth of comparable educational material for trainees in neurointerventional programs. We sought to create a high-quality, three-dimensional video of a routine diagnostic cerebral angiogram for use as an educational tool. A diagnostic cerebral angiogram was recorded using two GoPro HERO 3+ cameras with the Dual HERO System to capture the proceduralist's hands during the case. This video was edited with recordings from the video monitors to create a real-time three-dimensional video of both the actions of the neurointerventionalist and the resulting wire/catheter movements. The final edited video, in either two or three dimensions, can serve as another instructional tool for the training of residents and/or fellows. Additional videos can be created in a similar fashion of more complicated neurointerventional cases. The GoPro HERO 3+ camera and Dual HERO System can be used to create educational videos of neurointerventional procedures.

  17. Still-Video Photography: Tomorrow's Electronic Cameras in the Hands of Today's Photojournalists.

    ERIC Educational Resources Information Center

    Foss, Kurt; Kahan, Robert S.

    This paper examines the still-video camera and its potential impact by looking at recent experiments and by gathering information from some of the few people knowledgeable about the new technology. The paper briefly traces the evolution of the tools and processes of still-video photography, examining how photographers and their work have been…

  18. Review of intelligent video surveillance with single camera

    NASA Astrophysics Data System (ADS)

    Liu, Ying; Fan, Jiu-lun; Wang, DianWei

    2012-01-01

    Intelligent video surveillance has found a wide range of applications in public security. This paper describes the state-of-the-art techniques in video surveillance systems with a single camera. This can serve as a starting point for building practical video surveillance systems in developing regions, leveraging existing ubiquitous infrastructure. In addition, this paper discusses the gap between existing technologies and the requirements of real-world scenarios, and proposes potential solutions to reduce this gap.

  19. Body worn camera

    NASA Astrophysics Data System (ADS)

    Aishwariya, A.; Pallavi Sudhir, Gulavani; Garg, Nemesa; Karthikeyan, B.

    2017-11-01

    A body worn camera is a small video camera worn on the body, typically used by police officers to record arrests and evidence at crime scenes. It helps prevent and resolve complaints brought by members of the public, and strengthens police transparency, performance, and accountability. The main parameters of such a system are video format, resolution, frame rate, and audio quality. This system records video in .mp4 format at 1080p resolution and 30 frames per second. Another important design consideration is the amount of power the system requires, as battery management becomes critical. The main design challenges are the size of the video, audio for the video, combining audio and video and saving the result in .mp4 format, a battery size sufficient for 8 hours of continuous recording, and security. The prototype of this system is implemented using a Raspberry Pi Model B.
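
    A hedged sketch of the recording loop: 1080p at 30 fps written to an .mp4 container, as the abstract specifies. This uses OpenCV's VideoWriter with the mp4v codec as a stand-in; the actual prototype on a Raspberry Pi Model B would more likely use the camera module's hardware H.264 encoder, with audio muxed in separately.

    ```python
    import cv2

    WIDTH, HEIGHT, FPS = 1920, 1080, 30

    cap = cv2.VideoCapture(0)                 # first attached camera
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, WIDTH)
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, HEIGHT)
    cap.set(cv2.CAP_PROP_FPS, FPS)

    fourcc = cv2.VideoWriter_fourcc(*"mp4v")
    out = cv2.VideoWriter("evidence.mp4", fourcc, FPS, (WIDTH, HEIGHT))

    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            out.write(frame)                  # one 1080p frame per iteration
    finally:
        cap.release()
        out.release()
    ```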

  20. Evaluating video digitizer errors

    NASA Astrophysics Data System (ADS)

    Peterson, C.

    2016-01-01

    Analog output video cameras remain popular for recording meteor data. Although these cameras uniformly employ electronic detectors with fixed pixel arrays, the digitization process requires resampling the horizontal lines as they are output in order to reconstruct the pixel data, usually resulting in a new data array with different horizontal dimensions than the native sensor. Pixel timing is not provided by the camera and must be reconstructed from line sync information embedded in the analog video signal. Using a technique based on hot pixels, I present evidence that jitter, sync detection, and other timing errors introduce both position and intensity errors that are not present in cameras that internally digitize their sensors and output the digital data directly.

  1. State of the art in video system performance

    NASA Technical Reports Server (NTRS)

    Lewis, Michael J.

    1990-01-01

    The closed circuit television (CCTV) system onboard the Space Shuttle comprises cameras, a video signal switching and routing unit (VSU), and a Space Shuttle video tape recorder. However, this system is inadequate for use with many experiments that require video imaging. In order to assess the state of the art in video technology and data storage systems, a survey was conducted of High Resolution, High Frame Rate Video Technology (HHVT) products. The performance of state-of-the-art solid state cameras and image sensors, video recording systems, data transmission devices, and data storage systems is shown graphically against users' requirements.

  2. Stereoscopic visualization and haptic technology used to create a virtual environment for remote surgery - biomed 2011.

    PubMed

    Bornhoft, J M; Strabala, K W; Wortman, T D; Lehman, A C; Oleynikov, D; Farritor, S M

    2011-01-01

    The objective of this research is to study the effectiveness of using a stereoscopic visualization system for performing remote surgery. The use of stereoscopic vision has become common with the advent of the da Vinci® system (Intuitive, Sunnyvale, CA). This system creates a virtual environment that consists of a 3-D display for visual feedback and haptic tactile feedback, together providing an intuitive environment for remote surgical applications. This study uses simple in vivo robotic surgical devices and compares the performance of surgeons using the stereoscopic interfacing system to the performance of surgeons using conventional two-dimensional monitors. The stereoscopic viewing system consists of two cameras, two monitors, and four mirrors. The cameras are mounted to a multi-functional miniature in vivo robot and mimic the depth perception of the human eyes; this is done by placing the cameras at a calculated angle and distance apart. Live video streams from the left and right cameras are displayed on the left and right monitors, respectively. A system of angled mirrors allows the left and right eyes to see the video stream from the left and right monitor, respectively, creating the illusion of depth. The haptic interface consists of two PHANTOM Omni® (SensAble, Woburn, MA) controllers. These controllers measure the position and orientation of a pen-like end effector with three degrees of freedom. As the surgeon uses this interface, they see a 3-D image and feel force feedback for collisions and workspace limits. The stereoscopic viewing system has been used in several surgical training tests and shows a potential improvement in depth perception and 3-D vision. The haptic system accurately provides force feedback that aids in surgery. Both have been used in non-survival animal surgeries, and have successfully been used in suturing and gallbladder removal. Bench-top experiments using the interfacing system have also been conducted: a group of participants completed two different surgical training tasks using both a two-dimensional visual system and the stereoscopic visual system. Results suggest that the stereoscopic visual system decreased the time taken to complete the tasks, and all participants reported that the stereoscopic system was easier to use than the two-dimensional system. Haptic controllers combined with stereoscopic vision provide a more intuitive virtual environment. This system provides the surgeon with 3-D vision, depth perception, and force feedback through the haptic controller while performing surgery. These capabilities potentially enable the performance of more complex surgeries with a higher level of precision.

  3. Crew Field Notes: A New Tool for Planetary Surface Exploration

    NASA Technical Reports Server (NTRS)

    Horz, Friedrich; Evans, Cynthia; Eppler, Dean; Gernhardt, Michael; Bluethmann, William; Graf, Jodi; Bleisath, Scott

    2011-01-01

    The Desert Research and Technology Studies (DRATS) field tests of 2010 focused on the simultaneous operation of two rovers, a historical first. The complexity and data volume of two rovers operating simultaneously presented significant operational challenges for the on-site Mission Control Center, including the real time science support function. The latter was split into two "tactical" back rooms, one for each rover, that supported the real time traverse activities; in addition, a "strategic" science team convened overnight to synthesize the day's findings and to conduct the strategic forward planning for the next day or days, as detailed in [1, 2]. Current DRATS simulations and operations differ dramatically from those of Apollo, including the most evolved Apollo 15-17 missions, due to the advent of digital technologies. Modern digital still and video cameras, combined with the capability for real time transmission of large volumes of data, including multiple video streams, offer the prospect for the ground based science support room(s) in Mission Control to witness all crew activities in unprecedented detail and in real time. It was not uncommon during DRATS 2010 that each tactical science back room simultaneously received some 4-6 video streams from cameras mounted on the rover or the crews' backpacks. Some of the rover cameras are controllable PTZ (pan, tilt, zoom) devices that can be operated by the crews (during extensive drives) or remotely by the back room (during EVAs). Typically, a dedicated "expert" and professional geologist in the tactical back room(s) controls, monitors, and analyses a single video stream and provides the findings to the team, commonly supported by screen-saved images. It seems obvious that the real time comprehension and synthesis of the verbal descriptions, extensive imagery, and other information (e.g., navigation data, time lines, etc.) flowing into the science support room(s) constitute a fundamental challenge to future mission operations: how can one analyze, comprehend, and synthesize, in real time, the enormous data volume coming to the ground? Real time understanding of all data is needed for constructive interaction with the surface crews, and it becomes critical for the strategic forward planning process.

  4. Evolution of the Mobile Information SysTem (MIST)

    NASA Technical Reports Server (NTRS)

    Litaker, Harry L., Jr.; Thompson, Shelby; Archer, Ronald D.

    2008-01-01

    The Mobile Information SysTem (MIST) had its origins in the need to determine whether commercial off-the-shelf (COTS) technologies could improve intravehicular activity (IVA) crew maintenance productivity on the International Space Station (ISS). It began with an exploration of head mounted displays (HMDs), but quickly evolved to include voice recognition, mobile personal computing, and data collection. The unique characteristic of the MIST lies in its mobility: a vest is worn that contains a mini-computer and supporting equipment, and a headband carries attachments for an HMD, lipstick camera, and microphone. Data is then captured directly by the computer running Morae(TM) or similar software for analysis. To date, the MIST system has been tested in numerous environments, including two parabolic flights on NASA's C-9 microgravity aircraft and several mockup facilities ranging from the ISS to the Altair Lunar Sortie Lander. Its functional strengths include a lightweight and compact design, commonality across systems and environments, and usefulness in remote collaboration. Human factors evaluations have demonstrated the MIST's ability to be worn for long durations (approximately four continuous hours) with no adverse physical deficits, moderate operator compensation, and low workload, as measured by the Corlett-Bishop Discomfort Scale, Cooper-Harper Ratings, and the NASA Task Load Index (TLX), respectively. Additionally, development of the system has spawned several new applications useful in research. For example, by employing only the lipstick camera, microphone, and a compact digital video recorder (DVR), we created a portable, lightweight data collection device: video is recorded from the participant's point of view (POV) through the camera mounted on the side of the head, and both video and audio are recorded directly into the DVR located on a belt around the waist. This data is then transferred to another computer for video editing and analysis. Another application has been discovered in simulated flight, in which a kneeboard is replaced with the mini-computer and the HMD to project flight paths and glide slopes for lunar ascent. As technologies evolve, so will the system and its applications for research and space system operations.

  5. Beats: Video Monitors and Cameras.

    ERIC Educational Resources Information Center

    Worth, Frazier

    1996-01-01

    Presents a method to teach the concept of beats as a generalized phenomenon rather than teaching it only in the context of sound. Involves using a video camera to film a computer terminal, 16-mm projector, or TV monitor. (JRH)

  6. Design and implementation of a remote UAV-based mobile health monitoring system

    NASA Astrophysics Data System (ADS)

    Li, Songwei; Wan, Yan; Fu, Shengli; Liu, Mushuang; Wu, H. Felix

    2017-04-01

    Unmanned aerial vehicles (UAVs) play increasing roles in structural health monitoring. With growing mobility in modern Internet-of-Things (IoT) applications, the health monitoring of mobile structures becomes an emerging application. In this paper, we develop a UAV-carried vision-based monitoring system that allows a UAV to continuously track and monitor a mobile infrastructure and transmit back the monitoring information in real time from a remote location. The monitoring system uses a simple UAV-mounted camera and requires only a single feature located on the mobile infrastructure for target detection and tracking. The computation-effective vision-based tracking solution based on a single feature is an improvement over existing vision-based lead-follower tracking systems, which either have poor tracking performance due to the use of a single feature, or have improved tracking performance at the cost of using multiple features. In addition, a UAV-carried aerial networking infrastructure using directional antennas is used to enable robust real-time transmission of monitoring video streams over a long distance. Automatic heading control is used to self-align the headings of the directional antennas to enable robust communication in mobility. Compared to existing omni-communication systems, the directional communication solution significantly increases the operation range of remote monitoring systems. In this paper, we develop the integrated modeling framework of camera and mobile platforms, design the tracking algorithm, develop a testbed of UAVs and mobile platforms, and evaluate system performance through both simulation studies and field tests.
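
    A rough sketch of single-feature tracking from the UAV camera: find a brightly colored marker on the mobile structure via HSV thresholding and steer from the horizontal offset of its centroid. The HSV limits, file name, and proportional gain are invented for illustration; the paper's detector and controller are more involved.

    ```python
    import cv2
    import numpy as np

    LOW, HIGH = np.array([5, 120, 120]), np.array([20, 255, 255])  # orange-ish

    def feature_offset(frame):
        """Return the marker centroid's normalized horizontal offset, or None."""
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv, LOW, HIGH)
        m = cv2.moments(mask)
        if m["m00"] == 0:
            return None                       # feature not visible
        cx = m["m10"] / m["m00"]
        return (cx - frame.shape[1] / 2) / (frame.shape[1] / 2)  # -1 .. 1

    cap = cv2.VideoCapture("uav_feed.mp4")
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        off = feature_offset(frame)
        if off is not None:
            yaw_cmd = 0.8 * off   # proportional heading correction (gain assumed)
            print(f"offset {off:+.2f} -> yaw command {yaw_cmd:+.2f}")
    ```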

  7. Photoplethysmographic imaging via spectrally demultiplexed erythema fluctuation analysis for remote heart rate monitoring

    NASA Astrophysics Data System (ADS)

    Deglint, Jason; Chung, Audrey G.; Chwyl, Brendan; Amelard, Robert; Kazemzadeh, Farnoud; Wang, Xiao Yu; Clausi, David A.; Wong, Alexander

    2016-03-01

    Traditional photoplethysmographic imaging (PPGI) systems use the red, green, and blue (RGB) broadband measurements of a consumer digital camera to remotely estimate a patient's heart rate; however, these broadband RGB signals are often corrupted by ambient noise, making the extraction of the subtle fluctuations indicative of heart rate difficult. The use of narrow-band spectral measurements can therefore significantly improve accuracy. We propose a novel digital spectral demultiplexing (DSD) method to infer narrow-band spectral information from acquired broadband RGB measurements in order to estimate heart rate via the computation of motion-compensated skin erythema fluctuation. Using high-resolution video recordings of human participants, multiple measurement locations are automatically identified on the cheeks of an individual, and motion-compensated broadband reflectance measurements are acquired at each measurement location over time via measurement location tracking. The motion-compensated broadband reflectance measurements are spectrally demultiplexed using a non-linear inverse model based on the spectral sensitivity of the camera's detector. A PPG signal is then computed from the demultiplexed narrow-band spectral information via skin erythema fluctuation analysis, with improved signal-to-noise ratio allowing for reliable remote heart rate measurements. To assess the effectiveness of the proposed system, a set of experiments involving human motion in a front-facing position were performed under ambient lighting conditions. Experimental results indicate that the proposed system achieves robust and accurate heart rate measurements and can provide additional information about the participant beyond the capabilities of traditional PPGI methods.
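
    A conceptual sketch of the demultiplexing step: recovering narrow-band estimates from broadband RGB measurements given the camera's spectral sensitivity curves. The paper uses a non-linear inverse model; shown here is the simpler regularized linear (Tikhonov) inverse to convey the idea, and the sensitivity values are made-up placeholders, not a real detector's response.

    ```python
    import numpy as np

    # S[i, j]: sensitivity of channel i (R, G, B) in narrow band j (assumed).
    S = np.array([
        [0.05, 0.10, 0.60, 0.90],   # red channel
        [0.20, 0.80, 0.50, 0.10],   # green channel
        [0.90, 0.40, 0.05, 0.01],   # blue channel
    ])

    def demultiplex(rgb, lam=1e-2):
        """Estimate per-band signal r minimizing ||S r - rgb||^2 + lam ||r||^2."""
        A = S.T @ S + lam * np.eye(S.shape[1])
        return np.linalg.solve(A, S.T @ rgb)

    rgb = np.array([0.42, 0.55, 0.31])   # one motion-compensated measurement
    print(demultiplex(rgb))              # four narrow-band estimates
    ```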

  8. Ultra-wide Range Gamma Detector System for Search and Locate Operations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Odell, D. Mackenzie; Harpring, Larry J.; Moore, Frank S., Jr.

    2005-10-26

    Collecting debris samples following a nuclear event requires that operations be conducted from a considerable stand-off distance. An ultra-wide range gamma detector system has been constructed to accomplish both long range radiation search and close range hot sample collection functions. Constructed and tested on a REMOTEC Andros platform, the system has demonstrated reliable operation over six orders of magnitude of gamma dose, from hundreds of µR/hr to over 100 R/hr. Functional elements include a remotely controlled variable collimator assembly, a NaI(Tl)/photomultiplier tube detector, a proprietary digital radiation instrument, a coaxially mounted video camera, a digital compass, and both local and remote control computers with a user interface designed for long range operations. Long range sensitivity and target location, as well as close range sample selection performance, are presented.

  9. Beaming into the News: A System for and Case Study of Tele-Immersive Journalism.

    PubMed

    Kishore, Sameer; Navarro, Xavi; Dominguez, Eva; de la Pena, Nonny; Slater, Mel

    2016-05-25

    We show how a combination of virtual reality and robotics can be used to beam a physical representation of a person to a distant location, and describe an application of this system in the context of journalism. Full body motion capture data of a person is streamed and mapped in real time, onto the limbs of a humanoid robot present at the remote location. A pair of cameras in the robot's 'eyes' stream stereoscopic video back to the HMD worn by the visitor, and a two-way audio connection allows the visitor to talk to people in the remote destination. By fusing the multisensory data of the visitor with the robot, the visitor's 'consciousness' is transformed to the robot's body. This system was used by a journalist to interview a neuroscientist and a chef 900 miles distant, about food for the brain, resulting in an article published in the popular press.

  10. Beaming into the News: A System for and Case Study of Tele-Immersive Journalism.

    PubMed

    Kishore, Sameer; Navarro, Xavi; Dominguez, Eva; De La Pena, Nonny; Slater, Mel

    2018-03-01

    We show how a combination of virtual reality and robotics can be used to beam a physical representation of a person to a distant location, and describe an application of this system in the context of journalism. Full body motion capture data of a person is streamed and mapped in real time, onto the limbs of a humanoid robot present at the remote location. A pair of cameras in the robot's 'eyes' stream stereoscopic video back to the HMD worn by the visitor, and a two-way audio connection allows the visitor to talk to people in the remote destination. By fusing the multisensory data of the visitor with the robot, the visitor's 'consciousness' is transformed to the robot's body. This system was used by a journalist to interview a neuroscientist and a chef 900 miles distant, about food for the brain, resulting in an article published in the popular press.

  11. Improving Photometric Calibration of Meteor Video Camera Systems.

    PubMed

    Ehlert, Steven; Kingery, Aaron; Suggs, Robert

    2017-09-01

    We present the results of new calibration tests performed by the NASA Meteoroid Environment Office (MEO) designed to help quantify and minimize systematic uncertainties in meteor photometry from video camera observations. These systematic uncertainties can be categorized by two main sources: an imperfect understanding of the linearity correction for the MEO's Watec 902H2 Ultimate video cameras and uncertainties in meteor magnitudes arising from transformations between the Watec camera's Sony EX-View HAD bandpass and the bandpasses used to determine reference star magnitudes. To address the first point, we have measured the linearity response of the MEO's standard meteor video cameras using two independent laboratory tests on eight cameras. Our empirically determined linearity correction is critical for performing accurate photometry at low camera intensity levels. With regard to the second point, we have calculated synthetic magnitudes in the EX bandpass for reference stars. These synthetic magnitudes enable direct calculations of the meteor's photometric flux within the camera bandpass without requiring any assumptions of its spectral energy distribution. Systematic uncertainties in the synthetic magnitudes of individual reference stars are estimated at ∼ 0.20 mag, and are limited by the available spectral information in the reference catalogs. These two improvements allow for zero-points accurate to ∼ 0.05 - 0.10 mag in both filtered and unfiltered camera observations with no evidence for lingering systematics. These improvements are essential to accurately measuring photometric masses of individual meteors and source mass indexes.

  12. Improving Photometric Calibration of Meteor Video Camera Systems

    NASA Technical Reports Server (NTRS)

    Ehlert, Steven; Kingery, Aaron; Suggs, Robert

    2017-01-01

    We present the results of new calibration tests performed by the NASA Meteoroid Environment Office (MEO) designed to help quantify and minimize systematic uncertainties in meteor photometry from video camera observations. These systematic uncertainties can be categorized by two main sources: an imperfect understanding of the linearity correction for the MEO's Watec 902H2 Ultimate video cameras and uncertainties in meteor magnitudes arising from transformations between the Watec camera's Sony EX-View HAD bandpass and the bandpasses used to determine reference star magnitudes. To address the first point, we have measured the linearity response of the MEO's standard meteor video cameras using two independent laboratory tests on eight cameras. Our empirically determined linearity correction is critical for performing accurate photometry at low camera intensity levels. With regard to the second point, we have calculated synthetic magnitudes in the EX bandpass for reference stars. These synthetic magnitudes enable direct calculations of the meteor's photometric flux within the camera bandpass without requiring any assumptions of its spectral energy distribution. Systematic uncertainties in the synthetic magnitudes of individual reference stars are estimated at approx. 0.20 mag, and are limited by the available spectral information in the reference catalogs. These two improvements allow for zero-points accurate to 0.05 - 0.10 mag in both filtered and unfiltered camera observations with no evidence for lingering systematics. These improvements are essential to accurately measuring photometric masses of individual meteors and source mass indexes.
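
    A worked sketch of the synthetic-magnitude idea described in the two records above: integrate a reference star's spectral energy distribution against the camera bandpass and convert the band-weighted flux ratio to a magnitude against a zero-point spectrum. The wavelength grid, the Gaussian bandpass, and both spectra are placeholders, not the actual Sony EX-View HAD response or catalog data.

    ```python
    import numpy as np

    wl = np.linspace(400.0, 900.0, 501)                    # wavelength grid (nm)
    bandpass = np.exp(-0.5 * ((wl - 600.0) / 120.0) ** 2)  # assumed camera curve

    def synthetic_mag(flux_star, flux_zero):
        """m = -2.5 log10 of the bandpass-weighted flux ratio."""
        f_star = np.trapz(flux_star * bandpass, wl)
        f_zero = np.trapz(flux_zero * bandpass, wl)
        return -2.5 * np.log10(f_star / f_zero)

    flux_zero = np.ones_like(wl)                # stand-in zero-point SED
    flux_star = 0.02 * (wl / 600.0) ** -1.5     # stand-in stellar SED
    print(f"synthetic magnitude: {synthetic_mag(flux_star, flux_zero):.2f}")
    ```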

  13. Architecture and Protocol of a Semantic System Designed for Video Tagging with Sensor Data in Mobile Devices

    PubMed Central

    Macias, Elsa; Lloret, Jaime; Suarez, Alvaro; Garcia, Miguel

    2012-01-01

    Current mobile phones come with several sensors and powerful video cameras. These video cameras can be used to capture good quality scenes, which can be complemented with the information gathered by the sensors also embedded in the phones. For example, the surroundings of a beach recorded by the camera of the mobile phone, jointly with the temperature of the site, can let users know via the Internet if the weather is nice enough to swim. In this paper, we present a system that tags the video frames of the video recorded from mobile phones with the data collected by the embedded sensors. The tagged video is uploaded to a video server, which is placed on the Internet and is accessible by any user. The proposed system uses a semantic approach with the stored information in order to make video searches easy and efficient. Our experimental results show that it is possible to tag video frames in real time and send the tagged video to the server with very low packet delay variations. As far as we know, no other application like the one presented in this paper has been developed. PMID:22438753

  14. Architecture and protocol of a semantic system designed for video tagging with sensor data in mobile devices.

    PubMed

    Macias, Elsa; Lloret, Jaime; Suarez, Alvaro; Garcia, Miguel

    2012-01-01

    Current mobile phones come with several sensors and powerful video cameras. These video cameras can be used to capture good quality scenes, which can be complemented with the information gathered by the sensors also embedded in the phones. For example, the surroundings of a beach recorded by the camera of the mobile phone, jointly with the temperature of the site, can let users know via the Internet if the weather is nice enough to swim. In this paper, we present a system that tags the video frames of the video recorded from mobile phones with the data collected by the embedded sensors. The tagged video is uploaded to a video server, which is placed on the Internet and is accessible by any user. The proposed system uses a semantic approach with the stored information in order to make video searches easy and efficient. Our experimental results show that it is possible to tag video frames in real time and send the tagged video to the server with very low packet delay variations. As far as we know, no other application like the one presented in this paper has been developed.
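
    A minimal sketch of the tagging idea behind both records above: each captured frame index is paired with the sensor sample nearest in time, and the pairs are written as a JSON sidecar that a video server could index for semantic search. The field names, frame rate, and nearest-sample policy are illustrative assumptions, not the paper's actual schema.

    ```python
    import json

    FPS = 30.0

    def tag_frames(n_frames, sensor_samples):
        """sensor_samples: list of dicts like {"t": sec, "temp_c": 24.8, ...}."""
        tags = []
        for i in range(n_frames):
            t_frame = i / FPS
            nearest = min(sensor_samples, key=lambda s: abs(s["t"] - t_frame))
            tags.append({"frame": i, "t": t_frame, "sensor": nearest})
        return tags

    samples = [{"t": 0.0, "temp_c": 24.8, "lat": 28.1, "lon": -15.4},
               {"t": 1.0, "temp_c": 24.9, "lat": 28.1, "lon": -15.4}]
    with open("video_tags.json", "w") as f:
        json.dump(tag_frames(60, samples), f, indent=2)
    ```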

  15. Calibration of Action Cameras for Photogrammetric Purposes

    PubMed Central

    Balletti, Caterina; Guerra, Francesco; Tsioukas, Vassilios; Vernier, Paolo

    2014-01-01

    The use of action cameras for photogrammetry purposes is not widespread due to the fact that until recently the images provided by the sensors, using either still or video capture mode, were not big enough to perform and provide the appropriate analysis with the necessary photogrammetric accuracy. However, several manufacturers have recently produced and released new lightweight devices which are: (a) easy to handle, (b) capable of performing under extreme conditions and, more importantly, (c) able to provide both still images and video sequences of high resolution. In order to be able to use the sensor of action cameras we must apply a careful and reliable self-calibration prior to the use of any photogrammetric procedure, a relatively difficult scenario because of the short focal length of the camera and its wide angle lens that is used to obtain the maximum possible resolution of images. Special software, using functions of the OpenCV library, has been created to perform both the calibration and the production of undistorted scenes for each of the still and video image capturing modes of a novel action camera, the GoPro Hero 3 camera, which can provide still images up to 12 Mp and video up to 8 Mp resolution. PMID:25237898

  16. Calibration of action cameras for photogrammetric purposes.

    PubMed

    Balletti, Caterina; Guerra, Francesco; Tsioukas, Vassilios; Vernier, Paolo

    2014-09-18

    The use of action cameras for photogrammetry purposes is not widespread due to the fact that until recently the images provided by the sensors, using either still or video capture mode, were not big enough to perform and provide the appropriate analysis with the necessary photogrammetric accuracy. However, several manufacturers have recently produced and released new lightweight devices which are: (a) easy to handle, (b) capable of performing under extreme conditions and, more importantly, (c) able to provide both still images and video sequences of high resolution. In order to be able to use the sensor of action cameras we must apply a careful and reliable self-calibration prior to the use of any photogrammetric procedure, a relatively difficult scenario because of the short focal length of the camera and its wide angle lens that is used to obtain the maximum possible resolution of images. Special software, using functions of the OpenCV library, has been created to perform both the calibration and the production of undistorted scenes for each of the still and video image capturing modes of a novel action camera, the GoPro Hero 3 camera, which can provide still images up to 12 Mp and video up to 8 Mp resolution.
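
    A sketch of a chessboard self-calibration for a wide-angle action camera, following the general OpenCV recipe these records build on. The board size, file names, and the standard distortion model are assumptions; a fisheye-specific model may fit a GoPro better than the model used here.

    ```python
    import glob
    import cv2
    import numpy as np

    PATTERN = (9, 6)                      # inner corners of the chessboard
    objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2)

    obj_points, img_points, size = [], [], None
    for path in glob.glob("calib_*.jpg"):
        gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
        size = gray.shape[::-1]
        found, corners = cv2.findChessboardCorners(gray, PATTERN)
        if found:
            obj_points.append(objp)
            img_points.append(corners)

    rms, K, dist, _, _ = cv2.calibrateCamera(obj_points, img_points, size,
                                             None, None)
    print(f"RMS reprojection error: {rms:.3f} px")

    # Undistort one frame with the recovered intrinsics and distortion terms.
    img = cv2.imread("calib_0.jpg")
    cv2.imwrite("undistorted.jpg", cv2.undistort(img, K, dist))
    ```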

  17. Audiovisual quality estimation of mobile phone video cameras with interpretation-based quality approach

    NASA Astrophysics Data System (ADS)

    Radun, Jenni E.; Virtanen, Toni; Olives, Jean-Luc; Vaahteranoksa, Mikko; Vuori, Tero; Nyman, Göte

    2007-01-01

    We present an effective method for comparing the subjective audiovisual quality of different video cameras and the features related to quality changes. The method yields both a quantitative estimation of overall quality and a qualitative description of critical quality features. The aim was to combine two image quality evaluation methods, the quantitative Absolute Category Rating (ACR) method with hidden reference removal and the qualitative Interpretation-Based Quality (IBQ) method, in order to see how they complement each other in audiovisual quality estimation tasks. 26 observers estimated the audiovisual quality of six different cameras, mainly mobile phone video cameras. In order to achieve an efficient subjective estimation of audiovisual quality, only two contents with different quality requirements were recorded with each camera. The results show that the subjectively important quality features were more related to the overall estimations of the cameras' visual video quality than to features related to sound. The data demonstrated two significant quality dimensions related to visual quality: darkness and sharpness. We conclude that the qualitative methodology can also complement quantitative quality estimations with audiovisual material. The IBQ approach is especially valuable when the induced quality changes are multidimensional.

  18. Scientists Must Not Film but Must Appear on Screen!

    NASA Astrophysics Data System (ADS)

    Gerdes, A.; Madlener, S.

    2013-12-01

    Film production in science has affected its subjects in a truly remarkable way. Where scientists were once perceived to be poor communicators with an overwhelming aptitude for numbers and figures, audiences now have access to scientists they can understand and even relate to. Over the years, scientists have grown accustomed to involving and using the media in their research and exposing their science to wider audiences, making them better communicators. This is a huge development, and one that is especially noticeable at MARUM, the Center for Marine Environmental Sciences at the University of Bremen, Germany. Over time, the collaboration between the scientists and public relations staff has taught us all to be better at what we do. A unique characteristic of MARUM TV is that more or less all videos are produced 'in house'; we have established the small yet effective infrastructure necessary to develop, execute, and distribute semi-professional videos that access broader audiences and increase world-wide visibility. MARUM TV relies on our research scientists to operate cameras and capture important moments offshore on expedition, and to cooperate with us as we shoot footage of them and conduct interviews onshore in the lab. In turn, we promote their research and help increase their accessibility. At the forefront of our success is the relatively recent implementation of HD cameras on MARUM's fleet of remotely operated vehicles, which capture stunning video footage of the deep sea. Furthermore, sustained collaborations with national TV stations, online media portals, and large production companies help inform our process and increase MARUM's visibility. The result is an extensive suite of about 70 short- and long-format science videos, some with the highest view counts on YouTube compared to other marine institutes. In the session PA011, 'Scientists must film!', we intend to address issues regarding roadblocks to bridging science and media: (a) science communication varies across cultures, posing different challenges for film production depending on the country or even the research institute; (b) audiences expect high production value in video, which is difficult to achieve with non-professionals behind the camera; (c) producers must have an acute awareness of both the outside world and the internal workings of a particular institute or research group. We aim to address these challenges and to share our experiences in successful science filmmaking with this presentation.

  19. CVD2014-A Database for Evaluating No-Reference Video Quality Assessment Algorithms.

    PubMed

    Nuutinen, Mikko; Virtanen, Toni; Vaahteranoksa, Mikko; Vuori, Tero; Oittinen, Pirkko; Hakkinen, Jukka

    2016-07-01

    In this paper, we present a new video database: CVD2014-Camera Video Database. In contrast to previous video databases, this database uses real cameras rather than introducing distortions via post-processing, which results in a complex distortion space in regard to the video acquisition process. CVD2014 contains a total of 234 videos that are recorded using 78 different cameras. Moreover, this database contains the observer-specific quality evaluation scores rather than only providing mean opinion scores. We have also collected open-ended quality descriptions that are provided by the observers. These descriptions were used to define the quality dimensions for the videos in CVD2014. The dimensions included sharpness, graininess, color balance, darkness, and jerkiness. At the end of this paper, a performance study of image and video quality algorithms for predicting the subjective video quality is reported. For this performance study, we proposed a new performance measure that accounts for observer variance. The performance study revealed that there is room for improvement regarding the video quality assessment algorithms. The CVD2014 video database has been made publicly available for the research community. All video sequences and corresponding subjective ratings can be obtained from the CVD2014 project page (http://www.helsinki.fi/psychology/groups/visualcognition/).

  20. Joint Video Stitching and Stabilization from Moving Cameras.

    PubMed

    Guo, Heng; Liu, Shuaicheng; He, Tong; Zhu, Shuyuan; Zeng, Bing; Gabbouj, Moncef

    2016-09-08

    In this paper, we extend image stitching to video stitching for videos that are captured of the same scene simultaneously by multiple moving cameras. In practice, videos captured under this circumstance often appear shaky, and directly applying image stitching methods to shaky videos often suffers from strong spatial and temporal artifacts. To solve this problem, we propose a unified framework in which video stitching and stabilization are performed jointly. Specifically, our system takes several overlapping videos as inputs. We estimate both inter motions (between different videos) and intra motions (between neighboring frames within a video). Then, we solve for an optimal virtual 2D camera path from all original paths. An enlarged field of view along the virtual path is finally obtained by a spatial-temporal optimization that takes both inter and intra motions into consideration. Two important components of this optimization are that (1) a grid-based tracking method is designed for improved robustness, producing features that are distributed evenly within and across multiple views, and (2) a mesh-based motion model is adopted to handle scene parallax. Experimental results demonstrate the effectiveness of our approach on various consumer-level videos, and a plugin named "Video Stitcher" was developed for Adobe After Effects CC2015 to show the processed videos.
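
    A toy version of the virtual-path notion: given a shaky 1D camera path c (one coordinate of per-frame motion accumulated over time), solve for a smooth virtual path p minimizing ||p - c||^2 + w ||D p||^2, where D takes neighbor differences. The paper optimizes 2D paths jointly across videos with parallax handling; this only conveys the smoothing term, and the weight w is an arbitrary choice.

    ```python
    import numpy as np

    def smooth_path(c, w=50.0):
        """Solve (I + w D^T D) p = c for the smoothed path p."""
        n = len(c)
        D = np.diff(np.eye(n), axis=0)     # (n-1) x n difference operator
        A = np.eye(n) + w * D.T @ D
        return np.linalg.solve(A, c)

    c = np.cumsum(np.random.randn(120))    # simulated shaky camera path
    p = smooth_path(c)
    print("shake (std of frame-to-frame steps):",
          np.diff(c).std(), "->", np.diff(p).std())
    ```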

  1. A Comparison of Techniques for Camera Selection and Hand-Off in a Video Network

    NASA Astrophysics Data System (ADS)

    Li, Yiming; Bhanu, Bir

    Video networks are becoming increasingly important for solving many real-world problems. Multiple video sensors require collaboration when performing various tasks. One of the most basic tasks is the tracking of objects, which requires mechanisms to select a camera for a certain object and hand-off this object from one camera to another so as to accomplish seamless tracking. In this chapter, we provide a comprehensive comparison of current and emerging camera selection and hand-off techniques. We consider geometry-, statistics-, and game theory-based approaches and provide both theoretical and experimental comparison using centralized and distributed computational models. We provide simulation and experimental results using real data for various scenarios of a large number of cameras and objects for in-depth understanding of strengths and weaknesses of these techniques.

  2. Results of the IMO Video Meteor Network - June 2017, and effective collection area study

    NASA Astrophysics Data System (ADS)

    Molau, Sirko; Crivello, Stefano; Goncalves, Rui; Saraiva, Carlos; Stomeo, Enrico; Kac, Javor

    2017-12-01

    Over 18000 meteors were recorded by the IMO Video Meteor Network cameras during more than 7100 hours of observing time during 2017 June. The June Bootids were not detectable this year. Nearly 50 Daytime Arietids were recorded in 2017, and a first flux density profile for this shower in the optical domain is calculated, using video data from the period 2011-2017. Effective collection area of video cameras is discussed in more detail.

  3. Repurposing video recordings for structure motion estimations

    NASA Astrophysics Data System (ADS)

    Khaloo, Ali; Lattanzi, David

    2016-04-01

    Video monitoring of public spaces is becoming increasingly ubiquitous, particularly near essential structures and facilities. During any hazard event that dynamically excites a structure, such as an earthquake or hurricane, proximal video cameras may inadvertently capture the motion time-history of the structure during the event. If this dynamic time-history could be extracted from the repurposed video recording it would become a valuable forensic analysis tool for engineers performing post-disaster structural evaluations. The difficulty is that almost all potential video cameras are not installed to monitor structure motions, leading to camera perspective distortions and other associated challenges. This paper presents a method for extracting structure motions from videos using a combination of computer vision techniques. Images from a video recording are first reprojected into synthetic images that eliminate perspective distortion, using as-built knowledge of a structure for calibration. The motion of the camera itself during an event is also considered. Optical flow, a technique for tracking per-pixel motion, is then applied to these synthetic images to estimate the building motion. The developed method was validated using the experimental records of the NEESHub earthquake database. The results indicate that the technique is capable of estimating structural motions, particularly the frequency content of the response. Further work will evaluate variants and alternatives to the optical flow algorithm, as well as study the impact of video encoding artifacts on motion estimates.
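
    A condensed sketch of the two-stage pipeline this record describes: (1) reproject each frame with a homography computed from known ("as-built") building points to remove perspective distortion, then (2) run dense optical flow on consecutive synthetic frames to estimate motion. The point correspondences and file name are placeholders.

    ```python
    import cv2
    import numpy as np

    # Four image points of building corners and their rectified positions.
    src_pts = np.float32([[410, 120], [905, 150], [880, 620], [395, 600]])
    dst_pts = np.float32([[0, 0], [500, 0], [500, 500], [0, 500]])
    H = cv2.getPerspectiveTransform(src_pts, dst_pts)

    cap = cv2.VideoCapture("bystander_video.mp4")
    prev = None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        synth = cv2.warpPerspective(frame, H, (500, 500))        # stage 1
        gray = cv2.cvtColor(synth, cv2.COLOR_BGR2GRAY)
        if prev is not None:
            flow = cv2.calcOpticalFlowFarneback(prev, gray, None,
                                                0.5, 3, 15, 3, 5, 1.2, 0)  # stage 2
            print("mean horizontal motion (px):", float(flow[..., 0].mean()))
        prev = gray
    ```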

  4. NV-CMOS HD camera for day/night imaging

    NASA Astrophysics Data System (ADS)

    Vogelsong, T.; Tower, J.; Sudol, Thomas; Senko, T.; Chodelka, D.

    2014-06-01

    SRI International (SRI) has developed a new multi-purpose day/night video camera with low-light imaging performance comparable to an image intensifier, while offering the size, weight, ruggedness, and cost advantages enabled by the use of SRI's NV-CMOS HD digital image sensor chip. The digital video output is ideal for image enhancement, sharing with others through networking, video capture for data analysis, or fusion with thermal cameras. The camera provides Camera Link output with HD/WUXGA resolution of 1920 x 1200 pixels operating at 60 Hz. Windowing to smaller sizes enables operation at higher frame rates. High sensitivity is achieved through backside illumination, providing high quantum efficiency (QE) across the visible and near infrared (NIR) bands (peak QE >90%), as well as projected low-noise (<2 h+) readout. Power consumption is minimized in the camera, which operates from a single 5 V supply. The NV-CMOS HD camera provides a substantial reduction in size, weight, and power (SWaP), ideal for SWaP-constrained day/night imaging platforms such as UAVs, ground vehicles, and fixed-mount surveillance, and may be reconfigured for mobile soldier operations such as night vision goggles and weapon sights. In addition, the camera with the NV-CMOS HD imager is suitable for high performance digital cinematography/broadcast systems, biofluorescence/microscopy imaging, day/night security and surveillance, and other high-end applications which require HD video imaging with high sensitivity and wide dynamic range. The camera comes with an array of lens mounts including C-mount and F-mount. The latest test data from the NV-CMOS HD camera will be presented.

  5. Fluorescence endoscopic video system

    NASA Astrophysics Data System (ADS)

    Papayan, G. V.; Kang, Uk

    2006-10-01

    This paper describes a fluorescence endoscopic video system intended for the diagnosis of diseases of the internal organs. The system operates on the basis of two-channel recording of the video fluxes from a fluorescence channel and a reflected-light channel by means of a high-sensitivity monochrome television camera and a color camera, respectively. Examples are given of the application of the device in gastroenterology.

  6. Through the Looking Glass: Real-Time Video Using 'Smart' Technology Provides Enhanced Intraoperative Logistics.

    PubMed

    Baldwin, Andrew C W; Mallidi, Hari R; Baldwin, John C; Sandoval, Elena; Cohn, William E; Frazier, O H; Singh, Steve K

    2016-01-01

    In the setting of increasingly complex medical therapies and limited physician resources, the recent emergence of 'smart' technology offers tremendous potential for improved logistics, efficiency, and communication between medical team members. In an effort to harness these capabilities, we sought to evaluate the utility of this technology in surgical practice through the employment of a wearable camera device during cardiothoracic organ recovery. A single procurement surgeon was trained for use of an Explorer Edition Google Glass (Google Inc., Mountain View, CA) during the recovery process. Live video feed of each procedure was securely broadcast to allow for members of the home transplant team to remotely participate in organ assessment. Primary outcomes involved demonstration of technological feasibility and validation of quality assurance through group assessment. The device was employed for the recovery of four organs: a right single lung, a left single lung, and two bilateral lung harvests. Live video of the visualization process was remotely accessed by the home transplant team, and supplemented final verification of organ quality. In each case, the organs were accepted for transplant without disruption of standard procurement protocols. Media files generated during the procedures were stored in a secure drive for future documentation, evaluation, and education purposes without preservation of patient identifiers. Live video streaming can improve quality assurance measures by allowing off-site members of the transplant team to participate in the final assessment of donor organ quality. While further studies are needed, this project suggests that the application of mobile 'smart' technology offers not just immediate value, but the potential to transform our approach to the practice of medicine.

  7. A multiple camera tongue switch for a child with severe spastic quadriplegic cerebral palsy.

    PubMed

    Leung, Brian; Chau, Tom

    2010-01-01

    The present study proposed a video-based access technology that facilitated a non-contact tongue protrusion access modality for a 7-year-old boy with severe spastic quadriplegic cerebral palsy (GMFCS level 5). The proposed system featured a centre camera and two peripheral cameras to extend coverage of the frontal face view of this user for longer durations. The child participated in a descriptive case study. The participant underwent 3 months of tongue protrusion training while the multiple camera tongue switch prototype was being prepared. Later, the participant was brought back for five experiment sessions where he worked on a single-switch picture matching activity, using the multiple camera tongue switch prototype in a controlled environment. The multiple camera tongue switch achieved an average sensitivity of 82% and specificity of 80%. In three of the experiment sessions, the peripheral cameras were associated with most of the true positive switch activations. These activations would have been missed by a centre-camera-only setup. The study demonstrated proof-of-concept of a non-contact tongue access modality implemented by a video-based system involving three cameras and colour video processing.

  8. DETAIL VIEW OF A VIDEO CAMERA POSITIONED ALONG THE PERIMETER ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    DETAIL VIEW OF A VIDEO CAMERA POSITIONED ALONG THE PERIMETER OF THE MLP - Cape Canaveral Air Force Station, Launch Complex 39, Mobile Launcher Platforms, Launcher Road, East of Kennedy Parkway North, Cape Canaveral, Brevard County, FL

  9. Opportunistic traffic sensing using existing video sources (phase II).

    DOT National Transportation Integrated Search

    2017-02-01

    The purpose of the project reported on here was to investigate methods for automatic traffic sensing using traffic surveillance : cameras, red light cameras, and other permanent and pre-existing video sources. Success in this direction would potentia...

  10. Comparative Evaluation of Background Subtraction Algorithms in Remote Scene Videos Captured by MWIR Sensors

    PubMed Central

    Yao, Guangle; Lei, Tao; Zhong, Jiandan; Jiang, Ping; Jia, Wenwu

    2017-01-01

    Background subtraction (BS) is one of the most commonly encountered tasks in video analysis and tracking systems. It distinguishes the foreground (moving objects) from the video sequences captured by static imaging sensors. Background subtraction in remote scene infrared (IR) video is important in many fields. This paper provides a Remote Scene IR Dataset captured by our designed medium-wave infrared (MWIR) sensor. Each video sequence in this dataset is identified with specific BS challenges, and the pixel-wise ground truth of the foreground (FG) for each frame is also provided. A series of experiments were conducted to evaluate BS algorithms on this proposed dataset. The overall performance of the BS algorithms and their processor/memory requirements were compared. Appropriate evaluation metrics were employed to assess the capability of each BS algorithm to handle the different kinds of BS challenges represented in this dataset. The results and conclusions in this paper provide useful references for developing new BS algorithms for remote scene IR video sequences, and some of the conclusions are not limited to remote scene or IR video but are generic to background subtraction. The Remote Scene IR dataset and the foreground masks detected by each evaluated BS algorithm are available online: https://github.com/JerryYaoGl/BSEvaluationRemoteSceneIR. PMID:28837112
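
    The paper benchmarks existing BS algorithms rather than proposing a new one; as an illustration of the kind of baseline such evaluations include, here is a minimal running-average background subtractor in Python/NumPy (the class name and the alpha/tau values are illustrative choices, not taken from the paper):

        import numpy as np

        class RunningAverageBS:
            """Minimal running-average background subtractor, a common
            baseline in BS benchmarks. alpha is the adaptation rate and
            tau the foreground threshold (both illustrative)."""

            def __init__(self, alpha=0.02, tau=25.0):
                self.alpha = alpha
                self.tau = tau
                self.background = None

            def apply(self, frame):
                """frame: 2-D grayscale array (e.g., one MWIR frame).
                Returns a boolean foreground mask and updates the model."""
                f = frame.astype(np.float32)
                if self.background is None:
                    self.background = f.copy()
                mask = np.abs(f - self.background) > self.tau
                # Update only where the scene looks like background, so slow
                # sensor drift is absorbed but moving objects are not.
                self.background[~mask] += self.alpha * (f - self.background)[~mask]
                return mask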

  11. Scaling-up camera traps: monitoring the planet's biodiversity with networks of remote sensors

    USGS Publications Warehouse

    Steenweg, Robin; Hebblewhite, Mark; Kays, Roland; Ahumada, Jorge A.; Fisher, Jason T.; Burton, Cole; Townsend, Susan E.; Carbone, Chris; Rowcliffe, J. Marcus; Whittington, Jesse; Brodie, Jedediah; Royle, Andy; Switalski, Adam; Clevenger, Anthony P.; Heim, Nicole; Rich, Lindsey N.

    2017-01-01

    Countries committed to implementing the Convention on Biological Diversity's 2011–2020 strategic plan need effective tools to monitor global trends in biodiversity. Remote cameras are a rapidly growing technology that has great potential to transform global monitoring for terrestrial biodiversity and can be an important contributor to the call for measuring Essential Biodiversity Variables. Recent advances in camera technology and methods enable researchers to estimate changes in abundance and distribution for entire communities of animals and to identify global drivers of biodiversity trends. We suggest that interconnected networks of remote cameras will soon monitor biodiversity at a global scale, help answer pressing ecological questions, and guide conservation policy. This global network will require greater collaboration among remote-camera studies and citizen scientists, including standardized metadata, shared protocols, and security measures to protect records about sensitive species. With modest investment in infrastructure, and continued innovation, synthesis, and collaboration, we envision a global network of remote cameras that not only provides real-time biodiversity data but also serves to connect people with nature.

  12. Teleoperated Marsupial Mobile Sensor Platform Pair for Telepresence Insertion Into Challenging Structures

    NASA Technical Reports Server (NTRS)

    Krasowski, Michael J.; Prokop, Norman F.; Greer, Lawrence C.

    2011-01-01

    A platform has been developed for two or more vehicles with one or more residing within the other (a marsupial pair). This configuration consists of a large, versatile robot that is carrying a smaller, more specialized autonomous operating robot(s) and/or mobile repeaters for extended transmission. The larger vehicle, which is equipped with a ramp and/or a robotic arm, is used to operate over a more challenging topography than the smaller one(s) that may have a more limited inspection area to traverse. The intended use of this concept is to facilitate the insertion of a small video camera and sensor platform into a difficult entry area. In a terrestrial application, this may be a bus or a subway car with narrow aisles or steep stairs. The first field-tested configuration is a tracked vehicle bearing a rigid ramp of fixed length and width. A smaller six-wheeled vehicle approximately 10 in. (25 cm) wide by 12 in. (30 cm) long resides at the end of the ramp within the larger vehicle. The ramp extends from the larger vehicle and is tipped up into the air. Using video feedback from a camera atop the larger robot, the operator at a remote location can steer the larger vehicle to the bus door. Once positioned at the door, the operator can switch video feedback to a camera at the end of the ramp to facilitate the mating of the end of the ramp to the top landing at the upper terminus of the steps. The ramp can be lowered by remote control until its end is in contact with the top landing. At the same time, the end of the ramp bearing the smaller vehicle is raised to minimize the angle of the slope the smaller vehicle has to climb, and further gives the operator a better view of the entry to the bus from the smaller vehicle. Control is passed over to the smaller vehicle and, using video feedback from its camera, it is driven up the ramp, turned obliquely into the bus, and then sent down the aisle for surveillance. The demonstrated vehicle was used to scale the steps leading to the interior of a bus whose landing is 44 in. (~1.1 m) from the road surface. This vehicle can position the end of its ramp to a surface over 50 in. (~1.3 m) above ground level and can drive over rail heights exceeding 6 in. (~15 cm). Thus configured, this vehicle can conceivably deliver the smaller robot to the end platform of New York City subway cars from between the rails. This innovation is scalable to other formulations for size, mobility, and surveillance functions. Conceivably, the larger vehicle can be configured to traverse unstable rubble and debris to transport a smaller search and rescue vehicle as close as possible to the scene of a disaster such as a collapsed building. The smaller vehicle, tethered or otherwise, and capable of penetrating and traversing within the confined spaces in the collapsed structure, can transport imaging and other sensors to look for victims or other targets.

  13. Innovative method for optimizing Side-Scan Sonar mapping: The blind band unveiled

    NASA Astrophysics Data System (ADS)

    Pergent, Gérard; Monnier, Briac; Clabaut, Philippe; Gascon, Gilles; Pergent-Martini, Christine; Valette-Sansevin, Audrey

    2017-07-01

    Over the past few years, the mapping of Mediterranean marine habitats has become a priority for scientists, environment managers and stakeholders, in particular in order to comply with European directives (Water Framework Directive and Marine Strategy Framework Directive) and to implement legislation to ensure their conservation. Side-scan sonar (SSS) is recognised as one of the most effective tools for underwater mapping. However, interpretation of acoustic data (sonograms) requires extensive field calibration, and the ground-truthing process remains essential. Several techniques are commonly used, with sampling methods involving grabs, scuba diving observations or Remotely Operated Vehicle (ROV) underwater video recordings. All these techniques are time consuming and expensive, and only provide sporadic information. In the present study, the possibility of coupling a camera with a SSS and acquiring underwater videos in a continuous way has been tested. During the 'PosidCorse' oceanographic survey carried out along the eastern coast of Corsica, optical and acoustic data were respectively obtained using a GoPro™ camera and a Klein 3000™ SSS. In total, five profiles were acquired between 10 and 50 m depth, corresponding to more than 20 km of data acquisition. The vertical images recorded with the camera, fixed under the SSS and positioned facing downwards, provided photo mosaics of very good quality covering the entire blind band of the sonograms. From the photo mosaics, 94% of the different bottom types and main habitats were identified; specific structures linked to hydrodynamic conditions and to anthropogenic and biological activities were also observed, as well as the substrate on which the Posidonia oceanica meadow grows. The association between acoustic data and underwater videos has proved to be a non-destructive and cost-effective method for ground-truthing in marine habitat mapping. Nevertheless, in order to optimize the results over the next surveys, certain limitations will need to be remedied.

  14. Design of an ROV-based lidar for seafloor monitoring

    NASA Astrophysics Data System (ADS)

    Harsdorf, Stefan; Janssen, Manfred; Reuter, Rainer; Wachowicz, Bernhard

    1997-05-01

    In recent years, accidents of ships with chemical cargo have led to strong impacts on the marine ecosystem, and to risks for pollution control and clean-up teams. In order to enable a fast, safe, and efficient reaction, a new optical instrument has been designed for the inspection of objects on the seafloor by range-gated scattered-light images, as well as for the detection of substances by measuring the laser-induced emission on the seafloor and within the water column. This new lidar is operated as a payload of a remotely operated vehicle (ROV). A Nd:YAG laser is employed as the light source of the lidar. In the video mode, the submarine lidar system uses the 2nd harmonic laser pulse to illuminate the seafloor. Elastically scattered and reflected light is collected with a gateable intensified CCD camera. The beam divergence of the laser is the same as the camera field of view. Synchronization of the laser emission and the camera gate time makes it possible to suppress backscattered light from the water column and to record only the light backscattered by the object. This results in a contrast-enhanced video image which increases the visibility range in turbid water up to four times. Substances seeping out from a container are often invisible in video images because of their low contrast. Therefore, a fluorescence lidar mode is integrated into the submarine lidar: the 3rd harmonic Nd:YAG laser pulse is applied, and the emission response of the water body between the ROV and the seafloor, and of the seafloor itself, is recorded at variable wavelengths with maximum depth resolution. Target selection is realized by a 2D scanner, which allows targets within the range-gated image to be selected for a measurement of fluorescence. The analysis of the time- and spectrally-resolved signals permits the detection, exact location, and classification of fluorescent and/or absorbing substances.
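
    The range gating described above is at heart a timing calculation: the gate must open only when photons returning from the target range can arrive, so backscatter from the intervening water column never reaches the intensified CCD. A small sketch of that arithmetic, assuming a seawater refractive index of about 1.34 (the paper does not give its actual gate parameters):

        # Gate timing for range-gated underwater imaging: open the camera gate
        # only for light returning from the chosen range slice. Values are
        # illustrative, not taken from the instrument described above.

        C_VACUUM = 299_792_458.0      # m/s, speed of light in vacuum
        N_WATER = 1.34                # approximate refractive index of seawater

        def gate_window(target_range_m, gate_depth_m):
            """Return (delay_s, width_s) for a gate covering
            [target_range, target_range + gate_depth]."""
            v = C_VACUUM / N_WATER                # light speed in water
            delay = 2.0 * target_range_m / v      # round trip to near edge
            width = 2.0 * gate_depth_m / v        # round trip across the slice
            return delay, width

        d, w = gate_window(5.0, 0.5)              # target 5 m away, 0.5 m slice
        print(f"gate delay {d*1e9:.1f} ns, width {w*1e9:.1f} ns")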

  15. Surgical video recording with a modified GoPro Hero 4 camera

    PubMed Central

    Lin, Lily Koo

    2016-01-01

    Background Surgical videography can provide analytical self-examination for the surgeon, teaching opportunities for trainees, and allow for surgical case presentations. This study examined if a modified GoPro Hero 4 camera with a 25 mm lens could prove to be a cost-effective method of surgical videography with enough detail for oculoplastic and strabismus surgery. Method The stock lens mount and lens were removed from a GoPro Hero 4 camera, and the camera was refitted with a Peau Productions SuperMount and 25 mm lens. The modified GoPro Hero 4 camera was then fixed to an overhead surgical light. Results Camera settings were set to 1080p video resolution. The 25 mm lens allowed for nine times the magnification of the GoPro stock lens. There was no noticeable video distortion. The entire cost was less than 600 USD. Conclusion The adapted GoPro Hero 4 with a 25 mm lens allows for high-definition, cost-effective, portable video capture of oculoplastic and strabismus surgery. The 25 mm lens allows for detailed videography that can enhance surgical teaching and self-examination. PMID:26834455

  16. Surgical video recording with a modified GoPro Hero 4 camera.

    PubMed

    Lin, Lily Koo

    2016-01-01

    Surgical videography can provide analytical self-examination for the surgeon, teaching opportunities for trainees, and allow for surgical case presentations. This study examined if a modified GoPro Hero 4 camera with a 25 mm lens could prove to be a cost-effective method of surgical videography with enough detail for oculoplastic and strabismus surgery. The stock lens mount and lens were removed from a GoPro Hero 4 camera, and the camera was refitted with a Peau Productions SuperMount and 25 mm lens. The modified GoPro Hero 4 camera was then fixed to an overhead surgical light. Camera settings were set to 1080p video resolution. The 25 mm lens allowed for nine times the magnification of the GoPro stock lens. There was no noticeable video distortion. The entire cost was less than 600 USD. The adapted GoPro Hero 4 with a 25 mm lens allows for high-definition, cost-effective, portable video capture of oculoplastic and strabismus surgery. The 25 mm lens allows for detailed videography that can enhance surgical teaching and self-examination.

  17. Performance of a scanning laser line striper in outdoor lighting

    NASA Astrophysics Data System (ADS)

    Mertz, Christoph

    2013-05-01

    For search and rescue robots and reconnaissance robots it is important to detect objects in their vicinity. We have developed a scanning laser line striper that can produce dense 3D images using active illumination. The scanner consists of a camera and a MEMS-micro mirror based projector. It can also detect the presence of optically difficult material like glass and metal. The sensor can be used for autonomous operation or it can help a human operator to better remotely control the robot. In this paper we will evaluate the performance of the scanner under outdoor illumination, i.e. from operating in the shade to operating in full sunlight. We report the range, resolution and accuracy of the sensor and its ability to reconstruct objects like grass, wooden blocks, wires, metal objects, electronic devices like cell phones, blank RPG, and other inert explosive devices. Furthermore we evaluate its ability to detect the presence of glass and polished metal objects. Lastly we report on a user study that shows a significant improvement in a grasping task. The user is tasked with grasping a wire with the remotely controlled hand of a robot. We compare the time it takes to complete the task using the 3D scanner with using a traditional video camera.

  18. Multiple-Agent Air/Ground Autonomous Exploration Systems

    NASA Technical Reports Server (NTRS)

    Fink, Wolfgang; Chao, Tien-Hsin; Tarbell, Mark; Dohm, James M.

    2007-01-01

    Autonomous systems of multiple-agent air/ground robotic units for exploration of the surfaces of remote planets are undergoing development. Modified versions of these systems could be used on Earth to perform tasks in environments dangerous or inaccessible to humans: examples of tasks could include scientific exploration of remote regions of Antarctica, removal of land mines, cleanup of hazardous chemicals, and military reconnaissance. A basic system according to this concept (see figure) would include a unit, suspended by a balloon or a blimp, that would be in radio communication with multiple robotic ground vehicles (rovers) equipped with video cameras and possibly other sensors for scientific exploration. The airborne unit would be free-floating, controlled by thrusters, or tethered either to one of the rovers or to a stationary object in or on the ground. Each rover would contain a semi-autonomous control system for maneuvering and would function under the supervision of a control system in the airborne unit. The rover maneuvering control system would utilize imagery from the onboard camera to navigate around obstacles. Avoidance of obstacles would also be aided by readout from an onboard (e.g., ultrasonic) sensor. Together, the rover and airborne control systems would constitute an overarching closed-loop control system to coordinate scientific exploration by the rovers.

  19. Hardware accelerator design for tracking in smart camera

    NASA Astrophysics Data System (ADS)

    Singh, Sanjay; Dunga, Srinivasa Murali; Saini, Ravi; Mandal, A. S.; Shekhar, Chandra; Vohra, Anil

    2011-10-01

    Smart cameras are important components in video analysis: they need to detect interesting moving objects, track them from frame to frame, and analyze the object tracks in real time. Real-time tracking is therefore a prominent requirement in smart cameras. A software implementation of a tracking algorithm on a general-purpose processor (such as a PowerPC) achieves a low frame rate, far from real-time requirements. This paper presents a SIMD-based hardware accelerator designed for real-time tracking of objects in a scene. The system was designed and simulated in VHDL and implemented on a Xilinx XUP Virtex-II Pro FPGA. The resulting frame rate is 30 frames per second for 250x200-pixel grayscale video.

  20. Near-field Oblique Remote Sensing of Stream Water-surface Elevation, Slope, and Surface Velocity

    NASA Astrophysics Data System (ADS)

    Minear, J. T.; Kinzel, P. J.; Nelson, J. M.; McDonald, R.; Wright, S. A.

    2014-12-01

    A major challenge for estimating discharges during flood events or in steep channels is the difficulty and hazard inherent in obtaining in-stream measurements. One possible solution is to use near-field remote sensing to obtain simultaneous water-surface elevations, slope, and surface velocities. In this test case, we utilized Terrestrial Laser Scanning (TLS) to remotely measure water-surface elevations and slope in combination with surface velocities estimated from particle image velocimetry (PIV) obtained by video-camera and/or infrared camera. We tested this method at several sites in New Mexico and Colorado using independent validation data consisting of in-channel measurements from survey-grade GPS and Acoustic Doppler Current Profiler (ADCP) instruments. Preliminary results indicate that for relatively turbid or steep streams, TLS collects tens of thousands of water-surface elevations and slopes in minutes, much faster than conventional means and at relatively high precision, at least as good as continuous survey-grade GPS measurements. Estimated surface velocities from this technique are within 15% of measured velocity magnitudes and within 10 degrees from the measured velocity direction (using extrapolation from the shallowest bin of the ADCP measurements). Accurately aligning the PIV results into Cartesian coordinates appears to be one of the main sources of error, primarily due to the sensitivity at these shallow oblique look angles and the low numbers of stationary objects for rectification. Combining remotely-sensed water-surface elevations, slope, and surface velocities produces simultaneous velocity measurements from a large number of locations in the channel and is more spatially extensive than traditional velocity measurements. These factors make this technique useful for improving estimates of flow measurements during flood flows and in steep channels while also decreasing the difficulty and hazard associated with making measurements in these conditions.
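
    The PIV step described above reduces to cross-correlating small interrogation windows between successive frames and converting the correlation peak into a surface displacement. A minimal FFT-based sketch in Python/NumPy, under the usual PIV assumptions (the subpixel peak fitting and window validation that production PIV codes add are omitted, and the sign convention should be verified on a known shift):

        import numpy as np

        def piv_displacement(win_a, win_b):
            """Integer-pixel displacement between two interrogation windows
            (2-D arrays cut from consecutive frames) via FFT-based circular
            cross-correlation -- the core of the PIV step described above."""
            a = win_a - win_a.mean()
            b = win_b - win_b.mean()
            # corr peaks where shifting win_a by (dy, dx) best matches win_b.
            corr = np.fft.ifft2(np.fft.fft2(b) * np.conj(np.fft.fft2(a))).real
            peak = np.unravel_index(np.argmax(corr), corr.shape)
            # Indices past half the window wrap around to negative shifts.
            dy = peak[0] - corr.shape[0] if peak[0] > corr.shape[0] // 2 else peak[0]
            dx = peak[1] - corr.shape[1] if peak[1] > corr.shape[1] // 2 else peak[1]
            # Pixels per frame interval; multiply by ground sample distance
            # and frame rate to obtain a surface velocity.
            return dx, dy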

  1. Video quality of 3G videophones for telephone cardiopulmonary resuscitation.

    PubMed

    Tränkler, Uwe; Hagen, Oddvar; Horsch, Alexander

    2008-01-01

    We simulated a cardiopulmonary resuscitation (CPR) scene with a manikin and used two 3G videophones on the caller's side to transmit video to a laptop PC. Five observers (two doctors with experience in emergency medicine and three paramedics) evaluated the video. They judged whether the manikin was breathing and whether they would give advice for CPR; they also graded the confidence of their decision-making. Breathing was only visible from certain orientations of the videophones, at distances below 150 cm with good illumination and a still background. Since the phones produced a degradation in colours and shadows, detection of breathing mainly depended on moving contours. Low camera positioning produced better results than having the camera high up. Darkness, shaking of the camera and a moving background made detection of breathing almost impossible. The video from the two 3G videophones that were tested was of sufficient quality for telephone CPR provided that camera orientation, distance, illumination and background were carefully chosen. Thus it seems possible to use 3G videophones for emergency calls involving CPR. However, further studies on the required video quality in different scenarios are necessary.

  2. Semantic Information Extraction of Lanes Based on Onboard Camera Videos

    NASA Astrophysics Data System (ADS)

    Tang, L.; Deng, T.; Ren, C.

    2018-04-01

    In the field of autonomous driving, semantic information about lanes is very important. This paper proposes a method for the automatic detection of lanes and the extraction of semantic information from onboard camera videos. The proposed method first detects the edges of lanes using the grayscale gradient direction and fits them with an improved probabilistic Hough transform; it then uses the vanishing-point principle to calculate the geometric position of each lane, and uses lane characteristics to extract lane semantic information via decision-tree classification. In the experiment, 216 road video images captured by a camera mounted on a moving vehicle were used to detect lanes and extract lane semantic information. The results show that the proposed method can accurately identify lane semantics from video images.
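
    The first two stages (gradient-based edge detection, then a probabilistic Hough fit) map directly onto standard OpenCV calls. A minimal sketch under that assumption; the thresholds and segment-length parameters are illustrative, and the paper's improvements to the Hough stage are not reproduced here:

        import cv2
        import numpy as np

        def detect_lane_segments(frame_bgr):
            """Sketch of the edge-detection and probabilistic-Hough stages
            described above; returns line segments as (x1, y1, x2, y2)."""
            gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
            edges = cv2.Canny(gray, 50, 150)      # gradient-magnitude edges
            segments = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                                       threshold=40, minLineLength=40,
                                       maxLineGap=20)
            return [] if segments is None else [s[0] for s in segments]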

  3. A teledentistry system for the second opinion.

    PubMed

    Gambino, Orazio; Lima, Fausto; Pirrone, Roberto; Ardizzone, Edoardo; Campisi, Giuseppina; di Fede, Olga

    2014-01-01

    In this paper we present a teledentistry system aimed at the second-opinion task. It makes use of a particular camera, called an intra-oral (or dental) camera, to perform photo shooting and real-time video of the inner part of the mouth. The pictures acquired by the Operator with such a device are sent to the Oral Medicine Expert (OME) by means of a standard File Transfer Protocol (FTP) service, and the real-time video is channeled into a video stream thanks to the VideoLAN client/server (VLC) application. The system is composed of HTML5 web pages generated by PHP and allows the second opinion to be performed both when the Operator and the OME are logged in and when one of them is offline.

  4. DETAIL VIEW OF VIDEO CAMERA, MAIN FLOOR LEVEL, PLATFORM ESOUTH, ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    DETAIL VIEW OF VIDEO CAMERA, MAIN FLOOR LEVEL, PLATFORM E-SOUTH, HB-3, FACING SOUTHWEST - Cape Canaveral Air Force Station, Launch Complex 39, Vehicle Assembly Building, VAB Road, East of Kennedy Parkway North, Cape Canaveral, Brevard County, FL

  5. Optical stereo video signal processor

    NASA Technical Reports Server (NTRS)

    Craig, G. D. (Inventor)

    1985-01-01

    An optical video signal processor is described which produces a two-dimensional cross-correlation in real time of images received by a stereo camera system. The optical image of each camera is projected on respective liquid crystal light valves. The images on the liquid crystal valves modulate light produced by an extended light source. This modulated light output becomes the two-dimensional cross-correlation when focused onto a video detector and is a function of the range of a target with respect to the stereo camera. Alternate embodiments utilize the two-dimensional cross-correlation to determine target movement and target identification.

  6. The Effects of Radiation on Imagery Sensors in Space

    NASA Technical Reports Server (NTRS)

    Mathis, Dylan

    2007-01-01

    Recent experience using high definition video on the International Space Station reveals camera pixel degradation due to particle radiation to be a much more significant problem with high definition cameras than with standard definition video. Although it may at first appear that increased pixel density on the imager is the logical explanation for this, the ISS implementations of high definition suggest a more complex causal and mediating factor mix. The degree of damage seems to vary from one type of camera to another, and this variation prompts a reconsideration of the possible factors in pixel loss, such as imager size, number of pixels, pixel aperture ratio, imager type (CCD or CMOS), method of error correction/concealment, and the method of compression used for recording or transmission. The problem of imager pixel loss due to particle radiation is not limited to out-of-atmosphere applications. Since particle radiation increases with altitude, it is not surprising to find anecdotal evidence that video cameras subject to many hours of airline travel show an increased incidence of pixel loss. This is even evident in some standard definition video applications, and pixel loss due to particle radiation only stands to become a more salient issue considering the continued diffusion of high definition video cameras in the marketplace.

  7. Estimating time available for sensor fusion exception handling

    NASA Astrophysics Data System (ADS)

    Murphy, Robin R.; Rogers, Erika

    1995-09-01

    In previous work, we have developed a generate, test, and debug methodology for detecting, classifying, and responding to sensing failures in autonomous and semi-autonomous mobile robots. An important issue has arisen from these efforts: how much time is there available to classify the cause of the failure and determine an alternative sensing strategy before the robot mission must be terminated? In this paper, we consider the impact of time for teleoperation applications where a remote robot attempts to autonomously maintain sensing in the presence of failures yet has the option to contact the local for further assistance. Time limits are determined by using evidential reasoning with a novel generalization of Dempster-Shafer theory. Generalized Dempster-Shafer theory is used to estimate the time remaining until the robot behavior must be suspended because of uncertainty; this becomes the time limit on autonomous exception handling at the remote. If the remote cannot complete exception handling in this time or needs assistance, responsibility is passed to the local, while the remote assumes a `safe' state. An intelligent assistant then facilitates human intervention, either directing the remote without human assistance or coordinating data collection and presentation to the operator within time limits imposed by the mission. The impact of time on exception handling activities is demonstrated using video camera sensor data.
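
    The time-limit estimation above rests on combining uncertain evidence, whose classical core is Dempster's rule of combination. A minimal sketch of the classical rule in Python (the paper uses a novel generalization of Dempster-Shafer theory that is not reproduced here; the masses and hypotheses below are invented for illustration):

        from itertools import product

        def dempster_combine(m1, m2):
            """Dempster's rule of combination for two mass functions whose
            focal elements are frozensets over a frame of discernment."""
            combined, conflict = {}, 0.0
            for (a, wa), (b, wb) in product(m1.items(), m2.items()):
                inter = a & b
                if inter:
                    combined[inter] = combined.get(inter, 0.0) + wa * wb
                else:
                    conflict += wa * wb   # mass assigned to the empty set
            if conflict >= 1.0:
                raise ValueError("total conflict: sources are incompatible")
            # Renormalize by the non-conflicting mass.
            return {k: v / (1.0 - conflict) for k, v in combined.items()}

        # Example: two sensors weigh in on whether a sensing failure occurred.
        FAIL, OK = frozenset({"fail"}), frozenset({"ok"})
        BOTH = FAIL | OK                  # mass left on ignorance
        m1 = {FAIL: 0.6, BOTH: 0.4}
        m2 = {FAIL: 0.3, OK: 0.4, BOTH: 0.3}
        print(dempster_combine(m1, m2))   # FAIL ~0.63, OK ~0.21, BOTH ~0.16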

  8. Remote video assessment for missile launch facilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wagner, G.G.; Stewart, W.A.

    1995-07-01

    The widely dispersed, unmanned launch facilities (LFs) for land-based ICBMs (intercontinental ballistic missiles) currently do not have visual assessment capability for existing intrusion alarms. The security response force currently must assess each alarm on-site. Remote assessment will enhance manpower, safety, and security efforts. Sandia National Laboratories was tasked by the USAF Electronic Systems Center to research, recommend, and demonstrate a cost-effective remote video assessment capability at missile LFs. The project's charter was to provide: system concepts; market survey analysis; technology search recommendations; and operational hardware demonstrations for remote video assessment from a missile LF to a remote security center via a cost-effective transmission medium and without using visible, on-site lighting. The technical challenges of this project were to: analyze various video transmission media, with emphasis on using the existing missile system copper line, which can be as long as 30 miles; accentuate an extremely low-cost system because of the many sites requiring system installation; integrate the video assessment system with the current LF alarm system; and provide video assessment at the remote sites with non-visible lighting.

  9. The Automatically Triggered Video or Imaging Station (ATVIS): An Inexpensive Way to Catch Geomorphic Events on Camera

    NASA Astrophysics Data System (ADS)

    Wickert, A. D.

    2010-12-01

    To understand how single events can affect landscape change, we must catch the landscape in the act. Direct observations are rare and often dangerous. While video is a good alternative, commercially-available video systems for field installation cost $11,000, weigh ~100 pounds (45 kg), and shoot 640x480 pixel video at 4 frames per second. This is the same resolution as a cheap point-and-shoot camera, with a frame rate that is nearly an order of magnitude worse. To overcome these limitations of resolution, cost, and portability, I designed and built a new observation station. This system, called ATVIS (Automatically Triggered Video or Imaging Station), costs $450-$500 and weighs about 15 pounds. It can take roughly 3 hours of 1280x720 pixel video, 6.5 hours of 640x480 video, or 98,000 1600x1200 pixel photos (one photo every 7 seconds for 8 days). The design calls for a simple Canon point-and-shoot camera fitted with custom firmware that allows 5V pulses through its USB cable to trigger it to take a picture or to initiate or stop video recording. These pulses are provided by a programmable microcontroller that can take input from either sensors or a data logger. The design is easily modifiable to a variety of camera and sensor types, and can also be used for continuous time-lapse imagery. We currently have prototypes set up at a gully near West Bijou Creek on the Colorado high plains and at tributaries to Marble Canyon in northern Arizona. Hopefully, a relatively inexpensive and portable system such as this will allow geomorphologists to supplement sensor networks with photo or video monitoring and allow them to see—and better quantify—the fantastic array of processes that modify landscapes as they unfold. Figure: camera station set up at Badger Canyon, Arizona. Inset: view into box; clockwise from bottom right: camera, microcontroller (blue), DC converter (red), solar charge controller, 12V battery. Materials and installation assistance courtesy of Ron Griffiths and the USGS Grand Canyon Monitoring and Research Center.
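
    The trigger interface described above is just a timed digital pulse on the camera's USB line. A hardware-free sketch of that logic in Python, with the pin driver and sensor input stubbed out (the function names and pulse timing here are illustrative, not taken from the ATVIS firmware):

        import time

        def set_trigger_level(high):
            """Stand-in for the microcontroller pin driving the camera's USB
            line; on the real station this switches 5 V. Here it just logs,
            so the timing logic can run anywhere."""
            print("USB trigger:", "5V" if high else "0V")

        def fire_camera(pulse_s=0.1):
            """One trigger pulse: the camera's custom firmware reads a 5V
            pulse on the USB cable as 'take a picture' (the pulse width here
            is an illustrative guess)."""
            set_trigger_level(True)
            time.sleep(pulse_s)
            set_trigger_level(False)

        def sensor_armed():
            """Stand-in for the sensor or data-logger input that decides
            when to shoot; always False in this sketch."""
            return False

        # Poll the sensor; the ~7 s spacing echoes the one-photo-every-7-
        # seconds time-lapse figure quoted above.
        while sensor_armed():
            fire_camera()
            time.sleep(7.0)

        fire_camera()   # single demonstration pulse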

  10. Automated Visual Event Detection, Tracking, and Data Management System for Cabled- Observatory Video

    NASA Astrophysics Data System (ADS)

    Edgington, D. R.; Cline, D. E.; Schlining, B.; Raymond, E.

    2008-12-01

    Ocean observatories and underwater video surveys have the potential to unlock important discoveries with new and existing camera systems. Yet the burden of video management and analysis often requires reducing the amount of video recorded through time-lapse video or similar methods. It's unknown how many digitized video data sets exist in the oceanographic community, but we suspect that many remain under analyzed due to lack of good tools or human resources to analyze the video. To help address this problem, the Automated Visual Event Detection (AVED) software and The Video Annotation and Reference System (VARS) have been under development at MBARI. For detecting interesting events in the video, the AVED software has been developed over the last 5 years. AVED is based on a neuromorphic-selective attention algorithm, modeled on the human vision system. Frames are decomposed into specific feature maps that are combined into a unique saliency map. This saliency map is then scanned to determine the most salient locations. The candidate salient locations are then segmented from the scene using algorithms suitable for the low, non-uniform light and marine snow typical of deep underwater video. For managing the AVED descriptions of the video, the VARS system provides an interface and database for describing, viewing, and cataloging the video. VARS was developed by the MBARI for annotating deep-sea video data and is currently being used to describe over 3000 dives by our remotely operated vehicles (ROV), making it well suited to this deepwater observatory application with only a few modifications. To meet the compute and data intensive job of video processing, a distributed heterogeneous network of computers is managed using the Condor workload management system. This system manages data storage, video transcoding, and AVED processing. Looking to the future, we see high-speed networks and Grid technology as an important element in addressing the problem of processing and accessing large video data sets.

  11. Infrared remote sensing of hazardous vapours: surveillance of public areas during the FIFA Football World Cup 2006

    NASA Astrophysics Data System (ADS)

    Harig, Roland; Matz, Gerhard; Rusch, Peter; Gerhard, Hans-Hennig; Gerhard, Jörn-Hinnrich; Schlabs, Volker

    2007-04-01

    The German ministry of the interior, represented by the civil defence agency BBK, established analytical task forces for the analysis of released chemicals in the case of fires, chemical accidents, terrorist attacks, or war. One of the first assignments of the task forces was the provision of analytical services during the football world cup 2006. One part of the equipment of these emergency response forces is a remote sensing system that allows identification and visualisation of hazardous clouds from long distances, the scanning infrared gas imaging system SIGIS 2. The system is based on an interferometer with a single detector element in combination with a telescope and a synchronised scanning mirror. The system allows 360° surveillance. The system is equipped with a video camera and the results of the analyses of the spectra are displayed by an overlay of a false colour image on the video image. This allows a simple evaluation of the position and the size of a cloud. The system was deployed for surveillance of stadiums and public viewing areas, where large crowds watched the games. Although no intentional or accidental releases of hazardous gases occurred in the stadiums and in the public viewing areas, the systems identified and located various foreign gases in the air.

  12. Feasibility of Using Video Cameras for Automated Enforcement on Red-Light Running and Managed Lanes.

    DOT National Transportation Integrated Search

    2009-12-01

    The overall objective of this study is to evaluate the feasibility, effectiveness, legality, and public acceptance aspects of automated enforcement on red light running and high occupancy vehicle (HOV) occupancy requirement using video cameras in Nev...

  13. Brownian Movement and Avogadro's Number: A Laboratory Experiment.

    ERIC Educational Resources Information Center

    Kruglak, Haym

    1988-01-01

    Reports an experimental procedure for studying Einstein's theory of Brownian movement using commercially available latex microspheres and a video camera. Describes how students can monitor sphere motions and determine Avogadro's number. Uses a black and white video camera, microscope, and TV. (ML)
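
    The analysis in such an experiment reduces to Einstein's relation between the spheres' mean squared displacement and the Boltzmann constant, from which Avogadro's number follows as N_A = R / k_B. A worked sketch in Python, assuming 2-D tracking at one-second frame intervals in room-temperature water (all numbers are illustrative, not from the article):

        import math

        # Einstein's relation: D = k_B * T / (6 * pi * eta * r); in two
        # dimensions the mean squared displacement obeys <x^2 + y^2> = 4*D*t.

        R_GAS = 8.314        # J/(mol*K), universal gas constant
        T = 293.0            # K, room temperature
        ETA = 1.0e-3         # Pa*s, viscosity of water
        RADIUS = 0.5e-6      # m, latex microsphere radius
        DT = 1.0             # s between analyzed video frames

        def avogadro_from_tracks(squared_displacements):
            """squared_displacements: (x^2 + y^2) values in m^2, each
            measured over one frame interval from the video record."""
            msd = sum(squared_displacements) / len(squared_displacements)
            D = msd / (4.0 * DT)                          # diffusion coeff.
            k_B = 6.0 * math.pi * ETA * RADIUS * D / T    # Boltzmann const.
            return R_GAS / k_B                            # Avogadro's number

        # Sanity check with ideal data generated from k_B = 1.38e-23:
        D_true = 1.38e-23 * T / (6 * math.pi * ETA * RADIUS)
        print(avogadro_from_tracks([4 * D_true * DT] * 100))   # ~6.0e23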

  14. 67. DETAIL OF VIDEO CAMERA CONTROL PANEL LOCATED IMMEDIATELY WEST ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    67. DETAIL OF VIDEO CAMERA CONTROL PANEL LOCATED IMMEDIATELY WEST OF ASSISTANT LAUNCH CONDUCTOR PANEL SHOWN IN CA-133-1-A-66 - Vandenberg Air Force Base, Space Launch Complex 3, Launch Operations Building, Napa & Alden Roads, Lompoc, Santa Barbara County, CA

  15. The Use of Video-Tacheometric Technology for Documenting and Analysing Geometric Features of Objects

    NASA Astrophysics Data System (ADS)

    Woźniak, Marek; Świerczyńska, Ewa; Jastrzębski, Sławomir

    2015-12-01

    This paper analyzes selected aspects of the use of video-tacheometric technology for inventorying and documenting geometric features of objects. Data was collected with the use of the video-tacheometer Topcon Image Station IS-3 and the professional camera Canon EOS 5D Mark II. During the field work and the development of data the following experiments have been performed: multiple determination of the camera interior orientation parameters and distortion parameters of five lenses with different focal lengths, reflectorless measurements of profiles for the elevation and inventory of decorative surface wall of the building of Warsaw Ballet School. During the research the process of acquiring and integrating video-tacheometric data was analysed as well as the process of combining "point cloud" acquired by using video-tacheometer in the scanning process with independent photographs taken by a digital camera. On the basis of tests performed, utility of the use of video-tacheometric technology in geodetic surveys of geometrical features of buildings has been established.

  16. Application of PLZT electro-optical shutter to diaphragm of visible and mid-infrared cameras

    NASA Astrophysics Data System (ADS)

    Fukuyama, Yoshiyuki; Nishioka, Shunji; Chonan, Takao; Sugii, Masakatsu; Shirahata, Hiromichi

    1997-04-01

    Pb0.9La0.09(Zr0.65,Ti0.35)0.9775O3 9/65/35) commonly used as an electro-optical shutter exhibits large phase retardation with low applied voltage. This shutter features as follows; (1) high shutter speed, (2) wide optical transmittance, and (3) high optical density in 'OFF'-state. If the shutter is applied to a diaphragm of video-camera, it could protect its sensor from intense lights. We have tested the basic characteristics of the PLZT electro-optical shutter and resolved power of imaging. The ratio of optical transmittance at 'ON' and 'OFF'-states was 1.1 X 103. The response time of the PLZT shutter from 'ON'-state to 'OFF'-state was 10 micro second. MTF reduction when putting the PLZT shutter in from of the visible video- camera lens has been observed only with 12 percent at a spatial frequency of 38 cycles/mm which are sensor resolution of the video-camera. Moreover, we took the visible image of the Si-CCD video-camera. The He-Ne laser ghost image was observed at 'ON'-state. On the contrary, the ghost image was totally shut out at 'OFF'-state. From these teste, it has been found that the PLZT shutter is useful for the diaphragm of the visible video-camera. The measured optical transmittance of PLZT wafer with no antireflection coating was 78 percent over the range from 2 to 6 microns.

  17. Imaging fall Chinook salmon redds in the Columbia River with a dual-frequency identification sonar

    USGS Publications Warehouse

    Tiffan, K.F.; Rondorf, D.W.; Skalicky, J.J.

    2004-01-01

    We tested the efficacy of a dual-frequency identification sonar (DIDSON) for imaging and enumeration of fall Chinook salmon Oncorhynchus tshawytscha redds in a spawning area below Bonneville Dam on the Columbia River. The DIDSON uses sound to form near-video-quality images and has the advantages of imaging in zero-visibility water and possessing a greater detection range and field of view than underwater video cameras. We suspected that the large size and distinct morphology of a fall Chinook salmon redd would facilitate acoustic imaging if the DIDSON was towed near the river bottom so as to cast an acoustic shadow from the tailspill over the redd pocket. We tested this idea by observing 22 different redds with an underwater video camera, spatially referencing their locations, and then navigating to them while imaging them with the DIDSON. All 22 redds were successfully imaged with the DIDSON. We subsequently conducted redd searches along transects to compare the number of redds imaged by the DIDSON with the number observed using an underwater video camera. We counted 117 redds with the DIDSON and 81 redds with the underwater video camera. Only one of the redds observed with the underwater video camera was not also documented by the DIDSON. In spite of the DIDSON's high cost, it may serve as a useful tool for enumerating fall Chinook salmon redds in conditions that are not conducive to underwater videography.

  18. User interface using a 3D model for video surveillance

    NASA Astrophysics Data System (ADS)

    Hata, Toshihiko; Boh, Satoru; Tsukada, Akihiro; Ozaki, Minoru

    1998-02-01

    These days, industrial surveillance and monitoring applications such as plant control or building security require fewer people, who must carry out their tasks quickly and precisely. Utilizing multimedia technology is a good approach to meeting this need, and we previously developed Media Controller, which is designed for these applications and provides real-time recording and retrieval of digital video data in a distributed environment. In this paper, we propose a user interface for such a distributed video surveillance system in which 3D models of buildings and facilities are connected to the surveillance video. A novel method of synchronizing camera field data with each frame of a video stream is considered. This method records and reads the camera field data similarly to the video data and transmits it synchronously with the video stream. This enables the user interface to provide such useful functions as comprehending the camera field immediately and providing clues when visibility is poor, for not only live video but also playback video. We have also implemented and evaluated the display function which makes the surveillance video and the 3D model work together, using Media Controller with Java and the Virtual Reality Modeling Language, for multi-purpose and intranet use of the 3D model.
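
    Synchronizing camera field data with frames amounts to recording one state entry per frame and being able to look that state up at any playback position. A minimal sketch of that bookkeeping, assuming per-frame records keyed by frame number (field names such as pan/tilt/zoom are illustrative; the paper does not specify its record format):

        from dataclasses import dataclass
        from bisect import bisect_right

        @dataclass
        class CameraField:
            """Per-frame camera state of the kind synchronized with the
            video stream above (field names are illustrative)."""
            frame_no: int
            pan_deg: float
            tilt_deg: float
            zoom: float

        class FieldTrack:
            """Records camera field data 'similarly to the video data':
            one entry per frame, retrievable for live display or playback.
            Assumes entries are recorded in increasing frame order."""
            def __init__(self):
                self._frames, self._fields = [], []

            def record(self, field):
                self._frames.append(field.frame_no)
                self._fields.append(field)

            def at(self, frame_no):
                # Latest state at or before the requested frame, so playback
                # can seek anywhere and still recover the camera field.
                i = bisect_right(self._frames, frame_no) - 1
                if i < 0:
                    raise KeyError("no camera field recorded yet")
                return self._fields[i]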

  19. Human silhouette matching based on moment invariants

    NASA Astrophysics Data System (ADS)

    Sun, Yong-Chao; Qiu, Xian-Jie; Xia, Shi-Hong; Wang, Zhao-Qi

    2005-07-01

    This paper applies silhouette matching based on moment invariants to infer human motion parameters from the video sequences of a single monocular uncalibrated camera. Currently, there are two ways of tracking human motion: marker-based and markerless. A hybrid framework is introduced in this paper to recover the input video contents. A standard 3D motion database is built up by marker techniques in advance. Given a video sequence, human silhouettes are extracted, as well as the viewpoint information of the camera, which is used to project the standard 3D motion database onto a 2D one. The video recovery problem is therefore formulated as a matching problem: finding the body pose in the standard 2D library most similar to the one in the video image. The framework is applied to trampoline sport, where we obtain complicated human motion parameters from single-camera video sequences, and extensive experiments demonstrate that this approach is feasible in the field of monocular video-based 3D motion reconstruction.
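
    The classical moment invariants for this kind of silhouette matching are Hu's seven invariants, which OpenCV exposes directly. A minimal sketch under the assumption that Hu moments are the invariants intended (the paper does not name its exact set); log scaling tames their wide dynamic range:

        import cv2
        import numpy as np

        def hu_signature(silhouette_mask):
            """Log-scaled Hu moment invariants of a binary silhouette:
            invariant to translation, scale, and rotation, which is what
            makes them usable for pose matching."""
            m = cv2.moments(silhouette_mask.astype(np.uint8), binaryImage=True)
            hu = cv2.HuMoments(m).flatten()
            return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)

        def best_match(query_mask, library_masks):
            """Index of the library silhouette whose Hu signature is closest
            (Euclidean distance) to the query -- a minimal stand-in for the
            similarity search over the projected 2D pose library above."""
            q = hu_signature(query_mask)
            dists = [np.linalg.norm(q - hu_signature(m)) for m in library_masks]
            return int(np.argmin(dists))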

  20. Portable Airborne Laser System Measures Forest-Canopy Height

    NASA Technical Reports Server (NTRS)

    Nelson, Ross

    2005-01-01

    The Portable Airborne Laser System (PALS) is a combination of laser ranging, video imaging, positioning, and data-processing subsystems designed for measuring the heights of forest canopies along linear transects from tens to thousands of kilometers long. Unlike prior laser ranging systems designed to serve the same purpose, the PALS is not restricted to use aboard a single aircraft of a specific type: the PALS fits into two large suitcases that can be carried to any convenient location, and the PALS can be installed in almost any local aircraft for hire, thereby making it possible to sample remote forests at relatively low cost. The initial cost and the cost of repairing the PALS are also lower because the PALS hardware consists mostly of commercial off-the-shelf (COTS) units that can easily be replaced in the field. The COTS units include a laser ranging transceiver, a charge-coupled-device camera that images the laser-illuminated targets, a differential Global Positioning System (dGPS) receiver capable of operation within the Wide Area Augmentation System, a video titler, a video cassette recorder (VCR), and a laptop computer equipped with two serial ports. The VCR and computer are powered by batteries; the other units are powered at 12 VDC from the 28-VDC aircraft power system via a low-pass filter and a voltage converter. The dGPS receiver feeds location and time data, at an update rate of 0.5 Hz, to the video titler and the computer. The laser ranging transceiver, operating at a sampling rate of 2 kHz, feeds its serial range and amplitude data stream to the computer. The analog video signal from the CCD camera is fed into the video titler wherein the signal is annotated with position and time information. The titler then forwards the annotated signal to the VCR for recording on 8-mm tapes. The dGPS and laser range and amplitude serial data streams are processed by software that displays the laser trace and the dGPS information as they are fed into the computer, subsamples the laser range and amplitude data, interleaves the subsampled data with the dGPS information, and records the resulting interleaved data stream.

  1. Solid State Television Camera (CID)

    NASA Technical Reports Server (NTRS)

    Steele, D. W.; Green, W. T.

    1976-01-01

    The design, development, and testing of a charge injection device (CID) camera using a 244x248 element array are described. A number of video signal processing functions are included which maximize the output video dynamic range while retaining the inherently good resolution response of the CID. Some of the unique features of the camera are: low-light-level performance, high S/N ratio, antiblooming, low geometric distortion, sequential scanning, and AGC.

  2. In-camera video-stream processing for bandwidth reduction in web inspection

    NASA Astrophysics Data System (ADS)

    Jullien, Graham A.; Li, QiuPing; Hajimowlana, S. Hossain; Morvay, J.; Conflitti, D.; Roberts, James W.; Doody, Brian C.

    1996-02-01

    Automated machine vision systems are now widely used for industrial inspection tasks, where video-stream data are taken in by the camera and sent to the inspection system for further processing. In this paper we describe a prototype system for on-line programming of arbitrary real-time video data-stream bandwidth-reduction algorithms; the output of the camera contains only the information that has to be further processed by a host computer. The processing system is built into a DALSA CCD camera and uses a microcontroller interface to download bit-stream data to a XILINX™ FPGA. The FPGA is directly connected to the video data stream and outputs data to a low-bandwidth output bus. The camera communicates with a host computer via an RS-232 link to the microcontroller. Static memory is used both to implement a FIFO interface for buffering defect burst data and for off-line examination of defect detection data. In addition to providing arbitrary FPGA architectures, the internal program of the microcontroller can also be changed via the host computer and a ROM monitor. This paper describes the prototype system board, mounted inside a DALSA camera, and discusses some of the algorithms currently being implemented for web inspection applications.
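
    The bandwidth-reduction idea is that the in-camera logic forwards only pixels that deviate from the expected web background, rather than the full video stream. A software model of that filter in Python/NumPy (the threshold and background values are illustrative; the real implementation runs in the FPGA on the live pixel stream):

        import numpy as np

        def defect_events(scanline, threshold=40, background=128):
            """For one line-scan row from the web, emit only (position,
            value) pairs that deviate from the expected background, so the
            host sees defect candidates instead of full video."""
            deviations = np.abs(scanline.astype(np.int16) - background)
            positions = np.nonzero(deviations > threshold)[0]
            return [(int(p), int(scanline[p])) for p in positions]

        # A 2048-pixel line with two defects: only those pixels cross the link.
        line = np.full(2048, 128, dtype=np.uint8)
        line[512], line[1500] = 20, 230
        print(defect_events(line))   # [(512, 20), (1500, 230)]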

  3. Feasibility of Using Video Camera for Automated Enforcement on Red-Light Running and Managed Lanes.

    DOT National Transportation Integrated Search

    2009-12-25

    The overall objective of this study is to evaluate the feasibility, effectiveness, legality, and public acceptance aspects of automated enforcement on red light running and HOV occupancy requirement using video cameras in Nevada. This objective was a...

  4. Preplanning and Evaluating Video Documentaries and Features.

    ERIC Educational Resources Information Center

    Maynard, Riley

    1997-01-01

    This article presents a ten-part pre-production outline and post-production evaluation that helps communications students more effectively improve video skills. Examines camera movement and motion, camera angle and perspective, lighting, audio, graphics, backgrounds and color, special effects, editing, transitions, and music. Provides a glossary…

  5. On the development of new SPMN diurnal video systems for daylight fireball monitoring

    NASA Astrophysics Data System (ADS)

    Madiedo, J. M.; Trigo-Rodríguez, J. M.; Castro-Tirado, A. J.

    2008-09-01

    Daylight fireball video monitoring

    High-sensitivity video devices are commonly used to study the activity of meteor streams during the night. These provide useful data for the determination of, for instance, radiant, orbital and photometric parameters ([1] to [7]). With this aim, during 2006 three automated video stations supported by Universidad de Huelva were set up in Andalusia within the framework of the SPanish Meteor Network (SPMN). These are equipped with 8-9 high-sensitivity wide-field video cameras that achieve a meteor limiting magnitude of about +3. These stations have increased the coverage provided by the low-scan all-sky CCD systems operated by the SPMN and, in addition, achieve a time accuracy of about 0.01 s for timing the appearance of meteor and fireball events. Despite these nocturnal monitoring efforts, we recognised the need to set up stations for daylight fireball detection. This effort was also motivated by two recent meteorite-dropping events: Villalbeto de la Peña [8,9] and Puerto Lápice [10]. Although the Villalbeto de la Peña event was casually videotaped and photographed, no direct pictures or videos were obtained of the Puerto Lápice event. Consequently, in order to record daylight fireball events continuously, we set up new automated systems based on CCD video cameras. The development of these video stations, however, raises several issues with respect to nocturnal systems that must be properly solved to achieve optimal operation.

    The first of these video stations, also supported by the University of Huelva, was set up in Sevilla (Andalusia) in May 2007. Of course, fireball association is unequivocal only when two or more stations record the same fireball, so that the geocentric radiant can be accurately determined. With this aim, a second diurnal video station is being set up in Andalusia at the facilities of the Centro Internacional de Estudios y Convenciones Ecológicas y Medioambientales (CIECEM, University of Huelva), near Doñana Natural Park (Huelva province). In this way both stations, separated by a distance of 75 km, will work as a double video station system providing trajectory and orbit information for major bolides and, thus, increasing the chance of meteorite recovery in the Iberian Peninsula.

    The new diurnal SPMN video stations employ different models of Mintron cameras (Mintron Enterprise Co., LTD). These are high-sensitivity devices based on a colour 1/2" Sony interline-transfer CCD image sensor. Aspherical lenses are attached to the video cameras in order to maximize image quality. The use of fast lenses, however, is not a priority here: while most of our nocturnal cameras use f0.8 or f1.0 lenses in order to detect meteors as faint as magnitude +3, the diurnal systems employ in most cases f1.4 to f2.0 lenses. Their focal lengths range from 3.8 to 12 mm to cover different atmospheric volumes. The cameras are arranged in such a way that the whole sky is monitored from every observing station.

    Figure 1. A daylight event recorded from Sevilla on May 26, 2008 at 4h30m05.4 ±0.1s UT.

    Our diurnal video cameras operate much like our nocturnal systems [1]. Diurnal stations are automatically switched on and off at sunrise and sunset, respectively. The images, taken at 25 fps with a resolution of 720x576 pixels, are continuously sent to PC computers through a video capture device. The computers run software (UFOCapture, by SonotaCo, Japan) that automatically registers meteor trails and stores the corresponding video frames on hard disk. In addition, before the signal from the cameras reaches the computers, a video time inserter employing a GPS device (KIWI-OSD, by PFD Systems) stamps time information on every video frame. This allows us to measure time precisely (to about 0.01 s) along the whole fireball path.

    One issue with respect to nocturnal observing stations is the high number of false detections, a consequence of several factors: greater activity of birds and insects, sunlight reflecting off planes and helicopters, etc. Some of these false events follow a pattern very similar to fireball trails, which makes a second station absolutely necessary to discriminate between them. Another key issue is the passage of the Sun across the field of view of some of the cameras. Special care is necessary here to avoid damage to the CCD sensor. Moreover, depending on atmospheric conditions (dust or moisture, for instance), the Sun may saturate most of the video frame. To solve this, our automated system determines which camera is pointing towards the Sun at a given moment and disconnects it. As the cameras are fitted with autoiris lenses, disconnecting a camera fully closes the optics and thus protects the CCD sensor. This, of course, means that while a camera is disconnected the atmospheric volume it covers is not monitored. It must also be taken into account that operating temperatures are generally higher for diurnal cameras. This results in higher thermal noise and so poses some difficulties for the detection software. To minimize this effect, CCD video cameras with an adequate signal-to-noise ratio must be employed. Cooling the CCD sensor, for instance with a Peltier system, can also be considered.

    The astrometric reduction procedure is also somewhat different for daytime events: reference objects must be located within the field of view of every camera in order to calibrate the corresponding images. This is done by letting every camera image distant buildings; by means of this calibration, the equatorial coordinates of the fireball along its path can be obtained by measuring its X and Y positions on every video frame. The calibration itself can be performed from star positions measured on nocturnal images taken with the same cameras. Once made, and provided the cameras are not moved, it allows the equatorial coordinates of any future fireball event to be estimated. We do not use any software for automatic astrometry of the images: this crucial step is performed via direct measurement of pixel positions, as in all our previous work. From these astrometric measurements, our software then estimates the atmospheric trajectory and radiant of each fireball ([10] to [13]).

    During 2007 and 2008 the SPMN has also set up other diurnal stations based on 1/3" progressive-scan CMOS sensors attached to modified wide-field lenses covering a 120x80 degree FOV. They are located in Andalusia: El Arenosillo (Huelva), La Mayora (Málaga) and Murtas (Granada). These cameras also offer night sensitivity thanks to an infrared-cut filter (ICR) that enables them to perform well in colour under both high- and low-light conditions and to provide IR-sensitive black-and-white video at night.

    Conclusions. The first detections of daylight fireballs by CCD video camera are being achieved within the SPMN framework. Future expansion and the setup of new observing stations are currently being planned. The establishment of additional diurnal SPMN stations will increase the number of daytime fireballs detected and, with it, our chances of meteorite recovery.
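    As an illustration of the Sun-avoidance logic described in this record, here is a minimal Python sketch (not the SPMN station software): the station coordinates, camera pointings, half-FOVs and safety margin are invented for illustration, and astropy supplies the solar ephemeris.

    ```python
    # Hypothetical sketch: list the cameras whose field of view currently
    # contains the Sun, so a station controller could disconnect them
    # (closing their autoiris). Pointings below are illustrative only.
    import numpy as np
    import astropy.units as u
    from astropy.coordinates import AltAz, EarthLocation, get_sun
    from astropy.time import Time

    STATION = EarthLocation(lat=37.39 * u.deg, lon=-5.99 * u.deg, height=10 * u.m)  # Sevilla, approx.

    # Assumed (azimuth, altitude, half-FOV) per camera, in degrees.
    CAMERAS = {"cam1": (0.0, 45.0, 35.0), "cam2": (90.0, 45.0, 35.0),
               "cam3": (180.0, 45.0, 35.0), "cam4": (270.0, 45.0, 35.0)}

    def separation_deg(az1, alt1, az2, alt2):
        """Great-circle separation (deg) between two (az, alt) directions."""
        az1, alt1, az2, alt2 = map(np.radians, (az1, alt1, az2, alt2))
        c = (np.sin(alt1) * np.sin(alt2)
             + np.cos(alt1) * np.cos(alt2) * np.cos(az1 - az2))
        return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))

    def cameras_to_disable(when, margin=5.0):
        """Cameras whose FOV (plus a safety margin) contains the Sun."""
        sun = get_sun(when).transform_to(AltAz(obstime=when, location=STATION))
        return [name for name, (az, alt, half_fov) in CAMERAS.items()
                if separation_deg(az, alt, sun.az.deg, sun.alt.deg) < half_fov + margin]

    print(cameras_to_disable(Time("2008-05-26T12:00:00")))
    ```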

  6. Normalized Metadata Generation for Human Retrieval Using Multiple Video Surveillance Cameras.

    PubMed

    Jung, Jaehoon; Yoon, Inhye; Lee, Seungwon; Paik, Joonki

    2016-06-24

    Since it is impossible for surveillance personnel to keep monitoring videos from a multiple camera-based surveillance system, an efficient technique is needed to help recognize important situations by retrieving the metadata of an object-of-interest. In a multiple camera-based surveillance system, an object detected in a camera has a different shape in another camera, which is a critical issue of wide-range, real-time surveillance systems. In order to address the problem, this paper presents an object retrieval method by extracting the normalized metadata of an object-of-interest from multiple, heterogeneous cameras. The proposed metadata generation algorithm consists of three steps: (i) generation of a three-dimensional (3D) human model; (ii) human object-based automatic scene calibration; and (iii) metadata generation. More specifically, an appropriately-generated 3D human model provides the foot-to-head direction information that is used as the input of the automatic calibration of each camera. The normalized object information is used to retrieve an object-of-interest in a wide-range, multiple-camera surveillance system in the form of metadata. Experimental results show that the 3D human model matches the ground truth, and automatic calibration-based normalization of metadata enables a successful retrieval and tracking of a human object in the multiple-camera video surveillance system.
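    The normalized, camera-independent metadata idea in this record can be illustrated with a small hypothetical sketch; the record fields (world-coordinate height, dominant color, and so on) are invented here and are not the authors' schema.

    ```python
    # Hypothetical sketch of normalized object metadata: once each camera
    # has been calibrated, per-object records can be expressed in world
    # units and queried across heterogeneous cameras. Fields are illustrative.
    from dataclasses import dataclass

    @dataclass
    class ObjectMetadata:
        camera_id: str
        track_id: int
        timestamp: float      # seconds since epoch
        height_m: float       # world-coordinate height after calibration
        dominant_color: str   # e.g. quantized clothing color

    def retrieve(records, min_h, max_h, color):
        """Retrieve candidate matches for an object-of-interest across cameras."""
        return [r for r in records
                if min_h <= r.height_m <= max_h and r.dominant_color == color]

    records = [
        ObjectMetadata("cam_A", 1, 100.0, 1.78, "red"),
        ObjectMetadata("cam_B", 7, 104.2, 1.80, "red"),   # same person, second camera
        ObjectMetadata("cam_B", 8, 104.5, 1.55, "blue"),
    ]
    print(retrieve(records, 1.7, 1.9, "red"))  # -> the two "red" records
    ```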

  7. Normalized Metadata Generation for Human Retrieval Using Multiple Video Surveillance Cameras

    PubMed Central

    Jung, Jaehoon; Yoon, Inhye; Lee, Seungwon; Paik, Joonki

    2016-01-01

    Since it is impossible for surveillance personnel to keep monitoring videos from a multiple camera-based surveillance system, an efficient technique is needed to help recognize important situations by retrieving the metadata of an object-of-interest. In a multiple camera-based surveillance system, an object detected in a camera has a different shape in another camera, which is a critical issue of wide-range, real-time surveillance systems. In order to address the problem, this paper presents an object retrieval method by extracting the normalized metadata of an object-of-interest from multiple, heterogeneous cameras. The proposed metadata generation algorithm consists of three steps: (i) generation of a three-dimensional (3D) human model; (ii) human object-based automatic scene calibration; and (iii) metadata generation. More specifically, an appropriately-generated 3D human model provides the foot-to-head direction information that is used as the input of the automatic calibration of each camera. The normalized object information is used to retrieve an object-of-interest in a wide-range, multiple-camera surveillance system in the form of metadata. Experimental results show that the 3D human model matches the ground truth, and automatic calibration-based normalization of metadata enables a successful retrieval and tracking of a human object in the multiple-camera video surveillance system. PMID:27347961

  8. An integrated port camera and display system for laparoscopy.

    PubMed

    Terry, Benjamin S; Ruppert, Austin D; Steinhaus, Kristen R; Schoen, Jonathan A; Rentschler, Mark E

    2010-05-01

    In this paper, we built and tested the port camera, a novel, inexpensive, portable, and battery-powered laparoscopic tool that integrates the components of a vision system with a cannula port. This new device 1) minimizes the invasiveness of laparoscopic surgery by combining a camera port and tool port; 2) reduces the cost of laparoscopic vision systems by integrating an inexpensive CMOS sensor and LED light source; and 3) enhances laparoscopic surgical procedures by mechanically coupling the camera, tool port, and liquid crystal display (LCD) screen to provide an on-patient visual display. The port camera video system was compared to two laparoscopic video systems: a standard resolution unit from Karl Storz (model 22220130) and a high definition unit from Stryker (model 1188HD). Brightness, contrast, hue, colorfulness, and sharpness were compared. The port camera video is superior to the Storz scope and approximately equivalent to the Stryker scope. An ex vivo study was conducted to measure the operative performance of the port camera. The results suggest that simulated tissue identification and biopsy acquisition with the port camera is as efficient as with a traditional laparoscopic system. The port camera was successfully used by a laparoscopic surgeon for exploratory surgery and liver biopsy during a porcine surgery, demonstrating initial surgical feasibility.

  9. Utilizing ISS Camera Systems for Scientific Analysis of Lightning Characteristics and Comparison with ISS-LIS and GLM

    NASA Technical Reports Server (NTRS)

    Schultz, Christopher J.; Lang, Timothy J.; Leake, Skye; Runco, Mario, Jr.; Blakeslee, Richard J.

    2017-01-01

    Video and still frame images from cameras aboard the International Space Station (ISS) are used to inspire, educate, and provide a unique vantage point from low-Earth orbit that is second to none; however, these cameras have overlooked capabilities for contributing to scientific analysis of the Earth and near-space environment. The goal of this project is to study how geo-referenced video/images from available ISS camera systems can be useful for scientific analysis, using lightning properties as a demonstration.

  10. Real-time high-level video understanding using data warehouse

    NASA Astrophysics Data System (ADS)

    Lienard, Bruno; Desurmont, Xavier; Barrie, Bertrand; Delaigle, Jean-Francois

    2006-02-01

    High-level video content analysis, such as video surveillance, is often limited by the computational demands of automatic image understanding: it requires huge computing resources for reasoning processes like categorization, and huge amounts of data to represent knowledge of objects, scenarios and other models. This article explains how to design and develop a "near real-time adaptive image datamart", used first as a decision-support system for vision algorithms and then as a mass storage system. Using the RDF specification as the storage format for vision-algorithm metadata, we can optimise data warehouse concepts for video analysis, add processes able to adapt the current model, and pre-process data to speed up queries. In this way, when new data are sent from a sensor to the data warehouse for long-term storage, using remote procedure calls embedded in object-oriented interfaces to simplify queries, they are processed and the in-memory data model is updated. After some processing, possible interpretations of the data can be returned to the sensor. To demonstrate this new approach, we present typical scenarios applied to this architecture, such as people tracking and event detection in a multi-camera network. Finally, we show how this system becomes a high-semantic data container for external data mining.
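    The record above proposes storing vision-algorithm metadata as RDF. A minimal sketch of that idea with the rdflib package follows; the namespace and property names are invented for illustration, since the article does not specify a vocabulary.

    ```python
    # Hypothetical sketch: store detections from a vision algorithm as RDF
    # triples, then query them with SPARQL (e.g. for an event-detection
    # scenario). The vocabulary below is illustrative only.
    from rdflib import Graph, Literal, Namespace, URIRef
    from rdflib.namespace import XSD

    VS = Namespace("http://example.org/video-surveillance#")
    g = Graph()

    detection = URIRef("http://example.org/events/track-42")
    g.add((detection, VS.detectedBy, Literal("camera-3")))
    g.add((detection, VS.objectClass, Literal("person")))
    g.add((detection, VS.timestamp, Literal("2006-02-01T12:00:00", datatype=XSD.dateTime)))

    # All detections of people, with the camera that produced each one.
    q = """
    PREFIX vs: <http://example.org/video-surveillance#>
    SELECT ?event ?cam WHERE {
        ?event vs:objectClass "person" ;
               vs:detectedBy ?cam .
    }
    """
    for row in g.query(q):
        print(row.event, row.cam)
    ```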

  11. Broadcast-quality-stereoscopic video in a time-critical entertainment and corporate environment

    NASA Astrophysics Data System (ADS)

    Gay, Jean-Philippe

    1995-03-01

    'reality present: Peter Gabriel and Cirque du Soleil' is a 12-minute original work directed and produced by Doug Brown, Jean-Philippe Gay and A. Coogan, which showcases creative content applications of commercial stereoscopic video equipment. For production, a complete equipment package including a Steadicam mount was used in support of the Ikegami LK-33 camera. Remote production units were fielded in the time-critical, on-stage and off-stage environments of two major live concerts: Peter Gabriel's Secret World performance at the San Diego Sports Arena, and Cirque du Soleil's Saltimbanco performance in Chicago. Twin 60 Hz video channels were captured on Beta SP for maximum post-production flexibility. Digital post-production and field-sequential mastering were effected in D-2 format at studio facilities in Los Angeles. The program was world-premiered to a large public at the World of Music, Arts and Dance festivals in Los Angeles and San Francisco in late 1993. It was presented to the artists in Los Angeles, Montreal and Washington D.C. Additional presentations have been made using a broad range of commercial and experimental stereoscopic video equipment, including projection systems, LCD and passive eyewear, and digital signal processors. Technical packages for live presentation have been fielded on site and off, through to the present.

  12. New Airborne Sensors and Platforms for Solving Specific Tasks in Remote Sensing

    NASA Astrophysics Data System (ADS)

    Kemper, G.

    2012-07-01

    A huge number of small and medium-sized sensors have entered the market. Today's medium-format sensors reach 80 MPix and can handle projects of medium size, comparable with the first large-format digital cameras of about six years ago. New high-quality lenses and new developments in integration have prepared the market for photogrammetric work. Companies such as Phase One or Hasselblad, and producers or integrators such as Trimble, Optec and others, have adapted these cameras for professional image production. Combined with small camera stabilizers, they can also be used in small aircraft, making the equipment compact and easily transportable, e.g. for rapid-assessment purposes. Combining different camera sensors enables multi- or hyperspectral installations, useful e.g. for agricultural or environmental projects. Arrays of oblique-viewing cameras are on the market as well; in many cases these are small- and medium-format sensors combined as rotating or shifting devices, or simply as a fixed setup. Beyond proper camera installation and integration, the software that controls the hardware and guides the pilot must solve many more tasks than a normal FMS did in the past. Small and relatively cheap laser scanners (e.g. Riegl) are on the market, and properly combining them with MS cameras under integrated planning and navigation is a challenge that has been solved by different software packages. Turnkey solutions are available, e.g. for monitoring power-line corridors, where taking images is just part of the job. Integration of thermal camera systems with laser scanning and video capture must be combined with specific information about the objects, stored in a database and linked when approaching the navigation point.

  13. Video cameras on wild birds.

    PubMed

    Rutz, Christian; Bluff, Lucas A; Weir, Alex A S; Kacelnik, Alex

    2007-11-02

    New Caledonian crows (Corvus moneduloides) are renowned for using tools for extractive foraging, but the ecological context of this unusual behavior is largely unknown. We developed miniaturized, animal-borne video cameras to record the undisturbed behavior and foraging ecology of wild, free-ranging crows. Our video recordings enabled an estimate of the species' natural foraging efficiency and revealed that tool use, and choice of tool materials, are more diverse than previously thought. Video tracking has potential for studying the behavior and ecology of many other bird species that are shy or live in inaccessible habitats.

  14. Use and validation of mirrorless digital single light reflex camera for recording of vitreoretinal surgeries in high definition

    PubMed Central

    Khanduja, Sumeet; Sampangi, Raju; Hemlatha, B C; Singh, Satvir; Lall, Ashish

    2018-01-01

    Purpose: The purpose of this study is to describe the use of a commercial digital single light reflex (DSLR) camera for vitreoretinal surgery recording and compare it to a standard 3-chip charge-coupled device (CCD) camera. Methods: Simultaneous recording was done using a Sony A7s2 camera and a Sony high-definition 3-chip camera attached to each side of the microscope. The videos recorded from both camera systems were edited and sequences of similar time frames were selected. The three sequences selected for evaluation were (a) anterior segment surgery, (b) surgery under a direct viewing system, and (c) surgery under an indirect wide-angle viewing system. The videos of each sequence were evaluated and rated on a scale of 0-10 for color, contrast, and overall quality. Results: Most results were rated either 8/10 or 9/10 for both cameras. A noninferiority analysis comparing mean scores of the DSLR camera versus the CCD camera was performed and P values were obtained. The mean scores of the two cameras were comparable on all parameters assessed in the different videos except for color and contrast in the posterior pole view and color in the wide-angle view, which were rated significantly higher (better) for the DSLR camera. Conclusion: Commercial DSLRs are an affordable low-cost alternative for vitreoretinal surgery recording and may be used for documentation and teaching. PMID:29283133

  15. Use and validation of mirrorless digital single light reflex camera for recording of vitreoretinal surgeries in high definition.

    PubMed

    Khanduja, Sumeet; Sampangi, Raju; Hemlatha, B C; Singh, Satvir; Lall, Ashish

    2018-01-01

    The purpose of this study is to describe the use of a commercial digital single light reflex (DSLR) camera for vitreoretinal surgery recording and compare it to a standard 3-chip charge-coupled device (CCD) camera. Simultaneous recording was done using a Sony A7s2 camera and a Sony high-definition 3-chip camera attached to each side of the microscope. The videos recorded from both camera systems were edited and sequences of similar time frames were selected. The three sequences selected for evaluation were (a) anterior segment surgery, (b) surgery under a direct viewing system, and (c) surgery under an indirect wide-angle viewing system. The videos of each sequence were evaluated and rated on a scale of 0-10 for color, contrast, and overall quality. Most results were rated either 8/10 or 9/10 for both cameras. A noninferiority analysis comparing mean scores of the DSLR camera versus the CCD camera was performed and P values were obtained. The mean scores of the two cameras were comparable on all parameters assessed in the different videos except for color and contrast in the posterior pole view and color in the wide-angle view, which were rated significantly higher (better) for the DSLR camera. Commercial DSLRs are an affordable low-cost alternative for vitreoretinal surgery recording and may be used for documentation and teaching.

  16. Collection and Analysis of Crowd Data with Aerial, Rooftop, and Ground Views

    DTIC Science & Technology

    2014-11-10

    collected these datasets using different aircraft. Erista 8 HL OctaCopter is a heavy-lift aerial platform capable of using high-resolution cinema ...is another high-resolution camera that is cinema grade and high quality, with the capability of capturing videos with 4K resolution at 30 frames per...292.58 Imaging Systems and Accessories Blackmagic Production Camera 4 Crowd Counting using 4K Cameras High resolution cinema grade digital video

  17. Flat-panel detector, CCD cameras, and electron-beam-tube-based video for use in portal imaging

    NASA Astrophysics Data System (ADS)

    Roehrig, Hans; Tang, Chuankun; Cheng, Chee-Way; Dallas, William J.

    1998-07-01

    This paper provides a comparison of some imaging parameters of four portal imaging systems at 6 MV: a flat-panel detector, two CCD cameras, and an electron-beam-tube-based video camera. Measurements were made of signal and noise, and consequently of signal-to-noise per pixel, as a function of exposure. All systems have a linear response with respect to exposure and, with the exception of the electron-beam-tube-based video camera, the noise is proportional to the square root of the exposure, indicating photon-noise limitation. The flat-panel detector has a signal-to-noise ratio higher than that observed with either CCD camera or with the electron-beam-tube-based video camera. This is expected because most portal imaging systems using optical coupling with a lens exhibit severe quantum sinks. The measurements of signal and noise were complemented by images of a Las Vegas-type aluminum contrast-detail phantom located at the isocenter. These images were generated at an exposure of 1 MU. The flat-panel detector permits detection of aluminum holes of 1.2 mm diameter and 1.6 mm depth, indicating the best signal-to-noise ratio. The CCD cameras rank second and third in signal-to-noise ratio, permitting detection of aluminum holes of 1.2 mm diameter and 2.2 mm depth (CCD_1) and of 1.2 mm diameter and 3.2 mm depth (CCD_2), respectively, while the electron-beam-tube-based video camera permits detection only of a hole of 1.2 mm diameter and 4.6 mm depth. Rank-order filtering was applied to the raw images from the CCD-based systems in order to remove direct hits. These are camera responses to scattered x-ray photons that interact directly with the CCD of the camera and generate 'salt and pepper' noise, which interferes severely with attempts to obtain accurate estimates of the image noise. The paper also presents data on the metal phosphor's photon gain (the number of light photons per interacting x-ray photon).
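    The rank-order filtering step described in this record can be illustrated with a short sketch. The paper does not specify the exact filter; a 3x3 median filter (a common rank-order filter) applied to a synthetic image with simulated direct hits stands in for it here.

    ```python
    # Sketch: remove "direct hit" salt-and-pepper noise from a synthetic
    # portal image with a rank-order (here: median) filter. Illustrative
    # stand-in for the unspecified filter used in the paper.
    import numpy as np
    from scipy.ndimage import median_filter

    rng = np.random.default_rng(0)
    image = rng.normal(loc=1000.0, scale=30.0, size=(512, 512))  # synthetic image

    # Simulate sparse direct hits: isolated pixels driven to extreme values.
    hits = rng.random(image.shape) < 1e-3
    image[hits] = 4095.0

    cleaned = median_filter(image, size=3)  # 3x3 neighborhood removes isolated spikes

    # Direct hits inflate the raw standard deviation; filtering restores it.
    print(f"std before: {image.std():.1f}, after: {cleaned.std():.1f}")
    ```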

  18. Miniature Robotic Spacecraft for Inspecting Other Spacecraft

    NASA Technical Reports Server (NTRS)

    Fredrickson, Steven; Abbott, Larry; Duran, Steve; Goode, Robert; Howard, Nathan; Jochim, David; Rickman, Steve; Straube, Tim; Studak, Bill; Wagenknecht, Jennifer; hide

    2004-01-01

    A report discusses the Miniature Autonomous Extravehicular Robotic Camera (Mini AERCam) -- a compact robotic spacecraft intended to be released from a larger spacecraft for exterior visual inspection of the larger spacecraft. The Mini AERCam is a successor to the AERCam Sprint -- a prior miniature robotic inspection spacecraft that was demonstrated in a space-shuttle flight experiment in 1997. The prototype of the Mini AERCam is a demonstration unit having approximately the form and function of a flight system. The Mini AERCam is approximately spherical with a diameter of about 7.5 in. (19 cm) and a weight of about 10 lb (4.5 kg), yet it has significant additional capabilities, relative to the 14-in. (36-cm), 35-lb (16-kg) AERCam Sprint. The Mini AERCam includes miniaturized avionics, instrumentation, communications, navigation, imaging, power, and propulsion subsystems, including two digital video cameras and a high-resolution still camera. The Mini AERCam is designed for either remote piloting or supervised autonomous operations, including station keeping and point-to-point maneuvering. The prototype has been tested on an air-bearing table and in a hardware-in-the-loop orbital simulation of the dynamics of maneuvering in proximity to the International Space Station.

  19. Very High-Speed Digital Video Capability for In-Flight Use

    NASA Technical Reports Server (NTRS)

    Corda, Stephen; Tseng, Ting; Reaves, Matthew; Mauldin, Kendall; Whiteman, Donald

    2006-01-01

    A digital video camera system has been qualified for use in flight on the NASA supersonic F-15B Research Testbed aircraft. This system is capable of very-high-speed color digital imaging at flight speeds up to Mach 2. The components of this system have been ruggedized and shock-mounted in the aircraft to survive the severe pressure, temperature, and vibration of the flight environment. The system includes two synchronized camera subsystems installed in fuselage-mounted camera pods (see Figure 1). Each camera subsystem comprises a camera controller/recorder unit and a camera head. The two camera subsystems are synchronized by use of an M-Hub(TradeMark) synchronization unit. Each camera subsystem is capable of recording at a rate up to 10,000 pictures per second (pps). A state-of-the-art complementary metal oxide/semiconductor (CMOS) sensor in the camera head has a maximum resolution of 1,280 x 1,024 pixels at 1,000 pps. Exposure times of the electronic shutter of the camera range from 1/200,000 of a second to full open. The recorded images are captured in a dynamic random-access memory (DRAM) and can be downloaded directly to a personal computer or saved on a compact flash memory card. In addition to the high-rate recording of images, the system can display images in real time at 30 pps. Inter-Range Instrumentation Group (IRIG) time code can be inserted into the individual camera controllers or into the M-Hub unit. The video data can also be used to obtain quantitative, three-dimensional trajectory information. The first use of this system was in support of the Space Shuttle Return to Flight effort. Data were needed to help in understanding how thermally insulating foam is shed from a space shuttle external fuel tank during launch. The cameras captured images of simulated external tank debris ejected from a fixture mounted under the centerline of the F-15B aircraft. Digital video was obtained at subsonic and supersonic flight conditions, including speeds up to Mach 2 and altitudes up to 50,000 ft (15.24 km). The digital video was used to determine the structural survivability of the debris in a real flight environment and to quantify the aerodynamic trajectories of the debris.

  20. Researching on the process of remote sensing video imagery

    NASA Astrophysics Data System (ADS)

    Wang, He-rao; Zheng, Xin-qi; Sun, Yi-bo; Jia, Zong-ren; Wang, He-zhan

    Low-altitude remotely sensed imagery from unmanned air vehicles has the advantages of high resolution, easy acquisition, real-time access, etc. It has been widely used in mapping, target identification, and other fields in recent years. However, because of these operating conditions, the video images are unstable, targets move fast, and the shooting background is complex, making such video difficult to process. In other fields, especially computer vision, research on video images is far more extensive, and it is very helpful for processing low-altitude remotely sensed imagery. On this basis, this paper analyzes and summarizes a large body of video image processing work from different fields, covering research purposes, data sources, and the pros and cons of each technique. The paper then identifies the methods best suited to low-altitude remote sensing video processing.

  1. Teaching surgical skills using video internet communication in a resource-limited setting.

    PubMed

    Autry, Amy M; Knight, Sharon; Lester, Felicia; Dubowitz, Gerald; Byamugisha, Josaphat; Nsubuga, Yosam; Muyingo, Mark; Korn, Abner

    2013-07-01

    To study the feasibility and acceptability of using video Internet communication to teach and evaluate surgical skills in a low-resource setting. This case-controlled study used video Internet communication for surgical skills teaching and evaluation. We randomized intern physicians rotating in the Obstetrics and Gynecology Department at Mulago Hospital at Makerere University in Kampala, Uganda, to the control arm (usual practice) or intervention arm (three video teaching sessions with University of California, San Francisco faculty). We made preintervention and postintervention videos of all interns tying knots using a small video camera and uploaded the files to a file-hosting service that offers cloud storage. A blinded faculty member graded all of the videos. Both groups completed a survey at the end of the study. We randomized 18 interns, with complete data for eight in the intervention group and seven in the control group. We found score improvement of 50% or more in six of eight (75%) interns in the intervention group compared with one of seven (14%) in the control group (P=.04). Scores declined in five of the seven (71%) controls but in none of the intervention group. Both intervention and control groups used attendings, colleagues, and the Internet as sources for learning about knot-tying. The control group was less likely to practice knot-tying than the intervention group. The trainees and the instructors felt this method of training was enjoyable and helpful. Remote teaching in low-resource settings, where faculty time is limited and access to visiting faculty is sporadic, is feasible, effective, and well accepted by both learner and teacher. Level of evidence: II.

  2. The reliability and accuracy of estimating heart-rates from RGB video recorded on a consumer grade camera

    NASA Astrophysics Data System (ADS)

    Eaton, Adam; Vincely, Vinoin; Lloyd, Paige; Hugenberg, Kurt; Vishwanath, Karthik

    2017-03-01

    Video photoplethysmography (VPPG) is a numerical technique that processes standard RGB video of exposed human skin and extracts the heart rate (HR) from the skin areas. Being a non-contact technique, VPPG has the potential to provide estimates of a subject's heart rate, respiratory rate, and even heart-rate variability, with potential applications ranging from infant monitors to remote healthcare and psychological experiments, particularly given the non-contact and sensor-free nature of the technique. Though several previous studies have reported successful correlations between HR obtained using VPPG algorithms and HR measured using the gold-standard electrocardiograph, others have reported that these correlations depend on controlling for the duration of the video data analyzed, subject motion, and ambient lighting. Here, we investigate the ability of two commonly used VPPG algorithms to extract human heart rates under three different laboratory conditions. We compare the VPPG HR values extracted across these three sets of experiments to the gold-standard values acquired using an electrocardiogram or a commercially available pulse oximeter. The two VPPG algorithms were applied with and without KLT facial-feature tracking and detection algorithms from the MATLAB® Computer Vision toolbox. Results indicate that VPPG-based numerical approaches can provide robust estimates of subject HR values and are relatively insensitive to the devices used to record the video data. However, they are highly sensitive to the conditions of video acquisition, including subject motion, the location, size, and averaging techniques applied to regions-of-interest, as well as the number of video frames used for data processing.
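    The generic VPPG pipeline behind this record can be sketched in a few lines: average the green channel over a skin region-of-interest per frame, detrend, and take the dominant FFT peak in the physiological band. This is illustrative only; it is not the two specific algorithms (or the KLT tracking) evaluated in the paper.

    ```python
    # Minimal VPPG sketch: green-channel mean per frame -> detrend -> FFT
    # peak in the ~42-240 bpm band. Not the paper's specific algorithms.
    import numpy as np
    from scipy.signal import detrend

    def vppg_heart_rate(frames, fps):
        """frames: (n_frames, h, w, 3) RGB video of a skin ROI. Returns HR in bpm."""
        signal = frames[..., 1].mean(axis=(1, 2))      # mean green value per frame
        signal = detrend(signal)                       # remove slow illumination drift
        spectrum = np.abs(np.fft.rfft(signal))
        freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
        band = (freqs >= 0.7) & (freqs <= 4.0)         # plausible heart-rate band
        return float(freqs[band][np.argmax(spectrum[band])] * 60.0)

    # Synthetic check: a 1.2 Hz (72 bpm) pulse superimposed on noise.
    fps, n = 30.0, 600
    t = np.arange(n) / fps
    pulse = 1.0 + 0.02 * np.sin(2 * np.pi * 1.2 * t)
    frames = np.random.default_rng(1).normal(100, 1, (n, 8, 8, 3)) * pulse[:, None, None, None]
    print(vppg_heart_rate(frames, fps))  # ~72 bpm
    ```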

  3. Remotely accessible laboratory for MEMS testing

    NASA Astrophysics Data System (ADS)

    Sivakumar, Ganapathy; Mulsow, Matthew; Melinger, Aaron; Lacouture, Shelby; Dallas, Tim E.

    2010-02-01

    We report on the construction of a remotely accessible and interactive laboratory for testing microdevices (aka MicroElectroMechanical Systems - MEMS). Enabling expanded utilization of microdevices for research, commercial, and educational purposes is very important for driving the creation of future MEMS devices and applications. Unfortunately, the relatively high costs associated with MEMS devices and testing infrastructure make widespread access to the world of MEMS difficult. The creation of a virtual lab to control and actuate MEMS devices over the internet helps spread knowledge to a larger audience. A host laboratory has been established that contains a digital microscope, microdevices, controllers, and computers that can be logged into through the internet. The overall layout of the tele-operated MEMS laboratory system can be divided into two major parts: the server side and the client side. The server side resides at Texas Tech University and hosts a server machine that runs the Linux operating system and interfaces the MEMS lab with the outside world via the internet. Controls from the clients are transferred to the lab side through the server interface. The server interacts with the electronics required to drive the MEMS devices using a range of National Instruments hardware and LabView Virtual Instruments. An optical microscope (100x) with a CCD video camera captures images of the operating MEMS. The server broadcasts the live video stream over the internet to the clients through the website. When a button is pressed on the website, the MEMS device responds and the video stream shows the movement in close to real time.
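    The client-to-server control path described in this record can be sketched with a small web endpoint. This is a hypothetical illustration using Flask in place of the actual LabView/National Instruments stack; the route and the actuate() stub are invented.

    ```python
    # Hypothetical sketch of the remote-control path: a web endpoint that
    # receives a client command and forwards it to the drive electronics.
    # In the real lab this role is played by LabView Virtual Instruments
    # and National Instruments hardware; actuate() below is only a stub.
    from flask import Flask, jsonify, request

    app = Flask(__name__)

    def actuate(device, voltage):
        """Stub for driving a MEMS device; prints instead of touching hardware."""
        print(f"driving {device} at {voltage:.2f} V")

    @app.route("/mems/<device>/drive", methods=["POST"])
    def drive(device):
        voltage = float(request.json.get("voltage", 0.0))
        actuate(device, voltage)
        return jsonify(status="ok", device=device, voltage=voltage)

    if __name__ == "__main__":
        app.run(host="0.0.0.0", port=8080)
    ```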

  4. Remote environmental sensor array system

    NASA Astrophysics Data System (ADS)

    Hall, Geoffrey G.

    This thesis examines the creation of an environmental monitoring system for inhospitable environments, named the Remote Environmental Sensor Array System, or RESA System for short. The thesis covers the development of RESA from its inception through the design and modeling of the hardware and software required to make it functional. Finally, the actual manufacture and laboratory testing of the finished RESA product is discussed and documented. The RESA System is designed as a cost-effective way to bring sensors and video systems to the underwater environment. It contains a water-quality probe with sensors for dissolved oxygen, pH, temperature, specific conductivity, oxidation-reduction potential, and chlorophyll a. In addition, an omni-directional hydrophone is included to detect underwater acoustic signals. It has a colour high-definition camera and a low-light black-and-white camera system, which in turn are coupled to a laser scaling system. Both high-intensity discharge and halogen lighting systems are included to illuminate the video images. The video and laser scaling systems are manoeuvred using pan-and-tilt units controlled from an underwater computer box. Finally, a sediment profile imager is included to enable profile images of sediment layers to be acquired. A control and manipulation system to operate the instruments and move the data across networks is integrated into the underwater system, while a power distribution node provides the correct voltages to power the instruments. Laboratory testing was completed to ensure that the different instruments of the RESA performed as designed. This included physical testing of the motorized instruments, calibration of the instruments, benchmark performance testing, and system failure exercises.

  5. Evaluation of smart video for transit event detection : final report.

    DOT National Transportation Integrated Search

    2009-06-01

    Transit agencies are increasingly using video cameras to fight crime and terrorism. As the volume of video data increases, the existing digital video surveillance systems provide the infrastructure only to capture, store and distribute video, while l...

  6. Digital Video Cameras for Brainstorming and Outlining: The Process and Potential

    ERIC Educational Resources Information Center

    Unger, John A.; Scullion, Vicki A.

    2013-01-01

    This "Voices from the Field" paper presents methods and participant-exemplar data for integrating digital video cameras into the writing process across postsecondary literacy contexts. The methods and participant data are part of an ongoing action-based research project systematically designed to bring research and theory into practice…

  7. Studying medical communication with video vignettes: a randomized study on how variations in video-vignette introduction format and camera focus influence analogue patients' engagement.

    PubMed

    Visser, Leonie N C; Bol, Nadine; Hillen, Marij A; Verdam, Mathilde G E; de Haes, Hanneke C J M; van Weert, Julia C M; Smets, Ellen M A

    2018-01-19

    Video vignettes are used to test the effects of physicians' communication on patient outcomes. Methodological choices in video-vignette development may have far-reaching consequences for participants' engagement with the video, and thus for the ecological validity of this design. To supplement the scant evidence in this field, this study tested how variations in video-vignette introduction format and camera focus influence participants' engagement with a video vignette showing a bad-news consultation. Introduction format (A = audiovisual vs. B = written) and camera focus (1 = the physician only, 2 = the physician and the patient at neutral moments alternately, 3 = the physician and the patient at emotional moments alternately) were varied in a randomized 2 x 3 between-subjects design. One hundred eighty-one students were randomly assigned to watch one of the six resulting video-vignette conditions as so-called analogue patients, i.e., they were instructed to imagine themselves in the video patient's situation. Four dimensions of self-reported engagement were assessed retrospectively. Emotional engagement was additionally measured by recording participants' electrodermal and cardiovascular activity continuously while watching. Analyses of variance were used to test the effects of introduction format, camera focus, and their interaction. The audiovisual introduction induced a stronger blood pressure response while watching the introduction (p = 0.048, ηp² = 0.05) and the consultation part of the vignette (p = 0.051, ηp² = 0.05), compared to the written introduction. With respect to camera focus, the variant focusing on the patient at emotional moments evoked a higher level of electrodermal activity (p = 0.003, ηp² = 0.06) compared to the other two variants. Furthermore, an interaction effect was shown for self-reported emotional engagement (p = 0.045, ηp² = 0.04): the physician-only variant resulted in lower emotional engagement if the vignette was preceded by the audiovisual introduction. No effects were shown on the other dimensions of self-reported engagement. Our findings imply that an audiovisual introduction combined with a camera focus alternating to the patient at emotional moments yields the highest levels of emotional engagement in analogue patients. This evidence can inform methodological decisions during the development of video vignettes, and thereby enhance the ecological validity of future video-vignette studies.
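    The 2 x 3 between-subjects analysis described in this record can be sketched with statsmodels. This is a generic illustration with invented data and column names, not the authors' dataset or analysis code.

    ```python
    # Sketch of a 2x3 between-subjects ANOVA (introduction format x camera
    # focus) on a single engagement outcome. Data are randomly generated
    # for illustration only.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf
    from statsmodels.stats.anova import anova_lm

    rng = np.random.default_rng(0)
    df = pd.DataFrame({
        "intro": rng.choice(["audiovisual", "written"], size=180),
        "focus": rng.choice(["physician", "neutral", "emotional"], size=180),
    })
    df["engagement"] = rng.normal(5.0, 1.0, size=180)  # placeholder outcome

    model = smf.ols("engagement ~ C(intro) * C(focus)", data=df).fit()
    print(anova_lm(model, typ=2))  # main effects and the intro x focus interaction
    ```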

  8. Collaborative web-based annotation of video footage of deep-sea life, ecosystems and geological processes

    NASA Astrophysics Data System (ADS)

    Kottmann, R.; Ratmeyer, V.; Pop Ristov, A.; Boetius, A.

    2012-04-01

    More and more seagoing scientific expeditions use video-controlled research platforms such as Remotely Operated Vehicles (ROV), Autonomous Underwater Vehicles (AUV), and towed camera systems. These produce many hours of video material containing detailed and scientifically highly valuable footage of the biological, chemical, geological, and physical aspects of the oceans. Many of the videos contain unique observations of unknown life-forms which are rare and cannot be sampled and studied otherwise. To make such video material accessible online and to create a collaborative annotation environment, the "Video Annotation and processing platform" (V-App) was developed. A first, solely web-based installation for ROV videos is set up at the German Center for Marine Environmental Sciences (available at http://videolib.marum.de). It allows users to search and watch videos with a standard web browser based on the HTML5 standard. Moreover, V-App implements social web technologies allowing a distributed, world-wide scientific community to collaboratively annotate videos anywhere at any time. Several features are fully implemented, among which are:
    • A user login system for fine-grained permission and access control
    • Video watching
    • Video search using keywords, geographic position, depth and time range, and any combination thereof
    • Video annotation organised in themes (tracks) such as biology and geology, in standard or full-screen mode
    • Annotation keyword management: administrative users can add, delete, and update single keywords for annotation, or upload sets of keywords from Excel sheets
    • Download of products for scientific use
    This unique web application system helps make costly ROV videos available online (estimated costs range between 5,000 and 10,000 Euros per hour, depending on the combination of ship and ROV). Moreover, with this system each expert annotation adds instantly available and valuable knowledge to otherwise uncharted material.

  9. Image system for three dimensional, 360 DEGREE, time sequence surface mapping of moving objects

    DOEpatents

    Lu, Shin-Yee

    1998-01-01

    A three-dimensional motion camera system comprises a light projector placed between two synchronous video cameras, all focused on an object-of-interest. The light projector shines a sharp pattern of vertical lines (Ronchi ruling) on the object-of-interest that appear to be bent differently to each camera by virtue of the surface shape of the object-of-interest and the relative geometry of the cameras, light projector, and object-of-interest. Each video frame is captured in a computer memory and analyzed. Since the relative geometry is known and the system pre-calibrated, the unknown three-dimensional shape of the object-of-interest can be solved for by matching the intersections of the projected light lines with orthogonal epipolar lines corresponding to horizontal rows in the video camera frames. A surface reconstruction is made and displayed on a monitor screen. For 360° all-around coverage of the object-of-interest, two additional sets of light projectors and corresponding cameras are distributed about 120° apart from one another.

  10. Image system for three dimensional, 360{degree}, time sequence surface mapping of moving objects

    DOEpatents

    Lu, S.Y.

    1998-12-22

    A three-dimensional motion camera system comprises a light projector placed between two synchronous video cameras, all focused on an object-of-interest. The light projector shines a sharp pattern of vertical lines (Ronchi ruling) on the object-of-interest that appear to be bent differently to each camera by virtue of the surface shape of the object-of-interest and the relative geometry of the cameras, light projector, and object-of-interest. Each video frame is captured in a computer memory and analyzed. Since the relative geometry is known and the system pre-calibrated, the unknown three-dimensional shape of the object-of-interest can be solved for by matching the intersections of the projected light lines with orthogonal epipolar lines corresponding to horizontal rows in the video camera frames. A surface reconstruction is made and displayed on a monitor screen. For 360° all-around coverage of the object-of-interest, two additional sets of light projectors and corresponding cameras are distributed about 120° apart from one another. 20 figs.
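    Both patent records above rest on the same two-camera triangulation step: a feature matched in two pre-calibrated views determines a 3D surface point. Here is a minimal numeric sketch of linear (DLT) triangulation under assumed 3x4 projection matrices; it illustrates the geometry, not the patented matching procedure.

    ```python
    # Sketch: linear triangulation of one matched feature seen at pixel
    # coordinates xy1 and xy2 in two calibrated cameras P1 and P2 (3x4
    # projection matrices assumed known from pre-calibration).
    import numpy as np

    def triangulate(P1, P2, xy1, xy2):
        """Solve for the 3D point whose projections are xy1 and xy2."""
        x1, y1 = xy1
        x2, y2 = xy2
        A = np.array([x1 * P1[2] - P1[0],
                      y1 * P1[2] - P1[1],
                      x2 * P2[2] - P2[0],
                      y2 * P2[2] - P2[1]])
        _, _, vt = np.linalg.svd(A)
        X = vt[-1]
        return X[:3] / X[3]          # homogeneous -> Euclidean 3D point

    # Toy check: two ideal cameras 120 mm apart along x, both looking down z.
    P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = np.hstack([np.eye(3), np.array([[-120.0], [0.0], [0.0]])])
    X_true = np.array([50.0, 20.0, 400.0])
    xy1 = X_true[:2] / X_true[2]                       # normalized projection, cam 1
    xy2 = (X_true - [120.0, 0.0, 0.0])[:2] / X_true[2] # normalized projection, cam 2
    print(triangulate(P1, P2, xy1, xy2))               # ~ [50, 20, 400]
    ```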

  11. Three-dimensional visualization and display technologies; Proceedings of the Meeting, Los Angeles, CA, Jan. 18-20, 1989

    NASA Technical Reports Server (NTRS)

    Robbins, Woodrow E. (Editor); Fisher, Scott S. (Editor)

    1989-01-01

    Special attention was given to problems of stereoscopic display devices, such as CAD for enhancement of the design process in visual arts, stereo-TV improvement of remote manipulator performance, a voice-controlled stereographic video camera system, and head-mounted displays and their low-cost design alternatives. Also discussed were a novel approach to chromostereoscopic microscopy, computer-generated barrier-strip autostereography and lenticular stereograms, and parallax-barrier three-dimensional TV. Additional topics include processing and user-interface issues and visualization applications, including automated analysis and fluid flow topology, optical tomographic measurements of mixing fluids, visualization of complex data, visualization environments, and visualization management systems.

  12. A High-Speed Spectroscopy System for Observing Lightning and Transient Luminous Events

    NASA Astrophysics Data System (ADS)

    Boggs, L.; Liu, N.; Austin, M.; Aguirre, F.; Tilles, J.; Nag, A.; Lazarus, S. M.; Rassoul, H.

    2017-12-01

    Here we present a high-speed spectroscopy system that can be used to record atmospheric electrical discharges, including lightning and transient luminous events. The system consists of a Phantom V1210 high-speed camera, a Volume Phase Holographic (VPH) grism, an optional optical slit, and lenses. The spectrograph can record videos at speeds of 200,000 frames per second and has an effective wavelength band of 550-775 nm for the first-order spectra. When the slit is used, the system has a spectral resolution of about 0.25 nm per pixel. We have constructed a durable enclosure made of heavy-duty aluminum to house the high-speed spectrograph. It has two fans for continuous air flow and a removable tray to mount the spectrograph components. In addition, a Watec video camera (30 frames per second) is attached to the top of the enclosure to provide a scene view. A heavy-duty Pelco pan/tilt motor is used to position the enclosure and can be controlled remotely through a Raspberry Pi computer. An observation campaign was conducted during the summer and fall of 2017 at the Florida Institute of Technology. Several close cloud-to-ground discharges were recorded at 57,000 frames per second. The spectrum of a downward stepped negative leader and a positive cloud-to-ground return stroke will be reported on.

  13. Space Age Archaeology

    NASA Technical Reports Server (NTRS)

    1988-01-01

    In 1985, the Egyptian Antiquities Organization (EAO) asked Dr. Farouk El-Baz whether it would be possible to examine and sample the second subterranean chamber carved in the bedrock near the Great Pyramid of Khufu in Giza, Egypt, without admitting people, air, or contaminants. He felt it could be done by applying space technology to the task. The initial contact led to a two-year project in which he organized and headed a team, co-sponsored by the EAO and the National Geographic Society (NGS), to apply space technology in an effort to examine and photograph the Giza chamber. The NGS photographic division modified and tested a remotely controlled video system and a 35-millimeter camera, and developed a lighting system that would not elevate the chamber temperature. Still needed were a drill to cut through the limestone cap without using lubricants or cooling fluids that might contaminate the chamber, and an airlock that would admit the drill shaft and photo equipment but not the air. Bob Moores of Black & Decker Corporation tailored a new drill to the Giza exploration. After the drill bit broke through into the chamber at a depth of 63 inches, a stainless steel tube was lowered through the airlock to take samples of the chamber air at several levels. The video camera sent images from the chamber revealing a disassembled royal boat inside.

  14. First results of the multi-purpose real-time processing video camera system on the Wendelstein 7-X stellarator and implications for future devices

    NASA Astrophysics Data System (ADS)

    Zoletnik, S.; Biedermann, C.; Cseh, G.; Kocsis, G.; König, R.; Szabolics, T.; Szepesi, T.; Wendelstein 7-X Team

    2018-01-01

    A special video camera has been developed for the 10-camera overview video system of the Wendelstein 7-X (W7-X) stellarator, considering the multiple application needs and limitations arising from this complex long-pulse superconducting stellarator experiment. The event detection intelligent camera (EDICAM) uses a special 1.3 Mpixel CMOS sensor with non-destructive read capability, which enables fast monitoring of smaller Regions of Interest (ROIs) even during long exposures. The camera can perform simple data-evaluation algorithms (minimum/maximum, mean comparison to levels) on the ROI data, which can dynamically change the readout process and generate output signals. Multiple EDICAM cameras were operated in the first campaign of W7-X and their capabilities were explored in the real environment. The data prove that the camera can take long-exposure (10-100 ms) overview images of the plasma while sub-ms monitoring, and even multi-camera correlated edge-plasma turbulence measurements of smaller areas, are performed in parallel. The latter revealed that filamentary turbulence structures extend between neighboring modules of the stellarator. Considerations emerging for future upgrades of this system, and for similar setups on future long-pulse fusion experiments such as ITER, are discussed.
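    The simple ROI evaluation EDICAM performs (minimum/maximum, mean comparison to levels) is easy to illustrate. The sketch below is hypothetical host-side logic, not EDICAM firmware; the ROI, trigger levels, and event names are invented.

    ```python
    # Sketch of level-comparison ROI monitoring: compare the mean of a
    # small Region of Interest against preset levels and raise an event
    # that could, e.g., retarget a fast small-ROI readout. Illustrative only.
    import numpy as np

    def roi_event(frame, roi, low, high):
        """roi = (row0, row1, col0, col1); returns an event name or None."""
        r0, r1, c0, c1 = roi
        m = frame[r0:r1, c0:c1].mean()
        if m > high:
            return "BRIGHT_EVENT"   # e.g. switch to fast small-ROI readout
        if m < low:
            return "DARK_EVENT"
        return None

    frame = np.random.default_rng(2).integers(0, 200, (1024, 1280)).astype(float)
    frame[100:120, 200:220] += 3000.0          # simulated bright structure
    print(roi_event(frame, (100, 120, 200, 220), low=20.0, high=500.0))  # BRIGHT_EVENT
    ```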

  15. An airborne multispectral imaging system based on two consumer-grade cameras for agricultural remote sensing

    USDA-ARS?s Scientific Manuscript database

    This paper describes the design and evaluation of an airborne multispectral imaging system based on two identical consumer-grade cameras for agricultural remote sensing. The cameras are equipped with a full-frame complementary metal oxide semiconductor (CMOS) sensor with 5616 × 3744 pixels. One came...

  16. SARA South Observatory: A Fully Automated Boller & Chivens 0.6-m Telescope at C.T.I.O.

    NASA Astrophysics Data System (ADS)

    Mack, Peter; KanniahPadmanaban, S. Y.; Kaitchuck, R.; Borstad, A.; Luzier, N.

    2010-05-01

    The SARA South Observatory is the rebirth of the Lowell 24-inch telescope located on the southeast ridge of Cerro Tololo, Chile. Installed in 1968, this Boller & Chivens telescope fell into disuse for almost 20 years. The telescope and observatory have undergone a major restoration. A new dome with a wide slit has been fully automated with an ACE SmartDome controller featuring autonomous closure. The telescope was completely gutted, repainted, and virtually every electronic component and wire replaced. Modern infrastructure, such as USB, Ethernet and video ports, has been incorporated into the telescope tube saddle boxes. Absolute encoders have been placed on the hour angle and declination axes with a resolution of less than 0.7 arc seconds. The secondary mirror is also equipped with an absolute encoder and temperature sensor to allow for fully automated focus. New mirror coatings, automated mirror covers, a new 150 mm refractor, and new instrumentation have been deployed. An integrated X-stage guider and dual filter wheel containing 18 filters is used for direct imaging. The guider camera can be easily removed and a standard 2-inch eyepiece used for occasional viewing by VIPs at C.T.I.O. A 12-megapixel all-sky camera produces color images every 30 seconds showing details in the Milky Way and Magellanic Clouds. Two low-light-level cameras are deployed: one on the finder and one at the top of the telescope showing a 30° field. Other auxiliary equipment, including daytime color video cameras, a weather station, and remotely controllable power outlets, permits complete control and servicing of the system. The SARA Consortium (www.saraobservatory.org), a collection of ten eastern universities, also operates a 0.9-m telescope at the Kitt Peak National Observatory using an almost identical set of instruments with the same ACE control system. This project was funded by the SARA Consortium.

  17. SWUIS-A: A Versatile, Low-Cost UV/VIS/IR Imaging System for Airborne Astronomy and Aeronomy Research

    NASA Technical Reports Server (NTRS)

    Durda, Daniel D.; Stern, S. Alan; Tomlinson, William; Slater, David C.; Vilas, Faith

    2001-01-01

    We have developed, and successfully flight-tested on 14 different airborne missions, the hardware and techniques for routinely conducting valuable astronomical and aeronomical observations from high-performance, two-seater military-type aircraft. The SWUIS-A (Southwest Universal Imaging System - Airborne) system consists of an image-intensified CCD camera with broadband response from the near-UV to the near-IR, high-quality foreoptics, a miniaturized video recorder, an aircraft-to-camera power and telemetry interface with associated camera controls, and associated cables, filters, and other minor equipment. SWUIS-A's suite of high-quality foreoptics gives it selectable, variable focal length/variable field-of-view capabilities. The SWUIS-A camera frames at 60 Hz video rates, which is a key requirement for both jitter compensation and high time resolution (useful for occultation, lightning, and auroral studies). Broadband SWUIS-A image coadds can exceed a limiting magnitude of V = 10.5 in <1 sec under dark-sky conditions. A valuable attribute of SWUIS-A airborne observations is that the astronomer flies with the instrument, providing a Space Shuttle-like "payload specialist" capability to close the loop in real time on the research done on each mission. Key advantages of the small, high-performance aircraft on which we can fly SWUIS-A include significant cost savings over larger, more conventional airborne platforms; worldwide basing, obviating the need for expensive, campaign-style movement of specialized large aircraft and their logistics support teams; and ultimately faster reaction times to transient events. Compared to ground-based instruments, airborne research platforms offer superior atmospheric transmission, the mobility to reach remote and otherwise unreachable locations over the Earth, and virtually guaranteed good weather for observing the sky. Compared to space-based instruments, airborne platforms typically offer substantial cost advantages and the freedom to fly along nearly any ground-track route for transient-event tracking such as occultations and eclipses.

  18. Coordinating High-Resolution Traffic Cameras : Developing Intelligent, Collaborating Cameras for Transportation Security and Communications

    DOT National Transportation Integrated Search

    2015-08-01

    Cameras are used prolifically to monitor transportation incidents, infrastructure, and congestion. Traditional camera systems often require human monitoring and only offer low-resolution video. Researchers for the Exploratory Advanced Research (EAR) ...

  19. Thermoelastic Analysis of Hyper-X Camera Windows Suddenly Exposed to Mach 7 Stagnation Aerothermal Shock

    NASA Technical Reports Server (NTRS)

    Ko, William L.; Gong, Leslie

    2000-01-01

    To visually record the initial free-flight event of the Hyper-X research flight vehicle immediately after separation from the Pegasus® booster rocket, a video camera was mounted on the bulkhead of the adapter through which Hyper-X rides on Pegasus. The video camera was shielded by a protective camera window made of heat-resistant quartz material. When Hyper-X separates from Pegasus, this camera window is suddenly exposed to Mach 7 stagnation thermal shock and dynamic pressure loading (aerothermal loading). To examine the structural integrity, a thermoelastic analysis was performed and the stress distributions in the camera windows were calculated. The critical stress point, where the tensile stress reaches a maximum value for each camera window, was identified, and the maximum tensile stress level at that critical point was found to be considerably lower than the tensile failure stress of the camera window material.

  20. "Ipsilateral, high, single-hand, sideways"-Ruijin rule for camera assistant in uniportal video-assisted thoracoscopic surgery.

    PubMed

    Gao, Taotao; Xiang, Jie; Jin, Runsen; Zhang, Yajie; Wu, Han; Li, Hecheng

    2016-10-01

    The camera assistant plays a very important role in uniportal video-assisted thoracoscopic surgery (VATS), acting as the eyes of the surgeon and providing the VATS team with a stable and clear operating view. Thus, a good assistant should cooperate with the surgeon and manipulate the camera expertly, to ensure eye-hand coordination. We have performed more than 100 uniportal VATS procedures in the Department of Thoracic Surgery at Ruijin Hospital. Based on our experience, we have summarized the method of holding the camera, known as "ipsilateral, high, single-hand, sideways", which largely improves the comfort and fluency of surgery.

  1. Video Altimeter and Obstruction Detector for an Aircraft

    NASA Technical Reports Server (NTRS)

    Delgado, Frank J.; Abernathy, Michael F.; White, Janis; Dolson, William R.

    2013-01-01

    Video-based altimetric and obstruction-detection systems for aircraft have been partially developed. The hardware of a system of this type includes a downward-looking video camera, a video digitizer, a Global Positioning System receiver or other means of measuring the aircraft velocity relative to the ground, a gyroscope-based or other attitude-determination subsystem, and a computer running altimetric and/or obstruction-detection software. From the digitized video data, the altimetric software computes the pixel velocity in an appropriate part of the video image and the corresponding angular relative motion of the ground within the field of view of the camera. Then, using trigonometric relationships among the aircraft velocity, the attitude of the camera, the angular relative motion, and the altitude, the software computes the altitude. The obstruction-detection software performs somewhat similar calculations as part of a larger task in which it uses the pixel velocity data from the entire video image to compute a depth map, which can be correlated with a terrain map to show the locations of potential obstructions. The depth map can be used as a real-time hazard display and/or to update an obstruction database.
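    For the simplest case the record describes (nadir-pointing camera, level flight), the trigonometry reduces to h = v / omega, with omega obtained from the measured pixel velocity and the camera's per-pixel angular resolution. A worked sketch with illustrative numbers:

    ```python
    # Worked sketch of the nadir-camera altimetric relation: the ground's
    # angular rate across the image is omega = v / h, so h = v / omega.
    # All parameter values below are illustrative, not from the system.
    def altitude_from_pixel_flow(v_ground_mps, pixel_velocity_pps,
                                 focal_length_mm, pixel_pitch_um):
        """Altitude (m) from ground speed and image motion, nadir camera."""
        # Small-angle IFOV of one pixel, in radians.
        ifov = (pixel_pitch_um * 1e-6) / (focal_length_mm * 1e-3)
        omega = pixel_velocity_pps * ifov   # rad/s of ground motion in the FOV
        return v_ground_mps / omega

    # Example: 60 m/s ground speed, features crossing at 120 px/s,
    # 8 mm lens with 10 um pixels -> IFOV = 1.25 mrad, omega = 0.15 rad/s.
    print(f"{altitude_from_pixel_flow(60.0, 120.0, 8.0, 10.0):.0f} m")  # ~400 m
    ```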

  2. The use of digital imaging, video conferencing, and telepathology in histopathology: a national survey.

    PubMed

    Dennis, T; Start, R D; Cross, S S

    2005-03-01

    To undertake a large-scale survey of histopathologists in the UK to determine the current infrastructure, training, and attitudes to digital pathology. A postal questionnaire was sent to 500 consultant histopathologists randomly selected from the membership of the Royal College of Pathologists in the UK. There was a response rate of 47%. Sixty-four per cent of respondents had a digital camera mounted on their microscope, but only 12% had any sort of telepathology equipment. Thirty per cent used digital images in electronic presentations at meetings at least once a year, and only 24% had ever used telepathology in a diagnostic situation. Fifty-nine per cent had received no training in digital imaging. Fifty-eight per cent felt that the medicolegal implications of duty of care were a barrier to its use. A large proportion of pathologists (69%) were interested in using video conferencing for remote attendance at multidisciplinary team meetings. There is a reasonable level of equipment and communications infrastructure among histopathologists in the UK but a very low level of training. There is resistance to the use of telepathology in the diagnostic context but enthusiasm for the use of video conferencing in multidisciplinary team meetings.

  3. High Speed Digital Camera Technology Review

    NASA Technical Reports Server (NTRS)

    Clements, Sandra D.

    2009-01-01

    A High Speed Digital Camera Technology Review (HSD Review) is being conducted to evaluate the state-of-the-shelf in this rapidly progressing industry. Five HSD cameras supplied by four camera manufacturers participated in a Field Test during the Space Shuttle Discovery STS-128 launch. Each camera was also subjected to Bench Tests in the ASRC Imaging Development Laboratory. Evaluation of the data from the Field and Bench Tests is underway. Representatives from the imaging communities at NASA / KSC and the Optical Systems Group are participating as reviewers. A High Speed Digital Video Camera Draft Specification was updated to address Shuttle engineering imagery requirements based on findings from this HSD Review. This draft specification will serve as the template for a High Speed Digital Video Camera Specification to be developed for the wider OSG imaging community under OSG Task OS-33.

  4. Motion-Blur-Free High-Speed Video Shooting Using a Resonant Mirror

    PubMed Central

    Inoue, Michiaki; Gu, Qingyi; Takaki, Takeshi; Ishii, Idaku; Tajima, Kenji

    2017-01-01

    This study proposes a novel concept of actuator-driven frame-by-frame intermittent tracking for motion-blur-free video shooting of fast-moving objects. The camera frame and shutter timings are controlled for motion blur reduction in synchronization with a free-vibration-type actuator vibrating with a large amplitude at hundreds of hertz so that motion blur can be significantly reduced in free-viewpoint high-frame-rate video shooting for fast-moving objects by deriving the maximum performance of the actuator. We develop a prototype of a motion-blur-free video shooting system by implementing our frame-by-frame intermittent tracking algorithm on a high-speed video camera system with a resonant mirror vibrating at 750 Hz. It can capture 1024 × 1024 images of fast-moving objects at 750 fps with an exposure time of 0.33 ms without motion blur. Several experimental results for fast-moving objects verify that our proposed method can reduce image degradation from motion blur without decreasing the camera exposure time. PMID:29109385
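
    One way to see the synchronization constraint: a resonant mirror sweeps sinusoidally, so its angular velocity matches a given target image velocity only at particular phases of each cycle, and the exposure must be centered on those instants. A hedged Python sketch of that timing calculation, with all parameter values hypothetical (the actual system does this with hardware synchronization):

    ```python
    import numpy as np

    f, A = 750.0, 2e-3        # mirror frequency (Hz) and amplitude (rad), assumed
    omega_t = 4.0             # target's apparent angular velocity (rad/s), assumed

    peak = 2 * np.pi * f * A  # maximum mirror angular velocity
    assert abs(omega_t) <= peak, "target too fast for this mirror"

    # Mirror velocity is peak*cos(2*pi*f*t); find the phase where it equals
    # the target velocity, i.e., where the image is momentarily stationary.
    phi = np.arccos(omega_t / peak)
    t_center = phi / (2 * np.pi * f)  # offset from each zero-angle crossing
    print(f"trigger exposure {t_center * 1e6:.1f} us after each cycle start")
    ```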

  5. Snowflake Visualization

    NASA Astrophysics Data System (ADS)

    Bliven, L. F.; Kucera, P. A.; Rodriguez, P.

    2010-12-01

    NASA Snowflake Video Imagers (SVIs) enable snowflake visualization at diverse field sites. The natural variability of frozen precipitation is a complicating factor for remote sensing retrievals in high-latitude regions. Particle classification is important for understanding snow/ice physics, remote sensing polarimetry, bulk radiative properties, surface emissivity, and ultimately, precipitation rates and accumulations. Yet intermittent storms, low temperatures, high winds, remote locations, and complex terrain can impede us from observing falling snow in situ. SVI hardware and software have some special features. The standard camera and optics yield 8-bit gray-scale images with a resolution of 0.05 x 0.1 mm at 60 frames per second. Gray-scale images are highly desirable because they display contrast that aids particle classification. Black-and-white (1-bit) systems display no contrast, so there is less information for recognizing particle types, which is particularly burdensome for aggregates. Data are analyzed at one-minute intervals using NASA's Precipitation Link Software, which produces (a) Particle Catalogs and (b) Particle Size Distributions (PSDs). SVIs can operate nearly continuously for long periods (e.g., an entire winter season), so natural variability can be documented. Here we summarize results from field studies this past winter and review some recent SVI enhancements. During the winter of 2009-2010, SVIs were deployed at two sites. One SVI supported weather observations during the 2010 Winter Olympics and Paralympics. It was located close to the summit (Roundhouse) of Whistler Mountain, near the town of Whistler, British Columbia, Canada. In addition, two SVIs were located at the King City Weather Radar Station (WKR) near Toronto, Ontario, Canada. Access to the SVI on Whistler Mountain was prohibited during the Olympics due to security concerns, so to meet the schedule for daily data products we operated the SVI by remote control. We also upgraded the Precipitation Link Software to allow operator selection of the image sub-sampling interval during data processing. Thus quick-look data products were delivered on schedule, even for intense storms that generated large data files. Approximately 11 million snowflakes were recorded, and we present highlights from the Particle Catalog and the PSDs obtained during the 2010 Winter Olympics and Paralympics. The SVIs at King Radar, Ontario, meanwhile, had a standard-resolution camera and a higher-resolution camera (0.1 x 0.05 mm and 0.05 x 0.05 mm, respectively). The upgraded camera operated well. Using observations from the King Radar site, we discuss camera durability and data products from the upgraded SVI. During the 2010-11 winter, a standard SVI is deployed in Finland as part of the Light Precipitation Validation Experiment. Two higher-resolution SVIs are also deployed in Canada at a field site ~30 km from WKR, which will provide data for validation of radar polarization signatures and satellite observations.

  6. A reaction-diffusion-based coding rate control mechanism for camera sensor networks.

    PubMed

    Yamamoto, Hiroshi; Hyodo, Katsuya; Wakamiya, Naoki; Murata, Masayuki

    2010-01-01

    A wireless camera sensor network is useful for surveillance and monitoring because of its visibility and ease of deployment. However, it suffers from the limited capacity of wireless communication, and a network is easily overwhelmed by the considerable volume of video traffic. In this paper, we propose an autonomous video coding rate control mechanism in which each camera sensor node can autonomously determine its coding rate in accordance with the location and velocity of target objects. For this purpose, we adopted a biological model, the reaction-diffusion model, inspired by the similarity between biological spatial patterns and the spatial distribution of video coding rates. Through simulation and practical experiments, we verify the effectiveness of our proposal.
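
    The paper's mechanism uses a full reaction-diffusion model; as a simplified stand-in, the Python sketch below diffuses and decays a single "activator" injected at the node observing a target, then maps activator levels to coding rates so bandwidth concentrates around the target. All constants are illustrative:

    ```python
    import numpy as np

    N, D, decay, dt = 20, 0.3, 0.1, 1.0  # nodes on a ring, diffusion, decay, step
    u = np.zeros(N)                      # activator level per camera node
    target_node = 7                      # node currently observing the target

    for _ in range(50):
        stimulus = np.zeros(N)
        stimulus[target_node] = 1.0                   # inject where target is seen
        lap = np.roll(u, 1) + np.roll(u, -1) - 2 * u  # discrete Laplacian (ring)
        u += dt * (D * lap - decay * u + stimulus)

    rates_kbps = 200 + 1800 * u / u.max()  # map activator to 200-2000 kb/s
    print(np.round(rates_kbps).astype(int))
    ```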

  7. Using a Digital Video Camera to Study Motion

    ERIC Educational Resources Information Center

    Abisdris, Gil; Phaneuf, Alain

    2007-01-01

    To illustrate how a digital video camera can be used to analyze various types of motion, this simple activity analyzes the motion and measures the acceleration due to gravity of a basketball in free fall. Although many excellent commercially available data loggers and software can accomplish this task, this activity requires almost no financial…

  8. Using a digital video camera to examine coupled oscillations

    NASA Astrophysics Data System (ADS)

    Greczylo, T.; Debowska, E.

    2002-07-01

    In our previous paper (Debowska E, Jakubowicz S and Mazur Z 1999 Eur. J. Phys. 20 89-95), thanks to the use of an ultrasound distance sensor, experimental verification of the solution of Lagrange equations for longitudinal oscillations of the Wilberforce pendulum was shown. In this paper the sensor and a digital video camera were used to monitor and measure the changes of both the pendulum's coordinates (vertical displacement and angle of rotation) simultaneously. The experiments were performed with the aid of the integrated software package COACH 5. Fourier analysis in Microsoft® Excel 97 was used to find normal modes in each case of the measured oscillations. Comparison of the results with those presented in our previous paper (as given above) leads to the conclusion that a digital video camera is a powerful tool for measuring coupled oscillations of a Wilberforce pendulum. The most important conclusion is that a video camera is able to do something more than merely register interesting physical phenomena - it can be used to perform measurements of physical quantities at an advanced level.

  9. Remote Cameras Reveal Experimental Artifact in a Study of Seed Predation in a Semi-Arid Shrubland.

    PubMed

    Brown, Alissa J; Deutschman, Douglas H; Braswell, Jessica; McLaughlin, Dana

    2016-01-01

    Granivorous animals may prefer to predate or cache seeds of certain plant species over others. Multiple studies have documented a preference for larger, non-native seed by granivores. To accomplish this, researchers have traditionally used indirect inference by relating patterns of seed removal to the species composition of the granivorous animal community. To measure seed removal, researchers present seed to granivorous animals in the field using equipment intended to exclude certain animal taxa while permitting access to others. This approach allows researchers to differentiate patterns of seed removal among various taxa (e.g., birds, small mammals, and insects); however, it is unclear whether the animals of interest freely use the exclusion devices, which may hinder them from discovering the seed dishes. We used video observation to perform a study of seed predation using a custom-built, infrared digital camera and recording system. We presented native and non-native seed mixtures in partitioned Petri dishes both within and outside of exclusion cages. The exclusion cages were intended to allow entrance by rodent taxa while preventing entrance by rabbits and birds. We documented all seed removal visits by granivorous animals, which we identified to the genus level. Genera exhibited varying seed removal patterns based on seed type (native vs. non-native) and dish type (open vs. enclosed). We documented avoidance of the enclosed dishes by all but one rodent taxon, even though these dishes were intended to be used freely by rodents. This suggests that preference for non-native seed occurs differentially among granivorous animals in this system; however, interpretation of these nuanced results would be difficult without the benefit of video observation. When feasible, video observation should accompany studies using in situ equipment to ensure that incorrect assumptions do not lead to inappropriate interpretation of results.

  10. Application of airborne hyperspectral remote sensing for the retrieval of forest inventory parameters

    NASA Astrophysics Data System (ADS)

    Dmitriev, Yegor V.; Kozoderov, Vladimir V.; Sokolov, Anton A.

    2016-04-01

    Collecting and updating forest inventory data play an important part in forest management. The data can be obtained directly by using accurate but low-efficiency ground-based methods as well as from remote sensing measurements. We present applications of airborne hyperspectral remote sensing for the retrieval of such important inventory parameters as the forest species and age composition. The hyperspectral images of the test region were obtained from an airplane equipped with a Russian-made lightweight airborne video-spectrometer covering the visible and near-infrared spectral range and a high-resolution photo camera on the same gyro-stabilized platform. The quality of the thematic processing depends on many factors, such as the atmospheric conditions, characteristics of the measuring instruments, corrections and preprocessing methods, etc. The construction of the classifier, together with methods for reducing the feature space, also plays an important role. The performance of different spectral classification methods is analyzed for the problem of hyperspectral remote sensing of soil and vegetation. For the reduction of the feature space we used the previously proposed stable feature selection method. The results of the classification of hyperspectral airborne images using the Multiclass Support Vector Machine method with a Gaussian kernel and the parametric Bayesian classifier based on the Gaussian mixture model, along with their comparative analysis, are demonstrated.
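
    A hedged sketch of the two classifiers being compared (a multiclass SVM with Gaussian kernel and a Bayesian classifier built from per-class Gaussian mixtures), with random placeholder arrays standing in for real pixel spectra:

    ```python
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)        # placeholder "spectra"
    X_train = rng.normal(size=(300, 50))  # 300 pixels, 50 bands
    y_train = rng.integers(0, 3, 300)     # 3 forest classes
    X_test = rng.normal(size=(100, 50))

    # Multiclass SVM with Gaussian (RBF) kernel
    svm = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X_train, y_train)
    svm_labels = svm.predict(X_test)

    # Parametric Bayesian classifier: one Gaussian mixture per class,
    # assign each pixel to the class with the highest likelihood
    classes = np.unique(y_train)
    gmms = {c: GaussianMixture(n_components=2, random_state=0)
                .fit(X_train[y_train == c]) for c in classes}
    loglik = np.column_stack([gmms[c].score_samples(X_test) for c in classes])
    bayes_labels = classes[loglik.argmax(axis=1)]
    ```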

  11. Telerobotic Haptic Exploration in Art Galleries and Museums for Individuals with Visual Impairments.

    PubMed

    Park, Chung Hyuk; Ryu, Eun-Seok; Howard, Ayanna M

    2015-01-01

    This paper presents a haptic telepresence system that enables visually impaired users to explore visually rich locations such as art galleries and museums by using a telepresence robot, an RGB-D sensor (color and depth camera), and a haptic interface. Recent improvements in RGB-D sensors have enabled real-time access to 3D spatial information in the form of point clouds. However, the real-time representation of these data as a tangible haptic experience has received little attention, especially in the case of telepresence for individuals with visual impairments. Thus, the proposed system addresses the real-time haptic exploration of remote 3D information through video encoding and real-time 3D haptic rendering of the remote real-world environment. This paper investigates two scenarios in haptic telepresence, namely mobile navigation and object exploration in a remote environment. Participants with and without visual impairments took part in our experiments based on the two scenarios, and the system performance was validated. In conclusion, the proposed framework provides a new methodology of haptic telepresence for individuals with visual impairments by providing an enhanced interactive experience in which they can remotely access public places (art galleries and museums) with the aid of the haptic modality and robotic telepresence.

  12. ONR Workshop on Magnetohydrodynamic Submarine Propulsion (2nd), Held in San Diego, California on November 16-17, 1989

    DTIC Science & Technology

    1990-07-01

    …electrolytic dissociation of the electrode material, and to provide a good gas evolution which…; torpedo applications seem to be still somewhat out of the…rod cathode. A unique feature of this preliminary experiment was the use of a prototype gated, intensified video camera. This camera is based on a microprocessor-controlled microchannel plate intensifier tube. The intensifier tube image is focused on a standard CCD video camera so that the object…

  13. Encrypting Digital Camera with Automatic Encryption Key Deletion

    NASA Technical Reports Server (NTRS)

    Oakley, Ernest C. (Inventor)

    2007-01-01

    A digital video camera includes an image sensor capable of producing a frame of video data representing an image viewed by the sensor, an image memory for storing video data such as previously recorded frame data in a video frame location of the image memory, a read circuit for fetching the previously recorded frame data, an encryption circuit having an encryption key input connected to receive the previously recorded frame data from the read circuit as an encryption key, an un-encrypted data input connected to receive the frame of video data from the image sensor and an encrypted data output port, and a write circuit for writing a frame of encrypted video data received from the encrypted data output port of the encryption circuit to the memory and overwriting the video frame location storing the previously recorded frame data.
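
    As a loose illustration of the scheme (the abstract does not specify the cipher, so a simple XOR stream stands in for the encryption circuit), each incoming frame is encrypted with the previously recorded frame as the key, and the ciphertext then overwrites that key frame, destroying the key after a single use:

    ```python
    import numpy as np

    def encrypt_frame(new_frame: np.ndarray, key_frame: np.ndarray) -> np.ndarray:
        """XOR stand-in for the patent's encryption circuit: the previously
        recorded frame serves as the key for the incoming frame."""
        assert new_frame.shape == key_frame.shape
        return np.bitwise_xor(new_frame, key_frame)

    frame_memory = np.random.randint(0, 256, (480, 640), dtype=np.uint8)  # prior frame
    incoming = np.random.randint(0, 256, (480, 640), dtype=np.uint8)      # from sensor

    # Overwriting the stored frame location deletes the key automatically
    frame_memory = encrypt_frame(incoming, frame_memory)
    ```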

  14. Structure-from-Motion for Calibration of a Vehicle Camera System with Non-Overlapping Fields-of-View in an Urban Environment

    NASA Astrophysics Data System (ADS)

    Hanel, A.; Stilla, U.

    2017-05-01

    Vehicle environment cameras observing traffic participants in the area around a car and interior cameras observing the car driver are important data sources for driver intention recognition algorithms. To combine information from both camera groups, a camera system calibration can be performed. Typically, there is no overlapping field-of-view between environment and interior cameras, and often no marked reference points are available in environments that are large enough to cover a car for the system calibration. In this contribution, a calibration method for a vehicle camera system with non-overlapping camera groups in an urban environment is described. A-priori images of an urban calibration environment taken with an external camera are processed with the structure-from-motion method to obtain an environment point cloud. Images of the vehicle interior, taken also with an external camera, are processed to obtain an interior point cloud. Both point clouds are tied to each other with images from both image sets showing the same real-world objects. The point clouds are transformed into a self-defined vehicle coordinate system describing the vehicle movement. On demand, videos can be recorded with the vehicle cameras in a calibration drive. Poses of vehicle environment cameras and interior cameras are estimated separately using ground control points from the respective point cloud. All poses of a vehicle camera estimated for different video frames are optimized in a bundle adjustment. In an experiment, a point cloud is created from images of an underground car park, as is a point cloud of the interior of a Volkswagen test car. Videos from two environment cameras and one interior camera are recorded. Results show that the vehicle camera poses are estimated successfully, especially when the car is not moving. Position standard deviations in the centimeter range can be achieved for all vehicle cameras. Relative distances between the vehicle cameras deviate between one and ten centimeters from tachymeter reference measurements.
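
    A hedged sketch of the per-frame pose-estimation step: given ground control points from the structure-from-motion point cloud and their pixel positions in a video frame, a perspective-n-point solver yields one camera pose; the bundle adjustment described in the paper would then refine all poses jointly. The arrays and intrinsics below are placeholders:

    ```python
    import numpy as np
    import cv2

    object_points = np.random.rand(12, 3).astype(np.float32)  # GCPs, vehicle frame
    image_points = np.random.rand(12, 2).astype(np.float32)   # their pixel positions
    K = np.array([[1200, 0, 960],                             # assumed intrinsics
                  [0, 1200, 540],
                  [0, 0, 1]], dtype=np.float32)

    ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, None)
    R, _ = cv2.Rodrigues(rvec)       # rotation of the camera pose
    camera_center = -R.T @ tvec      # camera position in the vehicle frame
    ```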

  15. MiniAERCam Ranging

    NASA Technical Reports Server (NTRS)

    Talley, Tom

    2003-01-01

    Johnson Space Center (JSC) is designing a small, remotely controlled vehicle that will carry two color video cameras and one black-and-white video camera in space. The device will be launched from and retrieved by the Space Vehicle and used for remote viewing. Off-the-shelf cellular technology is being used as the basis for the communication system design. Existing plans include using multiple antennas to make simultaneous estimates of the azimuth of the MiniAERCam from several sites on the Space Station and using triangulation to find the location of the device. Adding range detection capability to each of the nodes on the Space Vehicle would allow an estimate of the location of the MiniAERCam to be made at each Communication And Telemetry Box (CATBox) independently of all the other communication nodes. This project will investigate the techniques used by the Global Positioning System (GPS) to achieve accurate positioning information and adapt those strategies that are appropriate to the design of the CATBox range determination system.
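
    Before range detection is added, the baseline plan is azimuth-only triangulation. A minimal two-dimensional Python sketch of that geometry, with all station positions and bearings hypothetical:

    ```python
    import numpy as np

    def triangulate(p1, az1, p2, az2):
        """Simplified 2D case: intersect two bearing lines measured from
        stations p1 and p2 (azimuths in radians from north/+y)."""
        d1 = np.array([np.sin(az1), np.cos(az1)])  # unit bearing vectors
        d2 = np.array([np.sin(az2), np.cos(az2)])
        # Solve p1 + t1*d1 = p2 + t2*d2 for the range parameters t1, t2
        A = np.column_stack([d1, -d2])
        t = np.linalg.solve(A, np.asarray(p2, float) - np.asarray(p1, float))
        return np.asarray(p1, float) + t[0] * d1

    # Two stations 10 m apart, bearings of +45 and -45 degrees
    print(triangulate([0, 0], np.radians(45), [10, 0], np.radians(-45)))  # ~(5, 5)
    ```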

  16. Photogrammetry and Remote Sensing: New German Standards (din) Setting Quality Requirements of Products Generated by Digital Cameras, Pan-Sharpening and Classification

    NASA Astrophysics Data System (ADS)

    Reulke, R.; Baltrusch, S.; Brunn, A.; Komp, K.; Kresse, W.; von Schönermark, M.; Spreckels, V.

    2012-08-01

    Ten years after the first introduction of a digital airborne mapping camera at the ISPRS conference 2000 in Amsterdam, several digital cameras are now available. They are well established in the market and have replaced the analogue camera. A general improvement in image quality accompanied the digital camera development. The signal-to-noise ratio and the dynamic range are significantly better than with the analogue cameras. In addition, digital cameras can be spectrally and radiometrically calibrated. The use of these cameras required a rethinking in many places, though. New data products were introduced. In recent years, several activities took place that should lead to a better understanding of the cameras and the data they produce. Several projects, such as those of the German Society for Photogrammetry, Remote Sensing and Geoinformation (DGPF) or EuroSDR (European Spatial Data Research), were conducted to test and compare the performance of the different cameras. In this paper the current DIN (Deutsches Institut fuer Normung - German Institute for Standardization) standards are presented. These include the standard for digital cameras, the standard for ortho rectification, the standard for classification, and the standard for pan-sharpening. In addition, standards for the derivation of elevation models, the use of Radar/SAR, and image quality are in preparation. The OGC has indicated its interest in participating in that development and has already published specifications in the field of photogrammetry and remote sensing. One goal of joint future work could be to merge these formerly independent developments and to jointly develop a suite of implementation specifications for photogrammetry and remote sensing.

  17. Help for the Visually Impaired

    NASA Technical Reports Server (NTRS)

    1995-01-01

    The Low Vision Enhancement System (LVES) is a video headset that offers people with low vision a view of their surroundings equivalent to the image on a five-foot television screen four feet from the viewer. It will not make the blind see, but for many people with low vision it eases everyday activities such as reading, watching TV, and shopping. LVES was developed over almost a decade of cooperation between Stennis Space Center, the Wilmer Eye Institute of the Johns Hopkins Medical Institutions, the Department of Veterans Affairs, and Visionics Corporation. With the aid of Stennis scientists, Wilmer researchers used NASA technology for computer processing of satellite images and head-mounted vision enhancement systems originally intended for the space station. The unit consists of a head-mounted video display, three video cameras, and a control unit for the cameras. The cameras feed images to the video display in the headset.

  18. Thermal imagers: from ancient analog video output to state-of-the-art video streaming

    NASA Astrophysics Data System (ADS)

    Haan, Hubertus; Feuchter, Timo; Münzberg, Mario; Fritze, Jörg; Schlemmer, Harry

    2013-06-01

    The video output of thermal imagers stayed constant over almost two decades. When the famous Common Modules were employed, a thermal image was at first presented to the observer in the eyepiece only. In the early 1990s TV cameras were attached and the standard output was CCIR. In the civil camera market, output standards changed to digital formats a decade ago, with digital video streaming being nowadays state-of-the-art. The reasons why the output technique in the thermal world stayed unchanged over such a long time are the very conservative view of the military community, the long planning and turn-around times of programs, and the slower growth in pixel count of TIs in comparison to consumer cameras. With megapixel detectors, the CCIR output format is no longer sufficient. The paper discusses state-of-the-art compression and streaming solutions for TIs.

  19. Surgeon point-of-view recording: Using a high-definition head-mounted video camera in the operating room.

    PubMed

    Nair, Akshay Gopinathan; Kamal, Saurabh; Dave, Tarjani Vivek; Mishra, Kapil; Reddy, Harsha S; Della Rocca, David; Della Rocca, Robert C; Andron, Aleza; Jain, Vandana

    2015-10-01

    To study the utility of a commercially available small, portable ultra-high-definition (HD) camera (GoPro Hero 4) for intraoperative recording. A head mount was used to fix the camera on the operating surgeon's head. Due care was taken to protect the patient's identity. The recorded video was subsequently edited and used as a teaching tool. This retrospective, noncomparative study was conducted at three tertiary eye care centers. The surgeries recorded were ptosis correction, ectropion correction, dacryocystorhinostomy, angular dermoid excision, enucleation, blepharoplasty, and lid tear repair (one each). The recorded videos were reviewed, edited, and checked for clarity, resolution, and reproducibility. The recorded videos were found to be of high quality, allowing zooming and clear visualization of the surgical anatomy. Minimal distortion is a drawback that can be effectively addressed during postproduction. The camera, owing to its light weight and small size, can be mounted on the surgeon's head, offering a unique surgeon point of view. In our experience, the results were of good quality and reproducible. A head-mounted ultra-HD video recording system is a cheap, high-quality, and unobtrusive technique to record surgery and can be a useful teaching tool in external facial and ophthalmic plastic surgery.

  1. Real-Time Acquisition and Display of Data and Video

    NASA Technical Reports Server (NTRS)

    Bachnak, Rafic; Chakinarapu, Ramya; Garcia, Mario; Kar, Dulal; Nguyen, Tien

    2007-01-01

    This paper describes the development of a prototype that takes in an analog National Television System Committee (NTSC) video signal generated by a video camera and data acquired by a microcontroller and displays them in real time on a digital panel. An 8051 microcontroller is used to acquire the power dissipation of the display panel, the room temperature, and the camera zoom level. The paper describes the major hardware components and shows how they are interfaced into a functional prototype. Test data results are presented and discussed.

  2. Explosive Transient Camera (ETC) Program

    DTIC Science & Technology

    1991-10-01

    …and transmits digital video and status information to the "downstairs" system. The clocking unit and regulator/driver board are the only CCD-dependent…

  3. In-Home Exposure Therapy for Veterans with PTSD

    DTIC Science & Technology

    2017-10-01

    …telehealth (HBT; Veterans stay at home and meet with the therapist using the computer and video cameras), and (3) PE delivered in home, in person (IHIP; the therapist comes to the Veterans' homes for treatment). We will be checking to see…when providing treatment in homes and through home-based video technology. BODY: Our focus in the past year (30 Sept 2016 – 10 Oct 2017) has been to…

  4. Underwater videography outperforms above-water videography and in-person surveys for monitoring the spawning of Devils Hole Pupfish

    USGS Publications Warehouse

    Chaudoin, Ambre L.; Feuerbacher, Olin; Bonar, Scott A.; Barrett, Paul J.

    2015-01-01

    The monitoring of threatened and endangered fishes in remote environments continues to challenge fisheries biologists. The endangered Devils Hole Pupfish Cyprinodon diabolis, which is confined to a single warm spring in Death Valley National Park, California–Nevada, has recently experienced record declines, spurring renewed conservation and recovery efforts. In February–December 2010, we investigated the timing and frequency of spawning in the species' native habitat by using three survey methods: underwater videography, above-water videography, and in-person surveys. Videography methods incorporated fixed-position, solar-powered cameras to record continuous footage of a shallow rock shelf that Devils Hole Pupfish use for spawning. In-person surveys were conducted from a platform placed above the water's surface. The underwater camera recorded more overall spawning throughout the year (mean ± SE = 0.35 ± 0.06 events/sample) than the above-water camera (0.11 ± 0.03 events/sample). Underwater videography also recorded more peak-season spawning (March: 0.83 ± 0.18 events/sample; April: 2.39 ± 0.47 events/sample) than above-water videography (March: 0.21 ± 0.10 events/sample; April: 0.9 ± 0.32 events/sample). Although the overall number of spawning events per sample did not differ significantly between underwater videography and in-person surveys, underwater videography provided a larger data set with much less variability than data from in-person surveys. Fixed videography was more cost efficient than in-person surveys ($1.31 versus $605 per collected data-hour), and underwater videography provided more usable data than above-water videography. Furthermore, video data collection was possible even under adverse conditions, such as the extreme temperatures of the region, and could be maintained successfully with few study site visits. Our results suggest that self-contained underwater cameras can be efficient tools for monitoring remote and sensitive aquatic ecosystems.

  5. Blinded evaluation of the effects of high definition and magnification on perceived image quality in laryngeal imaging.

    PubMed

    Otto, Kristen J; Hapner, Edie R; Baker, Michael; Johns, Michael M

    2006-02-01

    Advances in commercial video technology have improved office-based laryngeal imaging. This study investigates the perceived image quality of a true high-definition (HD) video camera and the effect of magnification on laryngeal videostroboscopy. We performed a prospective, dual-armed, single-blinded analysis of a standard laryngeal videostroboscopic examination comparing 3 separate add-on camera systems: a 1-chip charge-coupled device (CCD) camera, a 3-chip CCD camera, and a true 720p (progressive scan) HD camera. Displayed images were controlled for magnification and image size (20-inch [50-cm] display, red-green-blue, and S-video cable for 1-chip and 3-chip cameras; digital visual interface cable and HD monitor for HD camera). Ten blinded observers were then asked to rate the following 5 items on a 0-to-100 visual analog scale: resolution, color, ability to see vocal fold vibration, sense of depth perception, and clarity of blood vessels. Eight unblinded observers were then asked to rate the difference in perceived resolution and clarity of laryngeal examination images when displayed on a 10-inch (25-cm) monitor versus a 42-inch (105-cm) monitor. A visual analog scale was used. These monitors were controlled for actual resolution capacity. For each item evaluated, randomized block design analysis demonstrated that the 3-chip camera scored significantly better than the 1-chip camera (p < .05). For the categories of color and blood vessel discrimination, the 3-chip camera scored significantly better than the HD camera (p < .05). For magnification alone, observers rated the 42-inch monitor statistically better than the 10-inch monitor. The expense of new medical technology must be judged against its added value. This study suggests that HD laryngeal imaging may not add significant value over currently available video systems, in perceived image quality, when a small monitor is used. Although differences in clarity between standard and HD cameras may not be readily apparent on small displays, a large display size coupled with HD technology may impart improved diagnosis of subtle vocal fold lesions and vibratory anomalies.

  6. A 24-hour remote surveillance system for terrestrial wildlife studies

    USGS Publications Warehouse

    Sykes, P.W.; Ryman, W.E.; Kepler, C.B.; Hardy, J.W.

    1995-01-01

    The configuration, components, specifications, and costs of a state-of-the-art closed-circuit television system with wide application for wildlife research and management are described. The principal system components consist of a color CCTV camera with zoom lens, pan/tilt system, infrared illuminator, heavy-duty tripod, coaxial cable, coaxitron system, half-duplex equalizing video/control amplifier, time-lapse video cassette recorder, color video monitor, VHS video cassettes, portable generator, fuel tank, and power cable. This system was developed and used in a study of Mississippi sandhill crane (Grus canadensis pratensis) behaviors during incubation, hatching, and fledging. The main advantages of the system are minimal downtime and a complete record of every event, its time of occurrence, and its duration, permanently recorded and replayable as many times as necessary to retrieve the data. The system is particularly applicable for studies of behavior and predation, for counting individuals, or for recording difficult-to-observe activities. The system can be run continuously for several weeks by two people, reducing personnel costs. This paper is intended to provide biologists who have little knowledge of electronics with a system that might be useful to their specific needs. The disadvantages of this system are the initial costs (about $9800 basic, 1990-1991 U.S. dollars) and the time required to play back video cassette tapes for data retrieval, but the playback can be sped up when little or no activity of interest is taking place. In our study, the positive aspects of the system far outweighed the negative.

  7. Remote measurement of surface-water velocity using infrared videography and PIV: a proof-of-concept for Alaskan rivers

    USGS Publications Warehouse

    Kinzel, Paul J.; Legleiter, Carl; Nelson, Jonathan M.; Conaway, Jeffrey S.

    2017-01-01

    Thermal cameras with high sensitivity to medium and long wavelengths can resolve features at the surface of flowing water arising from turbulent mixing. Images acquired by these cameras can be processed with particle image velocimetry (PIV) to compute surface velocities based on the displacement of thermal features as they advect with the flow. We conducted a series of field measurements to test this methodology for remote sensing of surface velocities in rivers. We positioned an infrared video camera at multiple stations across bridges that spanned five rivers in Alaska. Simultaneous non-contact measurements of surface velocity were collected with a radar gun. In situ velocity profiles were collected with Acoustic Doppler Current Profilers (ADCP). Infrared image time series were collected at a frequency of 10 Hz for a one-minute duration at a number of stations spaced across each bridge. Commercial PIV software used a cross-correlation algorithm to calculate pixel displacements between successive frames, which were then scaled to produce surface velocities. A blanking distance below the ADCP prevents a direct measurement of the surface velocity. However, we estimated surface velocity from the ADCP measurements using a program that normalizes each ADCP transect and combines those normalized transects to compute a mean measurement profile. The program can fit a power law to the profile and in so doing provides a velocity index, the ratio between the depth-averaged and surface velocity. For the rivers in this study, the velocity index ranged from 0.82 to 0.92. Average radar and extrapolated ADCP surface velocities were in good agreement with average infrared PIV calculations.
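
    The velocity index follows analytically from the fitted power-law profile: for u(z) = u_s (z/h)^(1/m), the ratio of depth-averaged to surface velocity is m/(m+1), which spans the reported 0.82 to 0.92 for m roughly between 5 and 11. A small Python sketch, with the surface velocity value hypothetical:

    ```python
    def velocity_index(m: float) -> float:
        """Depth-averaged/surface velocity ratio for a power-law profile
        u(z) = u_s * (z/h)**(1/m); integrating (z/h)**(1/m) over the depth
        gives m/(m+1)."""
        return m / (m + 1.0)

    surface_velocity = 1.8       # m/s, e.g., from infrared PIV (hypothetical)
    alpha = velocity_index(6.0)  # m = 6 -> 0.857, within the 0.82-0.92 range
    print(f"index {alpha:.2f}, depth-averaged {alpha * surface_velocity:.2f} m/s")
    ```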

  8. “Ipsilateral, high, single-hand, sideways”—Ruijin rule for camera assistant in uniportal video-assisted thoracoscopic surgery

    PubMed Central

    Gao, Taotao; Xiang, Jie; Jin, Runsen; Zhang, Yajie; Wu, Han

    2016-01-01

    The camera assistant plays a very important role in uniportal video-assisted thoracoscopic surgery (VATS), acting as the eye of the surgeon and providing the VATS team with a stable and clear operating view. Thus, a good assistant should cooperate with the surgeon and manipulate the camera expertly to ensure eye-hand coordination. We have performed more than 100 uniportal VATS procedures in the Department of Thoracic Surgery at Ruijin Hospital. Based on our experience, we summarize the method of holding the camera, known as "ipsilateral, high, single-hand, sideways", which largely improves the comfort and fluency of surgery. PMID:27867573

  9. Cross-Correlation-Based Structural System Identification Using Unmanned Aerial Vehicles

    PubMed Central

    Yoon, Hyungchul; Hoskere, Vedhus; Park, Jong-Woong; Spencer, Billie F.

    2017-01-01

    Computer vision techniques have been employed to characterize dynamic properties of structures, as well as to capture structural motion for system identification purposes. All of these methods leverage image-processing techniques using a stationary camera. This requirement makes finding an effective location for camera installation difficult, because civil infrastructure (i.e., bridges, buildings, etc.) are often difficult to access, being constructed over rivers, roads, or other obstacles. This paper seeks to use video from Unmanned Aerial Vehicles (UAVs) to address this problem. As opposed to the traditional way of using stationary cameras, the use of UAVs brings the issue of the camera itself moving; thus, the displacements of the structure obtained by processing UAV video are relative to the UAV camera. Some efforts have been reported to compensate for the camera motion, but they require certain assumptions that may be difficult to satisfy. This paper proposes a new method for structural system identification using the UAV video directly. Several challenges are addressed, including: (1) estimation of an appropriate scale factor; and (2) compensation for the rolling shutter effect. Experimental validation is carried out to validate the proposed approach. The experimental results demonstrate the efficacy and significant potential of the proposed approach. PMID:28891985

  10. Using Video Self-Analysis to Improve the "Withitness" of Student Teachers

    ERIC Educational Resources Information Center

    Snoeyink, Rick

    2010-01-01

    Although video self-analysis has been used for years in teacher education, the camera has almost always focused on the preservice teacher. In this study, the researcher videotaped eight preservice teachers four times each during their student-teaching internships. One camera was focused on them while another was focused on their students. Their…

  11. Lights, Camera, Action! Using Video Recordings to Evaluate Teachers

    ERIC Educational Resources Information Center

    Petrilli, Michael J.

    2011-01-01

    Teachers and their unions do not want test scores to count for everything; classroom observations are key, too. But planning a couple of visits from the principal is hardly sufficient. These visits may "change the teacher's behavior"; furthermore, principals may not be the best judges of effective teaching. So why not put video cameras in…

  12. Jack & the Video Camera

    ERIC Educational Resources Information Center

    Charlan, Nathan

    2010-01-01

    This article narrates how the use of video camera has transformed the life of Jack Williams, a 10-year-old boy from Colorado Springs, Colorado, who has autism. The way autism affected Jack was unique. For the first nine years of his life, Jack remained in his world, alone. Functionally non-verbal and with motor skill problems that affected his…

  13. 78 FR 17939 - Announcement of Funding Awards; Capital Fund Safety and Security Grants; Fiscal Year 2012

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-25

    ... publishing the names, addresses, and amounts of the 18 awards made under the set aside in Appendix A to this... Security Camera Harrison Street, Oakland, CA Surveillance System 94612. including digital video recorders... Cameras, 50 Lincoln Plaza, Wilkes-Barre, Network Video PA 18702. Recorders, and Lighting. Ft. Worth...

  14. The California All-sky Meteor Surveillance (CAMS) System

    NASA Astrophysics Data System (ADS)

    Gural, P. S.

    2011-01-01

    A unique next generation multi-camera, multi-site video meteor system is being developed and deployed in California to provide high accuracy orbits of simultaneously captured meteors. Included herein is a description of the goals, concept of operations, hardware, and software development progress. An appendix contains a meteor camera performance trade study made for video systems circa 2010.

  15. Photometric Calibration of Consumer Video Cameras

    NASA Technical Reports Server (NTRS)

    Suggs, Robert; Swift, Wesley, Jr.

    2007-01-01

    Equipment and techniques have been developed to implement a method of photometric calibration of consumer video cameras for imaging of objects that are sufficiently narrow or sufficiently distant to be optically equivalent to point or line sources. Heretofore, it has been difficult to calibrate consumer video cameras, especially in cases of image saturation, because they exhibit nonlinear responses with dynamic ranges much smaller than those of scientific-grade video cameras. The present method not only takes this difficulty in stride but also makes it possible to extend effective dynamic ranges to several powers of ten beyond saturation levels. The method will likely be primarily useful in astronomical photometry. There are also potential commercial applications in medical and industrial imaging of point or line sources in the presence of saturation. This development was prompted by the need to measure brightnesses of debris in amateur video images of the breakup of the Space Shuttle Columbia. The purpose of these measurements is to use the brightness values to estimate relative masses of debris objects. In most of the images, the brightness of the main body of Columbia was found to exceed the dynamic ranges of the cameras. A similar problem arose a few years ago in the analysis of video images of Leonid meteors. The present method is a refined version of the calibration method developed to solve the Leonid calibration problem. In this method, one performs an end-to-end calibration of the entire imaging system, including not only the imaging optics and imaging photodetector array but also analog tape recording and playback equipment (if used) and any frame grabber or other analog-to-digital converter (if used). To automatically incorporate the effects of nonlinearity and any other distortions into the calibration, the calibration images are processed in precisely the same manner as are the images of meteors, space-shuttle debris, or other objects that one seeks to analyze. The light source used to generate the calibration images is an artificial variable star comprising a Newtonian collimator illuminated by a light source modulated by a rotating variable neutral-density filter. This source acts as a point source, the brightness of which varies at a known rate. A video camera to be calibrated is aimed at this source. Fixed neutral-density filters are inserted in or removed from the light path as needed to make the video image of the source appear to fluctuate between dark and saturated bright. The resulting video-image data are analyzed by use of custom software that determines the integrated signal in each video frame and determines the system response curve (measured output signal versus input brightness). These determinations constitute the calibration, which is thereafter used in automatic, frame-by-frame processing of the data from the video images to be analyzed.
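
    The heart of the method is an empirically measured response curve (integrated signal versus known input brightness) that is inverted to photometer the objects of interest. A sketch of that inversion, with a made-up response function standing in for real calibration frames:

    ```python
    import numpy as np

    # Stand-in calibration ramp: known source brightnesses and the (nonlinear)
    # integrated signals the camera system would report for them.
    known_brightness = np.linspace(1.0, 1000.0, 200)
    measured_signal = 4000 * np.log10(known_brightness) + 50  # assumed response

    def photometer(integrated_signal: float) -> float:
        """Invert the measured response curve to recover source brightness;
        np.interp requires an increasing signal axis, which this monotone
        stand-in response satisfies."""
        return float(np.interp(integrated_signal, measured_signal, known_brightness))

    print(photometer(8050.0))  # -> ~100.0 with the stand-in response
    ```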

  16. A passive terahertz video camera based on lumped element kinetic inductance detectors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rowe, Sam, E-mail: sam.rowe@astro.cf.ac.uk; Pascale, Enzo; Doyle, Simon

    We have developed a passive 350 GHz (850 μm) video-camera to demonstrate lumped element kinetic inductance detectors (LEKIDs)—designed originally for far-infrared astronomy—as an option for general purpose terrestrial terahertz imaging applications. The camera currently operates at a quasi-video frame rate of 2 Hz with a noise equivalent temperature difference per frame of ∼0.1 K, which is close to the background limit. The 152 element superconducting LEKID array is fabricated from a simple 40 nm aluminum film on a silicon dielectric substrate and is read out through a single microwave feedline with a cryogenic low noise amplifier and room temperature frequency domain multiplexing electronics.

  17. [Surgery using master-slave manipulators and telementoring].

    PubMed

    Furukawa, T; Wakabayashi, G; Ozawa, S; Watanabe, M; Ohgami, M; Kitagawa, Y; Ishii, S; Arisawa, Y; Ohmori, T; Nohga, K; Kitajima, M

    2000-03-01

    Master-slave manipulators enhance surgeons' dexterity and improve the precision of surgical techniques by filtering out surgeons' tremors and scaling the movements of surgical instruments. Among clinically available master-slave manipulators, the epoch-making system called "da Vinci" developed by Intuitive Surgical Inc. (Mountain View, CA, USA), equipped with two articulated joints at the tip of the surgical instruments allowing 7 degrees of freedom, mimics the movements of surgeons' wrists and fingers in the abdominal or thoracic cavity. Today, advanced telecommunications technology provides excellent motion images using only three ISDN telephone lines. Experienced surgeons at primary surgical sites have been able to perform complex procedures successfully by consulting specialists at remote sites. Because telecommunications costs become lower each year, telementoring will become a routine surgical practice in the near future. The usefulness of surgical telementoring has been greatly enhanced by the development of a technique for drawing illustrations on video images from two directions. Moreover, remote advisory surgeons will be able to provide the optimal operative field to operating surgeons using robotic camera holders with voice-recognition systems. In the near future, when master-slave manipulators are also coupled with telementoring systems, remote experts could actually perform complex surgical procedures.

  18. Standardized access, display, and retrieval of medical video

    NASA Astrophysics Data System (ADS)

    Bellaire, Gunter; Steines, Daniel; Graschew, Georgi; Thiel, Andreas; Bernarding, Johannes; Tolxdorff, Thomas; Schlag, Peter M.

    1999-05-01

    The system presented here enhances documentation and data-secured second-opinion facilities by integrating video sequences into DICOM 3.0. We present an implementation of a medical video server extended by a DICOM interface. Security mechanisms conforming with DICOM are integrated to enable secure internet access. Digital video documents of diagnostic and therapeutic procedures should be examined regarding the clip length and size necessary for second opinion and manageable with today's hardware. Image sources relevant for this paper include the 3D laparoscope, 3D surgical microscope, 3D open surgery camera, synthetic video, monoscopic endoscopes, etc. The global DICOM video concept and three special workplaces for distinct applications are described. Additionally, an approach is presented to analyze the motion of the endoscopic camera for future automatic video cutting. Digital stereoscopic video sequences (DSVS) are especially in demand for surgery; therefore DSVS are also integrated into the DICOM video concept. Results are presented describing the suitability of stereoscopic display techniques for the operating room.

  19. Synchronization of video recording and laser pulses including background light suppression

    NASA Technical Reports Server (NTRS)

    Kalshoven, Jr., James E. (Inventor); Tierney, Jr., Michael (Inventor); Dabney, Philip W. (Inventor)

    2004-01-01

    An apparatus for and a method of triggering a pulsed light source, in particular a laser light source, for predictable capture of the source by video equipment. A frame synchronization signal is derived from the video signal of a camera to trigger the laser and position the resulting laser light pulse in the appropriate field of the video frame and during the opening of the electronic shutter, if such a shutter is included in the camera. Positioning of the laser pulse in the proper video field allows, after recording, for the viewing of the laser light image with a video monitor using the pause mode on a standard cassette-type VCR. This invention also allows for fine positioning of the laser pulse to fall within the electronic shutter opening. For cameras with externally controllable electronic shutters, the invention provides for background light suppression by increasing shutter speed during the frame in which the laser light image is captured. This results in the laser light appearing in one frame in which the background scene is suppressed with the laser light being unaffected, while in all other frames the shutter speed is slower, allowing for the normal recording of the background scene. This invention also allows for arbitrary (manual or external) triggering of the laser with full video synchronization and background light suppression.

  20. Toward Dietary Assessment via Mobile Phone Video Cameras.

    PubMed

    Chen, Nicholas; Lee, Yun Young; Rabb, Maurice; Schatz, Bruce

    2010-11-13

    Reliable dietary assessment is a challenging yet essential task for determining general health. Existing efforts are manual, require considerable effort, and are prone to underestimation and misrepresentation of food intake. We propose leveraging mobile phones to make this process faster, easier and automatic. Using mobile phones with built-in video cameras, individuals capture short videos of their meals; our software then automatically analyzes the videos to recognize dishes and estimate calories. Preliminary experiments on 20 typical dishes from a local cafeteria show promising results. Our approach complements existing dietary assessment methods to help individuals better manage their diet to prevent obesity and other diet-related diseases.

  1. Astrometric and Photometric Analysis of the September 2008 ATV-1 Re-Entry Event

    NASA Technical Reports Server (NTRS)

    Mulrooney, Mark K.; Barker, Edwin S.; Maley, Paul D.; Beaulieu, Kevin R.; Stokely, Christopher L.

    2008-01-01

    NASA utilized image-intensified video cameras for ATV data acquisition from a jet flying at 12.8 km. Afterwards the video was digitized and then analyzed with a modified commercial software package, Image Systems TrackEye. Astrometric results were limited by saturation, plate scale, and the imposed linear plate solution based on field reference stars. Time-dependent fragment angular trajectories, velocities, accelerations, and luminosities were derived in each video segment. It was evident that individual fragments behave differently. Photometric accuracy was insufficient to confidently assess correlations between luminosity and fragment spatial behavior (velocity, deceleration). Use of high-resolution digital video cameras in the future should remedy this shortcoming.
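
    A linear plate solution of the kind imposed here is a six-parameter affine fit from the pixel positions of field reference stars to their standard (tangent-plane) coordinates. A least-squares sketch with placeholder arrays in place of real star measurements:

    ```python
    import numpy as np

    px = np.random.rand(10, 2)    # star centroids (x, y) in pixels (placeholder)
    std = np.random.rand(10, 2)   # standard coordinates (xi, eta) (placeholder)

    A = np.column_stack([px, np.ones(len(px))])   # design matrix [x, y, 1]
    coeffs, *_ = np.linalg.lstsq(A, std, rcond=None)

    def pixel_to_standard(xy):
        """Apply the fitted 6-parameter plate solution to a pixel position."""
        return np.append(np.asarray(xy, float), 1.0) @ coeffs

    print(pixel_to_standard([0.5, 0.5]))
    ```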

  2. Secure Video Surveillance System Acquisition Software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2009-12-04

    The SVSS Acquisition Software collects and displays video images from two cameras through a VPN and stores the images on a collection controller. The software is configured to allow a user to enter a time window to display up to 2 1/2 hours of video for review. The software collects images from the cameras at a rate of 1 image per second and automatically deletes images older than 3 hours. The software operates in a Linux environment and can be run in a virtual machine on Windows XP. The Sandia software integrates the different COTS software packages to build the video review system.
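
    A minimal sketch of the stated acquisition and retention behavior (one image per second, frames older than three hours deleted automatically); the actual Sandia code is not shown here, and capture_frame is a hypothetical stand-in:

    ```python
    import time
    from pathlib import Path

    ARCHIVE = Path("/var/svss/frames")   # assumed storage location
    MAX_AGE_S = 3 * 3600                 # keep three hours of imagery

    def prune_old_frames() -> None:
        """Delete any stored frame older than the retention window."""
        now = time.time()
        for f in ARCHIVE.glob("*.jpg"):
            if now - f.stat().st_mtime > MAX_AGE_S:
                f.unlink()

    ARCHIVE.mkdir(parents=True, exist_ok=True)
    while True:
        # capture_frame(camera_id) would grab one JPEG per camera over the VPN
        prune_old_frames()
        time.sleep(1.0)   # one acquisition cycle per second
    ```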

  3. What do we do with all this video? Better understanding public engagement for image and video annotation

    NASA Astrophysics Data System (ADS)

    Wiener, C.; Miller, A.; Zykov, V.

    2016-12-01

    Advanced robotic vehicles are increasingly being used by oceanographic research vessels to enable more efficient and widespread exploration of the ocean, particularly the deep ocean. With cutting-edge capabilities mounted onto robotic vehicles, data at high resolutions is being generated more than ever before, enabling enhanced data collection and the potential for broader participation. For example, high resolution camera technology not only improves visualization of the ocean environment, but also expands the capacity to engage participants remotely through increased use of telepresence and virtual reality techniques. Schmidt Ocean Institute is a private, non-profit operating foundation established to advance the understanding of the world's oceans through technological advancement, intelligent observation and analysis, and open sharing of information. Telepresence-enabled research is an important component of Schmidt Ocean Institute's science research cruises, which this presentation will highlight. Schmidt Ocean Institute is one of the only research programs that make their entire underwater vehicle dive series available online, creating a collection of video that enables anyone to follow deep sea research in real time. We encourage students, educators and the general public to take advantage of freely available dive videos. Additionally, other SOI-supported internet platforms, have engaged the public in image and video annotation activities. Examples of these new online platforms, which utilize citizen scientists to annotate scientific image and video data will be provided. This presentation will include an introduction to SOI-supported video and image tagging citizen science projects, real-time robot tracking, live ship-to-shore communications, and an array of outreach activities that enable scientists to interact with the public and explore the ocean in fascinating detail.

  4. Remote stereoscopic video play platform for naked eyes based on the Android system

    NASA Astrophysics Data System (ADS)

    Jia, Changxin; Sang, Xinzhu; Liu, Jing; Cheng, Mingsheng

    2014-11-01

    As people's quality of life has improved significantly, traditional 2D video technology can no longer meet people's desire for better video quality, which has led to the rapid development of 3D video technology. At the same time, people want to watch 3D video on portable devices. To achieve this, we set up a remote stereoscopic video play platform. The platform consists of a server and clients. The server is used for transmission of different formats of video, and the client is responsible for receiving remote video for subsequent decoding and pixel restructuring. We utilize and improve Live555 as the video transmission server. Live555 is a cross-platform open-source project which provides solutions for streaming media, such as the RTSP protocol, and supports transmission of multiple video formats. At the receiving end, we use our laboratory's own player. The player for Android, which has all the basic functions of an ordinary player and is able to play normal 2D video, is the basic structure for redevelopment; RTSP is implemented in this structure for telecommunication. In order to achieve stereoscopic display, we perform pixel rearrangement in the player's decoding part. The decoding part is native code called through the JNI interface so that we can extract video frames more efficiently. The video formats that we process are left-and-right, up-and-down, and nine-grid. In the design and development, a number of key technologies from Android application development have been employed, including wireless transmission, pixel restructuring, and JNI calls. After some updates and optimizations, the video player can play remote 3D video well anytime and anywhere and meet people's requirements.
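
    One common pixel-restructuring scheme for the "left-and-right" (side-by-side) format is column interleaving for an autostereoscopic panel: even display columns take the left view and odd columns the right view. The sketch below is an assumption about the panel layout, since real lenticular arrangements differ:

    ```python
    import numpy as np

    def rearrange_side_by_side(frame: np.ndarray) -> np.ndarray:
        """Column-interleave a side-by-side stereo frame for a hypothetical
        column-interleaved naked-eye 3D display."""
        h, w, c = frame.shape
        left, right = frame[:, : w // 2], frame[:, w // 2:]
        out = np.empty_like(frame)
        out[:, 0::2] = left    # even columns <- left view
        out[:, 1::2] = right   # odd columns  <- right view
        return out

    decoded = np.zeros((1080, 1920, 3), dtype=np.uint8)  # placeholder frame
    print(rearrange_side_by_side(decoded).shape)         # (1080, 1920, 3)
    ```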

  5. Teleneonatology: a major tool for the future.

    PubMed

    Minton, Stephen; Allan, Mark; Valdes, Wesley

    2014-02-01

    Hospitals have, for centuries, maintained a central position in the health care system, providing care for critically ill patients. Despite being a cornerstone of health care delivery, we are witnessing the beginning of a major transformation in their function. There are several forces driving this transformation, including health care costs, a shortage of health care professionals, the volume of people with chronic diseases, consumerism, health care reform, and hospital errors. The neonatal intensive care unit (NICU) at Utah Valley Regional Medical Center in Provo, Utah, began an aggressive redesign/quality improvement effort in 1990. It became obvious that our care processes were designed for health care deliverers and not for the families. An ongoing revamp of our care delivery processes was undertaken using significant input from a parent focus meeting, parental interviews, and development of a parent-to-parent support group. As a result of this work, it became obvious we needed a new model to truly empower parents. The idea of "NICU is Home" was born. We elected to make a mind shift, not to focus on what families think, but rather on how they think. Web cams and other video apparatus have been used in a number of NICUs across the country. We decided our equipment requirements would need to include high-resolution cameras, full high-definition video recording, autofocus, audio microphones, automatic noise reduction, and automatic low-light correction. Our conferencing software needed to accommodate multiple users and have multiple-picture capabilities, low bandwidth, and inexpensive technology. It was recognized that a single video camera feed was insufficient to adequately capture the desired amount of information. Verbal communication between parents and their babies' principal care providers is critical. Parents loved the idea of expanding the remote NICU web cam of their baby to a two-way physician-parent communication bedside monitor. Doctors at Utah Valley Regional Medical Center now have a mobile desk using a WiFi computer/camera/audio to communicate with the family in real time or leave a recording. Copyright 2014, SLACK Incorporated.

  6. Estimating the Infrared Radiation Wavelength Emitted by a Remote Control Device Using a Digital Camera

    ERIC Educational Resources Information Center

    Catelli, Francisco; Giovannini, Odilon; Bolzan, Vicente Dall Agnol

    2011-01-01

    The interference fringes produced by a diffraction grating illuminated with radiation from a TV remote control and a red laser beam are, simultaneously, captured by a digital camera. Based on an image with two interference patterns, an estimate of the infrared radiation wavelength emitted by a TV remote control is made. (Contains 4 figures.)
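
    In outline (our notation, not the article's), the estimate rests on the grating equation: a fringe of order m satisfies the relation below, and at small diffraction angles the fringe spacing on the sensor is proportional to wavelength, so the known red laser line calibrates the unknown infrared line:

        \[ d\sin\theta_m = m\lambda \quad\Longrightarrow\quad \lambda_{\mathrm{IR}} \approx \lambda_{\mathrm{red}}\,\frac{x_{\mathrm{IR}}}{x_{\mathrm{red}}}, \]

    where d is the grating spacing and x denotes the fringe spacing measured, in pixels, in the captured image.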

  7. Intermediate view synthesis algorithm using mesh clustering for rectangular multiview camera system

    NASA Astrophysics Data System (ADS)

    Choi, Byeongho; Kim, Taewan; Oh, Kwan-Jung; Ho, Yo-Sung; Choi, Jong-Soo

    2010-02-01

    A multiview video-based three-dimensional (3-D) video system offers a realistic impression and free view navigation to the user. Efficient compression and intermediate view synthesis are key technologies, since 3-D video systems deal with multiple views. We propose an intermediate view synthesis method using a rectangular multiview camera system that is well suited to realizing 3-D video systems. The rectangular multiview camera system not only offers free view navigation both horizontally and vertically but also provides three reference views, namely left, right, and bottom, for intermediate view synthesis. The proposed view synthesis method first represents each reference view as a mesh and then finds the best disparity for each mesh element by stereo matching between reference views. Before stereo matching, we separate the virtual image to be synthesized into several regions to enhance the accuracy of the disparities. The mesh elements are classified into foreground and background groups by disparity value and then affine transformed. Experiments confirm that the proposed method synthesizes high-quality images and is suitable for 3-D video systems.
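
    As a schematic illustration of the grouping step (our sketch; the mean threshold here is a stand-in, not the paper's classification rule), mesh elements can be split by their matched disparities:

        import numpy as np

        def classify_meshes(disparities):
            """disparities: one best-match disparity per mesh element."""
            thresh = disparities.mean()  # stand-in decision boundary
            return np.where(disparities > thresh, "foreground", "background")

        print(classify_meshes(np.array([9.5, 8.7, 1.0, 0.8])))

    Elements labelled foreground (large disparity, hence close to the cameras) and background would then receive separate affine transforms.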

  8. Estimation of wave phase speed and nearshore bathymetry from video imagery

    USGS Publications Warehouse

    Stockdon, H.F.; Holman, R.A.

    2000-01-01

    A new remote sensing technique based on video image processing has been developed for the estimation of nearshore bathymetry. The shoreward propagation of waves is measured using pixel intensity time series collected at a cross-shore array of locations using remotely operated video cameras. The incident band is identified, and the cross-spectral matrix is calculated for this band. The cross-shore component of wavenumber is found as the gradient in phase of the first complex empirical orthogonal function of this matrix. Water depth is then inferred from linear wave theory's dispersion relationship. Full bathymetry maps may be measured by collecting data in a large array composed of both cross-shore and longshore lines. Data are collected hourly throughout the day, and a stable, daily estimate of bathymetry is calculated from the median of the hourly estimates. The technique was tested using 30 days of hourly data collected at the SandyDuck experiment in Duck, North Carolina, in October 1997. Errors calculated as the difference between estimated depth and ground truth data show a mean bias of -35 cm (rms error = 91 cm). Expressed as a fraction of the true water depth, the mean percent error was 13% (rms error = 34%). Excluding the region of known wave nonlinearities over the bar crest, the accuracy of the technique improved, and the mean (rms) error was -20 cm (75 cm). Additionally, under low-amplitude swells (wave height H ≤ 1 m), the performance of the technique across the entire profile improved to 6% (29%) of the true water depth with a mean (rms) error of -12 cm (71 cm). Copyright 2000 by the American Geophysical Union.
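
    The depth-inversion step can be sketched as follows (a minimal illustration, not the authors' code): given the incident-band angular frequency and the wavenumber estimated from the video phase gradient, solve the linear dispersion relation for depth.

        import numpy as np
        from scipy.optimize import brentq

        G = 9.81  # gravitational acceleration [m/s^2]

        def invert_depth(sigma, k):
            """sigma: angular frequency [rad/s]; k: wavenumber [rad/m]."""
            f = lambda h: sigma**2 - G * k * np.tanh(k * h)
            return brentq(f, 1e-3, 100.0)  # depth bracketed in [1 mm, 100 m]

        # Example: a 10 s swell observed with k = 0.05 rad/m
        print(invert_depth(2 * np.pi / 10.0, 0.05))  # about 22 m of water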

  9. Visualizing the history of living spaces.

    PubMed

    Ivanov, Yuri; Wren, Christopher; Sorokin, Alexander; Kaur, Ishwinder

    2007-01-01

    The technology available to building designers now makes it possible to monitor buildings on a very large scale. Video cameras and motion sensors are commonplace in practically every office space, and are slowly making their way into living spaces. The application of such technologies, in particular video cameras, while improving security, also violates privacy. On the other hand, motion sensors, while being privacy-conscious, typically do not provide enough information for a human operator to maintain the same degree of awareness about the space that can be achieved by using video cameras. We propose a novel approach in which we use a large number of simple motion sensors and a small set of video cameras to monitor a large office space. In our system we deployed 215 motion sensors and six video cameras to monitor the 3,000-square-meter office space occupied by 80 people for a period of about one year. The main problem in operating such systems is finding a way to present this highly multidimensional data, which includes both spatial and temporal components, to a human operator to allow browsing and searching recorded data in an efficient and intuitive way. In this paper we present our experiences and the solutions that we have developed in the course of our work on the system. We consider this work to be the first step in helping designers and managers of building systems gain access to information about occupants' behavior in the context of an entire building in a way that is only minimally intrusive to the occupants' privacy.

  10. A Low-Cost Tele-Imaging Platform for Developing Countries

    PubMed Central

    Adambounou, Kokou; Adjenou, Victor; Salam, Alex P.; Farin, Fabien; N’Dakena, Koffi Gilbert; Gbeassor, Messanvi; Arbeille, Philippe

    2014-01-01

    Purpose: To design a "low-cost" tele-imaging method allowing real-time tele-ultrasound expertise, delayed tele-ultrasound diagnosis, and tele-radiology between remote peripheral hospitals and clinics (patient centers) and university hospital centers (expert center). Materials and methods: A system of communication via the internet (IP camera and remote access software) enabling the transfer of ultrasound videos and images between two centers allows real-time tele-ultrasound expertise in the presence of a junior sonographer or radiologist at the patient center. In the absence of a sonographer or radiologist at the patient center, a 3D reconstruction program allows a delayed tele-ultrasound diagnosis with images acquired by a lay operator (e.g., midwife, nurse, technician). The system was tested with both high and low bandwidth, and it can further accommodate non-ultrasound tele-radiology (conventional radiography, mammography, and computed tomography, for example). The system was tested on 50 patients between CHR Tsevie in Togo (40 km from Lomé, Togo, and 4500 km from Tours, France) and CHU Campus in Lomé and CHU Trousseau in Tours. Results: Real-time tele-expertise was successfully performed with a delay of approximately 1.5 s over an internet bandwidth of around 1 Mbps (IP camera) and 512 kbps (remote access software). Delayed tele-ultrasound diagnosis was also performed with satisfactory results. The transmission of radiological images from the patient center to the expert center was of adequate quality. Delayed tele-ultrasound and tele-radiology were possible even over a low-bandwidth internet connection. Conclusion: This tele-imaging method, requiring nothing but readily available and inexpensive technology and equipment, offers a major opportunity for telemedicine in developing countries. PMID:25250306

  11. A Pilot Study to Improve Access to Eye Care Services for Patients in Rural India by Implementing Community Ophthalmology through Innovative Telehealth Technology.

    PubMed

    John, Sheila; Premila, M; Javed, Mohd; Vikas, G; Wagholikar, Amol

    2015-01-01

    To report on a unique, first-of-its-kind telehealth pilot study in India that provided virtual telehealth consultation to eye care patients in low-resource remote villages. Providing access to eye care services for remote populations is always challenging for practical reasons. Advances in telehealth technologies have provided an opportunity to improve access for remote populations; however, current telehealth technologies are largely limited to face-to-face video consultation. We report on a pilot study that gives ophthalmologists real-time imaging access. Our innovative software-led technology solution allowed screening of patients with varying ocular conditions. Eye camps were conducted in 2 districts in South India over a 12-month period in 2014. A total of 196 eye camps were conducted, attended by 19,634 patients. Innovative software was used to conduct consultations with the ophthalmologist located in the city hospital; the software enabled a virtual visit and allowed instant sharing of fundus camera images for assessment and diagnosis. About 71% of the patients were found to have refractive error, 15% were found to have cataract, 7% were diagnosed with retinal problems, and 7% were found to have other ocular diseases. Patients requiring cataract surgery were immediately transferred to the city hospital for treatment. Software-led assessment of fundus camera images assisted in identifying retinal eye diseases. Our real-time virtual visit software assisted in specialist care provision and illustrates a novel telehealth solution for low-resource populations.

  12. Nyquist Sampling Theorem: Understanding the Illusion of a Spinning Wheel Captured with a Video Camera

    ERIC Educational Resources Information Center

    Levesque, Luc

    2014-01-01

    Inaccurate measurements occur regularly in data acquisition as a result of improper sampling times. An understanding of proper sampling times when collecting data with an analogue-to-digital converter or video camera is crucial in order to avoid anomalies. A proper choice of sampling times should be based on the Nyquist sampling theorem. If the…
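
    As a concrete illustration of the theorem in this setting (our example, not the article's), a camera sampling at f_s frames per second folds any true rotation rate into the band [-f_s/2, f_s/2), which is why a wheel can appear stationary or spinning backwards:

        def apparent_rate(true_rev_per_s, fps=30.0):
            """Rotation rate as it appears in video sampled at fps."""
            return (true_rev_per_s + fps / 2) % fps - fps / 2

        print(apparent_rate(29.0))  # -1.0: seems to spin slowly backwards
        print(apparent_rate(30.0))  #  0.0: appears frozen
        print(apparent_rate(14.0))  # 14.0: below Nyquist, shown correctly

    For a wheel with N identical spokes the effective rate is N times the rotation rate, so aliasing sets in at correspondingly lower speeds.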

  13. Very low cost real time histogram-based contrast enhancer utilizing fixed-point DSP processing

    NASA Astrophysics Data System (ADS)

    McCaffrey, Nathaniel J.; Pantuso, Francis P.

    1998-03-01

    A real-time contrast enhancement system utilizing histogram-based algorithms has been developed to operate on standard composite video signals. This low-cost DSP-based system is designed with fixed-point algorithms and an off-chip look-up table (LUT) to reduce cost considerably compared with other contemporary approaches. This paper describes several real-time contrast enhancing systems developed at the Sarnoff Corporation for high-speed visible and infrared cameras; the fixed-point enhancer was derived from these high-performance cameras. The enhancer digitizes analog video and spatially subsamples the stream to characterize the scene's luminance. Simultaneously, the video is streamed through a LUT that has been programmed with the previous calculation. Reducing division operations by subsampling cuts calculation cycles and also allows the processor to be used with cameras of nominal resolutions. All values are written to the LUT during blanking, so no frames are lost. The enhancer measures 13 cm x 6.4 cm x 3.2 cm, operates off 9 VAC, and consumes 12 W. This processor is small and inexpensive enough to be mounted with field-deployed security cameras and can be used for surveillance, video forensics, and real-time medical imaging.
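
    The core of such an enhancer can be sketched in a few lines (our fixed-point-flavoured illustration, not the Sarnoff implementation): build an equalizing look-up table from a spatially subsampled luminance histogram, then stream every pixel through it.

        import numpy as np

        def build_lut(frame_y, step=4):
            """frame_y: 8-bit luminance image; step: subsampling stride."""
            sample = frame_y[::step, ::step]       # subsample to save cycles
            hist = np.bincount(sample.ravel(), minlength=256)
            cdf = np.cumsum(hist)
            span = max(int(cdf.max() - cdf.min()), 1)
            return ((cdf - cdf.min()) * 255 // span).astype(np.uint8)

        def enhance(frame_y, lut):
            return lut[frame_y]                    # one LUT read per pixel

    Updating the table only during blanking, as described above, is what lets the device keep up with the video rate without dropping frames.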

  14. Texton-based super-resolution for achieving high spatiotemporal resolution in hybrid camera system

    NASA Astrophysics Data System (ADS)

    Kamimura, Kenji; Tsumura, Norimichi; Nakaguchi, Toshiya; Miyake, Yoichi

    2010-05-01

    Many super-resolution methods have been proposed to enhance the spatial resolution of images by using iteration and multiple input images. In a previous paper, we proposed an example-based super-resolution method that enhances an image through pixel-based texton substitution to reduce the computational cost. That method, however, considered only the enhancement of texture images. In this study, we modified the texton substitution method for a hybrid camera to reduce the required bandwidth of a high-resolution video camera. We applied our algorithm to pairs of high- and low-spatiotemporal-resolution videos, which were synthesized to simulate a hybrid camera. The results showed that fine detail of the low-resolution video could be reproduced, unlike with bicubic interpolation, and that the required bandwidth of the video camera could be reduced to about one-fifth. The peak signal-to-noise ratios (PSNRs) of the images improved by about 6 dB in a trained frame and by 1.0-1.5 dB in a test frame, as determined by comparison with images processed using bicubic interpolation, and the average PSNRs were higher than those obtained with Freeman's well-known patch-based super-resolution method, while the computational time of our method was reduced to almost one-tenth of that method's.
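
    The PSNR figures quoted are the standard measure; for reference (a routine definition, not code from the paper):

        import numpy as np

        def psnr(ref, test, peak=255.0):
            """Peak signal-to-noise ratio between two same-sized images."""
            mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
            return float("inf") if mse == 0 else 10.0 * np.log10(peak**2 / mse)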

  15. 3-dimensional telepresence system for a robotic environment

    DOEpatents

    Anderson, Matthew O.; McKay, Mark D.

    2000-01-01

    A telepresence system includes a camera pair remotely controlled by a control module affixed to an operator. The camera pair provides three-dimensional viewing, and the control module, affixed to the operator, affords hands-free operation of the camera pair. In one embodiment, the control module is affixed to the head of the operator and an initial position is established. A triangulating device is provided to track the head movement of the operator relative to the initial position. A processor module receives input from the triangulating device to determine where the operator has moved relative to the initial position and moves the camera pair in response. The movement of the camera pair is predetermined by a software map having a plurality of operation zones, each zone corresponding to unique camera movement parameters such as speed of movement; speed may be constant, increasing, or decreasing. Other parameters include panning, tilting, sliding, raising, or lowering the cameras. Other user interface devices are provided to improve the three-dimensional control capabilities of an operator in a local operating environment, including a pair of visual display glasses, a microphone, and a remote actuator. The visual display glasses facilitate three-dimensional viewing, hence depth perception. The microphone affords hands-free camera movement through voice commands. The actuator allows the operator to remotely control various robotic mechanisms in the remote operating environment.

  16. Portable telepathology: methods and tools.

    PubMed

    Alfaro, Luis; Roca, Ma José

    2008-07-15

    Telepathology is becoming easier to implement in most pathology departments; indeed, e-mail image transmission can be performed by almost any pathologist as a simplistic telepathology system. We tried to develop a way to improve communication capabilities among pathologists with the idea that the system should be affordable for everybody. We took the premise that any pathology department would have microscopes and computers with an Internet connection, and selected a few elements to convert them into a telepathology station. The needs were reduced to a camera to collect images, a universal microscope adapter for the camera, a device to connect the camera to the computer, and software for remote image transmission. We found a microscope adapter (MaxView Plus) that allowed us to connect almost any consumer digital camera to any microscope. The video-out signal from the camera was sent to the computer through an AVerMedia USB connector. Finally, we selected a group of portable applications that were assembled onto a USB memory device. Portable applications are computer programs that can be carried, generally on USB flash drives but also on any other portable device, and used on any (Windows) computer without installation. Moreover, when the device is unplugged, no personal data is left behind. We selected open-source applications, and based the pathology image transmission on VLC Media Player because of its functionality as a streaming server, its portability, and its ease of use and configuration. Audio transmission was usually done through normal phone lines. We also employed alternative videoconferencing software, SightSpeed, for bi-directional image transmission from microscopes, and conventional cameras allowing visual communication and also image transmission from gross pathology specimens. All these elements allowed us to install and use a telepathology system in a few minutes, fully prepared for real-time image broadcast.

  17. Parallel-Processing Software for Creating Mosaic Images

    NASA Technical Reports Server (NTRS)

    Klimeck, Gerhard; Deen, Robert; McCauley, Michael; DeJong, Eric

    2008-01-01

    A computer program implements parallel processing for nearly real-time creation of panoramic mosaics of images of terrain acquired by video cameras on an exploratory robotic vehicle (e.g., a Mars rover). Because the original images are typically acquired at various camera positions and orientations, it is necessary to warp the images into the reference frame of the mosaic before stitching them together to create the mosaic. [Also see "Parallel-Processing Software for Correlating Stereo Images," Software Supplement to NASA Tech Briefs, Vol. 31, No. 9 (September 2007) page 26.] The warping algorithm in this computer program reflects the considerations that (1) for every pixel in the desired final mosaic, a good corresponding point must be found in one or more of the original images and (2) for this purpose, one needs a good mathematical model of the cameras and a good correlation of individual pixels with respect to their positions in three dimensions. The desired mosaic is divided into slices, each of which is assigned to one of a number of central processing units (CPUs) operating simultaneously. The results from the CPUs are gathered and placed into the final mosaic. The time taken to create the mosaic depends upon the number of CPUs, the speed of each CPU, and whether a local or a remote data-staging mechanism is used.
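
    Schematically, the slice-parallel structure looks like this (our Python sketch; the actual warping through the camera model is elided as a placeholder):

        from multiprocessing import Pool
        import numpy as np

        def render_slice(args):
            row_start, row_stop, width = args
            # Placeholder: the real pipeline traces each output pixel back
            # through the camera model into one or more source images.
            return np.zeros((row_stop - row_start, width), dtype=np.uint8)

        def build_mosaic(height=1024, width=4096, n_workers=4):
            bounds = np.linspace(0, height, n_workers + 1, dtype=int)
            jobs = [(bounds[i], bounds[i + 1], width) for i in range(n_workers)]
            with Pool(n_workers) as pool:
                slices = pool.map(render_slice, jobs)   # one slice per CPU
            return np.vstack(slices)

        if __name__ == "__main__":
            mosaic = build_mosaic()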

  18. Structural analysis of color video camera installation on tank 241AW101 (2 Volumes)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Strehlow, J.P.

    1994-08-24

    A video camera is planned to be installed on the radioactive storage tank 241AW101 at the DOE's Hanford Site in Richland, Washington. The camera will occupy the 20-inch port of the Multiport Flange riser which is to be installed on riser 5B of tank 241AW101 (3,5,10). The objective of the project reported herein was to perform a seismic analysis and evaluation of the structural components of the camera for a postulated Design Basis Earthquake (DBE) per the reference Structural Design Specification (SDS) document (6). The detail of supporting engineering calculations is documented in URS/Blume Calculation No. 66481-01-CA-03 (1).

  19. Streaming Audio and Video: New Challenges and Opportunities for Museums.

    ERIC Educational Resources Information Center

    Spadaccini, Jim

    Streaming audio and video present new challenges and opportunities for museums. Streaming media is easier to author and deliver to Internet audiences than ever before; digital video editing is commonplace now that the tools--computers, digital video cameras, and hard drives--are so affordable; the cost of serving video files across the Internet…

  1. Mobile teledermatopathology: using a tablet PC as a novel and cost-efficient method to remotely diagnose dermatopathology cases.

    PubMed

    Speiser, Jodi J; Hughes, Ian; Mehta, Vikas; Wojcik, Eva M; Hutchens, Kelli A

    2014-01-01

    Relatively few studies in dermatopathology have addressed teledermatopathology, and none have addressed the use of new technologies such as the tablet PC. We hypothesized that the combination of our existing dynamic nonrobotic system with a tablet PC could provide a novel and cost-efficient method to remotely diagnose dermatopathology cases. Ninety-three cases diagnosed by conventional light microscopy at least 5 months earlier by the participating dermatopathologist were retrieved by an electronic pathology database search. A high-resolution video camera (Nikon DS-L2, version 4.4) mounted on a microscope was used to transmit digital video of a slide to an Apple iPad 2 (Apple Inc, Cupertino, CA) at the pathologist's remote location via live streaming at an interval time of 500 ms and a resolution of 1280 x 960 pixels. Concordance with the original diagnosis and the seconds elapsed in reaching the diagnosis were recorded. 24.7% (23/93) of cases were melanocytic, 70.9% (66/93) were nonmelanocytic, and 4.4% (4/93) were inflammatory. About 92.5% (86/93) of cases were diagnosed on immediate viewing (<5 seconds), with the average time to diagnosis at 40.2 seconds (range: 10-218 seconds). Of the cases diagnosed immediately, 98.8% (85/86) of the telediagnoses were concordant with the original. Telepathology performed via a tablet PC may serve as a reliable and rapid technique for the diagnosis of routine cases, with some diagnostic caveats in mind. Our study established a novel and cost-efficient solution for institutions that may not have the capital to purchase either a dynamic robotic system or a virtual slide system.

  2. An attentive multi-camera system

    NASA Astrophysics Data System (ADS)

    Napoletano, Paolo; Tisato, Francesco

    2014-03-01

    Intelligent multi-camera systems that integrate computer vision algorithms are not error free, and thus both false positive and false negative detections need to be reviewed by a specialized human operator. Traditional multi-camera systems usually include a control center with a wall of monitors displaying video from each camera in the network. Nevertheless, as the number of cameras increases, switching from one camera to another becomes hard for a human operator. In this work we propose a new method that dynamically selects and displays the content of one video camera from all the available content in the multi-camera system. The proposed method is based on a computational model of human visual attention that integrates top-down and bottom-up cues. We believe that this is the first work that tries to use a model of human visual attention for the dynamic selection of the camera view of a multi-camera system. The proposed method was evaluated in a given scenario and demonstrated its effectiveness with respect to other methods and manually generated ground truth. The effectiveness was evaluated in terms of the number of correct best views generated by the method with respect to the camera views manually selected by a human operator.
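
    The selection policy can be caricatured in a few lines (purely illustrative; the saliency and relevance scorers stand in for the paper's bottom-up and top-down attention cues):

        def best_view(views, saliency, relevance, alpha=0.5):
            """Return the index of the camera view to display."""
            scores = [alpha * saliency(v) + (1 - alpha) * relevance(v)
                      for v in views]
            return max(range(len(views)), key=scores.__getitem__)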

  3. Study of atmospheric discharge characteristics using a standard video camera

    NASA Astrophysics Data System (ADS)

    Ferraz, E. C.; Saba, M. M. F.

    This study presents preliminary statistics on lightning characteristics such as flash multiplicity, number of ground contact points, formation of new and altered channels, and presence of continuing current in the strokes that form a flash. The analysis is based on the images of a standard video camera (30 frames/s). The results obtained for some flashes are compared to the images of a high-speed CCD camera (1000 frames/s). The camera observing site is located in São José dos Campos (23° S, 46° W) at an altitude of 630 m. This observational site has a nearly 360° field of view at a height of 25 m, making it possible to visualize distant thunderstorms occurring within a radius of 25 km from the site. The room, situated over a metal structure, has water and power supplies, a telephone line, and a small crane on the roof. KEY WORDS: Video images, Lightning, Multiplicity, Stroke.

  4. High definition in minimally invasive surgery: a review of methods for recording, editing, and distributing video.

    PubMed

    Kelly, Christopher R; Hogle, Nancy J; Landman, Jaime; Fowler, Dennis L

    2008-09-01

    The use of high-definition cameras and monitors during minimally invasive procedures can provide the surgeon and operating team with more than twice the resolution of standard definition systems. Although this dramatic improvement in visualization offers numerous advantages, the adoption of high definition cameras in the operating room can be challenging because new recording equipment must be purchased, and several new technologies are required to edit and distribute video. The purpose of this review article is to provide an overview of the popular methods for recording, editing, and distributing high-definition video. This article discusses the essential technical concepts of high-definition video, reviews the different kinds of equipment and methods most often used for recording, and describes several options for video distribution.

  5. HDR video synthesis for vision systems in dynamic scenes

    NASA Astrophysics Data System (ADS)

    Shopovska, Ivana; Jovanov, Ljubomir; Goossens, Bart; Philips, Wilfried

    2016-09-01

    High dynamic range (HDR) image generation from a number of differently exposed low dynamic range (LDR) images has been extensively explored in the past few decades, and as a result of these efforts a large number of HDR synthesis methods have been proposed. Since HDR images are synthesized by combining well-exposed regions of the input images, one of the main challenges is dealing with camera or object motion. In this paper we propose a method for the synthesis of HDR video from a single camera using multiple, differently exposed video frames with circularly alternating exposure times. One of the potential applications of the system is in driver assistance systems and autonomous vehicles, which involve significant camera and object movement, non-uniform and temporally varying illumination, and a requirement of real-time performance. To achieve these goals simultaneously, we propose an HDR synthesis approach based on weighted averaging of aligned radiance maps. The computational complexity of high-quality optical flow methods for motion compensation is still prohibitively high for real-time applications. Instead, we rely on more efficient global projective transformations to handle camera movement, while moving objects are detected by thresholding the differences between the transformed and brightness-adapted images in the set. To attain temporal consistency of the camera motion in consecutive HDR frames, the parameters of the perspective transformation are stabilized over time by means of computationally efficient temporal filtering. We evaluated our results on several reference HDR videos, on synthetic scenes, and using 14-bit raw images taken with a standard camera.
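
    The fusion step described above reduces, in its simplest form, to a per-pixel weighted average of exposure-normalized frames (our simplified sketch; alignment and the paper's actual weighting scheme are assumed given):

        import numpy as np

        def fuse_radiance(frames, exposures):
            """frames: aligned float images in [0, 1]; exposures: seconds."""
            num = np.zeros_like(frames[0])
            den = np.zeros_like(frames[0])
            for img, t in zip(frames, exposures):
                w = 1.0 - np.abs(2.0 * img - 1.0)  # favour well-exposed pixels
                num += w * img / t                 # per-frame radiance estimate
                den += w
            return num / np.maximum(den, 1e-6)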

  6. Aerospace Communications Security Technologies Demonstrated

    NASA Technical Reports Server (NTRS)

    Griner, James H.; Martzaklis, Konstantinos S.

    2003-01-01

    In light of the events of September 11, 2001, NASA senior management requested an investigation of technologies and concepts to enhance aviation security. The investigation was to focus on near-term technologies that could be demonstrated within 90 days and implemented in less than 2 years. In response to this request, an internal NASA Glenn Research Center Communications, Navigation, and Surveillance Aviation Security Tiger Team was assembled. The 2-year plan developed by the team included an investigation of multiple aviation security concepts, multiple aircraft platforms, and extensively leveraged datalink communications technologies. It incorporated industry partners from NASA's Graphical Weather-in-the-Cockpit research, which is within NASA's Aviation Safety Program. Two concepts from the plan were selected for demonstration: remote "black box," and cockpit/cabin surveillance. The remote "black box" concept involves real-time downlinking of aircraft parameters for remote monitoring and archiving of aircraft data, which would assure access to the data following the loss or inaccessibility of an aircraft. The cockpit/cabin surveillance concept involves remote audio and/or visual surveillance of cockpit and cabin activity, which would allow immediate response to any security breach and would serve as a possible deterrent to such breaches. The datalink selected for the demonstrations was VDL Mode 2 (VHF digital link), the first digital datalink for air-ground communications designed for aircraft use. VDL Mode 2 is beginning to be implemented through the deployment of ground stations and aircraft avionics installations, with the goal of being operational in 2 years. The first demonstration was performed December 3, 2001, onboard the LearJet 25 at Glenn. NASA worked with Honeywell, Inc., for the broadcast VDL Mode 2 datalink capability and with actual Boeing 757 aircraft data. This demonstration used a cockpit-mounted camera for video surveillance and a coupling to the intercom system for audio surveillance. Audio, video, and "black box" data were simultaneously streamed to the ground, where they were displayed to a Glenn audience of senior management and aviation security team members.

  7. The Lancashire telemedicine ambulance.

    PubMed

    Curry, G R; Harrop, N

    1998-01-01

    An emergency ambulance was equipped with three video cameras and a system for transmitting slow-scan video pictures through a cellular telephone link to a hospital accident and emergency department. Video pictures were transmitted at a resolution of 320 x 240 pixels and a frame rate of 15 pictures/min. In addition, a helmet-mounted camera was used with a wireless transmission link to the ambulance and thence to the hospital. Speech was transmitted by a second hand-held cellular telephone. The equipment was installed in 1996-7 and video recordings of actual ambulance journeys were made in July 1997. The technical feasibility of the telemedicine ambulance has been demonstrated and further clinical assessment is now in progress.

  8. Vehicle-triggered video compression/decompression for fast and efficient searching in large video databases

    NASA Astrophysics Data System (ADS)

    Bulan, Orhan; Bernal, Edgar A.; Loce, Robert P.; Wu, Wencheng

    2013-03-01

    Video cameras are widely deployed along city streets, interstate highways, traffic lights, stop signs, and toll booths by entities that perform traffic monitoring and law enforcement. The videos captured by these cameras are typically compressed and stored in large databases. A rapid search for a specific vehicle within a large database of compressed videos is often required and can be time-critical, even a matter of life or death. In this paper, we propose video compression and decompression algorithms that enable fast and efficient vehicle or, more generally, event searches in large video databases. The proposed algorithm selects reference frames (i.e., I-frames) based on a vehicle having been detected at a specified position within the scene being monitored while compressing a video sequence. A search for a specific vehicle in the compressed video stream is performed across the reference frames only, which does not require decompression of the full video sequence as in traditional search algorithms. Our experimental results on videos captured on a local road show that the proposed algorithm significantly reduces the search space (thus reducing time and computational resources) in vehicle search tasks within compressed video streams, particularly those captured in light traffic conditions.
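
    The triggering idea can be sketched as follows (hypothetical detector and encoder interfaces; the paper targets standard codecs in which reference-frame placement is controllable):

        def encode_stream(frames, detect_vehicle, encoder, gop_max=250):
            """Force an I-frame whenever a vehicle crosses the trigger
            position, so later searches decode reference frames only."""
            since_iframe = 0
            for frame in frames:
                force_key = detect_vehicle(frame) or since_iframe >= gop_max
                encoder.encode(frame, keyframe=force_key)
                since_iframe = 0 if force_key else since_iframe + 1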

  9. Research on inosculation between master of ceremonies or players and virtual scene in virtual studio

    NASA Astrophysics Data System (ADS)

    Li, Zili; Zhu, Guangxi; Zhu, Yaoting

    2003-04-01

    A technical principle for the construction of a virtual studio is proposed in which an orientation tracker and a telemeter are used to augment a conventional BETACAM camera and to connect it with the software module of the host. A virtual camera model named the Camera & Post-camera Coupling Pair is put forward; it differs from the common model in computer graphics and is bound to the real BETACAM camera used for shooting. A formula is derived to compute the foreground and background frame-buffer images of the virtual scene, whose boundary is based on the depth information of the target point along the real BETACAM camera's projective ray. Real-time consistency is achieved between the video image sequences of the master of ceremonies or players and the CG image sequences of the virtual scene in spatial position, perspective relationship, and image object masking. The experimental results show that the proposed scheme for constructing a virtual studio is feasible and more practical and effective than the existing technology based on color keying and background image synthesis with non-linear video editing.

  10. Analysis of the color rendition of flexible endoscopes

    NASA Astrophysics Data System (ADS)

    Murphy, Edward M.; Hegarty, Francis J.; McMahon, Barry P.; Boyle, Gerard

    2003-03-01

    Endoscopes are imaging devices routinely used for the diagnosis of disease within the human digestive tract. Light is transmitted into the body cavity via incoherent fibreoptic bundles and is controlled by a light feedback system. Fibreoptic endoscopes use coherent fibreoptic bundles to provide the clinician with an image; it is also possible to couple fibreoptic endoscopes to a clip-on video camera. Video endoscopes consist of a small CCD camera, which is inserted into the gastrointestinal tract, and an associated image processor that converts the signal to analogue RGB video signals. Images from both types of endoscope are displayed on standard video monitors. Diagnosis depends upon being able to determine changes in the structure and colour of tissues and biological fluids, and therefore upon the ability of the endoscope to reproduce the colour of these tissues and fluids with fidelity. This study investigates the colour reproduction of flexible optical and video endoscopes. Fibreoptic and video endoscopes alter image colour characteristics in different ways. The colour rendition of fibreoptic endoscopes was assessed by coupling them to a video camera and applying video colorimetric techniques. These techniques were then applied to video endoscopes to assess how their colour rendition compared with that of optical endoscopes. In both cases results were obtained at fixed illumination settings; video endoscopes were then assessed with varying levels of illumination. Initial results show that at constant luminance, endoscopy systems introduce non-linear shifts in colour. Techniques for examining how this colour shift varies with illumination intensity were developed, and both methodology and results are presented. We conclude that more rigorous quality assurance is required to reduce colour error, and we are developing calibration procedures applicable to medical endoscopes.

  11. A Taxonomy of Asynchronous Instructional Video Styles

    ERIC Educational Resources Information Center

    Chorianopoulos, Konstantinos

    2018-01-01

    Many educational organizations are employing instructional videos in their pedagogy, but there is a limited understanding of the possible video formats. In practice, the presentation format of instructional videos ranges from direct recording of classroom teaching with a stationary camera, or screencasts with voiceover, to highly elaborate video…

  12. First stereo video dataset with ground truth for remote car pose estimation using satellite markers

    NASA Astrophysics Data System (ADS)

    Gil, Gustavo; Savino, Giovanni; Pierini, Marco

    2018-04-01

    Leading causes of PTW (Powered Two-Wheeler) crashes and near misses in urban areas are failures or delays in predicting the changing trajectories of other vehicles. Regrettably, misperception on the part of both car drivers and motorcycle riders results in fatal or serious consequences for riders. Intelligent vehicles could provide early warning about possible collisions, helping to avoid the crash. There is evidence that stereo cameras can be used for estimating the heading angle of other vehicles, which is key to anticipating their imminent location, but there is limited heading ground truth data available in the public domain. Consequently, we employed a marker-based technique for creating ground truth of car pose and created a dataset∗ for computer vision benchmarking purposes. This dataset of a moving vehicle collected from a statically mounted stereo camera is a simplification of a complex and dynamic reality, which serves as a test bed for car pose estimation algorithms. The dataset contains the accurate pose of the moving obstacle, and realistic imagery including texture-less and non-Lambertian surfaces (e.g., reflectance and transparency).

  13. Synchrotron topography project. Progress report, January 20, 1982-October 20, 1982

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bilello, J.C.; Chen, H.; Hmelo, A.B.

    1982-01-01

    The collaborators have participated in the Synchrotron Topography Project (STP), which has designed and developed instrumentation for an x-ray topography station at the National Synchrotron Light Source (NSLS) at Brookhaven National Laboratory (BNL). The two principal instruments constructed are a White Beam Camera (WBC) and a Multiple Crystal Camera (MCC) with high planar collimation and wide-area image coverage. It is possible to perform in-situ studies in a versatile environmental chamber equipped with a miniature mechanical testing stage for both the WBC and MCC systems. Real-time video imaging plus a rapid-feed cassette holder for high-resolution photographic plates are available for recording topographs. Provisions are made for other types of photon detection as well as spectroscopy. The facilities for the entire station have been designed for remote operation using an LSI-11/23 plus suitable interfacing. These instruments are described briefly and the current status of the program is reviewed. The Appendix of this report presents titles, authors, and abstracts of other technical work associated with this project during the current period.

  14. Performance of the Satellite Test Assistant Robot in JPL's Space Simulation Facility

    NASA Technical Reports Server (NTRS)

    Mcaffee, Douglas; Long, Mark; Johnson, Ken; Siebes, Georg

    1995-01-01

    An innovative new telerobotic inspection system called STAR (the Satellite Test Assistant Robot) has been developed to assist engineers as they test new spacecraft designs in simulated space environments. STAR operates inside the ultra-cold, high-vacuum, test chambers and provides engineers seated at a remote Operator Control Station (OCS) with high resolution video and infrared (IR) images of the flight articles under test. STAR was successfully proof tested in JPL's 25-ft (7.6-m) Space Simulation Chamber where temperatures ranged from +85 C to -190 C and vacuum levels reached 5.1 x 10(exp -6) torr. STAR's IR Camera was used to thermally map the entire interior of the chamber for the first time. STAR also made several unexpected and important discoveries about the thermal processes occurring within the chamber. Using a calibrated test fixture arrayed with ten sample spacecraft materials, the IR camera was shown to produce highly accurate surface temperature data. This paper outlines STAR's design and reports on significant results from the thermal vacuum chamber test.

  15. Choreographing the Frame: A Critical Investigation into How Dance for the Camera Extends the Conceptual and Artistic Boundaries of Dance

    ERIC Educational Resources Information Center

    Preston, Hilary

    2006-01-01

    This essay investigates the collaboration between dance and choreographic practice and film/video medium in a contemporary context. By looking specifically at dance made for the camera and the proliferation of dance-film/video, critical issues will be explored that have surfaced in response to this burgeoning form. Presenting a view of avant-garde…

  16. Integrating Time-Synchronized Video with Other Geospatial and Temporal Data for Remote Science Operations

    NASA Technical Reports Server (NTRS)

    Cohen, Tamar E.; Lees, David S.; Deans, Matthew C.; Lim, Darlene S. S.; Lee, Yeon Jin Grace

    2018-01-01

    Exploration Ground Data Systems (xGDS) supports rapid scientific decision making by synchronizing video in context with map, instrument data visualization, geo-located notes and any other collected data. xGDS is an open source web-based software suite developed at NASA Ames Research Center to support remote science operations in analog missions and prototype solutions for remote planetary exploration. (See Appendix B) Typical video systems are designed to play or stream video only, independent of other data collected in the context of the video. Providing customizable displays for monitoring live video and data as well as replaying recorded video and data helps end users build up a rich situational awareness. xGDS was designed to support remote field exploration with unreliable networks. Commercial digital recording systems operate under the assumption that there is a stable and reliable network between the source of the video and the recording system. In many field deployments and space exploration scenarios, this is not the case - there are both anticipated and unexpected network losses. xGDS' Video Module handles these interruptions, storing the available video, organizing and characterizing the dropouts, and presenting the video for streaming or replay to the end user including visualization of the dropouts. Scientific instruments often require custom or expensive software to analyze and visualize collected data. This limits the speed at which the data can be visualized and limits access to the data to those users with the software. xGDS' Instrument Module integrates with instruments that collect and broadcast data in a single snapshot or that continually collect and broadcast a stream of data. While seeing a visualization of collected instrument data is informative, showing the context for the collected data, other data collected nearby along with events indicating current status helps remote science teams build a better understanding of the environment. Further, sharing geo-located, tagged notes recorded by the scientists and others on the team spurs deeper analysis of the data.

  17. Evaluation of commercial video-based intersection signal actuation systems.

    DOT National Transportation Integrated Search

    2008-12-01

    Video cameras and computer image processors have come into widespread use for the detection of vehicles for signal actuation at controlled intersections. Video is considered both a cost-saving and convenient alternative to conventional stop-line ...

  18. An ultrahigh-speed color video camera operating at 1,000,000 fps with 288 frame memories

    NASA Astrophysics Data System (ADS)

    Kitamura, K.; Arai, T.; Yonai, J.; Hayashida, T.; Kurita, T.; Maruyama, H.; Namiki, J.; Yanagi, T.; Yoshida, T.; van Kuijk, H.; Bosiers, Jan T.; Saita, A.; Kanayama, S.; Hatade, K.; Kitagawa, S.; Etoh, T. Goji

    2008-11-01

    We developed an ultrahigh-speed color video camera that operates at 1,000,000 fps (frames per second) and has the capacity to store 288 frame memories. In 2005, we developed an ultrahigh-speed, high-sensitivity portable color camera with a 300,000-pixel single CCD (ISIS-V4: In-situ Storage Image Sensor, Version 4). Its ultrahigh-speed shooting capability of 1,000,000 fps was made possible by directly connecting CCD storages, which record video images, to the photodiodes of individual pixels. The number of consecutive frames was 144; however, longer capture times were demanded when the camera was used during imaging experiments and for some television programs. To increase ultrahigh-speed capture times, we used a beam splitter and two ultrahigh-speed 300,000-pixel CCDs. The beam splitter was placed behind the pickup lens, with one CCD located at each of its two outputs. A CCD driving unit was developed to drive the two CCDs separately, and the recording period of the two CCDs was switched sequentially. This increased the recording capacity to 288 images, a factor of two over that of the conventional ultrahigh-speed camera. A problem with this arrangement was that the incident light on each CCD was halved by the beam splitter. To improve the light sensitivity, we developed a microlens array for use with the ultrahigh-speed CCDs. We simulated the operation of the microlens array in order to optimize its shape and then fabricated it using stamping technology. Using this microlens array increased the light sensitivity of the CCDs by an approximate factor of two. By using the beam splitter in conjunction with the microlens array, it was possible to make an ultrahigh-speed color video camera that has 288 frame memories without decreasing the camera's light sensitivity.

  19. High-Speed Video Analysis in a Conceptual Physics Class

    NASA Astrophysics Data System (ADS)

    Desbien, Dwain M.

    2011-09-01

    The use of probeware and computers has become quite common in introductory physics classrooms. Video analysis is also becoming more popular and is available to a wide range of students through commercially available and/or free software.2,3 Video analysis allows for the study of motions that cannot be easily measured in the traditional lab setting and also allows real-world situations to be analyzed. Many motions are too fast to be captured easily at the standard video frame rate of 30 frames per second (fps) employed by most video cameras. This paper discusses using a consumer camera that can record high-frame-rate video in a college-level conceptual physics class. In particular, this involves the use of model rockets to determine the acceleration during the boost period right at launch and to compare it to a simple model of the expected acceleration.
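
    The analysis itself is elementary once positions have been digitized from the frames; a small sketch with made-up numbers:

        import numpy as np

        fps = 240.0                          # high-frame-rate capture
        y = np.array([0.0, 0.0010, 0.0042, 0.0094, 0.0167])  # height [m]
        t = np.arange(len(y)) / fps
        a = np.gradient(np.gradient(y, t), t)    # second derivative of y(t)
        print(a.mean())                      # boost acceleration, ~120 m/s^2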

  20. Low cost thermal camera for use in preclinical detection of diabetic peripheral neuropathy in primary care setting

    NASA Astrophysics Data System (ADS)

    Joshi, V.; Manivannan, N.; Jarry, Z.; Carmichael, J.; Vahtel, M.; Zamora, G.; Calder, C.; Simon, J.; Burge, M.; Soliz, P.

    2018-02-01

    Diabetic peripheral neuropathy (DPN) accounts for around 73,000 lower-limb amputations annually in the US among patients with diabetes. Early detection of DPN is critical. Current clinical methods for diagnosing DPN are subjective and effective only at later stages. Until recently, thermal cameras used for medical imaging have been expensive and hence prohibitive to install in the primary care setting. The objective of this study is to compare results from a low-cost thermal camera with a high-end thermal camera used in screening for DPN. Thermal imaging has demonstrated changes in microvascular function that correlate with nerve function affected by DPN. The limitations of using low-cost cameras for DPN imaging are lower resolution (fewer active pixels), lower frame rate, lower thermal sensitivity, etc. We integrated two FLIR Lepton sensors (80x60 active pixels, 50° HFOV, thermal sensitivity < 50 mK) as one unit; the right and left cameras record videos of the right and left foot, respectively. A compact embedded system (Raspberry Pi 3 Model B v1.2) is used to configure the sensors and to capture and stream the video via Ethernet. The resulting video has 160x120 active pixels (8 frames/second). We compared the temperature measurements of feet obtained using the low-cost camera against the gold-standard high-end FLIR SC305. Twelve subjects (aged 35-76) were recruited. The difference in the temperature measurements between cameras was calculated for each subject, and the results show that the difference between the temperature measurements of the two cameras (mean difference = 0.4, p-value = 0.2) is not statistically significant. We conclude that the low-cost thermal camera system shows potential for use in detecting early signs of DPN in under-served and rural clinics.
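
    The reported comparison is a paired test on per-subject temperature differences; schematically (the numbers below are invented, not study data):

        import numpy as np
        from scipy import stats

        low_cost  = np.array([30.1, 29.9, 30.6, 31.3, 28.9, 30.2])  # deg C
        reference = np.array([30.5, 29.6, 31.2, 31.1, 29.4, 30.4])  # SC305
        t_stat, p_value = stats.ttest_rel(low_cost, reference)
        print(np.mean(reference - low_cost), p_value)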

  1. The use of digital imaging, video conferencing, and telepathology in histopathology: a national survey

    PubMed Central

    Dennis, T; Start, R D; Cross, S S

    2005-01-01

    Aims: To undertake a large scale survey of histopathologists in the UK to determine the current infrastructure, training, and attitudes to digital pathology. Methods: A postal questionnaire was sent to 500 consultant histopathologists randomly selected from the membership of the Royal College of Pathologists in the UK. Results: There was a response rate of 47%. Sixty four per cent of respondents had a digital camera mounted on their microscope, but only 12% had any sort of telepathology equipment. Thirty per cent used digital images in electronic presentations at meetings at least once a year and only 24% had ever used telepathology in a diagnostic situation. Fifty nine per cent had received no training in digital imaging. Fifty eight per cent felt that the medicolegal implications of duty of care were a barrier to its use. A large proportion of pathologists (69%) were interested in using video conferencing for remote attendance at multidisciplinary team meetings. Conclusions: There is a reasonable level of equipment and communications infrastructure among histopathologists in the UK but a very low level of training. There is resistance to the use of telepathology in the diagnostic context but enthusiasm for the use of video conferencing in multidisciplinary team meetings. PMID:15735155

  2. Low-cost uncalibrated video-based tool for tridimensional reconstruction oriented to assessment of chronic wounds

    NASA Astrophysics Data System (ADS)

    Casas, Leslie; Treuillet, Sylvie; Valencia, Braulio; Llanos, Alejandro; Castañeda, Benjamín.

    2015-01-01

    Chronic wounds are a major problem worldwide which mainly affects the geriatric population and patients with limited mobility. In tropical countries, cutaneous leishmaniasis (CL) is also a cause of chronic wounds, being endemic in 75% of Peru. The monitoring of these wounds therefore represents a big challenge due to the remote location of the patients. This paper aims to develop a low-cost, user-friendly technique to obtain a 3D reconstruction of chronic wounds oriented to clinical monitoring and assessment. The video is taken using a commercial hand-held video camera without the need for a rig. The algorithm has been specially designed for skin wounds, which have particular texture characteristics and undefined edges for which techniques used in regular SFM applications would not work. In addition, the technique has been developed using open-source libraries. The estimated 3D point cloud allows the computation of metrics such as volume, depth, and superficial area, which have recently been used by CL specialists with good results in clinical assessment. Initial results on cork phantoms and CL wounds show an average distance error of less than 1 mm when compared against models obtained with an industrial 3D laser scanner.
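
    Once the point cloud is in hand, first-order size metrics follow directly (an assumption-laden sketch: a convex hull overestimates area and volume for a concave wound bed, so it serves only as a crude bound, not the paper's metric pipeline):

        import numpy as np
        from scipy.spatial import ConvexHull

        points = np.random.rand(500, 3)   # stand-in for the wound cloud [mm]
        hull = ConvexHull(points)
        print(hull.volume, hull.area)     # volume [mm^3], surface area [mm^2]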

  3. System of launchable mesoscale robots for distributed sensing

    NASA Astrophysics Data System (ADS)

    Yesin, Kemal B.; Nelson, Bradley J.; Papanikolopoulos, Nikolaos P.; Voyles, Richard M.; Krantz, Donald G.

    1999-08-01

    A system of launchable miniature mobile robots with various sensors as payload is used for distributed sensing. The robots are projected to areas of interest either by a robot launcher or by a human operator using standard equipment. A wireless communication network is used to exchange information with the robots. Payloads such as a MEMS sensor for vibration detection, a microphone and an active video module are used mainly to detect humans. The video camera provides live images through a wireless video transmitter and a pan-tilt mechanism expands the effective field of view. There are strict restrictions on total volume and power consumption of the payloads due to the small size of the robot. Emerging technologies are used to address these restrictions. In this paper, we describe the use of microrobotic technologies to develop active vision modules for the mesoscale robot. A single chip CMOS video sensor is used along with a miniature lens that is approximately the size of a sugar cube. The device consumes 100 mW; about 5 times less than the power consumption of a comparable CCD camera. Miniature gearmotors 3 mm in diameter are used to drive the pan-tilt mechanism. A miniature video transmitter is used to transmit analog video signals from the camera.

  4. Application of robust face recognition in video surveillance systems

    NASA Astrophysics Data System (ADS)

    Zhang, De-xin; An, Peng; Zhang, Hao-xiang

    2018-03-01

    In this paper, we propose a video searching system that utilizes face recognition as its search indexing feature. As applications of video cameras have greatly increased in recent years, face recognition is a natural fit for searching for targeted individuals within vast amounts of video data. However, the performance of such searching depends on the quality of the face images recorded in the video signals. Since surveillance video cameras record subjects without fixed postures, face occlusion is very common in everyday video. The proposed system builds a model for occluded faces using fuzzy principal component analysis (FPCA) and reconstructs the human faces with the available information. Experimental results show that the system is highly efficient in processing real-life videos and is very robust to various kinds of face occlusion. Hence it can relieve human reviewers from sitting in front of monitors and greatly enhances efficiency as well. The proposed system has been installed and applied in various environments and has already demonstrated its power by helping to solve real cases.

  5. Guerrilla Video: A New Protocol for Producing Classroom Video

    ERIC Educational Resources Information Center

    Fadde, Peter; Rich, Peter

    2010-01-01

    Contemporary changes in pedagogy point to the need for a higher level of video production value in most classroom video, replacing the default video protocol of an unattended camera in the back of the classroom. The rich and complex environment of today's classroom can be captured more fully using the higher level, but still easily manageable,…

  6. High dynamic range adaptive real-time smart camera: an overview of the HDR-ARtiSt project

    NASA Astrophysics Data System (ADS)

    Lapray, Pierre-Jean; Heyrman, Barthélémy; Ginhac, Dominique

    2015-04-01

    Standard cameras capture only a fraction of the information that is visible to the human visual system. This is specifically true for natural scenes including areas of low and high illumination due to transitions between sunlit and shaded areas. When capturing such a scene, many cameras are unable to store the full Dynamic Range (DR), resulting in low quality video where details are concealed in shadows or washed out by sunlight. The imaging technique that can overcome this problem is called HDR (High Dynamic Range) imaging. This paper describes a complete smart camera built around a standard off-the-shelf LDR (Low Dynamic Range) sensor and a Virtex-6 FPGA board. This smart camera, called HDR-ARtiSt (High Dynamic Range Adaptive Real-time Smart camera), is able to produce a real-time HDR live video color stream by recording and combining multiple acquisitions of the same scene while varying the exposure time. This technique appears to be one of the most appropriate and cheapest solutions to enhance the dynamic range of real-life environments. HDR-ARtiSt embeds real-time multiple captures, HDR processing, data display, and transfer of an HDR color video at full sensor resolution (1280 × 1024 pixels) at 60 frames per second. The main contributions of this work are: (1) a Multiple Exposure Control (MEC) dedicated to smart image capture with three alternating exposure times that are dynamically evaluated from frame to frame, (2) a Multi-streaming Memory Management Unit (MMMU) dedicated to the memory read/write operations of the three parallel video streams corresponding to the different exposure times, (3) HDR creation by combining the video streams using a specific hardware version of the Debevec technique, and (4) Global Tone Mapping (GTM) of the HDR scene for display on a standard LCD monitor.
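
    OpenCV exposes the Debevec response calibration and merging steps named above, so the software equivalent of the capture-merge-tonemap pipeline can be sketched in a few lines; the file names and exposure times here are placeholders, and this runs offline rather than on the FPGA:

        import cv2
        import numpy as np

        # Hypothetical file names: three exposures of the same scene.
        paths = ["short.jpg", "mid.jpg", "long.jpg"]
        images = [cv2.imread(p) for p in paths]
        times = np.array([1 / 1000, 1 / 250, 1 / 60], dtype=np.float32)  # seconds

        # Recover the camera response curve, then merge into a radiance map.
        calibrate = cv2.createCalibrateDebevec()
        response = calibrate.process(images, times)
        merge = cv2.createMergeDebevec()
        hdr = merge.process(images, times, response)

        # Global tone mapping for display on a standard LDR monitor.
        ldr = cv2.createTonemap(gamma=2.2).process(hdr)
        cv2.imwrite("hdr_tonemapped.png", np.clip(ldr * 255, 0, 255).astype(np.uint8))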

  7. Effective sociodemographic population assessment of elusive species in ecology and conservation management.

    PubMed

    Head, Josephine S; Boesch, Christophe; Robbins, Martha M; Rabanal, Luisa I; Makaga, Loïc; Kühl, Hjalmar S

    2013-09-01

    Wildlife managers are urgently searching for improved sociodemographic population assessment methods to evaluate the effectiveness of implemented conservation activities. These need to be inexpensive, appropriate for a wide spectrum of species and straightforward to apply by local staff members with minimal training. Furthermore, conservation management would benefit from single approaches which cover many aspects of population assessment beyond only density estimates, to include for instance social and demographic structure, movement patterns, or species interactions. Remote camera traps have traditionally been used to measure species richness. Currently, there is a rapid move toward using remote camera trapping in density estimation, community ecology, and conservation management. Here, we demonstrate such comprehensive population assessment by linking remote video trapping, spatially explicit capture-recapture (SECR) techniques, and other methods. We apply it to three species: chimpanzees Pan troglodytes troglodytes, gorillas Gorilla gorilla gorilla, and forest elephants Loxodonta cyclotis in Loango National Park, Gabon. All three species exhibited considerable heterogeneity in capture probability at the sex or group level and density was estimated at 1.72, 1.2, and 1.37 individuals per km2 and male to female sex ratios were 1:2.1, 1:3.2, and 1:2 for chimpanzees, gorillas, and elephants, respectively. Association patterns revealed four, eight, and 18 independent social groups of chimpanzees, gorillas, and elephants, respectively: key information for both conservation management and studies on the species' ecology. Additionally, there was evidence of resident and nonresident elephants within the study area and intersexual variation in home range size among elephants but not chimpanzees. Our study highlights the potential of combining camera trapping and SECR methods in conducting detailed population assessments that go far beyond documenting species diversity patterns or estimating single species population size. Our study design is widely applicable to other species and spatial scales, and moderately trained staff members can collect and process the required data. Furthermore, assessments using the same method can be extended to include several other ecological, behavioral, and demographic aspects: fission and fusion dynamics and intergroup transfers, birth and mortality rates, species interactions, and ranging patterns.

  9. Versatile microsecond movie camera

    NASA Astrophysics Data System (ADS)

    Dreyfus, R. W.

    1980-03-01

    A laboratory-type movie camera is described which satisfies many requirements in the range 1 microsec to 1 sec. The camera consists of a He-Ne laser and compatible state-of-the-art components; the primary components are an acoustooptic modulator, an electromechanical beam deflector, and a video tape system. The present camera is distinct in its operation in that submicrosecond laser flashes freeze the image motion while still allowing the simplicity of electromechanical image deflection in the millisecond range. The gating and pulse delay circuits of an oscilloscope synchronize the modulator and scanner relative to the subject being photographed. The optical table construction and electronic control enhance the camera's versatility and adaptability. The instant replay video tape recording allows for easy synchronization and immediate viewing of the results. Economy is achieved by using off-the-shelf components, optical table construction, and short assembly time.

  10. Collaborative real-time scheduling of multiple PTZ cameras for multiple object tracking in video surveillance

    NASA Astrophysics Data System (ADS)

    Liu, Yu-Che; Huang, Chung-Lin

    2013-03-01

    This paper proposes a multi-PTZ-camera control mechanism to acquire close-up imagery of human objects in a surveillance system. The control algorithm is based on the output of multi-camera, multi-target tracking. The three main concerns of the algorithm are (1) the imagery of each human object's face for biometric purposes, (2) the optimal video quality of the human objects, and (3) minimum hand-off time. Here, we define an objective function based on the expected capture conditions, such as the camera-subject distance, pan-tilt angles at capture, face visibility, and others. Such an objective function serves to effectively balance the number of captures per subject and the quality of captures. In the experiments, we demonstrate the performance of the system, which operates in real time under real-world conditions on three PTZ cameras.
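
    The objective function itself is not reproduced in this abstract; the following toy sketch, with invented weights and a made-up preferred capture distance, only illustrates the general idea of scoring candidate camera-subject assignments by distance, angles, and face visibility:

        import math

        def capture_score(distance_m, pan_deg, tilt_deg, face_visibility):
            """Toy objective for ranking candidate camera-subject assignments.

            All weights and the preferred 5 m capture distance are invented;
            the paper's actual objective terms are not given in the abstract.
            """
            distance_term = math.exp(-((distance_m - 5.0) ** 2) / 10.0)  # prefer ~5 m
            angle_term = math.cos(math.radians(pan_deg)) * math.cos(math.radians(tilt_deg))
            return 0.4 * distance_term + 0.3 * max(angle_term, 0.0) + 0.3 * face_visibility

        # Assign a PTZ camera to the subject it can currently capture best.
        subjects = {"s1": (4.2, 10, 5, 0.9), "s2": (12.0, 40, 20, 0.4)}
        best = max(subjects, key=lambda s: capture_score(*subjects[s]))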

  11. The design of red-blue 3D video fusion system based on DM642

    NASA Astrophysics Data System (ADS)

    Fu, Rongguo; Luo, Hao; Lv, Jin; Feng, Shu; Wei, Yifang; Zhang, Hao

    2016-10-01

    Aiming at the uncertainty of traditional 3D video capture, including camera focal lengths and the distance and angle parameters between two cameras, a red-blue 3D video fusion system based on the DM642 hardware processing platform is designed with parallel optical axes. In view of the brightness reduction of traditional 3D video, a brightness enhancement algorithm based on human visual characteristics is proposed, along with a luminance component processing method based on the YCbCr color space. The BIOS real-time operating system is used to improve real-time performance. The video processing circuit, built around the DM642, enhances the brightness of the images, converts the video signals from YCbCr to RGB, and extracts the R component from one camera while the G and B components are extracted synchronously from the other, finally outputting fused 3D images. Real-time adjustments such as translation and scaling of the two color components are realized through serial communication between the VC software and the BIOS. By adding red and blue components, the system reduces the loss of the chrominance components and keeps the picture color saturation above 95% of the original. The enhancement algorithm is optimized to reduce the amount of data fused during video processing, shortening fusion time and improving the viewing experience. Experimental results show that the system can capture images at near distance and output red-blue 3D video, presenting a pleasing experience to an audience wearing red-blue glasses.
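
    The core fusion step, taking R from one camera and G and B from the other, is straightforward to express on RGB frames. A minimal OpenCV sketch with hypothetical file names (the actual system performs this on YCbCr-converted signals in DM642 hardware):

        import cv2

        left = cv2.imread("left.png")    # hypothetical left-camera frame (BGR)
        right = cv2.imread("right.png")  # hypothetical right-camera frame (BGR)

        anaglyph = right.copy()          # keep G and B from the right camera
        anaglyph[:, :, 2] = left[:, :, 2]  # replace R with the left camera's red channel
        cv2.imwrite("anaglyph.png", anaglyph)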

  12. What are we missing? Advantages of more than one viewpoint to estimate fish assemblages using baited video

    PubMed Central

    Huveneers, Charlie; Fairweather, Peter G.

    2018-01-01

    Counting errors can bias assessments of species abundance and richness, which can affect assessments of stock structure, population structure and monitoring programmes. Many methods for studying ecology use fixed viewpoints (e.g. camera traps, underwater video), but little is known about how this biases the data obtained. In the marine realm, most studies using baited underwater video, a common method for monitoring fish and nekton, have previously only assessed fishes using a single bait-facing viewpoint. To investigate the biases stemming from using fixed viewpoints, we added cameras to cover 360° views around the units. We found similar species richness for all observed viewpoints but the bait-facing viewpoint recorded the highest fish abundance. Sightings of infrequently seen and shy species increased with the additional cameras and the extra viewpoints allowed the abundance estimates of highly abundant schooling species to be up to 60% higher. We specifically recommend the use of additional cameras for studies focusing on shyer species or those particularly interested in increasing the sensitivity of the method by avoiding saturation in highly abundant species. Studies may also benefit from using additional cameras to focus observation on the downstream viewpoint. PMID:29892386

  13. 4K Video of Colorful Liquid in Space

    NASA Image and Video Library

    2015-10-09

    Once again, astronauts on the International Space Station dissolved an effervescent tablet in a floating ball of water, and captured images using a camera capable of recording four times the resolution of normal high-definition cameras. The higher resolution images and higher frame rate videos can reveal more information when used on science investigations, giving researchers a valuable new tool aboard the space station. This footage is one of the first of its kind. The cameras are being evaluated for capturing science data and vehicle operations by engineers at NASA's Marshall Space Flight Center in Huntsville, Alabama.

  14. Optical registration of spaceborne low light remote sensing camera

    NASA Astrophysics Data System (ADS)

    Li, Chong-yang; Hao, Yan-hui; Xu, Peng-mei; Wang, Dong-jie; Ma, Li-na; Zhao, Ying-long

    2018-02-01

    To meet the high precision requirements of spaceborne low light remote sensing camera optical registration, dual-channel optical registration of a CCD and an EMCCD is achieved with a high magnification optical registration system. A system-integration optical registration and registration-accuracy scheme for a spaceborne low light remote sensing camera with short focal depth and wide field of view is proposed in this paper, together with an analysis of the parallel misalignment of the CCD and of the optical registration accuracy. Actual registration results show clear imaging; the MTF and the optical registration accuracy meet requirements, providing an important guarantee for obtaining high quality image data in orbit.

  15. Color image processing and object tracking workstation

    NASA Technical Reports Server (NTRS)

    Klimek, Robert B.; Paulick, Michael J.

    1992-01-01

    A system is described for automatic and semiautomatic tracking of objects on film or video tape, which was developed to meet the needs of the microgravity combustion and fluid science experiments at NASA Lewis. The system consists of individual hardware parts working under computer control to achieve a high degree of automation. The most important hardware parts include a 16 mm film projector, a lens system, a video camera, an S-VHS tapedeck, a frame grabber, and some storage and output devices. Both the projector and tapedeck have a computer interface enabling remote control. Tracking software was developed to control the overall operation. In the automatic mode, the main tracking program controls the projector or tapedeck frame incrementation, grabs a frame, processes it, locates the edge of the objects being tracked, and stores the coordinates in a file. This process is performed repeatedly until the last frame is reached. Three representative applications are described. These applications represent typical uses and include tracking the propagation of a flame front, tracking the movement of a liquid-gas interface with extremely poor visibility, and characterizing a diffusion flame according to color and shape.
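
    A modern software analogue of the automatic tracking loop (advance a frame, grab it, segment the object, store its coordinates) might look as follows; the file name and the simple threshold-plus-centroid segmentation are illustrative assumptions, not the workstation's actual processing:

        import cv2

        cap = cv2.VideoCapture("experiment.avi")   # hypothetical digitized film/tape
        track = []
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            _, mask = cv2.threshold(gray, 128, 255, cv2.THRESH_BINARY)
            m = cv2.moments(mask)
            if m["m00"] > 0:                       # centroid of the bright region
                track.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
        cap.release()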

  16. Virtual Vision

    NASA Astrophysics Data System (ADS)

    Terzopoulos, Demetri; Qureshi, Faisal Z.

    Computer vision and sensor networks researchers are increasingly motivated to investigate complex multi-camera sensing and control issues that arise in the automatic visual surveillance of extensive, highly populated public spaces such as airports and train stations. However, they often encounter serious impediments to deploying and experimenting with large-scale physical camera networks in such real-world environments. We propose an alternative approach called "Virtual Vision", which facilitates this type of research through the virtual reality simulation of populated urban spaces, camera sensor networks, and computer vision on commodity computers. We demonstrate the usefulness of our approach by developing two highly automated surveillance systems comprising passive and active pan/tilt/zoom cameras that are deployed in a virtual train station environment populated by autonomous, lifelike virtual pedestrians. The easily reconfigurable virtual cameras distributed in this environment generate synthetic video feeds that emulate those acquired by real surveillance cameras monitoring public spaces. The novel multi-camera control strategies that we describe enable the cameras to collaborate in persistently observing pedestrians of interest and in acquiring close-up videos of pedestrians in designated areas.

  17. Evaluation of a video image detection system : final report.

    DOT National Transportation Integrated Search

    1994-05-01

    A video image detection system (VIDS) is an advanced wide-area traffic monitoring system : that processes input from a video camera. The Autoscope VIDS coupled with an information : management system was selected as the monitoring device because test...

  18. Spherical visual system for real-time virtual reality and surveillance

    NASA Astrophysics Data System (ADS)

    Chen, Su-Shing

    1998-12-01

    A spherical visual system has been developed for full-field, web-based surveillance, virtual reality, and roundtable video conferencing. The hardware is a CycloVision parabolic lens mounted on a video camera. The software was developed at the University of Missouri-Columbia. The mathematical model was developed by Su-Shing Chen and Michael Penna in the 1980s. The parabolic image, capturing the full (360 degree) hemispherical field of view (except the north pole), is transformed into the spherical model of Chen and Penna. In the spherical model, images are invariant under the rotation group and are easily mapped to the image plane tangent to any point on the sphere. The projected image is exactly what a conventional camera produces at that angle. Thus, a real-time full spherical field video camera is obtained by using two parabolic lenses.
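
    As a rough illustration of the unwarping step, a simple polar remap turns the annular parabolic image into a panorama; note that this sketch ignores the exact Chen-Penna spherical model and just assumes a centered mirror of known radius:

        import cv2
        import numpy as np

        img = cv2.imread("parabolic.png")            # hypothetical omnidirectional frame
        h, w = img.shape[:2]
        cx, cy, r = w / 2, h / 2, min(w, h) / 2      # assume mirror centered in frame

        out_w, out_h = 1024, 256                     # panorama dimensions (arbitrary)
        theta = np.linspace(0, 2 * np.pi, out_w)     # azimuth
        rho = np.linspace(r, 0, out_h)               # radius maps to elevation
        map_x = (cx + np.outer(rho, np.cos(theta))).astype(np.float32)
        map_y = (cy + np.outer(rho, np.sin(theta))).astype(np.float32)
        panorama = cv2.remap(img, map_x, map_y, cv2.INTER_LINEAR)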

  19. Performance evaluation of a two detector camera for real-time video.

    PubMed

    Lochocki, Benjamin; Gambín-Regadera, Adrián; Artal, Pablo

    2016-12-20

    Single pixel imaging can be the preferred method over traditional 2D-array imaging in spectral ranges where conventional cameras are not available. However, when it comes to real-time video imaging, single pixel imaging cannot compete with the framerates of conventional cameras, especially when high-resolution images are desired. Here we evaluate the performance of an imaging approach using two detectors simultaneously. First, we present theoretical results on how low SNR affects final image quality, followed by experimentally determined results. Obtained video framerates were doubled compared to state-of-the-art systems, resulting in framerates from 22 Hz at 32×32 resolution to 0.75 Hz at 128×128 resolution. Additionally, the two-detector imaging technique enables the acquisition of images with a resolution of 256×256 in less than 3 s.
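
    A basic single-detector Hadamard reconstruction, the usual starting point for such systems, can be sketched as follows (the paper's two-detector scheme, which measures each pattern and its complement simultaneously, is not reproduced here):

        import numpy as np
        from scipy.linalg import hadamard

        n = 32 * 32                       # 32x32 image as a flat vector
        H = hadamard(n)                   # orthogonal +/-1 measurement patterns
        scene = np.random.rand(n)         # stand-in for the object being imaged

        y = H @ scene                     # one detector reading per displayed pattern
        recovered = (H.T @ y) / n         # exact inverse since H.T @ H = n * I
        assert np.allclose(recovered, scene)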

  20. Identifying sports videos using replay, text, and camera motion features

    NASA Astrophysics Data System (ADS)

    Kobla, Vikrant; DeMenthon, Daniel; Doermann, David S.

    1999-12-01

    Automated classification of digital video is emerging as an important piece of the puzzle in the design of content management systems for digital libraries. The ability to classify videos into various classes such as sports, news, movies, or documentaries increases the efficiency of indexing, browsing, and retrieval of video in large databases. In this paper, we discuss the extraction of features that enable identification of sports videos directly from the compressed domain of MPEG video. These features include detecting the presence of action replays, determining the amount of scene text in video, and calculating various statistics on camera and/or object motion. The features are derived from the macroblock, motion, and bit-rate information that is readily accessible from MPEG video with very minimal decoding, leading to substantial gains in processing speed. Full decoding of selected frames is required only for text analysis. A decision tree classifier built using these features is able to identify sports clips with an accuracy of about 93 percent.
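
    A decision tree over features of this kind is easy to sketch; the feature values below are invented placeholders, whereas the real system derives them from MPEG macroblock, motion, and bit-rate data:

        from sklearn.tree import DecisionTreeClassifier

        # Each row: [replay_count, text_fraction, camera_motion_mag, bitrate_var]
        X = [[6, 0.12, 3.1, 0.8],   # sports
             [0, 0.30, 0.4, 0.2],   # news
             [1, 0.02, 1.0, 0.5],   # movie
             [5, 0.10, 2.7, 0.9]]   # sports
        y = ["sports", "news", "movie", "sports"]

        clf = DecisionTreeClassifier(max_depth=3).fit(X, y)
        print(clf.predict([[4, 0.15, 2.5, 0.7]]))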

  1. Remote Sensing Simulation Activities for Earthlings

    ERIC Educational Resources Information Center

    Krockover, Gerald H.; Odden, Thomas D.

    1977-01-01

    Suggested are activities using a Polaroid camera to illustrate the capabilities of remote sensing. Reading materials from the National Aeronautics and Space Administration (NASA) are suggested. Methods for (1) finding a camera's focal length, (2) calculating ground dimension photograph simulation, and (3) limiting size using film resolution are…

  2. Design and evaluation of controls for drift, video gain, and color balance in spaceborne facsimile cameras

    NASA Technical Reports Server (NTRS)

    Katzberg, S. J.; Kelly, W. L., IV; Rowland, C. W.; Burcher, E. E.

    1973-01-01

    The facsimile camera is an optical-mechanical scanning device which has become an attractive candidate as an imaging system for planetary landers and rovers. This paper presents electronic techniques which permit the acquisition and reconstruction of high quality images with this device, even under varying lighting conditions. These techniques include a control for low frequency noise and drift, an automatic gain control, a pulse-duration light modulation scheme, and a relative spectral gain control. Taken together, these techniques allow the reconstruction of radiometrically accurate and properly balanced color images from facsimile camera video data. These techniques have been incorporated into a facsimile camera and reproduction system, and experimental results are presented for each technique and for the complete system.

  3. Low-complexity camera digital signal imaging for video document projection system

    NASA Astrophysics Data System (ADS)

    Hsia, Shih-Chang; Tsai, Po-Shien

    2011-04-01

    We present high-performance and low-complexity algorithms for real-time camera imaging applications. The main functions of the proposed camera digital signal processing (DSP) involve color interpolation, white balance, adaptive binary processing, auto gain control, and edge and color enhancement for video projection systems. A series of simulations demonstrate that the proposed method can achieve good image quality while keeping computation cost and memory requirements low. On the basis of the proposed algorithms, the cost-effective hardware core is developed using Verilog HDL. The prototype chip has been verified with one low-cost programmable device. The real-time camera system can achieve 1270 × 792 resolution with the combination of extra components and can demonstrate each DSP function.
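
    As one example of the kind of low-complexity processing involved, a gray-world automatic white balance can be written in a few lines; whether the proposed DSP uses gray-world specifically is not stated, so treat this as a generic sketch:

        import numpy as np

        def gray_world_white_balance(rgb):
            """Scale each channel so its mean matches the global mean.

            A common low-complexity auto white balance; a stand-in for the
            paper's (unspecified) white balance function.
            """
            img = rgb.astype(np.float32)
            means = img.reshape(-1, 3).mean(axis=0)
            gains = means.mean() / means
            return np.clip(img * gains, 0, 255).astype(np.uint8)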

  4. Object Occlusion Detection Using Automatic Camera Calibration for a Wide-Area Video Surveillance System

    PubMed Central

    Jung, Jaehoon; Yoon, Inhye; Paik, Joonki

    2016-01-01

    This paper presents an object occlusion detection algorithm using object depth information that is estimated by automatic camera calibration. The object occlusion problem is a major factor to degrade the performance of object tracking and recognition. To detect an object occlusion, the proposed algorithm consists of three steps: (i) automatic camera calibration using both moving objects and a background structure; (ii) object depth estimation; and (iii) detection of occluded regions. The proposed algorithm estimates the depth of the object without extra sensors but with a generic red, green and blue (RGB) camera. As a result, the proposed algorithm can be applied to improve the performance of object tracking and object recognition algorithms for video surveillance systems. PMID:27347978

  5. Scene-aware joint global and local homographic video coding

    NASA Astrophysics Data System (ADS)

    Peng, Xiulian; Xu, Jizheng; Sullivan, Gary J.

    2016-09-01

    Perspective motion is commonly represented in video content that is captured and compressed for various applications including cloud gaming, vehicle and aerial monitoring, etc. Existing approaches based on an eight-parameter homography motion model cannot deal with this efficiently, either due to low prediction accuracy or excessive bit rate overhead. In this paper, we consider the camera motion model and scene structure in such video content and propose a joint global and local homography motion coding approach for video with perspective motion. The camera motion is estimated by a computer vision approach, and camera intrinsic and extrinsic parameters are globally coded at the frame level. The scene is modeled as piece-wise planes, and three plane parameters are coded at the block level. Fast gradient-based approaches are employed to search for the plane parameters for each block region. In this way, improved prediction accuracy and low bit costs are achieved. Experimental results based on the HEVC test model show that up to 9.1% bit rate savings can be achieved (with equal PSNR quality) on test video content with perspective motion. Test sequences for the example applications showed a bit rate savings ranging from 3.7 to 9.1%.
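
    The global part of such a model amounts to warping a decoded reference frame with a 3×3 homography and coding only the residual. A sketch with a made-up homography and a hypothetical file name:

        import cv2
        import numpy as np

        # A hypothetical 8-parameter homography (3x3, normalized so H[2,2] = 1)
        # standing in for the globally coded camera motion of one frame.
        H = np.array([[1.01, 0.02, -3.0],
                      [-0.01, 0.99, 1.5],
                      [1e-5, 2e-5, 1.0]])

        ref = cv2.imread("reference_frame.png")          # hypothetical decoded frame
        h, w = ref.shape[:2]
        prediction = cv2.warpPerspective(ref, H, (w, h))  # motion-compensated prediction
        # The encoder would then code only the residual: current_frame - prediction.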

  6. Evaluation of lens distortion errors using an underwater camera system for video-based motion analysis

    NASA Technical Reports Server (NTRS)

    Poliner, Jeffrey; Fletcher, Lauren; Klute, Glenn K.

    1994-01-01

    Video-based motion analysis systems are widely employed to study human movement, using computers to capture, store, process, and analyze video data. These data can be collected in any environment where cameras can be located. One of the NASA facilities where human performance research is conducted is the Weightless Environment Training Facility (WETF), a pool of water which simulates zero gravity through neutral buoyancy. Underwater video collection in the WETF poses some unique problems. This project evaluates the error caused by the lens distortion of the WETF cameras. A grid of points of known dimensions was constructed and videotaped using a video vault underwater system. Recorded images were played back on a VCR and a personal computer grabbed and stored the images on disk. These images were then digitized to give calculated coordinates for the grid points. Errors were calculated as the distance from the known coordinates of the points to the calculated coordinates. It was demonstrated that errors from lens distortion could be as high as 8 percent. By avoiding the outermost regions of a wide-angle lens, the error can be kept smaller.
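
    The error computation described, distance between known and digitized grid coordinates, reduces to a few lines; the synthetic noise below merely stands in for real digitized data:

        import numpy as np

        known = np.array([[x, y] for y in range(5) for x in range(5)], dtype=float) * 10.0
        digitized = known + np.random.normal(scale=0.3, size=known.shape)  # stand-in data

        errors = np.linalg.norm(digitized - known, axis=1)
        span = known[:, 0].max() - known[:, 0].min()
        print(f"max error: {errors.max():.2f} ({100 * errors.max() / span:.1f}% of grid span)")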

  7. Implementation of remote video auditing with feedback and compliance for manual-cleaning protocols of endoscopic retrograde cholangiopancreatography endoscopes.

    PubMed

    Armellino, Donna; Cifu, Kelly; Wallace, Maureen; Johnson, Sherly; DiCapua, John; Dowling, Oonagh; Jacobs, Mitchel; Browning, Susan

    2018-05-01

    A pilot initiative to assess the use of remote video auditing in monitoring compliance with manual-cleaning protocols for endoscopic retrograde cholangiopancreatography (ERCP) endoscopes was performed, measuring compliance with manual-cleaning steps following the initiation of feedback. A video feed of the ERCP reprocessing room was provided to remote auditors who scored items of an ERCP endoscope manual-cleaning checklist. Compliance feedback was provided in the form of reports and reeducation. Outcomes were reported as checklist compliance. The use of remote video auditing to document manual processing is a feasible approach, and feedback and reeducation increased manual-cleaning compliance from 53.1% (95% confidence interval, 34.7-71.6) to 98.9% (95% confidence interval, 98.1-99.6). Copyright © 2018 Association for Professionals in Infection Control and Epidemiology, Inc. Published by Elsevier Inc. All rights reserved.

  8. Development of a camera casing suited for cryogenic and vacuum applications

    NASA Astrophysics Data System (ADS)

    Delaquis, S. C.; Gornea, R.; Janos, S.; Lüthi, M.; von Rohr, Ch Rudolf; Schenk, M.; Vuilleumier, J.-L.

    2013-12-01

    We report on the design, construction, and operation of a PID temperature controlled and vacuum tight camera casing. The camera casing contains a commercial digital camera and a lighting system. The design of the camera casing and its components is discussed in detail. Pictures taken by this cryo-camera while immersed in argon vapour and liquid nitrogen are presented. The cryo-camera can provide a live view inside cryogenic set-ups and allows video to be recorded.

  9. Synchronizing A Stroboscope With A Video Camera

    NASA Technical Reports Server (NTRS)

    Rhodes, David B.; Franke, John M.; Jones, Stephen B.; Dismond, Harriet R.

    1993-01-01

    Circuit synchronizes flash of light from stroboscope with frame and field periods of video camera. Sync stripper sends vertical-synchronization signal to delay generator, which generates trigger signal. Flashlamp power supply accepts delayed trigger signal and sends pulse of power to flash lamp. Designed for use in making short-exposure images that "freeze" flow in wind tunnel. Also used for making longer-exposure images obtained by use of continuous intense illumination.

  10. Utilizing ISS Camera Systems for Scientific Analysis of Lightning Characteristics and comparison with ISS-LIS and GLM

    NASA Astrophysics Data System (ADS)

    Schultz, C. J.; Lang, T. J.; Leake, S.; Runco, M.; Blakeslee, R. J.

    2017-12-01

    Video and still frame images from cameras aboard the International Space Station (ISS) are used to inspire, educate, and provide a unique vantage point from low-Earth orbit that is second to none; however, these cameras have overlooked capabilities for contributing to scientific analysis of the Earth and near-space environment. The goal of this project is to study how georeferenced video/images from available ISS camera systems can be useful for scientific analysis, using lightning properties as a demonstration. Camera images from the crew cameras and high definition video from the Chiba University Meteor Camera were combined with lightning data from the National Lightning Detection Network (NLDN), the ISS Lightning Imaging Sensor (ISS-LIS), the Geostationary Lightning Mapper (GLM), and lightning mapping arrays. These cameras provide significant spatial resolution advantages (10 times or better) over ISS-LIS and GLM, but with lower temporal resolution; they can therefore serve as a complementary analysis tool for studying lightning and thunderstorm processes from space. Lightning sensor data, Visible Infrared Imaging Radiometer Suite (VIIRS) derived city light maps, and other geographic databases were combined with the ISS attitude and position data to reverse geolocate each image or frame. An open-source Python toolkit has been developed to assist with this effort. Next, the locations and sizes of all flashes in each frame or image were computed and compared with flash characteristics from all available lightning datasets. This allowed for characterization of cloud features below the 4-km and 8-km resolutions of ISS-LIS and GLM, which may reduce the light that reaches the ISS-LIS or GLM sensor. In the case of video, consecutive frames were overlaid to determine the rate of change of the light escaping cloud top. Characterization of the rate of change in geometry, more generally the radius, of light escaping cloud top was integrated with the NLDN, ISS-LIS, and GLM to understand how the peak rate of change and the peak area of each flash aligned with each lightning system in time. Flash features such as leaders could also be inferred from the video frames. Testing is being done to see if leader speeds may be accurately calculated under certain circumstances.

  11. Installing Snowplow Cameras and Integrating Images into MnDOT's Traveler Information System

    DOT National Transportation Integrated Search

    2017-10-01

    In 2015 and 2016, the Minnesota Department of Transportation (MnDOT) installed network video dash- and ceiling-mounted cameras on 226 snowplows, approximately one-quarter of MnDOT's total snowplow fleet. The cameras were integrated with the onboard m...

  12. Helms in FGB/Zarya with cameras

    NASA Image and Video Library

    2001-06-08

    ISS002-E-6526 (8 June 2001) --- Astronaut Susan J. Helms, Expedition Two flight engineer, mounts a video camera onto a bracket in the Zarya or Functional Cargo Block (FGB) of the International Space Station (ISS). The image was recorded with a digital still camera.

  13. Using tri-axial accelerometers to identify wild polar bear behaviors

    USGS Publications Warehouse

    Pagano, Anthony M.; Rode, Karyn D.; Cutting, A.; Owen, M.A.; Jensen, S.; Ware, J.V.; Robbins, C.T.; Durner, George M.; Atwood, Todd C.; Obbard, M.E.; Middel, K.R.; Thiemann, G.W.; Williams, T.M.

    2017-01-01

    Tri-axial accelerometers have been used to remotely identify the behaviors of a wide range of taxa. Assigning behaviors to accelerometer data often involves the use of captive animals or surrogate species, as their accelerometer signatures are generally assumed to be similar to those of their wild counterparts. However, this has rarely been tested. Validated accelerometer data are needed for polar bears Ursus maritimus to understand how habitat conditions may influence behavior and energy demands. We used accelerometer and water conductivity data to remotely distinguish 10 polar bear behaviors. We calibrated accelerometer and conductivity data collected from collars with behaviors observed from video-recorded captive polar bears and brown bears U. arctos, and with video from camera collars deployed on free-ranging polar bears on sea ice and on land. We used random forest models to predict behaviors and found strong ability to discriminate the most common wild polar bear behaviors using a combination of accelerometer and conductivity sensor data from captive or wild polar bears. In contrast, models using data from captive brown bears failed to reliably distinguish most active behaviors in wild polar bears. Our ability to discriminate behavior was greatest when species- and habitat-specific data from wild individuals were used to train models. Data from captive individuals may be suitable for calibrating accelerometers, but may provide reduced ability to discriminate some behaviors. The accelerometer calibrations developed here provide a method to quantify polar bear behaviors to evaluate the impacts of declines in Arctic sea ice.
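
    The modeling step maps per-window accelerometer features (plus the conductivity wet/dry signal) to video-labeled behaviors with a random forest. A sketch with randomly generated stand-in features and labels:

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        # Hypothetical feature rows: summary statistics per time window of
        # tri-axial accelerometer data plus a conductivity flag.
        rng = np.random.default_rng(0)
        X = rng.normal(size=(300, 7))                  # e.g. mean/var per axis + conductivity
        y = rng.choice(["rest", "walk", "swim"], 300)  # labels from video-observed behavior

        clf = RandomForestClassifier(n_estimators=100, oob_score=True).fit(X, y)
        print(clf.oob_score_)                          # out-of-bag accuracy estimate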

  14. Improved head-controlled TV system produces high-quality remote image

    NASA Technical Reports Server (NTRS)

    Goertz, R.; Lindberg, J.; Mingesz, D.; Potts, C.

    1967-01-01

    The manipulator operator uses an improved-resolution TV camera/monitor positioning system to view the remote handling and processing of reactive, flammable, explosive, or contaminated materials. The pan and tilt motions of the camera and monitor are slaved to follow the corresponding motions of the operator's head.

  15. In-flight Video Captured by External Tank Camera System

    NASA Technical Reports Server (NTRS)

    2005-01-01

    In this July 26, 2005 video, Earth slowly fades into the background as the STS-114 Space Shuttle Discovery climbs into space until the External Tank (ET) separates from the orbiter. An ET Camera System featuring a Sony XC-999 model camera provided never before seen footage of the launch and tank separation. The camera was installed in the ET LO2 Feedline Fairing. From this position, the camera had a 40-degree field of view with a 3.5 mm lens. The field of view showed some of the Bipod area, a portion of the LH2 tank and Intertank flange area, and some of the bottom of the shuttle orbiter. Contained in an electronic box, the battery pack and transmitter were mounted on top of the Solid Rocket Booster (SRB) crossbeam inside the ET. The battery pack included 20 Nickel-Metal Hydride batteries (similar to cordless phone battery packs) totaling 28 volts DC and could supply about 70 minutes of video. Located 95 degrees apart on the exterior of the Intertank, opposite the orbiter side, were two blade S-Band antennas about 2 1/2 inches long that transmitted a 10 watt signal to the ground stations. The camera turned on approximately 10 minutes prior to launch and operated for 15 minutes following liftoff. The complete camera system weighs about 32 pounds. Marshall Space Flight Center (MSFC), Johnson Space Center (JSC), Goddard Space Flight Center (GSFC), and Kennedy Space Center (KSC) participated in the design, development, and testing of the ET camera system.

  16. What Counts as Educational Video?: Working toward Best Practice Alignment between Video Production Approaches and Outcomes

    ERIC Educational Resources Information Center

    Winslett, Greg

    2014-01-01

    The twenty years since the first digital video camera was made commercially available has seen significant increases in the use of low-cost, amateur video productions for teaching and learning. In the same period, production and consumption of professionally produced video has also increased, as has the distribution platforms to access it.…

  17. Instrumentation for Aim Point Determination in the Close-in Battle

    DTIC Science & Technology

    2007-12-01

    One way of making a measurement is to mount a small "lipstick" camera to the rifle with a mount similar to the laser-tag transmitter mount. [Figure: rugged camcorder with remote "lipstick" camera (http://www.samsung.com/Products/Camcorder/DigitalMemory/files/scx210wl.pdf)]

  18. Ready for Their Close-Ups

    ERIC Educational Resources Information Center

    Foster, Andrea L.

    2006-01-01

    American college students are increasingly posting videos of their lives online, due to Web sites like Vimeo and Google Video that host video material free and the ubiquity of camera phones and other devices that can take video-clips. However, the growing popularity of online socializing has many safety experts worried that students could be…

  19. Mobile Panoramic Video Applications for Learning

    ERIC Educational Resources Information Center

    Multisilta, Jari

    2014-01-01

    The use of videos on the internet has grown significantly in the last few years. For example, Khan Academy has a large collection of educational videos, especially on STEM subjects, available for free on the internet. Professional panoramic video cameras are expensive and usually not easy to carry because of the large size of the equipment.…

  20. Teacher Self-Captured Video: Learning to See

    ERIC Educational Resources Information Center

    Sherin, Miriam Gamoran; Dyer, Elizabeth B.

    2017-01-01

    Videos are often used for demonstration and evaluation, but a more productive approach would be using video to support teachers' ability to notice and interpret classroom interactions. That requires thinking carefully about the physical aspects of shooting video--where the camera is placed and how easily student interactions can be heard--as well…

  1. 3D-Reconstruction of recent volcanic activity from ROV-video, Charles Darwin Seamounts, Cape Verdes

    NASA Astrophysics Data System (ADS)

    Kwasnitschka, T.; Hansteen, T. H.; Kutterolf, S.; Freundt, A.; Devey, C. W.

    2011-12-01

    As well as providing well-localized samples, Remotely Operated Vehicles (ROVs) produce huge quantities of visual data whose potential for geological data mining has seldom if ever been fully realized. We present a new workflow to derive essential results of field geology such as quantitative stratigraphy and tectonic surveying from ROV-based photo and video material. We demonstrate the procedure on the Charles Darwin Seamounts, a field of small hot spot volcanoes recently identified at a depth of ca. 3500m southwest of the island of Santo Antao in the Cape Verdes. The Charles Darwin Seamounts feature a wide spectrum of volcanic edifices with forms suggestive of scoria cones, lava domes, tuff rings and maar-type depressions, all of comparable dimensions. These forms, coupled with the highly fragmented volcaniclastic samples recovered by dredging, motivated surveying parts of some edifices down to centimeter scale. ROV-based surveys yielded volcaniclastic samples of key structures linked by extensive coverage of stereoscopic photographs and high-resolution video. Based upon the latter, we present our workflow to derive three-dimensional models of outcrops from a single-camera video sequence, allowing quantitative measurements of fault orientation, bedding structure, grain size distribution and photo mosaicking within a geo-referenced framework. With this information we can identify episodes of repetitive eruptive activity at individual volcanic centers and see changes in eruptive style over time, which, despite their proximity to each other, is highly variable.

  2. The Integrated Radiation Mapper Assistant

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carlton, R.E.; Tripp, L.R.

    1995-03-01

    The Integrated Radiation Mapper Assistant (IRMA) system combines state-of-the-art radiation sensors and microprocessor based analysis techniques to perform radiation surveys. Control of the survey function is from a control station located outside the radiation thus reducing time spent in radiation areas performing radiation surveys. The system consists of a directional radiation sensor, a laser range finder, two area radiation sensors, and a video camera mounted on a pan and tilt platform. THis sensor package is deployable on a remotely operated vehicle. The outputs of the system are radiation intensity maps identifying both radiation source intensities and radiation levels throughout themore » room being surveyed. After completion of the survey, the data can be removed from the control station computer for further analysis or archiving.« less

  3. Simulation of parafoil reconnaissance imagery

    NASA Astrophysics Data System (ADS)

    Kogler, Kent J.; Sutkus, Linas; Troast, Douglas; Kisatsky, Paul; Charles, Alain M.

    1995-08-01

    Reconnaissance from unmanned platforms is currently of interest to DoD and civil sectors concerned with drug trafficking and illegal immigration. Platforms employed vary from motorized aircraft to tethered balloons. One approach currently under evaluation deploys a TV camera suspended from a parafoil delivered to the area of interest by a cannon-launched projectile. Imagery is then transmitted to a remote monitor for processing and interpretation. This paper presents results of imagery obtained from simulated parafoil flights, in which software techniques were developed to introduce image degradation caused by atmospheric obscurants and by perturbations of the normal parafoil flight trajectory induced by wind gusts. The approach to capturing continuous motion imagery from captive flight test recordings, the introduction of simulated effects, and the transfer of the processed imagery back to video tape are described.

  4. Mobile Video in Everyday Social Interactions

    NASA Astrophysics Data System (ADS)

    Reponen, Erika; Lehikoinen, Jaakko; Impiö, Jussi

    Video recording has become a spontaneous everyday activity for many people, thanks to the video capabilities of modern mobile phones. Internet connectivity of mobile phones enables fluent sharing of captured material even real-time, which makes video an up-and-coming everyday interaction medium. In this article we discuss the effect of the video camera in the social environment, everyday life situations, mainly based on a study where four groups of people used digital video cameras in their normal settings. We also reflect on another study of ours, relating to real-time mobile video communication and discuss future views. The aim of our research is to understand the possibilities in the domain of mobile video. Live and delayed sharing seem to have their special characteristics, live video being used as a virtual window between places whereas delayed video usage has more scope for good-quality content. While this novel way of interacting via mobile video enables new social patterns, it also raises new concerns for privacy and trust between participating persons in all roles, largely due to the widely spreading possibilities of videos. Video in a social situation affects cameramen (who record), targets (who are recorded), passers-by (who are unintentionally in the situation), and the audience (who follow the videos or recording situations) but also the other way around, the participants affect the video by their varying and evolving personal and communicational motivations for recording.

  5. Reactions to a remote-controlled video-communication robot in seniors' homes: a pilot study of feasibility and acceptance.

    PubMed

    Seelye, Adriana M; Wild, Katherine V; Larimer, Nicole; Maxwell, Shoshana; Kearns, Peter; Kaye, Jeffrey A

    2012-12-01

    Remote telepresence provided by tele-operated robotics represents a new means for obtaining important health information, improving older adults' social and daily functioning and providing peace of mind to family members and caregivers who live remotely. In this study we tested the feasibility of use and acceptance of a remotely controlled robot with video-communication capability in independently living, cognitively intact older adults. A mobile remotely controlled robot with video-communication ability was placed in the homes of eight seniors. The attitudes and preferences of these volunteers and those of family or friends who communicated with them remotely via the device were assessed through survey instruments. Overall experiences were consistently positive, with the exception of one user who subsequently progressed to a diagnosis of mild cognitive impairment. Responses from our participants indicated that in general they appreciated the potential of this technology to enhance their physical health and well-being, social connectedness, and ability to live independently at home. Remote users, who were friends or adult children of the participants, were more likely to test the mobility features and had several suggestions for additional useful applications. Results from the present study showed that a small sample of independently living, cognitively intact older adults and their remote collaterals responded positively to a remote controlled robot with video-communication capabilities. Research is needed to further explore the feasibility and acceptance of this type of technology with a variety of patients and their care contacts.

  6. Fiber optic TV direct

    NASA Technical Reports Server (NTRS)

    Kassak, John E.

    1991-01-01

    The objective of the operational television (OTV) technology was to develop a multiple camera system (up to 256 cameras) for NASA Kennedy installations in which camera video, synchronization, control, and status data are transmitted bidirectionally via a single fiber cable at distances in excess of five miles. It is shown that benefits such as improved video performance, immunity from electromagnetic and radio frequency interference, elimination of repeater stations, and greater system configuration flexibility can be realized by applying the proven fiber optic transmission concept. The control system will merge the lens, pan and tilt, and camera control functions into a modular, Local Area Network (LAN) based control network. Such a system does not exist commercially at present, since the television broadcast industry's current practice is to divorce the positional controls from the camera control system. The application software developed for this system will have direct applicability to similar systems in industry using LAN based control systems.

  7. Optically Remote Noncontact Heart Rates Sensing Technique

    NASA Astrophysics Data System (ADS)

    Thongkongoum, W.; Boonduang, S.; Limsuwan, P.

    2017-09-01

    Heart rate monitoring via an optically remote, noncontact technique is reported in this research. A green laser (5 mW, 532±10 nm) was projected onto the left carotid artery. The laser light reflected onto a screen carried the deviation of the interference patterns, which were recorded by a digital camera. The recorded videos of the interference patterns were analysed frame by frame with two standard digital image processing (DIP) techniques: block matching (BM) and optical flow (OF). The region of interest (ROI) pixels within the interference patterns were analysed for periodic changes of the patterns due to the heart's pumping action. The results of both BM and OF techniques were compared with a reference medical heart rate monitoring device, which performs a contact measurement using the pulse transit technique. The result obtained from the BM technique was 74.67 bpm (beats per minute) and from the OF technique 75.95 bpm. When compared with the reference value of 75.43±1 bpm, the errors were found to be 1.01% and 0.69%, respectively.
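
    Once a per-frame ROI signal has been extracted by either DIP technique, the heart rate can be read off as the dominant spectral peak in the physiological band. A sketch assuming a hypothetical saved signal and a 30 fps camera:

        import numpy as np

        fps = 30.0                                   # camera frame rate
        signal = np.load("roi_mean_intensity.npy")   # hypothetical per-frame ROI mean

        detrended = signal - signal.mean()
        spectrum = np.abs(np.fft.rfft(detrended))
        freqs = np.fft.rfftfreq(len(detrended), d=1 / fps)

        band = (freqs > 0.7) & (freqs < 3.0)         # plausible heart rates: 42-180 bpm
        bpm = 60.0 * freqs[band][np.argmax(spectrum[band])]
        print(f"estimated heart rate: {bpm:.1f} bpm")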

  8. Remote photoplethysmography system for unsupervised monitoring regional anesthesia effectiveness

    NASA Astrophysics Data System (ADS)

    Rubins, U.; Miscuks, A.; Marcinkevics, Z.; Lange, M.

    2017-12-01

    Determining the level of regional anesthesia (RA) is vitally important to both the anesthesiologist and the surgeon; knowing the RA level can protect the patient and reduce the time of surgery. To detect the level of RA, simple subjective methods (sensitivity tests) and complicated quantitative methods (thermography, neuromyography, etc.) are normally used, but there is not yet a standardized method for objective RA detection and evaluation. In this study, an advanced remote photoplethysmography imaging (rPPG) system for unsupervised monitoring of RA in the human palm is demonstrated. The rPPG system comprises a compact video camera with a green optical filter, a surgical lamp as the light source, and a computer with custom-developed software. The algorithm, implemented in Matlab, recognizes the palm and two dermatomes (median and ulnar innervation) and calculates the perfusion map and perfusion changes in real time to detect the effect of RA. Seven patients (aged 18-80 years) undergoing hand surgery received peripheral nerve brachial plexus blocks during the measurements. Clinical experiments showed that our rPPG system is able to perform unsupervised monitoring of RA.

  9. Processing Ocean Images to Detect Large Drift Nets

    NASA Technical Reports Server (NTRS)

    Veenstra, Tim

    2009-01-01

    A computer program processes the digitized outputs of a set of downward-looking video cameras aboard an aircraft flying over the ocean. The purpose served by this software is to facilitate the detection of large drift nets that have been lost, abandoned, or jettisoned. The development of this software and of the associated imaging hardware is part of a larger effort to develop means of detecting and removing large drift nets before they cause further environmental damage to the ocean and to shores on which they sometimes impinge. The software is capable of near-real-time processing of as many as three video feeds at a rate of 30 frames per second. After a user sets the parameters of an adjustable algorithm, the software analyzes each video stream, detects any anomaly, issues a command to point a high-resolution camera toward the location of the anomaly, and, once the camera has been so aimed, issues a command to trigger the camera shutter. The resulting high-resolution image is digitized, and the resulting data are automatically uploaded to the operator's computer for analysis.

  10. Progress in passive submillimeter-wave video imaging

    NASA Astrophysics Data System (ADS)

    Heinz, Erik; May, Torsten; Born, Detlef; Zieger, Gabriel; Peiselt, Katja; Zakosarenko, Vyacheslav; Krause, Torsten; Krüger, André; Schulz, Marco; Bauer, Frank; Meyer, Hans-Georg

    2014-06-01

    Since 2007 we have been developing passive submillimeter-wave video cameras for personal security screening. In contrast to established portal-based millimeter-wave scanning techniques, these are suitable for stand-off or stealth operation. The cameras operate in the 350 GHz band and use arrays of superconducting transition-edge sensors (TES), reflector optics, and opto-mechanical scanners. Whereas the basic principle of these devices remains unchanged, there has been continuous development of the technical details, such as the detector array, the scanning scheme, and the readout, as well as system integration and performance. The latest prototype of this camera development features a linear array of 128 detectors and a linear scanner capable of a 25 Hz frame rate. Using different types of reflector optics, a field of view of 1 × 2 m2 and a spatial resolution of 1-2 cm are provided at object distances of about 5-25 m. We present the concept of this camera and give details on system design and performance. Demonstration videos show its capability for hidden threat detection and illustrate possible application scenarios.

  11. Automatic treatment of flight test images using modern tools: SAAB and Aeritalia joint approach

    NASA Astrophysics Data System (ADS)

    Kaelldahl, A.; Duranti, P.

    The use of onboard cine cameras, as well as of on-ground cinetheodolites, is very popular in flight tests. The high resolution of film and the high frame rate of cine cameras are still not exceeded by video technology. Video technology can successfully enter the flight test scenario once the availability of solid-state optical sensors dramatically reduces the dimensions and weight of TV cameras, thus allowing them to be located in positions compatible with space or operational limitations (e.g., HUD cameras). A proper combination of cine and video cameras is the typical solution for a complex flight test program. The output of such devices is very helpful in many flight areas. Several successful applications of this technology are summarized. Analysis of the large amount of data produced (frames of images) requires a very long time and is normally carried out manually. In order to improve the situation, in the last few years several flight test centers have devoted their attention to techniques that allow quicker and more effective image treatment.

  12. A unified and efficient framework for court-net sports video analysis using 3D camera modeling

    NASA Astrophysics Data System (ADS)

    Han, Jungong; de With, Peter H. N.

    2007-01-01

    The extensive amount of video data stored on available media (hard and optical disks) necessitates video content analysis, which is a cornerstone for different user-friendly applications, such as, smart video retrieval and intelligent video summarization. This paper aims at finding a unified and efficient framework for court-net sports video analysis. We concentrate on techniques that are generally applicable for more than one sports type to come to a unified approach. To this end, our framework employs the concept of multi-level analysis, where a novel 3-D camera modeling is utilized to bridge the gap between the object-level and the scene-level analysis. The new 3-D camera modeling is based on collecting features points from two planes, which are perpendicular to each other, so that a true 3-D reference is obtained. Another important contribution is a new tracking algorithm for the objects (i.e. players). The algorithm can track up to four players simultaneously. The complete system contributes to summarization by various forms of information, of which the most important are the moving trajectory and real-speed of each player, as well as 3-D height information of objects and the semantic event segments in a game. We illustrate the performance of the proposed system by evaluating it for a variety of court-net sports videos containing badminton, tennis and volleyball, and we show that the feature detection performance is above 92% and events detection about 90%.

  13. Design and Smartphone-Based Implementation of a Chaotic Video Communication Scheme via WAN Remote Transmission

    NASA Astrophysics Data System (ADS)

    Lin, Zhuosheng; Yu, Simin; Li, Chengqing; Lü, Jinhu; Wang, Qianxue

    This paper proposes a chaotic secure video remote communication scheme that can operate over real WAN networks and implements it on a smartphone hardware platform. First, a joint encryption and compression scheme is designed by embedding a chaotic encryption scheme into the MJPG-Streamer source code. Then, multiuser smartphone communications between the sender and the receiver are implemented via WAN remote transmission. Finally, the transmitted video data are received at the given IP address and port on an Android smartphone. It should be noted that this is the first time chaotic video encryption schemes have been implemented on such a hardware platform. The experimental results demonstrate that the technical challenges of implementing secure video communication in hardware are successfully solved, reaching a balance among a sufficient security level, real-time processing of massive video data, and utilization of the available resources in the hardware environment. The proposed scheme can serve as a good application example of chaotic secure communications for smartphones and other mobile devices in the future.
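
    The abstract does not spell out the cipher itself, so the following is only a generic illustration of the chaotic stream-cipher idea behind such schemes: a logistic-map keystream XORed over each compressed frame, with the same function serving for encryption and decryption. The map, parameters, and keying below are assumptions; a production scheme embedded in the MJPG-Streamer sources would also have to preserve the stream's framing and use a hardened key schedule.

```python
def logistic_keystream(x0, n, r=3.99):
    """Generate n keystream bytes from the logistic map x <- r*x*(1-x)."""
    out = bytearray(n)
    x = x0
    for i in range(n):
        x = r * x * (1.0 - x)
        out[i] = int(x * 256) & 0xFF
    return bytes(out)

def xor_frame(frame_bytes, key_x0):
    """Encrypt or decrypt one MJPEG frame; XOR is its own inverse.
    NOTE: a real scheme must re-key per frame to avoid keystream reuse."""
    ks = logistic_keystream(key_x0, len(frame_bytes))
    return bytes(a ^ b for a, b in zip(frame_bytes, ks))

frame = b"\xff\xd8 example jpeg payload \xff\xd9"  # stand-in frame data
key = 0.3141592653589793                           # shared secret x0
cipher = xor_frame(frame, key)
assert xor_frame(cipher, key) == frame             # round-trip check
```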

  14. Scoliosis surgery - child

    MedlinePlus

    Spinal curvature surgery - child; Kyphoscoliosis surgery - child; Video-assisted thoracoscopic surgery - child; VATS - child ... may also do the procedure using a special video camera. A surgical cut in the back is ...

  15. Use of remote video microscopy (telepathology) as an adjunct to neurosurgical frozen section consultation.

    PubMed

    Becker, R L; Specht, C S; Jones, R; Rueda-Pedraza, M E; O'Leary, T J

    1993-08-01

    We investigated the use of remote video microscopy (telepathology) to assist in the diagnosis of 52 neurosurgical frozen section cases. The TelMed system (Discovery Medical Systems, Overland Park, KS), in which the referring pathologist selects appropriate fields for transmission to the consultant, was used for the study. There was a high degree of concordance between the diagnosis rendered on the basis of transmitted video images and that rendered on the basis of direct evaluation of frozen sections; however, in seven cases there was substantial disagreement. Remote evaluation was associated with a more rapid consultation from the standpoint of the consultant, who spent approximately 2 minutes less per case when using remote microscopy; this was achieved at the expense of considerably greater effort on the part of the referring pathologist, who spent approximately 16 minutes per case selecting an average of 4.5 images for transmission to the consultant. The use of remote video microscopy for pathology consultation is associated with a complex series of tradeoffs involving cost, information loss, and timeliness of consultation.

  16. Using oblique digital photography for alluvial sandbar monitoring and low-cost change detection

    USGS Publications Warehouse

    Tusso, Robert B.; Buscombe, Daniel D.; Grams, Paul E.

    2015-01-01

    The maintenance of alluvial sandbars is a longstanding management interest along the Colorado River in Grand Canyon. Resource managers are interested in both the long-term trend in sandbar condition and the short-term response to management actions, such as intentional controlled floods released from Glen Canyon Dam. Long-term monitoring is accomplished at a range of scales by a combination of annual topographic surveys at selected sites, daily collection of images from those sites using novel, autonomously operating digital camera systems (hereafter referred to as 'remote cameras'), and quadrennial remote sensing of sandbars canyonwide. In this paper, we present results derived from the remote-camera images on daily changes in sandbar topography.
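
    As a minimal illustration of the kind of low-cost change detection such daily images permit, the sketch below differences two grayscale frames and flags pixels whose intensity changed by more than a threshold. The file names and threshold are hypothetical, and the images are assumed to be already co-registered (roughly true for a fixed remote camera); the actual USGS processing chain is not described in the abstract.

```python
import numpy as np
from PIL import Image

def change_mask(path_before, path_after, threshold=30):
    """Boolean mask of pixels whose grayscale value moved by > threshold."""
    a = np.asarray(Image.open(path_before).convert("L"), dtype=np.int16)
    b = np.asarray(Image.open(path_after).convert("L"), dtype=np.int16)
    return np.abs(b - a) > threshold

# Hypothetical file names for two consecutive daily images:
mask = change_mask("sandbar_day1.jpg", "sandbar_day2.jpg")
print(f"{mask.mean():.1%} of pixels changed")
```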

  17. Information acquisition from audio-video-data sources: an experimental study on remote diagnosis. The LOTAS Group.

    PubMed

    Xiao, Y; MacKenzie, C; Orasanu, J; Spencer, R; Rahman, A; Gunawardane, V

    1999-01-01

    To determine what information sources are used during a remote diagnosis task. Experienced trauma care providers viewed segments of videotaped initial trauma patient resuscitation and airway management. Experiment 1 collected responses from anesthesiologists to probing questions during and after the presentation of recorded video materials. Experiment 2 collected the responses from three types of care providers (anesthesiologists, nurses, and surgeons). Written and verbal responses were scored according to detection of critical events in video materials and categorized according to their content. Experiment 3 collected visual scanning data using an eyetracker during the viewing of recorded video materials from the three types of care providers. Eye-gaze data were analyzed in terms of focus on various parts of the videotaped materials. Care providers were found to be unable to detect several critical events. The three groups of subjects studied (anesthesiologists, nurses, and surgeons) focused on different aspects of videotaped materials. When the remote events and activities are multidisciplinary and rapidly changing, experts linked with audio-video-data connections may encounter difficulties in comprehending remote activities, and their information usage may be biased. Special training is needed for the remote decision-maker to appreciate tasks outside his or her speciality and beyond the boundaries of traditional divisions of labor.

  18. Using Digital Time-Lapse Videos to Teach Geomorphic Processes to Undergraduates

    NASA Astrophysics Data System (ADS)

    Clark, D. H.; Linneman, S. R.; Fuller, J.

    2004-12-01

    We demonstrate the use of relatively low-cost, computer-based digital imagery to create time-lapse videos of two distinct geomorphic processes in order to help students grasp the significance of the rates, styles, and temporal dependence of geologic phenomena. Student interviews indicate that such videos help them understand the relationship between processes and landform development. Time-lapse videos have been used extensively in some sciences (e.g., biology - http://sbcf.iu.edu/goodpract/hangarter.html, meteorology - http://www.apple.com/education/hed/aua0101s/meteor/, chemistry - http://www.chem.yorku.ca/profs/hempsted/chemed/home.html) to demonstrate gradual processes that are difficult for many students to visualize. Most geologic processes are slower still, and consequently even more difficult for students to grasp, yet time-lapse videos are rarely used in earth science classrooms. The advent of inexpensive web-cams and computers provides a new means to explore the temporal dimension of earth surface processes. To test the use of time-lapse videos in geoscience education, we are developing time-lapse movies that record the evolution of two landforms: a stream-table delta and a large, natural, active landslide. The former involves well-known processes in a controlled, repeatable laboratory experiment, whereas the latter tracks the developing dynamics of an otherwise poorly understood slope failure. The stream-table delta is small and grows in ca. 2 days; we capture a frame on an overhead web-cam every 3 minutes. Before seeing the video, students are asked to hypothesize how the delta will grow through time. The final time-lapse video, ca. 20-80 MB, elegantly shows channel migration, progradation rates, and the formation of major geomorphic elements (topset, foreset, and bottomset beds). The web-cam can also be zoomed in to show smaller-scale processes, such as bedload transfer and foreset slumping. Post-lab tests and interviews with students indicate that these time-lapse videos significantly improve student interest in the material and comprehension of the processes. In contrast, the natural landslide is relatively unconstrained, and its processes of movement, both gradual and catastrophic, are essentially impossible to observe directly without the aid of time-lapse imagery. We are constructing a remote digital camera, mounted in a tree, that will capture 1-2 photos/day of the toe. The toe is extremely active geomorphically, and the time-lapse movie should help us (and the students) constrain the style, frequency, and rates of movement, surface slumping, and debris-flow generation. Because we have also installed a remote weather station on the landslide, we will be able to test the links between these processes and local climate conditions.
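
    A minimal version of the stream-table capture rig can be assembled from a web-cam and OpenCV, as sketched below: grab one frame every three minutes (the interval quoted above) and append it to a movie that plays back at normal video rate, compressing the roughly two-day experiment into about half a minute. The camera index, codec, and output name are assumptions.

```python
import time
import cv2

INTERVAL_S = 180            # one frame every 3 minutes (from the abstract)
N_FRAMES = 960              # ~2 days of delta growth at that interval

cam = cv2.VideoCapture(0)   # first attached web-cam (assumed index)
ok, frame = cam.read()
assert ok, "camera not available"
h, w = frame.shape[:2]
writer = cv2.VideoWriter("delta_timelapse.avi",
                         cv2.VideoWriter_fourcc(*"MJPG"),
                         30, (w, h))  # 30 fps playback compresses time

for _ in range(N_FRAMES):
    ok, frame = cam.read()
    if ok:
        writer.write(frame)
    time.sleep(INTERVAL_S)

writer.release()
cam.release()
```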

  19. A digital underwater video camera system for aquatic research in regulated rivers

    USGS Publications Warehouse

    Martin, Benjamin M.; Irwin, Elise R.

    2010-01-01

    We designed a digital underwater video camera system to monitor nesting centrarchid behavior in the Tallapoosa River, Alabama, 20 km below a peaking hydropower dam with a highly variable flow regime. The major components of the system were a digital video recorder, multiple underwater cameras, and specially fabricated substrate stakes. The innovative design of the substrate stakes allowed us to effectively observe nesting redbreast sunfish Lepomis auritus in a highly regulated river. The stakes, which were constructed for the specific substratum complex (i.e., sand, gravel, and cobble) identified at our study site, withstood a discharge of approximately 300 m³/s and allowed us to simultaneously record 10 active nests before and during water releases from the dam. We believe our technique will be valuable for other researchers who work in regulated rivers to quantify the behavior of aquatic fauna in response to discharge disturbances.

  20. Video Mosaicking for Inspection of Gas Pipelines

    NASA Technical Reports Server (NTRS)

    Magruder, Darby; Chien, Chiun-Hong

    2005-01-01

    A vision system that includes a specially designed video camera and an image-data-processing computer is under development as a prototype of robotic systems for visual inspection of the interior surfaces of pipes and especially of gas pipelines. The system is capable of providing both forward views and mosaicked radial views that can be displayed in real time or after inspection. To avoid the complexities associated with moving parts and to provide simultaneous forward and radial views, the video camera is equipped with a wide-angle (>165°) fisheye lens aimed along the axis of a pipe to be inspected. Nine white-light-emitting diodes (LEDs) placed just outside the field of view of the lens (see Figure 1) provide ample diffuse illumination for a high-contrast image of the interior pipe wall. The video camera contains a 2/3-in. (1.7-cm) charge-coupled-device (CCD) photodetector array and functions according to the National Television System Committee (NTSC) standard. The video output of the camera is sent to an off-the-shelf video capture board (frame grabber) by use of a peripheral component interconnect (PCI) interface in the computer, which is of the 400-MHz, Pentium II (or equivalent) class. Prior video-mosaicking techniques are applicable to narrow-field-of-view (low-distortion) images of evenly illuminated, relatively flat surfaces viewed along approximately perpendicular lines by cameras that do not rotate and that move approximately parallel to the viewed surfaces. One such technique for real-time creation of mosaic images of the ocean floor involves the use of visual correspondences based on area correlation, during both the acquisition of separate images of adjacent areas and the consolidation (equivalently, integration) of the separate images into a mosaic image, in order to ensure that there are no gaps in the mosaic image. The data-processing technique used for mosaicking in the present system also involves area correlation, but with several notable differences. Because the wide-angle lens introduces considerable distortion, the image data must be processed to effectively unwarp the images (see Figure 2); the computer executes special software that includes an unwarping algorithm that takes explicit account of the cylindrical pipe geometry. To reduce the processing time needed for unwarping, the parameters of the geometric mapping between the circular view of the fisheye lens and the pipe wall are determined in advance from calibration images and compiled into an electronic lookup table. The software incorporates the assumption that the optical axis of the camera is parallel (rather than perpendicular) to the direction of motion of the camera, and it also compensates for the decrease in illumination with distance from the ring of LEDs.
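
    The lookup-table unwarping lends itself to a compact sketch: for every pixel of the output pipe-wall strip (columns indexing angle around the pipe, rows indexing axial distance), precompute where an assumed equidistant fisheye model, r = f·θ, places that point in the raw image, then apply the table with cv2.remap. The focal constant, image center, pipe radius, and axial range below are hypothetical stand-ins for the values the actual system derives from calibration images.

```python
import numpy as np
import cv2

F_PIX = 200.0               # fisheye focal constant, px/rad (assumed)
CX, CY = 320.0, 240.0       # center of the fisheye circle, px (assumed)
PIPE_R = 0.15               # pipe radius, m (assumed)
Z_NEAR, Z_FAR = 0.10, 1.00  # axial span of wall to unwarp, m (assumed)
OUT_W, OUT_H = 720, 240     # output strip: columns = angle, rows = distance

# A wall point at angle phi and axial distance z lies at ray angle
# theta = atan2(R, z) off the optical axis; the equidistant model maps
# it to image radius r = F_PIX * theta at azimuth phi.
phi = np.linspace(0.0, 2.0 * np.pi, OUT_W, endpoint=False)
z = np.linspace(Z_NEAR, Z_FAR, OUT_H)
PHI, Z = np.meshgrid(phi, z)
R_IMG = F_PIX * np.arctan2(PIPE_R, Z)
map_x = (CX + R_IMG * np.cos(PHI)).astype(np.float32)
map_y = (CY + R_IMG * np.sin(PHI)).astype(np.float32)

raw = cv2.imread("pipe_frame.png")       # one captured frame (placeholder)
flat = cv2.remap(raw, map_x, map_y, cv2.INTER_LINEAR)
cv2.imwrite("pipe_unwarped.png", flat)   # strip ready for mosaicking
```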
