NASA Astrophysics Data System (ADS)
Chen, Jin; Wang, Yifan; Wang, Xuelei; Wang, Yuehong; Hu, Rui
2017-01-01
Combine harvester usually works in sparsely populated areas with harsh environment. In order to achieve the remote real-time video monitoring of the working state of combine harvester. A remote video monitoring system based on ARM11 and embedded Linux is developed. The system uses USB camera for capturing working state video data of the main parts of combine harvester, including the granary, threshing drum, cab and cut table. Using JPEG image compression standard to compress video data then transferring monitoring screen to remote monitoring center over the network for long-range monitoring and management. At the beginning of this paper it describes the necessity of the design of the system. Then it introduces realization methods of hardware and software briefly. And then it describes detailedly the configuration and compilation of embedded Linux operating system and the compiling and transplanting of video server program are elaborated. At the end of the paper, we carried out equipment installation and commissioning on combine harvester and then tested the system and showed the test results. In the experiment testing, the remote video monitoring system for combine harvester can achieve 30fps with the resolution of 800x600, and the response delay in the public network is about 40ms.
Evaluation of a video image detection system : final report.
DOT National Transportation Integrated Search
1994-05-01
A video image detection system (VIDS) is an advanced wide-area traffic monitoring system : that processes input from a video camera. The Autoscope VIDS coupled with an information : management system was selected as the monitoring device because test...
Video Monitoring a Simulation-Based Quality Improvement Program in Bihar, India.
Dyer, Jessica; Spindler, Hilary; Christmas, Amelia; Shah, Malay Bharat; Morgan, Melissa; Cohen, Susanna R; Sterne, Jason; Mahapatra, Tanmay; Walker, Dilys
2018-04-01
Simulation-based training has become an accepted clinical training andragogy in high-resource settings with its use increasing in low-resource settings. Video recordings of simulated scenarios are commonly used by facilitators. Beyond using the videos during debrief sessions, researchers can also analyze the simulation videos to quantify technical and nontechnical skills during simulated scenarios over time. Little is known about the feasibility and use of large-scale systems to video record and analyze simulation and debriefing data for monitoring and evaluation in low-resource settings. This manuscript describes the process of designing and implementing a large-scale video monitoring system. Mentees and Mentors were consented and all simulations and debriefs conducted at 320 Primary Health Centers (PHCs) were video recorded. The system design, number of video recordings, and inter-rater reliability of the coded videos were assessed. The final dataset included a total of 11,278 videos. Overall, a total of 2,124 simulation videos were coded and 183 (12%) were blindly double-coded. For the double-coded sample, the average inter-rater reliability (IRR) scores were 80% for nontechnical skills, and 94% for clinical technical skills. Among 4,450 long debrief videos received, 216 were selected for coding and all were double-coded. Data quality of simulation videos was found to be very good in terms of recorded instances of "unable to see" and "unable to hear" in Phases 1 and 2. This study demonstrates that video monitoring systems can be effectively implemented at scale in resource limited settings. Further, video monitoring systems can play several vital roles within program implementation, including monitoring and evaluation, provision of actionable feedback to program implementers, and assurance of program fidelity.
Computer-aided video exposure monitoring.
Walsh, P T; Clark, R D; Flaherty, S; Gentry, S J
2000-01-01
A computer-aided video exposure monitoring system was used to record exposure information. The system comprised a handheld camcorder, portable video cassette recorder, radio-telemetry transmitter/receiver, and handheld or notebook computers for remote data logging, photoionization gas/vapor detectors (PIDs), and a personal aerosol monitor. The following workplaces were surveyed using the system: dry cleaning establishments--monitoring tetrachoroethylene in the air and in breath; printing works--monitoring white spirit type solvent; tire manufacturing factory--monitoring rubber fume; and a slate quarry--monitoring respirable dust and quartz. The system based on the handheld computer, in particular, simplified the data acquisition process compared with earlier systems in use by our laboratory. The equipment is more compact and easier to operate, and allows more accurate calibration of the instrument reading on the video image. Although a variety of data display formats are possible, the best format for videos intended for educational and training purposes was the review-preview chart superimposed on the video image of the work process. Recommendations for reducing exposure by engineering or by modifying work practice were possible through use of the video exposure system in the dry cleaning and tire manufacturing applications. The slate quarry work illustrated how the technique can be used to test ventilation configurations quickly to see their effect on the worker's personal exposure.
A new method for wireless video monitoring of bird nests
David I. King; Richard M. DeGraaf; Paul J. Champlin; Tracey B. Champlin
2001-01-01
Video monitoring of active bird nests is gaining popularity among researchers because it eliminates many of the biases associated with reliance on incidental observations of predation events or use of artificial nests, but the expense of video systems may be prohibitive. Also, the range and efficiency of current video monitoring systems may be limited by the need to...
ERIC Educational Resources Information Center
Hayes, John; Pulliam, Robert
A video performance monitoring system was developed by the URS/Matrix Company, under contract to the USAF Human Resources Laboratory and was evaluated experimentally in three technical training settings. Using input from 1 to 8 video cameras, the system provided a flexible combination of signal processing, direct monitor, recording and replay…
Kim, Young Ju; Xiao, Yan; Hu, Peter; Dutton, Richard
2009-08-01
To understand staff acceptance of a remote video monitoring system for operating room (OR) coordination. Improved real-time remote visual access to OR may enhance situational awareness but also raises privacy concerns for patients and staff. Survey. A system was implemented in a six-room surgical suite to display OR monitoring video at an access restricted control desk area. Image quality was manipulated to improve staff acceptance. Two months after installation, interviews and a survey were conducted on staff acceptance of video monitoring. About half of all OR personnel responded (n = 63). Overall levels of concerns were low, with 53% rated no concerns and 42% little concern. Top two reported uses of the video were to see if cases are finished and to see if a room is ready. Viewing the video monitoring system as useful did not reduce levels of concern. Staff in supervisory positions perceived less concern about the system's impact on privacy than did those supervised (p < 0.03). Concerns for patient privacy correlated with concerns for staff privacy and performance monitoring. Technical means such as manipulating image quality helped staff acceptance. Manipulation of image quality resulted overall acceptance of monitoring video, with residual levels of concerns. OR nurses may express staff privacy concern in the form of concerns over patient privacy. This study provided suggestions for technological and implementation strategies of video monitoring for coordination use in OR. Deployment of communication technology and integration of clinical information will likely raise concerns over staff privacy and performance monitoring. The potential gain of increased information access may be offset by negative impact of a sense of loss of autonomy.
Video monitoring system for car seat
NASA Technical Reports Server (NTRS)
Elrod, Susan Vinz (Inventor); Dabney, Richard W. (Inventor)
2004-01-01
A video monitoring system for use with a child car seat has video camera(s) mounted in the car seat. The video images are wirelessly transmitted to a remote receiver/display encased in a portable housing that can be removably mounted in the vehicle in which the car seat is installed.
Microcomputer Selection Guide for Construction Field Offices. Revision.
1984-09-01
the system, and the monitor displays information on a video display screen. Microcomputer systems today are available in a variety of configura- tions...background. White on black monitors report- edly caule more eye fatigue, while amber is reported to cause the least eye fatigue. Reverse video ...The video should be amber or green display with a resolution of at least 640 x 200 dots per in. Additional features of the monitor include an
Parihar, Vijay; Yadav, Y R; Kher, Yatin; Ratre, Shailendra; Sethi, Ashish; Sharma, Dhananjaya
2016-01-01
Steep learning curve is found initially in pure endoscopic procedures. Video telescopic operating monitor (VITOM) is an advance in rigid-lens telescope systems provides an alternative method for learning basics of neuroendoscopy with the help of the familiar principle of microneurosurgery. The aim was to evaluate the clinical utility of VITOM as a learning tool for neuroendoscopy. Video telescopic operating monitor was used 39 cranial and spinal procedures and its utility as a tool for minimally invasive neurosurgery and neuroendoscopy for initial learning curve was studied. Video telescopic operating monitor was used in 25 cranial and 14 spinal procedures. Image quality is comparable to endoscope and microscope. Surgeons comfort improved with VITOM. Frequent repositioning of scope holder and lack of stereopsis is initial limiting factor was compensated for with repeated procedures. Video telescopic operating monitor is found useful to reduce initial learning curve of neuroendoscopy.
Secure and Efficient Reactive Video Surveillance for Patient Monitoring.
Braeken, An; Porambage, Pawani; Gurtov, Andrei; Ylianttila, Mika
2016-01-02
Video surveillance is widely deployed for many kinds of monitoring applications in healthcare and assisted living systems. Security and privacy are two promising factors that align the quality and validity of video surveillance systems with the caliber of patient monitoring applications. In this paper, we propose a symmetric key-based security framework for the reactive video surveillance of patients based on the inputs coming from data measured by a wireless body area network attached to the human body. Only authenticated patients are able to activate the video cameras, whereas the patient and authorized people can consult the video data. User and location privacy are at each moment guaranteed for the patient. A tradeoff between security and quality of service is defined in order to ensure that the surveillance system gets activated even in emergency situations. In addition, the solution includes resistance against tampering with the device on the patient's side.
Secure and Efficient Reactive Video Surveillance for Patient Monitoring
Braeken, An; Porambage, Pawani; Gurtov, Andrei; Ylianttila, Mika
2016-01-01
Video surveillance is widely deployed for many kinds of monitoring applications in healthcare and assisted living systems. Security and privacy are two promising factors that align the quality and validity of video surveillance systems with the caliber of patient monitoring applications. In this paper, we propose a symmetric key-based security framework for the reactive video surveillance of patients based on the inputs coming from data measured by a wireless body area network attached to the human body. Only authenticated patients are able to activate the video cameras, whereas the patient and authorized people can consult the video data. User and location privacy are at each moment guaranteed for the patient. A tradeoff between security and quality of service is defined in order to ensure that the surveillance system gets activated even in emergency situations. In addition, the solution includes resistance against tampering with the device on the patient’s side. PMID:26729130
WISESight : a multispectral smart video-track intrusion monitor.
DOT National Transportation Integrated Search
2015-05-01
International Electronic Machines : Corporation (IEM) developed, tested, and : validated a unique smart video-based : intrusion monitoring system for use at : highway-rail grade crossings. The system : used both thermal infrared (IR) and : visible/ne...
Research of real-time video processing system based on 6678 multi-core DSP
NASA Astrophysics Data System (ADS)
Li, Xiangzhen; Xie, Xiaodan; Yin, Xiaoqiang
2017-10-01
In the information age, the rapid development in the direction of intelligent video processing, complex algorithm proposed the powerful challenge on the performance of the processor. In this article, through the FPGA + TMS320C6678 frame structure, the image to fog, merge into an organic whole, to stabilize the image enhancement, its good real-time, superior performance, break through the traditional function of video processing system is simple, the product defects such as single, solved the video application in security monitoring, video, etc. Can give full play to the video monitoring effectiveness, improve enterprise economic benefits.
NASA Astrophysics Data System (ADS)
Archetti, Renata; Vacchi, Matteo; Carniel, Sandro; Benetazzo, Alvise
2013-04-01
Measuring the location of the shoreline and monitoring foreshore changes through time represent a fundamental task for correct coastal management at many sites around the world. Several authors demonstrated video systems to be an essential tool for increasing the amount of data available for coastline management. These systems typically sample at least once per hour and can provide long-term datasets showing variations over days, events, months, seasons and years. In the past few years, due to the wide diffusion of video cameras at relatively low price, the use of video cameras and of video images analysis for environmental control has increased significantly. Even if video monitoring systems were often used in the research field they are most often applied with practical purposes including: i) identification and quantification of shoreline erosion, ii) assessment of coastal protection structure and/or beach nourishment performance, and iii) basic input to engineering design in the coastal zone iv) support for integrated numerical model validation Here we present the guidelines for the creation of a new video monitoring network in the proximity of the Jesolo beach (NW of the Adriatic Sea, Italy), Within this 10 km-long tourist district several engineering structures have been built in recent years, with the aim of solving urgent local erosion problems; as a result, almost all types of protection structures are present at this site: groynes, detached breakwaters.The area investigated experienced severe problems of coastal erosion in the past decades, inclusding a major one in the last November 2012. The activity is planned within the framework of the RITMARE project, that is also including other monitoring and scientific activities (bathymetry survey, waves and currents measurements, hydrodynamics and morphodynamic modeling). This contribution focuses on best practices to be adopted in the creation of the video monitoring system, and briefly describes the architectural design of the network, the creation of a database of images, the information extracted by the videomonitoring and its integration with other data.
Tsunoda, Koichi; Tsunoda, Atsunobu; Ishimoto, ShinnIchi; Kimura, Satoko
2006-01-01
The exclusive charge-coupled device (CCD) camera system for the endoscope and electronic fiberscopes are in widespread use. However, both are usually stationary in an office or examination room, and a wheeled cart is needed for mobility. The total costs of the CCD camera system and electronic fiberscopy system are at least US Dollars 10,000 and US Dollars 30,000, respectively. Recently, the performance of audio and visual instruments has improved dramatically, with a concomitant reduction in their cost. Commercially available CCD video cameras with small monitors have become common. They provide excellent image quality and are much smaller and less expensive than previous models. The authors have developed adaptors for the popular mini-digital video (mini-DV) camera. The camera also provides video and acoustic output signals; therefore, the endoscopic images can be viewed on a large monitor simultaneously. The new system (a mini-DV video camera and an adaptor) costs only US Dollars 1,000. Therefore, the system is both cost-effective and useful for the outpatient clinic or casualty setting, or on house calls for the purpose of patient education. In the future, the authors plan to introduce the clinical application of a high-vision camera and an infrared camera as medical instruments for clinical and research situations.
Martin, Caroline J Hollins; Kenney, Laurence; Pratt, Thomas; Granat, Malcolm H
2015-01-01
There is limited understanding of the type and extent of maternal postures that midwives should encourage or support during labor. The aims of this study were to identify a set of postures and movements commonly seen during labor, to develop an activity monitoring system for use during labor, and to validate this system design. Volunteer student midwives simulated maternal activity during labor in a laboratory setting. Participants (N = 15) wore monitors adhered to the left thigh and left shank, and adopted 13 common postures of laboring women for 3 minutes each. Simulated activities were recorded using a video camera. Postures and movements were coded from the video, and statistical analysis conducted of agreement between coded video data and outputs of the activity monitoring system. Excellent agreement between the 2 raters of the video recordings was found (Cohen's κ = 0.95). Both sensitivity and specificity of the activity monitoring system were greater than 80% for standing, lying, kneeling, and sitting (legs dangling). This validated system can be used to measure elected activity of laboring women and report on effects of postures on length of first stage, pain experience, birth satisfaction, and neonatal condition. This validated maternal posture-monitoring system is available as a reference-and for use by researchers who wish to develop research in this area. © 2015 by the American College of Nurse-Midwives.
Storing Data and Video on One Tape
NASA Technical Reports Server (NTRS)
Nixon, J. H.; Cater, J. P.
1985-01-01
Microprocessor-based system originally developed for anthropometric research merges digital data with video images for storage on video cassette recorder. Combined signals later retrieved and displayed simultaneously on television monitor. System also extracts digital portion of stored information and transfers it to solid-state memory.
Movable Cameras And Monitors For Viewing Telemanipulator
NASA Technical Reports Server (NTRS)
Diner, Daniel B.; Venema, Steven C.
1993-01-01
Three methods proposed to assist operator viewing telemanipulator on video monitor in control station when video image generated by movable video camera in remote workspace of telemanipulator. Monitors rotated or shifted and/or images in them transformed to adjust coordinate systems of scenes visible to operator according to motions of cameras and/or operator's preferences. Reduces operator's workload and probability of error by obviating need for mental transformations of coordinates during operation. Methods applied in outer space, undersea, in nuclear industry, in surgery, in entertainment, and in manufacturing.
TxDOT Video Analytics System User Manual
DOT National Transportation Integrated Search
2012-08-01
The TxDOT video analytics demonstration system is designed to monitor traffic conditions by collecting data such as speed and counts, detecting incidents such as stopped vehicles and reporting such incidents to system administrators. : As illustrated...
Video Monitoring and Analysis System for Vivarium Cage Racks | NCI Technology Transfer Center | TTC
This invention pertains to a system for continuous observation of rodents in home-cage environments with the specific aim to facilitate the quantification of activity levels and behavioral patterns for mice housed in a commercial ventilated cage rack. The National Cancer Institute’s Radiation Biology Branch seeks partners interested in collaborative research to co-develop a video monitoring system for laboratory animals.
Markerless video analysis for movement quantification in pediatric epilepsy monitoring.
Lu, Haiping; Eng, How-Lung; Mandal, Bappaditya; Chan, Derrick W S; Ng, Yen-Ling
2011-01-01
This paper proposes a markerless video analytic system for quantifying body part movements in pediatric epilepsy monitoring. The system utilizes colored pajamas worn by a patient in bed to extract body part movement trajectories, from which various features can be obtained for seizure detection and analysis. Hence, it is non-intrusive and it requires no sensor/marker to be attached to the patient's body. It takes raw video sequences as input and a simple user-initialization indicates the body parts to be examined. In background/foreground modeling, Gaussian mixture models are employed in conjunction with HSV-based modeling. Body part detection follows a coarse-to-fine paradigm with graph-cut-based segmentation. Finally, body part parameters are estimated with domain knowledge guidance. Experimental studies are reported on sequences captured in an Epilepsy Monitoring Unit at a local hospital. The results demonstrate the feasibility of the proposed system in pediatric epilepsy monitoring and seizure detection.
[Intelligent videosurveillance and falls detection: Perceptions of professionals and managers].
Lapierre, Nolwenn; Carpentier, Isabelle; St-Arnaud, Alain; Ducharme, Francine; Meunier, Jean; Jobidon, Mireille; Rousseau, Jacqueline
2016-02-01
Gerontechnologies can be used to detect accidental falls. However, existing systems do not entirely meet users' expectations. Our team developed an intelligent video-monitoring systems to fill these gaps. Authors advocate consulting potential users at the early stages of the design of gerontechnologies and integrating their suggestions. This study aims to explore health care workers' opinion regarding the intelligent video monitoring to detect falls by older adults living at home. This qualitative study explored the opinions of 31 participants using focus groups. Transcripts were analyzed using predetermined codes based on the competence model. Participants reported several advantages for using the intelligent video monitoring and provided suggestions for improving its use. The participants' suggestions and comments will help to improve the system and match it to users' needs. © CAOT 2015.
[Microinjection Monitoring System Design Applied to MRI Scanning].
Xu, Yongfeng
2017-09-30
A microinjection monitoring system applied to the MRI scanning was introduced. The micro camera probe was used to stretch into the main magnet for real-time video injection monitoring of injection tube terminal. The programming based on LabVIEW was created to analysis and process the real-time video information. The feedback signal was used for intelligent controlling of the modified injection pump. The real-time monitoring system can make the best use of injection under the condition that the injection device was away from the sample which inside the magnetic room and unvisible. 9.4 T MRI scanning experiment showed that the system in ultra-high field can work stability and doesn't affect the MRI scans.
Digital image processing of bone - Problems and potentials
NASA Technical Reports Server (NTRS)
Morey, E. R.; Wronski, T. J.
1980-01-01
The development of a digital image processing system for bone histomorphometry and fluorescent marker monitoring is discussed. The system in question is capable of making measurements of UV or light microscope features on a video screen with either video or computer-generated images, and comprises a microscope, low-light-level video camera, video digitizer and display terminal, color monitor, and PDP 11/34 computer. Capabilities demonstrated in the analysis of an undecalcified rat tibia include the measurement of perimeter and total bone area, and the generation of microscope images, false color images, digitized images and contoured images for further analysis. Software development will be based on an existing software library, specifically the mini-VICAR system developed at JPL. It is noted that the potentials of the system in terms of speed and reliability far exceed any problems associated with hardware and software development.
NASA Astrophysics Data System (ADS)
Al Hadhrami, Tawfik; Nightingale, James M.; Wang, Qi; Grecos, Christos
2014-05-01
In emergency situations, the ability to remotely monitor unfolding events using high-quality video feeds will significantly improve the incident commander's understanding of the situation and thereby aids effective decision making. This paper presents a novel, adaptive video monitoring system for emergency situations where the normal communications network infrastructure has been severely impaired or is no longer operational. The proposed scheme, operating over a rapidly deployable wireless mesh network, supports real-time video feeds between first responders, forward operating bases and primary command and control centers. Video feeds captured on portable devices carried by first responders and by static visual sensors are encoded in H.264/SVC, the scalable extension to H.264/AVC, allowing efficient, standard-based temporal, spatial, and quality scalability of the video. A three-tier video delivery system is proposed, which balances the need to avoid overuse of mesh nodes with the operational requirements of the emergency management team. In the first tier, the video feeds are delivered at a low spatial and temporal resolution employing only the base layer of the H.264/SVC video stream. Routing in this mode is designed to employ all nodes across the entire mesh network. In the second tier, whenever operational considerations require that commanders or operators focus on a particular video feed, a `fidelity control' mechanism at the monitoring station sends control messages to the routing and scheduling agents in the mesh network, which increase the quality of the received picture using SNR scalability while conserving bandwidth by maintaining a low frame rate. In this mode, routing decisions are based on reliable packet delivery with the most reliable routes being used to deliver the base and lower enhancement layers; as fidelity is increased and more scalable layers are transmitted they will be assigned to routes in descending order of reliability. The third tier of video delivery transmits a high-quality video stream including all available scalable layers using the most reliable routes through the mesh network ensuring the highest possible video quality. The proposed scheme is implemented in a proven simulator, and the performance of the proposed system is numerically evaluated through extensive simulations. We further present an in-depth analysis of the proposed solutions and potential approaches towards supporting high-quality visual communications in such a demanding context.
Using underwater video imaging as an assessment tool for coastal condition
As part of an effort to monitor ecological conditions in nearshore habitats, from 2009-2012 underwater videos were captured at over 400 locations throughout the Laurentian Great Lakes. This study focuses on developing a video rating system and assessing video images. This ratin...
Remote Video Monitor of Vehicles in Cooperative Information Platform
NASA Astrophysics Data System (ADS)
Qin, Guofeng; Wang, Xiaoguo; Wang, Li; Li, Yang; Li, Qiyan
Detection of vehicles plays an important role in the area of the modern intelligent traffic management. And the pattern recognition is a hot issue in the area of computer vision. An auto- recognition system in cooperative information platform is studied. In the cooperative platform, 3G wireless network, including GPS, GPRS (CDMA), Internet (Intranet), remote video monitor and M-DMB networks are integrated. The remote video information can be taken from the terminals and sent to the cooperative platform, then detected by the auto-recognition system. The images are pretreated and segmented, including feature extraction, template matching and pattern recognition. The system identifies different models and gets vehicular traffic statistics. Finally, the implementation of the system is introduced.
Imaging System for Vaginal Surgery.
Taylor, G Bernard; Myers, Erinn M
2015-12-01
The vaginal surgeon is challenged with performing complex procedures within a surgical field of limited light and exposure. The video telescopic operating microscope is an illumination and imaging system that provides visualization during open surgical procedures with a limited field of view. The imaging system is positioned within the surgical field and then secured to the operating room table with a maneuverable holding arm. A high-definition camera and Xenon light source allow transmission of the magnified image to a high-definition monitor in the operating room. The monitor screen is positioned above the patient for the surgeon and assistants to view real time throughout the operation. The video telescopic operating microscope system was used to provide surgical illumination and magnification during total vaginal hysterectomy and salpingectomy, midurethral sling, and release of vaginal scar procedures. All procedures were completed without complications. The video telescopic operating microscope provided illumination of the vaginal operative field and display of the magnified image onto high-definition monitors in the operating room for the surgeon and staff to simultaneously view the procedures. The video telescopic operating microscope provides high-definition display, magnification, and illumination during vaginal surgery.
iTRAC : intelligent video compression for automated traffic surveillance systems.
DOT National Transportation Integrated Search
2010-08-01
Non-intrusive video imaging sensors are commonly used in traffic monitoring : and surveillance. For some applications it is necessary to transmit the video : data over communication links. However, due to increased requirements of : bitrate this mean...
Task-oriented situation recognition
NASA Astrophysics Data System (ADS)
Bauer, Alexander; Fischer, Yvonne
2010-04-01
From the advances in computer vision methods for the detection, tracking and recognition of objects in video streams, new opportunities for video surveillance arise: In the future, automated video surveillance systems will be able to detect critical situations early enough to enable an operator to take preventive actions, instead of using video material merely for forensic investigations. However, problems such as limited computational resources, privacy regulations and a constant change in potential threads have to be addressed by a practical automated video surveillance system. In this paper, we show how these problems can be addressed using a task-oriented approach. The system architecture of the task-oriented video surveillance system NEST and an algorithm for the detection of abnormal behavior as part of the system are presented and illustrated for the surveillance of guests inside a video-monitored building.
Development of a portable bicycle/pedestrian monitoring system for safety enhancement.
DOT National Transportation Integrated Search
2017-02-02
The objective of this project was to develop a portable automated system to collect continuous video data on pedestrian and cyclist behavior at midblock locations throughout the metro Atlanta area. The system analyzes the collected video data and aut...
Advances of FishNet towards a fully automatic monitoring system for fish migration
NASA Astrophysics Data System (ADS)
Kratzert, Frederik; Mader, Helmut
2017-04-01
Restoring the continuum of river networks, affected by anthropogenic constructions, is one of the main objectives of the Water Framework Directive. Regarding fish migration, fish passes are a widely used measure. Often the functionality of these fish passes needs to be assessed by monitoring. Over the last years, we developed a new semi-automatic monitoring system (FishCam) which allows the contact free observation of fish migration in fish passes through videos. The system consists of a detection tunnel, equipped with a camera, a motion sensor and artificial light sources, as well as a software (FishNet), which helps to analyze the video data. In its latest version, the software is capable of detecting and tracking objects in the videos as well as classifying them into "fish" and "no-fish" objects. This allows filtering out the videos containing at least one fish (approx. 5 % of all grabbed videos) and reduces the manual labor to the analysis of these videos. In this state the entire system has already been used in over 20 different fish passes across Austria for a total of over 140 months of monitoring resulting in more than 1.4 million analyzed videos. As a next step towards a fully automatic monitoring system, a key feature is the automatized classification of the detected fish into their species, which is still an unsolved task in a fully automatic monitoring environment. Recent advances in the field of machine learning, especially image classification with deep convolutional neural networks, sound promising in order to solve this problem. In this study, different approaches for the fish species classification are tested. Besides an image-only based classification approach using deep convolutional neural networks, various methods that combine the power of convolutional neural networks as image descriptors with additional features, such as the fish length and the time of appearance, are explored. To facilitate the development and testing phase of this approach, a subset of six fish species of Austrian rivers and streams is considered in this study. All scripts and the data to reproduce the results of this study will be made publicly available on GitHub* at the beginning of the EGU2017 General Assembly. * https://github.com/kratzert/EGU2017_public/
Apparatus for monitoring crystal growth
Sachs, Emanual M.
1981-01-01
A system and method are disclosed for monitoring the growth of a crystalline body from a liquid meniscus in a furnace. The system provides an improved human/machine interface so as to reduce operator stress, strain and fatigue while improving the conditions for observation and control of the growing process. The system comprises suitable optics for forming an image of the meniscus and body wherein the image is anamorphic so that the entire meniscus can be viewed with good resolution in both the width and height dimensions. The system also comprises a video display for displaying the anamorphic image. The video display includes means for enhancing the contrast between any two contrasting points in the image. The video display also comprises a signal averager for averaging the intensity of at least one preselected portions of the image. The value of the average intensity, can in turn be utilized to control the growth of the body. The system and method are also capable of observing and monitoring multiple processes.
Method of monitoring crystal growth
Sachs, Emanual M.
1982-01-01
A system and method are disclosed for monitoring the growth of a crystalline body from a liquid meniscus in a furnace. The system provides an improved human/machine interface so as to reduce operator stress, strain and fatigue while improving the conditions for observation and control of the growing process. The system comprises suitable optics for forming an image of the meniscus and body wherein the image is anamorphic so that the entire meniscus can be viewed with good resolution in both the width and height dimensions. The system also comprises a video display for displaying the anamorphic image. The video display includes means for enhancing the contrast between any two contrasting points in the image. The video display also comprises a signal averager for averaging the intensity of at least one preselected portions of the image. The value of the average intensity, can in turn be utilized to control the growth of the body. The system and method are also capable of observing and monitoring multiple processes.
81. THREE ADDITIONAL BLACK AND WHITE VIDEO MONITORS LOCATED IMMEDIATELY ...
81. THREE ADDITIONAL BLACK AND WHITE VIDEO MONITORS LOCATED IMMEDIATELY WEST OF THOSE IN CA-133-1-A-80. COMPLEX SAFETY WARNING LIGHTS FOR SLC-3E (PAD 2) AND BLDG. 763 (LOB) LOCATED ABOVE MONITOR 3; GREEN LIGHTS ON BOTTOM OF EACH STACK ILLUMINATED. LEFT TO RIGHT BELOW MONITORS: ACCIDENT REPORTING EMERGENCY NOTIFICATION SYSTEM TELEPHONE, ATLAS H FUEL COUNTER, AND DIGITAL COUNTDOWN CLOCK. - Vandenberg Air Force Base, Space Launch Complex 3, Launch Operations Building, Napa & Alden Roads, Lompoc, Santa Barbara County, CA
IVTS-CEV (Interactive Video Tape System-Combat Engineer Vehicle) Gunnery Trainer.
1981-07-01
video game technology developed for and marketed in consumer video games. The IVTS/CEV is a conceptual/breadboard-level classroom interactive training system designed to train Combat Engineer Vehicle (CEV) gunners in target acquisition and engagement with the main gun. The concept demonstration consists of two units: a gunner station and a display module. The gunner station has optics and gun controls replicating those of the CEV gunner station. The display module contains a standard large-screen color video monitor and a video tape player. The gunner’s sight
Final Report to the Office of Naval Research on Precision Engineering
1991-09-30
Microscope equipped with a Panasonic Video Camera and Monitor was used to view the dressing process. Two scaled, transparent templates were made to...reservoir of hydraulic fluid. Loads were monitored by a miniature strain-guage load cell. A computer-based video image system was used to measure crack...was applied in a stepwise fashion, the stressing rate being approximately 1 MPa/s with hold periods of about 5 s at 2.5 - 5 MPa intervals. Video images
21 CFR 886.5820 - Closed-circuit television reading system.
Code of Federal Regulations, 2010 CFR
2010-04-01
... reading system. (a) Identification. A closed-circuit television reading system is a device that consists of a lens, video camera, and video monitor that is intended for use by a patient who has subnormal... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Closed-circuit television reading system. 886.5820...
Polarimeter based on video matrix
NASA Astrophysics Data System (ADS)
Pavlov, Andrey; Kontantinov, Oleg; Shmirko, Konstantin; Zubko, Evgenij
2017-11-01
In this paper we present a new measurement tool - polarimeter, based on video matrix. Polarimetric measure- ments are usefull, for example, when monitoring water areas pollutions and atmosphere constituents. New device is small enough to mount on unmanned aircraft vehicles (quadrocopters) and stationary platforms. Device and corresponding software turns it into real-time monitoring system, that helps to solve some research problems.
Design of video interface conversion system based on FPGA
NASA Astrophysics Data System (ADS)
Zhao, Heng; Wang, Xiang-jun
2014-11-01
This paper presents a FPGA based video interface conversion system that enables the inter-conversion between digital and analog video. Cyclone IV series EP4CE22F17C chip from Altera Corporation is used as the main video processing chip, and single-chip is used as the information interaction control unit between FPGA and PC. The system is able to encode/decode messages from the PC. Technologies including video decoding/encoding circuits, bus communication protocol, data stream de-interleaving and de-interlacing, color space conversion and the Camera Link timing generator module of FPGA are introduced. The system converts Composite Video Broadcast Signal (CVBS) from the CCD camera into Low Voltage Differential Signaling (LVDS), which will be collected by the video processing unit with Camera Link interface. The processed video signals will then be inputted to system output board and displayed on the monitor.The current experiment shows that it can achieve high-quality video conversion with minimum board size.
More About The Video Event Trigger
NASA Technical Reports Server (NTRS)
Williams, Glenn L.
1996-01-01
Report presents additional information about system described in "Video Event Trigger" (LEW-15076). Digital electronic system processes video-image data to generate trigger signal when image shows significant change, such as motion, or appearance, disappearance, change in color, brightness, or dilation of object. Potential uses include monitoring of hallways, parking lots, and other areas during hours when supposed unoccupied, looking for fires, tracking airplanes or other moving objects, identification of missing or defective parts on production lines, and video recording of automobile crash tests.
Design and implementation of H.264 based embedded video coding technology
NASA Astrophysics Data System (ADS)
Mao, Jian; Liu, Jinming; Zhang, Jiemin
2016-03-01
In this paper, an embedded system for remote online video monitoring was designed and developed to capture and record the real-time circumstances in elevator. For the purpose of improving the efficiency of video acquisition and processing, the system selected Samsung S5PV210 chip as the core processor which Integrated graphics processing unit. And the video was encoded with H.264 format for storage and transmission efficiently. Based on S5PV210 chip, the hardware video coding technology was researched, which was more efficient than software coding. After running test, it had been proved that the hardware video coding technology could obviously reduce the cost of system and obtain the more smooth video display. It can be widely applied for the security supervision [1].
13 point video tape quality guidelines
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gaunt, R.
1997-05-01
Until high definition television (ATV) arrives, in the U.S. we must still contend with the National Television Systems Committee (NTSC) video standard (or PAL or SECAM-depending on your country). NTSC, a 40-year old standard designed for transmission of color video camera images over a small bandwidth, is not well suited for the sharp, full-color images that todays computers are capable of producing. PAL and SECAM also suffers from many of NTSC`s problems, but to varying degrees. Video professionals, when working with computer graphic (CG) images, use two monitors: a computer monitor for producing CGs and an NTSC monitor to viewmore » how a CG will look on video. More often than not, the NTSC image will differ significantly from the CG image, and outputting it to NTSC as an artist works enables the him or her to see the images as others will see it. Below are thirteen guidelines designed to increase the quality of computer graphics recorded onto video tape. Viewing your work in NTSC and attempting to follow the below tips will enable you to create higher quality videos. No video is perfect, so don`t expect to abide by every guideline every time.« less
Colón-de Martí, Luz N; Rodríguez-Figueroa, Linnette; Nazario, Lelis L; Gutiérrez, Roberto; González, Alexis
2012-01-01
Video games have become a popular entertainment among adolescents. Although some video games are educational, there are others with high content of violence and the potential for other harmful effects. Lack of appropriate supervision of video games use during adolescence, a crucial stage of development, may lead to serious behavioral consequences in some adolescents. There is also concern about time spent playing video games and the subsequent neglect of more developmentally appropriate activities, such as completing academic tasks. Self-administered questionnaires were used to assess video game use patterns and parental supervision among 55 adolescent patients 13-17 years old (mean age 14.4 years; 56.4% males) and their parents. Parental supervision /monitoring of the adolescents video games use was not consistent and gender related differences were found regarding their video game use. Close to one third (32%) of the participants reported video game playing had interfered with their academic performance. Parents who understood video games rating system were more likely to prohibit their use due to rating. These findings underscore the need for clear and consistently enforced rules and monitoring of video games use by adolescents. Parents need to be educated about the relevance of their supervision, video games content and rating system; so they will decrease time playing and exposure to potentially harmful video games. It also supports the relevance of addressing supervision, gender-based parental supervisory styles, and patterns of video games use in the evaluation and treatment of adolescents.
Jalal, Ahmad; Kamal, Shaharyar; Kim, Daijin
2014-07-02
Recent advancements in depth video sensors technologies have made human activity recognition (HAR) realizable for elderly monitoring applications. Although conventional HAR utilizes RGB video sensors, HAR could be greatly improved with depth video sensors which produce depth or distance information. In this paper, a depth-based life logging HAR system is designed to recognize the daily activities of elderly people and turn these environments into an intelligent living space. Initially, a depth imaging sensor is used to capture depth silhouettes. Based on these silhouettes, human skeletons with joint information are produced which are further used for activity recognition and generating their life logs. The life-logging system is divided into two processes. Firstly, the training system includes data collection using a depth camera, feature extraction and training for each activity via Hidden Markov Models. Secondly, after training, the recognition engine starts to recognize the learned activities and produces life logs. The system was evaluated using life logging features against principal component and independent component features and achieved satisfactory recognition rates against the conventional approaches. Experiments conducted on the smart indoor activity datasets and the MSRDailyActivity3D dataset show promising results. The proposed system is directly applicable to any elderly monitoring system, such as monitoring healthcare problems for elderly people, or examining the indoor activities of people at home, office or hospital.
Jalal, Ahmad; Kamal, Shaharyar; Kim, Daijin
2014-01-01
Recent advancements in depth video sensors technologies have made human activity recognition (HAR) realizable for elderly monitoring applications. Although conventional HAR utilizes RGB video sensors, HAR could be greatly improved with depth video sensors which produce depth or distance information. In this paper, a depth-based life logging HAR system is designed to recognize the daily activities of elderly people and turn these environments into an intelligent living space. Initially, a depth imaging sensor is used to capture depth silhouettes. Based on these silhouettes, human skeletons with joint information are produced which are further used for activity recognition and generating their life logs. The life-logging system is divided into two processes. Firstly, the training system includes data collection using a depth camera, feature extraction and training for each activity via Hidden Markov Models. Secondly, after training, the recognition engine starts to recognize the learned activities and produces life logs. The system was evaluated using life logging features against principal component and independent component features and achieved satisfactory recognition rates against the conventional approaches. Experiments conducted on the smart indoor activity datasets and the MSRDailyActivity3D dataset show promising results. The proposed system is directly applicable to any elderly monitoring system, such as monitoring healthcare problems for elderly people, or examining the indoor activities of people at home, office or hospital. PMID:24991942
Security warning system monitors up to fifteen remote areas simultaneously
NASA Technical Reports Server (NTRS)
Fusco, R. C.
1966-01-01
Security warning system consisting of 15 television cameras is capable of monitoring several remote or unoccupied areas simultaneously. The system uses a commutator and decommutator, allowing time-multiplexed video transmission. This security system could be used in industrial and retail establishments.
21 CFR 886.5820 - Closed-circuit television reading system.
Code of Federal Regulations, 2011 CFR
2011-04-01
... of a lens, video camera, and video monitor that is intended for use by a patient who has subnormal vision to magnify reading material. (b) Classification. Class I (general controls). The device is exempt...
NASA Astrophysics Data System (ADS)
Deckard, Michael; Ratib, Osman M.; Rubino, Gregory
2002-05-01
Our project was to design and implement a ceiling-mounted multi monitor display unit for use in a high-field MRI surgical suite. The system is designed to simultaneously display images/data from four different digital and/or analog sources with: minimal interference from the adjacent high magnetic field, minimal signal-to-noise/artifact contribution to the MRI images and compliance with codes and regulations for the sterile neuro-surgical environment. Provisions were also made to accommodate the importing and exporting of video information via PACS and remote processing/display for clinical and education uses. Commercial fiber optic receivers/transmitters were implemented along with supporting video processing and distribution equipment to solve the video communication problem. A new generation of high-resolution color flat panel displays was selected for the project. A custom-made monitor mount and in-suite electronics enclosure was designed and constructed at UCLA. Difficulties with implementing an isolated AC power system are discussed and a work-around solution presented.
Application of robust face recognition in video surveillance systems
NASA Astrophysics Data System (ADS)
Zhang, De-xin; An, Peng; Zhang, Hao-xiang
2018-03-01
In this paper, we propose a video searching system that utilizes face recognition as searching indexing feature. As the applications of video cameras have great increase in recent years, face recognition makes a perfect fit for searching targeted individuals within the vast amount of video data. However, the performance of such searching depends on the quality of face images recorded in the video signals. Since the surveillance video cameras record videos without fixed postures for the object, face occlusion is very common in everyday video. The proposed system builds a model for occluded faces using fuzzy principal component analysis (FPCA), and reconstructs the human faces with the available information. Experimental results show that the system has very high efficiency in processing the real life videos, and it is very robust to various kinds of face occlusions. Hence it can relieve people reviewers from the front of the monitors and greatly enhances the efficiency as well. The proposed system has been installed and applied in various environments and has already demonstrated its power by helping solving real cases.
Rugged Video System For Inspecting Animal Burrows
NASA Technical Reports Server (NTRS)
Triandafils, Dick; Maples, Art; Breininger, Dave
1992-01-01
Video system designed for examining interiors of burrows of gopher tortoises, 5 in. (13 cm) in diameter or greater, to depth of 18 ft. (about 5.5 m), includes video camera, video cassette recorder (VCR), television monitor, control unit, and power supply, all carried in backpack. Polyvinyl chloride (PVC) poles used to maneuver camera into (and out of) burrows, stiff enough to push camera into burrow, but flexible enough to bend around curves. Adult tortoises and other burrow inhabitants observable, young tortoises and such small animals as mice obscured by sand or debris.
NASA Astrophysics Data System (ADS)
Ghionis, George; Trygonis, Vassilis; Karydis, Antonis; Vousdoukas, Michalis; Alexandrakis, George; Drakopoulos, Panos; Amdreadis, Olympos; Psarros, Fotis; Velegrakis, Antonis; Poulos, Serafim
2016-04-01
Effective beach management requires environmental assessments that are based on sound science, are cost-effective and are available to beach users and managers in an accessible, timely and transparent manner. The most common problems are: 1) The available field data are scarce and of sub-optimal spatio-temporal resolution and coverage, 2) our understanding of local beach processes needs to be improved in order to accurately model/forecast beach dynamics under a changing climate, and 3) the information provided by coastal scientists/engineers in the form of data, models and scientific interpretation is often too complicated to be of direct use by coastal managers/decision makers. A multispectral video system has been developed, consisting of one or more video cameras operating in the visible part of the spectrum, a passive near-infrared (NIR) camera, an active NIR camera system, a thermal infrared camera and a spherical video camera, coupled with innovative image processing algorithms and a telemetric system for the monitoring of coastal environmental parameters. The complete system has the capability to record, process and communicate (in quasi-real time) high frequency information on shoreline position, wave breaking zones, wave run-up, erosion hot spots along the shoreline, nearshore wave height, turbidity, underwater visibility, wind speed and direction, air and sea temperature, solar radiation, UV radiation, relative humidity, barometric pressure and rainfall. An innovative, remotely-controlled interactive visual monitoring system, based on the spherical video camera (with 360°field of view), combines the video streams from all cameras and can be used by beach managers to monitor (in real time) beach user numbers, flow activities and safety at beaches of high touristic value. The high resolution near infrared cameras permit 24-hour monitoring of beach processes, while the thermal camera provides information on beach sediment temperature and moisture, can detect upwelling in the nearshore zone, and enhances the safety of beach users. All data can be presented in real- or quasi-real time and are stored for future analysis and training/validation of coastal processes models. Acknowledgements: This work was supported by the project BEACHTOUR (11SYN-8-1466) of the Operational Program "Cooperation 2011, Competitiveness and Entrepreneurship", co-funded by the European Regional Development Fund and the Greek Ministry of Education and Religious Affairs.
Martínez-Avilés, Marta; Ivorra, Benjamin; Martínez-López, Beatriz; Ramos, Ángel Manuel; Sánchez-Vizcaíno, José Manuel
2017-01-01
Early detection of infectious diseases can substantially reduce the health and economic impacts on livestock production. Here we describe a system for monitoring animal activity based on video and data processing techniques, in order to detect slowdown and weakening due to infection with African swine fever (ASF), one of the most significant threats to the pig industry. The system classifies and quantifies motion-based animal behaviour and daily activity in video sequences, allowing automated and non-intrusive surveillance in real-time. The aim of this system is to evaluate significant changes in animals’ motion after being experimentally infected with ASF virus. Indeed, pig mobility declined progressively and fell significantly below pre-infection levels starting at four days after infection at a confidence level of 95%. Furthermore, daily motion decreased in infected animals by approximately 10% before the detection of the disease by clinical signs. These results show the promise of video processing techniques for real-time early detection of livestock infectious diseases. PMID:28877181
Video-based respiration monitoring with automatic region of interest detection.
Janssen, Rik; Wang, Wenjin; Moço, Andreia; de Haan, Gerard
2016-01-01
Vital signs monitoring is ubiquitous in clinical environments and emerging in home-based healthcare applications. Still, since current monitoring methods require uncomfortable sensors, respiration rate remains the least measured vital sign. In this paper, we propose a video-based respiration monitoring method that automatically detects a respiratory region of interest (RoI) and signal using a camera. Based on the observation that respiration induced chest/abdomen motion is an independent motion system in a video, our basic idea is to exploit the intrinsic properties of respiration to find the respiratory RoI and extract the respiratory signal via motion factorization. We created a benchmark dataset containing 148 video sequences obtained on adults under challenging conditions and also neonates in the neonatal intensive care unit (NICU). The measurements obtained by the proposed video respiration monitoring (VRM) method are not significantly different from the reference methods (guided breathing or contact-based ECG; p-value = 0.6), and explain more than 99% of the variance of the reference values with low limits of agreement (-2.67 to 2.81 bpm). VRM seems to provide a valid solution to ECG in confined motion scenarios, though precision may be reduced for neonates. More studies are needed to validate VRM under challenging recording conditions, including upper-body motion types.
Endoscopic techniques in aesthetic plastic surgery.
McCain, L A; Jones, G
1995-01-01
There has been an explosive interest in endoscopic techniques by plastic surgeons over the past two years. Procedures such as facial rejuvenation, breast augmentation and abdominoplasty are being performed with endoscopic assistance. Endoscopic operations require a complex setup with components such as video camera, light sources, cables and hard instruments. The Hopkins Rod Lens system consists of optical fibers for illumination, an objective lens, an image retrieval system, a series of rods and lenses, and an eyepiece for image collection. Good illumination of the body cavity is essential for endoscopic procedures. Placement of the video camera on the eyepiece of the endoscope gives a clear, brightly illuminated large image on the monitor. The video monitor provides the surgical team with the endoscopic image. It is important to become familiar with the equipment before actually doing cases. Several options exist for staff education. In the operating room the endoscopic cart needs to be positioned to allow a clear unrestricted view of the video monitor by the surgeon and the operating team. Fogging of the endoscope may be prevented during induction by using FREDD (a fog reduction/elimination device) or a warm bath. The camera needs to be white balanced. During the procedure, the nurse monitors the level of dissection and assesses for clogging of the suction.
Clustering and Flow Conservation Monitoring Tool for Software Defined Networks.
Puente Fernández, Jesús Antonio; García Villalba, Luis Javier; Kim, Tai-Hoon
2018-04-03
Prediction systems present some challenges on two fronts: the relation between video quality and observed session features and on the other hand, dynamics changes on the video quality. Software Defined Networks (SDN) is a new concept of network architecture that provides the separation of control plane (controller) and data plane (switches) in network devices. Due to the existence of the southbound interface, it is possible to deploy monitoring tools to obtain the network status and retrieve a statistics collection. Therefore, achieving the most accurate statistics depends on a strategy of monitoring and information requests of network devices. In this paper, we propose an enhanced algorithm for requesting statistics to measure the traffic flow in SDN networks. Such an algorithm is based on grouping network switches in clusters focusing on their number of ports to apply different monitoring techniques. Such grouping occurs by avoiding monitoring queries in network switches with common characteristics and then, by omitting redundant information. In this way, the present proposal decreases the number of monitoring queries to switches, improving the network traffic and preventing the switching overload. We have tested our optimization in a video streaming simulation using different types of videos. The experiments and comparison with traditional monitoring techniques demonstrate the feasibility of our proposal maintaining similar values decreasing the number of queries to the switches.
Visualizing the history of living spaces.
Ivanov, Yuri; Wren, Christopher; Sorokin, Alexander; Kaur, Ishwinder
2007-01-01
The technology available to building designers now makes it possible to monitor buildings on a very large scale. Video cameras and motion sensors are commonplace in practically every office space, and are slowly making their way into living spaces. The application of such technologies, in particular video cameras, while improving security, also violates privacy. On the other hand, motion sensors, while being privacy-conscious, typically do not provide enough information for a human operator to maintain the same degree of awareness about the space that can be achieved by using video cameras. We propose a novel approach in which we use a large number of simple motion sensors and a small set of video cameras to monitor a large office space. In our system we deployed 215 motion sensors and six video cameras to monitor the 3,000-square-meter office space occupied by 80 people for a period of about one year. The main problem in operating such systems is finding a way to present this highly multidimensional data, which includes both spatial and temporal components, to a human operator to allow browsing and searching recorded data in an efficient and intuitive way. In this paper we present our experiences and the solutions that we have developed in the course of our work on the system. We consider this work to be the first step in helping designers and managers of building systems gain access to information about occupants' behavior in the context of an entire building in a way that is only minimally intrusive to the occupants' privacy.
NASA Technical Reports Server (NTRS)
Jedlovec, Gary; Srikishen, Jayanthi; Edwards, Rita; Cross, David; Welch, Jon; Smith, Matt
2013-01-01
The use of collaborative scientific visualization systems for the analysis, visualization, and sharing of "big data" available from new high resolution remote sensing satellite sensors or four-dimensional numerical model simulations is propelling the wider adoption of ultra-resolution tiled display walls interconnected by high speed networks. These systems require a globally connected and well-integrated operating environment that provides persistent visualization and collaboration services. This abstract and subsequent presentation describes a new collaborative visualization system installed for NASA's Shortterm Prediction Research and Transition (SPoRT) program at Marshall Space Flight Center and its use for Earth science applications. The system consists of a 3 x 4 array of 1920 x 1080 pixel thin bezel video monitors mounted on a wall in a scientific collaboration lab. The monitors are physically and virtually integrated into a 14' x 7' for video display. The display of scientific data on the video wall is controlled by a single Alienware Aurora PC with a 2nd Generation Intel Core 4.1 GHz processor, 32 GB memory, and an AMD Fire Pro W600 video card with 6 mini display port connections. Six mini display-to-dual DVI cables are used to connect the 12 individual video monitors. The open source Scalable Adaptive Graphics Environment (SAGE) windowing and media control framework, running on top of the Ubuntu 12 Linux operating system, allows several users to simultaneously control the display and storage of high resolution still and moving graphics in a variety of formats, on tiled display walls of any size. The Ubuntu operating system supports the open source Scalable Adaptive Graphics Environment (SAGE) software which provides a common environment, or framework, enabling its users to access, display and share a variety of data-intensive information. This information can be digital-cinema animations, high-resolution images, high-definition video-teleconferences, presentation slides, documents, spreadsheets or laptop screens. SAGE is cross-platform, community-driven, open-source visualization and collaboration middleware that utilizes shared national and international cyberinfrastructure for the advancement of scientific research and education.
NASA Astrophysics Data System (ADS)
Jedlovec, G.; Srikishen, J.; Edwards, R.; Cross, D.; Welch, J. D.; Smith, M. R.
2013-12-01
The use of collaborative scientific visualization systems for the analysis, visualization, and sharing of 'big data' available from new high resolution remote sensing satellite sensors or four-dimensional numerical model simulations is propelling the wider adoption of ultra-resolution tiled display walls interconnected by high speed networks. These systems require a globally connected and well-integrated operating environment that provides persistent visualization and collaboration services. This abstract and subsequent presentation describes a new collaborative visualization system installed for NASA's Short-term Prediction Research and Transition (SPoRT) program at Marshall Space Flight Center and its use for Earth science applications. The system consists of a 3 x 4 array of 1920 x 1080 pixel thin bezel video monitors mounted on a wall in a scientific collaboration lab. The monitors are physically and virtually integrated into a 14' x 7' for video display. The display of scientific data on the video wall is controlled by a single Alienware Aurora PC with a 2nd Generation Intel Core 4.1 GHz processor, 32 GB memory, and an AMD Fire Pro W600 video card with 6 mini display port connections. Six mini display-to-dual DVI cables are used to connect the 12 individual video monitors. The open source Scalable Adaptive Graphics Environment (SAGE) windowing and media control framework, running on top of the Ubuntu 12 Linux operating system, allows several users to simultaneously control the display and storage of high resolution still and moving graphics in a variety of formats, on tiled display walls of any size. The Ubuntu operating system supports the open source Scalable Adaptive Graphics Environment (SAGE) software which provides a common environment, or framework, enabling its users to access, display and share a variety of data-intensive information. This information can be digital-cinema animations, high-resolution images, high-definition video-teleconferences, presentation slides, documents, spreadsheets or laptop screens. SAGE is cross-platform, community-driven, open-source visualization and collaboration middleware that utilizes shared national and international cyberinfrastructure for the advancement of scientific research and education.
Hansen, Bethany K; Fultz, Amy L; Hopper, Lydia M; Ross, Stephen R
2018-05-01
Video cameras are increasingly being used to monitor captive animals in zoo, laboratory, and agricultural settings. This technology may also be useful in sanctuaries with large and/or complex enclosures. However, the cost of camera equipment and a lack of formal evaluations regarding the use of cameras in sanctuary settings make it challenging for facilities to decide whether and how to implement this technology. To address this, we evaluated the feasibility of using a video camera system to monitor chimpanzees at Chimp Haven. We viewed a group of resident chimpanzees in a large forested enclosure and compared observations collected in person and with remote video cameras. We found that via camera, the observer viewed fewer chimpanzees in some outdoor locations (GLMM post hoc test: est. = 1.4503, SE = 0.1457, Z = 9.951, p < 0.001) and identified a lower proportion of chimpanzees (GLMM post hoc test: est. = -2.17914, SE = 0.08490, Z = -25.666, p < 0.001) compared to in-person observations. However, the observer could view the 2 ha enclosure 15 times faster by camera compared to in person. In addition to these results, we provide recommendations to animal facilities considering the installation of a video camera system. Despite some limitations of remote monitoring, we posit that there are substantial benefits of using camera systems in sanctuaries to facilitate animal care and observational research. © 2018 Wiley Periodicals, Inc.
Data Reduction and Control Software for Meteor Observing Stations Based on CCD Video Systems
NASA Technical Reports Server (NTRS)
Madiedo, J. M.; Trigo-Rodriguez, J. M.; Lyytinen, E.
2011-01-01
The SPanish Meteor Network (SPMN) is performing a continuous monitoring of meteor activity over Spain and neighbouring countries. The huge amount of data obtained by the 25 video observing stations that this network is currently operating made it necessary to develop new software packages to accomplish some tasks, such as data reduction and remote operation of autonomous systems based on high-sensitivity CCD video devices. The main characteristics of this software are described here.
RAPID: A random access picture digitizer, display, and memory system
NASA Technical Reports Server (NTRS)
Yakimovsky, Y.; Rayfield, M.; Eskenazi, R.
1976-01-01
RAPID is a system capable of providing convenient digital analysis of video data in real-time. It has two modes of operation. The first allows for continuous digitization of an EIA RS-170 video signal. Each frame in the video signal is digitized and written in 1/30 of a second into RAPID's internal memory. The second mode leaves the content of the internal memory independent of the current input video. In both modes of operation the image contained in the memory is used to generate an EIA RS-170 composite video output signal representing the digitized image in the memory so that it can be displayed on a monitor.
A web-based video annotation system for crowdsourcing surveillance videos
NASA Astrophysics Data System (ADS)
Gadgil, Neeraj J.; Tahboub, Khalid; Kirsh, David; Delp, Edward J.
2014-03-01
Video surveillance systems are of a great value to prevent threats and identify/investigate criminal activities. Manual analysis of a huge amount of video data from several cameras over a long period of time often becomes impracticable. The use of automatic detection methods can be challenging when the video contains many objects with complex motion and occlusions. Crowdsourcing has been proposed as an effective method for utilizing human intelligence to perform several tasks. Our system provides a platform for the annotation of surveillance video in an organized and controlled way. One can monitor a surveillance system using a set of tools such as training modules, roles and labels, task management. This system can be used in a real-time streaming mode to detect any potential threats or as an investigative tool to analyze past events. Annotators can annotate video contents assigned to them for suspicious activity or criminal acts. First responders are then able to view the collective annotations and receive email alerts about a newly reported incident. They can also keep track of the annotators' training performance, manage their activities and reward their success. By providing this system, the process of video analysis is made more efficient.
47 CFR 76.614 - Cable television system regular monitoring.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 47 Telecommunication 4 2011-10-01 2011-10-01 false Cable television system regular monitoring. 76.614 Section 76.614 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) BROADCAST RADIO SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Technical Standards § 76.614 Cable television...
Onboard Systems Record Unique Videos of Space Missions
NASA Technical Reports Server (NTRS)
2010-01-01
Ecliptic Enterprises Corporation, headquartered in Pasadena, California, provided onboard video systems for rocket and space shuttle launches before it was tasked by Ames Research Center to craft the Data Handling Unit that would control sensor instruments onboard the Lunar Crater Observation and Sensing Satellite (LCROSS) spacecraft. The technological capabilities the company acquired on this project, as well as those gained developing a high-speed video system for monitoring the parachute deployments for the Orion Pad Abort Test Program at Dryden Flight Research Center, have enabled the company to offer high-speed and high-definition video for geosynchronous satellites and commercial space missions, providing remarkable footage that both informs engineers and inspires the imagination of the general public.
Real-time unmanned aircraft systems surveillance video mosaicking using GPU
NASA Astrophysics Data System (ADS)
Camargo, Aldo; Anderson, Kyle; Wang, Yi; Schultz, Richard R.; Fevig, Ronald A.
2010-04-01
Digital video mosaicking from Unmanned Aircraft Systems (UAS) is being used for many military and civilian applications, including surveillance, target recognition, border protection, forest fire monitoring, traffic control on highways, monitoring of transmission lines, among others. Additionally, NASA is using digital video mosaicking to explore the moon and planets such as Mars. In order to compute a "good" mosaic from video captured by a UAS, the algorithm must deal with motion blur, frame-to-frame jitter associated with an imperfectly stabilized platform, perspective changes as the camera tilts in flight, as well as a number of other factors. The most suitable algorithms use SIFT (Scale-Invariant Feature Transform) to detect the features consistent between video frames. Utilizing these features, the next step is to estimate the homography between two consecutives video frames, perform warping to properly register the image data, and finally blend the video frames resulting in a seamless video mosaick. All this processing takes a great deal of resources of resources from the CPU, so it is almost impossible to compute a real time video mosaic on a single processor. Modern graphics processing units (GPUs) offer computational performance that far exceeds current CPU technology, allowing for real-time operation. This paper presents the development of a GPU-accelerated digital video mosaicking implementation and compares it with CPU performance. Our tests are based on two sets of real video captured by a small UAS aircraft; one video comes from Infrared (IR) and Electro-Optical (EO) cameras. Our results show that we can obtain a speed-up of more than 50 times using GPU technology, so real-time operation at a video capture of 30 frames per second is feasible.
Yoshida, Soichiro; Kihara, Kazunori; Takeshita, Hideki; Fujii, Yasuhisa
2014-12-01
The head-mounted display (HMD) is a new image monitoring system. We developed the Personal Integrated-image Monitoring System (PIM System) using the HMD (HMZ-T2, Sony Corporation, Tokyo, Japan) in combination with video splitters and multiplexers as a surgical guide system for transurethral resection of the prostate (TURP). The imaging information obtained from the cystoscope, the transurethral ultrasonography (TRUS), the video camera attached to the HMD, and the patient's vital signs monitor were split and integrated by the PIM System and a composite image was displayed by the HMD using a four-split screen technique. Wearing the HMD, the lead surgeon and the assistant could simultaneously and continuously monitor the same information displayed by the HMD in an ergonomically efficient posture. Each participant could independently rearrange the images comprising the composite image depending on the engaging step. Two benign prostatic hyperplasia (BPH) patients underwent TURP performed by surgeons guided with this system. In both cases, the TURP procedure was successfully performed, and their postoperative clinical courses had no remarkable unfavorable events. During the procedure, none of the participants experienced any HMD-wear related adverse effects or reported any discomfort.
Clustering and Flow Conservation Monitoring Tool for Software Defined Networks
Puente Fernández, Jesús Antonio
2018-01-01
Prediction systems present some challenges on two fronts: the relation between video quality and observed session features and on the other hand, dynamics changes on the video quality. Software Defined Networks (SDN) is a new concept of network architecture that provides the separation of control plane (controller) and data plane (switches) in network devices. Due to the existence of the southbound interface, it is possible to deploy monitoring tools to obtain the network status and retrieve a statistics collection. Therefore, achieving the most accurate statistics depends on a strategy of monitoring and information requests of network devices. In this paper, we propose an enhanced algorithm for requesting statistics to measure the traffic flow in SDN networks. Such an algorithm is based on grouping network switches in clusters focusing on their number of ports to apply different monitoring techniques. Such grouping occurs by avoiding monitoring queries in network switches with common characteristics and then, by omitting redundant information. In this way, the present proposal decreases the number of monitoring queries to switches, improving the network traffic and preventing the switching overload. We have tested our optimization in a video streaming simulation using different types of videos. The experiments and comparison with traditional monitoring techniques demonstrate the feasibility of our proposal maintaining similar values decreasing the number of queries to the switches. PMID:29614049
Optical monitoring of film pollution on sea surface
NASA Astrophysics Data System (ADS)
Pavlov, Andrey; Konstantinov, Oleg; Shmirko, Konstantin
2017-11-01
The organic films form a brightness contrast on the sea surface. It makes possible to use cheap simple and miniature systems for video monitoring of pollution of coastal marine areas by oil products in the bunkering of ships, emergency situations at oil terminals, gas and oil pipelines, hydrocarbon production platforms on the shelf, etc.1-16 A panoramic video system with a polarization filter on the lens, located at an altitude of 90 m above sea level, can provide effective control of the water area within a radius of 7 kilometers,17-19 and modern photogrammetry technologies allow not only to register the fact of pollution and get a portrait of the offender, but also with a high Spatial and temporal resolution to estimate the dimensions and trace the dynamics of movement and transformation of the film in a geographic coordinate system. Of particular relevance is the optical method of controlling the pollution of the sea surface at the present time with the development of unmanned aerial vehicles that are already equipped with video cameras and require only a minor upgrade of their video system to enhance the contrast of images of organic films.
NASA Astrophysics Data System (ADS)
Gargallo, Ana; Arines, Justo
2014-08-01
We have adapted low cost webcams to the slit lamps objectives with the aim of improving contact lens fitting practice. With this solution we obtain good quality pictures and videos, we also recorded videos of eye examination, evaluation routines of contact lens fitting, and the final practice exam of our students. In addition, the video system increases the interactions between students because they could see what their colleagues are doing and take conscious of their mistakes, helping and correcting each others. We think that the proposed system is a low cost solution for supporting the training in contact lens fitting practice.
Video auto stitching in multicamera surveillance system
NASA Astrophysics Data System (ADS)
He, Bin; Zhao, Gang; Liu, Qifang; Li, Yangyang
2012-01-01
This paper concerns the problem of video stitching automatically in a multi-camera surveillance system. Previous approaches have used multiple calibrated cameras for video mosaic in large scale monitoring application. In this work, we formulate video stitching as a multi-image registration and blending problem, and not all cameras are needed to be calibrated except a few selected master cameras. SURF is used to find matched pairs of image key points from different cameras, and then camera pose is estimated and refined. Homography matrix is employed to calculate overlapping pixels and finally implement boundary resample algorithm to blend images. The result of simulation demonstrates the efficiency of our method.
Video auto stitching in multicamera surveillance system
NASA Astrophysics Data System (ADS)
He, Bin; Zhao, Gang; Liu, Qifang; Li, Yangyang
2011-12-01
This paper concerns the problem of video stitching automatically in a multi-camera surveillance system. Previous approaches have used multiple calibrated cameras for video mosaic in large scale monitoring application. In this work, we formulate video stitching as a multi-image registration and blending problem, and not all cameras are needed to be calibrated except a few selected master cameras. SURF is used to find matched pairs of image key points from different cameras, and then camera pose is estimated and refined. Homography matrix is employed to calculate overlapping pixels and finally implement boundary resample algorithm to blend images. The result of simulation demonstrates the efficiency of our method.
DOT National Transportation Integrated Search
2016-12-01
An independent evaluation of a non-video-based onboard monitoring system (OBMS) was conducted. The objective was to determine if the OBMS system performed reliably, improved driving safety and performance, and improved fuel efficiency in a commercial...
DOT National Transportation Integrated Search
2016-11-01
An independent evaluation of a non-video-based onboard monitoring system (OBMS) was conducted. The objective was to determine if the OBMS system performed reliably, improved driving safety and performance, and improved fuel efficiency in a commercial...
Efficient implementation of neural network deinterlacing
NASA Astrophysics Data System (ADS)
Seo, Guiwon; Choi, Hyunsoo; Lee, Chulhee
2009-02-01
Interlaced scanning has been widely used in most broadcasting systems. However, there are some undesirable artifacts such as jagged patterns, flickering, and line twitters. Moreover, most recent TV monitors utilize flat panel display technologies such as LCD or PDP monitors and these monitors require progressive formats. Consequently, the conversion of interlaced video into progressive video is required in many applications and a number of deinterlacing methods have been proposed. Recently deinterlacing methods based on neural network have been proposed with good results. On the other hand, with high resolution video contents such as HDTV, the amount of video data to be processed is very large. As a result, the processing time and hardware complexity become an important issue. In this paper, we propose an efficient implementation of neural network deinterlacing using polynomial approximation of the sigmoid function. Experimental results show that these approximations provide equivalent performance with a considerable reduction of complexity. This implementation of neural network deinterlacing can be efficiently incorporated in HW implementation.
Frejlichowski, Dariusz; Gościewska, Katarzyna; Forczmański, Paweł; Hofman, Radosław
2014-06-05
"SmartMonitor" is an intelligent security system based on image analysis that combines the advantages of alarm, video surveillance and home automation systems. The system is a complete solution that automatically reacts to every learned situation in a pre-specified way and has various applications, e.g., home and surrounding protection against unauthorized intrusion, crime detection or supervision over ill persons. The software is based on well-known and proven methods and algorithms for visual content analysis (VCA) that were appropriately modified and adopted to fit specific needs and create a video processing model which consists of foreground region detection and localization, candidate object extraction, object classification and tracking. In this paper, the "SmartMonitor" system is presented along with its architecture, employed methods and algorithms, and object analysis approach. Some experimental results on system operation are also provided. In the paper, focus is put on one of the aforementioned functionalities of the system, namely supervision over ill persons.
A system for beach video-monitoring: Beachkeeper plus
NASA Astrophysics Data System (ADS)
Brignone, Massimo; Schiaffino, Chiara F.; Isla, Federico I.; Ferrari, Marco
2012-12-01
A suitable knowledge of coastal systems, of their morphodynamic characteristics and their response to storm events and man-made structures is essential for littoral conservation and management. Nowadays webcams represent a useful device to obtain information from beaches. Video-monitoring techniques are generally site specific and softwares working with any image acquisition system are rare. Therefore, this work aims at submitting theory and applications of an experimental video monitoring software: Beachkeeper plus, a freeware non-profit software, can be employed and redistributed without modifications. A license file is provided inside software package and in the user guide. Beachkeeper plus is based on Matlab® and it can be used for the analysis of images and photos coming from any kind of acquisition system (webcams, digital cameras or images downloaded from internet), without any a-priori information or laboratory study of the acquisition system itself. Therefore, it could become a useful tool for beach planning. Through a simple guided interface, images can be analyzed by performing georeferentiation, rectification, averaging and variance. This software was initially operated in Pietra Ligure (Italy), using images from a tourist webcam, and in Mar del Plata (Argentina) using images from a digital camera. In both cases the reliability in different geomorphologic and morphodynamic conditions was confirmed by the good quality of obtained images after georeferentiation, rectification and averaging.
Multiple Target Tracking in a Wide-Field-of-View Camera System
1990-01-01
assembly is mounted on a Contraves alt-azi axis table with a pointing accuracy of < 2 Urad. * Work performed under the auspices of the U.S. Department of... Contraves SUN 3 CCD DR11W VME EITHERNET SUN 3 !3T 3 RS170 Video 1 Video ^mglifier^ I WWV Clock VCR Datacube u Monitor Monitor UL...displaying processed images with overlay from the Datacube. We control the Contraves table using a GPIB interface on the SUN. GPIB also interfaces a
Video Guidance, Landing, and Imaging system (VGLIS) for space missions
NASA Technical Reports Server (NTRS)
Schappell, R. T.; Knickerbocker, R. L.; Tietz, J. C.; Grant, C.; Flemming, J. C.
1975-01-01
The feasibility of an autonomous video guidance system that is capable of observing a planetary surface during terminal descent and selecting the most acceptable landing site was demonstrated. The system was breadboarded and "flown" on a physical simulator consisting of a control panel and monitor, a dynamic simulator, and a PDP-9 computer. The breadboard VGLIS consisted of an image dissector camera and the appropriate processing logic. Results are reported.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schneider, S.; Lucero, R.; Glidewell, D.
1997-08-01
The Autoridad Regulataria Nuclear (ARN) and the United States Department of Energy (DOE) are cooperating on the development of a Remote Monitoring System for nuclear nonproliferation efforts. A Remote Monitoring System for spent fuel transfer will be installed at the Argentina Nuclear Power Station in Embalse, Argentina. The system has been designed by Sandia National Laboratories (SNL), with Los Alamos National Laboratory (LANL) and Oak Ridge National Laboratory (ORNL) providing gamma and neutron sensors. This project will test and evaluate the fundamental design and implementation of the Remote Monitoring System in its application to regional and international safeguards efficiency. Thismore » paper provides a description of the monitoring system and its functions. The Remote Monitoring System consists of gamma and neutron radiation sensors, RF systems, and video systems integrated into a coherent functioning whole. All sensor data communicate over an Echelon LonWorks Network to a single data logger. The Neumann DCM 14 video module is integrated into the Remote Monitoring System. All sensor and image data are stored on a Data Acquisition System (DAS) and archived and reviewed on a Data and Image Review Station (DIRS). Conventional phone lines are used as the telecommunications link to transmit on-site collected data and images to remote locations. The data and images are authenticated before transmission. Data review stations will be installed at ARN in Buenos Aires, Argentina, ABACC in Rio De Janeiro, IAEA Headquarters in Vienna, and Sandia National Laboratories in Albuquerque, New Mexico. 2 refs., 2 figs.« less
Geospatial Video Monitoring of Benthic Habitats Using the Shallow-Water Positioning System (SWaPS)
2007-01-01
established from the video frames collected using SWaPS. C) Cover contours for the seagrass Thalassia testudinum. A B C surveyed using a spatial grid...distributions of seagrass species within this area are clearly influenced by their tolerance to salinity patterns. Thalassia testudinum, a species
Salem, Ghadi H; Dennis, John U; Krynitsky, Jonathan; Garmendia-Cedillos, Marcial; Swaroop, Kanchan; Malley, James D; Pajevic, Sinisa; Abuhatzira, Liron; Bustin, Michael; Gillet, Jean-Pierre; Gottesman, Michael M; Mitchell, James B; Pohida, Thomas J
2015-03-01
The System for Continuous Observation of Rodents in Home-cage Environment (SCORHE) was developed to demonstrate the viability of compact and scalable designs for quantifying activity levels and behavior patterns for mice housed within a commercial ventilated cage rack. The SCORHE in-rack design provides day- and night-time monitoring with the consistency and convenience of the home-cage environment. The dual-video camera custom hardware design makes efficient use of space, does not require home-cage modification, and is animal-facility user-friendly. Given the system's low cost and suitability for use in existing vivariums without modification to the animal husbandry procedures or housing setup, SCORHE opens up the potential for the wider use of automated video monitoring in animal facilities. SCORHE's potential uses include day-to-day health monitoring, as well as advanced behavioral screening and ethology experiments, ranging from the assessment of the short- and long-term effects of experimental cancer treatments to the evaluation of mouse models. When used for phenotyping and animal model studies, SCORHE aims to eliminate the concerns often associated with many mouse-monitoring methods, such as circadian rhythm disruption, acclimation periods, lack of night-time measurements, and short monitoring periods. Custom software integrates two video streams to extract several mouse activity and behavior measures. Studies comparing the activity levels of ABCB5 knockout and HMGN1 overexpresser mice with their respective C57BL parental strains demonstrate SCORHE's efficacy in characterizing the activity profiles for singly- and doubly-housed mice. Another study was conducted to demonstrate the ability of SCORHE to detect a change in activity resulting from administering a sedative.
Improving Video Based Heart Rate Monitoring.
Lin, Jian; Rozado, David; Duenser, Andreas
2015-01-01
Non-contact measurements of cardiac pulse can provide robust measurement of heart rate (HR) without the annoyance of attaching electrodes to the body. In this paper we explore a novel and reliable method to carry out video-based HR estimation and propose various performance improvement over existing approaches. The investigated method uses Independent Component Analysis (ICA) to detect the underlying HR signal from a mixed source signal present in the RGB channels of the image. The original ICA algorithm was implemented and several modifications were explored in order to determine which one could be optimal for accurate HR estimation. Using statistical analysis, we compared the cardiac pulse rate estimation from the different methods under comparison on the extracted videos to a commercially available oximeter. We found that some of these methods are quite effective and efficient in terms of improving accuracy and latency of the system. We have made the code of our algorithms openly available to the scientific community so that other researchers can explore how to integrate video-based HR monitoring in novel health technology applications. We conclude by noting that recent advances in video-based HR monitoring permit computers to be aware of a user's psychophysiological status in real time.
DOT National Transportation Integrated Search
2015-08-01
Cameras are used prolifically to monitor transportation incidents, infrastructure, and congestion. Traditional camera systems often require human monitoring and only offer low-resolution video. Researchers for the Exploratory Advanced Research (EAR) ...
Farivar, Reza; Michaud-Landry, Danny
2016-01-01
Measurements of the fast and precise movements of the eye-critical to many vision, oculomotor, and animal behavior studies-can be made non-invasively by video oculography. The protocol here describes the construction and operation of a research-grade video oculography system with ~0.1° precision over the full typical viewing range at over 450 Hz with tight synchronization with stimulus onset. The protocol consists of three stages: (1) system assembly, (2) calibration for both cooperative, and for minimally cooperative subjects (e.g., animals or infants), and (3) gaze monitoring and recording.
Ocak, Işık; Kara, Atila; Ince, Can
2016-12-01
The clinical relevance of microcirculation and its bedside observation started gaining importance in the 1990s since the introduction of hand-held video microscopes. From then, this technology has been continuously developed, and its clinical relevance has been established in more than 400 studies. In this paper, we review the different types of video microscopes, their application techniques, the microcirculation of different organ systems, the analysis methods, and the software and scoring systems. The main focus of this review will be on the state-of-art technique, CytoCam-incident dark-field imaging, and the most recent technological and technical updates concerning microcirculation monitoring. Copyright © 2016 Elsevier Ltd. All rights reserved.
A Novel Laser and Video-Based Displacement Transducer to Monitor Bridge Deflections
Vicente, Miguel A.; Gonzalez, Dorys C.; Minguez, Jesus; Schumacher, Thomas
2018-01-01
The measurement of static vertical deflections on bridges continues to be a first-level technological challenge. These data are of great interest, especially for the case of long-term bridge monitoring; in fact, they are perhaps more valuable than any other measurable parameter. This is because material degradation processes and changes of the mechanical properties of the structure due to aging (for example creep and shrinkage in concrete bridges) have a direct impact on the exhibited static vertical deflections. This paper introduces and evaluates an approach to monitor displacements and rotations of structures using a novel laser and video-based displacement transducer (LVBDT). The proposed system combines the use of laser beams, LED lights, and a digital video camera, and was especially designed to capture static and slow-varying displacements. Contrary to other video-based approaches, the camera is located on the bridge, hence allowing to capture displacements at one location. Subsequently, the sensing approach and the procedure to estimate displacements and the rotations are described. Additionally, laboratory and in-service field testing carried out to validate the system are presented and discussed. The results demonstrate that the proposed sensing approach is robust, accurate, and reliable, and also inexpensive, which are essential for field implementation. PMID:29587380
A Novel Laser and Video-Based Displacement Transducer to Monitor Bridge Deflections.
Vicente, Miguel A; Gonzalez, Dorys C; Minguez, Jesus; Schumacher, Thomas
2018-03-25
The measurement of static vertical deflections on bridges continues to be a first-level technological challenge. These data are of great interest, especially for the case of long-term bridge monitoring; in fact, they are perhaps more valuable than any other measurable parameter. This is because material degradation processes and changes of the mechanical properties of the structure due to aging (for example creep and shrinkage in concrete bridges) have a direct impact on the exhibited static vertical deflections. This paper introduces and evaluates an approach to monitor displacements and rotations of structures using a novel laser and video-based displacement transducer (LVBDT). The proposed system combines the use of laser beams, LED lights, and a digital video camera, and was especially designed to capture static and slow-varying displacements. Contrary to other video-based approaches, the camera is located on the bridge, hence allowing to capture displacements at one location. Subsequently, the sensing approach and the procedure to estimate displacements and the rotations are described. Additionally, laboratory and in-service field testing carried out to validate the system are presented and discussed. The results demonstrate that the proposed sensing approach is robust, accurate, and reliable, and also inexpensive, which are essential for field implementation.
Chezar, H.; Lee, J.
1985-01-01
A deep-towed photographic system with completely self-contained recording instrumentation and power can obtain color-video and still-photographic transects along rough terrane without need for a long electrically conducting cable. Both the video- and still-camera systems utilize relatively inexpensive and proven off-the-shelf hardware adapted for deep-water environments. The small instrument frame makes the towed sled an ideal photographic tool for use on ship or small-boat operations. The system includes a temperature probe and altimeter that relay data acoustically from the sled to the surface ship. This relay enables the operator to monitor simultaneously water temperature and the precise height off the bottom. ?? 1985.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-10-15
...] Availability of Compliance Guide for the Use of Video or Other Electronic Monitoring or Recording Equipment in... the availability of a compliance guide on the use of video or other electronic monitoring or recording... Procedures video records. FSIS is soliciting comments on this compliance guide. Once FSIS receives OMB...
49 CFR 174.67 - Tank car unloading.
Code of Federal Regulations, 2010 CFR
2010-10-01
... (2) Monitored by a signaling system (e.g., video system, sensing equipment, or mechanical equipment... or at a remote location within the facility, such as a control room. The signaling system must— (i...
Studying fish near ocean energy devices using underwater video
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matzner, Shari; Hull, Ryan E.; Harker-Klimes, Genevra EL
The effects of energy devices on fish populations are not well-understood, and studying the interactions of fish with tidal and instream turbines is challenging. To address this problem, we have evaluated algorithms to automatically detect fish in underwater video and propose a semi-automated method for ocean and river energy device ecological monitoring. The key contributions of this work are the demonstration of a background subtraction algorithm (ViBE) that detected 87% of human-identified fish events and is suitable for use in a real-time system to reduce data volume, and the demonstration of a statistical model to classify detections as fish ormore » not fish that achieved a correct classification rate of 85% overall and 92% for detections larger than 5 pixels. Specific recommendations for underwater video acquisition to better facilitate automated processing are given. The recommendations will help energy developers put effective monitoring systems in place, and could lead to a standard approach that simplifies the monitoring effort and advances the scientific understanding of the ecological impacts of ocean and river energy devices.« less
Web-based video monitoring of CT and MRI procedures
NASA Astrophysics Data System (ADS)
Ratib, Osman M.; Dahlbom, Magdalena; Kho, Hwa T.; Valentino, Daniel J.; McCoy, J. Michael
2000-05-01
A web-based video transmission of images from CT and MRI consoles was implemented in an Intranet environment for real- time monitoring of ongoing procedures. Images captured from the consoles are compressed to video resolution and broadcasted through a web server. When called upon, the attending radiologists can view these live images on any computer within the secured Intranet network. With adequate compression, these images can be displayed simultaneously in different locations at a rate of 2 to 5 images/sec through standard LAN. The quality of the images being insufficient for diagnostic purposes, our users survey showed that they were suitable for supervising a procedure, positioning the imaging slices and for routine quality checking before completion of a study. The system was implemented at UCLA to monitor 9 CTs and 6 MRIs distributed in 4 buildings. This system significantly improved the radiologists productivity by saving precious time spent in trips between reading rooms and examination rooms. It also improved patient throughput by reducing the waiting time for the radiologists to come to check a study before moving the patient from the scanner.
NASA Astrophysics Data System (ADS)
Kuehl, C. Stephen
1996-06-01
Video signal system performance can be compromised in a military aircraft cockpit management system (CMS) with the tailoring of vintage Electronics Industries Association (EIA) RS170 and RS343A video interface standards. Video analog interfaces degrade when induced system noise is present. Further signal degradation has been traditionally associated with signal data conversions between avionics sensor outputs and the cockpit display system. If the CMS engineering process is not carefully applied during the avionics video and computing architecture development, extensive and costly redesign will occur when visual sensor technology upgrades are incorporated. Close monitoring and technical involvement in video standards groups provides the knowledge-base necessary for avionic systems engineering organizations to architect adaptable and extendible cockpit management systems. With the Federal Communications Commission (FCC) in the process of adopting the Digital HDTV Grand Alliance System standard proposed by the Advanced Television Systems Committee (ATSC), the entertainment and telecommunications industries are adopting and supporting the emergence of new serial/parallel digital video interfaces and data compression standards that will drastically alter present NTSC-M video processing architectures. The re-engineering of the U.S. Broadcasting system must initially preserve the electronic equipment wiring networks within broadcast facilities to make the transition to HDTV affordable. International committee activities in technical forums like ITU-R (former CCIR), ANSI/SMPTE, IEEE, and ISO/IEC are establishing global consensus on video signal parameterizations that support a smooth transition from existing analog based broadcasting facilities to fully digital computerized systems. An opportunity exists for implementing these new video interface standards over existing video coax/triax cabling in military aircraft cockpit management systems. Reductions in signal conversion processing steps, major improvement in video noise reduction, and an added capability to pass audio/embedded digital data within the digital video signal stream are the significant performance increases associated with the incorporation of digital video interface standards. By analyzing the historical progression of military CMS developments, establishing a systems engineering process for CMS design, tracing the commercial evolution of video signal standardization, adopting commercial video signal terminology/definitions, and comparing/contrasting CMS architecture modifications using digital video interfaces; this paper provides a technical explanation on how a systems engineering process approach to video interface standardization can result in extendible and affordable cockpit management systems.
Aquatic Toxic Analysis by Monitoring Fish Behavior Using Computer Vision: A Recent Progress
Fu, Longwen; Liu, Zuoyi
2018-01-01
Video tracking based biological early warning system achieved a great progress with advanced computer vision and machine learning methods. Ability of video tracking of multiple biological organisms has been largely improved in recent years. Video based behavioral monitoring has become a common tool for acquiring quantified behavioral data for aquatic risk assessment. Investigation of behavioral responses under chemical and environmental stress has been boosted by rapidly developed machine learning and artificial intelligence. In this paper, we introduce the fundamental of video tracking and present the pioneer works in precise tracking of a group of individuals in 2D and 3D space. Technical and practical issues suffered in video tracking are explained. Subsequently, the toxic analysis based on fish behavioral data is summarized. Frequently used computational methods and machine learning are explained with their applications in aquatic toxicity detection and abnormal pattern analysis. Finally, advantages of recent developed deep learning approach in toxic prediction are presented. PMID:29849612
High Resolution, High Frame Rate Video Technology
NASA Technical Reports Server (NTRS)
1990-01-01
Papers and working group summaries presented at the High Resolution, High Frame Rate Video (HHV) Workshop are compiled. HHV system is intended for future use on the Space Shuttle and Space Station Freedom. The Workshop was held for the dual purpose of: (1) allowing potential scientific users to assess the utility of the proposed system for monitoring microgravity science experiments; and (2) letting technical experts from industry recommend improvements to the proposed near-term HHV system. The following topics are covered: (1) State of the art in the video system performance; (2) Development plan for the HHV system; (3) Advanced technology for image gathering, coding, and processing; (4) Data compression applied to HHV; (5) Data transmission networks; and (6) Results of the users' requirements survey conducted by NASA.
A web-based system for home monitoring of patients with Parkinson's disease using wearable sensors.
Chen, Bor-Rong; Patel, Shyamal; Buckley, Thomas; Rednic, Ramona; McClure, Douglas J; Shih, Ludy; Tarsy, Daniel; Welsh, Matt; Bonato, Paolo
2011-03-01
This letter introduces MercuryLive, a platform to enable home monitoring of patients with Parkinson's disease (PD) using wearable sensors. MercuryLive contains three tiers: a resource-aware data collection engine that relies upon wearable sensors, web services for live streaming and storage of sensor data, and a web-based graphical user interface client with video conferencing capability. Besides, the platform has the capability of analyzing sensor (i.e., accelerometer) data to reliably estimate clinical scores capturing the severity of tremor, bradykinesia, and dyskinesia. Testing results showed an average data latency of less than 400 ms and video latency of about 200 ms with video frame rate of about 13 frames/s when 800 kb/s of bandwidth were available and we used a 40% video compression, and data feature upload requiring 1 min of extra time following a 10 min interactive session. These results indicate that the proposed platform is suitable to monitor patients with PD to facilitate the titration of medications in the late stages of the disease.
GoPro Hero Cameras for Creation of a Three-Dimensional, Educational, Neurointerventional Video.
Park, Min S; Brock, Andrea; Mortimer, Vance; Taussky, Philipp; Couldwell, William T; Quigley, Edward
2017-10-01
Neurointerventional education relies on an apprenticeship model, with the trainee observing and participating in procedures with the guidance of a mentor. While educational videos are becoming prevalent in surgical cases, there is a dearth of comparable educational material for trainees in neurointerventional programs. We sought to create a high-quality, three-dimensional video of a routine diagnostic cerebral angiogram for use as an educational tool. A diagnostic cerebral angiogram was recorded using two GoPro HERO 3+ cameras with the Dual HERO System to capture the proceduralist's hands during the case. This video was edited with recordings from the video monitors to create a real-time three-dimensional video of both the actions of the neurointerventionalist and the resulting wire/catheter movements. The final edited video, in either two or three dimensions, can serve as another instructional tool for the training of residents and/or fellows. Additional videos can be created in a similar fashion of more complicated neurointerventional cases. The GoPro HERO 3+ camera and Dual HERO System can be used to create educational videos of neurointerventional procedures.
Video fingerprinting for copy identification: from research to industry applications
NASA Astrophysics Data System (ADS)
Lu, Jian
2009-02-01
Research that began a decade ago in video copy detection has developed into a technology known as "video fingerprinting". Today, video fingerprinting is an essential and enabling tool adopted by the industry for video content identification and management in online video distribution. This paper provides a comprehensive review of video fingerprinting technology and its applications in identifying, tracking, and managing copyrighted content on the Internet. The review includes a survey on video fingerprinting algorithms and some fundamental design considerations, such as robustness, discriminability, and compactness. It also discusses fingerprint matching algorithms, including complexity analysis, and approximation and optimization for fast fingerprint matching. On the application side, it provides an overview of a number of industry-driven applications that rely on video fingerprinting. Examples are given based on real-world systems and workflows to demonstrate applications in detecting and managing copyrighted content, and in monitoring and tracking video distribution on the Internet.
Getting the Bigger Picture With Digital Surveillance
NASA Technical Reports Server (NTRS)
2002-01-01
Through a Space Act Agreement, Diebold, Inc., acquired the exclusive rights to Glenn Research Center's patented video observation technology, originally designed to accelerate video image analysis for various ongoing and future space applications. Diebold implemented the technology into its AccuTrack digital, color video recorder, a state-of- the-art surveillance product that uses motion detection for around-the- clock monitoring. AccuTrack captures digitally signed images and transaction data in real-time. This process replaces the onerous tasks involved in operating a VCR-based surveillance system, and subsequently eliminates the need for central viewing and tape archiving locations altogether. AccuTrack can monitor an entire bank facility, including four automated teller machines, multiple teller lines, and new account areas, all from one central location.
Non-intrusive head movement analysis of videotaped seizures of epileptic origin.
Mandal, Bappaditya; Eng, How-Lung; Lu, Haiping; Chan, Derrick W S; Ng, Yen-Ling
2012-01-01
In this work we propose a non-intrusive video analytic system for patient's body parts movement analysis in Epilepsy Monitoring Unit. The system utilizes skin color modeling, head/face pose template matching and face detection to analyze and quantify the head movements. Epileptic patients' heads are analyzed holistically to infer seizure and normal random movements. The patient does not require to wear any special clothing, markers or sensors, hence it is totally non-intrusive. The user initializes the person-specific skin color and selects few face/head poses in the initial few frames. The system then tracks the head/face and extracts spatio-temporal features. Support vector machines are then used on these features to classify seizure-like movements from normal random movements. Experiments are performed on numerous long hour video sequences captured in an Epilepsy Monitoring Unit at a local hospital. The results demonstrate the feasibility of the proposed system in pediatric epilepsy monitoring and seizure detection.
NASA Astrophysics Data System (ADS)
Efstathiou, Nectarios; Skitsas, Michael; Psaroudakis, Chrysostomos; Koutras, Nikolaos
2017-09-01
Nowadays, video surveillance cameras are used for the protection and monitoring of a huge number of facilities worldwide. An important element in such surveillance systems is the use of aerial video streams originating from onboard sensors located on Unmanned Aerial Vehicles (UAVs). Video surveillance using UAVs represent a vast amount of video to be transmitted, stored, analyzed and visualized in a real-time way. As a result, the introduction and development of systems able to handle huge amount of data become a necessity. In this paper, a new approach for the collection, transmission and storage of aerial videos and metadata is introduced. The objective of this work is twofold. First, the integration of the appropriate equipment in order to capture and transmit real-time video including metadata (i.e. position coordinates, target) from the UAV to the ground and, second, the utilization of the ADITESS Versatile Media Content Management System (VMCMS-GE) for storing of the video stream and the appropriate metadata. Beyond the storage, VMCMS-GE provides other efficient management capabilities such as searching and processing of videos, along with video transcoding. For the evaluation and demonstration of the proposed framework we execute a use case where the surveillance of critical infrastructure and the detection of suspicious activities is performed. Collected video Transcodingis subject of this evaluation as well.
Olson, R; Hahn, D I; Buckert, A
2009-06-01
Short-haul truck (lorry) drivers are particularly vulnerable to back pain and injury due to exposure to whole body vibration, prolonged sitting and demanding material handling tasks. The current project reports the results of video-based assessments (711 stops) and driver behavioural self-monitoring (BSM) (385 stops) of injury hazards during non-driving work. Participants (n = 3) worked in a trailer fitted with a camera system during baseline and BSM phases. Descriptive analyses showed that challenging customer environments and non-standard ingress/egress were prevalent. Statistical modelling of video-assessment results showed that each instance of manual material handling increased the predicted mean for severe trunk postures by 7%, while customer use of a forklift, moving standard pallets and moving non-standard pallets decreased predicted means by 12%, 20% and 22% respectively. Video and BSM comparisons showed that drivers were accurate at self-monitoring frequent environmental conditions, but less accurate at monitoring trunk postures and rare work events. The current study identified four predictors of severe trunk postures that can be modified to reduce risk of injury among truck drivers and showed that workers can produce reliable self-assessment data with BSM methods for frequent and easily discriminated events environmental.
50 CFR 622.5 - Recordkeeping and reporting.
Code of Federal Regulations, 2012 CFR
2012-10-01
... such record as specified in paragraph (a)(2) of this section. (B) Electronic logbook/video monitoring... participate in the NMFS-sponsored electronic logbook and/or video monitoring reporting program as directed by...) of this section. (ii) Electronic logbook/video monitoring reporting. The owner or operator of a...
Highly Protable Airborne Multispectral Imaging System
NASA Technical Reports Server (NTRS)
Lehnemann, Robert; Mcnamee, Todd
2001-01-01
A portable instrumentation system is described that includes and airborne and a ground-based subsytem. It can acquire multispectral image data over swaths of terrain ranging in width from about 1.5 to 1 km. The system was developed especially for use in coastal environments and is well suited for performing remote sensing and general environmental monitoring. It includes a small,munpilotaed, remotely controlled airplance that carries a forward-looking camera for navigation, three downward-looking monochrome video cameras for imaging terrain in three spectral bands, a video transmitter, and a Global Positioning System (GPS) reciever.
Objective video presentation QoE predictor for smart adaptive video streaming
NASA Astrophysics Data System (ADS)
Wang, Zhou; Zeng, Kai; Rehman, Abdul; Yeganeh, Hojatollah; Wang, Shiqi
2015-09-01
How to deliver videos to consumers over the network for optimal quality-of-experience (QoE) has been the central goal of modern video delivery services. Surprisingly, regardless of the large volume of videos being delivered everyday through various systems attempting to improve visual QoE, the actual QoE of end consumers is not properly assessed, not to say using QoE as the key factor in making critical decisions at the video hosting, network and receiving sites. Real-world video streaming systems typically use bitrate as the main video presentation quality indicator, but using the same bitrate to encode different video content could result in drastically different visual QoE, which is further affected by the display device and viewing condition of each individual consumer who receives the video. To correct this, we have to put QoE back to the driver's seat and redesign the video delivery systems. To achieve this goal, a major challenge is to find an objective video presentation QoE predictor that is accurate, fast, easy-to-use, display device adaptive, and provides meaningful QoE predictions across resolution and content. We propose to use the newly developed SSIMplus index (https://ece.uwaterloo.ca/~z70wang/research/ssimplus/) for this role. We demonstrate that based on SSIMplus, one can develop a smart adaptive video streaming strategy that leads to much smoother visual QoE impossible to achieve using existing adaptive bitrate video streaming approaches. Furthermore, SSIMplus finds many more applications, in live and file-based quality monitoring, in benchmarking video encoders and transcoders, and in guiding network resource allocations.
Real-time video quality monitoring
NASA Astrophysics Data System (ADS)
Liu, Tao; Narvekar, Niranjan; Wang, Beibei; Ding, Ran; Zou, Dekun; Cash, Glenn; Bhagavathy, Sitaram; Bloom, Jeffrey
2011-12-01
The ITU-T Recommendation G.1070 is a standardized opinion model for video telephony applications that uses video bitrate, frame rate, and packet-loss rate to measure the video quality. However, this model was original designed as an offline quality planning tool. It cannot be directly used for quality monitoring since the above three input parameters are not readily available within a network or at the decoder. And there is a great room for the performance improvement of this quality metric. In this article, we present a real-time video quality monitoring solution based on this Recommendation. We first propose a scheme to efficiently estimate the three parameters from video bitstreams, so that it can be used as a real-time video quality monitoring tool. Furthermore, an enhanced algorithm based on the G.1070 model that provides more accurate quality prediction is proposed. Finally, to use this metric in real-world applications, we present an example emerging application of real-time quality measurement to the management of transmitted videos, especially those delivered to mobile devices.
Economical Video Monitoring of Traffic
NASA Technical Reports Server (NTRS)
Houser, B. C.; Paine, G.; Rubenstein, L. D.; Parham, O. Bruce, Jr.; Graves, W.; Bradley, C.
1986-01-01
Data compression allows video signals to be transmitted economically on telephone circuits. Telephone lines transmit television signals to remote traffic-control center. Lines also carry command signals from center to TV camera and compressor at highway site. Video system with television cameras positioned at critical points on highways allows traffic controllers to determine visually, almost immediately, exact cause of traffic-flow disruption; e.g., accidents, breakdowns, or spills, almost immediately. Controllers can then dispatch appropriate emergency services and alert motorists to minimize traffic backups.
NASA Astrophysics Data System (ADS)
Sun, Hong; Wu, Qian-zhong
2013-09-01
In order to improve the precision of optical-electric tracking device, proposing a kind of improved optical-electric tracking device based on MEMS, in allusion to the tracking error of gyroscope senor and the random drift, According to the principles of time series analysis of random sequence, establish AR model of gyro random error based on Kalman filter algorithm, then the output signals of gyro are multiple filtered with Kalman filter. And use ARM as micro controller servo motor is controlled by fuzzy PID full closed loop control algorithm, and add advanced correction and feed-forward links to improve response lag of angle input, Free-forward can make output perfectly follow input. The function of lead compensation link is to shorten the response of input signals, so as to reduce errors. Use the wireless video monitor module and remote monitoring software (Visual Basic 6.0) to monitor servo motor state in real time, the video monitor module gathers video signals, and the wireless video module will sent these signals to upper computer, so that show the motor running state in the window of Visual Basic 6.0. At the same time, take a detailed analysis to the main error source. Through the quantitative analysis of the errors from bandwidth and gyro sensor, it makes the proportion of each error in the whole error more intuitive, consequently, decrease the error of the system. Through the simulation and experiment results shows the system has good following characteristic, and it is very valuable for engineering application.
Falling-incident detection and throughput enhancement in a multi-camera video-surveillance system.
Shieh, Wann-Yun; Huang, Ju-Chin
2012-09-01
For most elderly, unpredictable falling incidents may occur at the corner of stairs or a long corridor due to body frailty. If we delay to rescue a falling elder who is likely fainting, more serious consequent injury may occur. Traditional secure or video surveillance systems need caregivers to monitor a centralized screen continuously, or need an elder to wear sensors to detect falling incidents, which explicitly waste much human power or cause inconvenience for elders. In this paper, we propose an automatic falling-detection algorithm and implement this algorithm in a multi-camera video surveillance system. The algorithm uses each camera to fetch the images from the regions required to be monitored. It then uses a falling-pattern recognition algorithm to determine if a falling incident has occurred. If yes, system will send short messages to someone needs to be noticed. The algorithm has been implemented in a DSP-based hardware acceleration board for functionality proof. Simulation results show that the accuracy of falling detection can achieve at least 90% and the throughput of a four-camera surveillance system can be improved by about 2.1 times. Copyright © 2011 IPEM. Published by Elsevier Ltd. All rights reserved.
Video monitoring of oxygen saturation during controlled episodes of acute hypoxia.
Addison, Paul S; Foo, David M H; Jacquel, Dominique; Borg, Ulf
2016-08-01
A method for extracting video photoplethysmographic information from an RGB video stream is tested on data acquired during a porcine model of acute hypoxia. Cardiac pulsatile information was extracted from the acquired signals and processed to determine a continuously reported oxygen saturation (SvidO2). A high degree of correlation was found to exist between the video and a reference from a pulse oximeter. The calculated mean bias and accuracy across all eight desaturation episodes were -0.03% (range: -0.21% to 0.24%) and accuracy 4.90% (range: 3.80% to 6.19%) respectively. The results support the hypothesis that oxygen saturation trending can be evaluated accurately from a video system during acute hypoxia.
Hand hygiene monitoring technology: protocol for a systematic review
2013-01-01
Background Healthcare worker hand hygiene is thought to be one of the most important strategies to prevent healthcare-associated infections, but compliance is generally poor. Hand hygiene improvement interventions must include audits of compliance (almost always with feedback), which are most often done by direct observation - a method that is expensive, subjective, and prone to bias. New technologies, including electronic and video hand hygiene monitoring systems, have the potential to provide continuous and objective monitoring of hand hygiene, regular feedback, and for some systems, real-time reminders. We propose a systematic review of the evidence supporting the effectiveness of these systems. The primary objective is to determine whether hand hygiene monitoring systems yield sustainable improvements in hand hygiene compliance when compared to usual care. Methods/Design MEDLINE, EMBASE, CINAHL, and other relevant databases will be searched for randomized control studies and quasi-experimental studies evaluating a video or electronic hand hygiene monitoring system. A standard data collection form will be used to abstract relevant information from included studies. Bias will be assessed using the Cochrane Effective Practice and Organization of Care Group Risk of Bias Assessment Tool. Studies will be reviewed independently by two reviewers, with disputes resolved by a third reviewer. The primary outcome is directly observed hand hygiene compliance. Secondary outcomes include healthcare-associated infection incidence and improvements in hand hygiene compliance as measured by alternative metrics. Results will be qualitatively summarized with comparisons made between study quality, the measured outcome, and study-specific factors that may be expected to affect outcome (for example, study duration, frequency of feedback, use of real-time reminders). Meta-analysis will be performed if there is more than one study of similar systems with comparable outcome definitions. Discussion Electronic and video monitoring systems have the potential to improve hand hygiene compliance and prevent healthcare-associated infection, but are expensive, difficult to install and maintain, and may not be accepted by all healthcare workers. This review will assess the current evidence of effectiveness of these systems before their widespread adoption. Study registration PROSPERO registration number: CRD42013004519 PMID:24219817
Hand hygiene monitoring technology: protocol for a systematic review.
Srigley, Jocelyn A; Lightfoot, David; Fernie, Geoff; Gardam, Michael; Muller, Matthew P
2013-11-12
Healthcare worker hand hygiene is thought to be one of the most important strategies to prevent healthcare-associated infections, but compliance is generally poor. Hand hygiene improvement interventions must include audits of compliance (almost always with feedback), which are most often done by direct observation - a method that is expensive, subjective, and prone to bias. New technologies, including electronic and video hand hygiene monitoring systems, have the potential to provide continuous and objective monitoring of hand hygiene, regular feedback, and for some systems, real-time reminders. We propose a systematic review of the evidence supporting the effectiveness of these systems. The primary objective is to determine whether hand hygiene monitoring systems yield sustainable improvements in hand hygiene compliance when compared to usual care. MEDLINE, EMBASE, CINAHL, and other relevant databases will be searched for randomized control studies and quasi-experimental studies evaluating a video or electronic hand hygiene monitoring system. A standard data collection form will be used to abstract relevant information from included studies. Bias will be assessed using the Cochrane Effective Practice and Organization of Care Group Risk of Bias Assessment Tool. Studies will be reviewed independently by two reviewers, with disputes resolved by a third reviewer. The primary outcome is directly observed hand hygiene compliance. Secondary outcomes include healthcare-associated infection incidence and improvements in hand hygiene compliance as measured by alternative metrics. Results will be qualitatively summarized with comparisons made between study quality, the measured outcome, and study-specific factors that may be expected to affect outcome (for example, study duration, frequency of feedback, use of real-time reminders). Meta-analysis will be performed if there is more than one study of similar systems with comparable outcome definitions. Electronic and video monitoring systems have the potential to improve hand hygiene compliance and prevent healthcare-associated infection, but are expensive, difficult to install and maintain, and may not be accepted by all healthcare workers. This review will assess the current evidence of effectiveness of these systems before their widespread adoption. PROSPERO registration number: CRD42013004519.
Telepathology. Long-distance diagnosis.
Weinstein, R S; Bloom, K J; Rozek, L S
1989-04-01
Telepathology is defined as the practice of pathology at a distance, by visualizing an image on a video monitor rather than viewing a specimen directly through a microscope. Components of a telepathology system include the following: (1) a workstation equipped with a high-resolution video camera attached to a remote-controlled light microscope; (2) a pathologist workstation incorporating controls for manipulating the robotic microscope as well as a high-resolution video monitor; and (3) a telecommunications link. Progress has been made in designing and constructing telepathology workstations and fully motorized, computer-controlled light microscopes suitable for telepathology. In addition, components such as video signal digital encoders and decoders that produce remarkably stable, high-color fidelity, and high-resolution images have been incorporated into the workstations. Resolution requirements for the video microscopy component of telepathology have been formally examined in receiver operator characteristic (ROC) curve analyses. Test-of-concept demonstrations have been completed with the use of geostationary satellites as the broadband communication linkages for 750-line resolution video. Potential benefits of telepathology include providing a means of conveniently delivering pathology services in real-time to remote sites or underserviced areas, time-sharing of pathologists' services by multiple institutions, and increasing accessibility to specialty pathologists.
Video Game Adapts To Brain Waves
NASA Technical Reports Server (NTRS)
Pope, Alan T.; Bogart, Edward H.
1994-01-01
Electronic training system based on video game developed to help children afflicted with attention-deficit disorder (ADD) learn to prolong their attention spans. Uses combination of electroencephalography (EEG) and adaptive control to encourage attentiveness. Monitors trainee's brain-wave activity: if EEG signal indicates attention is waning, system increases difficulty of game, forcing trainee to devote more attention to it. Game designed to make trainees want to win and, in so doing, learn to pay attention for longer times.
Pelletier, Dominique; Leleu, Kévin; Mallet, Delphine; Mou-Tham, Gérard; Hervé, Gilles; Boureau, Matthieu; Guilpart, Nicolas
2012-01-01
Observing spatial and temporal variations of marine biodiversity from non-destructive techniques is central for understanding ecosystem resilience, and for monitoring and assessing conservation strategies, e.g. Marine Protected Areas. Observations are generally obtained through Underwater Visual Censuses (UVC) conducted by divers. The problems inherent to the presence of divers have been discussed in several papers. Video techniques are increasingly used for observing underwater macrofauna and habitat. Most video techniques that do not need the presence of a diver use baited remote systems. In this paper, we present an original video technique which relies on a remote unbaited rotating remote system including a high definition camera. The system is set on the sea floor to record images. These are then analysed at the office to quantify biotic and abiotic sea bottom cover, and to identify and count fish species and other species like marine turtles. The technique was extensively tested in a highly diversified coral reef ecosystem in the South Lagoon of New Caledonia, based on a protocol covering both protected and unprotected areas in major lagoon habitats. The technique enabled to detect and identify a large number of species, and in particular fished species, which were not disturbed by the system. Habitat could easily be investigated through the images. A large number of observations could be carried out per day at sea. This study showed the strong potential of this non obtrusive technique for observing both macrofauna and habitat. It offers a unique spatial coverage and can be implemented at sea at a reasonable cost by non-expert staff. As such, this technique is particularly interesting for investigating and monitoring coastal biodiversity in the light of current conservation challenges and increasing monitoring needs.
Tano, R; Takaku, S; Ozaki, T
2017-11-01
The objective of this study was to investigate whether having dental hygiene students monitor video recordings of their dental explorer skills is an effective means of proper self-evaluation in dental hygiene education. The study participants comprised students of a dental hygiene training school who had completed a module on explorer skills using models, and a dental hygiene instructor who was in charge of lessons. Questions regarding 'posture', 'grip', 'finger rest' and 'operation' were set to evaluate explorer skills. Participants rated each item on a two-point scale: 'competent (1)' or 'not competent (0)'. The total score was calculated for each evaluation item in evaluations by students with and without video monitoring, and in evaluations by the instructor with video monitoring. Mean scores for students with and without video monitoring were compared using a t-test, while intraclass correlation coefficients were found by reliability analysis of student and instructor evaluations. A total of 37 students and one instructor were subject to analysis. The mean score for evaluations with and without video monitoring differed significantly for posture (P < 0.0001), finger rest (P = 0.0006) and operation (P < 0.0001). The intraclass correlation coefficient between students and instructors for evaluations with video monitoring ranged from 0.90 to 0.97 for the four evaluation items. The results of this study suggested that having students monitor video recordings of their own explorer skills may be an effective means of proper self-evaluation in specialized basic education using models. © 2016 The Authors. International Journal of Dental Hygiene Published by John Wiley& Sons Ltd.
Live video monitoring robot controlled by web over internet
NASA Astrophysics Data System (ADS)
Lokanath, M.; Akhil Sai, Guruju
2017-11-01
Future is all about robots, robot can perform tasks where humans cannot, Robots have huge applications in military and industrial area for lifting heavy weights, for accurate placements, for repeating the same task number of times, where human are not efficient. Generally robot is a mix of electronic, electrical and mechanical engineering and can do the tasks automatically on its own or under the supervision of humans. The camera is the eye for robot, call as robovision helps in monitoring security system and also can reach into the places where the human eye cannot reach. This paper presents about developing a live video streaming robot controlled from the website. We designed the web, controlling for the robot to move left, right, front and back while streaming video. As we move to the smart environment or IoT (Internet of Things) by smart devices the system we developed here connects over the internet and can be operated with smart mobile phone using a web browser. The Raspberry Pi model B chip acts as heart for this system robot, the sufficient motors, surveillance camera R pi 2 are connected to Raspberry pi.
67. Building 102, view of electronic switching amplifier (in retracted ...
67. Building 102, view of electronic switching amplifier (in retracted or open position) with video monitor mounted at top to monitor performance and condition of system in oil bath. - Clear Air Force Station, Ballistic Missile Early Warning System Site II, One mile west of mile marker 293.5 on Parks Highway, 5 miles southwest of Anderson, Anderson, Denali Borough, AK
Smartphone-based photoplethysmographic imaging for heart rate monitoring.
Alafeef, Maha
2017-07-01
The purpose of this study is to make use of visible light reflected mode photoplethysmographic (PPG) imaging for heart rate (HR) monitoring via smartphones. The system uses the built-in camera feature in mobile phones to capture video from the subject's index fingertip. The video is processed, and then the PPG signal resulting from the video stream processing is used to calculate the subject's heart rate. Records from 19 subjects were used to evaluate the system's performance. The HR values obtained by the proposed method were compared with the actual HR. The obtained results show an accuracy of 99.7% and a maximum absolute error of 0.4 beats/min where most of the absolute errors lay in the range of 0.04-0.3 beats/min. Given the encouraging results, this type of HR measurement can be adopted with great benefit, especially in the conditions of personal use or home-based care. The proposed method represents an efficient portable solution for HR accurate detection and recording.
VID-R and SCAN: Tools and Methods for the Automated Analysis of Visual Records.
ERIC Educational Resources Information Center
Ekman, Paul; And Others
The VID-R (Visual Information Display and Retrieval) system that enables computer-aided analysis of visual records is composed of a film-to-television chain, two videotape recorders with complete remote control of functions, a video-disc recorder, three high-resolution television monitors, a teletype, a PDP-8, a video and audio interface, three…
Federal Register 2010, 2011, 2012, 2013, 2014
2013-04-10
... specifying the permissible scope and conduct of monitoring; and Be organized and carry out its business in a...-12 III. Review Log of Proposal: Log 1 24 CFR 3285--Alternative Foundation System Testing. Log 80 24...-fhydelarge.html ; Video explaining ASTM D6007: http://www.ntainc.com/video-fhyde.html . Log 81 24 CFR 3280...
Bar-Chart-Monitor System For Wind Tunnels
NASA Technical Reports Server (NTRS)
Jung, Oscar
1993-01-01
Real-time monitor system provides bar-chart displays of significant operating parameters developed for National Full-Scale Aerodynamic Complex at Ames Research Center. Designed to gather and process sensory data on operating conditions of wind tunnels and models, and displays data for test engineers and technicians concerned with safety and validation of operating conditions. Bar-chart video monitor displays data in as many as 50 channels at maximum update rate of 2 Hz in format facilitating quick interpretation.
Satellite-aided coastal zone monitoring and vessel traffic system
NASA Technical Reports Server (NTRS)
Baker, J. L.
1981-01-01
The development and demonstration of a coastal zone monitoring and vessel traffic system is described. This technique uses a LORAN-C navigational system and relays signals via the ATS-3 satellite to a computer driven color video display for real time control. Multi-use applications of the system to search and rescue operations, coastal zone management and marine safety are described. It is emphasized that among the advantages of the system are: its unlimited range; compatibility with existing navigation systems; and relatively inexpensive cost.
Hu, Peter F; Xiao, Yan; Ho, Danny; Mackenzie, Colin F; Hu, Hao; Voigt, Roger; Martz, Douglas
2006-06-01
One of the major challenges for day-of-surgery operating room coordination is accurate and timely situation awareness. Distributed and secure real-time status information is key to addressing these challenges. This article reports on the design and implementation of a passive status monitoring system in a 19-room surgical suite of a major academic medical center. Key design requirements considered included integrated real-time operating room status display, access control, security, and network impact. The system used live operating room video images and patient vital signs obtained through monitors to automatically update events and operating room status. Images were presented on a "need-to-know" basis, and access was controlled by identification badge authorization. The system delivered reliable real-time operating room images and status with acceptable network impact. Operating room status was visualized at 4 separate locations and was used continuously by clinicians and operating room service providers to coordinate operating room activities.
NASA Technical Reports Server (NTRS)
Bogart, Edward H. (Inventor); Pope, Alan T. (Inventor)
2000-01-01
A system for display on a single video display terminal of multiple physiological measurements is provided. A subject is monitored by a plurality of instruments which feed data to a computer programmed to receive data, calculate data products such as index of engagement and heart rate, and display the data in a graphical format simultaneously on a single video display terminal. In addition live video representing the view of the subject and the experimental setup may also be integrated into the single data display. The display may be recorded on a standard video tape recorder for retrospective analysis.
NASA Technical Reports Server (NTRS)
Temple, Enoch C.
1994-01-01
The space industry has developed many composite materials that have high durability in proportion to their weights. Many of these materials have a likelihood for flaws that is higher than in traditional metals. There are also coverings (such as paint) that develop flaws that may adversely affect the performance of the system in which they are used. Therefore there is a need to monitor the soundness of composite structures. To meet this monitoring need, many nondestructive evaluation (NDE) systems have been developed. An NDE system is designed to detect material flaws and make flaw measurements without destroying the inspected item. Also, the detection operation is expected to be performed in a rapid manner in a field or production environment. Some of the most recent video-based NDE methodologies are shearography, holography, thermography, and video image correlation.
DOT National Transportation Integrated Search
2002-12-01
The Virginia Department of Transportation, like many other transportation agencies, has invested significantly in extensive closed circuit television (CCTV) systems to monitor freeways in urban areas. Although these systems have proven very effective...
Covert video monitoring in the assessment of medically unexplained symptoms in children.
Wallace, Dustin P; Sim, Leslie A; Harrison, Tracy E; Bruce, Barbara K; Harbeck-Weber, Cynthia
2012-04-01
Diagnosis of medically unexplained symptoms (MUS) occurs after thorough evaluations have failed to identify a physiological cause for symptoms. However, families and providers may wonder if something has been missed, leading to reduced confidence in behavioral treatment. Confidence may be improved through the use of technology such as covert video monitoring to better assess functioning across settings. A 12-year-old male presented with progressive neurological decline, precipitated by chronic pain. After thorough evaluation and the failure of standard treatments (medical, rehabilitative, and psychological) covert video monitoring revealed that the patient demonstrated greater abilities when alone in his room. Negative reinforcement was used to initiate recovery, accompanied by positive reinforcement and a rehabilitative approach. Covert video monitoring assisted in three subsequent cases over the following 3 years. In certain complex cases, video monitoring can inform the assessment and treatment of MUS. Discussion includes ethical and practical considerations.
Video stereo-laparoscopy system
NASA Astrophysics Data System (ADS)
Xiang, Yang; Hu, Jiasheng; Jiang, Huilin
2006-01-01
Minimally invasive surgery (MIS) has contributed significantly to patient care by reducing the morbidity associated with more invasive procedures. MIS procedures have become standard treatment for gallbladder disease and some abdominal malignancies. The imaging system has played a major role in the evolving field of minimally invasive surgery (MIS). The image need to have good resolution, large magnification, especially, the image need to have depth cue at the same time the image have no flicker and suit brightness. The video stereo-laparoscopy system can meet the demand of the doctors. This paper introduces the 3d video laparoscopy has those characteristic, field frequency: 100Hz, the depth space: 150mm, resolution: 10pl/mm. The work principle of the system is introduced in detail, and the optical system and time-division stereo-display system are described briefly in this paper. The system has focusing image lens, it can image on the CCD chip, the optical signal can change the video signal, and through A/D switch of the image processing system become the digital signal, then display the polarized image on the screen of the monitor through the liquid crystal shutters. The doctors with the polarized glasses can watch the 3D image without flicker of the tissue or organ. The 3D video laparoscope system has apply in the MIS field and praised by doctors. Contrast to the traditional 2D video laparoscopy system, it has some merit such as reducing the time of surgery, reducing the problem of surgery and the trained time.
NASA Astrophysics Data System (ADS)
Barbieri, Ivano; Lambruschini, Paolo; Raggio, Marco; Stagnaro, Riccardo
2007-12-01
The increase in the availability of bandwidth for wireless links, network integration, and the computational power on fixed and mobile platforms at affordable costs allows nowadays for the handling of audio and video data, their quality making them suitable for medical application. These information streams can support both continuous monitoring and emergency situations. According to this scenario, the authors have developed and implemented the mobile communication system which is described in this paper. The system is based on ITU-T H.323 multimedia terminal recommendation, suitable for real-time data/video/audio and telemedical applications. The audio and video codecs, respectively, H.264 and G723.1, were implemented and optimized in order to obtain high performance on the system target processors. Offline media streaming storage and retrieval functionalities were supported by integrating a relational database in the hospital central system. The system is based on low-cost consumer technologies such as general packet radio service (GPRS) and wireless local area network (WLAN or WiFi) for lowband data/video transmission. Implementation and testing were carried out for medical emergency and telemedicine application. In this paper, the emergency case study is described.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nguyen, V; James, J; Wang, B
Purpose: To describe an in-house video goggle feedback system for motion management during simulation and treatment of radiation therapy patients. Methods: This video goggle system works by splitting and amplifying the video output signal directly from the Varian Real-Time Position Management (RPM) workstation or TrueBeam imaging workstation into two signals using a Distribution Amplifier. The first signal S[1] gets reconnected back to the monitor. The second signal S[2] gets connected to the input of a Video Scaler. The S[2] signal can be scaled, cropped and panned in real time to display only the relevant information to the patient. The outputmore » signal from the Video Scaler gets connected to an HDMI Extender Transmitter via a DVI-D to HDMI converter cable. The S[2] signal can be transported from the HDMI Extender Transmitter to the HDMI Extender Receiver located inside the treatment room via a Cat5e/6 cable. Inside the treatment room, the HDMI Extender Receiver is permanently mounted on the wall near the conduit where the Cat5e/6 cable is located. An HDMI cable is used to connect from the output of the HDMI Receiver to the video goggles. Results: This video goggle feedback system is currently being used at two institutions. At one institution, the system was just recently implemented for simulation and treatments on two breath-hold gated patients with 8+ total fractions over a two month period. At the other institution, the system was used to treat 100+ breath-hold gated patients on three Varian TrueBeam linacs and has been operational for twelve months. The average time to prepare the video goggle system for treatment is less than 1 minute. Conclusion: The video goggle system provides an efficient and reliable method to set up a video feedback signal for radiotherapy patients with motion management.« less
NASA Astrophysics Data System (ADS)
Wickert, A. D.
2010-12-01
To understand how single events can affect landscape change, we must catch the landscape in the act. Direct observations are rare and often dangerous. While video is a good alternative, commercially-available video systems for field installation cost 11,000, weigh ~100 pounds (45 kg), and shoot 640x480 pixel video at 4 frames per second. This is the same resolution as a cheap point-and-shoot camera, with a frame rate that is nearly an order of magnitude worse. To overcome these limitations of resolution, cost, and portability, I designed and built a new observation station. This system, called ATVIS (Automatically Triggered Video or Imaging Station), costs 450--500 and weighs about 15 pounds. It can take roughly 3 hours of 1280x720 pixel video, 6.5 hours of 640x480 video, or 98,000 1600x1200 pixel photos (one photo every 7 seconds for 8 days). The design calls for a simple Canon point-and-shoot camera fitted with custom firmware that allows 5V pulses through its USB cable to trigger it to take a picture or to initiate or stop video recording. These pulses are provided by a programmable microcontroller that can take input from either sensors or a data logger. The design is easily modifiable to a variety of camera and sensor types, and can also be used for continuous time-lapse imagery. We currently have prototypes set up at a gully near West Bijou Creek on the Colorado high plains and at tributaries to Marble Canyon in northern Arizona. Hopefully, a relatively inexpensive and portable system such as this will allow geomorphologists to supplement sensor networks with photo or video monitoring and allow them to see—and better quantify—the fantastic array of processes that modify landscapes as they unfold. Camera station set up at Badger Canyon, Arizona.Inset: view into box. Clockwise from bottom right: camera, microcontroller (blue), DC converter (red), solar charge controller, 12V battery. Materials and installation assistance courtesy of Ron Griffiths and the USGS Grand Canyon Monitoring and Research Center.
Monitoring tool usage in surgery videos using boosted convolutional and recurrent neural networks.
Al Hajj, Hassan; Lamard, Mathieu; Conze, Pierre-Henri; Cochener, Béatrice; Quellec, Gwenolé
2018-05-09
This paper investigates the automatic monitoring of tool usage during a surgery, with potential applications in report generation, surgical training and real-time decision support. Two surgeries are considered: cataract surgery, the most common surgical procedure, and cholecystectomy, one of the most common digestive surgeries. Tool usage is monitored in videos recorded either through a microscope (cataract surgery) or an endoscope (cholecystectomy). Following state-of-the-art video analysis solutions, each frame of the video is analyzed by convolutional neural networks (CNNs) whose outputs are fed to recurrent neural networks (RNNs) in order to take temporal relationships between events into account. Novelty lies in the way those CNNs and RNNs are trained. Computational complexity prevents the end-to-end training of "CNN+RNN" systems. Therefore, CNNs are usually trained first, independently from the RNNs. This approach is clearly suboptimal for surgical tool analysis: many tools are very similar to one another, but they can generally be differentiated based on past events. CNNs should be trained to extract the most useful visual features in combination with the temporal context. A novel boosting strategy is proposed to achieve this goal: the CNN and RNN parts of the system are simultaneously enriched by progressively adding weak classifiers (either CNNs or RNNs) trained to improve the overall classification accuracy. Experiments were performed in a dataset of 50 cataract surgery videos, where the usage of 21 surgical tools was manually annotated, and a dataset of 80 cholecystectomy videos, where the usage of 7 tools was manually annotated. Very good classification performance are achieved in both datasets: tool usage could be labeled with an average area under the ROC curve of A z =0.9961 and A z =0.9939, respectively, in offline mode (using past, present and future information), and A z =0.9957 and A z =0.9936, respectively, in online mode (using past and present information only). Copyright © 2018 Elsevier B.V. All rights reserved.
Otto, Kristen J; Hapner, Edie R; Baker, Michael; Johns, Michael M
2006-02-01
Advances in commercial video technology have improved office-based laryngeal imaging. This study investigates the perceived image quality of a true high-definition (HD) video camera and the effect of magnification on laryngeal videostroboscopy. We performed a prospective, dual-armed, single-blinded analysis of a standard laryngeal videostroboscopic examination comparing 3 separate add-on camera systems: a 1-chip charge-coupled device (CCD) camera, a 3-chip CCD camera, and a true 720p (progressive scan) HD camera. Displayed images were controlled for magnification and image size (20-inch [50-cm] display, red-green-blue, and S-video cable for 1-chip and 3-chip cameras; digital visual interface cable and HD monitor for HD camera). Ten blinded observers were then asked to rate the following 5 items on a 0-to-100 visual analog scale: resolution, color, ability to see vocal fold vibration, sense of depth perception, and clarity of blood vessels. Eight unblinded observers were then asked to rate the difference in perceived resolution and clarity of laryngeal examination images when displayed on a 10-inch (25-cm) monitor versus a 42-inch (105-cm) monitor. A visual analog scale was used. These monitors were controlled for actual resolution capacity. For each item evaluated, randomized block design analysis demonstrated that the 3-chip camera scored significantly better than the 1-chip camera (p < .05). For the categories of color and blood vessel discrimination, the 3-chip camera scored significantly better than the HD camera (p < .05). For magnification alone, observers rated the 42-inch monitor statistically better than the 10-inch monitor. The expense of new medical technology must be judged against its added value. This study suggests that HD laryngeal imaging may not add significant value over currently available video systems, in perceived image quality, when a small monitor is used. Although differences in clarity between standard and HD cameras may not be readily apparent on small displays, a large display size coupled with HD technology may impart improved diagnosis of subtle vocal fold lesions and vibratory anomalies.
NASA Astrophysics Data System (ADS)
Babic, Z.; Pilipovic, R.; Risojevic, V.; Mirjanic, G.
2016-06-01
Honey bees have crucial role in pollination across the world. This paper presents a simple, non-invasive, system for pollen bearing honey bee detection in surveillance video obtained at the entrance of a hive. The proposed system can be used as a part of a more complex system for tracking and counting of honey bees with remote pollination monitoring as a final goal. The proposed method is executed in real time on embedded systems co-located with a hive. Background subtraction, color segmentation and morphology methods are used for segmentation of honey bees. Classification in two classes, pollen bearing honey bees and honey bees that do not have pollen load, is performed using nearest mean classifier, with a simple descriptor consisting of color variance and eccentricity features. On in-house data set we achieved correct classification rate of 88.7% with 50 training images per class. We show that the obtained classification results are not far behind from the results of state-of-the-art image classification methods. That favors the proposed method, particularly having in mind that real time video transmission to remote high performance computing workstation is still an issue, and transfer of obtained parameters of pollination process is much easier.
Video integrated measurement system. [Diagnostic display devices
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spector, B.; Eilbert, L.; Finando, S.
A Video Integrated Measurement (VIM) System is described which incorporates the use of various noninvasive diagnostic procedures (moire contourography, electromyography, posturometry, infrared thermography, etc.), used individually or in combination, for the evaluation of neuromusculoskeletal and other disorders and their management with biofeedback and other therapeutic procedures. The system provides for measuring individual diagnostic and therapeutic modes, or multiple modes by split screen superimposition, of real time (actual) images of the patient and idealized (ideal-normal) models on a video monitor, along with analog and digital data, graphics, color, and other transduced symbolic information. It is concluded that this system provides anmore » innovative and efficient method by which the therapist and patient can interact in biofeedback training/learning processes and holds considerable promise for more effective measurement and treatment of a wide variety of physical and behavioral disorders.« less
Siebig, Sylvia; Kuhls, Silvia; Imhoff, Michael; Langgartner, Julia; Reng, Michael; Schölmerich, Jürgen; Gather, Ursula; Wrede, Christian E
2010-03-01
Monitoring of physiologic parameters in critically ill patients is currently performed by threshold alarm systems with high sensitivity but low specificity. As a consequence, a multitude of alarms are generated, leading to an impaired clinical value of these alarms due to reduced alertness of the intensive care unit (ICU) staff. To evaluate a new alarm procedure, we currently generate a database of physiologic data and clinical alarm annotations. Data collection is taking place at a 12-bed medical ICU. Patients with monitoring of at least heart rate, invasive arterial blood pressure, and oxygen saturation are included in the study. Numerical physiologic data at 1-second intervals, monitor alarms, and alarm settings are extracted from the surveillance network. Bedside video recordings are performed with network surveillance cameras. Based on the extracted data and the video recordings, alarms are clinically annotated by an experienced physician. The alarms are categorized according to their technical validity and clinical relevance by a taxonomy system that can be broadly applicable. Preliminary results showed that only 17% of the alarms were classified as relevant, and 44% were technically false. The presented system for collecting real-time bedside monitoring data in conjunction with video-assisted annotations of clinically relevant events is the first allowing the assessment of 24-hour periods and reduces the bias usually created by bedside observers in comparable studies. It constitutes the basis for the development and evaluation of "smart" alarm algorithms, which may help to reduce the number of alarms at the ICU, thereby improving patient safety. Copyright 2010 Elsevier Inc. All rights reserved.
Video and thermal imaging system for monitoring interiors of high temperature reaction vessels
Saveliev, Alexei V [Chicago, IL; Zelepouga, Serguei A [Hoffman Estates, IL; Rue, David M [Chicago, IL
2012-01-10
A system and method for real-time monitoring of the interior of a combustor or gasifier wherein light emitted by the interior surface of a refractory wall of the combustor or gasifier is collected using an imaging fiber optic bundle having a light receiving end and a light output end. Color information in the light is captured with primary color (RGB) filters or complimentary color (GMCY) filters placed over individual pixels of color sensors disposed within a digital color camera in a BAYER mosaic layout, producing RGB signal outputs or GMCY signal outputs. The signal outputs are processed using intensity ratios of the primary color filters or the complimentary color filters, producing video images and/or thermal images of the interior of the combustor or gasifier.
Fiber optic video monitoring system for remote CT/MR scanners clinically accepted
NASA Astrophysics Data System (ADS)
Tecotzky, Raymond H.; Bazzill, Todd M.; Eldredge, Sandra L.; Tagawa, James; Sayre, James W.
1992-07-01
With the proliferation of CT travel to distant scanners to review images before their patients can be released. We designed a fiber-optic broadband video system to transmit images from seven scanner consoles to fourteen remote monitoring stations in real time. This system has been used clinically by radiologists for over one years. We designed and conducted a user survey to categorize the levels of system use by section (Chest, GI, GU, Bone, Neuro, Peds, etc.), to measure operational utilization and acceptance of the system into the clinical environment, to clarify the system''s importance as a clinical tool for saving radiologists travel-time to distant CT the system''s performance and limitations as a diagnostic tool. The study was administered directly to radiologists using a printed survey form. The results of the survey''s compiled data show a high percentage of system usage by a wide spectrum of radiologists. Clearly, this system has been accepted into the clinical environment as a highly valued diagnostic tool in terms of time savings and functional flexibility.
DOT National Transportation Integrated Search
1999-03-01
This study focused on assessing the application of traffic monitoring and management systems which use transportable surveillance and ramp meter trailers, video image processors, and wireless communications. The mobile surveillance and wireless commu...
Dennis, John U.; Krynitsky, Jonathan; Garmendia-Cedillos, Marcial; Swaroop, Kanchan; Malley, James D.; Pajevic, Sinisa; Abuhatzira, Liron; Bustin, Michael; Gillet, Jean-Pierre; Gottesman, Michael M.; Mitchell, James B.; Pohida, Thomas J.
2015-01-01
The System for Continuous Observation of Rodents in Home-cage Environment (SCORHE) was developed to demonstrate the viability of compact and scalable designs for quantifying activity levels and behavior patterns for mice housed within a commercial ventilated cage rack. The SCORHE in-rack design provides day- and night-time monitoring with the consistency and convenience of the home-cage environment. The dual-video camera custom hardware design makes efficient use of space, does not require home-cage modification, and is animal-facility user-friendly. Given the system’s low cost and suitability for use in existing vivariums without modification to the animal husbandry procedures or housing setup, SCORHE opens up the potential for the wider use of automated video monitoring in animal facilities. SCORHE’s potential uses include day-to-day health monitoring, as well as advanced behavioral screening and ethology experiments, ranging from the assessment of the short- and long-term effects of experimental cancer treatments to the evaluation of mouse models. When used for phenotyping and animal model studies, SCORHE aims to eliminate the concerns often associated with many mouse-monitoring methods, such as circadian rhythm disruption, acclimation periods, lack of night-time measurements, and short monitoring periods. Custom software integrates two video streams to extract several mouse activity and behavior measures. Studies comparing the activity levels of ABCB5 knockout and HMGN1 overexpresser mice with their respective C57BL parental strains demonstrate SCORHE’s efficacy in characterizing the activity profiles for singly- and doubly-housed mice. Another study was conducted to demonstrate the ability of SCORHE to detect a change in activity resulting from administering a sedative. PMID:24706080
Remote console for virtual telerehabilitation.
Lewis, Jeffrey A; Boian, Rares F; Burdea, Grigore; Deutsch, Judith E
2005-01-01
The Remote Console (ReCon) telerehabilitation system provides a platform for therapists to guide rehabilitation sessions from a remote location. The ReCon system integrates real-time graphics, audio/video communication, private therapist chat, post-test data graphs, extendable patient and exercise performance monitoring, exercise pre-configuration and modification under a single application. These tools give therapists the ability to conduct training, monitoring/assessment, and therapeutic intervention remotely and in real-time.
Grand, Laszlo; Ftomov, Sergiu; Timofeev, Igor
2012-01-01
Parallel electrophysiological recording and behavioral monitoring of freely moving animals is essential for a better understanding of the neural mechanisms underlying behavior. In this paper we describe a novel wireless recording technique, which is capable of synchronously recording in vivo multichannel electrophysiological (LFP, MUA, EOG, EMG) and activity data (accelerometer, video) from freely moving cats. The method is based on the integration of commercially available components into a simple monitoring system and is complete with accelerometers and the needed signal processing tools. LFP activities of freely moving group-housed cats were recorded from multiple intracortical areas and from the hippocampus. EMG, EOG, accelerometer and video were simultaneously acquired with LFP activities 24-h a day for 3 months. These recordings confirm the possibility of using our wireless method for 24-h long-term monitoring of neurophysiological and behavioral data of freely moving experimental animals such as cats, ferrets, rabbits and other large animals. PMID:23099345
Dark-cycle monitoring of biological subjects on Space Station Freedom
NASA Technical Reports Server (NTRS)
Chuang, Sherry; Mian, Arshad
1992-01-01
The operational environment for biological research on Space Station Freedom will incorporate video technology for monitoring plant and animal subjects. The video coverage must include dark-cycle monitoring because early experiments will use rodents that are nocturnal and therefore most active during the dark part of the daily cycle. Scientific requirements for monitoring during the dark cycle are exacting. Infrared (IR) or near-IR sensors are required. The trade-offs between these two types of sensors are based on engineering constraints, sensitivity spectra, and the quality of imagery possible from each type. This paper presents results of a study conducted by the Biological Flight Research Projects Office in conjunction with the Spacecraft Data Systems Branch at ARC to investigate the use of charged-coupled-device and IR cameras to meet the scientific requirements. Also examined is the effect of low levels of near-IR illumination on the circadian rhythm in rats.
Internal corrosion monitoring of subsea oil and gas production equipment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Joosten, M.W.; Fischer, K.P.; Strommen, R.
1995-04-01
Nonintrusive techniques will dominate subsea corrosion monitoring compared with the intrusive methods because such methods do not interfere with pipeline operations. The long-term reliability of the nonintrusive techniques in general is considered to be much better than that of intrusive-type probes. The nonintrusive techniques based on radioactive tracers (TLA, NA) and FSM and UT are expected to be the main types of subsea corrosion monitoring equipment in the coming years. Available techniques that could be developed specifically for subsea applications are: electrochemical noise, corrosion potentials (using new types of reference electrodes), multiprobe system for electrochemical measurements, and video camera inspectionmore » (mini-video camera with light source). The following innovative techniques have potential but need further development: ion selective electrodes, radioactive tracers, and Raman spectroscopy.« less
NASA Astrophysics Data System (ADS)
Ribera, Javier; Tahboub, Khalid; Delp, Edward J.
2015-03-01
Video surveillance systems are widely deployed for public safety. Real-time monitoring and alerting are some of the key requirements for building an intelligent video surveillance system. Real-life settings introduce many challenges that can impact the performance of real-time video analytics. Video analytics are desired to be resilient to adverse and changing scenarios. In this paper we present various approaches to characterize the uncertainty of a classifier and incorporate crowdsourcing at the times when the method is uncertain about making a particular decision. Incorporating crowdsourcing when a real-time video analytic method is uncertain about making a particular decision is known as online active learning from crowds. We evaluate our proposed approach by testing a method we developed previously for crowd flow estimation. We present three different approaches to characterize the uncertainty of the classifier in the automatic crowd flow estimation method and test them by introducing video quality degradations. Criteria to aggregate crowdsourcing results are also proposed and evaluated. An experimental evaluation is conducted using a publicly available dataset.
NASA Technical Reports Server (NTRS)
Batten, Adam; Edwards, Graeme; Gerasimov, Vadim; Hoschke, Nigel; Isaacs, Peter; Lewis, Chris; Moore, Richard; Oppolzer, Florien; Price, Don; Prokopenko, Mikhail;
2010-01-01
This report describes a significant advance in the capability of the CSIRO/NASA structural health monitoring Concept Demonstrator (CD). The main thrust of the work has been the development of a mobile robotic agent, and the hardware and software modifications and developments required to enable the demonstrator to operate as a single, self-organizing, multi-agent system. This single-robot system is seen as the forerunner of a system in which larger numbers of small robots perform inspection and repair tasks cooperatively, by self-organization. While the goal of demonstrating self-organized damage diagnosis was not fully achieved in the time available, much of the work required for the final element that enables the robot to point the video camera and transmit an image has been completed. A demonstration video of the CD and robotic systems operating will be made and forwarded to NASA.
Kelly, Christopher R; Hogle, Nancy J; Landman, Jaime; Fowler, Dennis L
2008-09-01
The use of high-definition cameras and monitors during minimally invasive procedures can provide the surgeon and operating team with more than twice the resolution of standard definition systems. Although this dramatic improvement in visualization offers numerous advantages, the adoption of high definition cameras in the operating room can be challenging because new recording equipment must be purchased, and several new technologies are required to edit and distribute video. The purpose of this review article is to provide an overview of the popular methods for recording, editing, and distributing high-definition video. This article discusses the essential technical concepts of high-definition video, reviews the different kinds of equipment and methods most often used for recording, and describes several options for video distribution.
Using video playbacks to study visual communication in a marine fish, Salaria pavo.
Gonçalves; Oliveira; Körner; Poschadel; Schlupp
2000-09-01
Video playbacks have been successfully applied to the study of visual communication in several groups of animals. However, this technique is controversial as video monitors are designed with the human visual system in mind. Differences between the visual capabilities of humans and other animals will lead to perceptually different interpretations of video images. We simultaneously presented males and females of the peacock blenny, Salaria pavo, with a live conspecific male and an online video image of the same individual. Video images failed to elicit appropriate responses. Males were aggressive towards the live male but not towards video images of the same male. Similarly, females courted only the live male and spent more time near this stimulus. In contrast, females of the gynogenetic poecilid Poecilia formosa showed an equal preference for a live and video image of a P. mexicana male, suggesting a response to live animals as strong as to video images. We discuss differences between the species that may explain their opposite reaction to video images. Copyright 2000 The Association for the Study of Animal Behaviour.
Di Gennaro, Giancarlo; Picardi, Angelo; Sparano, Antonio; Mascia, Addolorata; Meldolesi, Giulio N; Grammaldo, Liliana G; Esposito, Vincenzo; Quarato, Pier P
2012-03-01
To evaluate the efficiency and safety of pre-surgical video-EEG monitoring with a slow anti-epileptic drug (AED) taper and a rescue benzodiazepine protocol. Fifty-four consecutive patients with refractory focal epilepsy who underwent pre-surgical video-electroencephalography (EEG) monitoring during the year 2010 were included in the study. Time to first seizure, duration of monitoring, incidence of 4-h and 24-h seizure clustering, secondarily generalised tonic-clonic seizures (sGTCS), status epilepticus, falls and cardiac asystole were evaluated. A total of 190 seizures were recorded. Six (11%) patients had 4-h clusters and 21 (39%) patients had 24-h clusters. While 15 sGTCS were recorded in 14 patients (26%), status epilepticus did not occur and no seizure was complicated with cardiac asystole. Epileptic falls with no significant injuries occurred in three patients. The mean time to first seizure was 3.3days and the time to conclude video-EEG monitoring averaged 6days. Seizure clustering was common during pre-surgical video-EEG monitoring, although serious adverse events were rare with a slow AED tapering and a rescue benzodiazepine protocol. Slow AED taper pre-surgical video-EEG monitoring is fairly safe when performed in a highly specialised and supervised hospital setting. Copyright © 2011 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
1988-01-01
Hughes Aircraft Corporation's Probeye Model 3300 Thermal Video System consists of tripod mounted infrared scanner that detects the degree of heat emitted by an object and a TV monitor on which results are displayed. Latest addition to Hughes line of infrared medical applications can detect temperature variations as fine as one-tenth of a degree centigrade. Thermography, proving to be a valuable screening tool in diagnosis, can produce information to preclude necessity of performing more invasive tests that may be painful and hazardous. Also useful in verifying a patient's progress through therapy and rehabilitation.
Surveillance of ground vehicles for airport security
NASA Astrophysics Data System (ADS)
Blasch, Erik; Wang, Zhonghai; Shen, Dan; Ling, Haibin; Chen, Genshe
2014-06-01
Future surveillance systems will work in complex and cluttered environments which require systems engineering solutions for such applications such as airport ground surface management. In this paper, we highlight the use of a L1 video tracker for monitoring activities at an airport. We present methods of information fusion, entity detection, and activity analysis using airport videos for runway detection and airport terminal events. For coordinated airport security, automated ground surveillance enhances efficient and safe maneuvers for aircraft, unmanned air vehicles (UAVs) and unmanned ground vehicles (UGVs) operating within airport environments.
Meniscus Imaging for Crystal-Growth Control
NASA Technical Reports Server (NTRS)
Sachs, E. M.
1983-01-01
Silicon crystal growth monitored by new video system reduces operator stress and improves conditions for observation and control of growing process. System optics produce greater magnification vertically than horizontally, so entire meniscus and melt is viewed with high resolution in both width and height dimensions.
NASA Technical Reports Server (NTRS)
1995-01-01
George Nauck of ENCORE!!! invented and markets the Advanced Range Performance (ARPM) Video Golf System for measuring the result of a golf swing. After Nauck requested their assistance, Marshall Space Flight Center scientists suggested video and image processing/computing technology, and provided leads on commercial companies that dealt with the pertinent technologies. Nauck contracted with Applied Research Inc. to develop a prototype. The system employs an elevated camera, which sits behind the tee and follows the flight of the ball down range, catching the point of impact and subsequent roll. Instant replay of the video on a PC monitor at the tee allows measurement of the carry and roll. The unit measures distance and deviation from the target line, as well as distance from the target when one is selected. The information serves as an immediate basis for making adjustments or as a record of skill level progress for golfers.
NASA Astrophysics Data System (ADS)
Bukowiecka, Danuta; Tyburska, Agata; Struniawski, Jarosław; Jastrzebski, Pawel; Jewartowski, Blazej; Pozniak, Krzysztof; Kasprowicz, Grzegorz; Pastuszak, Grzegorz; Trochimiuk, Maciej; Abramowski, Andrzej; Gaska, Michal; Frasunek, Przemysław; Nalbach-Moszynska, Małgorzata; Brawata, Sebastian; Bubak, Iwona; Gloza, Małgorzata
2016-09-01
Preventing and eliminating the risks of terrorist attacks or natural disasters as well as an increase in the security of mass events and critical infrastructure requires the application of modern technologies. Therefore there is a proposal to construct a tool that integrates video signals transmitted by devices that are a part of video monitoring systems functioning in Poland. The article presents selected results of research conducted by the Police Academy in Szczytno under the implemented project for national defense and security on "Video Signals Integrator" Acronym - VSI. Project Leader: Warsaw University of Technology. The consortium: Police Academy in Szczytno, Atende Software Ltd., VORTEX Ltd. No. DOBBio7/ 01/02/2015 funded by the National Centre for Research and Development.
A method and data for video monitor sizing. [human CRT viewing requirements
NASA Technical Reports Server (NTRS)
Kirkpatrick, M., III; Shields, N. L., Jr.; Malone, T. B.; Guerin, E. G.
1976-01-01
The paper outlines an approach consisting of using analytical methods and empirical data to determine monitor size constraints based on the human operator's CRT viewing requirements in a context where panel space and volume considerations for the Space Shuttle aft cabin constrain the size of the monitor to be used. Two cases are examined: remote scene imaging and alphanumeric character display. The central parameter used to constrain monitor size is the ratio M/L where M is the monitor dimension and L the viewing distance. The study is restricted largely to 525 line video systems having an SNR of 32 db and bandwidth of 4.5 MHz. Degradation in these parameters would require changes in the empirically determined visual angle constants presented. The data and methods described are considered to apply to cases where operators are required to view via TV target objects which are well differentiated from the background and where the background is relatively sparse. It is also necessary to identify the critical target dimensions and cues.
Monitoring and diagnosis of vegetable growth based on internet of things
NASA Astrophysics Data System (ADS)
Zhang, Qian; Yu, Feng; Fu, Rong; Li, Gang
2017-10-01
A new condition monitoring method of vegetable growth was proposed, which was based on internet of things. It was combined remote environmental monitoring, video surveillance, intelligently decision-making and two-way video consultation together organically.
Pickering, Amy J; Blum, Annalise G; Breiman, Robert F; Ram, Pavani K; Davis, Jennifer
2014-01-01
In-person structured observation is considered the best approach for measuring hand hygiene behavior, yet is expensive, time consuming, and may alter behavior. Video surveillance could be a useful tool for objectively monitoring hand hygiene behavior if validated against current methods. Student hand cleaning behavior was monitored with video surveillance and in-person structured observation, both simultaneously and separately, at four primary schools in urban Kenya over a study period of 8 weeks. Video surveillance and in-person observation captured similar rates of hand cleaning (absolute difference <5%, p = 0.74). Video surveillance documented higher hand cleaning rates (71%) when at least one other person was present at the hand cleaning station, compared to when a student was alone (48%; rate ratio = 1.14 [95% CI 1.01-1.28]). Students increased hand cleaning rates during simultaneous video and in-person monitoring as compared to single-method monitoring, suggesting reactivity to each method of monitoring. This trend was documented at schools receiving a handwashing with soap intervention, but not at schools receiving a sanitizer intervention. Video surveillance of hand hygiene behavior yields results comparable to in-person observation among schools in a resource-constrained setting. Video surveillance also has certain advantages over in-person observation, including rapid data processing and the capability to capture new behavioral insights. Peer influence can significantly improve student hand cleaning behavior and, when possible, should be exploited in the design and implementation of school hand hygiene programs.
A 24-hour remote surveillance system for terrestrial wildlife studies
Sykes, P.W.; Ryman, W.E.; Kepler, C.B.; Hardy, J.W.
1995-01-01
The configuration, components, specifications and costs of a state-of-the-art closed-circuit television system with wide application for wildlife research and management are described. The principal system components consist of color CCTV camera with zoom lens, pan/tilt system, infrared illuminator, heavy duty tripod, coaxial cable, coaxitron system, half-duplex equalizing video/control amplifier, timelapse video cassette recorder, color video monitor, VHS video cassettes, portable generator, fuel tank and power cable. This system was developed and used in a study of Mississippi sandhiIl Crane (Grus canadensis pratensis) behaviors during incubation, hatching and fledging. The main advantages of the system are minimal downtime where a complete record of every event, its time of occurrence and duration, are permanently recorded and can be replayed as many times as necessary thereafter to retrieve the data. The system is particularly applicable for studies of behavior and predation, for counting individuals, or recording difficult to observe activities. The system can be run continuously for several weeks by two people, reducing personnel costs. This paper is intended to provide biologists who have litte knowledge of electronics with a system that might be useful to their specific needs. The disadvantages of this system are the initial costs (about $9800 basic, 1990-1991 U.S. dollars) and the time required to playback video cassette tapes for data retrieval, but the playback can be sped up when litte or no activity of interest is taking place. In our study, the positive aspects of the system far outweighed the negative.
Perceptual tools for quality-aware video networks
NASA Astrophysics Data System (ADS)
Bovik, A. C.
2014-01-01
Monitoring and controlling the quality of the viewing experience of videos transmitted over increasingly congested networks (especially wireless networks) is a pressing problem owing to rapid advances in video-centric mobile communication and display devices that are straining the capacity of the network infrastructure. New developments in automatic perceptual video quality models offer tools that have the potential to be used to perceptually optimize wireless video, leading to more efficient video data delivery and better received quality. In this talk I will review key perceptual principles that are, or could be used to create effective video quality prediction models, and leading quality prediction models that utilize these principles. The goal is to be able to monitor and perceptually optimize video networks by making them "quality-aware."
Nekton Interaction Monitoring System
DOE Office of Scientific and Technical Information (OSTI.GOV)
2017-03-15
The software provides a real-time processing system for sonar to detect and track animals, and to extract water column biomass statistics in order to facilitate continuous monitoring of an underwater environment. The Nekton Interaction Monitoring System (NIMS) extracts and archives tracking and backscatter statistics data from a real-time stream of data from a sonar device. NIMS also sends real-time tracking messages over the network that can be used by other systems to generate other metrics or to trigger instruments such as an optical video camera. A web-based user interface provides remote monitoring and control. NIMS currently supports three popular sonarmore » devices: M3 multi-beam sonar (Kongsberg), EK60 split-beam echo-sounder (Simrad) and BlueView acoustic camera (Teledyne).« less
Code of Federal Regulations, 2012 CFR
2012-07-01
... Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS... as video cameras, digital scanning sonar, and upweller systems; monitoring of sediment quality...
Code of Federal Regulations, 2014 CFR
2014-07-01
... Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS... as video cameras, digital scanning sonar, and upweller systems; monitoring of sediment quality...
Code of Federal Regulations, 2013 CFR
2013-07-01
... Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS... as video cameras, digital scanning sonar, and upweller systems; monitoring of sediment quality...
28 CFR 115.18 - Upgrades to facilities and technologies.
Code of Federal Regulations, 2012 CFR
2012-07-01
.... 115.18 Section 115.18 Judicial Administration DEPARTMENT OF JUSTICE (CONTINUED) PRISON RAPE ELIMINATION ACT NATIONAL STANDARDS Standards for Adult Prisons and Jails Prevention Planning § 115.18 Upgrades... abuse. (b) When installing or updating a video monitoring system, electronic surveillance system, or...
28 CFR 115.18 - Upgrades to facilities and technologies.
Code of Federal Regulations, 2013 CFR
2013-07-01
.... 115.18 Section 115.18 Judicial Administration DEPARTMENT OF JUSTICE (CONTINUED) PRISON RAPE ELIMINATION ACT NATIONAL STANDARDS Standards for Adult Prisons and Jails Prevention Planning § 115.18 Upgrades... abuse. (b) When installing or updating a video monitoring system, electronic surveillance system, or...
28 CFR 115.18 - Upgrades to facilities and technologies.
Code of Federal Regulations, 2014 CFR
2014-07-01
.... 115.18 Section 115.18 Judicial Administration DEPARTMENT OF JUSTICE (CONTINUED) PRISON RAPE ELIMINATION ACT NATIONAL STANDARDS Standards for Adult Prisons and Jails Prevention Planning § 115.18 Upgrades... abuse. (b) When installing or updating a video monitoring system, electronic surveillance system, or...
3-D video techniques in endoscopic surgery.
Becker, H; Melzer, A; Schurr, M O; Buess, G
1993-02-01
Three-dimensional visualisation of the operative field is an important requisite for precise and fast handling of open surgical operations. Up to now it has only been possible to display a two-dimensional image on the monitor during endoscopic procedures. The increasing complexity of minimal invasive interventions requires endoscopic suturing and ligatures of larger vessels which are difficult to perform without the impression of space. Three-dimensional vision therefore may decrease the operative risk, accelerate interventions and widen the operative spectrum. In April 1992 a 3-D video system developed at the Nuclear Research Center Karlsruhe, Germany (IAI Institute) was applied in various animal experimental procedures and clinically in laparoscopic cholecystectomy. The system works with a single monitor and active high-speed shutter glasses. Our first trials with this new 3-D imaging system clearly showed a facilitation of complex surgical manoeuvres like mobilisation of organs, preparation in the deep space and suture techniques. The 3-D-system introduced in this article will enter the market in 1993 (Opticon Co., Karlsruhe, Germany.
2004-10-25
FUSEDOT does not require facial recognition , or video surveillance of public areas, both of which are apparently a component of TIA ([26], pp...does not use fuzzy signal detection. Involves facial recognition and video surveillance of public areas. Involves monitoring the content of voice...fuzzy signal detection, which TIA does not. Second, FUSEDOT would be easier to develop, because it does not require the development of facial
Berkeley Lab Answers Your Home Energy Efficiency Questions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Walker, Iain
2013-02-14
In this follow-up "Ask Berkeley Lab" video, energy efficiency expert Iain Walker answers some of your questions about home energy efficiency. How do you monitor which appliances use the most energy? Should you replace your old windows? Are photovoltaic systems worth the cost? What to do about a leaky house? And what's the single biggest energy user in your home? Watch the video to get the answers to these and more questions.
Berkeley Lab Answers Your Home Energy Efficiency Questions
Walker, Iain
2018-01-16
In this follow-up "Ask Berkeley Lab" video, energy efficiency expert Iain Walker answers some of your questions about home energy efficiency. How do you monitor which appliances use the most energy? Should you replace your old windows? Are photovoltaic systems worth the cost? What to do about a leaky house? And what's the single biggest energy user in your home? Watch the video to get the answers to these and more questions.
Guidelines for Applying Video Simulation Technology to Training Land Design
1993-02-01
Training Land Design for Realism." The technical monitor was Dr. Victor Diersing, CEHSC-FN. This study was performed by the Environmental Resources...technology to their land management activities. 5 Objective The objective of this study was to provide a general overview of the use of video simulation...4). A market study of currently available hardware and software provided the basis for descriptions of hardware and software systems, and their
Early Detection of Infection in Pigs through an Online Monitoring System.
Martínez-Avilés, M; Fernández-Carrión, E; López García-Baones, J M; Sánchez-Vizcaíno, J M
2017-04-01
Late detection of emergency diseases causes significant economic losses for pig producers and governments. As the first signs of animal infection are usually fever and reduced motion that lead to reduced consumption of water and feed, we developed a novel smart system to monitor body temperature and motion in real time, facilitating the early detection of infectious diseases. In this study, carried out within the framework of the European Union research project Rapidia Field, we tested the smart system on 10 pigs experimentally infected with two doses of an attenuated strain of African swine fever. Biosensors and an accelerometer embedded in an eartag captured data before and after infection, and video cameras were used to monitor the animals 24 h per day. The results showed that in 8 of 9 cases, the monitoring system detected infection onset as an increase in body temperature and decrease in movement before or simultaneously with fever detection based on rectal temperature measurement, observation of clinical signs, the decrease in water consumption or positive qPCR detection of virus. In addition, this decrease in movement was reliably detected using automatic analysis of video images therefore providing an inexpensive alternative to direct motion measurement. The system can be set up to alert staff when high fever, reduced motion or both are detected in one or more animals. This system may be useful for monitoring sentinel herds in real time, considerably reducing the financial and logistical costs of periodic sampling and increasing the chances of early detection of infection. © 2015 Blackwell Verlag GmbH.
Pedestrian detection in video surveillance using fully convolutional YOLO neural network
NASA Astrophysics Data System (ADS)
Molchanov, V. V.; Vishnyakov, B. V.; Vizilter, Y. V.; Vishnyakova, O. V.; Knyaz, V. A.
2017-06-01
More than 80% of video surveillance systems are used for monitoring people. Old human detection algorithms, based on background and foreground modelling, could not even deal with a group of people, to say nothing of a crowd. Recent robust and highly effective pedestrian detection algorithms are a new milestone of video surveillance systems. Based on modern approaches in deep learning, these algorithms produce very discriminative features that can be used for getting robust inference in real visual scenes. They deal with such tasks as distinguishing different persons in a group, overcome problem with sufficient enclosures of human bodies by the foreground, detect various poses of people. In our work we use a new approach which enables to combine detection and classification tasks into one challenge using convolution neural networks. As a start point we choose YOLO CNN, whose authors propose a very efficient way of combining mentioned above tasks by learning a single neural network. This approach showed competitive results with state-of-the-art models such as FAST R-CNN, significantly overcoming them in speed, which allows us to apply it in real time video surveillance and other video monitoring systems. Despite all advantages it suffers from some known drawbacks, related to the fully-connected layers that obstruct applying the CNN to images with different resolution. Also it limits the ability to distinguish small close human figures in groups which is crucial for our tasks since we work with rather low quality images which often include dense small groups of people. In this work we gradually change network architecture to overcome mentioned above problems, train it on a complex pedestrian dataset and finally get the CNN detecting small pedestrians in real scenes.
The study of surgical image quality evaluation system by subjective quality factor method
NASA Astrophysics Data System (ADS)
Zhang, Jian J.; Xuan, Jason R.; Yang, Xirong; Yu, Honggang; Koullick, Edouard
2016-03-01
GreenLightTM procedure is an effective and economical way of treatment of benign prostate hyperplasia (BPH); there are almost a million of patients treated with GreenLightTM worldwide. During the surgical procedure, the surgeon or physician will rely on the monitoring video system to survey and confirm the surgical progress. There are a few obstructions that could greatly affect the image quality of the monitoring video, like laser glare by the tissue and body fluid, air bubbles and debris generated by tissue evaporation, and bleeding, just to name a few. In order to improve the physician's visual experience of a laser surgical procedure, the system performance parameter related to image quality needs to be well defined. However, since image quality is the integrated set of perceptions of the overall degree of excellence of an image, or in other words, image quality is the perceptually weighted combination of significant attributes (contrast, graininess …) of an image when considered in its marketplace or application, there is no standard definition on overall image or video quality especially for the no-reference case (without a standard chart as reference). In this study, Subjective Quality Factor (SQF) and acutance are used for no-reference image quality evaluation. Basic image quality parameters, like sharpness, color accuracy, size of obstruction and transmission of obstruction, are used as subparameter to define the rating scale for image quality evaluation or comparison. Sample image groups were evaluated by human observers according to the rating scale. Surveys of physician groups were also conducted with lab generated sample videos. The study shows that human subjective perception is a trustworthy way of image quality evaluation. More systematic investigation on the relationship between video quality and image quality of each frame will be conducted as a future study.
Achieving Last-Mile Broadband Access With Passive Optical Networking Technology
2002-09-01
Naval Postgraduate School Monterey, CA 93943-5000 8. PERFORMING ORGANIZATION REPORT NUMBER 9. SPONSORING / MONITORING ...AGENCY NAME(S) AND ADDRESS(ES) 10. SPONSORING/ MONITORING AGENCY REPORT NUMBER 11. SUPPLEMENTARY NOTES The views expressed in...definition television (HDTV), video telecommuting , tele- education, video-on-demand, online video games, interactive shopping and yet to
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gressel, M.G.; Heitbrink, W.A.; Jensen, P.A.
1992-08-01
The techniques for conducting video exposure monitoring were described along with the equipment required to monitor and record worker breathing zone concentrations, the analysis of the real time exposure data using video recordings, and the use of real time concentration data from a direct reading instrument to determine the effective ventilation rate and the mixing factor of a given room at a specific time. Case studies which made use of video exposure monitoring techniques to provide information not available through integrated sampling were also discussed. The process being monitored and the methodology used to monitor the exposures were described formore » each of the case studies. The case studies included manual material weigh out, ceramic casting cleaning, dumping bags of powdered materials, furniture stripping, administration of nitrous-oxide during dental procedures, hand held sanding operation, methanol exposures in maintenance garages, brake servicing, bulk loading of railroad cars and trucks, and grinding operations.« less
Molton, James S; Pang, Yan; Wang, Zhuochun; Qiu, Boqin; Wu, Pei; Rahman-Shepherd, Afifah; Ooi, Wei Tsang; Paton, Nicholas I
2016-12-20
Suboptimal medication adherence for infectious diseases such as tuberculosis (TB) results in poor clinical outcomes and ongoing infectivity. Directly observed therapy (DOT) is now standard of care for TB treatment monitoring but has a number of limitations. We aimed to develop and evaluate a smartphone-based system to facilitate remotely observed therapy via transmission of videos rather than in-person observation. We developed an integrated smartphone and web-based system (Mobile Interactive Supervised Therapy, MIST) to provide regular medication reminders and facilitate video recording of pill ingestion at predetermined timings each day, for upload and later review by a healthcare worker. We evaluated the system in a single arm, prospective study of adherence to a dietary supplement. Healthy volunteers were recruited through an online portal. Entry criteria included age ≥21 and owning an iOS or Android-based device. Participants took a dietary supplement pill once, twice or three-times a day for 2 months. We instructed them to video each pill taking episode using the system. Adherence as measured by the smartphone system and by pill count. 42 eligible participants were recruited (median age 24; 86% students). Videos were classified as received-confirmed pill intake (3475, 82.7% of the 4200 videos expected), received-uncertain pill intake (16, <1%), received-fake pill intake (31, <1%), not received-technical issues (223, 5.3%) or not received-assumed non-adherence (455, 10.8%). Overall median estimated participant adherence by MIST was 90.0%, similar to that obtained by pill count (93.8%). There was a good relationship between participant adherence as measured by MIST and by pill count (Spearmans r s 0.66, p<0.001). We have demonstrated the feasibility, acceptability and accuracy of a smartphone-based adherence support and monitoring system. The system has the potential to supplement and support the provision of DOT for TB and also to improve adherence in other conditions such as HIV and hepatitis C. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.
NASA Technical Reports Server (NTRS)
Cohen, Tamar E.; Lees, David S.; Deans, Matthew C.; Lim, Darlene S. S.; Lee, Yeon Jin Grace
2018-01-01
Exploration Ground Data Systems (xGDS) supports rapid scientific decision making by synchronizing video in context with map, instrument data visualization, geo-located notes and any other collected data. xGDS is an open source web-based software suite developed at NASA Ames Research Center to support remote science operations in analog missions and prototype solutions for remote planetary exploration. (See Appendix B) Typical video systems are designed to play or stream video only, independent of other data collected in the context of the video. Providing customizable displays for monitoring live video and data as well as replaying recorded video and data helps end users build up a rich situational awareness. xGDS was designed to support remote field exploration with unreliable networks. Commercial digital recording systems operate under the assumption that there is a stable and reliable network between the source of the video and the recording system. In many field deployments and space exploration scenarios, this is not the case - there are both anticipated and unexpected network losses. xGDS' Video Module handles these interruptions, storing the available video, organizing and characterizing the dropouts, and presenting the video for streaming or replay to the end user including visualization of the dropouts. Scientific instruments often require custom or expensive software to analyze and visualize collected data. This limits the speed at which the data can be visualized and limits access to the data to those users with the software. xGDS' Instrument Module integrates with instruments that collect and broadcast data in a single snapshot or that continually collect and broadcast a stream of data. While seeing a visualization of collected instrument data is informative, showing the context for the collected data, other data collected nearby along with events indicating current status helps remote science teams build a better understanding of the environment. Further, sharing geo-located, tagged notes recorded by the scientists and others on the team spurs deeper analysis of the data.
Pickering, Amy J.; Blum, Annalise G.; Breiman, Robert F.; Ram, Pavani K.; Davis, Jennifer
2014-01-01
Background In-person structured observation is considered the best approach for measuring hand hygiene behavior, yet is expensive, time consuming, and may alter behavior. Video surveillance could be a useful tool for objectively monitoring hand hygiene behavior if validated against current methods. Methods Student hand cleaning behavior was monitored with video surveillance and in-person structured observation, both simultaneously and separately, at four primary schools in urban Kenya over a study period of 8 weeks. Findings Video surveillance and in-person observation captured similar rates of hand cleaning (absolute difference <5%, p = 0.74). Video surveillance documented higher hand cleaning rates (71%) when at least one other person was present at the hand cleaning station, compared to when a student was alone (48%; rate ratio = 1.14 [95% CI 1.01–1.28]). Students increased hand cleaning rates during simultaneous video and in-person monitoring as compared to single-method monitoring, suggesting reactivity to each method of monitoring. This trend was documented at schools receiving a handwashing with soap intervention, but not at schools receiving a sanitizer intervention. Conclusion Video surveillance of hand hygiene behavior yields results comparable to in-person observation among schools in a resource-constrained setting. Video surveillance also has certain advantages over in-person observation, including rapid data processing and the capability to capture new behavioral insights. Peer influence can significantly improve student hand cleaning behavior and, when possible, should be exploited in the design and implementation of school hand hygiene programs. PMID:24676389
Ice flood velocity calculating approach based on single view metrology
NASA Astrophysics Data System (ADS)
Wu, X.; Xu, L.
2017-02-01
Yellow River is the river in which the ice flood occurs most frequently in China, hence, the Ice flood forecasting has great significance for the river flood prevention work. In various ice flood forecast models, the flow velocity is one of the most important parameters. In spite of the great significance of the flow velocity, its acquisition heavily relies on manual observation or deriving from empirical formula. In recent years, with the high development of video surveillance technology and wireless transmission network, the Yellow River Conservancy Commission set up the ice situation monitoring system, in which live videos can be transmitted to the monitoring center through 3G mobile networks. In this paper, an approach to get the ice velocity based on single view metrology and motion tracking technique using monitoring videos as input data is proposed. First of all, River way can be approximated as a plane. On this condition, we analyze the geometry relevance between the object side and the image side. Besides, we present the principle to measure length in object side from image. Secondly, we use LK optical flow which support pyramid data to track the ice in motion. Combining the result of camera calibration and single view metrology, we propose a flow to calculate the real velocity of ice flood. At last we realize a prototype system by programming and use it to test the reliability and rationality of the whole solution.
NASA Astrophysics Data System (ADS)
Chan, Yi-Tung; Wang, Shuenn-Jyi; Tsai, Chung-Hsien
2017-09-01
Public safety is a matter of national security and people's livelihoods. In recent years, intelligent video-surveillance systems have become important active-protection systems. A surveillance system that provides early detection and threat assessment could protect people from crowd-related disasters and ensure public safety. Image processing is commonly used to extract features, e.g., people, from a surveillance video. However, little research has been conducted on the relationship between foreground detection and feature extraction. Most current video-surveillance research has been developed for restricted environments, in which the extracted features are limited by having information from a single foreground; they do not effectively represent the diversity of crowd behavior. This paper presents a general framework based on extracting ensemble features from the foreground of a surveillance video to analyze a crowd. The proposed method can flexibly integrate different foreground-detection technologies to adapt to various monitored environments. Furthermore, the extractable representative features depend on the heterogeneous foreground data. Finally, a classification algorithm is applied to these features to automatically model crowd behavior and distinguish an abnormal event from normal patterns. The experimental results demonstrate that the proposed method's performance is both comparable to that of state-of-the-art methods and satisfies the requirements of real-time applications.
Multi-Target Camera Tracking, Hand-off and Display LDRD 158819 Final Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderson, Robert J.
2014-10-01
Modern security control rooms gather video and sensor feeds from tens to hundreds of cameras. Advanced camera analytics can detect motion from individual video streams and convert unexpected motion into alarms, but the interpretation of these alarms depends heavily upon human operators. Unfortunately, these operators can be overwhelmed when a large number of events happen simultaneously, or lulled into complacency due to frequent false alarms. This LDRD project has focused on improving video surveillance-based security systems by changing the fundamental focus from the cameras to the targets being tracked. If properly integrated, more cameras shouldn’t lead to more alarms, moremore » monitors, more operators, and increased response latency but instead should lead to better information and more rapid response times. For the course of the LDRD we have been developing algorithms that take live video imagery from multiple video cameras, identify individual moving targets from the background imagery, and then display the results in a single 3D interactive video. In this document we summarize the work in developing this multi-camera, multi-target system, including lessons learned, tools developed, technologies explored, and a description of current capability.« less
Multi-target camera tracking, hand-off and display LDRD 158819 final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderson, Robert J.
2014-10-01
Modern security control rooms gather video and sensor feeds from tens to hundreds of cameras. Advanced camera analytics can detect motion from individual video streams and convert unexpected motion into alarms, but the interpretation of these alarms depends heavily upon human operators. Unfortunately, these operators can be overwhelmed when a large number of events happen simultaneously, or lulled into complacency due to frequent false alarms. This LDRD project has focused on improving video surveillance-based security systems by changing the fundamental focus from the cameras to the targets being tracked. If properly integrated, more cameras shouldn't lead to more alarms, moremore » monitors, more operators, and increased response latency but instead should lead to better information and more rapid response times. For the course of the LDRD we have been developing algorithms that take live video imagery from multiple video cameras, identifies individual moving targets from the background imagery, and then displays the results in a single 3D interactive video. In this document we summarize the work in developing this multi-camera, multi-target system, including lessons learned, tools developed, technologies explored, and a description of current capability.« less
Interactive Video: One Monitor or Two?
ERIC Educational Resources Information Center
Cline, William J.
1991-01-01
Analysis of the effects of an interactive video workstation during lessons about Spanish culture suggested that the use of single or dual monitors was not an important factor in student learning, although there were some cost advantages associated with the two-monitor workstation design. (seven references) (Author/CB)
Bayesian Inference for Signal-Based Seismic Monitoring
NASA Astrophysics Data System (ADS)
Moore, D.
2015-12-01
Traditional seismic monitoring systems rely on discrete detections produced by station processing software, discarding significant information present in the original recorded signal. SIG-VISA (Signal-based Vertically Integrated Seismic Analysis) is a system for global seismic monitoring through Bayesian inference on seismic signals. By modeling signals directly, our forward model is able to incorporate a rich representation of the physics underlying the signal generation process, including source mechanisms, wave propagation, and station response. This allows inference in the model to recover the qualitative behavior of recent geophysical methods including waveform matching and double-differencing, all as part of a unified Bayesian monitoring system that simultaneously detects and locates events from a global network of stations. We demonstrate recent progress in scaling up SIG-VISA to efficiently process the data stream of global signals recorded by the International Monitoring System (IMS), including comparisons against existing processing methods that show increased sensitivity from our signal-based model and in particular the ability to locate events (including aftershock sequences that can tax analyst processing) precisely from waveform correlation effects. We also provide a Bayesian analysis of an alleged low-magnitude event near the DPRK test site in May 2010 [1] [2], investigating whether such an event could plausibly be detected through automated processing in a signal-based monitoring system. [1] Zhang, Miao and Wen, Lianxing. "Seismological Evidence for a Low-Yield Nuclear Test on 12 May 2010 in North Korea". Seismological Research Letters, January/February 2015. [2] Richards, Paul. "A Seismic Event in North Korea on 12 May 2010". CTBTO SnT 2015 oral presentation, video at https://video-archive.ctbto.org/index.php/kmc/preview/partner_id/103/uiconf_id/4421629/entry_id/0_ymmtpps0/delivery/http
Sensor and Video Monitoring of Water Quality at Bristol Floating Harbour
NASA Astrophysics Data System (ADS)
Chen, Yiheng; Han, Dawei
2017-04-01
Water system is an essential component in a smart city for its sustainability and resilience. The harbourside is a focal area of Bristol with new buildings and features redeveloped in the last ten years, attracting numerous visitors by the diversity of attractions and beautiful views. There is a strong relationship between the satisfactory of the visitors and local people with the water quality in the Harbour. The freshness and beauty of the water body would please people as well as benefit the aquatic ecosystems. As we are entering a data-rich era, this pilot project aims to explore the concept of using video cameras and smart sensors to collect and monitor water quality condition at the Bristol harbourside. The video cameras and smart sensors are connected to the Bristol Is Open network, an open programmable city platform. This will be the first attempt to collect water quality data in real time in the Bristol urban area with the wireless network. The videos and images of the water body collected by the cameras will be correlated with the in-situ water quality parameters for research purposes. The successful implementation of the sensors can attract more academic researchers and industrial partners to expand the sensor network to multiple locations around the city covering the other parts of the Harbour and River Avon, leading to a new generation of urban system infrastructure model.
Xu, Xiu; Zhang, Honglei; Li, Yiming; Li, Bin
2015-07-01
Developed the information centralization and management integration system for monitors of different brands and models with wireless sensor network technologies such as wireless location and wireless communication, based on the existing wireless network. With adaptive implementation and low cost, the system which possesses the advantages of real-time, efficiency and elaboration is able to collect status and data of the monitors, locate the monitors, and provide services with web server, video server and locating server via local network. Using an intranet computer, the clinical and device management staffs can access the status and parameters of monitors. Applications of this system provide convenience and save human resource for clinical departments, as well as promote the efficiency, accuracy and elaboration for the device management. The successful achievement of this system provides solution for integrated and elaborated management of the mobile devices including ventilator and infusion pump.
NASA Astrophysics Data System (ADS)
Zoletnik, S.; Biedermann, C.; Cseh, G.; Kocsis, G.; König, R.; Szabolics, T.; Szepesi, T.; Wendelstein 7-X Team
2018-01-01
A special video camera has been developed for the 10-camera overview video system of the Wendelstein 7-X (W7-X) stellarator considering multiple application needs and limitations resulting from this complex long-pulse superconducting stellarator experiment. The event detection intelligent camera (EDICAM) uses a special 1.3 Mpixel CMOS sensor with non-destructive read capability which enables fast monitoring of smaller Regions of Interest (ROIs) even during long exposures. The camera can perform simple data evaluation algorithms (minimum/maximum, mean comparison to levels) on the ROI data which can dynamically change the readout process and generate output signals. Multiple EDICAM cameras were operated in the first campaign of W7-X and capabilities were explored in the real environment. Data prove that the camera can be used for taking long exposure (10-100 ms) overview images of the plasma while sub-ms monitoring and even multi-camera correlated edge plasma turbulence measurements of smaller areas can be done in parallel. These latter revealed that filamentary turbulence structures extend between neighboring modules of the stellarator. Considerations emerging for future upgrades of this system and similar setups on future long-pulse fusion experiments such as ITER are discussed.
A real-time remote video streaming platform for ultrasound imaging.
Ahmadi, Mehdi; Gross, Warren J; Kadoury, Samuel
2016-08-01
Ultrasound is a viable imaging technology in remote and resources-limited areas. Ultrasonography is a user-dependent skill which depends on a high degree of training and hands-on experience. However, there is a limited number of skillful sonographers located in remote areas. In this work, we aim to develop a real-time video streaming platform which allows specialist physicians to remotely monitor ultrasound exams. To this end, an ultrasound stream is captured and transmitted through a wireless network into remote computers, smart-phones and tablets. In addition, the system is equipped with a camera to track the position of the ultrasound probe. The main advantage of our work is using an open source platform for video streaming which gives us more control over streaming parameters than the available commercial products. The transmission delays of the system are evaluated for several ultrasound video resolutions and the results show that ultrasound videos close to the high-definition (HD) resolution can be received and displayed on an Android tablet with the delay of 0.5 seconds which is acceptable for accurate real-time diagnosis.
A digital underwater video camera system for aquatic research in regulated rivers
Martin, Benjamin M.; Irwin, Elise R.
2010-01-01
We designed a digital underwater video camera system to monitor nesting centrarchid behavior in the Tallapoosa River, Alabama, 20 km below a peaking hydropower dam with a highly variable flow regime. Major components of the system included a digital video recorder, multiple underwater cameras, and specially fabricated substrate stakes. The innovative design of the substrate stakes allowed us to effectively observe nesting redbreast sunfish Lepomis auritus in a highly regulated river. Substrate stakes, which were constructed for the specific substratum complex (i.e., sand, gravel, and cobble) identified at our study site, were able to withstand a discharge level of approximately 300 m3/s and allowed us to simultaneously record 10 active nests before and during water releases from the dam. We believe our technique will be valuable for other researchers that work in regulated rivers to quantify behavior of aquatic fauna in response to a discharge disturbance.
A Miniaturized Video System for Monitoring Drosophila Behavior
NASA Technical Reports Server (NTRS)
Bhattacharya, Sharmila; Inan, Omer; Kovacs, Gregory; Etemadi, Mozziyar; Sanchez, Max; Marcu, Oana
2011-01-01
Long-term spaceflight may induce a variety of harmful effects in astronauts, resulting in altered motor and cognitive behavior. The stresses experienced by humans in space - most significantly weightlessness (microgravity) and cosmic radiation - are difficult to accurately simulate on Earth. In fact, prolonged and concomitant exposure to microgravity and cosmic radiation can only be studied in space. Behavioral studies in space have focused on model organisms, including Drosophila melanogaster. Drosophila is often used due to its short life span and generational cycle, small size, and ease of maintenance. Additionally, the well-characterized genetics of Drosophila behavior on Earth can be applied to the analysis of results from spaceflights, provided that the behavior in space is accurately recorded. In 2001, the BioExplorer project introduced a low-cost option for researchers: the small satellite. While this approach enabled multiple inexpensive launches of biological experiments, it also imposed stringent restrictions on the monitoring systems in terms of size, mass, data bandwidth, and power consumption. Suggested parameters for size are on the order of 100 mm3 and 1 kg mass for the entire payload. For Drosophila behavioral studies, these engineering requirements are not met by commercially available systems. One system that does meet many requirements for behavioral studies in space is the actimeter. Actimeters use infrared light gates to track the number of times a fly crosses a boundary within a small container (3x3x40 mm). Unfortunately, the apparatus needed to monitor several flies at once would be larger than the capacity of the small satellite. A system is presented, which expands on the actimeter approach to achieve a highly compact, low-power, ultra-low bandwidth solution for simultaneous monitoring of the behavior of multiple flies in space. This also provides a simple, inexpensive alternative to the current systems for monitoring Drosophila populations in terrestrial experiments, and could be especially useful in field experiments in remote locations. Two practical limitations of the system should be noted: first, only walking flies can be observed - not flying - and second, although it enables population studies, tracking individual flies within the population is not currently possible. The system used video recording and an analog circuit to extract the average light changes as a function of time. Flies were held in a 5-cm diameter Petri dish and illuminated from below by a uniform light source. A miniature, monochrome CMOS (complementary metal-oxide semiconductor) video camera imaged the flies. This camera had automatic gain control, and this did not affect system performance. The camera was positioned 5-7 cm above the Petri dish such that the imaging area was 2.25 sq cm. With this basic setup, still images and continuous video of 15 flies at one time were obtained. To reduce the required data bandwidth by several orders of magnitude, a band-pass filter (0.3-10 Hz) circuit compressed the video signal and extracted changes in image luminance over time. The raw activity signal output of this circuit was recorded on a computer and digitally processed to extract the fly movement "events" from the waveform. These events corresponded to flies entering and leaving the image and were used for extracting activity parameters such as inter-event duration. The efficacy of the system in quantifying locomotor activity was evaluated by varying environmental temperature, then measuring the activity level of the flies.
Cooperative multisensor system for real-time face detection and tracking in uncontrolled conditions
NASA Astrophysics Data System (ADS)
Marchesotti, Luca; Piva, Stefano; Turolla, Andrea; Minetti, Deborah; Regazzoni, Carlo S.
2005-03-01
The presented work describes an innovative architecture for multi-sensor distributed video surveillance applications. The aim of the system is to track moving objects in outdoor environments with a cooperative strategy exploiting two video cameras. The system also exhibits the capacity of focusing its attention on the faces of detected pedestrians collecting snapshot frames of face images, by segmenting and tracking them over time at different resolution. The system is designed to employ two video cameras in a cooperative client/server structure: the first camera monitors the entire area of interest and detects the moving objects using change detection techniques. The detected objects are tracked over time and their position is indicated on a map representing the monitored area. The objects" coordinates are sent to the server sensor in order to point its zooming optics towards the moving object. The second camera tracks the objects at high resolution. As well as the client camera, this sensor is calibrated and the position of the object detected on the image plane reference system is translated in its coordinates referred to the same area map. In the map common reference system, data fusion techniques are applied to achieve a more precise and robust estimation of the objects" track and to perform face detection and tracking. The work novelties and strength reside in the cooperative multi-sensor approach, in the high resolution long distance tracking and in the automatic collection of biometric data such as a person face clip for recognition purposes.
Kasaya, Takafumi; Mitsuzawa, Kyohiko; Goto, Tada-Nori; Iwase, Ryoichi; Sayanagi, Keizo; Araki, Eiichiro; Asakawa, Kenichi; Mikada, Hitoshi; Watanabe, Tomoki; Takahashi, Ichiro; Nagao, Toshiyasu
2009-01-01
Sagami Bay is an active tectonic area in Japan. In 1993, a real-time deep sea floor observatory was deployed at 1,175 m depth about 7 km off Hatsushima Island, Sagami Bay to monitor seismic activities and other geophysical phenomena. Video cameras monitored biological activities associated with tectonic activities. The observation system was renovated completely in 2000. An ocean bottom electromagnetic meter (OBEM), an ocean bottom differential pressure gauge (DPG) system, and an ocean bottom gravity meter (OBG) were installed January 2005; operations began in February of that year. An earthquake (M5.4) in April 2006, generated a submarine landslide that reached the Hatsushima Observatory, moving some sensors. The video camera took movies of mudflows; OBEM and other sensors detected distinctive changes occurring with the mudflow. Although the DPG and OBG were recovered in January 2008, the OBEM continues to obtain data.
High-resolution behavioral mapping of electric fishes in Amazonian habitats.
Madhav, Manu S; Jayakumar, Ravikrishnan P; Demir, Alican; Stamper, Sarah A; Fortune, Eric S; Cowan, Noah J
2018-04-11
The study of animal behavior has been revolutionized by sophisticated methodologies that identify and track individuals in video recordings. Video recording of behavior, however, is challenging for many species and habitats including fishes that live in turbid water. Here we present a methodology for identifying and localizing weakly electric fishes on the centimeter scale with subsecond temporal resolution based solely on the electric signals generated by each individual. These signals are recorded with a grid of electrodes and analyzed using a two-part algorithm that identifies the signals from each individual fish and then estimates the position and orientation of each fish using Bayesian inference. Interestingly, because this system involves eavesdropping on electrocommunication signals, it permits monitoring of complex social and physical interactions in the wild. This approach has potential for large-scale non-invasive monitoring of aquatic habitats in the Amazon basin and other tropical freshwater systems.
Vision-aided Monitoring and Control of Thermal Spray, Spray Forming, and Welding Processes
NASA Technical Reports Server (NTRS)
Agapakis, John E.; Bolstad, Jon
1993-01-01
Vision is one of the most powerful forms of non-contact sensing for monitoring and control of manufacturing processes. However, processes involving an arc plasma or flame such as welding or thermal spraying pose particularly challenging problems to conventional vision sensing and processing techniques. The arc or plasma is not typically limited to a single spectral region and thus cannot be easily filtered out optically. This paper presents an innovative vision sensing system that uses intense stroboscopic illumination to overpower the arc light and produce a video image that is free of arc light or glare and dedicated image processing and analysis schemes that can enhance the video images or extract features of interest and produce quantitative process measures which can be used for process monitoring and control. Results of two SBIR programs sponsored by NASA and DOE and focusing on the application of this innovative vision sensing and processing technology to thermal spraying and welding process monitoring and control are discussed.
Is partially automated driving a bad idea? Observations from an on-road study.
Banks, Victoria A; Eriksson, Alexander; O'Donoghue, Jim; Stanton, Neville A
2018-04-01
The automation of longitudinal and lateral control has enabled drivers to become "hands and feet free" but they are required to remain in an active monitoring state with a requirement to resume manual control if required. This represents the single largest allocation of system function problem with vehicle automation as the literature suggests that humans are notoriously inefficient at completing prolonged monitoring tasks. To further explore whether partially automated driving solutions can appropriately support the driver in completing their new monitoring role, video observations were collected as part of an on-road study using a Tesla Model S being operated in Autopilot mode. A thematic analysis of video data suggests that drivers are not being properly supported in adhering to their new monitoring responsibilities and instead demonstrate behaviour indicative of complacency and over-trust. These attributes may encourage drivers to take more risks whilst out on the road. Copyright © 2017 Elsevier Ltd. All rights reserved.
80. FOUR VIDEO MONITORS LOCATED ALONG THE SOUTH WALL OF ...
80. FOUR VIDEO MONITORS LOCATED ALONG THE SOUTH WALL OF SLC-3E CONTROL ROOM. (TWO VIDEOTEK MONITORS ON LEFT (EAST) ARE COLOR; OTHERS ARE BLACK AND WHITE.) DIGITAL COUNTDOWN, HOLD, AND GREENWHICH MEAN TIME CLOCKS LOCATED ABOVE MONITORS 4 AND 5. - Vandenberg Air Force Base, Space Launch Complex 3, Launch Operations Building, Napa & Alden Roads, Lompoc, Santa Barbara County, CA
NASA Astrophysics Data System (ADS)
Nightingale, James; Wang, Qi; Grecos, Christos
2015-02-01
In recent years video traffic has become the dominant application on the Internet with global year-on-year increases in video-oriented consumer services. Driven by improved bandwidth in both mobile and fixed networks, steadily reducing hardware costs and the development of new technologies, many existing and new classes of commercial and industrial video applications are now being upgraded or emerging. Some of the use cases for these applications include areas such as public and private security monitoring for loss prevention or intruder detection, industrial process monitoring and critical infrastructure monitoring. The use of video is becoming commonplace in defence, security, commercial, industrial, educational and health contexts. Towards optimal performances, the design or optimisation in each of these applications should be context aware and task oriented with the characteristics of the video stream (frame rate, spatial resolution, bandwidth etc.) chosen to match the use case requirements. For example, in the security domain, a task-oriented consideration may be that higher resolution video would be required to identify an intruder than to simply detect his presence. Whilst in the same case, contextual factors such as the requirement to transmit over a resource-limited wireless link, may impose constraints on the selection of optimum task-oriented parameters. This paper presents a novel, conceptually simple and easily implemented method of assessing video quality relative to its suitability for a particular task and dynamically adapting videos streams during transmission to ensure that the task can be successfully completed. Firstly we defined two principle classes of tasks: recognition tasks and event detection tasks. These task classes are further subdivided into a set of task-related profiles, each of which is associated with a set of taskoriented attributes (minimum spatial resolution, minimum frame rate etc.). For example, in the detection class, profiles for intruder detection will require different temporal characteristics (frame rate) from those used for detection of high motion objects such as vehicles or aircrafts. We also define a set of contextual attributes that are associated with each instance of a running application that include resource constraints imposed by the transmission system employed and the hardware platforms used as source and destination of the video stream. Empirical results are presented and analysed to demonstrate the advantages of the proposed schemes.
NASA Technical Reports Server (NTRS)
Diner, Daniel B. (Inventor); Venema, Steven C. (Inventor)
1991-01-01
A system for real-time video image display for robotics or remote-vehicle teleoperation is described that has at least one robot arm or remotely operated vehicle controlled by an operator through hand-controllers, and one or more television cameras and optional lighting element. The system has at least one television monitor for display of a television image from a selected camera and the ability to select one of the cameras for image display. Graphics are generated with icons of cameras and lighting elements for display surrounding the television image to provide the operator information on: the location and orientation of each camera and lighting element; the region of illumination of each lighting element; the viewed region and range of focus of each camera; which camera is currently selected for image display for each monitor; and when the controller coordinate for said robot arms or remotely operated vehicles have been transformed to correspond to coordinates of a selected or nonselected camera.
Composite video and graphics display for camera viewing systems in robotics and teleoperation
NASA Technical Reports Server (NTRS)
Diner, Daniel B. (Inventor); Venema, Steven C. (Inventor)
1993-01-01
A system for real-time video image display for robotics or remote-vehicle teleoperation is described that has at least one robot arm or remotely operated vehicle controlled by an operator through hand-controllers, and one or more television cameras and optional lighting element. The system has at least one television monitor for display of a television image from a selected camera and the ability to select one of the cameras for image display. Graphics are generated with icons of cameras and lighting elements for display surrounding the television image to provide the operator information on: the location and orientation of each camera and lighting element; the region of illumination of each lighting element; the viewed region and range of focus of each camera; which camera is currently selected for image display for each monitor; and when the controller coordinate for said robot arms or remotely operated vehicles have been transformed to correspond to coordinates of a selected or nonselected camera.
Oversampling in virtual visual sensors as a means to recover higher modes of vibration
NASA Astrophysics Data System (ADS)
Shariati, Ali; Schumacher, Thomas
2015-03-01
Vibration-based structural health monitoring (SHM) techniques require modal information from the monitored structure in order to estimate the location and severity of damage. Natural frequencies also provide useful information to calibrate finite element models. There are several types of physical sensors that can measure the response over a range of frequencies. For most of those sensors however, accessibility, limitation of measurement points, wiring, and high system cost represent major challenges. Recent optical sensing approaches offer advantages such as easy access to visible areas, distributed sensing capabilities, and comparatively inexpensive data recording while having no wiring issues. In this research we propose a novel methodology to measure natural frequencies of structures using digital video cameras based on virtual visual sensors (VVS). In our initial study where we worked with commercially available inexpensive digital video cameras we found that for multiple degrees of freedom systems it is difficult to detect all of the natural frequencies simultaneously due to low quantization resolution. In this study we show how oversampling enabled by the use of high-end high-frame-rate video cameras enable recovering all of the three natural frequencies from a three story lab-scale structure.
Flexible video conference system based on ASICs and DSPs
NASA Astrophysics Data System (ADS)
Hu, Qiang; Yu, Songyu
1995-02-01
In this paper, a video conference system we developed recently is presented. In this system the video codec is compatible with CCITT H.261, the audio codec is compatible with G.711 and G.722, the channel interface circuit is designed according to CCITT H.221. In this paper emphasis is given to the video codec, which is both flexible and robust. The video codec is based on LSI LOGIC Corporation's L64700 series video compression chipset. The main function blocks of H.261, such as DCT, motion estimation, VLC, VLD, are performed by this chipset, but the chipset is a nude chipset, no peripheral function, such as memory interface, is integrated into it, this results in great difficulty to implement the system. To implement the frame buffer controller, a DSP-TMS 320c25 and a group of GALs is used, SRAM is used as a current and previous frame buffer, the DSP is not only the controller of the frame buffer, it's also the controller of the whole video codec. Because of the use of the DSP, the architecture of the video codec is very flexible, many system parameters can be reconfigured for different applications. The architecture of the whole video codec is a streamline structure. In H.261, BCH(511,493) coding is recommended to work against random errors in transmission, but if burst error occurs, it causes serious result. To solve this problem, an interleaving method is used, that means the BCH code is interleaved before it's transmitted, in the receiver it is interleaved again and the bit stream is in the original order, but the error bits are distributed into several BCH words, and the BCH decoder is able to correct it. Considering that extreme conditions may occur, a function block is implemented which is somewhat like a watchdog, it assures that the receiver can recover from errors no matter what serious error occurs in transmission. In developing the video conference system, a new synchronization problem must be solved, the monitor on the receiver can't be easily synchronized with the camera on another side, a new method is described in detail which can solve this problem successfully.
Aerospace video imaging systems for rangeland management
NASA Technical Reports Server (NTRS)
Everitt, J. H.; Escobar, D. E.; Richardson, A. J.; Lulla, K.
1990-01-01
This paper presents an overview on the application of airborne video imagery (VI) for assessment of rangeland resources. Multispectral black-and-white video with visible/NIR sensitivity; color-IR, normal color, and black-and-white MIR; and thermal IR video have been used to detect or distinguish among many rangeland and other natural resource variables such as heavy grazing, drought-stressed grass, phytomass levels, burned areas, soil salinity, plant communities and species, and gopher and ant mounds. The digitization and computer processing of VI have also been demonstrated. VI does not have the detailed resolution of film, but these results have shown that it has considerable potential as an applied remote sensing tool for rangeland management. In the future, spaceborne VI may provide additional data for monitoring and management of rangelands.
Ethernet direct display: a new dimension for in-vehicle video connectivity solutions
NASA Astrophysics Data System (ADS)
Rowley, Vincent
2009-05-01
To improve the local situational awareness (LSA) of personnel in light or heavily armored vehicles, most military organizations recognize the need to equip their fleets with high-resolution digital video systems. Several related upgrade programs are already in progress and, almost invariably, COTS IP/Ethernet is specified as the underlying transport mechanism. The high bandwidths, long reach, networking flexibility, scalability, and affordability of IP/Ethernet make it an attractive choice. There are significant technical challenges, however, in achieving high-performance, real-time video connectivity over the IP/Ethernet platform. As an early pioneer in performance-oriented video systems based on IP/Ethernet, Pleora Technologies has developed core expertise in meeting these challenges and applied a singular focus to innovating within the required framework. The company's field-proven iPORTTM Video Connectivity Solution is deployed successfully in thousands of real-world applications for medical, military, and manufacturing operations. Pleora's latest innovation is eDisplayTM, a smallfootprint, low-power, highly efficient IP engine that acquires video from an Ethernet connection and sends it directly to a standard HDMI/DVI monitor for real-time viewing. More costly PCs are not required. This paper describes Pleora's eDisplay IP Engine in more detail. It demonstrates how - in concert with other elements of the end-to-end iPORT Video Connectivity Solution - the engine can be used to build standards-based, in-vehicle video systems that increase the safety and effectiveness of military personnel while fully leveraging the advantages of the lowcost COTS IP/Ethernet platform.
Peterson, Courtney M; Apolzan, John W; Wright, Courtney; Martin, Corby K
2016-11-01
We conducted two studies to test the validity, reliability, feasibility and acceptability of using video chat technology to quantify dietary and pill-taking (i.e. supplement and medication) adherence. In study 1, we investigated whether video chat technology can accurately quantify adherence to dietary and pill-taking interventions. Mock study participants ate food items and swallowed pills, while performing randomised scripted 'cheating' behaviours to mimic non-adherence. Monitoring was conducted in a cross-over design, with two monitors watching in-person and two watching remotely by Skype on a smartphone. For study 2, a twenty-two-item online survey was sent to a listserv with more than 20 000 unique email addresses of past and present study participants to assess the feasibility and acceptability of the technology. For the dietary adherence tests, monitors detected 86 % of non-adherent events (sensitivity) in-person v. 78 % of events via video chat monitoring (P=0·12), with comparable inter-rater agreement (0·88 v. 0·85; P=0·62). However, for pill-taking, non-adherence trended towards being more easily detected in-person than by video chat (77 v. 60 %; P=0·08), with non-significantly higher inter-rater agreement (0·85 v. 0·69; P=0·21). Survey results from study 2 (n 1076 respondents; ≥5 % response rate) indicated that 86·4 % of study participants had video chatting hardware, 73·3 % were comfortable using the technology and 79·8 % were willing to use it for clinical research. Given the capability of video chat technology to reduce participant burden and outperform other adherence monitoring methods such as dietary self-report and pill counts, video chatting is a novel and promising platform to quantify dietary and pill-taking adherence.
Hill, Aron T; Briggs, Belinda A; Seneviratne, Udaya
2014-06-01
To investigate the usefulness of adjunctive electromyographic (EMG) polygraphy in the diagnosis of clinical events captured during long-term video-EEG monitoring. A total of 40 patients (21 women, 19 men) aged between 19 and 72 years (mean 43) investigated using video-EEG monitoring were studied. Electromyographic activity was simultaneously recorded with EEG in four patients selected on clinical grounds. In these patients, surface EMG electrodes were placed over muscles suspected to be activated during a typical clinical event. Of the 40 patients investigated, 24 (60%) were given a diagnosis, whereas 16 (40%) remained undiagnosed. All four patients receiving adjunctive EMG polygraphy obtained a diagnosis, with three of these diagnoses being exclusively reliant on the EMG recordings. Specifically, one patient was diagnosed with propriospinal myoclonus, another patient was diagnosed with facio-mandibular myoclonus, and a third patient was found to have bruxism and periodic leg movements of sleep. The information obtained from surface EMG recordings aided the diagnosis of clinical events captured during video-EEG monitoring in 7.5% of the total cohort. This study suggests that EEG-EMG polygraphy may be used as a technique of improving the diagnostic yield of video-EEG monitoring in selected cases.
NASA Astrophysics Data System (ADS)
Walpitagama, Milanga; Kaslin, Jan; Nugegoda, Dayanthi; Wlodkowic, Donald
2016-12-01
The fish embryo toxicity (FET) biotest performed on embryos of zebrafish (Danio rerio) has gained significant popularity as a rapid and inexpensive alternative approach in chemical hazard and risk assessment. The FET was designed to evaluate acute toxicity on embryonic stages of fish exposed to the test chemical. The current standard, similar to most traditional methods for evaluating aquatic toxicity provides, however, little understanding of effects of environmentally relevant concentrations of chemical stressors. We postulate that significant environmental effects such as altered motor functions, physiological alterations reflected in heart rate, effects on development and reproduction can occur at sub-lethal concentrations well below than LC10. Behavioral studies can, therefore, provide a valuable integrative link between physiological and ecological effects. Despite the advantages of behavioral analysis development of behavioral toxicity, biotests is greatly hampered by the lack of dedicated laboratory automation, in particular, user-friendly and automated video microscopy systems. In this work we present a proof-of-concept development of an optical system capable of tracking embryonic vertebrates behavioral responses using automated and vastly miniaturized time-resolved video-microscopy. We have employed miniaturized CMOS cameras to perform high definition video recording and analysis of earliest vertebrate behavioral responses. The main objective was to develop a biocompatible embryo positioning structures that were suitable for high-throughput imaging as well as video capture and video analysis algorithms. This system should support the development of sub-lethal and behavioral markers for accelerated environmental monitoring.
Water quality real-time monitoring system via biological detection based on video analysis
NASA Astrophysics Data System (ADS)
Xin, Chen; Fei, Yuan
2017-11-01
With the development of society, water pollution has become the most serious problem in China. Therefore, real-time water quality monitoring is an important part of human activities and water pollution prevention. In this paper, the behavior of zebrafish was monitored by computer vision. Firstly, the moving target was extracted by the method of saliency detection, and tracked by fitting the ellipse model. Then the motion parameters were extracted by optical flow method, and the data were monitored in real time by means of Hinkley warning and threshold warning. We achieved classification warning through a number of dimensions by comprehensive toxicity index. The experimental results show that the system can achieve more accurate real-time monitoring.
Evaluation of dual-loop data accuracy using video ground truth data
DOT National Transportation Integrated Search
2002-01-01
Washington State Department of Transportation (WSDOT) initiated a : research project entitled Monitoring Freight on Puget Sound Freeways in September : 1999. Dual-loop data from the Seattle area freeway system were selected as the main data : s...
ERIC Educational Resources Information Center
Bishop, Crystal D.; Snyder, Patricia A.; Crow, Robert E.
2015-01-01
We used a multi-component single-subject experimental design across three preschool teachers to examine the effects of video self-monitoring with graduated training and feedback on the accuracy with which teachers monitored their implementation of embedded instructional learning trials. We also examined changes in teachers' implementation of…
Leving, Marika T; Horemans, Henricus L D; Vegter, Riemer J K; de Groot, Sonja; Bussmann, Johannes B J; van der Woude, Lucas H V
2018-01-01
Hypoactive lifestyle contributes to the development of secondary complications and lower quality of life in wheelchair users. There is a need for objective and user-friendly physical activity monitors for wheelchair-dependent individuals in order to increase physical activity through self-monitoring, goal setting, and feedback provision. To determine the validity of Activ8 Activity Monitors to 1) distinguish two classes of activities: independent wheelchair propulsion from other non-propulsive wheelchair-related activities 2) distinguish five wheelchair-related classes of activities differing by the movement intensity level: sitting in a wheelchair (hands may be moving but wheelchair remains stationary), maneuvering, and normal, high speed or assisted wheelchair propulsion. Sixteen able-bodied individuals performed sixteen various standardized 60s-activities of daily living. Each participant was equipped with a set of two Activ8 Professional Activity Monitors, one at the right forearm and one at the right wheel. Task classification by the Active8 Monitors was validated using video recordings. For the overall agreement, sensitivity and positive predictive value, outcomes above 90% are considered excellent, between 70 and 90% good, and below 70% unsatisfactory. Division in two classes resulted in overall agreement of 82.1%, sensitivity of 77.7% and positive predictive value of 78.2%. 84.5% of total duration of all tasks was classified identically by Activ8 and based on the video material. Division in five classes resulted in overall agreement of 56.6%, sensitivity of 52.8% and positive predictive value of 51.9%. 59.8% of total duration of all tasks was classified identically by Activ8 and based on the video material. Activ8 system proved to be suitable for distinguishing between active wheelchair propulsion and other non-propulsive wheelchair-related activities. The ability of the current system and algorithms to distinguish five various wheelchair-related activities is unsatisfactory.
Horemans, Henricus L. D.; Vegter, Riemer J. K.; de Groot, Sonja; Bussmann, Johannes B. J.; van der Woude, Lucas H. V.
2018-01-01
Background Hypoactive lifestyle contributes to the development of secondary complications and lower quality of life in wheelchair users. There is a need for objective and user-friendly physical activity monitors for wheelchair-dependent individuals in order to increase physical activity through self-monitoring, goal setting, and feedback provision. Objective To determine the validity of Activ8 Activity Monitors to 1) distinguish two classes of activities: independent wheelchair propulsion from other non-propulsive wheelchair-related activities 2) distinguish five wheelchair-related classes of activities differing by the movement intensity level: sitting in a wheelchair (hands may be moving but wheelchair remains stationary), maneuvering, and normal, high speed or assisted wheelchair propulsion. Methods Sixteen able-bodied individuals performed sixteen various standardized 60s-activities of daily living. Each participant was equipped with a set of two Activ8 Professional Activity Monitors, one at the right forearm and one at the right wheel. Task classification by the Active8 Monitors was validated using video recordings. For the overall agreement, sensitivity and positive predictive value, outcomes above 90% are considered excellent, between 70 and 90% good, and below 70% unsatisfactory. Results Division in two classes resulted in overall agreement of 82.1%, sensitivity of 77.7% and positive predictive value of 78.2%. 84.5% of total duration of all tasks was classified identically by Activ8 and based on the video material. Division in five classes resulted in overall agreement of 56.6%, sensitivity of 52.8% and positive predictive value of 51.9%. 59.8% of total duration of all tasks was classified identically by Activ8 and based on the video material. Conclusions Activ8 system proved to be suitable for distinguishing between active wheelchair propulsion and other non-propulsive wheelchair-related activities. The ability of the current system and algorithms to distinguish five various wheelchair-related activities is unsatisfactory. PMID:29641582
Using new video mapping technology in landscape ecology
Stohlgren, T.J.; Kaye, Margot W.; McCrumb, A.D.; Otsuki, Yuka; Pfister, B.; Villa, C.A.
2000-01-01
Biological and ecological monitoring continues to play an important role in the conservation of species, natural communities, and landscapes (Spellerberg 1991). Although resource-monitoring programs have advanced knowledge about natural ecosystems, weaknesses persist in our ability to rapidly transfer landscape-scale information to the public. Ecologists continue to search for new technologies to address this problem and to communicate natural resource information quickly and effectively. New video mapping technology may provide much-needed help.Ecologists realize that only a small portion of large nature reserves can be monitored because of cost and logistical constraints. However, plant and animal populations are usually patchily distributed in subpopulations scattered throughout heterogeneous landscapes, and they are often associated with rare habitats. These subpopulations and rare habitats may respond differently to climate change, land use, and management practices such as grazing, fire suppression, prescribed burning, or invasion of exotic species (Stohlgren et al. 1997b). In many national parks, monuments, and wildlife reserves, a few long-term monitoring plots are used to infer the status and trends of natural resources in much larger areas. To make defensible inferences about populations, habitats, and landscapes, it is necessary to extrapolate from a few monitoring plots (local scale) to the larger, unsampled landscape with known levels of accuracy and precision.Recent technological developments have given population biologists and landscape ecologists a unique tool for bridging the data gap between small, intensively sampled monitoring plots and the greater landscape and for transferring this information quickly to resource managers and the public. In this article, we briefly describe this tool, a hand-held video mapping system linked to a geographic information system (GIS). We provide examples of its use in quantifying patterns of native and exotic plant species and cryptobiotic crusts in the new Grand Staircase–Escalante National Monument, Utah, and in surveying aspen clones and regeneration in Rocky Mountain National Park, Colorado.
NASA Technical Reports Server (NTRS)
1995-01-01
Intelligent Vision Systems, Inc. (InVision) needed image acquisition technology that was reliable in bad weather for its TDS-200 Traffic Detection System. InVision researchers used information from NASA Tech Briefs and assistance from Johnson Space Center to finish the system. The NASA technology used was developed for Earth-observing imaging satellites: charge coupled devices, in which silicon chips convert light directly into electronic or digital images. The TDS-200 consists of sensors mounted above traffic on poles or span wires, enabling two sensors to view an intersection; a "swing and sway" feature to compensate for movement of the sensors; a combination of electronic shutter and gain control; and sensor output to an image digital signal processor, still frame video and optionally live video.
On the development of new SPMN diurnal video systems for daylight fireball monitoring
NASA Astrophysics Data System (ADS)
Madiedo, J. M.; Trigo-Rodríguez, J. M.; Castro-Tirado, A. J.
2008-09-01
Daylight fireball video monitoring High-sensitivity video devices are commonly used for the study of the activity of meteor streams during the night. These provide useful data for the determination, for instance, of radiant, orbital and photometric parameters ([1] to [7]). With this aim, during 2006 three automated video stations supported by Universidad de Huelva were set up in Andalusia within the framework of the SPanish Meteor Network (SPMN). These are endowed with 8-9 high sensitivity wide-field video cameras that achieve a meteor limiting magnitude of about +3. These stations have increased the coverage performed by the low-scan allsky CCD systems operated by the SPMN and, besides, achieve a time accuracy of about 0.01s for determining the appearance of meteor and fireball events. Despite of these nocturnal monitoring efforts, we realised the need of setting up stations for daylight fireball detection. Such effort was also motivated by the appearance of the two recent meteorite-dropping events of Villalbeto de la Peña [8,9] and Puerto Lápice [10]. Although the Villalbeto de la Peña event was casually videotaped, and photographed, no direct pictures or videos were obtained for the Puerto Lápice event. Consequently, in order to perform a continuous recording of daylight fireball events, we setup new automated systems based on CCD video cameras. However, the development of these video stations implies several issues with respect to nocturnal systems that must be properly solved in order to get an optimal operation. The first of these video stations, also supported by University of Huelva, has been setup in Sevilla (Andalusia) during May 2007. But, of course, fireball association is unequivocal only in those cases when two or more stations recorded the fireball, and when consequently the geocentric radiant is accurately determined. With this aim, a second diurnal video station is being setup in Andalusia in the facilities of Centro Internacional de Estudios y Convenciones Ecológicas y Medioambientales (CIECEM, University of Huelva), in the environment of Doñana Natural Park (Huelva province). In this way, both stations, which are separated by a distance of 75 km, will work as a double video station system in order to provide trajectory and orbit information of mayor bolides and, thus, increase the chance of meteorite recovery in the Iberian Peninsula. The new diurnal SPMN video stations are endowed with different models of Mintron cameras (Mintron Enterprise Co., LTD). These are high-sensitivity devices that employ a colour 1/2" Sony interline transfer CCD image sensor. Aspherical lenses are attached to the video cameras in order to maximize image quality. However, the use of fast lenses is not a priority here: while most of our nocturnal cameras use f0.8 or f1.0 lenses in order to detect meteors as faint as magnitude +3, diurnal systems employ in most cases f1.4 to f2.0 lenses. Their focal length ranges from 3.8 to 12 mm to cover different atmospheric volumes. The cameras are arranged in such a way that the whole sky is monitored from every observing station. Figure 1. A daylight event recorded from Sevilla on May 26, 2008 at 4h30m05.4 +-0.1s UT. The way our diurnal video cameras work is similar to the operation of our nocturnal systems [1]. Thus, diurnal stations are automatically switched on and off at sunrise and sunset, respectively. The images taken at 25 fps and with a resolution of 720x576 pixels are continuously sent to PC computers through a video capture device. The computers run a software (UFOCapture, by SonotaCo, Japan) that automatically registers meteor trails and stores the corresponding video frames on hard disk. Besides, before the signal from the cameras reaches the computers, a video time inserter that employs a GPS device (KIWI-OSD, by PFD Systems) inserts time information on every video frame. This allows us to measure time in a precise way (about 0.01 sec.) along the whole fireball path. EPSC Abstracts, Vol. 3, EPSC2008-A-00319, 2008 European Planetary Science Congress, Author(s) 2008 However, one of the issues with respect to nocturnal observing stations is the high number of false detections as a consequence of several factors: higher activity of birds and insects, reflection of sunlight on planes and helicopters, etc. Sometimes some of these false events follow a pattern which is very similar to fireball trails, which makes absolutely necessary the use of a second station in order to discriminate between them. Other key issue is related to the passage of the Sun before the field of view of some of the cameras. In fact, special care is necessary with this to avoid any damage to the CCD sensor. Besides, depending on atmospheric conditions (dust or moisture, for instance), the Sun may saturate most of the video frame. To solve this, our automated system determines which camera is pointing towards the Sun at a given moment and disconnects it. As the cameras are endowed with autoiris lenses, its disconnection means that the optics is fully closed and, so, the CCD sensor is protected. This, of course, means that when this happens the atmospheric volume covered by the corresponding camera is not monitored. It must be also taken into account that, in general, operation temperatures are higher for diurnal cameras. This results in higher thermal noise and, so, poses some difficulties to the detection software. To minimize this effect, it is necessary to employ CCD video cameras with proper signal to noise ratio. Refrigeration of the CCD sensor with, for instance, a Peltier system, can also be considered. The astrometric reduction procedure is also somewhat different for daytime events: it requires that reference objects are located within the field of view of every camera in order to calibrate the corresponding images. This is done by allowing every camera to capture distant buildings that, by means of said calibration, would allow us to obtain the equatorial coordinates of the fireball along its path by measuring its corresponding X and Y positions on every video frame. Such calibration can be performed from stars positions measured from nocturnal images taken with the same cameras. Once made, if the cameras are not moved it is possible to estimate the equatorial coordinates of any future fireball event. We don't use any software for automatic astrometry of the images. This crucial step is made via direct measurements of the pixel position as in all our previous work. Then, from these astrometric measurements, our software estimates the atmospheric trajectory and radiant for each fireball ([10] to [13]). During 2007 and 2008 the SPMN has also setup other diurnal stations based on 1/3' progressive-scan CMOS sensors attached to modified wide-field lenses covering a 120x80 degrees FOV. They are placed in Andalusia: El Arenosillo (Huelva), La Mayora (Málaga) and Murtas (Granada). They have also night sensitivity thanks to a infrared cut filter (ICR) which enables the camera to perform well in both high and low light condition in colour as well as provide IR sensitive Black/White video at night. Conclusions First detections of daylight fireballs by CCD video camera are being achieved in the SPMN framework. Future expansion and set up of new observing stations is currently being planned. The future establishment of additional diurnal SPMN stations will allow an increase in the number of daytime fireballs detected. This will also increase our chance of meteorite recovery.
Camera Control and Geo-Registration for Video Sensor Networks
NASA Astrophysics Data System (ADS)
Davis, James W.
With the use of large video networks, there is a need to coordinate and interpret the video imagery for decision support systems with the goal of reducing the cognitive and perceptual overload of human operators. We present computer vision strategies that enable efficient control and management of cameras to effectively monitor wide-coverage areas, and examine the framework within an actual multi-camera outdoor urban video surveillance network. First, we construct a robust and precise camera control model for commercial pan-tilt-zoom (PTZ) video cameras. In addition to providing a complete functional control mapping for PTZ repositioning, the model can be used to generate wide-view spherical panoramic viewspaces for the cameras. Using the individual camera control models, we next individually map the spherical panoramic viewspace of each camera to a large aerial orthophotograph of the scene. The result provides a unified geo-referenced map representation to permit automatic (and manual) video control and exploitation of cameras in a coordinated manner. The combined framework provides new capabilities for video sensor networks that are of significance and benefit to the broad surveillance/security community.
Improving human object recognition performance using video enhancement techniques
NASA Astrophysics Data System (ADS)
Whitman, Lucy S.; Lewis, Colin; Oakley, John P.
2004-12-01
Atmospheric scattering causes significant degradation in the quality of video images, particularly when imaging over long distances. The principle problem is the reduction in contrast due to scattered light. It is known that when the scattering particles are not too large compared with the imaging wavelength (i.e. Mie scattering) then high spatial resolution information may be contained within a low-contrast image. Unfortunately this information is not easily perceived by a human observer, particularly when using a standard video monitor. A secondary problem is the difficulty of achieving a sharp focus since automatic focus techniques tend to fail in such conditions. Recently several commercial colour video processing systems have become available. These systems use various techniques to improve image quality in low contrast conditions whilst retaining colour content. These systems produce improvements in subjective image quality in some situations, particularly in conditions of haze and light fog. There is also some evidence that video enhancement leads to improved ATR performance when used as a pre-processing stage. Psychological literature indicates that low contrast levels generally lead to a reduction in the performance of human observers in carrying out simple visual tasks. The aim of this paper is to present the results of an empirical study on object recognition in adverse viewing conditions. The chosen visual task was vehicle number plate recognition at long ranges (500 m and beyond). Two different commercial video enhancement systems are evaluated using the same protocol. The results show an increase in effective range with some differences between the different enhancement systems.
Beats: Video Monitors and Cameras.
ERIC Educational Resources Information Center
Worth, Frazier
1996-01-01
Presents a method to teach the concept of beats as a generalized phenomenon rather than teaching it only in the context of sound. Involves using a video camera to film a computer terminal, 16-mm projector, or TV monitor. (JRH)
DETAIL VIEW OF VIDEO MONITORS, FIRING ROOM NO. 2, FACING ...
DETAIL VIEW OF VIDEO MONITORS, FIRING ROOM NO. 2, FACING SOUTHWEST - Cape Canaveral Air Force Station, Launch Complex 39, Launch Control Center, LCC Road, East of Kennedy Parkway North, Cape Canaveral, Brevard County, FL
DETAIL VIEW OF VIDEO MONITORS, FIRING ROOM NO. 3, FACING ...
DETAIL VIEW OF VIDEO MONITORS, FIRING ROOM NO. 3, FACING SOUTH - Cape Canaveral Air Force Station, Launch Complex 39, Launch Control Center, LCC Road, East of Kennedy Parkway North, Cape Canaveral, Brevard County, FL
Gonzalez, Luis F.; Montes, Glen A.; Puig, Eduard; Johnson, Sandra; Mengersen, Kerrie; Gaston, Kevin J.
2016-01-01
Surveying threatened and invasive species to obtain accurate population estimates is an important but challenging task that requires a considerable investment in time and resources. Estimates using existing ground-based monitoring techniques, such as camera traps and surveys performed on foot, are known to be resource intensive, potentially inaccurate and imprecise, and difficult to validate. Recent developments in unmanned aerial vehicles (UAV), artificial intelligence and miniaturized thermal imaging systems represent a new opportunity for wildlife experts to inexpensively survey relatively large areas. The system presented in this paper includes thermal image acquisition as well as a video processing pipeline to perform object detection, classification and tracking of wildlife in forest or open areas. The system is tested on thermal video data from ground based and test flight footage, and is found to be able to detect all the target wildlife located in the surveyed area. The system is flexible in that the user can readily define the types of objects to classify and the object characteristics that should be considered during classification. PMID:26784196
Gonzalez, Luis F; Montes, Glen A; Puig, Eduard; Johnson, Sandra; Mengersen, Kerrie; Gaston, Kevin J
2016-01-14
Surveying threatened and invasive species to obtain accurate population estimates is an important but challenging task that requires a considerable investment in time and resources. Estimates using existing ground-based monitoring techniques, such as camera traps and surveys performed on foot, are known to be resource intensive, potentially inaccurate and imprecise, and difficult to validate. Recent developments in unmanned aerial vehicles (UAV), artificial intelligence and miniaturized thermal imaging systems represent a new opportunity for wildlife experts to inexpensively survey relatively large areas. The system presented in this paper includes thermal image acquisition as well as a video processing pipeline to perform object detection, classification and tracking of wildlife in forest or open areas. The system is tested on thermal video data from ground based and test flight footage, and is found to be able to detect all the target wildlife located in the surveyed area. The system is flexible in that the user can readily define the types of objects to classify and the object characteristics that should be considered during classification.
User interface using a 3D model for video surveillance
NASA Astrophysics Data System (ADS)
Hata, Toshihiko; Boh, Satoru; Tsukada, Akihiro; Ozaki, Minoru
1998-02-01
These days fewer people, who must carry out their tasks quickly and precisely, are required in industrial surveillance and monitoring applications such as plant control or building security. Utilizing multimedia technology is a good approach to meet this need, and we previously developed Media Controller, which is designed for the applications and provides realtime recording and retrieval of digital video data in a distributed environment. In this paper, we propose a user interface for such a distributed video surveillance system in which 3D models of buildings and facilities are connected to the surveillance video. A novel method of synchronizing camera field data with each frame of a video stream is considered. This method records and reads the camera field data similarity to the video data and transmits it synchronously with the video stream. This enables the user interface to have such useful functions as comprehending the camera field immediately and providing clues when visibility is poor, for not only live video but also playback video. We have also implemented and evaluated the display function which makes surveillance video and 3D model work together using Media Controller with Java and Virtual Reality Modeling Language employed for multi-purpose and intranet use of 3D model.
Design of multifunction anti-terrorism robotic system based on police dog
NASA Astrophysics Data System (ADS)
You, Bo; Liu, Suju; Xu, Jun; Li, Dongjie
2007-11-01
Aimed at some typical constraints of police dogs and robots used in the areas of reconnaissance and counterterrorism currently, the multifunction anti-terrorism robotic system based on police dog has been introduced. The system is made up of two parts: portable commanding device and police dog robotic system. The portable commanding device consists of power supply module, microprocessor module, LCD display module, wireless data receiving and dispatching module and commanding module, which implements the remote control to the police dogs and takes real time monitor to the video and images. The police dog robotic system consists of microprocessor module, micro video module, wireless data transmission module, power supply module and offence weapon module, which real time collects and transmits video and image data of the counter-terrorism sites, and gives military attack based on commands. The system combines police dogs' biological intelligence with micro robot. Not only does it avoid the complexity of general anti-terrorism robots' mechanical structure and the control algorithm, but it also widens the working scope of police dog, which meets the requirements of anti-terrorism in the new era.
15. NBS TOP SIDE CONTROL ROOM. THE SUIT SYSTEMS CONSOLE ...
15. NBS TOP SIDE CONTROL ROOM. THE SUIT SYSTEMS CONSOLE IS USED TO CONTROL AIR FLOW AND WATER FLOW TO THE UNDERWATER SPACE SUIT DURING THE TEST. THE SUIT SYSTEMS ENGINEER MONITORS AIR FLOW ON THE PANEL TO THE LEFT, AND SUIT DATA ON THE COMPUTER MONITOR JUST SLIGHTLY TO HIS LEFT. WATER FLOW IS MONITORED ON THE PANEL JUST SLIGHTLY TO HIS RIGHT AND TEST VIDEO TO HIS FAR RIGHT. THE DECK CHIEF MONITORS THE DIVER'S DIVE TIMES ON THE COMPUTER IN THE UPPER RIGHT. THE DECK CHIEF LOGS THEM IN AS THEY ENTER THE WATER, AND LOGS THEM OUT AS THEY EXIT THE WATER. THE COMPUTER CALCULATES TOTAL DIVE TIME. - Marshall Space Flight Center, Neutral Buoyancy Simulator Facility, Rideout Road, Huntsville, Madison County, AL
Analysis of the color rendition of flexible endoscopes
NASA Astrophysics Data System (ADS)
Murphy, Edward M.; Hegarty, Francis J.; McMahon, Barry P.; Boyle, Gerard
2003-03-01
Endoscopes are imaging devices routinely used for the diagnosis of disease within the human digestive tract. Light is transmitted into the body cavity via incoherent fibreoptic bundles and is controlled by a light feedback system. Fibreoptic endoscopes use coherent fibreoptic bundles to provide the clinician with an image. It is also possible to couple fibreoptic endoscopes to a clip-on video camera. Video endoscopes consist of a small CCD camera, which is inserted into gastrointestinal tract, and associated image processor to convert the signal to analogue RGB video signals. Images from both types of endoscope are displayed on standard video monitors. Diagnosis is dependent upon being able to determine changes in the structure and colour of tissues and biological fluids, and therefore is dependent upon the ability of the endoscope to reproduce the colour of these tissues and fluids with fidelity. This study investigates the colour reproduction of flexible optical and video endoscopes. Fibreoptic and video endoscopes alter image colour characteristics in different ways. The colour rendition of fibreoptic endoscopes was assessed by coupling them to a video camera and applying video colorimetric techniques. These techniques were then used on video endoscopes to assess how the colour rendition of video endoscopes compared with that of optical endoscopes. In both cases results were obtained at fixed illumination settings. Video endoscopes were then assessed with varying levels of illumination. Initial results show that at constant luminance endoscopy systems introduce non-linear shifts in colour. Techniques for examining how this colour shift varies with illumination intensity were developed and both methodology and results will be presented. We conclude that more rigorous quality assurance is required to reduce colour error and are developing calibration procedures applicable to medical endoscopes.
2016-01-01
Passive content fingerprinting is widely used for video content identification and monitoring. However, many challenges remain unsolved especially for partial-copies detection. The main challenge is to find the right balance between the computational cost of fingerprint extraction and fingerprint dimension, without compromising detection performance against various attacks (robustness). Fast video detection performance is desirable in several modern applications, for instance, in those where video detection involves the use of large video databases or in applications requiring real-time video detection of partial copies, a process whose difficulty increases when videos suffer severe transformations. In this context, conventional fingerprinting methods are not fully suitable to cope with the attacks and transformations mentioned before, either because the robustness of these methods is not enough or because their execution time is very high, where the time bottleneck is commonly found in the fingerprint extraction and matching operations. Motivated by these issues, in this work we propose a content fingerprinting method based on the extraction of a set of independent binary global and local fingerprints. Although these features are robust against common video transformations, their combination is more discriminant against severe video transformations such as signal processing attacks, geometric transformations and temporal and spatial desynchronization. Additionally, we use an efficient multilevel filtering system accelerating the processes of fingerprint extraction and matching. This multilevel filtering system helps to rapidly identify potential similar video copies upon which the fingerprint process is carried out only, thus saving computational time. We tested with datasets of real copied videos, and the results show how our method outperforms state-of-the-art methods regarding detection scores. Furthermore, the granularity of our method makes it suitable for partial-copy detection; that is, by processing only short segments of 1 second length. PMID:27861492
Computer vision barrel inspection
NASA Astrophysics Data System (ADS)
Wolfe, William J.; Gunderson, James; Walworth, Matthew E.
1994-02-01
One of the Department of Energy's (DOE) ongoing tasks is the storage and inspection of a large number of waste barrels containing a variety of hazardous substances. Martin Marietta is currently contracted to develop a robotic system -- the Intelligent Mobile Sensor System (IMSS) -- for the automatic monitoring and inspection of these barrels. The IMSS is a mobile robot with multiple sensors: video cameras, illuminators, laser ranging and barcode reader. We assisted Martin Marietta in this task, specifically in the development of image processing algorithms that recognize and classify the barrel labels. Our subsystem uses video images to detect and locate the barcode, so that the barcode reader can be pointed at the barcode.
Geovisualization for Smart Video Surveillance
NASA Astrophysics Data System (ADS)
Oves García, R.; Valentín, L.; Serrano, S. A.; Palacios-Alonso, M. A.; Sucar, L. Enrique
2017-09-01
Nowadays with the emergence of smart cities and the creation of new sensors capable to connect to the network, it is not only possible to monitor the entire infrastructure of a city, including roads, bridges, rail/subways, airports, communications, water, power, but also to optimize its resources, plan its preventive maintenance and monitor security aspects while maximizing services for its citizens. In particular, the security aspect is one of the most important issues due to the need to ensure the safety of people. However, if we want to have a good security system, it is necessary to take into account the way that we are going to present the information. In order to show the amount of information generated by sensing devices in real time in an understandable way, several visualization techniques are proposed for both local (involves sensing devices in a separated way) and global visualization (involves sensing devices as a whole). Taking into consideration that the information is produced and transmitted from a geographic location, the integration of a Geographic Information System to manage and visualize the behavior of data becomes very relevant. With the purpose of facilitating the decision-making process in a security system, we have integrated the visualization techniques and the Geographic Information System to produce a smart security system, based on a cloud computing architecture, to show relevant information about a set of monitored areas with video cameras.
Slow Monitoring Systems for CUORE
NASA Astrophysics Data System (ADS)
Dutta, Suryabrata; Cuore Collaboration
2016-09-01
The Cryogenic Underground Observatory for Rare Events (CUORE) is a ton-scale neutrinoless double-beta decay experiment under construction at the Laboratori Nazionali del Gran Sasso (LNGS). The experiment is comprised of 988 TeO2 bolometric crystals arranged into 19 towers and operated at a temperature of 10 mK. We have developed slow monitoring systems to monitor the cryostat during detector installation, commissioning, data taking, and other crucial phases of the experiment. Our systems use responsive LabVIEW virtual instruments and video streams of the cryostat. We built a website using the Angular, Bootstrap, and MongoDB frameworks to display this data in real-time. The website can also display archival data and send alarms. I will present how we constructed these slow monitoring systems to be robust, accurate, and secure, while maintaining reliable access for the entire collaboration from any platform in order to ensure efficient communications and fast diagnoses of all CUORE systems.
In-camera video-stream processing for bandwidth reduction in web inspection
NASA Astrophysics Data System (ADS)
Jullien, Graham A.; Li, QiuPing; Hajimowlana, S. Hossain; Morvay, J.; Conflitti, D.; Roberts, James W.; Doody, Brian C.
1996-02-01
Automated machine vision systems are now widely used for industrial inspection tasks where video-stream data information is taken in by the camera and then sent out to the inspection system for future processing. In this paper we describe a prototype system for on-line programming of arbitrary real-time video data stream bandwidth reduction algorithms; the output of the camera only contains information that has to be further processed by a host computer. The processing system is built into a DALSA CCD camera and uses a microcontroller interface to download bit-stream data to a XILINXTM FPGA. The FPGA is directly connected to the video data-stream and outputs data to a low bandwidth output bus. The camera communicates to a host computer via an RS-232 link to the microcontroller. Static memory is used to both generate a FIFO interface for buffering defect burst data, and for off-line examination of defect detection data. In addition to providing arbitrary FPGA architectures, the internal program of the microcontroller can also be changed via the host computer and a ROM monitor. This paper describes a prototype system board, mounted inside a DALSA camera, and discusses some of the algorithms currently being implemented for web inspection applications.
NASA Technical Reports Server (NTRS)
Gardner, D. G.; Tejwani, G. D.; Bircher, F. E.; Loboda, J. A.; Van Dyke, D. B.; Chenevert, D. J.
1991-01-01
Details are presented of the approach used in a comprehensive program to utilize exhaust plume diagnostics for rocket engine health-and-condition monitoring and assessing SSME component wear and degradation. This approach incorporates both spectral and video monitoring of the exhaust plume. Video monitoring provides qualitative data for certain types of component wear while spectral monitoring allows both quantitative and qualitative information. Consideration is given to spectral identification of SSME materials and baseline plume emissions.
Sanabria-Castro, A; Henríquez-Varela, F; Monge-Bonilla, C; Lara-Maier, S; Sittenfeld-Appel, M
2017-03-16
Given that epileptic seizures and non-epileptic paroxysmal events have similar clinical manifestations, using specific diagnostic methods is crucial, especially in patients with drug-resistant epilepsy. Prolonged video electroencephalography monitoring during epileptic seizures reveals epileptiform discharges and has become an essential procedure for epilepsy diagnosis. The main purpose of this study is to characterise paroxysmal events and compare patterns in patients with refractory epilepsy. We conducted a retrospective analysis of medical records from 91 patients diagnosed with refractory epilepsy who underwent prolonged video electroencephalography monitoring during hospitalisation. During prolonged video electroencephalography monitoring, 76.9% of the patients (n=70) had paroxysmal events. The mean number of events was 3.4±2.7; the duration of these events was highly variable. Most patients (80%) experienced seizures during wakefulness. The most common events were focal seizures with altered levels of consciousness, progressive bilateral generalized seizures and psychogenic non-epileptic seizures. Regarding all paroxysmal events, no differences were observed in the number or type of events by sex, in duration by sex or age at onset, or in the number of events by type of event. Psychogenic nonepileptic seizures were predominantly registered during wakefulness, lasted longer, started at older ages, and were more frequent in women. Paroxysmal events recorded during prolonged video electroencephalography monitoring in patients with refractory epilepsy show similar patterns and characteristics to those reported in other latitudes. Copyright © 2017 The Author(s). Publicado por Elsevier España, S.L.U. All rights reserved.
NASA Astrophysics Data System (ADS)
Bulan, Orhan; Bernal, Edgar A.; Loce, Robert P.; Wu, Wencheng
2013-03-01
Video cameras are widely deployed along city streets, interstate highways, traffic lights, stop signs and toll booths by entities that perform traffic monitoring and law enforcement. The videos captured by these cameras are typically compressed and stored in large databases. Performing a rapid search for a specific vehicle within a large database of compressed videos is often required and can be a time-critical life or death situation. In this paper, we propose video compression and decompression algorithms that enable fast and efficient vehicle or, more generally, event searches in large video databases. The proposed algorithm selects reference frames (i.e., I-frames) based on a vehicle having been detected at a specified position within the scene being monitored while compressing a video sequence. A search for a specific vehicle in the compressed video stream is performed across the reference frames only, which does not require decompression of the full video sequence as in traditional search algorithms. Our experimental results on videos captured in a local road show that the proposed algorithm significantly reduces the search space (thus reducing time and computational resources) in vehicle search tasks within compressed video streams, particularly those captured in light traffic volume conditions.
NASA Astrophysics Data System (ADS)
Veglio, E.; Graves, L. W.; Bank, C. G.
2014-12-01
We designed various computer-based applications and videos as educational resources for undergraduate courses at the University of Toronto in the Earth Science Department. These resources were developed in effort to enhance students' self-learning of key concepts as identified by educators at the department. The interactive learning modules and videos were created using the programs MATLAB and Adobe Creative Suite 5 (Photoshop and Premiere) and range from optical mineralogy (extinction and Becke line), petrology (equilibrium melting in 2-phase systems), crystallography (crystal systems), geophysics (gravity anomaly), and geologic history (evolution of Canada). These resources will be made available for students on internal course websites as well as through the University of Toronto Earth Science's website (www.es.utoronto.ca) where appropriate; the video platform YouTube.com may be used to reach a wide audience and promote the material. Usage of the material will be monitored and feedback will be collected over the next academic year in order to gage the use of these interactive learning tools and to assess if these computer-based applications and videos foster student engagement and active learning, and thus offer an enriched learning experience.
O' Donoghue, Deirdre; Kennedy, Norelee
2014-11-01
The activPAL™ activity monitor has potential for use in youth with Cerebral Palsy (CP) as it has demonstrated acceptable validity for the assessment of sedentary and physical activity in other populations. This study determined the validity of the activPAL™ activity monitor for the measurement of sitting, standing, walking time, transitions and step count for both legs in young people with hemiplegic and asymmetric diplegic CP. Seventeen participants with CP Gross Motor Function Classification System level I completed two video recorded test protocols that involved wearing an activPAL™ activity monitor on alternate legs. Agreement between observed video recorded data and activPAL™ activity monitor data was assessed using the Bland and Altman (BA) method and intraclass correlation coefficients (ICC 3,1). There was perfect agreement for transitions and high agreement for sitting (BA mean differences (MD): -1.8 and -1.8 s; ICCs: 0.49 and 0.95) standing (MD: 0.8 and 0.1 s; ICCs: 0.59 and 0.98) walking (MD: 1 and 1.1 s; ICCs: 0.99 and 0.94) timings and low agreement for step count (MD: 4.1 and 2.8 steps; ICCs: 0.96 and 0.95) for both legs. This study found clinically acceptable agreement with direct observation for all activPAL™ activity monitor functions, except for step count measurement with respect to the range of measurement values obtained for both legs in this study population.
Yakubova, Gulnoza; Taber-Doughty, Teresa
2013-06-01
The effects of a multicomponent intervention (a self-operated video modeling and self-monitoring delivered via an electronic interactive whiteboard (IWB) and a system of least prompts) on skill acquisition and interaction behavior of two students with autism and one student with moderate intellectual disability were examined using a multi-probe across students design. Students were taught to operate and view video modeling clips, perform a chain of novel tasks and self-monitor task performance using a SMART Board IWB. Results support the effectiveness of a multicomponent intervention in improving students' skill acquisition. Results also highlight the use of this technology as a self-operated and interactive device rather than a traditional teacher-operated device to enhance students' active participation in learning.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 48 Federal Acquisition Regulations System 1 2010-10-01 2010-10-01 false Definitions. 23.701... DRUG-FREE WORKPLACE Contracting for Environmentally Preferable Products and Services 23.701 Definitions. As used in this subpart— Computer monitor means a video display unit used with a computer. Desktop...
Highway-railway at-grade crossing structures : long term settlement measurements and assessments.
DOT National Transportation Integrated Search
2016-03-22
A common maintenance technique to correct track geometry at bridge transitions is hand tamping. The first section presents a non-invasive track monitoring system involving high-speed video cameras that evaluates the change in track behavior before an...
Research of Pedestrian Crossing Safety Facilities Based on the Video Detection
NASA Astrophysics Data System (ADS)
Li, Sheng-Zhen; Xie, Quan-Long; Zang, Xiao-Dong; Tang, Guo-Jun
Since that the pedestrian crossing facilities at present is not perfect, pedestrian crossing is in chaos and pedestrians from opposite direction conflict and congest with each other, which severely affects the pedestrian traffic efficiency, obstructs the vehicle and bringing about some potential security problems. To solve these problems, based on video identification, a pedestrian crossing guidance system was researched and designed. It uses the camera to monitor the pedestrians in real time and sums up the number of pedestrians through video detection program, and a group of pedestrian's induction lamp array is installed at the interval of crosswalk, which adjusts color display according to the proportion of pedestrians from both sides to guide pedestrians from both opposite directions processing separately. The emulation analysis result from cellular automaton shows that the system reduces the pedestrian crossing conflict, shortens the time of pedestrian crossing and improves the safety of pedestrians crossing.
Dirnwoeber, Markus; Machan, Rudolf; Herler, Juergen
2012-10-31
Direct field observations of fine-scaled biological processes and interactions of the benthic community of corals and associated reef organisms (e.g., feeding, reproduction, mutualistic or agonistic behavior, behavioral responses to changing abiotic factors) usually involve a disturbing intervention. Modern digital camcorders (without inflexible land-or ship-based cable connection) such as the GoPro camera enable undisturbed and unmanned, stationary close-up observations. Such observations, however, are also very time-limited (~3 h) and full 24 h-recordings throughout day and night, including nocturnal observations without artificial daylight illumination, are not possible. Herein we introduce the application of modern standard video surveillance technology with the main objective of providing a tool for monitoring coral reef or other sessile and mobile organisms for periods of 24 h and longer. This system includes nocturnal close-up observations with miniature infrared (IR)-sensitive cameras and separate high-power IR-LEDs. Integrating this easy-to-set up and portable remote-sensing equipment into coral reef research is expected to significantly advance our understanding of fine-scaled biotic processes on coral reefs. Rare events and long-lasting processes can easily be recorded, in situ -experiments can be monitored live on land, and nocturnal IR-observations reveal undisturbed behavior. The options and equipment choices in IR-sensitive surveillance technology are numerous and subject to a steadily increasing technical supply and quality at decreasing prices. Accompanied by short video examples, this report introduces a radio-transmission system for simultaneous recordings and real-time monitoring of multiple cameras with synchronized timestamps, and a surface-independent underwater-recording system.
Dirnwoeber, Markus; Machan, Rudolf; Herler, Juergen
2014-01-01
Direct field observations of fine-scaled biological processes and interactions of the benthic community of corals and associated reef organisms (e.g., feeding, reproduction, mutualistic or agonistic behavior, behavioral responses to changing abiotic factors) usually involve a disturbing intervention. Modern digital camcorders (without inflexible land-or ship-based cable connection) such as the GoPro camera enable undisturbed and unmanned, stationary close-up observations. Such observations, however, are also very time-limited (~3 h) and full 24 h-recordings throughout day and night, including nocturnal observations without artificial daylight illumination, are not possible. Herein we introduce the application of modern standard video surveillance technology with the main objective of providing a tool for monitoring coral reef or other sessile and mobile organisms for periods of 24 h and longer. This system includes nocturnal close-up observations with miniature infrared (IR)-sensitive cameras and separate high-power IR-LEDs. Integrating this easy-to-set up and portable remote-sensing equipment into coral reef research is expected to significantly advance our understanding of fine-scaled biotic processes on coral reefs. Rare events and long-lasting processes can easily be recorded, in situ-experiments can be monitored live on land, and nocturnal IR-observations reveal undisturbed behavior. The options and equipment choices in IR-sensitive surveillance technology are numerous and subject to a steadily increasing technical supply and quality at decreasing prices. Accompanied by short video examples, this report introduces a radio-transmission system for simultaneous recordings and real-time monitoring of multiple cameras with synchronized timestamps, and a surface-independent underwater-recording system. PMID:24829763
NASA Technical Reports Server (NTRS)
Stute, Robert A. (Inventor); Galloway, F. Houston (Inventor); Medelius, Pedro J. (Inventor); Swindle, Robert W. (Inventor); Bierman, Tracy A. (Inventor)
1996-01-01
A remote monitor alarm system monitors discrete alarm and analog power supply voltage conditions at remotely located communications terminal equipment. A central monitoring unit (CMU) is connected via serial data links to each of a plurality of remote terminal units (RTUS) that monitor the alarm and power supply conditions of the remote terminal equipment. Each RTU can monitor and store condition information of both discrete alarm points and analog power supply voltage points in its associated communications terminal equipment. The stored alarm information is periodically transmitted to the CMU in response to sequential polling of the RTUS. The number of monitored alarm inputs and permissible voltage ranges for the analog inputs can be remotely configured at the CMU and downloaded into programmable memory at each RTU. The CMU includes a video display, a hard disk memory, a line printer and an audio alarm for communicating and storing the alarm information received from each RTU.
Peterson, Courtney M.; Apolzan, John W.; Wright, Courtney; Martin, Corby K.
2017-01-01
We conducted a pair of studies to test the validity, reliability, feasibility, and acceptability of using video chat technology as a novel method to quantify dietary and pill-taking (i.e., supplement and medication) adherence. In the first study, we investigated whether video chat technology can accurately quantify adherence to dietary and pill-taking interventions. Mock study participants ate food items and swallowed pills while performing randomized scripted “cheating” behaviors design to mimic non-adherence. Monitoring was conducted in a crossover design, with two monitors watching in-person and two watching remotely by Skype on a smartphone. For the second study, a 22-question online survey was sent to an email listserv with more than 20,000 unique email addresses of past and present study participants to assess the feasibility and acceptability of the technology. For the dietary adherence tests, monitors detected 86% of non-adherent events (sensitivity) in-person versus 78% of events via video chat monitoring (p=0.12), with comparable inter-rater agreement (0.88 vs. 0.85; p=0.62). However, for pill-taking, non-adherence trended towards being more easily detected in-person than by video chat (77% vs. 60%; p=0.08), with non-significantly higher inter-rater agreement (0.85 vs. 0.69; p=0.21). Survey results from the second study (N=1,076 respondents; at least a 5% response rate) indicated that 86.4% of study participants had video chatting hardware, 73.3% were comfortable using the technology; and 79.8% were willing to use it for clinical research. Given the capability of video chat technology to reduce participant burden and to outperform other adherence monitoring methods such as dietary self-report and pill counts, video chatting is a novel and highly promising platform to quantify dietary and pill-taking adherence. PMID:27753427
Digital Image Correlation for Performance Monitoring
NASA Technical Reports Server (NTRS)
Palaviccini, Miguel; Turner, Dan; Herzberg, Michael
2016-01-01
Evaluating the health of a mechanism requires more than just a binary evaluation of whether an operation was completed. It requires analyzing more comprehensive, full-field data. Health monitoring is a process of non-destructively identifying characteristics that indicate the fitness of an engineered component. In order to monitor unit health in a production setting, an automated test system must be created to capture the motion of mechanism parts in a real-time and non-intrusive manner. One way to accomplish this is by using high-speed video and Digital Image Correlation (DIC). In this approach, individual frames of the video are analyzed to track the motion of mechanism components. The derived performance metrics allow for state-of-health monitoring and improved fidelity of mechanism modeling. The results are in-situ state-of-health identification and performance prediction. This paper introduces basic concepts of this test method, and discusses two main themes: the use of laser marking to add fiducial patterns to mechanism components, and new software developed to track objects with complex shapes, even as they move behind obstructions. Finally, the implementation of these tests into an automated tester is discussed.
Color infrared video mapping of upland and wetland communities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mackey, H.E. Jr.; Jensen, J.R.; Hodgson, M.E.
1987-01-01
Color infrared images were obtained using a video remote sensing system at 3000 and 5000 feet over a variety of terrestrial and wetland sites on the Savannah River Plant near Aiken, SC. The terrestrial sites ranged from secondary successional old field areas to even aged pine stands treated with varying levels of sewage sludge. The wetland sites ranged from marsh and macrophyte areas to mature cypress-tupelo swamp forests. The video data were collected in three spectral channels, 0.5-0.6 ..mu..m, 0.6-0.7 ..mu..m, and 0.7-1.1 ..mu..m at a 12.5 mm focal length. The data were converted to digital form and processed withmore » standard techniques. Comparisons of the video images were made with aircraft multispectral scanner (MSS) data collected previously from the same sites. The analyses of the video data indicated that this technique may present a low cost alternative for evaluation of vegetation and landcover types for environmental monitoring and assessment.« less
Ertel, Audrey E; Kaiser, Tiffany E; Abbott, Daniel E; Shah, Shimul A
2016-10-01
In this observational study, we analyzed the feasibility and early results of a perioperative, video-based educational program and tele-health home monitoring model on postoperative care management and readmissions for patients undergoing liver transplantation. Twenty consecutive liver transplantation recipients were provided with tele-health home monitoring and an educational video program during the perioperative period. Vital statistics were tracked and monitored daily with emphasis placed on readings outside of the normal range (threshold violations). Additionally, responses to effectiveness questionnaires were collected retrospectively for analysis. In the study, 19 of the 20 patients responded to the effectiveness questionnaire, with 95% reporting having watched all 10 videos, 68% watching some more than once, and 100% finding them effective in improving their preparedness for understanding their postoperative care. Among these 20 patients, there was an observed 19% threshold violation rate for systolic blood pressure, 6% threshold violation rate for mean blood glucose concentrations, and 8% threshold violation rate for mean weights. This subset of patients had a 90-day readmission rate of 30%. This observational study demonstrates that tele-health home monitoring and video-based educational programs are feasible in liver transplantation recipients and seem to be effective in enhancing the monitoring of vital statistics postoperatively. These data suggest that smart technology is effective in creating a greater awareness and understanding of how to manage postoperative care after liver transplantation. Copyright © 2016 Elsevier Inc. All rights reserved.
MARSnet: Mission-aware Autonomous Radar Sensor Network for Future Combat Systems
2007-05-03
34Parameter estimation for 3-parameter log-logistic distribution (LLD3) by Porne ", Parameter estimation for 3-parameter log-logistic distribu- tion...section V we physical security, air traffic control, traffic monitoring, andvidefaconu s cribedy. video surveillance, industrial automation etc. Each
NASA Astrophysics Data System (ADS)
Kachach, Redouane; Cañas, José María
2016-05-01
Using video in traffic monitoring is one of the most active research domains in the computer vision community. TrafficMonitor, a system that employs a hybrid approach for automatic vehicle tracking and classification on highways using a simple stationary calibrated camera, is presented. The proposed system consists of three modules: vehicle detection, vehicle tracking, and vehicle classification. Moving vehicles are detected by an enhanced Gaussian mixture model background estimation algorithm. The design includes a technique to resolve the occlusion problem by using a combination of two-dimensional proximity tracking algorithm and the Kanade-Lucas-Tomasi feature tracking algorithm. The last module classifies the shapes identified into five vehicle categories: motorcycle, car, van, bus, and truck by using three-dimensional templates and an algorithm based on histogram of oriented gradients and the support vector machine classifier. Several experiments have been performed using both real and simulated traffic in order to validate the system. The experiments were conducted on GRAM-RTM dataset and a proper real video dataset which is made publicly available as part of this work.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, C.S.; af Ekenstam, G.; Sallstrom, M.
1995-07-01
The Swedish Nuclear Power Inspectorate (SKI) and the US Department of Energy (DOE) sponsored work on a Remote Monitoring System (RMS) that was installed in August 1994 at the Barseback Works north of Malmo, Sweden. The RMS was designed to test the front end detection concept that would be used for unattended remote monitoring activities. Front end detection reduces the number of video images recorded and provides additional sensor verification of facility operations. The function of any safeguards Containment and Surveillance (C/S) system is to collect information which primarily is images that verify the operations at a nuclear facility. Barsebackmore » is ideal to test the concept of front end detection since most activities of safeguards interest is movement of spent fuel which occurs once a year. The RMS at Barseback uses a network of nodes to collect data from microwave motion detectors placed to detect the entrance and exit of spent fuel casks through a hatch. A video system using digital compression collects digital images and stores them on a hard drive and a digital optical disk. Data and images from the storage area are remotely monitored via telephone from Stockholm, Sweden and Albuquerque, NM, USA. These remote monitoring stations operated by SKI and SNL respectively, can retrieve data and images from the RMS computer at the Barseback Facility. The data and images are encrypted before transmission. This paper presents details of the RMS and test results of this approach to front end detection of safeguard activities.« less
A distributed cloud-based cyberinfrastructure framework for integrated bridge monitoring
NASA Astrophysics Data System (ADS)
Jeong, Seongwoon; Hou, Rui; Lynch, Jerome P.; Sohn, Hoon; Law, Kincho H.
2017-04-01
This paper describes a cloud-based cyberinfrastructure framework for the management of the diverse data involved in bridge monitoring. Bridge monitoring involves various hardware systems, software tools and laborious activities that include, for examples, a structural health monitoring (SHM), sensor network, engineering analysis programs and visual inspection. Very often, these monitoring systems, tools and activities are not coordinated, and the collected information are not shared. A well-designed integrated data management framework can support the effective use of the data and, thereby, enhance bridge management and maintenance operations. The cloud-based cyberinfrastructure framework presented herein is designed to manage not only sensor measurement data acquired from the SHM system, but also other relevant information, such as bridge engineering model and traffic videos, in an integrated manner. For the scalability and flexibility, cloud computing services and distributed database systems are employed. The information stored can be accessed through standard web interfaces. For demonstration, the cyberinfrastructure system is implemented for the monitoring of the bridges located along the I-275 Corridor in the state of Michigan.
A Framework of Simple Event Detection in Surveillance Video
NASA Astrophysics Data System (ADS)
Xu, Weiguang; Zhang, Yafei; Lu, Jianjiang; Tian, Yulong; Wang, Jiabao
Video surveillance is playing more and more important role in people's social life. Real-time alerting of threaten events and searching interesting content in stored large scale video footage needs human operator to pay full attention on monitor for long time. The labor intensive mode has limit the effectiveness and efficiency of the system. A framework of simple event detection is presented advance the automation of video surveillance. An improved inner key point matching approach is used to compensate motion of background in real-time; frame difference are used to detect foreground; HOG based classifiers are used to classify foreground object into people and car; mean-shift is used to tracking the recognized objects. Events are detected based on predefined rules. The maturity of the algorithms guarantee the robustness of the framework, and the improved approach and the easily checked rules enable the framework to work in real-time. Future works to be done are also discussed.
NASA Astrophysics Data System (ADS)
Larsen, D. G.; Schwieder, P. R.
Network video conferencing is advancing rapidly throughout the nation, and the Idaho National Engineering Laboratory (INEL), a Department of Energy (DOE) facility, is at the forefront of the development. Engineers at INEL/EG&G designed and installed a very unique DOE video conferencing system, offering many outstanding features, that include true multipoint conferencing, user-friendly design and operation with no full-time operators required, and the potential for cost effective expansion of the system. One area where INEL/EG&G engineers made a significant contribution to video conferencing was in the development of effective, user-friendly, end station driven scheduling software. A PC at each user site is used to schedule conferences via a windows package. This software interface provides information to the users concerning conference availability, scheduling, initiation, and termination. The menus are 'mouse' controlled. Once a conference is scheduled, a workstation at the hub monitors the network to initiate all scheduled conferences. No active operator participation is required once a user schedules a conference through the local PC; the workstation automatically initiates and terminates the conference as scheduled. As each conference is scheduled, hard copy notification is also printed at each participating site. Video conferencing is the wave of the future. The use of these user-friendly systems will save millions in lost productivity and travel costs throughout the nation. The ease of operation and conference scheduling will play a key role on the extent industry uses this new technology. The INEL/EG&G has developed a prototype scheduling system for both commercial and federal government use.
NASA Astrophysics Data System (ADS)
Adi, K.; Widodo, A. P.; Widodo, C. E.; Pamungkas, A.; Putranto, A. B.
2018-05-01
Traffic monitoring on road needs to be done, the counting of the number of vehicles passing the road is necessary. It is more emphasized for highway transportation management in order to prevent efforts. Therefore, it is necessary to develop a system that is able to counting the number of vehicles automatically. Video processing method is able to counting the number of vehicles automatically. This research has development a system of vehicle counting on toll road. This system includes processes of video acquisition, frame extraction, and image processing for each frame. Video acquisition is conducted in the morning, at noon, in the afternoon, and in the evening. This system employs of background subtraction and morphology methods on gray scale images for vehicle counting. The best vehicle counting results were obtained in the morning with a counting accuracy of 86.36 %, whereas the lowest accuracy was in the evening, at 21.43 %. Differences in morning and evening results are caused by different illumination in the morning and evening. This will cause the values in the image pixels to be different.
Hoffmann, Gundula; Schmidt, Mariana; Ammon, Christian; Rose-Meierhöfer, Sandra; Burfeind, Onno; Heuwieser, Wolfgang; Berg, Werner
2013-06-01
The aim of this study was to assess the variability of temperatures measured by a video-based infrared camera (IRC) in comparison to rectal and vaginal temperatures. The body surface temperatures of cows and calves were measured contactless at different body regions using videos from the IRC. Altogether, 22 cows and 9 calves were examined. The differences of the measured IRC temperatures among the body regions, i.e. eye (mean: 37.0 °C), back of the ear (35.6 °C), shoulder (34.9 °C) and vulva (37.2 °C), were significant (P < 0.01), except between eye and vulva (P = 0.99). The quartile ranges of the measured IRC temperatures at the 4 above mentioned regions were between 1.2 and 1.8 K. Of the investigated body regions the eye and the back of the ear proved to be suitable as practical regions for temperature monitoring. The temperatures of these 2 regions could be gained by the use of the maximum temperatures of the head and body area. Therefore, only the maximum temperatures of both areas were used for further analysis. The data analysis showed an increase for the maximum temperature measured by IRC at head and body area with an increase of rectal temperature in cows and calves. The use of infrared thermography videos has the advantage to analyze more than 1 picture per animal in a short period of time, and shows potential as a monitoring system for body temperatures in cattle.
ERIC Educational Resources Information Center
Gentile, Douglas, A.; Lynch, Paul, J.; Linder, Jennifer Ruh; Walsh, David, A.
2004-01-01
Video games have become one of the favorite activities of American children. A growing body of research is linking violent video game play to aggressive cognitions, attitudes, and behaviors. The first goal of this study was to document the video games habits of adolescents and the level of parental monitoring of adolescent video game use. The…
NASA Lewis' Telescience Support Center Supports Orbiting Microgravity Experiments
NASA Technical Reports Server (NTRS)
Hawersaat, Bob W.
1998-01-01
The Telescience Support Center (TSC) at the NASA Lewis Research Center was developed to enable Lewis-based science teams and principal investigators to monitor and control experimental and operational payloads onboard the International Space Station. The TSC is a remote operations hub that can interface with other remote facilities, such as universities and industrial laboratories. As a pathfinder for International Space Station telescience operations, the TSC has incrementally developed an operational capability by supporting space shuttle missions. The TSC has evolved into an environment where experimenters and scientists can control and monitor the health and status of their experiments in near real time. Remote operations (or telescience) allow local scientists and their experiment teams to minimize their travel and maintain a local complement of expertise for hardware and software troubleshooting and data analysis. The TSC was designed, developed, and is operated by Lewis' Engineering and Technical Services Directorate and its support contractors, Analex Corporation and White's Information System, Inc. It is managed by Lewis' Microgravity Science Division. The TSC provides operational support in conjunction with the NASA Marshall Space Flight Center and NASA Johnson Space Center. It enables its customers to command, receive, and view telemetry; monitor the science video from their on-orbit experiments; and communicate over mission-support voice loops. Data can be received and routed to experimenter-supplied ground support equipment and/or to the TSC data system for display. Video teleconferencing capability and other video sources, such as NASA TV, are also available. The TSC has a full complement of standard services to aid experimenters in telemetry operations.
Helmet-Cam: tool for assessing miners’ respirable dust exposure
Cecala, A.B.; Reed, W.R.; Joy, G.J.; Westmoreland, S.C.; O’Brien, A.D.
2015-01-01
Video technology coupled with datalogging exposure monitors have been used to evaluate worker exposure to different types of contaminants. However, previous application of this technology used a stationary video camera to record the worker’s activity while the worker wore some type of contaminant monitor. These techniques are not applicable to mobile workers in the mining industry because of their need to move around the operation while performing their duties. The Helmet-Cam is a recently developed exposure assessment tool that integrates a person-wearable video recorder with a datalogging dust monitor. These are worn by the miner in a backpack, safety belt or safety vest to identify areas or job tasks of elevated exposure. After a miner performs his or her job while wearing the unit, the video and dust exposure data files are downloaded to a computer and then merged together through a NIOSH-developed computer software program called Enhanced Video Analysis of Dust Exposure (EVADE). By providing synchronized playback of the merged video footage and dust exposure data, the EVADE software allows for the assessment and identification of key work areas and processes, as well as work tasks that significantly impact a worker’s personal respirable dust exposure. The Helmet-Cam technology has been tested at a number of metal/nonmetal mining operations and has proven to be a valuable assessment tool. Mining companies wishing to use this technique can purchase a commercially available video camera and an instantaneous dust monitor to obtain the necessary data, and the NIOSH-developed EVADE software will be available for download at no cost on the NIOSH website. PMID:26380529
In situ process monitoring in selective laser sintering using optical coherence tomography
NASA Astrophysics Data System (ADS)
Gardner, Michael R.; Lewis, Adam; Park, Jongwan; McElroy, Austin B.; Estrada, Arnold D.; Fish, Scott; Beaman, Joseph J.; Milner, Thomas E.
2018-04-01
Selective laser sintering (SLS) is an efficient process in additive manufacturing that enables rapid part production from computer-based designs. However, SLS is limited by its notable lack of in situ process monitoring when compared with other manufacturing processes. We report the incorporation of optical coherence tomography (OCT) into an SLS system in detail and demonstrate access to surface and subsurface features. Video frame rate cross-sectional imaging reveals areas of sintering uniformity and areas of excessive heat error with high temporal resolution. We propose a set of image processing techniques for SLS process monitoring with OCT and report the limitations and obstacles for further OCT integration with SLS systems.
Maximizing the Independence of Deaf-Blind Teenagers.
ERIC Educational Resources Information Center
Venn, J. J.; Wadler, F.
1990-01-01
The Independent Living Project for Deaf/Blind Youth emphasized the teaching of home management, personal management, social/emotional skills, work skills, and communication skills to increase low-functioning teenagers' autonomy. The project included an independent living apartment in which a video monitoring system was used for indirect…
Design of a system based on DSP and FPGA for video recording and replaying
NASA Astrophysics Data System (ADS)
Kang, Yan; Wang, Heng
2013-08-01
This paper brings forward a video recording and replaying system with the architecture of Digital Signal Processor (DSP) and Field Programmable Gate Array (FPGA). The system achieved encoding, recording, decoding and replaying of Video Graphics Array (VGA) signals which are displayed on a monitor during airplanes and ships' navigating. In the architecture, the DSP is a main processor which is used for a large amount of complicated calculation during digital signal processing. The FPGA is a coprocessor for preprocessing video signals and implementing logic control in the system. In the hardware design of the system, Peripheral Device Transfer (PDT) function of the External Memory Interface (EMIF) is utilized to implement seamless interface among the DSP, the synchronous dynamic RAM (SDRAM) and the First-In-First-Out (FIFO) in the system. This transfer mode can avoid the bottle-neck of the data transfer and simplify the circuit between the DSP and its peripheral chips. The DSP's EMIF and two level matching chips are used to implement Advanced Technology Attachment (ATA) protocol on physical layer of the interface of an Integrated Drive Electronics (IDE) Hard Disk (HD), which has a high speed in data access and does not rely on a computer. Main functions of the logic on the FPGA are described and the screenshots of the behavioral simulation are provided in this paper. In the design of program on the DSP, Enhanced Direct Memory Access (EDMA) channels are used to transfer data between the FIFO and the SDRAM to exert the CPU's high performance on computing without intervention by the CPU and save its time spending. JPEG2000 is implemented to obtain high fidelity in video recording and replaying. Ways and means of acquiring high performance for code are briefly present. The ability of data processing of the system is desirable. And smoothness of the replayed video is acceptable. By right of its design flexibility and reliable operation, the system based on DSP and FPGA for video recording and replaying has a considerable perspective in analysis after the event, simulated exercitation and so forth.
Knowledge-based understanding of aerial surveillance video
NASA Astrophysics Data System (ADS)
Cheng, Hui; Butler, Darren
2006-05-01
Aerial surveillance has long been used by the military to locate, monitor and track the enemy. Recently, its scope has expanded to include law enforcement activities, disaster management and commercial applications. With the ever-growing amount of aerial surveillance video acquired daily, there is an urgent need for extracting actionable intelligence in a timely manner. Furthermore, to support high-level video understanding, this analysis needs to go beyond current approaches and consider the relationships, motivations and intentions of the objects in the scene. In this paper we propose a system for interpreting aerial surveillance videos that automatically generates a succinct but meaningful description of the observed regions, objects and events. For a given video, the semantics of important regions and objects, and the relationships between them, are summarised into a semantic concept graph. From this, a textual description is derived that provides new search and indexing options for aerial video and enables the fusion of aerial video with other information modalities, such as human intelligence, reports and signal intelligence. Using a Mixture-of-Experts video segmentation algorithm an aerial video is first decomposed into regions and objects with predefined semantic meanings. The objects are then tracked and coerced into a semantic concept graph and the graph is summarized spatially, temporally and semantically using ontology guided sub-graph matching and re-writing. The system exploits domain specific knowledge and uses a reasoning engine to verify and correct the classes, identities and semantic relationships between the objects. This approach is advantageous because misclassifications lead to knowledge contradictions and hence they can be easily detected and intelligently corrected. In addition, the graph representation highlights events and anomalies that a low-level analysis would overlook.
12. NBS LOWER ROOM. BEHIND FAR GLASS WALL IS VIDEO ...
12. NBS LOWER ROOM. BEHIND FAR GLASS WALL IS VIDEO TAPE EQUIPMENT AND VOICE INTERCOM EQUIPMENT. THE MONITORS ABOVE GLASS WALL DISPLAY UNDERWATER TEST VIDEO TO CONTROL ROOM. FARTHEST CONSOLE ROW CONTAINS CAMERA SWITCHING, PANNING, TILTING, FOCUSING, AND ZOOMING. MIDDLE CONSOLE ROW CONTAINS TEST CONDUCTOR CONSOLES FOR MONITORING TEST ACTIVITIES AND DATA. THE CLOSEST CONSOLE ROW IS NBS FACILITY CONSOLES FOR TEST DIRECTOR, SAFETY AND QUALITY ASSURANCE REPRESENTATIVES. - Marshall Space Flight Center, Neutral Buoyancy Simulator Facility, Rideout Road, Huntsville, Madison County, AL
13. NBS LOWER ROOM. BEHIND FAR GLASS WALL IS VIDEO ...
13. NBS LOWER ROOM. BEHIND FAR GLASS WALL IS VIDEO TAPE EQUIPMENT AND VOICE INTERCOM EQUIPMENT. THE MONITORS ABOVE GLASS WALL DISPLAY UNDERWATER TEST VIDEO TO CONTROL ROOM. FARTHEST CONSOLE ROW CONTAINS CAMERA SWITCHING, PANNING, TILTING, FOCUSING, AND ZOOMING. MIDDLE CONSOLE ROW CONTAINS TEST CONDUCTOR CONSOLES FOR MONITORING TEST ACTIVITIES AND DATA. THE CLOSEST CONSOLE ROW IS NBC FACILITY CONSOLES FOR TEST DIRECTOR, SAFETY AND QUALITY ASSURANCE REPRESENTATIVES. - Marshall Space Flight Center, Neutral Buoyancy Simulator Facility, Rideout Road, Huntsville, Madison County, AL
NASA Astrophysics Data System (ADS)
Longmore, S. N.; Collins, R. P.; Pfeifer, S.; Fox, S. E.; Mulero-Pazmany, M.; Bezombes, F.; Goodwind, A.; de Juan Ovelar, M.; Knapen, J. H.; Wich, S. A.
2017-02-01
In this paper we describe an unmanned aerial system equipped with a thermal-infrared camera and software pipeline that we have developed to monitor animal populations for conservation purposes. Taking a multi-disciplinary approach to tackle this problem, we use freely available astronomical source detection software and the associated expertise of astronomers, to efficiently and reliably detect humans and animals in aerial thermal-infrared footage. Combining this astronomical detection software with existing machine learning algorithms into a single, automated, end-to-end pipeline, we test the software using aerial video footage taken in a controlled, field-like environment. We demonstrate that the pipeline works reliably and describe how it can be used to estimate the completeness of different observational datasets to objects of a given type as a function of height, observing conditions etc. - a crucial step in converting video footage to scientifically useful information such as the spatial distribution and density of different animal species. Finally, having demonstrated the potential utility of the system, we describe the steps we are taking to adapt the system for work in the field, in particular systematic monitoring of endangered species at National Parks around the world.
NASA Astrophysics Data System (ADS)
Ramirez, C. J.; Mora-Amador, R. A., Sr.; Alpizar Segura, Y.; González, G.
2015-12-01
Monitoring volcanoes have been on the past decades an expanding matter, one of the rising techniques that involve new technology is the digital video surveillance, and the automated software that come within, now is possible if you have the budget and some facilities on site, to set up a real-time network of high definition video cameras, some of them even with special features like infrared, thermal, ultraviolet, etc. That can make easier or harder the analysis of volcanic phenomena like lava eruptions, phreatic eruption, plume speed, lava flows, close/open vents, just to mention some of the many application of these cameras. We present the methodology of the installation at Poás volcano of a real-time system for processing and storing HD and thermal images and video, also the process to install and acquired the HD and IR cameras, towers, solar panels and radios to transmit the data on a volcano located at the tropics, plus what volcanic areas are our goal and why. On the other hand we show the hardware and software we consider necessary to carry on our project. Finally we show some early data examples of upwelling areas on the Poás volcano hyperacidic lake and the relation with lake phreatic eruptions, also some data of increasing temperature on an old dome wall and the suddenly wall explosions, and the use of IR video for measuring plume speed and contour for use on combination with DOAS or FTIR measurements.
Potential Utility of a 4K Consumer Camera for Surgical Education in Ophthalmology.
Ichihashi, Tsunetomo; Hirabayashi, Yutaka; Nagahara, Miyuki
2017-01-01
Purpose. We evaluated the potential utility of a cost-effective 4K consumer video system for surgical education in ophthalmology. Setting. Tokai University Hachioji Hospital, Tokyo, Japan. Design. Experimental study. Methods. The eyes that underwent cataract surgery, glaucoma surgery, vitreoretinal surgery, or oculoplastic surgery between February 2016 and April 2016 were recorded with 17.2 million pixels using a high-definition digital video camera (LUMIX DMC-GH4, Panasonic, Japan) and with 0.41 million pixels using a conventional analog video camera (MKC-501, Ikegami, Japan). Motion pictures of two cases for each surgery type were evaluated and classified as having poor, normal, or excellent visibility. Results. The 4K video system was easily installed by reading the instructions without technical expertise. The details of the surgical picture in the 4K system were highly improved over those of the conventional pictures, and the visual effects for surgical education were significantly improved. Motion pictures were stored for approximately 11 h with 512 GB SD memory. The total price of this system was USD 8000, which is a very low price compared with a commercial system. Conclusion. This 4K consumer camera was able to record and play back with high-definition surgical field visibility on the 4K monitor and is a low-cost, high-performing alternative for surgical facilities.
Video and non-video feedback interventions for teen drivers.
DOT National Transportation Integrated Search
2016-07-01
In-vehicle feedback technologies, including some that use video, help parents monitor and mentor their young drivers. While different feedback technologies have been shown to reduce some risky driving behaviors, teens and parents cite privacy concern...
Army Networks: Opportunities Exist to Better Utilize Results from Network Integration Evaluations
2013-08-01
monitor operations; a touch screen-based mission command planning tool; and an antenna mast . The Army will field only one of these systems in capability...Office JTRS Joint Tactical Radio System NIE Network Integration Evaluation OSD Office of the Secretary of Defense SUE System under Evaluation...command systems . A robust transport layer capable of delivering voice, data, imagery, and video to the tactical edge (i.e., the forward battle lines
C-130 Automated Digital Data System (CADDS)
NASA Technical Reports Server (NTRS)
Scofield, C. P.; Nguyen, Chien
1991-01-01
Real time airborne data acquisition, archiving and distribution on the NASA/Ames Research Center (ARC) C-130 has been improved over the past three years due to the implementation of the C-130 Automated Digital Data System (CADDS). CADDS is a real time, multitasking, multiprocessing ROM-based system. CADDS acquires data from both avionics and environmental sensors inflight for all C-130 data lines. The system also displays the data on video monitors throughout the aircraft.
A subjective scheduler for subjective dedicated networks
NASA Astrophysics Data System (ADS)
Suherman; Fakhrizal, Said Reza; Al-Akaidi, Marwan
2017-09-01
Multiple access technique is one of important techniques within medium access layer in TCP/IP protocol stack. Each network technology implements the selected access method. Priority can be implemented in those methods to differentiate services. Some internet networks are dedicated for specific purpose. Education browsing or tutorial video accesses are preferred in a library hotspot, while entertainment and sport contents could be subjects of limitation. Current solution may use IP address filter or access list. This paper proposes subjective properties of users or applications are used for priority determination in multiple access techniques. The NS-2 simulator is employed to evaluate the method. A video surveillance network using WiMAX is chosen as the object. Subjective priority is implemented on WiMAX scheduler based on traffic properties. Three different traffic sources from monitoring video: palace, park, and market are evaluated. The proposed subjective scheduler prioritizes palace monitoring video that results better quality, xx dB than the later monitoring spots.
A validity test of movie, television, and video-game ratings.
Walsh, D A; Gentile, D A
2001-06-01
Numerous studies have documented the potential effects on young audiences of violent content in media products, including movies, television programs, and computer and video games. Similar studies have evaluated the effects associated with sexual content and messages. Cumulatively, these effects represent a significant public health risk for increased aggressive and violent behavior, spread of sexually transmitted diseases, and pediatric pregnancy. In partial response to these risks and to public and legislative pressure, the movie, television, and gaming industries have implemented ratings systems intended to provide information about the content and appropriate audiences for different films, shows, and games. To test the validity of the current movie-, television-, and video game-rating systems. Panel study. Participants used the KidScore media evaluation tool, which evaluates films, television shows, and video games on 10 aspects, including the appropriateness of the media product for children based on age. When an entertainment industry rates a product as inappropriate for children, parent raters agree that it is inappropriate for children. However, parent raters disagree with industry usage of many of the ratings designating material suitable for children of different ages. Products rated as appropriate for adolescents are of the greatest concern. The level of disagreement varies from industry to industry and even from rating to rating. Analysis indicates that the amount of violent content and portrayals of violence are the primary markers for disagreement between parent raters and industry ratings. As 1 part of a solution to the complex public health problems posed by violent and sexually explicit media products, ratings can have value if used with caution. Parents and caregivers relying on the ratings systems to guide their children's use of media products should continue to monitor content independently. Industry ratings systems should be revised with input from the medical and scientific communities to improve their reliability and validity. A single ratings system, applied universally across industries, would greatly simplify the efforts of parents and caregivers to use the system as well as the efforts of outside parties to monitor the use and validity of the system.
Lei, Tim C.; Pendyala, Srinivas; Scherrer, Larry; Li, Buhong; Glazner, Gregory F.; Huang, Zheng
2016-01-01
Recent clinical reports suggest that overexposure to light emissions generated from cathode ray tube (CRT) and liquid crystal display (LCD) color monitors after topical or systemic administration of a photosensitizer could cause noticeable skin phototoxicity. In this study, we examined the light emission profiles (optical irradiance, spectral irradiance) of CRT and LCD monitors under simulated movie and video game modes. Results suggest that peak emissions and integrated fluence generated from monitors are clinically relevant and therefore prolonged exposure to these light sources at a close distance should be avoided after the administration of a photosensitizer or phototoxic drug. PMID:23669681
Development of a video-guided real-time patient motion monitoring system.
Ju, Sang Gyu; Huh, Woong; Hong, Chae-Seon; Kim, Jin Sung; Shin, Jung Suk; Shin, Eunhyuk; Han, Youngyih; Ahn, Yong Chan; Park, Hee Chul; Choi, Doo Ho
2012-05-01
The authors developed a video image-guided real-time patient motion monitoring (VGRPM) system using PC-cams, and its clinical utility was evaluated using a motion phantom. The VGRPM system has three components: (1) an image acquisition device consisting of two PC-cams, (2) a main control computer with a radiation signal controller and warning system, and (3) patient motion analysis software developed in-house. The intelligent patient motion monitoring system was designed for synchronization with a beam on/off trigger signal in order to limit operation to during treatment time only and to enable system automation. During each treatment session, an initial image of the patient is acquired as soon as radiation starts and is compared with subsequent live images, which can be acquired at up to 30 fps by the real-time frame difference-based analysis software. When the error range exceeds the set criteria (δ(movement)) due to patient movement, a warning message is generated in the form of light and sound. The described procedure repeats automatically for each patient. A motion phantom, which operates by moving a distance of 0.1, 0.2, 0.3, 0.5, and 1.0 cm for 1 and 2 s, respectively, was used to evaluate the system performance. The authors measured optimal δ(movement) for clinical use, the minimum distance that can be detected with this system, and the response time of the whole system using a video analysis technique. The stability of the system in a linear accelerator unit was evaluated for a period of 6 months. As a result of the moving phantom test, the δ(movement) for detection of all simulated phantom motion except the 0.1 cm movement was determined to be 0.2% of total number of pixels in the initial image. The system can detect phantom motion as small as 0.2 cm. The measured response time from the detection of phantom movement to generation of the warning signal was 0.1 s. No significant functional disorder of the system was observed during the testing period. The VGRPM system has a convenient design, which synchronizes initiation of the analysis with a beam on/off signal from the treatment machine and may contribute to a reduction in treatment error due to patient motion and increase the accuracy of treatment dose delivery.
Pressler, Ronit M; Seri, Stefano; Kane, Nick; Martland, Tim; Goyal, Sushma; Iyer, Anand; Warren, Elliott; Notghi, Lesley; Bill, Peter; Thornton, Rachel; Appleton, Richard; Doyle, Sarah; Rushton, Sarah; Worley, Alan; Boyd, Stewart G
2017-08-01
Paediatric Epilepsy surgery in the UK has recently been centralised in order to improve expertise and quality of service available to children. Video EEG monitoring or telemetry is a highly specialised and a crucial component of the pre-surgical evaluation. Although many Epilepsy Monitoring Units work to certain standards, there is no national or international guideline for paediatric video telemetry. Due to lack of evidence we used a modified Delphi process utilizing the clinical and academic expertise of the clinical neurophysiology sub-specialty group of Children's Epilepsy Surgical Service (CESS) centres in England and Wales. This process consisted of the following stages I: Identification of the consensus working group, II: Identification of key areas for guidelines, III: Consensus practice points and IV: Final review. Statements that gained consensus (median score of either 4 or 5 using a five-point Likerttype scale) were included in the guideline. Two rounds of feedback and amendments were undertaken. The consensus guidelines includes the following topics: referral pathways, neurophysiological equipment standards, standards of recording techniques, with specific emphasis on safety of video EEG monitoring both with and without drug withdrawal, a protocol for testing patient's behaviours, data storage and guidelines for writing factual reports and conclusions. All statements developed received a median score of 5 and were adopted by the group. Using a modified Delphi process we were able to develop universally-accepted video EEG guidelines for the UK CESS. Although these recommendations have been specifically developed for the pre-surgical evaluation of children with epilepsy, it is assumed that most components are transferable to any paediatric video EEG monitoring setting. Copyright © 2017 British Epilepsy Association. Published by Elsevier Ltd. All rights reserved.
High-Speed Observer: Automated Streak Detection in SSME Plumes
NASA Technical Reports Server (NTRS)
Rieckoff, T. J.; Covan, M.; OFarrell, J. M.
2001-01-01
A high frame rate digital video camera installed on test stands at Stennis Space Center has been used to capture images of Space Shuttle main engine plumes during test. These plume images are processed in real time to detect and differentiate anomalous plume events occurring during a time interval on the order of 5 msec. Such speed yields near instantaneous availability of information concerning the state of the hardware. This information can be monitored by the test conductor or by other computer systems, such as the integrated health monitoring system processors, for possible test shutdown before occurrence of a catastrophic engine failure.
NASA Astrophysics Data System (ADS)
Kobayashi, Y.; Watanabe, K.; Imai, M.; Watanabe, K.; Naruse, N.; Takahashi, Y.
2016-12-01
Hyper-densely monitoring for poor-visibility occurred by snowstorm is needed to make an alert system, because the snowstorm is difficult to predict from the observation only at a representative point. There are some problems in the previous approaches for the poor-visibility monitoring using video analyses or visibility meters; these require a wired network monitoring (a large amount of data: 10MB/sec at least) and the system cost is high (10,000 at each point). Thus, the risk of poor-visibility has been mainly measured at specific point such as airport and mountain pass, and estimated by simulation two dimensionally. To predict it two dimensionally and accurately, we have developed a low-cost meteorological system to observe the snowstorm hyper-densely. We have developed a low-cost visibility meter which works as the reduced intensity of semiconducting laser light when snow particles block off. Our developed system also has a capability of extending a hyper-densely observation in real-time on wireless network using Zigbee; A/D conversion and wireless data sent from temperature and illuminance sensors. We use a semiconducting laser chip (5) for the light source and a reflection mechanism by the use of three mirrors so as to send the light to a non-sensitive illuminance sensor directly. Thus, our visibility detecting system ($500) becomes much cheaper than previous one. We have checked the correlation between the reduced intensity taken by our system and the visibility recorded by conventional video camera. The value for the correlation coefficient was -0.67, which indicates a strong correlation. It means that our developed system is practical. In conclusion, we have developed low-cost meteorological detecting system to observe poor-visibility occurred by snowstorm, having a potential of hyper-densely monitoring on wireless network, and have made sure the practicability.
Knowledge representation in space flight operations
NASA Technical Reports Server (NTRS)
Busse, Carl
1989-01-01
In space flight operations rapid understanding of the state of the space vehicle is essential. Representation of knowledge depicting space vehicle status in a dynamic environment presents a difficult challenge. The NASA Jet Propulsion Laboratory has pursued areas of technology associated with the advancement of spacecraft operations environment. This has led to the development of several advanced mission systems which incorporate enhanced graphics capabilities. These systems include: (1) Spacecraft Health Automated Reasoning Prototype (SHARP); (2) Spacecraft Monitoring Environment (SME); (3) Electrical Power Data Monitor (EPDM); (4) Generic Payload Operations Control Center (GPOCC); and (5) Telemetry System Monitor Prototype (TSM). Knowledge representation in these systems provides a direct representation of the intrinsic images associated with the instrument and satellite telemetry and telecommunications systems. The man-machine interface includes easily interpreted contextual graphic displays. These interactive video displays contain multiple display screens with pop-up windows and intelligent, high resolution graphics linked through context and mouse-sensitive icons and text.
Portable Video/Digital Retinal Funduscope
NASA Technical Reports Server (NTRS)
Taylor, Gerald R.; Meehan, Richard; Hunter, Norwood; Caputo, Michael; Gibson, C. Robert
1991-01-01
Lightweight, inexpensive electronic and photographic instrument developed for detection, monitoring, and objective quantification of ocular/systemic disease or physiological alterations of retina, blood vessels, or other structures in anterior and posterior chambers of eye. Operated with little training. Functions with human or animal subject seated, recumbent, inverted, or in almost any other orientation; and in hospital, laboratory, field, or other environment. Produces video images viewed directly and/or digitized for simultaneous or subsequent analysis. Also equipped to produce photographs and/or fitted with adaptors to produce stereoscopic or magnified images of skin, nose, ear, throat, or mouth to detect lesions or diseases.
ViCoMo: visual context modeling for scene understanding in video surveillance
NASA Astrophysics Data System (ADS)
Creusen, Ivo M.; Javanbakhti, Solmaz; Loomans, Marijn J. H.; Hazelhoff, Lykele B.; Roubtsova, Nadejda; Zinger, Svitlana; de With, Peter H. N.
2013-10-01
The use of contextual information can significantly aid scene understanding of surveillance video. Just detecting people and tracking them does not provide sufficient information to detect situations that require operator attention. We propose a proof-of-concept system that uses several sources of contextual information to improve scene understanding in surveillance video. The focus is on two scenarios that represent common video surveillance situations, parking lot surveillance and crowd monitoring. In the first scenario, a pan-tilt-zoom (PTZ) camera tracking system is developed for parking lot surveillance. Context is provided by the traffic sign recognition system to localize regular and handicapped parking spot signs as well as license plates. The PTZ algorithm has the ability to selectively detect and track persons based on scene context. In the second scenario, a group analysis algorithm is introduced to detect groups of people. Contextual information is provided by traffic sign recognition and region labeling algorithms and exploited for behavior understanding. In both scenarios, decision engines are used to interpret and classify the output of the subsystems and if necessary raise operator alerts. We show that using context information enables the automated analysis of complicated scenarios that were previously not possible using conventional moving object classification techniques.
A System for Video Surveillance and Monitoring CMU VSAM Final Report
1999-11-30
motion-based skeletonization, neural network , spatio-temporal salience Patterns inside image chips, spurious motion rejection, model -based... network of sensors with respect to the model coordinate system, computation of 3D geolocation estimates, and graphical display of object hypotheses...rithms have been developed. The first uses view dependent visual properties to train a neural network classifier to recognize four classes: single
Power, Avionics and Software Communication Network Architecture
NASA Technical Reports Server (NTRS)
Ivancic, William D.; Sands, Obed S.; Bakula, Casey J.; Oldham, Daniel R.; Wright, Ted; Bradish, Martin A.; Klebau, Joseph M.
2014-01-01
This document describes the communication architecture for the Power, Avionics and Software (PAS) 2.0 subsystem for the Advanced Extravehicular Mobile Unit (AEMU). The following systems are described in detail: Caution Warn- ing and Control System, Informatics, Storage, Video, Audio, Communication, and Monitoring Test and Validation. This document also provides some background as well as the purpose and goals of the PAS project at Glenn Research Center (GRC).
A design of real time image capturing and processing system using Texas Instrument's processor
NASA Astrophysics Data System (ADS)
Wee, Toon-Joo; Chaisorn, Lekha; Rahardja, Susanto; Gan, Woon-Seng
2007-09-01
In this work, we developed and implemented an image capturing and processing system that equipped with capability of capturing images from an input video in real time. The input video can be a video from a PC, video camcorder or DVD player. We developed two modes of operation in the system. In the first mode, an input image from the PC is processed on the processing board (development platform with a digital signal processor) and is displayed on the PC. In the second mode, current captured image from the video camcorder (or from DVD player) is processed on the board but is displayed on the LCD monitor. The major difference between our system and other existing conventional systems is that image-processing functions are performed on the board instead of the PC (so that the functions can be used for further developments on the board). The user can control the operations of the board through the Graphic User Interface (GUI) provided on the PC. In order to have a smooth image data transfer between the PC and the board, we employed Real Time Data Transfer (RTDX TM) technology to create a link between them. For image processing functions, we developed three main groups of function: (1) Point Processing; (2) Filtering and; (3) 'Others'. Point Processing includes rotation, negation and mirroring. Filter category provides median, adaptive, smooth and sharpen filtering in the time domain. In 'Others' category, auto-contrast adjustment, edge detection, segmentation and sepia color are provided, these functions either add effect on the image or enhance the image. We have developed and implemented our system using C/C# programming language on TMS320DM642 (or DM642) board from Texas Instruments (TI). The system was showcased in College of Engineering (CoE) exhibition 2006 at Nanyang Technological University (NTU) and have more than 40 users tried our system. It is demonstrated that our system is adequate for real time image capturing. Our system can be used or applied for applications such as medical imaging, video surveillance, etc.
Image system for three dimensional, 360 DEGREE, time sequence surface mapping of moving objects
Lu, Shin-Yee
1998-01-01
A three-dimensional motion camera system comprises a light projector placed between two synchronous video cameras all focused on an object-of-interest. The light projector shines a sharp pattern of vertical lines (Ronchi ruling) on the object-of-interest that appear to be bent differently to each camera by virtue of the surface shape of the object-of-interest and the relative geometry of the cameras, light projector and object-of-interest Each video frame is captured in a computer memory and analyzed. Since the relative geometry is known and the system pre-calibrated, the unknown three-dimensional shape of the object-of-interest can be solved for by matching the intersections of the projected light lines with orthogonal epipolar lines corresponding to horizontal rows in the video camera frames. A surface reconstruction is made and displayed on a monitor screen. For 360.degree. all around coverage of theobject-of-interest, two additional sets of light projectors and corresponding cameras are distributed about 120.degree. apart from one another.
Image system for three dimensional, 360{degree}, time sequence surface mapping of moving objects
Lu, S.Y.
1998-12-22
A three-dimensional motion camera system comprises a light projector placed between two synchronous video cameras all focused on an object-of-interest. The light projector shines a sharp pattern of vertical lines (Ronchi ruling) on the object-of-interest that appear to be bent differently to each camera by virtue of the surface shape of the object-of-interest and the relative geometry of the cameras, light projector and object-of-interest. Each video frame is captured in a computer memory and analyzed. Since the relative geometry is known and the system pre-calibrated, the unknown three-dimensional shape of the object-of-interest can be solved for by matching the intersections of the projected light lines with orthogonal epipolar lines corresponding to horizontal rows in the video camera frames. A surface reconstruction is made and displayed on a monitor screen. For 360{degree} all around coverage of the object-of-interest, two additional sets of light projectors and corresponding cameras are distributed about 120{degree} apart from one another. 20 figs.
Digital Image Correlation for Performance Monitoring.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Palaviccini, Miguel; Turner, Daniel Z.; Herzberg, Michael
2016-02-01
Evaluating the health of a mechanism requires more than just a binary evaluation of whether an operation was completed. It requires analyzing more comprehensive, full-field data. Health monitoring is a process of nondestructively identifying characteristics that indicate the fitness of an engineered component. In order to monitor unit health in a production setting, an automated test system must be created to capture the motion of mechanism parts in a real-time and non-intrusive manner. One way to accomplish this is by using high-speed video (HSV) and Digital Image Correlation (DIC). In this approach, individual frames of the video are analyzed tomore » track the motion of mechanism components. The derived performance metrics allow for state-of-health monitoring and improved fidelity of mechanism modeling. The results are in-situ state-of-health identification and performance prediction. This paper introduces basic concepts of this test method, and discusses two main themes: the use of laser marking to add fiducial patterns to mechanism components, and new software developed to track objects with complex shapes, even as they move behind obstructions. Finally, the implementation of these tests into an automated tester is discussed.« less
floodX: urban flash flood experiments monitored with conventional and alternative sensors
NASA Astrophysics Data System (ADS)
Moy de Vitry, Matthew; Dicht, Simon; Leitão, João P.
2017-09-01
The data sets described in this paper provide a basis for developing and testing new methods for monitoring and modelling urban pluvial flash floods. Pluvial flash floods are a growing hazard to property and inhabitants' well-being in urban areas. However, the lack of appropriate data collection methods is often cited as an impediment for reliable flood modelling, thereby hindering the improvement of flood risk mapping and early warning systems. The potential of surveillance infrastructure and social media is starting to draw attention for this purpose. In the floodX project, 22 controlled urban flash floods were generated in a flood response training facility and monitored with state-of-the-art sensors as well as standard surveillance cameras. With these data, it is possible to explore the use of video data and computer vision for urban flood monitoring and modelling. The floodX project stands out as the largest documented flood experiment of its kind, providing both conventional measurements and video data in parallel and at high temporal resolution. The data set used in this paper is available at https://doi.org/10.5281/zenodo.830513.
Mehmood, Irfan; Sajjad, Muhammad; Baik, Sung Wook
2014-01-01
Wireless capsule endoscopy (WCE) has great advantages over traditional endoscopy because it is portable and easy to use, especially in remote monitoring health-services. However, during the WCE process, the large amount of captured video data demands a significant deal of computation to analyze and retrieve informative video frames. In order to facilitate efficient WCE data collection and browsing task, we present a resource- and bandwidth-aware WCE video summarization framework that extracts the representative keyframes of the WCE video contents by removing redundant and non-informative frames. For redundancy elimination, we use Jeffrey-divergence between color histograms and inter-frame Boolean series-based correlation of color channels. To remove non-informative frames, multi-fractal texture features are extracted to assist the classification using an ensemble-based classifier. Owing to the limited WCE resources, it is impossible for the WCE system to perform computationally intensive video summarization tasks. To resolve computational challenges, mobile-cloud architecture is incorporated, which provides resizable computing capacities by adaptively offloading video summarization tasks between the client and the cloud server. The qualitative and quantitative results are encouraging and show that the proposed framework saves information transmission cost and bandwidth, as well as the valuable time of data analysts in browsing remote sensing data. PMID:25225874
Mehmood, Irfan; Sajjad, Muhammad; Baik, Sung Wook
2014-09-15
Wireless capsule endoscopy (WCE) has great advantages over traditional endoscopy because it is portable and easy to use, especially in remote monitoring health-services. However, during the WCE process, the large amount of captured video data demands a significant deal of computation to analyze and retrieve informative video frames. In order to facilitate efficient WCE data collection and browsing task, we present a resource- and bandwidth-aware WCE video summarization framework that extracts the representative keyframes of the WCE video contents by removing redundant and non-informative frames. For redundancy elimination, we use Jeffrey-divergence between color histograms and inter-frame Boolean series-based correlation of color channels. To remove non-informative frames, multi-fractal texture features are extracted to assist the classification using an ensemble-based classifier. Owing to the limited WCE resources, it is impossible for the WCE system to perform computationally intensive video summarization tasks. To resolve computational challenges, mobile-cloud architecture is incorporated, which provides resizable computing capacities by adaptively offloading video summarization tasks between the client and the cloud server. The qualitative and quantitative results are encouraging and show that the proposed framework saves information transmission cost and bandwidth, as well as the valuable time of data analysts in browsing remote sensing data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1990-01-01
This conference presents papers in the fields of airborne telemetry, measurement technology, video instrumentation and monitoring, tracking and receiving systems, and real-time processing in telemetry. Topics presented include packet telemetry ground station simulation, a predictable performance wideband noise generator, an improved drone tracking control system transponder, the application of neural networks to drone control, and an integrated real-time turbine engine flight test system.
Manufacturing Methods and Technology Program Automatic In-Process Microcircuit Evaluation.
1980-10-01
methods of controlling the AIME system are with the computer and associated inter- face (CPU control), and with controls located on the front panels...Sync and Blanking signals When the AIME system is being operated by the front panel controls , the computer does not influence the system operation. SU...the color video monitor display. The operator controls these parameters by 1) depressing the appropriate key on the keyboard, 2) observing on the
Flux of Kilogram-sized Meteoroids from Lunar Impact Monitoring. Supplemental Movies
NASA Technical Reports Server (NTRS)
Suggs, Robert; Cooke, William; Suggs, Ron; McNamara, Heather; Swift, Wesley; Moser, Danielle; Diekmann, Anne
2008-01-01
These videos, and audio accompany the slide presentation "Flux of Kilogram-sized Meteoroids from Lunar Impact Monitoring." The slide presentation reviews the routine lunar impact monitoring that has harvested over 110 impacts in 2 years of observations using telescopes and low-light level video cameras. The night side of the lunar surface provides a large collecting area for detecting these impacts and allows estimation of the flux of meteoroids down to a limiting luminous energy.
Conger, Randall W.; Bird, Philip H.
1999-01-01
Between May and July 1998, 10 monitor wells were drilled near the site of the former Naval Air Warfare Center (NAWC), Warminster, Bucks County, Pa., to monitor water levels and sample ground water in shallow and intermediate water-bearing fractures. The sampling will determine the horizontal and vertical distribution of contaminated ground water migrating from known or suspected sources. Three boreholes were drilled on the property at 960 Jacksonville Road, at the northwestern side of NAWC, along strike from Area A; seven boreholes were drilled in Area B in the southeastern corner of NAWC. Depths range from 40.5 to 150 feet below land surface.Borehole geophysical logging and video surveys were used to identify water-bearing fractures so that appropriate intervals could be screened in each monitor well. Geophysical logs were obtained at the 10 monitor wells. Video surveys were obtained at three monitor wells in the southeastern corner of the NAWC property.Caliper logs and video surveys were used to locate fractures. Inflections on fluid-temperature and fluid-resistivity logs were used to locate possible water-bearing fractures. Heatpulse-flowmeter measurements verified these locations. Natural-gamma logs provided information on stratigraphy. After interpretation of geophysical logs, video surveys, and driller's logs, all wells were screened such that water-level fluctuations could be monitored and water samples collected from discrete water-bearing fractures in each monitor well.
Using Activity-Related Behavioural Features towards More Effective Automatic Stress Detection
Giakoumis, Dimitris; Drosou, Anastasios; Cipresso, Pietro; Tzovaras, Dimitrios; Hassapis, George; Gaggioli, Andrea; Riva, Giuseppe
2012-01-01
This paper introduces activity-related behavioural features that can be automatically extracted from a computer system, with the aim to increase the effectiveness of automatic stress detection. The proposed features are based on processing of appropriate video and accelerometer recordings taken from the monitored subjects. For the purposes of the present study, an experiment was conducted that utilized a stress-induction protocol based on the stroop colour word test. Video, accelerometer and biosignal (Electrocardiogram and Galvanic Skin Response) recordings were collected from nineteen participants. Then, an explorative study was conducted by following a methodology mainly based on spatiotemporal descriptors (Motion History Images) that are extracted from video sequences. A large set of activity-related behavioural features, potentially useful for automatic stress detection, were proposed and examined. Experimental evaluation showed that several of these behavioural features significantly correlate to self-reported stress. Moreover, it was found that the use of the proposed features can significantly enhance the performance of typical automatic stress detection systems, commonly based on biosignal processing. PMID:23028461
Monitoring Coating Thickness During Plasma Spraying
NASA Technical Reports Server (NTRS)
Miller, Robert A.
1990-01-01
High-resolution video measures thickness accurately without interfering with process. Camera views cylindrical part through filter during plasma spraying. Lamp blacklights part, creating high-contrast silhouette on video monitor. Width analyzer counts number of lines in image of part after each pass of spray gun. Layer-by-layer measurements ensure adequate coat built up without danger of exceeding required thickness.
A risk-based coverage model for video surveillance camera control optimization
NASA Astrophysics Data System (ADS)
Zhang, Hongzhou; Du, Zhiguo; Zhao, Xingtao; Li, Peiyue; Li, Dehua
2015-12-01
Visual surveillance system for law enforcement or police case investigation is different from traditional application, for it is designed to monitor pedestrians, vehicles or potential accidents. Visual surveillance risk is defined as uncertainty of visual information of targets and events monitored in present work and risk entropy is introduced to modeling the requirement of police surveillance task on quality and quantity of vide information. the prosed coverage model is applied to calculate the preset FoV position of PTZ camera.
Facial Video-Based Photoplethysmography to Detect HRV at Rest.
Moreno, J; Ramos-Castro, J; Movellan, J; Parrado, E; Rodas, G; Capdevila, L
2015-06-01
Our aim is to demonstrate the usefulness of photoplethysmography (PPG) for analyzing heart rate variability (HRV) using a standard 5-min test at rest with paced breathing, comparing the results with real RR intervals and testing supine and sitting positions. Simultaneous recordings of R-R intervals were conducted with a Polar system and a non-contact PPG, based on facial video recording on 20 individuals. Data analysis and editing were performed with individually designated software for each instrument. Agreement on HRV parameters was assessed with concordance correlations, effect size from ANOVA and Bland and Altman plots. For supine position, differences between video and Polar systems showed a small effect size in most HRV parameters. For sitting position, these differences showed a moderate effect size in most HRV parameters. A new procedure, based on the pixels that contained more heart beat information, is proposed for improving the signal-to-noise ratio in the PPG video signal. Results were acceptable in both positions but better in the supine position. Our approach could be relevant for applications that require monitoring of stress or cardio-respiratory health, such as effort/recuperation states in sports. © Georg Thieme Verlag KG Stuttgart · New York.
System and process for detecting and monitoring surface defects
NASA Technical Reports Server (NTRS)
Mueller, Mark K. (Inventor)
1994-01-01
A system and process for detecting and monitoring defects in large surfaces such as the field joints of the container segments of a space shuttle booster motor. Beams of semi-collimated light from three non-parallel fiber optic light panels are directed at a region of the surface at non-normal angles of expected incidence. A video camera gathers some portion of the light that is reflected at an angle other than the angle of expected reflectance, and generates signals which are analyzed to discern defects in the surface. The analysis may be performed by visual inspection of an image on a video monitor, or by inspection of filtered or otherwise processed images. In one alternative embodiment, successive predetermined regions of the surface are aligned with the light source before illumination, thereby permitting efficient detection of defects in a large surface. Such alignment is performed by using a line scan gauge to sense the light which passes through an aperture in the surface. In another embodiment a digital map of the surface is created, thereby permitting the maintenance of records detailing changes in the location or size of defects as the container segment is refurbished and re-used. The defect detection apparatus may also be advantageously mounted on a fixture which engages the edge of a container segment.
Flashback Detection Sensor for Hydrogen Augmented Natural Gas Combustion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thornton, J.D.; Chorpening, B.T.; Sidwell, T.
2007-05-01
The use of hydrogen augmented fuel is being investigated by various researchers as a method to extend the lean operating limit, and potentially reduce thermal NOx formation in natural gas fired lean premixed (LPM) combustion systems. The resulting increase in flame speed during hydrogen augmentation, however, increases the propensity for flashback in LPM systems. Real-time in-situ monitoring of flashback is important for the development of control strategies for use of hydrogen augmented fuel in state-of-the-art combustion systems, and for the development of advanced hydrogen combustion systems. The National Energy Technology Laboratory (NETL) and Woodward Industrial Controls are developing a combustionmore » control and diagnostics sensor (CCADS), which has already been demonstrated as a useful sensor for in-situ monitoring of natural gas combustion, including detection of important combustion events such as flashback and lean blowoff. Since CCADS is a flame ionization sensor technique, the low ion concentration produced in pure hydrogen combustion raises concerns of whether CCADS can be used to monitor flashback in hydrogen augmented combustion. This paper discusses CCADS tests conducted at 0.2-0.6 MPa (2-6 atm), demonstrating flashback detection with fuel compositions up to 80% hydrogen (by volume) mixed with natural gas. NETL’s Simulation Validation (SimVal) combustor offers full optical access to pressurized combustion during these tests. The CCADS data and high-speed video show the reaction zone moves upstream into the nozzle as the hydrogen fuel concentration increases, as is expected with the increased flame speed of the mixture. The CCADS data and video also demonstrate the opportunity for using CCADS to provide the necessary in-situ monitor to control flashback and lean blowoff in hydrogen augmented combustion applications.« less
Automated detection of videotaped neonatal seizures based on motion segmentation methods.
Karayiannis, Nicolaos B; Tao, Guozhi; Frost, James D; Wise, Merrill S; Hrachovy, Richard A; Mizrahi, Eli M
2006-07-01
This study was aimed at the development of a seizure detection system by training neural networks using quantitative motion information extracted by motion segmentation methods from short video recordings of infants monitored for seizures. The motion of the infants' body parts was quantified by temporal motion strength signals extracted from video recordings by motion segmentation methods based on optical flow computation. The area of each frame occupied by the infants' moving body parts was segmented by direct thresholding, by clustering of the pixel velocities, and by clustering the motion parameters obtained by fitting an affine model to the pixel velocities. The computational tools and procedures developed for automated seizure detection were tested and evaluated on 240 short video segments selected and labeled by physicians from a set of video recordings of 54 patients exhibiting myoclonic seizures (80 segments), focal clonic seizures (80 segments), and random infant movements (80 segments). The experimental study described in this paper provided the basis for selecting the most effective strategy for training neural networks to detect neonatal seizures as well as the decision scheme used for interpreting the responses of the trained neural networks. Depending on the decision scheme used for interpreting the responses of the trained neural networks, the best neural networks exhibited sensitivity above 90% or specificity above 90%. The best among the motion segmentation methods developed in this study produced quantitative features that constitute a reliable basis for detecting myoclonic and focal clonic neonatal seizures. The performance targets of this phase of the project may be achieved by combining the quantitative features described in this paper with those obtained by analyzing motion trajectory signals produced by motion tracking methods. A video system based upon automated analysis potentially offers a number of advantages. Infants who are at risk for seizures could be monitored continuously using relatively inexpensive and non-invasive video techniques that supplement direct observation by nursery personnel. This would represent a major advance in seizure surveillance and offers the possibility for earlier identification of potential neurological problems and subsequent intervention.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, Peter N.; Rayton, Michael D.; Nass, Bryan L.
2007-06-01
The Confederated Tribes of the Colville Reservation (Colville Tribes) identified the need for collecting baseline census data on the timing and abundance of adult salmonids in the Okanogan River Basin in order to determine basin and tributary-specific spawner distributions, evaluate the status and trends of natural salmonid production in the basin, document local fish populations, and augment existing fishery data. This report documents the design, installation, operation and evaluation of mainstem and tributary video systems in the Okanogan River Basin. The species-specific data collected by these fish enumeration systems are presented along with an evaluation of the operation of amore » facility that provides a count of fish using an automated method. Information collected by the Colville Tribes Fish & Wildlife Department, specifically the Okanogan Basin Monitoring and Evaluation Program (OBMEP), is intended to provide a relative abundance indicator for anadromous fish runs migrating past Zosel Dam and is not intended as an absolute census count. Okanogan Basin Monitoring and Evaluation Program collected fish passage data between October 2005 and December 2006. Video counting stations were deployed and data were collected at two locations in the basin: on the mainstem Okanogan River at Zosel Dam near Oroville, Washington, and on Bonaparte Creek, a tributary to the Okanogan River, in the town of Tonasket, Washington. Counts at Zosel Dam between 10 October 2005 and 28 February 2006 are considered partial, pilot year data as they were obtained from the operation of a single video array on the west bank fishway, and covered only a portion of the steelhead migration. A complete description of the apparatus and methodology can be found in 'Fish Enumeration Using Underwater Video Imagery - Operational Protocol' (Nass 2007). At Zosel Dam, totals of 57 and 481 adult Chinook salmon were observed with the video monitoring system in 2005 and 2006, respectively. Run timing for Chinook in 2006 indicated that peak passage occurred in early October and daily peak passage was noted on 5 October when 52 fish passed the dam. Hourly passage estimates of Chinook salmon counts for 2005 and 2006 at Zosel Dam revealed a slight diel pattern as Chinook passage events tended to remain low from 1900 hours to 0600 hours relative to other hours of the day. Chinook salmon showed a slight preference for passing the dam through the video chutes on the east bank (52%) relative to the west bank (48%). A total of 48 adult sockeye salmon in 2005 and 19,245 in 2006 were counted passing through the video chutes at Zosel Dam. The 2006 run timing pattern was characterized by a large peak in passage from 3 August through 10 August when 17,698 fish (92% of total run observed for the year) were observed passing through the video chutes. The daily peak of 5,853 fish occurred on 4 August. Hourly passage estimates of sockeye salmon counts for 2005 and 2006 at the dam showed a strong diel pattern with increased passage during nighttime hours relative to daytime hours. Sockeye showed a strong preference for passing Zosel Dam on the east bank (72%) relative to the west bank (28%). A total of 298 adult upstream-migrating steelhead were counted at Zosel Dam in 2005 and 2006, representing the 2006 cohort based on passage data from 5 October 2005 through 15 July 2006. Eighty-seven percent (87%) of the total steelhead observed passed the dam between 23 March and 25 April with a peak passage occurring on 6 April when 31 fish were observed. Steelhead passage at Zosel Dam exhibited no diel pattern. In contrast to both Chinook and sockeye salmon, steelhead were shown to have a preference for passing the dam on the west bank (71%) relative to the east bank (29%). Both Chinook and sockeye passage at Zosel Dam were influenced by Okanogan River water temperature. When water temperatures peaked in late July (daily mean exceeded 24 C and daily maximum exceeded 26.5 C), Chinook and sockeye counts went to zero. A subsequent decrease in water temperature resulted in sharp increases in both Chinook and sockeye passage. A total of six steelhead were observed with the video monitoring system at Bonaparte Creek in 2006, with three passage events occurring on 29 March and one each on 20, 21, and 23 April. This system was operational for only a portion of the migration.« less
Wireless augmented reality communication system
NASA Technical Reports Server (NTRS)
Devereaux, Ann (Inventor); Agan, Martin (Inventor); Jedrey, Thomas (Inventor)
2006-01-01
The system of the present invention is a highly integrated radio communication system with a multimedia co-processor which allows true two-way multimedia (video, audio, data) access as well as real-time biomedical monitoring in a pager-sized portable access unit. The system is integrated in a network structure including one or more general purpose nodes for providing a wireless-to-wired interface. The network architecture allows video, audio and data (including biomedical data) streams to be connected directly to external users and devices. The portable access units may also be mated to various non-personal devices such as cameras or environmental sensors for providing a method for setting up wireless sensor nets from which reported data may be accessed through the portable access unit. The reported data may alternatively be automatically logged at a remote computer for access and viewing through a portable access unit, including the user's own.
Wireless Augmented Reality Communication System
NASA Technical Reports Server (NTRS)
Jedrey, Thomas (Inventor); Agan, Martin (Inventor); Devereaux, Ann (Inventor)
2014-01-01
The system of the present invention is a highly integrated radio communication system with a multimedia co-processor which allows true two-way multimedia (video, audio, data) access as well as real-time biomedical monitoring in a pager-sized portable access unit. The system is integrated in a network structure including one or more general purpose nodes for providing a wireless-to-wired interface. The network architecture allows video, audio and data (including biomedical data) streams to be connected directly to external users and devices. The portable access units may also be mated to various non-personal devices such as cameras or environmental sensors for providing a method for setting up wireless sensor nets from which reported data may be accessed through the portable access unit. The reported data may alternatively be automatically logged at a remote computer for access and viewing through a portable access unit, including the user's own.
Wireless Augmented Reality Communication System
NASA Technical Reports Server (NTRS)
Agan, Martin (Inventor); Devereaux, Ann (Inventor); Jedrey, Thomas (Inventor)
2016-01-01
The system of the present invention is a highly integrated radio communication system with a multimedia co-processor which allows true two-way multimedia (video, audio, data) access as well as real-time biomedical monitoring in a pager-sized portable access unit. The system is integrated in a network structure including one or more general purpose nodes for providing a wireless-to-wired interface. The network architecture allows video, audio and data (including biomedical data) streams to be connected directly to external users and devices. The portable access units may also be mated to various non-personal devices such as cameras or environmental sensors for providing a method for setting up wireless sensor nets from which reported data may be accessed through the portable access unit. The reported data may alternatively be automatically logged at a remote computer for access and viewing through a portable access unit, including the user's own.
Field-Sequential Color Converter
NASA Technical Reports Server (NTRS)
Studer, Victor J.
1989-01-01
Electronic conversion circuit enables display of signals from field-sequential color-television camera on color video camera. Designed for incorporation into color-television monitor on Space Shuttle, circuit weighs less, takes up less space, and consumes less power than previous conversion equipment. Incorporates state-of-art memory devices, also used in terrestrial stationary or portable closed-circuit television systems.
Development of camera technology for monitoring nests. Chapter 15
W. Andrew Cox; M. Shane Pruett; Thomas J. Benson; Scott J. Chiavacci; Frank R., III Thompson
2012-01-01
Photo and video technology has become increasingly useful in the study of avian nesting ecology. However, researchers interested in using camera systems are often faced with insufficient information on the types and relative advantages of available technologies. We reviewed the literature for studies of nests that used cameras and summarized them based on study...
Spatial-temporal distortion metric for in-service quality monitoring of any digital video system
NASA Astrophysics Data System (ADS)
Wolf, Stephen; Pinson, Margaret H.
1999-11-01
Many organizations have focused on developing digital video quality metrics which produce results that accurately emulate subjective responses. However, to be widely applicable a metric must also work over a wide range of quality, and be useful for in-service quality monitoring. The Institute for Telecommunication Sciences (ITS) has developed spatial-temporal distortion metrics that meet all of these requirements. These objective metrics are described in detail and have a number of interesting properties, including utilization of (1) spatial activity filters which emphasize long edges on the order of 10 arc min while simultaneously performing large amounts of noise suppression, (2) the angular direction of the spatial gradient, (3) spatial-temporal compression factors of at least 384:1 (spatial compression of at least 64:1 and temporal compression of at least 6:1, and 4) simple perceptibility thresholds and spatial-temporal masking functions. Results are presented that compare the objective metric values with mean opinion scores from a wide range of subjective data bases spanning many different scenes, systems, bit-rates, and applications.
Head-mounted display for use in functional endoscopic sinus surgery
NASA Astrophysics Data System (ADS)
Wong, Brian J.; Lee, Jon P.; Dugan, F. Markoe; MacArthur, Carol J.
1995-05-01
Since the introduction of functional endoscopic sinus surgery (FESS), the procedure has undergone rapid change with evolution keeping pace with technological advances. The advent of low cost charge coupled device 9CCD) cameras revolutionized the practice and instruction of FESS. Video-based FESS has allowed for documentation of the surgical procedure as well as interactive instruction during surgery. Presently, the technical requirements of video-based FESS include the addition of one or more television monitors positioned strategically in the operating room. Thought video monitors have greatly enhanced surgical endoscopy by re- involving nurses and assistants in the actual mechanics of surgery, video monitors require the operating surgeon to be focused on the screen instead of the patient. In this study, we describe the use of a new low-cost liquid crystal display (LCD) based device that functions as a monitor but is mounted on the head on a visor (PT-O1, O1 Products, Westlake Village, CA). This study illustrates the application of these HMD devices to FESS operations. The same surgeon performed the operation in each patient. In one nasal fossa, surgery was performed using conventional video FESS methods. The contralateral side was operated on while wearing the head mounted video display. The device had adequate resolution for the purposes of FESS. No adverse effects were noted intraoperatively. The results on the patients ipsalateral and contralateral sides were similar. The visor did eliminated significant torsion of the surgeon's neck during the operation, while at the same time permitted simultaneous viewing of both the patient and the intranasal surgical field.
Αutomated 2D shoreline detection from coastal video imagery: an example from the island of Crete
NASA Astrophysics Data System (ADS)
Velegrakis, A. F.; Trygonis, V.; Vousdoukas, M. I.; Ghionis, G.; Chatzipavlis, A.; Andreadis, O.; Psarros, F.; Hasiotis, Th.
2015-06-01
Beaches are both sensitive and critical coastal system components as they: (i) are vulnerable to coastal erosion (due to e.g. wave regime changes and the short- and long-term sea level rise) and (ii) form valuable ecosystems and economic resources. In order to identify/understand the current and future beach morphodynamics, effective monitoring of the beach spatial characteristics (e.g. the shoreline position) at adequate spatio-temporal resolutions is required. In this contribution we present the results of a new, fully-automated detection method of the (2-D) shoreline positions using high resolution video imaging from a Greek island beach (Ammoudara, Crete). A fully-automated feature detection method was developed/used to monitor the shoreline position in geo-rectified coastal imagery obtained through a video system set to collect 10 min videos every daylight hour with a sampling rate of 5 Hz, from which snapshot, time-averaged (TIMEX) and variance images (SIGMA) were generated. The developed coastal feature detector is based on a very fast algorithm using a localised kernel that progressively grows along the SIGMA or TIMEX digital image, following the maximum backscatter intensity along the feature of interest; the detector results were found to compare very well with those obtained from a semi-automated `manual' shoreline detection procedure. The automated procedure was tested on video imagery obtained from the eastern part of Ammoudara beach in two 5-day periods, a low wave energy period (6-10 April 2014) and a high wave energy period (1 -5 November 2014). The results showed that, during the high wave energy event, there have been much higher levels of shoreline variance which, however, appeared to be similarly unevenly distributed along the shoreline as that related to the low wave energy event, Shoreline variance `hot spots' were found to be related to the presence/architecture of an offshore submerged shallow beachrock reef, found at a distance of 50-80 m from the shoreline. Hydrodynamic observations during the high wave energy period showed (a) that there is very significant wave energy attenuation by the offshore reef and (b) the generation of significant longshore and rip flows. The study results suggest that the developed methodology can provide a fast, powerful and efficient beach monitoring tool, particularly if combined with pertinent hydrodynamic observations.
A Smartphone-Based Driver Safety Monitoring System Using Data Fusion
Lee, Boon-Giin; Chung, Wan-Young
2012-01-01
This paper proposes a method for monitoring driver safety levels using a data fusion approach based on several discrete data types: eye features, bio-signal variation, in-vehicle temperature, and vehicle speed. The driver safety monitoring system was developed in practice in the form of an application for an Android-based smartphone device, where measuring safety-related data requires no extra monetary expenditure or equipment. Moreover, the system provides high resolution and flexibility. The safety monitoring process involves the fusion of attributes gathered from different sensors, including video, electrocardiography, photoplethysmography, temperature, and a three-axis accelerometer, that are assigned as input variables to an inference analysis framework. A Fuzzy Bayesian framework is designed to indicate the driver’s capability level and is updated continuously in real-time. The sensory data are transmitted via Bluetooth communication to the smartphone device. A fake incoming call warning service alerts the driver if his or her safety level is suspiciously compromised. Realistic testing of the system demonstrates the practical benefits of multiple features and their fusion in providing a more authentic and effective driver safety monitoring. PMID:23247416
Exploiting semantics for sensor re-calibration in event detection systems
NASA Astrophysics Data System (ADS)
Vaisenberg, Ronen; Ji, Shengyue; Hore, Bijit; Mehrotra, Sharad; Venkatasubramanian, Nalini
2008-01-01
Event detection from a video stream is becoming an important and challenging task in surveillance and sentient systems. While computer vision has been extensively studied to solve different kinds of detection problems over time, it is still a hard problem and even in a controlled environment only simple events can be detected with a high degree of accuracy. Instead of struggling to improve event detection using image processing only, we bring in semantics to direct traditional image processing. Semantics are the underlying facts that hide beneath video frames, which can not be "seen" directly by image processing. In this work we demonstrate that time sequence semantics can be exploited to guide unsupervised re-calibration of the event detection system. We present an instantiation of our ideas by using an appliance as an example--Coffee Pot level detection based on video data--to show that semantics can guide the re-calibration of the detection model. This work exploits time sequence semantics to detect when re-calibration is required to automatically relearn a new detection model for the newly evolved system state and to resume monitoring with a higher rate of accuracy.
Lim, Tae Ho; Choi, Hyuk Joong; Kang, Bo Seung
2010-01-01
We assessed the feasibility of using a camcorder mobile phone for teleconsulting about cardiac echocardiography. The diagnostic performance of evaluating left ventricle (LV) systolic function was measured by three emergency medicine physicians. A total of 138 short echocardiography video sequences (from 70 subjects) was selected from previous emergency room ultrasound examinations. The measurement of LV ejection fraction based on the transmitted video displayed on a mobile phone was compared with the original video displayed on the LCD monitor of the ultrasound machine. The image quality was evaluated using the double stimulation impairment scale (DSIS). All observers showed high sensitivity. There was an improvement in specificity with the observer's increasing experience of cardiac ultrasound. Although the image quality of video on the mobile phone was lower than that of the original, a receiver operating characteristic (ROC) analysis indicated that there was no significant difference in diagnostic performance. Immediate basic teleconsulting of echocardiography movies is possible using current commercially-available mobile phone systems.
Yellow River Icicle Hazard Dynamic Monitoring Using UAV Aerial Remote Sensing Technology
NASA Astrophysics Data System (ADS)
Wang, H. B.; Wang, G. H.; Tang, X. M.; Li, C. H.
2014-02-01
Monitoring the response of Yellow River icicle hazard change requires accurate and repeatable topographic surveys. A new method based on unmanned aerial vehicle (UAV) aerial remote sensing technology is proposed for real-time data processing in Yellow River icicle hazard dynamic monitoring. The monitoring area is located in the Yellow River ice intensive care area in southern BaoTou of Inner Mongolia autonomous region. Monitoring time is from the 20th February to 30th March in 2013. Using the proposed video data processing method, automatic extraction covering area of 7.8 km2 of video key frame image 1832 frames took 34.786 seconds. The stitching and correcting time was 122.34 seconds and the accuracy was better than 0.5 m. Through the comparison of precise processing of sequence video stitching image, the method determines the change of the Yellow River ice and locates accurate positioning of ice bar, improving the traditional visual method by more than 100 times. The results provide accurate aid decision information for the Yellow River ice prevention headquarters. Finally, the effect of dam break is repeatedly monitored and ice break five meter accuracy is calculated through accurate monitoring and evaluation analysis.
ERIC Educational Resources Information Center
Pratt, Sharon M.; Martin, Anita M.
2017-01-01
This pilot study explored two methods of eliciting beginning readers' verbalizations of their thinking when self-monitoring oral reading: video-stimulated recall and concurrent questioning. First and second graders (N = 11) were asked to explain their thinking about repetitions, attempts to self-correct, and successful self-corrects, in order to…
Advanced Video Technology for Safe and Efficient Surgical Operating Rooms
2005-03-01
should be easy to integrate into the system by non-technical personnel. " Disruptive Technologies - Such technologies can have both positive and negative...integrate new, emerging, and otherwise " disruptive technologies ." " Medical Manufacturer Markups - In some cases, potential vendor pricing of...POSITIVE disruptive technologies as they would, in some cases, eliminate the need for monitor screens. NETWORK BANDWIDTH j% J a The System must be able to
A Communication Architecture for an Advanced Extravehicular Mobile Unit
NASA Technical Reports Server (NTRS)
Ivancic, William D.; Sands, Obed S.; Bakula, Casey J.; Oldham, Daniel R.; Wright, Ted; Bradish, Martin A.; Klebau, Joseph M.
2014-01-01
This document describes the communication architecture for the Power, Avionics and Software (PAS) 1.0 subsystem for the Advanced Extravehicular Mobility Unit (AEMU). The following systems are described in detail: Caution Warning and Control System, Informatics, Storage, Video, Audio, Communication, and Monitoring Test and Validation. This document also provides some background as well as the purpose and goals of the PAS subsystem being developed at Glenn Research Center (GRC).
Viewing Welds By Computer Tomography
NASA Technical Reports Server (NTRS)
Pascua, Antonio G.; Roy, Jagatjit
1990-01-01
Computer tomography system used to inspect welds for root penetration. Source illuminates rotating welded part with fan-shaped beam of x rays or gamma rays. Detectors in circular array on opposite side of part intercept beam and convert it into electrical signals. Computer processes signals into image of cross section of weld. Image displayed on video monitor. System offers only nondestructive way to check penetration from outside when inner surfaces inaccessible.
NASA Astrophysics Data System (ADS)
Betz, Jessie M. Bethly
1993-12-01
The Video Distribution Subsystem (VDS) for Space Station Freedom provides onboard video communications. The VDS includes three major functions: external video switching; internal video switching; and sync and control generation. The Video Subsystem Routing (VSR) is a part of the VDS Manager Computer Software Configuration Item (VSM/CSCI). The VSM/CSCI is the software which controls and monitors the VDS equipment. VSR activates, terminates, and modifies video services in response to Tier-1 commands to connect video sources to video destinations. VSR selects connection paths based on availability of resources and updates the video routing lookup tables. This project involves investigating the current methodology to automate the Video Subsystem Routing and developing and testing a prototype as 'proof of concept' for designers.
Tsutsumi, Masae; Nogaki, Hiroshi; Shimizu, Yoshihisa; Stone, Teresa Elizabeth; Kobayashi, Toshio
2017-01-01
Globally, awareness of the vital link between health and the natural environment is growing. This pilot study, based on the idea of "forest bathing," or shinrin-yoku, the mindful use of all five senses to engage with nature in a natural environment, was initiated in order to determine whether stimulation by viewing an individual's preferred video of sea or forest had an effect on relaxation. The participants were 12 healthy men in their twenties and they were divided into two groups based on their preference for sea or forest scenery by using the Visual Analogue Scale. The participants watched 90 min DVDs of sea with natural sounds and forest with natural sounds while their heart rate variability and Bispectral Index System value were measured by using MemCalc/Tawara and a Bispectral Index System monitor. The participants were divided into two groups of six based on their preference for sea or forest scenery and each indicator was compared between them. Significant differences in a decrease in heart rate, increase in high frequency, and sustained arousal level were observed while viewing the preferred video. These results indicated that the viewing individual's preferred video of sea or forest had a relaxation effect. This study suggests that individual preferences should be taken into consideration for video relaxation therapy. © 2016 Japan Academy of Nursing Science.
Design and implementation of a remote UAV-based mobile health monitoring system
NASA Astrophysics Data System (ADS)
Li, Songwei; Wan, Yan; Fu, Shengli; Liu, Mushuang; Wu, H. Felix
2017-04-01
Unmanned aerial vehicles (UAVs) play increasing roles in structure health monitoring. With growing mobility in modern Internet-of-Things (IoT) applications, the health monitoring of mobile structures becomes an emerging application. In this paper, we develop a UAV-carried vision-based monitoring system that allows a UAV to continuously track and monitor a mobile infrastructure and transmit back the monitoring information in real- time from a remote location. The monitoring system uses a simple UAV-mounted camera and requires only a single feature located on the mobile infrastructure for target detection and tracking. The computation-effective vision-based tracking solution based on a single feature is an improvement over existing vision-based lead-follower tracking systems that either have poor tracking performance due to the use of a single feature, or have improved tracking performance at a cost of the usage of multiple features. In addition, a UAV-carried aerial networking infrastructure using directional antennas is used to enable robust real-time transmission of monitoring video streams over a long distance. Automatic heading control is used to self-align headings of directional antennas to enable robust communication in mobility. Compared to existing omni-communication systems, the directional communication solution significantly increases the operation range of remote monitoring systems. In this paper, we develop the integrated modeling framework of camera and mobile platforms, design the tracking algorithm, develop a testbed of UAVs and mobile platforms, and evaluate system performance through both simulation studies and field tests.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adams, Evan; Goodale, Wing; Burns, Steve
There is a critical need to develop monitoring tools to track aerofauna (birds and bats) in three dimensions around wind turbines. New monitoring systems will reduce permitting uncertainty by increasing the understanding of how birds and bats are interacting with wind turbines, which will improve the accuracy of impact predictions. Biodiversity Research Institute (BRI), The University of Maine Orono School of Computing and Information Science (UMaine SCIS), HiDef Aerial Surveying Limited (HiDef), and SunEdison, Inc. (formerly First Wind) responded to this need by using stereo-optic cameras with near-infrared (nIR) technology to investigate new methods for documenting aerofauna behavior around windmore » turbines. The stereo-optic camera system used two synchronized high-definition video cameras with fisheye lenses and processing software that detected moving objects, which could be identified in post-processing. The stereo- optic imaging system offered the ability to extract 3-D position information from pairs of images captured from different viewpoints. Fisheye lenses allowed for a greater field of view, but required more complex image rectification to contend with fisheye distortion. The ability to obtain 3-D positions provided crucial data on the trajectory (speed and direction) of a target, which, when the technology is fully developed, will provide data on how animals are responding to and interacting with wind turbines. This project was focused on testing the performance of the camera system, improving video review processing time, advancing the 3-D tracking technology, and moving the system from Technology Readiness Level 4 to 5. To achieve these objectives, we determined the size and distance at which aerofauna (particularly eagles) could be detected and identified, created efficient data management systems, improved the video post-processing viewer, and attempted refinement of 3-D modeling with respect to fisheye lenses. The 29-megapixel camera system successfully captured 16,173 five-minute video segments in the field. During nighttime field trials using nIR, we found that bat-sized objects could not be detected more than 60 m from the camera system. This led to a decision to focus research efforts exclusively on daytime monitoring and to redirect resources towards improving the video post- processing viewer. We redesigned the bird event post-processing viewer, which substantially decreased the review time necessary to detect and identify flying objects. During daytime field trials, we determine that eagles could be detected up to 500 m away using the fisheye wide-angle lenses, and eagle-sized targets could be identified to species within 350 m of the camera system. We used distance sampling survey methods to describe the probability of detecting and identifying eagles and other aerofauna as a function of distance from the system. The previously developed 3-D algorithm for object isolation and tracking was tested, but the image rectification (flattening) required to obtain accurate distance measurements with fish-eye lenses was determined to be insufficient for distant eagles. We used MATLAB and OpenCV to improve fisheye lens rectification towards the center of the image, but accurate measurements towards the image corners could not be achieved. We believe that changing the fisheye lens to rectilinear lens would greatly improve position estimation, but doing so would result in a decrease in viewing angle and depth of field. Finally, we generated simplified shape profiles of birds to look for similarities between unknown animals and known species. With further development, this method could provide a mechanism for filtering large numbers of shapes to reduce data storage and processing. These advancements further refined the camera system and brought this new technology closer to market. Once commercialized, the stereo-optic camera system technology could be used to: a) research how different species interact with wind turbines in order to refine collision risk models and inform mitigation solutions; and b) monitor aerofauna interactions with terrestrial and offshore wind farms replacing costly human observers and allowing for long-term monitoring in the offshore environment. The camera system will provide developers and regulators with data on the risk that wind turbines present to aerofauna, which will reduce uncertainty in the environmental permitting process.« less
Video Feedback in the Classroom: Development of an Easy-to-Use Learning Environment
ERIC Educational Resources Information Center
De Poorter, John; De Jaegher, Lut; De Cock, Mieke; Neuttiens, Tom
2007-01-01
Video feedback offers great potential for use in teaching but the relative complexity of the normal set-up of a video camera, a special tripod and a monitor has limited its use in teaching. The authors have developed a computer-webcam set-up which simplifies this. Anyone with an ordinary computer and webcam can learn to control the video feedback…
Interactive Video and Informal Learning Environments.
ERIC Educational Resources Information Center
Morrissey, Kristine A.
The Michigan State University Museum used an interactive videodisc (IVD) as an introduction to a special exhibit, "Birds in Trouble in Michigan." The hardware components included a videodisc player, a microcomputer, a video monitor, and a mouse. Software included a HyperCard program and the videodisc "Audubon Society's VideoGuide to…
50 CFR 216.155 - Requirements for monitoring and reporting.
Code of Federal Regulations, 2010 CFR
2010-10-01
... place 3 autonomous digital video cameras overlooking chosen haul-out sites located varying distances from the missile launch site. Each video camera will be set to record a focal subgroup within the... presence and activity will be conducted and recorded in a field logbook or recorded on digital video for...
Written, Produced and Directed....by You.
ERIC Educational Resources Information Center
Underwood, Rachel A.
Home economics teachers comprise the newest group of professionals to become movie producers and directors. They are using video equipment--the video camera, monitor, and recorder. Advantages of video equipment for classroom use are affordable prices, tapes that can be reused, and student enjoyment of teacher-made tapes. Home economics content is…
NASA Astrophysics Data System (ADS)
Krapukhina, Nina; Senchenko, Roman; Kamenov, Nikolay
2017-12-01
Road safety and driving in dense traffic flows poses some challenges in receiving information about surrounding moving object, some of which can be in the vehicle's blind spot. This work suggests an approach to virtual monitoring of the objects in a current road scene via a system with a multitude of cooperating smart vehicles exchanging information. It also describes the intellectual agent model, and provides methods and algorithms of identifying and evaluating various characteristics of moving objects in video flow. Authors also suggest ways for integrating the information from the technical vision system into the model with further expansion of virtual monitoring for the system's objects. Implementation of this approach can help to expand the virtual field of view for a technical vision system.
Use of an UROV to develop 3-D optical models of submarine environments
NASA Astrophysics Data System (ADS)
Null, W. D.; Landry, B. J.
2017-12-01
The ability to rapidly obtain high-fidelity bathymetry is crucial for a broad range of engineering, scientific, and defense applications ranging from bridge scour, bedform morphodynamics, and coral reef health to unexploded ordnance detection and monitoring. The present work introduces the use of an Underwater Remotely Operated Vehicle (UROV) to develop 3-D optical models of submarine environments. The UROV used a Raspberry Pi camera mounted to a small servo which allowed for pitch control. Prior to video data collection, in situ camera calibration was conducted with the system. Multiple image frames were extracted from the underwater video for 3D reconstruction using Structure from Motion (SFM). This system provides a simple and cost effective solution to obtaining detailed bathymetry in optically clear submarine environments.
Simulation and Real-Time Verification of Video Algorithms on the TI C6400 Using Simulink
2004-08-20
SPONSOR/MONITOR’S ACRONYM(S) 11. SPONSOR/MONITOR’S REPORT NUMBER(S) 12 . DISTRIBUTION/AVAILABILITY STATEMENT Approved for public release...plot estimates over time (scrolling data) Adjust detection threshold (click mouse on graph) Monitor video capture Input video frames Captured frames 12 ...Video App: Surveillance Recording 1 2 7 3 4 9 5 6 11 SL for video Explanation of GUI 12 Target Options8 Build Process 10 13 14 15 16 M-code snippet
Normalized Metadata Generation for Human Retrieval Using Multiple Video Surveillance Cameras.
Jung, Jaehoon; Yoon, Inhye; Lee, Seungwon; Paik, Joonki
2016-06-24
Since it is impossible for surveillance personnel to keep monitoring videos from a multiple camera-based surveillance system, an efficient technique is needed to help recognize important situations by retrieving the metadata of an object-of-interest. In a multiple camera-based surveillance system, an object detected in a camera has a different shape in another camera, which is a critical issue of wide-range, real-time surveillance systems. In order to address the problem, this paper presents an object retrieval method by extracting the normalized metadata of an object-of-interest from multiple, heterogeneous cameras. The proposed metadata generation algorithm consists of three steps: (i) generation of a three-dimensional (3D) human model; (ii) human object-based automatic scene calibration; and (iii) metadata generation. More specifically, an appropriately-generated 3D human model provides the foot-to-head direction information that is used as the input of the automatic calibration of each camera. The normalized object information is used to retrieve an object-of-interest in a wide-range, multiple-camera surveillance system in the form of metadata. Experimental results show that the 3D human model matches the ground truth, and automatic calibration-based normalization of metadata enables a successful retrieval and tracking of a human object in the multiple-camera video surveillance system.
Normalized Metadata Generation for Human Retrieval Using Multiple Video Surveillance Cameras
Jung, Jaehoon; Yoon, Inhye; Lee, Seungwon; Paik, Joonki
2016-01-01
Since it is impossible for surveillance personnel to keep monitoring videos from a multiple camera-based surveillance system, an efficient technique is needed to help recognize important situations by retrieving the metadata of an object-of-interest. In a multiple camera-based surveillance system, an object detected in a camera has a different shape in another camera, which is a critical issue of wide-range, real-time surveillance systems. In order to address the problem, this paper presents an object retrieval method by extracting the normalized metadata of an object-of-interest from multiple, heterogeneous cameras. The proposed metadata generation algorithm consists of three steps: (i) generation of a three-dimensional (3D) human model; (ii) human object-based automatic scene calibration; and (iii) metadata generation. More specifically, an appropriately-generated 3D human model provides the foot-to-head direction information that is used as the input of the automatic calibration of each camera. The normalized object information is used to retrieve an object-of-interest in a wide-range, multiple-camera surveillance system in the form of metadata. Experimental results show that the 3D human model matches the ground truth, and automatic calibration-based normalization of metadata enables a successful retrieval and tracking of a human object in the multiple-camera video surveillance system. PMID:27347961
Duckneglect: video-games based neglect rehabilitation.
Mainetti, R; Sedda, A; Ronchetti, M; Bottini, G; Borghese, N A
2013-01-01
Video-games are becoming a common tool to guide patients through rehabilitation because of their power of motivating and engaging their users. Video-games may also be integrated into an infrastructure that allows patients, discharged from the hospital, to continue intensive rehabilitation at home under remote monitoring by the hospital itself, as suggested by the recently funded Rewire project. Goal of this work is to describe a novel low cost platform, based on video-games, targeted to neglect rehabilitation. The patient is guided to explore his neglected hemispace by a set of specifically designed games that ask him to reach targets, with an increasing level of difficulties. Visual and auditory cues helped the patient in the task and are progressively removed. A controlled randomization of scenarios, targets and distractors, a balanced reward system and music played in the background, all contribute to make rehabilitation more attractive, thus enabling intensive prolonged treatment. Results from our first patient, who underwent rehabilitation for half an hour, for five days a week for one month, showed on one side a very positive attitude of the patient towards the platform for the whole period, on the other side a significant improvement was obtained. Importantly, this amelioration was confirmed at a follow up evaluation five months after the last rehabilitation session and generalized to everyday life activities. Such a system could well be integrated into a home based rehabilitation system.
Real-time people counting system using a single video camera
NASA Astrophysics Data System (ADS)
Lefloch, Damien; Cheikh, Faouzi A.; Hardeberg, Jon Y.; Gouton, Pierre; Picot-Clemente, Romain
2008-02-01
There is growing interest in video-based solutions for people monitoring and counting in business and security applications. Compared to classic sensor-based solutions the video-based ones allow for more versatile functionalities, improved performance with lower costs. In this paper, we propose a real-time system for people counting based on single low-end non-calibrated video camera. The two main challenges addressed in this paper are: robust estimation of the scene background and the number of real persons in merge-split scenarios. The latter is likely to occur whenever multiple persons move closely, e.g. in shopping centers. Several persons may be considered to be a single person by automatic segmentation algorithms, due to occlusions or shadows, leading to under-counting. Therefore, to account for noises, illumination and static objects changes, a background substraction is performed using an adaptive background model (updated over time based on motion information) and automatic thresholding. Furthermore, post-processing of the segmentation results is performed, in the HSV color space, to remove shadows. Moving objects are tracked using an adaptive Kalman filter, allowing a robust estimation of the objects future positions even under heavy occlusion. The system is implemented in Matlab, and gives encouraging results even at high frame rates. Experimental results obtained based on the PETS2006 datasets are presented at the end of the paper.
Power Monitoring Using the Raspberry Pi
ERIC Educational Resources Information Center
Snyder, Robin M.
2014-01-01
The Raspberry Pi is a credit card size low powered compute board with Ethernet connection, HDMI video output, audio, full Linux operating system run from an SD card, and more, all for $45. With cables, SD card, etc., the cost is about $70. Originally designed to help teach computer science principles to low income children and students, the Pi has…
National Guard Counterdrug Programs
2001-02-14
comparisons to locate indoor Marijuana grows, outdoor infrastructure - Monitor activity at known sites - Meth labs, stash houses, marijuana grows - Real...Identifies key signatures of structures for indoor growth of cannabis - Vehiclelvessel surveillance * Video capabilities for evidence e Global Positioning...System Navigational Equipment - Identify marijuana locations for ground recovery Contact Information Voice (703) 607-5665 DSN Voice 327-5665 FAX (703
Federal Register 2010, 2011, 2012, 2013, 2014
2010-12-21
...] through Web CRD [System], by a Member immediately following the date of termination, but in no event later...) above, in the event that the Member learns of facts or circumstances causing any information set forth... through a window or by video monitor. The individual responsible for proctoring at each administration...
NASA Technical Reports Server (NTRS)
Larsen, D. Gail; Schwieder, Paul R.
1993-01-01
Network video conferencing is advancing rapidly throughout the nation, and the Idaho National Engineering Laboratory (INEL), a Department of Energy (DOE) facility, is at the forefront of the development. Engineers at INEL/EG&G designed and installed a very unique DOE videoconferencing system, offering many outstanding features, that include true multipoint conferencing, user-friendly design and operation with no full-time operators required, and the potential for cost effective expansion of the system. One area where INEL/EG&G engineers made a significant contribution to video conferencing was in the development of effective, user-friendly, end station driven scheduling software. A PC at each user site is used to schedule conferences via a windows package. This software interface provides information to the users concerning conference availability, scheduling, initiation, and termination. The menus are 'mouse' controlled. Once a conference is scheduled, a workstation at the hubs monitors the network to initiate all scheduled conferences. No active operator participation is required once a user schedules a conference through the local PC; the workstation automatically initiates and terminates the conference as scheduled. As each conference is scheduled, hard copy notification is also printed at each participating site. Video conferencing is the wave of the future. The use of these user-friendly systems will save millions in lost productivity and travel cost throughout the nation. The ease of operation and conference scheduling will play a key role on the extent industry uses this new technology. The INEL/EG&G has developed a prototype scheduling system for both commercial and federal government use.
NASA Astrophysics Data System (ADS)
Larsen, D. Gail; Schwieder, Paul R.
1993-02-01
Network video conferencing is advancing rapidly throughout the nation, and the Idaho National Engineering Laboratory (INEL), a Department of Energy (DOE) facility, is at the forefront of the development. Engineers at INEL/EG&G designed and installed a very unique DOE videoconferencing system, offering many outstanding features, that include true multipoint conferencing, user-friendly design and operation with no full-time operators required, and the potential for cost effective expansion of the system. One area where INEL/EG&G engineers made a significant contribution to video conferencing was in the development of effective, user-friendly, end station driven scheduling software. A PC at each user site is used to schedule conferences via a windows package. This software interface provides information to the users concerning conference availability, scheduling, initiation, and termination. The menus are 'mouse' controlled. Once a conference is scheduled, a workstation at the hubs monitors the network to initiate all scheduled conferences. No active operator participation is required once a user schedules a conference through the local PC; the workstation automatically initiates and terminates the conference as scheduled. As each conference is scheduled, hard copy notification is also printed at each participating site. Video conferencing is the wave of the future. The use of these user-friendly systems will save millions in lost productivity and travel cost throughout the nation. The ease of operation and conference scheduling will play a key role on the extent industry uses this new technology. The INEL/EG&G has developed a prototype scheduling system for both commercial and federal government use.
Hoffmann, G; Schmidt, M; Ammon, C
2016-09-01
In this study, a video-based infrared camera (IRC) was investigated as a tool to monitor the body temperature of calves. Body surface temperatures were measured contactless using videos from an IRC fixed at a certain location in the calf feeder. The body surface temperatures were analysed retrospectively at three larger areas: the head area (in front of the forehead), the body area (behind forehead) and the area of the entire animal. The rectal temperature served as a reference temperature and was measured with a digital thermometer at the corresponding time point. A total of nine calves (Holstein-Friesians, 8 to 35 weeks old) were examined. The average maximum temperatures of the area of the entire animal (mean±SD: 37.66±0.90°C) and the head area (37.64±0.86°C) were always higher than that of the body area (36.75±1.06°C). The temperatures of the head area and of the entire animal were very similar. However, the maximum temperatures as measured using IRC increased with an increase in calf rectal temperature. The maximum temperatures of each video picture for the entire visible body area of the calves appeared to be sufficient to measure the superficial body temperature. The advantage of the video-based IRC over conventional IR single-picture cameras is that more than one picture per animal can be analysed in a short period of time. This technique provides more data for analysis. Thus, this system shows potential as an indicator for continuous temperature measurements in calves.
1995-11-01
Surgical video systems (SVSs), which typically consist of a video camera attached to an optical endoscope, a video processor, a light source, and a video monitor, are now being used to perform a significant number of minimally invasive surgical procedures. SVSs offer several advantages (e.g., multiple viewer visualization of the surgical site, increased clinician comfort) over nonvideo systems and have increased the practicality and convenience of minimally invasive surgery (MIS). Currently, SVSs are used by hospitals in their general, obstetric/gynecologic, orthopedic, thoracic, and urologic procedures, as well as in other specialties for which MIS is feasible. In this study, we evaluated 19 SVSs from 10 manufacturers, focusing on their use in laparoscopic applications in general surgery. We based our ratings on the usefulness of each system's video performance and features in helping clinicians provide safe and efficacious laparoscopic surgery. We rated 18 of the systems Acceptable because of their overall good performance and features. We rated 1 system Conditionally Acceptable because, compared with the other evaluated systems, this SVS presents a greater risk of thermal injury resulting from excessive heating at the distal tip of the laparoscope. Readers should be aware that our test results, conclusions, and ratings apply only to the specific systems and components tested in this Evaluation. In addition, although our discussion focuses on the laparoscopic application of SVSs, much of the information in this study also applies to other MIS applications, and the evaluated devices can be used in a variety of surgical procedures. To help hospitals gain the perspectives necessary to assess the appropriateness of specific SVSs to ensure that the needs of their patients, as well as the expectations of their clinicians, will be satisfied, we have included a Selection and Purchasing Guide that can be used as a supplement to our Evaluation findings. We have also included a Glossary of relevant terminology and the supplementary article, "Fiberoptic Illumination Systems and the Risk of Burns or Fire during Endoscopic Procedures," which addresses a safety concern with the use of these devices. While we made every effort to present the most current information, readers should recognize that this is a rapidly evolving technology, and developments occurring after our study was complete may not be reflected in the text. For additional information on topics related to this study, refer to the following Health Devices articles: (1) our Guidance Article, "Surgical Video Systems Used in Laparoscopy," 24(1), January 1995, which serves as an introduction to SVS terminology and includes a discussion of the significance of many SVS specifications; (2) our Evaluation, "Video Colonoscope Systems," 23(5), May 1994, which includes a detailed overview of video endoscopic applications and technology; and (3) our Evaluations of laparoscopic insufflators (21[5], May 1992, and 24[7], July 1995), which address issues related to the creation of a viewing and working space inside the peritoneal cavity to facilitate visualization in laparoscopic procedures.
Reduction in Fall Rate in Dementia Managed Care Through Video Incident Review: Pilot Study
Netscher, George; Agrawal, Pulkit; Tabb Noyce, Lynn; Bayen, Alexandre
2017-01-01
Background Falls of individuals with dementia are frequent, dangerous, and costly. Early detection and access to the history of a fall is crucial for efficient care and secondary prevention in cognitively impaired individuals. However, most falls remain unwitnessed events. Furthermore, understanding why and how a fall occurred is a challenge. Video capture and secure transmission of real-world falls thus stands as a promising assistive tool. Objective The objective of this study was to analyze how continuous video monitoring and review of falls of individuals with dementia can support better quality of care. Methods A pilot observational study (July-September 2016) was carried out in a Californian memory care facility. Falls were video-captured (24×7), thanks to 43 wall-mounted cameras (deployed in all common areas and in 10 out of 40 private bedrooms of consenting residents and families). Video review was provided to facility staff, thanks to a customized mobile device app. The outcome measures were the count of residents’ falls happening in the video-covered areas, the acceptability of video recording, the analysis of video review, and video replay possibilities for care practice. Results Over 3 months, 16 falls were video-captured. A drop in fall rate was observed in the last month of the study. Acceptability was good. Video review enabled screening for the severity of falls and fall-related injuries. Video replay enabled identifying cognitive-behavioral deficiencies and environmental circumstances contributing to the fall. This allowed for secondary prevention in high-risk multi-faller individuals and for updated facility care policies regarding a safer living environment for all residents. Conclusions Video monitoring offers high potential to support conventional care in memory care facilities. PMID:29042342
High Speed Videometric Monitoring of Rock Breakage
NASA Astrophysics Data System (ADS)
Allemand, J.; Shortis, M. R.; Elmouttie, M. K.
2018-05-01
Estimation of rock breakage characteristics plays an important role in optimising various industrial and mining processes used for rock comminution. Although little research has been undertaken into 3D photogrammetric measurement of the progeny kinematics, there is promising potential to improve the efficacy of rock breakage characterisation. In this study, the observation of progeny kinematics was conducted using a high speed, stereo videometric system based on laboratory experiments with a drop weight impact testing system. By manually tracking individual progeny through the captured video sequences, observed progeny coordinates can be used to determine 3D trajectories and velocities, supporting the idea that high speed video can be used for rock breakage characterisation purposes. An analysis of the results showed that the high speed videometric system successfully observed progeny trajectories and showed clear projection of the progeny away from the impact location. Velocities of the progeny could also be determined based on the trajectories and the video frame rate. These results were obtained despite the limitations of the photogrammetric system and experiment processes observed in this study. Accordingly there is sufficient evidence to conclude that high speed videometric systems are capable of observing progeny kinematics from drop weight impact tests. With further optimisation of the systems and processes used, there is potential for improving the efficacy of rock breakage characterisation from measurements with high speed videometric systems.
NASA Technical Reports Server (NTRS)
Haines, Richard F.; Chuang, Sherry L.
1993-01-01
Current plans indicate that there will be a large number of life science experiments carried out during the thirty year-long mission of the Biological Flight Research Laboratory (BFRL) on board Space Station Freedom (SSF). Non-human life science experiments will be performed in the BFRL. Two distinct types of activities have already been identified for this facility: (1) collect, store, distribute, analyze and manage engineering and science data from the Habitats, Glovebox and Centrifuge, (2) perform a broad range of remote science activities in the Glovebox and Habitat chambers in conjunction with the remotely located principal investigator (PI). These activities require extensive video coverage, viewing and/or recording and distribution to video displays on board SSF and to the ground. This paper concentrates mainly on the second type of activity. Each of the two BFRL habitat racks are designed to be configurable for either six rodent habitats per rack, four plant habitats per rack, or a combination of the above. Two video cameras will be installed in each habitat with a spare attachment for a third camera when needed. Therefore, a video system that can accommodate up to 12-18 camera inputs per habitat rack must be considered.
Real Time Apnoea Monitoring of Children Using the Microsoft Kinect Sensor: A Pilot Study.
Al-Naji, Ali; Gibson, Kim; Lee, Sang-Heon; Chahl, Javaan
2017-02-03
The objective of this study was to design a non-invasive system for the observation of respiratory rates and detection of apnoea using analysis of real time image sequences captured in any given sleep position and under any light conditions (even in dark environments). A Microsoft Kinect sensor was used to visualize the variations in the thorax and abdomen from the respiratory rhythm. These variations were magnified, analyzed and detected at a distance of 2.5 m from the subject. A modified motion magnification system and frame subtraction technique were used to identify breathing movements by detecting rapid motion areas in the magnified frame sequences. The experimental results on a set of video data from five subjects (3 h for each subject) showed that our monitoring system can accurately measure respiratory rate and therefore detect apnoea in infants and young children. The proposed system is feasible, accurate, safe and low computational complexity, making it an efficient alternative for non-contact home sleep monitoring systems and advancing health care applications.
Background: Preflight Screening, In-flight Capabilities, and Postflight Testing
NASA Technical Reports Server (NTRS)
Gibson, Charles Robert; Duncan, James
2009-01-01
Recommendations for minimal in-flight capabilities: Retinal Imaging - provide in-flight capability for the visual monitoring of ocular health (specifically, imaging of the retina and optic nerve head) with the capability of downlinking video/still images. Tonometry - provide more accurate and reliable in-flight capability for measuring intraocular pressure. Ultrasound - explore capabilities of current on-board system for monitoring ocular health. We currently have limited in-flight capabilities on board the International Space Station for performing an internal ocular health assessment. Visual Acuity, Direct Ophthalmoscope, Ultrasound, Tonometry(Tonopen):
Time lapse video recordings of highly purified human hematopoietic progenitor cells in culture.
Denkers, I A; Dragowska, W; Jaggi, B; Palcic, B; Lansdorp, P M
1993-05-01
Major hurdles in studies of stem cell biology include the low frequency and heterogeneity of human hematopoietic precursor cells in bone marrow and the difficulty of directly studying the effect of various culture conditions and growth factors on such cells. We have adapted the cell analyzer imaging system for monitoring and recording the morphology of limited numbers of cells under various culture conditions. Hematopoietic progenitor cells with a CD34+ CD45RAlo CD71lo phenotype were purified from previously frozen organ donor bone marrow by fluorescence activated cell sorting. Cultures of such cells were analyzed with the imaging system composed of an inverted microscope contained in an incubator, a video camera, an optical memory disk recorder and a computer-controlled motorized microscope XYZ precision stage. Fully computer-controlled video images at defined XYZ positions were captured at selected time intervals and recorded at a predetermined sequence on an optical memory disk. In this study, the cell analyzer system was used to obtain descriptions and measurements of hematopoietic cell behavior, like cell motility, cell interactions, cell shape, cell division, cell cycle time and cell size changes under different culture conditions.
Guerreiro, Carlos A M; Montenegro, Maria Augusta; Kobayashi, Eliane; Noronha, Ana Lúcia A; Guerreiro, Marilisa M; Cendes, Fernando
2002-06-01
Video-EEG monitoring documentation of seizure localization is one of the most important aspects of a presurgical investigation in refractory temporal lobe epilepsy (TLE) patients. The objective of this study was to evaluate the efficacy of inpatient versus daytime outpatient telemetry. The authors evaluated prospectively 73 patients with medically intractable TLE. Ninety-one telemetry sessions were performed: 35 as inpatients and 56 as outpatients. Outpatient monitoring was performed in the EEG laboratory. They used 18-channel digital EEG. Medications were not changed in the outpatient group. For analysis of the data, time was counted in periods (12 hours = 1 period). Statistical analyses were performed using Student's t-test and the chi2 test. There were no differences between the two groups (outpatient versus inpatient) with respect to age and mean seizure frequency before monitoring, mean time to record the first seizure (1.1 versus 1.4 periods), mean number of seizures per period (0.6 for both groups), lateralization by interictal spiking (46% versus 57%), and lateralization by ictal EEG (59% versus 77%). Daytime outpatient video-EEG monitoring for presurgical evaluation is efficient and comparable with inpatient monitoring. Therefore, the improved cost benefit of outpatient monitoring may increase the access to surgery for individuals with intractable TLE.
Hardware and software improvements to a low-cost horizontal parallax holographic video monitor.
Henrie, Andrew; Codling, Jesse R; Gneiting, Scott; Christensen, Justin B; Awerkamp, Parker; Burdette, Mark J; Smalley, Daniel E
2018-01-01
Displays capable of true holographic video have been prohibitively expensive and difficult to build. With this paper, we present a suite of modularized hardware components and software tools needed to build a HoloMonitor with basic "hacker-space" equipment, highlighting improvements that have enabled the total materials cost to fall to $820, well below that of other holographic displays. It is our hope that the current level of simplicity, development, design flexibility, and documentation will enable the lay engineer, programmer, and scientist to relatively easily replicate, modify, and build upon our designs, bringing true holographic video to the masses.
Hatching and fledging times from grassland passerine nests
Pietz, Pamela J.; Granfors, Diane A.; Grant, Todd A.; Ribic, Christine A.; Thompson, Frank R.; Pietz, Pamela J.
2012-01-01
1 day and was positively correlated with clutch size. Length of the fledging period for a brood was usually Accurate estimates of fledging age are needed in field studies to avoid inducing premature fledging or missing the fledging event. Both may lead to misinterpretation of nest fate. Correctly assessing nest fate and length of the nestling period can be critical for accurate calculation of nest survival rates. For researchers who mark nestlings, knowing the age at which their activities may cause young to leave nests prematurely could prevent introducing bias to their studies. We obtained estimates of fledging age using data from grassland bird nests monitored from hatching through fledging with video-surveillance systems in North Dakota and Minnesota during 1996–2001. We compared these values to those obtained from traditional nest visits and from available literature. Mean and modal fledging ages for video-monitored nests were generally similar to those for visited nests, although Clay-colored Sparrows (Spizella pallida) typically fledged 1 day earlier from visited nests. Average fledging ages from both video and nest visits occurred within ranges reported in the literature, but expanded by 1–2 days the upper age limit for Clay-colored Sparrows and the lower age limit for Bobolinks (Dolichonyx oryzivorus). Video showed that eggs hatched throughout the day whereas most young fledged in the morning (06:30–12:30 CDT). Length of the hatching period for a clutch was usually >1 day and was positively correlated with clutch size. Length of the fledging period for a brood was usually <1 day, and in nearly half the nests, fledging was completed within <2 hr. Video surveillance has proven to be a useful tool for providing new information and for corroborating published statements related to hatching and fledging chronology. Comparison of data collected from video and nest visits showed that carefully conducted nest visits generally can provide reliable data for deriving estimates of survival.
Television image compression and small animal remote monitoring
NASA Technical Reports Server (NTRS)
Haines, Richard F.; Jackson, Robert W.
1990-01-01
It was shown that a subject can reliably discriminate a difference in video image quality (using a specific commercial product) for image compression levels ranging from 384 kbits per second to 1536 kbits per second. However, their discriminations are significantly influenced by whether or not the TV camera is stable or moving and whether or not the animals are quiescent or active, which is correlated with illumination level (daylight versus night illumination, respectively). The highest video rate used here was 1.54 megabits per second, which is about 18 percent of the so-called normal TV resolution of 8.4MHz. Since this video rate was judged to be acceptable by 27 of the 34 subjects (79 percent), for monitoring the general health and status of small animals within their illuminated (lights on) cages (regardless of whether the camera was stable or moved), it suggests that an immediate Space Station Freedom to ground bandwidth reduction of about 80 percent can be tolerated without a significant loss in general monitoring capability. Another general conclusion is that the present methodology appears to be effective in quantifying visual judgments of video image quality.
High-speed video capillaroscopy method for imaging and evaluation of moving red blood cells
NASA Astrophysics Data System (ADS)
Gurov, Igor; Volkov, Mikhail; Margaryants, Nikita; Pimenov, Aleksei; Potemkin, Andrey
2018-05-01
The video capillaroscopy system with high image recording rate to resolve moving red blood cells with velocity up to 5 mm/s into a capillary is considered. Proposed procedures of the recorded video sequence processing allow evaluating spatial capillary area, capillary diameter and central line with high accuracy and reliability independently on properties of individual capillary. Two-dimensional inter frame procedure is applied to find lateral shift of neighbor images in the blood flow area with moving red blood cells and to measure directly the blood flow velocity along a capillary central line. The developed method opens new opportunities for biomedical diagnostics, particularly, due to long-time continuous monitoring of red blood cells velocity into capillary. Spatio-temporal representation of capillary blood flow is considered. Experimental results of direct measurement of blood flow velocity into separate capillary as well as capillary net are presented and discussed.
ERIC Educational Resources Information Center
Schmitt, Rachel Calkins Oxnard
2009-01-01
Children are diagnosed with AD/HD more often than any other disorder and interventions are needed in schools to increase on-task behavior. Most studies examining on-task behavior are conducted in special education classrooms or clinical laboratories. Previous studies have not combined video self-modeling and self-monitoring as an intervention to…
Fokkenrood, H J P; Verhofstad, N; van den Houten, M M L; Lauret, G J; Wittens, C; Scheltinga, M R M; Teijink, J A W
2014-08-01
The daily life physical activity (PA) of patients with peripheral arterial disease (PAD) may be severely hampered by intermittent claudication (IC). From a therapeutic, as well as research, point of view, it may be more relevant to determine improvement in PA as an outcome measure in IC. The aim of this study was to validate daily activities using a novel type of tri-axial accelerometer (Dynaport MoveMonitor) in patients with IC. Patients with IC were studied during a hospital visit. Standard activities (locomotion, lying, sitting, standing, shuffling, number of steps and "not worn" detection) were video recorded and compared with activities scored by the MoveMonitor. Inter-rater reliability (expressed in intraclass correlation coefficients [ICC]), sensitivity, specificity, and positive predictive values (PPV) were calculated for each activity. Twenty-eight hours of video observation were analysed (n = 21). Our video annotation method (the gold standard method) appeared to be accurate for most postures (ICC > 0.97), except for shuffling (ICC = 0.38). The MoveMonitor showed a high sensitivity (>86%), specificity (>91%), and PPV (>88%) for locomotion, lying, sitting, and "not worn" detection. Moderate accuracy was found for standing (46%), while shuffling appeared to be undetectable (18%). A strong correlation was found between video recordings and the MoveMonitor with regard to the calculation of the "number of steps" (ICC = 0.90). The MoveMonitor provides accurate information on a diverse set of postures, daily activities, and number of steps in IC patients. However, the detection of low amplitude movements, such as shuffling and "sitting to standing" transfers, is a matter of concern. This tool is useful in assessing the role of PA as a novel, clinically relevant outcome parameter in IC. Copyright © 2014 European Society for Vascular Surgery. Published by Elsevier Ltd. All rights reserved.
Evaluation and analysis of the orbital maneuvering vehicle video system
NASA Technical Reports Server (NTRS)
Moorhead, Robert J., II
1989-01-01
The work accomplished in the summer of 1989 in association with the NASA/ASEE Summer Faculty Research Fellowship Program at Marshall Space Flight Center is summarized. The task involved study of the Orbital Maneuvering Vehicle (OMV) Video Compression Scheme. This included such activities as reviewing the expected scenes to be compressed by the flight vehicle, learning the error characteristics of the communication channel, monitoring the CLASS tests, and assisting in development of test procedures and interface hardware for the bit error rate lab being developed at MSFC to test the VCU/VRU. Numerous comments and suggestions were made during the course of the fellowship period regarding the design and testing of the OMV Video System. Unfortunately from a technical point of view, the program appears at this point in time to be trouble from an expense prospective and is in fact in danger of being scaled back, if not cancelled altogether. This makes technical improvements prohibitive and cost-reduction measures necessary. Fortunately some cost-reduction possibilities and some significant technical improvements that should cost very little were identified.
NASA Astrophysics Data System (ADS)
Yang, Xiaolin; Wu, Zhongliang; Jiang, Changsheng; Xia, Min
2011-05-01
One of the important issues in macroseismology and engineering seismology is how to get as much intensity and/or strong motion data as possible. We collected and studied several cases in the May 12, 2008, Wenchuan earthquake, exploring the possibility of estimating intensities and/or strong ground motion parameters using civilian monitoring videos which were deployed originally for security purposes. We used 53 video recordings in different places to determine the intensity distribution of the earthquake, which is shown to be consistent with the intensity distribution mapped by field investigation, and even better than that given by the Community Internet Intensity Map. In some of the videos, the seismic wave propagation is clearly visible, and can be measured with the reference of some artificial objects such as cars and/or trucks. By measuring the propagating wave, strong motion parameters can be roughly but quantitatively estimated. As a demonstration of this `propagating-wave method', we used a series of civilian videos recorded in different parts of Sichuan and Shaanxi and estimated the local PGAs. The estimate is compared with the measurement reported by strong motion instruments. The result shows that civilian monitoring video provide a practical way of collecting and estimating intensity and/or strong motion parameters, having the advantage of being dynamic, and being able to be played back for further analysis, reflecting a new trend for macroseismology in our digital era.
On-line monitoring system of PV array based on internet of things technology
NASA Astrophysics Data System (ADS)
Li, Y. F.; Lin, P. J.; Zhou, H. F.; Chen, Z. C.; Wu, L. J.; Cheng, S. Y.; Su, F. P.
2017-11-01
The Internet of Things (IoT) Technology is used to inspect photovoltaic (PV) array which can greatly improve the monitoring, performance and maintenance of the PV array. In order to efficiently realize the remote monitoring of PV operating environment, an on-line monitoring system of PV array based on IoT is designed in this paper. The system includes data acquisition, data gateway and PV monitoring centre (PVMC) website. Firstly, the DSP-TMS320F28335 is applied to collect indicators of PV array using sensors, then the data are transmitted to data gateway through ZigBee network. Secondly, the data gateway receives the data from data acquisition part, obtains geographic information via GPS module, and captures the scenes around PV array via USB camera, then uploads them to PVMC website. Finally, the PVMC website based on Laravel framework receives all data from data gateway and displays them with abundant charts. Moreover, a fault diagnosis approach for PV array based on Extreme Learning Machine (ELM) is applied in PVMC. Once fault occurs, a user alert can be sent via E-mail. The designed system enables users to browse the operating conditions of PV array on PVMC website, including electrical, environmental parameters and video. Experimental results show that the presented monitoring system can efficiently real-time monitor the PV array, and the fault diagnosis approach reaches a high accuracy of 97.5%.
Nissen, Nicholas N; Menon, Vijay; Williams, James; Berci, George
2011-01-01
Background The use of loupe magnification during complex hepatobiliary and pancreatic (HBP) surgery has become routine. Unfortunately, loupe magnification has several disadvantages including limited magnification, a fixed field and non-variable magnification parameters. The aim of this report is to describe a simple system of video-microscopy for use in open surgery as an alternative to loupe magnification. Methods In video-microscopy, the operative field is displayed on a TV monitor using a high-definition (HD) camera with a special optic mounted on an adjustable mechanical arm. The set-up and application of this system are described and illustrated using examples drawn from pancreaticoduodenectomy, bile duct repair and liver transplantation. Results This system is easy to use and can provide variable magnification of ×4–12 at a camera distance of 25–35 cm from the operative field and a depth of field of 15 mm. This system allows the surgeon and assistant to work from a HD TV screen during critical phases of microsurgery. Conclusions The system described here provides better magnification than loupe lenses and thus may be beneficial during complex HPB procedures. Other benefits of this system include the fact that its use decreases neck strain and postural fatigue in the surgeon and it can be used as a tool for documentation and teaching. PMID:21929677
NASA Astrophysics Data System (ADS)
Feher, K.
Topics discussed include highlights of Canadian and US communication-satellite developments, video teleconferencing, modulation/system studies, organization/interface tradeoffs, Canadian satellite programs, performance monitoring techniques, spread spectrum satcom systems, social and educational satellite services, atmospheric/navigational satcom systems, TDMA systems, and Teleglobe/Intelsat and Inmarsat programs. Consideration is also given to SCPC developments, TV and program reception, earth station components, European satcom systems, TCTS/CNCP satellite communications services, satellite designs, coding techniques, Japanese satellite systems, network developments, the ANIK user workshop, industrial/business systems, and satellite antenna technology.
Design and implementation of a Bluetooth-based infant monitoring/saver (BIMS) system
NASA Astrophysics Data System (ADS)
Sonmez, Ahmet E.; Nalcaci, Murat T.; Pazarbasi, Mehmet A.; Toker, Onur; Fidanboylu, Kemal
2007-04-01
In this work, we discuss the design and implementation of a Bluetooth technology based infant monitoring system, which will enable the mother to monitor her baby's health condition remotely in real-time. The system will measure the heart rate, and temperature of the infant, and stream this data to the mother's Bluetooth based mobile unit, e.g. cell phone, PDA, etc. Existing infant monitors either require so many cables, or transmit only voice and/or video information, which is not enough for monitoring the health condition of an infant. With the proposed system, the mother will be warned against any abnormalities, which may be an indication of a disease, which in turn may result a sudden infant death. High temperature is a common symptom for several diseases, and heart rate is an essential sign of life, low or high heart rates are also essentials symptoms. Because of these reasons, the proposed system continously measures these two critical values. A 12 bits digital temperature sensor is used to measure infant's body temperature, and a piezo film sensor is used measure infant's heartbeat rate. These sensors, some simple analog circuitry, and a ToothPick unit are the main components of our embedded system. ToothPick unit is basically a Microchip 18LF6720 microcontroller, plus an RF circuitry with Bluetooth stack.
Automated and electronically assisted hand hygiene monitoring systems: a systematic review.
Ward, Melissa A; Schweizer, Marin L; Polgreen, Philip M; Gupta, Kalpana; Reisinger, Heather S; Perencevich, Eli N
2014-05-01
Hand hygiene is one of the most effective ways to prevent transmission of health care-associated infections. Electronic systems and tools are being developed to enhance hand hygiene compliance monitoring. Our systematic review assesses the existing evidence surrounding the adoption and accuracy of automated systems or electronically enhanced direct observations and also reviews the effectiveness of such systems in health care settings. We systematically reviewed PubMed for articles published between January 1, 2000, and March 31, 2013, containing the terms hand AND hygiene or hand AND disinfection or handwashing. Resulting articles were reviewed to determine if an electronic system was used. We identified 42 articles for inclusion. Four types of systems were identified: electronically assisted/enhanced direct observation, video-monitored direct observation systems, electronic dispenser counters, and automated hand hygiene monitoring networks. Fewer than 20% of articles identified included calculations for efficiency or accuracy. Limited data are currently available to recommend adoption of specific automatic or electronically assisted hand hygiene surveillance systems. Future studies should be undertaken that assess the accuracy, effectiveness, and cost-effectiveness of such systems. Given the restricted clinical and infection prevention budgets of most facilities, cost-effectiveness analysis of specific systems will be required before these systems are widely adopted. Published by Mosby, Inc.
Voice control of the space shuttle video system
NASA Technical Reports Server (NTRS)
Bejczy, A. K.; Dotson, R. S.; Brown, J. W.; Lewis, J. L.
1981-01-01
A pilot voice control system developed at the Jet Propulsion Laboratory (JPL) to test and evaluate the feasibility of controlling the shuttle TV cameras and monitors by voice commands utilizes a commercially available discrete word speech recognizer which can be trained to the individual utterances of each operator. Successful ground tests were conducted using a simulated full-scale space shuttle manipulator. The test configuration involved the berthing, maneuvering and deploying a simulated science payload in the shuttle bay. The handling task typically required 15 to 20 minutes and 60 to 80 commands to 4 TV cameras and 2 TV monitors. The best test runs show 96 to 100 percent voice recognition accuracy.
Cameras Monitor Spacecraft Integrity to Prevent Failures
NASA Technical Reports Server (NTRS)
2014-01-01
The Jet Propulsion Laboratory contracted Malin Space Science Systems Inc. to outfit Curiosity with four of its cameras using the latest commercial imaging technology. The company parlayed the knowledge gained under working with NASA to develop an off-the-shelf line of cameras, along with a digital video recorder, designed to help troubleshoot problems that may arise on satellites in space.
ERIC Educational Resources Information Center
Yakubova, Gulnoza; Taber-Doughty, Teresa
2013-01-01
The effects of a multicomponent intervention (a self-operated video modeling and self-monitoring delivered via an electronic interactive whiteboard (IWB) and a system of least prompts) on skill acquisition and interaction behavior of two students with autism and one student with moderate intellectual disability were examined using a multi-probe…
Phase-based motion magnification video for monitoring of vital signals using the Hermite transform
NASA Astrophysics Data System (ADS)
Brieva, Jorge; Moya-Albor, Ernesto
2017-11-01
In this paper we present a new Eulerian phase-based motion magnification technique using the Hermite Transform (HT) decomposition that is inspired in the Human Vision System (HVS). We test our method in one sequence of the breathing of a newborn baby and on a video sequence that shows the heartbeat on the wrist. We detect and magnify the heart pulse applying our technique. Our motion magnification approach is compared to the Laplacian phase based approach by means of quantitative metrics (based on the RMS error and the Fourier transform) to measure the quality of both reconstruction and magnification. In addition a noise robustness analysis is performed for the two methods.
Magnetic field exposure and behavioral monitoring system.
Thomas, A W; Drost, D J; Prato, F S
2001-09-01
To maximize the availability and usefulness of a small magnetic field exposure laboratory, we designed a magnetic field exposure system that has been used to test human subjects, caged or confined animals, and cell cultures. The magnetic field exposure system consists of three orthogonal pairs of coils 2 m square x 1 m separation, 1.751 m x 0.875 m separation, and 1.5 m x 0.75 m separation. Each coil consisted of ten turns of insulated 8 gauge stranded copper conductor. Each of the pairs were driven by a constant-current amplifier via digital to analog (D/A) converter. A 9 pole zero-gain active Bessel low-pass filter (1 kHz corner frequency) before the amplifier input attenuated the expected high frequencies generated by the D/A conversion. The magnetic field was monitored with a 3D fluxgate magnetometer (0-3 kHz, +/- 1 mT) through an analog to digital converter. Behavioral monitoring utilized two monochrome video cameras (viewing the coil center vertically and horizontally), both of which could be video recorded and real-time digitally Moving Picture Experts Group (MPEG) encoded to CD-ROM. Human postural sway (standing balance) was monitored with a 3D forceplate mounted on the floor, connected to an analog to digital converter. Lighting was provided by 12 offset overhead dimmable fluorescent track lights and monitored using a digitally connected spectroradiometer. The dc resistance, inductance of each coil pair connected in series were 1.5 m coil (0.27 Omega, 1.2 mH), 1.75 m coil (0.32 Omega, 1.4 mH), and 2 m coil (0.38 Omega, 1.6 mH). The frequency response of the 1.5 m coil set was 500 Hz at +/- 463 microT, 1 kHz at +/- 232 microT, 150 micros rise time from -200 microT(pk) to + 200 microT(pk) (square wave) and is limited by the maximum voltage ( +/- 146 V) of the amplifier (Bessel filter bypassed). Copyright 2001 Wiley-Liss, Inc.
Repurposing video recordings for structure motion estimations
NASA Astrophysics Data System (ADS)
Khaloo, Ali; Lattanzi, David
2016-04-01
Video monitoring of public spaces is becoming increasingly ubiquitous, particularly near essential structures and facilities. During any hazard event that dynamically excites a structure, such as an earthquake or hurricane, proximal video cameras may inadvertently capture the motion time-history of the structure during the event. If this dynamic time-history could be extracted from the repurposed video recording it would become a valuable forensic analysis tool for engineers performing post-disaster structural evaluations. The difficulty is that almost all potential video cameras are not installed to monitor structure motions, leading to camera perspective distortions and other associated challenges. This paper presents a method for extracting structure motions from videos using a combination of computer vision techniques. Images from a video recording are first reprojected into synthetic images that eliminate perspective distortion, using as-built knowledge of a structure for calibration. The motion of the camera itself during an event is also considered. Optical flow, a technique for tracking per-pixel motion, is then applied to these synthetic images to estimate the building motion. The developed method was validated using the experimental records of the NEESHub earthquake database. The results indicate that the technique is capable of estimating structural motions, particularly the frequency content of the response. Further work will evaluate variants and alternatives to the optical flow algorithm, as well as study the impact of video encoding artifacts on motion estimates.
Williams, Gary E.; Wood, P.B.
2002-01-01
We used miniature infrared video cameras to monitor Wood Thrush (Hylocichla mustelina) nests during 1998–2000. We documented nest predators and examined whether evidence at nests can be used to predict predator identities and nest fates. Fifty-six nests were monitored; 26 failed, with 3 abandoned and 23 depredated. We predicted predator class (avian, mammalian, snake) prior to review of video footage and were incorrect 57% of the time. Birds and mammals were underrepresented whereas snakes were over-represented in our predictions. We documented ≥9 nest-predator species, with the southern flying squirrel (Glaucomys volans) taking the most nests (n = 8). During 2000, we predicted fate (fledge or fail) of 27 nests; 23 were classified correctly. Traditional methods of monitoring nests appear to be effective for classifying success or failure of nests, but ineffective at classifying nest predators.
Kychakoff, George; Afromowitz, Martin A; Hugle, Richard E
2005-06-21
A system for detection and control of deposition on pendant tubes in recovery and power boilers includes one or more deposit monitoring sensors operating in infrared regions and about 4 or 8.7 microns and directly producing images of the interior of the boiler. An image pre-processing circuit (95) in which a 2-D image formed by the video data input is captured, and includes a low pass filter for performing noise filtering of said video input. An image segmentation module (105) for separating the image of the recovery boiler interior into background, pendant tubes, and deposition. An image-understanding unit (115) matches derived regions to a 3-D model of said boiler. It derives a 3-D structure the deposition on pendant tubes in the boiler and provides the information about deposits to the plant distributed control system (130) for more efficient operation of the plant pendant tube cleaning and operating systems.
Blumberg, Julie; Fernández, Iván Sánchez; Vendrame, Martina; Oehl, Bernhard; Tatum, William O; Schuele, Stephan; Alexopoulos, Andreas V; Poduri, Annapurna; Kellinghaus, Christoph; Schulze-Bonhage, Andreas; Loddenkemper, Tobias
2012-10-01
To provide an estimate of the frequency of dacrystic seizures in video-electroencephalography (EEG) long-term monitoring units of tertiary referral epilepsy centers and to describe the clinical presentation of dacrystic seizures in relationship to the underlying etiology. We screened clinical records and video-EEG reports for the diagnosis of dacrystic seizures of all patients admitted for video-EEG long-term monitoring at five epilepsy referral centers in the United States and Germany. Patients with a potential diagnosis of dacrystic seizures were identified, and their clinical charts and video-EEG recordings were reviewed. We included only patients with: (1) stereotyped lacrimation, sobbing, grimacing, yelling, or sad facial expression; (2) long-term video-EEG recordings (at least 12 h); and (3) at least one brain magnetic resonance imaging (MRI) study. Nine patients (four female) with dacrystic seizures were identified. Dacrystic seizures were identified in 0.06-0.53% of the patients admitted for long-term video-EEG monitoring depending on the specific center. Considering our study population as a whole, the frequency was 0.13%. The presence of dacrystic seizures without other accompanying clinical features was found in only one patient. Gelastic seizures accompanied dacrystic seizures in five cases, and a hypothalamic hamartoma was found in all of these five patients. The underlying etiology in the four patients with dacrystic seizures without gelastic seizures was left mesial temporal sclerosis (three patients) and a frontal glioblastoma (one patient). All patients had a difficult-to-control epilepsy as demonstrated by the following: (1) at least three different antiepileptic drugs were tried in each patient, (2) epilepsy was well controlled with antiepileptic drugs in only two patients, (3) six patients were considered for epilepsy surgery and three of them underwent a surgical/radiosurgical or radioablative procedure. Regarding outcome, antiepileptic drugs alone achieved seizure freedom in two patients and did not change seizure frequency in another patient. Radiosurgery led to moderately good seizure control in one patient and did not improve seizure control in another patient. Three patients were or are being considered for epilepsy surgery on last follow-up. One patient remains seizure free 3 years after epilepsy surgery. Dacrystic seizures are a rare but clinically relevant finding during video-EEG monitoring. Our data show that when the patient has dacrystic and gelastic seizures, the cause is a hypothalamic hamartoma. In contrast, when dacrystic seizures are not accompanied by gelastic seizures the underlying lesion is most commonly located in the temporal cortex. Wiley Periodicals, Inc. © 2012 International League Against Epilepsy.
Exploding Head Syndrome in the Epilepsy Monitoring Unit: Case Report and Literature Review.
Gillis, Kara; Ng, Marcus C
2017-01-01
Diagnosis of paroxysmal events in epilepsy patients is often made through video-telemetry electroencephalography in the epilepsy monitoring unit. This case report describes the first-ever diagnosis of exploding head syndrome in a patient with longstanding epilepsy and novel nocturnal events. In this report, we describe the presentation of exploding head syndrome and its prevalence and risk factors. In addition, the prevalence of newly diagnosed sleep disorders through video-telemetry electroencephalography in the epilepsy monitoring unit is briefly reviewed. This report also illustrates the novel use of clobazam for the treatment of exploding head syndrome.
NASA Technical Reports Server (NTRS)
Strutzenberg, Louise L.; Grugel, R. N.; Trivedi, R. K.
2005-01-01
A series of experiments performed using the Pore Formation and Mobility Investigation (PFMI) apparatus within the glovebox facility (GBX) on board the International Space Station (ISS) has provided video images of the morphological evolution of a three-dimensional interface in a diffusion controlled regime. The experimental samples were prepared on ground by filling glass tubes, 1 cm ID and approximately 30 cm in length, with "alloys" of succinonitrile (SCN) and water in an atmosphere of nitrogen at 450 millibar pressure. The compositions of the samples processed and analyzed are 0.25,0.5 and 1.0 wt% water. Experimental processing parameters of temperature gradient and translation speed, as well as camera settings, were remotely monitored and manipulated from the ground Telescience Center (TSC) at the Marshall !3pace Flight Center. During the experiments, the sample was first subjected to a unidirectional melt back, generally at 10 microns per second, with a constant temperature gradient ahead of the melting interface. Following the melt back, the interface was allowed to stabilize before translation is initiated. The temperatures in the sample were monitored by six in situ thermocouples and the position is monitored by an optical linear encoder. For the experiments performed and analyzed, the gradients ranged from 2.5 - 3.3 K/mm and the initial pulling velocities ranged from 0.7 micron per second to 1 micron per second with subsequent transition velocities of up to 100 microns per second. The data provided by the PFMI for analysis includes near-real-time (NRT) video captured on the ground during the experiment runs, ISS Video Tape Recorder (VTR) data dumped from the VTR at the end of the experiment run and recorded on the ground, telemetry data including temperature and position measurements, and limited flight HI-8 tapes in 2 camera views of experiment runs for which tapes have been returned to the investigators from ISS. Because of limited down mass from the ISS, the majority of the initial analysis has been performed using the NRT and VTR video data but will be supplemented with the HI-8 video as it becomes available. hardware and procedures required to prepare samples for correlation to flight samples is described. Using this ground-based system, a series of experiments have been performed for direct comparison with the flight data. The results of these comparisons as well as implications to future microgravity experiments are presented and discussed. A ground-based thin-sample directional solidification system as well as all associated
A real-time single sperm tracking, laser trapping, and ratiometric fluorescent imaging system
NASA Astrophysics Data System (ADS)
Shi, Linda Z.; Botvinick, Elliot L.; Nascimento, Jaclyn; Chandsawangbhuwana, Charlie; Berns, Michael W.
2006-08-01
Sperm cells from a domestic dog were treated with oxacarbocyanine DiOC II(3), a ratiometrically-encoded membrane potential fluorescent probe in order to monitor the mitochondria stored in an individual sperm's midpiece. This dye normally emits a red fluorescence near 610 nm as well as a green fluorescence near 515 nm. The ratio of red to green fluorescence provides a substantially accurate and precise measurement of sperm midpiece membrane potential. A two-level computer system has been developed to quantify the motility and energetics of sperm using video rate tracking, automated laser trapping (done by the upper-level system) and fluorescent imaging (done by the lower-level system). The communication between these two systems is achieved by a networked gigabit TCP/IP cat5e crossover connection. This allows for the curvilinear velocity (VCL) and ratio of the red to green fluorescent images of individual sperm to be written to the hard drive at video rates. This two-level automatic system has increased experimental throughput over our previous single-level system (Mei et al., 2005) by an order of magnitude.
Feng, Chuan; Rozenblit, Jerzy W; Hamilton, Allan J
2010-11-01
Surgeons performing laparoscopic surgery have strong biases regarding the quality and nature of the laparoscopic video monitor display. In a comparative study, we used a unique computerized sensing and analysis system to evaluate the various types of monitors employed in laparoscopic surgery. We compared the impact of different types of monitor displays on an individual's performance of a laparoscopic training task which required the subject to move the instrument to a set of targets. Participants (varying from no laparoscopic experience to board-certified surgeons) were asked to perform the assigned task while using all three display systems, which were randomly assigned: a conventional laparoscopic monitor system (2D), a high-definition monitor system (HD), and a stereoscopic display (3D). The effects of monitor system on various performance parameters (total time consumed to finish the task, average speed, and movement economy) were analyzed by computer. Each of the subjects filled out a subjective questionnaire at the end of their training session. A total of 27 participants completed our study. Performance with the HD monitor was significantly slower than with either the 3D or 2D monitor (p < 0.0001). Movement economy with the HD monitor was significantly reduced compared with the 3D (p < 0.0004) or 2D (p < 0.0001) monitor. In terms of average time required to complete the task, performance with the 3D monitor was significantly faster than with the HD (p < 0.0001) or 2D (p < 0.0086) monitor. However, the HD system was the overwhelming favorite according to subjective evaluation. Computerized sensing and analysis is capable of quantitatively assessing the seemingly minor effect of monitor display on surgical training performance. The study demonstrates that, while users expressed a decided preference for HD systems, actual quantitative analysis indicates that HD monitors offer no statistically significant advantage and may even worsen performance compared with standard 2D or 3D laparoscopic monitors.
NASA Astrophysics Data System (ADS)
Huang, Yushi; Nigam, Abhimanyu; Campana, Olivia; Nugegoda, Dayanthi; Wlodkowic, Donald
2016-12-01
Biomonitoring studies apply biological responses of sensitive biomonitor organisms to rapidly detect adverse environmental changes such as presence of physic-chemical stressors and toxins. Behavioral responses such as changes in swimming patterns of small aquatic invertebrates are emerging as sensitive endpoints to monitor aquatic pollution. Although behavioral responses do not deliver information on an exact type or the intensity of toxicants present in water samples, they could provide orders of magnitude higher sensitivity than lethal endpoints such as mortality. Despite the advantages of behavioral biotests performed on sentinel organisms, their wider application in real-time and near realtime biomonitoring of water quality is limited by the lack of dedicated and automated video-microscopy systems. Current behavioral analysis systems rely mostly on static test conditions and manual procedures that are time-consuming and labor intensive. Tracking and precise quantification of locomotory activities of multiple small aquatic organisms requires high-resolution optical data recording. This is often problematic due to small size of fast moving animals and limitations of culture vessels that are not specially designed for video data recording. In this work, we capitalized on recent advances in miniaturized CMOS cameras, high resolution optics and biomicrofluidic technologies to develop near real-time water quality sensing using locomotory activities of small marine invertebrates. We present proof-of-concept integration of high-resolution time-resolved video recording system and high-throughput miniaturized perfusion biomicrofluidic platform for optical tracking of nauplii of marine crustacean Artemia franciscana. Preliminary data demonstrate that Artemia sp. exhibits rapid alterations of swimming patterns in response to toxicant exposure. The combination of video-microscopy and biomicrofluidic platform facilitated straightforward recording of fast moving objects. We envisage that prospectively such system can be scaled up to perform high-throughput water quality sensing in a robotic biomonitoring facility.
Harford, Mirae; Catherall, Jacqueline; Gerry, Stephen; Young, Duncan; Watkinson, Peter
2017-10-25
For many vital signs, monitoring methods require contact with the patient and/or are invasive in nature. There is increasing interest in developing still and video image-guided monitoring methods that are non-contact and non-invasive. We will undertake a systematic review of still and video image-based monitoring methods. We will perform searches in multiple databases which include MEDLINE, Embase, CINAHL, Cochrane library, IEEE Xplore and ACM Digital Library. We will use OpenGrey and Google searches to access unpublished or commercial data. We will not use language or publication date restrictions. The primary goal is to summarise current image-based vital signs monitoring methods, limited to heart rate, respiratory rate, oxygen saturations and blood pressure. Of particular interest will be the effectiveness of image-based methods compared to reference devices. Other outcomes of interest include the quality of the method comparison studies with respect to published reporting guidelines, any limitations of non-contact non-invasive technology and application in different populations. To the best of our knowledge, this is the first systematic review of image-based non-contact methods of vital signs monitoring. Synthesis of currently available technology will facilitate future research in this highly topical area. PROSPERO CRD42016029167.
ERIC Educational Resources Information Center
Bergman, Daniel
2015-01-01
This study examined the effects of audio and video self-recording on preservice teachers' written reflections. Participants (n = 201) came from a secondary teaching methods course and its school-based (clinical) fieldwork. The audio group (n[subscript A] = 106) used audio recorders to monitor their teaching in fieldwork placements; the video group…
Mishra, Vikas; Gautier, Nicole M; Glasscock, Edward
2018-01-29
In epilepsy, seizures can evoke cardiac rhythm disturbances such as heart rate changes, conduction blocks, asystoles, and arrhythmias, which can potentially increase risk of sudden unexpected death in epilepsy (SUDEP). Electroencephalography (EEG) and electrocardiography (ECG) are widely used clinical diagnostic tools to monitor for abnormal brain and cardiac rhythms in patients. Here, a technique to simultaneously record video, EEG, and ECG in mice to measure behavior, brain, and cardiac activities, respectively, is described. The technique described herein utilizes a tethered (i.e., wired) recording configuration in which the implanted electrode on the head of the mouse is hard-wired to the recording equipment. Compared to wireless telemetry recording systems, the tethered arrangement possesses several technical advantages such as a greater possible number of channels for recording EEG or other biopotentials; lower electrode costs; and greater frequency bandwidth (i.e., sampling rate) of recordings. The basics of this technique can also be easily modified to accommodate recording other biosignals, such as electromyography (EMG) or plethysmography for assessment of muscle and respiratory activity, respectively. In addition to describing how to perform the EEG-ECG recordings, we also detail methods to quantify the resulting data for seizures, EEG spectral power, cardiac function, and heart rate variability, which we demonstrate in an example experiment using a mouse with epilepsy due to Kcna1 gene deletion. Video-EEG-ECG monitoring in mouse models of epilepsy or other neurological disease provides a powerful tool to identify dysfunction at the level of the brain, heart, or brain-heart interactions.
Adaptive video-based vehicle classification technique for monitoring traffic.
DOT National Transportation Integrated Search
2015-08-01
This report presents a methodology for extracting two vehicle features, vehicle length and number of axles in order : to classify the vehicles from video, based on Federal Highway Administration (FHWA)s recommended vehicle : classification scheme....
Remote environmental sensor array system
NASA Astrophysics Data System (ADS)
Hall, Geoffrey G.
This thesis examines the creation of an environmental monitoring system for inhospitable environments. It has been named The Remote Environmental Sensor Array System or RESA System for short. This thesis covers the development of RESA from its inception, to the design and modeling of the hardware and software required to make it functional. Finally, the actual manufacture, and laboratory testing of the finished RESA product is discussed and documented. The RESA System is designed as a cost-effective way to bring sensors and video systems to the underwater environment. It contains as water quality probe with sensors such as dissolved oxygen, pH, temperature, specific conductivity, oxidation-reduction potential and chlorophyll a. In addition, an omni-directional hydrophone is included to detect underwater acoustic signals. It has a colour, high-definition and a low-light, black and white camera system, which it turn are coupled to a laser scaling system. Both high-intensity discharge and halogen lighting system are included to illuminate the video images. The video and laser scaling systems are manoeuvred using pan and tilt units controlled from an underwater computer box. Finally, a sediment profile imager is included to enable profile images of sediment layers to be acquired. A control and manipulation system to control the instruments and move the data across networks is integrated into the underwater system while a power distribution node provides the correct voltages to power the instruments. Laboratory testing was completed to ensure that the different instruments associated with the RESA performed as designed. This included physical testing of the motorized instruments, calibration of the instruments, benchmark performance testing and system failure exercises.
Context-Aware Fusion of RGB and Thermal Imagery for Traffic Monitoring
Alldieck, Thiemo; Bahnsen, Chris H.; Moeslund, Thomas B.
2016-01-01
In order to enable a robust 24-h monitoring of traffic under changing environmental conditions, it is beneficial to observe the traffic scene using several sensors, preferably from different modalities. To fully benefit from multi-modal sensor output, however, one must fuse the data. This paper introduces a new approach for fusing color RGB and thermal video streams by using not only the information from the videos themselves, but also the available contextual information of a scene. The contextual information is used to judge the quality of a particular modality and guides the fusion of two parallel segmentation pipelines of the RGB and thermal video streams. The potential of the proposed context-aware fusion is demonstrated by extensive tests of quantitative and qualitative characteristics on existing and novel video datasets and benchmarked against competing approaches to multi-modal fusion. PMID:27869730
A video method to study Drosophila sleep.
Zimmerman, John E; Raizen, David M; Maycock, Matthew H; Maislin, Greg; Pack, Allan I
2008-11-01
To use video to determine the accuracy of the infrared beam-splitting method for measuring sleep in Drosophila and to determine the effect of time of day, sex, genotype, and age on sleep measurements. A digital image analysis method based on frame subtraction principle was developed to distinguish a quiescent from a moving fly. Data obtained using this method were compared with data obtained using the Drosophila Activity Monitoring System (DAMS). The location of the fly was identified based on its centroid location in the subtracted images. The error associated with the identification of total sleep using DAMS ranged from 7% to 95% and depended on genotype, sex, age, and time of day. The degree of the total sleep error was dependent on genotype during the daytime (P < 0.001) and was dependent on age during both the daytime and the nighttime (P < 0.001 for both). The DAMS method overestimated sleep bout duration during both the day and night, and the degree of these errors was genotype dependent (P < 0.001). Brief movements that occur during sleep bouts can be accurately identified using video. Both video and DAMS detected a homeostatic response to sleep deprivation. Video digital analysis is more accurate than DAMS in fly sleep measurements. In particular, conclusions drawn from DAMS measurements regarding daytime sleep and sleep architecture should be made with caution. Video analysis also permits the assessment of fly position and brief movements during sleep.
A video event trigger for high frame rate, high resolution video technology
NASA Astrophysics Data System (ADS)
Williams, Glenn L.
1991-12-01
When video replaces film the digitized video data accumulates very rapidly, leading to a difficult and costly data storage problem. One solution exists for cases when the video images represent continuously repetitive 'static scenes' containing negligible activity, occasionally interrupted by short events of interest. Minutes or hours of redundant video frames can be ignored, and not stored, until activity begins. A new, highly parallel digital state machine generates a digital trigger signal at the onset of a video event. High capacity random access memory storage coupled with newly available fuzzy logic devices permits the monitoring of a video image stream for long term or short term changes caused by spatial translation, dilation, appearance, disappearance, or color change in a video object. Pretrigger and post-trigger storage techniques are then adaptable for archiving the digital stream from only the significant video images.
A video event trigger for high frame rate, high resolution video technology
NASA Technical Reports Server (NTRS)
Williams, Glenn L.
1991-01-01
When video replaces film the digitized video data accumulates very rapidly, leading to a difficult and costly data storage problem. One solution exists for cases when the video images represent continuously repetitive 'static scenes' containing negligible activity, occasionally interrupted by short events of interest. Minutes or hours of redundant video frames can be ignored, and not stored, until activity begins. A new, highly parallel digital state machine generates a digital trigger signal at the onset of a video event. High capacity random access memory storage coupled with newly available fuzzy logic devices permits the monitoring of a video image stream for long term or short term changes caused by spatial translation, dilation, appearance, disappearance, or color change in a video object. Pretrigger and post-trigger storage techniques are then adaptable for archiving the digital stream from only the significant video images.
Sañudo, Borja; Rueda, David; Pozo-Cruz, Borja Del; de Hoyo, Moisés; Carrasco, Luis
2016-10-01
Sañudo, B, Rueda, D, del Pozo-Cruz, B, de Hoyo, M, and Carrasco, L. Validation of a video analysis software package for quantifying movement velocity in resistance exercises. J Strength Cond Res 30(10): 2934-2941, 2016-The aim of this study was to establish the validity of a video analysis software package in measuring mean propulsive velocity (MPV) and the maximal velocity during bench press. Twenty-one healthy males (21 ± 1 year) with weight training experience were recruited, and the MPV and the maximal velocity of the concentric phase (Vmax) were compared with a linear position transducer system during a standard bench press exercise. Participants performed a 1 repetition maximum test using the supine bench press exercise. The testing procedures involved the simultaneous assessment of bench press propulsive velocity using 2 kinematic (linear position transducer and semi-automated tracking software) systems. High Pearson's correlation coefficients for MPV and Vmax between both devices (r = 0.473 to 0.993) were observed. The intraclass correlation coefficients for barbell velocity data and the kinematic data obtained from video analysis were high (>0.79). In addition, the low coefficients of variation indicate that measurements had low variability. Finally, Bland-Altman plots with the limits of agreement of the MPV and Vmax with different loads showed a negative trend, which indicated that the video analysis had higher values than the linear transducer. In conclusion, this study has demonstrated that the software used for the video analysis was an easy to use and cost-effective tool with a very high degree of concurrent validity. This software can be used to evaluate changes in velocity of training load in resistance training, which may be important for the prescription and monitoring of training programmes.
A low delay transmission method of multi-channel video based on FPGA
NASA Astrophysics Data System (ADS)
Fu, Weijian; Wei, Baozhi; Li, Xiaobin; Wang, Quan; Hu, Xiaofei
2018-03-01
In order to guarantee the fluency of multi-channel video transmission in video monitoring scenarios, we designed a kind of video format conversion method based on FPGA and its DMA scheduling for video data, reduces the overall video transmission delay.In order to sace the time in the conversion process, the parallel ability of FPGA is used to video format conversion. In order to improve the direct memory access (DMA) writing transmission rate of PCIe bus, a DMA scheduling method based on asynchronous command buffer is proposed. The experimental results show that this paper designs a low delay transmission method based on FPGA, which increases the DMA writing transmission rate by 34% compared with the existing method, and then the video overall delay is reduced to 23.6ms.
Design of a Wireless Sensor Network Platform for Tele-Homecare
Chung, Yu-Fang; Liu, Chia-Hui
2013-01-01
The problem of an ageing population has become serious in the past few years as the degeneration of various physiological functions has resulted in distinct chronic diseases in the elderly. Most elderly are not willing to leave home for healthcare centers, but caring for patients at home eats up caregiver resources, and can overwhelm patients' families. Besides, a lot of chronic disease symptoms cause the elderly to visit hospitals frequently. Repeated examinations not only exhaust medical resources, but also waste patients' time and effort. To make matters worse, this healthcare system does not actually appear to be effective as expected. In response to these problems, a wireless remote home care system is designed in this study, where ZigBee is used to set up a wireless network for the users to take measurements anytime and anywhere. Using suitable measuring devices, users' physiological signals are measured, and their daily conditions are monitored by various sensors. Being transferred through ZigBee network, vital signs are analyzed in computers which deliver distinct alerts to remind the users and the family of possible emergencies. The system could be further combined with electric appliances to remotely control the users' environmental conditions. The environmental monitoring function can be activated to transmit in real time dynamic images of the cared to medical personnel through the video function when emergencies occur. Meanwhile, in consideration of privacy, the video camera would be turned on only when it is necessary. The caregiver could adjust the angle of camera to a proper position and observe the current situation of the cared when a sensor on the cared or the environmental monitoring system detects exceptions. All physiological data are stored in the database for family enquiries or accurate diagnoses by medical personnel. PMID:24351630
Design of a wireless sensor network platform for tele-homecare.
Chung, Yu-Fang; Liu, Chia-Hui
2013-12-12
The problem of an ageing population has become serious in the past few years as the degeneration of various physiological functions has resulted in distinct chronic diseases in the elderly. Most elderly are not willing to leave home for healthcare centers, but caring for patients at home eats up caregiver resources, and can overwhelm patients' families. Besides, a lot of chronic disease symptoms cause the elderly to visit hospitals frequently. Repeated examinations not only exhaust medical resources, but also waste patients' time and effort. To make matters worse, this healthcare system does not actually appear to be effective as expected. In response to these problems, a wireless remote home care system is designed in this study, where ZigBee is used to set up a wireless network for the users to take measurements anytime and anywhere. Using suitable measuring devices, users' physiological signals are measured, and their daily conditions are monitored by various sensors. Being transferred through ZigBee network, vital signs are analyzed in computers which deliver distinct alerts to remind the users and the family of possible emergencies. The system could be further combined with electric appliances to remotely control the users' environmental conditions. The environmental monitoring function can be activated to transmit in real time dynamic images of the cared to medical personnel through the video function when emergencies occur. Meanwhile, in consideration of privacy, the video camera would be turned on only when it is necessary. The caregiver could adjust the angle of camera to a proper position and observe the current situation of the cared when a sensor on the cared or the environmental monitoring system detects exceptions. All physiological data are stored in the database for family enquiries or accurate diagnoses by medical personnel.
NASA Astrophysics Data System (ADS)
Akimoto, Makio; Chen, Yu; Miyazaki, Michio; Yamashita, Toyonobu; Miyakawa, Michio; Hata, Mieko
The skin is unique as an organ that is highly accessible to direct visual inspection with light. Visual inspection of cutaneous morphology is the mainstay of clinical dermatology, but relies heavily on subjective assessment by the skilled dermatologists. We present an imaging colorimeter of non-contact skin color measuring system and some experimented results using such instrument. The system is comprised by a video camera, light source, a real-time image processing board, magneto optics disk and personal computer which controls the entire system. The CIE-L*a*b* uniform color space is used. This system is used for monitoring of some clinical diagnosis. The instrument is non-contact, easy to operate, and has a high precision unlike the conventional colorimeters. This instrument is useful for clinical diagnoses, monitoring and evaluating the effectiveness of treatment.
Guaranha, Mirian S B; Garzon, Eliana; Buchpiguel, Carlos A; Tazima, Sérgio; Yacubian, Elza M T; Sakamoto, Américo C
2005-01-01
Hyperventilation is an activation method that provokes physiological slowing of brain rhythms, interictal discharges, and seizures, especially in generalized idiopathic epilepsies. In this study we assessed its effectiveness in inducing focal seizures during video-EEG monitoring. We analyzed the effects of hyperventilation (HV) during video-EEG monitoring (video-EEG) of patients with medically intractable focal epilepsies. We excluded children younger than 10 years, mentally retarded patients, and individuals with frequent seizures. We analyzed 97 patients; 24 had positive seizure activation (PSA), and 73 had negative seizure activation (NSA). No differences were found between groups regarding sex, age, age at epilepsy onset, duration of epilepsy, frequency of seizures, and etiology. Temporal lobe epilepsies were significantly more activated than frontal lobe epilepsies. Spontaneous and activated seizures did not differ in terms of their clinical characteristics, and the activation did not affect the performance of ictal single-photon emission computed tomography (SPECT). HV is a safe and effective method of seizure activation during monitoring. It does not modify any of the characteristics of the seizures and allows the obtaining of valuable ictal SPECTs. This observation is clinically relevant and suggests the effectiveness and the potential of HV in shortening the presurgical evaluation, especially of temporal lobe epilepsy patients, consequently reducing its costs and increasing the number of candidates for epilepsy surgery.
Alessi, Sheila M; Rash, Carla J; Petry, Nancy M
2017-03-01
Abstinence reinforcement is efficacious for improving smoking treatment outcomes, but practical constraints related to the need for multiple in-person carbon monoxide (CO) breath tests daily to verify smoking abstinence have limited its use. This study tested an mHealth procedure to remotely monitor and reinforce smoking abstinence in individuals' natural environment. Eligible treatment-seeking smokers (N = 90) were randomized to (1) usual care and ecological monitoring with abstinence reinforcement (mHealth reinforcement) or (2) without reinforcement (mHealth monitoring). Usual care was 8 weeks of transdermal nicotine and twice-weekly telephone counseling. Following training, an interactive voice response system prompted participants to conduct CO tests 1-3 daily at pseudorandom times (7 am to 10 pm) for 4 weeks. When prompted, participants used a study cell phone and CO monitor to complete a CO self-test, video record the process, and submit videos using multimedia messaging. mHealth reinforcement participants could earn prizes for smoking-negative on-time CO tests. The interactive voice response generated preliminary earnings immediately. Earnings were finalized by comparing video records against participants' self-reports. mHealth reinforcement was associated with a greater proportion of smoking-negative CO tests, longest duration of prolonged abstinence, and point-prevalence abstinence during the monitoring/reinforcement phase compared to mHealth monitoring (p < .01, d = 0.8-1.3). Follow-up (weeks 4-24) analyses indicated main effects of reinforcement on point-prevalence abstinence and proportion of days smoked (p ≤ .05); values were comparable by week 24. mHealth reinforcement has short-term efficacy. Research on methods to enhance and sustain benefits is needed. This study suggests that mHealth abstinence reinforcement is efficacious and may present temporal and spatial opportunities to research, engage, and support smokers trying to quit that do not exist with conventional (not technology-based) reinforcement interventions. © The Author 2016. Published by Oxford University Press on behalf of the Society for Research on Nicotine and Tobacco. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Mobile Delivery of Treatment for Alcohol Use Disorders
Quanbeck, Andrew; Chih, Ming-Yuan; Isham, Andrew; Johnson, Roberta; Gustafson, David
2014-01-01
Several systems for treating alcohol-use disorders (AUDs) exist that operate on mobile phones. These systems are categorized into four groups: text-messaging monitoring and reminder systems, text-messaging intervention systems, comprehensive recovery management systems, and game-based systems. Text-messaging monitoring and reminder systems deliver reminders and prompt reporting of alcohol consumption, enabling continuous monitoring of alcohol use. Text-messaging intervention systems additionally deliver text messages designed to promote abstinence and recovery. Comprehensive recovery management systems use the capabilities of smart-phones to provide a variety of tools and services that can be tailored to individuals, including in-the-moment assessments and access to peer discussion groups. Game-based systems engage the user using video games. Although many commercial applications for treatment of AUDs exist, few (if any) have empirical evidence of effectiveness. The available evidence suggests that although texting-based applications may have beneficial effects, they are probably insufficient as interventions for AUDs. Comprehensive recovery management systems have the strongest theoretical base and have yielded the strongest and longest-lasting effects, but challenges remain, including cost, understanding which features account for effects, and keeping up with technological advances. PMID:26259005
Reduction in Fall Rate in Dementia Managed Care Through Video Incident Review: Pilot Study.
Bayen, Eleonore; Jacquemot, Julien; Netscher, George; Agrawal, Pulkit; Tabb Noyce, Lynn; Bayen, Alexandre
2017-10-17
Falls of individuals with dementia are frequent, dangerous, and costly. Early detection and access to the history of a fall is crucial for efficient care and secondary prevention in cognitively impaired individuals. However, most falls remain unwitnessed events. Furthermore, understanding why and how a fall occurred is a challenge. Video capture and secure transmission of real-world falls thus stands as a promising assistive tool. The objective of this study was to analyze how continuous video monitoring and review of falls of individuals with dementia can support better quality of care. A pilot observational study (July-September 2016) was carried out in a Californian memory care facility. Falls were video-captured (24×7), thanks to 43 wall-mounted cameras (deployed in all common areas and in 10 out of 40 private bedrooms of consenting residents and families). Video review was provided to facility staff, thanks to a customized mobile device app. The outcome measures were the count of residents' falls happening in the video-covered areas, the acceptability of video recording, the analysis of video review, and video replay possibilities for care practice. Over 3 months, 16 falls were video-captured. A drop in fall rate was observed in the last month of the study. Acceptability was good. Video review enabled screening for the severity of falls and fall-related injuries. Video replay enabled identifying cognitive-behavioral deficiencies and environmental circumstances contributing to the fall. This allowed for secondary prevention in high-risk multi-faller individuals and for updated facility care policies regarding a safer living environment for all residents. Video monitoring offers high potential to support conventional care in memory care facilities. ©Eleonore Bayen, Julien Jacquemot, George Netscher, Pulkit Agrawal, Lynn Tabb Noyce, Alexandre Bayen. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 17.10.2017.
Fluorescent screens and image processing for the APS linac test stand
DOE Office of Scientific and Technical Information (OSTI.GOV)
Berg, W.; Ko, K.
A fluorescent screen was used to monitor relative beam position and spot size of a 56-MeV electron beam in the linac test stand. A chromium doped alumina ceramic screen inserted into the beam was monitored by a video camera. The resulting image was captured using a frame grabber and stored into memory. Reconstruction and analysis of the stored image was performed using PV-WAVE. This paper will discuss the hardware and software implementation of the fluorescent screen and imaging system. Proposed improvements for the APS linac fluorescent screens and image processing will also be discussed.
Promoting mental health recovery and improving clinical assessment using video technology.
Bradford, Daniel W; Cuddeback, Gary; Elbogen, Eric B
2017-12-01
Although individuals with medical problems (e.g., diabetes, hypertension) can monitor their symptoms using objective measures (e.g., blood glucose, blood pressure), objective measures are not typically used by individuals with psychotic disorders to monitor symptoms of mental illness. To examine the benefits and limitations of the use of video self-observation for treatment of individuals with psychotic disorders. The authors reviewed studies examining video self-observation among individuals with severe mental illnesses. Individuals with psychotic disorders who viewed videos of themselves while symptomatic reported some benefit to this approach, with 1 study showing sustained improvement in understanding of mental illness. Still, some individuals reported negative feelings about the process, and also attributed symptoms to stress or drug abuse rather than their psychotic disorder. The authors found no studies examining the potential for video self-observation as a strategy to improve clinical decision-making in the context of mental health care. Implications of this approach for mental health recovery and clinical practice are discussed. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Artese, Serena; Achilli, Vladimiro; Zinno, Raffaele
2018-01-01
Deck inclination and vertical displacements are among the most important technical parameters to evaluate the health status of a bridge and to verify its bearing capacity. Several methods, both conventional and innovative, are used for structural rotations and displacement monitoring; however, none of these allow, at the same time, precision, automation, static and dynamic monitoring without using high cost instrumentation. The proposed system uses a common laser pointer and image processing. The elastic line inclination is measured by analyzing the single frames of an HD video of the laser beam imprint projected on a flat target. For the image processing, a code was developed in Matlab® that provides instantaneous rotation and displacement of a bridge, charged by a mobile load. An important feature is the synchronization of the load positioning, obtained by a GNSS receiver or by a video. After the calibration procedures, a test was carried out during the movements of a heavy truck maneuvering on a bridge. Data acquisition synchronization allowed us to relate the position of the truck on the deck to inclination and displacements. The inclination of the elastic line at the support was obtained with a precision of 0.01 mrad. The results demonstrate the suitability of the method for dynamic load tests, and the control and monitoring of bridges. PMID:29370082
Conger, R.W.
1997-01-01
Between April and June 1997, the U.S. Navy contracted Brown and Root Environmental, Inc., to drill 20 monitor wells at the Willow Grove Naval Air Station in Horsham Township, Montgomery County, Pa. The wells were installed to monitor water levels and allow collection of water samples from shallow, intermediate, and deep water-bearing zones. Analysis of the samples will determine the horizontal and vertical distribution of any contaminated ground water migrating from known contaminant sources. Eight wells were drilled near the Fire Training Area (Site 5), five wells near the 9th Street Landfill (Site 3), four wells at the Antenna Field Landfill (Site 2), and three wells near Privet Road Compound (Site 1). Depths range from 73 to 167 feet below land surface. The U.S. Geological Survey conducted borehole-geophysical and borehole-video logging to identify water-bearing zones so that appropriate intervals could be screened in each monitor well. Geophysical logs were run on the 20 monitor wells and 1 existing well. Video logs were run on 16 wells. Caliper and video logs were used to locate fractures, inflections on fluid-temperature and fluid-resistivity logs were used to locate possible water-bearing fractures, and flowmeter measurements verified these locations. Single-point-resistance and natural-gamma logs provided information on stratigraphy. After interpretation of geophysical logs, video logs, and driller's notes, all wells were screened such that water-level fluctuations could be monitored and discrete water samples collected from one or more shallow and intermediate water-bearing zones in each borehole.
Costa, Daniel G.; Collotta, Mario; Pau, Giovanni; Duran-Faundez, Cristian
2017-01-01
The advance of technologies in several areas has allowed the development of smart city applications, which can improve the way of life in modern cities. When employing visual sensors in that scenario, still images and video streams may be retrieved from monitored areas, potentially providing valuable data for many applications. Actually, visual sensor networks may need to be highly dynamic, reflecting the changing of parameters in smart cities. In this context, characteristics of visual sensors and conditions of the monitored environment, as well as the status of other concurrent monitoring systems, may affect how visual sensors collect, encode and transmit information. This paper proposes a fuzzy-based approach to dynamically configure the way visual sensors will operate concerning sensing, coding and transmission patterns, exploiting different types of reference parameters. This innovative approach can be considered as the basis for multi-systems smart city applications based on visual monitoring, potentially bringing significant results for this research field. PMID:28067777
Costa, Daniel G; Collotta, Mario; Pau, Giovanni; Duran-Faundez, Cristian
2017-01-05
The advance of technologies in several areas has allowed the development of smart city applications, which can improve the way of life in modern cities. When employing visual sensors in that scenario, still images and video streams may be retrieved from monitored areas, potentially providing valuable data for many applications. Actually, visual sensor networks may need to be highly dynamic, reflecting the changing of parameters in smart cities. In this context, characteristics of visual sensors and conditions of the monitored environment, as well as the status of other concurrent monitoring systems, may affect how visual sensors collect, encode and transmit information. This paper proposes a fuzzy-based approach to dynamically configure the way visual sensors will operate concerning sensing, coding and transmission patterns, exploiting different types of reference parameters. This innovative approach can be considered as the basis for multi-systems smart city applications based on visual monitoring, potentially bringing significant results for this research field.
Multisensor monitoring of deforestation in the Guinea Highlands of West Africa
NASA Technical Reports Server (NTRS)
Gilruth, Peter T.; Hutchinson, Charles F.
1990-01-01
Multiple remote sensing systems were used to assess deforestation in the Guinea Highlands (Fouta Djallon) of West Africa. Sensor systems included: (1) historical (1953) and current (1989) aerial mapping photography; (2) current large-scale, small format (35mm) aerial photography; (3) current aerial video imagery; and (4) historical (1973) and recent (1985) LANDSAT MSS. Photographic and video data were manually interpreted and incorporated in a vector-based geographic information system (GIS). LANDSAT data were digitally classified. General results showed an increase in permanent and shifting agriculture over the past 35 years. This finding is consistent with hypothesized strategies to increase agricultural production through a shortening of the fallow period in areas of shifting cultivation. However, results also show that the total area of both permanent and shifting agriculture had expanded at the expense of natural vegetation and an increase in erosion. Although sequential LANDSAT MSS cannot be used in this region to accurately map land over, the location, direction and magnitude of changes can be detected in relative terms. Historical and current aerial photography can be used to map agricultural land use changes with some accuracy. Video imagery is useful as ancillary data for mapping vegetation. The most prudent approach to mapping deforestation would incorporate a multistage approach based on these sensors.
Optimized Two-Party Video Chat with Restored Eye Contact Using Graphics Hardware
NASA Astrophysics Data System (ADS)
Dumont, Maarten; Rogmans, Sammy; Maesen, Steven; Bekaert, Philippe
We present a practical system prototype to convincingly restore eye contact between two video chat participants, with a minimal amount of constraints. The proposed six-fold camera setup is easily integrated into the monitor frame, and is used to interpolate an image as if its virtual camera captured the image through a transparent screen. The peer user has a large freedom of movement, resulting in system specifications that enable genuine practical usage. Our software framework thereby harnesses the powerful computational resources inside graphics hardware, and maximizes arithmetic intensity to achieve over real-time performance up to 42 frames per second for 800 ×600 resolution images. Furthermore, an optimal set of fine tuned parameters are presented, that optimizes the end-to-end performance of the application to achieve high subjective visual quality, and still allows for further algorithmic advancement without loosing its real-time capabilities.
Flow visualization and characterization of evaporating liquid drops
NASA Technical Reports Server (NTRS)
Chao, David F. (Inventor); Zhang, Nengli (Inventor)
2004-01-01
An optical system, consisting of drop-reflection image, reflection-refracted shadowgraphy and top-view photography, is used to measure the spreading and instant dynamic contact angle of a volatile-liquid drop on a non-transparent substrate. The drop-reflection image and the shadowgraphy is shown by projecting the images of a collimated laser beam partially reflected by the drop and partially passing through the drop onto a screen while the top view photograph is separately viewed by use of a camera video recorder and monitor. For a transparent liquid on a reflective solid surface, thermocapillary convection in the drop, induced by evaporation, can be viewed nonintrusively, and the drop real-time profile data are synchronously recorded by video recording systems. Experimental results obtained from this technique clearly reveal that evaporation and thermocapillary convection greatly affect the spreading process and the characteristics of dynamic contact angle of the drop.
NASA Technical Reports Server (NTRS)
2011-01-01
Topics covered include: Amperometric Solid Electrolyte Oxygen Microsensors with Easy Batch Fabrication; Two-Axis Direct Fluid Shear Stress Sensor for Aerodynamic Applications; Target Assembly to Check Boresight Alignment of Active Sensors; Virtual Sensor Test Instrumentation; Evaluation of the Reflection Coefficient of Microstrip Elements for Reflectarray Antennas; Miniaturized Ka-Band Dual-Channel Radar; Continuous-Integration Laser Energy Lidar Monitor; Miniaturized Airborne Imaging Central Server System; Radiation-Tolerant, SpaceWire-Compatible Switching Fabric; Small Microprocessor for ASIC or FPGA Implementation; Source-Coupled, N-Channel, JFET-Based Digital Logic Gate Structure Using Resistive Level Shifters; High-Voltage-Input Level Translator Using Standard CMOS; Monitoring Digital Closed-Loop Feedback Systems; MASCOT - MATLAB Stability and Control Toolbox; MIRO Continuum Calibration for Asteroid Mode; GOATS Image Projection Component; Coded Modulation in C and MATLAB; Low-Dead-Volume Inlet for Vacuum Chamber; Thermal Control Method for High-Current Wire Bundles by Injecting a Thermally Conductive Filler; Method for Selective Cleaning of Mold Release from Composite Honeycomb Surfaces; Infrared-Bolometer Arrays with Reflective Backshorts; Commercialization of LARC (trade mark) -SI Polyimide Technology; Novel Low-Density Ablators Containing Hyperbranched Poly(azomethine)s; Carbon Nanotubes on Titanium Substrates for Stray Light Suppression; Monolithic, High-Speed Fiber-Optic Switching Array for Lidar; Grid-Tied Photovoltaic Power System; Spectroelectrochemical Instrument Measures TOC; A Miniaturized Video System for Monitoring Drosophila Behavior; Hydrofocusing Bioreactor Produces Anti-Cancer Alkaloids; Creep Measurement Video Extensometer; Radius of Curvature Measurement of Large Optics Using Interferometry and Laser Tracker n-B-pi-p Superlattice Infrared Detector; Safe Onboard Guidance and Control Under Probabilistic Uncertainty; General Tool for Evaluating High-Contrast Coronagraphic Telescope Performance Error Budgets; Hidden Statistics of Schroedinger Equation; Optimal Padding for the Two-Dimensional Fast Fourier Transform; Spatial Query for Planetary Data; Higher Order Mode Coupling in Feed Waveguide of a Planar Slot Array Antenna; Evolutionary Computational Methods for Identifying Emergent Behavior in Autonomous Systems; Sampling Theorem in Terms of the Bandwidth and Sampling Interval; Meteoroid/Orbital Debris Shield Engineering Development Practice and Procedure; Self-Balancing, Optical-Center-Pivot, Fast-Steering Mirror; Wireless Orbiter Hang-Angle Inclinometer System; and Internal Electrostatic Discharge Monitor - IESDM.
VLSI-based video event triggering for image data compression
NASA Astrophysics Data System (ADS)
Williams, Glenn L.
1994-02-01
Long-duration, on-orbit microgravity experiments require a combination of high resolution and high frame rate video data acquisition. The digitized high-rate video stream presents a difficult data storage problem. Data produced at rates of several hundred million bytes per second may require a total mission video data storage requirement exceeding one terabyte. A NASA-designed, VLSI-based, highly parallel digital state machine generates a digital trigger signal at the onset of a video event. High capacity random access memory storage coupled with newly available fuzzy logic devices permits the monitoring of a video image stream for long term (DC-like) or short term (AC-like) changes caused by spatial translation, dilation, appearance, disappearance, or color change in a video object. Pre-trigger and post-trigger storage techniques are then adaptable to archiving only the significant video images.
VLSI-based Video Event Triggering for Image Data Compression
NASA Technical Reports Server (NTRS)
Williams, Glenn L.
1994-01-01
Long-duration, on-orbit microgravity experiments require a combination of high resolution and high frame rate video data acquisition. The digitized high-rate video stream presents a difficult data storage problem. Data produced at rates of several hundred million bytes per second may require a total mission video data storage requirement exceeding one terabyte. A NASA-designed, VLSI-based, highly parallel digital state machine generates a digital trigger signal at the onset of a video event. High capacity random access memory storage coupled with newly available fuzzy logic devices permits the monitoring of a video image stream for long term (DC-like) or short term (AC-like) changes caused by spatial translation, dilation, appearance, disappearance, or color change in a video object. Pre-trigger and post-trigger storage techniques are then adaptable to archiving only the significant video images.
Real Time Apnoea Monitoring of Children Using the Microsoft Kinect Sensor: A Pilot Study
Al-Naji, Ali; Gibson, Kim; Lee, Sang-Heon; Chahl, Javaan
2017-01-01
The objective of this study was to design a non-invasive system for the observation of respiratory rates and detection of apnoea using analysis of real time image sequences captured in any given sleep position and under any light conditions (even in dark environments). A Microsoft Kinect sensor was used to visualize the variations in the thorax and abdomen from the respiratory rhythm. These variations were magnified, analyzed and detected at a distance of 2.5 m from the subject. A modified motion magnification system and frame subtraction technique were used to identify breathing movements by detecting rapid motion areas in the magnified frame sequences. The experimental results on a set of video data from five subjects (3 h for each subject) showed that our monitoring system can accurately measure respiratory rate and therefore detect apnoea in infants and young children. The proposed system is feasible, accurate, safe and low computational complexity, making it an efficient alternative for non-contact home sleep monitoring systems and advancing health care applications. PMID:28165382
Research of real-time communication software
NASA Astrophysics Data System (ADS)
Li, Maotang; Guo, Jingbo; Liu, Yuzhong; Li, Jiahong
2003-11-01
Real-time communication has been playing an increasingly important role in our work, life and ocean monitor. With the rapid progress of computer and communication technique as well as the miniaturization of communication system, it is needed to develop the adaptable and reliable real-time communication software in the ocean monitor system. This paper involves the real-time communication software research based on the point-to-point satellite intercommunication system. The object-oriented design method is adopted, which can transmit and receive video data and audio data as well as engineering data by satellite channel. In the real-time communication software, some software modules are developed, which can realize the point-to-point satellite intercommunication in the ocean monitor system. There are three advantages for the real-time communication software. One is that the real-time communication software increases the reliability of the point-to-point satellite intercommunication system working. Second is that some optional parameters are intercalated, which greatly increases the flexibility of the system working. Third is that some hardware is substituted by the real-time communication software, which not only decrease the expense of the system and promotes the miniaturization of communication system, but also aggrandizes the agility of the system.
Source-Monitoring Training Facilitates Preschoolers' Eyewitness Memory Performance.
ERIC Educational Resources Information Center
Thierry, Karen L.; Spence, Melanie J.
2002-01-01
Investigated whether source-monitoring training would decrease 3- to 4-year-olds' suggestibility. After observing live or video target-events, children received source-monitoring or recognition (control) training. Found that children given source-monitoring training were more accurate than control group children in response to misleading and…
Fukuda, H; Kawaida, M; Oki, K; Kano, S; Kawasaki, Y; Tsuji, H; Kohno, N
1990-06-01
The phonatory examination was performed while monitoring vocal fold vibration by laryngostrobovideography. Vocal fold vibration was video-taped by a laryngostroboscope and flexible laryngofiberscope inserted through the nasal cavity. Simultaneously, the phonatory examination was conducted with a phonation analyzer. The data were entered into a personal microcomputer via an A/D converter and analyzed to obtain the parameters of sound pitch, sound intensity and mean expiratory air flow volume, which were superimposed on the color video monitor screen.
NASA Technical Reports Server (NTRS)
Richards, Stephanie E. (Compiler); Levine, Howard G.; Romero, Vergel
2016-01-01
Biotube was developed for plant gravitropic research investigating the potential for magnetic fields to orient plant roots as they grow in microgravity. Prior to flight, experimental seeds are placed into seed cassettes, that are capable of containing up to 10 seeds, and inserted between two magnets located within one of three Magnetic Field Chamber (MFC). Biotube is stored within an International Space Station (ISS) stowage locker and provides three levels of containment for chemical fixatives. Features include monitoring of temperature, fixative/ preservative delivery to specimens, and real-time video imaging downlink. Biotube's primary subsystems are: (1) The Water Delivery System that automatically activates and controls the delivery of water (to initiate seed germination). (2) The Fixative Storage and Delivery System that stores and delivers chemical fixative or RNA later to each seed cassette. (3) The Digital Imaging System consisting of 4 charge-coupled device (CCD) cameras, a video multiplexer, a lighting multiplexer, and 16 infrared light-emitting diodes (LEDs) that provide illumination while the photos are being captured. (4) The Command and Data Management System that provides overall control of the integrated subsystems, graphical user interface, system status and error message display, image display, and other functions.
Ali, S M; Reisner, L A; King, B; Cao, A; Auner, G; Klein, M; Pandya, A K
2008-01-01
A redesigned motion control system for the medical robot Aesop allows automating and programming its movements. An IR eye tracking system has been integrated with this control interface to implement an intelligent, autonomous eye gaze-based laparoscopic positioning system. A laparoscopic camera held by Aesop can be moved based on the data from the eye tracking interface to keep the user's gaze point region at the center of a video feedback monitor. This system setup provides autonomous camera control that works around the surgeon, providing an optimal robotic camera platform.
Utility gas turbine combustor viewing system: Volume 2, Engine operating envelope test: Final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morey, W.W.
1988-12-01
This report summarizes the development and field testing of a combustor viewing probe (CVP) as a flame diagnostic monitor for utility gas turbine engines. The prototype system is capable of providing a visual record of combustor flame images, recording flame spectral data, analyzing image and spectral data, and diagnosing certain engine malfunctions. The system should provide useful diagnostic information to utility plant operators, and reduced maintenance costs. The field tests demonstrated the ability of the CVP to monitor combustor flame condition and to relate changes in the engine operation with variations in the flame signature. Engine light off, run upmore » to full speed, the addition of load, and the effect of water injection for NO/sub x/ control could easily be identified on the video monitor. The viewing probe was also valuable in identifying hard startups and shutdowns, as well as transient effects that can seriously harm the engine.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morey, W.W.
1988-12-01
This report summarizes the development and field testing of a combustor viewing probe (CVP) as a flame diagnostic monitor for utility gas turbine engines. The prototype system is capable of providing a visual record of combustor flame images, recording flame spectral data, analyzing image and spectral data, and diagnosing certain engine malfunctions. The system should provide useful diagnostic information to utility plant operators, and reduce maintenance costs. The field tests demonstrated the ability of the CVP to monitor combustor flame condition and to relate changes in the engine operation with variations in the flame signature. Engine light off, run upmore » to full speed, the addition of load, and the effect of water injection for NO/sub x/ control could easily be identified on the video monitor. The viewing probe was also valuable in identifying hard startups and shutdowns, as well as transient effects that can seriously harm the engine. 11 refs.« less
NASA Astrophysics Data System (ADS)
Saad, W. H. M.; Khoo, C. W.; Rahman, S. I. Ab; Ibrahim, M. M.; Saad, N. H. M.
2017-06-01
Getting enough sleep at the right times can help in improving quality of life and protect mental and physical health. This study proposes a portable sleep monitoring device to determine the relationship between the room ambient and quality of sleep. Body condition parameter such as heart rate, body temperature and body movement was used to determine quality of sleep and Audio/video-based monitoring system. The functionality test on all sensors is carried out to make sure that all sensors is working properly. The functionality of the overall system is designed for a better experience with a very minimal intervention to the user. The simple test on the body condition (body temperature and heart rate) while asleep with several different ambient parameters (humidity, brightness and temperature) are varied and the result shows that someone has a better sleep in a dark and colder ambient. This can prove by lower body temperature and lower heart rate.
Passive detection of vehicle loading
NASA Astrophysics Data System (ADS)
McKay, Troy R.; Salvaggio, Carl; Faulring, Jason W.; Salvaggio, Philip S.; McKeown, Donald M.; Garrett, Alfred J.; Coleman, David H.; Koffman, Larry D.
2012-01-01
The Digital Imaging and Remote Sensing Laboratory (DIRS) at the Rochester Institute of Technology, along with the Savannah River National Laboratory is investigating passive methods to quantify vehicle loading. The research described in this paper investigates multiple vehicle indicators including brake temperature, tire temperature, engine temperature, acceleration and deceleration rates, engine acoustics, suspension response, tire deformation and vibrational response. Our investigation into these variables includes building and implementing a sensing system for data collection as well as multiple full-scale vehicle tests. The sensing system includes; infrared video cameras, triaxial accelerometers, microphones, video cameras and thermocouples. The full scale testing includes both a medium size dump truck and a tractor-trailer truck on closed courses with loads spanning the full range of the vehicle's capacity. Statistical analysis of the collected data is used to determine the effectiveness of each of the indicators for characterizing the weight of a vehicle. The final sensing system will monitor multiple load indicators and combine the results to achieve a more accurate measurement than any of the indicators could provide alone.
Laparoscopic skills training using a webcam trainer.
Chung, Steve Y; Landsittel, Douglas; Chon, Chris H; Ng, Christopher S; Fuchs, Gerhard J
2005-01-01
Many sophisticated and expensive trainers have been developed to assist surgeons in learning basic laparoscopic skills. We developed an inexpensive trainer and evaluated its effectiveness. The webcam laparoscopic training device is composed of a webcam, cardboard box, desk lamp and home computer. This homemade trainer was evaluated against 2 commercially available systems, namely the video Pelvitrainer (Karl Storz Endoscopy, Culver City, California) and the dual mirror Simuview (Simulab Corp., Seattle, Washington). The Pelvitrainer consists of a fiberglass box, single lens optic laparoscope, fiberoptic light source, endoscopic camera and video monitor, while the Simuview trainer uses 2 offset, facing mirrors and an uncovered plastic box. A total of 42 participants without prior laparoscopic training were enrolled in the study and asked to execute 2 tasks, that is peg transfer and pattern cutting. Participants were randomly assigned to 6 groups with each group representing a different permutation of trainers to be used. The time required for participants to complete each task was recorded and differences in performance were calculated. Paired t tests, the Wilcoxon signed rank test and ANOVA were performed to analyze the statistical difference in performance times for all conditions. Statistical analyses of the 2 tasks showed no significant difference for the video and webcam trainers. However, the mirror trainer gave significantly higher outcome values for tasks 1 and 2 compared to the video (p = 0.01 and <0.01) and webcam (p = 0.04 and <0.01, respectively) methods. ANOVA indicated no overall difference for tasks 1 and 2 across the orderings (p = 0.36 and 0.99, respectively). However, by attempt 3 the time required to complete the skill tests decreased significantly for all 3 trainers (each p <0.01). Our homemade webcam system is comparable in function to the more elaborate video trainer but superior to the dual mirror trainer. For novice laparoscopists we believe that the webcam system is an inexpensive and effective laparoscopic training device. Furthermore, the webcam system also allows instant recording and review of techniques.
Video-tracker trajectory analysis: who meets whom, when and where
NASA Astrophysics Data System (ADS)
Jäger, U.; Willersinn, D.
2010-04-01
Unveiling unusual or hostile events by observing manifold moving persons in a crowd is a challenging task for human operators, especially when sitting in front of monitor walls for hours. Typically, hostile events are rare. Thus, due to tiredness and negligence the operator may miss important events. In such situations, an automatic alarming system is able to support the human operator. The system incorporates a processing chain consisting of (1) people tracking, (2) event detection, (3) data retrieval, and (4) display of relevant video sequence overlaid by highlighted regions of interest. In this paper we focus on the event detection stage of the processing chain mentioned above. In our case, the selected event of interest is the encounter of people. Although being based on a rather simple trajectory analysis, this kind of event embodies great practical importance because it paves the way to answer the question "who meets whom, when and where". This, in turn, forms the basis to detect potential situations where e.g. money, weapons, drugs etc. are handed over from one person to another in crowded environments like railway stations, airports or busy streets and places etc.. The input to the trajectory analysis comes from a multi-object video-based tracking system developed at IOSB which is able to track multiple individuals within a crowd in real-time [1]. From this we calculate the inter-distances between all persons on a frame-to-frame basis. We use a sequence of simple rules based on the individuals' kinematics to detect the event mentioned above to output the frame number, the persons' IDs from the tracker and the pixel coordinates of the meeting position. Using this information, a data retrieval system may extract the corresponding part of the recorded video image sequence and finally allows for replaying the selected video clip with a highlighted region of interest to attract the operator's attention for further visual inspection.
(Video Assisted) thoracoscopic surgery: Getting started
Molnar, Tamas F
2007-01-01
Thoracoscopic surgery without or with video assistance (VATS) is simpler and easier to learn as it seems to be. Potential benefits of the procedure in rural surgical environment are outlined while basic requirements and limitations are listed. Thoracoscopy kit, thoracotomy tray at hand, patient monitoring, proper drainage system, pain control and access to chest physiotherapy are the basic requirements. Having headlight, bronchoscope, Ligasure and mechanical staplers offer clear advantages but they are not indispensable. Exploration and evacuation of pleural space, pleurodesis, surgery for Stage I and II thoracic empyema are evidenced fields of VATS procedures. Some of the cases can be performed under controlled local anesthesia. Acute chest trauma cannot be recommended for VATS treatment. Lung cancer is out of the scope of rural surgery. PMID:19789679
1994-07-10
TEMPUS, an electromagnetic levitation facility that allows containerless processing of metallic samples in microgravity, first flew on the IML-2 Spacelab mission. The principle of electromagnetic levitation is used commonly in ground-based experiments to melt and then cool metallic melts below their freezing points without solidification occurring. The TEMPUS operation is controlled by its own microprocessor system; although commands may be sent remotely from the ground and real time adjustments may be made by the crew. Two video cameras, a two-color pyrometer for measuring sample temperatures, and a fast infrared detector for monitoring solidification spikes, will be mounted to the process chamber to facilitate observation and analysis. In addition, a dedicated high-resolution video camera can be attached to the TEMPUS to measure the sample volume precisely.
Promoting health equity: WHO health inequality monitoring at global and national levels.
Hosseinpoor, Ahmad Reza; Bergen, Nicole; Schlotheuber, Anne
2015-01-01
Health equity is a priority in the post-2015 sustainable development agenda and other major health initiatives. The World Health Organization (WHO) has a history of promoting actions to achieve equity in health, including efforts to encourage the practice of health inequality monitoring. Health inequality monitoring systems use disaggregated data to identify disadvantaged subgroups within populations and inform equity-oriented health policies, programs, and practices. This paper provides an overview of a number of recent and current WHO initiatives related to health inequality monitoring at the global and/or national level. We outline the scope, content, and intended uses/application of the following: Health Equity Monitor database and theme page; State of inequality: reproductive, maternal, newborn, and child health report; Handbook on health inequality monitoring: with a focus on low- and middle-income countries; Health inequality monitoring eLearning module; Monitoring health inequality: an essential step for achieving health equity advocacy booklet and accompanying video series; and capacity building workshops conducted in WHO Member States and Regions. The paper concludes by considering how the work of the WHO can be expanded upon to promote the establishment of sustainable and robust inequality monitoring systems across a variety of health topics among Member States and at the global level.
Promoting health equity: WHO health inequality monitoring at global and national levels
Hosseinpoor, Ahmad Reza; Bergen, Nicole; Schlotheuber, Anne
2015-01-01
Background Health equity is a priority in the post-2015 sustainable development agenda and other major health initiatives. The World Health Organization (WHO) has a history of promoting actions to achieve equity in health, including efforts to encourage the practice of health inequality monitoring. Health inequality monitoring systems use disaggregated data to identify disadvantaged subgroups within populations and inform equity-oriented health policies, programs, and practices. Objective This paper provides an overview of a number of recent and current WHO initiatives related to health inequality monitoring at the global and/or national level. Design We outline the scope, content, and intended uses/application of the following: Health Equity Monitor database and theme page; State of inequality: reproductive, maternal, newborn, and child health report; Handbook on health inequality monitoring: with a focus on low- and middle-income countries; Health inequality monitoring eLearning module; Monitoring health inequality: an essential step for achieving health equity advocacy booklet and accompanying video series; and capacity building workshops conducted in WHO Member States and Regions. Conclusions The paper concludes by considering how the work of the WHO can be expanded upon to promote the establishment of sustainable and robust inequality monitoring systems across a variety of health topics among Member States and at the global level. PMID:26387506
Development of SPIES (Space Intelligent Eyeing System) for smart vehicle tracing and tracking
NASA Astrophysics Data System (ADS)
Abdullah, Suzanah; Ariffin Osoman, Muhammad; Guan Liyong, Chua; Zulfadhli Mohd Noor, Mohd; Mohamed, Ikhwan
2016-06-01
SPIES or Space-based Intelligent Eyeing System is an intelligent technology which can be utilized for various applications such as gathering spatial information of features on Earth, tracking system for the movement of an object, tracing system to trace the history information, monitoring driving behavior, security and alarm system as an observer in real time and many more. SPIES as will be developed and supplied modularly will encourage the usage based on needs and affordability of users. SPIES are a complete system with camera, GSM, GPS/GNSS and G-Sensor modules with intelligent function and capabilities. Mainly the camera is used to capture pictures and video and sometimes with audio of an event. Its usage is not limited to normal use for nostalgic purpose but can be used as a reference for security and material of evidence when an undesirable event such as crime occurs. When integrated with space based technology of the Global Navigational Satellite System (GNSS), photos and videos can be recorded together with positioning information. A product of the integration of these technologies when integrated with Information, Communication and Technology (ICT) and Geographic Information System (GIS) will produce innovation in the form of information gathering methods in still picture or video with positioning information that can be conveyed in real time via the web to display location on the map hence creating an intelligent eyeing system based on space technology. The importance of providing global positioning information is a challenge but overcome by SPIES even in areas without GNSS signal reception for the purpose of continuous tracking and tracing capability
Chen, Yen-Lin; Chiang, Hsin-Han; Yu, Chao-Wei; Chiang, Chuan-Yen; Liu, Chuan-Ming; Wang, Jenq-Haur
2012-01-01
This study develops and integrates an efficient knowledge-based system and a component-based framework to design an intelligent and flexible home health care system. The proposed knowledge-based system integrates an efficient rule-based reasoning model and flexible knowledge rules for determining efficiently and rapidly the necessary physiological and medication treatment procedures based on software modules, video camera sensors, communication devices, and physiological sensor information. This knowledge-based system offers high flexibility for improving and extending the system further to meet the monitoring demands of new patient and caregiver health care by updating the knowledge rules in the inference mechanism. All of the proposed functional components in this study are reusable, configurable, and extensible for system developers. Based on the experimental results, the proposed intelligent homecare system demonstrates that it can accomplish the extensible, customizable, and configurable demands of the ubiquitous healthcare systems to meet the different demands of patients and caregivers under various rehabilitation and nursing conditions.
Chen, Yen-Lin; Chiang, Hsin-Han; Yu, Chao-Wei; Chiang, Chuan-Yen; Liu, Chuan-Ming; Wang, Jenq-Haur
2012-01-01
This study develops and integrates an efficient knowledge-based system and a component-based framework to design an intelligent and flexible home health care system. The proposed knowledge-based system integrates an efficient rule-based reasoning model and flexible knowledge rules for determining efficiently and rapidly the necessary physiological and medication treatment procedures based on software modules, video camera sensors, communication devices, and physiological sensor information. This knowledge-based system offers high flexibility for improving and extending the system further to meet the monitoring demands of new patient and caregiver health care by updating the knowledge rules in the inference mechanism. All of the proposed functional components in this study are reusable, configurable, and extensible for system developers. Based on the experimental results, the proposed intelligent homecare system demonstrates that it can accomplish the extensible, customizable, and configurable demands of the ubiquitous healthcare systems to meet the different demands of patients and caregivers under various rehabilitation and nursing conditions. PMID:23112650
Service oriented network architecture for control and management of home appliances
NASA Astrophysics Data System (ADS)
Hayakawa, Hiroshi; Koita, Takahiro; Sato, Kenya
2005-12-01
Recent advances in multimedia network systems and mechatronics have led to the development of a new generation of applications that associate the use of various multimedia objects with the behavior of multiple robotic actors. The connection of audio and video devices through high speed multimedia networks is expected to make the system more convenient to use. For example, many home appliances, such as a video camera, a display monitor, a video recorder, an audio system and so on, are being equipped with a communication interface in the near future. Recently some platforms (i.e. UPnP1, HAVi2 and so on) are proposed for constructing home networks; however, there are some issues to be solved to realize various services by connecting different equipment via the pervasive peer-to-peer network. UPnP offers network connectivity of PCs of intelligent home appliances, practically, which means to require a PC in the network to control other devices. Meanwhile, HAVi has been developed for intelligent AV equipments with sophisticated functions using high CPU power and large memory. Considering the targets of home alliances are embedded systems, this situation raises issues of software and hardware complexity, cost, power consumption and so on. In this study, we have proposed and developed the service oriented network architecture for control and management of home appliances, named SONICA (Service Oriented Network Interoperability for Component Adaptation), to address these issues described before.
Advanced Infant Car Seat Would Increase Highway Safety
NASA Technical Reports Server (NTRS)
Dabney, Richard; Elrod, Susan
2004-01-01
An advanced infant car seat has been proposed to increase highway safety by reducing the incidence of crying, fussy behavior, and other child-related distractions that divert an adult driver s attention from driving. In addition to a conventional infant car seat with safety restraints, the proposed advanced infant car seat would include a number of components and subsystems that would function together as a comprehensive infant-care system that would keep its occupant safe, comfortable, and entertained, and would enable the driver to monitor the baby without having to either stop the car or turn around to face the infant during driving. The system would include a vibrator with bulb switch to operate; the switch would double as a squeeze toy that would make its own specific sound. A music subsystem would include loudspeakers built into the seat plus digital and analog circuitry that would utilize plug-in memory modules to synthesize music or a variety of other sounds. The music subsystem would include a built-in sound generator that could synthesize white noise or a human heartbeat to calm the baby to sleep. A second bulb switch could be used to control the music subsystem and would double as a squeeze toy that would make a distinct sound. An anti-noise sound-suppression system would isolate the baby from potentially disturbing ambient external noises. This subsystem would include small microphones, placed near the baby s ears, to detect ambient noise. The outputs of the microphone would be amplified and fed to the loudspeakers at appropriate amplitude and in a phase opposite that of the detected ambient noise, such that the net ambient sound arriving at the baby s ears would be almost completely cancelled. A video-camera subsystem would enable the driver to monitor the baby visually while continuing to face forward. One or more portable miniature video cameras could be embedded in the side of the infant car seat (see figure) or in a flip-down handle. The outputs of the video cameras would be transmitted by radio or infrared to a portable, miniature receiver/video monitor unit that would be attached to the dashboard of the car. The video-camera subsystem can also be used within transmission/reception range when the seat was removed from the car. The system would include a biotelemetric and tracking subsystem, which would include a Global Positioning System receiver for measuring its location. This subsystem would transmit the location of the infant car seat (even if the seat were not in a car) along with such biometric data as the baby s heart rate, perspiration rate, urinary status, temperature, and rate of breathing. Upon detecting any anomalies in the biometric data, this subsystem would send a warning to a paging device installed in the car or carried by the driver, so that the driver could pull the car off the road to attend to the baby. A motion detector in this subsystem would send a warning if the infant car seat were to be moved or otherwise disturbed unexpectedly while the infant was seated in it: this warning function, in combination with the position- tracking function, could help in finding a baby who had been kidnapped with the seat. Removable rechargeable batteries would enable uninterrupted functioning of all parts of the system while transporting the baby to and from the car. The batteries could be recharged via the cigarette-lighter outlet in the car or by use of an external AC-powered charger.
[Telemetry in the clinical setting].
Hilbel, Thomas; Helms, Thomas M; Mikus, Gerd; Katus, Hugo A; Zugck, Christian
2008-09-01
Telemetric cardiac monitoring was invented in 1949 by Norman J Holter. Its clinical use started in the early 1960s. In the hospital, biotelemetry allows early mobilization of patients with cardiovascular risk and addresses the need for arrhythmia or oxygen saturation monitoring. Nowadays telemetry either uses vendor-specific UHF band broadcasting or the digital ISM band (Industrial, Scientific, and Medical Band) standardized Wi-Fi network technology. Modern telemetry radio transmitters can measure and send multiple physiological parameters like multi-channel ECG, NIPB and oxygen saturation. The continuous measurement of oxygen saturation is mandatory for the remote monitoring of patients with cardiac pacemakers. Real 12-lead ECG systems with diagnostic quality are an advantage for monitoring patients with chest pain syndromes or in drug testing wards. Modern systems are light-weight and deliver a maximum of carrying comfort due to optimized cable design. Important for the system selection is a sophisticated detection algorithm with a maximum reduction of artifacts. Home-monitoring of implantable cardiac devices with telemetric functionalities are becoming popular because it allows remote diagnosis of proper device functionality and also optimization of the device settings. Continuous real-time monitoring at home for patients with chronic disease may be possible in the future using Digital Video Broadcasting Terrestrial (DVB-T) technology in Europe, but is currently not yet available.
Video Surveillance in Mental Health Facilities: Is it Ethical?
Stolovy, Tali; Melamed, Yuval; Afek, Arnon
2015-05-01
Video surveillance is a tool for managing safety and security within public spaces. In mental health facilities, the major benefit of video surveillance is that it enables 24 hour monitoring of patients, which has the potential to reduce violent and aggressive behavior. The major disadvantage is that such observation is by nature intrusive. It diminishes privacy, a factor of huge importance for psychiatric inpatients. Thus, an ongoing debate has developed following the increasing use of cameras in this setting. This article presents the experience of a medium-large academic state hospital that uses video surveillance, and explores the various ethical and administrative aspects of video surveillance in mental health facilities.
Brownian Movement and Avogadro's Number: A Laboratory Experiment.
ERIC Educational Resources Information Center
Kruglak, Haym
1988-01-01
Reports an experimental procedure for studying Einstein's theory of Brownian movement using commercially available latex microspheres and a video camera. Describes how students can monitor sphere motions and determine Avogadro's number. Uses a black and white video camera, microscope, and TV. (ML)
Instrumentation for Infrared Airglow Clutter.
1987-03-10
gain, and filter position to the Camera Head, and monitors these parameters as well as preamp video. GAZER is equipped with a Lenzar wide angle, low...Specifications/Parameters VIDEO SENSOR: Camera ...... . LENZAR Intensicon-8 LLLTV using 2nd gen * micro-channel intensifier and proprietary camera tube
NASA Technical Reports Server (NTRS)
Diner, Daniel B. (Inventor)
1994-01-01
Real-time video presentations are provided in the field of operator-supervised automation and teleoperation, particularly in control stations having movable cameras for optimal viewing of a region of interest in robotics and teleoperations for performing different types of tasks. Movable monitors to match the corresponding camera orientations (pan, tilt, and roll) are provided in order to match the coordinate systems of all the monitors to the operator internal coordinate system. Automated control of the arrangement of cameras and monitors, and of the configuration of system parameters, is provided for optimal viewing and performance of each type of task for each operator since operators have different individual characteristics. The optimal viewing arrangement and system parameter configuration is determined and stored for each operator in performing each of many types of tasks in order to aid the automation of setting up optimal arrangements and configurations for successive tasks in real time. Factors in determining what is optimal include the operator's ability to use hand-controllers for each type of task. Robot joint locations, forces and torques are used, as well as the operator's identity, to identify the current type of task being performed in order to call up a stored optimal viewing arrangement and system parameter configuration.
NASA Technical Reports Server (NTRS)
Das, V. E.; Thomas, C. W.; Zivotofsky, A. Z.; Leigh, R. J.
1996-01-01
Video-based eye-tracking systems are especially suited to studying eye movements during naturally occurring activities such as locomotion, but eye velocity records suffer from broad band noise that is not amenable to conventional filtering methods. We evaluated the effectiveness of combined median and moving-average filters by comparing prefiltered and postfiltered records made synchronously with a video eye-tracker and the magnetic search coil technique, which is relatively noise free. Root-mean-square noise was reduced by half, without distorting the eye velocity signal. To illustrate the practical use of this technique, we studied normal subjects and patients with deficient labyrinthine function and compared their ability to hold gaze on a visual target that moved with their heads (cancellation of the vestibulo-ocular reflex). Patients and normal subjects performed similarly during active head rotation but, during locomotion, patients held their eyes more steadily on the visual target than did subjects.
Cognitive chrono-ethnography lite.
Nakajima, Masato; Yamada, Kosuke C; Kitajima, Muneo
2012-01-01
Conducting field research facilitates understanding human daily activities. Cognitive Chrono-Ethnography (CCE) is a study methodology used to understand how people select actions in daily life by conducting ethnographical field research. CCE consists of measuring monitors' daily activities in a specified field and in-depth interviews using the recorded videos afterward. However, privacy issues may arise when conducting standard CCE with video recordings in a daily field. To resolve these issues, we developed a new study methodology, CCE Lite. To replace video recordings, we created pseudo-first-personview (PFPV) movies using a computer-graphic technique. The PFPV movies were used to remind the monitors of their activities. These movies replicated monitors' activities (e.g., locomotion and change in physical direction), with no human images and voices. We applied CCE Lite in a case study that involved female employees of hotels at a spa resort. In-depth interviews while showing the PFPV movies determined service schema of the employees (i.e., hospitality). Results indicated that using PFPV movies helped the employees to remember and reconstruct the situation of recorded activities.
Enhanced technologies for unattended ground sensor systems
NASA Astrophysics Data System (ADS)
Hartup, David C.
2010-04-01
Progress in several technical areas is being leveraged to advantage in Unattended Ground Sensor (UGS) systems. This paper discusses advanced technologies that are appropriate for use in UGS systems. While some technologies provide evolutionary improvements, other technologies result in revolutionary performance advancements for UGS systems. Some specific technologies discussed include wireless cameras and viewers, commercial PDA-based system programmers and monitors, new materials and techniques for packaging improvements, low power cueing sensor radios, advanced long-haul terrestrial and SATCOM radios, and networked communications. Other technologies covered include advanced target detection algorithms, high pixel count cameras for license plate and facial recognition, small cameras that provide large stand-off distances, video transmissions of target activity instead of still images, sensor fusion algorithms, and control center hardware. The impact of each technology on the overall UGS system architecture is discussed, along with the advantages provided to UGS system users. Areas of analysis include required camera parameters as a function of stand-off distance for license plate and facial recognition applications, power consumption for wireless cameras and viewers, sensor fusion communication requirements, and requirements to practically implement video transmission through UGS systems. Examples of devices that have already been fielded using technology from several of these areas are given.
Stone, Erik E; Skubic, Marjorie
2011-01-01
We present an analysis of measuring stride-to-stride gait variability passively, in a home setting using two vision based monitoring techniques: anonymized video data from a system of two web-cameras, and depth imagery from a single Microsoft Kinect. Millions of older adults fall every year. The ability to assess the fall risk of elderly individuals is essential to allowing them to continue living safely in independent settings as they age. Studies have shown that measures of stride-to-stride gait variability are predictive of falls in older adults. For this analysis, a set of participants were asked to perform a number of short walks while being monitored by the two vision based systems, along with a marker based Vicon motion capture system for ground truth. Measures of stride-to-stride gait variability were computed using each of the systems and compared against those obtained from the Vicon.
Multispectral Remote Sensing of the Earth and Environment Using KHawk Unmanned Aircraft Systems
NASA Astrophysics Data System (ADS)
Gowravaram, Saket
This thesis focuses on the development and testing of the KHawk multispectral remote sensing system for environmental and agricultural applications. KHawk Unmanned Aircraft System (UAS), a small and low-cost remote sensing platform, is used as the test bed for aerial video acquisition. An efficient image geotagging and photogrammetric procedure for aerial map generation is described, followed by a comprehensive error analysis on the generated maps. The developed procedure is also used for generation of multispectral aerial maps including red, near infrared (NIR) and colored infrared (CIR) maps. A robust Normalized Difference Vegetation index (NDVI) calibration procedure is proposed and validated by ground tests and KHawk flight test. Finally, the generated aerial maps and their corresponding Digital Elevation Models (DEMs) are used for typical application scenarios including prescribed fire monitoring, initial fire line estimation, and tree health monitoring.
NASA Astrophysics Data System (ADS)
Coltelli, Mauro; Biale, Emilio; Ciancitto, Francesco; Pecora, Emilio; Prestifilippo, Michele
2014-05-01
Since 1994 a video-surveillance camera located on a peak just above the active volcanic vents of Stromboli island records the explosive activity of one of the few volcanoes on the world performing a persistent eruptive activity. From 2003, after one of the larger lava flow eruption of the last century, the video-surveillance system was enhanced with more stations having both thermal and visual cameras. The video-surveillance helps volcanologists to characterize the mild explosive activity of Stromboli named Strombolian and to distinguish between the frequent "ordinary" Strombolian explosions and the occasional "extraordinary" strong Strombolian explosions that periodically occur. A new class of extraordinary explosions was discovered filling the gap between the ordinary activity and the strong explosions named major explosions when the tephra fallout covers large areas on the volcano summit and paroxysmal ones when the bombs fall down to the inhabited area along the coast of the island. In order to quantify the trend of the ordinary Strombolian explosions and to understand the occurring of the extraordinary strong Strombolian explosions a computer assisted image analysis was developed to process the huge amount of thermal and visual images recorded in several years. The results of this complex analysis allow us to clarify the processes occurring in the upper plumbing system where the pockets/trains of bubbles coalesce and move into the active vent conduits producing the ordinary Strombolian activity, and to infer the process into the deeper part of the plumbing system where new magma supply and its evolution lead to the formation of the extraordinary strong Strombolian explosions.
Strategies of performance self-monitoring in automotive production.
Faye, Hélène; Falzon, Pierre
2009-09-01
Production in the automotive industry, based on assembly line work, is now characterized by lean manufacturing and customization. This results in greater flexibility and increased quality demands, including worker performance self-monitoring. The objectives of this study are to refine the concept of performance self-monitoring and to characterize the strategies developed by operators to achieve it. Data were collected based on the method of individual auto-confrontation, consisting of two steps: eleven assembly-line operators of a French automotive company were individually observed and video-taped while they were working; an interview then allowed each operator to discuss his/her activity based on the video-tape. This study expands the concept of performance self-monitoring by highlighting three types of strategies directly oriented toward quality: prevention, feedback control and control action strategies.
PNNLâs Building Operations Control Center
Belew, Shan
2018-01-16
PNNL's Building Operations Control Center (BOCC) video provides an overview of the center, its capabilities, and its objectives. The BOCC was relocated to PNNL's new 3820 Systems Engineering Building in 2015. Although a key focus of the BOCC is on monitoring and improving the operations of PNNL buildings, the center's state-of-the-art computational, software and visualization resources also have provided a platform for PNNL buildings-related research projects.
Giraldo, Paula Jimena Ramos; Aguirre, Álvaro Guerrero; Muñoz, Carlos Mario; Prieto, Flavio Augusto; Oliveros, Carlos Eugenio
2017-04-06
Smartphones show potential for controlling and monitoring variables in agriculture. Their processing capacity, instrumentation, connectivity, low cost, and accessibility allow farmers (among other users in rural areas) to operate them easily with applications adjusted to their specific needs. In this investigation, the integration of inertial sensors, a GPS, and a camera are presented for the monitoring of a coffee crop. An Android-based application was developed with two operating modes: ( i ) Navigation: for georeferencing trees, which can be as close as 0.5 m from each other; and ( ii ) Acquisition: control of video acquisition, based on the movement of the mobile device over a branch, and measurement of image quality, using clarity indexes to select the most appropriate frames for application in future processes. The integration of inertial sensors in navigation mode, shows a mean relative error of ±0.15 m, and total error ±5.15 m. In acquisition mode, the system correctly identifies the beginning and end of mobile phone movement in 99% of cases, and image quality is determined by means of a sharpness factor which measures blurriness. With the developed system, it will be possible to obtain georeferenced information about coffee trees, such as their production, nutritional state, and presence of plagues or diseases.
Ramos Giraldo, Paula Jimena; Guerrero Aguirre, Álvaro; Muñoz, Carlos Mario; Prieto, Flavio Augusto; Oliveros, Carlos Eugenio
2017-01-01
Smartphones show potential for controlling and monitoring variables in agriculture. Their processing capacity, instrumentation, connectivity, low cost, and accessibility allow farmers (among other users in rural areas) to operate them easily with applications adjusted to their specific needs. In this investigation, the integration of inertial sensors, a GPS, and a camera are presented for the monitoring of a coffee crop. An Android-based application was developed with two operating modes: (i) Navigation: for georeferencing trees, which can be as close as 0.5 m from each other; and (ii) Acquisition: control of video acquisition, based on the movement of the mobile device over a branch, and measurement of image quality, using clarity indexes to select the most appropriate frames for application in future processes. The integration of inertial sensors in navigation mode, shows a mean relative error of ±0.15 m, and total error ±5.15 m. In acquisition mode, the system correctly identifies the beginning and end of mobile phone movement in 99% of cases, and image quality is determined by means of a sharpness factor which measures blurriness. With the developed system, it will be possible to obtain georeferenced information about coffee trees, such as their production, nutritional state, and presence of plagues or diseases. PMID:28383494
Image processing for improved eye-tracking accuracy
NASA Technical Reports Server (NTRS)
Mulligan, J. B.; Watson, A. B. (Principal Investigator)
1997-01-01
Video cameras provide a simple, noninvasive method for monitoring a subject's eye movements. An important concept is that of the resolution of the system, which is the smallest eye movement that can be reliably detected. While hardware systems are available that estimate direction of gaze in real-time from a video image of the pupil, such systems must limit image processing to attain real-time performance and are limited to a resolution of about 10 arc minutes. Two ways to improve resolution are discussed. The first is to improve the image processing algorithms that are used to derive an estimate. Off-line analysis of the data can improve resolution by at least one order of magnitude for images of the pupil. A second avenue by which to improve resolution is to increase the optical gain of the imaging setup (i.e., the amount of image motion produced by a given eye rotation). Ophthalmoscopic imaging of retinal blood vessels provides increased optical gain and improved immunity to small head movements but requires a highly sensitive camera. The large number of images involved in a typical experiment imposes great demands on the storage, handling, and processing of data. A major bottleneck had been the real-time digitization and storage of large amounts of video imagery, but recent developments in video compression hardware have made this problem tractable at a reasonable cost. Images of both the retina and the pupil can be analyzed successfully using a basic toolbox of image-processing routines (filtering, correlation, thresholding, etc.), which are, for the most part, well suited to implementation on vectorizing supercomputers.
Expert Behavior in Children's Video Game Play.
ERIC Educational Resources Information Center
VanDeventer, Stephanie S.; White, James A.
2002-01-01
Investigates the display of expert behavior by seven outstanding video game-playing children ages 10 and 11. Analyzes observation and debriefing transcripts for evidence of self-monitoring, pattern recognition, principled decision making, qualitative thinking, and superior memory, and discusses implications for educators regarding the development…
Senior, Lisa A.; Bird, Philip H.
2010-01-01
As part of technical assistance to the U.S. Environmental Protection Agency (USEPA) in the remediation of properties on the North Penn Area 6 Superfund Site in Lansdale, Pa., the U.S. Geological Survey (USGS) in 2006-07 collected data in four monitor wells at the Rogers Mechanical (former Tate Andale) property. During this period, USGS collected and analyzed borehole geophysical and video logs of three new monitor wells (Rogers 4, Rogers 5, and Rogers 6) ranging in depth from 80 to 180 feet, a borehole video log and additional heatpulse-flowmeter measurements (to quantify vertical borehole flow) in one existing 100-foot deep well (Rogers 3S), and water-level data during development of two wells (Rogers 5 and Rogers 6) to determine specific capacity. USGS also summarized results of passive-diffusion bag sampling for volatile organic compounds (VOCs) in the four wells. These data were intended to help understand the groundwater system and the distribution of VOC contaminants in groundwater at the property.
JXTA: A Technology Facilitating Mobile P2P Health Management System
Rajkumar, Rajasekaran; Nallani Chackravatula Sriman, Narayana Iyengar
2012-01-01
Objectives Mobile JXTA (Juxtapose) gaining momentum and has attracted the interest of doctors and patients through P2P service that transmits messages. Audio and video can also be transmitted through JXTA. The use of mobile streaming mechanism with the support of mobile hospital management and healthcare system would enable better interaction between doctors, nurses, and the hospital. Experimental results demonstrate good performance in comparison with conventional systems. This study evaluates P2P JXTA/JXME (JXTA functionality to MIDP devices.) which facilitates peer-to-peer application+ using mobile-constraint devices. Also a proven learning algorithm was used to automatically send and process sorted patient data to nurses. Methods From December 2010 to December 2011, a total of 500 patients were referred to our hospital due to minor health problems and were monitored. We selected all of the peer groups and the control server, which controlled the BMO (Block Medical Officer) peer groups and BMO through the doctor peer groups, and prescriptions were delivered to the patient’s mobile phones through the JXTA/ JXME network. Results All 500 patients were registered in the JXTA network. Among these, 300 patient histories were referred to the record peer group by the doctors, 100 patients were referred to the external doctor peer group, and 100 patients were registered as new users in the JXTA/JXME network. Conclusion This system was developed for mobile streaming applications and was designed to support the mobile health management system using JXTA/ JXME. The simulated results show that this system can carry out streaming audio and video applications. Controlling and monitoring by the doctor peer group makes the system more flexible and structured. Enhanced studies are needed to improve knowledge mining and cloud-based M health management technology in comparison with the traditional system. PMID:24159509
Video: useful tool for delivering family planning messages.
Sumarsono, S K
1985-10-01
In 1969, the Government of Indonesia declared that the population explosion was a national problem. The National Family Planning Program was consequently launched to encourage adoption of the ideal of a small, happy and prosperous family norm. Micro-approach messages are composed of the following: physiology of menstruation; reproductive process; healthy pregnancy; rational family planning; rational application of contraceptives; infant and child care; nutrition improvement; increase in breastfeeding; increase in family income; education in family life; family health; and deferred marriage age. Macro-approach messages include: the population problem and its impact on socioeconomic aspects; efforts to cope with the population problem; and improvement of women's lot. In utilizing the media and communication channels, the program encourages the implementation of units and working units of IEC to produce IEC materials; utilizes all possible existing media and IEC channels; maintains the consistent linkage between the activity of mass media and the IEC activities in the field; and encourages the private sector to participate in the production of IEC media and materials. A media production center was set up and carries out the following activities: producing video cassettes for tv broadcasts of family planning drama, family planning news, and tv spots; producing duplicates of the video cassettes for distribution to provinces in support of the video network; producing teaching materials for family planning workers; and transfering family planning films into video cassettes. A video network was developed and includes video monitors in family planning service points such as hospitals, family planning clinics and public places like bus stations. In 1985, the program will be expanded by 50 mobile information units equipped with video monitors. Video has potentials to increase the productivity and effectiveness of the family planning program. The video production process is cheaper and simpler than film production. Video will be very helpful as a communication aid in group meetings. It can also be used as a teaching aid for training.
Improving Weight Loss Outcomes of Community Interventions by Incorporating Behavioral Strategies
Crane, Melissa M.; Thomas, J. Graham; Kumar, Rajiv; Weinberg, Brad
2010-01-01
Objectives. We examined whether adding behavioral weight loss strategies could improve the outcomes of a community weight loss campaign. Methods. Shape Up RI is a 12-week, online, team-based program for health improvement in Rhode Island. In study 1, we randomly assigned participants to the standard Shape Up RI program or to the program plus video lessons on weight loss. In study 2, we randomly assigned participants to the standard program or to the program plus video lessons; daily self-monitoring of weight, eating, and exercise; and computer-generated feedback. Results. Adding video lessons alone (study 1) did not result in significantly improved weight loss (2.0 ±2.8 kg vs 1.4 ±2.9 kg; P = .15). However, when the video lessons were supplemented with self-monitoring and feedback (study 2), the average weight loss more than doubled (3.5 ±3.8 kg vs 1.4 ±2.7 kg; P < .01), and the proportion of individuals achieving a weight loss of 5% or more tripled (40.5% vs 13.2%; P < .01). Participants in study 2 submitted self-monitoring records on 78% of days, and adherence was significantly related to outcome. Conclusions. Adding behavioral strategies to community campaigns may improve weight loss outcomes with minimal additional cost. PMID:20966375
Stefan, H; Kreiselmeyer, G; Kasper, B; Graf, W; Pauli, E; Kurzbuch, K; Hopfengärtner, R
2011-03-01
A reliable method for the estimation of seizure frequency and severity is indispensable in assessing the efficacy of drug treatment in epilepsies. These quantities are usually deduced from subjective patient reports, which may cause considerable problems due to insufficient or false descriptions of seizures and their frequency. We present data from two difficult-to-treat patients with intractable epilepsy. Pat. 1 has had an unknown number of CP seizures. Here, a prolonged outpatient video-EEG monitoring over 160 h and 137 h (over an interval of three months) was performed with an automated seizure detection method. Pat. 2 suffered exclusively from nocturnal seizures originating from the frontal lobe. In this case, an objective quantification of the efficacy of drug treatment over a time period of 22 weeks was established. For the reliable quantification of seizures, a prolonged outpatient video/video-EEG monitoring was appended after a short-term inpatient monitoring period. Patient 1: The seizure detection algorithm was capable of detecting 10 out of 11 seizures. The number of false-positive events was <0.03/h. It was clearly demonstrated that the patient showed more seizures than originally reported. Patient 2: The add-on medication of lacosamide led to a significant reduction in seizure frequency and to a marked decrease in the mean duration of seizures. The severity of seizures was reduced from numerous hypermotoric seizures to few mild, head-turning seizures. Outpatient monitoring may be helpful to guide treatment for severe epilepsies and offers the possibility to more reliably quantify the efficacy of treatment in the long-term, even over several months. Copyright © 2010 British Epilepsy Association. Published by Elsevier Ltd. All rights reserved.
Remote photoplethysmography system for unsupervised monitoring regional anesthesia effectiveness
NASA Astrophysics Data System (ADS)
Rubins, U.; Miscuks, A.; Marcinkevics, Z.; Lange, M.
2017-12-01
Determining the level of regional anesthesia (RA) is vitally important to both an anesthesiologist and surgeon, also knowing the RA level can protect the patient and reduce the time of surgery. Normally to detect the level of RA, usually a simple subjective (sensitivity test) and complicated quantitative methods (thermography, neuromyography, etc.) are used, but there is not yet a standardized method for objective RA detection and evaluation. In this study, the advanced remote photoplethysmography imaging (rPPG) system for unsupervised monitoring of human palm RA is demonstrated. The rPPG system comprises compact video camera with green optical filter, surgical lamp as a light source and a computer with custom-developed software. The algorithm implemented in Matlab software recognizes the palm and two dermatomes (Medial and Ulnar innervation), calculates the perfusion map and perfusion changes in real-time to detect effect of RA. Seven patients (aged 18-80 years) undergoing hand surgery received peripheral nerve brachial plexus blocks during the measurements. Clinical experiments showed that our rPPG system is able to perform unsupervised monitoring of RA.
Towards continuous monitoring of pulse rate in neonatal intensive care unit with a webcam.
Mestha, Lalit K; Kyal, Survi; Xu, Beilei; Lewis, Leslie Edward; Kumar, Vijay
2014-01-01
We describe a novel method to monitor pulse rate (PR) on a continuous basis of patients in a neonatal intensive care unit (NICU) using videos taken from a high definition (HD) webcam. We describe algorithms that determine PR from videoplethysmographic (VPG) signals extracted from multiple regions of interest (ROI) simultaneously available within the field of view of the camera where cardiac signal is registered. We detect motion from video images and compensate for motion artifacts from each ROI. Preliminary clinical results are presented on 8 neonates each with 30 minutes of uninterrupted video. Comparisons to hospital equipment indicate that the proposed technology can meet medical industry standards and give improved patient comfort and ease of use for practitioners when instrumented with proper hardware.
General Aviation Citizen Science Study to Help Tackle Remote Sensing of Harmful Algal Blooms (HABs)
NASA Technical Reports Server (NTRS)
Ansari, Rafat R.; Schubert, Terry
2018-01-01
We present a new, low-cost approach, based on volunteer pilots conducting high-resolution aerial imaging, to help document the onset, growth, and outbreak of harmful algal blooms (HABs) and related water quality issues in central and western Lake Erie. In this model study, volunteer private pilots acting as citizen scientists frequently flew over 200 mi of Lake Erie coastline, its islands, and freshwater estuaries, taking high-quality aerial photographs and videos. The photographs were taken in the nadir (vertical) position in red, green, and blue (RGB) and near-infrared (NIR) every 5 s with rugged, commercially available built-in Global Positioning System (GPS) cameras. The high-definition (HD) videos in 1080p format were taken continuously in an oblique forward direction. The unobstructed, georeferenced, high-resolution images, and HD videos can provide an early warning of ensuing HAB events to coastal communities and freshwater resource managers. The scientists and academic researchers can use the data to compliment a collection of in situ water measurements, matching satellite imagery, and help develop advanced airborne instrumentation, and validation of their algorithms. This data may help develop empirical models, which may lead to the next steps in predicting a HAB event as some watershed observed events changed the water quality such as particle size, sedimentation, color, mineralogy, and turbidity delivered to the Lake site. This paper shows the efficacy and scalability of citizen science (CS) aerial imaging as a complimentary tool for rapid emergency response in HABs monitoring, land and vegetation management, and scientific studies. This study can serve as a model for monitoring/management of freshwater and marine aquatic systems.
NASA Technical Reports Server (NTRS)
Jones, D. H.; Coates, G. D.; Kirby, R. H.
1983-01-01
The effectiveness of incroporating a real-time oculometer system into a Boeing 737 commercial flight training program was studied. The study combined a specialized oculometer system with sophisticated video equipment that would allow instructor pilots (IPs) to monitor pilot and copilot trainees' instrument scan behavior in real-time, and provide each trainee with video tapes of his/her instrument scanning behavior for each training session. The IPs' performance ratings and trainees' self-ratings were compared to the performance ratings by IPs and trainees in a control group. The results indicate no difference in IP ratings or trainees' self-ratings for the control and experimental groups. The results indicated that the major beneficial role of a real-time oculometer system for pilots and copilots having a significant amount of flight experience would be for problem solving or refinement of instrument scanning behavior rather than a general instructional scheme. It is suggested that this line of research be continued with the incorporation of objective data (e.g., state of the aircraft data), measures of cost effectiveness and with trainees having less flight experience.
Automated intelligent video surveillance system for ships
NASA Astrophysics Data System (ADS)
Wei, Hai; Nguyen, Hieu; Ramu, Prakash; Raju, Chaitanya; Liu, Xiaoqing; Yadegar, Jacob
2009-05-01
To protect naval and commercial ships from attack by terrorists and pirates, it is important to have automatic surveillance systems able to detect, identify, track and alert the crew on small watercrafts that might pursue malicious intentions, while ruling out non-threat entities. Radar systems have limitations on the minimum detectable range and lack high-level classification power. In this paper, we present an innovative Automated Intelligent Video Surveillance System for Ships (AIVS3) as a vision-based solution for ship security. Capitalizing on advanced computer vision algorithms and practical machine learning methodologies, the developed AIVS3 is not only capable of efficiently and robustly detecting, classifying, and tracking various maritime targets, but also able to fuse heterogeneous target information to interpret scene activities, associate targets with levels of threat, and issue the corresponding alerts/recommendations to the man-in- the-loop (MITL). AIVS3 has been tested in various maritime scenarios and shown accurate and effective threat detection performance. By reducing the reliance on human eyes to monitor cluttered scenes, AIVS3 will save the manpower while increasing the accuracy in detection and identification of asymmetric attacks for ship protection.
What Makes a Message Stick? The Role of Content and Context in Social Media Epidemics
2013-09-23
First, we propose visual memes , or frequently re-posted short video segments, for detecting and monitoring latent video interactions at scale. Content...interactions (such as quoting, or remixing, parts of a video). Visual memes are extracted by scalable detection algorithms that we develop, with...high accuracy. We further augment visual memes with text, via a statistical model of latent topics. We model content interactions on YouTube with
47 CFR 76.1503 - Carriage of video programming providers on open video systems.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 47 Telecommunication 4 2011-10-01 2011-10-01 false Carriage of video programming providers on open video systems. 76.1503 Section 76.1503 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) BROADCAST RADIO SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Open Video Systems § 76.1503...
47 CFR 76.1503 - Carriage of video programming providers on open video systems.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 47 Telecommunication 4 2012-10-01 2012-10-01 false Carriage of video programming providers on open video systems. 76.1503 Section 76.1503 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) BROADCAST RADIO SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Open Video Systems § 76.1503...
47 CFR 76.1503 - Carriage of video programming providers on open video systems.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 47 Telecommunication 4 2014-10-01 2014-10-01 false Carriage of video programming providers on open video systems. 76.1503 Section 76.1503 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) BROADCAST RADIO SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Open Video Systems § 76.1503...
47 CFR 76.1503 - Carriage of video programming providers on open video systems.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 47 Telecommunication 4 2010-10-01 2010-10-01 false Carriage of video programming providers on open video systems. 76.1503 Section 76.1503 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) BROADCAST RADIO SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Open Video Systems § 76.1503...
47 CFR 76.1503 - Carriage of video programming providers on open video systems.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 47 Telecommunication 4 2013-10-01 2013-10-01 false Carriage of video programming providers on open video systems. 76.1503 Section 76.1503 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) BROADCAST RADIO SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Open Video Systems § 76.1503...
NASA Astrophysics Data System (ADS)
Klaessens, John H.; van den Born, Marlies; van der Veen, Albert; Sikkens-van de Kraats, Janine; van den Dungen, Frank A.; Verdaasdonk, Rudolf M.
2014-02-01
For infants and neonates in an incubator vital signs, such as heart rate, breathing, skin temperature and blood oxygen saturation are measured by sensors and electrodes sticking to the skin. This can damage the vulnerable skin of neonates and cause infections. In addition, the wires interfere with the care and hinder the parents in holding and touching the baby. These problems initiated the search for baby friendly 'non-contact' measurement of vital signs. Using a sensitive color video camera and specially developed software, the heart rate was derived from subtle repetitive color changes. Potentially also respiration and oxygen saturation could be obtained. A thermal camera was used to monitor the temperature distribution of the whole body and detect small temperature variations around the nose revealing the respiration rate. After testing in the laboratory, seven babies were monitored (with parental consent) in the neonatal intensive care unit (NICU) simultaneously with the regular monitoring equipment. From the color video recordings accurate heart rates could be derived and the thermal images provided accurate respiration rates. To correct for the movements of the baby, tracking software could be applied. At present, the image processing was performed off-line. Using narrow band light sources also non-contact blood oxygen saturation could be measured. Non-contact monitoring of vital signs has proven to be feasible and can be developed into a real time system. Besides the application on the NICU non-contact vital function monitoring has large potential for other patient groups.
Mor, Vincent; Volandes, Angelo E; Gutman, Roee; Gatsonis, Constantine; Mitchell, Susan L
2017-04-01
Background/Aims Nursing homes are complex healthcare systems serving an increasingly sick population. Nursing homes must engage patients in advance care planning, but do so inconsistently. Video decision support tools improved advance care planning in small randomized controlled trials. Pragmatic trials are increasingly employed in health services research, although not commonly in the nursing home setting to which they are well-suited. This report presents the design and rationale for a pragmatic cluster randomized controlled trial that evaluated the "real world" application of an Advance Care Planning Video Program in two large US nursing home healthcare systems. Methods PRagmatic trial Of Video Education in Nursing homes was conducted in 360 nursing homes (N = 119 intervention/N = 241 control) owned by two healthcare systems. Over an 18-month implementation period, intervention facilities were instructed to offer the Advance Care Planning Video Program to all patients. Control facilities employed usual advance care planning practices. Patient characteristics and outcomes were ascertained from Medicare Claims, Minimum Data Set assessments, and facility electronic medical record data. Intervention adherence was measured using a Video Status Report embedded into electronic medical record systems. The primary outcome was the number of hospitalizations/person-day alive among long-stay patients with advanced dementia or cardiopulmonary disease. The rationale for the approaches to facility randomization and recruitment, intervention implementation, population selection, data acquisition, regulatory issues, and statistical analyses are discussed. Results The large number of well-characterized candidate facilities enabled several unique design features including stratification on historical hospitalization rates, randomization prior to recruitment, and 2:1 control to intervention facilities ratio. Strong endorsement from corporate leadership made randomization prior to recruitment feasible with 100% participation of facilities randomized to the intervention arm. Critical regulatory issues included minimal risk determination, waiver of informed consent, and determination that nursing home providers were not engaged in human subjects research. Intervention training and implementation were initiated on 5 January 2016 using corporate infrastructures for new program roll-out guided by standardized training elements designed by the research team. Video Status Reports in facilities' electronic medical records permitted "real-time" adherence monitoring and corrective actions. The Centers for Medicare and Medicaid Services Virtual Research Data Center allowed for rapid outcomes ascertainment. Conclusion We must rigorously evaluate interventions to deliver more patient-focused care to an increasingly frail nursing home population. Video decision support is a practical approach to improve advance care planning. PRagmatic trial Of Video Education in Nursing homes has the potential to promote goal-directed care among millions of older Americans in nursing homes and establish a methodology for future pragmatic randomized controlled trials in this complex healthcare setting.
NASA Technical Reports Server (NTRS)
1995-01-01
In the early 1990s, the Ohio State University Center for Mapping, a NASA Center for the Commercial Development of Space (CCDS), developed a system for mobile mapping called the GPSVan. While driving, the users can map an area from the sophisticated mapping van equipped with satellite signal receivers, video cameras and computer systems for collecting and storing mapping data. George J. Igel and Company and the Ohio State University Center for Mapping advanced the technology for use in determining the contours of a construction site. The new system reduces the time required for mapping and staking, and can monitor the amount of soil moved.
Lessons from UNSCOM and IAEA regarding remote monitoring and air sampling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dupree, S.A.
1996-01-01
In 1991, at the direction of the United Nations Security Council, UNSCOM and IAEA developed plans for On-going Monitoring and Verification (OMV) in Iraq. The plans were accepted by the Security Council and remote monitoring and atmospheric sampling equipment has been installed at selected sites in Iraq. The remote monitoring equipment consists of video cameras and sensors positioned to observe equipment or activities at sites that could be used to support the development or manufacture of weapons of mass destruction, or long-range missiles. The atmospheric sampling equipment provides unattended collection of chemical samples from sites that could be used tomore » support the development or manufacture of chemical weapon agents. To support OMV in Iraq, UNSCOM has established the Baghdad Monitoring and Verification Centre. Imagery from the remote monitoring cameras can be accessed in near-real time from the Centre through RIF communication links with the monitored sites. The OMV program in Iraq has implications for international cooperative monitoring in both global and regional contexts. However, monitoring systems such as those used in Iraq are not sufficient, in and of themselves, to guarantee the absence of prohibited activities. Such systems cannot replace on-site inspections by competent, trained inspectors. However, monitoring similar to that used in Iraq can contribute to openness and confidence building, to the development of mutual trust, and to the improvement of regional stability.« less
47 CFR 76.1504 - Rates, terms and conditions for carriage on open video systems.
Code of Federal Regulations, 2012 CFR
2012-10-01
... open video systems. 76.1504 Section 76.1504 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) BROADCAST RADIO SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Open Video Systems § 76.1504 Rates, terms and conditions for carriage on open video systems. (a) Reasonable rate principle. An...
47 CFR 76.1504 - Rates, terms and conditions for carriage on open video systems.
Code of Federal Regulations, 2011 CFR
2011-10-01
... open video systems. 76.1504 Section 76.1504 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) BROADCAST RADIO SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Open Video Systems § 76.1504 Rates, terms and conditions for carriage on open video systems. (a) Reasonable rate principle. An...
47 CFR 76.1504 - Rates, terms and conditions for carriage on open video systems.
Code of Federal Regulations, 2014 CFR
2014-10-01
... open video systems. 76.1504 Section 76.1504 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) BROADCAST RADIO SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Open Video Systems § 76.1504 Rates, terms and conditions for carriage on open video systems. (a) Reasonable rate principle. An...
47 CFR 76.1504 - Rates, terms and conditions for carriage on open video systems.
Code of Federal Regulations, 2013 CFR
2013-10-01
... open video systems. 76.1504 Section 76.1504 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) BROADCAST RADIO SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Open Video Systems § 76.1504 Rates, terms and conditions for carriage on open video systems. (a) Reasonable rate principle. An...
NASA Astrophysics Data System (ADS)
Nightingale, James; Wang, Qi; Grecos, Christos
2011-03-01
Users of the next generation wireless paradigm known as multihomed mobile networks expect satisfactory quality of service (QoS) when accessing streamed multimedia content. The recent H.264 Scalable Video Coding (SVC) extension to the Advanced Video Coding standard (AVC), offers the facility to adapt real-time video streams in response to the dynamic conditions of multiple network paths encountered in multihomed wireless mobile networks. Nevertheless, preexisting streaming algorithms were mainly proposed for AVC delivery over multipath wired networks and were evaluated by software simulation. This paper introduces a practical, hardware-based testbed upon which we implement and evaluate real-time H.264 SVC streaming algorithms in a realistic multihomed wireless mobile networks environment. We propose an optimised streaming algorithm with multi-fold technical contributions. Firstly, we extended the AVC packet prioritisation schemes to reflect the three-dimensional granularity of SVC. Secondly, we designed a mechanism for evaluating the effects of different streamer 'read ahead window' sizes on real-time performance. Thirdly, we took account of the previously unconsidered path switching and mobile networks tunnelling overheads encountered in real-world deployments. Finally, we implemented a path condition monitoring and reporting scheme to facilitate the intelligent path switching. The proposed system has been experimentally shown to offer a significant improvement in PSNR of the received stream compared with representative existing algorithms.
Xi, Huijun; Cao, Jie; Liu, Jingjing; Li, Zhaoshen; Kong, Xiangyu; Wang, Yonghua; Chen, Jing; Ma, Su; Zhang, Lingjuan
2016-08-01
The purpose of this study was to investigate the importance of supervision through video surveillance in improving the quality of personal protection in preparing health care workers working in Ebola treatment units. Wardens supervise, remind, and guide health care workers' behavior through onsite voice and video systems when they are in the suspected patient observation ward and in the patient diagnosed ward of the Ebola treatment center. The observation results were recorded, and timely feedback was given to the health care workers. After 2 months of supervision, 1,797 cases of incorrect personal protection behaviors were identified and corrected. The error rate continuously declined. The declined rate of the first 2 weeks was statistically different from other time periods. Through reminding and supervising, nonstandard personal protective behaviors can be discovered and corrected, which can help health care workers standardize personal protection. The timely feedback from video surveillance can also offer psychologic support and encouragement promptly to ease psychologic pressure. Finally, this can ensure health care workers stay at a zero infection rate during patient treatment. Personal protective equipment protocol supervised by wardens through a video monitoring process can be used as an effective complement to conventional mutual supervision methods and can help health care workers avoid Ebola infection during treatment. Copyright © 2016 Association for Professionals in Infection Control and Epidemiology, Inc. Published by Elsevier Inc. All rights reserved.
NASA Missions Monitor a Waking Black Hole
2015-06-30
On June 15, NASA's Swift caught the onset of a rare X-ray outburst from a stellar-mass black hole in the binary system V404 Cygni. Astronomers around the world are watching the event. In this system, a stream of gas from a star much like the sun flows toward a 10 solar mass black hole. Instead of spiraling toward the black hole, the gas accumulates in an accretion disk around it. Every couple of decades, the disk switches into a state that sends the gas rushing inward, starting a new outburst. Read more: www.nasa.gov/feature/goddard/nasa-missions-monitor-a-waki... Credits: NASA's Goddard Space Flight Center Download this video in HD formats from NASA Goddard's Scientific Visualization Studio svs.gsfc.nasa.gov/cgi-bin/details.cgi?aid=11110
Multiple Vehicle Detection and Segmentation in Malaysia Traffic Flow
NASA Astrophysics Data System (ADS)
Fariz Hasan, Ahmad; Fikri Che Husin, Mohd; Affendi Rosli, Khairul; Norhafiz Hashim, Mohd; Faiz Zainal Abidin, Amar
2018-03-01
Vision based system are widely used in the field of Intelligent Transportation System (ITS) to extract a large amount of information to analyze traffic scenes. By rapid number of vehicles on the road as well as significant increase on cameras dictated the need for traffic surveillance systems. This system can take over the burden some task was performed by human operator in traffic monitoring centre. The main technique proposed by this paper is concentrated on developing a multiple vehicle detection and segmentation focusing on monitoring through Closed Circuit Television (CCTV) video. The system is able to automatically segment vehicle extracted from heavy traffic scene by optical flow estimation alongside with blob analysis technique in order to detect the moving vehicle. Prior to segmentation, blob analysis technique will compute the area of interest region corresponding to moving vehicle which will be used to create bounding box on that particular vehicle. Experimental validation on the proposed system was performed and the algorithm is demonstrated on various set of traffic scene.
Human recognition in a video network
NASA Astrophysics Data System (ADS)
Bhanu, Bir
2009-10-01
Video networks is an emerging interdisciplinary field with significant and exciting scientific and technological challenges. It has great promise in solving many real-world problems and enabling a broad range of applications, including smart homes, video surveillance, environment and traffic monitoring, elderly care, intelligent environments, and entertainment in public and private spaces. This paper provides an overview of the design of a wireless video network as an experimental environment, camera selection, hand-off and control, anomaly detection. It addresses challenging questions for individual identification using gait and face at a distance and present new techniques and their comparison for robust identification.
Ambulatory monitoring of activities and motor symptoms in Parkinson's disease.
Zwartjes, Daphne G M; Heida, Tjitske; van Vugt, Jeroen P P; Geelen, Jan A G; Veltink, Peter H
2010-11-01
Ambulatory monitoring of motor symptoms in Parkinsons disease (PD) can improve our therapeutic strategies, especially in patients with motor fluctuations. Previously published monitors usually assess only one or a few basic aspects of the cardinal motor symptoms in a laboratory setting. We developed a novel ambulatory monitoring system that provides a complete motor assessment by simultaneously analyzing current motor activity of the patient (e.g. sitting, walking) and the severity of many aspects related to tremor, bradykinesia, and hypokinesia. The monitor consists of a set of four inertial sensors. Validity of our monitor was established in seven healthy controls and six PD patients treated with deep brain stimulation (DBS) of the subthalamic nucleus. Patients were tested at three different levels of DBS treatment. Subjects were monitored while performing different tasks, including motor tests of the Unified Parkinsons Disease Rating Scale (UPDRS). Output of the monitor was compared to simultaneously recorded videos. The monitor proved very accurate in discriminating between several motor activities. Monitor output correlated well with blinded UPDRS ratings during different DBS levels. The combined analysis of motor activity and symptom severity by our PD monitor brings true ambulatory monitoring of a wide variety of motor symptoms one step closer..
State of the art in video system performance
NASA Technical Reports Server (NTRS)
Lewis, Michael J.
1990-01-01
The closed circuit television (CCTV) system that is onboard the Space Shuttle has the following capabilities: camera, video signal switching and routing unit (VSU); and Space Shuttle video tape recorder. However, this system is inadequate for use with many experiments that require video imaging. In order to assess the state-of-the-art in video technology and data storage systems, a survey was conducted of the High Resolution, High Frame Rate Video Technology (HHVT) products. The performance of the state-of-the-art solid state cameras and image sensors, video recording systems, data transmission devices, and data storage systems versus users' requirements are shown graphically.
NASA Technical Reports Server (NTRS)
Donner, Kimberly A.; Holden, Kritina L.; Manahan, Meera K.
1991-01-01
Investigated are five designs of software-based ON/OFF indicators in a hypothetical Space Station Power System monitoring task. The hardware equivalent of the indicators used in the present study is the traditional indicator light that illuminates an ON label or an OFF label. Coding methods used to represent the active state were reverse video, color, frame, check, or reverse video with check. Display background color was also varied. Subjects made judgments concerning the state of indicators that resulted in very low error rates and high percentages of agreement across indicator designs. Response time measures for each of the five indicator designs did not differ significantly, although subjects reported that color was the best communicator. The impact of these results on indicator design is discussed.
System Synchronizes Recordings from Separated Video Cameras
NASA Technical Reports Server (NTRS)
Nail, William; Nail, William L.; Nail, Jasper M.; Le, Doung T.
2009-01-01
A system of electronic hardware and software for synchronizing recordings from multiple, physically separated video cameras is being developed, primarily for use in multiple-look-angle video production. The system, the time code used in the system, and the underlying method of synchronization upon which the design of the system is based are denoted generally by the term "Geo-TimeCode(TradeMark)." The system is embodied mostly in compact, lightweight, portable units (see figure) denoted video time-code units (VTUs) - one VTU for each video camera. The system is scalable in that any number of camera recordings can be synchronized. The estimated retail price per unit would be about $350 (in 2006 dollars). The need for this or another synchronization system external to video cameras arises because most video cameras do not include internal means for maintaining synchronization with other video cameras. Unlike prior video-camera-synchronization systems, this system does not depend on continuous cable or radio links between cameras (however, it does depend on occasional cable links lasting a few seconds). Also, whereas the time codes used in prior video-camera-synchronization systems typically repeat after 24 hours, the time code used in this system does not repeat for slightly more than 136 years; hence, this system is much better suited for long-term deployment of multiple cameras.
47 CFR 76.1712 - Open video system (OVS) requests for carriage.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 47 Telecommunication 4 2011-10-01 2011-10-01 false Open video system (OVS) requests for carriage... RADIO SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Documents to be Maintained for Inspection § 76.1712 Open video system (OVS) requests for carriage. An open video system operator shall maintain a...
47 CFR 76.1712 - Open video system (OVS) requests for carriage.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 47 Telecommunication 4 2010-10-01 2010-10-01 false Open video system (OVS) requests for carriage... RADIO SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Documents to be Maintained for Inspection § 76.1712 Open video system (OVS) requests for carriage. An open video system operator shall maintain a...
47 CFR 76.1712 - Open video system (OVS) requests for carriage.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 47 Telecommunication 4 2014-10-01 2014-10-01 false Open video system (OVS) requests for carriage... RADIO SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Documents to be Maintained for Inspection § 76.1712 Open video system (OVS) requests for carriage. An open video system operator shall maintain a...
47 CFR 76.1712 - Open video system (OVS) requests for carriage.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 47 Telecommunication 4 2013-10-01 2013-10-01 false Open video system (OVS) requests for carriage... RADIO SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Documents to be Maintained for Inspection § 76.1712 Open video system (OVS) requests for carriage. An open video system operator shall maintain a...
47 CFR 76.1712 - Open video system (OVS) requests for carriage.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 47 Telecommunication 4 2012-10-01 2012-10-01 false Open video system (OVS) requests for carriage... RADIO SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Documents to be Maintained for Inspection § 76.1712 Open video system (OVS) requests for carriage. An open video system operator shall maintain a...
47 CFR 76.1501 - Qualifications to be an open video system operator.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 47 Telecommunication 4 2011-10-01 2011-10-01 false Qualifications to be an open video system... RADIO SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Open Video Systems § 76.1501 Qualifications to be an open video system operator. Any person may obtain a certification to operate an open...
47 CFR 76.1501 - Qualifications to be an open video system operator.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 47 Telecommunication 4 2014-10-01 2014-10-01 false Qualifications to be an open video system... RADIO SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Open Video Systems § 76.1501 Qualifications to be an open video system operator. Any person may obtain a certification to operate an open...
47 CFR 76.1508 - Network non-duplication.
Code of Federal Regulations, 2014 CFR
2014-10-01
... MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Open Video Systems § 76.1508 Network non-duplication. (a) Sections 76.92 through 76.97 shall apply to open video systems in accordance with the provisions contained... unit” shall apply to an open video system or that portion of an open video system that operates or will...
47 CFR 76.1508 - Network non-duplication.
Code of Federal Regulations, 2012 CFR
2012-10-01
... MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Open Video Systems § 76.1508 Network non-duplication. (a) Sections 76.92 through 76.97 shall apply to open video systems in accordance with the provisions contained... unit” shall apply to an open video system or that portion of an open video system that operates or will...
47 CFR 76.1501 - Qualifications to be an open video system operator.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 47 Telecommunication 4 2012-10-01 2012-10-01 false Qualifications to be an open video system... RADIO SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Open Video Systems § 76.1501 Qualifications to be an open video system operator. Any person may obtain a certification to operate an open...
47 CFR 76.1508 - Network non-duplication.
Code of Federal Regulations, 2013 CFR
2013-10-01
... MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Open Video Systems § 76.1508 Network non-duplication. (a) Sections 76.92 through 76.97 shall apply to open video systems in accordance with the provisions contained... unit” shall apply to an open video system or that portion of an open video system that operates or will...
47 CFR 76.1501 - Qualifications to be an open video system operator.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 47 Telecommunication 4 2010-10-01 2010-10-01 false Qualifications to be an open video system... RADIO SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Open Video Systems § 76.1501 Qualifications to be an open video system operator. Any person may obtain a certification to operate an open...
47 CFR 76.1501 - Qualifications to be an open video system operator.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 47 Telecommunication 4 2013-10-01 2013-10-01 false Qualifications to be an open video system... RADIO SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Open Video Systems § 76.1501 Qualifications to be an open video system operator. Any person may obtain a certification to operate an open...
47 CFR 76.1508 - Network non-duplication.
Code of Federal Regulations, 2011 CFR
2011-10-01
... MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Open Video Systems § 76.1508 Network non-duplication. (a) Sections 76.92 through 76.97 shall apply to open video systems in accordance with the provisions contained... unit” shall apply to an open video system or that portion of an open video system that operates or will...
Görges, Matthias; West, Nicholas C; Christopher, Nancy A; Koch, Jennifer L; Brodie, Sonia M; Lowlaavar, Nasim; Lauder, Gillian R; Ansermino, J Mark
2016-04-01
Respiratory depression in children receiving postoperative opioid infusions is a significant risk because of the interindividual variability in analgesic requirement. Detection of respiratory depression (or apnea) in these children may be improved with the introduction of automated acoustic respiratory rate (RR) monitoring. However, early detection of adverse events must be balanced with the risk of alarm fatigue. Our objective was to evaluate the use of acoustic RR monitoring in children receiving opioid infusions on a postsurgical ward and identify the causes of false alarm and optimal alarm thresholds. A video ethnographic study was performed using an observational, mixed methods approach. After surgery, an acoustic RR sensor was placed on the participant's neck and attached to a Rad87 monitor. The monitor was networked with paging for alarms. Vital signs data and paging notification logs were obtained from the central monitoring system. Webcam videos of the participant, infusion pump, and Rad87 monitor were recorded, stored on a secure server, and subsequently analyzed by 2 research nurses to identify the cause of the alarm, response, and effectiveness. Alarms occurring within a 90-second window were grouped into a single-alarm response opportunity. Data from 49 patients (30 females) with median age 14 (range, 4.4-18.8) years were analyzed. The 896 bedside vital sign threshold alarms resulted in 160 alarm response opportunities (44 low RR, 74 high RR, and 42 low SpO2). In 141 periods (88% of total), for which video was available, 65% of alarms were deemed effective (followed by an alarm-related action within 10 minutes). Nurses were the sole responders in 55% of effective alarms and the patient or parent in 20%. Episodes of desaturation (SpO2 < 90%) were observed in 9 patients: At the time of the SpO2 paging trigger, the RR was >10 bpm in 6 of 9 patients. Based on all RR samples observed, the default alarm thresholds, to serve as a starting point for each patient, would be a low RR of 6 (>10 years of age) and 10 (4-9 years of age). In this study, the use of RR monitoring did not improve the detection of respiratory depression. An RR threshold, which would have been predictive of desaturations, would have resulted in an unacceptably high false alarm rate. Future research using a combination of variables (e.g., SpO2 and RR), or the measurement of tidal volumes, may be needed to improve patient safety in the postoperative ward.
Program Helps Generate And Manage Graphics
NASA Technical Reports Server (NTRS)
Truong, L. V.
1994-01-01
Living Color Frame Maker (LCFM) computer program generates computer-graphics frames. Graphical frames saved as text files, in readable and disclosed format, easily retrieved and manipulated by user programs for wide range of real-time visual information applications. LCFM implemented in frame-based expert system for visual aids in management of systems. Monitoring, diagnosis, and/or control, diagrams of circuits or systems brought to "life" by use of designated video colors and intensities to symbolize status of hardware components (via real-time feedback from sensors). Status of systems can be displayed. Written in C++ using Borland C++ 2.0 compiler for IBM PC-series computers and compatible computers running MS-DOS.
Holographic interferometry imaging monitoring of photodynamic (PDT) reactions in gelatin biophantom
NASA Astrophysics Data System (ADS)
Davidenko, N.; Mahdi, H.; Zheng, X.; Davidenko, I.; Pavlov, V.; Kuranda, N.; Chuprina, N.; Studzinsky, S.; Pandya, A.; Karia, H.; Tajouri, S.; Dervenis, M.; Gergely, C.; Douplik, A.
2018-01-01
Heat and photochemical reactions with human hemoglobin and photosensitizer were monitored by holography interference method in gelatin phantom. The method has successfully facilitated monitoring the reactions as a highresolution refraction index mapping in real time video regime. Methylene Blue was exploited as a photosensitizer.
The Quest for Contact: NASA's Search for Extraterrestrial Intelligence
NASA Technical Reports Server (NTRS)
1992-01-01
This video details the history and current efforts of NASA's Search for Extraterrestrial Intelligence program. The video explains the use of radiotelescopes to monitor electromagnetic frequencies reaching the Earth, and the analysis of this data for patterns or signals that have no natural origin. The video presents an overview of Frank Drake's 1960 'Ozma' experiment, the current META experiment, and planned efforts incorporating an international Deep Space Network of radiotelescopes that will be trained on over 800 stars.
NASA Technical Reports Server (NTRS)
1984-01-01
Key tool of Redken Laboratories new line of hair styling appliances is an instrument called a thermograph, a heat sensing device originally developed by Hughes Aircraft Co. under U.S. Army and NASA funding. Redken Laboratories bought one of the early models of the Hughes Probeye Thermal Video System or TVS which detects the various degrees of heat emitted by an object and displays the results in color on a TV monitor with colors representing different temperatures detected.
2018-01-26
attitude toward the use of the viewer. Clinicians may have different receptiveness to the new tool and various way to manage information during rounding...any patented invention that may relate to them. Qualified requestors may obtain copies of this report from the Defense Technical Information Center...This report is published in the interest of scientific and technical information exchange, and its publication does not constitute the Government’s
Schneider, David J.; Vallance, James W.; Wessels, Rick L.; Logan, Matthew; Ramsey, Michael S.; Sherrod, David R.; Scott, William E.; Stauffer, Peter H.
2008-01-01
A helicopter-mounted thermal imaging radiometer documented the explosive vent-clearing and effusive phases of the eruption of Mount St. Helens in 2004. A gyrostabilized gimbal controlled by a crew member housed the radiometer and an optical video camera attached to the nose of the helicopter. Since October 1, 2004, the system has provided thermal and video observations of dome growth. Flights conducted as frequently as twice daily during the initial month of the eruption monitored rapid changes in the crater and 1980-86 lava dome. Thermal monitoring decreased to several times per week once dome extrusion began. The thermal imaging system provided unique observations, including timely recognition that the early explosive phase was phreatic, location of structures controlling thermal emissions and active faults, detection of increased heat flow prior to the extrusion of lava, and recognition of new lava extrusion. The first spines, 1 and 2, were hotter when they emerged (maximum temperature 700-730°C) than subsequent spines insulated by as much as several meters of fault gouge. Temperature of gouge-covered spines was about 200°C where they emerged from the vent, and it decreased rapidly with distance from the vent. The hottest parts of these spines were as high as 500-730°C in fractured and broken-up regions. Such temperature variation needs to be accounted for in the retrieval of eruption parameters using satellite-based techniques, as such features are smaller than pixels in satellite images.
Real time video analysis to monitor neonatal medical condition
NASA Astrophysics Data System (ADS)
Shirvaikar, Mukul; Paydarfar, David; Indic, Premananda
2017-05-01
One in eight live births in the United States is premature and these infants have complications leading to life threatening events such as apnea (pauses in breathing), bradycardia (slowness of heart) and hypoxia (oxygen desaturation). Infant movement pattern has been hypothesized as an important predictive marker for these life threatening events. Thus estimation of movement along with behavioral states, as a precursor of life threatening events, can be useful for risk stratification of infants as well as for effective management of disease state. However, more important and challenging is the determination of the behavioral state of the infant. This information includes important cues such as sleep position and the status of the eyes, which are important markers for neonatal neurodevelopment state. This paper explores the feasibility of using real time video analysis to monitor the condition of premature infants. The image of the infant can be segmented into regions to localize and focus on specific areas of interest. Analysis of the segmented regions can be performed to identify different parts of the body including the face, arms, legs and torso. This is necessary due to real-time processing speed considerations. Such a monitoring system would be of great benefit as an aide to medical staff in neonatal hospital settings requiring constant surveillance. Any such system would have to satisfy extremely stringent reliability and accuracy requirements, before it can be deployed in a hospital care unit, due to obvious reasons. The effect of lighting conditions and interference will have to be mitigated to achieve such performance.
A novel Kalman filter based video image processing scheme for two-photon fluorescence microscopy
NASA Astrophysics Data System (ADS)
Sun, Wenqing; Huang, Xia; Li, Chunqiang; Xiao, Chuan; Qian, Wei
2016-03-01
Two-photon fluorescence microscopy (TPFM) is a perfect optical imaging equipment to monitor the interaction between fast moving viruses and hosts. However, due to strong unavoidable background noises from the culture, videos obtained by this technique are too noisy to elaborate this fast infection process without video image processing. In this study, we developed a novel scheme to eliminate background noises, recover background bacteria images and improve video qualities. In our scheme, we modified and implemented the following methods for both host and virus videos: correlation method, round identification method, tree-structured nonlinear filters, Kalman filters, and cell tracking method. After these procedures, most of noises were eliminated and host images were recovered with their moving directions and speed highlighted in the videos. From the analysis of the processed videos, 93% bacteria and 98% viruses were correctly detected in each frame on average.